This paper evaluates the performance of several novel extensions of the hyperbox neural network algorithm, a method which uses different modes of learning for supervised classification problems. One hyperbox per class is defined that covers the full range of attribute values in the class. Each hyperbox has one or more neurons associated with it, which model the class distribution. During prediction, points falling into only one hyperbox can be classified immediately, with the neural outputs used only when points lie in overlapping regions of hyperboxes. Decomposing the learning problem into easier and harder regions in this way allows extremely efficient classification. We introduce an unsupervised clustering stage in each hyperbox, followed by supervised learning of one neuron per cluster. Both random and heuristic-driven initialisation of the cluster centres and initial weight vectors are considered. We also consider an adaptive activation function for use in the neural mode. The performance and computational efficiency of the hyperbox methods are evaluated on artificial datasets and publicly available real datasets, and compared with results obtained on the same datasets using Support Vector Machine, Decision Tree, K-Nearest Neighbour, and Multilayer Perceptron (with backpropagation) classifiers. We conclude that the method performs competitively and is computationally efficient, and we provide recommendations for its best use, based on the results on the artificial datasets and an evaluation of sensitivity to initialisation.
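The two-mode prediction scheme the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation: the per-class hyperbox is simply the attribute-wise min/max of the training points, and a nearest-class-mean score stands in for the paper's per-cluster neural mode; all function names (`fit_hyperboxes`, `predict`) are hypothetical.

```python
import numpy as np

def fit_hyperboxes(X, y):
    """For each class, record the min/max of every attribute (the
    hyperbox) and the class mean (a stand-in for the neural model)."""
    boxes = {}
    for c in np.unique(y):
        Xc = X[y == c]
        boxes[c] = (Xc.min(axis=0), Xc.max(axis=0), Xc.mean(axis=0))
    return boxes

def predict(boxes, x):
    """Classify x immediately if it lies in exactly one hyperbox (the
    'easy' region); otherwise fall back to a secondary score -- here
    nearest class mean, standing in for the paper's neural mode."""
    inside = [c for c, (lo, hi, _) in boxes.items()
              if np.all(x >= lo) and np.all(x <= hi)]
    if len(inside) == 1:
        return inside[0]
    candidates = inside if inside else list(boxes)
    return min(candidates,
               key=lambda c: np.linalg.norm(x - boxes[c][2]))

# Tiny illustration with two separable classes in 2-D.
X = np.array([[0., 0.], [1., 1.], [5., 5.], [6., 6.]])
y = np.array([0, 0, 1, 1])
boxes = fit_hyperboxes(X, y)
```

Points inside a single hyperbox skip the fallback scorer entirely, which is what makes the decomposition cheap: only ambiguous (overlapping or out-of-box) points pay for the second mode.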
- hyperbox neural network
- modal learning
- adaptive activation functions