
Evolutionary extreme learning machine. (English) Zbl 1077.68791

Summary: Extreme Learning Machine (ELM), a novel learning algorithm that is much faster than traditional gradient-based learning algorithms, was recently proposed for single-hidden-layer feedforward neural networks. However, ELM may require a larger number of hidden neurons because the input weights and hidden biases are determined randomly. In this paper, a hybrid learning algorithm is proposed that uses the differential evolution algorithm to select the input weights and the Moore-Penrose generalized inverse to analytically determine the output weights. Experimental results show that this approach achieves good generalization performance with much more compact networks.
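The hybrid scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a sigmoid hidden layer, DE/rand/1 mutation with binomial crossover, and training RMSE as the fitness (the paper may use a different fitness, e.g. involving a validation set or the norm of the output weights). All function names, population sizes, and DE constants (`F`, `CR`) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_output(X, W, b):
    """Sigmoid activations of the hidden layer for inputs X (n_samples x n_in)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def output_weights(H, T):
    """Output weights computed analytically via the Moore-Penrose pseudoinverse."""
    return np.linalg.pinv(H) @ T

def fitness(theta, X, T, n_in, n_hidden):
    """Training RMSE of the ELM encoded by the flat parameter vector theta."""
    W = theta[: n_in * n_hidden].reshape(n_in, n_hidden)
    b = theta[n_in * n_hidden :]
    H = hidden_output(X, W, b)
    beta = output_weights(H, T)          # analytic step, no gradient descent
    return np.sqrt(np.mean((H @ beta - T) ** 2))

def de_elm(X, T, n_hidden=5, pop_size=20, F=0.5, CR=0.8, gens=30):
    """Differential evolution over the input weights and hidden biases only."""
    n_in = X.shape[1]
    dim = n_in * n_hidden + n_hidden     # each individual encodes (W, b)
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, dim))
    fit = np.array([fitness(p, X, T, n_in, n_hidden) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            others = np.delete(np.arange(pop_size), i)
            a, b_, c = pop[rng.choice(others, 3, replace=False)]
            mutant = a + F * (b_ - c)            # DE/rand/1 mutation
            cross = rng.random(dim) < CR         # binomial crossover mask
            cross[rng.integers(dim)] = True      # guarantee >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = fitness(trial, X, T, n_in, n_hidden)
            if f_trial <= fit[i]:                # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]
```

Note that only the input-side parameters evolve; the output weights are recomputed in closed form for every candidate, which is what keeps each fitness evaluation cheap and lets a small hidden layer suffice.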

MSC:

68T05 Learning and adaptive systems in artificial intelligence

References:

[1] G.-B. Huang, Q.-Y. Zhu, C.-K. Siew, Extreme learning machine: a new learning scheme of feedforward neural networks, in: Proceedings of the International Joint Conference on Neural Networks (IJCNN2004), Budapest, Hungary, 25-29 July 2004.
[2] M.-B. Li, G.-B. Huang, P. Saratchandran, N. Sundararajan, Fully complex extreme learning machine, Neurocomputing, 2005, to appear.
[3] Ghosh, R.; Verma, B., A hierarchical method for finding optimal architecture and weights using evolutionary least square based learning, Int. J. Neural Syst., 12, 1, 13-24 (2003)
[4] Storn, R.; Price, K., Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces, J. Global Optim., 11, 341-359 (1997) · Zbl 0888.90135
[5] Bartlett, P. L., The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network, IEEE Trans. Inform. Theory, 44, 2, 525-536 (1998) · Zbl 0901.68177