Neural networks based approach for computing eigenvectors and eigenvalues of symmetric matrix. (English) Zbl 1067.65038

The authors present a new approach to the well-studied problem of computing eigenvectors corresponding to the largest or smallest eigenvalues of a real symmetric matrix. A neural network model is proposed, described by a certain nonlinear differential equation involving the given symmetric matrix. An interesting representation of the solutions of the network is provided, together with convergence theorems. Good numerical examples are given to illustrate the theoretical results. At present the computational performance is questionable, but it is expected to improve once the network is implemented in hardware.
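To give a flavour of this class of methods, the following is a minimal sketch, not the authors' specific model: it integrates a generic Oja-type flow dx/dt = Ax − (xᵀAx)x with forward Euler steps, whose stable equilibria are unit eigenvectors of A associated with the largest eigenvalue. The function name, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

def largest_eigenpair(A, steps=20000, dt=1e-3, seed=0):
    """Approximate the largest eigenpair of a symmetric matrix A by
    Euler-integrating the Oja-type flow dx/dt = A x - (x^T A x) x.
    This is a generic illustration, not the reviewed paper's network."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)                 # start on the unit sphere
    for _ in range(steps):
        x = x + dt * (A @ x - (x @ A @ x) * x)   # one Euler step of the flow
    lam = x @ A @ x                        # Rayleigh quotient at the equilibrium
    return lam, x / np.linalg.norm(x)

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam, v = largest_eigenpair(A)
```

Running the same flow on −A yields the eigenvector of the smallest eigenvalue, which is the "minor component analysis" setting of several of the works the paper builds on.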

MSC:

65F15 Numerical computation of eigenvalues and eigenvectors of matrices
