Patan, Krzysztof
Approximation of state-space trajectories by locally recurrent globally feed-forward neural networks. (English)
Zbl 1254.93107
Neural Netw. 21, No. 1, 59-64 (2008).

Summary: The paper investigates the approximation abilities of a special class of discrete-time dynamic neural networks. The networks considered are called locally recurrent globally feed-forward because they are built from dynamic neuron models that contain inner feedback loops, while the interconnections between neurons are strictly feed-forward, as in the well-known multi-layer perceptron. The paper presents analytical results showing that a locally recurrent network with two hidden layers can approximate, with arbitrary accuracy, a state-space trajectory produced by any Lipschitz continuous function. Moreover, based on these results, the network can be simplified and transformed into a more practical structure suited to real-world applications.

Cited in 2 Documents.

MSC:
93C55 Discrete-time control/observation systems
92B20 Neural networks for/in biological studies, artificial life and related topics
37N35 Dynamical systems in control

Keywords: recurrent networks; approximation ability; universal approximation theorem; state-space representation; Lipschitz mapping; cascade network
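The architecture described in the summary can be sketched in code. The following is a minimal, illustrative NumPy implementation, not the paper's exact model: it assumes a first-order IIR feedback loop inside each neuron (the paper's dynamic neuron model may differ in order and detail), with strictly feed-forward connections between two hidden layers.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LocallyRecurrentLayer:
    """One layer of dynamic neurons: each neuron filters its weighted
    input through its own first-order feedback loop, so recurrence is
    local (inside the neuron) while connections between layers remain
    strictly feed-forward, as in a multi-layer perceptron.
    Illustrative sketch only; the paper's neuron model may differ."""

    def __init__(self, n_in, n_out, rng):
        self.W = 0.5 * rng.standard_normal((n_out, n_in))  # feed-forward weights
        self.a = rng.uniform(-0.9, 0.9, size=n_out)        # per-neuron feedback gains (|a| < 1 keeps each loop stable)
        self.state = np.zeros(n_out)                       # internal neuron states

    def step(self, u):
        # Inner feedback of each neuron: x[k] = a * x[k-1] + W @ u[k]
        self.state = self.a * self.state + self.W @ u
        return sigmoid(self.state)

# Two hidden layers, matching the structure used in the approximation result.
rng = np.random.default_rng(0)
net = [LocallyRecurrentLayer(3, 8, rng), LocallyRecurrentLayer(8, 2, rng)]

u = np.ones(3)                # constant driving input
for k in range(5):            # iterate the discrete-time dynamics
    y = u
    for layer in net:
        y = layer.step(y)     # output at time step k
```

Because the only recurrence is the scalar feedback inside each neuron, training and stability analysis stay close to the static feed-forward case, which is the practical appeal of this network class.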