Approximation theory of the MLP model in neural networks. (English) Zbl 0959.68109

Acta Numerica 8, 143-195 (1999).
Summary: We discuss various approximation-theoretic problems that arise in the multilayer feedforward perceptron (MLP) model in neural networks. The MLP model is one of the more popular and practical of the many neural network models. Mathematically it is also one of the simpler models. Nonetheless the mathematics of this model is not well understood, and many of these problems are approximation-theoretic in character. Most of the research we will discuss is of very recent vintage. We report on what has been done and on various unanswered questions. We will not be presenting practical (algorithmic) methods. We will, however, be exploring the capabilities and limitations of this model.
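For concreteness, in its standard single hidden layer formulation the model comprises functions of the form
$$x \mapsto \sum_{i=1}^{r} c_i\,\sigma(w_i\cdot x + \theta_i), \qquad x \in \mathbb{R}^n,$$
where $\sigma\colon \mathbb{R}\to\mathbb{R}$ is a fixed activation function, the $w_i \in \mathbb{R}^n$ are weights, the $\theta_i \in \mathbb{R}$ are thresholds, and the $c_i \in \mathbb{R}$ are coefficients.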
In the first two sections we present a brief introduction and overview of neural networks and the multilayer feedforward perceptron model. In Section 3 we discuss in great detail the question of density. When does this model have the theoretical ability to approximate any reasonable function arbitrarily well? In Section 4 we present conditions for simultaneously approximating a function and its derivatives. Section 5 considers the interpolation capability of this model. In Section 6 we study upper and lower bounds on the order of approximation of this model. The material presented in Sections 3-6 treats the single hidden layer MLP model. In Section 7 we discuss some of the differences that arise when considering more than one hidden layer. The lengthy list of references includes many papers not cited in the text, but relevant to the subject matter of this survey.
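As one illustration of the results surveyed, the density question of Section 3 has a clean answer for continuous activation functions: the set
$$\mathcal{M}(\sigma) = \operatorname{span}\{\sigma(w\cdot x + \theta) : w \in \mathbb{R}^n,\ \theta \in \mathbb{R}\}$$
is dense in $C(\mathbb{R}^n)$, in the topology of uniform convergence on compact sets, if and only if $\sigma$ is not a polynomial.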
For the entire collection see [Zbl 0921.00012].

MSC:

68T05 Learning and adaptive systems in artificial intelligence