Some asymptotic results for learning in single hidden-layer feedforward network models. (English)
J. Am. Stat. Assoc. 84, No. 408, 1003-1044 (1989).
The single hidden-layer feedforward network model is a special, possibly misspecified, nonlinear regression model $$f(x,\theta)=F\Bigl(\sum_{j=1}^{q}\psi(x'\gamma_j)\,\beta_j\Bigr),$$ where $F$ and $\psi$ are known functions and $q$ is the known number of hidden units. In network theory, recursive learning procedures are used to estimate $\theta$ by $$\tilde\theta_n=\tilde\theta_{n-1}+\eta\,\nabla\tilde f_n'\,(Y_n-\tilde f_n),$$ with $\tilde f_n=f(x_n,\tilde\theta_{n-1})$, where $\eta$ is the learning rate and $\tilde\theta_0$ is an arbitrary starting value. This recursion is called the method of “back-propagation”. The analysis in this paper rests on the observation that this method can be recognized as a simple multidimensional stochastic approximation procedure. The consistency and asymptotic normality of $\tilde\theta_n$ are proved. Further, the author shows that back-propagation is statistically inefficient and proposes a two-step procedure whose efficiency is equivalent to that of the least squares procedure.
Reviewer: S. Zwanzig (Berlin)
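
A minimal numerical sketch of the recursion described above, assuming for illustration that $F$ is the identity and $\psi$ is the logistic function; the function and variable names (e.g. backprop_recursion) are hypothetical and not White's notation.

import numpy as np

# Sketch of the recursive ("back-propagation") update
#   theta_n = theta_{n-1} + eta * grad f_n' * (Y_n - f_n)
# for f(x, theta) = F( sum_{j=1}^q psi(x' gamma_j) beta_j ),
# with F = identity and psi = logistic assumed for illustration.

def psi(u):
    # logistic hidden-unit activation
    return 1.0 / (1.0 + np.exp(-u))

def f(x, gamma, beta):
    # network output; gamma has shape (d, q), beta has shape (q,)
    return psi(x @ gamma) @ beta

def grad_f(x, gamma, beta):
    # gradient of f with respect to (gamma, beta) at a single input x
    h = psi(x @ gamma)                   # hidden-unit outputs, shape (q,)
    dpsi = h * (1.0 - h)                 # derivative of the logistic
    dgamma = np.outer(x, dpsi * beta)    # d f / d gamma, shape (d, q)
    dbeta = h                            # d f / d beta, shape (q,)
    return dgamma, dbeta

def backprop_recursion(X, Y, q, eta=0.01, seed=0):
    # one pass of the stochastic-approximation recursion over the sample
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    gamma = rng.normal(scale=0.1, size=(d, q))   # arbitrary starting value theta_0
    beta = rng.normal(scale=0.1, size=q)
    for x_n, y_n in zip(X, Y):
        resid = y_n - f(x_n, gamma, beta)        # Y_n - f_n
        dgamma, dbeta = grad_f(x_n, gamma, beta)
        gamma += eta * resid * dgamma            # update theta = (gamma, beta)
        beta += eta * resid * dbeta
    return gamma, beta

For example, backprop_recursion(X, Y, q=3) returns the recursive estimates of the $\gamma_j$ and $\beta_j$ after one pass through the sample; the two-step efficient procedure mentioned in the review is not attempted here.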