
Convergence of the iterates of descent methods for analytic cost functions. (English) Zbl 1092.90036

It is shown that the iterates of numerical descent algorithms for an analytic cost function converge to a well-defined limit point provided they satisfy certain descent conditions. These strong limit-point convergence results do not rely on the usual requirement that critical points be isolated, nor do they require convexity of the cost or any nondegeneracy condition on the Hessian of the cost at critical points. Instead, two conditions are required: a lower bound on the norm of the gradient of the cost function in terms of the cost function itself, namely the Łojasiewicz gradient inequality (which is satisfied in particular by analytic cost functions), and certain strong descent conditions. The strong descent conditions are satisfied by a wide variety of optimization schemes; they include line-search methods with an angle condition on the search direction and Armijo’s condition on the step length, as well as trust-region methods under the condition that the length of the update vector is bounded by a multiple of the length of the Cauchy update. The results are applicable to a broad class of optimization schemes and strengthen the classical “weak convergence” results for descent methods to “strong limit-point convergence” for a large class of cost functions of practical interest.
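To make the class of schemes concrete, the following is a minimal Python sketch of a line-search descent method of the kind described above: a steepest-descent direction (which trivially satisfies an angle condition) combined with Armijo backtracking on the step length. It is an illustration under stated assumptions, not the authors' exact scheme; the function names (`f`, `grad_f`) and the parameter values are placeholders. The toy cost is analytic with a non-isolated set of minimizers (the unit circle), the setting in which the strong limit-point convergence result is of interest.

```python
import numpy as np

def armijo_descent(f, grad_f, x0, eta=0.1, c1=1e-4, beta=0.5,
                   alpha0=1.0, tol=1e-8, max_iter=1000):
    """Line-search descent with an angle condition on the search direction
    and Armijo's sufficient-decrease condition on the step length.

    Illustrative sketch only; f, grad_f and all parameter names are
    assumptions, not taken from the reviewed paper.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        # Search direction: plain steepest descent, which satisfies the
        # angle condition <-g, p> >= eta * ||g|| * ||p|| for any eta <= 1.
        p = -g
        # Armijo backtracking: shrink the step until sufficient decrease
        # f(x + alpha*p) <= f(x) + c1 * alpha * <g, p> holds.
        alpha = alpha0
        while f(x + alpha * p) > f(x) + c1 * alpha * (g @ p):
            alpha *= beta
        x = x + alpha * p
    return x

# Toy analytic cost f(x, y) = (x^2 + y^2 - 1)^2, whose minimizers form the
# unit circle, i.e. the critical points are not isolated.
f = lambda z: (z[0]**2 + z[1]**2 - 1.0)**2
grad_f = lambda z: 4.0 * (z[0]**2 + z[1]**2 - 1.0) * z

print(armijo_descent(f, grad_f, np.array([2.0, 0.5])))  # a point on the circle
```

Classical “weak convergence” results would only guarantee that the gradient norm tends to zero along such iterates; under the Łojasiewicz gradient inequality, the reviewed paper strengthens this to convergence of the whole iterate sequence to a single limit point.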

MSC:

90C26 Nonconvex programming, global optimization
37N40 Dynamical systems in optimization and economics
65K10 Numerical optimization and variational techniques
90C52 Methods of reduced gradient type