Zbl 1129.90339
Polyak, R.; Griva, I.
Primal-dual nonlinear rescaling method for convex optimization.
(English)
J. Optim. Theory Appl. 122, No. 1, 111-156 (2004). ISSN 0022-3239; e-ISSN 1573-2878.

Summary: We consider a general primal-dual nonlinear rescaling (PDNR) method for convex optimization with inequality constraints. We prove the global convergence of the PDNR method and estimate the error bounds for the primal and dual sequences. In particular, we prove that, under the standard second-order optimality conditions, the error bounds for the primal and dual sequences converge to zero with a linear rate. Moreover, for any given ratio $0<\gamma<1$, there is a fixed scaling parameter $k_\gamma>0$ such that each PDNR step shrinks the primal-dual error bound by at least the factor $\gamma$ for any $k\ge k_\gamma$. The PDNR solver was tested on a variety of NLP problems, including the constrained optimization problem set (COPS). The results obtained show that the PDNR solver is numerically stable and produces results with high accuracy. Moreover, for most of the problems solved, the number of Newton steps is practically independent of the problem size.
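The entry does not reproduce the PDNR iteration itself. As a rough illustration of the underlying nonlinear rescaling idea (not the authors' algorithm), the sketch below alternates minimization of a rescaled Lagrangian with a multiplier update, using the exponential transformation $\psi(t)=1-e^{-t}$ as one admissible rescaling function and SciPy's BFGS as a stand-in for the Newton steps of the actual method; the function names, the choice of $\psi$, and the parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch only (not the authors' PDNR solver): a generic
# nonlinear rescaling scheme for  min f(x)  s.t.  c_i(x) >= 0,
# with the exponential transformation psi(t) = 1 - exp(-t), which
# satisfies psi(0) = 0, psi'(0) = 1, psi' > 0, psi'' < 0.

def nonlinear_rescaling(f, c, x0, num_constraints, k=10.0, iters=30, tol=1e-6):
    """Alternate minimization of the rescaled Lagrangian
       L_k(x, lam) = f(x) - (1/k) * sum_i lam_i * psi(k * c_i(x))
    with the multiplier update lam_i <- lam_i * psi'(k * c_i(x))."""
    psi = lambda t: 1.0 - np.exp(-t)   # rescaling transformation
    dpsi = lambda t: np.exp(-t)        # its derivative

    x = np.asarray(x0, dtype=float)
    lam = np.ones(num_constraints)     # initial Lagrange multipliers
    for _ in range(iters):
        L = lambda z: f(z) - (1.0 / k) * np.dot(lam, psi(k * c(z)))
        x = minimize(L, x, method="BFGS").x      # primal step
        lam_new = lam * dpsi(k * c(x))           # dual (multiplier) step
        if np.linalg.norm(lam_new - lam) < tol:  # stop when multipliers settle
            return x, lam_new
        lam = lam_new
    return x, lam

# Usage: minimize (x - 2)^2 subject to x <= 1, written as c(x) = 1 - x >= 0.
# The KKT solution is x* = 1 with multiplier lam* = 2.
x_star, lam_star = nonlinear_rescaling(
    f=lambda x: (x[0] - 2.0) ** 2,
    c=lambda x: np.array([1.0 - x[0]]),
    x0=[0.0],
    num_constraints=1,
)
print(x_star, lam_star)
```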
MSC 2000:
*90C25 Convex programming
90C46 Optimality conditions, duality
90C51 Interior-point methods

Keywords: nonlinear rescaling; duality; Lagrangian; primal-dual methods; multiplier methods
