
Self-adaptive inexact proximal point methods. (English) Zbl 1147.90414

Summary: We propose a class of self-adaptive proximal point methods suitable for degenerate optimization problems where multiple minimizers may exist, or where the Hessian may be singular at a local minimizer. If the proximal regularization parameter has the form \(\mu({\mathbf{x}})=\beta\|\nabla f({\mathbf{x}})\|^{\eta}\) where \(\eta \in [0,2)\) and \(\beta >0\) is a constant, we obtain convergence to the set of minimizers that is linear for \(\eta =0\) and \(\beta\) sufficiently small, superlinear for \(\eta \in (0,1)\), and at least quadratic for \(\eta \in [1,2)\). Two different acceptance criteria for an approximate solution to the proximal problem are analyzed. These criteria are expressed in terms of the gradient of the proximal function, the gradient of the original function, and the iteration difference. With either acceptance criterion, the convergence results are analogous to those of the exact iterates. Preliminary numerical results are presented using some ill-conditioned CUTE test problems.
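
For illustration, here is a minimal Python sketch (not taken from the paper) of how a self-adaptive inexact proximal point loop along these lines could be organized. The regularization \(\mu({\mathbf{x}}_k)=\beta\|\nabla f({\mathbf{x}}_k)\|^{\eta}\) follows the formula above; the inner BFGS solver, the stopping test \(\theta\|\nabla f({\mathbf{x}}_k)\|\) on the gradient of the proximal function, and all parameter values are illustrative assumptions rather than the paper's actual acceptance criteria.

    # Sketch of a self-adaptive inexact proximal point iteration.
    # Assumptions (not from the paper): SciPy's BFGS as the inner solver,
    # acceptance of the subproblem iterate once the proximal gradient
    # drops below theta * ||grad f(x_k)||, and the parameter values below.
    import numpy as np
    from scipy.optimize import minimize

    def self_adaptive_prox(f, grad, x0, beta=1.0, eta=1.5, theta=0.1,
                           tol=1e-8, max_iter=100):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            gnorm = np.linalg.norm(g)
            if gnorm <= tol:                     # approximate stationary point
                break
            mu = beta * gnorm**eta               # self-adaptive regularization
            xk = x.copy()
            # Proximal subproblem: minimize f(y) + (mu/2) * ||y - xk||^2.
            prox_f = lambda y: f(y) + 0.5 * mu * np.dot(y - xk, y - xk)
            prox_g = lambda y: grad(y) + mu * (y - xk)
            # Solve inexactly: stop once the proximal gradient is small
            # relative to ||grad f(xk)|| (a stand-in acceptance rule).
            res = minimize(prox_f, xk, jac=prox_g, method="BFGS",
                           options={"gtol": theta * gnorm})
            x = res.x
        return x

    if __name__ == "__main__":
        # Degenerate test: the Hessian of f(x) = sum(x_i^4) is singular at 0.
        f = lambda x: np.sum(x**4)
        grad = lambda x: 4.0 * x**3
        print(np.linalg.norm(self_adaptive_prox(f, grad, x0=np.ones(5))))

On this toy problem the minimizer at the origin is degenerate in exactly the sense described above, and the adaptive choice of \(\mu\) keeps the regularized subproblems well conditioned as the iterates approach it.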

MSC:

90C55 Methods of successive quadratic programming type

Software:

CUTE; CUTEr; CG_DESCENT

References:

[1] Bongartz, I., Conn, A.R., Gould, N.I.M., Toint, P.L.: CUTE: constrained and unconstrained testing environments. ACM Trans. Math. Softw. 21, 123–160 (1995) · Zbl 0886.65058 · doi:10.1145/200979.201043
[2] Combettes, P.L., Pennanen, T.: Proximal methods for cohypomonotone operators. SIAM J. Control Optim. 43, 731–742 (2004) · Zbl 1078.47003 · doi:10.1137/S0363012903427336
[3] Fan, J., Yuan, Y.: On the convergence of the Levenberg-Marquardt method without nonsingularity assumption. Computing 74, 23–39 (2005) · Zbl 1076.65047 · doi:10.1007/s00607-004-0083-1
[4] Ha, C.D.: A generalization of the proximal point algorithm. SIAM J. Control Optim. 28, 503–512 (1990) · Zbl 0699.49037 · doi:10.1137/0328029
[5] Hager, W.W., Zhang, H.: CG_DESCENT user’s guide. Tech. Rep., Department of Mathematics, University of Florida, Gainesville (2004)
[6] Hager, W.W., Zhang, H.: A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 16, 170–192 (2005) · Zbl 1093.90085 · doi:10.1137/030601880
[7] Hager, W.W., Zhang, H.: Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent. ACM Trans. Math. Softw. 32, 113–137 (2006) · Zbl 1346.90816 · doi:10.1145/1132973.1132979
[8] Humes, C., Silva, P.: Inexact proximal point algorithms and descent methods in optimization. Optim. Eng. 6, 257–271 (2005) · Zbl 1080.90072 · doi:10.1007/s11081-005-6798-9
[9] Iusem, A.N., Pennanen, T., Svaiter, B.F.: Inexact variants of the proximal point algorithm without monotonicity. SIAM J. Optim. 13, 1080–1097 (2003) · Zbl 1053.90134 · doi:10.1137/S1052623401399587
[10] Kaplan, A., Tichatschke, R.: Proximal point methods and nonconvex optimization. J. Glob. Optim. 13, 389–406 (1998) · Zbl 0916.90224 · doi:10.1023/A:1008321423879
[11] Li, D., Fukushima, M., Qi, L., Yamashita, N.: Regularized Newton methods for convex minimization problems with singular solutions. Comput. Optim. Appl. 28, 131–147 (2004) · Zbl 1056.90111 · doi:10.1023/B:COAP.0000026881.96694.32
[12] Luque, F.J.: Asymptotic convergence analysis of the proximal point algorithm. SIAM J. Control Optim. 22, 277–293 (1984) · Zbl 0533.49028 · doi:10.1137/0322019
[13] Martinet, B.: Régularisation d’inéquations variationnelles par approximations successives. Rev. Française Inform. Rech. Opér. 4, Sér. R-3, 154–158 (1970) · Zbl 0215.21103
[14] Martinet, B.: Détermination approchée d’un point fixe d’une application pseudo-contractante. C.R. Séances Acad. Sci. 274, 163–165 (1972) · Zbl 0226.47032
[15] Pennanen, T.: Local convergence of the proximal point algorithm and multiplier methods without monotonicity. Math. Oper. Res. 27, 170–191 (2002) · Zbl 1082.90582 · doi:10.1287/moor.27.1.170.331
[16] Rockafellar, R.T.: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1, 97–116 (1976) · Zbl 0402.90076 · doi:10.1287/moor.1.2.97
[17] Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976) · Zbl 0358.90053 · doi:10.1137/0314056
[18] Tseng, P.: Error bounds and superlinear convergence analysis of some Newton-type methods in optimization. In: Pillo, G.D., Giannessi, F. (eds.) Nonlinear Optimization and Related Topics, pp. 445–462. Kluwer, Dordrecht (2000) · Zbl 0965.65091
[19] Wolfe, P.: Convergence conditions for ascent methods. SIAM Rev. 11, 226–235 (1969) · Zbl 0177.20603 · doi:10.1137/1011036
[20] Wolfe, P.: Convergence conditions for ascent methods II: some corrections. SIAM Rev. 13, 185–188 (1971) · Zbl 0216.26901 · doi:10.1137/1013035
[21] Yamashita, N., Fukushima, M.: The proximal point algorithm with genuine superlinear convergence for the monotone complementarity problem. SIAM J. Optim. 11, 364–379 (2000) · Zbl 1004.47031 · doi:10.1137/S105262349935949X
[22] Yamashita, N., Fukushima, M.: On the rate of convergence of the Levenberg-Marquardt method. In: Topics in Numerical Analysis. Comput. Suppl., vol. 15, pp. 239–249. Springer, New York (2001) · Zbl 1001.65047