
On verifiable sufficient conditions for sparse signal recovery via \(\ell_{1}\) minimization. (English) Zbl 1211.90333

Summary: We discuss necessary and sufficient conditions for a sensing matrix to be "\(s\)-good", that is, to allow for exact \(\ell_{1}\)-recovery of sparse signals with \(s\) nonzero entries when no measurement noise is present. We then express the error bounds for imperfect \(\ell_{1}\)-recovery (nonzero measurement noise, a nearly \(s\)-sparse signal, a near-optimal solution of the optimization problem yielding the \(\ell_{1}\)-recovery) in terms of the characteristics underlying these conditions. Further, we demonstrate (and this is the principal result of the paper) that these characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact sparse \(\ell_{1}\)-recovery and to efficiently computable upper bounds on those \(s\) for which a given sensing matrix is \(s\)-good. We also establish instructive links between our approach and basic concepts of Compressed Sensing theory, such as the Restricted Isometry and Restricted Eigenvalue properties.
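
For the reader's convenience, a minimal formal statement of the noiseless \(\ell_{1}\)-recovery in question (the standard basis pursuit program, stated here as background rather than quoted from the paper): given the observations \(y = Ax\) of a signal \(x\), one takes as the recovery

\[
\widehat{x} \in \mathop{\mathrm{Argmin}}_{z} \left\{ \|z\|_{1} : Az = y \right\},
\]

and \(A\) is \(s\)-good exactly when every signal \(x\) with at most \(s\) nonzero entries is the unique optimal solution of this program for \(y = Ax\), so that \(\widehat{x} = x\).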

MSC:

90C90 Applications of mathematical programming
90C05 Linear programming
65K05 Numerical mathematical programming methods
94A12 Signal theory (characterization, reconstruction, filtering, etc.)

Software:

Mosek

References:

[1] Andersen E.D., Andersen K.D.: The MOSEK optimization tools manual. Version 5.0. http://www.mosek.com/fileadmin/products/5_0/tools/doc/html/tools/index.html
[2] Ben-Tal A., Nemirovski A.: Lectures on Modern Convex Optimization. SIAM, Philadelphia (2001) · Zbl 0986.90032
[3] Bickel P.J.: Discussion of "The Dantzig selector: statistical estimation when p is much larger than n" by E.J. Candès and T. Tao. Ann. Stat. 35, 2352–2357 (2007)
[4] Bickel P.J., Ritov Y., Tsybakov A.B.: Simultaneous analysis of Lasso and Dantzig selector. Ann. Stat. 37(4), 1705–1732 (2009) · Zbl 1173.62022
[5] Candès E. J., Tao T.: The Dantzig selector: statistical estimation when p is much larger than n. Ann. Stat. 35, 2313–2351 (2007) · Zbl 1139.62019 · doi:10.1214/009053606000001523
[6] Candès E.J., Tao T.: Decoding by linear programming. IEEE Trans. Inf. Theory 51, 4203–4215 (2005) · Zbl 1264.94121 · doi:10.1109/TIT.2005.858979
[7] Candès E.J., Tao T.: Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Inf. Theory 52, 5406–5425 (2006) · Zbl 1309.94033 · doi:10.1109/TIT.2006.885507
[8] Candès E.J., Romberg J., Tao T.: Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math. 59(8), 1207–1223 (2006) · Zbl 1098.94009 · doi:10.1002/cpa.20124
[9] Candès E.J.: Compressive sampling. In: Sanz-Solé, M., Soria, J., Varona, J.L., Verdera, J. (eds.) International Congress of Mathematicians, Madrid 2006, vol. III, pp. 1437–1452. European Mathematical Society Publishing House (2006) · Zbl 1130.94013
[10] Candès E.J.: The restricted isometry property and its implications for compressed sensing. C. R. Acad. Sci. Paris, Ser. I 346, 589–592 (2008) · Zbl 1153.94002
[11] Cohen, A., Dahmen, W., DeVore, R.: Compressed Sensing and Best k-term Approximation. Preprint, http://www.math.sc.edu/~devore/publications/CDDSensing_6.pdf (2006) · Zbl 1206.94008
[12] d’Aspremont, A., El Ghaoui, L.: Testing the Nullspace Property using Semidefinite Programming. Preprint. http://arxiv.org/abs/0807.3520 (2008)
[13] DeVore, R.: Deterministic Constructions of Compressed Sensing Matrices. Preprint, Department of Mathematics, University of South Carolina (2007) · Zbl 1134.94312
[14] Donoho D., Huo X.: Uncertainty principles and ideal atomic decomposition. IEEE Trans. Inf. Theory 47(7), 2845–2862 (2001) · Zbl 1019.94503 · doi:10.1109/18.959265
[15] Donoho, D.: High-dimensional centrally-symmetric polytopes with neighborliness proportional to dimension. Technical report, Department of Statistics, Stanford University (2004) · Zbl 1095.52500
[16] Donoho, D.: Neighborly polytopes and sparse solutions of underdetermined linear equations. Technical report, Department of Statistics, Stanford University (2004)
[17] Donoho D., Elad M., Temlyakov V. N.: Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inf. Theory 52, 6–18 (2006) · Zbl 1288.94017 · doi:10.1109/TIT.2005.860430
[18] Fuchs J.-J.: On sparse representations in arbitrary redundant bases. IEEE Trans. Inf. Theory 50, 1341–1344 (2004) · Zbl 1284.94018 · doi:10.1109/TIT.2004.828141
[19] Fuchs J.-J.: Recovery of exact sparse representations in the presence of bounded noise. IEEE Trans. Inf. Theory 51, 3601–3608 (2005) · Zbl 1286.94031 · doi:10.1109/TIT.2005.855614
[20] Nemirovski A.: Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM J. Optim. 15, 229–251 (2004) · Zbl 1106.90059 · doi:10.1137/S1052623403425629
[21] Tibshirani R.: Regression shrinkage and selection via the Lasso. J. R. Stat. Soc. Ser. B 58, 267–288 (1996) · Zbl 0850.62538
[22] Tropp J.A.: Just relax: convex programming methods for identifying sparse signals. IEEE Trans. Inf. Theory 52(3), 1030–1051 (2006) · Zbl 1288.94025 · doi:10.1109/TIT.2005.864420
[23] Zhang, Y.: A simple proof for recoverability of \(\ell_1\)-minimization: go over or under? Rice CAAM Department Technical Report TR05-09 (2005)