
Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators. (English) Zbl 1306.62155

Summary: We derive the \(l_{\infty }\) convergence rate simultaneously for the Lasso and Dantzig estimators in a high-dimensional linear regression model, under a mutual coherence assumption on the Gram matrix of the design and under two different assumptions on the noise: Gaussian noise and general noise with finite variance. We then prove that, with a properly chosen threshold, the thresholded Lasso and Dantzig estimators simultaneously enjoy a sign concentration property, provided that the non-zero components of the target vector are not too small.
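For orientation, here is a minimal sketch of the two estimators and of the thresholding mechanism; the notation (design matrix \(X \in \mathbb{R}^{n \times p}\), response \(y \in \mathbb{R}^{n}\), target vector \(\beta^{*}\), tuning parameter \(\lambda > 0\)) is chosen for illustration and need not match the paper's. In their standard formulations, the Lasso penalizes the \(l_{1}\) norm of the coefficients, while the Dantzig selector minimizes it subject to a constraint on the correlation of the residuals with the design:
\[
\hat{\beta}^{L} \in \arg\min_{\beta \in \mathbb{R}^{p}} \Big\{ \tfrac{1}{n}\|y - X\beta\|_{2}^{2} + 2\lambda\|\beta\|_{1} \Big\},
\qquad
\hat{\beta}^{D} \in \arg\min \Big\{ \|\beta\|_{1} : \tfrac{1}{n}\|X^{\top}(y - X\beta)\|_{\infty} \le \lambda \Big\}.
\]
A sup-norm bound \(\|\hat{\beta} - \beta^{*}\|_{\infty} \le \tau\) yields sign recovery by hard thresholding: setting \(\tilde{\beta}_{j} = \hat{\beta}_{j}\,\mathbf{1}\{|\hat{\beta}_{j}| > \tau\}\) zeroes every coordinate with \(\beta^{*}_{j} = 0\), while every coordinate with \(|\beta^{*}_{j}| > 2\tau\) survives with the correct sign; this is the sense in which the non-zero components must not be too small.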

MSC:

62J05 Linear regression; mixed models
62F12 Asymptotic properties of parametric estimators

Software:

PDCO

References:

[1] P.J. Bickel, Y. Ritov and A.B. Tsybakov (2007). Simultaneous analysis of Lasso and Dantzig selector. Submitted to Ann. Statist. Available at http://www.proba.jussieu.fr/pageperso/tsybakov/. · Zbl 1173.62022
[2] F. Bunea (2007). Consistent selection via the Lasso for high dimensional approximating regression models. IMS Lecture Notes-Monograph Series. · doi:10.1214/074921708000000101
[3] F. Bunea, A.B. Tsybakov and M.H. Wegkamp (2007). Sparsity oracle inequalities for the Lasso. Electronic Journal of Statistics 1, 169-194. · Zbl 1146.62028 · doi:10.1214/07-EJS008
[4] F. Bunea, A.B. Tsybakov and M.H. Wegkamp (2007). Aggregation for Gaussian regression. Ann. Statist. 35(4), 1674-1697. · Zbl 1209.62065 · doi:10.1214/009053606000001587
[5] S.S. Chen, D.L. Donoho and M.A. Saunders (1999). Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing 20, 33-61. · Zbl 0919.94002 · doi:10.1137/S1064827596304010
[6] E. Candès and T. Tao (2007). The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). Ann. Statist. · Zbl 1139.62019 · doi:10.1214/009053606000001523
[7] D.L. Donoho, M. Elad and V. Temlyakov (2006). Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. on Information Theory 52, 6-18. · Zbl 1288.94017 · doi:10.1109/TIT.2005.860430
[8] B. Efron, T. Hastie, I. Johnstone and R. Tibshirani (2004). Least angle regression. Ann. Statist. 32, 402-451. · Zbl 1091.62054 · doi:10.1214/009053604000000067
[9] E. Greenshtein and Y. Ritov (2004). Persistence in high-dimensional linear predictor selection and the virtue of overparametrization. Bernoulli 10(6), 971-988. · Zbl 1055.62078 · doi:10.3150/bj/1106314846
[10] K. Knight and W.J. Fu (2000). Asymptotics for lasso-type estimators. Ann. Statist. 28, 1356-1378. · Zbl 1105.62357 · doi:10.1214/aos/1015957397
[11] V. Koltchinskii (2006). Sparsity in penalized empirical risk minimization. Manuscript. · Zbl 1168.62044
[12] V. Koltchinskii (2007). Dantzig selector and sparsity oracle inequalities. Manuscript. · Zbl 1452.62486
[13] N. Meinshausen and P. Bühlmann (2006). High dimensional graphs and variable selection with the Lasso. Ann. Statist. 34, 1436-1462. · Zbl 1113.62082 · doi:10.1214/009053606000000281
[14] N. Meinshausen and B. Yu (2006). Lasso-type recovery of sparse representations for high-dimensional data. Ann. Statist. · Zbl 1155.62050 · doi:10.1214/07-AOS582
[15] A. Nemirovski (2000). Topics in nonparametric statistics. In Lectures on Probability Theory and Statistics (Saint-Flour, 1998), Lecture Notes in Math., vol. 1738. Springer, Berlin, 85-277. · Zbl 0998.62033 · doi:10.1007/BFb0106703
[16] M.R. Osborne, B. Presnell and B.A. Turlach (2000). On the Lasso and its dual. Journal of Computational and Graphical Statistics 9, 319-337.
[17] R. Tibshirani (1996). Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B 58, 267-288. · Zbl 0850.62538
[18] S.A. van de Geer (2007). High dimensional generalized linear models and the Lasso. Ann. Statist. · Zbl 1138.62323 · doi:10.1214/009053607000000929
[19] S.A. van de Geer (2007). The deterministic Lasso. Technical Report No. 140, Seminar für Statistik, ETH Zürich.
[20] M.J. Wainwright (2006). Sharp thresholds for noisy and high-dimensional recovery of sparsity using \(l_1\)-constrained quadratic programming. Technical Report No. 709, Department of Statistics, UC Berkeley.
[21] C.H. Zhang and J. Huang (2007). The sparsity and bias of the Lasso selection in high-dimensional linear regression. Ann. Statist. · Zbl 1142.62044 · doi:10.1214/07-AOS520
[22] P. Zhao and B. Yu (2007). On model selection consistency of Lasso. Journal of Machine Learning Research 7, 2541-2567. · Zbl 1222.62008
[23] H. Zou (2006). The adaptive Lasso and its oracle properties. Journal of the American Statistical Association 101(476), 1418-1429. · Zbl 1171.62326 · doi:10.1198/016214506000000735