Simultaneous analysis of Lasso and Dantzig selector
References
[1] Bickel, P. J. (2007). Discussion of “The Dantzig selector: Statistical estimation when p is much larger than n,” by E. Candes and T. Tao. Ann. Statist. 35 2352–2357.
[2] Bunea, F., Tsybakov, A. B. and Wegkamp, M. H. (2004). Aggregation for regression learning. Preprint LPMA, Univ. Paris 6–Paris 7, no. 948. Available at arXiv:math.ST/0410214 and at https://hal.ccsd.cnrs.fr/ccsd-00003205.
[3] Bunea, F., Tsybakov, A. B. and Wegkamp, M. H. (2006). Aggregation and sparsity via ℓ1 penalized least squares. In Proceedings of 19th Annual Conference on Learning Theory (COLT 2006) (G. Lugosi and H. U. Simon, eds.). Lecture Notes in Artificial Intelligence 4005 379–391. Springer, Berlin.
[4] Bunea, F., Tsybakov, A. B. and Wegkamp, M. H. (2007). Aggregation for Gaussian regression. Ann. Statist. 35 1674–1697.
[5] Bunea, F., Tsybakov, A. B. and Wegkamp, M. H. (2007). Sparsity oracle inequalities for the Lasso. Electron. J. Statist. 1 169–194.
[6] Bunea, F., Tsybakov, A. B. and Wegkamp, M. H. (2007). Sparse density estimation with ℓ1 penalties. In Proceedings of 20th Annual Conference on Learning Theory (COLT 2007) (N. H. Bshouty and C. Gentile, eds.). Lecture Notes in Artificial Intelligence 4539 530–543. Springer, Berlin.
[7] Candes, E. and Tao, T. (2007). The Dantzig selector: Statistical estimation when p is much larger than n. Ann. Statist. 35 2313–2351.
[8] Donoho, D. L., Elad, M. and Temlyakov, V. (2006). Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inform. Theory 52 6–18.
[9] Efron, B., Hastie, T., Johnstone, I. and Tibshirani, R. (2004). Least angle regression. Ann. Statist. 32 407–451.
[10] Friedman, J., Hastie, T., Höfling, H. and Tibshirani, R. (2007). Pathwise coordinate optimization. Ann. Appl. Statist. 1 302–332.
[11] Fu, W. and Knight, K. (2000). Asymptotics for Lasso-type estimators. Ann. Statist. 28 1356–1378.
[12] Greenshtein, E. and Ritov, Y. (2004). Persistency in high dimensional linear predictor-selection and the virtue of over-parametrization. Bernoulli 10 971–988.
[13] Juditsky, A. and Nemirovski, A. (2000). Functional aggregation for nonparametric estimation. Ann. Statist. 28 681–712.
[14] Koltchinskii, V. (2006). Sparsity in penalized empirical risk minimization. Ann. Inst. H. Poincaré Probab. Statist. To appear.
[15] Koltchinskii, V. (2007). Dantzig selector and sparsity oracle inequalities. Unpublished manuscript.
[16] Meier, L., van de Geer, S. and Bühlmann, P. (2008). The Group Lasso for logistic regression. J. Roy. Statist. Soc. Ser. B 70 53–71.
[17] Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the Lasso. Ann. Statist. 34 1436–1462.
[18] Meinshausen, N. and Yu, B. (2006). Lasso-type recovery of sparse representations for high-dimensional data. Ann. Statist. To appear.
[19] Nemirovski, A. (2000). Topics in nonparametric statistics. In Ecole d’Eté de Probabilités de Saint-Flour XXVIII—1998. Lecture Notes in Math. 1738. Springer, New York.
[20] Osborne, M. R., Presnell, B. and Turlach, B. A. (2000a). On the Lasso and its dual. J. Comput. Graph. Statist. 9 319–337.
[21] Osborne, M. R., Presnell, B. and Turlach, B. A. (2000b). A new approach to variable selection in least squares problems. IMA J. Numer. Anal. 20 389–404.
[22] Tibshirani, R. (1996). Regression shrinkage and selection via the Lasso. J. Roy. Statist. Soc. Ser. B 58 267–288.
[23] Tsybakov, A. B. (2006). Discussion of “Regularization in Statistics,” by P. Bickel and B. Li. TEST 15 303–310.
[24] Turlach, B. A. (2005). On algorithms for solving least squares problems under an L1 penalty or an L1 constraint. In 2004 Proceedings of the American Statistical Association, Statistical Computing Section [CD-ROM] 2572–2577. Amer. Statist. Assoc., Alexandria, VA.
[25] van de Geer, S. A. (2008). High dimensional generalized linear models and the Lasso. Ann. Statist. 36 614–645.
[26] Zhang, C.-H. and Huang, J. (2008). Model-selection consistency of the Lasso in high-dimensional regression. Ann. Statist. 36 1567–1594.
[27] Zhao, P. and Yu, B. (2006). On model selection consistency of Lasso. J. Mach. Learn. Res. 7 2541–2563.