Sparse estimation of large covariance matrices via a nested Lasso penalty
References
Adam, B., Qu, Y., Davis, J., Ward, M., Clements, M., Cazares, L., Semmes, O., Schellhammer, P., Yasui, Y., Feng, Z. and Wright, G. (2002). Serum protein fingerprinting coupled with a pattern-matching algorithm distinguishes prostate cancer from benign prostate hyperplasia and healthy men. <i>Cancer Research</i> <b>62</b> 3609–3614.
Anderson, T. W. (1958). <i>An Introduction to Multivariate Statistical Analysis</i>. Wiley, New York.
Banerjee, O., d’Aspremont, A. and El Ghaoui, L. (2006). Sparse covariance selection via robust maximum likelihood estimation. In <i>Proceedings of ICML</i>.
Bickel, P. J. and Levina, E. (2004). Some theory for Fisher’s linear discriminant function, “naive Bayes,” and some alternatives when there are many more variables than observations. <i>Bernoulli</i> <b>10</b> 989–1010.
Bickel, P. J. and Levina, E. (2007). Regularized estimation of large covariance matrices. <i>Ann. Statist.</i> To appear.
Dey, D. K. and Srinivasan, C. (1985). Estimation of a covariance matrix under Stein’s loss. <i>Ann. Statist.</i> <b>13</b> 1581–1591.
Diggle, P. and Verbyla, A. (1998). Nonparametric estimation of covariance structure in longitudinal data. <i>Biometrics</i> <b>54</b> 401–415.
Djavan, B., Zlotta, A., Kratzik, C., Remzi, M., Seitz, C., Schulman, C. and Marberger, M. (1999). PSA, PSA density, PSA density of transition zone, free/total PSA ratio, and PSA velocity for early detection of prostate cancer in men with serum PSA 2.5 to 4.0 ng/ml. <i>Urology</i> <b>54</b> 517–522.
Fan, J., Fan, Y. and Lv, J. (2008). High dimensional covariance matrix estimation using a factor model. <i>J. Econometrics</i>. To appear.
Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. <i>J. Amer. Statist. Assoc.</i> <b>96</b> 1348–1360.
Friedman, J. (1989). Regularized discriminant analysis. <i>J. Amer. Statist. Assoc.</i> <b>84</b> 165–175.
Friedman, J., Hastie, T., Höfling, H. and Tibshirani, R. (2007). Pathwise coordinate optimization. <i>Ann. Appl. Statist.</i> <b>1</b> 302–332.
Fu, W. (1998). Penalized regressions: The bridge versus the lasso. <i>J. Comput. Graph. Statist.</i> <b>7</b> 397–416.
Furrer, R. and Bengtsson, T. (2007). Estimation of high-dimensional prior and posterior covariance matrices in Kalman filter variants. <i>J. Multivariate Anal.</i> <b>98</b> 227–255.
Haff, L. R. (1980). Empirical Bayes estimation of the multivariate normal covariance matrix. <i>Ann. Statist.</i> <b>8</b> 586–597.
Hastie, T., Tibshirani, R. and Friedman, J. (2001). <i>The Elements of Statistical Learning</i>. Springer, Berlin.
Hoerl, A. E. and Kennard, R. W. (1970). Ridge regression: Biased estimation for nonorthogonal problems. <i>Technometrics</i> <b>12</b> 55–67.
Huang, J., Liu, N., Pourahmadi, M. and Liu, L. (2006). Covariance matrix selection and estimation via penalised normal likelihood. <i>Biometrika</i> <b>93</b> 85–98.
Johnstone, I. M. (2001). On the distribution of the largest eigenvalue in principal components analysis. <i>Ann. Statist.</i> <b>29</b> 295–327.
Johnstone, I. M. and Lu, A. Y. (2007). Sparse principal components analysis. <i>J. Amer. Statist. Assoc.</i> To appear.
Ledoit, O. and Wolf, M. (2003). A well-conditioned estimator for large-dimensional covariance matrices. <i>J. Multivariate Anal.</i> <b>88</b> 365–411.
Mardia, K. V., Kent, J. T. and Bibby, J. M. (1979). <i>Multivariate Analysis</i>. Academic Press, New York.
Pannek, J. and Partin, A. (1998). The role of PSA and percent free PSA for staging and prognosis prediction in clinically localized prostate cancer. <i>Semin. Urol. Oncol.</i> <b>16</b> 100–105.
Pourahmadi, M. (1999). Joint mean-covariance models with applications to longitudinal data: Unconstrained parameterisation. <i>Biometrika</i> <b>86</b> 677–690.
Smith, M. and Kohn, R. (2002). Parsimonious covariance matrix estimation for longitudinal data. <i>J. Amer. Statist. Assoc.</i> <b>97</b> 1141–1153.
Stamey, T., Johnstone, I., McNeal, J., Lu, A. and Yemoto, C. (2002). Preoperative serum prostate specific antigen levels between 2 and 22 ng/ml correlate poorly with post-radical prostatectomy cancer morphology: Prostate specific antigen cure rates appear constant between 2 and 9 ng/ml. <i>J. Urol.</i> <b>167</b> 103–111.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. <i>J. Roy. Statist. Soc. Ser. B</i> <b>58</b> 267–288.
Tibshirani, R., Hastie, T., Narasimhan, B. and Chu, G. (2003). Class prediction by nearest shrunken centroids, with applications to DNA microarrays. <i>Statist. Sci.</i> <b>18</b> 104–117.
Tibshirani, R., Saunders, M., Rosset, S., Zhu, J. and Knight, K. (2005). Sparsity and smoothness via the fused lasso. <i>J. Roy. Statist. Soc. Ser. B</i> <b>67</b> 91–108.
Wong, F., Carter, C. and Kohn, R. (2003). Efficient estimation of covariance selection models. <i>Biometrika</i> <b>90</b> 809–830.
Wu, W. B. and Pourahmadi, M. (2003). Nonparametric estimation of large covariance matrices of longitudinal data. <i>Biometrika</i> <b>90</b> 831–844.
Yuan, M. and Lin, Y. (2007). Model selection and estimation in the Gaussian graphical model. <i>Biometrika</i> <b>94</b> 19–35.
Zhao, P., Rocha, G. and Yu, B. (2006). Grouped and hierarchical model selection through composite absolute penalties. Technical Report 703, Dept. Statistics, UC Berkeley.