Collinearity: a review of methods to deal with it and a simulation study evaluating their performance
Abstract
Collinearity refers to the non-independence of predictor variables, usually in a regression-type analysis. It is a common feature of any descriptive ecological data set and can be a problem for parameter estimation because it inflates the variance of regression parameters and hence potentially leads to the wrong identification of relevant predictors in a statistical model. Collinearity is a severe problem when a model is trained on data from one region or time and then used to predict to another region or time with a different or unknown structure of collinearity. To demonstrate how far the problem of collinearity reaches in ecology, we show how relationships among predictors differ between biomes and change across spatial scales and through time. Across disciplines, different approaches to addressing collinearity have been developed, ranging from clustering of predictors and threshold-based pre-selection, through latent variable methods, to shrinkage and regularisation. Using simulated data with five predictor-response relationships of increasing complexity and eight levels of collinearity, we compared ways to address collinearity with standard multiple regression and machine-learning approaches. We assessed the performance of each approach by testing its impact on prediction to new data. In the extreme case, we tested whether the methods were able to identify the true underlying relationship by training them on data with strong collinearity and evaluating their performance on a test dataset without any collinearity. We found that methods specifically designed for collinearity, such as latent variable methods and tree-based models, did not outperform the traditional GLM and threshold-based pre-selection. Our results highlight the value of GLM in combination with penalised methods (particularly ridge) and threshold-based pre-selection, provided that omitted variables are considered in the final interpretation. However, all approaches tested yielded degraded predictions under change in collinearity structure, and the 'folklore' threshold of |r| > 0.7 for correlation coefficients between predictor variables proved an appropriate indicator of when collinearity begins to severely distort model estimation and subsequent prediction. Ecological understanding of the system, applied during pre-analysis variable selection, and the choice of the least collinearity-sensitive statistical approaches reduce the problems of collinearity, but cannot ultimately solve them.
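As a minimal, illustrative sketch only (not the simulation code used in the study), the R snippet below shows two of the approaches mentioned above on made-up data: threshold-based pre-selection using the |r| > 0.7 rule of thumb, and a ridge-penalised fit compared with an ordinary GLM. The predictor names x1–x3, the correlation of 0.85 and the lambda grid are arbitrary assumptions; MASS::mvrnorm and MASS::lm.ridge are used only to keep the example self-contained, whereas dedicated penalised-regression packages (e.g. penalized or glmnet) would typically be preferred in practice.

```r
## Minimal sketch (not the study's simulation code): simulate collinear
## predictors, screen them against the |r| > 0.7 rule of thumb, and compare
## an ordinary GLM with a ridge-penalised fit.
library(MASS)  # provides mvrnorm() and lm.ridge()

set.seed(1)
n <- 200

## Three predictors; x1 and x2 are strongly collinear (r = 0.85), x3 is not.
Sigma <- matrix(c(1.00, 0.85, 0.20,
                  0.85, 1.00, 0.20,
                  0.20, 0.20, 1.00), nrow = 3)
X <- mvrnorm(n, mu = rep(0, 3), Sigma = Sigma)
colnames(X) <- c("x1", "x2", "x3")

## True relationship uses x1 and x3 only; x2 is a collinear "decoy".
y   <- 1 + 2 * X[, "x1"] + 0.5 * X[, "x3"] + rnorm(n)
dat <- data.frame(y, X)

## Threshold-based pre-selection: flag predictor pairs with |r| > 0.7.
r <- cor(X)
print(which(abs(r) > 0.7 & upper.tri(r), arr.ind = TRUE))  # flags the x1-x2 pair

## Ordinary GLM: the x1 and x2 estimates carry inflated standard errors.
print(summary(glm(y ~ x1 + x2 + x3, data = dat, family = gaussian)))

## Ridge regression shrinks the collinear coefficients; lambda chosen by GCV.
ridge <- lm.ridge(y ~ x1 + x2 + x3, data = dat, lambda = seq(0, 10, by = 0.1))
print(coef(ridge)[which.min(ridge$GCV), ])
```

Consistent with the findings summarised above, neither step is a cure: the threshold merely flags the collinear pair and leaves the choice of which variable to drop to the analyst, while the ridge fit stabilises the estimates without revealing whether x1 or x2 truly drives the response.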