Supervised projection approach for boosting classifiers
References
Breiman, 1996, Stacked regressions, Machine Learning, 24, 49, 10.1007/BF00117832
Kohavi, 1997, Option decision trees with majority voting, 161
Bauer, 1999, An empirical comparison of voting classification algorithms: bagging, boosting, and variants, Machine Learning, 36, 105, 10.1023/A:1007515423169
Webb, 2000, Multiboosting: a technique for combining boosting and wagging, Machine Learning, 40, 159, 10.1023/A:1007659514849
García-Pedrajas, 2005, Cooperative coevolution of artificial neural network ensembles for pattern classification, IEEE Transactions on Evolutionary Computation, 9, 271, 10.1109/TEVC.2005.844158
Merz, 1999, Using correspondence analysis to combine classifiers, Machine Learning, 36, 33, 10.1023/A:1007559205422
Dietterich, 2000, Ensemble methods in machine learning, vol. 1857, 1
Freund, 1996, Experiments with a new boosting algorithm, 148
Dzeroski, 2004, Is combining classifiers with stacking better than selecting the best one?, Machine Learning, 54, 255, 10.1023/B:MACH.0000015881.36452.6e
Fern, 2003, Online ensemble learning: an empirical study, Machine Learning, 53, 71, 10.1023/A:1025619426553
L. Breiman, Bias, variance, and arcing classifiers, Technical Report 460, Department of Statistics, University of California, Berkeley, CA, 1996.
Schapire, 1998, Boosting the margin: a new explanation for the effectiveness of voting methods, Annals of Statistics, 26, 1651, 10.1214/aos/1024691352
Freund, 1997, A decision-theoretic generalization of on-line learning and an application to boosting, Journal of Computer and System Sciences, 55, 119, 10.1006/jcss.1997.1504
Friedman, 2000, Additive logistic regression: a statistical view of boosting, Annals of Statistics, 28, 337, 10.1214/aos/1016218223
Dietterich, 2000, An experimental comparison of three methods for constructing ensembles of decision trees: bagging, boosting, and randomization, Machine Learning, 40, 139, 10.1023/A:1007607513941
Schapire, 1999, Improved boosting algorithms using confidence-rated predictions, Machine Learning, 37, 297, 10.1023/A:1007614523901
Sun, 2006, Reducing the overfitting of AdaBoost by controlling its data distribution skewness, International Journal of Pattern Recognition and Artificial Intelligence, 20, 1093, 10.1142/S0218001406005137
Rätsch, 2001, Soft margins for AdaBoost, Machine Learning, 42, 287, 10.1023/A:1007618119488
Vapnik, 1999
García-Pedrajas, 2007, Nonlinear boosting projections for ensemble construction, Journal of Machine Learning Research, 8, 1
Yu, 2006, Multi-output regularized feature projection, IEEE Transactions on Knowledge and Data Engineering, 18, 1600, 10.1109/TKDE.2006.194
Ho, 1998, The random subspace method for constructing decision forests, IEEE Transactions on Pattern Analysis and Machine Intelligence, 20, 832, 10.1109/34.709601
Geng, 2005, Supervised nonlinear dimensionality reduction for visualization and classification, IEEE Transactions on Systems, Man, and Cybernetics—Part B: Cybernetics, 35, 1098, 10.1109/TSMCB.2005.850151
Li, 2005, Dynamic projection network for supervised pattern classification, International Journal of Approximate Reasoning, 40, 243, 10.1016/j.ijar.2005.06.001
Lee, 2006, Margin preserving projections, Electronics Letters, 42, 1249, 10.1049/el:20062087
Zhao, 2006, Local structure based supervised feature extraction, Pattern Recognition, 39, 1546, 10.1016/j.patcog.2006.02.023
Lerner, 1998, On pattern classification with Sammon's nonlinear mapping—An experimental study, Pattern Recognition, 31, 371, 10.1016/S0031-3203(97)00064-2
Tenenbaum, 2000, A global geometric framework for nonlinear dimensionality reduction, Science, 290, 2319, 10.1126/science.290.5500.2319
Kruskal, 1969, Toward a practical method which helps uncover the structure of a set of multivariate observations by finding the linear transformation which optimizes a new ‘index of condensation’, 427
Polzehl, 1995, Projection pursuit discriminant analysis, Computational Statistics and Data Analysis, 20, 141, 10.1016/0167-9473(94)00035-H
Lee, 2005, Projection pursuit for exploratory supervised classification, Journal of Computational & Graphical Statistics, 14, 831, 10.1198/106186005X77702
Lazarevic, 2000, Adaptive boosting for spatial functions with unstable driving attributes, 329
Gorman, 1988, Analysis of hidden units in a layered network trained to classify sonar targets, Neural Networks, 1, 75, 10.1016/0893-6080(88)90023-8
Mao, 1995, Artificial neural networks for feature extraction and multivariate data projection, IEEE Transactions on Neural Networks, 6, 296, 10.1109/72.363467
Haykin, 1999
Lerner, 1999, A comparative study of neural networks based feature extraction paradigms, Pattern Recognition Letters, 20, 7, 10.1016/S0167-8655(98)00120-2
Kuncheva, 2003, Error bounds for aggressive and conservative AdaBoost, vol. 2709, 25
S. Hettich, C. Blake, C. Merz, UCI repository of machine learning databases, 1998 〈http://www.ics.uci.edu/~mlearn/MLRepository.html〉.
Dietterich, 1998, Approximate statistical tests for comparing supervised classification learning algorithms, Neural Computation, 10, 1895, 10.1162/089976698300017197
Demšar, 2006, Statistical comparisons of classifiers over multiple data sets, Journal of Machine Learning Research, 7, 1
Rumelhart, 1986, Learning internal representations by error propagation, 318
Quinlan, 1993
Cristianini, 2000
C.-C. Chang, C.-J. Lin, LIBSVM: a library for support vector machines, 2001, software available at 〈http://www.csie.ntu.edu.tw/~cjlin/libsvm〉.
Zhang, 2005, Boosting with early stopping: convergence and consistency, The Annals of Statistics, 33, 1538, 10.1214/009053605000000255
Domingo, 2000, MadaBoost: a modification of AdaBoost, 180
Grove, 1998, Boosting in the limit: maximizing the margin of learned ensembles, 692
Mason, 2000, Improved generalization through explicit optimization of margins, Machine Learning, 38, 243, 10.1023/A:1007697429651
Breiman, 1999, Prediction games and arcing algorithms, Neural Computation, 11, 1493, 10.1162/089976699300016106
Ortiz-Boyer, 2005, CIXL2: a crossover operator for evolutionary algorithms based on population features, Journal of Artificial Intelligence Research, 24, 33, 10.1613/jair.1660
Sierra, 2006, Evolutionary discriminant analysis, IEEE Transactions on Evolutionary Computation, 10, 81, 10.1109/TEVC.2005.856069
Quinlan, 1996, Bagging, boosting, and C4.5, 725
J.R. Quinlan, Boosting first-order learning, in: Proceedings of the Algorithmic Learning Theory, Seventh International Workshop, ALT ’96, Sydney, Australia, October 1996, vol. 1160, Springer, Berlin, 1996, pp. 143–155.
Rosset, 2004, Boosting as a regularized path to a maximum margin classifier, Journal of Machine Learning Research, 5, 941
Breiman, 2001, Random forests, Machine Learning, 45, 5, 10.1023/A:1010933404324
Schapire, 1997, Boosting the margin: a new explanation for the effectiveness of voting methods, 322
Weigend, 1993, On overfitting and the effective number of hidden units, 335
Kong, 1995, Error-correcting output coding corrects bias and variance, 275
Kohavi, 1996, Bias plus variance decomposition for zero–one loss functions, 275