A Parallel Mixture of SVMs for Very Large Scale Problems

Neural Computation - Volume 14, Issue 5 - Pages 1105-1114 - 2002
Ronan Collobert1, Samy Bengio2, Yoshua Bengio3
1Dalle Molle Institute for Perceptual Artificial Intelligence, 1920 Martigny, Switzerland, and Université de Montréal, DIRO, Montréal, Québec, Canada
2Dalle Molle Institute for Perceptual Artificial Intelligence, 1920 Martigny, Switzerland
3Université de Montréal, DIRO, Montréal, Québec, Canada

Abstract

Support vector machines (SVMs) are state-of-the-art models for many classification problems, but they suffer from the complexity of their training algorithm, which is at least quadratic in the number of examples. It is therefore hopeless to try to solve with SVMs real-life problems that have more than a few hundred thousand examples. This article proposes a new mixture of SVMs that can easily be implemented in parallel and in which each SVM is trained on a small subset of the whole data set. Experiments on a large benchmark data set (Forest) yielded a significant speedup in training: empirically, the training time appears to grow locally linearly with the number of examples. In addition, and surprisingly, a significant improvement in generalization was observed.
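
The architecture the abstract describes can be sketched compactly. The following is a minimal illustration in Python, assuming scikit-learn and the standard library: several SVM experts are trained in parallel on small random subsets of the data, and a gater is then fit to combine their outputs. The number of experts, the subset size, and the logistic-regression gater are illustrative assumptions, not the paper's exact setup; the paper uses a neural-network gater and alternates between training the gater and retraining the experts on reassigned subsets.

# Sketch of a parallel mixture of SVMs (illustrative, not the paper's code).
# Assumes scikit-learn; expert count, subset size, and the linear gater
# are stand-in choices.
from concurrent.futures import ProcessPoolExecutor

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC


def train_expert(args):
    X_sub, y_sub = args
    # Each expert sees only a small subset, so its (at least quadratic)
    # training cost stays bounded regardless of the full data-set size.
    return SVC(kernel="rbf", gamma="scale").fit(X_sub, y_sub)


def fit_mixture(X, y, n_experts=8, subset_size=500, seed=0):
    rng = np.random.default_rng(seed)
    subsets = [rng.choice(len(X), size=subset_size, replace=False)
               for _ in range(n_experts)]
    # Train the experts in parallel, one process per subset.
    with ProcessPoolExecutor() as pool:
        experts = list(pool.map(train_expert,
                                [(X[idx], y[idx]) for idx in subsets]))
    # Gater: learn to combine the experts' margins into one decision.
    # A logistic regression stands in for the paper's neural-network gater.
    margins = np.column_stack([e.decision_function(X) for e in experts])
    gater = LogisticRegression(max_iter=1000).fit(margins, y)
    return experts, gater


def predict(experts, gater, X):
    margins = np.column_stack([e.decision_function(X) for e in experts])
    return gater.predict(margins)


if __name__ == "__main__":
    # Synthetic stand-in for a large data set such as Forest.
    X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
    experts, gater = fit_mixture(X[:16000], y[:16000])
    acc = (predict(experts, gater, X[16000:]) == y[16000:]).mean()
    print(f"held-out accuracy: {acc:.3f}")

Because each expert trains on a fixed-size subset, adding more data mainly adds more experts (or more reassignment rounds), which is what makes the overall cost scale roughly linearly rather than quadratically.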
