Generalized Discriminant Analysis Using a Kernel Approach. Neural Computation, Vol. 12, No. 10, pp. 2385-2404, 2000.
G. Baudat, F. Anouar
We present a new method that we call generalized discriminant analysis (GDA) to
deal with nonlinear discriminant analysis using kernel function operators. The
underlying theory is close to that of support vector machines (SVMs) insofar as
the GDA method provides a mapping of the input vectors into a high-dimensional
feature space. In the transformed space, linear properties make it easy to
extend and generalize...
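For intuition, here is a minimal NumPy sketch of the two-class kernel Fisher discriminant, the simplest instance of the kernelized eigenproblem GDA generalizes; the RBF kernel, regularizer, data, and variable names are our illustrative choices, not the paper's code.

```python
# Two-class kernel Fisher discriminant: a minimal sketch of the kind of
# nonlinear discriminant GDA generalizes (illustrative, not the paper's code).
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # K[i, j] = exp(-gamma * ||x_i - y_j||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_fda(X, y, gamma=1.0, reg=1e-3):
    """Dual coefficients alpha; the discriminant of x is sum_i alpha_i k(x_i, x)."""
    K = rbf_kernel(X, X, gamma)
    M, N = [], np.zeros_like(K)
    for c in (0, 1):
        Kc = K[:, y == c]                     # kernel columns of class c
        nc = Kc.shape[1]
        M.append(Kc.mean(axis=1))             # class mean in feature space (dual form)
        N += Kc @ (np.eye(nc) - np.full((nc, nc), 1.0 / nc)) @ Kc.T
    N += reg * np.eye(len(X))                 # regularized within-class scatter
    return np.linalg.solve(N, M[1] - M[0])

# Usage: project points onto the nonlinear discriminant direction.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
scores = rbf_kernel(X, X) @ kernel_fda(X, y)  # 1-D nonlinear projections
```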
Finite State Automata and Simple Recurrent Networks. Neural Computation, Vol. 1, No. 3, pp. 372-381, 1989.
Axel Cleeremans, David Servan‐Schreiber, James L. McClelland
We explore a network architecture introduced by Elman (1988) for predicting
successive elements of a sequence. The network uses the pattern of activation
over a set of hidden units from time step t−1, together with element t, to
predict element t+1. When the network is trained with strings from a
particular finite-state grammar, it can learn to be a perfect finite-state
recognizer for the grammar...
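The prediction scheme the abstract describes fits in a few lines. The sketch below is a forward pass only, with the alphabet size, hidden width, and weight initialization as our assumptions; training by backpropagation is omitted.

```python
# A minimal Elman-style simple recurrent network forward pass (hypothetical
# dimensions; the paper trains such a net on strings from a finite-state grammar).
import numpy as np

rng = np.random.default_rng(0)
n_sym, n_hid = 7, 16                       # e.g. a small grammar's alphabet
W_xh = rng.normal(0, 0.1, (n_hid, n_sym))  # input (element t) -> hidden
W_hh = rng.normal(0, 0.1, (n_hid, n_hid))  # context (hidden at t-1) -> hidden
W_hy = rng.normal(0, 0.1, (n_sym, n_hid))  # hidden -> next-symbol prediction

def forward(symbols):
    """Predict element t+1 from element t and the hidden state at t-1."""
    h = np.zeros(n_hid)
    preds = []
    for s in symbols:
        x = np.eye(n_sym)[s]               # one-hot encoding of element t
        h = np.tanh(W_xh @ x + W_hh @ h)   # new hidden state folds in the past
        logits = W_hy @ h
        preds.append(np.exp(logits) / np.exp(logits).sum())  # softmax
    return preds                           # preds[t] ~ P(element t+1)

probs = forward([0, 3, 5, 1])
```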
Canonical Correlation Analysis: An Overview with Application to Learning Methods. Neural Computation, Vol. 16, No. 12, pp. 2639-2664, 2004.
David R. Hardoon, Sándor Szedmák, John Shawe‐Taylor
We present a general method using kernel canonical correlation analysis to learn
a semantic representation of web images and their associated text. The semantic
space provides a common representation and enables a comparison between the text
and images. In the experiments, we look at two approaches to retrieving images
based only on their content from a text query. We compare orthogonalization
approaches...
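A compact sketch of one common regularized KCCA formulation, written as a generalized eigenproblem over the two Gram matrices; the block-matrix form, the regularization, the linear kernels, and the toy paired "views" are our simplifications, not the paper's exact setup.

```python
# Regularized kernel CCA as a generalized eigenproblem (one common
# formulation; centering of the Gram matrices omitted for brevity).
import numpy as np

def kcca(Kx, Ky, reg=0.1):
    """Top kernel canonical correlation and dual directions for two Gram matrices."""
    n = Kx.shape[0]
    A = np.block([[np.zeros((n, n)), Kx @ Ky],
                  [Ky @ Kx, np.zeros((n, n))]])
    Rx = (Kx + reg * np.eye(n)) @ (Kx + reg * np.eye(n))
    Ry = (Ky + reg * np.eye(n)) @ (Ky + reg * np.eye(n))
    B = np.block([[Rx, np.zeros((n, n))], [np.zeros((n, n)), Ry]])
    vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
    top = np.argmax(vals.real)              # eigenvalues come in +/- rho pairs
    return vals[top].real, vecs[:n, top].real, vecs[n:, top].real

# Usage: paired "image" and "text" views generated from a shared latent factor.
rng = np.random.default_rng(1)
Z = rng.normal(size=(30, 2))
X = Z @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(30, 5))
Y = Z @ rng.normal(size=(2, 8)) + 0.1 * rng.normal(size=(30, 8))
rho, a, b = kcca(X @ X.T, Y @ Y.T)          # linear kernels for simplicity
```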
Nonlinear Component Analysis as a Kernel Eigenvalue Problem. Neural Computation, Vol. 10, No. 5, pp. 1299-1319, 1998.
Bernhard Schölkopf, Alexander J. Smola, Klaus‐Robert Müller
A new method for performing a nonlinear form of principal component analysis is
proposed. By the use of integral operator kernel functions, one can efficiently
compute principal components in high-dimensional feature spaces, related to
input space by some nonlinear map—for instance, the space of all possible
five-pixel products in 16 × 16 images. We give the derivation of the method and
present ex...
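The kernel eigenvalue problem the abstract refers to is short to write down. Below is a sketch with a polynomial kernel, echoing the pixel-product example; the centering step, normalization, and names are ours.

```python
# Kernel PCA sketch: center the Gram matrix in feature space, take its top
# eigenvectors, and project the training points (illustrative choices ours).
import numpy as np

def kernel_pca(X, n_components=2, degree=2):
    n = X.shape[0]
    K = (X @ X.T + 1.0) ** degree                 # polynomial kernel (x.y + 1)^d
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one    # center in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]   # largest eigenvalues first
    vals, vecs = vals[idx], vecs[:, idx]
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))  # unit-norm feature directions
    return Kc @ alphas                            # nonlinear principal components

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
Z = kernel_pca(X)                                 # 50 x 2 component scores
```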
Regularization Theory and Neural Networks Architectures. Neural Computation, Vol. 7, No. 2, pp. 219-269, 1995.
Federico Girosi, Michael Jones, Tomaso Poggio
We had previously shown that regularization principles lead to approximation
schemes that are equivalent to networks with one layer of hidden units, called
regularization networks. In particular, standard smoothness functionals lead to
a subclass of regularization networks, the well-known radial basis function
approximation schemes. This paper shows that regularization networks encompass a
much b...
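As a concrete instance of the RBF subclass mentioned above, here is a sketch of a Gaussian RBF network with output weights fit by ridge regression; the data, the subsampled centers, and the regularization constant are illustrative assumptions.

```python
# A Gaussian radial basis function network: one hidden layer of fixed
# Gaussian units, linear output weights fit by regularized least squares.
import numpy as np

def rbf_design(X, centers, sigma):
    # Phi[i, j] = exp(-||x_i - c_j||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)

centers = X[rng.choice(80, 10, replace=False)]   # one unit per chosen center
Phi = rbf_design(X, centers, sigma=1.0)
lam = 1e-2                                       # the regularization parameter
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(10), Phi.T @ y)
y_hat = Phi @ w                                  # smooth regularized fit
```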
Neural Networks and the Bias/Variance Dilemma. Neural Computation, Vol. 4, No. 1, pp. 1-58, 1992.
Stuart Geman, Elie Bienenstock, René Doursat
Feedforward neural networks trained by error backpropagation are examples of
nonparametric regression estimators. We present a tutorial on nonparametric
inference and its relation to neural networks, and we use the statistical
viewpoint to highlight strengths and weaknesses of neural models. We illustrate
the main points with some recognition experiments involving artificial data as
well as handwritten...
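The dilemma rests on the standard pointwise decomposition of expected squared error for data y = f(x) + ε with noise variance σ²_ε; written out:

```latex
% Decomposition for an estimator \hat{f}_D trained on a random sample D:
\mathbb{E}\left[(y - \hat{f}_D(x))^2\right]
  = \underbrace{\bigl(f(x) - \mathbb{E}_D[\hat{f}_D(x)]\bigr)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_D\bigl[(\hat{f}_D(x) - \mathbb{E}_D[\hat{f}_D(x)])^2\bigr]}_{\text{variance}}
  + \sigma_\varepsilon^2
```

Flexible nonparametric estimators drive the bias term down at the price of a variance term that grows with model complexity, which is the trade-off the article examines.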
A Parallel Mixture of SVMs for Very Large Scale Problems. Neural Computation, Vol. 14, No. 5, pp. 1105-1114, 2002.
Ronan Collobert, Samy Bengio, Yoshua Bengio
Support vector machines (SVMs) are the state-of-the-art models for many
classification problems, but they suffer from the complexity of their training
algorithm, which is at least quadratic with respect to the number of examples.
Hence, it is hopeless to try to solve real-life problems having more than a few
hundred thousand examples with SVMs. This article proposes a new mixture of SVMs
that can...
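To see why a mixture helps with the quadratic training cost, here is a bare-bones divide-and-conquer stand-in: split the data, train one SVM per subset, and vote. The paper instead learns a neural-network gater to weight the experts; the scikit-learn SVC, toy data, and uniform voting below are our simplifications.

```python
# Divide-and-conquer stand-in for a parallel mixture of SVMs: each expert
# sees only n/k examples, so each training step costs about (n/k)^2, not n^2.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)          # a nonlinear toy problem

n_experts = 4
parts = np.array_split(rng.permutation(len(X)), n_experts)
experts = [SVC(kernel="rbf").fit(X[p], y[p]) for p in parts]

def predict(Xq):
    # Uniform vote over the experts (the paper learns the weighting instead).
    votes = np.mean([e.predict(Xq) for e in experts], axis=0)
    return (votes >= 0.5).astype(int)

acc = (predict(X) == y).mean()
```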
Training ν-Support Vector Classifiers: Theory and Algorithms. Neural Computation, Vol. 13, No. 9, pp. 2119-2147, 2001.
Chih-Chung Chang, Chih‐Jen Lin
The ν-support vector machine (ν-SVM) for classification proposed by Schölkopf,
Smola, Williamson, and Bartlett (2000) has the advantage of using a parameter ν
for controlling the number of support vectors. In this article, we investigate
the relation between ν-SVM and C-SVM in detail. We show that in general they are
two different problems with the same optimal solution set. Hence, we may expect
th...
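The role of ν is easy to see empirically. Assuming scikit-learn's NuSVC as a stand-in implementation (our choice, not the authors' code), ν lower-bounds the fraction of support vectors and upper-bounds the fraction of margin errors:

```python
# Sweep nu and watch the support-vector fraction track it from above.
import numpy as np
from sklearn.svm import NuSVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

for nu in (0.1, 0.3, 0.5):
    clf = NuSVC(nu=nu, kernel="rbf").fit(X, y)
    frac_sv = len(clf.support_) / len(X)      # indices of support vectors
    print(f"nu={nu:.1f}  fraction of support vectors={frac_sv:.2f}")  # >= nu
```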
The Transition to Perfect Generalization in Perceptrons. Neural Computation, Vol. 3, No. 3, pp. 386-401, 1991.
Eric B. Baum, Yuh‐Dauh Lyuu
Several recent papers (Gardner and Derrida 1989; Györgyi 1990; Sompolinsky et
al. 1990) have found, using methods of statistical physics, that a transition to
perfect generalization occurs in training a simple perceptron whose weights can
only take values ±1. We give a rigorous proof of such a phenomenon. That is, we
show, for α = 2.0821, that if at least αn examples are drawn from the uniform
distribution...
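A toy Monte Carlo illustration of the transition (our experiment, feasible only for small n by brute force): count the ±1-weight perceptrons that remain consistent with a ±1-weight teacher as α grows. The theory says that past α ≈ 2.08, essentially only the teacher survives as n → ∞; small n only shows the trend.

```python
# Brute-force count of ±1-weight students consistent with a ±1-weight
# teacher on alpha*n random ±1 examples (n odd so no dot product is zero).
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 11
teacher = rng.choice([-1, 1], size=n)
students = np.array(list(itertools.product([-1, 1], repeat=n)))  # all 2^n

for alpha in (0.5, 1.0, 1.5, 2.0, 2.5):
    m = int(alpha * n)
    X = rng.choice([-1, 1], size=(m, n))          # uniform ±1 inputs
    labels = np.sign(X @ teacher)                 # teacher's classifications
    ok = np.all(np.sign(students @ X.T) == labels, axis=1)
    print(f"alpha={alpha:.1f}  consistent students={ok.sum()}")
```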