Deep learning in neural networks: An overview

Neural Networks - Volume 61 - Pages 85-117 - 2015
Jürgen Schmidhuber1
1The Swiss AI Lab IDSIA, Istituto Dalle Molle di Studi sull'Intelligenza Artificiale, University of Lugano, Switzerland

Abstract

Keywords


References

Aberdeen, 2003

Abounadi, 2002, Learning algorithms for Markov decision processes with average cost, SIAM Journal on Control and Optimization, 40, 681, 10.1137/S0363012999361974

Akaike, 1970, Statistical predictor identification, Annals of the Institute of Statistical Mathematics, 22, 203, 10.1007/BF02506337

Akaike, 1973, Information theory and an extension of the maximum likelihood principle, 267

Akaike, 1974, A new look at the statistical model identification, IEEE Transactions on Automatic Control, 19, 716, 10.1109/TAC.1974.1100705

Allender, 1992, Application of time-bounded Kolmogorov complexity in complexity theory, 6

Almeida, L. B. (1987). A learning rule for asynchronous perceptrons with feedback in a combinatorial environment. In IEEE 1st international conference on neural networks, vol. 2 (pp. 609–618).

Almeida, 1997

Amari, 1967, A theory of adaptive pattern classifiers, IEEE Transactions on Electronic Computers, 16, 299, 10.1109/PGEC.1967.264666

Amari, 1998, Natural gradient works efficiently in learning, Neural Computation, 10, 251, 10.1162/089976698300017746

Amari, 1996, A new learning algorithm for blind signal separation

Amari, 1993, Statistical theory of learning curves under entropic loss criterion, Neural Computation, 5, 140, 10.1162/neco.1993.5.1.140

Amit, 1997, Dynamics of a recurrent network of spiking neurons before and following learning, Network: Computation in Neural Systems, 8, 373, 10.1088/0954-898X/8/4/003

An, 1996, The effects of adding noise during backpropagation training on a generalization performance, Neural Computation, 8, 643, 10.1162/neco.1996.8.3.643

Andrade, 1993, Evaluation of secondary structure of proteins from UV circular dichroism spectra using an unsupervised learning neural network, Protein Engineering, 6, 383, 10.1093/protein/6.4.383

Andrews, 1995, Survey and critique of techniques for extracting rules from trained artificial neural networks, Knowledge-Based Systems, 8, 373, 10.1016/0950-7051(96)81920-4

Anguita, 1996, Mixing floating- and fixed-point formats for neural network learning on neuroprocessors, Microprocessing and Microprogramming, 41, 757, 10.1016/0165-6074(96)00012-9

Anguita, 1994, An efficient implementation of BP on RISC-based workstations, Neurocomputing, 6, 57, 10.1016/0925-2312(94)90034-5

Arel, 2010, Deep machine learning—a new frontier in artificial intelligence research, IEEE Computational Intelligence Magazine, 5, 13, 10.1109/MCI.2010.938364

Ash, 1989, Dynamic node creation in backpropagation neural networks, Connection Science, 1, 365, 10.1080/09540098908915647

Atick, 1992, Understanding retinal color coding from first principles, Neural Computation, 4, 559, 10.1162/neco.1992.4.4.559

Atiya, 2000, New results on recurrent network training: unifying the algorithms and accelerating convergence, IEEE Transactions on Neural Networks, 11, 697, 10.1109/72.846741

Ba, 2013, Adaptive dropout for training deep neural networks, 3084

Baird, H. (1990). Document image defect models. In Proceedings, IAPR workshop on syntactic and structural pattern recognition.

Baird, L. C. (1995). Residual algorithms: Reinforcement learning with function approximation. In International conference on machine learning (pp. 30–37).

Baird, 1999, Gradient descent for general reinforcement learning, 968

Bakker, 2002, Reinforcement learning with long short-term memory, 1475

Bakker, 2004, Hierarchical reinforcement learning based on subgoal discovery and subpolicy specialization, 438

Bakker, B., Zhumatiy, V., Gruener, G., & Schmidhuber, J. (2003). A robot that reinforcement-learns to identify and memorize important previous observations. In Proceedings of the 2003 IEEE/RSJ international conference on intelligent robots and systems (pp. 430–435).

Baldi, 1995, Gradient descent learning algorithms overview: A general dynamical systems perspective, IEEE Transactions on Neural Networks, 6, 182, 10.1109/72.363438

Baldi, 2012, Autoencoders, unsupervised learning, and deep architectures, Journal of Machine Learning Research, 27, 37

Baldi, 1999, Exploiting the past and the future in protein secondary structure prediction, Bioinformatics, 15, 937, 10.1093/bioinformatics/15.11.937

Baldi, 1993, Neural networks for fingerprint recognition, Neural Computation, 5, 402, 10.1162/neco.1993.5.3.402

Baldi, 1996, Hybrid modeling, HMM/NN architectures, and protein applications, Neural Computation, 8, 1541, 10.1162/neco.1996.8.7.1541

Baldi, 1989, Neural networks and principal component analysis: learning from examples without local minima, Neural Networks, 2, 53, 10.1016/0893-6080(89)90014-2

Baldi, 1995, Learning in linear networks: a survey, IEEE Transactions on Neural Networks, 6, 837, 10.1109/72.392248

Baldi, 2003, The principled design of large-scale recursive neural network architectures—DAG-RNNs and the protein structure prediction problem, Journal of Machine Learning Research, 4, 575

Baldi, 2014, The dropout learning algorithm, Artificial Intelligence, 210C, 78, 10.1016/j.artint.2014.02.004

Ballard, D. H. (1987). Modular learning in neural networks. In Proc. AAAI (pp. 279–284).

Baluja, 1994

Balzer, 1985, A 15 year perspective on automatic programming, IEEE Transactions on Software Engineering, 11, 1257, 10.1109/TSE.1985.231877

Barlow, 1989, Unsupervised learning, Neural Computation, 1, 295, 10.1162/neco.1989.1.3.295

Barlow, 1989, Finding minimum entropy codes, Neural Computation, 1, 412, 10.1162/neco.1989.1.3.412

Barrow, 1987, Learning receptive fields, 115

Barto, 2003, Recent advances in hierarchical reinforcement learning, Discrete Event Dynamic Systems, 13, 341, 10.1023/A:1025696116075

Barto, 2004, Intrinsically motivated learning of hierarchical collections of skills, 112

Barto, 1983, Neuronlike adaptive elements that can solve difficult learning control problems, IEEE Transactions on Systems, Man and Cybernetics, SMC-13, 834, 10.1109/TSMC.1983.6313077

Battiti, 1989, Accelerated backpropagation learning: two optimization methods, Complex Systems, 3, 331

Battiti, 1992, First- and second-order methods for learning: between steepest descent and Newton’s method, Neural Computation, 4, 141, 10.1162/neco.1992.4.2.141

Baum, 1989, What size net gives valid generalization?, Neural Computation, 1, 151, 10.1162/neco.1989.1.1.151

Baum, 1966, Statistical inference for probabilistic functions of finite state Markov chains, The Annals of Mathematical Statistics, 1554, 10.1214/aoms/1177699147

Baxter, 2001, Infinite-horizon policy-gradient estimation, Journal of Artificial Intelligence Research, 15, 319, 10.1613/jair.806

Bayer, J., & Osendorfer, C. (2014). Variational inference of latent state sequences using recurrent networks. ArXiv Preprint arXiv:1406.1655.

Bayer, J., Osendorfer, C., Chen, N., Urban, S., & van der Smagt, P. (2013). On fast dropout and its applicability to recurrent networks. ArXiv Preprint arXiv:1311.0701.

Bayer, J., Wierstra, D., Togelius, J., & Schmidhuber, J. (2009). Evolving memory cell structures for sequence learning. In Proc. ICANN (2) (pp. 755–764).

Bayes, 1763, An essay toward solving a problem in the doctrine of chances, Philosophical Transactions of the Royal Society of London, 53, 370, 10.1098/rstl.1763.0053

Becker, 1991, Unsupervised learning procedures for neural networks, International Journal of Neural Systems, 2, 17, 10.1142/S0129065791000030

Becker, 1989, Improving the convergence of back-propagation learning with second order methods, 29

Behnke, S. (1999). Hebbian learning and competition in the neural abstraction pyramid. In Proceedings of the international joint conference on neural networks, vol. 2 (pp. 1356–1361).

Behnke, 2001, Learning iterative image reconstruction in the neural abstraction pyramid, International Journal of Computational Intelligence and Applications, 1, 427, 10.1142/S1469026801000342

Behnke, S. (2002). Learning face localization using hierarchical recurrent networks. In Proceedings of the 12th international conference on artificial neural networks (pp. 1319–1324).

Behnke, S. (2003a). Discovering hierarchical speech features using convolutional non-negative matrix factorization. In Proceedings of the international joint conference on neural networks, vol. 4 (pp. 2758–2763).

Behnke, 2003, Vol. 2766

Behnke, 2005, Face localization and tracking in the neural abstraction pyramid, Neural Computing and Applications, 14, 97, 10.1007/s00521-004-0444-x

Behnke, S., & Rojas, R. (1998). Neural abstraction pyramid: a hierarchical image understanding architecture. In Proceedings of international joint conference on neural networks, vol. 2 (pp. 820–825).

Bell, 1995, An information-maximization approach to blind separation and blind deconvolution, Neural Computation, 7, 1129, 10.1162/neco.1995.7.6.1129

Bellman, 1957

Belouchrani, 1997, A blind source separation technique using second-order statistics, IEEE Transactions on Signal Processing, 45, 434, 10.1109/78.554307

Bengio, 1991

Bengio, 2009, Vol. 2(1)

Bengio, 2013, Representation learning: a review and new perspectives, IEEE Transactions on Pattern Analysis and Machine Intelligence, 35, 1798, 10.1109/TPAMI.2013.50

Bengio, 2007, Greedy layer-wise training of deep networks, 153

Bengio, 1994, Learning long-term dependencies with gradient descent is difficult, IEEE Transactions on Neural Networks, 5, 157, 10.1109/72.279181

Beringer, 2005, Classifying unprompted speech by retraining LSTM nets, Vol. 3696, 575

Bertsekas, 2001

Bertsekas, 1996

Bichot, 2005, Parallel and serial neural mechanisms for visual search in macaque area V4, Science, 308, 529, 10.1126/science.1109676

Biegler-König, 1993, A learning algorithm for multilayered neural networks based on linear least squares problems, Neural Networks, 6, 127, 10.1016/S0893-6080(05)80077-2

Bishop, 1993, Curvature-driven smoothing: A learning algorithm for feed-forward networks, IEEE Transactions on Neural Networks, 4, 882, 10.1109/72.248466

Bishop, 2006

Blair, 1997, Analysis of dynamical recognizers, Neural Computation, 9, 1127, 10.1162/neco.1997.9.5.1127

Blondel, 2000, A survey of computational complexity results in systems and control, Automatica, 36, 1249, 10.1016/S0005-1098(00)00050-9

Bluche, T., Louradour, J., Knibbe, M., Moysset, B., Benzeghiba, F., & Kermorvant, C. (2014). The A2iA Arabic handwritten text recognition system at the OpenHaRT2013 evaluation. In International workshop on document analysis systems.

Blum, 1992, Training a 3-node neural network is NP-complete, Neural Networks, 5, 117, 10.1016/S0893-6080(05)80010-3

Blumer, 1987, Occam’s razor, Information Processing Letters, 24, 377, 10.1016/0020-0190(87)90114-1

Bobrowski, 1978, Learning processes in multilayer threshold nets, Biological Cybernetics, 31, 1, 10.1007/BF00337365

Bodén, 2000, Context-free and context-sensitive dynamics in recurrent neural networks, Connection Science, 12, 197, 10.1080/095400900750060122

Bodenhausen, 1991, The Tempo 2 algorithm: adjusting time-delays by supervised learning, 155

Bohte, 2002, Error-backpropagation in temporally encoded networks of spiking neurons, Neurocomputing, 48, 17, 10.1016/S0925-2312(01)00658-0

Boltzmann, 1909

Bottou, 1991

Bourlard, 1994

Boutilier, C., & Poole, D. (1996). Computing optimal policies for partially observable Markov decision processes using compact representations. In Proceedings of the AAAI.

Bradtke, 1996, Linear least-squares algorithms for temporal difference learning, Machine Learning, 22

Brafman, 2002, R-MAX—a general polynomial time algorithm for near-optimal reinforcement learning, Journal of Machine Learning Research, 3, 213

Brea, 2013, Matching recall and storage in sequence learning with spiking neural networks, The Journal of Neuroscience, 33, 9565, 10.1523/JNEUROSCI.4098-12.2013

Breiman, 1996, Bagging predictors, Machine Learning, 24, 123, 10.1007/BF00058655

Brette, 2007, Simulation of networks of spiking neurons: a review of tools and strategies, Journal of Computational Neuroscience, 23, 349, 10.1007/s10827-007-0038-6

Breuel, 2013, High-performance OCR for printed English and Fraktur using LSTM networks, 683

Bromley, 1993, Signature verification using a Siamese time delay neural network, International Journal of Pattern Recognition and Artificial Intelligence, 7, 669, 10.1142/S0218001493000339

Broyden, 1965, A class of methods for solving nonlinear simultaneous equations, Mathematics of Computation, 19, 577, 10.1090/S0025-5718-1965-0198670-6

Brueckner, R., & Schuller, B. (2014). Social signal classification using deep BLSTM recurrent neural networks. In Proceedings 39th IEEE international conference on acoustics, speech, and signal processing (pp. 4856–4860).

Brunel, 2000, Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons, Journal of Computational Neuroscience, 8, 183, 10.1023/A:1008925309027

Bryson, A. E. (1961). A gradient method for optimizing multi-stage allocation processes. In Proc. Harvard Univ. symposium on digital computers and their applications.

Bryson Jr., 1961

Bryson, 1969

Buhler, 2001, Efficient large-scale sequence comparison by locality-sensitive hashing, Bioinformatics, 17, 419, 10.1093/bioinformatics/17.5.419

Buntine, 1991, Bayesian back-propagation, Complex Systems, 5, 603

Burgess, 1994, A constructive algorithm that converges for real-valued input patterns, International Journal of Neural Systems, 5, 59, 10.1142/S0129065794000074

Cardoso, J.-F. (1994). On the performance of orthogonal source separation algorithms. In Proc. EUSIPCO (pp. 776–779).

Carreira-Perpinan, 2001

Carter, 1990, Operational fault tolerance of CMAC networks, 340

Caruana, 1997, Multitask learning, Machine Learning, 28, 41, 10.1023/A:1007379606734

Casey, 1996, The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction, Neural Computation, 8, 1135, 10.1162/neco.1996.8.6.1135

Cauwenberghs, 1993, A fast stochastic error-descent algorithm for supervised learning and optimization, 244

Chaitin, 1966, On the length of programs for computing finite binary sequences, Journal of the ACM, 13, 547, 10.1145/321356.321363

Chalup, 2003, Incremental training of first order recurrent neural networks to predict a context-sensitive language, Neural Networks, 16, 955, 10.1016/S0893-6080(03)00054-6

Chellapilla, K., Puri, S., & Simard, P. (2006). High performance convolutional neural networks for document processing. In International workshop on Frontiers in handwriting recognition.

Chen, 2011, Learning speaker-specific characteristics with a deep neural architecture, IEEE Transactions on Neural Networks, 22, 1744, 10.1109/TNN.2011.2167240

Cho, 2014

Cho, 2012, Tikhonov-type regularization for restricted Boltzmann machines, 81

Cho, 2013, Enhanced gradient for training restricted Boltzmann machines, Neural Computation, 25, 805, 10.1162/NECO_a_00397

Church, 1936, An unsolvable problem of elementary number theory, The American Journal of Mathematics, 58, 345, 10.2307/2371045

Ciresan, 2012, Deep neural networks segment neuronal membranes in electron microscopy images, 2852

Ciresan, D. C., Giusti, A., Gambardella, L. M., & Schmidhuber, J. (2013). Mitosis detection in breast cancer histology images with deep neural networks. In Proc. MICCAI, vol. 2 (pp. 411–418).

Ciresan, 2010, Deep big simple neural nets for handwritten digit recognition, Neural Computation, 22, 3207, 10.1162/NECO_a_00052

Ciresan, D. C., Meier, U., Masci, J., Gambardella, L. M., & Schmidhuber, J. (2011). Flexible, high performance convolutional neural networks for image classification. In Intl. joint conference on artificial intelligence (pp. 1237–1242).

Ciresan, D. C., Meier, U., Masci, J., & Schmidhuber, J. (2011). A committee of neural networks for traffic sign classification. In International joint conference on neural networks (pp. 1918–1921).

Ciresan, 2012, Multi-column deep neural network for traffic sign classification, Neural Networks, 32, 333, 10.1016/j.neunet.2012.02.023

Ciresan, D. C., Meier, U., & Schmidhuber, J. (2012a). Multi-column deep neural networks for image classification. In IEEE Conference on computer vision and pattern recognition. Long preprint arXiv:1202.2745v1  [cs.CV].

Ciresan, D. C., Meier, U., & Schmidhuber, J. (2012b). Transfer learning for Latin and Chinese characters with deep neural networks. In International joint conference on neural networks (pp. 1301–1306).

Ciresan, 2013

Cliff, 1993, Evolving recurrent dynamical networks for robot control, 428

Clune, 2013, The evolutionary origins of modularity, Proceedings of the Royal Society B: Biological Sciences, 280, 20122863, 10.1098/rspb.2012.2863

Clune, 2011, On the performance of indirect encoding across the continuum of regularity, IEEE Transactions on Evolutionary Computation, 15, 346, 10.1109/TEVC.2010.2104157

Coates, A., Huval, B., Wang, T., Wu, D. J., Ng, A. Y., & Catanzaro, B. (2013). Deep learning with COTS HPC systems. In Proc. international conference on machine learning.

Cochocki, 1993

Collobert, 2008, A unified architecture for natural language processing: deep neural networks with multitask learning, 160

Comon, 1994, Independent component analysis—a new concept?, Signal Processing, 36, 287, 10.1016/0165-1684(94)90029-9

Connor, 2007, Transformation of shape information in the ventral pathway, Current Opinion in Neurobiology, 17, 140, 10.1016/j.conb.2007.03.002

Connor, 1994, Recurrent neural networks and robust time series prediction, IEEE Transactions on Neural Networks, 5, 240, 10.1109/72.279188

Cook, 1971, The complexity of theorem-proving procedures, 151

Cramer, 1985, A representation for the adaptive generation of simple sequential programs

Craven, 1979, Smoothing noisy data with spline functions: estimating the correct degree of smoothing by the method of generalized cross-validation, Numerische Mathematik, 31, 377, 10.1007/BF01404567

Cuccu, 2011, Intrinsically motivated evolutionary search for vision-based reinforcement learning, 1

Dahl, 2013, Improving deep neural networks for LVCSR using rectified linear units and dropout, 8609

Dahl, 2012, Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition, IEEE Transactions on Audio, Speech and Language Processing, 20, 30, 10.1109/TASL.2011.2134090

D’Ambrosio, D. B., & Stanley, K. O. (2007). A novel generative encoding for exploiting neural network sensor and output geometry. In Proceedings of the conference on genetic and evolutionary computation (pp. 974–981).

Datar, 2004, Locality-sensitive hashing scheme based on p-stable distributions, 253

Dayan, 1993, Feudal reinforcement learning, 271

Dayan, 1996, Varieties of Helmholtz machine, Neural Networks, 9, 1385, 10.1016/S0893-6080(96)00009-3

Dayan, 1995, The Helmholtz machine, Neural Computation, 7, 889, 10.1162/neco.1995.7.5.889

Dayan, 1995, Competition and multiple cause models, Neural Computation, 7, 565, 10.1162/neco.1995.7.3.565

Deco, 1997, Non-linear feature extraction by redundancy reduction in an unsupervised stochastic neural network, Neural Networks, 10, 683, 10.1016/S0893-6080(96)00110-4

Deco, 2005, Neurodynamics of biased competition and cooperation for attention: a model with spiking neurons, Journal of Neurophysiology, 94, 295, 10.1152/jn.01095.2004

De Freitas, 2003

DeJong, 1986, Explanation-based learning: an alternative view, Machine Learning, 1, 145, 10.1007/BF00114116

DeMers, 1993, Non-linear dimensionality reduction, 580

Dempster, 1977, Maximum likelihood from incomplete data via the EM algorithm, Journal of the Royal Statistical Society B, 39, 10.1111/j.2517-6161.1977.tb01600.x

Deng, 2014

Desimone, 1984, Stimulus-selective properties of inferior temporal neurons in the macaque, The Journal of Neuroscience, 4, 2051, 10.1523/JNEUROSCI.04-08-02051.1984

de Souto, 1999, The loading problem for pyramidal neural networks, Electronic Journal on Mathematics of Computation

De Valois, 1982, Spatial frequency selectivity of cells in macaque visual cortex, Vision Research, 22, 545, 10.1016/0042-6989(82)90113-4

Deville, 1994, Logic program synthesis, Journal of Logic Programming, 19, 321, 10.1016/0743-1066(94)90029-9

de Vries, 1991, A theory for neural networks with time delays, 162

DiCarlo, 2012, How does the brain solve visual object recognition?, Neuron, 73, 415, 10.1016/j.neuron.2012.01.010

Dickmanns, E. D., Behringer, R., Dickmanns, D., Hildebrandt, T., Maurer, M., & Thomanek, F., et al. (1994). The seeing passenger car ’VaMoRs-P’. In Proc. int. symp. on intelligent vehicles (pp. 68–73).

Dickmanns, 1987

Dietterich, 2000, Ensemble methods in machine learning, 1

Dietterich, 2000, Hierarchical reinforcement learning with the MAXQ value function decomposition, Journal of Artificial Intelligence Research (JAIR), 13, 227, 10.1613/jair.639

Di Lena, 2012, Deep architectures for protein contact map prediction, Bioinformatics, 28, 2449, 10.1093/bioinformatics/bts475

Director, 1969, Automated network design—the frequency-domain case, IEEE Transactions on Circuit Theory, CT-16, 330, 10.1109/TCT.1969.1082967

Dittenbach, 2000, The growing hierarchical self-organizing map, 6015

Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., & Tzeng, E., et al. (2013). DeCAF: a deep convolutional activation feature for generic visual recognition. ArXiv Preprint arXiv:1310.1531.

Dorffner, G. (1996). Neural networks for time series processing. In Neural network world.

Doya, 2002, Multiple model-based reinforcement learning, Neural Computation, 14, 1347, 10.1162/089976602753712972

Dreyfus, 1962, The numerical solution of variational problems, Journal of Mathematical Analysis and Applications, 5, 30, 10.1016/0022-247X(62)90004-5

Dreyfus, 1973, The computational solution of optimal control problems with time lag, IEEE Transactions on Automatic Control, 18, 383, 10.1109/TAC.1973.1100330

Duchi, 2011, Adaptive subgradient methods for online learning and stochastic optimization, Journal of Machine Learning Research, 12, 2121

Egorova, A., Gloye, A., Göktekin, C., Liers, A., Luft, M., & Rojas, R., et al. (2004). FU-fighters small size 2004, team description. In RoboCup 2004 symposium: papers and team description papers. CD edition.

Elfwing, 2010, Free-energy based reinforcement learning for vision-based navigation with high-dimensional sensory inputs, 215

Eliasmith, 2013

Eliasmith, 2012, A large-scale model of the functioning brain, Science, 338, 1202, 10.1126/science.1225266

Elman, 1990, Finding structure in time, Cognitive Science, 14, 179, 10.1207/s15516709cog1402_1

Erhan, 2010, Why does unsupervised pre-training help deep learning?, Journal of Machine Learning Research, 11, 625

Escalante-B, 2013, How to solve classification and regression problems on high-dimensional data with a supervised extension of slow feature analysis, Journal of Machine Learning Research, 14, 3683

Eubank, 1988, Spline smoothing and nonparametric regression

Euler, L. (1744). Methodus inveniendi.

Eyben, F., Weninger, F., Squartini, S., & Schuller, B. (2013). Real-life voice activity detection with LSTM recurrent neural networks and an application to Hollywood movies. In Proc. 38th IEEE international conference on acoustics, speech, and signal processing (pp. 483–487).

Faggin, F. (1992). Neural network hardware. In International joint conference on neural networks, vol. 1 (p. 153).

Fahlman, 1988

Fahlman, 1991, The recurrent cascade-correlation learning algorithm, 190

Falconbridge, 2006, A simple Hebbian/anti-Hebbian network learns the sparse, independent components of natural images, Neural Computation, 18, 415, 10.1162/089976606775093891

Fan, Y., Qian, Y., Xie, F., & Soong, F. K. (2014). TTS synthesis with bidirectional LSTM based recurrent neural networks. In Proc. Interspeech.

Farabet, 2013, Learning hierarchical features for scene labeling, IEEE Transactions on Pattern Analysis and Machine Intelligence, 35, 1915, 10.1109/TPAMI.2012.231

Farlow, 1984

Feldkamp, 1998, Enhanced multi-stream Kalman filter training for recurrent networks, 29

Feldkamp, 2003, Simple and conditioned adaptive behavior from Kalman filter trained recurrent networks, Neural Networks, 16, 683, 10.1016/S0893-6080(03)00127-8

Feldkamp, 1998, A signal processing framework based on dynamic neural networks with application to problems in adaptation, filtering, and classification, Proceedings of the IEEE, 86, 2259, 10.1109/5.726790

Felleman, 1991, Distributed hierarchical processing in the primate cerebral cortex, Cerebral Cortex, 1, 1, 10.1093/cercor/1.1.1

Fernández, S., Graves, A., & Schmidhuber, J. (2007a). An application of recurrent neural networks to discriminative keyword spotting. In Proc. ICANN (2) (pp. 220–229).

Fernandez, S., Graves, A., & Schmidhuber, J. (2007b). Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proceedings of the 20th international joint conference on artificial intelligence.

Fernandez, R., Rendel, A., Ramabhadran, B., & Hoory, R. (2014). Prosody contour prediction with long short-term memory, bi-directional, deep recurrent neural networks. In Proc. Interspeech.

Field, 1987, Relations between the statistics of natural images and the response properties of cortical cells, Journal of the Optical Society of America, 4, 2379, 10.1364/JOSAA.4.002379

Field, 1994, What is the goal of sensory coding?, Neural Computation, 6, 559, 10.1162/neco.1994.6.4.559

Fieres, J., Schemmel, J., & Meier, K. (2008). Realizing biological spiking network models in a configurable wafer-scale hardware system. In IEEE International joint conference on neural networks (pp. 969–976).

Fine, 1998, The hierarchical hidden Markov model: analysis and applications, Machine Learning, 32, 41, 10.1023/A:1007469218079

Fischer, 2014, Training restricted Boltzmann machines: an introduction, Pattern Recognition, 47, 25, 10.1016/j.patcog.2013.05.025

FitzHugh, 1961, Impulses and physiological states in theoretical models of nerve membrane, Biophysical Journal, 1, 445, 10.1016/S0006-3495(61)86902-6

Fletcher, 1963, A rapidly convergent descent method for minimization, The Computer Journal, 6, 163, 10.1093/comjnl/6.2.163

Floreano, 2001, Evolution of spiking neural controllers for autonomous vision-based robots, 38

Fogel, 1990, Evolving neural networks, Biological Cybernetics, 63, 487, 10.1007/BF00199581

Fogel, 1966

Földiák, 1990, Forming sparse representations by local anti-Hebbian learning, Biological Cybernetics, 64, 165, 10.1007/BF02331346

Földiák, 1995, Sparse coding in the primate cortex, 895

Förster, A., Graves, A., & Schmidhuber, J. (2007). RNN-based learning of compact maps for efficient robot localization. In 15th European symposium on artificial neural networks (pp. 537–542).

Franzius, 2007, Slowness and sparseness lead to place, head-direction, and spatial-view cells, PLoS Computational Biology, 3, 166, 10.1371/journal.pcbi.0030166

Friedman, J., Hastie, T., & Tibshirani, R. (2001). Springer series in statistics: Vol. 1. The elements of statistical learning. New York.

Frinken, 2012, Long-short term memory neural networks language modeling for handwriting recognition, 701

Fritzke, 1994, A growing neural gas network learns topologies, 625

Fu, 1977

Fukada, 1999, Phoneme boundary estimation using bidirectional recurrent neural networks and its applications, Systems and Computers in Japan, 30, 20, 10.1002/(SICI)1520-684X(199904)30:4<20::AID-SCJ3>3.0.CO;2-E

Fukushima, 1979, Neural network model for a mechanism of pattern recognition unaffected by shift in position—Neocognitron, Transactions of the IECE, J62-A, 658

Fukushima, 1980, Neocognitron: A self-organizing neural network for a mechanism of pattern recognition unaffected by shift in position, Biological Cybernetics, 36, 193, 10.1007/BF00344251

Fukushima, 2011, Increasing robustness against background noise: visual pattern recognition by a neocognitron, Neural Networks, 24, 767, 10.1016/j.neunet.2011.03.017

Fukushima, 2013, Artificial vision by multi-layered neural networks: neocognitron and its advances, Neural Networks, 37, 103, 10.1016/j.neunet.2012.09.016

Fukushima, 2013, Training multi-layered neural network neocognitron, Neural Networks, 40, 18, 10.1016/j.neunet.2013.01.001

Gabor, 1946, Theory of communication. Part 1: the analysis of information, Journal of the Institution of Electrical Engineers - Part III: Radio and Communication Engineering, 93, 429

Gallant, 1988, Connectionist expert systems, Communications of the ACM, 31, 152, 10.1145/42372.42377

Gauss, C. F. (1809). Theoria motus corporum coelestium in sectionibus conicis solem ambientium.

Gauss, C. F. (1821). Theoria combinationis observationum erroribus minimis obnoxiae (Theory of the combination of observations least subject to error).

Ge, 2010

Geiger, J. T., Zhang, Z., Weninger, F., Schuller, B., & Rigoll, G. (2014). Robust speech recognition using long short-term memory recurrent neural networks for hybrid acoustic modelling. In Proc. interspeech.

Geman, 1992, Neural networks and the bias/variance dilemma, Neural Computation, 4, 1, 10.1162/neco.1992.4.1.1

Gers, 2000, Recurrent nets that time and count, 189

Gers, 2001, LSTM recurrent networks learn simple context free and context sensitive languages, IEEE Transactions on Neural Networks, 12, 1333, 10.1109/72.963769

Gers, 2000, Learning to forget: continual prediction with LSTM, Neural Computation, 12, 2451, 10.1162/089976600300015015

Gers, 2002, Learning precise timing with LSTM recurrent networks, Journal of Machine Learning Research, 3, 115

Gerstner, 2002

Gerstner, 1992, Associative memory in a network of spiking neurons, Network: Computation in Neural Systems, 3, 139, 10.1088/0954-898X/3/2/004

Ghavamzadeh, M., & Mahadevan, S. (2003). Hierarchical policy gradient algorithms. In Proceedings of the twentieth conference on machine learning (pp. 226–233).

Gherrity, M. (1989). A learning algorithm for analog fully recurrent neural networks. In IEEE/INNS International joint conference on neural networks, San Diego, vol. 1 (pp. 643–644).

Girshick, 2013

Gisslen, 2011, Sequential constant size compressor for reinforcement learning, 31

Giusti, A., Ciresan, D. C., Masci, J., Gambardella, L. M., & Schmidhuber, J. (2013). Fast image scanning with deep max-pooling convolutional neural networks. In Proc. ICIP.

Glackin, 2005, A novel approach for the implementation of large scale spiking neural networks on FPGA hardware, 552

Glasmachers, 2010, Exponential natural evolution strategies, 393

Glorot, X., Bordes, A., & Bengio, Y. (2011). Deep sparse rectifier networks. In AISTATS, vol. 15 (pp. 315–323).

Gloye, 2005, Reinforcing the driving quality of soccer playing robots by anticipation, IT—Information Technology, 47, 10.1524/itit.2005.47.5_2005.250

Gödel, 1931, Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I, Monatshefte für Mathematik und Physik, 38, 173, 10.1007/BF01700692

Goldberg, 1989

Goldfarb, 1970, A family of variable-metric methods derived by variational means, Mathematics of Computation, 24, 23, 10.1090/S0025-5718-1970-0258249-6

Golub, 1979, Generalized cross-validation as a method for choosing a good ridge parameter, Technometrics, 21, 215, 10.1080/00401706.1979.10489751

Gomez, 2003

Gomez, F. J., & Miikkulainen, R. (2003). Active guidance for a finless rocket using neuroevolution. In Proc. GECCO 2003.

Gomez, 2005, Co-evolving recurrent neurons learn deep memory POMDPs

Gomez, 2008, Accelerated neural evolution through cooperatively coevolved synapses, Journal of Machine Learning Research, 9, 937

Gomi, 1993, Neural network control for a closed-loop system using feedback-error-learning, Neural Networks, 6, 933, 10.1016/S0893-6080(09)80004-X

Gonzalez-Dominguez, J., Lopez-Moreno, I., Sak, H., Gonzalez-Rodriguez, J., & Moreno, P. J. (2014). Automatic language identification using long short-term memory recurrent neural networks. In Proc. Interspeech.

Goodfellow, I. J., Bulatov, Y., Ibarz, J., Arnoud, S., & Shet, V. (2014). Multi-digit number recognition from street view imagery using deep convolutional neural networks. ArXiv Preprint arXiv:1312.6082v4.

Goodfellow, I. J., Courville, A., & Bengio, Y. (2011). Spike-and-slab sparse coding for unsupervised feature discovery. In NIPS Workshop on challenges in learning hierarchical models.

Goodfellow, I. J., Courville, A. C., & Bengio, Y. (2012). Large-scale feature learning with spike-and-slab sparse coding. In Proceedings of the 29th international conference on machine learning.

Goodfellow, 2014

Goodfellow, I. J., Warde-Farley, D., Mirza, M., Courville, A., & Bengio, Y. (2013). Maxout networks. In International conference on machine learning.

Graves, 2011, Practical variational inference for neural networks, 2348

Graves, A., Eck, D., Beringer, N., & Schmidhuber, J. (2003). Isolated digit recognition with LSTM recurrent networks. In First international workshop on biologically inspired approaches to advanced information technology.

Graves, A., Fernandez, S., Gomez, F. J., & Schmidhuber, J. (2006). Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural nets. In ICML’06: Proceedings of the 23rd international conference on machine learning (pp. 369–376).

Graves, 2008, Unconstrained on-line handwriting recognition with recurrent neural networks, 577

Graves, A., & Jaitly, N. (2014). Towards end-to-end speech recognition with recurrent neural networks. In Proc. 31st International conference on machine learning (pp. 1764–1772).

Graves, 2009, A novel connectionist system for improved unconstrained handwriting recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, 31, 10.1109/TPAMI.2008.137

Graves, 2013, Speech recognition with deep recurrent neural networks, 6645

Graves, 2005, Framewise phoneme classification with bidirectional LSTM and other neural network architectures, Neural Networks, 18, 602, 10.1016/j.neunet.2005.06.042

Graves, 2009, Offline handwriting recognition with multidimensional recurrent neural networks, 545

Graziano, 2009

Griewank, A. (2012). Documenta Mathematica—Extra Volume ISMP, (pp. 389–400).

Grondman, 2012, A survey of actor-critic reinforcement learning: standard and natural policy gradients, IEEE Transactions on Systems, Man, and Cybernetics Part C: Applications and Reviews, 42, 1291, 10.1109/TSMCC.2012.2218595

Grossberg, 1969, Some networks that can learn, remember, and reproduce any number of complicated space–time patterns, I, Journal of Mathematics and Mechanics, 19, 53

Grossberg, 1976, Adaptive pattern classification and universal recoding, 1: parallel development and coding of neural feature detectors, Biological Cybernetics, 23, 187, 10.1007/BF00344744

Grossberg, 1976, Adaptive pattern classification and universal recoding, 2: feedback, expectation, olfaction, and illusions, Biological Cybernetics, 23, 10.1007/BF00340335

Gruau, 1996

Grünwald, 2005

Grüttner, 2010, Multi-dimensional deep memory atari-go players for parameter exploring policy gradients, 114

Guo, 2014, Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning

Guyon, 1992, Structural risk minimization for character recognition, 471

Hadamard, 1908

Hadsell, 2006, Dimensionality reduction by learning an invariant mapping

Hagras, H., Pounds-Cornish, A., Colley, M., Callaghan, V., & Clarke, G. (2004). Evolving spiking neural network controllers for autonomous robots. In IEEE International conference on robotics and automation, vol. 5 (pp. 4620–4626).

Hansen, 2003, Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES), Evolutionary Computation, 11, 1, 10.1162/106365603321828970

Hansen, 2001, Completely derandomized self-adaptation in evolution strategies, Evolutionary Computation, 9, 159, 10.1162/106365601750190398

Hanson, 1990, A stochastic version of the delta rule, Physica D: Nonlinear Phenomena, 42, 265, 10.1016/0167-2789(90)90081-Y

Hanson, 1989, Comparing biases for minimal network construction with back-propagation, 177

Happel, 1994, Design and evolution of modular neural network architectures, Neural Networks, 7, 985, 10.1016/S0893-6080(05)80155-8

Hashem, 1992, Improving model accuracy using optimal linear combinations of trained neural networks, IEEE Transactions on Neural Networks, 6, 792, 10.1109/72.377990

Hassibi, 1993, Second order derivatives for network pruning: optimal brain surgeon, 164

Hastie, 1990, Vol. 43

Hastie, 2009

Hawkins, 2006

Haykin, 2001

Hebb, 1949

Hecht-Nielsen, 1989, Theory of the backpropagation neural network, 593

Heemskerk, 1995, Overview of neural hardware

Heess, N., Silver, D., & Teh, Y. W. (2012). Actor-critic reinforcement learning with energy-based policies. In Proc. European workshop on reinforcement learning (pp. 43–57).

Heidrich-Meisner, 2009, Neuroevolution strategies for episodic reinforcement learning, Journal of Algorithms, 64, 152, 10.1016/j.jalgor.2009.04.002

Herrero, 2001, A hierarchical unsupervised growing neural network for clustering gene expression patterns, Bioinformatics, 17, 126, 10.1093/bioinformatics/17.2.126

Hertz, 1991

Hestenes, 1952, Methods of conjugate gradients for solving linear systems, Journal of Research of the National Bureau of Standards, 49, 409, 10.6028/jres.049.044

Hihi, 1996, Hierarchical recurrent neural networks for long-term dependencies, 493

Hinton, 1989, Connectionist learning procedures, Artificial Intelligence, 40, 185, 10.1016/0004-3702(89)90049-0

Hinton, 2002, Training products of experts by minimizing contrastive divergence, Neural Computation, 14, 1771, 10.1162/089976602760128018

Hinton, 1995, The wake-sleep algorithm for unsupervised neural networks, Science, 268, 1158, 10.1126/science.7761831

Hinton, 2012, Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups, IEEE Signal Processing Magazine, 29, 82, 10.1109/MSP.2012.2205597

Hinton, 1997, Generative models for discovering sparse distributed representations, Philosophical Transactions of the Royal Society B, 352, 1177, 10.1098/rstb.1997.0101

Hinton, 2006, A fast learning algorithm for deep belief nets, Neural Computation, 18, 1527, 10.1162/neco.2006.18.7.1527

Hinton, 2006, Reducing the dimensionality of data with neural networks, Science, 313, 504, 10.1126/science.1127647

Hinton, 1986, Learning and relearning in Boltzmann machines, 282

Hinton, 2012

Hinton, 1993, Keeping neural networks simple, 11

Hochreiter, 1991

Hochreiter, 2001, Gradient flow in recurrent nets: the difficulty of learning long-term dependencies

Hochreiter, S., & Obermayer, K. (2005). Sequence classification for protein analysis. In Snowbird workshop, Snowbird, Utah. Computational and Biological Learning Society.

Hochreiter, 1996, Bridging long time lags by weight guessing and Long Short-Term Memory, Vol. 37, 65

Hochreiter, 1997, Flat minima, Neural Computation, 9, 1, 10.1162/neco.1997.9.1.1

Hochreiter, 1997, Long short-term memory, Neural Computation, 9, 1735, 10.1162/neco.1997.9.8.1735

Hochreiter, 1999, Feature extraction through LOCOCODE, Neural Computation, 11, 679, 10.1162/089976699300016629

Hochreiter, 2001, Learning to learn using gradient descent, Vol. 2130, 87

Hodgkin, 1952, A quantitative description of membrane current and its application to conduction and excitation in nerve, The Journal of Physiology, 117, 500, 10.1113/jphysiol.1952.sp004764

Hoerzer, 2014, Emergence of complex computational structures from chaotic neural networks through reward-modulated Hebbian learning, Cerebral Cortex, 24, 677, 10.1093/cercor/bhs348

Holden, 1994

Holland, 1975

Honavar, 1988, A network of neuron-like units that learns to perceive by generation as well as reweighting of its links, 472

Honavar, 1993, Generative learning structures and processes for generalized connectionist networks, Information Sciences, 70, 75, 10.1016/0020-0255(93)90049-R

Hopfield, 1982, Neural networks and physical systems with emergent collective computational abilities, Proceedings of the National Academy of Sciences, 79, 2554, 10.1073/pnas.79.8.2554

Hornik, 1989, Multilayer feedforward networks are universal approximators, Neural Networks, 2, 359, 10.1016/0893-6080(89)90020-8

Hubel, 1962, Receptive fields, binocular interaction, and functional architecture in the cat’s visual cortex, Journal of Physiology (London), 160, 106, 10.1113/jphysiol.1962.sp006837

Hubel, 1968, Receptive fields and functional architecture of monkey striate cortex, The Journal of Physiology, 195, 215, 10.1113/jphysiol.1968.sp008455

Huffman, 1952, A method for construction of minimum-redundancy codes, Proceedings IRE, 40, 1098, 10.1109/JRPROC.1952.273898

Hung, 2005, Fast readout of object identity from macaque inferior temporal cortex, Science, 310, 863, 10.1126/science.1117593

Hutter, 2002, The fastest and shortest algorithm for all well-defined problems, International Journal of Foundations of Computer Science, 13, 431, 10.1142/S0129054102001199

Hutter, 2005

Hyvärinen, 1999, Sparse code shrinkage: denoising by maximum likelihood estimation

Hyvärinen, 2001

ICPR (2012). Contest on Mitosis Detection in Breast Cancer Histological Images. IPAL laboratory, TRIBVN company, Pitié-Salpêtrière hospital, and CIALAB of Ohio State Univ. http://ipal.cnrs.fr/ICPR2012/.

Igel, 2003, Neuroevolution for reinforcement learning using evolution strategies, 2588

Igel, 2003, Empirical evaluation of the improved Rprop learning algorithm, Neurocomputing, 50, 105, 10.1016/S0925-2312(01)00700-7

Ikeda, 1976, Sequential GMDH algorithm and its application to river flow prediction, IEEE Transactions on Systems, Man and Cybernetics, 473, 10.1109/TSMC.1976.4309532

Indermuhle, 2012, Mode detection in online handwritten documents using BLSTM neural networks, 302

Indermuhle, 2011, Keyword spotting in online handwritten documents containing text and non-text using BLSTM neural networks, 73

Indiveri, 2011, Neuromorphic silicon neuron circuits, Frontiers in Neuroscience, 5

Ivakhnenko, 1968, The group method of data handling—a rival of the method of stochastic approximation, Soviet Automatic Control, 13, 43

Ivakhnenko, 1971, Polynomial theory of complex systems, IEEE Transactions on Systems, Man and Cybernetics, 364, 10.1109/TSMC.1971.4308320

Ivakhnenko, 1995, The review of problems solvable by algorithms of the group method of data handling (GMDH), Pattern Recognition and Image Analysis/Raspoznavaniye Obrazov I Analiz Izobrazhenii, 5, 527

Ivakhnenko, 1965

Ivakhnenko, 1967

Izhikevich, 2003, Simple model of spiking neurons, IEEE Transactions on Neural Networks, 14, 1569, 10.1109/TNN.2003.820440

Jaakkola, 1995, Reinforcement learning algorithm for partially observable Markov decision problems, 345

Jackel, L., Boser, B., Graf, H.-P., Denker, J., LeCun, Y., & Henderson, D., et al. (1990). VLSI implementation of electronic neural networks: an example in character recognition. In IEEE (Ed.), IEEE international conference on systems, man, and cybernetics (pp. 320–322).

Jacob, 1994, Genetic L-system programming

Jacobs, 1988, Increased rates of convergence through learning rate adaptation, Neural Networks, 1, 295, 10.1016/0893-6080(88)90003-2

Jaeger, 2001

Jaeger, 2004, Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication, Science, 304, 78, 10.1126/science.1091277

Jain, 2009, Natural image denoising with convolutional networks, 769

Jameson, 1991, Delayed reinforcement learning with multiple time scale hierarchical backpropagated adaptive critics

Ji, 2013, 3D convolutional neural networks for human action recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, 35, 221, 10.1109/TPAMI.2012.59

Jim, 1995, Effects of noise on convergence and generalization in recurrent networks, 649

Jin, 2010, Modeling spiking neural networks on SpiNNaker, Computing in Science and Engineering, 12, 91, 10.1109/MCSE.2010.112

Jodogne, 2007, Closed-loop learning of visual control policies, Journal of Artificial Intelligence Research, 28, 349, 10.1613/jair.2110

Jones, 1987, An evaluation of the two-dimensional Gabor filter model of simple receptive fields in cat striate cortex, Journal of Neurophysiology, 58, 1233, 10.1152/jn.1987.58.6.1233

Jordan, 1986

Jordan, 1988

Jordan, 1997, Serial order: a parallel distributed processing approach, Advances in Psychology, 121, 471, 10.1016/S0166-4115(97)80111-2

Jordan, 1990

Jordan, 2001

Joseph, 1961

Juang, 2004, A hybrid of genetic algorithm and particle swarm optimization for recurrent network design, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 34, 997, 10.1109/TSMCB.2003.818557

Judd, 1990

Jutten, 1991, Blind separation of sources, part I: an adaptive algorithm based on neuromimetic architecture, Signal Processing, 24, 1, 10.1016/0165-1684(91)90079-X

Kaelbling, 1995

Kaelbling, 1996, Reinforcement learning: A survey, Journal of AI Research, 4, 237

Kak, S., Chen, Y., & Wang, L. (2010). Data mining using surface and deep agents based on neural networks. In AMCIS 2010 proceedings.

Kalinke, 1998, Computation in recurrent neural networks: from counters to iterated function systems, Vol. 1502

Kalman, 1960, A new approach to linear filtering and prediction problems, Journal of Basic Engineering, 82, 35, 10.1115/1.3662552

Karhunen, 1995, Generalizations of principal component analysis, optimization problems, and neural networks, Neural Networks, 8, 549, 10.1016/0893-6080(94)00098-7

Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., & Fei-Fei, L. (2014). Large-scale video classification with convolutional neural networks. In IEEE conference on computer vision and pattern recognition.

Kasabov, 2014, Neucube: a spiking neural network architecture for mapping, learning and understanding of spatio-temporal brain data, Neural Networks, 10.1016/j.neunet.2014.01.006

Kelley, 1960, Gradient theory of optimal flight paths, ARS Journal, 30, 947, 10.2514/8.5282

Kempter, 1999, Hebbian learning and spiking neurons, Physical Review E, 59, 4498, 10.1103/PhysRevE.59.4498

Kerlirzin, 1993, Robustness in multilayer perceptrons, Neural Computation, 5, 473, 10.1162/neco.1993.5.3.473

Khan, S. H., Bennamoun, M., Sohel, F., & Togneri, R. (2014). Automatic feature learning for robust shadow detection. In IEEE conference on computer vision and pattern recognition.

Khan, M. M., Khan, G. M., & Miller, J. F. (2010). Evolution of neural networks using Cartesian Genetic Programming. In IEEE congress on evolutionary computation (pp. 1–8).

Khan, 2008, SpiNNaker: mapping neural networks onto a massively-parallel chip multiprocessor, 2849

Kimura, H., Miyazaki, K., & Kobayashi, S. (1997). Reinforcement learning in POMDPs with function approximation. In ICML, vol. 97 (pp. 152–160).

Kistler, 1997, Reduction of the Hodgkin–Huxley equations to a single-variable threshold model, Neural Computation, 9, 1015, 10.1162/neco.1997.9.5.1015

Kitano, 1990, Designing neural networks using genetic algorithms with graph generation system, Complex Systems, 4, 461

Klampfl, 2013, Emergence of dynamic memory traces in cortical microcircuit models through STDP, The Journal of Neuroscience, 33, 11515, 10.1523/JNEUROSCI.5044-12.2013

Klapper-Rybicka, 2001, Unsupervised learning in LSTM recurrent neural networks, Vol. 2130, 684

Kobatake, 1994, Neuronal selectivities to complex object features in the ventral visual pathway of the macaque cerebral cortex, Journal of Neurophysiology, 71, 856, 10.1152/jn.1994.71.3.856

Kohl, 2004, Policy gradient reinforcement learning for fast quadrupedal locomotion, 2619

Kohonen, 1972, Correlation matrix memories, IEEE Transactions on Computers, 100, 353, 10.1109/TC.1972.5008975

Kohonen, 1982, Self-organized formation of topologically correct feature maps, Biological Cybernetics, 43, 59, 10.1007/BF00337288

Kohonen, 1988

Koikkalainen, 1990, Self-organizing hierarchical feature maps, 279

Kolmogorov, 1965, On the representation of continuous functions of several variables by superposition of continuous functions of one variable and addition, Doklady Akademii Nauk SSSR, 114, 679

Kolmogorov, 1965, Three approaches to the quantitative definition of information, Problems of Information Transmission, 1, 1

Kompella, 2012, Incremental slow feature analysis: Adaptive low-complexity slow feature updating from high-dimensional input streams, Neural Computation, 24, 2994, 10.1162/NECO_a_00344

Kondo, 1998, GMDH neural network algorithm using the heuristic self-organization method and its application to the pattern identification problem, 1143

Kondo, 2008, Multi-layered GMDH-type neural network self-selecting optimum neural network architecture and its application to 3-dimensional medical image recognition of blood vessels, International Journal of Innovative Computing, Information and Control, 4, 175

Kordík, 2003, Modified GMDH method and models quality evaluation by visualization, Control Systems and Computers, 2, 68

Korkin, M., de Garis, H., Gers, F., & Hemmi, H. (1997). CBM (CAM-Brain Machine)—a hardware tool which evolves a neural net module in a fraction of a second and runs a million neuron artificial brain in real time.

Kosko, 1990, Unsupervised learning in noise, IEEE Transactions on Neural Networks, 1, 44, 10.1109/72.80204

Koutník, 2013, Evolving large-scale neural networks for vision-based reinforcement learning, 1061

Koutník, J., Gomez, F., & Schmidhuber, J. (2010). Evolving neural networks in compressed weight space. In Proceedings of the 12th annual conference on genetic and evolutionary computation (pp. 619–626).

Koutník, J., Greff, K., Gomez, F., & Schmidhuber, J. (2014). A clockwork RNN. In Proceedings of the 31th international conference on machine learning, vol. 32 (pp. 1845–1853). arXiv:1402.3511  [cs.NE].

Koza, 1992

Kramer, 1991, Nonlinear principal component analysis using autoassociative neural networks, AIChE Journal, 37, 233, 10.1002/aic.690370209

Kremer, 2001

Kriegeskorte, 2008, Matching categorical object representations in inferior temporal cortex of man and monkey, Neuron, 60, 1126, 10.1016/j.neuron.2008.10.043

Krizhevsky, 2012, Imagenet classification with deep convolutional neural networks, 4

Krogh, 1992, A simple weight decay can improve generalization, 950

Kruger, 2013, Deep hierarchies in the primate visual cortex: what can we learn for computer vision?, IEEE Transactions on Pattern Analysis and Machine Intelligence, 35, 1847, 10.1109/TPAMI.2012.272

Kullback, 1951, On information and sufficiency, The Annals of Mathematical Statistics, 79, 10.1214/aoms/1177729694

Kurzweil, 2012

Lagoudakis, 2003, Least-squares policy iteration, Journal of Machine Learning Research, 4, 1107

Lampinen, 1992, Clustering properties of hierarchical self-organizing maps, Journal of Mathematical Imaging and Vision, 2, 261, 10.1007/BF00118594

Lang, 1990, A time-delay neural network architecture for isolated word recognition, Neural Networks, 3, 23, 10.1016/0893-6080(90)90044-L

Lange, S., & Riedmiller, M. (2010). Deep auto-encoder neural networks in reinforcement learning. In Neural networks, The 2010 international joint conference on (pp. 1–8).

Lapedes, 1986, A self-optimizing, nonsymmetrical neural net for content addressable memory and pattern recognition, Physica D, 22, 247, 10.1016/0167-2789(86)90244-7

Laplace, 1774, Mémoire sur la probabilité des causes par les évènements, Mémoires de l’Academie Royale des Sciences Presentés par Divers Savan, 6, 621

Larrañaga, 2001

Le, Q. V., Ranzato, M., Monga, R., Devin, M., Corrado, G., & Chen, K., et al. (2012). Building high-level features using large scale unsupervised learning. In Proc. ICML’12.

LeCun, Y. (1985). Une procédure d’apprentissage pour réseau à seuil asymétrique. In Proceedings of cognitiva 85 (pp. 599–604).

LeCun, 1988, A theoretical framework for back-propagation, 21

LeCun, 1989, Back-propagation applied to handwritten zip code recognition, Neural Computation, 1, 541, 10.1162/neco.1989.1.4.541

LeCun, 1990, Handwritten digit recognition with a back-propagation network, 396

LeCun, 1998, Gradient-based learning applied to document recognition, Proceedings of the IEEE, 86, 2278, 10.1109/5.726791

LeCun, 1990, Optimal brain damage, 598

LeCun, 2006, Off-road obstacle avoidance through end-to-end learning

LeCun, 1993, Automatic learning rate maximization by on-line estimation of the Hessian’s eigenvectors

Lee, 1996

Lee, 2007, Efficient sparse coding algorithms, 801

Lee, 2007, Sparse deep belief net model for visual area V2, 873

Lee, H., Grosse, R., Ranganath, R., & Ng, A. Y. (2009). Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th international conference on machine learning (pp. 609–616).

Lee, 1991, A Gaussian potential function network with hierarchically self-organizing learning, Neural Networks, 4, 207, 10.1016/0893-6080(91)90005-P

Lee, H., Pham, P. T., Largman, Y., & Ng, A. Y. (2009). Unsupervised feature learning for audio classification using convolutional deep belief networks. In Proc. NIPS, vol. 9 (pp. 1096–1104).

Legendre, 1805

Legenstein, 2002, Neural circuits for pattern recognition with small total wire length, Theoretical Computer Science, 287, 239, 10.1016/S0304-3975(02)00097-X

Legenstein, 2010, Reinforcement learning on slow features of high-dimensional input streams, PLoS Computational Biology, 6, 10.1371/journal.pcbi.1000894

Leibniz, G. W. (1676). Memoir using the chain rule (cited in TMME 7:2&3 p. 321–332, 2010).

Leibniz, 1684, Nova methodus pro maximis et minimis, itemque tangentibus, quae nec fractas, nec irrationales quantitates moratur, et singulare pro illis calculi genus, Acta Eruditorum, 467

Lenat, 1983, Theory formation by heuristic search, Machine Learning, 21

Lenat, 1984, Why AM and EURISKO appear to work, Artificial Intelligence, 23, 269, 10.1016/0004-3702(84)90016-X

Lennie, 2005, Coding of color and form in the geniculostriate visual pathway, Journal of the Optical Society of America A, 22, 2013, 10.1364/JOSAA.22.002013

Levenberg, 1944, A method for the solution of certain problems in least squares, Quarterly of Applied Mathematics, 2, 164, 10.1090/qam/10666

Levin, 1973, On the notion of a random sequence, Soviet Mathematics Doklady, 14, 1413

Levin, 1973, Universal sequential search problems, Problems of Information Transmission, 9, 265

Levin, 1994, Fast pruning using principal components, 35

Levin, 1995, Control of nonlinear dynamical systems using neural networks. II. Observability, identification, and control, IEEE Transactions on Neural Networks, 7, 30, 10.1109/72.478390

Lewicki, 1998, Inferring sparse, overcomplete image codes using an efficient coding framework, 815

L’Hôpital, 1696

Li, 1997

Li, 2014, Deep learning based imaging data completion for improved brain disease diagnosis

Lin, 1993

Lin, 1996, Learning long-term dependencies in NARX recurrent neural networks, IEEE Transactions on Neural Networks, 7, 1329, 10.1109/72.548162

Lindenmayer, 1968, Mathematical models for cellular interaction in development, Journal of Theoretical Biology, 18, 280, 10.1016/0022-5193(68)90079-9

Lindstädt, 1993, Comparison of two unsupervised neural network models for redundancy reduction, 308

Linnainmaa, 1970

Linnainmaa, 1976, Taylor expansion of the accumulated rounding error, BIT Numerical Mathematics, 16, 146, 10.1007/BF01931367

Linsker, 1988, Self-organization in a perceptual network, IEEE Computer, 21, 105, 10.1109/2.36

Littman, 1995, Learning policies for partially observable environments: scaling up, 362

Liu, 2001, Orientation-selective aVLSI spiking neurons, Neural Networks, 14, 629, 10.1016/S0893-6080(01)00054-5

Ljung, 1998

Logothetis, 1995, Shape representation in the inferior temporal cortex of monkeys, Current Biology, 5, 552, 10.1016/S0960-9822(95)00108-4

Loiacono, 2011

Loiacono, D., Lanzi, P. L., Togelius, J., Onieva, E., Pelta, D. A., & Butz, M. V., et al. (2009). The 2009 simulated car racing championship.

Lowe, D. (1999). Object recognition from local scale-invariant features. In The Proceedings of the seventh IEEE international conference on computer vision, vol. 2 (pp. 1150–1157).

Lowe, 2004, Distinctive image features from scale-invariant key-points, International Journal of Computer Vision, 60, 91, 10.1023/B:VISI.0000029664.99615.94

Luciw, 2013, An intrinsic value system for developing multiple invariant representations with incremental slowness learning, Frontiers in Neurorobotics, 7

Lusci, 2013, Deep architectures and deep learning in chemoinformatics: the prediction of aqueous solubility for drug-like molecules, Journal of Chemical Information and Modeling, 53, 1563, 10.1021/ci400187y

Maas, A. L., Hannun, A. Y., & Ng, A. Y. (2013). Rectifier nonlinearities improve neural network acoustic models. In International conference on machine learning.

Maass, 1996, Lower bounds for the computational power of networks of spiking neurons, Neural Computation, 8, 1, 10.1162/neco.1996.8.1.1

Maass, 1997, Networks of spiking neurons: the third generation of neural network models, Neural Networks, 10, 1659, 10.1016/S0893-6080(97)00011-7

Maass, 2000, On the computational power of winner-take-all, Neural Computation, 12, 2519, 10.1162/089976600300014827

Maass, 2002, Real-time computing without stable states: A new framework for neural computation based on perturbations, Neural Computation, 14, 2531, 10.1162/089976602760407955

MacKay, 1992, A practical Bayesian framework for backprop networks, Neural Computation, 4, 448, 10.1162/neco.1992.4.3.448

MacKay, 1990, Analysis of Linsker’s simulation of Hebbian rules, Neural Computation, 2, 173, 10.1162/neco.1990.2.2.173

Maclin, 1993, Using knowledge-based neural networks to improve algorithms: Refining the Chou–Fasman algorithm for protein folding, Machine Learning, 11, 195, 10.1007/BF00993077

Maclin, R., & Shavlik, J. W. (1995). Combining the predictions of multiple classifiers: Using competitive learning to initialize neural networks. In Proc. IJCAI (pp. 524–531).

Madala, 1994

Madani, 2003, On the undecidability of probabilistic planning and related stochastic optimization problems, Artificial Intelligence, 147, 5, 10.1016/S0004-3702(02)00378-8

Maei, H. R., & Sutton, R. S. (2010). GQ(λ): A general gradient algorithm for temporal-difference prediction learning with eligibility traces. In Proceedings of the third conference on artificial general intelligence, vol. 1 (pp. 91–96).

Maex, 1996, Model circuit of spiking neurons generating directional selectivity in simple cells, Journal of Neurophysiology, 75, 1515, 10.1152/jn.1996.75.4.1515

Mahadevan, 1996, Average reward reinforcement learning: Foundations, algorithms, and empirical results, Machine Learning, 22, 159, 10.1007/BF00114727

Malik, 1990, Preattentive texture discrimination with early vision mechanisms, Journal of the Optical Society of America A, 7, 923, 10.1364/JOSAA.7.000923

Maniezzo, 1994, Genetic evolution of the topology and weight distribution of neural networks, IEEE Transactions on Neural Networks, 5, 39, 10.1109/72.265959

Manolios, 1994, First-order recurrent neural networks and deterministic finite state automata, Neural Computation, 6, 1155, 10.1162/neco.1994.6.6.1155

Marchi, E., Ferroni, G., Eyben, F., Gabrielli, L., Squartini, S., & Schuller, B. (2014). Multi-resolution linear prediction based features for audio onset detection with bidirectional LSTM neural networks. In Proc. 39th IEEE international conference on acoustics, speech, and signal processing (pp. 2183–2187).

Markram, 2012, The human brain project, Scientific American, 306, 50, 10.1038/scientificamerican0612-50

Marquardt, 1963, An algorithm for least-squares estimation of nonlinear parameters, Journal of the Society for Industrial & Applied Mathematics, 11, 431, 10.1137/0111030

Martens, 2010, Deep learning via Hessian-free optimization, 735

Martens, J., & Sutskever, I. (2011). Learning recurrent neural networks with Hessian-free optimization. In Proceedings of the 28th international conference on machine learning (pp. 1033–1040).

Martinetz, 1990, Three-dimensional neural net for learning visuomotor coordination of a robot arm, IEEE Transactions on Neural Networks, 1, 131, 10.1109/72.80212

Masci, J., Giusti, A., Ciresan, D. C., Fricout, G., & Schmidhuber, J. (2013). A fast learning algorithm for image segmentation with max-pooling convolutional networks. In International conference on image processing (pp. 2713–2717).

Matsuoka, 1992, Noise injection into inputs in back-propagation learning, IEEE Transactions on Systems, Man and Cybernetics, 22, 436, 10.1109/21.155944

Mayer, 2008, A system for robotic heart surgery that learns to tie knots using recurrent neural networks, Advanced Robotics, 22, 1521, 10.1163/156855308X360604

McCallum, 1996, Learning to use selective attention and short-term memory in sequential tasks, 315

McCulloch, 1943, A logical calculus of the ideas immanent in nervous activity, Bulletin of Mathematical Biophysics, 7, 115, 10.1007/BF02478259

Melnik, O., Levy, S. D., & Pollack, J. B. (2000). RAAM for infinite context-free languages. In Proc. IJCNN (5) (pp. 585–590).

Memisevic, 2010, Learning to represent spatial transformations with factored higher-order Boltzmann machines, Neural Computation, 22, 1473, 10.1162/neco.2010.01-09-953

Menache, I., Mannor, S., & Shimkin, N. (2002). Q-cut—dynamic discovery of sub-goals in reinforcement learning. In Proc. ECML’02 (pp. 295–306).

Merolla, 2014, A million spiking-neuron integrated circuit with a scalable communication network and interface, Science, 345, 668, 10.1126/science.1254642

Mesnil, G., Dauphin, Y., Glorot, X., Rifai, S., Bengio, Y., & Goodfellow, I., et al. (2011). Unsupervised and transfer learning challenge: a deep learning approach. In JMLR W&CP: proc. unsupervised and transfer learning, vol. 7.

Meuleau, N., Peshkin, L., Kim, K. E., & Kaelbling, L. P. (1999). Learning finite state controllers for partially observable environments. In 15th international conference on uncertainty in AI (pp. 427–436).

Miglino, 1995, Evolving mobile robots in simulated and real environments, Artificial Life, 2, 417, 10.1162/artl.1995.2.4.417

Miller, 1994, A model for the development of simple cell receptive fields and the ordered arrangement of orientation columns through activity-dependent competition between on- and off-center inputs, Journal of Neuroscience, 14, 409, 10.1523/JNEUROSCI.14-01-00409.1994

Miller, 2009, Cartesian genetic programming, 3489

Miller, 2000, Cartesian genetic programming, 121

Miller, 1989, Designing neural networks using genetic algorithms, 379

Miller, 1995

Minai, 1994, Perturbation response in feedforward networks, Neural Networks, 7, 783, 10.1016/0893-6080(94)90100-7

Minsky, 1963, Steps toward artificial intelligence, 406

Minsky, 1969

Minton, 1989, Explanation-based learning: A problem solving perspective, Artificial Intelligence, 40, 63, 10.1016/0004-3702(89)90047-7

Mitchell, 1997

Mitchell, 1986, Explanation-based generalization: A unifying view, Machine Learning, 1, 47, 10.1007/BF00116250

Mnih, 2013

Mohamed, A., & Hinton, G. E. (2010). Phone recognition using restricted Boltzmann machines. In IEEE international conference on acoustics, speech and signal processing (pp. 4354–4357).

Molgedey, 1994, Separation of independent signals using time-delayed correlations, Physical Review Letters, 72, 3634, 10.1103/PhysRevLett.72.3634

Møller, 1993

Montana, 1989, Training feedforward neural networks using genetic algorithms, 762

Montavon, 2012, Vol. 7700

Moody, 1989, Fast learning in multi-resolution hierarchies, 29

Moody, 1992, The effective number of parameters: An analysis of generalization and regularization in nonlinear learning systems, 847

Moody, 1994, Architecture selection strategies for neural networks: Application to corporate bond rating prediction

Moore, 1993, Prioritized sweeping: Reinforcement learning with less data and less time, Machine Learning, 13, 103, 10.1007/BF00993104

Moore, 1995, The parti-game algorithm for variable resolution reinforcement learning in multidimensional state-spaces, Machine Learning, 21, 199, 10.1007/BF00993591

Moriarty, 1997

Moriarty, 1996, Efficient reinforcement learning through symbiotic evolution, Machine Learning, 22, 11, 10.1007/BF00114722

Morimoto, 2000, Robust reinforcement learning, 1061

Mosteller, 1968, Data analysis, including statistics

Mozer, 1989, A focused back-propagation algorithm for temporal sequence recognition, Complex Systems, 3, 349

Mozer, 1991, Discovering discrete distributed representations with iterative competitive learning, 627

Mozer, 1992, Induction of multiscale temporal structure, 275

Mozer, 1989, Skeletonization: A technique for trimming the fat from a network via relevance assessment, 107

Muller, 1995, Fast neural net simulation with a DSP processor array, IEEE Transactions on Neural Networks, 6, 203, 10.1109/72.363436

Munro, P. W. (1987). A dual back-propagation scheme for scalar reinforcement learning. In Proceedings of the ninth annual conference of the cognitive science society (pp. 165–176).

Murray, 1993, Synaptic weight noise during MLP learning enhances fault-tolerance, generalisation and learning trajectory, 491

Nadal, 1994, Non-linear neurons in the low noise limit: a factorial code maximises information transfer, Network: Computation in Neural Systems, 5, 565, 10.1088/0954-898X/5/4/008

Nagumo, 1962, An active pulse transmission line simulating nerve axon, Proceedings of the IRE, 50, 2061, 10.1109/JRPROC.1962.288235

Nair, V., & Hinton, G. E. (2010). Rectified linear units improve restricted Boltzmann machines. In International conference on machine learning.

Narendra, 1990, Identification and control of dynamical systems using neural networks, IEEE Transactions on Neural Networks, 1, 4, 10.1109/72.80202

Narendra, 1974, Learning automata—a survey, IEEE Transactions on Systems, Man and Cybernetics, 4, 323, 10.1109/TSMC.1974.5408453

Neal, 1995

Neal, 2006, Classification with Bayesian neural networks, Vol. 3944, 28

Neal, 2006, High dimensional classification with Bayesian neural networks and Dirichlet diffusion trees, 265

Neftci, 2014, Event-driven contrastive divergence for spiking neuromorphic systems, Frontiers in Neuroscience, 7

Neil, 2014, Minitaur, an event-driven FPGA-based spiking network accelerator, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, PP, 1

Nessler, 2013, Bayesian computation emerges in generic cortical microcircuits through spike-timing-dependent plasticity, PLoS Computational Biology, 9, e1003037, 10.1371/journal.pcbi.1003037

Neti, 1992, Maximally fault tolerant neural networks, IEEE Transactions on Neural Networks, 3, 14, 10.1109/72.105414

Neuneier, 1996, How to train neural networks, Vol. 1524, 373

Newton, 1687

Nguyen, 1989, The truck backer-upper: An example of self learning in neural networks, 357

Nilsson, 1980

Nolfi, 1994, How to evolve autonomous robots: Different approaches in evolutionary robotics, 190

Nolfi, 1994, Learning and evolution in neural networks, Adaptive Behavior, 3, 5, 10.1177/105971239400300102

Nowak, 2006, Sampling strategies for bag-of-features image classification, 490

Nowlan, 1992, Simplifying neural networks by soft weight sharing, Neural Computation, 4, 473, 10.1162/neco.1992.4.4.473

O’Connor, 2013, Real-time classification and sensor fusion with a spiking deep belief network, Frontiers in Neuroscience, 7

Oh, 2004, GPU implementation of neural networks, Pattern Recognition, 37, 1311, 10.1016/j.patcog.2004.01.013

Oja, 1989, Neural networks, principal components, and subspaces, International Journal of Neural Systems, 1, 61, 10.1142/S0129065789000475

Oja, 1991, Data compression, feature extraction, and autoassociation in feedforward neural networks, 737

Olshausen, 1996, Emergence of simple-cell receptive field properties by learning a sparse code for natural images, Nature, 381, 607, 10.1038/381607a0

Omlin, 1996, Extraction of rules from discrete-time recurrent neural networks, Neural Networks, 9, 41, 10.1016/0893-6080(95)00086-0

Oquab, 2013

O’Reilly, 1996, Biologically plausible error-driven learning using local activation differences: The generalized recirculation algorithm, Neural Computation, 8, 895, 10.1162/neco.1996.8.5.895

O’Reilly, 2003

O’Reilly, 2013, Recurrent processing during object recognition, Frontiers in Psychology, 4, 124, 10.3389/fpsyg.2013.00124

Orr, 1998, Vol. 1524

Ostrovskii, 1971, Über die Berechnung von Ableitungen, Wissenschaftliche Zeitschrift der Technischen Hochschule für Chemie, 13, 382

Otsuka, 2010

Otsuka, M., Yoshimoto, J., & Doya, K. (2010). Free-energy-based reinforcement learning in a partially observable environment. In Proc. ESANN.

Otte, 2012, Local feature based online mode detection with recurrent neural networks, 533

Oudeyer, 2013, Intrinsically motivated learning of real world sensorimotor skills with developmental constraints

Pachitariu, M., & Sahani, M. (2013). Regularization and nonlinearities for neural language models: when are they needed? arXiv Preprint arXiv:1301.5650.

Palm, 1980, On associative memory, Biological Cybernetics, 36, 10.1007/BF00337019

Palm, 1992, On the information storage capacity of local learning rules, Neural Computation, 4, 703, 10.1162/neco.1992.4.5.703

Pan, 2010, A survey on transfer learning, The IEEE Transactions on Knowledge and Data Engineering, 22, 1345, 10.1109/TKDE.2009.191

Parekh, 2000, Constructive neural network learning algorithms for multi-category pattern classification, IEEE Transactions on Neural Networks, 11, 436, 10.1109/72.839013

Parker, 1985

Pascanu, R., Gulcehre, C., Cho, K., & Bengio, Y. (2013). How to construct deep recurrent neural networks. arXiv Preprint arXiv:1312.6026.

Pascanu, R., Mikolov, T., & Bengio, Y. (2013). On the difficulty of training recurrent neural networks. In ICML’13: JMLR: W&CP, vol. 28.

Pasemann, 1999, Evolving structure and function of neurocontrollers, 1973

Pearlmutter, 1989, Learning state space trajectories in recurrent neural networks, Neural Computation, 1, 263, 10.1162/neco.1989.1.2.263

Pearlmutter, 1994, Fast exact multiplication by the Hessian, Neural Computation, 6, 147, 10.1162/neco.1994.6.1.147

Pearlmutter, 1995, Gradient calculations for dynamic recurrent neural networks: A survey, IEEE Transactions on Neural Networks, 6, 1212, 10.1109/72.410363

Pearlmutter, B. A., & Hinton, G. E. (1986). G-maximization: An unsupervised learning procedure for discovering regularities. In Denker, J.S., (Ed.), Neural networks for computing: American institute of physics conference proceedings 151, vol. 2 (pp. 333–338).

Peng, 1996, Incremental multi-step Q-learning, Machine Learning, 22, 283, 10.1007/BF00114731

Pérez-Ortiz, 2003, Kalman filters improve LSTM network performance in problems unsolvable by traditional recurrent nets, Neural Networks, 16, 241, 10.1016/S0893-6080(02)00219-8

Perrett, 1992, Organization and functions of cells responsive to faces in the temporal cortex [and discussion], Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 335, 23, 10.1098/rstb.1992.0003

Perrett, 1982, Visual neurones responsive to faces in the monkey temporal cortex, Experimental Brain Research, 47, 329, 10.1007/BF00239352

Peters, 2010, Policy gradient methods, Scholarpedia, 5, 3698, 10.4249/scholarpedia.3698

Peters, 2008, Natural actor-critic, Neurocomputing, 71, 1180, 10.1016/j.neucom.2007.11.026

Peters, 2008, Reinforcement learning of motor skills with policy gradients, Neural Networks, 21, 682, 10.1016/j.neunet.2008.02.003

Pham, V., Kermorvant, C., & Louradour, J. (2013). Dropout improves recurrent neural networks for handwriting recognition. arXiv Preprint arXiv:1312.4569.

Pineda, 1987, Generalization of back-propagation to recurrent neural networks, Physical Review Letters, 59, 2229, 10.1103/PhysRevLett.59.2229

Plate, 1993, Holographic recurrent networks, 34

Plumbley, 1991

Pollack, J. B. (1988). Implications of recursive distributed representations. In Proc. NIPS (pp. 527–536).

Pollack, 1990, Recursive distributed representation, Artificial Intelligence, 46, 77, 10.1016/0004-3702(90)90005-K

Pontryagin, 1961

Poon, 2011, Sum–product networks: A new deep architecture, 689

Post, 1936, Finite combinatory processes-formulation 1, The Journal of Symbolic Logic, 1, 103, 10.2307/2269031

Prasoon, 2013, Voxel classification based on triplanar convolutional neural networks applied to cartilage segmentation in knee MRI, Vol. 8150, 246

Precup, 1998, Multi-time models for temporally abstract planning, 1050

Prokhorov, 2010, A convolutional learning system for object classification in 3-D LIDAR data, IEEE Transactions on Neural Networks, 21, 858, 10.1109/TNN.2010.2044802

Prokhorov, D. V., Feldkamp, L. A., & Tyukin, I. Y. (2002). Adaptive behavior with fixed weights in RNN: an overview. In Proceedings of the IEEE international joint conference on neural networks (pp. 2018–2023).

Prokhorov, 2001, Dynamical neural networks for control, 23

Prokhorov, 1997, Adaptive critic design, IEEE Transactions on Neural Networks, 8, 997, 10.1109/72.623201

Puskorius, 1994, Neurocontrol of nonlinear dynamical systems with Kalman filter trained recurrent networks, IEEE Transactions on Neural Networks, 5, 279, 10.1109/72.279191

Raiko, T., Valpola, H., & LeCun, Y. (2012). Deep learning made easier by linear transformations in perceptrons. In International conference on artificial intelligence and statistics (pp. 924–932).

Raina, 2009, Large-scale deep unsupervised learning using graphics processors, 873

Ramacher, 1993, Multiprocessor and memory architecture of the neurocomputer SYNAPSE-1, International Journal of Neural Systems, 4, 333, 10.1142/S0129065793000274

Ranzato, 2007, Unsupervised learning of invariant feature hierarchies with applications to object recognition, 1

Ranzato, 2006, Efficient learning of sparse representations with an energy-based model

Rauber, 2002, The growing hierarchical self-organizing map: exploratory analysis of high-dimensional data, IEEE Transactions on Neural Networks, 13, 1331, 10.1109/TNN.2002.804221

Razavian, A. S., Azizpour, H., Sullivan, J., & Carlsson, S. (2014). CNN features off-the-shelf: an astounding baseline for recognition. ArXiv Preprint arXiv:1403.6382.

Rechenberg, 1971

Redlich, 1993, Redundancy reduction as a strategy for unsupervised learning, Neural Computation, 5, 289, 10.1162/neco.1993.5.2.289

Refenes, 1994, Stock performance modeling using neural networks: a comparative study with regression models, Neural Networks, 7, 375, 10.1016/0893-6080(94)90030-2

Rezende, 2014, Stochastic variational learning in recurrent spiking networks, Frontiers in Computational Neuroscience, 8, 38

Riedmiller, 2005, Neural fitted Q iteration—first experiences with a data efficient neural reinforcement learning method, 317

Riedmiller, 1993, A direct adaptive method for faster backpropagation learning: The Rprop algorithm, 586

Riedmiller, M., Lange, S., & Voigtlaender, A. (2012). Autonomous reinforcement learning on raw visual input data in a real world application. In International joint conference on neural networks (pp. 1–8).

Riesenhuber, 1999, Hierarchical models of object recognition in cortex, Nature Neuroscience, 2, 1019, 10.1038/14819

Rifai, S., Vincent, P., Muller, X., Glorot, X., & Bengio, Y. (2011). Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the 28th international conference on machine learning (pp. 833–840).

Ring, 1991, Incremental development of complex behaviors through automatic construction of sensory-motor hierarchies, 343

Ring, 1993, Learning sequential tasks by incrementally adding higher orders, 115

Ring, 1994

Ring, M., Schaul, T., & Schmidhuber, J. (2011). The two-dimensional organization of behavior. In Proceedings of the first joint conference on development learning and on epigenetic robotics.

Risi, 2012, A unified approach to evolving plasticity and neural geometry, 1

Rissanen, 1986, Stochastic complexity and modeling, The Annals of Statistics, 14, 1080, 10.1214/aos/1176350051

Ritter, 1989, Self-organizing semantic maps, Biological Cybernetics, 61, 241, 10.1007/BF00203171

Robinson, 1987

Robinson, T., & Fallside, F. (1989). Dynamic reinforcement driven error propagation networks with application to game playing. In Proceedings of the 11th conference of the cognitive science society (pp. 836–843).

Rodriguez, 1998, Recurrent neural networks can learn to implement symbol-sensitive counting, 87

Rodriguez, 1999, A recurrent neural network that learns to count, Connection Science, 11, 5, 10.1080/095400999116340

Roggen, 2003, Hardware spiking neural network with run-time reconfigurable connectivity in an autonomous robot, 189

Rohwer, 1989, The ‘moving targets’ training method

Rosenblatt, 1958, The perceptron: a probabilistic model for information storage and organization in the brain, Psychological Review, 65, 386, 10.1037/h0042519

Rosenblatt, 1962

Roux, 2013, Mitosis detection in breast cancer histological images—an ICPR 2012 contest, Journal of Pathology Informatics, 4, 8, 10.4103/2153-3539.112693

Rubner, 1990, Development of feature detectors by self-organization: A network model, Biological Cybernetics, 62, 193, 10.1007/BF00198094

Rückstieß, 2008, State-dependent exploration for policy gradient methods, Vol. 5212, 234

Rumelhart, 1986, Learning internal representations by error propagation, 318

Rumelhart, 1986, Feature discovery by competitive learning, 151

Rummery, 1994

Russell, 1995

Saito, 1997, Partial BFGS update and efficient step-length calculation for three-layer neural networks, Neural Computation, 9, 123, 10.1162/neco.1997.9.1.123

Sak, H., Senior, A., & Beaufays, F. (2014). Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In Proc. interspeech.

Sak, H., Vinyals, O., Heigold, G., Senior, A., McDermott, E., & Monga, R., et al. (2014). Sequence discriminative distributed training of long short-term memory recurrent neural networks. In Proc. Interspeech.

Salakhutdinov, 2009, Semantic hashing, International Journal of Approximate Reasoning, 50, 969, 10.1016/j.ijar.2008.11.006

Sallans, 2004, Reinforcement learning with factored states and actions, Journal of Machine Learning Research, 5, 1063

Sałustowicz, 1997, Probabilistic incremental program evolution, Evolutionary Computation, 5, 123, 10.1162/evco.1997.5.2.123

Samejima, 2003, Inter-module credit assignment in modular reinforcement learning, Neural Networks, 16, 985, 10.1016/S0893-6080(02)00235-6

Samuel, 1959, Some studies in machine learning using the game of checkers, IBM Journal of Research and Development, 3, 210, 10.1147/rd.33.0210

Sanger, 1989, An optimality principle for unsupervised learning, 11

Santamaría, 1997, Experiments with reinforcement learning in problems with continuous state and action spaces, Adaptive Behavior, 6, 163, 10.1177/105971239700600201

Saravanan, 1995, Evolving neural control systems, IEEE Expert, 23, 10.1109/64.393139

Saund, 1994, Unsupervised learning of mixtures of multiple causes in binary data, 27

Schaback, 1992

Schäfer, 2006, Learning long term dependencies with recurrent neural networks, Vol. 4131, 71

Schapire, 1990, The strength of weak learnability, Machine Learning, 5, 197, 10.1007/BF00116037

Schaul, 2010, Metalearning, Scholarpedia, 6, 4650, 10.4249/scholarpedia.4650

Schaul, T., Zhang, S., & LeCun, Y. (2013). No more pesky learning rates. In Proc. 30th International conference on machine learning.

Schemmel, 2006, Implementing synaptic plasticity in a VLSI spiking neural network model, 1

Scherer, D., Müller, A., & Behnke, S. (2010). Evaluation of pooling operations in convolutional architectures for object recognition. In Proc. International conference on artificial neural networks (pp. 92–101).

Schmidhuber, 1987

Schmidhuber, 1989, Accelerated learning in back-propagation nets, 429

Schmidhuber, 1989, A local learning algorithm for dynamic feedforward and recurrent networks, Connection Science, 1, 403, 10.1080/09540098908915650

Schmidhuber, 1990

Schmidhuber, 1990, Learning algorithms for networks with internal and external feedback, 52

Schmidhuber, J. (1990c). The neural heat exchanger. Talks at TU Munich (1990), University of Colorado at Boulder (1992), and Z. Li’s NIPS*94 workshop on unsupervised learning. Also published at the Intl. conference on neural information processing, vol. 1 (pp. 194–197), 1996.

Schmidhuber, J. (1990d). An on-line algorithm for dynamic reinforcement learning and planning in reactive environments. In Proc. IEEE/INNS international joint conference on neural networks, vol. 2 (pp. 253–258).

Schmidhuber, 1991, Curious model-building control systems, 1458

Schmidhuber, 1991, Learning to generate sub-goals for action sequences, 967

Schmidhuber, 1991, Reinforcement learning in Markovian and non-Markovian environments, 500

Schmidhuber, 1992, A fixed size storage O(n³) time complexity learning algorithm for fully recurrent continually running networks, Neural Computation, 4, 243, 10.1162/neco.1992.4.2.243

Schmidhuber, 1992, Learning complex, extended sequences using the principle of history compression, Neural Computation, 4, 234, 10.1162/neco.1992.4.2.234

Schmidhuber, 1992, Learning factorial codes by predictability minimization, Neural Computation, 4, 863, 10.1162/neco.1992.4.6.863

Schmidhuber, 1993, An introspective network that can learn to run its own weight change algorithm, 191

Schmidhuber, 1993

Schmidhuber, 1997, Discovering neural nets with low Kolmogorov complexity and high generalization capability, Neural Networks, 10, 857, 10.1016/S0893-6080(96)00127-X

Schmidhuber, 2002, The speed prior: a new simplicity measure yielding near-optimal computable predictions, 216

Schmidhuber, 2004, Optimal ordered problem solver, Machine Learning, 54, 211, 10.1023/B:MACH.0000015880.99707.b2

Schmidhuber, 2006, Developmental robotics, optimal artificial curiosity, creativity, music, and the fine arts, Connection Science, 18, 173, 10.1080/09540090600768658

Schmidhuber, 2006, Gödel machines: Fully self-referential optimal universal self-improvers, 199

Schmidhuber, 2007, Prototype resilient, self-modeling robots, Science, 316, 688, 10.1126/science.316.5825.688c

Schmidhuber, 2012

Schmidhuber, 2013

Schmidhuber, 2013, PowerPlay: training an increasingly general problem solver by continually searching for the simplest still unsolvable problem, Frontiers in Psychology, 10.3389/fpsyg.2013.00313

Schmidhuber, J., Ciresan, D., Meier, U., Masci, J., & Graves, A. (2011). On fast deep nets for AGI vision. In Proc. fourth conference on artificial general intelligence (pp. 243–246).

Schmidhuber, 1996, Semilinear predictability minimization produces well-known feature detectors, Neural Computation, 8, 773, 10.1162/neco.1996.8.4.773

Schmidhuber, 1991, Learning to generate artificial fovea trajectories for target detection, International Journal of Neural Systems, 2, 135

Schmidhuber, 1993, Continuous history compression, 87

Schmidhuber, 1992

Schmidhuber, 1992, Planning simple trajectories using neural subgoal generators, 196

Schmidhuber, 2007, Training recurrent networks by Evolino, Neural Computation, 19, 757, 10.1162/neco.2007.19.3.757

Schmidhuber, 1997, Reinforcement learning with self-modifying policies, 293

Schmidhuber, 1997, Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement, Machine Learning, 28, 105, 10.1023/A:1007383707642

Schölkopf, 1998

Schraudolph, 2002, Fast curvature matrix–vector products for second-order gradient descent, Neural Computation, 14, 1723, 10.1162/08997660260028683

Schraudolph, 1993, Unsupervised discrimination of clustered data via optimization of binary information gain, 499

Schraudolph, 1996, Tempering backpropagation networks: not all weights are created equal, 563

Schrauwen, B., Verstraeten, D., & Van Campenhout, J. (2007). An overview of reservoir computing: theory, applications and implementations. In Proceedings of the 15th European symposium on artificial neural networks (pp. 471–482).

Schuster, 1992, Learning by maximizing the information transfer through nonlinear noisy neurons and “noise breakdown”, Physical Review A, 46, 2131, 10.1103/PhysRevA.46.2131

Schuster, 1999

Schuster, 1997, Bidirectional recurrent neural networks, IEEE Transactions on Signal Processing, 45, 2673, 10.1109/78.650093

Schwartz, A. (1993). A reinforcement learning method for maximizing undiscounted rewards. In Proc. ICML (pp. 298–305).

Schwefel, 1974

Segmentation of Neuronal Structures in EM Stacks Challenge (2012). IEEE international symposium on biomedical imaging. http://tinyurl.com/d2fgh7g.

Sehnke, 2010, Parameter-exploring policy gradients, Neural Networks, 23, 551, 10.1016/j.neunet.2009.12.004

Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., & LeCun, Y. (2013). OverFeat: integrated recognition, localization and detection using convolutional networks. ArXiv Preprint arXiv:1312.6229.

Sermanet, P., & LeCun, Y. (2011). Traffic sign recognition with multi-scale convolutional networks. In Proceedings of international joint conference on neural networks (pp. 2809–2813).

Serrano-Gotarredona, 2009, CAVIAR: A 45k neuron, 5M synapse, 12G connects/s AER hardware sensory–processing–learning–actuating system for high-speed visual object recognition and tracking, IEEE Transactions on Neural Networks, 20, 1417, 10.1109/TNN.2009.2023653

Serre, 2002, On the role of object-specific features for real world object recognition in biological vision, 387

Seung, 2003, Learning in spiking neural networks by reinforcement of stochastic synaptic transmission, Neuron, 40, 1063, 10.1016/S0896-6273(03)00761-X

Shan, H., & Cottrell, G. (2014). Efficient visual coding: From retina to V2. In Proc. international conference on learning representations. ArXiv Preprint arXiv:1312.6077.

Shan, 2007, Recursive ICA, 1273

Shanno, 1970, Conditioning of quasi-Newton methods for function minimization, Mathematics of Computation, 24, 647, 10.1090/S0025-5718-1970-0274029-X

Shannon, 1948, A mathematical theory of communication (parts I and II), Bell System Technical Journal, XXVII, 379, 10.1002/j.1538-7305.1948.tb01338.x

Shao, 2014, Learning deep and wide: A spectral method for learning deep networks, IEEE Transactions on Neural Networks and Learning Systems, 10.1109/TNNLS.2014.2308519

Shavlik, 1994, Combining symbolic and neural learning, Machine Learning, 14, 321, 10.1007/BF00993982

Shavlik, 1989, Combining explanation-based and neural learning: An algorithm and empirical results, Connection Science, 1, 233, 10.1080/09540098908915640

Siegelmann, 1992

Siegelmann, 1991, Turing computability with neural nets, Applied Mathematics Letters, 4, 77, 10.1016/0893-9659(91)90080-F

Silva, 1990, Speeding up back-propagation, 151

Síma, 1994, Loading deep networks is hard, Neural Computation, 6, 842, 10.1162/neco.1994.6.5.842

Síma, 2002, Training a single sigmoidal neuron is hard, Neural Computation, 14, 2709, 10.1162/089976602760408035

Simard, P., Steinkraus, D., & Platt, J. (2003). Best practices for convolutional neural networks applied to visual document analysis. In Seventh international conference on document analysis and recognition (pp. 958–963).

Sims, 1994, Evolving virtual creatures, 15, 10.1145/192161.192167

Simsek, Ö., & Barto, A. G. (2008). Skill characterization based on betweenness. In NIPS’08 (pp. 1497–1504).

Singh, S. P. (1994). Reinforcement learning algorithms for average-payoff Markovian decision processes. In National conference on artificial intelligence (pp. 700–705).

Singh, 2005, Intrinsically motivated reinforcement learning

Smith, 1980

Smolensky, 1986, Parallel distributed processing: Explorations in the microstructure of cognition, 194

Solla, 1988, Accelerated learning in layered neural networks, Complex Systems, 2, 625

Solomonoff, 1964, A formal theory of inductive inference. Part I, Information and Control, 7, 1, 10.1016/S0019-9958(64)90223-2

Solomonoff, 1978, Complexity-based induction systems, IEEE Transactions on Information Theory, IT-24, 422, 10.1109/TIT.1978.1055913

Soloway, 1986, Learning to program = learning to construct mechanisms and explanations, Communications of the ACM, 29, 850, 10.1145/6592.6594

Song, 2000, Competitive Hebbian learning through spike-timing-dependent synaptic plasticity, Nature Neuroscience, 3, 919, 10.1038/78829

Speelpenning, 1980

Srivastava, 2013, Compete to compute, 2310

Stallkamp, 2011, The German traffic sign recognition benchmark: A multi-class classification competition, 1453

Stallkamp, 2012, Man vs. computer: benchmarking machine learning algorithms for traffic sign recognition, Neural Networks, 32, 323, 10.1016/j.neunet.2012.02.016

Stanley, 2009, A hypercube-based encoding for evolving large-scale neural networks, Artificial Life, 15, 185, 10.1162/artl.2009.15.2.15202

Stanley, 2002, Evolving neural networks through augmenting topologies, Evolutionary Computation, 10, 99, 10.1162/106365602320169811

Steijvers, 1996, A recurrent network that performs a contextsensitive prediction task

Steil, 2007, Online reservoir adaptation by intrinsic plasticity for backpropagation–decorrelation and echo state learning, Neural Networks, 20, 353, 10.1016/j.neunet.2007.04.011

Stemmler, 1996, A single spike suffices: the simplest form of stochastic resonance in model neurons, Network: Computation in Neural Systems, 7, 687, 10.1088/0954-898X/7/4/005

Stoianov, 2012, Emergence of a ‘visual number sense’ in hierarchical generative models, Nature Neuroscience, 15, 194, 10.1038/nn.2996

Stone, 1974, Cross-validatory choice and assessment of statistical predictions, Journal of the Royal Statistical Society B, 36, 111, 10.1111/j.2517-6161.1974.tb00994.x

Stoop, 2000, When pyramidal neurons lock, when they respond chaotically, and when they like to synchronize, Neuroscience Research, 36, 81, 10.1016/S0168-0102(99)00108-X

Stratonovich, 1960, Conditional Markov processes, Theory of Probability and Its Applications, 5, 156, 10.1137/1105015

Sun, 1993, Time warping invariant neural networks, 180

Sun, 1993

Sun, 2013, A linear time natural evolution strategy for non-separable functions, 61

Sun, Y., Wierstra, D., Schaul, T., & Schmidhuber, J. (2009). Efficient natural evolution strategies. In Proc. 11th genetic and evolutionary computation conference (pp. 539–546).

Sutskever, I., Hinton, G. E., & Taylor, G. W. (2008). The recurrent temporal restricted Boltzmann machine. In NIPS, vol. 21 (p. 2008).

Sutskever, 2014

Sutton, 1998

Sutton, 1999, Policy gradient methods for reinforcement learning with function approximation, 1057

Sutton, 1999, Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning, Artificial Intelligence, 112, 181, 10.1016/S0004-3702(99)00052-1

Sutton, 2008, A convergent O(n) algorithm for off-policy temporal-difference learning with linear function approximation, 1609

Szabó, 2006, Cross-entropy optimization for independent process analysis, 909

Szegedy, 2014

Szegedy, C., Toshev, A., & Erhan, D. (2013). Deep neural networks for object detection. In Advances in neural information processing systems (pp. 2553–2561).

Taylor, 2011, Learning invariance through imitation, 2729

Tegge, 2009, NNcon: improved protein contact map prediction using 2D-recursive neural networks, Nucleic Acids Research, 37, W515, 10.1093/nar/gkp305

Teichmann, 2012, Learning invariance from natural images inspired by observations in the primary visual cortex, Neural Computation, 24, 1271, 10.1162/NECO_a_00268

Teller, 1994, The evolution of mental models, 199

Tenenberg, 1993, Learning via task decomposition, 337

Tesauro, 1994, TD-gammon, a self-teaching backgammon program, achieves master-level play, Neural Computation, 6, 215, 10.1162/neco.1994.6.2.215

Tieleman, 2012, Lecture 6.5—RmsProp: Divide the gradient by a running average of its recent magnitude, COURSERA: Neural Networks for Machine Learning

Tikhonov, 1977

Ting, K. M., & Witten, I. H. (1997). Stacked generalization: when does it work? In Proc. international joint conference on artificial intelligence.

Tiňo, 2004, Architectural bias in recurrent neural networks: Fractal analysis, Neural Computation, 15, 1931, 10.1162/08997660360675099

Tonkes, B., & Wiles, J. (1997). Learning a context-free task with a recurrent neural network: An analysis of stability. In Proceedings of the fourth Biennial conference of the Australasian cognitive science society.

Towell, 1994, Knowledge-based artificial neural networks, Artificial Intelligence, 70, 119, 10.1016/0004-3702(94)90105-8

Tsitsiklis, 1996, Feature-based methods for large scale dynamic programming, Machine Learning, 22, 59, 10.1007/BF00114724

Tsodyks, 1998, Neural networks with dynamic synapses, Neural Computation, 10, 821, 10.1162/089976698300017502

Tsodyks, 1996, Population dynamics and theta rhythm phase precession of hippocampal place cell firing: a spiking neuron model, Hippocampus, 6, 271, 10.1002/(SICI)1098-1063(1996)6:3<271::AID-HIPO5>3.3.CO;2-Q

Turaga, 2010, Convolutional networks can learn to generate affinity graphs for image segmentation, Neural Computation, 22, 511, 10.1162/neco.2009.10-08-881

Turing, 1936, On computable numbers, with an application to the Entscheidungsproblem, Proceedings of the London Mathematical Society, Series 2, 41, 230

Turner, A. J., & Miller, J. F. (2013). Cartesian genetic programming encoded artificial neural networks: A comparison using three benchmarks. In Proceedings of the conference on genetic and evolutionary computation, GECCO (pp. 1005–1012).

Ueda, 2000, Optimal linear combination of neural networks for improving classification performance, IEEE Transactions on Pattern Analysis and Machine Intelligence, 22, 207, 10.1109/34.825759

Urlbe, 1999

Utgoff, 2002, Many-layered learning, Neural Computation, 14, 2497, 10.1162/08997660260293319

Vahed, 2004, A machine learning method for extracting symbolic knowledge from recurrent neural networks, Neural Computation, 16, 59, 10.1162/08997660460733994

Vaillant, 1994, Original approach for the localisation of objects in images, IEE Proceedings Vision, Image, and Signal Processing, 141, 245, 10.1049/ip-vis:19941301

van den Berg, T., & Whiteson, S. (2013). Critical factors in the performance of HyperNEAT. In GECCO 2013: proceedings of the genetic and evolutionary computation conference (pp. 759–766).

van Hasselt, 2012, Reinforcement learning in continuous state and action spaces, 207

Vapnik, 1992, Principles of risk minimization for learning theory, 831

Vapnik, 1995

Versino, 1996, Learning fine motion by using the hierarchical extended Kohonen map, 221

Veta, M., Viergever, M., Pluim, J., Stathonikos, N., & van Diest, P. J. (2013). MICCAI 2013 grand challenge on mitosis detection.

Vieira, 2003, A training algorithm for classification of high-dimensional data, Neurocomputing, 50, 461, 10.1016/S0925-2312(02)00635-5

Viglione, 1970, Applications of pattern recognition technology

Vincent, 2008, Extracting and composing robust features with denoising autoencoders, 1096

Vlassis, 2012, On the computational complexity of stochastic controller optimization in POMDPs, ACM Transactions on Computation Theory, 4, 12, 10.1145/2382559.2382563

Vogl, 1988, Accelerating the convergence of the back-propagation method, Biological Cybernetics, 59, 257, 10.1007/BF00332914

von der Malsburg, 1973, Self-organization of orientation sensitive cells in the striate cortex, Kybernetik, 14, 85, 10.1007/BF00288907

Waldinger, 1969, PROW: a step toward automatic program writing, 241

Wallace, 1968, An information theoretic measure for classification, The Computer Journal, 11, 185, 10.1093/comjnl/11.2.185

Wan, 1994, Time series prediction by using a connectionist network with internal delay lines, 265

Wang, S., & Manning, C. (2013). Fast dropout training. In Proceedings of the 30th international conference on machine learning (pp. 118–126).

Wang, 1994, Optimal stopping and effective machine complexity in learning, 303

Watanabe, 1985

Watanabe, 1992, Kolmogorov complexity and computational complexity

Watkins, 1989

Watkins, 1992, Q-learning, Machine Learning, 8, 279, 10.1007/BF00992698

Watrous, 1992, Induction of finite-state automata using second-order recurrent networks, 309

Waydo, 2008, Unsupervised learning of individuals and categories from images, Neural Computation, 20, 1165, 10.1162/neco.2007.03-07-493

Weigend, 1993, Results of the time series prediction competition at the Santa Fe Institute, 1786

Weigend, 1991, Generalization by weight-elimination with application to forecasting, 875

Weiss, 1994, Hierarchical chunking in classifier systems, 1335

Weng, 1992, Cresceptron: a self-organizing neural network which grows adaptively, 576

Weng, 1997, Learning recognition and segmentation using the cresceptron, International Journal of Computer Vision, 25, 109, 10.1023/A:1007967800668

Werbos, 1974

Werbos, P. J. (1981). Applications of advances in nonlinear sensitivity analysis. In Proceedings of the 10th IFIP conference, 31.8-4.9, NYC (pp. 762–770).

Werbos, 1987, Building and understanding adaptive systems: A statistical/numerical approach to factory automation and brain research, IEEE Transactions on Systems, Man and Cybernetics, 17, 10.1109/TSMC.1987.289329

Werbos, 1988, Generalization of backpropagation with application to a recurrent gas market model, Neural Networks, 1, 10.1016/0893-6080(88)90007-X

Werbos, P. J. (1989a). Backpropagation and neurocontrol: A review and prospectus. In IEEE/INNS International joint conference on neural networks, vol. 1 (pp. 209–216).

Werbos, P. J. (1989b). Neural networks for control and system identification. In Proceedings of IEEE/CDC Tampa.

Werbos, 1992, Neural networks, system identification, and control in the chemical industries, 283

Werbos, 2006, Backwards differentiation in AD and neural nets: Past links and new opportunities, 15

West, 1995, Adaptive back-propagation in on-line learning of multilayer networks, 323

White, 1989, Learning in artificial neural networks: A statistical perspective, Neural Computation, 1, 425, 10.1162/neco.1989.1.4.425

Whitehead, 1992

Whiteson, 2012, Evolutionary computation for reinforcement learning, 325

Whiteson, 2005, Evolving keepaway soccer players through task decomposition, Machine Learning, 59, 5, 10.1007/s10994-005-0460-9

Whiteson, 2006, Evolutionary function approximation for reinforcement learning, Journal of Machine Learning Research, 7, 877

Widrow, 1962, Associative storage and retrieval of digital information in networks of adaptive neurons, Biological Prototypes and Synthetic Systems, 1, 160, 10.1007/978-1-4684-1716-6_25

Widrow, 1994, Neural networks: Applications in industry, business and science, Communications of the ACM, 37, 93, 10.1145/175247.175257

Wieland, 1991, Evolving neural network controllers for unstable systems, 667

Wiering, 1996, Solving POMDPs with Levin search and EIRA, 534

Wiering, 1998, HQ-learning, Adaptive Behavior, 6, 219, 10.1177/105971239700600202

Wiering, 1998, Fast online Q(λ), Machine Learning, 33, 105, 10.1023/A:1007562800292

Wiering, 2012

Wierstra, 2010, Recurrent policy gradients, Logic Journal of IGPL, 18, 620, 10.1093/jigpal/jzp049

Wierstra, D., Schaul, T., Peters, J., & Schmidhuber, J. (2008). Natural evolution strategies. In Congress of evolutionary computation.

Wiesel, 1959, Receptive fields of single neurones in the cat’s striate cortex, Journal of Physiology, 148, 574, 10.1113/jphysiol.1959.sp006308

Wiles, 1995, Learning to count without a counter: A case study of dynamics and activation landscapes in recurrent networks, 482

1965

Williams, 1986

Williams, 1988

Williams, 1989

Williams, 1992, Simple statistical gradient-following algorithms for connectionist reinforcement learning, Machine Learning, 8, 229, 10.1007/BF00992696

Williams, 1992, Training recurrent networks using the extended Kalman filter, 241

Williams, 1990, An efficient gradient-based algorithm for on-line training of recurrent network trajectories, Neural Computation, 4, 491

Williams, 1988

Williams, 1989, Experimental analysis of the real-time recurrent learning algorithm, Connection Science, 1, 87, 10.1080/09540098908915631

Williams, 1989, A learning algorithm for continually running fully recurrent networks, Neural Computation, 1, 270, 10.1162/neco.1989.1.2.270

Willshaw, 1976, How patterned neural connections can be set up by self-organization, Proceedings of the Royal Society of London. Series B, 194, 431, 10.1098/rspb.1976.0087

Windisch, 2005, Loading deep networks is hard: The pyramidal case, Neural Computation, 17, 487, 10.1162/0899766053011519

Wiskott, 2002, Slow feature analysis: Unsupervised learning of invariances, Neural Computation, 14, 715, 10.1162/089976602317318938

Witczak, 2006, A GMDH neural network-based approach to robust fault diagnosis: Application to the DAMADICS benchmark problem, Control Engineering Practice, 14, 671, 10.1016/j.conengprac.2005.04.007

Wöllmer, 2011, On-line driver distraction detection using long short-term memory, IEEE Transactions on Intelligent Transportation Systems (TITS), 12, 574, 10.1109/TITS.2011.2119483

Wöllmer, 2013, Keyword spotting exploiting long short-term memory, Speech Communication, 55, 252, 10.1016/j.specom.2012.08.006

Wolpert, 1992, Stacked generalization, Neural Networks, 5, 241, 10.1016/S0893-6080(05)80023-1

Wolpert, 1994, Bayesian backpropagation over i-o functions rather than weights, 200

Wu, 2008, Learning to play go using recursive neural networks, Neural Networks, 21, 1392, 10.1016/j.neunet.2008.02.002

Wu, D., & Shao, L. (2014). Leveraging hierarchical parametric networks for skeletal joints based action segmentation and recognition. In Proc. conference on computer vision and pattern recognition.

Wyatte, 2012, The limits of feedforward vision: Recurrent processing promotes robust object recognition when objects are degraded, Journal of Cognitive Neuroscience, 24, 2248, 10.1162/jocn_a_00282

Wysoski, 2010, Evolving spiking neural networks for audiovisual information processing, Neural Networks, 23, 819, 10.1016/j.neunet.2010.04.009

Yamauchi, 1994, Sequential behavior and learning in evolved dynamical neural networks, Adaptive Behavior, 2, 219, 10.1177/105971239400200301

Yamins, 2013, Hierarchical modular optimization of convolutional networks achieves representations similar to macaque IT and human ventral stream, 1

Yang, M., Ji, S., Xu, W., Wang, J., Lv, F., & Yu, K., et al. (2009). Detecting human actions in surveillance videos. In TREC video retrieval evaluation workshop.

Yao, 1993, A review of evolutionary artificial neural networks, International Journal of Intelligent Systems, 4, 203

Yin, 2012, A developmental approach to structural self-organization in reservoir computing, IEEE Transactions on Autonomous Mental Development, 4, 273, 10.1109/TAMD.2012.2182765

Yin, F., Wang, Q.-F., Zhang, X.-Y., & Liu, C.-L. (2013). ICDAR 2013 Chinese handwriting recognition competition. In 12th international conference on document analysis and recognition (pp. 1464–1470).

Young, 2014, Hierarchical spatiotemporal feature extraction using recurrent online clustering, Pattern Recognition Letters, 37, 115, 10.1016/j.patrec.2013.07.013

Yu, 1995, Dynamic learning rate optimization of the backpropagation algorithm, IEEE Transactions on Neural Networks, 6, 669, 10.1109/72.377972

Zamora-Martínez, 2014, Neural network language models for off-line handwriting recognition, Pattern Recognition, 47, 1642, 10.1016/j.patcog.2013.10.020

Zeiler, M. D. (2012). ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701.

Zeiler, 2013

Zemel, 1993

Zemel, 1994, Developing population codes by minimizing description length, 11

Zeng, 1994, Discrete recurrent neural networks for grammatical inference, IEEE Transactions on Neural Networks, 5

Zimmermann, 2012, Forecasting with recurrent neural networks: 12 tricks, Vol. 7700, 687

Zipser, 1993, A spiking network model of short-term active memory, The Journal of Neuroscience, 13, 3406, 10.1523/JNEUROSCI.13-08-03406.1993