Evolutionary convolutional neural networks: An application to handwriting recognition
References
Yao, 1997, A new evolutionary system for evolving artificial neural networks, IEEE Trans. Neural Netw., 8, 694, 10.1109/72.572107
Stanley, 2002, Evolving neural networks through augmenting topologies, Evol. Comput., 10, 99, 10.1162/106365602320169811
Kassahun, 2005, Efficient reinforcement learning through evolutionary acquisition of neural topologies, 259
Koutník, 2014, Evolving deep unsupervised convolutional networks for vision-based reinforcement learning, 541
Verbancsics, 2015, Image classification using generative neuroevolution for deep learning, 488
Stanley, 2009, A hypercube-based encoding for evolving large-scale neural networks, Artif. Life, 15, 185, 10.1162/artl.2009.15.2.15202
Young, 2015, Optimizing deep learning hyper-parameters through an evolutionary algorithm
Loshchilov, 2016, CMA-ES for hyperparameter optimization of deep neural networks
Fernando, 2016, Convolution by evolution: differentiable pattern producing networks, 109
Baker, 2017, Designing neural network architectures using reinforcement learning
Zoph, 2017, Neural architecture search with reinforcement learning, arXiv, abs/1611.01578
Yu, 2017, iPrivacy: image privacy protection by identifying sensitive objects via deep multi-task learning, IEEE Trans. Inf. Forensics Secur., 12, 1005, 10.1109/TIFS.2016.2636090
Xie, 2017, Genetic CNN, arXiv, abs/1703.01513
Miikkulainen, 2017, Evolving deep neural networks, arXiv, abs/1703.00548
Desell, 2017, Large scale evolution of convolutional neural networks using volunteer computing, arXiv, abs/1703.05422
Davison, 2017, DEvol: automated deep neural network design via genetic programming, https://www.github.com/joeddav/devol; last visited on 2017-07-01.
Suganuma, 2017, A genetic programming approach to designing convolutional neural network architectures, arXiv, abs/1704.00764
Krizhevsky, 2012, ImageNet classification with deep convolutional neural networks, 1097
Goodfellow, 2017
LeCun, 1998, Gradient-based learning applied to document recognition, Proc. IEEE, 86, 2278, 10.1109/5.726791
LeCun, 1998, Convolutional networks for images, speech, and time series, 255
Guo, 2016, Deep learning for visual understanding: a review, Neurocomputing, 187, 27, 10.1016/j.neucom.2015.09.116
Tsironi, 2017, An analysis of convolutional long short-term memory recurrent neural networks for gesture recognition, Neurocomputing, 268, 76, 10.1016/j.neucom.2016.12.088
Ordóñez, 2016, Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition, Sensors, 16, 115, 10.3390/s16010115
Szegedy, 2015, Going deeper with convolutions, 1
Hochreiter, 1997, Long short-term memory, Neural Comput., 9, 1735, 10.1162/neco.1997.9.8.1735
Cho, 2014, On the properties of neural machine translation: encoder-decoder approaches, arXiv, abs/1409.1259
Greff, 2016, LSTM: a search space odyssey, IEEE Trans. Neural Netw. Learn. Syst., PP
Srivastava, 2014, Dropout: a simple way to prevent neural networks from overfitting, J. Mach. Learn. Res., 15, 1929
Duchi, 2011, Adaptive subgradient methods for online learning and stochastic optimization, J. Mach. Learn. Res., 12, 2121
Zeiler, 2012, ADADELTA: an adaptive learning rate method, arXiv, abs/1212.5701
Tieleman, 2012, Neural networks for machine learning, lecture 6.5 – RMSProp, Coursera; video available at http://www.youtube.com/watch?v=O3sxAc4hxZU.
Kingma, 2014, Adam: a method for stochastic optimization, arXiv, abs/1412.6980
Holland, 1975
Ryan, 1998, Grammatical evolution: evolving programs for an arbitrary language, 1391, 83
Zhang, 2016, Hybrid orthogonal projection and estimation (HOPE): a new framework to learn neural networks, J. Mach. Learn. Res., 17, 1
Deng, 2011, Deep convex net: a scalable architecture for speech pattern classification, 2285
Lee, 2009, Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations, 609
Yang, 2010, Supervised translation-invariant sparse coding, 3517
Goodfellow, 2013, Multi-prediction deep Boltzmann machines, 548
Min, 2009, A deep non-linear feature mapping for large-margin kNN classification, arXiv, abs/0906.1814
Salakhutdinov, 2009, Deep Boltzmann machines, 5, 448
Chang, 2015, Batch-normalized maxout network in network, arXiv, abs/1511.02583
Lee, 2015, Generalizing pooling functions in convolutional neural networks: mixed, gated, and tree, 51, 464
Alom, 2017, Inception recurrent convolutional neural network for object recognition, arXiv, abs/1704.07709
Liang, 2015, Recurrent convolutional neural network for object recognition, 3367
Liao, 2015, On the importance of normalisation layers in deep learning with piecewise linear activation units, arXiv, abs/1508.00330
Hertel, 2015, Deep convolutional neural networks as generic feature extractors
Graham, 2015, Fractional max-pooling, arXiv, abs/1412.6071
Liao, 2015, Competitive multi-scale convolution, arXiv, abs/1511.05635
McDonnell, 2015, Enhanced image classification with a fast-learning shallow convolutional neural network
Mishkin, 2016, All you need is a good init
Lee, 2015, Deeply-supervised nets, Vol. 38, 562
Mairal, 2014, Convolutional kernel networks, 2627
Xu, 2015, Multi-loss regularized deep neural network, IEEE Trans. Circuits Systems Video Technol., 26, 2273, 10.1109/TCSVT.2015.2477937
Jarrett, 2009, What is the best multi-stage architecture for object recognition?, 2146
Srivastava, 2015, Training very deep networks, 2377
Lin, 2014, Network in network
Zeiler, 2013, Stochastic pooling for regularization of deep convolutional neural networks, arXiv, abs/1301.3557
Wan, 2013, Regularization of neural networks using DropConnect, Vol. 28
Labusch, 2008, Simple method for high-performance digit recognition based on sparse coding, IEEE Trans. Neural Netw., 19, 1985, 10.1109/TNN.2008.2005830
Ranzato, 2006, Efficient learning of sparse representations with an energy-based model, 1137
Ranzato, 2007, Unsupervised learning of invariant feature hierarchies with applications to object recognition
Calderón, 2003, Handwritten digit recognition using convolutional neural networks and Gabor filters
Le, 2011, On optimization methods for deep learning
Yang, 2015, Deep fried convnets
Lauer, 2007, A trainable feature extractor for handwritten digit recognition, Pattern Recogn., 40, 1816, 10.1016/j.patcog.2006.10.011
McDonnell, 2015, Fast, simple and accurate handwritten digit classification by training shallow neural network classifiers with the ‘extreme learning machine’ algorithm, PLoS ONE, 10
Real, 2017, Large-scale evolution of image classifiers, arXiv, abs/1703.01041
Qian, 2018, Adaptive activation functions in convolutional neural networks, Neurocomputing, 272, 204, 10.1016/j.neucom.2017.06.070
Yang, 2017, The Euclidean embedding learning based on convolutional neural network for stereo matching, Neurocomputing, 267, 195, 10.1016/j.neucom.2017.06.007
Li, 2018, Training deep neural networks with discrete state transition, Neurocomputing, 272, 154, 10.1016/j.neucom.2017.06.058
Yu, 2017, Deep multimodal distance metric learning using click constraints for image ranking, IEEE Trans. Cybern., 47, 4014, 10.1109/TCYB.2016.2591583