Deep learning: an overview and main paradigms
Abstract
Keywords
References
Rosenblatt, F., Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, Washington: Spartan Books, 1962, p. 616.
Minsky, M. and Papert, S., Perceptrons: An Introduction to Computational Geometry, Cambridge, MA: MIT Press, 1969.
Hinton, G.E., Osindero, S., and Teh, Y., A fast learning algorithm for deep belief nets, Neural Computation, 2006, vol. 18, pp. 1527–1554.
Hinton, G., Training products of experts by minimizing contrastive divergence, Neural Computation, 2002, vol. 14, pp. 1771–1800.
Hinton, G. and Salakhutdinov, R., Reducing the dimensionality of data with neural networks, Science, 2006, vol. 313, no. 5786, pp. 504–507.
Hinton, G.E., A practical guide to training restricted Boltzmann machines, Tech. Rep. UTML TR 2010-003, Toronto: Machine Learning Group, University of Toronto, 2010.
Widrow, B. and Hoff, M., Adaptive switching circuits, in 1960 IRE WESCON Convention Record, New York: IRE, 1960, part 4, pp. 96–104.
Golovko, V., Neural Networks: Training, Organization and Application, Moscow: IPRZHR, 2001, p. 256 (in Russian).
Golovko, V., Savitsky, Y., Laopoulos, T., Sachenko, A., and Grandinetti, L., Technique of learning rate estimation for efficient training of MLP, in Proc. of the IEEE–INNS–ENNS Int. Joint Conf. on Neural Networks IJCNN'2000, Como, Italy, Danvers: IEEE Computer Society, 2000, pp. 323–329.
Golovko, V., From multilayer perceptrons to deep belief neural networks: training paradigms and application, in Lectures on Neuroinformatics, Golovko, V.A., Ed., Moscow: NRNU MEPhI, 2015, pp. 47–84 (in Russian).
Rumelhart, D., Hinton, G., and Williams, R., Learning representations by back-propagating errors, Nature, 1986, vol. 323, pp. 533–536.
Lippmann, R.P., An introduction to computing with neural nets, IEEE ASSP Mag., 1987, vol. 4, no. 2, pp. 4–22.
Cybenko, G., Approximation by superpositions of a sigmoidal function, Math. Control Signals Syst., 1989, vol. 2, pp. 303–314.
Bengio, Y., Learning deep architectures for AI, Foundations Trends Mach. Learning, 2009, vol. 2, no. 1, pp. 1–127.
Bengio, Y., Lamblin, P., Popovici, D., and Larochelle, H., Greedy layer-wise training of deep networks, in Advances in Neural Information Processing Systems, Schölkopf, B., Platt, J.C., and Hoffman, T., Eds., Cambridge, MA: MIT Press, 2007, vol. 19, pp. 153–160.
Erhan, D., Bengio, Y., Courville, A., Manzagol, P.-A., Vincent, P., and Bengio, S., Why does unsupervised pretraining help deep learning?, J. Mach. Learning Res., 2010, vol. 11, pp. 625–660.
Larochelle, H., Bengio, Y., Louradour, J., and Lamblin, P., Exploring strategies for training deep neural networks, J. Mach. Learning Res., 2009, vol. 10, pp. 1–40.
Glorot, X., Bordes, A., and Bengio, Y., Deep sparse rectifier neural networks, in Proc. of the 14th International Conference on Artificial Intelligence and Statistics, JMLR W&CP, 2011, vol. 15, pp. 315–323.
Golovko, V., Kroshchanka, A., Rubanau, U., and Jankowski, S., A learning technique for deep belief neural networks, in Neural Networks and Artificial Intelligence, vol. 440 of Communications in Computer and Information Science, Springer, 2014, pp. 136–146.
Golovko, V., Kroshchanka, A., Turchenko, V., Jankowski, S., and Treadwell, D., A new technique for restricted Boltzmann machine learning, in Proc. of the 8th IEEE International Conference IDAACS-2015, Warsaw, Poland, 24–26 September 2015, pp. 182–186.