Analog VLSI Stochastic Perturbative Learning Architectures
Abstract
Keywords
References
C. A. Mead, “Neuromorphic electronic systems.” Proceedings of the IEEE 78(10), pp. 1629–1636, 1990.
C. A. Mead, Analog VLSI and Neural Systems. Addison-Wesley: Reading, MA, 1989.
G. M. Shepherd, The Synaptic Organization of the Brain. 3rd ed. Oxford Univ. Press: New York, NY, 1992.
P. S. Churchland and T. J. Sejnowski, The Computational Brain. MIT Press: Cambridge, MA, 1992.
S. R. Kelso and T. H. Brown, “Differential conditioning of associative synaptic enhancement in hippocampal brain slices.” Science 232, pp. 85–87, 1986.
R. D. Hawkins, T. W. Abrams, T. J. Carew, and E. R. Kandel, “A cellular mechanism of classical conditioning in Aplysia: activity-dependent amplification of presynaptic facilitation.” Science 219, pp. 400–405, 1983.
P. R. Montague, P. Dayan, C. Person, and T. J. Sejnowski, “Bee foraging in uncertain environments using predictive Hebbian learning.” Nature 377(6551), pp. 725–728, 1995.
C. A. Mead and M. Ismail, Eds., Analog VLSI Implementation of Neural Systems. Kluwer: Norwell, MA, 1989.
A. Dembo and T. Kailath, “Model-free distributed learning.” IEEE Transactions on Neural Networks 1(1), pp. 58–70, 1990.
S. Grossberg, “A neural model of attention, reinforcement, and discrimination learning.” International Review of Neurobiology 18, pp. 263–327, 1975.
A. G. Barto, R. S. Sutton, and C. W. Anderson, “Neuronlike adaptive elements that can solve difficult learning control problems.” IEEE Transactions on Systems, Man, and Cybernetics 13(5), pp. 834–846, 1983.
R. S. Sutton, “Learning to predict by the methods of temporal differences.” Machine Learning 3, pp. 9–44, 1988.
C. Watkins and P. Dayan, “Q-Learning.” Machine Learning 8, pp. 279–292, 1992.
P. J. Werbos, “A menu of designs for reinforcement learning over time,” in Neural Networks for Control (W. T. Miller, R. S. Sutton and P. J. Werbos, eds.). MIT Press: Cambridge, MA, 1990, pp. 67–95.
G. Cauwenberghs, “A fast stochastic error-descent algorithm for supervised learning and optimization,” in Advances in Neural Information Processing Systems, vol. 5. Morgan Kaufmann: San Mateo, CA, 1993, pp. 244–251.
P. Werbos, “Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences,” Ph.D. dissertation, Harvard University, Cambridge, MA, 1974. Reprinted in P. Werbos, The Roots of Backpropagation. Wiley: New York, 1993.
H. J. Kushner and D. S. Clark, Stochastic Approximation Methods for Constrained and Unconstrained Systems. Springer-Verlag: New York, NY, 1978.
H. Robbins and S. Monro, “A stochastic approximation method.” Annals of Mathematical Statistics 22, pp. 400–407, 1951.
J. C. Spall, “A stochastic approximation technique for generating maximum likelihood parameter estimates.” Proceedings of the 1987 American Control Conference, Minneapolis, MN, 1987.
M. A. Styblinski and T.-S. Tang, “Experiments in nonconvex optimization: Stochastic approximation with function smoothing and simulated annealing.” Neural Networks 3(4), pp. 467–483, 1990.
M. Jabri and B. Flower, “Weight perturbation: An optimal architecture and learning technique for analog VLSI feedforward and recurrent multilayered networks.” IEEE Transactions on Neural Networks 3(1), pp. 154–157, 1992.
J. Alspector, R. Meir, B. Yuhas, and A. Jayakumar, “A parallel gradient descent method for learning in analog VLSI neural networks,” in Advances in Neural Information Processing Systems, vol. 5. Morgan Kaufmann: San Mateo, CA, 1993, pp. 836–844.
B. Flower and M. Jabri, “Summed weight neuron perturbation: An O(n) improvement over weight perturbation,” in Advances in Neural Information Processing Systems, vol. 5. Morgan Kaufmann: San Mateo, CA, 1993, pp. 212–219.
G. Cauwenberghs, “A learning analog neural network chip with continuous-time recurrent dynamics,” in Advances in Neural Information Processing Systems, vol. 6. Morgan Kaufmann: San Mateo, CA, 1994, pp. 858–865.
G. Cauwenberghs, “An analog VLSI recurrent neural network learning a continuous-time trajectory.” IEEE Transactions on Neural Networks 7(2), March 1996.
F. Pineda, “Mean-field theory for batched-TD(λ),” submitted to Neural Computation, 1996.
S. Grossberg and D. S. Levine, “Neural dynamics of attentionally modulated pavlovian conditioning: Blocking, interstimulus interval, and secondary reinforcement.” Applied Optics 26, pp. 5015–5030, 1987.
E. Niebur and C. Koch, “A model for the neuronal implementation of selective visual attention based on temporal correlation among neurons.” Journal of Computational Neuroscience 1, pp. 141–158, 1994.
G. Cauwenberghs, “Reinforcement learning in a nonlinear noise shaping oversampled A/D converter.” To appear in Proc. Int. Symp. Circuits and Systems, Hong Kong, June 1997.
J. C. Candy and G. C. Temes, “Oversampled methods for A/D and D/A conversion,” in Oversampled Delta-Sigma Data Converters, IEEE Press, 1992, pp. 1–29.
E. Vittoz and J. Fellrath, “CMOS analog integrated circuits based on weak inversion operation.” IEEE Journal of Solid-State Circuits 12(3), pp. 224–231, 1977.
A. G. Andreou, K. A. Boahen, P. O. Pouliquen, A. Pavasovic, R. E. Jenkins, and K. Strohbehn, “Current-mode subthreshold MOS circuits for analog VLSI neural systems.” IEEE Transactions on Neural Networks 2(2), pp. 205–213, 1991.
A. L. Hodgkin and A. F. Huxley, “Currents carried by sodium and potassium ions through the membrane of the giant axon of Loligo.” Journal of Physiology 116, pp. 449–472, 1952.
C. Diorio, P. Hasler, B. Minch, and C. A. Mead, “A single-transistor silicon synapse.” To appear in IEEE Transactions on Electron Devices.
C. A. Mead, “Adaptive retina,” in Analog VLSI Implementation of Neural Systems (C. Mead and M. Ismail, eds.). Kluwer Academic Pub.: Norwell, MA, 1989, pp. 239–246.
G. Cauwenberghs, C. F. Neugebauer, and A. Yariv, “Analysis and verification of an analog VLSI outer-product incremental learning system.” IEEE Transactions on Neural Networks 3(3), pp. 488–497, 1992.
Y. Horio and S. Nakamura, “Analog memories for VLSI neurocomputing,” in Artificial Neural Networks: Paradigms, Applications, and Hardware Implementations (C. Lau and E. Sanchez-Sinencio, eds.), IEEE Press, 1992, pp. 344–363.
G. Cauwenberghs and A. Yariv, “Fault-tolerant dynamic multi-level storage in analog VLSI.” IEEE Transactions on Circuits and Systems II 41(12), pp. 827–829, 1994.
G. Cauwenberghs, “Analog VLSI long-term dynamic storage,” in Proceedings of the International Symposium on Circuits and Systems, Atlanta, GA, 1996.
G. Cauwenberghs, “A micropower CMOS algorithmic A/D/A converter.” IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 42(11), pp. 913–919, 1995.
D. Kirk, D. Kerns, K. Fleischer, and A. Barr, “Analog VLSI implementation of multi-dimensional gradient descent,” in Advances in Neural Information Processing Systems, vol. 5. Morgan Kaufmann: San Mateo, CA, 1993, pp. 789–796.