Artificial optic-neural synapse for colored and color-mixed pattern recognition
Abstract
Keywords
References
Yu, S. et al. An electronic synapse device based on metal oxide resistive switching memory for neuromorphic computation. IEEE Trans. Electron Devices 58, 2729–2737 (2011).
Prezioso, M. et al. Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature 521, 61–64 (2015).
van de Burgt, Y. et al. A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing. Nat. Mater. 16, 414–418 (2017).
Kim, S., Yoon, J., Kim, H.-D. & Choi, S.-J. Carbon nanotube synaptic transistor network for pattern recognition. ACS Appl. Mater. Interfaces 7, 25479–25486 (2015).
Wang, Z. et al. Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing. Nat. Mater. 16, 101–108 (2017).
Choi, S. et al. SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations. Nat. Mater. 17, 335–340 (2018).
Sangwan, V. K. et al. Multi-terminal memtransistors from polycrystalline monolayer molybdenum disulfide. Nature 554, 500–504 (2018).
Yoon, S. M., Tokumitsu, E. & Ishiwara, H. An electrically modifiable synapse array of metal-ferroelectric-semiconductor (MFS) FETs. IEEE Electron Device Lett. 20, 229–231 (1999).
Silver, D. et al. Mastering the game of Go with deep neural networks and tree search. Nature 529, 484–489 (2016).
Wang, Z. et al. Fully memristive neural networks for pattern classification with unsupervised learning. Nat. Electron. 1, 137–145 (2018).
Bi, G.-Q. & Poo, M.-M. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci. 18, 10464–10472 (1998).
Whitlock, J. R., Heynen, A. J., Shuler, M. G. & Bear, M. F. Learning induces long-term potentiation in the hippocampus. Science 313, 1093–1097 (2006).
Ohno, T. et al. Short-term plasticity and long-term potentiation mimicked in single inorganic synapses. Nat. Mater. 10, 591–595 (2011).
Burr, G. W. et al. Experimental demonstration and tolerancing of a large-scale neural network (165,000 synapses) using phase-change memory as the synaptic weight element. IEEE Trans. Electron Devices 62, 3498–3507 (2015).
Yang, J. J., Strukov, D. B. & Stewart, D. R. Memristive devices for computing. Nat. Nanotechnol. 8, 13–24 (2013).
Chen, P. Y., Peng, X. & Yu, S. NeuroSim+: an integrated device-to-algorithm framework for benchmarking synaptic devices and array architectures. In 2017 IEEE International Electron Devices Meeting (IEDM) 135–138 (IEEE, San Francisco, 2017).
Laughlin, S. B., de Ruyter van Steveninck, R. R. & Anderson, J. C. The metabolic cost of neural information. Nat. Neurosci. 1, 36–41 (1998).
Shim, J. et al. Electronic and optoelectronic devices based on two-dimensional materials: from fabrication to application. Adv. Electron. Mater. 3, 1600364 (2017).
Lemme, M. C., Li, L.-J., Palacios, T. & Schwierz, F. Two-dimensional materials for electronic applications. MRS Bull. 39, 711–718 (2014).
Jo, S.-H. et al. A high-performance WSe2/h-BN photodetector using a triphenylphosphine (PPh3)-based n-doping technique. Adv. Mater. 28, 4824–4831 (2016).
Mochida, R. et al. A 4M synapses integrated analog ReRAM based 66.5 TOPS/W neural network processor with cell current writing and flexible network architecture. In 2018 Symposium on VLSI Technology (IEEE, Honolulu, 2018).
Ambrogio, S. et al. Equivalent-accuracy accelerated neural-network training using analogue memory. Nature 558, 60–67 (2018).
Querlioz, D., Bichler, O., Dollfus, P. & Gamrat, C. Immunity to device variations in a spiking neural network with memristive nanodevices. IEEE Trans. Nanotechnol. 12, 288–295 (2013).
Arnold, A. J. et al. Mimicking neurotransmitter release in chemical synapses via hysteresis engineering in MoS2 transistors. ACS Nano 11, 3110–3118 (2017).
Rumelhart, D. E., Hinton, G. E. & Williams, R. J. in Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol. 1 (eds Rumelhart, D. E. & McClelland, J. L.) 318–362 (MIT Press, Cambridge, 1986).
Lim, S. et al. Adaptive learning rule for hardware-based deep neural networks using electronic synapse devices. Preprint available at http://arXiv.org/abs/1707.06381 (2017).
Han, S., Mao, H. & Dally, W. J. Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding. In International Conference on Learning Representations 2016 (ICLR, San Juan, 2016).
Moons, B. & Verhelst, M. A 0.3–2.6 TOPS/W precision-scalable processor for real-time large-scale ConvNets. In 2016 Symposium on VLSI Circuits (IEEE, Honolulu, 2016).
Dan, Y. & Poo, M.-M. Spike timing-dependent plasticity: from synapse to perception. Physiol. Rev. 86, 1033–1048 (2006).
Hertz, J., Krogh, A. & Palmer, R. G. Introduction to the Theory of Neural Computation (Perseus, Cambridge, 1991).