The End of Moore’s Law: Opportunities for Natural Computing?

New Generation Computing - Volume 35 - Pages 253-269 - 2017
Ferdinand Peper1
1Center for Information and Neural Networks, National Institute of Information and Communications Technology, Osaka, Japan

Abstract

The impending end of Moore’s Law has prompted a rethinking of the way computers are built and computation is done. This paper discusses two directions that are currently attracting much attention as future computation paradigms: the merging of logic and memory, and brain-inspired computing. Natural computing is known for its innovative methods of computation, and as such it may play an important role in shaping the post-Moore era.
