On Logical Inference over Brains, Behaviour, and Artificial Neural Networks

Olivia Guest¹, Andrea E. Martin²
¹ Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
² Language and Computation in Neural Systems Group, Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, The Netherlands

Abstract

In the cognitive, computational, and neurosciences, practitioners often reason about what computational models represent or learn, as well as about what algorithm is instantiated. The putative goal of such reasoning is to generalize claims about the model in question to claims about the mind and brain, and about the neurocognitive capacities of those systems. Such inference is often based on a model’s performance on a task, and on whether that performance approximates human behavior or brain activity. Here we demonstrate how such argumentation problematizes the relationship between models and their targets; we place emphasis on artificial neural networks (ANNs), though any theory-brain relationship that falls into the same schema of reasoning is at risk. In this paper, we model inferences from ANNs to brains and back within a formal framework, a metatheoretical calculus, in order to initiate a dialogue both on how models are broadly understood and used, and on how best to formally characterize them and their functions. To these ends, we express claims from the published record about models’ successes and failures in first-order logic. Our proposed formalization describes the decision-making processes that scientists enact when adjudicating between theories. We demonstrate that formalizing the argumentation in the literature can uncover deep issues in how theory is related to phenomena. We discuss what this means broadly for research in cognitive science, neuroscience, and psychology; what it means for models when they lose the ability to mediate between theory and data in a meaningful way; and what it means for the metatheoretical calculus our fields deploy when performing high-level scientific inference.
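As a minimal illustration of the kind of first-order rendering described above, consider one argument schema that recurs in the literature: a system that implements the brain’s algorithm should approximate human performance; model m approximates human performance; therefore m implements the brain’s algorithm. The predicate letters below (A for “is an ANN”, S for “shares the brain’s algorithm”, H for “approximates human performance”) and the task variable t are illustrative placeholders, not the paper’s own notation.

$$
\begin{aligned}
\text{P1:}\quad & \forall x\,\big(S(x) \rightarrow H(x, t)\big) && \text{(any system sharing the brain's algorithm approximates human performance on task } t\text{)}\\
\text{P2:}\quad & A(m) \wedge H(m, t) && \text{(ANN } m \text{ approximates human performance on task } t\text{)}\\
\text{C:}\quad & S(m) && \text{(}m \text{ shares the brain's algorithm)}
\end{aligned}
$$

Written out this way, the schema is visibly invalid as a deduction: it affirms the consequent, since H(m, t) can hold of many systems for which S is false. Making such patterns explicit is one way formalization can surface the issues the abstract describes.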
