
Computational Brain & Behavior

SCOPUS (2018-2023)

eISSN: 2522-087X

ISSN: 2522-0861

 

Publisher: Springer Nature

Fields:
Neuropsychology and Physiological Psychology; Developmental and Educational Psychology

Featured Articles

Computational Resource Demands of a Predictive Bayesian Brain
- 2020
Johan Kwisthout, Iris van Rooij
Estimating Semantic Networks of Groups and Individuals from Fluency Data
Volume 1, Issue 1, pp. 36-58 - 2018
Jeffrey C. Zemla, Joseph L. Austerweil
On Logical Inference over Brains, Behaviour, and Artificial Neural Networks
- 2023
Olivia Guest, Andrea E. Martin
Abstract: In the cognitive, computational, and neuro-sciences, practitioners often reason about what computational models represent or learn, as well as what algorithm is instantiated. The putative goal of such reasoning is to generalize claims about the model in question, to claims about the mind and brain, and the neurocognitive capacities of those systems. Such inference is often based on a model’s performance on a task, and whether that performance approximates human behavior or brain activity. Here we demonstrate how such argumentation problematizes the relationship between models and their targets; we place emphasis on artificial neural networks (ANNs), though any theory-brain relationship that falls into the same schema of reasoning is at risk. In this paper, we model inferences from ANNs to brains and back within a formal framework — metatheoretical calculus — in order to initiate a dialogue on both how models are broadly understood and used, and on how to best formally characterize them and their functions. To these ends, we express claims from the published record about models’ successes and failures in first-order logic. Our proposed formalization describes the decision-making processes enacted by scientists to adjudicate over theories. We demonstrate that formalizing the argumentation in the literature can uncover potential deep issues about how theory is related to phenomena. We discuss what this means broadly for research in cognitive science, neuroscience, and psychology; what it means for models when they lose the ability to mediate between theory and data in a meaningful way; and what this means for the metatheoretical calculus our fields deploy when performing high-level scientific inference.
The Costs and Benefits of Goal-Directed Attention in Deep Convolutional Neural Networks
- 2021
Xiaoliang Luo, Brett D. Roads, Bradley C. Love
Abstract: People deploy top-down, goal-directed attention to accomplish tasks, such as finding lost keys. By tuning the visual system to relevant information sources, object recognition can become more efficient (a benefit) and more biased toward the target (a potential cost). Motivated by selective attention in categorisation models, we developed a goal-directed attention mechanism that can process naturalistic (photographic) stimuli. Our attention mechanism can be incorporated into any existing deep convolutional neural networks (DCNNs). The processing stages in DCNNs have been related to the ventral visual stream. In that light, our attentional mechanism incorporates top-down influences from prefrontal cortex (PFC) to support goal-directed behaviour. Akin to how attention weights in categorisation models warp representational spaces, we introduce a layer of attention weights to the mid-level of a DCNN that amplify or attenuate activity to further a goal. We evaluated the attentional mechanism using photographic stimuli, varying the attentional target. We found that increasing goal-directed attention has benefits (increasing hit rates) and costs (increasing false alarm rates). At a moderate level, attention improves sensitivity (i.e. increases $d'$) at only a moderate increase in bias for tasks involving standard images, blended images and natural adversarial images chosen to fool DCNNs. These results suggest that goal-directed attention can reconfigure general-purpose DCNNs to better suit the current task goal, much like PFC modulates activity along the ventral stream. In addition to being more parsimonious and brain consistent, the mid-level attention approach performed better than a standard machine learning approach for transfer learning, namely retraining the final network layer to accommodate the new task.
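The core operation the abstract describes, multiplicative attention weights applied channel-wise at a mid-level layer, can be illustrated with a minimal NumPy sketch. All names here are hypothetical (the paper's own implementation operates inside a trained DCNN); this only shows how per-channel weights amplify or attenuate activity, with a strength parameter interpolating between no attention and full attention:

```python
import numpy as np

def goal_directed_attention(feature_maps, attention_weights, strength=1.0):
    """Scale mid-level activations channel-wise toward a task goal.

    feature_maps: (channels, height, width) activations from one layer.
    attention_weights: (channels,) multiplicative weights; 1.0 = no change,
        >1.0 amplifies a goal-relevant channel, <1.0 attenuates it.
    strength: 0.0 leaves activity untouched; 1.0 applies the weights fully.
    """
    w = 1.0 + strength * (attention_weights - 1.0)
    return feature_maps * w[:, None, None]  # broadcast over height, width

# Toy example: amplify channel 0, attenuate channel 1.
fm = np.ones((2, 4, 4))
out = goal_directed_attention(fm, np.array([2.0, 0.5]), strength=1.0)
```

Raising `strength` would trade off the hit-rate benefit against the false-alarm cost discussed in the abstract; at `strength=0.0` the network is the unmodified general-purpose DCNN.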
The Importance of Standards for Sharing of Computational Models and Data
- 2019
Russell A. Poldrack, Franklin Feingold, Michael J. Frank, Padraig Gleeson, Gilles de Hollander, Quentin J. M. Huys, Bradley C. Love, Christopher J. Markiewicz, Rosalyn J. Moran, Petra Ritter, Timothy T. Rogers, B. E. Turner, Tal Yarkoni, Ming Zhan, Jonathan D. Cohen
Neural Habituation Enhances Novelty Detection: an EEG Study of Rapidly Presented Words
Volume 3, Issue 2, pp. 208-227 - 2020
Len P. L. Jacob, David E. Huber
Aversion to Option Loss in a Restless Bandit Task
- 2018
Daniel J. Navarro, Peter Tran, Nicole Baz
Simulating Code-switching Using a Neural Network Model of Bilingual Sentence Production
- 2021
Chara Tsoukala, Mirjam Broersma, Antal van den Bosch, Stefan L. Frank
Abstract: Code-switching is the alternation from one language to the other during bilingual speech. We present a novel method of researching this phenomenon using computational cognitive modeling. We trained a neural network of bilingual sentence production to simulate early balanced Spanish–English bilinguals, late speakers of English who have Spanish as a dominant native language, and late speakers of Spanish who have English as a dominant native language. The model produced code-switches even though it was not exposed to code-switched input. The simulations predicted how code-switching patterns differ between early balanced and late non-balanced bilinguals; the balanced bilingual simulation code-switches considerably more frequently, which is in line with what has been observed in human speech production. Additionally, we compared the patterns produced by the simulations with two corpora of spontaneous bilingual speech and identified noticeable commonalities and differences. To our knowledge, this is the first computational cognitive model simulating the code-switched production of non-balanced bilinguals and comparing the simulated production of balanced and non-balanced bilinguals with that of human bilinguals.
Hierarchical Hidden Markov Models for Response Time Data
Volume 4, Issue 1, pp. 70-86 - 2021
Deborah Kunkel, Zhifei Yan, Peter F. Craigmile, Mario Peruggia, Trisha Van Zandt