Journal of Cognitive Neuroscience
Representative scientific publications
* Data provided for reference only
It remains under debate whether the fusiform visual word form area (VWFA) is specific to visual word form and whether visual expertise increases its sensitivity (Xue et al., 2006; Cohen et al., 2002). The present study examined three related issues: (1) whether the VWFA is also involved in processing foreign writing that significantly differs from the native one, (2) the effect of visual word form training on VWFA activation after controlling for task difficulty, and (3) the transfer of visual word form learning. Eleven native English speakers were trained, during five sessions, to judge whether two successively flashed (100-msec duration with a 200-msec interval) foreign characters (i.e., Korean Hangul) were identical or not. Visual noise was added to the stimuli to manipulate task difficulty. In functional magnetic resonance imaging scans before and after training, subjects performed the task once with the same noise level (i.e., parameter-matched scan) and once with the noise level adjusted to match performance between pretraining and posttraining (i.e., performance-matched scan). Results indicated that training increased accuracy in the parameter-matched condition, whereas accuracy remained constant in the performance-matched condition (because task difficulty was increased). Pretraining scans revealed stronger activation for English words than for Korean characters in the left inferior temporal gyrus and the left inferior frontal cortex, but not in the VWFA. Visual word form training significantly decreased activation in the bilateral middle and left posterior fusiform when either parameters or performance were matched, and for both trained and new items. These results confirm our conjecture that the VWFA is not dedicated to words and that visual expertise acquired through training reduces, rather than increases, its activity.
The ability to extract visual word forms quickly and efficiently is essential for using reading as a tool for learning. We describe the first longitudinal fMRI study to chart individual changes in cortical sensitivity to written words as reading develops. We conducted four annual measurements of brain function and reading skills in a heterogeneous group of children, initially 7–12 years old. The results show an age-related increase in children's cortical sensitivity to word visibility in the posterior left occipito-temporal sulcus (LOTS), near the anatomical location of the visual word form area. Moreover, the rate of increase in LOTS word sensitivity specifically correlates with the rate of improvement in sight word efficiency, a measure of speeded overt word reading. Other cortical regions, including V1, posterior parietal cortex, and the right homologue of LOTS, did not demonstrate such developmental changes. These results provide developmental support for the hypothesis that LOTS is part of the cortical circuitry that extracts visual word forms quickly and efficiently, and they highlight the importance of developing cortical sensitivity to word visibility in reading acquisition.
Language and arithmetic are both lateralized to the left hemisphere in the majority of right-handed adults. Yet, does this similar lateralization reflect a single overall constraint of brain organization, such as an overall “dominance” of the left hemisphere for all linguistic and symbolic operations? Is it related to the lateralization of specific cerebral subregions? Or is it merely coincidental? To shed light on this issue, we performed a “colateralization analysis” on 209 healthy subjects: We investigated whether normal variations in the degree of left hemispheric asymmetry in areas involved in sentence listening and reading are mirrored in the asymmetry of areas involved in mental arithmetic. Within the language network, a region-of-interest analysis disclosed partially dissociated patterns of lateralization, inconsistent with an overall “dominance” model. Only two of these areas presented a lateralization during sentence listening and reading that correlated strongly with the lateralization of two regions active during calculation. Specifically, the profile of asymmetry in the posterior superior temporal sulcus during sentence processing covaried with the asymmetry of calculation-induced activation in the intraparietal sulcus, and a similar colateralization linked the middle frontal gyrus with the superior posterior parietal lobule. Given recent neuroimaging results suggesting a late emergence of hemispheric asymmetries for symbolic arithmetic during childhood, we speculate that these colateralizations might constitute developmental traces of how the acquisition of linguistic symbols affects the cerebral organization of the arithmetic network.
Visual and auditory cortices traditionally have been considered to be “modality-specific.” Thus, their activity has been thought to be unchanged by information in other sensory modalities. However, using functional magnetic resonance imaging (fMRI), the present experiments revealed that ongoing activity in the visual cortex could be modulated by auditory information and ongoing activity in the auditory cortex could be modulated by visual information. In both cases, this cross-modal modulation of activity took the form of deactivation. Yet, the deactivation response was not evident in either cortical area during the paired presentation of visual and auditory stimuli. These data suggest that cross-modal inhibitory processes operate within traditional modality-specific cortices and that these processes can be switched on or off in different circumstances.
The human voice is the primary carrier of speech but also a fingerprint for person identity. Previous neuroimaging studies have revealed that speech and identity recognition are accomplished by partially different neural pathways, despite the perceptual unity of the vocal sound. Importantly, the right STS has been implicated in voice processing, with different contributions of its posterior and anterior parts. However, the time point at which vocal and speech processing diverge is currently unknown. Also, the exact role of the right STS during voice processing remains unclear because its behavioral relevance has not yet been established. Here, we used the high temporal resolution of magnetoencephalography and a speech task control to pinpoint transient behavioral correlates: we found, at 200 msec after stimulus onset, that activity in the right anterior STS predicted behavioral voice recognition performance. At the same time point, the posterior right STS showed increased activity during voice identity recognition in contrast to speech recognition, whereas the left mid STS showed the reverse pattern. In contrast to the highly speech-sensitive left STS, the current results highlight the right STS as a key area for voice identity recognition and show that its anatomical-functional division emerges around 200 msec after stimulus onset. We suggest that this time point marks the speech-independent processing of vocal sounds in the posterior STS and their successful mapping to vocal identities in the anterior STS.
The present study analyzed the neural correlates of acoustic stimulus representation in echoic sensory memory. The neural traces of auditory sensory memory were indirectly studied by using the mismatch negativity (MMN), an event-related potential component elicited by a change in a repetitive sound. The MMN is assumed to reflect change detection in a comparison process between the sensory input from a deviant stimulus and the neural representation of repetitive stimuli in echoic memory. The scalp topographies of the MMNs elicited by pure tones deviating from standard tones in frequency, intensity, or duration varied according to the type of stimulus deviance, indicating that the MMNs for different attributes originate, at least in part, from distinct neural populations in the auditory cortex. This result was supported by dipole-model analysis. If the MMN generator process occurs where the stimulus information is stored, these findings strongly suggest that the frequency, intensity, and duration of acoustic stimuli have separate neural representations in sensory memory.
Language comprises a lexicon for storing words and a grammar for generating rule-governed forms. Evidence is presented that the lexicon is part of a temporal-parietal/medial-temporal “declarative memory” system and that grammatical rules are processed by a frontal/basal-ganglia “procedural” system. Patients produced past tenses of regular and novel verbs (looked and plagged), which require an -ed suffixation rule, and irregular verbs (dug), which are retrieved from memory. Word-finding difficulties in posterior aphasia, and the general declarative memory impairment in Alzheimer's disease, led to more errors with irregular than regular and novel verbs. Grammatical difficulties in anterior aphasia, and the general impairment of procedures in Parkinson's disease, led to the opposite pattern. In contrast to the Parkinson's patients, who showed suppressed motor activity and rule use, Huntington's disease patients showed excess motor activity and rule use, underscoring a role for the basal ganglia in grammatical processing.
The ability to cognitively regulate emotional responses to aversive events is important for mental and physical health. Little is known, however, about the neural bases of the cognitive control of emotion. The present study employed functional magnetic resonance imaging to examine the neural systems used to reappraise highly negative scenes in unemotional terms. Reappraisal of highly negative scenes reduced the subjective experience of negative affect. Neural correlates of reappraisal were increased activation of the lateral and medial prefrontal regions and decreased activation of the amygdala and medial orbito-frontal cortex. These findings support the hypothesis that prefrontal cortex is involved in constructing reappraisal strategies that can modulate activity in multiple emotion-processing systems.
Researchers have long debated whether knowledge about the self is unique in terms of its functional anatomic representation within the human brain. In the context of memory function, knowledge about the self is typically remembered better than other types of semantic information. But why does this memorial effect emerge? Extending previous research on this topic (see Craik et al., 1999), the present study used event-related functional magnetic resonance imaging to investigate potential neural substrates of self-referential processing. Participants were imaged while making judgments about trait adjectives under three experimental conditions (self-relevance, other-relevance, or case judgment). Relevance judgments, when compared to case judgments, were accompanied by activation of the left inferior frontal cortex and the anterior cingulate. A separate region of the medial prefrontal cortex was selectively engaged during self-referential processing. Collectively, these findings suggest that self-referential processing is functionally dissociable from other forms of semantic processing within the human brain.
Recent behavioral and event-related brain potential (ERP) studies have revealed cross-modal interactions in endogenous spatial attention between vision and audition, and between vision and touch. The present ERP study investigated whether these interactions reflect supramodal attentional control mechanisms, and whether similar cross-modal interactions also exist between audition and touch. Participants directed attention to the side indicated by a cue to detect infrequent auditory or tactile targets at the cued side. The relevant modality (audition or touch) was blocked. Attentional control processes were reflected in systematic ERP modulations elicited during cued shifts of attention. An anterior negativity contralateral to the cued side was followed by a contralateral positivity at posterior sites. These effects were similar whether the cue signaled which side was relevant for audition or for touch. They also resembled previously observed ERP modulations for shifts of visual attention, thus implicating supramodal mechanisms in the control of spatial attention. Following each cue, single auditory, tactile, or visual stimuli were presented at the cued or uncued side. Although stimuli in task-irrelevant modalities could be completely ignored, visual and auditory ERPs were nevertheless affected by spatial attention when touch was relevant, revealing cross-modal interactions. When audition was relevant, visual ERPs, but not tactile ERPs, were affected by spatial attention, indicating that touch can be decoupled from cross-modal attention when task-irrelevant.