Results 1 - 7 of 7
1.
J Neurosci ; 44(15)2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38388426

ABSTRACT

Real-world listening settings often consist of multiple concurrent sound streams. To limit perceptual interference during selective listening, the auditory system segregates and filters the relevant sensory input. Previous work provided evidence that the auditory cortex is critically involved in this process and selectively gates attended input toward subsequent processing stages. We studied at which level of auditory cortex processing this filtering of attended information occurs using functional magnetic resonance imaging (fMRI) and a naturalistic selective listening task. Forty-five human listeners (of either sex) attended to one of two continuous speech streams, presented either concurrently or in isolation. Functional data were analyzed using an inter-subject analysis to assess stimulus-specific components of ongoing auditory cortex activity. Our results suggest that stimulus-related activity in the primary auditory cortex and the adjacent planum temporale is largely unaffected by attention, whereas brain responses at higher stages of the auditory cortex processing hierarchy become progressively more selective for the attended input. Consistent with these findings, a complementary analysis of stimulus-driven functional connectivity further demonstrated that information on the to-be-ignored speech stream is shared between the primary auditory cortex and the planum temporale but largely fails to reach higher processing stages. Our findings suggest that the neural processing of ignored speech cannot be effectively suppressed at the level of early cortical processing of acoustic features but is gradually attenuated once the competing speech streams are fully segregated.


Subject(s)
Auditory Cortex , Speech Perception , Humans , Auditory Cortex/diagnostic imaging , Auditory Cortex/physiology , Speech Perception/physiology , Temporal Lobe , Magnetic Resonance Imaging , Attention/physiology , Auditory Perception/physiology , Acoustic Stimulation
2.
Sci Data ; 8(1): 250, 2021 09 28.
Article in English | MEDLINE | ID: mdl-34584100

ABSTRACT

The "Narratives" collection aggregates a variety of functional MRI datasets collected while human subjects listened to naturalistic spoken stories. The current release includes 345 subjects, 891 functional scans, and 27 diverse stories of varying duration totaling ~4.6 hours of unique stimuli (~43,000 words). This data collection is well-suited for naturalistic neuroimaging analysis, and is intended to serve as a benchmark for models of language and narrative comprehension. We provide standardized MRI data accompanied by rich metadata, preprocessed versions of the data ready for immediate use, and the spoken story stimuli with time-stamped phoneme- and word-level transcripts. All code and data are publicly available with full provenance in keeping with current best practices in transparent and reproducible neuroimaging.
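As an illustration of how the time-stamped word-level transcripts described above can be put to use, the sketch below bins word onsets into fMRI volumes. The words, onsets, and repetition time (TR) are invented for the example and are not taken from the dataset.

```python
import numpy as np

TR = 1.5  # seconds per volume (hypothetical value, for illustration only)

# Toy word-level transcript: (word, onset time in seconds).
transcript = [
    ("once", 0.20), ("upon", 0.55), ("a", 0.80), ("time", 1.10),
    ("there", 1.70), ("was", 2.05), ("a", 2.30), ("king", 2.60),
]

def words_per_volume(transcript, tr, n_volumes):
    """Count how many words fall inside each TR window."""
    counts = np.zeros(n_volumes, dtype=int)
    for word, onset in transcript:
        vol = int(onset // tr)   # index of the volume containing this onset
        if vol < n_volumes:
            counts[vol] += 1
    return counts

print(words_per_volume(transcript, TR, 3))  # → [4 4 0]
```

Per-volume word counts like these are a common starting point for building word-rate or language-model regressors against the preprocessed time series.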


Subject(s)
Comprehension , Language , Magnetic Resonance Imaging , Adolescent , Adult , Brain Mapping , Electronic Data Processing , Female , Humans , Male , Middle Aged , Narration , Young Adult
3.
Cereb Cortex ; 31(8): 3622-3640, 2021 07 05.
Article in English | MEDLINE | ID: mdl-33749742

ABSTRACT

Humans can mentally represent auditory information without an external stimulus, but the specificity of these internal representations remains unclear. Here, we asked how similar the temporally unfolding neural representations of imagined music are to those evoked during the original perceived experience. We also tested whether rhythmic motion can influence the neural representation of music during imagery as during perception. Participants first memorized six 1-min-long instrumental musical pieces with high accuracy. Functional MRI data were collected during: 1) silent imagery of melodies to the beat of a visual metronome; 2) same but while tapping to the beat; and 3) passive listening. During imagery, inter-subject correlation analysis showed that melody-specific temporal response patterns were reinstated in right associative auditory cortices. When tapping accompanied imagery, the melody-specific neural patterns were reinstated in more extensive temporal-lobe regions bilaterally. These results indicate that the specific contents of conscious experience are encoded similarly during imagery and perception in the dynamic activity of auditory cortices. Furthermore, rhythmic motion can enhance the reinstatement of neural patterns associated with the experience of complex sounds, in keeping with models of motor to sensory influences in auditory processing.


Subject(s)
Brain Mapping , Imagination/physiology , Music/psychology , Acoustic Stimulation , Adolescent , Adult , Auditory Cortex/physiology , Auditory Perception/physiology , Female , Humans , Magnetic Resonance Imaging , Male , Movement/physiology , Pitch Discrimination , Pitch Perception , Sensation/physiology , Young Adult
4.
J Neurosci ; 41(12): 2713-2722, 2021 03 24.
Article in English | MEDLINE | ID: mdl-33536196

ABSTRACT

Musical training is associated with increased structural and functional connectivity between auditory sensory areas and higher-order brain networks involved in speech and motor processing. Whether such changed connectivity patterns facilitate the cortical propagation of speech information in musicians remains poorly understood. We here used magnetoencephalography (MEG) source imaging and a novel seed-based intersubject phase-locking approach to investigate the effects of musical training on the interregional synchronization of stimulus-driven neural responses during listening to naturalistic continuous speech presented in silence. MEG data were obtained from 20 young human subjects (both sexes) with different degrees of musical training. Our data show robust bilateral patterns of stimulus-driven interregional phase synchronization between auditory cortex and frontotemporal brain regions previously associated with speech processing. Stimulus-driven phase locking was maximal in the delta band, but was also observed in the theta and alpha bands. The individual duration of musical training was positively associated with the magnitude of stimulus-driven alpha-band phase locking between auditory cortex and parts of the dorsal and ventral auditory processing streams. These findings provide evidence for a positive relationship between musical training and the propagation of speech-related information between auditory sensory areas and higher-order processing networks, even when speech is presented in silence. We suggest that the increased synchronization of higher-order cortical regions to auditory cortex may contribute to the previously described musician advantage in processing speech in background noise.
SIGNIFICANCE STATEMENT
Musical training has been associated with widespread structural and functional brain plasticity. It has been suggested that these changes benefit the production and perception of music but can also translate to other domains of auditory processing, such as speech. We developed a new magnetoencephalography intersubject analysis approach to study the cortical synchronization of stimulus-driven neural responses during the perception of continuous natural speech and its relationship to individual musical training. Our results provide evidence that musical training is associated with higher synchronization of stimulus-driven activity between brain regions involved in early auditory sensory and higher-order processing. We suggest that the increased synchronized propagation of speech information may contribute to the previously described musician advantage in processing speech in background noise.
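The phase-locking idea underlying this abstract can be illustrated with a standard phase-locking value (PLV) computed from Hilbert-transform phases. This is a generic sketch on synthetic signals: the shared 10 Hz (alpha-band) rhythm and noise levels are invented for the demo, and the study's seed-based, source-level MEG pipeline is considerably more involved.

```python
import numpy as np
from scipy.signal import hilbert

def intersubject_plv(x, y):
    """Phase-locking value between two band-limited signals.

    PLV = |time average of exp(i * phase difference)|, bounded in [0, 1];
    1 indicates a perfectly constant phase relationship.
    """
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Synthetic demo: two "subjects" share a 10 Hz stimulus-driven rhythm.
rng = np.random.default_rng(1)
t = np.arange(0, 10, 0.001)                 # 10 s sampled at 1 kHz
shared = np.sin(2 * np.pi * 10 * t)         # common alpha-band component
x = shared + 0.3 * rng.standard_normal(t.size)
y = shared + 0.3 * rng.standard_normal(t.size)
print(intersubject_plv(x, y))               # high: shared rhythm locks phases
print(intersubject_plv(x, rng.standard_normal(t.size)))  # low: no shared drive
```

In practice the signals would first be band-pass filtered into the band of interest (e.g. alpha) before extracting phases, and PLV would be computed between seed and target regions across subject pairs.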


Subject(s)
Acoustic Stimulation/methods , Auditory Cortex/physiology , Magnetoencephalography/methods , Music , Speech Perception/physiology , Adult , Auditory Cortex/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging/methods , Male , Psychomotor Performance/physiology , Young Adult
5.
Cereb Cortex ; 29(10): 4017-4034, 2019 09 13.
Article in English | MEDLINE | ID: mdl-30395174

ABSTRACT

How does attention route information from sensory to high-order areas as a function of task, within the relatively fixed topology of the brain? In this study, participants were simultaneously presented with 2 unrelated stories-one spoken and one written-and asked to attend one while ignoring the other. We used fMRI and a novel intersubject correlation analysis to track the spread of information along the processing hierarchy as a function of task. Processing the unattended spoken (written) information was confined to auditory (visual) cortices. In contrast, attending to the spoken (written) story enhanced the stimulus-selective responses in sensory regions and allowed it to spread into higher-order areas. Surprisingly, we found that the story-specific spoken (written) responses for the attended story also reached secondary visual (auditory) regions of the unattended sensory modality. These results demonstrate how attention enhances the processing of attended input and allows it to propagate across brain areas.
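The intersubject correlation logic used in several of these studies can be sketched with synthetic data: each subject's regional response time course is correlated with the average time course of all other subjects (leave-one-out ISC). This is a generic illustration of the technique, not the study's actual pipeline.

```python
import numpy as np

def leave_one_out_isc(data):
    """Leave-one-out inter-subject correlation.

    data: array of shape (n_subjects, n_timepoints) holding one region's
    response time course for every subject. Returns one ISC value per
    subject: the Pearson correlation between that subject's time course
    and the average time course of all other subjects.
    """
    n_subjects = data.shape[0]
    isc = np.empty(n_subjects)
    for s in range(n_subjects):
        others = np.delete(data, s, axis=0).mean(axis=0)
        isc[s] = np.corrcoef(data[s], others)[0, 1]
    return isc

# Synthetic demo: a shared stimulus-driven signal plus subject-specific noise.
rng = np.random.default_rng(0)
shared = rng.standard_normal(300)                       # common "stimulus" component
data = shared + 0.5 * rng.standard_normal((20, 300))    # 20 subjects, 300 timepoints
print(leave_one_out_isc(data).mean())                   # high ISC: responses are shared
```

Because idiosyncratic noise averages out across the leave-one-out group, high ISC isolates the stimulus-locked component of the response, which is what lets these studies track attended versus ignored streams.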


Subject(s)
Attention/physiology , Brain/physiopathology , Pattern Recognition, Visual/physiology , Reading , Speech Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Neural Pathways/physiology , Photic Stimulation , Young Adult
6.
J Neurosci ; 33(40): 15978-88, 2013 Oct 02.
Article in English | MEDLINE | ID: mdl-24089502

ABSTRACT

Linguistic content can be conveyed both in speech and in writing. But how similar is the neural processing when the same real-life information is presented in spoken and written form? Using functional magnetic resonance imaging, we recorded neural responses from human subjects who either listened to a 7 min spoken narrative or read a time-locked presentation of its transcript. Next, within each brain area, we directly compared the response time courses elicited by the written and spoken narrative. Early visual areas responded selectively to the written version, and early auditory areas to the spoken version of the narrative. In addition, many higher-order parietal and frontal areas demonstrated strong selectivity, responding far more reliably to either the spoken or written form of the narrative. By contrast, the response time courses along the superior temporal gyrus and inferior frontal gyrus were remarkably similar for spoken and written narratives, indicating strong modality-invariance of linguistic processing in these circuits. These results suggest that our ability to extract the same information from spoken and written forms arises from a mixture of selective neural processes in early (perceptual) and high-order (control) areas, and modality-invariant responses in linguistic and extra-linguistic areas.


Subject(s)
Brain/physiology , Comprehension/physiology , Reading , Speech Perception/physiology , Acoustic Stimulation , Adult , Brain Mapping , Female , Functional Neuroimaging , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Reaction Time/physiology , Speech/physiology
7.
Disaster Med Public Health Prep ; 2 Suppl 1: S45-50, 2008 Sep.
Article in English | MEDLINE | ID: mdl-18769267

ABSTRACT

BACKGROUND: In 2005, Hurricane Katrina caused extensive damage to parts of Mississippi, Louisiana, and Alabama, forcing many people, including vulnerable older adults, to evacuate to safe surroundings. Approximately 23,000 evacuees, many of them 65 years old or older, frail, and lacking family to advocate for their care, arrived at the Reliant Astrodome Complex in Houston, Texas. There was no method for assessing the immediate and long-term needs of this vulnerable population. METHODS: A 13-item rapid needs assessment tool was piloted on 228 evacuees 65 years old and older by the Seniors Without Families Team (SWiFT) to test the feasibility of triaging vulnerable older adults with medical and mental health needs, financial needs, and/or social needs. RESULTS: The average age of the individuals triaged was 66.1 ± 12.72 years (mean ± standard deviation [SD]). Of these, 68% were triaged for medical and/or mental health needs, 18% for financial assistance, and 4% for social assistance. More than half of the SWiFT-triaged older adults reported having hypertension. CONCLUSIONS: The SWiFT tool is a feasible approach for triaging vulnerable older adults and provides a rapid determination of the level of need or assistance necessary during disasters. Because the tool was only piloted, further testing is needed to establish its reliability and validity. Potentially important implications for using such a tool, and suggestions for preparing for and responding to disaster situations involving vulnerable older adults, are provided.


Subject(s)
Disaster Planning , Disasters , Relief Work , Triage/methods , Adolescent , Adult , Age Factors , Aged , Feasibility Studies , Female , Health Services Needs and Demand , Humans , Louisiana , Male , Middle Aged , Pilot Projects , Texas , Triage/organization & administration