1.
Brain Topogr ; 37(2): 287-295, 2024 03.
Article in English | MEDLINE | ID: mdl-36939988

ABSTRACT

Electroencephalography (EEG) microstates are short successive periods of stable scalp field potentials representing spontaneous activation of brain resting-state networks. EEG microstates are assumed to mediate local activity patterns. To test this hypothesis, we correlated momentary global EEG microstate dynamics with the local temporo-spectral evolution of electrocorticography (ECoG) and stereotactic EEG (SEEG) depth electrode recordings. We hypothesized that these correlations involve the gamma band. We also hypothesized that the anatomical locations of these correlations would converge with those of previous studies using either combined functional magnetic resonance imaging (fMRI)-EEG or EEG source localization. We analyzed resting-state data (5 min) of simultaneous noninvasive scalp EEG and invasive ECoG and SEEG recordings of two participants. Data were recorded during the presurgical evaluation of pharmacoresistant epilepsy using subdural and intracranial electrodes. After standard preprocessing, we fitted a set of normative microstate template maps to the scalp EEG data. Using covariance mapping with EEG microstate timelines and ECoG/SEEG temporo-spectral evolutions as inputs, we identified systematic changes in the activation of ECoG/SEEG local field potentials in different frequency bands (theta, alpha, beta, and high-gamma) based on the presence of particular microstate classes. We found significant covariation of ECoG/SEEG spectral amplitudes with microstate timelines in all four frequency bands (p = 0.001, permutation test). The covariance patterns of the ECoG/SEEG electrodes during the different microstates of both participants were similar. To our knowledge, this is the first study to demonstrate distinct activation/deactivation patterns of frequency-domain ECoG local field potentials associated with simultaneous EEG microstates.
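The covariance analysis described above pairs a global microstate timeline with local spectral amplitudes and assesses significance with a permutation test. Below is a minimal numpy sketch of that idea for a single electrode and frequency band; it is an illustration under assumed inputs (a binary microstate timeline and an amplitude envelope), not the authors' actual covariance-mapping pipeline, and the function name and circular-shift null are choices of this example:

```python
import numpy as np

def permutation_covariance_test(state_on, amplitude, n_perm=1000, seed=0):
    """Covariance between a microstate on/off timeline and a band-limited
    spectral amplitude series, with a circular-shift permutation p-value.

    state_on  : 1-D 0/1 array marking when one microstate class is active.
    amplitude : 1-D array of spectral amplitude, same length and sampling.
    """
    rng = np.random.default_rng(seed)
    n = len(state_on)
    observed = np.cov(state_on, amplitude)[0, 1]
    null = np.empty(n_perm)
    for i in range(n_perm):
        # Circular shifts break the pairing while keeping each series intact.
        null[i] = np.cov(np.roll(state_on, rng.integers(1, n)), amplitude)[0, 1]
    p = (np.sum(np.abs(null) >= abs(observed)) + 1.0) / (n_perm + 1.0)
    return observed, p
```

A circular-shift null preserves each series' own temporal structure while destroying their alignment, which is one common choice for time-series permutation tests.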


Subject(s)
Brain Mapping; Electrocorticography; Humans; Brain Mapping/methods; Electroencephalography/methods; Brain/diagnostic imaging; Brain/physiology; Scalp
2.
Cereb Cortex ; 33(6): 2804-2822, 2023 03 10.
Article in English | MEDLINE | ID: mdl-35771593

ABSTRACT

Joint music performance requires flexible sensorimotor coordination between self and other. Cognitive and sensory parameters of joint action-such as shared knowledge or temporal (a)synchrony-influence this coordination by shifting the balance between self-other segregation and integration. To investigate the neural bases of these parameters and their interaction during joint action, we asked pianists to play on an MR-compatible piano, in duet with a partner outside of the scanner room. Motor knowledge of the partner's musical part and the temporal compatibility of the partner's action feedback were manipulated. First, we found stronger activity and functional connectivity within cortico-cerebellar audio-motor networks when pianists had practiced their partner's part before. This indicates that they simulated and anticipated the auditory feedback of the partner by virtue of an internal model. Second, we observed stronger cerebellar activity and reduced behavioral adaptation when pianists encountered subtle asynchronies between these model-based anticipations and the perceived sensory outcome of (familiar) partner actions, indicating a shift towards self-other segregation. These combined findings demonstrate that cortico-cerebellar audio-motor networks link motor knowledge and other-produced sounds depending on cognitive and sensory factors of the joint performance, and play a crucial role in balancing self-other integration and segregation.


Subject(s)
Music; Psychomotor Performance; Music/psychology; Adaptation, Physiological; Feedback, Sensory
3.
Cereb Cortex ; 32(21): 4885-4901, 2022 10 20.
Article in English | MEDLINE | ID: mdl-35136980

ABSTRACT

During conversations, speech prosody provides important clues about the speaker's communicative intentions. In many languages, a rising vocal pitch at the end of a sentence typically expresses a question function, whereas a falling pitch suggests a statement. Here, the neurophysiological bases of intonation and speech act understanding were investigated with high-density electroencephalography (EEG) to determine whether prosodic features are reflected at the neurophysiological level. Already approximately 100 ms after the sentence-final word differing in prosody, questions and statements expressed with the same sentences led to different neurophysiological activity recorded in the event-related potential. Interestingly, low-pass filtered sentences and acoustically matched nonvocal musical signals failed to show any neurophysiological dissociations, suggesting that physical intonation alone cannot explain this modulation. Our results show rapid neurophysiological indexes of prosodic communicative information processing that emerge only when pragmatic and lexico-semantic information are fully expressed. The early enhancement of question-related activity compared with statements was due to sources in the articulatory-motor region, which may reflect the richer action knowledge immanent to questions, namely the expectation of the partner's action of answering the question. The present findings demonstrate a neurophysiological correlate of prosodic communicative information processing, which enables humans to rapidly detect and understand speaker intentions in linguistic interactions.


Subject(s)
Speech Perception; Speech; Humans; Speech Perception/physiology; Evoked Potentials/physiology; Electroencephalography/methods; Linguistics
4.
Cereb Cortex ; 32(18): 4110-4127, 2022 09 04.
Article in English | MEDLINE | ID: mdl-35029645

ABSTRACT

When people interact with each other, their brains synchronize. However, it remains unclear whether interbrain synchrony (IBS) is functionally relevant for social interaction or stems from exposure of individual brains to identical sensorimotor information. To disentangle these views, the current dual-EEG study investigated amplitude-based IBS in pianists jointly performing duets containing a silent pause followed by a tempo change. First, we manipulated the similarity of the anticipated tempo change and measured IBS during the pause, hence, capturing the alignment of purely endogenous, temporal plans without sound or movement. Notably, right posterior gamma IBS was higher when partners planned similar tempi, it predicted whether partners' tempi matched after the pause, and it was modulated only in real, not in surrogate pairs. Second, we manipulated the familiarity with the partner's actions and measured IBS during joint performance with sound. Although sensorimotor information was similar across conditions, gamma IBS was higher when partners were unfamiliar with each other's part and had to attend more closely to the sound of the performance. These combined findings demonstrate that IBS is not merely an epiphenomenon of shared sensorimotor information but can also hinge on endogenous, cognitive processes crucial for behavioral synchrony and successful social interaction.


Subject(s)
Brain Mapping; Interpersonal Relations; Music; Humans; Brain; Diencephalon; Movement
5.
Cereb Cortex ; 32(18): 3878-3895, 2022 09 04.
Article in English | MEDLINE | ID: mdl-34965579

ABSTRACT

Complex sequential behaviors, such as speaking or playing music, entail flexible rule-based chaining of single acts. However, it remains unclear how the brain translates abstract structural rules into movements. We combined music production with multimodal neuroimaging to dissociate high-level structural and low-level motor planning. Pianists played novel musical chord sequences on a muted MR-compatible piano by imitating a model hand on screen. Chord sequences were manipulated in terms of musical harmony and context length to assess structural planning, and in terms of fingers used for playing to assess motor planning. A model of probabilistic sequence processing confirmed temporally extended dependencies between chords, as opposed to local dependencies between movements. Violations of structural plans activated the left inferior frontal and middle temporal gyrus, and the fractional anisotropy of the ventral pathway connecting these two regions positively predicted behavioral measures of structural planning. A bilateral frontoparietal network was instead activated by violations of motor plans. Both structural and motor networks converged in lateral prefrontal cortex, with anterior regions contributing to musical structure building, and posterior areas to movement planning. These results establish a promising approach to study sequence production at different levels of action representation.


Subject(s)
Music; Brain; Hand; Movement; Prefrontal Cortex/diagnostic imaging
6.
Eur J Neurol ; 29(3): 873-882, 2022 03.
Article in English | MEDLINE | ID: mdl-34661326

ABSTRACT

BACKGROUND AND PURPOSE: This study was undertaken to determine and compare the lesion patterns and structural dysconnectivity underlying poststroke aprosodia and amusia, using a data-driven multimodal neuroimaging approach. METHODS: Thirty-nine patients with right or left hemisphere stroke were enrolled in a cohort study and tested for linguistic and affective prosody perception and musical pitch and rhythm perception at the subacute and 3-month poststroke stages. Participants listened to words spoken with different prosodic stress that changed their meaning, and to words spoken with six different emotions, and chose which meaning or emotion was expressed. In the music tasks, participants judged pairs of short melodies as the same or different in terms of pitch or rhythm. Structural magnetic resonance imaging data were acquired at both stages, and machine learning-based lesion-symptom mapping and deterministic tractography were used to identify the lesion patterns and damaged white matter pathways giving rise to aprosodia and amusia. RESULTS: Aprosodia and amusia were strongly correlated behaviorally and associated with similar lesion patterns in right frontoinsular and striatal areas. In multiple regression models, reduced fractional anisotropy and lower tract volume of the right inferior fronto-occipital fasciculus were the strongest predictors of both disorders over time. CONCLUSIONS: These results highlight a common origin of aprosodia and amusia, both arising from damage and disconnection of the right ventral auditory stream, which integrates rhythmic-melodic acoustic information in prosody and music. Comorbidity of these disabilities may worsen the prognosis and affect rehabilitation success.


Subject(s)
Auditory Perceptual Disorders; Music; Auditory Perceptual Disorders/etiology; Cohort Studies; Humans; Magnetic Resonance Imaging; Speech Disorders
7.
Hum Brain Mapp ; 42(1): 161-174, 2021 01.
Article in English | MEDLINE | ID: mdl-32996647

ABSTRACT

Language comprehension depends on tight functional interactions between distributed brain regions. While these interactions are established for semantic and syntactic processes, the functional network of speech intonation - the linguistic variation of pitch - has been scarcely defined. Particularly little is known about intonation in tonal languages, in which pitch not only serves intonation but also expresses meaning via lexical tones. The present study used psychophysiological interaction analyses of functional magnetic resonance imaging data to characterise the neural networks underlying intonation and tone processing in native Mandarin Chinese speakers. Participants categorised either intonation or tone of monosyllabic Mandarin words that gradually varied between statement and question and between Tone 2 and Tone 4. Intonation processing induced bilateral fronto-temporal activity and increased functional connectivity between left inferior frontal gyrus and bilateral temporal regions, likely linking auditory perception and labelling of intonation categories in a phonological network. Tone processing induced bilateral temporal activity, associated with the auditory representation of tonal (phonemic) categories. Together, the present data demonstrate the breadth of the functional intonation network in a tonal language including higher-level phonological processes in addition to auditory representations common to both intonation and tone.


Subject(s)
Connectome/methods; Nerve Net/physiology; Pitch Perception/physiology; Prefrontal Cortex/physiology; Speech Perception/physiology; Temporal Lobe/physiology; Adult; Female; Humans; Magnetic Resonance Imaging/methods; Male; Nerve Net/diagnostic imaging; Prefrontal Cortex/diagnostic imaging; Psycholinguistics; Temporal Lobe/diagnostic imaging; Young Adult
8.
Hum Brain Mapp ; 41(7): 1842-1858, 2020 05.
Article in English | MEDLINE | ID: mdl-31957928

ABSTRACT

Intonation, the modulation of pitch in speech, is a crucial aspect of language that is processed in right-hemispheric regions, beyond the classical left-hemispheric language system. Whether or not this notion generalises across languages remains, however, unclear. Tonal languages are a particularly interesting test case because of the dual linguistic function of pitch, which conveys lexical meaning in the form of tone, in addition to intonation. To date, only a few studies have explored how intonation is processed in tonal languages, how this compares to tone processing, and how it differs between speakers of tonal and non-tonal languages. The present fMRI study addressed these questions by testing Mandarin and German speakers with Mandarin material. Both groups categorised monosyllabic Mandarin words in terms of intonation, tone, and voice gender. Systematic comparisons of brain activity between the two groups and across the three tasks showed large cross-linguistic commonalities in the neural processing of intonation in left fronto-parietal, right frontal, and bilateral cingulo-opercular regions. These areas are associated with general phonological, specific prosodic, and controlled categorical decision-making processes, respectively. Tone processing overlapped with intonation processing in left fronto-parietal areas in both groups, but evoked additional activity in bilateral temporo-parietal semantic regions and subcortical areas in Mandarin speakers only. Together, these findings confirm cross-linguistic commonalities in the neural implementation of intonation processing, but dissociations for the semantic processing of tone in tonal language speakers only.


Subject(s)
Language; Pitch Perception/physiology; Adolescent; Adult; Brain/diagnostic imaging; Brain/physiology; Brain Mapping; China; Decision Making/physiology; Female; Frontal Lobe/diagnostic imaging; Frontal Lobe/physiology; Germany; Humans; Linguistics; Magnetic Resonance Imaging; Male; Parietal Lobe/diagnostic imaging; Parietal Lobe/physiology; Reaction Time/physiology; Sex Characteristics; Speech; Speech Perception; Voice; Young Adult
9.
Ear Hear ; 41(2): 395-410, 2020.
Article in English | MEDLINE | ID: mdl-31397704

ABSTRACT

OBJECTIVES: A major issue in the rehabilitation of children with cochlear implants (CIs) is unexplained variance in their language skills, where many of them lag behind children with normal hearing (NH). Here, we assess links between generative language skills, the perception of prosodic stress, and musical and parental activities in children with CIs and NH. Understanding these links is expected to guide future research toward supporting language development in children with a CI. DESIGN: Twenty-one unilaterally and early-implanted children and 31 children with NH, aged 5 to 13, were classified as musically active or nonactive by a questionnaire recording the regularity of musical activities, in particular singing, and of reading and other activities shared with parents. Perception of word and sentence stress, performance in word finding, verbal intelligence (Wechsler Intelligence Scale for Children (WISC) vocabulary), and phonological awareness (production of rhymes) were measured in all children. Comparisons between children with a CI and NH were made against a subset of 21 of the children with NH who were matched to children with CIs by age, gender, socioeconomic background, and musical activity. Regression analyses, run separately for children with CIs and NH, assessed how much variance in each language task was shared with prosodic perception, the child's own music activity, and activities with parents, including singing and reading. All statistical analyses were conducted both with and without control for age and maternal education. RESULTS: Musically active children with CIs performed similarly to NH controls in all language tasks, while those who were not musically active performed more poorly. Only musically nonactive children with CIs made more phonological and semantic errors in word finding than NH controls, and word finding correlated with other language skills.
Regression analysis results for word finding and VIQ were similar for children with CIs and NH. These language skills shared considerable variance with the perception of prosodic stress and musical activities. When age and maternal education were controlled for, strong links remained between perception of prosodic stress and VIQ (shared variance: CI, 32%/NH, 16%) and between musical activities and word finding (shared variance: CI, 53%/NH, 20%). Links were always stronger for children with CIs, for whom better phonological awareness was also linked to improved stress perception and more musical activity, and parental activities altogether shared significant variance with word finding and VIQ. CONCLUSIONS: For children with CIs and NH, better perception of prosodic stress and musical activities with singing are associated with improved generative language skills. In addition, for children with CIs, parental singing has a stronger positive association with word finding and VIQ than parental reading. These results cannot address causality, but they suggest that good perception of prosodic stress, musical activities involving singing, and parental singing and reading may all be beneficial for word finding and other generative language skills in implanted children.
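The "shared variance" figures quoted above (e.g., CI 32%/NH 16%) are proportions of variance a language score shares with a predictor, with or without control variables regressed out. The following numpy sketch shows one simple way to compute such a figure; variable names like `stress_perception` are illustrative, not the study's data or exact model:

```python
import numpy as np

def shared_variance(y, x, controls=None):
    """Percentage of variance in y shared with predictor x (squared Pearson
    correlation), optionally after regressing control variables (e.g. age,
    maternal education) out of both series first."""
    def residualise(v, C):
        design = np.column_stack([np.ones(len(v))] + list(C))
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta
    if controls is not None:
        y = residualise(y, controls)
        x = residualise(x, controls)
    r = np.corrcoef(x, y)[0, 1]
    return 100.0 * r ** 2
```

When a confound (such as age) drives both variables, the controlled estimate drops well below the raw one, which is the logic behind reporting both figures.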


Subject(s)
Cochlear Implantation; Cochlear Implants; Deafness; Music; Speech Perception; Child; Deafness/surgery; Hearing; Humans; Perception
10.
Neuroimage ; 185: 96-101, 2019 01 15.
Article in English | MEDLINE | ID: mdl-30336253

ABSTRACT

Neural activity phase-locks to rhythm in both music and speech. However, the literature currently lacks a direct test of whether cortical tracking of rhythmic structure is comparable across the two domains. Moreover, although musical training improves multiple aspects of music and speech perception, the relationship between musical training and cortical tracking of rhythm has not been compared directly across domains. We recorded the electroencephalogram (EEG) from 28 participants (14 female) with a range of musical training who listened to melodies and sentences with identical rhythmic structure. We compared cerebral-acoustic coherence (CACoh) between the EEG signal and single-trial stimulus envelopes (as a measure of cortical entrainment) across domains and correlated years of musical training with CACoh. We hypothesized that neural activity would be comparably phase-locked across domains, and that the amount of musical training would be associated with increasingly strong phase locking in both domains. We found that participants with only a few years of musical training had a comparable cortical response to music and speech rhythm, partially supporting the hypothesis. However, the cortical response to music rhythm increased with years of musical training while the response to speech rhythm did not, leading to an overall greater cortical response to music rhythm across all participants. We suggest that task demands shaped the asymmetric cortical tracking across domains.
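Cerebral-acoustic coherence (CACoh) is, at its core, the magnitude-squared coherence between neural activity and the stimulus amplitude envelope. The sketch below computes it from averaged FFT periodograms; it is a simplified stand-in (rectangular window, no segment overlap) rather than the authors' implementation:

```python
import numpy as np

def msc(x, y, fs, nperseg):
    """Magnitude-squared coherence between signals x and y from averaged
    segment periodograms -- a minimal stand-in for a CACoh-style measure.
    Returns (frequencies, coherence in [0, 1] per frequency bin)."""
    n_seg = len(x) // nperseg
    X = np.fft.rfft(x[: n_seg * nperseg].reshape(n_seg, nperseg), axis=1)
    Y = np.fft.rfft(y[: n_seg * nperseg].reshape(n_seg, nperseg), axis=1)
    Sxy = (X * np.conj(Y)).mean(axis=0)   # averaged cross-spectrum
    Sxx = (np.abs(X) ** 2).mean(axis=0)   # averaged auto-spectra
    Syy = (np.abs(Y) ** 2).mean(axis=0)
    f = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return f, np.abs(Sxy) ** 2 / (Sxx * Syy)
```

For two signals sharing a rhythmic component, coherence peaks at the shared frequency; averaging over segments is what keeps the estimate below 1 for unrelated signals.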


Subject(s)
Cerebral Cortex/physiology; Music; Pitch Perception/physiology; Speech Perception/physiology; Adult; Brain Mapping/methods; Electroencephalography/methods; Female; Humans; Male; Young Adult
11.
Hum Brain Mapp ; 40(9): 2623-2638, 2019 06 15.
Article in English | MEDLINE | ID: mdl-30834624

ABSTRACT

Generation of hierarchical structures, such as the embedding of subordinate elements into larger structures, is a core feature of human cognition. Processing of hierarchies is thought to rely on lateral prefrontal cortex (PFC). However, the neural underpinnings supporting the active generation of new hierarchical levels remain poorly understood. Here, we created a new motor paradigm to isolate this active generative process by means of fMRI. Participants planned and executed identical movement sequences using different rules: a Recursive hierarchical embedding rule, generating new hierarchical levels; an Iterative rule, linearly adding items to existing hierarchical levels without generating new levels; and a Repetition condition tapping into short-term memory, without a transformation rule. We found that planning involving the generation of new hierarchical levels (Recursive condition vs. both Iterative and Repetition) activated a bilateral motor imagery network, including cortical and subcortical structures. No evidence was found for lateral PFC involvement in the generation of new hierarchical levels. Activity in the basal ganglia persisted through execution of the motor sequences in the contrast Recursive versus Iterative, but also Repetition versus Iterative, suggesting a role of these structures in motor short-term memory. These results show that the motor network is involved in the generation of new hierarchical levels during motor sequence planning, while lateral PFC activity was neither robust nor specific. We hypothesize that lateral PFC might be important for parsing hierarchical sequences in a multi-domain fashion but not for generating new hierarchical levels.


Subject(s)
Imagination/physiology; Memory, Short-Term/physiology; Motor Activity/physiology; Nerve Net/physiology; Prefrontal Cortex/physiology; Psychomotor Performance/physiology; Serial Learning/physiology; Adult; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Male; Nerve Net/diagnostic imaging; Prefrontal Cortex/diagnostic imaging; Young Adult
12.
Epilepsia ; 59(3): e23-e27, 2018 03.
Article in English | MEDLINE | ID: mdl-29388192

ABSTRACT

The objective of our study was to assess alterations in speech as a possible localizing sign in frontal lobe epilepsy. Ictal speech was analyzed in 18 patients with frontal lobe epilepsy (FLE) during seizures and in the interictal period. Matched identical words were analyzed for alterations in fundamental frequency (ƒo) as an approximation of pitch. In patients with FLE, the ƒo of ictal utterances was significantly higher than the ƒo in interictal recordings (p = 0.016). Ictal ƒo increases occurred in FLE of both right and left seizure origin. In contrast, a matched temporal lobe epilepsy (TLE) group showed less pronounced increases in ƒo, and only in patients with right-sided seizure foci. This study shows, for the first time, significant voice alterations in ictal speech in a cohort of patients with FLE, which may contribute to the localization of the epileptic focus. Interestingly, increases in ƒo were found in frontal lobe seizures originating in either hemisphere, suggesting bilateral involvement in the planning of speech production, in contrast to the more right-lateralized perception of pitch in prosodic processing.


Subject(s)
Epilepsy, Frontal Lobe/diagnosis; Epilepsy, Frontal Lobe/physiopathology; Verbal Behavior/physiology; Voice/physiology; Adolescent; Adult; Child; Child, Preschool; Electrocardiography/trends; Female; Humans; Male; Middle Aged; Young Adult
13.
J Cogn Neurosci ; 28(1): 41-54, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26351994

ABSTRACT

Complex human behavior is hierarchically organized. Whether or not syntax plays a role in this organization is currently under debate. The present ERP study uses piano performance to isolate syntactic operations in action planning and to demonstrate their priority over nonsyntactic levels of movement selection. Expert pianists were asked to execute chord progressions on a mute keyboard by copying the posture of a performing model hand shown in sequences of photos. We manipulated the final chord of each sequence in terms of Syntax (congruent/incongruent keys) and Manner (conventional/unconventional fingering), as well as the strength of its predictability by varying the length of the Context (five-chord/two-chord progressions). The production of syntactically incongruent compared to congruent chords showed a response delay that was larger in the long compared to the short context. This behavioral effect was accompanied by a centroparietal negativity in the long but not in the short context, suggesting that a syntax-based motor plan was prepared ahead. Conversely, the execution of the unconventional manner was not delayed as a function of Context and elicited an opposite electrophysiological pattern (a posterior positivity). The current data support the hypothesis that motor plans operate at the level of musical syntax and are incrementally translated to lower levels of movement selection.


Subject(s)
Auditory Perception/physiology; Brain/physiology; Evoked Potentials/physiology; Movement; Music; Acoustic Stimulation; Adult; Analysis of Variance; Electroencephalography; Female; Fourier Analysis; Humans; Male; Motor Skills/physiology; Photic Stimulation; Reaction Time; Young Adult
14.
Neurocase ; 22(6): 496-504, 2016 12.
Article in English | MEDLINE | ID: mdl-27726501

ABSTRACT

Song and speech represent two auditory categories that the brain usually classifies fairly easily. Functionally, this classification ability may depend to a great extent on characteristic features of the pitch patterns present in song melody and speech prosody. Anatomically, the temporal lobe (TL) has been discussed as playing a prominent role in the processing of both. Here we tested individuals with congenital amusia and patients with unilateral left and right TL lesions on their ability to categorize song and speech. In a forced-choice paradigm, specifically designed sung, spoken, and "ambiguous" auditory stimuli (perceived as "halfway between" song and speech) had to be classified as either "song" or "speech". Congenital amusics and TL patients, contrary to controls, exhibited a surprising bias toward classifying the ambiguous stimuli as "song", despite their apparent deficit in correctly processing features typical of song. This response bias possibly reflects a strategy whereby, based on available context information (here: a forced choice between speech and song), classification of non-processable items may be achieved through elimination of processable classes. This speech-based strategy masks the pitch processing deficit in congenital amusics and TL lesion patients.


Subject(s)
Auditory Perceptual Disorders/complications; Brain Injuries/complications; Music; Speech Perception/physiology; Temporal Lobe/pathology; Acoustic Stimulation; Auditory Perceptual Disorders/diagnostic imaging; Brain Injuries/diagnostic imaging; Brain Injuries/pathology; Female; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Neuropsychological Tests; Statistics, Nonparametric; Temporal Lobe/diagnostic imaging
15.
Neuroimage ; 100: 135-44, 2014 Oct 15.
Article in English | MEDLINE | ID: mdl-24814212

ABSTRACT

Our knowledge of temporal lobe epilepsy (TLE) with hippocampal sclerosis has evolved towards the view that this syndrome affects widespread brain networks. Diffusion weighted imaging studies have shown alterations of large white matter tracts, most notably in left temporal lobe epilepsy, but the degree of altered connections between cortical and subcortical structures remains to be clarified. We performed a whole-brain connectome analysis in 39 patients with refractory temporal lobe epilepsy and unilateral hippocampal sclerosis (20 right and 19 left) and 28 healthy subjects. We performed whole-brain probabilistic fiber tracking using MRtrix and segmented 164 cortical and subcortical structures with Freesurfer. Individual structural connectivity graphs based on these 164 nodes were computed by mapping the mean fractional anisotropy (FA) onto each tract. Connectomes were then compared using two complementary methods: permutation tests for pair-wise connections and Network Based Statistics to probe for differences in large network components. Comparison of pair-wise connections revealed a marked reduction of connectivity between left TLE patients and controls, which was strongly lateralized to the ipsilateral temporal lobe. Specifically, the infero-lateral cortex and temporal pole were strongly affected, as was the perisylvian cortex. In contrast, for right TLE, focal connectivity loss was much less pronounced and restricted to bilateral limbic structures and right temporal cortex. Analysis of large network components furthermore revealed that both left and right hippocampal sclerosis affected diffuse global and interhemispheric connectivity. Thus, left temporal lobe epilepsy was associated with a much more pronounced pattern of reduced FA, which included major landmarks of perisylvian language circuitry.
These distinct patterns of connectivity associated with unilateral hippocampal sclerosis show how a focal pathology influences global network architecture, and how left- or right-sided lesions may have differential and specific impacts on cerebral connectivity.
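The pair-wise connection comparison described above can be sketched as an edge-by-edge permutation test on FA-weighted connectivity matrices. This is an illustrative numpy sketch on synthetic data, not the study's MRtrix/Freesurfer pipeline, and it reports uncorrected p-values only:

```python
import numpy as np

def edgewise_permutation_test(group_a, group_b, n_perm=1000, seed=0):
    """Compare mean edge weight (e.g., FA) between two groups of connectivity
    matrices, edge by edge, with a label-shuffling permutation test.

    group_a, group_b : arrays of shape (subjects, nodes, nodes).
    Returns the observed mean difference per edge and uncorrected p-values.
    """
    rng = np.random.default_rng(seed)
    data = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    observed = group_a.mean(0) - group_b.mean(0)
    count = np.zeros_like(observed)
    for _ in range(n_perm):
        # Shuffle group labels and recompute the per-edge mean difference.
        perm = rng.permutation(len(data))
        diff = data[perm[:n_a]].mean(0) - data[perm[n_a:]].mean(0)
        count += np.abs(diff) >= np.abs(observed)
    p = (count + 1.0) / (n_perm + 1.0)
    return observed, p
```

In practice the resulting p-value matrix would still need multiple-comparison control across edges, which is what motivates component-level methods such as Network Based Statistics.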


Subject(s)
Connectome/methods; Epilepsy, Temporal Lobe/physiopathology; Functional Laterality/physiology; Nerve Net/physiopathology; Adult; Epilepsy, Temporal Lobe/etiology; Female; Hippocampus/pathology; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Sclerosis/complications; Sclerosis/pathology
16.
Sci Adv ; 10(20): eadp9620, 2024 May 17.
Article in English | MEDLINE | ID: mdl-38748801

ABSTRACT

Equitable collaboration between culturally diverse scientists reveals that acoustic fingerprints of human speech and song share parallel relationships across the globe.


Subject(s)
Cultural Diversity; Speech; Humans; Music
17.
Neuroimage ; 64: 134-46, 2013 Jan 01.
Article in English | MEDLINE | ID: mdl-23000255

ABSTRACT

Despite general agreement on shared syntactic resources in music and language, the neuroanatomical underpinnings of this overlap remain largely unexplored. While previous studies mainly considered frontal areas as supramodal grammar processors, the domain-general syntactic role of temporal areas has so far been neglected. Here we capitalized on the excellent spatial and temporal resolution of subdural EEG recordings to co-localize low-level syntactic processes in music and language in the temporal lobe in a within-subject design. We used Brain Surface Current Density mapping to localize and compare the neural generators of the early negativities evoked by violations of phrase structure grammar in both music and spoken language. The results show that the processing of syntactic violations relies in both domains on bilateral temporo-fronto-parietal neural networks. We found considerable overlap of these networks in the superior temporal lobe, but also differences in the hemispheric timing and relative weighting of their fronto-temporal constituents. While alluding to dissimilarities in how shared neural resources may be configured depending on the musical or linguistic nature of the perceived stimulus, the combined data lend support to a co-localization of early musical and linguistic syntax processing in the temporal lobe.


Subject(s)
Auditory Perception/physiology , Electroencephalography/methods , Language , Music , Nerve Net/physiology , Parietal Lobe/physiology , Temporal Lobe/physiology , Adolescent , Adult , Brain Mapping/methods , Female , Humans , Male , Middle Aged , Young Adult
18.
J Neurosci ; 30(10): 3572-8, 2010 Mar 10.
Article in English | MEDLINE | ID: mdl-20219991

ABSTRACT

The cognitive relationship between lyrics and tunes in song is currently under debate, with some researchers arguing that lyrics and tunes are represented as separate components, while others suggest that they are processed in an integrated fashion. The present study addressed this issue by means of a functional magnetic resonance adaptation paradigm during passive listening to unfamiliar songs. The repetition and variation of lyrics and/or tunes in blocks of six songs was crossed in a 2 × 2 factorial design to induce selective adaptation for each component. Reductions of the hemodynamic response were observed along the superior temporal sulcus and gyrus (STS/STG) bilaterally. Within these regions, the left mid-STS showed an interaction of the adaptation effects for lyrics and tunes, suggesting an integrated processing of the two components at prelexical, phonemic processing levels. The degree of integration decayed toward more anterior regions of the left STS, where the lack of such an interaction and the stronger adaptation for lyrics than for tunes was suggestive of an independent processing of lyrics, perhaps resulting from the processing of meaning. Finally, evidence for an integrated representation of lyrics and tunes was found in the left dorsal precentral gyrus (PrCG), possibly relating to the build-up of a vocal code for singing in which musical and linguistic features of song are fused. Overall, these results demonstrate that lyrics and tunes are processed at varying degrees of integration (and separation) through the consecutive processing levels allocated along the posterior-anterior axis of the left STS and the left PrCG.
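The crossed repetition/variation design described in this abstract can be sketched as follows. This is a minimal illustration of the 2 × 2 factorial logic only; the factor labels and the adaptation rule are assumptions for illustration, not taken from the study's materials or analysis code:

```python
from itertools import product

# The two factors of the design: each song component is either repeated
# or varied across a block of six songs.
FACTORS = {"lyrics": ["repeated", "varied"], "tunes": ["repeated", "varied"]}

def block_types():
    """Return the four block types produced by crossing the two factors."""
    return [dict(zip(FACTORS, levels)) for levels in product(*FACTORS.values())]

def predicted_adaptation(block):
    """Components whose repetition should selectively reduce the
    hemodynamic response (fMRI adaptation) within a block."""
    return [component for component, level in block.items() if level == "repeated"]

for block in block_types():
    print(block, "-> adaptation expected for:", predicted_adaptation(block))
```

The interaction reported for the left mid-STS corresponds to the case where the effect of repeating one component depends on whether the other is also repeated, which is exactly what crossing the two factors makes testable.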


Subject(s)
Acoustic Stimulation/methods , Adaptation, Physiological/physiology , Auditory Perception/physiology , Magnetic Resonance Imaging , Music , Recognition, Psychology/physiology , Adult , Female , Humans , Magnetic Resonance Imaging/methods , Male
19.
Brain ; 133(9): 2643-55, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20802205

ABSTRACT

Contemporary neural models of auditory language comprehension propose that the two hemispheres are differentially specialized for the processing of segmental and suprasegmental features of language. While segmental processing of syntactic and lexical-semantic information is predominantly assigned to the left hemisphere, the right hemisphere is thought to have primacy for the processing of suprasegmental prosodic information such as accentuation and boundary marking. A dynamic interplay between the hemispheres is assumed to allow for the timely coordination of both information types. The present event-related potential study investigated whether the anterior and/or posterior portion of the corpus callosum provides the crucial brain basis for the online interaction of syntactic and prosodic information. Patients with lesions in the anterior two-thirds of the corpus callosum, connecting orbital and frontal structures, or in the posterior third, connecting temporal, parietal and occipital areas, as well as matched healthy controls, were tested in a paradigm that crossed syntactic and prosodic manipulations. An anterior negativity elicited by a mismatch between syntactically predicted phrase structure and prosodic intonation was analysed as a marker for syntax-prosody interaction. Healthy controls and patients with lesions in the anterior corpus callosum showed this anterior negativity, demonstrating an intact interplay between syntax and prosody. No such effect was found in patients with lesions in the posterior corpus callosum, although they exhibited intact, prosody-independent syntactic processing comparable to that of healthy controls and patients with anterior lesions.
These data support an interplay between the speech processing streams in the left and right hemispheres via the posterior portion of the corpus callosum, which builds the brain basis for the coordination and integration of local syntactic and prosodic features during auditory speech comprehension.


Subject(s)
Brain Injuries/pathology , Corpus Callosum/physiopathology , Speech Perception/physiology , Verbal Behavior/physiology , Acoustic Stimulation/methods , Adult , Aged , Analysis of Variance , Brain Injuries/etiology , Brain Mapping , Electroencephalography/methods , Evoked Potentials, Auditory/physiology , Female , Functional Laterality/physiology , Humans , Male , Middle Aged , Neuropsychological Tests , Reaction Time/physiology , Time Factors , Young Adult
20.
Brain Sci ; 10(8)2020 Aug 02.
Article in English | MEDLINE | ID: mdl-32748810

ABSTRACT

Neurocomparative music and language research has seen major advances over the past two decades. The goal of this Special Issue, "Advances in the Neurocognition of Music and Language", was to showcase the multiple neural analogies between musical and linguistic information processing and their entwined organization in human perception and cognition, and to infer the applicability of the combined knowledge to pedagogy and therapy. Here, we summarize the main insights provided by the contributions and integrate them into current frameworks of rhythm processing, neuronal entrainment, predictive coding and cognitive control.
