ABSTRACT
BACKGROUND: Perceptual and speech production abilities of children with cochlear implants (CIs) are usually tested by word and sentence repetition or naming tests. However, these tests are quite far removed from everyday linguistic contexts. AIM: Here, we describe a way of investigating the link between language comprehension and anticipatory verbal behaviour that promotes the use of more complex listening situations. METHODS AND PROCEDURE: The setup consists of watching an audio-visual dialogue between two actors. Children's gaze switches from one speaker to the other serve as a proxy for their prediction abilities. Moreover, to better understand the basis and the impact of anticipatory behaviour, we also measured children's ability to understand the dialogue content, their speech perception and memory skills, as well as their rhythmic skills, which also require temporal predictions. Importantly, we compared the performance of children with CI with that of an age-matched group of children with normal hearing (NH). OUTCOMES AND RESULTS: While children with CI showed poorer speech perception and verbal working memory abilities than NH children, there was no difference in gaze anticipatory behaviour. Interestingly, in children with CI only, we found a significant correlation between dialogue comprehension, perceptual skills and gaze anticipatory behaviour. CONCLUSION: Our results extend previous findings showing an absence of predictive deficits in children with CI to a dialogue context. The current design seems an interesting avenue for providing an accurate and objective estimate of anticipatory language behaviour in a more ecological linguistic context, including with young children. WHAT THIS PAPER ADDS: What is already known on the subject Children with cochlear implants seem to have difficulties extracting structure from and learning sequential input patterns, possibly due to signal degradation and auditory deprivation in the first years of life.
Reduced use of contextual information and slower language processing have also been reported among children with hearing loss. What this paper adds to existing knowledge Here we show that when adopting a rather complex linguistic context, such as watching a dialogue between two individuals, children with cochlear implants are able to use speech and language structure to anticipate gaze switches to the upcoming speaker. What are the clinical implications of this work? The present design seems an interesting avenue for providing an accurate and objective estimate of anticipatory behaviour in a more ecological and dynamic linguistic context. Importantly, this measure is implicit and has previously been used with very young (normal-hearing) children, showing that they spontaneously make anticipatory gaze switches by age two. Thus, this approach may be of interest for refining speech comprehension assessment at a rather early age after cochlear implantation, where explicit behavioural tests are not always reliable and sensitive.
ABSTRACT
Stereo-electroencephalography (SEEG) is the surgical implantation of electrodes in the brain to better localize the epileptic network in pharmaco-resistant epileptic patients. This technique has exquisite spatial and temporal resolution. Still, the number and position of the electrodes in the brain are limited and determined by the semiology and/or preliminary non-invasive examinations, leaving a large number of unexplored brain structures in each patient. Here, we propose a new approach to reconstruct the activity of non-sampled structures in SEEG, based on independent component analysis (ICA) and dipole source localization. We tested this approach with an auditory stimulation dataset in ten patients. The activity directly recorded from the auditory cortex served as ground truth and was compared to ICA applied to all non-auditory electrodes. Our results show that the activity from the auditory cortex can be reconstructed at the single-trial level from contacts as far as ~40 mm from the source. Importantly, this reconstructed activity is localized via dipole fitting in the proximity of the original source. In addition, we show that the size of the confidence interval of the dipole fit is a good indicator of the reliability of the result, which depends on the geometry of the SEEG implantation. Overall, our approach allows reconstructing the activity of structures far from the electrode locations, partially overcoming the spatial sampling limitation of intracerebral recordings.
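As an illustration of the ICA-based reconstruction idea described above, here is a minimal synthetic sketch: a "distant" source is mixed into several simulated contacts, ICA unmixes them, and the component best correlated with the ground-truth source is taken as the reconstruction. All signals, channel counts and noise levels are invented for illustration and are not the study's data or parameters.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Simulate a "distant" source (e.g., an auditory evoked response) mixed
# into 8 non-auditory SEEG contacts, plus independent contact noise.
# The mixing weights stand in for volume conduction; values are illustrative.
n_samples = 2000
t = np.linspace(0, 2, n_samples)
source = np.sin(2 * np.pi * 6 * t) * np.exp(-((t - 0.5) ** 2) / 0.05)

mixing = rng.normal(0, 1, size=8)                  # contact-wise leadfield weights
noise = rng.normal(0, 0.3, size=(8, n_samples))    # independent contact noise
contacts = mixing[:, None] * source + noise        # 8 x n_samples recording

# ICA unmixes the contacts; the component most correlated with the
# ground-truth source is taken as the reconstructed source activity.
ica = FastICA(n_components=8, random_state=0, max_iter=1000)
components = ica.fit_transform(contacts.T).T       # 8 x n_samples

corrs = [abs(np.corrcoef(c, source)[0, 1]) for c in components]
best = components[int(np.argmax(corrs))]
print(f"best |r| with ground truth: {max(corrs):.2f}")
```

In the actual method the recovered component would additionally be localized with dipole fitting; this sketch only covers the unmixing and identification step.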
Subjects
Brain Mapping, Epilepsy, Humans, Brain Mapping/methods, Reproducibility of Results, Electroencephalography/methods, Brain
ABSTRACT
Speech perception is mediated by both left and right auditory cortices but with differential sensitivity to specific acoustic information contained in the speech signal. A detailed description of this functional asymmetry is missing, and the underlying models are widely debated. We analyzed cortical responses from 96 epilepsy patients with electrode implantation in left or right primary, secondary, and/or association auditory cortex (AAC). We presented short acoustic transients to noninvasively estimate the dynamical properties of multiple functional regions along the auditory cortical hierarchy. We show remarkably similar bimodal spectral response profiles in left and right primary and secondary regions, with evoked activity composed of dynamics in the theta (around 4-8 Hz) and beta-gamma (around 15-40 Hz) ranges. Beyond these first cortical levels of auditory processing, a hemispheric asymmetry emerged, with delta and beta band (3/15 Hz) responsivity prevailing in the right hemisphere and theta and gamma band (6/40 Hz) activity prevailing in the left. This asymmetry is also present during syllable presentation, but the evoked responses in AAC are more heterogeneous, with the co-occurrence of alpha (around 10 Hz) and gamma (>25 Hz) activity bilaterally. These intracranial data provide a more fine-grained and nuanced characterization of cortical auditory processing in the two hemispheres, shedding light on the neural dynamics that potentially shape auditory and speech processing at different levels of the cortical hierarchy.
Subjects
Auditory Cortex/physiology, Auditory Evoked Potentials/physiology, Speech Perception/physiology, Acoustic Stimulation, Implanted Electrodes, Electroencephalography, Epilepsy, Female, Functional Laterality/physiology, Humans, Male
ABSTRACT
Cortical oscillations have been proposed to play a functional role in speech and music perception, attentional selection, and working memory, via the mechanism of neural entrainment. One of the properties of neural entrainment that is often taken for granted is that its modulatory effect on ongoing oscillations outlasts rhythmic stimulation. We tested the existence of this phenomenon by studying cortical neural oscillations during and after presentation of melodic stimuli in a passive perception paradigm. Melodies were composed of ~60 and ~80 Hz tones embedded in a 2.5 Hz stream. Using intracranial and surface recordings in male and female humans, we reveal persistent oscillatory activity in the high-γ band in response to the tones throughout the cortex, well beyond auditory regions. By contrast, in response to the 2.5 Hz stream, no persistent activity in any frequency band was observed. We further show that our data are well captured by a damped harmonic oscillator model and can be classified into three classes of neural dynamics, with distinct damping properties and eigenfrequencies. This model provides a mechanistic and quantitative explanation of the frequency selectivity of auditory neural entrainment in the human cortex. SIGNIFICANCE STATEMENT: It has been proposed that the functional role of cortical oscillations is subtended by a mechanism of entrainment, the synchronization in phase or amplitude of neural oscillations to a periodic stimulation. One of the properties of neural entrainment that is often taken for granted is that its modulatory effect on ongoing oscillations outlasts rhythmic stimulation. Using intracranial and surface recordings of humans passively listening to rhythmic auditory stimuli, we reveal consistent oscillatory responses throughout the cortex, with persistent activity of high-γ oscillations. In contrast, neural oscillations do not outlast low-frequency acoustic dynamics.
We interpret our results as reflecting the properties of a damped harmonic oscillator, a model ubiquitous in physics but rarely used in neuroscience.
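The damped harmonic oscillator model mentioned above can be sketched concretely: an evoked response is modeled as an exponentially decaying cosine, and the damping time and eigenfrequency are estimated by least-squares fitting. The functional form is the standard impulse response of a damped oscillator; all parameter values below are illustrative assumptions, not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Damped harmonic oscillator impulse response: amplitude decays with time
# constant tau while oscillating at eigenfrequency freq.
def damped_osc(t, amp, tau, freq, phase):
    return amp * np.exp(-t / tau) * np.cos(2 * np.pi * freq * t + phase)

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
true = dict(amp=1.0, tau=0.15, freq=8.0, phase=0.3)   # illustrative values
response = damped_osc(t, **true) + rng.normal(0, 0.05, t.size)

# Fit the model to the noisy "evoked response" and recover its dynamics.
popt, _ = curve_fit(damped_osc, t, response, p0=[1, 0.2, 7, 0])
amp, tau, freq, phase = popt
print(f"eigenfrequency = {freq:.1f} Hz, damping time = {tau * 1000:.0f} ms")
```

Distinct classes of neural dynamics would then correspond to clusters of the fitted (tau, freq) pairs across recording sites.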
Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Acoustic Stimulation, Adult, Female, Humans, Magnetic Resonance Imaging, Magnetoencephalography, Male, Periodicity, Speech/physiology, Young Adult
ABSTRACT
Good scientific practice (GSP) refers to both explicit and implicit rules, recommendations, and guidelines that help scientists to produce work that is of the highest quality at any given time, and to efficiently share that work with the community for further scrutiny or utilization. For experimental research using magneto- and electroencephalography (MEEG), GSP includes specific standards and guidelines for technical competence, which are periodically updated and adapted to new findings. However, GSP also needs to be regularly revisited in a broader light. At the LiveMEEG 2020 conference, a reflection on GSP was fostered that included explicitly documented guidelines and technical advances, but also emphasized intangible GSP: a general awareness of personal, organizational, and societal realities and how they can influence MEEG research. This article provides an extensive report on most of the LiveMEEG contributions and new literature, with the additional aim to synthesize ongoing cultural changes in GSP. It first covers GSP with respect to cognitive biases and logical fallacies, pre-registration as a tool to avoid those and other early pitfalls, and a number of resources to enable collaborative and reproducible research as a general approach to minimize misconceptions. Second, it covers GSP with respect to data acquisition, analysis, reporting, and sharing, including new tools and frameworks to support collaborative work. Finally, GSP is considered in light of ethical implications of MEEG research and the resulting responsibility that scientists have to engage with societal challenges. Considering, among other things, the benefits of peer review and open access at all stages, the need to coordinate larger international projects, the complexity of MEEG subject matter, and today's prioritization of fairness, privacy, and the environment, we find that current GSP tends to favor collective and cooperative work, for both scientific and societal reasons.
Subjects
Electroencephalography, Humans
ABSTRACT
It is poorly known whether musical training is associated with improvements in general cognitive abilities, such as statistical learning (SL). In standard SL paradigms, musicians have shown better performance than nonmusicians. However, this advantage could be due to differences in auditory discrimination, in memory, or truly in the ability to learn sequence statistics. Unfortunately, these different hypotheses make similar predictions in terms of expected results. To dissociate them, we developed a Bayesian model and recorded electroencephalography (EEG). Our results confirm that musicians perform approximately 15% better than nonmusicians at predicting items in auditory sequences that embed either low- or high-order statistics. These higher performances are explained in the model by parameters governing the learning of high-order statistics and the selection stage noise. EEG recordings reveal a neural underpinning of the musicians' advantage: the P300 amplitude correlates with the surprise elicited by each item, and does so more strongly for musicians. Finally, early EEG components correlate with the surprise elicited by low-order statistics, whereas late EEG components correlate with the surprise elicited by high-order statistics, and this effect is stronger for musicians. Overall, our results demonstrate that musical expertise is associated with improved neural SL in the auditory domain. SIGNIFICANCE STATEMENT: It is poorly known whether musical training leads to improvements in general cognitive skills. One fundamental cognitive ability, SL, is thought to be enhanced in musicians, but previous studies have reported mixed results. This is because such an advantage in musicians could reflect very different explanations, such as improvement in auditory discrimination or in memory. To solve this problem, we developed a Bayesian model and recorded EEG to dissociate these explanations.
Our results reveal that musical expertise is truly associated with an improved ability to learn sequence statistics, especially high-order statistics. This advantage is reflected in the electroencephalographic recordings, where the P300 amplitude is more sensitive to surprising items in musicians than in nonmusicians.
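The key quantity linking the model to the EEG in the abstract above is item-by-item surprise. A minimal ideal-observer sketch of that computation is shown below: incremental first-order transition counts with Laplace smoothing, surprise = -log2 p(item | previous item). The study's actual Bayesian model is richer (high-order statistics, a selection-noise stage); this only illustrates how surprise is derived from learned sequence statistics.

```python
import numpy as np

# Incremental ideal observer over first-order transition probabilities.
# surprise(item) = -log2 p(item | previous item), with Laplace-smoothed
# counts updated after each observation.
def surprise(seq, n_items=2):
    counts = np.ones((n_items, n_items))       # Laplace prior: one pseudo-count
    out = []
    for prev, cur in zip(seq[:-1], seq[1:]):
        p = counts[prev, cur] / counts[prev].sum()
        out.append(-np.log2(p))
        counts[prev, cur] += 1                 # learn after observing
    return out

seq = [0, 1, 0, 1, 0, 1, 0, 1, 0, 0]           # mostly alternating, final violation
s = surprise(seq)
print(f"expected transition: {s[-2]:.2f} bits; violation: {s[-1]:.2f} bits")
```

In the EEG analysis, this per-item surprise would be regressed against component amplitudes such as the P300.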
Subjects
Music, Acoustic Stimulation/methods, Auditory Perception, Bayes Theorem, Electroencephalography/methods, Learning, Music/psychology
ABSTRACT
OBJECTIVES: Children with hearing loss (HL), in spite of early cochlear implantation, often struggle considerably with language acquisition. Previous research has shown a benefit of rhythmic training on linguistic skills in children with HL, suggesting that improving rhythmic capacities could help attenuate language difficulties. However, little is known about general rhythmic skills of children with HL and how they relate to speech perception. The aim of this study is twofold: (1) to assess the abilities of children with HL in different rhythmic sensorimotor synchronization tasks compared to a normal-hearing control group and (2) to investigate a possible relation between sensorimotor synchronization abilities and speech perception abilities in children with HL. DESIGN: A battery of sensorimotor synchronization tests with stimuli of varying acoustic and temporal complexity was used: a metronome, different musical excerpts, and complex rhythmic patterns. Synchronization abilities were assessed in 32 children (aged from 5 to 10 years) with a severe to profound HL, mainly fitted with one or two cochlear implants (n = 28) or with hearing aids (n = 4). Working memory and sentence repetition abilities were also assessed. Performance was compared to an age-matched control group of 24 children with normal hearing. The comparison took into account variability in working memory capacities. For children with HL only, we computed linear regressions on speech, sensorimotor synchronization, and working memory abilities, including device-related variables such as onset of device use, type of device, and duration of use. RESULTS: Compared to the normal-hearing group, children with HL performed poorly in all sensorimotor synchronization tasks, but the effect size was greater for complex as compared to simple stimuli. Group differences in working memory did not explain this result.
Linear regression analysis revealed that working memory, performance in synchronizing to complex rhythms, age, and duration of device use predicted the number of correct syllables produced in a sentence repetition task. CONCLUSION: Despite early cochlear implantation or hearing aid use, hearing impairment affects the quality of temporal processing of acoustic stimuli in congenitally deaf children. This deficit seems to be more severe with stimuli of increasing rhythmic complexity, highlighting a difficulty in structuring sounds according to a temporal hierarchy.
Subjects
Cochlear Implantation, Cochlear Implants, Hearing Aids, Hearing Loss, Speech Perception, Child, Preschool Child, Humans
ABSTRACT
When listening to temporally regular rhythms, most people are able to extract the beat. Evidence suggests that the neural mechanism underlying this ability is the phase alignment of endogenous oscillations to the external stimulus, allowing for the prediction of upcoming events (i.e., dynamic attending). Relatedly, individuals with dyslexia may have deficits in the entrainment of neural oscillations to external stimuli, especially at low frequencies. The current experiment investigated rhythmic processing in adults with dyslexia and matched controls. Regular and irregular rhythms were presented to participants while electroencephalography was recorded. Regular rhythms contained the beat at 2 Hz, while acoustic energy was maximal at 4 and 8 Hz. These stimuli allowed us to investigate whether the brain responds non-linearly to the beat level of a rhythmic stimulus, and whether beat-based processing differs between dyslexic and control participants. Both groups showed enhanced stimulus-brain coherence for regular compared to irregular rhythms at the frequencies of interest, with an overrepresentation of the beat level in the brain compared to the acoustic signal. In addition, we found evidence that controls extracted subtle temporal regularities from irregular stimuli, whereas dyslexics did not. Findings are discussed in relation to dynamic attending theory and rhythmic processing deficits in dyslexia.
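Stimulus-brain coherence, the measure used in the abstract above, can be illustrated with a toy example: a 2 Hz "beat" drives a simulated EEG channel, and magnitude-squared coherence at the beat frequency is compared with a neighboring frequency. All signals, the sampling rate and the SNR are synthetic assumptions for illustration only.

```python
import numpy as np
from scipy.signal import coherence

fs = 250
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(2)

# Stimulus with a 2 Hz beat (small noise floor so off-beat bins are defined),
# and a simulated EEG channel that tracks the beat with a phase lag.
stimulus = np.sin(2 * np.pi * 2 * t) + 0.1 * rng.normal(size=t.size)
eeg = 0.6 * np.sin(2 * np.pi * 2 * t + 0.4) + rng.normal(0, 1, t.size)

# Magnitude-squared coherence, averaged over 4 s Welch segments.
f, coh = coherence(stimulus, eeg, fs=fs, nperseg=fs * 4)
at_beat = coh[np.argmin(abs(f - 2.0))]
off_beat = coh[np.argmin(abs(f - 3.5))]
print(f"coherence at 2 Hz: {at_beat:.2f}, at 3.5 Hz: {off_beat:.2f}")
```

A group comparison like the one described would contrast such coherence values, at the beat and acoustic-energy frequencies, between regular and irregular rhythms.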
Subjects
Auditory Perception/physiology, Dyslexia/physiopathology, Time Perception/physiology, Adult, Electroencephalography, Female, Humans, Male, Young Adult
ABSTRACT
Musical rhythm positively impacts subsequent speech processing. However, the neural mechanisms underlying this phenomenon are so far unclear. We investigated whether carryover effects from a preceding musical cue to a speech stimulus result from a continuation of neural phase entrainment to periodicities that are present in both music and speech. Participants listened to and memorized French metrical sentences that contained (quasi-)periodic recurrences of accents and syllables. Speech stimuli were preceded by a rhythmically regular or irregular musical cue. Our results show that the presence of a regular cue modulates the neural response, as estimated by EEG power spectral density, intertrial coherence, and source analyses at critical frequencies, during speech processing compared with the irregular condition. Importantly, intertrial coherences for regular cues were indicative of the participants' success in memorizing the subsequent speech stimuli. These findings underscore the highly adaptive nature of neural phase entrainment across fundamentally different auditory stimuli. They also support current models of neural phase entrainment as a tool of predictive timing and attentional selection across cognitive domains.
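Intertrial coherence (ITC), one of the measures in the abstract above, quantifies phase consistency across trials: ITC = |mean over trials of exp(i*phase)|, with values near 1 for phase-locked trials and near 0 for random phase. The sketch below computes it on synthetic phase-locked versus phase-jittered trials; sampling rate, trial counts and noise levels are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

fs, n_trials = 200, 40
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(3)

def itc(trials):
    """Intertrial coherence per time point from Hilbert instantaneous phase."""
    phases = np.angle(hilbert(trials, axis=1))
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# Phase-locked trials: same 4 Hz phase each trial, plus noise.
locked = np.array([np.sin(2 * np.pi * 4 * t) + 0.3 * rng.normal(size=t.size)
                   for _ in range(n_trials)])
# Jittered trials: random phase offset each trial.
jittered = np.array([np.sin(2 * np.pi * 4 * t + rng.uniform(0, 2 * np.pi))
                     + 0.3 * rng.normal(size=t.size) for _ in range(n_trials)])

print(f"ITC locked: {itc(locked).mean():.2f}, jittered: {itc(jittered).mean():.2f}")
```

In practice ITC is usually computed per frequency band (e.g., after narrowband filtering or wavelet decomposition) rather than on the broadband signal as here.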
Subjects
Brain Mapping, Auditory Evoked Potentials/physiology, Periodicity, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation, Adult, Cues (Psychology), Electroencephalography, Female, Humans, Male, Music, Nonparametric Statistics, Young Adult
ABSTRACT
The cortico-limbic system is critically involved in emotional responses and resulting adaptive behaviors. Within this circuit, complementary regions are believed to be involved in either the appraisal or the regulation of affective state. However, the respective contribution of these bottom-up and top-down mechanisms during emotion processing remains to be clarified. We used a new functional magnetic resonance imaging (fMRI) paradigm varying three parameters: emotional valence, emotional congruency, and allocation of attention, to distinguish the functional variation in activity and connectivity between the amygdala, anterior cingulate cortex (ACC), and dorsolateral prefrontal cortex (DLPFC). Bottom-up appraisal of negative compared with positive stimuli led to a greater amygdala response and stronger functional interaction between the amygdala and both dorsal ACC and DLPFC. Top-down resolution of emotional conflict was associated with increased activity within the ACC and higher functional connectivity between this structure and both the amygdala and DLPFC. Finally, increased top-down attentional control caused greater engagement of the DLPFC, accompanied by increased connectivity between DLPFC and dorsal ACC. This novel task provides an efficient tool for exploring bottom-up and top-down processes underlying emotion and may be particularly helpful for investigating the neurofunctional underpinnings of psychiatric disorders.
Subjects
Amygdala/physiology, Attention/physiology, Emotions/physiology, Gyrus Cinguli/physiology, Limbic System/cytology, Prefrontal Cortex/physiology, Adult, Major Depressive Disorder/physiopathology, Female, Humans, Magnetic Resonance Imaging/methods, Male, Middle Aged, Prefrontal Cortex/physiopathology, Young Adult
ABSTRACT
This event-related potential study examined whether French listeners use stress at a phonological level when discriminating between stressed and unstressed words in their language. Participants heard five words and made same/different decisions about the final word (male voice) with respect to the four preceding words (different female voices). Compared to the first four context words, the target word was (i) phonemically and prosodically identical (/ʒu/-/ʒu/; control condition), (ii) phonemically identical but differing in the presence of a primary stress (/ʒu'/-/ʒu/), (iii) prosodically identical but phonemically different (/ʒo/-/ʒu/), or (iv) both phonemically and prosodically different (/ʒo'/-/ʒu/). Crucially, differences on the P200 and the following N200 components were observed for the /ʒu'/-/ʒu/ and the /ʒo/-/ʒu/ conditions compared to the /ʒu/-/ʒu/ control condition. Moreover, on the N200 component more negativity was observed for the /ʒo/-/ʒu/ condition compared to the /ʒu'/-/ʒu/ condition, while no difference emerged between these two conditions on the earlier P200 component. Crucially, the results suggest that French listeners are capable of creating an abstract representation of stress. However, as they receive more input, participants react more strongly to phonemic than to stress information.
Subjects
Phonetics, Speech Acoustics, Speech Perception, Voice Quality, Acoustic Stimulation, Acoustics, Adolescent, Adult, Speech Audiometry, Cues (Psychology), Discrimination (Psychology), Electroencephalography, Female, Humans, Male, Multilingualism, Psychoacoustics, Sound Spectrography, Young Adult
ABSTRACT
Converging evidence points to a link between anxiety proneness and altered emotional functioning, including threat-related biases in selective attention and higher susceptibility to emotionally ambiguous stimuli. However, during these complex emotional situations, it remains unclear how trait anxiety affects the engagement of the prefrontal emotional control system and particularly the anterior cingulate cortex (ACC), a core region at the intersection of the limbic and prefrontal systems. Using an emotional conflict task and functional magnetic resonance imaging (fMRI), we investigated in healthy subjects the relations between trait anxiety and both regional activity and functional connectivity (psychophysiological interaction) of the ACC. Higher levels of anxiety were associated with stronger task-related activation in the ACC but with reduced functional connectivity between the ACC and lateral prefrontal cortex (LPFC). These results support the hypothesis that when one is faced with emotionally incompatible information, anxiety leads to inefficient high-order control, characterized by insufficient ACC-LPFC functional coupling and possibly compensatory increases in ACC activation. Our findings provide a deeper understanding of the pathophysiology of the neural circuitry underlying anxiety and may offer potential treatment markers for anxiety disorders.
Subjects
Anxiety/psychology, Psychological Conflict, Emotions/physiology, Personality/physiology, Prefrontal Cortex/physiology, Adult, Brain Mapping, Female, Gyrus Cinguli/physiology, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Self Report, Young Adult
ABSTRACT
When we direct attentional resources to a certain point in time, expectation and preparedness are heightened and behavior is, as a result, more efficient. This future-oriented attending can be guided either voluntarily, by externally defined cues, or implicitly, by perceived temporal regularities. Inspired by dynamic attending theory, our aim was to study the extent to which metrical structure, with its beats of greater or lesser relative strength, modulates attention implicitly over time and to uncover the neural circuits underlying this process of dynamic attending. We used fMRI to investigate whether auditory meter generated temporal expectancies and, consequently, how it affected processing of auditory and visual targets. Participants listened to a continuous auditory metrical sequence and pressed a button whenever an auditory or visual target was presented. The independent variable was the time of target presentation with respect to the metrical structure of the sequence. Participants' RTs to targets occurring on strong metrical positions were significantly faster than responses to events falling on weak metrical positions. Events falling on strong beats were accompanied by increased activation of the left inferior parietal cortex, a region crucial for orienting attention in time, and by greater functional connectivity between the left inferior parietal cortex and the visual and auditory cortices, the SMA and the cerebellum. These results support the predictions of dynamic attending theory that metrical structure, with its relative strong and weak beats, modulates attentional resources over time and, in turn, affects the functioning of both perceptual and motor preparatory systems.
Subjects
Attention/physiology, Auditory Perception/physiology, Parietal Lobe/physiology, Psychological Signal Detection/physiology, Time Perception/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Brain/physiology, Female, Functional Laterality, Humans, Magnetic Resonance Imaging, Male, Neuropsychological Tests, Physiological Pattern Recognition/physiology, Photic Stimulation, Psychophysics, Reaction Time, Time Factors, Young Adult
ABSTRACT
Rhythmic entrainment is an important component of emotion induction by music, but the brain circuits recruited during spontaneous entrainment of attention by music, and the influence of the subjective emotional feelings evoked by music, remain largely unresolved. In this study, we used fMRI to test whether the metric structure of music entrains brain activity and how music pleasantness influences such entrainment. Participants listened to piano music while performing a speeded visuomotor detection task in which targets appeared time-locked to either strong or weak beats. Each musical piece was presented in both a consonant/pleasant and a dissonant/unpleasant version. Consonant music facilitated target detection, and targets presented synchronously with strong beats were detected faster. fMRI showed increased activation of the bilateral caudate nucleus when responding on strong beats, whereas consonance enhanced activity in attentional networks. Meter and consonance selectively interacted in the caudate nucleus, with greater meter effects during dissonant than consonant music. These results reveal that the basal ganglia, involved both in emotion and rhythm processing, critically contribute to rhythmic entrainment of subcortical brain circuits by music.
Subjects
Brain/physiology, Music/psychology, Periodicity, Pleasure, Adult, Auditory Perception/physiology, Female, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Young Adult
ABSTRACT
The role of music training in fostering brain plasticity and developing high cognitive skills, notably linguistic abilities, is of great interest from both a scientific and a societal perspective. Here, we report results of a longitudinal study over 2 years using both behavioral and electrophysiological measures and a test-training-retest procedure to examine the influence of music training on speech segmentation in 8-year-old children. Children were pseudo-randomly assigned to either music or painting training and were tested on their ability to extract meaningless words from a continuous flow of nonsense syllables. While no between-group differences were found before training, both behavioral and electrophysiological measures showed improved speech segmentation skills across testing sessions for the music group only. These results show that music training directly facilitates speech segmentation, thereby pointing to the importance of music for speech perception and, more generally, for children's language development. Finally, these results have strong implications for promoting the development of music-based remediation strategies for children with language-based learning impairments.
Subjects
Child Development, Music/psychology, Speech, Child, Female, Humans, Longitudinal Studies, Male, Paintings/psychology, Psychological Practice
ABSTRACT
Why do humans spontaneously dance to music? To test the hypothesis that motor dynamics reflect predictive timing during music listening, we created melodies with varying degrees of rhythmic predictability (syncopation) and asked participants to rate their wanting-to-move (groove) experience. Degree of syncopation and groove ratings are quadratically correlated. Magnetoencephalography data showed that, while auditory regions track the rhythm of melodies, beat-related 2-hertz activity and neural dynamics at delta (1.4 hertz) and beta (20 to 30 hertz) rates in the dorsal auditory pathway code for the experience of groove. Critically, the left sensorimotor cortex coordinates these groove-related delta and beta activities. These findings align with the predictions of a neurodynamic model, suggesting that oscillatory motor engagement during music listening reflects predictive timing and is effected by the interaction of neural dynamics along the dorsal auditory pathway.
Subjects
Music, Humans, Cell Membrane, Cerebral Cortex, Magnetoencephalography
ABSTRACT
Speech comprehension is enhanced when preceded (or accompanied) by a congruent rhythmic prime reflecting the metrical sentence structure. Although these phenomena have been described for auditory and motor primes separately, their respective and synergistic contribution has not been addressed. In this experiment, participants performed a speech comprehension task on degraded speech signals that were preceded by a rhythmic prime that could be auditory, motor or audiomotor. Both auditory and audiomotor rhythmic primes facilitated speech comprehension speed. While the presence of a purely motor prime (unpaced tapping) did not globally benefit speech comprehension, comprehension accuracy scaled with the regularity of motor tapping. In order to investigate inter-individual variability, participants also performed a Spontaneous Speech Synchronization test. The strength of the estimated perception-production coupling correlated positively with overall speech comprehension scores. These findings are discussed in the framework of the dynamic attending and active sensing theories.
Subjects
Comprehension, Speech Perception, Humans, Speech Perception/physiology, Male, Female, Young Adult, Comprehension/physiology, Adult, Acoustic Stimulation, Psychomotor Performance/physiology, Auditory Perception/physiology, Speech/physiology
ABSTRACT
To what extent do speech and music processing rely on domain-specific and domain-general neural networks? Using whole-brain intracranial EEG recordings in 18 epilepsy patients listening to natural, continuous speech or music, we investigated the presence of frequency-specific and network-level brain activity. We combined this with a statistical approach in which a clear operational distinction is made between shared, preferred, and domain-selective neural responses. We show that the majority of focal and network-level neural activity is shared between speech and music processing. Our data also reveal an absence of anatomical regional selectivity. Instead, domain-selective neural responses are restricted to distributed and frequency-specific coherent oscillations, typical of spectral fingerprints. Our work highlights the importance of considering natural stimuli and brain dynamics in their full complexity to map cognitive and brain functions.
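One way to make the shared / preferred / domain-selective distinction mentioned above concrete is a simple decision rule over a site's responses in the two conditions. The thresholds and the exact rule below are illustrative assumptions, not the study's statistical procedure (which operates on significance tests of focal and network-level activity).

```python
# Hedged sketch of a shared / preferred / domain-selective decision rule.
# `threshold` plays the role of a significance criterion; values are invented.
def classify_response(speech_resp, music_resp, baseline=0.0, threshold=0.5):
    """Classify a recording site from its mean response in each condition."""
    sp = speech_resp - baseline > threshold      # responsive to speech?
    mu = music_resp - baseline > threshold       # responsive to music?
    if sp and mu:
        # responsive to both: "preferred" if clearly stronger for one domain
        if abs(speech_resp - music_resp) > threshold:
            return "preferred"
        return "shared"
    if sp or mu:
        return "domain-selective"
    return "unresponsive"

print(classify_response(1.0, 0.9), classify_response(1.2, 0.1))
```

Applied across all sites and frequency bands, such a rule yields the proportions of shared versus selective responses that the abstract summarizes.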
Subjects
Music, Humans, Male, Female, Adult, Nerve Net/physiology, Speech/physiology, Auditory Perception/physiology, Epilepsy/physiopathology, Young Adult, Electroencephalography, Cerebral Cortex/physiology, Electrocorticography, Speech Perception/physiology, Middle Aged, Brain Mapping
ABSTRACT
One way to investigate the cortical tracking of continuous auditory stimuli is to use the stimulus reconstruction approach. However, the cognitive and behavioral factors impacting this cortical representation remain largely overlooked. Two possible candidates are familiarity with the stimulus and the ability to resist internal distractions. To explore the possible impacts of these two factors on the cortical representation of natural music stimuli, forty-one participants listened to monodic natural music stimuli while we recorded their neural activity. Using the stimulus reconstruction approach and linear mixed models, we found that familiarity positively impacted the reconstruction accuracy of music stimuli and that this effect of familiarity was modulated by mind wandering.
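The stimulus reconstruction approach referred to above can be sketched as a backward model: a ridge-regularized linear decoder maps multichannel neural data (with time lags) back to the stimulus envelope, and reconstruction accuracy is the correlation between reconstructed and actual envelopes on held-out data. Everything below is synthetic and illustrative; channel counts, lags and the ridge penalty are assumptions, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(4)
fs, n_ch, n_lags = 100, 16, 10
t = np.arange(0, 120, 1 / fs)

# Synthetic slow "music envelope": smoothed, rectified noise.
envelope = np.abs(np.convolve(rng.normal(size=t.size), np.hanning(25), "same"))

# Simulated EEG: each channel is a delayed, weighted copy of the envelope
# plus independent noise (delays drawn within the decoder's lag window).
eeg = np.stack([np.roll(envelope, rng.integers(1, n_lags)) * rng.normal()
                + rng.normal(0, 1, t.size) for _ in range(n_ch)])

# Lagged design matrix; first half trains the decoder, second half tests it.
X = np.hstack([np.roll(eeg, -lag, axis=1).T for lag in range(n_lags)])
half = t.size // 2
Xtr, Xte, ytr, yte = X[:half], X[half:], envelope[:half], envelope[half:]

lam = 1e2                                            # ridge penalty (arbitrary)
w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(X.shape[1]), Xtr.T @ ytr)
r = np.corrcoef(Xte @ w, yte)[0, 1]
print(f"reconstruction accuracy r = {r:.2f}")
```

Factors like familiarity or mind wandering would then be tested as predictors of this per-stimulus accuracy, e.g. with linear mixed models as described.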
ABSTRACT
Musical training is known to modify auditory perception and related cortical organization. Here, we show that these modifications may extend to higher cognitive functions and generalize to processing of speech. Previous studies have shown that adults and newborns can segment a continuous stream of linguistic and nonlinguistic stimuli based only on probabilities of occurrence between adjacent syllables or tones. In the present experiment, we used an artificial (sung) language learning design coupled with an electrophysiological approach. While behavioral results were not clear-cut in showing an effect of expertise, event-related potential data showed that musicians learned both the musical and the linguistic structures of the sung language better than nonmusicians did. We discuss these findings in terms of practice-related changes in auditory processing, stream segmentation, and memory processes.