Results 1 - 20 of 79
1.
J Neurosci ; 44(1), 2024 Jan 03.
Article in English | MEDLINE | ID: mdl-37949655

ABSTRACT

The key assumption of the predictive coding framework is that internal representations are used to generate predictions of how the sensory input will look in the immediate future. These predictions are tested against the actual input by the so-called prediction error units, which encode the residuals of the predictions. What happens to prediction errors, however, if predictions drawn by different stages of the sensory hierarchy contradict each other? To answer this question, we conducted two fMRI experiments while female and male human participants listened to sequences of sounds: pure tones in the first experiment and frequency-modulated sweeps in the second experiment. In both experiments, we used repetition to induce predictions based on stimulus statistics (stats-informed predictions) and abstract rules disclosed in the task instructions to induce an orthogonal set of (task-informed) predictions. We tested three alternative scenarios: neural responses in the auditory sensory pathway encode prediction error with respect to (1) the stats-informed predictions, (2) the task-informed predictions, or (3) a combination of both. Results showed that neural populations in all recorded regions (bilateral inferior colliculus, medial geniculate body, and primary and secondary auditory cortices) encode prediction error with respect to a combination of the two orthogonal sets of predictions. The findings suggest that predictive coding exploits the non-linear architecture of the auditory pathway for the transmission of predictions. Such non-linear transmission of predictions might be crucial for the predictive coding of complex auditory signals like speech.

SIGNIFICANCE STATEMENT Sensory systems exploit our subjective expectations to make sense of an overwhelming influx of sensory signals. It is still unclear how expectations at each stage of the processing pipeline are used to predict the representations at the other stages. The current view is that this transmission is hierarchical and linear. Here we measured fMRI responses in auditory cortex, sensory thalamus, and midbrain while we induced two sets of mutually inconsistent expectations on the sensory input, each putatively encoded at a different stage. We show that responses at all stages are concurrently shaped by both sets of expectations. The results challenge the hypothesis that expectations are transmitted linearly and provide a normative explanation of the non-linear physiology of the corticofugal sensory system.
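
To make the three scenarios concrete, the sketch below computes prediction errors for a toy tone sequence under a stats-informed prediction, a task-informed prediction, and a weighted combination of the two (hypothetical values and weighting, not the study's stimuli or analysis):

import numpy as np

# Toy sequence of tone frequencies (Hz); the deviant breaks the repetition.
stimuli = np.array([440.0, 440.0, 440.0, 880.0, 440.0])
stats_pred = np.full(5, 440.0)   # repetition-based (stats-informed) prediction
task_pred = np.full(5, 880.0)    # rule-based (task-informed) prediction

def prediction_error(observed, predicted):
    """Residual between the observed input and the prediction (absolute difference)."""
    return np.abs(observed - predicted)

pe_stats = prediction_error(stimuli, stats_pred)     # scenario 1
pe_task = prediction_error(stimuli, task_pred)       # scenario 2
w = 0.5                                              # assumed weighting of the two prediction sets
pe_combined = prediction_error(stimuli, w * stats_pred + (1 - w) * task_pred)  # scenario 3

print(pe_stats, pe_task, pe_combined, sep="\n")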


Subjects
Auditory Cortex, Auditory Pathways, Humans, Male, Female, Auditory Pathways/physiology, Auditory Perception/physiology, Auditory Cortex/physiology, Brain/physiology, Sound, Acoustic Stimulation
2.
Brain ; 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39110638

ABSTRACT

Developmental dyslexia (DD) is one of the most common learning disorders, affecting millions of children and adults worldwide. To date, scientific research has attempted to explain DD primarily based on pathophysiological alterations in the cerebral cortex. In contrast, several decades ago, pioneering research on five post-mortem human brains suggested that a core characteristic of DD might be morphological alterations in a specific subdivision of the visual thalamus, the magnocellular LGN (M-LGN). However, due to considerable technical challenges in investigating LGN subdivisions non-invasively in humans, this finding was never confirmed in-vivo, and its relevance for DD pathology remained highly controversial. Here, we leveraged recent advances in high-resolution magnetic resonance imaging (MRI) at high field strength (7 Tesla) to investigate the M-LGN in DD in-vivo. Using a case-control design, we acquired data from a large sample of young adults with DD (n = 26; age 28 ± 7 years; 13 females) and matched control participants (n = 28; age 27 ± 6 years; 15 females). Each participant completed a comprehensive diagnostic behavioral test battery and participated in two MRI sessions, including three functional MRI experiments and one structural MRI acquisition. We measured blood-oxygen-level-dependent responses and longitudinal relaxation rates to compare both groups on LGN subdivision function and myelination. Based on previous research, we hypothesized that the M-LGN is altered in DD and that these alterations are associated with a key DD diagnostic score, i.e., rapid letter and number naming (RANln). The results showed aberrant responses of the M-LGN in DD compared to controls, which was reflected in a different functional lateralization of this subdivision between groups. These alterations were associated with RANln performance, specifically in males with DD. We also found lateralization differences in the longitudinal relaxation rates of the M-LGN in DD relative to controls. Conversely, the other main subdivision of the LGN, the parvocellular LGN (P-LGN), showed comparable blood-oxygen-level-dependent responses and longitudinal relaxation rates between groups. The present study is the first to unequivocally show that M-LGN alterations are a hallmark of DD, affecting both the function and microstructure of this subdivision. It further provides a first functional interpretation of M-LGN alterations and a basis for a better understanding of sex-specific differences in DD with implications for prospective diagnostic and treatment strategies.

3.
J Neurosci ; 43(45): 7690-7699, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37848284

ABSTRACT

During face-to-face communication, the perception and recognition of facial movements can facilitate individuals' understanding of what is said. Facial movements are a form of complex biological motion. Separate neural pathways are thought to process (1) simple, nonbiological motion with an obligatory waypoint in the motion-sensitive visual middle temporal area (V5/MT); and (2) complex biological motion. Here, we present findings that challenge this dichotomy. Neuronavigated offline transcranial magnetic stimulation (TMS) over V5/MT on 24 participants (17 females and 7 males) led to increased response times in the recognition of simple, nonbiological motion as well as in visual speech recognition compared with TMS over the vertex, an active control region. TMS of area V5/MT also reduced practice effects on response times that are typically observed in both visual speech and motion recognition tasks over time. Our findings provide the first indication that area V5/MT causally influences the recognition of visual speech.

SIGNIFICANCE STATEMENT In everyday face-to-face communication, speech comprehension is often facilitated by viewing a speaker's facial movements. Several brain areas contribute to the recognition of visual speech. One area of interest is the motion-sensitive visual middle temporal area (V5/MT), which has been associated with the perception of simple, nonbiological motion such as moving dots, as well as more complex, biological motion such as visual speech. Here, we demonstrate using noninvasive brain stimulation that area V5/MT is causally relevant in recognizing visual speech. This finding provides new insights into the neural mechanisms that support the perception of human communication signals, which will help guide future research in typically developed individuals and populations with communication difficulties.


Subjects
Motion Perception, Speech Perception, Visual Cortex, Male, Female, Humans, Transcranial Magnetic Stimulation, Motion Perception/physiology, Speech, Visual Cortex/physiology, Photic Stimulation
4.
J Neurosci ; 41(33): 7136-7147, 2021 08 18.
Article in English | MEDLINE | ID: mdl-34244362

ABSTRACT

Recognizing speech in background noise is a strenuous daily activity, yet most humans can master it. To date, an explanation of how the human brain deals with such sensory uncertainty during speech recognition is missing. Previous work has shown that recognition of speech without background noise involves modulation of the auditory thalamus (medial geniculate body; MGB): there are higher responses in left MGB for speech recognition tasks that require tracking of fast-varying stimulus properties in contrast to relatively constant stimulus properties (e.g., speaker identity tasks) despite the same stimulus input. Here, we tested the hypotheses that (1) this task-dependent modulation for speech recognition increases in parallel with the sensory uncertainty in the speech signal, i.e., the amount of background noise; and that (2) this increase is present in the ventral MGB, which corresponds to the primary sensory part of the auditory thalamus. In accordance with our hypothesis, we show, by using ultra-high-resolution functional magnetic resonance imaging (fMRI) in male and female human participants, that the task-dependent modulation of the left ventral MGB (vMGB) for speech is particularly strong when recognizing speech in noisy listening conditions in contrast to situations where the speech signal is clear. The results imply that speech-in-noise recognition is supported by modifications at the level of the subcortical sensory pathway providing driving input to the auditory cortex.

SIGNIFICANCE STATEMENT Speech recognition in noisy environments is a challenging everyday task. One reason why humans can master this task is the recruitment of additional cognitive resources, as reflected in the recruitment of non-language cerebral cortex areas. Here, we show that modulation in the primary sensory pathway is also specifically involved in speech-in-noise recognition. We found that the left primary sensory thalamus (ventral medial geniculate body; vMGB) is more involved when recognizing speech signals, as opposed to a control task (speaker identity recognition), in background noise than when the noise was absent. This finding implies that the brain optimizes sensory processing in subcortical sensory pathway structures in a task-specific manner to deal with speech recognition in noisy environments.
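
The noisy listening conditions referred to here are typically created by mixing speech with background noise at a fixed signal-to-noise ratio (SNR). A minimal sketch of that mixing step, assuming a target SNR in dB (illustrative waveforms, not the study's stimuli):

import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so that the speech-to-noise power ratio equals snr_db, then add it to the speech."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10 ** (snr_db / 10.0))
    return speech + noise * np.sqrt(target_p_noise / p_noise)

rng = np.random.default_rng(0)
fs = 16000
speech = np.sin(2 * np.pi * 220 * np.arange(fs) / fs)  # stand-in for a 1-s speech waveform
noise = rng.standard_normal(fs)
noisy = mix_at_snr(speech, noise, snr_db=-4.0)          # harder, noisier listening condition
clear = mix_at_snr(speech, noise, snr_db=30.0)          # near-clear condition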


Subjects
Brain Mapping, Geniculate Bodies/physiology, Inferior Colliculi/physiology, Noise, Speech Perception/physiology, Thalamus/physiology, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Models, Neurological, Phonetics, Pilot Projects, Reaction Time, Signal-To-Noise Ratio, Uncertainty, Voice Recognition/physiology
5.
J Neurosci ; 41(41): 8618-8631, 2021 10 13.
Article in English | MEDLINE | ID: mdl-34429380

ABSTRACT

The role of the motor cortex in perceptual and cognitive functions is highly controversial. Here, we investigated the hypothesis that the motor cortex can be instrumental for translating foreign language vocabulary. Human participants of both sexes were trained on foreign language (L2) words and their native language translations over 4 consecutive days. L2 words were accompanied by complementary gestures (sensorimotor enrichment) or pictures (sensory enrichment). Following training, participants translated the auditorily presented L2 words that they had learned. During translation, repetitive transcranial magnetic stimulation was applied bilaterally to a site within the primary motor cortex (Brodmann area 4) located in the vicinity of the arm functional compartment. Responses within the stimulated motor region have previously been found to correlate with behavioral benefits of sensorimotor-enriched L2 vocabulary learning. Compared to sham stimulation, effective perturbation by repetitive transcranial magnetic stimulation slowed down the translation of sensorimotor-enriched L2 words, but not sensory-enriched L2 words. This finding suggests that sensorimotor-enriched training induced changes in L2 representations within the motor cortex, which in turn facilitated the translation of L2 words. The motor cortex may play a causal role in precipitating sensorimotor-based learning benefits, and may directly aid in remembering the native language translations of foreign language words following sensorimotor-enriched training. These findings support multisensory theories of learning while challenging reactivation-based theories.

SIGNIFICANCE STATEMENT Despite the potential for sensorimotor enrichment to serve as a powerful tool for learning in many domains, its underlying brain mechanisms remain largely unexplored. Using transcranial magnetic stimulation and a foreign language (L2) learning paradigm, we found that sensorimotor-enriched training can induce changes in L2 representations within the motor cortex, which in turn causally facilitate the translation of L2 words. The translation of recently acquired L2 words may therefore rely not only on auditory information stored in memory or on modality-independent L2 representations, but also on the sensorimotor context in which the words have been experienced.


Subjects
Motor Cortex/physiology, Multilingualism, Psychomotor Performance/physiology, Translating, Verbal Learning/physiology, Vocabulary, Adult, Female, Follow-Up Studies, Humans, Language, Male, Transcranial Magnetic Stimulation/methods
6.
Hum Brain Mapp ; 43(6): 1955-1972, 2022 04 15.
Article in English | MEDLINE | ID: mdl-35037743

ABSTRACT

Autism spectrum disorder (ASD) is characterised by social communication difficulties. These difficulties have been mainly explained by cognitive, motivational, and emotional alterations in ASD. The communication difficulties could, however, also be associated with altered sensory processing of communication signals. Here, we assessed the functional integrity of auditory sensory pathway nuclei in ASD in three independent functional magnetic resonance imaging experiments. We focused on two aspects of auditory communication that are impaired in ASD: voice identity perception, and recognising speech-in-noise. We found reduced processing in adults with ASD as compared to typically developed control groups (pairwise matched on sex, age, and full-scale IQ) in the central midbrain structure of the auditory pathway (inferior colliculus [IC]). The right IC responded less in the ASD as compared to the control group for voice identity, in contrast to speech recognition. The right IC also responded less in the ASD as compared to the control group when passively listening to vocal in contrast to non-vocal sounds. Within the control group, the left and right IC responded more when recognising speech-in-noise as compared to when recognising speech without additional noise. In the ASD group, this was only the case in the left, but not the right IC. The results show that communication signal processing in ASD is associated with reduced subcortical sensory functioning in the midbrain. The results highlight the importance of considering sensory processing alterations in explaining communication difficulties, which are at the core of ASD.


Subjects
Autism Spectrum Disorder, Autistic Disorder, Adult, Auditory Pathways/diagnostic imaging, Autism Spectrum Disorder/diagnostic imaging, Autistic Disorder/complications, Autistic Disorder/diagnostic imaging, Communication, Humans, Speech
7.
PLoS Comput Biol ; 17(3): e1008787, 2021 03.
Article in English | MEDLINE | ID: mdl-33657098

ABSTRACT

Frequency modulation (FM) is a basic constituent of vocalisation in many animals as well as in humans. In human speech, short rising and falling FM-sweeps of around 50 ms duration, called formant transitions, characterise individual speech sounds. There are two representations of FM in the ascending auditory pathway: a spectral representation, holding the instantaneous frequency of the stimuli; and a sweep representation, consisting of neurons that respond selectively to FM direction. To date, computational models use feedforward mechanisms to explain FM encoding. However, from neuroanatomy we know that there are massive feedback projections in the auditory pathway. Here, we found that a classical FM-sweep perceptual effect, the sweep pitch shift, cannot be explained by standard feedforward processing models. We hypothesised that the sweep pitch shift is caused by a predictive feedback mechanism. To test this hypothesis, we developed a novel model of FM encoding incorporating a predictive interaction between the sweep and the spectral representation. The model was designed to encode sweeps of the duration, modulation rate, and modulation shape of formant transitions. It fully accounted for experimental data that we acquired in a perceptual experiment with human participants as well as previously published experimental results. We also designed a new class of stimuli for a second perceptual experiment to further validate the model. Combined, our results indicate that predictive interaction between the frequency encoding and direction encoding neural representations plays an important role in the neural processing of FM. In the brain, this mechanism is likely to occur at early stages of the processing hierarchy.
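
For orientation, a formant-transition-like stimulus of the kind modelled here is a short linear FM sweep; the sketch below generates one and separates the two quantities the abstract distinguishes, instantaneous frequency (spectral representation) and sweep direction (sweep representation). Parameter values are assumptions for illustration, not the published model:

import numpy as np

fs = 44100                       # sampling rate (Hz)
dur = 0.05                       # 50-ms sweep, roughly the duration of a formant transition
t = np.arange(int(fs * dur)) / fs
f_start, f_end = 1000.0, 2000.0  # hypothetical rising sweep from 1 to 2 kHz
rate = (f_end - f_start) / dur   # modulation rate (Hz/s)

# Linear chirp: instantaneous phase = 2*pi*(f_start*t + 0.5*rate*t**2)
sweep = np.sin(2 * np.pi * (f_start * t + 0.5 * rate * t ** 2))
inst_freq = f_start + rate * t          # spectral representation: instantaneous frequency
direction = np.sign(f_end - f_start)    # sweep representation: +1 rising, -1 falling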


Subjects
Auditory Cortex/physiology, Auditory Pathways/physiology, Models, Neurological, Speech Perception/physiology, Adult, Computational Biology, Female, Humans, Male, Young Adult
8.
Cereb Cortex ; 31(1): 513-528, 2021 01 01.
Article in English | MEDLINE | ID: mdl-32959878

ABSTRACT

Despite a rise in the use of "learning by doing" pedagogical methods in practice, little is known about how the brain benefits from these methods. Learning by doing strategies that utilize complementary information ("enrichment") such as gestures have been shown to optimize learning outcomes in several domains including foreign language (L2) training. Here we tested the hypothesis that behavioral benefits of gesture-based enrichment are critically supported by integrity of the biological motion visual cortices (bmSTS). Prior functional neuroimaging work has implicated the visual motion cortices in L2 translation following sensorimotor-enriched training; the current study is the first to investigate the causal relevance of these structures in learning by doing contexts. Using neuronavigated transcranial magnetic stimulation and a gesture-enriched L2 vocabulary learning paradigm, we found that the bmSTS causally contributed to behavioral benefits of gesture-enriched learning. Visual motion cortex integrity benefitted both short- and long-term learning outcomes, as well as the learning of concrete and abstract words. These results adjudicate between opposing predictions of two neuroscientific learning theories: While reactivation-based theories predict no functional role of specialized sensory cortices in vocabulary learning outcomes, the current study supports the predictive coding theory view that these cortices precipitate sensorimotor-based learning benefits.


Subjects
Cerebral Cortex/physiology, Language, Learning/physiology, Vocabulary, Adult, Female, Gestures, Humans, Male, Parietal Lobe/physiology, Transcranial Magnetic Stimulation/methods, Visual Cortex/physiology
9.
Neuroimage ; 244: 118559, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34562697

ABSTRACT

The human lateral geniculate nucleus (LGN) of the visual thalamus is a key subcortical processing site for visual information analysis. Due to its small size and deep location within the brain, a non-invasive characterization of the LGN and its microstructurally distinct magnocellular (M) and parvocellular (P) subdivisions in humans is challenging. Here, we investigated whether structural quantitative MRI (qMRI) methods that are sensitive to underlying microstructural tissue features enable MR-based mapping of human LGN M and P subdivisions. We employed high-resolution 7 Tesla in-vivo qMRI in N = 27 participants and ultra-high resolution 7 Tesla qMRI of a post-mortem human LGN specimen. We found that a quantitative assessment of the LGN and its subdivisions is possible based on microstructure-informed qMRI contrast alone. In both the in-vivo and post-mortem qMRI data, we identified two components of shorter and longer longitudinal relaxation time (T1) within the LGN that coincided with the known anatomical locations of a dorsal P and a ventral M subdivision, respectively. Through ground-truth histological validation, we further showed that the microstructural MRI contrast within the LGN pertains to cyto- and myeloarchitectonic tissue differences between its subdivisions. These differences were based on cell and myelin density, but not on iron content. Our qMRI-based mapping strategy paves the way for an in-depth understanding of LGN function and microstructure in humans. It further enables investigations into the selective contributions of LGN subdivisions to human behavior in health and disease.
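
The quantitative contrast used above rests on longitudinal relaxation: the relaxation rate R1 is the reciprocal of the relaxation time T1, and T1 can be estimated, for example, from inversion-recovery measurements. A generic form of this relationship (an idealised textbook expression, not the specific acquisition or fitting used in the study):

R_1 = \frac{1}{T_1}, \qquad S(\mathrm{TI}) = S_0 \left( 1 - 2\, e^{-\mathrm{TI}/T_1} \right)

where TI is the inversion time and S_0 the fully relaxed signal; the expression assumes complete relaxation between inversions. Shorter T1 (higher R1) generally reflects denser myelination, consistent with the cyto- and myeloarchitectonic differences between subdivisions reported here.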


Subjects
Geniculate Bodies/diagnostic imaging, Magnetic Resonance Imaging/methods, Adult, Female, Geniculate Bodies/cytology, Humans, Male, Young Adult
10.
Hum Brain Mapp ; 42(12): 3963-3982, 2021 08 15.
Article in English | MEDLINE | ID: mdl-34043249

ABSTRACT

Recognising the identity of voices is a key ingredient of communication. Visual mechanisms support this ability: recognition is better for voices previously learned with their corresponding face (compared to a control condition). This so-called 'face-benefit' is supported by the fusiform face area (FFA), a region sensitive to facial form and identity. Behavioural findings indicate that the face-benefit increases in noisy listening conditions. The neural mechanisms for this increase are unknown. Here, using functional magnetic resonance imaging, we examined responses in face-sensitive regions while participants recognised the identity of auditory-only speakers (previously learned by face) in high (SNR -4 dB) and low (SNR +4 dB) levels of auditory noise. We observed a face-benefit in both noise levels, for most participants (16 of 21). In high-noise, the recognition of face-learned speakers engaged the right posterior superior temporal sulcus motion-sensitive face area (pSTS-mFA), a region implicated in the processing of dynamic facial cues. The face-benefit in high-noise also correlated positively with increased functional connectivity between this region and voice-sensitive regions in the temporal lobe in the group of 16 participants with a behavioural face-benefit. In low-noise, the face-benefit was robustly associated with increased responses in the FFA and to a lesser extent the right pSTS-mFA. The findings highlight the remarkably adaptive nature of the visual network supporting voice-identity recognition in auditory-only listening conditions.


Subjects
Auditory Perception/physiology, Connectome, Facial Recognition/physiology, Recognition, Psychology/physiology, Temporal Lobe/physiology, Voice, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Noise, Temporal Lobe/diagnostic imaging, Young Adult
11.
J Neurosci ; 39(9): 1720-1732, 2019 02 27.
Article in English | MEDLINE | ID: mdl-30643025

ABSTRACT

Developmental dyslexia is characterized by the inability to acquire typical reading and writing skills. Dyslexia has been frequently linked to cerebral cortex alterations; however, recent evidence also points toward sensory thalamus dysfunctions: dyslexics showed reduced responses in the left auditory thalamus (medial geniculate body, MGB) during speech processing in contrast to neurotypical readers. In addition, in the visual modality, dyslexics have reduced structural connectivity between the left visual thalamus (lateral geniculate nucleus, LGN) and V5/MT, a cerebral cortex region involved in visual movement processing. Higher LGN-V5/MT connectivity in dyslexics was associated with faster rapid naming of letters and numbers (RANln), a measure that is highly correlated with reading proficiency. Here, we tested two hypotheses that were directly derived from these previous findings. First, we tested the hypothesis that dyslexics have reduced structural connectivity between the left MGB and the auditory-motion-sensitive part of the left planum temporale (mPT). Second, we hypothesized that the amount of left mPT-MGB connectivity correlates with dyslexics' RANln scores. Using diffusion tensor imaging-based probabilistic tracking, we show that male adults with developmental dyslexia have reduced structural connectivity between the left MGB and the left mPT, confirming the first hypothesis. Stronger left mPT-MGB connectivity was not associated with faster RANln scores in dyslexics, but was in neurotypical readers. Our findings provide the first evidence that reduced cortico-thalamic connectivity in the auditory modality is a feature of developmental dyslexia and that cortico-thalamic connectivity may also affect reading-related cognitive abilities in neurotypical readers.

SIGNIFICANCE STATEMENT Developmental dyslexia is one of the most widespread learning disabilities. Although previous neuroimaging research mainly focused on pathomechanisms of dyslexia at the cerebral cortex level, several lines of evidence suggest an atypical functioning of subcortical sensory structures. By means of diffusion tensor imaging, we here show that dyslexic male adults have reduced white matter connectivity in a cortico-thalamic auditory pathway between the left auditory motion-sensitive planum temporale and the left medial geniculate body. Connectivity strength of this pathway was associated with measures of reading fluency in neurotypical readers. This is novel evidence on the neurocognitive correlates of reading proficiency, highlighting the importance of cortico-subcortical interactions between regions involved in the processing of spectrotemporally complex sound.


Subjects
Connectome, Dyslexia/physiopathology, Geniculate Bodies/physiopathology, Adult, Auditory Cortex/diagnostic imaging, Auditory Cortex/physiopathology, Dyslexia/diagnostic imaging, Geniculate Bodies/diagnostic imaging, Humans, Magnetic Resonance Imaging, Male
12.
Hum Brain Mapp ; 41(4): 952-972, 2020 03.
Article in English | MEDLINE | ID: mdl-31749219

ABSTRACT

Faces convey social information such as emotion and speech. Facial emotion processing is supported via interactions between dorsal-movement and ventral-form visual cortex regions. Here, we explored, for the first time, whether similar dorsal-ventral interactions (assessed via functional connectivity), might also exist for visual-speech processing. We then examined whether altered dorsal-ventral connectivity is observed in adults with high-functioning autism spectrum disorder (ASD), a disorder associated with impaired visual-speech recognition. We acquired functional magnetic resonance imaging (fMRI) data with concurrent eye tracking in pairwise matched control and ASD participants. In both groups, dorsal-movement regions in the visual motion area 5 (V5/MT) and the temporal visual speech area (TVSA) were functionally connected to ventral-form regions (i.e., the occipital face area [OFA] and the fusiform face area [FFA]) during the recognition of visual speech, in contrast to the recognition of face identity. Notably, parts of this functional connectivity were decreased in the ASD group compared to the controls (i.e., right V5/MT-right OFA, left TVSA-left FFA). The results confirmed our hypothesis that functional connectivity between dorsal-movement and ventral-form regions exists during visual-speech processing. Its partial dysfunction in ASD might contribute to difficulties in the recognition of dynamic face information relevant for successful face-to-face communication.
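
In this context, functional connectivity between a dorsal-movement and a ventral-form region is, at its simplest, a correlation between the two regions' fMRI timecourses within a condition. A minimal illustration with synthetic data (hypothetical signals, not the study's analysis pipeline, which additionally modelled condition-specific effects):

import numpy as np

rng = np.random.default_rng(1)
v5_mt_ts = rng.standard_normal(300)                  # hypothetical V5/MT timecourse (300 volumes)
ofa_ts = 0.5 * v5_mt_ts + rng.standard_normal(300)   # hypothetical OFA timecourse, partly coupled
connectivity = np.corrcoef(v5_mt_ts, ofa_ts)[0, 1]   # Pearson correlation as a connectivity estimate
print(f"V5/MT-OFA functional connectivity r = {connectivity:.2f}")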


Subjects
Autism Spectrum Disorder/physiopathology, Cerebral Cortex/physiopathology, Connectome, Pattern Recognition, Visual/physiology, Social Perception, Speech, Adult, Autism Spectrum Disorder/diagnostic imaging, Cerebral Cortex/diagnostic imaging, Eye-Tracking Technology, Facial Recognition/physiology, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Young Adult
13.
Brain ; 141(1): 234-247, 2018 01 01.
Article in English | MEDLINE | ID: mdl-29228111

ABSTRACT

Recognizing the identity of others by their voice is an important skill for social interactions. To date, it remains controversial which parts of the brain are critical structures for this skill. Based on neuroimaging findings, standard models of person-identity recognition suggest that the right temporal lobe is the hub for voice-identity recognition. Neuropsychological case studies, however, reported selective deficits of voice-identity recognition in patients predominantly with right inferior parietal lobe lesions. Here, our aim was to work towards resolving the discrepancy between neuroimaging studies and neuropsychological case studies to find out which brain structures are critical for voice-identity recognition in humans. We performed a voxel-based lesion-behaviour mapping study in a cohort of patients (n = 58) with unilateral focal brain lesions. The study included a comprehensive behavioural test battery on voice-identity recognition of newly learned (voice-name, voice-face association learning) and familiar voices (famous voice recognition) as well as visual (face-identity recognition) and acoustic control tests (vocal-pitch and vocal-timbre discrimination). The study also comprised clinically established tests (neuropsychological assessment, audiometry) and high-resolution structural brain images. The three key findings were: (i) a strong association between voice-identity recognition performance and right posterior/mid temporal and right inferior parietal lobe lesions; (ii) a selective association between right posterior/mid temporal lobe lesions and voice-identity recognition performance when face-identity recognition performance was factored out; and (iii) an association of right inferior parietal lobe lesions with tasks requiring the association between voices and faces but not voices and names. The results imply that the right posterior/mid temporal lobe is an obligatory structure for voice-identity recognition, while the inferior parietal lobe is only a facultative component of voice-identity recognition in situations where additional face-identity processing is required.


Subjects
Brain Mapping, Brain/physiology, Recognition, Psychology/physiology, Voice/physiology, Association Learning/physiology, Audiometry, Brain/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Neuropsychological Tests, Psychoacoustics, Statistics, Nonparametric, Surveys and Questionnaires, Verbal Learning
14.
Neuroimage ; 178: 721-734, 2018 09.
Article in English | MEDLINE | ID: mdl-29772380

ABSTRACT

The cerebral cortex modulates early sensory processing via feed-back connections to sensory pathway nuclei. The functions of this top-down modulation for human behavior are poorly understood. Here, we show that top-down modulation of the visual sensory thalamus (the lateral geniculate body, LGN) is involved in visual-speech recognition. In two independent functional magnetic resonance imaging (fMRI) studies, LGN response increased when participants processed fast-varying features of articulatory movements required for visual-speech recognition, as compared to temporally more stable features required for face identification with the same stimulus material. The LGN response during the visual-speech task correlated positively with the visual-speech recognition scores across participants. In addition, the task-dependent modulation was present for speech movements and did not occur for control conditions involving non-speech biological movements. In face-to-face communication, visual speech recognition is used to enhance or even enable understanding what is said. Speech recognition is commonly explained in frameworks focusing on cerebral cortex areas. Our findings suggest that task-dependent modulation at subcortical sensory stages has an important role for communication: Together with similar findings in the auditory modality the findings imply that task-dependent modulation of the sensory thalami is a general mechanism to optimize speech recognition.


Subjects
Brain Mapping/methods, Geniculate Bodies/physiology, Recognition, Psychology/physiology, Speech Perception/physiology, Visual Perception/physiology, Adult, Female, Humans, Magnetic Resonance Imaging/methods, Male, Young Adult
15.
Neuroimage ; 155: 97-112, 2017 07 15.
Article in English | MEDLINE | ID: mdl-28254454

ABSTRACT

Human voice recognition is critical for many aspects of social communication. Recently, a rare disorder, developmental phonagnosia, which describes the inability to recognise a speaker's voice, has been discovered. The underlying neural mechanisms are unknown. Here, we used two functional magnetic resonance imaging experiments to investigate brain function in two behaviourally well characterised phonagnosia cases, both 32 years old: AS has apperceptive and SP associative phonagnosia. We found distinct malfunctioned brain mechanisms in AS and SP matching their behavioural profiles. In apperceptive phonagnosia, right-hemispheric auditory voice-sensitive regions (i.e., Heschl's gyrus, planum temporale, superior temporal gyrus) showed lower responses than in matched controls (nAS=16) for vocal versus non-vocal sounds and for speaker versus speech recognition. In associative phonagnosia, the connectivity between voice-sensitive (i.e. right posterior middle/inferior temporal gyrus) and supramodal (i.e. amygdala) regions was reduced in comparison to matched controls (nSP=16) during speaker versus speech recognition. Additionally, both cases recruited distinct potential compensatory mechanisms. Our results support a central assumption of current two-system models of voice-identity processing: They provide the first evidence that dysfunction of voice-sensitive regions and impaired connectivity between voice-sensitive and supramodal person recognition regions can selectively contribute to deficits in person recognition by voice.


Subjects
Auditory Perceptual Disorders/physiopathology, Brain/physiopathology, Speech Perception/physiology, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Phenotype
16.
Hum Brain Mapp ; 38(9): 4398-4412, 2017 09.
Article in English | MEDLINE | ID: mdl-28580681

ABSTRACT

In the native language, abstract and concrete nouns are represented in distinct areas of the cerebral cortex. Currently, it is unknown whether this is also the case for abstract and concrete nouns of a foreign language. Here, we taught adult native speakers of German 45 abstract and 45 concrete nouns of a foreign language. After learning the nouns for 5 days, participants performed a vocabulary translation task during functional magnetic resonance imaging. Translating abstract nouns in contrast to concrete nouns elicited responses in regions that are also responsive to abstract nouns in the native language: the left inferior frontal gyrus and the left middle and superior temporal gyri. Concrete nouns elicited larger responses in the angular gyri bilaterally and the left parahippocampal gyrus than abstract nouns. The cluster in the left angular gyrus showed psychophysiological interaction (PPI) with the left lingual gyrus. The left parahippocampal gyrus showed PPI with the posterior cingulate cortex. Similar regions have been previously found for concrete nouns in the native language. The results reveal similarities in the cortical representation of foreign language nouns with the representation of native language nouns that already occur after 5 days of vocabulary learning. Furthermore, we showed that verbal and enriched learning methods were equally suitable to teach foreign abstract and concrete nouns.
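
The psychophysiological interaction (PPI) analyses mentioned above test whether the coupling between a seed region and another region changes with the task. In its classic form, the interaction regressor is the product of the mean-centred seed timecourse and the mean-centred task regressor, entered into a GLM alongside both main effects; in practice the seed signal is usually deconvolved to the neural level before multiplication. A schematic sketch of the classic construction (hypothetical data, not the study's pipeline):

import numpy as np

def ppi_regressor(seed_ts, task_reg):
    """Element-wise product of the mean-centred seed timecourse and task regressor."""
    return (seed_ts - seed_ts.mean()) * (task_reg - task_reg.mean())

rng = np.random.default_rng(2)
seed_ts = rng.standard_normal(200)                 # hypothetical seed signal (e.g., left angular gyrus)
task_reg = np.tile([1.0] * 10 + [0.0] * 10, 10)    # hypothetical block design: concrete vs. abstract nouns
ppi = ppi_regressor(seed_ts, task_reg)             # interaction term for the GLM
design = np.column_stack([np.ones(200), task_reg, seed_ts, ppi])  # intercept + main effects + PPI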


Subjects
Brain/physiology, Learning/physiology, Multilingualism, Vocabulary, Adult, Analysis of Variance, Brain/diagnostic imaging, Brain Mapping, Cerebrovascular Circulation/physiology, Female, Humans, Magnetic Resonance Imaging, Male, Neuropsychological Tests, Oxygen/blood, Time Factors, Young Adult
17.
J Cogn Neurosci ; 27(2): 280-91, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25170793

ABSTRACT

The human voice is the primary carrier of speech but also a fingerprint for person identity. Previous neuroimaging studies have revealed that speech and identity recognition are accomplished by partially different neural pathways, despite the perceptual unity of the vocal sound. Importantly, the right STS has been implicated in voice processing, with different contributions of its posterior and anterior parts. However, the time point at which vocal and speech processing diverge is currently unknown. Also, the exact role of the right STS during voice processing is so far unclear because its behavioral relevance has not yet been established. Here, we used the high temporal resolution of magnetoencephalography and a speech task control to pinpoint transient behavioral correlates: we found, at 200 msec after stimulus onset, that activity in the right anterior STS predicted behavioral voice recognition performance. At the same time point, the posterior right STS showed increased activity during voice identity recognition in contrast to speech recognition, whereas the left mid STS showed the reverse pattern. In contrast to the highly speech-sensitive left STS, the current results highlight the right STS as a key area for voice identity recognition and show that its anatomical-functional division emerges around 200 msec after stimulus onset. We suggest that this time point marks the speech-independent processing of vocal sounds in the posterior STS and their successful mapping to vocal identities in the anterior STS.


Subjects
Cerebral Cortex/physiology, Pattern Recognition, Physiological/physiology, Speech Perception/physiology, Voice, Acoustic Stimulation, Brain Mapping, Female, Functional Laterality, Humans, Magnetoencephalography, Male, Signal Processing, Computer-Assisted, Young Adult
18.
Hum Brain Mapp ; 36(1): 324-39, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25220190

ABSTRACT

Recognizing the identity of other individuals across different sensory modalities is critical for successful social interaction. In the human brain, face- and voice-sensitive areas are separate, but structurally connected. What kind of information is exchanged between these specialized areas during cross-modal recognition of other individuals is currently unclear. For faces, specific areas are sensitive to identity and to physical properties. It is an open question whether voices activate representations of face identity or physical facial properties in these areas. To address this question, we used functional magnetic resonance imaging in humans and a voice-face priming design. In this design, familiar voices were followed by morphed faces that matched or mismatched with respect to identity or physical properties. The results showed that responses in face-sensitive regions were modulated when face identity or physical properties did not match to the preceding voice. The strength of this mismatch signal depended on the level of certainty the participant had about the voice identity. This suggests that both identity and physical property information was provided by the voice to face areas. The activity and connectivity profiles differed between face-sensitive areas: (i) the occipital face area seemed to receive information about both physical properties and identity, (ii) the fusiform face area seemed to receive identity, and (iii) the anterior temporal lobe seemed to receive predominantly identity information from the voice. We interpret these results within a prediction coding scheme in which both identity and physical property information is used across sensory modalities to recognize individuals.


Subjects
Auditory Perception/physiology, Brain/physiology, Pattern Recognition, Visual/physiology, Recognition, Psychology, Sensation/physiology, Acoustic Stimulation, Adult, Brain/blood supply, Brain Mapping, Female, Functional Laterality, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Oxygen/blood, Photic Stimulation, Psychophysics, Reaction Time/physiology, Young Adult
19.
Proc Natl Acad Sci U S A ; 109(34): 13841-6, 2012 Aug 21.
Article in English | MEDLINE | ID: mdl-22869724

ABSTRACT

Developmental dyslexia, a severe and persistent reading and spelling impairment, is characterized by difficulties in processing speech sounds (i.e., phonemes). Here, we test the hypothesis that these phonological difficulties are associated with a dysfunction of the auditory sensory thalamus, the medial geniculate body (MGB). By using functional MRI, we found that, in dyslexic adults, the MGB responded abnormally when the task required attending to phonemes compared with other speech features. No other structure in the auditory pathway showed distinct functional neural patterns between the two tasks for dyslexic and control participants. Furthermore, MGB activity correlated with dyslexia diagnostic scores, indicating that the task modulation of the MGB is critical for performance in dyslexics. These results suggest that deficits in dyslexia are associated with a failure of the neural mechanism that dynamically tunes MGB according to predictions from cortical areas to optimize speech processing. This view on task-related MGB dysfunction in dyslexics has the potential to reconcile influential theories of dyslexia within a predictive coding framework of brain function.


Subjects
Auditory Cortex/physiopathology, Brain Mapping/methods, Dyslexia/physiopathology, Thalamus/physiopathology, Female, Humans, Magnetic Resonance Imaging/methods, Male, Models, Biological, Models, Genetic, Neurons/metabolism, Phonetics, Reading, Speech Perception
20.
Neuroimage ; 91: 375-85, 2014 May 01.
Article in English | MEDLINE | ID: mdl-24434677

ABSTRACT

Understanding speech from different speakers is a sophisticated process, particularly because the same acoustic parameters convey important information about both the speech message and the person speaking. How the human brain accomplishes speech recognition under such conditions is unknown. One view is that speaker information is discarded at early processing stages and not used for understanding the speech message. An alternative view is that speaker information is exploited to improve speech recognition. Consistent with the latter view, previous research identified functional interactions between the left- and the right-hemispheric superior temporal sulcus/gyrus, which process speech- and speaker-specific vocal tract parameters, respectively. Vocal tract parameters are one of the two major acoustic features that determine both speaker identity and speech message (phonemes). Here, using functional magnetic resonance imaging (fMRI), we show that a similar interaction exists for glottal fold parameters between the left and right Heschl's gyri. Glottal fold parameters are the other main acoustic feature that determines speaker identity and speech message (linguistic prosody). The findings suggest that interactions between left- and right-hemispheric areas are specific to the processing of different acoustic features of speech and speaker, and that they represent a general neural mechanism when understanding speech from different speakers.


Subjects
Brain/physiology, Recognition, Psychology/physiology, Speech/physiology, Adult, Female, Functional Laterality/physiology, Glottis/anatomy & histology, Glottis/physiology, Humans, Image Processing, Computer-Assisted, Individuality, Magnetic Resonance Imaging, Male, Oxygen/blood, Psycholinguistics, Vocal Cords/anatomy & histology, Vocal Cords/physiology, Young Adult