Results 1 - 20 of 55
1.
Exp Brain Res ; 237(12): 3143-3153, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31576421

ABSTRACT

An impressive number of theoretical proposals and neurobiological studies argue that perceptual processing is not strictly feedforward but rather operates through an interplay between bottom-up sensory and top-down predictive mechanisms. The present EEG study aimed to further determine how prior knowledge of auditory syllables may impact speech perception. Prior knowledge was manipulated by presenting the participants with visual information indicative of the syllable onset (when), its phonetic content (what) and/or its articulatory features (how). While when and what predictions consisted of unnatural visual cues (i.e., a visual timeline and a visuo-orthographic cue), how prediction consisted of the visual movements of a speaker. During auditory speech perception, when and what predictions both attenuated the amplitude of N1/P2 auditory evoked potentials. Regarding how prediction, not only an amplitude decrease but also a latency facilitation of N1/P2 auditory evoked potentials was observed during audiovisual compared to unimodal speech perception. During audiovisual perception, however, when and what predictability effects were reduced or abolished, with only what prediction still reducing P2 amplitude, but with an increased latency. Altogether, these results demonstrate the early influence of visually induced when, what and how predictions on cortical auditory speech processing. Crucially, they indicate a preponderant predictive role of the speaker's articulatory gestures during audiovisual speech perception, likely driven by attentional load and focus.
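N1/P2 effects like those reported here are conventionally quantified as peak amplitudes and latencies within component-specific search windows. Below is a minimal Python sketch of that measurement step; the sampling rate, window bounds and data are illustrative assumptions, not values from the study.

```python
import numpy as np

def peak_in_window(erp, times, t_min, t_max, polarity):
    """Return (latency, amplitude) of the extremum of `erp` within [t_min, t_max].

    erp      : 1-D averaged evoked response (volts)
    times    : 1-D array of time points (seconds), same length as erp
    polarity : -1 for a negative component (N1), +1 for a positive one (P2)
    """
    mask = (times >= t_min) & (times <= t_max)
    idx = np.argmax(polarity * erp[mask])  # sign flip turns the peak into a maximum
    return times[mask][idx], erp[mask][idx]

# Illustrative search windows for auditory N1 (~100 ms) and P2 (~200 ms).
fs = 1000.0                                   # assumed sampling rate (Hz)
times = np.arange(-0.2, 0.5, 1.0 / fs)
erp = np.random.randn(times.size) * 1e-6      # placeholder for a real grand average

n1_lat, n1_amp = peak_in_window(erp, times, 0.08, 0.14, polarity=-1)
p2_lat, p2_amp = peak_in_window(erp, times, 0.15, 0.25, polarity=+1)
print(f"N1: {n1_amp * 1e6:.2f} uV at {n1_lat * 1000:.0f} ms; "
      f"P2: {p2_amp * 1e6:.2f} uV at {p2_lat * 1000:.0f} ms")
```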


Subjects
Anticipation, Psychological/physiology, Cerebral Cortex/physiology, Evoked Potentials, Auditory/physiology, Gestures, Speech Perception/physiology, Visual Perception/physiology, Adult, Electroencephalography, Female, Humans, Male, Psycholinguistics, Reading, Young Adult
2.
Ear Hear ; 39(1): 139-149, 2018.
Article in English | MEDLINE | ID: mdl-28753162

ABSTRACT

OBJECTIVES: The goal of this study was to determine the effect of auditory deprivation and age-related speech decline on perceptuo-motor abilities during speech processing in post-lingually deaf cochlear-implanted participants and in normal-hearing elderly (NHE) participants. DESIGN: A close-shadowing experiment was carried out on 10 cochlear-implanted patients and on 10 NHE participants, with two groups of normal-hearing young participants as controls. To this end, participants had to categorize auditory and audiovisual syllables as quickly as possible, either manually or orally. Reaction times and percentages of correct responses were compared across response modes, stimulus modalities, and syllables. RESULTS: Responses of cochlear-implanted subjects were globally slower and less accurate than those of both young and elderly normal-hearing people. Adding the visual modality was found to enhance performance for cochlear-implanted patients, whereas no significant effect was obtained for the NHE group. Critically, oral responses were faster than manual ones for all groups. In addition, for NHE participants, manual responses were more accurate than oral responses, as was the case for normal-hearing young participants when presented with noisy speech stimuli. CONCLUSIONS: Faster reaction times were observed for oral than for manual responses in all groups, suggesting that perceptuo-motor relationships remained at least partly functional after cochlear implantation and efficient in the NHE group. These results are in agreement with recent perceptuo-motor theories of speech perception. They are also consistent with the theoretical assumption that implicit motor knowledge and motor representations partly constrain auditory speech processing. In this framework, oral responses would be generated at an earlier stage of a sensorimotor loop, whereas manual responses would be generated later, leading to slower but more accurate responses. The difference between oral and manual responses suggests that the perceptuo-motor loop remains effective for NHE subjects and also for cochlear-implanted participants, despite degraded global performance.
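The core reaction-time contrast in this design, oral versus manual responses from the same participants, amounts to a paired comparison. A minimal Python sketch of that analysis follows; the group size and RT values are simulated placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Paired comparison of mean reaction times (ms) per participant, oral vs. manual.
# Values are simulated placeholders, not the study's data.
rng = np.random.default_rng(2)
rt_manual = rng.normal(650.0, 60.0, size=10)           # 10 participants, one group
rt_oral = rt_manual - rng.normal(60.0, 20.0, size=10)  # oral responses faster

t, p = stats.ttest_rel(rt_oral, rt_manual)             # within-subject contrast
print(f"oral vs manual: t(9) = {t:.2f}, p = {p:.4f}")
```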


Subjects
Aging/physiology, Auditory Perception/physiology, Cochlear Implants, Deafness/physiopathology, Hearing/physiology, Adult, Aged, Deafness/psychology, Female, Humans, Male, Middle Aged, Sensory Deprivation/physiology
3.
J Cogn Neurosci ; 29(3): 448-466, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28139959

ABSTRACT

Action recognition has been found to rely not only on sensory brain areas but also partly on the observer's motor system. However, whether distinct auditory and visual experiences of an action modulate sensorimotor activity remains largely unknown. In the present sparse sampling fMRI study, we determined to what extent sensory and motor representations interact during the perception of tongue and lip speech actions. Tongue and lip speech actions were selected because an interlocutor's tongue movements are accessible through their impact on speech acoustics but are not visible, given the tongue's position inside the vocal tract, whereas lip movements are both "audible" and visible. Participants were presented with auditory, visual, and audiovisual speech actions, with the visual inputs related to either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, previously recorded by an ultrasound imaging system and a video camera. Although the neural networks involved in visuolingual and visuofacial perception largely overlapped, stronger motor and somatosensory activations were observed during visuolingual perception. In contrast, stronger activity was found in auditory and visual cortices during visuofacial perception. Complementing these findings, activity in the left premotor cortex and in visual brain areas was found to correlate with visual recognition scores for visuolingual and visuofacial speech stimuli, respectively, whereas visual activity correlated with reaction times for both stimulus types. These results suggest that unimodal and multimodal processing of lip and tongue speech actions relies on common sensorimotor brain areas. They also suggest that visual processing of audible but not visible movements induces motor and visual mental simulation of the perceived actions to facilitate recognition and/or to learn the association between auditory and visual signals.


Subjects
Brain/physiology, Facial Recognition/physiology, Motion Perception/physiology, Speech Perception/physiology, Acoustic Stimulation/methods, Adolescent, Adult, Brain/diagnostic imaging, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Neuropsychological Tests, Photic Stimulation/methods, Reaction Time, Social Perception, Young Adult
4.
Hum Brain Mapp ; 38(5): 2751-2771, 2017 05.
Article in English | MEDLINE | ID: mdl-28263012

ABSTRACT

Healthy aging is associated with a decline in cognitive, executive, and motor processes that is concomitant with changes in brain activation patterns, particularly at high complexity levels. While speech production relies on all these processes and is known to decline with age, the mechanisms that underlie these changes remain poorly understood, despite the importance of communication in everyday life. In this cross-sectional group study, we investigated age differences in the neuromotor control of speech production by combining behavioral and functional magnetic resonance imaging (fMRI) data. Twenty-seven healthy adults underwent fMRI while performing a speech production task consisting of the articulation of nonwords of different sequential and motor complexity. Results demonstrate strong age differences in movement time (MT), with longer and more variable MT in older adults. The fMRI results revealed extensive age differences in the relationship between BOLD signal and MT, within and outside the sensorimotor system. Moreover, age differences were also found in relation to sequential complexity within the motor and attentional systems, reflecting both compensatory and de-differentiation mechanisms. At the highest complexity level (high motor and high sequential complexity), age differences were found in both MT data and BOLD response, which increased in several sensorimotor and executive control areas. Together, these results suggest that aging of motor and executive control mechanisms may contribute to age differences in speech production. These findings highlight the importance of studying functionally relevant behavior such as speech to understand the mechanisms of human brain aging.


Subjects
Aging, Attention/physiology, Brain Mapping, Brain/physiology, Movement/physiology, Speech/physiology, Acoustic Stimulation, Acoustics, Adult, Aged, Brain/diagnostic imaging, Cross-Sectional Studies, Female, Head Movements, Humans, Image Processing, Computer-Assisted, Male, Middle Aged, Neuropsychological Tests, Oxygen/blood, Young Adult
5.
Exp Brain Res ; 235(9): 2867-2876, 2017 09.
Article in English | MEDLINE | ID: mdl-28676921

ABSTRACT

Previous electrophysiological studies have provided strong evidence for early multisensory integrative mechanisms during audiovisual speech perception. One unanswered issue from these studies is whether hearing our own voice and seeing our own articulatory gestures facilitate speech perception, possibly through better processing and integration of sensory inputs with our own sensory-motor knowledge. The present EEG study examined the impact of self-knowledge during the perception of auditory (A), visual (V) and audiovisual (AV) speech stimuli that were previously recorded from the participant or from a speaker they had never met. Audiovisual interactions were estimated by comparing N1 and P2 auditory evoked potentials during the bimodal condition (AV) with the sum of those observed in the unimodal conditions (A + V). In line with previous EEG studies, our results revealed an amplitude decrease of P2 auditory evoked potentials in AV compared to A + V conditions. Crucially, a temporal facilitation of N1 responses was observed during the visual perception of self speech movements compared to those of another speaker. This facilitation was negatively correlated with the saliency of the visual stimuli. These results provide evidence for a temporal facilitation of the integration of auditory and visual speech signals when the visual situation involves our own speech gestures.
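The AV versus A + V comparison used here is the classic additive model of audiovisual interaction: the bimodal evoked response is tested against the sum of the unimodal ones. A minimal Python sketch of the difference-wave computation, with placeholder epoch arrays standing in for real baseline-corrected EEG data:

```python
import numpy as np

# Additive-model test of audiovisual interaction: compare the bimodal evoked
# response (AV) with the sum of the unimodal ones (A + V). A nonzero
# AV - (A + V) difference around N1/P2 indexes integration.
# Arrays are (n_trials, n_times) epochs from one electrode; placeholder data.
rng = np.random.default_rng(0)
n_trials, n_times = 120, 700
av = rng.normal(size=(n_trials, n_times))
a = rng.normal(size=(n_trials, n_times))
v = rng.normal(size=(n_trials, n_times))

av_erp = av.mean(axis=0)                    # bimodal evoked response
sum_erp = a.mean(axis=0) + v.mean(axis=0)   # additive (A + V) prediction
interaction = av_erp - sum_erp              # difference wave: AV - (A + V)
```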


Subjects
Evoked Potentials, Auditory/physiology, Gestures, Psychomotor Performance/physiology, Speech Perception/physiology, Speech/physiology, Visual Perception/physiology, Adult, Ego, Electroencephalography, Female, Humans, Lip/physiology, Male, Young Adult
6.
J Cogn Neurosci ; 27(2): 334-51, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25203272

ABSTRACT

Studies of speech motor control suggest that articulatory and phonemic goals are defined in multidimensional motor, somatosensory, and auditory spaces. To test whether motor simulation might rely on sensory-motor codes shared with motor execution, we used a repetition suppression (RS) paradigm while measuring neural activity with sparse sampling fMRI during repeated overt and covert orofacial and speech actions. RS refers to the phenomenon whereby repeated stimuli or motor acts lead to decreased activity in specific neural populations and is associated with enhanced adaptive learning related to the repeated stimulus attributes. Common suppressed neural responses were observed in motor and posterior parietal regions during both repeated overt and covert orofacial and speech actions, including the left premotor cortex and inferior frontal gyrus, the superior parietal cortex and adjacent intraparietal sulcus, and the left IC and the supplementary motor area (SMA). Interestingly, reduced activity of the auditory cortex was observed during overt but not covert speech production, a finding likely reflecting a motor rather than an auditory imagery strategy by the participants. By providing evidence for adaptive changes in premotor and associative somatosensory brain areas, the observed RS suggests online state coding of both orofacial and speech actions in somatosensory and motor spaces, with and without motor behavior and sensory feedback.


Subjects
Adaptation, Physiological/physiology, Face/physiology, Learning/physiology, Motor Activity/physiology, Speech/physiology, Adaptation, Psychological/physiology, Brain Mapping, Humans, Inhibition, Psychological, Magnetic Resonance Imaging, Neuropsychological Tests
7.
J Acoust Soc Am ; 136(4): 1869-79, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25324087

ABSTRACT

Interactions between covert and overt orofacial gestures have been little studied, apart from old and rather qualitative experiments. The question deserves special interest in the context of the debate between auditory and motor theories of speech perception, where dual tasks may be of great interest. It is shown here that dynamic mandible and lip movements produced by a participant result in strong and stable perturbations of a concurrent inner-speech counting task, while static orofacial configurations and static or dynamic manual actions produce no perturbation. This enables the authors to discuss how such orofacial perturbations could be introduced into dual-task paradigms to assess the role of motor processes in speech perception.


Subjects
Facial Expression, Gestures, Lip/physiology, Mandible/physiology, Mathematical Concepts, Speech, Female, Humans, Male, Movement, Psychophysics, Task Performance and Analysis, Thinking, Time Factors
8.
Neuropsychologia ; 198: 108866, 2024 06 06.
Article in English | MEDLINE | ID: mdl-38518889

ABSTRACT

Previous psychophysical and neurophysiological studies in young healthy adults have provided evidence that audiovisual speech integration occurs with a large degree of temporal tolerance around true simultaneity. To further determine whether audiovisual speech asynchrony modulates auditory cortical processing and neural binding in young healthy adults, N1/P2 auditory evoked responses were compared using an additive model during a syllable categorization task, without or with an audiovisual asynchrony ranging from a 240 ms visual lead to a 240 ms auditory lead. Consistent with previous psychophysical findings, the observed results converge in favor of an asymmetric temporal integration window. Three main findings were observed: 1) predictive temporal and phonetic cues from pre-phonatory visual movements before the acoustic onset appeared essential for neural binding to occur, 2) audiovisual synchrony, with visual pre-phonatory movements predictive of the onset of the acoustic signal, was a prerequisite for N1 latency facilitation, and 3) P2 amplitude suppression and latency facilitation occurred even when visual pre-phonatory movements were predictive not of the acoustic onset but of the syllable to come. Taken together, these findings help further clarify how audiovisual speech integration partly operates through two stages of visually based temporal and phonetic predictions.


Subjects
Acoustic Stimulation, Electroencephalography, Evoked Potentials, Auditory, Speech Perception, Visual Perception, Humans, Male, Female, Young Adult, Adult, Speech Perception/physiology, Visual Perception/physiology, Evoked Potentials, Auditory/physiology, Photic Stimulation, Reaction Time/physiology, Speech/physiology, Auditory Perception/physiology
9.
Brain Lang ; 253: 105415, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38692095

ABSTRACT

With age, the speech system undergoes important changes that render speech production more laborious, slower and often less intelligible. Yet the neural mechanisms that underlie these age-related changes remain unclear. In this EEG study, we examined two important mechanisms in speech motor control: the pre-speech movement-related cortical potential (MRCP), which reflects speech motor planning, and speaking-induced suppression (SIS), which indexes auditory predictions of speech motor commands, in 20 healthy young and 20 healthy older adults. Participants undertook a vowel production task, followed by passive listening to their own recorded vowels. Our results revealed extensive differences in MRCP in older compared to younger adults. Further, while longer N1 and P2 latencies were observed in older adults, SIS was preserved. The reduced MRCP appears to be a potential explanatory mechanism for the known age-related slowing of speech production, while the preserved SIS suggests intact motor-to-auditory integration.


Subjects
Aging, Electroencephalography, Speech, Humans, Speech/physiology, Aged, Male, Female, Adult, Aging/physiology, Young Adult, Middle Aged, Cerebral Cortex/physiology, Movement/physiology, Speech Perception/physiology, Evoked Potentials/physiology
10.
Hum Brain Mapp ; 34(10): 2574-91, 2013 Oct.
Article in English | MEDLINE | ID: mdl-22488985

ABSTRACT

This functional magnetic resonance imaging (fMRI) study aimed at examining the cerebral regions involved in the auditory perception of prosodic focus using a natural focus detection task. Two conditions testing the processing of simple utterances in French were explored, narrow-focused versus broad-focused. Participants performed a correction detection task. The utterances in both conditions had exactly the same segmental, lexical, and syntactic contents, and differed only in their prosodic realization. The comparison between the two conditions therefore allowed us to examine processes strictly associated with prosodic focus processing. To assess the specific effect of pitch on hemispheric specialization, a parametric analysis was conducted using a parameter reflecting pitch variations specifically related to focus. The comparison between the two conditions revealed that the brain regions recruited during the detection of contrastive prosodic focus can be described as a right-hemisphere dominant dual network consisting of (a) ventral regions, which include the right posterosuperior temporal and bilateral middle temporal gyri, and (b) dorsal regions, including the bilateral inferior frontal, inferior parietal and left superior parietal gyri. Our results argue for a dual-stream model of focus perception compatible with the asymmetric sampling in time hypothesis. They suggest that the detection of prosodic focus involves an interplay between the right and left hemispheres, in which the computation of slowly changing prosodic cues in the right hemisphere dynamically feeds an internal model concurrently used by the left hemisphere, which carries out computations over shorter temporal windows.
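A parametric fMRI analysis of this kind weights each event by a per-trial value (here, a pitch-variation measure) and convolves the weighted event train with a haemodynamic response function to form a GLM regressor. A minimal Python sketch under common assumptions (SPM-style double-gamma HRF; onsets and pitch values are illustrative, not the study's):

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(t, peak=6.0, under=16.0, ratio=1.0 / 6.0):
    """SPM-style canonical double-gamma haemodynamic response function."""
    return gamma.pdf(t, peak) - ratio * gamma.pdf(t, under)

tr, dt, duration = 2.0, 0.1, 300.0
onsets = np.array([10.0, 40.0, 70.0, 100.0])   # utterance onsets (s), illustrative
pitch_mod = np.array([0.8, -0.2, 1.3, -0.9])   # mean-centred pitch-variation values

# Build a pitch-weighted stick function at high resolution, convolve with the
# HRF, then resample at the TR to obtain the parametric GLM regressor.
hi_t = np.arange(0.0, duration, dt)
sticks = np.zeros_like(hi_t)
for onset, weight in zip(onsets, pitch_mod):
    sticks[int(round(onset / dt))] = weight
hrf = double_gamma_hrf(np.arange(0.0, 32.0, dt))
regressor = np.convolve(sticks, hrf)[: hi_t.size]
regressor_tr = regressor[:: int(round(tr / dt))]   # one value per volume
```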


Subjects
Brain Mapping/methods, Cerebral Cortex/physiology, Language, Magnetic Resonance Imaging, Speech Perception/physiology, Adult, Cues, Dominance, Cerebral/physiology, Female, Humans, Male, Models, Neurological, Models, Psychological, Nerve Net/physiology, Phonation, Pitch Discrimination/physiology, Pitch Perception/physiology, Young Adult
11.
Exp Brain Res ; 227(2): 275-88, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23591689

ABSTRACT

The concept of an internal forward model that internally simulates the sensory consequences of an action is a central idea in speech motor control. Consistent with this hypothesis, silent articulation has been shown to modulate activity of the auditory cortex and to improve the auditory identification of concordant speech sounds embedded in white noise. In the present study, we replicated and extended this behavioral finding by showing that silently articulating a syllable in synchrony with the presentation of a concordant auditory and/or visually ambiguous speech stimulus improves its identification. Our results further demonstrate that, even in the case of perfect perceptual identification, concurrent mouthing of a syllable speeds up the perceptual processing of a concordant speech stimulus. These results reflect multisensory-motor interactions during speech perception and provide new behavioral arguments for internally generated sensory predictions during silent speech production.


Subjects
Language, Speech Perception/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Analysis of Variance, Decision Making, Female, Humans, Male, Neuropsychological Tests, Phonetics, Photic Stimulation, Reaction Time, Speech Production Measurement, Time Factors, Young Adult
12.
Brain Lang ; 247: 105359, 2023 12.
Article in English | MEDLINE | ID: mdl-37951157

ABSTRACT

Visual information from a speaker's face enhances auditory neural processing and speech recognition. To determine whether auditory memory can be influenced by visual speech, the degree of auditory neural adaptation of an auditory syllable preceded by an auditory, visual, or audiovisual syllable was examined using EEG. Consistent with previous findings and with additional adaptation of auditory neurons tuned to acoustic features, stronger adaptation of N1, P2 and N2 auditory evoked responses was observed when the auditory syllable was preceded by an auditory rather than a visual syllable. However, adaptation was weaker when the auditory syllable was preceded by an audiovisual rather than an auditory syllable, although still stronger than when it was preceded by a visual syllable. In addition, longer N1 and P2 latencies were observed in the audiovisual case. These results further demonstrate that visual speech acts on auditory memory but suggest competing visual influences in the case of audiovisual stimulation.


Subjects
Speech Perception, Humans, Speech Perception/physiology, Speech, Electroencephalography, Visual Perception/physiology, Auditory Perception/physiology, Evoked Potentials, Auditory/physiology, Acoustic Stimulation, Photic Stimulation
13.
Neuroimage ; 60(4): 1937-46, 2012 May 01.
Article in English | MEDLINE | ID: mdl-22361165

ABSTRACT

Sensory-motor interactions between auditory and articulatory representations in the dorsal auditory processing stream are suggested to contribute to speech perception, especially when bottom-up information alone is insufficient for purely auditory perceptual mechanisms to succeed. Here, we hypothesized that the dorsal stream responds more vigorously to auditory syllables when one is engaged in a phonetic identification/repetition task subsequent to perception compared to passive listening, and that this effect is further augmented when the syllables are embedded in noise. To this end, we recorded magnetoencephalography while twenty subjects listened to speech syllables, with and without noise masking, in four conditions: passive perception; overt repetition; covert repetition; and overt imitation. Compared to passive listening, left-hemispheric N100m equivalent current dipole responses were amplified and shifted posteriorly when perception was followed by a covert repetition task. Cortically constrained minimum-norm estimates showed amplified left supramarginal and angular gyri responses in the covert repetition condition at ~100 ms from stimulus onset. Longer-latency responses at ~200 ms were amplified in the covert repetition condition in the left angular gyrus and in all three active conditions in the left premotor cortex, with further enhancements when the syllables were embedded in noise. Phonetic categorization accuracy and the magnitude of voice pitch change between the overt repetition and imitation conditions correlated with left premotor cortex responses at ~100 and ~200 ms, respectively. Together, these results suggest that dorsal stream involvement in speech perception depends on perceptual task demands and that phonetic categorization performance is influenced by the left premotor cortex.


Subjects
Brain Mapping, Cerebral Cortex/physiology, Speech Perception/physiology, Acoustic Stimulation, Adult, Female, Functional Laterality/physiology, Humans, Magnetoencephalography, Male, Middle Aged, Phonetics, Young Adult
14.
Hum Brain Mapp ; 33(10): 2306-21, 2012 Oct.
Article in English | MEDLINE | ID: mdl-21826760

ABSTRACT

In contrast to work on complex coordinated orofacial actions, few neuroimaging studies have attempted to determine the shared and distinct neural substrates of supralaryngeal and laryngeal articulatory movements performed independently. To determine the cortical and subcortical regions associated with supralaryngeal motor control, participants produced lip, tongue and jaw movements while undergoing functional magnetic resonance imaging (fMRI). For laryngeal motor activity, participants produced the steady-state /i/ vowel. A sparse temporal sampling acquisition method was used to minimize movement-related artifacts. Three main findings were observed. First, the four tasks activated a set of largely overlapping, common brain areas: the sensorimotor and premotor cortices, the right inferior frontal gyrus, the supplementary motor area, the left parietal operculum and the adjacent inferior parietal lobule, the basal ganglia and the cerebellum. Second, differences between tasks were restricted to the bilateral auditory cortices and to the left ventrolateral sensorimotor cortex, with greater signal intensity for vowel vocalization. Finally, a dorso-ventral somatotopic organization of lip, jaw, vocalic/laryngeal, and tongue movements was observed within the primary motor and somatosensory cortices using individual region-of-interest (ROI) analyses. These results provide evidence for a core neural network involved in laryngeal and supralaryngeal motor control and further refine the sensorimotor somatotopic organization of the orofacial articulators.


Subjects
Brain Mapping, Brain/physiology, Motor Activity/physiology, Speech/physiology, Adult, Female, Humans, Image Interpretation, Computer-Assisted, Jaw/physiology, Larynx/physiology, Lip/physiology, Magnetic Resonance Imaging, Male, Tongue/physiology, Young Adult
15.
Brain Lang ; 235: 105196, 2022 12.
Article in English | MEDLINE | ID: mdl-36343508

ABSTRACT

In face-to-face communication, visual information from a speaker's face and the time-varying kinematics of articulatory movements have been shown to fine-tune auditory neural processing and improve speech recognition. To further determine whether the timing of visual gestures modulates auditory cortical processing, three sets of syllables differing only in the onset and duration of silent prephonatory movements preceding the acoustic speech signal were contrasted using EEG. Despite similar visual recognition rates, an increase in the amplitude of P2 auditory evoked responses was observed from the longest to the shortest movements. Taken together, these results clarify how audiovisual speech perception partly operates through visually based predictions and the related processing time, with acoustic-phonetic neural processing paralleling the timing of visual prephonatory gestures.


Subjects
Speech Perception, Speech, Humans, Speech/physiology, Visual Perception/physiology, Auditory Perception/physiology, Speech Perception/physiology, Evoked Potentials, Auditory/physiology, Acoustic Stimulation
16.
Cortex ; 152: 21-35, 2022 07.
Article in English | MEDLINE | ID: mdl-35490663

ABSTRACT

During speaking or listening, endogenous motor and exogenous visual processes have been shown to fine-tune the auditory neural processing of the incoming acoustic speech signal. To compare the impact of these cross-modal effects on auditory evoked responses, two sets of speech production and perception tasks were contrasted using EEG. In the first set, participants produced vowels in a self-paced manner while listening to their auditory feedback. Following the production task, they passively listened to the entire recorded speech sequence. In the second set, the procedure was identical except that participants also watched their own articulatory movements online. While both endogenous motor and exogenous visual processes fine-tuned auditory neural processing, these cross-modal effects were found to act differentially on the amplitude and latency of auditory evoked responses. A reduced amplitude of auditory evoked responses was observed during speaking compared to listening, irrespective of the auditory or audiovisual feedback. Adding orofacial visual movements to the acoustic speech signal also shortened the latency of auditory evoked responses, irrespective of the perception or production task. Taken together, these results suggest distinct motor and visual influences on auditory neural processing, possibly through different neural gating and predictive mechanisms.


Subjects
Speech Perception, Acoustic Stimulation, Auditory Perception/physiology, Evoked Potentials, Auditory/physiology, Feedback, Sensory/physiology, Humans, Speech/physiology, Speech Perception/physiology
17.
Brain Lang ; 225: 105058, 2022 02.
Article in English | MEDLINE | ID: mdl-34929531

ABSTRACT

Both visual articulatory gestures and orthography provide information on the phonological content of speech. This EEG study investigated the integration between speech and these two visual inputs. A comparison of skilled readers' brain responses elicited by a spoken word presented alone versus synchronously with a static image of a viseme or a grapheme of the spoken word's onset showed that, while neither visual input induced audiovisual integration on the N1 acoustic component, both led to supra-additive integration on P2, with a stronger integration between speech and graphemes on left-anterior electrodes. This pattern persisted in the P350 time window and generalized to all electrodes. These findings suggest a strong impact of spelling knowledge on phonetic processing and lexical access. They also indirectly indicate that the dynamic and predictive value present in natural lip movements, but not in static visemes, is particularly critical to the contribution of visual articulatory gestures to speech processing.


Subjects
Phonetics, Speech Perception, Acoustic Stimulation, Electroencephalography/methods, Humans, Speech/physiology, Speech Perception/physiology, Visual Perception/physiology
18.
Neuropsychologia ; 159: 107949, 2021 08 20.
Article in English | MEDLINE | ID: mdl-34228997

ABSTRACT

The ability to process speech evolves over the course of the lifespan. Understanding speech at low acoustic intensity and in the presence of background noise becomes harder, and the ability of older adults to benefit from audiovisual speech also appears to decline. These difficulties can have important consequences for quality of life. Yet, a consensus on the cause of these difficulties is still lacking. The objective of this study was to examine the processing of speech in young and older adults under different modalities (i.e., auditory [A], visual [V], audiovisual [AV]) and in the presence of different visual prediction cues (i.e., no predictive cue (control), temporal predictive cue, phonetic predictive cue, and combined temporal and phonetic predictive cues). We focused on recognition accuracy and four auditory evoked potential (AEP) components: P1-N1-P2 and N2. Thirty-four right-handed French-speaking adults were recruited, including 17 younger adults (28 ± 2 years; 20-42 years) and 17 older adults (67 ± 3.77 years; 60-73 years). Participants completed a forced-choice speech identification task. The main findings of the study are: (1) the facilitatory effect of visual information was reduced, but present, in older compared to younger adults; (2) visual predictive cues facilitated speech recognition in younger and older adults alike; (3) age differences in AEPs were localized to later components (P2 and N2), suggesting that aging predominantly affects higher-order cortical processes related to speech processing rather than lower-level auditory processes; (4) specifically, AV facilitation on P2 amplitude was lower in older adults, the effect of the temporal predictive cue on N2 amplitude was reduced for older compared to younger adults, and P2 and N2 latencies were longer for older adults; and finally, (5) behavioural performance was associated with P2 amplitude in older adults. Our results indicate that aging affects speech processing at multiple levels, including audiovisual integration (P2) and auditory attentional processes (N2). These findings have important implications for understanding barriers to communication in older ages, as well as for the development of compensation strategies for those with speech processing difficulties.


Subjects
Cues, Speech Perception, Acoustic Stimulation, Aged, Auditory Perception, Humans, Middle Aged, Quality of Life, Speech, Visual Perception
19.
Neuropsychologia ; 140: 107404, 2020 03 16.
Article in English | MEDLINE | ID: mdl-32087207

ABSTRACT

The neurobiology of sex differences during language processing has been widely investigated in the past three decades. While substantial sex differences have been reported, empirical findings appear largely equivocal. The present systematic review of the literature and meta-analysis aimed to determine the degree of agreement among studies reporting sex differences in cortical activity during language processing. Irrespective of the modality and the specificity of the language task, sex differences in the BOLD signal or cerebral blood flow were highly inconsistent across fMRI and PET studies. On the temporal side, earlier latencies of auditory evoked responses for female compared to male participants were consistently observed in EEG studies during both listening and speaking. Overall, the present review and meta-analysis support the theoretical assumption that there are many more similarities than differences between men and women in the human brain during language processing. Subtle but consistent temporal differences are, however, observed in the auditory processing of phonetic cues during speech perception and production.


Subjects
Language, Speech Perception, Adult, Auditory Perception, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Sex Characteristics
20.
Neuropsychologia ; 136: 107267, 2020 01.
Article in English | MEDLINE | ID: mdl-31770550

ABSTRACT

In order to determine the neural substrates of phonemic coding during both listening and speaking, we used a repetition suppression (RS) paradigm in which vowels were repeatedly perceived or produced while BOLD activity was measured with sparse sampling functional magnetic resonance imaging (fMRI). RS refers to the phenomenon whereby repeated stimuli or actions lead to decreased activity in specific neural populations, associated with enhanced neural selectivity and information coding efficiency. Common suppressed BOLD responses during repeated vowel perception and production were observed in the inferior frontal gyri, the posterior part of the left middle temporal gyrus and superior temporal sulcus, the left intraparietal sulcus, as well as in the cingulate gyrus and presupplementary motor area. By providing evidence for common adaptive neural changes in premotor and associative auditory and somatosensory brain areas, the observed RS effects suggest that phonemic coding is partly driven by shared sensorimotor regions in the listening and speaking brain.
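Repetition suppression in fMRI is commonly quantified as the relative drop in the response estimate from the first to repeated presentations within a region of interest. A minimal Python sketch of that index, computed on placeholder per-presentation GLM betas rather than real data:

```python
import numpy as np

# Repetition-suppression (RS) index: percent decrease in the response estimate
# from the first to repeated presentations within a region of interest.
# `betas` holds per-presentation GLM estimates, shape (n_subjects,
# n_presentations); values are simulated placeholders.
rng = np.random.default_rng(1)
betas = rng.normal(loc=1.0, scale=0.2, size=(20, 6))
betas[:, 1:] *= 0.85                               # simulate suppressed repeats

first = betas[:, 0]
repeated = betas[:, 1:].mean(axis=1)
rs_index = 100.0 * (first - repeated) / first      # % suppression per subject
print(f"mean RS: {rs_index.mean():.1f}% (positive = suppression)")
```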


Subjects
Brain Mapping, Cerebral Cortex/physiology, Psycholinguistics, Speech Perception/physiology, Speech/physiology, Adult, Cerebral Cortex/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging, Male, Phonetics, Sensorimotor Cortex/diagnostic imaging, Sensorimotor Cortex/physiology, Young Adult