Results 1 - 13 of 13
1.
Brain Lang ; 247: 105359, 2023 12.
Article in English | MEDLINE | ID: mdl-37951157

ABSTRACT

Visual information from a speaker's face enhances auditory neural processing and speech recognition. To determine whether auditory memory can be influenced by visual speech, the degree of auditory neural adaptation to an auditory syllable preceded by an auditory, visual, or audiovisual syllable was examined using EEG. Consistent with previous findings and with additional adaptation of auditory neurons tuned to acoustic features, stronger adaptation of the N1, P2 and N2 auditory evoked responses was observed when the auditory syllable was preceded by an auditory rather than a visual syllable. Adaptation was weaker, however, when the auditory syllable was preceded by an audiovisual rather than an auditory syllable, although still stronger than when it was preceded by a visual syllable, and longer N1 and P2 latencies were observed in this condition. These results further demonstrate that visual speech acts on auditory memory but suggest competing visual influences in the case of audiovisual stimulation.


Subject(s)
Speech Perception , Humans , Speech Perception/physiology , Speech , Electroencephalography , Visual Perception/physiology , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Acoustic Stimulation , Photic Stimulation
2.
Brain Lang ; 225: 105058, 2022 02.
Article in English | MEDLINE | ID: mdl-34929531

ABSTRACT

Both visual articulatory gestures and orthography provide information on the phonological content of speech. This EEG study investigated the integration of speech with these two visual inputs. A comparison of skilled readers' brain responses elicited by a spoken word presented alone versus synchronously with a static image of a viseme or a grapheme of the spoken word's onset showed that, while neither visual input induced audiovisual integration on the N1 acoustic component, both led to supra-additive integration on P2, with stronger integration between speech and graphemes on left-anterior electrodes. This pattern persisted in the P350 time window and generalized to all electrodes. These findings suggest a strong impact of spelling knowledge on phonetic processing and lexical access. They also indirectly indicate that the dynamic and predictive value present in natural lip movements, but not in static visemes, is particularly critical to the contribution of visual articulatory gestures to speech processing.


Subject(s)
Phonetics , Speech Perception , Acoustic Stimulation , Electroencephalography/methods , Humans , Speech/physiology , Speech Perception/physiology , Visual Perception/physiology
3.
Hum Brain Mapp ; 38(5): 2751-2771, 2017 05.
Article in English | MEDLINE | ID: mdl-28263012

ABSTRACT

Healthy aging is associated with a decline in cognitive, executive, and motor processes that is concomitant with changes in brain activation patterns, particularly at high complexity levels. While speech production relies on all these processes and is known to decline with age, the mechanisms underlying these changes remain poorly understood, despite the importance of communication in everyday life. In this cross-sectional group study, we investigated age differences in the neuromotor control of speech production by combining behavioral and functional magnetic resonance imaging (fMRI) data. Twenty-seven healthy adults underwent fMRI while performing a speech production task consisting of the articulation of nonwords of different sequential and motor complexity. Results demonstrate strong age differences in movement time (MT), with longer and more variable MT in older adults. The fMRI results revealed extensive age differences in the relationship between BOLD signal and MT, within and outside the sensorimotor system. Moreover, age differences were also found in relation to sequential complexity within the motor and attentional systems, reflecting both compensatory and de-differentiation mechanisms. At the highest complexity level (high motor complexity and high sequence complexity), age differences were found in both MT data and the BOLD response, which increased in several sensorimotor and executive control areas. Together, these results suggest that aging of motor and executive control mechanisms may contribute to age differences in speech production. These findings highlight the importance of studying functionally relevant behavior such as speech to understand the mechanisms of human brain aging. Hum Brain Mapp 38:2751-2771, 2017. © 2017 Wiley Periodicals, Inc.


Subject(s)
Aging , Attention/physiology , Brain Mapping , Brain/physiology , Movement/physiology , Speech/physiology , Acoustic Stimulation , Acoustics , Adult , Aged , Brain/diagnostic imaging , Cross-Sectional Studies , Female , Head Movements , Humans , Image Processing, Computer-Assisted , Male , Middle Aged , Neuropsychological Tests , Oxygen/blood , Young Adult
4.
J Cogn Neurosci ; 29(3): 448-466, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28139959

ABSTRACT

Action recognition has been found to rely not only on sensory brain areas but also partly on the observer's motor system. However, whether distinct auditory and visual experiences of an action modulate sensorimotor activity remains largely unknown. In the present sparse sampling fMRI study, we determined to what extent sensory and motor representations interact during the perception of tongue and lip speech actions. Tongue and lip speech actions were selected because the tongue movements of an interlocutor are accessible via their impact on speech acoustics but are not visible, owing to the tongue's position inside the vocal tract, whereas lip movements are both "audible" and visible. Participants were presented with auditory, visual, and audiovisual speech actions, with the visual inputs showing either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, previously recorded with an ultrasound imaging system and a video camera. Although the neural networks involved in visuolingual and visuofacial perception largely overlapped, stronger motor and somatosensory activations were observed during visuolingual perception. In contrast, stronger activity was found in auditory and visual cortices during visuofacial perception. Complementing these findings, activity in the left premotor cortex and in visual brain areas was found to correlate with visual recognition scores for visuolingual and visuofacial speech stimuli, respectively, whereas visual activity correlated with RTs for both stimuli. These results suggest that unimodal and multimodal processing of lip and tongue speech actions relies on common sensorimotor brain areas. They also suggest that visual processing of audible but not visible movements induces motor and visual mental simulation of the perceived actions to facilitate recognition and/or to learn the association between auditory and visual signals.


Subject(s)
Brain/physiology , Facial Recognition/physiology , Motion Perception/physiology , Speech Perception/physiology , Acoustic Stimulation/methods , Adolescent , Adult , Brain/diagnostic imaging , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Neuropsychological Tests , Photic Stimulation/methods , Reaction Time , Social Perception , Young Adult
5.
Brain Struct Funct ; 220(2): 979-97, 2015 Mar.
Article in English | MEDLINE | ID: mdl-24402675

ABSTRACT

Speech perception difficulties are common among older adults, yet the underlying neural mechanisms are still poorly understood. New empirical evidence suggesting that brain senescence may be an important contributor to these difficulties has challenged the traditional view that peripheral hearing loss is the main factor in their etiology. Here, we investigated the relationship between structural and functional brain senescence and speech perception skills in aging. Following audiometric evaluations, participants underwent MRI while performing a speech perception task at different intelligibility levels. As expected, speech perception declined with age, even after controlling for hearing sensitivity using an audiological measure (pure-tone averages) and a bioacoustical measure (DPOAE recordings). Our results reveal that the core speech network, centered on the supratemporal cortex and ventral motor areas bilaterally, decreased in spatial extent in older adults. Importantly, our results also show that speech skills in aging are affected by changes in cortical thickness and in brain functioning. Age-independent intelligibility effects were found in several motor and premotor areas, including the left ventral premotor cortex and the right supplementary motor area (SMA). Age-dependent intelligibility effects were also found, mainly in sensorimotor cortical areas and in the left dorsal anterior insula. In this region, changes in BOLD signal modulated the relationship between age and speech perception skills, suggesting a role for this region in maintaining speech perception at older ages. These results provide important new insights into the neurobiology of speech perception in aging.


Subject(s)
Aging/psychology , Auditory Cortex/physiopathology , Motor Cortex/physiopathology , Presbycusis/etiology , Speech Perception , Acoustic Stimulation , Adult , Age Factors , Aged , Aging/pathology , Audiometry, Pure-Tone , Audiometry, Speech , Auditory Cortex/pathology , Auditory Threshold , Brain Mapping/methods , Cellular Senescence , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Motor Cortex/pathology , Otoacoustic Emissions, Spontaneous , Presbycusis/pathology , Presbycusis/physiopathology , Presbycusis/psychology , Psychoacoustics , Speech Intelligibility , Young Adult
6.
Neuropsychologia ; 57: 71-7, 2014 May.
Article in English | MEDLINE | ID: mdl-24530236

ABSTRACT

Speech can be perceived not only by the ear and by the eye but also by the hand, with speech gestures felt from manual tactile contact with the speaker's face. In the present electro-encephalographic study, early cross-modal interactions were investigated by comparing auditory evoked potentials during auditory, audio-visual and audio-haptic speech perception in dyadic interactions between a listener and a speaker. In line with previous studies, early auditory evoked responses were attenuated and speeded up during audio-visual compared to auditory speech perception. Crucially, shortened latencies of early auditory evoked potentials were also observed during audio-haptic speech perception. Altogether, these results suggest early bimodal interactions during live face-to-face and hand-to-face speech perception in dyadic interactions.


Subject(s)
Brain Mapping , Brain/physiology , Speech Perception/physiology , Touch Perception/physiology , Touch , Visual Perception/physiology , Acoustic Stimulation , Acoustics , Adult , Analysis of Variance , Electroencephalography , Evoked Potentials/physiology , Female , Humans , Male , Middle Aged , Photic Stimulation , Reaction Time , Young Adult
7.
Exp Brain Res ; 227(2): 275-88, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23591689

ABSTRACT

The concept of an internal forward model that internally simulates the sensory consequences of an action is a central idea in speech motor control. Consistent with this hypothesis, silent articulation has been shown to modulate activity of the auditory cortex and to improve the auditory identification of concordant speech sounds embedded in white noise. In the present study, we replicated and extended this behavioral finding by showing that silently articulating a syllable in synchrony with the presentation of a concordant auditory and/or visually ambiguous speech stimulus improves its identification. Our results further demonstrate that, even in the case of perfect perceptual identification, concurrent mouthing of a syllable speeds up the perceptual processing of a concordant speech stimulus. These results reflect multisensory-motor interactions during speech perception and provide new behavioral arguments for internally generated sensory predictions during silent speech production.


Subject(s)
Language , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Analysis of Variance , Decision Making , Female , Humans , Male , Neuropsychological Tests , Phonetics , Photic Stimulation , Reaction Time , Speech Production Measurement , Time Factors , Young Adult
8.
Brain Res ; 1515: 55-65, 2013 Jun 17.
Article in English | MEDLINE | ID: mdl-23542585

ABSTRACT

In addition to sensory processing, recent neurobiological models of speech perception postulate the existence of a left auditory dorsal processing stream linking auditory speech representations in the auditory cortex with articulatory representations in the motor system, through sensorimotor interaction interfaced in the supramarginal gyrus and/or the posterior part of the superior temporal gyrus. The present state-dependent transcranial magnetic stimulation study aimed to determine whether speech recognition is indeed mediated by the auditory dorsal pathway by examining the causal contribution of the left ventral premotor cortex, supramarginal gyrus, and posterior part of the superior temporal gyrus during an auditory syllable identification/categorization task. To this end, participants listened to a sequence of /ba/ syllables before undergoing a two-alternative forced-choice auditory syllable decision task on ambiguous syllables (ranging along the categorical boundary between /ba/ and /da/). Consistent with previous studies on selective adaptation to speech, following adaptation to /ba/, participants' responses were biased towards /da/. In contrast, in a control condition without prior auditory adaptation, no such bias was observed. Crucially, compared to the results observed without stimulation, single-pulse transcranial magnetic stimulation delivered at the onset of each target stimulus interacted with the initial state of each stimulated brain area by enhancing the adaptation effect. These results demonstrate that the auditory dorsal pathway contributes to auditory speech adaptation.


Subject(s)
Acoustic Stimulation/methods , Adaptation, Physiological/physiology , Auditory Cortex/physiology , Auditory Pathways/physiology , Speech Perception/physiology , Transcranial Magnetic Stimulation/methods , Adult , Female , Humans , Male , Reaction Time/physiology , Young Adult
9.
Neuroimage ; 60(4): 1937-46, 2012 May 01.
Article in English | MEDLINE | ID: mdl-22361165

ABSTRACT

Sensory-motor interactions between auditory and articulatory representations in the dorsal auditory processing stream are suggested to contribute to speech perception, especially when bottom-up information alone is insufficient for purely auditory perceptual mechanisms to succeed. Here, we hypothesized that the dorsal stream responds more vigorously to auditory syllables when one is engaged in a phonetic identification/repetition task subsequent to perception compared to passive listening, and that this effect is further augmented when the syllables are embedded in noise. To this end, we recorded magnetoencephalography while twenty subjects listened to speech syllables, with and without noise masking, in four conditions: passive perception; overt repetition; covert repetition; and overt imitation. Compared to passive listening, left-hemispheric N100m equivalent current dipole responses were amplified and shifted posteriorly when perception was followed by a covert repetition task. Cortically constrained minimum-norm estimates showed amplified left supramarginal and angular gyri responses in the covert repetition condition at ~100 ms from stimulus onset. Longer-latency responses at ~200 ms were amplified in the covert repetition condition in the left angular gyrus and in all three active conditions in the left premotor cortex, with further enhancements when the syllables were embedded in noise. Phonetic categorization accuracy and the magnitude of voice pitch change between the overt repetition and imitation conditions correlated with left premotor cortex responses at ~100 and ~200 ms, respectively. Together, these results suggest that dorsal stream involvement in speech perception depends on perceptual task demands and that phonetic categorization performance is influenced by the left premotor cortex.


Subject(s)
Brain Mapping , Cerebral Cortex/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Female , Functional Laterality/physiology , Humans , Magnetoencephalography , Male , Middle Aged , Phonetics , Young Adult
10.
Philos Trans R Soc Lond B Biol Sci ; 367(1591): 965-76, 2012 Apr 05.
Article in English | MEDLINE | ID: mdl-22371618

ABSTRACT

The verbal transformation effect (VTE) refers to perceptual switches that occur while listening to a speech sound repeated rapidly and continuously. It is a specific case of perceptual multistability and provides a rich paradigm for studying the processes underlying the perceptual organization of speech. While the VTE has mainly been considered a purely auditory effect, this paper presents a review of recent behavioural and neuroimaging studies investigating the role of perceptuo-motor interactions in the effect. Behavioural data show that articulatory constraints and visual information from the speaker's articulatory gestures can influence verbal transformations. In line with these data, functional magnetic resonance imaging and intracranial electroencephalography studies demonstrate that articulatory-based representations play a key role in the emergence and stabilization of speech percepts during a verbal transformation task. Overall, these results suggest that perceptuo (multisensory)-motor processes are involved in the perceptual organization of speech and the formation of speech perceptual objects.


Subject(s)
Speech Perception/physiology , Acoustic Stimulation , Brain/physiology , Humans , Magnetic Resonance Imaging , Models, Neurological , Models, Psychological , Phonetics , Photic Stimulation , Psychomotor Performance/physiology , Visual Perception/physiology
11.
Brain Lang ; 111(1): 1-7, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19362734

ABSTRACT

Consistent with a functional role of the motor system in speech perception, disturbing the activity of the left ventral premotor cortex by means of repetitive transcranial magnetic stimulation (rTMS) has been shown to impair auditory identification of syllables that were masked with white noise. However, whether this region is crucial for speech perception under normal listening conditions remains debated. To directly test this hypothesis, we applied rTMS to the left ventral premotor cortex and participants performed auditory speech tasks involving the same set of syllables but differing in the use of phonemic segmentation processes. Compared to sham stimulation, rTMS applied over the ventral premotor cortex resulted in slower phoneme discrimination requiring phonemic segmentation. No effect was observed in phoneme identification and syllable discrimination tasks that could be performed without need for phonemic segmentation. The findings demonstrate a mediating role of the ventral premotor cortex in speech segmentation under normal listening conditions and are interpreted in relation to theories assuming a link between perception and action in the human speech processing system.


Subject(s)
Frontal Lobe/physiology , Language , Speech Perception/physiology , Verbal Behavior/physiology , Acoustic Stimulation , Adult , Analysis of Variance , Brain Mapping , Electric Stimulation , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Reaction Time/physiology , Speech Production Measurement , Transcranial Magnetic Stimulation
12.
J Acoust Soc Am ; 125(2): 1103-13, 2009 Feb.
Article in English | MEDLINE | ID: mdl-19206885

ABSTRACT

The functional sensorimotor nature of speech production has been demonstrated in studies examining speech adaptation to auditory and/or somatosensory feedback manipulations. These studies have focused primarily on flexible motor processes to explain their findings, without considering modifications to sensory representations resulting from the adaptation process. The present study explores whether the perceptual representation of the /s-/ contrast may be adjusted following the alteration of auditory feedback during the production of /s/-initial words. Consistent with prior studies of speech adaptation, talkers exposed to the feedback manipulation were found to adapt their motor plans for /s/-production in order to compensate for the effects of the sensory perturbation. In addition, a shift in the /s-/ category boundary was observed that reduced the functional impact of the auditory feedback manipulation by increasing the perceptual "distance" between the category boundary and subjects' altered /s/-stimuli, a pattern of perceptual adaptation that was not observed in two separate control groups. These results suggest that speech adaptation to altered auditory feedback is not limited to the motor domain, but rather involves changes in both motor output and auditory representations of speech sounds that together act to reduce the impact of the perturbation.


Subject(s)
Auditory Pathways/physiology , Feedback, Psychological , Learning , Motor Activity , Sensation , Speech Acoustics , Speech Perception , Acoustic Stimulation , Adult , Audiometry, Speech , Female , Humans , Young Adult
13.
Neuroimage ; 23(3): 1143-51, 2004 Nov.
Article in English | MEDLINE | ID: mdl-15528113

ABSTRACT

We used functional magnetic resonance imaging (fMRI) to localize the brain areas involved in the imagery analogue of the verbal transformation effect, that is, the perceptual changes that occur when a speech form is cycled in rapid and continuous mental repetition. Two conditions were contrasted: a baseline condition involving the simple mental repetition of speech sequences, and a verbal transformation condition involving the mental repetition of the same items with an active search for verbal transformation. Our results reveal a predominantly left-lateralized network of cerebral regions activated by the verbal transformation task, similar to the neural network involved in verbal working memory: the left inferior frontal gyrus, the left supramarginal gyrus, the left superior temporal gyrus, the anterior part of the right cingulate cortex, and the cerebellar cortex, bilaterally. Our results strongly suggest that the imagery analogue of the verbal transformation effect, which requires percept analysis, form interpretation, and attentional maintenance of verbal material, relies on a working memory module sharing common components of speech perception and speech production systems.


Subject(s)
Brain/physiology , Speech/physiology , Verbal Behavior/physiology , Adult , Echo-Planar Imaging , Female , Functional Laterality/physiology , Humans , Magnetic Resonance Imaging , Male , Memory, Short-Term/physiology , Models, Statistical , Psychomotor Performance/physiology