Results 1 - 20 of 26
1.
Lang Cogn Neurosci ; 39(4): 423-430, 2024.
Article in English | MEDLINE | ID: mdl-38812611

ABSTRACT

Most language use occurs in face-to-face conversation, which involves rapid turn-taking. Seeing communicative bodily signals in addition to hearing speech may facilitate such fast responding. We tested whether this holds for co-speech hand gestures by investigating whether these gestures speed up button press responses to questions. Sixty native speakers of Dutch viewed videos in which an actress asked yes/no-questions, either with or without a corresponding iconic hand gesture. Participants answered the questions as quickly and accurately as possible via button press. Gestures did not impact response accuracy, but crucially, gestures sped up responses, suggesting that response planning may be finished earlier when gestures are seen. How much gestures sped up responses was not related to their timing in the question or their timing with respect to the corresponding information in speech. Overall, these results are in line with the idea that multimodality may facilitate fast responding during face-to-face conversation.

2.
Cogn Sci ; 48(1): e13407, 2024 01.
Article in English | MEDLINE | ID: mdl-38279899

ABSTRACT

During face-to-face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next-turn planning can begin before a current turn is complete. Given that face-to-face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, which held both for the onset of the gesture as a whole and for the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures received faster responses than questions without gestures. However, we found no evidence for the idea that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation.


Subject(s)
Gestures, Speech, Humans, Speech/physiology, Language, Semantics, Comprehension/physiology
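
For readers who want to make the timing relation in entry 2 concrete, the sketch below shows one way gesture-speech asynchrony could be computed from annotation timestamps and related to response latencies. The arrays, their values, and the correlation analysis are illustrative assumptions, not the study's actual corpus or statistical model.

```python
# Toy sketch: relating gesture-speech asynchrony to response latencies.
# All arrays are hypothetical; they are not the study's corpus data.
import numpy as np
from scipy.stats import pearsonr

# One row per question, times in ms from question onset.
gesture_onset = np.array([120.0, 300.0, 80.0, 450.0])     # gesture (or stroke) onset
affiliate_onset = np.array([600.0, 520.0, 410.0, 700.0])  # onset of the lexical affiliate in speech
response_time = np.array([850.0, 930.0, 790.0, 1010.0])   # listener response latency

# Positive values = the gesture precedes its lexical affiliate ("predictive potential").
asynchrony = affiliate_onset - gesture_onset

r, p = pearsonr(asynchrony, response_time)
print(f"mean gesture lead: {asynchrony.mean():.0f} ms, r = {r:.2f}, p = {p:.3f}")
```
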
3.
J Cogn Neurosci ; 36(3): 460-474, 2024 03 01.
Article in English | MEDLINE | ID: mdl-38165746

ABSTRACT

Although it is well established that self-related information can rapidly capture our attention and bias cognitive functioning, whether this self-bias can affect language processing remains largely unknown. In addition, there is an ongoing debate as to the functional independence of language processes, notably regarding the syntactic domain. Hence, this study investigated the influence of self-related content on syntactic speech processing. Participants listened to sentences that could contain morphosyntactic anomalies while a masked face (self, friend, or unknown identity) was presented for 16 msec preceding the critical word. The language-related ERP components (left anterior negativity [LAN] and P600) appeared for all identity conditions. However, the largest LAN effect followed by a reduced P600 effect was observed for self-faces, whereas a larger LAN with no reduction of the P600 was found for friend faces compared with unknown faces. These data suggest that both early and late syntactic processes can be modulated by self-related content. In addition, alpha power was more suppressed over the left inferior frontal gyrus only when self-faces appeared before the critical word. This may reflect higher semantic demands concomitant with early syntactic operations (around 150-550 msec). Our data also provide further evidence of a self-specific response, as reflected by the N250 component. Collectively, our results suggest that identity-related information is rapidly decoded from facial stimuli and may impact core linguistic processes, supporting an interactive view of syntactic processing. This study provides evidence that the self-reference effect can be extended to syntactic processing.


Subject(s)
Evoked Potentials, Speech Perception, Humans, Evoked Potentials/physiology, Language, Semantics, Linguistics, Electroencephalography, Speech Perception/physiology
4.
J Neurosci ; 44(10)2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38199864

ABSTRACT

During communication in real-life settings, our brain often needs to integrate auditory and visual information and at the same time actively focus on the relevant sources of information, while ignoring interference from irrelevant events. The interaction between integration and attention processes remains poorly understood. Here, we use rapid invisible frequency tagging and magnetoencephalography to investigate how attention affects auditory and visual information processing and integration during multimodal communication. We presented human participants (male and female) with videos of an actress uttering action verbs (auditory; tagged at 58 Hz) accompanied by two movie clips of hand gestures on both sides of fixation (attended stimulus tagged at 65 Hz; unattended stimulus tagged at 63 Hz). Integration difficulty was manipulated by a lower-order auditory factor (clear/degraded speech) and a higher-order visual semantic factor (matching/mismatching gesture). We observed an enhanced neural response to the attended visual information during degraded speech compared to clear speech. For the unattended information, the neural response to mismatching gestures was enhanced compared to matching gestures. Furthermore, signal power at the intermodulation frequencies of the frequency tags, indexing nonlinear signal interactions, was enhanced in the left frontotemporal and frontal regions. In the left inferior frontal gyrus, this enhancement was specific to the attended information, for those trials that benefitted from integration with a matching gesture. Together, our results suggest that attention modulates audiovisual processing and interaction, depending on the congruence and quality of the sensory input.


Subject(s)
Brain, Speech Perception, Humans, Male, Female, Brain/physiology, Visual Perception/physiology, Magnetoencephalography, Speech/physiology, Attention/physiology, Speech Perception/physiology, Acoustic Stimulation, Photic Stimulation
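
The intermodulation analysis in entry 4 rests on simple frequency arithmetic: nonlinear interactions between signals tagged at f1 and f2 produce power at combinations such as f2 - f1 and f1 + f2. The sketch below merely enumerates those candidate frequencies for the tagging frequencies reported in the abstract; which combinations the authors actually analyzed is not specified here.

```python
# Toy sketch: candidate intermodulation frequencies for the tagging frequencies
# reported in the abstract (auditory 58 Hz, attended visual 65 Hz, unattended
# visual 63 Hz). This only enumerates low-order sums and differences.
from itertools import combinations

tags = {"auditory": 58, "visual_attended": 65, "visual_unattended": 63}

for (name1, f1), (name2, f2) in combinations(tags.items(), 2):
    print(f"{name1} x {name2}: |f2 - f1| = {abs(f2 - f1)} Hz, f1 + f2 = {f1 + f2} Hz")
```
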
5.
J Cogn ; 6(1): 60, 2023.
Article in English | MEDLINE | ID: mdl-37841668

ABSTRACT

Language processing is influenced by sensorimotor experiences. Here, we review behavioral evidence for embodied and grounded influences in language processing across six linguistic levels of granularity. We examine (a) sub-word features, discussing grounded influences on iconicity (systematic associations between word form and meaning); (b) words, discussing boundary conditions and generalizations for the simulation of color, sensory modality, and spatial position; (c) sentences, discussing boundary conditions and applications of action direction simulation; (d) texts, discussing how the teaching of simulation can improve comprehension in beginning readers; (e) conversations, discussing how multimodal cues improve turn taking and alignment; and (f) text corpora, discussing how distributional semantic models can reveal how grounded and embodied knowledge is encoded in texts. These approaches are converging on a convincing account of the psychology of language, but at the same time, there are important criticisms of the embodied approach and of specific experimental paradigms. The surest way forward requires the adoption of a wide array of scientific methods. By providing complementary evidence, a combination of multiple methods on various levels of granularity can help us gain a more complete understanding of the role of embodiment and grounding in language processing.

6.
STAR Protoc ; 4(3): 102370, 2023 Sep 15.
Article in English | MEDLINE | ID: mdl-37421617

ABSTRACT

We present a protocol to study naturalistic human communication using dual-electroencephalography (EEG) and audio-visual recordings. We describe preparatory steps for data collection, including setup preparation, experiment design, and piloting. We then describe the data collection process in detail, which consists of participant recruitment, experiment room preparation, and data collection. We also outline the kinds of research questions that can be addressed with the current protocol, including several analysis possibilities, from conversational to advanced time-frequency analyses. For complete details on the use and execution of this protocol, please refer to Drijvers and Holler (2022).


Subject(s)
Communication, Electroencephalography, Humans, Data Collection
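
Entry 6 mentions analysis possibilities ranging from conversational to time-frequency analyses of dual-EEG data. As one generic illustration, not the protocol's prescribed pipeline, the sketch below computes an inter-brain phase-locking value between one channel per participant; the frequency band, sampling rate, and stand-in signals are assumptions made for the example.

```python
# Toy sketch: inter-brain phase synchrony (phase-locking value) between one EEG
# channel from each conversation partner. Generic illustration only; band,
# channels, and signals are arbitrary placeholders, not the protocol's analysis.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500  # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)

# Stand-in signals for one channel per participant (replace with real dual-EEG data).
eeg_a = np.random.randn(t.size)
eeg_b = np.random.randn(t.size)

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

# Theta band (4-7 Hz) chosen only as an example.
phase_a = np.angle(hilbert(bandpass(eeg_a, 4, 7, fs)))
phase_b = np.angle(hilbert(bandpass(eeg_b, 4, 7, fs)))

plv = np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))
print(f"inter-brain PLV (theta): {plv:.3f}")
```
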
7.
Cereb Cortex ; 33(5): 1626-1629, 2023 02 20.
Article in English | MEDLINE | ID: mdl-35452080

ABSTRACT

Frequency tagging has been successfully used to investigate selective stimulus processing in electroencephalography (EEG) or magnetoencephalography (MEG) studies. Recently, new projectors have been developed that allow for frequency tagging at higher frequencies (>60 Hz). This technique, rapid invisible frequency tagging (RIFT), provides two crucial advantages over low-frequency tagging: (i) it leaves low-frequency oscillations unperturbed, and thus open for investigation, and (ii) it can render the tagging invisible, resulting in more naturalistic paradigms and a lack of participant awareness. The development of this technique has far-reaching implications, as oscillations involved in cognitive processes can be investigated, and potentially manipulated, in a more naturalistic manner.


Subject(s)
Electroencephalography, Magnetoencephalography, Humans, Electroencephalography/methods, Magnetoencephalography/methods, Cognition
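
To make the idea in entry 7 concrete, the sketch below builds a frame-wise luminance modulation of the kind that could drive rapid invisible frequency tagging on a high-refresh projector. The 1440 Hz refresh rate and 65 Hz tagging frequency are illustrative values; actual RIFT setups apply such a signal to a stimulus patch through dedicated presentation software.

```python
# Toy sketch: a frame-wise luminance modulation for rapid invisible frequency
# tagging. Refresh rate and tagging frequency are illustrative assumptions.
import numpy as np

refresh_rate = 1440          # projector refresh rate in Hz (assumed for the example)
tag_freq = 65                # tagging frequency in Hz (>60 Hz, invisible to observers)
duration = 2.0               # seconds

n_frames = int(refresh_rate * duration)
frame_times = np.arange(n_frames) / refresh_rate

# Luminance multiplier oscillating between 0 and 1 around a mean of 0.5.
luminance = 0.5 + 0.5 * np.sin(2 * np.pi * tag_freq * frame_times)

print(luminance[:5])  # per-frame multipliers used to scale the tagged stimulus
```
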
8.
Psychon Bull Rev ; 30(2): 792-801, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36138282

ABSTRACT

During face-to-face communication, recipients need to rapidly integrate a plethora of auditory and visual signals. This integration of signals from many different bodily articulators, all offset in time, with the information in the speech stream may either tax the cognitive system, thus slowing down language processing, or result in multimodal facilitation. Using the classical shadowing paradigm, participants shadowed speech from face-to-face, naturalistic dyadic conversations in an audiovisual context, an audiovisual context without visual speech (e.g., lips), and an audio-only context. Our results provide evidence of a multimodal facilitation effect in human communication: participants were faster in shadowing words when seeing multimodal messages compared with when hearing only audio. Also, the more visual context was present, the fewer shadowing errors were made, and the earlier in time participants shadowed predicted lexical items. We propose that the multimodal facilitation effect may contribute to the ease of fast face-to-face conversational interaction.


Subject(s)
Speech Perception, Humans, Speech, Language, Visual Perception
9.
iScience ; 25(11): 105413, 2022 Nov 18.
Article in English | MEDLINE | ID: mdl-36388995

ABSTRACT

We here demonstrate that face-to-face spatial orientation induces a special 'social mode' for neurocognitive processing during conversation, even in the absence of visibility. Participants conversed face to face, face to face but visually occluded, and back to back to tease apart effects caused by seeing visual communicative signals and by spatial orientation. Using dual EEG, we found that (1) listeners' brains engaged more strongly while conversing face to face than back to back, irrespective of the visibility of communicative signals, (2) listeners attended to speech more strongly in a back-to-back compared to a face-to-face spatial orientation without visibility; visual signals further reduced the attention needed; (3) the brains of interlocutors were more in sync in a face-to-face compared to a back-to-back spatial orientation, even when they could not see each other; visual signals further enhanced this pattern. Communicating in face-to-face spatial orientation is thus sufficient to induce a special 'social mode' which fine-tunes the brain for neurocognitive processing in conversation.

10.
J Speech Lang Hear Res ; 65(5): 1822-1838, 2022 05 11.
Article in English | MEDLINE | ID: mdl-35439423

ABSTRACT

PURPOSE: This study investigated to what extent iconic co-speech gestures help word intelligibility in sentence context in two different linguistic maskers (native vs. foreign). It was hypothesized that sentence recognition improves with the presence of iconic co-speech gestures and with foreign compared to native babble. METHOD: Thirty-two native Dutch participants performed a Dutch word recognition task in context, in which they were presented with videos in which an actress uttered short Dutch sentences (e.g., Ze begint te openen, "She starts to open"). Participants were presented with a total of six audiovisual conditions: no background noise (i.e., clear condition) without gesture, no background noise with gesture, French babble without gesture, French babble with gesture, Dutch babble without gesture, and Dutch babble with gesture; they were asked to type what was said by the Dutch actress. Accurate identification of the action verbs at the end of the target sentences was measured. RESULTS: Performance on the task was better in the gesture than in the nongesture conditions (i.e., a gesture enhancement effect). In addition, performance was better in French babble than in Dutch babble. CONCLUSIONS: Listeners benefit from iconic co-speech gestures during communication, and from foreign compared to native background speech. These insights into multimodal communication may be valuable to everyone who engages in multimodal communication, and especially to those who often communicate or work in public places where competing speech is present in the background.


Subject(s)
Gestures, Speech Perception, Comprehension, Female, Humans, Language, Speech
11.
Cogn Sci ; 46(2): e13083, 2022 02.
Article in English | MEDLINE | ID: mdl-35188682

ABSTRACT

Height-pitch associations are claimed to be universal and independent of language, but this claim remains controversial. The present study sheds new light on this debate with a multimodal analysis of individual sound and melody descriptions obtained in an interactive communication paradigm with speakers of Dutch and Farsi. The findings reveal that, in contrast to Dutch speakers, Farsi speakers do not use a height-pitch metaphor consistently in speech. Both Dutch and Farsi speakers' co-speech gestures did reveal a mapping of higher pitches to higher space and lower pitches to lower space, and this gesture space-pitch mapping tended to co-occur with corresponding spatial words (high-low). However, this mapping was much weaker in Farsi speakers than Dutch speakers. This suggests that cross-linguistic differences shape the conceptualization of pitch and further calls into question the universality of height-pitch associations.


Subject(s)
Language, Speech, Gestures, Humans, Linguistics, Metaphor
12.
Sci Rep ; 11(1): 16721, 2021 08 18.
Article in English | MEDLINE | ID: mdl-34408178

ABSTRACT

In everyday conversation, we are often challenged with communicating in non-ideal settings, such as in noise. Increased speech intensity and larger mouth movements are used to overcome noise in constrained settings (the Lombard effect). How we adapt to noise in face-to-face interaction, the natural environment of human language use, where manual gestures are ubiquitous, is currently unknown. We asked Dutch adults to wear headphones with varying levels of multi-talker babble while attempting to communicate action verbs to one another. Using quantitative motion capture and acoustic analyses, we found that (1) noise is associated with increased speech intensity and enhanced gesture kinematics and mouth movements, and (2) acoustic modulation only occurs when gestures are not present, while kinematic modulation occurs regardless of co-occurring speech. Thus, in face-to-face encounters the Lombard effect is not constrained to speech but is a multimodal phenomenon where the visual channel carries most of the communicative burden.
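
Entry 12 combines acoustic and kinematic dependent measures. A minimal sketch of the two kinds of measures is given below, using placeholder signals rather than the study's audio or motion-capture data: RMS speech intensity in dB and peak hand speed from 3D marker positions.

```python
# Toy sketch of the two kinds of dependent measures described above.
# Sampling rates and arrays are placeholders, not the study's data.
import numpy as np

# Acoustic intensity: RMS level of an audio excerpt, in dB relative to full scale.
audio_fs = 48000
audio = np.random.randn(audio_fs * 2) * 0.1          # stand-in for a 2 s utterance
rms_db = 20 * np.log10(np.sqrt(np.mean(audio ** 2)))
print(f"speech intensity: {rms_db:.1f} dBFS")

# Gesture kinematics: peak speed of a hand marker from 3D positions (meters).
mocap_fs = 200
positions = np.cumsum(np.random.randn(mocap_fs * 2, 3) * 1e-3, axis=0)
velocity = np.gradient(positions, 1 / mocap_fs, axis=0)  # m/s per axis
speed = np.linalg.norm(velocity, axis=1)
print(f"peak hand speed: {speed.max():.2f} m/s")
```
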

13.
Philos Trans R Soc Lond B Biol Sci ; 376(1835): 20200334, 2021 10 11.
Article in English | MEDLINE | ID: mdl-34420378

ABSTRACT

It is now widely accepted that the brunt of animal communication is conducted via several modalities, e.g. acoustic and visual, either simultaneously or sequentially. This is a laudable multimodal turn relative to traditional accounts of temporal aspects of animal communication, which have focused on a single modality at a time. However, the fields that are currently contributing to the study of multimodal communication are highly varied, and still largely disconnected given their sole focus on a particular level of description or their particular concern with human or non-human animals. Here, we provide an integrative overview of converging findings that show how multimodal processes occurring at neural, bodily, as well as social interactional levels each contribute uniquely to the complex rhythms that characterize communication in human and non-human animals. Though we address findings for each of these levels independently, we conclude that the most important challenge in this field is to identify how processes at these different levels connect. This article is part of the theme issue 'Synchrony and rhythm interaction: from the brain to behavioural ecology'.


Subject(s)
Animal Communication, Communication, Periodicity, Primates/psychology, Animals, Humans
14.
J Cogn Neurosci ; 33(5): 887-901, 2021 04 01.
Article in English | MEDLINE | ID: mdl-34449844

ABSTRACT

Rhythmic neural activity synchronizes with certain rhythmic behaviors, such as breathing, sniffing, saccades, and speech. The extent to which neural oscillations synchronize with higher-level and more complex behaviors is largely unknown. Here, we investigated electrophysiological synchronization with keyboard typing, an omnipresent behavior that countless people engage in daily. Keyboard typing is rhythmic, with frequency characteristics roughly the same as the neural oscillatory dynamics associated with cognitive control, notably midfrontal theta (4-7 Hz) oscillations. We tested the hypothesis that synchronization occurs between typing and midfrontal theta and breaks down when errors are committed. Thirty healthy participants typed words and sentences on a keyboard without visual feedback, while EEG was recorded. Typing rhythmicity was investigated through interkeystroke interval analyses and a kernel density estimation method. We used a multivariate spatial filtering technique to investigate frequency-specific synchronization between typing and neuronal oscillations. Our results demonstrate theta rhythmicity in typing (around 6.5 Hz) in both behavioral analyses. Synchronization between typing and neuronal oscillations occurred at frequencies ranging from 4 to 15 Hz, but to a larger extent at lower frequencies. However, the peak synchronization frequency was idiosyncratic across participants, therefore not specific to theta or to midfrontal regions, and correlated somewhat with peak typing frequency. Errors and trials associated with stronger cognitive control were not accompanied by changes in synchronization at any frequency. As a whole, this study shows that brain-behavior synchronization does occur during keyboard typing but is not specific to midfrontal theta.


Subject(s)
Electroencephalography, Theta Rhythm, Brain, Humans, Neurons
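
Entry 14 names two behavioral analyses: interkeystroke intervals and a kernel density estimation method. The sketch below illustrates both on made-up keystroke times; the study's exact KDE procedure may differ, so treat this as a schematic reconstruction rather than the authors' pipeline.

```python
# Toy sketch: interkeystroke intervals and a kernel density estimate of typing
# rate. Keystroke times are made up; the study's KDE procedure may differ.
import numpy as np
from scipy.stats import gaussian_kde

keystroke_times = np.array([0.00, 0.16, 0.31, 0.45, 0.62, 0.76, 0.93, 1.08])  # seconds

# Interkeystroke intervals and their implied instantaneous typing frequency.
ikis = np.diff(keystroke_times)
rates = 1.0 / ikis                      # keystrokes per second (Hz)

# Kernel density estimate over typing rates; the mode approximates the
# dominant typing frequency (expected near theta, ~6.5 Hz in the study).
kde = gaussian_kde(rates)
grid = np.linspace(2, 12, 500)
peak_freq = grid[np.argmax(kde(grid))]
print(f"mean IKI: {ikis.mean() * 1000:.0f} ms, peak typing frequency: {peak_freq:.1f} Hz")
```
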
15.
Psychol Res ; 85(5): 1997-2011, 2021 Jul.
Article in English | MEDLINE | ID: mdl-32627053

ABSTRACT

When comprehending speech-in-noise (SiN), younger and older adults benefit from seeing the speaker's mouth, i.e., visible speech. Younger adults additionally benefit from manual iconic co-speech gestures. Here, we investigate to what extent younger and older adults benefit from perceiving both visual articulators while comprehending SiN, and whether this is modulated by working memory and inhibitory control. Twenty-eight younger and 28 older adults performed a word recognition task in three visual contexts: mouth blurred (speech-only), visible speech, or visible speech + iconic gesture. The speech signal was either clear or embedded in multitalker babble. Additionally, there were two visual-only conditions (visible speech, visible speech + gesture). Accuracy levels for both age groups were higher when both visual articulators were present compared to either one or none. However, older adults received a significantly smaller benefit than younger adults, although they performed equally well in speech-only and visual-only word recognition. Individual differences in verbal working memory and inhibitory control partly accounted for age-related performance differences. To conclude, perceiving iconic gestures in addition to visible speech improves younger and older adults' comprehension of SiN. Yet, the ability to benefit from this additional visual information is modulated by age and verbal working memory. Future research will have to show whether these findings extend beyond the single word level.


Subject(s)
Aging/psychology, Lipreading, Short-Term Memory, Nonverbal Communication/psychology, Sign Language, Speech Perception, Age Factors, Aged, Comprehension, Gestures, Humans, Noise, Signal Detection (Psychological), Visual Perception, Young Adult
16.
Hum Brain Mapp ; 42(4): 1138-1152, 2021 03.
Article in English | MEDLINE | ID: mdl-33206441

ABSTRACT

During communication in real-life settings, the brain integrates information from auditory and visual modalities to form a unified percept of our environment. In the current magnetoencephalography (MEG) study, we used rapid invisible frequency tagging (RIFT) to generate steady-state evoked fields and investigated the integration of audiovisual information in a semantic context. We presented participants with videos of an actress uttering action verbs (auditory; tagged at 61 Hz) accompanied by a gesture (visual; tagged at 68 Hz, using a projector with a 1,440 Hz refresh rate). Integration difficulty was manipulated by lower-order auditory factors (clear/degraded speech) and higher-order visual factors (congruent/incongruent gesture). We identified MEG spectral peaks at the individual (61/68 Hz) tagging frequencies. We furthermore observed a peak at the intermodulation frequency of the auditory and visually tagged signals (f_visual - f_auditory = 7 Hz), specifically when lower-order integration was easiest because signal quality was optimal. This intermodulation peak is a signature of nonlinear audiovisual integration, and was strongest in the left inferior frontal gyrus and left temporal regions, areas known to be involved in speech-gesture integration. The enhanced power at the intermodulation frequency thus reflects the ease of lower-order audiovisual integration and demonstrates that speech-gesture information interacts in higher-order language areas. Furthermore, we provide a proof-of-principle of the use of RIFT to study the integration of audiovisual stimuli, in relation to, for instance, semantic context.


Subject(s)
Magnetoencephalography/methods, Prefrontal Cortex/physiology, Speech Perception/physiology, Temporal Lobe/physiology, Visual Perception/physiology, Adult, Female, Gestures, Humans, Male, Proof of Concept Study, Social Perception, Speech Intelligibility/physiology, Young Adult
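
A minimal sketch of the spectral check described in entry 16 is given below: estimating power with Welch's method and reading it out at the tagging frequencies (61 and 68 Hz) and their intermodulation frequency (68 - 61 = 7 Hz). The signal here is synthetic; a real analysis would use MEG sensor or source data and appropriate baselining.

```python
# Toy sketch: checking a synthetic signal for power at the tagging frequencies
# and their intermodulation frequency, using Welch's method. The frequencies
# follow the abstract; the data do not.
import numpy as np
from scipy.signal import welch

fs = 1000
t = np.arange(0, 20, 1 / fs)
f_aud, f_vis = 61, 68
signal = (np.sin(2 * np.pi * f_aud * t)
          + np.sin(2 * np.pi * f_vis * t)
          + 0.3 * np.sin(2 * np.pi * (f_vis - f_aud) * t)   # nonlinear interaction term
          + np.random.randn(t.size))

freqs, psd = welch(signal, fs=fs, nperseg=4 * fs)

for f in (f_aud, f_vis, f_vis - f_aud):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:>2d} Hz: power {psd[idx]:.3f}")
```
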
17.
Behav Res Methods ; 52(4): 1783-1794, 2020 08.
Article in English | MEDLINE | ID: mdl-31974805

ABSTRACT

In human face-to-face communication, speech is frequently accompanied by visual signals, especially communicative hand gestures. Analyzing these visual signals requires detailed manual annotation of video data, which is often a labor-intensive and time-consuming process. To facilitate this process, we here present SPUDNIG (SPeeding Up the Detection of Non-iconic and Iconic Gestures), a tool to automatize the detection and annotation of hand movements in video data. We provide a detailed description of how SPUDNIG detects hand movement initiation and termination, as well as open-source code and a short tutorial on an easy-to-use graphical user interface (GUI) for our tool. We then provide a proof-of-principle and validation of our method by comparing SPUDNIG's output to manual annotations of gestures by a human coder. While the tool does not entirely eliminate the need for a human coder (e.g., for detecting false positives), our results demonstrate that SPUDNIG can detect both iconic and non-iconic gestures with very high accuracy, and could successfully detect all iconic gestures in our validation dataset. Importantly, SPUDNIG's output can be imported directly into commonly used annotation tools such as ELAN and ANVIL. We therefore believe that SPUDNIG will be highly relevant for researchers studying multimodal communication, because its annotations significantly accelerate the analysis of large video corpora.


Subject(s)
Gestures, Speech Perception, Humans, Movement, Speech
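
To give a flavor of automatic gesture detection as in entry 17, the sketch below flags movement onsets and offsets from a hand keypoint track using a simple speed threshold. This is a generic illustration of the idea only; it is not SPUDNIG's actual algorithm, and the frame rate, threshold, and data are placeholders.

```python
# Toy sketch: flagging hand-movement onsets and offsets from a keypoint track
# with a simple speed threshold. Not SPUDNIG's actual algorithm; frame rate,
# threshold, and data are placeholders.
import numpy as np

fps = 25
xy = np.cumsum(np.random.randn(10 * fps, 2), axis=0)       # stand-in wrist keypoint track (pixels)

speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) * fps   # pixels per second
moving = speed > 50.0                                       # arbitrary threshold

state_change = np.diff(moving.astype(int))                  # +1 = onset, -1 = offset
onsets = np.flatnonzero(state_change == 1) + 1
offsets = np.flatnonzero(state_change == -1) + 1
print(f"{onsets.size} candidate movement onsets, {offsets.size} offsets")
```
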
18.
Lang Speech ; 63(2): 209-220, 2020 Jun.
Article in English | MEDLINE | ID: mdl-30795715

ABSTRACT

Native listeners benefit from both visible speech and iconic gestures to enhance degraded speech comprehension (Drijvers & Ozyürek, 2017). We tested how highly proficient non-native listeners benefit from these visual articulators compared to native listeners. We presented videos of an actress uttering a verb in clear, moderately degraded, or severely degraded speech, while her lips were blurred, visible, or visible and accompanied by a gesture. Our results revealed that, unlike native listeners, non-native listeners were less likely to benefit from the combined enhancement of visible speech and gestures, especially because the benefit from visible speech was minimal when the signal quality was not sufficient.


Subject(s)
Comprehension, Gestures, Photic Stimulation, Semantics, Speech Perception, Female, Humans, Male, Young Adult
19.
Cogn Sci ; 43(10): e12789, 2019 10.
Article in English | MEDLINE | ID: mdl-31621126

ABSTRACT

Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non-native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye-tracking to investigate whether and how native and highly proficient non-native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6-band noise-vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued-recall task. Gestural enhancement (i.e., a relative reduction in reaction-time cost) was largest when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non-native listeners mostly gazed at the face during comprehension, but non-native listeners gazed more often at gestures than native listeners did. However, only native but not non-native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non-native listeners might gaze more at gestures because it may be more challenging for them to resolve the degraded auditory cues and couple those cues to the phonological information conveyed by visible speech. This diminished phonological knowledge might hinder the use of the semantic information conveyed by gestures for non-native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non-native listeners.


Subject(s)
Attention, Gestures, Language, Speech Intelligibility, Speech, Adult, Comprehension, Eye Movement Measurements, Humans, Young Adult
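
One common way to quantify gaze allocation of the kind reported in entry 19 is dwell-time proportion per area of interest (AOI). The sketch below computes such proportions from a toy list of fixations; the AOI labels, durations, and data structure are hypothetical and not taken from the study.

```python
# Toy sketch: proportion of gaze time per area of interest (AOI) from fixation
# records. AOI labels, durations, and the data structure are hypothetical.
from collections import defaultdict

fixations = [                       # (AOI label, duration in ms)
    ("face", 420), ("face", 310), ("gesture", 180),
    ("face", 260), ("gesture", 90), ("elsewhere", 60),
]

dwell = defaultdict(int)
for aoi, dur in fixations:
    dwell[aoi] += dur

total = sum(dwell.values())
for aoi, dur in sorted(dwell.items(), key=lambda kv: -kv[1]):
    print(f"{aoi:>9s}: {dur / total:.1%} of gaze time")
```
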
20.
Neuroimage ; 194: 55-67, 2019 07 01.
Article in English | MEDLINE | ID: mdl-30905837

ABSTRACT

Listeners are often challenged by adverse listening conditions during language comprehension, induced by external factors, such as noise, but also internal factors, such as being a non-native listener. Visible cues, such as the semantic information conveyed by iconic gestures, can enhance language comprehension in such situations. Using magnetoencephalography (MEG), we investigated whether spatiotemporal oscillatory dynamics can predict a listener's benefit from iconic gestures during language comprehension in both internally (non-native versus native listeners) and externally (clear/degraded speech) induced adverse listening conditions. Proficient non-native speakers of Dutch were presented with videos in which an actress uttered a degraded or clear verb, accompanied by a gesture or not, and completed a cued-recall task after every video. The behavioral and oscillatory results obtained from non-native listeners were compared to an MEG study in which we presented the same stimuli to native listeners (Drijvers et al., 2018a). Non-native listeners demonstrated a gestural enhancement effect similar to that of native listeners, but were overall significantly slower on the cued-recall task. In both native and non-native listeners, an alpha/beta power suppression revealed engagement of the extended language network, motor, and visual regions during gestural enhancement of degraded speech comprehension, suggesting similar core processes supporting unification and lexical access. An individual's alpha/beta power modulation predicted the gestural benefit a listener experienced during degraded speech comprehension. Importantly, however, non-native listeners showed less engagement of the mouth area of the primary somatosensory cortex, left insula (beta), LIFG, and ATL (alpha) than native listeners, which suggests that non-native listeners might be hindered in processing the degraded phonological cues and coupling them to the semantic information conveyed by the gesture. Native and non-native listeners thus demonstrated similar yet distinct spatiotemporal oscillatory dynamics when recruiting visual cues to disambiguate degraded speech.


Subject(s)
Comprehension/physiology, Gestures, Somatosensory Cortex/physiology, Speech Perception/physiology, Adult, Cues, Female, Humans, Magnetoencephalography, Male, Noise, Young Adult