Results 1 - 18 of 18
1.
Hum Brain Mapp ; 45(11): e26797, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39041175

ABSTRACT

Speech comprehension is crucial for human social interaction, relying on the integration of auditory and visual cues across various levels of representation. While research has extensively studied multisensory integration (MSI) using idealised, well-controlled stimuli, there is a need to understand this process in response to complex, naturalistic stimuli encountered in everyday life. This study investigated behavioural and neural MSI in neurotypical adults experiencing audio-visual speech within a naturalistic, social context. Our novel paradigm incorporated a broader social situational context, complete words, and speech-supporting iconic gestures, allowing for context-based pragmatics and semantic priors. We investigated MSI in the presence of unimodal (auditory or visual) or complementary, bimodal speech signals. During audio-visual speech trials, compared to unimodal trials, participants more accurately recognised spoken words and showed a more pronounced suppression of alpha power, an indicator of heightened integration load. Importantly, on the neural level, these effects surpassed mere summation of unimodal responses, suggesting non-linear MSI mechanisms. Overall, our findings demonstrate that typically developing adults integrate audio-visual speech and gesture information to facilitate speech comprehension in noisy environments, highlighting the importance of studying MSI in ecologically valid contexts.
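The non-linearity referred to above ("surpassed mere summation of unimodal responses") corresponds to a superadditivity criterion. A minimal, generic statement of that criterion is sketched below purely for illustration; it is not the authors' exact statistical model, and the symbols R_AV, R_A and R_V are introduced here only to denote the responses on bimodal, auditory-only and visual-only trials.

```latex
% Illustrative superadditivity criterion (sketch only; not the study's model).
% R_{AV}: response on audio-visual trials
% R_{A}, R_{V}: responses on auditory-only and visual-only trials
R_{AV} > R_{A} + R_{V}
```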


Subjects
Gestures , Speech Perception , Humans , Female , Male , Speech Perception/physiology , Young Adult , Adult , Visual Perception/physiology , Electroencephalography , Comprehension/physiology , Acoustic Stimulation , Speech/physiology , Brain/physiology , Photic Stimulation/methods
2.
J Child Lang ; 51(3): 656-680, 2024 May.
Article in English | MEDLINE | ID: mdl-38314574

ABSTRACT

Based on the linguistic analysis of game explanations and retellings, the paper's goal is to investigate the relation between preschool children's situated discourse competence and iconic gestures in different communicative genres, focussing on reinforcing and supplementary speech-gesture combinations. To this end, a method was developed to evaluate discourse competence as a context-sensitive and interactively embedded phenomenon. The so-called GLOBE model was adapted to assess discourse competence in relation to interactive scaffolding. The findings show clear links between the children's competence and their parents' scaffolding. We take this as evidence of a fine-tuned interactive support system. The results also indicate strong relations between higher discourse competence and increased frequency of iconic gestures. This applies in particular to reinforcing gestures. The results are interpreted as confirmation that the speech-gesture system undergoes systematic changes during early childhood, and that gesturing becomes more iconic, and thus more communicative, as discourse competence grows.


Subjects
Child Language , Gestures , Humans , Child, Preschool , Male , Female , Speech , Communication , Language Development , Linguistics
3.
Brain Sci ; 13(12), 2023 Dec 12.
Article in English | MEDLINE | ID: mdl-38137160

ABSTRACT

This paper investigates the influence of gestures on foreign language (FL) vocabulary learning. We first review the state of the art in the field and then summarise the research conducted in our lab (three experiments already published), before offering a unified theoretical interpretation of the role of gestures in FL vocabulary learning. In Experiments 1 and 2, we examined the impact of gestures on noun and verb learning. The results revealed that participants exhibited better learning outcomes when FL words were accompanied by congruent gestures compared to the no-gesture condition. Conversely, when meaningless or incongruent gestures were presented alongside new FL words, gestures had a detrimental effect on the learning process. We then addressed whether individuals need to physically perform the gestures themselves for gestures to affect vocabulary learning (Experiment 3). Results indicated that congruent gestures improved FL word recall both when learners only observed the instructor's gestures ("see" group) and when they mimicked them ("do" group). Importantly, the adverse effect of incongruent gestures was reduced in the "do" group compared to the "see" group. These findings suggest that iconic gestures can serve as an effective tool for learning vocabulary in an FL, particularly when the gestures align with the meaning of the words. Furthermore, actively performing gestures helps counteract the negative effects of inconsistencies between gestures and word meanings. Consequently, if a choice must be made, an FL learning strategy in which learners acquire words while making gestures congruent with their meaning would be highly desirable.

4.
Psychon Bull Rev ; 29(2): 600-612, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34671936

ABSTRACT

Human face-to-face communication is multimodal: it comprises speech as well as visual cues, such as articulatory and limb gestures. In the current study, we assess how iconic gestures and mouth movements influence audiovisual word recognition. We presented video clips of an actress uttering single words accompanied, or not, by more or less informative iconic gestures. For each word we also measured the informativeness of the mouth movements from a separate lipreading task. We manipulated whether gestures were congruent or incongruent with the speech, and whether the words were audible or noise vocoded. The task was to decide whether the speech from the video matched a previously seen picture. We found that congruent iconic gestures aided word recognition, especially in the noise-vocoded condition, and the effect was larger (in terms of reaction times) for more informative gestures. Moreover, more informative mouth movements facilitated performance in challenging listening conditions when the speech was accompanied by gestures (either congruent or incongruent), suggesting an enhancement when both cues are present relative to just one. We also observed a trend for more informative mouth movements to speed up word recognition across clarity conditions, but only when gestures were absent. We conclude that listeners use and dynamically weight the informativeness of gestures and mouth movements available during face-to-face communication.


Subjects
Gestures , Speech Perception , Comprehension , Humans , Lipreading , Speech
5.
Front Psychol ; 12: 776867, 2021.
Article in English | MEDLINE | ID: mdl-34917002

ABSTRACT

Numerous studies have explored the benefit of iconic gestures for speech comprehension. However, only a few studies have investigated how visual attention is allocated to these gestures in the context of clear versus degraded speech, and how information is extracted from them to enhance comprehension. This study aimed to explore the effect of iconic gestures on comprehension and whether fixating the gesture is required for information extraction. Four types of gestures (i.e., semantically and syntactically incongruent iconic gestures, meaningless configurations, and congruent iconic gestures) were presented in a sentence context in three different listening conditions (i.e., clear, partly degraded or fully degraded speech). Using eye-tracking technology, participants' gaze was recorded while they watched video clips, after which they were invited to answer simple comprehension questions. Results first showed that the different types of gestures attracted attention differently and that the more the speech was degraded, the less attention participants paid to gestures. Furthermore, semantically incongruent gestures appeared to particularly impair comprehension even though they were not fixated, whereas congruent gestures appeared to improve comprehension despite likewise not being fixated. These results suggest that covert attention to gestures is sufficient for the information they convey to be processed by the listener.

6.
Front Psychol ; 12: 634074, 2021.
Article in English | MEDLINE | ID: mdl-33995189

ABSTRACT

Iconic gesture-speech integration is a relatively recent field of investigation, with numerous researchers studying its various aspects, and the results obtained are just as diverse. The definition of iconic gestures is often overlooked in the interpretation of results. Furthermore, while most behavioral studies have demonstrated an advantage of bimodal presentation, brain activity studies show a diversity of results regarding the brain regions involved in processing this integration. Clinical studies also yield mixed results, some suggesting parallel processing channels, others a unique and integrated channel. This review aims to draw attention to the methodological variations in research on iconic gesture-speech integration and how they impact conclusions regarding the underlying phenomena. It also attempts to draw together the findings from other relevant research and to suggest potential areas for further investigation, in order to better understand the processes at play during gesture-speech integration.

7.
Front Psychol ; 12: 651725, 2021.
Article in English | MEDLINE | ID: mdl-33981277

ABSTRACT

The economic principle of communication, according to which successful communication can be achieved with the least effort, has been studied for verbal communication. With respect to nonverbal behavior, it implies that the forms of iconic gestures change over the course of communication and become reduced, that is, less pronounced. These changes and their effects on learning are currently unexplored in the relevant literature. Addressing this research gap, we conducted a word learning study to test the effects of changing gestures on children's slow mapping. We applied a within-subject design and tested 51 children, aged 6.7 years on average (SD = 0.4), who learned unknown words from a story. The story was presented under two conditions: half of the target words were accompanied by progressively reduced iconic gestures (PRG), and the other half by fully executed iconic gestures (FEG). To ensure reliable gesture presentation, children watched a recording of the storyteller in both conditions. We tested the slow mapping effects on children's productive and receptive word knowledge three minutes as well as two to three days after the story presentation. The results suggest that children's production of the target words, but not their understanding thereof, was enhanced by PRG.

8.
Brain Lang ; 216: 104916, 2021 05.
Article in English | MEDLINE | ID: mdl-33652372

ABSTRACT

Here we examine the role of visuospatial working memory (WM) during the comprehension of multimodal discourse with co-speech iconic gestures. EEG was recorded as healthy adults encoded either a sequence of one (low load) or four (high load) dot locations on a grid and rehearsed them until a free recall response was collected later in the trial. During the rehearsal period of the WM task, participants observed videos of a speaker describing objects in which half of the trials included semantically related co-speech gestures (congruent), and the other half included semantically unrelated gestures (incongruent). Discourse processing was indexed by oscillatory EEG activity in the alpha and beta bands during the videos. Across all participants, effects of speech and gesture incongruity were more evident in low load trials than in high load trials. Effects were also modulated by individual differences in visuospatial WM capacity. These data suggest visuospatial WM resources are recruited in the comprehension of multimodal discourse.


Subjects
Gestures , Speech Perception , Adult , Comprehension , Humans , Memory, Short-Term , Speech
9.
Brain Cogn ; 146: 105640, 2020 12.
Article in English | MEDLINE | ID: mdl-33171343

ABSTRACT

Multimodal discourse requires an assembly of cognitive processes that are uniquely recruited for language comprehension in social contexts. In this study, we investigated the role of verbal working memory for the online integration of speech and iconic gestures. Participants memorized and rehearsed a series of auditorily presented digits in low (one digit) or high (four digits) memory load conditions. To observe how verbal working memory load impacts online discourse comprehension, ERPs were recorded while participants watched discourse videos containing either congruent or incongruent speech-gesture combinations during the maintenance portion of the memory task. While expected speech-gesture congruity effects were found in the low memory load condition, high memory load trials elicited enhanced frontal positivities that indicated a unique interaction between online speech-gesture integration and the availability of verbal working memory resources. This work contributes to an understanding of discourse comprehension by demonstrating that language processing in a multimodal context is subject to the relationship between cognitive resource availability and the degree of controlled processing required for task performance. We suggest that verbal working memory is less important for speech-gesture integration than it is for mediating speech processing under high task demands.


Subjects
Gestures , Memory, Short-Term , Speech Perception , Comprehension , Humans , Mental Processes , Speech
10.
Cogn Sci ; 44(9): e12890, 2020 09.
Article in English | MEDLINE | ID: mdl-32939773

ABSTRACT

People with aphasia use gestures not only to communicate relevant content but also to compensate for their verbal limitations. The Sketch Model (De Ruiter, 2000) assumes a flexible relationship between gesture and speech with the possibility of a compensatory use of the two modalities. In the successor of the Sketch Model, the AR-Sketch Model (De Ruiter, 2017), the relationship between iconic gestures and speech is no longer assumed to be flexible and compensatory, but instead iconic gestures are assumed to express information that is redundant to speech. In this study, we evaluated the contradictory predictions of the Sketch Model and the AR-Sketch Model using data collected from people with aphasia as well as a group of people without language impairment. We only found compensatory use of gesture in the people with aphasia, whereas the people without language impairments made very little compensatory use of gestures. Hence, the people with aphasia gestured according to the prediction of the Sketch Model, whereas the people without language impairment did not. We conclude that aphasia fundamentally changes the relationship of gesture and speech.


Subjects
Aphasia , Gestures , Speech , Humans
11.
Front Psychol ; 11: 118, 2020.
Article in English | MEDLINE | ID: mdl-32116924

ABSTRACT

Gesture and language development are strongly connected to each other. Two types of gestures in particular are analyzed with regard to their role in language acquisition: pointing and iconic gestures. The present longitudinal study examines the predictive value of index-finger pointing at 12 months and of the comprehension of iconic gestures at 3;0 years for later language skills in typically developing (TD) children and in children with a language delay (LD) or developmental language disorder (DLD). Forty-two monolingual German children and their primary caregivers participated in the study and were followed longitudinally from 1;0 to 6;0 years. Across a total of 14 observation sessions, the children's gestural and language abilities were measured using standardized as well as ad hoc tests, parent questionnaires, and semi-natural interactions between the child and their caregivers. At the age of 2;0 years, 10 of the 42 children were identified as having a LD. The ability to point with the extended index finger at 1;0 year is predictive of language skills at 5;0 and 6;0 years. This predictive effect is mediated by the children's language skills at 3;0 years. The comprehension of iconic gestures at 3;0 years correlates with index-finger pointing at 1;0 year and also with earlier and later language skills. It mediates the predictive value of index-finger pointing at 1;0 year for grammar skills at 5;0 and 6;0 years. Children with LD develop the ability to understand the iconicity of gestures later than TD children and score lower in language tests until the age of 6;0 years. The language differences between these two groups persist in part until the age of 5;0 years, even when the two children in the LD group with manifest DLD are excluded from the analyses. Beyond that age, no differences in language skills between children with and without a history of LD are found once children with a manifest DLD are excluded. The findings support the assumption of an integrated speech-gesture communication system, which functions similarly in TD children and children with LD or DLD, but with a time delay.

12.
Behav Res Methods ; 50(3): 1270-1284, 2018 06.
Article in English | MEDLINE | ID: mdl-28916988

ABSTRACT

Human locomotion is a fundamental class of events, and manners of locomotion (e.g., how the limbs are used to achieve a change of location) are commonly encoded in language and gesture. To our knowledge, there is no openly accessible database containing normed human locomotion stimuli. Therefore, we introduce the GestuRe and ACtion Exemplar (GRACE) video database, which contains 676 videos of actors performing novel manners of human locomotion (i.e., moving from one location to another in an unusual manner) and videos of a female actor producing iconic gestures that represent these actions. The usefulness of the database was demonstrated across four norming experiments. First, our database contains clear matches and mismatches between iconic gesture videos and action videos. Second, the male and female actors whose action videos best matched the gestures performed the same actions in very similar manners and different actions in highly distinct manners. Third, all the actions in the database are distinct from each other. Fourth, adult native English speakers were unable to describe the 26 different actions concisely, indicating that the actions are unusual. This normed stimulus set is useful for experimental psychologists working in the language, gesture, visual perception, categorization, memory, and other related domains.


Subjects
Gestures , Locomotion , Nonverbal Communication , Photic Stimulation/methods , Sign Language , Visual Perception , Adult , Databases, Factual , Female , Humans , Male , Video Recording
13.
Res Dev Disabil ; 72: 128-139, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29132079

ABSTRACT

BACKGROUND: Gestures are spontaneous hand movements produced when speaking. Although gestures are of communicative significance, little is known about gestural production in the spoken narratives of six- to 12-year-old children with Autism Spectrum Disorders (ASD). AIMS: The present study examined whether six- to 12-year-old children with ASD show a delay in gestural production in a spoken narrative task, in comparison to their typically developing (TD) peers. METHODS AND PROCEDURES: Six- to 12-year-old children with ASD (N=14) and their age- and IQ-matched TD peers (N=12) narrated a story designed to elicit spontaneous speech and gestures. Their speech and gestures were then transcribed and coded. OUTCOMES AND RESULTS: Both groups of children had comparable expressive language skills. Children with ASD produced a similar number of pointing and marker gestures to TD children and significantly more iconic gestures in their spoken narratives. While children with ASD produced more reinforcing gestures than their TD counterparts, both groups produced comparable numbers of disambiguating and supplementary gestures. CONCLUSIONS: Our findings indicate that children with ASD may be as capable as TD children of producing gestures when they engage in spoken narratives, a context that allows spontaneous gesture production.


Subjects
Autism Spectrum Disorder/psychology , Gestures , Narration , Speech , Child , Female , Humans , Male , Nonverbal Communication , Verbal Behavior
14.
J Exp Child Psychol ; 142: 1-17, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26448391

ABSTRACT

Iconic gestures, communicative acts using hand or body movements that resemble their referent, figure prominently in theories of language evolution and development. This study contrasted the abilities of chimpanzees (N=11) and 4-year-old human children (N=24) to comprehend novel iconic gestures. Participants learned to retrieve rewards from apparatuses in two distinct locations, each requiring a different action. In the test, a human adult informed the participant where to go by miming the action needed to obtain the reward. Children used the iconic gestures (more than arbitrary gestures) to locate the reward, whereas chimpanzees did not. Some children also used arbitrary gestures in the same way, but only after they had previously shown comprehension of iconic gestures. Over time, chimpanzees learned to associate iconic gestures with the appropriate location faster than arbitrary gestures, suggesting at least some recognition of the iconicity involved. These results demonstrate the importance of iconicity in referential communication.


Subjects
Cognition/physiology , Comprehension/physiology , Gestures , Language , Learning , Animals , Child, Preschool , Female , Humans , Male , Pan troglodytes , Social Perception
15.
Commun Integr Biol ; 8(1): e992742, 2015.
Article in English | MEDLINE | ID: mdl-26844623

ABSTRACT

We comment on a recent behavioral study in which we describe a human-like beckoning gesture in 2 groups of bonobos, used in combination with sexual solicitation postures. The beckoning gesture fulfils key criteria of deixis and iconicity, in that it communicates to a distant recipient the desired travel path in relation to a specific social intention, i.e., to have sex at another location. We discuss this finding in light of the fact that, despite the documented great ape capacity and obvious communicative advantage, referential gestures are still surprisingly rare in their natural communication. We address several possibilities for this peculiar underuse and are most compelled by the notion that non-human primates are generally not very motivated to share their experiences of external objects or events with others, which removes most reasons for referential signaling.

16.
Acta Psychol (Amst) ; 153: 39-50, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25282199

ABSTRACT

Three experiments tested the role of verbal versus visuo-spatial working memory in the comprehension of co-speech iconic gestures. In Experiment 1, participants viewed congruent discourse primes in which the speaker's gestures matched the information conveyed by his speech, and incongruent ones in which the semantic content of the speaker's gestures diverged from that in his speech. Discourse primes were followed by picture probes that participants judged as being either related or unrelated to the preceding clip. Performance on this picture probe classification task was faster and more accurate after congruent than incongruent discourse primes. The effect of discourse congruency on response times was linearly related to measures of visuo-spatial, but not verbal, working memory capacity, as participants with greater visuo-spatial WM capacity benefited more from congruent gestures. In Experiments 2 and 3, participants performed the same picture probe classification task under conditions of high and low loads on concurrent visuo-spatial (Experiment 2) and verbal (Experiment 3) memory tasks. Effects of discourse congruency and verbal WM load were additive, while effects of discourse congruency and visuo-spatial WM load were interactive. Results suggest that congruent co-speech gestures facilitate multi-modal language comprehension, and indicate an important role for visuo-spatial WM in these speech-gesture integration processes.


Subjects
Gestures , Memory, Short-Term/physiology , Psychomotor Performance/physiology , Space Perception/physiology , Speech Perception/physiology , Adult , Comprehension/physiology , Female , Humans , Individuality , Male , Young Adult
17.
Brain Res ; 1567: 42-56, 2014 Jun 03.
Article in English | MEDLINE | ID: mdl-24746497

ABSTRACT

Speech-associated gesturing leads to memory advantages for spoken sentences. However, unexpected or surprising events are also likely to be remembered. With this study we test the hypothesis that different neural mechanisms (semantic elaboration and surprise) lead to memory advantages for iconic and unrelated gestures. During fMRI data acquisition participants were presented with video clips of an actor verbalising concrete sentences accompanied by iconic gestures (IG; e.g., circular gesture; sentence: "The man is sitting at the round table"), unrelated free gestures (FG; e.g., unrelated up-down movements; same sentence) and no gestures (NG; same sentence). After scanning, recognition performance for the three conditions was tested. The videos were evaluated for semantic relation and surprise by a different group of participants. The semantic relationship between speech and gesture was rated higher for IG (IG>FG), whereas surprise was rated higher for FG (FG>IG). Activation of the hippocampus correlated with subsequent memory performance in both gesture conditions (IG+FG>NG). For the IG condition we found activation in the left temporal pole and middle cingulate cortex (MCC; IG>FG). In contrast, for the FG condition posterior thalamic structures (FG>IG) as well as anterior and posterior cingulate cortices were activated (FG>NG). Our behavioral and fMRI data suggest different mechanisms for processing related and unrelated co-verbal gestures, both of them leading to enhanced memory performance. Whereas activation in the MCC and left temporal pole for iconic co-verbal gestures may reflect semantic memory processes, memory enhancement for unrelated gestures relies on the surprise response, mediated by the anterior/posterior cingulate cortex and thalamico-hippocampal structures.


Subjects
Brain/physiology , Gestures , Recognition, Psychology/physiology , Semantics , Speech Perception/physiology , Visual Perception/physiology , Adult , Anticipation, Psychological/physiology , Brain Mapping , Discrimination, Psychological/physiology , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Neural Pathways/physiology , Neuropsychological Tests , Young Adult
18.
Front Behav Neurosci ; 7: 181, 2013.
Article in English | MEDLINE | ID: mdl-24391560

ABSTRACT

Space and shape are distinct perceptual categories. In language, perceptual information can also be used to describe abstract semantic concepts like a "rising income" (space) or a "square personality" (shape). Despite being inherently concrete, co-speech gestures depicting space and shape can accompany concrete or abstract utterances. Here, we investigated the way that abstractness influences the neural processing of the perceptual categories of space and shape in gestures. Thus, we tested the hypothesis that the neural processing of perceptual categories is highly dependent on language context. In a two-factorial design, we investigated the neural basis for the processing of gestures containing shape (SH) and spatial information (SP) when accompanying concrete (c) or abstract (a) verbal utterances. During fMRI data acquisition participants were presented with short video clips of the four conditions (cSP, aSP, cSH, aSH) while performing an independent control task. Abstract (a) as opposed to concrete (c) utterances activated temporal lobes bilaterally and the left inferior frontal gyrus (IFG) for both shape-related (SH) and space-related (SP) utterances. An interaction of perceptual category and semantic abstractness in a more anterior part of the left IFG and inferior part of the posterior temporal lobe (pTL) indicates that abstractness strongly influenced the neural processing of space and shape information. Despite the concrete visual input of co-speech gestures in all conditions, space and shape information is processed differently depending on the semantic abstractness of its linguistic context.
