Results 1 - 20 of 53
1.
Cognition ; 248: 105806, 2024 May 13.
Article in English | MEDLINE | ID: mdl-38749291

ABSTRACT

The typical pattern of alternating turns in conversation seems trivial at first sight, but a closer look quickly reveals the cognitive challenges involved, many of which result from the fast-paced nature of conversation. One core ingredient of turn coordination is the anticipation of upcoming turn ends, which allows one to ready the next contribution. Across two experiments, we investigated two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, in terms of their contribution to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when seeing the speaker and their visual bodily signals than when they did not, especially for longer turns. Likewise, participants were better able to anticipate turn ends when they had access to the preceding discourse context than when they did not, again especially for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes and upper body (i.e. torso + arms). Participants were better able to anticipate turn ends when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction.

2.
Cogn Sci ; 48(1): e13407, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38279899

ABSTRACT

During face-to-face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next-turn planning can begin before a current turn is complete. Given that face-to-face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, which held both for the onset of the gesture as a whole and for the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures received faster responses than questions without gestures. However, we found no evidence for the idea that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation.
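
The core measurement here is an asynchrony between annotated gesture and speech onsets. A minimal sketch of that comparison, with hypothetical field names standing in for the corpus's actual annotation scheme (which the abstract does not specify):

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class GestureEvent:
    gesture_onset: float    # s, onset of the gesture as a whole
    stroke_onset: float     # s, onset of the stroke phase
    affiliate_onset: float  # s, onset of the lexical affiliate in speech

def gesture_speech_asynchronies(events):
    # Negative values mean the gesture (or stroke) precedes its lexical
    # affiliate, i.e., the gesture carries predictive potential.
    gesture = [e.gesture_onset - e.affiliate_onset for e in events]
    stroke = [e.stroke_onset - e.affiliate_onset for e in events]
    return mean(gesture), mean(stroke)
```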


Subjects
Gestures, Speech, Humans, Speech/physiology, Language, Semantics, Comprehension/physiology
3.
Sci Rep ; 14(1): 2286, 2024 Jan 27.
Article in English | MEDLINE | ID: mdl-38280963

ABSTRACT

Human language is extremely versatile, combining a limited set of signals in an unlimited number of ways. However, it is unknown whether conversational visual signals feed into the composite utterances with which speakers communicate their intentions. We assessed whether different combinations of visual signals lead to different intent interpretations of the same spoken utterance. Participants viewed a virtual avatar uttering spoken questions while producing single visual signals (i.e., head turn, head tilt, eyebrow raise) or combinations of these signals. After each video, participants classified the communicative intention behind the question. We found that composite utterances combining several visual signals conveyed different meanings than utterances accompanied by a single visual signal. However, responses to combinations of signals were more similar to the responses to related, rather than unrelated, individual signals, indicating a consistent influence of the individual visual signals on the whole. This study therefore provides the first evidence for compositional, non-additive (i.e., Gestalt-like) perception of multimodal language.


Subjects
Communication, Intention, Humans, Language
4.
Psychon Bull Rev ; 2024 Jan 24.
Article in English | MEDLINE | ID: mdl-38267742

ABSTRACT

The natural ecology of language is conversation, with individuals taking turns speaking to communicate in a back-and-forth fashion. Language in this context involves strings of words that a listener must process while simultaneously planning their own next utterance. It would thus be highly advantageous if language users distributed information within an utterance in a way that may facilitate this processing-planning dynamic. While some studies have investigated how information is distributed at the level of single words or clauses, or in written language, little is known about how information is distributed within spoken utterances produced during naturalistic conversation. It is also not known how information distribution patterns of spoken utterances may differ across languages. We used a set of matched corpora (CallHome) containing 898 telephone conversations conducted in six different languages (Arabic, English, German, Japanese, Mandarin, and Spanish), analyzing more than 58,000 utterances, to assess whether there is evidence of distinct patterns of information distribution at the utterance level, and whether these patterns are similar or differ across the languages. We found that English, Spanish, and Mandarin typically show a back-loaded distribution, with higher information (i.e., surprisal) in the last half of utterances compared with the first half, while Arabic, German, and Japanese showed front-loaded distributions, with higher information in the first half compared with the last half. Additional analyses suggest that these patterns may be related to word order and rate of noun and verb usage. We additionally found that back-loaded languages have longer turn transition times (i.e., time between speaker turns).
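
The underlying quantity is per-word surprisal, -log2 p(word), compared between utterance halves. A small sketch of one way to estimate front- vs. back-loading, assuming an add-one-smoothed unigram model (the study's actual estimator is not specified in the abstract):

```python
import math
from collections import Counter

def utterance_loading(utterances):
    # utterances: list of token lists from one corpus/language.
    counts = Counter(w for u in utterances for w in u)
    total, vocab = sum(counts.values()), len(counts)

    def surprisal(w):
        # -log2 of the smoothed unigram probability of w
        return -math.log2((counts[w] + 1) / (total + vocab))

    diffs = []
    for u in utterances:
        if len(u) < 2:
            continue
        mid = len(u) // 2
        first = sum(map(surprisal, u[:mid])) / mid
        last = sum(map(surprisal, u[mid:])) / (len(u) - mid)
        diffs.append(last - first)  # > 0: back-loaded; < 0: front-loaded
    return sum(diffs) / len(diffs)
```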

5.
Sci Rep ; 13(1): 21295, 2023 Dec 2.
Article in English | MEDLINE | ID: mdl-38042876

ABSTRACT

In conversation, recognizing social actions (similar to 'speech acts') early is important to quickly understand the speaker's intended message and to provide a fast response. Fast turns are typical for fundamental social actions like questions, since a long gap can indicate a dispreferred response. In multimodal face-to-face interaction, visual signals may contribute to this fast dynamic. The face is an important source of visual signalling, and previous research found that prevalent facial signals such as eyebrow movements facilitate the rapid recognition of questions. We aimed to investigate whether early eyebrow movements with natural movement intensities facilitate question identification, and whether specific intensities are more helpful in detecting questions. Participants were instructed to view videos of avatars where the presence of eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) was manipulated, and to indicate whether the utterance in the video was a question or statement. Results showed higher accuracies for questions with eyebrow frowns, and faster response times for questions with eyebrow frowns and eyebrow raises. No additional effect was observed for the specific movement intensity. This suggests that eyebrow movements that are representative of naturalistic multimodal behaviour facilitate question recognition.


Subjects
Eyebrows, Speech, Humans, Speech/physiology, Movement, Communication
6.
Cogn Sci ; 47(12): e13392, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38058215

ABSTRACT

Conversation is a time-pressured environment. Recognizing a social action (the "speech act," such as a question requesting information) early is crucial in conversation to quickly understand the intended message and plan a timely response. Fast turns between interlocutors are especially relevant for responses to questions since a long gap may be meaningful by itself. Human language is multimodal, involving speech as well as visual signals from the body, including the face. But little is known about how conversational facial signals contribute to the communication of social actions. Some of the most prominent facial signals in conversation are eyebrow movements. Previous studies found links between eyebrow movements and questions, suggesting that these facial signals could contribute to the rapid recognition of questions. Therefore, we aimed to investigate whether early eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) facilitate question identification. Participants were instructed to view videos of avatars where the presence of eyebrow movements accompanying questions was manipulated. Their task was to indicate whether the utterance was a question or a statement as accurately and quickly as possible. Data were collected using the online testing platform Gorilla. Results showed higher accuracies and faster response times for questions with eyebrow frowns, suggesting a facilitative role of eyebrow frowns for question identification. This means that facial signals can critically contribute to the communication of social actions in conversation by signaling social action-specific visual information and providing visual cues to speakers' intentions.


Subjects
Avatar, Eyebrows, Humans, Communication, Speech/physiology, Language
7.
Disabil Rehabil ; : 1-20, 2023 Sep 26.
Article in English | MEDLINE | ID: mdl-37752723

ABSTRACT

PURPOSE: To perform a scoping review investigating the psychosocial impact of having an altered facial expression in five neurological diseases. METHODS: A systematic literature search was performed. Studies were on patients with Bell's palsy, facioscapulohumeral muscular dystrophy (FSHD), Moebius syndrome, myotonic dystrophy type 1, or Parkinson's disease; had a focus on altered facial expression; and had any form of psychosocial outcome measure. Data extraction focused on psychosocial outcomes. RESULTS: Bell's palsy, myotonic dystrophy type 1, and Parkinson's disease patients more often experienced some degree of psychosocial distress than healthy controls. In FSHD, facial weakness negatively influenced communication and was experienced as a burden. The psychosocial distress applied especially to women (Bell's palsy and Parkinson's disease) and to patients with more severely altered facial expression (Bell's palsy), but not to Moebius syndrome patients. Furthermore, Parkinson's disease patients with more pronounced hypomimia were perceived more negatively by observers. Various strategies were reported to compensate for altered facial expression. CONCLUSIONS: This review showed that patients with altered facial expression in four of the five included neurological diseases had reduced psychosocial functioning. Future research recommendations include studies on observers' judgements of patients during social interactions and on the effectiveness of compensation strategies in enhancing psychosocial functioning.


Negative effects of altered facial expression on psychosocial functioning are common and more abundant in women and in more severely affected patients with various neurological disorders. Health care professionals should be alert to psychosocial distress in patients with altered facial expression. Learning compensatory strategies could be a beneficial therapy for patients with psychosocial distress due to an altered facial expression.

8.
STAR Protoc ; 4(3): 102370, 2023 Sep 15.
Article in English | MEDLINE | ID: mdl-37421617

ABSTRACT

We present a protocol to study naturalistic human communication using dual-electroencephalography (EEG) and audio-visual recordings. We describe preparatory steps for data collection, including setup preparation, experiment design, and piloting. We then describe the data collection process in detail, which consists of participant recruitment, experiment room preparation, and data collection. We also outline the kinds of research questions that can be addressed with the current protocol, including several analysis possibilities, from conversational to advanced time-frequency analyses. For complete details on the use and execution of this protocol, please refer to Drijvers and Holler (2022).
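
As one illustration of the time-frequency analyses the protocol mentions, here is a self-contained Morlet-wavelet power computation; a generic sketch, not the protocol's prescribed toolbox pipeline:

```python
import numpy as np

def morlet_power(signal, sfreq, freqs, n_cycles=7):
    # Time-frequency power via complex Morlet wavelet convolution.
    # signal: 1-D EEG channel; sfreq: sampling rate (Hz); freqs: Hz values.
    power = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)          # wavelet width in s
        t = np.arange(-5 * sigma_t, 5 * sigma_t, 1 / sfreq)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit energy
        analytic = np.convolve(signal, wavelet, mode="same")
        power[i] = np.abs(analytic) ** 2
    return power  # (n_freqs, n_times)
```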


Subjects
Communication, Electroencephalography, Humans, Data Collection
9.
PLoS One ; 18(7): e0288104, 2023.
Article in English | MEDLINE | ID: mdl-37467253

ABSTRACT

The early recognition of fundamental social actions, like questions, is crucial for understanding the speaker's intended message and planning a timely response in conversation. Questions themselves may express more than one social action category (e.g., an information request "What time is it?", an invitation "Will you come to my party?", or a criticism "Are you crazy?"). Although human language use occurs predominantly in a multimodal context, prior research on social actions has mainly focused on the verbal modality. This study breaks new ground by investigating how conversational facial signals may map onto the expression of different types of social actions conveyed through questions. The distribution, timing, and temporal organization of facial signals across social actions were analysed in a rich corpus of naturalistic, dyadic face-to-face Dutch conversations. These social actions were: Information Requests, Understanding Checks, Self-Directed questions, Stance or Sentiment questions, Other-Initiated Repairs, Active Participation questions, questions for Structuring, Initiating or Maintaining Conversation, and Plans and Actions questions. This is the first study to reveal differences in the distribution and timing of facial signals across different types of social actions. The findings raise the possibility that facial signals may facilitate social action recognition during language processing in multimodal face-to-face interaction.


Subjects
Communication, Language, Humans, Recognition (Psychology), Researchers, Facial Expression
10.
Philos Trans R Soc Lond B Biol Sci ; 378(1875): 20210473, 2023 Apr 24.
Article in English | MEDLINE | ID: mdl-36871587

ABSTRACT

Human communicative interaction is characterized by rapid and precise turn-taking. This is achieved by an intricate system that has been elucidated in the field of conversation analysis, based largely on the study of the auditory signal. This model suggests that transitions occur at points of possible completion identified in terms of linguistic units. Despite this, considerable evidence exists that visible bodily actions including gaze and gestures also play a role. To reconcile disparate models and observations in the literature, we combine qualitative and quantitative methods to analyse turn-taking in a corpus of multimodal interaction using eye-trackers and multiple cameras. We show that transitions seem to be inhibited when a speaker averts their gaze at a point of possible turn completion, or when a speaker produces gestures which are beginning or unfinished at such points. We further show that while the direction of a speaker's gaze does not affect the speed of transitions, the production of manual gestures does: turns with gestures have faster transitions. Our findings suggest that the coordination of transitions involves not only linguistic resources but also visual gestural ones and that the transition-relevance places in turns are multimodal in nature. This article is part of a discussion meeting issue 'Face2face: advancing the science of social interaction'.


Subjects
Gestures, Linguistics, Humans, Social Interaction
11.
Philos Trans R Soc Lond B Biol Sci ; 378(1875): 20210470, 2023 Apr 24.
Article in English | MEDLINE | ID: mdl-36871590

ABSTRACT

Face-to-face interaction is core to human sociality and its evolution, and provides the environment in which most of human communication occurs. Research into the full complexities that define face-to-face interaction requires a multi-disciplinary, multi-level approach, illuminating from different perspectives how we and other species interact. This special issue showcases a wide range of approaches, bringing together detailed studies of naturalistic social-interactional behaviour with larger-scale analyses for generalization, and investigations of socially contextualized cognitive and neural processes that underpin the behaviour we observe. We suggest that this integrative approach will allow us to propel forward the science of face-to-face interaction by leading us to new paradigms and novel, more ecologically grounded and comprehensive insights into how we interact with one another and with artificial agents, how differences in psychological profiles might affect interaction, and how the capacity to socially interact develops and has evolved in humans and other species. This theme issue takes a first step in this direction, with the aim of breaking down disciplinary boundaries and emphasizing the value of illuminating the many facets of face-to-face interaction. This article is part of a discussion meeting issue 'Face2face: advancing the science of social interaction'.


Subjects
Social Behavior, Social Interaction, Humans, Communication
12.
Cogn Affect Behav Neurosci ; 23(2): 340-353, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36823247

ABSTRACT

In face-to-face discourse, listeners exploit cues in the input to generate predictions about upcoming words. Moreover, in addition to speech, speakers produce a multitude of visual signals, such as iconic gestures, which listeners readily integrate with incoming words. Previous studies have shown that processing of target words is facilitated when these are embedded in predictable compared to non-predictable discourses and when accompanied by iconic compared to meaningless gestures. In the present study, we investigated the interaction of both factors. We recorded the electroencephalogram (EEG) from 60 Dutch adults while they were watching videos of an actress producing short discourses. The stimuli consisted of an introductory and a target sentence; the latter contained a target noun. Depending on the preceding discourse, the target noun was either predictable or not. Each target noun was paired with an iconic gesture and a gesture that did not convey meaning. In both conditions, gesture presentation in the video was timed such that the gesture stroke preceded the onset of the spoken target by 130 ms. Our ERP analyses revealed independent facilitatory effects for predictable discourses and iconic gestures. However, the interactive effect of both factors demonstrated that target processing (i.e., gesture-speech integration) was facilitated most when targets were part of predictable discourses and accompanied by an iconic gesture. Our results thus suggest a strong intertwinement of linguistic predictability and non-verbal gesture processing, whereby listeners exploit predictive discourse cues to pre-activate verbal and non-verbal representations of upcoming target words.
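
For readers unfamiliar with the analysis, the ERP computation behind such results reduces to baseline-correcting and averaging EEG epochs time-locked to the target noun. A minimal sketch, with the epoch layout and baseline window as assumptions:

```python
import numpy as np

def target_locked_erp(epochs, times, baseline=(-0.2, 0.0)):
    # epochs: (n_trials, n_channels, n_times) EEG segments time-locked
    # to target-noun onset; times: (n_times,) in seconds.
    mask = (times >= baseline[0]) & (times < baseline[1])
    # subtract each trial's mean baseline activity, then average trials
    corrected = epochs - epochs[:, :, mask].mean(axis=2, keepdims=True)
    return corrected.mean(axis=0)  # (n_channels, n_times) ERP
```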


Subjects
Speech Perception, Speech, Adult, Humans, Speech/physiology, Gestures, Comprehension/physiology, Electroencephalography, Linguistics, Speech Perception/physiology
13.
Perspect Psychol Sci ; 18(5): 1136-1159, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36634318

ABSTRACT

Natural human interaction requires us to produce and process many different signals, including speech, hand and head gestures, and facial expressions. These communicative signals, which occur in a variety of temporal relations with each other (e.g., parallel or temporally misaligned), must be rapidly processed as a coherent message by the receiver. In this contribution, we introduce the notion of interactionally embedded, affordance-driven gestalt perception as a framework that can explain how this rapid processing of multimodal signals is achieved as efficiently as it is. We discuss empirical evidence showing how basic principles of gestalt perception can explain some aspects of unimodal phenomena such as verbal language processing and visual scene perception but require additional features to explain multimodal human communication. We propose a framework in which high-level gestalt predictions are continuously updated by incoming sensory input, such as unfolding speech and visual signals. We outline the constituent processes that shape high-level gestalt perception and their role in perceiving relevance and Prägnanz. Finally, we provide testable predictions that arise from this multimodal interactionally embedded gestalt-perception framework. This review and framework therefore provide a theoretically motivated account of how we may understand the highly complex, multimodal behaviors inherent in natural social interaction.


Subjects
Communication, Language, Humans, Visual Perception, Speech
14.
Psychon Bull Rev ; 30(2): 792-801, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36138282

ABSTRACT

During face-to-face communication, recipients need to rapidly integrate a plethora of auditory and visual signals. This integration of signals from many different bodily articulators, all offset in time, with the information in the speech stream may either tax the cognitive system, thus slowing down language processing, or may result in multimodal facilitation. Using the classical shadowing paradigm, participants shadowed speech from face-to-face, naturalistic dyadic conversations in an audiovisual context, an audiovisual context without visual speech (e.g., lips), and an audio-only context. Our results provide evidence of a multimodal facilitation effect in human communication: participants were faster in shadowing words when seeing multimodal messages compared with when hearing only audio. Also, the more visual context was present, the fewer shadowing errors were made, and the earlier in time participants shadowed predicted lexical items. We propose that the multimodal facilitation effect may contribute to the ease of fast face-to-face conversational interaction.


Subjects
Speech Perception, Humans, Speech, Language, Visual Perception
15.
Neuroimage ; 264: 119734, 2022 Dec 1.
Article in English | MEDLINE | ID: mdl-36343884

ABSTRACT

We present a dataset of behavioural and fMRI observations acquired in the context of humans involved in multimodal referential communication. The dataset contains audio/video and motion-tracking recordings of face-to-face, task-based communicative interactions in Dutch, as well as behavioural and neural correlates of participants' representations of dialogue referents. Seventy-one pairs of unacquainted participants performed two interleaved interactional tasks in which they described and located 16 novel geometrical objects (i.e., Fribbles) yielding spontaneous interactions of about one hour. We share high-quality video (from three cameras), audio (from head-mounted microphones), and motion-tracking (Kinect) data, as well as speech transcripts of the interactions. Before and after engaging in the face-to-face communicative interactions, participants' individual representations of the 16 Fribbles were estimated. Behaviourally, participants provided a written description (one to three words) for each Fribble and positioned them along 29 independent conceptual dimensions (e.g., rounded, human, audible). Neurally, fMRI signal evoked by each Fribble was measured during a one-back working-memory task. To enable functional hyperalignment across participants, the dataset also includes fMRI measurements obtained during visual presentation of eight animated movies (35 min total). We present analyses for the various types of data demonstrating their quality and consistency with earlier research. Besides high-resolution multimodal interactional data, this dataset includes different correlates of communicative referents, obtained before and after face-to-face dialogue, allowing for novel investigations into the relation between communicative behaviours and the representational space shared by communicators. This unique combination of data can be used for research in neuroscience, psychology, linguistics, and beyond.
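
Functional hyperalignment, which the shared movie data are intended to enable, can be sketched in its simplest pairwise form as an orthogonal Procrustes problem; an illustrative stand-in, not the dataset's prescribed pipeline:

```python
import numpy as np

def procrustes_map(source, target):
    # source, target: (n_stimuli, n_voxels) response matrices for the
    # same stimuli (e.g., the 16 Fribbles) in two participants.
    # Returns the orthogonal map R minimizing ||source @ R - target||_F,
    # a minimal stand-in for one pairwise hyperalignment step.
    U, _, Vt = np.linalg.svd(source.T @ target)
    return U @ Vt

# usage: aligned = source @ procrustes_map(source, target)
```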


Subjects
Linguistics, Speech, Humans, Speech/physiology, Communication, Language, Magnetic Resonance Imaging
16.
iScience ; 25(11): 105413, 2022 Nov 18.
Article in English | MEDLINE | ID: mdl-36388995

ABSTRACT

We here demonstrate that face-to-face spatial orientation induces a special 'social mode' for neurocognitive processing during conversation, even in the absence of visibility. Participants conversed face to face, face to face but visually occluded, and back to back to tease apart effects caused by seeing visual communicative signals and by spatial orientation. Using dual EEG, we found that (1) listeners' brains engaged more strongly while conversing face to face than back to back, irrespective of the visibility of communicative signals, (2) listeners attended to speech more strongly in a back-to-back compared to a face-to-face spatial orientation without visibility; visual signals further reduced the attention needed; (3) the brains of interlocutors were more in sync in a face-to-face compared to a back-to-back spatial orientation, even when they could not see each other; visual signals further enhanced this pattern. Communicating in face-to-face spatial orientation is thus sufficient to induce a special 'social mode' which fine-tunes the brain for neurocognitive processing in conversation.
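
One common way to quantify the between-brain synchrony reported here is the phase-locking value (PLV); the abstract does not name the study's exact metric, so this is an illustrative sketch:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    # x, y: band-passed EEG signals of equal length, one per interlocutor.
    # PLV near 1 = consistent phase relation (brains 'in sync');
    # PLV near 0 = no consistent relation.
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))
```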

17.
Acta Psychol (Amst) ; 229: 103690, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35961184

ABSTRACT

Aging appears to impair the ability to adapt speech and gestures based on knowledge shared with an addressee (common-ground-based recipient design) in narrative settings. Here, we test whether this extends to spatial settings and is modulated by cognitive abilities. Younger and older adults gave instructions on how to assemble 3D models from building blocks on six consecutive trials. We induced mutually shared knowledge by either showing speaker and addressee the model beforehand, or not. Additionally, shared knowledge accumulated across the trials. Younger and, crucially, also older adults provided recipient-designed utterances, as indicated by a significant reduction in the number of words and gestures when common ground was present. Additionally, we observed a reduction in semantic content and a shift in the cross-modal distribution of information across trials. Rather than age, individual differences in verbal and visual working memory and semantic fluency predicted the extent of addressee-based adaptations. Thus, in spatial tasks, individual cognitive abilities modulate the interactive language use of both younger and older adults.


Subjects
Short-Term Memory, Semantics, Aged, Aging, Gestures, Humans, Individuality, Speech
18.
Nature ; 607(7918): 271-275, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35831605

ABSTRACT

Any system of coupled oscillators may be characterized by its spectrum of resonance frequencies (or eigenfrequencies), which can be tuned by varying the system's parameters. The relationship between control parameters and the eigenfrequency spectrum is central to a range of applications [1-3]. However, fundamental aspects of this relationship remain poorly understood. For example, if the controls are varied along a path that returns to its starting point (that is, around a 'loop'), the system's spectrum must return to itself. In systems that are Hermitian (that is, lossless and reciprocal), this process is trivial and each resonance frequency returns to its original value. However, in non-Hermitian systems, where the eigenfrequencies are complex, the spectrum may return to itself in a topologically non-trivial manner, a phenomenon known as spectral flow. The spectral flow is determined by how the control loop encircles degeneracies, and this relationship is well understood for N = 2 (where N is the number of oscillators in the system) [4,5]. Here we extend this description to arbitrary N. We show that control loops generically produce braids of eigenfrequencies, and for N > 2 these braids form a non-Abelian group that reflects the non-trivial geometry of the space of degeneracies. We demonstrate these features experimentally for N = 3 using a cavity optomechanical system.
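
The simplest instance of this spectral flow can be reproduced numerically: for N = 2, a control loop encircling a degeneracy returns the spectrum to itself with the two eigenfrequencies exchanged. A toy sketch (not the paper's optomechanical system):

```python
import numpy as np

# H(a) = [[0, 1], [a, 0]] has eigenvalues ±sqrt(a). Driving the control
# parameter a once around the degeneracy at a = 0 swaps the two
# eigenvalues: a topologically non-trivial braid.
prev, track = None, []
for theta in np.linspace(0.0, 2 * np.pi, 400):
    a = np.exp(1j * theta)
    ev = np.linalg.eigvals(np.array([[0, 1], [a, 0]]))
    if prev is not None and abs(ev[0] - prev[0]) > abs(ev[1] - prev[0]):
        ev = ev[::-1]  # keep each eigenvalue branch continuous
    track.append(ev)
    prev = ev

print(track[0], track[-1])  # start ~ [1, -1]; end ~ [-1, 1] (swapped)
```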

19.
Philos Trans R Soc Lond B Biol Sci ; 377(1859): 20210094, 2022 Sep 12.
Article in English | MEDLINE | ID: mdl-35876208

ABSTRACT

The view put forward here is that visual bodily signals play a core role in human communication and the coordination of minds. Critically, this role goes far beyond referential and propositional meaning. The human communication system that we consider to be the explanandum in the evolution of language thus is not spoken language. It is, instead, a deeply multimodal, multilayered, multifunctional system that developed, and survived, owing to the extraordinary flexibility and adaptability that it endows us with. Beyond their undisputed iconic power, visual bodily signals (manual and head gestures, facial expressions, gaze, torso movements) fundamentally contribute to key pragmatic processes in modern human communication. This contribution becomes particularly evident with a focus that includes non-iconic manual signals, non-manual signals and signal combinations. Such a focus also needs to consider meaning encoded not just via iconic mappings, since kinematic modulations and interaction-bound meaning are additional properties equipping the body with striking pragmatic capacities. Some of these capacities, or their precursors, may have already been present in the last common ancestor we share with the great apes and may qualify as early versions of the components constituting the hypothesized interaction engine. This article is part of the theme issue 'Revisiting the human 'interaction engine': comparative approaches to social action coordination'.


Subjects
Gestures, Hominidae, Animal Communication, Animals, Humans, Language
20.
R Soc Open Sci ; 9(4): 211489, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35425638

ABSTRACT

In human communication, when the speech is disrupted, the visual channel (e.g. manual gestures) can compensate to ensure successful communication. Whether speech also compensates when the visual channel is disrupted is an open question, and one that significantly bears on the status of the gestural modality. We test whether gesture and speech are dynamically co-adapted to meet communicative needs. To this end, we parametrically reduce visibility during casual conversational interaction and measure the effects on speakers' communicative behaviour using motion tracking and manual annotation for kinematic and acoustic analyses. We found that visual signalling effort was flexibly adapted in response to a decrease in visual quality (especially motion energy, gesture rate, size, velocity and hold-time). Interestingly, speech was also affected: speech intensity increased in response to reduced visual quality (particularly in speech-gesture utterances, but independently of kinematics). Our findings highlight that multi-modal communicative behaviours are flexibly adapted at multiple scales of measurement and question the notion that gesture plays an inferior role to speech.
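
The kinematic measures named here (motion energy, gesture size, velocity) can be derived from tracked position data. A rough sketch with illustrative operationalizations; the paper's exact definitions may differ:

```python
import numpy as np

def gesture_kinematics(positions, fps):
    # positions: (n_frames, 3) tracked wrist/hand coordinates in metres
    # from motion tracking; fps: capture rate in frames per second.
    velocity = np.diff(positions, axis=0) * fps           # m/s per step
    speed = np.linalg.norm(velocity, axis=1)
    motion_energy = speed.sum()                           # summed speed proxy
    peak_velocity = speed.max()
    size = positions.max(axis=0) - positions.min(axis=0)  # bounding box
    return motion_energy, peak_velocity, size
```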
