Conversational facial signals combine into compositional meanings that change the interpretation of speaker intentions.
Sci Rep; 14(1): 2286, 2024 Jan 27.
Article in English | MEDLINE | ID: mdl-38280963
ABSTRACT
Human language is extremely versatile, combining a limited set of signals in an unlimited number of ways. However, it is unknown whether conversational visual signals feed into the composite utterances with which speakers communicate their intentions. We assessed whether different combinations of visual signals lead to different intent interpretations of the same spoken utterance. Participants viewed a virtual avatar uttering spoken questions while producing single visual signals (i.e., head turn, head tilt, eyebrow raise) or combinations of these signals. After each video, participants classified the communicative intention behind the question. We found that composite utterances combining several visual signals conveyed different meanings than utterances accompanied by single visual signals. However, responses to combinations of signals were more similar to responses to related, rather than unrelated, individual signals, indicating a consistent influence of the individual visual signals on the whole. This study therefore provides the first evidence for compositional, non-additive (i.e., Gestalt-like) perception of multimodal language.
Full text: 1
Collections: 01-international
Database: MEDLINE
Limit: Humans
Language: En
Publication year: 2024
Document type: Article