Results 1-20 of 24
1.
Neuroimage; 264: 119734, 2022 Dec 1.
Article in English | MEDLINE | ID: mdl-36343884

ABSTRACT

We present a dataset of behavioural and fMRI observations acquired in the context of humans involved in multimodal referential communication. The dataset contains audio/video and motion-tracking recordings of face-to-face, task-based communicative interactions in Dutch, as well as behavioural and neural correlates of participants' representations of dialogue referents. Seventy-one pairs of unacquainted participants performed two interleaved interactional tasks in which they described and located 16 novel geometrical objects (i.e., Fribbles), yielding spontaneous interactions of about one hour. We share high-quality video (from three cameras), audio (from head-mounted microphones), and motion-tracking (Kinect) data, as well as speech transcripts of the interactions. Before and after engaging in the face-to-face communicative interactions, participants' individual representations of the 16 Fribbles were estimated. Behaviourally, participants provided a written description (one to three words) for each Fribble and positioned them along 29 independent conceptual dimensions (e.g., rounded, human, audible). Neurally, fMRI signal evoked by each Fribble was measured during a one-back working-memory task. To enable functional hyperalignment across participants, the dataset also includes fMRI measurements obtained during visual presentation of eight animated movies (35 min total). We present analyses for the various types of data demonstrating their quality and consistency with earlier research. Besides high-resolution multimodal interactional data, this dataset includes different correlates of communicative referents, obtained before and after face-to-face dialogue, allowing for novel investigations into the relation between communicative behaviours and the representational space shared by communicators. This unique combination of data can be used for research in neuroscience, psychology, linguistics, and beyond.
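The hyperalignment use case can be made concrete with a small sketch. The snippet below is a minimal illustration, assuming synthetic response matrices and a single Procrustes pass; real hyperalignment pipelines (e.g., Haxby-style) iterate this procedure on preprocessed fMRI data. It shows how shared movie-viewing responses yield per-participant transforms that are then applied to task data:

```python
# Minimal sketch of Procrustes-based functional hyperalignment on synthetic data.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
n_time, n_voxels = 2100, 500          # e.g., ~35 min of movie data, one ROI
movie = [rng.standard_normal((n_time, n_voxels)) for _ in range(3)]

reference = movie[0]                   # align everyone to subject 0's movie response
transforms = []
for subj_data in movie:
    R, _ = orthogonal_procrustes(subj_data, reference)  # rotation minimizing ||XR - ref||
    transforms.append(R)

# Apply each subject's transform to their task data (e.g., 16 Fribble patterns)
task = [rng.standard_normal((16, n_voxels)) for _ in range(3)]
aligned = [X @ R for X, R in zip(task, transforms)]
```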


Subjects
Linguistics, Speech, Humans, Speech/physiology, Communication, Language, Magnetic Resonance Imaging
2.
Cereb Cortex; 30(3): 1056-1067, 2020 Mar 14.
Article in English | MEDLINE | ID: mdl-31504305

ABSTRACT

Social interaction requires us to recognize subtle cues in behavior, such as kinematic differences in actions and gestures produced with different social intentions. Neuroscientific studies indicate that the putative mirror neuron system (pMNS) in the premotor cortex and the mentalizing system (MS) in the medial prefrontal cortex support inferences about contextually unusual actions. However, little is known regarding the brain dynamics of these systems when viewing communicatively exaggerated kinematics. In an event-related functional magnetic resonance imaging experiment, 28 participants viewed stick-light videos of pantomime gestures, recorded in a previous study, which contained varying degrees of communicative exaggeration. Participants made either social or nonsocial classifications of the videos. Using participant responses and pantomime kinematics, we modeled the probability of each video being classified as communicative. Interregion connectivity and activity were modulated by kinematic exaggeration, depending on the task. In the social task, communicativeness of the gesture increased activation of several pMNS and MS regions and modulated top-down coupling from the MS to the pMNS, but engagement of the pMNS and MS was not found in the nonsocial task. Our results suggest that expectation violations can be a key cue for inferring communicative intention, extending previous findings from wholly unexpected actions to more subtle social signaling.
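As a rough illustration of the classification-probability modelling step, the sketch below fits a logistic regression to synthetic kinematic features. The feature names and the choice of logistic regression are assumptions for illustration, not necessarily the study's actual model:

```python
# Hedged sketch: modelling P(video classified as communicative) from kinematics.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))      # e.g., peak velocity, gesture size, duration
y = (X @ np.array([1.5, 0.8, 0.2]) + rng.standard_normal(200) > 0).astype(int)

model = LogisticRegression().fit(X, y)
p_communicative = model.predict_proba(X)[:, 1]   # per-video probability
```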


Subjects
Brain/physiology, Gestures, Visual Pattern Recognition/physiology, Social Behavior, Adult, Biomechanical Phenomena, Brain Mapping, Female, Humans, Intention, Magnetic Resonance Imaging, Male, Young Adult
3.
Psychol Res; 84(7): 1897-1911, 2020 Oct.
Article in English | MEDLINE | ID: mdl-31079227

ABSTRACT

Humans are unique in their ability to communicate information through representational gestures, which visually simulate an action (e.g., moving hands as if opening a jar). Previous research indicates that the intention to communicate modulates the kinematics (e.g., velocity, size) of such gestures. Whether and how this modulation influences addressees' comprehension of gestures has not been investigated. Here we ask whether communicative kinematic modulation enhances semantic comprehension (i.e., identification) of gestures. We additionally investigate whether any comprehension advantage is due to enhanced early identification or late identification. Participants (n = 20) watched videos of representational gestures produced in a more- (n = 60) or less-communicative (n = 60) context and performed a forced-choice recognition task. We tested the isolated role of kinematics by removing visibility of the actors' faces in Experiment I, and by reducing the stimuli to stick-light figures in Experiment II. Three video lengths were used to disentangle early identification from late identification. Accuracy and response time quantified main effects. Kinematic modulation was tested for correlations with task performance. We found higher gesture identification performance in more- compared to less-communicative gestures. However, early identification was only enhanced within a full visual context, while late identification occurred even when viewing isolated kinematics. Additionally, temporally segmented acts with more post-stroke holds were associated with higher accuracy. Our results demonstrate that communicative signaling, interacting with other visual cues, generally supports gesture identification, while kinematic modulation specifically enhances late identification in the absence of other cues. These results provide insights into mutual-understanding processes and can inform the creation of artificial communicative agents.


Subjects
Biomechanical Phenomena/physiology, Comprehension/physiology, Gestures, Nonverbal Communication/physiology, Semantics, Adolescent, Adult, Female, Humans, Male, Netherlands, Young Adult
4.
Behav Res Methods; 52(2): 723-740, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31659689

ABSTRACT

There is increasing evidence that hand gestures and speech synchronize their activity on multiple dimensions and timescales. For example, gesture's kinematic peaks (e.g., maximum speed) are coupled with prosodic markers in speech. Such coupling operates on very short timescales at the level of syllables (200 ms), and therefore requires high-resolution measurement of gesture kinematics and speech acoustics. High-resolution speech analysis is common for gesture studies, given the field's classic ties with (psycho)linguistics. However, the field has lagged behind in the objective study of gesture kinematics (e.g., as compared to research on instrumental action). Often kinematic peaks in gesture are measured by eye, where a "moment of maximum effort" is determined by several raters. In the present article, we provide a tutorial on more efficient methods to quantify the temporal properties of gesture kinematics, in which we focus on common challenges and possible solutions that come with the complexities of studying multimodal language. We further introduce and compare, using an actual gesture dataset (392 gesture events), the performance of two video-based motion-tracking methods (deep learning vs. pixel change) against a high-performance wired motion-tracking system (Polhemus Liberty). We show that the videography methods perform well in the temporal estimation of kinematic peaks, and thus provide a cheap alternative to expensive motion-tracking systems. We hope that the present article encourages gesture researchers to embark on the widespread objective study of gesture kinematics and their relation to speech.
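A minimal version of the pixel-change approach can be sketched as follows, assuming synthetic grayscale frames and a 25 fps frame rate; a real pipeline would read actual video frames (e.g., with OpenCV) and validate the detected peaks against motion-tracking data:

```python
# Pixel-change sketch: motion energy from frame differencing, then peak detection.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(2)
frames = rng.random((100, 48, 64))                 # 100 synthetic grayscale frames

diff = np.abs(np.diff(frames, axis=0))             # absolute pixel change per frame
motion_energy = diff.reshape(diff.shape[0], -1).mean(axis=1)

peaks, _ = find_peaks(motion_energy, distance=5)   # candidate kinematic peaks
peak_times_ms = peaks * (1000 / 25)                # assuming 25 fps
```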


Subjects
Gestures, Speech, Language, Linguistics, Motion (Physics)
5.
Behav Res Methods; 51(2): 769-777, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30143970

ABSTRACT

Action, gesture, and sign represent unique aspects of human communication that use form and movement to convey meaning. Researchers typically use manual coding of video data to characterize naturalistic, meaningful movements at various levels of description, but the availability of markerless motion-tracking technology allows for quantification of the kinematic features of gestures or any meaningful human movement. We present a novel protocol for extracting a set of kinematic features from movements recorded with Microsoft Kinect. Our protocol captures spatial and temporal features, such as height, velocity, submovements/strokes, and holds. This approach is based on studies of communicative actions and gestures and attempts to capture features that are consistently implicated as important kinematic aspects of communication. We provide open-source code for the protocol, a description of how the features are calculated, a validation of these features as quantified by our protocol versus manual coders, and a discussion of how the protocol can be applied. The protocol effectively quantifies kinematic features that are important in the production (e.g., characterizing different contexts) as well as the comprehension (e.g., used by addressees to understand intent and semantics) of manual acts. The protocol can also be integrated with qualitative analysis, allowing fast and objective demarcation of movement units, providing accurate coding even of complex movements. This can be useful to clinicians, as well as to researchers studying multimodal communication or human-robot interactions. By making this protocol available, we hope to provide a tool that can be applied to understanding meaningful movement characteristics in human communication.
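The flavour of the protocol can be conveyed with a short sketch that derives speed, stroke, and hold measures from a tracked joint's 3-D positions. The thresholds and the assumption that the y-axis is vertical are illustrative, not the protocol's published parameters:

```python
# Hedged sketch of kinematic feature extraction from a tracked joint (e.g., a hand).
import numpy as np
from scipy.signal import find_peaks

fps = 30
rng = np.random.default_rng(3)
pos = np.cumsum(rng.standard_normal((300, 3)) * 0.005, axis=0)  # x, y, z in metres

speed = np.linalg.norm(np.diff(pos, axis=0), axis=1) * fps       # m/s per frame
height = pos[:, 1].max()                                         # peak vertical position (y assumed up)

peaks, _ = find_peaks(speed, height=0.15)                        # submovements/strokes
is_hold = speed < 0.05                                           # low-speed frames
n_holds = np.count_nonzero(np.diff(is_hold.astype(int)) == 1)    # hold onsets
```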


Subjects
Gestures, Computer-Assisted Image Processing/methods, Movement, Nonverbal Communication/physiology, Video Recording, Biomechanical Phenomena, Humans, Motion (Physics)
6.
Hum Brain Mapp; 36(4): 1554-66, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25598397

ABSTRACT

OBJECTIVE: Patients with Parkinson's disease (PD) often suffer from impairments in executive functions, such as working memory deficits. It is widely held that dopamine depletion in the striatum contributes to these impairments through decreased activity and connectivity between task-related brain networks. We investigated this hypothesis by studying task-related network activity and connectivity within a sample of de novo patients with PD, versus healthy controls, during a visuospatial working memory task. METHODS: Sixteen de novo PD patients and 35 matched healthy controls performed a visuospatial n-back task while we measured their behavioral performance and neural activity using functional magnetic resonance imaging. We constructed regions-of-interest in the bilateral inferior parietal cortex (IPC), bilateral dorsolateral prefrontal cortex (DLPFC), and bilateral caudate nucleus to investigate group differences in task-related activity. We studied network connectivity by assessing the functional connectivity of the bilateral DLPFC and by assessing effective connectivity within the frontoparietal and the frontostriatal networks. RESULTS: PD patients, compared with controls, showed a trend-level decrease in task accuracy, a significant increase in task-related activity in the left DLPFC, and trend-level increases in activity of the right DLPFC, left caudate nucleus, and left IPC. Furthermore, we found reduced functional connectivity of the DLPFC with other task-related regions, such as the inferior and superior frontal gyri, in the PD group, and group differences in effective connectivity within the frontoparietal network. INTERPRETATION: These findings suggest that the increase in working memory-related brain activity in PD patients is compensatory to maintain behavioral performance in the presence of network deficits.
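As an illustration of the seed-based functional connectivity analysis described here, the following sketch correlates a synthetic DLPFC seed time course with other ROI time courses; the ROI names are taken from the abstract, but the data and the simple correlation step are assumptions standing in for a full preprocessed-fMRI, group-level pipeline:

```python
# Illustrative seed-based functional connectivity on synthetic time courses.
import numpy as np

rng = np.random.default_rng(4)
n_scans = 200
seed = rng.standard_normal(n_scans)                          # DLPFC seed time course
rois = {name: rng.standard_normal(n_scans) for name in
        ("IPC_L", "IPC_R", "caudate_L", "caudate_R")}

connectivity = {name: np.corrcoef(seed, ts)[0, 1] for name, ts in rois.items()}
# Group comparisons typically test Fisher z-transformed values across subjects
fisher_z = {name: np.arctanh(r) for name, r in connectivity.items()}
```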


Subjects
Brain/diagnostic imaging, Brain/physiopathology, Dopamine Plasma Membrane Transport Proteins/metabolism, Short-Term Memory/physiology, Parkinson Disease/diagnostic imaging, Parkinson Disease/physiopathology, Adult, Aged, Brain Mapping, Female, Humans, Iodine Radioisotopes, Magnetic Resonance Imaging, Male, Middle Aged, Neural Pathways/diagnostic imaging, Neural Pathways/physiopathology, Neuropsychological Tests, Parkinson Disease/psychology, Radiopharmaceuticals, Computer-Assisted Signal Processing, Single-Photon Emission Computed Tomography, Tropanes
7.
Hum Brain Mapp; 36(9): 3703-15, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26096737

ABSTRACT

OBJECTIVE: Parkinson's disease (PD) often entails impairments of executive functions, such as planning. Although widely held that these impairments arise from dopaminergic denervation of the striatum, not all executive functions are affected early on, and the underlying neural dynamics are not fully understood. In a combined longitudinal and cross-sectional study, we investigated how planning deficits progress over time in the early stages of PD compared to matched healthy controls. We used functional magnetic resonance imaging (fMRI) to identify accompanying neural dynamics. METHODS: Seventeen PD patients and 20 healthy controls performed a parametric Tower of London task at two time points separated by ∼3 years (baseline and follow-up). We assessed task performance longitudinally in both groups; at follow-up, a subset of participants (14 patients, 19 controls) performed a parallel version of the task during fMRI. We performed meta-analyses to localize regions-of-interest (ROIs), that is, the bilateral dorsolateral prefrontal cortex (DLPFC), inferior parietal cortex, and caudate nucleus, and performed group-by-task analyses and within-group regression analyses of planning-related neural activation. We studied task-related functional connectivity of seeds in the DLPFC and caudate nucleus. RESULTS: PD patients, compared with controls, showed impaired task performance at both time-points, while both groups showed similar performance reductions from baseline to follow-up. Compared to controls, patients showed lower planning-related brain activation together with decreased functional connectivity. CONCLUSION: These findings support the notion that planning is affected early in the PD disease course, and that this impairment in planning is accompanied by decreases in both task-related brain activity and connectivity.


Subjects
Brain/physiopathology, Executive Function/physiology, Parkinson Disease/physiopathology, Parkinson Disease/psychology, Problem Solving/physiology, Brain/diagnostic imaging, Brain Mapping, Cross-Sectional Studies, Dopamine Plasma Membrane Transport Proteins/metabolism, Female, Follow-Up Studies, Humans, Longitudinal Studies, Magnetic Resonance Imaging, Male, Middle Aged, Neural Pathways/diagnostic imaging, Neural Pathways/physiopathology, Neuropsychological Tests, Parkinson Disease/diagnostic imaging, Radiopharmaceuticals, Single-Photon Emission Computed Tomography, Tropanes
8.
Psychon Bull Rev; 31(4): 1723-1734, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38267742

ABSTRACT

The natural ecology of language is conversation, with individuals taking turns speaking to communicate in a back-and-forth fashion. Language in this context involves strings of words that a listener must process while simultaneously planning their own next utterance. It would thus be highly advantageous if language users distributed information within an utterance in a way that may facilitate this processing-planning dynamic. While some studies have investigated how information is distributed at the level of single words or clauses, or in written language, little is known about how information is distributed within spoken utterances produced during naturalistic conversation. It is also not known how information distribution patterns of spoken utterances may differ across languages. We used a set of matched corpora (CallHome) containing 898 telephone conversations conducted in six different languages (Arabic, English, German, Japanese, Mandarin, and Spanish), analyzing more than 58,000 utterances, to assess whether there is evidence of distinct patterns of information distributions at the utterance level, and whether these patterns are similar or differ across the languages. We found that English, Spanish, and Mandarin typically show a back-loaded distribution, with higher information (i.e., surprisal) in the last half of utterances compared with the first half, while Arabic, German, and Japanese showed front-loaded distributions, with higher information in the first half compared with the last half. Additional analyses suggest that these patterns may be related to word order and rate of noun and verb usage. We additionally found that back-loaded languages have longer turn transition times (i.e., time between speaker turns).
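The front-/back-loading measure can be illustrated with a minimal sketch: compute per-word surprisal under some probability model and compare the first and last halves of an utterance. The toy unigram model below is an assumption for illustration; the study's surprisal estimates came from proper language models:

```python
# Minimal front-/back-loading sketch: mean surprisal of first vs. last half.
import numpy as np

def surprisal(tokens, probs):
    """Per-token surprisal in bits: -log2 p(token)."""
    return np.array([-np.log2(probs[t]) for t in tokens])

unigram = {"what": 0.05, "time": 0.01, "is": 0.08, "it": 0.07}  # toy model
utterance = ["what", "time", "is", "it"]
s = surprisal(utterance, unigram)

half = len(s) // 2
front, back = s[:half].mean(), s[half:].mean()
loading = "back-loaded" if back > front else "front-loaded"
print(f"front={front:.2f} bits, back={back:.2f} bits -> {loading}")
```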


Subjects
Language, Humans, Speech, Communication
9.
Sci Rep; 14(1): 2286, 2024 Jan 27.
Article in English | MEDLINE | ID: mdl-38280963

ABSTRACT

Human language is extremely versatile, combining a limited set of signals in an unlimited number of ways. However, it is unknown whether conversational visual signals feed into the composite utterances with which speakers communicate their intentions. We assessed whether different combinations of visual signals lead to different intent interpretations of the same spoken utterance. Participants viewed a virtual avatar uttering spoken questions while producing single visual signals (i.e., head turn, head tilt, eyebrow raise) or combinations of these signals. After each video, participants classified the communicative intention behind the question. We found that composite utterances combining several visual signals conveyed different meanings compared with utterances accompanied by the single visual signals. However, responses to combinations of signals were more similar to the responses to related, rather than unrelated, individual signals, indicating a consistent influence of the individual visual signals on the whole. This study therefore provides the first evidence for compositional, non-additive (i.e., Gestalt-like) perception of multimodal language.


Subjects
Communication, Intention, Humans, Language
10.
Perspect Psychol Sci; 18(5): 1136-1159, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36634318

ABSTRACT

Natural human interaction requires us to produce and process many different signals, including speech, hand and head gestures, and facial expressions. These communicative signals, which occur in a variety of temporal relations with each other (e.g., parallel or temporally misaligned), must be rapidly processed as a coherent message by the receiver. In this contribution, we introduce the notion of interactionally embedded, affordance-driven gestalt perception as a framework that can explain how this rapid processing of multimodal signals is achieved as efficiently as it is. We discuss empirical evidence showing how basic principles of gestalt perception can explain some aspects of unimodal phenomena such as verbal language processing and visual scene perception but require additional features to explain multimodal human communication. We propose a framework in which high-level gestalt predictions are continuously updated by incoming sensory input, such as unfolding speech and visual signals. We outline the constituent processes that shape high-level gestalt perception and their role in perceiving relevance and prägnanz. Finally, we provide testable predictions that arise from this multimodal interactionally embedded gestalt-perception framework. This review and framework therefore provide a theoretically motivated account of how we may understand the highly complex, multimodal behaviors inherent in natural social interaction.


Subjects
Communication, Language, Humans, Visual Perception, Speech
11.
PLoS One; 18(7): e0288104, 2023.
Article in English | MEDLINE | ID: mdl-37467253

ABSTRACT

The early recognition of fundamental social actions, like questions, is crucial for understanding the speaker's intended message and planning a timely response in conversation. Questions themselves may express more than one social action category (e.g., an information request "What time is it?", an invitation "Will you come to my party?" or a criticism "Are you crazy?"). Although human language use occurs predominantly in a multimodal context, prior research on social actions has mainly focused on the verbal modality. This study breaks new ground by investigating how conversational facial signals may map onto the expression of different types of social actions conveyed through questions. The distribution, timing, and temporal organization of facial signals across social actions were analysed in a rich corpus of naturalistic, dyadic face-to-face Dutch conversations. These social actions were: Information Requests, Understanding Checks, Self-Directed questions, Stance or Sentiment questions, Other-Initiated Repairs, Active Participation questions, questions for Structuring, Initiating or Maintaining Conversation, and Plans and Actions questions. This is the first study to reveal differences in distribution and timing of facial signals across different types of social actions. The findings raise the possibility that facial signals may facilitate social action recognition during language processing in multimodal face-to-face interaction.


Subjects
Communication, Language, Humans, Recognition (Psychology), Researchers, Facial Expression
12.
Sci Rep; 13(1): 21295, 2023 Dec 2.
Article in English | MEDLINE | ID: mdl-38042876

ABSTRACT

In conversation, recognizing social actions (similar to 'speech acts') early is important to quickly understand the speaker's intended message and to provide a fast response. Fast turns are typical for fundamental social actions like questions, since a long gap can indicate a dispreferred response. In multimodal face-to-face interaction, visual signals may contribute to this fast dynamic. The face is an important source of visual signalling, and previous research found that prevalent facial signals such as eyebrow movements facilitate the rapid recognition of questions. We aimed to investigate whether early eyebrow movements with natural movement intensities facilitate question identification, and whether specific intensities are more helpful in detecting questions. Participants were instructed to view videos of avatars where the presence of eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) was manipulated, and to indicate whether the utterance in the video was a question or statement. Results showed higher accuracies for questions with eyebrow frowns, and faster response times for questions with eyebrow frowns and eyebrow raises. No additional effect was observed for the specific movement intensity. This suggests that eyebrow movements that are representative of naturalistic multimodal behaviour facilitate question recognition.


Subjects
Eyebrows, Speech, Humans, Speech/physiology, Movement, Communication
13.
Cogn Sci; 47(12): e13392, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38058215

ABSTRACT

Conversation is a time-pressured environment. Recognizing a social action (the "speech act," such as a question requesting information) early is crucial in conversation to quickly understand the intended message and plan a timely response. Fast turns between interlocutors are especially relevant for responses to questions since a long gap may be meaningful by itself. Human language is multimodal, involving speech as well as visual signals from the body, including the face. But little is known about how conversational facial signals contribute to the communication of social actions. Some of the most prominent facial signals in conversation are eyebrow movements. Previous studies found links between eyebrow movements and questions, suggesting that these facial signals could contribute to the rapid recognition of questions. Therefore, we aimed to investigate whether early eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) facilitate question identification. Participants were instructed to view videos of avatars where the presence of eyebrow movements accompanying questions was manipulated. Their task was to indicate whether the utterance was a question or a statement as accurately and quickly as possible. Data were collected using the online testing platform Gorilla. Results showed higher accuracies and faster response times for questions with eyebrow frowns, suggesting a facilitative role of eyebrow frowns for question identification. This means that facial signals can critically contribute to the communication of social actions in conversation by signaling social action-specific visual information and providing visual cues to speakers' intentions.


Subjects
Avatar, Eyebrows, Humans, Communication, Speech/physiology, Language
14.
Cogn Sci; 47(6): e13298, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37303224

ABSTRACT

In conversation, individuals work together to achieve communicative goals, complementing and aligning language and body with each other. An important emerging question is whether interlocutors entrain with one another equally across linguistic levels (e.g., lexical, syntactic, and semantic) and modalities (i.e., speech and gesture), or whether there are complementary patterns of behaviors, with some levels or modalities diverging and others converging in coordinated fashions. This study assesses how kinematic and linguistic entrainment interact with one another across levels of measurement, and according to communicative context. We analyzed data from two matched corpora of dyadic interaction between Danish and Norwegian native speakers, respectively, engaged in affiliative conversations and task-oriented conversations. We assessed linguistic entrainment at the lexical, syntactic, and semantic level, and kinetic alignment of the head and hands using video-based motion tracking and dynamic time warping. We tested whether, across the two languages, linguistic alignment correlates with kinetic alignment, and whether these kinetic-linguistic associations are modulated either by the type of conversation or by the language spoken. We found that kinetic entrainment was positively associated with low-level linguistic (i.e., lexical) entrainment, while negatively associated with high-level linguistic (i.e., semantic) entrainment, in a cross-linguistically robust way. Our findings suggest that conversation makes use of a dynamic coordination of similarity and complementarity, both between individuals and between different communicative modalities, and provide evidence for a multimodal, interpersonal synergy account of interaction.
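As a sketch of the kinetic-alignment ingredient, the snippet below computes a plain dynamic-time-warping distance between two synthetic head-speed series; the paper's actual pipeline used video-based motion tracking and more elaborate windowing and normalization:

```python
# Classic O(n*m) dynamic time warping on two 1-D speed series.
import numpy as np

def dtw_distance(a, b):
    """Cumulative DTW cost between 1-D series a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(5)
speaker_a = rng.random(120)   # e.g., head speed of interlocutor A
speaker_b = rng.random(140)   # e.g., head speed of interlocutor B
alignment_cost = dtw_distance(speaker_a, speaker_b)  # lower = more entrained
```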


Subjects
Language, Linguistics, Humans, Semantics, Gestures, Denmark
15.
J Cogn; 6(1): 60, 2023.
Article in English | MEDLINE | ID: mdl-37841668

ABSTRACT

Language processing is influenced by sensorimotor experiences. Here, we review behavioral evidence for embodied and grounded influences in language processing across six linguistic levels of granularity. We examine (a) sub-word features, discussing grounded influences on iconicity (systematic associations between word form and meaning); (b) words, discussing boundary conditions and generalizations for the simulation of color, sensory modality, and spatial position; (c) sentences, discussing boundary conditions and applications of action direction simulation; (d) texts, discussing how the teaching of simulation can improve comprehension in beginning readers; (e) conversations, discussing how multi-modal cues improve turn taking and alignment; and (f) text corpora, discussing how distributional semantic models can reveal how grounded and embodied knowledge is encoded in texts. These approaches are converging on a convincing account of the psychology of language, but at the same time, there are important criticisms of the embodied approach and of specific experimental paradigms. The surest way forward requires the adoption of a wide array of scientific methods. By providing complementary evidence, a combination of multiple methods on various levels of granularity can help us gain a more complete understanding of the role of embodiment and grounding in language processing.

16.
R Soc Open Sci; 9(4): 211489, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35425638

ABSTRACT

In human communication, when speech is disrupted, the visual channel (e.g., manual gestures) can compensate to ensure successful communication. Whether speech also compensates when the visual channel is disrupted is an open question, and one that significantly bears on the status of the gestural modality. We test whether gesture and speech are dynamically co-adapted to meet communicative needs. To this end, we parametrically reduce visibility during casual conversational interaction and measure the effects on speakers' communicative behaviour using motion tracking and manual annotation for kinematic and acoustic analyses. We found that visual signalling effort was flexibly adapted in response to a decrease in visual quality (especially motion energy, gesture rate, size, velocity and hold-time). Interestingly, speech was also affected: speech intensity increased in response to reduced visual quality (particularly in speech-gesture utterances, but independently of kinematics). Our findings highlight that multi-modal communicative behaviours are flexibly adapted at multiple scales of measurement and question the notion that gesture plays an inferior role to speech.

17.
Soc Cogn Affect Neurosci; 17(11): 1021-1034, 2022 Nov 2.
Article in English | MEDLINE | ID: mdl-35428885

ABSTRACT

Persons with and without autism process sensory information differently. Differences in sensory processing are directly relevant to social functioning and communicative abilities, which are known to be hampered in persons with autism. We collected functional magnetic resonance imaging data from 25 autistic individuals and 25 neurotypical individuals while they performed a silent gesture recognition task. We exploited brain network topology, a holistic quantification of how networks within the brain are organized, to provide new insights into how visual communicative signals are processed in autistic and neurotypical individuals. Performing graph theoretical analysis, we calculated two network properties of the action observation network: 'local efficiency', as a measure of network segregation, and 'global efficiency', as a measure of network integration. We found that persons with autism and neurotypical persons differ in how the action observation network is organized. Persons with autism utilize a more clustered, local-processing-oriented network configuration (i.e. higher local efficiency) rather than the more integrative network organization seen in neurotypicals (i.e. higher global efficiency). These results shed new light on the complex interplay between social and sensory processing in autism.
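The two topology measures named here are standard graph-theoretic quantities. A minimal sketch using networkx on a toy graph looks like this; real analyses build the graph from fMRI connectivity within the action observation network rather than from a synthetic generator:

```python
# Network segregation vs. integration on a toy small-world graph.
import networkx as nx

G = nx.watts_strogatz_graph(n=30, k=4, p=0.1, seed=0)   # toy brain-like graph

global_eff = nx.global_efficiency(G)   # integration: mean inverse shortest path length
local_eff = nx.local_efficiency(G)     # segregation: efficiency of node neighbourhoods
```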


Subjects
Autistic Disorder, Humans, Autistic Disorder/pathology, Gestures, Brain, Brain Mapping, Magnetic Resonance Imaging/methods
18.
J Autism Dev Disord; 52(4): 1771-1777, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34008098

ABSTRACT

The Actions and Feelings Questionnaire (AFQ) provides a short, self-report measure of how well someone uses and understands visual communicative signals such as gestures. The objective of this study was to translate and cross-culturally adapt the AFQ into Dutch (AFQ-NL) and validate this new version in neurotypical and autistic populations. Translation and adaptation of the AFQ consisted of forward translation, synthesis, back translation, and expert review. In order to validate the AFQ-NL, we assessed convergent and divergent validity. We additionally assessed internal consistency using Cronbach's alpha. Validation and reliability outcomes were all satisfactory. The AFQ-NL is a valid adaptation that can be used for both autistic and neurotypical populations in the Netherlands.
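Internal consistency via Cronbach's alpha follows a simple formula, alpha = k/(k-1) * (1 - sum(item variances)/variance(total score)). A minimal sketch with synthetic Likert-scale item scores (the item count and scale are assumptions, not the AFQ's actual structure):

```python
# Cronbach's alpha from a respondents-by-items score matrix.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(6)
scores = rng.integers(1, 6, size=(100, 12))   # 100 respondents, 12 Likert items
alpha = cronbach_alpha(scores)                # conventionally, >0.7 is acceptable
```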


Subjects
Autism Spectrum Disorder, Autistic Disorder, Adult, Autism Spectrum Disorder/diagnosis, Autistic Disorder/diagnosis, Cross-Cultural Comparison, Emotions, Humans, Psychometrics, Reproducibility of Results, Surveys and Questionnaires
19.
Brain Sci; 11(8), 2021 Jul 28.
Article in English | MEDLINE | ID: mdl-34439615

ABSTRACT

During natural conversation, people must quickly understand the meaning of what the other speaker is saying. This concerns not just the semantic content of an utterance, but also the social action (i.e., what the utterance is doing: requesting information, offering, evaluating, checking mutual understanding, etc.) that the utterance is performing. The multimodal nature of human language raises the question of whether visual signals may contribute to the rapid processing of such social actions. However, while previous research has shown that how we move reveals the intentions underlying instrumental actions, we do not know whether the intentions underlying fine-grained social actions in conversation are also revealed in our bodily movements. Using a corpus of dyadic conversations combined with manual annotation and motion tracking, we analyzed the kinematics of the torso, head, and hands during the asking of questions. Manual annotation categorized these questions into six more fine-grained social action types (i.e., request for information, other-initiated repair, understanding check, stance or sentiment, self-directed, active participation). We demonstrate, for the first time, that the kinematics of the torso, head and hands differ between some of these social action categories, based on a 900 ms time window that captures movements starting slightly prior to or within 600 ms after utterance onset. These results provide novel insights into the extent to which our intentions shape the way that we move, and provide new avenues for understanding how this phenomenon may facilitate the fast communication of meaning and social action in conversational interaction.

20.
Brain Sci; 11(8), 2021 Jul 30.
Article in English | MEDLINE | ID: mdl-34439636

ABSTRACT

In a conversation, recognising the speaker's social action (e.g., a request) early may help potential next speakers understand the intended message quickly and plan a timely response. Human language is multimodal, and several studies have demonstrated the contribution of the body to communication. However, comparatively few studies have investigated (non-emotional) conversational facial signals, and very little is known about how they contribute to the communication of social actions. Therefore, we investigated how facial signals map onto the expressions of two fundamental social actions in conversations: asking questions and providing responses. We studied the distribution and timing of 12 facial signals across 6778 questions and 4553 responses, annotated holistically in a corpus of 34 dyadic face-to-face Dutch conversations. Moreover, we analysed facial signal clustering to find out whether there are specific combinations of facial signals within questions or responses. Results showed a high proportion of facial signals, with a qualitatively different distribution in questions versus responses. Additionally, clusters of facial signals were identified. Most facial signals occurred early in the utterance, and had earlier onsets in questions. Thus, facial signals may critically contribute to the communication of social actions in conversation by providing social action-specific visual information.
