ABSTRACT
Psychotherapists, who use their communicative skills to assist people, review their dialogue practices and improve their skills through experience. However, technology has not been fully exploited for this purpose. In this study, we analyze the use of head movements during actual psychotherapeutic dialogues between two participants (therapist and client) using video recordings and head-mounted accelerometers. Accelerometers have been utilized in the mental health domain, but not for analyzing mental-health-related communication. We examined the relationship between the state of the interaction and temporally varying head nod and movement patterns in psychological counseling sessions. Head nods were manually annotated, and head movements were measured using accelerometers. Head nod counts were analyzed based on annotations taken from the video data. We conducted cross-correlation analysis of the head movements of the two participants using the accelerometer data. The results of two case studies suggest that upward and downward head nod count patterns may reflect stage transitions in counseling dialogues, and that peaks of head movement synchrony may be related to emphasis in the interaction.
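The cross-correlation analysis described above can be sketched as follows. This is a minimal illustrative example, not the authors' actual pipeline: the function name, the normalization scheme, and the choice of a ±1 s lag window are all assumptions made here for clarity.

```python
import numpy as np

def windowed_xcorr(a, b, fs, max_lag_s=1.0):
    """Normalized cross-correlation between two head-movement
    signals a and b (e.g. accelerometer magnitude traces sampled
    at fs Hz), evaluated at lags up to +/- max_lag_s seconds.

    Returns (lags_in_seconds, correlation_values); a peak near
    lag 0 indicates synchronous movement of the two participants.
    """
    # Z-score both signals so the zero-lag autocorrelation is 1.
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    max_lag = int(max_lag_s * fs)
    full = np.correlate(a, b, mode="full")  # all lags, length 2N-1
    mid = len(a) - 1                        # index of zero lag
    lags = np.arange(-max_lag, max_lag + 1) / fs
    return lags, full[mid - max_lag : mid + max_lag + 1]
```

Scanning such a window over the session and recording the peak correlation per window would yield the time-varying synchrony curve whose peaks the abstract relates to moments of emphasis.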
Subjects
Head Movements, Head, Accelerometry, Communication, Movement, Video Recording
ABSTRACT
The automated annotation of conversational video with semantic miscommunication labels is a challenging topic. Although miscommunications are often obvious to the speakers as well as to observers, it is difficult for machines to detect them from low-level features. In this paper, we investigate the utility of gestural cues among various non-verbal features. Compared with gesture recognition tasks in human-computer interaction, this task is difficult because of the implicitness of the gestures and the lack of understanding of which cues contribute to miscommunication. Nine simple gestural features are extracted from the gesture data, and both simple and complex classifiers are constructed using machine learning. The experimental results suggest that no single gestural feature can predict or explain the occurrence of semantic miscommunication in our setting.
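A "simple classifier" over a single gestural feature can be illustrated with a decision stump, i.e. one threshold on one feature. This is a hypothetical sketch of how one might test whether any single feature predicts miscommunication labels; the function name and the synthetic data in the usage are assumptions, not the paper's actual features or classifiers.

```python
import numpy as np

def stump_accuracy(feature, labels):
    """Best training accuracy achievable by thresholding one
    gestural feature to predict a binary miscommunication label.

    feature: 1-D array of one gestural feature per segment.
    labels:  1-D array of 0/1 miscommunication annotations.
    A score near the majority-class baseline suggests the
    feature alone carries little predictive signal.
    """
    labels = np.asarray(labels)
    # Start from the majority-class baseline accuracy.
    best = max(labels.mean(), 1.0 - labels.mean())
    f_sorted = np.sort(feature)
    for i in range(1, len(f_sorted)):
        thr = (f_sorted[i - 1] + f_sorted[i]) / 2.0
        pred = (feature > thr).astype(int)
        acc = (pred == labels).mean()
        # Try both polarities of the threshold rule.
        best = max(best, acc, 1.0 - acc)
    return best
```

Running this over each of the nine features and comparing against the majority baseline mirrors the negative finding above: if no stump clearly beats the baseline, no single feature explains the miscommunications.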