Results 1 - 20 of 24
1.
Multivariate Behav Res ; 56(5): 739-767, 2021.
Article in English | MEDLINE | ID: mdl-32530313

ABSTRACT

Head movement is an important but often overlooked component of emotion and social interaction. Examination of regularity and differences in head movements of infant-mother dyads over time and across dyads can shed light on whether and how mothers and infants alter their dynamics over the course of an interaction to adapt to each other. One way to study these emergent differences in dynamics is to allow the parameters that govern the patterns of interaction to change over time and according to person- and dyad-specific characteristics. Using two estimation approaches to implement variations of a vector-autoregressive model with time-varying coefficients, we investigated the dynamics of automatically tracked head movements in mothers and infants during the Face-to-Face/Still-Face Procedure (SFP) with 24 infant-mother dyads. The first approach requires specification of a confirmatory model for the time-varying parameters as part of a state-space model, whereas the second approach handles the time-varying parameters in a semi-parametric ("mostly" model-free) fashion within a generalized additive modeling framework. Results suggested that infant-mother head movement dynamics varied in time both within and across episodes of the SFP, and varied based on infants' subsequently assessed attachment security. Code for implementing the time-varying vector-autoregressive model using two R packages, dynr and mgcv, is provided.
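
As a rough illustration of the time-varying idea (not the article's dynr/mgcv code, which accompanies the paper), the sketch below estimates VAR(1) coefficients for one dyad by ordinary least squares in a sliding window; the series, window length, and step size are all assumptions.

```python
# Minimal sketch: time-varying VAR(1) via sliding-window least squares.
# `mother` and `infant` are hypothetical 1-D head-movement series (same rate).
import numpy as np

def sliding_var1(mother, infant, win=200, step=50):
    y = np.column_stack([mother, infant])            # T x 2 bivariate series
    coefs = []
    for start in range(0, len(y) - win - 1, step):
        seg = y[start:start + win + 1]
        X = np.hstack([np.ones((win, 1)), seg[:-1]])  # intercept + lagged values
        B, *_ = np.linalg.lstsq(X, seg[1:], rcond=None)
        coefs.append(B)                               # 3 x 2: intercepts and A(t)'
    return np.array(coefs)                            # one coefficient set per window

# Example with simulated data
rng = np.random.default_rng(0)
print(sliding_var1(rng.standard_normal(2000), rng.standard_normal(2000)).shape)
```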


Subjects
Head Movements, Mothers, Emotions, Face, Female, Humans, Infant, Mother-Child Relations
2.
J Med Internet Res ; 20(8): e10056, 2018 08 03.
Article in English | MEDLINE | ID: mdl-30076127

ABSTRACT

BACKGROUND: Pain is the most common physical symptom requiring medical care, yet the current methods for assessing pain are sorely inadequate. Pain assessment tools are often either too simplistic or too time-consuming to be useful for point-of-care diagnosis and treatment. OBJECTIVE: The aim was to develop and test Painimation, a novel tool that uses graphic visualizations and animations instead of words or numeric scales to assess pain quality, intensity, and course. This study examines the utility of abstract animations as a measure of pain. METHODS: Painimation was evaluated in a chronic pain medicine clinic. Eligible patients were receiving treatment for pain and reported pain more days than not for at least 3 months. Using a tablet computer, participating patients completed the Painimation instrument, the McGill Pain Questionnaire (MPQ), and the PainDETECT questionnaire for neuropathic symptoms. RESULTS: Participants (N=170) completed Painimation and indicated it was useful for describing their pain (mean 4.1, SE 0.1 out of 5 on a usefulness scale), and 130 of 162 participants (80.2%) agreed or strongly agreed that they would use Painimation to communicate with their providers. Animations selected corresponded with pain adjectives endorsed on the MPQ. Further, selection of the electrifying animation was associated with self-reported neuropathic pain (r=.16, P=.03), similar to the association between neuropathic pain and PainDETECT (r=.17, P=.03). Painimation was associated with PainDETECT (r=.35, P<.001). CONCLUSIONS: Using animations may be a faster and more patient-centered method for assessing pain and is not limited by age, literacy level, or language; however, more data are needed to assess the validity of this approach. To establish the validity of using abstract animations ("painimations") for communicating and assessing pain, apps and other digital tools using painimations will need to be tested longitudinally across a larger pain population and also within specific, more homogenous pain conditions.


Subjects
Medical Informatics/methods, Pain Measurement/methods, Pain/diagnosis, Communication, Cross-Sectional Studies, Feasibility Studies, Female, Humans, Male, Middle Aged, Pain/pathology, Surveys and Questionnaires
3.
Cleft Palate Craniofac J ; 55(5): 711-720, 2018 05.
Article in English | MEDLINE | ID: mdl-29377723

ABSTRACT

OBJECTIVE: To compare facial expressiveness (FE) of infants with and without craniofacial microsomia (cases and controls, respectively) and to compare phenotypic variation among cases in relation to FE. DESIGN: Positive and negative affect was elicited in response to standardized emotion inductions, video recorded, and manually coded from video using the Facial Action Coding System for Infants and Young Children. SETTING: Five craniofacial centers: Children's Hospital of Los Angeles, Children's Hospital of Philadelphia, Seattle Children's Hospital, University of Illinois-Chicago, and University of North Carolina-Chapel Hill. PARTICIPANTS: Eighty ethnically diverse 12- to 14-month-old infants. MAIN OUTCOME MEASURES: FE was measured on a frame-by-frame basis as the sum of 9 observed facial action units (AUs) representative of positive and negative affect. RESULTS: FE differed between conditions intended to elicit positive and negative affect (95% confidence interval = 0.09-0.66, P = .01). FE did not differ between cases and controls (ES = -0.16 to -0.02, P = .47 to .92). Among cases, those with and without mandibular hypoplasia showed similar levels of FE (ES = -0.38 to 0.54, P = .10 to .66). CONCLUSIONS: FE varied between positive and negative affect, and cases and controls responded similarly. Null findings for case/control differences may be attributable to a lower than anticipated prevalence of nerve palsy among cases, the selection of AUs, or the use of manual coding. In future research, we will reexamine group differences using an automated, computer vision approach that can cover a broader range of facial movements and their dynamics.
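
A minimal sketch of the FE metric described above, assuming a hypothetical frames x 9 array of binary AU codes per infant; the AU selection and manual coding pipeline are not reproduced here.

```python
# Frame-by-frame expressiveness as the sum of 9 binary AU codes, averaged per condition.
import numpy as np

def mean_fe_by_condition(au_codes, condition):
    # au_codes: frames x 9 array of 0/1 codes; condition: array of task labels per frame
    fe = au_codes.sum(axis=1)                 # frame-level expressiveness score
    return {c: fe[condition == c].mean() for c in np.unique(condition)}
```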


Subjects
Craniofacial Abnormalities/physiopathology, Facial Asymmetry/physiopathology, Facial Expression, Facial Paralysis/physiopathology, Case-Control Studies, Emotions, Female, Humans, Infant, Male, Phenotype, Single-Blind Method, Video Recording
4.
Image Vis Comput ; 32(10): 641-647, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25378765

ABSTRACT

The relationship between nonverbal behavior and severity of depression was investigated by following depressed participants over the course of treatment and video recording a series of clinical interviews. Facial expressions and head pose were analyzed from video using manual and automatic systems. Both systems were highly consistent for FACS action units (AUs) and showed similar effects for change over time in depression severity. When symptom severity was high, participants made fewer affiliative facial expressions (AUs 12 and 15) and more non-affiliative facial expressions (AU 14). Participants also exhibited diminished head motion (i.e., amplitude and velocity) when symptom severity was high. These results are consistent with the Social Withdrawal hypothesis: that depressed individuals use nonverbal behavior to maintain or increase interpersonal distance. As individuals recover, they send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and revealed the same pattern of findings suggests that automatic facial expression analysis may be ready to relieve the burden of manual coding in behavioral and clinical science.

5.
Front Pain Res (Lausanne) ; 3: 849950, 2022.
Article in English | MEDLINE | ID: mdl-35295797

ABSTRACT

[This corrects the article DOI: 10.3389/fpain.2021.788606.]

6.
Front Digit Health ; 4: 916810, 2022.
Article in English | MEDLINE | ID: mdl-36060543

ABSTRACT

In this mini-review, we discuss the fundamentals of using technology in mental health diagnosis and tracking. We highlight those principles using two clinical concepts: (1) cravings and relapse in the context of addictive disorders and (2) anhedonia in the context of depression. This manuscript is useful both for clinicians wanting to understand the scope of technology use in psychiatry and for computer scientists and engineers wishing to assess psychiatric frameworks useful for diagnosis and treatment. The increase in smartphone ownership and internet connectivity, as well as the accelerated development of wearable devices, has made the observation and analysis of human behavior patterns possible. This has, in turn, paved the way to a better understanding of mental health conditions. These technologies have immense potential for facilitating the diagnosis and tracking of mental health conditions; they also allow the implementation of existing behavioral treatments in new contexts (e.g., remotely, online, and in rural/underserved areas) and the development of new treatments based on a new understanding of behavior patterns. The path to understanding how best to use technology in mental health requires matching interdisciplinary frameworks from engineering/computer science and psychiatry. Thus, we start our review by introducing bio-behavioral sensing, the types of information available, and the behavioral patterns they may reflect and relate to in psychiatric diagnostic frameworks. This information is linked to the use of functional imaging, highlighting how imaging modalities can be considered "ground truth" for mental health/psychiatric dimensions, given the heterogeneity of clinical presentations and the difficulty of determining which symptom corresponds to which disease. We then discuss how mental health/psychiatric dimensions overlap with, yet differ from, psychiatric diagnoses. Using two clinical examples, we highlight potential areas of agreement in the assessment and management of anhedonia and cravings. These two dimensions were chosen because of their link to two very prevalent diseases worldwide: depression and addiction. Anhedonia is a core symptom of depression, which is one of the leading causes of disability worldwide. Craving, the urge to use a substance or perform an action (e.g., shopping, internet use), is the step that typically precedes relapse. Lastly, throughout the manuscript, we discuss potential mental health dimensions.

7.
IEEE Trans Affect Comput ; 13(4): 1813-1826, 2022.
Article in English | MEDLINE | ID: mdl-36452255

ABSTRACT

We propose an automatic method to estimate self-reported pain based on facial landmarks extracted from videos. For each video sequence, we decompose the face into four different regions, and pain intensity is measured by modeling the dynamics of facial movement using the landmarks of these regions. A formulation based on Gram matrices is used for representing the trajectory of landmarks on the Riemannian manifold of symmetric positive semi-definite matrices of fixed rank. A curve fitting algorithm is used to smooth the trajectories, and temporal alignment is performed to compute the similarity between the trajectories on the manifold. A Support Vector Regression model is then trained to encode the extracted trajectories into pain intensity levels consistent with self-reported pain intensity measurements. Finally, a late fusion of the estimates for each region is performed to obtain the final predicted pain level. The proposed approach is evaluated on two publicly available datasets, the UNBC-McMaster Shoulder Pain Archive and the BioVid Heat Pain dataset. We compared our method to the state of the art on both datasets using different testing protocols, showing the competitiveness of the proposed approach.
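
A simplified sketch of the landmark-trajectory representation, assuming per-region landmark arrays are available: a per-frame Gram matrix of centered coordinates, flattened over time and fed to support vector regression. The paper's Riemannian curve fitting, temporal alignment, and per-region late fusion are deliberately omitted.

```python
# Per-frame Gram matrices of centered landmarks as a crude trajectory feature.
import numpy as np
from sklearn.svm import SVR

def gram_features(landmarks):
    # landmarks: frames x n_points x 2 array for one face region
    feats = []
    for frame in landmarks:
        centered = frame - frame.mean(axis=0)
        gram = centered @ centered.T              # PSD matrix of rank <= 2
        feats.append(gram[np.triu_indices(len(frame))])
    return np.concatenate(feats)                  # one long vector per video

# X: one feature vector per video, y: self-reported pain intensity (assumed inputs)
# model = SVR(kernel="rbf").fit(X, y)
```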

8.
ICMI '21 Companion (2021) ; 2021: 54-57, 2021 Oct.
Article in English | MEDLINE | ID: mdl-38013804

ABSTRACT

Advances in the understanding and control of pain require methods for measuring its presence, intensity, and other qualities. Shortcomings of the main method for evaluating pain, verbal report, have motivated the pursuit of other measures. Measurement of observable pain-related behaviors, such as facial expressions, has provided an alternative, but has seen limited application because available techniques are burdensome. Computer vision and machine learning techniques have been successfully applied to the assessment of pain-related facial expression, suggesting that automated assessment may be feasible. Further development is necessary before such techniques can have more widespread implementation in pain science and clinical practice. Suggestions are made for the dimensions that need to be addressed to facilitate such developments.

9.
Article in English | MEDLINE | ID: mdl-35174358

ABSTRACT

Pain is often characterized as a fundamentally subjective phenomenon; however, all pain assessment reduces the experience to observables, with strengths and limitations. Most evidence about pain derives from observations of pain-related behavior. There has been considerable progress in articulating the properties of behavioral indices of pain, especially, but not exclusively, those based on facial expression. An abundant literature shows that a limited subset of facial actions, with homologues in several non-human species, encodes pain intensity across the lifespan. Unfortunately, acquiring such measures remains prohibitively impractical in many settings because it requires trained human observers and is laborious. The advent of the field of affective computing, which applies computer vision and machine learning (CVML) techniques to the recognition of behavior, raised the prospect that advanced technology might overcome some of the constraints limiting behavioral pain assessment in clinical and research settings. Studies have shown that it is indeed possible, through CVML, to develop systems that track facial expressions of pain. There has since been an explosion of research testing models for automated pain assessment. More recently, researchers have explored the feasibility of multimodal measurement of pain-related behaviors. Commercial products that purport to enable automatic, real-time measurement of pain expression have also appeared. Though progress has been made, this field remains in its infancy, and there is a risk of overpromising on what can be delivered. Insufficient adherence to conventional principles for developing valid measures and drawing appropriate generalizations to identifiable populations could lead to scientifically dubious and clinically risky claims. There is a particular need for the development of databases containing samples from various settings in which pain may or may not occur, meticulously annotated according to standards that would permit sharing, subject to international privacy standards. Researchers and users need to be sensitive to the limitations of the technology (for example, the potential reification of biases that are irrelevant to the assessment of pain) and its potentially problematic social implications.

10.
Article in English | MEDLINE | ID: mdl-34651145

ABSTRACT

We propose an automatic method for pain intensity measurement from video. For each video, pain intensity was measured from the dynamics of facial movement, using 66 facial points. A Gram matrix formulation was used to represent the facial point trajectories on the Riemannian manifold of symmetric positive semi-definite matrices of fixed rank. Curve fitting and temporal alignment were then used to smooth and align the extracted trajectories. A Support Vector Regression model was then trained to encode the extracted trajectories into ten pain intensity levels consistent with the Visual Analogue Scale for pain intensity measurement. The proposed approach was evaluated using the UNBC-McMaster Shoulder Pain Archive and was compared to the state of the art on the same data. Using both 5-fold cross-validation and leave-one-subject-out cross-validation, our results are competitive with respect to state-of-the-art methods.
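
A sketch of the leave-one-subject-out protocol mentioned above, using scikit-learn's LeaveOneGroupOut; the feature matrix, VAS targets, and subject identifiers are assumed to come from an upstream feature-extraction step like the one described in the abstract.

```python
# Leave-one-subject-out evaluation with mean absolute error per held-out subject.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVR

def loso_mae(X, y, subjects):
    errors = []
    for train, test in LeaveOneGroupOut().split(X, y, groups=subjects):
        model = SVR().fit(X[train], y[train])
        errors.append(np.mean(np.abs(model.predict(X[test]) - y[test])))
    return float(np.mean(errors))
```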

11.
Proc ACM Int Conf Multimodal Interact ; 2020: 156-164, 2020 Oct.
Article in English | MEDLINE | ID: mdl-34755152

ABSTRACT

The standard clinical assessment of pain is limited primarily to self-reported pain or clinician impression. While the self-reported measurement of pain is useful, in some circumstances it cannot be obtained. Automatic facial expression analysis has emerged as a potential solution for an objective, reliable, and valid measurement of pain. In this study, we propose a video-based approach for the automatic measurement of self-reported pain and observer-rated pain intensity. To this end, we explore the added value of three self-reported pain scales, i.e., the Visual Analog Scale (VAS), the Sensory Scale (SEN), and the Affective Motivational Scale (AFF), as well as the Observer Pain Intensity (OPI) rating, for a reliable assessment of pain intensity from facial expression. Using a spatio-temporal Convolutional Neural Network - Recurrent Neural Network (CNN-RNN) architecture, we propose to jointly minimize the mean absolute error of pain score estimation for each of these scales while maximizing the consistency between them. The reliability of the proposed method is evaluated on the benchmark database for pain measurement from videos, namely, the UNBC-McMaster Pain Archive. Our results show that enforcing consistency between the different self-reported pain intensity scores collected using different pain scales enhances the quality of predictions and improves the state of the art in automatic self-reported pain estimation. The obtained results suggest that automatic assessment of self-reported pain intensity from videos is feasible and could be used as a complementary instrument to unburden caregivers, especially for vulnerable populations that need constant monitoring.
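
A hedged sketch of the joint objective described above: mean absolute error on each pain scale plus a penalty for disagreement between scales. The scale names, the dict-based interface, and the assumption that scores are normalized to a common range are illustrative choices, not the authors' code.

```python
# Joint MAE + cross-scale consistency loss for a multi-output pain regressor.
import torch

SCALES = ["VAS", "SEN", "AFF", "OPI"]

def joint_pain_loss(preds, targets, consistency_weight=0.5):
    # preds/targets: dicts mapping scale name -> tensor of shape (batch,),
    # with all scores assumed normalized to a common range (e.g., 0-1).
    mae = sum(torch.mean(torch.abs(preds[s] - targets[s])) for s in SCALES)
    consistency = sum(torch.mean(torch.abs(preds[a] - preds[b]))
                      for i, a in enumerate(SCALES) for b in SCALES[i + 1:])
    return mae + consistency_weight * consistency
```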

12.
Proc ACM Int Conf Multimodal Interact ; 2020: 874-875, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33274351

ABSTRACT

The goal of the Face and Gesture Analysis for Health Informatics workshop is to share and discuss the achievements as well as the challenges in using computer vision and machine learning for automatic human behavior analysis and modeling for clinical research and healthcare applications. The workshop aims to promote current research and support the growth of multidisciplinary collaborations to advance this groundbreaking research. The meeting gathers scientists working in related areas of computer vision and machine learning, multi-modal signal processing and fusion, human-centered computing, behavioral sensing, assistive technologies, and medical tutoring systems for healthcare applications and medicine.

13.
J Vis ; 9(2): 22.1-19, 2009 Feb 26.
Article in English | MEDLINE | ID: mdl-19271932

ABSTRACT

Humans recognize basic facial expressions effortlessly. Yet, despite a considerable amount of research, this task remains elusive for computer vision systems. Here, we compared the behavior of one of the best computer models of facial expression recognition (Z. Hammal, L. Couvreur, A. Caplier, & M. Rombaut, 2007) with the behavior of human observers during the M. Smith, G. Cottrell, F. Gosselin, and P. G. Schyns (2005) facial expression recognition task performed on stimuli randomly sampled using Gaussian apertures. The model, which we had to modify substantially so that it could handle partially occluded stimuli, classifies the six basic facial expressions (Happiness, Fear, Sadness, Surprise, Anger, and Disgust) plus Neutral from static images, based on permanent facial feature deformations and the Transferable Belief Model (TBM). Three simulations demonstrated the suitability of the TBM-based model for dealing with partially occluded facial parts and revealed differences between the facial information used by humans and by the model. This opens promising perspectives for the future development of the model.


Subjects
Facial Expression, Psychological Models, Visual Pattern Recognition, Computer Simulation, Humans, Perceptual Masking
14.
Article in English | MEDLINE | ID: mdl-31745390

ABSTRACT

With few exceptions, most research in automated assessment of depression has considered only the patient's behavior to the exclusion of the therapist's behavior. We investigated the interpersonal coordination (synchrony) of head movement during patient-therapist clinical interviews. Our sample consisted of patients diagnosed with major depressive disorder. They were recorded in clinical interviews (Hamilton Rating Scale for Depression, HRSD) at 7-week intervals over a period of 21 weeks. For each session, patient and therapist 3D head movement was tracked from 2D videos. Head angles in the horizontal (pitch) and vertical (yaw) axes were used to measure head movement. Interpersonal coordination of head movement between patients and therapists was measured using windowed cross-correlation. Patterns of coordination in head movement were investigated using a peak-picking algorithm. Changes in head movement coordination over the course of treatment were measured using a hierarchical linear model (HLM). The results indicated a strong effect for patient-therapist head movement synchrony. Within-dyad variability in head movement coordination was higher than between-dyad variability, meaning that differences over time within a dyad were larger than the differences between dyads. Head movement synchrony did not change over the course of treatment as depression severity changed. To the best of our knowledge, this study is the first attempt to analyze the mutual influence of patient-therapist head movement in relation to depression severity.
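
A minimal sketch of the windowed cross-correlation step, assuming two aligned head-angle series (e.g., yaw) for patient and therapist; the window, step, and lag sizes are illustrative, and the peak-picking and HLM analyses are not reproduced.

```python
# Windowed cross-correlation: one row of lagged correlations per time window.
import numpy as np

def windowed_xcorr(patient, therapist, win=120, step=30, max_lag=15):
    out = []
    for start in range(max_lag, len(patient) - win - max_lag, step):
        a = patient[start:start + win]
        row = [np.corrcoef(a, therapist[start + lag:start + lag + win])[0, 1]
               for lag in range(-max_lag, max_lag + 1)]
        out.append(row)
    return np.array(out)        # windows x lags matrix of correlations
```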

15.
Plast Reconstr Surg Glob Open ; 7(1): e2081, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30859039

ABSTRACT

BACKGROUND: Craniofacial microsomia (CFM) is a congenital condition associated with malformations of the bone and soft tissue of the face and the facial nerves, all of which have the potential to impair facial expressiveness. We investigated whether CFM-related variation in expressiveness is evident as early as infancy. METHODS: Participants were 113 ethnically diverse 13-month-old infants (n = 63 cases with CFM and n = 50 unaffected matched controls). They were observed in 2 emotion induction tasks designed to elicit positive and negative affect. Facial and head movement was automatically measured using a computer vision-based approach. Expressiveness was quantified as the displacement, velocity, and acceleration of 49 facial landmarks (e.g., lip corners) and of head pitch and yaw. RESULTS: For both cases and controls, all measures of expressiveness strongly differed between tasks. Case-control differences were limited to infants with microtia plus mandibular hypoplasia and those with other associated CFM features, which were the most common phenotypes and were characterized by decreased expressiveness relative to control infants. CONCLUSIONS: Infants with microtia plus mandibular hypoplasia and those with other associated CFM phenotypes were less facially expressive than same-aged peers. Both phenotypes were associated with more severe involvement than microtia alone, suggesting that infants with more severe CFM begin to diverge in expressiveness from controls by age 13 months. Further research is needed both to replicate the current findings and to elucidate their developmental implications.

16.
IEEE J Biomed Health Inform ; 22(2): 525-536, 2018 03.
Article in English | MEDLINE | ID: mdl-28278485

ABSTRACT

Depression is one of the most common psychiatric disorders worldwide, with over 350 million people affected. Current methods to screen for and assess depression depend almost entirely on clinical interviews and self-report scales. While useful, such measures lack objective, systematic, and efficient ways of incorporating behavioral observations that are strong indicators of depression presence and severity. Using dynamics of facial and head movement and vocalization, we trained classifiers to detect three levels of depression severity. Participants were a community sample diagnosed with major depressive disorder. They were recorded in clinical interviews (Hamilton Rating Scale for Depression, HRSD) at seven-week intervals over a period of 21 weeks. At each interview, they were scored by the HRSD as moderately to severely depressed, mildly depressed, or remitted. Logistic regression classifiers using leave-one-participant-out validation were compared for facial movement, head movement, and vocal prosody individually and in combination. Accuracy of depression severity measurement from facial movement dynamics was higher than that for head movement dynamics, and each was substantially higher than that for vocal prosody. Accuracy using all three modalities combined only marginally exceeded that of face and head combined. These findings suggest that automatic detection of depression severity from behavioral indicators in patients is feasible and that multimodal measures afford the most powerful detection.


Subjects
Depression, Computer-Assisted Diagnosis/methods, Video Recording/methods, Adult, Aged, Depression/classification, Depression/diagnosis, Depression/physiopathology, Face/physiology, Female, Head Movements/physiology, Humans, Interviews as Topic, Middle Aged, Severity of Illness Index, Computer-Assisted Signal Processing, Voice/physiology, Young Adult
17.
Article in English | MEDLINE | ID: mdl-30271308

ABSTRACT

Recent breakthroughs in deep learning using automated measurement of face and head motion have made possible the first objective measurement of depression severity. While powerful, deep learning approaches lack interpretability. We developed an interpretable method of automatically measuring depression severity that uses barycentric coordinates of facial landmarks and a Lie-algebra-based rotation matrix of 3D head motion. Using these representations, kinematic features are extracted, preprocessed, and encoded using Gaussian Mixture Models (GMM) and Fisher vector encoding. A multi-class SVM is used to classify the encoded facial and head movement dynamics into three levels of depression severity. The proposed approach was evaluated in adults with a history of chronic depression. The method approached the classification accuracy of state-of-the-art deep learning while enabling clinically and theoretically relevant findings. The velocity and acceleration of facial movement mapped strongly onto depression severity, consistent with clinical data and theory.
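
A hedged sketch of the encoding step: a first-order Fisher vector computed from a diagonal-covariance GMM fit to kinematic features, followed by a linear SVM. Second-order terms, normalization, and the Lie-algebra head-pose representation are omitted, and the feature arrays are assumed inputs.

```python
# First-order Fisher vector encoding of per-interview kinematic features.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def fisher_vector(features, gmm):
    # features: n_frames x d array; gmm: fitted diagonal-covariance GaussianMixture
    gamma = gmm.predict_proba(features)                    # n x K posteriors
    fv = []
    for k in range(gmm.n_components):
        diff = (features - gmm.means_[k]) / np.sqrt(gmm.covariances_[k])
        fv.append((gamma[:, [k]] * diff).sum(axis=0)
                  / (len(features) * np.sqrt(gmm.weights_[k])))
    return np.concatenate(fv)

# gmm = GaussianMixture(n_components=16, covariance_type="diag").fit(all_features)
# X = np.stack([fisher_vector(f, gmm) for f in per_interview_features])
# clf = SVC(kernel="linear").fit(X, severity_labels)       # three severity levels
```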

18.
Mol Autism ; 9: 14, 2018.
Article in English | MEDLINE | ID: mdl-29492241

ABSTRACT

Background: Deficits in motor movement in children with autism spectrum disorder (ASD) have typically been characterized qualitatively by human observers. Although clinicians have noted the importance of atypical head positioning (e.g., social peering and repetitive head banging) when diagnosing children with ASD, a quantitative understanding of head movement in ASD is lacking. Here, we conduct a quantitative comparison of head movement dynamics in children with and without ASD using automated, person-independent, computer vision-based head tracking (Zface). Because children with ASD often exhibit preferential attention to nonsocial versus social stimuli, we investigated whether children with and without ASD differed in their head movement dynamics depending on stimulus sociality. Methods: The current study examined differences in head movement dynamics in children with (n = 21) and without ASD (n = 21). Children were video-recorded while watching a 16-min video of social and nonsocial stimuli. Three dimensions of rigid head movement, pitch (head nods), yaw (head turns), and roll (lateral head inclinations), were tracked using Zface. The root mean square of pitch, yaw, and roll was calculated to index the magnitude of head angular displacement (quantity of head movement) and angular velocity (speed). Results: Compared with children without ASD, children with ASD exhibited greater yaw displacement, indicating greater head turning, and greater velocity of yaw and roll, indicating faster head turning and inclination. Follow-up analyses indicated that differences in head movement dynamics were specific to the social rather than the nonsocial stimulus condition. Conclusions: Head movement dynamics (displacement and velocity) were greater in children with ASD than in children without ASD, providing a quantitative foundation for previous clinical reports. Head movement differences were evident in lateral (yaw and roll) but not vertical (pitch) movement and were specific to the social rather than the nonsocial condition. When presented with social stimuli, children with ASD had higher levels of head movement and moved their heads more quickly than children without ASD. Children with ASD may use head movement to modulate their perception of social scenes.
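
A tiny sketch of the head-movement summary described above: root mean square (RMS) of angular displacement and of frame-to-frame angular velocity for one axis. The array name and frame rate are assumptions.

```python
# RMS angular displacement and RMS angular velocity for one head-pose axis.
import numpy as np

def rms_displacement_and_velocity(angles, fps=30.0):
    # angles: 1-D array of pitch, yaw, or roll in degrees
    displacement = angles - angles.mean()      # deviation from the mean pose
    velocity = np.diff(angles) * fps           # degrees per second
    return (np.sqrt(np.mean(displacement ** 2)),
            np.sqrt(np.mean(velocity ** 2)))
```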


Subjects
Autistic Disorder/physiopathology, Head Movements, Attention, Autistic Disorder/diagnosis, Case-Control Studies, Child, Preschool Child, Female, Humans, Male, Neurologic Examination/standards, Social Behavior
19.
Article in English | MEDLINE | ID: mdl-29862131

ABSTRACT

Action unit detection in infants presents unique challenges relative to adults. Jaw contour is less distinct, facial texture is reduced, and rapid and unusual facial movements are common. To detect facial action units in the spontaneous behavior of infants, we propose a multi-label Convolutional Neural Network (CNN). Eighty-six infants were recorded during tasks intended to elicit enjoyment and frustration. Using an extension of FACS for infants (Baby FACS), over 230,000 frames were manually coded for ground truth. To control for chance agreement, inter-observer agreement between Baby FACS coders was quantified using free-marginal kappa. Kappa coefficients ranged from 0.79 to 0.93, which represents high agreement. The multi-label CNN achieved comparable agreement with manual coding; kappa ranged from 0.69 to 0.93. Importantly, the CNN-based AU detection revealed the same pattern of change in infant expressiveness between tasks. While further research is needed, these findings suggest that automatic AU detection in infants is a viable alternative to manual coding of infant facial expression.
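
For reference, free-marginal (Brennan-Prediger) kappa fixes chance agreement at 1/k for k categories, so for binary AU codes chance is 1/2. A minimal sketch, with the example values chosen for illustration only:

```python
# Free-marginal kappa: (observed agreement - 1/k) / (1 - 1/k).
def free_marginal_kappa(observed_agreement, n_categories=2):
    chance = 1.0 / n_categories
    return (observed_agreement - chance) / (1.0 - chance)

# e.g., two coders agreeing on 90% of frames for a binary AU:
# free_marginal_kappa(0.90)  ->  0.80
```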

20.
IEEE Trans Affect Comput ; 6(4): 361-370, 2015.
Article in English | MEDLINE | ID: mdl-26640622

ABSTRACT

We investigated the dynamics of head movement in mothers and infants during an age-appropriate, well-validated emotion induction, the Still Face paradigm. In this paradigm, mothers and infants play normally for 2 minutes (Play), followed by 2 minutes in which the mothers remain unresponsive (Still Face), and then 2 minutes in which they resume normal behavior (Reunion). Participants were 42 ethnically diverse 4-month-old infants and their mothers. Mother and infant angular displacement and angular velocity were measured using the CSIRO head tracker. In male but not female infants, angular displacement increased from Play to Still Face and decreased from Still Face to Reunion. Infant angular velocity was higher during Still Face than during Reunion, with no differences between male and female infants. Windowed cross-correlation suggested changes in how infant and mother head movements were associated, revealing dramatic changes in the direction of association. Coordination between mother and infant head movement velocity was greater during Play than during Reunion. Together, these findings suggest that angular displacement, angular velocity, and their coordination between mothers and infants are strongly related to age-appropriate emotion challenge. Attention to head movement can deepen our understanding of emotion communication.
