1.
Multivariate Behav Res ; 56(5): 739-767, 2021.
Article in English | MEDLINE | ID: mdl-32530313

ABSTRACT

Head movement is an important but often overlooked component of emotion and social interaction. Examination of regularity and differences in the head movements of infant-mother dyads over time and across dyads can shed light on whether and how mothers and infants alter their dynamics over the course of an interaction to adapt to each other. One way to study these emergent differences in dynamics is to allow the parameters that govern the patterns of interaction to change over time and according to person- and dyad-specific characteristics. Using two estimation approaches to implement variations of a vector-autoregressive model with time-varying coefficients, we investigated the dynamics of automatically tracked head movements in mothers and infants during the Face-to-Face/Still-Face Procedure (SFP) with 24 infant-mother dyads. The first approach requires specification of a confirmatory model for the time-varying parameters as part of a state-space model, whereas the second approach handles the time-varying parameters in a semi-parametric ("mostly" model-free) fashion within a generalized additive modeling framework. Results suggested that infant-mother head movement dynamics varied in time both within and across episodes of the SFP, and varied based on infants' subsequently assessed attachment security. Code for implementing the time-varying vector-autoregressive model using two R packages, dynr and mgcv, is provided.
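For a concrete feel of the time-varying VAR idea, the sketch below is a minimal Python illustration, not the authors' dynr/mgcv code: a bivariate VAR(1) whose coefficient matrix is re-fit in a sliding window over toy data, so that a change in cross-partner coupling becomes visible over time. The window size and simulated series are hypothetical.

# Minimal sketch (not the authors' dynr/mgcv implementation): a bivariate
# VAR(1) whose 2x2 lag-1 coefficient matrix is re-estimated in a sliding
# window, tracking how coupling between the two series changes over time.
import numpy as np

def rolling_var1(y, window=100):
    """y: (T, 2) array, e.g. columns = infant and mother head-movement series.
    Returns one 2x2 lag-1 coefficient matrix per window position."""
    T, k = y.shape
    coefs = []
    for t in range(window, T):
        X = y[t - window:t - 1]        # lagged values within the window
        Y = y[t - window + 1:t]        # current values within the window
        # least-squares fit of Y = X @ A for this window; A.T maps y[t-1] -> y[t]
        A, *_ = np.linalg.lstsq(X, Y, rcond=None)
        coefs.append(A.T)
    return np.array(coefs)             # shape (T - window, 2, 2)

# toy data: coupling from series 0 to series 1 strengthens halfway through
rng = np.random.default_rng(0)
T = 600
y = np.zeros((T, 2))
for t in range(1, T):
    b = 0.1 if t < T // 2 else 0.5     # time-varying cross-regression weight
    y[t, 0] = 0.6 * y[t - 1, 0] + rng.normal(scale=0.5)
    y[t, 1] = 0.6 * y[t - 1, 1] + b * y[t - 1, 0] + rng.normal(scale=0.5)

A_t = rolling_var1(y)
print(A_t[0])    # early-window coefficients
print(A_t[-1])   # late windows recover the stronger 0 -> 1 coupling (~0.5)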


Subject(s)
Head Movements , Mothers , Emotions , Face , Female , Humans , Infant , Mother-Child Relations
2.
J Med Internet Res ; 20(8): e10056, 2018 08 03.
Article in English | MEDLINE | ID: mdl-30076127

ABSTRACT

BACKGROUND: Pain is the most common physical symptom requiring medical care, yet current methods for assessing pain are sorely inadequate: pain assessment tools can be either too simplistic or too time-consuming to be useful for point-of-care diagnosis and treatment. OBJECTIVE: The aim was to develop and test Painimation, a novel tool that uses graphic visualizations and animations instead of words or numeric scales to assess pain quality, intensity, and course. This study examines the utility of abstract animations as a measure of pain. METHODS: Painimation was evaluated in a chronic pain medicine clinic. Eligible patients were receiving treatment for pain and had reported pain on more days than not for at least 3 months. Using a tablet computer, participating patients completed the Painimation instrument, the McGill Pain Questionnaire (MPQ), and the PainDETECT questionnaire for neuropathic symptoms. RESULTS: Participants (N=170) completed Painimation and indicated that it was useful for describing their pain (mean 4.1, SE 0.1 on a 5-point usefulness scale), and 130 of 162 participants (80.2%) agreed or strongly agreed that they would use Painimation to communicate with their providers. The animations selected corresponded with pain adjectives endorsed on the MPQ. Further, selection of the electrifying animation was associated with self-reported neuropathic pain (r=.16, P=.03), similar to the association between neuropathic pain and PainDETECT (r=.17, P=.03). Painimation was associated with PainDETECT (r=.35, P<.001). CONCLUSIONS: Using animations may be a faster and more patient-centered method for assessing pain, and it is not limited by age, literacy level, or language; however, more data are needed to assess the validity of this approach. To establish the validity of using abstract animations ("painimations") for communicating and assessing pain, apps and other digital tools using painimations will need to be tested longitudinally across a larger pain population and also within specific, more homogeneous pain conditions.


Subject(s)
Medical Informatics/methods , Pain Measurement/methods , Pain/diagnosis , Communication , Cross-Sectional Studies , Feasibility Studies , Female , Humans , Male , Middle Aged , Pain/pathology , Surveys and Questionnaires
3.
Cleft Palate Craniofac J ; 55(5): 711-720, 2018 05.
Article in English | MEDLINE | ID: mdl-29377723

ABSTRACT

OBJECTIVE: To compare facial expressiveness (FE) of infants with and without craniofacial microsomia (cases and controls, respectively) and to compare phenotypic variation among cases in relation to FE. DESIGN: Positive and negative affect was elicited in response to standardized emotion inductions, video recorded, and manually coded from video using the Facial Action Coding System for Infants and Young Children. SETTING: Five craniofacial centers: Children's Hospital of Los Angeles, Children's Hospital of Philadelphia, Seattle Children's Hospital, University of Illinois-Chicago, and University of North Carolina-Chapel Hill. PARTICIPANTS: Eighty ethnically diverse 12- to 14-month-old infants. MAIN OUTCOME MEASURES: FE was measured on a frame-by-frame basis as the sum of 9 observed facial action units (AUs) representative of positive and negative affect. RESULTS: FE differed between conditions intended to elicit positive and negative affect (95% confidence interval = 0.09-0.66, P = .01). FE did not differ between cases and controls (ES = -0.16 to -0.02, P = .47 to .92). Among cases, those with and without mandibular hypoplasia showed similar levels of FE (ES = -0.38 to 0.54, P = .10 to .66). CONCLUSIONS: FE varied between positive and negative affect, and cases and controls responded similarly. Null findings for case/control differences may be attributable to a lower than anticipated prevalence of nerve palsy among cases, the selection of AUs, or the use of manual coding. In future research, we will reexamine group differences using an automated, computer vision approach that can cover a broader range of facial movements and their dynamics.
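The outcome measure described above reduces to a simple frame-wise sum. A minimal sketch, assuming a binary frame-by-AU coding matrix (the study's actual pipeline is manual FACS coding, not reproduced here; the data below are toy values):

# Hedged sketch: facial expressiveness (FE) as the frame-by-frame sum of
# binary FACS action-unit codes, as the abstract describes.
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_aus = 300, 9                                # 9 AUs, per the abstract
au_codes = rng.integers(0, 2, size=(n_frames, n_aus))   # 1 = AU present in frame

fe_per_frame = au_codes.sum(axis=1)     # FE score for each video frame
print(fe_per_frame.mean())              # mean FE over an episode, for comparison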


Subject(s)
Craniofacial Abnormalities/physiopathology , Facial Asymmetry/physiopathology , Facial Expression , Facial Paralysis/physiopathology , Case-Control Studies , Emotions , Female , Humans , Infant , Male , Phenotype , Single-Blind Method , Video Recording
4.
Image Vis Comput ; 32(10): 641-647, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25378765

ABSTRACT

The relationship between nonverbal behavior and severity of depression was investigated by following depressed participants over the course of treatment and video recording a series of clinical interviews. Facial expressions and head pose were analyzed from video using manual and automatic systems. The two systems were highly consistent for FACS action units (AUs) and showed similar effects for change over time in depression severity. When symptom severity was high, participants made fewer affiliative facial expressions (AUs 12 and 15) and more non-affiliative facial expressions (AU 14). Participants also exhibited diminished head motion (i.e., amplitude and velocity) when symptom severity was high. These results are consistent with the Social Withdrawal hypothesis: that depressed individuals use nonverbal behavior to maintain or increase interpersonal distance. As individuals recover, they send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was consistent with manual coding and revealed the same pattern of findings suggests that automatic analysis may be ready to relieve the burden of manual coding in behavioral and clinical science.
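A hedged sketch of the two head-motion summaries the abstract refers to, amplitude and velocity, computed from a toy head-pose angle series (the frame rate, signal layout, and summary statistics are illustrative assumptions, not the paper's pipeline):

# Hedged sketch: head-motion velocity and amplitude from a head-pose series.
import numpy as np

fps = 30.0                                          # assumed video frame rate
rng = np.random.default_rng(2)
yaw = np.cumsum(rng.normal(scale=0.2, size=900))    # toy yaw angle (deg), 30 s

velocity = np.abs(np.diff(yaw)) * fps               # deg/s between frames
amplitude = yaw.std()                               # overall movement amplitude
print(amplitude, velocity.mean())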

5.
Front Pain Res (Lausanne) ; 3: 849950, 2022.
Article in English | MEDLINE | ID: mdl-35295797

ABSTRACT

[This corrects the article DOI: 10.3389/fpain.2021.788606.].

6.
Front Digit Health ; 4: 916810, 2022.
Article in English | MEDLINE | ID: mdl-36060543

ABSTRACT

In this mini-review, we discuss the fundamentals of using technology in mental health diagnosis and tracking. We highlight those principles using two clinical concepts: (1) cravings and relapse in the context of addictive disorders and (2) anhedonia in the context of depression. This manuscript is useful both for clinicians wanting to understand the scope of technology use in psychiatry and for computer scientists and engineers wishing to assess psychiatric frameworks useful for diagnosis and treatment. The increase in smartphone ownership and internet connectivity, as well as the accelerated development of wearable devices, has made the observation and analysis of human behavior patterns possible. This has, in turn, paved the way to a better understanding of mental health conditions. These technologies have immense potential to facilitate the diagnosis and tracking of mental health conditions; they also allow existing behavioral treatments to be implemented in new contexts (e.g., remotely, online, and in rural/underserved areas) and new treatments to be developed from a new understanding of behavior patterns. The path to understanding how best to use technology in mental health requires matching interdisciplinary frameworks from engineering/computer science and psychiatry. Thus, we start our review by introducing bio-behavioral sensing, the types of information available, and the behavioral patterns they may reflect within psychiatric diagnostic frameworks. We link this information to the use of functional imaging, highlighting how imaging modalities can be considered "ground truth" for mental health/psychiatric dimensions, given the heterogeneity of clinical presentations and the difficulty of determining which symptom corresponds to which disease. We then discuss how mental health/psychiatric dimensions overlap with, yet differ from, psychiatric diagnoses. Using two clinical examples, we highlight potential areas of agreement in the assessment and management of anhedonia and cravings. These two dimensions were chosen because of their link to two highly prevalent diseases worldwide: depression and addiction. Anhedonia is a core symptom of depression, one of the leading causes of disability worldwide. Craving, the urge to use a substance or perform an action (e.g., shopping, internet use), is the step that typically precedes relapse. Lastly, throughout the manuscript, we discuss potential mental health dimensions.

7.
IEEE Trans Affect Comput ; 13(4): 1813-1826, 2022.
Article in English | MEDLINE | ID: mdl-36452255

ABSTRACT

We propose an automatic method to estimate self-reported pain from facial landmarks extracted from videos. For each video sequence, we decompose the face into four regions and measure pain intensity by modeling the dynamics of facial movement using the landmarks of these regions. A formulation based on Gram matrices is used to represent the trajectories of landmarks on the Riemannian manifold of symmetric positive semi-definite matrices of fixed rank. A curve-fitting algorithm is used to smooth the trajectories, and temporal alignment is performed to compute the similarity between trajectories on the manifold. A Support Vector Regression model is then trained to encode the extracted trajectories into pain intensity levels consistent with self-reported pain intensity measurements. Finally, a late fusion of the estimates for each region is performed to obtain the final predicted pain level. The proposed approach is evaluated on two publicly available datasets, the UNBC-McMaster Shoulder Pain Archive and the BioVid Heat Pain dataset. We compared our method to the state of the art on both datasets using different testing protocols, showing the competitiveness of the proposed approach.
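A much-simplified sketch of the core representation, on toy landmark data: each frame's centered landmarks yield a Gram matrix G = P P^T, and a flattened trajectory summary feeds a support vector regressor. The paper's Riemannian curve fitting, temporal alignment, and per-region late fusion are omitted, so this illustrates the representation only; all names and data are hypothetical.

# Hedged sketch: per-frame Gram matrices from centered landmarks, plus a
# crude trajectory summary fed to SVR (stands in for the paper's Riemannian
# trajectory comparison, which is not reproduced here).
import numpy as np
from sklearn.svm import SVR

def gram_trajectory(landmarks):
    """landmarks: (T, n_points, 2) array of facial points per frame."""
    centered = landmarks - landmarks.mean(axis=1, keepdims=True)
    # G_t = P_t P_t^T for each frame: (T, n_points, n_points) PSD matrices
    return np.einsum('tij,tkj->tik', centered, centered)

rng = np.random.default_rng(3)
n_videos, T, n_pts = 40, 50, 10
X, y = [], rng.uniform(0, 10, size=n_videos)    # toy self-reported pain levels
for _ in range(n_videos):
    traj = gram_trajectory(rng.normal(size=(T, n_pts, 2)))
    # summarize the trajectory by its mean Gram matrix (upper triangle only)
    X.append(traj.mean(axis=0)[np.triu_indices(n_pts)])
model = SVR().fit(np.array(X), y)
print(model.predict(np.array(X)[:3]))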

8.
ICMI '21 Companion (2021) ; 2021: 54-57, 2021 Oct.
Article in English | MEDLINE | ID: mdl-38013804

ABSTRACT

Advances in the understanding and control of pain require methods for measuring its presence, intensity, and other qualities. Shortcomings of the main method for evaluating pain, verbal report, have motivated the pursuit of other measures. Measurement of observable pain-related behaviors, such as facial expressions, has provided an alternative but has seen limited application because available techniques are burdensome. Computer vision and machine learning techniques have been successfully applied to the assessment of pain-related facial expression, suggesting that automated assessment may be feasible. Further development is necessary before such techniques can have more widespread implementation in pain science and clinical practice. Suggestions are made for the dimensions that need to be addressed to facilitate such developments.

9.
Article in English | MEDLINE | ID: mdl-35174358

ABSTRACT

Pain is often characterized as a fundamentally subjective phenomenon; however, all pain assessment reduces the experience to observables, each with strengths and limitations. Most evidence about pain derives from observations of pain-related behavior. There has been considerable progress in articulating the properties of behavioral indices of pain, especially, but not exclusively, those based on facial expression. An abundant literature shows that a limited subset of facial actions, with homologues in several non-human species, encodes pain intensity across the lifespan. Unfortunately, acquiring such measures remains prohibitively impractical in many settings because it requires trained human observers and is laborious. The advent of the field of affective computing, which applies computer vision and machine learning (CVML) techniques to the recognition of behavior, raised the prospect that advanced technology might overcome some of the constraints limiting behavioral pain assessment in clinical and research settings. Studies have shown that it is indeed possible, through CVML, to develop systems that track facial expressions of pain. There has since been an explosion of research testing models for automated pain assessment. More recently, researchers have explored the feasibility of multimodal measurement of pain-related behaviors. Commercial products that purport to enable automatic, real-time measurement of pain expression have also appeared. Though progress has been made, this field remains in its infancy, and there is a risk of overpromising on what can be delivered. Insufficient adherence to conventional principles for developing valid measures and drawing appropriate generalizations to identifiable populations could lead to scientifically dubious and clinically risky claims. There is a particular need for databases containing samples from various settings in which pain may or may not occur, meticulously annotated so that they can be shared, subject to international privacy standards. Researchers and users need to be sensitive to the limitations of the technology (for example, the potential reification of biases that are irrelevant to the assessment of pain) and to its potentially problematic social implications.

10.
Article in English | MEDLINE | ID: mdl-34651145

ABSTRACT

We propose an automatic method for pain intensity measurement from video. For each video, pain intensity was measured from the dynamics of facial movement, captured by 66 facial landmarks. A Gram matrix formulation was used to represent the trajectories of these points on the Riemannian manifold of symmetric positive semi-definite matrices of fixed rank. Curve fitting and temporal alignment were then used to smooth the extracted trajectories. A Support Vector Regression model was then trained to encode the extracted trajectories into ten pain intensity levels consistent with the Visual Analogue Scale for pain intensity measurement. The proposed approach was evaluated on the UNBC-McMaster Shoulder Pain Archive and compared to the state of the art on the same data. Under both 5-fold cross-validation and leave-one-subject-out cross-validation, our results are competitive with state-of-the-art methods.
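The two evaluation protocols named above guard against different failures: leave-one-subject-out keeps every sequence from a subject on the same side of each split, preventing the model from scoring well by memorizing facial identity. A hedged sketch with toy features, labels, and subject IDs (all hypothetical):

# Hedged sketch: 5-fold CV versus leave-one-subject-out CV for a pain regressor.
import numpy as np
from sklearn.model_selection import KFold, LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(4)
X = rng.normal(size=(120, 20))           # one feature row per video sequence
y = rng.uniform(0, 10, size=120)         # VAS-style pain intensity labels
subjects = np.repeat(np.arange(24), 5)   # 24 subjects, 5 sequences each

print(cross_val_score(SVR(), X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0)))
print(cross_val_score(SVR(), X, y, groups=subjects, cv=LeaveOneGroupOut()))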
