Results 1-20 of 117
1.
Pattern Recognit Lett ; 182: 111-117, 2024 Jun.
Article in English | MEDLINE | ID: mdl-39086494

ABSTRACT

Detecting action units is an important task in face analysis, especially in facial expression recognition. This is due, in part, to the idea that expressions can be decomposed into multiple action units. To evaluate systems that detect action units, F1-binary score is often used as the evaluation metric. In this paper, we argue that F1-binary score does not reliably evaluate these models due largely to class imbalance. Because of this, F1-binary score should be retired and a suitable replacement should be used. We justify this argument through a detailed evaluation of the negative influence of class imbalance on action unit detection. This includes an investigation into the influence of class imbalance in train and test sets and in new data (i.e., generalizability). We empirically show that F1-micro should be used as the replacement for F1-binary.
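To make the metric argument concrete, here is a minimal sketch (assuming scikit-learn; the detector and data are random placeholders, not the paper's) showing how per-AU F1-binary collapses for a rare AU while F1-micro, which pools true and false positives and negatives across all AUs and frames, stays stable:

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Toy multi-label AU ground truth: 3 AUs, the third is rare (class imbalance).
n = 1000
y_true = np.column_stack([
    rng.random(n) < 0.40,   # common AU
    rng.random(n) < 0.30,   # common AU
    rng.random(n) < 0.03,   # rare AU -- the problematic case
]).astype(int)

# A detector with the same 10% per-frame error rate on every AU.
flip = rng.random(y_true.shape) < 0.10
y_pred = np.abs(y_true - flip.astype(int))

# Per-AU F1-binary: the rare AU's score collapses under imbalance even
# though the detector's error rate is identical across AUs.
for k in range(3):
    print(f"AU {k}: F1-binary = {f1_score(y_true[:, k], y_pred[:, k]):.3f}")

# F1-micro pools TP/FP/FN over all AUs and frames, so it is far less
# sensitive to any single AU's base rate.
print(f"F1-micro = {f1_score(y_true, y_pred, average='micro'):.3f}")
```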

2.
J Neuroeng Rehabil ; 20(1): 64, 2023 May 16.
Article in English | MEDLINE | ID: mdl-37193985

ABSTRACT

BACKGROUND: Major Depressive Disorder (MDD) is associated with interoceptive deficits expressed throughout the body, particularly the facial musculature. According to the facial feedback hypothesis, afferent feedback from the facial muscles suffices to alter the emotional experience. Thus, manipulating the facial muscles could provide a new "mind-body" intervention for MDD. This article provides a conceptual overview of functional electrical stimulation (FES), a novel neuromodulation-based treatment modality that can be potentially used in the treatment of disorders of disrupted brain connectivity, such as MDD. METHODS: A focused literature search was performed for clinical studies of FES as a modulatory treatment for mood symptoms. The literature is reviewed in a narrative format, integrating theories of emotion, facial expression, and MDD. RESULTS: A rich body of literature on FES supports the notion that peripheral muscle manipulation in patients with stroke or spinal cord injury may enhance central neuroplasticity, restoring lost sensorimotor function. These neuroplastic effects suggest that FES may be a promising innovative intervention for psychiatric disorders of disrupted brain connectivity, such as MDD. Recent pilot data on repetitive FES applied to the facial muscles in healthy participants and patients with MDD show early promise, suggesting that FES may attenuate the negative interoceptive bias associated with MDD by enhancing positive facial feedback. Neurobiologically, the amygdala and nodes of the emotion-to-motor transformation loop may serve as potential neural targets for facial FES in MDD, as they integrate proprioceptive and interoceptive inputs from muscles of facial expression and fine-tune their motor output in line with socio-emotional context. CONCLUSIONS: Manipulating facial muscles may represent a mechanistically novel treatment strategy for MDD and other disorders of disrupted brain connectivity that is worthy of investigation in phase II/III trials.


Subject(s)
Major Depressive Disorder, Humans, Major Depressive Disorder/therapy, Facial Muscles, Emotions/physiology, Brain, Electric Stimulation, Magnetic Resonance Imaging
3.
Infancy ; 28(5): 910-929, 2023.
Article in English | MEDLINE | ID: mdl-37466002

ABSTRACT

Although still-face effects are well-studied, little is known about the degree to which the Face-to-Face/Still-Face (FFSF) procedure is associated with the production of intense affective displays. Duchenne smiling expresses more intense positive affect than non-Duchenne smiling, while Duchenne cry-faces express more intense negative affect than non-Duchenne cry-faces. Forty 4-month-old infants and their mothers completed the FFSF, and key affect-indexing facial Action Units (AUs) were coded by expert Facial Action Coding System coders for the first 30 s of each FFSF episode. Computer vision software, automated facial affect recognition (AFAR), identified AUs for the entire 2-min episodes. Expert coding and AFAR produced similar infant and mother Duchenne and non-Duchenne FFSF effects, highlighting the convergent validity of automated measurement. Substantive AFAR analyses indicated that both infant Duchenne and non-Duchenne smiling declined from the face-to-face (FF) episode to the still-face (SF) episode, but only Duchenne smiling increased from the SF episode to the reunion (RE) episode. Similarly, the magnitude of change in mother Duchenne smiling across the FFSF was 2-4 times greater than that in non-Duchenne smiling. Duchenne expressions appear to be a sensitive index of intense infant and mother affective valence that is accessible to automated measurement, and may be a target for future FFSF research.
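A minimal sketch of the AU logic behind these contrasts, assuming per-frame 0/1 AU occurrence streams (toy arrays below) and the standard FACS convention that a Duchenne smile combines AU6 (cheek raiser) with AU12 (lip-corner puller):

```python
import numpy as np

# Per-frame AU occurrence (0/1) streams, e.g., from an AFAR-style detector.
# These arrays are toy placeholders, not AFAR's output format.
au6  = np.array([0, 1, 1, 1, 0, 0, 1])   # cheek raiser
au12 = np.array([0, 0, 1, 1, 1, 0, 0])   # lip-corner puller (smile)

duchenne_smile     = (au12 == 1) & (au6 == 1)   # smile with orbicularis oculi
non_duchenne_smile = (au12 == 1) & (au6 == 0)

# Episode-level summary: proportion of frames in each smile type.
print("Duchenne:", duchenne_smile.mean(),
      "non-Duchenne:", non_duchenne_smile.mean())
```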


Subject(s)
Facial Expression, Mothers, Female, Humans, Infant, Mothers/psychology, Smiling/psychology, Software
4.
Behav Res Methods ; 55(3): 1024-1035, 2023 Apr.
Article in English | MEDLINE | ID: mdl-35538295

ABSTRACT

Automated detection of facial action units in infants is challenging. Infant faces have different proportions, less texture, fewer wrinkles and furrows, and unique facial actions relative to adults. For these and related reasons, action unit (AU) detectors that are trained on adult faces may generalize poorly to infant faces. To train and test AU detectors for infant faces, we trained convolutional neural networks (CNN) in adult video databases and fine-tuned these networks in two large, manually annotated, infant video databases that differ in context, head pose, illumination, video resolution, and infant age. AUs were those central to expression of positive and negative emotion. AU detectors trained in infants greatly outperformed ones trained previously in adults. Training AU detectors across infant databases afforded greater robustness to between-database differences than did training database specific AU detectors and outperformed previous state-of-the-art in infant AU detection. The resulting AU detection system, which we refer to as Infant AFAR (Automated Facial Action Recognition), is available to the research community for further testing and applications in infant emotion, social interaction, and related topics.
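A hedged sketch of the adult-pretrain/infant-fine-tune recipe in PyTorch; the ResNet-18 backbone, frozen layers, and AU count below are illustrative assumptions, not Infant AFAR's actual architecture:

```python
import torch
import torch.nn as nn
from torchvision import models

N_AUS = 9  # assumed number of target AUs; the paper's exact set may differ

# An ImageNet-pretrained backbone stands in for the adult-trained AU network.
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
net.fc = nn.Linear(net.fc.in_features, N_AUS)   # new multi-label AU head

# Freeze early layers; fine-tune only the last block and the new head,
# a common recipe when the target domain (infant faces) is smaller.
for name, p in net.named_parameters():
    p.requires_grad = name.startswith(("layer4", "fc"))

criterion = nn.BCEWithLogitsLoss()              # one sigmoid per AU
optimizer = torch.optim.Adam(
    [p for p in net.parameters() if p.requires_grad], lr=1e-4)

# One illustrative step on a dummy batch (replace with infant-video frames).
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 2, (8, N_AUS)).float()
loss = criterion(net(x), y)
loss.backward()
optimizer.step()
```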


Subject(s)
Facial Expression, Facial Recognition, Humans, Infant, Neural Networks (Computer), Emotions, Social Interaction, Factual Databases
5.
Multivariate Behav Res ; 56(5): 739-767, 2021.
Article in English | MEDLINE | ID: mdl-32530313

ABSTRACT

Head movement is an important but often overlooked component of emotion and social interaction. Examination of regularities and differences in head movements of infant-mother dyads over time and across dyads can shed light on whether and how mothers and infants alter their dynamics over the course of an interaction to adapt to each other. One way to study these emergent differences in dynamics is to allow the parameters that govern the patterns of interaction to change over time and according to person- and dyad-specific characteristics. Using two estimation approaches to implement variations of a vector-autoregressive model with time-varying coefficients, we investigated the dynamics of automatically tracked head movements in mothers and infants during the Face-to-Face/Still-Face Procedure (SFP) with 24 infant-mother dyads. The first approach requires specification of a confirmatory model for the time-varying parameters as part of a state-space model, whereas the second approach handles the time-varying parameters in a semi-parametric ("mostly" model-free) fashion within a generalized additive modeling framework. Results suggested that infant-mother head movement dynamics varied in time both within and across episodes of the SFP, and varied based on infants' subsequently assessed attachment security. Code for implementing the time-varying vector-autoregressive model using two R packages, dynr and mgcv, is provided.
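The paper's implementations use the R packages dynr and mgcv; as a rough stand-in, the Python sketch below estimates VAR(1) coefficients in a rolling window over synthetic movement series, so the coefficients are free to drift over time (window size and data are assumptions):

```python
import numpy as np

def tv_var1(infant, mother, window=100):
    """Rolling-window VAR(1): a crude stand-in for the time-varying
    coefficients that dynr (state-space) or mgcv (GAM) estimate."""
    y = np.column_stack([infant, mother])          # T x 2
    T = len(y)
    coefs = np.full((T, 2, 2), np.nan)
    for t in range(window, T):
        past, present = y[t - window:t - 1], y[t - window + 1:t]
        X = np.hstack([past, np.ones((len(past), 1))])   # lag-1 + intercept
        B, *_ = np.linalg.lstsq(X, present, rcond=None)
        coefs[t] = B[:2].T   # coefs[t][i, j]: effect of j's past on i's present
    return coefs

rng = np.random.default_rng(1)
T = 600
infant = rng.normal(size=T).cumsum()   # placeholder head-movement series
mother = rng.normal(size=T).cumsum()
coefs = tv_var1(infant, mother)
# coefs[t, 0, 1] tracks how strongly mother's movement predicts infant's.
```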


Subject(s)
Head Movements, Mothers, Emotions, Face, Female, Humans, Infant, Mother-Child Relations
6.
Image Vis Comput ; 81: 1-14, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30524157

ABSTRACT

Facial action units (AUs) may be represented spatially, temporally, and in terms of their correlation. Previous research focuses on one or another of these aspects or addresses them disjointly. We propose a hybrid network architecture that jointly models spatial and temporal representations and their correlation. In particular, we use a Convolutional Neural Network (CNN) to learn spatial representations, and a Long Short-Term Memory (LSTM) network to model temporal dependencies among them. The outputs of the CNNs and LSTMs are aggregated into a fusion network to produce per-frame predictions of multiple AUs. The hybrid network was compared to previous state-of-the-art approaches in two large FACS-coded video databases, GFT and BP4D, with over 400,000 AU-coded frames of spontaneous facial behavior in varied social contexts. Relative to standard multi-label CNN and feature-based state-of-the-art approaches, the hybrid system reduced person-specific biases and obtained increased accuracy for AU detection. To address class imbalance within and between batches during network training, we introduce multi-labeling sampling strategies that further increase accuracy when AUs are relatively sparse. Finally, we provide visualization of the learned AU models, which, to the best of our knowledge, reveal for the first time how machines see AUs.
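A minimal PyTorch sketch of the described CNN-to-LSTM-to-fusion pipeline for per-frame multi-label AU prediction; layer sizes and the AU count are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

class HybridAUNet(nn.Module):
    """Sketch of a CNN -> LSTM -> fusion pipeline for per-frame multi-label
    AU prediction. All layer sizes are illustrative assumptions."""
    def __init__(self, n_aus=12, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                 # spatial representation
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU())
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)  # temporal
        self.fusion = nn.Linear(feat_dim + hidden, n_aus)  # fuse both streams

    def forward(self, clips):                     # clips: (B, T, 3, H, W)
        B, T = clips.shape[:2]
        spatial = self.cnn(clips.flatten(0, 1)).view(B, T, -1)
        temporal, _ = self.lstm(spatial)
        return self.fusion(torch.cat([spatial, temporal], dim=-1))

logits = HybridAUNet()(torch.randn(2, 8, 3, 64, 64))  # (B, T, n_aus) logits
```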

7.
Cleft Palate Craniofac J ; 55(5): 711-720, 2018 May.
Article in English | MEDLINE | ID: mdl-29377723

ABSTRACT

OBJECTIVE: To compare facial expressiveness (FE) of infants with and without craniofacial macrosomia (cases and controls, respectively) and to compare phenotypic variation among cases in relation to FE. DESIGN: Positive and negative affect was elicited in response to standardized emotion inductions, video recorded, and manually coded from video using the Facial Action Coding System for Infants and Young Children. SETTING: Five craniofacial centers: Children's Hospital of Los Angeles, Children's Hospital of Philadelphia, Seattle Children's Hospital, University of Illinois-Chicago, and University of North Carolina-Chapel Hill. PARTICIPANTS: Eighty ethnically diverse 12- to 14-month-old infants. MAIN OUTCOME MEASURES: FE was measured on a frame-by-frame basis as the sum of 9 observed facial action units (AUs) representative of positive and negative affect. RESULTS: FE differed between conditions intended to elicit positive and negative affect (95% confidence interval = 0.09-0.66, P = .01). FE failed to differ between cases and controls (ES = -0.16 to -0.02, P = .47 to .92). Among cases, those with and without mandibular hypoplasia showed similar levels of FE (ES = -0.38 to 0.54, P = .10 to .66). CONCLUSIONS: FE varied between positive and negative affect, and cases and controls responded similarly. Null findings for case/control differences may be attributable to a lower than anticipated prevalence of nerve palsy among cases, the selection of AUs, or the use of manual coding. In future research, we will reexamine group differences using an automated, computer vision approach that can cover a broader range of facial movements and their dynamics.
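Since the outcome measure is a frame-wise sum of AU indicators, it reduces to a one-line computation; a tiny sketch with a random placeholder AU matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
# Per-frame occurrence (0/1) of the 9 affect-indexing AUs (columns);
# random placeholder data standing in for manually coded video.
au_frames = rng.integers(0, 2, size=(300, 9))

fe = au_frames.sum(axis=1)   # facial expressiveness: 0-9 per frame
print("mean FE across the episode:", fe.mean())
```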


Subject(s)
Craniofacial Abnormalities/physiopathology, Facial Asymmetry/physiopathology, Facial Expression, Facial Paralysis/physiopathology, Case-Control Studies, Emotions, Female, Humans, Infant, Male, Phenotype, Single-Blind Method, Video Recording
8.
Int J Comput Vis ; 123(3): 372-391, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28943718

ABSTRACT

Event discovery aims to discover a temporal segment of interest, such as human behavior, actions or activities. Most approaches to event discovery within or between time series use supervised learning. This becomes problematic when some relevant event labels are unknown, are difficult to detect, or not all possible combinations of events have been anticipated. To overcome these problems, this paper explores Common Event Discovery (CED), a new problem that aims to discover common events of variable-length segments in an unsupervised manner. A potential solution to CED is searching over all possible pairs of segments, which would incur a prohibitive quartic cost. In this paper, we propose an efficient branch-and-bound (B&B) framework that avoids exhaustive search while guaranteeing a globally optimal solution. To this end, we derive novel bounding functions for various commonality measures and provide extensions to multiple commonality discovery and accelerated search. The B&B framework takes as input any multidimensional signal that can be quantified into histograms. A generalization of the framework can be readily applied to discover events at the same or different times (synchrony and event commonality, respectively). We consider extensions to video search and supervised event detection. The effectiveness of the B&B framework is evaluated in motion capture of deliberate behavior and in video of spontaneous facial behavior in diverse interpersonal contexts: interviews, small groups of young adults, and parent-infant face-to-face interaction.
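For orientation, the quartic-cost exhaustive search that the B&B framework prunes can be written directly; a sketch assuming pre-quantized 1-D signals and histogram intersection as the commonality measure:

```python
import numpy as np

def histogram(seq, n_bins):
    h = np.bincount(seq, minlength=n_bins).astype(float)
    return h / max(h.sum(), 1.0)

def commonality(h1, h2):
    return np.minimum(h1, h2).sum()   # histogram intersection, in [0, 1]

def naive_ced(x, y, n_bins, min_len, max_len):
    """Exhaustively score all variable-length segment pairs -- the
    quartic-cost baseline that the B&B framework avoids."""
    best, best_pair = -1.0, None
    for i in range(len(x)):
        for j in range(i + min_len, min(i + max_len, len(x)) + 1):
            hx = histogram(x[i:j], n_bins)
            for k in range(len(y)):
                for l in range(k + min_len, min(k + max_len, len(y)) + 1):
                    c = commonality(hx, histogram(y[k:l], n_bins))
                    if c > best:
                        best, best_pair = c, ((i, j), (k, l))
    return best, best_pair

rng = np.random.default_rng(8)
x = rng.integers(0, 6, 40)                       # pre-quantized signal 1
y = np.concatenate([rng.integers(0, 6, 15),      # signal 2 shares x[10:22]
                    x[10:22],
                    rng.integers(0, 6, 13)])
print(naive_ced(x, y, n_bins=6, min_len=8, max_len=14))
```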

9.
Image Vis Comput ; 58: 13-24, 2017 Feb.
Article in English | MEDLINE | ID: mdl-29731533

ABSTRACT

To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person's face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of landmarks and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction, extension to multi-view reconstruction, temporal integration for videos and 3D head-pose estimation. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org.
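A toy sketch of the cascaded-regression idea (a sequence of learned regressors mapping shape-indexed features to shape updates); the "images" here are synthetic and the features are a stand-in for real appearance descriptors, so this illustrates the training loop only, not the paper's system:

```python
import numpy as np

rng = np.random.default_rng(0)
L, D, STAGES, N = 5, 32, 4, 400   # landmarks, feature dim, stages, samples

true_shapes = rng.normal(size=(N, 2 * L))        # flattened (x, y) landmarks
A = rng.normal(size=(2 * L, D)) / np.sqrt(2 * L) # hidden feature map

def features(current, true):
    # Toy appearance features: informative about the misalignment between
    # the current estimate and the true shape, as patch descriptors sampled
    # around wrongly placed landmarks would be in a real image.
    return (true - current) @ A + 0.05 * rng.normal(size=(len(true), D))

# Train: start every face at the mean shape, then learn one linear
# regressor per stage mapping features to a shape update.
mean_shape = true_shapes.mean(axis=0)
current = np.tile(mean_shape, (N, 1))
for stage in range(STAGES):
    F = features(current, true_shapes)
    R, *_ = np.linalg.lstsq(F, true_shapes - current, rcond=None)
    current = current + F @ R
    print(f"stage {stage}: mean error "
          f"{np.abs(true_shapes - current).mean():.4f}")
```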

10.
Pattern Recognit Lett ; 66: 13-21, 2015 Nov 15.
Article in English | MEDLINE | ID: mdl-26461205

ABSTRACT

Both the occurrence and intensity of facial expressions are critical to what the face reveals. While much progress has been made toward the automatic detection of facial expression occurrence, controversy exists about how to estimate expression intensity. The most straightforward approach is to train multiclass or regression models using intensity ground truth. However, collecting intensity ground truth is even more time consuming and expensive than collecting binary ground truth. As a shortcut, some researchers have proposed using the decision values of binary-trained maximum-margin classifiers as a proxy for expression intensity. We provide empirical evidence that this heuristic is flawed in practice as well as in theory. Unfortunately, there are no shortcuts when it comes to estimating smile intensity: researchers must take the time to collect and train on intensity ground truth. However, if they do so, high reliability with expert human coders can be achieved. Intensity-trained multiclass and regression models outperformed binary-trained classifier decision values on smile intensity estimation across multiple databases and methods for feature extraction and dimensionality reduction. Multiclass models even outperformed binary-trained classifiers on smile occurrence detection.
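A minimal harness, assuming scikit-learn and synthetic data, for the comparison the paper advocates: correlate binary-trained decision values and intensity-trained predictions against intensity ground truth. On an idealized linear toy like this both proxies may track intensity; the paper's empirical point is that on real expression data the binary-trained proxy breaks down:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)

# Synthetic latent "smile intensity", noisy features, and derived labels.
n, d = 2000, 20
X = rng.normal(size=(n, d))
intensity = X @ rng.normal(size=d) + rng.normal(scale=2.0, size=n)
occurrence = (intensity > 0).astype(int)
X_tr, X_te = X[:1000], X[1000:]

# Heuristic under test: binary-trained margin distances as intensity proxy.
svc = LinearSVC(max_iter=5000).fit(X_tr, occurrence[:1000])
proxy = svc.decision_function(X_te)

# The recommended route: train directly on intensity ground truth.
direct = Ridge().fit(X_tr, intensity[:1000]).predict(X_te)

rho_proxy, _ = spearmanr(proxy, intensity[1000:])
rho_direct, _ = spearmanr(direct, intensity[1000:])
print(f"proxy rho = {rho_proxy:.3f}, intensity-trained rho = {rho_direct:.3f}")
```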

11.
Behav Res Methods ; 47(4): 1136-1147, 2015 Dec.
Article in English | MEDLINE | ID: mdl-25488104

ABSTRACT

Methods to assess individual facial actions have potential to shed light on important behavioral phenomena ranging from emotion and social interaction to psychological disorders and health. However, manual coding of such actions is labor intensive and requires extensive training. To date, establishing reliable automated coding of unscripted facial actions has been a daunting challenge impeding development of psychological theories and applications requiring facial expression assessment. It is therefore essential that automated coding systems be developed with enough precision and robustness to ease the burden of manual coding in challenging data involving variation in participant gender, ethnicity, head pose, speech, and occlusion. We report a major advance in automated coding of spontaneous facial actions during an unscripted social interaction involving three strangers. For each participant (n = 80, 47% women, 15% Nonwhite), 25 facial action units (AUs) were manually coded from video using the Facial Action Coding System. Twelve AUs occurred more than 3% of the time and were processed using automated FACS coding. Automated coding showed very strong reliability for the proportion of time that each AU occurred (mean intraclass correlation = 0.89), and the more stringent criterion of frame-by-frame reliability was moderate to strong (mean Matthews correlation = 0.61). With few exceptions, differences in AU detection related to gender, ethnicity, pose, and average pixel intensity were small. Fewer than 6% of frames could be coded manually but not automatically. These findings suggest that automated FACS coding has progressed sufficiently to be applied to observational research in emotion and related areas of study.
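A sketch of the two reliability criteria reported, with ICC(2,1) implemented from the standard Shrout-Fleiss formula and frame-level agreement via scikit-learn's Matthews correlation; the data are random placeholders:

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: (n_targets, k_raters), e.g., per-session AU base rates from
    manual vs. automated coding."""
    n, k = ratings.shape
    grand = ratings.mean()
    ms_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    resid = (ratings - ratings.mean(axis=1, keepdims=True)
             - ratings.mean(axis=0, keepdims=True) + grand)
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

rng = np.random.default_rng(4)
manual = rng.random(30)                       # per-session AU proportions
auto = manual + rng.normal(scale=0.05, size=30)
print("ICC(2,1):", icc_2_1(np.column_stack([manual, auto])))

# Frame-by-frame agreement for one AU via the Matthews correlation.
y_manual = rng.integers(0, 2, 5000)
y_auto = np.where(rng.random(5000) < 0.9, y_manual, 1 - y_manual)
print("MCC:", matthews_corrcoef(y_manual, y_auto))
```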


Subject(s)
Emotions/physiology, Facial Expression, Interpersonal Relations, Face, Female, Humans, Male, Reproducibility of Results, Video Recording, Young Adult
12.
Image Vis Comput ; 32(10): 641-647, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25378765

ABSTRACT

The relationship between nonverbal behavior and severity of depression was investigated by following depressed participants over the course of treatment and video recording a series of clinical interviews. Facial expressions and head pose were analyzed from video using manual and automatic systems. Both systems were highly consistent for FACS action units (AUs) and showed similar effects for change over time in depression severity. When symptom severity was high, participants made fewer affiliative facial expressions (AUs 12 and 15) and more non-affiliative facial expressions (AU 14). Participants also exhibited diminished head motion (i.e., amplitude and velocity) when symptom severity was high. These results are consistent with the Social Withdrawal hypothesis: that depressed individuals use nonverbal behavior to maintain or increase interpersonal distance. As individuals recover, they send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and revealed the same pattern of findings suggests that automatic facial expression analysis may be ready to relieve the burden of manual coding in behavioral and clinical science.

13.
J Affect Disord ; 366: 290-299, 2024 Aug 24.
Article in English | MEDLINE | ID: mdl-39187178

ABSTRACT

BACKGROUND: Approximately 10% of mothers experience depression each year, which increases risk for depression in offspring. To date, no research has analysed the linguistic features of depressed mothers and their adolescent offspring during dyadic interactions. We examined the extent to which linguistic features of mothers' and adolescents' speech during dyadic interactional tasks could discriminate depressed from non-depressed mothers. METHODS: Computer-assisted linguistic analysis (Linguistic Inquiry and Word Count; LIWC) was applied to transcripts of low-income mother-adolescent dyads (N = 151) performing a lab-based problem-solving interaction task. One-way multivariate analyses were conducted to determine which linguistic features hypothesized to relate to maternal depressive status differed significantly in frequency between depressed and non-depressed mothers, and between higher- and lower-risk offspring. Logistic regression analyses were performed to classify dyads into the two groups. RESULTS: Linguistic features in mothers' and their adolescent offspring's speech during problem-solving interactions discriminated maternal depression status. Many, but not all, effects were consistent with those identified in previous research using primarily written text, highlighting the validity and reliability of language behaviour associated with depressive symptomatology across lab-based and natural environmental contexts. LIMITATIONS: Our analyses do not enable us to ascertain how mothers' language behaviour may have influenced their offspring's communication patterns. We also cannot say how or whether these findings generalize to other contexts or populations. CONCLUSION: The findings extend the existing literature on linguistic features of depression by indicating that mothers' depression is associated with linguistic behaviour during mother-adolescent interaction.
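A hedged sketch of the classification step, assuming scikit-learn and a placeholder LIWC feature table (the column names are illustrative LIWC categories, not the study's tested features, and the data are random):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Hypothetical per-dyad LIWC feature table with random placeholder values.
cols = ["i", "negemo", "posemo", "sad", "certain", "social"]
X = pd.DataFrame(rng.normal(size=(151, len(cols))), columns=cols)
y = rng.integers(0, 2, 151)          # 1 = depressed mother, 0 = not

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
print(f"cross-validated accuracy: {acc:.2f}")  # ~chance on random data
```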

14.
Oper Neurosurg (Hagerstown) ; 27(3): 329-336, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39145663

ABSTRACT

BACKGROUND AND OBJECTIVES: Recent advances in stereotactic and functional neurosurgery have brought forth the stereo-electroencephalography (sEEG) approach, which allows deeper interrogation and characterization of the contributions of deep structures to neural and affective functioning. We argue that this approach can and should be brought to bear on the notoriously intractable issue of defining the pathophysiology of refractory psychiatric disorders and developing patient-specific optimized stimulation therapies. METHODS: We have developed a suite of methods for maximally leveraging the sEEG approach for an innovative application to understanding affective disorders, with high translatability across the broader range of refractory neuropsychiatric conditions. RESULTS: This article provides a roadmap for determining desired electrode coverage, tracking high-resolution research recordings across a large number of electrodes, synchronizing intracranial signals with ongoing research tasks and other data streams, applying intracranial stimulation during recording, and making design choices for patient comfort and safety. CONCLUSION: These methods can be implemented across other neuropsychiatric conditions that require intensive electrophysiological characterization to define biomarkers and more effectively guide therapeutic decision-making in cases of severe and treatment-refractory disease.


Subject(s)
Electroencephalography, Mental Disorders, Stereotaxic Techniques, Humans, Mental Disorders/therapy, Mental Disorders/physiopathology, Electroencephalography/methods, Deep Brain Stimulation/methods, Neurophysiological Monitoring/methods
15.
IEEE Trans Affect Comput ; 14(1): 133-152, 2023.
Article in English | MEDLINE | ID: mdl-36938342

ABSTRACT

Given the prevalence of depression worldwide and its major impact on society, several studies have employed artificial intelligence modelling to automatically detect and assess depression. However, interpretation of these models and their cues is rarely discussed in detail in the AI community, although it has received increased attention lately. In this study, we analyse the features most commonly selected by a proposed framework of several feature selection methods, and their effect on classification results, to provide an interpretation of the depression detection model. The developed framework aggregates and selects the most promising features for modelling depression detection from 38 feature selection algorithms of different categories. Using three real-world depression datasets, 902 behavioural cues were extracted from speech behaviour, speech prosody, eye movement, and head pose. To verify the generalisability of the proposed framework, we applied the entire process to the depression datasets individually and combined. The results from the proposed framework showed that speech behaviour features (e.g., pauses) are the most distinctive features of the depression detection model. From the speech prosody modality, the strongest feature groups were F0, HNR, formants, and MFCC; for the eye activity modality, they were left-right eye movement and gaze direction; and for the head modality, it was yaw head movement. Models built on the selected features (only 9 in total) outperformed models using all features on all the individual and combined datasets. Our feature selection framework thus not only provided an interpretation of the model but also achieved higher depression-detection accuracy with a small number of features across varied datasets, which could reduce the processing time needed to extract features and build the model.
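A scaled-down sketch of rank aggregation across heterogeneous feature selectors, assuming scikit-learn and SciPy; the paper aggregates 38 selectors over real behavioural cues, whereas this toy aggregates four selectors on synthetic data and keeps a top-9 set to mirror the paper's selected-feature count:

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif, mutual_info_classif
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=40, n_informative=6,
                           random_state=0)

# Four selectors from different families (filter, embedded, tree-based).
scores = [
    f_classif(X, y)[0],
    mutual_info_classif(X, y, random_state=0),
    np.abs(LogisticRegression(max_iter=2000).fit(X, y).coef_).ravel(),
    RandomForestClassifier(random_state=0).fit(X, y).feature_importances_,
]

# Rank-aggregate: average each feature's rank across selectors, keep top 9.
mean_rank = np.mean([rankdata(-s) for s in scores], axis=0)
top9 = np.argsort(mean_rank)[:9]
print("selected feature indices:", sorted(top9))
```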

16.
J Affect Disord ; 333: 543-552, 2023 Jul 15.
Article in English | MEDLINE | ID: mdl-37121279

ABSTRACT

BACKGROUND: Expert consensus guidelines recommend Cognitive Behavioral Therapy (CBT) and Interpersonal Psychotherapy (IPT), interventions that were historically delivered face-to-face, as first-line treatments for Major Depressive Disorder (MDD). Despite the ubiquity of telehealth following the COVID-19 pandemic, little is known about differential outcomes with CBT versus IPT delivered in-person (IP) or via telehealth (TH) or whether working alliance is affected. METHODS: Adults meeting DSM-5 criteria for MDD were randomly assigned to either 8 sessions of IPT or CBT (group). Mid-trial, COVID-19 forced a change of therapy delivery from IP to TH (study phase). We compared changes in Hamilton Rating Scale for Depression (HRSD-17) and Working Alliance Inventory (WAI) scores for individuals by group and phase: CBT-IP (n = 24), CBT-TH (n = 11), IPT-IP (n = 25) and IPT-TH (n = 17). RESULTS: HRSD-17 scores declined significantly from pre to post treatment (pre: M = 17.7, SD = 4.4 vs. post: M = 11.7, SD = 5.9; p < .001; d = 1.45) without significant group or phase effects. WAI scores did not differ by group or phase. Number of completed therapy sessions was greater for TH (M = 7.8, SD = 1.2) relative to IP (M = 7.2, SD = 1.6) (Mann-Whitney U = 387.50, z = -2.24, p = .025). LIMITATIONS: Participants were not randomly assigned to IP versus TH. Sample size is small. CONCLUSIONS: This study provides preliminary evidence supporting the efficacy of both brief IPT and CBT, delivered by either TH or IP, for depression. It showed that working alliance is preserved in TH, and delivery via TH may improve therapy adherence. Prospective, randomized controlled trials are needed to definitively test efficacy of brief IPT and CBT delivered via TH versus IP.


Subject(s)
COVID-19, Cognitive Behavioral Therapy, Major Depressive Disorder, Interpersonal Psychotherapy, Telemedicine, Adult, Humans, Depression/therapy, Major Depressive Disorder/therapy, Pandemics, Prospective Studies, Psychotherapy, Treatment Outcome
17.
J Am Acad Child Adolesc Psychiatry ; 62(9): 1010-1020, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37182586

ABSTRACT

OBJECTIVE: Suicide is a leading cause of death among adolescents. However, there are no clinical tools to detect proximal risk for suicide. METHOD: Participants included 13- to 18-year-old adolescents (N = 103) reporting a current depressive, anxiety, and/or substance use disorder who owned a smartphone; 62% reported current suicidal ideation, with 25% indicating a past-year attempt. At baseline, participants were administered clinical interviews to assess lifetime disorders and suicidal thoughts and behaviors (STBs). Self-reports assessing symptoms and suicide risk factors also were obtained. In addition, the Effortless Assessment of Risk States (EARS) app was installed on adolescent smartphones to acquire daily mood and weekly suicidal ideation severity during the 6-month follow-up period. Adolescents completed STB and psychiatric service use interviews at the 1-, 3-, and 6-month follow-up assessments. RESULTS: K-means clustering based on aggregates of weekly suicidal ideation scores resulted in a 3-group solution reflecting high-risk (n = 26), medium-risk (n = 47), and low-risk (n = 30) groups. Of the high-risk group, 58% reported suicidal events (ie, suicide attempts, psychiatric hospitalizations, emergency department visits, ideation severity requiring an intervention) during the 6-month follow-up period. For participants in the high-risk and medium-risk groups (n = 73), mood disturbances in the preceding 7 days predicted clinically significant ideation, with a 1-SD decrease in mood doubling participants' likelihood of reporting clinically significant ideation on a given week. CONCLUSION: Intensive longitudinal assessment through use of personal smartphones offers a feasible method to assess variability in adolescents' emotional experiences and suicide risk. Translating these tools into clinical practice may help to reduce the needless loss of life among adolescents.
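A sketch of the clustering step, assuming scikit-learn; aggregating each adolescent's weekly ideation scores and clustering into three groups mirrors the study's description, but the data and aggregate choices below are placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)

# Weekly suicidal-ideation severity over ~26 weeks per adolescent
# (random placeholders standing in for EARS app data).
weekly = rng.gamma(shape=1.5, scale=2.0, size=(103, 26))

# Aggregate each participant's weekly scores, then cluster into 3 groups
# (high-, medium-, low-risk) as in the study.
aggregates = np.column_stack([weekly.mean(axis=1), weekly.max(axis=1)])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(aggregates)
for g in range(3):
    print(f"group {g}: n = {(labels == g).sum()}, "
          f"mean ideation = {aggregates[labels == g, 0].mean():.2f}")
```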


Subject(s)
Suicidal Ideation, Attempted Suicide, Humans, Adolescent, Attempted Suicide/prevention & control, Attempted Suicide/psychology, Mood Disorders, Anxiety Disorders, Risk Factors
18.
Psychol Sci ; 23(8): 869-878, 2012 Aug 01.
Article in English | MEDLINE | ID: mdl-22760882

ABSTRACT

We integrated research on emotion and on small groups to address a fundamental and enduring question facing alcohol researchers: What are the specific mechanisms that underlie the reinforcing effects of drinking? In one of the largest alcohol-administration studies yet conducted, we employed a novel group-formation paradigm to evaluate the socioemotional effects of alcohol. Seven hundred twenty social drinkers (360 male, 360 female) were assembled into groups of 3 unacquainted persons each and given a moderate dose of an alcoholic, placebo, or control beverage, which they consumed over 36 min. These groups' social interactions were video recorded, and the duration and sequence of interaction partners' facial and speech behaviors were systematically coded (e.g., using the facial action coding system). Alcohol consumption enhanced individual- and group-level behaviors associated with positive affect, reduced individual-level behaviors associated with negative affect, and elevated self-reported bonding. Our results indicate that alcohol facilitates bonding during group formation. Assessing nonverbal responses in social contexts offers new directions for evaluating the effects of alcohol.


Subject(s)
Alcohol Drinking/psychology, Central Nervous System Depressants/pharmacology, Emotions/drug effects, Ethanol/pharmacology, Interpersonal Relations, Object Attachment, Social Behavior, Adult, Alcoholic Beverages, Facial Expression, Female, Humans, Male, Random Allocation, Young Adult
19.
Proc ACM Int Conf Multimodal Interact ; 2022: 487-494, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36913231

ABSTRACT

The relationship between a therapist and their client is one of the most critical determinants of successful therapy. The working alliance is a multifaceted concept capturing the collaborative aspect of the therapist-client relationship; a strong working alliance has been extensively linked to many positive therapeutic outcomes. Although therapy sessions are decidedly multimodal interactions, the language modality is of particular interest given its recognized relationship to similar dyadic concepts such as rapport, cooperation, and affiliation. Specifically, in this work we study language entrainment, which measures how much the therapist and client adapt toward each other's use of language over time. Despite the growing body of work in this area, however, relatively few studies examine causal relationships between human behavior and these relationship metrics: does an individual's perception of their partner affect how they speak, or does how they speak affect their perception? We explore these questions in this work through the use of structural equation modeling (SEM) techniques, which allow for both multilevel and temporal modeling of the relationship between the quality of the therapist-client working alliance and the participants' language entrainment. In our first experiment, we demonstrate that these techniques perform well in comparison to other common machine learning models, with the added benefits of interpretability and causal analysis. In our second analysis, we interpret the learned models to examine the relationship between working alliance and language entrainment and address our exploratory research questions. The results reveal that a therapist's language entrainment can have a significant impact on the client's perception of the working alliance, and that the client's language entrainment is a strong indicator of their perception of the working alliance. We discuss the implications of these results and consider several directions for future work in multimodality.
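The paper fits multilevel, temporal SEMs; as a simplified stand-in, the sketch below estimates cross-lagged panel regressions with random intercepts per dyad in statsmodels, one equation per direction of influence (all data, variable names, and the lag structure are assumptions):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Placeholder session-level series: per-session language entrainment and
# client-rated working alliance for many dyads (random, illustrative data).
n_dyads, n_sessions = 50, 8
df = pd.DataFrame({
    "dyad": np.repeat(range(n_dyads), n_sessions),
    "session": np.tile(range(n_sessions), n_dyads),
    "entrain": rng.normal(size=n_dyads * n_sessions),
    "alliance": rng.normal(size=n_dyads * n_sessions),
})
lagged = df.groupby("dyad")[["entrain", "alliance"]].shift(1)
df["entrain_lag"], df["alliance_lag"] = lagged["entrain"], lagged["alliance"]
df = df.dropna()

# Cross-lagged regressions with random intercepts per dyad: does lagged
# entrainment predict alliance, and does lagged alliance predict entrainment?
m1 = smf.mixedlm("alliance ~ alliance_lag + entrain_lag", df,
                 groups=df["dyad"]).fit()
m2 = smf.mixedlm("entrain ~ entrain_lag + alliance_lag", df,
                 groups=df["dyad"]).fit()
print(m1.params["entrain_lag"], m2.params["alliance_lag"])
```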

20.
Article in English | MEDLINE | ID: mdl-39161704

ABSTRACT

This preliminary study applied computer-assisted quantitative linguistic analysis to examine the effectiveness of language-based classification models in discriminating between mothers (n = 140) with and without a history of treatment for depression (51% and 49%, respectively). Mothers were recorded during a problem-solving interaction with their adolescent child. Transcripts were manually annotated and analyzed using a dictionary-based natural-language analysis program (Linguistic Inquiry and Word Count). To assess the importance of linguistic features for correctly classifying history of depression, we used Support Vector Machines (SVMs) with interpretable features. Using linguistic features identified in the empirical literature, an initial SVM achieved nearly 63% accuracy. A second SVM using only the top 5 features ranked by SHAP (SHapley Additive exPlanations) values improved accuracy to 67.15%. The findings extend the existing literature on the language behavior associated with depressed mood states, with a focus on the linguistic style of mothers with and without a history of treatment for depression and its potential impact on child development and the transgenerational transmission of depression.
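A hedged sketch of the SVM-plus-SHAP pipeline, assuming the scikit-learn and shap packages with placeholder data in lieu of the LIWC features; the model-agnostic KernelExplainer is one way to obtain SHAP values for an SVM decision function:

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Placeholder stand-in for the LIWC feature matrix (140 mothers).
X, y = make_classification(n_samples=140, n_features=20, n_informative=5,
                           random_state=0)

svm = SVC(kernel="linear").fit(X, y)

# Model-agnostic SHAP values over the SVM decision function; a small
# background sample keeps KernelExplainer tractable.
explainer = shap.KernelExplainer(svm.decision_function, shap.sample(X, 30))
shap_values = explainer.shap_values(X[:50])

# Rank features by mean |SHAP| and keep the top 5, mirroring the paper's
# second, reduced-feature SVM.
top5 = np.argsort(np.abs(shap_values).mean(axis=0))[::-1][:5]
print("top-5 feature indices:", top5)
```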
