Results 1 - 3 of 3
1.
Article in English | MEDLINE | ID: mdl-39161456

ABSTRACT

Depression strongly impacts parents' behavior. Does parents' depression strongly affect the behavior of their children as well? To investigate this question, we compared dyadic interactions between 73 depressed and 75 non-depressed mothers and their adolescent child. Families were of low income and 84% were white. Child behavior was measured from audio-video recordings using manual annotation of verbal and nonverbal behavior by expert coders and by multimodal computational measures of facial expression, face and head dynamics, prosody, speech behavior, and linguistics. For both sets of measures, we used Support Vector Machines. For computational measures, we investigated the relative contribution of single versus multiple modalities using a novel approach to SHapley Additive exPlanations (SHAP). Computational measures outperformed manual ratings by human experts. Among individual computational measures, prosody was the most informative. SHAP reduction resulted in a four-fold decrease in the number of features and the highest performance (77% accuracy; positive and negative agreements at 75% and 76%, respectively). These findings suggest that maternal depression strongly impacts the behavior of adolescent children; differences are most revealed in prosody; and multimodal features together with SHAP reduction are most powerful.
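The following is a minimal sketch of the kind of pipeline this abstract describes: an SVM classifier whose feature set is pruned by SHAP importance before refitting. It is not the authors' code; the synthetic feature matrix, its dimensions, and the keep-one-quarter cutoff are illustrative assumptions standing in for the multimodal features and the study's actual reduction procedure.

```python
# Sketch only: SHAP-based feature reduction for an SVM classifier.
# Data, dimensions, and the 1/4 cutoff are made up for illustration.
import numpy as np
import shap
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the multimodal feature matrix (prosody, facial
# dynamics, linguistics, ...); the real features are described in the abstract.
rng = np.random.default_rng(0)
X = rng.normal(size=(148, 200))      # 148 dyads x 200 features (illustrative)
y = rng.integers(0, 2, size=148)     # 1 = depressed mother, 0 = non-depressed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Initial SVM trained on all features.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_tr, y_tr)

# Model-agnostic SHAP values via KernelExplainer (sampled background for speed),
# computed on training samples only to avoid leaking the held-out set.
background = shap.sample(X_tr, 50)
explainer = shap.KernelExplainer(lambda data: clf.predict_proba(data)[:, 1], background)
shap_values = explainer.shap_values(X_tr[:20], nsamples=200)

# Rank features by mean |SHAP| and keep the top quarter (a four-fold reduction).
importance = np.abs(shap_values).mean(axis=0)
top = np.argsort(importance)[::-1][: X.shape[1] // 4]

# Refit the SVM on the reduced feature set and evaluate on held-out dyads.
clf_small = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf_small.fit(X_tr[:, top], y_tr)
print("held-out accuracy:", clf_small.score(X_te[:, top], y_te))
```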

2.
Article in English | MEDLINE | ID: mdl-39161704

ABSTRACT

This preliminary study applied a computer-assisted quantitative linguistic analysis to examine the effectiveness of language-based classification models in discriminating between mothers (n = 140) with and without a history of treatment for depression (51% and 49%, respectively). Mothers were recorded during a problem-solving interaction with their adolescent child. Transcripts were manually annotated and analyzed using a dictionary-based natural-language processing program (Linguistic Inquiry and Word Count). To assess the importance of linguistic features for correctly classifying history of depression, we used Support Vector Machines (SVM) with interpretable features. Using linguistic features identified in the empirical literature, an initial SVM achieved nearly 63% accuracy. A second SVM using only the 5 highest-ranked SHAP features improved accuracy to 67.15%. The findings extend the existing literature on the language behavior of depressed mood states, with a focus on the linguistic style of mothers with and without a history of treatment for depression and its potential impact on child development and the trans-generational transmission of depression.
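As a rough illustration of the dictionary-based feature extraction this abstract relies on: LIWC itself is a proprietary program, so the category names and word lists below are hypothetical stand-ins, not the actual LIWC dictionaries. The resulting per-category proportions would then feed an SVM with SHAP-based feature ranking, analogous to the sketch after the first abstract.

```python
# Simplified, hypothetical stand-in for LIWC-style dictionary feature extraction.
# Categories and word lists are invented for illustration only.
import re
from collections import Counter

CATEGORIES = {
    "i_words":   {"i", "me", "my", "mine", "myself"},
    "negemo":    {"sad", "hate", "hurt", "worthless", "angry"},
    "posemo":    {"happy", "love", "nice", "good", "great"},
    "negations": {"no", "not", "never", "cannot"},
}

def liwc_style_features(transcript: str) -> dict:
    """Return each category's share of total word tokens in a transcript."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    total = max(len(tokens), 1)
    counts = Counter(tokens)
    return {
        name: sum(counts[w] for w in words) / total
        for name, words in CATEGORIES.items()
    }

print(liwc_style_features("I never feel good about it; I just feel sad."))
```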

3.
Adv Neural Inf Process Syst ; 2021(DB1): 1-20, 2021 Dec.
Article in English | MEDLINE | ID: mdl-38774625

ABSTRACT

Learning multimodal representations involves integrating information from multiple heterogeneous sources of data. It is a challenging yet crucial area with numerous real-world applications in multimedia, affective computing, robotics, finance, human-computer interaction, and healthcare. Unfortunately, multimodal research has seen limited resources to study (1) generalization across domains and modalities, (2) complexity during training and inference, and (3) robustness to noisy and missing modalities. In order to accelerate progress towards understudied modalities and tasks while ensuring real-world robustness, we release MultiBench, a systematic and unified large-scale benchmark for multimodal learning spanning 15 datasets, 10 modalities, 20 prediction tasks, and 6 research areas. MultiBench provides an automated end-to-end machine learning pipeline that simplifies and standardizes data loading, experimental setup, and model evaluation. To enable holistic evaluation, MultiBench offers a comprehensive methodology to assess (1) generalization, (2) time and space complexity, and (3) modality robustness. MultiBench introduces impactful challenges for future research, including scalability to large-scale multimodal datasets and robustness to realistic imperfections. To accompany this benchmark, we also provide a standardized implementation of 20 core approaches in multimodal learning spanning innovations in fusion paradigms, optimization objectives, and training approaches. Simply applying methods proposed in different research areas can improve the state-of-the-art performance on 9/15 datasets. Therefore, MultiBench presents a milestone in unifying disjoint efforts in multimodal machine learning research and paves the way towards a better understanding of the capabilities and limitations of multimodal models, all the while ensuring ease of use, accessibility, and reproducibility. MultiBench, our standardized implementations, and leaderboards are publicly available, will be regularly updated, and welcome input from the community.
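To make the "fusion paradigms" mentioned above concrete, here is a minimal late-fusion baseline of the kind such benchmarks standardize. This is not MultiBench's API; the model class, modality names, and feature dimensions are made up for illustration, under the assumption of fixed-length per-modality feature vectors.

```python
# Sketch of a generic late-fusion classifier: encode each modality separately,
# concatenate the encodings, then classify. Not MultiBench code; dimensions
# and modality choices below are illustrative assumptions.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, modality_dims, hidden=64, n_classes=2):
        super().__init__()
        # One small encoder per modality.
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in modality_dims
        )
        # Classification head over the concatenated encodings.
        self.head = nn.Linear(hidden * len(modality_dims), n_classes)

    def forward(self, inputs):
        # inputs: list of per-modality tensors, each of shape (batch, dim).
        encoded = [enc(x) for enc, x in zip(self.encoders, inputs)]
        return self.head(torch.cat(encoded, dim=-1))

# Illustrative batch: audio (40-d), video (35-d), and text (300-d) features.
model = LateFusionClassifier(modality_dims=[40, 35, 300])
batch = [torch.randn(8, 40), torch.randn(8, 35), torch.randn(8, 300)]
logits = model(batch)
print(logits.shape)  # -> torch.Size([8, 2])
```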
