1.
Implement Res Pract ; 4: 26334895231187906, 2023.
Article in English | MEDLINE | ID: mdl-37790171

ABSTRACT

Background: Evidence-based parenting programs effectively prevent the onset and escalation of child and adolescent behavioral health problems. When programs have been taken to scale, declines in the quality of implementation diminish intervention effects. Gold-standard methods of implementation monitoring are cost-prohibitive and impractical in resource-scarce delivery systems. Technological developments in computational linguistics and machine learning offer an opportunity to assess fidelity in a low-burden, timely, and comprehensive manner.

Methods: In this study, we test two natural language processing (NLP) methods [i.e., Term Frequency-Inverse Document Frequency (TF-IDF) and Bidirectional Encoder Representations from Transformers (BERT)] to assess the delivery of the Family Check-Up 4 Health (FCU4Health) program in a type 2 hybrid effectiveness-implementation trial conducted in primary care settings that serve primarily Latino families. We trained and evaluated models using 116 English and 81 Spanish-language transcripts from the 113 families who initiated FCU4Health services. We evaluated the concurrent validity of the TF-IDF and BERT models against observer ratings of program sessions on the COACH measure of competent adherence. Following the Implementation Cascade model, we assessed predictive validity using multiple indicators of parent engagement, which have been demonstrated to predict improvements in parenting and child outcomes.

Results: Both TF-IDF and BERT ratings were significantly associated with observer ratings and engagement outcomes. Using mean squared error, results demonstrated improvement over baseline for observer ratings from a range of 0.83-1.02 to 0.62-0.76, an average improvement of 24%. Similarly, results demonstrated improvement over baseline for parent engagement indicators from a range of 0.81-27.3 to 0.62-19.50, an approximate average improvement of 18%.
Conclusions: These results demonstrate the potential for NLP methods to assess implementation in evidence-based parenting programs delivered at scale. Future directions are presented. Trial registration: NCT03013309 ClinicalTrials.gov.


Research has shown that evidence-based parenting programs effectively prevent the onset and escalation of child and adolescent behavioral health problems. However, if they are not implemented with fidelity, they may not produce the same effects. Gold-standard methods of implementation monitoring include observation of program sessions, which is expensive and difficult to implement in delivery settings with limited resources. Using data from a trial of the Family Check-Up 4 Health program in primary care settings that served Latino families, we investigated the potential of a form of machine learning called natural language processing (NLP) to monitor program delivery. NLP-based ratings were significantly associated with independent observer ratings of fidelity and with participant engagement outcomes. These results demonstrate the potential for NLP methods to monitor implementation in evidence-based parenting programs delivered at scale.
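The study's pipeline and data are not reproduced here, but the TF-IDF half of the approach can be illustrated with a minimal, self-contained sketch. The transcripts and COACH-style fidelity ratings below are hypothetical placeholders, and the similarity-weighted predictor is a toy stand-in for whatever regression model the authors actually trained on TF-IDF features:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute plain (unsmoothed) TF-IDF vectors for tokenized documents."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # document frequency per term
    vocab = sorted(df)
    idf = {t: math.log(n / df[t]) for t in vocab}
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append([tf[t] / len(doc) * idf[t] for t in vocab])
    return vocab, vectors

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical session transcripts (tokenized) and observer fidelity ratings.
transcripts = [
    "let's review your goals for this week".split(),
    "tell me about your week how did it go".split(),
    "today we will practice the skill together".split(),
]
ratings = [0.8, 0.6, 0.9]  # hypothetical COACH-style scores

def predict(new_doc):
    """Similarity-weighted average of training ratings: a toy stand-in
    for the regression model the study would actually fit."""
    _, vecs = tfidf_vectors(transcripts + [new_doc])
    new_vec, train_vecs = vecs[-1], vecs[:-1]
    sims = [max(cosine(new_vec, v), 0.0) for v in train_vecs]
    if sum(sims) == 0:
        return sum(ratings) / len(ratings)
    return sum(s * r for s, r in zip(sims, ratings)) / sum(sims)
```

Because the prediction is a convex combination of the training ratings, it always falls within their range; a real system would instead fit a supervised model against observer ratings on held-out transcripts.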

2.
Couns Psychother Res ; 23(1): 258-269, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36873916

ABSTRACT

Psychotherapy is a conversation; at its foundation, many interventions are delivered through the therapist's talk. Research suggests that the voice can convey a variety of emotional and social information, and individuals may change their voice based on the context and content of the conversation (e.g., talking to a baby or delivering difficult news to patients with cancer). As such, therapists may adjust aspects of their voice throughout a therapy session depending on whether they are beginning the session and checking in with a client, conducting more therapeutic "work," or ending the session. In this study, we modeled three vocal features (pitch, energy, and rate) with linear and quadratic multilevel models to understand how therapists' vocal features change throughout a therapy session. We hypothesized that all three vocal features would be best fit by a quadratic function: starting high and more congruent with a conversational voice, decreasing during the middle portions of therapy where more therapeutic interventions were being administered, and increasing again at the end of the session. Results indicated that a quadratic model fit the data better than a linear model for all three vocal features, suggesting that therapists begin and end therapy using a different style of voice than in the middle of a session.
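The core model comparison (linear versus quadratic growth over session time) can be sketched without the study's data. The per-minute pitch values below are invented to show the U-shaped pattern the abstract describes, and ordinary least squares stands in for the multilevel models the authors actually fit:

```python
def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via normal equations, solved with
    Gaussian elimination (no external libraries)."""
    m = degree + 1
    # X^T X and X^T y for the Vandermonde design matrix.
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):                      # forward elimination, partial pivoting
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * m                        # back substitution
    for i in range(m - 1, -1, -1):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j] for j in range(i + 1, m))) / A[i][i]
    return coeffs

def sse(xs, ys, coeffs):
    """Sum of squared residuals under a polynomial model."""
    return sum((y - sum(c * x ** i for i, c in enumerate(coeffs))) ** 2
               for x, y in zip(xs, ys))

# Hypothetical mean pitch (Hz) at 5-minute intervals of a 50-minute session:
# higher at the start and end, lower in the middle, as the study found.
minutes = list(range(0, 50, 5))
pitch = [142, 135, 128, 124, 122, 121, 123, 127, 134, 141]

lin = fit_poly(minutes, pitch, 1)
quad = fit_poly(minutes, pitch, 2)
```

For U-shaped data, the quadratic fit yields a smaller residual sum of squares and a positive leading coefficient; the study additionally nested sessions within therapists, which a multilevel model handles and this sketch does not.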

3.
PLoS One ; 16(4): e0249957, 2021.
Article in English | MEDLINE | ID: mdl-33831109

ABSTRACT

Film music varies tremendously across genre in order to bring about different responses in an audience. For instance, composers may evoke passion in a romantic scene with lush string passages or inspire fear throughout horror films with inharmonious drones. This study investigates such phenomena through a quantitative evaluation of music that is associated with different film genres. We construct supervised neural network models with various pooling mechanisms to predict a film's genre from its soundtrack. We use these models to compare handcrafted music information retrieval (MIR) features against VGGish audio embedding features, finding similar performance with the top-performing architectures. We examine the best-performing MIR feature model through permutation feature importance (PFI), determining that mel-frequency cepstral coefficient (MFCC) and tonal features are most indicative of musical differences between genres. We investigate the interaction between musical and visual features with a cross-modal analysis, and do not find compelling evidence that music characteristic of a certain genre implies low-level visual features associated with that genre. Furthermore, we provide software code to replicate this study at https://github.com/usc-sail/mica-music-in-media. This work adds to our understanding of music's use in multi-modal contexts and offers the potential for future inquiry into human affective experiences.
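Permutation feature importance (PFI), which the abstract uses to identify MFCC and tonal features as most genre-indicative, is straightforward to demonstrate. The toy data, feature names, and threshold "model" below are hypothetical; a real application would permute columns of MIR features and re-score the trained neural network:

```python
import random

def accuracy(model, X, y):
    """Fraction of rows the model classifies correctly."""
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    """PFI: accuracy drop after shuffling one feature column."""
    base = accuracy(model, X, y)
    col = [row[feature] for row in X]
    random.Random(seed).shuffle(col)          # break the feature-label link
    X_perm = [row[:feature] + [v] + row[feature + 1:] for row, v in zip(X, col)]
    return base - accuracy(model, X_perm, y)

# Hypothetical two-feature data: feature 0 (an "MFCC-like" statistic)
# determines the genre label; feature 1 is pure noise.
rng = random.Random(42)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

# Toy stand-in for the trained genre classifier.
model = lambda row: 1 if row[0] > 0.5 else 0

imp0 = permutation_importance(model, X, y, 0)  # informative feature: large drop
imp1 = permutation_importance(model, X, y, 1)  # noise feature: no drop
```

Permuting the informative feature destroys roughly half the model's accuracy, while permuting the noise feature leaves it unchanged, which is exactly the contrast PFI exploits to rank features.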


Subject(s)
Motion Pictures/classification , Music/psychology , Humans , Smart Glasses , Software , Supervised Machine Learning , Visual Perception