ABSTRACT
Depression has robust natural language correlates and can increasingly be measured in language using predictive models. However, despite evidence that language use varies as a function of individual demographic features (e.g., age, gender), previous work has not systematically examined whether and how depression's association with language varies by race. We examine how race moderates the relationship between language features (i.e., first-person pronouns and negative emotions) from social media posts and self-reported depression, in a matched sample of Black and White English speakers in the United States. Our findings reveal moderating effects of race: While depression severity predicts I-usage in White individuals, it does not in Black individuals. White individuals also use more negative emotion words related to belongingness and self-deprecation. Machine learning models trained on similar amounts of data to predict depression severity performed poorly when tested on Black individuals, even when they were trained exclusively on the language of Black individuals. In contrast, analogous models tested on White individuals performed relatively well. Our study reveals surprising race-based differences in the expression of depression in natural language and highlights the need to understand these effects better, especially before language-based models for detecting psychological phenomena are integrated into clinical practice.
Subject(s)
Depression, Social Media, Humans, United States, Depression/psychology, Emotions, Language
ABSTRACT
Health risks due to preventable infections such as human papillomavirus (HPV) are exacerbated by persistent vaccine hesitancy. Due to limited sample sizes and the time required to conduct them, traditional methodologies such as surveys and interviews offer restricted insights into quickly evolving vaccine concerns. Social media platforms can serve as fertile ground for monitoring vaccine-related conversations and detecting emerging concerns in a scalable and dynamic manner. Using state-of-the-art large language models, we propose a minimally supervised end-to-end approach to identify concerns against HPV vaccination from social media posts. We detect and characterize the concerns against HPV vaccination pre- and post-2020 to understand the evolution of HPV vaccine discourse. Upon analyzing 653k HPV-related post-2020 tweets, adverse effects, personal anecdotes, and vaccine mandates emerged as the dominant themes. Compared to pre-2020, there is a shift towards personal anecdotes of vaccine injury, with a growing call for parental consent and transparency. The proposed approach provides an end-to-end system, i.e., given a collection of tweets, it returns a list of prevalent concerns, providing critical insights for crafting targeted interventions, debunking messages, and informing public health campaigns.
Subject(s)
Papillomavirus Infections, Papillomavirus Vaccines, Social Media, Vaccination, Humans, Papillomavirus Infections/prevention & control, Vaccination/psychology, Female, Vaccine Hesitancy/psychology
ABSTRACT
Mental health conversational agents (a.k.a. chatbots) are widely studied for their potential to offer accessible support to those experiencing mental health challenges. Previous surveys on the topic primarily consider papers published in either computer science or medicine, leading to a divide in understanding and hindering the sharing of beneficial knowledge between the two domains. To bridge this gap, we conduct a comprehensive literature review using the PRISMA framework, reviewing 534 papers published in both computer science and medicine. Our systematic review reveals 136 key papers on building mental health-related conversational agents with diverse characteristics of modeling and experimental design techniques. We find that computer science papers focus on LLM techniques and evaluate response quality using automated metrics, with little attention to the application, while medical papers use rule-based conversational agents and outcome metrics to measure participants' health outcomes. Based on our findings on transparency, ethics, and cultural heterogeneity in this review, we provide a few recommendations to help bridge the disciplinary divide and enable the cross-disciplinary development of mental health conversational agents.