Results 1 - 7 of 7
1.
Npj Ment Health Res ; 3(1): 12, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38609507

ABSTRACT

Large language models (LLMs) such as OpenAI's GPT-4 (which powers ChatGPT) and Google's Gemini, built on artificial intelligence, hold immense potential to support, augment, or even eventually automate psychotherapy. Enthusiasm about such applications is mounting in the field as well as in industry. These developments promise to address insufficient mental healthcare system capacity and scale individual access to personalized treatments. However, clinical psychology is an uncommonly high-stakes application domain for AI systems, as responsible and evidence-based therapy requires nuanced expertise. This paper provides a roadmap for the ambitious yet responsible application of clinical LLMs in psychotherapy. First, a technical overview of clinical LLMs is presented. Second, the stages of integration of LLMs into psychotherapy are discussed, highlighting parallels to the development of autonomous vehicle technology. Third, potential applications of LLMs in clinical care, training, and research are discussed, highlighting areas of risk given the complex nature of psychotherapy. Fourth, recommendations for the responsible development and evaluation of clinical LLMs are provided, which include centering clinical science, involving robust interdisciplinary collaboration, and attending to issues like assessment, risk detection, transparency, and bias. Lastly, a vision is outlined for how LLMs might enable a new generation of studies of evidence-based interventions at scale, and how these studies may challenge assumptions about psychotherapy.

2.
JMIR Hum Factors ; 10: e40533, 2023 Jan 30.
Article in English | MEDLINE | ID: mdl-36409300

ABSTRACT

BACKGROUND: The COVID-19 pandemic raised novel challenges in communicating reliable, continually changing health information to a broad and sometimes skeptical public, particularly around COVID-19 vaccines, which, despite being comprehensively studied, were the subject of viral misinformation. Chatbots are a promising technology to reach and engage populations during the pandemic. To inform and communicate effectively with users, chatbots must be highly usable and credible. OBJECTIVE: We sought to understand how young adults and health workers in the United States assessed the usability and credibility of a web-based chatbot called Vira, created by the Johns Hopkins Bloomberg School of Public Health and IBM Research using natural language processing technology. Using a mixed methods approach, we sought to rapidly improve Vira's user experience to support vaccine decision-making during the peak of the COVID-19 pandemic. METHODS: We recruited racially and ethnically diverse young people and health workers, with both groups from urban areas of the United States. We used the validated Chatbot Usability Questionnaire to understand the tool's navigation, precision, and persona. We also conducted 11 interviews with health workers and young people to understand the user experience, whether they perceived the chatbot as confidential and trustworthy, and how they would use the chatbot. We coded and categorized emerging themes to understand the determining factors for participants' assessment of chatbot usability and credibility. RESULTS: In all, 58 participants completed a web-based usability questionnaire and 11 completed in-depth interviews. Most questionnaire respondents said the chatbot was "easy to navigate" (51/58, 88%) and "very easy to use" (50/58, 86%), and many (45/58, 78%) said its responses were relevant. The mean Chatbot Usability Questionnaire score was 70.2 (SD 12.1), and scores ranged from 40.6 to 95.3. Interview participants felt the chatbot achieved high usability due to its strong functionality, performance, and perceived confidentiality, and that it could attain high credibility with a redesign of its cartoonish visual persona. Young people said they would use the chatbot to discuss vaccination with hesitant friends or family members, whereas health workers used or anticipated using the chatbot to support community outreach, save time, and stay up to date. CONCLUSIONS: This formative study conducted during the pandemic's peak provided user feedback for an iterative redesign of Vira. Using a mixed methods approach provided multidimensional feedback, identifying what the chatbot did well (it was easy to use, answered questions appropriately, and used credible branding) while offering tangible steps to improve the product's visual design. Future studies should evaluate how chatbots support personal health decision-making, particularly in the context of a public health emergency, and whether such outreach tools can reduce staff burnout. Randomized studies should also be conducted to measure how chatbots countering health misinformation affect user knowledge, attitudes, and behavior.
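
The usability findings above come from the Chatbot Usability Questionnaire (CUQ), reported on a 0-100 scale. As a rough illustration of how per-respondent scores and the mean and SD could be derived, here is a minimal Python sketch. It assumes the commonly described SUS-style CUQ scoring (16 items on a 5-point Likert scale, odd items positively worded and even items negatively worded); the response data are invented, and the scoring rule should be checked against the published instrument before reuse.

import statistics

def cuq_score(responses):
    """Score one CUQ form (16 Likert responses, 1-5) on a 0-100 scale.

    Assumption: odd-numbered items are positively worded and even-numbered
    items negatively worded, scored SUS-style. This mirrors the commonly
    described CUQ scoring sheet but should be verified against the instrument.
    """
    if len(responses) != 16 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected 16 Likert responses in the range 1-5")
    positive = sum(r - 1 for r in responses[0::2])   # items 1, 3, ..., 15
    negative = sum(5 - r for r in responses[1::2])   # items 2, 4, ..., 16
    return (positive + negative) / 64 * 100

# Invented example data: three respondents' raw Likert answers.
forms = [
    [5, 2, 4, 1, 5, 2, 4, 2, 5, 1, 4, 2, 5, 2, 4, 1],
    [4, 3, 3, 2, 4, 3, 4, 3, 3, 2, 4, 3, 4, 2, 3, 3],
    [3, 4, 2, 4, 3, 4, 2, 4, 3, 4, 3, 4, 2, 4, 3, 4],
]
scores = [cuq_score(f) for f in forms]
print(f"mean {statistics.mean(scores):.1f}, SD {statistics.stdev(scores):.1f}, "
      f"range {min(scores):.1f}-{max(scores):.1f}")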

3.
Proc Conf Empir Methods Nat Lang Process ; 2023: 11346-11369, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38618627

ABSTRACT

Mental health conversational agents (a.k.a. chatbots) are widely studied for their potential to offer accessible support to those experiencing mental health challenges. Previous surveys on the topic primarily consider papers published in either computer science or medicine, leading to a divide in understanding and hindering the sharing of beneficial knowledge between both domains. To bridge this gap, we conduct a comprehensive literature review using the PRISMA framework, reviewing 534 papers published in both computer science and medicine. Our systematic review reveals 136 key papers on building mental health-related conversational agents with diverse characteristics of modeling and experimental design techniques. We find that computer science papers focus on LLM techniques and on evaluating response quality with automated metrics, paying little attention to the application, whereas medical papers use rule-based conversational agents and outcome metrics to measure participants' health outcomes. Based on our findings on transparency, ethics, and cultural heterogeneity in this review, we provide a few recommendations to help bridge the disciplinary divide and enable the cross-disciplinary development of mental health conversational agents.
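
The review contrasts the computer-science habit of judging agents with automated response-quality metrics against the medical focus on participant outcomes. As a concrete illustration of the former, here is a minimal sketch computing sentence-level BLEU with NLTK; the reference and candidate utterances are invented, and BLEU is only one of several automated metrics (ROUGE, BERTScore, perplexity) used in that literature.

# Sentence-level BLEU between a system reply and a "gold" reference reply,
# the kind of automated response-quality metric common in computer-science
# evaluations of conversational agents. Utterances are invented.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "it sounds like you have been feeling overwhelmed lately".split()
candidate = "it sounds like you are feeling really overwhelmed lately".split()

# Smoothing avoids zero scores when higher-order n-grams do not overlap,
# which is common for short chatbot turns.
score = sentence_bleu(
    [reference], candidate,
    smoothing_function=SmoothingFunction().method1,
)
print(f"BLEU: {score:.3f}")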

4.
J Med Internet Res ; 24(7): e38418, 2022 07 06.
Article in English | MEDLINE | ID: mdl-35737898

ABSTRACT

BACKGROUND: Automated conversational agents, or chatbots, have a role in reinforcing evidence-based guidance delivered through other media and offer an accessible, individually tailored channel for public engagement. In early-to-mid 2021, young adults and minority populations disproportionately affected by COVID-19 in the United States were more likely to be hesitant toward COVID-19 vaccines, citing concerns regarding vaccine safety and effectiveness. Successful chatbot communication requires purposive understanding of user needs. OBJECTIVE: We aimed to review the acceptability of messages to be delivered by a chatbot named VIRA from Johns Hopkins University. The study investigated which message styles were preferred by young, urban-dwelling Americans as well as public health workers, since we anticipated that the chatbot would be used by the latter as a job aid. METHODS: We conducted 4 web-based focus groups with 20 racially and ethnically diverse young adults aged 18-28 years and public health workers aged 25-61 years living in or near eastern-US cities. We tested 6 message styles, asking participants to select a preferred response style for a chatbot answering common questions about COVID-19 vaccines. We transcribed, coded, and categorized emerging themes within the discussions of message content, style, and framing. RESULTS: Participants preferred messages that began with an empathetic reflection of a user concern and concluded with a straightforward, fact-supported response. Most participants disapproved of moralistic or reasoning-based appeals to get vaccinated, although public health workers felt that such strong statements appealing to communal responsibility were warranted. Responses tested with humor and testimonials did not appeal to the participants. CONCLUSIONS: To foster credibility, chatbots targeting young people with vaccine-related messaging should aim to build rapport with users by deploying empathic, reflective statements, followed by direct and comprehensive responses to user queries. Further studies are needed to inform the appropriate use of user-customized testimonials and humor in the context of chatbot communication.
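
The preferred message structure reported here (an empathetic reflection followed by a direct, fact-supported answer) is easy to express as a reply template. The sketch below is purely illustrative and is not VIRA's implementation; the concern labels and response text are invented placeholders.

# Illustrative reply template following the style participants preferred:
# open with an empathetic reflection of the user's concern, then give a
# direct, fact-supported answer. All strings are invented placeholders.
REFLECTIONS = {
    "side_effects": "It makes sense to want to know what to expect after a shot.",
    "speed_of_development": "It's understandable to wonder how the vaccines were developed so quickly.",
}

FACTS = {
    "side_effects": "Most side effects, like a sore arm or fatigue, are mild and resolve within a few days.",
    "speed_of_development": "The vaccines went through the standard clinical trial phases; earlier coronavirus research and parallel manufacturing shortened the timeline.",
}

def compose_reply(concern: str) -> str:
    """Empathetic reflection first, then a direct, fact-supported response."""
    return f"{REFLECTIONS[concern]} {FACTS[concern]}"

print(compose_reply("side_effects"))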


Subjects
COVID-19 Vaccines, COVID-19, Adolescent, COVID-19/prevention & control, COVID-19 Vaccines/therapeutic use, Communication, Humans, Public Health, Qualitative Research, United States, Young Adult
5.
NPJ Digit Med ; 5(1): 21, 2022 Feb 17.
Article in English | MEDLINE | ID: mdl-35177772

ABSTRACT

Health-focused apps with chatbots ("healthbots") have a critical role in addressing gaps in quality healthcare. There is limited evidence on how such healthbots are developed and applied in practice. Our review of healthbots aims to classify types of healthbots, contexts of use, and their natural language processing capabilities. Eligible apps were health related, had an embedded text-based conversational agent, were available in English, and could be downloaded free of charge from the Google Play or Apple iOS store. Apps were identified using 42Matters software, a mobile app search engine, and were assessed using an evaluation framework addressing chatbot characteristics and natural language processing features. The review suggests uptake across 33 low- and high-income countries. Most healthbots are patient-facing, available on a mobile interface, and provide a range of functions including health education and counselling support, symptom assessment, and assistance with tasks such as scheduling. Most of the 78 apps reviewed focus on primary care and mental health; only 6 (7.59%) had a theoretical underpinning, and 10 (12.35%) complied with health information privacy regulations. Our assessment indicated that only a few apps use machine learning and natural language processing approaches, despite such marketing claims. Most apps allowed only finite-state input, where the dialogue is led by the system and follows a predetermined algorithm. Healthbots are potentially transformative in centering care around the user; however, they are in a nascent state of development and require further research on development, automation, and adoption to achieve population-level health impact.
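
Most reviewed apps follow the finite-state pattern described above: the system leads the dialogue and the user picks from a fixed menu, with each choice mapping to a predetermined next state. A minimal Python sketch of that pattern follows; the states, wording, and phone number are invented for illustration and do not come from any reviewed app.

# Minimal finite-state dialogue manager: the system leads, the user chooses
# from a fixed menu, and each choice maps to a predetermined next state.
STATES = {
    "greet": {
        "prompt": "Hi! What would you like to do?",
        "options": {"1": ("Learn about symptoms", "symptoms"),
                    "2": ("Book an appointment", "booking"),
                    "3": ("Quit", "end")},
    },
    "symptoms": {
        "prompt": "Which symptom are you asking about?",
        "options": {"1": ("Fever", "fever_info"),
                    "2": ("Back to menu", "greet")},
    },
    "fever_info": {
        "prompt": "For fever: rest, fluids, and seek care if it lasts more than 3 days.",
        "options": {"1": ("Back to menu", "greet"), "2": ("Quit", "end")},
    },
    "booking": {
        "prompt": "Booking is handled by our clinic line: 555-0100 (placeholder).",
        "options": {"1": ("Back to menu", "greet"), "2": ("Quit", "end")},
    },
}

def run():
    state = "greet"
    while state != "end":
        node = STATES[state]
        print(node["prompt"])
        for key, (label, _) in node["options"].items():
            print(f"  {key}. {label}")
        choice = input("> ").strip()
        # Unrecognized input re-prompts the same state (system-led dialogue).
        state = node["options"].get(choice, (None, state))[1]

if __name__ == "__main__":
    run()

Swapping any node's options changes the conversation path without touching the control loop, which is why such systems are easy to audit but hard to scale beyond scripted content, consistent with the review's observation that few apps use machine learning or natural language processing.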

6.
NPJ Schizophr ; 7(1): 25, 2021 May 14.
Article in English | MEDLINE | ID: mdl-33990615

ABSTRACT

Computerized natural language processing (NLP) allows for objective and sensitive detection of speech disturbance, a hallmark of schizophrenia spectrum disorders (SSD). We explored several methods for characterizing speech changes in SSD (n = 20) compared to healthy control (HC) participants (n = 11) and approached linguistic phenotyping on three levels: individual words, parts of speech (POS), and sentence-level coherence. NLP features were compared with a clinical gold standard, the Scale for the Assessment of Thought, Language and Communication (TLC). We utilized Bidirectional Encoder Representations from Transformers (BERT), a state-of-the-art embedding algorithm incorporating bidirectional context. Through the POS approach, we found that participants with SSD used more pronouns but fewer adverbs, adjectives, and determiners (e.g., "the," "a"). Analysis of individual word usage was notable for more frequent use of first-person singular pronouns among individuals with SSD and first-person plural pronouns among HC. There was a striking increase in incomplete words in the SSD group. Sentence-level analysis using BERT reflected increased tangentiality in SSD, with greater sentence-embedding distances. The SSD sample had low speech disturbance on average, and there was no difference in group means for TLC scores. However, NLP measures of language disturbance appear to be sensitive to these subclinical differences and showed greater ability to discriminate between HC and SSD than a model based on clinical ratings alone. These intriguing exploratory results from a small sample prompt further inquiry into NLP methods for characterizing language disturbance in SSD and suggest that NLP measures may yield clinically relevant and informative biomarkers.
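
Two of the feature families described above, POS distributions and sentence-level coherence from contextual embeddings, are straightforward to prototype. The sketch below is not the authors' pipeline: it substitutes spaCy for POS tagging and a Sentence-Transformers MiniLM model for the paper's BERT embeddings, and it approximates tangentiality as the cosine distance between consecutive sentence embeddings. The model names and the transcript snippet are assumptions for illustration.

# Prototype of two feature families from the study, with substitutions:
# spaCy for POS counts and a Sentence-Transformers MiniLM model standing in
# for the paper's BERT embeddings. The transcript snippet is invented.
from collections import Counter

import numpy as np
import spacy                                            # pip install spacy; python -m spacy download en_core_web_sm
from sentence_transformers import SentenceTransformer   # pip install sentence-transformers

transcript = (
    "I went to the store yesterday. They were out of the bread I like. "
    "My brother says the radio talks about me. The weather has been cold."
)

# 1) Part-of-speech profile (pronouns, adverbs, adjectives, determiners).
nlp = spacy.load("en_core_web_sm")
doc = nlp(transcript)
pos_counts = Counter(tok.pos_ for tok in doc if not tok.is_punct)
total = sum(pos_counts.values())
for tag in ("PRON", "ADV", "ADJ", "DET"):
    print(f"{tag}: {pos_counts[tag] / total:.2%}")

# 2) Sentence-level coherence: cosine distance between consecutive sentences.
sentences = [s.text for s in doc.sents]
model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(sentences, normalize_embeddings=True)
distances = [1.0 - float(np.dot(emb[i], emb[i + 1])) for i in range(len(emb) - 1)]
print("mean consecutive-sentence distance:", round(float(np.mean(distances)), 3))

Because the embeddings are normalized, the dot product equals cosine similarity, so one minus the dot product gives the distance used as a rough tangentiality proxy here.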

7.
Drug Alcohol Depend ; 220: 108468, 2021 03 01.
Article in English | MEDLINE | ID: mdl-33540349

ABSTRACT

BACKGROUND: Public health has begun using social media forums such as Reddit to enhance surveillance and modernize interventions for young people. The current study's objective was to examine Reddit posts about the HBO series Euphoria to identify show themes that resonate with adolescent and young adult viewers, in order to inform future social media interventions. METHODS: Reddit comments in the r/television community from June to August 2019 were downloaded. Following filtering, 725 comments were analyzed and coded using a codebook and ATLAS.ti. Coded comments were analyzed for themes relevant to Redditor substance use, reactions to Euphoria and the main character (Rue), and mental health concerns. RESULTS: During their discussion of the show, Redditors disclosed personal use of both recreational and prescription drugs, including substance use to cope with mental illness symptoms. There were approximately equal numbers of comments with positive and negative reactions to the show overall and to the main character, Rue. Redditors often found Euphoria's storyline and portrayed events to be relatable and realistic to the experience of young people who use drugs, as well as sometimes triggering. Overall, Redditors thought Rue accurately depicted an individual's struggle with a substance use disorder. CONCLUSIONS: This exploratory study highlights how television and social media can contribute to young people's understanding of substance use disorders and mental health. Findings could inform the design of social media interventions for adolescents and young adults on a variety of substance use issues, including stigma and the interconnectedness of substance use and mental health challenges.


Subjects
Mental Disorders/psychology, Social Media, Substance-Related Disorders/psychology, Television, Adaptation, Psychological, Adult, Female, Humans, Male, Mental Health, Public Health, Young Adult
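
The study above describes downloading and filtering r/television comments before qualitative coding, without naming the tooling. Purely as an illustration of what that collection step can look like, here is a hedged sketch using the PRAW Reddit API wrapper; the credentials, search query, result limit, and keyword filter are placeholders, and the actual study may have collected and date-restricted comments differently.

# Hypothetical sketch of the collection step: pull r/television threads about
# "Euphoria" with PRAW and keep comments that mention the show or its lead.
# Credentials, query, and keyword filter are placeholders, not the study's.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="euphoria-thematic-review/0.1",
)

KEYWORDS = ("euphoria", "rue")
comments = []

for submission in reddit.subreddit("television").search("Euphoria", limit=25):
    submission.comments.replace_more(limit=0)      # flatten "load more" stubs
    for comment in submission.comments.list():
        body = comment.body.lower()
        if any(k in body for k in KEYWORDS):
            comments.append(
                {"id": comment.id, "created_utc": comment.created_utc, "body": comment.body}
            )

print(f"collected {len(comments)} candidate comments for coding")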