Results 1 - 3 of 3
1.
JMIR Ment Health; 11: e58462, 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39293056

ABSTRACT

BACKGROUND: The application of artificial intelligence (AI) to health and health care is rapidly increasing. Several studies have assessed the attitudes of health professionals, but far fewer studies have explored the perspectives of patients or the general public. Studies investigating patient perspectives have focused on somatic issues, including those related to radiology, perinatal health, and general applications. Patient feedback has been elicited in the development of specific mental health care solutions, but broader perspectives toward AI for mental health care have been underexplored.

OBJECTIVE: This study aims to understand public perceptions regarding potential benefits of AI, concerns about AI, comfort with AI accomplishing various tasks, and values related to AI, all pertaining to mental health care.

METHODS: We conducted a 1-time cross-sectional survey with a nationally representative sample of 500 US-based adults. Participants provided structured responses on their perceived benefits, concerns, comfort, and values regarding AI for mental health care. They could also add free-text responses to elaborate on their concerns and values.

RESULTS: A plurality of participants (245/497, 49.3%) believed AI may be beneficial for mental health care, but this perspective differed based on sociodemographic variables (all P<.05). Specifically, Black participants (odds ratio [OR] 1.76, 95% CI 1.03-3.05) and those with lower health literacy (OR 2.16, 95% CI 1.29-3.78) perceived AI to be more beneficial, and women (OR 0.68, 95% CI 0.46-0.99) perceived AI to be less beneficial. Participants endorsed concerns about accuracy, possible unintended consequences such as misdiagnosis, the confidentiality of their information, and the loss of connection with their health professional when AI is used for mental health care. A majority of participants (402/500, 80.4%) valued being able to understand individual factors driving their risk, confidentiality, and autonomy as it pertained to the use of AI for their mental health. When asked who was responsible for the misdiagnosis of mental health conditions using AI, 81.6% (408/500) of participants held the health professional responsible. Qualitative results revealed similar concerns related to the accuracy of AI and how its use may impact the confidentiality of patients' information.

CONCLUSIONS: Future work involving the use of AI for mental health care should investigate strategies for conveying the level of AI's accuracy, factors that drive patients' mental health risks, and how data are used confidentially so that patients can determine with their health professionals when AI may be beneficial. It will also be important in a mental health care context to ensure the patient-health professional relationship is preserved when AI is used.
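The adjusted odds ratios reported above (eg, OR 1.76, 95% CI 1.03-3.05 for Black participants) are the kind of estimates a logistic regression over survey responses produces. The sketch below is a minimal, hypothetical illustration of that computation; the simulated data and variable names are assumptions, not the authors' analysis code.

```python
# Illustrative sketch only: how ORs with 95% CIs are typically derived from
# survey data via logistic regression. The data are simulated and the
# predictors are hypothetical stand-ins for the sociodemographic variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "perceives_benefit": rng.integers(0, 2, n),  # 1 = believes AI may help
    "black": rng.integers(0, 2, n),
    "low_health_literacy": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
})

# Fit the logistic regression, then exponentiate coefficients to get ORs;
# exponentiating the confidence bounds gives each OR's 95% CI.
model = smf.logit("perceives_benefit ~ black + low_health_literacy + female",
                  data=df).fit(disp=False)
summary = pd.concat(
    [np.exp(model.params).rename("OR"),
     np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
    axis=1)
print(summary)
```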


Subject(s)
Artificial Intelligence , Humans , Cross-Sectional Studies , Female , Male , Adult , Middle Aged , Mental Health Services , Young Adult , United States , Adolescent , Aged , Surveys and Questionnaires , Mental Disorders/therapy , Mental Disorders/diagnosis , Mental Disorders/psychology
2.
J Am Med Inform Assoc; 31(2): 289-297, 2024 Jan 18.
Article in English | MEDLINE | ID: mdl-37847667

ABSTRACT

OBJECTIVES: To determine if different formats for conveying machine learning (ML)-derived postpartum depression risks impact patient classification of recommended actions (primary outcome) and intention to seek care, perceived risk, trust, and preferences (secondary outcomes).

MATERIALS AND METHODS: We recruited English-speaking females of childbearing age (18-45 years) using an online survey platform. We created 2 exposure variables (presentation format and risk severity), each with 4 levels, manipulated within-subject. Presentation formats consisted of text only, numeric only, gradient number line, and segmented number line. For each format viewed, participants answered questions regarding each outcome.

RESULTS: Five hundred four participants (mean age 31 years) completed the survey. For the risk classification question, performance was high (93%) with no significant differences between presentation formats. There were main effects of risk level (all P < .001) such that participants perceived higher risk, were more likely to agree to treatment, and were more trusting of their obstetrics team as the risk level increased, but we found inconsistencies in which presentation format corresponded to the highest perceived risk, trust, or behavioral intention. The gradient number line was the most preferred format (43%).

DISCUSSION AND CONCLUSION: All formats resulted in high accuracy on the classification outcome (primary), but there were nuanced differences in risk perceptions, behavioral intentions, and trust. Investigators should choose health data visualizations based on the primary goal they want lay audiences to accomplish with the ML risk score.
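For readers unfamiliar with the two graphical formats, the matplotlib sketch below renders a gradient number line and a segmented number line marking one hypothetical risk score. The layout, colors, and risk bands are assumptions; the study's actual stimuli are not described in the abstract.

```python
# Hypothetical rendering of two of the four presentation formats; the
# thresholds and colors are assumptions, not the study's actual stimuli.
import numpy as np
import matplotlib.pyplot as plt

risk = 0.35  # hypothetical ML-derived postpartum depression risk score

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(6, 2.4))

# Gradient number line: a continuous color ramp from low to high risk.
gradient = np.linspace(0, 1, 256).reshape(1, -1)
ax1.imshow(gradient, aspect="auto", cmap="RdYlGn_r", extent=[0, 1, 0, 1])
ax1.axvline(risk, color="black", linewidth=2)  # patient's score
ax1.set_yticks([])
ax1.set_title("Gradient number line")

# Segmented number line: discrete low/moderate/high bands.
for lo, hi, color in [(0.0, 0.2, "green"), (0.2, 0.5, "gold"), (0.5, 1.0, "red")]:
    ax2.axvspan(lo, hi, color=color, alpha=0.6)
ax2.axvline(risk, color="black", linewidth=2)
ax2.set_yticks([])
ax2.set_xlim(0, 1)
ax2.set_title("Segmented number line")

fig.tight_layout()
plt.show()
```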


Subject(s)
Depression, Postpartum , Female , Humans , Adult , Adolescent , Young Adult , Middle Aged , Depression, Postpartum/diagnosis , Risk Factors , Surveys and Questionnaires , Data Visualization
3.
JAMIA Open; 6(3): ooad048, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37425486

ABSTRACT

This study aimed to evaluate women's attitudes towards artificial intelligence (AI)-based technologies used in mental health care. We conducted a cross-sectional, online survey of U.S. adults reporting female sex at birth, focused on bioethical considerations for AI-based technologies in mental health care and stratified by previous pregnancy. Survey respondents (n = 258) were open to AI-based technologies in mental health care but concerned about medical harm and inappropriate data sharing. They held clinicians, developers, health care systems, and the government responsible for harm. Most reported it was "very important" for them to understand AI output. More previously pregnant respondents than those not previously pregnant reported that being told AI played a small role in mental health care was "very important" (P = .03). We conclude that protections against harm, transparency around data use, preservation of the patient-clinician relationship, and patient comprehension of AI predictions may facilitate trust in AI-based technologies for mental health care among women.
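The stratified result above (P = .03) is the kind of difference a simple contingency-table test detects. A hedged sketch follows; the counts are invented, since the abstract reports only the P value, and the authors' actual test is not stated.

```python
# Hypothetical 2x2 comparison of "very important" ratings by pregnancy
# history; counts are invented for illustration only.
from scipy.stats import chi2_contingency

#         rated "very important"   did not
table = [[70,                      50],   # previously pregnant
         [60,                      78]]   # not previously pregnant
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```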
