Doctor AI? A pilot study examining responses of artificial intelligence to common questions asked by geriatric patients.
Moore, Ian; Magnante, Christopher; Embry, Ellie; Mathis, Jennifer; Mooney, Scott; Haj-Hassan, Shereen; Cottingham, Maria; Padala, Prasad R.
Affiliations
  • Moore I; Geriatric Research Education and Clinical Center (GRECC), Central Arkansas Veterans Healthcare System (CAVHS), Little Rock, AR, United States.
  • Magnante C; Geriatric Research Education and Clinical Center (GRECC), Central Arkansas Veterans Healthcare System (CAVHS), Little Rock, AR, United States.
  • Embry E; Geriatric Research Education and Clinical Center (GRECC), Central Arkansas Veterans Healthcare System (CAVHS), Little Rock, AR, United States.
  • Mathis J; Geriatric Research Education and Clinical Center (GRECC), Central Arkansas Veterans Healthcare System (CAVHS), Little Rock, AR, United States.
  • Mooney S; Geriatric Research Education and Clinical Center (GRECC), Central Arkansas Veterans Healthcare System (CAVHS), Little Rock, AR, United States.
  • Haj-Hassan S; Tennessee Valley Veteran Affairs Healthcare System (TVHS), Nashville, TN, United States.
  • Cottingham M; Tennessee Valley Veteran Affairs Healthcare System (TVHS), Nashville, TN, United States.
  • Padala PR; Geriatric Research Education and Clinical Center (GRECC), Central Arkansas Veterans Healthcare System (CAVHS), Little Rock, AR, United States.
Front Artif Intell ; 7: 1438012, 2024.
Article in English | MEDLINE | ID: mdl-39118788
ABSTRACT

Introduction:

AI technologies have the potential to transform patient care. AI has been used to aid in differential diagnosis and treatment planning for psychiatric disorders, to administer therapeutic protocols, and to assist with the interpretation of cognitive testing. Despite these advancements, AI has notable limitations, remains understudied, and requires further research on its strengths and limitations in patient care. This study explored the responses of AI (ChatGPT 3.5) and trained clinicians to commonly asked patient questions.

Methods:

Three clinicians and AI provided responses to five dementia/geriatric healthcare-related questions. A fourth, blinded clinician rated the responses for clarity, accuracy, relevance, depth, and ease of understanding, and attempted to determine which response was AI-generated.

Results:

AI responses were rated highest in ease of understanding and depth across all responses and tied for first in clarity, accuracy, and relevance. The overall rating for AI-generated responses was 4.6/5 (SD = 0.26); the three clinicians' responses were rated 4.3 (SD = 0.67), 4.2 (SD = 0.52), and 3.9 (SD = 0.59), respectively. The AI-generated answers were correctly identified in 4/5 instances.

Conclusions:

AI responses were rated more highly and more consistently than clinician answers, both on each question individually and overall, demonstrating that AI can produce good responses to potential patient questions. However, AI responses were easily distinguishable from those of clinicians. Although AI has the potential to positively impact healthcare, concerns remain regarding the difficulty of discerning AI-generated from human-generated material, the increased potential for the proliferation of misinformation, and data security.
Full text: 1 Database: MEDLINE Language: English Journal: Front Artif Intell Year: 2024 Document type: Article