Comparing the quality of ChatGPT- and physician-generated responses to patients' dermatology questions in the electronic medical record.
Reynolds, Kelly; Nadelman, Daniel; Durgin, Joseph; Ansah-Addo, Stephen; Cole, Daniel; Fayne, Rachel; Harrell, Jane; Ratycz, Madison; Runge, Mason; Shepard-Hayes, Amanda; Wenzel, Daniel; Tejasvi, Trilokraj.
Affiliation
  • Reynolds K; Department of Dermatology, University of Michigan, Ann Arbor, MI, USA.
  • Nadelman D; Department of Dermatology, University of Michigan, Ann Arbor, MI, USA.
  • Durgin J; Department of Dermatology, University of Michigan, Ann Arbor, MI, USA.
  • Ansah-Addo S; Department of Dermatology, University of Michigan, Ann Arbor, MI, USA.
  • Cole D; Department of Dermatology, University of Michigan, Ann Arbor, MI, USA.
  • Fayne R; Department of Dermatology, University of Michigan, Ann Arbor, MI, USA.
  • Harrell J; Department of Dermatology, University of Michigan, Ann Arbor, MI, USA.
  • Ratycz M; Department of Dermatology, University of Michigan, Ann Arbor, MI, USA.
  • Runge M; Department of Dermatology, University of Michigan, Ann Arbor, MI, USA.
  • Shepard-Hayes A; Department of Dermatology, University of Michigan, Ann Arbor, MI, USA.
  • Wenzel D; Department of Dermatology, University of Michigan, Ann Arbor, MI, USA.
  • Tejasvi T; Department of Dermatology, University of Michigan, Ann Arbor, MI, USA.
Clin Exp Dermatol ; 49(7): 715-718, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38180108
ABSTRACT

BACKGROUND:

ChatGPT is a free artificial intelligence (AI)-based natural language processing tool that generates complex responses to inputs from users.

OBJECTIVES:

To determine whether ChatGPT is able to generate high-quality responses to patient-submitted questions in the patient portal.

METHODS:

Patient-submitted questions and the corresponding responses from their dermatology physician were extracted from the electronic medical record for analysis. The questions were input into ChatGPT (version 3.5) and the outputs extracted for analysis, with manual removal of verbiage pertaining to ChatGPT's inability to provide medical advice. Ten blinded reviewers (seven physicians and three nonphysicians) rated and selected their preference in terms of 'overall quality', 'readability', 'accuracy', 'thoroughness' and 'level of empathy' of the physician- and ChatGPT-generated responses.
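The abstract does not describe how the blinding was implemented; a minimal sketch of one common approach, shuffling the presentation order of each physician/ChatGPT response pair so reviewers cannot infer the source from position (all function and field names here are hypothetical), might look like:

```python
import random

def blind_pairs(pairs, seed=0):
    """Given (physician, chatgpt) response pairs, return display items with
    sources hidden and presentation order randomized per pair, plus a key
    for later unblinding. Illustrative only; not the study's actual code."""
    rng = random.Random(seed)
    displays, key = [], []
    for i, (phys, gpt) in enumerate(pairs):
        items = [("physician", phys), ("chatgpt", gpt)]
        rng.shuffle(items)  # randomize which response appears as "A" vs "B"
        displays.append({"id": i, "A": items[0][1], "B": items[1][1]})
        key.append({"id": i, "A": items[0][0], "B": items[1][0]})
    return displays, key
```

Reviewers would see only the `displays` list; the `key` is retained separately to map preferences back to sources after rating.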

RESULTS:

Thirty-one messages and responses were analysed. Physician-generated responses were strongly preferred over the ChatGPT-generated responses by both the physician and nonphysician reviewers, and received significantly higher ratings for 'readability' and 'level of empathy'.
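The abstract reports significant differences without naming the statistical test used. For paired preference data of this kind, a standard choice is an exact sign test; a stdlib-only sketch follows, with preference counts that are invented purely for illustration (the abstract does not report them):

```python
from math import comb

def sign_test_p(greater: int, lesser: int) -> float:
    """Two-sided exact sign test p-value for paired preferences
    (ties excluded), under Binomial(n, 0.5)."""
    n = greater + lesser
    k = max(greater, lesser)
    # Upper tail P(X >= k); doubled for the two-sided test, capped at 1.
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
    return min(1.0, 2 * tail)

# Hypothetical counts: of 31 message pairs, suppose reviewers preferred
# the physician response in 24 and the ChatGPT response in 7.
p = sign_test_p(24, 7)
print(f"p = {p:.4f}")
```

Under these invented counts the preference would be significant at the conventional 0.05 level; the study's actual analysis may have used a different test.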

CONCLUSIONS:

The results of this study suggest that physician-generated responses to patients' portal messages are still preferred over those from ChatGPT, but generative AI tools may be helpful for drafting initial responses and identifying educational resources for patients.
Subjects

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Natural Language Processing / Dermatology / Electronic Health Records Limits: Humans Language: English Publication year: 2024 Document type: Article
