Evaluation of Patient Education Materials From Large-Language Artificial Intelligence Models on Carpal Tunnel Release.
Croen, Brett J; Abdullah, Mohammed S; Berns, Ellis; Rapaport, Sarah; Hahn, Alexander K; Barrett, Caitlin C; Sobel, Andrew D.
Affiliation
  • Croen BJ; Department of Orthopaedic Surgery, Penn Medicine, Philadelphia, PA, USA.
  • Abdullah MS; Department of Orthopaedic Surgery, Penn Medicine, Philadelphia, PA, USA.
  • Berns E; Department of Orthopaedic Surgery, Penn Medicine, Philadelphia, PA, USA.
  • Rapaport S; Department of Orthopaedic Surgery, Penn Medicine, Philadelphia, PA, USA.
  • Hahn AK; Department of Orthopaedic Surgery, University of Connecticut, Farmington, CT, USA.
  • Barrett CC; Department of Orthopaedic Surgery, Penn Medicine, Philadelphia, PA, USA.
  • Sobel AD; Department of Orthopaedic Surgery, Penn Medicine, Philadelphia, PA, USA.
Hand (N Y): 15589447241247332, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38660977
ABSTRACT

BACKGROUND:

ChatGPT, an artificial intelligence technology, has the potential to be a useful patient education aid, but the accuracy and appropriateness of its responses and recommendations on common hand surgery pathologies and procedures must first be understood. Comparing the sources referenced and the characteristics of the responses from ChatGPT and an established search engine (Google) on carpal tunnel surgery allows the utility of ChatGPT for patient education to be assessed.

METHODS:

A Google search for "carpal tunnel release surgery" was performed, and the resulting "frequently asked questions (FAQs)" were recorded along with their answers and sources. ChatGPT was then asked to answer the same FAQs. The two question sets were compared, and answer content was compared by word count, readability analyses, and content source.
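The abstract does not specify the tooling behind the word-count and readability analyses; purely as a minimal sketch, such a comparison could be reproduced in Python with the third-party textstat package. The answer strings below are hypothetical placeholders, not study data.

    # Sketch: compare word count and readability of a paired FAQ answer.
    # Assumes: pip install textstat; answer strings are hypothetical.
    import textstat

    answers = {
        "Google": "Carpal tunnel release is a common outpatient procedure.",
        "ChatGPT": "Carpal tunnel release surgery is performed to relieve "
                   "compression of the median nerve within the carpal tunnel.",
    }

    for source, text in answers.items():
        print(source,
              "| words:", textstat.lexicon_count(text),
              "| Flesch reading ease:", textstat.flesch_reading_ease(text),
              "| Flesch-Kincaid grade:", textstat.flesch_kincaid_grade(text))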

RESULTS:

There was 40% concordance between the questions generated by the two programs. Google answered each question with one source per answer, whereas ChatGPT's answers drew on two sources per answer. ChatGPT's answers were significantly longer than Google's, and multiple readability algorithms found ChatGPT's responses to be statistically significantly more difficult to read and written at a higher grade level than Google's. ChatGPT always recommended "contacting your surgeon."
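The abstract reports statistical significance but does not name the test used; as an illustration only, per-answer readability scores from the two sources could be compared with an independent-samples t-test in SciPy. Both score lists below are hypothetical, not the study's data.

    # Sketch: test whether per-answer grade levels differ between sources.
    # Assumes SciPy; both score lists are hypothetical placeholders.
    from scipy import stats

    google_grades = [7.2, 6.8, 8.1, 7.5, 6.9]        # hypothetical FK grades
    chatgpt_grades = [11.4, 12.0, 10.8, 11.9, 12.3]  # hypothetical FK grades

    result = stats.ttest_ind(google_grades, chatgpt_grades)
    print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
    # p < 0.05 would indicate a statistically significant difference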

CONCLUSION:

A comparison of ChatGPT's responses with Google's FAQ responses revealed that ChatGPT's answers were more in-depth, drew from multiple sources, and cited a higher proportion of academic websites. However, ChatGPT's answers were also more difficult to understand. Further study is needed to determine whether these differences between the programs correspond to differences in patient comprehension.

Full text: 1 Collections: 01-international Database: MEDLINE Language: English Journal: Hand (N Y) Publication year: 2024 Document type: Article
