A quality and readability comparison of artificial intelligence and popular health website education materials for common hand surgery procedures.
Pohl, Nicholas B; Derector, Evan; Rivlin, Michael; Bachoura, Abdo; Tosti, Rick; Kachooei, Amir R; Beredjiklian, Pedro K; Fletcher, Daniel J.
Affiliation
  • Pohl NB; Department of Orthopaedic Surgery, Rothman Orthopaedic Institute, Philadelphia, PA, USA. Electronic address: nickbpohl@gmail.com.
  • Derector E; Department of Orthopaedic Surgery, Rothman Orthopaedic Institute, Philadelphia, PA, USA.
  • Rivlin M; Department of Orthopaedic Surgery, Rothman Orthopaedic Institute, Philadelphia, PA, USA.
  • Bachoura A; Department of Orthopaedic Surgery, Rothman Orthopaedics Florida, Orlando, FL, USA.
  • Tosti R; Department of Orthopaedic Surgery, Rothman Orthopaedic Institute, Philadelphia, PA, USA.
  • Kachooei AR; Department of Orthopaedic Surgery, Rothman Orthopaedics Florida, Orlando, FL, USA.
  • Beredjiklian PK; Department of Orthopaedic Surgery, Rothman Orthopaedic Institute, Philadelphia, PA, USA.
  • Fletcher DJ; Department of Orthopaedic Surgery, Rothman Orthopaedic Institute, Philadelphia, PA, USA.
Hand Surg Rehabil. 2024 Jun;43(3):101723.
Article in English | MEDLINE | ID: mdl-38782361
ABSTRACT

INTRODUCTION:

The application of ChatGPT to producing patient education materials for orthopaedic hand disorders has not been extensively studied. This study evaluated the quality and readability of educational information pertaining to common hand surgeries from patient education websites and from ChatGPT.

METHODS:

Patient education information for four hand surgeries (carpal tunnel release, trigger finger release, Dupuytren's contracture release, and ganglion cyst surgery) was extracted from ChatGPT (at a scientific and at a fourth-grade reading level), WebMD, and Mayo Clinic. In a blinded and randomized fashion, five fellowship-trained orthopaedic hand surgeons evaluated the quality of the information using modified DISCERN criteria. Readability and reading grade level were assessed using the Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKGL) equations.
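The FRE and FKGL formulas cited above are standard published equations. As an illustrative sketch only (not the authors' scoring pipeline), the following Python computes both scores from word, sentence, and syllable counts; the vowel-group syllable counter and the sample sentence are simplifying assumptions for demonstration.

import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count runs of consecutive vowels, minimum one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    # Flesch Reading Ease: higher scores indicate easier text.
    fre = 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)
    # Flesch-Kincaid Grade Level: approximate U.S. school grade required.
    fkgl = 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59
    return fre, fkgl

# Hypothetical patient-education sentence used only to exercise the functions.
sample = "The surgeon makes a small cut in the palm. This releases the tight band."
print(readability(sample))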

RESULTS:

The Mayo Clinic website scored higher in quality for carpal tunnel release information (p = 0.004). WebMD scored higher for Dupuytren's contracture release (p < 0.001), ganglion cyst surgery (p = 0.003), and overall quality (p < 0.001). Materials from ChatGPT at a fourth-grade reading level, ChatGPT at a scientific reading level, WebMD, and Mayo Clinic on average exceeded the recommended reading grade level (4th-6th grade) by at least four grade levels (10th, 14th, 13th, and 11th grade, respectively).

CONCLUSIONS:

ChatGPT provides inferior education materials compared with patient-friendly websites. When prompted to produce more easily readable materials, ChatGPT generates less robust information than these websites and does not adequately simplify the educational content. ChatGPT has the potential to improve the quality and readability of patient education materials, but currently patient-friendly websites provide superior quality at similar reading comprehension levels.

Full text: 1 Database: MEDLINE Main subject: Artificial Intelligence / Patient Education as Topic / Internet / Comprehension Language: English Publication year: 2024 Document type: Article