Educating patients on osteoporosis and bone health: Can "ChatGPT" provide high-quality content?
Ghanem, Diane; Shu, Henry; Bergstein, Victoria; Marrache, Majd; Love, Andra; Hughes, Alice; Sotsky, Rachel; Shafiq, Babar.
Affiliation
  • Ghanem D; Department of Orthopaedic Surgery, The Johns Hopkins Hospital, 1800 Orleans St, Baltimore, MD, 21287, USA. dghanem1@jh.edu.
  • Shu H; School of Medicine, The Johns Hopkins University, Baltimore, MD, USA.
  • Bergstein V; School of Medicine, The Johns Hopkins University, Baltimore, MD, USA.
  • Marrache M; Department of Orthopaedic Surgery, The Johns Hopkins Hospital, 1800 Orleans St, Baltimore, MD, 21287, USA.
  • Love A; Department of Orthopaedic Surgery, The Johns Hopkins Hospital, 1800 Orleans St, Baltimore, MD, 21287, USA.
  • Hughes A; Department of Orthopaedic Surgery, The Johns Hopkins Hospital, 1800 Orleans St, Baltimore, MD, 21287, USA.
  • Sotsky R; Department of Orthopaedic Surgery, The Johns Hopkins Hospital, 1800 Orleans St, Baltimore, MD, 21287, USA.
  • Shafiq B; Department of Orthopaedic Surgery, The Johns Hopkins Hospital, 1800 Orleans St, Baltimore, MD, 21287, USA.
Eur J Orthop Surg Traumatol; 34(5): 2757-2765, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38769125
ABSTRACT

PURPOSE:

The rise of artificial intelligence (AI) models such as ChatGPT offers potential for varied applications, including patient education in healthcare. Given gaps in patients' knowledge of osteoporosis and bone health, and poor adherence to prevention and treatment, this study aimed to evaluate the accuracy of ChatGPT in delivering evidence-based information on osteoporosis.

METHODS:

Twenty of the most frequently asked questions (FAQs) about osteoporosis were subcategorized into diagnosis, diagnostic method, risk factors, and treatment and prevention. These FAQs were sourced online and entered into ChatGPT-3.5. Three orthopaedic surgeons and one advanced practice provider who routinely treat patients with fragility fractures independently reviewed the ChatGPT-generated answers, grading them on a scale from 0 (harmful) to 4 (excellent). Mean response accuracy scores were calculated, and a one-way analysis of variance (ANOVA) was used to compare mean scores across the four categories.
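
As an illustration of the analysis described above, the sketch below computes per-category mean scores and runs a one-way ANOVA in Python using scipy.stats.f_oneway. This is not the authors' code: the category names follow the abstract, but the grades are hypothetical placeholders on the study's 0 (harmful) to 4 (excellent) scale.

    # Minimal sketch of the grading analysis; the scores are hypothetical
    # placeholders, not study data. Grades: 0 (harmful) to 4 (excellent).
    from scipy.stats import f_oneway

    scores = {
        "diagnosis": [4, 4, 3, 4],
        "diagnostic method": [3, 4, 4, 4],
        "risk factors": [4, 3, 4, 4],
        "treatment and prevention": [4, 4, 4, 3],
    }

    # Mean response accuracy score per category (0-4 scale).
    for category, grades in scores.items():
        print(f"{category}: mean = {sum(grades) / len(grades):.2f}")

    # One-way ANOVA comparing scores across the four categories.
    f_stat, p_value = f_oneway(*scores.values())
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")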

RESULTS:

ChatGPT displayed an overall mean accuracy score of 91%. Its responses were graded as "accurate, requiring minimal clarification" or "excellent," with mean response scores ranging from 3.25 to 4. No answers were deemed inaccurate or harmful, and no significant difference was observed in mean scores across the defined categories.
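
(The abstract does not state how the 91% figure was derived; if it is the overall mean grade normalized to the 4-point maximum, it would correspond to a mean score of about 3.64, since 3.64 / 4 ≈ 0.91.)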

CONCLUSION:

ChatGPT-3.5 provided high-quality educational content, showing a high degree of accuracy in addressing osteoporosis-related questions with structured, comprehensive answers that aligned closely with expert opinion and current literature. However, while AI models can improve the accessibility of patient information, they should be used as an adjunct to, rather than a substitute for, human expertise and clinical judgment.

Full text: 1 Database: MEDLINE Main subject: Osteoporosis / Patient Education as Topic Limits: Humans Language: English Year of publication: 2024 Document type: Article
