Examining the role of ChatGPT in the management of distal radius fractures: insights into its accuracy and consistency.
Knee, Christopha J; Campbell, Ryan J; Graham, David J; Handford, Cameron; Symes, Michael; Sivakumar, Brahman S.
Affiliation
  • Knee CJ; Department of Orthopaedics and Trauma Surgery, Royal North Shore Hospital, St Leonards, New South Wales, Australia.
  • Campbell RJ; Department of Orthopaedics and Trauma Surgery, Royal North Shore Hospital, St Leonards, New South Wales, Australia.
  • Graham DJ; Australian Research Collaboration on Hands [ARCH], Mudgeeraba, Queensland, Australia.
  • Handford C; School of Medicine and Dentistry, Griffith University, Southport, Queensland, Australia.
  • Symes M; School of Medicine, University of Queensland, Herston, Queensland, Australia.
  • Sivakumar BS; Department of Musculoskeletal Services, Gold Coast University Hospital, Southport, Queensland, Australia.
ANZ J Surg ; 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38967407
ABSTRACT

BACKGROUND:

The optimal management of distal radius fractures remains a challenge for orthopaedic surgeons. The emergence of Artificial Intelligence (AI) and Large Language Models (LLMs), especially ChatGPT, offers significant potential to improve healthcare and research. This study aims to assess the accuracy and consistency of ChatGPT's knowledge of distal radius fracture management, with a focus on its capability to provide information to patients and to assist orthopaedic clinicians in decision-making.

METHODS:

We presented ChatGPT with seven questions on distal radius fracture management over two sessions, yielding 14 responses. The questions covered a range of topics, including patient inquiries and orthopaedic clinical decision-making. We requested references for each response, and two orthopaedic registrars and two senior orthopaedic surgeons evaluated the responses for accuracy and consistency.

RESULTS:

All 14 responses contained a mix of both correct and incorrect information. Among the 47 cited references, 13% were accurate, 28% appeared to be fabricated, 57% were incorrect, and 2% were correct but deemed inappropriate. Consistency was observed in 71% of the responses.

CONCLUSION:

ChatGPT demonstrates significant limitations in accuracy and consistency when providing information on distal radius fractures. In its current format, it offers limited utility for patient education and clinical decision-making.

Full text: 1 Collections: 01-international Database: MEDLINE Language: En Year of publication: 2024 Document type: Article