ChatGPT has Educational Potential: Assessing ChatGPT Responses to Common Patient Hip Arthroscopy Questions.
AlShehri, Yasir; McConkey, Mark; Lodhia, Parth.
Affiliation
  • AlShehri Y; Department of Orthopaedics, Faculty of Medicine, The University of British Columbia, Vancouver, BC, Canada; Department of Orthopedics, College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia.
  • McConkey M; Department of Orthopaedics, Faculty of Medicine, The University of British Columbia, Vancouver, BC, Canada.
  • Lodhia P; Department of Orthopaedics, Faculty of Medicine, The University of British Columbia, Vancouver, BC, Canada. Electronic address: parth.lodhia@ubc.ca.
Arthroscopy; 2024 Jun 22.
Article in En | MEDLINE | ID: mdl-38914299
ABSTRACT

PURPOSE:

To assess the ability of ChatGPT to answer common patient questions regarding hip arthroscopy, and to analyze the accuracy and appropriateness of its responses.

METHODS:

Ten questions were selected from well-known patient education websites, and ChatGPT (version 3.5) responses to these questions were graded by two fellowship-trained hip preservation surgeons. Responses were analyzed, compared with the current literature, and graded on a scale from A to D (A being the highest and D the lowest) based on the accuracy and completeness of each response. Where the two surgeons' grades differed, a consensus was reached. Inter-rater agreement was calculated. The readability of responses was also assessed using the Flesch-Kincaid Reading Ease Score (FRES) and Flesch-Kincaid Grade Level (FKGL).
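The two readability metrics used above are computed from words per sentence and syllables per word. A minimal sketch of the standard Flesch-Kincaid formulas is shown below; note that the naive vowel-group syllable counter is an illustrative assumption, not the validated tokenizer a readability tool would use:

```python
import re


def count_syllables(word: str) -> int:
    """Naive syllable estimate: count vowel groups, with a crude
    silent-'e' adjustment. Illustrative heuristic only."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith(("le", "ee")) and n > 1:
        n -= 1
    return max(n, 1)


def readability(text: str) -> tuple[float, float]:
    """Return (FRES, FKGL) using the standard Flesch-Kincaid formulas."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)  # mean words per sentence
    spw = syllables / len(words)       # mean syllables per word
    fres = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fres, fkgl
```

Lower FRES and higher FKGL both indicate harder text; a mean FRES near 28 with FKGL above 12, as reported below, corresponds to college-level material, well above the sixth-to-eighth-grade level commonly recommended for patient education.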

RESULTS:

Responses received the following consensus grades: A (50%, n = 5), B (30%, n = 3), C (10%, n = 1), and D (10%, n = 1) (Table 2). Inter-rater agreement on the initial individual grading was 30%. The mean FRES was 28.2 (SD ± 9.2; range, 11.7-42.5), corresponding to a college graduate reading level. The mean FKGL was 14.4 (SD ± 1.8; range, 12.1-18), indicating a college student reading level.

CONCLUSION:

ChatGPT answered common patient questions regarding hip arthroscopy with satisfactory accuracy, as graded by two high-volume hip arthroscopists; however, incorrect information was identified in more than one instance. Caution must be observed when using ChatGPT for patient education related to hip arthroscopy.

CLINICAL RELEVANCE:

Given the increasing number of hip arthroscopies performed annually, ChatGPT has the potential to aid physicians in educating their patients about this procedure and in addressing any questions they may have.

Full text: 1 Collection: 01-internacional Database: MEDLINE Language: En Journal: Arthroscopy Journal subject: ORTOPEDIA Year: 2024 Document type: Article Affiliation country: Saudi Arabia