Responses From ChatGPT-4 Show Limited Correlation With Expert Consensus Statement on Anterior Shoulder Instability.
Artamonov, Alexander; Bachar-Avnieli, Ira; Klang, Eyal; Lubovsky, Omri; Atoun, Ehud; Bermant, Alexander; Rosinsky, Philip J.
Affiliation
  • Artamonov A; Orthopedic Department, Barzilai Medical Center, Ashkelon, Israel.
  • Bachar-Avnieli I; Orthopedic Department, Barzilai Medical Center, Ashkelon, Israel.
  • Klang E; Ben-Gurion University, Beer-Sheva, Israel.
  • Lubovsky O; Sagol AI Hub at ARC Innovation, Sheba Medical Center, Ramat Gan, Israel.
  • Atoun E; Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel.
  • Bermant A; Orthopedic Department, Barzilai Medical Center, Ashkelon, Israel.
  • Rosinsky PJ; Ben-Gurion University, Beer-Sheva, Israel.
Arthrosc Sports Med Rehabil; 6(3): 100923, 2024 Jun.
Article in English | MEDLINE | ID: mdl-39006799
ABSTRACT

Purpose:

To compare the similarity of answers provided by Generative Pretrained Transformer-4 (GPT-4) with those of a consensus statement on diagnosis, nonoperative management, and Bankart repair in anterior shoulder instability (ASI).

Methods:

An expert consensus statement on ASI published by Hurley et al. in 2022 was reviewed, and the questions posed to the expert panel were extracted. GPT-4, the subscription version of ChatGPT, was queried with the same set of questions. The answers provided by GPT-4 were compared with those of the expert panel and subjectively rated for similarity by 2 experienced shoulder surgeons. GPT-4 was then used to rate the similarity of its own responses to the consensus statement, classifying them as low, medium, or high. The similarity ratings assigned by the shoulder surgeons and by GPT-4 were then compared, and interobserver reliability was calculated using weighted κ scores.
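
The weighted κ statistic mentioned above can be computed with standard statistical tools. The following is a minimal sketch in Python, assuming linear weights and scikit-learn's cohen_kappa_score; the ratings shown are hypothetical placeholders, and the abstract does not specify the weighting scheme or software actually used.

    # Sketch: weighted Cohen's kappa between two raters of ordinal
    # similarity categories (low / medium / high).
    # NOTE: the ratings below are hypothetical, not the study's data,
    # and "linear" weighting is an assumption.
    from sklearn.metrics import cohen_kappa_score

    # Encode the ordinal categories so the weighting reflects their order.
    levels = {"low": 0, "medium": 1, "high": 2}

    surgeon_ratings = ["low", "medium", "high", "medium", "low", "high"]   # hypothetical
    gpt4_ratings    = ["medium", "medium", "high", "high", "low", "high"]  # hypothetical

    y1 = [levels[r] for r in surgeon_ratings]
    y2 = [levels[r] for r in gpt4_ratings]

    kappa = cohen_kappa_score(y1, y2, weights="linear")
    raw_agreement = sum(a == b for a, b in zip(y1, y2)) / len(y1)

    print(f"Weighted kappa: {kappa:.2f}")
    print(f"Raw agreement: {raw_agreement:.1%}")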

Results:

The degree of similarity between the responses of GPT-4 and the ASI consensus statement, as rated by the shoulder surgeons, was high for 25.8%, medium for 45.2%, and low for 29% of questions. GPT-4 rated its own similarity as high for 48.3%, medium for 41.9%, and low for 9.7% of questions. The surgeons and GPT-4 agreed on the classification of 18 questions (58.1%) and disagreed on 13 questions (41.9%).

Conclusions:

The responses generated by artificial intelligence exhibit limited correlation with an expert consensus statement on the diagnosis and treatment of ASI.

Clinical Relevance:

As the use of artificial intelligence becomes more prevalent, it is important to understand how closely the information it produces resembles content produced by human authors.

Full text: 1 | Collection: 01-internacional | Database: MEDLINE | Language: English | Journal: Arthrosc Sports Med Rehabil | Year: 2024 | Document type: Article | Affiliation country: Israel
