GPT-based chatbot tools are still unreliable in the management of prosthetic joint infections.
Musculoskelet Surg; 2024 Jul 02.
Article in En | MEDLINE | ID: mdl-38954323
ABSTRACT
BACKGROUND:
Artificial intelligence chatbot tools might discern patterns and correlations that elude human observation, leading to more accurate and timely interventions. However, their reliability in answering healthcare-related questions is still debated. This study aimed to assess the performance of three GPT-based chatbot tools on questions about prosthetic joint infections (PJI).
METHODS:
Thirty questions concerning the diagnosis and treatment of hip and knee PJIs, stratified by a priori established difficulty, were generated by a team of experts and administered to ChatGPT 3.5, BingChat, and ChatGPT 4.0. Responses were rated by three orthopedic surgeons and two infectious diseases physicians on a five-point Likert-like scale with numerical values to quantify response quality. Inter-rater reliability was assessed with intraclass correlation statistics.
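The abstract does not state which intraclass correlation model the authors used. As a rough illustration only, the Python sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single rater, per Shrout and Fleiss), one common choice for inter-rater reliability with a fixed panel of raters. The function name, the questions-by-raters data layout, and the simulated Likert scores are assumptions for this sketch, not the authors' code or data.

# Hypothetical sketch: ICC(2,1) for a (questions x raters) matrix of Likert scores.
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    n, k = scores.shape                       # n questions, k raters
    grand = scores.mean()
    row_means = scores.mean(axis=1)           # per-question means
    col_means = scores.mean(axis=0)           # per-rater means
    ss_total = ((scores - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)               # between-questions mean square
    ms_cols = ss_cols / (k - 1)               # between-raters mean square
    ms_err = ss_err / ((n - 1) * (k - 1))     # residual mean square
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Example with made-up ratings: 30 questions, 5 raters, scores 1-5.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(30, 5)).astype(float)
print(f"ICC(2,1) = {icc_2_1(ratings):.3f}")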
RESULTS:
Responses averaged "good to very good" for all chatbots examined, in both diagnosis and treatment, with no significant differences according to question difficulty. However, BingChat ratings were significantly lower in the treatment setting (p = 0.025), particularly in terms of accuracy (p = 0.02) and completeness (p = 0.004). Agreement in ratings among examiners was very poor.
CONCLUSIONS:
On average, experts rated the quality of responses positively, but individual ratings often varied widely. This suggests that AI chatbot tools are still unreliable in the management of PJI.
Full text: 1
Collection: 01-internacional
Database: MEDLINE
Language: En
Journal: Musculoskelet Surg
Journal subject: ORTHOPEDICS
Year: 2024
Type: Article
Affiliation country: Italy