Are large language models valid tools for patient information on lumbar disc herniation? The spine surgeons' perspective.
Lang, Siegmund; Vitale, Jacopo; Fekete, Tamás F; Haschtmann, Daniel; Reitmeir, Raluca; Ropelato, Mario; Puhakka, Jani; Galbusera, Fabio; Loibl, Markus.
Affiliation
  • Lang S; Department of Trauma Surgery, University Hospital Regensburg, Regensburg, Germany.
  • Vitale J; Spine Center, Schulthess Klinik, Zurich, Switzerland.
  • Fekete TF; Spine Center, Schulthess Klinik, Zurich, Switzerland.
  • Haschtmann D; Spine Center, Schulthess Klinik, Zurich, Switzerland.
  • Reitmeir R; Spine Center, Schulthess Klinik, Zurich, Switzerland.
  • Ropelato M; Spine Center, Schulthess Klinik, Zurich, Switzerland.
  • Puhakka J; Spine Center, Schulthess Klinik, Zurich, Switzerland.
  • Galbusera F; Spine Center, Schulthess Klinik, Zurich, Switzerland.
  • Loibl M; Spine Center, Schulthess Klinik, Zurich, Switzerland.
Brain Spine; 4: 102804, 2024.
Article in En | MEDLINE | ID: mdl-38706800
ABSTRACT

Introduction:

Generative AI is changing patient education in healthcare, particularly through chatbots that offer personalized, clear medical information. Reliability and accuracy are vital in AI-driven patient education.

Research question:

How effective are Large Language Models (LLMs), such as ChatGPT and Google Bard, in delivering accurate and understandable patient education on lumbar disc herniation?

Material and methods:

Ten frequently asked questions about lumbar disc herniation were selected from a pool of 133 questions and submitted to three LLMs. Six experienced spine surgeons rated the responses on a scale from "excellent" to "unsatisfactory" and evaluated the answers for exhaustiveness, clarity, empathy, and length. Statistical analysis involved Fleiss' kappa, chi-square, and Friedman tests.
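The statistical workflow can be illustrated with a minimal sketch. The code below is not the authors' analysis script: it uses hypothetical ratings (6 raters, 30 answers = 10 questions x 3 LLMs, a 4-point ordinal scale) and assumed variable names, and relies on statsmodels' aggregate_raters/fleiss_kappa and SciPy's chi2_contingency and friedmanchisquare to show how the three tests might be computed on data of this shape.

import numpy as np
from scipy.stats import chi2_contingency, friedmanchisquare
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
n_questions, n_llms, n_raters = 10, 3, 6
n_answers = n_questions * n_llms

# Hypothetical ordinal ratings: 1 = unsatisfactory ... 4 = excellent.
ratings = rng.integers(1, 5, size=(n_answers, n_raters))
llm = np.repeat(np.arange(n_llms), n_questions)   # LLM index for each answer row

# Inter-rater reliability across the six raters (Fleiss' kappa).
count_table, _ = aggregate_raters(ratings)        # answers x categories counts
kappa = fleiss_kappa(count_table, method="fleiss")

# Chi-square test on the rating-frequency distribution per LLM.
contingency = np.array([
    [(ratings[llm == m] == cat).sum() for cat in range(1, 5)]
    for m in range(n_llms)
])
chi2, p_chi2, _, _ = chi2_contingency(contingency)

# Friedman test: do ratings differ across the three LLMs, treating the
# ten questions as repeated measures (mean rating per answer)?
mean_per_answer = ratings.mean(axis=1).reshape(n_llms, n_questions)
stat, p_friedman = friedmanchisquare(*mean_per_answer)

print(f"Fleiss' kappa: {kappa:.2f}")
print(f"Chi-square p = {p_chi2:.3f}, Friedman p = {p_friedman:.3f}")

The reshaping into an LLM-by-question layout is one plausible way to block the Friedman test; the abstract does not specify whether the study blocked by rater or by question, so this choice is an assumption of the sketch.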

Results:

Of the responses, 27.2% were rated excellent, 43.9% satisfactory requiring minimal clarification, 18.3% satisfactory requiring moderate clarification, and 10.6% unsatisfactory. There were no significant differences in overall ratings among the LLMs (p = 0.90); however, inter-rater reliability was not achieved, and large differences among raters were detected in the distribution of answer frequencies. Overall, ratings varied among the 10 answers (p = 0.043). The average ratings for exhaustiveness, clarity, empathy, and length were all above 3.5/5.

Discussion and conclusion:

LLMs show potential in patient education for lumbar spine surgery, with generally positive feedback from evaluators. The new EU AI Act, which enforces strict regulation of AI systems, highlights the need for rigorous oversight in medical contexts. In the current study, the variability in evaluations and occasional inaccuracies underlines the need for continuous improvement. Future research should involve more advanced models to enhance patient-physician communication.
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Language: En Journal: Brain Spine Year: 2024 Document type: Article Country of affiliation: Germany
...