Large language models as assistance for glaucoma surgical cases: a ChatGPT vs. Google Gemini comparison.
Carlà, Matteo Mario; Gambini, Gloria; Baldascino, Antonio; Boselli, Francesco; Giannuzzi, Federico; Margollicci, Fabio; Rizzo, Stanislao.
Affiliation
  • Carlà MM; Ophthalmology Department, Fondazione Policlinico Universitario A. Gemelli, IRCCS, 00168, Rome, Italy. mm.carla94@gmail.com.
  • Gambini G; Ophthalmology Department, Catholic University "Sacro Cuore", Largo A. Gemelli, 8, Rome, Italy. mm.carla94@gmail.com.
  • Baldascino A; Ophthalmology Department, Fondazione Policlinico Universitario A. Gemelli, IRCCS, 00168, Rome, Italy.
  • Boselli F; Ophthalmology Department, Catholic University "Sacro Cuore", Largo A. Gemelli, 8, Rome, Italy.
  • Giannuzzi F; Ophthalmology Department, Fondazione Policlinico Universitario A. Gemelli, IRCCS, 00168, Rome, Italy.
  • Margollicci F; Ophthalmology Department, Catholic University "Sacro Cuore", Largo A. Gemelli, 8, Rome, Italy.
  • Rizzo S; Ophthalmology Department, Fondazione Policlinico Universitario A. Gemelli, IRCCS, 00168, Rome, Italy.
Graefes Arch Clin Exp Ophthalmol ; 262(9): 2945-2959, 2024 Sep.
Article in En | MEDLINE | ID: mdl-38573349
ABSTRACT

PURPOSE:

The aim of this study was to assess the ability of ChatGPT-4 and Google Gemini to analyze detailed glaucoma case descriptions and suggest an accurate surgical plan.

METHODS:

Sixty medical records of surgical glaucoma cases were retrospectively analyzed and divided into "ordinary" (n = 40) and "challenging" (n = 20) scenarios. Case descriptions were entered into the ChatGPT and Gemini (formerly Bard) interfaces with the question "What kind of surgery would you perform?", and each query was repeated three times to analyze the consistency of the answers (see the sketch below). After collecting the answers, we assessed the level of agreement with the unified opinion of three glaucoma surgeons. Moreover, we graded the quality of the responses with scores from 1 (poor quality) to 5 (excellent quality) according to the Global Quality Score (GQS) and compared the results.
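A minimal sketch of the repeated-prompt protocol described above. The study pasted each case description into the chatbots' web interfaces; the `ask_chatbot` callable and the consistency check here are illustrative assumptions, not tooling reported by the authors.

    N_REPEATS = 3  # each case was submitted three times to check answer consistency

    def collect_answers(case_description, ask_chatbot):
        """Submit the same surgical-planning question three times for one case."""
        prompt = case_description + "\n\nWhat kind of surgery would you perform?"
        return [ask_chatbot(prompt) for _ in range(N_REPEATS)]

    def is_consistent(answers):
        """Crude consistency check: all repetitions name the same procedure."""
        return len({a.strip().lower() for a in answers}) == 1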

RESULTS:

ChatGPT's surgical choice was consistent with that of the glaucoma specialists in 35/60 cases (58%), compared with 19/60 (32%) for Gemini (p = 0.0001). Gemini was unable to complete the task in 16 cases (27%). Trabeculectomy was the most frequent choice for both chatbots (53% and 50% for ChatGPT and Gemini, respectively). In "challenging" cases, ChatGPT agreed with the specialists in 9/20 choices (45%), outperforming Google Gemini (4/20, 20%). Overall, GQS scores were 3.5 ± 1.2 for ChatGPT and 2.1 ± 1.5 for Gemini (p = 0.002). This difference was even more marked when focusing only on "challenging" cases (3.0 ± 1.5 for ChatGPT vs. 1.5 ± 1.4 for Gemini, p = 0.001).
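A hedged sketch of how the agreement and GQS comparisons above could be computed from raw per-case data. The abstract does not state which statistical tests the authors used, so the chi-square test for agreement proportions and the Mann-Whitney U test for GQS scores are assumptions, and the GQS arrays are placeholders rather than the study's data.

    import numpy as np
    from scipy import stats

    # Agreement with the specialists' consensus (1 = agree, 0 = disagree)
    chatgpt_agree = [1] * 35 + [0] * 25   # 35/60 cases (58%)
    gemini_agree = [1] * 19 + [0] * 41    # 19/60 cases (32%)

    contingency = [
        [sum(chatgpt_agree), len(chatgpt_agree) - sum(chatgpt_agree)],
        [sum(gemini_agree), len(gemini_agree) - sum(gemini_agree)],
    ]
    chi2, p_agreement, _, _ = stats.chi2_contingency(contingency)

    # Placeholder Global Quality Scores (1-5), one per case and chatbot
    rng = np.random.default_rng(42)
    gqs_chatgpt = rng.integers(1, 6, size=60)
    gqs_gemini = rng.integers(1, 6, size=60)
    u_stat, p_gqs = stats.mannwhitneyu(gqs_chatgpt, gqs_gemini, alternative="two-sided")

    print(f"agreement: p = {p_agreement:.4f}; GQS: p = {p_gqs:.4f}")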

CONCLUSION:

ChatGPT-4 showed good analytical performance for glaucoma surgical cases, whether ordinary or challenging. In contrast, Google Gemini showed strong limitations in this setting, with high rates of imprecise or missing answers.

Full text: 1 Database: MEDLINE Main subject: Glaucoma Limits: Aged / Female / Humans / Male / Middle aged Language: En Year of publication: 2024 Document type: Article
