Evaluating the strengths and limitations of multimodal ChatGPT-4 in detecting glaucoma using fundus images.
AlRyalat, Saif Aldeen; Musleh, Ayman Mohammed; Kahook, Malik Y.
Affiliation
  • AlRyalat SA; Department of Ophthalmology, The University of Jordan, Amman, Jordan.
  • Musleh AM; Department of Ophthalmology, Houston Methodist Hospital, Houston, TX, United States.
  • Kahook MY; Jordan University Hospital, Amman, Jordan.
Front Ophthalmol (Lausanne) ; 4: 1387190, 2024.
Article in English | MEDLINE | ID: mdl-38984105
ABSTRACT
Overview:

This study evaluates the diagnostic accuracy of a multimodal large language model (LLM), ChatGPT-4, in recognizing glaucoma from color fundus photographs (CFPs) using a benchmark dataset, without prior training or fine-tuning.

Methods:

The publicly accessible Retinal Fundus Glaucoma Challenge (REFUGE) dataset was used for the analysis. The input data consisted of the entire 400-image test set. The task involved classifying each fundus image as either 'Likely Glaucomatous' or 'Likely Non-Glaucomatous'. We constructed a confusion matrix to visualize ChatGPT-4's predictions, focusing on the accuracy of the binary classification (glaucoma vs. non-glaucoma).
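A minimal sketch of how such a 2x2 confusion matrix can be tallied from the binary labels described above; the label strings and the `truth`/`predicted` lists are illustrative placeholders, not the authors' actual pipeline.

```python
# Tally a 2x2 confusion matrix for the binary glaucoma classification task.
# `truth` holds REFUGE ground-truth labels and `predicted` holds ChatGPT-4's
# responses, both mapped to the two classes used in the study (placeholders).
GLAUCOMA, NON_GLAUCOMA = "Likely Glaucomatous", "Likely Non-Glaucomatous"

def confusion_matrix(truth, predicted):
    counts = {"TP": 0, "FP": 0, "FN": 0, "TN": 0}
    for t, p in zip(truth, predicted):
        if t == GLAUCOMA and p == GLAUCOMA:
            counts["TP"] += 1          # correctly flagged glaucoma
        elif t == NON_GLAUCOMA and p == GLAUCOMA:
            counts["FP"] += 1          # healthy eye flagged as glaucoma
        elif t == GLAUCOMA and p == NON_GLAUCOMA:
            counts["FN"] += 1          # missed glaucoma
        else:
            counts["TN"] += 1          # correctly labeled non-glaucoma
    return counts
```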

Results:

ChatGPT-4 demonstrated an accuracy of 90% (95% confidence interval [CI] 87.06%-92.94%). Sensitivity was 50% (95% CI 34.51%-65.49%), specificity was 94.44% (95% CI 92.08%-96.81%), precision was 50% (95% CI 34.51%-65.49%), and the F1 score was 0.50.
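These figures are consistent with standard Wald (normal-approximation) 95% intervals. The sketch below reproduces them from the confusion-matrix counts implied by the reported rates and the REFUGE test split (40 glaucomatous, 360 non-glaucomatous images); the counts are inferred for illustration, not quoted from the paper.

```python
from math import sqrt

def wald_ci(p, n, z=1.96):
    """Normal-approximation 95% CI for a proportion p estimated from n cases."""
    half = z * sqrt(p * (1 - p) / n)
    return p - half, p + half

# Counts implied by the reported sensitivity/specificity and the assumed
# 40/360 REFUGE test split (inferred, not quoted from the study).
TP, FN, FP, TN = 20, 20, 20, 340

accuracy    = (TP + TN) / (TP + TN + FP + FN)                   # 0.90
sensitivity = TP / (TP + FN)                                    # 0.50
specificity = TN / (TN + FP)                                    # 0.9444
precision   = TP / (TP + FP)                                    # 0.50
f1 = 2 * precision * sensitivity / (precision + sensitivity)    # 0.50

print(wald_ci(accuracy, 400))     # ~ (0.8706, 0.9294)
print(wald_ci(sensitivity, 40))   # ~ (0.3451, 0.6549)
print(wald_ci(specificity, 360))  # ~ (0.9208, 0.9681)
print(wald_ci(precision, 40))     # ~ (0.3451, 0.6549)
```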

Conclusion:

ChatGPT-4 achieved relatively high diagnostic accuracy without prior fine-tuning on CFPs. Given the scarcity of data in specialized medical fields, including ophthalmology, advanced AI techniques such as LLMs may require less training data than other forms of AI, with potential savings in time and financial resources. This may also pave the way for innovative tools to support specialized medical care, particularly care that depends on multimodal data for diagnosis and follow-up, irrespective of resource constraints.

Full text: 1 Collection: 01-internacional Database: MEDLINE Language: English Journal: Front Ophthalmol (Lausanne) Year: 2024 Document type: Article Country of affiliation: Jordan Country of publication: Switzerland