Evaluation and Comparison of Ophthalmic Scientific Abstracts and References by Current Artificial Intelligence Chatbots.
Hua, Hong-Uyen; Kaakour, Abdul-Hadi; Rachitskaya, Aleksandra; Srivastava, Sunil; Sharma, Sumit; Mammo, Danny A.
Affiliation
  • Hua HU; Cole Eye Institute, Cleveland Clinic Foundation, Cleveland, Ohio.
  • Kaakour AH; Cole Eye Institute, Cleveland Clinic Foundation, Cleveland, Ohio.
  • Rachitskaya A; Cole Eye Institute, Cleveland Clinic Foundation, Cleveland, Ohio.
  • Srivastava S; Cole Eye Institute, Cleveland Clinic Foundation, Cleveland, Ohio.
  • Sharma S; Cole Eye Institute, Cleveland Clinic Foundation, Cleveland, Ohio.
  • Mammo DA; Cole Eye Institute, Cleveland Clinic Foundation, Cleveland, Ohio.
JAMA Ophthalmol. 2023;141(9):819-824.
Article in English | MEDLINE | ID: mdl-37498609
ABSTRACT
Importance:

Language-learning model-based artificial intelligence (AI) chatbots are growing in popularity and have significant implications for both patient education and academia. Drawbacks of using AI chatbots to generate scientific abstracts and reference lists, including inaccurate content arising from hallucinations (ie, AI-generated output that deviates from its training data), have not been fully explored.

Objective:

To evaluate and compare the quality of ophthalmic scientific abstracts and references generated by earlier and updated versions of a popular AI chatbot.

Design, Setting, and Participants:

This cross-sectional comparative study used 2 versions of an AI chatbot to generate scientific abstracts and 10 references for clinical research questions across 7 ophthalmology subspecialties. The abstracts were graded by 2 authors using modified DISCERN criteria and performance evaluation scores.

Main Outcomes and Measures:

Scores for the chatbot-generated abstracts were compared using the t test. Abstracts were also evaluated by 2 AI output detectors. A hallucination rate for unverifiable references generated by the earlier and updated versions of the chatbot was calculated and compared.
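As an illustration of the two analyses named above, here is a minimal sketch (not the authors' code) of comparing mean abstract scores between chatbot versions with an independent-samples t test and computing a hallucination rate for generated references. All scores and verification outcomes below are hypothetical placeholders.

```python
# Minimal sketch, assuming hypothetical data; not the study's actual dataset.
from scipy import stats

# Hypothetical modified DISCERN scores (maximum 50) for abstracts
# produced by the earlier and updated chatbot versions.
scores_earlier = [34, 37, 35, 38, 36, 33, 38]
scores_updated = [39, 37, 40, 36, 38, 39, 38]

# Independent-samples t test comparing the two versions' mean scores.
t_stat, p_value = stats.ttest_ind(scores_earlier, scores_updated)
print(f"t = {t_stat:.2f}, P = {p_value:.2f}")

# Hallucination rate: fraction of generated references that could not be
# verified against a bibliographic database (outcomes are placeholders).
verified = [True, False, True, True, False, True, True, False, True, True]
hallucination_rate = verified.count(False) / len(verified)
print(f"Hallucination rate: {hallucination_rate:.0%}")
```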

Results:

The mean modified AI-DISCERN scores for the chatbot-generated abstracts were 35.9 and 38.1 (maximum of 50) for the earlier and updated versions, respectively (P = .30). Using the 2 AI output detectors, the mean fake scores (with a score of 100% meaning generated by AI) for the earlier and updated chatbot-generated abstracts were 65.4% and 10.8%, respectively (P = .01), for one detector and 69.5% and 42.7% (P = .17) for the second detector. The mean hallucination rates for nonverifiable references generated by the earlier and updated versions were 33% and 29%, respectively (P = .74).

Conclusions and Relevance:

Both versions of the chatbot generated average-quality abstracts. There was a high rate of hallucinated (fake) references, and caution should be used when relying on these AI resources for health education or academic purposes.
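The abstract reports that the two hallucination rates (33% vs 29%) were compared, but the specific test used for that comparison is not stated here. One plausible approach, shown below purely as an illustration with hypothetical reference counts, is a Fisher exact test on the 2x2 table of unverifiable vs verifiable references.

```python
# Illustrative sketch only: comparing two hallucination proportions with a
# Fisher exact test. Counts are hypothetical, not the study's data.
from scipy.stats import fisher_exact

# Rows: chatbot version; columns: [unverifiable refs, verifiable refs].
table = [[23, 47],   # earlier version: 23/70 unverifiable (~33%)
         [20, 50]]   # updated version: 20/70 unverifiable (~29%)

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.2f}")
```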
Full text: 1 | Collection: 01-internacional | Database: MEDLINE | Main subject: Artificial Intelligence / Eye | Type of study: Observational studies / Prevalence studies / Prognostic studies / Risk factor studies | Limits: Humans | Language: English | Journal: JAMA Ophthalmol | Year: 2023 | Type: Article
