ChatGPT and Google Assistant as a Source of Patient Education for Patients With Amblyopia: Content Analysis.
Wu, Gloria; Lee, David A; Zhao, Weichen; Wong, Adrial; Jhangiani, Rohan; Kurniawan, Sri.
Affiliation
  • Wu G; University of California, San Francisco School of Medicine, San Francisco, CA, United States.
  • Lee DA; McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX, United States.
  • Zhao W; College of Biological Sciences, University of California, Davis, Davis, CA, United States.
  • Wong A; College of Biological Sciences, University of California, Davis, Davis, CA, United States.
  • Jhangiani R; Department of Computational Media, University of California, Santa Cruz, Santa Cruz, CA, United States.
  • Kurniawan S; Department of Computational Media, University of California, Santa Cruz, Santa Cruz, CA, United States.
J Med Internet Res ; 26: e52401, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39146013
ABSTRACT

BACKGROUND:

We queried ChatGPT (OpenAI) and Google Assistant about amblyopia and compared their answers with the keywords found in the amblyopia section of the American Association for Pediatric Ophthalmology and Strabismus (AAPOS) website. Of the 26 keywords chosen from the website, ChatGPT included 11 (42%) in its responses, while Google Assistant included 8 (31%).

OBJECTIVE:

Our study investigated the adherence of ChatGPT-3.5 and Google Assistant to the guidelines of the AAPOS for patient education on amblyopia.

METHODS:

ChatGPT-3.5 was used. The four questions, taken from the glossary section for amblyopia on the AAPOS website, are as follows: (1) What is amblyopia? (2) What causes amblyopia? (3) How is amblyopia treated? (4) What happens if amblyopia is untreated? The keywords from AAPOS, selected and approved by ophthalmologists (GW and DL), were words or phrases deemed significant for the education of patients with amblyopia. The "Flesch-Kincaid Grade Level" formula, approved by the US Department of Education, was used to evaluate the reading comprehension level of the responses from ChatGPT, Google Assistant, and AAPOS.
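
For reference, the Flesch-Kincaid Grade Level is computed as 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. Below is a minimal Python sketch of how such a score can be computed on a response text; the vowel-group syllable counter is a naive heuristic of our own choosing, as the study does not describe its implementation:

    import re

    def count_syllables(word):
        # Naive heuristic: count runs of consecutive vowels.
        # (Assumption: the study does not specify its syllable counter.)
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_kincaid_grade(text):
        # Standard Flesch-Kincaid Grade Level formula:
        # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (0.39 * (len(words) / len(sentences))
                + 11.8 * (syllables / len(words)) - 15.59)

    sample = ("Amblyopia is decreased vision in one or both eyes due to "
              "abnormal development of vision in infancy or childhood.")
    print(round(flesch_kincaid_grade(sample), 1))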

RESULTS:

In their responses, ChatGPT did not mention the term "ophthalmologist," whereas Google Assistant and AAPOS mentioned it once and twice, respectively. ChatGPT did, however, use the term "eye doctors" once. According to the Flesch-Kincaid test, the average reading level of AAPOS was 11.4 (SD 2.1), the lowest required reading level, while that of Google Assistant was 13.1 (SD 4.8), the highest, with the greatest variation in grade level across its responses. ChatGPT's answers averaged a grade level of 12.4 (SD 1.1). Overall, the three sources were similar in reading difficulty. For the keywords, across the 4 responses, ChatGPT used 42% (11/26) of the keywords, whereas Google Assistant used 31% (8/26).
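
As an illustration of the keyword-coverage metric reported above, the short sketch below checks a response against a keyword list by case-insensitive matching; the keywords shown are a hypothetical subset, since the study's 26 AAPOS keywords are not reproduced in this record:

    # Hypothetical subset of AAPOS keywords (illustration only).
    keywords = ["ophthalmologist", "patching", "strabismus", "lazy eye"]
    response = ("Amblyopia, sometimes called lazy eye, may be treated "
                "with patching of the stronger eye.")

    hits = [k for k in keywords if k.lower() in response.lower()]
    coverage = len(hits) / len(keywords)  # e.g., 11/26 = 42% for ChatGPT in the study
    print(hits, f"{coverage:.0%}")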

CONCLUSIONS:

ChatGPT trains on texts and phrases and generates new sentences, while Google Assistant automatically copies website links. As ophthalmologists, we should consider including "see an ophthalmologist" on our websites and in our journals. While ChatGPT is here to stay, we, as physicians, need to monitor its answers.

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Amblyopia / Patient Education as Topic / Internet Limits: Humans Language: English Journal: J Med Internet Res / Journal of Medical Internet Research Journal subject: MEDICAL INFORMATICS Year: 2024 Document type: Article Country of affiliation: United States
