Evaluating the Efficacy of ChatGPT as a Patient Education Tool in Prostate Cancer: Multimetric Assessment.
Gibson, Damien; Jackson, Stuart; Shanmugasundaram, Ramesh; Seth, Ishith; Siu, Adrian; Ahmadi, Nariman; Kam, Jonathan; Mehan, Nicholas; Thanigasalam, Ruban; Jeffery, Nicola; Patel, Manish I; Leslie, Scott.
Affiliations
  • Gibson D; Department of Urology, Saint George Hospital, Kogarah, Australia.
  • Jackson S; Faculty of Medicine, The University of New South Wales, Sydney, Australia.
  • Shanmugasundaram R; Surgical Outcomes Research Centre, Sydney, Australia.
  • Seth I; Faculty of Medicine, University of Sydney, Sydney, Australia.
  • Siu A; Department of Urology, Saint George Hospital, Kogarah, Australia.
  • Ahmadi N; Faculty of Medicine, The University of New South Wales, Sydney, Australia.
  • Kam J; Department of Surgery, Peninsula Health, Victoria, Australia.
  • Mehan N; Surgical Outcomes Research Centre, Sydney, Australia.
  • Thanigasalam R; Concord Institute of Academic Surgery, Concord Hospital, Sydney, Australia.
  • Jeffery N; Department of Urology, Chris O'Brien Lifehouse, Sydney, Australia.
  • Patel MI; Royal Prince Alfred Hospital Institute of Academic Surgery, Royal Prince Alfred Hospital, Sydney, Australia.
  • Leslie S; Nepean Urology Research Group, Nepean Hospital, Sydney, Australia.
J Med Internet Res; 26: e55939, 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39141904
ABSTRACT

BACKGROUND:

Artificial intelligence (AI) chatbots, such as ChatGPT, have made significant progress. These chatbots, which are particularly popular among health care professionals and patients, are transforming patient education and the disease experience by providing personalized information. Accurate, timely patient education is crucial for informed decision-making, especially regarding prostate-specific antigen screening and treatment options. However, the accuracy and reliability of the medical information that AI chatbots provide must be rigorously evaluated. Studies testing ChatGPT's knowledge of prostate cancer are emerging, but ongoing evaluation is needed to ensure the quality and safety of the information provided to patients.

OBJECTIVE:

This study aims to evaluate the quality, accuracy, and readability of ChatGPT-4's responses to common prostate cancer questions posed by patients.

METHODS:

Overall, 8 questions were formulated with an inductive approach based on information topics in peer-reviewed literature and Google Trends data. Adapted versions of the Patient Education Materials Assessment Tool for AI (PEMAT-AI), Global Quality Score, and DISCERN-AI tools were used by 4 independent reviewers to assess the quality of the AI responses. The 8 AI outputs were then judged by 7 expert urologists using an assessment framework developed to measure accuracy, safety, appropriateness, actionability, and effectiveness. The readability of the AI responses was assessed using established algorithms (Flesch Reading Ease score, Gunning Fog Index, Flesch-Kincaid Grade Level, Coleman-Liau Index, and Simple Measure of Gobbledygook [SMOG] Index). A brief tool (Reference Assessment AI [REF-AI]) was developed to analyze the references provided in the AI outputs, assessing for reference hallucination and for the relevance and quality of references.
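For illustration only (this is not the authors' actual analysis pipeline), the five named readability algorithms are standard published formulas and can be reproduced in a few lines of Python using the open-source textstat package; the sample text below is hypothetical:

import textstat  # third-party package: pip install textstat

# Hypothetical sample of patient-facing text; substitute an actual AI response.
sample = (
    "Prostate-specific antigen (PSA) screening measures the level of PSA "
    "in the blood and can help detect prostate cancer at an early stage."
)

# Flesch Reading Ease: 0-100 scale, where higher scores mean easier text.
print("Flesch Reading Ease: ", textstat.flesch_reading_ease(sample))
# The remaining four metrics estimate a US school grade level (years of education).
print("Gunning Fog Index:   ", textstat.gunning_fog(sample))
print("Flesch-Kincaid Grade:", textstat.flesch_kincaid_grade(sample))
print("Coleman-Liau Index:  ", textstat.coleman_liau_index(sample))
print("SMOG Index:          ", textstat.smog_index(sample))  # most valid on texts of 30+ sentences

Each grade-level metric estimates the years of US schooling needed to understand the text, which is how such scores are mapped to patient reading ages in studies of this kind.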

RESULTS:

The PEMAT-AI understandability score was very good (mean 79.44%, SD 10.44%), the DISCERN-AI rating was scored as "good" quality (mean 13.88, SD 0.93), and the Global Quality Score was high (mean 4.46/5, SD 0.50). The Natural Language Assessment Tool for AI had a pooled mean accuracy of 3.96 (SD 0.91), safety of 4.32 (SD 0.86), appropriateness of 4.45 (SD 0.81), actionability of 4.05 (SD 1.15), and effectiveness of 4.09 (SD 0.98). The readability algorithm consensus was "difficult to read" (Flesch Reading Ease score mean 45.97, SD 8.69; Gunning Fog Index mean 14.55, SD 4.79), averaging an 11th-grade reading level, equivalent to 15- to 17-year-olds (Flesch-Kincaid Grade Level mean 12.12, SD 4.34; Coleman-Liau Index mean 12.75, SD 1.98; SMOG Index mean 11.06, SD 3.20). REF-AI identified 2 reference hallucinations, while the majority of references (28/30, 93%) appropriately supplemented the text. Most references (26/30, 86%) were from reputable government organizations, while a handful were direct citations from the scientific literature.
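For context, the two Flesch metrics reported above are fixed linear functions of average sentence length and word length (these are the standard published definitions, not study-specific ones):

\mathrm{FRE} = 206.835 - 1.015\,\frac{\text{words}}{\text{sentences}} - 84.6\,\frac{\text{syllables}}{\text{words}}

\mathrm{FKGL} = 0.39\,\frac{\text{words}}{\text{sentences}} + 11.8\,\frac{\text{syllables}}{\text{words}} - 15.59

On the conventional FRE interpretation bands, scores of 30-50 are classed as "difficult (college level)", so the reported mean of 45.97 is consistent with the grade-level estimates of roughly 11 to 13.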

CONCLUSIONS:

Our analysis found that ChatGPT-4 provides generally good responses to common prostate cancer queries, making it a potentially valuable tool for patient education in prostate cancer care. Objective quality assessment tools indicated that the natural language processing outputs were generally reliable and appropriate, but there is room for improvement.

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Prostatic Neoplasms / Patient Education as Topic Limits: Humans / Male Language: English Journal: J Med Internet Res Journal subject: Medical Informatics Year of publication: 2024 Document type: Article Country of affiliation: Australia
