Head-to-Head Comparison of ChatGPT Versus Google Search for Medical Knowledge Acquisition.
Ayoub, Noel F; Lee, Yu-Jin; Grimm, David; Divi, Vasu.
Affiliation
  • Ayoub NF; Department of Otolaryngology-Head and Neck Surgery, Division of Head & Neck Surgery, Stanford University School of Medicine, Stanford, California, USA.
  • Lee YJ; Department of Otolaryngology-Head and Neck Surgery, Division of Head & Neck Surgery, Stanford University School of Medicine, Stanford, California, USA.
  • Grimm D; Department of Otolaryngology-Head and Neck Surgery, Division of Head & Neck Surgery, Stanford University School of Medicine, Stanford, California, USA.
  • Divi V; Department of Otolaryngology-Head and Neck Surgery, Division of Head & Neck Surgery, Stanford University School of Medicine, Stanford, California, USA.
Article in En | MEDLINE | ID: mdl-37529853
OBJECTIVE: Chat Generative Pretrained Transformer (ChatGPT) is the newest iteration of OpenAI's generative artificial intelligence (AI) with the potential to influence many facets of life, including health care. This study sought to assess ChatGPT's capabilities as a source of medical knowledge, using Google Search as a comparison. STUDY DESIGN: Cross-sectional analysis. SETTING: Online using ChatGPT, Google Search, and Clinical Practice Guidelines (CPG). METHODS: CPG Plain Language Summaries for 6 conditions were obtained. Questions relevant to specific conditions were developed and input into ChatGPT and Google Search. All questions were written from the patient perspective and sought (1) general medical knowledge or (2) medical recommendations, with varying levels of acuity (urgent or emergent vs routine clinical scenarios). Two blinded reviewers scored all passages and compared results from ChatGPT and Google Search, using the Patient Education Material Assessment Tool (PEMAT-P) as the primary outcome. Additional customized questions were developed that assessed the medical content of the passages. RESULTS: The overall average PEMAT-P score for medical advice was 68.2% (standard deviation [SD]: 4.4) for ChatGPT and 89.4% (SD: 5.9) for Google Search (p < .001). There was a statistically significant difference in the PEMAT-P score by source (p < .001) but not by urgency of the clinical situation (p = .613). ChatGPT scored significantly higher than Google Search (87% vs 78%, p = .012) for patient education questions. CONCLUSION: ChatGPT fared better than Google Search when offering general medical knowledge, but it scored worse when providing medical recommendations. Health care providers should strive to understand the potential benefits and ramifications of generative AI to guide patients appropriately.
Full text: 1 Collections: 01-international Database: MEDLINE Study type: Guideline / Prognostic_studies Language: En Journal: Otolaryngol Head Neck Surg Journal subject: OTOLARYNGOLOGY Year of publication: 2023 Document type: Article Country of affiliation: United States