Results 1 - 2 of 2
1.
OTO Open ; 8(2): e164, 2024.
Article in English | MEDLINE | ID: mdl-38938507

ABSTRACT

Objective: Advances in deep learning and artificial intelligence (AI) have led to the emergence of large language models (LLM) like ChatGPT from OpenAI. The study aimed to evaluate the performance of ChatGPT 3.5 and GPT4 on Otolaryngology (Rhinology) Standardized Board Examination questions in comparison to Otolaryngology residents.

Methods: This study selected all 127 rhinology standardized questions from www.boardvitals.com, a commonly used study tool by otolaryngology residents preparing for board exams. Ninety-three text-based questions were administered to ChatGPT 3.5 and GPT4, and their answers were compared with the average results of the question bank (used primarily by otolaryngology residents). Thirty-four image-based questions were provided to GPT4 and underwent the same analysis. Based on the findings of an earlier study, a pass-fail cutoff was set at the 10th percentile.

Results: On text-based questions, ChatGPT 3.5 answered correctly 45.2% of the time (8th percentile) (P = .0001), while GPT4 achieved 86.0% (66th percentile) (P = .001). GPT4 answered image-based questions correctly 64.7% of the time. Projections suggest that ChatGPT 3.5 might not pass the American Board of Otolaryngology Written Question Exam (ABOto WQE), whereas GPT4 stands a strong chance of passing.

Discussion: The older LLM, ChatGPT 3.5, is unlikely to pass the ABOto WQE. However, the advanced GPT4 model exhibits a much higher likelihood of success. This rapid progression in AI indicates its potential future role in otolaryngology education.

Implications for Practice: As AI technology rapidly advances, it may be that AI-assisted medical education, diagnosis, and treatment planning become commonplace in the medical and surgical landscape.

Level of Evidence: Level 5.
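The study's pass-fail rule reduces to simple arithmetic: a test-taker passes if their percentile on the question bank meets or exceeds the 10th-percentile cutoff. A minimal sketch of that judgment, using the text-question percentiles reported in the abstract (the function and variable names are illustrative, not from the study):

```python
# Sketch of the study's pass/fail rule: a score passes the ABOto WQE
# proxy if its question-bank percentile is at or above the cutoff.
# Percentile values below are the text-question results from the abstract;
# the 10th-percentile cutoff comes from the earlier study the authors cite.

def passes(percentile: float, cutoff: float = 10.0) -> bool:
    """Return True if a percentile meets the pass-fail cutoff."""
    return percentile >= cutoff

# Text-based question percentiles reported in the abstract.
results = {"ChatGPT 3.5": 8.0, "GPT4": 66.0}

for model, pct in results.items():
    verdict = "passes" if passes(pct) else "fails"
    print(f"{model}: {pct:.0f}th percentile -> {verdict}")
```

This mirrors the abstract's projection: ChatGPT 3.5 at the 8th percentile falls below the cutoff, while GPT4 at the 66th percentile clears it comfortably.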

2.
Otolaryngol Head Neck Surg ; 171(2): 603-608, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38751109

ABSTRACT

OBJECTIVE: The recommended readability of health education materials is at the sixth-grade level. Artificial intelligence (AI) large language models such as the newly released ChatGPT4 might facilitate the conversion of patient-education materials at scale. We sought to ascertain whether online otolaryngology education materials meet recommended reading levels and whether ChatGPT4 could rewrite these materials to the sixth-grade level. We also wished to ensure that converted materials were accurate and retained sufficient content.

METHODS: Seventy-one articles from patient educational materials published online by the American Academy of Otolaryngology-Head and Neck Surgery were selected. Articles were entered into ChatGPT4 with the prompt "translate this text to a sixth-grade reading level." Flesch Reading Ease Score (FRES) and Flesch-Kincaid Grade Level (FKGL) were determined for each article before and after AI conversion. Each article and conversion were reviewed for factual inaccuracies, and each conversion was reviewed for content retention.

RESULTS: The 71 articles had an initial average FKGL of 11.03 and FRES of 46.79. After conversion by ChatGPT4, the average FKGL across all articles was 5.80 and FRES was 77.27. Converted materials provided enough detail for patient education with no factual errors.

DISCUSSION: We found that ChatGPT4 improved the reading accessibility of otolaryngology online patient education materials to recommended levels quickly and effectively.

IMPLICATIONS FOR PRACTICE: Physicians can determine whether their patient education materials exceed current recommended reading levels by using widely available measurement tools, and then apply AI dialogue platforms to modify materials to more accessible levels as needed.

LEVEL OF EVIDENCE: Level 5.
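The two readability metrics used in this study are closed-form formulas over words-per-sentence and syllables-per-word. A minimal sketch of how such scores can be computed, using the standard Flesch formulas; note the syllable counter here is a naive vowel-group heuristic (an assumption on my part), so its output will differ slightly from dictionary-based tools like those the study presumably used:

```python
# Sketch of the Flesch Reading Ease Score (FRES) and Flesch-Kincaid
# Grade Level (FKGL) formulas used in the study. The syllable counter
# is a naive vowel-run heuristic, NOT the dictionary-based counting
# used by published readability tools, so scores are approximate.
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels (naive heuristic)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    """Return (FRES, FKGL) for a passage of English text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)           # words per sentence
    spw = syllables / len(words)                # syllables per word
    fres = 206.835 - 1.015 * wps - 84.6 * spw   # higher = easier to read
    fkgl = 0.39 * wps + 11.8 * spw - 15.59      # approximate US grade level
    return fres, fkgl
```

Simpler text (shorter sentences, fewer syllables per word) yields a higher FRES and a lower FKGL, which is the direction of change the study reports after ChatGPT4 conversion (FRES 46.79 to 77.27, FKGL 11.03 to 5.80).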


Subjects
Artificial Intelligence , Comprehension , Otolaryngology , Patient Education as Topic , Otolaryngology/education , Humans , Patient Education as Topic/methods , Teaching Materials/standards