2.
PLOS Digit Health; 2(2): e0000198, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36812645

ABSTRACT

We evaluated the performance of a large language model called ChatGPT on the United States Medical Licensing Exam (USMLE), which consists of three exams: Step 1, Step 2 CK, and Step 3. ChatGPT performed at or near the passing threshold for all three exams without any specialized training or reinforcement. Additionally, ChatGPT demonstrated a high level of concordance and insight in its explanations. These results suggest that large language models may have the potential to assist with medical education and, potentially, clinical decision-making.

3.
Spine (Phila Pa 1976); 41(12): 1041-1048, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27294810

ABSTRACT

STUDY DESIGN: Analysis of spine-related patient education materials (PEMs) from subspecialty websites.

OBJECTIVE: To assess the readability of spine-related PEMs and compare it with readability data from 2008.

SUMMARY OF BACKGROUND DATA: Many spine patients use the Internet for health information. Several agencies recommend that online PEMs be written at no higher than a sixth-grade reading level, as health literacy predicts health-related quality-of-life outcomes. This study evaluated whether the online PEMs of the North American Spine Society (NASS), the American Association of Neurological Surgeons (AANS), and the American Academy of Orthopaedic Surgeons (AAOS) meet recommended readability guidelines for medical information.

METHODS: All publicly accessible spine-related entries in the patient education sections of the NASS, AANS, and AAOS websites were analyzed for grade-level readability using the Flesch-Kincaid formula. Readability scores were also compared with those from a similar 2008 analysis, and comparative statistics were performed.

RESULTS: A total of 125 entries from the subspecialty websites were analyzed. The average (SD) readability of the online articles was grade level 10.7 (2.3), and 117 articles (93.6%) scored above the sixth-grade level. On average, readability exceeded the maximum recommended level by 4.7 grade levels (95% CI, 4.292-5.103; P < 0.001). Compared with 2008, the three societies published more spine-related patient education articles (from 61 to 125, P = 0.045), and the average readability level improved from 11.5 to 10.7 (P = 0.018). Of the three societies examined, only one showed significant improvement over time.

CONCLUSION: These findings suggest that the spine-related PEMs on the NASS, AAOS, and AANS websites have readability levels that may make comprehension difficult for a substantial portion of the patient population. Although readability has improved somewhat over the past 7 years, further improvement is needed.

LEVEL OF EVIDENCE: 2.
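The Flesch-Kincaid formula used in the methods maps average sentence length and average syllables per word to a U.S. school grade: grade = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59. As an illustration only, not the study's actual analysis pipeline, a minimal Python sketch of the calculation might look like the following; the regex-based syllable counter is a rough heuristic of our own, and the sample sentence is hypothetical.

    import re

    def count_syllables(word):
        # Rough heuristic: one syllable per group of consecutive vowels,
        # minus a trailing silent 'e'. Production tools use dictionaries
        # or trained models for better accuracy.
        word = word.lower()
        count = len(re.findall(r"[aeiouy]+", word))
        if word.endswith("e") and count > 1:
            count -= 1
        return max(count, 1)

    def flesch_kincaid_grade(text):
        # FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (0.39 * (len(words) / len(sentences))
                + 11.8 * (syllables / len(words))
                - 15.59)

    # Hypothetical patient-education sentence, for illustration only.
    sample = "Spinal stenosis is a narrowing of the spaces within your spine."
    print(round(flesch_kincaid_grade(sample), 1))

A score of roughly 6.0 or below would satisfy the sixth-grade recommendation cited in the abstract; the study's average of 10.7 corresponds to high-school-level text.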


Subjects
Health Literacy/standards, Internet/standards, Patient Education as Topic/standards, Reading, Medical Societies/standards, Spinal Diseases/therapy, Health Literacy/methods, Humans, Patient Education as Topic/methods, Spinal Diseases/diagnosis