Med Sci Educ ; 34(2): 331-333, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38686158

ABSTRACT

Purpose: We examined the performance of artificial intelligence chatbots on the PREview Practice Exam, an online situational judgment test for professionalism and ethics.

Methods: We used validated methodologies to calculate scores, and descriptive statistics, χ² tests, and Fisher's exact tests to compare scores by model and competency.

Results: GPT-3.5 and GPT-4 scored 6/9 (76th percentile) and 7/9 (92nd percentile), respectively, higher than the medical school applicant average of 5/9 (56th percentile). Both models answered 95+% of questions correctly.

Conclusions: The chatbots outperformed the average applicant on PREview, suggesting their potential for healthcare training and decision-making and highlighting risks of online assessment delivery.
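The abstract reports comparing model scores with χ² and Fisher's exact tests. As an illustrative sketch (not the authors' code, and with made-up counts), a two-sided Fisher's exact test on a 2×2 table of (correct, incorrect) answers per model can be computed with only the Python standard library:

```python
# Minimal two-sided Fisher's exact test for a 2x2 contingency table.
# All counts below are hypothetical; the paper does not publish per-item data.
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher's exact p-value for a 2x2 table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    row1, col1, n = a + b, a + c, a + b + c + d
    denom = comb(n, row1)

    def prob(x):
        # Hypergeometric probability that the top-left cell equals x,
        # given the fixed row and column margins.
        return comb(col1, x) * comb(n - col1, row1 - x) / denom

    p_obs = prob(a)
    lo = max(0, row1 - (n - col1))   # smallest feasible top-left cell
    hi = min(row1, col1)             # largest feasible top-left cell
    # Sum probabilities of all tables at least as extreme as the observed one.
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Hypothetical example: two models each answering 60 questions.
gpt35 = [57, 3]   # correct, incorrect (illustrative numbers only)
gpt4 = [59, 1]
p = fisher_exact_two_sided([gpt35, gpt4])
print(f"Fisher's exact two-sided p = {p:.3f}")
```

With these hypothetical counts the difference is not significant (p ≈ 0.62); Fisher's exact test is the usual choice here because the expected incorrect-answer counts are too small for the χ² approximation to be reliable.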
