Am J Pharm Educ ; : 101266, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39153573

ABSTRACT

OBJECTIVE: This study aimed to develop a prompt engineering procedure for test question mapping and then determine the effectiveness of test question mapping using ChatGPT compared to human faculty mapping.

METHODS: We conducted a cross-sectional study to compare ChatGPT and human mapping using a sample of 139 test questions from modules within an integrated pharmacotherapeutics course series. The test questions were mapped by three faculty members to both module objectives and the Accreditation Council for Pharmacy Education Standards 2016 (Standards 2016) to create the "correct answer". Prompt engineering procedures were created to facilitate mapping with ChatGPT, and ChatGPT mapping results were compared with human mapping.

RESULTS: ChatGPT mapped test questions directly to the "correct answer" based on human consensus in 68.0% of cases, and it matched at least one individual human response in another 20.1% of cases, for a total of 88.1% agreement with human mappers. When humans fully agreed on the mapping decision, ChatGPT was more likely to map correctly.

CONCLUSION: This study presents a practical use case with prompt engineering tailored for college assessment or curriculum committees to facilitate efficient mapping of test questions to educational outcomes.
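To make the workflow concrete, here is a minimal sketch of the kind of procedure the abstract describes: building a mapping prompt for one test question and classifying a model's mapping against the three faculty mappings. The question text, objective names, and scoring labels are illustrative assumptions, not the study's actual materials or prompts.

```python
def build_mapping_prompt(question: str, objectives: list[str]) -> str:
    """Compose a prompt asking a model to map a test question to one objective."""
    numbered = "\n".join(f"{i + 1}. {obj}" for i, obj in enumerate(objectives))
    return (
        "You are mapping pharmacy exam questions to course objectives.\n"
        f"Question: {question}\n"
        f"Objectives:\n{numbered}\n"
        "Answer with the number of the single best-matching objective."
    )

def agreement_category(model_choice: int, faculty_choices: list[int]) -> str:
    """Classify a model mapping against the faculty mappings (hypothetical scheme)."""
    # Majority vote among faculty mappers stands in for the "correct answer".
    consensus = max(set(faculty_choices), key=faculty_choices.count)
    if faculty_choices.count(consensus) >= 2 and model_choice == consensus:
        return "consensus match"   # direct agreement with the human consensus
    if model_choice in faculty_choices:
        return "partial match"     # agrees with at least one individual mapper
    return "no match"

# Example with made-up data: three faculty mappers chose objectives 1, 1, 2.
prompt = build_mapping_prompt(
    "Which drug class is first-line for stage 1 hypertension?",
    ["Select antihypertensive therapy", "Counsel on statin side effects"],
)
print(agreement_category(1, [1, 1, 2]))  # prints "consensus match"
```

In the study's terms, "consensus match" corresponds to the 68.0% of cases and "partial match" to the additional 20.1%; the actual prompts and adjudication rules would come from the paper itself.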
