Med Teach. 2024 Aug;46(8):1021-1026.
Article in English | MEDLINE | ID: mdl-38146711

ABSTRACT

BACKGROUND: Crafting quality assessment questions in medical education is a crucial yet time-consuming, expertise-driven undertaking that calls for innovative solutions. Large language models (LLMs), such as ChatGPT (Chat Generative Pre-Trained Transformer), present a promising yet underexplored avenue for such innovations. AIMS: This study explores the utility of ChatGPT for generating diverse, high-quality medical questions, focusing on multiple-choice questions (MCQs) as an illustrative example, to increase educators' productivity and enable self-directed learning for students. DESCRIPTION: Leveraging 12 strategies, we demonstrate how ChatGPT can be used effectively to generate assessment questions aligned with Bloom's taxonomy and core knowledge domains while promoting best practices in assessment design. CONCLUSION: Integrating LLM tools such as ChatGPT into the generation of medical assessment questions such as MCQs augments, but does not replace, human expertise. With continual refinement of instructions, AI can produce questions of a high standard. Yet the onus of ensuring ultimate quality and accuracy remains with subject matter experts, affirming the irreplaceable value of human involvement in the artificial intelligence-driven education paradigm.
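
The abstract describes prompting ChatGPT to generate MCQs targeted at specific Bloom's taxonomy levels. As a rough illustration only, the sketch below shows what such a prompt-driven workflow might look like in Python against a chat-completion API; the model name, prompt wording, helper function, and output format are assumptions for illustration and are not the 12 strategies described in the article.

# Minimal sketch: prompting an LLM to draft one MCQ at a chosen Bloom's level.
# Model name, prompt wording, and output format are illustrative assumptions,
# not the strategies reported in the article. Requires the openai package and
# an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def draft_mcq(topic: str, bloom_level: str) -> str:
    """Ask the model for one draft MCQ; a subject matter expert must still review it."""
    prompt = (
        f"Write one multiple-choice question on '{topic}' for medical students, "
        f"targeting the '{bloom_level}' level of Bloom's taxonomy. "
        "Provide a clinical vignette stem, four options (A-D), the correct answer, "
        "and a one-sentence rationale for each option."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any capable chat model could be substituted
        messages=[
            {"role": "system", "content": "You are a medical education assessment writer."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_mcq("management of diabetic ketoacidosis", "Apply"))

Consistent with the article's conclusion, output from such a script would be a draft only; expert review of accuracy, difficulty, and answer-key correctness remains essential.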


Subject(s)
Artificial Intelligence; Education, Medical; Educational Measurement; Humans; Education, Medical/methods; Educational Measurement/methods