1.
Article in English | MEDLINE | ID: mdl-38502460

ABSTRACT

Despite the increasing implementation of formative assessment in medical education, its effect on learning behaviour remains questionable. This effect may depend on how differently students value formative and summative assessments. Informed by Expectancy Value Theory, we compared test preparation, feedback use, and test-taking motivation of medical students who either took a purely formative progress test (formative PT group) or a progress test that yielded study credits (summative PT group). In a mixed-methods study design, we triangulated quantitative questionnaire data (n = 264), logging data from an online PT feedback system (n = 618), and qualitative interview data (n = 21) to compare feedback use and test-taking motivation between the formative PT group (n = 316) and the summative PT group (n = 302). Self-reported and actual feedback consultation were higher in the summative PT group. Test preparation and active feedback use were relatively low and similar in both groups. Both quantitative and qualitative results showed that the motivation to prepare and to consult feedback relates to how students value the assessment. In the interview data, a link could be made with goal orientation theory: performance-oriented students perceived the formative PT as unimportant because it carried no study credits, which led to low test-taking effort and little feedback consultation after the formative PT. In contrast, learning-oriented students valued the formative PT and used it for self-study or self-assessment to gain feedback. Our results indicate that most students are less motivated to put effort into the test and to use feedback when there are no direct consequences. A supportive assessment environment that emphasizes the value of formative testing is required to motivate students to use feedback for learning.

2.
Med Educ ; 2024 Mar 10.
Article in English | MEDLINE | ID: mdl-38462812

ABSTRACT

BACKGROUND: Active engagement with feedback is crucial for feedback to be effective and to improve students' learning and achievement. Medical students receive feedback on their development through the progress test (PT), which has been implemented in various medical curricula, although its format, integration, and feedback differ across institutions. Existing research on engagement with feedback in the context of the PT is insufficient for a definitive judgement on what works and which barriers exist. We therefore conducted an interview study to explore students' feedback use in medical progress testing. METHODS: All Dutch medical students take a national, curriculum-independent PT four times a year. This mandatory test, composed of multiple-choice questions, provides students with written feedback on their scores; an answer key is also available for reviewing their answers. Semi-structured interviews were conducted with 21 preclinical and clinical medical students who had taken the PT. Template analysis was performed on the qualitative data using a priori themes based on previous research on feedback use. RESULTS: Template analysis revealed that students faced challenges in crucial internal psychological processes that affect feedback use, including 'awareness', 'cognizance', 'agency' and 'volition'. Factors such as stakes, available time, feedback timing, and feedback presentation contributed to these difficulties, ultimately hindering feedback use. Notably, feedback engagement was higher during clinical rotations, and students were interested in the feedback when seeking insight into their performance level and career perspectives. CONCLUSION: Our study enhances the understanding of students' feedback use in medical progress testing by identifying the key processes and factors that affect it. By recognising and addressing barriers to feedback use, we can improve both student and teacher feedback literacy, thereby transforming the PT into a more valuable learning tool.

3.
Ned Tijdschr Geneeskd ; 168, 2024 02 01.
Article in Dutch | MEDLINE | ID: mdl-38319315

ABSTRACT

Assessment plays a significant role in the career of medical doctors. Not only are they assessed themselves, many medical doctors are also involved in teaching, which includes the creation of tests. Knowledge of high-quality assessment questions is therefore essential. Multiple-choice questions (MCQs) are commonly used, but they allow for cueing, stimulate recognition-based learning, and do not align with clinical practice. The Very Short Answer Question (VSAQ), an open-ended question with a limited answer, is a good alternative: it does not allow for cueing, is authentic, and encourages students to study more actively. The marking time of VSAQs is relatively short, and plausible alternative answer options are no longer needed. It is time to challenge the limits of our comfort zone and to dare to use VSAQs in our assessments more often. In this way, good and representative assessments can stimulate the learning process of medical doctors and form a strong foundation for professional practice.
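The short VSAQ marking time mentioned above is usually explained by semi-automated matching of free-text answers. The sketch below illustrates that idea only; the function names, normalisation rules, and answer lists are assumptions for illustration, not the workflow described in the article.

```python
# Illustrative sketch (not the authors' system): VSAQ marking can scale
# because free-text answers are normalised and matched against a reviewed
# list of accepted answers. Only unmatched responses go to the examiner,
# who can then add a newly approved answer for the whole cohort at once.

def normalise(answer: str) -> str:
    """Lower-case and collapse whitespace so trivial variants match."""
    return " ".join(answer.lower().strip().split())

def mark_vsaq(response: str, accepted: set) -> str:
    """Return 'correct' on a match, otherwise 'review' for the examiner."""
    return "correct" if normalise(response) in accepted else "review"

# Hypothetical accepted answers for a single question:
accepted = {normalise(a) for a in ["myocardial infarction", "heart attack"]}

print(mark_vsaq("  Myocardial  infarction ", accepted))  # → correct
print(mark_vsaq("angina", accepted))                     # → review
```

Under this model, the per-question examiner effort is bounded by the number of distinct unrecognised answers rather than by the cohort size, which is consistent with the short marking times the article reports.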


Subject(s)
Physicians, Students, Humans
4.
PLoS One ; 18(7): e0288558, 2023.
Article in English | MEDLINE | ID: mdl-37450485

ABSTRACT

Multiple-choice questions (MCQs) offer high reliability and easy machine marking, but they allow for cueing and stimulate recognition-based learning. Very short answer questions (VSAQs), open-ended questions requiring a very short answer, may circumvent these limitations. Although VSAQ use in medical assessment is increasing, almost all research on the reliability and validity of VSAQs in medical education has been performed by a single research group with extensive experience in developing them. We therefore aimed to validate previous findings on VSAQ reliability, discrimination, and acceptability among undergraduate medical students and teachers with limited experience in VSAQ development. To this end, we partially replicated a previous study and extended its results on student experiences. Dutch undergraduate medical students (n = 375) were randomized to answer VSAQs first and MCQs second, or vice versa, in a formative exam in two courses, to determine reliability, discrimination, and cueing. Acceptability for teachers (i.e., VSAQ review time) was determined in the summative exam. Reliability (Cronbach's α) was 0.74 for VSAQs and 0.57 for MCQs in one course; in the other course, Cronbach's α was 0.87 for VSAQs and 0.83 for MCQs. Discrimination (average Rir) was 0.27 vs. 0.17 and 0.43 vs. 0.39 for VSAQs vs. MCQs, respectively. Reviewing one VSAQ for the entire student cohort took about 2 minutes on average. Positive cueing occurred more often in MCQs than in VSAQs (20% vs. 4% and 20.8% vs. 8.3% of questions per person in the two courses). This study validates the positive results regarding VSAQ reliability, discrimination, and acceptability in undergraduate medical students. Furthermore, we demonstrate that VSAQs remain reliable in the hands of teachers with limited experience in writing and marking them. The short learning curve for teachers, favourable marking time, and applicability regardless of topic suggest that VSAQs may also be valuable beyond medical assessment.
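The two statistics the abstract reports, Cronbach's α (reliability) and the item-rest correlation Rir (discrimination), can be computed directly from an item-score matrix. The sketch below uses standard definitions of both statistics; the score matrix is invented for illustration and has no relation to the study's data.

```python
# Minimal sketch of the two item statistics reported above, using plain
# Python. Rows are students, columns are items scored 0 (wrong) / 1 (right).
# The data below is made up for illustration only.

def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = len(scores[0])  # number of items
    items = [[row[i] for row in scores] for i in range(k)]
    totals = [sum(row) for row in scores]
    return (k / (k - 1)) * (1 - sum(variance(it) for it in items) / variance(totals))

def item_rest_correlation(scores, i):
    """Rir: Pearson correlation of item i with the rest score (total minus item i)."""
    item = [row[i] for row in scores]
    rest = [sum(row) - row[i] for row in scores]
    mi, mr = sum(item) / len(item), sum(rest) / len(rest)
    cov = sum((a - mi) * (b - mr) for a, b in zip(item, rest)) / len(item)
    return cov / (variance(item) ** 0.5 * variance(rest) ** 0.5)

# Hypothetical responses from six students on four items:
scores = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
print(f"alpha = {cronbach_alpha(scores):.2f}")
print(f"Rir(item 0) = {item_rest_correlation(scores, 0):.2f}")
```

A higher average Rir means items separate stronger from weaker students more sharply, which is the sense in which the abstract's 0.27 vs. 0.17 favours VSAQs over MCQs.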


Subject(s)
Education, Medical, Undergraduate, Education, Medical, Students, Medical, Humans, Reproducibility of Results, Educational Measurement/methods, Education, Medical, Undergraduate/methods