Results 1 - 20 of 52
1.
Med Teach ; 46(2): 188-195, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37542358

ABSTRACT

Post-assessment psychometric reports are a vital component of the assessment cycle, ensuring that assessments are reliable, valid and fair enough to support appropriate pass-fail decisions. Students' scores can be summarised by examining frequency distributions, measures of central tendency and measures of dispersion. Item discrimination indices, which assess the quality of items and of distractors by differentiating between students who have and have not achieved the learning outcomes, are key. Estimating individual item reliability and item validity indices can maximise test-score reliability and validity. Test accuracy can be evaluated by assessing test reliability, consistency and validity, and the standard error of measurement can be used to quantify the variation in scores. Standard setting, even by experts, may be unreliable, and reality checks such as the Hofstee method, P values and correlation analysis can improve validity. The Rasch model of student ability and item difficulty assists in modifying assessment questions and pinpointing areas for additional instruction. We propose 12 tips to support test developers in interpreting structured psychometric reports, including analysing and refining flawed items and ensuring fair assessments with accurate and defensible marks.
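
To make the reported statistics concrete, the following minimal sketch (illustrative only, not taken from the article) computes the usual report quantities from a students-by-items matrix of 0/1 scores: item difficulty, corrected item discrimination, Cronbach's alpha and the standard error of measurement. The simulated data and variable names are assumptions made for the example.

import numpy as np

def item_analysis(responses):
    # responses: students x items matrix of 0/1 scores
    n_students, n_items = responses.shape
    total = responses.sum(axis=1)
    difficulty = responses.mean(axis=0)                       # proportion answering each item correctly
    discrimination = np.array([
        np.corrcoef(responses[:, i], total - responses[:, i])[0, 1]
        for i in range(n_items)
    ])                                                        # corrected point-biserial per item
    item_var = responses.var(axis=0, ddof=1)
    total_var = total.var(ddof=1)
    alpha = (n_items / (n_items - 1)) * (1 - item_var.sum() / total_var)   # Cronbach's alpha
    sem = total.std(ddof=1) * np.sqrt(1 - alpha)              # standard error of measurement
    return difficulty, discrimination, alpha, sem

rng = np.random.default_rng(0)
simulated = (rng.random((200, 40)) < 0.7).astype(int)         # 200 students, 40 items (simulated)
diff, disc, alpha, sem = item_analysis(simulated)
print(alpha, sem)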


Subject(s)
Educational Measurement; Students, Medical; Humans; Psychometrics; Reproducibility of Results; Educational Measurement/methods; Learning
2.
Med Teach ; 44(6): 582-595, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34726546

ABSTRACT

The ratings that judges or examiners use to determine pass marks and students' performance on OSCEs serve a number of essential functions in medical education assessment, and their validity is a pivotal issue. However, certain types of error often occur in ratings and require special effort to minimise. Rater characteristics (e.g. generosity error, severity error, central tendency error or halo error) may introduce performance-irrelevant variance. Prior literature shows the fundamental problems in student performance measurement that arise from judges' or examiners' errors. It also indicates that controlling such errors supports a robust and credible pass mark and thus accurate student marks. Therefore, for a standard-setter who identifies the pass mark and an examiner who rates student performance in OSCEs, proper, user-friendly feedback on their standard-setting and ratings is essential for reducing bias. This feedback provides useful avenues for understanding why performance ratings may be irregular and how to improve the quality of ratings. This AMEE Guide discusses various methods of feedback to support examiners' understanding of student performance and the standard-setting process, with the aim of making inferences from assessments fair, valid and reliable.
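
One crude form of such feedback, sketched below purely for illustration (the Guide's own methods are more sophisticated), is to compare each examiner's mean awarded score with the overall mean to flag possible severity or leniency; the examiner labels, stations and scores are made up.

import pandas as pd

ratings = pd.DataFrame({
    "examiner": ["A", "A", "B", "B", "C", "C"],
    "station":  ["history", "exam", "history", "exam", "history", "exam"],
    "score":    [14, 15, 10, 9, 18, 19],
})

overall_mean = ratings["score"].mean()
severity = ratings.groupby("examiner")["score"].mean() - overall_mean
print(severity)   # markedly negative values flag a relatively severe examiner, positive a lenient one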


Subject(s)
Education, Medical; Students, Medical; Clinical Competence; Educational Measurement/methods; Feedback; Humans
6.
Med Teach ; 39(10): 1010-1015, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28768456

ABSTRACT

As a medical educator, you may be directly or indirectly involved in the quality of assessments. Measurement has a substantial role in developing the quality of assessment questions and student learning. The information provided by psychometric data can help address pedagogical issues in medical education. Through measurement we are able to assess the learning experiences of students. Standard setting plays an important role in assessing the quality of students' performance as future doctors. Presenting performance data to standard setters may contribute towards developing a credible and defensible pass mark. Validity and reliability of test scores are the most important factors in developing quality assessment questions. Analysis of the answers to individual questions provides useful feedback that allows assessment leads to improve the quality of each question, and hence make students' marks fair with respect to diversity and ethnicity. Item characteristic curves (ICCs) can alert assessment leads to individual questions whose quality needs to be improved.
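
As an illustration of the item characteristic curves mentioned above, the short sketch below plots ICCs under a two-parameter logistic model; the item parameters are invented for the example and the choice of model is an assumption, not drawn from the article.

import numpy as np
import matplotlib.pyplot as plt

def icc(theta, a, b):
    # two-parameter logistic item characteristic curve
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-4, 4, 200)
for a, b in [(1.5, -1.0), (0.8, 0.0), (1.2, 1.5)]:    # (discrimination, difficulty), made-up values
    plt.plot(theta, icc(theta, a, b), label=f"a={a}, b={b}")
plt.xlabel("Student ability (theta)")
plt.ylabel("Probability of a correct response")
plt.legend()
plt.show()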


Subject(s)
Education, Medical, Undergraduate/methods; Educational Measurement; Learning; Students, Medical; Education, Medical; Humans; Psychometrics; Reproducibility of Results
8.
Med Teach ; 44(4): 453-454, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35037563
9.
Med Teach ; 36(10): 838-48, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24845954

ABSTRACT

Medical educators need to understand and conduct medical education research in order to make informed decisions based on the best evidence, rather than rely on their own hunches. The purpose of this Guide is to provide medical educators, especially those who are new to medical education research, with a basic understanding of how quantitative and qualitative methods contribute to the medical education evidence base through their different inquiry approaches and also how to select the most appropriate inquiry approach to answer their research questions.


Subject(s)
Research Design; Anthropology, Cultural/methods; Communication; Data Collection/methods; Education, Medical; Ethics, Research; Humans; Interpersonal Relations; Models, Theoretical; Reproducibility of Results; Sample Size
10.
Med Teach ; 36(9): 746-56, 2014 Sep.
Article in English | MEDLINE | ID: mdl-24846122

ABSTRACT

Medical educators need to understand and conduct medical education research in order to make informed decisions based on the best evidence, rather than rely on their own hunches. The purpose of this Guide is to provide medical educators, especially those who are new to medical education research, with a basic understanding of how quantitative and qualitative methods contribute to the medical education evidence base through their different inquiry approaches and also how to select the most appropriate inquiry approach to answer their research questions.


Subject(s)
Education, Medical/organization & administration; Research Design; Research/organization & administration; Evidence-Based Practice; Humans
12.
J Nurs Meas ; 22(1): 94-105, 2014.
Article in English | MEDLINE | ID: mdl-24851666

ABSTRACT

BACKGROUND AND PURPOSE: Although the importance of item response theory (IRT) has been emphasized in health and medical education, in practice, few psychometricians in nurse education have used these methods to create tests that discriminate well at any level of student ability. The purpose of this study is to evaluate the psychometric properties of a real objective test using three-parameter IRT. METHODS: Three-parameter IRT was used to monitor and improve the quality of the test items. RESULTS: Item parameter indices, item characteristic curves (ICCs), test information functions, and test characteristic curves reveal aberrant items which do not assess the construct being measured. CONCLUSIONS: The results of this study provide useful information for educators to improve the quality of assessment, teaching strategies, and curricula.
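
For readers unfamiliar with the model, the three-parameter logistic (3PL) function used in such analyses has the standard form sketched below; the parameter values are illustrative and not taken from the study.

import numpy as np

def p_correct_3pl(theta, a, b, c):
    # three-parameter logistic model: a = discrimination, b = difficulty, c = pseudo-guessing
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)                  # a small grid of ability values
print(p_correct_3pl(theta, a=1.2, b=0.5, c=0.2))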


Subject(s)
Education, Nursing, Baccalaureate; Educational Measurement/methods; Pediatric Nursing/education; Humans; Models, Educational; Psychometrics
13.
Med Teach ; 35(1): e838-48, 2013.
Article in English | MEDLINE | ID: mdl-23137252

ABSTRACT

Classical test theory has traditionally been used to carry out post-examination analysis of objective test data. It uses descriptive methods and aggregated data to help identify sources of measurement error and unreliability in a test, in order to minimise them. Item response theory (IRT), and in particular Rasch analysis, uses more complex methods to produce outputs that not only identify sources of measurement error and unreliability, but also identify the way item difficulty interacts with student ability. In this guide, a knowledge-based test is analysed by the Rasch method to demonstrate the variety of useful outputs it can provide. IRT provides a much deeper analysis, giving a range of information on the behaviour of individual test items and individual students as well as the underlying constructs being examined. Graphical displays can be used to evaluate the ease or difficulty of items across the student ability range, as well as providing a visual method for judging how well the difficulty of items on a test matches student ability. By displaying data in this way, problem test items are more easily identified and modified, allowing medical educators to move iteratively towards the 'perfect' test, in which the distribution of item difficulty mirrors the distribution of student ability.
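
For reference, the Rasch model applied in the guide takes the standard form below (standard notation, not reproduced from the article): the probability that student n with ability \theta_n answers item i with difficulty b_i correctly is

\[
P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)},
\]

so an item is well targeted when its difficulty lies close to the abilities of the students taking the test.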


Subject(s)
Educational Measurement/standards; Models, Statistical; Education, Medical; Educational Measurement/statistics & numerical data; Factor Analysis, Statistical; Humans; Psychometrics; Students, Medical
14.
Int J Med Educ ; 14: 123-130, 2023 Sep 7.
Article in English | MEDLINE | ID: mdl-37678838

ABSTRACT

Objectives: To measure intra-standard-setter variability and to assess variation between the pass marks obtained from Angoff ratings, guided by latent trait theory as the theoretical model. Methods: A non-experimental cross-sectional study was conducted. Two knowledge-based tests were administered to 358 final-year medical students (223 females and 135 males) as part of their normal summative programme of assessments. The results of judgmental standard setting using the Angoff method, which is widely used in medical schools, were analysed with the three-parameter item response theory (IRT) model to determine intra-standard-setter inconsistency. Permission for this study was granted by the local Research Ethics Committee of the University of Nottingham. To ensure anonymity and confidentiality, all student-level identifiers were removed before the data were analysed. Results: The results of this study confirm that the three-parameter IRT model can be used to analyse the results of individual judgmental standard setters. Overall, standard setters behaved fairly consistently in both tests. The mean Angoff ratings and the conditional probabilities were strongly positively correlated, which bears on inter-standard-setter validity. Conclusions: We recommend that assessment providers adopt the methodology used in this study to help identify inter- and intra-judgmental inconsistencies across standard setters and so minimise the number of false positive and false negative decisions.
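
For context, a test-level Angoff pass mark is conventionally obtained by averaging the judges' summed item ratings; the minimal sketch below (judge ratings invented for illustration, not data from the study) shows that calculation together with a crude index of inter-judge spread.

import numpy as np

# rows = judges, columns = items; each entry is a judge's estimate of the probability
# that a minimally competent (borderline) student answers the item correctly
angoff_ratings = np.array([
    [0.6, 0.7, 0.5, 0.8],
    [0.5, 0.8, 0.4, 0.9],
    [0.7, 0.6, 0.6, 0.7],
])

per_judge_cutscore = angoff_ratings.sum(axis=1)     # each judge's expected borderline score
pass_mark = per_judge_cutscore.mean()               # test-level Angoff pass mark
judge_spread = per_judge_cutscore.std(ddof=1)       # crude index of inter-judge variation
print(pass_mark, judge_spread)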


Subject(s)
Academic Performance; Education, Medical; Program Evaluation; Humans; Male; Female; Students, Medical; Education, Medical/standards; Cross-Sectional Studies; Models, Theoretical
15.
Med Educ ; 46(3): 306-16, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22324530

ABSTRACT

CONTEXT: Empathy towards patients is associated with improved health outcomes. However, quantitative studies using self-reported data have not provided an in-depth opportunity to explore the lived experiences of medical students concerning empathy. OBJECTIVES: This study was designed to investigate undergraduate medical students' experiences of the phenomenon of empathy during the course of their medical education and to explore the essence of their empathy. METHODS: This was a descriptive, phenomenological study of medical student interviews conducted using the method of Colaizzi and Giorgi. The sample (n = 10) was drawn from medical students in Years 4 and 5. In-depth interviews were used to obtain a clear understanding of their experiences of empathy in the context of patient care. Interviews continued until no new information could be identified from the transcripts. RESULTS: Five themes were identified from the analysis: the meaning of empathy; willingness to empathise; innate empathic ability; empathy decline or enhancement; and empathy education. Empathic ability was manifested through two factors: innate capacity for empathy and barriers to displaying empathy. Different experiences of, and explanations for, the decline or enhancement of empathy during medical education were explored. CONCLUSIONS: Empathic ability was identified as an important innate attribute which can nevertheless be enhanced by educational interventions. Barriers to the expression of empathy with patients were identified. Role-modelling by clinical teachers was seen as the most important influence on empathy education for students engaged in experiential learning.


Subject(s)
Education, Medical, Undergraduate/methods; Empathy; Physician-Patient Relations; Students, Medical/psychology; Adult; Comprehension; Female; Humans; Male; Problem-Based Learning; Qualitative Research; Young Adult
17.
Med Teach ; 34(3): 245-8, 2012.
Article in English | MEDLINE | ID: mdl-22364458

ABSTRACT

As great emphasis is rightly placed upon the importance of assessment in judging the quality of our future healthcare professionals, it is appropriate not only to choose the most appropriate assessment method, but also to continually monitor the quality of the tests themselves, in the hope that we may continually improve the process. This article stresses the importance of quality control mechanisms in the exam cycle and briefly outlines some of the key psychometric concepts, including reliability measures, factor analysis, generalisability theory and item response theory. The importance of such analyses for standard-setting procedures is emphasised. This article also accompanies two new AMEE Guides in Medical Education (Tavakol M, Dennick R. Post-examination Analysis of Objective Tests: AMEE Guide No. 54, and Tavakol M, Dennick R. 2012. Post examination analysis of objective test data: Monitoring and improving the quality of high stakes examinations: AMEE Guide No. 66), which provide the reader with practical examples of analysis and interpretation, in order to help develop valid and reliable tests.


Subject(s)
Education, Medical/standards; Educational Measurement/standards; Humans
18.
Med Teach ; 34(3): e161-75, 2012.
Article in English | MEDLINE | ID: mdl-22364473

ABSTRACT

The purpose of this Guide is to provide both logical and empirical evidence to help medical teachers improve their objective tests through appropriate interpretation of post-examination analysis. This requires a description and explanation of some basic statistical and psychometric concepts derived from both Classical Test Theory (CTT) and Item Response Theory (IRT), such as descriptive statistics, exploratory and confirmatory factor analysis, Generalisability Theory and Rasch modelling. CTT is concerned with the overall reliability of a test, whereas IRT can be used to identify the behaviour of individual test items and how they interact with individual student abilities. We have provided the reader with practical examples clarifying the use of these frameworks in test development and for research purposes.
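
As a small illustration of one CTT-side analysis the Guide covers, the sketch below runs an exploratory factor analysis on simulated item scores using scikit-learn; the data, factor count and interpretation are assumptions made for the example.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
ability = rng.normal(size=(300, 1))                            # one simulated underlying trait
loadings = rng.uniform(0.4, 0.9, size=(1, 20))                 # made-up item loadings
scores = ability @ loadings + rng.normal(scale=0.5, size=(300, 20))

fa = FactorAnalysis(n_components=2).fit(scores)
print(np.round(fa.components_, 2))   # items loading mainly on a single factor suggest a unidimensional test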


Subject(s)
Education, Medical/standards; Educational Measurement/standards; Data Interpretation, Statistical; Education, Medical/methods; Educational Measurement/methods; Factor Analysis, Statistical; Faculty, Medical; Humans; Psychometrics; Reproducibility of Results
19.
Int J Med Educ ; 13: 100-106, 2022 Apr 22.
Article in English | MEDLINE | ID: mdl-35462355

ABSTRACT

Assessments in medical education, with their consequent decisions about performance and competence, have a profound and far-reaching impact on students and their future careers. Physicians who make decisions about students must be confident that these decisions are based on objective, valid and reliable evidence and are thus fair. The increasing use of psychometrics has aimed to minimise measurement bias as a major threat to fairness in testing. There is now a substantial literature on psychometric methods and their applications, ranging from basic to advanced, outlining how assessment providers can improve their exams to make them fairer and minimise the errors attached to assessments. Understanding the mathematical models behind some of these methods may be difficult for some assessment providers, in particular clinicians. This guide requires no prior knowledge of mathematics and describes some of the key methods used to improve and develop assessments, which is essential for those involved in interpreting assessment results. The article aligns each method with the Standards for Educational and Psychological Testing framework, recognised as the gold standard for testing guidance since the 1960s. This helps the reader develop a deeper understanding of how assessors provide evidence for reliability and validity with respect to test construction, evaluation, fairness, application and consequences, and provides a platform for better understanding the literature on other, more complex psychometric concepts that are not specifically covered in this article.


Subject(s)
Education, Medical; Physicians; Clinical Competence; Educational Measurement; Humans; Psychometrics; Reproducibility of Results