Results 1 - 5 of 5
1.
BMC Med Educ ; 22(1): 177, 2022 Mar 15.
Article in English | MEDLINE | ID: mdl-35291995

ABSTRACT

BACKGROUND: Most work on the validity of clinical assessments for measuring learner performance in graduate medical education has occurred at the residency level. Minimal research exists on the validity of clinical assessments for measuring learner performance in advanced subspecialties. We sought to determine validity characteristics of cardiology fellows' assessment scores during subspecialty training; cardiology is the largest subspecialty of internal medicine. Validity evidence included item content, internal consistency reliability, and associations between faculty-of-fellow clinical assessments and other pertinent variables.

METHODS: This was a retrospective validation study exploring content, internal structure, and relations-to-other-variables validity evidence for scores on faculty-of-fellow clinical assessments that include the 10-item Mayo Cardiology Fellows Assessment (MCFA-10). Participants included 7 cardiology fellowship classes. The MCFA-10 items had previously been validated in the assessment of internal medicine residents. Internal structure evidence was assessed through Cronbach's α. The outcome for relations-to-other-variables evidence was the overall mean faculty-of-fellow assessment score (scale 1-5). Independent variables included common measures of fellow performance.

RESULTS: Participants included 65 cardiology fellows. The overall mean ± standard deviation faculty-of-fellow assessment score was 4.07 ± 0.18. Content evidence for the MCFA-10 scores was based on published literature and core competencies. Cronbach's α was 0.98, indicating high internal consistency reliability and offering evidence of internal structure validity. In multivariable analysis providing relations-to-other-variables evidence, mean assessment scores were independently associated with in-training examination scores (beta = 0.088 per 10-point increase; p = 0.05) and receipt of a departmental or institutional award (beta = 0.152; p = 0.001). Assessment scores were not associated with educational conference attendance, compliance with completion of required evaluations, faculty appointment upon completion of training, or performance on the board certification exam. R² for the multivariable model was 0.25.

CONCLUSIONS: These findings provide validity evidence for item content, internal consistency reliability, and associations with other variables for faculty-of-fellow clinical assessment scores that include MCFA-10 items during cardiology fellowship. Relations-to-other-variables evidence included associations of assessment scores with performance on the in-training examination and receipt of competitive awards. These data support the utility of the MCFA-10 as a measure of performance during cardiology training and could serve as the foundation for future research on the assessment of subspecialty learners.
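The internal-consistency statistic reported above, Cronbach's α, can be computed directly from an item-score matrix. A minimal sketch in Python (the ratings below are hypothetical illustrations on the 1-5 scale, not the study's data):

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for rows of per-item scores (one row per ratee):
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(scores[0])
    items = list(zip(*scores))                 # transpose to per-item columns
    item_var_sum = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Hypothetical data: 5 fellows x 4 items; items move together, so alpha is high
ratings = [
    [4, 4, 5, 4],
    [3, 3, 3, 3],
    [5, 5, 5, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 4],
]
alpha = cronbach_alpha(ratings)   # close to the 0.98 range reported above
```

A very high α (such as the 0.98 reported) can also signal item redundancy: raters may not be distinguishing among the 10 items.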


Subject(s)
Awards and Prizes , Cardiology , Clinical Competence , Educational Measurement , Humans , Reproducibility of Results , Retrospective Studies
2.
Am J Gastroenterol ; 107(11): 1610-4, 2012 Nov.
Article in English | MEDLINE | ID: mdl-23160284

ABSTRACT

OBJECTIVES: We studied whether differences exist in evaluation scores of faculty and trainees in gastroenterology (GI) based on the gender of the evaluator or evaluatee, or the evaluator-evaluatee gender pairing.

METHODS: We examined evaluations of faculty and trainees (GI fellows and internal medicine residents rotating on GI services), using mixed linear models to assess effects of the four possible evaluator-evaluatee gender pairings. Potential confounding variables were adjusted for, and random effects were used to account for repeated assessments.

RESULTS: For internal medicine (IM) residents, no difference in evaluation scores based on gender was found. Resident age was negatively associated with performance rating, while percentage correct on the in-training examination (ITE) was positively associated. For GI fellows, the interaction between evaluator and evaluatee gender was significant. Fellow age and international medical graduate (IMG) status were negatively associated with performance rating, while ITE percentage correct was positively associated. For faculty, no difference was found in evaluation scores by IM residents based on the gender of the evaluated faculty or the evaluating resident, although the interaction between evaluator and evaluatee gender was significant. Gender had a significant marginal effect on faculty scores by GI fellows, with female faculty receiving lower scores. The interaction between evaluator and evaluatee gender was also significant for evaluations by fellows. Faculty age was negatively associated with performance rating.

DISCUSSION: Gender, age, and ITE performance are associated with evaluation scores of GI trainees and faculty at our institution. The interaction of evaluator and evaluatee gender appears to play a more critical role in evaluation scoring than the gender of the evaluatee or evaluator in isolation.
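The evaluator-evaluatee gender pairing enters a model like this study's as an interaction term between two dummy-coded gender indicators: the interaction column lets the model test whether the *pairing* matters beyond each gender's separate marginal effect. A minimal illustration of the coding (variable names are assumptions, not the authors'):

```python
def gender_pairing_terms(evaluator: str, evaluatee: str) -> dict:
    """Dummy-code one evaluator-evaluatee pair ("F" or "M") for a regression
    design with main effects plus an interaction term."""
    ev = 1 if evaluator == "F" else 0
    ee = 1 if evaluatee == "F" else 0
    return {
        "evaluator_female": ev,      # marginal effect of evaluator gender
        "evaluatee_female": ee,      # marginal effect of evaluatee gender
        "female_x_female": ev * ee,  # nonzero only for the F->F pairing
    }

row = gender_pairing_terms("F", "M")
```

With these three columns (plus an intercept), the four pairings each get a distinct predicted score, which is what allows a pairing effect to be separated from the two marginal effects.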


Subject(s)
Educational Measurement , Gastroenterology/education , Adult , Age Factors , Clinical Competence , Female , Foreign Medical Graduates/statistics & numerical data , Humans , Internship and Residency , Linear Models , Male , Middle Aged , Sex Factors
3.
Tex Heart Inst J ; 47(4): 258-264, 2020 08 01.
Article in English | MEDLINE | ID: mdl-33472223

ABSTRACT

Variables in cardiology fellowship applications have not been objectively analyzed against applicants' subsequent clinical performance. We investigated possible correlations in a retrospective cohort study of 65 cardiology fellows at the Mayo Clinic (Rochester, Minn) who began 2 years of clinical training from July 2007 through July 2013. Application variables included the strength of comparative statements in recommendation letters and the authors' academic ranks, membership status in the Alpha Omega Alpha Honor Medical Society, awards earned, volunteer activities, United States Medical Licensing Examination (USMLE) scores, advanced degrees, publications, and completion of a residency program ranked in the top 6 in the United States. The outcome was clinical performance as measured by a mean of faculty evaluation scores during clinical training. The overall mean evaluation score was 4.07 ± 0.18 (scale, 1-5). After multivariable analysis, evaluation scores were associated with Alpha Omega Alpha designation (β=0.13; 95% CI, 0.01-0.25; P=0.03), residency program reputation (β=0.13; 95% CI, 0.05-0.21; P=0.004), and strength of comparative statements in recommendation letters (β=0.08; 95% CI, 0.01-0.15; P=0.02), particularly in letters from residency program directors (β=0.05; 95% CI, 0.01-0.08; P=0.009). Objective factors to consider in the cardiology fellowship application include Alpha Omega Alpha membership, residency program reputation, and comparative statements from residency program directors.
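The reported β coefficients are least-squares regression slopes: the expected change in mean evaluation score per unit change in a predictor, holding the other predictors fixed. For a single predictor the fit reduces to a closed form, sketched here with toy numbers (not study data):

```python
from statistics import mean

def ols_fit(x, y):
    """Least-squares fit y ~ a + b*x for one predictor.

    b = cov(x, y) / var(x) is the slope ('beta'); a is the intercept.
    The multivariable case generalizes this to several predictors at once.
    """
    mx, my = mean(x), mean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Toy data: a binary predictor (e.g. a 0/1 membership flag) against a score
intercept, slope = ols_fit([0, 0, 1, 1], [3.9, 4.0, 4.05, 4.15])
```

On a 1-5 evaluation scale with SD ≈ 0.18, a β of 0.13 is a substantial fraction of a standard deviation, which is why it reads as a meaningful effect despite its small absolute size.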


Subject(s)
Cardiology/education , Clinical Competence , Education, Medical, Graduate/methods , Internship and Residency/methods , Adult , Female , Follow-Up Studies , Humans , Male , Retrospective Studies , United States
4.
Teach Learn Med ; 21(3): 188-94, 2009 Jul.
Article in English | MEDLINE | ID: mdl-20183337

ABSTRACT

BACKGROUND: Assessment score reliability is usually based on a single analysis. However, reliability is an essential component of validity, and assessment validation and revision is a never-ending cycle. For ongoing assessments over extended time frames, real-time reliability computations may alert users to possible changes in the learning environment that are revealed by variations in reliability over time.

PURPOSE: To develop software that calculates the reliability of clinical assessments in real time.

METHODS: Over 2,400 assessment forms were analyzed. We developed software that calculates reliability in real time. Software accuracy was verified by comparing its output with a standard method. Factor analysis determined scale dimensionality.

RESULTS: Correlation between our software and a standard method was excellent (ICC for kappas = 0.97; Cronbach's alphas differed by < 0.03). Cronbach's alpha ranged from 0.94 to 0.97, and weighted kappa ranged from 0.08 to 0.40. Factor analysis confirmed 3 teaching domains.

CONCLUSIONS: We describe an accurate method for calculating reliability in real time. Real-time computation provides a mechanism for detecting possible changes in the learning environment (related to curriculum, teachers, and students) that are indicated by changes in reliability over time. This technique will enable investigators to monitor and detect changes in the reliability of assessment scores and, with future study, isolate aspects of the learning environment that affect reliability.
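Real-time reliability monitoring amounts to recomputing statistics such as kappa each time a new form arrives. The study used *weighted* kappa, which additionally credits near-misses on an ordinal scale; this sketch shows the simpler unweighted Cohen's kappa, which shares the same chance-correction idea:

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Unweighted Cohen's kappa: chance-corrected agreement between two raters.

    kappa = (p_observed - p_chance) / (1 - p_chance), where p_chance is the
    agreement expected if each rater assigned categories independently at
    their own observed rates.
    """
    n = len(rater1)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    p_chance = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (p_obs - p_chance) / (1 - p_chance)

# Two raters scoring the same six forms on a 1-3 category scale (toy data)
k = cohen_kappa([1, 2, 3, 1], [1, 2, 3, 2])
```

"Real time" then just means calling this on the growing (or sliding-window) set of forms as each one is submitted, and flagging when the trend shifts.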


Subject(s)
Clinical Competence , Education, Medical, Graduate/standards , Educational Measurement/standards , Internal Medicine/standards , Internship and Residency/standards , Software , Adult , Factor Analysis, Statistical , Female , Humans , Internal Medicine/education , Male , Reproducibility of Results
5.
JAMA ; 300(11): 1326-33, 2008 Sep 17.
Article in English | MEDLINE | ID: mdl-18799445

ABSTRACT

CONTEXT: Unprofessional behaviors in medical school predict high-stakes consequences for practicing physicians, yet little is known about specific behaviors associated with professionalism during residency.

OBJECTIVE: To identify behaviors that distinguish highly professional residents from their peers.

DESIGN, SETTING, AND PARTICIPANTS: Comparative study of 148 first-year internal medicine residents at Mayo Clinic from July 1, 2004, through June 30, 2007.

MAIN OUTCOME MEASURES: Professionalism as determined by multiple observation-based assessments by peers, senior residents, faculty, medical students, and nonphysician professionals over 1 year. Highly professional residents were defined as those who received a total professionalism score at the 80th percentile or higher of observation-based assessments on a 5-point scale (1, needs improvement; 5, exceptional). They were compared with residents who received professionalism scores below the 80th percentile according to In-Training Examination (ITE) scores, Mini-Clinical Evaluation Exercise (mini-CEX) scores, conscientious behaviors (percentage of completed evaluations and conference attendance), and receipt of a warning or probation from the residency program.

RESULTS: The median total professionalism score among highly professional residents was 4.39 (interquartile range [IQR], 4.32-4.44) vs 4.07 (IQR, 3.91-4.17) among comparison residents. Highly professional residents achieved higher median scores on the ITE (65.5; IQR, 60.5-73.0 vs 63.0; IQR, 59.0-67.0; P = .03) and on the mini-CEX (3.95; IQR, 3.63-4.20 vs 3.69; IQR, 3.36-3.90; P = .002), and they completed a greater percentage of required evaluations (95.6%; IQR, 88.1%-99.0% vs 86.1%; IQR, 70.6%-95.0%; P < .001) compared with residents with lower professionalism scores. In multivariate analysis, a professionalism score in the top 20% of residents was independently associated with ITE scores (odds ratio [OR] per 1-point increase, 1.07; 95% confidence interval [CI], 1.01-1.14; P = .046), mini-CEX scores (OR, 4.64; 95% CI, 1.23-17.48; P = .02), and completion of evaluations (OR, 1.07; 95% CI, 1.01-1.13; P = .02). Six of the 8 residents who received a warning or probation had total professionalism scores in the bottom 20% of residents.

CONCLUSION: Observation-based assessments of professionalism were associated with residents' knowledge, clinical skills, and conscientious behaviors.
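The odds ratios above come from logistic regression: each fitted coefficient exponentiates to an OR, and the fitted logit converts to a predicted probability of the outcome (here, being in the top 20% on professionalism). A minimal sketch of that arithmetic (illustrative numbers only):

```python
import math

def odds_ratio(beta: float) -> float:
    """A logistic-regression coefficient exponentiates to an odds ratio."""
    return math.exp(beta)

def predicted_probability(intercept: float, beta: float, x: float) -> float:
    """Predicted probability from the fitted logit: p = 1 / (1 + exp(-(a + b*x)))."""
    return 1 / (1 + math.exp(-(intercept + beta * x)))

# The abstract's OR of 1.07 per 1-point ITE increase corresponds to a
# coefficient of ln(1.07); a coefficient of 0 corresponds to OR = 1 (no effect).
beta_ite = math.log(1.07)
```

Note the scale dependence: an OR of 1.07 per point compounds over a 10-point ITE difference to roughly 1.07^10 ≈ 2, which is why per-unit ORs near 1 can still matter.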


Subject(s)
Clinical Competence , Ethics, Medical , Internship and Residency , Physician's Role , Achievement , Faculty, Medical , Humans , Logistic Models , Peer Group , Students, Medical