Results 1 - 3 of 3
2.
Acad Med. 2018 Aug;93(8):1212-1217.
Article in English | MEDLINE | ID: mdl-29697428

ABSTRACT

PURPOSE: Many factors influence the reliable assessment of medical students' competencies in the clerkships. The purpose of this study was to determine how many clerkship competency assessment scores were necessary to achieve an acceptable threshold of reliability.

METHOD: Clerkship student assessment data were collected during the 2015-2016 academic year as part of the medical school assessment program at the University of Michigan Medical School. Faculty and residents assigned competency assessment scores for third-year core clerkship students. Generalizability (G) and decision (D) studies were conducted using balanced, stratified, and random samples to examine the extent to which overall assessment scores could reliably differentiate between students' competency levels both within and across clerkships.

RESULTS: In the across-clerkship model, the residual error accounted for the largest proportion of variance (75%), whereas the variance attributed to the student and student-clerkship effects was much smaller (7% and 10.1%, respectively). D studies indicated that generalizability estimates for eight assessors within a clerkship varied across clerkships (G coefficients range = 0.000-0.795). Within clerkships, the number of assessors needed for optimal reliability varied from 4 to 17.

CONCLUSIONS: Minimal reliability was found in competency assessment scores for half of clerkships. The variability in reliability estimates across clerkships may be attributable to differences in scoring processes and assessor training. Other medical schools face similar variation in assessments of clerkship students; therefore, the authors hope this study will serve as a model for other institutions that wish to examine the reliability of their clerkship assessment scores.
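The D-study projection described in this abstract can be illustrated with a short calculation. The Python sketch below is not the authors' code; it applies the standard generalizability projection for a simple person-by-assessor design, and the variance components are illustrative values loosely echoing the proportions reported above (7% student, 75% residual), not the study's actual estimates.

def g_coefficient(var_person, var_residual, n_assessors):
    # Generalizability (E rho^2) of the mean of n_assessors ratings:
    # universe-score variance divided by itself plus per-assessor error.
    return var_person / (var_person + var_residual / n_assessors)

# Illustrative variance components (proportions of total variance), assumed here.
var_person, var_residual = 0.07, 0.75

# Project reliability for several panel sizes, as a D study would.
for n in (1, 4, 8, 17):
    print(f"{n:2d} assessors -> G = {g_coefficient(var_person, var_residual, n):.3f}")

With these assumed values, even 17 assessors yield a G coefficient near 0.6, which mirrors the abstract's point that the number of assessors needed for acceptable reliability can be large and varies with the size of the residual component.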


Subjects
Clinical Clerkship/standards, Clinical Competence/standards, Educational Measurement/standards, Clinical Clerkship/statistics & numerical data, Clinical Competence/statistics & numerical data, Educational Measurement/methods, Educational Measurement/statistics & numerical data, Educational Status, Humans, Reproducibility of Results, Students, Medical/statistics & numerical data
3.
Adv Health Sci Educ Theory Pract. 2017 May;22(2):337-363.
Article in English | MEDLINE | ID: mdl-27544387

ABSTRACT

The extant literature has largely ignored a potentially significant source of variance in multiple mini-interview (MMI) scores by "hiding" the variance attributable to the sample of attributes used on an evaluation form. This potential source of hidden variance can be defined as rating items, which typically comprise an MMI evaluation form. Due to its multi-faceted, repeated measures format, reliability for the MMI has been primarily evaluated using generalizability (G) theory. A key assumption of G theory is that G studies model the most important sources of variance to which a researcher plans to generalize. Because G studies can only attribute variance to the facets that are modeled in a G study, failure to model potentially substantial sources of variation in MMI scores can result in biased estimates of variance components. This study demonstrates the implications of hiding the item facet in MMI studies when true item-level effects exist. An extensive Monte Carlo simulation study was conducted to examine whether a commonly used hidden item, person-by-station (p × s|i) G study design results in biased estimated variance components. Estimates from this hidden item model were compared with estimates from a more complete person-by-station-by-item (p × s × i) model. Results suggest that when true item-level effects exist, the hidden item model (p × s|i) will result in biased variance components which can bias reliability estimates; therefore, researchers should consider using the more complete person-by-station-by-item model (p × s × i) when evaluating generalizability of MMI scores.
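The confounding described in this abstract can be made concrete with a small back-of-the-envelope calculation. The Python sketch below is not the paper's simulation; the variance components and facet sizes are assumptions chosen for illustration. It shows that when item-level scores are averaged within stations, the person-by-item component is absorbed into universe-score variance and the three-way residual into the person-by-station term, so the hidden-item model overstates the G coefficient whenever true item effects exist.

# Hypothetical "true" components for a fully crossed p x s x i design (assumed).
true = {
    "p": 0.30,      # person (universe score)
    "ps": 0.15,     # person x station
    "pi": 0.06,     # person x item
    "psi,e": 0.30,  # person x station x item interaction / error
}
n_s, n_i = 8, 6     # stations and items per evaluation form (assumed)

# Full p x s x i model: stations and items both contribute to relative error.
err_full = true["ps"] / n_s + true["pi"] / n_i + true["psi,e"] / (n_s * n_i)
g_full = true["p"] / (true["p"] + err_full)

# Hidden-item p x s|i model: analysing station means confounds pi with the
# person effect and psi,e with the person x station residual.
var_p_hidden = true["p"] + true["pi"] / n_i
var_ps_hidden = true["ps"] + true["psi,e"] / n_i
g_hidden = var_p_hidden / (var_p_hidden + var_ps_hidden / n_s)

print(f"G (full p x s x i model):  {g_full:.3f}")
print(f"G (hidden-item p x s|i):   {g_hidden:.3f}")  # inflated relative to g_full

With these assumed values the hidden-item model reports a higher G coefficient than the full model, which is the direction of bias the abstract warns about.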


Subjects
Interviews as Topic/methods, Interviews as Topic/standards, School Admission Criteria, Schools, Medical/standards, Communication, Humans, Monte Carlo Method, Reproducibility of Results