Results 1 - 8 of 8
1.
Ann Surg; 276(6): e1095-e1100, 2022 Dec 1.
Article in English | MEDLINE | ID: mdl-34132692

ABSTRACT

OBJECTIVE: To examine the alignment between graduating surgical trainee operative performance and a prior survey of surgical program director expectations. BACKGROUND: Surgical trainee operative training is expected to prepare residents to independently perform clinically important surgical procedures. METHODS: We conducted a cross-sectional observational study of US general surgery residents' rated operative performance for Core general surgery procedures. Residents' expected performance on those procedures at the time of graduation was compared to the current list of Core general surgery procedures ranked by their importance for clinical practice, as assessed via a previous national survey of general surgery program directors. We also examined the frequency of individual procedures logged by residents over the course of their training. RESULTS: Operative performance ratings for 29,885 procedures performed by 1,861 surgical residents in 54 general surgery programs were analyzed. For each Core general surgery procedure, the adjusted mean probability of a graduating resident being deemed practice-ready ranged from 0.59 to 0.99 (mean 0.90, standard deviation 0.08). There was a weak correlation between the readiness of trainees to independently perform a procedure at the time of graduation and that procedure's historical importance to clinical practice (ρ = 0.22, 95% confidence interval 0.01-0.41, P = 0.06). Residents also continue to have limited opportunities to learn many procedures that are important for clinical practice. CONCLUSION: The operative performance of graduating general surgery residents may not be well aligned with surgical program director expectations.
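The correlation reported above is a rank correlation between two orderings of the Core procedures. As a rough illustration only (this is not the study's code, and all values and sample sizes below are invented), Spearman's ρ with a percentile-bootstrap confidence interval can be computed along these lines in Python:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical per-procedure data (not the study's values):
# readiness  = adjusted probability a graduate is practice-ready
# importance = mean importance rating from the director survey
n_procedures = 40
readiness = rng.uniform(0.59, 0.99, n_procedures)
importance = rng.uniform(1.0, 5.0, n_procedures)

rho, p_value = spearmanr(readiness, importance)

# Percentile bootstrap for a 95% confidence interval on rho.
boot = np.empty(5000)
for b in range(boot.size):
    idx = rng.integers(0, n_procedures, n_procedures)
    boot[b], _ = spearmanr(readiness[idx], importance[idx])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])

print(f"rho = {rho:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f}), "
      f"P = {p_value:.2f}")
```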


Subjects
General Surgery, Internship and Residency, Humans, Clinical Competence, Cross-Sectional Studies, Motivation, Surveys and Questionnaires, General Surgery/education, Education, Medical, Graduate
2.
Adv Health Sci Educ Theory Pract; 23(1): 151-158, 2018 Mar.
Article in English | MEDLINE | ID: mdl-28501933

ABSTRACT

Medical school admissions interviews are used to assess applicants' nonacademic characteristics, as advocated by the Association of American Medical Colleges' Advancing Holistic Review Initiative. The objective of this study is to determine whether academic metrics continue to significantly influence interviewers' scores in holistic processes by blinding interviewers to applicants' undergraduate grade point averages (uGPA) and Medical College Admission Test (MCAT) scores. This study examines academic and demographic predictors of interview scores for two applicant cohorts at the University of Michigan Medical School. In 2012, interviewers were provided applicants' uGPA and MCAT scores; in 2013, these academic metrics were withheld from interviewers' files. Hierarchical regression analysis was conducted to examine the influence of academic and demographic variables on overall cohort interview scores. When interviewers were provided uGPA and MCAT scores, academic metrics explained more variation in interview scores (7.9%) than when interviewers were blinded to these metrics (4.1%). Further analysis showed a statistically significant interaction between cohort and uGPA, indicating that the association between uGPA and interview scores was significantly stronger for the 2012 unblinded cohort than for the 2013 blinded cohort (β = .573, P < .05). By contrast, MCAT scores had no interactive effects on interviewer scores. While MCAT scores accounted for some variation in interview scores for both cohorts, only access to uGPA significantly influenced interviewers' scores when looking at interaction effects. Withholding academic metrics from interviewers' files may promote assessment of nonacademic characteristics independently from academic metrics.
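A minimal sketch of the moderation test described above, on simulated data (the column names, effect sizes, and cohort coding are hypothetical, not the study's): academic metrics and their interactions with cohort enter at the second step of a hierarchical regression, and the cohort × uGPA coefficient carries the blinding effect.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated applicant data; 'blinded' marks the cohort whose
# interviewers did not see uGPA or MCAT scores.
n = 600
df = pd.DataFrame({
    "blinded": rng.integers(0, 2, n),
    "ugpa": rng.normal(3.6, 0.3, n),
    "mcat": rng.normal(31, 3, n),
})
# Build scores so uGPA matters only in the unblinded cohort.
df["interview_score"] = (
    3.0 + 0.6 * df["ugpa"] * (1 - df["blinded"])
    + rng.normal(0, 0.5, n)
)

# Step 2 of a hierarchical regression: academic metrics plus
# their interactions with cohort (step-1 demographic covariates
# omitted here for brevity).
model = smf.ols(
    "interview_score ~ blinded * ugpa + blinded * mcat", data=df
).fit()
print(model.summary().tables[1])
# A significant 'blinded:ugpa' term indicates the uGPA-score
# association differs between blinded and unblinded cohorts.
```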


Subjects
College Admission Test/statistics & numerical data, Educational Measurement/standards, Interviews as Topic/standards, School Admission Criteria/statistics & numerical data, Schools, Medical/standards, Students, Medical/psychology, Students, Medical/statistics & numerical data, Adult, Cohort Studies, Female, Humans, Male, Predictive Value of Tests, Regression Analysis, United States, Young Adult
3.
Adv Health Sci Educ Theory Pract; 22(2): 337-363, 2017 May.
Article in English | MEDLINE | ID: mdl-27544387

ABSTRACT

The extant literature has largely ignored a potentially significant source of variance in multiple mini-interview (MMI) scores by "hiding" the variance attributable to the sample of attributes used on an evaluation form. This potential source of hidden variance can be defined as the rating items that typically comprise an MMI evaluation form. Due to its multi-faceted, repeated-measures format, reliability for the MMI has primarily been evaluated using generalizability (G) theory. A key assumption of G theory is that G studies model the most important sources of variance to which a researcher plans to generalize. Because G studies can only attribute variance to the facets that are modeled, failure to model potentially substantial sources of variation in MMI scores can result in biased estimates of variance components. This study demonstrates the implications of hiding the item facet in MMI studies when true item-level effects exist. An extensive Monte Carlo simulation study was conducted to examine whether a commonly used hidden-item person-by-station (p × s|i) G study design results in biased estimated variance components. Estimates from this hidden-item model were compared with estimates from a more complete person-by-station-by-item (p × s × i) model. Results suggest that when true item-level effects exist, the hidden-item model (p × s|i) will yield biased variance components, which can in turn bias reliability estimates; therefore, researchers should consider using the more complete person-by-station-by-item (p × s × i) model when evaluating the generalizability of MMI scores.
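To make the hidden-item argument concrete, the following small simulation (not the paper's code; all variance components and facet sizes are invented) shows that when a true person-by-item effect exists, a p × s analysis of item-averaged scores inflates the estimated person variance by roughly var_pi / n_i:

```python
import numpy as np

rng = np.random.default_rng(2)

# True variance components (hypothetical).
var_p, var_s, var_i = 0.50, 0.10, 0.05      # main effects
var_ps, var_pi, var_psi = 0.20, 0.30, 0.40  # interactions/residual
n_p, n_s, n_i = 2000, 10, 6                 # persons, stations, items

# Fully crossed p x s x i score matrix.
p = rng.normal(0, np.sqrt(var_p), (n_p, 1, 1))
s = rng.normal(0, np.sqrt(var_s), (1, n_s, 1))
i = rng.normal(0, np.sqrt(var_i), (1, 1, n_i))
ps = rng.normal(0, np.sqrt(var_ps), (n_p, n_s, 1))
pi = rng.normal(0, np.sqrt(var_pi), (n_p, 1, n_i))
psi = rng.normal(0, np.sqrt(var_psi), (n_p, n_s, n_i))
y = p + s + i + ps + pi + psi

# Hidden-item analysis: average over items, fit p x s only.
x = y.mean(axis=2)                      # person x station means
grand = x.mean()
ms_p = n_s * ((x.mean(1) - grand) ** 2).sum() / (n_p - 1)
res = x - x.mean(1, keepdims=True) - x.mean(0, keepdims=True) + grand
ms_res = (res ** 2).sum() / ((n_p - 1) * (n_s - 1))
var_p_hat = (ms_p - ms_res) / n_s       # two-way ANOVA estimate

print(f"true person variance:      {var_p:.3f}")
print(f"hidden-item estimate:      {var_p_hat:.3f}")
print(f"expected bias (var_pi/ni): {var_pi / n_i:.3f}")
```

With these settings the hidden-item estimate lands near 0.55 against a true person variance of 0.50, matching the expected inflation of var_pi / n_i = 0.05 because the averaged person-by-item effect is indistinguishable from a person effect once the item facet is hidden.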


Subjects
Interviews as Topic/methods, Interviews as Topic/standards, School Admission Criteria, Schools, Medical/standards, Communication, Humans, Monte Carlo Method, Reproducibility of Results
4.
Perspect Med Educ; 9(5): 318-323, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32789666

ABSTRACT

Throughout history, race and ethnicity have been used as key descriptors to categorize and label individuals. The use of these concepts as variables can impact resources, policy, and perceptions in medical education. Despite the pervasive use of race and ethnicity as quantitative variables, it is unclear whether researchers use them in their proper context. In this Eye Opener, we present the following seven considerations, with corresponding recommendations, for using race and ethnicity as variables in medical education research: 1) Ensure race and ethnicity variables are used to address questions directly related to these concepts. 2) Use race and ethnicity to represent social experiences, not biological facts, to explain the phenomenon under study. 3) Allow study participants to define their preferred racial and ethnic identity. 4) Collect complete and accurate race and ethnicity data that maximize data richness and minimize opportunities for researchers' assumptions about participants' identity. 5) Follow evidence-based practices to describe and collapse individual-level race and ethnicity data into broader categories. 6) Align statistical analyses with the study's conceptualization and operationalization of race and ethnicity. 7) Provide thorough interpretation of results beyond simple reporting of statistical significance. By following these recommendations, medical education researchers can avoid major pitfalls associated with the use of race and ethnicity and make informed decisions about some of the most challenging race and ethnicity topics in medical education.


Subjects
Ethnicity, Racial Groups/ethnology, Research Design/standards, Research/standards, Data Collection/methods, Data Collection/standards, Humans, Research/trends, Research Design/trends
5.
Acad Med; 93(6): 856-859, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29215375

ABSTRACT

Medical school assessments should foster the development of higher-order thinking skills to support clinical reasoning and a solid foundation of knowledge. Multiple-choice questions (MCQs) are commonly used to assess student learning, and well-written MCQs can support learner engagement in higher levels of cognitive reasoning, such as application or synthesis of knowledge. Bloom's taxonomy has been used to identify MCQs that assess students' critical thinking skills, with evidence suggesting that higher-order MCQs support a deeper conceptual understanding of scientific process skills. Similarly, clinical practice requires learners to develop higher-order thinking skills that span all of Bloom's levels. Faculty question writers and examinees may approach the same material differently based on varying levels of knowledge and expertise, and these differences can influence the cognitive levels being measured by MCQs. Consequently, faculty question writers may perceive that certain MCQs require higher-order thinking skills to process, whereas examinees may only need to employ lower-order thinking skills to render a correct response. Likewise, seemingly lower-order questions may actually require higher-order thinking skills to answer correctly. In this Perspective, the authors describe some of the cognitive processes examinees use to respond to MCQs. The authors propose that various factors affect both the question writer's and the examinee's interaction with test material and the subsequent cognitive processes necessary to answer a question.


Subjects
Educational Measurement/methods, Students, Medical/psychology, Thinking, Choice Behavior, Cognition, Humans, Problem Solving
6.
Acad Med; 93(8): 1212-1217, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29697428

ABSTRACT

PURPOSE: Many factors influence the reliable assessment of medical students' competencies in the clerkships. The purpose of this study was to determine how many clerkship competency assessment scores were necessary to achieve an acceptable threshold of reliability. METHOD: Clerkship student assessment data were collected during the 2015-2016 academic year as part of the medical school assessment program at the University of Michigan Medical School. Faculty and residents assigned competency assessment scores for third-year core clerkship students. Generalizability (G) and decision (D) studies were conducted using balanced, stratified, and random samples to examine the extent to which overall assessment scores could reliably differentiate between students' competency levels both within and across clerkships. RESULTS: In the across-clerkship model, the residual error accounted for the largest proportion of variance (75%), whereas the variance attributed to the student and student-clerkship effects was much smaller (7% and 10.1%, respectively). D studies indicated that generalizability estimates for eight assessors within a clerkship varied across clerkships (G coefficients ranged from 0.000 to 0.795). Within clerkships, the number of assessors needed for optimal reliability varied from 4 to 17. CONCLUSIONS: Minimal reliability was found in competency assessment scores for half of the clerkships. The variability in reliability estimates across clerkships may be attributable to differences in scoring processes and assessor training. Other medical schools face similar variation in assessments of clerkship students; therefore, the authors hope this study will serve as a model for other institutions that wish to examine the reliability of their clerkship assessment scores.
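The D-study logic here projects reliability as a function of the number of assessors. A minimal sketch that treats the variance proportions quoted in the abstract as if they were the within-clerkship components (a simplifying assumption for illustration; this is not the authors' analysis):

```python
def g_coefficient(var_student, var_error, n_assessors):
    """Expected reliability when averaging scores over n assessors."""
    return var_student / (var_student + var_error / n_assessors)

# Illustrative relative variance components from the abstract:
# student 7%, student-by-clerkship 10.1%, residual ~75%.
var_student, var_sc, var_resid = 0.07, 0.101, 0.75

# Within a single clerkship the student-by-clerkship effect is
# confounded with the student effect, so this sketch pools it
# with 'student' (an assumption, not the paper's specification).
for n in (1, 4, 8, 17):
    g = g_coefficient(var_student + var_sc, var_resid, n)
    print(f"{n:2d} assessors -> G = {g:.3f}")
```

Under these illustrative numbers, G rises from roughly 0.48 with 4 assessors to roughly 0.80 with 17, the same order of magnitude as the range reported above.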


Subjects
Clinical Clerkship/standards, Clinical Competence/standards, Educational Measurement/standards, Clinical Clerkship/statistics & numerical data, Clinical Competence/statistics & numerical data, Educational Measurement/methods, Educational Measurement/statistics & numerical data, Educational Status, Humans, Reproducibility of Results, Students, Medical/statistics & numerical data
7.
Acad Med; 93(12): 1833-1840, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30024474

ABSTRACT

PURPOSE: Transforming a medical school curriculum wherein students enter clerkships earlier could result in two cohorts in clerkships simultaneously during the transition. To avoid overlapping cohorts at the University of Michigan Medical School, the length of all required clerkships was decreased by 25% during the 2016-2017 academic year, without instituting other systematic structural changes. The authors hypothesized that the reduction in clerkship duration would result in decreases in student performance and changes in student perceptions. METHOD: One-way analyses of variance and Tukey post hoc tests were used to compare the 2016-2017 shortened clerkship cohort with the preceding traditional clerkship cohorts (2014-2015 and 2015-2016) on the following student outcomes: National Board of Medical Examiners (NBME) subject exam scores, year-end clinical skills exam scores, evaluation of clerkships, perceived stress, resiliency, well-being, and perception of the learning environment. RESULTS: There were no significant differences in performance on NBME subject exams between the shortened clerkship cohort and the 2015-2016 traditional cohort, but scores declined significantly over the three years for one exam. Perceptions of clerkship quality improved for three shortened clerkships; there were no significant declines. Learning environment perceptions were not worse for the shortened clerkships. There were no significant differences in performance on the clinical skills exam or in perceived stress, resiliency, and well-being. CONCLUSIONS: The optimal clerkship duration is a matter of strong opinion, supported by few empirical data. These results provide some evidence that accelerating clinical education may, for the studied outcomes, be feasible.
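The cohort comparison described in METHOD is a standard omnibus one-way ANOVA followed by Tukey's HSD post hoc test. A self-contained sketch on simulated scores (the means, spreads, and cohort sizes below are invented, not the study's data):

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(3)

# Simulated NBME subject-exam scores for three cohorts.
cohorts = {
    "2014-2015 traditional": rng.normal(78, 8, 170),
    "2015-2016 traditional": rng.normal(78, 8, 170),
    "2016-2017 shortened":   rng.normal(77, 8, 170),
}

# Omnibus one-way ANOVA across the three cohorts.
f_stat, p_val = f_oneway(*cohorts.values())
print(f"F = {f_stat:.2f}, p = {p_val:.3f}")

# Tukey post hoc pairwise comparisons between cohorts.
scores = np.concatenate(list(cohorts.values()))
labels = np.repeat(list(cohorts), [len(v) for v in cohorts.values()])
print(pairwise_tukeyhsd(scores, labels, alpha=0.05))
```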


Assuntos
Estágio Clínico/métodos , Competência Clínica/estatística & dados numéricos , Avaliação Educacional/estatística & dados numéricos , Estudantes de Medicina/psicologia , Fatores de Tempo , Adulto , Estudos de Viabilidade , Feminino , Humanos , Masculino , Faculdades de Medicina , Estudantes de Medicina/estatística & dados numéricos