Results 1 - 2 of 2
1.
Acad Med; 99(8): 912-921, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38412485

ABSTRACT

PURPOSE: Clinical reasoning, a complex construct integral to the practice of medicine, has been challenging to define, teach, and assess. Programmatic assessment purports to overcome the validity limitations of judgments made from individual assessments through proportionality and triangulation processes. This study explored a pragmatic approach to the programmatic assessment of clinical reasoning.

METHOD: The study analyzed data from 2 student cohorts from the University of Utah School of Medicine (UUSOM) (n = 113 in cohort 1 and n = 119 in cohort 2) and 1 cohort from the University of Colorado School of Medicine (CUSOM) (n = 199), using assessment data spanning 2017 to 2021. The study methods included the following: (1) asking faculty judges to categorize student clinical reasoning skills, (2) selecting institution-specific assessment data conceptually aligned with clinical reasoning, (3) calculating correlations between assessment data and faculty judgments, and (4) developing regression models between assessment data and faculty judgments.

RESULTS: Faculty judgments of student clinical reasoning skills were converted to a continuous variable of clinical reasoning struggles, with mean (SD) ratings of 2.93 (0.27) for the 232 UUSOM students and 2.96 (0.17) for the 199 CUSOM students. A total of 67 and 32 discrete assessment variables were included from the UUSOM and CUSOM, respectively. Pearson r correlations between many individual and composite assessment variables and faculty judgments were moderate to strong. Regression models demonstrated an overall adjusted R² (standard error of the estimate) of 0.50 (0.19) for UUSOM cohort 1, 0.28 (0.15) for UUSOM cohort 2, and 0.30 (0.14) for CUSOM.

CONCLUSIONS: This study represents an early pragmatic exploration of regression analysis as a potential tool for operationalizing the proportionality and triangulation principles of programmatic assessment. The study found that programmatic assessment may be a useful framework for the longitudinal assessment of complicated constructs, such as clinical reasoning.
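The correlation and regression steps (3) and (4) can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' actual pipeline: the synthetic data, the number of variables, and names such as faculty_rating are hypothetical (the study used 67 UUSOM and 32 CUSOM institution-specific assessment variables), and it assumes statsmodels/scipy-style tooling.

```python
# Hedged sketch of correlating assessment variables with faculty judgments
# and fitting a regression model reporting adjusted R^2. Data are synthetic.
import numpy as np
import statsmodels.api as sm
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_students, n_vars = 113, 5                      # e.g., UUSOM cohort 1 size; 5 toy variables
X = rng.normal(size=(n_students, n_vars))        # standardized assessment scores (hypothetical)
weights = rng.uniform(0.05, 0.15, size=n_vars)   # arbitrary true effects for the toy data
faculty_rating = 2.93 + X @ weights + 0.2 * rng.normal(size=n_students)

# Step (3): Pearson correlation of each assessment variable with faculty judgments.
for j in range(n_vars):
    r, p = pearsonr(X[:, j], faculty_rating)
    print(f"assessment_{j}: r = {r:+.2f} (p = {p:.3f})")

# Step (4): regress faculty judgments on the assessment variables and
# report the adjusted R^2, as in the study's programmatic models.
model = sm.OLS(faculty_rating, sm.add_constant(X)).fit()
print(f"adjusted R^2 = {model.rsquared_adj:.2f}")
```

In a real programmatic-assessment setting, the predictors would be the institution's longitudinal assessment data rather than simulated scores, and model selection would decide which of the candidate variables to retain.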


Subjects
Clinical Competence, Clinical Reasoning, Education, Medical, Undergraduate, Educational Measurement, Humans, Education, Medical, Undergraduate/methods, Educational Measurement/methods, Clinical Competence/statistics & numerical data, Utah, Colorado, Male, Students, Medical/statistics & numerical data, Female, Cohort Studies, Faculty, Medical
2.
J Med Educ Curric Dev; 10: 23821205231179534, 2023.
Article in English | MEDLINE | ID: mdl-37435475

ABSTRACT

OBJECTIVES: In-training examinations (ITEs) are a popular teaching tool for certification programs. This study examines the relationship between examinees' performance on the National Commission for Certification of Anesthesiologist Assistants (NCCAA) ITE and the high-stakes NCCAA Certification Examination.

METHODS: We used a mixed-methods approach. Before estimating the models for the predictive validity study, we conducted a series of interviews with program directors to discuss the role of the ITE in students' education. Multiple linear regression analysis was then used to assess the strength of the relationship between ITE and Certification Examination scores, while accounting for the percentage of the anesthesiologist assistant program that examinees had completed between their ITE and Certification Examination attempts. Logistic regression analysis was used to estimate the probability of passing the Certification Examination as a function of ITE score.

RESULTS: Interviews with program directors confirmed that the ITE provided a valuable testing experience for students and highlighted the areas where students need to focus. Both the ITE score and the percentage of the program completed between exams were statistically significant predictors of Certification Examination scores. The logistic regression model indicated that higher ITE scores implied a higher probability of passing the Certification Examination.

CONCLUSION: This research demonstrated the high predictive validity of ITE scores in predicting Certification Examination outcomes. Together with the proportion of the program completed between exams, these variables explain a significant amount of the variability in Certification Examination scores. ITE feedback helped students assess their preparedness and better focus their studies for the profession's high-stakes certification examination.
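The two models described in the methods can be sketched as below. This is a hedged illustration only: the NCCAA data are not public, so the score scale, the passing cutoff, and all coefficients here are invented stand-ins, and the code assumes statsmodels-style tooling.

```python
# Illustrative sketch of the two models: multiple linear regression on
# Certification Examination scores, and logistic regression for pass
# probability. All data below are synthetic stand-ins for NCCAA records.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
ite_score = rng.normal(450, 60, size=n)        # hypothetical scaled ITE scores
pct_complete = rng.uniform(0.3, 1.0, size=n)   # share of program completed between exams
cert_score = 100 + 0.6 * ite_score + 80 * pct_complete + rng.normal(0, 25, size=n)

X = sm.add_constant(np.column_stack([ite_score, pct_complete]))

# Multiple linear regression: Certification Examination score as a function
# of ITE score and percentage of the program completed between attempts.
ols = sm.OLS(cert_score, X).fit()
print(ols.params, ols.rsquared)

# Logistic regression: probability of passing as a function of the same
# predictors, using a hypothetical passing cutoff of 420.
passed = (cert_score >= 420).astype(int)
logit = sm.Logit(passed, X).fit(disp=0)
print(logit.predict(X[:5]))                    # predicted pass probabilities
```

In practice, a positive and significant ITE coefficient in the logistic model is what supports the abstract's conclusion that higher ITE scores imply a higher probability of passing.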
