Results 1 - 13 of 13
1.
Teach Learn Med; 29(3): 280-285, 2017.
Article in English | MEDLINE | ID: mdl-28632015

ABSTRACT

CONSTRUCT: We investigated the extent of the associations between medical students' clinical competency, measured by performance on Objective Structured Clinical Examinations (OSCEs) during Obstetrics/Gynecology and Family Medicine clerkships, and later performance in both undergraduate and graduate medical education. BACKGROUND: There is a relative dearth of studies on the correlations between undergraduate OSCE scores and future exam performance within either undergraduate or graduate medical education, and almost none link these simulated encounters to eventual patient care. The studies that do correlate clerkship OSCE scores with future performance often have small sample sizes and/or include only one clerkship. APPROACH: Students in USU graduating classes of 2007 through 2011 participated in the study. We investigated correlations of clerkship OSCE grades with United States Medical Licensing Examination Step 2 Clinical Knowledge, Step 2 Clinical Skills, and Step 3 scores, as well as with Postgraduate Year 1 program director evaluation scores on Medical Expertise and Professionalism. We also conducted contingency table analyses to examine the associations of poor performance on clerkship OSCEs with failing Step 3 and with receiving poor program director ratings. RESULTS: The correlation coefficients between the clerkship OSCE grades and the outcomes were weak. The strongest correlations were between the clerkship OSCE grades and the Step 2 CS Integrated Clinical Encounter component score, the Step 2 Clinical Skills score, and the Step 3 score. Contingency table associations between poor performance on both clerkship OSCEs and poor Postgraduate Year 1 program director ratings were significant. CONCLUSIONS: The results of this study provide additional but limited validity evidence for the use of OSCEs during clinical clerkships, given their associations with subsequent performance measures.
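
As a rough illustration of the kind of analysis this abstract describes (not the authors' actual code or data), the Python sketch below computes Pearson correlations between hypothetical clerkship OSCE grades and later outcome scores and runs a contingency-table test linking poor OSCE performance to poor PGY-1 ratings. The file name, column names, and bottom-quartile cutoff are assumptions for illustration only.

```python
# Illustrative sketch, not the study's code: correlate clerkship OSCE grades
# with later exam scores, then test a 2x2 association between "poor OSCE
# performance" and "poor PGY-1 rating". All names below are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("cohort_scores.csv")  # hypothetical file, one row per student

# Pearson correlations between an OSCE grade and downstream scores
for outcome in ["step2_ck", "step2_cs_ice", "step3", "pd_medical_expertise"]:
    r, p = stats.pearsonr(df["obgyn_osce_grade"], df[outcome])
    print(f"OB/GYN OSCE vs {outcome}: r = {r:.2f} (p = {p:.3f})")

# Contingency table: poor OSCE performance (bottom quartile, an example cutoff)
# versus a poor program director rating
poor_osce = df["obgyn_osce_grade"] < df["obgyn_osce_grade"].quantile(0.25)
poor_pd = df["pd_medical_expertise"] < df["pd_medical_expertise"].quantile(0.25)
table = pd.crosstab(poor_osce, poor_pd)
chi2, p, dof, expected = stats.chi2_contingency(table)
print(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```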


Subjects
Clinical Clerkship, Clinical Competence, Undergraduate Medical Education, Educational Measurement/methods, Educational Measurement/statistics & numerical data, Humans, United States
2.
Mil Med; 180(4 Suppl): 97-103, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25850135

ABSTRACT

BACKGROUND: In the early 1990s, our group of interdepartmental academicians at the Uniformed Services University (USU) developed a PGY-1 (postgraduate year 1) program director evaluation form. Recently, we revised it to better align with the core competencies established by the Accreditation Council for Graduate Medical Education. We also included items that reflect USU's military-unique context. PURPOSE: To collect feasibility, reliability, and validity evidence for our revised survey. METHOD: We collected PGY-1 data from program directors (PDs) who oversee the training of military medical trainees. The cohort for the present study consisted of USU students graduating in 2010 and 2011. We performed exploratory factor analysis (EFA) to examine the factorial validity of the survey scores and subjected each of the factors identified in the EFA to an internal consistency reliability analysis. We then performed correlation analysis to examine the relationship between PD ratings and students' medical school grade point averages (GPAs) and performance on U.S. Medical Licensing Examination Step assessments. RESULTS: Five factors emerged from the EFA: Medical Expertise, Military-unique Practice, Professionalism, System-based Practice, and Communication and Interpersonal Skills. The evaluation form also showed good reliability and feasibility. All five factors were more strongly associated with students' GPA in the initial clerkship year than with GPA in the first 2 years. Further, these factors showed stronger correlations with students' performance on Step 3 than with the other Step examinations. CONCLUSIONS: The revised PD evaluation form appears to be a valid and reliable tool for gauging medical graduates' first-year internship performance.
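
A hedged sketch of the analysis pipeline this abstract outlines, using the third-party factor_analyzer package for the EFA and a hand-rolled Cronbach's alpha for internal consistency. The data file, item names, and the "Professionalism" item grouping are hypothetical, not the study's instrument.

```python
# Illustrative sketch, not the study's code: exploratory factor analysis of
# program-director survey items, then per-factor internal consistency.
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party package

ratings = pd.read_csv("pd_survey_items.csv")  # hypothetical: one column per item

fa = FactorAnalyzer(n_factors=5, rotation="promax")
fa.fit(ratings)
loadings = pd.DataFrame(fa.loadings_, index=ratings.columns)
print(loadings.round(2))

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Classic internal consistency: alpha = k/(k-1) * (1 - sum(item var)/var(total))."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# e.g., items loading on a hypothetical "Professionalism" factor
prof_items = ratings[["integrity", "respect", "responsibility"]]
print(f"Professionalism alpha = {cronbach_alpha(prof_items):.2f}")
```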


Subjects
Clinical Competence, Graduate Medical Education/standards, Educational Measurement/methods, Medical Students/statistics & numerical data, Surveys and Questionnaires/standards, Adult, Graduate Medical Education/methods, Factor Analysis, Medical Faculty, Feasibility Studies, Female, Humans, Male, Reproducibility of Results, Medical Schools, United States
3.
Teach Learn Med; 26(4): 379-86, 2014.
Article in English | MEDLINE | ID: mdl-25318034

ABSTRACT

BACKGROUND: Recently, there has been a surge in the use of objective structured clinical examinations (OSCEs) at medical schools around the world, and with this growth has come the concomitant need to validate such assessments. PURPOSES: The current study examined the associations between student performance on several school-level clinical skills and knowledge assessments, including two OSCEs, the National Board of Medical Examiners® (NBME) Subject Examinations, and the United States Medical Licensing Examination® (USMLE) Step 2 Clinical Skills (CS) and Step 3 assessments. METHODS: The sample consisted of 806 medical students from the Uniformed Services University of the Health Sciences. We conducted Pearson correlation analysis as well as stepwise multiple linear regression modeling to examine the strength of associations between students' performance on 2nd- and 3rd-year OSCEs and their two Step 2 CS component scores and Step 3 scores. RESULTS: Positive associations were found between the OSCE variables and the USMLE scores; in particular, student performance on both the 2nd- and 3rd-year OSCEs was more strongly associated with the two Step 2 CS component scores than with Step 3 scores. CONCLUSIONS: These findings, although preliminary, provide some predictive validity evidence for the use of OSCEs in determining readiness of medical students for clinical practice and licensure.
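
The stepwise regression mentioned in this abstract is sketched below as a simple forward-selection loop over OLS models. This is an illustrative reconstruction under assumptions, not the authors' procedure; the dataset and predictor names are invented.

```python
# Illustrative sketch, not the study's code: forward-selection ("stepwise")
# OLS regression of Step 3 scores on school-level assessment scores.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("usu_assessments.csv")  # hypothetical dataset
candidates = ["osce_year2", "osce_year3", "nbme_medicine", "nbme_surgery"]
outcome = "step3"

selected = []  # predictors kept so far
while True:
    # try each remaining predictor; keep the most significant one if p < .05
    best_p, best_var = 1.0, None
    for var in candidates:
        if var in selected:
            continue
        X = sm.add_constant(df[selected + [var]])
        p = sm.OLS(df[outcome], X).fit().pvalues[var]
        if p < best_p:
            best_p, best_var = p, var
    if best_var is None or best_p > 0.05:
        break
    selected.append(best_var)

final = sm.OLS(df[outcome], sm.add_constant(df[selected])).fit()
print("Selected predictors:", selected)
print(final.summary())
```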


Subjects
Clinical Competence, Undergraduate Medical Education/standards, Educational Measurement/methods, Female, Humans, Male, United States, Young Adult
4.
Acad Med; 89(5): 762-6, 2014 May.
Article in English | MEDLINE | ID: mdl-24667514

ABSTRACT

PURPOSE: To investigate the association between poor performance on National Board of Medical Examiners clinical subject examinations across six core clerkships and performance on the United States Medical Licensing Examination Step 3 examination. METHOD: In 2012, the authors studied matriculants from the Uniformed Services University of the Health Sciences with available Step 3 scores and subject exam scores for all six clerkships (Classes of 2007-2011, N = 654). Poor performance on a subject exam was defined as scoring one standard deviation (SD) or more below the mean, using the national norms of the corresponding test year. The association between poor performance on the subject exams and the probability of passing or failing Step 3 was tested using contingency table analyses and logistic regression modeling. RESULTS: Students performing poorly on one subject exam were significantly more likely to fail Step 3 (OR 14.23 [95% CI 1.7-119.3]) compared with students who had no subject exam score one SD or more below the mean. Poor performance on more than one subject exam further increased the chances of failing (OR 33.41 [95% CI 4.4-254.2]). This latter group represented 27% of the entire cohort, yet contained 70% of the students who failed Step 3. CONCLUSIONS: These findings suggest that individual schools could benefit from a review of subject exam performance to develop and validate their own criteria for identifying students at risk of failing Step 3.
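
A minimal sketch of the flag-and-test logic described here, assuming a hypothetical per-student table: subject-exam scores one SD or more below the mean are flagged, a 2x2 table yields an odds ratio, and a logistic regression relates the number of poorly scored exams to Step 3 failure. Column and file names are assumptions.

```python
# Illustrative sketch, not the authors' code.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.contingency_tables import Table2x2

df = pd.read_csv("clerkship_subject_exams.csv")  # hypothetical dataset
subject_cols = ["medicine", "surgery", "pediatrics", "obgyn", "psychiatry", "family_med"]

# count, per student, how many subject exams fell 1 SD or more below the mean
poor_flags = df[subject_cols].apply(lambda s: s <= s.mean() - s.std())
df["n_poor"] = poor_flags.sum(axis=1)

# 2x2 table: any poor subject exam vs. failing Step 3 (step3_fail coded 0/1)
table = pd.crosstab(df["n_poor"] > 0, df["step3_fail"]).to_numpy()
t22 = Table2x2(table)
print(f"OR = {t22.oddsratio:.2f}, 95% CI = {t22.oddsratio_confint()}")

# logistic regression of Step 3 failure on the number of poor subject exams
X = sm.add_constant(df[["n_poor"]])
print(sm.Logit(df["step3_fail"], X).fit().summary())
```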


Subjects
Clinical Clerkship, Undergraduate Medical Education/standards, Educational Measurement, Medical Licensure, Confidence Intervals, Female, Humans, Logistic Models, Male, Needs Assessment, Odds Ratio, United States, Young Adult
5.
Acad Med; 88(5): 688-92, 2013 May.
Article in English | MEDLINE | ID: mdl-23524920

ABSTRACT

PURPOSE: Previous studies on standardized patient (SP) exams reported score gains both across attempts when examinees failed and retook the exam and over multiple SP encounters within a single exam session. The authors analyzed the within-session score gains of examinees who repeated the United States Medical Licensing Examination Step 2 Clinical Skills to answer two questions: How much do scores increase within a session? Can the pattern of increasing first-attempt scores account for across-session score gains? METHOD: Data included encounter-level scores for 2,165 U.S. and Canadian medical students and graduates who took Step 2 Clinical Skills twice between April 1, 2005 and December 31, 2010. The authors modeled examinees' score patterns using smoothing and regression techniques and applied statistical tests to determine whether the patterns were the same or different across attempts. In addition, they tested whether any across-session score gains could be explained by the first-attempt within-session score trajectory. RESULTS: For the first and second attempts, the authors attributed examinees' within-session score gains to a pattern of score increases over the first three to six SP encounters followed by a leveling off. Model predictions revealed that the authors could not attribute the across-session score gains to the first-attempt within-session score gains. CONCLUSIONS: The within-session score gains over the first three to six SP encounters of both attempts indicate that there is a temporary "warm-up" effect on performance that "resets" between attempts. Across-session gains are not due to this warm-up effect and likely reflect true improvement in performance.
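
A hedged illustration of smoothing a within-session score trajectory: LOWESS applied to mean encounter scores by encounter position, separately for first and second attempts. The input file and columns are hypothetical, and this is far simpler than the authors' smoothing and regression models.

```python
# Illustrative sketch, not the study's models.
import pandas as pd
from statsmodels.nonparametric.smoothers_lowess import lowess

scores = pd.read_csv("encounter_scores.csv")  # one row per examinee x encounter
# hypothetical columns: attempt (1 or 2), position (1-12), score

for attempt, grp in scores.groupby("attempt"):
    mean_by_position = grp.groupby("position")["score"].mean()
    smoothed = lowess(mean_by_position.values, mean_by_position.index.values, frac=0.5)
    print(f"Attempt {attempt}: smoothed mean score by encounter position")
    for position, value in smoothed:
        print(f"  encounter {int(position):2d}: {value:.1f}")
```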


Subjects
Educational Measurement/methods, Medical Licensure, Physical Examination/standards, Canada, Clinical Competence/standards, Clinical Competence/statistics & numerical data, Educational Measurement/standards, Educational Measurement/statistics & numerical data, Humans, Statistical Models, Regression Analysis, United States
7.
Adv Health Sci Educ Theory Pract; 17(3): 325-37, 2012 Aug.
Article in English | MEDLINE | ID: mdl-21964951

ABSTRACT

Examinees who initially fail and later repeat an SP-based clinical skills exam typically exhibit large score gains on their second attempt, suggesting the possibility that examinees were not well measured on one of those attempts. This study evaluates score precision for examinees who repeated an SP-based clinical skills test administered as part of the US Medical Licensing Examination sequence. Generalizability theory was used as the basis for computing conditional standard errors of measurement (SEMs) for individual examinees. Conditional SEMs were computed for approximately 60,000 single-take examinees and 5,000 repeat examinees who completed the Step 2 Clinical Skills Examination between 2007 and 2009. The study focused exclusively on ratings of communication and interpersonal skills. Conditional SEMs for single-take and repeat examinees were nearly indistinguishable across most of the score scale. US graduates and international medical graduates (IMGs) were measured with equal levels of precision at all score levels, as were examinees with differing levels of English-speaking skill. There was no evidence that examinees with the largest score changes were measured poorly on either their first or second attempt. The large score increases for repeat examinees on this SP-based exam probably cannot be attributed to unexpectedly large errors of measurement.
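
Generalizability-theory conditional SEMs depend on the study's full measurement design; the sketch below shows only a simplified person-level stand-in (the within-person variance of an examinee's encounter ratings divided by the number of encounters), with hypothetical file and column names.

```python
# Illustrative simplification, not the study's G-theory analysis.
import pandas as pd

ratings = pd.read_csv("cis_ratings.csv")  # one row per examinee x encounter, column "cis"

def conditional_sem(encounter_scores: pd.Series) -> float:
    """Simplified person-specific SEM: sqrt(within-person variance / n encounters)."""
    n = encounter_scores.count()
    return (encounter_scores.var(ddof=1) / n) ** 0.5

grouped = ratings.groupby("examinee_id")["cis"]
per_examinee = pd.DataFrame({
    "mean_cis": grouped.mean(),
    "cond_sem": grouped.apply(conditional_sem),
})
print(per_examinee.head())

# does measurement error vary with score level? compare SEMs across score quintiles
print(per_examinee.groupby(pd.qcut(per_examinee["mean_cis"], 5))["cond_sem"].mean())
```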


Subjects
Clinical Competence/standards, Educational Measurement/standards, Physical Examination, Communication, Humans, Licensure, Patient Simulation, Medical Students, United States
8.
Adv Health Sci Educ Theory Pract; 17(4): 557-71, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22041870

ABSTRACT

Multiple studies examining the relationship between physician gender and performance on examinations have found consistent, significant gender differences, but relatively little information is available on any gender effect on interviewing and written communication skills. The United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills (CS) examination is a multi-station examination in which examinees (physicians in training) interact with, and are rated by, standardized patients (SPs) portraying cases in an ambulatory setting. Data from a recent complete year (2009) were analyzed via a series of hierarchical linear models to examine the impact of examinee gender on performance on the data gathering (DG) and patient note (PN) components of this examination. Results from both components show that not only do women have higher scores on average, but women continue to perform significantly better than men when other examinee and case variables are taken into account. Generally, the effect sizes are moderate, reflecting an approximately 2% score advantage per encounter. The advantage for female examinees increased for encounters that did not require a physical examination (for the DG component only) and for encounters that involved a Women's Health issue (for both components). The gender of the SP did not affect the examinee gender effect for DG, indicating a desirable lack of interaction between examinee and SP gender. The implications of the findings, especially with respect to the validity of the use of the examination outcomes, are discussed.
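
A minimal sketch of the kind of hierarchical (mixed-effects) model this abstract describes, with encounter-level scores nested within SP/case combinations. The formula, variable names, and data file are illustrative assumptions, not the authors' specification.

```python
# Illustrative sketch, not the study's models: random intercept per SP/case,
# fixed effects for examinee gender, case features, and their interactions.
import pandas as pd
import statsmodels.formula.api as smf

enc = pd.read_csv("encounters_2009.csv")  # one row per scored encounter
# hypothetical columns: dg_score, examinee_female (0/1), sp_female (0/1),
# womens_health_case (0/1), physical_exam_required (0/1), sp_case_id

model = smf.mixedlm(
    "dg_score ~ examinee_female * womens_health_case"
    " + examinee_female * physical_exam_required + sp_female",
    data=enc,
    groups=enc["sp_case_id"],
)
result = model.fit()
print(result.summary())  # the examinee_female coefficient approximates the per-encounter gap
```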


Subjects
Clinical Competence/standards, Educational Measurement/methods, Medical Licensure/standards, Medical Students/psychology, Analysis of Variance, Clinical Competence/statistics & numerical data, Communication, Educational Measurement/statistics & numerical data, Female, Humans, Interpersonal Relations, Male, Patient Simulation, Reproducibility of Results, Sex Factors, Medical Students/statistics & numerical data, United States
9.
J Gen Intern Med; 27(1): 65-70, 2012 Jan.
Article in English | MEDLINE | ID: mdl-21879372

ABSTRACT

BACKGROUND: The United States Medical Licensing Examination® (USMLE®) Step 3® examination is a computer-based examination composed of multiple choice questions (MCQs) and computer-based case simulations (CCS). The CCS portion of Step 3 is unique in that examinees are exposed to interactive patient-care simulations. OBJECTIVE: The purpose of this study is to investigate whether the type and length of examinees' postgraduate training impact performance on the CCS component of Step 3, consistent with previous research on overall Step 3 performance. DESIGN: Retrospective cohort study. PARTICIPANTS: Medical school graduates from U.S. and Canadian institutions completing Step 3 for the first time between March 2007 and December 2009 (n = 40,588). METHODS: Postgraduate training was classified as either broadly focused for general areas of medicine (e.g., pediatrics) or narrowly focused for specific areas of medicine (e.g., radiology). A three-way between-subjects MANOVA was used to test for main and interaction effects of the sample's demographic characteristics and residency type on Step 3 and CCS scores. Additionally, to examine the impact of postgraduate training, CCS scores were regressed on Step 1 and Step 2 Clinical Knowledge (CK) scores, and residuals from the resulting regressions were plotted. RESULTS: There was a significant difference in CCS scores between broadly focused (µ = 216, σ = 17) and narrowly focused (µ = 211, σ = 16) residencies (p < 0.001). Examinees in broadly focused residencies performed better overall and as length of training increased, compared with examinees in narrowly focused residencies. Step 1 and Step 2 CK scores explained 55% of overall Step 3 variability and 9% of CCS score variability. CONCLUSIONS: Factors influencing performance on the CCS component may be similar to those affecting Step 3 overall. Findings are supportive of the validity of the Step 3 program and may be useful to program directors and residents in considering readiness to take this examination.
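
A hedged sketch pairing a between-subjects MANOVA with the residual analysis the abstract mentions. The formula terms, grouping variables, and column names are hypothetical simplifications, not the study's design.

```python
# Illustrative sketch, not the study's analysis.
import pandas as pd
import statsmodels.api as sm
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("step3_cohort.csv")  # hypothetical columns noted below

# Step 3 total and CCS scores modeled jointly by residency focus, gender, and citizenship
manova = MANOVA.from_formula(
    "step3_total + ccs_score ~ residency_focus + gender + us_citizen", data=df
)
print(manova.mv_test())

# How much CCS variability do Step 1 and Step 2 CK scores explain?
X = sm.add_constant(df[["step1", "step2_ck"]])
fit = sm.OLS(df["ccs_score"], X).fit()
print(f"R-squared for CCS on Step 1 + Step 2 CK: {fit.rsquared:.2f}")
df["ccs_residual"] = fit.resid  # residuals can then be plotted by length of training
```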


Subjects
Clinical Competence/standards, Computer-Assisted Decision Making, Graduate Medical Education/standards, Educational Measurement/standards, Internship and Residency/standards, Medical Licensure/standards, Canada, Graduate Medical Education/methods, Educational Measurement/methods, Female, Humans, Internship and Residency/methods, Male, Retrospective Studies, United States
10.
Acad Med; 86(10 Suppl): S17-20, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21955761

ABSTRACT

BACKGROUND: Women typically demonstrate stronger communication skills on performance-based assessments using human raters in medical education settings. This study examines the effects of examinee and rater gender on communication and interpersonal skills (CIS) scores from the performance-based component of the United States Medical Licensing Examination, the Step 2 Clinical Skills (CS) examination. METHOD: Data included demographic and performance information for examinees who took Step 2 CS for the first time in 2009. The sample contained 27,910 examinees, 625 standardized patient/case combinations, and 278,776 scored patient encounters. Hierarchical linear modeling techniques were employed with CIS scores as the outcome measure. RESULTS: Female examinees tend to slightly outperform male examinees on CIS when other variables related to performance are taken into account. No evidence of an examinee-by-rater gender interaction effect was found. CONCLUSIONS: Results provide validity evidence supporting the interpretation and use of Step 2 CS CIS scores.


Subjects
Communication, Medical Licensure, Physician-Patient Relations, Analysis of Variance, Educational Measurement, Female, Humans, Male, Patients, Sex Factors
11.
Acad Med; 86(10): 1253-9, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21869669

ABSTRACT

PURPOSE: Prior studies report large score gains for examinees who fail and later repeat standardized patient (SP) assessments. Although research indicates that score gains on SP exams cannot be attributed to memorizing previous cases, no studies have investigated the empirical validity of scores for repeat examinees. This report compares single-take and repeat examinees in terms of both internal (construct) validity and external (criterion-related) validity. METHOD: Data consisted of test scores for examinees who took the United States Medical Licensing Examination Step 2 Clinical Skills (CS) exam between July 16, 2007, and September 12, 2009. The sample included 12,090 examinees who completed Step 2 CS on one occasion and another 4,030 examinees who completed the exam on two occasions. The internal measures included four separately scored performance domains of the Step 2 CS examination, whereas the external measures consisted of scores on three written assessments of medical knowledge (Step 1, Step 2 clinical knowledge, and Step 3). The authors subjected the four Step 2 CS domains to confirmatory factor analysis and evaluated correlations between Step 2 CS scores and the three written assessments for single-take and repeat examinees. RESULTS: The factor structure for repeat examinees on their first attempt was markedly different from the factor structure for single-take examinees, but it became more similar to that for single-take examinees by their second attempt. Scores on the second attempt correlated more highly with all three external measures. CONCLUSIONS: The findings support the validity of scores for repeat examinees on their second attempt.
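
A confirmatory factor analysis of the four Step 2 CS domain scores could be sketched with the third-party semopy package as below. The one-factor specification, fitting procedure, and column names are assumptions for illustration, not the authors' model, which compared factor structures across single-take and repeat examinees.

```python
# Illustrative sketch, not the study's model: a one-factor CFA of the four
# Step 2 CS domain scores plus correlations with written assessments.
import pandas as pd
import semopy  # third-party SEM package

df = pd.read_csv("step2cs_domains.csv")
# hypothetical columns: ice, cis, sep, pn (domain scores); step1, step2_ck, step3

cfa_spec = "clinical_skill =~ ice + cis + sep + pn"  # single latent factor
model = semopy.Model(cfa_spec)
model.fit(df[["ice", "cis", "sep", "pn"]])
print(model.inspect())  # loadings; rerun on single-take vs repeat subsets to compare

# criterion-related evidence: correlations with the written assessments
print(df[["ice", "cis", "sep", "pn", "step1", "step2_ck", "step3"]].corr().round(2))
```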


Subjects
Clinical Competence, Undergraduate Medical Education/standards, Educational Measurement/methods, Medical Licensure/standards, Physical Examination/standards, Female, Humans, Male, Patient Simulation, Reproducibility of Results, Retrospective Studies, United States
12.
Acad Med; 85(10 Suppl): S89-92, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20881712

ABSTRACT

BACKGROUND: During the United States Medical Licensing Examination Step 2 Clinical Skills examination, examinees rotate through 12 standardized patient (SP) encounters. Examinees have 25 minutes per encounter to interact with SPs and complete postencounter patient notes (PNs), and they may end the SP interaction early to spend extra time on the PN. The current work assesses how much time examinees spend on PNs and whether this time is related to PN performance. METHOD: Encounters from 2,479 examinees' videos were time-stamped to indicate total encounter time and PN time. Hierarchical linear modeling was employed to assess how well total and PN time, along with other examinee and case-rater variables, predicted PN scores. RESULTS: Examinee variables explained a significant portion of within-case-rater variability; PN time was significantly related to PN ratings, but the effect was small. CONCLUSIONS: The results suggest that spending additional time on the PN does not translate into a meaningful score increase.


Subjects
Educational Measurement/methods, Medical Licensure, Patient Simulation, Writing, Adult, Analysis of Variance, Female, Humans, Linear Models, Male, Time Factors, United States
13.
Acad Med; 85(9): 1506-10, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20736678

ABSTRACT

PURPOSE: The United States Medical Licensing Examination series Step 2 Clinical Skills (CS) examination is a high-stakes performance assessment that uses standardized patients (SPs) to assess the clinical skills of physicians. Each Step 2 CS examination form involves 12 SPs, each of whom portrays a different clinical scenario or case. Examinees who fail and repeat the examination may encounter repeat information: the same SP, the same case, or the same SP portraying the same case. The goal of this study was twofold: to investigate score gains for all repeat examinees, regardless of whether they experienced repeat information, and to perform additional analyses for only those examinees who did encounter repeat information. METHOD: The dataset consisted of 3,045 Step 2 CS repeat examinees who initially tested between April 2005 and December 2007. The authors used paired t tests and analysis of variance models to assess mean score gains (first attempt versus second attempt) and to determine standardized mean differences between encounters with repeat information and those without. The authors ran each set of analyses by test score component and by examinee subgroup. RESULTS: The authors observed significant mean score increases on second attempt examinations for the entire group of repeat examinees. However, they observed no significant score increases for the subgroup of examinees who encountered repeat information. CONCLUSIONS: Examinees taking Step 2 CS for the second time improve on average, and those with prior exposure to exam information do not appear to benefit unfairly from this exposure.
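
A minimal sketch of the paired comparison and standardized-mean-difference calculation described here, assuming a hypothetical table of repeat examinees with first- and second-attempt scores and an exposure flag. This is an illustration, not the authors' code.

```python
# Illustrative sketch, not the study's analysis.
import pandas as pd
from scipy import stats

df = pd.read_csv("repeat_examinees.csv")
# hypothetical columns: score_attempt1, score_attempt2, saw_repeat_info (0/1)

# paired t-test on first- vs second-attempt scores
t, p = stats.ttest_rel(df["score_attempt2"], df["score_attempt1"])
gain = (df["score_attempt2"] - df["score_attempt1"]).mean()
print(f"Mean gain = {gain:.2f}, t = {t:.2f}, p = {p:.4f}")

# standardized mean difference in gains: exposed vs. unexposed to repeat information
gains = df["score_attempt2"] - df["score_attempt1"]
exposed = gains[df["saw_repeat_info"] == 1]
unexposed = gains[df["saw_repeat_info"] == 0]
pooled_sd = ((exposed.var(ddof=1) + unexposed.var(ddof=1)) / 2) ** 0.5
smd = (exposed.mean() - unexposed.mean()) / pooled_sd
print(f"Standardized mean difference (exposed - unexposed) = {smd:.2f}")
```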


Subjects
Clinical Competence, Undergraduate Medical Education/standards, Educational Measurement/methods, Physical Examination/standards, Analysis of Variance, Humans, Licensure, Patient Simulation, United States