Results 1 - 7 of 7
1.
Med Sci Educ ; 31(6): 1869-1873, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34956702

ABSTRACT

PURPOSE: Medical education researchers are often uncertain whether they should submit abstracts to certain conferences. Therefore, we aimed to elicit consensus among medical education conference organizers to answer the question: what are best practices for research abstract submissions to multiple conferences? METHODS: Using a 44-question online survey, we conducted a modified Delphi process to identify best practices for abstract submissions to medical education conferences. Consistent with the Delphi process, we identified conference representatives from non-specialty medical education conferences and across four conference types (institutional, regional, national, and international) to serve as panelists. RESULTS: Eleven expert panelists, representing all four conference types (two institutional conferences, five regional conferences, two national conferences, and two international conferences), completed three rounds of the survey. After three rounds, panelists reached consensus on 39/44 survey items: 26 items in round 1, 10 items in round 2, and three items in round 3. Panelists' consensus and rationale indicated that it is most appropriate to resubmit abstracts to conferences with a larger or different audience, but not to more homogeneous audiences. Among the four conference types, abstract resubmission from institutional conferences to other conference types was the most widely accepted. Panelists agreed that abstracts using data and results submitted or accepted for publication could be submitted to any conference type. CONCLUSION: The results of this study provide best practices for presenting scholarship to medical education conferences. We recommend that guidelines for medical education conference abstract submissions provide consistent, clear instructions regarding the appropriate life cycle of an abstract.

2.
Med Teach ; 43(5): 575-582, 2021 05.
Article in English | MEDLINE | ID: mdl-33590781

ABSTRACT

BACKGROUND: Using revised Bloom's taxonomy, some medical educators assume they can write multiple choice questions (MCQs) that specifically assess higher-order (analyze, apply) versus lower-order (recall) learning. The purpose of this study was to determine whether three key stakeholder groups (students, faculty, and education assessment experts) assign MCQs the same higher- or lower-order level. METHODS: In Phase 1, stakeholder groups assigned 90 MCQs to Bloom's levels. In Phase 2, faculty wrote 25 MCQs specifically intended as higher- or lower-order. Then, 10 students assigned these questions to Bloom's levels. RESULTS: In Phase 1, there was low interrater reliability within the student group (Krippendorff's alpha = 0.37), the faculty group (alpha = 0.37), and among the three groups (alpha = 0.34) when assigning questions as higher- or lower-order. The assessment team alone had high interrater reliability (alpha = 0.90). In Phase 2, 63% of students agreed with the faculty as to whether the MCQs were higher- or lower-order. There was low agreement between paired faculty and student ratings (Cohen's kappa range 0.098-0.448, mean 0.256). DISCUSSION: For many questions, faculty and students did not agree whether the questions were lower- or higher-order. While faculty may try to target specific levels of knowledge or clinical reasoning, students may approach the questions differently than intended.
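The paired faculty-student agreement statistic reported in Phase 2 can be illustrated with a chance-corrected agreement measure. Below is a minimal plain-Python sketch of Cohen's kappa using hypothetical ratings (the study's item-level data are not shown in the abstract):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the agreement expected by chance from
    each rater's marginal category frequencies. Undefined when
    p_e == 1 (both raters constant on the same category).
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    marg_a = Counter(rater_a)
    marg_b = Counter(rater_b)
    p_e = sum(marg_a[c] * marg_b[c] for c in marg_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: one faculty member's intended levels vs. one
# student's ratings for ten MCQs, higher- (H) or lower-order (L).
faculty = ["H", "H", "L", "L", "H", "L", "H", "L", "H", "L"]
student = ["H", "L", "L", "L", "H", "H", "H", "L", "L", "L"]
print(round(cohens_kappa(faculty, student), 3))  # → 0.4
```

With these made-up ratings the observed agreement is 0.7 against 0.5 expected by chance, giving kappa = 0.4; raw percent agreement alone (as in the 63% figure) does not account for chance, which is why kappa can be low even when raters often agree.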


Subjects
Educational Measurement, Writing, Faculty, Humans, Reproducibility of Results, Students
3.
J Surg Educ ; 76(6): e189-e192, 2019.
Article in English | MEDLINE | ID: mdl-31501065

ABSTRACT

OBJECTIVE: The profession of surgery is entering a new era of "big data," where analyses of longitudinal trainee assessment data will be used to inform ongoing efforts to improve surgical education. Given the high-stakes implications of these types of analyses, researchers must define the conditions under which estimates derived from these large datasets remain valid. With this study, we determine the number of assessments of residents' performances needed to reliably assess the difficulty of "Core" surgical procedures. DESIGN: Using the SIMPL smartphone application from the Procedural Learning and Safety Collaborative, 402 attending surgeons directly observed and provided workplace-based assessments for 488 categorical residents after 5259 performances of 87 Core surgical procedures performed at 14 institutions. We used these faculty ratings to construct a linear mixed model with resident performance as the outcome variable and multiple predictors including, most significantly, the operative procedure as a random effect. We interpreted the variance in performance ratings attributable to the procedure, after controlling for other variables, as the "difficulty" of performing the procedure. We conducted a generalizability analysis and decision study to estimate the number of SIMPL performance ratings needed to reliably estimate the difficulty of a typical Core procedure. RESULTS: Twenty-four faculty ratings of resident operative performance were necessary to reliably estimate the difficulty of a typical Core surgical procedure (mean dependability coefficient 0.80, 95% confidence interval 0.73-0.87). CONCLUSIONS: At least 24 operative performance ratings are required to reliably estimate the difficulty of a typical Core surgical procedure. Future research using performance ratings to establish procedure difficulty should include adequate numbers of ratings given the high-stakes implications of those results for curriculum design and policy.
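The decision (D) study described above rests on a simple relationship: averaging more ratings shrinks the error variance, raising the dependability (Phi) coefficient. A simplified single-facet sketch with hypothetical variance components (the study's actual components are not reported in the abstract):

```python
def dependability(var_object, var_error, n_ratings):
    """Dependability (Phi) coefficient from a D study: the variance of
    the object of measurement over itself plus error variance averaged
    across n_ratings observations."""
    return var_object / (var_object + var_error / n_ratings)

def ratings_needed(var_object, var_error, target=0.80, n_max=1000):
    """Smallest number of ratings whose Phi reaches the target."""
    for n in range(1, n_max + 1):
        if dependability(var_object, var_error, n) >= target:
            return n
    return None

# Hypothetical variance components (not from the study): procedure
# difficulty variance 0.25, residual error variance 1.0.
print(ratings_needed(0.25, 1.0))  # → 16
```

With a single rating, Phi here is only 0.25 / 1.25 = 0.20; averaging 16 ratings cuts the error variance sixteenfold and reaches the conventional 0.80 threshold, the same logic behind the study's estimate of 24 ratings per procedure.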


Subjects
Clinical Competence, Employee Performance Appraisal, General Surgery/education, Surgical Procedures, Operative/standards, Adult, Big Data, Educational Measurement, Female, Humans, Internship and Residency, Male, Mobile Applications, Professional Autonomy, Reproducibility of Results
4.
Perspect Med Educ ; 8(4): 261-264, 2019 08.
Article in English | MEDLINE | ID: mdl-31347033

ABSTRACT

Study limitations represent weaknesses within a research design that may influence outcomes and conclusions of the research. Researchers have an obligation to the academic community to present complete and honest limitations of a presented study. Too often, authors use generic descriptions to describe study limitations. Including redundant or irrelevant limitations is an ineffective use of the already limited word count. A meaningful presentation of study limitations should describe the potential limitation, explain the implication of the limitation, provide possible alternative approaches, and describe steps taken to mitigate the limitation. This includes placing research findings within their proper context to ensure readers do not overemphasize or minimize findings. A more complete presentation will enrich the readers' understanding of the study's limitations and support future investigation.


Subjects
Biomedical Research/standards, Education, Medical, Humans, Reproducibility of Results
5.
Acad Med ; 94(1): 71-75, 2019 01.
Article in English | MEDLINE | ID: mdl-30188369

ABSTRACT

PROBLEM: Multiple-choice question (MCQ) examinations represent a primary mode of assessment used by medical schools. It can be challenging for faculty to produce content-aligned, comprehensive, and psychometrically sound MCQs. Despite best efforts, sometimes there are unexpected issues with examinations. Assessment best practices lack a systematic way to address gaps when actual and expected outcomes do not align. APPROACH: The authors propose using root cause analysis (RCA) to systematically review unexpected educational outcomes. Using a real-life example of a class's unexpectedly low reproduction examination scores (University of Michigan Medical School, 2015), the authors describe their RCA process, which included a system flow diagram, a fishbone diagram, and an application of the 5 Whys to understand the contributors and reasons for the lower-than-expected performance. Using this RCA approach, the authors identified multiple contributing factors that potentially led to the low examination scores. These included lack of examination quality improvement (QI) for poorly constructed items, content-question and pedagogy-assessment misalignment, and other issues related to environment and people. OUTCOMES: As a result of the RCA, the authors worked with stakeholders to address these issues and develop strategies to prevent similar systematic issues from reoccurring. For example, a more robust examination QI process was developed. NEXT STEPS: Using an RCA approach in health care is grounded in practice and can be easily adapted for assessment. Because this is a novel use of RCA, there are opportunities to expand beyond the authors' initial approach for using RCA in assessment.


Subjects
Education, Medical/methods, Education, Medical/standards, Educational Measurement/methods, Educational Measurement/standards, Adult, Female, Humans, Male, Young Adult
7.
Acad Med ; 91(11): 1526-1529, 2016 11.
Article in English | MEDLINE | ID: mdl-27119333

ABSTRACT

PROBLEM: Most medical schools have either retained a traditional admissions interview or fully adopted an innovative, multisampling format (e.g., the multiple mini-interview), despite there being advantages and disadvantages associated with each format. APPROACH: The University of Michigan Medical School (UMMS) sought to maximize the strengths of both interview formats after recognizing that combining the two approaches had the potential to capture additional, unique information about an applicant. In September 2014, the UMMS implemented a hybrid interview model with six 6-minute short-form interviews (highly structured, scenario-based encounters) and two 30-minute semistructured long-form interviews. Five core skills were assessed across both interview formats. OUTCOMES: Overall, applicants and admissions committee members reported favorable reactions to the hybrid model, supporting its continued use. The generalizability coefficients for the six-station short-form and the two-interview long-form formats were estimated to be 0.470 and 0.176, respectively. Different skills were more reliably assessed by different interview formats. Scores from each format seemed to be operating independently, as evidenced by moderate to low correlations (r = 0.100-0.403) for the same skills measured across different interview formats; however, after correcting for attenuation, these correlations were much higher. NEXT STEPS: This hybrid model will be revised and optimized to capture the skills most reliably assessed by each format. Future analysis will examine validity by determining whether short-form and long-form interview scores accurately measure the skills intended to be assessed. Additionally, data collected from both formats will be used to establish baselines for entering students' competencies.
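The correction for attenuation mentioned in the outcomes divides an observed correlation by the square root of the product of the two measures' reliabilities. A minimal sketch, using the reported generalizability coefficients (0.470 and 0.176) and a hypothetical observed correlation of 0.30 chosen from within the reported 0.100-0.403 range:

```python
import math

def disattenuate(r_observed, rel_x, rel_y):
    """Spearman's correction for attenuation: estimate the correlation
    between true scores from the observed correlation and each
    measure's reliability."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Reported format reliabilities; the observed r of 0.30 is hypothetical.
print(round(disattenuate(0.30, 0.470, 0.176), 2))  # → 1.04
```

Note that when reliabilities are low, as with the two-interview long-form format here, the disattenuated estimate can exceed 1.0; this illustrates why the corrected cross-format correlations were described as "much higher" and why such estimates are read as upper bounds rather than literal correlations.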


Subjects
Education, Medical, Undergraduate, Interviews as Topic/methods, School Admission Criteria, Schools, Medical, Michigan