Results 1 - 7 of 7
1.
Crit Care Med ; 49(8): 1285-1292, 2021 Aug 01.
Article in English | MEDLINE | ID: mdl-33730745

ABSTRACT

OBJECTIVES: To describe the development and initial results of an examination and certification process assessing competence in critical care echocardiography. DESIGN: A test writing committee of content experts from eight professional societies invested in critical care echocardiography was convened, with the Executive Director representing the National Board of Echocardiography. Using an examination content outline, the writing committee members were assigned topics relevant to their areas of expertise. The examination items underwent extensive review, editing, and discussion in several face-to-face meetings supervised by National Board of Medical Examiners editors and psychometricians. A separate certification committee was tasked with establishing the criteria required to achieve National Board of Echocardiography certification in critical care echocardiography through detailed review of the supporting material submitted by candidates seeking to fulfill those criteria. SETTING: The writing committee met twice a year in person at the National Board of Medical Examiners office in Philadelphia, PA. SUBJECTS: Physicians enrolled in the Examination of Special Competence in Critical Care Echocardiography (CCEeXAM). MEASUREMENTS AND MAIN RESULTS: A total of 524 physicians sat for the examination, and 426 (81.3%) achieved a passing score. Of the examinees, 41% were anesthesiology trained, 33.2% had a pulmonary/critical care background, and the majority (91.6%) had graduated from training within the past 10 years. The largest share of candidates worked full-time at an academic hospital (46.9%). CONCLUSIONS: The CCEeXAM is designed to assess a knowledge base that is shared with echocardiologists in addition to that which is unique to critical care. National Board of Echocardiography certification establishes that the physician has achieved the ability to independently perform and interpret critical care echocardiography at a standard recognized by critical care professional societies encompassing a wide spectrum of backgrounds. The interest shown and the success achieved on the CCEeXAM by practitioners of critical care echocardiography support the standards set by the National Board of Echocardiography for testamur status and certification in this imaging specialty area.


Subjects
Certification/standards, Clinical Competence/standards, Critical Care/standards, Echocardiography/standards, Internal Medicine/standards, Educational Measurement, Humans, Specialty Boards
2.
Med Sci Educ ; 34(2): 471-475, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38686150

ABSTRACT

Problem: Many assessments in medical education involve measuring proficiency in a content area, so proper content development (blueprinting) of tests in this field is of primary importance. Prior efforts to conduct content review as part of assessment development have been time- and resource-intensive, relying on practice analysis followed by linking methods. This monograph explores a "rapid, cost-effective" approach to blueprinting that allows efficient yet rigorous assessment development. Our investigation examines an efficient and effective alternative method for creating a content design (blueprint) for medical credentialing and evaluation examinations by focusing directly on assessment requirements. Approach: We employed a two-phase process to develop the rapid blueprinting method. Phase 1 involved a 1-day in-person meeting of content experts/practitioners. Phase 2 involved a corroboration survey sent to a wider group of content experts/practitioners. The rapid blueprinting method was applied to developing eleven blueprints (five for medical specialty certification, five for health professions certification, and one for in-training assessment). Outcomes: The method yielded effective, well-balanced operational examinations whose blueprints were successfully implemented in item-writing assignments and test development. Assessments built with the rapid blueprinting method also supported psychometrically sound inferences from their scores; for example, they had KR-20 reliability coefficients ranging from .87 to .92. Next Steps: This work demonstrated that the rapid blueprinting method is effective and feasible, producing examination designs (blueprints) that are cost- and time-effective. The rapid blueprinting method may be explored for further implementation in local assessment settings beyond medical credentialing examinations.
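
To make the reliability statistic cited above concrete, here is a minimal sketch (in Python, assuming numpy is available) of how a KR-20 coefficient is computed from a dichotomously scored item-response matrix. The data are simulated purely for illustration and are not drawn from the study's examinations.

import numpy as np

def kr20(responses):
    """Kuder-Richardson Formula 20 for a 0/1 matrix (rows = examinees, columns = items)."""
    k = responses.shape[1]                          # number of items
    p = responses.mean(axis=0)                      # proportion correct per item
    q = 1.0 - p
    total_var = responses.sum(axis=1).var(ddof=1)   # variance of examinees' total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Simulated responses: 200 hypothetical examinees, 80 hypothetical items.
rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))
difficulty = rng.normal(size=(1, 80))
prob_correct = 1.0 / (1.0 + np.exp(-(ability - difficulty)))
responses = (rng.random((200, 80)) < prob_correct).astype(int)
print(f"KR-20 = {kr20(responses):.2f}")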

3.
Acad Med ; 99(8): 912-921, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38412485

ABSTRACT

PURPOSE: Clinical reasoning, a complex construct integral to the practice of medicine, has been challenging to define, teach, and assess. Programmatic assessment purports to overcome validity limitations of judgments made from individual assessments through proportionality and triangulation processes. This study explored a pragmatic approach to the programmatic assessment of clinical reasoning. METHOD: The study analyzed data from 2 student cohorts from the University of Utah School of Medicine (UUSOM) (n = 113 in cohort 1 and 119 in cohort 2) and 1 cohort from the University of Colorado School of Medicine (CUSOM) using assessment data that spanned from 2017 to 2021 (n = 199). The study methods included the following: (1) asking faculty judges to categorize student clinical reasoning skills, (2) selecting institution-specific assessment data conceptually aligned with clinical reasoning, (3) calculating correlations between assessment data and faculty judgments, and (4) developing regression models between assessment data and faculty judgments. RESULTS: Faculty judgments of student clinical reasoning skills were converted to a continuous variable of clinical reasoning struggles, with mean (SD) ratings of 2.93 (0.27) for the 232 UUSOM students and 2.96 (0.17) for the 199 CUSOM students. A total of 67 and 32 discrete assessment variables were included from the UUSOM and CUSOM, respectively. Pearson r correlations were moderate to strong between many individual and composite assessment variables and faculty judgments. Regression models demonstrated an overall adjusted R2 (standard error of the estimate) of 0.50 (0.19) for UUSOM cohort 1, 0.28 (0.15) for UUSOM cohort 2, and 0.30 (0.14) for CUSOM. CONCLUSIONS: This study represents an early pragmatic exploration of regression analysis as a potential tool for operationalizing the proportionality and triangulation principles of programmatic assessment. The study found that programmatic assessment may be a useful framework for longitudinal assessment of complicated constructs, such as clinical reasoning.
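
A rough sketch of the kind of computation this abstract describes, correlating assessment variables with faculty judgments, fitting a regression model, and reporting adjusted R-squared, is shown below. The variable names and simulated data are hypothetical; the study's actual assessment variables are institution-specific.

import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

def adjusted_r2(r2, n, p):
    """Adjusted R^2 for an OLS fit with n observations and p predictors."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# Hypothetical data: 232 students, 10 assessment variables, one faculty judgment per student.
rng = np.random.default_rng(1)
assessments = rng.normal(size=(232, 10))
judgments = 2.9 + assessments @ rng.normal(scale=0.1, size=10) + rng.normal(scale=0.2, size=232)

r, _ = pearsonr(assessments[:, 0], judgments)        # correlation for a single variable
model = LinearRegression().fit(assessments, judgments)
r2_adj = adjusted_r2(model.score(assessments, judgments), *assessments.shape)
print(f"Pearson r (variable 1) = {r:.2f}; adjusted R^2 = {r2_adj:.2f}")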


Subjects
Clinical Competence, Clinical Reasoning, Undergraduate Medical Education, Educational Measurement, Humans, Undergraduate Medical Education/methods, Educational Measurement/methods, Clinical Competence/statistics & numerical data, Utah, Colorado, Male, Medical Students/statistics & numerical data, Female, Cohort Studies, Medical Faculty
4.
Mayo Clin Proc ; 99(5): 782-794, 2024 May.
Article in English | MEDLINE | ID: mdl-38702127

ABSTRACT

The rapidly evolving coaching profession has permeated the health care industry and is gaining ground as a viable solution for addressing physician burnout, turnover, and leadership crises that plague the industry. Although various coach credentialing bodies have been established, the profession has no standardized competencies for physician coaching as a specialty practice area, creating a market of aspiring coaches with varying degrees of expertise. To address this gap, we employed a modified Delphi approach to arrive at expert consensus on the competencies necessary for coaching physicians and physician leaders. Informed by the National Board of Medical Examiners' practice of rapid blueprinting, a group of 11 expert physician coaches generated an initial list of key thematic areas and specific competencies within them. The competency document was then distributed for agreement rating and comment to over 100 stakeholders involved in physician coaching. Our consensus threshold was set at 70% agreement, and actual responses ranged from 80.5% to 95.6% agreement. Comments were discussed and addressed by 3 members of the original group, resulting in a final model of 129 specific competencies in the following areas: (1) physician-specific coaching, (2) understanding physician and health care context, culture, and career span, (3) coaching theory and science, (4) diversity, equity, inclusion, and other social dynamics, (5) well-being and burnout, and (6) physician leadership. This consensus on physician coaching competencies represents a critical step toward establishing standards that inform coach education, training, and certification programs, as well as guide the selection of coaches and evaluation of coaching in health care settings.
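
The consensus rule described above (retain a competency area when at least 70% of stakeholders agree) can be expressed in a few lines. The sketch below uses hypothetical area names and made-up survey counts purely for illustration; it is not the study's instrument.

AGREEMENT_THRESHOLD = 0.70  # consensus cut-off described in the abstract

def agreement_rate(ratings):
    """Share of respondents who chose 'agree' or 'strongly agree'."""
    favorable = sum(1 for r in ratings if r in ("agree", "strongly agree"))
    return favorable / len(ratings)

# Hypothetical survey responses for two of the thematic areas.
survey = {
    "Coaching theory and science": ["strongly agree"] * 60 + ["agree"] * 30 + ["disagree"] * 10,
    "Physician leadership": ["strongly agree"] * 50 + ["agree"] * 25 + ["disagree"] * 25,
}
for area, ratings in survey.items():
    rate = agreement_rate(ratings)
    decision = "retain" if rate >= AGREEMENT_THRESHOLD else "revise and re-rate"
    print(f"{area}: {rate:.1%} agreement -> {decision}")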


Subjects
Delphi Technique, Mentoring, Humans, Clinical Competence/standards, Consensus, Leadership, Physicians/standards, Physicians/psychology, Professional Competence/standards
5.
J Vet Med Educ ; 37(4): 377-82, 2010.
Article in English | MEDLINE | ID: mdl-21135405

ABSTRACT

The National Board of Veterinary Medical Examiners was interested in the possible effects of word count on the outcomes of the North American Veterinary Licensing Examination. In this study, the authors investigated the effects of increasing word count on the pacing of examinees during each section of the examination and on the performance of examinees on the items. Specifically, the authors analyzed the effect of item word count on the average time spent on each item within a section of the examination, the average number of items omitted at the end of a section, and the average difficulty of items as a function of presentation order. The average word count per item increased from 2001 to 2008. As expected, there was a relationship between word count and time spent on the item. No significant relationship was found between word count and item difficulty, and an analysis of omitted items and pacing patterns showed no indication of overall pacing problems.
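
The core analysis described here is a correlation between per-item word count and the time examinees spend on each item. A minimal sketch on simulated data follows; the number of items, the word-count range, and the timing model are assumptions for illustration, not figures from the licensing examination.

import numpy as np
from scipy.stats import pearsonr

# Simulated items: word counts and mean seconds spent per item (hypothetical values).
rng = np.random.default_rng(2)
word_counts = rng.integers(40, 160, size=360)
seconds_per_item = 30 + 0.4 * word_counts + rng.normal(scale=10, size=360)

r, p_value = pearsonr(word_counts, seconds_per_item)
print(f"word count vs. time on item: r = {r:.2f}, p = {p_value:.3g}")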


Subjects
Veterinary Education/methods, Educational Measurement/methods, Educational Measurement/statistics & numerical data, Licensure, Canada, Certification, Humans, Language, Psychometrics, Time Factors, Time Management, United States, User-Computer Interface
6.
Eval Health Prof ; 30(4): 362-75, 2007 Dec.
Article in English | MEDLINE | ID: mdl-17986670

ABSTRACT

Cluster analysis can be a useful statistical technique for setting minimum passing scores on high-stakes examinations by grouping examinees into homogeneous clusters based on their responses to test items. It has been most useful for supplementing data or validating minimum passing scores determined from expert judgment approaches, such as the Ebel and Nedelsky methods. However, there is no evidence on how well cluster analysis converges with the modified Angoff method, which is frequently used in medical credentialing. Therefore, the purpose of this study is to investigate the efficacy of cluster analysis for validating Angoff-derived minimum passing scores. Data are from 652 examinees who took a national credentialing examination based on a content-by-process test blueprint. Results indicate a high degree of consistency in minimum passing score estimates derived from the modified Angoff and cluster analysis methods. However, the stability of the estimates from cluster analysis across different samples was modest.
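
To illustrate the two standard-setting approaches being compared, the sketch below derives one cut score from hypothetical modified-Angoff judge ratings and another from a two-cluster k-means solution on simulated total scores. None of the numbers reflect the credentialing examination studied; the judge count, item count, and score distributions are assumptions.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_items = 200

# Modified Angoff: each judge rates the probability that a minimally competent
# examinee answers each item correctly; the cut score is the mean of the judges'
# summed probabilities.
angoff_ratings = rng.uniform(0.4, 0.9, size=(8, n_items))       # 8 hypothetical judges
angoff_cut = angoff_ratings.sum(axis=1).mean()

# Cluster analysis: split examinees' total scores into two clusters and place
# the cut midway between the cluster centroids.
total_scores = np.concatenate([rng.normal(120, 15, 300), rng.normal(155, 12, 352)])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(total_scores.reshape(-1, 1))
cluster_cut = km.cluster_centers_.ravel().mean()

print(f"Angoff cut score:  {angoff_cut:.1f} of {n_items} items")
print(f"Cluster cut score: {cluster_cut:.1f} of {n_items} items")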


Subjects
Credentialing/standards, Educational Measurement/standards, Clinical Competence/standards, Clinical Medicine, Cluster Analysis, Educational Measurement/methods, Humans, Educational Models, Primary Health Care, Reproducibility of Results, United States
7.
Clin Gastroenterol Hepatol ; 1(1): 64-8, 2003 Jan.
Article in English | MEDLINE | ID: mdl-15017519

ABSTRACT

BACKGROUND AND AIMS: Clinician educators are asked to provide both formative and summative evaluations of the medical knowledge of residents. This study evaluated the accuracy of these evaluations and residents' perceptions of the ability of faculty to assess their medical knowledge. METHODS: Gastroenterology knowledge ratings provided by 15 faculty gastroenterologists for 49 internal medicine residents during a required gastroenterology rotation were correlated with performance on the gastroenterology subsection of the In-Training Examination for Internal Medicine (ITE). Residents also were surveyed regarding their perception of the ability of faculty to judge their knowledge of medical gastroenterology. RESULTS: The mean correlation (Kendall's tau-b) of faculty ratings with performance on the ITE was 0.30 (P < 0.01). The range of correlation values for individual faculty (-0.39 to 0.80) indicated that some faculty were able to assess the medical knowledge of residents better than others. Residents, as well as the faculty themselves, perceived that faculty were able to rate residents' medical knowledge relatively well. CONCLUSIONS: The ability of faculty gastroenterologists to judge the knowledge of gastroenterology in their resident trainees was quite limited. Nevertheless, residents and faculty alike inaccurately perceive gastroenterologists as good judges of residents' knowledge base. An end-of-rotation written examination appears to be required to provide an accurate assessment of the medical knowledge of residents.
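
The agreement statistic used here, Kendall's tau-b, is straightforward to reproduce; the sketch below applies scipy's implementation to hypothetical faculty ratings and ITE subsection scores. The simulated values are illustrative only and are not the study's data.

import numpy as np
from scipy.stats import kendalltau

# Hypothetical data for 49 residents: ITE gastroenterology scores and 1-9 faculty ratings.
rng = np.random.default_rng(4)
ite_scores = rng.normal(60, 10, size=49)
faculty_ratings = np.clip(np.round(ite_scores / 10 + rng.normal(scale=1.5, size=49)), 1, 9)

tau, p_value = kendalltau(faculty_ratings, ite_scores)  # kendalltau computes tau-b, handling tied ratings
print(f"Kendall's tau-b = {tau:.2f}, p = {p_value:.3g}")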


Subjects
Clinical Competence, Educational Measurement, Gastroenterology/education, Internship and Residency, Adult, Medical Faculty, Female, Humans, Male