Results 1 - 12 of 12
1.
Mayo Clin Proc ; 99(5): 782-794, 2024 May.
Article in English | MEDLINE | ID: mdl-38702127

ABSTRACT

The rapidly evolving coaching profession has permeated the health care industry and is gaining ground as a viable solution for addressing physician burnout, turnover, and leadership crises that plague the industry. Although various coach credentialing bodies are established, the profession has no standardized competencies for physician coaching as a specialty practice area, creating a market of aspiring coaches with varying degrees of expertise. To address this gap, we employed a modified Delphi approach to arrive at expert consensus on competencies necessary for coaching physicians and physician leaders. Informed by the National Board of Medical Examiners' practice of rapid blueprinting, a group of 11 expert physician coaches generated an initial list of key thematic areas and specific competencies within them. The competency document was then distributed for agreement rating and comment to over 100 stakeholders involved in physician coaching. Our consensus threshold was defined at 70% agreement, and actual responses ranged from 80.5% to 95.6% agreement. Comments were discussed and addressed by 3 members of the original group, resulting in a final model of 129 specific competencies in the following areas: (1) physician-specific coaching, (2) understanding physician and health care context, culture, and career span, (3) coaching theory and science, (4) diversity, equity, inclusion, and other social dynamics, (5) well-being and burnout, and (6) physician leadership. This consensus on physician coaching competencies represents a critical step toward establishing standards that inform coach education, training, and certification programs, as well as guide the selection of coaches and evaluation of coaching in health care settings.
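As a rough illustration of the consensus arithmetic described above (agreement ratings checked against a 70% threshold), here is a minimal sketch; the function names and vote data are hypothetical, not taken from the study:

```python
# Sketch: percent-agreement check against a consensus threshold,
# as in a modified Delphi round. Names and data are hypothetical.

CONSENSUS_THRESHOLD = 0.70  # 70% agreement, per the study's definition

def agreement_rate(ratings):
    """Fraction of stakeholders who agreed (True) with a competency."""
    return sum(ratings) / len(ratings)

def reaches_consensus(ratings, threshold=CONSENSUS_THRESHOLD):
    return agreement_rate(ratings) >= threshold

# Hypothetical stakeholder votes: True = agree, False = disagree
votes = [True] * 83 + [False] * 17
print(agreement_rate(votes))     # 0.83
print(reaches_consensus(votes))  # True
```

Each competency surviving this check would then be carried into the final model; comments on near-threshold items would go back to the review group.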


Subject(s)
Delphi Technique, Mentoring, Humans, Clinical Competence/standards, Consensus, Leadership, Physicians/standards, Physicians/psychology, Professional Competence/standards
2.
Med Sci Educ ; 34(2): 471-475, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38686150

ABSTRACT

Problem: Many assessments in medical education involve measuring proficiency in a content area. Thus, proper content development (blueprinting) of tests in this field is of primary importance. Prior efforts to conduct content review as part of assessment development have been time- and resource-intensive, relying on practice analysis and then on linking methods. This monograph explores a "rapid, cost-effective" approach to blueprinting that allows efficient assessment development with rigor. Our investigation seeks to explore an efficient and effective alternate method for creating a content design (blueprint) for medical credentialing and evaluation examinations by focusing directly on assessment requirements. Approach: We employed a two-phase process to propose a rapid blueprinting method. Phase 1 involved a 1-day direct meeting of content experts/practitioners. Phase 2 involved a corroboration survey sent to a wider group of content experts/practitioners. The rapid blueprinting method was applied to developing eleven blueprints (five for medical specialty certification; five for health professions certification; and one for in-training assessment). Outcomes: The methods we used resulted in effective, well-balanced, operational examinations that successfully implemented the resulting blueprints in item writing assignments and test development. Assessments resulting from the use of the rapid blueprinting method also generated psychometrically sound inferences from the scores. For example, the assessments resulting from this methodology of test construction had KR-20 reliability coefficients ranging from .87 to .92. Next Steps: This approach leveraged the effectiveness and feasibility of the rapid blueprinting method and demonstrated successful examination designs (blueprints) that are cost- and time-effective. The rapid blueprinting method may be explored for further implementation in local assessment settings beyond medical credentialing examinations.
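The KR-20 reliability coefficients reported above (.87 to .92) come from a standard formula for dichotomously scored items. A minimal sketch of that computation, using a small hypothetical item-response matrix:

```python
# Sketch: Kuder-Richardson Formula 20 (KR-20) for 0/1-scored items,
# the reliability statistic cited for the rapid-blueprint examinations.
# The response matrix below is hypothetical.

def kr20(item_scores):
    """item_scores: one row per examinee, each row a list of 0/1 item scores."""
    k = len(item_scores[0])                      # number of items
    n = len(item_scores)                         # number of examinees
    totals = [sum(row) for row in item_scores]   # total score per examinee
    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / n  # population variance
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in item_scores) / n  # proportion correct, item j
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_total)

responses = [
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 0],
]
print(round(kr20(responses), 2))  # 0.75
```

Real operational forms have far more items and examinees; with k in the hundreds, coefficients in the .87 to .92 range indicate internally consistent score inferences.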

3.
Acad Med ; 2024 Feb 27.
Article in English | MEDLINE | ID: mdl-38412485

ABSTRACT

PURPOSE: Clinical reasoning, a complex construct integral to the practice of medicine, has been challenging to define, teach, and assess. Programmatic assessment purports to overcome validity limitations of judgments made from individual assessments through proportionality and triangulation processes. This study explored a pragmatic approach to the programmatic assessment of clinical reasoning. METHOD: The study analyzed data from 2 student cohorts from the University of Utah School of Medicine (UUSOM) (n = 113 in cohort 1 and 119 in cohort 2) and 1 cohort from the University of Colorado School of Medicine (CUSOM) using assessment data that spanned from 2017 to 2021 (n = 199). The study methods included the following: (1) asking faculty judges to categorize student clinical reasoning skills, (2) selecting institution-specific assessment data conceptually aligned with clinical reasoning, (3) calculating correlations between assessment data and faculty judgments, and (4) developing regression models between assessment data and faculty judgments. RESULTS: Faculty judgments of student clinical reasoning skills were converted to a continuous variable of clinical reasoning struggles, with mean (SD) ratings of 2.93 (0.27) for the 232 UUSOM students and 2.96 (0.17) for the 199 CUSOM students. A total of 67 and 32 discrete assessment variables were included from the UUSOM and CUSOM, respectively. Pearson r correlations were moderate to strong between many individual and composite assessment variables and faculty judgments. Regression models demonstrated an overall adjusted R2 (standard error of the estimate) of 0.50 (0.19) for UUSOM cohort 1, 0.28 (0.15) for UUSOM cohort 2, and 0.30 (0.14) for CUSOM. CONCLUSIONS: This study represents an early pragmatic exploration of regression analysis as a potential tool for operationalizing the proportionality and triangulation principles of programmatic assessment. 
The study found that programmatic assessment may be a useful framework for longitudinal assessment of complicated constructs, such as clinical reasoning.
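The regression machinery described above can be sketched in miniature. The following ordinary-least-squares fit with adjusted R² uses a single predictor and hypothetical data; the study's actual models combined dozens of assessment variables:

```python
# Sketch: ordinary least squares with one predictor, plus adjusted R^2,
# mirroring the kind of regression models described above.
# Variable names and data are hypothetical, not the study's.

def simple_ols(x, y):
    """Fit y = intercept + slope * x; return slope, intercept, R^2, adjusted R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot
    p = 1  # number of predictors
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
    return slope, intercept, r2, adj_r2

# Hypothetical: composite assessment score vs. faculty judgment rating
composite = [1.0, 2.0, 3.0, 4.0, 5.0]
judgment = [2.6, 2.7, 2.9, 3.0, 3.2]
slope, intercept, r2, adj_r2 = simple_ols(composite, judgment)
```

Adjusted R² penalizes model size, which matters when, as here, the number of candidate assessment variables (67 and 32) is large relative to cohort size.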

4.
Crit Care Med ; 49(8): 1285-1292, 2021 08 01.
Article in English | MEDLINE | ID: mdl-33730745

ABSTRACT

OBJECTIVES: To describe the development and initial results of an examination and certification process assessing competence in critical care echocardiography. DESIGN: A test writing committee of content experts from eight professional societies invested in critical care echocardiography was convened, with the Executive Director representing the National Board of Echocardiography. Using an examination content outline, the writing committee was assigned topics relevant to their areas of expertise. The examination items underwent extensive review, editing, and discussion in several face-to-face meetings supervised by National Board of Medical Examiners editors and psychometricians. A separate certification committee was tasked with establishing criteria required to achieve National Board of Echocardiography certification in critical care echocardiography through detailed review of required supporting material submitted by candidates seeking to fulfill these criteria. SETTING: The writing committee met twice a year in person at the National Board of Medical Examiners office in Philadelphia, PA. SUBJECTS: Physicians enrolled in the Examination of Special Competence in Critical Care Echocardiography (CCEeXAM). MEASUREMENTS AND MAIN RESULTS: A total of 524 physicians sat for the examination, and 426 (81.3%) achieved a passing score. Of the examinees, 41% were anesthesiology trained, 33.2% had a pulmonary/critical care background, and the majority (91.6%) had completed training within the prior 10 years. Most candidates worked full-time at an academic hospital (46.9%). CONCLUSIONS: The CCEeXAM is designed to assess a knowledge base that is shared with echocardiologists in addition to that which is unique to critical care.
The National Board of Echocardiography certification establishes that the physician has achieved the ability to independently perform and interpret critical care echocardiography at a standard recognized by critical care professional societies encompassing a wide spectrum of backgrounds. The interest shown and the success achieved on the CCEeXAM by practitioners of critical care echocardiography support the standards set by the National Board of Echocardiography for testamur status and certification in this imaging specialty area.


Subject(s)
Certification/standards, Clinical Competence/standards, Critical Care/standards, Echocardiography/standards, Internal Medicine/standards, Educational Measurement, Humans, Specialty Boards
5.
J Vet Med Educ ; 37(4): 377-82, 2010.
Article in English | MEDLINE | ID: mdl-21135405

ABSTRACT

The National Board of Veterinary Medical Examiners was interested in the possible effects of word count on the outcomes of the North American Veterinary Licensing Examination. In this study, the authors investigated the effects of increasing word count on the pacing of examinees during each section of the examination and on the performance of examinees on the items. Specifically, the authors analyzed the effect of item word count on the average time spent on each item within a section of the examination, the average number of items omitted at the end of a section, and the average difficulty of items as a function of presentation order. The average word count per item increased from 2001 to 2008. As expected, there was a relationship between word count and time spent on the item. No significant relationship was found between word count and item difficulty, and an analysis of omitted items and pacing patterns showed no indication of overall pacing problems.


Subject(s)
Education, Veterinary/methods, Educational Measurement/methods, Educational Measurement/statistics & numerical data, Licensure, Canada, Certification, Humans, Language, Psychometrics, Time Factors, Time Management, United States, User-Computer Interface
6.
Eval Health Prof ; 30(4): 362-75, 2007 Dec.
Article in English | MEDLINE | ID: mdl-17986670

ABSTRACT

Cluster analysis can be a useful statistical technique for setting minimum passing scores on high-stakes examinations by grouping examinees into homogenous clusters based on their responses to test items. It has been most useful for supplementing data or validating minimum passing scores determined from expert judgment approaches, such as the Ebel and Nedelsky methods. However, there is no evidence supporting how well cluster analysis converges with the modified Angoff method, which is frequently used in medical credentialing. Therefore, the purpose of this study is to investigate the efficacy of cluster analysis for validating Angoff-derived minimum passing scores. Data are from 652 examinees who took a national credentialing examination based on a content-by-process test blueprint. Results indicate a high degree of consistency in minimum passing score estimates derived from the modified Angoff and cluster analysis methods. However, the stability of the estimates from cluster analysis across different samples was modest.
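One common way cluster analysis is applied to standard setting, consistent with the approach described above, is to split total scores into two clusters (roughly, masters and non-masters) and place the candidate cut score between the cluster means. A minimal one-dimensional two-cluster sketch, with hypothetical scores and the assumption that the score distribution actually separates into two groups:

```python
# Sketch: 1-D two-cluster k-means on total scores, yielding a candidate
# minimum passing score at the midpoint of the two cluster means.
# Scores are hypothetical; assumes the data form two separable groups.

def two_means_cut(scores, iters=50):
    lo, hi = min(scores), max(scores)  # initial cluster centers
    for _ in range(iters):
        low_grp = [s for s in scores if abs(s - lo) <= abs(s - hi)]
        high_grp = [s for s in scores if abs(s - lo) > abs(s - hi)]
        lo = sum(low_grp) / len(low_grp)    # update centers to cluster means
        hi = sum(high_grp) / len(high_grp)
    return (lo + hi) / 2  # candidate minimum passing score

scores = [42, 45, 48, 50, 71, 74, 78, 80]
print(two_means_cut(scores))  # 61.0
```

A cut score derived this way can then be compared against the Angoff panel's value; close agreement, as the study reports, supports the panel's judgment, while the sample-to-sample instability the study observed cautions against using clustering alone.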


Asunto(s)
Habilitación Profesional/normas , Evaluación Educacional/normas , Competencia Clínica/normas , Medicina Clínica , Análisis por Conglomerados , Evaluación Educacional/métodos , Humanos , Modelos Educacionales , Atención Primaria de Salud , Reproducibilidad de los Resultados , Estados Unidos
7.
Acad Med ; 79(10 Suppl): S55-7, 2004 Oct.
Article in English | MEDLINE | ID: mdl-15383390

ABSTRACT

PROBLEM STATEMENT AND BACKGROUND: This study examined the extent to which performance on the NBME® Comprehensive Basic Science Self-Assessment (CBSSA) and NBME Comprehensive Clinical Science Self-Assessment (CCSSA) can be used to project performance on USMLE Step 1 and Step 2 examinations, respectively. METHOD: Subjects were 1,156 U.S./Canadian medical students who took either (1) the CBSSA and Step 1, or (2) the CCSSA and Step 2, between April 2003 and January 2004. Regression analyses examined the relationship between each self-assessment and corresponding USMLE Step as a function of test administration conditions. RESULTS: The CBSSA explained 62% of the variation in Step 1 scores, while the CCSSA explained 56% of Step 2 score variation. In both samples, Standard-Paced conditions produced better estimates of future Step performance than Self-Paced ones. CONCLUSIONS: Results indicate that self-assessment examinations provide an accurate basis for predicting performance on the associated Step with some variation in predictive accuracy across test administration conditions.


Subject(s)
Clinical Competence, Educational Measurement/methods, Licensure, Medical, Self-Evaluation Programs, Students, Medical, Canada, Cohort Studies, Education, Medical, Undergraduate, Feedback, Forecasting, Humans, Internet, Science/education, Time Factors, United States
8.
Ann Med Interne (Paris) ; 154(3): 148-56, 2003 May.
Article in French | MEDLINE | ID: mdl-12910041

ABSTRACT

Medical training is undergoing extensive revision in France. A nationwide comprehensive clinical competency examination will be administered for the first time in 2004, relying exclusively on essay questions. Unfortunately, these questions have psychometric shortcomings, particularly their typically low reliability. High score reliability is mandatory in a high-stakes context. Multiple-choice questions (MCQs) designed by the National Board of Medical Examiners are well adapted to assessing clinical competency with high score reliability. The purpose of this study was to test the hypothesis that French medical students could take an American-designed, French-adapted comprehensive clinical knowledge examination in this MCQ format. Two hundred eighty-five French students, from four medical schools across France, took an examination composed of 200 MCQs under standardized conditions. Their scores were compared with those of American students. The examination was found to assess French students' clinical knowledge with a high level of reliability. French students' scores were slightly lower than those of American students, mostly owing to a lack of familiarity with this particular item format and a lower motivational level. Another study is being designed, with a larger group, to address some of the shortcomings of the initial study. If these preliminary results are replicated, the MCQ format might be a more defensible and sensible alternative to the proposed essay questions.


Subject(s)
Clinical Competence/standards, Education, Medical/standards, Educational Measurement, Licensure, Medical/standards, Adult, Female, France, Humans, Male, Pilot Projects, Reproducibility of Results, United States
9.
Acad Med ; 78(5): 509-17, 2003 May.
Article in English | MEDLINE | ID: mdl-12742789

ABSTRACT

PURPOSE: The French government, as part of medical education reforms, has affirmed that an examination program for national residency selection will be implemented by 2004. The purpose of this study was to develop a French multiple-choice (MC) examination using the National Board of Medical Examiners' (NBME) expertise and materials. METHOD: The Evaluation Standardisée du Second Cycle (ESSC), a four-hour clinical sciences examination, was administered in January 2002 to 285 medical students at four university test sites in France. The ESSC had 200 translated and adapted MC items selected from the Comprehensive Clinical Sciences Examination (CCSE), an NBME subject test. RESULTS: Less than 10% of the ESSC items were rejected as inappropriate to French practice. Also, the distributions of ESSC item characteristics were similar to those reported with the CCSE. The ESSC also appeared to be very well targeted to examinees' proficiencies and yielded a reliability coefficient of .91. However, because of a higher word count, the ESSC did show evidence of speededness. Regarding overall performance, the mean proficiency estimate for French examinees was about 0.4 SD below that of a CCSE population. CONCLUSIONS: This study provides strong evidence for the usefulness of the model adopted in this first collaborative effort between the NBME and a consortium of French medical schools. Overall, the performance of French students was comparable to that of CCSE students, which was encouraging given the differences in motivation and the speeded nature of the French test. A second phase with the participation of larger numbers of French medical schools and students is being planned.


Subject(s)
Clinical Medicine/education, Educational Measurement, Schools, Medical, Students, Medical, Female, France, Humans, Male
10.
Clin Gastroenterol Hepatol ; 1(1): 64-8, 2003 Jan.
Article in English | MEDLINE | ID: mdl-15017519

ABSTRACT

BACKGROUND AND AIMS: Clinician educators are asked to provide both formative and summative evaluations on the medical knowledge of residents. This study evaluated the accuracy of these evaluations and the perception of residents regarding the ability of faculty to assess medical knowledge. METHODS: Gastroenterology knowledge ratings provided by 15 faculty gastroenterologists on 49 internal medicine residents during a required gastroenterology rotation were correlated with performance on the gastroenterology subsection of the In-Training Examination for Internal Medicine. Residents also were surveyed regarding their perception of the ability of faculty to judge their knowledge of medical gastroenterology. RESULTS: The mean correlation (Kendall's tau b) of faculty ratings with performance on the ITE was 0.30 (P < 0.01). The range of correlation values for individual faculty (-0.39 to 0.80) indicated that some faculty were able to assess the medical knowledge of residents better than others. Residents, as well as the faculty themselves, perceived that faculty were able to rate their medical knowledge relatively well. CONCLUSIONS: The ability of faculty gastroenterologists to judge the knowledge of gastroenterology in their resident trainees was quite limited. Residents, as well as faculty, inaccurately perceive the ability of gastroenterologists to render professional judgments on their knowledge base as good. An end-of-rotation written examination would appear to be required to provide an accurate assessment of the medical knowledge of residents.
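Kendall's tau-b, the correlation statistic reported above, can be computed directly from rating pairs. A minimal sketch with hypothetical faculty ratings and exam subscores (ties are handled in the tau-b manner, though the example data happen to have none):

```python
import math

# Sketch: Kendall's tau-b between faculty knowledge ratings and
# in-training-exam subscores. Data are hypothetical illustrations.

def kendall_tau_b(x, y):
    concordant = discordant = ties_x = ties_y = 0
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = x[i] - x[j], y[i] - y[j]
            if dx == 0 and dy == 0:
                continue          # tied on both variables: excluded from both factors
            elif dx == 0:
                ties_x += 1
            elif dy == 0:
                ties_y += 1
            elif dx * dy > 0:
                concordant += 1
            else:
                discordant += 1
    denom = math.sqrt((concordant + discordant + ties_x) *
                      (concordant + discordant + ties_y))
    return (concordant - discordant) / denom

faculty_rating = [3, 1, 2, 4, 5]      # hypothetical 1-5 knowledge ratings
ite_subscore = [55, 40, 60, 70, 80]   # hypothetical exam subscores
print(kendall_tau_b(faculty_rating, ite_subscore))  # 0.8
```

A mean tau-b of 0.30, as the study found, means faculty orderings of residents agreed only weakly with the exam ordering, which is what motivates the authors' call for an end-of-rotation written examination.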


Subject(s)
Clinical Competence, Educational Measurement, Gastroenterology/education, Internship and Residency, Adult, Faculty, Medical, Female, Humans, Male
11.
Anesth Analg ; 95(6): 1476-82, table of contents, 2002 Dec.
Article in English | MEDLINE | ID: mdl-12456404

ABSTRACT

UNLABELLED: A key element in developing a process to determine knowledge and ability in applying perioperative echocardiography has been an examination. We report on the development of a certifying examination in perioperative echocardiography. In addition, we tested the hypothesis that examination performance is related to clinical experience in echocardiography. Since 1995, more than 1200 participants have taken the examination, and more than 70% have passed. Overall examination performance was positively related to more than 3 months of training (or equivalent) in echocardiography and to performing and interpreting at least six examinations a week. We concluded that the certifying examination in perioperative echocardiography is a valid tool to help determine individual knowledge in perioperative echocardiography application. IMPLICATIONS: This report describes the process involved in developing the certifying transesophageal echocardiography examination and identifies correlates with examination performance.


Subject(s)
Anesthesiology/education, Certification, Echocardiography, Transesophageal, Humans, Knowledge
12.
Ann Intern Med ; 137(6): 505-10, 2002 Sep 17.
Article in English | MEDLINE | ID: mdl-12230352

ABSTRACT

BACKGROUND: The In-Training Examination in Internal Medicine (IM-ITE) has been offered annually to all trainees in U.S. medical residency programs since 1988. Its purpose is to provide residents and program directors with an objective assessment of each resident's personal performance on a written, multiple-choice examination and of the residency program's performance compared with that of its peers. OBJECTIVE: To analyze trends in the demographic characteristics and scores of examinees during the first 12 years of administration of this examination. DESIGN: Descriptive analysis over time. SETTING: U.S. residency programs in internal medicine, 1988-2000. PARTICIPANTS: Residents at all levels of training in categorical, primary care, and medicine-pediatrics programs in the United States and Canada. The number of examinees increased from 7,500 in 1988 to almost 18,000 in 2000. MEASUREMENTS: After calibration of the scores for each examination, test results were compared and analyzed for selected cohorts of residents over 12 years. RESULTS: More than 80% of residents in medicine training programs participate in the IM-ITE, most on an annual basis throughout their period of training. Test performance improves at a predictable rate with each year of training. Since 1995, international medical school graduates have persistently outperformed graduates of U.S. medical schools. Test results were affected by the timing of the examination, the time available to complete the examination, and the actual time that residents spent in internal medicine training before each examination. CONCLUSIONS: IM-ITE scores generally improve with year of training, time spent in internal medicine training before the examination, and time permitted to complete the examination. These observations provide evidence that the IM-ITE is a valid measure of knowledge acquired during internal medicine training.


Subject(s)
Educational Measurement, Internal Medicine/education, Internship and Residency/trends, Foreign Medical Graduates, Humans, Internal Medicine/trends, Time Factors, United States