1.
Acad Med ; 96(9): 1236-1238, 2021 09 01.
Article En | MEDLINE | ID: mdl-34166234

The COVID-19 pandemic interrupted administration of the United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills (CS) exam in March 2020 due to public health concerns. As the scope and magnitude of the pandemic became clearer, the initial plans by the USMLE program's sponsoring organizations (NBME and Federation of State Medical Boards) to resume Step 2 CS in the short-term shifted to long-range plans to relaunch an exam that could harness technology and reduce infection risk. Insights about ongoing changes in undergraduate and graduate medical education and practice environments, coupled with challenges in delivering a transformed examination during a pandemic, led to the January 2021 decision to permanently discontinue Step 2 CS. Despite this, the USMLE program considers assessment of clinical skills to be critically important. The authors believe this decision will facilitate important advances in assessing clinical skills. Factors contributing to the decision included concerns about achieving desired goals within desired time frames; a review of enhancements to clinical skills training and assessment that have occurred since the launch of Step 2 CS in 2004; an opportunity to address safety and health concerns, including those related to examinee stress and wellness during a pandemic; a review of advances in the education, training, practice, and delivery of medicine; and a commitment to pursuing innovative assessments of clinical skills. USMLE program staff continue to seek input from varied stakeholders to shape and prioritize technological and methodological enhancements to guide development of clinical skills assessment. The USMLE program's continued exploration of constructs and methods by which communication skills, clinical reasoning, and physical examination may be better assessed within the remaining components of the exam provides opportunities for examinees, educators, regulators, the public, and other stakeholders to provide input.


Clinical Competence/standards; Educational Measurement/methods; Licensure, Medical/standards; COVID-19/prevention & control; Educational Measurement/standards; Humans; Licensure, Medical/trends; United States
2.
Acad Med ; 96(1): 37-43, 2021 01 01.
Article En | MEDLINE | ID: mdl-32910005

The practice of medicine is changing rapidly as a consequence of electronic health record adoption, new technologies for patient care, disruptive innovations that break down professional hierarchies, and evolving societal norms. Collectively, these have resulted in the modification of the physician's role as the gatekeeper for health care, increased shift-based care, and amplified interprofessional team-based care. Technological innovations present opportunities as well as challenges. Artificial intelligence, which has great potential, has already transformed some tasks, particularly those involving image interpretation. Ubiquitous access to information via the Internet by physicians and patients alike presents benefits as well as drawbacks: patients and providers have ready access to virtually all of human knowledge, but some websites are contaminated with misinformation and many people have difficulty differentiating between solid, evidence-based data and untruths. The role of the future physician will shift as complexity in health care increases and as artificial intelligence and other technologies advance. These technological advances demand new skills of physicians; memory and knowledge accumulation will diminish in importance while information management skills will become more important. In parallel, medical educators must enhance their teaching and assessment of critical human skills (e.g., clear communication, empathy) in the delivery of patient care. The authors emphasize the enduring role of critical human skills in safe and effective patient care even as medical practice is increasingly guided by artificial intelligence and related technology, and they suggest new and longitudinal ways of assessing essential noncognitive skills to meet the demands of the future.
The authors envision practical and achievable benefits accruing to patients and providers if practitioners leverage technological advancements to facilitate the development of their critical human skills.


Artificial Intelligence/standards; Clinical Competence/standards; Empathy; Patient Care/psychology; Patient Care/standards; Physician's Role/psychology; Physicians/psychology; Therapy, Computer-Assisted/standards; Adult; Female; Humans; Male; Middle Aged
5.
Acad Med ; 94(8): 1103-1107, 2019 08.
Article En | MEDLINE | ID: mdl-31135402

Collaboration among the national organizations responsible for self-regulation in medicine in the United States is critical, as achieving the quadruple aim of enhancing the patient experience and improving population health while lowering costs and improving the work life of clinicians and staff is becoming more challenging. The leaders of the national organizations responsible for accreditation, assessment, licensure, and certification recognize this and have come together as the Coalition for Physician Accountability. The coalition, which meets twice per year, was created in 2011 as a discursive space for group discussion and action related to advancing health care, promoting professional accountability, and improving the education, training, and assessment of physicians. The coalition offers a useful avenue for members to seek common ground and develop constructive, thoughtful solutions to common challenges. Its members have endorsed consensus statements about current topics relevant to health care regulation, advanced innovation in medical school curricula, encouraged a plan for single graduate medical education accreditation for physicians holding MD and DO degrees, supported interprofessional education, championed opioid epidemic mitigation strategies, and supported initiatives responsive to physician workforce shortages, including the Interstate Medical Licensure Compact, an expedited pathway by which eligible physicians may be licensed to practice in multiple jurisdictions.


Education, Medical, Graduate/standards; Physicians/standards; Social Responsibility; Accreditation/organization & administration; Certification/organization & administration; Humans; Intersectoral Collaboration; Licensure, Medical; United States
6.
Acad Med ; 94(3): 305-308, 2019 03.
Article En | MEDLINE | ID: mdl-30570495

The United States Medical Licensing Examination has long been valued by state medical boards as an evidence-based, objective assessment of an individual's progressive readiness for the unsupervised practice of medicine. As a secondary use, it is also valued by residency program directors in resident selection. In response to Chen and colleagues' consideration of changing Step 1 scoring to pass/fail, contextual and germane information is offered in this Invited Commentary, including a discussion of potential consequences, risks, and benefits of such a change. A review of stakeholders involved in the residency application process and their possible reactions to a scoring change precedes a discussion of possible changes to the process, changes that may better address expressed concerns. In addition to pass/fail scoring, these include limiting score releases only to examinees, changing the timing of score releases, increasing the amount and improving the quality of information about residency programs available to applicants, developing additional quantitative measures of applicant characteristics important to residency programs, and developing a rating system for medical school student evaluations. Thoughtful and broad consideration of stakeholders and their concerns, informed by the best evidence available, will be necessary to maximize the potential for improvement and minimize the risk of unintended adverse consequences resulting from any changes to the status quo. An upcoming invitational conference in 2019 that is being organized by several stakeholder organizations is expected to further explore underlying issues and concerns related to these options.


Internship and Residency; Climate; Humans; Schools, Medical; Students; United States
7.
Acad Med ; 91(11): 1483-1487, 2016 11.
Article En | MEDLINE | ID: mdl-27627632

The residency application process requires that applicants, their schools, and residency programs exchange and evaluate information to accomplish successful matching of applicants to postgraduate training positions. The different motivations of these stakeholders influence both the types of information provided by medical schools and the perceived value and completeness of information received by residency programs. National standards have arisen to shape the type and format of information reported by medical schools about their students, though criticisms about the candor and completeness of the information remain. Growth in the number of applicants without proportional expansion of training positions and continued increases in the number of applications submitted by each applicant contribute to increases in the absolute number of applications each year, as well as the difficulty of evaluating applicants. Few standardized measures exist to facilitate comparison of applicants, and the heterogeneous nature of provided information limits its utility. Residency programs have been accused of excluding qualified applicants through use of numerical screening methods, such as United States Medical Licensing Examination (USMLE) Step 1 scores. Applicant evaluation includes review of standardized measurements such as USMLE Step 1 scores and other surrogate markers of future success. Proposed potential improvements to the residency application process include limiting applications; increasing the amount and/or types of information provided by applicants and by residency programs; shifting to holistic review, with standardization of metrics for important attributes; and fundamental reanalysis of the residency application process. A solution remains elusive, but these approaches may merit further consideration.


Internship and Residency/organization & administration; School Admission Criteria; Schools, Medical/organization & administration; Humans; United States
10.
Acad Med ; 88(11): 1670-5, 2013 Nov.
Article En | MEDLINE | ID: mdl-24072122

The National Board of Medical Examiners (NBME) reviewed all components of the United States Medical Licensing Examination as part of a strategic planning activity. One recommendation generated from the review called for enhancements of the communication skills component of the Step 2 Clinical Skills (Step 2 CS) examination. To address this recommendation, the NBME created a multidisciplinary team that comprised experts in communication content, communication measurement, and implementation of standardized patient (SP)-based examinations. From 2007 through 2012, the team reviewed literature in physician-patient communication, examined performance characteristics of the Step 2 CS exam, observed case development and quality assurance processes, interviewed SPs and their trainers, and reviewed video recordings of examinee-SP interactions. The authors describe perspectives gained by their team from the review process and outline the resulting enhancements to the Step 2 CS exam, some of which were rolled out in June 2012.


Communication; Educational Measurement/standards; Licensure, Medical; Physician-Patient Relations; Humans; Patient Simulation; United States
11.
Adv Health Sci Educ Theory Pract ; 17(2): 165-81, 2012 May.
Article En | MEDLINE | ID: mdl-20094911

During the last decade, interest in assessing professionalism in medical education has increased exponentially and has led to the development of many new assessment tools. Efforts to validate the scores produced by tools designed to assess professionalism have lagged well behind the development of these tools. This paper provides a structured framework for collecting evidence to support the validity of assessments of professionalism. The paper begins with a short history of the concept of validity in the context of psychological assessment. It then describes Michael Kane's approach to validity as a structured argument. The majority of the paper then focuses on how Kane's framework can be applied to assessments of professionalism. Examples are provided from the literature, and recommendations for future investigation are made in areas where the literature is deficient.


Education, Medical/methods; Mental Disorders/diagnosis; Professional Competence; Professional Role; Psychological Tests; Reproducibility of Results; Humans
13.
Acad Med ; 86(10 Suppl): S63-7; quiz S68, 2011 Oct.
Article En | MEDLINE | ID: mdl-21955772

BACKGROUND: Multisource feedback can provide a comprehensive picture of a medical trainee's performance. The utility of a multisource feedback system could be undermined by lack of direct observation and accurate knowledge. METHOD: The National Board of Medical Examiners conducted a national survey of medical students, interns, residents, chief residents, and fellows to learn the extent to which certain behaviors were observed, to examine beliefs about knowledge of each other's performance, and to assess feedback. RESULTS: Increased direct observation is associated with the perception of more accurate knowledge, which is associated with increased feedback. Some evaluators provide feedback in the absence of accurate knowledge of a trainee's performance, and others who have accurate knowledge miss opportunities for feedback. CONCLUSIONS: Direct observation is a key component of an effective multisource feedback system. Medical educators and residency directors may be well advised to establish explicit criteria specifying a minimum number of observations for evaluations.


Educational Measurement/methods; Feedback; Data Collection; Internship and Residency; Students, Medical; United States
14.
Acad Med ; 86(4): 460-7, 2011 Apr.
Article En | MEDLINE | ID: mdl-21346509

As the medical education community celebrates the 100th anniversary of the seminal Flexner Report, medical education is once again experiencing significant pressure to transform. Multiple reports from many of medicine's specialties and external stakeholders highlight the inadequacies of current training models to prepare a physician workforce to meet the needs of an increasingly diverse and aging population. This transformation, driven by competency-based medical education (CBME) principles that emphasize outcomes, will require more effective evaluation and feedback by faculty. Substantial evidence suggests, however, that current faculty are insufficiently prepared for this task across both the traditional competencies of medical knowledge, clinical skills, and professionalism and the newer competencies of evidence-based practice, quality improvement, interdisciplinary teamwork, and systems. The implication of these observations is that the medical education enterprise urgently needs an international initiative of faculty development around CBME and assessment. In this article, the authors outline the current challenges and provide suggestions on where faculty development efforts should be focused and how such an initiative might be accomplished. The public, patients, and trainees need the medical education enterprise to improve training and outcomes now.


Competency-Based Education; Education, Medical; Educational Measurement/standards; Faculty, Medical; Quality Improvement; Staff Development; Humans
15.
J Grad Med Educ ; 3(4): 511-6, 2011 Dec.
Article En | MEDLINE | ID: mdl-23205200

BACKGROUND: Multisource feedback (MSF) is emerging as a central assessment method for several medical education competencies. Planning and resource requirements for a successful implementation can be significant. Our goal is to examine barriers and challenges to a successful multisite MSF implementation and to identify the benefits of MSF as perceived by participants. METHODS: We analyzed the 2007-2008 field trial implementation of the Assessment of Professional Behaviors, an MSF program of the National Board of Medical Examiners, conducted with 8 residency and fellowship programs at 4 institutions. We used a multimethod analysis drawing on quantitative process indicators and qualitative participant experience data. Process indicators included program attrition, completion of implementation milestones, number of participants at each site, number of MSF surveys assigned and completed, and adherence to an experimental rater training protocol. Qualitative data included communications with each program and semistructured interviews conducted with key field trial staff to elicit their experiences with implementation. RESULTS: Several implementation challenges were identified, including communication gaps and difficulty scheduling implementation and training workshops. Participant interviews indicated several program changes that should enhance feasibility, including increasing communication and streamlining the training process. CONCLUSIONS: Multisource feedback is a complex educational intervention that has the potential to provide users with a better understanding of performance expectations in the graduate medical education environment. Standardization of the implementation processes and tools should reduce the burden on program administrators and participants. Further study is warranted to broaden our understanding of the resource requirements for a successful MSF implementation and to show how outcomes change as MSF gains broader acceptance.

16.
Acad Med ; 85(10 Suppl): S106-9, 2010 Oct.
Article En | MEDLINE | ID: mdl-20881691

BACKGROUND: Written feedback on professional behaviors is an important part of medical training, but little attention has been paid to the quality of written feedback and its expected impact on learning. A large body of research on feedback suggests that feedback is most beneficial when it is specific, clear, and behavioral. Analysis of feedback comments may reveal opportunities to improve the value of feedback. METHOD: Using a directed content analysis, the authors coded and analyzed feedback phrases collected as part of a pilot of a developmental multisource feedback program. The authors coded feedback on various dimensions, including valence (positive or negative) and whether feedback was directed at the level of the self or behavioral performance. RESULTS: Most feedback comments were positive, self-oriented, and lacked actionable information that would make them useful to learners. CONCLUSIONS: Comments often lack effective feedback characteristics. Opportunities exist to improve the quality of comments provided in multisource feedback.


Education, Medical, Graduate; Feedback; Professional Practice; Writing; Education, Medical; Fellowships and Scholarships; Female; Humans; Internship and Residency; Male; Pilot Projects; Surveys and Questionnaires; United States
17.
Med Teach ; 31(4): 348-61, 2009 Apr.
Article En | MEDLINE | ID: mdl-19404894

Medical professionalism is increasingly recognized as a core competence of medical trainees and practitioners. Although the general and specific domains of professionalism are thoroughly characterized, procedures for assessing them are not well developed. This article outlines an approach to designing and implementing an assessment program for medical professionalism that begins and ends with asking and answering a series of critical questions about the purpose and nature of the program. The process of exposing an assessment program to a series of interrogatives that comprise an integrated and iterative framework for thinking about the assessment process should lead to continued improvement in the quality and defensibility of that program.


Evaluation Studies as Topic; Physician's Role; Professional Competence/standards; Humans
...