Results 1 - 20 of 62
1.
Med Teach ; 32(6): 516-20, 2010.
Article in English | MEDLINE | ID: mdl-20515385

ABSTRACT

This collaborative project between the National Board of Medical Examiners (NBME) and Case Western Reserve University (CWRU) School of Medicine explored the design and use of cumulative achievement tests in basic science education. In cumulative achievement testing, integrative end-of-unit tests are deliberately constructed to systematically retest topics covered in previous units as well as material from the just-completed unit. CWRU faculty developed and administered a series of six web-based cumulative achievement tests using retired United States Medical Licensing Examination (USMLE) Step 1 test material and tools provided by NBME's Customized Assessment Services, and trends in student performance were examined as the new CWRU basic science curriculum unfolded. This article provides background information about test design and administration, as well as samples of score-reporting information for students and faculty. While firm conclusions about the effectiveness of cumulative achievement testing are not warranted after a pilot test at a single school, preliminary results suggest that cumulative achievement testing may be an effective complement to progress testing, with the former used to encourage retention of already-covered material and the latter used to assess growth toward the knowledge and skills expected of a graduating student.
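The retest design described above can be pictured as a simple sampling rule over unit item pools. The sketch below is purely illustrative and is not the CWRU/NBME assembly procedure; the class and function names, pool sizes, and the 30% retest fraction are all hypothetical assumptions.

```python
import random
from dataclasses import dataclass

# Toy assembly rule for a cumulative end-of-unit test: most items come from
# the just-completed unit, and a fixed fraction deliberately retests earlier units.
# All names and numbers are hypothetical, not the CWRU/NBME design.

@dataclass
class Item:
    item_id: str
    unit: int  # curriculum unit the item was written for

def build_cumulative_form(pool, current_unit, n_items=50, retest_fraction=0.3, seed=0):
    rng = random.Random(seed)
    current = [i for i in pool if i.unit == current_unit]
    earlier = [i for i in pool if i.unit < current_unit]
    n_retest = round(n_items * retest_fraction) if earlier else 0
    form = rng.sample(current, n_items - n_retest) + rng.sample(earlier, n_retest)
    rng.shuffle(form)
    return form

# Example: 6 units of 40 items each, building the end-of-unit test for unit 4
pool = [Item(f"U{u}-{k}", u) for u in range(1, 7) for k in range(40)]
form = build_cumulative_form(pool, current_unit=4)
print(sum(i.unit < 4 for i in form), "retest items out of", len(form))
```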


Subject(s)
Cooperative Behavior , Education, Medical, Undergraduate , Educational Measurement , Clinical Competence/standards , Humans , Licensure, Medical , Program Development , United States
2.
Med Teach ; 32(6): 480-5, 2010.
Article in English | MEDLINE | ID: mdl-20515377

ABSTRACT

This collaborative project between the National Board of Medical Examiners and four schools in the UK is investigating the feasibility and utility of a cross-school progress testing program drawing on test material recently retired from the United States Medical Licensing Examination (USMLE) Step 2 Clinical Knowledge (CK) examination. This article describes the design of the progress test; the process used to build, translate (localize), review, and finalize test forms; the approach taken to (web-based) test administration; and the procedure used to calculate and report scores. Results to date have demonstrated that it is feasible to use test items written for the US licensing examination as a basis for developing progress test forms for use in the UK. Some content areas can be localized more readily than others, and care is clearly needed in the review and revision of test materials to ensure that they are clinically appropriate and suitably phrased for use in the UK. Involvement of content experts in the review and vetting of test material is essential, and it is clearly desirable to supplement expert review with quality control procedures based on item statistics as a final check on the appropriateness of individual test items.
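The closing point about statistics-based quality control can be illustrated with classical item analysis. The sketch below computes item difficulty (proportion correct) and a corrected item-total point-biserial correlation, then flags weak items; the thresholds, function name, and simulated data are hypothetical, and this is a generic illustration rather than the program's actual procedure.

```python
import numpy as np

# Illustrative classical item analysis for post-administration quality control.
# The flagging thresholds (0.2 difficulty floor, 0.15 point-biserial floor)
# are hypothetical, not the project's actual criteria.

def item_statistics(responses: np.ndarray):
    """responses: examinees x items matrix of 0/1 scores."""
    difficulty = responses.mean(axis=0)            # proportion answering correctly
    total = responses.sum(axis=1)
    flags = []
    for j in range(responses.shape[1]):
        rest = total - responses[:, j]             # corrected (item-removed) total score
        r_pb = np.corrcoef(responses[:, j], rest)[0, 1]
        if difficulty[j] < 0.2 or r_pb < 0.15:
            flags.append((j, round(float(difficulty[j]), 2), round(float(r_pb), 2)))
    return difficulty, flags

# Example with simulated 0/1 response data (200 examinees, 60 items)
rng = np.random.default_rng(0)
data = (rng.random((200, 60)) < 0.7).astype(int)
_, flagged = item_statistics(data)
print("items flagged for review:", flagged[:5])
```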


Subject(s)
Educational Measurement/standards , International Cooperation , Schools, Medical , Humans , Internet , Licensure, Medical , United Kingdom , United States
3.
Arch Intern Med ; 147(6): 1049-52, 1987 Jun.
Article in English | MEDLINE | ID: mdl-3592872

ABSTRACT

There are substantial problems with the clinical training provided to medical students and with the assessment procedures used by medical schools to ensure that students have acquired the clinical skills necessary for graduate medical education. These skills are evaluated neither carefully nor systematically at any point in training or licensure. This article describes the use of standardized patients to help resolve some of these shortcomings. Standardized patients are non-physicians highly trained to function in the multiple roles of patient, teacher, and evaluator while realistically replicating a patient encounter. They are effective teachers of interviewing and physical examination skills, and they can provide controlled exposure to common ambulatory problems and to difficult patient-communication situations. Initial studies indicate the promise of this approach for ensuring the competence of medical school graduates.


Subject(s)
Clinical Clerkship/standards , Clinical Competence , Education, Medical, Undergraduate/standards , Teaching/methods , Communication , Educational Measurement , Interviews as Topic , Medical History Taking , Patients , Physical Examination , Physician-Patient Relations , Teaching/standards
4.
Arch Intern Med ; 147(11): 1981-5, 1987 Nov.
Article in English | MEDLINE | ID: mdl-3675100

ABSTRACT

The University of Massachusetts Medical Center, Worcester, attempted to develop a standardized, performance-based test battery aimed directly at assessing the critical aspects of clinical competence required for graduation from medical school. The battery used a blend of standardized patient-based and written test materials and was designed to yield a profile of scores, providing a "diagnosis" of student strengths and weaknesses on a skill-by-skill basis. Results indicate that a stable, reproducible assessment of clinical skills can be achieved in a one- to two-day test battery, depending on the specific skills measured. The resulting score profile provides faculty with important information about the clinical competence of students that is not readily available from other sources, thus improving the breadth and accuracy of student assessment. A long-term goal is that performance-based testing techniques will be incorporated into the licensure process to evaluate clinical skills and ensure the competence of graduating physicians.


Subject(s)
Clinical Competence , Educational Measurement/methods , Academic Medical Centers , Internal Medicine/education , Internship and Residency , Massachusetts , Pilot Projects
5.
Qual Saf Health Care ; 13 Suppl 1: i41-5, 2004 Oct.
Article in English | MEDLINE | ID: mdl-15465954

ABSTRACT

Over the last several years, much attention has been focused on the detection and remediation of problems that pose potential threats to patient safety and that interfere with the provision of effective care. It has been noted that changes in medical education and assessment are integral to eventual improvement in this area. Within the assessment system used to license physicians in the United States, there has been an evolution of assessment formats intended to improve the measurement of knowledge and skills, including the recent development of computer-based patient simulations and clinical skills assessments. A number of new testing formats intended to further enhance the assessment of critical knowledge and skills will be available in the near future.


Subject(s)
Clinical Competence , Computer Simulation , Educational Measurement , Licensure, Medical , Physicians/standards , United States
6.
Acad Med ; 68(2 Suppl): S51-6, 1993 Feb.
Article in English | MEDLINE | ID: mdl-8431254

ABSTRACT

This study investigated the relationship between the National Board of Medical Examiners (NBME) Part I and Part II examination scores and subsequent performances on the 1991 certification examinations of the American Boards of Orthopaedic Surgery (ABOS), Dermatology (ABD), and Preventive Medicine (ABPM). There were significant correlations between scores on all specialty board examinations and all NBME scores, with higher correlations for subscores more closely related to specialty content. Both NBME Part I and NBME Part II were useful predictors; however, the relationships with NBME Part II were generally stronger. Strong relationships were observed between specialty board pass-fail outcomes and NBME scores: examinees whose NBME scores were below 400 were at much greater risk for failing their specialty board examinations.
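The final sentence is a risk comparison between examinees scoring below and at or above 400. Purely as a generic illustration, the sketch below computes a relative risk of board failure from a 2x2 table; the counts are hypothetical placeholders, not results from the 1991 certification cohorts.

```python
# Hypothetical 2x2 illustration of the "below 400" risk comparison.
# The counts are placeholders, not data from the study.

fail_low, pass_low = 30, 70      # examinees with NBME scores below 400
fail_high, pass_high = 20, 380   # examinees with NBME scores of 400 or above

risk_low = fail_low / (fail_low + pass_low)
risk_high = fail_high / (fail_high + pass_high)
print(f"failure risk below 400: {risk_low:.2f}")
print(f"failure risk at/above 400: {risk_high:.2f}")
print(f"relative risk: {risk_low / risk_high:.1f}x")
```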


Subject(s)
Achievement , Dermatology , Educational Measurement , Internship and Residency , Orthopedics , Preventive Medicine , Humans , Reproducibility of Results , Specialty Boards
7.
Acad Med ; 72(12): 1097-102, 1997 Dec.
Article in English | MEDLINE | ID: mdl-9435717

ABSTRACT

PURPOSE: To examine students' growth in basic science knowledge during medical school and to evaluate the accuracy of students' scores on the National Board of Medical Examiners Comprehensive Basic Science Subject Examination (CBSE) as predictors of their performances on Step 1 of the United States Medical Licensing Examination (USMLE). METHOD: A public medical school in the southwestern United States evaluated 58 students from the entering class of 1993 by administering the CBSE in April 1994, December 1994, and February 1996. These students then sat for the USMLE Step 1 in June 1996. For each CBSE administration, descriptive statistics were calculated and least-squares regression analyses were performed to predict the students' Step 1 scores from their CBSE scores. RESULTS: The students' CBSE scores improved as they progressed through their basic science course work and clinical clerkships. The strongest correlation (r = .85) between the students' CBSE scores and their Step 1 scores was for the second CBSE administration; the weakest correlation (r = .73) was for the first CBSE administration. CONCLUSION: These results indicate that basic science knowledge continues to grow throughout the first three years of medical school and that the CBSE is a useful tool for the identification of students at risk for failing the USMLE Step 1.
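The prediction described in the METHOD section is an ordinary least-squares regression of Step 1 scores on CBSE scores. The sketch below shows that calculation on made-up score pairs; the values, variable names, and the passing score used for flagging are illustrative assumptions, not the study's data.

```python
import numpy as np

# Illustrative least-squares prediction of Step 1 scores from CBSE scores.
# The score pairs below are made up; the study used 58 students' actual scores.

cbse = np.array([55, 62, 70, 68, 75, 80, 58, 66], dtype=float)
step1 = np.array([180, 195, 210, 205, 220, 232, 185, 200], dtype=float)

# Fit step1 = b0 + b1 * cbse by ordinary least squares
b1, b0 = np.polyfit(cbse, step1, 1)
predicted = b0 + b1 * cbse
r = np.corrcoef(cbse, step1)[0, 1]

print(f"slope={b1:.2f}, intercept={b0:.1f}, r={r:.2f}")

# Flag students whose predicted Step 1 score falls below a hypothetical passing score
passing = 176
print("at risk:", [i for i, p in enumerate(predicted) if p < passing])
```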


Subject(s)
Education, Medical, Undergraduate/standards , Educational Measurement , Licensure, Medical , Science/education , Students, Medical , Curriculum , Humans , Longitudinal Studies , Southwestern United States
8.
Acad Med ; 69(3): 216-24, 1994 Mar.
Article in English | MEDLINE | ID: mdl-8135980

ABSTRACT

BACKGROUND: The patient-physician relationship is central to medical practice. Increasingly, educators and certifying bodies seek to assess trainees' humanistic qualities. METHOD: The humanistic qualities of first-year internal medicine residents were rated in 1987-88 and 1988-89 by patients hospitalized on the general internal medicine and pulmonary services of the University of Michigan Hospital. Attending physicians (for 1988-89 only), program supervisors (program directors and chief residents), and nurses (for 1988-89 only) rated the same residents, and these ratings were compared with those of the patients. RESULTS: A total of 625 patient questionnaires for 70 residents were analyzed, with a mean of nine patient evaluations per resident and a range from four to 24. Analysis showed that more than 50 patients would need to rate each resident to achieve desired levels of reproducibility. Large numbers of attending physicians (20 to 50) would also be required to obtain a reproducible assessment, and the attending physicians' ratings correlated only moderately well (r = .26) with the patients' ratings. Ratings from smaller numbers of program supervisors (five to ten) and nurses (ten to 20) would be needed for reproducible assessments; however, only the nurses' ratings showed a moderately strong relationship (r = .35) with the patients' ratings. CONCLUSIONS: Patients, attending physicians, program supervisors, and nurses view the humanistic attributes of residents differently as the residents interact with patients. Large numbers of patients and attending physicians would be needed to obtain reproducible ratings. Nurses' and program supervisors' ratings are much more reproducible, but nurses' perceptions correlate more closely with those of patients.
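Projections such as "more than 50 patients per resident" are usually obtained by applying the Spearman-Brown prophecy formula to the estimated reliability of a single rater's score. The sketch below shows that calculation; the single-rater reliability values are assumptions, since the abstract does not report them.

```python
# Spearman-Brown projection: how many raters are needed to reach a target
# reproducibility, given the reliability of a single rater's score.
# The single-rater reliability values below are assumed, for illustration only.

def projected_reliability(single_rater_rel: float, k: int) -> float:
    """Reliability of the mean of k raters (Spearman-Brown prophecy formula)."""
    return k * single_rater_rel / (1 + (k - 1) * single_rater_rel)

def raters_needed(single_rater_rel: float, target: float) -> int:
    """Smallest k whose projected reliability meets the target."""
    k = 1
    while projected_reliability(single_rater_rel, k) < target:
        k += 1
    return k

# Example: a very noisy single rating (reliability 0.03) needs many raters to reach 0.70
print(raters_needed(0.03, 0.70))   # -> 76
print(raters_needed(0.15, 0.70))   # -> 14
```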


Subject(s)
Clinical Competence/standards , Humanism , Internal Medicine/education , Internship and Residency , Medical Staff, Hospital/psychology , Medical Staff, Hospital/standards , Physician-Patient Relations , Adult , Aged , Attitude of Health Personnel , Educational Measurement , Female , Humans , Male , Middle Aged , Nursing Staff, Hospital/psychology , Patient Satisfaction , Physician Executives/psychology , Reproducibility of Results , Surveys and Questionnaires
9.
Acad Med ; 66(8): 429-33, 1991 Aug.
Article in English | MEDLINE | ID: mdl-1883423

ABSTRACT

The National Board of Medical Examiners (NBME) has reviewed its procedure for setting pass-fail standards in conjunction with the introduction of its comprehensive Part I and Part II examinations in 1991. This report gives background information on the procedures used for the past decade to set pass-fail standards for the Part I and Part II examinations, an overview of the NBME's research on standard setting, under way since 1987, and a statement of its plans for determining pass-fail standards for these examinations. In 1981 the NBME changed from the norm-referenced standard, used since the 1950s, to a criterion-group approach to setting pass-fail standards. Although the criterion-group system resulted in more stable standards, it still meant that the standard moved whenever the performance of the reference group changed. After conducting research, surveying constituencies, and examining alternatives, the NBME has adopted a new standard-setting plan that has the following components: a content-based standard-setting procedure; determination of standards by an appropriate group; use of a fixed standard; and periodic review of standards and standard-setting procedures. This new process will produce three types of improvements: it will incorporate deliberations informed by a wide range of information, including content review; annual review of examinees' performances and pass-fail results and triennial restudy of the process will add further quality control; and a fixed standard will mean that comparable performances will be required across administrations in order to pass.
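The abstract does not name the content-based standard-setting procedure. The Angoff method is the most widely used content-based approach, so the sketch below shows it purely as a generic illustration of how a fixed cut score can be derived from judges' item-level estimates; all ratings are hypothetical.

```python
# Generic illustration of a content-based (Angoff-type) cut score calculation.
# Judge ratings are hypothetical: each value is a judge's estimate of the
# probability that a minimally competent examinee answers the item correctly.

judge_ratings = [
    [0.60, 0.75, 0.40, 0.85, 0.55],   # judge 1, items 1-5
    [0.65, 0.70, 0.45, 0.80, 0.50],   # judge 2
    [0.55, 0.80, 0.35, 0.90, 0.60],   # judge 3
]

# Average over judges per item, then sum over items to get the raw cut score
n_items = len(judge_ratings[0])
item_means = [sum(j[i] for j in judge_ratings) / len(judge_ratings) for i in range(n_items)]
cut_score = sum(item_means)
print(f"recommended raw cut score: {cut_score:.1f} of {n_items} items")
```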


Subject(s)
Education, Medical, Undergraduate/standards , Educational Measurement/standards , United States
10.
Acad Med ; 67(9): 553-6, 1992 Sep.
Article in English | MEDLINE | ID: mdl-1520408

ABSTRACT

Medical licensure in the United States is in transition. In June 1991, the National Board of Medical Examiners (NBME) made major modifications in the content, format, pass/fail standards, and score reports of the NBME Part I examination. This year, Part I became Step 1, the first of three components of the United States Medical Licensing Examination (USMLE), which will shortly be the sole examination pathway to initial licensure for allopathic physicians. This essay describes Step 1, reviews the phase-in plans for the USMLE, and discusses the potential impact of both on medical schools' teaching and students' learning of the basic biomedical sciences. The authors recommend that medical schools (1) abandon the use of Step 1 as a sole criterion for student promotion to the third year and (2) carefully review other examination-related requirements for promotion and graduation.


Subject(s)
Education, Medical, Undergraduate/standards , Educational Measurement/standards , Licensure, Medical/standards , Science/education , Teaching/standards , Curriculum , Evaluation Studies as Topic , Humans , Learning , Organizational Policy , Schools, Medical/organization & administration , United States
11.
Acad Med ; 64(8): 454-7, 1989 Aug.
Article in English | MEDLINE | ID: mdl-2751784

ABSTRACT

In October 1988, seven foreign medical graduates participated in the first administration of the examination devised by the Medical Education Evaluation Program (MEEP) mandated by the State Medical Board of Ohio. The MEEP was established to provide an objective evaluation of an applicant's clinical competencies; passing the MEEP examination was intended to certify that the applicant's clinical skills were comparable to those of a medical student graduating from a school accredited by the Liaison Committee on Medical Education. An applicant who passed the MEEP examination and fulfilled the other Ohio licensure requirements would be eligible to take the Federation Licensing Examination (FLEX) and apply for an unrestricted license to practice medicine in Ohio. The paper describes the origin and development of the MEEP examination and the testing modalities selected (multiple-choice examinations and the use of standardized patients). Four fundamental areas were tested; these are named and described, along with the method for calculating scores for each area and the criteria for passing the different components of the examination. Although the small sample size prohibited meaningful analysis of the first group of MEEP candidates' performance, the MEEP examination appears to meet the psychometric standards of certifying and licensing examinations, based on data from comparable tests taken by beginning fourth-year medical students in New England and by NBME Part III examinees. Some potential pitfalls of the MEEP examination are mentioned, and the program is presented as a challenge to other states' boards of medical examiners to implement performance-based assessments of physicians who graduate from non-accredited medical schools.


Subject(s)
Clinical Competence , Educational Measurement/methods , Foreign Medical Graduates , Education, Medical, Undergraduate , Evaluation Studies as Topic , Ohio
12.
Acad Med ; 65(5): 320-6, 1990 May.
Article in English | MEDLINE | ID: mdl-2337437

ABSTRACT

This paper describes a collaborative effort among five New England medical schools to assess important clinical skills of fourth-year medical students graduating in the class of 1988; results are presented from the four schools that provided sufficient data. Faculty from each school developed 36 patient cases representing a variety of common ambulatory-care problems. Over the course of a day, each student, on average, interacted with 16 different standardized patients, who were nonphysicians trained to accurately and consistently portray a patient in a simulated clinical setting. The students obtained focused histories, performed relevant physical examinations, and provided the patients with education and counseling. At each school, the performance of a small number of the students fell below standards set by the faculty. These deficiencies were not detected with the evaluation strategies currently being used. Although the use of standardized patients should not substitute for the process of faculty observing students as they interact with real patients, it appears that standardized patients can provide faculty with important information, not readily available from other sources, about students' performances of essential clinical activities and the levels of their clinical skills.


Subject(s)
Ambulatory Care , Clinical Competence , Education, Medical , Teaching/methods , Counseling , Educational Measurement , Humans , Patient Education as Topic , Patients
13.
Acad Med ; 65(1): 8-14, 1990 Jan.
Article in English | MEDLINE | ID: mdl-2294927

ABSTRACT

The increased interest, in North America and around the world, in problem-based and community-oriented medical curricula has sparked interest in the evaluation of these innovative programs. In January 1989, the Josiah Macy Jr. Foundation sponsored a conference to consider designs for evaluation studies and the potential distinctive outcomes of the innovative curricula that might be foci of these studies. After defining an "innovative curriculum," the participants identified seven characteristics of "important evaluation studies," particularly endorsing studies that compare curricula as whole entities. The participants then identified 26 areas where differences between graduates of innovative and traditional curricula might be expected, and five equally important areas where differences are not expected. Distinctive outcomes of innovative curricula were anticipated in areas such as interpersonal skills, continuing learning, and professional satisfaction. Overall, these recommendations are offered to stimulate creative evaluations of the growing number of innovative programs in medical education.


Subject(s)
Curriculum , Education, Medical , Clinical Clerkship , Clinical Competence , Education, Medical, Continuing , Education, Medical, Undergraduate , Evaluation Studies as Topic , United States
14.
Acad Med ; 75(5): 426-31, 2000 May.
Article in English | MEDLINE | ID: mdl-10824764

ABSTRACT

In 1998, the authors, acting on behalf of the National Board of Medical Examiners (NBME), undertook a review of the scoring policy for the United States Medical Licensing Examination (USMLE). The main goal was to determine the likely effect of changing from numeric score reporting to reporting pass-fail status. Several groups were surveyed across the nation to learn how they felt they would be affected by such a change, and why: all 54 medical boards; 1,600 randomly selected examinees (including 250 foreign medical graduates) who had recently taken either Step 1, Step 2, or Step 3 of the USMLE; 2,000 residency directors; the deans, education deans, and student affairs deans at all 125 U.S. medical schools accredited by the Liaison Committee on Medical Education; and all 17 members of the Council of Medical Specialty Societies. Response rates for the different groups ranged from 80% down to a little under 50%. The authors describe in detail the respondents' views and their reasons. Some members of each group favored each reporting format, but the overall trend favored numeric score reporting. The majority of responding examinees wanted their USMLE scores sent to them in numeric form but sent to their schools and to residency directors in pass-fail form. Based on the responses and a thorough discussion of their implications, the Composite Committee (which determines USMLE score-reporting policy) decided that there is no basis at this time for changing the current policy, but that it would review the policy again when necessary.


Subject(s)
Clinical Competence/statistics & numerical data , Educational Measurement , Licensure , Data Collection , United States
15.
Eval Health Prof ; 7(4): 485-99, 1984 Dec.
Article in English | MEDLINE | ID: mdl-10269331

ABSTRACT

This study compares the reliability, validity, and efficiency of three multiple-choice question (MCQ) ability scales with patient management problems (PMPs). Data are from the 1980, 1981, and 1982 American Board of Internal Medicine Certifying Examinations. The MCQ ability scales were constructed by classifying the one-best-answer and multiple-true/false questions in each examination as measuring predominantly clinical judgment, synthesis, or knowledge. Clinical judgment items require prioritizing or weighing management decisions; synthesis items require the integration of findings into a diagnostic decision; and knowledge items stress recall of factual information. Analyses indicate that the MCQ ability scales are more reliable and valid per unit of testing time than are PMPs and that the clinical judgment and synthesis scales are slightly more correlated with PMPs than is the knowledge scale. Additionally, all MCQ ability scales appear to measure the same aspects of competence as PMPs.
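The conclusion that the MCQ scales measure the same aspects of competence as PMPs is typically checked by correcting the observed scale correlations for unreliability (the disattenuation formula). The sketch below shows that correction; the numeric values are assumptions for illustration, since the abstract reports no coefficients.

```python
import math

# Correction for attenuation: estimate the correlation between two scales'
# true scores from their observed correlation and their reliabilities.
# All numeric values here are assumptions for illustration; the abstract
# does not report the actual coefficients.

def disattenuated_r(observed_r: float, rel_x: float, rel_y: float) -> float:
    return observed_r / math.sqrt(rel_x * rel_y)

# e.g. an observed MCQ-scale / PMP correlation of 0.55, with assumed scale
# reliabilities of 0.80 (MCQ ability scale) and 0.60 (PMP section)
print(round(disattenuated_r(0.55, 0.80, 0.60), 2))   # -> 0.79
```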


Subject(s)
Clinical Competence/standards , Internal Medicine/standards , Educational Measurement , Evaluation Studies as Topic , United States