1.
BMC Med Educ ; 24(1): 717, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956537

ABSTRACT

BACKGROUND: The National Medical Licensing Examination (NMLE) is the only objective, standardized metric for evaluating whether a medical student possesses the professional knowledge and skills necessary to work as a physician. However, the overall NMLE pass rate at our hospital in 2021 was much lower than that of Peking Union Medical College Hospital and needed to be improved. METHODS: To identify the reasons for the unsatisfactory performance in 2021, the quality improvement team (QIT) organized regular face-to-face meetings for in-depth discussion, administered questionnaires, and analyzed the data using Pareto analysis and brainstorming. Once the causes were identified, the "Plan-Do-Check-Act" (PDCA) cycle was applied continuously to identify and solve problems, which included formulating and implementing specific training plans using Gantt charts, checking their effects, and making continuous improvements from 2021 to 2022. Detailed information on student performance in 2021 and 2022, together with attendance, assessment, evaluation, and suggestions from our hospital, was provided by the relevant departments, and pass-rate-associated data were collected online. RESULTS: After the PDCA plan, the NMLE pass rate at our hospital increased by 10.89 percentage points, from 80.15% in 2021 to 91.04% in 2022 (P = 0.0109); the skill examination pass rate rose from 95.59% in 2021 to 99.25% in 2022 (P = 0.0581) and the theoretical examination pass rate from 84.5% in 2021 to 93.13% in 2022 (P = 0.027). Additionally, the mean scores of all examinees increased, with the theoretical examination score rising from 377.0 ± 98.76 in 2021 to 407.6 ± 71.94 in 2022 (P = 0.004). CONCLUSIONS: Our results show a successful application of the PDCA plan in our hospital, which improved the NMLE pass rate in 2022; the PDCA plan may provide a practical framework for future medical education and may further improve the NMLE pass rate in subsequent years.
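
The year-over-year pass-rate comparisons above (e.g., 80.15% vs. 91.04%, P = 0.0109) are the kind of result a simple two-by-two contingency test reproduces. A minimal Python sketch, assuming hypothetical cohort sizes since the abstract reports only percentages; the authors' exact test may differ:

    from scipy.stats import chi2_contingency

    # Hypothetical cohort sizes -- the abstract reports only percentages.
    n_2021, n_2022 = 136, 134              # assumed numbers of examinees
    pass_2021 = round(0.8015 * n_2021)     # 80.15% overall pass rate in 2021
    pass_2022 = round(0.9104 * n_2022)     # 91.04% overall pass rate in 2022

    table = [
        [pass_2021, n_2021 - pass_2021],   # 2021: passed, failed
        [pass_2022, n_2022 - pass_2022],   # 2022: passed, failed
    ]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, P = {p:.4f}")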


Subject(s)
Clinical Competence; Educational Measurement; Licensure, Medical; Students, Medical; Humans; Licensure, Medical/standards; Clinical Competence/standards; Quality Improvement; China; Education, Medical, Undergraduate/standards; Surveys and Questionnaires
2.
J Med Internet Res ; 26: e60807, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39052324

ABSTRACT

BACKGROUND: Over the past 2 years, researchers have used various medical licensing examinations to test whether ChatGPT (OpenAI) possesses accurate medical knowledge. The performance of each version of ChatGPT on medical licensing examinations in different settings has varied remarkably, and a comprehensive understanding of this variability is still lacking. OBJECTIVE: In this study, we reviewed all studies on ChatGPT's performance in medical licensing examinations up to March 2024. This review aims to contribute to the evolving discourse on artificial intelligence (AI) in medical education by providing a comprehensive analysis of ChatGPT's performance across settings. The insights gained from this systematic review will guide educators, policymakers, and technical experts in using AI effectively and judiciously in medical education. METHODS: We searched the literature published between January 1, 2022, and March 29, 2024, using query strings in Web of Science, PubMed, and Scopus. Two authors screened the literature against the inclusion and exclusion criteria, extracted data, and independently assessed its quality using the Quality Assessment of Diagnostic Accuracy Studies-2 tool. We conducted both qualitative and quantitative analyses. RESULTS: A total of 45 studies on the performance of different versions of ChatGPT in medical licensing examinations were included. GPT-4 achieved an overall accuracy rate of 81% (95% CI 78-84; P<.01), significantly surpassing the 58% (95% CI 53-63; P<.01) accuracy rate of GPT-3.5. GPT-4 passed the medical examinations in 26 of 29 cases and outperformed the average scores of medical students in 13 of 17 cases. Translating the examination questions into English improved GPT-3.5's performance but did not affect GPT-4's. GPT-3.5 showed no difference in performance between examinations from English-speaking and non-English-speaking countries (P=.72), but GPT-4 performed significantly better on examinations from English-speaking countries (P=.02). Any type of prompt significantly improved the performance of both GPT-3.5 (P=.03) and GPT-4 (P<.01). GPT-3.5 performed better on short-text questions than on long-text questions, and question difficulty affected the performance of both models. On image-based multiple-choice questions (MCQs), ChatGPT's accuracy rate ranged from 13.1% to 100%. ChatGPT performed significantly worse on open-ended questions than on MCQs. CONCLUSIONS: GPT-4 demonstrates considerable potential for future use in medical education. However, because of its insufficient accuracy, inconsistent performance, and the challenges posed by differing medical policies and knowledge across countries, GPT-4 is not yet suitable for use in medical education. TRIAL REGISTRATION: PROSPERO CRD42024506687; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=506687.
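
The pooled accuracy figures above (81% for GPT-4, 58% for GPT-3.5) are typical outputs of a proportion meta-analysis. A minimal sketch of fixed-effect inverse-variance pooling on the logit scale, with invented per-study counts since the individual study data are not reproduced in this listing; a full review would more likely use a random-effects model:

    import numpy as np

    # Hypothetical per-study results: (correct answers, total questions).
    studies = [(170, 200), (95, 120), (260, 310)]

    logits, weights = [], []
    for correct, total in studies:
        p = correct / total
        logits.append(np.log(p / (1 - p)))     # logit of accuracy
        weights.append(total * p * (1 - p))    # inverse of the logit's variance

    pooled = np.average(logits, weights=weights)
    se = 1 / np.sqrt(sum(weights))
    expit = lambda x: 1 / (1 + np.exp(-x))
    print(f"pooled accuracy = {expit(pooled):.2f} "
          f"(95% CI {expit(pooled - 1.96 * se):.2f}-{expit(pooled + 1.96 * se):.2f})")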


Subject(s)
Educational Measurement; Licensure, Medical; Humans; Licensure, Medical/standards; Licensure, Medical/statistics & numerical data; Educational Measurement/methods; Educational Measurement/standards; Educational Measurement/statistics & numerical data; Clinical Competence/statistics & numerical data; Clinical Competence/standards; Artificial Intelligence; Education, Medical/standards
3.
South Med J ; 117(6): 342-344, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38830589

ABSTRACT

OBJECTIVES: This study assessed the content of US Medical Licensing Examination question banks with regard to out-of-hospital births and whether the questions aligned with current evidence. METHODS: Three question banks were searched for keywords related to out-of-hospital births, and a thematic analysis was used to analyze the results. RESULTS: Forty-seven questions were identified; of these, 55% indicated inadequate, limited, or irregular prenatal care in the question stem. CONCLUSIONS: Systematic studies comparing prenatal care in out-of-hospital births versus hospital births are nonexistent, leaving the potential for bias and adverse outcomes. Adjustments to question stems so that they accurately portray current evidence are recommended.
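
The keyword screen described in the methods amounts to a simple text filter. A minimal sketch with invented question stems, since the actual bank content is proprietary; the keyword lists are illustrative only:

    # Hypothetical question stems -- real question-bank content is proprietary.
    stems = [
        "A 24-year-old woman presents after a planned home birth with limited prenatal care.",
        "A neonate born at a birthing center is evaluated for jaundice.",
    ]
    keywords = ["home birth", "birthing center", "out-of-hospital"]
    care_flags = ["no prenatal care", "limited prenatal care", "irregular prenatal care"]

    # Keep stems mentioning out-of-hospital birth, then flag inadequate-care wording.
    matches = [s for s in stems if any(k in s.lower() for k in keywords)]
    flagged = [s for s in matches if any(f in s.lower() for f in care_flags)]
    print(f"{len(flagged)}/{len(matches)} out-of-hospital stems mention inadequate care")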


Subject(s)
Licensure, Medical; Humans; United States; Licensure, Medical/standards; Female; Pregnancy; Prenatal Care/standards; Educational Measurement/methods; Education, Medical/methods; Education, Medical/standards
4.
Rehabilitation (Stuttg) ; 63(3): 189-196, 2024 Jun.
Article in German | MEDLINE | ID: mdl-38866029

ABSTRACT

BACKGROUND: The learning objectives of the current cross-sectional subject "Rehabilitation, Physical Medicine, Naturopathic Medicine" were revised as part of the further development of the National Competency-Based Catalogue of Learning Objectives for Medicine (NKLM) into its new version 2.0. Since the NKLM is designed as an interdisciplinary catalogue, assigning learning objectives to subjects seemed necessary from the point of view of various stakeholders. The Association of the Scientific Medical Societies in Germany (AWMF) and the German medical faculties therefore initiated a subject assignment process. This paper presents the assignment process for the subject "Physical and Rehabilitative Medicine, Naturopathic Medicine" (PRM-NHV; according to the subject list of the first draft of the planned new medical licensing regulations from 2020). MATERIAL AND METHODS: The AWMF invited its member societies to assign the learning objectives of chapters VI, VII, and VIII of the NKLM 2.0 to the individual subjects to whose teaching they consider themselves to contribute. For "PRM-NHV", representatives of the societies for rehabilitation sciences (DGRW), physical and rehabilitation medicine (DGPRM), orthopaedics and traumatology (DGOU), and naturopathy (DGNHK) participated. In a structured consensus process following the Delphi method, the learning objectives were selected and agreed upon, after which the AWMF issued subject recommendations for each learning objective. RESULTS: From the NKLM 2.0, a total of 100 competency-based learning objectives from chapters VII and VIII were agreed upon by the representatives of the participating societies for the subject "PRM-NHV", for presentation on the NKLM 2.0 online platform. CONCLUSIONS: In the context of the revision of medical studies in Germany, and under the umbrella of the AWMF and the German medical faculties, a broad consensus on competency-based learning objectives for the subject "PRM-NHV" was achieved. This provides important orientation for all medical faculties, both for the further development of teaching in the cross-sectional subject "Rehabilitation, Physical Medicine, Naturopathic Medicine" under the 9th revision of the medical licensing regulations, which has been in force for twenty years, and for the preparation of the corresponding subjects in the draft bill of the new licensing regulations.


Subject(s)
Clinical Competence; Curriculum; Naturopathy; Physical and Rehabilitation Medicine; Germany; Physical and Rehabilitation Medicine/education; Physical and Rehabilitation Medicine/standards; Catalogs as Topic; Competency-Based Education/standards; Societies, Medical; Societies, Scientific; Rehabilitation/standards; Humans; Licensure, Medical/standards; Licensure, Medical/legislation & jurisprudence
5.
BMC Med Educ ; 24(1): 504, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38714975

ABSTRACT

BACKGROUND: Evaluation of students' learning strategies can enhance academic support. Few studies have investigated differences in learning strategies between male and female students or their impact on United States Medical Licensing Examination (USMLE) Step 1 and preclinical performance. METHODS: The Learning and Study Strategies Inventory (LASSI) was administered to the classes of 2019-2024 (female, n = 350; male, n = 262). Students' performance in preclinical first-year (M1) courses, preclinical second-year (M2) courses, and on USMLE Step 1 was recorded. Independent t-tests evaluated differences between females and males on each LASSI scale, and Pearson product-moment correlations determined which LASSI scales correlated with preclinical and USMLE Step 1 performance. RESULTS: Of the 10 LASSI scales, Anxiety, Attention, Information Processing, Selecting Main Ideas, Test Strategies, and Using Academic Resources showed significant differences between genders. Females reported higher levels of Anxiety (p < 0.001), which significantly influenced their performance. While males and females scored similarly on Concentration, Motivation, and Time Management, these scales were significant predictors of performance variation in females. Test Strategies was the largest contributor to performance variation for all students, regardless of gender. CONCLUSION: Gender differences in learning influence performance on Step 1. Consideration of this study's results will allow for targeted interventions for academic success.
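
The two analyses named in the methods are standard scipy calls. A minimal sketch with made-up scale and examination scores, since the study's data are not public:

    import numpy as np
    from scipy.stats import ttest_ind, pearsonr

    rng = np.random.default_rng(0)
    # Hypothetical Anxiety-scale scores for female (n=350) and male (n=262) students.
    anxiety_f = rng.normal(60, 10, 350)
    anxiety_m = rng.normal(55, 10, 262)
    t, p = ttest_ind(anxiety_f, anxiety_m)       # independent t-test
    print(f"t = {t:.2f}, p = {p:.4f}")

    # Hypothetical pairing of one LASSI scale with Step 1 scores (n=612).
    test_strategies = rng.normal(65, 8, 612)
    step1 = 180 + 0.8 * test_strategies + rng.normal(0, 10, 612)
    r, p = pearsonr(test_strategies, step1)      # Pearson product-moment correlation
    print(f"r = {r:.2f}, p = {p:.4g}")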


Subject(s)
Education, Medical, Undergraduate; Educational Measurement; Licensure, Medical; Students, Medical; Humans; Female; Male; Educational Measurement/methods; Education, Medical, Undergraduate/standards; Sex Factors; Licensure, Medical/standards; Learning; United States; Academic Performance; Young Adult
8.
PLoS One ; 19(4): e0302217, 2024.
Article in English | MEDLINE | ID: mdl-38687696

ABSTRACT

Efforts are being made to improve the time effectiveness of healthcare providers. Artificial intelligence tools can help transcribe and summarize physician-patient encounters and produce medical notes and medical recommendations. However, in addition to medical information, discussion between healthcare providers and patients includes small talk and other information irrelevant to medical concerns. Because Large Language Models (LLMs) are predictive models that build their responses from the words in the prompt, there is a risk that small talk and irrelevant information may alter the response and the suggestions given. This study therefore investigated the impact of medical data mixed with small talk on the accuracy of medical advice provided by ChatGPT. USMLE Step 3 questions, both multiple-choice and open-ended, were used as a model for relevant medical data. First, we gathered small-talk sentences from human participants on the Mechanical Turk platform. Second, both sets of USMLE questions were rearranged so that each sentence from the original question was followed by a small-talk sentence. ChatGPT-3.5 and ChatGPT-4 were asked to answer both sets of questions with and without the small-talk sentences. Finally, a board-certified physician analyzed ChatGPT's answers and compared them to the formally correct answers. The results demonstrate that ChatGPT-3.5's ability to answer correctly was impaired when small talk was added to the medical data (66.8% vs. 56.6% overall; p = 0.025), with accuracy on multiple-choice questions falling from 72.1% to 68.9% (p = 0.67) and on open-ended questions from 61.5% to 44.3% (p = 0.01). In contrast, small-talk phrases did not impair ChatGPT-4's ability on either question type (83.6% and 66.2%, respectively). According to these results, ChatGPT-4 appears more accurate than the earlier 3.5 version, and small talk does not impair its capability to provide medical recommendations. Our results are an important first step in understanding the potential and limitations of utilizing ChatGPT and other LLMs for physician-patient interactions that include casual conversations.
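
The question-construction step, interleaving each sentence of a USMLE item with a small-talk sentence, is easy to sketch. A minimal example with invented sentences, since the study's actual items and crowd-sourced phrases are not reproduced here:

    from itertools import zip_longest

    # Hypothetical clinical stem and small-talk fillers.
    question = [
        "A 45-year-old man presents with chest pain.",
        "His blood pressure is 150/90 mm Hg.",
        "Which is the best next step?",
    ]
    small_talk = [
        "By the way, the weather has been lovely lately.",
        "My daughter just started college.",
    ]

    # Follow each question sentence with a small-talk sentence, if one remains.
    mixed = [s for pair in zip_longest(question, small_talk) for s in pair if s]
    prompt = " ".join(mixed)
    print(prompt)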


Subject(s)
Physician-Patient Relations; Humans; Female; Male; Adult; Communication; Health Personnel; Licensure, Medical/standards; Artificial Intelligence; Counseling; Middle Aged
9.
Urology ; 189: 144-148, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38492756

ABSTRACT

OBJECTIVE: To investigate how the shift of the United States Medical Licensing Examination (USMLE) Step 1 to a pass/fail (P/F) scoring system affects the perceptions of urology program directors (PDs) when evaluating urology residency applicants. METHODS AND MATERIALS: A cross-sectional survey was sent to 117 PDs, with questions about program characteristics, perceptions of shelf scores and medical school rank after the transition, beliefs about the predictive value of Step 1 and Step 2 Clinical Knowledge (CK) scores for board success and residency performance, and changes in how applicant parameters are ranked. RESULTS: Forty-five PDs (38% response rate) participated. Notably, 49% favored releasing quantitative clerkship grades, and 71% now place greater value on medical school rank. Opinions on the correlation between Step 1 scores and board success were split (49% agreed), and 44% endorsed a connection between Step 2 CK scores and board performance. Only 9% and 22% of PDs considered Step 1 and Step 2 CK scores, respectively, predictive of a good resident. Clerkship grades and urology rotation recommendation letters retained their significance, while research experience gained importance. The importance of Step 2 CK scores rose but did not match the previous significance of Step 1 scores. CONCLUSION: The transition to P/F scoring for USMLE Step 1 complicates urology residency selection, exposing PDs' uncertainties about clerkship grades and the relevance of medical school rank. This research underscores the dynamic nature of urology residency admissions, with research experience gaining importance in applicant evaluation and volunteering and leadership receiving less emphasis.


Subject(s)
Educational Measurement; Internship and Residency; Licensure, Medical; Urology; Urology/education; Cross-Sectional Studies; United States; Humans; Licensure, Medical/standards; Educational Measurement/methods; Surveys and Questionnaires
10.
J Osteopath Med ; 124(6): 257-265, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38498662

ABSTRACT

CONTEXT: The National Board of Osteopathic Medical Examiners (NBOME) administers the Comprehensive Osteopathic Medical Licensing Examination of the United States (COMLEX-USA), a three-level examination designed for licensure for the practice of osteopathic medicine. The design of COMLEX-USA Level 3 (L3) was changed in September 2018 to a two-day computer-based examination with two components: a multiple-choice question (MCQ) component with single-best-answer items and a clinical decision-making (CDM) case component with extended multiple-choice (EMC) and short-answer (SA) questions. Continued validation of the L3 examination, especially under the new design, is essential for appropriate interpretation and use of the test scores. OBJECTIVES: The purpose of this study is to gather evidence supporting the validity of L3 examination scores under the new design, drawing on sources of evidence from Kane's validity framework. METHODS: Kane's validity framework comprises four components of evidence supporting the validity argument: Scoring, Generalization, Extrapolation, and Implication/Decision. In this study, we gathered data from various sources and conducted analyses to provide evidence that the L3 examination validly measures what it is intended to measure. These included reviewing the content coverage of the L3 examination; documenting the scoring and reporting processes; estimating the reliability and the decision accuracy and consistency of the scores; quantifying associations between scores from the MCQ and CDM components and between scores from different competency domains of the L3 examination; exploring the relationships between L3 scores and scores from a performance-based assessment measuring related constructs; performing subgroup comparisons; and describing and justifying the criterion-referenced standard-setting process. The analysis data comprised first-attempt scores for 8,366 candidates who took the L3 examination between September 2018 and December 2019. The performance-based assessment used as a criterion measure was the COMLEX-USA Level 2 Performance Evaluation (L2-PE). RESULTS: All assessment forms were built through an automated test assembly (ATA) procedure to maximize parallelism in content coverage and statistical properties across forms. Scoring and reporting follow industry-standard quality-control procedures. The inter-rater reliability of SA rating and the decision accuracy and consistency of pass/fail classifications were all very high. There was a statistically significant positive association between the MCQ and CDM components of the L3 examination. The patterns of association, both among the L3 subscores and with the L2-PE domain scores, fit what the examination is intended to measure. Subgroup comparisons by gender, race, and first language showed the expected small differences in mean scores between subgroups within each category, consistent with findings described in the literature. The L3 pass/fail standard was established through a defensible criterion-referenced procedure. CONCLUSIONS: This study provides additional validity evidence for the L3 examination based on Kane's validity framework. The validity of any measurement must be established through ongoing evaluation of the related evidence, and the NBOME will continue to collect evidence supporting validity arguments for the COMLEX-USA examination series.
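
Inter-rater reliability of the SA ratings, reported above as very high, is commonly quantified with Cohen's kappa. A minimal sketch using sklearn with invented rating vectors, since the NBOME's actual rating data are not public:

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical ratings from two independent raters on the same set of
    # short-answer responses (1 = credit, 0 = no credit).
    rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
    rater_b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]

    kappa = cohen_kappa_score(rater_a, rater_b)  # chance-corrected agreement
    print(f"Cohen's kappa = {kappa:.2f}")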


Subject(s)
Educational Measurement; Licensure, Medical; Osteopathic Medicine; United States; Humans; Educational Measurement/methods; Educational Measurement/standards; Licensure, Medical/standards; Osteopathic Medicine/education; Osteopathic Medicine/standards; Reproducibility of Results; Clinical Competence/standards
14.
Plast Reconstr Surg ; 148(1): 219-223, 2021 Jul 01.
Article in English | MEDLINE | ID: mdl-34076626

ABSTRACT

SUMMARY: The United States Medical Licensing Examination program announced that Step 1 score reporting would change from a three-digit number to pass/fail beginning on January 1, 2022. Plastic surgery residency programs have traditionally used Step 1 scores to compare plastic surgery residency applicants, and without a numerical score the application review process is likely to change. This article discusses the advantages and disadvantages of the upcoming change and steps forward for residency programs. The authors encourage programs to continue to seek innovative methods of evaluating applications objectively and holistically.


Subject(s)
Educational Measurement/standards; Internship and Residency/organization & administration; Licensure, Medical/standards; Personnel Selection/organization & administration; Surgery, Plastic/education; Humans; Internship and Residency/standards; Personnel Selection/standards; Surgery, Plastic/standards; United States
15.
Acad Med ; 96(9): 1236-1238, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34166234

ABSTRACT

The COVID-19 pandemic interrupted administration of the United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills (CS) exam in March 2020 due to public health concerns. As the scope and magnitude of the pandemic became clearer, the initial plans by the USMLE program's sponsoring organizations (NBME and Federation of State Medical Boards) to resume Step 2 CS in the short term shifted to long-range plans to relaunch an exam that could harness technology and reduce infection risk. Insights about ongoing changes in undergraduate and graduate medical education and practice environments, coupled with challenges in delivering a transformed examination during a pandemic, led to the January 2021 decision to permanently discontinue Step 2 CS. Despite this, the USMLE program considers assessment of clinical skills to be critically important, and the authors believe this decision will facilitate important advances in assessing clinical skills. Factors contributing to the decision included concerns about achieving desired goals within desired time frames; a review of enhancements to clinical skills training and assessment that have occurred since the launch of Step 2 CS in 2004; an opportunity to address safety and health concerns, including those related to examinee stress and wellness during a pandemic; a review of advances in the education, training, practice, and delivery of medicine; and a commitment to pursuing innovative assessments of clinical skills. USMLE program staff continue to seek input from varied stakeholders to shape and prioritize technological and methodological enhancements that will guide the development of clinical skills assessment. The program's continued exploration of constructs and methods by which communication skills, clinical reasoning, and physical examination may be better assessed within the remaining components of the exam offers examinees, educators, regulators, the public, and other stakeholders opportunities to provide input.


Subject(s)
Clinical Competence/standards; Educational Measurement/methods; Licensure, Medical/standards; COVID-19/prevention & control; Educational Measurement/standards; Humans; Licensure, Medical/trends; United States
16.
Acad Med ; 96(9): 1319-1323, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34133346

ABSTRACT

PURPOSE: The United States Medical Licensing Examination (USMLE) recently announced 2 policy changes: shifting from numeric score reporting on the Step 1 examination to pass/fail reporting and limiting examinees to 4 attempts for each Step component. In light of these policies, exam measures other than scores, such as the number of examination attempts, are of interest. Attempt-limit policies are intended to ensure minimum standards of physician competency, yet little research has explored how Step attempts relate to physician practice outcomes. This study examined the relationship between USMLE attempts and the likelihood of receiving disciplinary actions from state medical boards. METHOD: The sample population was 219,018 graduates of U.S. and Canadian MD-granting medical schools who passed all USMLE Step examinations by 2011 and obtained a medical license in the United States, using data from the NBME and the Federation of State Medical Boards. Logistic regressions estimated how attempts on the Step 1, Step 2 Clinical Knowledge (CK), and Step 3 examinations influenced the likelihood of receiving disciplinary actions by 2018, while accounting for physician characteristics. RESULTS: A total of 3,399 physicians (2%) received at least 1 disciplinary action. Additional attempts needed to pass Steps 1, 2 CK, and 3 were associated with an increased likelihood of receiving disciplinary actions (odds ratio [OR]: 1.07, 95% confidence interval [CI]: 1.01, 1.13; OR: 1.09, 95% CI: 1.03, 1.16; OR: 1.11, 95% CI: 1.04, 1.17, respectively), after accounting for other factors. CONCLUSIONS: Taking multiple attempts to pass Steps 1, 2 CK, and 3 was associated with a higher estimated likelihood of receiving disciplinary actions. This study offers support for licensure and practice standards that account for physicians' USMLE attempts. The relatively small effect sizes, however, should caution policy makers against placing sole emphasis on this relationship.
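
Odds ratios like those reported above are the exponentiated coefficients of a logistic regression. A minimal sketch with simulated data, since the NBME/FSMB data are restricted; the covariate here is a placeholder for the physician characteristics the authors controlled for:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 5000
    attempts = rng.integers(1, 5, n)       # attempts needed to pass Step 1
    years = rng.integers(5, 25, n)         # placeholder covariate
    # Simulate disciplinary actions with a true OR of ~1.07 per extra attempt.
    logit = -4.5 + np.log(1.07) * attempts + 0.02 * years
    disciplined = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

    X = sm.add_constant(np.column_stack([attempts, years]))
    fit = sm.Logit(disciplined, X).fit(disp=0)
    print(np.exp(fit.params[1:]))          # odds ratios for attempts and years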


Subject(s)
Educational Measurement/statistics & numerical data; Employee Discipline/statistics & numerical data; Licensure, Medical/statistics & numerical data; Physicians/statistics & numerical data; Professional Misconduct/statistics & numerical data; Adult; Canada; Clinical Competence; Educational Measurement/standards; Female; Humans; Licensure, Medical/standards; Logistic Models; Male; Odds Ratio; Physicians/standards; Schools, Medical/standards; United States
17.
Acad Med ; 96(9): 1239-1241, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34074900

ABSTRACT

The discontinuation of the United States Medical Licensing Examination Step 2 Clinical Skills (CS) in 2020 in response to the COVID-19 pandemic marked the end of a decades-long debate about the utility and value of the exam. For all its controversy, the implementation of Step 2 CS in 2004 brought about profound changes to the landscape of medical education, altering the curriculum and assessment practices of medical schools to ensure students were prepared to take and pass this licensing exam. Its elimination, while celebrated by some, is not without potential negative consequences. As the responsibility for assessing students' clinical skills shifts back to medical schools, educators must take care not to lose the ground they have gained in advancing clinical skills education. Instead, they need to innovate, collaborate, and share resources; hold themselves accountable; and ultimately rise to the challenge of ensuring that physicians have the necessary clinical skills to safely and effectively practice medicine.


Subject(s)
Clinical Competence/standards; Educational Measurement/methods; Licensure, Medical/standards; COVID-19/prevention & control; Education, Medical, Undergraduate/standards; Education, Medical, Undergraduate/trends; Educational Measurement/standards; Humans; Licensure, Medical/trends; United States