Results 1 - 20 of 4,040
1.
BMC Med Educ ; 24(1): 980, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39252083

ABSTRACT

PURPOSE: The United States Medical Licensing Examination (USMLE) is an examination series required for allopathic physician licensure in the United States (US). USMLE content is created and maintained by the National Board of Medical Examiners (NBME). The specialty composition of the USMLE and NBME task force members involved in the creation of examination content is currently unknown. METHODS: Using the 2021 USMLE and 2021 NBME Committees and Task Forces documents, we determined each member's board-certified primary specialty and their involvement in test material development committees, whose members we dubbed "test writers". Total active physicians by primary specialty were recorded from the 2020 Physician Specialty Data Report published by the Association of American Medical Colleges (AAMC). Descriptive statistics and chi-square analysis were used to analyze the cohorts. RESULTS: The USMLE and NBME test writer primary specialty composition differed significantly from the US active physician population (USMLE χ2[32] = 172, p < .001 and NBME χ2[32] = 200, p < .001). Only nineteen specialties were represented among USMLE test writers, with three specialties being proportionally represented. Two specialties were represented among NBME test writers. Obstetrics and Gynecology physicians were proportionally represented among USMLE but not NBME test writers. Internal Medicine (IM) accounted for the largest share of all USMLE test writers (60/197, 30%), an excess representation of 31 individuals. CONCLUSIONS: There is an imbalance in the specialty representation of USMLE and NBME test writers compared to the US active physician population. These findings may have implications for the unbiased and accurate portrayal of topics in such national examinations; thus, further investigation is warranted.
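The specialty-mix comparison described above is a chi-square goodness-of-fit test of observed test-writer counts against the counts expected under the national physician distribution. A minimal sketch of that calculation, using invented counts rather than the study's data, might look like this:

```python
# Illustrative sketch (not the authors' code): chi-square goodness-of-fit test comparing
# a panel's specialty counts against proportions expected from the national active-physician
# distribution. All counts and proportions below are made up for illustration.
from scipy.stats import chisquare

observed = [60, 25, 15, 97]                 # e.g. IM, Pediatrics, Family Medicine, all others
national_share = [0.15, 0.08, 0.12, 0.65]   # hypothetical AAMC proportions (sum to 1)

total = sum(observed)
expected = [p * total for p in national_share]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.1f}, p = {p_value:.4g}")
```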


Subject(s)
Educational Measurement, Medical Licensure, Medical Licensure/standards, United States, Humans, Medicine, Physicians, Specialization
2.
BMC Med Educ ; 24(1): 1016, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39285419

ABSTRACT

BACKGROUND: The ability of experts' item difficulty ratings to predict test-takers' actual performance is an important aspect of licensure examinations. Expert judgment is used as a primary source of information for users to make prior decisions that determine the pass rate of test takers, and the nature of the raters involved in predicting item difficulty is central to setting credible standards. Therefore, this study aimed to assess and compare raters' predicted and the actual multiple-choice question (MCQ) difficulty on the undergraduate medicine licensure examination (UGMLE) in Ethiopia. METHOD: Responses of 815 examinees to 200 MCQs were used in this study. The study also included the item difficulty ratings of seven physicians who participated in the standard setting of the UGMLE. Analysis was then conducted to understand variation in experts' ratings in predicting the actual difficulty levels for examinees. Descriptive statistics were used to profile the mean rater-predicted and actual difficulty values for the MCQs, and ANOVA was used to compare the mean differences between raters' predictions of item difficulty. Additionally, regression analysis was used to examine interrater variation in item difficulty predictions relative to the actual difficulty, and the proportion of variance in actual difficulty explained by rater predictions was computed. RESULTS: The mean difference between raters' predictions and examinees' actual performance was inconsistent across the exam domains. The study revealed a statistically significant strong positive correlation between actual and predicted item difficulty in exam domains eight and eleven, whereas a non-statistically significant, very weak positive correlation was found in exam domains seven and twelve. The multiple comparison analysis showed significant differences in mean item difficulty ratings between raters. In the regression analysis, experts' item difficulty ratings explained 33% of the variance in the actual difficulty level of the UGMLE. The regression model also showed a moderate positive correlation (R = 0.57) that was statistically significant at F(6, 193) = 15.58, P = 0.001. CONCLUSION: This study demonstrated the complexity of estimating the difficulty level of MCQs on the UGMLE and emphasized the benefits of using experts' ratings in advance. To ensure the exams yield reliable and valid scores, raters' accuracy on the UGMLE must be improved, which will require techniques that align with evolving assessment methodologies.
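The variance-explained figure reported here comes from regressing actual item difficulty on the raters' predictions and reading off R². A hedged sketch of that step, on simulated data rather than the UGMLE item set, could look like this:

```python
# Hedged sketch (not the study's code): regress actual item difficulty (proportion of
# examinees answering correctly) on several raters' predicted difficulties, then read off
# R^2 as the share of variance the ratings explain. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_items, n_raters = 200, 7
ratings = rng.uniform(0.2, 0.9, size=(n_items, n_raters))           # predicted difficulty per rater
actual = 0.5 * ratings.mean(axis=1) + rng.normal(0, 0.1, n_items)   # actual difficulty per item

X = sm.add_constant(ratings)
model = sm.OLS(actual, X).fit()
print(f"R^2 = {model.rsquared:.2f}")   # proportion of variance explained
print(f"overall F-test p = {model.f_pvalue:.4g}")
```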


Subject(s)
Undergraduate Medical Education, Educational Measurement, Medical Licensure, Humans, Ethiopia, Educational Measurement/methods, Educational Measurement/standards, Undergraduate Medical Education/standards, Medical Licensure/standards, Male, Female, Clinical Competence/standards, Medical Students, Adult
3.
JMIR Med Educ ; 10: e52784, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39140269

ABSTRACT

Background: With the increasing application of large language models such as ChatGPT in various industries, their potential in the medical domain, especially in standardized examinations, has become a focal point of research. Objective: The aim of this study was to assess the clinical performance of ChatGPT, focusing on its accuracy and reliability on the Chinese National Medical Licensing Examination (CNMLE). Methods: The CNMLE 2022 question set, consisting of 500 single-answer multiple-choice questions, was reclassified into 15 medical subspecialties. Each question was tested 8 to 12 times in Chinese on the OpenAI platform from April 24 to May 15, 2023. Three key factors were considered: the model version (GPT-3.5 vs GPT-4.0), the prompt's designation of system roles tailored to medical subspecialties, and repetition for coherence. The passing accuracy threshold was set at 60%. χ2 tests and κ values were employed to evaluate the model's accuracy and consistency. Results: GPT-4.0 achieved a passing accuracy of 72.7%, which was significantly higher than that of GPT-3.5 (54%; P<.001). The variability rate of repeated responses from GPT-4.0 was lower than that of GPT-3.5 (9% vs 19.5%; P<.001). However, both models showed relatively good response coherence, with κ values of 0.778 and 0.610, respectively. System roles numerically increased accuracy for both GPT-4.0 (0.3%-3.7%) and GPT-3.5 (1.3%-4.5%) and reduced variability by 1.7% and 1.8%, respectively (P>.05). In subgroup analysis, ChatGPT achieved comparable accuracy across question types (P>.05). GPT-4.0 surpassed the accuracy threshold in 14 of 15 subspecialties, while GPT-3.5 did so in 7 of 15 on the first response. Conclusions: GPT-4.0 passed the CNMLE and outperformed GPT-3.5 in key areas such as accuracy, consistency, and medical subspecialty expertise. Adding a system role enhanced the model's reliability and answer coherence, but not significantly. GPT-4.0 showed promising potential in medical education and clinical practice, meriting further study.
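Accuracy against the answer key and Cohen's κ between repeated runs are the two headline metrics in this abstract. A small illustrative sketch (invented answer lists, not the CNMLE data) of how they can be computed:

```python
# Illustrative sketch (assumed, not the authors' pipeline): score repeated model answers
# against the key for accuracy, and use Cohen's kappa between two runs as a consistency check.
from sklearn.metrics import cohen_kappa_score

answer_key = ["A", "C", "B", "D", "A", "B"]   # invented placeholder key
run_1      = ["A", "C", "B", "D", "C", "B"]   # model answers, first run
run_2      = ["A", "C", "D", "D", "C", "B"]   # model answers, repeated run

accuracy = sum(a == k for a, k in zip(run_1, answer_key)) / len(answer_key)
kappa = cohen_kappa_score(run_1, run_2)       # agreement between repeated runs
print(f"accuracy = {accuracy:.1%}, kappa = {kappa:.2f}")
```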


Subject(s)
Educational Measurement, Medical Licensure, Humans, China, Educational Measurement/methods, Educational Measurement/standards, Reproducibility of Results, Clinical Competence/standards
5.
BMC Med Educ ; 24(1): 930, 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39192215

ABSTRACT

CONTEXT: Failure of students to pass the National Medical Licensure Examination (NMLE) is a major problem for universities and the health system in Japan. To assist students at risk of NMLE failure as early as possible after admission, this study investigated the time points (from admission to graduation) at which a predictive pass rate (PPR) can be used to identify students at risk of failing the NMLE. METHODS: Seven consecutive cohorts of medical students between 2012 and 2018 (n = 637) at the Gifu University Graduate School of Medicine were investigated. Using 7 variables from before admission to medical school and 10 variables from after admission, a prediction model yielding the PPR for the NMLE was developed with logistic regression analysis at five time points: at admission and at the end of the 1st, 2nd, 4th, and 6th grades. At each of the five time points, all students were divided into high-risk (PPR < 95%) and low-risk (PPR ≥ 95%) groups for failing the NMLE, and movement between the groups over the 6 years in school was simulated. RESULTS: Factors significantly associated with passing the NMLE were identified at each of the 5 time points, and the number of significant variables increased as grade in school advanced. In addition, two factors extracted at admission were also selected as significant variables at all other time points. In particular, age at entry had a consistent and significant effect throughout medical school. CONCLUSIONS: Risk analysis based on multiple variables, such as the PPR, can inform more effective intervention than a single variable, such as performance on a mock exam. A longer prospective study is required to confirm the validity of the PPR.
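The PPR described here is the predicted probability from a logistic regression fitted on the variables available at a given time point, with a 95% cut-off separating the risk groups. A minimal sketch under those assumptions, using simulated data and an invented feature set:

```python
# Hedged sketch (simulated data, assumed variables): fit a logistic regression at one time
# point and treat the predicted pass probability as the predictive pass rate (PPR).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(637, 5))                    # e.g. age at entry, admission scores, GPA, ...
y = (X[:, 0] + rng.normal(size=637)) > -1.0      # simulated pass/fail outcome

model = LogisticRegression(max_iter=1000).fit(X, y)
ppr = model.predict_proba(X)[:, 1]               # predicted pass probability per student

high_risk = ppr < 0.95                           # flag students below the 95% PPR cut-off
print(f"{high_risk.sum()} students flagged as high risk")
```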


Subject(s)
Educational Measurement, Medical Licensure, Medical Students, Humans, Japan, Medical Licensure/standards, Medical Students/statistics & numerical data, Female, Male, Risk Assessment, Academic Failure, Medical Schools
6.
JAMA ; 332(11): 869-870, 2024 Sep 17.
Article in English | MEDLINE | ID: mdl-39167381

ABSTRACT

This Viewpoint examines physician licensure requirements administered by state medical boards and 2 lawsuits challenging the restrictions placed on interstate telemedicine.


Subject(s)
Medical Licensure, State Government, Telemedicine, Humans, Medical Licensure/legislation & jurisprudence, Telemedicine/legislation & jurisprudence, United States
8.
JAMA ; 332(6): 490-496, 2024 08 13.
Article in English | MEDLINE | ID: mdl-39008316

ABSTRACT

Importance: Physician shortages and the geographic maldistribution of general and specialist physicians impair health care delivery and worsen health inequity in the US. International medical graduates (IMGs) represent a potential solution given their ready supply. Observations: Despite extensive clinical experience, evidence of competence, and willingness to practice in underserved communities, IMGs face multiple barriers to entry into the US, including the immigration process, the available pathways for certification and licensing, and institutional reluctance to consider non-US-trained candidates. International medical graduates applying to postgraduate training programs compare favorably with US-trained candidates in terms of clinical experience, prior formal postgraduate training, and research, but have higher application withdrawal rates and significantly lower residency and fellowship match rates, a disparity that may be exacerbated by the recent elimination of objective performance metrics such as the US Medical Licensing Examination Step 1 score. Once legally in the US, IMGs encounter additional obstacles to board eligibility, research funding, and career progression. Conclusions and Relevance: International medical graduates offer a viable and available means to bridge the domestic physician supply gap while improving workforce diversity and meaningfully addressing the public health implications of the geographic maldistribution of general and specialist physicians, without disrupting existing physician stature and salaries. The US will remain unable to integrate IMGs fully until systematic policy changes are implemented at the national level.


Subject(s)
Foreign Medical Graduates, Health Workforce, Medical Licensure, Humans, Certification/legislation & jurisprudence, Emigration and Immigration/legislation & jurisprudence, Foreign Medical Graduates/legislation & jurisprudence, Foreign Medical Graduates/statistics & numerical data, Foreign Medical Graduates/supply & distribution, Health Workforce/legislation & jurisprudence, Health Workforce/statistics & numerical data, Internship and Residency/legislation & jurisprudence, Internship and Residency/statistics & numerical data, Medical Licensure/legislation & jurisprudence, Medical Licensure/statistics & numerical data, Medically Underserved Area, United States
9.
J Surg Educ ; 81(10): 1428-1436, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39043510

ABSTRACT

OBJECTIVE: To investigate interview and match outcomes of medical students who received pass/fail USMLE Step 1 reporting vs medical students with numeric scoring during the same period. DESIGN: Retrospective analysis of a cross-sectional survey-based study. SETTING: United States 2023 residency match. PARTICIPANTS: Medical student applicants in the 2023 residency match cycle who responded to the Texas Seeking Transparency in Application to Residency (STAR) survey. RESULTS: Among 6756 applicants in the 2023 match, 496 (7.3%) took USMLE Step 1 with pass/fail reporting. Pass/fail reporting was associated with lower USMLE Step 2 CK scores (245.9 vs 250.7), fewer honored clerkships (2.4 vs 3.1), and lower Alpha Omega Alpha membership (12.5% vs 25.2%) (all p < 0.001). Applicants with numeric USMLE Step 1 scores received more interview offers after adjusting for academic performance (beta coefficient 1.04 [95% CI 0.28-1.79]; p = 0.007). Numeric USMLE Step 1 scoring was associated with more interview offers in nonsurgical specialties (beta coefficient 1.64 [95% CI 0.74-2.53]; p < 0.001), but not in general surgery (beta coefficient 3.01 [95% CI -0.82 to 6.84]; p = 0.123) or surgical subspecialties (beta coefficient 1.92 [95% CI -0.78 to 4.62]; p = 0.163). Numeric USMLE Step 1 scoring was not associated with match outcome. CONCLUSIONS: Applicants with numeric USMLE Step 1 scoring had stronger academic profiles than those with pass/fail scoring; however, adjusted analyses found only weak associations with interview or match outcomes. Further research is warranted to assess longitudinal outcomes.


Subject(s)
Internship and Residency, Medical Licensure, Cross-Sectional Studies, United States, Retrospective Studies, Humans, Medical Licensure/standards, Female, Male, Interviews as Topic, Educational Measurement/methods, Adult, Medical Students/statistics & numerical data, General Surgery/education
10.
Fam Med ; 56(8): 505-508, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39012286

ABSTRACT

INTRODUCTION: Reports on the effects of changing United States Medical Licensing Examination (USMLE) Step 1 scoring to pass/fail are evolving in the medical literature. This Council of Academic Family Medicine Educational Research Alliance study of family medicine clerkship directors seeks to describe their perceptions of the impact of Step 1 pass/fail score reporting on students' family medicine clerkship performance. METHODS: Ninety-six clerkship directors responded (56.8% response rate). After excluding Canadian schools, we analyzed 88 responses from clerkship directors at US schools. We used descriptive statistics for demographics and responses to survey questions, and χ2 analysis to determine statistically significant associations between survey items. RESULTS: Most clerkship directors (60.8%) did not observe changes in students' overall clinical performance after Step 1 moved to pass/fail scoring. Fifty percent of clerkship directors reported changes in Step 1 timing recommendations in the past 3 years; reasons included curriculum redesign (30.5%), COVID (4.5%), the change of Step 1 to pass/fail (11.0%), and other reasons (3.7%). Forty-five percent of these clerkship directors did not observe a change in students' clinical medical knowledge after Step 1 went to pass/fail. Eighty-four percent did not compare student performance on clerkship standardized exams before and after the Step 1 scoring change. We found no significant relationship between Step 1 timing and student performance. CONCLUSIONS: This study represents an early description of family medicine clerkship directors' perceived observations of the impact of Step 1 scoring changes on student performance. Continued investigation of the effects of USMLE Step 1 pass/fail scoring should occur.


Subject(s)
Clinical Clerkship, Clinical Competence, Educational Measurement, Family Practice, Medical Licensure, Humans, Clinical Clerkship/standards, Family Practice/education, Educational Measurement/methods, United States, Medical Licensure/standards, Clinical Competence/standards, Surveys and Questionnaires, Medical Students/statistics & numerical data, Male, Female, Undergraduate Medical Education
11.
BMC Med Educ ; 24(1): 717, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956537

ABSTRACT

BACKGROUND: The National Medical Licensing Examination (NMLE) is the only objective, standardized metric for evaluating whether a medical student possesses the professional knowledge and skills necessary to work as a physician. However, the overall pass rate for the NMLE in our hospital in 2021 was much lower than that of Peking Union Medical College Hospital and needed to be improved. METHODS: To identify the reasons for the unsatisfactory performance in 2021, the quality improvement team (QIT) organized regular face-to-face meetings for in-depth discussion, administered a questionnaire, and analyzed the data using Pareto analysis and brainstorming. After the reasons were identified, the "Plan-Do-Check-Act" (PDCA) cycle was applied continuously from 2021 to 2022 to identify and solve problems; this included the formulation and implementation of specific training plans using Gantt charts, checks of their effects, and continuous improvement. Detailed information about the performance of students in 2021 and 2022, together with attendance, assessment, evaluation, and suggestions from our hospital, was provided by the relevant departments, and pass rate-associated data were collected online. RESULTS: After the PDCA plan, the NMLE pass rate in our hospital increased by 10.89 percentage points, from 80.15% in 2021 to 91.04% in 2022 (P = 0.0109), with the skill examination pass rate rising from 95.59% to 99.25% (P = 0.0581) and the theoretical examination pass rate from 84.5% to 93.13% (P = 0.027). Additionally, the mean scores of all examinees increased, with the theoretical examination score rising from 377.0 ± 98.76 in 2021 to 407.6 ± 71.94 in 2022 (P = 0.004). CONCLUSIONS: Our results show a successful application of the PDCA plan in our hospital, which improved the NMLE pass rate in 2022; the PDCA plan may provide a practical framework for future medical education and further improve the NMLE pass rate in subsequent years.
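The year-over-year comparison of pass rates (80.15% vs 91.04%, P = 0.0109) is a two-proportion comparison. A hedged sketch of such a test; the cohort sizes below are assumed for illustration, not taken from the study:

```python
# Hedged sketch (cohort sizes are assumed): two-proportion z-test of the kind used to
# compare year-over-year pass rates.
from statsmodels.stats.proportion import proportions_ztest

passed = [109, 122]   # hypothetical numbers of examinees passing in 2021 and 2022
total  = [136, 134]   # hypothetical cohort sizes (~80.1% and ~91.0% pass rates)

stat, p_value = proportions_ztest(count=passed, nobs=total)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
```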


Subject(s)
Clinical Competence, Educational Measurement, Medical Licensure, Medical Students, Humans, Medical Licensure/standards, Clinical Competence/standards, Quality Improvement, China, Undergraduate Medical Education/standards, Surveys and Questionnaires
12.
Sci Rep ; 14(1): 17341, 2024 07 28.
Article in English | MEDLINE | ID: mdl-39069520

ABSTRACT

This study was designed to assess how different prompt engineering techniques, specifically direct prompts, Chain of Thought (CoT), and a modified CoT approach, influence the ability of GPT-3.5 to answer clinical and calculation-based medical questions, particularly those styled like the USMLE Step 1 exams. To achieve this, we analyzed the responses of GPT-3.5 to two distinct sets of questions: a batch of 1000 questions generated by GPT-4, and another set comprising 95 real USMLE Step 1 questions. These questions spanned a range of medical calculations and clinical scenarios across various fields and difficulty levels. Our analysis revealed that there were no significant differences in the accuracy of GPT-3.5's responses when using direct prompts, CoT, or modified CoT methods. For instance, in the USMLE sample, the success rates were 61.7% for direct prompts, 62.8% for CoT, and 57.4% for modified CoT, with a p-value of 0.734. Similar trends were observed in the responses to GPT-4 generated questions, both clinical and calculation-based, with p-values above 0.05 indicating no significant difference between the prompt types. The conclusion drawn from this study is that the use of CoT prompt engineering does not significantly alter GPT-3.5's effectiveness in handling medical calculations or clinical scenario questions styled like those in USMLE exams. This finding is crucial as it suggests that performance of ChatGPT remains consistent regardless of whether a CoT technique is used instead of direct prompts. This consistency could be instrumental in simplifying the integration of AI tools like ChatGPT into medical education, enabling healthcare professionals to utilize these tools with ease, without the necessity for complex prompt engineering.
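Comparing accuracy across the three prompt styles on the 95 real Step 1 questions amounts to a test of independence on a prompts-by-correctness table. An illustrative sketch, with counts reconstructed only approximately from the reported percentages:

```python
# Illustrative sketch (counts reconstructed approximately from reported percentages over
# 95 questions): chi-square test of independence comparing accuracy across prompt styles.
from scipy.stats import chi2_contingency

#               correct, incorrect
table = [[59, 36],    # direct prompts   (~61.7% of 95)
         [60, 35],    # Chain of Thought (~62.8%)
         [55, 40]]    # modified CoT     (~57.4%)

stat, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {stat:.2f}, dof = {dof}, p = {p_value:.3f}")
```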


Subject(s)
Educational Measurement, Humans, Educational Measurement/methods, Medical Licensure, Clinical Competence, United States, Undergraduate Medical Education/methods
13.
J Med Internet Res ; 26: e60807, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39052324

ABSTRACT

BACKGROUND: Over the past 2 years, researchers have used various medical licensing examinations to test whether ChatGPT (OpenAI) possesses accurate medical knowledge. The performance of each version of ChatGPT on medical licensing examinations in multiple settings has shown remarkable differences, and there is still no comprehensive understanding of the variability in ChatGPT's performance across different medical licensing examinations. OBJECTIVE: In this study, we reviewed all studies on ChatGPT's performance in medical licensing examinations up to March 2024. This review aims to contribute to the evolving discourse on artificial intelligence (AI) in medical education by providing a comprehensive analysis of the performance of ChatGPT in various settings. The insights gained from this systematic review will help educators, policymakers, and technical experts use AI in medical education effectively and judiciously. METHODS: We searched the literature published between January 1, 2022, and March 29, 2024, using query strings in Web of Science, PubMed, and Scopus. Two authors screened the literature according to the inclusion and exclusion criteria, extracted data, and independently assessed the quality of the literature using the Quality Assessment of Diagnostic Accuracy Studies-2 tool. We conducted both qualitative and quantitative analyses. RESULTS: A total of 45 studies on the performance of different versions of ChatGPT in medical licensing examinations were included. GPT-4 achieved an overall accuracy rate of 81% (95% CI 78-84; P<.01), significantly surpassing the 58% (95% CI 53-63; P<.01) accuracy rate of GPT-3.5. GPT-4 passed the medical examinations in 26 of 29 cases, outperforming the average scores of medical students in 13 of 17 cases. Translating the examination questions into English improved GPT-3.5's performance but did not affect GPT-4's. GPT-3.5 showed no difference in performance between examinations from English-speaking and non-English-speaking countries (P=.72), but GPT-4 performed significantly better on examinations from English-speaking countries (P=.02). Any type of prompt could significantly improve GPT-3.5's (P=.03) and GPT-4's (P<.01) performance. GPT-3.5 performed better on short-text questions than on long-text questions. The difficulty of the questions affected the performance of both GPT-3.5 and GPT-4. In image-based multiple-choice questions (MCQs), ChatGPT's accuracy rate ranged from 13.1% to 100%. ChatGPT performed significantly worse on open-ended questions than on MCQs. CONCLUSIONS: GPT-4 demonstrates considerable potential for future use in medical education. However, due to its insufficient accuracy, inconsistent performance, and the challenges posed by differing medical policies and knowledge across countries, GPT-4 is not yet suitable for use in medical education. TRIAL REGISTRATION: PROSPERO CRD42024506687; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=506687.
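Pooled figures such as "81% (95% CI 78-84)" are aggregated from per-study accuracy estimates, each with its own confidence interval. A simplified, hedged sketch of that per-study building block (invented study counts; the review itself would use formal meta-analytic pooling):

```python
# Hedged sketch (simplified, not the review's meta-analytic method): per-study accuracy
# with a 95% confidence interval, the building block that a pooled estimate aggregates.
from statsmodels.stats.proportion import proportion_confint

studies = {"study_A": (82, 100), "study_B": (150, 180), "study_C": (45, 60)}  # (correct, total)

for name, (correct, total) in studies.items():
    low, high = proportion_confint(correct, total, alpha=0.05, method="wilson")
    print(f"{name}: accuracy = {correct/total:.1%}, 95% CI [{low:.1%}, {high:.1%}]")
```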


Subject(s)
Educational Measurement, Medical Licensure, Humans, Medical Licensure/standards, Medical Licensure/statistics & numerical data, Educational Measurement/methods, Educational Measurement/standards, Educational Measurement/statistics & numerical data, Clinical Competence/statistics & numerical data, Clinical Competence/standards, Artificial Intelligence, Medical Education/standards
14.
Sci Rep ; 14(1): 13553, 2024 06 12.
Article in English | MEDLINE | ID: mdl-38866891

ABSTRACT

ChatGPT has garnered attention as a multifaceted AI chatbot with potential applications in medicine. Despite intriguing preliminary findings in areas such as clinical management and patient education, there remains a substantial knowledge gap in comprehensively understanding the opportunities and limitations of ChatGPT's capabilities, especially in medical test-taking and education. A total of n = 2,729 USMLE Step 1 practice questions were extracted from the Amboss question bank. After exclusion of 352 image-based questions, the remaining 2,377 text-based questions were categorized and entered manually into ChatGPT, and its responses were recorded. ChatGPT's overall performance was analyzed by question difficulty, category, and content with regard to specific signal words and phrases. ChatGPT achieved an overall accuracy rate of 55.8% on the n = 2,377 USMLE Step 1 preparation questions obtained from the Amboss online question bank. It demonstrated a significant inverse correlation between question difficulty and performance (rs = -0.306; p < 0.001), maintaining accuracy comparable to the human user peer group across levels of question difficulty. Notably, ChatGPT outperformed on serology-related questions (61.1% vs. 53.8%; p = 0.005) but struggled with ECG-related content (42.9% vs. 55.6%; p = 0.021). ChatGPT performed significantly worse on pathophysiology-related question stems (signal phrase: "what is the most likely/probable cause"). ChatGPT performed consistently across various question categories and difficulty levels. These findings emphasize the need for further investigation to explore the potential and limitations of ChatGPT in medical examination and education.
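The inverse relationship between item difficulty and model performance reported here (rs = -0.306) is a Spearman rank correlation. A minimal sketch on simulated data, with the difficulty scale assumed for illustration:

```python
# Minimal sketch (simulated data, assumed 1-5 difficulty scale): Spearman rank correlation
# between item difficulty and whether the model answered correctly.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
difficulty = rng.integers(1, 6, size=500)                   # simulated difficulty levels 1-5
correct = rng.random(500) < (0.75 - 0.05 * difficulty)      # harder items answered less often

rs, p_value = spearmanr(difficulty, correct)
print(f"rs = {rs:.3f}, p = {p_value:.3g}")
```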


Subject(s)
Educational Measurement, Humans, Educational Measurement/methods, Medical Licensure, Surveys and Questionnaires
15.
PLoS One ; 19(6): e0304784, 2024.
Article in English | MEDLINE | ID: mdl-38889174

ABSTRACT

PURPOSE: Students who earn their medical doctorate (MD) in the U.S. must pass the United States Medical Licensing Exam (USMLE) Step-1. The application process for students with disabilities who seek Step-1 accommodations can be arduous, barrier-ridden, and can impose a significant burden that may have long-lasting effects. We sought to understand the experiences of medical students with Type-1 Diabetes (T1D) who applied for Step-1 accommodations. METHODS: A Qualtrics survey was administered to students enrolled in Liaison Committee on Medical Education (LCME)-accredited MD programs who disclosed having a primary diagnosis of T1D. Basic counts and qualitative inductive analyses were conducted. RESULTS: Of the 21 surveys sent, 16 (76.2%) participants responded. Of the 16 respondents, 11 (68.8%) applied for USMLE Step-1 accommodations, whereas 5 (31.2%) did not. Of the 11 who applied for accommodations, 7 (63.6%) received the accommodations requested, while 4 (36.4%) did not. Of those who received the accommodations requested, 5/7 (71.4%) experienced at least one diabetes-related barrier on exam day. Of those who did not apply for Step-1 accommodations, 4/5 (80%) participants reported experiencing at least one diabetes-related barrier on exam day. Overall, 11/16 (68.8%) students experienced barriers on exam day with or without accommodations. Qualitative analysis revealed themes among participants about their experience with the process: frustration, anger, stress, and some areas of general satisfaction. CONCLUSIONS: This study reports the perceptions of students with T1D about barriers and inequities in the Step-1 accommodations application process. Students with and without accommodations encountered T1D-related obstacles on test day.


Subject(s)
Type 1 Diabetes Mellitus, Medical Students, Humans, Medical Students/psychology, Male, Female, United States, Surveys and Questionnaires, Educational Measurement, Adult, Medical Licensure
16.
South Med J ; 117(6): 342-344, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38830589

ABSTRACT

OBJECTIVES: This study assessed the content of US Medical Licensing Examination question banks with regard to out-of-hospital births and whether the questions aligned with current evidence. METHODS: Three question banks were searched for keywords related to out-of-hospital births, and a thematic analysis was used to analyze the results. RESULTS: Forty-seven questions were identified; of these, 55% indicated a lack of, or inadequate, limited, or irregular, prenatal care in the question stem. CONCLUSIONS: Systematic studies comparing prenatal care in out-of-hospital births versus hospital births are nonexistent, leading to the potential for bias and adverse outcomes. Adjustments to question stems so that they accurately portray current evidence are recommended.


Subject(s)
Medical Licensure, Humans, United States, Medical Licensure/standards, Female, Pregnancy, Prenatal Care/standards, Educational Measurement/methods, Medical Education/methods, Medical Education/standards
17.
Rehabilitation (Stuttg) ; 63(3): 189-196, 2024 Jun.
Article in German | MEDLINE | ID: mdl-38866029

ABSTRACT

BACKGROUND: The learning objectives of the current cross-sectional subject "Rehabilitation, Physical Medicine, Naturopathic Medicine" were revised as part of the further development of the National Competency-Based Catalogue of Learning Objectives for Medicine (NKLM) into its new version 2.0. Since the NKLM is designed as an interdisciplinary catalogue, a subject assignment seemed necessary from the point of view of various stakeholders, and the German Association of Scientific Medical Societies (AWMF) and the German medical faculties therefore initiated a subject assignment process. The assignment process for the subject "Physical and Rehabilitative Medicine, Naturopathic Medicine" (PRM-NHV; according to the subject list in the first draft of the planned new medical licensing regulations from 2020) is presented in this paper. MATERIAL AND METHODS: The AWMF invited its member societies to participate in assigning the learning objectives of chapters VI, VII, and VIII of the NKLM 2.0 to the individual subjects to whose teaching they consider themselves to contribute. For "PRM-NHV", representatives of the societies for rehabilitation sciences (DGRW), physical and rehabilitation medicine (DGPRM), orthopaedics and traumatology (DGOU), and naturopathy (DGNHK) participated. The learning objectives were selected and agreed upon in a structured consensus process following the Delphi methodology, and the AWMF subsequently issued subject recommendations for each learning objective. RESULTS: From the NKLM 2.0, a total of 100 competency-based learning objectives from chapters VII and VIII were agreed upon by the representatives of the involved societies for the subject "PRM-NHV", for presentation on the NKLM 2.0 online platform. CONCLUSIONS: In the context of the revision of medical studies in Germany, and under the umbrella of the AWMF and the German medical faculties, a broad consensus on competency-based learning objectives in the subject "PRM-NHV" was achieved. This provides important orientation for all medical faculties, both for the further development of teaching in the cross-sectional subject "Rehabilitation, Physical Medicine, Naturopathic Medicine" under the 9th revision of the medical licensing regulations, which has been in force for twenty years, and for the preparation of the corresponding subjects in the draft bill of the new licensing regulations.


Subject(s)
Clinical Competence, Curriculum, Naturopathy, Physical and Rehabilitation Medicine, Germany, Physical and Rehabilitation Medicine/education, Physical and Rehabilitation Medicine/standards, Catalogs as Topic, Competency-Based Education/standards, Medical Societies, Scientific Societies, Rehabilitation/standards, Humans, Medical Licensure/standards, Medical Licensure/legislation & jurisprudence
18.
Acad Med ; 99(9): 942-945, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-38781284

ABSTRACT

Letters of reference (LORs) are a common component of the application process for residency training programs. With the United States Medical Licensing Examination Step 1 transitioning to pass/fail grading and with the increasing use of holistic review, the potential role of LORs is rising in importance. Among their key benefits is the ability to provide a broader and more holistic view of applicants, which can include highlighting elements of experience or skill that might otherwise be missed in an application, as well as providing a third-party assessment of the applicant external to their rotation experiences. However, LORs also face issues, including variation in quality, challenges with comparability, and risk of bias. In this article, the authors discuss the unique benefits, limitations, and best practice recommendations for LORs in academic medicine. The authors also discuss future directions, including the role of artificial intelligence and the use of unblinded and co-created LORs.


Subject(s)
Internship and Residency, Humans, Internship and Residency/standards, United States, Correspondence as Topic, School Admission Criteria, Medical Licensure/standards, Educational Measurement/methods, Educational Measurement/standards
20.
Adv Health Sci Educ Theory Pract ; 29(4): 1393-1415, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38780827

ABSTRACT

This paper reports the findings of a Canada-based multi-institutional study designed to investigate the relationships between admissions criteria, in-program assessments, and performance on licensing exams; its objective is to provide insights for improving educational practices across institutions. Data were gathered from six medical schools: McMaster University, the Northern Ontario School of Medicine University, Queen's University, University of Ottawa, University of Toronto, and Western University. The dataset includes graduates who took the Medical Council of Canada Qualifying Examination Part 1 (MCCQE1) between 2015 and 2017. The data were organized into demographic information and four matrices: admissions, course performance, objective structured clinical examination (OSCE), and clerkship performance. Common and unique variables were identified through an extensive consensus-building process, and hierarchical linear regression with a manual stepwise variable selection approach was used for analysis. Analyses were performed both on a combined data set encompassing graduates of all six medical schools and on individual data sets from each school. For the combined data set, the final model explained 32% of the variance in performance on the licensing exam, highlighting variables such as age at admission, sex, biomedical knowledge, the first post-clerkship OSCE, and a clerkship theta score. The individual school analyses explained 41-60% of the variance in MCCQE1 outcomes, with variables comparable to those from the combined data set identified as significant independent predictors, strongly emphasizing the need for a variety of high-quality assessments along the educational continuum. This study underscores the importance of sharing data to enable educational insights, but it also faced challenges in the access and aggregation of data; we therefore advocate for a common framework for multi-institutional educational research, facilitating studies and evaluations across diverse institutions. The study demonstrates the scientific potential of collaborative data analysis in enhancing educational outcomes, offers a deeper understanding of the factors influencing performance on licensure exams, and emphasizes the need to address data gaps to advance multi-institutional research for educational improvement.
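The analysis described here is a hierarchical linear regression: blocks of predictors (demographics, admissions, in-program assessments) are added in sequence and the incremental variance explained is tracked. A hedged sketch on simulated data, with variable names assumed purely for illustration:

```python
# Hedged sketch (simulated data, assumed variable names): hierarchical linear regression
# in which predictor blocks are added in sequence and the change in R^2 is tracked.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
df = pd.DataFrame({
    "age_at_admission": rng.normal(23, 2, n),
    "admissions_score": rng.normal(0, 1, n),
    "biomedical_knowledge": rng.normal(0, 1, n),
    "osce_post_clerkship": rng.normal(0, 1, n),
})
df["mccqe1"] = (0.2 * df["biomedical_knowledge"] + 0.3 * df["osce_post_clerkship"]
                + rng.normal(0, 1, n))   # simulated licensing-exam outcome

blocks = [["age_at_admission"],
          ["age_at_admission", "admissions_score"],
          ["age_at_admission", "admissions_score", "biomedical_knowledge"],
          ["age_at_admission", "admissions_score", "biomedical_knowledge", "osce_post_clerkship"]]

prev_r2 = 0.0
for cols in blocks:
    model = sm.OLS(df["mccqe1"], sm.add_constant(df[cols])).fit()
    print(f"after adding {cols[-1]:<22} R^2 = {model.rsquared:.3f} (delta = {model.rsquared - prev_r2:.3f})")
    prev_r2 = model.rsquared
```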


Subject(s)
Undergraduate Medical Education, Educational Measurement, School Admission Criteria, Humans, Undergraduate Medical Education/standards, Male, Female, School Admission Criteria/statistics & numerical data, Canada, Educational Measurement/standards, Educational Measurement/statistics & numerical data, Medical Schools/standards, Medical Schools/statistics & numerical data, Adult, Medical Licensure/standards, Medical Licensure/statistics & numerical data, Clinical Clerkship/standards, Clinical Clerkship/organization & administration