1.
Med Teach ; : 1-6, 2024 Apr 14.
Article in English | MEDLINE | ID: mdl-38615688

ABSTRACT

Learners across the medical education continuum encounter numerous high-stakes exams and assessments. Preparing effectively for and performing well on these assessments can be challenging for learners for a wide variety of reasons. Medical educators must therefore provide appropriate support for learners who struggle with high-stakes exams, particularly given the complexity of individual learners' life circumstances and the significance of these assessments for career progression. The following 12 tips, grouped into areas that include educator mindset, information-gathering, and developing and executing a study plan, will help medical educators meaningfully support learners in need of assessment remediation and guidance.

2.
Med Teach ; 44(7): 707-719, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35271398

ABSTRACT

BACKGROUND: Commercial off-the-shelf learning platforms developed for medical education (herein referred to as MedED-COTS) have emerged as a resource used by a majority of medical students to prepare for licensing examinations. As MedED-COTS proliferate and add functions and features, an up-to-date review is needed to inform medical educators on (a) students' use of MedED-COTS outside the formal medical school curriculum, (b) the integration of MedED-COTS into the formal curriculum, and (c) the potential effects of MedED-COTS usage on students' national licensing exam scores in the USA.
METHODS: Because few studies have been published on either the use or the integration of MedED-COTS, a focused review of the literature was conducted to guide future research and practice. Data extraction and quality appraisal were conducted independently by three reviewers, with disagreements resolved by a fourth reviewer. A narrative synthesis was completed to answer the research questions, contextualize results, and identify trends and issues in the findings reported by the included studies.
RESULTS: Results revealed consistent positive correlations between students' use of question banks and their licensing exam performance. The limited number of integration studies, combined with methodological issues, makes it impossible to isolate specific effects or associations of integrated commercial resources on standardized test or course outcomes. However, the consistent positive correlations, together with students' pervasive use and strong theoretical foundations explaining the results, provide evidence for integrating MedED-COTS into medical school curricula and highlight the need for further research.
CONCLUSIONS: We conclude that students use exam preparation materials broadly and that these materials have a positive impact on exam results; the literature on the integration of MedED-COTS into formal curricula, and on students' use of resources beyond exam preparation, is scant.


Subjects
Clinical Competence; Students, Medical; Curriculum; Educational Measurement/methods; Humans; Pandemics
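The correlational findings summarized in the review above can be illustrated with a minimal sketch. The data and variable names below are invented for illustration; this is not any reviewed study's analysis, only the general shape of a usage-versus-score correlation.

# Minimal sketch (Python) of a usage-vs-score correlation of the kind the
# review summarizes. All numbers and names here are invented for illustration.
from scipy.stats import pearsonr

qbank_questions_completed = [1200, 2500, 800, 3100, 1900, 2700, 1500]  # hypothetical usage
licensing_exam_scores     = [221,  243,  214, 251,  232,  246,  228]   # hypothetical scores

r, p = pearsonr(qbank_questions_completed, licensing_exam_scores)
print(f"Pearson r = {r:.2f} (p = {p:.4f})")  # a positive r mirrors the reported pattern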
4.
Cureus ; 16(7): e65083, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39171020

ABSTRACT

Objectives: Large language models (LLMs) such as ChatGPT have performed exceptionally well in various fields. Notably, their success in answering postgraduate medical examination questions has previously been reported, suggesting possible utility in surgical education and training. This study evaluated the performance of four different LLMs on the American Board of Thoracic Surgery's (ABTS) Self-Education and Self-Assessment in Thoracic Surgery (SESATS) XIII question bank to investigate their potential applications in the education and training of future surgeons.
Methods: The dataset comprised 400 best-of-four questions from the SESATS XIII exam: 220 adult cardiac surgery questions, 140 general thoracic surgery questions, 20 congenital cardiac surgery questions, and 20 cardiothoracic critical care questions. The GPT-3.5 and GPT-4 models (OpenAI, San Francisco, CA) were evaluated, as well as Med-PaLM 2 (Google Inc., Mountain View, CA) and Claude 2 (Anthropic Inc., San Francisco, CA), and their performances were compared. Questions requiring visual information, such as clinical images or radiology, were excluded.
Results: GPT-4 demonstrated a significant improvement over GPT-3.5 overall (87.0% vs. 51.8% of questions answered correctly, p < 0.0001). GPT-4 also showed consistently improved performance across all subspecialties, with accuracy rates ranging from 70.0% to 90.0%, compared with 35.0% to 60.0% for GPT-3.5. With the GPT-4 model, ChatGPT performed significantly better on the adult cardiac and general thoracic subspecialties (p < 0.0001).
Conclusions: Large language models such as ChatGPT with the GPT-4 model demonstrate an impressive ability to interpret complex cardiothoracic surgical clinical information, achieving an overall accuracy rate of nearly 90% on the SESATS question bank. Our study shows significant improvement between successive GPT iterations. As LLM technology continues to evolve, its potential use in surgical education, training, and continuing medical education is anticipated to enhance patient outcomes and safety.
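The evaluation protocol described here — posing text-only, best-of-four questions to each model and scoring the chosen option against the key — can be sketched as below. This is a hypothetical harness, not the authors' pipeline: the question format, the ask_model() stub, and the model labels are all assumptions for illustration.

# Hypothetical best-of-four MCQ scoring harness; not the authors' actual code.
# The sample item and the ask_model() stub are placeholders for illustration.

def ask_model(model_name: str, stem: str, options: dict) -> str:
    """Stand-in for a vendor API call; returns the model's chosen option key."""
    return "A"  # a real harness would send the prompt to the model here

def accuracy(model_name: str, questions: list) -> float:
    correct = sum(
        ask_model(model_name, q["stem"], q["options"]) == q["answer"]
        for q in questions
    )
    return correct / len(questions)

bank = [  # tiny invented example; real items would come from the question set
    {"stem": "Most common benign cardiac tumor in adults?",
     "options": {"A": "Myxoma", "B": "Lipoma", "C": "Fibroma", "D": "Rhabdomyoma"},
     "answer": "A"},
]
for model in ["model-1", "model-2"]:  # hypothetical model labels
    print(model, f"{accuracy(model, bank):.1%}")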

5.
Proc (Bayl Univ Med Cent) ; 36(4): 483-489, 2023.
Article in English | MEDLINE | ID: mdl-37334084

ABSTRACT

Objective: To determine whether first-attempt failure of the American Board of Colon and Rectal Surgery (ABCRS) board examination is associated with surgical training or personal demographic characteristics.
Methods: Current colon and rectal surgery program directors in the United States were contacted via email, and deidentified records of trainees from 2011 to 2019 were requested. Analysis was performed to identify associations between individual risk factors and first-attempt failure of the ABCRS board examination.
Results: Seven programs contributed data, totaling 67 trainees. The overall first-time pass rate was 88% (n = 59). Several variables demonstrated potential for association, including Colon and Rectal Surgery In-Training Examination (CARSITE) percentile (74.5 vs 68.0, P = 0.09), number of major cases in colorectal residency (245.0 vs 219.2, P = 0.16), >5 publications during colorectal residency (75.0% vs 25.0%, P = 0.19), and first-time passage of the American Board of Surgery certifying examination (92.5% vs 7.5%, P = 0.18).
Conclusion: The ABCRS board examination is a high-stakes test, and training program factors may be predictive of failure. Although several factors showed potential for association, none reached statistical significance. Our hope is that a larger data set will identify statistically significant associations that can benefit future trainees in colon and rectal surgery.
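The comparisons reported above — continuous predictors such as CARSITE percentile and categorical ones such as publication count — correspond to standard two-group tests. The sketch below shows the general form with invented numbers; it is not the study's data or analysis code.

# Sketch of the kinds of pass/fail comparisons described above.
# All numbers are invented for illustration; they are not the study's data.
from scipy import stats

# Continuous predictor (e.g., CARSITE percentile): two-sample Welch t-test
carsite_pass = [78, 71, 80, 69, 75, 77]  # hypothetical first-time passers
carsite_fail = [65, 70, 68, 72]          # hypothetical first-time failures
t, p = stats.ttest_ind(carsite_pass, carsite_fail, equal_var=False)
print(f"CARSITE percentile: t = {t:.2f}, p = {p:.3f}")

# Categorical predictor (e.g., >5 publications): Fisher's exact test on a 2x2 table
#              >5 pubs  <=5 pubs
table = [[12, 47],   # passed first attempt (hypothetical counts)
         [1, 7]]     # failed first attempt (hypothetical counts)
odds_ratio, p = stats.fisher_exact(table)
print(f">5 publications: OR = {odds_ratio:.2f}, p = {p:.3f}")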

6.
Med Sci Educ ; 30(4): 1379-1382, 2020 Dec.
Article in English | MEDLINE | ID: mdl-34457804

ABSTRACT

Integrating clinical genetics education into physician training is an important strategic approach in the era of genomic medicine. To determine how much genomics-related content the board examinations of the American Board of Medical Specialties contain, a descriptive analysis of 21 exam blueprints was performed. Genomics topics were absent from 43% of the specialty blueprints, which indicates that clinical genetics is underrepresented in graduate medical education curricula.
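As a quick sanity check on the headline figure: with 21 blueprints, the reported 43% can only correspond to 9 blueprints (8/21 ≈ 38%, 10/21 ≈ 48%). The one-liner below assumes that implied count; it is simple arithmetic, not data taken from the study.

# 9 of 21 blueprints is the only count that rounds to the reported 43%.
total_blueprints = 21
blueprints_without_genomics = 9   # implied by the 43% figure, not stated directly
print(f"{blueprints_without_genomics / total_blueprints:.0%}")  # -> 43%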
