Results 1 - 20 of 27
1.
BMC Med Educ ; 24(1): 1071, 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39350075

ABSTRACT

BACKGROUND: COVID-19 significantly impacted physician assistant/associate (PA) education programs. Most programs transitioned didactic and clinical education from in-person to remote, and clinical training opportunities diminished. Graduates of accredited PA programs take the Physician Assistant National Certifying Examination (PANCE), a five-hour exam with 300 multiple-choice questions, and must attain or exceed the scaled passing score of 350 (range: 200-800). We examined trends in first-time examinees' PANCE scores and passing rates three years prior to the pandemic and three years during it. METHODS: We analyzed data (N = 59,459) from the National Commission on Certification of Physician Assistants. The two primary outcomes were PANCE scores and pass rates. The main exposure was the timeframe: three years pre-pandemic (2017-2019) and three years during the pandemic (2020-2022). The 2017-2018 scores were equated to the new passing standard implemented in 2019. Covariates included age, gender, years the PA program had been accredited, program region, and rural-urban setting. Analyses consisted of descriptive, bivariate, and multivariate statistics. RESULTS: The mean PANCE score and pass rate during the six-year study period were 463 and 93%, respectively. In unadjusted analyses comparing each year individually, the mean PANCE score was highest in 2020, and the 2022 mean was lower than in all other years except 2017. When comparing each pandemic year to the pooled three pre-pandemic years and adjusting for test-taker and PA program covariates, examinees scored significantly higher in 2020, showed no difference in 2021, and scored lower in 2022. When controlling for covariates, examinees had 1.24 times the odds of failing in 2022 compared to the pooled pre-pandemic period. CONCLUSION: Findings suggest that PANCE scores and pass rates were impacted during the third year of the pandemic.
PANCE assesses whether examinees have the essential clinical knowledge to enter the PA profession. It is crucial to determine whether the pandemic affected PANCE scores and pass rates to ensure PAs provide safe and high-quality patient care.
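The "1.24 times the odds of failing" figure is an odds ratio. As a minimal sketch with made-up counts (not the study's data, and without the covariate adjustment the study's multivariate models perform), an unadjusted odds ratio can be computed from a 2x2 pass/fail table:

```python
# Hypothetical illustration: odds ratio of failing from a 2x2 table.
# The counts below are invented so the unadjusted ratio lands near 1.24.
def odds_ratio(fail_exposed, pass_exposed, fail_control, pass_control):
    """Odds of failing in the exposed group divided by odds in the control group."""
    return (fail_exposed / pass_exposed) / (fail_control / pass_control)

or_2022 = odds_ratio(fail_exposed=930, pass_exposed=9070,      # e.g. 2022 cohort
                     fail_control=2480, pass_control=29920)    # e.g. pooled 2017-2019
print(round(or_2022, 2))  # → 1.24
```

An odds ratio above 1 means the exposed group (here, the 2022 examinees) has higher odds of the outcome than the comparison group; the study's reported 1.24 additionally controlled for examinee and program covariates.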


Subject(s)
COVID-19, Certification, Educational Measurement, Physician Assistants, Humans, COVID-19/epidemiology, Certification/standards, United States/epidemiology, Male, Female, Pandemics, Adult, SARS-CoV-2, Clinical Competence/standards
2.
BMC Med Educ ; 20(1): 169, 2020 May 25.
Article in English | MEDLINE | ID: mdl-32450862

ABSTRACT

BACKGROUND: Most medical students in Germany are admitted via selection procedures, which are adjusted to the demands of the universities. At Lübeck medical school, scores from interviews that measure non-academic skills and pre-university GPAs are summed to arrive at an admission decision. This article seeks to illuminate the effectiveness of this selection procedure in comparison with non-selected student groups. METHODS: Quota information and exam results from the first federal exam were linked for students admitted to Lübeck medical school between 2012 and 2015 (N = 655). Five different student groups (university-specific selection quota, pre-university GPA quota, waiting time quota, ex-ante quota and foreign students) were compared regarding exam attempts, written and oral grades, temporal continuity and examination success in the standard study period. RESULTS: While the pre-university GPA quota outperformed all other quotas regarding written and oral grades, it did not differ from the selection quota regarding exam attempts, temporal continuity and examination success in the standard study period. Students in the waiting time and ex-ante quotas performed worse by comparison. The results of foreign students were the most problematic. CONCLUSION: Students selected by the university show high temporal continuity and examination success. These results, and possible advantages in physician eligibility, argue for the utilisation of non-academic skills for admission.


Subject(s)
Educational Measurement/statistics & numerical data, School Admission Criteria/statistics & numerical data, Medical Students/statistics & numerical data, Cross-Sectional Studies, Undergraduate Medical Education, Germany, Humans
3.
Teach Learn Med ; 29(2): 181-187, 2017.
Article in English | MEDLINE | ID: mdl-28098483

ABSTRACT

The number of appearances in the bottom quartile of 1st-year medical school exams was used to represent the extent to which students were having academic difficulties. Medical educators have long expressed a desire to have indicators of medical student performance that have strong predictive validity. Predictors traditionally used fell into 4 general categories: demographic (e.g., gender), other background factors (e.g., college major), performance/aptitude (e.g., medical college admission test scores), and noncognitive factors (e.g., curiosity). These factors, however, have an inconsistent record of predicting student performance. In comparison to traditional predictive factors, we sought to determine the extent to which academic performance in the 1st year of medical school, as measured by examination performance in the bottom quartile of the class in 7 required courses, predicted later performance on a variety of assessments, both knowledge based (e.g., United States Medical Licensing Examination Step 1 and Step 2 CK) and clinical skills based (e.g., clerkship grades and objective structured clinical exam performance). Of all predictors measured, number of appearances in the bottom quartile in Year 1 was the most strongly related to performance in knowledge-based assessments, as well as clinically related outcomes, and, for each outcome, bottom-quartile performance accounted for additional variance beyond that of the traditional predictors. Low academic performance in the 1st year of medical school is a meaningful risk factor with both predictive validity and predictive utility for low performance later in medical school. The question remains as to how we can incorporate this indicator into a system of formative assessment that effectively addresses the challenges of medical students once they have been identified.


Subject(s)
Achievement, Clinical Competence, Undergraduate Medical Education, Educational Measurement, Medical Students, Female, Forecasting, Humans, Male, United States
4.
Anat Sci Educ ; 17(6): 1299-1307, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38954745

ABSTRACT

Reduced hours of instruction are reported within the gross anatomy education literature. Anatomy instruction continues to be challenged with motivating and inspiring learners to value the contribution of gross anatomy knowledge to their career development alongside increased organizational demands for efficiency and effectiveness. To address these demands, this retrospective study sought to understand how the relative timing and amount of gross anatomy instruction were related to examination performance. Undergraduate and graduate students between 2018 and 2022 were assigned to three cohorts determined by enrollment in prosection-based anatomy only (n = 334), concurrent enrollment in prosection- and dissection-based anatomy in the same semester (n = 67), or consecutive enrollment in the courses one year apart (n = 43). Concurrent students had higher prosection-based anatomy examination scores than prosection-only and consecutive students. Consecutively enrolled students outperformed concurrently enrolled students on the first two dissection examinations but showed no performance differences on the third and fourth dissection examinations. While the results on the timing and presentation of anatomical instruction were inconclusive, the results do support increased instructional time using both prosection and dissection modalities concurrently to improve performance on identification-based gross anatomy examinations.


Subject(s)
Anatomy, Curriculum, Dissection, Undergraduate Medical Education, Educational Measurement, Anatomy/education, Humans, Dissection/education, Educational Measurement/statistics & numerical data, Retrospective Studies, Female, Male, Time Factors, Undergraduate Medical Education/methods, Young Adult, Adult, Medical Students/statistics & numerical data, Medical Students/psychology
5.
Med Sci Educ ; 34(2): 371-378, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38686139

ABSTRACT

Due to unique demands on students in medical education, this study examined the relationship of educational and demographic factors with undergraduate medical students' exam performance in a semester-long medical neuroscience course. Engaging with a mixed-enrollment cohort of medical students, the study used self-reported survey data and exam scores to specifically examine the relationships with growth mindset, use of study strategies, confidence, attendance, and demographic characteristics. Chi-square, ANOVA, and correlational tests revealed interesting and complex relationships among the study variables, which in some cases support and in other cases challenge existing findings in the academic discourse. The paper concludes by discussing implications from the study that may potentially improve academic outcomes as well as identifying potential areas for future research and academic interventions.

6.
Adv Med Educ Pract ; 15: 551-563, 2024.
Article in English | MEDLINE | ID: mdl-38884014

ABSTRACT

Background: Formative assessment with feedback is part of the assessment program in medical education to improve students' learning. Limited research has focused on its application and impact on practical anatomy education. Methods: This study aimed to examine medical students' perceptions of formative assessment in practical anatomy sessions of body systems-based educational units and explore its influence on final practical exam performance. A descriptive, cross-sectional study was conducted. Data were collected from 173 Year 2 medical students through a survey that addressed their perceptions of the process and importance of formative assessment and feedback. The survey employed a 5-point Likert scale. Two open-ended questions were appended at the end of the survey. Students' performance in Unit 3 (where formative assessment was conducted) was compared to their performance in Unit 2 (where no formative assessment was conducted) and with the performance of the previous academic year's students in Unit 3 (where no formative assessment was conducted). Descriptive statistics were used. The level of statistical significance was set at p-value < 0.05. Responses to open-ended questions (qualitative data) were counted, categorized as themes, and presented as frequencies and percentages. Results: The survey showed high internal consistency, and its validity was established through exploratory factor analysis. Results showed that the mean mark for the unit with formative assessment and feedback was significantly higher than for the units without formative assessment and feedback. Students showed positive perceptions of formative assessment and feedback conducted after practical anatomy sessions. They reported useful insights regarding the benefits they gained from formative assessment and feedback as well as constructive suggestions for future improvements.
Conclusion: The study indicates that students positively perceived formative assessment and feedback sessions after practical anatomy sessions. Findings also indicate a positive effect of formative assessment on students' performance in summative practical assessment in anatomy.

7.
Article in Japanese | MEDLINE | ID: mdl-39284716

ABSTRACT

OBJECTIVE: This study aimed to investigate the performance of generative pre-trained transformer-4 (GPT-4) on the Certification Test for Mental Health Management and whether tuned prompts could improve its performance. METHODS: This study used a 3 × 2 factorial design to examine the performance according to test difficulty (courses) and prompt conditions. We prepared 200 multiple-choice questions (600 questions overall) for each course using the Certification Test for Mental Health Management (levels I-III) and essay questions from the level I test for the previous four examinations. Two conditions were used: a simple prompt condition using the questions as prompts and a tuned prompt condition using techniques to obtain better answers. GPT-4 (gpt-4-0613) was adopted and implemented using the OpenAI API. RESULTS: The simple prompt condition scores were 74.5, 71.5, and 64.0 for levels III, II, and I, respectively. The tuned and simple prompt condition scores had no significant differences (OR = 1.03, 95% CI: 0.65-1.62, p = 0.908). Incorrect answers were observed in the simple prompt condition because of the inability to make choices, whereas no incorrect answers were observed in the tuned prompt condition. The average score for the essay questions under the simple prompt condition was 22.5 out of 50 points (45.0%). CONCLUSION: GPT-4 had a sufficient knowledge network for occupational mental health, surpassing the criteria for the level II and III tests. For the level I test, which required the ability to describe more advanced knowledge accurately, GPT-4 did not meet the criteria. External information may be needed when using GPT-4 at this level. Although the tuned prompts did not significantly improve the performance, they were promising in avoiding unintended outputs and organizing output formats. UMIN trial registration: UMIN-CTR ID = UMIN000053582.

8.
J Surg Educ ; 81(3): 404-411, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38296725

ABSTRACT

INTRODUCTION: The College of Surgeons of East, Central, and Southern Africa (COSECSA) has been expanding surgical training in sub-Saharan Africa to respond to the shortage in the region. However, acquiring surgical skills requires rigorous training, and these skills are repeatedly assessed throughout training. Therefore, understanding the factors influencing these assessments is crucial. Previous research has identified individual characteristics, educational background, curriculum structure and previous exam outcomes as influences on performance. However, COSECSA's Membership of the College of Surgeons (MCS) exam has not been examined for factors influencing performance, which this study aims to investigate. METHODS: Data from MCS trainees who took the exam between 2015 and 2021 were analyzed. Trainee demographics, institutional affiliation, operative experience, and exam performance were considered. Linear regression models were used to analyze the factors related to written and clinical exam performance. RESULTS: Out of 354 trainees, 228 were included in the study. Factors such as training duration, the ratio of emergency surgeries, institutional funding source, and the country's official language were associated with written exam performance. Training duration, funding source, exposure to major surgeries, and the ratio of performing operations were significant factors for the clinical exam. DISCUSSION: Operative experience, institutional affiliation, training duration, and language proficiency influence exam performance. Hospitals funded by faith-based organizations or nongovernmental organizations had trainees with higher scores. Prolonged training did not guarantee improved performance. Lastly, having English as an official language improved written exam scores. Gender and country of training did not significantly impact performance.
CONCLUSION: This study highlights the importance of operative experience, institutional affiliation, and language proficiency in the exam performance of surgical trainees in COSECSA. Interventions to enhance surgical training and improve exam outcomes in sub-Saharan Africa should consider these factors. Further research is needed to explore additional outcome measures and gather comprehensive data on trainee and hospital characteristics.


Subject(s)
Surgeons, Humans, Retrospective Studies, Surgeons/education, Africa South of the Sahara, Southern Africa, Curriculum, Clinical Competence
9.
Front Psychol ; 15: 1252520, 2024.
Article in English | MEDLINE | ID: mdl-38952836

ABSTRACT

Overestimation and miscalibration increase as performance decreases. This finding has been attributed to a common factor: participants' knowledge and skills about the task performed. Researchers proposed that the same knowledge and skills needed for performing well in a test are also required for accurately evaluating one's performance. Thus, when people lack knowledge about a topic they are tested on, they perform poorly and do not know they did so. This is a compelling explanation for why low performers overestimate themselves, but such increases in overconfidence can also be due to statistical artifacts. Therefore, whether overestimation indicates lack of awareness is debatable, and additional studies are needed to clarify this issue. The present study addressed this problem by investigating the extent to which students at different levels of performance know that their self-estimates are biased. We asked 653 college students to estimate their performance in an exam and subsequently rate how confident they were that their self-estimates were accurate. The latter judgment is known as a second-order judgment (SOJ) because it is a judgment of a metacognitive judgment. We then looked at whether miscalibration predicts SOJs per quartile. The findings showed that the relationship between miscalibration and SOJs was negative for high performers and positive for low performers. Specifically, for low performers, the less calibrated their self-estimates were, the more confident they were in their accuracy. This finding supports the claim that awareness of what one knows and does not know depends in part on how much one knows.
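The miscalibration measure this abstract relies on can be sketched concretely. Below is a minimal illustration with made-up numbers (not the study's data): bias is the self-estimated score minus the actual score, summarized per performance quartile, as in classic "unskilled and unaware" analyses.

```python
# Hypothetical sketch: overestimation bias per performance quartile.
# bias > 0 means a student rated themselves above their actual score.
from statistics import mean

students = [  # (actual_score, self_estimate) on a 0-100 exam; invented data
    (35, 60), (42, 58), (55, 60), (60, 62),
    (70, 68), (78, 75), (88, 84), (95, 90),
]
students.sort(key=lambda s: s[0])                    # order by actual performance
quartiles = [students[i:i + 2] for i in range(0, 8, 2)]
bias = [mean(est - act for act, est in q) for q in quartiles]
print(bias)  # bottom quartile overestimates; top quartile underestimates
```

The study's twist is to then ask each student for a second-order judgment (confidence in the self-estimate) and correlate that confidence with the bias within each quartile.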

10.
Cureus ; 16(4): e58864, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38800152

ABSTRACT

BACKGROUND: The COVID-19 pandemic caused medical schools to convert to an online format, necessitating a swift change in medical education delivery. New teaching methods were adopted, with some schools having greater success than others. Kirk Kerkorian School of Medicine (KSOM) employed a small-group interactive learning style that consists of eight or fewer medical students and one faculty mentor engaging in group problem-based learning (PBL) twice weekly. This style showed clear signs of struggle, with a significant decrease in exam performance. Rocky Vista University College of Osteopathic Medicine (RVUCOM) employed a large-group didactic lecture style that consisted of one faculty mentor lecturing hundreds of medical students in a pre-recorded setting five times weekly. This style had greater success with its curriculum adaptation, which had minimal effect on exam performance. This study aims to investigate whether the type of medical school curriculum (small-group interactive vs. large-group didactic) impacts student exam performance during online learning transitions forced by the COVID-19 pandemic. METHODOLOGY: KSOM and RVUCOM students were grouped into above-expectations and below-expectations categories based on each institution's standardized exam performance metrics. Independent-samples t-tests were performed to compare groups. KSOM was classified as a small-group interactive curriculum through its heavy reliance on student-led PBL, whereas RVUCOM was classified as a large-group didactic curriculum through its extensive proctor-led slideshow lectures. RESULTS: KSOM's transition to online PBL resulted in fewer students scoring above the national average on the National Board of Medical Examiners (NBME) exams compared to previous cohorts (55% vs. 77%, respectively; N = 47 and 78; P < 0.01). RVUCOM's transition to online large-group lectures yielded no significant differences between students who performed above expectations and students who performed below expectations between their cohorts (63% vs. 65%, respectively; N = 305 and 300; P > 0.05). CONCLUSIONS: KSOM's COVID-19 cohort performed significantly worse than RVUCOM's COVID-19 cohort during their medical school organ-system exams. We believe that the small-group learning at KSOM is less resilient for online curricula compared to the large-group didactics seen at RVUCOM. Understanding which didactic methods can transition to online learning more effectively than others is vital in guiding effective curriculum adjustments as online delivery becomes more prominent.

11.
Med Sci Educ ; 33(5): 1089-1094, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37886276

ABSTRACT

Many medical students use spaced repetition as a study strategy to improve knowledge retention, and there has been growing interest from medical students in using flashcard software, such as Anki, to implement spaced repetition. Previous studies have provided insights into the relationship between medical students' use of spaced repetition and exam performance, but most of these studies have relied on self-reports. Novel insights about how medical students use spaced repetition can be gleaned from research that takes advantage of the ability of digital interfaces to log detailed data about how students use software. This study is unique in its use of data extracted from students' digital Anki data files, and those data are used to compare study patterns over the first year of medical school. Implementation of spaced repetition was compared between two groups of students who were retrospectively grouped based on average performance on three exams throughout the first year of medical school. Results indicate that students in the higher scoring group studied more total flashcards and implemented spaced repetition via Anki earlier in the year compared to the lower scoring group. These findings raise the possibility that implementing spaced repetition as a study strategy early in medical school may be related to improved knowledge retention and exam performance. Additional research should be performed at more sites to further examine the relationship between spaced repetition implementation and exam performance.
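As context for the spaced-repetition mechanism these Anki studies examine, here is a minimal sketch of interval scheduling, loosely modeled on the SM-2 family of algorithms from which Anki's scheduler descends. The constants and the simplification (a flat ease factor, lapses resetting to one day) are illustrative, not Anki's actual values.

```python
# Illustrative spaced-repetition scheduler: successful reviews grow the
# review interval multiplicatively; a lapse resets it to a short interval.
def next_interval(interval_days: float, ease: float, passed: bool) -> float:
    if not passed:
        return 1.0               # lapse: review again tomorrow
    return interval_days * ease  # success: wait longer before the next review

intervals = [1.0]
for _ in range(4):               # four consecutive successful reviews
    intervals.append(next_interval(intervals[-1], ease=2.5, passed=True))
print(intervals)  # → [1.0, 2.5, 6.25, 15.625, 39.0625]
```

The exponential growth of the gaps is what makes early adoption matter, as the abstract above observes: a card introduced in month one can reach multi-week intervals by exam time, while a card introduced late never leaves the short-interval regime.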

12.
J Med Educ Curric Dev ; 10: 23821205231205389, 2023.
Article in English | MEDLINE | ID: mdl-37822777

ABSTRACT

Objectives: As medical schools worldwide condense the preclinical phase of medical education, it is increasingly important to identify resources that help medical students retain and apply medical information. One popular tool among medical students is an application called Anki, a free and open-source flashcard program utilizing spaced repetition for quick and durable memorization. The purpose of this study is to determine how variable Anki usage among first-year medical students throughout a standardized anatomy and physiology course correlates with performance. Methods: We designed a novel Anki add-on called "Anki Stat Scraper" to collect data on first-year medical students at Kirk Kerkorian School of Medicine during their 8-week anatomy and physiology course. Anki users (N = 45) were separated into four groups: heavy (N = 5), intermediate (N = 5), light (N = 16), and limited-Anki (N = 19) users, based on the time each student spent on the flashcard app, how many flashcards they studied per day, and how many days they used the app prior to their anatomy and physiology exam. A 14-question Likert scale questionnaire was administered to each participant to gauge their understanding of Anki and how they used the app to study. Results: Heavy and intermediate Anki users had higher average exam scores than their counterparts who did not use Anki as a study method. Average exam scores were 90.34%, 91.74%, 85.86%, and 87.75% for heavy, intermediate, light, and limited-Anki users respectively (p > 0.05). Our survey demonstrated that Anki users spent an average of 73.86% of their study time using Anki, compared to an average of 36.53% for limited-Anki users (p < 0.001). Conclusion: Anki users did not score significantly higher compared to limited-Anki users. However, survey responses indicate that students believe Anki may still be a useful educational tool for future medical students.

13.
Anat Sci Educ ; 16(5): 989-1003, 2023.
Article in English | MEDLINE | ID: mdl-37016440

ABSTRACT

Formative assessments are primarily used as a tool to gauge learning throughout an anatomy course. They have also been demonstrated to improve student mastery and exam performance, although the precise nature of this relationship is poorly understood. In this study, it is hypothesized that formative assessment questions targeting higher cognitive levels, integrating topics from multiple lessons, and including visuospatial elements will increase student exam performance. Formative and summative questions provided to students during the Clinical Anatomy block at the University of Arizona College of Medicine-Phoenix between 2015 and 2018 were assessed for cognitive level, integration of targeted learning objectives, and presence or absence of visuospatial elements. These variables were entered into a hierarchical linear model along with demographic variables for each cohort to assess the relationships between these variables and cohort performance on exam questions. The best predictor of exam performance was the inclusion of constituent learning objectives within the formative assessment. Additionally, students performed better on exam questions with visuospatial elements when the targeted learning objectives were also associated with visuospatial elements on the formative assessment. Surprisingly, the cognitive level of formative questions and the integration of learning objectives within them were not correlated with student exam performance. This study demonstrates the importance of including a broad range of topics in formative assessments and highlights a potential benefit of adopting consistent question formats for formative assessments and exams.


Subject(s)
Anatomy, Educational Measurement, Humans, Anatomy/education, Learning, Students, Curriculum
14.
Cureus ; 14(10): e30523, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36415427

ABSTRACT

Background The ability of various pre-clerkship assessments to provide insight into United States Medical Licensing Examination (USMLE) Step 1 performance is of great importance to medical educators. Two custom pre-clerkship assessments used at the Kirk Kerkorian School of Medicine at the University of Nevada, Las Vegas (KSOM) are National Board of Medical Examiners (NBME)-derived end-of-semester final examinations and subject examinations. The authors sought to determine whether performance on these custom assessments can provide feedback on a medical student's readiness to undertake the USMLE Step 1 examination. Methodology Deidentified student performance data were provided by institutional databases for the KSOM graduating class of 2023 (N = 60). Pearson correlation analyses were utilized to evaluate the strength of the correlation between USMLE Step 1 performance and NBME subject examinations versus NBME end-of-semester final examinations. Results The results indicated that the NBME end-of-semester final examinations have a statistically stronger correlation with the USMLE Step 1 score than the majority of the individual NBME subject examinations. However, the mean NBME subject examination score (Semester 1: r = 0.53, p < 0.05; Semester 2: r = 0.58, p < 0.05) demonstrated a significantly stronger correlation with USMLE Step 1 performance than the NBME end-of-semester final examination score for both Semesters 1 and 2 (Semester 1: r = 0.50, p < 0.05; Semester 2: r = 0.48, p < 0.05). Conclusions These results showed that the mean NBME subject examination score was a better metric for assessing readiness for the USMLE Step 1 than the NBME end-of-semester final examinations. However, each NBME end-of-semester final examination score showed a stronger correlation than the majority of the individual NBME subject examinations.
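The Pearson correlation at the heart of this analysis is straightforward to compute with only the Python standard library. The score lists below are hypothetical stand-ins, not the study's data:

```python
# Sketch of a Pearson correlation between two score series, as used to
# compare NBME exam results against Step 1 performance.
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

subject_means = [72, 80, 65, 90, 78, 85]        # hypothetical mean subject-exam scores
step1_scores = [215, 232, 205, 250, 228, 240]   # hypothetical Step 1 scores
print(pearson_r(subject_means, step1_scores))   # close to +1: strong positive correlation
```

Note that formally comparing two correlations (as the study does between subject-exam and final-exam correlations with Step 1) requires an additional test for dependent correlations, not just eyeballing the two r values.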

15.
Ann Surg Open ; 3(4): e209, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36590890

ABSTRACT

Objective: To assess the association of residents' exam performance and transient emotions with their reports of burnout, suicidality, and mistreatment. Background: An annual survey evaluating surgical resident well-being is administered following the American Board of Surgery In-Training Examination (ABSITE). One concern about administering a survey after the ABSITE is that stress from the exam may influence residents' responses. Methods: A survey was administered to all general surgery residents following the 2018 ABSITE assessing positive and negative emotions (scales range from 0 to 12), as well as burnout, suicidality over the past 12 months, and mistreatment (discrimination, sexual harassment, or verbal/emotional or physical abuse) in the past academic year. Multivariable hierarchical regressions assessed the associations of exam performance and emotions with burnout, suicidality, and mistreatment. Results: Residents from 262 programs provided complete responses (N = 6987, 93.6% response rate). Residents reported high mean positive emotion (M = 7.54, SD = 2.35) and low mean negative emotion (M = 5.33, SD = 2.43). While residents in the bottom ABSITE score quartile reported lower positive and higher negative emotion than residents in the top 2 and 3 quartiles, respectively (P < 0.005), exam performance was not associated with the reported likelihood of burnout, suicidality, or mistreatment. Conclusions: Residents' emotions after the ABSITE are largely positive. Although poor exam performance may be associated with lower positive and higher negative emotion, it does not seem to be associated with the likelihood of reporting burnout, suicidality, or mistreatment. After adjusting for exam performance and emotions, mistreatment remained independently associated with burnout and suicidality. These findings support existing evidence demonstrating that burnout and suicidality are stable constructs that are robust to transient stress and/or emotions.

16.
Med Sci Educ ; 32(6): 1465-1479, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36407815

ABSTRACT

Medical students have unprecedented access to a large variety of learning resources, but patterns of resource use, differences in use across education cohorts, and the relationship between resource use and academic performance are unclear. Therefore, the purpose of the current study was to evaluate student resource use and its relationship to academic performance during preclerkship years. First-year and second-year medical students completed a 10-question electronic survey that assessed likelihood of using outside resources recommended by others, reasons for using outside resources, frequency of use of resources, and use of outside resources for specific disciplines. Outcomes were compared between the 2 cohorts of students. First-year students were more likely to use instructor-produced resources and self-generated study resources, and second-year students were more likely to use board review resources. Although differences were found between cohorts for frequency of use of certain resources, correlations between resource use and academic performance were modest. Overall, our results indicated that student use of study resources changed between the first and second years of medical school. These results suggest opportunities for medical educators to guide students in the selection and effective use of outside resources as they mature as self-regulated learners. Further, since students seem to extensively use external resources for learning, institutions should consider calibrating their curriculum and teaching methods to this learning style and providing high-quality, accessible resource materials for all students to reduce the potential impact of socioeconomic factors on student performance.

17.
Med Sci Educ ; 31(4): 1319-1326, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34457974

ABSTRACT

INTRODUCTION: Undergraduate medical education has necessarily evolved with the increasing utilization of technology and the availability of ancillary resources developed for medical students. However, medical educational resources are expensive, and few studies have validated their ability to significantly modify student exam performance. METHODS: A post-exam survey was devised to evaluate medical students' resource usage, student-perceived preparedness, and exam performance. RESULTS: Students who felt more prepared for exams performed better than students who felt less prepared (p = .017). Students who watched didactic lectures online and those who utilized peer-to-peer tutoring outperformed students who did not use these resources (p = .035, p = .008). Analyses of the data show that none of the purchased resources significantly improved student exam performance. The majority of students used between six and eight resources for exam preparation. There may be a slightly negative association between the quantity of resources used and exam scores (p = .18). DISCUSSION: Contrary to traditional confidence studies that correlate overconfidence with underperformance, medical students who reported feeling more prepared for exams performed better than students who felt less prepared. CONCLUSION: Medical students may have a more complete grasp of their knowledge base and deficits, which may enable a more accurate match between exam expectations and academic performance. This post-exam survey method can be customized and applied to evaluate resource utility as it pertains to specific undergraduate medical education curricula at individual institutions.

18.
Radiol Technol ; 92(3): 240-248, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33472876

ABSTRACT

PURPOSE: To investigate the exam performance of reinstatement candidates pursuing American Registry of Radiologic Technologists (ARRT) certification and registration in radiography. METHODS: This study compared exam performance data from reinstatement candidates taking the ARRT radiography exam in 2017 and 2018 (N = 412) to the performance of 2017 and 2018 first-time exam candidates (N = 22 731) and to the reinstatement candidates' own past passing exam attempts. RESULTS: Scores on the reinstatement exam, at both the overall and section levels, were significantly lower (P < .01) than scores for first-time radiography candidates and than the reinstatement candidates' own previous passing attempts. The first-time pass rate was 89%, while the reinstatement pass rate was 56%. Reinstatement candidates also scored lower than first-time radiography candidates on all sections of the exam. The section with the smallest effect size difference was Patient Interactions and Management, and the sections with the largest effect size differences were Radiation Protection and Extremity Procedures. There was not a strong relationship between time lag and exam scores. DISCUSSION: Reinstatement candidates had a lower exam pass rate than first-time exam candidates. The reinstatement pass rate was, however, greater than 50%, indicating that reinstatement candidates perform better than candidates who fail and retake the initial radiography exam (pass rate of 40%-45%). Reinstatement candidates' lower performance on multiple sections of the exam suggests that they do not perform as well as first-time candidates across numerous portions of content. CONCLUSION: A lower percentage of reinstatement candidates for the ARRT radiography exam pass the exam on a first attempt compared with first-time exam candidates. Passing the exam as a reinstatement candidate is, however, achievable given the pass rate above 50%.


Subject(s)
Certification, Clinical Competence, Educational Measurement, Humans, Radiography, Registries, United States
19.
Med Educ Online ; 26(1): 1842660, 2021 Dec.
Article in English | MEDLINE | ID: mdl-33121393

ABSTRACT

SUBSTANCE: We reviewed the effect of a hybrid remediation model combining co-regulated learning and deliberate practice on the future exam performance of pre-clerkship medical students who had been unsuccessful on a previous clinical skills exam. With this remediation model, we aimed to strengthen students' self-regulated learning to improve future exam performance and support sustained and improved learning. Educational problem addressed: Observing that some students who initially performed well after remediation with deliberate practice still struggled on future exams, we sought a method that could improve both short- and long-term clinical skills learning with sustained performance. Intervention outcome: Comparing the remediated students' exam scores pre- and post-coaching to their cohort's performance, we observed that the majority of students performed above their cohort's exam average after remediation. Lessons learned: Combining learning models resulted in improved learning outcomes.


Subject(s)
Clinical Clerkship, Clinical Competence, Educational Measurement, Mentoring, Achievement, Clinical Clerkship/methods, Humans, Learning, Models, Educational, Students, Medical
20.
Pharmacy (Basel) ; 8(3)2020 Sep 10.
Article in English | MEDLINE | ID: mdl-32927674

ABSTRACT

Underperforming students are often unaware of deficiencies requiring improvement until after poor performance on summative exams. The goal of the current study was to determine whether individual end-of-class formative quizzes, comprising higher-level Bloom's questions, could encourage students to reflect on and address deficiencies and improve academic performance. Ninety-seven of 123 first-year pharmacy students (79%) enrolled in a Biochemistry and Cell & Molecular Biology course participated in a single-blinded, randomized, controlled, crossover study. Paired t-test analyses demonstrated that implementation of individual end-of-class formative quizzes resulted in significantly higher summative exam scores for below-average students (p = 0.029). Notably, inclusion of quizzes significantly improved these students' performance on higher-level Bloom's questions (p = 0.006). Analysis of surveys completed by students prior to the summative exam indicates that the formative end-of-class quizzes helped students identify deficiencies (89%) and made them feel compelled to study more (83%) and attend review sessions (61%). Many students indicated that quizzes increased their stress levels (45%). Our collective data indicate that quizzes can improve summative exam performance for below-average first-year pharmacy students and improve self-reflection and student motivation to study. However, the impact on student stress levels should be considered.
