Results 1 - 3 of 3
1.
MedEdPublish (2016) ; 13: 39, 2023.
Article in English | MEDLINE | ID: mdl-38813067

ABSTRACT

Background: Controversy remains about whether e-learning can improve clinical competences. Our study aimed to compare the effects of e-learning versus traditional education on medical students' reasoning and their application of knowledge to clinical competences, to assess factors associated with e-learning that might influence exam scores, and to evaluate medical students' satisfaction with the two learning methods.
Methods: Prospective study of 299 medical students in two fourth-year pediatric clerkship cohorts (2016-17 and 2017-18) in Switzerland.
Results: We found no evidence of a difference in students' reasoning, or in how they applied their knowledge to competences in clinical case resolution, between students who had followed e-learning modules and those who had attended traditional lectures. The number of quizzes taken and being female were factors associated with better scores. Although overall satisfaction with the two learning methods was similar, students reported that they learned more with e-learning than with traditional lectures and that e-learning explained learning objectives better.
Conclusions: E-learning could be used as a supplement or an alternative to traditional face-to-face medical teaching without compromising teaching quality. E-learning modules should be better integrated into medical students' curricula, while avoiding curriculum overload, especially in the event of another COVID-like disruption.

2.
BMC Med Educ ; 20(1): 46, 2020 Feb 11.
Article in English | MEDLINE | ID: mdl-32046697

ABSTRACT

BACKGROUND: The Objective Structured Clinical Examination (OSCE) has been used in pediatrics since the 1980s. Its main drawback is that large numbers of children are needed to compensate for the fatigue inherent in prolonged testing periods, and examinations mainly include children between 7 and 16 years old. We describe the summative examination used in our institution to evaluate medical students' clinical competencies in pediatrics with realistically available resources and across a wider age range. We also evaluated different factors known to influence medical students' performance.
METHODS: This retrospective, descriptive, observational study evaluated the 740 distinct pediatric examination results of fourth-year medical students over 5 years. The summative examination combined two different assessment methods: a structured real-patient examination (SRPE) using standardized assessment grids for the most frequent pediatric diagnoses, and a computer-based written examination (CBWE).
RESULTS: Our approach defined an appropriate setting for key elements of the educational objectives of pediatrics training, such as balancing the child-parent-pediatrician triangle and the ability to interact with pediatric patients, from newborns to 16-year-old adolescents, in a child-friendly fashion in realistic scenarios. SRPE scores showed no associations with students' degree of exposure to specific lecture topics, vignettes, or bedside teaching. The impact of clinical setting, topic, and individual examiner on SRPE scores was limited: setting explained 1.6%, topic 4.5%, and examiner 4.7% of the overall variability in SRPE scores.
CONCLUSIONS: By combining two different assessment methods, we were able to provide a best-practice approach for assessing clinical skills in pediatrics over a wide range of real patients.


Subject(s)
Clinical Competence; Education, Medical, Undergraduate; Pediatrics/education; Clinical Clerkship; Humans; Physical Examination; Practice Guidelines as Topic; Retrospective Studies
3.
BMC Med Educ ; 19(1): 219, 2019 Jun 18.
Article in English | MEDLINE | ID: mdl-31215430

ABSTRACT

BACKGROUND: Little is known about the psychometric properties of computerized long-menu formats compared with classic formats. We compared single-best-answer (Type A) and long-menu formats using identical question stems during the computer-based, summative, intermediate clinical-clerkship exams for nine disciplines.
METHODS: In this randomised sequential trial, we assigned the examinees for every summative exam to either the Type A or the long-menu format (four different experimental questions, otherwise identical). The primary outcome was discriminatory power. The study was carried out at the Faculty of Medicine, University of Geneva, Switzerland, and included all students enrolled for the exams that were part of the study. Examinees were surveyed about the long-menu format at the end of the trial.
RESULTS: The trial was stopped for futility (p = 0.7948) after 22 exams including 88 experimental items. The long-menu format had similar discriminatory power but was more difficult than the Type A format (71.45% vs 77.80%; p = 0.0001). Over half (54.4%) of the options chosen by examinees in the long-menu format had not been proposed as distractors in the Type A format. Most examinees agreed that their reasoning strategy was different.
CONCLUSIONS: In a non-selected population of examinees taking summative exams, long-menu questions have the same discriminatory power as classic Type A questions but are slightly more difficult. They are perceived to be closer to real practice, which could have a positive educational impact. We recommend their use in the final years of the curriculum, within realistic key-feature problems, to assess clinical reasoning and patient-management skills.


Subject(s)
Choice Behavior; Clinical Clerkship/statistics & numerical data; Computers; Education, Medical, Undergraduate/statistics & numerical data; Educational Measurement/methods; Students, Medical; Humans; Program Evaluation; Prospective Studies; Psychometrics; Reproducibility of Results; Students, Medical/psychology; Students, Medical/statistics & numerical data; Switzerland