1 - 8 of 8
1.
BMC Med Educ ; 24(1): 3, 2024 Jan 03.
Article En | MEDLINE | ID: mdl-38172823

BACKGROUND: All healthcare professional education programmes must adopt a systematic approach towards ensuring graduates achieve the competencies required to be an evidence-based practitioner. While a list of competencies for evidence-based practice exists, health care educators continue to struggle with effectively integrating the necessary competencies into existing curricula. The purpose of this project was to develop an open-access, cross-discipline learning outcomes framework to support educators in integrating the teaching, learning and assessment required to ensure all graduates of health care professional programmes can achieve the necessary evidence-based practice competencies. METHODS: An interdisciplinary team of health care professional educators and a librarian completed a review of the health professions literature on the teaching and assessment of evidence-based practice. The literature, coupled with the team's collective experience in evidence-based education and research, was used to identify relevant teaching, learning and evidence-based competency frameworks to inform the project design. The guide and toolkit for experience-based co-design developed by the National Health Service Institute for Innovation and Improvement was adopted for this study (Institute for Innovation and Improvement. Experience Based Design: Guide & Tools. Leeds: NHS; 2009). A four-step approach involving three online participatory co-design workshops and a national validation workshop was designed. Students (n = 33), faculty (n = 12), and clinical educators (n = 15) participated in formulating and mapping learning outcomes to evidence-based competencies. RESULTS: Through a rigorous, systematic co-design process, the Evidence-Based Education Collaborative (EVIBEC) Learning Outcomes Framework was developed. The framework consists of a series of student-centred learning outcomes, aligned to evidence-based practice competencies, classified according to the 5 As of EBP and mapped to the cognitive levels of Bloom's taxonomy. Associated learning activities are suggested for each step of EBP. CONCLUSIONS: A consensus-based, student-centred learning outcomes framework aligned to a contemporary set of EBP core competencies has been developed. The freely accessible EVIBEC framework may support entry-level health care professional EBP education by informing EBP curriculum development and offering the potential for interdisciplinary approaches to, and sharing of, valuable teaching and learning resources. Co-design proved an effective method for creating and refining this framework.


Curriculum , State Medicine , Humans , Learning , Evidence-Based Practice , Health Personnel
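The framework described above tags each student-centred learning outcome with one of the 5 As of EBP (Ask, Acquire, Appraise, Apply, Assess) and a cognitive level from Bloom's taxonomy. As a minimal sketch of that structure, the following encodes one such mapping; the example outcome, field names, and level lists are illustrative, not taken from the published framework.

```python
# Illustrative sketch of an EVIBEC-style learning outcome record: each
# outcome is tagged with an EBP step (one of the 5 As) and a Bloom's
# cognitive level. The example content is hypothetical.
from dataclasses import dataclass

EBP_STEPS = ("Ask", "Acquire", "Appraise", "Apply", "Assess")
BLOOM_LEVELS = ("Remember", "Understand", "Apply", "Analyze", "Evaluate", "Create")

@dataclass
class LearningOutcome:
    text: str          # student-centred outcome statement
    ebp_step: str      # one of the 5 As of EBP
    bloom_level: str   # cognitive level from Bloom's taxonomy

    def __post_init__(self):
        assert self.ebp_step in EBP_STEPS
        assert self.bloom_level in BLOOM_LEVELS

# Hypothetical example entry:
outcome = LearningOutcome(
    text="Formulate an answerable clinical question using the PICO format",
    ebp_step="Ask",
    bloom_level="Apply",
)
```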
2.
Vet Surg ; 51(5): 788-800, 2022 Jul.
Article En | MEDLINE | ID: mdl-35261056

OBJECTIVE: To gather and evaluate validity evidence, in the form of content and reliability of scores, for 2 surgical skills assessment instruments: 1) a checklist and 2) a modified form of the Objective Structured Assessment of Technical Skills (OSATS) global rating scale (GRS). STUDY DESIGN: Prospective randomized blinded study. SAMPLE POPULATION: Veterinary surgical skills educators (n = 10) evaluated content validity. Scores from students in their third preclinical year of veterinary school (n = 16) were used to assess reliability. METHODS: Content validity was assessed using Lawshe's method to calculate the Content Validity Index (CVI) for the checklist and the modified OSATS GRS. The importance and relevance of each item were determined in relation to the skills needed to successfully perform supervised surgical procedures. The reliability of scores produced by both instruments was determined using generalizability (G) theory. RESULTS: Based on the results of the content validation, 39 of 40 checklist items were included. The 39-item checklist CVI was 0.81. One of the 6 OSATS GRS items was included. The 1-item GRS CVI was 0.80. The G-coefficients for the 40-item checklist and the 6-item GRS were 0.85 and 0.79, respectively. CONCLUSION: Content validity was very good for the 39-item checklist and good for the 1-item OSATS GRS. The reliability of scores from both instruments was acceptable for a moderate-stakes examination. IMPACT: These results support the use of the checklist described and a modified 1-item OSATS GRS in moderate-stakes examinations when evaluating preclinical third-year veterinary students' technical surgical skills on low-fidelity models.


Clinical Competence , Internship and Residency , Animals , Checklist , Humans , Prospective Studies , Reproducibility of Results , Students
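Lawshe's method, used above for content validation, computes a content validity ratio (CVR) for each item from the number of panelists who rate it essential, CVR = (n_e - N/2) / (N/2), and a CVI as the mean CVR across retained items. A minimal sketch follows; the per-item counts are hypothetical, not the study's data.

```python
# Minimal sketch of Lawshe's content validity calculation.
# essential_counts[i] = number of panelists (of N) who rated item i "essential".
# The counts below are hypothetical, not the study's data.

def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    """Lawshe's CVR = (n_e - N/2) / (N/2); ranges from -1 to +1."""
    half = n_panelists / 2
    return (n_essential - half) / half

N = 10  # panel size, matching the study's group of surgical skills educators
essential_counts = [10, 9, 9, 8, 10, 7]  # hypothetical per-item counts

cvrs = [content_validity_ratio(c, N) for c in essential_counts]
retained = [cvr for cvr in cvrs if cvr >= 0.62]  # common critical value for N = 10
cvi = sum(retained) / len(retained)              # CVI = mean CVR of retained items
print(f"CVRs: {cvrs}\nCVI over retained items: {cvi:.2f}")
```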
3.
J Vet Med Educ ; 48(4): 485-491, 2021 Aug.
Article En | MEDLINE | ID: mdl-32758091

The Objective Structured Clinical Examination (OSCE) is a valid, reliable assessment of veterinary students' clinical skills that requires significant examiner training and scoring time. This article investigates the utility of video recording by scoring OSCEs in real time with live examiners and, afterwards, with video examiners from within and outside the learners' home institution. Using checklists, learners (n = 33) were assessed by one live examiner and five video examiners on three OSCE stations: suturing, arthrocentesis, and thoracocentesis. When stations were considered collectively, there was no difference in pass/fail outcomes between live and video examiners (χ2 = 0.37, p = .55). However, when stations were considered individually, station (χ2 = 16.64, p < .001) and the interaction between station and examiner type (χ2 = 7.13, p = .03) had significant effects on pass/fail outcome. Specifically, learners assessed on suturing by a video examiner had increased odds of passing that station compared with the arthrocentesis or thoracocentesis stations. Internal consistency was fair to moderate (0.34-0.45). Inter-rater reliability measures varied but were mostly moderate to strong (0.56-0.82). Video examiners spent longer assessing learners than live examiners (mean of 21 min/learner vs. 13 min/learner). Station-specific differences among video examiners may be due to intermittent visibility issues during video capture. Overall, video recording of learner performances appears reliable and feasible, although time, cost, and technical issues may limit its routine use.


Education, Veterinary , Educational Measurement , Animals , Clinical Competence , Feasibility Studies , Reproducibility of Results , Video Recording
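The pass/fail comparisons above are chi-square tests on contingency tables of outcome by examiner type. A minimal sketch with scipy, using hypothetical counts (the abstract reports only the test statistics, not the raw tallies):

```python
# Chi-square test of pass/fail outcome by examiner type (live vs. video).
# The counts below are hypothetical; the abstract reports only statistics
# such as chi2 = 0.37, p = .55 for stations pooled, not the raw tallies.
import numpy as np
from scipy.stats import chi2_contingency

#                 pass  fail
table = np.array([[28,    5],    # live examiner (1 rater x 33 learners)
                  [135,  30]])   # video examiners (5 raters x 33 learners)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
```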
5.
Vet Surg ; 48(6): 966-974, 2019 Aug.
Article En | MEDLINE | ID: mdl-31069811

OBJECTIVE: To compare a low-fidelity foam and fabric (FF) model with a high-fidelity silicone (SI) model for teaching canine celiotomy closure. STUDY DESIGN: Prospective blinded comparison of learning outcomes. SAMPLE POPULATION: Second-year veterinary students who had never performed surgery as a primary surgeon (n = 46) and veterinarians experienced in performing canine celiotomy (n = 10). METHODS: Veterinary students performed a digitally recorded celiotomy closure on a canine cadaver before and after participating in 4 facilitated laboratory training sessions on their randomly assigned model. Recordings were scored by masked, trained educators using an 8-item task-specific rubric. Students completed surveys evaluating the models. Experienced veterinarians tested the models and provided feedback on their features. RESULTS: Completed pretest and posttest recordings were available for 38 of 46 students. Students' performance improved regardless of the model used for practice (P = .04). The magnitude of improvement did not differ between the 2 groups (P = .10). All students (n = 46) described their models favorably. Ninety percent of veterinarians thought both models were helpful for training students and gave similar ratings on all measures except realism, which was rated higher for the SI model's skin (median: agree) than for the FF model (median: neutral; P = .02). CONCLUSION: Model-based training was effective at improving students' surgical skills. Less experienced learners achieved similar skill gains after practicing with FF or SI models. CLINICAL SIGNIFICANCE: The surgical skills required to perform celiotomy closure in companion animals are acquired similarly well on models made of foam and fabric or of silicone, providing flexibility in model selection.


Clinical Competence , Dogs/surgery , Education, Veterinary , Laparotomy/veterinary , Models, Anatomic , Suture Techniques/education , Animals , Cadaver , Humans , Prospective Studies , Students , Surveys and Questionnaires
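The two key comparisons above (overall pre-to-post improvement, and the magnitude of improvement between model groups) can be sketched as a paired test on pre/post rubric scores and an unpaired test on gain scores. The specific tests below (Wilcoxon signed-rank and Mann-Whitney U) are assumptions, as the abstract does not name the tests used, and the data are simulated.

```python
# Sketch of the study's two comparisons on 8-item rubric scores:
# (1) did scores improve from pretest to posttest overall, and
# (2) did the size of the improvement differ between the FF and SI groups?
# Data are simulated; the choice of tests is an assumption (the abstract
# reports only P values, not which tests were used).
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu

rng = np.random.default_rng(0)
pre = rng.normal(20, 3, size=38)           # hypothetical pretest rubric scores
post = pre + rng.normal(2.5, 2, size=38)   # hypothetical posttest scores
group = np.array([0] * 19 + [1] * 19)      # 0 = FF model, 1 = SI model

# (1) Overall improvement: paired comparison of pre vs. post.
stat, p_improve = wilcoxon(post, pre)
# (2) Magnitude of improvement between groups: unpaired comparison of gains.
gain = post - pre
stat2, p_group = mannwhitneyu(gain[group == 0], gain[group == 1])
print(f"improvement p = {p_improve:.3f}, between-group p = {p_group:.3f}")
```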
6.
Vet Surg ; 47(3): 378-384, 2018 Apr.
Article En | MEDLINE | ID: mdl-29380866

OBJECTIVE: To evaluate a method of assessing veterinary students' surgical skills that is based on digital recording of their performance during closure of a celiotomy in canine cadavers. SAMPLE POPULATION: Second-year veterinary students with no prior experience of live-animal or simulated surgical procedures (n = 19). METHODS: Each student completed a 3-layer closure of a celiotomy on a canine cadaver. Each procedure was digitally recorded with a single small wide-angle camera mounted to the overhead surgical light. Performance was scored by 2 of 5 trained raters who were unaware of the students' identities. Scores were based on an 8-item rubric created to evaluate the surgical skills required to close a celiotomy. The reliability of scores was tested with Cronbach's α, intraclass correlation, and a generalizability study. RESULTS: The internal consistency of the grading rubric, as measured by α, was .76. Interrater reliability, as measured by intraclass correlation, was 0.64. The generalizability coefficient was 0.56. CONCLUSION: Reliability measures of 0.60 and above have been suggested as adequate for assessing low-stakes skills. The task-specific grading rubric used in this study to evaluate veterinary surgical skills, captured by a single wide-angle camera mounted to an overhead surgical light, produced scores with acceptable internal consistency, substantial interrater reliability, and marginal generalizability. IMPACT: Evaluating veterinary students' surgical skills from digital recordings with a validated rubric improves flexibility when designing accurate assessments.


Clinical Competence , Dog Diseases/surgery , Laparotomy/veterinary , Surgery, Veterinary/education , Veterinary Medicine , Animals , Cadaver , Dogs , Education, Veterinary/standards , Educational Measurement/methods , Female , Humans , Laparotomy/education , Male , Photography , Reproducibility of Results , Students , Surgery, Veterinary/standards
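Cronbach's α, reported above as .76, has a closed form computed from the item-score matrix: α = (k / (k - 1)) * (1 - sum of item variances / variance of total scores). A minimal sketch with simulated rubric scores:

```python
# Cronbach's alpha for an examinees x items score matrix:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
# Scores below are simulated, not the study's data.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: shape (n_examinees, n_items)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(0, 1, size=(19, 1))             # 19 students, as in the study
scores = ability + rng.normal(0, 0.8, size=(19, 8))  # 8 rubric items
print(f"alpha = {cronbach_alpha(scores):.2f}")
```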
7.
J Vet Med Educ ; 45(1): 108-118, 2018.
Article En | MEDLINE | ID: mdl-28813173

Creating effective learning experiences for veterinary clinical skills and procedures can be a challenging task. Drawing from both medical and veterinary education literature and personal experiences as practicing veterinarians and educators, the authors share nine key steps that describe the development process of a pre-clinical veterinary clinical skills teaching session. Relevant research and pedagogical principles supporting the effectiveness of the proposed nine-step process were identified and discussed. The aims of this article were to describe the development of a dermatology techniques teaching session and to provide the reader with a structured approach that can be used as a template to design or audit other clinical skills teaching sessions.


Clinical Competence , Dermatology/education , Education, Veterinary , Skin Diseases/veterinary , Animals , Curriculum , Diagnostic Techniques and Procedures/veterinary , Humans
8.
J Vet Med Educ ; 43(2): 190-213, 2016.
Article En | MEDLINE | ID: mdl-27111005

This paper describes the development and evaluation of training intended to enhance students' performance on their first live-animal ovariohysterectomy (OVH). Cognitive task analysis informed a seven-page lab manual, a 30-minute video, and a 46-item OVH checklist (categorized into nine surgery components and three phases of surgery). We compared two spay simulator models (higher-fidelity silicone versus lower-fidelity cloth and foam). Third-year veterinary students were randomly assigned to one of three training interventions: lab manual and video only; lab manual, video, and a $675 silicone-based model; or lab manual, video, and a $64 cloth and foam model. We then assessed transfer of training to a live-animal OVH. Chi-square analyses identified statistically significant differences between the interventions on four of nine surgery components, all three phases of surgery, and overall score. Odds ratio analyses indicated that training with a spay model improved the odds of attaining an excellent or good rating on 25 of 46 checklist items, six of nine surgery components, all three phases of surgery, and the overall score. Odds ratio analyses comparing the spay models indicated an advantage for the $675 silicone-based model on only 6 of 46 checklist items, three of nine surgery components, and one phase of surgery. Training with a spay model improved performance compared with training with a manual and video only. Results suggest that a lower-fidelity, lower-cost model may be as effective as a higher-fidelity, higher-cost model. Further research is required to investigate the effects of simulator fidelity and cost on transfer of training to the operational environment.


Clinical Competence , Education, Veterinary , Hysterectomy/veterinary , Ovariectomy/veterinary , Adult , Alberta , Animals , Dogs , Female , Humans , Hysterectomy/education , Ovariectomy/education , Perception , Pilot Projects , Students , Surveys and Questionnaires , Young Adult
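The odds ratio analyses above compare the odds of an excellent or good rating between training conditions from a 2x2 table, OR = (a*d) / (b*c), with a 95% confidence interval derived from the standard error of ln(OR). A minimal sketch with hypothetical counts:

```python
# Odds ratio with a 95% confidence interval from a 2x2 table:
#                     excellent/good   fair/poor
#   spay model              a              b
#   manual/video only       c              d
# OR = (a*d)/(b*c); the CI uses the standard error of ln(OR).
# Counts are hypothetical, not the study's data.
import math

a, b, c, d = 24, 6, 12, 18  # hypothetical ratings for one checklist item

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```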
...