Results 1 - 10 of 10
1.
Med Teach ; 40(3): 267-274, 2018 03.
Article in English | MEDLINE | ID: mdl-29172940

ABSTRACT

CONTEXT: Creating a new testing program requires the development of a test blueprint that will determine how the items on each test form are distributed across possible content areas and practice domains. To achieve validity, categories of a blueprint are typically based on the judgments of content experts. How experts' judgments are elicited and combined is important to the quality of the resulting test blueprints. METHODS: Content experts in dentistry participated in a day-long faculty-wide workshop to discuss, refine, and confirm the categories and their relative weights. After reaching agreement on categories and their definitions, experts judged the relative importance between category pairs, registering their judgments anonymously using iClicker, an audience response system. Judgments were combined in two ways: a simple calculation that could be performed during the workshop and a multidimensional scaling of the judgments performed later. RESULTS: Content experts were able to produce a set of relative weights using this approach. The multidimensional scaling yielded a three-dimensional model with the potential to provide deeper insights into the basis of the experts' judgments. CONCLUSION: The approach developed and demonstrated in this study can be applied across academic disciplines to elicit and combine content experts' judgments for the development of test blueprints.
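The abstract does not spell out the "simple calculation" used during the workshop. One common way to combine pairwise relative-importance judgments into normalized blueprint weights is the row-geometric-mean method sketched below; the three-category judgment matrix is hypothetical, not taken from the paper:

```python
import math

def weights_from_pairwise(matrix):
    """Turn a matrix of pairwise relative-importance judgments into
    normalized category weights using row geometric means.

    matrix[i][j] holds how much more important category i is judged
    to be than category j (so matrix[j][i] is its reciprocal).
    """
    n = len(matrix)
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Hypothetical judgments for three blueprint categories.
judgments = [
    [1.0, 2.0, 4.0],
    [0.5, 1.0, 2.0],
    [0.25, 0.5, 1.0],
]
weights = weights_from_pairwise(judgments)
```

The geometric mean is preferred over the arithmetic mean here because it preserves reciprocal symmetry: swapping any pair of categories simply swaps their weights.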


Subject(s)
Education, Dental , Education, Medical, Undergraduate , Educational Measurement , Clinical Competence/standards , Educational Measurement/methods , Humans , Interviews as Topic , Qualitative Research
2.
Med Educ ; 49(8): 815-27, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26152493

ABSTRACT

CONTEXT: Interest in applying cognitive load theory in health care simulation is growing. This line of inquiry requires measures that are sensitive to changes in cognitive load arising from different instructional designs. Recently, mental effort ratings and secondary task performance have shown promise as measures of cognitive load in health care simulation. OBJECTIVES: We investigate the sensitivity of these measures to predicted differences in intrinsic load arising from variations in task complexity and learner expertise during simulation-based surgical skills training. METHODS: We randomly assigned 28 novice medical students to simulation training on a simple or complex surgical knot-tying task. Participants completed 13 practice trials, interspersed with computer-based video instruction. On trials 1, 5, 9 and 13, knot-tying performance was assessed using time and movement efficiency measures, and cognitive load was assessed using subjective rating of mental effort (SRME) and simple reaction time (SRT) on a vibrotactile stimulus-monitoring secondary task. RESULTS: Significant improvements in knot-tying performance (F(1.04,24.95) = 41.1, p < 0.001 for movements; F(1.04,25.90) = 49.9, p < 0.001 for time) and reduced cognitive load (F(2.3,58.5) = 57.7, p < 0.001 for SRME; F(1.8,47.3) = 10.5, p < 0.001 for SRT) were observed in both groups during training. The simple-task group demonstrated superior knot tying (F(1,24) = 5.2, p = 0.031 for movements; F(1,24) = 6.5, p = 0.017 for time) and a faster decline in SRME over the first five trials (F(1,26) = 6.45, p = 0.017) compared with their peers. Although SRT followed a similar pattern, group differences were not statistically significant. CONCLUSIONS: Both secondary task performance and mental effort ratings are sensitive to changes in intrinsic load among novices engaged in simulation-based learning. These measures can be used to track cognitive load during skills training. Mental effort ratings are also sensitive to small differences in intrinsic load arising from variations in the physical complexity of a simulation task. The complementary nature of these subjective and objective measures suggests their combined use is advantageous in simulation instructional design research.


Subject(s)
Cognition , General Surgery/education , Simulation Training/methods , Task Performance and Analysis , Adult , Computer-Assisted Instruction , Female , Humans , Male , Ontario , Prospective Studies , Psychological Theory , Students, Medical
3.
BMC Public Health ; 14: 331, 2014 Apr 08.
Article in English | MEDLINE | ID: mdl-24712314

ABSTRACT

BACKGROUND: Few researchers have the data required to adequately understand how the school environment impacts youth health behaviour development over time. METHODS/DESIGN: COMPASS is a prospective cohort study designed to annually collect hierarchical longitudinal data from a sample of 90 secondary schools and the 50,000+ grade 9 to 12 students attending those schools. COMPASS uses a rigorous quasi-experimental design to evaluate how changes in school programs, policies, and/or built environment (BE) characteristics are related to changes in multiple youth health behaviours and outcomes over time. These data will allow for the quasi-experimental evaluation of natural experiments that will occur within schools over the course of COMPASS, providing a means for generating "practice based evidence" in school-based prevention programming. DISCUSSION: COMPASS is the first study with the infrastructure to robustly evaluate the impact that changes in multiple school-level programs, policies, and BE characteristics within or surrounding a school might have on multiple youth health behaviours or outcomes over time. COMPASS will provide valuable new insight for planning, tailoring and targeting of school-based prevention initiatives where they are most likely to have impact.


Subject(s)
Environment Design , Health Behavior , Policy , School Health Services , Adolescent , Canada , Cohort Studies , Humans , Schools , Students/psychology
4.
Med Educ ; 52(10): 1003-1004, 2018 10.
Article in English | MEDLINE | ID: mdl-29700841
5.
J Atten Disord ; 24(9): 1355-1365, 2020 07.
Article in English | MEDLINE | ID: mdl-28006996

ABSTRACT

Objective: This study used latent class analysis to identify patterns of co-occurrence among common childhood difficulties (inattention/hyperactivity, internalizing, externalizing, peer problems, and reading difficulties). Method: Parents and teachers of 501 children ages 6 to 9 provided mental health and social ratings, and children completed a reading task. Results: Four latent classes were identified in the analysis of parent ratings and reading: one with inattention/hyperactivity, externalizing, peer problems, and internalizing difficulties; one with inattention/hyperactivity and reading difficulties; one with internalizing and peer problems; and one normative class. The analysis of teacher ratings and reading also identified four latent classes: one with inattention/hyperactivity and externalizing, one with inattention/hyperactivity and reading difficulties, one with internalizing problems, and one normative class. Children in latent classes characterized by one or more difficulties were more impaired than children in the normative latent class 1 year later. Conclusion: The results highlight the need for multifaceted interventions.
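Latent class analysis assigns each child a posterior probability of belonging to each unobserved class, typically fit with an EM loop. The minimal sketch below handles binary indicators in a two-class toy setting; it illustrates the technique only and is not the published five-indicator analysis:

```python
import random

def lca_em(data, n_classes=2, n_iter=200, seed=0, eps=1e-6):
    """Minimal EM fit of a latent class model for binary items.
    Returns class proportions and per-class item endorsement
    probabilities. Illustrative sketch, not the published analysis."""
    rng = random.Random(seed)
    n, m = len(data), len(data[0])
    pi = [1.0 / n_classes] * n_classes
    p = [[rng.uniform(0.25, 0.75) for _ in range(m)]
         for _ in range(n_classes)]
    for _ in range(n_iter):
        # E-step: posterior probability of each class for each case.
        resp = []
        for x in data:
            lk = []
            for c in range(n_classes):
                like = pi[c]
                for j in range(m):
                    like *= p[c][j] if x[j] else 1.0 - p[c][j]
                lk.append(like)
            s = sum(lk)
            resp.append([like / s for like in lk])
        # M-step: re-estimate class sizes and item probabilities,
        # clamping away from 0/1 to keep likelihoods well-defined.
        for c in range(n_classes):
            nc = sum(r[c] for r in resp)
            pi[c] = nc / n
            for j in range(m):
                pj = sum(r[c] * x[j] for r, x in zip(resp, data)) / nc
                p[c][j] = min(max(pj, eps), 1.0 - eps)
    return pi, p
```

In practice, the number of classes is chosen by refitting with 1, 2, 3, ... classes and comparing fit indices such as BIC.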


Subject(s)
Attention Deficit Disorder with Hyperactivity , Attention Deficit Disorder with Hyperactivity/diagnosis , Attention Deficit Disorder with Hyperactivity/epidemiology , Child , Humans , Parents , Schools
6.
J Dent Educ ; 82(6): 565-574, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29858252

ABSTRACT

Progress testing is an innovative formative assessment practice that has been found successful in many educational programs. In progress testing, one exam is given to students at regular intervals as they progress through a curriculum, allowing them to benchmark their increase in knowledge over time. The aim of this study was to assess the first two years of results of a progress testing system implemented in a Canadian dental school. This was the first time a dental school in North America had introduced progress testing. Each test form contains 200 multiple-choice questions (MCQs) to assess the cognitive knowledge base that a competent dentist should have by the end of the program. All dental students are required to complete the test in three hours. In the first three administrations, three test forms with 86 common items were administered to all DMD students. A total of 383 MCQs spanning nine domains of cognitive knowledge in dentistry were distributed among these three test forms. Each student received a test form different from the previous one in the subsequent two semesters. In the fourth administration, 299 new questions were introduced to create two test forms sharing 101 questions. Each administration occurred at the beginning of a semester. All students received individualized reports comparing their performance with their class median in each of the domains. Aggregated results from each administration were provided to the faculty. Based on analysis of students' responses to the common items in the first two administrations, progression in all domains was observed. Comparing equated results across the four administrations also showed progress. This experience suggests that introducing a progress testing assessment system for competency-based dental education has many merits. Challenges and lessons learned with this assessment are discussed.
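Comparing results across administrations relies on the common items to place different test forms on one scale. The abstract does not state which equating method was used; one standard option is mean-sigma linear equating, sketched below with made-up scores:

```python
import statistics

def mean_sigma_equate(new_common, ref_common, new_form_scores):
    """Linearly map scores on a new form onto the reference form's
    scale using the means and SDs of examinees' scores on the
    common items (mean-sigma equating; the school's actual method
    is not stated in the abstract)."""
    a = statistics.stdev(ref_common) / statistics.stdev(new_common)
    b = statistics.mean(ref_common) - a * statistics.mean(new_common)
    return [a * x + b for x in new_form_scores]

# Hypothetical common-item score distributions from two cohorts.
equated = mean_sigma_equate([1, 2, 3], [2, 4, 6], [5, 10])
```

After equating, a rising score trend across administrations reflects genuine knowledge growth rather than differences in form difficulty.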


Subject(s)
Clinical Competence , Competency-Based Education , Education, Dental , Schools, Dental , Canada , Humans , Models, Educational , Surveys and Questionnaires
7.
Acad Med ; 78(10 Suppl): S65-7, 2003 Oct.
Article in English | MEDLINE | ID: mdl-14557099

ABSTRACT

PURPOSE: This study investigates (a) whether items within the Multiple-Choice Questions component of the Medical Council of Canada's Qualifying Examination Part I exhibit local dependencies and (b) potential sources of such dependencies. METHOD: The dimensionality of each of six discipline-based subtests was assessed based on exploratory nonlinear factor analyses. A standardized Fisher's z statistic was used to test residual item correlations for local item dependencies. The characteristics of pairs of items flagged as possibly locally dependent were reviewed. RESULTS: Some items in the Pediatrics and Public Medicine/Community Health subtests are locally dependent; these tend to be the more difficult items on the subtests. DISCUSSION: While these results are encouraging, the possible causes and potential impacts of any local dependencies should be investigated further.
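The flagging rule described in the abstract compares each residual item correlation to its model-expected value with a standardized Fisher z statistic. A minimal sketch of that statistic (the expected value and sample size here are illustrative):

```python
import math

def fisher_z(r_obs, r_expected, n):
    """Standardized Fisher z statistic for testing whether an
    observed residual item correlation differs from its expected
    value; large |z| flags a potentially locally dependent item
    pair. n is the number of examinees."""
    se = 1.0 / math.sqrt(n - 3)
    return (math.atanh(r_obs) - math.atanh(r_expected)) / se
```

With, say, 103 examinees and an expected residual correlation of zero, an observed correlation of 0.3 gives z of about 3.1, well beyond a conventional two-sided critical value of 1.96.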


Subject(s)
Educational Measurement/statistics & numerical data , Licensure, Medical , Canada , Clinical Competence , Humans , Psychometrics , Students, Medical
8.
Acad Med ; 78(10 Suppl): S62-4, 2003 Oct.
Article in English | MEDLINE | ID: mdl-14557098

ABSTRACT

PROBLEM STATEMENT AND BACKGROUND: Examinees can make three types of errors on the short-menu questions in the Clinical Reasoning Skills component of the Medical Council of Canada's Qualifying Examination Part I: (1) failing to select any correct responses, (2) selecting too many responses, or (3) selecting a response that is inappropriate or harmful to the patient. This study compared the information provided by equal and differential weighting of these errors. METHOD: The item response theory nominal model was applied to fit examinees' response patterns on the 1998 test. RESULTS: Differential error weighting resulted in improved model fit and increased test information for examinees in the lower half of the achievement continuum. CONCLUSION: Differential error weighting appears promising. The pass score is near the lower end of the achievement continuum; therefore, this approach may improve the accuracy of pass-fail decisions.
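The paper estimated differential weights via the IRT nominal model; as a simpler illustration of the idea, the sketch below scores a short-menu question with distinct penalties for the three error types. All penalty values are hypothetical:

```python
def score_short_menu(selected, correct, harmful, weights=None):
    """Score one short-menu question on [0, 1], penalizing the three
    error types differentially. Penalty weights are hypothetical;
    the paper derived weights from the IRT nominal model."""
    if weights is None:
        weights = {"miss": 1.0, "over": 0.5, "harm": 2.0}
    penalty = 0.0
    if not selected & correct:
        penalty += weights["miss"]   # no correct response selected
    if len(selected) > len(correct):
        penalty += weights["over"]   # too many responses selected
    if selected & harmful:
        penalty += weights["harm"]   # inappropriate/harmful choice
    return max(0.0, 1.0 - penalty)
```

Weighting harmful selections most heavily mirrors the study's motivation: errors that would endanger a patient should cost more than simple over-selection.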


Subject(s)
Clinical Competence/statistics & numerical data , Educational Measurement/statistics & numerical data , Statistics as Topic/methods , Canada , Humans , Students, Medical
9.
J Allied Health ; 38(3): 158-62, 2009.
Article in English | MEDLINE | ID: mdl-19753427

ABSTRACT

Determining admission criteria that will predict successful student outcomes is a challenging undertaking for newly established health professional programs. This study examined data from the students who entered a medical radiation sciences program in September 2002. By analyzing the correlation between undergraduate GPA, grades in undergraduate science courses, performance in program coursework, and post-graduation certification examination results, the authors determined admission criteria that were linked to successful student outcomes for radiological technology and radiation therapy students.


Subject(s)
Nuclear Medicine/education , School Admission Criteria , Students, Health Occupations , Technology, Radiologic/education , Education, Professional/organization & administration , Education, Professional/standards , Educational Measurement , Humans , Ontario
10.
J Pers Assess ; 78(1): 130-44, 2002 Feb.
Article in English | MEDLINE | ID: mdl-11936205

ABSTRACT

All clinical psychology doctoral programs accredited by the American Psychological Association provide training in psychological assessment. However, what the programs teach and how they teach it vary widely. So, also, do beliefs about what should be taught. In this study, program descriptive materials and course syllabi from 84 programs were analyzed. Findings highlight commonalities in basic course content and supervised practice in administering, scoring, and interpreting assessment instruments as well as differences in coverage of psychometric and other assessment-related topics and in the extent to which lectures, labs, and practica are integrated.


Subject(s)
Education, Graduate , Personality Assessment , Psychology, Clinical/education , Teaching , Curriculum , Humans , Psychometrics