2.
Br J Surg; 97(3): 443-9, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20095020

ABSTRACT

BACKGROUND: Although European Union and North American surgical residency programmes share the same objective (to train competent surgeons), residents' working hours differ. It was hypothesized that practice-ready surgeons trained under longer working hours would perform significantly better than those educated under shorter working-week curricula. METHODS: At each test site, 21 practice-ready candidate surgeons were recruited. Twenty qualified Canadian and 19 qualified Dutch surgeons served as examiners. At both sites, three validated outcome instruments assessing multiple aspects of surgical competency were used. RESULTS: No significant differences were found in performance on the integrative and cognitive examination (Comprehensive Integrative Puzzle) or the technical skills test (Objective Structured Assessment of Technical Skill; OSATS). A significant difference in outcome was observed only on the Patient Assessment and Management Examination, which focuses on the skills needed to manage patients with complex problems (P < 0.001). A significant interaction was observed between examiner and candidate origins for both the task-specific OSATS checklist (P = 0.001) and the OSATS global rating scale (P < 0.001) scores. CONCLUSION: Canadian residents, who serve many more working hours, performed equivalently to Dutch residents on technical skills and cognitive knowledge but outperformed them in patient management skills. Secondary analyses suggested that cultural differences significantly influence the assessment process.
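
A note on the interaction analysis: a crossed examiner-origin by candidate-origin design of this kind is commonly tested with a two-way ANOVA. The following Python sketch shows the shape of such a test on hypothetical ratings (the data, variable names, and group labels are illustrative, not the study's):

# Two-way ANOVA sketch: does the effect of candidate origin on a
# global rating depend on examiner origin? Hypothetical data only.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "score":     [3.8, 4.1, 3.2, 3.5, 4.0, 3.1, 3.6, 3.9],
    "examiner":  ["CA", "CA", "CA", "CA", "NL", "NL", "NL", "NL"],
    "candidate": ["CA", "CA", "NL", "NL", "CA", "CA", "NL", "NL"],
})

model = smf.ols("score ~ C(examiner) * C(candidate)", data=df).fit()
# The C(examiner):C(candidate) row is the interaction of interest.
print(sm.stats.anova_lm(model, typ=2))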


Subject(s)
Clinical Competence/standards; General Surgery/standards; Internship and Residency/standards; Canada; Culture; Humans; Netherlands; Personnel Staffing and Scheduling
3.
Qual Saf Health Care; 15(3): 165-70, 2006 Jun.
Article in English | MEDLINE | ID: mdl-16751464

ABSTRACT

This paper explores the factors that sustain unsafe practice in an interprofessional healthcare team setting, with a view to developing a descriptive theoretical model for analysing problematic practice routines. Using data collected during a mixed-method interview study of 28 members of an operating room team, participants' approaches to unsafe practice were analysed through three theoretical models from organizational and cognitive psychology: Reason's theory of "vulnerable system syndrome", Tucker and Edmondson's concept of first- and second-order problem solving, and Amalberti's model of practice migration. These three approaches provide critical insight into key trends in the interview data, including team members' definition of error as the breaching of standards of practice, nurses' sense of scope of practice as a constraint on their reporting behaviours, and participants' reports of the forces shaping tacit agreements to work around safety regulations. However, the relational factors underlying unsafe practice routines are poorly accounted for in these approaches. Incorporating an additional theoretical construct such as "relational coordination", to account for the emotional, human features of team practice, would provide a more comprehensive theoretical approach for exploring unsafe practice routines and the forces that sustain them in healthcare team settings.


Subject(s)
Anesthesiology/standards; Attitude of Health Personnel; Clinical Competence/standards; General Surgery/standards; Medical Errors/prevention & control; Operating Room Nursing/standards; Operating Rooms/standards; Problem Solving; Safety Management; Systems Analysis; Cognition; Humans; Interprofessional Relations; Interviews as Topic; Learning; Medical Errors/classification; Organizational Culture; Patient Care Team/standards
4.
Qual Saf Health Care; 13(5): 330-4, 2004 Oct.
Article in English | MEDLINE | ID: mdl-15465935

ABSTRACT

BACKGROUND: Ineffective team communication is frequently at the root of medical error. The objective of this study was to describe the characteristics of communication failures in the operating room (OR) and to classify their effects. This study was part of a larger project to develop a team checklist to improve communication in the OR. METHODS: Trained observers recorded 90 hours of observation during 48 surgical procedures. Ninety-four team members participated from anesthesia (16 staff, 6 fellows, 3 residents), surgery (14 staff, 8 fellows, 13 residents, 3 clerks), and nursing (31 staff). Field notes recording procedurally relevant communication events were analysed using a framework that considered the content, audience, purpose, and occasion of a communication exchange. A communication failure was defined as an event that was flawed in one or more of these dimensions. RESULTS: 421 communication events were noted, of which 129 were categorized as communication failures. Failure types included "occasion" (45.7% of instances), where timing was poor; "content" (35.7%), where information was missing or inaccurate; "purpose" (24.0%), where issues were not resolved; and "audience" (20.9%), where key individuals were excluded. 36.4% of failures resulted in visible effects on system processes, including inefficiency, team tension, resource waste, workaround, delay, patient inconvenience, and procedural error. CONCLUSION: Communication failures in the OR exhibited a common set of problems. They occurred in approximately 30% of team exchanges, and a third of these resulted in effects that jeopardized patient safety by increasing cognitive load, interrupting routine, and increasing tension in the OR.
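
As a quick arithmetic check on the conclusion: 129 / 421 ≈ 0.306, so roughly 30% of observed communication events were failures, and 0.364 × 129 ≈ 47 failures, roughly a third of them, had visible effects on system processes.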


Subject(s)
Communication Barriers; Interprofessional Relations; Operating Rooms/standards; Patient Care Team/standards; Surgical Procedures, Operative/standards; Anesthesia Department, Hospital/standards; Humans; Medical Errors/prevention & control; Observation; Problem Solving; Quality Indicators, Health Care; Safety; Sentinel Surveillance; Surgery Department, Hospital/standards; Surgical Procedures, Operative/classification; Systems Analysis; Vascular Surgical Procedures/standards
5.
Surg Endosc; 18(12): 1800-4, 2004 Dec.
Article in English | MEDLINE | ID: mdl-15809794

ABSTRACT

BACKGROUND: Decision making on the competency of surgical trainees to perform laparoscopic procedures has been hampered by the lack of reliable methods to evaluate operative performance. The goal of this study was to develop a feasible and reliable method of evaluation. METHODS: Twenty-nine senior surgical residents were videotaped performing a low anterior resection and a Nissen fundoplication in a pig. Ten blinded laparoscopists rated the videos independently on two scales. Rating time was minimized by allowing raters to fast-forward through the tapes at their discretion. Interrater reliability and the time required to rate a procedure were assessed. RESULTS: Rating time per procedure was a median of 15 min (range, 6-40). The mean interrater reliability for the two scales was 0.74. CONCLUSIONS: The use of videotapes of operations enabled multiple raters to assess a performance reliably and shortened assessment times by 80%. This assessment technique shows potential as a means of evaluating the performance of advanced laparoscopic procedures by surgical trainees.
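
The abstract does not name the reliability coefficient used for the ten raters; with multiple raters scoring the same videotaped procedures, an intraclass correlation (ICC) is a common choice. A minimal Python sketch with the pingouin package, on hypothetical long-format ratings (procedure IDs, rater labels, and scores are invented for illustration):

# ICC sketch: several raters score the same set of procedures.
# Hypothetical data; the paper may have used a different coefficient.
import pandas as pd
import pingouin as pg

ratings = pd.DataFrame({
    "procedure": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "rater":     ["A", "B", "C"] * 4,
    "score":     [3.0, 3.5, 3.2, 4.1, 4.0, 4.4, 2.8, 3.1, 2.9, 3.9, 4.2, 4.0],
})

icc = pg.intraclass_corr(data=ratings, targets="procedure",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC"]])  # e.g. ICC2k: reliability of the mean rating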


Subject(s)
Clinical Competence; Internship and Residency; Laparoscopy/standards; Video Recording; Feasibility Studies; Reproducibility of Results
6.
Acad Med; 76(12): 1241-6, 2001 Dec.
Article in English | MEDLINE | ID: mdl-11739051

ABSTRACT

PURPOSE: To develop a valid and reliable examination to assess family medicine residents' technical proficiency in minor surgical office procedures. METHOD: A multi-station OSCE-style examination using bench-model simulations of minor surgical procedures was developed. Participants were a randomly selected group of 33 family medicine residents (PGY-1 = 16, PGY-2 = 17) and 14 senior surgical residents who served as a validation group. Examiners were qualified surgeons and family physicians who used both checklists and global rating scales to score the participants' performances. RESULTS: When family medicine residents were evaluated by family physicians, interstation reliabilities were .29 for checklists and .42 for global ratings. When family medicine residents were evaluated by surgeons, the reliabilities were .53 for checklists and .75 for global ratings. Interrater reliability, measured as a correlation of total examination scores, was .97. Mean examination scores were 60%, 64%, and 87% for PGY-1 family medicine, PGY-2 family medicine, and surgery residents, respectively. The difference in scores between family medicine and surgery residents was significant (p < .001), providing evidence of construct validity. CONCLUSION: A new examination for assessing family medicine residents' skills with minor surgical office procedures is reliable and shows evidence of construct validity. The examination has low reliability when family physicians serve as examiners, but moderate reliability when surgeons are the evaluators.


Subject(s)
Clinical Competence; Educational Measurement; Family Practice/education; Internship and Residency; Minor Surgical Procedures; Ambulatory Surgical Procedures; Analysis of Variance; Humans; Random Allocation; Reproducibility of Results
8.
Am J Surg; 182(3): 254-6, 2001 Sep.
Article in English | MEDLINE | ID: mdl-11587687

ABSTRACT

BACKGROUND: The Objective Structured Assessment of Technical Skill (OSATS) is a multistation performance-based examination that assesses the technical skills of surgery residents. This study explores the implementation issues involved in remote administration of the OSATS, focusing on feasibility and the psychometric properties of the examination. METHODS: An eight-station OSATS was administered to surgical residents in Los Angeles and Chicago. The University of Toronto and the local institutions shared responsibility for organization and administration of the examination. RESULTS: There was good reliability for both the checklist (alpha = 0.68 for Los Angeles, 0.73 for Chicago) and global rating forms (alpha = 0.82 for both sites). Both iterations also showed evidence of construct validity, with a significant effect of training year for the checklist and global rating forms at both sites (analysis of variance: F = 8.66 to 19.93, P < 0.01). Despite some challenges, the model of central organization and peripheral delivery was effective for administration of the examinations. CONCLUSIONS: Two iterations of the OSATS at remote sites demonstrated psychometric properties highly consistent with previously reported data, suggesting that the examination is portable. Both faculty and residents indicated satisfaction with the examination experience. A model of central administration with peripheral delivery was feasible and effective.
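
For readers unfamiliar with the alpha values reported here: Cronbach's alpha treats the stations as items and indexes the internal consistency of the total checklist or global-rating score. A minimal Python sketch on a hypothetical candidates-by-stations score matrix (not the study's data):

import numpy as np

def cronbach_alpha(scores):
    # scores: candidates x stations matrix.
    # alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

scores = np.array([[6, 7, 5, 6],
                   [4, 5, 4, 5],
                   [8, 7, 8, 7],
                   [5, 6, 5, 4]])
print(cronbach_alpha(scores))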


Subject(s)
Educational Measurement/methods; General Surgery/education; Internship and Residency; California; Clinical Competence/standards; Feasibility Studies; Illinois; Psychometrics
10.
Am J Surg; 181(3): 221-5, 2001 Mar.
Article in English | MEDLINE | ID: mdl-11376575

ABSTRACT

PURPOSE: The purposes of this study were to develop and assess a rating form for the selection of surgical residents, determine the criteria most important in selection, determine the reliability of the assessment form and process both within and across sites, and document differences in the procedure and structure of resident selection processes across Canada. METHODS: Twelve of 13 English-speaking orthopedic surgery training programs in Canada participated during the 1999 selection year. The critical incident technique was used to determine the criteria most important in selection. From these criteria a 10-item rating form was developed, with each item on a 5-point scale. Sixty-six candidates were invited for interviews across the country. Each interviewer completed one assessment form for each candidate and independently ranked all candidates at the conclusion of all interviews. Consensus final rank orders were then created for each residency program. Across all programs, pairwise program-by-program correlations were computed for each assessment parameter. RESULTS: The internal consistency of assessment form ratings for each interviewer was moderately high (mean Cronbach's alpha = 0.71). Correlating each item with the final rank order for each program revealed that work ethic, interpersonal qualities, orthopedic experience, and enthusiasm correlated most highly with final candidate rank orders (r = 0.50, 0.48, 0.48, and 0.45, respectively). The interrater reliabilities (within panels) and interpanel reliabilities (within programs) for the rank orders were 0.67 and 0.63, respectively. Using the Spearman-Brown prophecy formula, it was found that two panels of two interviewers each are required to obtain a stable measure of a given candidate (reliability of 0.80). The average pairwise program-by-program correlation for the final candidate rank orders was low (0.14). CONCLUSIONS: A method was introduced to develop a standard, reliable candidate assessment form for evaluating residency selection procedures. The assessment form ratings were consistent within interviewers. Candidate assessments within programs (both between interviewers and between panels) were moderately reliable, suggesting agreement within programs about the relative quality of candidates, but there was very little agreement across programs.
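
The Spearman-Brown prophecy step answers a practical question: given the reliability r of a single unit (one interviewer, or one panel), how many parallel units k are needed to reach a target reliability? A short Python sketch using the reliabilities reported above:

def spearman_brown_k(r, r_target=0.80):
    # Units needed to lift reliability r to r_target:
    # k = r_target * (1 - r) / (r * (1 - r_target))
    return (r_target * (1 - r)) / (r * (1 - r_target))

print(spearman_brown_k(0.67))  # interrater 0.67 -> ~2.0 interviewers per panel
print(spearman_brown_k(0.63))  # interpanel 0.63 -> ~2.3, i.e. two panels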


Subject(s)
Internship and Residency; Orthopedics/education; Personnel Selection/methods; Canada; Data Interpretation, Statistical; Humans; Personnel Selection/standards; Reproducibility of Results
11.
Med Educ; 35(1): 78-81, 2001 Jan.
Article in English | MEDLINE | ID: mdl-11123600

ABSTRACT

The development and maintenance of expertise in any domain require extensive, sustained practice of the necessary skills. However, the quantity of time spent is not the only factor in achieving expertise; the quality of that time is at least as important. The development and maintenance of expertise require extensive time dedicated specifically to the improvement of skills, an activity termed deliberate practice. Unfortunately, how to engage in deliberate practice is not obvious for tasks such as diagnosis, which involve high stakes and are predominantly cognitive in nature. Reflection on, and adaptation of, one's cognitive processes is important; this could be supplemented by seeking out opportunities for trial and error in low-risk environments such as simulators. Regardless, most individuals tend to favour well-entrenched activities and avoid practice. This may be due to a lack of awareness of deficiencies in performance, but it may also be due to the individual's conception of the nature of expertise. Although expertise requires experience, experience alone is insufficient; the development of expertise depends critically on the individual making the most of that experience. As a result, motivational factors are fundamental to the development of expertise. Overcoming deficiencies in self-monitoring is not a sufficient remedy; it is also necessary that clinicians form an attitude toward work that includes continual reinvestment in improvement.


Subject(s)
Clinical Competence; Education, Medical/methods; Educational Measurement; Humans; Professional Competence; Quality of Health Care
12.
Am J Surg; 180(3): 234-7, 2000 Sep.
Article in English | MEDLINE | ID: mdl-11084137

ABSTRACT

BACKGROUND: This study examined whether an operative end product and time to completion could serve as measures of technical skill. METHODS: Nine final-year (PGY5) and 11 penultimate-year (PGY4) general surgery residents participated in a 6-station bench model examination. Time to completion was recorded. Twelve faculty surgeons (2 per station) evaluated the quality of the final product using a 5-point scale. RESULTS: The mean interrater reliability was 0.59 for product quality. Interstation reliability was 0.59 for analysis of the final product and 0.72 for time to completion. There was 63% and 78% agreement between attendings' ratings and the product quality and time scores, respectively. The PGY5s' mean product quality score was 4.14 +/- 0.26, compared with 3.82 +/- 0.33 for PGY4s (P < 0.05). The PGY5s' mean time was 110 +/- 19 minutes, compared with 132 +/- 15 minutes for PGY4s (P < 0.05). CONCLUSIONS: Analysis of the operative end product and time to completion offers efficient alternatives to on-line examiner scoring for bench model examinations of technical competence.
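
The group comparisons above can be checked from the reported summary statistics alone. A minimal Python sketch, assuming a pooled-variance two-sample t-test (the abstract does not name the test used):

# Product-quality comparison from reported means, SDs, and group sizes.
from scipy import stats

t, p = stats.ttest_ind_from_stats(mean1=4.14, std1=0.26, nobs1=9,
                                  mean2=3.82, std2=0.33, nobs2=11)
print(t, p)  # t is about 2.4, p about 0.03, consistent with P < 0.05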


Subject(s)
Benchmarking/standards; Clinical Competence/standards; General Surgery/education; Internship and Residency/standards; Feasibility Studies; Humans; Ontario; Reproducibility of Results
15.
Am J Surg; 179(4): 341-3, 2000 Apr.
Article in English | MEDLINE | ID: mdl-10875999

ABSTRACT

BACKGROUND: Computer-assisted learning (CAL) offers a number of potential advantages for teaching surgical technical skills. The purpose of this study was to evaluate the impact of individualized external feedback on surgical skill acquisition when a CAL package is used for instruction. METHODS: Freshman and sophomore students participated in a 1-hour CAL session designed to teach them how to tie a two-handed square knot. One group received individualized external feedback during the session and the other group did not. Subjects were videotaped performing the skill before and after the session. The tapes were independently analyzed, in blinded fashion, by three surgeons. Three measures were obtained: the total time for the task, whether or not the knot was square, and the general quality of the performance as scored on a rating scale. RESULTS: Data from 105 subjects were available for final analysis. For both groups there were significant increases in the proportion of square knots from pretest to posttest, but there was no difference between groups on this measure. Comparison of the performance scores demonstrated that both groups improved significantly after the session, but performance scores were significantly better in the group that had received feedback. CONCLUSIONS: Novices in both groups using CAL showed improvement in two of the outcomes measured, suggesting that subjects in both groups attained some degree of competence with this skill. The higher posttest performance score for the group receiving feedback demonstrates that external feedback produces a higher level of mastery when CAL is used to teach surgical technical skills.


Subject(s)
Clinical Competence; Computer-Assisted Instruction/methods; General Surgery/education; Analysis of Variance; Computer-Assisted Instruction/statistics & numerical data; Feedback; Humans; Suture Techniques; Videotape Recording
16.
J Surg Res; 92(1): 53-5, 2000 Jul.
Article in English | MEDLINE | ID: mdl-10864482

ABSTRACT

BACKGROUND: The surgical literature suggests that collaborative learning with peers may be a valid way to teach surgical skills, and there is growing interest in the use of computer-assisted learning for this purpose. Combining computer-assisted learning with peer teaching would theoretically offer a number of advantages, including a reduction in the amount of faculty time devoted to this task. In this study, we evaluated the efficacy of a form of collaborative learning in a computer-assisted learning environment. MATERIALS AND METHODS: We designed a prospective, randomized study comparing novice learners who worked in pairs with those who worked independently in a specially equipped computer-assisted learning classroom. Pretest and posttest assessments were performed by videotaping subjects performing the skill. Three experts then evaluated the videotapes in blinded fashion. Three different outcomes were assessed. RESULTS: Seventy-seven subjects were enrolled in and completed the study. Comparison of the outcome measures demonstrated no between-group difference in average performance scores or posttest times. The proportion of subjects who correctly tied a square knot was significantly lower in the computer-assisted peer teaching group than in the computer-assisted learning alone group (P = 0.04). CONCLUSIONS: Collaborative learning in a computer-assisted learning environment is not an effective combination for teaching surgical skills to novices.


Subject(s)
Computer-Assisted Instruction; Education, Medical/methods; General Surgery/education; Peer Group; Cooperative Behavior; Humans; Random Allocation
17.
Am J Surg; 179(3): 190-3, 2000 Mar.
Article in English | MEDLINE | ID: mdl-10827317

ABSTRACT

BACKGROUND: Two complementary examinations designed to comprehensively assess competence for surgical practice have been developed. The Objective Structured Assessment of Technical Skill (OSATS) evaluates a resident's operative skill, and the Patient Assessment and Management Examination (PAME) evaluates clinical management skills. METHODS: Twenty-four postgraduate year (PGY)-4 and PGY-5 general surgery residents from four training programs were examined. Each examination had eight stations, with a total of 6 hours of testing time. RESULTS: Interstation reliability was 0.64 for the OSATS, 0.71 for the PAME, and 0.74 for the total test. Examination scores discriminated between PGY-4 and PGY-5 residents for the OSATS (t = 4.39, P < .01), the PAME (t = 1.86, P < .05), and the total examination (t = 3.90, P < .01). Year of training accounted for 40% of the variance in scores. CONCLUSIONS: This comprehensive examination is a reliable and valid method of assessing critical skills in senior surgical residents and may be useful for the formal assessment of readiness for practice.
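
The 40% figure is recoverable from the reported statistics: assuming the total-examination t value reflects a two-group comparison of the 24 residents (df = 22), the variance explained is r^2 = t^2 / (t^2 + df) = 3.90^2 / (3.90^2 + 22) ≈ 0.41, i.e. about 40%.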


Subject(s)
Clinical Competence; Educational Measurement/methods; General Surgery/education; Internship and Residency; Clinical Competence/standards; Educational Measurement/standards; Feasibility Studies; Humans; Internship and Residency/classification; Internship and Residency/standards; Ontario; Reproducibility of Results; Time Factors
18.
Am J Surg; 179(3): 223-8, 2000 Mar.
Article in English | MEDLINE | ID: mdl-10827325

ABSTRACT

BACKGROUND: The management of multiply injured trauma patients is a skill requiring broad knowledge, sound judgment, and leadership capabilities. The purpose of this study was to evaluate the effectiveness of a computer-based trauma simulator as a teaching tool for senior medical students. METHODS: All year-4 clinical clerks at the University of Toronto were invited to participate in a focused, 2-hour trauma management course; 79% volunteered. Students were randomized to either computer-based simulator or seminar-based teaching groups. The outcome measures were students' scores on a trauma objective structured clinical examination (OSCE). RESULTS: Both the trauma simulator and seminar teaching groups performed significantly better than the comparison group (no additional teaching) on the patient encounter component of the trauma OSCE, but not on the written component. There was no significant difference between the performances of the trauma simulator and seminar teaching groups. Students overwhelmingly felt the trauma simulator was effective for their trauma teaching and improved their overall confidence in clinical trauma scenarios. CONCLUSIONS: There is a significant benefit associated with a focused, clinically based trauma management course for senior medical students. No additional improvement was noted with the use of a high-fidelity computer-based trauma simulator.


Subject(s)
Clinical Competence; Computer-Assisted Instruction; Traumatology/education; Analysis of Variance; Clinical Clerkship; Computer Simulation; Educational Measurement; Humans; Judgment; Leadership; Manikins; Multiple Trauma/surgery; Ontario; Personal Satisfaction; Self Concept; Students, Medical; Teaching/methods; Transfer, Psychology
20.
Acad Med; 74(10): 1129-34, 1999 Oct.
Article in English | MEDLINE | ID: mdl-10536636

ABSTRACT

PURPOSE: To evaluate the effectiveness of binary content checklists in measuring increasing levels of clinical competence. METHOD: Fourteen clinical clerks, 14 family practice residents, and 14 family physicians participated in two 15-minute standardized patient interviews. An examiner rated each participant's performance using a binary content checklist and a global process rating. The participants provided a diagnosis two minutes into and at the end of the interview. RESULTS: On global scales, the experienced clinicians scored significantly better than did the residents and clerks, but on checklists, the experienced clinicians scored significantly worse than did the residents and clerks. Diagnostic accuracy increased for all groups between the two-minute and 15-minute marks without significant differences between the groups. CONCLUSION: These findings are consistent with the hypothesis that binary checklists may not be valid measures of increasing clinical competence.


Subject(s)
Clinical Competence; Education, Medical/methods; Educational Measurement/methods; Analysis of Variance; Clinical Clerkship; Family Practice/education; Humans; Internship and Residency; Mental Disorders/diagnosis; Ontario; Psychiatry/education; Reproducibility of Results