Results 1 - 20 of 30
1.
J Interprof Care; 37(4): 613-622, 2023.
Article in English | MEDLINE | ID: mdl-36448594

ABSTRACT

Workplace-based learning exposes medical students to interprofessional competencies through repeated, active participation in interprofessional learning activities. Using Situated Learning Theory as our theoretical lens, we explored with medical students how interacting with existing interprofessional teams contributes to the development of an expanded health care professional identity. An embedded mixed-methods study, using semi-structured interviews and questionnaires to assess readiness for interprofessional learning, was conducted with 14 medical students completing an elective at an interprofessional pain medicine clinic. Within this workplace-based context, we developed a model identifying key themes and supporting factors contributing to the development of an expanded professional identity. These findings help describe the processes by which students gain interprofessional collaboration competence.


Subject(s)
Attitude of Health Personnel; Students, Medical; Humans; Interprofessional Relations; Learning; Health Personnel
2.
J Contin Educ Health Prof; 42(4): 249-255, 2022 Oct 1.
Article in English | MEDLINE | ID: mdl-35180742

ABSTRACT

INTRODUCTION: Verbal feedback from trainees to supervisors is rare in medical education, although it is valuable for improving teaching skills. Research has mostly examined narrative comments on resident evaluations of their supervisors. This study aimed to explore supervisors' and residents' beliefs and experiences with upward feedback, along with recommendations to initiate and facilitate effective conversations. METHODS: Using 60-minute focus group discussions, a previous study explored the opinions of internal medicine residents and clinical supervisors at Brigham and Women's Hospital regarding the impact of institutional culture on feedback conversations. For this study, we conducted a thematic analysis of the transcribed, anonymized data to identify key concepts pertaining only to verbal upward feedback, through the theoretical lens of Positioning Theory. RESULTS: Twenty-two supervisors and 29 residents participated in three and five focus groups, respectively. Identified themes were mapped to three research questions regarding (1) existing beliefs (lack of impact, risks of giving supervisors feedback, need for preparation and reflection), (2) experiences (nonspecific language, avoidance of upward feedback, bypassing the supervisor), and (3) recommended approaches (setting clear expectations, seeking specific feedback, emphasizing interest in growth). DISCUSSION: Study participants appeared to assume learner-teacher positions during feedback conversations, resulting in residents' concerns about adverse consequences, beliefs that supervisors would neither accept feedback nor change their behaviors, and avoidance of constructive upward feedback. Residents suggested that an emphasis on mutual professional growth and regular feedback seeking by supervisors could encourage them to take on the role of feedback providers. Their recommendations could be a valuable starting point for faculty development initiatives on upward feedback.


Subject(s)
Internship and Residency; Female; Humans; Feedback; Qualitative Research; Formative Feedback; Focus Groups; Clinical Competence
3.
Med Educ; 53(5): 477-493, 2019 May.
Article in English | MEDLINE | ID: mdl-30779210

ABSTRACT

OBJECTIVES: Coaching in medical education has recently gained prominence, but minimal attention has been given to identifying key coaching skills and determining how they work to ensure residents are progressing and developing self-assessment skills. This study examined process-oriented and content-oriented coaching skills used in coaching sessions, with particular attention to how supervisors use them to enhance resident acceptance of feedback and thereby learning. METHODS: This qualitative study analysed secondary audiotaped data from 15 supervisor-resident dyads during two feedback sessions, 4 months apart. The R2C2 model was used to engage the resident, build a relationship, explore reactions to feedback, explore resident perceptions of content, and coach for change. Framework analysis was used, including familiarisation with the data, identifying the thematic framework, indexing and charting the data, and mapping and interpretation. RESULTS: Process skills included preparation, relationship development, use of micro communication skills, techniques to promote reflection and self-assessment by the resident, and supervisor flexibility. Content skills, related to the specific feedback content, included engaging the resident in discussion, ensuring the discussion was collaborative and focused on goal setting, co-developing a Learning Change Plan, ensuring resident commitment, and following up on the plan. Together, these skills foster agency in the resident learner. Three overarching themes emerged from the analysis: the interconnectedness of process and content; tensions between encouraging self-direction and ensuring progress and competence; and balancing a coaching dialogue against a teaching monologue. CONCLUSIONS: Effective coaching by supervisors requires a combination of specific process and content skills, chosen to suit the needs of the individual resident. Mastering these skills helps residents engage and develop agency in their own professional development. These outcomes depend on faculty maintaining a balance between coaching and teaching, encouraging resident self-direction, and ensuring progression to competence.


Subject(s)
Clinical Competence/standards; Feedback; Internship and Residency; Mentoring; Education, Medical, Graduate; Faculty, Medical; Female; Humans; Male; Qualitative Research; Self-Assessment
4.
J Contin Educ Health Prof; 38(4): 235-243, 2018.
Article in English | MEDLINE | ID: mdl-30169379

ABSTRACT

INTRODUCTION: Fellows of the Royal College of Physicians and Surgeons of Canada are required to participate in assessment activities for all new 5-year cycles beginning on or after January 2014 to meet the maintenance of certification program requirements. This study examined the assessment activities that psychiatrists reported in their maintenance of certification e-portfolios to determine the types and frequency of activities reported; the resultant learning, planned learning, and/or changes to practice planned or implemented; and the interrelationship between the types of assessment activities, learning that was affirmed or planned, and changes planned or implemented. METHODS: A total of 5000 entries from 2195 psychiatrists were examined. A thematic analysis, drawing on framework analysis, was undertaken of the entries from 2016. RESULTS: There were 3841 entries for analysis; 1159 entries did not meet the criteria for assessment. The most commonly reported activities were self-assessment programs, feedback on teaching, regular performance reviews, and chart reviews. Less frequent were direct observation, peer supervision, and reviews by provincial medical regulatory authorities. In response to the data, psychiatrists affirmed that their practices were appropriate, identified gaps they intended to address, planned future learning, and/or planned or implemented changes. The assessment activities were internally or externally initiated and resulted in no change, small changes (accommodations and adjustments), or redirections. DISCUSSION: Psychiatrists reported participating in a variety of assessment activities that had variable impact on learning and change. The study underscores the need to ensure that assessments being undertaken are purposeful, relevant, and designed to enable identification of outcomes that impact practice.


Subject(s)
Documentation/trends; Psychiatry/methods; Canada; Certification/methods; Clinical Competence/standards; Documentation/methods; Documentation/standards; Education, Medical, Continuing/trends; Humans; Outcome Assessment, Health Care/methods; Outcome Assessment, Health Care/statistics & numerical data
5.
Simul Healthc; 13(3): 195-200, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29381589

ABSTRACT

INTRODUCTION: Feedback in clinical education and after simulated experiences facilitates learning. Although evidence-based guidelines for feedback exist, faculty experience challenges in applying the guidelines. We set out to explore how faculty approach feedback and how these approaches align with current recommendations. METHODS: There is strong evidence for the following four components of feedback: feedback as a social interaction, tailoring content, providing specific descriptions of performance, and identifying actionable items. Faculty preceptors participated in feedback simulations followed by debriefing. The simulations were video recorded, transcribed, and analyzed qualitatively using template analysis to examine faculty approaches to feedback relative to evidence-informed recommendations. RESULTS: Recorded encounters involving 18 faculty and 11 facilitators yielded 111 videos. There was variability in the extent to which feedback approaches aligned with recommended practices. Faculty behaviors aligned with recommendations included a conversational approach, flexibly adapting feedback techniques to resident context, offering rich descriptions of observations with specific examples and concrete suggestions, achieving a shared understanding of strengths and gaps early on to allow sufficient time for problem-solving, and establishing a plan for ongoing development. Behaviors misaligned with guidelines included prioritizing the task of feedback over the relationship, lack of flexibility in techniques applied, using generic questions that did not explore residents' experiences, and ending with a vague plan for improvement. CONCLUSIONS: Faculty demonstrate variability in feedback skills in relation to recommended practices. Simulated feedback experiences may offer a safe environment for faculty to further develop the skills needed to help residents progress within competency-based medical education.


Subject(s)
Faculty, Medical/psychology; Formative Feedback; Internship and Residency/methods; Simulation Training/methods; Clinical Competence; Communication; Educational Measurement; Guidelines as Topic; Humans; Internship and Residency/standards; Qualitative Research; Simulation Training/standards; Videotape Recording
6.
J Contin Educ Health Prof; 38(1): 32-40, 2018.
Article in English | MEDLINE | ID: mdl-29329147

ABSTRACT

INTRODUCTION: Multisource feedback is a questionnaire-based assessment tool that provides physicians with data about workplace behaviors and may combine numeric ratings with narrative (free-text) comments. Little attention has been paid to the wording of requests for comments, potentially limiting their utility to support physician performance. This study tested the phrasing of two different sets of questions. METHODS: Two sets of questions were tested with family physicians, medical and surgical specialists, and their medical colleague and coworker respondents. One set asked respondents to identify one thing the participant physician does well and one thing the physician could target for action. Set 2 questions asked what the physician does well and what the physician might do to enhance practice. The resulting free-text comments were coded for polarity (positive, neutral, or negative), specificity (precision and detail), actionability (ability to use the feedback to direct future activity), and CanMEDS roles (competencies), and analyzed descriptively. RESULTS: Data for 222 physicians (111 physicians per set) were analyzed. A total of 1824 comments (8.2/physician) were submitted, with more comments from coworkers than medical colleagues. Set 1 yielded more comments, which were more likely to be positive, semi-specific, and very actionable than those from set 2. However, set 2 generated more very specific comments. Comments covered all CanMEDS roles, with the most comments for the collaborator and leader roles. DISCUSSION: The wording of questions inviting free-text responses influences the volume and nature of the comments provided. Individuals designing multisource feedback tools should carefully consider the wording of items soliciting narrative responses.


Subject(s)
Feedback; Physicians/psychology; Staff Development/methods; Surveys and Questionnaires/standards; Humans; Physicians/standards; Professional Competence/standards; Professional Competence/statistics & numerical data; Qualitative Research; Staff Development/standards; Staff Development/statistics & numerical data; Surveys and Questionnaires/statistics & numerical data
7.
Acad Med; 93(7): 1055-1063, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29342008

ABSTRACT

PURPOSE: The authors previously developed and tested a reflective model for facilitating performance feedback for practice improvement, the R2C2 model. It consists of four phases: relationship building, exploring reactions, exploring content, and coaching. This research studied the use and effectiveness of the model across different residency programs and the factors that influenced its effectiveness and use. METHOD: From July 2014 to October 2016, case study methodology was used to study R2C2 model use and the influence of context on use within and across five cases. Five residency programs (family medicine, psychiatry, internal medicine, surgery, and anesthesia) from three countries (Canada, the United States, and the Netherlands) were recruited. Data collection included audiotaped site assessment interviews, feedback sessions, and debriefing interviews with residents and supervisors, as well as completed learning change plans (LCPs). Content, thematic, template, and cross-case analyses were conducted. RESULTS: An average of nine resident-supervisor dyads per site were recruited. The R2C2 feedback model, used with an LCP, was reported to be effective in engaging residents in a reflective, goal-oriented discussion about performance data, supporting coaching, and enabling collaborative development of a change plan. Use varied across cases, influenced by six general factors: supervisor characteristics, resident characteristics, qualities of the resident-supervisor relationship, assessment approaches, program culture and context, and supports provided by the authors. CONCLUSIONS: The R2C2 model was reported to be effective in fostering a productive, reflective feedback conversation focused on resident development and in facilitating collaborative development of a change plan. Factors contributing to successful use were identified.


Subject(s)
Educational Measurement/standards; Feedback; Internship and Residency/methods; Mentoring/standards; Educational Measurement/methods; Humans; Internal Medicine/education; Internship and Residency/standards; Interviews as Topic/methods; Mentoring/methods; Mentoring/trends; United Kingdom
8.
Can J Anaesth; 64(8): 810-819, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28573361

ABSTRACT

PURPOSE: This study explored how anesthesiologists understand situational awareness (SA) and how they think SA is learned, taught, and assessed. METHODS: Semi-structured interviews were performed with practicing anesthesiologists involved in teaching. This qualitative study was conducted using constructivist grounded theory techniques (i.e., line-by-line coding, memoing, and constant comparison) in a thematic analysis of interview transcripts. Group meetings were held to develop and review themes emerging from the data. RESULTS: Eighteen anesthesiologists were interviewed. Respondents displayed an understanding of SA using a mixture of examples from clinical experience and everyday life. Despite agreeing on the importance of SA, formal definitions of SA were lacking, and the participants did not explicate the topic of SA in either their practice or their teaching activities. Situational awareness had been learned informally through increasing independence in the clinical context, role modelling, reflection on errors, and formally through simulation. Respondents taught SA through modelling and discussing scanning behaviour, checklists, verbalization of thought processes, and debriefings. Although trainees' understanding of SA was assessed as part of the decision-making process for granting clinical independence, respondents found it difficult to give meaningful feedback on SA to their trainees. CONCLUSION: Although SA is an essential concept in anesthesiology, its use remains rather tacit, primarily due to the lack of a common operational definition of the term. Faculty development is required to help anesthesiologists teach and assess SA more explicitly in the clinical environment.


Subject(s)
Anesthesiologists/psychology; Anesthesiology/methods; Awareness; Decision Making; Anesthesiologists/education; Anesthesiology/education; Female; Grounded Theory; Humans; Interviews as Topic; Male
9.
J Vet Med Educ; 43(1): 104-10, 2016.
Article in English | MEDLINE | ID: mdl-26983054

ABSTRACT

Effective faculty development for veterinary preceptors requires knowledge about their learning needs and delivery preferences. Veterinary preceptors at community practice locations in Alberta, Canada, were surveyed to determine their confidence in teaching ability and interest in nine faculty development topics. The study included 101 veterinarians (48.5% female). Of these, 43 (42.6%) practiced veterinary medicine in a rural location and 54 (53.5%) worked in mixed-animal or food-animal practice. Participants reported they were more likely to attend an in-person faculty development event than to participate in an online presentation. The likelihood of attending an in-person event differed with the demographics of the respondent. Teaching clinical reasoning, assessing student performance, engaging and motivating students, and providing constructive feedback were topics in which preceptors had great interest and high confidence. Preceptors were least confident in the areas of student learning styles, balancing clinical workload with teaching, and resolving conflict involving the student. Disparities between preceptors' interest and confidence in faculty development topics exist, in that topics with the lowest confidence scores were not rated as those of greatest interest. While the content and format of clinical teaching faculty development events should be informed by the interests of preceptors, consideration of preceptors' confidence in teaching ability may be warranted when developing a faculty development curriculum.


Subject(s)
Education, Veterinary; Needs Assessment; Preceptorship; Teaching; Adult; Aged; Alberta; Faculty; Female; Humans; Learning; Male; Middle Aged; Models, Theoretical; Young Adult
10.
J Vet Med Educ; 43(1): 95-103, 2016.
Article in English | MEDLINE | ID: mdl-26752019

ABSTRACT

Optimization of clinical veterinary education requires an understanding of what compels veterinary preceptors in their role as clinical educators, what satisfaction they receive from the teaching experience, and what struggles they encounter while supervising students in private practice. We explored veterinary preceptors' teaching motivations, enjoyment, and challenges by undertaking a thematic content analysis of 97 questionnaires and 17 semi-structured telephone interviews. Preceptor motivations included intrinsic factors (obligation to the profession, maintenance of competence, satisfaction) and extrinsic factors (promotion of the veterinary field, recruitment). Veterinarians enjoyed observing the learner (motivation and enthusiasm, skill development) and engaging with the learner (sharing their passion for the profession, developing professional relationships). Challenges for veterinary preceptors included variability in learner interest and engagement, time management, and lack of guidance from the veterinary medicine program. We found dynamic interactions among the teaching motivations, enjoyment, and challenges for preceptors. Our findings suggest that in order to sustain the veterinary preceptor, there is a need to recognize the interplay between the incentives and disincentives for teaching, to foster the motivations and enjoyment for teaching, and to mitigate the challenges of teaching in community private practice.


Subject(s)
Education, Veterinary; Motivation; Personal Satisfaction; Preceptorship; Teaching; Adult; Aged; Alberta; Female; Humans; Male; Middle Aged; Qualitative Research; Young Adult
11.
Med Teach; 38(8): 815-22, 2016 Aug.
Article in English | MEDLINE | ID: mdl-26618220

ABSTRACT

INTRODUCTION: Physicians identify teaching as a factor that enhances performance, although existing data supporting this relationship are limited. PURPOSE: To determine whether clinical performance scores, assessed through multisource feedback (MSF) data, differed according to involvement in clinical teaching. METHODS: MSF data for 1831 family physicians, 1510 medical specialists, and 542 surgeons were collected from physicians' medical colleagues, co-workers (e.g., nurses and pharmacists), and patients, and examined in relation to information about physician teaching activities, including percentage of time spent teaching during patient care and academic appointment. Multivariate analysis of variance, partial eta-squared effect sizes, and Tukey's HSD post hoc comparisons were used to determine between-group differences in total MSF mean and subscale mean performance scores by teaching and academic appointment data. RESULTS: Higher clinical performance scores were associated with holding any academic appointment and, generally, with any time spent teaching during patient care versus none. This was most evident for data from medical colleagues, where these differences existed across all specialty groups. CONCLUSION: More involvement in teaching was associated with higher clinical performance ratings from medical colleagues and co-workers. These results may support promoting teaching as a method to enhance and maintain high-quality clinical performance.


Subject(s)
Clinical Competence; Physicians; Teaching; Formative Feedback; Humans; Surveys and Questionnaires
12.
Pediatrics; 131(2): e344-52, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23339215

ABSTRACT

OBJECTIVE: This study evaluated the effectiveness of Helping Babies Breathe (HBB) newborn care and resuscitation training for birth attendants in reducing stillbirth (SB) and predischarge and neonatal mortality (NMR). India accounts for a large proportion of the world's annual 3.1 million neonatal deaths and 2.6 million SBs. METHODS: This prospective study included 4187 births at >28 weeks' gestation before and 5411 births after HBB training in Karnataka. A total of 599 birth attendants from rural primary health centers and district and urban hospitals received HBB training developed by the American Academy of Pediatrics, using a train-the-trainer cascade. Pre-post written trainee knowledge, posttraining provider performance and skills, SB, predischarge mortality, and NMR before and after HBB training were assessed using χ² and t tests for categorical and continuous variables, respectively. Backward stepwise logistic regression analysis adjusted for potential confounding. RESULTS: Provider knowledge and performance systematically improved with HBB training. HBB training reduced resuscitation but increased the incidence of assisted bag-and-mask ventilation. SB declined from 3.0% to 2.3% (odds ratio [OR] 0.76, 95% confidence interval [CI] 0.59-0.98) and fresh SB from 1.7% to 0.9% (OR 0.54, 95% CI 0.37-0.78) after HBB training. Predischarge mortality was 0.1% in both periods. NMR was 1.8% before and 1.9% after HBB training (OR 1.09, 95% CI 0.80-1.47, P = .59), but unknown status at 28 days was 2% greater after HBB training (P = .007). CONCLUSIONS: HBB training reduced SB without increasing NMR, indicating that resuscitated infants survived the neonatal period. Monitoring and community-based assessment are recommended.
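The stillbirth odds ratio above can be checked by hand. The following Python sketch recomputes the OR and a Wald 95% confidence interval from event counts reconstructed from the reported rates; the counts of 126 and 124 stillbirths are assumptions derived from 3.0% of 4187 and 2.3% of 5411 births, not figures taken from the paper itself:

```python
import math

def odds_ratio_ci(events_a, total_a, events_b, total_b, z=1.96):
    """Odds ratio of group A vs group B with a Wald 95% CI from 2x2 counts."""
    a, b = events_a, total_a - events_a  # group A: events / non-events
    c, d = events_b, total_b - events_b  # group B: events / non-events
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# After training: ~124 stillbirths in 5411 births; before: ~126 in 4187.
or_, lo, hi = odds_ratio_ci(124, 5411, 126, 4187)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With these reconstructed counts the sketch gives OR ≈ 0.76 with a confidence interval close to the published 0.59-0.98; the small discrepancy at the upper bound reflects rounding in the reconstructed counts.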


Subject(s)
Asphyxia Neonatorum/mortality; Asphyxia Neonatorum/nursing; Developing Countries; Inservice Training/organization & administration; Midwifery/education; Noninvasive Ventilation/nursing; Resuscitation/education; Resuscitation/nursing; Stillbirth/epidemiology; Teaching/organization & administration; Clinical Competence; Curriculum; Female; Follow-Up Studies; Humans; India; Infant, Newborn; Male; Noninvasive Ventilation/mortality; Pregnancy; Prospective Studies; Resuscitation/mortality; Survival Rate
13.
BMC Med Educ; 12: 17, 2012 Mar 26.
Article in English | MEDLINE | ID: mdl-22448658

ABSTRACT

BACKGROUND: There has been little study of the role of the essay question in selection for medical school. The purpose of this study was to obtain a better understanding of how applicants approached the essay questions used in selection at our medical school in 2007. METHODS: The authors conducted a qualitative analysis of 210 essays written as part of the medical school admissions process, and developed a conceptual framework to describe the relationships, ideas and concepts observed in the data. RESULTS: Analysis revealed a tension between "genuine" and "expected" responses that we believe applicants experience when choosing how to answer questions in the admissions process. A theory named "What do they want me to say?" was developed to describe the ways in which applicants modulate their responses to conform to their expectations of the selection process; the elements of this theory were confirmed in interviews with applicants and assessors. CONCLUSIONS: This work suggests the existence of a "hidden curriculum of admissions" and demonstrates that the process of selection has a strong influence on applicant response. This paper suggests ways that selection might be modified to address this effect. Studies such as this can help us to appreciate the unintended consequences of admissions processes and can identify ways to make the selection process more consistent, transparent and fair.


Subject(s)
Curriculum; School Admission Criteria; Schools, Medical; Writing; Alberta; Concept Formation; Humans; Interviews as Topic; Models, Theoretical
14.
Resuscitation; 83(7): 887-93, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22286047

ABSTRACT

INTRODUCTION: It is critical that competency in pediatric resuscitation be achieved and assessed during residency or postgraduate medical training. The purpose of this study was to create and evaluate a tool to measure all elements of pediatric resuscitation team leadership competence. METHODS: An initial set of items, derived from a literature review and a brainstorming session, was refined to a 26-item assessment tool through the use of Delphi methodology. The tool was tested using videos of standardized resuscitations. A psychometric assessment of the evidence for instrument validity and reliability was undertaken. RESULTS: The performance of 30 residents on two videotaped scenarios was assessed by 4 pediatricians using the tool, with 12 items assessing 'leadership and communication skills' (LCS) and 14 items assessing 'knowledge and clinical skills' (KCS). The instrument showed evidence of reliability; the Cronbach's alpha and generalizability coefficients were α = 0.818 and Ep² = 0.76 for the overall instrument, α = 0.827 and Ep² = 0.844 for LCS, and α = 0.673 and Ep² = 0.482 for KCS. While validity was initially established through the literature review and brainstorming by the panel of experts, it was further supported by the strong correlations between global scores and scores for overall performance (r = 0.733), LCS (r = 0.718), and KCS (r = 0.662), as well as by the factor analysis, which accounted for 40.2% of the variance. CONCLUSION: The results of the study demonstrate that the instrument is a valid and reliable tool to evaluate pediatric resuscitation team leader competence.
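The reliability figures reported above are Cronbach's alpha values. As a minimal sketch of how alpha is computed from an item-by-subject score matrix (the ratings below are hypothetical illustrations, not the study's data):

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(item_scores)
    sum_item_vars = sum(variance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # per-subject totals
    return k / (k - 1) * (1 - sum_item_vars / variance(totals))

# Hypothetical tool: 3 items, each rated for 5 videotaped performances.
items = [
    [4, 3, 5, 4, 2],
    [4, 2, 5, 3, 2],
    [5, 3, 4, 4, 3],
]
print(round(cronbach_alpha(items), 3))
```

Higher alpha indicates more internally consistent items; values around 0.8, as reported for the overall instrument and the LCS subscale, are conventionally taken as acceptable reliability.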


Subject(s)
Clinical Competence/standards; Educational Measurement/methods; Internship and Residency/standards; Pediatrics/education; Resuscitation/education; Humans; Internship and Residency/methods; Patient Simulation; Reproducibility of Results; Resuscitation/standards
15.
Med Educ; 45(6): 636-47, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21564201

ABSTRACT

CONTEXT: Conceptualisations of self-assessment are changing as its role in professional development comes to be viewed more broadly as needing to be both externally and internally informed through activities that enable access to and the interpretation and integration of data from external sources. Education programmes use various activities to promote learners' reflection and self-direction, yet we know little about how effective these activities are in 'informing' learners' self-assessments. OBJECTIVES: This study aimed to increase understanding of the specific ways in which undergraduate and postgraduate learners used learning and assessment activities to inform self-assessments of their clinical performance. METHODS: We conducted an international qualitative study using focus groups and drawing on principles of grounded theory. We recruited volunteer participants from three undergraduate and two postgraduate programmes using structured self-assessment activities (e.g. portfolios). We asked learners to describe their perceptions of and experiences with formal and informal activities intended to inform self-assessment. We conducted analysis as a team using a constant comparative process. RESULTS: Eighty-five learners (53 undergraduate, 32 postgraduate) participated in 10 focus groups. Two main findings emerged. Firstly, the perceived effectiveness of formal and informal assessment activities in informing self-assessment appeared to be both person- and context-specific. No curricular activities were considered to be generally effective or ineffective. However, the availability of high-quality performance data and standards was thought to increase the effectiveness of an activity in informing self-assessment. Secondly, the fostering and informing of self-assessment was believed to require credible and engaged supervisors. CONCLUSIONS: Several contextual and personal conditions consistently influenced learners' perceptions of the extent to which assessment activities were useful in informing self-assessments of performance. Although learners are not guaranteed to be accurate in their perceptions of which factors influence their efforts to improve performance, their perceptions must be taken into account; assessment strategies that are perceived as providing untrustworthy information can be anticipated to have negligible impact.


Subject(s)
Clinical Competence/standards; Education, Medical, Graduate/methods; Education, Medical, Undergraduate/methods; Educational Measurement/methods; Self-Assessment; Students, Medical/psychology; Belgium; Curriculum; Education, Medical, Graduate/standards; Education, Medical, Undergraduate/standards; Educational Measurement/standards; Humans; Netherlands; Self-Evaluation Programs; United Kingdom
16.
BMC Med Educ; 10: 93, 2010 Dec 12.
Article in English | MEDLINE | ID: mdl-21143996

ABSTRACT

BACKGROUND: The increasing burden of illness related to musculoskeletal diseases makes it essential that attention be paid to musculoskeletal education in medical schools. This case study examines the undergraduate musculoskeletal curriculum at one medical school. METHODS: A case study research methodology used quantitative and qualitative approaches to systematically examine the undergraduate musculoskeletal course at the University of Calgary (Alberta, Canada) Faculty of Medicine. The aim of the study was to understand the strengths and weaknesses of the curriculum guided by four questions: (1) Was the course structured according to standard principles for curriculum design as described in the Kern framework? (2) How did students and faculty perceive the course? (3) Was the assessment of the students valid and reliable? (4) Were the course evaluations completed by students and faculty valid and reliable? RESULTS: The analysis showed that the structure of the musculoskeletal course mapped to many components of Kern's framework in course design. The course had a high level of commitment by teachers, included a valid and reliable final examination, and valid evaluation questionnaires that provided relevant information to assess curriculum function. The curricular review identified several weaknesses in the course: the apparent absence of a formalized needs assessment, course objectives that were not specific or measurable, poor development of clinical presentations, small group sessions that exceeded normal 'small group' sizes, and poor alignment between the course objectives, examination blueprint and the examination. Both students and faculty members perceived the same strengths and weaknesses in the curriculum. Course evaluation data provided information that was consistent with the findings from the interviews with the key stakeholders. CONCLUSIONS: The case study approach using the Kern framework and selected questions provided a robust way to assess a curriculum, identify its strengths and weaknesses and guide improvements.


Subject(s)
Education, Medical, Undergraduate/methods , Hospitals, University , Musculoskeletal Diseases , Schools, Medical , Alberta , Attitude of Health Personnel , Curriculum/standards , Faculty, Medical , Humans , Organizational Case Studies
18.
Acad Med ; 84(10): 1342-7, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19881418

ABSTRACT

PURPOSE: To determine the long-term effects of curriculum length on physician competence, the authors compared the performance of graduates from the University of Calgary (U of C; a school with a three-year curriculum) with matched samples from the University of Alberta (U of A) and from other Canadian schools with four-year curricula. METHOD: The authors used data from the College of Physicians and Surgeons of Alberta, Physician Achievement Review (PAR) program to determine curricular outcomes. The authors analyzed PAR program data, comprising reviews from medical colleagues, nonphysician coworkers (e.g., nurses, pharmacists), patients, and the physicians themselves, for 166 physicians each from U of C, U of A, and other universities. They compared groups using one-way analysis of covariance (ANCOVA) and multivariate analysis of covariance (MANCOVA), with years since graduation as a covariate, and a Cohen d effect size calculation to assess the magnitude of the differences. RESULTS: The authors analyzed review data for 498 physicians. The results of ANCOVA showed that no significant differences existed among schools for the self and the patient aggregate mean questionnaire scores. Differences in aggregate mean questionnaire scores from the medical colleague and coworker surveys were significant, albeit with small effect sizes. MANCOVA showed small but significant differences among schools on the aggregate mean factor scores for the medical colleague, coworker, and patient questionnaires. CONCLUSIONS: Although differences among schools exist, they are small. They suggest at least equivalent performance for graduates of three- and four-year medical schools who practice in Alberta.


Subject(s)
Clinical Competence , Curriculum , Adult , Canada , Career Choice , Curriculum/statistics & numerical data , Humans , Internship and Residency
19.
Arch Pathol Lab Med ; 133(8): 1301-8, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19653730

ABSTRACT

CONTEXT: There is increasing interest in ensuring that physicians demonstrate the full range of Accreditation Council for Graduate Medical Education competencies. OBJECTIVE: To determine whether it is possible to develop a feasible and reliable multisource feedback instrument for pathologists and laboratory medicine physicians. DESIGN: Surveys with 39, 30, and 22 items were developed to assess individual physicians by 8 peers, 8 referring physicians, and 8 coworkers (eg, technologists, secretaries), respectively, using 5-point scales and an unable-to-assess category. Physicians completed a self-assessment survey. Items addressed key competencies related to clinical competence, collaboration, professionalism, and communication. RESULTS: Data from 101 pathologists and laboratory medicine physicians were analyzed. The mean number of respondents per physician was 7.6, 7.4, and 7.6 for peers, referring physicians, and coworkers, respectively. The internal consistency reliability, measured by Cronbach alpha, was ≥ .95 for the full scale of all instruments. Analysis indicated that the medical peer, referring physician, and coworker instruments achieved generalizability coefficients of .78, .81, and .81, respectively. Factor analysis showed 4 factors on the peer questionnaire accounted for 68.8% of the total variance: reports and clinical competency, collaboration, educational leadership, and professional behavior. For the referring physician survey, 3 factors accounted for 66.9% of the variance: professionalism, reports, and clinical competency. Two factors on the coworker questionnaire accounted for 59.9% of the total variance: communication and professionalism. CONCLUSIONS: It is feasible to assess this group of physicians using multisource feedback with instruments that are reliable.


Subject(s)
Clinical Competence/standards , Feedback , Medical Laboratory Personnel , Pathology, Clinical , Practice Patterns, Physicians'/standards , Allied Health Personnel/statistics & numerical data , Feasibility Studies , Humans , Peer Review, Health Care , Quality Assurance, Health Care , Reproducibility of Results , Self-Assessment , Surveys and Questionnaires , Workforce
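The multisource feedback studies above lean on Cronbach alpha as their internal-consistency statistic (e.g., ≥ .95 for the full scales). A minimal sketch of how that coefficient is computed from raw survey ratings, with invented 5-point ratings purely for illustration:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for k survey items rated by the same n respondents.

    item_scores: list of k lists; each inner list holds one item's
    ratings across the same respondents, in the same order.
    """
    k = len(item_scores)
    # Sum of the per-item score variances.
    item_vars = sum(pvariance(item) for item in item_scores)
    # Variance of each respondent's total score across all items.
    totals = [sum(ratings) for ratings in zip(*item_scores)]
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical ratings of one physician on three survey items by four
# raters (numbers invented for illustration, not taken from the studies).
items = [
    [4, 5, 4, 5],
    [4, 5, 3, 5],
    [5, 5, 4, 5],
]
alpha = cronbach_alpha(items)
```

Alpha rises toward 1 as the items covary (raters who score one item high tend to score the others high), which is why the instrument developers above report it per full scale rather than per item.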
20.
Med Educ ; 42(10): 1007-13, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18823520

ABSTRACT

OBJECTIVES: Multi-source feedback (MSF) enables performance data to be provided to doctors from patients, co-workers and medical colleagues. This study examined the evidence for the validity of MSF instruments for general practice, investigated changes in performance for doctors who participated twice, 5 years apart, and determined the association between change in performance and initial assessment and socio-demographic characteristics. METHODS: Data for 250 doctors included three datasets per doctor from, respectively, 25 patients, eight co-workers and eight medical colleagues, collected on two occasions. RESULTS: There was high internal consistency (alpha > 0.90) and adequate generalisability (Ep² > 0.70). D study results indicate adequate generalisability coefficients for groups of eight assessors (medical colleagues, co-workers) and 25 patient surveys. Confirmatory factor analyses provided evidence for the validity of factors that were theoretically expected, meaningful and cohesive. Comparative fit indices were 0.91 for medical colleague data, 0.87 for co-worker data and 0.81 for patient data. Paired t-test analysis showed significant change between the two assessments from medical colleagues and co-workers, but not between the two patient surveys. Multiple linear regressions explained 2.1% of the variance at time 2 for medical colleagues, 21.4% of the variance for co-workers and 16.35% of the variance for patient assessments, with professionalism a key variable in all regressions. CONCLUSIONS: There is evidence for the construct validity of the instruments and for their stability over time. Upward changes in performance will occur, although their effect size is likely to be small to moderate.


Subject(s)
Clinical Competence/standards , Family Practice/standards , Feedback , Health Personnel/psychology , Patients/psychology , Physicians, Family/standards , Canada , Family Practice/education , Female , Humans , Longitudinal Studies , Male , Physician-Patient Relations , Physicians, Family/education , Psychometrics , Statistics as Topic