Results 1 - 20 of 73
1.
Med Educ; 58(1): 27-35, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37559341

ABSTRACT

CONTEXT: Electronic health records (EHRs) have transformed clinical practice. They are not simply replacements for paper records but integrated systems with the potential to improve patient safety and quality of care. Training physicians in the use of EHRs is a highly complex intervention that occurs in a dynamic socio-technical health system. Training in this complex space is considered a wicked problem and would benefit from analytic approaches beyond traditional linear cause-and-effect analysis. Social sciences theories, which view technological change in relation to complex social and institutional processes, provide a useful starting point. AIM: Our aim, therefore, is to introduce the medical education scholar to a selection of theoretical approaches from the Social Studies of Science and Technology (SSST) literatures to inform educational efforts in training for EHR use. METHODS: We suggest a body of theories and frameworks that can expand the epistemological repertoire of medical education scholarship in response to this wicked problem. Drawing from our work on EHR implementation, we discuss current limitations in framing training for EHR use as a research problem in medical education. We then present a selection of alternative theories. RESULTS: The Unified Theory of Acceptance and Use of Technology (UTAUT) explains the individual adoption of new technologies in the workplace through four key constructs: performance expectancy, effort expectancy, social influence and facilitating conditions. Social Practice Theory (SPT), rather than focusing on individuals or institutions, starts with the activity or practice. The socio-technical model (STM) is a comprehensive theory that offers a multidimensional framework for studying the innovation and application of EHRs. Practical examples are provided. CONCLUSIONS: We argue that education for effective utilisation of EHRs requires moving beyond the epistemological monism often present in the field. New theoretical lenses can illuminate the complexity of this research space and help identify best practices for educating and training physicians.


Subject(s)
Medical Education, Medical Informatics, Physicians, Humans, Electronic Health Records, Social Sciences
2.
Med Educ; 57(4): 337-348, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36181382

ABSTRACT

BACKGROUND/PURPOSE: Despite the widespread use of Electronic Health Records (EHRs), their promised benefits have not been clearly realised, due in part to inadequate physician training. Training for EHR use is a highly complex intervention that occurs in a dynamic socio-technical health system. The purpose of this study was to describe and critically assess the interplay between educational activities and organisational factors that influenced EHR training and implementation across two different hospitals. METHODS: Grounded in a socio-technical framework, a comparative qualitative case study was undertaken, an approach well suited to real-world processes. Semi-structured interviews (n = 43) were completed with administrative leaders, staff physicians, residents and EHR trainers from two Canadian academic hospitals. Thematic analysis was used. RESULTS: Similar findings were noted at both hospitals despite different implementation strategies. Despite mandatory training, physicians described limited transferability of training to the workplace. Contributing factors included standardised vendor modules (lacking specificity for their clinical context), variable EHR trainer expertise, limited post-launch training and insufficient preparation for changes to workflow. Physicians described learning while caring for patients and using workarounds. Strong emotional responses were reported, including anger, frustration, anxiety and fear of harming patients. CONCLUSIONS: Training physicians for effective EHR utilisation requires organisational culture transformation, as EHRs impact all aspects of clinical workflow. Analytic attention to workflows, ongoing post-launch training and recognition of the interdependency of multiple factors are critical to preparing physicians to provide effective clinical care, and potentially to reducing burnout. A list of key considerations is provided for educational leaders.


Subject(s)
Electronic Health Records, Physicians, Humans, Canada, Physicians/psychology, Hospitals, Educational Status
3.
Article in English | MEDLINE | ID: mdl-38010576

ABSTRACT

First impressions can influence rater-based judgments, but their contribution to rater bias is unclear. Research suggests raters can overcome first impressions in experimental examination contexts where first impressions are made explicit, but these findings may not generalize to a workplace context where first impressions are formed implicitly. This study had two aims: first, to assess whether first impressions affect raters' judgments when workplace performance changes; second, to determine whether explicitly stating these impressions affects subsequent ratings compared with implicitly formed first impressions. Physician raters viewed six videos in which learner performance either changed (Strong to Weak or Weak to Strong) or remained consistent. Raters were assigned to one of two groups. Group one (n = 23, Explicit) made a first impression global rating (FIGR), then scored learners using the Mini-CEX. Group two (n = 22, Implicit) scored learners at the end of each video solely with the Mini-CEX. For the Explicit group, in the Strong to Weak condition the FIGR (M = 5.94) was higher than the Mini-CEX global rating (GR) (M = 3.02, p < .001). In the Weak to Strong condition, the FIGR (M = 2.44) was lower than the Mini-CEX GR (M = 3.96, p < .001). There was no difference between the FIGR and the Mini-CEX GR in the consistent condition (M = 6.61 and M = 6.65, respectively, p = .84). There were no statistically significant differences between the two groups' Mini-CEX GRs in any of the conditions. Therefore, raters adjusted their judgments based on the learners' performances. Furthermore, raters who made their first impressions explicit showed rater bias similar to that of raters who followed a more naturalistic process.

4.
Med Teach; 45(9): 978-983, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36786837

ABSTRACT

INTRODUCTION: The Ottawa Conference on the Assessment of Competence in Medicine and the Healthcare Professions was first convened in Ottawa in 1985. Since then, what has become known as the Ottawa conference has been held in various locations around the world every 2 years. It has become an important conference for the assessment community, including researchers, educators, administrators and leaders, to share contemporary knowledge and develop international standards for assessment in medical and health professions education. METHODS: The Ottawa 2022 conference was held in Lyon, France, in conjunction with the AMEE 2022 conference. A diverse group of international assessment experts was invited to present a symposium at the AMEE conference summarising key concepts from the Ottawa conference. This paper was developed from that symposium. RESULTS AND DISCUSSION: This paper summarises key themes and issues that emerged from the Ottawa 2022 conference. It highlights the importance of the consensus statements and discusses challenges for assessment, such as issues of equity, diversity and inclusion, shifts in emphasis towards systems of assessment, implications of 'big data' and analytics, and the challenge of ensuring that published research and practice are based on contemporary theories and concepts.


Subject(s)
Medicine, Professional Competence, Humans
5.
BMC Med Educ; 23(1): 581, 2023 Aug 17.
Article in English | MEDLINE | ID: mdl-37592282

ABSTRACT

BACKGROUND: Headache disorders are the most common neurological disorders worldwide. Despite their prevalence and importance, the topic of headache is inconsistently taught at both the undergraduate and postgraduate levels. The goal of this study was to establish a better picture of the current state of Headache Medicine (HM) training in Neurology postgraduate programs in Canada and to describe the impact of the COVID-19 pandemic on training in this domain. METHODS: Online surveys were sent to senior residents of adult Neurology programs in Canada. We also conducted telephone interviews with Neurology Program Directors. Descriptive statistics were analyzed, and thematic analysis was used to review free-text responses. RESULTS: A total of 36 residents and 3 Program Directors participated in the study. Most of the teaching in HM is done by headache specialists and general neurology faculty, with formal teaching given mainly during academic half-days. Most programs expose their residents to Onabotulinum toxin A injections and peripheral nerve blocks but offer little formal teaching on these procedures. Residents consider HM teaching important and would like more of it. They do not feel comfortable performing interventional headache treatments, despite believing these should be part of the skill set of a general neurologist. CONCLUSION: Our study is the first to establish the current state of headache teaching in postgraduate neurology programs as perceived by trainees and program directors in Canada. The current educational offerings leave residents feeling poorly prepared to manage headaches, including procedural interventions. There is a need to diversify the sources of teaching so that the educational burden does not fall mostly upon headache specialists, who are already in short supply. Neurology residency programs need to adapt their curricula to meet the current need in HM.


Subject(s)
Internship and Residency, Neurology, Adult, Humans, Canada, Educational Status, Headache/therapy
6.
Adv Health Sci Educ Theory Pract; 26(3): 1133-1156, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33566199

ABSTRACT

Understanding which factors can impact rater judgments in assessments is important to ensure quality ratings. One such factor is whether prior performance information (PPI) about learners influences subsequent decision making. This information can be acquired directly, when the rater sees the same learner, or different learners, over multiple performances, or indirectly, when the rater is provided with external information about the learner prior to rating a performance (i.e., learner handover). The purpose of this narrative review was to summarize and highlight key concepts from multiple disciplines regarding the influence of PPI on subsequent ratings, discuss implications for assessment and provide a common conceptualization to inform research. Key findings include: (a) assimilation (rater judgments are biased towards the PPI) occurs with indirect PPI, and contrast (rater judgments are biased away from the PPI) occurs with direct PPI; (b) negative PPI appears to have a greater effect than positive PPI; (c) when viewing multiple performances, context effects of indirect PPI appear to diminish over time; and (d) context effects may occur with any level of target performance. Furthermore, some raters are not susceptible to context effects, but it is unclear what factors are predictive. Rater expertise and training do not consistently reduce these effects, whereas making raters more accountable, providing specific standards and reducing rater cognitive load may reduce them. Theoretical explanations for these findings are discussed.


Subject(s)
Clinical Competence, Educational Measurement, Humans, Judgment, Observer Variation, Research Personnel
7.
Adv Health Sci Educ Theory Pract; 26(1): 199-214, 2021 Mar.
Article in English | MEDLINE | ID: mdl-32577927

ABSTRACT

Learner handover (LH), the sharing of information about learners between faculty supervisors, allows for the longitudinal assessment fundamental to the competency-based education model. However, its potential to bias future assessments has been raised as a concern. The purpose of this study was to determine whether prior performance information such as LH influences the assessment of learners in the clinical context. Between December 2017 and June 2018, forty-two faculty members and final-year residents from the Department of Medicine at the University of Ottawa were assigned to one of three study groups through quasi-randomisation, taking into account gender, speciality and rater experience. In a counter-balanced design, each group received positive, negative or no LH prior to watching six simulated learner-patient encounter videos. Participants rated each video using the mini-CEX and completed a questionnaire on their general impressions of LH. There was a significant difference in mean mini-CEX competency scale scores between the negative (M = 5.29) and positive (M = 5.97) LH groups (p < .001, d = 0.81). Similar findings were observed for the single overall clinical competence ratings. In the post-study questionnaire, 22/28 (78%) of participants had correctly deduced the purpose of the study and 14/28 (50%) felt LH did not influence their assessments. LH influenced mini-CEX scores despite raters' awareness of the potential for bias. These results suggest that LH can influence a rater's performance assessment, and careful consideration of its potential implications is required.
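Several of the studies in this list report Cohen's d as the effect size (d = 0.81 above). As a minimal sketch of how that statistic is computed for two independent groups: the formula is standard, but the sample ratings below are invented for illustration and are not the study data.

    import numpy as np

    def cohens_d(x, y):
        """Cohen's d for two independent samples, using the pooled SD."""
        nx, ny = len(x), len(y)
        pooled_var = ((nx - 1) * np.var(x, ddof=1)
                      + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
        return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

    # Illustrative mini-CEX competency-scale ratings (invented values)
    positive_lh = np.array([6.1, 5.8, 6.3, 5.7, 6.0])
    negative_lh = np.array([5.2, 5.4, 5.1, 5.6, 5.3])
    print(round(cohens_d(positive_lh, negative_lh), 2))

A helper is defined here because scipy.stats does not ship a Cohen's d function; only the pooled-SD variant shown is implemented.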


Subject(s)
Clinical Competence/standards, Educational Measurement/standards, Internship and Residency/organization & administration, Observer Variation, Adult, Canada, Competency-Based Education, Educational Measurement/methods, Female, Humans, Internship and Residency/standards, Male, Middle Aged, Sex Factors
8.
Adv Health Sci Educ Theory Pract; 23(4): 721-732, 2018 Oct.
Article in English | MEDLINE | ID: mdl-29556923

ABSTRACT

There is an increasing focus on factors that influence the variability of rater-based judgments. First impressions are one such factor: judgments about people that are made quickly and based on little information. Under some circumstances, these judgments can be predictive of subsequent decisions. A concern for both examinees and test administrators is whether this relationship remains stable when the performance of the examinee changes; that is, once a first impression is formed, to what degree will an examiner be willing to modify it? The purpose of this study was to determine the degree to which first impressions influence final ratings when examinee performance changes within the context of an objective structured clinical examination (OSCE). Physician examiners (n = 29) viewed seven videos of examinees (i.e., actors) performing a physical exam in a single OSCE station. They rated the examinees' clinical abilities on a six-point global rating scale after 60 s (first impression global rating, or FIGR), then observed the examinee for the remainder of the station and provided a final global rating (GRS). In three of the videos, the examinees' performance remained consistent throughout. In two videos, performance changed from initially strong to weak, and in two videos from initially weak to strong. The mean FIGR for the Consistent condition (M = 4.80) and the Strong to Weak condition (M = 4.87) were higher than their respective GRS ratings (M = 3.93 and M = 2.73), with a greater decline for the Strong to Weak condition. The mean FIGR for the Weak to Strong condition was lower (M = 3.60) than the corresponding mean GRS (M = 4.81). This pattern of findings suggests that raters were willing to change their judgments based on examinee performance. Future work should explore the impact of making a first impression explicit versus implicit and the role of context in the relationship between a first impression and a subsequent judgment.


Subject(s)
Clinical Competence/standards, Educational Measurement/methods, Educational Measurement/standards, Observer Variation, Adult, Female, Humans, Judgment, Male, Middle Aged, Socioeconomic Factors
9.
Teach Learn Med; 30(2): 152-161, 2018.
Article in English | MEDLINE | ID: mdl-29240463

ABSTRACT

Construct: The purpose of this study was to provide validity evidence for the mini-clinical evaluation exercise (mini-CEX) as an assessment tool for clinical skills in the workplace. BACKGROUND: Previous research has demonstrated validity evidence for the mini-CEX, but most studies were carried out in internal medicine or other single disciplines, limiting the generalizability of the findings. If the mini-CEX is to be used in multidisciplinary contexts, then validity evidence should be gathered in similar settings. Specifically, we sought to explore the effects of discipline and rater type on mini-CEX scores, its internal structure, and the relationship between mini-CEX and OSCE scores in a multidisciplinary context. APPROACH: During clerkship, medical students completed eight different rotations (family medicine, internal medicine, surgery, psychiatry, pediatrics, emergency, anesthesiology, and obstetrics and gynecology). During each rotation, mini-CEX forms and a written examination were completed. Two multidisciplinary OSCEs (in clerkship Year 3 and at the start of Year 4) assessed clinical skills. The reliability of the mini-CEX was assessed using generalizability analyses. To assess the influence of discipline and rater type, mean scores were analyzed using a factorial analysis of variance. Total mini-CEX scores were correlated with scores from the students' respective OSCEs and corresponding written exams. RESULTS: Eighty-two students met inclusion criteria, for a total of 781 ratings (an average of 9.82 mini-CEX forms per student). There was a significant effect of discipline (p < .001, ηp2 = .16), and faculty provided lower scores than nonfaculty raters (7.12 vs. 7.41; p = .002, ηp2 = .02). The g-coefficient was .53 when discipline was included as a facet and .23 when rater type was a facet. There were low but statistically significant correlations between the mini-CEX and the 4th-year OSCE total score and OSCE communication scores, r(80) = .40, p < .001 and r(80) = .29, p = .009, respectively. The mini-CEX was not correlated with the written examination scores for any of the disciplines. CONCLUSIONS: Our results provide conflicting validity evidence for the mini-CEX. Mini-CEX ratings were correlated with multidisciplinary OSCEs but not written examinations, supporting the validity argument. However, the reliability of the mini-CEX was low to moderate, and error accounted for the greatest amount of variability in scores. Scores varied by discipline, and resident raters gave higher scores than faculty. These results should be taken into account when considering the use of the mini-CEX in different contexts.
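As a rough illustration of where a g-coefficient such as the .53 reported above comes from, here is a minimal sketch for a fully crossed persons-by-raters design. It assumes a single-facet generalizability study with invented scores; the actual study used a more elaborate design with discipline and rater type as facets.

    import numpy as np

    # Illustrative scores: rows = learners, columns = raters,
    # fully crossed single-facet (p x r) generalizability design.
    scores = np.array([
        [7.0, 6.5, 7.5],
        [5.0, 5.5, 4.5],
        [8.0, 7.5, 8.5],
        [6.0, 6.5, 5.5],
    ])
    n_p, n_r = scores.shape
    grand = scores.mean()

    # Mean squares from the two-way ANOVA with one observation per cell.
    ms_p = n_r * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
    resid = (scores - scores.mean(axis=1, keepdims=True)
             - scores.mean(axis=0) + grand)
    ms_e = (resid ** 2).sum() / ((n_p - 1) * (n_r - 1))

    # Expected-mean-square estimates of the variance components.
    var_p = max((ms_p - ms_e) / n_r, 0.0)   # universe-score variance
    var_e = ms_e                            # rater-by-person error

    # Relative g-coefficient for a mean over n_r raters
    # (the rater main effect does not enter the relative coefficient).
    g = var_p / (var_p + var_e / n_r)
    print(round(g, 2))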


Subject(s)
Clinical Clerkship, Clinical Competence/standards, Interdisciplinary Communication, Internal Medicine/education, Canada, Humans
10.
Med Educ; 51(7): 755-767, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28418162

ABSTRACT

CONTEXT: Although health professions education scholarship units (HPESUs) share a commitment to the production and dissemination of rigorous educational practices and research, they are situated in many different contexts and have a wide range of structures and functions. OBJECTIVES: In this study, the authors explore the institutional logics common across HPESUs and how these logics influence the organisation and activities of HPESUs. METHODS: The authors analysed interviews with HPESU leaders in Canada (n = 12), Australia (n = 21), New Zealand (n = 3) and the USA (n = 11). Using an iterative process, they engaged in inductive and deductive analyses to identify institutional logics across all participating HPESUs, and explored the contextual factors that influence how these logics impact each HPESU's structure and function. RESULTS: Participants identified three institutional logics influencing the organisational structure and functions of an HPESU: (i) the logic of financial accountability; (ii) the logic of a cohesive education continuum; and (iii) the logic of academic research, service and teaching. Although most HPESUs embodied all three logics, the power of each logic varied among units. The relative power of each logic influenced leaders' decisions about how members of the unit allocate their time, and what kinds of scholarly contributions and products are valued by the HPESU. CONCLUSIONS: Identifying the configuration of these three logics within and across HPESUs provides insight into why individual units are structured and function in particular ways. Having a common language in which to discuss these logics can enhance transparency, facilitate evaluation, and help leaders select appropriate indicators of HPESU success.


Subject(s)
Fellowships and Scholarships/economics, Financial Management, Health Occupations, Leadership, Australia, Canada, Financial Management/economics, Health Occupations/economics, Humans, Logic, New Zealand
11.
Adv Health Sci Educ Theory Pract; 22(4): 969-983, 2017 Oct.
Article in English | MEDLINE | ID: mdl-27848171

ABSTRACT

Competency-based assessment is placing increasing emphasis on the direct observation of learners. For this process to produce valid results, it is important that raters provide accurate, high-quality judgments. Unfortunately, the quality of these judgments is variable, and the roles of the factors that influence their accuracy are not clearly understood. One such factor is first impressions: judgments about people we do not know, made quickly and based on very little information. This study explores the influence of first impressions in an OSCE; specifically, its purpose is to begin to examine the accuracy of a first impression and its influence on subsequent ratings. We created six videotapes of history-taking performances, each scripted from the real performance of one of six examinee residents within a single OSCE station. The performances were re-enacted and videotaped with six different actors playing the examinees and one actor playing the patient. A total of 23 raters (i.e., physician examiners) reviewed each video and were asked to make a global judgment of the examinee's clinical abilities after 60 s on a six-point global rating scale (First Impression GR), and then to rate their confidence in the accuracy of that judgment on a five-point scale (Confidence GR). Raters then watched the remainder of the examinee's performance and made another global rating (Final GR) before moving on to the next video. First impression ratings of ability varied across examinees and were moderately correlated with expert ratings (r = .59, 95% CI [-.13, .90]). There were significant differences in mean ratings for three examinees. Correlations ranged from .05 to .56 but were significant for only three examinees. Raters' confidence in their first impressions was not related to the likelihood of changing their rating between the first impression and the subsequent rating. The findings suggest that first impressions could play a role in explaining variability in judgments, but their importance depended on the videotaped performance of the examinees. More work is needed to clarify the conditions that support or discourage the use of first impressions.


Subject(s)
Education, Medical/methods, Educational Measurement/methods, Educational Measurement/standards, Faculty, Medical/psychology, Clinical Competence/standards, Education, Medical/standards, Faculty, Medical/standards, Humans, Medical History Taking/standards, Observer Variation, Reproducibility of Results, Videotape Recording
12.
Med Teach; 39(1): 14-19, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27841062

ABSTRACT

Consensus group methods are widely used in research to identify and measure areas where incomplete evidence exists for decision-making. Despite their widespread use, these methods are often inconsistently applied and reported. Using examples from the three most commonly used methods (the Delphi, Nominal Group and RAND/UCLA techniques), this paper and its associated Guide describe these methods and highlight common weaknesses in methodology and reporting. The paper outlines a series of recommendations to assist researchers using consensus group methods in providing a comprehensive description and justification of the steps taken in their studies.


Subject(s)
Consensus, Delphi Technique, Education, Medical/organization & administration, Research Design, Group Processes, Humans
13.
Med Educ; 50(3): 351-8, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26896020

ABSTRACT

CONTEXT: Progress tests, in which learners are repeatedly assessed on equivalent content at different times in their training and provided with feedback, would seem to lend themselves well to a competency-based framework, which requires more frequent formative assessments. The objective structured clinical examination (OSCE) progress test is a relatively new form of assessment used to assess the progression of clinical skills. The purpose of this study was to establish further evidence for the use of an OSCE progress test by demonstrating an association between scores from this assessment method and those from a national high-stakes examination. METHODS: Eight years of data from an Internal Medicine Residency OSCE (IM-OSCE) progress test were compared with scores on the Royal College of Physicians and Surgeons of Canada Comprehensive Objective Examination in Internal Medicine (RCPSC IM examination), which comprises both a written and a performance-based component (n = 180). Correlations between scores on the two examinations were calculated, and logistic regression analyses were performed comparing IM-OSCE progress test scores with an 'elevated risk of failure' on either component of the RCPSC IM examination. RESULTS: Correlations between scores from the IM-OSCE (PGY-1 to PGY-4 residents) and those from the RCPSC IM examination ranged from 0.316 (p = 0.001) to 0.554 (p < 0.001) for the performance-based component and from 0.305 (p = 0.002) to 0.516 (p < 0.001) for the written component. Logistic regression models demonstrated that PGY-2 and PGY-4 scores from the IM-OSCE were predictive of an 'elevated risk of failure' on both components of the RCPSC IM examination. CONCLUSIONS: This study provides further evidence for the use of OSCE progress testing by demonstrating a correlation between scores from an OSCE progress test and a national high-stakes examination. Furthermore, there is evidence that OSCE progress test scores are predictive of future performance on a national high-stakes examination.
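A minimal sketch of the style of analysis described above, pairing a Pearson correlation with a logistic regression that flags an 'elevated risk of failure'. The simulated scores and the 15th-percentile cutoff are illustrative assumptions, not the study's data or its definition of risk.

    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Illustrative data: progress-test scores and national-exam scores.
    osce = rng.normal(500, 100, size=180)
    exam = 0.5 * osce + rng.normal(250, 80, size=180)

    # Association between the two score sets.
    r, p = pearsonr(osce, exam)
    print(f"r = {r:.3f}, p = {p:.3g}")

    # 'Elevated risk of failure' as a binary outcome (illustrative cutoff).
    at_risk = (exam < np.percentile(exam, 15)).astype(int)
    model = LogisticRegression().fit(osce.reshape(-1, 1), at_risk)
    print("coefficient:", model.coef_[0][0])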


Subject(s)
Clinical Competence/standards, Educational Measurement/methods, Internship and Residency/standards, Licensure, Medical, Canada, Internal Medicine/education
14.
Med Teach; 38(1): 59-63, 2016.
Article in English | MEDLINE | ID: mdl-25310244

ABSTRACT

OBJECTIVE: The purpose of this study was to investigate whether webcast lectures are comparable to live lectures as a teaching tool in medical school. METHODS: Three Otolaryngology-Head & Neck Surgery (OTO-HNS) lectures were given to third-year medical students as part of their regular academic curriculum, with one group receiving the lectures in a live format and the other in a webcast format. All lectures (live or webcast) were given by the same lecturer and contained identical material. Three outcome measures were used: a student satisfaction survey, performance on the OTO-HNS component of the written examination, and performance on an OTO-HNS OSCE station in the general end-of-year OSCE session. RESULTS: Performance on the written examination was equal between the two groups, and the webcast group outperformed the live lecture group on the OSCE station. The majority of students in the webcast group felt it was an effective learning tool; most viewed the lectures more than once and felt that this was beneficial to their learning. CONCLUSION: Webcasts appear to be as effective as live lectures as a teaching tool.


Subject(s)
Education, Medical, Undergraduate/methods, Teaching/methods, Webcasts as Topic, Consumer Behavior, Curriculum, Educational Measurement, Humans
15.
Med Teach; 38(2): 168-73, 2016.
Article in English | MEDLINE | ID: mdl-25909896

ABSTRACT

PURPOSE: The purpose of this study was to explore the use of an objective structured clinical examination for Internal Medicine residents (IM-OSCE) as a progress test for clinical skills. METHODS: Data from eight administrations of an IM-OSCE were analyzed retrospectively. Scores were scaled to a mean of 500 and a standard deviation (SD) of 100. A time-based comparison, treating post-graduate year (PGY) as a repeated-measures factor, was used to determine how residents' performance progressed over time. RESULTS: Residents' total IM-OSCE scores (n = 244) increased over training, from a mean of 445 (SD = 84) in PGY-1 to 534 (SD = 71) in PGY-3 (p < 0.001). In an analysis of sub-scores including only residents who participated in the IM-OSCE in all three years of training (n = 46), mean structured oral scores increased from 464 (SD = 92) to 533 (SD = 83) (p < 0.001), physical examination scores increased from 464 (SD = 82) to 520 (SD = 75) (p < 0.001), and procedural skills scores increased from 495 (SD = 99) to 555 (SD = 67) (p = 0.033). There was no significant change in communication scores (p = 0.97). CONCLUSIONS: The IM-OSCE can be used to demonstrate the progression of clinical skills throughout residency training. Although most of the clinical skills assessed improved as residents progressed through their training, communication skills did not appear to change.
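The rescaling described above (mean 500, SD 100) is a plain linear transformation of standardized scores. A minimal sketch with invented raw scores:

    import numpy as np

    raw = np.array([62.0, 71.5, 55.0, 80.0, 68.5])  # illustrative raw OSCE scores

    # Standardize, then rescale to mean 500 and SD 100.
    z = (raw - raw.mean()) / raw.std(ddof=1)
    scaled = 500 + 100 * z
    print(scaled.round(1))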


Subject(s)
Clinical Competence/standards, Educational Measurement/methods, Internal Medicine/education, Internship and Residency, Humans, Ontario, Retrospective Studies
16.
Teach Learn Med; 28(4): 406-414, 2016.
Article in English | MEDLINE | ID: mdl-27700252

ABSTRACT

Construct: The impact of using nonbinary checklists to score residents from different levels of training participating in objective structured clinical examination (OSCE) progress tests was explored. BACKGROUND: OSCE progress tests typically employ rating instruments similar to those of traditional OSCEs. However, progress tests differ from other assessment modalities in that learners from different stages of training participate in the same examination, which poses challenges when deciding how to assign scores. In an attempt to better capture performance, nonbinary checklists were introduced in two OSCE progress tests. The purposes of this study were (a) to identify differences in the use of checklist options (e.g., done satisfactorily, attempted, or not done) by task type, (b) to analyze the impact of different scoring methods using nonbinary checklists for two OSCE progress tests (nonprocedural and procedural) for Internal Medicine residents, and (c) to determine which scoring method is better suited to a given task. APPROACH: A retrospective analysis examined differences in scores (n = 119) for the two OSCE progress tests. Three scoring methods (hawk, dove, and hybrid) varied in stringency in how they awarded marks for nonbinary checklist items rated as done satisfactorily, attempted, or not done. Difficulty, reliability (internal consistency), item-total correlations and pass rates were compared for each OSCE under the three scoring methods. RESULTS: Mean OSCE scores were highest using the dove method and lowest using the hawk method. The hawk method resulted in higher item-total correlations for most stations, but there were differences by task type. Overall score reliability did not differ significantly across the three methods. Pass-fail status differed as a function of scoring method and exam type, with the hawk and hybrid methods producing higher failure rates for the nonprocedural OSCE and the dove method producing a higher failure rate for the procedural OSCE. CONCLUSION: The use of different scoring methods for nonbinary OSCE checklists resulted in differences in mean scores and pass-fail status, and the results varied between procedural and nonprocedural OSCEs.
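The abstract does not report the exact mark awarded to each checklist option under the three methods, so the weights in this minimal sketch are assumptions: the hawk method credits only items done satisfactorily, the dove method fully credits attempts, and the hybrid awards partial credit.

    # Illustrative point values per checklist option; the actual weights
    # used in the study are not reported in the abstract.
    SCORING = {
        "hawk":   {"done": 1.0, "attempted": 0.0, "not_done": 0.0},
        "dove":   {"done": 1.0, "attempted": 1.0, "not_done": 0.0},
        "hybrid": {"done": 1.0, "attempted": 0.5, "not_done": 0.0},
    }

    def station_score(ratings, method):
        """Percentage score for one station given nonbinary checklist ratings."""
        weights = SCORING[method]
        return 100 * sum(weights[r] for r in ratings) / len(ratings)

    ratings = ["done", "attempted", "not_done", "done", "attempted"]
    for method in SCORING:
        print(method, station_score(ratings, method))

Run on the same ratings, the three methods give 40%, 80% and 60% respectively, which mirrors the reported pattern of dove scores being highest and hawk scores lowest.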


Subject(s)
Checklist, Clinical Competence, Educational Measurement, Humans, Reproducibility of Results, Retrospective Studies
17.
Teach Learn Med; 28(1): 52-60, 2016.
Article in English | MEDLINE | ID: mdl-26787085

ABSTRACT

THEORY: The move to competency-based education has heightened the importance of the direct observation of clinical skills and effective feedback. The Objective Structured Clinical Examination (OSCE) is widely used for assessment and affords an opportunity for direct observation and feedback to occur simultaneously. For feedback to be effective, it should include direct observation, assessment of performance, provision of feedback, reflection, decision making, and use of the feedback for learning and change. HYPOTHESES: If one of the goals of feedback is to engage students in thinking about their performance (i.e., reflection), it would seem imperative that they can recall this feedback both immediately and into the future. This study explores recall of feedback in the context of an OSCE. Specifically, its purposes were to (a) determine the amount and accuracy of feedback that trainees remember immediately after an OSCE, as well as 1 month later, and (b) assess whether prompting immediate recall improves delayed recall. METHODS: Internal medicine residents received 2 minutes of verbal feedback from physician examiners in the context of an OSCE. The feedback was audio-recorded and later transcribed. Residents were randomly allocated to an immediate recall group (immediate-RG; n = 10) or a delayed recall group (delayed-RG; n = 8). The immediate-RG completed a questionnaire prompting recall of the feedback received immediately after the OSCE and again 1 month later; the delayed-RG completed the questionnaire only 1 month after the OSCE. The number and accuracy of feedback points provided by examiners were compared with the points recalled by residents, and recall at 1 month was compared between the two groups. RESULTS: Physician examiners provided considerably more feedback points (M = 16.3) than residents recalled immediately after the OSCE (M = 2.61, p < .001). There was no significant difference between the number of feedback points recalled immediately after the OSCE (M = 2.61) and 1 month later (M = 1.96; p = .06, Cohen's d = .70). Prompting immediate recall did not improve later recall. The mean accuracy score for feedback recalled immediately after the OSCE was 4.3/9, or 'somewhat representative'; at 1 month the score dropped to 3.5/9, or 'not representative' (ns). CONCLUSION: Residents recall very few feedback points immediately after an OSCE and 1 month later, and the points they do recall are neither very accurate nor representative of the feedback actually provided.


Subject(s)
Feedback, Internal Medicine/education, Internship and Residency, Mental Recall, Educational Measurement, Humans, Physicians, Surveys and Questionnaires, Tape Recording
18.
Teach Learn Med; 28(4): 385-394, 2016.
Article in English | MEDLINE | ID: mdl-27285377

ABSTRACT

Construct: This article describes the development of, and validity evidence behind, a new rating scale to assess feedback quality in the clinical workplace. BACKGROUND: Competency-based medical education has mandated a shift to learner-centredness, authentic observation and frequent formative assessments, with a focus on the delivery of effective feedback. Because feedback has been shown to be of variable quality and effectiveness, an assessment of feedback quality in the workplace is important to ensure that trainees receive optimal learning opportunities. The purposes of this project were to develop a rating scale for the quality of verbal feedback in the workplace (the Direct Observation of Clinical Skills Feedback Scale [DOCS-FBS]) and to gather validity evidence for its use. APPROACH: Two panels of experts (local and national) took part in a nominal group technique to identify features of high-quality feedback. Through multiple iterations and review, nine features were developed into the DOCS-FBS. Four rater types (residents, n = 21; medical students, n = 8; faculty, n = 12; and educators, n = 12) used the DOCS-FBS to rate videotaped feedback encounters of variable quality. The psychometric properties of the scale were determined using a generalizability analysis. Participants also completed a survey using a 5-point Likert scale addressing the ease of use, clarity, knowledge acquisition and acceptability of the scale. RESULTS: Mean video ratings ranged from 1.38 to 2.96 out of 3 and followed the intended pattern, suggesting that the tool allowed raters to distinguish between examples of higher and lower quality feedback. There were no significant differences between rater types (range = 2.36-2.49), suggesting that all groups used the tool in the same way. The generalizability coefficients for the scale ranged from 0.97 to 0.99. Item-total correlations were all above 0.80, suggesting some redundancy among items. Participants found the scale easy to use (M = 4.31/5) and clear (M = 4.23/5), and most would recommend its use (M = 4.15/5). Use of the DOCS-FBS was acceptable to both trainees (M = 4.34/5) and supervisors (M = 4.22/5). CONCLUSIONS: The DOCS-FBS can reliably differentiate between feedback encounters of higher and lower quality and has excellent internal consistency. We foresee the DOCS-FBS being used to provide objective evidence, through formal assessment of feedback quality, that faculty development efforts aimed at improving feedback skills can yield results.


Subject(s)
Competency-Based Education, Education, Medical, Graduate, Feedback, Clinical Competence, Humans, Students, Medical
19.
Adv Health Sci Educ Theory Pract; 20(1): 85-100, 2015 Mar.
Article in English | MEDLINE | ID: mdl-24823793

ABSTRACT

Internists are required to perform a number of procedures that demand mastery of both technical and non-technical skills; however, formal assessment of these skills is often lacking. The purpose of this study was to develop, implement, and gather validity evidence for a procedural skills objective structured clinical examination (PS-OSCE) for internal medicine (IM) residents, to assess their technical and non-technical skills when performing procedures. Thirty-five first- to third-year IM residents participated in a 5-station PS-OSCE, which combined partial task models, standardized patients, and allied health professionals. Formal blueprinting was performed, and content experts were used to develop the cases and rating instruments. Examiners underwent a frame-of-reference training session to prepare them for their rater role. Scores were compared across levels of training and experience, and against evaluation data from a non-procedural OSCE (IM-OSCE). Reliability was calculated using generalizability analyses. Reliabilities for the technical and non-technical scores were 0.68 and 0.76, respectively. Third-year residents scored significantly higher than first-year residents on the technical (73.5% vs. 62.2%) and non-technical (83.2% vs. 75.1%) components of the PS-OSCE (p < 0.05). Residents who had performed the procedures more frequently scored higher on three of the five stations (p < 0.05). There was a moderate disattenuated correlation (r = 0.77) between the IM-OSCE scores and the technical component of the PS-OSCE scores. The PS-OSCE is a feasible method for assessing multiple competencies related to performing procedures, and this study provides validity evidence to support its use as an in-training examination.


Subject(s)
Clinical Competence, Education, Medical, Graduate/standards, Educational Measurement/methods, Internal Medicine/education, Internship and Residency, Adult, Female, Humans, Male, Models, Educational, Ontario, Reproducibility of Results
20.
Med Educ; 48(3): 255-61, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24528460

ABSTRACT

CONTEXT: Selected-response (SR) formats (e.g. multiple-choice questions) and constructed-response (CR) formats (e.g. short-answer questions) are commonly used to test the knowledge of examinees. Scores on SR formats are typically higher than scores on CR formats. This difference is often attributed to examinees being cued by the options within an SR question, but there could be alternative explanations. The purpose of this study was to expand on previous work on the cueing effect of SR formats by directly contrasting conditions that support cueing versus memory of previously seen questions. METHODS: During an objective structured clinical examination, students (n = 144) completed two consecutive stations in which they were presented with the same written cases but in different formats. Group 1 students were presented with CR questions followed by SR questions; Group 2 students were presented with the questions in the reverse order. Participants were then asked to describe their testing experience. RESULTS: SR scores (M = 4.21/10) were significantly higher than CR scores (M = 3.82/10). However, there was no significant interaction between sequence and format (F(1,142) = 1.59, p = 0.21, ηp2 = 0.01), with scores increasing from 3.49/10 to 4.06/10 in the group that started with CR and decreasing from 4.38/10 to 4.15/10 in the group that started with SR. Correlations between SR and CR scores were high (CR first = 0.78, SR first = 0.89). Questionnaire results indicated that students felt the SR format was easier and led to cueing. CONCLUSION: To better understand test performance, it is important to know how different response formats can influence results. Because SR scores were higher than CR scores irrespective of the format seen first, the pattern is consistent with what would be expected if cueing, rather than memory for prior questions, led to higher SR scores. This could have implications for test designers, especially when selecting question formats.
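A minimal sketch of one way to test the sequence-by-format interaction reported above: treat format as a within-subject factor, compute each student's SR-minus-CR difference, and compare those differences between the two sequence groups. All scores are simulated and the group size is an assumption.

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(1)
    n = 72  # per sequence group (illustrative)

    # Simulated per-student scores out of 10 for each format.
    cr_first_cr = rng.normal(3.5, 1.0, n)
    cr_first_sr = rng.normal(4.1, 1.0, n)
    sr_first_sr = rng.normal(4.4, 1.0, n)
    sr_first_cr = rng.normal(4.2, 1.0, n)

    # Within-subject format effect (SR - CR) per group; comparing these
    # difference scores between groups tests the sequence x format interaction.
    diff_cr_first = cr_first_sr - cr_first_cr
    diff_sr_first = sr_first_sr - sr_first_cr
    t, p = ttest_ind(diff_cr_first, diff_sr_first)
    print(f"t = {t:.2f}, p = {p:.3f}")

For a 2 x 2 design with one within-subject and one between-subject factor, this t-test on difference scores is equivalent to the interaction F-test in a mixed ANOVA (F = t squared).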


Subject(s)
Cues, Education, Medical, Educational Measurement/methods, Students, Medical/psychology, Abdomen, Acute/diagnostic imaging, Clinical Competence/standards, Decision Making, Diagnosis, Differential, Educational Measurement/statistics & numerical data, Heart Failure/diagnostic imaging, Humans, Radiography, Random Allocation