ABSTRACT
While multiple theories exist to explain the diagnostic process, there are few available assessments that reliably determine diagnostic competence in trainees. Most methods focus on aspects of the process of diagnostic reasoning, such as the relation between case features and diagnostic hypotheses. Inevitably, detailed elucidation of aspects of the process requires substantial time per case and limits the number of cases that can be examined given a limited testing time. Shifting assessment to the outcome of diagnostic reasoning, the accuracy of the diagnosis, may serve as a reliable measure of diagnostic competence and would allow increased sampling across cases. The present study is a retrospective analysis of 7 large studies, conducted by 3 research teams, that all used a series of brief written cases to examine the outcome of diagnostic reasoning: the diagnosis. The studies involved over 600 clinicians ranging from final-year medical students to practicing emergency physicians. For 4 studies with usable reliability data, reliability for a 2 h test ranged from .63 to .94. On average, speeded tests were more reliable (.85 vs. .73). To achieve a reliability of .75 required an average test time of 1.11 h for speeded tests and 1.99 h for unspeeded tests. The measure was shown to be positively correlated with both written knowledge tests and measures of problem solving derived from OSCE performance tests. This retrospective analysis provides evidence to support the implementation of outcome-based assessments of clinical reasoning.
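The test-time-versus-reliability extrapolation reported above is the kind of calculation normally carried out with the Spearman-Brown prophecy formula; the abstract does not name the method, so treating it as Spearman-Brown is an assumption, but under that assumption the projection takes the following form:

```latex
% Spearman-Brown prophecy formula (assumed method; the abstract does not name it).
% \rho_1 = observed reliability of the 2 h test; \rho_k = projected reliability
% when the testing time is multiplied by a factor k.
\[
\rho_k = \frac{k\,\rho_1}{1 + (k - 1)\,\rho_1}
\qquad\Longleftrightarrow\qquad
k = \frac{\rho_k\,(1 - \rho_1)}{\rho_1\,(1 - \rho_k)}
\]
```

For example, a 2 h speeded test with reliability .85 would need a length factor of about (.75 × .15)/(.85 × .25) ≈ 0.53, i.e. roughly 1.1 h, to reach a reliability of .75; the abstract's 1.11 h and 1.99 h figures presumably average this kind of per-study calculation rather than applying it to the pooled means.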
Subject(s)
Clinical Competence, Diagnosis, Education, Medical/methods, Educational Measurement/methods, Thinking, Humans, Physicians, Retrospective Studies, Students, Medical

ABSTRACT
It is not uncommon for medical students to raise concerns over the difficulty of a single station within an Objective Structured Clinical Examination (OSCE), particularly when they feel they were subject to an unfair situation. Indeed, test developers share these concerns, particularly about the possibility that a single extremely difficult station may impact student performance on the station that follows. In response to the concerns of both students and examiners, we conducted a study analyzing the scores of multiple OSCEs. Although our analyses did not support the complaints of unfairness targeted at the OSCE, we feel it is a rather enlightening story nevertheless, and one worth sharing.
Subject(s)
Clinical Competence, Education, Medical/organization & administration, Educational Measurement/methods, Educational Measurement/standards, Education, Medical/standards, Female, Humans, Male

ABSTRACT
BACKGROUND: An experimenter-controlled form of reflection has been shown to improve the detection and correction of diagnostic errors in some situations; however, the benefits of participant-controlled reflection have not been assessed. OBJECTIVE: The goal of the current study is to examine how experience and a self-directed decision to reflect affect the accuracy of revised diagnoses. DESIGN: Medical residents diagnosed 16 medical cases (pass 1). Participants were then given the opportunity to reflect on each case and revise their diagnoses (pass 2). PARTICIPANTS: Forty-seven medical residents in post-graduate year (PGY) 1, 2 and 3 were recruited from Hamilton Health Care Centres. MAIN MEASURES: Diagnoses were scored as 0 (incorrect), 1 (partially correct) and 2 (correct). Accuracies and response times in pass 1 were analyzed using an ANOVA with three factors: PGY, decision to revise (yes/no), and case (1-16), averaged across residents. The extent to which additional reflection affected accuracy was examined by analyzing only those cases that were revised, using a repeated measures ANOVA, with pass (1 or 2) as a within-subject factor, and PGY and case or resident as between-subject factors. KEY RESULTS: The mean score at pass 1 for each level was PGY1, 1.17 (SE 0.50); PGY2, 1.35 (SE 0.67); and PGY3, 1.27 (SE 0.94). While there was a trend for increased accuracy with level, this did not achieve significance. The number of residents at each level who revised at least one diagnosis was 12/19 PGY1 (63%), 9/11 PGY2 (82%) and 8/17 PGY3 (47%). Only 8% of diagnoses were revised, resulting in a small but significant increase in scores from pass 1 to pass 2, from 1.20/2 to 1.22/2 (t = 2.15, p = 0.03). CONCLUSIONS: Participants did engage in self-directed reflection for incorrect diagnoses; however, this strategy provided minimal benefits compared to knowing the correct answer. Education strategies should be directed at improving formal and experiential knowledge.
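As a concrete illustration of the pass 1 versus pass 2 comparison on revised cases, the sketch below shows how such a paired analysis could be computed from 0/1/2 diagnostic scores. The data and the use of a paired t-test are illustrative assumptions, not the authors' analysis code (the abstract describes a repeated measures ANOVA).

```python
# Minimal sketch (not the authors' code): paired comparison of diagnostic scores
# before (pass 1) and after (pass 2) self-directed reflection, restricted to
# revised cases. Scores are 0 (incorrect), 1 (partially correct), 2 (correct).
import numpy as np
from scipy import stats

# Hypothetical per-case mean scores for the subset of cases that were revised.
pass1 = np.array([1.0, 0.5, 1.5, 1.0, 2.0, 0.0, 1.0, 1.5])
pass2 = np.array([1.0, 1.0, 1.5, 1.5, 2.0, 0.5, 1.0, 1.5])

# Paired t-test: did revision change accuracy on the cases participants chose to revise?
t_stat, p_value = stats.ttest_rel(pass2, pass1)
print(f"mean pass 1 = {pass1.mean():.2f}, mean pass 2 = {pass2.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```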
Subject(s)
Clinical Competence, Diagnostic Errors/psychology, Internal Medicine/education, Internship and Residency, Thinking, Adult, Decision Making, Education, Medical, Graduate, Educational Measurement, Female, Humans, Male

ABSTRACT
Healthcare practice and education are highly emotional endeavors. While this is recognized by educators and researchers seeking to develop interventions aimed at improving wellness in health professionals and at providing them with skills to deal with emotional interpersonal situations, the field of health professions education has largely ignored the role that emotions play in cognitive processes. The purpose of this review is to provide an introduction to the broader field of emotions, with the goal of better understanding the integral relationship between emotions and cognitive processes. Individuals, at any given time, are in an emotional state. This emotional state influences how they perceive the world around them, what they recall from it, as well as the decisions they make. Rather than treating emotions as undesirable forces that wreak havoc on the rational being, the field of health professions education could be enriched by a greater understanding of how these emotions can shape cognitive processes in increasingly predictable ways.
Subject(s)
Attention/physiology, Decision Making/physiology, Emotions/physiology, Memory/physiology, Humans

ABSTRACT
Background Avoiding or correcting a diagnostic error first requires identifying an error and perhaps deciding to revise a diagnosis, but little is known about the factors that lead to revision. Three aspects of reflective practice (seeking Alternative explanations, exploring the Consequences of missing these alternative diagnoses, and identifying Traits that may contradict the provisional diagnosis) were incorporated into a three-point diagnostic checklist (abbreviated to ACT). Methods Seventeen first- and second-year emergency medicine residents from the University of Toronto participated. Participants read up to eight case vignettes and completed the ACT diagnostic checklist. Provisional and final diagnoses and all responses for alternatives, consequences, and traits were individually scored as correct or incorrect. Additionally, each consequence was scored on a severity scale from 0 (not severe) to 3 (very severe). Average scores for alternatives, consequences, and traits and the severity rating for each consequence were entered into a binary logistic regression analysis with the outcome of revised or retained provisional diagnosis. Results Only 13% of diagnoses were revised. The binary logistic regression revealed that three scores derived from the ACT tool responses were associated with the decision to revise: the severity rating of the consequence for missing the provisional diagnosis, the percent correct for identifying consequences, and the percent correct for identifying traits (χ2 = 23.5, df = 6, p < 0.001). The other three factors were not significant predictors. Conclusions Decisions to revise diagnoses may be cued by the detection of contradictory evidence. Education interventions may be more effective at reducing diagnostic error by targeting the ability to detect contradictory information within patient cases.
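A hedged sketch of the kind of model described above follows: a binary logistic regression predicting whether the provisional diagnosis was revised from ACT-derived scores. The column names and simulated data are assumptions for illustration only; the abstract does not provide the authors' code, data, or full predictor set.

```python
# Minimal sketch (not the authors' analysis code): binary logistic regression with
# revise (1) vs. retain (0) as the outcome and ACT-derived scores as predictors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120  # hypothetical number of case encounters

# Simulated predictor scores (all names are illustrative, not the study's variables).
df = pd.DataFrame({
    "alt_correct": rng.uniform(0, 1, n),     # proportion of correct alternative diagnoses
    "cons_correct": rng.uniform(0, 1, n),    # proportion of correct consequences identified
    "trait_correct": rng.uniform(0, 1, n),   # proportion of correct contradictory traits
    "severity": rng.integers(0, 4, n),       # severity of missing the diagnosis (0-3)
})
# Simulated outcome loosely tied to the predictors, just to make the model estimable.
logit_p = -2 + 1.5 * df["trait_correct"] + 0.5 * df["severity"]
df["revised"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit(
    "revised ~ alt_correct + cons_correct + trait_correct + severity", data=df
).fit(disp=False)
print(model.summary())
# Overall model likelihood-ratio test, analogous to the chi-square reported above.
print(f"LR chi2 = {model.llr:.1f}, df = {model.df_model:.0f}, p = {model.llr_pvalue:.4f}")
```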
Subject(s)
Checklist, Diagnosis, Emergency Medicine/education, Emergency Service, Hospital, Internship and Residency, Decision Making, Diagnostic Errors, Education, Medical, Graduate, Humans, Ontario

ABSTRACT
Contemporary theories of clinical reasoning espouse a dual processing model, which consists of a rapid, intuitive component (Type 1) and a slower, logical and analytical component (Type 2). Although the general consensus is that this dual processing model is a valid representation of clinical reasoning, the causes of diagnostic errors remain unclear. Cognitive theories about human memory propose that such errors may arise from both Type 1 and Type 2 reasoning. Errors in Type 1 reasoning may be a consequence of the associative nature of memory, which can lead to cognitive biases. However, the literature indicates that, with increasing expertise (and knowledge), the likelihood of errors decreases. Errors in Type 2 reasoning may result from the limited capacity of working memory, which constrains computational processes. In this article, the authors review the medical literature to answer two substantial questions that arise from this work: (1) To what extent do diagnostic errors originate in Type 1 (intuitive) processes versus in Type 2 (analytical) processes? (2) To what extent are errors a consequence of cognitive biases versus a consequence of knowledge deficits? The literature suggests that both Type 1 and Type 2 processes contribute to errors. Although it is possible to experimentally induce cognitive biases, particularly availability bias, the extent to which these biases actually contribute to diagnostic errors is not well established. Educational strategies directed at the recognition of biases are ineffective in reducing errors; conversely, strategies focused on the reorganization of knowledge to reduce errors have small but consistent benefits.
Subject(s)
Cognition, Thinking, Bias, Diagnostic Errors/psychology, Humans, Memory

ABSTRACT
PURPOSE: Others have suggested that increased time pressure, sometimes caused by interruptions, may result in increased diagnostic errors. The authors previously found, however, that increased time pressure alone does not result in increased errors, but they did not test the effect of interruptions. It is unclear whether experience modulates the combined effects of time pressure and interruptions. This study investigated whether increased time pressure, interruptions, and experience level affect diagnostic accuracy and response time. METHOD: In October 2012, 152 residents were recruited at five Medical Council of Canada Qualifying Examination Part II test sites. Forty-six emergency physicians were recruited from one Canadian and one U.S. academic health center. Participants diagnosed 20 written general medicine cases. They were randomly assigned to receive fast (time pressure) or slow condition instructions. Visual and auditory case interruptions were manipulated as a within-subject factor. RESULTS: Diagnostic accuracy was not affected by interruptions or time pressure but was related to experience level: Emergency physicians were more accurate (71%) than residents (43%) (F = 234.0, P < .0001) and responded more quickly (54 seconds) than residents (65 seconds) (F = 9.0, P < .005). Response time was shorter for participants in the fast condition (55 seconds) than in the slow condition (73 seconds) (F = 22.2, P < .0001). Interruptions added about 8 seconds to response time. CONCLUSIONS: Experienced emergency physicians were both faster and more accurate than residents. Instructions to proceed quickly and interruptions had a small effect on response time but no effect on accuracy.
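The design described above combines a within-subject factor (interruptions) with between-subject factors (experience level and time-pressure instructions). As a hedged illustration of how response times in such a design could be analyzed, the sketch below fits a linear mixed-effects model with a random intercept per participant, standing in for the repeated measures analysis implied above; it is not the authors' analysis code, and the data and variable names are assumptions.

```python
# Minimal sketch (not the authors' code): response time in a design with a
# within-subject interruption factor and between-subject experience and
# time-pressure conditions, analyzed with a random-intercept mixed model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for subj in range(60):  # hypothetical participants
    experience = "physician" if subj < 20 else "resident"
    pressure = "fast" if subj % 2 == 0 else "slow"
    for case in range(20):
        interrupted = case % 2 == 1  # half the cases interrupted (within-subject)
        rt = (
            50
            + (0 if experience == "physician" else 10)  # residents slower
            + (0 if pressure == "fast" else 15)          # slow condition slower
            + (8 if interrupted else 0)                  # interruption cost
            + rng.normal(0, 10)                          # residual noise
        )
        rows.append(dict(subject=subj, experience=experience,
                         pressure=pressure, interrupted=interrupted, rt=rt))
df = pd.DataFrame(rows)

# Random intercept per participant accounts for the repeated measurements.
model = smf.mixedlm("rt ~ experience * pressure * interrupted",
                    data=df, groups=df["subject"]).fit()
print(model.summary())
```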