Results 1 - 20 of 58
1.
J Hosp Med ; 19(6): 468-474, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38528679

ABSTRACT

BACKGROUND: Formulating a thoughtful problem representation (PR) is fundamental to sound clinical reasoning and an essential component of medical education. Aside from basic structural recommendations, little consensus exists on what characterizes high-quality PRs. OBJECTIVES: To elucidate characteristics that distinguish PRs created by experts and novices. METHODS: Early internal medicine residents (novices) and inpatient teaching faculty (experts) from two academic medical centers were given two written clinical vignettes and were instructed to write a PR and three-item differential diagnosis for each. Deductive content analysis described the characteristics comprising PRs. An initial codebook of characteristics was refined iteratively. The primary outcome was differences in characteristic frequencies between groups. The secondary outcome was characteristics correlating with diagnostic accuracy. Mixed-effects regression with random effects modeling compared case-level outcomes by group. RESULTS: Overall, 167 PRs were analyzed from 30 novices and 54 experts. Experts included 0.8 fewer comorbidities (p < .01) and 0.6 more examination findings (p = .01) than novices on average. Experts were less likely to include irrelevant comorbidities (odds ratio [OR] = 0.4, 95% confidence interval [CI] = 0.2-0.8) or a diagnosis (OR = 0.3, 95% CI = 0.1-0.8) compared with novices. Experts encapsulated clinical data into higher-order terms (e.g., sepsis) than novices (p < .01) while including similar numbers of semantic qualifiers (SQs). Regardless of expertise level, PRs following a three-part structure (e.g., demographics, temporal course, and clinical syndrome) and including temporal SQs were associated with diagnostic accuracy (p < .01). CONCLUSIONS: Compared with novices, expert PRs include less irrelevant data and synthesize information into higher-order concepts. Future studies should determine whether targeted educational interventions for PRs improve diagnostic accuracy.
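For readers who want to see how effect sizes like the reported ORs arise, here is a minimal sketch of an odds ratio with a Wald 95% confidence interval from a 2x2 contingency table. The counts are invented, not the study's data, and the study itself used mixed-effects regression rather than this unadjusted calculation:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Wald 95% CI from a 2x2 table:
    a/b = experts including/omitting the item, c/d = novices likewise."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Invented counts, not the study's data:
or_, lo, hi = odds_ratio_ci(10, 44, 15, 15)
```

An OR below 1 whose entire CI sits below 1, as reported for irrelevant comorbidities (OR = 0.4, CI = 0.2-0.8), indicates experts were significantly less likely to include them.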


Subject(s)
Clinical Competence , Internal Medicine , Internship and Residency , Humans , Internal Medicine/education , Clinical Competence/standards , Female , Clinical Reasoning , Male , Adult , Diagnosis, Differential
2.
BMJ Qual Saf ; 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38365449

ABSTRACT

BACKGROUND: Diagnostic errors have been attributed to reasoning flaws caused by cognitive biases. While experiments have shown bias to cause errors, physicians of similar expertise differed in susceptibility to bias. Resisting bias is often said to depend on engaging analytical reasoning, disregarding the influence of knowledge. We examined the role of knowledge and reasoning mode, indicated by diagnosis time and confidence, as predictors of susceptibility to anchoring bias. Anchoring bias occurs when physicians stick to an incorrect diagnosis triggered by early salient distracting features (SDF) despite subsequent conflicting information. METHODS: Sixty-eight internal medicine residents from two Dutch university hospitals participated in a two-phase experiment. Phase 1: assessment of knowledge of discriminating features (ie, clinical findings that discriminate between lookalike diseases) for six diseases. Phase 2 (1 week later): diagnosis of six cases of these diseases. Each case had two versions differing exclusively in the presence/absence of SDF. Each participant diagnosed three cases with SDF (SDF+) and three without (SDF-). Participants were randomly allocated to case versions. Based on phase 1 assessment, participants were split into higher knowledge or lower knowledge groups. MAIN OUTCOME MEASUREMENTS: frequency of diagnoses associated with SDF; time to diagnose; and confidence in diagnosis. RESULTS: While both knowledge groups performed similarly on SDF- cases, higher knowledge physicians succumbed to anchoring bias less frequently than their lower knowledge counterparts on SDF+ cases (p=0.02). Overall, physicians spent more time (p<0.001) and had lower confidence (p=0.02) on SDF+ than on SDF- cases. However, when diagnosing SDF+ cases, the groups did not differ in time (p=0.88) or in confidence (p=0.96).
CONCLUSIONS: Physicians apparently adopted a more analytical reasoning approach when presented with distracting features, indicated by increased time and lower confidence, trying to combat bias. Yet, extended deliberation alone did not explain the observed performance differences between knowledge groups. Success in mitigating anchoring bias was primarily predicted by knowledge of discriminating features of diagnoses.

3.
J Gen Intern Med ; 2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38277023

ABSTRACT

BACKGROUND: Diagnostic errors cause significant patient harm. The clinician's ultimate goal is to achieve diagnostic excellence in order to serve patients safely. This can be accomplished by learning from both errors and successes in patient care. However, the extent to which clinicians grow from and navigate diagnostic errors and successes in patient care is poorly understood. Clinically experienced hospitalists, who have cared for numerous acutely ill patients, should have great insights from their successes and mistakes to inform others striving for excellence in patient care. OBJECTIVE: To identify and characterize clinical lessons learned by experienced hospitalists from diagnostic errors and successes. DESIGN: A semi-structured interview guide was used to collect qualitative data from hospitalists at five independently administered hospitals in the Mid-Atlantic area from February to June 2022. PARTICIPANTS: 12 academic and 12 community-based hospitalists with ≥ 5 years of clinical experience. APPROACH: A constructivist qualitative approach was used and "reflexive thematic analysis" of interview transcripts was conducted to identify themes and patterns of meaning across the dataset. RESULTS: Five themes were generated from the data based on clinical lessons learned by hospitalists from diagnostic errors and successes. These included appreciating excellence in clinical reasoning as a core skill, connecting with patients and other members of the health care team to tap into their insights, reflecting on the diagnostic process, committing to growth, and prioritizing self-care. CONCLUSIONS: The study identifies key lessons learned from the errors and successes encountered in patient care by clinically experienced hospitalists. These findings may prove helpful for individuals and groups that are authentically committed to moving along the continuum from diagnostic competence towards excellence.

4.
Med Teach ; 46(1): 65-72, 2024 01.
Article in English | MEDLINE | ID: mdl-37402384

ABSTRACT

PURPOSE: Deliberate reflection on an initial diagnosis has been found to repair diagnostic errors. We investigated the effectiveness of teaching students to use deliberate reflection on future cases and whether students' use of it would depend on their perception of case difficulty. METHOD: One hundred nineteen medical students solved cases either with deliberate reflection or without instructions to reflect. One week later, all participants solved six cases, each with two equally likely diagnoses, but some symptoms in each case were associated with only one of the diagnoses (discriminating features). Participants provided one diagnosis and subsequently wrote down everything they remembered from the case. After the first three cases, they were told that the next three would be difficult cases. Reflection was measured by the proportion of discriminating features recalled (overall; related to their provided diagnosis; related to the alternative diagnosis). RESULTS: The deliberate-reflection condition recalled more features for the alternative diagnosis than the control condition (p = .013) regardless of described difficulty. They also recalled more features related to their provided diagnosis on the first three cases (p = .004), but on the last three cases (described as difficult), there was no difference. CONCLUSION: Learning deliberate reflection helped students engage in more reflective reasoning when solving future cases.


Subject(s)
Students, Medical , Humans , Clinical Competence , Learning , Problem Solving , Diagnostic Errors , Teaching
5.
BMC Med Educ ; 23(1): 934, 2023 Dec 08.
Article in English | MEDLINE | ID: mdl-38066602

ABSTRACT

BACKGROUND: Diagnostic errors in internal medicine are common. While cognitive errors have previously been identified as the most common contributor to errors, very little is known about errors in specific fields of internal medicine such as endocrinology. This prospective, multicenter study focused on better understanding the causes of diagnostic errors made by general practitioners and internal medicine specialists in the area of endocrinology. METHODS: From August 2019 until January 2020, 24 physicians completed five endocrine cases on an online platform that simulated the diagnostic process. After each case, the participants had to state and explain why they chose their assumed diagnosis. The data gathering process as well as the participants' explanations were quantitatively and qualitatively analyzed to determine the causes of the errors. The diagnostic processes in correctly and incorrectly solved cases were compared. RESULTS: Seven different causes of diagnostic error were identified, the most frequent being misidentification (confusing one diagnosis with a related one or with a more frequent, similar disease) in 23% of the cases. Other causes were faulty context generation (21%) and premature closure (17%). Diagnostic confidence did not differ between correctly and incorrectly solved cases (median 8 out of 10, p = 0.24). However, in incorrectly solved cases, physicians spent less time on the technical findings (such as lab results and imaging) (median 199 s versus 250 s, p = 0.049). CONCLUSIONS: The causes of errors in endocrine case scenarios are similar to the causes in other fields of internal medicine. Spending more time on technical findings might prevent misdiagnoses in everyday clinical practice.


Subject(s)
Endocrinology , General Practitioners , Humans , Prospective Studies , Diagnostic Errors/prevention & control , Internal Medicine
6.
J Patient Saf ; 19(8): 573-579, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37796227

ABSTRACT

OBJECTIVES: Diagnostic errors, that is, missed, delayed, or wrong diagnoses, are a common type of medical error and preventable iatrogenic harm. Errors in the laboratory testing process can lead to diagnostic errors. This retrospective analysis of voluntary incident reports aimed to investigate the nature, causes, and clinical impact of errors, including diagnostic errors, in the clinical laboratory testing process. METHODS: We used a sample of 600 voluntary incident reports concerning diagnostic testing selected from all incident reports filed at the University Medical Center Utrecht in 2017-2018. From these incident reports, we included all reports concerning the clinical laboratory testing process. For these incidents, we determined the following: nature: in which phase of the testing process the error occurred; cause: human, technical, organizational; and clinical impact: the type and severity of the harm to the patient, including diagnostic error. RESULTS: Three hundred twenty-seven reports were included in the analysis. In 77.1%, the error occurred in the preanalytical phase, 13.5% in the analytical phase and 8.0% in the postanalytical phase (1.5% undetermined). Human factors were the most frequent cause (58.7%). Severe clinical impact occurred relatively more often in the analytical and postanalytical phase, 32% and 28%, respectively, compared with the preanalytical phase (40%). In 195 cases (60%), there was a potential diagnostic error as a consequence, mainly a potential delay in the diagnostic process (50.5%). CONCLUSIONS: Errors in the laboratory testing process often lead to potential diagnostic errors. Although prone to incomplete information on causes and clinical impact, voluntary incident reports are a valuable source for research on diagnostic error related to errors in the clinical laboratory testing process.


Subject(s)
Clinical Laboratory Techniques , Risk Management , Humans , Retrospective Studies , Diagnostic Errors/prevention & control , Medical Errors
7.
BMC Med Educ ; 23(1): 684, 2023 Sep 21.
Article in English | MEDLINE | ID: mdl-37735677

ABSTRACT

PURPOSE: Diagnostic errors are a large burden on patient safety and improving clinical reasoning (CR) education could contribute to reducing these errors. To this end, calls have been made to implement CR training as early as the first year of medical school. However, much is still unknown about pre-clerkship students' reasoning processes. The current study aimed to observe how pre-clerkship students use clinical information during the diagnostic process. METHODS: In a prospective observational study, pre-clerkship medical students completed 10-11 self-directed online simulated CR diagnostic cases. CR skills assessed included: creation of the differential diagnosis (Ddx), diagnostic justification (DxJ), ordering investigations, and identifying the most probable diagnosis. Student performances were compared to expert-created scorecards and students received detailed individualized formative feedback for every case. RESULTS: 121 of 133 (91%) first- and second-year medical students consented to the research project. Students scored much lower for DxJ compared to scores obtained for creation of the Ddx, ordering tests, and identifying the correct diagnosis, (30-48% lower, p < 0.001). Specifically, students underutilized physical exam data (p < 0.001) and underutilized data that decreased the probability of incorrect diagnoses (p < 0.001). We observed that DxJ scores increased 40% after 10-11 practice cases (p < 0.001). CONCLUSIONS: We implemented deliberate practice with formative feedback for CR starting in the first year of medical school. Students underperformed in DxJ, particularly with analyzing the physical exam data and pertinent negative data. We observed significant improvement in DxJ performance with increased practice.


Subject(s)
Students, Medical , Humans , Educational Status , Clinical Competence , Clinical Reasoning
8.
BMJ Qual Saf ; 2023 Aug 09.
Article in English | MEDLINE | ID: mdl-37558403

ABSTRACT

INTRODUCTION: Although diagnostic errors have gained renewed focus within the patient safety domain, measuring them remains a challenge. They are often measured using methods that lack information on decision-making processes given by involved physicians (eg, record reviews). The current study analyses serious adverse event (SAE) reports from Dutch hospitals to identify common contributing factors of diagnostic errors in hospital medicine. These reports are the results of thorough investigations by highly trained, independent hospital committees into the causes of SAEs. The reports include information from involved healthcare professionals and patients or family obtained through interviews. METHODS: All 71 Dutch hospitals were invited to participate in this study. Participating hospitals were asked to send four diagnostic SAE reports of their hospital. Researchers applied the Safer Dx Instrument, a Generic Analysis Framework, the Diagnostic Error Evaluation and Research (DEER) taxonomy and the Eindhoven Classification Model (ECM) to analyse reports. RESULTS: Thirty-one hospitals submitted 109 eligible reports. Diagnostic errors most often occurred in the diagnostic testing, assessment and follow-up phases according to the DEER taxonomy. The ECM showed human errors as the most common contributing factor, especially relating to communication of results, task planning and execution, and knowledge. Combining the most common DEER subcategories and the most common ECM classes showed that clinical reasoning errors resulted from failures in knowledge, and task planning and execution. Follow-up errors and errors with communication of test results resulted from failures in coordination and monitoring, often accompanied by usability issues in electronic health record design and missing protocols. DISCUSSION: Diagnostic errors occurred in every hospital type, in different specialties and with different care teams. 
While clinical reasoning errors remain a common problem, often caused by knowledge and skill gaps, other frequent errors in communication of test results and follow-up require different improvement measures (eg, improving technological systems).

9.
BMC Med Educ ; 23(1): 474, 2023 Jun 26.
Article in English | MEDLINE | ID: mdl-37365590

ABSTRACT

BACKGROUND: Using malpractice claims cases as vignettes is a promising approach for improving clinical reasoning education (CRE), as malpractice claims can provide a variety of content- and context-rich examples. However, the effect on learning of adding information about a malpractice claim, which may evoke a deeper emotional response, is not yet clear. This study examined whether knowing that a diagnostic error resulted in a malpractice claim affects diagnostic accuracy and self-reported confidence in the diagnosis of future cases. Moreover, the suitability of using erroneous cases with and without a malpractice claim for CRE, as judged by participants, was evaluated. METHODS: In the first session of this two-phased, within-subjects experiment, 81 first-year residents of general practice (GP) were exposed to both erroneous cases with (M) and erroneous cases without (NM) malpractice claim information, derived from a malpractice claims database. Participants rated the suitability of the cases for CRE on a five-point Likert scale. In the second session, one week later, participants solved four different cases with the same diagnoses. Diagnostic accuracy was measured with three questions, scored on a 0-1 scale: (1) What is your next step? (2) What is your differential diagnosis? (3) What is your most probable diagnosis and what is your level of certainty? Both subjective suitability and diagnostic accuracy scores were compared between the versions (M and NM) using repeated measures ANOVA. RESULTS: There were no differences in diagnostic accuracy parameters (M vs. NM next step: 0.79 vs. 0.77, p = 0.505; differential diagnosis: 0.68 vs. 0.75, p = 0.072; most probable diagnosis: 0.52 vs. 0.57, p = 0.216) or in self-reported confidence (53.7% vs. 55.8%, p = 0.390) for diagnoses previously seen with or without malpractice claim information. Subjective suitability and complexity scores for the two versions were similar (suitability: 3.68 vs. 3.84, p = 0.568; complexity: 3.71 vs. 3.88, p = 0.218) and significantly increased with higher education levels for both versions. CONCLUSION: The similar diagnostic accuracy rates between cases studied with or without malpractice claim information suggest both versions are equally effective for CRE in GP training. Residents judged both case versions to be similarly suitable for CRE; both were considered more suitable for advanced than for novice learners.


Subject(s)
General Practice , Malpractice , Humans , Diagnostic Errors , Educational Status , Clinical Reasoning , Learning
10.
BMJ Open ; 13(3): e072649, 2023 03 29.
Article in English | MEDLINE | ID: mdl-36990482

ABSTRACT

INTRODUCTION: Computerised diagnostic decision support systems (CDDS) suggesting differential diagnoses to physicians aim to improve clinical reasoning and diagnostic quality. However, controlled clinical trials investigating their effectiveness and safety are absent and the consequences of their use in clinical practice are unknown. We aim to investigate the effect of CDDS use in the emergency department (ED) on diagnostic quality, workflow, resource consumption and patient outcomes. METHODS AND ANALYSIS: This is a multicentre, outcome assessor and patient-blinded, cluster-randomised, multiperiod crossover superiority trial. A validated differential diagnosis generator will be implemented in four EDs and randomly allocated to a sequence of six alternating intervention and control periods. During intervention periods, the treating ED physician will be asked to consult the CDDS at least once during diagnostic workup. During control periods, physicians will not have access to the CDDS and diagnostic workup will follow usual clinical care. Key inclusion criteria will be patients' presentation to the ED with either fever, abdominal pain, syncope or a non-specific complaint as chief complaint. The primary outcome is a binary diagnostic quality risk score composed of presence of unscheduled medical care after discharge, change in diagnosis or death during time of follow-up, or an unexpected upscale in care within 24 hours after hospital admission. Time of follow-up is 14 days. At least 1184 patients will be included. Secondary outcomes include length of hospital stay, diagnostics and data regarding CDDS usage, physicians' confidence calibration and diagnostic workflow. Statistical analysis will use general linear mixed modelling methods. ETHICS AND DISSEMINATION: Approved by the cantonal ethics committee of canton Berne (2022-D0002) and Swissmedic, the Swiss national regulatory authority on medical devices.
Study results will be disseminated through peer-reviewed journals, open repositories and the network of investigators and the expert and patients advisory board. TRIAL REGISTRATION NUMBER: NCT05346523.
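The trial's composite primary outcome amounts to a simple disjunction of component events. A sketch (function and argument names are illustrative, not taken from the protocol):

```python
def diagnostic_quality_risk(unscheduled_care_after_discharge: bool,
                            changed_diagnosis_or_death: bool,
                            unexpected_upscale_within_24h: bool) -> int:
    """Binary composite outcome: positive (1) if any component event
    occurred during the 14-day follow-up, else negative (0)."""
    return int(unscheduled_care_after_discharge
               or changed_diagnosis_or_death
               or unexpected_upscale_within_24h)

# A patient with only an unexpected upscale in care still scores positive:
positive = diagnostic_quality_risk(False, False, True)
negative = diagnostic_quality_risk(False, False, False)
```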


Subject(s)
Hospitalization , Research Design , Humans , Cross-Over Studies , Emergency Service, Hospital , Randomized Controlled Trials as Topic , Multicenter Studies as Topic
11.
Patient Educ Couns ; 110: 107650, 2023 05.
Article in English | MEDLINE | ID: mdl-36731167

ABSTRACT

BACKGROUND: Most people experience a diagnostic error at least once in their lifetime. Patients' experiences with their diagnosis could provide important insights when setting research priorities to reduce diagnostic error. OBJECTIVE: Our objective was to engage patients in research agenda setting for improving diagnosis. PATIENT INVOLVEMENT: Patients were involved in generating, discussing, prioritizing, and ranking research questions for diagnostic error reduction. METHODS: We used the prioritization methodology of the Child Health and Nutrition Research Initiative (CHNRI). We first solicited research questions important for diagnostic error reduction from a large group of patients. Thirty questions were initially prioritized at an in-person meeting with 8 patients who were supported by 4 researchers. The resulting list was further prioritized by patients who scored the questions on five predefined criteria. We then applied previously determined weights to these prioritization criteria to adjust the final prioritization score for each question, resulting in the 10 highest-priority research questions. RESULTS: Forty-one patients submitted 171 research questions. After prioritization, the highest-priority topics included better care coordination across the diagnostic continuum and improved care transitions, improved identification and measurement of diagnostic errors, and attention to implicit bias towards patients who are vulnerable to diagnostic errors. DISCUSSION: We systematically identified the top 10 patient-generated research priorities for diagnostic error reduction using transparent and objective methods. Patients prioritized different research questions than researchers and therefore complemented an agenda previously generated by researchers. PRACTICAL VALUE: Research priorities identified by patients can be used by funders and researchers to conduct future research focused on reducing diagnostic errors.
FUNDING: This project was funded by the Gordon and Betty Moore Foundation.
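As a rough illustration of the CHNRI-style weighting described above, the sketch below scores research questions on five criteria and applies per-criterion weights before ranking. All question names, scores, and weights are invented for the example; only the final weighted-ranking step is shown:

```python
def priority_score(criteria_scores, weights):
    """CHNRI-style priority: score a question on predefined criteria,
    weight each criterion, and sum."""
    return sum(s * w for s, w in zip(criteria_scores, weights))

# Invented questions, per-criterion scores (0-1), and criterion weights:
questions = {
    "care coordination": [0.9, 0.8, 0.7, 0.9, 0.8],
    "error measurement": [0.8, 0.9, 0.6, 0.7, 0.9],
    "implicit bias":     [0.7, 0.7, 0.9, 0.8, 0.6],
}
weights = [1.2, 1.0, 0.8, 1.1, 0.9]

# Rank questions by weighted score, highest first:
ranked = sorted(questions, key=lambda q: priority_score(questions[q], weights),
                reverse=True)
```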


Subject(s)
Biomedical Research , Child , Humans , Diagnostic Errors , Patient Participation , Health Priorities
12.
J Gen Intern Med ; 38(4): 1076, 2023 03.
Article in English | MEDLINE | ID: mdl-35469361

Subject(s)
Communication , Humans
13.
Adv Health Sci Educ Theory Pract ; 28(3): 893-910, 2023 08.
Article in English | MEDLINE | ID: mdl-36529764

ABSTRACT

Diagnostic reasoning is an important topic in General Practitioners' (GPs) vocational training. Interestingly, research has paid little attention to the content of the cases used in clinical reasoning education. Malpractice claims of diagnostic errors represent cases that impact patients and that reflect potential knowledge gaps and contextual factors. With this study, we aimed to identify and prioritize educational content from a malpractice claims database in order to improve clinical reasoning education in GP training. With input from various experts in clinical reasoning and diagnostic error, we defined five priority criteria that reflect educational relevance. Fifty unique medical conditions from a malpractice claims database were scored on those priority criteria by stakeholders in clinical reasoning education in 2021. Subsequently, we calculated the mean total priority score for each condition. The mean total priority score (minimum 5, maximum 25) across all fifty diagnoses was 17.11, with a range from 13.89 to 19.61. We identified and described the fifteen highest-scoring diseases (with priority scores ranging from 18.17 to 19.61). The prioritized conditions involved complex common (e.g., cardiovascular diseases, renal insufficiency and cancer), complex rare (e.g., endocarditis, ectopic pregnancy, testicular torsion) and more straightforward common conditions (e.g., tendon rupture/injury, eye infection). The claim cases often demonstrated atypical presentations or complex contextual factors. Including those malpractice cases in GP vocational training could enrich the illness scripts of diseases that are at high risk of errors, which may reduce diagnostic error and related patient harm.


Subject(s)
General Practitioners , Malpractice , Humans , Vocational Education , Diagnostic Errors , Educational Status , Retrospective Studies
14.
Diagnosis (Berl) ; 10(2): 121-129, 2023 05 01.
Article in English | MEDLINE | ID: mdl-36490202

ABSTRACT

OBJECTIVES: Checklists that aim to support clinicians' diagnostic reasoning processes are often recommended to prevent diagnostic errors. Evidence on checklist effectiveness is mixed and seems to depend on checklist type, case difficulty, and participants' expertise. Existing studies primarily use abnormal cases, leaving it unclear how the diagnosis of normal cases is affected by checklist use. We investigated how content-specific and debiasing checklists impacted performance for normal and abnormal cases in electrocardiogram (ECG) diagnosis. METHODS: In this randomized experiment, 42 first-year general practice residents interpreted normal, simple abnormal, and complex abnormal ECGs without a checklist. One week later, they were randomly assigned to diagnose the ECGs again with either a debiasing or content-specific checklist. We measured residents' diagnostic accuracy, confidence, patient management, and time taken to diagnose. Additionally, confidence-accuracy calibration was assessed. RESULTS: Accuracy, confidence, and patient management were not significantly affected by checklist use. Time to diagnose decreased with a checklist (M=147 s, SD=77) compared to without a checklist (M=189 s, SD=80; Z=-3.10, p=0.002). Additionally, residents' calibration improved when using a checklist (phase 1: R2=0.14; phase 2: R2=0.40). CONCLUSIONS: In both normal and abnormal cases, checklist use improved confidence-accuracy calibration and reduced time to diagnose, though accuracy and confidence were not significantly affected. Future research should evaluate this effect in more experienced GPs. Checklists appear promising for reducing overconfidence without negatively impacting performance on normal or simple ECGs. Reducing overconfidence has the potential to improve diagnostic performance in the long term.
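Confidence-accuracy calibration reported as an R2 can be computed as the squared correlation between per-case confidence ratings and correctness. A minimal sketch with invented data (the study's exact calibration method may differ):

```python
def calibration_r2(confidence, correct):
    """Confidence-accuracy calibration as the squared Pearson correlation
    between per-case confidence (0-1) and correctness (1 = correct)."""
    n = len(confidence)
    mc = sum(confidence) / n
    ma = sum(correct) / n
    cov = sum((c - mc) * (a - ma) for c, a in zip(confidence, correct))
    var_c = sum((c - mc) ** 2 for c in confidence)
    var_a = sum((a - ma) ** 2 for a in correct)
    return (cov * cov) / (var_c * var_a)

# Invented ratings for six ECG cases:
conf = [0.9, 0.8, 0.6, 0.4, 0.3, 0.7]
acc = [1, 1, 1, 0, 0, 1]
r2 = calibration_r2(conf, acc)
```

A well-calibrated reader is more confident exactly on the cases they get right, pushing this R2 towards 1.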


Subject(s)
Checklist , Clinical Competence , Humans , Decision Making , Electrocardiography , Problem Solving
15.
Diagnosis (Berl) ; 10(1): 31-37, 2023 02 01.
Article in English | MEDLINE | ID: mdl-36378520

ABSTRACT

Diagnostic performance is uniquely challenging to measure, and providing feedback on diagnostic performance to catalyze diagnostic recalibration remains the exception to the rule in healthcare. Diagnostic accuracy, timeliness, and explanation to the patient are essential dimensions of diagnostic performance that each intersect with a variety of technical, contextual, cultural, and policy barriers. Setting aside assumptions about current constraints, we explore the future of diagnostic performance feedback by describing the "minimum viable products" and the "ideal state" solutions that can be envisioned for each of several important barriers. Only through deliberate and iterative approaches to breaking down these barriers can we improve recalibration and continuously drive the healthcare ecosystem towards diagnostic excellence.


Subject(s)
Ecosystem , Humans , Feedback
16.
Adv Health Sci Educ Theory Pract ; 28(1): 13-26, 2023 03.
Article in English | MEDLINE | ID: mdl-35913665

ABSTRACT

Deliberate reflection has been found to foster diagnostic accuracy on complex cases or under circumstances that tend to induce cognitive bias. However, it is unclear whether the procedure can also be learned and thereby autonomously applied when diagnosing future cases without instructions to reflect. We investigated whether general practice residents would learn the deliberate reflection procedure through 'learning-by-teaching' and apply it to diagnose new cases. The study was a two-phase experiment. In the learning phase, 56 general-practice residents were randomly assigned to one of two conditions. They either (1) studied examples of deliberate reflection and then explained the procedure to a fictitious peer on video; or (2) solved cases without reflection (control). In the test phase, one to three weeks later, all participants diagnosed new cases while thinking aloud. The analysis of the test phase showed no significant differences between the conditions on any of the outcome measures (diagnostic accuracy, p = .263; time to diagnose, p = .598; mental effort ratings, p = .544; confidence ratings, p = .710; proportion of contradiction units (i.e., a measure of deliberate reflection), p = .544). In contrast to findings on learning-by-teaching from other domains, teaching deliberate reflection to a fictitious peer did not increase reflective reasoning when diagnosing future cases. Potential explanations that future research might address are that residents in the experimental condition did not apply the learned deliberate reflection procedure in the test phase, or that residents in the control condition also engaged in reflection.


Subject(s)
Clinical Competence , Education, Medical, Undergraduate , Humans , Diagnosis, Differential , Education, Medical, Undergraduate/methods , Learning , Problem Solving
17.
BMJ Qual Saf ; 31(12): 899-910, 2022 12.
Article in English | MEDLINE | ID: mdl-36396150

ABSTRACT

BACKGROUND: Preventable diagnostic errors are a large burden on healthcare. Cognitive reasoning tools, that is, tools that aim to improve clinical reasoning, are commonly suggested interventions. However, quantitative estimates of tool effectiveness have been aggregated over both workplace-oriented and education-oriented tools, leaving the impact of workplace-oriented cognitive reasoning tools alone unclear. This systematic review and meta-analysis aims to estimate the effect of cognitive reasoning tools on the diagnostic performance of medical professionals and students, and to identify factors associated with larger improvements. METHODS: Controlled experimental studies that assessed whether cognitive reasoning tools improved the diagnostic accuracy of individual medical students or professionals in a workplace setting were included. Embase.com, Medline ALL via Ovid, Web of Science Core Collection, the Cochrane Central Register of Controlled Trials and Google Scholar were searched from inception to 15 October 2021, supplemented with handsearching. Meta-analysis was performed using a random-effects model. RESULTS: The literature search yielded 4546 articles, of which 29 studies with data from 2732 participants were included in the meta-analysis. The pooled estimate showed considerable heterogeneity (I2=70%). This was reduced to I2=38% by removing three studies that offered training with the tool before the intervention effect was measured. After removing these studies, the pooled estimate indicated that cognitive reasoning tools led to a small improvement in diagnostic accuracy (Hedges' g=0.20, 95% CI 0.10 to 0.29, p<0.001). There were no significant subgroup differences. CONCLUSION: Cognitive reasoning tools resulted in small but clinically important improvements in diagnostic accuracy among medical students and professionals, although no factors associated with larger improvements could be identified. Cognitive reasoning tools could be routinely implemented to improve diagnosis in practice, but going forward, larger studies and evaluations of these tools in practice are needed to determine how they can be implemented effectively. PROSPERO REGISTRATION NUMBER: CRD42020186994.
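The random-effects pooling this abstract reports (a pooled Hedges' g with an I² heterogeneity estimate) can be sketched with the standard DerSimonian-Laird method. This is a minimal illustration of the statistical technique only; the example effect sizes in the test are made up and are not data from the review.

```python
import math

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling of study effect sizes.

    effects: per-study effect sizes (e.g., Hedges' g).
    variances: their sampling variances.
    Returns (pooled effect, 95% CI as a tuple, I^2 in percent).
    """
    w = [1.0 / v for v in variances]  # inverse-variance (fixed-effect) weights
    g_fixed = sum(wi * gi for wi, gi in zip(w, effects)) / sum(w)

    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (gi - g_fixed) ** 2 for wi, gi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0  # heterogeneity, %

    # Random-effects weights add tau^2 to each study's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    g = sum(wi * gi for wi, gi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return g, (g - 1.96 * se, g + 1.96 * se), i2
```

When heterogeneity is low (Q below its degrees of freedom), tau² truncates to zero and the estimate coincides with the fixed-effect pooled value, which is why removing heterogeneous studies, as the authors did, narrows the confidence interval.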


Subject(s)
Students, Medical , Workplace , Humans , Diagnostic Errors , Cognition
18.
J Eval Clin Pract ; 2022 Jun 27.
Article in English | MEDLINE | ID: mdl-35761764

ABSTRACT

As big data becomes more publicly accessible, artificial intelligence (AI) is increasingly available and applicable to problems in clinical decision-making. Yet the adoption of AI technology in healthcare lags well behind other industries. The gap between what technology could do and what it is actually being used for is rapidly widening. While many solutions have been proposed to address this gap, clinician resistance to the adoption of AI remains high. To aid with change, we propose facilitating clinician decisions through technology by seamlessly weaving what we call 'invisible AI' into existing clinician workflows, rather than sequencing new steps into clinical processes. We draw on evidence from the change management and human factors literature to conceptualize a new approach to AI implementation in health organizations. We discuss challenges and provide recommendations for organizations employing this strategy.

19.
Acad Med ; 97(10): 1484-1488, 2022 10 01.
Article in English | MEDLINE | ID: mdl-35612911

ABSTRACT

PROBLEM: Clinical reasoning is a core competency for physicians and also a common source of errors, driving high rates of misdiagnosis and patient harm. Efforts to provide training in and assessment of clinical reasoning skills have proven challenging because they are either labor- and resource-prohibitive or lack important data relevant to clinical reasoning. The authors report on the creation and use of online simulation cases to train and assess clinical reasoning skills among medical students. APPROACH: Using an online library of simulation cases, they collected data relevant to the creation of the differential diagnosis, analysis of the history and physical exam, diagnostic justification, ordering tests, interpreting tests, and ranking of the most probable diagnosis. These data were compared with an expert-created scorecard, and detailed quantitative and qualitative feedback was generated and provided to the learners and instructors. OUTCOMES: Following an initial pilot study to troubleshoot the software, the authors conducted a second pilot study in which 2 instructors developed and provided 6 cases to 75 second-year medical students. The students completed 376 cases (an average of 5.0 cases per student), generating more than 40,200 data points that the software analyzed to inform individual learner formative feedback relevant to clinical reasoning skills. The instructors reported that the workload was acceptable and sustainable. NEXT STEPS: The authors are actively expanding the library of clinical cases and providing more students and schools with formative feedback in clinical reasoning using the tool. Further, they have upgraded the software to identify and provide feedback on behaviors consistent with premature closure, anchoring, and confirmation biases. They are currently collecting and analyzing additional data using the same software to inform validation and psychometric outcomes for future publications.
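The scorecard comparison described above (a learner's ranked differential scored against an expert-created key) can be sketched as follows. The scoring scheme, point values, and diagnosis names here are hypothetical illustrations; the abstract does not describe the authors' actual rubric.

```python
def score_differential(learner, expert_weights):
    """Score a learner's ranked differential diagnosis against an expert scorecard.

    learner: diagnoses in the learner's ranked order (most likely first).
    expert_weights: expert-assigned points per diagnosis (hypothetical scheme).
    Earlier ranks earn a larger share of each diagnosis's points.
    """
    total = 0.0
    feedback = []
    for rank, dx in enumerate(learner, start=1):
        pts = expert_weights.get(dx, 0.0) / rank  # rank-discounted credit
        total += pts
        feedback.append((dx, pts if pts > 0 else "not on expert scorecard"))
    # Diagnoses the expert expected but the learner omitted
    missed = [dx for dx in expert_weights if dx not in learner]
    return total, feedback, missed
```

A scheme like this yields both a quantitative score and itemized feedback (credited, uncredited, and missed diagnoses), which is the kind of per-learner formative output the abstract describes generating at scale.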


Subject(s)
Education, Medical, Undergraduate , Students, Medical , Clinical Competence , Clinical Reasoning , Humans , Pilot Projects
20.
J Patient Saf ; 18(8): e1135-e1141, 2022 12 01.
Article in English | MEDLINE | ID: mdl-35443259

ABSTRACT

INTRODUCTION: Human error plays a central role in diagnostic errors in the emergency department. A thorough analysis of these human errors, using information-rich reports of serious adverse events (SAEs), could help to better study and understand the causes of these errors and formulate more specific recommendations. METHODS: We studied 23 SAE reports of diagnostic events in emergency departments of Dutch general hospitals and identified human errors. Two researchers independently applied the Safer Dx Instrument, the Diagnostic Error Evaluation and Research Taxonomy, and the Model of Unsafe Acts to analyze the reports. RESULTS: Twenty-one reports contained a diagnostic error, in which we identified 73 human errors; these were mainly intended actions (n = 69) and could be classified as mistakes (n = 56) or violations (n = 13). Most human errors occurred during the assessment and testing phases of the diagnostic process. DISCUSSION: The combination of different instruments and information-rich SAE reports allowed for a deeper understanding of the mechanisms underlying diagnostic error. Results indicated that errors occurred most often during the assessment and testing phases of the diagnostic process. Most often, the errors could be classified as mistakes and violations, both intended actions. These types of errors call for different recommendations for improvement, as mistakes are often knowledge based, whereas violations often happen because of work and time pressure. These analyses provided valuable insights for more overarching recommendations to improve diagnostic safety, and this combined approach is recommended for future research and analysis of (serious) adverse events.


Subject(s)
Emergency Service, Hospital , Humans , Diagnostic Errors