Results 1 - 20 of 103
1.
J Eval Clin Pract ; 2024 May 31.
Article in English | MEDLINE | ID: mdl-38818694

ABSTRACT

AIMS AND OBJECTIVES: Contextual information that is implicitly available to physicians during clinical encounters has been shown to influence diagnostic reasoning. To better understand the psychological mechanisms underlying the influence of context on diagnostic accuracy, we conducted a review of experimental research on this topic. METHOD: We searched Web of Science, PubMed, and Scopus for relevant articles and looked for additional records by reading the references and approaching experts. We limited the review to true experiments involving physicians in which the outcome variable was the accuracy of the diagnosis. RESULTS: The 43 studies reviewed examined two categories of contextual variables: (a) case-intrinsic contextual information and (b) case-extrinsic contextual information. Case-intrinsic information includes implicit misleading diagnostic suggestions in the patient's disease history or the patient's emotional volatility. Case-extrinsic or situational information includes a similar (but different) case seen previously, perceived case difficulty, or external digital diagnostic support. Time pressure and interruptions are other extrinsic influences that may affect the accuracy of a diagnosis but have produced conflicting findings. CONCLUSION: We propose two tentative hypotheses explaining the role of context in diagnostic accuracy. According to the negative-affect hypothesis, diagnostic errors emerge when the physician's attention shifts from the relevant clinical findings to the (irrelevant) source of negative affect (for instance, patient aggression) raised in a clinical encounter. The early-diagnosis-primacy hypothesis attributes errors to the extraordinary influence of the initial hypothesis that comes to the physician's mind on the subsequent collection and interpretation of case information. Future research should test these mechanisms explicitly. Possible alternative mechanisms, such as premature closure or increased production of (irrelevant) rival diagnoses in response to context, deserve further scrutiny. Implications for medical education and practice are discussed.

2.
Med Educ ; 57(10): 932-938, 2023 10.
Article in English | MEDLINE | ID: mdl-36860135

ABSTRACT

INTRODUCTION: Newer electronic differential diagnosis supports (EDSs) are efficient and effective at improving diagnostic skill. Although these supports are encouraged in practice, they are prohibited in medical licensing examinations. The purpose of this study is to determine how using an EDS affects examinees' results when answering clinical diagnosis questions. METHOD: The authors recruited 100 medical students from McMaster University (Hamilton, Ontario) to answer 40 clinical diagnosis questions in a simulated examination in 2021. Of these, 50 were first-year students and 50 were final-year students. Participants from each year of study were randomised into one of two groups. During the survey, half of the students had access to Isabel (an EDS) and half did not. Differences were explored using analysis of variance (ANOVA), and reliability estimates were compared for each group. RESULTS: Test scores were higher for final-year versus first-year students (53 ± 13% versus 29 ± 10%, p < 0.001) and higher with the use of the EDS (44 ± 28% versus 36 ± 26%, p < 0.001). Students using the EDS took longer to complete the test (p < 0.001). Internal consistency reliability (Cronbach's alpha) increased with EDS use among final-year students but was reduced among first-year students, although the effect was not significant. A similar pattern was noted in item discrimination, which was significant. CONCLUSION: EDS use during diagnostic licensing-style questions was associated with modest improvements in performance, increased discrimination among senior students, and increased testing time. Given that clinicians have access to EDSs in routine clinical practice, allowing EDS use for diagnostic questions would maintain the ecological validity of testing while preserving important psychometric test characteristics.


Subject(s)
Students, Medical; Humans; Diagnosis, Differential; Reproducibility of Results; Licensure; Surveys and Questionnaires; Educational Measurement/methods
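
A minimal sketch of the internal-consistency computation behind the reliability comparison above (all data simulated; the group size, item count, and response model are illustrative assumptions, not the study's data):

    import numpy as np

    def cronbach_alpha(scores):
        # scores: (n_examinees, n_items) array of item scores (e.g., 0/1 correct)
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_var = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    rng = np.random.default_rng(0)
    ability = rng.normal(0, 1, size=(25, 1))       # hypothetical examinees
    difficulty = rng.normal(0, 1, size=(1, 40))    # hypothetical items
    p_correct = 1 / (1 + np.exp(-(ability - difficulty)))
    answers = rng.random((25, 40)) < p_correct     # dichotomous responses
    print(round(cronbach_alpha(answers), 2))

Computing alpha separately for the EDS and no-EDS groups, as the study did, is then just two calls to cronbach_alpha on the two score matrices.
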
3.
Qual Life Res ; 32(5): 1239-1246, 2023 May.
Article in English | MEDLINE | ID: mdl-36396874

ABSTRACT

PURPOSE: Anchor-based methods are group-level approaches used to derive clinical outcome assessment (COA) interpretation thresholds of meaningful within-patient change over time for understanding impacts of disease and treatment. The methods explore the associations between change in the targeted concept of the COA measure and the concept measured by the external anchor(s), typically a global rating, chosen as easier to interpret than the COA measure. While they are valued for providing plausible interpretation thresholds, group-level anchor-based methods pose a number of inherent theoretical and methodological conundrums for interpreting individual-level change. METHODS: This investigation provides a critical appraisal of anchor-based methods for COA interpretation thresholds and details key biases in anchor-based methods that directly influence the magnitude of the interpretation threshold. RESULTS: Five important research issues inherent in the use of anchor-based methods deserve attention: (1) global estimates of change are consistently biased toward the present state; (2) the use of static current-state global measures, while not subject to artifacts of recall, may exacerbate the problem of estimating clinically meaningful change; (3) the specific anchor assessment response(s) that identify the meaningful-change group usually involve an arbitrary judgment; (4) the calculated interpretation thresholds are sensitive to the proportion of patients who have improved; and (5) examination of anchor-based regression methods reveals that the correlation between the COA change scores and the anchor has a direct linear relationship to the magnitude of the interpretation threshold derived using an anchor-based approach, with stronger correlations yielding larger interpretation thresholds. CONCLUSIONS: While anchor-based methods are recognized for their utility in deriving interpretation thresholds for COAs, attention to the biases associated with estimation of the threshold using these methods is needed to progress in the development of standard-setting methodologies for COAs.


Subject(s)
Outcome Assessment, Health Care; Quality of Life; Humans; Quality of Life/psychology; Outcome Assessment, Health Care/methods
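
Point (5) above can be made concrete with a small simulation (a sketch under assumed values: the anchor cutpoint, sample size, and unit scales are illustrative). With a regression-based anchor method, the derived threshold scales directly with the correlation between COA change and the anchor:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    anchor = rng.normal(0, 1, n)  # global rating of change (standardized)
    for r in (0.3, 0.5, 0.7):
        # build COA change scores correlated ~r with the anchor
        change = r * anchor + np.sqrt(1 - r**2) * rng.normal(0, 1, n)
        slope = np.cov(change, anchor)[0, 1] / anchor.var(ddof=1)
        intercept = change.mean() - slope * anchor.mean()
        cutpoint = 1.0  # anchor level taken to mean "minimally improved"
        print(f"r = {r}: estimated threshold = {intercept + slope * cutpoint:.2f}")

Because the regression slope is r times the ratio of standard deviations, the printed thresholds rise roughly in proportion to r, which is exactly the bias the authors flag.
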
4.
Adv Health Sci Educ Theory Pract ; 28(1): 47-63, 2023 03.
Article in English | MEDLINE | ID: mdl-35943606

ABSTRACT

Students are often encouraged to learn 'deeply' by abstracting generalizable principles from course content rather than memorizing details. So widespread is this perspective that Likert-style inventories are now routinely administered to students to quantify how much a given course or curriculum evokes deep learning. The predictive validity of these inventories, however, has been criticized based on sparse empirical support and ambiguity in what specific outcome measures indicate whether deep learning has occurred. Here we further tested the predictive validity of a prevalent deep learning inventory, the Revised Two-Factor Study Process Questionnaire, by selectively analyzing outcome measures that reflect a major goal of medical education-i.e., knowledge transfer. Students from two undergraduate health sciences courses completed the deep learning inventory before their course's final exam. Shortly after, a random subset of students rated how much each final exam item aligned with three task demands associated with transfer: (1) application of general principles, (2) integration of multiple ideas or examples, and (3) contextual novelty. We then used these ratings from students to examine performance on a subset of exam items that were collectively perceived to demand transfer. Despite good reliability, the resulting transfer outcomes were not substantively predicted by the deep learning inventory. These findings challenge the validity of this tool and others like it.


Subject(s)
Deep Learning; Education, Medical; Humans; Reproducibility of Results; Curriculum; Students
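
The transfer-prediction analysis above amounts to correlating inventory totals with scores on the transfer-demanding exam items; a minimal sketch (simulated values; the sample size, score scales, and near-zero effect are assumptions echoing the reported null, not the study's data):

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    inventory = rng.normal(50, 10, 120)                   # hypothetical inventory totals
    transfer = 0.05 * inventory + rng.normal(60, 8, 120)  # nearly unrelated exam scores
    r, p = pearsonr(inventory, transfer)
    print(f"r = {r:.2f}, p = {p:.3f}")  # near-zero r mirrors a lack of predictive validity
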
5.
Acad Med ; 97(8): 1213-1218, 2022 08 01.
Article in English | MEDLINE | ID: mdl-35507461

ABSTRACT

PURPOSE: Postgraduate medical education in Canada has quickly transformed to a competency-based model featuring new entrustable professional activities (EPAs) and associated milestones. It remains unclear, however, how these milestones are distributed between the central medical expert role and 6 intrinsic roles of the larger CanMEDS competency framework. A document review was thus conducted to measure how many EPA milestones are classified under each CanMEDS role, focusing on the overall balance between representation of intrinsic roles and that of medical expert. METHOD: Data were extracted from the EPA guides of 40 Canadian specialties in 2021 to measure the percentage of milestones formally linked to each role. Subsequent analyses explored for differences when milestones were separated by stage of postgraduate training, weighted by an EPA's minimum number of observations, or sorted by surgical and medical specialties. RESULTS: Approximately half of all EPA milestones (mean = 48.6%; 95% confidence interval [CI] = 45.9, 51.3) were classified under intrinsic roles overall. However, representation of the health advocate role was consistently low (mean = 2.95%; 95% CI = 2.49, 3.41), and some intrinsic roles-mainly leader, scholar, and professional-were more heavily concentrated in the final stage of postgraduate training. These findings held true under all conditions examined. CONCLUSIONS: The observed distribution of roles in EPA milestones fits with high-level descriptions of CanMEDS in that intrinsic roles are viewed as inextricably linked to medical expertise, implying both are equally important to cultivate through curricula. Yet a fine-grained analysis suggests that a low prevalence or late emphasis of some intrinsic roles may hinder how they are taught or assessed. Future work must explore whether the quantity or timing of milestones shapes the perceived value of each role, and other factors determining the optimal distribution of roles throughout training.


Subject(s)
Education, Medical; Internship and Residency; Medicine; Canada; Clinical Competence; Competency-Based Education; Curriculum; Humans
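
For reference, the confidence intervals quoted above follow the standard mean ± 1.96 × SE construction; a minimal sketch (the per-specialty percentages below are invented for illustration, only the formula is standard):

    import math

    def mean_ci95(values):
        n = len(values)
        mean = sum(values) / n
        sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
        half = 1.96 * sd / math.sqrt(n)  # normal approximation
        return mean, mean - half, mean + half

    # Hypothetical percentages of intrinsic-role milestones per specialty
    pct_intrinsic = [52.1, 47.3, 49.8, 44.0, 51.5, 46.2, 50.7, 48.9]
    print(mean_ci95(pct_intrinsic))
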
6.
BMJ ; 376: e064389, 2022 01 05.
Article in English | MEDLINE | ID: mdl-34987062

ABSTRACT

Research in cognitive psychology shows that expert clinicians make a medical diagnosis through a two-step process of hypothesis generation and hypothesis testing. Experts generate a list of possible diagnoses quickly and intuitively, drawing on previous experience. Experts remember specific examples of various disease categories as exemplars, which enables rapid access to diagnostic possibilities and gives them an intuitive sense of the base rates of various diagnoses. After generating diagnostic hypotheses, clinicians then test the hypotheses and subjectively estimate the probability of each diagnostic possibility by using a heuristic called anchoring and adjusting. Although both novices and experts use this two-step diagnostic process, experts distinguish themselves as better diagnosticians through their ability to mobilize experiential knowledge in a manner that is content specific. Experience is clearly the best teacher, but some educational strategies have been shown to modestly improve diagnostic accuracy. Increased knowledge about the cognitive psychology of the diagnostic process and the pitfalls inherent in the process may inform clinical teachers and help learners and clinicians to improve the accuracy of diagnostic reasoning. This article reviews the literature on the cognitive psychology of diagnostic reasoning in the context of cardiovascular disease.


Subject(s)
Cardiology/methods; Cardiovascular Diseases/diagnosis; Clinical Decision-Making/methods; Cognitive Psychology; Clinical Competence; Heuristics; Humans; Problem Solving
7.
Anat Sci Educ ; 13(3): 401-412, 2020 May.
Article in English | MEDLINE | ID: mdl-31665563

ABSTRACT

Anatomy education has been revolutionized through digital media, resulting in major advances in realism, portability, scalability, and user satisfaction. However, while such approaches may well be more portable, realistic, or satisfying than traditional photographic presentations, it is less clear that they have any superiority in terms of student learning. In this study, it was hypothesized that virtual and mixed reality presentations of pelvic anatomy would have an advantage over two-dimensional (2D) presentations, would perform approximately on par with physical models, and that the advantage over 2D presentations would be reduced when stereopsis was decreased by covering the non-dominant eye. Groups of 20 undergraduate students learned pelvic anatomy under seven conditions: physical model with and without stereo vision, mixed reality with and without stereo vision, virtual reality with and without stereo vision, and key views on a computer monitor. All were tested with a cadaveric pelvis and a 15-item, short-answer recognition test. Compared to the key views, the physical model yielded a 70% increase in accuracy of structure identification, the virtual reality presentation a 25% increase, and the mixed reality presentation a non-significant 2.5% change. Blocking stereopsis reduced performance on the physical model by 15% and on virtual reality by 60%, but by only 2.5% on the mixed reality technology. The data show that the virtual and mixed reality technologies tested are inferior to physical models and that true stereopsis is critical in learning anatomy.


Subject(s)
Anatomy/education; Depth Perception/physiology; Learning/physiology; Students/psychology; Virtual Reality; Adolescent; Educational Measurement/statistics & numerical data; Female; Humans; Male; Models, Anatomic; Pelvic Bones/anatomy & histology; Students/statistics & numerical data; User-Computer Interface; Young Adult
8.
Med Educ ; 52(11): 1138-1146, 2018 11.
Article in English | MEDLINE | ID: mdl-30345680

ABSTRACT

BACKGROUND: Although several studies (Anat Sci Educ, 8 [6], 525, 2015) have shown that computer-based anatomy programs (three-dimensional visualisation technology [3DVT]) are inferior to ordinary physical models (PMs), the mechanism is not clear. In this study, we explored three mechanisms: haptic feedback, transfer-appropriate processing and stereoscopic vision. METHODS: The test of these hypotheses required nine groups of 20 students: two from a previous study (Anat Sci Educ, 6 [4], 211, 2013) and seven new groups. (i) To explore haptic feedback from physical models, participants in one group were allowed to touch the model during learning; in the other group, they could not; (ii) to test 'transfer-appropriate processing' (TAP), learning (PM or 3DVT) was crossed with testing (cadaver or two-dimensional display of cadaver); (iii) finally, to examine the role of stereo vision, we tested groups who had the non-dominant eye covered during learning and testing, during learning only, or not at all, on both PM and 3DVT. The test was a 15-item short-answer test requiring naming structures on a cadaver pelvis. A list of names was provided. RESULTS: The test of haptic feedback showed a large advantage of the PM over 3DVT regardless of whether or not participants had haptic feedback: 67% correct for the PM with haptic feedback and 69% for the PM without, versus 41% for 3DVT (p < 0.0001). In the study of TAP, the PM had an average score of 74% versus 43% for 3DVT (p < 0.0001), regardless of whether the test used the cadaver or its two-dimensional display. The third study showed that the large advantage of the PM over 3DVT (28%) with binocular vision nearly disappeared (5%) when the non-dominant eye was covered for both learning and testing. CONCLUSIONS: A physical model is superior to a computer projection, primarily as a consequence of stereoscopic vision with the PM. The results have implications for the use of digital technology in spatial learning.


Subject(s)
Anatomy/education; Computer-Assisted Instruction/methods; Depth Perception; Education, Medical/methods; Educational Measurement/methods; Models, Anatomic; Adult; Curriculum; Female; Humans; Male; Ontario; Young Adult
10.
J Rehabil Med ; 50(6): 569-574, 2018 Jun 15.
Article in English | MEDLINE | ID: mdl-29767226

ABSTRACT

OBJECTIVE: Therapeutic footwear is often prescribed at considerable cost. Foot-care specialists normally assess the wear-and-tear of therapeutic footwear in order to monitor the adequacy of the prescribed footwear and to gain an indicator of its use. We developed a simple, rapid, easily applicable indicator of wear-and-tear of therapeutic footwear: the wear-and-tear scale. The aim of this study was to investigate the intra- and inter-rater reliability of the wear-and-tear scale. METHODS: A test set of 100 therapeutic shoes was assembled; 24 raters (6 inexperienced and 6 experienced physiatrists, and 6 inexperienced and 6 experienced orthopaedic shoe technicians) rated the degree of wear-and-tear of the shoes on the scale (range 0-100) twice on 1 day with a 4-h interval (short-term) and twice over a 4-week interval (long-term). Generalizability theory was applied for the analysis. RESULTS: Short-term, long-term and overall intra-rater reliability was excellent (coefficients 0.99, 0.99 and 0.98; standard error of measurement (SEM) 2.6, 2.9 and 3.9; smallest detectable change (SDC) 7.3, 8.0 and 10.8, respectively). Inter-rater reliability between professions, between experienced and inexperienced raters, and overall was excellent (coefficients 0.97, 0.98 and 0.93; SEM 4.9, 4.5 and 8.1; SDC 13.7, 12.4 and 22.5, respectively). CONCLUSION: The wear-and-tear scale has excellent intra-rater, inter-rater, and overall reliability.


Subject(s)
Shoes/standards; Female; Humans; Male; Reproducibility of Results; Weight-Bearing
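
The SEM and SDC values above are consistent with the usual formulas, SEM = SD × sqrt(1 − reliability) and SDC = 1.96 × sqrt(2) × SEM; a quick check against the reported intra-rater numbers:

    import math

    def sdc95(sem):
        # smallest detectable change at the 95% level for a difference score
        return 1.96 * math.sqrt(2) * sem

    for sem in (2.6, 2.9, 3.9):           # reported intra-rater SEMs
        print(sem, round(sdc95(sem), 1))  # -> 7.2, 8.0, 10.8 (abstract: 7.3, 8.0, 10.8)
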
11.
Adv Health Sci Educ Theory Pract ; 22(5): 1321-1322, 2017 12.
Article in English | MEDLINE | ID: mdl-29063308

ABSTRACT

In re-examining the paper "CASPer, an online pre-interview screen for personal/professional characteristics: prediction of national licensure scores" published in AHSE (22(2), 327-336), we recognized two errors of interpretation.

12.
Am J Med ; 130(6): 629-634, 2017 06.
Article in English | MEDLINE | ID: mdl-28238695

ABSTRACT

Research has shown that expert clinicians make a medical diagnosis through a process of hypothesis generation and verification. Experts begin the diagnostic process by generating a list of diagnostic hypotheses using intuitive, nonanalytic reasoning. Analytic reasoning then allows the clinician to test and verify or reject each hypothesis, leading to a diagnostic conclusion. In this article, we focus on the initial step of hypothesis generation and review how expert clinicians use experiential knowledge to intuitively recognize a medical diagnosis.


Subject(s)
Clinical Competence; Clinical Decision-Making; Intuition; Heuristics; Humans
13.
Acad Med ; 92(1): 23-30, 2017 01.
Article in English | MEDLINE | ID: mdl-27782919

ABSTRACT

Contemporary theories of clinical reasoning espouse a dual processing model, which consists of a rapid, intuitive component (Type 1) and a slower, logical and analytical component (Type 2). Although the general consensus is that this dual processing model is a valid representation of clinical reasoning, the causes of diagnostic errors remain unclear. Cognitive theories about human memory propose that such errors may arise from both Type 1 and Type 2 reasoning. Errors in Type 1 reasoning may be a consequence of the associative nature of memory, which can lead to cognitive biases. However, the literature indicates that, with increasing expertise (and knowledge), the likelihood of errors decreases. Errors in Type 2 reasoning may result from the limited capacity of working memory, which constrains computational processes. In this article, the authors review the medical literature to answer two substantial questions that arise from this work: (1) To what extent do diagnostic errors originate in Type 1 (intuitive) processes versus in Type 2 (analytical) processes? (2) To what extent are errors a consequence of cognitive biases versus a consequence of knowledge deficits? The literature suggests that both Type 1 and Type 2 processes contribute to errors. Although it is possible to experimentally induce cognitive biases, particularly availability bias, the extent to which these biases actually contribute to diagnostic errors is not well established. Educational strategies directed at the recognition of biases are ineffective in reducing errors; conversely, strategies focused on the reorganization of knowledge to reduce errors have small but consistent benefits.


Subject(s)
Cognition; Thinking; Bias; Diagnostic Errors/psychology; Humans; Memory
14.
Adv Health Sci Educ Theory Pract ; 22(2): 327-336, 2017 May.
Article in English | MEDLINE | ID: mdl-27873137

ABSTRACT

Typically, only a minority of applicants to health professional training are invited to interview. However, pre-interview measures of cognitive skills predict national licensure scores (Gauer et al. in Med Educ Online 21, 2016), and licensure scores in turn predict performance in practice (Tamblyn et al. in JAMA 288(23):3019-3026, 2002; Tamblyn et al. in JAMA 298(9):993-1001, 2007). Assessment of personal and professional characteristics, with the same psychometric rigour as measures of cognitive abilities, is needed upstream in selection to health professions training programs. To fill that need, the Computer-based Assessment for Sampling Personal characteristics (CASPer), an online, video-based screening test, was created. In this paper, we examine the correlation between CASPer and Canadian national licensure examination outcomes in 109 doctors who took CASPer at the time of selection to medical school. Specifically, CASPer scores were correlated against performance on cognitive and 'non-cognitive' subsections of the Medical Council of Canada Qualifying Examination (MCCQE) Parts I (end of medical school) and II (18 months into specialty training). Unlike most national licensure exams, the MCCQE has specific subcomponents examining personal/professional qualities, providing a unique opportunity for comparison. The results demonstrated moderate predictive validity of CASPer for national licensure outcomes of personal/professional characteristics three to six years after admission to medical school. Disattenuated correlations of this magnitude (r = 0.3-0.5) are not otherwise achieved by traditional screening measures. These data support the ability of a computer-based strategy to screen applicants in a feasible, reliable test that has now demonstrated predictive validity, lending evidence for its validity in medical school applicant selection.


Subject(s)
Licensure/statistics & numerical data; School Admission Criteria/statistics & numerical data; Schools, Medical/statistics & numerical data; Schools, Medical/standards; Canada; Cognition; Educational Measurement; Humans; Personality; Predictive Value of Tests
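
The disattenuated correlations mentioned above use Spearman's classical correction for measurement error; a minimal sketch (the observed correlation and reliability values are assumed for illustration, not taken from the study):

    import math

    def disattenuate(r_obs, rel_x, rel_y):
        # Spearman's correction: observed r divided by the root of the two reliabilities
        return r_obs / math.sqrt(rel_x * rel_y)

    # Hypothetical inputs: observed r = 0.25, measure reliabilities 0.80 and 0.60
    print(round(disattenuate(0.25, 0.80, 0.60), 2))  # ~0.36, within the r = 0.3-0.5 band
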
16.
J Gen Intern Med ; 30(9): 1270-4, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26173528

ABSTRACT

BACKGROUND: An experimenter-controlled form of reflection has been shown to improve the detection and correction of diagnostic errors in some situations; however, the benefits of participant-controlled reflection have not been assessed. OBJECTIVE: The goal of the current study is to examine how experience and a self-directed decision to reflect affect the accuracy of revised diagnoses. DESIGN: Medical residents diagnosed 16 medical cases (pass 1). Participants were then given the opportunity to reflect on each case and revise their diagnoses (pass 2). PARTICIPANTS: Forty-seven medical residents in post-graduate year (PGY) 1, 2 and 3 were recruited from Hamilton Health Care Centres. MAIN MEASURES: Diagnoses were scored as 0 (incorrect), 1 (partially correct) and 2 (correct). Accuracies and response times in pass 1 were analyzed using an ANOVA with three factors (PGY, decision to revise [yes/no], and case [1-16]), averaged across residents. The extent to which additional reflection affected accuracy was examined by analyzing only those cases that were revised, using a repeated-measures ANOVA, with pass (1 or 2) as a within-subject factor and PGY and case or resident as a between-subject factor. KEY RESULTS: The mean score at pass 1 for each level was PGY1, 1.17 (SE 0.50); PGY2, 1.35 (SE 0.67); and PGY3, 1.27 (SE 0.94). While there was a trend for increased accuracy with level, this did not achieve significance. The number of residents at each level who revised at least one diagnosis was 12/19 PGY1 (63%), 9/11 PGY2 (82%) and 8/17 PGY3 (47%). Only 8% of diagnoses were revised, resulting in a small but significant increase in scores from pass 1 to 2, from 1.20/2 to 1.22/2 (t = 2.15, p = 0.03). CONCLUSIONS: Participants did engage in self-directed reflection on incorrect diagnoses; however, this strategy provided minimal benefits compared with knowing the correct answer. Education strategies should be directed at improving formal and experiential knowledge.


Subject(s)
Clinical Competence; Diagnostic Errors/psychology; Internal Medicine/education; Internship and Residency; Thinking; Adult; Decision Making; Education, Medical, Graduate; Educational Measurement; Female; Humans; Male
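
The pass 1 versus pass 2 comparison above is a paired design; a minimal sketch of that analysis (scores are simulated, the ~8% revision rate is the only number borrowed from the abstract, and the assumption that revisions help is illustrative):

    import numpy as np
    from scipy.stats import ttest_rel

    rng = np.random.default_rng(0)
    pass1 = rng.integers(0, 3, size=752)   # 47 residents x 16 cases, scored 0/1/2
    revised = rng.random(752) < 0.08       # ~8% of diagnoses revised
    pass2 = pass1.copy()
    pass2[revised] = np.minimum(pass2[revised] + 1, 2)  # assumed: revision improves score
    t, p = ttest_rel(pass2, pass1)
    print(pass1.mean(), pass2.mean(), round(float(t), 2), round(float(p), 4))
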
17.
Med Educ ; 49(3): 276-85, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25693987

ABSTRACT

CONTEXT: A principal justification for the use of high-fidelity (HF) simulation is that, because it is closer to reality, students will be more motivated to learn and, consequently, will be better able to transfer their learning to real patients. However, the increased authenticity is accompanied by greater complexity, which may reduce learning, and variability in the presentation of a condition on an HF simulator is typically restricted. OBJECTIVES: This study was conducted to explore the effectiveness of HF and low-fidelity (LF) simulation for learning within the clinical education and practice domains of cardiac and respiratory auscultation and physical assessment skills. METHODS: Senior-level nursing students were randomised to HF and LF instruction groups or to a control group. Primary outcome measures included LF (digital sounds on a computer) and HF (human patient simulator) auscultation tests of cardiac and respiratory sounds, as well as observer-rated performances in simulated clinical scenarios. RESULTS: On the LF auscultation test, the LF group consistently demonstrated performance comparable or superior to that of the HF group, and both were superior to the performance of the control group. For both HF outcome measures, there was no significant difference in performance between the HF and LF instruction groups. CONCLUSIONS: The results from this study suggest that highly contextualised learning environments may not be uniformly advantageous for instruction and may lead to ineffective learning by increasing extraneous cognitive load in novice learners.


Subject(s)
Computer Simulation; Education, Nursing, Baccalaureate/methods; Heart Auscultation; Heart Sounds/physiology; Patient Simulation; Humans; Learning; Lung/physiology; Manikins; Models, Educational; Respiration
18.
Acad Med ; 90(4): 511-7, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25565260

ABSTRACT

PURPOSE: Others have suggested that increased time pressure, sometimes caused by interruptions, may result in increased diagnostic errors. The authors previously found, however, that increased time pressure alone does not result in increased errors, but they did not test the effect of interruptions. It is unclear whether experience modulates the combined effects of time pressure and interruptions. This study investigated whether increased time pressure, interruptions, and experience level affect diagnostic accuracy and response time. METHOD: In October 2012, 152 residents were recruited at five Medical Council of Canada Qualifying Examination Part II test sites. Forty-six emergency physicians were recruited from one Canadian and one U.S. academic health center. Participants diagnosed 20 written general medicine cases. They were randomly assigned to receive fast (time pressure) or slow condition instructions. Visual and auditory case interruptions were manipulated as a within-subject factor. RESULTS: Diagnostic accuracy was not affected by interruptions or time pressure but was related to experience level: Emergency physicians were more accurate (71%) than residents (43%) (F = 234.0, P < .0001) and responded more quickly (54 seconds) than residents (65 seconds) (F = 9.0, P < .005). Response time was shorter for participants in the fast condition (55 seconds) than in the slow condition (73 seconds) (F = 22.2, P < .0001). Interruptions added about 8 seconds to response time. CONCLUSIONS: Experienced emergency physicians were both faster and more accurate than residents. Instructions to proceed quickly and interruptions had a small effect on response time but no effect on accuracy.


Subject(s)
Diagnosis; Emergency Medicine; Internship and Residency; Reaction Time; Adult; Diagnostic Errors; Humans; Time Factors