Results 1 - 20 of 103
1.
Qual Life Res ; 32(5): 1239-1246, 2023 May.
Article in English | MEDLINE | ID: mdl-36396874

ABSTRACT

PURPOSE: Anchor-based methods are group-level approaches used to derive clinical outcome assessment (COA) interpretation thresholds of meaningful within-patient change over time for understanding impacts of disease and treatment. The methods explore the associations between change in the targeted concept of the COA measure and the concept measured by the external anchor(s), typically a global rating, chosen as easier to interpret than the COA measure. While they are valued for providing plausible interpretation thresholds, group-level anchor-based methods pose a number of inherent theoretical and methodological conundrums for interpreting individual-level change. METHODS: This investigation provides a critical appraisal of anchor-based methods for COA interpretation thresholds and details key biases in anchor-based methods that directly influence the magnitude of the interpretation threshold. RESULTS: Five important research issues inherent in the use of anchor-based methods deserve attention: (1) global estimates of change are consistently biased toward the present state; (2) the use of static current state global measures, while not subject to artifacts of recall, may exacerbate the problem of estimating clinically meaningful change; (3) the specific anchor assessment response(s) that identify the meaningful change group usually involve an arbitrary judgment; (4) the calculated interpretation thresholds are sensitive to the proportion of patients who have improved; and (5) examination of anchor-based regression methods reveals that the correlation between the COA change scores and the anchor has a direct linear relationship to the magnitude of the interpretation threshold derived using an anchor-based approach, with stronger correlations yielding larger interpretation thresholds.
CONCLUSIONS: While anchor-based methods are recognized for their utility in deriving interpretation thresholds for COAs, attention to the biases associated with estimation of the threshold using these methods is needed to progress in the development of standard-setting methodologies for COAs.
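Issue (5), the linear dependence of the derived threshold on the COA-anchor correlation, can be illustrated with a small simulation (an editorial sketch with made-up data, not an analysis from the paper): with standardized scores, the regression-based threshold is the predicted COA change at the anchor cut-point, which scales directly with the correlation.

```python
import numpy as np

def regression_threshold(r, n=5000, anchor_cut=1.0, seed=0):
    """Illustrative anchor-based regression threshold.

    Simulates standardized COA change scores and anchor ratings with
    correlation r, regresses change on the anchor, and returns the
    predicted change at the anchor category taken to define
    'minimal meaningful improvement' (anchor_cut).
    """
    rng = np.random.default_rng(seed)
    cov = [[1.0, r], [r, 1.0]]
    anchor, change = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    slope, intercept = np.polyfit(anchor, change, 1)
    return slope * anchor_cut + intercept

weak = regression_threshold(0.3)    # weaker anchor correlation
strong = regression_threshold(0.6)  # stronger anchor correlation
# The stronger correlation yields the larger interpretation threshold.
```

Because the slope of a regression between standardized variables is the correlation itself, doubling the COA-anchor correlation roughly doubles the threshold, which is the bias the authors flag.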


Subject(s)
Outcome Assessment, Health Care , Quality of Life , Humans , Quality of Life/psychology , Outcome Assessment, Health Care/methods
2.
Med Educ ; 57(10): 932-938, 2023 10.
Article in English | MEDLINE | ID: mdl-36860135

ABSTRACT

INTRODUCTION: Newer electronic differential diagnosis supports (EDSs) are efficient and effective at improving diagnostic skill. Although these supports are encouraged in practice, they are prohibited in medical licensing examinations. The purpose of this study is to determine how using an EDS impacts examinees' results when answering clinical diagnosis questions. METHOD: The authors recruited 100 medical students from McMaster University (Hamilton, Ontario) to answer 40 clinical diagnosis questions in a simulated examination in 2021. Of these, 50 were first-year students and 50 were final-year students. Participants from each year of study were randomised into one of two groups. During the examination, half of the students had access to Isabel (an EDS) and half did not. Differences were explored using analysis of variance (ANOVA), and reliability estimates were compared for each group. RESULTS: Test scores were higher for final-year versus first-year students (53 ± 13% versus 29 ± 10%, p < 0.001) and higher with use of the EDS (44 ± 28% versus 36 ± 26%, p < 0.001). Students using the EDS took longer to complete the test (p < 0.001). Internal consistency reliability (Cronbach's alpha) increased with EDS use among final-year students but was reduced among first-year students, although the effect was not significant. A similar pattern was noted in item discrimination, which was significant. CONCLUSION: EDS use during diagnostic licensing-style questions was associated with modest improvements in performance, increased discrimination in senior students and increased testing time. Given that clinicians have access to EDSs in routine clinical practice, allowing EDS use for diagnostic questions would maintain ecological validity of testing while preserving important psychometric test characteristics.
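Cronbach's alpha, the internal consistency statistic compared across groups above, is computed from an examinee-by-item score matrix; a minimal sketch with toy data (not the study's data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (examinees x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # per-item variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Two items that always agree -> perfect internal consistency (alpha = 1)
perfect = cronbach_alpha([[0, 0], [1, 1], [0, 0], [1, 1], [1, 1]])
```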


Subject(s)
Students, Medical , Humans , Diagnosis, Differential , Reproducibility of Results , Licensure , Surveys and Questionnaires , Educational Measurement/methods
3.
Adv Health Sci Educ Theory Pract ; 28(1): 47-63, 2023 03.
Article in English | MEDLINE | ID: mdl-35943606

ABSTRACT

Students are often encouraged to learn 'deeply' by abstracting generalizable principles from course content rather than memorizing details. So widespread is this perspective that Likert-style inventories are now routinely administered to students to quantify how much a given course or curriculum evokes deep learning. The predictive validity of these inventories, however, has been criticized based on sparse empirical support and ambiguity in what specific outcome measures indicate whether deep learning has occurred. Here we further tested the predictive validity of a prevalent deep learning inventory, the Revised Two-Factor Study Process Questionnaire, by selectively analyzing outcome measures that reflect a major goal of medical education-i.e., knowledge transfer. Students from two undergraduate health sciences courses completed the deep learning inventory before their course's final exam. Shortly after, a random subset of students rated how much each final exam item aligned with three task demands associated with transfer: (1) application of general principles, (2) integration of multiple ideas or examples, and (3) contextual novelty. We then used these ratings from students to examine performance on a subset of exam items that were collectively perceived to demand transfer. Despite good reliability, the resulting transfer outcomes were not substantively predicted by the deep learning inventory. These findings challenge the validity of this tool and others like it.


Subject(s)
Deep Learning , Education, Medical , Humans , Reproducibility of Results , Curriculum , Students
4.
Med Educ ; 52(11): 1138-1146, 2018 11.
Article in English | MEDLINE | ID: mdl-30345680

ABSTRACT

BACKGROUND: Although several studies (Anat Sci Educ, 8 [6], 525, 2015) have shown that computer-based anatomy programs (three-dimensional visualisation technology [3DVT]) are inferior to ordinary physical models (PMs), the mechanism is not clear. In this study, we explored three mechanisms: haptic feedback, transfer-appropriate processing and stereoscopic vision. METHODS: The test of these hypotheses required nine groups of 20 students: two from a previous study (Anat Sci Educ, 6 [4], 211, 2013) and seven new groups. (i) To explore haptic feedback from physical models, participants in one group were allowed to touch the model during learning; in the other group, they could not; (ii) to test 'transfer-appropriate processing' (TAP), learning (PM or 3DVT) was crossed with testing (cadaver or two-dimensional display of cadaver); (iii) finally, to examine the role of stereo vision, we tested groups who had the non-dominant eye covered during learning and testing, during learning only, or not at all, on both PM and 3DVT. The test was a 15-item short-answer test requiring naming structures on a cadaver pelvis. A list of names was provided. RESULTS: The test of haptic feedback showed a large advantage of the PM over 3DVT regardless of whether or not participants had haptic feedback: 67% correct for the PM with haptic feedback, 69% for the PM without haptic feedback, versus 41% for 3DVT (p < 0.0001). In the study of TAP, the PM had an average score of 74% versus 43% for 3DVT (p < 0.0001) regardless of two-dimensional versus three-dimensional test format. The third study showed that the large advantage of the PM over 3DVT (28%) with binocular vision nearly disappeared (5%) when the non-dominant eye was covered for both learning and testing. CONCLUSIONS: A physical model is superior to a computer projection, primarily as a consequence of stereoscopic vision with the PM. The results have implications for the use of digital technology in spatial learning.


Asunto(s)
Anatomía/educación , Instrucción por Computador/métodos , Percepción de Profundidad , Educación Médica/métodos , Evaluación Educacional/métodos , Modelos Anatómicos , Adulto , Curriculum , Femenino , Humanos , Masculino , Ontario , Adulto Joven
5.
Adv Health Sci Educ Theory Pract ; 22(5): 1321-1322, 2017 12.
Article in English | MEDLINE | ID: mdl-29063308

ABSTRACT

In re-examining the paper "CASPer, an online pre-interview screen for personal/professional characteristics: prediction of national licensure scores" published in AHSE (22(2), 327-336), we recognized two errors of interpretation.

6.
Adv Health Sci Educ Theory Pract ; 22(2): 327-336, 2017 May.
Article in English | MEDLINE | ID: mdl-27873137

ABSTRACT

Typically, only a minority of applicants to health professional training are invited to interview. However, pre-interview measures of cognitive skills predict national licensure scores (Gauer et al. in Med Educ Online 21, 2016), and licensure scores in turn predict performance in practice (Tamblyn et al. in JAMA 288(23):3019-3026, 2002; Tamblyn et al. in JAMA 298(9):993-1001, 2007). Assessment of personal and professional characteristics, with the same psychometric rigour as measures of cognitive abilities, is needed upstream in selection to health profession training programs. To fill that need, Computer-based Assessment for Sampling Personal characteristics (CASPer)-an online, video-based screening test-was created. In this paper, we examine the correlation between CASPer and Canadian national licensure examination outcomes in 109 doctors who took CASPer at the time of selection to medical school. Specifically, CASPer scores were correlated against performance on cognitive and 'non-cognitive' subsections of the Medical Council of Canada Qualifying Examination (MCCQE) Part I (end of medical school) and Part II (18 months into specialty training). Unlike most national licensure exams, the MCCQE has specific subcomponents examining personal/professional qualities, providing a unique opportunity for comparison. The results demonstrated moderate predictive validity of CASPer for national licensure outcomes of personal/professional characteristics three to six years after admission to medical school. These types of disattenuated correlations (r = 0.3-0.5) are not otherwise achieved by traditional screening measures. These data support the ability of a computer-based strategy to screen applicants in a feasible, reliable test that has now demonstrated predictive validity, lending evidence to its validity for medical school applicant selection.


Subject(s)
Licensure/statistics & numerical data , School Admission Criteria/statistics & numerical data , Schools, Medical/statistics & numerical data , Schools, Medical/standards , Canada , Cognition , Educational Measurement , Humans , Personality , Predictive Value of Tests
7.
J Gen Intern Med ; 30(9): 1270-4, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26173528

ABSTRACT

BACKGROUND: An experimenter-controlled form of reflection has been shown to improve the detection and correction of diagnostic errors in some situations; however, the benefits of participant-controlled reflection have not been assessed. OBJECTIVE: The goal of the current study is to examine how experience and a self-directed decision to reflect affect the accuracy of revised diagnoses. DESIGN: Medical residents diagnosed 16 medical cases (pass 1). Participants were then given the opportunity to reflect on each case and revise their diagnoses (pass 2). PARTICIPANTS: Forty-seven medical residents in post-graduate year (PGY) 1, 2 and 3 were recruited from Hamilton health care centres. MAIN MEASURES: Diagnoses were scored as 0 (incorrect), 1 (partially correct) and 2 (correct). Accuracies and response times in pass 1 were analyzed using an ANOVA with three factors (PGY, decision to revise yes/no, and case 1-16), averaged across residents. The extent to which additional reflection affected accuracy was examined by analyzing only those cases that were revised, using a repeated-measures ANOVA, with pass (1 or 2) as a within-subject factor, and PGY and case or resident as a between-subject factor. KEY RESULTS: The mean score at pass 1 for each level was PGY1, 1.17 (SE 0.50); PGY2, 1.35 (SE 0.67); and PGY3, 1.27 (SE 0.94). While there was a trend for increased accuracy with level, this did not achieve significance. The number of residents at each level who revised at least one diagnosis was 12/19 PGY1 (63%), 9/11 PGY2 (82%) and 8/17 PGY3 (47%). Only 8% of diagnoses were revised, resulting in a small but significant increase in scores from pass 1 to 2, from 1.20/2 to 1.22/2 (t = 2.15, p = 0.03). CONCLUSIONS: Participants did engage in self-directed reflection for incorrect diagnoses; however, this strategy provided minimal benefits compared to knowing the correct answer. Education strategies should be directed at improving formal and experiential knowledge.


Subject(s)
Clinical Competence , Diagnostic Errors/psychology , Internal Medicine/education , Internship and Residency , Thinking , Adult , Decision Making , Education, Medical, Graduate , Educational Measurement , Female , Humans , Male
8.
Med Educ ; 49(3): 276-85, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25693987

ABSTRACT

CONTEXT: A principal justification for the use of high-fidelity (HF) simulation is that, because it is closer to reality, students will be more motivated to learn and, consequently, will be better able to transfer their learning to real patients. However, the increased authenticity is accompanied by greater complexity, which may reduce learning, and variability in the presentation of a condition on an HF simulator is typically restricted. OBJECTIVES: This study was conducted to explore the effectiveness of HF and low-fidelity (LF) simulation for learning within the clinical education and practice domains of cardiac and respiratory auscultation and physical assessment skills. METHODS: Senior-level nursing students were randomised to HF and LF instruction groups or to a control group. Primary outcome measures included LF (digital sounds on a computer) and HF (human patient simulator) auscultation tests of cardiac and respiratory sounds, as well as observer-rated performances in simulated clinical scenarios. RESULTS: On the LF auscultation test, the LF group consistently demonstrated performance comparable or superior to that of the HF group, and both were superior to the performance of the control group. For both HF outcome measures, there was no significant difference in performance between the HF and LF instruction groups. CONCLUSIONS: The results from this study suggest that highly contextualised learning environments may not be uniformly advantageous for instruction and may lead to ineffective learning by increasing extraneous cognitive load in novice learners.


Subject(s)
Computer Simulation , Education, Nursing, Baccalaureate/methods , Heart Auscultation , Heart Sounds/physiology , Patient Simulation , Humans , Learning , Lung/physiology , Manikins , Models, Educational , Respiration
9.
J Eval Clin Pract ; 2024 May 31.
Article in English | MEDLINE | ID: mdl-38818694

ABSTRACT

AIMS AND OBJECTIVES: Contextual information which is implicitly available to physicians during clinical encounters has been shown to influence diagnostic reasoning. To better understand the psychological mechanisms underlying the influence of context on diagnostic accuracy, we conducted a review of experimental research on this topic. METHOD: We searched Web of Science, PubMed, and Scopus for relevant articles and looked for additional records by reading the references and approaching experts. We limited the review to true experiments involving physicians in which the outcome variable was the accuracy of the diagnosis. RESULTS: The 43 studies reviewed examined two categories of contextual variables: (a) case-intrinsic contextual information and (b) case-extrinsic contextual information. Case-intrinsic information includes implicit misleading diagnostic suggestions in the disease history of the patient, or emotional volatility of the patient. Case-extrinsic or situational information includes a similar (but different) case seen previously, perceived case difficulty, or external digital diagnostic support. Time pressure and interruptions are other extrinsic influences that may affect the accuracy of a diagnosis but have produced conflicting findings. CONCLUSION: We propose two tentative hypotheses explaining the role of context in diagnostic accuracy. According to the negative-affect hypothesis, diagnostic errors emerge when the physician's attention shifts from the relevant clinical findings to the (irrelevant) source of negative affect (for instance patient aggression) raised in a clinical encounter. The early-diagnosis-primacy hypothesis attributes errors to the extraordinary influence of the initial hypothesis that comes to the physician's mind on the subsequent collecting and interpretation of case information. Future research should test these mechanisms explicitly. 
Possible alternative mechanisms such as premature closure or increased production of (irrelevant) rival diagnoses in response to context deserve further scrutiny. Implications for medical education and practice are discussed.

11.
Med Educ ; 47(10): 979-89, 2013 Oct.
Article in English | MEDLINE | ID: mdl-24016168

ABSTRACT

CONTEXT: Medical education research focuses extensively on experience and deliberate practice (DP) as key factors in the development of expert performance. The research on DP minimises the role of individual ability in expert performance. This claim ignores a large body of research supporting the importance of innate individual cognitive differences. We review the relationship between DP and an innate individual ability, working memory (WM) capacity, to illustrate how both DP and individual ability predict expert performance. METHODS: This narrative review examines the relationship between DP and WM in accounting for expert performance. Studies examining DP, WM and individual differences were identified through a targeted search. RESULTS: Although all studies support extensive DP as a factor in explaining expertise, much research suggests individual cognitive differences, such as WM capacity, predict expert performance after controlling for DP. The extent to which this occurs may be influenced by the nature of the task under study and the cognitive processes used by experts. The importance of WM capacity is greater for tasks that are non-routine or functionally complex. Clinical reasoning displays evidence of this task-dependent importance of individual ability. CONCLUSIONS: No single factor is both necessary and sufficient in explaining expertise, and individual abilities such as WM can be important. These individual abilities are likely to contribute to expert performance in clinical settings. Medical education research and practice should identify the individual differences in novices and experts that are important to clinical performance.


Subject(s)
Clinical Competence/standards , Learning , Professional Competence/standards , Aptitude , Education, Medical , Humans , Intelligence , Memory, Short-Term , Physicians/standards , Task Performance and Analysis
12.
Adv Health Sci Educ Theory Pract ; 18(5): 987-96, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23307097

ABSTRACT

The purpose of this study was to determine the reliability of a computer-based encounter card (EC) to assess medical students during an emergency medicine rotation. From April 2011 to March 2012, multiple physicians assessed an entire medical school class during their emergency medicine rotation using the CanMEDS framework. At the end of an emergency department shift, an EC was scored (1-10) for each student on Medical Expert, 2 additional Roles, and an overall score. Analysis of 1,819 ECs (155 of 186 students) revealed the following: Collaborator, Manager, Health Advocate and Scholar were assessed on less than 25% of ECs. On average, each student was assessed 11 times, with an inter-rater reliability of 0.6. The largest source of variance was rater bias. A D-study showed that a minimum of 17 ECs was required for a reliability of 0.7. There were moderate to strong correlations between all Roles and the overall score, and the factor analysis revealed all items loading on a single factor, accounting for 87% of the variance. The global assessment of the CanMEDS Roles using ECs has significant variance in estimates of performance, derived from differences between raters. Some Roles are seldom selected for assessment, suggesting that raters have difficulty identifying related performance. Finally, correlation and factor analyses demonstrate that raters are unable to discriminate among Roles and are basing judgments on an overall impression.
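The D-study figure quoted above is consistent with the Spearman-Brown prophecy formula: backing out a single-card reliability from the 0.6 observed across 11 ECs and solving for the number of cards needed to reach 0.7 gives roughly 17. A sketch assuming the simple one-facet case, not the study's full generalizability model:

```python
def single_form_reliability(r_n, n):
    """Invert Spearman-Brown: reliability of a single observation,
    given the reliability r_n of the mean of n observations."""
    return r_n / (n - (n - 1) * r_n)

def forms_needed(r1, target):
    """Spearman-Brown: number of observations needed to reach a
    target reliability, given single-observation reliability r1."""
    return target * (1 - r1) / (r1 * (1 - target))

r1 = single_form_reliability(0.6, 11)  # 0.12 per encounter card
n = forms_needed(r1, 0.7)              # about 17 cards, matching the D-study
```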


Subject(s)
Clinical Competence/standards , Competency-Based Education , Education, Medical, Undergraduate/methods , Educational Measurement/methods , Emergency Medicine/education , Physician's Role , Feedback , Female , Humans , Male , Reproducibility of Results
13.
Med Educ ; 46(3): 326-35, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22324532

ABSTRACT

CONTEXT: The inadequacy of self-assessment as a mechanism to guide performance improvements has placed greater emphasis on the value of testing as a pedagogic strategy. The mechanism whereby testing influences learning is incompletely understood. This study was performed to examine which aspects of a testing experience most influence self-regulated learning behaviour among medical students. METHODS: Sixty-seven medical students participated in a computer-based, multiple-choice test. Initially, participants were instructed to attempt only items for which they felt confident of their response. They were then asked to indicate their best responses to deferred items. Students were then given an opportunity to review the items, with correct responses indicated. Accuracy, the attempt/defer decision and the time taken to reach this decision were recorded, along with participants' ratings of their confidence in each response and the time spent reviewing each item on completion of the test. RESULTS: Students correctly answered a larger proportion of attempted items than deferred items (71% versus 40%; p < 0.001), and indicated a higher mean confidence in responses to items they answered correctly compared with items they answered incorrectly (70 versus 46; p < 0.001). They spent longer reviewing items they had answered incorrectly than correctly (8.3 versus 4.0 seconds; p < 0.001), and paid particular attention to items for which the attempt/defer decision and accuracy were discordant (p < 0.01). The amount of time required to make a decision on whether or not to answer a test question was also related to reviewing time. CONCLUSIONS: Medical students showed a robust ability to accurately and consciously self-monitor their likelihood of success on multiple-choice test items. 
Their tendency to focus subsequent self-regulated learning on areas in which performance and self-monitoring judgements were misaligned reinforces the importance of providing learners with opportunities to discover the limits of their ability and further elucidates the mechanism through which the benefits of test-enhanced learning might be derived.


Subject(s)
Education, Medical, Undergraduate/methods , Self-Assessment (Psychology) , Students, Medical/psychology , Adult , Clinical Competence , Female , Humans , Learning , Male , Test Taking Skills , Young Adult
14.
JAMA ; 308(21): 2233-40, 2012 Dec 05.
Article in English | MEDLINE | ID: mdl-23212501

ABSTRACT

CONTEXT: There has been difficulty designing medical school admissions processes that provide valid measurement of candidates' nonacademic qualities. OBJECTIVE: To determine whether students deemed acceptable through a revised admissions protocol using a 12-station multiple mini-interview (MMI) outperform others on the 2 parts of the Canadian national licensing examinations (Medical Council of Canada Qualifying Examination [MCCQE]). The MMI process requires candidates to rotate through brief sequential interviews with structured tasks and independent assessment within each interview. DESIGN, SETTING, AND PARTICIPANTS: Cohort study comparing potential medical students who were interviewed at McMaster University using an MMI in 2004 or 2005 and accepted (whether or not they matriculated at McMaster) with those who were interviewed and rejected but gained entry elsewhere. The computer-based MCCQE part I (aimed at assessing medical knowledge and clinical decision making) can be taken on graduation from medical school; MCCQE part II (involving simulated patient interactions testing various aspects of practice) is based on the objective structured clinical examination and typically completed 16 months into postgraduate training. Interviews were granted to 1071 candidates, and those who gained entry could feasibly complete both parts of their licensure examination between May 2007 and March 2011. Scores could be matched on the examinations for 751 (part I) and 623 (part II) interviewees. INTERVENTION: Admissions decisions were made by combining z score transformations of scores assigned to autobiographical essays, grade point average, and MMI performance. Academic and nonacademic measures contributed equally to the final ranking. MAIN OUTCOME MEASURES: Scores on MCCQE part I (standardized cut-score, 390 [SD, 100]) and part II (standardized mean, 500 [SD, 100]). 
RESULTS: Candidates accepted by the admissions process had higher scores than those who were rejected for part I (mean total score, 531 [95% CI, 524-537] vs 515 [95% CI, 507-522]; P = .003) and for part II (mean total score, 563 [95% CI, 556-570] vs 544 [95% CI, 534-554]; P = .007). Among the accepted group, those who matriculated at McMaster did not outperform those who matriculated elsewhere for part I (mean total score, 524 [95% CI, 515-533] vs 546 [95% CI, 535-557]; P = .004) and for part II (mean total score, 557 [95% CI, 548-566] vs 582 [95% CI, 569-594]; P = .003). CONCLUSION: Compared with students who were rejected by an admission process that used MMI assessment, students who were accepted scored higher on Canadian national licensing examinations.
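The admissions composite described in the INTERVENTION section (z-transforming each measure, then weighting academic and nonacademic components equally) can be sketched as follows; the specific weights and toy data are illustrative assumptions, not the actual McMaster formula.

```python
import statistics

def zscores(values):
    """Standardize a list of scores to mean 0, SD 1."""
    mu, sd = statistics.mean(values), statistics.stdev(values)
    return [(v - mu) / sd for v in values]

def rank_candidates(gpa, essay, mmi):
    """Composite rank: GPA (academic) gets half the weight; the essay
    and MMI (nonacademic) share the other half. Rank 1 is best."""
    z_gpa, z_essay, z_mmi = zscores(gpa), zscores(essay), zscores(mmi)
    composite = [0.5 * g + 0.25 * e + 0.25 * m
                 for g, e, m in zip(z_gpa, z_essay, z_mmi)]
    order = sorted(range(len(composite)), key=lambda i: -composite[i])
    return [order.index(i) + 1 for i in range(len(composite))]

ranks = rank_candidates(gpa=[4.0, 3.0, 3.5], essay=[9, 5, 7], mmi=[8, 4, 6])
```

Standardizing before combining keeps any one measure's raw scale from dominating the ranking, which is the point of the z-score step the abstract describes.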


Subject(s)
Education, Medical, Undergraduate/standards , Educational Measurement , Interviews as Topic , School Admission Criteria , Schools, Medical , Cohort Studies , Humans , Licensure , Ontario
15.
BMJ ; 376: e064389, 2022 01 05.
Article in English | MEDLINE | ID: mdl-34987062

ABSTRACT

Research in cognitive psychology shows that expert clinicians make a medical diagnosis through a two step process of hypothesis generation and hypothesis testing. Experts generate a list of possible diagnoses quickly and intuitively, drawing on previous experience. Experts remember specific examples of various disease categories as exemplars, which enables rapid access to diagnostic possibilities and gives them an intuitive sense of the base rates of various diagnoses. After generating diagnostic hypotheses, clinicians then test the hypotheses and subjectively estimate the probability of each diagnostic possibility by using a heuristic called anchoring and adjusting. Although both novices and experts use this two step diagnostic process, experts distinguish themselves as better diagnosticians through their ability to mobilize experiential knowledge in a manner that is content specific. Experience is clearly the best teacher, but some educational strategies have been shown to modestly improve diagnostic accuracy. Increased knowledge about the cognitive psychology of the diagnostic process and the pitfalls inherent in the process may inform clinical teachers and help learners and clinicians to improve the accuracy of diagnostic reasoning. This article reviews the literature on the cognitive psychology of diagnostic reasoning in the context of cardiovascular disease.


Subject(s)
Cardiology/methods , Cardiovascular Diseases/diagnosis , Clinical Decision-Making/methods , Cognitive Psychology , Clinical Competence , Heuristics , Humans , Problem Solving
16.
Acad Med ; 97(8): 1213-1218, 2022 08 01.
Article in English | MEDLINE | ID: mdl-35507461

ABSTRACT

PURPOSE: Postgraduate medical education in Canada has quickly transformed to a competency-based model featuring new entrustable professional activities (EPAs) and associated milestones. It remains unclear, however, how these milestones are distributed between the central medical expert role and 6 intrinsic roles of the larger CanMEDS competency framework. A document review was thus conducted to measure how many EPA milestones are classified under each CanMEDS role, focusing on the overall balance between representation of intrinsic roles and that of medical expert. METHOD: Data were extracted from the EPA guides of 40 Canadian specialties in 2021 to measure the percentage of milestones formally linked to each role. Subsequent analyses explored for differences when milestones were separated by stage of postgraduate training, weighted by an EPA's minimum number of observations, or sorted by surgical and medical specialties. RESULTS: Approximately half of all EPA milestones (mean = 48.6%; 95% confidence interval [CI] = 45.9, 51.3) were classified under intrinsic roles overall. However, representation of the health advocate role was consistently low (mean = 2.95%; 95% CI = 2.49, 3.41), and some intrinsic roles-mainly leader, scholar, and professional-were more heavily concentrated in the final stage of postgraduate training. These findings held true under all conditions examined. CONCLUSIONS: The observed distribution of roles in EPA milestones fits with high-level descriptions of CanMEDS in that intrinsic roles are viewed as inextricably linked to medical expertise, implying both are equally important to cultivate through curricula. Yet a fine-grained analysis suggests that a low prevalence or late emphasis of some intrinsic roles may hinder how they are taught or assessed. Future work must explore whether the quantity or timing of milestones shapes the perceived value of each role, and other factors determining the optimal distribution of roles throughout training.
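The role percentages above are reported as means with 95% confidence intervals across the 40 specialty EPA guides; for reference, the normal-approximation interval behind such figures can be sketched as follows (toy data, not the study's):

```python
import math
import statistics

def mean_ci(values, z=1.96):
    """Mean with a normal-approximation 95% confidence interval."""
    m = statistics.mean(values)
    se = statistics.stdev(values) / math.sqrt(len(values))
    return m, m - z * se, m + z * se

# e.g. percentage of milestones under intrinsic roles in three specialties
m, lo, hi = mean_ci([40.0, 50.0, 60.0])
```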


Subject(s)
Education, Medical , Internship and Residency , Medicine , Canada , Clinical Competence , Competency-Based Education , Curriculum , Humans
18.
Med Educ ; 45(4): 407-14, 2011 Apr.
Article in English | MEDLINE | ID: mdl-21401689

ABSTRACT

CONTEXT: Previous research has demonstrated the influence of familiar symptom descriptions and entire case similarity on diagnostic reasoning. In this paper, we extend the role of familiarity to examine the influence of familiar non-diagnostic patient information (e.g. name and age) on the diagnostic decisions of novices, both immediately following training and after a delay. If an instance model (reliance on similar previously seen cases) has strong explanatory power in clinical reasoning, we should see an influence of familiar patient information on later cases containing similar identifying characteristics even though such information is objectively irrelevant. METHODS: Thirty-six participants (undergraduate psychology students) were trained to competence on four simplified psychiatric diagnoses and allowed to practise their diagnostic skills on 12 prototypical case vignettes, for which feedback was provided. One-third of participants were tested immediately, one-third following a 24-hour delay, and one-third following a 1-week delay; all were tested on novel cases. Test cases were created to have two equiprobable diagnoses, both of which were supported by two novel symptom descriptions. However, one diagnosis was also supported by non-diagnostic patient information similar to information on a patient seen in the training phase. A deviation from an equal assignment of diagnostic probability, in support of the familiar patient information, demonstrates a reliance on the familiar, non-diagnostic information, and therefore indicates an instance model of reasoning. RESULTS: Participants assigned significantly higher diagnostic probability to the diagnosis cued by the familiar patient information (52.6%) than to the plausible alternative diagnosis (38.9%). Participants also reported a higher number of clinically relevant symptoms to support the diagnosis associated with the familiar patient information than to support the plausible alternative diagnosis. 
The influence of familiar patient identity was consistent across delay periods and cannot be accounted for by the forgetting of diagnostic rules. CONCLUSIONS: Participants were clearly relying on familiar patient identity information as evidenced by their diagnostic conclusions and differential reporting of clinically relevant features. These results support an instance model of reasoning which is not limited by whole case similarity or similarity of diagnostic information.


Asunto(s)
Toma de Decisiones , Educación de Pregrado en Medicina/métodos , Psicología/educación , Estudiantes de Medicina/psicología , Enseñanza/métodos , Competencia Clínica , Diagnóstico Diferencial , Educación de Pregrado en Medicina/normas , Humanos , Solución de Problemas
19.
Teach Learn Med ; 23(1): 78-84, 2011 Jan.
Article in English | MEDLINE | ID: mdl-21240788

ABSTRACT

BACKGROUND: Cognitive forcing strategies, a form of metacognition, have been advocated as a strategy to prevent diagnostic error. Increasingly, curricula are being implemented in medical training to address this error. Yet there is no experimental evidence that these curricula are effective. DESCRIPTION: This was an exploratory, prospective study using consecutive enrollment of 56 senior medical students during their emergency medicine rotation. Students received interactive, standardized cognitive forcing strategy training. EVALUATION: Using a cross-over design to assess transfer between similar (to instructional cases) and novel diagnostic cases, students were evaluated on 6 test cases. Forty-seven students were tested immediately and 9 were tested 2 weeks later. Data were analyzed using descriptive statistics and a McNemar chi-square test. CONCLUSIONS: This is the first study to explore the impact of cognitive forcing strategy training on diagnostic error. Our preliminary findings suggest that application and retention are poor. Further large studies are required to determine if transfer across diagnostic formats occurs.


Asunto(s)
Errores Diagnósticos/prevención & control , Aprendizaje , Modelos Psicológicos , Enseñanza , Distribución de Chi-Cuadrado , Competencia Clínica , Cognición , Estudios Cruzados , Curriculum , Educación Médica/métodos , Evaluación Educacional , Escolaridad , Electrocardiografía , Humanos , Proyectos Piloto , Estudios Prospectivos , Estados Unidos