Results 1-20 of 37
1.
Can J Surg ; 64(3): E317-E323, 2021 05 26.
Article in English | MEDLINE | ID: mdl-34038060

ABSTRACT

Background: Script concordance testing (SCT) is an objective method to evaluate clinical reasoning that assesses the ability to interpret medical information under conditions of uncertainty. Many studies have supported its validity as a tool to assess higher levels of learning, but little is known about its acceptability to major stakeholders. The aim of this study was to determine the acceptability of SCT to residents in otolaryngology-head and neck surgery (OTL-HNS) and a reference group of experts. Methods: In 2013 and 2016, a set of SCT questions, as well as a post-test exit survey, were included in the National In-Training Examination (NITE) for OTL-HNS. This examination is administered to all OTL-HNS residents across Canada who are in the second to fifth year of residency. The same SCT questions and survey were then sent to a group of OTL-HNS surgeons from 4 Canadian universities. Results: For 64.4% of faculty and residents, the study was their first exposure to SCT. Overall, residents found it difficult to adapt to this form of testing, thought that the clinical scenarios were not clear and believed that SCT was not useful for assessing clinical reasoning. In contrast, the vast majority of experts felt that the test questions reflected real-life clinical situations and would recommend SCT as an evaluation method in OTL-HNS. Conclusion: Views about the acceptability of SCT as an assessment tool for clinical reasoning differed between OTL-HNS residents and experts. Education about SCT and increased exposure to this testing method are necessary to improve residents' perceptions of SCT.




Subject(s)
Attitude of Health Personnel, Clinical Reasoning, Educational Measurement, Internship and Residency, Otolaryngology/education, Canada, Humans, Surveys and Questionnaires
2.
Ann Emerg Med ; 75(2): 206-217, 2020 02.
Article in English | MEDLINE | ID: mdl-31474478

ABSTRACT

STUDY OBJECTIVE: Clinical reasoning is considered a core competency of physicians. Yet there is a paucity of research on clinical reasoning specifically in emergency medicine, as highlighted in the literature. METHODS: We conducted a scoping review to examine the state of research on clinical reasoning in this specialty. Our team, composed of content and methodological experts, identified 3,763 articles in the literature, 95 of which were included. RESULTS: Most studies were published after 2000. Few studies focused on the cognitive processes involved in decisionmaking (ie, clinical reasoning). Of these, many confirmed findings from the general literature on clinical reasoning; specifically, the role of both intuitive and analytic processes. We categorized factors that influence decisionmaking into contextual, patient, and physician factors. Many studies focused on decisions in regard to investigations and admission. Test ordering is influenced by physicians' experience, fear of litigation, and concerns about malpractice. Fear of litigation and malpractice also increases physicians' propensity to admit patients. Context influences reasoning but findings pertaining to specific factors, such as patient flow and workload, were inconsistent. CONCLUSION: Many studies used designs such as descriptive or correlational methods, limiting the strength of findings. Many gray areas persist, in which studies are either scarce or yield conflicting results. The findings of this scoping review should encourage us to intensify research in the field of emergency physicians' clinical reasoning, particularly on the cognitive processes at play and the factors influencing them, using appropriate theoretical frameworks and more robust methods.


Subject(s)
Decision Making, Emergency Medicine/methods, Emergency Service, Hospital, Physicians/psychology, Defensive Medicine, Humans
3.
Adv Health Sci Educ Theory Pract ; 25(4): 989-1002, 2020 10.
Article in English | MEDLINE | ID: mdl-31768787

ABSTRACT

Scoping reviews are increasingly used in health professions education to synthesize research and scholarship, and to report on the depth and breadth of the literature on a given topic. In this Perspective, we argue that the philosophical stance scholars adopt during the execution of a scoping review, including the meaning they attribute to fundamental concepts such as knowledge and evidence, influences how they gather, analyze, and interpret information obtained from a heterogeneous body of literature. We highlight the principles informing scoping reviews and outline how epistemology-the aspect of philosophy that "deals with questions involving the nature of knowledge, the justification of beliefs, and rationality"-should guide methodological considerations, toward the aim of ensuring the production of a high-quality review with defensible and appropriate conclusions. To contextualize our claims, we illustrate some of the methodological challenges we have personally encountered while executing a scoping review on clinical reasoning and reflect on how these challenges could have been reconciled through a broader understanding of the methodology's philosophical foundation. We conclude with a description of lessons we have learned that might usefully inform other scholars who are considering undertaking a scoping review in their own domains of inquiry.


Subject(s)
Health Occupations/education, Knowledge, Systematic Reviews as Topic/methods, Systematic Reviews as Topic/standards, Humans
4.
BMC Med Educ ; 20(1): 107, 2020 Apr 07.
Article in English | MEDLINE | ID: mdl-32264895

ABSTRACT

BACKGROUND: Clinical reasoning is at the core of health professionals' practice. A mapping of what constitutes clinical reasoning could support the teaching, development, and assessment of clinical reasoning across the health professions. METHODS: We conducted a scoping study to map the literature on clinical reasoning across the health professions in the context of a larger Best Evidence Medical Education (BEME) review on clinical reasoning assessment. Seven databases were searched using subheadings and terms relating to clinical reasoning, assessment, and the health professions. Data analysis focused on a comprehensive analysis of bibliometric characteristics and the use of varied terminology to refer to clinical reasoning. RESULTS: The literature identified comprised 625 papers spanning 47 years (1968-2014), in 155 journals, from 544 first authors, across 18 health professions. Thirty-seven percent of papers used the term clinical reasoning, and 110 other terms referring to the concept of clinical reasoning were identified. Consensus on the categorization of terms was reached for 65 terms across six different categories: reasoning skills, reasoning performance, reasoning process, outcome of reasoning, context of reasoning, and purpose/goal of reasoning. Categories of terminology used differed across health professions and publication types. DISCUSSION: Many diverse terms were present and were used differently across literature contexts. These terms likely reflect different operationalizations, or conceptualizations, of clinical reasoning, as well as the complex, multi-dimensional nature of this concept. We advise authors to make the intended meaning of 'clinical reasoning' and associated terms in their work explicit in order to facilitate teaching, assessment, and research communication.


Subject(s)
Clinical Competence/standards, Clinical Reasoning, Health Occupations/standards, Professional Practice/standards, Humans, Professional Role
5.
Med Teach ; 41(11): 1277-1284, 2019 11.
Article in English | MEDLINE | ID: mdl-31314612

ABSTRACT

Introduction: Clinical reasoning is considered to be at the core of health practice. Here, we report on the diversity and inferred meanings of the terms used to refer to clinical reasoning and consider implications for teaching and assessment. Methods: In the context of a Best Evidence Medical Education (BEME) review of 625 papers drawn from 18 health professions, we identified 110 terms for clinical reasoning. We focus on iterative categorization of these terms across three phases of coding and considerations for how terminology influences educational practices. Results: Following iterative coding with 5 team members, consensus was possible for 74, majority coding was possible for 16, and full team disagreement existed for 20 terms. Categories of terms included: purpose/goal of reasoning, outcome of reasoning, reasoning performance, reasoning processes, reasoning skills, and context of reasoning. Discussion: Findings suggest that terms used in reference to clinical reasoning are non-synonymous, not uniformly understood, and the level of agreement differed across terms. If the language we use to describe, to teach, or to assess clinical reasoning is not similarly understood across clinical teachers, program directors, and learners, this could lead to confusion regarding what the educational or assessment targets are for "clinical reasoning."


Subject(s)
Clinical Decision-Making/methods, Health Occupations/education, Terminology as Topic, Clinical Competence, Humans
10.
J Interprof Care ; 30(5): 689-92, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27309966

ABSTRACT

Clinical work occurs in a context which is heavily influenced by social interactions. The absence of theoretical frameworks underpinning the design of collaborative learning has become a roadblock for interprofessional education (IPE). This article proposes a script-based framework for the design of IPE. This framework provides suggestions for designing learning environments intended to foster competences we feel are fundamental to successful interprofessional care. The current literature describes two script concepts: "illness scripts" and "internal/external collaboration scripts". Illness scripts are specific knowledge structures that link general disease categories and specific examples of diseases. "Internal collaboration scripts" refer to an individual's knowledge about how to interact with others in a social situation. "External collaboration scripts" are instructional scaffolds designed to help groups collaborate. Instructional research relating to illness scripts and internal collaboration scripts supports (a) putting learners in authentic situations in which they need to engage in clinical reasoning, and (b) scaffolding their interaction with others with "external collaboration scripts". Thus, well-established experiential instructional approaches should be combined with more fine-grained script-based scaffolding approaches. The resulting script-based framework offers instructional designers insights into how students can be supported to develop the necessary skills to master complex interprofessional clinical situations.


Subject(s)
Cooperative Behavior, Curriculum, Health Personnel/education, Interdisciplinary Communication, Teaching/organization & administration, Humans
12.
Occup Ther Health Care ; 29(2): 186-200, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25821884

ABSTRACT

Multiple-Mini Interviews (MMIs) were used to assess professional attributes of candidates seeking admission to an occupational therapy professional entry-level master's program. Candidates and interviewers were invited to complete a questionnaire comprised of quantitative and open-ended questions following the MMIs. The MMIs were perceived to be fair, enjoyable, and capable of capturing professional attributes. Descriptive analysis of candidates' data revealed perceptions regarding logistics, interview station content, process, and interviewers. Interviewers commented on the positive and challenging aspects of the scenarios and the MMI process. Admissions committees need to consider several logistical, content, and process issues when designing and implementing MMIs as a selection tool.


Subject(s)
Allied Health Personnel/education, Attitude of Health Personnel, Interviews as Topic, Occupational Therapy/education, School Admission Criteria, Humans, Perception, Surveys and Questionnaires
14.
Med Educ ; 47(5): 453-62, 2013 May.
Article in English | MEDLINE | ID: mdl-23574058

ABSTRACT

OBJECTIVES: Traditional student feedback questionnaires are imperfect course evaluation tools, largely because they generate low response rates and are susceptible to response bias. Preliminary research suggests that prediction-based methods of course evaluation - in which students estimate their peers' opinions rather than provide their own personal opinions - require significantly fewer respondents to achieve comparable results and are less subject to biasing influences. This international study seeks further support for the validity of these findings by investigating: (i) the performance of the prediction-based method, and (ii) its potential for bias. METHODS: Participants (210 Year 1 undergraduate medical students at McGill University, Montreal, Quebec, Canada, and 371 Year 1 and 385 Year 3 undergraduate medical students at the University Medical Center Groningen [UMCG], University of Groningen, Groningen, the Netherlands) were randomly assigned to complete course evaluations using either the prediction-based or the traditional opinion-based method. The numbers of respondents required to achieve stable outcomes were determined using an iterative process. Differences between the methods regarding the number of respondents required were analysed using t-tests. Differences in evaluation outcomes between the methods and between groups of students stratified by four potentially biasing variables (gender, estimated general level of achievement, expected test result, satisfaction after examination completion) were analysed using multivariate analysis of variance (MANOVA). RESULTS: Overall response rates in the three student cohorts ranged from 70% to 94%. The prediction-based method required significantly fewer respondents than the opinion-based method (averages of 26-28 and 67-79 respondents, respectively) across all samples (p < 0.001), whereas the outcomes achieved were fairly similar. Bias was found in four of 12 opinion-based condition comparisons (three sites, four variables), and in only one comparison in the prediction-based condition. CONCLUSIONS: Our study supports previous findings that prediction-based methods require significantly fewer respondents to achieve results comparable with those obtained through traditional course evaluation methods. Moreover, our findings support the hypothesis that prediction-based responses are less subject to bias than traditional opinion-based responses. These findings lend credence to the prediction-based approach as an accurate and efficient method of course evaluation.


Subject(s)
Education, Medical, Undergraduate/standards, Students, Medical/psychology, Analysis of Variance, Attitude of Health Personnel, Curriculum/standards, Humans, Netherlands, Observer Variation, Peer Group, Quebec, Surveys and Questionnaires/standards
15.
Med Teach ; 35(3): 184-93, 2013.
Article in English | MEDLINE | ID: mdl-23360487

ABSTRACT

The script concordance test (SCT) is used in health professions education to assess a specific facet of clinical reasoning competence: the ability to interpret medical information under conditions of uncertainty. Grounded in established theoretical models of knowledge organization and clinical reasoning, the SCT has three key design features: (1) respondents are faced with ill-defined clinical situations and must choose between several realistic options; (2) the response format reflects the way information is processed in challenging problem-solving situations; and (3) scoring takes into account the variability of responses of experts to clinical situations. SCT scores are meant to reflect how closely respondents' ability to interpret clinical data compares with that of experienced clinicians in a given knowledge domain. A substantial body of research supports the SCT's construct validity, reliability, and feasibility across a variety of health science disciplines, and across the spectrum of health professions education from pre-clinical training to continuing professional development. In practice, its performance as an assessment tool depends on careful item development and diligent panel selection. This guide, intended as a primer for the uninitiated in SCT, will cover the basic tenets, theoretical underpinnings, and construction principles governing script concordance testing.


Subject(s)
Educational Measurement/methods, Health Occupations/education, Models, Theoretical, Thinking, Clinical Competence/standards, Diagnosis, Differential, Humans, Uncertainty
16.
Med Educ ; 46(5): 454-63, 2012 May.
Article in English | MEDLINE | ID: mdl-22515753

ABSTRACT

CONTEXT: Clinical reasoning is a core skill in medical practice, but remains notoriously difficult for students to grasp and teachers to nurture. To date, an accepted model that adequately captures the complexity of clinical reasoning processes does not exist. Knowledge-modelling software such as MOT Plus (Modelling using Typified Objects [MOT]) may be exploited to generate models capable of unravelling some of this complexity. OBJECTIVES: This study was designed to create a comprehensive generic model of clinical reasoning processes that is intended for use by teachers and learners, and to provide data on the validity of the model. METHODS: Using a participatory action research method and the established modelling software (MOT Plus), knowledge was extracted and entered into the model by a cognitician in a series of encounters with a group of experienced clinicians over more than 250 contact hours. The model was then refined through an iterative validation process involving the same group of doctors, after which other groups of clinicians were asked to solve a clinical problem involving simulated patients. RESULTS: A hierarchical model depicting the multifaceted processes of clinical reasoning was produced. Validation rounds suggested generalisability across disciplines and situations. CONCLUSIONS: The MOT model of clinical reasoning processes has potentially important applications for use within undergraduate and graduate medical curricula to inform teaching, learning and assessment. Specifically, it could be used to support curricular development because it can help to identify opportune moments for learning specific elements of clinical reasoning. It could also be used to precisely identify and remediate reasoning errors in students, residents and practising doctors with persistent difficulties in clinical reasoning.


Subject(s)
Clinical Competence/standards, Computer Graphics, Decision Making, Computer-Assisted, Education, Medical, Undergraduate/methods, Computer-Assisted Instruction, Humans, Problem Solving
17.
Med Educ ; 45(4): 329-38, 2011 Apr.
Article in English | MEDLINE | ID: mdl-21401680

ABSTRACT

CONTEXT: Script concordance test (SCT) scores are intended to reflect respondents' competence in interpreting clinical data under conditions of uncertainty. The validity of inferences based on SCT scores has not been rigorously established. OBJECTIVES: This study was conducted in order to develop a structured validity argument for the interpretation of test scores derived through use of the script concordance method. METHODS: We searched the PubMed, EMBASE and PsycINFO databases for articles pertaining to script concordance testing. We then reviewed these articles to evaluate the construct validity of the script concordance method, following an established approach for analysing validity data from five categories: content; response process; internal structure; relations to other variables, and consequences. RESULTS: Content evidence derives from clear guidelines for the creation of authentic, ill-defined scenarios. High internal consistency reliability supports the internal structure of SCT scores. As might be expected, SCT scores correlate poorly with assessments of pure factual knowledge, with correlations being lower for more advanced learners. The validity of SCT scores is weakly supported by evidence pertaining to examinee response processes and educational consequences. CONCLUSIONS: Published research generally supports the use of SCT to assess the interpretation of clinical data under conditions of uncertainty, although specifics of the validity argument vary and require verification in different contexts and for particular SCTs. Our review identifies potential areas of further validity inquiry in all five categories of evidence. In particular, future SCT research might explore the impact of the script concordance method on teaching and learning, and examine how SCTs integrate with other assessment methods within comprehensive assessment programmes.


Subject(s)
Clinical Competence, Education, Medical/methods, Educational Measurement/methods, Problem Solving, Education, Medical/standards, Educational Measurement/standards, Humans, Reproducibility of Results, Students, Medical/psychology, Uncertainty
18.
Adv Health Sci Educ Theory Pract ; 16(5): 601-8, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21286807

ABSTRACT

The Script Concordance Test (SCT) uses a panel-based, aggregate scoring method that aims to capture the variability of responses of experienced practitioners to particular clinical situations. The use of this type of scoring method is a key determinant of the tool's discriminatory power, but deviant answers could potentially diminish the reliability of scores by introducing measurement error. The aims of this study were: (1) to investigate the effects on SCT psychometrics of excluding from the test's scoring key either deviant panelists or deviant answers; and (2) to propose a method for excluding either deviant panelists or deviant answers. Using an SCT in radiation oncology, we examined three methods for reducing panel response variability. One method ('outliers') entailed removing from the panel members with very low total scores. Two other methods ('distance-from-mode' and 'judgment-by-experts') excluded widely deviant responses to individual questions from the test's scoring key. We compared the effects of these methods on score reliability, correlations between original and adjusted scores, and between-group effect sizes (panel-residents; panel-students; and residents-students). With a large panel (n = 45), the optimization methods had no effect on the reliability of scores, correlations, or effect sizes. With a smaller panel (n = 15), no significant effect of the optimization methods was observed on reliability or correlations, but significant variation in effect size was observed across samples. Measurement error resulting from deviant panelist responses on SCTs is negligible, provided the panel size is sufficiently large (>15). However, if removal of deviant answers is judged necessary, the distance-from-mode strategy is recommended.


Subject(s)
Clinical Competence/standards, Education, Medical, Graduate, Educational Measurement/methods, Judgment, Radiation Oncology/education, Adult, Decision Making, Female, Humans, Male, Middle Aged, Observer Variation, Psychometrics, Radiation Oncology/standards, Reproducibility of Results, Young Adult
19.
Teach Learn Med ; 22(3): 180-6, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20563937

ABSTRACT

BACKGROUND: The Script Concordance Test (SCT) uses authentic, ill-defined clinical cases to compare medical learners' judgment skills with those of experienced physicians. SCT scores are meant to measure the degree of concordance between the performance of examinees and that of the reference panel. Raw test scores have meaning only if statistics (mean and standard deviation) describing the panel's performance are concurrently provided. PURPOSE: The purpose of this study is to suggest a method for reporting scores that standardizes panel mean and standard deviation, allowing examinees to immediately gauge their performance relative to panel members. METHODS: Based on a statistical method of standardization, a new method for computing SCT scores is described. According to this method, test raw scores are converted into a scale in which the panel mean is set as the value of reference, and the standard deviation of the panel serves as a yardstick by which examinee performance is measured. RESULTS: The effect of this transformation on four data sets obtained from SCTs in radio-oncology, surgery, neurology, and nursing is discussed. CONCLUSION: This transformation method proposes a common metric basis for reporting SCT scores and provides examinees with clear, interpretable insights into their performance relative to that of physicians of the field. We recommend reporting SCT scores with the mean and standard deviation of panel scores set at standard scores of 80 and 5, respectively. Beyond SCT, our transformation method may be generalizable to the scoring of other test formats in which the performance of examinees and those of a panel of reference undertaking the same cognitive tasks are compared.


Subject(s)
Clinical Competence, Education, Medical, Graduate/methods, Educational Measurement/methods, Health Knowledge, Attitudes, Practice, Learning, Uncertainty, Educational Status, General Surgery/education, Humans, Neurology/education, Nursing, Statistics as Topic, Surveys and Questionnaires
20.
Can J Neurol Sci ; 36(3): 326-31, 2009 May.
Article in English | MEDLINE | ID: mdl-19534333

ABSTRACT

BACKGROUND: Clinical judgment, the ability to make appropriate decisions in uncertain situations, is central to neurological practice, but objective measures of clinical judgment in neurology trainees are lacking. The Script Concordance Test (SCT), based on script theory from cognitive psychology, uses authentic clinical scenarios to compare a trainee's judgment skills with those of experts. The SCT has been validated in several medical disciplines, but has not been investigated in neurology. METHODS: We developed an Internet-based neurology SCT (NSCT) comprising 24 clinical scenarios with three to four questions each. The scenarios were designed to reflect the uncertainty of real-life clinical encounters in adult neurology. The questions explored aspects of the scenario in which several responses might be acceptable; trainees were asked to judge which response they considered to be best. Forty-one PGY1-PGY5 neurology residents and eight medical students from three North American neurology programs (McGill, Calgary, and Mayo Clinic) completed the NSCT. The responses of trainees to each question were compared with the aggregate responses of an expert panel of 16 attending neurologists. RESULTS: The NSCT demonstrated good reliability (Cronbach alpha = 0.79). Neurology residents scored higher than medical students and lower than attending neurologists, supporting the test's construct validity. Furthermore, NSCT scores discriminated between senior (PGY3-5) and junior residents (PGY1-2). CONCLUSIONS: Our NSCT is a practical and reliable instrument, and our findings support its construct validity for assessing judgment in neurology trainees. The NSCT has potentially widespread applications as an evaluation tool, both in neurology training and for licensing examinations.


Subject(s)
Education, Medical, Graduate/methods, Educational Measurement, Internet, Judgment, Neurology/education, Problem-Based Learning, Humans, Nervous System Diseases/diagnosis, Nervous System Diseases/therapy, Neurology/methods, Neurology/standards