Results 1 - 20 of 61
1.
Ann Surg; 277(4): 704-711, 2023 Apr 1.
Article in English | MEDLINE | ID: mdl-34954752

ABSTRACT

OBJECTIVE: To gather validity evidence supporting the use and interpretation of scores from the American College of Surgeons Entering Resident Readiness Assessment (ACS ERRA) Program. SUMMARY BACKGROUND DATA: ACS ERRA is an online formative assessment program developed to assess entering surgery residents' ability to make critical clinical decisions, and includes 12 clinical areas and 20 topics identified by a national panel of surgeon educators and residency program directors. METHODS: Data from 3 national testing administrations of ACS ERRA (2018-2020) were used to gather validity evidence regarding content, response process, internal structure (reliability), relations to other variables, and consequences. RESULTS: Over the 3 administrations, 1975 surgery residents participated from 125 distinct residency programs. Overall scores [Mean = 64% (SD = 7%)] remained consistent across the 3 years (P = 0.670). There were no significant differences among resident characteristics (gender, age, international medical graduate status). The mean case discrimination index was 0.54 (SD = 0.15). Kappa inter-rater reliability for scoring was 0.87; the overall test score reliability (G-coefficient) was 0.86 (Φ-coefficient = 0.83). Residents who completed residency readiness programs had higher ACS ERRA scores (66% versus 63%, Cohen's d = 0.23, P < 0.001). On average, 15% of decisions made (21/140 per test) involved potentially harmful actions. Score variability attributable to graduating medical school (7%) carried more than twice the weight of that attributable to matched residency program (3%). CONCLUSIONS: ACS ERRA scores provide valuable information to entering surgery residents and surgery program directors to aid in the development of individual and group learning plans.


Subject(s)
Internship and Residency; Surgeons; Humans; United States; Reproducibility of Results; Program Evaluation; Clinical Competence; Education, Medical, Graduate
2.
Ann Surg; 272(1): 194-198, 2020 Jul.
Article in English | MEDLINE | ID: mdl-30870178

ABSTRACT

OBJECTIVE: To assess the readiness of entering residents for clinical responsibilities, the American College of Surgeons (ACS) Division of Education developed the "Entering Resident Readiness Assessment" (ACS-ERRA) Program. SUMMARY BACKGROUND DATA: ACS-ERRA is an online formative assessment that uses a key-features approach to measure clinical decision-making skills and focuses on cases encountered at the beginning of residency. Results can be used to develop learning plans to address areas that may need reinforcement. METHODS: A national panel of 16 content experts, 3 medical educators, and a psychometrician developed 98 short key-features cases. Each case required medical knowledge to be applied appropriately at challenging decision points during case management. Four pilot studies were conducted sequentially to gather validity evidence, with the results of each used to improve the next. RESULTS: Residents from programs across the United States participated in the studies (n = 58, 20, 87, and 154, respectively). For the psychometric pilot (the final pilot test), 2 parallel test forms of the ACS-ERRA were administered, each containing 40 cases, resulting in an overall mean testing time of 2 hours 2 minutes (SD = 43 min). The mean test score was 61% (SD = 9%) and the G-coefficient reliability was 0.90. CONCLUSIONS: Results can be used to identify strengths and weaknesses in residents' decision-making skills and yield valuable information to create individualized learning plans. The data can also support efforts directed at the transition into residency training and inform discussions about levels of supervision. In addition, surgery program directors can use the aggregate test results to make curricular changes.


Subject(s)
Education, Medical, Graduate; Educational Measurement; General Surgery/education; Internship and Residency; Clinical Competence; Decision Making; Humans; Pilot Projects; Societies, Medical; United States
3.
Med Educ; 53(7): 710-722, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30779204

ABSTRACT

CONTEXT: The script concordance test (SCT), designed to measure clinical reasoning in complex cases, has recently been the subject of several critical research studies. Amongst other issues, response process validity evidence remains lacking. We explored the response processes of experts on an SCT scoring panel to better understand their seemingly divergent beliefs about how new clinical data alter the suitability of proposed actions within simulated patient cases. METHODS: A total of 10 Argentine gastroenterologists who served as the expert panel on an existing SCT re-answered 15 cases 9 months after their original panel participation. They then answered questions probing their reasoning and reactions to other experts' perspectives. RESULTS: The experts sometimes noted they would not ordinarily consider the actions proposed for the cases at all (30/150 instances [20%]) or would collect additional data first (54/150 instances [36%]). Even when groups of experts agreed about how new clinical data in a case affected the suitability of a proposed action, there was often disagreement (118/133 instances [89%]) about the suitability of the proposed action before the new clinical data had been introduced. Experts reported confidence in their responses, but showed limited consistency with the responses they had given 9 months earlier (linear weighted kappa = 0.33). Qualitative analyses showed nuanced and complex reasons behind experts' responses, revealing, for example, that experts often considered the unique affordances and constraints of their varying local practice environments when responding. Experts generally found other experts' alternative responses moderately compelling (mean ± standard deviation 2.93 ± 0.80 on a 5-point scale, where 3 = moderately compelling). Experts switched their own preferred responses after seeing others' reasoning in 30 of 150 (20%) instances. CONCLUSIONS: Expert response processes were not consistent with the classical interpretation and use of SCT scores. However, several fruitful and justifiable alternatives for the use of SCT-like methods are proposed, such as to guide assessments for learning.


Subject(s)
Clinical Competence; Decision Making; Expert Testimony; Gastroenterologists/education; Surveys and Questionnaires; Argentina; Education, Medical, Continuing; Educational Measurement; Humans; Prospective Studies; Reproducibility of Results
4.
Med Educ; 52(8): 851-860, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29574896

ABSTRACT

CONTEXT: In postgraduate medical programmes, the progressive development of autonomy places residents in situations in which they must cope with uncertainty. We explored the phenomenon of hesitation, triggered by uncertainty, in the context of the operating room in order to understand the social behaviours surrounding supervision and progressive autonomy. METHODS: Nine surgical residents and their supervising surgeons at a Canadian medical school were selected. Each resident-supervisor pair was observed during a surgical procedure and subsequently participated in separate post-observation, semi-structured interviews. Constructivist grounded theory was used to guide the collection and analysis of data. RESULTS: Three hesitation-related themes were identified: the principle of progress, the meaning of hesitation, and the judgement of competence. Supervisors and residents understood hesitation in the context of a core surgical principle we termed the 'principle of progress'. This principle reflects the supervisors' and residents' shared norm that maintaining progress throughout a surgical procedure is of utmost importance. Resident hesitation was perceived as the first indication of a disruption to this principle and was therefore interpreted by supervisors and residents alike as a sign of incompetence. This interpretation influenced the teaching-learning process during these moments when residents were working at the edge of their abilities. CONCLUSIONS: The principle of progress influences the meaning of hesitation which, in turn, shapes judgements of competence. This has important implications for teaching and learning in direct supervision settings such as surgery. Without efforts to change the perception that hesitation represents incompetence, these potential teaching-learning moments will not fully support progressive autonomy.


Subject(s)
Clinical Competence; General Surgery/education; Internship and Residency; Operating Rooms/standards; Uncertainty; Canada; Education, Medical, Graduate; Grounded Theory; Humans; Interprofessional Relations; Surgeons
5.
Adv Health Sci Educ Theory Pract; 21(4): 761-73, 2016 Oct.
Article in English | MEDLINE | ID: mdl-26757931

ABSTRACT

Recent changes to the patient note (PN) format of the United States Medical Licensing Examination have challenged medical schools to improve the instruction and assessment of students taking the Step-2 clinical skills examination. The purpose of this study was to gather validity evidence regarding response process and internal structure, focusing on inter-rater reliability and generalizability, to determine whether a locally-developed PN scoring rubric and scoring guidelines could yield reproducible PN scores. A randomly selected subsample of historical data (post-encounter PNs from 55 of 177 medical students) was rescored by six trained faculty raters in November-December 2014. Inter-rater reliability (% exact agreement and kappa) was calculated for five standardized patient cases administered in a local graduation competency examination. Generalizability studies were conducted to examine the overall reliability. Qualitative data were collected through surveys and a rater-debriefing meeting. The overall inter-rater reliability (weighted kappa) was 0.79 (Documentation = 0.63, Differential Diagnosis = 0.90, Justification = 0.48, and Workup = 0.54). The majority of score variance was due to case specificity (13%) and case-task specificity (31%), indicating differences in student performance by case and by case-task interactions. Variance associated with raters and their interactions was modest (<5%). Raters felt that justification was the most difficult task to score and that having case- and level-specific scoring guidelines during training was most helpful for calibration. The overall inter-rater reliability indicates a high level of confidence in the consistency of note scores. Designs for scoring notes may optimize reliability by balancing the number of raters and cases.
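
For readers unfamiliar with the statistic, linear weighted kappa credits near-misses between raters in proportion to their distance on the ordinal rubric scale. A minimal sketch in Python using scikit-learn's cohen_kappa_score, with invented ratings rather than the study's data:

```python
# Linear weighted kappa between two raters scoring the same 10 notes
# on an ordinal 1-4 rubric. Ratings are invented for illustration.
from sklearn.metrics import cohen_kappa_score

rater_a = [4, 3, 2, 4, 1, 3, 3, 2, 4, 2]
rater_b = [4, 3, 3, 4, 1, 2, 3, 2, 4, 1]

# weights="linear" discounts agreement credit in proportion to the
# distance between the two raters' chosen categories.
kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(f"linear weighted kappa = {kappa:.2f}")
```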


Subject(s)
Clinical Competence/standards; Education, Medical, Undergraduate/standards; Educational Measurement/standards; Medical History Taking/standards; Physical Examination/standards; Diagnosis, Differential; Documentation; Humans; Licensure, Medical; Reproducibility of Results; United States
6.
Adv Health Sci Educ Theory Pract; 21(4): 897-913, 2016 Oct.
Article in English | MEDLINE | ID: mdl-26590984

ABSTRACT

Despite multifaceted attempts to "protect the public," including the implementation of various assessment practices designed to identify individuals at all stages of training and practice who underperform, profound deficiencies in quality and safety continue to plague the healthcare system. The purpose of this reflections paper is to cast a critical lens on current assessment practices and to offer insights into ways in which they might be adapted to ensure alignment with modern conceptions of health professional education for the ultimate goal of improved healthcare. Three dominant themes will be addressed: (1) The need to redress unintended consequences of competency-based assessment; (2) The potential to design assessment systems that facilitate performance improvement; and (3) The importance of ensuring authentic linkage between assessment and practice. Several principles cut across each of these themes and represent the foundational goals we would put forward as signposts for decision making about the continued evolution of assessment practices in the health professions: (1) Increasing opportunities to promote learning rather than simply measuring performance; (2) Enabling integration across stages of training and practice; and (3) Reinforcing point-in-time assessments with continuous professional development in a way that enhances shared responsibility and accountability between practitioners, educational programs, and testing organizations. Many of the ideas generated represent suggestions for strategies to pilot test, for infrastructure to build, and for harmonization across groups to be enabled. These include novel strategies for OSCE station development, formative (diagnostic) assessment protocols tailored to shed light on the practices of individual clinicians, the use of continuous workplace-based assessment, and broadening the focus of high-stakes decision making beyond determining who passes and who fails. We conclude with reflections on systemic (i.e., cultural) barriers that may need to be overcome to move towards a more integrated, efficient, and effective system of assessment.


Subject(s)
Educational Measurement; Health Occupations; Competency-Based Education; Humans; Patient Safety; Quality Improvement
7.
Med Teach; 38(11): 1100-1104, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27248314

ABSTRACT

The authors share 12 practical tips on creating effective titles and abstracts for a journal publication or conference presentation. When crafting a title authors should: (1) start thinking of the title from the start; (2) brainstorm many key words, create permutations, and ask others for input; (3) strive for an informative and indicative title; (4) start the title with the most important words; and (5) wait to finalize the title until the very end. When writing the abstract, authors should: (6) wait until the end to write the abstract; (7) copy and paste from main text as the starting point; (8) start with a detailed structured format; (9) describe what they did; (10) describe what they found; (11) highlight what readers can do with this information; and (12) ensure that the abstract aligns with the full text and conforms to submission guidelines.


Subject(s)
Journal Impact Factor; Periodicals as Topic/standards; Humans; Writing/standards
8.
Adv Health Sci Educ Theory Pract; 20(1): 85-100, 2015 Mar.
Article in English | MEDLINE | ID: mdl-24823793

ABSTRACT

Internists are required to perform a number of procedures that demand mastery of technical and non-technical skills; however, formal assessment of these skills is often lacking. The purpose of this study was to develop, implement, and gather validity evidence for a procedural skills objective structured clinical examination (PS-OSCE) for internal medicine (IM) residents to assess their technical and non-technical skills when performing procedures. Thirty-five first- to third-year IM residents participated in a 5-station PS-OSCE, which combined partial task models, standardized patients, and allied health professionals. Formal blueprinting was performed and content experts were used to develop the cases and rating instruments. Examiners underwent a frame-of-reference training session to prepare them for their rater role. Scores were compared by level of training, by experience, and with evaluation data from a non-procedural OSCE (IM-OSCE). Reliability was calculated using generalizability analyses. Reliabilities for the technical and non-technical scores were 0.68 and 0.76, respectively. Third-year residents scored significantly higher than first-year residents on the technical (73.5 vs. 62.2%) and non-technical (83.2 vs. 75.1%) components of the PS-OSCE (p < 0.05). Residents who had performed the procedures more frequently scored higher on three of the five stations (p < 0.05). There was a moderate disattenuated correlation (r = 0.77) between the IM-OSCE scores and the technical component of the PS-OSCE scores. The PS-OSCE is a feasible method for assessing multiple competencies related to performing procedures, and this study provides validity evidence to support its use as an in-training examination.
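
The disattenuated correlation reported above corrects an observed correlation for measurement error in both scores: r_true = r_observed / sqrt(reliability_x × reliability_y). A minimal sketch with invented inputs (the study reports only the corrected value of 0.77):

```python
import math

# Disattenuation (correction for attenuation): divide the observed
# correlation by the square root of the product of the two scores'
# reliabilities. Inputs below are invented for illustration.
def disattenuate(r_observed: float, rel_x: float, rel_y: float) -> float:
    return r_observed / math.sqrt(rel_x * rel_y)

# e.g. an observed r of 0.55 with reliabilities 0.75 and 0.68:
print(round(disattenuate(0.55, rel_x=0.75, rel_y=0.68), 2))  # 0.77
```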


Subject(s)
Clinical Competence; Education, Medical, Graduate/standards; Educational Measurement/methods; Internal Medicine/education; Internship and Residency; Adult; Female; Humans; Male; Models, Educational; Ontario; Reproducibility of Results
9.
Teach Learn Med; 27(3): 299-306, 2015.
Article in English | MEDLINE | ID: mdl-26158332

ABSTRACT

THEORY: Feedback and debriefing, as portrayed in the expertise development and self-assessment literatures, play critical roles in providing residents with useful information to foster their progress. HYPOTHESES: Prior work has shown that clinical preceptors' use of conceptual frameworks (CFs; ways of thinking based on theories, best practices, or models) while giving feedback to residents was positively associated with a greater diversity of responses. Also, senior preceptors produced more responses, used more CFs, and asked more probing-challenging questions than junior preceptors. The purpose of this study was to confirm that these initial findings generalize to a broader and better-defined sample of preceptors. METHOD: We conducted a mixed-methods study with 20 junior and 20 senior preceptors in a controlled environment to analyze their responses and rationales to residents' educational needs as portrayed in 6 written vignettes. The preceptors were recruited from 3 primary care specialties (family medicine, internal medicine, pediatrics) at the 3 French-speaking faculties of medicine in Québec, Canada. RESULTS: The preceptors expanded the 2012 list of response topics (from 96 to 126) and doubled the number of distinct CFs (from 16 to 32). The junior and senior preceptors expressed the same number and diversity of CFs. On average, senior preceptors asked more clarification questions and reflected more than juniors on the learning process that occurs during case discussions. Preceptor specialty and prior training in medical education did not influence the number and diversity of responses and CFs, except that preceptors with prior training generated more responses per vignette and were more reflective. Senior preceptors showed a stronger positive relationship between the number of total and distinct CFs and the number of responses than juniors. CONCLUSIONS: Although the senior preceptors did not give more responses or use more CFs than in the prior study, they continued to probe residents more and reflected more. The positive relationship between responses and CFs has important implications for faculty development and calls for more research to better understand the specific contribution of CFs to feedback.


Subject(s)
Internship and Residency; Needs Assessment; Preceptorship; Students, Medical; Female; Humans; Interviews as Topic; Male; Qualitative Research
10.
Med Teach; 37(4): 379-84, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25156235

ABSTRACT

BACKGROUND: SNAPPS is a learner-centered approach to case presentations that was shown, in American studies, to facilitate the expression of clinical reasoning and uncertainties in the outpatient setting. AIM: To evaluate the SNAPPS technique in an Asian setting. METHODS: We conducted a quasi-experimental trial comparing the SNAPPS technique to the usual-and-customary method of case presentations for fifth-year medical students in an ambulatory internal medicine clerkship rotation at Khon Kaen University, Thailand. We created four experimental groups to test main and maturation effects. We measured 12 outcomes at the end of the rotations: total, summary, and discussion presentation times, number of basic clinical findings, summary thoroughness, number of diagnoses in the differential, number of justified diagnoses, number of basic attributes supporting the differential, number of student-initiated questions or discussions about uncertainties, diagnosis, management, and reading selections. RESULTS: SNAPPS users (90 case presentations), compared with the usual group (93 presentations), had more diagnoses in their differentials (1.81 vs. 1.42), more basic attributes to support the differential (2.39 vs. 1.22), more expression of uncertainties (6.67% vs. 1.08%), and more student-initiated reading selections (6.67% vs. 0%). Presentation times did not differ between groups (12 vs. 11.2 min). There were no maturation effects detected. CONCLUSIONS: The use of the SNAPPS technique among Thai medical students during their internal medicine ambulatory care clerkship rotation did facilitate the expression of their clinical reasoning and uncertainties. More intense student-preceptor training is needed to better foster the expression of uncertainties.


Subject(s)
Ambulatory Care; Clinical Clerkship/organization & administration; Clinical Competence; Internal Medicine/education; Models, Educational; Humans; Learning; Thailand; Time Factors; Uncertainty
11.
Med Educ; 48(10): 1020-7, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25200022

ABSTRACT

OBJECTIVES: Despite significant evidence supporting the use of three-option multiple-choice questions (MCQs), these are rarely used in written examinations for health professions students. The purpose of this study was to examine the effects of reducing four- and five-option MCQs to three-option MCQs on response times, psychometric characteristics, and absolute standard setting judgements in a pharmacology examination administered to health professions students. METHODS: We administered two versions of a computerised examination containing 98 MCQs to 38 Year 2 medical students and 39 Year 3 pharmacy students. Four- and five-option MCQs were converted into three-option MCQs to create two versions of the examination. Differences in response time, item difficulty and discrimination, and reliability were evaluated. Medical and pharmacy faculty judges provided three-level Angoff (TLA) ratings for all MCQs for both versions of the examination to allow the assessment of differences in cut scores. RESULTS: Students answered three-option MCQs an average of 5 seconds faster than they answered four- and five-option MCQs (36 seconds versus 41 seconds; p = 0.008). There were no significant differences in item difficulty and discrimination, or test reliability. Overall, the cut scores generated for three-option MCQs using the TLA ratings were 8 percentage points higher (p = 0.04). CONCLUSIONS: The use of three-option MCQs in a health professions examination resulted in a time saving equivalent to the completion of 16% more MCQs per 1-hour testing period, which may increase content validity and test score reliability, and minimise construct under-representation. The higher cut scores may result in higher failure rates if an absolute standard setting method, such as the TLA method, is used. The results from this study provide a cautious indication to health professions educators that using three-option MCQs does not threaten validity and may strengthen it by allowing additional MCQs to be tested in a fixed amount of testing time with no deleterious effect on the reliability of the test scores.
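
The three-level Angoff (TLA) method referenced above is commonly described as follows: each judge rates every MCQ on whether a borderline (minimally competent) examinee would answer it correctly (yes = 1, maybe = 0.5, no = 0), and the cut score is the grand mean across judges and items. A minimal sketch with invented ratings:

```python
# Three-level Angoff (TLA) cut score: grand mean of judges' ratings of
# whether a borderline examinee would answer each MCQ correctly
# (yes = 1, maybe = 0.5, no = 0). Ratings below are invented.
ratings = {
    "judge_1": [1, 0.5, 1, 0, 0.5, 1],
    "judge_2": [1, 0.5, 0.5, 0.5, 0.5, 1],
    "judge_3": [0.5, 0.5, 1, 0, 1, 1],
}

all_ratings = [r for judge in ratings.values() for r in judge]
cut_score = sum(all_ratings) / len(all_ratings)
print(f"cut score = {cut_score:.1%} of items")  # 66.7% here
```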


Subject(s)
Education, Medical, Undergraduate/methods; Education, Pharmacy/methods; Educational Measurement/methods; Surveys and Questionnaires/standards; Adult; California; Female; Humans; Male; Psychometrics; Reaction Time; Reproducibility of Results; Young Adult
12.
Med Educ; 48(9): 921-9, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25113118

ABSTRACT

CONTEXT: First-year residents begin clinical practice in settings in which attending staff and senior residents are available to supervise their work. There is an expectation that, while being supervised and as they become more experienced, residents will gradually take on more responsibilities and function independently. OBJECTIVES: This study was conducted to define 'entrustable professional activities' (EPAs) and determine the extent of agreement between the level of supervision expected by clinical supervisors (CSs) and the level of supervision reported by first-year residents. METHODS: Using a nominal group technique, subject matter experts (SMEs) from multiple specialties defined EPAs for incoming residents; these represented a set of activities to be performed independently by residents by the end of the first year of residency, regardless of specialty. We then surveyed CSs and first-year residents from one institution in order to compare the levels of supervision expected and received during the day and night for each EPA. RESULTS: The SMEs defined 10 EPAs (e.g. completing admission orders, obtaining informed consent) that were ratified by a national panel. A total of 113 CSs and 48 residents completed the survey. Clinical supervisors had the same expectations regardless of time of day. For three EPAs (managing i.v. fluids, obtaining informed consent, obtaining advanced directives) the level of supervision reported by first-year residents was lower than that expected by CSs (p < 0.001) regardless of time of day (i.e. day or night). For four more EPAs (initiating the management of a critically ill patient, handing over the care of a patient to colleagues, writing a discharge prescription, coordinating a patient discharge) differences applied only to night-time work (p ≤ 0.001). CONCLUSIONS: First-year residents reported performing EPAs with less supervision than expected by CSs, especially during the night. Using EPAs to guide the content of the undergraduate curriculum and during examinations could help better align CSs' and residents' expectations about early residency supervision.


Subject(s)
Clinical Competence/standards; Internship and Residency/standards; Attitude of Health Personnel; Faculty, Medical; Humans; Male; Ontario; Professional Practice/standards
13.
Adv Health Sci Educ Theory Pract; 19(4): 497-506, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24449122

ABSTRACT

Objective structured clinical examinations (OSCEs) are used worldwide for summative examinations but often lack acceptable reliability. Research has shown that reliability of scores increases if OSCE checklists for medical students include only clinically relevant items. Also, checklists often omit evidence-based items that high-achieving learners are more likely to use. The purpose of this study was to determine whether limiting checklist items to clinically discriminating items and/or adding missing evidence-based items improved score reliability in an Internal Medicine residency OSCE. Six internists reviewed the traditional checklists of four OSCE stations, classifying items as clinically discriminating or non-discriminating. Two independent reviewers augmented the checklists with missing evidence-based items. We used generalizability theory to calculate the overall reliability of faculty observer checklist scores from 45 first- and second-year residents and to predict how many 10-item stations would be required to reach a Phi coefficient of 0.8. Removing clinically non-discriminating items from the traditional checklist did not affect the number of stations (15) required to reach a Phi of 0.8 with 10 items. Focusing the checklist on only evidence-based clinically discriminating items increased test score reliability, requiring 11 stations instead of 15 to reach 0.8; adding missing evidence-based clinically discriminating items to the traditional checklist modestly improved reliability (14 stations instead of 15). Checklists composed of evidence-based clinically discriminating items improved the reliability of checklist scores and reduced the number of stations needed for acceptable reliability. Educators should give preference to evidence-based items over non-evidence-based items when developing OSCE checklists.
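
The projection of reliability to different test lengths comes from a generalizability decision (D-) study: under a one-facet person-by-station design, the dependability coefficient for n stations is Phi(n) = Vp / (Vp + Vabs/n), where Vp is person (true-score) variance and Vabs is the absolute error variance for a single station. A minimal sketch with invented variance components, not the study's estimates:

```python
# D-study projection under an assumed one-facet (person x station) design.
def phi(v_person: float, v_abs_one_station: float, n_stations: int) -> float:
    """Dependability coefficient Phi for a test of n_stations."""
    return v_person / (v_person + v_abs_one_station / n_stations)

def stations_needed(v_person: float, v_abs_one_station: float,
                    target: float = 0.8) -> int:
    """Smallest number of stations whose projected Phi reaches the target."""
    n = 1
    while phi(v_person, v_abs_one_station, n) < target:
        n += 1
    return n

# With these made-up components the projection lands at 15 stations:
print(stations_needed(v_person=0.08, v_abs_one_station=0.30))
```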


Subject(s)
Checklist; Clinical Competence/standards; Education, Medical, Graduate; Evidence-Based Practice/standards; Internal Medicine/standards; Internship and Residency/standards; Physical Examination/standards; Canada; Educational Measurement/methods; Humans; Reproducibility of Results; Students, Medical
14.
Med Educ; 47(12): 1175-83, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24206151

ABSTRACT

CONTEXT: Recent reviews have claimed that the script concordance test (SCT) methodology generally produces reliable and valid assessments of clinical reasoning and that the SCT may soon be suitable for high-stakes testing. OBJECTIVES: This study is intended to describe three major threats to the validity of the SCT not yet considered in prior research and to illustrate the severity of these threats. METHODS: We conducted a review of SCT reports available through the Web of Science database. Additionally, we reanalysed scores from a previously published SCT administration to explore issues related to standard SCT scoring practice. RESULTS: Firstly, the predominant method for aggregate and partial credit scoring of SCTs introduces logical inconsistencies in the scoring key. Secondly, our literature review shows that SCT reliability studies have generally ignored inter-panel, inter-panellist and test-retest measurement error. Instead, studies have focused on observed levels of coefficient alpha, which is neither an informative index of internal structure nor a comprehensive index of reliability for SCT scores. As such, claims that SCT scores show acceptable reliability are premature. Finally, SCT criteria for item inclusion, in concert with a statistical artefact of the SCT format, cause anchors at the extremes of the scale to have less expected credit than anchors near or at the midpoint. Consequently, SCT scores are likely to reflect construct-irrelevant differences in examinees' response styles. This makes the test susceptible to bias against candidates who endorse extreme scale anchors more readily; it also makes two construct-irrelevant test taking strategies extremely effective. In our reanalysis, we found that examinees could drastically increase their scores by never endorsing extreme scale points. Furthermore, examinees who simply endorsed the scale midpoint for every item would still have outperformed most examinees who used the scale as it is intended. CONCLUSIONS: Given the severity of these threats, we conclude that aggregate scoring of SCTs cannot be recommended. Recommendations for revisions of SCT methodology are discussed.
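
The aggregate (partial-credit) scoring rule under critique is commonly described as: an examinee's credit on an item is the number of panellists who endorsed the same scale anchor, divided by the count of the modal anchor. A minimal sketch with an invented 10-member panel illustrates the artefact the authors describe, in which midpoint anchors rarely earn zero credit while extreme anchors often do:

```python
from collections import Counter

# SCT aggregate scoring as commonly described: credit = (panellists
# endorsing the examinee's anchor) / (count of the modal anchor).
# The panel's responses on a -2..+2 scale are invented.
def sct_item_score(examinee_choice: int, panel_choices: list[int]) -> float:
    counts = Counter(panel_choices)
    modal_count = max(counts.values())
    return counts.get(examinee_choice, 0) / modal_count

panel = [-2, -1, -1, 0, 0, 0, 0, 1, 1, -1]  # 10 invented panellists
for choice in (-2, -1, 0, 1, 2):
    print(f"anchor {choice:+d}: credit {sct_item_score(choice, panel):.2f}")
```

On this invented panel, an examinee who always endorses the midpoint earns full credit while one endorsing an extreme anchor earns little or none, consistent with the response-style bias described above.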


Subject(s)
Education, Medical, Graduate/methods; Educational Measurement/methods; Clinical Competence/standards; Decision Making; Humans; Reproducibility of Results
15.
Med Teach; 40(11): 1195-1196, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30422035
16.
Med Educ; 45(1): 87-94, 2011 Jan.
Article in English | MEDLINE | ID: mdl-21155872

ABSTRACT

CONTEXT: Although firmly grounded in Flexner's legacy of ideas, today's medical curriculum, as both an entity and a process, has become increasingly complex. The curriculum as an entity is portrayed according to five key elements: the expected competencies and roles; the learners at the centre of the enterprise; assessment linking competencies and learners; the conditions and resources for learning; and a multifaceted socio-politico-cultural context in which the learning occurs. Significant developments have also occurred in the disciplines of curriculum studies, cognitive psychology and organisational change over the past century, as well as in institutional best practices, that help us to better understand and plan curricular innovations. DISCUSSION: Practical advice is offered to help curriculum developers in designing or reforming the medical curriculum. The key points of this are: (i) while focusing reform and innovation on specific elements of the curriculum, consider how those elements affect other elements and vice versa, in positive and negative ways; (ii) while grounding the reform or innovation in sound conceptual frameworks, seize any opportunities to formulate a research agenda that can build upon and advance our understanding of curricular innovations; and (iii) moving beyond considering the curriculum as an entity, use deliberative and leadership processes that can lead to enduring curriculum reform.


Subject(s)
Curriculum/trends; Decision Making; Education, Medical/trends; Program Development/methods; Curriculum/standards; Education, Medical/standards; Humans; Organizational Innovation; Program Development/standards
17.
Med Teach; 38(7): 752-3, 2016 Jul.
Article in English | MEDLINE | ID: mdl-26806122
18.
Med Teach; 33(5): 410-7, 2011.
Article in English | MEDLINE | ID: mdl-21355686

ABSTRACT

INTRODUCTION: The physical examination is an essential clinical competence for all physicians. In most medical schools, students learn the physical examination maneuvers using a head-to-toe approach. However, this promotes a rote approach to the physical exam, and it is not uncommon for students later to fail to appreciate the meaning of abnormal findings and their contribution to the diagnostic reasoning process. The purpose of the project was to develop a model teaching session for the hypothesis-driven physical examination (HDPE) approach, in which students could practice the physical examination in the context of diagnostic reasoning. METHODS: We used an action research methodology to create this HDPE model by developing a teaching session, implementing it over 100 times with approximately 700 students, conducting internal reflection and external evaluations, and making adjustments as needed. RESULTS: A model nine-step HDPE teaching session was developed, including: (1) orientation, (2) anticipation, (3) preparation, (4) role play, (5) discussion-1, (6) answers, (7) discussion-2, (8) demonstration and (9) reflection. DISCUSSION AND CONCLUSIONS: A structured model HDPE teaching session and tutor guide were developed into a workable instructional intervention. Faculty members are invited to teach the physical examination using this model.


Subject(s)
Education, Medical/methods; Models, Educational; Physical Examination; Teaching/methods; Clinical Competence; Humans; Patient Simulation; Program Development; Role Playing
19.
Med Educ; 44(8): 775-85, 2010 Aug.
Article in English | MEDLINE | ID: mdl-20633217

ABSTRACT

OBJECTIVES: This study aimed to test the extent to which the use of medicalese (i.e. formal medical terminology and semantic qualifiers) alters the test performance of medical graduates; to tease apart the extent to which any observed differences are driven by language difficulties versus differences in medical knowledge; and to assess the impact of varying the language used to present clinical features on the ability of the test to consistently discriminate between candidates. METHODS: Six clinical cases were manipulated in the context of pilot items on the Canadian national qualifying examination. Features indicative of two diagnoses were presented uniformly in lay terms, medical terminology and semantic qualifiers, respectively, and in mixed combinations (e.g. features of one diagnosis were presented using lay terminology and features of the other using medicalese). The rate at which the indicated diagnoses were named was considered as a function of language used, site of training, birthplace and medical knowledge (as measured by overall performance on the examination). RESULTS: In the mixed conditions, Canadian medical graduates were not influenced by the language used to present the cases, whereas international medical graduates (IMGs) were more likely to favour the diagnosis associated with medical terminology relative to that associated with lay terms. This was true regardless of whether the entire sample or only North American-born candidates were considered. Within the IMG cohort, high performers were not influenced by the language manipulation, whereas low performers were. Uniform use of lay terminology resulted in the highest test reliability compared with the other experimental conditions. CONCLUSIONS: The results indicate that the influence of medical terminology is driven more by substandard medical knowledge than by the language issues that challenge some candidates. Implications for both the assessment and education of medical professionals are discussed.


Subject(s)
Clinical Competence; Language; Terminology as Topic; Diagnosis; Humans
20.
Ann Emerg Med; 53(5): 647-52, 2009 May.
Article in English | MEDLINE | ID: mdl-18722694

ABSTRACT

STUDY OBJECTIVE: Clinical reasoning is a crucial skill for all residents to acquire during their training. During most patient encounters in pediatric emergency medicine, physicians and trainees are challenged by diagnostic, investigative, and treatment uncertainties. The Script Concordance Test may provide a means to assess reasoning skills in the context of uncertainty in the practice of pediatric emergency medicine. We gathered validity evidence for the use of a pediatric emergency medicine Script Concordance Test to evaluate residents' reasoning skills. METHODS: A 1-hour test containing 60 questions nested in 38 cases was administered to 53 residents at the end of their pediatric emergency medicine rotation at 1 academic institution. Twelve experienced pediatricians served on a reference panel that established the basis for the scoring process. RESULTS: An optimized version of the test, based on positive item discrimination data, contained 30 cases and 50 questions. Scores ranged from 48% to 82%, with a mean score of 69.9% (SD = 11.5). The reliability of the optimized test (Cronbach's alpha) was 0.77. Performance on the test increased with the residents' level of experience. The residents considered the Script Concordance Test true to real-life clinical problems and reported having enough time to complete it. CONCLUSION: This pediatric emergency medicine Script Concordance Test was reliable and useful for assessing the progression of clinical reasoning during residency training.
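
Cronbach's alpha, the reliability index reported above, can be computed directly from an examinee-by-item score matrix as alpha = k/(k-1) × (1 − Σ item variances / variance of total scores). A minimal sketch on an invented matrix, not the study's data:

```python
import numpy as np

# Cronbach's alpha for a small invented score matrix
# (rows = examinees, columns = items).
scores = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [0, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 0, 1],
], dtype=float)

k = scores.shape[1]
item_vars = scores.var(axis=0, ddof=1)      # per-item sample variances
total_var = scores.sum(axis=1).var(ddof=1)  # variance of examinee totals
alpha = k / (k - 1) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```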


Subject(s)
Educational Measurement/methods; Emergency Medicine/education; Internship and Residency; Pediatrics/education; Problem Solving; Clinical Competence; Education, Medical, Graduate; Humans