Results 1 - 14 of 14
1.
Am J Gastroenterol; 115(2): 234-243, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31738285

ABSTRACT

INTRODUCTION: Formative colonoscopy direct observation of procedural skills (DOPS) assessments were updated in 2016 and incorporated into UK training but lack validity evidence. We aimed to appraise the validity of DOPS assessments, benchmark performance, and evaluate competency development during training in diagnostic colonoscopy. METHODS: This prospective national study identified colonoscopy DOPS submitted over an 18-month period to the UK training e-portfolio. Generalizability analyses were conducted to evaluate internal structure validity and reliability. Benchmarking was performed using receiver operating characteristic analyses. Learning curves for DOPS items and domains were studied, and multivariable analyses were performed to identify predictors of DOPS competency. RESULTS: Across 279 training units, 10,749 DOPS submitted for 1,199 trainees were analyzed. The acceptable reliability threshold (G > 0.70) was achieved with 3 assessors performing 2 DOPS each. DOPS competency rates correlated with the unassisted cecal intubation rate (rho 0.404, P < 0.001). Demonstrating competency in 90% of assessed items provided optimal sensitivity (90.2%) and specificity (87.2%) for benchmarking overall DOPS competence. This threshold was attained in the following order: "preprocedure" (50-99 procedures), "endoscopic nontechnical skills" and "postprocedure" (150-199), "management" (200-249), and "procedure" (250-299) domains. At item level, competency in "proactive problem solving" (rho 0.787) and "loop management" (rho 0.780) correlated strongest with the overall DOPS rating (P < 0.001) and was the last to develop. Lifetime procedure count, DOPS count, trainer specialty, easier case difficulty, and higher cecal intubation rate were significant multivariable predictors of DOPS competence. DISCUSSION: This study establishes milestones for competency acquisition during colonoscopy training and provides novel validity and reliability evidence to support colonoscopy DOPS as a competency assessment tool.
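An illustrative sketch of the ROC-based benchmarking described in this abstract follows: choose the proportion of DOPS items rated competent that best predicts the overall competency rating by maximising Youden's J. The data and variable names below are synthetic placeholders, not the study dataset.

import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)

# Hypothetical per-assessment records: proportion of DOPS items rated
# competent, and the assessor's overall binary competency rating.
n = 500
overall_competent = rng.integers(0, 2, size=n)
item_proportion = np.clip(
    rng.normal(loc=np.where(overall_competent == 1, 0.92, 0.75), scale=0.08),
    0.0, 1.0,
)

# ROC analysis: the benchmark is the threshold maximising Youden's J
# (sensitivity + specificity - 1).
fpr, tpr, thresholds = roc_curve(overall_competent, item_proportion)
best = np.argmax(tpr - fpr)
print(f"Benchmark: {thresholds[best]:.0%} of items competent "
      f"(sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%})")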


Subject(s)
Clinical Competence; Colonoscopy/education; Gastroenterology/education; General Surgery/education; Nurse Specialists/education; Colonoscopy/standards; Gastroenterology/standards; General Surgery/standards; Humans; Nurse Specialists/standards; Observation; Reproducibility of Results; United Kingdom
2.
Surg Endosc; 34(1): 105-114, 2020 Jan.
Article in English | MEDLINE | ID: mdl-30911922

ABSTRACT

BACKGROUND: Validated competency assessment tools and the data supporting milestone development during gastroscopy training are lacking. We aimed to assess the validity of the formative direct observation of procedural skills (DOPS) assessment tool in diagnostic gastroscopy and study competency development using DOPS. METHODS: This was a prospective multicentre (N = 275) analysis of formative gastroscopy DOPS assessments. Internal structure validity was tested using exploratory factor analysis and reliability estimated using generalisability theory. Item and global DOPS scores were stratified by lifetime procedure count to define learning curves, using a threshold determined from receiver operating characteristic (ROC) analysis. Multivariable binary logistic regression analysis was performed to identify independent predictors of DOPS competence. RESULTS: In total, 10,086 DOPS were submitted for 987 trainees. Exploratory factor analysis identified three distinct item groupings, representing 'pre-procedure', 'technical', and 'post-procedure non-technical' skills. From generalisability analyses, sources of variance in overall DOPS scores included trainee ability (31%), assessor stringency (8%), assessor subjectivity (18%), and trainee case-to-case variation (43%). The combination of three assessments from three assessors was sufficient to achieve the reliability threshold of 0.70. On ROC analysis, a mean score of 3.9 provided optimal sensitivity and specificity for determining competency. This threshold was attained in the order of 'pre-procedure' (100-124 procedures), 'technical' (150-174 procedures), 'post-procedure non-technical' skills (200-224 procedures), and global competency (225-249 procedures). Higher lifetime procedure count, DOPS count, surgical trainees and assessors, higher trainee seniority, and lower case difficulty were significant multivariable predictors of DOPS competence. CONCLUSION: This study establishes milestones for competency acquisition during gastroscopy training and provides validity and reliability evidence to support gastroscopy DOPS as a competency assessment tool.
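As an illustration of the decision-study reasoning summarised above, the sketch below projects a phi-type reliability coefficient from the variance shares quoted in the abstract. The assumed design (assessors crossed with trainees, cases nested within assessors) is a plausible reading, not the authors' published model, so treat it as a rough check only.

def projected_g(n_assessors: int, n_dops_per_assessor: int) -> float:
    """Absolute (phi-type) reliability of averaged DOPS scores, using the
    reported variance shares: trainee ability 0.31, assessor stringency 0.08,
    assessor subjectivity 0.18, case-to-case variation 0.43."""
    trainee, stringency, subjectivity, case = 0.31, 0.08, 0.18, 0.43
    error = (stringency / n_assessors
             + subjectivity / n_assessors
             + case / (n_assessors * n_dops_per_assessor))
    return trainee / (trainee + error)

for n_a in (1, 2, 3):
    for n_d in (1, 2, 3):
        print(f"{n_a} assessor(s) x {n_d} DOPS each: G = {projected_g(n_a, n_d):.2f}")

Under these assumptions, three assessors contributing three DOPS each gives G of about 0.70, in line with the reliability threshold reported in the abstract.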


Subject(s)
Clinical Competence/standards; Educational Measurement; Endoscopy, Digestive System/education; Gastroscopy/education; Educational Measurement/methods; Educational Measurement/standards; Factor Analysis, Statistical; Humans; Learning Curve; Prospective Studies; Reproducibility of Results
3.
Surg Endosc; 34(1): 115, 2020 Jan.
Article in English | MEDLINE | ID: mdl-30937617

ABSTRACT

The citation for Reference 22 should be replaced with: Kumar NL, Kugener G, Perencevich ML, et al (2018) The SAFE-T assessment tool: derivation and validation of a web-based application for point-of-care evaluation of gastroenterology fellow performance in colonoscopy. Gastrointest Endosc 87(1):262-269.

4.
Ann Emerg Med; 74(5): 670-678, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31326204

ABSTRACT

STUDY OBJECTIVE: The contribution of emergency medicine clinicians' nontechnical skills in providing safe, high-quality care in the emergency department (ED) is well known. In 2015, the UK Royal College of Emergency Medicine introduced explicit validated descriptors of nontechnical skills needed to function effectively in the ED. A new nontechnical skills assessment tool that provided a score for 12 domains of nontechnical skills and detailed narrative feedback, the Extended Supervised Learning Event (ESLE), was introduced and was mandated as part of the Royal College of Emergency Medicine assessment schedule. We aim to evaluate the psychometric reliability of the ESLE in its first year of use. METHODS: ESLEs were mandated for all UK emergency medicine trainees in the final 4 years of a 6-year national training program from August 2015. The completed assessments were uploaded to the Royal College of Emergency Medicine e-portfolio. All assessments recorded in the Royal College of Emergency Medicine e-portfolio database between August 2015 and August 2016 were anonymized and analyzed for psychometric reliability, using generalizability theory. Decision analysis was used to model the effect of altering the number of episodes and assessors on reliability. RESULTS: A total of 1,390 ESLEs were analyzed. The majority (62%) of the variation in nontechnical skills scores was attributable to the trainee's ability. The circumstances of the event (eg, case complexity, workload) accounted for 21% and the stringency or leniency of assessors the remaining 16%. Decision analysis suggests that 3 ESLEs by 2 or more assessors, as currently recommended in the Royal College of Emergency Medicine curriculum, provide an assessment with a reliability coefficient of 0.8. CONCLUSION: Board-certified-equivalent emergency medicine supervisors are able to provide reliable assessments of emergency medicine trainees' nontechnical skills in the workplace by using the ESLE.
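A back-of-the-envelope generalisability projection is consistent with the decision analysis reported above, if one assumes, purely for illustration, that the event and assessor facets contribute independent error terms that average over the number of events n_e and assessors n_a:

\[
G(n_e, n_a) = \frac{\sigma^2_{\text{trainee}}}{\sigma^2_{\text{trainee}} + \sigma^2_{\text{event}}/n_e + \sigma^2_{\text{assessor}}/n_a},
\qquad
G(3, 2) = \frac{0.62}{0.62 + 0.21/3 + 0.16/2} \approx 0.81,
\]

which matches the reliability coefficient of 0.8 quoted for three ESLEs assessed by two or more assessors.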


Subject(s)
Clinical Competence/standards; Emergency Medicine/education; Patient Safety/standards; Quality of Health Care/standards; Supervised Machine Learning; Educational Measurement; Emergency Medicine/standards; Feedback; Humans; Psychometrics; Reproducibility of Results; Workplace
5.
Med Teach; 41(7): 787-794, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30912989

ABSTRACT

Purpose: Examiner training has an inconsistent impact on subsequent performance. To understand this variation, we explored how examiners think about changing the way they assess. Method: We provided comparative data to 17 experienced examiners about their assessments, captured their sense-making processes using a modified think-aloud protocol, and identified patterns by inductive thematic analysis. Results: We observed five sense-making processes: (1) testing personal relevance, (2) interpretation, (3) attribution, (4) considering the need for change, and (5) considering the nature of change. Three observed meta-themes describe the manner of examiners' thinking: Guarded curiosity - where examiners expressed curiosity over how their judgments compared with others', but they also expressed guardedness about the relevance of the comparisons; Dysfunctional assimilation - where examiners' interpretation and attribution exhibited cognitive anchoring, personalization, and affective bias; Moderated conservatism - where examiners expressed openness to change, but also loyalty to their judgment-framing values and aphorisms. Conclusions: Our examiners engaged in complex processes as they considered changing their assessments. The 'stabilising' mechanisms some used resembled learners assimilating educational feedback. If these are typical examiner responses, they may well explain the variable impact of examiner training, and have significant implications for the pursuit of meaningful and defensible judgment-based assessment.


Subject(s)
Educational Measurement/methods; Educational Measurement/standards; Formative Feedback; Judgment; Professional Competence/standards; Staff Development/organization & administration; Humans; Reference Standards; Staff Development/standards
6.
Educ Prim Care; 28(1): 16-22, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27499463

ABSTRACT

BACKGROUND: Educational feedback is amongst the most powerful of all learning interventions. RESEARCH QUESTIONS: (1) Can we measure the quality of written educational feedback with acceptable metrics? (2) Based on such a measure, does a quality improvement (QI) intervention improve the quality of feedback? STUDY DESIGN: We developed a QI instrument to measure the quality of written feedback and applied it to written feedback provided to medical students following workplace assessments. We evaluated the measurement characteristics of the QI score using generalisability theory. In an uncontrolled intervention, QI profiles were fed back to GP tutors and pre- and post-intervention scores compared. STUDY RESULTS: A single assessor scoring 6 feedback summaries can discriminate between practices with a reliability of 0.82. The quality of feedback rose for two years after the introduction of the QI instrument and stabilised in the third year. The estimated annual cost to provide this feedback is £12 per practice. INTERPRETATION AND RECOMMENDATIONS: It is relatively straightforward and inexpensive to measure the quality of written feedback with good reliability. The QI process appears to improve the quality of written feedback. We recommend routine use of a QI process to improve the quality of educational feedback.


Subject(s)
Education, Medical, Undergraduate/methods; Educational Measurement/methods; Feedback; Quality Improvement; Writing/standards; England; Humans; Students, Medical
7.
BMC Med Educ; 15: 83, 2015 Apr 29.
Article in English | MEDLINE | ID: mdl-25924676

ABSTRACT

BACKGROUND: Professional self-identity [PSI] can be defined as the degree to which an individual identifies with his or her professional group. Several authors have called for a better understanding of the processes by which healthcare students develop their professional identities, and suggested helpful theoretical frameworks borrowed from the social science and psychology literature. However to our knowledge, there has been little empirical work examining these processes in actual healthcare students, and we are aware of no data driven description of PSI development in healthcare students. Here, we report a data driven model of PSI formation in healthcare students. METHODS: We interviewed 17 student doctors and dentists who had indicated, on a tracking questionnaire, the most substantial changes in their PSI. We analysed their perceptions of the experiences that had influenced their PSI, to develop a descriptive model. Both the primary coder and the secondary coder considered the data without reference to the existing literature; i.e. we used a bottom up approach rather than a top down approach. RESULTS: The results indicate that two overlapping frames of reference affect PSI formation: the students' self-perception and their perception of the professional role. They are 'learning' both; neither is static. Underpinning those two learning processes, the following key mechanisms operated: [1] When students are allowed to participate in the professional role they learn by trying out their knowledge and skill in the real world and finding out to what extent they work, and by trying to visualise themselves in the role. [2] When others acknowledge students as quasi-professionals they experience transference and may respond with counter-transference by changing to meet expectations or fulfil a prototype. [3] Students may also dry-run their professional role (i.e., independent practice of professional activities) in a safe setting when invited. CONCLUSIONS: Students' experiences, and their perceptions of those experiences, can be evaluated through a simple model that describes and organises the influences and mechanisms affecting PSI. This empirical model is discussed in the light of prevalent frameworks from the social science and psychology literature.


Subject(s)
Education, Dental; Education, Medical; Physician's Role/psychology; Professionalism/education; Self Concept; Students, Dental/psychology; Students, Medical/psychology; England; Female; Humans; Interview, Psychological; Male
8.
Med Teach; 36(8): 685-91, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24601877

ABSTRACT

This article describes the problem of disorientation in students as they become doctors. Disorientation arises because students have a poor or inaccurate understanding of what they are training to become. If they do not know what they are becoming it is hard for them to prioritise and contextualise their learning, to make sense of information about where they are now (assessment and feedback) or to determine the steps they need to take to develop (formative feedback and "feedforward"). It is also a barrier to the early development of professional identity. Using the analogy of a map, the paper describes the idea of a curriculum that is articulated as a developmental journey--a "roadmap curriculum". This is not incompatible with a competency-based curriculum, and certainly requires the same integration of knowledge, skills and attitudes. However, the semantic essence of a roadmap curriculum is fundamentally different; it must describe the pathway or pathways of development toward being a doctor in ways that are both authentic to qualified doctors and meaningful to learners. Examples from within and outside medicine are cited. Potential advantages and implications of this kind of curricular reform are discussed.


Subject(s)
Confusion/prevention & control; Education, Medical; Learning; Students, Medical/psychology; Anxiety; Curriculum; Humans; Physician's Role; Teaching
9.
J Org Chem; 75(15): 5414-6, 2010 Aug 06.
Article in English | MEDLINE | ID: mdl-20608670

ABSTRACT

The thermally promoted cycloaddition between alkynyl iodides and nitrile oxides is reported. The process offers excellent regioselectivity and a broad scope with respect to both the iodoalkynes and chloro-oximes. Further functionalization of the highly decorated iodoisoxazole motifs can be achieved via Suzuki cross-coupling.

10.
J Gastrointestin Liver Dis; 28(1): 33-40, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30851170

ABSTRACT

BACKGROUND AND AIMS: Data supporting milestone development during flexible sigmoidoscopy (FS) training are lacking. We aimed to present validity evidence for our formative direct observation of procedural skills (DOPS) assessment in FS, and use DOPS to establish competency benchmarks and define learning curves for a national training cohort. METHODS: This prospective UK-wide (211 centres) study included all FS formative DOPS assessments submitted to the national e-portfolio. Reliability was estimated from generalisability theory analysis. Item and global DOPS scores were correlated with lifetime procedure count to study learning curves, with competency benchmarks defined using contrasting groups analysis. Multivariable binary logistic regression was performed to identify independent predictors of DOPS competence. RESULTS: This analysis included 3,616 DOPS submitted for 468 trainees. From generalisability analysis, sources of overall competency score variance included: trainee ability (27%), assessor stringency (15%), assessor subjectivity attributable to the trainee (18%) and case-to-case variation (40%), which enabled the modelling of reliability estimates. The competency benchmark (mean DOPS score: 3.84) was achieved after 150-174 procedures. Across the cohort, competency development occurred in the order of: pre-procedural (50-74), non-technical (75-149), technical (125-174) and post-procedural (175-199) skills. Lifetime procedural count (p<0.001), case difficulty (p<0.001), and lifetime formative DOPS count (p=0.001) were independently associated with DOPS competence, but not trainee or assessor specialty. CONCLUSION: Sigmoidoscopy DOPS can provide valid and reliable assessments of competency during training and can be used to chart competency development. Contrary to earlier studies, based on destination-orientated endpoints, overall competency in sigmoidoscopy was attained after 150 lifetime procedures.
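The contrasting-groups standard setting mentioned above can be sketched as follows. The scores are synthetic and the normal-distribution fit is an illustrative assumption; the study's benchmark of 3.84 came from its own data, not from this sketch.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical mean DOPS scores for assessments rated overall competent
# versus not yet competent.
competent = rng.normal(4.2, 0.35, 800)
not_yet = rng.normal(3.4, 0.45, 400)

# Fit a normal density to each group and take the cut score where the two
# densities cross, searching between the group means.
grid = np.linspace(1.0, 5.0, 2001)
pdf_c = norm.pdf(grid, competent.mean(), competent.std(ddof=1))
pdf_n = norm.pdf(grid, not_yet.mean(), not_yet.std(ddof=1))
between = (grid > not_yet.mean()) & (grid < competent.mean())
cut = grid[between][np.argmin(np.abs(pdf_c[between] - pdf_n[between]))]
print(f"Contrasting-groups cut score: {cut:.2f}")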


Subject(s)
Clinical Competence; Educational Measurement/methods; Gastroenterologists/education; General Practitioners/education; Learning Curve; Sigmoidoscopy/education; Surgeons/education; Task Performance and Analysis; Equipment Design; Humans; Pliability; Prospective Studies; Sigmoidoscopes; Sigmoidoscopy/instrumentation; Specialization; United Kingdom
11.
Med Educ; 42(4): 364-73, 2008 Apr.
Article in English | MEDLINE | ID: mdl-18338989

ABSTRACT

OBJECTIVES: To evaluate the reliability and feasibility of assessing the performance of medical specialist registrars (SpRs) using three methods: the mini-clinical evaluation exercise (mini-CEX), directly observed procedural skills (DOPS) and multi-source feedback (MSF) to help inform annual decisions about the outcome of SpR training. METHODS: We conducted a feasibility study and generalisability analysis based on the application of these assessment methods and the resulting data. A total of 230 SpRs (from 17 specialties) in 58 UK hospitals took part from 2003 to 2004. Main outcome measures included the time taken for each assessment, variance component analysis of mean scores, and derivation of 95% confidence intervals for individual doctors' scores based on the standard error of measurement. Responses to direct questions on questionnaires were analysed, as were the themes emerging from open-comment responses. RESULTS: The methods can provide reliable scores with appropriate sampling. In our sample, all trainees who completed the number of assessments recommended by the Royal Colleges of Physicians had scores that were 95% certain to be better than unsatisfactory. The mean time taken to complete the mini-CEX (including feedback) was 25 minutes. The DOPS required the duration of the procedure being assessed plus an additional third of this time for feedback. The mean time required for each rater to complete his or her MSF form was 6 minutes. CONCLUSIONS: This is the first attempt to evaluate the use of comprehensive workplace assessment across the medical specialties in the UK. The methods are feasible to conduct and can make reliable distinctions between doctors' performances. With adaptation, they may be appropriate for assessing the workplace performance of other grades and specialties of doctor. This may be helpful in informing foundation assessment.
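The abstract's use of the standard error of measurement (SEM) to put a 95% confidence interval around an individual doctor's score can be sketched in a few lines. The score scale, spread, and reliability below are illustrative assumptions, not values taken from the study.

import math

def score_ci_95(mean_score: float, sd: float, reliability: float):
    """95% CI for an observed mean score, with SEM = SD * sqrt(1 - reliability)."""
    sem = sd * math.sqrt(1.0 - reliability)
    return mean_score - 1.96 * sem, mean_score + 1.96 * sem

low, high = score_ci_95(mean_score=4.6, sd=0.8, reliability=0.80)
print(f"95% CI: {low:.2f} to {high:.2f}")

A trainee whose whole interval lies above the unsatisfactory boundary can be judged satisfactory with 95% certainty, which is the logic applied in the study.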


Subject(s)
Clinical Competence/standards; Employee Performance Appraisal/methods; Medical Staff, Hospital/standards; Medicine; Specialization; Analysis of Variance; Feasibility Studies; Feedback; United Kingdom; Workplace
12.
J Contin Educ Health Prof; 35(2): 91-8, 2015.
Article in English | MEDLINE | ID: mdl-26115108

ABSTRACT

INTRODUCTION: Nurse appraisal is well established in the Western world because of its obvious educational advantages. Appraisal works best with many sources of information on performance. Multisource feedback (MSF) is widely used in business and in other clinical disciplines to provide such information. It has also been incorporated into nursing appraisals, but, so far, none of the instruments in use for nurses has been validated. We set out to develop an instrument aligned with the UK Knowledge and Skills Framework (KSF) and to evaluate its reliability and feasibility across a wide hospital-based nursing population. METHODS: The KSF framework provided a content template. Focus groups developed an instrument based on consensus. The instrument was administered to all the nursing staff in 2 large NHS hospitals forming a single trust in London, England. We used generalizability analysis to estimate reliability, response rates and unstructured interviews to evaluate feasibility, and factor structure and correlation studies to evaluate validity. RESULTS: On a voluntary basis the response rate was moderate (60%). A failure to engage with information technology and employment-related concerns were commonly cited as reasons for not responding. In this population, 11 responses provided a profile with sufficient reliability to inform appraisal (G = 0.7). Performance on the instrument was closely and significantly correlated with performance on a KSF questionnaire. DISCUSSION: This is the first contemporary psychometric evaluation of an MSF instrument for nurses. MSF appears to be as valid and reliable as an assessment method to inform appraisal in nurses as it is in other health professional groups.


Subject(s)
Clinical Competence; Employee Performance Appraisal/methods; Feedback; Nursing Staff; Surveys and Questionnaires/standards; England; Focus Groups; Humans; Psychometrics; Reproducibility of Results; Staff Development
13.
Adv Med Educ Pract; 6: 447-57, 2015.
Article in English | MEDLINE | ID: mdl-26109879

ABSTRACT

In 2009, the General Medical Council UK (GMC) published its updated guidance on medical education for the UK medical schools - Tomorrow's Doctors 2009. The Council recommended that the UK medical schools introduce, for the first time, a clinical placement in which a senior medical student, "assisting a junior doctor and under supervision, undertakes most of the duties of an F1 doctor". In the UK, an F1 doctor is a postgraduation year 1 (PGY1) doctor. This new kind of placement was called a student assistantship. The recommendation was considered necessary because conventional UK clinical placements rarely provided medical students with opportunities to take responsibility for patients - even under supervision. This is in spite of good evidence that higher levels of learning, and the acquisition of essential clinical and nontechnical skills, depend on students participating in health care delivery and gradually assuming responsibility under supervision. This review discusses the gap between student and doctor, and the impact of the student assistantship policy. Early evaluation indicates substantial variation in the clarity of purpose, setting, length, and scope of existing assistantships. In particular, few models are explicit on the most critical issue: exactly how the student participates in care and how supervision is deployed to optimize learning and patient safety. Surveys indicate that these issues are central to students' perceptions of the assistantship. They know when they have experienced real responsibility and when they have not. This lack of clarity and variation has limited the impact of student assistantships. We also consider other important approaches to bridging the gap between student and doctor. These include supporting the development of the student as a whole person, commissioning and developing the right supervision, student-aligned curricula, and challenging the risk assumptions of health care providers.

14.
Med Educ; 38(8): 852-8, 2004 Aug.
Article in English | MEDLINE | ID: mdl-15271046

ABSTRACT

AIM: To improve the quality of outpatient letters used as communication between hospital and primary care doctors. METHODS: On 2 separate occasions, 15 unselected outpatient letters written by each of 7 hospital practitioners were rated by another hospital doctor and a general practitioner (GP) using the Sheffield Assessment Instrument for Letters (SAIL). Individualised feedback was provided to participants following the rating of the first set of letters. The audit cycle was completed 3 months later without forewarning by repeat assessment by the same hospital and GP assessors using the SAIL tool to see if there was any improvement in correspondence. SETTING: Single centre: general paediatric outpatient department in a large district general hospital. RESULTS: All 7 doctors available for reassessment completed the audit loop, each providing 15 outpatient letters per assessment. The mean of the quality scores, derived for each letter from the summation of a 20-point checklist and a global score, improved from 23.3 (95% CI 22.1-24.4) to 26.6 (95% CI 25.8-27.4) (P = 0.001). CONCLUSIONS: The SAIL provides a feasible and reliable method of assessing the quality and content of outpatient clinic letters. This study demonstrates that it can also provide feedback with a powerful educational impact. This approach holds real potential for appraisal and revalidation, providing an effective means for the quality improvement required by clinical governance.


Subject(s)
Correspondence as Topic; Medical Records/standards; Referral and Consultation/standards; Communication; Family Practice/organization & administration; Humans; Medical Staff, Hospital/organization & administration; Quality Control