Results 1 - 13 of 13
1.
BMC Med Educ ; 24(1): 487, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38698352

ABSTRACT

BACKGROUND: Workplace-based assessment (WBA) used in post-graduate medical education relies on physician supervisors' feedback. However, in a training environment where supervisors are unavailable to assess certain aspects of a resident's performance, nurses are well-positioned to do so. The Ottawa Resident Observation Form for Nurses (O-RON) was developed to capture nurses' assessment of trainee performance, and results have demonstrated strong evidence for validity in Orthopedic Surgery. However, different clinical settings may impact a tool's performance. This project studied the use of the O-RON in three different specialties at the University of Ottawa. METHODS: O-RON forms were distributed on Internal Medicine, General Surgery, and Obstetrical wards at the University of Ottawa over nine months. Validity evidence related to quantitative data was collected. Exit interviews with nurse managers were performed and content was thematically analyzed. RESULTS: 179 O-RONs were completed on 30 residents. With four forms per resident, the O-RON's reliability was 0.82. Global judgement responses and frequency of concerns were correlated (r = 0.627, P < 0.001). CONCLUSIONS: Consistent with the original study, the findings demonstrated strong evidence for validity. However, the number of forms collected was less than expected. Exit interviews identified factors impacting form completion, which included clinical workloads and interprofessional dynamics.
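The relationship between the number of forms per resident and composite reliability figures like the 0.82 reported here is given by the Spearman-Brown projection. A minimal sketch in Python (the 0.82-with-four-forms value comes from the abstract; the function names and the eight-form projection are illustrative):

```python
def spearman_brown(r1, k):
    """Projected reliability of the mean of k parallel forms."""
    return k * r1 / (1 + (k - 1) * r1)

def single_form_reliability(rk, k):
    """Invert Spearman-Brown: reliability of one form, given the k-form value."""
    return rk / (k - (k - 1) * rk)

# Reported: reliability 0.82 with four forms per resident
r1 = single_form_reliability(0.82, 4)
print(round(r1, 2))                      # reliability of a single form
print(round(spearman_brown(r1, 8), 2))   # projected reliability with eight forms
```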


Subjects
Clinical Competence, Internship and Residency, Psychometrics, Humans, Reproducibility of Results, Female, Male, Educational Measurement/methods, Ontario, Internal Medicine/education
2.
Article in English | MEDLINE | ID: mdl-38010576

ABSTRACT

First impressions can influence rater-based judgments but their contribution to rater bias is unclear. Research suggests raters can overcome first impressions in experimental exam contexts with explicit first impressions, but these findings may not generalize to a workplace context with implicit first impressions. The study had two aims. First, to assess if first impressions affect raters' judgments when workplace performance changes. Second, whether explicitly stating these impressions affects subsequent ratings compared to implicitly formed first impressions. Physician raters viewed six videos where learner performance either changed (Strong to Weak or Weak to Strong) or remained consistent. Raters were assigned to two groups. Group one (n = 23, Explicit) made a first impression global rating (FIGR), then scored learners using the Mini-CEX. Group two (n = 22, Implicit) scored learners at the end of the video solely with the Mini-CEX. For the Explicit group, in the Strong to Weak condition, the FIGR (M = 5.94) was higher than the Mini-CEX Global rating (GR) (M = 3.02, p < .001). In the Weak to Strong condition, the FIGR (M = 2.44) was lower than the Mini-CEX GR (M = 3.96, p < .001). There was no difference between the FIGR and the Mini-CEX GR in the consistent condition (M = 6.61, M = 6.65 respectively, p = .84). There were no statistically significant differences in any of the conditions when comparing both groups' Mini-CEX GR. Therefore, raters adjusted their judgments based on the learners' performances. Furthermore, raters who made their first impressions explicit showed similar rater bias to raters who followed a more naturalistic process.

3.
Med Educ ; 57(10): 949-957, 2023 10.
Article in English | MEDLINE | ID: mdl-37387266

ABSTRACT

BACKGROUND: Work-based assessments (WBAs) are increasingly used to inform decisions about trainee progression. Unfortunately, WBAs often fail to discriminate between trainees of differing abilities and have poor reliability. Entrustment-supervision scales may improve WBA performance, but there is a paucity of literature directly comparing them to traditional WBA tools. METHODS: The Ottawa Emergency Department Shift Observation Tool (O-EDShOT) is a previously published WBA tool employing an entrustment-supervision scale with strong validity evidence. This pre-/post-implementation study compares the performance of the O-EDShOT with that of a traditional WBA tool using norm-based anchors. All assessments completed in 12-month periods before and after implementing the O-EDShOT were collected, and generalisability analysis was conducted with year of training, trainees within year and forms within trainee as nested factors. Secondary analysis included assessor as a factor. RESULTS: A total of 3908 and 3679 assessments were completed by 99 and 116 assessors, for 152 and 138 trainees in the pre- and post-implementation phases respectively. The O-EDShOT generated a wider range of awarded scores than the traditional WBA, and mean scores increased more with increasing level of training (0.32 vs. 0.14 points per year, p = 0.01). A significantly greater proportion of overall score variability was attributable to trainees using the O-EDShOT (59%) compared with the traditional tool (21%, p < 0.001). Assessors contributed less to overall score variability for the O-EDShOT than for the traditional WBA (16% vs. 37%). Moreover, the O-EDShOT required fewer completed assessments than the traditional tool (27 vs. 51) for a reliability of 0.8. CONCLUSION: The O-EDShOT outperformed a traditional norm-referenced WBA in discriminating between trainees and required fewer assessments to generate a reliable estimate of trainee performance. 
More broadly, this study adds to the body of literature suggesting that entrustment-supervision scales generate more useful and reliable assessments in a variety of clinical settings.
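Results of the form "27 vs. 51 assessments for a reliability of 0.8" come from a generalizability D-study: given estimated variance components, project the reliability of a mean over n forms and find the smallest n reaching the target. A simplified single-facet sketch (the variance components below are illustrative, not the study's; the study's design also nested trainees within year and forms within trainee):

```python
def g_coefficient(var_trainee, var_error, n_forms):
    """Generalizability coefficient for a mean over n_forms (single error facet)."""
    return var_trainee / (var_trainee + var_error / n_forms)

def forms_needed(var_trainee, var_error, target=0.8):
    """Smallest number of forms reaching the target reliability (D-study)."""
    n = 1
    while g_coefficient(var_trainee, var_error, n) < target:
        n += 1
    return n

# Illustrative components: trainee variance 1.0, residual variance 4.0
print(forms_needed(1.0, 4.0))  # → 16
```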


Subjects
Educational Measurement, Workplace, Humans, Reproducibility of Results, Clinical Competence, Graduate Medical Education
4.
Can Med Educ J ; 13(6): 36-45, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36440072

ABSTRACT

Background: Competence by design (CBD) residency programs increasingly depend on tools that provide reliable assessments, require minimal rater training, and measure progression through the CBD milestones. To assess intraoperative skills, global rating scales and entrustability ratings are commonly used but may require extensive training. The Competency Continuum (CC) is a CBD framework that may be used as an assessment tool to assess laparoscopic skills. The study aimed to compare the CC to two other assessment tools: the Global Operative Assessment of Laparoscopic Skills (GOALS) and the Zwisch scale. Methods: Four expert surgeons rated thirty laparoscopic cholecystectomy videos. Two raters used the GOALS scale while the remaining two raters used both the Zwisch scale and CC. Each rater received scale-specific training. Descriptive statistics, inter-rater reliabilities (IRR), and Pearson's correlations were calculated for each scale. Results: Significant positive correlations between GOALS and Zwisch (r = 0.75, p < 0.001), CC and GOALS (r = 0.79, p < 0.001), and CC and Zwisch (r = 0.90, p < 0.001) were found. The CC had an inter-rater reliability of 0.74 whereas the GOALS and Zwisch scales had inter-rater reliabilities of 0.44 and 0.43, respectively. Compared to GOALS and Zwisch scales, the CC had the highest inter-rater reliability and required minimal rater training to achieve reliable scores. Conclusion: The CC may be a reliable tool to assess intraoperative laparoscopic skills and provide trainees with formative feedback relevant to the CBD milestones. Future research should gather additional validity evidence for the use of the CC as an independent assessment tool.
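Inter-rater reliabilities like those reported here are often computed as an intraclass correlation; the abstract does not specify which variant was used, so the sketch below assumes ICC(2,1) (two-way random effects, absolute agreement, single rater) with made-up ratings:

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    x is an (n subjects x k raters) array of ratings."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    m = x.mean()
    row = x.mean(axis=1)          # per-subject means
    col = x.mean(axis=0)          # per-rater means
    msr = k * ((row - m) ** 2).sum() / (n - 1)      # between-subject mean square
    msc = n * ((col - m) ** 2).sum() / (k - 1)      # between-rater mean square
    resid = x - row[:, None] - col[None, :] + m
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))  # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters who agree perfectly up to a constant one-point offset
print(round(icc_2_1([[1, 2], [2, 3], [3, 4]]), 3))  # → 0.667
```

The offset example shows why ICC(2,1) penalizes systematic rater severity even when rank ordering is identical.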



5.
AEM Educ Train ; 6(4): e10781, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35903424

ABSTRACT

Background: A key component of competency-based medical education (CBME) is direct observation of trainees. Direct observation has been emphasized as integral to workplace-based assessment (WBA) yet previously identified challenges may limit its successful implementation. Given these challenges, it is imperative to fully understand the value of direct observation within a CBME program of assessment. Specifically, it is not known whether the quality of WBA documentation is influenced by observation type (direct or indirect). Methods: The objective of this study was to determine the influence of observation type (direct or indirect) on quality of entrustable professional activity (EPA) assessment documentation within a CBME program. EPA assessments were scored by four raters using the Quality of Assessment for Learning (QuAL) instrument, a previously published three-item quantitative measure of the quality of written comments associated with a single clinical performance score. An analysis of variance was performed to compare mean QuAL scores among the direct and indirect observation groups. The reliability of the QuAL instrument for EPA assessments was calculated using a generalizability analysis. Results: A total of 244 EPA assessments (122 direct observation, 122 indirect observation) were rated for quality using the QuAL instrument. No difference in mean QuAL score was identified between the direct and indirect observation groups (p = 0.17). The reliability of the QuAL instrument for EPA assessments was 0.84. Conclusions: Observation type (direct or indirect) did not influence the quality of EPA assessment documentation. This finding raises the question of how direct and indirect observation truly differ and the implications for meta-raters such as competence committees responsible for making judgments related to trainee promotion.
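The group comparison described here (mean QuAL scores for direct vs. indirect observation) is a one-way ANOVA, which with two groups is equivalent to a t-test. A sketch with hypothetical scores (illustrative values only, not the study's data):

```python
from scipy.stats import f_oneway

# Hypothetical QuAL totals for EPA assessments in each observation group
direct   = [3, 4, 2, 5, 3, 4, 3]
indirect = [3, 3, 2, 4, 4, 3, 3]

f_stat, p_value = f_oneway(direct, indirect)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```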

6.
J Surg Educ ; 78(5): 1666-1675, 2021.
Article in English | MEDLINE | ID: mdl-34092533

ABSTRACT

OBJECTIVE: Most workplace-based assessment relies on physician supervisors making observations of residents. Many areas of performance are not directly observed by physicians but rather by other healthcare professionals, most often nurses. Assessment of resident performance by nurses is captured with multi-source feedback tools. However, these tools combine the assessments of nurses with other healthcare professionals and so their perspective can be lost. A novel tool was developed and implemented to assess resident performance on a hospital ward from the perspective of the nurses. DESIGN: Through a nominal group technique, nurses identified dimensions of performance that are reflective of high-quality physician performance on a hospital ward. These were included as items in the Ottawa Resident Observation Form for Nurses (O-RON). The O-RON was voluntarily completed during an 11-month period. Validity evidence related to quantitative and qualitative data was collected. SETTING: The Orthopedic Surgery Residency Program at the University of Ottawa. PARTICIPANTS: 49 nurses on the Orthopedic Surgery wards at The Ottawa Hospital (tertiary care). RESULTS: The O-RON has 15 items rated on a 3-point frequency scale, one global judgment yes/no question regarding whether they would want the resident on their team, and a space for comments. 1079 O-RONs were completed on 38 residents. There was an association between the response to the global judgment question and the frequency of concerns (p < 0.01). With 8 forms per resident, the reliability of the O-RON was 0.80. Open-ended responses referred to aspects of interpersonal skills, responsiveness, dependability, communication skills, and knowledge. CONCLUSIONS: The O-RON demonstrates promise as a workplace-based assessment tool to provide residents and training programs with feedback on aspects of their performance on a hospital ward through the eyes of the nurses. It appears to be easy to use, has solid evidence for validity, and can provide reliable data with a small number of completed forms.


Subjects
Internship and Residency, Nurses, Clinical Competence, Feedback, Humans, Reproducibility of Results
7.
Virchows Arch ; 479(4): 803-813, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33966099

ABSTRACT

Competency-based medical education (CBME) is being implemented worldwide. In CBME, residency training is designed around the competencies required for unsupervised practice and uses entrustable professional activities (EPAs) as workplace "units of assessment". Well-designed workplace-based assessment (WBA) tools are required to document competence of trainees in authentic clinical environments. In this study, we developed a WBA instrument to assess residents' performance of intra-operative pathology consultations and conducted a validity investigation. The entrustment-aligned pathology assessment instrument for intra-operative consultations (EPA-IC) was developed through a national iterative consultation and used clinical supervisors to assess residents' performance at an anatomical pathology program. Psychometric analyses and focus groups were conducted to explore the sources of evidence using modern validity theory: content, response process, internal structure, relations to other variables, and consequences of assessment. The content was considered appropriate, the assessment was feasible and acceptable by residents and supervisors, and it had a positive educational impact by improving performance of intra-operative consultations and feedback to learners. The results had low reliability, which seemed to be related to assessment biases, and supervisors were reluctant to fully entrust trainees due to cultural issues. With CBME implementation, new workplace-based assessment tools are needed in pathology. In this study, we showcased the development of the first instrument for assessing residents' performance of a prototypical entrustable professional activity in pathology using modern education principles and validity theory.


Subjects
Competency-Based Education/methods, Medical Education/methods, Employee Performance Appraisal/methods, Clinical Competence, Graduate Medical Education/methods, Humans, Learning, Referral and Consultation, Reproducibility of Results, Workplace
8.
Adv Health Sci Educ Theory Pract ; 26(3): 1133-1156, 2021 08.
Article in English | MEDLINE | ID: mdl-33566199

ABSTRACT

Understanding which factors can impact rater judgments in assessments is important to ensure quality ratings. One such factor is whether prior performance information (PPI) about learners influences subsequent decision making. The information can be acquired directly, when the rater sees the same learner, or different learners over multiple performances, or indirectly, when the rater is provided with external information about the same learner prior to rating a performance (i.e., learner handover). The purpose of this narrative review was to summarize and highlight key concepts from multiple disciplines regarding the influence of PPI on subsequent ratings, discuss implications for assessment and provide a common conceptualization to inform research. Key findings include (a) assimilation (rater judgments are biased towards the PPI) occurs with indirect PPI and contrast (rater judgments are biased away from the PPI) with direct PPI; (b) negative PPI appears to have a greater effect than positive PPI; (c) when viewing multiple performances, context effects of indirect PPI appear to diminish over time; and (d) context effects may occur with any level of target performance. Furthermore, some raters are not susceptible to context effects, but it is unclear what factors are predictive. Rater expertise and training do not consistently reduce effects. Making raters more accountable, providing specific standards and reducing rater cognitive load may reduce context effects. Theoretical explanations for these findings will be discussed.


Subjects
Clinical Competence, Educational Measurement, Humans, Judgment, Observer Variation, Research Personnel
9.
J Contin Educ Health Prof ; 38(3): 154-157, 2018.
Article in English | MEDLINE | ID: mdl-30157157

ABSTRACT

A common research study in assessment involves measuring the amount of knowledge, skills, or attitudes that participants possess. In the continuing professional development arena, a researcher might also want to assess this information as an outcome of an educational activity. At some point, the researcher may wish to publish the results from these assessment-based studies. The goal of this commentary is to highlight common problems that could negatively influence the likelihood of an assessment-based manuscript being published.


Subjects
Process Assessment (Health Care)/methods, Publishing/trends, Writing/standards, Humans, Publishing/standards
10.
Adv Health Sci Educ Theory Pract ; 23(4): 721-732, 2018 Oct.
Article in English | MEDLINE | ID: mdl-29556923

ABSTRACT

There is an increasing focus on factors that influence the variability of rater-based judgments. First impressions are one such factor. First impressions are judgments about people that are made quickly and are based on little information. Under some circumstances, these judgments can be predictive of subsequent decisions. A concern for both examinees and test administrators is whether the relationship remains stable when the performance of the examinee changes. That is, once a first impression is formed, to what degree will an examiner be willing to modify it? The purpose of this study is to determine the degree to which first impressions influence final ratings when the performance of examinees changes within the context of an objective structured clinical examination (OSCE). Physician examiners (n = 29) viewed seven videos of examinees (i.e., actors) performing a physical exam on a single OSCE station. They rated the examinees' clinical abilities on a six-point global rating scale after 60 s (first impression or FIGR). They then observed the examinee for the remainder of the station and provided a final global rating (GRS). For three of the videos, the examinees' performance remained consistent throughout the videos. For two videos, examinee performance changed from initially strong to weak and for two videos, performance changed from initially weak to strong. The mean FIGR rating for the Consistent condition (M = 4.80) and the Strong to Weak condition (M = 4.87) were higher compared to their respective GRS ratings (M = 3.93, M = 2.73) with a greater decline for the Strong to Weak condition. The mean FIGR rating for the Weak to Strong condition was lower (M = 3.60) than the corresponding mean GRS (M = 4.81). This pattern of findings suggests that raters were willing to change their judgments based on examinee performance.
Future work should explore the impact of making a first impression judgment explicit versus implicit and the role of context on the relationship between a first impression and a subsequent judgment.
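The FIGR-versus-GRS comparisons above are within-rater contrasts, which can be tested with a paired t-test. A sketch with hypothetical ratings (illustrative values, not the study's data):

```python
from scipy.stats import ttest_rel

# Hypothetical per-rater scores on a Strong-to-Weak video: first-impression
# global rating (FIGR) at 60 s vs. final global rating (GRS)
figr = [5, 6, 5, 4, 6, 5, 5, 6]
grs  = [3, 4, 3, 3, 4, 2, 3, 4]

t_stat, p_value = ttest_rel(figr, grs)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```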


Subjects
Clinical Competence/standards, Educational Measurement/methods, Educational Measurement/standards, Observer Variation, Adult, Female, Humans, Judgment, Male, Middle Aged, Socioeconomic Factors
12.
Ann Am Thorac Soc ; 13(4): 495-501, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26862890

ABSTRACT

RATIONALE: Flexible bronchoscopy is performed by clinicians representing multiple medical specialties in a variety of settings. Given the increasing importance of competency-based assessment in postgraduate training, it is important that this skill be assessed within a competency-based framework using a valid measurement tool. OBJECTIVES: The purpose of this study was to design and validate a practical, competency-based bronchoscopy assessment tool that could be applied to trainees in a clinical setting. METHODS: Focus groups of expert physicians were formed in Ottawa, Canada, representing adult medical specialties routinely engaged in preparing trainees to perform flexible bronchoscopy (respiratory medicine, critical care, thoracic surgery and anesthesia). The focus groups were charged with identifying themes and items relevant to the assessment of competency in bronchoscopy. By an iterative process, a bronchoscopy assessment tool was developed, the Ontario Bronchoscopy Assessment Tool (OBAT). The tool was evaluated by first using it to assess learners in a pilot study, refining it based on the results, and then testing the OBAT again in a validation study. MEASUREMENTS AND MAIN RESULTS: The initial tool consisted of 19 items, organized into the following groups: preprocedure planning, sedation and monitoring, technical skill, diagnostic skill, and postprocedure planning. The tool demonstrated high reliability (0.91) and discriminated junior from senior trainees. Based on the results of the pilot, the tool was simplified to a 12-item scale with three subscales: preprocedure planning, technical skills, and postprocedure planning. In the validation study, the assessment tool remained highly reliable (0.92) and discriminated junior from senior trainees with an estimated eight assessments per trainee. CONCLUSIONS: The OBAT demonstrates promise as a reliable tool to assess trainee competence for bronchoscopy in clinical settings.


Subjects
Bronchoscopy/education, Clinical Competence/standards, Graduate Medical Education/standards, Educational Measurement/methods, Pulmonary Medicine/education, Focus Groups, Humans, Ontario, Pilot Projects, Reproducibility of Results
13.
Eval Health Prof ; 33(1): 96-108, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20042416

ABSTRACT

Item disclosure is one of the most serious threats to the validity of high stakes examinations, and identifying examinees who may have had unauthorized access to material is an important step in ensuring the integrity of an examination. A procedure was developed to identify examinees who potentially had unauthorized prior access to examination content. A standardized difference score is created by comparing examinee ability estimates for potentially exposed items to ability estimates for unexposed items. Outliers in this distribution are then flagged for further review. The steps associated with this procedure are described and followed by an example of applying the procedure. In addition, the use of this procedure is supported by the results of a simulation that models the use of unauthorized access to examination material.
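The procedure described can be sketched as follows, assuming ability estimates have already been obtained separately on the potentially exposed and unexposed item sets; the names, data, and the z-score cutoff are illustrative, not the paper's:

```python
import statistics

def flag_prior_access(theta_exposed, theta_unexposed, z_cut=3.0):
    """Flag examinees whose ability on potentially exposed items is
    unusually high relative to their ability on unexposed items.

    Both arguments map examinee id -> ability estimate on that item set.
    Returns ids whose standardized difference score exceeds z_cut.
    """
    diffs = {i: theta_exposed[i] - theta_unexposed[i] for i in theta_exposed}
    mu = statistics.mean(diffs.values())
    sd = statistics.stdev(diffs.values())
    return [i for i, d in diffs.items() if (d - mu) / sd > z_cut]

# 21 examinees; one performs far better on the exposed set than the rest
exposed   = {f"e{i}": 0.0 for i in range(21)}
unexposed = {f"e{i}": 0.0 for i in range(21)}
exposed["e20"] = 2.0
print(flag_prior_access(exposed, unexposed))  # → ['e20']
```

Flagged examinees would go to further review rather than automatic sanction, matching the abstract's framing of the flag as a screening step.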


Subjects
Clinical Competence/standards, Educational Measurement/standards, Health Occupations/ethics, Specialty Boards/standards, Analysis of Variance, Canada, Clinical Competence/statistics & numerical data, Deception, Educational Measurement/statistics & numerical data, Educational Status, Feasibility Studies, Health Occupations/education, Humans, Monte Carlo Method, Psychometrics, Regression Analysis, Specialty Boards/statistics & numerical data