1.
Med Teach; 45(4): 433-441, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36306368

ABSTRACT

Multiple-choice questions (MCQs) suffer from cueing, variable item quality, and an emphasis on testing factual knowledge. This study presents a novel multimodal test containing alternative item types in a computer-based assessment (CBA) format, designated as Proxy-CBA. The Proxy-CBA was compared to a standard MCQ-CBA regarding validity, reliability, standard error of measurement (SEM), and cognitive load, using a quasi-experimental crossover design. Biomedical students were randomized into two groups to sit a 65-item formative exam starting with the MCQ-CBA followed by the Proxy-CBA (group 1, n = 38), or the reverse (group 2, n = 35). Subsequently, a questionnaire on perceived cognitive load was administered and answered by 71 participants. Both CBA formats were analyzed according to parameters of the Classical Test Theory and the Rasch model. Compared to the MCQ-CBA, the Proxy-CBA had lower raw scores (p < 0.001, η2 = 0.276), higher reliability estimates (p < 0.001, η2 = 0.498), lower SEM estimates (p < 0.001, η2 = 0.807), and lower theta ability scores (p < 0.001, η2 = 0.288). The questionnaire revealed no significant differences between the two CBA formats regarding perceived cognitive load. Compared to the MCQ-CBA, the Proxy-CBA showed increased reliability and a higher degree of validity with similar cognitive load, suggesting its utility as an alternative assessment format.
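As a reading aid for the psychometric terms in this abstract, the sketch below shows how a Classical Test Theory reliability coefficient (Cronbach's alpha) and the corresponding standard error of measurement are typically computed from an examinee-by-item score matrix. It is a generic illustration on simulated data, not the study's analysis code; the cohort and item counts merely echo the abstract.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a matrix with rows = examinees, columns = items."""
    n_items = scores.shape[1]
    sum_item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return n_items / (n_items - 1) * (1 - sum_item_var / total_var)

def sem(scores: np.ndarray, reliability: float) -> float:
    """Standard error of measurement: SD of total scores * sqrt(1 - reliability)."""
    return scores.sum(axis=1).std(ddof=1) * np.sqrt(1 - reliability)

# Simulated dichotomous responses for 73 examinees on 65 items (invented data).
rng = np.random.default_rng(0)
ability = rng.normal(size=(73, 1))
difficulty = rng.normal(size=(1, 65))
p_correct = 1.0 / (1.0 + np.exp(-(ability - difficulty)))
scores = (rng.random((73, 65)) < p_correct).astype(float)

alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.2f}, SEM = {sem(scores, alpha):.2f} raw-score points")
```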


Subjects
Educational Measurement; Students, Medical; Humans; Reproducibility of Results; Surveys and Questionnaires; Computers
3.
BMC Med Educ; 22(1): 409, 2022 May 28.
Article in English | MEDLINE | ID: mdl-35643442

ABSTRACT

BACKGROUND: Programmatic assessment is increasingly being implemented within competency-based health professions education. In this approach, a multitude of low-stakes assessment activities are aggregated into a holistic high-stakes decision on the student's performance. High-stakes decisions need to be of high quality. Part of this quality is whether an examiner perceives saturation of information when making a holistic decision. The purpose of this study was to explore the influence of narrative information on the perception of saturation of information during the interpretative process of high-stakes decision-making. METHODS: In this mixed-method intervention study, the quality of the recorded narrative information (i.e., feedback and reflection) was manipulated within multiple portfolios to investigate its influence on 1) the perception of saturation of information and 2) the examiner's interpretative approach in making a high-stakes decision. Data were collected through surveys, screen recordings of the portfolio assessments, and semi-structured interviews. Descriptive statistics and template analysis were applied to analyze the data. RESULTS: The examiners less frequently perceived saturation of information in the portfolios with low-quality narrative feedback. Additionally, they mentioned consistency of information as a factor that influenced their perception of saturation of information. Although examiners generally had their own idiosyncratic approach to assessing a portfolio, variations arose in response to certain triggers, such as noticeable deviations in the student's performance and the quality of narrative feedback. CONCLUSION: The perception of saturation of information seemed to be influenced by the quality of the narrative feedback and, to a lesser extent, by the quality of reflection. These results emphasize the importance of high-quality narrative feedback for making robust decisions about portfolios that are expected to be more difficult to assess. Furthermore, within these "difficult" portfolios, examiners adapted their interpretative process, reacting to the intervention and other triggers with an iterative and responsive approach.


Subjects
Competency-Based Education; Narration; Competency-Based Education/methods; Feedback; Humans; Surveys and Questionnaires
4.
Med Teach; 43(10): 1139-1148, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34344274

ABSTRACT

INTRODUCTION: In the Ottawa 2018 Consensus framework for good assessment, a set of criteria was presented for systems of assessment. Programmatic assessment is currently being established in an increasing number of programmes. In this Ottawa 2020 consensus statement for programmatic assessment, insights from practice and research are used to define the principles of programmatic assessment. METHODS: For fifteen health professions education programmes affiliated with members of an expert group (n = 20), an inventory was completed of the perceived components, rationale, and importance of a programmatic assessment design. Input from attendees of a programmatic assessment workshop and symposium at the 2020 Ottawa conference was included. The outcome is discussed in concurrence with current theory and research. RESULTS AND DISCUSSION: Twelve principles are presented that are considered important and recognisable facets of programmatic assessment. Overall, these principles were used in curriculum and assessment design, albeit with a range of approaches and rigour, suggesting that programmatic assessment is an achievable education and assessment model, embedded in both practice and research. Sharing knowledge of how programmatic assessment is being operationalised may help support educators charting their own implementation journey of programmatic assessment in their respective programmes.


Subjects
Curriculum; Consensus; Humans
6.
Perspect Med Educ; 10(1): 50-56, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32902828

ABSTRACT

Although there is consensus in the medical education world that feedback is an important and effective tool to support experiential workplace-based learning, learners tend to avoid the feedback associated with direct observation because they perceive it as a high-stakes evaluation with significant consequences for their future. The perceived dominance of the summative assessment paradigm throughout medical education reduces learners' willingness to seek feedback, and encourages supervisors to conflate feedback with the provision of 'objective' grades or pass/fail marks. This eye-opener article argues that the provision and reception of effective feedback by clinical supervisors and their learners is dependent on both parties' awareness of the important distinction between feedback used in coaching towards growth and development (assessment for learning) and reaching a high-stakes judgement on the learner's competence and fitness for practice (assessment of learning). Using driving lessons and the driving test as a metaphor for feedback and assessment helps supervisors and learners to understand this crucial difference and to act upon it. It is the supervisor's responsibility to ensure that supervisor and learner achieve a clear mutual understanding of the purpose of each interaction (i.e. feedback or assessment). To allow supervisors to use the driving lesson/driving test metaphor for this purpose in their interactions with learners, it should be included in faculty development initiatives, along with a discussion of the key importance of separating feedback from assessment, to promote a feedback culture of growth and support programmatic assessment of competence.


Subjects
Educational Measurement/standards; Faculty, Medical/psychology; Formative Feedback; Metaphor; Educational Measurement/methods; Faculty, Medical/standards; Humans
7.
MedEdPublish (2016); 10: 104, 2021.
Article in English | MEDLINE | ID: mdl-38486602

ABSTRACT

INTRODUCTION: All developed countries depend on International Medical Graduates (IMGs) to complement their workforce. However, the assessment of their fitness to practice and their acculturation into the new system can be challenging. To improve this, we introduced Workplace Based Assessment (WBA), using a programmatic philosophy. This paper reports the reliability of this new approach. METHODS: Over the past 10 years, we have assessed over 250 IMGs, with each cohort assessed over a 6-month period. We used the Mini-CEX, Case Based Discussions (CBD) and Multi-Source Feedback (MSF) to assess them. We analysed the reliability of each tool and the composite reliability of the 12 Mini-CEX, 5 CBD and 12 MSF assessments in the tool kit. RESULTS: A reliability coefficient of 0.78 with an SEM of 0.19 was obtained for the sample of 236 IMGs. We found the MSF to be the most reliable tool. By adding one more MSF to the assessment on two occasions, we can reach a reliability of 0.8 and an SEM of 0.18. CONCLUSIONS: The current assessment methodology has acceptable reliability. By increasing the number of MSF assessments, we can improve reliability. The lessons from this study are generalisable to IMG assessment and other medical education programmes.
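The claim that adding MSF assessments raises the composite reliability can be illustrated with a Spearman-Brown style projection. The sketch below is a deliberate simplification that treats all 29 assessments as exchangeable, which the study's composite analysis does not assume; only the starting values (29 assessments, composite reliability 0.78) are taken from the abstract.

```python
def spearman_brown(rel_single: float, k: float) -> float:
    """Reliability of a composite made of k exchangeable assessments."""
    return k * rel_single / (1 + (k - 1) * rel_single)

def single_assessment_reliability(rel_composite: float, n: int) -> float:
    """Invert Spearman-Brown: per-assessment reliability given that of n assessments."""
    return rel_composite / (n - (n - 1) * rel_composite)

# 12 Mini-CEX + 5 CBD + 12 MSF = 29 assessments with composite reliability 0.78.
rel_one = single_assessment_reliability(0.78, 29)
for n_total in (29, 31, 35):
    print(f"{n_total} assessments -> projected reliability {spearman_brown(rel_one, n_total):.2f}")
```

Adding a couple of assessments nudges the projected coefficient from 0.78 towards 0.8, which is the direction of change the authors report, although their figure comes from a tool-specific composite analysis rather than this crude projection.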

8.
Adv Health Sci Educ Theory Pract; 25(5): 1045-1056, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33113056

ABSTRACT

The way the quality of assessment is perceived and assured has changed considerably over the past five decades. Originally, assessment was seen mainly as a measurement problem, with the aim of telling people apart: the competent from the not competent. Logically, reproducibility or reliability and construct validity were seen as necessary and sufficient for assessment quality, and the role of human judgement was minimised. Later, assessment moved back into the authentic workplace with various workplace-based assessment (WBA) methods. Although originally approached from the same measurement framework, WBA and other assessments gradually became assessment processes that included or even embraced human judgement, grounded in good support and assessment expertise. Currently, assessment is treated as a whole-system problem in which competence is evaluated from an integrated rather than a reductionist perspective. Current research therefore focuses on how to support and improve human judgement, how to triangulate assessment information meaningfully, and how to construct fairness, credibility and defensibility from a systems perspective. But, given the rapid changes in society, education and healthcare, yet another evolution in our thinking about good assessment is likely lurking around the corner.


Subjects
Education, Medical/history; Educational Measurement/history; Research/history; Clinical Competence/standards; Education, Medical/methods; Educational Measurement/methods; History, 20th Century; Humans; Judgment; Psychometrics; Reproducibility of Results; Research/organization & administration; Workplace/standards
9.
Adv Health Sci Educ Theory Pract; 24(5): 903-914, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31578642

ABSTRACT

Arguably, constructive alignment has been the major challenge for assessment in the context of problem-based learning (PBL). PBL focuses on promoting abilities such as clinical reasoning, team skills and metacognition. PBL also aims to foster self-directed learning and deep learning, as opposed to rote learning. This has incentivized researchers in assessment to find possible solutions. Originally, these solutions were sought in developing the right instruments to measure these PBL-related skills. The search for such instruments was accelerated by the emergence of competency-based education. With competency-based education, assessment moved away from purely standardized testing, relying more heavily on professional judgment of complex skills. Valuable lessons have been learned that are directly relevant for assessment in PBL. Later, solutions were sought in the development of new assessment strategies, initially again with individual instruments such as progress testing, but later through a more holistic approach to the assessment program as a whole. Programmatic assessment is such an integral approach to assessment. It focuses on optimizing learning through assessment, while at the same time gathering rich information that can be used for rigorous decision-making about learner progression. Programmatic assessment comes very close to achieving the desired constructive alignment with PBL, but, just like PBL, its wide adoption will take many years.


Subjects
Problem-Based Learning; Program Evaluation; Competency-Based Education; Education, Medical
10.
Med Teach; 41(6): 678-682, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30707848

ABSTRACT

Purpose: According to the principles of programmatic assessment, a valid high-stakes assessment of students' performance should, among other things, be based on multiple data points, which supposedly leads to saturation of information. Saturation of information is reached when an additional data point does not add important information for the assessor. To establish saturation of information, institutions often set minimum requirements for the number of assessment data points to be included in the portfolio. Methods: In this study, we aimed to provide validity evidence for saturation of information by investigating the relationship between the number of data points exceeding the minimum requirements in a portfolio and the consensus between two independent assessors. Data were analyzed using a multiple logistic regression model. Results: The results showed no relation between the number of data points and the consensus. This suggests either that the consensus is predicted by other factors only or, more likely, that the assessors had already reached saturation of information. This study took a first step in investigating saturation of information; further research is necessary to gain in-depth insight into this matter in relation to the complex process of decision-making.
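A minimal sketch of the kind of model described here follows: a logistic regression of assessor consensus on the number of data points exceeding the minimum. The data are simulated with no true effect, mirroring the null finding; the variable names and the single-predictor setup are illustrative assumptions, not the study's multivariable model or dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
extra_data_points = rng.integers(0, 15, size=200)   # data points above the programme minimum
# Simulate "no effect": assessors agree about 80% of the time, regardless of extra data points.
consensus = (rng.random(200) < 0.8).astype(int)

X = sm.add_constant(extra_data_points.astype(float))
model = sm.Logit(consensus, X).fit(disp=False)
print(model.summary())   # the coefficient on extra_data_points should hover near zero
```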


Subjects
Competency-Based Education/statistics & numerical data; Educational Measurement/statistics & numerical data; Clinical Competence; Formative Feedback; Humans
11.
Med Educ; 53(1): 64-75, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30289171

ABSTRACT

CONTEXT: In health professions education, assessment systems are bound to be rife with tensions as they must fulfil formative and summative assessment purposes, be efficient and effective, and meet the needs of learners and education institutes, as well as those of patients and health care organisations. The way we respond to these tensions determines the fate of assessment practices and reform. In this study, we argue that traditional 'fix-the-problem' approaches (i.e. either-or solutions) are generally inadequate and that we need alternative strategies to help us further understand, accept and actually engage with the multiple recurring tensions in assessment programmes. METHODS: Drawing from research in organisation science and health care, we outline how the Polarity Thinking™ model and its 'both-and' approach offer ways to systematically leverage assessment tensions as opportunities to drive improvement, rather than as intractable problems. In reviewing the assessment literature, we highlight and discuss exemplars of specific assessment polarities and tensions in educational settings. Using key concepts and principles of the Polarity Thinking™ model, and two examples of common tensions in assessment design, we describe how the model can be applied in a stepwise approach to the management of key polarities in assessment. DISCUSSION: Assessment polarities and tensions are likely to surface with the continued rise of complexity and change in education and health care organisations. With increasing pressures of accountability in times of stretched resources, assessment tensions and dilemmas will become more pronounced. We propose to add to our repertoire of strategies for managing key dilemmas in education and assessment design through the adoption of the polarity framework. Its 'both-and' approach may advance our efforts to transform assessment systems to meet complex 21st century education, health and health care needs.


Subjects
Delivery of Health Care; Learning; Thinking; Education, Medical; Humans; Models, Organizational
12.
Surg Endosc; 32(12): 4923-4931, 2018 Dec.
Article in English | MEDLINE | ID: mdl-29872946

ABSTRACT

BACKGROUND: The current shift towards competency-based residency training has increased the need for objective assessment of skills. In this study, we developed and validated an assessment tool that measures technical and non-technical competency in transurethral resection of bladder tumour (TURBT). METHODS: The 'Test Objective Competency' (TOCO)-TURBT tool was designed by means of cognitive task analysis (CTA), which included expert consensus. The tool consists of 51 items, divided into 3 phases: preparatory (n = 15), procedural (n = 21), and completion (n = 15). For validation of the TOCO-TURBT tool, 2 TURBT procedures were performed and videotaped by 25 urologists and 51 residents in a simulated setting. The participants' degree of competence was assessed by a panel of eight independent expert urologists using the TOCO-TURBT tool. Each procedure was assessed by two raters. Feasibility, acceptability and content validity were evaluated by means of a quantitative cross-sectional survey. Regression analyses were performed to assess the strength of the relation between experience and test scores (construct validity). Reliability was analysed by generalizability theory. RESULTS: The majority of assessors and urologists indicated the TOCO-TURBT tool to be a valid assessment of competency and would support the implementation of the TOCO-TURBT assessment as a certification method for residents. Construct validity was clearly established for all outcome measures of the procedural phase (all r > 0.5, p < 0.01). Generalizability-theory analysis showed high reliability (coefficient Phi ≥ 0.8) when using the format of two assessors and two cases. CONCLUSIONS: This study provides first evidence that the TOCO-TURBT tool is a feasible, valid and reliable assessment tool for measuring competency in TURBT. The tool has the potential to be used for future certification of competencies for residents and urologists. The methodology of CTA might be valuable in the development of assessment tools in other areas of clinical practice.
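The reported dependability with "two assessors and two cases" comes from a generalizability-theory decision study. The sketch below shows the Phi (absolute) coefficient for a fully crossed person x rater x case design; the variance components are invented placeholders, not the study's estimates, so only the shape of the calculation is illustrated.

```python
def phi_coefficient(var: dict, n_raters: int, n_cases: int) -> float:
    """Dependability (Phi) coefficient for absolute decisions in a crossed p x r x c design."""
    abs_error = (
        var["r"] / n_raters
        + var["c"] / n_cases
        + var["pr"] / n_raters
        + var["pc"] / n_cases
        + var["rc"] / (n_raters * n_cases)
        + var["prc_e"] / (n_raters * n_cases)
    )
    return var["p"] / (var["p"] + abs_error)

# Invented variance components: p = person, r = rater, c = case, plus interactions/residual.
components = {"p": 1.0, "r": 0.05, "c": 0.10, "pr": 0.05, "pc": 0.15, "rc": 0.02, "prc_e": 0.30}
for n_r, n_c in [(1, 1), (2, 2), (3, 2)]:
    print(f"{n_r} rater(s), {n_c} case(s): Phi = {phi_coefficient(components, n_r, n_c):.2f}")
```

With these illustrative components, moving from one rater and one case to two of each lifts Phi from roughly 0.6 to about 0.8, which is the kind of gain a decision study is used to demonstrate.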


Subjects
Clinical Competence/statistics & numerical data; Education, Medical, Graduate/standards; Endoscopy/education; Internship and Residency/methods; Urinary Bladder Neoplasms/surgery; Urologic Surgical Procedures/education; Urologists/education; Certification; Cross-Sectional Studies; Humans; Male; Reproducibility of Results; Urethra
13.
Int J Technol Assess Health Care; 34(2): 218-223, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29656730

ABSTRACT

OBJECTIVES: The aim of this study was to explore the risk assessment tools and criteria used to assess the risk of medical devices in hospitals, and to explore how the risk of a medical device impacts or alters the training of staff. METHODS: Within a broader questionnaire on implementation of a national guideline, we collected quantitative data regarding the types of risk assessment tools used in hospitals and the training of healthcare staff. RESULTS: The response rate for the questionnaire was 81 percent (65 of 80 Dutch hospitals). All hospitals use a risk assessment tool, and the largest cluster (40 percent) use a tool developed internally. The criteria most often used to assess risk are the function of the device (92 percent), the severity of adverse events (88 percent) and the frequency of use (77 percent). Forty-seven of fifty-six hospitals (84 percent) base their training on the risk associated with a medical device. For medium- and high-risk devices, the main method is practical training. As risk increases, the amount and type of training and examination increase. CONCLUSIONS: Dutch hospitals use a wide range of tools to assess the risk of medical devices. These tools are often based on the same criteria: the function of the device, the potential severity of adverse events, and the frequency of use. Furthermore, these tools are used to determine the amount and type of training required for staff. If the risk of a device is higher, the training and examination are more extensive.


Subjects
Equipment and Supplies; Hospital Administration; Technology Assessment, Biomedical/organization & administration; Environment; Equipment Design; Equipment Failure; Humans; Inservice Training; Netherlands; Patient Safety; Risk Assessment
14.
Physiother Can; 70(4): 393-401, 2018.
Article in English | MEDLINE | ID: mdl-30745725

ABSTRACT

Purpose: This study evaluated the impact of a quality improvement programme based on self- and peer assessment to justify nationwide implementation. Method: Four professional networks of physiotherapists in The Netherlands (n = 379) participated in the programme, which consisted of two cycles of online self-assessment and peer assessment using video recordings of client communication and clinical records. Assessment was based on performance indicators that could be scored on a 5-point Likert scale, and online assessment was followed by face-to-face feedback discussions. After cycle 1, participants developed personal learning goals. These goals were analyzed thematically, and goal attainment was measured using a questionnaire. Improvement in performance was tested with multilevel regression analyses, comparing the self-assessment and peer-assessment scores in cycles 1 and 2. Results: In total, 364 (96%) of the participants were active in online self-assessment and peer assessment. However, online activities varied between cycle 1 and cycle 2 and between client communication and recordkeeping. Personal goals addressed client-centred communication (54%), recordkeeping (24%), performance and outcome measurement (15%), and other (7%). Goals were completely attained (29%), partly attained (64%), or not attained at all (7%). Self-assessment and peer-assessment scores improved significantly for both client communication (self-assessment = 11%; peer assessment = 8%) and recordkeeping (self-assessment = 7%; peer assessment = 4%). Conclusions: Self-assessment and peer assessment are effective in enhancing commitment to change and improving clinical performance. Nationwide implementation of the programme is justified. Future studies should address the impact on client outcomes.
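The multilevel comparison of cycle 1 and cycle 2 scores could, in outline, look like the sketch below: a mixed-effects model with a random intercept per physiotherapist. The data, effect size and variable names are simulated assumptions for illustration, not the study's dataset or analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_therapists = 364
therapist = np.repeat(np.arange(n_therapists), 2)      # two assessment cycles per participant
cycle = np.tile([0, 1], n_therapists)                  # 0 = cycle 1, 1 = cycle 2
baseline = rng.normal(3.5, 0.4, n_therapists)          # person-level random intercept (5-point scale)
score = baseline[therapist] + 0.25 * cycle + rng.normal(0, 0.3, 2 * n_therapists)

data = pd.DataFrame({"therapist": therapist, "cycle": cycle, "score": score})
model = smf.mixedlm("score ~ cycle", data, groups=data["therapist"]).fit()
print(model.summary())   # the 'cycle' coefficient estimates the cycle-1 to cycle-2 change
```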



15.
Educ Health (Abingdon); 30(1): 3-10, 2017.
Article in English | MEDLINE | ID: mdl-28707630

ABSTRACT

BACKGROUND: Medical education in Sub-Saharan Africa is in need of reform to increase the number and quality of physicians trained. Curriculum change and innovation in this region, however, face a challenging context that may affect curriculum outcomes. Research on the outcomes of curriculum innovation in Sub-Saharan Africa is scarce. We investigated curriculum outcomes in a Sub-Saharan African context by comparing students' perceived preparedness for practice across three curricula in Mozambique: a conventional curriculum and two innovative curricula. Both innovative curricula used problem-based learning and community-based education. METHODS: We conducted a comparative mixed methods study. We adapted a validated questionnaire on perceived professional competencies and administered it to 5th year students of the three curricula (n = 140). We conducted semi-structured interviews with 5th year students from these curricula (n = 12). Additional contextual information was collected. Statistical and thematic analyses were conducted. RESULTS: Perceived preparedness for practice of students from the conventional curriculum was significantly lower than for students from one innovative curriculum, but significantly higher than for students from the other innovative curriculum. Major human and material resource issues and disorganization impeded the latter group's sense of preparedness. Both innovative curricula, however, stimulated a more holistic approach among students toward patients, as well as an inquiring and independent attitude, which is valuable preparation for Sub-Saharan African healthcare. DISCUSSION: In Sub-Saharan Africa, the risks and benefits of curriculum innovation are high. Positive outcomes add value to local healthcare in terms of doctors' meaningful preparedness for practice, but outcomes can also be negative due to the implementation challenges sometimes found in Sub-Saharan African contexts. Before embarking on innovative curriculum reform, medical schools need to assess their capability and motivation for innovation.


Subjects
Curriculum; Education, Medical, Undergraduate/methods; Problem-Based Learning; Students, Medical/psychology; Adult; Africa South of the Sahara; Clinical Competence; Female; Humans; Male; Mozambique; Organizational Innovation; Schools, Medical/organization & administration; Surveys and Questionnaires; Teaching
16.
BMC Med Educ; 17(1): 73, 2017 Apr 28.
Article in English | MEDLINE | ID: mdl-28454581

ABSTRACT

BACKGROUND: Despite growing evidence of the benefits of including assessment for learning strategies within programmes of assessment, practical implementation of these approaches is often problematic. Organisational culture change is often hindered by personal and collective beliefs which encourage adherence to the existing organisational paradigm. We aimed to explore how these beliefs influenced proposals to redesign a summative assessment culture in order to improve students' use of assessment-related feedback. METHODS: Using the principles of participatory design, a mixed group comprising medical students, clinical teachers and senior faculty members was challenged to develop radical solutions to improve the use of post-assessment feedback. Follow-up interviews were conducted with individual members of the group to explore their personal beliefs about the proposed redesign. Data were analysed using a socio-cultural lens. RESULTS: Proposed changes were dominated by a shared belief in the primacy of the summative assessment paradigm, which prevented radical redesign solutions from being accepted by group members. Participants' prior assessment experiences strongly influenced proposals for change. As participants had largely only experienced a summative assessment culture, they found it difficult to conceptualise radical change in the assessment culture. Although all group members participated, students were less successful at persuading the group to adopt their ideas. Faculty members and clinical teachers often used indirect techniques to close down discussions. The strength of individual beliefs became more apparent in the follow-up interviews. CONCLUSIONS: Naïve epistemologies and prior personal experiences were influential in the assessment redesign but were usually not expressed explicitly in a group setting, perhaps because of cultural conventions of politeness. In order to successfully implement a change in assessment culture, firmly held intuitive beliefs about summative assessment will need to be clearly understood as a first step.


Subjects
Diffusion of Innovation; Educational Measurement/methods; Students, Medical/psychology; Education, Medical, Undergraduate; Formative Feedback; Humans; Interviews as Topic; Qualitative Research
17.
Acad Med; 92(11): 1617-1621, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28403004

ABSTRACT

PURPOSE: In-training evaluation reports (ITERs) are ubiquitous in internal medicine (IM) residency. Written comments can provide a rich data source, yet are often overlooked. This study determined the reliability of using variable amounts of commentary to discriminate between residents. METHOD: ITER comments from two cohorts of PGY-1s in IM at the University of Toronto (graduating 2010 and 2011; n = 46-48) were put into sets containing 15 to 16 residents. Parallel sets were created: one with comments from the full year and one with comments from only the first three assessments. Each set was rank-ordered by four internists external to the program between April 2014 and May 2015 (n = 24). Generalizability analyses and a decision study were performed. RESULTS: For the full year of comments, reliability coefficients averaged across four rankers were G = 0.85 and G = 0.91 for the two cohorts. For a single ranker, G = 0.60 and G = 0.73. Using only the first three assessments, reliabilities remained high at G = 0.66 and G = 0.60 for a single ranker. In a decision study, if two internists ranked the first three assessments, reliability would be G = 0.80 and G = 0.75 for the two cohorts. CONCLUSIONS: Using written comments to discriminate between residents can be extremely reliable even after only several reports are collected. This suggests a way to identify residents early on who may require attention. These findings contribute evidence to support the validity argument for using qualitative data for assessment.
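The relationship between the single-ranker and multi-ranker G coefficients reported here follows the standard decision-study (Spearman-Brown) projection. The short cross-check below closely reproduces the abstract's figures; it is simple arithmetic on the reported values, not a re-analysis of the underlying data.

```python
def projected_g(g_single: float, k_rankers: int) -> float:
    """Reliability of the mean ranking across k rankers, given the single-ranker G."""
    return (k_rankers * g_single) / (1 + (k_rankers - 1) * g_single)

print(round(projected_g(0.60, 4), 2))  # full year, one cohort: ~0.86 (reported 0.85)
print(round(projected_g(0.73, 4), 2))  # full year, other cohort: ~0.92 (reported 0.91)
print(round(projected_g(0.66, 2), 2))  # first three reports, two rankers: ~0.80
print(round(projected_g(0.60, 2), 2))  # first three reports, two rankers: 0.75
```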


Subjects
Clinical Competence; Internal Medicine/education; Internship and Residency; Narration; Educational Measurement; Humans; Reproducibility of Results
18.
Med Teach; 39(5): 476-485, 2017 May.
Article in English | MEDLINE | ID: mdl-28281369

ABSTRACT

BACKGROUND: It remains unclear which item format best suits the assessment of clinical reasoning: context-rich single best answer questions (crSBAs) or key-feature problems (KFPs). This study compared KFPs and crSBAs with respect to students' acceptance, educational impact, and psychometric characteristics when used in a summative end-of-clinical-clerkship pediatric exam. METHODS: Fifth-year medical students (n = 377) took a computer-based exam that included 6-9 KFPs and 9-20 crSBAs assessing their clinical reasoning skills, in addition to an objective structured clinical exam (OSCE) that assessed their clinical skills. Each KFP consisted of a case vignette and three key features using a "long-menu" question format. We explored students' perceptions of the KFPs and crSBAs in eight focus groups and analyzed statistical data from 11 exams. RESULTS: Compared to crSBAs, KFPs were perceived as more realistic and more difficult, provided a greater stimulus for the intense study of clinical reasoning, and were generally well accepted. The statistical analysis revealed no difference in difficulty, but KFPs proved more reliable and efficient than crSBAs. The correlation between the two formats was high, while KFPs correlated more closely with the OSCE score. CONCLUSIONS: KFPs with long-menu questions seem to bring about a positive educational effect without psychometric drawbacks.


Subjects
Clinical Clerkship; Clinical Competence; Educational Measurement/methods; Humans; Problem Solving; Students, Medical
19.
BMJ Open; 7(2): e013726, 2017 Feb 10.
Article in English | MEDLINE | ID: mdl-28188156

ABSTRACT

OBJECTIVES: To evaluate the feasibility of a quality improvement programme aimed at enhancing the client-centredness, effectiveness and transparency of physiotherapy services by addressing three feasibility domains: (1) acceptability of the programme design, (2) appropriateness of the implementation strategy and (3) impact on quality improvement. DESIGN: Mixed methods study. PARTICIPANTS AND SETTING: 64 physiotherapists working in primary care, organised in a network of communities of practice in the Netherlands. METHODS: The programme contained: (1) two cycles of online self-assessment and peer assessment (PA) of clinical performance, using client records and video recordings of client communication, followed by face-to-face group discussions, and (2) a clinical audit assessing organisational performance. Assessment was based on predefined performance indicators scored on a 5-point Likert scale. Discussions addressed performance standards and scoring differences. All feasibility domains were evaluated qualitatively with two focus groups and 10 in-depth interviews. In addition, we evaluated the impact on quality improvement quantitatively by comparing self-assessment and PA scores in cycles 1 and 2. RESULTS: We identified critical success features relevant to programme development and implementation, such as clarifying expectations at baseline, training in PA skills, prolonged engagement with video assessment and competent group coaches. Self-reported impact on quality improvement included awareness of clinical and organisational performance, improved evidence-based practice and client-centredness, and increased motivation to self-direct quality improvement. Differences between self-scores and peer scores on performance indicators were not significant. Between cycles 1 and 2, scores for record keeping improved significantly, but scores for client communication did not. CONCLUSIONS: This study demonstrated that bottom-up initiatives to improve healthcare quality can be effective. The results justify ongoing evaluation to inform nationwide implementation, provided the critical success features are addressed. Further research is necessary to explore the sustainability of the results and the impact on client outcomes in a full-scale study.


Subjects
Clinical Audit; Peer Review/methods; Physical Therapy Modalities/standards; Primary Health Care; Quality of Health Care; Adult; Feasibility Studies; Female; Humans; Male; Middle Aged; Netherlands; Physical Therapists; Quality Improvement
20.
Adv Health Sci Educ Theory Pract; 22(5): 1213-1243, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28155004

ABSTRACT

Workplace-Based Assessment (WBA) plays a pivotal role in present-day competency-based medical curricula. Validity in WBA depends mainly on how stakeholders (e.g. clinical supervisors and learners) use the assessments, rather than on the intrinsic qualities of instruments and methods. Current research on assessment in clinical contexts suggests that the variable behaviours of both assessors and learners during performance assessment may well reflect their respective beliefs about and perspectives on WBA. We therefore performed a Q methodological study to explore the perspectives underlying stakeholders' behaviours in WBA in a postgraduate medical training programme. Five different perspectives on performance assessment were extracted: Agency, Mutuality, Objectivity, Adaptivity and Accountability. These perspectives reflect both differences and similarities in stakeholder perceptions and preferences regarding the utility of WBA. In comparing and contrasting the various perspectives, we identified two key areas of disagreement, specifically 'the locus of regulation of learning' (i.e., self-regulated versus externally regulated learning) and 'the extent to which assessment should be standardised' (i.e., tailored versus standardised assessment). Differing perspectives may variously affect stakeholders' acceptance and use of assessment programmes and, consequently, their effectiveness. Continuous interaction between all stakeholders is essential to monitor, adapt and improve assessment practices and to stimulate the development of a shared mental model. A better understanding of underlying stakeholder perspectives could be an important step in bridging the gap between psychometric and socio-constructivist approaches to WBA.
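For readers unfamiliar with Q methodology, the sketch below shows its core computation: whole Q-sorts (persons) are correlated with each other and factor-analysed, so the extracted factors represent shared perspectives rather than item clusters. This is a generic illustration on random sorts; the study's actual extraction and rotation choices (number of factors, rotation method) are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
n_statements, n_persons = 40, 25
# Each column is one participant's forced ranking (Q-sort) of the statements.
sorts = np.argsort(rng.random((n_statements, n_persons)), axis=0).astype(float)

# 1. Correlate persons with each other (columns), not statements.
person_corr = np.corrcoef(sorts, rowvar=False)

# 2. Extract principal components of the person-by-person correlation matrix.
eigvals, eigvecs = np.linalg.eigh(person_corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# 3. Retain factors with eigenvalue > 1 (a common, if debated, heuristic);
#    the loadings indicate which participants share a perspective.
n_factors = int(np.sum(eigvals > 1))
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
print(f"{n_factors} candidate perspectives; loading matrix shape {loadings.shape}")
```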


Subjects
Employee Performance Appraisal; Clinical Competence/standards; Educational Measurement; Employee Performance Appraisal/methods; General Practice/education; General Practice/standards; Humans; Netherlands; Workplace