ABSTRACT
Multiple-choice questions (MCQs) suffer from cueing, problems with item quality, and an emphasis on testing factual knowledge. This study presents a novel multimodal test containing alternative item types in a computer-based assessment (CBA) format, designated the Proxy-CBA. The Proxy-CBA was compared to a standard MCQ-CBA with regard to validity, reliability, standard error of measurement (SEM), and cognitive load, using a quasi-experimental crossover design. Biomedical students were randomized into two groups to sit a 65-item formative exam starting with the MCQ-CBA followed by the Proxy-CBA (group 1, n = 38), or the reverse (group 2, n = 35). Subsequently, a questionnaire on perceived cognitive load was administered and answered by 71 participants. Both CBA formats were analyzed using parameters from Classical Test Theory and the Rasch model. Compared to the MCQ-CBA, the Proxy-CBA yielded lower raw scores (p < 0.001, η2 = 0.276), higher reliability estimates (p < 0.001, η2 = 0.498), lower SEM estimates (p < 0.001, η2 = 0.807), and lower theta ability scores (p < 0.001, η2 = 0.288). The questionnaire revealed no significant differences between the two CBA tests regarding perceived cognitive load. Compared to the MCQ-CBA, the Proxy-CBA showed increased reliability and a higher degree of validity with similar cognitive load, suggesting its utility as an alternative assessment format.
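A minimal Python sketch, on simulated dichotomous item responses rather than the study data, of the two Classical Test Theory quantities compared here: Cronbach's alpha as the reliability estimate and the classical standard error of measurement derived from it. All names and numbers below are illustrative assumptions.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def sem(items: np.ndarray) -> float:
    """Classical SEM: SD of total scores times sqrt(1 - reliability)."""
    total_sd = items.sum(axis=1).std(ddof=1)
    return total_sd * np.sqrt(1 - cronbach_alpha(items))

# Hypothetical 0/1 responses: 73 examinees x 65 items (mirrors the exam length only).
rng = np.random.default_rng(0)
ability = rng.normal(size=(73, 1))
difficulty = rng.normal(size=(1, 65))
responses = (rng.random((73, 65)) < 1 / (1 + np.exp(-(ability - difficulty)))).astype(int)

print(f"alpha = {cronbach_alpha(responses):.2f}, SEM = {sem(responses):.2f}")
```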
Subject(s)
Educational Measurement, Medical Students, Humans, Reproducibility of Results, Surveys and Questionnaires, Computers
ABSTRACT
BACKGROUND: Programmatic assessment is increasingly being implemented within competency-based health professions education. In this approach, a multitude of low-stakes assessment activities are aggregated into a holistic high-stakes decision on the student's performance. High-stakes decisions need to be of high quality. Part of this quality is whether an examiner perceives saturation of information when making a holistic decision. The purpose of this study was to explore the influence of narrative information on the perception of saturation of information during the interpretative process of high-stakes decision-making. METHODS: In this mixed-method intervention study, the quality of the recorded narrative information (i.e., feedback and reflection) was manipulated within multiple portfolios to investigate its influence on 1) the perception of saturation of information and 2) the examiner's interpretative approach in making a high-stakes decision. Data were collected through surveys, screen recordings of the portfolio assessments, and semi-structured interviews. Descriptive statistics and template analysis were applied to analyze the data. RESULTS: The examiners perceived saturation of information less frequently in the portfolios with low-quality narrative feedback. Additionally, they mentioned consistency of information as a factor that influenced their perception of saturation of information. Even though examiners generally had their own idiosyncratic approach to assessing a portfolio, variations arose in response to certain triggers, such as noticeable deviations in the student's performance and the quality of the narrative feedback. CONCLUSION: The perception of saturation of information seemed to be influenced by the quality of the narrative feedback and, to a lesser extent, by the quality of reflection. These results emphasize the importance of high-quality narrative feedback for making robust decisions on portfolios that are expected to be more difficult to assess. Furthermore, within these "difficult" portfolios, examiners adapted their interpretative process, reacting to the intervention and other triggers by means of an iterative and responsive approach.
Subject(s)
Competency-Based Education, Narration, Competency-Based Education/methods, Feedback, Humans, Surveys and Questionnaires
ABSTRACT
BACKGROUND: In medical residency, performance observations are considered an important strategy to monitor competence development, provide feedback and safeguard patient safety. The aim of this study was to gain insight into whether and how supervisor-resident dyads build a working repertoire regarding the use of observations, and how they discuss and align goals and approaches to observation in particular. METHODS: We used a qualitative, social constructivist approach to explore if and how supervisory dyads work towards alignment of goals and preferred approaches to performance observations. We conducted semi-structured interviews with supervisor-resident dyads and performed a template analysis of the data thus obtained. RESULTS: The supervisory dyads did not communicate frequently about the use of observations, except at the start of training or when triggered by internal or external factors. Their working repertoire regarding the use of observations seemed to be driven primarily by patient safety goals and institutional assessment requirements rather than by the provision of developmental feedback. Although intended as formative, the institutional test was perceived as summative by supervisors and residents, and led to teaching to the test rather than educating for purposes of competence development. CONCLUSIONS: To unlock the full educational potential of performance observations, and to foster the development of an educational alliance, it is essential that supervisory dyads and the training institute communicate clearly about these observations and about the role of assessment practices of and for learning, in order to align their goals and respective approaches.
Subject(s)
General Practice, Internship and Residency, Communication, Family Practice, Humans, Workplace
ABSTRACT
INTRODUCTION: In the Ottawa 2018 Consensus framework for good assessment, a set of criteria was presented for systems of assessment. Currently, programmatic assessment is being established in an increasing number of programmes. In this Ottawa 2020 consensus statement for programmatic assessment, insights from practice and research are used to define the principles of programmatic assessment. METHODS: For fifteen programmes in health professions education affiliated with members of an expert group (n = 20), an inventory was completed on the perceived components, rationale, and importance of a programmatic assessment design. Input from attendees of a programmatic assessment workshop and symposium at the 2020 Ottawa conference was included. The outcome is discussed in the light of current theory and research. RESULTS AND DISCUSSION: Twelve principles are presented that are considered important and recognisable facets of programmatic assessment. Overall, these principles were used in curriculum and assessment design, albeit with a range of approaches and rigor, suggesting that programmatic assessment is an achievable education and assessment model, embedded in both practice and research. Sharing knowledge on how programmatic assessment is being operationalized may help support educators in charting their own implementation journey of programmatic assessment in their respective programmes.
Subject(s)
Curriculum, Consensus, Humans
ABSTRACT
The way the quality of assessment has been perceived and assured has changed considerably over the past five decades. Originally, assessment was seen mainly as a measurement problem aimed at telling people apart: the competent from the not competent. Logically, reproducibility or reliability and construct validity were seen as necessary and sufficient for assessment quality, and the role of human judgement was minimised. Later, assessment moved back into the authentic workplace through various workplace-based assessment (WBA) methods. Although originally approached from the same measurement framework, WBA and other assessments gradually became assessment processes that included or embraced human judgement, albeit grounded in good support and assessment expertise. Currently, assessment is treated as a whole-system problem in which competence is evaluated from an integrated rather than a reductionist perspective. Current research therefore focuses on how to support and improve human judgement, how to triangulate assessment information meaningfully, and how to construct fairness, credibility and defensibility from a systems perspective. However, given the rapid changes in society, education and healthcare, yet another evolution in our thinking about good assessment is likely to be lurking around the corner.
Subject(s)
Medical Education/history, Educational Measurement/history, Research/history, Clinical Competence/standards, Medical Education/methods, Educational Measurement/methods, 20th Century History, Humans, Judgment, Psychometrics, Reproducibility of Results, Research/organization & administration, Workplace/standards
ABSTRACT
BACKGROUND: Self-directed learning (SDL) is an appropriate and preferred learning process to prepare students for lifelong learning in their professions and to keep them up to date. The purpose of this study was to explore the SDL experiences of preclinical students following a hybrid curriculum in Ethiopia and how several learning activities in the curriculum supported their SDL. A mixed-method research design was employed. METHODS: Quantitative data were collected using a self-administered questionnaire of 80 items measuring students' perceptions of their SDL capability and exploring students' views about the influence of curriculum components on their SDL. In addition, two focus group discussions, each with eight participants from year-1 and year-2 students, were conducted. The quantitative data were analyzed using SPSS. The focus group discussions were reviewed, coded, and then thematically analyzed. RESULTS: Our study showed a significantly higher SDL score for year-2 students compared with year-1 students (p = 0.002). Both year-1 and year-2 students rated PBL tutorial discussions and tutors as having a high influence on their individual learning, whereas other curricular components such as lectures and tests had a low influence on their SDL ability. PBL tutorial discussion and module objectives showed strong correlations with students' SDL scores, r = 0.718 and r = 0.648 (p < 0.01), respectively. In addition, PBL tutorial discussion was strongly correlated with tutors (r = 0.599, p < 0.01) and module objectives (r = 0.574, p < 0.01). Assessment was highly correlated with lectures (r = 0.595, p < 0.01). Findings from the qualitative data showed that certain curricular components played a role in promoting students' SDL; tutorials in which problems were analyzed played a major role in students' self-directed learning abilities. CONCLUSIONS: Although the study implied that components of the hybrid curriculum, mainly PBL, could encourage preclinical students' self-directed learning, the curriculum is still not free from a teacher-centred culture, as the majority of teachers still hold considerable power in deciding the learning process. A further longitudinal study is needed to verify the actual level and ability of medical students' SDL.
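A brief sketch of the kind of analysis reported above, using simulated questionnaire totals rather than the study data: an independent-samples t-test comparing year-1 and year-2 SDL scores, and a Pearson correlation between a curriculum-component rating and SDL scores. All values and variable names are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical questionnaire totals (not the study data).
sdl_year1 = rng.normal(60, 8, size=40)
sdl_year2 = rng.normal(65, 8, size=40)

# Year-1 vs year-2 comparison of SDL scores (independent-samples t-test).
t, p = stats.ttest_ind(sdl_year1, sdl_year2)
print(f"t = {t:.2f}, p = {p:.3f}")

# Correlation of a curriculum-component rating (e.g. PBL tutorials) with SDL.
sdl_all = np.concatenate([sdl_year1, sdl_year2])
pbl_rating = 0.7 * sdl_all + rng.normal(0, 5, size=sdl_all.size)
r, p_r = stats.pearsonr(pbl_rating, sdl_all)
print(f"Pearson r = {r:.2f}, p = {p_r:.3f}")
```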
Subject(s)
Curriculum, Undergraduate Medical Education/organization & administration, Self-Directed Learning as Topic, Medical Students/psychology, Academic Performance, Adult, Cross-Sectional Studies, Ethiopia, Female, Focus Groups, Humans, Male, Surveys and Questionnaires, Young Adult
ABSTRACT
BACKGROUND: Direct observation of clinical task performance plays a pivotal role in competency-based medical education. Although formal guidelines require supervisors to engage in direct observations, research demonstrates that trainees are infrequently observed. Supervisors may not only experience practical and socio-cultural barriers to direct observations in healthcare settings, they may also question their usefulness or have low perceived self-efficacy in performing direct observations. A better understanding of how these multiple factors interact to influence supervisors' intention to perform direct observations may help us to more effectively implement the aforementioned guidelines and increase the frequency of direct observations. METHODS: We conducted an exploratory quantitative study, using the Theory of Planned Behaviour (TPB) as our theoretical framework. In applying the TPB, we transfer a psychological theory to medical education to gain insight into the influence of cognitive and emotional processes on intentions to use direct observations in workplace-based learning and assessment. We developed an instrument to investigate supervisors' intention to perform direct observations. The relationships between the TPB measures of our questionnaire were explored by computing bivariate correlations using Pearson's r. Hierarchical regression analysis was performed to assess the impact of the respective TPB measures as predictors of the intention to perform direct observations. RESULTS: In our study, 82 GP supervisors completed the TPB questionnaire. We found that supervisors had a positive attitude towards direct observations. Our TPB model explained 45% of the variance in supervisors' intentions to perform them. Normative beliefs and past behaviour were significant determinants of this intention. CONCLUSION: Our study suggests that supervisors use their past experiences to form intentions to perform direct observations in a careful, thoughtful manner and, in doing so, also take into consideration the preferences of the learner and other stakeholders potentially engaged in direct observations. These findings have potential implications for research into work-based assessments and the development of training interventions to foster a shared mental model on the use of direct observations.
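The hierarchical regression described above can be sketched as a blockwise OLS model in which the change in R² indicates the added value of past behaviour over the core TPB predictors. The variable names and simulated data below are illustrative assumptions, not the study instrument.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 82  # same sample size as the study; the data themselves are simulated

# Hypothetical TPB measures (illustrative names, not the study questionnaire).
df = pd.DataFrame({
    "attitude":          rng.normal(size=n),
    "subjective_norm":   rng.normal(size=n),
    "perceived_control": rng.normal(size=n),
    "past_behaviour":    rng.normal(size=n),
})
df["intention"] = (0.2 * df["attitude"] + 0.4 * df["subjective_norm"]
                   + 0.3 * df["past_behaviour"] + rng.normal(scale=0.8, size=n))

# Block 1: core TPB predictors; Block 2: add past behaviour.
block1 = sm.OLS(df["intention"],
                sm.add_constant(df[["attitude", "subjective_norm", "perceived_control"]])).fit()
block2 = sm.OLS(df["intention"],
                sm.add_constant(df[["attitude", "subjective_norm",
                                    "perceived_control", "past_behaviour"]])).fit()

print(f"R2 block 1 = {block1.rsquared:.2f}")
print(f"R2 block 2 = {block2.rsquared:.2f} (delta = {block2.rsquared - block1.rsquared:.2f})")
```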
Subject(s)
Clinical Competence/standards, Competency-Based Education/standards, Employee Performance Appraisal/standards, Internship and Residency/standards, Interprofessional Relations, Adult, Educational Measurement/standards, Female, Humans, Male, Middle Aged, Surveys and Questionnaires
ABSTRACT
CONTEXT: In health professions education, assessment systems are bound to be rife with tensions as they must fulfil formative and summative assessment purposes, be efficient and effective, and meet the needs of learners and education institutes, as well as those of patients and health care organisations. The way we respond to these tensions determines the fate of assessment practices and reform. In this study, we argue that traditional 'fix-the-problem' approaches (i.e. either-or solutions) are generally inadequate and that we need alternative strategies to help us further understand, accept and actually engage with the multiple recurring tensions in assessment programmes. METHODS: Drawing from research in organisation science and health care, we outline how the Polarity Thinking™ model and its 'both-and' approach offer ways to systematically leverage assessment tensions as opportunities to drive improvement, rather than as intractable problems. In reviewing the assessment literature, we highlight and discuss exemplars of specific assessment polarities and tensions in educational settings. Using key concepts and principles of the Polarity Thinking™ model, and two examples of common tensions in assessment design, we describe how the model can be applied in a stepwise approach to the management of key polarities in assessment. DISCUSSION: Assessment polarities and tensions are likely to surface with the continued rise of complexity and change in education and health care organisations. With increasing pressures of accountability in times of stretched resources, assessment tensions and dilemmas will become more pronounced. We propose to add to our repertoire of strategies for managing key dilemmas in education and assessment design through the adoption of the polarity framework. Its 'both-and' approach may advance our efforts to transform assessment systems to meet complex 21st century education, health and health care needs.
Subject(s)
Delivery of Health Care, Learning, Thinking, Medical Education, Humans, Organizational Models
ABSTRACT
Arguably, constructive alignment has been the major challenge for assessment in the context of problem-based learning (PBL). PBL focuses on promoting abilities such as clinical reasoning, team skills and metacognition. PBL also aims to foster self-directed learning and deep learning as opposed to rote learning. This has incentivized researchers in assessment to find possible solutions. Originally, these solutions were sought in developing the right instruments to measure these PBL-related skills. The search for these instruments was accelerated by the emergence of competency-based education. With competency-based education, assessment moved away from purely standardized testing, relying more heavily on professional judgment of complex skills. Valuable lessons have been learned that are directly relevant for assessment in PBL. Later, solutions were sought in the development of new assessment strategies, initially again with individual instruments such as progress testing, but later through a more holistic approach to the assessment program as a whole. Programmatic assessment is such an integral approach to assessment. It focuses on optimizing learning through assessment, while at the same time gathering rich information that can be used for rigorous decision-making about learner progression. Programmatic assessment comes very close to achieving the desired constructive alignment with PBL, but its wide adoption, just like that of PBL, will take many years.
Subject(s)
Problem-Based Learning, Program Evaluation, Competency-Based Education, Medical Education
ABSTRACT
Feedback in medical education has traditionally showcased the techniques and skills of giving feedback, and models used in staff development have focused on feedback providers (teachers) rather than receivers (learners). More recent definitions have questioned this approach, arguing that the impact of feedback lies in learner acceptance and assimilation of feedback, with improvement in practice and professional growth. Over the last decade, research findings have emphasized that feedback conversations are complex interpersonal interactions influenced by a multitude of sociocultural factors. However, feedback culture is a concept that is challenging to define, and thus strategies to enhance that culture are difficult to pin down. In this twelve tips paper, we have attempted to define the elements that constitute a feedback culture from four different perspectives and describe distinct strategies that can be used to foster a learning culture with a growth mind-set.
Subject(s)
Communication, Medical Education/organization & administration, Formative Feedback, Interpersonal Relations, Goals, Humans, Learning, Organizational Culture, Professional Role, Qualitative Research, Self Efficacy, Self-Assessment (Psychology)
ABSTRACT
This AMEE guide provides a framework and practical strategies for teachers, learners and institutions to promote meaningful feedback conversations that emphasise performance improvement and professional growth. Recommended strategies are based on recent feedback research and literature, which emphasise the sociocultural nature of these complex interactions. We use key concepts from three theories as the underpinnings of the recommended strategies: sociocultural theory, politeness theory and self-determination theory. We view the content and impact of feedback conversations from the perspective of learners, teachers and institutions, always focussing on learner growth. The guide emphasises the role of teachers in forming educational alliances with their learners, setting a safe learning climate, fostering self-awareness about their performance, engaging with learners in informed self-assessment and reflection, and co-creating the learning environment and learning opportunities with their learners. We highlight the role of institutions in enhancing the feedback culture by encouraging a growth mind-set and a learning goal-orientation. Practical advice is provided on techniques and strategies that learners, teachers and institutions can apply to foster all of these elements effectively. Finally, throughout the guide we highlight the critical importance of congruence between the three levels of culture: unwritten values, espoused values and day-to-day behaviours.
Subject(s)
Medical Faculty/psychology, Feedback, Interprofessional Relations, Self Efficacy, Medical Students/psychology, Guidelines as Topic, Humans, Learning, Motivation, Self Concept
ABSTRACT
Purpose: According to the principles of programmatic assessment, a valid high-stakes assessment of students' performance should, amongst other things, be based on multiple data points, which is supposed to lead to saturation of information. Saturation of information is reached when an additional data point no longer adds important information for the assessor. In establishing saturation of information, institutions often set minimum requirements for the number of assessment data points to be included in the portfolio. Methods: In this study, we aimed to provide validity evidence for saturation of information by investigating the relationship between the number of data points exceeding the minimum requirements in a portfolio and the consensus between two independent assessors. Data were analyzed using a multiple logistic regression model. Results: The results showed no relation between the number of data points and the consensus. This suggests either that consensus is predicted by other factors only or, more likely, that assessors had already reached saturation of information. This study took a first step in investigating saturation of information; further research is necessary to gain in-depth insight into this matter in relation to the complex process of decision-making.
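A hedged sketch of the kind of multiple logistic regression described: a binary assessor-consensus indicator regressed on the number of data points exceeding the minimum requirement, plus one illustrative covariate. The data and variable names are simulated assumptions, not the study dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200  # hypothetical number of portfolios

df = pd.DataFrame({
    # data points in the portfolio beyond the minimum requirement (illustrative)
    "extra_data_points": rng.integers(0, 15, size=n),
    # an illustrative covariate, e.g. standardised portfolio word count
    "narrative_volume": rng.normal(size=n),
})
# Simulate consensus between two assessors as unrelated to extra data points,
# mirroring the "no relation" finding only in spirit.
df["consensus"] = rng.integers(0, 2, size=n)

model = sm.Logit(df["consensus"],
                 sm.add_constant(df[["extra_data_points", "narrative_volume"]])).fit(disp=0)
print(model.summary())
```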
Subject(s)
Competency-Based Education/statistics & numerical data, Educational Measurement/statistics & numerical data, Clinical Competence, Formative Feedback, Humans
ABSTRACT
BACKGROUND: The current shift towards competency-based residency training has increased the need for objective assessment of skills. In this study, we developed and validated an assessment tool that measures technical and non-technical competency in transurethral resection of bladder tumour (TURBT). METHODS: The 'Test Objective Competency' (TOCO)-TURBT tool was designed by means of cognitive task analysis (CTA), which included expert consensus. The tool consists of 51 items, divided into 3 phases: preparatory (n = 15), procedural (n = 21), and completion (n = 15). To validate the TOCO-TURBT tool, 2 TURBT procedures were performed and videotaped by 25 urologists and 51 residents in a simulated setting. The participants' degree of competence was assessed by a panel of eight independent expert urologists using the TOCO-TURBT tool. Each procedure was assessed by two raters. Feasibility, acceptability and content validity were evaluated by means of a quantitative cross-sectional survey. Regression analyses were performed to assess the strength of the relation between experience and test scores (construct validity). Reliability was analysed using generalizability theory. RESULTS: The majority of assessors and urologists considered the TOCO-TURBT tool to be a valid assessment of competency and would support the implementation of the TOCO-TURBT assessment as a certification method for residents. Construct validity was clearly established for all outcome measures of the procedural phase (all r > 0.5, p < 0.01). Generalizability-theory analysis showed high reliability (coefficient Phi ≥ 0.8) when using the format of two assessors and two cases. CONCLUSIONS: This study provides initial evidence that the TOCO-TURBT tool is a feasible, valid and reliable assessment tool for measuring competency in TURBT. The tool has the potential to be used for future certification of competencies for residents and urologists. The methodology of CTA might be valuable in the development of assessment tools in other areas of clinical practice.
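The reported Phi coefficient comes from generalizability theory. Below is a simplified, single-facet (candidates x raters) sketch of how an absolute (Phi) coefficient can be computed from ANOVA mean squares; the actual study used a two-facet design with assessors and cases, so this is an illustration of the computation only, on simulated ratings.

```python
import numpy as np

def phi_coefficient(scores: np.ndarray, n_raters_decision: int) -> float:
    """Absolute (Phi) generalizability coefficient for a persons x raters design."""
    n_p, n_r = scores.shape
    grand = scores.mean()

    ss_p = n_r * ((scores.mean(axis=1) - grand) ** 2).sum()
    ss_r = n_p * ((scores.mean(axis=0) - grand) ** 2).sum()
    ss_tot = ((scores - grand) ** 2).sum()
    ms_p = ss_p / (n_p - 1)
    ms_r = ss_r / (n_r - 1)
    ms_res = (ss_tot - ss_p - ss_r) / ((n_p - 1) * (n_r - 1))

    var_p = max((ms_p - ms_res) / n_r, 0.0)   # person (true-score) variance
    var_r = max((ms_r - ms_res) / n_p, 0.0)   # rater severity variance
    var_res = ms_res                          # residual / interaction variance

    return var_p / (var_p + (var_r + var_res) / n_raters_decision)

# Hypothetical ratings: 20 candidates scored by 2 raters on a 0-100 scale.
rng = np.random.default_rng(4)
true_score = rng.normal(70, 10, size=(20, 1))
ratings = true_score + rng.normal(0, 5, size=(20, 2))

print(f"Phi with two raters: {phi_coefficient(ratings, n_raters_decision=2):.2f}")
```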
Subject(s)
Clinical Competence/statistics & numerical data, Graduate Medical Education/standards, Endoscopy/education, Internship and Residency/methods, Urinary Bladder Neoplasms/surgery, Urologic Surgical Procedures/education, Urologists/education, Certification, Cross-Sectional Studies, Humans, Male, Reproducibility of Results, Urethra
ABSTRACT
WHERE DO WE STAND NOW?: In the 30 years that have passed since The Edinburgh Declaration on Medical Education, we have made tremendous progress in research on fostering 'self-directed and independent study' as propagated in this declaration, one prime example being research carried out on problem-based learning. However, a large portion of medical education happens outside of classrooms, in authentic clinical contexts. Therefore, this article discusses recent developments in research on fostering active learning in clinical contexts. SELF-REGULATED, LIFELONG LEARNING IN MEDICAL EDUCATION: Clinical contexts are much more complex and flexible than classrooms, and therefore require a modified approach to fostering active learning. Recent efforts have increasingly focused on understanding the more complex subject of supporting active learning in clinical contexts. One way of doing this is by using theory regarding self-regulated learning (SRL), as well as situated learning, workplace affordances, self-determination theory and achievement goal theory. Combining these different perspectives provides a holistic view of active learning in clinical contexts. ENTRY TO PRACTICE, VOCATIONAL TRAINING AND CONTINUING PROFESSIONAL DEVELOPMENT: Research on SRL in clinical contexts has mostly focused on the undergraduate setting, showing that active learning in clinical contexts requires not only proficiency in metacognition and SRL, but also in reactive, opportunistic learning. These studies have also made us aware of the large influence one's social environment has on SRL, the importance of professional relationships for learners, and the role of identity development in learning in clinical contexts. Additionally, research regarding postgraduate lifelong learning highlights the importance of learners interacting about learning in clinical contexts, as well as the difficulties that clinical contexts may pose for lifelong learning. However, stimulating self-regulated learning in undergraduate medical education may also make postgraduate lifelong learning easier for learners in clinical contexts.
Subject(s)
Clinical Competence, Continuing Medical Education/methods, Goals, Problem-Based Learning/methods, Achievement, Humans, Educational Models, Medical Students
ABSTRACT
CONTEXT: Single-best-answer questions (SBAQs) have been widely used to test knowledge because they are easy to mark and demonstrate high reliability. However, SBAQs have been criticised for being subject to cueing. OBJECTIVES: We used a novel assessment tool that facilitates efficient marking of open-ended very-short-answer questions (VSAQs). We compared VSAQs with SBAQs with regard to reliability, discrimination and student performance, and evaluated the acceptability of VSAQs. METHODS: Medical students were randomised to sit a 60-question assessment administered in either VSAQ and then SBAQ format (Group 1, n = 155) or the reverse (Group 2, n = 144). The VSAQs were delivered on a tablet; responses were computer-marked and subsequently reviewed by two examiners. The standard error of measurement (SEM) across the ability spectrum was estimated using item response theory. RESULTS: The review of machine-marked questions took an average of 1 minute, 36 seconds per question for all students. The VSAQs had high reliability (alpha: 0.91), a significantly lower SEM than the SBAQs (p < 0.001) and higher mean item-total point biserial correlations (p < 0.001). The VSAQ scores were significantly lower than the SBAQ scores (p < 0.001). The difference in scores between VSAQs and SBAQs was attenuated in Group 2. Although 80.4% of students found the VSAQs more difficult, 69.2% found them more authentic. CONCLUSIONS: The VSAQ format demonstrated high reliability and discrimination and items were perceived as more authentic. The SBAQ format was associated with significant cueing. The present results suggest the VSAQ format has a higher degree of validity.
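A small sketch of the item-discrimination statistic reported above: the corrected item-total point-biserial correlation, i.e. each dichotomously scored item correlated with the total of the remaining items. The response matrix below is simulated and only mirrors the 60-question test length.

```python
import numpy as np

def corrected_item_total_pbs(items: np.ndarray) -> np.ndarray:
    """Corrected item-total point-biserial: each item vs the sum of the other items."""
    n_items = items.shape[1]
    totals = items.sum(axis=1)
    pbs = np.empty(n_items)
    for j in range(n_items):
        rest = totals - items[:, j]               # exclude the item itself
        pbs[j] = np.corrcoef(items[:, j], rest)[0, 1]
    return pbs

# Hypothetical 0/1 marks for 60 questions answered by 300 examinees.
rng = np.random.default_rng(5)
ability = rng.normal(size=(300, 1))
difficulty = rng.normal(size=(1, 60))
marks = (rng.random((300, 60)) < 1 / (1 + np.exp(-(ability - difficulty)))).astype(int)

print(f"mean item-total point-biserial = {corrected_item_total_pbs(marks).mean():.2f}")
```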
Subject(s)
Clinical Competence/standards, Educational Measurement/methods, Reproducibility of Results, Cues (Psychology), Undergraduate Medical Education, Educational Measurement/standards, Female, Humans, Male, Medical Students, Surveys and Questionnaires
ABSTRACT
OBJECTIVES: The aim of this study was to explore the risk assessment tools and criteria used to assess the risk of medical devices in hospitals, and to explore how the assessed risk of a medical device impacts or alters the training of staff. METHODS: Within a broader questionnaire on the implementation of a national guideline, we collected quantitative data regarding the types of risk assessment tools used in hospitals and the training of healthcare staff. RESULTS: The response rate for the questionnaire was 81 percent (65 of 80 Dutch hospitals). All hospitals use a risk assessment tool, and the largest cluster (40 percent) uses a tool developed internally. The criteria used most often to assess risk are the function of the device (92 percent), the severity of adverse events (88 percent), and the frequency of use (77 percent). Forty-seven of fifty-six hospitals (84 percent) base their training on the risk associated with a medical device. For medium- and high-risk devices, the main method is practical training. As risk increases, the amount and type of training and examination increase. CONCLUSIONS: Dutch hospitals use a wide range of tools to assess the risk of medical devices. These tools are often based on the same criteria: the function of the device, the potential severity of adverse events, and the frequency of use. Furthermore, these tools are used to determine the amount and type of training required for staff. If the risk of a device is higher, the training and examination are more extensive.
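As an illustration only of the mechanism the respondents describe (criteria feed a risk class, which in turn sets the training requirement), the sketch below encodes a hypothetical scoring rubric; the thresholds, categories and device values are assumptions, not taken from the guideline or the survey.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    function_criticality: int    # 1 (supportive) .. 3 (life-sustaining)
    adverse_event_severity: int  # 1 (minor) .. 3 (death or serious harm)
    frequency_of_use: int        # 1 (rare) .. 3 (daily)

def risk_class(d: Device) -> str:
    """Map the three criteria named in the study to a hypothetical risk class."""
    score = d.function_criticality + d.adverse_event_severity + d.frequency_of_use
    return "high" if score >= 8 else "medium" if score >= 5 else "low"

# Illustrative mapping from risk class to training requirement.
TRAINING_BY_RISK = {
    "low": "e-learning or written instruction",
    "medium": "practical training",
    "high": "practical training plus examination",
}

infusion_pump = Device("infusion pump", 3, 3, 3)
print(risk_class(infusion_pump), "->", TRAINING_BY_RISK[risk_class(infusion_pump)])
```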
Subject(s)
Equipment and Supplies, Hospital Administration, Biomedical Technology Assessment/organization & administration, Environment, Equipment Design, Equipment Failure, Humans, Inservice Training, Netherlands, Patient Safety, Risk Assessment
ABSTRACT
BACKGROUND: In postgraduate training, there is a need to continuously assess the learning and working conditions in order to optimize learning. Students or trainees respond to the learning climate as they perceive it. The Dutch Residency Educational Climate Test (D-RECT) is a learning climate measurement tool with well-substantiated validity. However, it was originally designed for Dutch postgraduate trainees, and it remains to be shown whether extrapolation to non-Western settings is viable. The dual objective of this study was to revalidate the D-RECT outside a Western setting and to evaluate the factor structure of a recently revised version of the D-RECT containing 35 items. METHODS: We invited Filipino internal medicine residents from 96 hospitals to complete the revised 35-item D-RECT. Subsequently, we performed a confirmatory factor analysis to check the fit of the nine-scale model of the revised 35-item D-RECT. Inter-rater reliability was assessed using generalizability theory. RESULTS: Confirmatory factor analysis revealed that the factor structure of the revised 35-item D-RECT provided a reasonable fit to the Filipino data after removal of 7 items. Five to seven evaluations of individual residents were needed per scale to obtain a reliable result. CONCLUSION: Even in a non-Western setting, the D-RECT exhibited psychometric validity. This study validated the factor structure of the revised 35-item D-RECT after some modifications. We recommend that its application be extended to other Asian countries and specialties.
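A hedged sketch of the decision-study logic behind the "five to seven evaluations per scale" result: given variance components for departments and for residents within departments, the dependability of a department-level scale mean rises with the number of resident evaluations averaged. The variance components below are illustrative assumptions, not the study estimates.

```python
import numpy as np

def dependability(var_dept: float, var_resid: float, n_evals: int) -> float:
    """Dependability of a department-mean scale score based on n resident evaluations."""
    return var_dept / (var_dept + var_resid / n_evals)

# Illustrative variance components (not the study estimates).
var_between_departments = 0.10
var_residents_within = 0.45

for n in range(1, 11):
    d = dependability(var_between_departments, var_residents_within, n)
    print(f"{n:2d} evaluations -> dependability {d:.2f}")
```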
Subject(s)
Graduate Medical Education/standards, Learning, Medical Students, Cross-Cultural Comparison, Factor Analysis, Female, Humans, Male, Educational Models, Philippines, Psychometrics, Reproducibility of Results, Social Environment
ABSTRACT
OBJECTIVES: Undergraduate medical students are prone to struggle with learning in clinical environments. One of the reasons may be that they are expected to self-regulate their learning, which often turns out to be difficult. Students' self-regulated learning is an interactive process between person and context, making a supportive context imperative. From a socio-cultural perspective, learning takes place in social practice, and therefore teachers and other hospital staff present are vital for students' self-regulated learning in a given context. Therefore, in this study we were interested in how others in a clinical environment influence clinical students' self-regulated learning. METHODS: We conducted a qualitative study borrowing methods from grounded theory methodology, using semi-structured interviews facilitated by the visual Pictor technique. Fourteen medical students were purposively sampled based on age, gender, experience and current clerkship to ensure maximum variety in the data. The interviews were transcribed verbatim and were, together with the Pictor charts, analysed iteratively, using constant comparison and open, axial and interpretive coding. RESULTS: Others could influence students' self-regulated learning through role clarification, goal setting, learning opportunities, self-reflection and coping with emotions. We found large differences in students' self-regulated learning and their perceptions of the roles of peers, supervisors and other hospital staff. Novice students require others, mainly residents and peers, to actively help them to navigate and understand their new learning environment. Experienced students who feel settled in a clinical environment are less susceptible to the influence of others and are better able to use others to their advantage. CONCLUSIONS: Undergraduate medical students' self-regulated learning requires context-specific support. This is especially important for more novice students learning in a clinical environment. Their learning is influenced most heavily by peers and residents. Supporting novice students' self-regulated learning may be improved by better equipping residents and peers for this role.
Subject(s)
Attitude of Health Personnel, Clinical Clerkship/methods, Learning, Self-Control/psychology, Medical Students/psychology, Psychological Adaptation, Undergraduate Medical Education, Grounded Theory, Humans, Peer Group, Qualitative Research
ABSTRACT
Whenever multiple observers provide ratings, even of the same performance, inter-rater variation is prevalent. The resulting 'idiosyncratic rater variance' is considered to be unusable error of measurement in psychometric models and is a threat to the defensibility of our assessments. Prior studies of inter-rater variation in clinical assessments have used open response formats to gather raters' comments and justifications. This design choice allows participants to use idiosyncratic response styles that could result in a distorted representation of the underlying rater cognition and skew subsequent analyses. In this study we explored rater variability using the structured response format of Q methodology. Physician raters viewed video-recorded clinical performances and provided Mini Clinical Evaluation Exercise (Mini-CEX) assessment ratings through a web-based system. They then shared their assessment impressions by sorting statements that described the most salient aspects of the clinical performance onto a forced quasi-normal distribution ranging from "most consistent with my impression" to "most contrary to my impression". Analysis of the resulting Q-sorts revealed distinct points of view for each performance shared by multiple physicians. The points of view corresponded with the ratings physicians assigned to the performance. Each point of view emphasized different aspects of the performance with either rapport-building and/or medical expertise skills being most salient. It was rare for the points of view to diverge based on disagreements regarding the interpretation of a specific aspect of the performance. As a result, physicians' divergent points of view on a given clinical performance cannot be easily reconciled into a single coherent assessment judgment that is impacted by measurement error. If inter-rater variability does not wholly reflect error of measurement, it is problematic for our current measurement models and poses challenges for how we are to adequately analyze performance assessment ratings.
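The Q-methodological analysis described above rests on a by-person factor analysis: raters' Q-sorts are correlated with one another and that correlation matrix is factored, so raters who sorted the statements similarly load on the same factor (a shared point of view). The sketch below shows only this core step on simulated sorts, omitting varimax rotation and the rest of the Q workflow.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical Q-sorts: 30 statements ranked by 12 raters (columns are raters).
# Two simulated "points of view" so that raters correlate within their group.
viewpoint_a = rng.normal(size=30)
viewpoint_b = rng.normal(size=30)
sorts = np.column_stack(
    [viewpoint_a + rng.normal(0, 0.5, 30) for _ in range(6)]
    + [viewpoint_b + rng.normal(0, 0.5, 30) for _ in range(6)]
)

# By-person correlation matrix (raters x raters), then principal components:
# raters with similar sorts load on the same component (shared point of view).
person_corr = np.corrcoef(sorts, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(person_corr)
order = np.argsort(eigvals)[::-1]
loadings = eigvecs[:, order[:2]] * np.sqrt(eigvals[order[:2]])

print("rater loadings on the first two factors:")
print(np.round(loadings, 2))
```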
Subject(s)
Medical Education/methods, Medical Education/standards, Educational Measurement/methods, Educational Measurement/standards, Clinical Competence, Female, Humans, Male, Observer Variation, Reproducibility of Results, Video Recording
ABSTRACT
Clinical workplaces offer postgraduate trainees a wealth of opportunities to learn from experience. To promote deliberate and meaningful learning, self-regulated learning skills are foundational. We explored trainees' learning activities related to patient encounters to better understand which aspects of self-regulated learning contribute to trainees' development, and to explore the supervisor's role herein. We conducted a qualitative non-participant observational study in seven general practices. Over two days we observed trainees' patient encounters, daily debriefing sessions and educational meetings between trainee and supervisor, and interviewed them separately afterwards. Data collection and analysis were iterative and inspired by a phenomenological approach. To organise the data we used networks, time-ordered matrices and codebooks. Self-regulated learning supported trainees in performing increasingly independently. They engaged in self-regulated learning before, during and after encounters. Trainees' activities depended on the type of medical problem presented and on patient, trainee and supervisor characteristics. Trainees used their sense of confidence to decide whether they could manage the encounter alone or should consult their supervisor. They deliberately used feedback on their performance and engaged in reflection. Supervisors appeared vital to trainees' learning by reassuring trainees, discussing experience, knowledge and professional issues, identifying possible unawareness of incompetence, assessing performance and securing patient safety. Self-confidence, reflection and feedback, and support from the supervisor are important aspects of self-regulated learning in practice. The results reflect how self-regulated learning and self-entrustment promote trainees' increased participation in the workplace. Securing organised moments of interaction with supervisors is beneficial to trainees' self-regulated learning.