ABSTRACT
Medical school admissions is a contentious and high-stakes selection activity. Many assessment approaches are available to support selection, but how are decisions about building, monitoring, and adapting admissions systems made? What shapes the processes and practices that underpin selection decisions? We explore how these decisions are made across several Canadian medical schools, and how values shape the creation, monitoring, and adaptation of admissions systems. Using phenomenography (a qualitative method suited to examining variability), the authors analyzed interviews with 10 current or previous heads of admissions from 10 different undergraduate medical education programs in Canada. Interviews were conducted in English and French, and data were collected between 2016 and 2017 (participants therefore no longer hold these roles). Data were coded and analyzed iteratively, focusing on identifying underlying values and exploring how these values shape admissions practices and considerations for validity. Eight different intersecting values were identified. Of these, four were shared across all participants: critically questioning the process and tools, aiming for equity, striving for better, and embracing the challenges of change. The expression of these values depended on different contextual variables (e.g., geographic location, access to expertise, resource availability), and values shaped how admissions systems were built, enacted, and monitored for quality. Ultimately, values shaped (1) admissions practices, resulting in different candidates being offered admission, and (2) how arguments supporting score interpretation are built (i.e., validity). This study documents various values that influence admissions processes, practices, and quality monitoring. The values that shape what is assessed, how it is assessed, and how fair and defensible practices are conceptualized have significant impact, ultimately determining who is selected. These values, whether implicit or explicit, result in intended and unintended consequences in selection processes. However, these values are rarely explicitly examined and questioned, leaving it uncertain which consequences are the intended outcomes of deliberately chosen values, and which are unintended consequences of implicitly held values of admissions systems and their actors.
ABSTRACT
BACKGROUND: Given the widespread use of Multiple Mini Interviews (MMIs), their impact on the selection of candidates and the considerable resources invested in preparing and administering them, it is essential to ensure their quality. Given the variety of station formats in use, the degree to which this factor lies within the control of training programmes, and how little we know about it, the effect of format on MMI quality is a considerable oversight. This study assessed the effect of two popular station formats (interview vs. role-play) on the psychometric properties of MMIs. METHODS: We analysed candidate data from the first 8 years of the Integrated French MMIs (IF-MMI) (2010-2017, n = 11 761 applicants), an MMI organised yearly by three francophone universities and administered at four testing sites located in two Canadian provinces. There were 84 role-play and 96 interview stations administered, totalling 180 stations. Mixed-design analyses of variance (ANOVAs) were used to test the effect of station format on candidates' scores and stations' discrimination. Cronbach's alpha coefficients for interview and role-play stations were also compared. Predictive validity of both station formats was estimated with a mixed multiple linear regression model testing the relation between interview and role-play scores and average clerkship performance for those who gained entry to medical school (n = 462). RESULTS: Role-play stations (M = 20.67, standard deviation [SD] = 3.38) had a slightly lower mean score than interview stations (M = 21.36, SD = 3.08), p < 0.01, Cohen's d = 0.2. The correlation between role-play and interview station scores was r = 0.5 (p < 0.01). Discrimination coefficients, Cronbach's alpha and predictive validity statistics did not vary by station format. CONCLUSION: Interview and role-play stations have comparable psychometric properties, suggesting the two formats are interchangeable. Programmes should select station format based on its match to the personal qualities they are trying to select for.
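To make the psychometric comparison above concrete, here is a minimal Python sketch (using the pingouin library) of how one might compare mean scores, effect size, and internal consistency across the two station formats. It runs on simulated data; the column names, simulated effect, and station counts are illustrative assumptions, not the IF-MMI data.

```python
# Sketch of a station-format comparison on a hypothetical long-format
# DataFrame. Column names (candidate, station, fmt, score) are
# illustrative, not the authors' actual variable names.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n_candidates, n_stations = 200, 10
df = pd.DataFrame({
    "candidate": np.repeat(np.arange(n_candidates), n_stations),
    "station": np.tile(np.arange(n_stations), n_candidates),
})
df["fmt"] = np.where(df["station"] < 5, "interview", "role-play")
# Simulate scores with a small format effect (role-play ~0.7 pts lower).
df["score"] = rng.normal(21.4, 3.1, len(df)) - 0.7 * (df["fmt"] == "role-play")

# Effect of format on candidate mean scores (paired within candidates).
means = df.groupby(["candidate", "fmt"])["score"].mean().unstack()
d = pg.compute_effsize(means["interview"], means["role-play"],
                       paired=True, eftype="cohen")
print(f"Cohen's d (interview vs role-play): {d:.2f}")

# Internal consistency per format: Cronbach's alpha over the
# candidate-by-station score matrix for each subset of stations.
for fmt, sub in df.groupby("fmt"):
    wide = sub.pivot(index="candidate", columns="station", values="score")
    alpha, ci = pg.cronbach_alpha(data=wide)
    print(f"alpha ({fmt}): {alpha:.2f}, 95% CI {ci}")
```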
Subjects
School Admission Criteria , Schools, Medical , Canada , Humans , Psychometrics , Reproducibility of Results
ABSTRACT
BACKGROUND: The widespread implementation of longitudinal assessment (LA) to document trainees' progression to independent practice rests more on speculative than on evidence-based benefits. We aimed to document stakeholders' knowledge of and attitudes towards LA, and to identify the supports and barriers that can help or hinder the uptake and sustainable use of LA. METHODS: We interviewed representatives from four stakeholder groups involved in LA. The interview protocols were based on the Theoretical Domains Framework (TDF), which contains a total of 14 behaviour change determinants. Two team members coded the interviews deductively to the TDF, with a third resolving differences in coding. The qualitative data analysis was completed with iterative consultations and discussions with team members until consensus was achieved. Saliency analysis was used to identify dominant domains. RESULTS: Forty-one individuals participated in the study. Three dominant domains were identified. Participants perceive that LA has more positive than negative consequences but requires substantial resources. All the elements and characteristics of LA are present in our data, with differences between stakeholders. CONCLUSION: Going forward, tailored and theory-driven interventions could be developed and implemented to promote a shared understanding of LA, and to maintain potential positive outcomes while reducing negative ones. Furthermore, the resources needed to support LA implementation must be addressed to facilitate its uptake.
Subjects
Attitude , Perception , Humans , Qualitative Research
ABSTRACT
Assessment is more educationally effective when learners engage with assessment processes and perceive the feedback received as credible. With the goal of optimizing the educational value of assessment in medical education, we mapped the primary literature to identify factors that may affect a learner's perceptions of the credibility of assessment and assessment-generated feedback (i.e., scores or narrative comments). For this scoping review, search strategies were developed and executed in five databases. Eligible articles were primary research studies that had medical learners (i.e., medical students to post-graduate fellows) as the focal population, discussed assessment of individual learners, and reported on perceived credibility in the context of assessment or assessment-generated feedback. We identified 4705 articles published between 2000 and November 16, 2020. Abstracts were screened by two reviewers; disagreements were adjudicated by a third reviewer. Full-text review resulted in 80 articles included in this synthesis. We identified three sets of intertwined factors that affect learners' perceived credibility of assessment and assessment-generated feedback: (i) elements of an assessment process, (ii) learners' level of training, and (iii) context of medical education. Medical learners make judgments regarding the credibility of assessments and assessment-generated feedback, and these judgments are influenced by a variety of individual, process, and contextual factors. Judgments of credibility appear to influence what information will or will not be used to improve later performance. For assessment to be educationally valuable, the design and use of assessment-generated feedback should consider how learners interpret, use, or discount it.
Subjects
Education, Medical , Students, Medical , Feedback , Humans , Judgment
ABSTRACT
BACKGROUND: Medical students need to acquire a continuously growing body of knowledge during their training and throughout their practice. Medical training programs should aim to provide students with the skills to manage this knowledge. Mobile technology, for example, could be a strategy used throughout training and practice. The objective of this study was to identify drivers of using mobile technology (an iPad) in a preclinical undergraduate medical education (UGME) setting and to study the evolution of those drivers over time. METHODS: We solicited all students from two cohorts of the preclinical component of a Canadian UGME program. They were asked to answer two online surveys: one in their first year of study and another in their second year. Surveys were built on the Technology Acceptance Model (TAM), to which other factors were added. Data from the two cohorts were combined and analysed with partial least squares structural equation modelling (PLS-SEM) to test two measurement models, one for each year. RESULTS: We tested fifteen hypotheses on both data sets (first year and second year). Factors that explained the use of an iPad in the first year were knowledge, preferences, perceived usefulness and anticipation. In the second year, perceived usefulness, knowledge and satisfaction explained the use of an iPad. Other factors also had a significant, but indirect, influence on iPad use. CONCLUSIONS: We identified factors that influenced the use of an iPad in a preclinical medical program. These factors differed from the first year to the second year of the program. Our results suggest that interventions should be tailored to different points in time to foster the use of an iPad. Further study should investigate how interventions based on these factors may influence the implementation of mobile technology to help students acquire the ability to navigate medical knowledge efficiently.
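PLS-SEM is typically run in dedicated software (e.g., SmartPLS or R's plspm), so the following is only a rough conceptual stand-in: a minimal sketch using scikit-learn's PLSRegression to relate TAM-style predictor items to self-reported iPad use. The indicator names, loadings, and data are hypothetical and do not reproduce the study's model.

```python
# Conceptual stand-in for PLS-SEM: partial least squares regression
# relating survey indicators to an outcome. All items and data are
# hypothetical.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n = 150  # respondents
# Hypothetical latent predictors: perceived usefulness (PU),
# knowledge (KN), satisfaction (SA); three noisy indicators each.
latent = rng.normal(size=(n, 3))                    # PU, KN, SA
X = np.repeat(latent, 3, axis=1) + rng.normal(0, 0.5, (n, 9))
# Self-reported use driven mostly by PU and KN (as the study found
# for year 2); weights here are invented.
y = latent @ np.array([0.6, 0.4, 0.2]) + rng.normal(0, 0.5, n)

pls = PLSRegression(n_components=2).fit(X, y)
print("R^2 for reported use:", round(pls.score(X, y), 2))
print("Indicator weights on first component:",
      np.round(pls.x_weights_[:, 0], 2))
```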
Subjects
Students, Medical , Canada , Humans , Surveys and Questionnaires , Technology
ABSTRACT
OBJECTIVES: Educators and researchers recently implemented developmental progress assessment (DPA) in the context of competency-based education. To reap its anticipated benefits, much remains to be understood about its implementation. In this study, we aimed to determine the nature and extent of the current evidence on DPA, in an effort to broaden our understanding of the major goals and intended outcomes of DPA, as well as the lessons learned from how it has been executed in, or applied across, educational contexts. METHODS: We conducted a scoping study based on the methodology of Arksey and O'Malley. Our search strategy yielded 2494 articles. These articles were screened for inclusion and exclusion (90% agreement), and numerical and qualitative data were extracted from 56 articles based on a pre-defined set of charting categories. The thematic analysis of the qualitative data was completed with iterative consultations and discussions until consensus was achieved on the interpretation of the results. RESULTS: Tools used to document DPA include scales, milestones and portfolios. Performances were observed in clinical or standardised contexts. We identified seven major themes in our qualitative thematic analysis: (a) underlying aims of DPA; (b) sources of information; (c) barriers; (d) contextual factors that can act as barriers or facilitators to the implementation of DPA; (e) facilitators; (f) observed outcomes; and (g) documented validity evidence. CONCLUSIONS: Developmental progress assessment seems to fill a need in the training of future competent health professionals. However, in moving forward with a widespread implementation of DPA, factors such as lack of access to user-friendly technology and time to observe performance may render its operationalisation burdensome in the context of competency-based medical education.
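An aside on the screening step above: raw percent agreement (the 90% reported) can overstate reliability when one decision (e.g., "exclude") dominates, so chance-corrected agreement is often reported alongside it. A minimal sketch with two hypothetical reviewers' include/exclude calls:

```python
# Compare raw agreement with chance-corrected agreement (Cohen's kappa)
# for two hypothetical screening reviewers.
from sklearn.metrics import cohen_kappa_score

reviewer_a = ["include", "exclude", "exclude", "include", "exclude",
              "exclude", "exclude", "include", "exclude", "exclude"]
reviewer_b = ["include", "exclude", "exclude", "exclude", "exclude",
              "exclude", "exclude", "include", "exclude", "exclude"]

raw = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / len(reviewer_a)
kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"raw agreement = {raw:.0%}, Cohen's kappa = {kappa:.2f}")
```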
Subjects
Delivery of Health Care , Health Personnel , Humans
ABSTRACT
BACKGROUND: Although clinical teachers can often identify struggling learners readily and reliably, they can be reluctant to act upon their impressions, resulting in failure to fail. In the absence of a clear process for identifying and remediating struggling learners, clinical teachers can be put off by the prospect of navigating the politically and personally charged waters of remediation and the potential failing of students. METHODS: To address this gap, we developed a problem-solving algorithm to support clinical teachers from the identification through to the remediation of learners with clinical reasoning difficulties, which have significant implications for patient care. Based on this algorithm, a mobile application (Pdx) was developed and assessed in two emergency departments at a Canadian university, from 2015 to 2016, using interpretive description as the research design. Semi-structured interviews were conducted before and after a three-month trial with the application. Interviews were analysed both deductively, using pre-determined categories, and inductively, using emerging categories. RESULTS: Twelve clinical teachers were interviewed. Their experience with the application revealed their need first to validate their impressions of difficulties in learners and to find the right words to describe them before difficulties could be addressed. The application was unanimously considered helpful in both these respects, while the mobile format appeared instrumental in allowing clinical teachers to quickly access targeted information during clinical supervision. CONCLUSIONS: The value placed on verifying impressions and finding the right words to pinpoint difficulties should be further explored in endeavours that aim to address the failure-to-fail phenomenon. Moreover, just-in-time mobile solutions, which mirror habitual clinical practices, may be used profitably for knowledge transfer in medical education as an alternative form of faculty development.
Assuntos
Docentes de Medicina , Aplicativos Móveis , Aprendizagem Baseada em Problemas/organização & administração , Estudantes de Medicina , Canadá/epidemiologia , Técnicas de Apoio para a Decisão , Pessoal de Educação , Docentes de Medicina/educação , Humanos , Pesquisa QualitativaRESUMO
CONTEXT: Assessment can have far-reaching consequences for future health care professionals and for society. Thus, it is essential to establish the quality of assessment. Few modern approaches to validity are well situated to ensure the quality of complex assessment approaches, such as authentic and programmatic assessments. Here, we explore and delineate the concept of validity as a social imperative in the context of assessment in health professions education (HPE) as a potential framework for examining the quality of complex and programmatic assessment approaches. METHODS: We conducted a concept analysis using Rodgers' evolutionary method to describe the concept of validity as a social imperative in the context of assessment in HPE. Supported by an academic librarian, we developed and executed a search strategy across several databases for literature published between 1995 and 2016. From a total of 321 citations, we identified 67 articles that met our inclusion criteria. Two team members analysed the texts using a specified approach to qualitative data analysis. Consensus was achieved through full team discussions. RESULTS: Attributes that characterise the concept were: (i) demonstration of the use of evidence considered credible by society to document the quality of assessment; (ii) validation embedded throughout the assessment process and score interpretation; (iii) documented validity evidence supporting the interpretation of the combination of assessment findings; and (iv) demonstration of a justified use of a variety of evidence (quantitative and qualitative) to document the quality of assessment strategies. CONCLUSIONS: The emerging concept of validity as a social imperative highlights some areas of focus in traditional validation frameworks, whereas some characteristics appear unique to HPE and move beyond traditional frameworks. The study reflects the importance of embedding consideration for society and societal concerns throughout the assessment and validation process, and may represent a potential lens through which to examine the quality of complex and programmatic assessment approaches.
Subjects
Clinical Competence/standards , Health Occupations/education , Reproducibility of Results , Research Design , Educational Measurement , Humans , Learning , Qualitative Research
ABSTRACT
Medical education research has unique characteristics that raise a distinct set of ethical issues, which differ significantly from those commonly found in clinical research. In contexts where researchers have a dual role as teachers, free consent to participate in research may be undermined, and students' data must be kept confidential from faculty who play any role in their academic or professional path. Faculty members who recruit students as research subjects within their institution for education research should pay particular attention to ensuring that students' consent to participate is indeed free and continuous and that their privacy is adequately protected. A good understanding of ethical standards, and of the appropriate strategies to fulfill them, is essential to conduct ethical medical education research and to ensure ethics approval is obtained. These twelve tips draw from the Declaration of Helsinki, the ICMJE recommendations, and the example of their application to medical education research in a Canadian and North American context. They aim to act as a reminder and a guide for addressing the main ethical issues that should be given proper consideration when designing a study involving students as subjects in medical education research.
Subjects
Education, Medical/organization & administration , Personnel Selection/ethics , Research Design , Research Subjects , Research/organization & administration , Confidentiality , Humans , Informed Consent , Universities
ABSTRACT
Validity is one of the most debated constructs in our field; debates abound about what is legitimate and what is not, and the word continues to be used in ways that are explicitly disavowed by current practice guidelines. The resultant tensions have not been well characterized, yet their existence suggests that different uses may maintain some value for the user that needs to be better understood. We conducted an empirical form of Discourse Analysis to document the multiple ways in which validity is described, understood, and used in the health professions education field. We created and analyzed an archive of texts identified from multiple sources, including formal databases such as PubMed, ERIC and PsycINFO, as well as the authors' personal assessment libraries. An iterative analytic process was used to identify, discuss, and characterize emerging discourses about validity. Three discourses of validity were identified. Validity as a test characteristic is underpinned by the notion that validity is an intrinsic property of a tool and could, therefore, be seen as content and context independent. Validity as an argument-based evidentiary chain emphasizes the importance of supporting the interpretation of assessment results with ongoing analysis, such that validity does not belong to the tool or instrument itself; the emphasis is on process-based validation (emphasizing the journey instead of the goal). Validity as a social imperative foregrounds the consequences of assessment at the individual and societal levels, be they positive or negative. The existence of different discourses may explain, in part, results observed in recent systematic reviews that highlighted discrepancies and tensions between recommendations for practice and the validation practices that are actually adopted and reported. Some of these practices, despite contravening accepted validation 'guidelines', may nevertheless respond to different and somewhat unarticulated needs within health professions education.
Subjects
Educational Measurement/standards , Health Occupations/education , Terminology as Topic , Clinical Competence/standards , Humans , Psychometrics/standards , Reproducibility of Results
ABSTRACT
BACKGROUND: Given the complexity of competency frameworks, associated skills and abilities, and contexts in which they are to be assessed in competency-based education (CBE), there is an increased reliance on rater judgements when considering trainee performance. This increased dependence on rater-based assessment has led to the emergence of rater cognition as a field of research in health professions education. The topic, however, is often conceptualised and ultimately investigated using many different perspectives and theoretical frameworks. Critically analysing how researchers think about, study and discuss rater cognition, or the judgement processes in assessment frameworks, may provide meaningful and efficient directions for how the field continues to explore the topic. METHODS: We conducted a critical and integrative review of the literature to explore common conceptualisations and unified terminology associated with rater cognition research. We identified 1045 articles on rater-based assessment in health professions education using Scopus, Medline and ERIC, and 78 articles were included in our review. RESULTS: We propose a three-phase framework of observation, processing and integration. We situate nine specific mechanisms and sub-mechanisms described across the literature within these phases: (i) generating automatic impressions about the person; (ii) formulating high-level inferences; (iii) focusing on different dimensions of competencies; (iv) categorising through well-developed schemata based on (a) personal concept of competence, (b) comparison with various exemplars and (c) task and context specificity; (v) weighting and synthesising information differently; (vi) producing narrative judgements; and (vii) translating narrative judgements into scales. CONCLUSION: Our review has allowed us to identify common underlying conceptualisations of observed rater mechanisms and subsequently propose a comprehensive, although complex, framework for the dynamic and contextual nature of the rating process. This framework could help bridge the gap between researchers adopting different perspectives when studying rater cognition, and enable the interpretation of contradictory findings about raters' performance by determining which mechanism is enabled or disabled in any given context.
Subjects
Cognition , Educational Measurement , Competency-Based Education , Education, Medical , Educational Measurement/methods , Humans , Judgment
ABSTRACT
Performance-based assessment (PBA) is a valued assessment approach in medical education, be it in a clerkship, residency, or practice context. Raters are intrinsic to PBA, and the increased use of PBA has led to an increased interest in rater cognition. Although several researchers have tackled factors that may influence variability in rater judgment, critical examination of how raters observe performance and translate those observations into judgments is only beginning. The purpose of this study was to qualitatively investigate the cognitive processes of raters, and to create a framework that conceptualizes those processes when raters assess a complex performance. We conducted semi-structured interviews with 11 faculty members (nominated as excellent assessors) from a Department of Medicine to investigate how raters observe, interpret, and translate performance into judgments. The transcribed verbal protocols were analyzed using Constructivist Grounded Theory in order to develop a theoretical model of raters' assessment processes. Several themes emerged from the data and were grouped into three macro-level themes describing how raters balance two sources of data [(1) external sources of information and (2) internal/personal sources of information] by relying on specific cognitive processes to assess an examinee's performance. The results from our study demonstrate that assessment is a difficult cognitive task that involves the nuanced use of specific cognitive processes to weigh external and internal data against each other. Our data clearly draw attention to the constant struggle between objectivity and subjectivity observed in assessment, as illustrated by the importance given to nuancing the examinee's observed performance.
Subjects
Cognition , Educational Measurement , Clinical Competence/standards , Education, Medical/standards , Educational Measurement/methods , Faculty, Medical/psychology , Female , Humans , Interviews as Topic , Judgment , Male
ABSTRACT
Since cognitive abilities have been shown to decrease with age, it is expected that older physicians would not perform as well as their younger counterparts on clinical cases unless their expertise can counteract the cognitive effects of aging. However, studies on the topic have shown contradictory results. This study aimed to further investigate the effect of aging on physicians' diagnostic accuracy when diagnosing prevalent and less prevalent cases based on clinical vignettes. A mixed design was used to assess the influence of case prevalence (high vs. low) as a within-subjects factor, and age group as a between-subjects factor (<30 years: n = 23; 30-39: n = 19; 40-49: n = 27; >50: n = 19), on the diagnostic accuracy of 65 family physicians and 25 residents. A repeated-measures ANOVA revealed a significant effect of case prevalence (p < .001) and age group (p < .001). Post-hoc analyses revealed that younger physicians showed the best performance. This study did not demonstrate a positive effect of experience in older physicians. In line with previous studies on expertise development, the findings suggest that skills should be actively maintained to ensure a high performance level throughout one's lifespan; otherwise, performance could gradually decline with age.
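For readers unfamiliar with this design, here is a minimal sketch of a mixed ANOVA (case prevalence within subjects, age group between subjects) using the pingouin library on simulated data. Group sizes follow the abstract; the accuracy values and the sizes of the simulated effects are invented for illustration.

```python
# Mixed-design ANOVA sketch: prevalence (within) x age group (between).
# Data are simulated; only the group sizes come from the abstract.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(2)
groups = {"<30": 23, "30-39": 19, "40-49": 27, ">=50": 19}
rows, pid = [], 0
for grp, n in groups.items():
    for _ in range(n):
        base = rng.normal(0.75, 0.1)  # physician's overall accuracy
        for prevalence, shift in [("high", 0.05), ("low", -0.05)]:
            rows.append({"id": pid, "age_group": grp,
                         "prevalence": prevalence,
                         "accuracy": base + shift + rng.normal(0, 0.05)})
        pid += 1
df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df, dv="accuracy", within="prevalence",
                     subject="id", between="age_group")
print(aov[["Source", "F", "p-unc"]])
# Post-hoc comparisons between age groups (pingouin >= 0.5):
print(pg.pairwise_tests(data=df, dv="accuracy",
                        between="age_group", padjust="bonf"))
```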
Subjects
Aging/psychology , Clinical Competence/statistics & numerical data , Diagnosis , Physicians/statistics & numerical data , Adult , Age Factors , Clinical Competence/standards , Humans , Internship and Residency/statistics & numerical data , Middle Aged , Physicians/psychology , Physicians, Family/psychology , Physicians, Family/statistics & numerical data
ABSTRACT
CONTEXT: Recent studies suggest that self-explanation (SE) while diagnosing cases fosters the development of clinical reasoning in medical students; however, the conditions that optimise the impact of SE remain unknown. The example-based learning framework justifies an exploration of students' use of their own SEs combined with the study of examples. This study aimed to assess the impact on medical students' diagnostic performance of: (i) combining students' SEs with listening to examples of residents' SEs, and (ii) adding prompts (specific questions) while working with examples. METHODS: This study consisted of a training phase and an assessment phase conducted 1 week later. In the training phase, 54 Year 3 medical students were randomly assigned to one of three groups. In all groups, students first solved four clinical cases using SE. Subsequently, Group 1 listened to examples of residents' SEs with prompts; Group 2 listened to examples of residents' SEs without prompts; and the control group solved word puzzles. Then, all students again solved the same four cases. One week later, all students solved four similar and four different cases. Students' diagnostic performance and diagnostic accuracy scores were assessed for each case at each time point. RESULTS: Although all groups' diagnostic accuracy scores on similar cases improved significantly between the training and the assessment phase, Group 1 showed a significantly higher diagnostic performance score after 1 week than the control group (p = 0.037). On different cases, Group 1 obtained significantly higher diagnostic accuracy (p = 0.011) and diagnostic performance (p < 0.001) scores than the control group, and a significantly higher diagnostic performance score than Group 2 (p = 0.018). CONCLUSIONS: Self-explanation seems to be an effective technique to help medical students learn clinical reasoning. Its impact is increased significantly by combining it with examples of residents' SEs and prompts. Although students' exposure to examples of clinical reasoning is important, their 'active processing' of these examples appears to be critical to their learning from them.
Subjects
Clinical Competence , Education, Medical, Undergraduate/methods , Problem-Based Learning , Clinical Clerkship , Diagnostic Techniques and Procedures , Female , Humans , Male , Quebec , Schools, Medical , Students, Medical
ABSTRACT
Testing has been shown to enhance retention of learned information beyond simple studying, a phenomenon known as test-enhanced learning (TEL). Research has shown that TEL effects are greater for tests that require the production of responses [e.g., short-answer questions (SAQs)] relative to tests that require the recognition of correct answers [e.g., multiple-choice questions (MCQs)]. High-stakes licensure examinations have recently differentiated MCQs that require the application of clinical knowledge (context-rich MCQs) from MCQs that rely on the recognition of "facts" (context-free MCQs). The present study investigated the influence of different types of educational activities (including studying, SAQs, context-rich MCQs and context-free MCQs) on later performance on a mock licensure examination. Fourth-year medical students (n = 224) from four Quebec universities completed four educational activities: one reading-based activity and three quiz-based activities (SAQs, context-rich MCQs, and context-free MCQs). We assessed the influence of the type of educational activity on students' subsequent performance on a mock licensure examination, which consisted of two types of context-rich MCQs: (1) verbatim replications of previous items and (2) new items testing the same learning objectives. Mean accuracy scores on the mock licensure exam were higher when intervening educational activities contained either context-rich MCQs (mean z-score = 0.40) or SAQs (M = 0.39) compared with context-free MCQs (M = -0.38) or study-only items (M = -0.42; all p < 0.001). Higher mean scores were present only for verbatim items (p < 0.001). The benefit of testing was observed when intervening educational activities required either the generation of a response (SAQs) or the application of knowledge (context-rich MCQs); however, this effect was only observed for verbatim test items. These data provide evidence that context-rich MCQs and SAQs enhance learning through testing compared with context-free MCQs or studying alone. The extent to which these findings generalize beyond verbatim questions remains to be seen.
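A minimal sketch of the kind of summary implied by the reported mean z-scores: standardize item scores, then average by intervening activity and by item type (verbatim vs. new). The data, effect sizes, and column names are simulated around the abstract's reported means, not the study's data.

```python
# Summarize standardized exam performance by intervening activity and
# item type. All values are simulated for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
activities = ["study", "SAQ", "rich_MCQ", "free_MCQ"]
effects = {"study": -0.42, "SAQ": 0.39, "rich_MCQ": 0.40, "free_MCQ": -0.38}
df = pd.DataFrame({
    "activity": rng.choice(activities, 2000),
    "item_type": rng.choice(["verbatim", "new"], 2000),
})
# Simulate a TEL effect on verbatim items only, as the study found.
df["score"] = rng.normal(0, 1, len(df)) + np.where(
    df["item_type"] == "verbatim", df["activity"].map(effects), 0.0)

# Standardize, then report mean z-scores per cell.
df["z"] = (df["score"] - df["score"].mean()) / df["score"].std()
print(df.groupby(["item_type", "activity"])["z"].mean().round(2).unstack())
```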
Subjects
Clinical Competence , Education, Medical/methods , Educational Measurement/methods , Learning , Humans , Knowledge , Quebec , Students, Medical
ABSTRACT
Educational strategies that promote the development of clinical reasoning in students remain scarce. Generating self-explanations (SE) engages students in active learning and has been shown to be an effective technique for improving clinical reasoning in clerks. Example-based learning has been shown to support the development of accurate knowledge representations. The purpose of this study was to investigate the effect of combining students' SE with observation of a peer's or an expert's SE examples on diagnostic performance. Fifty-three third-year medical students were assigned to a peer SE example group, an expert SE example group, or a control (no example) group. All participants solved the same set of four clinical cases (training cases): (1) after SE; (2) after listening to a peer or expert SE example, or after a control task; and (3) one week later. They also solved a new set of four different cases (transfer cases) one week later. For training cases, students significantly improved their diagnostic performance over time, but the main effect of group was not significant, suggesting that students' SE mainly drives the observed effect. On transfer cases, there was no difference between the three groups (p > .05). Educational implications are discussed, and further studies on different types of examples and on additional strategies to help students actively process examples are proposed.
Subjects
Clinical Competence , Education, Medical, Undergraduate/methods , Clinical Clerkship , Diagnostic Techniques and Procedures , Educational Measurement , Female , Humans , Male , Observation , Peer Group , Problem-Based Learning , Quebec
ABSTRACT
CONTEXT: Education scholarship (ES) is integral to the transformation of medical education. Faculty members who engage in ES need encouragement and recognition of this work. Beginning from the definition of ES as 'an umbrella term which can encompass both research and innovation in health professions education', an activity that is as such separate and distinct from teaching and leadership, the purpose of our study was to explore how promotion policies and processes are used in Canadian medical schools to support and promote ES. METHODS: We conducted an analysis of the promotion policies of 17 Canadian medical schools and interviews with a key informant at each institution. We drew on an interpretive approach to policy analysis to analyse the data and to understand explicit messages about how ES was represented and supported. RESULTS: Of the 17 schools' promotion documents, only nine contained specific reference to ES. There was wide variation in focus and level of detail. All key informants indicated that ES is recognised and considered for academic promotion. Barriers to the support and recognition of ES included a lack of understanding of ES and its relationship to teaching and leadership. This was manifest in the variability in promotion policies and processes, support systems, and career planning and pathways for ES. CONCLUSIONS: This lack of clarity may make it challenging for medical school faculty members to make sense of how they might successfully align ES within an academic career. There is therefore a need to better articulate ES in promotion policies and support systems. Creating a common understanding of ES, developing guidelines to assess the impact of all forms of ES, developing informed leadership and a system of mentors, and creating explicit role descriptions and guidelines are identified as potential strategies to ensure that ES is appropriately valued.
Subjects
Education, Medical/methods , Fellowships and Scholarships , Faculty, Medical , Financial Support , Humans
ABSTRACT
BACKGROUND: Tutorial-based assessment, commonly used in problem-based learning (PBL), is thought to provide information about students that differs from that gathered with traditional assessment strategies such as multiple-choice questions or short-answer questions. Although multiple observations within units of an undergraduate medical education curriculum foster more reliable scores, such an evaluation design is not always practically feasible. Thus, this study investigated the overall reliability of a tutorial-based program of assessment, the Tutotest-Lite. METHODS: Scores from multiple units were used to profile clinical domains for the first two years of a system-based PBL curriculum. RESULTS: G-study analysis revealed an acceptable level of generalizability, with g-coefficients of 0.84 and 0.83 for Years 1 and 2, respectively. Interestingly, D-studies suggested that as few as five observations over one year would yield sufficiently reliable scores. CONCLUSIONS: Overall, the results from this study support the use of the Tutotest-Lite to judge clinical domains over different PBL units.
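The G- and D-study logic above can be illustrated with a short sketch: a random-intercept model separates between-student variance from residual error, and the generalizability coefficient for a design with k observations is g = var_student / (var_student + var_error / k). The data below are simulated and the variable names are illustrative, not the Tutotest-Lite dataset.

```python
# Estimate variance components with a random-intercept model, then
# project reliability for different numbers of observations (D-study).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_students, n_obs = 100, 10
true = rng.normal(70, 8, n_students)              # stable student ability
df = pd.DataFrame({
    "student": np.repeat(np.arange(n_students), n_obs),
    "score": np.repeat(true, n_obs) + rng.normal(0, 6, n_students * n_obs),
})

fit = smf.mixedlm("score ~ 1", df, groups=df["student"]).fit()
var_student = float(fit.cov_re.iloc[0, 0])  # between-student variance
var_error = fit.scale                       # residual (error) variance

for k in (3, 5, 10):  # D-study: projected reliability with k observations
    g = var_student / (var_student + var_error / k)
    print(f"g-coefficient with {k} observations: {g:.2f}")
```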
Subjects
Educational Measurement/methods , Problem-Based Learning , Reproducibility of Results
ABSTRACT
Validity as a social imperative foregrounds the social consequences of assessment and highlights the importance of building quality into the assessment development and monitoring processes. Validity as a social imperative is informed by current assessment trends such as programmatic, longitudinal, and rater-based assessment, and is one of the conceptualizations of validity currently at play in the Health Professions Education (HPE) literature. This Black Ice is intended to help readers get a grip on how to embed principles of validity as a social imperative in the development and quality monitoring of an assessment. This piece draws on a program of work investigating validity as a social imperative, key HPE literature, and data generated through stakeholder interviews. We describe eight ways to implement validation practices that align with validity as a social imperative.