Results 1 - 20 of 138
1.
Med Educ ; 58(6): 722-729, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38105389

ABSTRACT

INTRODUCTION: Early in COVID-19, continuing professional development (CPD) providers quickly made decisions about program content, design, funding and technology. Although experiences during an earlier pandemic cautioned providers to make disaster plans, CPD was not entirely prepared for this event. We sought to better understand how CPD organisations make decisions about CPD strategy and operations during a crisis. METHODS: This is a descriptive qualitative research study of decision making in two organisations: CPD at the University of Toronto (UofT) and the US-based Society for Academic Continuing Medical Education (SACME). In March 2021, using purposive and snowball sampling, we invited faculty and staff who held leadership positions to participate in semi-structured interviews. The interviews focused on the individual's role and organisation, their decision-making process and reflections on how their units had changed because of COVID-19. Transcripts were reviewed, coded and analysed using thematic analysis. We used Mazmanian et al.'s Ecological Framework as a further conceptual tool. RESULTS: We conducted eight interviews at UofT and five at SACME. We identified that decision making during the pandemic occurred over four phases of reactions and impact from COVID-19, including shutdown, pivot, transition and the 'new reality'. The decision-making ability of CPD organisations changed throughout the pandemic, ranging from having little or no independent decision-making ability early on to having considerable control over choosing appropriate pathways forward. Decision making was strongly influenced by the creativity, adaptability and flexibility of the CPD community and the need for social connection. CONCLUSIONS: This study adds to the literature on the changes CPD organisations faced due to COVID-19, emphasising CPD organisations' adaptability in making decisions. Applying the Ecological Framework further demonstrates the importance of time to decision-making processes and the relational aspect of CPD. To face future crises, CPD will need to embrace creative, flexible and socially connected solutions. Future scholarship could explore an organisation's ability to rapidly adapt to better prepare for future crises.


Subject(s)
COVID-19 , Education, Medical, Continuing , Qualitative Research , Humans , Education, Medical, Continuing/organization & administration , SARS-CoV-2 , Decision Making , Pandemics , Ontario , Interviews as Topic
2.
Teach Learn Med ; 36(2): 244-252, 2024.
Article in English | MEDLINE | ID: mdl-37431929

ABSTRACT

Issue: The way educators think about the nature of competence, the approaches one selects for the assessment of competence, what generated data implies, and what counts as good assessment now involve broader and more diverse interpretive processes. Broadening philosophical positions in assessment has educators applying different interpretations to similar assessment concepts. As a result, what is claimed through assessment, including what counts as quality, can be different for each of us despite using similar activities and language. This is leading to some uncertainty about how to proceed or, worse, provides opportunities for questioning the legitimacy of any assessment activity or outcome. While some debate in assessment is inevitable, most debates have been within philosophical positions (e.g., how best to minimize error), whereas newer debates are happening across philosophical positions (e.g., whether error is a useful concept). As new ways of approaching assessment have emerged, the interpretive nature of underlying philosophical positions has not been sufficiently attended to. Evidence: We illustrate interpretive processes of assessment in action by: (a) summarizing the current health professions assessment context from a philosophical perspective as a way of describing its evolution; (b) demonstrating implications in practice using two examples (i.e., analysis of assessment work and validity claims); and (c) examining pragmatism to demonstrate how even within specific philosophical positions opportunities for variable interpretations still exist. Implications: Our concern is not that assessment designers and users have different assumptions, but that practically, educators may unknowingly (or insidiously) apply different assumptions and methodological and interpretive norms, and subsequently settle on different views on what serves as quality assessment even for the same assessment program or event. With the state of assessment in health professions in flux, we conclude by calling for a philosophically explicit approach to assessment, and underscore assessment as, fundamentally, an interpretive process - one which demands the careful elucidation of philosophical assumptions to promote understanding and ultimately defensibility of assessment processes and outcomes.


Subject(s)
Health Occupations , Humans
3.
Br J Surg ; 110(2): 233-241, 2023 01 10.
Article in English | MEDLINE | ID: mdl-36413510

ABSTRACT

BACKGROUND: Competency frameworks outline the perceived knowledge, skills, attitudes, and other attributes required for professional practice. These frameworks have gained in popularity, in part for their ability to inform health professions education, assessment, professional mobility, and other activities. Previous research has highlighted inadequate reporting related to their development which may then jeopardize their defensibility and utility. METHODS: This study aimed to develop a set of minimum reporting criteria for developers and authors of competency frameworks in an effort to improve transparency, clarity, interpretability and appraisal of the developmental process, and its outputs. Following guidance from the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network, an expert panel was assembled, and a knowledge synthesis, a Delphi study, and workshops were conducted using individuals with experience developing competency frameworks, to identify and achieve consensus on the essential items for a competency framework development reporting guideline. RESULTS: An initial checklist was developed by the 35-member expert panel and the research team. Following the steps listed above, a final reporting guideline including 20 essential items across five sections (title and abstract; framework development; development process; testing; and funding/conflicts of interest) was developed. CONCLUSION: The COmpeteNcy FramEwoRk Development in Health Professions (CONFERD-HP) reporting guideline permits a greater understanding of relevant terminology, core concepts, and key items to report for competency framework development in the health professions.


Subject(s)
Checklist , Health Occupations , Humans , Consensus , Delphi Technique
4.
Adv Health Sci Educ Theory Pract ; 28(4): 1333-1345, 2023 10.
Article in English | MEDLINE | ID: mdl-36729196

ABSTRACT

This paper is motivated by a desire to advance assessment in the health professions through encouraging the judicious and productive use of metaphors. Through five specific examples (pixels, driving lesson/test, jury deliberations, signal processing, and assessment as a toolbox), we interrogate how metaphors are being used in assessment to consider what value they add to understanding and implementation of assessment practices. By unpacking these metaphors in action, we probe each metaphor's rationale and function, the gains each metaphor makes, and explore the unintended meanings they may carry. In summarizing common uses of metaphors, we elucidate how there may be both advantages and disadvantages. Metaphors can play important roles in simplifying, complexifying, communicating, translating, encouraging reflection, and convincing. They may be powerfully rhetorical, leading to intended consequences, actions, and other pragmatic outcomes. Although metaphors can be extremely helpful, they do not constitute thorough critique, justified evidence or argumentation. We argue that although metaphors have utility, they must be carefully considered if they are to serve assessment needs in intended ways. We should pay attention to how metaphors may be misinterpreted, what they ignore or unintentionally signal, and perhaps mitigate this with anticipated corrections or nuanced qualifications. Failure to do so may lead to implementing practices that miss underlying and relevant complexities for assessment science and practice. Using metaphors requires careful attention with respect to their role, contributions, benefits and limitations. We highlight the value that comes from critiquing metaphors, and demonstrate the care required to ensure their continued utility.


Subject(s)
Language , Metaphor , Humans
5.
Adv Health Sci Educ Theory Pract ; 28(5): 1697-1709, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37140661

ABSTRACT

In this perspective, the authors critically examine "rater training" as it has been conceptualized and used in medical education. By "rater training," they mean the educational events intended to improve rater performance and contributions during assessment events. Historically, rater training programs have focused on modifying faculty behaviours to achieve psychometric ideals (e.g., reliability, inter-rater reliability, accuracy). The authors argue these ideals may now be poorly aligned with contemporary research informing work-based assessment, introducing a compatibility threat, with no clear direction on how to proceed. To address this issue, the authors provide a brief historical review of "rater training" and provide an analysis of the literature examining the effectiveness of rater training programs. They focus mainly on what has served to define effectiveness or improvements. They then draw on philosophical and conceptual shifts in assessment to demonstrate why the function, effectiveness aims, and structure of rater training requires reimagining. These include shifting competencies for assessors, viewing assessment as a complex cognitive task enacted in a social context, evolving views on biases, and reprioritizing which validity evidence should be most sought in medical education. The authors aim to advance the discussion on rater training by challenging implicit incompatibility issues and stimulating ways to overcome them. They propose that "rater training" (a moniker they suggest be reserved for strong psychometric aims) be augmented with "assessor readiness" programs that link to contemporary assessment science and enact the principle of compatibility between that science and ways of engaging with advances in real-world faculty-learner contexts.


Subject(s)
Education, Medical , Educational Measurement , Humans , Reproducibility of Results
6.
Med Educ ; 56(10): 1042-1050, 2022 10.
Article in English | MEDLINE | ID: mdl-35701388

ABSTRACT

BACKGROUND: Given the widespread use of Multiple Mini Interviews (MMIs), their impact on the selection of candidates and the considerable resources invested in preparing and administering them, it is essential to ensure their quality. Given the variety of station formats in use, the degree to which this factor lies within the control of training programmes, and how little we know about it, the effect of format on MMI quality is a considerable oversight. This study assessed the effect of two popular station formats (interview vs. role-play) on the psychometric properties of MMIs. METHODS: We analysed candidate data from the first 8 years of the Integrated French MMIs (IF-MMI) (2010-2017, n = 11 761 applicants), an MMI organised yearly by three francophone universities and administered at four testing sites located in two Canadian provinces. There were 84 role-play and 96 interview stations administered, totalling 180 stations. Mixed design analyses of variance (ANOVAs) were used to test the effect of station format on candidates' scores and stations' discrimination. Cronbach's alpha coefficients for interview and role-play stations were also compared. Predictive validity of both station formats was estimated with a mixed multiple linear regression model testing the relation of interview and role-play scores with average clerkship performance for those who gained entry to medical school (n = 462). RESULTS: Role-play stations (M = 20.67, standard deviation [SD] = 3.38) had a slightly lower mean score than interview stations (M = 21.36, SD = 3.08), p < 0.01, Cohen's d = 0.2. The correlation between role-play and interview station scores was r = 0.5 (p < 0.01). Discrimination coefficients, Cronbach's alpha and predictive validity statistics did not vary by station format. CONCLUSION: Interview and role-play stations have comparable psychometric properties, suggesting the two formats are interchangeable. Programmes should choose station format based on its match to the personal qualities they are trying to select for.
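For illustration, the following is a minimal Python sketch (not the authors' analysis code) of the kinds of psychometric comparisons this abstract reports: a format mean-score comparison with Cohen's d, the correlation between per-candidate format scores, and Cronbach's alpha per format. All data, sample sizes and column structures below are simulated placeholders, and the paired t-test is a simplified stand-in for the mixed-design ANOVA used in the study.

```python
# Minimal sketch (not the authors' analysis code): format comparison with
# Cohen's d, between-format correlation, and Cronbach's alpha per format.
# All data below are simulated placeholders.
import numpy as np
import pandas as pd
from scipy import stats

def cronbach_alpha(scores: pd.DataFrame) -> float:
    """Cronbach's alpha for a candidates-by-stations score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

rng = np.random.default_rng(0)
n_candidates = 500  # hypothetical cohort size

# Per-candidate station scores for each format (illustrative values only).
interview = pd.DataFrame(rng.normal(21.4, 3.1, (n_candidates, 5)))
role_play = pd.DataFrame(rng.normal(20.7, 3.4, (n_candidates, 4)))

interview_mean = interview.mean(axis=1).to_numpy()
role_play_mean = role_play.mean(axis=1).to_numpy()

# Paired comparison of per-candidate format means.
t, p = stats.ttest_rel(interview_mean, role_play_mean)
r, _ = stats.pearsonr(interview_mean, role_play_mean)
print(f"format difference: t={t:.2f}, p={p:.4f}, d={cohens_d(interview_mean, role_play_mean):.2f}")
print(f"between-format correlation: r={r:.2f}")
print(f"alpha interview={cronbach_alpha(interview):.2f}, alpha role-play={cronbach_alpha(role_play):.2f}")
```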


Subject(s)
School Admission Criteria , Schools, Medical , Canada , Humans , Psychometrics , Reproducibility of Results
7.
BMC Health Serv Res ; 22(1): 595, 2022 May 03.
Article in English | MEDLINE | ID: mdl-35505321

ABSTRACT

BACKGROUND: Being responsive and adaptive to local population needs is a key principle of integrated care, and traditional top-down approaches to health system governance are considered to be ineffective. There is need for more guidance on taking flexible, complexity-aware approaches to governance that foster integration and adaptability in the health system. Over the past two decades, paramedics in Ontario, Canada have been filling gaps in health and social services beyond their traditional mandate of emergency transport. Studying these grassroots, local programs can provide insight into how health systems can be more integrated, adaptive and responsive. METHODS: Semi-structured interviews were conducted with people involved in new, integrated models of paramedic care in Ontario. Audio recordings of interviews were transcribed and coded inductively for participants' experiences, including drivers, enablers and barriers to implementation. Thematic analysis was done to ascertain key concepts from across the dataset. RESULTS: Twenty-six participants from across Ontario's five administrative health regions participated in the study. Participants described a range of programs that included acute, urgent and preventative care driven by local relationship networks of paramedics, hospitals, primary care, social services and home care. Three themes were developed that represent participants' experiences implementing these programs in the Ontario context. The first theme, adapting and being nimble in tension with system structures, related to distributed versus central control of programs, a desire to be nimble and skepticism towards prohibitive legal and regulatory systems. The second theme, evolving and flexible professional role identity, highlighted the value and challenges of a functionally flexible workforce and interest in new roles amongst the paramedic profession. The third theme, unpredictable influences on program implementation, identified events such as the COVID-19 pandemic and changing government priorities as accelerating, redirecting or inhibiting local program development. CONCLUSIONS: The findings of this study add to the discourse on governing health systems towards being more integrated, adaptive and responsive to population needs. Governance strategies include: supporting networks of local organizational relationships; considering the role of a functionally flexible health workforce; promoting a shared vision and framework for collaboration; and enabling distributed, local control and experimentation.


Subject(s)
COVID-19 , Pandemics , Delivery of Health Care , Humans , Ontario , Qualitative Research
8.
Med Educ ; 55(4): 518-529, 2021 04.
Article in English | MEDLINE | ID: mdl-33259070

ABSTRACT

INTRODUCTION: Capitalising on direct workplace observations of residents by interprofessional team members might be an effective strategy to promote formative feedback in postgraduate medical education. To better understand how interprofessional feedback is conceived, delivered, received and used, we explored both feedback provider and receiver perceptions of workplace feedback. METHODS: We conducted 17 individual interviews with residents and eight focus groups with health professionals (HPs) (two nurses, two rehabilitation therapists, two pharmacists and two social workers), for a total of 61 participants. Using a constructivist grounded theory approach, data collection and analysis proceeded as an iterative process using constant comparison to identify and explore themes. RESULTS: Conceptualisations and content of feedback were dependent on whether the resident was perceived as a learner or a peer within the interprofessional relationship. Residents relied on interprofessional role understanding to determine how physician competencies align with HP roles. The perceived alignment was unique to each profession and influenced feedback credibility judgements. Residents prioritised feedback from physicians or within the Medical Expertise domain-a role that HPs felt was over-valued. Despite ideal opportunities for direct observation, operational enactment of feedback was influenced by power differentials between the professions. DISCUSSION: Our results illuminate HPs' conceptualisation of feedback for residents and the social constructs influencing how their feedback is disseminated. Professional identity and social categorisation added complexity to feedback acceptance and incorporation. To ensure that interprofessional feedback can achieve desired outcomes, education programmes should implement strategies to help mitigate intergroup bias and power imbalance.


Subject(s)
Education, Medical , Feedback , Internship and Residency , Education, Medical, Graduate , Humans , Interprofessional Relations , Qualitative Research
9.
Adv Health Sci Educ Theory Pract ; 26(4): 1355-1371, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34003391

ABSTRACT

Competency frameworks provide a link between professional practice, education, training, and assessment. They support and inform downstream processes such as curriculum design, assessment, accreditation and professional accountability. However, existing guidelines are limited in accounting for the complexities of professional practice, potentially undermining the utility of such guidelines and the validity of their outcomes. This necessitates additional ways of "seeing" situated and context-specific practice. We highlight what a conceptual framework informed by systems thinking can offer when developing competency frameworks. Mirroring shifts towards systems thinking in program evaluation and quality improvement, we suggest that similar approaches that identify and make use of the role and influence of system features and contexts can provide ways of augmenting existing guidelines when developing competency frameworks. We framed a systems thinking approach in two ways. First, by using an adaptation of Ecological Systems Theory, which offers a realist perspective of the person and environment, and the evolving interaction between the two. Second, by employing complexity thinking, which obligates attention to the relationships and influences of features within the system, we can explore the multiple complex, unique, and context-embedded problems that exist within and have stake in real-world practice settings. The ability to represent clinical practice when developing competency frameworks can be improved when features that may be relevant, including their potential interactions, are identified and understood. A conceptual framework informed by systems thinking makes visible features of a practice in context that may otherwise be overlooked when developing competency frameworks using existing guidelines.


Subject(s)
Competency-Based Education , Education, Medical, Undergraduate , Clinical Competence , Curriculum , Humans , Systems Analysis
10.
Adv Health Sci Educ Theory Pract ; 26(5): 1597-1623, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34370126

ABSTRACT

Assessment practices have been increasingly informed by a range of philosophical positions. While generally beneficial, the addition of options can lead to misalignment in the philosophical assumptions associated with different features of assessment (e.g., the nature of constructs and competence, ways of assessing, validation approaches). Such incompatibility can threaten the quality and defensibility of researchers' claims, especially when left implicit. We investigated how authors state and use their philosophical positions when designing and reporting on performance-based assessments (PBA) of intrinsic roles, as well as the (in)compatibility of assumptions across assessment features. Drawing on a representative sample of studies examining PBA of intrinsic roles, we used qualitative content analysis to extract data on how authors enacted their philosophical positions across three key assessment features: (1) construct conceptualizations, (2) assessment activities, and (3) validation methods. We also examined patterns in philosophical positioning across features and studies. In reviewing 32 papers from established peer-reviewed journals, we found (a) authors rarely reported their philosophical positions, meaning underlying assumptions could only be inferred; (b) authors approached features of assessment in variable ways that could be informed by or associated with different philosophical assumptions; (c) we experienced uncertainty in determining (in)compatibility of philosophical assumptions across features. Authors' philosophical positions were often vague or absent in the selected contemporary assessment literature. Leaving such details implicit may lead to misinterpretation by knowledge users wishing to implement, build on, or evaluate the work. As such, assessing claims, quality and defensibility may depend increasingly on who is interpreting rather than on what is being interpreted.


Subject(s)
Knowledge , Humans
11.
Eur J Anaesthesiol ; 38(8): 831-838, 2021 08 01.
Article in English | MEDLINE | ID: mdl-33883459

ABSTRACT

BACKGROUND: Decision-making deficits in airway emergencies have led to adverse patient outcomes. A cognitive aid would assist clinicians through critical decision-making steps, preventing key action omission. OBJECTIVE: We aimed to investigate the effects of a visual airway cognitive aid on decision-making in a simulated airway emergency scenario. DESIGN: Randomised controlled study. SETTING: Single-institution, tertiary-level hospital in Toronto, Canada from September 2017 to March 2019. PARTICIPANTS: Teams consisting of a participant anaesthesia resident, nurse and respiratory therapist were randomised to intervention (N = 20 teams) and control groups (N = 20 teams). INTERVENTION: Participants in both groups received a 15-min didactic session on crisis resource management which included teamwork communication and the concepts of cognitive aids for the management of nonairway and airway critical events. Only participants in the intervention group were familiarised, oriented and instructed on a visual airway cognitive aid that was developed for this study. Within 1 to 4 weeks after the teaching session, teams were video-recorded managing a simulated 'cannot intubate-cannot oxygenate' scenario with the aid displayed in the simulation centre. MAIN OUTCOME MEASURES: Decision-making time to perform a front-of-neck access (FONA), airway checklist actions, teamwork performances and a postscenario questionnaire. RESULTS: Both groups performed similar key airway actions; however, the intervention group took a shorter decision-making time than the control group to perform a FONA after a last action [mean ± SD, 80.9 ± 54.5 vs. 122.2 ± 55.7 s; difference (95% CI) -41.2 (-76.5 to -6.0) s, P = 0.023]. Furthermore, the intervention group used the aid more than the control group (63.0 vs. 28.1%, P < 0.001). Total time of scenario completion, action checklist and teamwork performances scores were similar between groups. CONCLUSIONS: Prior exposure and teaching of a visual airway cognitive aid improved decision-making time to perform a FONA during a simulated airway emergency.
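For readers interested in how a between-group mean difference with a 95% confidence interval (as reported for the time to FONA above) can be computed, the following is a minimal, hypothetical Python sketch using a Welch t-test; the simulated values only loosely echo the reported summary statistics and this is not the study's analysis code.

```python
# Minimal sketch (not the study's analysis): Welch t-test and a 95% CI for
# the mean difference in decision-making times. Values are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
intervention = rng.normal(81, 55, 20)   # seconds to FONA, hypothetical teams
control = rng.normal(122, 56, 20)

diff = intervention.mean() - control.mean()
se = np.sqrt(intervention.var(ddof=1) / len(intervention)
             + control.var(ddof=1) / len(control))
# Welch-Satterthwaite degrees of freedom for unequal variances
df = se**4 / ((intervention.var(ddof=1) / len(intervention))**2 / (len(intervention) - 1)
              + (control.var(ddof=1) / len(control))**2 / (len(control) - 1))
ci_low, ci_high = diff + stats.t.ppf([0.025, 0.975], df) * se
t_stat, p_val = stats.ttest_ind(intervention, control, equal_var=False)
print(f"mean difference = {diff:.1f} s, 95% CI ({ci_low:.1f} to {ci_high:.1f}), p = {p_val:.3f}")
```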


Subject(s)
Anesthesiology , Emergencies , Airway Management , Canada , Cognition , Humans
12.
BMC Emerg Med ; 21(1): 117, 2021 10 12.
Article in English | MEDLINE | ID: mdl-34641823

ABSTRACT

BACKGROUND: Increasing hospitalization rates present unique challenges to manage limited inpatient bed capacity and services. Transport by paramedics to the emergency department (ED) may influence hospital admission decisions independent of patient need/acuity, though this relationship has not been established. We examined whether mode of transportation to the ED was independently associated with hospital admission. METHODS: We conducted a retrospective cohort study using the National Ambulatory Care Reporting System (NACRS) from April 1, 2015 to March 31, 2020 in Ontario, Canada. We included all adult patients (≥18 years) who received a triage score in the ED and presented via paramedic transport or self-referral (walk-in). Multivariable binary logistic regression was used to determine the association of mode of transportation with hospital admission, after adjusting for important patient and visit characteristics. RESULTS: During the study period, 21,764,640 ED visits were eligible for study inclusion. Approximately one-fifth (18.5%) of all ED visits were transported by paramedics. All-cause hospital admission incidence was greater when transported by paramedics (35.0% vs. 7.5%) and with each decreasing Canadian Triage and Acuity Scale level. Paramedic transport was independently associated with hospital admission (OR = 3.76; 95%CI = 3.74-3.77), in addition to higher medical acuity, older age, male sex, greater than two comorbidities, treatment in an urban setting and discharge diagnoses specific to the circulatory or digestive systems. CONCLUSIONS: Transport by paramedics to an ED was independently associated with hospital admission as the disposition outcome, when compared against self-referred visits. Our findings highlight patient and visit characteristics associated with hospital admission, and can be used to inform proactive healthcare strategizing for in-patient bed management.
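Below is a minimal sketch of a multivariable binary logistic regression that yields adjusted odds ratios with 95% confidence intervals, in the spirit of the analysis described above; the data frame, variable names and generated values are hypothetical and this is not the study's code.

```python
# Minimal sketch (not the study's code): multivariable binary logistic
# regression reporting adjusted odds ratios with 95% CIs. All variables,
# names and generated data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 5000
visits = pd.DataFrame({
    "admitted": rng.integers(0, 2, n),        # 1 = admitted to hospital
    "paramedic": rng.integers(0, 2, n),       # 1 = arrived by paramedic transport
    "age": rng.normal(55, 20, n).clip(18, 100),
    "male": rng.integers(0, 2, n),
    "ctas": rng.integers(1, 6, n),            # triage acuity level (1 = most acute)
})

model = smf.logit("admitted ~ paramedic + age + male + C(ctas)", data=visits).fit(disp=0)
ci = model.conf_int()
odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "2.5%": np.exp(ci[0]),
    "97.5%": np.exp(ci[1]),
})
print(odds_ratios.round(2))
```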


Subject(s)
Allied Health Personnel , Emergency Service, Hospital , Adult , Aged , Cohort Studies , Hospitalization , Hospitals , Humans , Male , Ontario , Retrospective Studies
13.
J Cancer Educ ; 36(1): 118-125, 2021 02.
Article in English | MEDLINE | ID: mdl-31446618

ABSTRACT

Continuing professional development (CPD) and lifelong learning are core tenets of most healthcare disciplines. Where undergraduate coursework lays the foundation for entry into practice, CPD courses and offerings are designed to aid clinicians in maintaining these competencies. CPD offerings need to be frequently revised and updated to ensure their continued utility. The purpose of this qualitative study was to better understand the CPD needs of members of the University of Toronto's Department of Radiation Oncology (UTDRO) and determine how these needs could be generalized to other CPD programs. Given that UTDRO consists of members of various health disciplines (radiation therapists, medical physicists, radiation oncologists, etc.), eleven semi-structured interviews were conducted with various health professionals from UTDRO. Inductive thematic analysis using qualitative data processing with NVivo® was undertaken. The data were coded, sorted into categories, and subsequently reviewed for emergent themes. Participants noted that a general lack of awareness and lack of access made participation in CPD programs difficult. Members also noted that topics were often impractical, irrelevant, or not inclusive of different professions. Some participants did not feel motivated to engage in CPD offerings due to a general lack of time and lack of incentive. To address the deficiencies of CPD programs, a formal needs assessment that engages stakeholders from different centers and health professions is required. Needs assessments of CPD programs should analyze elements related to access, determine how to utilize technology-enhanced learning (TEL), identify barriers to participation, and explore how to better engage members.


Subject(s)
Radiation Oncology , Faculty , Health Personnel , Humans , Learning , Motivation
14.
Med Educ ; 54(10): 932-942, 2020 10.
Article in English | MEDLINE | ID: mdl-32614480

ABSTRACT

OBJECTIVES: Competency-based medical education (CBME) requires that educators structure assessment of clinical competence using outcome frameworks. Although these frameworks may serve some outcomes well (e.g. represent eventual practice), translating these into workplace-based assessment plans may undermine validity and, therefore, trustworthiness of assessment decisions due to a number of competing factors that may not always be visible or their impact knowable. Explored here is the translation process from outcome framework to formative and summative assessment plans in postgraduate medical education (PGME) in three Canadian universities. METHODS: We conducted a qualitative study involving in-depth semi-structured interviews with leaders of PGME programmes involved in assessment and/or CBME implementation, with a focus on their assessment-based translational activities and evaluation strategies. Interviews were informed by Callon's theory of translation. Our analytical strategy involved directed content analysis, allowing us to be guided by Kane's validity framework, whilst still participating in open coding and analytical memo taking. We then engaged in axial coding to systematically explore themes across the dataset, various situations and our conceptual framework. RESULTS: Twenty-four interviews were conducted involving 15 specialties across three universities. Our results suggest: (i) outcome frameworks are necessary for good assessment but are also viewed as incomplete constructs; (ii) there are a number of social and practical negotiations with competing factors that displace validity as a core influencer in assessment planning, including implementation, accreditation and technology; and (iii) validity exists as threatened, uncertain and assumed due to a number of unchecked assumptions and reliance on surrogates. CONCLUSIONS: Translational processes in CBME involve negotiating with numerous influencing actors and institutions that, from an assessment perspective, provide challenges for assessment scientists, institutions and educators to contend with. These processes are challenging validity as a core element of assessment designs. Educators must reconcile these influences when preparing for or structuring validity arguments.


Subject(s)
Education, Medical , Physicians , Canada , Clinical Competence , Competency-Based Education , Humans
15.
Adv Health Sci Educ Theory Pract ; 25(4): 913-987, 2020 10.
Article in English | MEDLINE | ID: mdl-31797195

ABSTRACT

Competency frameworks serve various roles including outlining characteristics of a competent workforce, facilitating mobility, and analysing or assessing expertise. Given these roles and their relevance in the health professions, we sought to understand the methods and strategies used in the development of existing competency frameworks. We applied the Arksey and O'Malley framework to undertake this scoping review. We searched six electronic databases (MEDLINE, CINAHL, PsycINFO, EMBASE, Scopus, and ERIC) and three grey literature sources (greylit.org, Trove and Google Scholar) using keywords related to competency frameworks. We screened studies for inclusion by title and abstract, and we included studies of any type that described the development of a competency framework in a healthcare profession. Two reviewers independently extracted data including study characteristics. Data synthesis was both quantitative and qualitative. Among 5710 citations, we selected 190 for analysis. The majority of studies were conducted in medicine and nursing professions. Literature reviews and group techniques were conducted in 116 studies each (61%), and 85 (45%) outlined some form of stakeholder deliberation. We observed a significant degree of diversity in methodological strategies and inconsistent adherence to existing guidance on the selection of methods and on who was involved; based on the variation we observed in timeframes, combination, function, application and reporting of methods and strategies, there is no apparent gold standard or standardised approach to competency framework development. We observed significant variation within the conduct and reporting of the competency framework development process. While some variation can be expected given the differences across and within professions, our results suggest there is some difficulty in determining whether methods were fit-for-purpose, and therefore in making determinations regarding the appropriateness of the development process. This uncertainty may unwittingly create and legitimise uncertain or artificial outcomes. There is a need for improved guidance in the process for developing and reporting competency frameworks.


Subject(s)
Clinical Competence/standards , Educational Measurement/standards , Health Occupations/education , Humans , Reproducibility of Results
16.
Adv Health Sci Educ Theory Pract ; 25(4): 1003-1018, 2020 10.
Article in English | MEDLINE | ID: mdl-31677146

ABSTRACT

The array of different philosophical positions underlying contemporary views on competence, assessment strategies and justification has led to advances in assessment science. Challenges may arise when these philosophical positions are not considered in assessment design. These can include (a) a logical incompatibility leading to varied or difficult interpretations of assessment results, (b) an "anything goes" approach, and (c) uncertainty regarding when and in what context various philosophical positions are appropriate. We propose a compatibility principle that recognizes that different philosophical positions commit assessors/assessment researchers to particular ideas, assumptions and commitments, and applies a logic of philosophically-informed, assessment-based inquiry. Assessment is optimized when its underlying philosophical position produces congruent, aligned and coherent views on constructs, assessment strategies, justification and their interpretations. As a way forward we argue that (a) there can and should be variability in the philosophical positions used in assessment, and these should be clearly articulated to promote understanding of assumptions and make sense of justifications; (b) we focus on developing the merits, boundaries and relationships within and/or between philosophical positions in assessment; (c) we examine a core set of principles related to the role and relevance of philosophical positions; (d) we elaborate strategies and criteria to delineate compatible from incompatible; and (e) we articulate a need to broaden knowledge/competencies related to these issues. The broadened use of philosophical positions in assessment in the health professions affects the "state of play" and can undermine assessment programs. This may be overcome with attention to the alignment among underlying assumptions/commitments.


Subject(s)
Clinical Competence/standards , Educational Measurement/methods , Educational Measurement/standards , Health Occupations/education , Philosophy , Humans , Reproducibility of Results
17.
Adv Health Sci Educ Theory Pract ; 23(2): 323-338, 2018 May.
Article in English | MEDLINE | ID: mdl-29079933

ABSTRACT

Assessment of clinical competence is complex and inference based. Trustworthy and defensible assessment processes must have favourable evidence of validity, particularly where decisions are considered high stakes. We aimed to organize, collect and interpret validity evidence for a high stakes simulation-based assessment strategy for certifying paramedics, using Kane's validity framework, which some report as challenging to implement. We describe our experience using the framework, identifying challenges, decision points, interpretations and lessons learned. We considered data related to four inferences (scoring, generalization, extrapolation, implications) occurring during assessment and treated validity as a series of assumptions we must evaluate, resulting in several hypotheses and proposed analyses. We then interpreted our findings across the four inferences, judging if the evidence supported or refuted our proposed uses of the assessment data. Data evaluating "Scoring" included: (a) desirable tool characteristics with acceptable inter-item correlations, (b) strong item-total correlations, (c) low error variance for items and raters, and (d) strong inter-rater reliability. Data evaluating "Generalization" included a robust sampling strategy capturing the majority of relevant medical directives, skills and national competencies, and good overall and inter-station reliability. Data evaluating "Extrapolation" included low correlations between assessment scores by dimension and clinical errors in practice. Data evaluating "Implications" included low error rates in practice. Interpreting our findings according to Kane's framework, we suggest the evidence for scoring, generalization and implications supports use of our simulation-based paramedic assessment strategy as a certifying exam; however, the extrapolation evidence was weak, suggesting exam scores did not predict clinical error rates. Our analysis represents a worked example others can follow when using Kane's validity framework to evaluate, and iteratively develop and refine, assessment strategies.
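To make the "scoring" evidence more concrete, here is a minimal Python sketch of two of the statistics named above, corrected item-total correlations and an inter-rater agreement coefficient; the score matrix and rater decisions are simulated, and this is not the authors' analysis.

```python
# Minimal sketch (not the authors' analysis): corrected item-total
# correlations and inter-rater agreement (Cohen's kappa) on a pass/fail
# decision. Scores and rater decisions are simulated placeholders.
import numpy as np
import pandas as pd
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(3)
scores = pd.DataFrame(rng.integers(1, 6, size=(100, 8)),
                      columns=[f"item_{i}" for i in range(1, 9)])

# Corrected item-total correlation: each item against the total of the others.
for col in scores.columns:
    rest_total = scores.drop(columns=col).sum(axis=1)
    print(f"{col}: item-total r = {scores[col].corr(rest_total):.2f}")

# Agreement between two raters scoring the same 100 candidates pass/fail.
rater_a = rng.integers(0, 2, 100)
rater_b = rater_a.copy()
disagree = rng.random(100) < 0.15          # introduce some disagreement
rater_b[disagree] = 1 - rater_b[disagree]
print(f"Cohen's kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")
```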


Subject(s)
Certification/standards , Clinical Competence/standards , Educational Measurement/methods , Educational Measurement/standards , Emergency Medical Technicians/standards , Decision Making , Humans , Judgment , Observer Variation , Psychometrics , Reproducibility of Results
18.
Prehosp Emerg Care ; 21(5): 652-661, 2017.
Article in English | MEDLINE | ID: mdl-28467124

ABSTRACT

OBJECTIVES: Emergency departments (ED) continue to be overburdened, leading to crowding and elevated risk of negative clinical outcomes. Extending clinical services to paramedics may support efforts to ease ED burdens by promoting health care access and capacity during times of patient crisis. The objective of this study was to identify the clinical course and most responsible diagnosis of patients transported by paramedic services to local EDs to then evaluate the impact of various augmented 9-1-1/paramedic clinical service models on the need for additional ED services. METHODS: A retrospective cohort and model-simulation based study. We retrieved clinical data from hospital records for a random selection of 3,000 patients who engaged 9-1-1/paramedic services and were transported to a regional ED to identify their clinical course (interventions, diagnostics), disposition and most responsible admitting/discharge diagnosis. We used these data to establish, simulate and test numerous paramedic service models and their effect on the need for ED services. RESULTS: A random selection of 3,000 patients was reviewed across 3 hospitals. The majority were female (57.2%) with a mean age of 65 (SD = 21.3). The majority (n = 1954; 65.1%) were discharged directly from the ED, of which 3.6% (n = 108) received no intervention or diagnostic, 20.4% (n = 611) received only a diagnostic, 4.8% (n = 143) received only an intervention and 36.4% (n = 1092) received both an intervention and diagnostic. The proportion of nonadmitted patients rose to 82.2% and 77.2% when considering lower priority patients and age greater than 65, respectively. Patient types were identified based on frequency and association with discharge directly from ED. Twelve simulated augmented paramedic clinical service models are reported with estimated gains in the number of patients who may no longer require ED services ranging from 7.5% (n = 146) to 35.4% (n = 691). CONCLUSIONS: This study suggests a reduction in need for ED services may be achieved through innovative models of paramedic services at the time of crisis. Identifying and confirming patient types/events to target and clinical services to include in the model requires ongoing investigation. Future research will be needed to evaluate the accuracy and impact of the models presented.
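As a rough illustration of how an augmented paramedic service model could be simulated against a cohort of transported patients, consider the following hypothetical Python sketch; the eligibility rule, data fields and proportions are invented and do not correspond to the twelve models reported in the study.

```python
# Minimal sketch (not the study's simulation): apply a hypothetical
# eligibility rule for an alternative paramedic care pathway to a cohort of
# transported patients and count how many might no longer require ED services.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 3000
cohort = pd.DataFrame({
    "age": rng.normal(65, 21, n).clip(18, 100),
    "low_acuity": rng.random(n) < 0.40,        # e.g., lower triage priority
    "diagnostic_only": rng.random(n) < 0.20,   # needed only a diagnostic in the ED
    "admitted": rng.random(n) < 0.35,
})

# Hypothetical model: low-acuity, non-admitted patients who needed only a
# diagnostic could be served by an augmented paramedic pathway instead of the ED.
eligible = cohort["low_acuity"] & cohort["diagnostic_only"] & ~cohort["admitted"]
print(f"patients potentially diverted from the ED: {eligible.sum()} ({eligible.mean():.1%})")
```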


Subject(s)
Allied Health Personnel/statistics & numerical data , Delivery of Health Care/statistics & numerical data , Emergency Medical Services/statistics & numerical data , Emergency Service, Hospital/statistics & numerical data , Adult , Aged , Canada , Cohort Studies , Databases, Factual , Delivery of Health Care/methods , Emergency Medical Services/methods , Female , Hospitalization/statistics & numerical data , Humans , Male , Middle Aged , Models, Theoretical , Patient Discharge/statistics & numerical data , Retrospective Studies
19.
Adv Health Sci Educ Theory Pract ; 22(3): 581-600, 2017 Aug.
Article in English | MEDLINE | ID: mdl-27295218

ABSTRACT

The changing nature of healthcare education and delivery is such that clinicians will increasingly find themselves practicing in contexts that are physically and/or conceptually different from the settings in which they were trained, a practice that conflicts on some level with socio-cultural theories of learning that emphasize learning in context. Our objective was therefore to explore learning in 'professionally distant' contexts. Using paramedic education, where portions of training occur in hospital settings despite preparing students for out-of-hospital work, fifty-three informants (11 current students, 13 recent graduates, 16 paramedic program faculty and 13 program coordinators/directors) took part in five semi-structured focus groups. Participants reflected on the value and role of hospital placements in paramedic student development. All sessions were audio recorded, transcribed verbatim and analyzed using inductive thematic analysis. In this context six educational advantages and two challenges were identified when using professionally distant learning environments. Learning could still be associated with features such as (a) engagement through "authenticity", (b) technical skill development, (c) interpersonal skill development, (d) psychological resilience, (e) healthcare system knowledge and (f) scaffolding. Variability in learning and misalignment with learning goals were identified as potential threats. Learning environments that are professionally distant from eventual practice settings may prove meaningful by providing learners with foundational and preparatory learning experiences for competencies that may be transferrable. This suggests that where learning occurs may be less important than how the experience contributes to the learner's development and the meaning or value he/she derives from it.


Subject(s)
Allied Health Personnel/education , Education, Professional , Emergency Medicine/education , Hospitals , Problem-Based Learning , Focus Groups , Humans
20.
Med Educ ; 50(5): 511-22, 2016 May.
Article in English | MEDLINE | ID: mdl-27072440

ABSTRACT

BACKGROUND: Given the complexity of competency frameworks, associated skills and abilities, and contexts in which they are to be assessed in competency-based education (CBE), there is an increased reliance on rater judgements when considering trainee performance. This increased dependence on rater-based assessment has led to the emergence of rater cognition as a field of research in health professions education. The topic, however, is often conceptualised and ultimately investigated using many different perspectives and theoretical frameworks. Critically analysing how researchers think about, study and discuss rater cognition or the judgement processes in assessment frameworks may provide meaningful and efficient directions in how the field continues to explore the topic. METHODS: We conducted a critical and integrative review of the literature to explore common conceptualisations and unified terminology associated with rater cognition research. We identified 1045 articles on rater-based assessment in health professions education using Scopus, Medline and ERIC, and 78 articles were included in our review. RESULTS: We propose a three-phase framework of observation, processing and integration. We situate nine specific mechanisms and sub-mechanisms described across the literature within these phases: (i) generating automatic impressions about the person; (ii) formulating high-level inferences; (iii) focusing on different dimensions of competencies; (iv) categorising through well-developed schemata based on (a) personal concept of competence, (b) comparison with various exemplars and (c) task and context specificity; (v) weighting and synthesising information differently; (vi) producing narrative judgements; and (vii) translating narrative judgements into scales. CONCLUSION: Our review has allowed us to identify common underlying conceptualisations of observed rater mechanisms and subsequently propose a comprehensive, although complex, framework for the dynamic and contextual nature of the rating process. This framework could help bridge the gap between researchers adopting different perspectives when studying rater cognition and enable the interpretation of contradictory findings of raters' performance by determining which mechanism is enabled or disabled in any given context.


Subject(s)
Cognition , Educational Measurement , Competency-Based Education , Education, Medical , Educational Measurement/methods , Humans , Judgment