Results 1 - 20 of 84
1.
Med Teach ; : 1-9, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38976711

ABSTRACT

INTRODUCTION: Ensuring equivalence in high-stakes performance exams is important for patient safety and candidate fairness. We compared inter-school examiner differences within a shared OSCE and the resulting impact on students' pass/fail categorisation. METHODS: The same six-station formative OSCE ran asynchronously in four medical schools, with two parallel circuits per school. We compared examiners' judgements using Video-based Examiner Score Comparison and Adjustment (VESCA): examiners scored station-specific comparator videos in addition to 'live' student performances, enabling (1) controlled score comparisons by (a) examiner cohort and (b) school, and (2) data linkage to adjust for the influence of examiner cohorts. We calculated score impact and change in pass/fail categorisation by school. RESULTS: On controlled video-based comparisons, inter-school variation in examiners' scoring (16.3%) was nearly double within-school variation (8.8%). Students' scores received a median adjustment of 5.26% (IQR 2.87-7.17%). The impact of adjusting for examiner differences on students' pass/fail categorisation varied by school: adjustment reduced the failure rate from 39.13% to 8.70% in school 2 whilst increasing it from 0.00% to 21.74% in school 4. DISCUSSION: Whilst the formative context may partly account for these differences, the findings raise the question of whether examiners' judgements vary between medical schools. This may benefit from systematic appraisal to safeguard equivalence. VESCA provided a viable method for such comparisons.
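The adjustment logic described above can be sketched in a few lines. This is a minimal illustration with invented cohort names, scores and pass mark; the published method estimates examiner-cohort effects with Many Facet Rasch Modelling, not the simple mean-offset shown here.

```python
# Sketch of VESCA-style score adjustment (hypothetical data).
# Each examiner cohort scores the same comparator videos, so the gap
# between a cohort's video scores and the all-cohort mean estimates that
# cohort's relative stringency; subtracting the gap from the cohort's
# 'live' scores yields adjusted "fair" scores.
# NOTE: the study used Many Facet Rasch Modelling, not this mean-offset.

video_scores = {                 # cohort -> scores for shared videos (%)
    "school_A": [62, 58, 70],    # leniently scoring cohort
    "school_B": [48, 45, 55],    # stringently scoring cohort
}
live_scores = {                  # cohort -> 'live' student scores (%)
    "school_A": [65, 59],
    "school_B": [50, 47],
}
PASS_MARK = 55                   # hypothetical cut score

all_video = [s for v in video_scores.values() for s in v]
grand_mean = sum(all_video) / len(all_video)

results = {}
for cohort, scores in live_scores.items():
    vids = video_scores[cohort]
    offset = sum(vids) / len(vids) - grand_mean   # cohort stringency
    adjusted = [s - offset for s in scores]
    results[cohort] = {
        "offset": offset,
        "fails_before": sum(s < PASS_MARK for s in scores),
        "fails_after": sum(s < PASS_MARK for s in adjusted),
    }

for cohort, r in results.items():
    print(cohort, f"offset={r['offset']:+.1f}",
          f"failures {r['fails_before']} -> {r['fails_after']}")
```

With these invented numbers the adjustment raises the failure count in the leniently scored cohort and lowers it in the stringently scored one — the same bidirectional pattern the abstract reports for schools 2 and 4.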

2.
Med Teach ; : 1-9, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38635469

ABSTRACT

INTRODUCTION: Whilst rarely researched, the authenticity with which Objective Structured Clinical Exams (OSCEs) simulate practice is arguably critical to making valid judgements about candidates' preparedness to progress in their training. We studied how and why an OSCE gave rise to different experiences of authenticity for different participants under different circumstances. METHODS: We used realist evaluation, collecting data through interviews and focus groups with participants across four UK medical schools who took part in an OSCE which aimed to enhance authenticity. RESULTS: Several features of OSCE stations (realistic, complex, complete cases; sufficient time; autonomy; props; guidelines; limited examiner interaction) combined to enable students to project into their future roles, judge and integrate information, consider their actions and act naturally. When this occurred, their performances felt like an authentic representation of their clinical practice. This did not occur consistently: focusing on unavoidable differences from practice, incongruous features, anxiety and preoccupation with examiners' expectations sometimes disrupted immersion, producing inauthenticity. CONCLUSIONS: The perception of authenticity in OSCEs appears to originate from an interaction of station design with individual preferences and contextual expectations. Whilst we tentatively suggest ways to promote authenticity, more understanding is needed of candidates' interaction with simulation and scenario immersion in summative assessment.

3.
Clin Med (Lond) ; 24(1): 100002, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38350406

ABSTRACT

The UK Research Excellence Framework (REF) is an assessment of the quality of research carried out in UK Higher Education Institutions (HEIs), performed in 7-year cycles. The outcome impacts the rankings and funding of UK HEIs, which afford the exercise high priority. Much of what REF measures is known to be biased against academics with protected characteristics: for example, women and ethnic minority researchers are less likely to win grants or be published in prestigious journals. Despite changes to REF since 2014, the risk remains that the process might amplify well-recognised existing disparities. The BMA Women in Academic Medicine and Medical Academic Staff Committee carried out a survey of UK clinical academics' experiences of REF2021. The data indicated the persistence of activities previously characterised as 'extremely harmful' in Research England-commissioned work, affecting up to 10% of clinical academics. While acknowledging the limitations of the data, women appeared to be disproportionately affected.


Subject(s)
Ethnicity , Minority Groups , Humans , Female , England , Exercise , Medical Staff
4.
BMC Med Educ ; 23(1): 803, 2023 Oct 26.
Article in English | MEDLINE | ID: mdl-37885005

ABSTRACT

PURPOSE: Ensuring equivalence of examiners' judgements within distributed objective structured clinical exams (OSCEs) is key to both fairness and validity but is hampered by lack of cross-over in the performances which different groups of examiners observe. This study develops a novel method called Video-based Examiner Score Comparison and Adjustment (VESCA), using it for the first time to compare examiners' scoring across different OSCE sites. MATERIALS/METHODS: Within a summative 16-station OSCE, volunteer students were videoed on each station and all examiners were invited to score station-specific comparator videos in addition to their usual student scoring. Linkage provided through the video scores enabled use of Many Facet Rasch Modelling (MFRM) to compare (1) examiner-cohort and (2) site effects on students' scores. RESULTS: Examiner cohorts varied by 6.9% in the overall score allocated to students of the same ability. Whilst only a tiny difference was apparent between sites, examiner-cohort variability was greater in one site than the other. Adjusting students' scores produced a median change in rank position of 6 places (0.48 deciles); however, 26.9% of students changed their rank position by at least 1 decile. By contrast, only 1 student's pass/fail classification was altered by score adjustment. CONCLUSIONS: Whilst comparatively limited examiner participation rates may limit interpretation of score adjustment in this instance, this study demonstrates the feasibility of using VESCA for quality assurance purposes in large-scale distributed OSCEs.


Subject(s)
Educational Measurement , Students, Medical , Humans , Educational Measurement/methods , Clinical Competence
5.
Med Teach ; 45(6): 559-564, 2023 06.
Article in English | MEDLINE | ID: mdl-36622887

ABSTRACT

INTRODUCTION: The education of the future health care workforce is fundamental to ensuring safe, effective, and inclusive patient care. Despite this, there has been chronic underinvestment in health care education and, even though the need for educators has grown, the true number of medical educators has been in relative decline for over a decade. PURPOSE: In this paper, we focus on the role of doctors as medical educators. We reflect on the culture in which medical education and training are delivered, the challenges faced, and their origins and sustaining factors. We propose a re-framing of this culture by applying Maslow's hierarchy of needs to medical educators, not only as individuals but as a specialist group, and to the system in which this group works, to instigate actionable change and promote self-actualisation for medical educators. DISCUSSION: Promoting and supporting the work of doctors who are educators is critically important. Despite financial investment in some practice areas, overall funding for medical educators and their numbers continue to decline. Continuing Professional Development (CPD) schemes, such as those offered by specialised medical education associations, are welcome, but without time, funding and a supportive culture from key stakeholders, medical educators cannot thrive and reach their potential. CONCLUSION: We need to revolutionise the culture in which medical education is practised, so that medical educators are valued and commensurately rewarded as a diverse group of specialists who have an essential role in training the health care workforce to support the delivery of excellent, inclusive health care for patients. By reimagining the challenges faced as a hierarchy, we show that until the fundamental needs of value, funding and time are met, it will remain challenging to instigate the essential change that is needed.


Subject(s)
Education, Medical , Physicians , Humans , Delivery of Health Care , Motivation , Health Personnel
6.
Med Sci Educ ; 32(2): 371-378, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35528309

ABSTRACT

Introduction: Certainty/uncertainty in medicine is a topic of popular debate. This study aims to understand how biomedical uncertainty is conceptualised by academic medical educators and how it is taught in a medical school in the UK. Methods: This is an exploratory qualitative study grounded in ethnographic principles. This study is based on 10 observations of teaching sessions and seven semi-structured qualitative interviews with medical educators from various biomedical disciplines in a UK medical school. The data set was analysed via a thematic analysis. Results: Four main themes were identified after analysis: (1) ubiquity of biomedical uncertainty, (2) constraints to teaching biomedical uncertainty, (3) the 'medic filter' and (4) fluid distinction: core versus additional knowledge. While medical educators had differing understandings of how biomedical uncertainty is articulated in their disciplines, its presence was ubiquitous. This ubiquity did not translate into teaching due to time constraints and assessment strategies. The 'medic filter' emerged as a strategy that educators employed to decide what to include in their teaching. They made distinctions between core and additional knowledge which were defined in varied ways across disciplines. Additional knowledge often encapsulated biomedical uncertainty. Discussion: Even though the perspective that knowledge is socially constructed is not novel in medical education, it is neither universally valued nor universally applied. Moving beyond situativity theories and into broader debates in social sciences provides new opportunities to discuss the nature of scientific knowledge in medical education. We invite a move away from situated learning to situated knowledge.

7.
Acad Med ; 97(4): 475-476, 2022 04 01.
Article in English | MEDLINE | ID: mdl-35353728
8.
BMC Med Educ ; 22(1): 41, 2022 Jan 17.
Article in English | MEDLINE | ID: mdl-35039023

ABSTRACT

BACKGROUND: Ensuring equivalence of examiners' judgements across different groups of examiners is a priority for large-scale performance assessments in clinical education, both to enhance fairness and to reassure the public. This study extends insight into an innovation called Video-based Examiner Score Comparison and Adjustment (VESCA), which uses video scoring to link otherwise unlinked groups of examiners. This linkage enables comparison of the influence of different examiner groups within a common frame of reference and provision of adjusted "fair" scores to students. Whilst this innovation promises substantial benefit to quality assurance of distributed Objective Structured Clinical Exams (OSCEs), questions remain about how the resulting score adjustments might be influenced by the specific parameters used to operationalise VESCA. Research questions: how similar are estimates of students' score adjustments when the model is run with either (1) fewer comparison videos per participating examiner, or (2) reduced numbers of participating examiners? METHODS: Using secondary analysis of recent research which used VESCA to compare scoring tendencies of different examiner groups, we made numerous copies of the original data and then selectively deleted video scores to reduce either (1) the number of linking videos per examiner (4, versus several permutations of 3, 2 or 1 videos) or (2) examiner participation rates (all participating examiners (76%), versus several permutations of 70%, 60% or 50% participation). After analysing all resulting datasets with Many Facet Rasch Modelling (MFRM), we calculated students' score adjustments for each dataset and compared these with the score adjustments in the original data using Spearman's correlations.
RESULTS: Students' score adjustments derived from 3 videos per examiner correlated highly with score adjustments derived from 4 linking videos (median Rho = 0.93, IQR 0.90-0.95, p < 0.001), with 2 (median Rho = 0.85, IQR 0.81-0.87, p < 0.001) and 1 linking videos (median Rho = 0.52, IQR 0.46-0.64, p < 0.001) producing progressively smaller correlations. Score adjustments were similar for 76% examiner participation versus 70% (median Rho = 0.97, IQR 0.95-0.98, p < 0.001) and 60% (median Rho = 0.95, IQR 0.94-0.98, p < 0.001) participation, but were lower and more variable for 50% examiner participation (median Rho = 0.78, IQR 0.65-0.83, some non-significant). CONCLUSIONS: Whilst VESCA showed some sensitivity to the examined parameters, modest reductions in examiner participation rates or video numbers produced highly similar results. Employing VESCA in distributed or national exams could enhance quality assurance and exam fairness.
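The comparison at the heart of this sensitivity analysis — rank-correlating score adjustments from a reduced dataset against those from the full dataset — can be sketched as below. The adjustment values are invented stand-ins for MFRM outputs, and Spearman's rho is computed from first principles to keep the sketch self-contained.

```python
# Sketch of the sensitivity comparison: Spearman's rank correlation
# between students' score adjustments from the full model (4 linking
# videos) and from a reduced model (fewer videos or examiners).
# The adjustment values below are invented; the study derived them
# from Many Facet Rasch Modelling.

def ranks(xs):
    """Rank values from 1 upward, averaging ranks across ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                       # extend over a tie group
        avg = (i + j) / 2 + 1            # average rank for the group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

full_model = [5.1, 2.9, 7.2, 4.0, 6.3]   # adjustments, full data (%)
reduced    = [4.8, 3.1, 6.9, 6.0, 4.2]   # adjustments, reduced data (%)

rho = spearman(full_model, reduced)
print(f"Spearman's rho = {rho:.2f}")
```

A rho near 1 would mean the reduced model ranks students' adjustments almost identically to the full model — the criterion by which the study judged 3 linking videos or 60% participation acceptable.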


Subject(s)
Educational Measurement , Students, Medical , Clinical Competence , Humans , Judgment
9.
Med Educ ; 56(3): 292-302, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34893998

ABSTRACT

INTRODUCTION: Differential rater function over time (DRIFT) and contrast effects (examiners' scores biased away from the standard of preceding performances) both challenge the fairness of scoring in objective structured clinical exams (OSCEs). This is important because, under some circumstances, these effects could alter whether some candidates pass or fail assessments. Benefitting from experimental control, this study investigated the causality, operation and interaction of both effects simultaneously for the first time in an OSCE setting. METHODS: We used secondary analysis of data from an OSCE in which examiners scored embedded videos of student performances interspersed between live students. Embedded video position varied between examiners (early vs. late) whilst the standard of preceding performances varied naturally (previous high or low). We examined linear relationships suggestive of DRIFT and contrast effects in all within-OSCE data before comparing the influence and interaction of 'early' versus 'late' and 'previous high' versus 'previous low' conditions on embedded video scores. RESULTS: Linear relationships in the data did not support the presence of DRIFT or contrast effects. Embedded videos were scored higher early (19.9 [19.4-20.5]) versus late (18.6 [18.1-19.1], p < 0.001), but scores did not differ between previous high and previous low conditions. The interaction term was non-significant. CONCLUSIONS: In this instance, the small DRIFT effect we observed on embedded videos can be causally attributed to examiner behaviour. Contrast effects appear less ubiquitous than some prior research suggests. Possible mediators of these findings include OSCE context, detail of task specification, examiners' cognitive load and the distribution of learners' ability.
As the operation of these effects appears to vary across contexts, further research is needed to determine the prevalence and mechanisms of contrast and DRIFT effects, so that assessments may be designed in ways that are likely to avoid their occurrence. Quality assurance should monitor for these contextually variable effects in order to ensure OSCE equivalence.


Subject(s)
Clinical Competence , Educational Measurement , Humans
10.
BMJ Open ; 12(12): e064387, 2022 12 07.
Article in English | MEDLINE | ID: mdl-36600366

ABSTRACT

INTRODUCTION: Objective structured clinical exams (OSCEs) are a cornerstone of assessing the competence of trainee healthcare professionals, but have been criticised for (1) lacking authenticity, (2) variability in examiners' judgements which can challenge assessment equivalence and (3) for limited diagnosticity of trainees' focal strengths and weaknesses. In response, this study aims to investigate whether (1) sharing integrated-task OSCE stations across institutions can increase perceived authenticity, while (2) enhancing assessment equivalence by enabling comparison of the standard of examiners' judgements between institutions using a novel methodology (video-based score comparison and adjustment (VESCA)) and (3) exploring the potential to develop more diagnostic signals from data on students' performances. METHODS AND ANALYSIS: The study will use a complex intervention design, developing, implementing and sharing an integrated-task (research) OSCE across four UK medical schools. It will use VESCA to compare examiner scoring differences between groups of examiners and different sites, while studying how, why and for whom the shared OSCE and VESCA operate across participating schools. Quantitative analysis will use Many Facet Rasch Modelling to compare the influence of different examiners groups and sites on students' scores, while the operation of the two interventions (shared integrated task OSCEs; VESCA) will be studied through the theory-driven method of Realist evaluation. Further exploratory analyses will examine diagnostic performance signals within data. ETHICS AND DISSEMINATION: The study will be extra to usual course requirements and all participation will be voluntary. We will uphold principles of informed consent, the right to withdraw, confidentiality with pseudonymity and strict data security. The study has received ethical approval from Keele University Research Ethics Committee. 
Findings will be academically published and will contribute to good practice guidance on (1) the use of VESCA and (2) sharing and use of integrated-task OSCE stations.


Subject(s)
Education, Medical, Undergraduate , Students, Medical , Humans , Educational Measurement/methods , Education, Medical, Undergraduate/methods , Clinical Competence , Schools, Medical , Multicenter Studies as Topic
11.
Med Teach ; 43(9): 1070-1078, 2021 09.
Article in English | MEDLINE | ID: mdl-34496725

ABSTRACT

INTRODUCTION: Communication skills are assessed by medically-enculturated examiners using consensus frameworks which were developed with limited patient involvement. Assessments consequently risk rewarding performance which incompletely serves patients' authentic communication needs. Whilst regulators require patient involvement in assessment, little is known about how this can be achieved. We aimed to explore patients' perceptions of students' communication skills, examiner feedback and potential roles for patients in assessment. METHODS: Using constructivist grounded theory, we performed cognitive stimulated, semi-structured interviews with patients who watched videos of student performances in communication-focused OSCE stations and read the corresponding examiner feedback. Data were analysed using grounded theory methods. RESULTS: A disconnect occurred between participants' and examiners' views of students' communication skills. Whilst patients frequently commented on students' use of medical terminology, examiners omitted to mention this in feedback. Patients' judgements of students' performances varied widely, reflecting different preferences and beliefs. Participants viewed this variability as an opportunity for students to learn from diverse lived experiences. Participants perceived a variety of roles through which they could enhance assessment authenticity. DISCUSSION: Integrating patients into communication skills assessments could help to highlight deficiencies in students' communication which medically-enculturated examiners may miss. Overcoming the challenges inherent to this is likely to enhance graduates' preparedness for practice.


Subject(s)
Patient Participation , Students, Medical , Clinical Competence , Communication , Educational Measurement , Humans
12.
Acad Med ; 96(8): 1189-1196, 2021 08 01.
Article in English | MEDLINE | ID: mdl-33656012

ABSTRACT

PURPOSE: Ensuring that examiners in different parallel circuits of objective structured clinical examinations (OSCEs) judge to the same standard is critical to the chain of validity. Recent work suggests examiner-cohort (i.e., the particular group of examiners) could significantly alter outcomes for some candidates. Despite this, examiner-cohort effects are rarely examined since fully nested data (i.e., no crossover between the students judged by different examiner groups) limit comparisons. In this study, the authors aim to replicate and further develop a novel method called Video-based Examiner Score Comparison and Adjustment (VESCA), so it can be used to enhance quality assurance of distributed or national OSCEs. METHOD: In 2019, 6 volunteer students were filmed on 12 stations in a summative OSCE. In addition to examining live student performances, examiners from 8 separate examiner-cohorts scored the pool of video performances. Examiners scored videos specific to their station. Video scores linked otherwise fully nested data, enabling comparisons by Many Facet Rasch Modeling. Authors compared and adjusted for examiner-cohort effects. They also compared examiners' scores when videos were embedded (interspersed between live students during the OSCE) or judged later via the Internet. RESULTS: Having accounted for differences in students' ability, different examiner-cohort scores for the same ability of student ranged from 18.57 of 27 (68.8%) to 20.49 (75.9%), Cohen's d = 1.3. Score adjustment changed the pass/fail classification for up to 16% of students depending on the modeled cut score. Internet and embedded video scoring showed no difference in mean scores or variability. Examiners' accuracy did not deteriorate over the 3-week Internet scoring period. CONCLUSIONS: Examiner-cohorts produced a replicable, significant influence on OSCE scores that was unaccounted for by typical assessment psychometrics. 
VESCA offers a promising means to enhance validity and fairness in distributed OSCEs or national exams. Internet-based scoring may enhance VESCA's feasibility.


Subject(s)
Clinical Competence , Educational Measurement , Educational Measurement/methods , Humans , Physical Examination , Psychometrics
13.
Cureus ; 13(1): e12762, 2021 Jan 18.
Article in English | MEDLINE | ID: mdl-33489639

ABSTRACT

Introduction and aims Assessment of chest radiographs is a fundamental clinical skill, often taught opportunistically. Medical students are taught how to read adult chest radiographs; however, in our experience, there is often a lack of structured training in the interpretation of pediatric chest radiographs. Our aim was to develop and evaluate an online approach for medical students to learn this skill. Materials and methods Ericsson's expertise acquisition theory was used to develop 10 sets of 10 practice radiographs, which were graded using the X-ray difficulty score. Medical student volunteers (from Keele University School of Medicine) were recruited in the paediatric rotation of their first clinical year. Pre- and post-training tests of identical difficulty were offered. A semistructured focus group was conducted after the tests, the transcription of which was analyzed using grounded theory. Results Of 117 students in the year, 54 (46%) originally volunteered. Engagement was initially high but fell during the year, particularly during the pre-examination block. The high drop-out rate made quantitative measurement of effectiveness difficult. The focus group suggested that the pressure of other work, exam preparation, technical factors, and inflexibility of the study protocol reduced engagement. Conclusions Although the topic covered was seen as important and relevant to exams, the current system requires development to make it more effective and engaging.

14.
Med Teach ; 42(11): 1250-1260, 2020 11.
Article in English | MEDLINE | ID: mdl-32749915

ABSTRACT

INTRODUCTION: Novel uses of video aim to enhance assessment in health-professions education. Whilst these uses presume equivalence between video and live scoring, some research suggests that poorly understood variations could challenge validity. We aimed to understand examiners' and students' interaction with video whilst developing procedures to promote its optimal use. METHODS: Using design-based research, we developed theory and procedures for video use in assessment, iteratively adapting conditions across simulated OSCE stations. We explored examiners' and students' perceptions using think-aloud protocols, interviews and a focus group. Data were analysed using constructivist grounded-theory methods. RESULTS: Video-based assessment produced detachment and reduced volitional control for examiners. Examiners' ability to make valid video-based judgements was mediated by the interaction of station content with specifically selected filming parameters. Examiners displayed several judgemental tendencies which helped them manage videos' limitations but could also bias judgements in some circumstances. Students rarely found carefully placed cameras intrusive and considered filming acceptable if adequately justified. DISCUSSION: Successful use of video-based assessment relies on balancing the need to ensure station-specific information adequacy; avoiding disruptive intrusion; and the degree of justification provided by video's educational purpose. Video has the potential to enhance assessment validity and students' learning when an appropriate balance is achieved.


Subject(s)
Clinical Competence , Education, Medical , Educational Measurement , Humans , Judgment
15.
Adv Health Sci Educ Theory Pract ; 25(4): 845-875, 2020 10.
Article in English | MEDLINE | ID: mdl-31997115

ABSTRACT

Undergraduate clinical assessors make expert, multifaceted judgements of consultation skills in concert with medical school OSCE grading rubrics. Assessors are not cognitive machines: their judgements are made in the light of prior experience and social interactions with students. It is important to understand assessors' working conceptualisations of consultation skills and whether these could be used to develop tools for undergraduate assessment. We aimed to identify any working conceptualisations that assessors use while assessing undergraduate medical students' consultation skills, and to develop assessment tools for undergraduate consultation skills based on assessors' working conceptualisations and natural language. In semi-structured interviews, 12 experienced assessors from a UK medical school populated a blank assessment scale with personally meaningful descriptors while describing how they made judgements of students' consultation skills (at exit standard). A two-step iterative thematic framework analysis was performed, drawing on constructionism and interactionism. Five domains were found within working conceptualisations of consultation skills: Application of knowledge; Manner with patients; Getting it done; Safety; and Overall impression. Three mechanisms of judgement about student behaviour were identified: observations, inferences and feelings. Assessment tools drawing on participants' conceptualisations and natural language were generated, including 'grade descriptors' for common conceptualisations in each domain by mechanism of judgement, matched to grading rubrics of Fail, Borderline, Pass and Very Good. Utilising working conceptualisations to develop assessment tools is feasible and potentially useful. Further work is needed to test the impact on assessment quality.


Subject(s)
Education, Medical, Undergraduate/organization & administration , Educational Measurement/standards , Judgment , Behavior , Clinical Competence , Education, Medical, Undergraduate/standards , Humans , Interviews as Topic , Knowledge , Patient Safety , Physician-Patient Relations , Qualitative Research
16.
Adv Health Sci Educ Theory Pract ; 25(4): 825-843, 2020 10.
Article in English | MEDLINE | ID: mdl-31960189

ABSTRACT

Transitioning from student to doctor is notoriously challenging. Newly qualified doctors feel required to make decisions before owning their new identity. It is essential to understand how responsibility relates to identity formation to improve transitions for doctors and patients. This multiphase ethnographic study explores realities of transition through anticipatory, lived and reflective stages. We utilised Labov's narrative framework (Labov in J Narrat Life Hist 7(1-4):395-415, 1997) to conduct in-depth analysis of complex relationships between changes in responsibility and development of professional identity. Our objective was to understand how these concepts interact. Newly qualified doctors acclimatise to their role requirements through participatory experience, perceived as a series of challenges, told as stories of adventure or quest. Rules of interaction within clinical teams were complex, context dependent and rarely explicit. Students, newly qualified and supervising doctors felt tensions around whether responsibility should be grasped or conferred. Perceived clinical necessity was a common determinant of responsibility rather than planned learning. Identity formation was chronologically mismatched to accepting responsibility. We provide a rich illumination of the complex relationship between responsibility and identity pre, during, and post-transition to qualified doctor: the two are inherently intertwined, each generating the other through successful actions in practice. This suggests successful transition requires a supported period of identity reconciliation during which responsibility may feel burdensome. During this, there is a fine line between too much and too little responsibility: seemingly innocuous assumptions can have a significant impact. More effort is needed to facilitate behaviours that delegate authority to the transitioning learner whilst maintaining true oversight.


Subject(s)
Physicians/psychology , Social Identification , Students, Medical/psychology , Anthropology, Cultural , Humans , Learning , Physician's Role
17.
Br J Gen Pract ; 70(690): e71-e77, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31636129

ABSTRACT

BACKGROUND: Current funding arrangements for undergraduate medical student placements in general practice are widely regarded as outdated, inequitable, and in need of urgent review. AIM: To undertake a detailed costing exercise to inform the setting of a national English tariff for undergraduate medical student placements in general practice. DESIGN AND SETTING: A cost-collection survey in teaching practices across all regions of England between January 2017 and February 2017. METHOD: Following the development of a cost-collection tool and an initial pilot study, a cost-collection template was sent to 50 selected teaching practices across all 25 medical schools in England. Detailed guidance on completion was provided for practices. Data were analysed by the Department of Health and Social Care. RESULTS: A total of 49 practices submitted data. The mean cost per half-day student placement in general practice was 111 GBP (146 USD), 95% confidence interval = 100 to 121, with small differences between students in different years of study. Based on 10 sessions per student per week, this equated to around 1100 GBP (1460 USD) per student placement week. CONCLUSION: The costs of undergraduate placements in general practice are considerably greater than the funding available at the time of writing, and broadly comparable with secondary care funding in the same period. The actual cost of placing a medical student full time in general practice for a 37-week academic year is 40 700 GBP (53 640 USD), compared with the average payment rate of only 22 000 GBP (28 990 USD) per year at the time this study was undertaken.
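The headline figures follow from simple arithmetic on the per-session cost, as a quick check — assuming, as the abstract states, 10 sessions per student per week and a 37-week academic year:

```python
# Reproducing the cost arithmetic reported in the abstract.
cost_per_session_gbp = 111        # mean cost per half-day placement session
sessions_per_week = 10
weeks_per_year = 37

weekly_cost = cost_per_session_gbp * sessions_per_week   # 1110 GBP
# The abstract rounds the weekly figure to ~1100 GBP before annualising:
annual_cost = 1100 * weeks_per_year                      # 40 700 GBP
funding_gap = annual_cost - 22_000                       # vs average payment

print(weekly_cost, annual_cost, funding_gap)
```

Note the annual figure of 40 700 GBP is consistent with the rounded weekly cost of ~1100 GBP, not the exact 1110 GBP per week, leaving a shortfall of roughly 18 700 GBP per student per year against the 22 000 GBP payment rate.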


Subject(s)
Education, Medical, Undergraduate/economics , General Practice/education , Teaching/statistics & numerical data , Adult , Attitude of Health Personnel , Costs and Cost Analysis , Education, Medical, Undergraduate/standards , England , Female , General Practice/economics , Health Services Research , Hospitals, Teaching , Humans , Male , Students, Medical
18.
Med Educ ; 53(8): 778-787, 2019 08.
Article in English | MEDLINE | ID: mdl-31012131

ABSTRACT

CONTEXT: Globally, primary health care is facing workforce shortages. Longer and higher-quality placements in primary care increase the likelihood of medical students choosing this specialty. However, the recruitment and retention of community primary care teachers are challenging, and the relevant research was predominantly carried out in the 1990s. We sought to understand contemporary facilitators of, and barriers to, general practitioner (GP) engagement with undergraduate education. Communities of practice (CoP) theory offers a novel conceptualisation, which may be pertinent in other community-based teaching settings. METHODS: Semi-structured interviews were undertaken with 24 GP teachers at four UK medical schools. We purposively sampled GPs new to teaching, established GP teachers and GPs who had recently stopped teaching. We undertook NVivo-assisted deductive and inductive thematic analysis of transcripts, using CoP theory to interpret the data. RESULTS: CoP theory illustrated that teachers negotiate membership of three CoPs: (i) clinical practice; (ii) the medical school; and (iii) teaching. The delivery of clinical care and teaching may be integrated or may exist in tension, depending on whether the teaching and the teacher are positioned as central or peripheral to the clinical CoP; remuneration, workload, space and the expansion of GP trainee numbers all affect this positioning. Teachers did not identify strongly as members of the medical school or of a teaching community, and perceptions of membership were affected by medical school communication and support. The findings also demonstrate gaps in medical school recruitment. CONCLUSIONS: This research demonstrates the marginalisation of primary care-based teaching and proposes a novel explanation rooted in CoP theory. Concepts including identity and membership may be pertinent to other community-based teaching settings. We recommend that medical schools review and broaden recruitment methods. Teacher retention may be improved by optimising the interface between medical schools and teachers, fostering a teaching community, increasing professional rewards for teaching involvement, and altering medical school expectations of learning in primary care.


Subject(s)
General Practice/education , General Practitioners/supply & distribution , Students, Medical , Teaching , Education, Medical, Undergraduate , Faculty, Medical/psychology , Female , Humans , Interviews as Topic , Male , United Kingdom
20.
Med Teach ; 41(3): 271-274, 2019 03.
Article in English | MEDLINE | ID: mdl-29400107

ABSTRACT

Undergraduate medical education has expanded substantially in recent years, through both the establishment of new programs and increased student numbers in existing programs. This expansion has placed pressure on the capacity for training students in clinical placements, raising concerns about the risk of diluted experience and reduced work readiness. The concerns have been greatest in more traditional environments, where clinical placements in large academic medical centers are often regarded as the "gold standard". However, there are ways of exposing medical students to patient interactions and clinical supervisors in many other contexts. In this paper, we share our experiences and observations of expanding clinical placements for both existing and new medical programs in several international locations. While this is not necessarily an easy task, a wide range of opportunities can be accessed by asking the right questions of the right people, often with only relatively modest changes in resource allocation.


Subject(s)
Capacity Building/organization & administration , Clinical Competence , Curriculum/standards , Education, Medical, Undergraduate/organization & administration , Organizational Innovation , Humans , Learning , Outcome Assessment, Health Care , Schools, Medical/standards , Students, Medical/statistics & numerical data