Results 1 - 20 of 58
1.
Perspect Med Educ ; 13(1): 250-254, 2024.
Article in English | MEDLINE | ID: mdl-38680196

ABSTRACT

The use of the p-value in quantitative research, particularly its threshold of "P < 0.05" for determining "statistical significance," has long been a cornerstone of statistical analysis in research. However, this standard has been increasingly scrutinized for its potential to produce misleading findings, especially when the practical significance, the number of comparisons, or the suitability of statistical tests is not properly considered. In response to the controversy around the use of p-values, the American Statistical Association published a statement in 2016 that challenged the research community to abandon the term "statistically significant". This stance has been echoed by leading scientific journals, which urge a significant reduction in, or complete elimination of, the reliance on p-values when reporting results. To provide guidance to researchers in health professions education, this paper provides a succinct overview of the definition of the p-value and the ongoing debate regarding its use. It reflects on the controversy by highlighting the common pitfalls associated with p-value interpretation and usage, such as misinterpretation, overemphasis, and false dichotomization between "significant" and "non-significant" results. This paper also outlines specific recommendations for the effective use of p-values in statistical reporting, including reporting effect sizes, confidence intervals, and the null hypothesis, and conducting sensitivity analyses for appropriate interpretation. These considerations aim to guide researchers toward a more nuanced and informative use of p-values.
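The reporting practices recommended in this abstract (an effect size and a confidence interval alongside the p-value) can be sketched in a few lines of Python; the two score samples below are simulated, purely illustrative data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=72, scale=10, size=40)  # hypothetical exam scores
group_b = rng.normal(loc=68, scale=10, size=40)

# The p-value alone...
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# ...reported together with an effect size (Cohen's d, pooled SD)...
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_a.mean() - group_b.mean()) / pooled_sd

# ...and a 95% confidence interval for the mean difference.
diff = group_a.mean() - group_b.mean()
se = np.sqrt(group_a.var(ddof=1) / 40 + group_b.var(ddof=1) / 40)
crit = stats.t.ppf(0.975, df=78)
ci_low, ci_high = diff - crit * se, diff + crit * se

print(f"p = {p_value:.3f}, d = {cohens_d:.2f}, "
      f"95% CI for difference [{ci_low:.2f}, {ci_high:.2f}]")
```

Reporting all three quantities lets readers judge practical significance rather than relying on a lone "significant/non-significant" verdict.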


Subject(s)
Research Design , Humans , Data Interpretation, Statistical , Research Design/standards , Research Design/trends , Research Design/statistics & numerical data
2.
Sci Rep ; 14(1): 5809, 2024 03 09.
Article in English | MEDLINE | ID: mdl-38461322

ABSTRACT

This study aimed to develop a deep learning model to assess the quality of fetal echocardiography and to perform prospective clinical validation. The model was trained on data from the 18-22-week anomaly scan conducted in seven hospitals from 2008 to 2018. Prospective validation involved 100 patients from two hospitals. A total of 5363 images from 2551 pregnancies were used for training and validation. The model's segmentation accuracy depended on image quality, as measured by a quality score (QS). It achieved an overall average accuracy of 0.91 (SD 0.09) across the test set, with images of above-average QS scoring 0.97 (SD 0.03). During prospective validation of 192 images, clinicians rated 44.8% (SD 9.8) of images as equal in quality, 18.69% (SD 5.7) favoring auto-captured images, and 36.51% (SD 9.0) preferring manually captured ones. Images with above-average QS showed better agreement on segmentations (p < 0.001) and QS (p < 0.001) with fetal medicine experts. Auto-capture saved additional planes beyond protocol requirements, resulting in more comprehensive echocardiographies. A low QS had an adverse effect on both model performance and clinicians' agreement with model feedback. The findings highlight the importance of developing and evaluating AI models based on 'noisy' real-life data rather than pursuing the highest accuracy possible with retrospective academic-grade data.


Subject(s)
Echocardiography , Female , Pregnancy , Humans , Retrospective Studies
3.
Neonatology ; 121(3): 314-326, 2024.
Article in English | MEDLINE | ID: mdl-38408441

ABSTRACT

INTRODUCTION: Simulation-based training (SBT) aids healthcare providers in acquiring the technical skills necessary to improve patient outcomes and safety. However, since SBT may require significant resources, training all skills to a comparable extent is impractical. Hence, a strategic prioritization of technical skills is necessary. While the European Training Requirements in Neonatology provide guidance on necessary skills, they lack prioritization. We aimed to identify and prioritize technical skills for an SBT curriculum in neonatology. METHODS: A three-round modified Delphi process of expert neonatologists and neonatal trainees was performed. In round one, the participants listed all the technical skills newly trained neonatologists should master. The content analysis excluded duplicates and non-technical skills. In round two, the Copenhagen Academy for Medical Education and Simulation Needs Assessment Formula (CAMES-NAF) was used to preliminarily prioritize the technical skills according to frequency, importance of competency, SBT impact on patient safety, and feasibility for SBT. In round three, the participants further refined and reprioritized the technical skills. Items achieving consensus (agreement of ≥75%) were included. RESULTS: We included 168 participants from 10 European countries. The response rates in rounds two and three were 80% (135/168) and 87% (117/135), respectively. In round one, the participants suggested 1964 different items. Content analysis revealed 81 unique technical skills prioritized in round two. In round three, 39 technical skills achieved consensus and were included. CONCLUSION: We reached a European consensus on a prioritized list of 39 technical skills to be included in an SBT curriculum in neonatology.
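The ≥75% consensus rule used in round three amounts to a simple filter over agreement shares. The skill names and percentages below are invented stand-ins, not the study's actual items:

```python
# Hypothetical round-three results: share of experts agreeing each
# technical skill belongs in the SBT curriculum (invented values).
agreement = {
    "umbilical catheterisation": 0.92,
    "neonatal intubation": 0.88,
    "chest drain insertion": 0.74,
    "ECMO cannulation": 0.41,
}

CONSENSUS = 0.75  # inclusion threshold from the Delphi protocol
included = [skill for skill, share in agreement.items() if share >= CONSENSUS]
print(included)  # → ['umbilical catheterisation', 'neonatal intubation']
```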


Subject(s)
Clinical Competence , Curriculum , Delphi Technique , Neonatology , Simulation Training , Neonatology/education , Humans , Europe , Simulation Training/methods , Female , Male , Adult
4.
Med Teach ; 46(4): 471-485, 2024 04.
Article in English | MEDLINE | ID: mdl-38306211

ABSTRACT

Changes in digital technology, increasing volume of data collection, and advances in methods have the potential to unleash the value of big data generated through the education of health professionals. Coupled with this potential are legitimate concerns about how data can be used or misused in ways that limit autonomy or equity, or harm stakeholders. This consensus statement is intended to address these issues by foregrounding the ethical imperatives for engaging with big data as well as the potential risks and challenges. Recognizing the wide and ever-evolving scope of big data scholarship, we focus on foundational issues for framing and engaging in research. We ground our recommendations in the context of big data created through data sharing across and within the stages of the continuum of the education and training of health professionals. Ultimately, the goal of this statement is to support a culture of trust and quality for big data research to deliver on its promises for health professions education (HPE) and the health of society. Based on expert consensus and review of the literature, we report 19 recommendations on (1) framing scholarship and research, (2) considering unique ethical practices, (3) governance of data sharing collaborations that engage stakeholders, (4) best practices for data sharing processes, (5) the importance of knowledge translation, and (6) advancing the quality of scholarship through multidisciplinary collaboration. The recommendations were modified and refined based on feedback from the 2022 Ottawa Conference attendees and subsequent public engagement. Adoption of these recommendations can help HPE scholars share data ethically and engage in high-impact big data scholarship, which in turn can help the field meet the ultimate goal: high-quality education that leads to high-quality healthcare.


Subject(s)
Big Data , Health Occupations , Information Dissemination , Humans , Health Occupations/education , Consensus
5.
J Robot Surg ; 18(1): 47, 2024 Jan 20.
Article in English | MEDLINE | ID: mdl-38244130

ABSTRACT

The aim of this study was to collect validity evidence for the assessment of surgical competence through the classification of general surgical gestures for a simulated robot-assisted radical prostatectomy (RARP). We used 165 video recordings of novice and experienced RARP surgeons performing three parts of the RARP procedure on the RobotiX Mentor. We annotated the surgical tasks with different surgical gestures: dissection, hemostatic control, application of clips, needle handling, and suturing. The gestures were analyzed using idle time (periods with minimal instrument movements) and active time (whenever a surgical gesture was annotated). The distribution of surgical gestures was described using a one-dimensional heat map ('snail tracks'). All surgeons had a similar percentage of idle time, but novices had longer phases of idle time (mean time: 21 vs. 15 s, p < 0.001). Novices used a higher total number of surgical gestures (number of phases: 45 vs. 35, p < 0.001), and each phase was longer compared with those of the experienced surgeons (mean time: 10 vs. 8 s, p < 0.001). There was a different pattern of gestures between novices and experienced surgeons, as seen in a different distribution of the phases. General surgical gestures can be used to assess surgical competence in simulated RARP and can be displayed as a visual tool to show how performance is improving. The established pass/fail level may be used to ensure the competence of residents before proceeding with supervised real-life surgery. The next step is to investigate whether the developed tool can optimize automated feedback during simulator training.
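The idle-time/active-time split described above can be sketched as simple thresholding of instrument speed over time. The speed samples and the 1 mm/s cutoff below are hypothetical, chosen only to illustrate the idea:

```python
import numpy as np

# Hypothetical instrument speed samples (mm/s), one per second
speed = np.array([0.1, 0.2, 0.1, 5.0, 6.2, 4.8, 0.1, 0.0, 3.9, 4.4])
IDLE_THRESHOLD = 1.0  # assumed cutoff for "minimal instrument movement"

idle = speed < IDLE_THRESHOLD
idle_time = int(idle.sum())        # seconds classified as idle
active_time = int((~idle).sum())   # seconds classified as active

# Count idle phases: runs of consecutive idle samples
transitions = np.diff(idle.astype(int))
n_idle_phases = int(idle[0]) + int((transitions == 1).sum())
print(idle_time, active_time, n_idle_phases)  # → 5 5 2
```

Comparing phase counts and mean phase durations between groups, as the study did, then reduces to summary statistics over these runs.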


Subject(s)
Robotic Surgical Procedures , Male , Humans , Robotic Surgical Procedures/methods , Gestures , Clinical Competence , Prostate , Prostatectomy/methods
6.
BMC Med Educ ; 24(1): 15, 2024 Jan 03.
Article in English | MEDLINE | ID: mdl-38172820

ABSTRACT

BACKGROUND: Ultrasound is a safe and effective diagnostic tool used within several specialties. However, the quality of ultrasound scans relies on sufficiently skilled clinician operators. The aim of this study was to explore the validity of automated assessments of upper abdominal ultrasound skills using an ultrasound simulator. METHODS: Twenty-five novices and five experts were recruited, all of whom completed an assessment program for the evaluation of upper abdominal ultrasound skills on a virtual reality simulator. The program included five modules that assessed different organ systems using automated simulator metrics. We used Messick's framework to explore the validity evidence of these simulator metrics to determine the contents of a final simulator test. We used the contrasting groups method to establish a pass/fail level for the final simulator test. RESULTS: Thirty-seven out of 60 metrics were able to discriminate between novices and experts (p < 0.05). The median simulator score of the final simulator test including the metrics with validity evidence was 26.68% (range: 8.1-40.5%) for novices and 85.1% (range: 56.8-91.9%) for experts. The internal structure was assessed by Cronbach's alpha (0.93) and the intraclass correlation coefficient (0.89). The pass/fail level was determined to be 50.9%. This pass/fail criterion found no passing novices or failing experts. CONCLUSIONS: This study collected validity evidence for simulation-based assessment of upper abdominal ultrasound examinations, which is the first step toward competency-based training. Future studies may examine how competency-based training in the simulated setting translates into improvements in clinical performance.
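The contrasting groups method mentioned in the abstract places the pass/fail score where the two groups' fitted score distributions intersect. Below is a minimal sketch using normal fits; the novice and expert scores are invented, not the study's data:

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

# Hypothetical simulator scores (%) for the two contrasting groups
novice_scores = np.array([8.1, 15.0, 22.5, 26.7, 30.2, 35.8, 40.5])
expert_scores = np.array([56.8, 70.4, 82.3, 85.1, 88.9, 91.9])

mu_n, sd_n = novice_scores.mean(), novice_scores.std(ddof=1)
mu_e, sd_e = expert_scores.mean(), expert_scores.std(ddof=1)

# The cutoff is the point between the means where the two fitted
# normal densities are equal (the curves intersect).
def density_gap(x):
    return stats.norm.pdf(x, mu_n, sd_n) - stats.norm.pdf(x, mu_e, sd_e)

cutoff = brentq(density_gap, mu_n, mu_e)
print(f"pass/fail level: {cutoff:.1f}%")
```

Once a cutoff is set, the false-positive and false-negative rates can be checked against the observed groups, as the study did when verifying that no novices passed and no experts failed.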


Subject(s)
Internship and Residency , Virtual Reality , Humans , Clinical Competence , Computer Simulation , Ultrasonography , Reproducibility of Results
7.
Med Educ ; 58(1): 105-117, 2024 01.
Article in English | MEDLINE | ID: mdl-37615058

ABSTRACT

BACKGROUND: Artificial intelligence (AI) is becoming increasingly used in medical education, but our understanding of the validity of AI-based assessments (AIBA) as compared with traditional clinical expert-based assessments (EBA) is limited. In this study, the authors aimed to compare and contrast the validity evidence for the assessment of a complex clinical skill based on scores generated from an AI and trained clinical experts, respectively. METHODS: The study was conducted between September 2020 to October 2022. The authors used Kane's validity framework to prioritise and organise their evidence according to the four inferences: scoring, generalisation, extrapolation and implications. The context of the study was chorionic villus sampling performed within the simulated setting. AIBA and EBA were used to evaluate performances of experts, intermediates and novice based on video recordings. The clinical experts used a scoring instrument developed in a previous international consensus study. The AI used convolutional neural networks for capturing features on video recordings, motion tracking and eye movements to arrive at a final composite score. RESULTS: A total of 45 individuals participated in the study (22 novices, 12 intermediates and 11 experts). The authors demonstrated validity evidence for scoring, generalisation, extrapolation and implications for both EBA and AIBA. The plausibility of assumptions related to scoring, evidence of reproducibility and relation to different training levels was examined. Issues relating to construct underrepresentation, lack of explainability, and threats to robustness were identified as potential weak links in the AIBA validity argument compared with the EBA validity argument. CONCLUSION: There were weak links in the use of AIBA compared with EBA, mainly in their representation of the underlying construct but also regarding their explainability and ability to transfer to other datasets. 
However, combining AI and clinical expert-based assessments may offer complementary benefits, which is a promising subject for future research.


Subject(s)
Clinical Competence , Education, Medical , Humans , Educational Measurement , Artificial Intelligence , Reproducibility of Results
8.
BMC Med Educ ; 23(1): 921, 2023 Dec 05.
Article in English | MEDLINE | ID: mdl-38053134

ABSTRACT

BACKGROUND: Ultrasound is an essential diagnostic examination used in several medical specialties. However, the quality of ultrasound examinations depends on mastery of certain skills, which may be difficult and costly to attain in the clinical setting. This study aimed to explore mastery learning for trainees practicing general abdominal ultrasound using a virtual reality simulator and to evaluate the associated cost per student achieving the mastery learning level. METHODS: Trainees were instructed to train on a virtual reality ultrasound simulator until they attained a mastery learning level established in a previous study. Automated simulator scores were used to track performances during each round of training, and these scores were recorded to determine learning curves. Finally, the costs of the training were evaluated using a micro-costing procedure. RESULTS: Twenty-one out of the 24 trainees managed to attain the predefined mastery level twice consecutively. The trainees completed their training in a median of 2 h 38 min (range: 1 h 20 min-4 h 30 min) using a median of 7 attempts (range: 3-11 attempts) at the simulator test. The cost of training one trainee to the mastery level was estimated to be USD 638. CONCLUSION: Most trainees can attain mastery learning levels in general abdominal ultrasound examinations within 3 hours of training in the simulated setting and at an average cost of USD 638 per trainee. Future studies are needed to explore how the cost of simulation-based training is best balanced against the costs of clinical training.


Subject(s)
Simulation Training , Virtual Reality , Humans , Clinical Competence , Ultrasonography , Computer Simulation , Simulation Training/methods , Learning Curve
9.
Ann Surg Open ; 4(1): e271, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37600868
10.
Surg Endosc ; 37(8): 6588-6601, 2023 08.
Article in English | MEDLINE | ID: mdl-37389741

ABSTRACT

BACKGROUND: The increasing use of robot-assisted surgery (RAS) has led to the need for new methods of assessing whether new surgeons are qualified to perform RAS, without the resource-demanding process of having expert surgeons do the assessment. Computer-based automation and artificial intelligence (AI) are seen as promising alternatives to expert-based surgical assessment. However, no standard protocols or methods for preparing data and implementing AI are available for clinicians, which may be among the reasons why AI has yet to see wider use in the clinical setting. METHOD: We tested our method on porcine models with both the da Vinci Si and the da Vinci Xi. We captured raw video data from the surgical robots and 3D movement data from the surgeons, and prepared the data for use in AI following a structured guide with the steps: 'Capturing image data from the surgical robot', 'Extracting event data', 'Capturing movement data of the surgeon', and 'Annotation of image data'. RESULTS: 15 participants (11 novices and 4 experienced) performed 10 different intraabdominal RAS procedures. Using this method, we captured 188 videos (94 from the surgical robot and 94 corresponding movement videos of the surgeons' arms and hands). Event data, movement data, and labels were extracted from the raw material and prepared for use in AI. CONCLUSION: With the described methods, we could collect, prepare, and annotate images, events, and motion data from surgical robotic systems in preparation for their use in AI.


Subject(s)
Robotic Surgical Procedures , Surgeons , Humans , Animals , Swine , Robotic Surgical Procedures/methods , Artificial Intelligence , Machine Learning , Motion
11.
Thorax ; 78(10): 1028-1034, 2023 10.
Article in English | MEDLINE | ID: mdl-37208187

ABSTRACT

BACKGROUND: Testing is critical for detecting SARS-CoV-2 infection, but the best sampling method remains unclear. OBJECTIVES: To determine whether nasopharyngeal swab (NPS), oropharyngeal swab (OPS) or saliva specimen collection has the highest detection rate for SARS-CoV-2 molecular testing. METHODS: We conducted a randomised clinical trial at two COVID-19 outpatient test centres where NPS, OPS and saliva specimens were collected by healthcare workers in different orders for reverse transcriptase PCR testing. The SARS-CoV-2 detection rate was calculated as the number positive by a specific sampling method divided by the number in which any of the three sampling methods was positive. As secondary outcomes, test-related discomfort was measured with an 11-point numeric scale and cost-effectiveness was calculated. RESULTS: Among 23 102 adults completing the trial, 381 (1.65%) were SARS-CoV-2 positive. The SARS-CoV-2 detection rate was higher for OPSs, 78.7% (95% CI 74.3 to 82.7), compared with NPSs, 72.7% (95% CI 67.9 to 77.1) (p=0.049) and compared with saliva sampling, 61.9% (95% CI 56.9 to 66.8) (p<0.001). The discomfort score was highest for NPSs, at 5.76 (SD 2.52), followed by OPSs, at 3.16 (SD 3.16) and saliva samples, at 1.03 (SD 1.88), p<0.001 between all measurements. Saliva specimens were associated with the lowest cost, and the incremental costs per detected SARS-CoV-2 infection for NPSs and OPSs were US$3258 and US$1832, respectively. CONCLUSIONS: OPSs were associated with higher SARS-CoV-2 detection and lower test-related discomfort than NPSs for SARS-CoV-2 testing. Saliva sampling had the lowest SARS-CoV-2 detection but was the least costly strategy for mass testing. TRIAL REGISTRATION NUMBER: NCT04715607.
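The abstract's detection-rate definition (positives by one method divided by positives by any method) and the incremental-cost comparison can be written out directly. All counts and costs below are invented for illustration and are not the trial's data:

```python
# Invented illustrative counts: positives by each method, and positives
# detected by at least one of the three methods.
any_positive = 381
detected = {"NPS": 277, "OPS": 300, "saliva": 236}
total_cost_usd = {"NPS": 520_000, "OPS": 470_000, "saliva": 350_000}

# detection rate = positives by this method / positives by any method
rate = {m: n / any_positive for m, n in detected.items()}

# incremental cost per additional detected infection, relative to the
# cheapest strategy (saliva), as in a standard incremental
# cost-effectiveness comparison
for m in ("NPS", "OPS"):
    extra_cases = detected[m] - detected["saliva"]
    extra_cost = total_cost_usd[m] - total_cost_usd["saliva"]
    print(f"{m}: rate {rate[m]:.1%}, "
          f"incremental cost per case USD {extra_cost / extra_cases:,.0f}")
```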


Subject(s)
COVID-19 , SARS-CoV-2 , Adult , Humans , COVID-19/diagnosis , COVID-19 Testing , Saliva , Clinical Laboratory Techniques/methods , Nasopharynx , Specimen Handling/methods
12.
Pediatr Res ; 94(3): 1216-1224, 2023 09.
Article in English | MEDLINE | ID: mdl-37142651

ABSTRACT

BACKGROUND: Training and assessment of operator competence for the less invasive surfactant administration (LISA) procedure vary. This study aimed to obtain international expert consensus on LISA training (LISA curriculum (LISA-CUR)) and assessment (LISA assessment tool (LISA-AT)). METHODS: From February to July 2022, an international three-round Delphi process gathered opinions from LISA experts (researchers, curriculum developers, and clinical educators) on a list of items to be included in a LISA-CUR and LISA-AT (Round 1). The experts rated the importance of each item (Round 2). Items supported by more than 80% consensus were included. All experts were asked to approve or reject the final LISA-CUR and LISA-AT (Round 3). RESULTS: A total of 153 experts from 14 countries participated in Round 1, and the response rate for Rounds 2 and 3 was >80%. Round 1 identified 44 items for LISA-CUR and 22 for LISA-AT. Round 2 excluded 15 items for the LISA-CUR and 7 items for the LISA-AT. Round 3 resulted in a strong consensus (99-100%) for the final 29 items for the LISA-CUR and 15 items for the LISA-AT. CONCLUSIONS: This Delphi process established an international consensus on a training curriculum and content evidence for the assessment of LISA competence. IMPACT: This international consensus-based expert statement provides content on a curriculum for the less invasive surfactant administration procedure (LISA-CUR) that may be partnered with existing evidence-based strategies to optimize and standardize LISA training in the future. This international consensus-based expert statement also provides content on an assessment tool for the LISA procedure (LISA-AT) that can help to evaluate competence in LISA operators. The proposed LISA-AT enables standardized, continuous feedback and assessment until achieving proficiency.


Subject(s)
Clinical Competence , Surface-Active Agents , Delphi Technique , Curriculum , Consensus
13.
Med Teach ; 45(6): 565-573, 2023 06.
Article in English | MEDLINE | ID: mdl-36862064

ABSTRACT

The use of Artificial Intelligence (AI) in medical education has the potential to facilitate complicated tasks and improve efficiency. For example, AI could help automate the assessment of written responses or provide feedback on medical image interpretations with excellent reliability. While applications of AI in learning, instruction, and assessment are growing, further exploration is still required. Few conceptual or methodological guides exist for medical educators wishing to evaluate or engage in AI research. In this guide, we aim to: 1) describe practical considerations involved in reading and conducting studies in medical education using AI, 2) define basic terminology, and 3) identify which medical education problems and data are ideally suited for using AI.


Subject(s)
Artificial Intelligence , Education, Medical , Humans , Reproducibility of Results
14.
JTCVS Open ; 16: 619-627, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38204726

ABSTRACT

Objective: This study aimed to investigate the validity of simulation-based assessment of robotic-assisted cardiac surgery skills using a wet lab model, focusing on the use of a time-based score (TBS) and the modified Global Evaluative Assessment of Robotic Skills (mGEARS) score. Methods: We tested 3 wet lab tasks (atrial closure, mitral annular stitches, and internal thoracic artery [ITA] dissection) with both experienced robotic cardiac surgeons and novices from multiple European centers. The tasks were assessed using 2 tools: the TBS and the mGEARS score. Reliability, internal consistency, and the ability to discriminate between different levels of competence were evaluated. Results: The results demonstrated high internal consistency for all 3 tasks using the mGEARS assessment tool. The mGEARS score and TBS could reliably discriminate between different levels of competence for the atrial closure and mitral stitches tasks but not for the ITA harvesting task. A generalizability study also revealed that it was feasible to assess competency in the atrial closure and mitral stitches tasks using mGEARS but not in the ITA dissection task. Pass/fail scores were established for each task using both the TBS and mGEARS assessment tools. Conclusions: The study provides sufficient evidence for using TBS and mGEARS scores to evaluate robotic-assisted cardiac surgery skills in wet lab settings for intracardiac tasks. Combining both assessment tools enhances the evaluation of proficiency in robotic cardiac surgery, paving the way for standardized, evidence-based preclinical training and credentialing. Clinical trial registry number: NCT05043064.

15.
Dermatol Pract Concept ; 12(4): e2022188, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36534519

ABSTRACT

Introduction: Efficient interpretation of dermoscopic images relies on pattern recognition, and the development of expert-level proficiency typically requires extensive training and years of practice. While traditional methods of transferring knowledge have proven effective, technological advances may significantly improve upon these strategies and better equip dermoscopy learners with the pattern recognition skills required for real-world practice. Objectives: A narrative review of the literature was performed to explore emerging directions in medical image interpretation education that may enhance dermoscopy education. This article represents the first of a two-part review series on this topic. Methods: To promote innovation in dermoscopy education, the International Skin Imaging Collaborative (ISIC) assembled a 12-member Education Working Group that comprises international dermoscopy experts and educational scientists. Based on a preliminary literature review and their experiences as educators, the group developed and refined a list of innovative approaches through multiple rounds of discussion and feedback. For each approach, literature searches were performed for relevant articles. Results: Through a consensus-based approach, the group identified a number of emerging directions in image interpretation education. The following theory-based approaches will be discussed in this first part: whole-task learning, microlearning, perceptual learning, and adaptive learning. Conclusions: Compared to traditional methods, these theory-based approaches may enhance dermoscopy education by making learning more engaging and interactive and reducing the amount of time required to develop expert-level pattern recognition skills. Further exploration is needed to determine how these approaches can be seamlessly and successfully integrated to optimize dermoscopy education.

16.
Dermatol Pract Concept ; 12(4): e2022189, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36534542

ABSTRACT

Introduction: In image interpretation education, many educators have shifted away from traditional methods that involve passive instruction and fragmented learning to interactive ones that promote active engagement and integrated knowledge. By training pattern recognition skills in an effective manner, these interactive approaches provide a promising direction for dermoscopy education. Objectives: A narrative review of the literature was performed to probe emerging directions in medical image interpretation education that may support dermoscopy education. This article represents the second of a two-part review series. Methods: To promote innovation in dermoscopy education, the International Skin Imaging Collaborative (ISIC) assembled an Education Working Group that comprises international dermoscopy experts and educational scientists. Based on a preliminary literature review and their experiences as educators, the group developed and refined a list of innovative approaches through multiple rounds of discussion and feedback. For each approach, literature searches were performed for relevant articles. Results: Through a consensus-based approach, the group identified a number of theory-based approaches, as discussed in the first part of this series. The group also acknowledged the role of motivation, metacognition, and early failures in optimizing the learning process. Other promising teaching tools included gamification, social media, and perceptual and adaptive learning modules (PALMs). Conclusions: Over the years, many dermoscopy educators may have intuitively adopted these instructional strategies in response to learner feedback, personal observations, and changes in the learning environment. For dermoscopy training, PALMs may be especially valuable in that they provide immediate feedback and adapt the training schedule to the individual's performance.

17.
BMJ Open ; 12(3): e049046, 2022 03 07.
Article in English | MEDLINE | ID: mdl-35256439

ABSTRACT

OBJECTIVES: Emergency caesarean sections (ECS) are time-sensitive procedures. Multiple factors may affect team efficiency, but their relative importance remains unknown. This study aimed to identify the most important predictors contributing to quality of care during ECS in terms of the arrival-to-delivery interval. DESIGN: A retrospective cohort study. ECS were classified by urgency using emergency categories one/two and three (delivery within 30 and 60 min, respectively). In total, 92 predictor variables were included in the analysis and grouped as follows: 'Maternal objective', 'Maternal psychological', 'Fetal factors', 'ECS Indication', 'Emergency category', 'Type of anaesthesia', 'Team member qualifications and experience' and 'Procedural'. Data were analysed with a linear regression model using elastic net regularisation and the jackknife technique to improve generalisability. The relative influence of the predictors, the percentage significant predictor weight (PSPW), was calculated for each predictor to visualise the main determinants of the arrival-to-delivery interval. SETTING AND PARTICIPANTS: Patient records for mothers undergoing ECS between 2010 and 2017, Nordsjællands Hospital, Capital Region of Denmark. PRIMARY OUTCOME MEASURES: Arrival-to-delivery interval during ECS. RESULTS: Data were obtained from 2409 patient records for women undergoing ECS. The group of predictors representing 'Team member qualifications and experience' was the most important predictor of the arrival-to-delivery interval in all ECS emergency categories (PSPW 25.9% for ECS category one/two; PSPW 35.5% for ECS category three). In ECS category one/two, 'Indication for ECS' was the second most important predictor group (PSPW 24.9%). In ECS category three, the second most important predictor group was 'Maternal objective predictors' (PSPW 24.2%).
CONCLUSION: This study provides empirical evidence for the importance of team member qualifications and experience relative to other predictors of the arrival-to-delivery interval during ECS. Machine learning provides a promising method for expanding our current knowledge about the relative importance of different factors in predicting outcomes of complex obstetric events.
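The modelling step described in the methods (linear regression with elastic net regularisation, followed by a normalised measure of each predictor's influence) can be sketched with scikit-learn. The synthetic data and the simplified influence measure below are illustrative assumptions, not the study's PSPW computation or patient records:

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))  # stand-in for the 92 predictor variables
# Synthetic "arrival-to-delivery interval": two informative predictors + noise
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=200)

model = make_pipeline(StandardScaler(),
                      ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5))
model.fit(X, y)

coefs = model.named_steps["elasticnetcv"].coef_
# Simplified influence: each predictor's share of total absolute weight
influence = np.abs(coefs) / np.abs(coefs).sum()
print(np.round(influence, 3))
```

The jackknife step (refitting with one observation left out at a time to gauge the stability of the weights) would wrap the `fit` call in a loop; it is omitted here for brevity.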


Subject(s)
Cesarean Section , Fetus , Female , Humans , Machine Learning , Pregnancy , Retrospective Studies
18.
Med Educ ; 56(8): 805-814, 2022 08.
Article in English | MEDLINE | ID: mdl-35199378

ABSTRACT

INTRODUCTION: During a health crisis, hospitals must prioritise activities and resources, which can compromise clerkship-based learning. We explored how health crises affect clinical clerkships using the COVID-19 pandemic as an example. METHODS: In a constructivist qualitative study, we conducted 22 semi-structured interviews with key stakeholders (i.e. medical students and doctors) from two teaching hospitals and 10 different departments. We used thematic analysis to investigate our data and used stakeholder theory as a sensitising concept. RESULTS: We identified three themes: (1) emotional triggers and reactions; (2) negotiation of legitimacy; and (3) building resilience. Our results suggest that the health crisis accentuated already existing problems in clerkships, such as students' feelings of low legitimacy, constant negotiation of roles, inconsistencies navigating rules and regulations and low levels of active participation. Medical students and doctors adapted to the new organisational demands by developing increased resilience. Students responded by reaching out for guidance and acceptance to remain relevant in the clinical clerkships. Doctors developed a behaviour of closing in and focused on managing themselves and their patients. This created tension between these two stakeholder groups. CONCLUSION: A health crisis can critically disrupt the hierarchical structure within the clinical clerkships and exacerbate existing conflicts between stakeholder groups. When medical students are not perceived as legitimate stakeholders in clinical clerkships during a health crisis, their attendance is perceived as unnecessary or even a nuisance. Despite increased student proactiveness and resilience, their roles inevitably shift from being doctors-to-be to students-to-be-managed.


Subject(s)
COVID-19 , Clinical Clerkship , Students, Medical , COVID-19/epidemiology , Humans , Pandemics , Qualitative Research , Students, Medical/psychology
19.
Adv Health Sci Educ Theory Pract ; 27(3): 761-792, 2022 08.
Article in English | MEDLINE | ID: mdl-35190892

ABSTRACT

The purpose of this scoping review was to explore how errors are conceptualized in medical education contexts by examining different error perspectives and practices. This review used a scoping methodology with a systematic search strategy to identify relevant studies, written in English, and published before January 2021. Four medical education journals (Medical Education, Advances in Health Science Education, Medical Teacher, and Academic Medicine) and four clinical journals (Journal of the American Medical Association, Journal of General Internal Medicine, Annals of Surgery, and British Medical Journal) were purposively selected. Data extraction was charted according to a data collection form. Of 1505 screened studies, 79 studies were included. Three overarching perspectives were identified: 'understanding errors' (n = 31), 'avoiding errors' (n = 25), and 'learning from errors' (n = 23). Studies that aimed at 'understanding errors' used qualitative methods (19/31, 61.3%) and took place in the clinical setting (19/31, 61.3%), whereas studies that aimed at 'avoiding errors' and 'learning from errors' used quantitative methods ('avoiding errors': 20/25, 80%, and 'learning from errors': 16/23, 69.6%, p = 0.007) and took place in pre-clinical (14/25, 56%) and simulated settings (10/23, 43.5%), respectively (p < 0.001). The three perspectives differed significantly in terms of inclusion of educational theory: 'understanding errors' studies 16.1% (5/31), 'avoiding errors' studies 48% (12/25), and 'learning from errors' studies 73.9% (17/23), p < 0.001. Errors in medical education and clinical practice are defined differently, which makes comparisons difficult. A uniform understanding is not necessarily a goal, but improving the transparency and clarity of how errors are currently conceptualized may improve our understanding of when, why, and how to use and learn from errors in the future.


Subject(s)
Education, Medical , Delivery of Health Care , Humans , United States
20.
Med Educ ; 55(6): 724-732, 2021 06.
Article in English | MEDLINE | ID: mdl-33368489

ABSTRACT

INTRODUCTION: Dyad learning occurs when two students work together to acquire new skills and knowledge. Several studies have provided evidence to support the educational rationale for dyad learning in the controlled simulated setting. However, the role of dyad learning in the clinical setting remains uncertain. Unlike the simulated setting, learning in the clinical setting depends on a complex interplay between medical students, doctors, nurses and patients, potentially making dyad learning less valuable in clerkships. The objective of this study was to explore how key stakeholders perceive the value of implementing dyad learning during medical students' clinical clerkships. METHODS: In a constructivist qualitative study, we conducted 51 semi-structured interviews with 36 key stakeholders involved in dyad learning, including 10 medical students, 12 doctors, five nurses and nine patients. Data were coded inductively using thematic analysis, then coded deductively using stakeholder theory as a theoretical framework. RESULTS: We found that stakeholders generally perceived the educational impact of dyad learning in the clinical setting similarly but disagreed on its value. Students emphasised that dyad learning made them participate more actively during patient encounters, and patients did not mind having two students present. Doctors and nurses considered dyad learning disruptive to the balance between service and training and reported that it did not resonate with their perception of good patient care. CONCLUSION: Dyad learning enables students to be more active during their clinical clerkships, but it easily disrupts the balance between service and training. This disruption may be exacerbated by the shifted balance in priorities and values between different stakeholder groups, as well as by making implicit teaching obligations more explicit for supervising doctors and nurses. Consequently, implementing dyad learning may not be perceived as valuable by doctors and nurses in the clinical setting, regardless of its pedagogical rationale.


Subject(s)
Clinical Clerkship , Education, Medical, Undergraduate , Students, Medical , Clinical Competence , Humans , Learning