ABSTRACT
Formal training in the subspecialty of pediatric anesthesiology began more than 60 years ago. Over the years, the duration and clinical work have varied, but what has stayed constant is a mission to develop clinically competent and professionally responsible pediatric anesthesiologists. Since accreditation in 1997, there has been additional guidance from the Accreditation Council for Graduate Medical Education (ACGME) and greater accountability to the public that we are, indeed, producing competent and professional pediatric anesthesiologists. This has been influenced by the slow evolution from a time-based educational curriculum to a competency-based paradigm. As with all ACGME-accredited specialties, education leaders in pediatric anesthesiology first convened in 2014 to design specialty-specific developmental trajectories within the framework of the 6 core competencies, known as milestones, on which fellows were to be tracked during the 1-year fellowship. After 5 years of implementation, and with substantial data and feedback, it became clear that an iterative improvement was necessary to mirror the evolution of the profession. It was evident that the community required brevity and clarity in the next version of the milestones, as well as additional resources for assessment and faculty development. We describe here the methodology and considerations of our working group, guided by the ACGME, in the rewriting of the milestones. We also provide suggestions for implementation and collaboration to support the education and assessment of pediatric anesthesiology fellows across the country.
Subject(s)
Anesthesiology , Internship and Residency , Humans , Child , Anesthesiology/education , Education, Medical, Graduate , Curriculum , Anesthesiologists , Feedback , Clinical Competence , Accreditation
ABSTRACT
BACKGROUND: Letters of recommendation (LORs) are a valued, yet imperfect, tool. Program directors (PDs) score phrases such as "give my highest recommendation" and "top 5 to 10% of students" as positive. Although positive phrases are valued by PDs, there is no evidence that these phrases predict performance. We attempted to identify whether 12 specific phrases found in LORs predict the future performance of fellows. METHODS: LORs were evaluated for 12 select phrases and statements. Alpha Omega Alpha (AOA) status, Step 2 Clinical Knowledge (CK) score, and whether the letter writer was personally known to our admissions committee were also categorized. Logistic regressions were performed to evaluate the relationship of the independent variables with fellow performance. RESULTS: Using multivariate logistic regression, "one of the best residents" (OR = 4.02, 95% CI (1.0, 15.9), p < 0.05), "exceeds expectations" (OR = 4.74, 95% CI (1.4, 16.3), p = 0.01), and "give my highest recommendation" (OR = 3.87, 95% CI (1.3, 11.7), p = 0.02) predicted positive performance. "Highly recommend" (OR = 0.31, 95% CI (0.1, 1.0), p < 0.05) and "top 5 to 10%" (OR = 0.05, 95% CI (0.0, 0.6), p = 0.02) predicted negative performance. The remaining phrases did not correlate with fellowship performance. CONCLUSION: The current LOR evaluation process may place undue importance on phrases that have limited bearing on a candidate's success in training. Training both letter readers and writers to avoid coded language, or to avoid assigning improper importance to select phrases, may help improve the candidate selection process.
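Analyses of this kind are straightforward to reproduce. Below is a minimal sketch, with hypothetical file and column names, of how 0/1 phrase indicators can be related to a binary performance outcome via logistic regression, recovering odds ratios and 95% CIs from the fitted coefficients:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical dataset: one row per applicant, phrase indicators coded 0/1.
df = pd.read_csv("lor_phrases.csv")
phrases = ["one_of_the_best", "exceeds_expectations", "highest_recommendation",
           "highly_recommend", "top_5_to_10_pct"]
X = sm.add_constant(df[phrases + ["aoa", "step2_ck"]])
model = sm.Logit(df["strong_performer"], X).fit()

# Exponentiate coefficients and their confidence bounds to get ORs and CIs.
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
    "p": model.pvalues,
})
print(or_table.round(2))
```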
Subject(s)
Correspondence as Topic , Fellowships and Scholarships , Humans , School Admission Criteria , Internship and Residency , Logistic Models
ABSTRACT
The probability of death after emergency laparotomy varies greatly between patients. Accurate pre-operative risk prediction is fundamental to planning care and improving outcomes. We aimed to develop a model limited to a few pre-operative factors that performed well irrespective of surgical indication: obstruction; sepsis; ischaemia; bleeding; and other. We derived a model with data from the National Emergency Laparotomy Audit for patients who had emergency laparotomy between December 2016 and November 2018. We tested the model on patients who underwent emergency laparotomy between December 2018 and November 2019. There were 4077/40,816 (10%) deaths 30 days after surgery in the derivation cohort. The final model had 13 pre-operative variables: surgical indication; age; blood pressure; heart rate; respiratory history; urgency; biochemical markers; anticipated malignancy; anticipated peritoneal soiling; and ASA physical status. The predicted mortality probability deciles ranged from 0.1% to 47%. There were 1888/11,187 deaths in the test cohort. The scaled Brier score, integrated calibration index and concordance for the model were 20%, 0.006 and 0.86, respectively. Model metrics were similar for the five surgical indications. In conclusion, we think that this prognostic model is suitable to support decision-making before emergency laparotomy as well as for risk adjustment for comparing organisations.
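For readers reproducing such validation metrics, a minimal sketch of the scaled Brier score (scaled against a null model that predicts the overall event rate) and the concordance (C) statistic, given observed outcomes and predicted probabilities; the integrated calibration index is omitted for brevity:

```python
import numpy as np
from sklearn.metrics import brier_score_loss, roc_auc_score

def scaled_brier(y, p):
    """1 - Brier / Brier_null, where the null model predicts the event rate."""
    brier = brier_score_loss(y, p)
    p_null = np.full_like(p, y.mean())  # constant prediction at the event rate
    return 1.0 - brier / brier_score_loss(y, p_null)

# Toy arrays for illustration only (not audit data).
y = np.array([0, 0, 1, 0, 1])
p = np.array([0.05, 0.10, 0.40, 0.08, 0.30])
print(f"scaled Brier: {scaled_brier(y, p):.1%}, C-statistic: {roc_auc_score(y, p):.2f}")
```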
Subject(s)
Laparotomy , Neoplasms , Humans , Adult , Prognosis , Risk Adjustment , Hemorrhage/etiology , Retrospective Studies
ABSTRACT
OBJECTIVE: To determine the association between ethnic group and likelihood of admission to intensive care in pregnancy and the postnatal period. DESIGN: Cohort study. SETTING: Maternity and intensive care units in England and Wales. POPULATION OR SAMPLE: A total of 631 851 women with a record of a registerable birth between 1 April 2015 and 31 March 2016 in a database used for national audit. METHODS: Logistic regression analyses of linked maternity and intensive care records, with multiple imputation to account for missing data. MAIN OUTCOME MEASURES: Admission to intensive care in pregnancy or the postnatal period up to 6 weeks after birth. RESULTS: In all, 2.24 per 1000 maternities were associated with intensive care admission. Black women were more than twice as likely as women from other ethnic groups to be admitted (odds ratio [OR] 2.21, 95% CI 1.82-2.68). This association was only partially explained by demographic, lifestyle, pregnancy and birth factors (adjusted OR 1.69, 95% CI 1.37-2.09). A higher proportion of intensive care admissions in Black women were for obstetric haemorrhage than in women from other ethnic groups. CONCLUSIONS: Black women have an increased risk of intensive care admission that cannot be explained by demographic, health, lifestyle, pregnancy and birth factors. Clinical and policy intervention should focus on the early identification and management of severe illness, particularly obstetric haemorrhage, in Black women, in order to reduce inequalities in intensive care admission. TWEETABLE ABSTRACT: Black women are almost twice as likely as White women to be admitted to intensive care during pregnancy and the postpartum period; this risk remains after accounting for demographic, health, lifestyle, pregnancy and birth factors.
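A minimal sketch, with hypothetical variable names, of the adjusted analysis: a logistic model of ICU admission on ethnic group with covariate adjustment. The multiple imputation step used in the study is omitted here for brevity:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("maternity_icu.csv")  # hypothetical linked dataset

# Treatment coding sets White women as the reference group.
m = smf.logit("icu_admission ~ C(ethnic_group, Treatment('White'))"
              " + age + bmi + smoking + parity + mode_of_birth", data=df).fit()

print(np.exp(m.params))      # adjusted odds ratios per ethnic group
print(np.exp(m.conf_int()))  # corresponding 95% CIs
```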
Subject(s)
Critical Care , Ethnicity , Cohort Studies , Female , Humans , Intensive Care Units , Parturition , Pregnancy
ABSTRACT
Perceived effectiveness (PE) is a validated tool for predicting the potential impact of anti-tobacco public service announcements (PSAs). We set out to evaluate the added predictive value of facial expression analysis when combined with PE in a remote (online) survey. Each of 302 tobacco users watched 3 PSAs and allowed transmission of webcam videos, from which metrics for "attention" (head position) and "facial action units" (FAU) were computed. The participants completed scales for their subjective emotions, willingness to share on social media, and intention to quit smoking using the Tobacco Free Florida website. Based on PE, both ready-to-quit (RTQ) and not-ready (NR) respondents favored the same PSAs, but RTQs assigned higher PE scores. Negative PSAs ("sad" or "frightening") were more compelling overall, but RTQs also favored surprising ads and were more willing to share them on social media. Logistic regression showed that the combination of Attention + FAU + PE (AUC = .816, p < .0001) outperformed single factors or factor combinations in distinguishing RTQ from NR. This study demonstrates that online assessment of facial expressions enhances the predictive value of PE and can be deployed on large remote samples.
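A minimal sketch, with hypothetical feature names, of how predictor combinations can be compared by cross-validated AUC for distinguishing RTQ from NR respondents:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("psa_responses.csv")  # hypothetical per-participant data
y = df["ready_to_quit"]

feature_sets = {
    "PE only": ["pe_score"],
    "Attention + FAU": ["attention"] + [c for c in df if c.startswith("fau_")],
    "Attention + FAU + PE": ["attention", "pe_score"]
                            + [c for c in df if c.startswith("fau_")],
}
for name, cols in feature_sets.items():
    auc = cross_val_score(LogisticRegression(max_iter=1000), df[cols], y,
                          scoring="roc_auc", cv=5).mean()
    print(f"{name}: cross-validated AUC = {auc:.3f}")
```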
Subject(s)
Smoking Cessation , Tobacco Products , Facial Expression , Humans , Smoking/psychology , Smoking Cessation/psychology , Nicotiana
ABSTRACT
The evolution of medical education, from a time-based to a competency-based platform, began nearly 30 years ago and continues to slowly take shape. The development of valid and reproducible assessment tools is the first step. Medical educators across specialties acknowledge the challenges and remain motivated to develop a relevant, generalizable, and measurable system. The Accreditation Council for Graduate Medical Education (ACGME) remains committed to its responsibility to the public by assuring that the process and outcome of graduate medical education in the nation's residency programs produce competent, safe, and compassionate doctors. The Milestones Project is the ACGME's current strategy in the evolution to a competency-based system, which allows each specialty to develop its own set of subcompetencies and a 5-level progression, or milestones, along a continuum from novice to expert. The education community has now had nearly 5 years of experience with these rubrics. While not perfect, Milestones 1.0 provided important foundational information and insights. The first iteration of the Anesthesiology Milestones highlighted some mismatch between subcompetencies and current and future clinical practices. It also highlighted challenges with the assessment and evaluation of learners, and the need for faculty development tools. Committed to an iterative process, the ACGME assembled representatives from stakeholder groups within the Anesthesiology community to develop the second generation of Milestones. This special article describes the foundational data from Milestones 1.0 that were useful in the development process of Milestones 2.0, the rationale behind the important changes, and the additional tools made available with this iteration.
Subject(s)
Anesthesiologists/education , Anesthesiology/education , Clinical Competence , Education, Medical, Graduate , Educational Measurement , Internship and Residency , Credentialing , Curriculum , Educational Status , Humans
ABSTRACT
BACKGROUND: Transarterial chemoembolization (TACE) in patients with hepatocellular carcinoma (HCC) awaiting liver transplantation is widespread, although evidence that it improves outcomes is lacking and there are concerns about morbidity. The impact of TACE on outcomes after transplantation was evaluated in this study. METHODS: Patients with HCC who had liver transplantation in the UK were identified, and stratified according to whether they received TACE between 2006 and 2016. Cox regression methods were used to estimate hazard ratios (HRs) for death and graft failure after transplantation, adjusted for donor and recipient characteristics. RESULTS: In total, 385 of 968 patients (39·8 per cent) received TACE. Five-year patient survival after transplantation was similar in those who had or had not received TACE: 75·2 (95 per cent c.i. 68·8 to 80·5) and 75·0 (70·5 to 78·8) per cent respectively. After adjustment for donor and recipient characteristics, there were no differences in mortality (HR 0·96, 95 per cent c.i. 0·67 to 1·38; P = 0·821) or graft failure (HR 1·01, 0·73 to 1·40; P = 0·964). Neither the number of TACE treatments (2 or more versus 1: HR 0·97, 0·61 to 1·55; P = 0·903) nor the timing of death after transplantation (within or after 90 days; P = 0·291) altered the outcome. The incidence of hepatic artery thrombosis was low in those who had or had not received TACE (1·3 and 2·4 per cent respectively; P = 0·235). CONCLUSION: TACE delivered to patients with HCC before liver transplantation did not affect complications, patient death or graft failure after transplantation.
BACKGROUND: Transarterial chemoembolization (TACE) in patients with hepatocellular carcinoma (HCC) is used as a bridge to liver transplantation, although evidence that it improves outcomes is lacking and the associated morbidity is a cause for concern. This study evaluated the impact of TACE on outcomes after transplantation, with analysis of complications. METHODS: Recipients of liver transplantation for HCC in the UK were identified and stratified according to whether they had received TACE between 2006 and 2016. Cox regression was used to estimate hazard ratios (HRs) for post-transplant mortality and graft failure, adjusted for donor and recipient characteristics. RESULTS: In total, 385 of 968 patients (39·8 per cent) received TACE, with similar 5-year patient survival after transplantation: 75·2 (95 per cent c.i. 68·8 to 80·5) per cent with TACE and 75·0 (70·5 to 78·8) per cent without. After adjustment for donor and recipient characteristics, there were no differences in mortality (HR 0·96, 0·67 to 1·38; P = 0·82) or graft failure (HR 1·01, 0·73 to 1·40; P = 0·96). The number of TACE treatments (2 or more: HR 0·97, 0·61 to 1·55; P = 0·90) and the timing of death after transplantation (before or after 90 days; P = 0·29) did not alter the outcome. The incidence of hepatic artery thrombosis was low whether or not patients had received TACE (1·3 and 2·5 per cent respectively; P = 0·23). Graft failure due to occlusive events was similar in patients who had received TACE (8·0 per cent, 11 of 137) and those who had not (6·7 per cent, 5 of 75) (P = 0·74). CONCLUSION: TACE administered to patients with HCC before liver transplantation did not influence post-transplant complications, patient mortality or graft failure.
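A minimal sketch, with hypothetical column names, of the adjusted Cox model reported above: the hazard of death after transplantation by pre-transplant TACE, adjusted for donor and recipient characteristics:

```python
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("hcc_transplants.csv")  # hypothetical registry extract
cols = ["years_to_death_or_censor", "died", "tace",
        "recipient_age", "donor_age", "dcd_donor", "meld"]

cph = CoxPHFitter()
cph.fit(df[cols], duration_col="years_to_death_or_censor", event_col="died")
cph.print_summary()  # the exp(coef) column gives the adjusted HR for TACE
```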
Subject(s)
Carcinoma, Hepatocellular/therapy , Chemoembolization, Therapeutic , Liver Neoplasms/therapy , Liver Transplantation/mortality , Carcinoma, Hepatocellular/mortality , Carcinoma, Hepatocellular/surgery , Chemoembolization, Therapeutic/adverse effects , Chemoembolization, Therapeutic/mortality , Chemoembolization, Therapeutic/statistics & numerical data , Female , Graft Rejection/epidemiology , Humans , Liver Neoplasms/mortality , Liver Neoplasms/surgery , Liver Transplantation/statistics & numerical data , Male , Middle Aged , Registries , Treatment Outcome
ABSTRACT
BACKGROUND: The increasing demand for liver transplantation has led to considerable changes in characteristics of donors and recipients. This study evaluated the short- and long-term mortality of recipients with and without hepatocellular carcinoma (HCC) in the UK between 1997 and 2016. METHODS: First-time elective adult liver transplant recipients in the UK were identified and four successive eras of transplantation were compared. Hazard ratios (HRs) comparing the impact of era on short-term (first 90 days) and longer-term (from 90 days to 5 years) mortality were estimated, with adjustment for recipient and donor characteristics. RESULTS: Some 1879 recipients with and 7661 without HCC were included. There was an increase in use of organs donated after circulatory death (DCD), from 0 per cent in era 1 to 35·2 per cent in era 4 for recipients with HCC, and from 0·2 to 24·1 per cent for non-HCC recipients. The 3-year mortality rate decreased from 28·3 per cent in era 1 to 16·9 per cent in era 4 (adjusted HR 0·47, 95 per cent c.i. 0·35 to 0·63) for recipients with HCC, and from 20·4 to 9·3 per cent (adjusted HR 0·44, 0·36 to 0·53) for those without HCC. Comparing era 4 with era 1, improvements were more marked in short-term than in long-term mortality, both for recipients with HCC (0-90 days: adjusted HR 0·20, 0·10 to 0·39; 90 days to 5 years: adjusted HR 0·52, 0·35 to 0·75; P = 0·043) and for non-HCC recipients (0-90 days: adjusted HR 0·32, 0·24 to 0·42; 90 days to 5 years: adjusted HR 0·52, 0·40 to 0·67; P = 0·024). CONCLUSION: In the past 20 years, the mortality rate after liver transplantation has more than halved, despite increasing use of DCD donors. Improvements in overall survival can be explained by decreases in short-term and longer-term mortality.
BACKGROUND: The increasing demand for liver transplantation has led to considerable changes in the characteristics of donors and recipients. This study evaluated short- and long-term mortality among recipients of liver transplantation for HCC and non-HCC indications in the UK between 1997 and 2016. METHODS: First-time elective adult liver transplant recipients in the UK were identified, and four successive eras of transplantation were compared. Adjusted hazard ratios (aHRs) were estimated comparing the impact of era on short-term (first 90 days) and long-term (90 days to 5 years) mortality, with adjustment for recipient and donor characteristics. RESULTS: Some 1879 HCC and 7661 non-HCC recipients were included. There was an increase in the use of donation after circulatory death (DCD) donors, from 0 per cent in era 1 to 35·2 per cent in era 4 for HCC recipients, and from 0·2 to 24·1 per cent for non-HCC recipients. Three-year mortality decreased from 28·3 per cent in era 1 to 16·9 per cent in era 4 (aHR 0·47, 95 per cent c.i. 0·35 to 0·63) for HCC recipients, and from 20·4 to 9·3 per cent (aHR 0·44, 0·36 to 0·53) for non-HCC recipients. Comparing era 4 with era 1, improvements in short-term mortality were more marked than those in long-term mortality, both for HCC recipients (aHR 0-90 days 0·20, 0·10 to 0·39; 90 days to 5 years 0·52, 0·35 to 0·75; P = 0·04) and for non-HCC recipients (aHR 0-90 days 0·32, 0·24 to 0·42; 90 days to 5 years 0·52, 0·40 to 0·67; P = 0·02). CONCLUSION: Over the past 20 years, mortality after liver transplantation has more than halved, despite the increasing use of DCD donors. Improvements in overall survival can be explained by decreases in both short-term and long-term mortality.
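A minimal sketch, with hypothetical column names, of the period-specific comparison: the hazard ratio for era estimated separately for days 0-90 (follow-up censored at 90 days) and for days 90 to 5 years (90-day survivors entering the risk set late):

```python
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("transplant_eras.csv")  # hypothetical registry extract

# Short term: censor everyone still alive at day 90.
early = df.copy()
early["time"] = early["days"].clip(upper=90)
early["event"] = (early["died"] == 1) & (early["days"] <= 90)
CoxPHFitter().fit(early[["time", "event", "era"]],
                  "time", "event").print_summary()

# Long term: restrict to 90-day survivors, who enter the risk set at day 90.
late = df[df["days"] > 90].copy()
late["entry"] = 90
late["time"] = late["days"].clip(upper=5 * 365.25)
late["event"] = (late["died"] == 1) & (late["days"] <= 5 * 365.25)
CoxPHFitter().fit(late[["time", "event", "era", "entry"]],
                  "time", "event", entry_col="entry").print_summary()
```

Here `era` is treated as a single numeric covariate for brevity; the study compared eras as categories with full case-mix adjustment.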
Subject(s)
Carcinoma, Hepatocellular/surgery , Liver Neoplasms/surgery , Liver Transplantation/mortality , Adult , Carcinoma, Hepatocellular/mortality , Female , Graft Rejection/mortality , Humans , Kaplan-Meier Estimate , Liver Neoplasms/mortality , Male , Middle Aged , Proportional Hazards Models , Registries , Risk Factors , Time Factors , Treatment Outcome , United Kingdom/epidemiology
ABSTRACT
BACKGROUND: Early reports of COVID-19 in pregnancy described management by caesarean section, strict isolation of the neonate and formula feeding. Is this practice justified? OBJECTIVE: To estimate the risk of the neonate becoming infected with SARS-CoV-2 by mode of delivery, type of infant feeding and mother-infant interaction. SEARCH STRATEGY: Two biomedical databases were searched between September 2019 and June 2020. SELECTION CRITERIA: Case reports or case series of pregnant women with confirmed COVID-19, where neonatal outcomes were reported. DATA COLLECTION AND ANALYSIS: Data were extracted on mode of delivery, infant infection status, infant feeding and mother-infant interaction. For reported infant infection, a critical analysis was performed to evaluate the likelihood of vertical transmission. MAIN RESULTS: Forty-nine studies included information on mode of delivery and infant infection status for 655 women and 666 neonates. In all, 28/666 (4%) tested positive postnatally. Of babies born vaginally, 8/292 (2.7%) tested positive, compared with 20/374 (5.3%) born by caesarean. Information on feeding and baby separation was often missing, but of reported breastfed babies 7/148 (4.7%) tested positive, compared with 3/56 (5.3%) of reported formula-fed babies. Of babies reported as nursed with their mother, 4/107 (3.7%) tested positive, compared with 6/46 (13%) of those reported as isolated. CONCLUSIONS: Neonatal COVID-19 infection is uncommon, rarely symptomatic, and the rate of infection is no greater when the baby is born vaginally, breastfed or remains with the mother. TWEETABLE ABSTRACT: Risk of neonatal infection with COVID-19 by delivery route, infant feeding and mother-baby interaction.
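The unadjusted comparisons can be checked directly from the counts reported above; a minimal sketch using a Fisher exact test on neonatal positivity by mode of birth:

```python
from scipy.stats import fisher_exact

# Counts from the abstract:  positive, negative
vaginal_birth   = [8,  292 - 8]
caesarean_birth = [20, 374 - 20]

odds_ratio, p = fisher_exact([vaginal_birth, caesarean_birth])
print(f"vaginal vs caesarean: OR = {odds_ratio:.2f}, p = {p:.3f}")
```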
Subject(s)
Bottle Feeding/statistics & numerical data , Breast Feeding/statistics & numerical data , Cesarean Section/statistics & numerical data , Coronavirus Infections/epidemiology , Coronavirus Infections/transmission , Infant Formula , Infectious Disease Transmission, Vertical/statistics & numerical data , Pneumonia, Viral/epidemiology , Pneumonia, Viral/transmission , Pregnancy Complications, Infectious/epidemiology , Betacoronavirus , Breast Milk Expression , COVID-19 , China/epidemiology , Delivery, Obstetric/statistics & numerical data , Female , Humans , Infant, Newborn , Milk, Human , Mother-Child Relations , Pandemics , Pregnancy , Risk Factors , SARS-CoV-2
ABSTRACT
The abundance of chlorine in the Earth's atmosphere increased considerably during the 1970s to 1990s, following large emissions of anthropogenic long-lived chlorine-containing source gases, notably the chlorofluorocarbons. The chemical inertness of chlorofluorocarbons allows their transport and mixing throughout the troposphere on a global scale, before they reach the stratosphere where they release chlorine atoms that cause ozone depletion. The large ozone loss over Antarctica was the key observation that stimulated the definition and signing in 1987 of the Montreal Protocol, an international treaty establishing a schedule to reduce the production of the major chlorine- and bromine-containing halocarbons. Owing to its implementation, the near-surface total chlorine concentration showed a maximum in 1993, followed by a decrease of half a per cent to one per cent per year, in line with expectations. Remote-sensing data have revealed a peak in stratospheric chlorine after 1996, then a decrease of close to one per cent per year, in agreement with the surface observations of the chlorine source gases and model calculations. Here we present ground-based and satellite data that show a recent and significant increase, at the 2σ level, in hydrogen chloride (HCl), the main stratospheric chlorine reservoir, starting around 2007 in the lower stratosphere of the Northern Hemisphere, in contrast with the ongoing monotonic decrease of near-surface source gases. Using model simulations, we attribute this trend anomaly to a slowdown in the Northern Hemisphere atmospheric circulation, occurring over several consecutive years, transporting more aged air to the lower stratosphere, and characterized by a larger relative conversion of source gases to HCl. This short-term dynamical variability will also affect other stratospheric tracers and needs to be accounted for when studying the evolution of the stratospheric ozone layer.
ABSTRACT
OBJECTIVE: Lupus is a chronic, autoimmune disease that disproportionately affects African Americans. We adapted the Centers for Disease Control and Prevention's Popular Opinion Leader model to implement an intervention tailored for African American individuals that leverages an academic-community partnership and community-based social networks to disseminate culturally appropriate lupus education. METHODS: Academic rheumatologists, social scientists, and researchers in Boston, MA and Chicago, IL partnered with local lupus support groups, community organizations, and churches in neighborhoods with higher proportions of African Americans to develop curriculum and recruit community leaders with and without lupus (Popular Opinion Leaders; POLs). POLs attended four training sessions focused on lupus education, strategies to educate others, and a review of research methods. POLs disseminated information through their social networks and recorded their impact, which was mapped using a geographic information system framework. RESULTS: We trained 18 POLs in greater Boston and 19 in greater Chicago: 97% were African American, 97% were female; and the mean age was 57 years. Fifty-nine percent of Boston POLs and 68% of Chicago POLs had lupus. POLs at both sites engaged members of their social networks and communities in conversations about lupus, health disparities, and the importance of care. Boston POLs documented 97 encounters with 547 community members reached. Chicago POLs documented 124 encounters with 4083 community members reached. CONCLUSIONS: An adapted, community-based POL model can be used to disseminate lupus education and increase awareness in African American communities. Further research is needed to determine the degree to which this may begin to reduce disparities in access to care and outcomes.
Subject(s)
Awareness , Black or African American/education , Community Networks/organization & administration , Lupus Erythematosus, Systemic/epidemiology , Adult , Black or African American/psychology , Aged , Centers for Disease Control and Prevention, U.S./organization & administration , Chronic Disease , Community Networks/trends , Female , Geographic Information Systems/instrumentation , Health Promotion/methods , Healthcare Disparities/ethnology , Healthcare Disparities/statistics & numerical data , Humans , Information Dissemination/methods , Leadership , Lupus Erythematosus, Systemic/prevention & control , Male , Middle Aged , Public Opinion , Research Design , United States/ethnology
ABSTRACT
The vertical transmission of group B Streptococcus (GBS) strains causing neonatal sepsis is one of the leading reasons for neonatal mortality worldwide. The gold standard for GBS detection is enriched culture with or without the aid of chromogenic agars. Given the high risk for morbidity and mortality in this population, high assay sensitivity is required to prevent the personal and economic costs of GBS disease. Nucleic acid amplification tests (NAATs) allow for objective determination of GBS colonization with a sensitivity and a specificity higher than those of traditional culture methods. In this study, we determined the analytical and clinical performance of the Aries GBS assay compared to those of the enrichment culture method, biochemical identification, and the NAATs used at the study sites. Remnant Lim broth samples were used to perform the Aries assay and reference testing. Upon first testing using enriched culture as the reference standard, the Aries GBS assay identified GBS with a 96.1% sensitivity (95% confidence interval [CI], 91.2 to 98.7%) and a 91.4% specificity (95% CI, 88.8 to 93.6%). The test performed with 100% positive agreement (95% CI, 83.2 to 100%) compared to the results of the BD Max GBS assay and 98.0% positive agreement (95% CI, 89.2 to 99.9%) compared to the results of the Cepheid Xpert GBS LB test. Repeatability and reproducibility were maintained in intra- and interlaboratory testing, regardless of the instrument, module, or user who performed the test. The Aries GBS assay can be set up in less than 5 min and produces results in 2 h. The easy setup, with minimal hands-on time, and high assay sensitivity and specificity make this a useful testing option for GBS screening in prepartum women.
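A minimal sketch of the headline performance estimates: sensitivity and specificity with exact (Clopper-Pearson) 95% confidence intervals, using hypothetical counts chosen to be consistent with the figures above:

```python
from statsmodels.stats.proportion import proportion_confint

tp, fn = 98, 4    # hypothetical counts consistent with ~96.1% sensitivity
tn, fp = 610, 57  # hypothetical counts consistent with ~91.4% specificity

sens = tp / (tp + fn)
spec = tn / (tn + fp)
# method="beta" gives the exact Clopper-Pearson interval.
sens_ci = proportion_confint(tp, tp + fn, alpha=0.05, method="beta")
spec_ci = proportion_confint(tn, tn + fp, alpha=0.05, method="beta")

print(f"sensitivity {sens:.1%} (95% CI {sens_ci[0]:.1%} to {sens_ci[1]:.1%})")
print(f"specificity {spec:.1%} (95% CI {spec_ci[0]:.1%} to {spec_ci[1]:.1%})")
```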
Subject(s)
Pregnancy Complications, Infectious/diagnosis , Prenatal Diagnosis/methods , Real-Time Polymerase Chain Reaction/standards , Streptococcal Infections/diagnosis , Adolescent , Adult , Female , Humans , Infectious Disease Transmission, Vertical/prevention & control , Molecular Diagnostic Techniques/standards , Pregnancy , Pregnancy Trimester, Third , Reference Standards , Reproducibility of Results , Sensitivity and Specificity , Streptococcus agalactiae/genetics , Time Factors , Young Adult
ABSTRACT
BACKGROUND: Small bowel obstruction (SBO) is a common indication for emergency laparotomy. There are currently variations in the timing of surgery for patients with SBO and limited evidence on whether delayed surgery affects outcomes. The aim of this study was to evaluate the impact of time to operation on 30-day mortality in patients requiring emergency laparotomy for SBO. METHODS: Data were collected from the National Emergency Laparotomy Audit (NELA) on all patients aged 18 years or older who underwent emergency laparotomy for all forms of SBO between December 2013 and November 2015. The primary outcome measure was 30-day mortality, with date of death obtained from the Office for National Statistics. Patients were grouped according to the time from admission to surgery (less than 24 h, 24-72 h and more than 72 h). A multilevel logistic regression model was used to explore the impact of patient factors, primarily delay to surgery, on 30-day mortality. RESULTS: Some 9991 patients underwent emergency laparotomy requiring adhesiolysis or small bowel resection for SBO. The overall mortality rate was 7·2 per cent (722 patients). Within each time group, 30-day mortality rates were significantly worse with increasing age, ASA grade, Portsmouth POSSUM score and level of contamination. Patients undergoing emergency laparotomy more than 72 h after admission had a significantly higher risk-adjusted 30-day mortality rate (odds ratio 1·39, 95 per cent c.i. 1·09 to 1·76). CONCLUSION: In patients who require an emergency laparotomy with adhesiolysis or resection for SBO, a delay to surgery of more than 72 h is associated with a higher 30-day postoperative mortality rate.
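A minimal sketch, with hypothetical column names, of the risk-adjusted comparison: 30-day mortality by time-to-surgery group, adjusted for case mix. The study fitted a multilevel model with a hospital-level effect; a plain logistic regression is shown here for brevity:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nela_sbo.csv")  # hypothetical audit extract

# Group admission-to-surgery time as in the abstract.
df["delay"] = pd.cut(df["hours_to_surgery"], [0, 24, 72, np.inf],
                     labels=["<24h", "24-72h", ">72h"])

m = smf.logit("died_30d ~ C(delay) + age + C(asa_grade) + p_possum"
              " + C(contamination)", data=df).fit()
print(np.exp(m.params).round(2))      # OR > 1 for '>72h' indicates excess risk
print(np.exp(m.conf_int()).round(2))  # 95% CIs
```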
Subject(s)
Intestinal Obstruction/surgery , Intestine, Small/surgery , Laparotomy/mortality , Time-to-Treatment/statistics & numerical data , Aged , Aged, 80 and over , Databases, Factual , Emergency Treatment/methods , Female , Humans , Laparotomy/adverse effects , Laparotomy/methods , Logistic Models , Male , Middle Aged , Survival Rate , Treatment Outcome
ABSTRACT
Collisions between H2O and CO play a crucial role in the gaseous component of comets and protoplanetary disks. We present here a five-dimensional potential energy surface (PES) for the H2O-CO collisional complex. Ab initio calculations were carried out using the explicitly correlated closed-shell single- and double-excitation coupled cluster approach with a non-iterative perturbative treatment of triple excitations [CCSD(T)-F12a], together with the augmented correlation-consistent aug-cc-pVTZ basis sets. The most stable configuration of the complex, in which the carbon atom of CO points towards the OH bond of water, has a binding energy De = 646.1 cm⁻¹. The end-over-end rotational constant of the H2O-CO complex was extracted from bound-state calculations and found to be B0 = 0.0916 cm⁻¹, in excellent agreement with experimental measurements. Finally, cross sections for the rotational excitation of CO by H2O are computed for s-wave (J = 0) scattering at the full close-coupling level of theory. These results will serve as a benchmark for future studies.
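A minimal sketch using the reported end-over-end rotational constant: converting B0 from wavenumbers to frequency units and evaluating the first few rigid-rotor levels E_J = B0 J(J+1):

```python
B0_CM = 0.0916          # cm^-1, from the bound-state calculations above
CM_TO_GHZ = 29.9792458  # 1 cm^-1 corresponds to 29.9792458 GHz (c in cm/ns)

print(f"B0 = {B0_CM * CM_TO_GHZ:.3f} GHz")
for J in range(4):
    E = B0_CM * J * (J + 1)  # rigid-rotor level energy, cm^-1
    print(f"J = {J}: E = {E:.4f} cm^-1")
```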
ABSTRACT
BACKGROUND: Among patients undergoing emergency laparotomy, 30-day postoperative mortality is around 10-15%. The risk of death among these patients, however, varies greatly because of their clinical characteristics. We developed a risk prediction model for 30-day postoperative mortality to enable better comparison of outcomes between hospitals. METHODS: We analysed data from the National Emergency Laparotomy Audit (NELA) on patients having an emergency laparotomy between December 2013 and November 2015. A prediction model was developed using multivariable logistic regression, with potential risk factors identified from existing prediction models, national guidelines, and clinical experts. Continuous risk factors were transformed if necessary to reflect their non-linear relationship with 30-day mortality. The performance of the model was assessed in terms of its calibration and discrimination. Internal validation was conducted using bootstrap resampling. RESULTS: There were 4458 (11.5%) deaths within 30 days among the 38 830 patients undergoing emergency laparotomy. Variables associated with death included (among others): age, blood pressure, heart rate, physiological variables, malignancy, and ASA physical status classification. The predicted risk of death among patients ranged from 1% to 50%. The model demonstrated excellent calibration and discrimination, with a C-statistic of 0.863 (95% confidence interval, 0.858-0.867). The model retained its high discrimination during internal validation, with a bootstrap-derived C-statistic of 0.861. CONCLUSIONS: The NELA risk prediction model for emergency laparotomies discriminates well between low- and high-risk patients and is suitable for producing risk-adjusted provider mortality statistics.
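A minimal sketch of the internal validation step: an optimism-corrected C-statistic via bootstrap resampling, refitting the model on each resample and testing it on the original data. X and y are NumPy arrays, and the plain logistic model stands in for the actual multivariable specification:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def optimism_corrected_auc(X, y, n_boot=200, seed=0):
    """Harrell-style bootstrap optimism correction of the C-statistic."""
    rng = np.random.default_rng(seed)
    apparent = roc_auc_score(y, LogisticRegression(max_iter=1000)
                             .fit(X, y).predict_proba(X)[:, 1])
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))  # resample with replacement
        m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
        auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
        optimism.append(auc_boot - auc_orig)   # optimism of this resample
    return apparent - np.mean(optimism)        # corrected C-statistic
```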
Subject(s)
Emergency Medical Services/statistics & numerical data , Laparotomy/adverse effects , Laparotomy/statistics & numerical data , Adolescent , Adult , Aged , Aged, 80 and over , Female , Forecasting , Hemodynamics , Humans , Laparotomy/mortality , Male , Medical Audit , Middle Aged , Models, Statistical , Neoplasms/complications , Reproducibility of Results , Retrospective Studies , Risk Adjustment , Risk Factors , United Kingdom/epidemiology , Young Adult
ABSTRACT
AIM: There is uncertainty regarding the optimal sequence of surgery for patients with colorectal cancer (CRC) and synchronous liver metastases. This study was designed to describe temporal trends and inter-hospital variation in surgical strategy, and to compare long-term survival in a propensity score-matched analysis. METHOD: The National Bowel Cancer Audit dataset was used to identify patients diagnosed with primary CRC between 1 January 2010 and 31 December 2015 who underwent CRC resection in the English National Health Service. Hospital Episode Statistics data were used to identify those with synchronous liver-limited metastases who underwent liver resection. Survival outcomes of propensity score-matched groups were compared. RESULTS: Of 1830 patients, 270 (14.8%) underwent a liver-first approach, 259 (14.2%) a simultaneous approach and 1301 (71.1%) a bowel-first approach. The proportion of patients undergoing either a liver-first or simultaneous approach increased over the study period from 26.8% in 2010 to 35.6% in 2015 (P < 0.001). There was wide variation in surgical approach according to hospital trust of diagnosis. There was no evidence of a difference in 4-year survival between the propensity score-matched cohorts according to surgical strategy: bowel first vs simultaneous [hazard ratio (HR) 0.92 (95% CI: 0.80-1.06)] or bowel first vs liver first [HR 0.99 (95% CI: 0.82-1.19)]. CONCLUSION: There is evidence of wide variation in surgical strategy in dealing with CRC and synchronous liver metastases. In selected patients, the simultaneous and liver-first strategies have comparable long-term survival to the bowel-first approach.
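A minimal sketch, with hypothetical covariates assumed numeric, of the propensity-score matching used in such comparisons: a propensity model for surgical strategy followed by 1:1 nearest-neighbour matching on the estimated score:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("crc_synchronous.csv")  # hypothetical audit extract
covs = ["age", "sex", "t_stage", "n_stage", "n_liver_mets"]  # assumed numeric

# Propensity for the liver-first strategy, then match each liver-first
# patient to the bowel-first patient with the closest score.
ps_model = LogisticRegression(max_iter=1000).fit(df[covs], df["liver_first"])
df["pscore"] = ps_model.predict_proba(df[covs])[:, 1]

treated = df[df["liver_first"] == 1]
control = df[df["liver_first"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]])
# Survival in `matched` can then be compared, e.g. with a Cox model.
```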
Subject(s)
Colorectal Neoplasms/surgery , Digestive System Surgical Procedures/methods , Hepatectomy/methods , Hospitals , Liver Neoplasms/surgery , Metastasectomy/methods , Practice Patterns, Physicians' , Aged , Colorectal Neoplasms/pathology , Female , Humans , Kaplan-Meier Estimate , Liver Neoplasms/secondary , Male , Middle Aged , Propensity Score , Radiofrequency Ablation/methods , Survival Rate , Time Factors , United Kingdom
ABSTRACT
BACKGROUND: Centralization of specialist surgical services can improve patient outcomes. The aim of this cohort study was to compare liver resection rates and survival in patients with primary colorectal cancer and synchronous metastases limited to the liver diagnosed at hepatobiliary surgical units (hubs) with those diagnosed at hospital Trusts without hepatobiliary services (spokes). METHODS: The study included patients from the National Bowel Cancer Audit diagnosed with primary colorectal cancer between 1 April 2010 and 31 March 2014 who underwent colorectal cancer resection in the English National Health Service. Patients were linked to Hospital Episode Statistics data to identify those with liver metastases and those who underwent liver resection. Multivariable random-effects logistic regression was used to estimate the odds ratio of liver resection by presence of specialist hepatobiliary services on site. Survival curves were estimated using the Kaplan-Meier method. RESULTS: Of 4547 patients, 1956 (43·0 per cent) underwent liver resection. The 1081 patients diagnosed at hubs were more likely to undergo liver resection (adjusted odds ratio 1·52, 95 per cent c.i. 1·20 to 1·91). Patients diagnosed at hubs had better median survival (30·6 months compared with 25·3 months for spokes; adjusted hazard ratio 0·83, 0·75 to 0·91). There was no difference in survival between hubs and spokes when the analysis was restricted to patients who had liver resection (P = 0·620) or those who did not undergo liver resection (P = 0·749). CONCLUSION: Patients with colorectal cancer and synchronous metastases limited to the liver who are diagnosed at hospital Trusts with a hepatobiliary team on site are more likely to undergo liver resection and have better survival.
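A minimal sketch, with hypothetical column names, of the survival comparison: Kaplan-Meier estimates of median survival for patients diagnosed at hubs versus spokes:

```python
import pandas as pd
from lifelines import KaplanMeierFitter

df = pd.read_csv("nbca_liver_mets.csv")  # hypothetical linked dataset
for is_hub, grp in df.groupby("diagnosed_at_hub"):
    name = "hub" if is_hub else "spoke"
    km = KaplanMeierFitter().fit(grp["months"], grp["died"], label=name)
    print(f"{name}: median survival {km.median_survival_time_:.1f} months")
```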
Subject(s)
Centralized Hospital Services , Colorectal Neoplasms/pathology , Liver Neoplasms/secondary , Liver Neoplasms/surgery , Oncology Service, Hospital , Adolescent , Adult , Aged , Aged, 80 and over , Child , Child, Preschool , Female , Hepatectomy , Humans , Infant , Infant, Newborn , Kaplan-Meier Estimate , Liver Neoplasms/mortality , Male , Middle Aged , Proportional Hazards Models , Treatment Outcome , Young Adult
ABSTRACT
OBJECTIVE: Induction of labour at 39 weeks for nulliparous women aged 35 years and over may prevent stillbirths and does not increase caesarean births, so it may prove popular; however, the overall costs and benefits of such a policy have not been compared. DESIGN: A cost-utility analysis alongside a randomised controlled trial (the 35/39 trial). SETTING: Obstetric departments of 38 UK National Health Service hospitals and one UK primary-care trust. POPULATION: Nulliparous women aged 35 years or over on their expected due date, with a singleton live fetus in a cephalic presentation. METHODS: Costs were estimated from the National Health Service and Personal Social Services perspective, and quality-adjusted life-years (QALYs) were calculated based on patient responses to the EQ-5D at baseline and 4 weeks. MAIN OUTCOME MEASURES: Data on antenatal care, mode of delivery, analgesia in labour, method of induction, EQ-5D (baseline and 4 weeks postnatal) and participant-administered postnatal health resource use were collected. RESULTS: The intervention was associated with a mean cost saving of £263 and a small additional gain in QALYs (though this was not statistically significant), even without considering any possible QALY gains from stillbirth prevention. CONCLUSION: A policy of induction of labour at 39 weeks for women of advanced maternal age would save money. TWEETABLE ABSTRACT: A policy of induction of labour at 39 weeks of gestation for women of advanced maternal age would save money.
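The cost-utility arithmetic is simple; a minimal sketch with illustrative (not trial) utility values, computing QALYs as the area under the EQ-5D utility curve between baseline and 4 weeks and pairing them with the incremental cost:

```python
def qalys(u_baseline, u_4wk, weeks=4.0):
    # Trapezoidal area under the utility curve, expressed in years.
    return (u_baseline + u_4wk) / 2 * (weeks / 52.18)

# Illustrative utilities only; the trial reported a small, non-significant
# QALY gain alongside a mean cost saving of GBP 263 per woman.
qaly_induction = qalys(0.80, 0.88)
qaly_expectant = qalys(0.80, 0.87)
incremental_cost = -263.0  # negative: induction saved money on average

print(f"incremental QALYs: {qaly_induction - qaly_expectant:+.5f}")
print(f"incremental cost: GBP {incremental_cost:+.0f}")
```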
Subject(s)
Delivery, Obstetric/economics , Labor, Induced/economics , Maternal Age , Prenatal Care/economics , Term Birth , Adult , Cost-Benefit Analysis , Delivery, Obstetric/methods , Female , Humans , Labor, Induced/methods , Pregnancy , Quality-Adjusted Life Years , United Kingdom
ABSTRACT
Stratospheric aerosols (SAs) are a variable component of the Earth's albedo that may be intentionally enhanced in the future to offset greenhouse gases (geoengineering). The role of tropospheric-sourced sulfur dioxide (SO2) in maintaining background SAs has been debated for decades without in-situ measurements of SO2 at the tropical tropopause to inform this issue. Here we clarify the role of SO2 in maintaining SAs by using new in-situ SO2 measurements to evaluate climate models and satellite retrievals. We then use the observed tropical tropopause SO2 mixing ratios to estimate the global flux of SO2 across the tropical tropopause. These analyses show that the tropopause background SO2 is about 5 times smaller than reported by the average satellite observations that have been used recently to test atmospheric models. This shifts the view of SO2 as a dominant source of SAs to a near-negligible one, possibly revealing a significant gap in the SA budget.
ABSTRACT
OBJECTIVES: The visual, somatosensory, and vestibular systems are critical for establishing a sensorimotor set for postural control and orientation. The goal of this study was to assess how individuals with a vestibular-related disorder keep their balance following prolonged stance on an inclined surface. We hypothesize that subjects will show greater reliance on the somatosensory system than age-matched controls as inferred by the presence of a forward postural lean aftereffect following the inclined stance (i.e., a positive response). RESULTS: The results revealed an underlying somatosensory-dominant strategy for postural control in the vestibular group: 100% of the subjects tested positive compared to 58% in the control group (P=.006). CONCLUSION: Individuals with a vestibular-related disorder use a somatosensory-dominant strategy for postural orientation following prolonged inclined stance. The implications for the management of this population are discussed.