ABSTRACT
Patients of Asian and black ethnicity face disadvantage on the renal transplant waiting list in the UK because of a lack of human leucocyte antigen- and blood group-matched donors from an overwhelmingly white deceased donor pool. This study evaluates outcomes of renal allografts from Asian and black donors. The UK Transplant Registry was analysed for adult deceased donor kidney-only transplants performed between 2001 and 2015. Patients of Asian and black ethnicity constituted 12.4% and 6.7% of all deceased donor recipients but only 1.6% and 1.2% of all deceased donors, respectively. Unadjusted survival analysis demonstrated significantly inferior long-term allograft outcomes associated with Asian and black donors compared with white donors. On Cox regression analysis, Asian donor and black recipient ethnicities were associated with poorer outcomes than their white counterparts. On ethnicity matching, compared with the white donor-white recipient baseline group and adjusting for other donor and recipient factors, 5-year graft outcomes were significantly poorer for the black donor-black recipient, Asian donor-white recipient, and white donor-black recipient combinations, listed from worst to least-poor unadjusted 5-year graft survival. Increased deceased donation among ethnic minorities could benefit the recipient pool by increasing the number of available organs; however, a refined approach may be required to enhance outcomes.
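As an illustration of the kind of adjusted analysis described above, the following sketch fits a Cox proportional hazards model for graft survival by donor-recipient ethnicity pairing using the Python lifelines library. The data, covariates and variable names are invented for illustration; this is not the UK Transplant Registry analysis.

```python
# Minimal sketch of a Cox proportional-hazards model for graft survival by
# donor-recipient ethnicity pairing (synthetic data; covariates illustrative only).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
pairs = pd.Series(rng.choice(
    ["white_white", "black_black", "asian_white", "white_black"], size=n))
df = pd.DataFrame({
    "years_to_failure": rng.exponential(8, size=n),   # follow-up time
    "graft_failed": rng.integers(0, 2, size=n),       # event indicator
    "donor_age": rng.normal(45, 12, size=n),
    "cold_ischaemia_hr": rng.normal(14, 4, size=n),
})
# One-hot encode the pairing, keeping white donor-white recipient as baseline
dummies = pd.get_dummies(pairs, prefix="pair").astype(int)
df = pd.concat([df, dummies.drop(columns=["pair_white_white"])], axis=1)

cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_failure", event_col="graft_failed")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```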
Subject(s)
Asian People, Black People, Graft Survival, Kidney Transplantation, Tissue Donors, Humans, United Kingdom, Male, Female, Adult, Middle Aged, Tissue Donors/supply & distribution, Black People/statistics & numerical data, Registries, White People/statistics & numerical data, Treatment Outcome, Aged, Proportional Hazards Models, Waiting Lists, Transplant Recipients/statistics & numerical data
ABSTRACT
Little is known about the level of testing required to sustain elimination of hepatitis C (HCV), once achieved. In this study, we model the testing coverage required to maintain HCV elimination in an injecting network of people who inject drugs (PWID). We test the hypothesis that network-based strategies are a superior approach to deliver testing. We created a dynamic injecting network structure connecting 689 PWID based on empirical data. The primary outcome was the testing coverage required per month to maintain prevalence at the elimination threshold over 5 years. We compared four testing strategies. Without any testing or treatment provision, the prevalence of HCV increased from the elimination threshold (11.68%) to a mean of 25.4% (SD 2.96%) over the 5-year period. To maintain elimination with random testing, on average, 4.96% (SD 0.83%) of the injecting network needs to be tested per month. However, with a 'bring your friends' strategy, this was reduced to 3.79% (SD 0.64%) of the network (p < .001). The addition of contact tracing improved the efficiency of both strategies. In conclusion, we report that network-based approaches to testing such as 'bring a friend' initiatives and contact tracing lower the level of testing coverage required to maintain elimination.
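A minimal sketch of how random testing can be compared with a 'bring your friends' strategy on a simulated injecting network is shown below (Python, networkx). The graph, transmission probability and cure-on-testing assumption are simplifications made for illustration, not the empirically derived 689-node network or the study's calibrated model.

```python
# Toy comparison of testing strategies on a stand-in injecting network.
import random
import networkx as nx

random.seed(1)
G = nx.barabasi_albert_graph(689, 2)                  # stand-in injecting network
infected0 = set(random.sample(list(G.nodes), k=80))   # ~11.6% starting prevalence

def simulate(months=60, coverage=0.05, bring_friends=False, beta=0.03):
    inf = set(infected0)
    for _ in range(months):
        # transmission across injecting partnerships
        inf |= {v for u in inf for v in G[u] if random.random() < beta}
        # testing: random sample, optionally plus each index case's contacts
        tested = set(random.sample(list(G.nodes),
                                   k=int(coverage * G.number_of_nodes())))
        if bring_friends:
            tested |= {v for u in tested for v in G[u]}
        inf -= tested   # assume test-positive individuals are treated and cured
    return len(inf) / G.number_of_nodes()

print("random testing     :", simulate(coverage=0.05))
print("bring your friends :", simulate(coverage=0.03, bring_friends=True))
```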
Subject(s)
Drug Users, Hepatitis C, Substance Abuse, Intravenous, Humans, Substance Abuse, Intravenous/complications, Substance Abuse, Intravenous/epidemiology, Hepatitis C/diagnosis, Hepatitis C/epidemiology, Hepatitis C/prevention & control, Hepacivirus, Prevalence
ABSTRACT
INTRODUCTION: Learned smoking cues from a smoker's environment are a major cause of lapse and relapse. Quit Sense, a theory-guided Just-In-Time Adaptive Intervention smartphone app, aims to help smokers learn about their situational smoking cues and provide in-the-moment support to help manage these when quitting. METHODS: This was a two-arm feasibility randomized controlled trial (N = 209) to estimate parameters to inform a definitive evaluation. Smokers willing to make a quit attempt were recruited using online paid-for adverts and randomized to "usual care" (text message referral to NHS SmokeFree website) or "usual care" plus a text message invitation to install Quit Sense. Procedures, excluding manual follow-up for nonresponders, were automated. Follow-up at 6 weeks and 6 months included feasibility, intervention engagement, smoking-related, and economic outcomes. Abstinence was verified using cotinine assessment from posted saliva samples. RESULTS: Self-reported smoking outcome completion rates at 6 months were 77% (95% CI 71%, 82%), viable saliva sample return rate was 39% (95% CI 24%, 54%), and health economic data 70% (95% CI 64%, 77%). Among Quit Sense participants, 75% (95% CI 67%, 83%) installed the app and set a quit date and, of those, 51% engaged for more than one week. The 6-month biochemically verified sustained abstinence rate (anticipated primary outcome for a definitive trial) was 11.5% (12/104) among Quit Sense participants and 2.9% (3/105) for usual care (adjusted odds ratio = 4.57, 95% CI 1.23, 16.94). No evidence of between-group differences in hypothesized mechanisms of action was found. CONCLUSIONS: Evaluation feasibility was demonstrated alongside evidence supporting the effectiveness potential of Quit Sense. IMPLICATIONS: Running a primarily automated trial to initially evaluate Quit Sense was feasible, resulting in modest recruitment costs and researcher time, and high trial engagement. When invited, as part of trial participation, to install a smoking cessation app, most participants are likely to do so, and, for those using Quit Sense, an estimated one-half will engage with it for more than 1 week. Evidence that Quit Sense may increase verified abstinence at 6-month follow-up, relative to usual care, was generated, although low saliva return rates to verify smoking status contributed to considerable imprecision in the effect size estimate.
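For readers unfamiliar with the abstinence analysis, a hedged sketch of estimating an adjusted odds ratio with logistic regression (Python, statsmodels) follows. The simulated data and the adjustment covariates are placeholders, not the trial dataset or its prespecified adjustment set.

```python
# Sketch: adjusted odds ratio for 6-month verified abstinence, intervention vs usual care.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 209
df = pd.DataFrame({
    "quit_sense": rng.integers(0, 2, size=n),          # 1 = offered the app
    "baseline_dependence": rng.normal(5, 2, size=n),    # illustrative covariate
    "age": rng.normal(40, 12, size=n),
})
p = 1 / (1 + np.exp(-(-3.5 + 1.5 * df["quit_sense"])))  # simulated abstinence probability
df["abstinent_6m"] = rng.binomial(1, p.to_numpy())

fit = smf.logit("abstinent_6m ~ quit_sense + baseline_dependence + age", data=df).fit(disp=0)
print("adjusted OR:", round(float(np.exp(fit.params["quit_sense"])), 2))
print(np.exp(fit.conf_int().loc["quit_sense"]).round(2))  # 95% CI on the OR scale
```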
Subject(s)
Mobile Applications, Smoking Cessation, Humans, Smoking Cessation/methods, Feasibility Studies, Smoking, Self Report
ABSTRACT
BACKGROUND: Kidney transplantation is the treatment of choice in chronic kidney disease (CKD) stage 5. It is often delayed in younger children until a target weight is achieved due to technical feasibility and historic concerns about poorer outcomes. METHODS: Data on all first paediatric (aged < 18 years) kidney-only transplants performed in the United Kingdom between 1 January 2006 and 31 December 2016 were extracted from the UK Transplant Registry (n = 1,340). Children were categorised by weight at the time of transplant into those < 15 kg and those ≥ 15 kg. Donor, recipient and transplant characteristics were compared between groups using the chi-squared or Fisher's exact test for categorical variables and the Kruskal-Wallis test for continuous variables. Thirty-day, one-year, five-year and ten-year patient and kidney allograft survival were compared using the Kaplan-Meier method. RESULTS: There was no difference in patient survival following kidney transplantation when comparing children < 15 kg with those ≥ 15 kg. Ten-year kidney allograft survival was significantly better for children < 15 kg than children ≥ 15 kg (85.4% vs. 73.5% respectively, p = 0.002). For children < 15 kg, a greater proportion of kidney transplants were from living donors compared with children ≥ 15 kg (68.3% vs. 49.6% respectively, p < 0.001). There was no difference in immediate graft function between the groups (p = 0.54) and delayed graft function was seen in 4.8% and 6.8% of children < 15 kg and ≥ 15 kg respectively. CONCLUSIONS: Our study reports significantly better ten-year kidney allograft survival in children < 15 kg and supports consideration of earlier transplantation for children with CKD stage 5. A higher resolution version of the Graphical abstract is available as Supplementary information.
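The survival comparison described above can be illustrated with a short Kaplan-Meier sketch using the lifelines library; the follow-up times below are synthetic and only show the shape of the analysis, not registry data.

```python
# Sketch: Kaplan-Meier allograft survival by weight group, plus a log-rank test.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(7)
n = 1340
group = rng.choice(["<15kg", ">=15kg"], size=n, p=[0.2, 0.8])
time = np.where(group == "<15kg", rng.exponential(20, n), rng.exponential(15, n))
event = rng.integers(0, 2, size=n)   # 1 = graft failure observed

km = KaplanMeierFitter()
for g in ["<15kg", ">=15kg"]:
    mask = group == g
    km.fit(time[mask], event[mask], label=g)
    print(g, "10-year survival:", round(float(km.survival_function_at_times(10).iloc[0]), 3))

res = logrank_test(time[group == "<15kg"], time[group == ">=15kg"],
                   event_observed_A=event[group == "<15kg"],
                   event_observed_B=event[group == ">=15kg"])
print("log-rank p-value:", round(res.p_value, 4))
```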
Subject(s)
Kidney Failure, Chronic, Kidney Transplantation, Humans, Child, Kidney Transplantation/adverse effects, Graft Survival, Living Donors, United Kingdom/epidemiology, Registries, Retrospective Studies, Treatment Outcome
ABSTRACT
BACKGROUND: To date, performance comparisons between humans and machines have been carried out in many health domains, yet comparisons between machine learning (ML) models and human performance in audio-based respiratory diagnosis remain largely unexplored. OBJECTIVE: The primary objective of this study was to compare human clinicians and an ML model in predicting COVID-19 from respiratory sound recordings. METHODS: Prediction performance on 24 audio samples (12 tested positive) made by 36 clinicians with experience in treating COVID-19 or other respiratory illnesses was compared with predictions made by an ML model trained on 1162 samples. Each sample consisted of voice, cough, and breathing sound recordings from 1 subject, and the length of each sample was around 20 seconds. We also investigated whether combining the predictions of the model and human experts could further enhance performance in terms of both accuracy and confidence. RESULTS: The ML model outperformed the clinicians, yielding a sensitivity of 0.75 and a specificity of 0.83, whereas the best performance achieved by the clinicians was 0.67 in terms of sensitivity and 0.75 in terms of specificity. Integrating the clinicians' and the model's predictions, however, could enhance performance further, achieving a sensitivity of 0.83 and a specificity of 0.92. CONCLUSIONS: Our findings suggest that clinicians and ML models could make better clinical decisions via a cooperative approach and achieve higher confidence in audio-based respiratory diagnosis.
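A small sketch of the comparison logic follows: sensitivity and specificity for the model, the clinicians, and a simple average-probability fusion of the two. The probabilities are fabricated for illustration and the fusion rule is an assumption; the study's exact integration method is not reproduced here.

```python
# Sketch: sensitivity/specificity for model, clinicians, and a mean-probability fusion.
import numpy as np

def sens_spec(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    sens = (y_pred[y_true == 1] == 1).mean()
    spec = (y_pred[y_true == 0] == 0).mean()
    return sens, spec

rng = np.random.default_rng(3)
y_true = np.array([1] * 12 + [0] * 12)                       # 24 samples, 12 positive
p_model = np.clip(y_true * 0.6 + rng.uniform(0, 0.5, 24), 0, 1)   # fabricated scores
p_clin = np.clip(y_true * 0.4 + rng.uniform(0, 0.55, 24), 0, 1)

for name, p in [("model", p_model), ("clinicians", p_clin),
                ("fused (mean prob.)", (p_model + p_clin) / 2)]:
    s, sp = sens_spec(y_true, (p >= 0.5).astype(int))
    print(f"{name}: sensitivity={s:.2f}, specificity={sp:.2f}")
```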
Subject(s)
COVID-19, Respiratory Sounds, Respiratory Tract Diseases, Humans, Male, COVID-19/diagnosis, Machine Learning, Physicians, Respiratory Tract Diseases/diagnosis, Deep Learning
ABSTRACT
Organ donation networks audit and report on national or regional organ donation performance; however, there are inconsistencies in the metrics and definitions used, rendering comparisons difficult or inappropriate. This is despite multiple attempts to explore convergently evolving audits so that collectives of donation networks might transparently share data and practice and then target system interventions. This paper represents a collaboration between the United Kingdom and Australian organ donation organisations that aimed to understand the intricacies of the respective auditing systems, compare the metrics and definitions they employ, and ultimately assess their level of comparability. This point of view outlines the historical context underlying the development of the auditing tools, demonstrates how they differ from the Critical Pathway proposed as a common tool a decade ago, and presents a side-by-side comparison of donation definitions, metrics and data for the 2019 calendar year. There were significant differences in donation definition terminology, metrics and the overall structure of the audits. Fitting the audits to a tiered scaffold allowed for reasonable comparisons; however, this required substantial effort and an understanding of nuance. Direct comparison of international and inter-regional donation performance is challenging and would benefit from consistent auditing processes across organisations.
Subject(s)
Malus, Organ Transplantation, Tissue and Organ Procurement, Australia, Benchmarking, Humans
ABSTRACT
BACKGROUND: Transplantation is widely considered the gold standard method of kidney replacement therapy. Despite compelling evidence for biological sexual dimorphisms, the role of donor and recipient sex matching in transplantation is undefined. METHODS: This UK historical cohort study explores the impact of donor and recipient sex on allograft survival in children receiving their first deceased donor transplant. Nationwide registry data were collected for 1316 transplant procedures performed from January 1, 1999, to December 31, 2019. RESULTS: Male donor-male recipient transplantation occurred most frequently (35%), followed by female donor-male recipient (23%), male donor-female recipient (22%), and female donor-female recipient (20%). The median follow-up time was 7.03 years (IQR: 2.89-12.4 years), with a total of 10,326 person-years. Male donor-male recipients were associated with the highest 10-year kidney allograft survival (72.8% [95% CI 68.3-77.8]) and male donor-female recipients with the lowest (64% [95% CI 57.7-71.1]). Multivariable Cox regression demonstrated that, for male donor transplantation, the risk of kidney allograft failure was 1.46 times greater for female (compared with male) recipients, after adjusting for acquired recipient age, recipient/donor age at transplantation, mismatched HLA A/B/DR, waitlist time, cold ischemia time, CMV seropositivity, donor hypertension, and donor diabetes (HR 1.46 [95% CI 1.06-2.01], p = 0.02). There was no evidence for an independent effect of donor or recipient sex in other combinations. CONCLUSION: This study highlights the complex relationship between donor and recipient sex and pediatric kidney allograft survival, which requires further mechanistic evaluation.
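To make the sex-matching analysis concrete, the sketch below fits a Cox model with a donor-sex by recipient-sex interaction term (lifelines). Data are simulated and the covariate set is abbreviated; it only illustrates how such an adjusted hazard ratio is obtained.

```python
# Sketch: donor-sex x recipient-sex interaction in a Cox model (synthetic data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(11)
n = 1316
df = pd.DataFrame({
    "donor_male": rng.integers(0, 2, size=n),
    "recipient_female": rng.integers(0, 2, size=n),
    "hla_mismatch": rng.integers(0, 7, size=n),
    "cold_ischaemia_hr": rng.normal(14, 5, size=n),
    "time_years": rng.exponential(10, size=n),
    "graft_failure": rng.integers(0, 2, size=n),
})
# interaction term: female recipient of a male-donor kidney
df["maleD_x_femaleR"] = df["donor_male"] * df["recipient_female"]

cph = CoxPHFitter()
cph.fit(df, duration_col="time_years", event_col="graft_failure")
print(cph.summary.loc[["donor_male", "recipient_female", "maleD_x_femaleR"],
                      ["exp(coef)", "p"]])
```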
Subject(s)
Graft Survival, Kidney Transplantation, Tissue Donors, Transplant Recipients, Allografts, Child, Cohort Studies, Female, Humans, Male, Sex Factors, Tissue Donors/statistics & numerical data, Transplant Recipients/statistics & numerical data
ABSTRACT
BACKGROUND: Recent work has shown the potential of using audio data (eg, cough, breathing, and voice) in the screening for COVID-19. However, these approaches focus only on one-off detection of infection from the current audio sample and do not monitor disease progression in COVID-19. Little work has explored continuous monitoring of COVID-19 progression, especially recovery, through longitudinal audio data. Tracking disease progression characteristics and patterns of recovery could bring insights and lead to more timely treatment or treatment adjustment, as well as better resource management in health care systems. OBJECTIVE: The primary objective of this study is to explore the potential of longitudinal audio samples over time for COVID-19 progression prediction and, especially, recovery trend prediction using sequential deep learning techniques. METHODS: Crowdsourced respiratory audio data, including breathing, cough, and voice samples, from 212 individuals over 5-385 days were analyzed, alongside their self-reported COVID-19 test results. We developed and validated a deep learning-enabled tracking tool using gated recurrent units (GRUs) to detect COVID-19 progression by exploring the audio dynamics of the individuals' historical audio biomarkers. The investigation comprised 2 parts: (1) COVID-19 detection in terms of positive and negative (healthy) tests using sequential audio signals, which was primarily assessed in terms of the area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity, with 95% CIs, and (2) longitudinal disease progression prediction over time in terms of probability of positive tests, which was evaluated using the correlation between the predicted probability trajectory and self-reported labels. RESULTS: We first explored the benefits of capturing longitudinal dynamics of audio biomarkers for COVID-19 detection. The strong performance, yielding an AUROC of 0.79, a sensitivity of 0.75, and a specificity of 0.71, supported the effectiveness of the approach compared with methods that do not leverage longitudinal dynamics. We further examined the predicted disease progression trajectory, which displayed high consistency with longitudinal test results with a correlation of 0.75 in the test cohort and 0.86 in a subset of the test cohort with 12 (57.1%) of 21 COVID-19-positive participants who reported disease recovery. Our findings suggest that monitoring COVID-19 evolution via longitudinal audio data has potential in the tracking of individuals' disease progression and recovery. CONCLUSIONS: An audio-based COVID-19 progression monitoring system was developed using deep learning techniques, with strong performance showing high consistency between the predicted trajectory and the test results over time, especially for recovery trend predictions. This has good potential in the post-peak and post-pandemic era, where it can help guide medical treatment and optimize hospital resource allocation. The changes in longitudinal audio samples, referred to as audio dynamics, are associated with COVID-19 progression; thus, modeling the audio dynamics can potentially capture the underlying disease progression process and further aid COVID-19 progression prediction. This framework provides a flexible, affordable, and timely tool for COVID-19 tracking, and more importantly, it also provides a proof of concept of how telemonitoring could be applicable to respiratory disease monitoring in general.
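A minimal sketch of a GRU-based sequence classifier over per-day audio feature vectors, in the spirit of the longitudinal model described above, is given below (PyTorch). The feature dimensionality, architecture details and inputs are placeholders, not the study's model.

```python
# Sketch: GRU over a sequence of daily audio feature vectors -> daily positivity probability.
import torch
import torch.nn as nn

class AudioProgressionGRU(nn.Module):
    def __init__(self, n_features=128, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)          # probability of a positive test

    def forward(self, x):                         # x: (batch, days, n_features)
        out, _ = self.gru(x)
        return torch.sigmoid(self.head(out)).squeeze(-1)   # (batch, days)

model = AudioProgressionGRU()
daily_audio_features = torch.randn(4, 30, 128)    # 4 people, 30 days of samples
p_positive = model(daily_audio_features)          # per-day positivity probability
print(p_positive.shape)                           # torch.Size([4, 30])
```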
Subject(s)
COVID-19, Deep Learning, Voice, Cough/diagnosis, Disease Progression, Humans
ABSTRACT
Transmission of Hepatitis C (HCV) continues via sharing of injection equipment between people who inject drugs (PWID). Network-based modelling studies have produced conflicting results about whether random treatment is preferable to targeting treatment at PWID with multiple partners. We hypothesise that differences in the modelled injecting network structure produce this heterogeneity. The study aimed to test how changing network structure affects HCV transmission and treatment effects. We created three dynamic injecting network structures connecting 689 PWID (UK-net, AUS-net and USA-net) based on published empirical data. We modelled HCV in the networks and at 5 years compared the prevalence of HCV: 1) with no treatment, 2) with randomly targeted treatment, and 3) with treatment targeted at PWID with the most injecting partnerships (degree-based treatment). HCV prevalence at 5 years without treatment differed significantly between the three networks (UK-net (42.8%) vs. AUS-net (38.2%), p < 0.0001 and vs. USA-net (54.0%), p < 0.0001). In the treatment scenarios, UK-net and AUS-net showed a benefit of degree-based treatment, with 5-year prevalences of 1.0% vs. 9.6% (p < 0.0001) and 0.15% vs. 0.44% (p < 0.0001), respectively. USA-net showed no significant difference (29.3% vs. 29.2%, p = 0.0681). Degree-based treatment performed best under conditions of low prevalence and moderate treatment coverage, whereas random treatment performed best under conditions of low treatment coverage and high prevalence. In conclusion, injecting network structure determines the transmission rate of HCV and the most efficient treatment strategy. In real-world injecting network structures, the benefit of targeting HCV treatment at individuals with multiple injecting partnerships may have been underestimated.
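The two treatment-allocation rules compared above can be sketched as follows (networkx): random selection versus selecting the PWID with the most injecting partnerships (highest degree). The graph is an arbitrary stand-in, not UK-net, AUS-net or USA-net.

```python
# Sketch: random vs degree-based selection of treatment targets on a network.
import random
import networkx as nx

random.seed(0)
G = nx.watts_strogatz_graph(689, 4, 0.2)   # placeholder injecting network
n_treat = 35                               # treatment courses available per round

random_targets = random.sample(list(G.nodes), k=n_treat)
degree_targets = [node for node, _ in
                  sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:n_treat]]

degrees = dict(G.degree)
print("mean degree, random targets :", sum(degrees[n] for n in random_targets) / n_treat)
print("mean degree, degree targets :", sum(degrees[n] for n in degree_targets) / n_treat)
```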
Subject(s)
Hepatitis C, Pharmaceutical Preparations, Substance Abuse, Intravenous, Hepacivirus, Hepatitis C/drug therapy, Hepatitis C/epidemiology, Hepatitis C/prevention & control, Humans, Prevalence, Substance Abuse, Intravenous/complications
ABSTRACT
BACKGROUND: We sought to compare the quality of washed red blood cells (RBCs) produced using the ACP215 device or manual methods with different combinations of wash and storage solutions. Our aim was to establish manual methods of washing that would permit a shelf life of more than 24 hours. STUDY DESIGN AND METHODS: Fourteen-day-old RBCs were pooled, split, and washed in one of five ways: 1) washed using the ACP215 and stored in SAGM, 2) manually washed and stored in saline, 3) manually washed in saline and stored in SAGM, 4) manually washed in saline-glucose and stored in SAGM, and 5) manually washed and stored in SAGM. Additional units were pooled and split, washed manually or using the ACP215, and irradiated on Day 14. Units were sampled to 14 days after washing and storage at 4 ± 2°C. RESULTS: All washed RBCs met specification for volume (200-320 mL) and hemoglobin (Hb) content (>40 g/unit). Removal of plasma proteins was better using manual methods: residual immunoglobulin A in saline-glucose-washed cells 0.033 (0.007-0.058) mg/dL manual versus 0.064 (0.026-0.104) mg/dL ACP215 (median, range). Hb loss was lower in manually washed units (mean, ≤ 2.0 g/unit) than in ACP215-washed units (mean, 6.1 g/unit). Disregarding saline-washed and stored cells, hemolysis in all nonirradiated units was less than 0.8% 14 days after washing. As expected, the use of SAGM to store manually washed units improved adenosine triphosphate, glucose, lactate, and pH versus storage in saline. CONCLUSION: The data suggest that the shelf life of manually washed RBCs could be extended to 14 days if stored in SAGM instead of saline.
Subject(s)
Erythrocytes, Therapeutic Irrigation/methods, Adenine, Automation, Blood Glucose/analysis, Blood Preservation, Erythrocyte Transfusion, Erythrocyte Volume, Erythrocytes/chemistry, Erythrocytes/cytology, Glucose, Hemoglobins/analysis, Hemolysis, Humans, Hydrogen-Ion Concentration, Lactic Acid/blood, Mannitol, Saline Solution, Sodium Chloride, Therapeutic Irrigation/instrumentation, Transfusion Reaction/prevention & control
ABSTRACT
The role of macrophages in regulating the tumor microenvironment has spurred the exponential generation of nanoparticle targeting technologies. With the large amount of literature and the speed at which it is generated, it is difficult to remain current with the most up-to-date literature. In this study we performed a topic modeling analysis of 854 abstracts of peer-reviewed literature for the most common usages of nanoparticle targeting of tumor-associated macrophages (TAMs) in solid tumors. The data spans 20 years of literature, providing a broad perspective of the nanoparticle strategies. Our topic model found 6 distinct topics: Immune and TAMs, Nanoparticles, Imaging, Gene Delivery and Exosomes, Vaccines, and Multi-modal Therapies. We also found distinct nanoparticle usage, tumor types, and therapeutic trends across these topics. Moreover, we established that the topic model could be used to assign new papers to the existing topics, thereby creating a Living Review. This type of "bird's-eye-view" analysis provides a useful assessment tool for exploring new and emerging themes within a large field.
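A rough sketch of the workflow described above is shown below: fit a topic model on abstracts, then assign a new paper to the existing topics (the 'Living Review' step). It uses LDA from scikit-learn on a toy corpus; the paper's preprocessing, corpus and six-topic solution are not reproduced.

```python
# Sketch: topic modeling of abstracts, then assigning a new paper to existing topics.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "nanoparticle delivery to tumor associated macrophages for imaging",
    "gene delivery with exosomes reprograms macrophages in solid tumors",
    "vaccine nanoparticles target immune cells in the tumor microenvironment",
    "multi-modal therapy combining nanoparticles and checkpoint blockade",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# "Living Review" step: project a new abstract onto the fitted topics
new_paper = ["iron oxide nanoparticles for macrophage imaging in tumors"]
topic_weights = lda.transform(vec.transform(new_paper))[0]
print("assigned topic:", topic_weights.argmax(), topic_weights.round(2))
```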
Subject(s)
Machine Learning, Nanoparticles, Nanoparticles/chemistry, Humans, Tumor-Associated Macrophages/metabolism, Tumor-Associated Macrophages/immunology, Neoplasms, Tumor Microenvironment, Animals
ABSTRACT
Tropical peatland across Southeast Asia is drained extensively for production of pulpwood, palm oil and other food crops. Associated increases in peat decomposition have led to widespread subsidence, deterioration of peat condition and CO2 emissions. However, quantification of subsidence and peat condition from these processes is challenging due to the scale and inaccessibility of dense tropical peat swamp forests. The development of satellite interferometric synthetic aperture radar (InSAR) has the potential to solve this problem. The Advanced Pixel System using Intermittent Baseline Subset (APSIS, formerly ISBAS) modelling technique provides improved coverage across almost all land surfaces irrespective of ground cover, enabling derivation of a time series of tropical peatland surface oscillations across whole catchments. This study aimed to establish the extent to which APSIS-InSAR can monitor seasonal patterns of tropical peat surface oscillations at North Selangor Peat Swamp Forest, Peninsular Malaysia. Results showed that C-band SAR could penetrate the forest canopy over tropical peat swamp forests intermittently and was applicable to a range of land covers. Therefore the APSIS technique has the potential for monitoring peat surface oscillations under tropical forest canopy using regularly acquired C-band Sentinel-1 InSAR data, enabling continuous monitoring of tropical peatland surface motion at a spatial resolution of 20 m.
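As a simple illustration of the kind of signal such a time series is used to resolve, the sketch below decomposes a synthetic peat-surface motion series into a linear subsidence trend and an annual oscillation by least squares. It is not the APSIS processing chain.

```python
# Sketch: trend + annual-oscillation fit to a synthetic surface-motion time series.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 5, 12 / 365.25)   # ~12-day epochs over 5 years (units: years)
motion_mm = -12 * t + 8 * np.sin(2 * np.pi * t) + rng.normal(0, 2, t.size)

# Fit d(t) = a*t + b*sin(2*pi*t) + c*cos(2*pi*t) + d0
A = np.column_stack([t, np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, motion_mm, rcond=None)
print(f"subsidence trend: {coef[0]:.1f} mm/yr, "
      f"seasonal amplitude: {np.hypot(coef[1], coef[2]):.1f} mm")
```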
Subject(s)
Forests, Radar, Soil, Asia, Southeastern, Wetlands
ABSTRACT
BACKGROUND: Memorializing nurses' experiences during the COVID-19 pandemic had the potential to allow scientists and policymakers to learn about the impact on the nursing profession and health care systems. Yet, nurses are considered a difficult population to recruit for research. OBJECTIVE: To describe an innovative qualitative data collection method for capturing current practice experiences among nurses working during the COVID-19 pandemic. METHODS: Guerrilla theory served as the theoretical framework. Utilizing a qualitative descriptive design, a telephone voicemail messaging system was developed to capture nurses' experiences. RESULTS: Nurses were recruited with convenience and snowball sampling via social media and state listservs. The telephone voicemail messaging system, Twilio, was used. After listening to the recording of the consent form, the participants shared their experiences by leaving a voice message in which they answered the prompt, "Tell us about your experiences working during the COVID-19 pandemic." Seventy voicemails were included, and the voicemails were transcribed. Emails in which nurses shared their experiences with the research team were also added to the data collection; 16 emails were received. Transcripts and emails were uploaded to the qualitative data analysis software program, Dedoose, and coded by 2 researchers using content analysis. Main themes were derived and discussed among the research team. CONCLUSION: Allowing participants multiple modes of expressing their experiences promotes inclusivity in data collection. Further development and standardization of this method are needed for future research.
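For context on the voicemail pipeline, the sketch below shows the sort of TwiML a consent-then-record line could return using Twilio's Python helper library. The prompt wording, recording length and hosting details are assumptions for illustration; the study's actual Twilio configuration is not documented here.

```python
# Hedged sketch of TwiML for a consent-then-record voicemail line (assumed prompts/limits).
from twilio.twiml.voice_response import VoiceResponse

response = VoiceResponse()
response.say("You will now hear the study consent information.")
response.say("Tell us about your experiences working during the COVID-19 pandemic.")
response.record(max_length=600, play_beep=True)   # capture up to a 10-minute message
response.hangup()

print(str(response))   # TwiML XML returned by the webhook that answers the call
```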
Subject(s)
COVID-19, Electronic Mail, Qualitative Research, Humans, COVID-19/nursing, Data Collection/methods, Data Collection/standards, Nurses/psychology, Pandemics, Female, SARS-CoV-2, Adult, Male
ABSTRACT
Importance: Respiratory syncytial virus (RSV) infection can cause severe respiratory illness in older adults. Less is known about the cardiac complications of RSV disease compared with those of influenza and SARS-CoV-2 infection. Objective: To describe the prevalence and severity of acute cardiac events during hospitalizations among adults aged 50 years or older with RSV infection. Design, Setting, and Participants: This cross-sectional study analyzed surveillance data from the RSV Hospitalization Surveillance Network, which conducts detailed medical record abstraction among hospitalized patients with RSV infection detected through clinician-directed laboratory testing. Cases of RSV infection in adults aged 50 years or older within 12 states over 5 RSV seasons (annually from 2014-2015 through 2017-2018 and 2022-2023) were examined to estimate the weighted period prevalence and 95% CIs of acute cardiac events. Exposures: Acute cardiac events, identified by International Classification of Diseases, 9th Revision, Clinical Modification or International Statistical Classification of Diseases, Tenth Revision, Clinical Modification discharge codes, and discharge summary review. Main Outcomes and Measures: Severe disease outcomes, including intensive care unit (ICU) admission, receipt of invasive mechanical ventilation, or in-hospital death. Adjusted risk ratios (ARR) were calculated to compare severe outcomes among patients with and without acute cardiac events. Results: The study included 6248 hospitalized adults (median [IQR] age, 72.7 [63.0-82.3] years; 59.6% female; 56.4% with underlying cardiovascular disease) with laboratory-confirmed RSV infection. The weighted estimated prevalence of experiencing a cardiac event was 22.4% (95% CI, 21.0%-23.7%). The weighted estimated prevalence was 15.8% (95% CI, 14.6%-17.0%) for acute heart failure, 7.5% (95% CI, 6.8%-8.3%) for acute ischemic heart disease, 1.3% (95% CI, 1.0%-1.7%) for hypertensive crisis, 1.1% (95% CI, 0.8%-1.4%) for ventricular tachycardia, and 0.6% (95% CI, 0.4%-0.8%) for cardiogenic shock. Adults with underlying cardiovascular disease had a greater risk of experiencing an acute cardiac event relative to those who did not (33.0% vs 8.5%; ARR, 3.51; 95% CI, 2.85-4.32). Among all hospitalized adults with RSV infection, 18.6% required ICU admission and 4.9% died during hospitalization. Compared with patients without an acute cardiac event, those who experienced an acute cardiac event had a greater risk of ICU admission (25.8% vs 16.5%; ARR, 1.54; 95% CI, 1.23-1.93) and in-hospital death (8.1% vs 4.0%; ARR, 1.77; 95% CI, 1.36-2.31). Conclusions and Relevance: In this cross-sectional study over 5 RSV seasons, nearly one-quarter of hospitalized adults aged 50 years or older with RSV infection experienced an acute cardiac event (most frequently acute heart failure), including 1 in 12 adults (8.5%) with no documented underlying cardiovascular disease. The risk of severe outcomes was nearly twice as high in patients with acute cardiac events compared with patients who did not experience an acute cardiac event. These findings clarify the baseline epidemiology of potential cardiac complications of RSV infection prior to RSV vaccine availability.
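The adjusted risk ratios reported above can be illustrated with a modified Poisson regression (Poisson GLM with robust errors) in statsmodels. The data are simulated and the survey weighting used in the surveillance analysis is not shown.

```python
# Sketch: adjusted risk ratio for ICU admission given an acute cardiac event.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 6248
df = pd.DataFrame({
    "cardiac_event": rng.binomial(1, 0.22, n),
    "age": rng.normal(73, 10, n),
    "cvd_history": rng.binomial(1, 0.56, n),
})
p_icu = 0.12 + 0.08 * df["cardiac_event"]          # simulated outcome risk
df["icu"] = rng.binomial(1, p_icu.to_numpy())

fit = smf.glm("icu ~ cardiac_event + age + cvd_history", data=df,
              family=sm.families.Poisson()).fit(cov_type="HC1")
print("ARR for ICU admission:", round(float(np.exp(fit.params["cardiac_event"])), 2))
print(np.exp(fit.conf_int().loc["cardiac_event"]).round(2))   # 95% CI on the RR scale
```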
Subject(s)
Hospitalization, Respiratory Syncytial Virus Infections, Humans, Male, Female, Aged, Respiratory Syncytial Virus Infections/epidemiology, Respiratory Syncytial Virus Infections/complications, Cross-Sectional Studies, Hospitalization/statistics & numerical data, Middle Aged, Aged, 80 and over, Prevalence, COVID-19/epidemiology, COVID-19/complications, United States/epidemiology, Hospital Mortality
ABSTRACT
Background: Respiratory syncytial virus (RSV) can cause severe disease among infants and older adults. Less is known about RSV among pregnant women. Methods: To analyze hospitalizations with laboratory-confirmed RSV among women aged 18 to 49 years, we used data from the RSV Hospitalization Surveillance Network (RSV-NET), a multistate population-based surveillance system. Specifically, we compared characteristics and outcomes among (1) pregnant and nonpregnant women during the pre-COVID-19 pandemic period (2014-2018), (2) pregnant women with respiratory symptoms during the prepandemic and pandemic periods (2021-2023), and (3) pregnant women with and without respiratory symptoms in the pandemic period. Using multivariable logistic regression, we examined whether pregnancy was a risk factor for severe outcomes (intensive care unit admission or in-hospital death) among women aged 18 to 49 years who were hospitalized with RSV prepandemic. Results: Prepandemic, 387 women aged 18 to 49 years were hospitalized with RSV. Of those, 350 (90.4%) had respiratory symptoms, among whom 33 (9.4%) were pregnant. Five (15.2%) pregnant women and 74 (23.3%) nonpregnant women were admitted to the intensive care unit; no pregnant women and 5 (1.6%) nonpregnant women died. Among 279 hospitalized pregnant women, 41 were identified prepandemic and 238 during the pandemic: 80.5% and 35.3% had respiratory symptoms, respectively (P < .001). Pregnant women were more likely to deliver during their RSV-associated hospitalization during the pandemic vs the prepandemic period (73.1% vs 43.9%, P < .001). Conclusions: Few pregnant women had severe RSV disease, and pregnancy was not a risk factor for a severe outcome. More asymptomatic pregnant women were identified during the pandemic, likely due to changes in testing practices for RSV.
ABSTRACT
Background: During a quit attempt, cues from a smoker's environment are a major cause of brief smoking lapses, which increase the risk of relapse. Quit Sense is a theory-guided Just-In-Time Adaptive Intervention smartphone app, providing smokers with the means to learn about their environmental smoking cues and providing 'in the moment' support to help them manage these during a quit attempt. Objective: To undertake a feasibility randomised controlled trial to estimate key parameters to inform a definitive randomised controlled trial of Quit Sense. Design: A parallel, two-arm randomised controlled trial with a qualitative process evaluation and a 'Study Within A Trial' evaluating incentives on attrition. The research team were blind to allocation except for the study statistician, database developers and lead researcher. Participants were not blind to allocation. Setting: Online with recruitment, enrolment, randomisation and data collection (excluding manual telephone follow-up) automated through the study website. Participants: Smokers (323 screened, 297 eligible, 209 enrolled) recruited via online adverts on Google search, Facebook and Instagram. Interventions: Participants were allocated to 'usual care' arm (n = 105; text message referral to the National Health Service SmokeFree website) or 'usual care' plus Quit Sense (n = 104), via a text message invitation to install the Quit Sense app. Main outcome measures: Follow-up at 6 weeks and 6 months post enrolment was undertaken by automated text messages with an online questionnaire link and, for non-responders, by telephone. Definitive trial progression criteria were met if a priori thresholds were included in or lower than the 95% confidence interval of the estimate. Measures included health economic and outcome data completion rates (progression criterion #1 threshold: ≥ 70%), including biochemical validation rates (progression criterion #2 threshold: ≥ 70%), recruitment costs, app installation (progression criterion #3 threshold: ≥ 70%) and engagement rates (progression criterion #4 threshold: ≥ 60%), biochemically verified 6-month abstinence and hypothesised mechanisms of action and participant views of the app (qualitative). Results: Self-reported smoking outcome completion rates were 77% (95% confidence interval 71% to 82%) and health economic data (resource use and quality of life) 70% (95% CI 64% to 77%) at 6 months. Return rate of viable saliva samples for abstinence verification was 39% (95% CI 24% to 54%). The per-participant recruitment cost was £19.20, which included advert (£5.82) and running costs (£13.38). In the Quit Sense arm, 75% (95% CI 67% to 83%; 78/104) installed the app and, of these, 100% set a quit date within the app and 51% engaged with it for more than 1 week. The rate of 6-month biochemically verified sustained abstinence, which we anticipated would be used as a primary outcome in a future study, was 11.5% (12/104) in the Quit Sense arm and 2.9% (3/105) in the usual care arm (estimated effect size: adjusted odds ratio = 4.57, 95% CIs 1.23 to 16.94). There was no evidence of between-arm differences in hypothesised mechanisms of action. Three out of four progression criteria were met. The Study Within A Trial analysis found a £20 versus £10 incentive did not significantly increase follow-up rates though reduced the need for manual follow-up and increased response speed.
The process evaluation identified several potential pathways to abstinence for Quit Sense, factors which led to disengagement with the app, and app improvement suggestions. Limitations: Biochemical validation rates were lower than anticipated and imbalanced between arms. COVID-19-related restrictions likely limited opportunities for Quit Sense to provide location tailored support. Conclusions: The trial design and procedures demonstrated feasibility and evidence was generated supporting the efficacy potential of Quit Sense. Future work: Progression to a definitive trial is warranted providing improved biochemical validation rates. Trial registration: This trial is registered as ISRCTN12326962. Funding: This award was funded by the National Institute for Health and Care Research (NIHR) Public Health Research programme (NIHR award ref: 17/92/31) and is published in full in Public Health Research; Vol. 12, No. 4. See the NIHR Funding and Awards website for further award information.
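The progression-criterion rule described above (a threshold is met if it lies within or below the 95% confidence interval of the estimate) can be checked with a short sketch; the numerator below is inferred from the reported 77% completion of 209 participants and is illustrative only.

```python
# Sketch: checking a progression criterion against the 95% CI of an observed rate.
from statsmodels.stats.proportion import proportion_confint

completed, enrolled, threshold = 161, 209, 0.70   # 161/209 ~ 77% (numerator inferred)
low, high = proportion_confint(completed, enrolled, alpha=0.05, method="wilson")
print(f"completion rate 95% CI: {low:.2f}-{high:.2f}")
print("progression criterion met:", threshold <= high)   # within or below the CI
```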
Smokers often fail to quit because of urges to smoke triggered by their surroundings (e.g. being around smokers). We developed a smartphone app ('Quit Sense') which learns about an individual's surroundings and locations where they smoke. During a quit attempt, Quit Sense uses in-built sensors to identify when smokers are in those locations and sends 'in the moment' advice to help prevent them from smoking. We ran a feasibility study to help plan for a future large study to see if Quit Sense helps smokers to quit. This feasibility study was designed to tell us how many participants complete study measures; recruitment costs; how many participants install and use Quit Sense; and estimate whether Quit Sense may help smokers to stop and how it might do this. We recruited 209 smokers using online adverts on Google search, Facebook and Instagram, costing £19 per participant. Participants then had an equal chance of receiving a web link to the National Health Service SmokeFree website ('usual care group') or receive that same web link plus a link to the Quit Sense app ('Quit Sense group'). Three-quarters of the Quit Sense group installed the app on their phone and half of these used the app for more than 1 week. We followed up 77% of participants at 6 months to collect study data, though only 39% of quitters returned a saliva sample for abstinence verification. At 6 months, more people in the Quit Sense group had stopped smoking (12%) than the usual care group (3%). It was not clear how the app helped smokers to quit based on study measures, though interviews found that the process of training the app helped people quit through learning about what triggered their smoking behaviour. The findings support undertaking a large study to tell us whether Quit Sense really does help smokers to quit.
Subject(s)
Feasibility Studies, Mobile Applications, Smartphone, Smoking Cessation, Humans, Smoking Cessation/methods, Smoking Cessation/psychology, Female, Male, Adult, Middle Aged
ABSTRACT
RESEARCH FINDINGS: Minor illnesses, such as upper respiratory infections, stomachaches, and fevers, have been associated with children's decreased activity and increased irritability. Mothers of children who are frequently ill report more child behavior problems; however, previous research in this area has yet to simultaneously examine children's temperament. This investigation examined whether experience with recurrent, minor illnesses and negative emotionality worked together to predict young children's social functioning. This multi-method study utilized a sample of 110 daycare-attending children. Nurses went to the daycare centers weekly to perform health screens on the participating children. Minor illness experience was represented using a proportion created by dividing the number of illness diagnoses by the total number of health screenings completed from the time the child was enrolled in the study through his or her second birthday. Toddlers' negative emotionality and social behavior were assessed using mothers' and fathers' reports. The two dimensions of negative emotionality and minor illness experience operated in different ways such that anger worked additively with minor illness experience and fearfulness interacted with minor illness experience to predict social behavior. Children who were described as more temperamentally angry displayed less social competence, especially when they also experienced high proportions of minor illness. Temperamentally fearful children exhibited more externalizing problems when they experienced a higher frequency of illness, whereas fearfulness was not associated with externalizing problems for children who experienced low proportions of illness. PRACTICE OR POLICY: Children's frequent experience with minor illnesses combined with negative emotionality appears to place toddlers at a heightened risk for exhibiting behavior problems. These findings have implications for child and family well-being as well as interactions with childcare providers and peers within childcare settings. Interventions could be developed to target "at risk" children.
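The additive and interactive pattern described above corresponds to a standard moderation regression; a sketch with simulated variables (statsmodels) is given below purely to show the analysis form, not the study's measures or results.

```python
# Sketch: illness proportion x fearfulness interaction, anger entered additively.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 110
df = pd.DataFrame({
    "illness_prop": rng.uniform(0, 0.6, n),   # diagnoses / health screenings
    "fearfulness": rng.normal(0, 1, n),
    "anger": rng.normal(0, 1, n),
})
df["externalizing"] = (0.5 * df["anger"]
                       + 1.2 * df["illness_prop"] * df["fearfulness"]
                       + rng.normal(0, 1, n))

fit = smf.ols("externalizing ~ illness_prop * fearfulness + anger", data=df).fit()
print(fit.params[["illness_prop:fearfulness", "anger"]].round(2))
```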
ABSTRACT
Tropical peatlands are important carbon stores that are vulnerable to drainage and conversion to agriculture. Protection and restoration of peatlands are increasingly recognised as key nature-based solutions that can be implemented as part of climate change mitigation. Identification of peatland areas that are important for protection and restoration, with regard to the state of their carbon stocks, is therefore vital for policy makers. In this paper we combined organic geochemical analysis by Rock-Eval 6 pyrolysis of peat collected from sites with different land management histories and optical remote sensing products to assess whether remotely sensed data could be used to predict peat condition and carbon storage. The study used the North Selangor Peat Swamp Forest, Malaysia, as the model system. Across the sampling sites, the carbon stock in the belowground peat was ca. 12 times higher than that in the forest (median carbon stock held in ground vegetation 114.70 Mg ha-1 and in peat soil 1401.51 Mg ha-1). Peat core sub-samples and litter collected from Fire Affected, Disturbed Forest, and Managed Recovery locations (i.e. disturbed sites) had different decomposition profiles than Central Forest sites. Rock-Eval pyrolysis of the upper peat profiles showed that surface peat layers at Fire Affected, Disturbed Forest, and Managed Recovery locations had lower immature organic matter index (I-index) values (average I-index range in the upper section 0.15 to -0.06) and higher refractory organic matter index (R-index) values (average R-index range in the upper section 0.51 to 0.65) compared with Central Forest sites, indicating enhanced decomposition of the surface peat. In the top 50 cm of the peat profile, carbon stocks were negatively related to the normalised burn ratio (NBR), a satellite-derived parameter (Spearman's rho = -0.664, S = 366, p < 0.05), while there was a positive relationship between the hydrogen index and the normalised burn ratio profile (Spearman's rho = 0.7, S = 66, p < 0.05), suggesting that this remotely sensed product is able to detect degradation of peat in the upper peat profile. We conclude that the NBR can be used to identify degraded peatland areas and to support identification of areas for conservation and restoration.
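The correlation analysis relating top-50 cm carbon stock to the satellite-derived normalised burn ratio (NBR = (NIR - SWIR) / (NIR + SWIR)) can be sketched as follows; the values are invented and only the analysis pattern follows the text above.

```python
# Sketch: Spearman correlation between per-site NBR and top-50 cm carbon stock.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
nbr = rng.uniform(-0.2, 0.6, 12)                                   # per-site NBR
carbon_top50_mg_ha = 300 - 150 * nbr + rng.normal(0, 20, 12)       # invented stocks

rho, p = spearmanr(nbr, carbon_top50_mg_ha)
print(f"Spearman's rho = {rho:.2f}, p = {p:.3f}")
```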