ABSTRACT
The science around the use of masks by the public to impede COVID-19 transmission is advancing rapidly. In this narrative review, we develop an analytical framework to examine mask usage, synthesizing the relevant literature to inform multiple areas: population impact, transmission characteristics, source control, wearer protection, sociological considerations, and implementation considerations. A primary route of transmission of COVID-19 is via respiratory particles, and it is known to be transmissible from presymptomatic, paucisymptomatic, and asymptomatic individuals. Reducing disease spread requires two things: limiting contacts of infected individuals via physical distancing and other measures, and reducing the transmission probability per contact. The preponderance of evidence indicates that mask wearing reduces transmissibility per contact by reducing transmission of infected respiratory particles in both laboratory and clinical contexts. Public mask wearing is most effective at reducing spread of the virus when compliance is high. Given the current shortages of medical masks, we recommend the adoption of public cloth mask wearing, as an effective form of source control, in conjunction with existing hygiene, distancing, and contact tracing strategies. Because many respiratory particles become smaller due to evaporation, we recommend increasing focus on a previously overlooked aspect of mask usage: mask wearing by infectious people ("source control"), which has benefits at the population level, rather than only mask wearing by susceptible people, such as health care workers, with a focus on individual outcomes. We recommend that public officials and governments strongly encourage the widespread use of face masks in public, including through appropriate regulation.
Subjects
COVID-19, Contact Tracing, Masks, SARS-CoV-2, COVID-19/epidemiology, COVID-19/prevention & control, Humans
ABSTRACT
Predictions of COVID-19 case growth and mortality are critical to the decisions of political leaders, businesses, and individuals grappling with the pandemic. This predictive task is challenging due to the novelty of the virus, limited data, and dynamic political and societal responses. We embed a Bayesian time series model and a random forest algorithm within an epidemiological compartmental model for empirically grounded COVID-19 predictions. The Bayesian case model fits a location-specific curve to the velocity (first derivative) of the log transformed cumulative case count, borrowing strength across geographic locations and incorporating prior information to obtain a posterior distribution for case trajectories. The compartmental model uses this distribution and predicts deaths using a random forest algorithm trained on COVID-19 data and population-level characteristics, yielding daily projections and interval estimates for cases and deaths in U.S. states. We evaluated the model by training it on progressively longer periods of the pandemic and computing its predictive accuracy over 21-day forecasts. The substantial variation in predicted trajectories and associated uncertainty between states is illustrated by comparing three unique locations: New York, Colorado, and West Virginia. The sophistication and accuracy of this COVID-19 model offer reliable predictions and uncertainty estimates for the current trajectory of the pandemic in the U.S. and provide a platform for future predictions as shifting political and societal responses alter its course.
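The case model's central mechanic, fitting a curve to the velocity (first difference) of the log-transformed cumulative case count and integrating the projected velocity forward, can be sketched as follows. This is a minimal stand-in, not the paper's implementation: the counts are synthetic, and an ordinary least-squares linear trend replaces the Bayesian hierarchical fit that borrows strength across locations.

```python
import numpy as np

# Hypothetical cumulative case counts for one location (illustrative only).
cum_cases = np.array([10, 18, 30, 52, 85, 130, 190, 260, 340, 425, 510, 590], float)

# "Velocity": first difference of the log-transformed cumulative count.
log_c = np.log(cum_cases)
velocity = np.diff(log_c)

# Fit a curve to the velocity; a least-squares linear trend stands in for the
# paper's location-specific Bayesian fit.
t = np.arange(velocity.size)
slope, intercept = np.polyfit(t, velocity, 1)

# Project the velocity 21 days ahead (floored at zero, since cumulative
# counts cannot decline) and integrate back to a case trajectory.
future_t = np.arange(velocity.size, velocity.size + 21)
future_v = np.clip(intercept + slope * future_t, 0.0, None)
projected = np.exp(log_c[-1] + np.cumsum(future_v))
```

In the paper's framework, a posterior distribution over such trajectories (rather than a single point projection) would then feed the compartmental model and the random forest for death predictions.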
Subjects
COVID-19/epidemiology, COVID-19/mortality, Forecasting/methods, Statistical Models, Pandemics/statistics & numerical data, SARS-CoV-2, Algorithms, Bayes Theorem, COVID-19/transmission, Computational Biology, Humans, Machine Learning, United States/epidemiology
ABSTRACT
PURPOSE: Sepsis is a heterogeneous syndrome. Identification of sepsis subphenotypes with distinct immune profiles could lead to targeted therapies. This study investigates the immune profiles of patients with sepsis following distinct body temperature patterns (i.e., temperature trajectory subphenotypes). METHODS: Hospitalized patients from four hospitals between 2018 and 2022 with suspicion of infection were included. A previously validated temperature trajectory algorithm was used to classify study patients into temperature trajectory subphenotypes. Microbiological profiles, clinical outcomes, and levels of 31 biomarkers were compared between these subphenotypes. RESULTS: The 3576 study patients were classified into four temperature trajectory subphenotypes: hyperthermic slow resolvers (N = 563, 16%), hyperthermic fast resolvers (N = 805, 23%), normothermic (N = 1693, 47%), and hypothermic (N = 515, 14%). The mortality rate differed significantly between subphenotypes: it was highest in hypothermics (14.2%), followed by hyperthermic slow resolvers (6%) and normothermics (5.5%), and lowest in hyperthermic fast resolvers (3.6%) (p < 0.001).
After multiple testing correction for the 31 biomarkers tested, 20 biomarkers remained significantly different between temperature trajectories: angiopoietin-1 (Ang-1), C-reactive protein (CRP), feline McDonough sarcoma-like tyrosine kinase 3 ligand (Flt-3l), granulocyte colony stimulating factor (G-CSF), granulocyte-macrophage colony stimulating factor (GM-CSF), interleukin (IL)-15, IL-1 receptor antagonist (RA), IL-2, IL-6, IL-7, interferon gamma-induced protein 10 (IP-10), monocyte chemoattractant protein-1 (MCP-1), human macrophage inflammatory protein 3 alpha (MIP-3a), neutrophil gelatinase-associated lipocalin (NGAL), pentraxin-3, thrombomodulin, tissue factor, soluble triggering receptor expressed on myeloid cells-1 (sTREM-1), and vascular cellular adhesion molecule-1 (VCAM-1). The hyperthermic fast and slow resolvers had the highest levels of most pro- and anti-inflammatory cytokines. Hypothermics had suppressed levels of most cytokines but the highest levels of several coagulation markers (Ang-1, thrombomodulin, tissue factor). CONCLUSION: Sepsis subphenotypes identified using the universally available measurement of body temperature had distinct immune profiles. Hypothermic patients, who had the highest mortality rate, also had the lowest levels of most pro- and anti-inflammatory cytokines.
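The abstract does not name the multiple testing correction applied to the 31 biomarker comparisons; the Benjamini-Hochberg false discovery rate procedure is a common choice for this kind of biomarker screen. A minimal implementation of that procedure, offered as a sketch under that assumption rather than the study's actual code, looks like this:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Indices of hypotheses rejected while controlling the FDR at alpha."""
    m = len(pvals)
    # Rank hypotheses by p-value (smallest first).
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest rank whose ordered p-value clears its BH threshold
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank
    # Reject every hypothesis at or below the largest qualifying rank.
    return sorted(order[:k])
```

Applied to a vector of 31 biomarker p-values, this returns the indices of the biomarkers that survive correction at the chosen FDR level.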
ABSTRACT
In emerging epidemics, early estimates of key epidemiological characteristics of the disease are critical for guiding public policy. In particular, identifying high-risk population subgroups aids policymakers and health officials in combating the epidemic. This has been challenging during the coronavirus disease 2019 (COVID-19) pandemic because governmental agencies typically release aggregate COVID-19 data as summary statistics of patient demographics. These data may identify disparities in COVID-19 outcomes between broad population subgroups, but do not provide comparisons between more granular population subgroups defined by combinations of multiple demographics. We introduce a method that helps to overcome the limitations of aggregated summary statistics and yields estimates of COVID-19 infection and case fatality rates - key quantities for guiding public policy related to the control and prevention of COVID-19 - for population subgroups across combinations of demographic characteristics. Our approach uses pseudo-likelihood based logistic regression to combine aggregate COVID-19 case and fatality data with population-level demographic survey data to estimate infection and case fatality rates for population subgroups across combinations of demographic characteristics. We illustrate our method on California COVID-19 data to estimate test-based infection and case fatality rates for population subgroups defined by gender, age, and race/ethnicity. Our analysis indicates that in California, males have higher test-based infection rates and test-based case fatality rates across age and race/ethnicity groups, with the gender gap widening with increasing age. Although elderly people infected with COVID-19 are at elevated risk of mortality, test-based infection rates do not increase monotonically with age. Notably, the working-age population has a higher test-based infection rate than children, adolescents, and elderly people in their 60s to 80s.
LatinX and African Americans have higher test-based infection rates than other race/ethnicity groups. The five subgroups with the highest test-based case fatality rates are all male (African American, Asian, multi-race, LatinX, and White males), followed by African American females, indicating that African Americans are an especially vulnerable California subpopulation.
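A toy version of the estimation idea, combining marginal (aggregate) case counts with survey-based subgroup population counts to recover subgroup-level rates, can be sketched as below. Everything here is a labeled assumption: the population counts and rates are synthetic, only two demographics with two levels each are used, and squared-error matching of marginal rates by gradient descent stands in for the paper's pseudo-likelihood logistic regression.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

GENDERS, AGES = ("F", "M"), ("young", "old")

# Hypothetical survey-based population counts per (gender, age band).
N = {("F", "young"): 1000, ("F", "old"): 800,
     ("M", "young"): 900, ("M", "old"): 1200}

def true_rate(g, a):
    # Synthetic "truth", used only to manufacture the aggregate data below.
    return sigmoid(-3.0 + 0.5 * (g == "M") + 1.0 * (a == "old"))

# What agencies typically release: marginal rates, not the joint table.
pop_g = {g: sum(N[g, a] for a in AGES) for g in GENDERS}
pop_a = {a: sum(N[g, a] for g in GENDERS) for a in AGES}
rate_g = {g: sum(N[g, a] * true_rate(g, a) for a in AGES) / pop_g[g] for g in GENDERS}
rate_a = {a: sum(N[g, a] * true_rate(g, a) for g in GENDERS) / pop_a[a] for a in AGES}

def subgroup_rates(b0, bg, ba):
    # Additive logistic model over the joint demographic cells.
    return {(g, a): sigmoid(b0 + bg * (g == "M") + ba * (a == "old"))
            for g in GENDERS for a in AGES}

def loss(b0, bg, ba):
    # Squared mismatch between model-implied and observed marginal rates.
    p = subgroup_rates(b0, bg, ba)
    err_g = sum((sum(N[g, a] * p[g, a] for a in AGES) / pop_g[g] - rate_g[g]) ** 2
                for g in GENDERS)
    err_a = sum((sum(N[g, a] * p[g, a] for g in GENDERS) / pop_a[a] - rate_a[a]) ** 2
                for a in AGES)
    return err_g + err_a

# Finite-difference gradient descent over the 3 logistic parameters.
theta, h, lr = [0.0, 0.0, 0.0], 1e-6, 0.5
loss0 = loss(*theta)
for _ in range(20000):
    grad = []
    for i in range(3):
        up, dn = theta[:], theta[:]
        up[i] += h
        dn[i] -= h
        grad.append((loss(*up) - loss(*dn)) / (2 * h))
    theta = [t - lr * g for t, g in zip(theta, grad)]

fitted = subgroup_rates(*theta)  # estimated rate for every (gender, age) cell
```

The payoff is the last line: rates for joint cells such as (M, old) that never appear in the released aggregates, which is the granularity the paper's method is designed to recover.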
Subjects
COVID-19/epidemiology, Logistic Models, Adolescent, Adult, Age Factors, Aged, Aged 80 and over, COVID-19/mortality, California/epidemiology, California/ethnology, Child, Ethnicity, Female, Health Surveys, Humans, Likelihood Functions, Male, Middle Aged, Monte Carlo Method, Pandemics, Racial Groups, Risk Factors, SARS-CoV-2/physiology, Sex Factors
ABSTRACT
Epidemiologists use prediction models to downscale (i.e., interpolate) air pollution exposure where monitoring data are insufficient. This study compares machine learning prediction models for ground-level ozone during wildfires, evaluating the predictive accuracy of ten algorithms on the daily 8-hour maximum average ozone during a 2008 wildfire event in northern California. Models were evaluated using a leave-one-location-out cross-validation (LOLO CV) procedure to account for the spatial and temporal dependence of the data and produce more realistic estimates of prediction error. LOLO CV avoids both the well-known overly optimistic bias of k-fold cross-validation on dependent data and the conservative bias of evaluating prediction error over a coarser spatial resolution via leave-k-locations-out CV. Gradient boosting was the most accurate of the ten machine learning algorithms, with the lowest LOLO CV estimated root mean square error (0.228) and the highest LOLO CV R² (0.677). Random forest was the second best performing algorithm, with an LOLO CV R² of 0.661. The LOLO CV estimates of predictive accuracy were less optimistic than 10-fold CV estimates for all ten models. The difference in estimated accuracy between the 10-fold CV and LOLO CV was greater for more flexible models like gradient boosting and random forest. The order of estimated model accuracy depended on the choice of evaluation metric, indicating that 10-fold CV and LOLO CV may select different models or sets of covariates as optimal, which calls into question the reliability of 10-fold CV for model (or variable) selection. These prediction models are designed for interpolating ozone exposure and are not suited to inferring the effect of wildfires on ozone or extrapolating to predict ozone in other spatial or temporal domains. This is demonstrated by the inability of the best performing models to accurately predict ozone during the 2007 southern California wildfires.
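The optimism the abstract describes is easy to reproduce: when a model can memorize location-specific structure, ordinary k-fold CV leaks other days from the held-out monitor's location into the training set, while LOLO CV does not. Below is a self-contained sketch with synthetic data and a deliberately simple per-location-mean model; it illustrates the CV contrast only and is not the study's ozone models or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a monitoring network: 8 locations x 30 days.
n_loc, n_day = 8, 30
loc_effect = rng.normal(0.0, 1.0, n_loc)          # persistent site-level signal
y = (loc_effect[:, None] + rng.normal(0.0, 0.3, (n_loc, n_day))).ravel()
loc = np.repeat(np.arange(n_loc), n_day)
idx = np.arange(y.size)

def predict(train, test):
    # Deliberately simple model: per-location mean when the location was
    # seen in training, global training mean otherwise.
    means = {l: y[train][loc[train] == l].mean() for l in np.unique(loc[train])}
    fallback = y[train].mean()
    return np.array([means.get(l, fallback) for l in loc[test]])

def rmse(folds):
    errs = np.concatenate([predict(tr, te) - y[te] for tr, te in folds])
    return float(np.sqrt(np.mean(errs ** 2)))

# LOLO CV: every day from the held-out monitor is unseen at training time.
lolo_folds = [(idx[loc != l], idx[loc == l]) for l in range(n_loc)]

# Ordinary 10-fold CV: other days from the same monitor leak into training.
perm = rng.permutation(idx)
kfold_folds = [(np.setdiff1d(idx, te), te) for te in np.array_split(perm, 10)]

lolo_rmse, kfold_rmse = rmse(lolo_folds), rmse(kfold_folds)
```

With this setup, the 10-fold RMSE is far smaller than the LOLO RMSE, yet only the LOLO figure reflects the actual task of predicting at a monitor with no training data, which is the abstract's point about spatial interpolation.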
Subjects
Air Pollutants/analysis, Air Pollution/statistics & numerical data, Environmental Monitoring/methods, Machine Learning, Ozone/analysis, Wildfires, Air Pollution/analysis, Algorithms, California, Reproducibility of Results
ABSTRACT
Wildfires have been increasing in frequency in the western United States (US), with the 2017 and 2018 fire seasons among the worst the western US has seen in terms of suppression costs and air pollution. Although growing evidence suggests respiratory exacerbations from elevated fine particulate matter (PM2.5) during wildfires, significantly less is known about the human health impacts of ozone (O3), which may also be increased by wildfires. Using machine learning, we created daily surface concentration maps for PM2.5 and O3 during an intense wildfire in California in 2008. We then linked these daily exposures to counts of respiratory hospitalizations and emergency department (ED) visits at the ZIP code level. We calculated relative risks of respiratory health outcomes using Poisson generalized estimating equations models for each exposure in separate and mutually adjusted models, additionally adjusted for pertinent covariates. During the active fire periods, PM2.5 was significantly associated with exacerbations of asthma and chronic obstructive pulmonary disease (COPD), and these effects remained after controlling for O3. Effect estimates of O3 during the fire period were non-significant for respiratory hospitalizations but were significant for ED visits for asthma (RR = 1.05, 95% CI = (1.022, 1.078) for a 10 ppb increase in O3). In mutually adjusted models, the significant findings for PM2.5 remained, whereas the associations with O3 were confounded. Adjusted for O3, the RR for asthma ED visits associated with a 10 µg/m3 increase in PM2.5 was 1.112 (95% CI = (1.087, 1.138)).
The significant findings for PM2.5, but not O3, in mutually adjusted models are likely due to the fact that PM2.5 levels during these fires exceeded the 24-hour National Ambient Air Quality Standard (NAAQS) of 35 µg/m3 for 4976 ZIP-code days and reached levels up to 6.073 times the NAAQS, whereas our estimated O3 levels during the fire period only occasionally exceeded the NAAQS of 70 ppb, with low exceedance levels. Future studies should continue to investigate the combined role of O3 and PM2.5 during wildfires to obtain a more comprehensive assessment of the cumulative burden on health from wildfire smoke.
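The reported relative risks come from Poisson log-linear models, where the RR for a 10-unit exposure increase is exp(10·β). The sketch below fits a plain Poisson regression by Newton's method to synthetic daily counts; it illustrates how such an RR is computed and is not the study's GEE model, which additionally handles ZIP-code clustering and covariate adjustment.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily asthma ED counts with a log-linear exposure effect
# (hypothetical numbers; exposure plays the role of O3 in ppb).
n = 500
exposure = rng.uniform(0.0, 80.0, n)
true_beta = np.array([1.0, 0.005])       # log-rate change of 0.005 per ppb
counts = rng.poisson(np.exp(true_beta[0] + true_beta[1] * exposure))

# Poisson regression (log link) fit by Newton's method on the log-likelihood.
X = np.column_stack([np.ones(n), exposure])
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)                # current fitted means
    grad = X.T @ (counts - mu)           # score vector
    hess = X.T @ (X * mu[:, None])       # Fisher information
    beta = beta + np.linalg.solve(hess, grad)

# RR for a 10-unit increase in exposure, the form reported in the abstract.
rr_per_10 = float(np.exp(10.0 * beta[1]))
```

With the synthetic coefficient above, the recovered RR per 10 ppb lands near exp(0.05) ≈ 1.05, the same order as the asthma ED estimate quoted in the abstract.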