ABSTRACT
BACKGROUND: Risk factors for progression of coronavirus disease 2019 (COVID-19) to severe disease or death are underexplored in U.S. cohorts. OBJECTIVE: To determine the factors on hospital admission that are predictive of severe disease or death from COVID-19. DESIGN: Retrospective cohort analysis. SETTING: Five hospitals in the Maryland and Washington, DC, area. PATIENTS: 832 consecutive COVID-19 admissions from 4 March to 24 April 2020, with follow-up through 27 June 2020. MEASUREMENTS: Patient trajectories and outcomes, categorized by using the World Health Organization COVID-19 disease severity scale. Primary outcomes were death and a composite of severe disease or death. RESULTS: Median patient age was 64 years (range, 1 to 108 years); 47% were women, 40% were Black, 16% were Latinx, and 21% were nursing home residents. Among all patients, 131 (16%) died and 694 (83%) were discharged (523 [63%] had mild to moderate disease and 171 [20%] had severe disease). Of deaths, 66 (50%) were nursing home residents. Of 787 patients admitted with mild to moderate disease, 302 (38%) progressed to severe disease or death: 181 (60%) by day 2 and 238 (79%) by day 4. Patients had markedly different probabilities of disease progression on the basis of age, nursing home residence, comorbid conditions, obesity, respiratory symptoms, respiratory rate, fever, absolute lymphocyte count, hypoalbuminemia, troponin level, and C-reactive protein level and the interactions among these factors. Using only factors present on admission, a model to predict in-hospital disease progression had an area under the curve of 0.85, 0.79, and 0.79 at days 2, 4, and 7, respectively. LIMITATION: The study was done in a single health care system. CONCLUSION: A combination of demographic and clinical variables is strongly associated with severe COVID-19 disease or death and their early onset. 
The COVID-19 Inpatient Risk Calculator (CIRC), using factors present on admission, can inform clinical and resource allocation decisions. PRIMARY FUNDING SOURCE: Hopkins inHealth and COVID-19 Administrative Supplement for the HHS Region 3 Treatment Center from the Office of the Assistant Secretary for Preparedness and Response.
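The reported discrimination (area under the curve of 0.85 at day 2) is a rank-based statistic that can be computed directly from predicted risks and observed outcomes. A minimal sketch of the AUC as the Mann-Whitney probability that a random progressor outranks a random non-progressor, using invented illustrative data rather than the CIRC model itself:

```python
def auc(scores, labels):
    """Rank-based AUC: probability a random positive outranks a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative predicted progression risks and outcomes (1 = progressed)
risks  = [0.92, 0.80, 0.75, 0.40, 0.35, 0.10]
events = [1,    1,    0,    1,    0,    0]
print(auc(risks, events))
```

An AUC of 0.85 means the model ranks a randomly chosen patient who progressed above a randomly chosen patient who did not 85% of the time.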
Subjects
COVID-19/mortality , Hospital Mortality , Hospitalization , Severity of Illness Index , Adolescent , Adult , Aged , Aged, 80 and over , Child , Child, Preschool , Disease Progression , Female , Humans , Infant , Male , Middle Aged , Pandemics , Retrospective Studies , Risk Factors , SARS-CoV-2 , United States/epidemiology
ABSTRACT
Ground- and aircraft-based measurements show that the seasonal amplitude of Northern Hemisphere atmospheric carbon dioxide (CO2) concentrations has increased by as much as 50 per cent over the past 50 years. This increase has been linked to changes in temperate, boreal and arctic ecosystem properties and processes such as enhanced photosynthesis, increased heterotrophic respiration, and expansion of woody vegetation. However, the precise causal mechanisms behind the observed changes in atmospheric CO2 seasonality remain unclear. Here we use production statistics and a carbon accounting model to show that increases in agricultural productivity, which have been largely overlooked in previous investigations, explain as much as a quarter of the observed changes in atmospheric CO2 seasonality. Specifically, Northern Hemisphere extratropical maize, wheat, rice, and soybean production grew by 240 per cent between 1961 and 2008, thereby increasing the amount of net carbon uptake by croplands during the Northern Hemisphere growing season by 0.33 petagrams. Maize alone accounts for two-thirds of this change, owing mostly to agricultural intensification within concentrated production zones in the midwestern United States and northern China. Maize, wheat, rice, and soybeans account for about 68 per cent of extratropical dry biomass production, so it is likely that the total impact of increased agricultural production exceeds the amount quantified here.
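The core accounting step, converting harvested dry biomass into net carbon uptake, can be sketched with simple arithmetic. The carbon fraction of 0.45 below is an assumed typical value for plant dry matter, not a figure from the paper, whose accounting model also handles residues, allocation, and seasonal timing:

```python
# Minimal carbon-accounting sketch (illustrative only).
CARBON_FRACTION = 0.45  # assumed carbon content of crop dry biomass

def crop_carbon_uptake_pg(dry_biomass_pg):
    """Net C uptake (Pg) implied by a given dry-biomass production (Pg)."""
    return dry_biomass_pg * CARBON_FRACTION

# A 240 per cent increase means 2008 production is 3.4x the 1961 level.
production_1961 = 1.0                      # arbitrary reference, Pg dry biomass
production_2008 = production_1961 * (1 + 2.40)
print(production_2008, crop_carbon_uptake_pg(production_2008))
```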
Subjects
Agriculture/statistics & numerical data , Atmosphere/chemistry , Carbon Dioxide/analysis , Crops, Agricultural/metabolism , Efficiency , Seasons , Biomass , Carbon Dioxide/metabolism , Crops, Agricultural/growth & development , Ecosystem , Human Activities
ABSTRACT
Drug overdose is the leading cause of unintentional injury-associated death in the United States. Among 70,237 fatal drug overdoses in 2017, prescription opioids were involved in 17,029 (24.2%) (1). Higher rates of opioid-related deaths have been recorded in nonmetropolitan (rural) areas (2). In 2017, 14 rural counties were among the 15 counties with the highest opioid prescribing rates.* Higher opioid prescribing rates put patients at risk for addiction and overdose (3). Using deidentified data from the Athenahealth electronic health record (EHR) system, opioid prescribing rates among 31,422 primary care providers in the United States were analyzed to evaluate trends from January 2014 to March 2017. This analysis assessed how prescribing practices varied among six urban-rural classification categories of counties, before and after the March 2016 release of CDC's Guideline for Prescribing Opioids for Chronic Pain (Guideline) (4). Patients in noncore (the most rural) counties had 87% higher odds of receiving an opioid prescription compared with persons in large central metropolitan counties during the study period. Across all six county groups, the odds of receiving an opioid prescription decreased significantly after March 2016. This decrease followed a flat trend during the preceding period in micropolitan and large central metropolitan county groups; in contrast, the decrease continued previous downward trends in the other four county groups. Data from EHRs can effectively supplement traditional surveillance methods for monitoring trends in opioid prescribing and other areas of public health importance, with minimal lag time under ideal conditions. Because less densely populated areas show both substantial progress in decreasing opioid prescribing and an ongoing need for further reductions, community health care practices and intervention programs must continue to be tailored to community characteristics.
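The 87% higher odds reported for noncore counties correspond to an odds ratio of 1.87. A sketch of how an odds ratio is computed from a 2x2 table of counts (the numbers below are invented for illustration; the study itself used regression modeling, not a raw 2x2 table):

```python
def odds_ratio(exposed_yes, exposed_no, ref_yes, ref_no):
    """Odds ratio of an outcome in an exposed group vs. a reference group."""
    return (exposed_yes / exposed_no) / (ref_yes / ref_no)

# Hypothetical visit counts: rural (noncore) vs. large central metro
rural_rx, rural_norx = 187, 813          # invented numbers
metro_rx, metro_norx = 110, 890
print(odds_ratio(rural_rx, rural_norx, metro_rx, metro_norx))
```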
Subjects
Analgesics, Opioid/therapeutic use , Drug Prescriptions/statistics & numerical data , Electronic Health Records , Physicians, Primary Care , Practice Patterns, Physicians'/statistics & numerical data , Rural Health Services/statistics & numerical data , Urban Health Services/statistics & numerical data , Humans , United States
ABSTRACT
BACKGROUND: Influenza causes an estimated 3000 to 50,000 deaths per year in the United States of America (US). Timely and representative data can help local, state, and national public health officials monitor and respond to outbreaks of seasonal influenza. Data from cloud-based electronic health records (EHR) and crowd-sourced influenza surveillance systems have the potential to provide complementary, near real-time estimates of influenza activity. The objectives of this paper are to compare two novel influenza-tracking systems with three traditional healthcare-based influenza surveillance systems at four spatial resolutions: national, regional, state, and city, and to determine the minimum number of participants in these systems required to produce influenza activity estimates that resemble the historical trends recorded by traditional surveillance systems. METHODS: We compared influenza activity estimates from five influenza surveillance systems: 1) patient visits for influenza-like illness (ILI) from the US Outpatient ILI Surveillance Network (ILINet), 2) virologic data from World Health Organization (WHO) Collaborating and National Respiratory and Enteric Virus Surveillance System (NREVSS) Laboratories, 3) Emergency Department (ED) syndromic surveillance from Boston, Massachusetts, 4) patient visits for ILI from EHR, and 5) reports of ILI from the crowd-sourced system, Flu Near You (FNY), by calculating correlations between these systems across four influenza seasons, 2012-16, at four different spatial resolutions in the US. For the crowd-sourced system, we also used a bootstrapping statistical approach to estimate the minimum number of reports necessary to produce a meaningful signal at a given spatial resolution. RESULTS: In general, as the spatial resolution increased, correlation values between all influenza surveillance systems decreased. 
Influenza-like illness rates in geographic areas with more than 250 crowd-sourced participants, or with more than 20,000 EHR visit counts, tracked government-led estimates of influenza activity. CONCLUSIONS: With a sufficient number of reports, data from novel influenza surveillance systems can complement traditional healthcare-based systems at multiple spatial resolutions.
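The bootstrapping step can be sketched as repeatedly subsampling the crowd-sourced reports, recomputing the weekly ILI rate series, and checking its correlation against the reference system. Everything below (the series, the sample sizes, the use of the median) is synthetic and illustrative, not the study's actual procedure:

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation of two equal-length series."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def bootstrap_correlation(reports_by_week, reference, n_participants,
                          n_boot=200, seed=0):
    """Median correlation with the reference system when only
    n_participants are resampled (with replacement) each week.
    reports_by_week: per-week lists of 0/1 ILI indicators per participant."""
    rng = random.Random(seed)
    corrs = []
    for _ in range(n_boot):
        rates = []
        for week in reports_by_week:
            sample = [rng.choice(week) for _ in range(n_participants)]
            rates.append(sum(sample) / n_participants)
        corrs.append(pearson(rates, reference))
    return statistics.median(corrs)

ref = [0.1, 0.5, 0.9]                       # reference system's ILI rates
weeks = [[1] + [0] * 9, [1] * 5 + [0] * 5, [1] * 9 + [0]]
print(bootstrap_correlation(weeks, ref, n_participants=400, n_boot=50, seed=1))
```

Sweeping `n_participants` downward until the correlation degrades gives the kind of minimum-sample estimate the paper reports.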
Subjects
Influenza, Human/epidemiology , Crowdsourcing , Disease Outbreaks , Electronic Health Records , Humans , Massachusetts/epidemiology , Population Surveillance , United States
ABSTRACT
The use of multivariate analysis techniques, such as principal component analysis-inverse least-squares (PCA-ILS), has become standard for signal isolation from in vivo fast-scan cyclic voltammetric (FSCV) data due to its superior noise removal and interferent-detection capabilities. However, the requirement of collecting separate training data for PCA-ILS model construction increases experimental complexity and, as such, has been the source of recent controversy. Here, we explore an alternative method, multivariate curve resolution-alternating least-squares (MCR-ALS), to circumvent this issue while retaining the advantages of multivariate analysis. As compared to PCA-ILS, which relies on explicit user definition of component number and profiles, MCR-ALS relies on the unique temporal signatures of individual chemical components for analyte-profile determination. However, due to increased model freedom, proper deployment of MCR-ALS requires careful consideration of the model parameters and the imposition of constraints on possible model solutions. As such, approaches to achieve meaningful MCR-ALS models are characterized. It is shown, through use of previously reported techniques, that MCR-ALS can produce similar results to PCA-ILS and may serve as a useful supplement or replacement to PCA-ILS for signal isolation from FSCV data.
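As a toy illustration of the alternating least-squares idea behind MCR-ALS, a single-component version factors a data matrix D into a concentration profile c and a spectral profile s (D ~ c s^T), alternately solving for one while the other is held fixed and clipping negative values (a non-negativity constraint). Real FSCV analyses use multiple components and further constraints, and the factors are only determined up to scale:

```python
def mcr_als_rank1(D, n_iter=50):
    """One-component MCR-ALS: D ~= outer(c, s), with non-negativity clipping."""
    n_rows, n_cols = len(D), len(D[0])
    s = [1.0] * n_cols                      # initial spectral guess
    c = [0.0] * n_rows
    for _ in range(n_iter):
        ss = sum(v * v for v in s)
        c = [max(0.0, sum(D[i][j] * s[j] for j in range(n_cols)) / ss)
             for i in range(n_rows)]
        cc = sum(v * v for v in c)
        s = [max(0.0, sum(D[i][j] * c[i] for i in range(n_rows)) / cc)
             for j in range(n_cols)]
    return c, s

# Noise-free rank-1 data: concentrations [1, 2, 3] x spectrum [4, 5]
D = [[4.0, 5.0], [8.0, 10.0], [12.0, 15.0]]
c, s = mcr_als_rank1(D)
residual = sum((D[i][j] - c[i] * s[j]) ** 2
               for i in range(3) for j in range(2))
print(residual)
```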
Subjects
Electrochemical Techniques/methods , Animals , Dopamine/chemistry , Hydrogen-Ion Concentration , Least-Squares Analysis , Male , Principal Component Analysis , Rats , Rats, Sprague-Dawley , Signal Processing, Computer-Assisted , Software
ABSTRACT
BACKGROUND: Accurate influenza activity forecasting helps public health officials prepare and allocate resources for unusual influenza activity. Traditional flu surveillance systems, such as the Centers for Disease Control and Prevention's (CDC) influenza-like illness reports, lag behind real time by one to two weeks, whereas information contained in cloud-based electronic health records (EHR) and in Internet users' search activity is typically available in near real-time. We present a method that combines the information from these two data sources with historical flu activity to produce national flu forecasts for the United States up to 4 weeks ahead of the publication of CDC's flu reports. METHODS: We extend a method originally designed to track flu using Google searches, named ARGO, to combine information from EHR and Internet searches with historical flu activities. Our regularized multivariate regression model dynamically selects the most appropriate variables for flu prediction every week. The model is assessed for the flu seasons within the time period 2013-2016 using multiple metrics including root mean squared error (RMSE). RESULTS: Our method reduces the RMSE of the publicly available alternative (Healthmap flutrends) method by 33%, 20%, 17%, and 21% for the four time horizons: real time, one, two, and three weeks ahead, respectively. Such accuracy improvements are statistically significant at the 5% level. Our real-time estimates correctly identified the peak timing and magnitude of the studied flu seasons. CONCLUSIONS: Our method significantly reduces the prediction error when compared to historical publicly available Internet-based prediction systems, demonstrating that: (1) the method to combine data sources is as important as data quality; (2) effectively extracting information from a cloud-based EHR and Internet search activity leads to accurate forecasts of flu activity.
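The "dynamically selects the most appropriate variables every week" step can be caricatured as: each week, refit on a training window and keep only the predictors most associated with recent flu activity. The sketch below selects the single best-correlated predictor and fits ordinary least squares; ARGO itself uses L1-regularized multivariate regression over many more terms, and all data here are synthetic:

```python
import statistics

def fit_ols(x, y):
    """Simple linear regression y = a + b*x; returns (intercept, slope)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    b = (sum((a - mx) * (c - my) for a, c in zip(x, y)) /
         sum((a - mx) ** 2 for a in x))
    return my - b * mx, b

def weekly_nowcast(history_y, predictors_hist, predictors_now):
    """Pick the predictor best correlated with past ILI, fit OLS, nowcast."""
    def corr(x):
        mx, my = statistics.fmean(x), statistics.fmean(history_y)
        num = sum((a - mx) * (b - my) for a, b in zip(x, history_y))
        den = (sum((a - mx) ** 2 for a in x) *
               sum((b - my) ** 2 for b in history_y)) ** 0.5
        return num / den
    best = max(predictors_hist, key=lambda name: abs(corr(predictors_hist[name])))
    a, b = fit_ols(predictors_hist[best], history_y)
    return best, a + b * predictors_now[best]

# Synthetic example: EHR visit rate tracks ILI exactly; searches are noisy
hist_ili = [1.0, 2.0, 3.0, 4.0]
hist_x = {"ehr": [0.5, 1.0, 1.5, 2.0], "search": [1.0, 3.0, 2.0, 5.0]}
now_x = {"ehr": 2.5, "search": 6.0}
print(weekly_nowcast(hist_ili, hist_x, now_x))
```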
Subjects
Centers for Disease Control and Prevention, U.S. , Electronic Health Records , Influenza, Human/epidemiology , Forecasting , Humans , Internet , Population Surveillance/methods , Seasons , United States
ABSTRACT
A spring phenology model that combines photoperiod with accumulated heating and chilling to predict spring leaf-out dates is optimized using PhenoCam observations and coupled into the Community Land Model (CLM) 4.5. In head-to-head comparison (using satellite data from 2003 to 2013 for validation) for model grid cells over the Northern Hemisphere deciduous broadleaf forests (5.5 million km2), we found that the revised model substantially outperformed the standard CLM seasonal-deciduous spring phenology submodel at both coarse (0.9 × 1.25°) and fine (1 km) scales. The revised model also does a better job of representing recent (decadal) phenological trends observed globally by MODIS, as well as long-term trends (1950-2014) in the PEP725 European phenology dataset. Moreover, forward model runs suggested a stronger advancement (up to 11 days) of spring leaf-out by the end of the 21st century for the revised model. Trends toward earlier advancement are predicted for deciduous forests across the whole Northern Hemisphere boreal and temperate deciduous forest region for the revised model, whereas the standard model predicts earlier leaf-out in colder regions, but later leaf-out in warmer regions, and no trend globally. The earlier spring leaf-out predicted by the revised model resulted in enhanced gross primary production (up to 0.6 Pg C yr-1) and evapotranspiration (up to 24 mm yr-1) when results were integrated across the study region. These results suggest that the standard seasonal-deciduous submodel in CLM should be reconsidered, otherwise substantial errors in predictions of key land-atmosphere interactions and feedbacks may result.
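A heating-accumulation phenology model of this general family predicts leaf-out as the first day on which a running sum of forcing above a base temperature crosses a critical threshold. The sketch below is a toy version: the photoperiod weighting form, parameter values, and omission of the chilling requirement are all simplifying assumptions, not the revised CLM submodel itself:

```python
def leafout_day(daily_temp_c, daily_photoperiod_h, t_base=5.0, f_crit=100.0):
    """First day when photoperiod-weighted heat forcing passes a critical sum.
    Toy model: the CLM revision also accumulates chilling and links it to
    the critical forcing; parameter values here are invented."""
    forcing = 0.0
    for day, (t, p) in enumerate(zip(daily_temp_c, daily_photoperiod_h), 1):
        if t > t_base:
            forcing += (t - t_base) * (p / 24.0)   # assumed weighting form
        if forcing >= f_crit:
            return day
    return None  # threshold never reached

# Constant 15 C days with a 12 h photoperiod: 5 forcing units per day
print(leafout_day([15.0] * 60, [12.0] * 60))
```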
Subjects
Carbon , Climate , Forests , Seasons , Trees
Subjects
Analgesics, Opioid , Drug Industry/legislation & jurisprudence , Drug Overdose/prevention & control , Drug Prescriptions/statistics & numerical data , Health Policy , Opioid-Related Disorders/epidemiology , Policy Making , Analgesics, Opioid/poisoning , Drug Industry/economics , Epidemics , Humans , Opioid-Related Disorders/prevention & control , Practice Patterns, Physicians'/trends , United States/epidemiology
ABSTRACT
We estimated excess mortality in Medicare recipients in the United States with probable and confirmed Covid-19 infections in the general community and among residents of long-term care (LTC) facilities. We considered 28,389,098 Medicare and dual-eligible recipients from one year before February 29, 2020 through September 30, 2020, with mortality followed through November 30, 2020. Probable and confirmed Covid-19 diagnoses, presumably mostly symptomatic, were determined from ICD-10 codes. We developed a Risk Stratification Index (RSI) mortality model which was applied prospectively to establish baseline mortality risk. Excess deaths attributable to Covid-19 were estimated by comparing actual-to-expected deaths based on historical (2017-2019) comparisons and in closely matched concurrent (2020) cohorts with and without Covid-19. Overall, 677,100 (2.4%) beneficiaries had confirmed Covid-19 and 2,917,604 (10.3%) had probable Covid-19. A total of 472,329 confirmed cases were living in the community and 204,771 were in LTC. Mortality following a probable or confirmed diagnosis in the community increased from an expected incidence of about 4.0% to an actual incidence of 7.5%. In long-term care facilities, the corresponding increase was from 20.3% to 24.6%. The absolute increase was therefore similar, at 3-4%, in the community and in LTC residents. The percentage increase was far greater in the community (89.5%) than among patients in chronic care facilities (21.1%), who had a higher baseline risk of mortality. The LTC population without probable or confirmed Covid-19 diagnoses experienced 38,932 excess deaths (34.8%) compared to historical estimates. Limitations in access to Covid-19 testing and disease under-reporting in LTC patients were probably important factors, although social isolation and disruption in usual care presumably also contributed. Remarkably, there were 31,360 (5.4%) fewer deaths than expected in community dwellers without probable or confirmed Covid-19 diagnoses.
Disruptions to the healthcare system and avoided medical care were thus apparently offset by other factors, representing overall benefit. The Covid-19 pandemic had marked effects on mortality, but the effects were highly context-dependent.
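The excess-mortality calculation at the heart of this design is a comparison of actual deaths against a modeled expectation. A minimal sketch with invented counts (the study's expected values come from the RSI model and matched cohorts, not a single number):

```python
def excess_mortality(actual_deaths, expected_deaths):
    """Absolute and percentage excess relative to a baseline expectation.
    Negative values indicate fewer deaths than expected."""
    excess = actual_deaths - expected_deaths
    return excess, 100.0 * excess / expected_deaths

# Toy numbers only: 1,348 observed deaths vs. 1,000 expected
excess, pct = excess_mortality(1348, 1000)
print(excess, pct)
```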
Subjects
COVID-19/mortality , Medicare/trends , Aged , Aged, 80 and over , COVID-19/economics , Female , Humans , Incidence , Insurance Benefits/trends , Long-Term Care/trends , Male , Mortality , Risk Factors , SARS-CoV-2/pathogenicity , Skilled Nursing Facilities/trends , United States
ABSTRACT
The COVID-19 pandemic swept across the world rapidly, infecting millions of people. An efficient tool that can accurately recognize important clinical concepts of COVID-19 from free text in electronic health records (EHRs) will be valuable to accelerate COVID-19 clinical research. To this end, this study aims at adapting the existing CLAMP natural language processing tool to quickly build COVID-19 SignSym, which can extract COVID-19 signs/symptoms and their 8 attributes (body location, severity, temporal expression, subject, condition, uncertainty, negation, and course) from clinical text. The extracted information is also mapped to standard concepts in the Observational Medical Outcomes Partnership common data model. A hybrid approach of combining deep learning-based models, curated lexicons, and pattern-based rules was applied to quickly build the COVID-19 SignSym from CLAMP, with optimized performance. Our extensive evaluation using 3 external sites with clinical notes of COVID-19 patients, as well as the online medical dialogues of COVID-19, shows COVID-19 SignSym can achieve high performance across data sources. The workflow used for this study can be generalized to other use cases, where existing clinical natural language processing tools need to be customized for specific information needs within a short time. COVID-19 SignSym is freely accessible to the research community as a downloadable package (https://clamp.uth.edu/covid/nlp.php) and has been used by 16 healthcare organizations to support clinical research of COVID-19.
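COVID-19 SignSym layers deep-learning models on top of curated lexicons and pattern rules. As a toy illustration of only the lexicon-and-rules layer, the sketch below does dictionary lookup with naive negation detection over a tiny invented symptom list; it is not the SignSym pipeline and handles none of the eight attributes beyond negation:

```python
import re

# Tiny invented lexicon and negation cue list (illustrative only)
SYMPTOMS = ["shortness of breath", "fever", "cough"]
NEGATION_CUES = ["no", "denies", "without"]

def extract_symptoms(text):
    """Return (symptom, negated?) pairs found per sentence-like chunk."""
    found = []
    for chunk in re.split(r"[.;]", text.lower()):
        for symptom in SYMPTOMS:
            if symptom in chunk:
                negated = any(re.search(rf"\b{cue}\b", chunk)
                              for cue in NEGATION_CUES)
                found.append((symptom, negated))
    return found

print(extract_symptoms("Patient denies fever. Reports dry cough."))
```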
Subjects
COVID-19/diagnosis , Electronic Health Records , Information Storage and Retrieval/methods , Natural Language Processing , Deep Learning , Humans , Symptom Assessment/methods
ABSTRACT
BACKGROUND: The Centers for Disease Control and Prevention (CDC) tracks influenza-like illness (ILI) using information on patient visits to health care providers through the Outpatient Influenza-like Illness Surveillance Network (ILINet). As participation in this system is voluntary, the composition, coverage, and consistency of health care reports vary from state to state, leading to different measures of ILI activity between regions. The degree to which these measures reflect actual differences in influenza activity or systematic differences in the methods used to collect and aggregate the data is unclear. OBJECTIVE: The objective of our study was to qualitatively and quantitatively compare national and region-specific ILI activity in the United States across 4 surveillance data sources-CDC ILINet, Flu Near You (FNY), athenahealth, and HealthTweets.org-to determine whether these data sources, commonly used as input in influenza modeling efforts, show geographical patterns that are similar to those observed in CDC ILINet's data. We also compared the yearly percentage of FNY participants who sought health care for ILI symptoms across geographical areas. METHODS: We compared the national and regional 2018-2019 ILI activity baselines, calculated using noninfluenza weeks from previous years, for each surveillance data source. We also compared measures of ILI activity across geographical areas during 3 influenza seasons, 2015-2016, 2016-2017, and 2017-2018. Geographical differences in weekly ILI activity within each data source were also assessed using relative mean differences and time series heatmaps. National and regional age-adjusted health care-seeking percentages were calculated for each influenza season by dividing the number of FNY participants who sought medical care for ILI symptoms by the total number of ILI reports within an influenza season. 
Pearson correlations were used to assess the association between the health care-seeking percentages and baselines for each surveillance data source. RESULTS: We observed consistent differences in ILI activity across geographical areas for CDC ILINet and athenahealth data. ILI activity for FNY displayed little variation across geographical areas, whereas differences in ILI activity for HealthTweets.org were associated with the total number of tweets within a geographical area. The percentage of FNY participants who sought health care for ILI symptoms differed slightly across geographical areas, and these percentages were positively correlated with CDC ILINet and athenahealth baselines. CONCLUSIONS: Our findings suggest that differences in ILI activity across geographical areas as reported by a given surveillance system may not accurately reflect true differences in the prevalence of ILI. Instead, these differences may reflect systematic collection and aggregation biases that are particular to each system and consistent across influenza seasons. These findings are potentially relevant in the real-time analysis of the influenza season and in the definition of unbiased forecast models.
ABSTRACT
Gut microbial β-glucuronidase (GUS) enzymes play important roles in drug efficacy and toxicity, intestinal carcinogenesis, and mammalian-microbial symbiosis. Recently, the first catalog of human gut GUS proteins was provided for the Human Microbiome Project stool sample database and revealed 279 unique GUS enzymes organized into six categories based on active-site structural features. Because mice represent a model biomedical research organism, here we provide an analogous catalog of mouse intestinal microbial GUS proteins: a mouse gut GUSome. Using metagenome analysis guided by protein structure, we examined 2.5 million unique proteins from a comprehensive mouse gut metagenome created from several mouse strains, providers, housing conditions, and diets. We identified 444 unique GUS proteins and organized them into six categories based on active-site features, similarly to the human GUSome analysis. GUS enzymes were encoded by the major gut microbial phyla, including Firmicutes (60%) and Bacteroidetes (21%), and there were nearly 20% for which taxonomy could not be assigned. No differences in gut microbial gus gene composition were observed for mice based on sex. However, mice exhibited gus differences based on active-site features associated with provider, location, strain, and diet. Furthermore, diet yielded the largest differences in gus composition. Biochemical analysis of two low-fat-associated GUS enzymes revealed that they are variable with respect to their efficacy of processing both sulfated and nonsulfated heparan nonasaccharides containing terminal glucuronides. IMPORTANCE: Mice are commonly employed as model organisms of mammalian disease; as such, our understanding of the compositions of their gut microbiomes is critical to appreciating how the mouse and human gastrointestinal tracts mirror one another. GUS enzymes, with importance in normal physiology and disease, are an attractive set of proteins to use for such analyses. 
Here we show that while the specific GUS enzymes differ at the sequence level, a core GUSome functionality appears conserved between mouse and human gastrointestinal bacteria. Mouse strain, provider, housing location, and diet exhibit distinct GUSomes and gus gene compositions, but sex seems not to affect the GUSome. These data provide a basis for understanding the gut microbial GUS enzymes present in commonly used laboratory mice. Further, they demonstrate the utility of metagenome analysis guided by protein structure to provide specific sets of functionally related proteins from whole-genome metagenome sequencing data.
ABSTRACT
Urban ecosystem assessments increasingly rely on widely available map products, such as the U.S. Geological Survey (USGS) National Land Cover Database (NLCD), and datasets that use generic classification schemes to detect and model large-scale impacts of land-cover change. However, utilizing existing map products or schemes without identifying relevant urban class types such as semi-natural, yet managed land areas that account for differences in ecological functions due to their pervious surfaces may severely constrain assessments. To address this gap, we introduce the managed clearings land-cover type (semi-natural, vegetated land surfaces with varying degrees of management practices) for urbanizing landscapes. We explore how common managed clearings are, and how they are spatially distributed, in three rapidly urbanizing areas of the Charlanta megaregion, USA. We visually interpreted and mapped fine-scale land cover with special attention to managed clearings using 2012 U.S. Department of Agriculture (USDA) National Agriculture Imagery Program (NAIP) images within 150 randomly selected 1-km2 blocks in the cities of Atlanta, Charlotte, and Raleigh, and compared our maps with National Land Cover Database (NLCD) data. We estimated the abundance of managed clearings relative to other land use and land cover types, and the proportion of land-cover types in the NLCD that are similar to managed clearings. Our study reveals that managed clearings are the most common land cover type in these cities, covering 28% of the total sampled land area, 6.2 percentage points higher than the total area of impervious surfaces. Managed clearings, when combined with forest cover, constitute 69% of pervious surfaces in the sampled region. We observed variability in area estimates of managed clearings between the NAIP-derived and NLCD data. 
This suggests using high-resolution remote sensing imagery (e.g., NAIP) instead of modifying NLCD data for improved representation of spatial heterogeneity and mapping of managed clearings in urbanizing landscapes. Our findings also demonstrate the need to more carefully consider managed clearings and their critical ecological functions in landscape- to regional-scale studies of urbanizing ecosystems.
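Relative abundance by cover class reduces to counting labeled sample units of equal area. A sketch over a toy label list (class names invented to mirror the study's categories, not its data):

```python
from collections import Counter

def cover_percentages(cell_labels):
    """Percent of sampled area per land-cover class (equal-area cells)."""
    counts = Counter(cell_labels)
    total = sum(counts.values())
    return {cls: 100.0 * n / total for cls, n in counts.items()}

# Toy 10-cell sample
cells = (["managed_clearing"] * 3 + ["impervious"] * 2 +
         ["forest"] * 4 + ["water"])
print(cover_percentages(cells))
```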
Subjects
Conservation of Natural Resources/methods , Ecosystem , Urbanization , Conservation of Natural Resources/statistics & numerical data , Forests , Georgia , Maps as Topic , Natural Resources , North Carolina , United States , United States Department of Agriculture
ABSTRACT
BACKGROUND: Influenza outbreaks pose major challenges to public health around the world, leading to thousands of deaths a year in the United States alone. Accurate systems that track influenza activity at the city level are necessary to provide actionable information that can be used for clinical, hospital, and community outbreak preparation. OBJECTIVE: Although Internet-based real-time data sources such as Google searches and tweets have been successfully used to produce influenza activity estimates ahead of traditional health care-based systems at national and state levels, influenza tracking and forecasting at finer spatial resolutions, such as the city level, remain an open question. Our study aimed to present a precise, near real-time methodology capable of producing influenza estimates ahead of those collected and published by the Boston Public Health Commission (BPHC) for the Boston metropolitan area. This approach has great potential to be extended to other cities with access to similar data sources. METHODS: We first tested the ability of Google searches, Twitter posts, electronic health records, and a crowd-sourced influenza reporting system to detect influenza activity in the Boston metropolis separately. We then adapted a multivariate dynamic regression method named ARGO (autoregression with general online information), designed for tracking influenza at the national level, and showed that it effectively uses the above data sources to monitor and forecast influenza at the city level 1 week ahead of the current date. Finally, we presented an ensemble-based approach capable of combining information from models based on multiple data sources to more robustly nowcast as well as forecast influenza activity in the Boston metropolitan area. The performances of our models were evaluated in an out-of-sample fashion over 4 influenza seasons within 2012-2016, as well as a holdout validation period from 2016 to 2017. 
RESULTS: Our ensemble-based methods incorporating information from diverse models based on multiple data sources, including ARGO, produced the most robust and accurate results. The observed Pearson correlations between our out-of-sample flu activity estimates and those historically reported by the BPHC were 0.98 in nowcasting influenza and 0.94 in forecasting influenza 1 week ahead of the current date. CONCLUSIONS: We show that information from Internet-based data sources, when combined using an informed, robust methodology, can be effectively used as early indicators of influenza activity at fine geographic resolutions.
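The ensemble step can be as simple as a weighted average of the individual models' weekly estimates, with weights reflecting recent out-of-sample performance. A minimal sketch; the model names and weights below are invented, and the paper's ensemble is more sophisticated:

```python
def ensemble_estimate(predictions, weights=None):
    """Weighted average of per-model ILI estimates for one week."""
    names = list(predictions)
    if weights is None:
        weights = {m: 1.0 for m in names}      # unweighted mean
    total_w = sum(weights[m] for m in names)
    return sum(weights[m] * predictions[m] for m in names) / total_w

week_preds = {"argo": 2.1, "ehr_only": 2.4, "twitter_only": 1.8}  # invented
print(ensemble_estimate(week_preds))
print(ensemble_estimate(week_preds,
                        {"argo": 2.0, "ehr_only": 1.0, "twitter_only": 1.0}))
```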
ABSTRACT
Vegetation phenology controls the seasonality of many ecosystem processes, as well as numerous biosphere-atmosphere feedbacks. Phenology is also highly sensitive to climate change and variability. Here we present a series of datasets, together consisting of almost 750 years of observations, characterizing vegetation phenology in diverse ecosystems across North America. Our data are derived from conventional, visible-wavelength, automated digital camera imagery collected through the PhenoCam network. For each archived image, we extracted RGB (red, green, blue) colour channel information, with means and other statistics calculated across a region-of-interest (ROI) delineating a specific vegetation type. From the high-frequency (typically, 30 min) imagery, we derived time series characterizing vegetation colour, including "canopy greenness", processed to 1- and 3-day intervals. For ecosystems with one or more annual cycles of vegetation activity, we provide estimates, with uncertainties, for the start of the "greenness rising" and end of the "greenness falling" stages. The database can be used for phenological model validation and development, evaluation of satellite remote sensing data products, benchmarking earth system models, and studies of climate change impacts on terrestrial ecosystems.
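The "canopy greenness" index conventionally derived from PhenoCam ROI colour statistics is the green chromatic coordinate, Gcc = G / (R + G + B). A minimal sketch over ROI mean digital numbers (the sample values below are invented):

```python
def gcc(r_mean, g_mean, b_mean):
    """Green chromatic coordinate from ROI mean digital numbers."""
    return g_mean / (r_mean + g_mean + b_mean)

# Illustrative midsummer vs. dormant-season ROI means, deciduous canopy
summer = gcc(90.0, 120.0, 70.0)    # greener canopy -> higher Gcc
winter = gcc(110.0, 105.0, 95.0)
print(summer, winter)
```

Tracking Gcc through time yields the "greenness rising" and "greenness falling" transition dates described above.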
Subjects
Ecosystem , Plants , Climate Change , Databases, Factual , Satellite Imagery , United States
ABSTRACT
OBJECTIVES: Public discussion suggests that rising out-of-pocket costs have dramatically reduced the value of insurance and led to patients doing without needed care. Our aim was to ascertain trends in patient responsibility for cost sharing. STUDY DESIGN: We used data from an organization that serves over 78,000 healthcare providers and has access to visit-level data, including the amounts paid by patients. These practices are broadly representative of physicians and patients nationally. METHODS: We analyzed trends in patient obligations among a cohort of about 21,000 providers in 1078 practices who had used athenahealth software since 2011, including primary care physicians, obstetricians and gynecologists, surgeons, and some other specialists. Our analysis focused on what commercially insured patients pay out of pocket when seeking ambulatory care. RESULTS: The average patient obligation for approximately 2.5 million primary care visits each year rose from $23.52 per visit in 2011 to $26.40 per visit in 2015, for an overall increase of $2.88, or about 3% annually. This rate of increase is moderate and below growth in overall healthcare spending during the same time period. CONCLUSIONS: Average increases in patient obligations for outpatient visits in recent years have been fairly moderate, and multiple sources of survey data suggest that consumers' concerns about overall affordability are decreasing. The high cost of healthcare continues to pose challenges, both at the individual level and for society as a whole. Nevertheless, it is important that potential strategies to improve affordability are informed by trends in patient obligations.
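The "about 3% annually" figure follows from compounding the per-visit obligation over the four-year span, which can be verified directly from the two endpoint values reported above:

```python
def annual_growth_rate(start_value, end_value, years):
    """Compound annual growth rate implied by two endpoint values."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Per-visit patient obligation: $23.52 in 2011 -> $26.40 in 2015
rate = annual_growth_rate(23.52, 26.40, 4)
print(round(100 * rate, 2))
```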