ABSTRACT
Disease surveillance systems provide early warnings of disease outbreaks before they become public health emergencies. However, pandemic containment is challenging because multiple variants create a complex immunity landscape. Genomic surveillance is critical for detecting novel variants with diverse characteristics and importation/emergence times. Yet a systematic study incorporating genomic monitoring, situation assessment, and intervention strategies is lacking in the literature. We formulate an integrated computational modeling framework to study a realistic course of action based on sequencing, analysis, and response. We study the effects of the second variant's importation time, its infectiousness advantage, and its cross-infection level on the novel variant's detection time, and the resulting intervention scenarios for containing epidemics driven by two-variant dynamics. Our results illustrate the limits of intervention effectiveness due to the variants' competing dynamics and provide the following insights: i) there is a set of importation times that yields the worst detection time for the second variant, and this set depends on the first variant's basic reproductive number; ii) when the second variant is imported relatively early with respect to the first variant, the cross-infection level does not affect the detection time of the second variant. We find that, depending on the target metric, the best outcomes are attained under different intervention regimes. Our results emphasize the importance of sustained enforcement of non-pharmaceutical interventions in preventing epidemic resurgence due to the importation/emergence of novel variants. We also discuss how our methods can be used to study the case when a novel variant emerges within a population.
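As an illustrative aside (not the integrated framework described in this abstract), the short Python sketch below couples a toy two-variant compartmental model with a simple genomic-sampling rule: the second variant is seeded at `t_import` with transmissibility advantage `adv`, `cross` is the level of cross-protection from prior infection with the first variant, and the detection time is the first day the second variant appears among a fixed number of sequenced cases. All parameter values and function names are hypothetical placeholders.
```python
import numpy as np

def two_variant_detection(R0=2.0, adv=1.5, cross=0.5, t_import=60,
                          seq_per_day=50, N=1_000_000, gamma=0.2,
                          days=400, seed=0):
    """Day on which variant 2 is first seen among daily sequenced cases (or None)."""
    rng = np.random.default_rng(seed)
    beta1, beta2 = R0 * gamma, R0 * adv * gamma
    S, I1, R1, I2, R2 = N - 10.0, 10.0, 0.0, 0.0, 0.0
    for t in range(days):
        if t == t_import:
            I2 += 10.0                                # imported seed cases of variant 2
        new1 = min(beta1 * S * I1 / N, S)             # new variant-1 infections (from S)
        eligible2 = S + (1.0 - cross) * R1            # hosts variant 2 can still infect
        new2 = min(beta2 * eligible2 * I2 / N, eligible2)
        frac_S = S / eligible2 if eligible2 > 0 else 0.0
        rec1, rec2 = gamma * I1, gamma * I2
        S  = max(S - new1 - new2 * frac_S, 0.0)
        R1 = max(R1 + rec1 - new2 * (1.0 - frac_S), 0.0)
        I1 += new1 - rec1
        I2 += new2 - rec2
        R2 += rec2
        # Genomic surveillance: sequence a random subset of today's new cases.
        total_new = new1 + new2
        if total_new >= 1 and new2 > 0:
            n_seq = min(seq_per_day, int(total_new))
            if rng.binomial(n_seq, new2 / total_new) > 0:
                return t
    return None

# Detection time as a function of the second variant's importation time.
for t_imp in (20, 60, 120):
    print(t_imp, two_variant_detection(t_import=t_imp))
```
Sweeping `t_import`, `adv`, and `cross` in such a sketch reproduces the qualitative trade-offs discussed in the abstract, though not its quantitative results.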
Subjects
COVID-19, Pandemics, Humans, Pandemics/prevention & control, Public Health, Disease Outbreaks/prevention & control, Genomics
ABSTRACT
When an influenza pandemic emerges, temporary school closures and antiviral treatment may slow virus spread, reduce the overall disease burden, and provide time for vaccine development, distribution, and administration while keeping a larger portion of the general population infection free. The impact of such measures will depend on the transmissibility and severity of the virus and the timing and extent of their implementation. To provide robust assessments of layered pandemic intervention strategies, the Centers for Disease Control and Prevention (CDC) funded a network of academic groups to build a framework for the development and comparison of multiple pandemic influenza models. Research teams from Columbia University, Imperial College London/Princeton University, Northeastern University, the University of Texas at Austin/Yale University, and the University of Virginia independently modeled three prescribed sets of pandemic influenza scenarios developed collaboratively by the CDC and network members. Results provided by the groups were aggregated into a mean-based ensemble. The ensemble and most component models agreed on the ranking of the most and least effective intervention strategies by impact but not on the magnitude of those impacts. In the scenarios evaluated, vaccination alone, due to the time needed for development, approval, and deployment, would not be expected to substantially reduce the numbers of illnesses, hospitalizations, and deaths that would occur. Only strategies that included early implementation of school closure were found to substantially mitigate early spread and allow time for vaccines to be developed and administered, especially under a highly transmissible pandemic scenario.
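For illustration only, a minimal sketch of the aggregation step mentioned above: hypothetical per-model projections of a single outcome under each strategy are averaged into a mean-based ensemble and the strategies are ranked by impact. The model names and numbers are invented placeholders, not results from the study.
```python
import numpy as np

# Hypothetical per-model projections (e.g., total symptomatic illnesses) per strategy.
projections = {
    "model_A": {"no_intervention": 9.1e6, "school_closure_plus_antivirals": 3.2e6},
    "model_B": {"no_intervention": 8.4e6, "school_closure_plus_antivirals": 2.9e6},
    "model_C": {"no_intervention": 9.8e6, "school_closure_plus_antivirals": 3.6e6},
}

strategies = sorted({s for m in projections.values() for s in m})
ensemble = {s: float(np.mean([m[s] for m in projections.values()])) for s in strategies}
ranking = sorted(ensemble, key=ensemble.get)          # lowest burden = most effective
print(ensemble)
print("most effective first:", ranking)
```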
Subjects
Influenza Vaccines, Human Influenza, Humans, Human Influenza/drug therapy, Human Influenza/epidemiology, Human Influenza/prevention & control, Pharmaceutical Preparations, Pandemics/prevention & control, Influenza Vaccines/therapeutic use, Antiviral Agents/pharmacology, Antiviral Agents/therapeutic use
ABSTRACT
[This corrects the article DOI: 10.1371/journal.pmed.1003793.].
ABSTRACT
This paper describes an integrated, data-driven operational pipeline based on national agent-based models to support federal and state-level pandemic planning and response. The pipeline consists of (i) an automatic semantic-aware scheduling method that coordinates jobs across two separate high performance computing systems; (ii) a data pipeline to collect, integrate and organize national and county-level disaggregated data for initialization and post-simulation analysis; (iii) a digital twin of national social contact networks made up of 288 million individuals and 12.6 billion time-varying interactions covering the US states and DC; (iv) an extension of a parallel agent-based simulation model to study epidemic dynamics and associated interventions. This pipeline can run 400 replicates of national runs in less than 33 h, and reduces the need for human intervention, resulting in faster turnaround times and higher reliability and accuracy of the results. Scientifically, the work has led to significant advances in real-time epidemic sciences.
ABSTRACT
BACKGROUND: The importance of infectious disease epidemic forecasting and prediction research is underscored by decades of communicable disease outbreaks, including COVID-19. Unlike other fields of medical research, such as clinical trials and systematic reviews, no reporting guidelines exist for epidemic forecasting and prediction research despite their utility. We therefore developed the EPIFORGE checklist, a guideline for standardized reporting of epidemic forecasting research. METHODS AND FINDINGS: We developed this checklist using a best-practice process for development of reporting guidelines, involving a Delphi process and broad consultation with an international panel of infectious disease modelers and model end users. The objectives of these guidelines are to improve the consistency, reproducibility, comparability, and quality of epidemic forecasting reporting. The guidelines are not designed to advise scientists on how to perform epidemic forecasting and prediction research, but rather to serve as a standard for reporting critical methodological details of such studies. CONCLUSIONS: These guidelines have been submitted to the EQUATOR network and are also hosted on other dedicated webpages to facilitate feedback and journal endorsement.
Subjects
Biomedical Research/standards, COVID-19/epidemiology, Checklist/standards, Epidemics, Guidelines as Topic/standards, Research Design, Biomedical Research/methods, Checklist/methods, Communicable Diseases/epidemiology, Epidemics/statistics & numerical data, Forecasting/methods, Humans, Reproducibility of Results
ABSTRACT
After a period of rapidly declining U.S. COVID-19 incidence during January-March 2021, increases occurred in several jurisdictions (1,2) despite the rapid rollout of a large-scale vaccination program. This increase coincided with the spread of more transmissible variants of SARS-CoV-2, the virus that causes COVID-19, including B.1.1.7 (1,3), and relaxation of COVID-19 prevention strategies such as those for businesses, large-scale gatherings, and educational activities. To provide long-term projections of potential trends in COVID-19 cases, hospitalizations, and deaths, COVID-19 Scenario Modeling Hub teams used a multiple-model approach comprising six models to assess the potential course of COVID-19 in the United States across four scenarios with different vaccination coverage rates and effectiveness estimates and different strengths and implementations of nonpharmaceutical interventions (NPIs; public health policies such as physical distancing and masking) over a 6-month period (April-September 2021), using data available through March 27, 2021 (4). Among the four scenarios, an accelerated decline in NPI adherence (which encapsulates NPI mandates and population behavior) was shown to undermine vaccination-related gains over the subsequent 2-3 months and, in combination with increased transmissibility of new variants, could lead to surges in cases, hospitalizations, and deaths. A sharp decline in cases was projected by July 2021, with a faster decline in the high-vaccination scenarios. High vaccination rates and compliance with public health prevention measures are essential to control the COVID-19 pandemic and to prevent surges in hospitalizations and deaths in the coming months.
Subjects
COVID-19 Vaccines/administration & dosage, COVID-19/epidemiology, COVID-19/therapy, Hospitalization/statistics & numerical data, Statistical Models, Public Policy, Vaccination/statistics & numerical data, COVID-19/mortality, COVID-19/prevention & control, Forecasting, Humans, Masks, Physical Distancing, United States/epidemiology
ABSTRACT
Prophylactic interventions such as vaccine allocation are some of the most effective public health policy planning tools. The supply of vaccines, however, is limited, and an important challenge is to allocate them optimally to minimize epidemic impact. This resource allocation question (which we refer to as VaccIntDesign) has multiple dimensions: when, where, to whom, etc. Most of the existing literature on this topic deals with the last of these (to whom), proposing policies that prioritize individuals by age and disease risk. However, since seasonal influenza spread follows a typical spatial trend, and due to the temporal constraints imposed by the availability schedule, the when and where problems become equally, if not more, relevant. In this paper, we study the VaccIntDesign problem in the context of seasonal influenza spread in the United States. We develop a national-scale metapopulation model for influenza that integrates both short- and long-distance human mobility, along with realistic data on vaccine uptake. We also design GreedyAlloc, a greedy algorithm for allocating the vaccine supply at the state level under temporal constraints, and show that such a strategy improves over the current baseline of pro-rata allocation, with the improvement more pronounced for higher vaccine efficacy and moderate flu season intensity. Further, the resulting strategy resembles ring vaccination applied spatially across the US.
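The following sketch conveys the flavor of a greedy, temporally constrained allocation; it is not the paper's GreedyAlloc implementation, and the marginal-benefit proxy (current prevalence times remaining susceptibles), the batch size, and all input values are assumptions made purely for illustration.
```python
# Greedy weekly vaccine allocation sketch (illustrative only, not the paper's GreedyAlloc).
# Each week, assign that week's supply to states in decreasing order of an estimated
# marginal benefit; batch_size limits how many doses one state absorbs per pass.
def greedy_allocate(weekly_supply, prevalence, susceptible, batch_size=50_000):
    allocation = {s: 0 for s in prevalence}
    for week, supply in enumerate(weekly_supply):
        while supply > 0:
            # Marginal-benefit proxy: expected infections averted per dose.
            scores = {s: prevalence[s] * susceptible[s] for s in prevalence
                      if susceptible[s] > 0}
            if not scores:
                break
            best = max(scores, key=scores.get)
            doses = min(batch_size, supply, susceptible[best])
            allocation[best] += doses
            susceptible[best] -= doses
            supply -= doses
    return allocation

# Hypothetical example with three states and a two-week supply schedule.
prevalence = {"VA": 0.012, "TX": 0.020, "CA": 0.015}
susceptible = {"VA": 4_000_000, "TX": 12_000_000, "CA": 18_000_000}
print(greedy_allocate([200_000, 300_000], prevalence, susceptible))
```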
Subjects
Computational Biology/methods, Influenza Vaccines/administration & dosage, Human Influenza, Resource Allocation/methods, Spatio-Temporal Analysis, Algorithms, Factual Databases, Humans, Human Influenza/epidemiology, Human Influenza/prevention & control, Human Influenza/transmission, Seasons, Time Factors, Travel/statistics & numerical data, United States
ABSTRACT
BACKGROUND: Over the past few decades, numerous forecasting methods have been proposed in the field of epidemic forecasting. Such methods can be classified into different categories such as deterministic vs. probabilistic, comparative methods vs. generative methods, and so on. In some of the more popular comparative methods, researchers compare observed epidemiological data from the early stages of an outbreak with the output of proposed models to forecast the future trend and prevalence of the pandemic. A significant problem in this area is the lack of standard, well-defined evaluation measures for selecting the best method among several, as well as for selecting the best configuration of a particular method. RESULTS: In this paper we present an evaluation framework that allows combining different features, error measures, and ranking schema to evaluate forecasts. We describe the various epidemic features (Epi-features) included to characterize the output of forecasting methods and provide suitable error measures that could be used to evaluate the accuracy of the methods with respect to these Epi-features. We focus on long-term predictions rather than short-term forecasting and demonstrate the utility of the framework by evaluating six forecasting methods for predicting influenza in the United States. Our results demonstrate that different error measures lead to different rankings even for a single Epi-feature. Further, our experimental analyses show that no single method dominates the rest in predicting all Epi-features when evaluated across error measures. As an alternative, we provide various consensus ranking schema that summarize individual rankings, thus accounting for different error measures. Since each Epi-feature presents a different aspect of the epidemic, multiple methods need to be combined to provide a comprehensive forecast. We therefore call for a more nuanced approach when evaluating epidemic forecasts and believe that a comprehensive evaluation framework, such as the one presented in this paper, will add value to the computational epidemiology community.
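A compact sketch of the kind of evaluation such a framework performs (illustrative, not the paper's exact code): extract a few Epi-features from forecast and observed curves, score each method under two error measures, and summarize with a mean-rank consensus scheme. The feature set, error measures, and example curves are hypothetical.
```python
import numpy as np

def epi_features(curve):
    """A few illustrative Epi-features of a weekly incidence curve."""
    curve = np.asarray(curve, dtype=float)
    return {"peak_value": curve.max(), "peak_week": float(curve.argmax()),
            "total": curve.sum()}

def errors(forecast, observed):
    f, o = epi_features(forecast), epi_features(observed)
    return {feat: {"abs_err": abs(f[feat] - o[feat]),
                   "rel_err": abs(f[feat] - o[feat]) / max(abs(o[feat]), 1e-9)}
            for feat in o}

def consensus_rank(method_errors, feature, measures=("abs_err", "rel_err")):
    """Mean rank of each method across error measures for one Epi-feature."""
    ranks = {m: [] for m in method_errors}
    for meas in measures:
        order = sorted(method_errors, key=lambda m: method_errors[m][feature][meas])
        for r, m in enumerate(order, start=1):
            ranks[m].append(r)
    return {m: float(np.mean(r)) for m, r in ranks.items()}

# Hypothetical observed season and two forecasting methods.
observed  = [1, 3, 8, 15, 12, 6, 2]
forecasts = {"method_A": [1, 4, 9, 13, 11, 5, 2], "method_B": [2, 2, 6, 16, 14, 7, 3]}
method_errors = {m: errors(f, observed) for m, f in forecasts.items()}
print(consensus_rank(method_errors, "peak_value"))
```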
Subjects
Algorithms, Human Influenza/epidemiology, Age Factors, Disease Outbreaks, Forecasting, Humans, Theoretical Models, Pandemics, Stochastic Processes, United States
ABSTRACT
UVA-EpiHiper is a national scale agent-based model to support the US COVID-19 Scenario Modeling Hub (SMH). UVA-EpiHiper uses a detailed representation of the underlying social contact network along with data measured during the course of the pandemic to initialize and calibrate the model. In this paper, we study the role of heterogeneity in model complexity and the resulting epidemic dynamics using UVA-EpiHiper. We discuss various sources of heterogeneity that we encounter in the use of UVA-EpiHiper to support modeling and analysis of epidemic dynamics under various scenarios. We also discuss how this affects model complexity and the computational complexity of the corresponding simulations. Using round 13 of the SMH as an example, we discuss how UVA-EpiHiper was initialized and calibrated. We then discuss how the detailed output produced by UVA-EpiHiper can be analyzed to obtain interesting insights. We find that, despite the model, software, and computational complexity incurred by using an agent-based model for scenario modeling, such a model can capture various heterogeneities of real-world systems, especially those in networks and behaviors, and enables the analysis of heterogeneities in epidemiological outcomes across demographic, geographic, and social cohorts. In applying UVA-EpiHiper to round 13 of the SMH, we find that disease outcomes differ between and within states and between demographic groups, which can be attributed to heterogeneities in population demographics, network structures, and initial immunity.
Subjects
COVID-19, SARS-CoV-2, COVID-19/epidemiology, COVID-19/transmission, Humans, United States/epidemiology, Systems Analysis, Pandemics, Epidemiological Models
ABSTRACT
Scenario-based modeling frameworks have been widely used to support policy-making at state and federal levels in the United States during the COVID-19 response. While custom-built models can be used to support one-off studies, sustained updates to projections under changing pandemic conditions require a robust, integrated, and adaptive framework. In this paper, we describe one such framework, UVA-adaptive, that was built to support the CDC-aligned Scenario Modeling Hub (SMH) across multiple rounds, as well as weekly/biweekly projections to the Virginia Department of Health (VDH) and the US Department of Defense during the COVID-19 response. Building upon an existing metapopulation framework, PatchSim, UVA-adaptive uses a calibration mechanism relying on adjustable effective transmissibility as a basis for scenario definition, while also incorporating real-time datasets on case incidence, seroprevalence, variant characteristics, and vaccine uptake. Over the course of the pandemic, our framework evolved by incorporating available data sources and was extended to capture the complexities of multiple strains and heterogeneous population immunity. Here we present the version of the model that was used for the recent projections for the SMH and VDH, describe the calibration and projection framework, and demonstrate that the calibrated transmissibility correlates with the evolution of the pathogen as well as the associated societal dynamics.
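As a rough illustration of what an adjustable effective transmissibility looks like in practice (this is not PatchSim's or UVA-adaptive's calibration code), the sketch below back-calculates a weekly effective transmissibility from reported incidence by inverting a simple discrete SIR incidence equation; a projection scenario could then hold, scale, or extend the inferred series. The inputs, reporting rate, and recovery rate are hypothetical.
```python
import numpy as np

def effective_beta_series(weekly_cases, N, gamma=0.7, reporting_rate=0.25):
    """Back out beta_t from new infections ~= beta_t * S_t * I_t / N (weekly step)."""
    infections = np.asarray(weekly_cases, dtype=float) / reporting_rate
    S, I = N - infections[0], infections[0]
    betas = []
    for new in infections[1:]:
        betas.append(new * N / max(S * I, 1e-9))      # invert the incidence equation
        S = max(S - new, 0.0)
        I = max(I + new - gamma * I, 1e-9)
    return np.array(betas)

# Hypothetical county: weekly reported cases over ten weeks.
cases = [120, 180, 260, 400, 520, 610, 580, 450, 300, 210]
betas = effective_beta_series(cases, N=500_000)
print(np.round(betas, 2))
# A scenario might, e.g., hold the last calibrated beta constant going forward.
```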
Subjects
COVID-19, SARS-CoV-2, COVID-19/transmission, COVID-19/epidemiology, COVID-19/prevention & control, COVID-19/immunology, Humans, SARS-CoV-2/immunology, United States/epidemiology, Pandemics/prevention & control, COVID-19 Vaccines/immunology, Virginia/epidemiology, Epidemiological Models, Forecasting
ABSTRACT
The ongoing Russian aggression against Ukraine has forced over eight million people to migrate out of Ukraine. Understanding the dynamics of forced migration is essential for policy-making and for delivering humanitarian assistance. Existing work is hindered by a reliance on observational data which is only available well after the fact. In this work, we study the efficacy of a data-driven agent-based framework motivated by social and behavioral theory in predicting outflow of migrants as a result of conflict events during the initial phase of the Ukraine war. We discuss policy use cases for the proposed framework by demonstrating how it can leverage refugee demographic details to answer pressing policy questions. We also show how to incorporate conflict forecast scenarios to predict future conflict-induced migration flows. Detailed future migration estimates across various conflict scenarios can both help to reduce policymaker uncertainty and improve allocation and staging of limited humanitarian resources in crisis settings.
ABSTRACT
We present MacKenzie, an HPC-driven multi-cluster workflow system that was used repeatedly to configure and execute fine-grained US national-scale epidemic simulation models during the COVID-19 pandemic. MacKenzie supported federal and Virginia policymakers, in real time, for a large number of "what-if" scenarios during the COVID-19 pandemic, and continues to be used to answer related questions as COVID-19 transitions to the endemic stage of the disease. MacKenzie is a novel HPC meta-scheduler that can execute US-scale simulation models and associated workflows that typically present significant big-data challenges. The meta-scheduler optimizes the total execution time of simulations in the workflow and helps improve overall human productivity. As an exemplar of the kind of studies that can be conducted using MacKenzie, we present a modeling study to understand the impact of vaccine acceptance on controlling the spread of COVID-19 in the US. We use a 288 million node synthetic social contact network (digital twin) spanning all 50 US states plus Washington DC, comprising 3300 counties, with 12 billion daily interactions. The highly resolved agent-based model used for the epidemic simulations uses realistic information about disease progression, vaccine uptake, production schedules, acceptance trends, prevalence, and social distancing guidelines. Computational experiments show that, for the simulation workload discussed above, MacKenzie scales well up to 10K CPU cores. Our modeling results show that, when compared to faster and accelerating vaccinations, slower vaccination rates due to vaccine hesitancy cause averted infections to drop from 6.7M to 4.5M, and averted total deaths to drop from 39.4K to 28.2K across the US. This occurs despite the fact that the final vaccine coverage is the same in both scenarios. We also find that if vaccine acceptance could be increased by 10% in all states, averted infections could be increased from 4.5M to 4.7M (a 4.4% improvement) and total averted deaths could be increased from 28.2K to 29.9K (a 6% improvement) nationwide.
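The relative improvements quoted above follow directly from the reported averted-outcome totals; a quick arithmetic check:
```python
# Relative improvement from raising vaccine acceptance by 10% in all states,
# using the averted-outcome totals reported in the abstract.
averted = {
    "infections": (4.5e6, 4.7e6),   # (baseline acceptance, +10% acceptance)
    "deaths":     (28.2e3, 29.9e3),
}
for outcome, (base, boosted) in averted.items():
    print(f"{outcome}: {100 * (boosted - base) / base:.1f}% more averted")
# infections: 4.4% more averted; deaths: 6.0% more averted
```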
ABSTRACT
Across many fields, scenario modeling has become an important tool for exploring long-term projections and how they might depend on potential interventions and critical uncertainties, with relevance to both decision makers and scientists. In the past decade, and especially during the COVID-19 pandemic, the field of epidemiology has seen substantial growth in the use of scenario projections. Multiple scenarios are often projected at the same time, allowing important comparisons that can guide the choice of intervention, the prioritization of research topics, or public communication. The design of the scenarios is central to their ability to inform important questions. In this paper, we draw on the fields of decision analysis and statistical design of experiments to propose a framework for scenario design in epidemiology, with relevance also to other fields. We identify six different fundamental purposes for scenario designs (decision making, sensitivity analysis, situational awareness, horizon scanning, forecasting, and value of information) and discuss how those purposes guide the structure of scenarios. We discuss other aspects of the content and process of scenario design, broadly for all settings and specifically for multi-model ensemble projections. As an illustrative case study, we examine the first 17 rounds of scenarios from the U.S. COVID-19 Scenario Modeling Hub, then reflect on future advancements that could improve the design of scenarios in epidemiological settings.
Subjects
COVID-19, Decision Support Techniques, Humans, COVID-19/epidemiology, COVID-19/prevention & control, COVID-19/transmission, Forecasting, SARS-CoV-2, Communicable Diseases/epidemiology, Pandemics/prevention & control, Decision Making, Research Design
ABSTRACT
The COVID-19 pandemic has imposed tremendous pressure on public health systems and socioeconomic ecosystems over the past years. To alleviate its social impact, it is important to proactively track the prevalence of COVID-19 within communities. The traditional way to estimate disease prevalence is from reported clinical test data or surveys. However, the coverage of clinical tests is often limited, and testing can be labor-intensive and requires reliable and timely results as well as consistent diagnostic and reporting criteria. Recent studies revealed that patients diagnosed with COVID-19 often shed SARS-CoV-2 in feces, which ends up in wastewater and makes wastewater-based epidemiology a promising approach for COVID-19 surveillance that complements traditional clinical testing. In this paper, we survey the existing literature on wastewater-based epidemiology for COVID-19 surveillance and summarize the current advances in the area. Specifically, we cover the key aspects of wastewater sampling and sample testing, and present a comprehensive and organized summary of wastewater data analytical methods. Finally, we discuss the open challenges in current wastewater-based COVID-19 surveillance studies, aiming to encourage new ideas that advance the development of effective wastewater-based surveillance systems for infectious diseases in general.
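As a small, generic illustration of the analytical methods surveyed here (not taken from any specific study), the sketch below smooths a wastewater SARS-CoV-2 concentration series and fits a lagged log-linear relationship to reported cases, one common way such signals are related to community prevalence. All series, the 7-day lag, and the smoothing window are hypothetical.
```python
import numpy as np

def smooth(x, window=3):
    """Simple moving average to reduce day-to-day sampling noise."""
    x = np.asarray(x, dtype=float)
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

def fit_lagged_loglinear(wastewater, cases, lag=7):
    """Fit log(cases[t]) ~ a + b * log(wastewater[t - lag]) by least squares."""
    ww = np.log(np.asarray(wastewater[:-lag], dtype=float))
    cc = np.log(np.asarray(cases[lag:], dtype=float))
    A = np.column_stack([np.ones_like(ww), ww])
    (a, b), *_ = np.linalg.lstsq(A, cc, rcond=None)
    return a, b

# Hypothetical daily series: viral gene copies per liter and reported cases.
wastewater = [2e4, 3e4, 5e4, 8e4, 1.2e5, 1.6e5, 2.1e5, 2.6e5, 3.0e5, 3.2e5,
              3.1e5, 2.8e5, 2.4e5, 2.0e5]
cases      = [15, 18, 22, 30, 41, 55, 72, 90, 105, 118, 121, 115, 102, 88]
a, b = fit_lagged_loglinear(smooth(wastewater), smooth(cases))
print(f"log-cases ~ {a:.2f} + {b:.2f} * log-wastewater (7-day lead)")
```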
ABSTRACT
In spring 2021, the highly transmissible SARS-CoV-2 Delta variant began to cause increases in cases, hospitalizations, and deaths in parts of the United States. At the time, with slowed vaccination uptake, this novel variant was expected to increase the risk of pandemic resurgence in the US in summer and fall 2021. As part of the COVID-19 Scenario Modeling Hub, an ensemble of nine mechanistic models produced 6-month scenario projections for July-December 2021 for the United States. These projections estimated substantial resurgences of COVID-19 across most of the US resulting from the more transmissible Delta variant and coinciding with school and business reopening. The scenarios revealed that reaching higher vaccine coverage in July-December 2021 would substantially reduce the size and duration of the projected resurgence, with the expected impact largely concentrated in a subset of states with lower vaccination coverage. Although the models accurately projected the occurrence and timing of the COVID-19 surges, they substantially underestimated their magnitude relative to the reported cases, hospitalizations, and deaths that occurred during July-December 2021, highlighting the continued challenges of predicting the evolving COVID-19 pandemic. Vaccination uptake remains critical to limiting transmission and disease, particularly in states with lower vaccination coverage. Higher vaccination goals at the onset of the surge of the new variant were estimated to avert over 1.5 million cases and 21,000 deaths, and may have had an even greater impact, considering that the models underestimated the magnitude of the resurgence.
Subjects
COVID-19, SARS-CoV-2, COVID-19/epidemiology, COVID-19/prevention & control, Humans, Pandemics/prevention & control, SARS-CoV-2/genetics, United States/epidemiology, Vaccination
ABSTRACT
High resolution mobility datasets have become increasingly available in the past few years and have enabled detailed models of infectious disease spread, including those for COVID-19. However, there are open questions on how such mobility data can be used effectively within epidemic models and for which tasks they are best suited. In this paper, we extract a number of graph-based proximity metrics from high resolution cellphone trace data from X-Mode and use them to study COVID-19 epidemic spread in 50 land grant university counties in the US. We present an approach to estimate the effect of mobility on cases by fitting an ODE-based model and performing multivariate linear regression to explain the estimated time-varying transmissibility. We find that, while mobility plays a significant role, its contribution is heterogeneous across the counties, as exemplified by a subsequent correlation analysis. We then evaluate the metrics' utility for case surge prediction, defined as a supervised classification problem, and show that the learnt model can predict surges with 95% accuracy and an 87% F1-score.
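A condensed sketch of the two analysis steps described above, on synthetic data (illustrative only, not the paper's pipeline): regress an estimated time-varying transmissibility series on mobility-derived proximity metrics, then pose surge prediction as supervised classification. scikit-learn's LinearRegression and LogisticRegression stand in for the actual models, and all features and values are synthetic placeholders.
```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical weekly data for one county: two standardized proximity metrics derived
# from mobility traces and an estimated time-varying transmissibility from an ODE fit.
n_weeks = 40
X = rng.normal(size=(n_weeks, 2))                    # e.g., [mean_degree, clustering]
beta_t = 0.8 + 0.3 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.05, size=n_weeks)

# Step 1: explain estimated transmissibility with the mobility metrics.
reg = LinearRegression().fit(X, beta_t)
print("coefficients:", np.round(reg.coef_, 2), "R^2:", round(reg.score(X, beta_t), 2))

# Step 2: surge prediction as supervised classification (surge = top quartile of beta_t).
y = (beta_t > np.quantile(beta_t, 0.75)).astype(int)
clf = LogisticRegression().fit(X[:30], y[:30])       # train on the first 30 weeks
print("held-out accuracy:", round(clf.score(X[30:], y[30:]), 2))
```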
ABSTRACT
Timely, high-resolution forecasts of infectious disease incidence are useful for policymakers in deciding intervention measures and estimating healthcare resource burden. In this paper, we consider the task of forecasting COVID-19 confirmed cases at the county level for the United States. Although multiple methods have been explored for this task, their performance has varied across space and time due to noisy data and the inherently dynamic nature of the pandemic. We present a forecasting pipeline that incorporates probabilistic forecasts from multiple statistical, machine learning, and mechanistic methods through a Bayesian ensembling scheme, and has been operational for nearly 6 months serving local, state, and federal policymakers in the United States. While showing that the Bayesian ensemble is at least as good as the individual methods, we also show that each individual method contributes significantly for different spatial regions and time points. We compare our model's performance with other similar models being integrated into the CDC-initiated COVID-19 Forecast Hub and show better performance at longer forecast horizons. Finally, we describe how such forecasts are used to increase lead time for training mechanistic scenario projections. Our work demonstrates that such a real-time, high-resolution forecasting pipeline can be developed by integrating multiple methods within a performance-based ensemble to support pandemic response.
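A stripped-down sketch of performance-based Bayesian ensembling (illustrative; not the operational pipeline's exact scheme): each component method is weighted by its Gaussian likelihood on recently observed counts, and the ensemble forecast is the weighted mean of the next-week forecasts. Method names, the error scale `sigma`, and all counts are hypothetical.
```python
import numpy as np

def bma_weights(recent_forecasts, recent_observed, sigma=50.0):
    """Weight each method by its Gaussian likelihood on recently observed counts."""
    obs = np.asarray(recent_observed, dtype=float)
    logliks = {}
    for method, preds in recent_forecasts.items():
        resid = np.asarray(preds, dtype=float) - obs
        logliks[method] = -0.5 * np.sum((resid / sigma) ** 2)
    m = max(logliks.values())
    unnorm = {k: np.exp(v - m) for k, v in logliks.items()}   # stabilized softmax
    total = sum(unnorm.values())
    return {k: v / total for k, v in unnorm.items()}

# Hypothetical county-level case forecasts for the last three weeks, plus next-week forecasts.
recent = {"statistical": [410, 470, 520], "ml": [380, 455, 540], "mechanistic": [300, 520, 650]}
observed = [400, 465, 530]
next_week = {"statistical": 560, "ml": 585, "mechanistic": 700}

w = bma_weights(recent, observed)
ensemble = sum(w[m] * next_week[m] for m in w)
print({k: round(v, 2) for k, v in w.items()}, "ensemble:", round(ensemble, 1))
```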
ABSTRACT
The COVID-19 global outbreak represents the most significant epidemic event since the 1918 influenza pandemic. Simulations have played a crucial role in supporting COVID-19 planning and response efforts. Developing scalable workflows to provide policymakers quick responses to important questions pertaining to logistics, resource allocation, epidemic forecasts, and intervention analysis remains a challenging computational problem. In this work, we present scalable, high performance computing-enabled workflows for COVID-19 pandemic planning and response. The scalability of our methodology allows us to run fine-grained simulations daily and to generate county-level forecasts and other counterfactual analyses for each of the 50 states (and DC) and 3140 counties across the USA. Our workflows use a hybrid cloud/cluster system utilizing a combination of local and remote cluster computing facilities, with over 20,000 CPU cores running for 6-9 hours every day to meet this objective. Our state (Virginia), the state hospital network, our university, the DOD, and the CDC use our models to guide their COVID-19 planning and response efforts. We began executing these pipelines on March 25, 2020, and have delivered and briefed weekly updates to these stakeholders for over 30 weeks without interruption.