ABSTRACT
Operation Outbreak (OO) is a Bluetooth-based simulation platform that teaches students how pathogens spread and the impact of interventions, thereby facilitating the safe reopening of schools. OO also generates data to inform epidemiological models and prevent future outbreaks. Before SARS-CoV-2 was reported, we repeatedly simulated a virus with similar features, correctly predicting many human behaviors later observed during the pandemic.
Subject(s)
Computer Simulation , Computer-Assisted Instruction/methods , Contact Tracing/methods , Coronavirus Infections/epidemiology , Epidemiology/education , Pneumonia, Viral/epidemiology , Basic Reproduction Number , COVID-19 , Coronavirus Infections/prevention & control , Coronavirus Infections/transmission , Humans , Mobile Applications , Pandemics/prevention & control , Pneumonia, Viral/prevention & control , Pneumonia, Viral/transmission , Smartphone
ABSTRACT
BACKGROUND: The performance of rapid antigen tests (Ag-RDTs) for screening asymptomatic and symptomatic persons for SARS-CoV-2 is not well established. OBJECTIVE: To evaluate the performance of Ag-RDTs for detection of SARS-CoV-2 among symptomatic and asymptomatic participants. DESIGN: This prospective cohort study enrolled participants between October 2021 and January 2022. Participants completed Ag-RDTs and reverse transcriptase polymerase chain reaction (RT-PCR) testing for SARS-CoV-2 every 48 hours for 15 days. SETTING: Participants were enrolled digitally throughout the mainland United States. They self-collected anterior nasal swabs for Ag-RDTs and RT-PCR testing. Nasal swabs for RT-PCR were shipped to a central laboratory, whereas Ag-RDTs were done at home. PARTICIPANTS: Of 7361 participants in the study, 5353 who were asymptomatic and negative for SARS-CoV-2 on study day 1 were eligible. In total, 154 participants had at least 1 positive RT-PCR result. MEASUREMENTS: The sensitivity of Ag-RDTs was measured on the basis of testing once (same-day), twice (after 48 hours), and thrice (after a total of 96 hours). The analysis was repeated for different days past index PCR positivity (DPIPPs) to approximate real-world scenarios where testing initiation may not always coincide with DPIPP 0. Results were stratified by symptom status. RESULTS: Among 154 participants who tested positive for SARS-CoV-2, 97 were asymptomatic and 57 had symptoms at infection onset. Serial testing with Ag-RDTs twice 48 hours apart resulted in an aggregated sensitivity of 93.4% (95% CI, 90.4% to 95.9%) among symptomatic participants on DPIPPs 0 to 6. When singleton positive results were excluded, the aggregated sensitivity on DPIPPs 0 to 6 for 2-time serial testing among asymptomatic participants was lower at 62.7% (CI, 57.0% to 70.5%), but it improved to 79.0% (CI, 70.1% to 87.4%) with testing 3 times at 48-hour intervals. LIMITATION: Participants tested every 48 hours; therefore, these data cannot support conclusions about serial testing intervals shorter than 48 hours. CONCLUSION: The performance of Ag-RDTs was optimized when asymptomatic participants tested 3 times at 48-hour intervals and when symptomatic participants tested 2 times separated by 48 hours. PRIMARY FUNDING SOURCE: National Institutes of Health RADx Tech program.
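The aggregated sensitivity reported above is the proportion of RT-PCR-confirmed infections caught by at least one Ag-RDT within a serial-testing window, with a binomial confidence interval. As a hedged illustration only, the Python sketch below computes one-, two-, and three-test aggregated sensitivity from a small invented table of per-participant results using a Wilson score interval; the data, array names, and interval choice are assumptions for the example, not the study's analysis code.

```python
import numpy as np
from statsmodels.stats.proportion import proportion_confint

# Hypothetical per-participant serial Ag-RDT results (1 = positive, 0 = negative),
# ordered by test occasion (0 h, 48 h, 96 h) after the index RT-PCR positive.
ag_rdt = np.array([
    [0, 1, 1],
    [1, 1, 0],
    [0, 0, 1],
    [0, 0, 0],
    [1, 0, 1],
    [0, 1, 0],
])

def aggregated_sensitivity(results, n_tests):
    """Fraction of RT-PCR-confirmed infections with at least one positive
    Ag-RDT among the first `n_tests` serial tests, with a 95% Wilson CI."""
    detected = int((results[:, :n_tests].max(axis=1) == 1).sum())
    total = results.shape[0]
    low, high = proportion_confint(detected, total, alpha=0.05, method="wilson")
    return detected / total, low, high

for n in (1, 2, 3):
    sens, low, high = aggregated_sensitivity(ag_rdt, n)
    print(f"{n} serial test(s): sensitivity {sens:.1%} (95% CI {low:.1%}-{high:.1%})")
```

In the study itself, this calculation was repeated for each day past index PCR positivity and stratified by symptom status.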
Subject(s)
COVID-19 , Humans , COVID-19/diagnosis , Prospective Studies , SARS-CoV-2 , Polymerase Chain Reaction , Cognition , Sensitivity and Specificity
ABSTRACT
BACKGROUND: It is important to document the performance of rapid antigen tests (Ag-RDTs) in detecting SARS-CoV-2 variants. OBJECTIVE: To compare the performance of Ag-RDTs in detecting the Delta (B.1.617.2) and Omicron (B.1.1.529) variants of SARS-CoV-2. DESIGN: Secondary analysis of a prospective cohort study that enrolled participants between 18 October 2021 and 24 January 2022. Participants did Ag-RDTs and collected samples for reverse transcriptase polymerase chain reaction (RT-PCR) testing every 48 hours for 15 days. SETTING: The parent study enrolled participants throughout the mainland United States through a digital platform. All participants self-collected anterior nasal swabs for rapid antigen testing and RT-PCR testing. All Ag-RDTs were completed at home, whereas nasal swabs for RT-PCR were shipped to a central laboratory. PARTICIPANTS: Of 7349 participants enrolled in the parent study, 5779 asymptomatic persons who tested negative for SARS-CoV-2 on day 1 of the study were eligible for this substudy. MEASUREMENTS: Sensitivity of Ag-RDTs on the same day as the first positive (index) RT-PCR result and 48 hours after the first positive RT-PCR result. RESULTS: A total of 207 participants were positive on RT-PCR (58 Delta, 149 Omicron). Differences in sensitivity between variants were not statistically significant (same day: Delta, 15.5% [95% CI, 6.2% to 24.8%] vs. Omicron, 22.1% [CI, 15.5% to 28.8%]; at 48 hours: Delta, 44.8% [CI, 32.0% to 57.6%] vs. Omicron, 49.7% [CI, 41.6% to 57.6%]). Among 109 participants who had RT-PCR-positive results for 48 hours, rapid antigen sensitivity did not differ significantly between Delta- and Omicron-infected participants (48-hour sensitivity: Delta, 81.5% [CI, 66.8% to 96.1%] vs. Omicron, 78.0% [CI, 69.1% to 87.0%]). Only 7.2% of the 69 participants with RT-PCR-positive results for shorter than 48 hours tested positive by Ag-RDT within 1 week; those with Delta infections remained consistently negative on Ag-RDTs. LIMITATION: A testing frequency of 48 hours does not allow a finer temporal resolution of the analysis of test performance, and the results of Ag-RDTs are based on self-report. CONCLUSION: The performance of Ag-RDTs in persons infected with the SARS-CoV-2 Omicron variant is not inferior to that in persons with Delta infections. Serial testing improved the sensitivity of Ag-RDTs for both variants. The performance of rapid antigen testing varies on the basis of duration of RT-PCR positivity. PRIMARY FUNDING SOURCE: National Heart, Lung, and Blood Institute of the National Institutes of Health.
Subject(s)
COVID-19 , SARS-CoV-2 , United States , Humans , Prospective Studies , Self-Testing , Sensitivity and Specificity
ABSTRACT
Rapid diagnostic tools for children with Ebola virus disease (EVD) are needed to expedite isolation and treatment. To evaluate a predictive diagnostic tool, we examined retrospective data (2014-2015) from the International Medical Corps Ebola Treatment Centers in West Africa. We incorporated statistically derived candidate predictors into a 7-point Pediatric Ebola Risk Score. Evidence of bleeding or having known or no known Ebola contacts was positively associated with an EVD diagnosis, whereas abdominal pain was negatively associated. Model discrimination using area under the curve (AUC) was 0.87, which outperforms the World Health Organization criteria (AUC 0.56). External validation, performed by using data from International Medical Corps Ebola Treatment Centers in the Democratic Republic of the Congo during 2018-2019, showed an AUC of 0.70. External validation showed that discrimination achieved by using World Health Organization criteria was similar; however, the Pediatric Ebola Risk Score is simpler to use.
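To show how a simple integer point score of this kind can be evaluated against confirmed diagnoses, the sketch below assigns toy point weights to invented binary predictors for simulated patients and summarizes discrimination with an ROC AUC. The weights, predictor set, and data are placeholders and do not reproduce the published Pediatric Ebola Risk Score.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
# Invented binary predictors for simulated pediatric patients.
bleeding = rng.integers(0, 2, n)
known_contact = rng.integers(0, 2, n)
abdominal_pain = rng.integers(0, 2, n)
# Simulate confirmed EVD status loosely correlated with the predictors.
logit = -1.0 + 1.5 * bleeding + 1.0 * known_contact - 0.8 * abdominal_pain
evd_positive = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Toy additive point score: positive points for risk factors,
# negative points for the feature negatively associated with diagnosis.
score = 3 * bleeding + 2 * known_contact - 2 * abdominal_pain

print(f"Toy point-score AUC: {roc_auc_score(evd_positive, score):.2f}")
```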
Subject(s)
Ebolavirus , Hemorrhagic Fever, Ebola , Area Under Curve , Child , Democratic Republic of the Congo/epidemiology , Disease Outbreaks , Hemorrhagic Fever, Ebola/diagnosis , Hemorrhagic Fever, Ebola/epidemiology , Humans , Retrospective Studies , Risk Factors
ABSTRACT
MOTIVATION: Visualizing two-dimensional embeddings (such as UMAP or tSNE) is a useful step in interrogating single-cell RNA sequencing (scRNA-Seq) data. Subsequently, users typically iterate between programmatic analyses (including clustering and differential expression) and visual exploration (e.g. coloring cells by interesting features) to uncover biological signals in the data. Interactive tools exist to facilitate visual exploration of embeddings such as performing differential expression on user-selected cells. However, the practical utility of these tools is limited because they don't support rapid movement of data and results to and from the programming environments where most of the data analysis takes place, interrupting the iterative process. RESULTS: Here, we present the Single-cell Interactive Viewer (Sciviewer), a tool that overcomes this limitation by allowing interactive visual interrogation of embeddings from within Python. Beyond differential expression analysis of user-selected cells, Sciviewer implements a novel method to identify genes varying locally along any user-specified direction on the embedding. Sciviewer enables rapid and flexible iteration between interactive and programmatic modes of scRNA-Seq exploration, illustrating a useful approach for analyzing high-dimensional data. AVAILABILITY AND IMPLEMENTATION: Code and examples are provided at https://github.com/colabobio/sciviewer.
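The idea of finding genes that vary along a user-specified direction can be approximated by projecting the selected cells' embedding coordinates onto that direction and ranking genes by how strongly their expression correlates with the projection. The sketch below demonstrates this on synthetic arrays; it is a simplified stand-in for the approach, not Sciviewer's actual implementation, and all variable names are invented.

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic stand-ins: a 2D embedding and a cells-by-genes expression matrix.
rng = np.random.default_rng(1)
n_cells, n_genes = 500, 50
embedding = rng.normal(size=(n_cells, 2))           # e.g. UMAP coordinates
expression = rng.normal(size=(n_cells, n_genes))
expression[:, 0] += 2.0 * embedding[:, 0]           # gene 0 varies along the x-axis

def genes_along_direction(embedding, expression, direction, selected=None):
    """Rank genes by correlation between their expression and the selected
    cells' projection onto `direction` (a 2-vector in embedding space)."""
    if selected is None:
        selected = np.arange(embedding.shape[0])
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    projection = embedding[selected] @ d
    results = [pearsonr(expression[selected, g], projection)
               for g in range(expression.shape[1])]
    order = np.argsort([-abs(r) for r, _ in results])
    return [(int(g), results[g][0], results[g][1]) for g in order]

for gene, r, p in genes_along_direction(embedding, expression, (1.0, 0.0))[:5]:
    print(f"gene {gene}: r = {r:+.2f}, p = {p:.2e}")
```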
Subject(s)
Single-Cell Gene Expression Analysis , Software , Sequence Analysis, RNA/methods , Single-Cell Analysis/methods , Cluster Analysis
ABSTRACT
BACKGROUND: Limited clinical and laboratory data are available on patients with Ebola virus disease (EVD). The Kenema Government Hospital in Sierra Leone, which had an existing infrastructure for research regarding viral hemorrhagic fever, has received and cared for patients with EVD since the beginning of the outbreak in Sierra Leone in May 2014. METHODS: We reviewed available epidemiologic, clinical, and laboratory records of patients in whom EVD was diagnosed between May 25 and June 18, 2014. We used quantitative reverse-transcriptase-polymerase-chain-reaction assays to assess the load of Ebola virus (EBOV, Zaire species) in a subgroup of patients. RESULTS: Of 106 patients in whom EVD was diagnosed, 87 had a known outcome, and 44 had detailed clinical information available. The incubation period was estimated to be 6 to 12 days, and the case fatality rate was 74%. Common findings at presentation included fever (in 89% of the patients), headache (in 80%), weakness (in 66%), dizziness (in 60%), diarrhea (in 51%), abdominal pain (in 40%), and vomiting (in 34%). Clinical and laboratory factors at presentation that were associated with a fatal outcome included fever, weakness, dizziness, diarrhea, and elevated levels of blood urea nitrogen, aspartate aminotransferase, and creatinine. Exploratory analyses indicated that patients under the age of 21 years had a lower case fatality rate than those over the age of 45 years (57% vs. 94%, P=0.03), and patients presenting with fewer than 100,000 EBOV copies per milliliter had a lower case fatality rate than those with 10 million EBOV copies per milliliter or more (33% vs. 94%, P=0.003). Bleeding occurred in only 1 patient. CONCLUSIONS: The incubation period and case fatality rate among patients with EVD in Sierra Leone are similar to those observed elsewhere in the 2014 outbreak and in previous outbreaks. Although bleeding was an infrequent finding, diarrhea and other gastrointestinal manifestations were common. (Funded by the National Institutes of Health and others.).
Subject(s)
Ebolavirus/genetics , Epidemics , Hemorrhagic Fever, Ebola/epidemiology , Abdominal Pain , Adult , Animals , Diarrhea , Ebolavirus/isolation & purification , Female , Fever , Hemorrhagic Fever, Ebola/complications , Hemorrhagic Fever, Ebola/therapy , Hemorrhagic Fever, Ebola/virology , Humans , Male , Middle Aged , Mortality , Reverse Transcriptase Polymerase Chain Reaction , Sierra Leone/epidemiology , Viral Load , Vomiting
Subject(s)
Lassa Fever/epidemiology , Pregnancy Complications, Infectious/epidemiology , Pregnancy Complications, Infectious/virology , Adolescent , Adult , Disease Management , Female , Gestational Age , Humans , Lassa Fever/diagnosis , Lassa Fever/drug therapy , Lassa virus , Nigeria/epidemiology , Pregnancy , Pregnancy Complications, Infectious/drug therapy , Pregnancy Outcome , Public Health Surveillance , Retrospective Studies , Severity of Illness Index , Symptom Assessment , Young Adult
ABSTRACT
Although most COVID-19 infections are asymptomatic, mainland China experienced a sharp increase in symptomatic cases at the end of 2022. In this study, we examine China's sudden surge in symptomatic COVID-19 using a conceptual SIR-based model. The model considers the epidemiological characteristics of SARS-CoV-2, particularly variolation arising from non-pharmaceutical interventions (facial masking and social distancing), together with demography and disease mortality in mainland China. The increase in symptomatic proportions may be attributable to (1) greater sensitivity and vulnerability during winter and (2) enhanced viral inhalation due to spikes in SARS-CoV-2 infections (high transmissibility). These two factors could explain China's high symptomatic proportion of COVID-19 in December 2022. Our study can therefore serve as a decision-support tool to enhance SARS-CoV-2 prevention and control efforts. We highlight that facemask-induced variolation could potentially reduce transmissibility rather than severity in infected individuals; however, further investigation is required to understand the effect of variolation on disease severity.
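To make the notion of a conceptual SIR-based model concrete, here is a minimal deterministic sketch in which the infectious class is split into symptomatic and asymptomatic compartments and the transmission rate may already incorporate any mask-related reduction. All parameter values and the symptomatic fraction are illustrative assumptions, not the calibrated model from this study.

```python
import numpy as np
from scipy.integrate import odeint

def sir_symptomatic(y, t, beta, gamma, p_symptomatic):
    """SIR with the infectious class split into symptomatic (Is) and
    asymptomatic (Ia) compartments; beta can already include any
    mask-related reduction in transmission. State variables are fractions."""
    S, Is, Ia, R = y
    new_infections = beta * S * (Is + Ia)
    dS = -new_infections
    dIs = p_symptomatic * new_infections - gamma * Is
    dIa = (1 - p_symptomatic) * new_infections - gamma * Ia
    dR = gamma * (Is + Ia)
    return dS, dIs, dIa, dR

# Illustrative parameters only (not calibrated to mainland China).
beta, gamma, p_symptomatic = 0.6, 1 / 7, 0.3
y0 = (0.999, 0.0005, 0.0005, 0.0)       # population fractions
t = np.linspace(0, 180, 181)
S, Is, Ia, R = odeint(sir_symptomatic, y0, t, args=(beta, gamma, p_symptomatic)).T
print(f"Peak symptomatic prevalence: {Is.max():.3%} on day {int(t[Is.argmax()])}")
```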
ABSTRACT
Background: Although multiple prognostic models exist for Ebola virus disease mortality, few incorporate biomarkers, and none has used longitudinal point-of-care serum testing throughout Ebola treatment center care. Methods: This retrospective study evaluated adult patients with Ebola virus disease during the 10th outbreak in the Democratic Republic of the Congo. Ebola virus cycle threshold (Ct; based on reverse transcriptase polymerase chain reaction) and point-of-care serum biomarker values were collected throughout Ebola treatment center care. Four iterative machine learning models were created for prognosis of mortality. The base model used age and admission Ct as predictors. Ct and biomarkers from treatment days 1 and 2, days 3 and 4, and days 5 and 6 associated with mortality were iteratively added to the model to yield mortality risk estimates. Receiver operating characteristic curves for each iteration provided period-specific areas under the curve with 95% CIs. Results: Of 310 cases positive for Ebola virus disease, mortality occurred in 46.5%. Biomarkers predictive of mortality were elevated creatine kinase, aspartate aminotransferase, blood urea nitrogen (BUN), alanine aminotransferase, and potassium; low albumin during days 1 and 2; elevated C-reactive protein, BUN, and potassium during days 3 and 4; and elevated C-reactive protein and BUN during days 5 and 6. The area under the curve improved substantially with each iteration: base model, 0.74 (95% CI, .69-.80); days 1 and 2, 0.84 (95% CI, .73-.94); days 3 and 4, 0.94 (95% CI, .88-1.0); and days 5 and 6, 0.96 (95% CI, .90-1.0). Conclusions: This is the first study to utilize iterative point-of-care biomarkers to derive dynamic prognostic mortality models. This novel approach demonstrates that utilizing biomarkers markedly improved prognostication up to 6 days into patient care.
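The iterative modelling described above amounts to refitting a classifier as biomarker measurements from successive treatment-day windows become available and recomputing the ROC AUC at each step. The sketch below mimics that workflow on synthetic data with a cross-validated logistic regression; the feature names, model family, and all numbers are assumptions for illustration rather than the study's models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n = 300
# Invented admission variables and later biomarker windows.
age = rng.normal(40, 15, n)
admission_ct = rng.normal(24, 5, n)
bun_days_1_2 = rng.normal(20, 8, n)
crp_days_3_4 = rng.normal(30, 12, n)
logit = (-0.1 * (admission_ct - 24) + 0.05 * (bun_days_1_2 - 20)
         + 0.04 * (crp_days_3_4 - 30))
died = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Widen the feature set as each later time window becomes available,
# refitting and rescoring the model at every step.
feature_sets = {
    "base (age + admission Ct)": np.column_stack([age, admission_ct]),
    "+ days 1-2 biomarker": np.column_stack([age, admission_ct, bun_days_1_2]),
    "+ days 3-4 biomarker": np.column_stack([age, admission_ct, bun_days_1_2, crp_days_3_4]),
}
for label, X in feature_sets.items():
    probs = cross_val_predict(LogisticRegression(max_iter=1000), X, died,
                              cv=5, method="predict_proba")[:, 1]
    print(f"{label}: cross-validated AUC = {roc_auc_score(died, probs):.2f}")
```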
ABSTRACT
Most genetic variants associated with psychiatric disorders are located in noncoding regions of the genome. To investigate their functional implications, we integrate epigenetic data from the PsychENCODE Consortium and other published sources to construct a comprehensive atlas of candidate brain cis-regulatory elements. Using deep learning, we model these elements' sequence syntax and predict how binding sites for lineage-specific transcription factors contribute to cell type-specific gene regulation in various types of glia and neurons. The elements' evolutionary history suggests that new regulatory information in the brain emerges primarily via smaller sequence mutations within conserved mammalian elements rather than entirely new human- or primate-specific sequences. However, primate-specific candidate elements, particularly those active during fetal brain development and in excitatory neurons and astrocytes, are implicated in the heritability of brain-related human traits. Additionally, we introduce PsychSCREEN, a web-based platform offering interactive visualization of PsychENCODE-generated genetic and epigenetic data from diverse brain cell types in individuals with psychiatric disorders and healthy controls.
Subject(s)
Brain , Epigenesis, Genetic , Regulatory Sequences, Nucleic Acid , Humans , Brain/metabolism , Regulatory Sequences, Nucleic Acid/genetics , Animals , Evolution, Molecular , Mental Disorders/genetics , Regulatory Elements, Transcriptional/genetics , Neurons/metabolism , Gene Expression Regulation , Transcription Factors/genetics , Transcription Factors/metabolism
ABSTRACT
The worldwide decline in malaria incidence is revealing the extensive burden of non-malarial febrile illness (NMFI), which remains poorly understood and difficult to diagnose. To characterize NMFI in Senegal, we collected venous blood and clinical metadata in a cross-sectional study of febrile patients and healthy controls in a low malaria burden area. Using 16S and untargeted sequencing, we detected viral, bacterial, or eukaryotic pathogens in 23% (38/163) of NMFI cases. Bacteria were the most common, with relapsing fever Borrelia and spotted fever Rickettsia found in 15.5% and 3.8% of cases, respectively. Four viral pathogens were found in a total of 7 febrile cases (3.5%). Sequencing also detected undiagnosed Plasmodium, including one putative P. ovale infection. We developed a logistic regression model that can distinguish Borrelia from NMFIs with similar presentation based on symptoms and vital signs (F1 score: 0.823). These results highlight the challenge and importance of improved diagnostics, especially for Borrelia, to support diagnosis and surveillance.
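A classifier of the kind described, built from symptoms and vital signs and summarized with an F1 score, can be set up as follows. The sketch uses synthetic data and invented features; it shows the general logistic-regression-plus-F1 workflow, not the study's fitted model or its coefficients.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 400
# Invented symptoms and vital signs for simulated febrile patients.
fever_days = rng.integers(1, 10, n)
headache = rng.integers(0, 2, n)
temperature = rng.normal(38.5, 0.7, n)
logit = 0.4 * fever_days + 0.8 * headache + 0.5 * (temperature - 38.5) - 3.0
is_borrelia = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([fever_days, headache, temperature])
X_train, X_test, y_train, y_test = train_test_split(
    X, is_borrelia, test_size=0.3, random_state=0, stratify=is_borrelia)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"F1 score on held-out patients: {f1_score(y_test, model.predict(X_test)):.3f}")
```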
Subject(s)
Borrelia , Malaria , Plasmodium , Humans , Senegal/epidemiology , Cross-Sectional Studies , Malaria/diagnosis , Malaria/epidemiology , Fever/epidemiology , Borrelia/genetics
ABSTRACT
Background: Understanding changes in diagnostic performance after symptom onset and severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) exposure within different populations is crucial to guide the use of diagnostics for SARS-CoV-2. Methods: The Test Us at Home study was a longitudinal cohort study that enrolled individuals across the United States between October 2021 and February 2022. Participants performed paired antigen-detection rapid diagnostic tests (Ag-RDTs) and reverse-transcriptase polymerase chain reaction (RT-PCR) tests at home every 48 hours for 15 days and self-reported symptoms and known coronavirus disease 2019 exposures immediately before testing. The percent positivity for Ag-RDTs and RT-PCR tests was calculated each day after symptom onset and exposure and stratified by vaccination status, variant, age category, and sex. Results: The highest percent positivity occurred 2 days after symptom onset (RT-PCR, 91.2%; Ag-RDT, 71.1%) and 6 days after exposure (RT-PCR, 91.8%; Ag-RDT, 86.2%). RT-PCR and Ag-RDT performance did not differ by vaccination status, variant, age category, or sex. The percent positivity for Ag-RDTs was lower among exposed, asymptomatic individuals than among symptomatic individuals (37.5% [95% confidence interval (CI), 13.7%-69.4%] vs 90.3% [75.1%-96.7%]). Cumulatively, Ag-RDTs detected 84.9% (95% CI, 78.2%-89.8%) of infections within 4 days of symptom onset. For exposed participants, Ag-RDTs detected 94.0% (95% CI, 86.7%-97.4%) of RT-PCR-confirmed infections within 6 days of exposure. Conclusions: The percent positivity for Ag-RDTs and RT-PCR tests was highest 2 days after symptom onset and 6 days after exposure, and performance increased with serial testing. The percent positivity of Ag-RDTs was lowest among asymptomatic individuals but did not differ by sex, variant, vaccination status, or age category.
ABSTRACT
The worldwide decline in malaria incidence is revealing the extensive burden of non-malarial febrile illness (NMFI), which remains poorly understood and difficult to diagnose. To characterize NMFI in Senegal, we collected venous blood and clinical metadata from febrile patients and healthy controls in a low malaria burden area. Using 16S and unbiased sequencing, we detected viral, bacterial, or eukaryotic pathogens in 29% of NMFI cases. Bacteria were the most common, with relapsing fever Borrelia and spotted fever Rickettsia found in 15% and 3.7% of cases, respectively. Four viral pathogens were found in a total of 7 febrile cases (3.5%). Sequencing also detected undiagnosed Plasmodium, including one putative P. ovale infection. We developed a logistic regression model to distinguish Borrelia from NMFIs with similar presentation based on symptoms and vital signs. These results highlight the challenge and importance of improved diagnostics, especially for Borrelia, to support diagnosis and surveillance.
ABSTRACT
Background: The performance of rapid antigen tests for SARS-CoV-2 (Ag-RDT) in temporal relation to symptom onset or exposure is unknown, as is the impact of vaccination on this relationship. Objective: To evaluate the performance of Ag-RDT compared with RT-PCR based on day after symptom onset or exposure in order to decide on 'when to test'. Design, Setting, and Participants: The Test Us at Home study was a longitudinal cohort study that enrolled participants over 2 years old across the United States between October 18, 2021, and February 4, 2022. All participants were asked to conduct Ag-RDT and RT-PCR testing every 48 hours over a 15-day period. Participants with one or more symptoms during the study period were included in the Day Post Symptom Onset (DPSO) analyses, while those who reported a COVID-19 exposure were included in the Day Post Exposure (DPE) analysis. Exposure: Participants were asked to self-report any symptoms or known exposures to SARS-CoV-2 every 48 hours, immediately prior to conducting Ag-RDT and RT-PCR testing. The first day a participant reported one or more symptoms was termed DPSO 0, and the day of exposure was DPE 0. Vaccination status was self-reported. Main Outcome and Measures: Results of Ag-RDT were self-reported (positive, negative, or invalid) and RT-PCR results were analyzed by a central laboratory. Percent positivity of SARS-CoV-2 and sensitivity of Ag-RDT and RT-PCR by DPSO and DPE were stratified by vaccination status and calculated with 95% confidence intervals. Results: A total of 7,361 participants enrolled in the study. Among them, 2,086 (28.3%) and 546 (7.4%) participants were eligible for the DPSO and DPE analyses, respectively. Unvaccinated participants were nearly twice as likely as vaccinated participants to test positive for SARS-CoV-2 in the event of symptoms (PCR+: 27.6% vs 10.1%) or exposure (PCR+: 43.8% vs. 22.2%). The highest proportion of vaccinated and unvaccinated individuals tested positive on DPSO 2 and DPE 5-8. Performance of RT-PCR and Ag-RDT did not differ by vaccination status. Ag-RDT detected 78.0% (95% CI: 72.6-82.6) of PCR-confirmed infections by DPSO 4. For exposed participants, Ag-RDT detected 84.9% (95% CI: 75.0-91.4) of PCR-confirmed infections by day five post-exposure (DPE 5). Conclusions and Relevance: Performance of Ag-RDT and RT-PCR was highest on DPSO 0-2 and DPE 5 and did not differ by vaccination status. These data suggest that serial testing remains integral to enhancing the performance of Ag-RDT.
ABSTRACT
Background: Performance of rapid antigen tests for SARS-CoV-2 (Ag-RDT) varies over the course of an infection, and their performance in screening for SARS-CoV-2 is not well established. We aimed to evaluate performance of Ag-RDT for detection of SARS-CoV-2 for symptomatic and asymptomatic participants. Methods: Participants >2 years old across the United States enrolled in the study between October 2021 and February 2022. Participants completed Ag-RDT and molecular testing (RT-PCR) for SARS-CoV-2 every 48 hours for 15 days. This analysis was limited to participants who were asymptomatic and tested negative on their first day of study participation. Onset of infection was defined as the day of first positive RT-PCR result. Sensitivity of Ag-RDT was measured based on testing once, twice (after 48 hours), and thrice (after 96 hours). Analysis was repeated for different Days Post Index PCR Positivity (DPIPP) and stratified by symptom status. Results: In total, 5,609 of 7,361 participants were eligible for this analysis. Among 154 participants who tested positive for SARS-CoV-2, 97 were asymptomatic and 57 had symptoms at infection onset. Serial testing with Ag-RDT twice 48 hours apart resulted in an aggregated sensitivity of 93.4% (95% CI: 89.1-96.1%) among symptomatic participants on DPIPP 0-6. Excluding singleton positives, aggregated sensitivity on DPIPP 0-6 for two-time serial testing among asymptomatic participants was lower at 62.7% (54.7-70.0%) but improved to 79.0% (71.0-85.3%) with testing three times at 48-hour intervals. Discussion: Performance of Ag-RDT was optimized when asymptomatic participants tested three times at 48-hour intervals and when symptomatic participants tested two times separated by 48 hours.
ABSTRACT
Since the demonstration that the sequence of a protein encodes its structure, the prediction of structure from sequence remains an outstanding problem that impacts numerous scientific disciplines, including many genome projects. By iteratively fixing secondary structure assignments of residues during Monte Carlo simulations of folding, our coarse-grained model without information concerning homology or explicit side chains can outperform current homology-based secondary structure prediction methods for many proteins. The computationally rapid algorithm using only single (phi,psi) dihedral angle moves also generates tertiary structures of accuracy comparable with existing all-atom methods for many small proteins, particularly those with low homology. Hence, given appropriate search strategies and scoring functions, reduced representations can be used for accurately predicting secondary structure and providing 3D structures, thereby increasing the size of proteins approachable by homology-free methods and the accuracy of template methods that depend on a high-quality input secondary structure.
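The sampling scheme described here is Metropolis Monte Carlo over a coarse-grained chain with single-angle moves. The sketch below illustrates only that move-and-accept logic on a deliberately simplified 2D chain with one turn angle per residue and a contact-counting energy; the geometry, energy function, and secondary-structure fixing of the published method are not reproduced.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(4)
n_res = 30                      # residues in the toy chain

def chain_coords(angles):
    """Place a 2D chain of unit bonds; bond directions are cumulative sums of
    per-residue turn angles (a crude stand-in for phi/psi dihedral moves)."""
    directions = np.cumsum(angles)
    steps = np.column_stack([np.cos(directions), np.sin(directions)])
    return np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])

def energy(coords, cutoff=1.5):
    """Toy energy favoring compactness: -1 for every non-local residue pair
    closer than `cutoff`."""
    d = squareform(pdist(coords))
    i, j = np.triu_indices(len(coords), k=3)
    return -int(np.count_nonzero(d[i, j] < cutoff))

angles = rng.uniform(-0.3, 0.3, n_res)
current_e = energy(chain_coords(angles))
kT = 1.0
for _ in range(20000):
    trial = angles.copy()
    trial[rng.integers(n_res)] += rng.normal(scale=0.3)   # single-angle move
    trial_e = energy(chain_coords(trial))
    # Metropolis criterion: always accept downhill moves, sometimes uphill.
    if trial_e <= current_e or rng.random() < np.exp((current_e - trial_e) / kT):
        angles, current_e = trial, trial_e

print(f"Final toy energy (negative contact count): {current_e}")
```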
Subject(s)
Molecular Mimicry , Protein Folding , Proteins/chemistry , Proteins/metabolism , Structural Homology, Protein , Algorithms , Amino Acid Sequence , Molecular Sequence Data , Protein Structure, Secondary , Protein Structure, Tertiary
ABSTRACT
College campuses are vulnerable to infectious disease outbreaks, and there is an urgent need to develop better strategies to mitigate their size and duration, particularly as educational institutions around the world adapt to in-person instruction during the COVID-19 pandemic. Towards addressing this need, we applied a stochastic compartmental model to quantify the impact of university-level responses to contain a mumps outbreak at Harvard University in 2016. We used our model to determine which containment interventions were most effective and study alternative scenarios without and with earlier interventions. This model allows for stochastic variation in small populations, missing or unobserved case data, and changes in disease transmission rates post-intervention. The results suggest that control measures implemented by the University's Health Services, including rapid isolation of suspected cases, were very effective at containing the outbreak. Without those measures, the outbreak could have been four times larger. More generally, we conclude that universities should apply (i) diagnostic protocols that address false negatives from molecular tests and (ii) strict quarantine policies to contain the spread of easily transmissible infectious diseases such as mumps among their students. This modelling approach could be applied to data from other outbreaks on college campuses and similar small population settings.
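A stochastic compartmental model of the kind applied here can be simulated with a Gillespie-style algorithm, drawing random waiting times between transmission and recovery events so that chance matters in small populations. The sketch below is a minimal SIR variant with a step change in the transmission rate at a chosen intervention day, standing in for case isolation; every parameter value is illustrative rather than taken from the fitted mumps model.

```python
import numpy as np

def gillespie_sir(N=5000, I0=1, beta=0.4, gamma=0.2,
                  intervention_day=30.0, beta_after=0.15, seed=0):
    """Event-driven stochastic SIR with a step change in the transmission
    rate at `intervention_day`, standing in for rapid case isolation."""
    rng = np.random.default_rng(seed)
    S, I, cases, t = N - I0, I0, I0, 0.0
    while I > 0:
        b = beta if t < intervention_day else beta_after
        rate_infection = b * S * I / N
        rate_recovery = gamma * I
        total_rate = rate_infection + rate_recovery
        t += rng.exponential(1.0 / total_rate)      # waiting time to next event
        if rng.random() < rate_infection / total_rate:
            S, I, cases = S - 1, I + 1, cases + 1   # transmission event
        else:
            I -= 1                                   # recovery event
    return cases

with_measures = np.mean([gillespie_sir(seed=s) for s in range(50)])
no_measures = np.mean([gillespie_sir(beta_after=0.4, seed=s) for s in range(50)])
print(f"Mean outbreak size: {with_measures:.0f} with the intervention, "
      f"{no_measures:.0f} without")
```

Averaging over many seeds is important here: with a single index case, a large share of runs go extinct by chance alone, which is exactly the small-population variability the abstract highlights.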
ABSTRACT
Amid COVID-19, many institutions deployed vast resources to test their members regularly for safe reopening. This self-focused approach, however, not only overlooks surrounding communities but also remains blind to community transmission that could breach the institution. To test the relative merits of a more altruistic strategy, we built an epidemiological model that assesses the differential impact on case counts when institutions instead allocate a proportion of their tests to members' close contacts in the larger community. We found that testing outside the institution benefits the institution in all plausible circumstances, with the optimal proportion of tests to use externally landing at 45% under baseline model parameters. Our results were robust to local prevalence, secondary attack rate, testing capacity, and contact reporting level, yielding a range of optimal community testing proportions from 18 to 58%. The model performed best under the assumption that community contacts are known to the institution; however, it still demonstrated a significant benefit even without complete knowledge of the contact network.
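The trade-off examined here can be explored by sweeping the fraction of a fixed daily testing budget spent on members' community contacts and tracking institutional case counts. The toy discrete-time simulation below does this; its structure and every parameter are invented for illustration, and any optimum it produces depends entirely on those assumptions rather than on the paper's model.

```python
import numpy as np

def institutional_cases(frac_external, days=60, n_inst=2000, n_contacts=2000,
                        tests_per_day=200, beta_internal=0.08, import_rate=0.01,
                        background_incidence=0.005, recovery=0.1, seed=0):
    """Toy discrete-time simulation: infected community contacts seed cases
    into the institution; the daily test budget is split between the two
    groups, and anyone tested while infectious is isolated (removed)."""
    rng = np.random.default_rng(seed)
    inst_I = contact_I = total_inst_cases = 0
    for _ in range(days):
        # Community contacts acquire infection from background incidence.
        contact_I += rng.binomial(n_contacts, background_incidence)
        # Institutional infections: importation from contacts plus internal spread.
        force = beta_internal * inst_I / n_inst + import_rate * contact_I / n_contacts
        new_inst = rng.binomial(n_inst, 1 - np.exp(-force))
        inst_I += new_inst
        total_inst_cases += new_inst
        # Split the test budget; each test finds an infectious person with
        # probability equal to that group's current prevalence.
        external = int(tests_per_day * frac_external)
        found_contacts = rng.binomial(external, min(contact_I / n_contacts, 1.0))
        found_inst = rng.binomial(tests_per_day - external, min(inst_I / n_inst, 1.0))
        contact_I = max(contact_I - found_contacts, 0)
        inst_I = max(inst_I - found_inst, 0)
        # Natural recovery in both groups.
        inst_I -= rng.binomial(inst_I, recovery)
        contact_I -= rng.binomial(contact_I, recovery)
    return total_inst_cases

for frac in (0.0, 0.2, 0.45, 0.6):
    mean_cases = np.mean([institutional_cases(frac, seed=s) for s in range(20)])
    print(f"External test fraction {frac:.2f}: mean institutional cases {mean_cases:.1f}")
```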
Subject(s)
COVID-19 Testing/methods , COVID-19/diagnosis , COVID-19/prevention & control , COVID-19/epidemiology , COVID-19/transmission , Contact Tracing/methods , Epidemiological Models , Female , Humans , Male , Prevalence , Public Health
ABSTRACT
An app-based educational outbreak simulator, Operation Outbreak (OO), seeks to engage and educate participants to better respond to outbreaks. Here, we examine the utility of OO for understanding epidemiological dynamics. The OO app enables experience-based learning about outbreaks, spreading a virtual pathogen via Bluetooth among participating smartphones. Deployed at many colleges and in other settings, OO collects anonymized spatiotemporal data, including the time and duration of the contacts among participants of the simulation. We report the distribution, timing, duration, and connectedness of student social contacts at two university deployments and uncover cryptic transmission pathways through individuals' second-degree contacts. We then construct epidemiological models based on the OO-generated contact networks to predict the transmission pathways of hypothetical pathogens with varying reproductive numbers. Finally, we demonstrate that the granularity of OO data enables institutions to mitigate outbreaks by proactively and strategically testing and/or vaccinating individuals based on individual social interaction levels.
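Epidemiological models built on a measured contact network, as described above, can be implemented by simulating transmission along the network's edges. The sketch below runs a discrete-time SIR over a synthetic graph using networkx; the graph, seeding, and parameters are placeholders rather than OO-derived data.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(5)
# Synthetic stand-in for a Bluetooth-derived contact network.
G = nx.watts_strogatz_graph(n=500, k=8, p=0.1, seed=5)

def network_sir(G, p_transmit=0.05, p_recover=0.2, n_seed=3, days=100):
    """Discrete-time SIR on a contact graph: each day, every infectious node
    transmits independently along each of its edges with probability p_transmit."""
    status = {node: "S" for node in G}
    for node in rng.choice(list(G.nodes), size=n_seed, replace=False):
        status[node] = "I"
    total_infected = n_seed
    for _ in range(days):
        infectious = [n for n, s in status.items() if s == "I"]
        if not infectious:
            break
        newly_infected, recovered = set(), []
        for node in infectious:
            for neighbor in G.neighbors(node):
                if status[neighbor] == "S" and rng.random() < p_transmit:
                    newly_infected.add(neighbor)
            if rng.random() < p_recover:
                recovered.append(node)
        for node in newly_infected:
            status[node] = "I"
        for node in recovered:
            status[node] = "R"
        total_infected += len(newly_infected)
    return total_infected

final_size = network_sir(G)
print(f"Final outbreak size on the synthetic contact network: {final_size} of {G.number_of_nodes()}")
```

With an empirically measured graph in place of the synthetic one, the same loop can be used to rank nodes for targeted testing or vaccination by how often they appear on transmission paths.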
ABSTRACT
BACKGROUND: Ebola virus disease (EVD) causes high case fatality rates (CFRs) in young children, yet there are limited data focusing on predicting mortality in pediatric patients. Here we present machine learning-derived prognostic models to predict clinical outcomes in children infected with Ebola virus. METHODS: Using retrospective data from the Ebola Data Platform, we investigated children with EVD from the West African EVD outbreak in 2014-2016. Elastic net regularization was used to create a prognostic model for EVD mortality. In addition to external validation with data from the 2018-2020 EVD epidemic in the Democratic Republic of the Congo (DRC), we updated the model using selected serum biomarkers. FINDINGS: Pediatric EVD mortality was significantly associated with younger age, lower PCR cycle threshold (Ct) values, unexplained bleeding, respiratory distress, bone/muscle pain, anorexia, dysphagia, and diarrhea. These variables were combined to develop the newly described EVD Prognosis in Children (EPiC) predictive model. The area under the receiver operating characteristic curve (AUC) for EPiC was 0.77 (95% CI: 0.74-0.81) in the West Africa derivation dataset and 0.76 (95% CI: 0.64-0.88) in the DRC validation dataset. Updating the model with peak aspartate aminotransferase (AST) or creatine kinase (CK) measured within the first 48 hours after admission increased the AUC to 0.90 (0.77-1.00) and 0.87 (0.74-1.00), respectively. CONCLUSION: The novel EPiC prognostic model that incorporates clinical information and commonly used biochemical tests, such as AST and CK, can be used to predict mortality in children with EVD.
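Elastic net regularization for a binary mortality outcome can be fit in scikit-learn as a logistic regression with a mixed L1/L2 penalty. The sketch below shows that setup on synthetic data with invented clinical features; it is not the EPiC model, and the simulated coefficients carry no clinical meaning.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n = 500
# Invented clinical features for simulated pediatric cases.
age = rng.uniform(0, 17, n)
ct_value = rng.normal(25, 6, n)
bleeding = rng.integers(0, 2, n)
diarrhea = rng.integers(0, 2, n)
noise = rng.normal(size=(n, 4))        # irrelevant features the penalty should shrink
logit = -0.15 * (age - 8) - 0.12 * (ct_value - 25) + 1.2 * bleeding + 0.6 * diarrhea - 0.5
died = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([age, ct_value, bleeding, diarrhea, noise])
# Elastic net = mixed L1/L2 penalty; l1_ratio balances sparsity against shrinkage.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=1.0, max_iter=5000),
)
auc = cross_val_score(model, X, died, cv=5, scoring="roc_auc").mean()
print(f"Cross-validated AUC of the toy elastic net model: {auc:.2f}")
```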