ABSTRACT
Throughout the COVID-19 pandemic, policymakers have proposed risk metrics, such as the CDC Community Levels, to guide local and state decision-making. However, risk metrics have not reliably predicted key outcomes and have often lacked transparency in terms of prioritization of false-positive versus false-negative signals. They have also struggled to maintain relevance over time due to slow and infrequent updates addressing new variants and shifts in vaccine- and infection-induced immunity. We make two contributions to address these weaknesses. We first present a framework to evaluate predictive accuracy based on policy targets related to severe disease and mortality, allowing for explicit preferences toward false-negative versus false-positive signals. This approach allows policymakers to optimize metrics for specific preferences and interventions. Second, we propose a method to update risk thresholds in real time. We show that this adaptive approach to designating areas as "high risk" improves performance over static metrics in predicting 3-wk-ahead mortality and intensive care usage at both state and county levels. We also demonstrate that with our approach, using only new hospital admissions to predict 3-wk-ahead mortality and intensive care usage has performed consistently as well as metrics that also include cases and inpatient bed usage. Our results highlight that a key challenge for COVID-19 risk prediction is the changing relationship between indicators and outcomes of policy interest. Adaptive metrics therefore have a unique advantage in a rapidly evolving pandemic context.
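The framework described above weights false-negative against false-positive signals when designating areas as "high risk." As a hedged illustration only (not the authors' actual procedure; the data, weights, and threshold scan below are invented), choosing a risk-indicator threshold under an explicit preference for avoiding missed high-risk areas might look like:

```python
# Hedged sketch (not the paper's algorithm): pick the "high risk"
# threshold for an indicator by minimizing a weighted count of
# false negatives (missed high-risk areas) and false positives.

def best_threshold(indicator, outcome, w_fn=2.0, w_fp=1.0):
    """Scan observed indicator values as candidate thresholds;
    a larger w_fn encodes a preference against missed signals."""
    best_t, best_loss = None, float("inf")
    for t in sorted(set(indicator)):
        fn = sum(1 for x, y in zip(indicator, outcome) if x < t and y == 1)
        fp = sum(1 for x, y in zip(indicator, outcome) if x >= t and y == 0)
        loss = w_fn * fn + w_fp * fp
        if loss < best_loss:
            best_t, best_loss = t, loss
    return best_t

# Invented example: weekly hospital admissions per 100k, and whether
# 3-wk-ahead mortality later exceeded a policy target (1 = yes).
admissions = [2, 5, 8, 12, 20, 25]
exceeded = [0, 0, 0, 1, 1, 1]
print(best_threshold(admissions, exceeded))  # 12
```

Raising `w_fn` relative to `w_fp` pushes the chosen threshold downward, flagging more areas as high risk; an adaptive metric would rerun such a selection as new outcome data arrive.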
Subject(s)
COVID-19 , Humans , COVID-19/epidemiology , Pandemics , SARS-CoV-2 , Benchmarking , Critical Care
ABSTRACT
BACKGROUND: Information regarding the protection conferred by vaccination and previous infection against infection with the B.1.1.529 (omicron) variant of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is limited. METHODS: We evaluated the protection conferred by mRNA vaccines and previous infection against infection with the omicron variant in two high-risk populations: residents and staff in the California state prison system. We used a retrospective cohort design to analyze the risk of infection during the omicron wave using data collected from December 24, 2021, through April 14, 2022. Weighted Cox models were used to compare the effectiveness (measured as 1 minus the hazard ratio) of vaccination and previous infection across combinations of vaccination history (stratified according to the number of mRNA doses received) and infection history (none or infection before or during the period of B.1.617.2 [delta]-variant predominance). A secondary analysis used a rolling matched-cohort design to evaluate the effectiveness of three vaccine doses as compared with two doses. RESULTS: Among 59,794 residents and 16,572 staff, the estimated effectiveness of previous infection against omicron infection among unvaccinated persons who had been infected before or during the period of delta predominance ranged from 16.3% (95% confidence interval [CI], 8.1 to 23.7) to 48.9% (95% CI, 41.6 to 55.3). Depending on previous infection status, the estimated effectiveness of vaccination (relative to being unvaccinated and without previous documented infection) ranged from 18.6% (95% CI, 7.7 to 28.1) to 83.2% (95% CI, 77.7 to 87.4) with two vaccine doses and from 40.9% (95% CI, 31.9 to 48.7) to 87.9% (95% CI, 76.0 to 93.9) with three vaccine doses. 
Incremental effectiveness estimates of a third (booster) dose (relative to two doses) ranged from 25.0% (95% CI, 16.6 to 32.5) to 57.9% (95% CI, 48.4 to 65.7) among persons who either had not had previous documented infection or had been infected before the period of delta predominance. CONCLUSIONS: Our findings in two high-risk populations suggest that mRNA vaccination and previous infection were effective against omicron infection, with lower estimates among those infected before the period of delta predominance. Three vaccine doses offered significantly more protection than two doses, including among previously infected persons.
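Effectiveness in this study is defined as 1 minus the hazard ratio, so converting between hazard ratios and effectiveness percentages is mechanical. A minimal sketch (the hazard ratios below are back-calculated from the abstract's two-dose estimate of 83.2% [95% CI, 77.7 to 87.4], not study outputs):

```python
# Effectiveness, as defined in the study, is 1 minus the hazard ratio.
# Hazard ratios below are back-calculated from the abstract's reported
# two-dose effectiveness, for illustration only.

def effectiveness_pct(hazard_ratio):
    """Convert a hazard ratio to an effectiveness percentage."""
    return (1.0 - hazard_ratio) * 100.0

hr, hr_lo, hr_hi = 0.168, 0.126, 0.223
print(round(effectiveness_pct(hr), 1))     # 83.2 (point estimate)
print(round(effectiveness_pct(hr_hi), 1))  # 77.7 (lower effectiveness bound)
print(round(effectiveness_pct(hr_lo), 1))  # 87.4 (upper effectiveness bound)
```

Note that the upper hazard-ratio limit maps to the lower effectiveness bound, which is why confidence limits swap order under this transformation.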
Subject(s)
COVID-19 Vaccines , COVID-19 , Prisons , Vaccination , Humans , COVID-19/epidemiology , COVID-19/prevention & control , Prisons/statistics & numerical data , Retrospective Studies , SARS-CoV-2 , COVID-19 Vaccines/administration & dosage , COVID-19 Vaccines/therapeutic use , California/epidemiology , Prisoners/statistics & numerical data , Police/statistics & numerical data , Vaccine Efficacy/statistics & numerical data , Reinfection/epidemiology , Reinfection/prevention & control , Immunization, Secondary/statistics & numerical data
ABSTRACT
BACKGROUND: Elevated tuberculosis (TB) incidence rates have recently been reported for racial/ethnic minority populations in the United States. Tracking such disparities is important for assessing progress toward national health equity goals and implementing change. OBJECTIVE: To quantify trends in racial/ethnic disparities in TB incidence among U.S.-born persons. DESIGN: Time-series analysis of national TB registry data for 2011 to 2021. SETTING: United States. PARTICIPANTS: U.S.-born persons stratified by race/ethnicity. MEASUREMENTS: TB incidence rates, incidence rate differences, and incidence rate ratios compared with non-Hispanic White persons; excess TB cases (calculated from incidence rate differences); and the index of disparity. Analyses were stratified by sex and by attribution of TB disease to recent transmission and were adjusted for age, year, and state of residence. RESULTS: In analyses of TB incidence rates for each racial/ethnic population compared with non-Hispanic White persons, incidence rate ratios were as high as 14.2 (95% CI, 13.0 to 15.5) among American Indian or Alaska Native (AI/AN) females. Relative disparities were greater for females, younger persons, and TB attributed to recent transmission. Absolute disparities were greater for males. Excess TB cases in 2011 to 2021 represented 69% (CI, 66% to 71%) and 62% (CI, 60% to 64%) of total cases for females and males, respectively. No evidence was found to indicate that incidence rate ratios decreased over time, and most relative disparity measures showed small, statistically nonsignificant increases. LIMITATION: Analyses assumed complete TB case diagnosis and self-report of race/ethnicity and were not adjusted for medical comorbidities or social determinants of health. CONCLUSION: There are persistent disparities in TB incidence by race/ethnicity. Relative disparities were greater for AI/AN persons, females, and younger persons, and absolute disparities were greater for males. 
Eliminating these disparities could reduce overall TB incidence by more than 60% among the U.S.-born population. PRIMARY FUNDING SOURCE: Centers for Disease Control and Prevention.
Subject(s)
Ethnicity , Tuberculosis , United States/epidemiology , Humans , Incidence , Routinely Collected Health Data , Minority Groups , Population Surveillance , Tuberculosis/epidemiology , Tuberculosis/prevention & control
ABSTRACT
BACKGROUND: Since common diagnostic tests for gonorrhea do not provide information about susceptibility to antibiotics, treatment of gonorrhea remains empiric. Antibiotics used for empiric therapy are usually changed once resistance prevalence exceeds a certain threshold (e.g., 5%). A low switch threshold is intended to increase the probability that an infection is successfully treated with the first-line antibiotic, but it could also increase the pace at which recommendations are switched to newer antibiotics. Little is known about the impact of changing the switch threshold on the incidence of gonorrhea, the rate of treatment failure, and the overall cost and quality-adjusted life-years (QALYs) associated with gonorrhea. METHODS AND FINDINGS: We developed a transmission model of gonococcal infection with multiple resistant strains to project gonorrhea-associated costs and loss in QALYs under different switch thresholds among men who have sex with men (MSM) in the United States. We accounted for the costs and disutilities associated with symptoms, diagnosis, treatment, and sequelae, and combined costs and QALYs in a measure of net health benefit (NHB). Our results suggest that under a scenario where 3 antibiotics are available over the next 50 years (2 suitable for the first-line therapy of gonorrhea and 1 suitable only for the retreatment of resistant infections), changing the switch threshold between 1% and 10% does not meaningfully impact the annual number of gonorrhea cases, total costs, or total QALY losses associated with gonorrhea. However, if a new antibiotic is to become available in the future, choosing a lower switch threshold could improve the population NHB. If, in addition, drug-susceptibility testing (DST) is available to inform retreatment regimens after unsuccessful first-line therapy, setting the switch threshold at 1% to 2% is expected to maximize the population NHB.
A limitation of our study is that our analysis focuses only on the MSM population and does not consider the influence of interventions such as vaccines or the routine use of rapid drug-susceptibility tests to inform first-line therapy. CONCLUSIONS: Changing the switch threshold for first-line antibiotics may not substantially change the health and financial outcomes associated with gonorrhea. However, the switch threshold could be reduced when newer antibiotics are expected to become available soon, or when, in addition to future novel antibiotics, DST is also available to inform retreatment regimens.
Subject(s)
Anti-Bacterial Agents , Cost-Benefit Analysis , Gonorrhea , Homosexuality, Male , Quality-Adjusted Life Years , Humans , Gonorrhea/drug therapy , Gonorrhea/epidemiology , Gonorrhea/economics , Gonorrhea/diagnosis , Male , Anti-Bacterial Agents/therapeutic use , Anti-Bacterial Agents/economics , Prevalence , United States/epidemiology , Neisseria gonorrhoeae/drug effects , Drug Resistance, Bacterial , Cost-Effectiveness Analysis
ABSTRACT
Rapid point-of-care tests that diagnose gonococcal infections and identify susceptibility to antibiotics enable individualized treatment. This could improve patient outcomes and slow the emergence and spread of antibiotic resistance. However, little is known about the long-term impact of such diagnostics on the burden of gonorrhea and the effective life span of antibiotics. We used a mathematical model of gonorrhea transmission among men who have sex with men in the United States to project the annual rate of reported gonorrhea cases and the effective life span of ceftriaxone, the recommended antibiotic for first-line treatment of gonorrhea, as well as 2 previously recommended antibiotics, ciprofloxacin and tetracycline, when a rapid drug susceptibility test that estimates susceptibility to ciprofloxacin and tetracycline is available. The use of a rapid drug susceptibility test with ≥50% sensitivity and ≥95% specificity, defined in terms of correct ascertainment of drug susceptibility and nonsusceptibility status, could increase the combined effective life span of ciprofloxacin, tetracycline, and ceftriaxone by at least 2 years over 25 years of simulation. If test specificity is imperfect, however, the increase in the effective life span of antibiotics is accompanied by an increase in the rate of reported gonorrhea cases even under perfect sensitivity.
Subject(s)
Gonorrhea , Sexual and Gender Minorities , Male , Humans , United States/epidemiology , Anti-Bacterial Agents/pharmacology , Anti-Bacterial Agents/therapeutic use , Gonorrhea/drug therapy , Gonorrhea/epidemiology , Ceftriaxone/therapeutic use , Ceftriaxone/pharmacology , Homosexuality, Male , Longevity , Neisseria gonorrhoeae , Microbial Sensitivity Tests , Ciprofloxacin/pharmacology , Ciprofloxacin/therapeutic use , Tetracycline/pharmacology , Tetracycline/therapeutic use , Drug Resistance, Bacterial
ABSTRACT
About 80% of persons with chronic hepatitis B virus (HBV) infection in the United States are non-US-born. Despite improvements in infant hepatitis B vaccination globally since 2000, work remains to attain the World Health Organization's (WHO) global 2030 goal of 90% vaccination. We explore the impacts on the United States of global progress in hepatitis B vaccination since 2000 and of achieving WHO hepatitis B vaccination goals. We simulated immigrants with HBV infection arriving to the United States from 2000 to 2070 using models of the 10 countries from which the largest numbers of individuals with HBV infection were born. We estimated costs in the United States among these cohorts using a disease simulation model. We simulated three scenarios: no progress in infant vaccination for hepatitis B since 2000 (baseline), current (2020) progress, and achievement of the WHO 2030 goals for hepatitis B vaccination. We estimate that current hepatitis B vaccination progress since the 2000 baseline in these 10 countries will lead to 468,686 fewer HBV infections, avoid 35,582 hepatitis B-related deaths, and save $4.2 billion in the United States through 2070. Achieving the WHO 2030 90% hepatitis B infant vaccination targets could lead to an additional 16,762 fewer HBV infections and 989 fewer hepatitis B-related deaths, and save an additional $143 million through 2070. Global hepatitis B vaccination since 2000 has reduced the prevalence of HBV infection in the United States. Achieving the WHO 2030 infant vaccination goals globally could lead to over one hundred million dollars in additional savings.
ABSTRACT
BACKGROUND: In the United States, over 80% of tuberculosis (TB) disease cases are estimated to result from reactivation of latent TB infection (LTBI) acquired more than 2 years previously ("reactivation TB"). We estimated reactivation TB rates for the US population with LTBI, overall, by age, sex, race-ethnicity, and US-born status, and for selected comorbidities (diabetes, end-stage renal disease, and HIV). METHODS: We collated nationally representative data for 2011-2012. Reactivation TB incidence was based on TB cases reported to the National TB Surveillance System that were attributed to LTBI reactivation. Person-years at risk of reactivation TB were calculated using interferon-gamma release assay (IGRA) positivity from the National Health and Nutrition Examination Survey, published values for IGRA sensitivity and specificity, and population estimates from the American Community Survey. RESULTS: For persons aged ≥6 years with LTBI, the overall reactivation rate was estimated as 0.072 (95% uncertainty interval: 0.047, 0.12) per 100 person-years. Estimated reactivation rates declined with age. Compared to the overall population, estimated reactivation rates were higher for persons with diabetes (adjusted rate ratio [aRR] = 1.6 [1.5, 1.7]), end-stage renal disease (aRR = 9.8 [5.4, 19]), and HIV (aRR = 12 [10, 13]). CONCLUSIONS: In our study, individuals with LTBI faced small but non-negligible risks of reactivation TB. Risks were elevated for individuals with medical comorbidities that weaken immune function.
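The reactivation rate is, in essence, attributed cases divided by person-years at risk, with the LTBI denominator corrected for imperfect IGRA sensitivity and specificity. A hedged sketch of such a correction using the standard Rogan-Gladen estimator (all input numbers below are hypothetical, chosen only to land near the abstract's scale; the study's actual method may differ):

```python
# Sketch of the denominator correction (all numbers hypothetical):
# apparent IGRA positivity is converted to an estimated true LTBI
# prevalence with the Rogan-Gladen estimator, then used to form
# person-years at risk.

def true_prevalence(apparent, sensitivity, specificity):
    """Rogan-Gladen: true prevalence from apparent test positivity."""
    return (apparent + specificity - 1.0) / (sensitivity + specificity - 1.0)

def reactivation_rate(cases, population, apparent_prev, sens, spec):
    """Reactivation TB cases per 100 person-years among persons with LTBI."""
    person_years = population * true_prevalence(apparent_prev, sens, spec)
    return 100.0 * cases / person_years

# Hypothetical inputs: 7,000 attributed cases in one year, 280M persons
# aged >=6 years, 5% IGRA positivity, sensitivity 0.80, specificity 0.98.
print(round(reactivation_rate(7_000, 280_000_000, 0.05, 0.80, 0.98), 3))  # 0.065
```

With these invented inputs the sketch lands near the abstract's estimate of 0.072 per 100 person-years, which is the point of the exercise: the rate is sensitive to the sensitivity/specificity correction applied to the denominator.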
Subject(s)
Diabetes Mellitus , HIV Infections , Kidney Failure, Chronic , Mycobacterium tuberculosis , Tuberculosis , Humans , United States/epidemiology , Nutrition Surveys , Tuberculosis/epidemiology , Tuberculosis/diagnosis , Kidney Failure, Chronic/epidemiology , HIV Infections/epidemiology
ABSTRACT
The US COVID-19 Trends and Impact Survey (CTIS) is a large, cross-sectional, internet-based survey that has operated continuously since April 6, 2020. By inviting a random sample of Facebook active users each day, CTIS collects information about COVID-19 symptoms, risks, mitigating behaviors, mental health, testing, vaccination, and other key priorities. The large scale of the survey (over 20 million responses in its first year of operation) allows tracking of trends over short timescales and comparisons at fine demographic and geographic detail. The survey has been repeatedly revised to respond to emerging public health priorities. In this paper, we describe the survey methods and content and give examples of CTIS results that illuminate key patterns and trends and help answer high-priority policy questions relevant to the COVID-19 epidemic and response. These results demonstrate how large online surveys can provide continuous, real-time indicators of important outcomes that are not subject to public health reporting delays and backlogs. The CTIS offers high value as a supplement to official reporting data by supplying essential information about behaviors, attitudes toward policy and preventive measures, economic impacts, and other topics not reported in public health surveillance systems.
Subject(s)
COVID-19 Testing/statistics & numerical data , COVID-19/epidemiology , Health Status Indicators , Adult , Aged , COVID-19/diagnosis , COVID-19/prevention & control , COVID-19/transmission , COVID-19 Vaccines , Cross-Sectional Studies , Epidemiologic Methods , Female , Humans , Male , Middle Aged , Patient Acceptance of Health Care/statistics & numerical data , Social Media/statistics & numerical data , United States/epidemiology , Young Adult
ABSTRACT
BACKGROUND: A comprehensive evaluation of the quality-adjusted life-years (QALYs) lost attributable to chlamydia, gonorrhea, and trichomoniasis in the United States is lacking. METHODS: We adapted a previous probability-tree model to estimate the average number of lifetime QALYs lost due to genital chlamydia, gonorrhea, and trichomoniasis, per incident infection and at the population level, by sex and age group. We conducted multivariate sensitivity analyses to address uncertainty around key parameter values. RESULTS: The estimated total discounted lifetime QALYs lost for men and women, respectively, due to infections acquired in 2018, were 1541 (95% uncertainty interval [UI], 186-6358) and 111 872 (95% UI, 29 777-267 404) for chlamydia, 989 (95% UI, 127-3720) and 12 112 (95% UI, 2 410-33 895) for gonorrhea, and 386 (95% UI, 30-1851) and 4576 (95% UI, 13-30 355) for trichomoniasis. Total QALYs lost were highest among women aged 15-24 years with chlamydia. Estimates of QALYs lost were highly sensitive to the disutilities (health losses) of infections and sequelae, and to the duration of infections and chronic sequelae for chlamydia and gonorrhea in women. CONCLUSIONS: These 3 sexually transmitted infections cause substantial health losses in the United States, particularly gonorrhea and chlamydia among women. The estimates of lifetime QALYs lost per infection help to prioritize prevention policies and inform cost-effectiveness analyses of sexually transmitted infection interventions.
Subject(s)
Chlamydia Infections , Chlamydia , Gonorrhea , Sexually Transmitted Diseases , Trichomonas Infections , Male , Humans , Female , United States/epidemiology , Gonorrhea/complications , Quality-Adjusted Life Years , Chlamydia Infections/complications , Sexually Transmitted Diseases/complications , Trichomonas Infections/epidemiology , Trichomonas Infections/complications
ABSTRACT
BACKGROUND: In 2019, about 58 million individuals were chronically infected with hepatitis C virus. Some experts have proposed challenge trials for hepatitis C virus vaccine development. METHODS: We modeled incremental infections averted through a challenge approach, under varying assumptions regarding trial duration, number of candidates, and vaccine uptake. We computed the benefit-risk ratio of incremental benefits to risks for challenge versus traditional approaches. We also benchmarked against the monetary costs of achieving incremental benefits through treatment. RESULTS: Our base case assumes 3 vaccine candidates, each with an 11% chance of success, corresponding to a 30% probability of successfully developing a vaccine. Given this probability, and assuming a 5-year difference in duration between challenge and traditional trials, a challenge approach would avert an expected 185 000 incremental infections with 20% steady-state uptake compared to a traditional approach and 832 000 with 90% uptake (quality-adjusted life-year benefit-risk ratios, 72 000 and 323 000, respectively). It would cost at least $92 million and $416 million, respectively, to obtain equivalent benefits through treatment. Benefit-risk ratios vary considerably across scenarios, depending on input assumptions. CONCLUSIONS: The benefits of a challenge approach increase with more vaccine candidates, faster challenge trials, and greater uptake.
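The abstract's 30% overall figure follows directly from three independent candidates each having an 11% chance of success; it is the probability that at least one succeeds:

```python
# Probability that at least one of n independent vaccine candidates
# succeeds, given a per-candidate success probability.

def p_any_success(p_each, n):
    return 1.0 - (1.0 - p_each) ** n

print(round(p_any_success(0.11, 3), 3))  # 0.295, i.e. ~30%
```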
Subject(s)
Hepatitis C , Vaccines , Humans , Cost-Benefit Analysis , Quality-Adjusted Life Years , Hepatitis C/prevention & control , Risk Assessment , Vaccines/adverse effects , Vaccine Development
ABSTRACT
BACKGROUND: The purpose of this study was to estimate the health impact of syphilis in the United States in terms of the number of quality-adjusted life years (QALYs) lost attributable to infections in 2018. METHODS: We developed a Markov model that simulates the natural history and management of syphilis. The model was parameterized by sex and sexual orientation (women who have sex with men, men who have sex with women [MSW], and men who have sex with men [MSM]), and by age at primary infection. We developed a separate decision tree model to quantify health losses due to congenital syphilis. We estimated the average lifetime number of QALYs lost per infection, and the total expected lifetime number of QALYs lost due to syphilis acquired in 2018. RESULTS: We estimated the average number of discounted lifetime QALYs lost per infection as 0.09 (95% uncertainty interval [UI] .03-.19). The total expected number of QALYs lost due to syphilis acquired in 2018 was 13 349 (5071-31 360). Although per-case loss was the lowest among MSM (0.06), MSM accounted for 47.7% of the overall burden. For each case of congenital syphilis, we estimated 1.79 (1.43-2.16) and 0.06 (.01-.14) QALYs lost in the child and the mother, respectively. We projected 2332 (1871-28 250) and 79 (17-177) QALYs lost for children and mothers, respectively, due to congenital syphilis in 2018. CONCLUSIONS: Syphilis causes substantial health losses in adults and children. Quantifying these health losses in terms of QALYs can inform cost-effectiveness analyses and can facilitate comparisons of the burden of syphilis to that of other diseases.
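The per-case QALY losses in this and the preceding abstracts are discounted lifetime values. A minimal sketch of the standard present-value calculation (the annual loss, horizon, and 3% rate below are illustrative assumptions, not parameters from the syphilis model):

```python
# Illustrative present-value calculation for discounted QALYs lost.
# The 0.01 annual loss, 10-year horizon, and 3% discount rate are
# assumptions for illustration only.

def discounted_qalys(annual_loss, years, rate=0.03):
    """Sum a constant annual QALY loss, discounted from year 0."""
    return sum(annual_loss / (1.0 + rate) ** t for t in range(years))

print(round(discounted_qalys(0.01, 10), 4))  # 0.0879 (vs 0.1 undiscounted)
```

Discounting is why health losses that accrue far in the future, such as late sequelae, contribute less per case than losses concentrated near the time of infection.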
Subject(s)
Sexual and Gender Minorities , Syphilis, Congenital , Syphilis , Adult , Child , Humans , Male , Female , United States/epidemiology , Syphilis/epidemiology , Homosexuality, Male , Quality-Adjusted Life Years , Syphilis, Congenital/epidemiology
ABSTRACT
BACKGROUND: Although a substantial fraction of the US population was infected with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) during December 2021-February 2022, the subsequent evolution of population immunity reflects the competing influences of waning protection over time and acquisition or restoration of immunity through additional infections and vaccinations. METHODS: Using a Bayesian evidence synthesis model of reported coronavirus disease 2019 (COVID-19) data (diagnoses, hospitalizations), vaccinations, and waning patterns for vaccine- and infection-acquired immunity, we estimate population immunity against infection and severe disease from SARS-CoV-2 Omicron variants in the United States, by location (national, state, county) and week. RESULTS: By 9 November 2022, 97% (95%-99%) of the US population were estimated to have prior immunological exposure to SARS-CoV-2. Between 1 December 2021 and 9 November 2022, protection against a new Omicron infection rose from 22% (21%-23%) to 63% (51%-75%) nationally, and protection against an Omicron infection leading to severe disease increased from 61% (59%-64%) to 89% (83%-92%). Increasing first booster uptake to 55% in all states (current US coverage: 34%) and second booster uptake to 22% (current US coverage: 11%) would increase protection against infection by 4.5 percentage points (2.4-7.2) and protection against severe disease by 1.1 percentage points (1.0-1.5). CONCLUSIONS: Effective protection against SARS-CoV-2 infection and severe disease in November 2022 was substantially higher than in December 2021. Despite this high level of protection, a more transmissible or immune evading (sub)variant, changes in behavior, or ongoing waning of immunity could lead to a new SARS-CoV-2 wave.
Subject(s)
COVID-19 , SARS-CoV-2 , Humans , United States/epidemiology , COVID-19/epidemiology , Bayes Theorem , Adaptive Immunity
ABSTRACT
BACKGROUND: Both severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection and coronavirus disease 2019 (COVID-19) vaccination contribute to population-level immunity against SARS-CoV-2. This study estimated the immunological exposure and effective protection against future SARS-CoV-2 infection in each US state and county over 2020-2021 and how this changed with the introduction of the Omicron variant. METHODS: We used a Bayesian model to synthesize estimates of daily SARS-CoV-2 infections, vaccination data and estimates of the relative rates of vaccination conditional on infection status to estimate the fraction of the population with (1) immunological exposure to SARS-CoV-2 (ever infected with SARS-CoV-2 and/or received ≥1 doses of a COVID-19 vaccine), (2) effective protection against infection, and (3) effective protection against severe disease, for each US state and county from 1 January 2020 to 1 December 2021. RESULTS: The estimated percentage of the US population with a history of SARS-CoV-2 infection or vaccination as of 1 December 2021 was 88.2% (95% credible interval [CrI], 83.6%-93.5%). Accounting for waning and immune escape, effective protection against the Omicron variant on 1 December 2021 was 21.8% (95% CrI, 20.7%-23.4%) nationally and ranged between 14.4% (13.2%-15.8%; West Virginia) and 26.4% (25.3%-27.8%; Colorado). Effective protection against severe disease from Omicron was 61.2% (95% CrI, 59.1%-64.0%) nationally and ranged between 53.0% (47.3%-60.0%; Vermont) and 65.8% (64.9%-66.7%; Colorado). CONCLUSIONS: While more than four-fifths of the US population had prior immunological exposure to SARS-CoV-2 via vaccination or infection on 1 December 2021, only a fifth of the population was estimated to have effective protection against infection with the immune-evading Omicron variant.
Subject(s)
COVID-19 , SARS-CoV-2 , Humans , COVID-19/epidemiology , COVID-19/prevention & control , Bayes Theorem , COVID-19 Vaccines , Vaccination
ABSTRACT
BACKGROUND: In the United States, the tuberculosis (TB) disease burden and associated factors vary substantially across states. While public health agencies must choose how to deploy resources to combat TB and latent tuberculosis infection (LTBI), state-level modeling analyses to inform policy decisions have not been widely available. METHODS: We developed a mathematical model of TB epidemiology linked to a web-based user interface, Tabby2. The model is calibrated to epidemiological and demographic data for the United States, each U.S. state, and the District of Columbia. Users can simulate pre-defined scenarios describing approaches to TB prevention and treatment or create their own intervention scenarios. Location-specific results for epidemiological outcomes, service utilization, costs, and cost-effectiveness are reported as downloadable tables and customizable visualizations. To demonstrate the tool's functionality, we projected trends in TB outcomes without additional intervention for all 50 states and the District of Columbia. We further undertook a case study of expanded treatment of LTBI among non-U.S.-born individuals in Massachusetts, covering 10% of the target population annually over 2025-2029. RESULTS: Between 2022 and 2050, TB incidence rates were projected to decline in all states and the District of Columbia. Incidence projections for the year 2050 ranged from 0.03 to 3.8 cases (median 0.95) per 100,000 persons. By 2050, we project that the majority (>50%) of TB cases will be diagnosed among non-U.S.-born persons in 46 states and the District of Columbia; per-state percentages range from 17.4% to 96.7% (median 83.0%). In Massachusetts, expanded testing and treatment for LTBI in this population was projected to reduce cumulative TB cases between 2025 and 2050 by 6.3% and TB-related deaths by 8.4%, relative to base case projections.
This intervention had an incremental cost-effectiveness ratio of $180,951 (2020 USD) per quality-adjusted life year gained from the societal perspective. CONCLUSIONS: Tabby2 allows users to estimate the costs, impact, and cost-effectiveness of different TB prevention approaches for multiple geographic areas in the United States. Expanded testing and treatment for LTBI could accelerate declines in TB incidence in the United States, as demonstrated in the Massachusetts case study.
Subject(s)
Latent Tuberculosis , Tuberculosis , United States/epidemiology , Humans , Pregnancy , Female , Tuberculosis/epidemiology , Tuberculosis/prevention & control , Antibiotic Prophylaxis , Cost of Illness , Parturition
ABSTRACT
BACKGROUND: Chronic hepatitis B (CHB) carries an increased risk of death from cirrhosis and hepatocellular carcinoma (HCC). The American Association for the Study of Liver Diseases recommends that patients with CHB receive monitoring of disease activity, including ALT, hepatitis B virus (HBV) DNA, and hepatitis B e-antigen (HBeAg), as well as liver imaging for patients at increased risk for HCC. HBV antiviral therapy is recommended for patients with active hepatitis and cirrhosis. METHODS: Monitoring and treatment of adults with new CHB diagnoses were analyzed using Optum Clinformatics Data Mart Database claims data from January 1, 2016, to December 31, 2019. RESULTS: Among 5978 patients with a new CHB diagnosis, only 56% with cirrhosis and 50% without cirrhosis had claims for ≥1 ALT and either HBV DNA or HBeAg test, and among patients recommended for HCC surveillance, 82% with cirrhosis and 57% without cirrhosis had claims for ≥1 liver imaging within 12 months of diagnosis. Although antiviral treatment is recommended for patients with cirrhosis, only 29% of patients with cirrhosis had ≥1 claim for HBV antiviral therapy within 12 months of CHB diagnosis. Multivariable analysis showed that patients who were male, Asian, privately insured, or had cirrhosis were more likely (P<0.05) to receive ALT and either HBV DNA or HBeAg tests and HBV antiviral therapy within 12 months of diagnosis. CONCLUSION: Many patients diagnosed with CHB are not receiving the recommended clinical assessment and treatment. A comprehensive initiative is needed to address the patient-, provider-, and system-related barriers to improve the clinical management of CHB.
Subject(s)
Carcinoma, Hepatocellular , Hepatitis B, Chronic , Liver Neoplasms , Adult , Humans , Male , United States , Female , Carcinoma, Hepatocellular/drug therapy , Carcinoma, Hepatocellular/etiology , Carcinoma, Hepatocellular/pathology , Hepatitis B, Chronic/complications , Hepatitis B, Chronic/drug therapy , Hepatitis B, Chronic/diagnosis , Liver Neoplasms/drug therapy , Liver Neoplasms/etiology , Liver Neoplasms/pathology , Hepatitis B e Antigens/therapeutic use , DNA, Viral/therapeutic use , Antiviral Agents/therapeutic use , Liver Cirrhosis/diagnosis , Liver Cirrhosis/drug therapy , Liver Cirrhosis/epidemiology
ABSTRACT
BACKGROUND: Chlamydia remains a significant public health problem that contributes to adverse reproductive health outcomes. In the United States, sexually active women 24 years and younger are recommended to receive annual screening for chlamydia. In this study, we evaluated the impact of estimated current levels of screening and partner notification (PN), and the impact of screening based on guidelines, on chlamydia-associated sequelae, quality-adjusted life-years (QALYs) lost, and costs. METHODS: We conducted a cost-effectiveness analysis of chlamydia screening, using a published calibrated pair formation transmission model that estimated trends in chlamydia screening coverage in the United States from 2000 to 2015 consistent with epidemiological data. We used probability trees to translate chlamydial infection outcomes into estimated numbers of chlamydia-associated sequelae, QALYs lost, and health care services costs (in 2020 US dollars). We evaluated the costs and population health benefits of screening and PN in the United States for 2000 to 2015, as compared with no screening and no PN. We also estimated the additional benefits that could be achieved by increasing screening coverage to the levels indicated by the policy recommendations for 2016 to 2019, compared with the screening coverage achieved by 2015. RESULTS: Screening and PN from 2000 to 2015 were estimated to have averted 1.3 million (95% uncertainty interval [UI] 490,000-2.3 million) cases of pelvic inflammatory disease, 430,000 (95% UI, 160,000-760,000) cases of chronic pelvic pain, 300,000 (95% UI, 104,000-570,000) cases of tubal factor infertility, and 140,000 (95% UI, 47,000-260,000) cases of ectopic pregnancy in women. We estimated that chlamydia screening and PN cost $9700 per QALY gained compared with no screening and no PN.
We estimated the full realization of chlamydia screening guidelines for 2016 to 2019 to cost $30,000 per QALY gained, compared with a scenario in which chlamydia screening coverage was maintained at 2015 levels. DISCUSSION: Chlamydia screening and PN as implemented in the United States from 2000 through 2015 has substantially improved population health and provided good value for money when considering associated health care services costs. Further population health gains are attainable by increasing screening further, at reasonable cost per QALY gained.
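The cost-per-QALY figures above are incremental cost-effectiveness ratios (ICERs). A minimal sketch of the arithmetic, with hypothetical cost and QALY totals chosen only so the ratio comes out to the reported $9700 per QALY (the study's actual totals are not given in the abstract):

```python
def icer(cost_intervention, cost_comparator, qaly_intervention, qaly_comparator):
    """Incremental cost-effectiveness ratio: additional dollars spent
    per additional QALY gained, relative to the comparator strategy."""
    delta_cost = cost_intervention - cost_comparator
    delta_qaly = qaly_intervention - qaly_comparator
    return delta_cost / delta_qaly

# Hypothetical illustration: screening + PN costs $97M more than no
# screening and yields 10,000 additional QALYs -> $9,700 per QALY gained.
print(icer(197_000_000, 100_000_000, 60_000, 50_000))  # 9700.0
```

A strategy is conventionally judged "good value" when its ICER falls below a willingness-to-pay threshold (often $50,000-$150,000 per QALY in US analyses), which is why both the $9700 and $30,000 estimates are described as reasonable value for money.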
Subject(s)
Chlamydia Infections , Chlamydia , Pregnancy , Humans , Female , United States/epidemiology , Cost-Benefit Analysis , Contact Tracing , Chlamydia Infections/diagnosis , Chlamydia Infections/epidemiology , Chlamydia Infections/prevention & control , Mass Screening , Quality-Adjusted Life Years , Health Care Costs
ABSTRACT
In the absence of point-of-care gonorrhea diagnostics that report antibiotic susceptibility, gonorrhea treatment is empiric and determined by standardized guidelines. These guidelines are informed by estimates of resistance prevalence from national surveillance systems. We examined whether guidelines informed by local, rather than national, surveillance data could reduce the incidence of gonorrhea and increase the effective lifespan of antibiotics used in treatment guidelines. We used a transmission dynamic model of gonorrhea among men who have sex with men (MSM) in 16 U.S. metropolitan areas to determine whether spatially adaptive treatment guidelines based on local estimates of resistance prevalence can extend the effective lifespan of hypothetical antibiotics. The rate of gonorrhea cases in these metropolitan areas was 5,548 cases per 100,000 MSM in 2017. Under the current strategy of updating the treatment guideline when the prevalence of resistance exceeds 5%, we showed that spatially adaptive guidelines could reduce the annual rate of gonorrhea cases by 200 cases (95% uncertainty interval: 169, 232) per 100,000 MSM population while extending the use of a first-line antibiotic by 0.75 (0.55, 0.95) years. One potential strategy to reduce the incidence of gonorrhea while extending the effective lifespan of antibiotics is to inform treatment guidelines based on local, rather than national, resistance prevalence.
Subject(s)
Gonorrhea , Sexual and Gender Minorities , Anti-Bacterial Agents/therapeutic use , Gonorrhea/drug therapy , Gonorrhea/epidemiology , Gonorrhea/prevention & control , Homosexuality, Male , Humans , Incidence , Longevity , Male , Neisseria gonorrhoeae
ABSTRACT
Reported COVID-19 cases and deaths provide a delayed and incomplete picture of SARS-CoV-2 infections in the United States (US). Accurate estimates of both the timing and magnitude of infections are needed to characterize viral transmission dynamics and better understand COVID-19 disease burden. We estimated time trends in SARS-CoV-2 transmission and other COVID-19 outcomes for every county in the US, from the first reported COVID-19 case on January 13, 2020 through January 1, 2021. To do so, we employed a Bayesian modeling approach that explicitly accounts for reporting delays and variation in case ascertainment, and generates daily estimates of incident SARS-CoV-2 infections on the basis of reported COVID-19 cases and deaths. The model is freely available as the covidestim R package. Nationally, we estimated there had been 49 million symptomatic COVID-19 cases and 404,214 COVID-19 deaths by the end of 2020, and that 28% of the US population had been infected. There was county-level variability in the timing and magnitude of incidence, with local epidemiological trends differing substantially from state or regional averages, leading to large differences in the estimated proportion of the population infected by the end of 2020. Our estimates of true COVID-19-related deaths are consistent with independent estimates of excess mortality, and our estimated trends in cumulative incidence of SARS-CoV-2 infection are consistent with trends in seroprevalence estimates from available antibody testing studies. Reconstructing the underlying incidence of SARS-CoV-2 infections across US counties allows for a more granular understanding of disease trends and the potential impact of epidemiological drivers.
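The core of the back-calculation problem is the forward observation model: reported cases are incident infections, thinned by the ascertainment rate and smeared forward by a report-delay distribution. The Bayesian model inverts this; the sketch below shows only the forward direction, with hypothetical inputs (the delay distribution and ascertainment rate are not the covidestim estimates).

```python
def expected_reported_cases(infections, delay_pmf, ascertainment):
    """Forward model: convolve daily incident infections with a
    report-delay distribution, then scale by the case-ascertainment
    rate, giving expected daily reported case counts."""
    n = len(infections)
    expected = [0.0] * n
    for t, incident in enumerate(infections):
        for d, prob in enumerate(delay_pmf):
            if t + d < n:  # reports falling after the window are dropped
                expected[t + d] += incident * prob * ascertainment
    return expected

# Hypothetical: 100 infections on day 0, none after; reports arrive
# 0-2 days later; 50% of infections are ever reported as cases.
cases = expected_reported_cases([100, 0, 0, 0], [0.25, 0.5, 0.25], 0.5)
# cases == [12.5, 25.0, 12.5, 0.0]
```

Both the delay and the ascertainment rate shift and shrink the reported series relative to true infections, which is why naive case counts understate and lag the epidemic, as the abstract notes.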
Subject(s)
COVID-19 , Epidemics , Bayes Theorem , COVID-19/epidemiology , Humans , SARS-CoV-2 , Seroepidemiologic Studies , United States/epidemiology
ABSTRACT
Low rates of vaccination, emergence of novel variants of SARS-CoV-2, and increasing transmission related to seasonal changes and relaxation of mitigation measures leave many US communities at risk for surges of COVID-19 that might strain hospital capacity, as in previous waves. The trajectories of COVID-19 hospitalizations differ across communities depending on their age distributions, vaccination coverage, cumulative incidence, and adoption of risk-mitigating behaviors. Yet, existing predictive models of COVID-19 hospitalizations are almost exclusively focused on national- and state-level predictions. This leaves local policymakers in urgent need of tools that can provide early warnings about the possibility that COVID-19 hospitalizations may rise to levels that exceed local capacity. In this work, we develop a framework to generate simple classification rules to predict whether COVID-19 hospitalizations will exceed the local hospitalization capacity within a 4- or 8-week period if no additional mitigating strategies are implemented during this time. This framework uses a simulation model of SARS-CoV-2 transmission and COVID-19 hospitalizations in the US to train classification decision trees that are robust to changes in the data-generating process and future uncertainties. These generated classification rules use real-time data related to hospital occupancy and new hospitalizations associated with COVID-19 and, when available, genomic surveillance of SARS-CoV-2. We show that these classification rules present reasonable accuracy, sensitivity, and specificity (all ≥ 80%) in predicting local surges in hospitalizations under numerous simulated scenarios, which capture substantial uncertainties over the future trajectories of COVID-19. Our proposed classification rules are simple, visual, and straightforward to use in practice by local decision makers without the need to perform numerical computations.
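A shallow decision tree of the kind described reduces, once trained, to a handful of nested threshold checks that a decision maker can apply by hand. The sketch below is in that spirit only: the inputs and every threshold are invented for illustration and are not the splits learned by the paper's trained trees.

```python
def capacity_surge_alarm(occupancy_frac, weekly_admits_per_100k, admits_growth):
    """Hypothetical classification rule (shallow decision tree style):
    flag a possible capacity-exceeding surge within the 4- to 8-week
    horizon. All thresholds are illustrative, not the study's."""
    if occupancy_frac >= 0.85:
        return True   # hospitals already near capacity
    if weekly_admits_per_100k >= 10 and admits_growth >= 1.2:
        return True   # moderate admissions level, rising quickly
    return False      # no early-warning signal
```

The appeal of this form is exactly what the abstract claims: each leaf is reached by two or three comparisons against routinely reported quantities, so no simulation or numerical computation is needed at decision time.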
Subject(s)
COVID-19 , Humans , SARS-CoV-2 , Hospitalization , Hospitals , Age Distribution
ABSTRACT
BACKGROUND: Centers for Disease Control and Prevention (CDC) defines low, medium, and high "COVID-19 community levels" to guide interventions, but associated mortality rates have not been reported. OBJECTIVE: To evaluate the diagnostic performance of CDC COVID-19 community level metrics as predictors of elevated community mortality risk. DESIGN: Time series analysis over the period of 30 May 2021 through 4 June 2022. SETTING: U.S. states and counties. PARTICIPANTS: U.S. population. MEASUREMENTS: CDC "COVID-19 community level" metrics based on hospital admissions, bed occupancy, and reported cases; reported COVID-19 deaths; and sensitivity, specificity, and predictive values for CDC and alternative metrics. RESULTS: Mean and median weekly mortality rates per 100 000 population after onset of high COVID-19 community level 3 weeks prior were, respectively, 2.6 and 2.4 (interquartile range [IQR], 1.7 to 3.1) across 90 high episodes in states and 4.3 and 2.1 (IQR, 0 to 5.4) across 7987 high episodes in counties. In 85 of 90 (94%) episodes in states and 4801 of 7987 (60%) episodes in counties, lagged weekly mortality after onset exceeded 0.9 per 100 000 population, and in 57 of 90 (63%) episodes in states and 4018 of 7987 (50%) episodes in counties, lagged weekly mortality after onset exceeded 2.1 per 100 000, which is equivalent to approximately 1000 daily deaths in the national population. Alternative metrics based on lower hospital admissions or case thresholds were associated with lower mortality and had higher sensitivity and negative predictive value for elevated mortality, but the CDC metrics had higher specificity and positive predictive value. Ratios between cases, hospitalizations, and deaths have varied substantially over time. LIMITATIONS: Aggregate mortality does not account for nonfatal outcomes or disparities. 
Continuing evolution of viral variants, immunity, clinical interventions, and public health mitigation strategies complicate prediction for future waves. CONCLUSION: Designing metrics for public health decision making involves tradeoffs between identifying early signals for action and avoiding undue restrictions when risks are modest. Explicit frameworks for evaluating surveillance metrics can improve transparency and decision support. PRIMARY FUNDING SOURCE: Council of State and Territorial Epidemiologists.
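The sensitivity, specificity, and predictive values reported above come from cross-tabulating whether a metric flagged "high" against whether lagged mortality was elevated. The standard definitions can be computed from the four cells of that 2x2 table; the counts below are hypothetical, chosen only to illustrate the trade-off the abstract describes (a stricter metric trades sensitivity for specificity and PPV).

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Diagnostic performance from a 2x2 table of weekly observations:
    tp = metric high & mortality elevated, fp = high & not elevated,
    fn = not high & elevated, tn = not high & not elevated."""
    return {
        "sensitivity": tp / (tp + fn),  # elevated-mortality weeks flagged
        "specificity": tn / (tn + fp),  # quiet weeks correctly not flagged
        "ppv": tp / (tp + fp),          # flagged weeks that were elevated
        "npv": tn / (tn + fn),          # unflagged weeks that stayed low
    }

# Hypothetical counts for one candidate metric across county-weeks.
perf = diagnostic_performance(tp=80, fp=20, fn=40, tn=160)
```

Lowering the threshold moves county-weeks from the fn cell to the tp cell (raising sensitivity and NPV) but also from tn to fp (lowering specificity and PPV), which is the explicit false-negative versus false-positive trade-off the conclusion highlights.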