Results 1 - 20 of 115
1.
JAMA Netw Open; 5(9): e2230426, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-36098969

ABSTRACT

Importance: Quantitative assessment of disease progression in patients with nonalcoholic fatty liver disease (NAFLD) has not been systematically examined using competing liver-related and non-liver-related mortality. Objective: To estimate long-term outcomes in NAFLD, accounting for competing liver-related and non-liver-related mortality associated with the different fibrosis stages of NAFLD, using a simulated patient population. Design, Setting, and Participants: This decision analytical modeling study used individual-level state-transition simulation analysis and was conducted from September 1, 2017, to September 1, 2021. A publicly available interactive tool, dubbed the NAFLD Simulator, was developed that simulates the natural history of NAFLD by age and fibrosis stage at the time of (hypothetical) diagnosis defined by liver biopsy. Model health states were defined by fibrosis states F0 to F4, decompensated cirrhosis, hepatocellular carcinoma (HCC), and liver transplant. Simulated patients could experience nonalcoholic steatohepatitis resolution, and their fibrosis stage could progress or regress. Transition probabilities between states were estimated from the literature as well as through model calibration, and the model reproduced the outcomes of a large observational study. Exposure: Simulated natural history of NAFLD. Main Outcomes and Measures: Main outcomes were life expectancy; all-cause, liver-related, and non-liver-related mortality; and cumulative incidence of decompensated cirrhosis and/or HCC. Results: The model included 1 000 000 simulated patients with a mean (range) age of 49 (18-75) years at baseline, including 66% women. The life expectancy of patients aged 49 years was 25.3 (95% CI, 20.1-29.8) years for those with F0, 25.1 (95% CI, 20.1-29.4) years for those with F1, 23.6 (95% CI, 18.3-28.2) years for those with F2, 21.1 (95% CI, 15.6-26.3) years for those with F3, and 13.8 (95% CI, 10.3-17.6) years for those with F4 at the time of diagnosis. The estimated 10-year liver-related mortality was 0.1% (95% uncertainty interval [UI], <0.1%-0.2%) in F0, 0.2% (95% UI, 0.1%-0.4%) in F1, 1.0% (95% UI, 0.6%-1.7%) in F2, 4.0% (95% UI, 2.5%-5.9%) in F3, and 29.3% (95% UI, 21.8%-35.9%) in F4. The corresponding 10-year non-liver-related mortality was 1.8% (95% UI, 0.6%-5.0%) in F0, 2.4% (95% UI, 0.8%-6.3%) in F1, 5.2% (95% UI, 2.0%-11.9%) in F2, 9.7% (95% UI, 4.3%-18.1%) in F3, and 15.6% (95% UI, 10.1%-21.7%) in F4. Among patients aged 65 years, estimated 10-year non-liver-related mortality was higher than liver-related mortality in all fibrosis stages (eg, F2: 16.7% vs 0.8%; F3: 28.8% vs 3.0%; F4: 40.8% vs 21.9%). Conclusions and Relevance: This decision analytical modeling study simulated stage-specific long-term outcomes, including liver-related and non-liver-related mortality, in patients with NAFLD. Depending on age and fibrosis stage, non-liver-related mortality was higher than liver-related mortality in patients with NAFLD. By translating surrogate markers into clinical outcomes, the NAFLD Simulator could be used as an educational tool among patients and clinicians to increase awareness of the health consequences of NAFLD.
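
For illustration, a minimal Python sketch of the kind of individual-level state-transition (microsimulation) model described above, with fibrosis states and competing liver-related versus non-liver-related death. All transition probabilities are hypothetical placeholders, not the NAFLD Simulator's calibrated inputs.

```python
# Minimal individual-level state-transition sketch with competing liver-related
# and non-liver-related death. Probabilities are illustrative placeholders only.
import numpy as np

STATES = ["F0", "F1", "F2", "F3", "F4", "DC", "HCC", "death_liver", "death_other"]
IDX = {s: i for i, s in enumerate(STATES)}

# Annual transition matrix (rows sum to 1); hypothetical numbers for illustration.
P = np.zeros((len(STATES), len(STATES)))
P[IDX["F0"], [IDX["F0"], IDX["F1"], IDX["death_other"]]] = [0.93, 0.05, 0.02]
P[IDX["F1"], [IDX["F1"], IDX["F0"], IDX["F2"], IDX["death_other"]]] = [0.88, 0.03, 0.07, 0.02]
P[IDX["F2"], [IDX["F2"], IDX["F1"], IDX["F3"], IDX["death_other"]]] = [0.86, 0.03, 0.08, 0.03]
P[IDX["F3"], [IDX["F3"], IDX["F2"], IDX["F4"], IDX["death_other"]]] = [0.84, 0.03, 0.09, 0.04]
P[IDX["F4"], [IDX["F4"], IDX["DC"], IDX["HCC"], IDX["death_liver"], IDX["death_other"]]] = [0.85, 0.06, 0.03, 0.02, 0.04]
P[IDX["DC"], [IDX["DC"], IDX["HCC"], IDX["death_liver"], IDX["death_other"]]] = [0.70, 0.05, 0.20, 0.05]
P[IDX["HCC"], [IDX["HCC"], IDX["death_liver"], IDX["death_other"]]] = [0.55, 0.40, 0.05]
P[IDX["death_liver"], IDX["death_liver"]] = 1.0
P[IDX["death_other"], IDX["death_other"]] = 1.0

def simulate(start_state: str, n_patients: int = 100_000, horizon: int = 50, seed: int = 0):
    """Return mean life-years and cumulative liver/non-liver mortality."""
    rng = np.random.default_rng(seed)
    state = np.full(n_patients, IDX[start_state])
    alive_years = np.zeros(n_patients)
    for _ in range(horizon):
        alive = state < IDX["death_liver"]
        alive_years[alive] += 1
        # Draw each surviving patient's next state from the row of their current state.
        for s in np.unique(state[alive]):
            mask = alive & (state == s)
            state[mask] = rng.choice(len(STATES), size=mask.sum(), p=P[s])
    return {
        "mean_life_years": alive_years.mean(),
        "liver_mortality": np.mean(state == IDX["death_liver"]),
        "non_liver_mortality": np.mean(state == IDX["death_other"]),
    }

print(simulate("F4"))
```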


Subjects
Carcinoma, Hepatocellular; Liver Neoplasms; Non-alcoholic Fatty Liver Disease; Carcinoma, Hepatocellular/complications; Female; Fibrosis; Humans; Liver Cirrhosis/epidemiology; Liver Cirrhosis/etiology; Liver Neoplasms/epidemiology; Male; Non-alcoholic Fatty Liver Disease/complications; Non-alcoholic Fatty Liver Disease/epidemiology
2.
Med Decis Making; 272989X221125418, 2022 Sep 16.
Article in English | MEDLINE | ID: mdl-36113098

ABSTRACT

BACKGROUND: Metamodels can address some of the limitations of complex simulation models by formulating a mathematical relationship between input parameters and simulation model outcomes. Our objective was to develop and compare the performance of a machine learning (ML)-based metamodel against a conventional metamodeling approach in replicating the findings of a complex simulation model. METHODS: We constructed 3 ML-based metamodels using random forest, support vector regression, and artificial neural networks, and a linear regression-based metamodel, from a previously validated microsimulation model of the natural history of hepatitis C virus (HCV) consisting of 40 input parameters. Outcomes of interest included societal costs and quality-adjusted life-years (QALYs), the incremental cost-effectiveness ratio (ICER) of HCV treatment versus no treatment, the cost-effectiveness acceptability curve (CEAC), and the expected value of perfect information (EVPI). We evaluated metamodel performance using root mean squared error (RMSE) and Pearson's R2 on the normalized data. RESULTS: The R2 values for the linear regression metamodel for QALYs without treatment, QALYs with treatment, societal cost without treatment, societal cost with treatment, and ICER were 0.92, 0.98, 0.85, 0.92, and 0.60, respectively. The corresponding R2 values for our ML-based metamodels were 0.96, 0.97, 0.90, 0.95, and 0.49 for support vector regression; 0.99, 0.83, 0.99, 0.99, and 0.82 for artificial neural network; and 0.99, 0.99, 0.99, 0.99, and 0.98 for random forest. Similar trends were observed for RMSE. The CEAC and EVPI curves produced by the random forest metamodel matched the results of the simulation output more closely than the linear regression metamodel. CONCLUSIONS: ML-based metamodels generally outperformed traditional linear regression metamodels at replicating results from complex simulation models, with random forest metamodels performing best. HIGHLIGHTS: Decision-analytic models are frequently used by policy makers and other stakeholders to assess the impact of new medical technologies and interventions. However, complex models can impose limitations on conducting probabilistic sensitivity analysis and value-of-information analysis, and may not be suitable for developing online decision-support tools. Metamodels, which accurately formulate a mathematical relationship between input parameters and model outcomes, can replicate complex simulation models and address these limitations. The machine learning-based random forest model can outperform linear regression in replicating the findings of a complex simulation model. Such a metamodel can be used for conducting cost-effectiveness and value-of-information analyses or developing online decision support tools.
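
A toy sketch of the metamodeling idea, assuming scikit-learn is available: draw input-parameter samples, run them through a (stand-in) simulator, then fit a linear regression and a random forest to the (inputs, outcome) pairs and compare R2/RMSE on held-out samples. The simulator function here is an arbitrary nonlinear stand-in, not the HCV microsimulation.

```python
# Toy metamodeling illustration: fit regression models to (input parameters,
# simulation outcome) pairs and compare how well they replicate the simulator.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, n_params = 2_000, 40                    # e.g., 40 uncertain input parameters
X = rng.uniform(0, 1, size=(n_samples, n_params))  # parameter draws (as in a PSA)

def simulator(x):
    # Stand-in for an expensive microsimulation returning, say, QALYs.
    return 10 + 3 * x[:, 0] - 2 * x[:, 1] ** 2 + 1.5 * x[:, 2] * x[:, 3] + rng.normal(0, 0.1, len(x))

y = simulator(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=300, random_state=0))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: R2={r2_score(y_te, pred):.3f}  RMSE={rmse:.3f}")
```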

3.
JAMA Health Forum; 3(4): e220760, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35977324

ABSTRACT

Importance: A key question for policy makers and the public is what to expect from the COVID-19 pandemic going forward as states lift nonpharmacologic interventions (NPIs), such as indoor mask mandates, to prevent COVID-19 transmission. Objective: To project COVID-19 deaths between March 1, 2022, and December 31, 2022, in each of the 50 US states, District of Columbia, and Puerto Rico, assuming different dates of lifting of mask mandates and NPIs. Design, Setting, and Participants: This simulation modeling study used the COVID-19 Policy Simulator compartmental model to project COVID-19 deaths from March 1, 2022, to December 31, 2022, using simulated populations in the 50 US states, District of Columbia, and Puerto Rico. The model projected current epidemiologic trends for each state through December 31, 2022, assuming that the current pace of vaccination was maintained and modeling different dates of lifting NPIs. Exposures: Lifting of statewide NPI mandates on March 1, April 1, May 1, June 1, or July 1, 2022. Main Outcomes and Measures: Projected COVID-19 incident deaths from March to December 2022. Results: With the high transmissibility of currently circulating SARS-CoV-2 variants, the simulated lifting of NPIs in March 2022 was associated with resurgences of COVID-19 deaths in nearly every state. In comparison, delaying by even 1 month to lift NPIs in April 2022 was estimated to mitigate the amplitude of the surge. For most states, however, no amount of delay was estimated to be sufficient to prevent a surge in deaths completely. The primary factor associated with recurrent epidemics in the simulation was the assumed high effective reproduction number of unmitigated viral transmission. With a lower level of transmissibility, similar to that of the ancestral strains, the model estimated that most states could remove NPIs in March 2022 and likely not see recurrent surges. Conclusions and Relevance: This simulation study estimated that the SARS-CoV-2 virus would likely continue to take a major toll in the US, even as cases continued to decrease. Because of the high transmissibility of the recent Delta and Omicron variants, premature lifting of NPIs could pose a substantial threat of rebounding surges in morbidity and mortality. At the same time, continued delay in lifting NPIs may not prevent future surges.
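
A minimal SIR-type sketch of how a compartmental model links the NPI-lifting date to projected deaths via the effective reproduction number; every parameter below is illustrative and does not reproduce the COVID-19 Policy Simulator.

```python
# Minimal SIR-type sketch: lifting NPIs on a given day raises the effective
# reproduction number, which drives subsequent deaths. Illustrative values only.
def project_deaths(lift_day, days=300, N=1e7, I0=5e3, R_npi=0.9, R_lifted=1.8,
                   infectious_days=7.0, ifr=0.003):
    S, I, recovered, deaths = N - I0, I0, 0.0, 0.0
    gamma = 1.0 / infectious_days
    for day in range(days):
        Rt = R_npi if day < lift_day else R_lifted   # NPIs lifted on lift_day
        beta = Rt * gamma
        new_inf = beta * S * I / N
        new_res = gamma * I
        S -= new_inf
        I += new_inf - new_res
        recovered += new_res
        deaths += ifr * new_res                      # deaths as a fraction of resolved infections
    return deaths

for lift_day in (0, 30, 60, 90):
    print(f"lift NPIs on day {lift_day:3d}: projected deaths ~ {project_deaths(lift_day):,.0f}")
```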


Subjects
COVID-19; SARS-CoV-2; Basic Reproduction Number; COVID-19/epidemiology; Humans; Pandemics/prevention & control
4.
JAMA Health Forum; 3(8): e222419, 2022 Aug.
Article in English | MEDLINE | ID: mdl-36003419

ABSTRACT

Importance: Undiagnosed atrial fibrillation (AF) is an important cause of stroke. Screening for AF using wrist-worn wearable devices may prevent strokes, but their cost-effectiveness is unknown. Objective: To evaluate the cost-effectiveness of contemporary AF screening strategies, particularly wrist-worn wearable devices. Design, Setting, and Participants: This economic evaluation used a microsimulation decision-analytic model and was conducted from September 8, 2020, to May 23, 2022, comprising 30 million simulated individuals with an age, sex, and comorbidity profile matching the US population aged 65 years or older. Interventions: Eight AF screening strategies, with 6 using wrist-worn wearable devices (watch or band photoplethysmography, with or without watch or band electrocardiography) and 2 using traditional modalities (ie, pulse palpation and 12-lead electrocardiogram), vs no screening. Main Outcomes and Measures: The primary outcome was the incremental cost-effectiveness ratio, defined as US dollars per quality-adjusted life-year (QALY). Secondary measures included rates of stroke and major bleeding. Results: In the base case analysis of this model, the mean (SD) age was 72.5 (7.5) years, and 50% of the individuals were women. All 6 screening strategies using wrist-worn wearable devices were estimated to be more effective than no screening (range of QALYs gained vs no screening, 226-957 per 100 000 individuals) and were associated with greater relative benefit than screening using traditional modalities (range of QALYs gained vs no screening, -116 to 93 per 100 000 individuals). Compared with no screening, screening using wrist-worn wearable devices was associated with a reduction in stroke incidence of 20 to 23 per 100 000 person-years but an increase in major bleeding of 20 to 44 per 100 000 person-years. The overall preferred strategy was wearable photoplethysmography followed, if positive, by wearable electrocardiography with patch monitor confirmation, which had an incremental cost-effectiveness ratio of $57 894 per QALY, meeting the acceptability threshold of $100 000 per QALY. The cost-effectiveness of screening was consistent across multiple scenarios, including strata of sex, screening at earlier ages (eg, ≥50 years), and with variation in the association of anticoagulation with risk of stroke in the setting of screening-detected AF. Conclusions and Relevance: This economic evaluation of AF screening using a microsimulation decision-analytic model suggests that screening using wearable devices is cost-effective compared with either no screening or AF screening using traditional methods.
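
For reference, the incremental cost-effectiveness ratio (ICER) calculation behind the ranking of strategies, in a short sketch; the per-person costs and QALYs are hypothetical values chosen only so the ratio lands near the $57 894/QALY figure reported above.

```python
# Illustration of the ICER calculation used to compare a screening strategy with
# no screening. Per-person costs and QALYs are hypothetical placeholders.
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost per QALY gained for a strategy vs. a reference."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

WTP = 100_000  # willingness-to-pay threshold, $ per QALY

no_screening = {"cost": 10_000, "qaly": 10.0000}   # hypothetical discounted values
wearable_ppg = {"cost": 10_550, "qaly": 10.0095}

ratio = icer(wearable_ppg["cost"], wearable_ppg["qaly"],
             no_screening["cost"], no_screening["qaly"])
verdict = "cost-effective" if ratio <= WTP else "not cost-effective"
print(f"ICER = ${ratio:,.0f}/QALY -> {verdict} at ${WTP:,}/QALY")
```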


Subjects
Atrial Fibrillation; Stroke; Wearable Electronic Devices; Aged; Atrial Fibrillation/diagnosis; Cost-Benefit Analysis; Female; Hemorrhage/complications; Humans; Male; Middle Aged; Stroke/diagnosis
5.
Hepatol Commun; 6(10): 2925-2936, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35945907

ABSTRACT

Ultrasound-based surveillance has suboptimal sensitivity for early detection of hepatocellular carcinoma (HCC) in patients with cirrhosis. There are several emerging alternatives, including a novel multitarget HCC blood test (mt-HBT). We compared the performance of mt-HBT against ultrasound with or without alpha-fetoprotein (AFP) for early HCC detection in patients with cirrhosis. Per the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, two reviewers searched PubMed, Cochrane, Embase, and clinicaltrials.gov databases from January 1990 through December 2020 to identify studies reporting sensitivity and/or specificity of ultrasound and AFP for overall and early-stage HCC detection in patients with cirrhosis. Diagnostic performance of mt-HBT was derived from a clinical validation study. A network meta-analysis model was built for comparative assessment, and pooled estimates of sensitivity at a fixed specificity were estimated based on Bayesian binormal receiver operating characteristic models for each modality. Forty-one studies (comprising 62,517 patients with cirrhosis) met inclusion criteria. Ultrasound-alone sensitivity was 51.6% (95% credible interval [CrI], 43.3%-60.5%) for early-stage HCC detection, which increased with the addition of AFP to 74.1% (95% CrI, 62.6%-82.4%); however, this was offset by decreased specificity (87.9% vs. 83.9%, respectively). With specificity fixed at 90%, mt-HBT sensitivity for early-stage HCC detection was higher than that of ultrasound alone (difference, 18.2%; 95% CrI, 0.2%-37.7%) and similar to that of ultrasound with AFP (difference, -3.3%; 95% CrI, -22.3% to 17.4%). Pairwise posterior probabilities suggested a preference for mt-HBT over ultrasound alone in 97.4% of cases but only 36.3% of cases versus ultrasound with AFP. Conclusion: A blood-based mt-HBT has higher sensitivity than ultrasound alone for early-stage HCC detection but similar sensitivity compared with ultrasound and AFP. The mt-HBT could be a comparable alternative to existing methods for HCC surveillance in patients who are at risk.
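
A small sketch of the binormal ROC relationship that underlies reading off sensitivity at a fixed specificity, TPF = Phi(a + b * Phi^-1(FPF)); the (a, b) parameters below are hypothetical choices that roughly echo the pooled sensitivities above, whereas the study estimated them with Bayesian hierarchical network meta-analysis models.

```python
# Binormal ROC sketch: sensitivity at a fixed specificity is
# TPF = Phi(a + b * Phi^-1(FPF)), with FPF = 1 - specificity.
# The (a, b) values below are hypothetical, not the study's posterior estimates.
from scipy.stats import norm

def sensitivity_at_specificity(a, b, specificity):
    fpf = 1.0 - specificity
    return norm.cdf(a + b * norm.ppf(fpf))

modalities = {               # hypothetical binormal parameters (a, b)
    "ultrasound alone": (0.95, 0.70),
    "ultrasound + AFP": (1.54, 0.70),
    "mt-HBT":           (1.42, 0.70),
}
for name, (a, b) in modalities.items():
    se = sensitivity_at_specificity(a, b, specificity=0.90)
    print(f"{name:18s} sensitivity at 90% specificity: {se:.1%}")
```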


Subjects
Carcinoma, Hepatocellular; Liver Neoplasms; Bayes Theorem; Carcinoma, Hepatocellular/diagnosis; Hematologic Tests; Humans; Liver Cirrhosis; Liver Neoplasms/diagnosis; Network Meta-Analysis; Sensitivity and Specificity; alpha-Fetoproteins/analysis
8.
Value Health; 25(7): 1107-1115, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35272954

ABSTRACT

OBJECTIVES: Hepatitis C virus (HCV) affects 58 million people worldwide, and more than 79% of them remain undiagnosed. Rapid diagnostic tests (RDTs) for HCV can help improve diagnosis and treatment rates. Nevertheless, the high price and infrastructure needed to use current molecular HCV RDT options present a barrier to widespread use, particularly in low- and middle-income countries. We evaluated the performance and cost-effectiveness of a theoretical core antigen (cAg) RDT for HCV viremia confirmation, which requires fewer resources. METHODS: We adapted a previously validated microsimulation model to simulate HCV disease progression and outcomes under different HCV testing algorithms in Georgia and Malaysia. We compared standard-of-care testing, in which HCV infection is confirmed with a laboratory-based HCV ribonucleic acid (RNA) test, with a cAg-based RDT for HCV confirmation. We simulated a cohort of 10 000 adults in each country, with an HCV RNA prevalence of 5.40% in Georgia and 1.54% in Malaysia. We projected the cumulative healthcare costs, quality-adjusted life-years, and diagnosis coverage rates over a lifetime horizon. RESULTS: Compared with standard-of-care testing, the cAg-based RDT would increase quality-adjusted life-years by 270 in Georgia and 259 in Malaysia per 10 000 people. The higher diagnosis and treatment rates achieved with the cAg-based RDT result in substantial cost savings because of averted HCV sequelae management costs: $281 000 for Georgia and $781 000 for Malaysia. CONCLUSIONS: We found that a cAg-based RDT for HCV could improve the diagnosis rate and result in cost savings. Such a test could have a substantial impact on the feasibility and cost of HCV elimination.
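
A toy decision-tree comparison of two confirmation strategies per 10 000 people screened, illustrating why higher confirmation rates can be cost saving; all probabilities, costs, and QALY gains are hypothetical placeholders rather than the study's calibrated inputs (only the 5.40% prevalence is taken from the abstract).

```python
# Toy decision-tree comparison of two HCV confirmation strategies per 10,000
# adults screened. All inputs are hypothetical placeholders.
def evaluate(prevalence, p_confirmed, cost_test, cost_treatment,
             cost_sequelae_averted, qaly_gain_per_cure, n=10_000):
    viremic = n * prevalence
    diagnosed = viremic * p_confirmed                 # completed confirmatory testing
    cost = (n * cost_test + diagnosed * cost_treatment
            - diagnosed * cost_sequelae_averted)      # savings from averted sequelae
    return {"diagnosed": diagnosed, "cost": cost, "qalys": diagnosed * qaly_gain_per_cure}

lab_rna = evaluate(prevalence=0.054, p_confirmed=0.60, cost_test=25,
                   cost_treatment=300, cost_sequelae_averted=1_500, qaly_gain_per_cure=1.0)
cag_rdt = evaluate(prevalence=0.054, p_confirmed=0.90, cost_test=15,
                   cost_treatment=300, cost_sequelae_averted=1_500, qaly_gain_per_cure=1.0)

print("incremental QALYs:", cag_rdt["qalys"] - lab_rna["qalys"])
print("incremental cost :", cag_rdt["cost"] - lab_rna["cost"])
```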


Subjects
Hepacivirus; Hepatitis C; Adult; Cost-Benefit Analysis; Diagnostic Tests, Routine; Hepacivirus/genetics; Hepatitis C/diagnosis; Hepatitis C/epidemiology; Humans; RNA
9.
J Hepatol; 77(1): 55-62, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35157959

ABSTRACT

BACKGROUND & AIMS: Successful treatment of chronic hepatitis C with oral direct-acting antivirals (DAAs) leads to virological cure, however, the subsequent risk of hepatocellular carcinoma (HCC) persists. Our objective was to evaluate the cost-effectiveness of biannual surveillance for HCC in patients cured of hepatitis C and the optimal age to stop surveillance. METHODS: We developed a microsimulation model of the natural history of HCC in individuals with hepatitis C and advanced fibrosis or cirrhosis who achieved virological cure with oral DAAs. We used published data on HCC incidence, tumor progression, real-world HCC surveillance adherence, and costs and utilities of different health states. We compared biannual HCC surveillance using ultrasound and alpha-fetoprotein for varying durations of surveillance (from 5 years to lifetime) vs. no surveillance. RESULTS: In virologically cured patients with cirrhosis, the incremental cost-effectiveness ratio (ICER) of biannual surveillance remained below $150,000 per additional quality-adjusted life year (QALY) (range: $79,500-$94,800) when surveillance was stopped at age 70, irrespective of the starting age (40-65). Compared with no surveillance, surveillance detected 130 additional HCCs in 'very early'/early stage and yielded 51 additional QALYs per 1,000 patients with cirrhosis. In virologically cured patients with advanced fibrosis, the ICER of biannual surveillance remained below $150,000/QALY (range: $124,600-$129,800) when surveillance was stopped at age 60, irrespective of the starting age (40-50). Compared with no surveillance, surveillance detected 24 additional HCCs in 'very early'/early stage and yielded 12 additional QALYs per 1,000 patients with advanced fibrosis. CONCLUSION: Biannual surveillance for HCC in patients cured of hepatitis C is cost-effective until the age of 70 for patients with cirrhosis, and until the age of 60 for patients with stable advanced fibrosis. LAY SUMMARY: Individuals who are cured of hepatitis C using oral antiviral drugs remain at risk of developing liver cancer. The value of lifelong screening for liver cancer in these individuals is not known. By simulating the life course of hepatitis C cured individuals, we found that ultrasound-based biannual screening for liver cancer is cost-effective up to age 70 in those with cirrhosis and up to age 60 in those with stable advanced fibrosis.
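
A sketch of how a stopping age can be chosen by computing the ICER of each longer surveillance duration against the next shorter one and keeping those below the willingness-to-pay threshold; the per-1,000-patient costs and QALYs are hypothetical illustrations, not the model's outputs.

```python
# Choosing a surveillance stopping age: compare each longer duration with the
# next shorter one and keep durations whose ICER stays below the threshold.
# Costs/QALYs are hypothetical per-1,000-patient values.
WTP = 150_000  # $/QALY

# (strategy, discounted cost, discounted QALYs) per 1,000 patients, sorted by cost.
strategies = [
    ("no surveillance", 0.0, 0.0),
    ("stop at 60",      2.8e6, 30.0),
    ("stop at 70",      4.6e6, 51.0),
    ("lifetime",        6.8e6, 58.0),
]

previous = strategies[0]
for name, cost, qaly in strategies[1:]:
    icer = (cost - previous[1]) / (qaly - previous[2])
    verdict = "acceptable" if icer <= WTP else "above threshold"
    print(f"{name:15s} ICER vs '{previous[0]}': ${icer:,.0f}/QALY ({verdict})")
    if icer <= WTP:
        previous = (name, cost, qaly)
```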


Subjects
Carcinoma, Hepatocellular; Hepatitis C, Chronic; Hepatitis C; Liver Neoplasms; Aged; Antiviral Agents/therapeutic use; Carcinoma, Hepatocellular/diagnosis; Carcinoma, Hepatocellular/epidemiology; Carcinoma, Hepatocellular/etiology; Cost-Benefit Analysis; Hepacivirus; Hepatitis C/drug therapy; Hepatitis C, Chronic/complications; Hepatitis C, Chronic/drug therapy; Hepatitis C, Chronic/epidemiology; Humans; Liver Cirrhosis/complications; Liver Cirrhosis/drug therapy; Liver Cirrhosis/epidemiology; Liver Neoplasms/diagnosis; Liver Neoplasms/epidemiology; Liver Neoplasms/etiology; Middle Aged
12.
Liver Int; 42(3): 532-540, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34817928

ABSTRACT

BACKGROUND AND AIMS: India has a significant burden of hepatitis C virus (HCV) infection and has committed to achieving national elimination by 2030. This will require a substantial scale-up in testing and treatment. The "HEAD-Start Project Delhi" aimed to enhance HCV diagnosis and treatment pathways among the general population. METHODS: A prospective study was conducted at 5 district hospitals (Arm 1: one-stop shop), 15 polyclinics (Arm 2: referral for viral load (VL) testing and treatment) and 62 screening camps (Arm 3: referral for treatment). HCV prevalence, retention in the HCV care cascade, and turn-around time were measured. RESULTS: Between January and September 2019, 37 425 participants were screened for HCV. The median (IQR) age of participants was 35 (26-48) years, with 50.4% male and 49.6% female. A significantly higher proportion of participants in Arm 1 (93.7%) and Arm 3 (90.3%) received a VL test compared with Arm 2 (52.5%, P < .001). Of those confirmed positive, treatment was initiated at significantly higher rates for participants in both Arms 1 (85.6%) and 2 (73.7%) compared to Arm 3 (41.8%, P < .001). Arm 1 was found to be a cost-saving strategy compared to Arm 2, Arm 3, and no action. CONCLUSIONS: Delivery of all services at a single site (district hospitals) resulted in a higher yield of HCV seropositive cases and retention compared with sites where participants were referred elsewhere for VL testing and/or treatment. The highest level of retention in the care cascade was also associated with the shortest turn-around times.
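
Quick arithmetic on the reported cascade, assuming the stage proportions are conditional: the share of viremic seropositive participants who would reach treatment is roughly the product of the viral-load-testing and treatment-initiation proportions per arm.

```python
# Approximate retention from seropositive screening result to treatment start,
# taking the stage proportions reported above as conditional probabilities.
cascade = {                      # (received VL test, initiated treatment | confirmed positive)
    "Arm 1: one-stop shop":      (0.937, 0.856),
    "Arm 2: referral for VL+tx": (0.525, 0.737),
    "Arm 3: referral for tx":    (0.903, 0.418),
}
for arm, (p_vl, p_tx) in cascade.items():
    print(f"{arm:27s} ~{p_vl * p_tx:.1%} of viremic seropositive participants reach treatment")
```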


Subjects
Hepacivirus; Hepatitis C; Adult; Feasibility Studies; Female; Hepatitis C/diagnosis; Hepatitis C/epidemiology; Hepatitis C/therapy; Humans; India/epidemiology; Male; Middle Aged; Prospective Studies
13.
Hepatology; 75(6): 1480-1490, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34878683

ABSTRACT

BACKGROUND AND AIMS: Alcohol consumption increased during the COVID-19 pandemic in 2020 in the United States. We projected the effect of increased alcohol consumption on alcohol-associated liver disease (ALD) and mortality. APPROACH AND RESULTS: We extended a previously validated microsimulation model to estimate the short- and long-term effects of increased drinking during the COVID-19 pandemic in individuals in the United States born between 1920 and 2012. We modeled short- and long-term outcomes of current drinking patterns during COVID-19 (status quo) using survey data on changes in alcohol consumption in a nationally representative sample between February and November 2020. We compared these outcomes with a counterfactual scenario wherein no COVID-19 occurs and drinking patterns do not change. A 1-year increase in alcohol consumption during the COVID-19 pandemic is estimated to result in 8000 (95% uncertainty interval [UI], 7500-8600) additional ALD-related deaths, 18,700 (95% UI, 17,600-19,900) cases of decompensated cirrhosis, 1000 (95% UI, 1000-1100) cases of hepatocellular carcinoma (HCC), and 8.9 million disability-adjusted life-years between 2020 and 2040. Between 2020 and 2023, alcohol consumption changes due to COVID-19 are projected to lead to 100 (100-200) additional deaths and 2800 (2700-2900) additional decompensated cirrhosis cases. A sustained increase in alcohol consumption for more than 1 year could result in additional morbidity and mortality. CONCLUSIONS: A short-term increase in alcohol consumption during the COVID-19 pandemic can substantially increase long-term ALD-related morbidity and mortality. Our findings highlight the need for individuals and policymakers to make informed decisions to mitigate the impact of high-risk alcohol drinking in the United States.
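
For context, a minimal sketch of the disability-adjusted life-year (DALY) arithmetic, DALY = YLL + YLD; the inputs below are purely illustrative, not the model's estimates.

```python
# DALY = years of life lost (YLL) + years lived with disability (YLD).
# All inputs are hypothetical illustrations.
def dalys(deaths, life_expectancy_at_death, cases, disability_weight, duration_years):
    yll = deaths * life_expectancy_at_death            # years of life lost
    yld = cases * disability_weight * duration_years   # years lived with disability
    return yll + yld

total = dalys(deaths=1_000, life_expectancy_at_death=20,
              cases=5_000, disability_weight=0.45, duration_years=5)
print(f"DALYs = {total:,.0f}")
```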


Subjects
COVID-19; Carcinoma, Hepatocellular; Liver Diseases, Alcoholic; Liver Neoplasms; Alcohol Drinking/adverse effects; Alcohol Drinking/epidemiology; COVID-19/epidemiology; Humans; Liver Cirrhosis; Liver Diseases, Alcoholic/epidemiology; Pandemics; United States/epidemiology
14.
Lancet Reg Health Am; 8: 100143, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34927126

ABSTRACT

BACKGROUND: Oropharyngeal cancer (OPC) incidence is rising rapidly among men in the United States of America (USA). We aimed to project the impact on OPC incidence and burden of maintaining the current human papillomavirus (HPV) vaccination uptake and of achieving the 80% national (Healthy People) goal. METHODS: We developed an open-cohort micro-simulation model of OPC natural history among contemporary and future birth cohorts of men, accounting for sexual behaviors, population growth, aging, and herd immunity. We used data from nationally representative databases, cancer registries from all 50 states, large clinical trials, and the literature. We evaluated the status quo scenario (the current HPV vaccination uptake remained stable) and alternative scenarios of improvements in uptake rates in adolescents (aged 9-17 years) and young adults (aged 18-26 years) by 2025 to achieve and maintain the 80% goal. The primary outcome was projected OPC incidence and burden from 2009 to 2100. We also assessed the impact of the disruption in HPV vaccine uptake during the COVID-19 pandemic. FINDINGS: OPC incidence is projected to rise until the mid-2030s, reaching an age-standardized incidence rate of 9.8 (95% uncertainty interval [UI] 9.5-10.1) per 100 000 men, with a peak annual burden of 23 850 (UI, 23 200-24 500) cases. Under the status quo scenario, HPV vaccination could prevent 124 000 (UI, 117 000-131 000) OPC cases among men by 2060, 400 000 (UI, 384 000-416 000) by 2080, and 792 000 (UI, 763 000-821 000) by 2100. Achievement and maintenance of 80% coverage among adolescent girls only, adolescent girls and boys, and adolescents plus young adults could prevent an additional 100 000 (UI, 95 000-105 000), 118 000 (UI, 113 000-123 000), and 142 000 (UI, 136 000-148 000) male OPC cases, respectively, by 2100. Delayed recovery of HPV vaccine uptake during the COVID-19 pandemic could lead to 600 (UI, 580-620) to 6200 (UI, 5940-6460) additional male OPC cases by 2100, depending on the extent of the decline in national HPV vaccination coverage and on how long it takes to rebound. INTERPRETATION: Oropharyngeal cancer burden is projected to rise among men in the USA. Nationwide efforts to achieve the HPV vaccination goal of 80% coverage should be a public health priority. Rapid recovery of the HPV vaccination uptake that declined during the COVID-19 pandemic is also crucial to prevent future excess OPC burden. FUNDING: National Cancer Institute and National Institute on Minority Health and Health Disparities of the USA.
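
A short sketch of direct age standardization, the calculation behind an "age-standardized incidence rate per 100 000 men"; case counts, person-years, and standard-population weights below are hypothetical.

```python
# Direct age standardization: weight each age group's crude rate by a standard
# population weight, then scale to per-100,000. All numbers are hypothetical.
# Age bands (for illustration): 40-49, 50-59, 60-69, 70+.
cases        = [300,  1_200, 1_800, 900]    # incident OPC cases per band
person_years = [20e6, 18e6,  14e6,  8e6]    # male person-years at risk per band
std_weights  = [0.35, 0.30,  0.22,  0.13]   # standard population weights (sum to 1)

asr = sum(w * c / py for w, c, py in zip(std_weights, cases, person_years)) * 100_000
print(f"age-standardized incidence ~ {asr:.1f} per 100,000")
```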

15.
BMJ Open; 11(12): e055142, 2021 Dec 24.
Article in English | MEDLINE | ID: mdl-34952885

ABSTRACT

INTRODUCTION: To achieve the elimination of hepatitis C virus (HCV), substantial scale-up in access to testing and treatment is needed. This will require innovation and simplification of the care pathway, through decentralisation of testing and treatment to primary care settings and task-shifting to non-specialists. The objective of this study was to evaluate the feasibility and effectiveness of decentralisation of HCV testing and treatment using rapid diagnostic tests (RDTs) in primary healthcare clinics (PHCs) among high-risk populations, with referral of seropositive patients for confirmatory viral load testing and treatment. METHODS: This observational study was conducted between December 2018 and October 2019 at 25 PHCs in three regions in Malaysia. Each PHC was linked to one or more hospitals, for referral of seropositive participants for confirmatory testing and pretreatment evaluation. Treatment was provided in PHCs for non-cirrhotic patients and at hospitals for cirrhotic patients. RESULTS: During the study period, a total of 15 366 adults were screened at the 25 PHCs, using RDTs for HCV antibodies. Of the 2020 (13.2%) HCV antibody-positive participants, 1481/2020 (73.3%) had a confirmatory viral load test, 1241/1481 (83.8%) were HCV RNA-positive, 991/1241 (79.9%) completed pretreatment assessment, 632/991 (63.8%) initiated treatment, 518/632 (82.0%) completed treatment, 352/518 (68.0%) were eligible for a sustained virological response (SVR) cure assessment, 209/352 (59.4%) had an SVR cure assessment, and SVR was achieved in 202/209 (96.7%) patients. A significantly higher proportion of patients referred to PHCs initiated treatment compared with those who had treatment initiated at hospitals (71.0% vs 48.8%, p<0.001). CONCLUSIONS: This study demonstrated the effectiveness and feasibility of a simplified decentralised HCV testing and treatment model in primary healthcare settings, targeting high-risk groups in Malaysia. There were good outcomes across most steps of the cascade of care when treatment was provided at PHCs compared with hospitals.
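
A sketch of the kind of two-proportion comparison behind "71.0% vs 48.8%, p<0.001" for treatment initiation at PHCs versus hospitals, assuming scipy is available; the denominators are hypothetical because the abstract reports only the percentages.

```python
# Two-proportion comparison via a chi-square test on a 2x2 table.
# Counts are hypothetical (chosen to match the reported percentages).
from scipy.stats import chi2_contingency

phc_initiated, phc_total = 440, 620          # ~71.0% initiated at PHCs
hosp_initiated, hosp_total = 181, 371        # ~48.8% initiated at hospitals

table = [[phc_initiated, phc_total - phc_initiated],
         [hosp_initiated, hosp_total - hosp_initiated]]
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"PHC {phc_initiated/phc_total:.1%} vs hospital {hosp_initiated/hosp_total:.1%}, p = {p_value:.2g}")
```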


Subjects
Hepacivirus; Hepatitis C; Adult; Antiviral Agents/therapeutic use; Hepatitis C/diagnosis; Hepatitis C/drug therapy; Humans; Malaysia; Primary Health Care
16.
Sci Rep; 11(1): 21382, 2021 Nov 01.
Article in English | MEDLINE | ID: mdl-34725356

ABSTRACT

The cost of testing can be a substantial contributor to hepatitis C virus (HCV) elimination program costs in many low- and middle-income countries such as Georgia, resulting in the need for innovative and cost-effective testing strategies. Our objective was to investigate the most cost-effective testing pathways for scaling up HCV testing in Georgia. We developed a Markov-based model with a lifetime horizon that simulates the natural history of HCV and the costs of HCV detection and treatment. We then created an interactive online tool that uses results from the Markov-based model to evaluate the cost-effectiveness of different HCV testing pathways. We compared the current standard-of-care (SoC) testing pathway and four innovative testing pathways for Georgia. SoC testing was cost-saving compared with no testing, but all four new HCV testing pathways further increased quality-adjusted life-years (QALYs) and decreased costs. The pathway with the highest patient follow-up, due to on-site testing, resulted in the highest discounted QALYs (123 QALYs more than the SoC) and the lowest costs ($127,052 less than the SoC) per 10,000 persons screened. The current testing algorithm in Georgia can be replaced with a new pathway that is more effective while being cost-saving.
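
A minimal Markov cohort sketch: propagate a state-occupancy vector through an annual transition matrix and accumulate discounted costs and QALYs. The states, transition matrix, costs, and utilities are hypothetical placeholders, not the calibrated Georgian model.

```python
# Markov cohort trace with discounting; all inputs are hypothetical placeholders.
import numpy as np

states = ["chronic HCV", "cured", "cirrhosis", "dead"]
P = np.array([                   # annual transition probabilities (rows sum to 1)
    [0.55, 0.40, 0.03, 0.02],    # chronic: may be detected and cured
    [0.00, 0.98, 0.00, 0.02],
    [0.00, 0.00, 0.90, 0.10],
    [0.00, 0.00, 0.00, 1.00],
])
cost    = np.array([300.0, 20.0, 2_000.0, 0.0])    # annual cost per state ($)
utility = np.array([0.77, 0.85, 0.60, 0.0])        # annual utility per state

occupancy = np.array([1.0, 0.0, 0.0, 0.0])         # cohort starts chronically infected
disc, total_cost, total_qalys = 0.03, 0.0, 0.0
for year in range(60):                             # lifetime horizon
    w = 1.0 / (1.0 + disc) ** year
    total_cost  += w * occupancy @ cost
    total_qalys += w * occupancy @ utility
    occupancy = occupancy @ P

print(f"discounted cost per person ~ ${total_cost:,.0f}; QALYs ~ {total_qalys:.2f}")
```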


Subjects
Hepatitis C/diagnosis; Adult; Antiviral Agents/therapeutic use; Cost-Benefit Analysis; Female; Georgia (Republic)/epidemiology; Hepacivirus/isolation & purification; Hepatitis C/drug therapy; Hepatitis C/economics; Hepatitis C/epidemiology; Humans; Male; Markov Chains; Mass Screening/economics; Microbiological Techniques/economics; Quality-Adjusted Life Years
17.
J Am Heart Assoc; 10(18): e020330, 2021 Sep 21.
Article in English | MEDLINE | ID: mdl-34476979

ABSTRACT

Background Atrial fibrillation (AF) screening is endorsed by certain guidelines for individuals aged ≥65 years. Yet many AF screening strategies exist, including the use of wrist-worn wearable devices, and their comparative effectiveness is not well understood. Methods and Results We developed a decision-analytic model simulating 50 million individuals with an age, sex, and comorbidity profile matching the United States population aged ≥65 years (ie, with a guideline-based AF screening indication). We modeled no screening, in addition to 45 distinct AF screening strategies (comprising different modalities and screening intervals), each initiated at a clinical encounter. The primary effectiveness measure was quality-adjusted life-years, with incident stroke and major bleeding as secondary measures. We defined continuous or nearly continuous modalities as those capable of monitoring beyond a single time point (eg, patch monitor), and discrete modalities as those capable of only instantaneous AF detection (eg, 12-lead ECG). In total, 10 AF screening strategies were effective compared with no screening (300-1500 quality-adjusted life-years gained/100 000 individuals screened). Nine (90%) effective strategies involved use of a continuous or nearly continuous modality such as a patch monitor or wrist-worn wearable device, whereas 1 (10%) relied on discrete modalities alone. Effective strategies reduced stroke incidence (number needed to screen to prevent a stroke: 3087-4445) but increased major bleeding (number needed to screen to cause a major bleed: 1815-4049) and intracranial hemorrhage (number needed to screen to cause an intracranial hemorrhage: 7693-16 950). Test specificity was a highly influential model parameter for screening effectiveness. Conclusions When modeled from a clinician-directed perspective, the comparative effectiveness of population-based AF screening varies substantially depending on the specific strategy used. Future screening interventions and guidelines should consider the relative effectiveness of specific AF screening strategies.
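
The number-needed-to-screen figures follow from the absolute difference in event rates between screening and no screening, NNS = 1/ARR; the event counts below are hypothetical per-100 000 illustrations that land inside the reported ranges.

```python
# Number needed to screen (NNS) = 1 / absolute risk reduction.
# Event counts are hypothetical per-100,000 illustrations.
def number_needed_to_screen(events_no_screen, events_screen, per=100_000):
    arr = (events_no_screen - events_screen) / per   # absolute risk reduction (or increase)
    return 1.0 / arr

print(f"NNS to prevent one stroke ~ {number_needed_to_screen(2_500, 2_473):,.0f}")
print(f"NNS to cause one major bleed ~ {abs(number_needed_to_screen(900, 930)):,.0f}")
```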


Subjects
Atrial Fibrillation; Stroke; Aged; Atrial Fibrillation/diagnosis; Atrial Fibrillation/epidemiology; Cost-Benefit Analysis; Humans; Intracranial Hemorrhages; Mass Screening; Stroke/diagnosis; Stroke/epidemiology; Stroke/prevention & control; Treatment Outcome
18.
J Am Heart Assoc; 10(16): e021144, 2021 Aug 17.
Article in English | MEDLINE | ID: mdl-34387130

ABSTRACT

Background Optimal management of asymptomatic Brugada syndrome (BrS) with spontaneous type I electrocardiographic pattern is uncertain. Methods and Results We developed an individual-level simulation comprising 2 000 000 average-risk individuals with asymptomatic BrS and spontaneous type I electrocardiographic pattern. We compared (1) observation, (2) electrophysiologic study (EPS)-guided implantable cardioverter-defibrillator (ICD), and (3) upfront ICD, each using either subcutaneous or transvenous ICD, resulting in 6 strategies tested. The primary outcome was quality-adjusted life years (QALYs), with cardiac deaths (arrest or procedural-related) as a secondary outcome. We varied BrS diagnosis age and underlying arrest rate. We assessed cost-effectiveness at $100 000/QALY. Compared with observation, EPS-guided subcutaneous ICD resulted in 0.35 QALY gain/individual and 4130 cardiac deaths avoided/100 000 individuals, and EPS-guided transvenous ICD resulted in 0.26 QALY gain and 3390 cardiac deaths avoided. Compared with observation, upfront ICD reduced cardiac deaths by a greater margin (subcutaneous ICD, 8950; transvenous ICD, 6050), but only subcutaneous ICD improved QALYs (subcutaneous ICD, 0.25 QALY gain; transvenous ICD, 0.01 QALY loss), and complications were higher. ICD-based strategies were more effective at younger ages and higher arrest rates (eg, using subcutaneous devices, upfront ICD was the most effective strategy at ages 20-39.4 years and arrest rates >1.37%/year; EPS-guided ICD was the most effective strategy at ages 39.5-51.3 years and arrest rates 0.47%-1.37%/year, and observation was the most effective strategy at ages >51.3 years and arrest rates <0.47%/year). EPS-guided subcutaneous ICD was cost-effective ($80 508/QALY). Conclusions Device-based approaches (with or without EPS risk stratification) can be more effective than observation among selected patients with asymptomatic BrS. BrS management should be tailored to patient characteristics.
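
A sketch of comparing strategies at a $100 000/QALY threshold via net monetary benefit (NMB = QALYs x WTP - cost); the per-patient QALYs and costs are hypothetical placeholders loosely consistent with the QALY gains quoted above.

```python
# Net monetary benefit comparison at a $100,000/QALY threshold.
# Per-patient values are hypothetical placeholders.
WTP = 100_000

strategies = {                          # (discounted QALYs, discounted cost $) per patient
    "observation":                 (14.00, 20_000),
    "EPS-guided subcutaneous ICD": (14.35, 48_200),
    "upfront subcutaneous ICD":    (14.25, 75_000),
}
for name, (qalys, cost) in strategies.items():
    print(f"{name:27s} NMB = ${qalys * WTP - cost:,.0f}")
```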


Subjects
Brugada Syndrome/therapy; Decision Support Techniques; Defibrillators, Implantable; Electric Countershock/instrumentation; Adult; Asymptomatic Diseases; Brugada Syndrome/diagnosis; Brugada Syndrome/economics; Brugada Syndrome/mortality; Comparative Effectiveness Research; Cost-Benefit Analysis; Defibrillators, Implantable/economics; Electric Countershock/adverse effects; Electric Countershock/economics; Electric Countershock/mortality; Electrocardiography; Health Care Costs; Humans; Middle Aged; Models, Economic; Quality-Adjusted Life Years; Recovery of Function; Time Factors; Treatment Outcome
19.
JAMA Netw Open; 4(8): e2119621, 2021 Aug 02.
Article in English | MEDLINE | ID: mdl-34402891

ABSTRACT

Importance: In 2020 and early 2021, the National Football League (NFL) and National Collegiate Athletic Association (NCAA) opted to host football games in stadiums across the country. The in-person attendance of games varied with time and from county to county. There is currently no evidence on whether limited in-person attendance of games is associated with COVID-19 case numbers at a county level. Objective: To assess whether NFL and NCAA football games with limited in-person attendance were associated with increased COVID-19 cases in the counties in which they were held compared with a matched set of counties. Design, Setting, and Participants: In this time-series cross-sectional study, every county hosting NFL or NCAA games with in-person attendance (treated group) in 2020 and 2021 was matched with a county that did not host a game on the corresponding day but had an identical game history for up to 14 days prior (control group). A standard matching method was used to further refine this matched set so that the treated and matched control counties had similar population size, nonpharmaceutical interventions in place, and COVID-19 trends. The association of hosting games with in-person attendance with COVID-19 cases was assessed using a difference-in-difference estimator. Data were analyzed from August 29 to December 28, 2020. Exposures: Hosting NFL or NCAA games. Main Outcomes and Measures: The main outcome was the number of new COVID-19 cases per 100 000 residents at the county level reported up to 14 days after a game among counties hosting NFL and NCAA games with in-person attendance. Results: A total of 528 games with in-person attendance (101 NFL games [19.1%]; 427 NCAA games [80.9%]) were included. The matching algorithm returned 361 matching sets of counties. The median (interquartile range [IQR]) attendance at NFL games was 9949 (6000-13 797) people. Median attendance at NCAA games was not available, so attendance was recorded as a binary variable. The median (IQR) daily new COVID-19 cases in treatment group counties hosting games was 26.14 (10.77-50.25) cases per 100 000 residents on game day. The median (IQR) daily new COVID-19 cases in control group counties where no games were played was 24.11 (9.64-48.55) cases per 100 000 residents on game day. The treatment effect size ranged from -5.17 to 4.72, with a mean (SD) of 1.21 (2.67) cases per 100 000 residents, within the 14-day period in all counties hosting the games, and the daily treatment effect trend remained relatively steady during this period. Conclusions and Relevance: This cross-sectional study did not find a consistent increase in daily COVID-19 cases per 100 000 residents in counties where NFL and NCAA games were held with limited in-person attendance. These findings suggest that NFL and NCAA football games hosted with limited in-person attendance were not associated with a substantial risk of increased local COVID-19 cases.
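
A minimal sketch of the difference-in-difference estimator on matched county pairs: the change in cases per 100 000 in the hosting county minus the change in its matched control over the same window; the numbers below are hypothetical.

```python
# Difference-in-difference on matched county pairs. Values are hypothetical
# daily cases per 100,000, averaged over the pre-game and 14-day post-game windows.
import numpy as np

# Each row: (treated_pre, treated_post, control_pre, control_post).
matched_pairs = np.array([
    [26.1, 29.0, 24.1, 26.3],
    [18.4, 17.9, 19.0, 18.1],
    [40.2, 44.8, 38.7, 43.9],
])
did = (matched_pairs[:, 1] - matched_pairs[:, 0]) - (matched_pairs[:, 3] - matched_pairs[:, 2])
print(f"mean treatment effect ~ {did.mean():.2f} cases per 100,000 (per-pair: {did.round(2)})")
```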


Subjects
COVID-19/epidemiology; Communicable Disease Control/statistics & numerical data; Population Health/statistics & numerical data; Sentinel Surveillance; Sports and Recreational Facilities/statistics & numerical data; COVID-19/prevention & control; COVID-19/transmission; Communicable Disease Control/methods; Cross-Sectional Studies; Football; Humans; Nonprofit Organizations; SARS-CoV-2; Societies; United States/epidemiology; Universities