Results 1 - 20 of 25
2.
J Hepatol ; 78(5): 937-946, 2023 05.
Article in English | MEDLINE | ID: mdl-36669704

ABSTRACT

BACKGROUND & AIMS: HCV test and treat campaigns currently exclude pregnant women. Pregnancy offers a unique opportunity for HCV screening and for potentially initiating direct-acting antiviral treatment. We explored HCV screening and treatment strategies in two lower middle-income countries with high HCV prevalence, Egypt and Ukraine. METHODS: Country-specific probabilistic decision models were developed to simulate a cohort of pregnant women. We compared five strategies: S0, targeted risk-based screening and deferred treatment (DT) until after pregnancy/breastfeeding; S1, World Health Organization (WHO) risk-based screening and DT; S2, WHO risk-based screening and targeted treatment (treat women with risk factors for HCV vertical transmission [VT]); S3, universal screening and targeted treatment during pregnancy; S4, universal screening and treatment. Maternal and infant HCV outcomes were projected. RESULTS: S0 resulted in the highest proportion of women undiagnosed: 59% and 20% in Egypt and Ukraine, respectively, with 0% maternal cure by delivery and VT estimated at 6.5% and 7.9%, respectively. WHO risk-based screening and DT (S1) increased the proportion of women diagnosed with no change in maternal cure or VT. Universal screening and treatment during pregnancy (S4) resulted in the highest proportion of women diagnosed and cured by delivery (65% and 70%, respectively), and lower levels of VT (3.4% and 3.6%, respectively). CONCLUSIONS: This is one of the first models to explore HCV screening and treatment strategies in pregnancy, which will be critical in informing future care and policy as more safety/efficacy data emerge. Universal screening and treatment in pregnancy could potentially improve both maternal and infant outcomes.
IMPACT AND IMPLICATIONS: In the context of two lower middle-income countries with high HCV burdens (Egypt and Ukraine), we designed a decision analytic model to explore five different HCV testing and treatment strategies for pregnant women, with the assumption that treatment was safe and efficacious for use in pregnancy. Assuming direct-acting antiviral treatment during pregnancy would reduce vertical transmission, our findings indicate that the provision of universal (rather than risk-based targeted) screening and treatment would provide the greatest maternal and infant benefits. While future trials are needed to assess the safety and efficacy of direct-acting antivirals in pregnancy and their impact on vertical transmission, there is increasing recognition that the elimination of HCV cannot leave entire subpopulations of pregnant women and young children behind. Our findings will be critical for policymakers when developing improved screening and treatment recommendations for pregnant women.
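The strategy comparison above can be sketched as a simple screening-to-cure cascade. The sketch below illustrates only the mechanism, not the paper's model: every parameter value (screening coverage, test sensitivity, treatment uptake, cure probability, VT risks) is a hypothetical placeholder.

```python
# Illustrative screening-and-treatment cascade for an antenatal HCV cohort.
# All parameter values are hypothetical placeholders, not the model's inputs.

def cascade(screen_cov, sens, treat_cov, cure_prob, vt_untreated, vt_treated):
    """Return (P(cured by delivery | infected), VT risk among infected women)."""
    diagnosed = screen_cov * sens              # P(diagnosed | infected)
    cured = diagnosed * treat_cov * cure_prob  # P(cured by delivery | infected)
    vt = cured * vt_treated + (1 - cured) * vt_untreated
    return cured, vt

# Hypothetical contrast: risk-based screening reaching 40% of pregnant women
# vs universal screening reaching 95%, both with treatment during pregnancy.
risk_based = cascade(0.40, 0.98, 0.90, 0.95, 0.06, 0.001)
universal = cascade(0.95, 0.98, 0.90, 0.95, 0.06, 0.001)
print(risk_based, universal)
```

Under any such parameterisation, widening screening coverage raises the proportion cured by delivery, which in turn lowers the cohort-level VT risk, mirroring the direction of the S0-to-S4 results.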


Subject(s)
Hepatitis C, Chronic , Hepatitis C , Pregnancy Complications, Infectious , Child , Humans , Pregnancy , Female , Child, Preschool , Hepatitis C/diagnosis , Hepatitis C/drug therapy , Hepatitis C/epidemiology , Antiviral Agents/therapeutic use , Hepatitis C, Chronic/drug therapy , Egypt/epidemiology , Ukraine/epidemiology , Pregnancy Complications, Infectious/diagnosis , Pregnancy Complications, Infectious/drug therapy , Pregnancy Complications, Infectious/epidemiology , Mass Screening , Infectious Disease Transmission, Vertical/prevention & control
3.
Clin Infect Dis ; 76(5): 905-912, 2023 03 04.
Article in English | MEDLINE | ID: mdl-35403676

ABSTRACT

BACKGROUND: It is widely accepted that the risk of hepatitis C virus (HCV) vertical transmission (VT) is 5%-6% in monoinfected women, and that 25%-40% of HCV infection clears spontaneously within 5 years. However, there is no consensus on how VT rates should be estimated, and there is a lack of information on VT rates "net" of clearance. METHODS: We reanalyzed data on 1749 children in 3 prospective cohorts to obtain coherent estimates of overall VT rate and VT rates net of clearance at different ages. Clearance rates were used to impute the proportion of uninfected children who had been infected and then cleared before testing negative. The proportion of transmission early in utero, late in utero, and at delivery was estimated from data on the proportion of HCV RNA positive within 3 days of birth, and differences between elective cesarean and nonelective cesarean deliveries. RESULTS: Overall VT rates were 7.2% (95% credible interval [CrI], 5.6%-8.9%) in mothers who were human immunodeficiency virus (HIV) negative and 12.1% (95% CrI, 8.6%-16.8%) in HIV-coinfected women. The corresponding rates net of clearance at 5 years were 2.4% (95% CrI, 1.1%-4.1%), and 4.1% (95% CrI, 1.7%-7.3%). We estimated that 24.8% (95% CrI, 12.1%-40.8%) of infections occur early in utero, 66.0% (95% CrI, 42.5%-83.3%) later in utero, and 9.3% (95% CrI, 0.5%-30.6%) during delivery. CONCLUSIONS: Overall VT rates are about 24% higher than previously assumed, but the risk of infection persisting beyond age 5 years is about 38% lower. The results can inform design of trials of interventions to prevent or treat pediatric HCV infection, and strategies to manage children exposed in utero.
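The relationship between the overall VT rate and the rate "net" of clearance can be checked directly from the point estimates quoted above for HIV-negative mothers:

```python
# Point estimates quoted in the abstract, HIV-negative mothers.
overall_vt = 0.072   # overall vertical transmission rate
vt_net_5y = 0.024    # VT net of clearance at age 5

persist_to_5y = vt_net_5y / overall_vt   # P(infection persists to age 5 | transmitted)
implied_clearance = 1 - persist_to_5y    # implied fraction clearing by age 5
print(f"implied clearance by age 5: {implied_clearance:.0%}")
```

The implied clearance fraction (about two thirds) is higher than the conventional 25%-40% range, consistent with the abstract's point that some transmitted infections clear before children are ever tested and so are missed by naive VT estimates.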


Subject(s)
HIV Infections , Hepatitis C , Pregnancy Complications, Infectious , Pregnancy , Female , Child , Humans , Child, Preschool , Hepacivirus/genetics , Risk Factors , Prospective Studies , Pregnancy Complications, Infectious/epidemiology , HIV Infections/epidemiology
5.
BMJ ; 371: m4262, 2020 11 11.
Article in English | MEDLINE | ID: mdl-33177070

ABSTRACT

OBJECTIVE: To assess the accuracy of the AbC-19 Rapid Test lateral flow immunoassay for the detection of previous severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. DESIGN: Test accuracy study. SETTING: Laboratory based evaluation. PARTICIPANTS: 2847 key workers (healthcare staff, fire and rescue officers, and police officers) in England in June 2020 (268 with a previous polymerase chain reaction (PCR) positive result (median 63 days previously), 2579 with unknown previous infection status); and 1995 pre-pandemic blood donors. MAIN OUTCOME MEASURES: AbC-19 sensitivity and specificity, estimated firstly using known negative (pre-pandemic) and known positive (PCR confirmed) samples as reference standards, and secondly using the Roche Elecsys anti-nucleoprotein assay, a highly sensitive laboratory immunoassay, as a reference standard in samples from key workers. RESULTS: Test result bands were often weak, with positive/negative discordance by three trained laboratory staff for 3.9% of devices. Using consensus readings, for known positive and negative samples sensitivity was 92.5% (95% confidence interval 88.8% to 95.1%) and specificity was 97.9% (97.2% to 98.4%). Using an immunoassay reference standard, sensitivity was 94.2% (90.7% to 96.5%) among PCR confirmed cases but 84.7% (80.6% to 88.1%) among other people with antibodies. This is consistent with AbC-19 being more sensitive when antibody concentrations are higher, as people with PCR confirmation tended to have more severe disease whereas only 62% (218/354) of seropositive participants had had symptoms. If 1 million key workers were tested with AbC-19 and 10% had actually been previously infected, 84 700 true positive and 18 900 false positive results would be projected. The probability that a positive result was correct would be 81.7% (76.8% to 85.8%).
CONCLUSIONS: AbC-19 sensitivity was lower among unselected populations than among PCR confirmed cases of SARS-CoV-2, highlighting the scope for overestimation of assay performance in studies involving only PCR confirmed cases, owing to "spectrum bias." Assuming that 10% of the tested population have had SARS-CoV-2 infection, around one in five key workers testing positive with AbC-19 would be false positives. STUDY REGISTRATION: ISRCTN 56609224.
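The projection above follows directly from the quoted sensitivity among unselected antibody-positive people (84.7%), the specificity (97.9%), and the assumed 10% prevalence:

```python
# Reproducing the abstract's 1-million-key-worker projection from its
# quoted operating characteristics and assumed prevalence.
n, prev = 1_000_000, 0.10
sens, spec = 0.847, 0.979   # sensitivity among unselected seropositives; specificity

tp = n * prev * sens               # true positives
fp = n * (1 - prev) * (1 - spec)   # false positives
ppv = tp / (tp + fp)               # probability a positive result is correct
print(round(tp), round(fp), round(100 * ppv, 1))
```

This recovers the 84 700 true positives and 18 900 false positives, and a positive predictive value of roughly 81.7%-81.8% depending on rounding of the inputs.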


Subject(s)
Clinical Laboratory Techniques/standards , Coronavirus Infections/diagnosis , Immunoassay/standards , Pneumonia, Viral/diagnosis , Betacoronavirus , COVID-19 , COVID-19 Testing , Female , Firefighters , Health Personnel , Humans , Male , Pandemics , Police , Predictive Value of Tests , Reagent Kits, Diagnostic/standards , SARS-CoV-2 , Sensitivity and Specificity , United Kingdom
6.
Pathogens ; 9(5)2020 May 07.
Article in English | MEDLINE | ID: mdl-32392815

ABSTRACT

BACKGROUND: Zika virus (ZIKV) infection has been associated with congenital microcephaly and other neurodevelopmental abnormalities. There is little published research on the effect of maternal ZIKV infection in a non-endemic European region. We aimed to describe the outcomes of pregnant travelers diagnosed as ZIKV-infected in Spain, and their exposed children. METHODS: This prospective observational cohort study at nine referral hospitals enrolled pregnant women (PW) who travelled to endemic areas during their pregnancy or the two previous months, or whose sexual partners had visited endemic areas in the previous 6 months. Infants of ZIKV-infected mothers were followed for about two years. RESULTS: ZIKV infection was diagnosed in 163 PW; 112 (70%) were asymptomatic and 24 (14.7%) were confirmed cases. Among 143 infants, 14 (9.8%) had adverse outcomes during follow-up; three had a congenital Zika syndrome (CZS), and 11 had other potential Zika-related outcomes. The overall incidence of CZS was 2.1% (95%CI: 0.4-6.0%), but among infants born to ZIKV-confirmed mothers, this increased to 15.8% (95%CI: 3.4-39.6%). CONCLUSIONS: A nearly 10% overall risk of neurologic and hearing adverse outcomes was found in ZIKV-exposed children born to ZIKV-infected pregnant travelers. Longer-term follow-up of these children is needed to assess whether there are any later-onset manifestations.
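The wide interval around the overall CZS incidence (3 cases among 143 followed infants) can be reproduced with the standard exact binomial method; this assumes, plausibly but not verifiably from the abstract alone, that the quoted CI is a Clopper-Pearson interval.

```python
from scipy.stats import beta

# Exact (Clopper-Pearson) 95% CI for 3 CZS cases among 143 infants.
x, n = 3, 143
lower = beta.ppf(0.025, x, n - x + 1)
upper = beta.ppf(0.975, x + 1, n - x)
print(f"{x / n:.1%} (95% CI {lower:.1%}-{upper:.1%})")
```

With only three events, the upper limit sits almost three times above the point estimate, which is why the subgroup estimate (15.8% among confirmed mothers) carries an even wider interval.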

7.
Lancet Psychiatry ; 6(11): 903-914, 2019 11.
Article in English | MEDLINE | ID: mdl-31543474

ABSTRACT

BACKGROUND: Depression is usually managed in primary care, but most antidepressant trials are of patients from secondary care mental health services, with eligibility criteria based on diagnosis and severity of depressive symptoms. Antidepressants are now used in a much wider group of people than in previous regulatory trials. We investigated the clinical effectiveness of sertraline in patients in primary care with depressive symptoms ranging from mild to severe and tested the role of severity and duration in treatment response. METHODS: The PANDA study was a pragmatic, multicentre, double-blind, placebo-controlled randomised trial of patients from 179 primary care surgeries in four UK cities (Bristol, Liverpool, London, and York). We included patients aged 18 to 74 years who had depressive symptoms of any severity or duration in the past 2 years, where there was clinical uncertainty about the benefit of an antidepressant. This strategy was designed to improve the generalisability of our sample to current use of antidepressants within primary care. Patients were randomly assigned (1:1) with a remote computer-generated code to sertraline or placebo, and were stratified by severity, duration, and site with random block length. Patients received one capsule (sertraline 50 mg or placebo orally) daily for one week then two capsules daily for up to 11 weeks, consistent with evidence on optimal dosages for efficacy and acceptability. The primary outcome was depressive symptoms 6 weeks after randomisation, measured by Patient Health Questionnaire, 9-item version (PHQ-9) scores. Secondary outcomes at 2, 6 and 12 weeks were depressive symptoms and remission (PHQ-9 and Beck Depression Inventory-II), generalised anxiety symptoms (Generalised Anxiety Disorder Assessment 7-item version), mental and physical health-related quality of life (12-item Short-Form Health Survey), and self-reported improvement. All analyses compared groups as randomised (intention-to-treat). 
The study is registered with EudraCT, 2013-003440-22 (protocol number 13/0413; version 6.1) and ISRCTN, ISRCTN84544741, and is closed to new participants. FINDINGS: Between Jan 1, 2015, and Aug 31, 2017, we recruited and randomly assigned 655 patients: 326 (50%) to sertraline and 329 (50%) to placebo. Two patients in the sertraline group did not complete a substantial proportion of the baseline assessment and were excluded, leaving 653 patients in total. Due to attrition, primary outcome analyses were of 550 patients (266 in the sertraline group and 284 in the placebo group; 85% follow-up that did not differ by treatment allocation). We found no evidence that sertraline led to a clinically meaningful reduction in depressive symptoms at 6 weeks. The mean 6-week PHQ-9 score was 7·98 (SD 5·63) in the sertraline group and 8·76 (5·86) in the placebo group (adjusted proportional difference 0·95, 95% CI 0·85-1·07; p=0·41). However, for secondary outcomes, we found evidence that sertraline led to reduced anxiety symptoms, better mental (but not physical) health-related quality of life, and self-reported improvements in mental health. We observed weak evidence that depressive symptoms were reduced by sertraline at 12 weeks. We recorded seven adverse events (four for sertraline and three for placebo); adverse events did not differ by treatment allocation. Three adverse events were classified as serious: two in the sertraline group and one in the placebo group. One serious adverse event in the sertraline group was classified as possibly related to study medication. INTERPRETATION: Sertraline is unlikely to reduce depressive symptoms within 6 weeks in primary care, but we observed improvements in anxiety, quality of life, and self-rated mental health, which are likely to be clinically important.
Our findings support the prescription of SSRI antidepressants in a wider group of participants than previously thought, including those with mild to moderate symptoms who do not meet diagnostic criteria for depression or generalised anxiety disorder. FUNDING: National Institute for Health Research.


Subject(s)
Depressive Disorder/drug therapy , Primary Health Care/methods , Selective Serotonin Reuptake Inhibitors/therapeutic use , Sertraline/therapeutic use , Adolescent , Adult , Aged , Double-Blind Method , Female , Humans , Male , Middle Aged , Severity of Illness Index , Time Factors , Treatment Outcome , United Kingdom , Young Adult
9.
PLoS One ; 13(12): e0208652, 2018.
Article in English | MEDLINE | ID: mdl-30557408

ABSTRACT

BACKGROUND: Seroprevalence surveys of Chlamydia trachomatis (CT) antibodies are promising for estimating age-specific CT cumulative incidence; however, accurate estimates require improved understanding of antibody response to CT infection. METHODS: We used GUMCAD, England's national sexually transmitted infection (STI) surveillance system, to select sera taken from female STI clinic attendees on the day of or after a chlamydia diagnosis. Serum specimens were collected from laboratories and tested anonymously on an indirect and a double-antigen ELISA, both of which are based on the CT-specific Pgp3 antigen. We used cross-sectional and longitudinal descriptive analyses to explore the relationship between seropositivity and a) cumulative number of chlamydia diagnoses and b) time since most recent chlamydia diagnosis. RESULTS: 919 samples were obtained from visits when chlamydia was diagnosed and 812 during subsequent follow-up visits. Pgp3 seropositivity using the indirect ELISA increased from 57.1% (95% confidence interval: 53.2-60.7) on the day of a first-recorded chlamydia diagnosis to 89.6% (95%CI: 79.3-95.0) on the day of a third or higher documented diagnosis. With the double-antigen ELISA, the increase was from 61.1% (95%CI: 53.2-60.7) to 97.0% (95%CI: 88.5-99.3). Seropositivity decreased with time since CT diagnosis on only the indirect assay, to 49.3% (95%CI: 40.9-57.7) two or more years after a first diagnosis and 51.9% (95%CI: 33.2-70.0) after a repeat diagnosis. CONCLUSION: Seropositivity increased with cumulative number of infections, and decreased over time after diagnosis on the indirect ELISA, but not on the double-antigen ELISA. This is the first study to demonstrate the combined impact of number of chlamydia diagnoses, time since diagnosis, and specific ELISA on Pgp3 seropositivity.
Our findings are being used to inform models estimating age-specific chlamydia incidence over time using serial population-representative serum sample collections, to enable accurate public health monitoring of chlamydia.
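Proportion CIs like those quoted above are typically computed with a score or exact method. A Wilson score interval, for example, looks like this; the group size of 700 is a hypothetical illustration, since per-group denominators are not reported in the abstract.

```python
import math

# Wilson score 95% CI for a proportion, a standard choice for
# seropositivity estimates. p_hat and n below are illustrative only.
def wilson_ci(p_hat, n, z=1.96):
    centre = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z / (1 + z**2 / n)) * math.sqrt(
        p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)
    )
    return centre - half, centre + half

lo, hi = wilson_ci(0.571, 700)
print(f"57.1% (95% CI {lo:.1%}-{hi:.1%})")
```

Unlike the naive Wald interval, the Wilson interval remains well behaved for proportions near 0 or 1, which matters for the near-97% seropositivity figures above.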


Subject(s)
Antibodies, Bacterial/blood , Antibodies, Bacterial/immunology , Antigens, Bacterial/immunology , Bacterial Proteins/immunology , Chlamydia trachomatis/immunology , Adolescent , Adult , Chlamydia Infections/blood , Chlamydia Infections/diagnosis , Chlamydia Infections/epidemiology , Chlamydia Infections/immunology , Cross-Sectional Studies , England , Epidemiological Monitoring , Female , Follow-Up Studies , Humans , Immunoglobulin G/blood , Longitudinal Studies , Seroepidemiologic Studies , Young Adult
10.
Med Decis Making ; 38(2): 200-211, 2018 02.
Article in English | MEDLINE | ID: mdl-28823204

ABSTRACT

Standard methods for indirect comparisons and network meta-analysis are based on aggregate data, with the key assumption that there is no difference between the trials in the distribution of effect-modifying variables. Methods which relax this assumption are becoming increasingly common for submissions to reimbursement agencies, such as the National Institute for Health and Care Excellence (NICE). These methods use individual patient data from a subset of trials to form population-adjusted indirect comparisons between treatments, in a specific target population. Recently proposed population adjustment methods include the Matching-Adjusted Indirect Comparison (MAIC) and the Simulated Treatment Comparison (STC). Despite increasing popularity, MAIC and STC remain largely untested. Furthermore, there is a lack of clarity about exactly how and when they should be applied in practice, and even whether the results are relevant to the decision problem. There is therefore a real and present risk that the assumptions being made in one submission to a reimbursement agency are fundamentally different to, or even incompatible with, the assumptions being made in another for the same indication. We describe the assumptions required for population-adjusted indirect comparisons, and demonstrate how these may be used to generate comparisons in any given target population. We distinguish between anchored and unanchored comparisons according to whether a common comparator arm is used or not. Unanchored comparisons make much stronger assumptions, which are widely regarded as infeasible. We provide recommendations on how and when population adjustment methods should be used, and the supporting analyses that are required to provide statistically valid, clinically meaningful, transparent and consistent results for the purposes of health technology appraisal.
Simulation studies are needed to examine the properties of population adjustment methods and their robustness to breakdown of assumptions.
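The matching step in MAIC can be sketched in miniature under the usual method-of-moments formulation, here with a single covariate and synthetic individual patient data (IPD); real applications balance several covariates jointly and then compare weighted outcomes through the common anchor arm.

```python
import numpy as np

# Minimal one-covariate MAIC sketch: find weights w_i = exp(alpha * x_i)
# (x centred at the target-population mean) so the weighted IPD covariate
# mean matches the comparator trial's reported mean. Data are synthetic.
rng = np.random.default_rng(1)
x_ipd = rng.normal(60, 8, size=500)   # e.g. age in the IPD trial
target_mean = 55.0                    # reported mean age in the comparator trial

xc = x_ipd - target_mean              # centre covariate at the target mean
alpha = 0.0
for _ in range(50):                   # Newton's method on sum(exp(a*xc) * xc) = 0
    w = np.exp(alpha * xc)
    grad = np.sum(w * xc)
    hess = np.sum(w * xc**2)          # strictly positive, so Newton is stable here
    alpha -= grad / hess

w = np.exp(alpha * xc)
ess = w.sum() ** 2 / (w**2).sum()     # effective sample size after weighting
print(np.average(x_ipd, weights=w), ess)
```

The effective sample size shrinks as the populations diverge, which is one concrete reason the supporting analyses recommended above (overlap and ESS reporting) matter.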


Subject(s)
Comparative Effectiveness Research , Technology Assessment, Biomedical/methods , Algorithms , Cost-Benefit Analysis , Technology Assessment, Biomedical/statistics & numerical data
11.
Trials ; 18(1): 496, 2017 Oct 24.
Article in English | MEDLINE | ID: mdl-29065916

ABSTRACT

BACKGROUND: Depressive symptoms are usually managed within primary care and antidepressant medication constitutes the first-line treatment. It remains unclear at present which people are more likely to benefit from antidepressant medication. This paper describes the protocol for a randomised controlled trial (PANDA) to investigate the severity and duration of depressive symptoms that are associated with a clinically significant response to sertraline compared to placebo, in people presenting to primary care with depression. METHODS/DESIGN: PANDA is a randomised, double blind, placebo controlled trial in which participants are individually randomised to sertraline or placebo. Eligible participants are those who are between the ages of 18 to 74; have presented to primary care with depression or low mood during the past 2 years; have not received antidepressant or anti-anxiety medication in the 8 weeks prior to enrolment in the trial and there is clinical equipoise about the benefits of selective serotonin reuptake inhibitor (SSRI) medication. Participants who consent to participate in the trial are randomised to receive either sertraline or matching placebo, starting at 50 mg daily for 1 week, increasing to 100 mg daily for up to 11 weeks (with the option of increasing to 150 mg if required). Participants, general practitioners (GPs) and the research team will be blind to treatment allocation. The primary outcome will be depressive symptoms measured by the Patient Health Questionnaire-9 (PHQ-9) at 6 weeks post randomisation, measured as a continuous outcome. 
Secondary outcomes include depressive symptoms measured with the PHQ-9 at 2 and 12 weeks as a continuous outcome and at 2, 6 and 12 weeks as a binary outcome; follow-up scores on depressive symptoms measured with the Beck Depression Inventory-II, anxiety symptoms measured by the Generalized Anxiety Disorder-7 and quality of life measured with the Euroqol-5D-5L and Short Form-12; emotional processing task scores measured at baseline, 2 and 6 weeks; and costs associated with healthcare use, time off work and personal costs. DISCUSSION: The PANDA trial uses a simple self-administered measure to establish the severity and duration of depressive symptoms associated with a clinically significant response to sertraline. The evidence from the trial will inform primary care prescribing practice by identifying which patients are more likely to benefit from antidepressants. TRIAL REGISTRATION: Controlled Trials ISRCTN Registry, ISRCTN84544741 . Registered on 20 March 2014. EudraCT Number: 2013-003440-22; Protocol Number: 13/0413 (version 6.1).


Subject(s)
Affect/drug effects , Antidepressive Agents/therapeutic use , Depression/drug therapy , Selective Serotonin Reuptake Inhibitors/therapeutic use , Sertraline/therapeutic use , Adolescent , Adult , Aged , Antidepressive Agents/adverse effects , Clinical Protocols , Depression/diagnosis , Depression/psychology , Double-Blind Method , England , Female , Humans , Male , Mental Health , Middle Aged , Patient Health Questionnaire , Quality of Life , Research Design , Selective Serotonin Reuptake Inhibitors/adverse effects , Sertraline/adverse effects , Severity of Illness Index , Time Factors , Treatment Outcome , Young Adult
12.
Med Decis Making ; 37(4): 353-366, 2017 05.
Article in English | MEDLINE | ID: mdl-27681990

ABSTRACT

BACKGROUND: Estimates of life expectancy are a key input to cost-effectiveness analysis (CEA) models for cancer treatments. Due to the limited follow-up in Randomized Controlled Trials (RCTs), parametric models are frequently used to extrapolate survival outcomes beyond the RCT period. However, different parametric models that fit the RCT data equally well may generate highly divergent predictions of treatment-related gain in life expectancy. Here, we investigate the use of information external to the RCT data to inform model choice and estimation of life expectancy. METHODS: We used Bayesian multi-parameter evidence synthesis to combine the RCT data with external information on general population survival, conditional survival from cancer registry databases, and expert opinion. We illustrate with a 5-year follow-up RCT of cetuximab plus radiotherapy v. radiotherapy alone for head and neck cancer. RESULTS: Standard survival time distributions were insufficiently flexible to simultaneously fit both the RCT data and external data on general population survival. Using spline models, we were able to estimate a model that was consistent with the trial data and all external data. A model integrating all sources achieved an adequate fit and predicted a 4.7-month (95% CrI: 0.4-9.1) gain in life expectancy due to cetuximab. CONCLUSIONS: Long-term extrapolation using parametric models based on RCT data alone is highly unreliable and these models are unlikely to be consistent with external data. External data can be integrated with RCT data using spline models to enable long-term extrapolation. Conditional survival data could be used for many cancers and general population survival may have a role in other conditions. The use of external data should be guided by knowledge of natural history and treatment mechanisms.
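The core problem can be illustrated numerically: two parametric curves constrained to agree at the end of trial follow-up can still imply materially different (restricted) mean survival over a lifetime horizon. The parameters below are illustrative, not the cetuximab analysis's fitted values.

```python
import numpy as np

# Restricted mean survival time (RMST) by trapezoidal integration.
def rmst(surv, horizon, steps=4000):
    t = np.linspace(0.0, horizon, steps + 1)
    y = surv(t)
    dt = t[1] - t[0]
    return dt * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

# Two curves tuned to agree at a 5-year trial horizon, then extrapolated.
exponential = lambda t: np.exp(-0.25 * t)        # S(5) = exp(-1.25)
weibull = lambda t: np.exp(-(t / 3.635) ** 0.7)  # scale chosen so S(5) ~ exp(-1.25)

gap_at_followup = abs(exponential(5.0) - weibull(5.0))  # tiny within the trial window
le_exp = rmst(exponential, 40.0)                        # ~4.0 years
le_wei = rmst(weibull, 40.0)                            # noticeably larger tail
print(gap_at_followup, le_exp, le_wei)
```

Both curves are nearly indistinguishable at the 5-year follow-up point, yet the heavier-tailed Weibull adds roughly half a year of mean survival over the 40-year horizon, which is exactly the divergence that external long-term data are meant to discipline.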


Subject(s)
Cost-Benefit Analysis/methods , Models, Statistical , Neoplasms/mortality , Neoplasms/therapy , Survival Analysis , Age Factors , Bayes Theorem , Head and Neck Neoplasms/mortality , Head and Neck Neoplasms/therapy , Humans , Randomized Controlled Trials as Topic/statistics & numerical data , SEER Program , Sex Factors
13.
J Clin Epidemiol ; 77: 68-77, 2016 09.
Article in English | MEDLINE | ID: mdl-26994662

ABSTRACT

OBJECTIVES: We present a meta-analytic method that combines information on treatment effects from different instruments from a network of randomized trials to estimate instrument relative responsiveness. STUDY DESIGN AND SETTING: Five depression-test instruments [Beck Depression Inventory (BDI I/II), Patient Health Questionnaire (PHQ9), Hamilton Rating for Depression 17 and 24 items, Montgomery-Asberg Depression Rating] and three generic quality of life measures [EuroQoL (EQ-5D), SF36 mental component summary (SF36 MCS), and physical component summary (SF36 PCS)] were compared. Randomized trials of treatments for depression reporting outcomes on any two or more of these instruments were identified. Information on the within-trial ratios of standardized treatment effects was pooled across the studies to estimate relative responsiveness. RESULTS: The between-instrument ratios of standardized treatment effects vary across trials, with a coefficient of variation of 13% (95% credible interval: 6%, 25%). There were important differences between the depression measures, with PHQ9 being the most responsive instrument and BDI the least. Responsiveness of the EQ-5D and SF36 PCS was poor. SF36 MCS performed similarly to depression instruments. CONCLUSION: Information on relative responsiveness of several test instruments can be pooled across networks of trials reporting at least two outcomes, allowing comparison and ranking of test instruments that may never have been compared directly.


Subject(s)
Depressive Disorder/psychology , Depressive Disorder/therapy , Psychiatric Status Rating Scales/statistics & numerical data , Quality of Life/psychology , Humans , Randomized Controlled Trials as Topic , Surveys and Questionnaires , Treatment Outcome
14.
Med Decis Making ; 35(7): 859-71, 2015 10.
Article in English | MEDLINE | ID: mdl-25986470

ABSTRACT

Decision makers in different health care settings need to weigh the benefits and harms of alternative treatment strategies. Such health care decisions include marketing authorization by regulatory agencies, practice guideline formulation by clinical groups, and treatment selection by prescribers and patients in clinical practice. Multiple criteria decision analysis (MCDA) is a family of formal methods that help make explicit the tradeoffs that decision makers accept between the benefit and risk outcomes of different treatment options. Despite the recent interest in MCDA, certain methodological aspects are poorly understood. This paper presents 7 guidelines for applying MCDA in benefit-risk assessment and illustrates their use in the selection of a statin drug for the primary prevention of cardiovascular disease. We provide guidance on the key methodological issues of how to define the decision problem, how to select a set of nonoverlapping evaluation criteria, how to synthesize and summarize the evidence, how to translate relative measures to absolute ones that permit comparisons between the criteria, how to define suitable scale ranges, how to elicit partial preference information from the decision makers, and how to incorporate uncertainty in the analysis. Our example on statins indicates that fluvastatin is likely to be the most preferred drug by our decision maker and that this result is insensitive to the amount of preference information incorporated in the analysis.
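A linear-additive MCDA model of the kind described reduces to normalising each criterion onto a 0-1 partial value scale and taking a weighted sum. The toy sketch below uses entirely hypothetical alternatives, criterion values and weights; it illustrates the mechanics, not the paper's statin analysis.

```python
import numpy as np

# Toy linear-additive MCDA benefit-risk sketch. All inputs are hypothetical.
criteria = ["LDL reduction", "myopathy risk", "annual cost"]
higher_is_better = np.array([True, False, False])

# rows: drug A, drug B; columns follow `criteria`
raw = np.array([[0.40, 0.002, 120.0],
                [0.30, 0.001,  60.0]])

lo, hi = raw.min(axis=0), raw.max(axis=0)
scores = (raw - lo) / (hi - lo)                       # linear 0-1 partial values
scores[:, ~higher_is_better] = 1 - scores[:, ~higher_is_better]

weights = np.array([0.6, 0.25, 0.15])                 # elicited weights, sum to 1
overall = scores @ weights
print(dict(zip(["drug A", "drug B"], overall)))
```

In a real application the scale ranges, partial value functions and weights would come from the elicitation steps the guidelines describe, and the uncertainty analysis would propagate imprecision in both evidence and preferences.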


Subject(s)
Decision Making , Hydroxymethylglutaryl-CoA Reductase Inhibitors/therapeutic use , Primary Prevention , Humans , Risk Assessment , Uncertainty
15.
Med Decis Making ; 35(5): 608-21, 2015 07.
Article in English | MEDLINE | ID: mdl-25712447

ABSTRACT

Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk.
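When no effect-modifying covariates are available, the random-effects mean discussed above is commonly estimated with the DerSimonian-Laird moment estimator. A compact sketch with synthetic study data:

```python
import numpy as np

# DerSimonian-Laird random-effects pooling. Effect sizes (e.g. log odds
# ratios) and within-study variances below are synthetic examples.
y = np.array([-0.50, -0.20, -0.80, 0.10, -0.40])   # study effect estimates
v = np.array([0.04, 0.06, 0.09, 0.05, 0.07])       # within-study variances

w = 1 / v                                          # fixed-effect weights
q = np.sum(w * (y - np.sum(w * y) / np.sum(w))**2) # Cochran's Q
tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (v + tau2)                              # random-effects weights
mu = np.sum(w_re * y) / np.sum(w_re)               # random-effects mean
se = np.sqrt(1 / np.sum(w_re))
print(mu, se, tau2)
```

The between-study variance tau2 is what makes the choice of summary (random-effects mean, predictive distribution, or shrunken study-specific estimate) consequential for the CEA: the larger tau2 is, the more those summaries diverge.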


Subject(s)
Cost-Benefit Analysis/methods , Decision Support Techniques , Bayes Theorem , Bias , Humans , Meta-Analysis as Topic , Regression Analysis , Sepsis/drug therapy
16.
Crit Care ; 18(6): 649, 2014 Dec 01.
Article in English | MEDLINE | ID: mdl-25434816

ABSTRACT

INTRODUCTION: Prior to investing in a large, multicentre randomised controlled trial (RCT), the National Institute for Health Research in the UK called for an evaluation of the feasibility and value for money of undertaking a trial on intravenous immunoglobulin (IVIG) as an adjuvant therapy for severe sepsis/septic shock. METHODS: In response to this call, this study assessed the clinical and cost-effectiveness of IVIG (using a decision model), and evaluated the value of conducting an RCT (using expected value of information (EVI) analysis). The evidence informing such assessments was obtained through a series of systematic reviews and meta-analyses. Further primary data analyses were also undertaken using the Intensive Care National Audit & Research Centre Case Mix Programme Database, and a Scottish Intensive Care Society research study. RESULTS: We found a large degree of statistical heterogeneity in the clinical evidence on treatment effect, and the source of such heterogeneity was unclear. The incremental cost-effectiveness ratio of IVIG is within the borderline region of estimates considered to represent value for money, but results appear highly sensitive to the choice of model used for clinical effectiveness. This was also the case with EVI estimates, with maximum payoffs from conducting a further clinical trial between £137 million and £1,011 million. CONCLUSIONS: Our analyses suggest that there is a need for a further RCT. Results on the value of conducting such research, however, were sensitive to the clinical effectiveness model used, reflecting the high level of heterogeneity in the evidence base.
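The logic of an EVI analysis can be shown in miniature via per-patient expected value of perfect information (EVPI): the expected gain from resolving uncertainty before choosing a strategy, obtained by Monte Carlo simulation. The net-benefit distributions below are synthetic, not the IVIG analysis's inputs.

```python
import numpy as np

# Monte Carlo per-patient EVPI sketch with two strategies and
# synthetic net-benefit distributions (monetary units are arbitrary).
rng = np.random.default_rng(7)
n_sims = 100_000
nb_standard = rng.normal(10_000, 2_000, n_sims)   # net benefit, usual care
nb_new = rng.normal(10_500, 3_000, n_sims)        # net benefit, new therapy

value_current = max(nb_standard.mean(), nb_new.mean())  # decide now, on averages
value_perfect = np.maximum(nb_standard, nb_new).mean()  # decide knowing the truth
evpi = value_perfect - value_current
print(round(evpi, 1))
```

Scaling per-patient EVPI by the size and time horizon of the affected population gives the population EVPI, the upper bound on the payoff from further research that analyses like the one above report.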


Subject(s)
Cost-Benefit Analysis/methods , Immunoglobulins, Intravenous/administration & dosage , Immunoglobulins, Intravenous/economics , Randomized Controlled Trials as Topic/economics , Shock, Septic/drug therapy , Shock, Septic/economics , Aged , Decision Support Techniques , Female , Humans , Male , Middle Aged , Sepsis/drug therapy , Sepsis/economics , Survival Rate/trends , Treatment Outcome
17.
Int J Epidemiol ; 43(6): 1865-73, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25172138

ABSTRACT

BACKGROUND: Before their diagnosis, patients with cancer present in primary care more frequently than do matched controls. This has raised hopes that earlier investigation in primary care could lead to earlier stage at diagnosis. METHODS: We re-analysed primary care symptom data collected from 247 lung cancer cases and 1235 matched controls in Devon, UK. We identified the most sensitive and specific definition of symptoms, and estimated its incidence in cases and controls prior to diagnosis. We estimated the symptom lead time (SLT) distribution (the time between symptoms attributable to cancer and diagnosis), taking account of the investigations already carried out in primary care. The impact of route of diagnosis on stage at diagnosis was also examined. RESULTS: Symptom incidence in cases was higher than in controls 2 years before diagnosis, accelerating markedly in the last 6 months. The median SLT was under 3 months, with mean 5.3 months [95% credible interval (CrI) 4.5-6.1] and did not differ by stage at diagnosis. An earlier stage at diagnosis was observed in patients identified through chest X-rays originating in primary care. CONCLUSIONS: Most symptoms preceded clinical diagnosis by only a few months. Symptom-based investigation would lengthen lead times and result in earlier stage at diagnosis in a small proportion of cases, but would be far less effective than standard screening targeted at smokers.


Subject(s)
Chest Pain/epidemiology , Cough/epidemiology , Dyspnea/epidemiology , Fatigue/epidemiology , Lung Neoplasms/epidemiology , Smoking/epidemiology , Weight Loss , Case-Control Studies , Delayed Diagnosis/prevention & control , Disease Progression , Early Detection of Cancer , England/epidemiology , Humans , Incidence , Lung Neoplasms/diagnosis , Lung Neoplasms/pathology , Neoplasm Staging , ROC Curve , Risk Assessment , Time Factors
18.
Med Decis Making ; 34(3): 327-42, 2014 04.
Article in English | MEDLINE | ID: mdl-24449434

ABSTRACT

Expected value of information methods evaluate the potential health benefits that can be obtained from conducting new research to reduce uncertainty in the parameters of a cost-effectiveness analysis model, hence reducing decision uncertainty. Expected value of partial perfect information (EVPPI) provides an upper limit to the health gains that can be obtained from conducting a new study on a subset of parameters in the cost-effectiveness analysis and can therefore be used as a sensitivity analysis to identify parameters that most contribute to decision uncertainty and to help guide decisions around which types of study are of most value to prioritize for funding. A common general approach is to use nested Monte Carlo simulation to obtain an estimate of EVPPI. This approach is computationally intensive, can lead to significant sampling bias if an inadequate number of inner samples are obtained, and incorrect results can be obtained if correlations between parameters are not dealt with appropriately. In this article, we set out a range of methods for estimating EVPPI that avoid the need for nested simulation: reparameterization of the net benefit function, Taylor series approximations, and restricted cubic spline estimation of conditional expectations. For each method, we set out the generalized functional form that net benefit must take for the method to be valid. By specifying this functional form, our methods are able to focus on components of the model in which approximation is required, avoiding the complexities involved in developing statistical approximations for the model as a whole. Our methods also allow for any correlations that might exist between model parameters. We illustrate the methods using an example of fluid resuscitation in African children with severe malaria.
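The single-level regression idea described above can be sketched on a toy model. In the sketch below (an illustration, not the malaria example from the paper), the net benefit of one treatment depends on a parameter of interest plus a nuisance parameter; regressing simulated net benefit on the parameter of interest estimates the conditional expectation directly, avoiding the nested inner loop. A low-order polynomial stands in for the restricted cubic splines the abstract mentions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Toy decision model (illustrative only): treatment 0 has net benefit 0;
# treatment 1's net benefit depends on theta (the parameter a new study
# could inform) plus an independent nuisance parameter.
theta = rng.normal(0.0, 1.0, n)
nuisance = rng.normal(0.0, 0.5, n)
nb = np.column_stack([np.zeros(n), theta + nuisance])

# Regression estimator of EVPPI: fit E[NB_d | theta] for each decision
# option d, then compare the expected maximum of the fitted conditional
# expectations with the maximum of the unconditional means.
fitted = np.column_stack(
    [np.polyval(np.polyfit(theta, nb[:, d], 3), theta) for d in range(2)]
)
evppi = fitted.max(axis=1).mean() - nb.mean(axis=0).max()
print(evppi)  # close to 1/sqrt(2*pi) ~ 0.399 for this toy model
```

Because the regression only needs the one outer set of probabilistic-sensitivity-analysis samples, the cost of the inner simulation loop, and its small-sample bias, are avoided entirely.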


Subject(s)
Computational Biology , Decision Making , Health Priorities , Monte Carlo Method , Uncertainty
19.
Med Decis Making ; 34(3): 352-65, 2014 04.
Article in English | MEDLINE | ID: mdl-24085289

ABSTRACT

Expected value of sample information (EVSI) measures the anticipated net benefit gained from conducting new research with a specific design to add to the evidence on which reimbursement decisions are made. Cluster randomized trials raise specific issues for EVSI calculations because 1) a hierarchical model is necessary to account for between-cluster variability when incorporating new evidence and 2) heterogeneity between clusters needs to be carefully characterized in the cost-effectiveness analysis model. Multi-arm trials provide parameter estimates that are correlated, which needs to be accounted for in EVSI calculations. Furthermore, EVSI is computationally intensive when the net benefit function is nonlinear, due to the need for an inner-simulation step. We develop a method for the computation of EVSI that avoids the inner simulation step for cluster randomized multi-arm trials with a binary outcome, where the net benefit function is linear in the probability of an event but nonlinear in the log-odds ratio parameters. We motivate and illustrate the method with an example of a cluster randomized 2 × 2 factorial trial for interventions to increase attendance at breast screening in the UK, using a previously reported cost-effectiveness model. We highlight assumptions made in our approach, extensions to individually randomized trials and inclusion of covariates, and areas for further developments. We discuss computation time, the research-design space, and the ethical implications of an EVSI approach. We suggest that EVSI is a practical and appropriate tool for the design of cluster randomized trials.
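The key computational point above — that a net benefit linear in the event probability lets the inner expectation be taken in closed form — can be sketched with a toy two-arm trial. Everything below is illustrative (not the breast-screening model or the authors' hierarchical method): Beta priors on each arm's event probability are updated by conjugacy, so the posterior expected net benefit needs only the posterior mean, and the EVSI simulation has a single outer loop:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy two-arm setup (illustrative only): event probabilities have Beta
# priors; net benefit is linear in the probability, so the posterior
# expected net benefit depends only on the posterior mean of p.
a = np.array([12.0, 9.0])     # prior Beta alpha (prior events)
b = np.array([28.0, 31.0])    # prior Beta beta (prior non-events)
value = 10_000.0              # hypothetical value per event avoided
cost = np.array([0.0, 500.0]) # hypothetical cost per arm

def enb(p_mean):
    return value * (1 - p_mean) - cost

n_trial, n_sim = 200, 50_000
current = enb(a / (a + b)).max()   # best option on current evidence

# Outer loop only: simulate trial outcomes, update by conjugacy.
p = rng.beta(a, b, size=(n_sim, 2))
x = rng.binomial(n_trial, p)                 # simulated trial results
post_mean = (a + x) / (a + b + n_trial)      # closed-form posterior mean
evsi = enb(post_mean).max(axis=1).mean() - current
print(evsi)  # non-negative up to Monte Carlo error
```

The cluster-randomised case in the paper adds a hierarchical layer for between-cluster variability, but the linearity trick that removes the inner simulation is the same.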


Subject(s)
Randomized Controlled Trials as Topic , Cluster Analysis , Cost-Benefit Analysis
20.
Br J Gen Pract ; 61(591): e620-7, 2011 Oct.
Article in English | MEDLINE | ID: mdl-22152833

ABSTRACT

BACKGROUND: Haemoglobinopathies, including sickle cell disease and thalassaemia (SCT), are inherited disorders of haemoglobin. Antenatal screening for SCT rarely occurs before 10 weeks of pregnancy. AIM: To explore the cost-effectiveness of offering SCT screening in a primary care setting, during the pregnancy confirmation visit. DESIGN AND SETTING: A model-based cost-effectiveness analysis of inner-city areas with a high proportion of residents from ethnic minority groups. METHOD: Comparison was made of three SCT screening approaches: 'primary care parallel' (primary care screening with test offered to mother and father together); 'primary care sequential' (primary care screening with test offered to the mother and then the father only if the mother is a carrier); and 'midwife care' (sequential screening at the first midwife consultation). The model was populated with data from the SHIFT (Screening for Haemoglobinopathies In First Trimester) trial and other sources. RESULTS: Compared with midwife care, primary care sequential had a higher NHS cost of £34,000 per 10,000 pregnancies (95% confidence interval [CI] = £15,000 to £51,000) and an increase of 2623 women screened (95% CI = 1359 to 4495), giving a cost per additional woman screened by 10 weeks of £13. Primary care parallel was dominated by primary care sequential, with both higher costs and fewer women screened. CONCLUSION: The policy judgement is whether an earlier opportunity for informed reproductive choice has a value of at least £13. Further work is required to understand the value attached to earlier informed reproductive choices.
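The headline £13 figure follows directly from the two reported incremental quantities; a quick check of the arithmetic (using only the numbers stated in the abstract):

```python
# Incremental NHS cost and additional women screened per 10,000 pregnancies,
# as reported for primary care sequential vs. midwife care.
incremental_cost = 34_000    # pounds
additional_screened = 2_623  # extra women screened by 10 weeks

cost_per_additional_woman = incremental_cost / additional_screened
print(round(cost_per_additional_woman))  # 13 pounds per additional woman screened
```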


Subject(s)
Anemia, Sickle Cell/economics , Pregnancy Complications, Hematologic/economics , Prenatal Diagnosis/economics , Primary Health Care/economics , Thalassemia/economics , Abortion, Induced/economics , Anemia, Sickle Cell/diagnosis , Cluster Analysis , Cost-Benefit Analysis , Counseling/economics , Female , Humans , London , Pregnancy , Pregnancy Complications, Hematologic/diagnosis , Prenatal Diagnosis/methods , Thalassemia/diagnosis