ABSTRACT
The relationship between prevalence of infection and severe outcomes such as hospitalisation and death changed over the course of the COVID-19 pandemic. Reliable estimates of the infection fatality ratio (IFR) and infection hospitalisation ratio (IHR), along with the time delay between infection and hospitalisation/death, can inform forecasts of the numbers and timing of severe outcomes and allow healthcare services to better prepare for periods of increased demand. The REal-time Assessment of Community Transmission-1 (REACT-1) study estimated swab positivity for Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) infection in England approximately monthly from May 2020 to March 2022. Here, we analyse the changing relationship between prevalence of swab positivity and the IFR and IHR over this period in England, using publicly available data for the daily number of deaths and hospitalisations, REACT-1 swab positivity data, time-delay models, and Bayesian P-spline models. We analyse data for all age groups together, as well as in two subgroups: those aged 65 and over and those aged 64 and under. Additionally, we analyse the relationship between swab positivity and daily case numbers to estimate the case ascertainment rate of England's mass testing programme. During 2020, we estimated the IFR to be 0.67% and the IHR to be 2.6%. By late 2021/early 2022, the IFR and IHR had decreased to 0.097% and 0.76%, respectively. The average case ascertainment rate over the entire duration of the study was estimated to be 36.1%, though continuous estimates of the case ascertainment rate varied significantly over time. Continuous estimates of the IFR and IHR increased during the periods in which Alpha and Delta emerged. During periods of vaccination rollout and the emergence of the Omicron variant, the IFR and IHR decreased. During 2020, we estimated a time lag of 19 days between swab positivity and hospitalisation, and 26 days between swab positivity and death. By late 2021/early 2022, these time lags had decreased to 7 days for hospitalisations and 18 days for deaths. Even though many populations have high levels of immunity to SARS-CoV-2 from vaccination and natural infection, waning of immunity and variant emergence will continue to exert upward pressure on the IHR and IFR. As investments in community surveillance of SARS-CoV-2 infection are scaled back, alternative methods are required to accurately track the ever-changing relationship between infection, hospitalisation, and death and hence provide vital information for healthcare provision and utilisation.
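The core calculation behind an IFR or IHR estimate of this kind is a ratio of lagged severe outcomes to the infections implied by prevalence. The sketch below illustrates that logic with entirely synthetic data; the population size, mean duration of swab positivity, lag, and underlying IFR are assumptions for illustration only (the study itself fitted time-delay and Bayesian P-spline models).

```python
import numpy as np

# Hypothetical daily series (illustrative only, not REACT-1 data).
rng = np.random.default_rng(0)
days = 200
prevalence = 0.01 * np.exp(-0.5 * ((np.arange(days) - 100) / 30) ** 2)  # swab positivity
population = 56_000_000
mean_duration = 10   # assumed mean days an infection remains swab-positive
lag_deaths = 26      # assumed lag from swab positivity to death, days

# Incidence implied by prevalence: new infections ~ prevalence * population / duration
incidence = prevalence * population / mean_duration

# Simulate deaths as a lagged, Poisson-noised fraction of infections
true_ifr = 0.0067  # chosen to echo the reported 0.67%, purely for illustration
deaths = np.zeros(days)
deaths[lag_deaths:] = true_ifr * incidence[:-lag_deaths]
deaths = rng.poisson(deaths)

# IFR estimate: total lagged deaths over total implied infections
ifr_hat = deaths[lag_deaths:].sum() / incidence[:-lag_deaths].sum()
print(f"estimated IFR = {ifr_hat:.2%}")
```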
Subject(s)
COVID-19 , Humans , COVID-19/epidemiology , SARS-CoV-2 , Bayes Theorem , Pandemics , England/epidemiology , Hospitalization
ABSTRACT
BACKGROUND: We explore severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) antibody lateral flow immunoassay (LFIA) performance under field conditions compared to laboratory-based electrochemiluminescence immunoassay (ECLIA) and live virus neutralization. METHODS: In July 2021, 3758 participants performed, at home, a self-administered Fortress LFIA on finger-prick blood, reported and submitted a photograph of the result, and provided a self-collected capillary blood sample for assessment of immunoglobulin G (IgG) antibodies using the Roche Elecsys® Anti-SARS-CoV-2 ECLIA. We compared the self-reported LFIA result to the quantitative ECLIA and checked the reading of the LFIA result with an automated image analysis (ALFA). In a subsample of 250 participants, we compared the results to live virus neutralization. RESULTS: Almost all participants (3593/3758, 95.6%) had been vaccinated or reported prior infection. Overall, 2777/3758 (73.9%) were positive on self-reported LFIA, 2811/3457 (81.3%) were positive by LFIA when ALFA-reported, and 3622/3758 (96.4%) were positive on ECLIA (using the manufacturer's reference standard threshold for positivity of 0.8 U/mL). Live virus neutralization was detected in 169 of 250 randomly selected samples (67.6%); 133/169 were positive with self-reported LFIA (sensitivity 78.7%; 95% confidence interval [CI]: 71.8, 84.6), 142/155 (91.6%; 95% CI: 86.1, 95.5) with ALFA, and 169 (100%; 95% CI: 97.8, 100.0) with ECLIA. There were 81 samples with no detectable virus neutralization; 47/81 were negative with self-reported LFIA (specificity 58.0%; 95% CI: 46.5, 68.9), 34/75 (45.3%; 95% CI: 33.8, 57.3) with ALFA, and 0/81 (0%; 95% CI: 0, 4.5) with ECLIA. CONCLUSIONS: Self-administered LFIA is less sensitive than a quantitative antibody test, but LFIA positivity correlates better with virus neutralization than the quantitative ECLIA does.
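For reference, confidence intervals of the kind quoted above can be reproduced with an exact (Clopper-Pearson) binomial interval. A minimal sketch, using the reported counts of 133 LFIA-positives among 169 neutralisation-positive samples:

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) binomial confidence interval for a proportion."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# Reported LFIA sensitivity against neutralisation: 133 of 169 positive
k, n = 133, 169
lo, hi = clopper_pearson(k, n)
print(f"sensitivity {k/n:.1%} (95% CI {lo:.1%}, {hi:.1%})")  # ~78.7% (71.8%, 84.6%)
```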
Subject(s)
COVID-19 , SARS-CoV-2 , Humans , COVID-19/diagnosis , Self-Testing , Sensitivity and Specificity , Antibodies, Viral , Immunoassay/methods
ABSTRACT
Data System. The UK Department of Health and Social Care funded the REal-time Assessment of Community Transmission-2 (REACT-2) study to estimate community prevalence of SARS-CoV-2 IgG (immunoglobulin G) antibodies in England. Data Collection/Processing. We obtained random cross-sectional samples of adults from the National Health Service (NHS) patient list (near-universal coverage). We sent participants a lateral flow immunoassay (LFIA) self-test, and they reported the result online. Overall, 905 991 tests were performed (28.9% response) over 6 rounds of data collection (June 2020-May 2021). Data Analysis/Dissemination. We produced weighted estimates of LFIA test positivity (validated against neutralizing antibodies), adjusted for test performance, at local, regional, and national levels and by age, sex, ethnic group, and area-level deprivation score. In each round, fieldwork occurred over 2 weeks, with results reported to policymakers the following week. We disseminated results as preprints and peer-reviewed journal publications. Public Health Implications. REACT-2 estimated the scale and variation in antibody prevalence over time. Community self-testing and self-reporting produced rapid insights into the changing course of the pandemic and the impact of vaccine rollout, with implications for future surveillance. (Am J Public Health. 2023;113(11):1201-1209. https://doi.org/10.2105/AJPH.2023.307381).
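Adjusting raw test positivity for imperfect test performance is commonly done with the Rogan-Gladen estimator; the sketch below shows that calculation with illustrative inputs (the raw positivity, sensitivity, and specificity here are assumptions, not REACT-2 values, and the study's actual adjustment may have differed).

```python
def rogan_gladen(p_obs, sensitivity, specificity):
    """Adjust observed test positivity for imperfect sensitivity/specificity."""
    adj = (p_obs + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(adj, 0.0), 1.0)  # clip to the valid range [0, 1]

# Hypothetical round: 6.0% raw LFIA positivity, assumed 84% sensitivity, 98% specificity
print(f"adjusted prevalence = {rogan_gladen(0.060, 0.84, 0.98):.1%}")  # ~4.9%
```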
Subject(s)
COVID-19 , Adult , Humans , COVID-19/diagnosis , COVID-19/epidemiology , SARS-CoV-2 , Prevalence , Cross-Sectional Studies , State Medicine , Antibodies, Viral , Immunoglobulin G , England/epidemiology
ABSTRACT
Data System. The REal-time Assessment of Community Transmission-1 (REACT-1) Study was funded by the Department of Health and Social Care in England to provide reliable and timely estimates of prevalence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection over time, by person and place. Data Collection/Processing. The study team (researchers from Imperial College London and its logistics partner Ipsos) wrote to named individuals aged 5 years and older in random cross-sections of the population of England, using the National Health Service list of patients registered with a general practitioner (near-universal coverage) as a sampling frame. We collected data over 2 to 3 weeks approximately every month across 19 rounds of data collection from May 1, 2020, to March 31, 2022. Data Analysis/Dissemination. We have disseminated the data and study materials widely via the study Web site, preprints, publications in peer-reviewed journals, and the media. We make available data tabulations, suitably anonymized to protect participant confidentiality, on request to the study's data access committee. Public Health Implications. The study provided, inter alia, real-time data on SARS-CoV-2 prevalence over time, by area, and by sociodemographic variables; estimates of vaccine effectiveness; and symptom profiles; and it detected the emergence of new variants based on viral genome sequencing. (Am J Public Health. 2023;113(5):545-554. https://doi.org/10.2105/AJPH.2023.307230).
Subject(s)
COVID-19 , SARS-CoV-2 , Humans , England/epidemiology , Public Health , State Medicine , Cross-Sectional Studies
ABSTRACT
BACKGROUND: Following rapidly rising COVID-19 case numbers, England entered a national lockdown on 6 January 2021, with staged relaxations of restrictions from 8 March 2021 onwards. AIM: We characterise how the lockdown and subsequent easing of restrictions affected trends in SARS-CoV-2 infection prevalence. METHODS: On average, risk of infection is proportional to infection prevalence. The REal-time Assessment of Community Transmission-1 (REACT-1) study is a repeat cross-sectional study of over 98,000 people every round (rounds approximately monthly) that estimates infection prevalence in England. We used Bayesian P-splines to estimate prevalence and the time-varying reproduction number (Rt) nationally, regionally and by age group from round 8 (beginning 6 January 2021) to round 13 (ending 12 July 2021) of REACT-1. As a comparator, a separate segmented-exponential model was used to quantify the impact on Rt of each relaxation of restrictions. RESULTS: Following an initial plateau of 1.54% until mid-January, infection prevalence decreased until 13 May, when it reached a minimum of 0.09%, before increasing until the end of the study to 0.76%. Following the first easing of restrictions, which included schools reopening, the reproduction number Rt increased by 82% (55%, 108%), but it then decreased by 61% (53%, 82%) at the second easing of restrictions, which was timed to match the Easter school holidays. Following further relaxations of restrictions, the observed Rt increased steadily, though the increase attributable to those relaxations was offset by the effects of vaccination and further modified by the rapid rise of Delta. There was a high degree of synchrony in the temporal patterns of prevalence between regions and age groups. CONCLUSION: High-resolution prevalence data fitted to P-splines allowed us to show that the lockdown was effective at reducing risk of infection, with school holidays/closures playing a significant part.
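One simple way to go from a smoothed prevalence curve to Rt is to take the instantaneous growth rate of log prevalence and map it through a generation-time assumption. The sketch below uses finite differences and a fixed generation interval T_g; the REACT-1 analysis used Bayesian P-splines and a fuller treatment, so treat this only as a sketch of the logic, with all values hypothetical.

```python
import numpy as np

# Hypothetical smoothed daily prevalence (stand-in for a P-spline posterior mean)
t = np.arange(60, dtype=float)
prevalence = 0.015 * np.exp(-0.03 * t)  # declining epidemic

# Instantaneous growth rate r(t) = d/dt log prevalence, via central differences
r = np.gradient(np.log(prevalence), t)

# Map growth rate to Rt assuming a *fixed* generation interval T_g,
# under which Rt = exp(r * T_g). T_g = 6 days is an assumption of this sketch.
T_g = 6.0
Rt = np.exp(r * T_g)
print(f"r = {r[30]:+.3f}/day  ->  Rt = {Rt[30]:.2f}")
```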
Subject(s)
COVID-19 , Humans , COVID-19/epidemiology , COVID-19/prevention & control , Cross-Sectional Studies , Bayes Theorem , Communicable Disease Control , SARS-CoV-2
ABSTRACT
BACKGROUND: Since the emergence of SARS-CoV-2, evolutionary pressure has driven large increases in the transmissibility of the virus. However, with increasing levels of immunity through vaccination and natural infection, the evolutionary pressure will switch towards immune escape. Genomic surveillance in regions of high immunity is crucial in detecting emerging variants that can more successfully navigate the immune landscape. METHODS: We present phylogenetic relationships and lineage dynamics within England (a country with high levels of immunity), as inferred from a random community sample of individuals who provided a self-administered throat and nose swab for RT-PCR testing as part of the REal-time Assessment of Community Transmission-1 (REACT-1) study. During rounds 14 (9-27 September 2021) and 15 (19 October-5 November 2021), lineages were determined for 1322 positive individuals; 27.1% of those who reported their symptom status had experienced no symptoms in the previous month. RESULTS: We identified 44 unique lineages, all of which were Delta or Delta sub-lineages, and found a reduction in their mutation rate over the study period. The proportion of the Delta sub-lineage AY.4.2 was increasing, with a reproduction number 15% (95% CI 8-23%) greater than that of the most prevalent lineage, AY.4. Further, AY.4.2 was less associated with the most predictive COVID-19 symptoms (p = 0.029) and had a reduced mutation rate (p = 0.050). Both AY.4.2 and AY.4 were found to be geographically clustered in September, but this was no longer the case by late October/early November, with only the lineage AY.6 exhibiting clustering towards the South of England. CONCLUSIONS: As SARS-CoV-2 moves towards endemicity and new variants emerge, genomic data obtained from random community samples can augment routine surveillance data without the potential biases introduced by higher sampling rates of symptomatic individuals.
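A growth advantage such as the 15% figure for AY.4.2 is typically estimated by regressing lineage proportions on time. A minimal sketch with simulated counts; the counts, the advantage, and the generation-time assumption used to convert log-odds growth into a multiplicative R advantage are all hypothetical.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical daily counts of two competing lineages (not REACT-1 data)
rng = np.random.default_rng(1)
days = np.arange(40, dtype=float)
n_total = np.full(40, 200)
p = 1 / (1 + np.exp(-(-2.0 + 0.02 * days)))  # true daily log-odds advantage 0.02
n_ay42 = rng.binomial(n_total, p)

# Binomial logistic regression of the AY.4.2 proportion on time
X = sm.add_constant(days)
fit = sm.GLM(np.column_stack([n_ay42, n_total - n_ay42]), X,
             family=sm.families.Binomial()).fit()
beta = fit.params[1]  # estimated daily log-odds growth advantage

# Convert to a multiplicative R advantage assuming a fixed mean generation
# time T_g (an assumption of this sketch).
T_g = 6.0
print(f"estimated R advantage = {np.exp(beta * T_g) - 1:+.1%}")
```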
Subject(s)
COVID-19 , SARS-CoV-2 , COVID-19/epidemiology , England/epidemiology , Humans , Phylogeny , SARS-CoV-2/genetics
ABSTRACT
BACKGROUND: This study assesses acceptability and usability of home-based self-testing for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) antibodies using lateral flow immunoassays (LFIA). METHODS: We carried out public involvement and pilot testing in 315 volunteers to improve usability. Feedback was obtained through online discussions, questionnaires, observations, and interviews of people who tried the test at home. This informed the design of a nationally representative survey of adults in England using two LFIAs (LFIA1 and LFIA2), which were sent to 10 600 and 3800 participants, respectively, who provided further feedback. RESULTS: Public involvement and pilot testing showed high levels of acceptability, but limitations with the usability of kits. Most people reported completing the test; however, they identified difficulties with practical aspects of the kit, particularly the lancet and pipette, a need for clearer instructions, and more guidance on interpretation of results. In the national study, 99.3% (8693/8754) of LFIA1 and 98.4% (2911/2957) of LFIA2 respondents attempted the test, and 97.5% and 97.8% of respondents completed it, respectively. Most found the instructions easy to understand, but some reported difficulties using the pipette (LFIA1: 17.7%) and applying the blood drop to the cassette (LFIA2: 31.3%). Most respondents obtained a valid result (LFIA1: 91.5%; LFIA2: 94.4%). Overall, there was substantial concordance between participant- and clinician-interpreted results (kappa: LFIA1 0.72; LFIA2 0.89). CONCLUSIONS: Impactful public involvement is feasible in a rapid response setting. Home self-testing with LFIAs can be used with a high degree of acceptability and usability by adults, making them a good option for use in seroprevalence surveys.
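Participant-clinician agreement of the kind summarised by the kappa values above is computed directly from a square agreement table. A minimal sketch with hypothetical counts (the table below is illustrative, not the study's data):

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa for a square agreement table (rows: rater A, cols: rater B)."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_obs = np.trace(table) / n                                  # observed agreement
    p_exp = (table.sum(axis=1) * table.sum(axis=0)).sum() / n**2  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical 2x2 participant-vs-clinician reading table (positive/negative)
print(f"kappa = {cohens_kappa([[700, 60], [40, 200]]):.2f}")  # ~0.73 for these counts
```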
Subject(s)
COVID-19 , SARS-CoV-2 , Adult , Antibodies, Viral , England , Humans , Population Surveillance , Self-Testing , Seroepidemiologic Studies
ABSTRACT
BACKGROUND: Accurate antibody tests are essential to monitor the SARS-CoV-2 pandemic. Lateral flow immunoassays (LFIAs) can deliver testing at scale. However, reported performance varies, and sensitivity analyses have generally been conducted on serum from hospitalised patients. For use in community testing, evaluation of finger-prick self-tests in non-hospitalised individuals is required. METHODS: Sensitivity analysis was conducted on 276 non-hospitalised participants. All had tested positive for SARS-CoV-2 by reverse transcription PCR and were ≥21 days from symptom onset. In phase I, we evaluated five LFIAs in clinic (with finger prick) and laboratory (with blood and sera) in comparison to (1) PCR-confirmed infection and (2) presence of SARS-CoV-2 antibodies on two 'in-house' ELISAs. Specificity analysis was performed on 500 prepandemic sera. In phase II, six additional LFIAs were assessed with serum. FINDINGS: 95% (95% CI 92.2% to 97.3%) of the infected cohort had detectable antibodies on at least one ELISA. LFIA sensitivity was variable, but significantly inferior to ELISA in 8 of the 11 assays assessed. Of LFIAs assessed in both clinic and laboratory, finger-prick self-test sensitivity varied from 21% to 92% versus PCR-confirmed cases and from 22% to 96% versus composite ELISA positives. Concordance between finger-prick and serum testing was at best moderate (kappa 0.56) and, at worst, slight (kappa 0.13). All LFIAs had high specificity (97.2%-99.8%). INTERPRETATION: LFIA sensitivity and sample concordance are variable, highlighting the importance of evaluations in the setting of intended use. This rigorous approach to LFIA evaluation identified a test with high specificity (98.6% (95% CI 97.1% to 99.4%)), moderate sensitivity (84.4% with finger prick (95% CI 70.5% to 93.5%)) and moderate concordance, suitable for seroprevalence surveys.
Subject(s)
Antibodies, Viral/analysis , COVID-19/diagnosis , Immunoassay/methods , Pandemics , SARS-CoV-2/immunology , Adult , COVID-19/epidemiology , COVID-19/virology , DNA, Viral/analysis , Female , Follow-Up Studies , Humans , Male , Middle Aged , Reproducibility of Results , Retrospective Studies , SARS-CoV-2/genetics , Seroepidemiologic Studies
ABSTRACT
In multivariate network meta-analysis (NMA), the piecemeal nature of the evidence base means that there may be treatment-outcome combinations for which no data are available. Most existing multivariate evidence synthesis models are either unable to estimate the missing treatment-outcome combinations or can only do so under particularly strong assumptions, such as perfect between-study correlations between outcomes or a constant effect size across outcomes. Many existing implementations are also limited to two treatments or two outcomes, or rely on model specification that is heavily tailored to the dimensions of the dataset. We present a Bayesian multivariate NMA model that estimates the missing treatment-outcome combinations via mappings between the population mean effects, while allowing the study-specific effects to be imperfectly correlated. The method is designed for aggregate-level data (rather than individual patient data) and is likely to be useful when modeling multiple sparsely reported outcomes, or when varying definitions of the same underlying outcome are adopted by different studies. We implement the model via a novel decomposition of the treatment effect variance, which can be specified efficiently for an arbitrary dataset given some basic assumptions regarding the correlation structure. The method is illustrated using data concerning the efficacy and liver-related safety of eight active treatments for relapsing-remitting multiple sclerosis. The results indicate that fingolimod and interferon beta-1b are the most efficacious treatments but also have some of the worst effects on liver safety. Dimethyl fumarate and glatiramer acetate perform reasonably on all of the efficacy and safety outcomes in the model.
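One simple way to let study-specific effects be imperfectly correlated is to decompose each random effect into a shared study-level component plus an outcome-specific component, so the implied between-outcome correlation is a ratio of variances. The sketch below illustrates that construction with arbitrary values; it is only one possible realisation of a variance decomposition and not necessarily the one used in this model.

```python
import numpy as np

# Shared-plus-idiosyncratic decomposition of a study's random effects:
#   effect_k = shared_term + outcome_term_k
# implies between-outcome correlation
#   rho = sigma_shared^2 / (sigma_shared^2 + sigma_outcome^2).
sigma_shared, sigma_outcome = 0.30, 0.15   # illustrative standard deviations
n_outcomes = 3

cov = np.full((n_outcomes, n_outcomes), sigma_shared**2)
cov[np.diag_indices(n_outcomes)] += sigma_outcome**2

rho = sigma_shared**2 / (sigma_shared**2 + sigma_outcome**2)
print(f"implied between-outcome correlation rho = {rho:.2f}")  # 0.80 here

# Draw study-specific deviations for five hypothetical studies
rng = np.random.default_rng(2)
deviations = rng.multivariate_normal(np.zeros(n_outcomes), cov, size=5)
print(np.round(deviations, 3))
```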
Subject(s)
Multiple Sclerosis, Relapsing-Remitting , Multiple Sclerosis , Bayes Theorem , Dimethyl Fumarate , Humans , Immunosuppressive Agents/therapeutic use , Multiple Sclerosis, Relapsing-Remitting/drug therapy , Network Meta-Analysis
ABSTRACT
RATIONALE: There remains uncertainty about the role of corticosteroids in sepsis with clear beneficial effects on shock duration, but conflicting survival effects. Two transcriptomic sepsis response signatures (SRSs) have been identified. SRS1 is relatively immunosuppressed, whereas SRS2 is relatively immunocompetent. OBJECTIVES: We aimed to categorize patients based on SRS endotypes to determine if these profiles influenced response to either norepinephrine or vasopressin, or to corticosteroids in septic shock. METHODS: A post hoc analysis was performed of a double-blind, randomized clinical trial in septic shock (VANISH [Vasopressin vs. Norepinephrine as Initial Therapy in Septic Shock]). Patients were included within 6 hours of onset of shock and were randomized to receive norepinephrine or vasopressin followed by hydrocortisone or placebo. Genome-wide gene expression profiling was performed and SRS endotype was determined by a previously established model using seven discriminant genes. MEASUREMENTS AND MAIN RESULTS: Samples were available from 176 patients: 83 SRS1 and 93 SRS2. There was no significant interaction between SRS group and vasopressor assignment (P = 0.50). However, there was an interaction between assignment to hydrocortisone or placebo, and SRS endotype (P = 0.02). Hydrocortisone use was associated with increased mortality in those with an SRS2 phenotype (odds ratio = 7.9; 95% confidence interval = 1.6-39.9). CONCLUSIONS: Transcriptomic profile at onset of septic shock was associated with response to corticosteroids. Those with the immunocompetent SRS2 endotype had significantly higher mortality when given corticosteroids compared with placebo. Clinical trial registered with www.clinicaltrials.gov (ISRCTN 20769191).
Subject(s)
Gene Expression Profiling , Hydrocortisone/therapeutic use , Sepsis/drug therapy , Transcriptome/drug effects , Aged , Double-Blind Method , Female , Humans , Immunocompetence , Kaplan-Meier Estimate , Male , Middle Aged , Norepinephrine/therapeutic use , Phenotype , Sepsis/metabolism , Sepsis/mortality , Shock, Septic/drug therapy , Shock, Septic/metabolism , Shock, Septic/mortality , Survival Analysis , Vasopressins/therapeutic use
ABSTRACT
BACKGROUND: Levosimendan is a calcium-sensitizing drug with inotropic and other properties that may improve outcomes in patients with sepsis. METHODS: We conducted a double-blind, randomized clinical trial to investigate whether levosimendan reduces the severity of organ dysfunction in adults with sepsis. Patients were randomly assigned to receive a blinded infusion of levosimendan (at a dose of 0.05 to 0.2 µg per kilogram of body weight per minute) for 24 hours or placebo in addition to standard care. The primary outcome was the mean daily Sequential Organ Failure Assessment (SOFA) score in the intensive care unit up to day 28 (scores for each of five systems range from 0 to 4, with higher scores indicating more severe dysfunction; maximum score, 20). Secondary outcomes included 28-day mortality, time to weaning from mechanical ventilation, and adverse events. RESULTS: The trial recruited 516 patients; 259 were assigned to receive levosimendan and 257 to receive placebo. There was no significant difference in the mean (±SD) SOFA score between the levosimendan group and the placebo group (6.68±3.96 vs. 6.06±3.89; mean difference, 0.61; 95% confidence interval [CI], -0.07 to 1.29; P=0.053). Mortality at 28 days was 34.5% in the levosimendan group and 30.9% in the placebo group (absolute difference, 3.6 percentage points; 95% CI, -4.5 to 11.7; P=0.43). Among patients requiring ventilation at baseline, those in the levosimendan group were less likely than those in the placebo group to be successfully weaned from mechanical ventilation over the period of 28 days (hazard ratio, 0.77; 95% CI, 0.60 to 0.97; P=0.03). More patients in the levosimendan group than in the placebo group had supraventricular tachyarrhythmia (3.1% vs. 0.4%; absolute difference, 2.7 percentage points; 95% CI, 0.1 to 5.3; P=0.04). CONCLUSIONS: The addition of levosimendan to standard treatment in adults with sepsis was not associated with less severe organ dysfunction or lower mortality. Levosimendan was associated with a lower likelihood of successful weaning from mechanical ventilation and a higher risk of supraventricular tachyarrhythmia. (Funded by the NIHR Efficacy and Mechanism Evaluation Programme and others; LeoPARDS Current Controlled Trials number, ISRCTN12776039.).
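As a quick plausibility check, the reported primary-outcome difference can be approximately reproduced from the summary statistics above with an unadjusted Welch interval; the published analysis was presumably covariate-adjusted, which likely explains the small discrepancy.

```python
import numpy as np
from scipy import stats

# Summary statistics from the abstract: mean (SD) daily SOFA score per group
m1, s1, n1 = 6.68, 3.96, 259   # levosimendan
m2, s2, n2 = 6.06, 3.89, 257   # placebo

diff = m1 - m2
se = np.sqrt(s1**2 / n1 + s2**2 / n2)
# Welch-Satterthwaite degrees of freedom
df = (s1**2/n1 + s2**2/n2)**2 / ((s1**2/n1)**2/(n1-1) + (s2**2/n2)**2/(n2-1))
ci = diff + np.array([-1, 1]) * stats.t.ppf(0.975, df) * se
print(f"mean difference {diff:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
# ~0.62 (-0.06 to 1.30) vs the published, presumably adjusted, 0.61 (-0.07 to 1.29)
```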
ABSTRACT
The design of phase I studies is often challenging, because of limited evidence to inform study protocols. Adaptive designs are now well established in cancer but much less so in other clinical areas. A phase I study to assess the safety, pharmacokinetic profile and antiretroviral efficacy of C34-PEG4-Chol, a novel peptide fusion inhibitor for the treatment of HIV infection, has been set up with Medical Research Council funding. During the study workup, Bayesian adaptive designs based on the continual reassessment method were compared with a more standard rule-based design, with the aim of choosing a design that would maximise the scientific information gained from the study. The process of specifying and evaluating the design options was time consuming and required the active involvement of all members of the trial's protocol development team. However, the effort was worthwhile as the originally proposed rule-based design has been replaced by a more efficient Bayesian adaptive design. While the outcome to be modelled, design details and evaluation criteria are trial-specific, the principles behind their selection are general. This case study illustrates the steps required to establish a design in a novel context. Copyright © 2016 John Wiley & Sons, Ltd.
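For readers unfamiliar with the continual reassessment method, the sketch below shows its core loop with a one-parameter power model and a grid-based posterior. The skeleton, target toxicity, prior variance, and data are illustrative assumptions, not values from this study.

```python
import numpy as np

# Minimal CRM sketch: one-parameter power model p_i(theta) = skeleton_i ** exp(theta),
# with prior theta ~ N(0, 1.34). All numbers below are illustrative.
skeleton = np.array([0.05, 0.12, 0.25, 0.40])  # prior guesses of dose toxicity
target = 0.25                                  # target toxicity probability
tox = np.array([0, 0, 1])                      # toxicity outcomes observed so far
dose_given = np.array([0, 1, 1])               # dose index given to each participant

theta = np.linspace(-4, 4, 2001)
dtheta = theta[1] - theta[0]
prior = np.exp(-theta**2 / (2 * 1.34))

# Likelihood of the observed outcomes evaluated on the theta grid
p = skeleton[dose_given][:, None] ** np.exp(theta)[None, :]
lik = np.prod(np.where(tox[:, None] == 1, p, 1 - p), axis=0)

post = prior * lik
post /= post.sum() * dtheta  # normalise the posterior on the grid

# Posterior-mean toxicity at each dose; recommend the dose closest to target
p_tox = (skeleton[:, None] ** np.exp(theta)[None, :] * post).sum(axis=1) * dtheta
next_dose = int(np.argmin(np.abs(p_tox - target)))
print("posterior mean toxicity:", np.round(p_tox, 3), "-> next dose index:", next_dose)
```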
Subject(s)
Bayes Theorem , Clinical Trials, Phase I as Topic/methods , HIV Fusion Inhibitors/therapeutic use , HIV Infections/drug therapy , Endpoint Determination , HIV Envelope Protein gp41 , HIV Fusion Inhibitors/administration & dosage , Humans , Peptide Fragments
ABSTRACT
BACKGROUND: Abnormal biliary secretion leads to the thickening of bile and the formation of plugs within the bile ducts; the consequent obstruction and abnormal bile flow ultimately results in the development of cystic fibrosis-related liver disease. This condition peaks in adolescence with up to 20% of adolescents with cystic fibrosis developing chronic liver disease. Early changes in the liver may ultimately result in end-stage liver disease with people needing transplantation. One therapeutic option currently used is ursodeoxycholic acid. This is an update of a previous review. OBJECTIVES: To analyse evidence that ursodeoxycholic acid improves indices of liver function, reduces the risk of developing chronic liver disease and improves outcomes in general in cystic fibrosis. SEARCH METHODS: We searched the Cochrane CF and Genetic Disorders Group Trials Register, comprising references identified from comprehensive electronic database searches, handsearches of relevant journals and abstract books of conference proceedings. We also contacted drug companies and searched online trial registries. Date of the most recent search of the Group's trials register: 09 April 2017. SELECTION CRITERIA: Randomised controlled trials of the use of ursodeoxycholic acid for at least three months compared with placebo or no additional treatment in people with cystic fibrosis. DATA COLLECTION AND ANALYSIS: Two authors independently assessed trial eligibility and quality. The authors used GRADE to assess the quality of the evidence. MAIN RESULTS: Twelve trials have been identified, of which four trials involving 137 participants were included; data were only available from three of the trials (118 participants) since one cross-over trial did not report appropriate data. The dose of ursodeoxycholic acid ranged from 10 to 20 mg/kg/day for up to 12 months. The complex design used in two trials meant that data could only be analysed for subsets of participants. There was no significant difference in weight change, mean difference -0.90 kg (95% confidence interval -1.94 to 0.14) based on 30 participants from two trials. Improvement in biliary excretion was reported in only one trial and no significant change after treatment was shown. There were no data available for analysis for long-term outcomes such as death or need for liver transplantation. AUTHORS' CONCLUSIONS: There are few trials assessing the effectiveness of ursodeoxycholic acid. The quality of the evidence identified ranged from low to very low. There is currently insufficient evidence to justify its routine use in cystic fibrosis.
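Pooled mean differences such as the -0.90 kg figure above are typically obtained by inverse-variance weighting of per-trial estimates. A minimal fixed-effect sketch with hypothetical per-trial inputs (the values below are illustrative, not the review's data):

```python
import numpy as np

def fixed_effect_md(md, se):
    """Inverse-variance fixed-effect pooled mean difference with a 95% CI."""
    md, se = np.asarray(md, float), np.asarray(se, float)
    w = 1.0 / se**2                      # inverse-variance weights
    pooled = (w * md).sum() / w.sum()
    pooled_se = np.sqrt(1.0 / w.sum())
    return pooled, pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

# Hypothetical per-trial weight-change mean differences (kg) and standard errors
pooled, lo, hi = fixed_effect_md([-1.2, -0.5], [0.8, 0.7])
print(f"pooled MD = {pooled:.2f} kg (95% CI {lo:.2f} to {hi:.2f})")
```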
Subject(s)
Cholagogues and Choleretics/therapeutic use , Cystic Fibrosis/complications , Liver Diseases/prevention & control , Ursodeoxycholic Acid/therapeutic use , Adolescent , Adult , Bile/metabolism , Child , Child, Preschool , Chronic Disease , Humans , Liver/enzymology , Liver Diseases/etiology , Nutritional Status , Randomized Controlled Trials as Topic
ABSTRACT
PURPOSE: The purpose of this study is to draw on the practical experience from the PROTECT BR case studies and make recommendations regarding the application of a number of methodologies and visual representations for benefit-risk assessment. METHODS: Eight case studies based on the benefit-risk balance of real medicines were used to test various methodologies that had been identified from the literature as having potential applications in benefit-risk assessment. Recommendations were drawn up based on the results of the case studies. RESULTS: A general pathway through the case studies was evident, with various classes of methodologies having roles to play at different stages. Descriptive and quantitative frameworks were widely used throughout to structure problems, with other methods such as metrics, estimation techniques and elicitation techniques providing ways to incorporate technical or numerical data from various sources. Similarly, tree diagrams and effects tables were universally adopted, with other visualisations available to suit specific methodologies or tasks as required. Every assessment was found to follow five broad stages: (i) Planning, (ii) Evidence gathering and data preparation, (iii) Analysis, (iv) Exploration and (v) Conclusion and dissemination. CONCLUSIONS: Adopting formal, structured approaches to benefit-risk assessment was feasible in real-world problems and facilitated clear, transparent decision-making. Prior to this work, no extensive practical application and appraisal of methodologies had been conducted using real-world case examples, leaving users with limited knowledge of their usefulness in the real world. The practical guidance provided here takes us one step closer to a harmonised approach to benefit-risk assessment from multiple perspectives.
Subject(s)
Adverse Drug Reaction Reporting Systems , Data Display , Pharmacoepidemiology/methods , Risk Assessment/methods , Adverse Drug Reaction Reporting Systems/legislation & jurisprudence , Decision Making , Drug Discovery , Drug-Related Side Effects and Adverse Reactions/epidemiology , Government Regulation , Pharmacoepidemiology/legislation & jurisprudence , Risk Assessment/legislation & jurisprudence
ABSTRACT
BACKGROUND: The PROTECT Benefit-Risk group is dedicated to research in methods for continuous benefit-risk monitoring of medicines, including the presentation of the results, with a particular emphasis on graphical methods. METHODS: A comprehensive review was performed to identify visuals used for medical risk and benefit-risk communication. The identified visual displays were grouped into visual types, and each visual type was appraised based on five criteria: intended audience, intended message, knowledge required to understand the visual, unintentional messages that may be derived from the visual and missing information that may be needed to understand the visual. RESULTS: Sixty-six examples of visual formats were identified from the literature and classified into 14 visual types. We found that there is not one single visual format that is consistently superior to others for the communication of benefit-risk information. In addition, we found that most of the drawbacks found in the visual formats could be considered general to visual communication, although some appear more relevant to specific formats and should be considered when creating visuals for different audiences depending on the exact message to be communicated. CONCLUSION: We have arrived at recommendations for the use of visual displays for benefit-risk communication; these recommendations concern the creation of visuals. We outline four criteria to determine audience-visual compatibility and consider these to be a key task in creating any visual. Next, we propose specific visual formats of interest, to be explored further for their ability to address nine different types of benefit-risk analysis information.
Subject(s)
Adverse Drug Reaction Reporting Systems , Data Display , Pharmacoepidemiology/methods , Risk Assessment/methods , Adverse Drug Reaction Reporting Systems/instrumentation , Communication , Decision Making , Pharmacoepidemiology/instrumentation
ABSTRACT
IMPORTANCE: Norepinephrine is currently recommended as the first-line vasopressor in septic shock; however, early vasopressin use has been proposed as an alternative. OBJECTIVE: To compare the effect of early vasopressin vs norepinephrine on kidney failure in patients with septic shock. DESIGN, SETTING, AND PARTICIPANTS: A factorial (2×2), double-blind, randomized clinical trial conducted in 18 general adult intensive care units in the United Kingdom between February 2013 and May 2015, enrolling adult patients who had septic shock requiring vasopressors despite fluid resuscitation within a maximum of 6 hours after the onset of shock. INTERVENTIONS: Patients were randomly allocated to vasopressin (titrated up to 0.06 U/min) and hydrocortisone (n = 101), vasopressin and placebo (n = 104), norepinephrine and hydrocortisone (n = 101), or norepinephrine and placebo (n = 103). MAIN OUTCOMES AND MEASURES: The primary outcome was kidney failure-free days during the 28-day period after randomization, measured as (1) the proportion of patients who never developed kidney failure and (2) median number of days alive and free of kidney failure for patients who did not survive, who experienced kidney failure, or both. Rates of renal replacement therapy, mortality, and serious adverse events were secondary outcomes. RESULTS: A total of 409 patients (median age, 66 years; men, 58.2%) were included in the study, with a median time to study drug administration of 3.5 hours after diagnosis of shock. The number of survivors who never developed kidney failure was 94 of 165 patients (57.0%) in the vasopressin group and 93 of 157 patients (59.2%) in the norepinephrine group (difference, -2.3% [95% CI, -13.0% to 8.5%]). The median number of kidney failure-free days for patients who did not survive, who experienced kidney failure, or both was 9 days (interquartile range [IQR], 1 to 24) in the vasopressin group and 13 days (IQR, 1 to 25) in the norepinephrine group (difference, -4 days [95% CI, -11 to 5]). There was less use of renal replacement therapy in the vasopressin group than in the norepinephrine group (25.4% for vasopressin vs 35.3% for norepinephrine; difference, -9.9% [95% CI, -19.3% to -0.6%]). There was no significant difference in mortality rates between groups. In total, 22 of 205 patients (10.7%) had a serious adverse event in the vasopressin group vs 17 of 204 patients (8.3%) in the norepinephrine group (difference, 2.5% [95% CI, -3.3% to 8.2%]). CONCLUSIONS AND RELEVANCE: Among adults with septic shock, the early use of vasopressin compared with norepinephrine did not improve the number of kidney failure-free days. Although these findings do not support the use of vasopressin to replace norepinephrine as initial treatment in this situation, the confidence interval included a potential clinically important benefit for vasopressin, and larger trials may be warranted to assess this further. TRIAL REGISTRATION: clinicaltrials.gov Identifier: ISRCTN 20769191.
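The unadjusted risk difference for renal replacement therapy can be approximately back-calculated from the reported percentages. The counts below are reconstructed from those percentages and the overall group sizes, so they are approximations rather than the trial's exact denominators, and the published interval may reflect different denominators or adjustment.

```python
import numpy as np

def risk_difference(k1, n1, k2, n2, z=1.96):
    """Unadjusted risk difference with a Wald 95% confidence interval."""
    p1, p2 = k1 / n1, k2 / n2
    d = p1 - p2
    se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return d, d - z * se, d + z * se

# Approximate RRT counts: 25.4% of 205 (vasopressin) and 35.3% of 204
# (norepinephrine) patients, rounded to whole numbers.
d, lo, hi = risk_difference(52, 205, 72, 204)
print(f"difference {d:+.1%} (95% CI {lo:+.1%} to {hi:+.1%})")
# close to the published -9.9% (-19.3% to -0.6%)
```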
Subject(s)
Critical Care/methods , Norepinephrine/administration & dosage , Renal Insufficiency/etiology , Renal Replacement Therapy/statistics & numerical data , Shock, Septic/complications , Shock, Septic/drug therapy , Vasoconstrictor Agents/administration & dosage , Vasopressins/administration & dosage , Adult , Aged , Aged, 80 and over , Double-Blind Method , Drug Administration Schedule , Female , Fluid Therapy , Humans , Hydrocortisone/administration & dosage , Intensive Care Units/statistics & numerical data , Male , Middle Aged , Renal Insufficiency/chemically induced , Renal Insufficiency/mortality , Shock, Septic/mortality , Treatment Outcome , United Kingdom/epidemiology
ABSTRACT
Quantitative decision models such as multiple criteria decision analysis (MCDA) can be used in benefit-risk assessment to formalize trade-offs between benefits and risks, providing transparency to the assessment process. There is, however, no well-established method for propagating uncertainty of treatment effects data through such models to provide a sense of the variability of the benefit-risk balance. Here, we present a Bayesian statistical method that directly models the outcomes observed in randomized placebo-controlled trials and uses this to infer indirect comparisons between competing active treatments. The resulting treatment effects estimates are suitable for use within the MCDA setting, and it is possible to derive the distribution of the overall benefit-risk balance through Markov chain Monte Carlo simulation. The method is illustrated using a case study of natalizumab for relapsing-remitting multiple sclerosis.
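The propagation step described here amounts to evaluating the MCDA score on each posterior draw and summarising the resulting distribution. A minimal weighted-sum sketch with simulated draws standing in for MCMC output; the weights, criteria, and effect distributions are all illustrative assumptions.

```python
import numpy as np

# Simulated posterior draws of three criteria on a common 0-1 scale where
# higher is better (stand-ins for MCMC samples of the treatment-effect model).
rng = np.random.default_rng(3)
n_draws = 10_000
effects = np.column_stack([
    rng.beta(80, 20, n_draws),   # benefit criterion
    rng.beta(60, 40, n_draws),   # safety criterion 1
    rng.beta(70, 30, n_draws),   # safety criterion 2
])
weights = np.array([0.5, 0.3, 0.2])  # elicited criterion weights (sum to 1)

# Overall benefit-risk score per posterior draw, then summarise the distribution
score = effects @ weights
lo, med, hi = np.percentile(score, [2.5, 50, 97.5])
print(f"benefit-risk score: median {med:.3f} (95% CrI {lo:.3f}, {hi:.3f})")
```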
Subject(s)
Biometry/methods , Bayes Theorem , Decision Support Techniques , Humans , Multiple Sclerosis/drug therapy , Natalizumab/therapeutic use , Randomized Controlled Trials as Topic , Recurrence , Risk Assessment , Uncertainty
ABSTRACT
OBJECTIVE: The colonic microbiota ferment dietary fibres, producing short chain fatty acids. Recent evidence suggests that the short chain fatty acid propionate may play an important role in appetite regulation. We hypothesised that colonic delivery of propionate would increase peptide YY (PYY) and glucagon like peptide-1 (GLP-1) secretion in humans, and reduce energy intake and weight gain in overweight adults. DESIGN: To investigate whether propionate promotes PYY and GLP-1 secretion, a primary cultured human colonic cell model was developed. To deliver propionate specifically to the colon, we developed a novel inulin-propionate ester. An acute randomised, controlled cross-over study was used to assess the effects of this inulin-propionate ester on energy intake and plasma PYY and GLP-1 concentrations. The long-term effects of inulin-propionate ester on weight gain were subsequently assessed in a randomised, controlled 24-week study involving 60 overweight adults. RESULTS: Propionate significantly stimulated the release of PYY and GLP-1 from human colonic cells. Acute ingestion of 10 g inulin-propionate ester significantly increased postprandial plasma PYY and GLP-1 and reduced energy intake. Over 24 weeks, 10 g/day inulin-propionate ester supplementation significantly reduced weight gain, intra-abdominal adipose tissue distribution, intrahepatocellular lipid content and prevented the deterioration in insulin sensitivity observed in the inulin-control group. CONCLUSIONS: These data demonstrate for the first time that increasing colonic propionate prevents weight gain in overweight adult humans. TRIAL REGISTRATION NUMBER: NCT00750438.