Results 1 - 20 of 33
1.
J Biopharm Stat ; 32(6): 832-857, 2022 11 02.
Article in English | MEDLINE | ID: mdl-35736220

ABSTRACT

Biomedical applications such as genome-wide association studies screen large databases with high-dimensional features to identify rare, weakly expressed, and important continuous-valued features for subsequent detailed analysis. We describe an exact, rapid Bayesian screening approach, based on a Gaussian random mixture model, with attractive diagnostic properties. The approach focuses on the missed discovery rate (the probability of failing to identify potentially informative features) rather than on the false discovery rate ordinarily used with multiple hypothesis testing. The method provides the likelihood that a feature merits further investigation, as well as distributions of the effect magnitudes and of the proportion of features with the same expected responses under alternative conditions. Important features include the dependence of the critical values on clinical and regulatory priorities and direct assessment of the diagnostic properties.
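
A minimal sketch of the screening idea described above, with hypothetical mixture parameters (pi0, mu1, s1) and simulated statistics rather than the paper's exact model and priors: compute each feature's posterior probability of being informative under a two-component Gaussian mixture, then set the flagging cutoff to control the missed discovery rate instead of the false discovery rate.

    import numpy as np
    from scipy.stats import norm

    # Hypothetical two-component mixture for feature-level statistics z:
    # null N(0, 1) with weight pi0, informative N(mu1, s1) with weight 1 - pi0.
    pi0, mu1, s1 = 0.95, 2.0, 1.5            # assumed values, not from the paper
    rng = np.random.default_rng(0)
    z = np.concatenate([rng.normal(0, 1, 9500), rng.normal(mu1, s1, 500)])

    # Posterior probability that each feature is informative (Bayes' theorem).
    f0 = pi0 * norm.pdf(z, 0, 1)
    f1 = (1 - pi0) * norm.pdf(z, mu1, s1)
    p_inf = f1 / (f0 + f1)

    # Set the flagging cutoff to keep the estimated missed discovery rate (the
    # share of informative mass that goes unflagged) below a target such as 5%.
    target = 0.05
    cutoffs = np.linspace(0, 1, 1001)
    mdr = np.array([(p_inf * (p_inf < c)).sum() / p_inf.sum() for c in cutoffs])
    cutoff = cutoffs[np.searchsorted(mdr, target) - 1]
    print(f"cutoff={cutoff:.3f}, flagged={np.sum(p_inf >= cutoff)} of {z.size}")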


Subject(s)
Genome-Wide Association Study , Research Design , Humans , Bayes Theorem , Genome-Wide Association Study/methods , Probability
2.
PLoS One ; 17(6): e0265712, 2022.
Article in English | MEDLINE | ID: mdl-35749431

ABSTRACT

The FDA's Accelerated Approval (AA) program is a regulatory program to expedite the availability of products that treat serious or life-threatening illnesses lacking effective treatment alternatives. Ideally, all of the many stakeholders affected by AA, such as patients, physicians, regulators, and health technology assessment (HTA) agencies, should benefit from it. In practice, however, there is intense debate over whether the evidence supporting AA is sufficient to meet the needs of the stakeholders who collectively bring an approved product into routine clinical care. As AAs have become more common, it is essential to be able to determine their impact objectively and reproducibly in a way that provides for consistent evaluation of therapeutic decision alternatives. We describe the basic features of an approach for evaluating AA impact that accommodates stakeholder-specific views about potential benefits, risks, and costs. The approach is based on a formal decision-analytic framework that combines predictive distributions for therapeutic outcomes (efficacy and safety), derived from statistical models incorporating findings from AA trials, with stakeholder assessments of various actions that might be taken. The framework described here provides a starting point for communicating the value of a treatment granted AA in the context of what is important to various stakeholders.
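
A hedged illustration of the decision-analytic idea, with all counts, weights, and actions invented for the example (the paper's actual predictive models and elicited stakeholder values are not reproduced): combine posterior predictive draws for efficacy and safety with a stakeholder-specific utility function and compare expected utilities across candidate actions.

    import numpy as np

    rng = np.random.default_rng(1)
    # Hypothetical posterior predictive draws summarizing AA-trial evidence:
    # response rate (efficacy) and serious adverse event rate (safety).
    eff = rng.beta(45 + 1, 55 + 1, 10_000)
    tox = rng.beta(8 + 1, 92 + 1, 10_000)

    # Stakeholder-specific utility; the weights are illustrative stand-ins
    # for values a patient group, payer, or regulator might elicit.
    def utility(eff, tox, w_benefit=100.0, w_risk=160.0, cost=2.0):
        return w_benefit * eff - w_risk * tox - cost

    actions = {
        "adopt in routine care": utility(eff, tox),
        "restrict to a subgroup": utility(eff * 1.1, tox * 0.6, cost=3.0),
        "await confirmatory data": np.zeros_like(eff),
    }
    for name, u in actions.items():
        print(f"{name:>24s}: E[utility] = {u.mean():6.2f}")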


Subject(s)
Drug Approval , Technology Assessment, Biomedical , Humans , Treatment Outcome , United States , United States Food and Drug Administration
3.
Biom J ; 61(5): 1141-1159, 2019 09.
Article in English | MEDLINE | ID: mdl-30565273

ABSTRACT

Successful pharmaceutical drug development requires finding correct doses. The issues that conventional dose-response analyses consider, namely whether responses are related to doses, which doses have responses differing from a control dose response, the functional form of a dose-response relationship, and the dose(s) to carry forward, do not need to be addressed simultaneously. Determining if a dose-response relationship exists, regardless of its functional form, and then identifying a range of doses to study further may be a more efficient strategy. This article describes a novel estimation-focused Bayesian approach (BMA-Mod) for carrying out the analyses when the actual dose-response function is unknown. Realizations from Bayesian analyses of linear, generalized linear, and nonlinear regression models that may include random effects and covariates other than dose are optimally combined to produce distributions of important secondary quantities, including test-control differences, predictive distributions of possible outcomes from future trials, and ranges of doses corresponding to target outcomes. The objective is similar to the objective of the hypothesis-testing based MCP-Mod approach, but provides more model and distributional flexibility and does not require testing hypotheses or adjusting for multiple comparisons. A number of examples illustrate the application of the method.
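
A compact sketch of the model-averaging step behind an approach like BMA-Mod, using simulated data; the paper's method uses full Bayesian fits and posterior model weights, whereas here Akaike weights from least-squares fits stand in, and the candidate model set is abbreviated.

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(2)
    dose = np.repeat([0, 5, 25, 100], 20).astype(float)
    y = 1.5 * dose / (30 + dose) + rng.normal(0, 0.5, dose.size)  # fake data

    # Candidate dose-response models (a subset; the method allows many more).
    models = {
        "linear": lambda d, a, b: a + b * d,
        "log-linear": lambda d, a, b: a + b * np.log1p(d),
        "emax": lambda d, e0, emax, ed50: e0 + emax * d / (ed50 + d),
    }
    fits, aic = {}, {}
    for name, f in models.items():
        k = f.__code__.co_argcount - 1            # number of free parameters
        p, _ = curve_fit(f, dose, y, p0=np.ones(k), maxfev=10_000)
        rss = np.sum((y - f(dose, *p)) ** 2)
        fits[name] = (f, p)
        aic[name] = dose.size * np.log(rss / dose.size) + 2 * k

    # Akaike weights as a stand-in for posterior model probabilities.
    a = np.array(list(aic.values()))
    w = np.exp(-(a - a.min()) / 2); w /= w.sum()

    # Model-averaged estimate of the test-control difference across doses.
    grid = np.linspace(0, 100, 101)
    avg = sum(wi * (f(grid, *p) - f(0.0, *p))
              for wi, (f, p) in zip(w, fits.values()))
    print(dict(zip(aic, np.round(w, 3))), "max averaged effect:", avg.max().round(2))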


Subject(s)
Biometry/methods , Models, Statistical , Uncertainty , Bayes Theorem , Dose-Response Relationship, Drug , Regression Analysis
4.
Stat Med ; 37(18): 2667-2689, 2018 08 15.
Article in English | MEDLINE | ID: mdl-29736961

ABSTRACT

Patients in large clinical trials and in studies employing large observational databases report many different adverse events, most of which will not have been anticipated at the outset. Conventional hypothesis testing of between-group differences for each adverse event can be problematic: lack of significance does not mean lack of risk, the tests usually are not adjusted for multiplicity, and the data determine which hypotheses are tested. This article describes a Bayesian screening approach, suitable for clinical trials and large observational databases, that does not test hypotheses, is self-adjusting for multiplicity, provides a direct assessment of the likelihood of no material drug-event association, and quantifies the strength of the observed association. Clinical and/or regulatory considerations define the criteria for assessing drug-event associations. The diagnostic properties of this new approach can be evaluated analytically. Comparing the method with current methods on a commonly used data set indicates that the findings are largely similar, but with some interesting differences that may be relevant in application. Applying the method to a large vaccine trial substantially reduces the number of adverse events that might require further investigation.
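
To convey the flavor of this kind of screening computation, here is a sketch for a single adverse event with made-up counts and weakly informative priors (not the paper's exact model): gamma posteriors for the Poisson event rates in each arm yield a Monte Carlo posterior for the rate ratio, and the probability of "no material association" follows from a threshold chosen in advance on clinical or regulatory grounds.

    import numpy as np

    rng = np.random.default_rng(3)
    # Hypothetical counts for one adverse event: events and exposure
    # (patient-years) in the treatment and control arms.
    x_t, pt_t = 14, 900.0
    x_c, pt_c = 6, 880.0

    # Conjugate gamma priors on the Poisson event rates (weakly informative).
    a0, b0 = 0.5, 0.1
    lam_t = rng.gamma(a0 + x_t, 1 / (b0 + pt_t), 100_000)
    lam_c = rng.gamma(a0 + x_c, 1 / (b0 + pt_c), 100_000)
    ratio = lam_t / lam_c

    # "Material association" threshold set in advance on clinical grounds.
    theta = 1.5
    print(f"P(rate ratio > {theta} | data) = {(ratio > theta).mean():.3f}")
    print(f"P(no material association)    = {(ratio <= theta).mean():.3f}")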


Subject(s)
Adverse Drug Reaction Reporting Systems , Bayes Theorem , Public Health Surveillance/methods , Clinical Trials as Topic/methods , Computer Simulation , Data Interpretation, Statistical , Humans , Observational Studies as Topic/methods , Poisson Distribution , Risk Assessment/methods , Vaccines/adverse effects
5.
Stat Med ; 36(1): 92-104, 2017 01 15.
Article in English | MEDLINE | ID: mdl-27666940

ABSTRACT

The development of drugs and biologicals whose mechanisms of action may extend beyond their target indications has led to a need to identify unexpected potential toxicities promptly, even while blinded clinical trials are under way. One component of recently issued FDA rules regarding safety reporting requirements raises the possibility of breaking the blind for pre-identified serious adverse events that are not the clinical endpoints of a blinded study. Concern has been expressed that unblinding individual cases of frequently occurring adverse events could compromise the overall validity of the study. However, if external information is available about adverse event rates among patients not receiving the test product in populations similar to the study population, then it may be possible to address the potential for elevated risk without unblinding the trial. This article describes a Bayesian approach for determining the likelihood of elevated risk, suitable for binomial or Poisson likelihoods, that applies regardless of the metric used to express the difference. The method appears to be particularly appropriate for routine monitoring of safety information for product development programs that include large blinded trials.
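
A hedged sketch of how such a blinded assessment can work; the paper's exact formulation is not reproduced, and the counts, 1:1 allocation, and lognormal prior below are assumptions. With only the pooled event total observed, the Poisson mean is proportional to lam0 * (1 + rho), where lam0 comes from external data, so a prior on the rate ratio rho yields a posterior probability of elevated risk without unblinding.

    import numpy as np
    from scipy.stats import poisson

    # Blinded 1:1 trial: only the pooled event count is observed.
    x_total, pt_per_arm = 30, 500.0     # hypothetical pooled events, exposure
    lam0 = 0.02                         # external control rate, per patient-year

    # Grid prior on the rate ratio rho = lambda_t / lambda_c: lognormal(0, 1).
    rho = np.linspace(0.1, 5.0, 491)
    prior = np.exp(-0.5 * np.log(rho) ** 2) / rho
    prior /= prior.sum()

    # Pooled likelihood: control arm contributes lam0, test arm rho * lam0.
    mu = pt_per_arm * lam0 * (1.0 + rho)
    post = prior * poisson.pmf(x_total, mu)
    post /= post.sum()

    print(f"P(rho > 1 | blinded total) = {post[rho > 1.0].sum():.3f}")
    print(f"P(rho > 2 | blinded total) = {post[rho > 2.0].sum():.3f}")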


Subject(s)
Bayes Theorem , Clinical Trials as Topic , Drug-Related Side Effects and Adverse Reactions , Humans , Likelihood Functions , Poisson Distribution , Research Design
6.
Stat Med ; 35(30): 5561-5578, 2016 12 30.
Article in English | MEDLINE | ID: mdl-27619565

ABSTRACT

Conventional practice monitors accumulating information about drug safety in terms of the numbers of adverse events reported from trials in a drug development program. Estimates of between-treatment adverse event risk differences can be obtained readily from unblinded trials, with adjustment for differences among trials using conventional statistical methods. Recent regulatory guidelines require monitoring the cumulative frequency of adverse event reports to identify possible between-treatment adverse event risk differences without unblinding ongoing trials. Conventional statistical methods for assessing between-treatment adverse event risks cannot be applied when the trials are blinded. However, CUSUM charts can be used to monitor the accumulation of adverse event occurrences. CUSUM charts for monitoring adverse event occurrence in a Bayesian paradigm rest on assumptions about the process generating the adverse event counts in a trial, expressed through informative prior distributions. This article describes the construction of control charts for monitoring adverse event occurrence based on statistical models for the generating processes, characterizes their statistical properties, and describes how to construct useful prior distributions. Application of the approach to two adverse events of interest in a real trial gave nearly identical results for binomial and Poisson observed event count likelihoods.
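
A simplified sketch of the chart mechanics on simulated monthly counts; the in-control and out-of-control rates and the decision limit here are illustrative fixed values, whereas the article derives the corresponding quantities from informative prior distributions and the trial design. A Poisson CUSUM accumulates log-likelihood-ratio evidence of an elevated event rate and signals when it crosses the limit.

    import numpy as np

    rng = np.random.default_rng(4)
    # Simulated monthly adverse event counts from a blinded trial; the rate
    # shifts upward after month 12 so the chart has something to detect.
    exposure = 80.0                               # patient-months per month
    lam_in, lam_out = 0.05, 0.10                  # in/out-of-control rates
    counts = np.r_[rng.poisson(lam_in * exposure, 12),
                   rng.poisson(lam_out * exposure, 12)]

    # Poisson CUSUM: log-likelihood-ratio increments for lam_out vs lam_in.
    llr = counts * np.log(lam_out / lam_in) - (lam_out - lam_in) * exposure
    s, chart = 0.0, []
    for inc in llr:
        s = max(0.0, s + inc)                     # reflect the sum at zero
        chart.append(s)

    h = 4.0                                       # illustrative decision limit
    alarm = next((i + 1 for i, v in enumerate(chart) if v > h), None)
    print("CUSUM:", np.round(chart, 2))
    print("first alarm at month:", alarm)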


Subject(s)
Adverse Drug Reaction Reporting Systems , Bayes Theorem , Models, Statistical , Probability , Randomized Controlled Trials as Topic , Research Design
8.
Ther Innov Regul Sci ; 49(2): 289-296, 2015 Mar.
Article in English | MEDLINE | ID: mdl-30222420

ABSTRACT

Efficient use of limited pharmaceutical product development resources requires integrating multiple attributes, such as efficacy, safety, and pharmacology, to decide at any stage whether the development of a product should proceed aggressively or slowly or be terminated. The decision process proceeds most effectively when the knowledge and experience of a product development team are transparently and reproducibly integrated with the findings from completed experiments and trials. In this article, the authors describe an approach for quantitatively and objectively assessing evidence at any stage of development, one based on a mathematical combination of sets of pairwise comparisons. The attributes of the process and the rules for combining its elements to guide decisions are determined by the project team and other stakeholders before the determinative data are obtained, which facilitates exploring the sensitivity of a recommended action to various assumptions. The statistical properties of the process can be evaluated with standard statistical decision analysis methods.
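
A toy illustration of combining pairwise comparisons into a go/slow/stop recommendation; the attributes, weights, win probabilities, aggregation rule, and decision zones below are all hypothetical stand-ins for choices the article assigns to the project team and stakeholders.

    import numpy as np

    # Hypothetical attributes and team-elicited weights (sum to 1).
    attributes = ["efficacy", "safety", "pharmacology", "feasibility"]
    weights = np.array([0.4, 0.3, 0.2, 0.1])

    # Pairwise comparison of test vs control per attribute, expressed as the
    # probability that test beats control (e.g., from predictive simulations).
    p_test_beats_control = np.array([0.78, 0.55, 0.62, 0.50])

    # One simple aggregation rule: a weighted score on [-1, 1].
    score = float(weights @ (2 * p_test_beats_control - 1))

    # Go / slow / stop zones agreed before seeing the determinative data.
    decision = ("accelerate" if score > 0.15 else
                "terminate" if score < -0.15 else "proceed slowly")
    print(f"weighted score = {score:+.3f} -> {decision}")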

9.
Ther Innov Regul Sci ; 49(1): 65-75, 2015 Jan.
Article in English | MEDLINE | ID: mdl-30222465

ABSTRACT

Spontaneous reporting (SR) adverse event databases, large observational databases, large clinical trials, and large health records databases constitute repositories of information that may be useful for early detection of potential harms associated with drugs, devices, and vaccines. All of these data sources include many different adverse events and many medical products, so any approach designed to detect "important" signals of potential harm must have adequate specificity to protect against false alarms yet provide satisfactory sensitivity for detecting issues that really need further investigation. Algorithms for evaluating potential risks using information from these sources, especially SR databases, have been described in the literature. The algorithms may seek to identify potential product-event associations without any prior specification, to identify events associated with a particular product or set of products, or to identify products associated with a particular event or set of events. This article provides recommendations for using information from postmarketing spontaneous adverse event reporting databases to gain insight into the risks of potential harm expressed by safety signals, and offers guidance regarding appropriate methods, both frequentist and Bayesian, to use in various situations as a function of the objective of the analysis.

10.
J Biopharm Stat ; 23(4): 829-47, 2013.
Article in English | MEDLINE | ID: mdl-23786257

ABSTRACT

Patients in large clinical trials and in studies employing large observational databases report many different adverse events, most of which will not have been anticipated at the outset. Conventional hypothesis testing of between-group differences for each adverse event can be problematic: lack of significance does not mean lack of risk, the tests usually are not adjusted for multiplicity, and the data determine which hypotheses are tested. This article describes a Bayesian screening approach that does not test hypotheses, is self-adjusting for multiplicity, provides a direct assessment of the likelihood of no material drug-event association, and quantifies the strength of the observed association. The criteria for assessing drug-event associations can be determined by clinical or regulatory considerations. In contrast to conventional approaches, the diagnostic properties of this new approach can be evaluated analytically. Application of the method to findings from a vaccine trial yields results similar to those found by methods using a false discovery rate argument or a hierarchical Bayes approach. [Supplemental materials are available for this article. Go to the publisher's online edition of the Journal of Biopharmaceutical Statistics for the following free supplemental resource: Appendix R: Code for calculations.]


Subject(s)
Clinical Trials as Topic/statistics & numerical data , Drug-Related Side Effects and Adverse Reactions , Models, Statistical , Bayes Theorem , Drug-Related Side Effects and Adverse Reactions/diagnosis , Drug-Related Side Effects and Adverse Reactions/epidemiology , Humans , Observational Studies as Topic/statistics & numerical data , Poisson Distribution
11.
J Biopharm Stat ; 22(5): 916-34, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22946940

ABSTRACT

Pharmaceutical product development culminates in confirmatory trials whose evidence for the product's efficacy and safety supports regulatory approval for marketing. Regulatory agencies in countries whose patients were not included in the confirmatory trials often require confirmation of efficacy and safety in their patient populations, which may be accomplished by carrying out bridging studies to establish, for local patients, the consistency of the effects demonstrated by the original trials. This article describes and illustrates an approach for designing and analyzing bridging studies that fully incorporates the information provided by the original trials. The approach determines probability contours or regions of joint predictive intervals for the treatment effect and response variability, or for the endpoints of treatment effect confidence intervals, as functions of the findings from the original trials, the sample sizes for the bridging studies, and possible deviations from complete consistency with the original trials. The bridging studies are judged consistent with the original trials if their findings fall within the probability contours or regions. Regulatory considerations determine the region definitions and the appropriate probability levels. Producer and consumer risks provide a way to assess alternative region and probability choices. [Supplemental materials are available for this article. Go to the publisher's online edition of the Journal of Biopharmaceutical Statistics for the following free supplemental resource: Appendix 2: R code for calculations.]
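
A one-dimensional sketch of the consistency check; the paper works with joint regions for effect and variability, whereas this reduced version uses a normal approximation for the effect alone, and every number below is an assumption for illustration. The original trials induce a posterior for the treatment effect, the predictive distribution of the bridging estimate adds its sampling variability, and the observed bridging result is judged consistent if it falls inside a predictive interval at a regulator-chosen level.

    import numpy as np
    from scipy.stats import norm

    # Posterior for the treatment effect from the original trials
    # (normal approximation; hypothetical numbers).
    mu_orig, se_orig = 4.0, 0.8

    # Planned bridging study: sampling SE of its two-arm estimate given n.
    sd_resp, n_bridge = 10.0, 60
    se_bridge = sd_resp * np.sqrt(2.0 / n_bridge)

    # Predictive distribution of the bridging estimate = posterior + noise.
    pred_sd = np.hypot(se_orig, se_bridge)
    level = 0.90                                   # regulatory choice
    lo, hi = norm.ppf([(1 - level) / 2, (1 + level) / 2], mu_orig, pred_sd)

    observed = 2.6                                 # hypothetical bridging result
    print(f"{level:.0%} predictive interval: ({lo:.2f}, {hi:.2f})")
    print("consistent with original trials:", lo <= observed <= hi)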


Subject(s)
Bayes Theorem , Multicenter Studies as Topic/statistics & numerical data , Research Design/statistics & numerical data , Algorithms , Clinical Trials as Topic , Data Interpretation, Statistical , Drug Industry , Humans , Likelihood Functions , Models, Statistical , Randomized Controlled Trials as Topic , Risk , Sample Size
13.
Pharmacoepidemiol Drug Saf ; 19(5): 533-6, 2010 May.
Article in English | MEDLINE | ID: mdl-20437460

ABSTRACT

Dr. Walker asserts that a hypothesis can always be tested using the same data source that generated it, provided the test data are independent of the data that generated the hypothesis. One way to do this is to use part of the totality of the data to generate the hypothesis and the remainder to test it. The validity of this assertion depends on what one means by 'independent'. This note addresses the logical and statistical implications of Dr. Walker's assertion. The key conclusion is that what constitutes 'independent' data has to be considered carefully, and that hypothesis-generating and test data from the same data source generally cannot be considered 'independent'.


Subject(s)
Causality , Databases, Factual , Decision Theory , Pharmacoepidemiology/methods , Humans , Infant , Intussusception/chemically induced , Intussusception/epidemiology , Pharmacoepidemiology/standards , Pharmacoepidemiology/statistics & numerical data , Rotavirus Vaccines/adverse effects
14.
Am Heart J ; 158(4): 513-519.e3, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19781408

ABSTRACT

BACKGROUND: Residual cardiovascular (CV) risk often remains high despite statin therapy to lower low-density lipoprotein cholesterol (LDL-C). New therapies to raise high-density lipoprotein cholesterol (HDL-C) are currently being investigated. Anacetrapib is a cholesteryl ester transfer protein (CETP) inhibitor that raises HDL-C and reduces LDL-C when administered alone or with a statin. Adverse effects on blood pressure, electrolytes, and aldosterone levels, seen with another drug in this class, have not been noted in studies of anacetrapib to date. METHODS: Determining the EFficacy and Tolerability of CETP INhibition with AnacEtrapib (DEFINE) is a randomized, double-blind, placebo-controlled trial to assess the efficacy and safety profile of anacetrapib in patients with coronary heart disease (CHD) or CHD risk equivalents (ClinicalTrials.gov NCT00685776). Eligible patients, at the National Cholesterol Education Program Adult Treatment Panel III LDL-C treatment goal on a statin with or without other lipid-modifying medications, are treated with anacetrapib 100 mg or placebo for 18 months, followed by a 3-month poststudy follow-up. The primary end points are the percent change from baseline in LDL-C and the safety and tolerability of anacetrapib. Comprehensive preplanned interim safety analyses will be performed at the 6- and 12-month time points to examine treatment effects on key safety end points, including blood pressure and electrolytes. A preplanned Bayesian analysis will be performed to interpret the CV event distribution, given the limited number of events expected in this study. RESULTS: A total of 2,757 patients were screened at 153 centers in 20 countries, and 1,623 patients were randomized into the trial. Lipid results, clinical CV events, and safety outcomes from this trial are anticipated in 2010.


Subject(s)
Cholesterol Ester Transfer Proteins/antagonists & inhibitors , Cholesterol, LDL/blood , Coronary Disease/drug therapy , Oxazolidinones/administration & dosage , Adult , Aged , Aged, 80 and over , Cholesterol Ester Transfer Proteins/blood , Cholesterol, LDL/drug effects , Coronary Disease/blood , Coronary Disease/physiopathology , Double-Blind Method , Electrocardiography , Female , Follow-Up Studies , Humans , Male , Middle Aged , Treatment Outcome
15.
Biopharm Drug Dispos ; 30(7): 366-88, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19735073

ABSTRACT

IVIVC (in vitro-in vivo correlation) methods may support approving a change in the formulation of a drug using only in vitro dissolution data, without additional bioequivalence trials in human subjects. Most current IVIVC methods express the in vivo plasma concentration of a drug formulation as a function of the cumulative in vivo absorption. The absorption is not directly observable, so it is estimated from the cumulative dissolution of the drug formulation in in vitro dissolution trials. The calculations conventionally entail the complex and potentially unstable mathematical operations of convolution and deconvolution, or approximations aimed at avoiding them. This paper describes, and illustrates with data on a controlled-release formulation, a Bayesian approach to evaluating IVIVC that does not require convolution, deconvolution, or approximation. The approach incorporates between- and within-subject (or replicate) variability without assuming asymptotic normality. The plasma concentration curve is expressed in terms of the in vitro dissolution percentage instead of time, recognizing that this correspondence may be noisy because of the various sources of error. All conventional functions of the concentration curve, such as AUC, C(max), and T(max), can be expressed in terms of dissolution percentage, with uncertainties arising from variability in measuring absorption and dissolution accounted for explicitly.
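
A much-reduced sketch of the reparameterization idea on synthetic data; the paper's hierarchical Bayesian model treats between- and within-subject variability formally, whereas this version only propagates replicate measurement noise by Monte Carlo. Indexing the concentration curve by dissolution percentage rather than time lets AUC and C(max) be read off in dissolution space with uncertainty, and no deconvolution is performed.

    import numpy as np

    rng = np.random.default_rng(5)
    t = np.array([0.5, 1, 2, 4, 6, 8, 12.0])                 # sampling times (h)
    diss = 100 * (1 - np.exp(-0.35 * t))                     # synthetic % dissolved
    conc = 20 * (np.exp(-0.1 * t) - np.exp(-0.9 * t))        # synthetic plasma conc.

    # Monte Carlo over noisy replicates of both measurements; each draw gives
    # a concentration-vs-dissolution curve and its summary functions.
    auc, cmax = [], []
    for _ in range(5_000):
        d = np.sort(diss * (1 + rng.normal(0, 0.05, t.size)))
        c = conc * (1 + rng.normal(0, 0.10, t.size))
        auc.append(np.sum((c[1:] + c[:-1]) / 2 * np.diff(d)))  # trapezoid AUC
        cmax.append(c.max())
    print("AUC in dissolution space: %.0f [%.0f, %.0f]"
          % (np.mean(auc), *np.percentile(auc, [2.5, 97.5])))
    print("Cmax: %.2f [%.2f, %.2f]"
          % (np.mean(cmax), *np.percentile(cmax, [2.5, 97.5])))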


Subject(s)
Area Under Curve , Chemistry, Pharmaceutical/statistics & numerical data , Therapeutic Equivalency , Absorption , Administration, Oral , Computational Biology/methods , Excipients/pharmacokinetics , Humans , Mathematics , Solubility , Statistics as Topic
16.
Clin Trials ; 6(4): 305-19, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19667027

ABSTRACT

OBJECTIVE: Studies measuring progression of carotid artery intima-media thickness (cIMT) have been used to estimate the effect of lipid-modifying therapies on cardiovascular event risk. The likelihood that future cIMT clinical trials will detect a true treatment effect can be estimated by leveraging results from prior studies. The present analyses assess, based on currently published data from prior clinical studies, the impact of between- and within-study variability on the likelihood that ongoing or future cIMT trials will detect the true treatment effect of lipid-modifying therapies. METHODS: Published data from six contemporary cIMT studies (ASAP, ARBITER 2, RADIANCE 1, RADIANCE 2, ENHANCE, and METEOR), including data from a total of 3563 patients, were examined. Bayesian and frequentist methods were used to assess the impact of between-study variability on the likelihood of detecting true treatment effects on 1-year cIMT progression/regression and to provide a sample size estimate that would specifically compensate for the effect of between-study variability. RESULTS: In addition to the well-described within-study variability, there is considerable between-study variability associated with the measurement of annualized change in cIMT. Accounting for the additional between-study variability decreases the power of existing study designs. To account for the added between-study variability, future cIMT studies would likely require a large increase in sample size in order to have a substantial probability (≥90%) of 90% power for detecting a true treatment effect. LIMITATION: Analyses are based on study-level data. Future meta-analyses incorporating patient-level data would be useful for confirmation. CONCLUSION: Because of substantial within- and between-study variability in the measurement of 1-year change in cIMT, as well as uncertainty about progression rates in contemporary populations, future studies evaluating the effect of new lipid-modifying therapies on atherosclerotic disease progression are likely to require large sample sizes to demonstrate a true treatment effect.
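
A sketch of the variance accounting that drives this conclusion, with illustrative numbers only (not the study's estimates): adding a between-study component tau^2 to the usual within-study sampling variance inflates the standard error of the estimated treatment difference, so power plateaus as the per-arm sample size grows.

    import numpy as np
    from scipy.stats import norm

    # Illustrative design values for annualized cIMT change (mm/year).
    delta = 0.010        # true between-treatment difference to detect
    sd_within = 0.040    # within-study SD of patient-level 1-year change
    tau = 0.004          # between-study SD of the study-level true effect
    alpha = 0.05

    def power(n_per_arm):
        se = np.sqrt(2 * sd_within**2 / n_per_arm + tau**2)
        return norm.sf(norm.ppf(1 - alpha / 2) - delta / se)

    for n in (200, 500, 1000, 5000):
        print(f"n/arm = {n:5d}: power = {power(n):.3f}")
    # As n grows, se approaches tau, so power plateaus below 1 unless the
    # between-study component itself is reduced by design standardization.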


Subject(s)
Carotid Arteries/drug effects , Carotid Artery Diseases/drug therapy , Hypolipidemic Agents/therapeutic use , Randomized Controlled Trials as Topic , Sample Size , Tunica Intima/drug effects , Tunica Media/drug effects , Bayes Theorem , Carotid Arteries/pathology , Carotid Artery Diseases/physiopathology , Disease Progression , Humans , Models, Statistical , Monte Carlo Method , Research , Research Design , Risk Factors
17.
Drug Saf ; 32(6): 509-25, 2009.
Article in English | MEDLINE | ID: mdl-19459718

ABSTRACT

BACKGROUND: Pharmacovigilance data-mining algorithms (DMAs) are known to generate significant numbers of false-positive signals of disproportionate reporting (SDRs), under various standards for defining the terms 'true positive' and 'false positive'. OBJECTIVE: To construct a highly inclusive reference event database of reported adverse events for a limited set of drugs, and to use that database to evaluate three DMAs for their overall yield of scientifically supported adverse drug effects, with an emphasis on ascertaining false-positive rates as defined by matching to the database, and to assess the overlap among SDRs detected by the various DMAs. METHODS: A sample of 35 drugs approved by the US FDA between 2000 and 2004 was selected, including three drugs added to cover therapeutic categories not included in the original sample. We compiled a reference event database of adverse event information for these drugs from historical and current US prescribing information, from peer-reviewed literature covering 1999 through March 2006, from regulatory actions announced by the FDA, and from adverse event listings in the British National Formulary. Every adverse event mentioned in these sources was entered into the database, even those with minimal evidence for causality. To provide some selectivity regarding causality, each entry was assigned a level of evidence based on the source of the information, using rules developed by the authors. Using the FDA adverse event reporting system data for 2002 through 2005, SDRs were identified for each drug using three DMAs: an urn-model-based algorithm, the Gamma Poisson Shrinker (GPS), and the proportional reporting ratio (PRR), using previously published signalling thresholds. The absolute number and fraction of SDRs matching the reference event database at each level of evidence were determined for each report source and data-mining method. The overlap of the SDR lists among the various methods and report sources was tabulated as well. RESULTS: The GPS algorithm had the lowest overall yield of SDRs (763), with the highest fraction of events matching the reference event database (89 SDRs, 11.7%), excluding events described in the prescribing information at the time of drug approval. The urn model yielded more SDRs (1562), with a non-significantly lower fraction matching (175 SDRs, 11.2%). PRR detected still more SDRs (3616), but with a lower fraction matching (296 SDRs, 8.2%). In terms of overlap of SDRs among algorithms, PRR uniquely detected the highest number of SDRs (2231, with 144, or 6.5%, matching), followed by the urn model (212, with 26, or 12.3%, matching) and then GPS (0 SDRs uniquely detected). CONCLUSIONS: The three DMAs studied offer significantly different tradeoffs between the number of SDRs detected and the degree to which those SDRs are supported by external evidence. Those differences may reflect choices of detection thresholds as well as features of the algorithms themselves. For all three algorithms, there is a substantial fraction of SDRs for which no external supporting evidence can be found, even when a highly inclusive search for such evidence is conducted.
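
For orientation, a minimal sketch of one of the three algorithms, the proportional reporting ratio, computed from a 2x2 table of spontaneous reports. The counts are made up, and the commonly cited signalling threshold (PRR >= 2 with chi-square >= 4 and at least 3 reports) is used as a stand-in; the study applied its own previously published thresholds.

    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical 2x2 report counts from a spontaneous-reporting database:
    #               event of interest   all other events
    a, b = 40, 1_960        # reports for the drug of interest
    c, d = 600, 98_000      # reports for all other drugs

    prr = (a / (a + b)) / (c / (c + d))
    chi2 = chi2_contingency(np.array([[a, b], [c, d]]))[0]

    signal = (prr >= 2.0) and (chi2 >= 4.0) and (a >= 3)
    print(f"PRR = {prr:.2f}, chi-square = {chi2:.1f}, SDR flagged: {signal}")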


Subject(s)
Adverse Drug Reaction Reporting Systems/statistics & numerical data , Algorithms , Databases as Topic , Drug-Related Side Effects and Adverse Reactions , United States , United States Food and Drug Administration
18.
Int J Med Inform ; 78(12): e97-e103, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19230751

ABSTRACT

PURPOSE: To compare the results of drug safety data mining with three different algorithms when adverse events are identified using MedDRA Preferred Terms (PT) vs. High Level Terms (HLT) vs. Standardised MedDRA Queries (SMQ). METHODS: For a representative set of 26 drugs, data from the FDA Adverse Event Reporting System (AERS) database from 2001 through 2005 were mined for signals of disproportionate reporting (SDRs) using three different data mining algorithms (DMAs): the Gamma Poisson Shrinker (GPS), the urn-model algorithm (URN), and the proportional reporting ratio (PRR) algorithm. Results were evaluated using a previously described Reference Event Database (RED), which contains documented drug-event associations for the 26 drugs. The analysis emphasized the percentage of SDRs in the "unlabeled supported" category, corresponding to adverse events that were not described in the U.S. prescribing information for the drug at the time of its approval but were supported by some published evidence for an association with the drug. RESULTS: Based on a logistic regression analysis, the percentage of unlabeled supported SDRs was smallest at the PT level, intermediate at the HLT level, and largest at the SMQ level, for all three algorithms. The GPS and URN methods detected comparable percentages of unlabeled supported SDRs, while the PRR method detected a smaller percentage at all three MedDRA levels. No evidence of a method/level interaction was seen. CONCLUSIONS: Use of HLT and SMQ groupings can improve the percentage of unlabeled supported SDRs in data mining results. The trade-off for this gain is the medically less specific language of HLTs and SMQs compared with PTs, and the need for the added data mining step of examining the component PTs of each HLT or SMQ that results in a signal of disproportionate reporting.


Subject(s)
Adverse Drug Reaction Reporting Systems/statistics & numerical data , Data Mining , Product Surveillance, Postmarketing , Algorithms , Humans , United States , United States Food and Drug Administration
19.
Biom J ; 50(5): 837-51, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18932142

ABSTRACT

Patients in large clinical trials report many different adverse events, most of which will not have been anticipated in the protocol. Conventional hypothesis testing of between-group differences for each adverse event can be problematic: lack of significance does not mean lack of risk, the tests usually are not adjusted for multiplicity, and the data determine which hypotheses are tested. This paper describes a Bayesian screening approach that does not test hypotheses, is self-adjusting for multiplicity, provides a direct assessment of the likelihood of no material drug-event association, and quantifies the strength of the observed association. The approach directly incorporates clinical judgment by having the criteria for treatment association determined by the investigator(s). Diagnostic properties can be evaluated analytically. Application of the method to findings from a vaccine trial yields results similar to those found by methods using a false discovery rate argument and by a hierarchical Bayes approach.


Subject(s)
Bayes Theorem , Biometry/methods , Clinical Trials as Topic/adverse effects , Clinical Trials as Topic/statistics & numerical data , Confidence Intervals , Humans , Models, Statistical , Odds Ratio , Probability , Safety/statistics & numerical data
20.
Clin Ther ; 29(5): 778-794, 2007 May.
Article in English | MEDLINE | ID: mdl-17697899

ABSTRACT

BACKGROUND: Previous meta-analyses reported by Gould et al found significant decreases of 15% in the risk for coronary heart disease (CHD)-related mortality and 11% in the risk for all-cause mortality per 10% decrease in total cholesterol (TC) level. OBJECTIVE: To evaluate the effects of reducing cholesterol on clinical events after including data from recent clinical trials. METHODS: Using a literature search (MeSH key terms, including: bezafibrate, coronary disease, efficacy, gemfibrozil, hydroxymethylglutaryl-CoA reductase inhibitors, hypercholesterolemia, niacin [nicotinic acids], randomized controlled trials, and treatment outcome; years: 1999-2005), we identified trials published in English that assessed the effects of lipid-modifying therapies on CHD end points, including CHD-related death, myocardial infarction, and angina pectoris. We also included all studies from the previously published meta-analysis. Using the same analytic approach as previously, we determined the effects of net absolute reductions (1 mmol/L [38.7 mg/dL]) in TC and low-density lipoprotein cholesterol (LDL-C) on the relative risks (RRs) for all-cause mortality, CHD-related mortality, any CHD event (mortality or nonfatal myocardial infarction), and non-CHD-related mortality. RESULTS: We included 62 studies involving 216,616 patients, including 126,474 from 24 randomized controlled trials whose findings were published since the previous meta-analysis (1998). Among all patients, for every 1-mmol/L decrease in TC, there was a 17.5% reduction in RR for all-cause mortality; 24.5% for CHD-related mortality; and 29.5% for any CHD event. The corresponding reductions for every 1-mmol/L decrease in LDL-C were 15.6%, 28.0%, and 26.6%, respectively. Similar relationships were observed in patients without CHD. No significant relationship was found between lipid reduction and non-CHD-related mortality risk. CONCLUSIONS: The results from the present analysis support conclusions from previous meta-analyses that cholesterol lowering is clinically beneficial in patients with CHD or at elevated CHD risk. The results also support the previous finding that non-CHD-related mortality is unrelated to lipid reductions.
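
A sketch of the kind of calculation underlying such per-mmol/L estimates, on five invented trial summaries (the actual meta-analysis pooled 62 studies with a more elaborate approach): regress log relative risk on the net TC reduction with inverse-variance weights through the origin, then express the slope as a percent risk reduction per 1 mmol/L.

    import numpy as np

    # Invented per-trial summaries: net TC reduction (mmol/L), log RR for
    # CHD events, and the standard error of each log RR.
    dtc   = np.array([0.6, 1.0, 1.4, 0.9, 1.2])
    logrr = np.array([-0.15, -0.30, -0.45, -0.22, -0.40])
    se    = np.array([0.10, 0.08, 0.12, 0.09, 0.11])

    # Inverse-variance weighted least squares through the origin
    # (no effect when the TC reduction is zero).
    w = 1.0 / se**2
    slope = np.sum(w * dtc * logrr) / np.sum(w * dtc**2)

    rr_per_mmol = np.exp(slope)
    print(f"RR per 1 mmol/L TC reduction: {rr_per_mmol:.3f} "
          f"({(1 - rr_per_mmol):.1%} risk reduction)")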


Subject(s)
Anticholesteremic Agents/therapeutic use , Coronary Disease/prevention & control , Cholesterol/blood , Cholesterol, LDL/blood , Coronary Disease/epidemiology , Coronary Disease/mortality , Humans , Myocardial Infarction/epidemiology , Myocardial Infarction/prevention & control , Treatment Outcome