Results 1 - 13 of 13
1.
JAMA Netw Open ; 6(7): e2324977, 2023 07 03.
Article in English | MEDLINE | ID: mdl-37505498

ABSTRACT

Importance: The development of oncology drugs is expensive and beset by a high attrition rate. Analysis of the costs and causes of translational failure may help to reduce attrition and permit more appropriate use of resources to reduce mortality from cancer.

Objective: To analyze the causes of failure and the expenses incurred in clinical trials of novel oncology drugs, using the example of insulin-like growth factor-1 receptor (IGF-1R) inhibitors, none of which was approved for use in oncology practice.

Design, Setting, and Participants: In this cross-sectional study, inhibitors of IGF-1R and their clinical trials for use in oncology practice between January 1, 2000, and July 31, 2021, were identified by searching PubMed and ClinicalTrials.gov. A proprietary commercial database was interrogated for the expenses incurred in these trials; where data were not available, expenses were estimated using mean values from the proprietary database. A further search identified studies of the effects of IGF-1R inhibitors in preclinical in vivo assays, permitting calculation of the percentage of tumor growth inhibition. Archival data on the clinical trials of IGF-1R inhibitors and proprietary estimates of their expenses were examined, together with an analysis of preclinical data on IGF-1R inhibitors obtained from the published literature.

Main Outcomes and Measures: Expenses associated with research and development of IGF-1R inhibitors.

Results: Sixteen inhibitors of IGF-1R studied in 183 clinical trials were identified. None of the trials, across a wide range of tumor types, showed efficacy sufficient to support drug approval. More than 12,000 patients entered trials of IGF-1R inhibitors in oncology indications between 2003 and 2021. These trials incurred aggregate research and development expenses estimated at between $1.6 billion and $2.3 billion. Analysis of the preclinical in vivo assays of IGF-1R inhibitors that supported subsequent clinical investigation showed mixed activity and protocols that poorly reflected the treatment of advanced metastatic tumors in humans.

Conclusions and Relevance: Failed drug development in oncology incurs substantial expense: at an industry level, an estimated $50 billion to $60 billion is spent annually on failed oncology trials. Improved target validation and more appropriate preclinical models are required to reduce attrition, with greater attention to decision-making before clinical trials are launched. More appropriate use of these resources may do more to reduce cancer mortality.
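
The expense-aggregation step described in the design — summing known per-trial costs and imputing missing trials with the database mean — reduces to a simple calculation. The sketch below uses hypothetical figures, not values from the proprietary database.

```python
def aggregate_expenses(known_costs, n_missing):
    """Sum known per-trial expenses; impute missing trials with the mean."""
    mean_cost = sum(known_costs) / len(known_costs)
    return sum(known_costs) + mean_cost * n_missing

# Hypothetical per-trial expenses in US dollars (not database values).
known_trial_costs = [12.5e6, 8.0e6, 21.0e6]
print(f"${aggregate_expenses(known_trial_costs, n_missing=2):,.0f}")
```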


Subject(s)
Insulin-Like Growth Factor I , Neoplasms , Humans , Cross-Sectional Studies , Insulin-Like Growth Factor I/antagonists & inhibitors , Neoplasms/drug therapy
4.
Commun Med (Lond) ; 2(1): 154, 2022 Dec 06.
Article in English | MEDLINE | ID: mdl-36473994

ABSTRACT

BACKGROUND: Conventional preclinical models often miss drug toxicities, meaning the harm these drugs pose to humans is only discovered in clinical trials or after they reach the market. As a result, the pharmaceutical industry wastes considerable time and resources developing drugs destined to fail. Organ-on-a-Chip technology has the potential to improve success rates in drug development pipelines, as it can recapitulate organ-level pathophysiology and clinical responses; however, systematic and quantitative evaluations of Organ-Chips' predictive value have not yet been reported.

METHODS: 870 Liver-Chips were analyzed to determine their ability to predict drug-induced liver injury caused by small molecules identified as benchmarks by the Innovation and Quality consortium, which has published guidelines defining criteria for qualifying preclinical models. An economic analysis was also performed to measure the value Liver-Chips could offer if broadly adopted to support toxicity-related decisions in preclinical development workflows.

RESULTS: Here, we show that the Liver-Chip met the qualification guidelines across a blinded set of 27 known hepatotoxic and non-toxic drugs, with a sensitivity of 87% and a specificity of 100%. We also show that this level of performance could generate over $3 billion annually for the pharmaceutical industry through increased small-molecule R&D productivity.

CONCLUSIONS: These results show how incorporating predictive Organ-Chips into drug development workflows could substantially improve drug discovery and development, allowing manufacturers to bring safer, more effective medicines to market in less time and at lower cost.
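
The reported operating characteristics follow from a standard confusion-matrix calculation. In the sketch below, the toxic/non-toxic split is hypothetical — chosen only so the counts reproduce the abstract's 87% sensitivity and 100% specificity across 27 drugs — since the abstract does not give the per-class breakdown.

```python
def sensitivity(tp, fn):
    """Fraction of truly hepatotoxic drugs the model flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of non-toxic drugs the model correctly clears."""
    return tn / (tn + fp)

tp, fn = 13, 2   # toxic drugs flagged / missed    (hypothetical split)
tn, fp = 12, 0   # non-toxic cleared / mis-flagged (hypothetical split)

print(f"sensitivity: {sensitivity(tp, fn):.0%}")   # -> 87%
print(f"specificity: {specificity(tn, fp):.0%}")   # -> 100%
```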


Drug development is lengthy and costly, as it relies on laboratory models that fail to predict human reactions to potential drugs. Because of this, toxic drugs sometimes go on to harm humans when they reach clinical trials or once they are in the marketplace. Organ-on-a-Chip technology involves growing cells on small devices to mimic organs of the body, such as the liver. Organ-Chips could potentially help identify toxicities earlier, but there is limited research into how well they predict these effects compared to conventional models. In this study, we analyzed 870 Liver-Chips to determine how well they predict drug-induced liver injury, a common cause of drug failure, and found that Liver-Chips outperformed conventional models. These results suggest that widespread acceptance of Organ-Chips could decrease drug attrition, help minimize harm to patients, and generate billions in revenue for the pharmaceutical industry.

5.
Nat Rev Drug Discov ; 21(12): 915-931, 2022 12.
Article in English | MEDLINE | ID: mdl-36195754

ABSTRACT

Successful drug discovery is like finding oases of safety and efficacy in chemical and biological deserts. Screens in disease models, and other decision tools used in drug research and development (R&D), point towards oases when they score therapeutic candidates in a way that correlates with clinical utility in humans. Otherwise, they probably lead in the wrong direction. This line of thought can be quantified by using decision theory, in which 'predictive validity' is the correlation coefficient between the output of a decision tool and clinical utility across therapeutic candidates. Analyses based on this approach reveal that the detectability of good candidates is extremely sensitive to predictive validity, because the deserts are big and oases small. Both history and decision theory suggest that predictive validity is under-managed in drug R&D, not least because it is so hard to measure before projects succeed or fail later in the process. This article explains the influence of predictive validity on R&D productivity and discusses methods to evaluate and improve it, with the aim of supporting the application of more effective decision tools and catalysing investment in their creation.
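
The core quantity here, predictive validity, lends itself to a quick simulation. The sketch below (illustrative parameters only; the pool size, rarity threshold, and rho values are assumptions, not figures from the article) scores synthetic candidates with a tool whose output correlates with true utility at coefficient rho, and estimates how often the top-scored candidate is one of the rare genuinely good ones. Detectability falls off sharply as rho drops, which is the article's point about small oases in big deserts.

```python
import numpy as np

rng = np.random.default_rng(0)

def hit_rate(rho, n_candidates=10_000, top_fraction=0.001, trials=1_000):
    """P(the top-scored candidate is a truly rare 'oasis') given validity rho."""
    hits = 0
    for _ in range(trials):
        utility = rng.standard_normal(n_candidates)           # true clinical utility
        noise = rng.standard_normal(n_candidates)
        score = rho * utility + np.sqrt(1 - rho**2) * noise   # decision-tool output
        threshold = np.quantile(utility, 1 - top_fraction)    # the rare good candidates
        hits += utility[np.argmax(score)] >= threshold
    return hits / trials

for rho in (0.2, 0.4, 0.6, 0.8):
    print(f"rho = {rho:.1f}  ->  P(hit) = {hit_rate(rho):.3f}")
```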


Subject(s)
Drug Discovery , Efficiency , Humans , Drug Discovery/methods
6.
Drug Discov Today ; 27(11): 103333, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36007753

ABSTRACT

Research and development (R&D) outsourcing offers some obvious productivity benefits (e.g., access to new technology, variabilised costs, risk sharing). However, recent work in economics points to a productivity headwind at the level of the innovation ecosystem. The market for technologies with economies of scope and knowledge spillovers (those with the biggest impact on industry economics and social welfare) has structural features that allow customers to capture a disproportionate share of economic value and to transfer a disproportionate share of economic risk to technology providers, even though the providers aim to maximise profit. This reduces the incentives to invest in new ventures that specialise in the most promising early-stage projects. Near-term gains from R&D outsourcing can therefore be offset by slower innovation in the long run.

8.
BMJ Open ; 7(5): e013497, 2017 06 06.
Article in English | MEDLINE | ID: mdl-28588106

ABSTRACT

OBJECTIVES: To assess the evidence for price-based alcohol policy interventions in order to determine whether minimum unit pricing (MUP) is likely to be effective.

DESIGN: Systematic review and assessment of studies according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, against the Bradford Hill criteria for causality. Three electronic databases were searched from inception to February 2017; additional articles were found through hand searching and grey literature searches.

CRITERIA FOR SELECTING STUDIES: We included any study design that reported on the effect of price-based interventions on alcohol consumption or on alcohol-related morbidity, mortality and wider harms. Studies reporting on the effects of taxation or affordability, and studies that investigated only the price elasticity of demand, were beyond the scope of this review. Studies with any conflict of interest were excluded. All studies were appraised for methodological quality.

RESULTS: Of 517 studies assessed, 33 were included: 26 peer-reviewed research studies and seven from the grey literature. All nine of the Bradford Hill criteria were met, although different types of study satisfied different criteria. For example, modelling studies complied with the consistency and specificity criteria, time series analyses demonstrated the temporality and experiment criteria, and the analogy criterion was fulfilled by comparing the findings with the wider literature on taxation and affordability.

CONCLUSIONS: Overall, the Bradford Hill criteria for causality were satisfied, and there was very little evidence that minimum alcohol prices are not associated with consumption or subsequent harms. However, the overall quality of the evidence was variable, a large proportion of the evidence base has been produced by a small number of research teams, and the quantitative uncertainty in many estimates or forecasts is often poorly communicated outside the academic literature. Nonetheless, price-based alcohol policy interventions such as MUP are likely to reduce alcohol consumption and alcohol-related morbidity and mortality.


Subject(s)
Alcohol Drinking/economics , Alcohol-Related Disorders/mortality , Alcoholic Beverages/economics , Costs and Cost Analysis/standards , Models, Theoretical , Public Policy/economics , Alcohol Drinking/epidemiology , Causality , Humans , Randomized Controlled Trials as Topic , Taxes
9.
PLoS One ; 11(2): e0147215, 2016.
Article in English | MEDLINE | ID: mdl-26863229

ABSTRACT

A striking contrast runs through the last 60 years of biopharmaceutical discovery, research, and development. Huge scientific and technological gains should have increased the quality of academic science and raised industrial R&D efficiency. However, academia faces a "reproducibility crisis"; inflation-adjusted industrial R&D costs per novel drug increased nearly 100-fold between 1950 and 2010; and drugs are more likely to fail in clinical development today than in the 1970s. The contrast is explicable only if powerful headwinds reversed the gains and/or if many "gains" have proved illusory. However, discussions of reproducibility and R&D productivity rarely address this point explicitly.

The main objectives of the primary research in this paper are: (a) to provide quantitatively and historically plausible explanations of the contrast; and (b) to identify factors to which R&D efficiency is sensitive.

We present a quantitative decision-theoretic model of the R&D process. The model represents therapeutic candidates (e.g., putative drug targets, molecules in a screening library) within a "measurement space", with candidates' positions determined by their performance on a variety of assays (e.g., binding affinity, toxicity, in vivo efficacy) whose results correlate to a greater or lesser degree. We apply decision rules to segment the space and assess the probability of correct R&D decisions.

We find that when searching for rare positives (e.g., candidates that will successfully complete clinical development), changes in the predictive validity of screening and disease models that many people working in drug discovery would regard as small and/or unknowable (i.e., a 0.1 absolute change in the correlation coefficient between model output and clinical outcomes in man) can offset large (e.g., 10-fold, even 100-fold) changes in models' brute-force efficiency. We also show how validity and reproducibility correlate across a population of simulated screening and disease models.

We hypothesize that screening and disease models with high predictive validity are more likely to yield good answers and good treatments, and so tend to render themselves and their diseases academically and commercially redundant. Perhaps there has also been too much enthusiasm for reductionist molecular models with insufficient predictive validity. We therefore hypothesize that the average predictive validity of the stock of academically and industrially "interesting" screening and disease models has declined over time, with even small falls able to offset large gains in scientific knowledge and brute-force efficiency. The rate of creation of valid screening and disease models may be the major constraint on R&D productivity.
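
The headline claim — that a 0.1 change in correlation can offset an order-of-magnitude change in brute-force scale — can be checked with a toy version of such a model. This is a reconstruction under simple assumptions (Gaussian utilities and scores, pick-the-top-scored-candidate selection), not the paper's code, and the numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def best_pick_quality(rho, n_screened, trials=2_000):
    """Mean true utility of the top-scored candidate from a screen of size n."""
    picks = []
    for _ in range(trials):
        utility = rng.standard_normal(n_screened)
        score = rho * utility + np.sqrt(1 - rho**2) * rng.standard_normal(n_screened)
        picks.append(utility[np.argmax(score)])
    return float(np.mean(picks))

# A 10-fold larger screen at validity 0.4...
print(best_pick_quality(rho=0.4, n_screened=10_000))   # ~1.5
# ...is matched and slightly beaten by a 0.1 validity gain at a tenth the scale.
print(best_pick_quality(rho=0.5, n_screened=1_000))    # ~1.6
```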


Subject(s)
Biopharmaceutics/trends , Decision Theory , Drug Discovery , Biopharmaceutics/methods , Cost-Benefit Analysis , Drug Discovery/economics , Efficiency , False Positive Reactions , High-Throughput Screening Assays , Humans , Models, Theoretical , Quality Control , Reproducibility of Results , Research
10.
Ther Innov Regul Sci ; 49(3): 415-424, 2015 May.
Article in English | MEDLINE | ID: mdl-30222401

ABSTRACT

In recent years, concern has been growing that traditional research and development models in the life sciences are unsustainable. Productivity, especially in pharmaceuticals, has plummeted, and too many of the products emerging from increasingly lengthy and costly clinical development offer marginal benefit to patients. Although the phenomenon is global, there are specific and important features of European life sciences that impede the translation of an ever more penetrating understanding of biology into effective treatments. This article analyzes these issues in the context of European biopharmaceutical innovation, describes the actions that Europe is already taking, and suggests what more needs to be done.

11.
Nat Rev Drug Discov ; 11(3): 191-200, 2012 Mar 01.
Article in English | MEDLINE | ID: mdl-22378269

ABSTRACT

The past 60 years have seen huge advances in many of the scientific, technological and managerial factors that should tend to raise the efficiency of commercial drug research and development (R&D). Yet the number of new drugs approved per billion US dollars spent on R&D has halved roughly every 9 years since 1950, falling around 80-fold in inflation-adjusted terms. There have been many proposed solutions to the problem of declining R&D efficiency. However, their apparent lack of impact so far, and the contrast between improving inputs and declining output in terms of the number of new drugs, make it sensible to ask whether the underlying problems have been correctly diagnosed. Here, we discuss four factors that we consider to be primary causes, which we call the 'better than the Beatles' problem; the 'cautious regulator' problem; the 'throw money at it' tendency; and the 'basic research-brute force' bias. Our aim is to provoke a more systematic analysis of the causes of the decline in R&D efficiency.
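
The quoted figures are mutually consistent, as a quick back-of-envelope check shows (a sketch of the arithmetic, not data from the article):

```python
# Halving R&D efficiency every ~9 years from 1950 to 2010 compounds to
# roughly a 100-fold decline, the same order as the stated ~80-fold.
years = 2010 - 1950              # 60 years
halving_time = 9                 # years per halving, per the abstract
fold_decline = 2 ** (years / halving_time)
print(f"{fold_decline:.0f}-fold decline")   # -> 102-fold
```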


Subject(s)
Drug Industry/standards , Efficiency, Organizational/standards , Pharmaceutical Preparations , Research/standards , Animals , Drug Delivery Systems/standards , Drug Delivery Systems/trends , Drug Industry/trends , Efficiency, Organizational/trends , Humans , Pharmaceutical Preparations/administration & dosage , Research/trends
12.
Neuroreport ; 14(7): 1045-50, 2003 May 23.
Article in English | MEDLINE | ID: mdl-12802200

ABSTRACT

To test the hypothesis that correlated neuronal activity serves as the neuronal code for visual feature binding, we applied information theory techniques to multiunit activity recorded from pairs of V1 recording sites in anaesthetised cats while presenting either single or separate bar stimuli. We quantified the roles of firing rates of individual channels and of cross-correlations between recording sites in encoding of visual information. Between 89 and 96% of the information was carried by firing rates; correlations contributed 4-11% extra information. The distribution across the population of either correlation strength or correlation information did not co-vary systematically with changes in perception predicted by Gestalt psychology. These results suggest that firing rates, rather than correlations, are the main element of the population code for feature binding in primary visual cortex.
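
The decomposition described here — information carried by firing rates versus the extra information in cross-site correlations — can be sketched with a plug-in estimator. The toy table below is synthetic (not recordings from the study), and the shuffle-based comparison used here is one standard way to isolate the correlational contribution; the difference can come out positive or negative.

```python
import numpy as np

rng = np.random.default_rng(0)

def plugin_mi(p_sr):
    """Plug-in mutual information I(S;R) in bits from a joint table p[s, r]."""
    ps = p_sr.sum(axis=1, keepdims=True)
    pr = p_sr.sum(axis=0, keepdims=True)
    nz = p_sr > 0
    return float(np.sum(p_sr[nz] * np.log2(p_sr[nz] / (ps * pr)[nz])))

# Toy joint table p[s, r1, r2]: 2 stimuli (e.g., one bar vs two bars) and
# binned spike counts at two recording sites.
counts = rng.integers(1, 20, size=(2, 4, 4)).astype(float)
p = counts / counts.sum()

full = plugin_mi(p.reshape(2, -1))      # rates + correlations

# 'Shuffled' surrogate: product of each site's stimulus-conditional marginals,
# which preserves firing rates but removes cross-site correlations.
p_s = p.sum(axis=(1, 2))
cond1 = p.sum(axis=2) / p_s[:, None]    # p(r1|s)
cond2 = p.sum(axis=1) / p_s[:, None]    # p(r2|s)
shuffled = np.einsum('si,sj,s->sij', cond1, cond2, p_s)
rate_only = plugin_mi(shuffled.reshape(2, -1))

print(f"full: {full:.3f} bits, rate-only: {rate_only:.3f} bits, "
      f"correlation contribution: {full - rate_only:+.3f} bits")
```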


Subject(s)
Brain Mapping/methods , Photic Stimulation/methods , Visual Cortex/physiology , Action Potentials/physiology , Animals , Cats
13.
Proc Natl Acad Sci U S A ; 99(16): 10494-9, 2002 Aug 06.
Article in English | MEDLINE | ID: mdl-12097644

ABSTRACT

The absolute diversity of prokaryotes is widely held to be unknown and unknowable at any scale in any environment. However, it is not necessary to count every species in a community to estimate the number of different taxa it contains; it is sufficient to estimate the area under the species abundance curve for that environment. Log-normal species abundance curves are thought to characterize communities, such as bacteria, that exhibit highly dynamic and random growth. Thus, we are able to show that the diversity of prokaryotic communities may be related to the ratio of two measurable variables: the total number of individuals in the community and the abundance of the most abundant members of that community. We assume that either the least abundant species has an abundance of 1 or Preston's canonical hypothesis is valid. Consequently, we can estimate bacterial diversity on a small scale (oceans, 160 per ml; soil, 6,400-38,000 per g; sewage works, 70 per ml). We are also able to speculate about diversity at a larger scale: the entire bacterial diversity of the sea is unlikely to exceed 2 × 10^6 taxa, while a ton of soil could contain 4 × 10^6 different taxa. These are preliminary estimates that may change as we gain a greater understanding of the nature of prokaryotic species abundance curves. Nevertheless, it is evident that local and global prokaryotic diversity can be understood through species abundance curves, and that purely experimental approaches to solving this conundrum will be fruitless.
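
The estimator the abstract describes can be reconstructed numerically under its stated assumptions (a sketch, not the authors' procedure): log-abundances are normal, the least abundant taxon has abundance 1 so the curve is symmetric about half of ln N_max, exactly one taxon sits in the upper tail at the observed maximum abundance N_max, and the curve's total matches the observed number of individuals N_T.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def estimate_taxa(n_total, n_max):
    """Number of taxa S implied by a log-normal species abundance curve."""
    mu = np.log(n_max) / 2   # symmetric between abundances 1 and n_max

    def excess(sigma):
        s = 1.0 / norm.sf(mu / sigma)                    # one taxon beyond n_max
        return s * np.exp(mu + sigma**2 / 2) - n_total   # match total individuals

    # Scan for the first sign change, then refine the root.
    grid = np.linspace(0.5, 8.0, 400)
    vals = [excess(g) for g in grid]
    for a, b, va, vb in zip(grid, grid[1:], vals, vals[1:]):
        if va * vb < 0:
            sigma = brentq(excess, a, b)
            return 1.0 / norm.sf(mu / sigma)
    raise ValueError("no solution in scanned range")

# Illustrative inputs: ~1e6 cells per ml of seawater with the dominant taxon
# at ~1.4e5 cells per ml; these assumed values yield roughly the ~160 taxa
# per ml the abstract reports for oceans.
print(round(estimate_taxa(n_total=1e6, n_max=1.4e5)))
```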


Subject(s)
Bacteria/classification , Genetic Variation , RNA, Bacterial/classification , RNA, Ribosomal, 16S/classification , Soil Microbiology , Water Microbiology , Bacteria/genetics , Mathematical Computing , Oceans and Seas , Prokaryotic Cells