Results 1 - 7 of 7
1.
J Allergy Clin Immunol; 153(5): 1330-1343, 2024 May.
Article in English | MEDLINE | ID: mdl-38369029

ABSTRACT

BACKGROUND: The development of atopic dermatitis (AD) drugs is challenged by many disease phenotypes and trial design options, which are hard to explore experimentally.
OBJECTIVE: We aimed to optimize AD trial design using simulations.
METHODS: We constructed a quantitative systems pharmacology (QSP) model of AD and standard-of-care (SoC) treatments and generated a phenotypically diverse virtual population whose parameter distribution was derived from known relationships between AD biomarkers and disease severity and calibrated using disease severity evolution under SoC regimens.
RESULTS: We applied this workflow to the immunomodulator OM-85, currently being investigated for its potential use in AD, and calibrated the investigational treatment model with the efficacy profile of an existing trial (thereby enriching it with plausible marker levels and dynamics). We assessed the sensitivity of trial outcomes to the trial protocol and found that, for this particular example, the choice of end point matters more than the choice of dosing regimen, and that patient selection by model-based responder enrichment could increase the expected effect size. A global sensitivity analysis revealed that only a limited subset of baseline biomarkers is needed to predict the drug response of the full virtual population.
CONCLUSIONS: This AD QSP workflow, built around knowledge of marker-severity relationships and SoC efficacy, can be tailored to specific development cases to optimize several trial protocol parameters as well as biomarker stratification, and therefore has promise to become a powerful tool for model-informed AD drug development and personalized medicine.
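The responder-enrichment result above can be illustrated with a toy virtual population. Everything in this sketch is an assumption made for illustration (the distributions, the linear effect model, the `biomarker > 0.5` enrichment cutoff); it is not the published QSP model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical virtual population: a baseline severity score and one biomarker
# that correlates with drug responsiveness. All numbers are illustrative.
n = 10_000
biomarker = rng.normal(0.0, 1.0, n)
severity0 = 20 + 5 * rng.normal(size=n)

# Toy drug effect: relative severity reduction grows with the biomarker.
effect = np.clip(0.3 + 0.15 * biomarker + 0.1 * rng.normal(size=n), 0, 1)
severity1 = severity0 * (1 - effect)

def mean_change(mask):
    """Mean absolute severity improvement in a subgroup of virtual patients."""
    return float(np.mean(severity0[mask] - severity1[mask]))

all_patients = np.ones(n, dtype=bool)
enriched = biomarker > 0.5          # model-based responder enrichment

print(f"effect size, all patients: {mean_change(all_patients):.2f}")
print(f"effect size, enriched:     {mean_change(enriched):.2f}")
```

Selecting patients by the predictive biomarker raises the expected effect size, which is the mechanism behind the enrichment claim in the abstract.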


Subjects
Biomarkers; Clinical Trials as Topic; Dermatitis, Atopic; Dermatitis, Atopic/drug therapy; Humans; Network Pharmacology; Workflow; Immunologic Factors/therapeutic use; Immunologic Factors/pharmacology; Computer Simulation; Research Design; Severity of Illness Index
2.
BMC Bioinformatics; 24(1): 331, 2023 Sep 04.
Article in English | MEDLINE | ID: mdl-37667175

ABSTRACT

BACKGROUND: Over the past several decades, metrics have been defined to assess the quality of various types of models and to compare their performance according to their capacity to explain the variance found in real-life data. However, available validation methods are mostly designed for statistical regressions rather than for mechanistic models. To our knowledge, in the latter case there are no consensus standards, for instance for validating predictions against real-world data given the variability and uncertainty of the data. In this work, we focus on the prediction of time-to-event curves, using a mechanistic model of non-small cell lung cancer as an application example. We designed four empirical methods to assess both model performance and the reliability of predictions: two based on bootstrapped versions of statistical tests, the log-rank and combined weighted log-rank (MaxCombo) tests; and two based on bootstrapped prediction intervals, referred to here as raw coverage and the juncture metric. We also introduced the notion of observation time uncertainty to account for the real-life delay between the moment an event happens and the moment it is observed and reported.
RESULTS: We highlight the advantages and disadvantages of these methods according to their application context, and show that the context of use of the model affects the validation process. Using several validation metrics together revealed that the model could not predict disease evolution across the whole population of mutations at once, but performed better when making specific predictions in the target mutation populations. Relying on a single metric could have led to an erroneous validation of the model and its context of use.
CONCLUSIONS: With this work, we stress the importance of choosing metrics judiciously, and show how a combination of metrics can be more relevant for validating a given model and its predictions within a specific context of use. We also show that the reliability of the results depends both on the metric and on the statistical comparisons, and that the conditions of application and the type of available information need to be taken into account when choosing the best validation strategy.
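A minimal sketch of the bootstrapped log-rank idea follows. It implements the two-sample log-rank chi-square statistic for uncensored event times and reports how often bootstrap resamples of "observed" versus "model-predicted" times exceed the chi-square(1) 5% critical value. The data, sample sizes, and bootstrap count are invented for illustration; the paper's censoring handling and MaxCombo variant are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def logrank_stat(t1, t2):
    """Two-sample log-rank chi-square statistic (no censoring, illustrative)."""
    events = np.unique(np.concatenate([t1, t2]))
    o1 = e1 = v = 0.0
    for t in events:
        n1 = np.sum(t1 >= t)          # at risk in group 1 just before t
        n2 = np.sum(t2 >= t)
        d1 = np.sum(t1 == t)          # events at t in group 1
        d2 = np.sum(t2 == t)
        n, d = n1 + n2, d1 + d2
        if n < 2:
            continue
        o1 += d1
        e1 += d * n1 / n
        v += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    return (o1 - e1) ** 2 / v

def bootstrap_logrank(t_obs, t_pred, n_boot=200):
    """Fraction of bootstrap resamples whose log-rank statistic exceeds the
    chi-square(1) 95% critical value (3.841)."""
    reject = 0
    for _ in range(n_boot):
        b1 = rng.choice(t_obs, size=len(t_obs), replace=True)
        b2 = rng.choice(t_pred, size=len(t_pred), replace=True)
        if logrank_stat(b1, b2) > 3.841:
            reject += 1
    return reject / n_boot

# Toy "observed" vs "model-predicted" event times (months).
observed = rng.exponential(12.0, 80)
similar = rng.exponential(12.5, 80)
different = rng.exponential(4.0, 80)

rate_similar = bootstrap_logrank(observed, similar)
rate_different = bootstrap_logrank(observed, different)
print("rejection rate, similar model:  ", rate_similar)
print("rejection rate, different model:", rate_different)
```

A prediction that closely matches the observed curve is rarely rejected across resamples, while a clearly divergent one is rejected almost always; the bootstrap conveys how stable that verdict is under sampling variability.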


Subjects
Adenocarcinoma of Lung; Carcinoma, Non-Small-Cell Lung; Lung Neoplasms; Humans; Carcinoma, Non-Small-Cell Lung/genetics; Reproducibility of Results; Uncertainty; Lung Neoplasms/genetics; Adenocarcinoma of Lung/genetics; ErbB Receptors/genetics
3.
Acta Biotheor; 70(3): 19, 2022 Jul 07.
Article in English | MEDLINE | ID: mdl-35796890

ABSTRACT

Mechanistic models are built using knowledge as the primary information source, with well-established biological and physical laws determining the causal relationships within the model. Once the causal structure of the model is determined, parameters must be defined in order to accurately reproduce relevant data. Determining parameters and their values is particularly challenging for models of pathophysiology, for which calibration data are sparse: multiple data sources might be required, and the data may not be in a uniform or desirable format. We describe a calibration strategy that addresses this scarcity and heterogeneity. The strategy focuses on parameters whose initial values cannot easily be derived from the literature, with the goal of determining those values via calibration under constraints set by relevant data. When combined with a covariance matrix adaptation evolution strategy (CMA-ES), this step-by-step approach can be applied to a wide range of biological models. We describe a stepwise, integrative, and iterative approach to multiscale mechanistic model calibration, and provide an example: calibrating a pathophysiological lung adenocarcinoma model. Using this approach, we illustrate the successful calibration of a complex knowledge-based mechanistic model using only the limited, heterogeneous datasets publicly available in the literature.
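The constraint-based calibration described above can be sketched in a few lines. Here a toy logistic tumor-growth model is fitted to sparse, heterogeneous observations, each carrying its own tolerance band; predictions inside the band incur no penalty. SciPy's differential evolution is used as a stand-in for CMA-ES (both are population-based global optimizers), and the model, data points, and tolerances are all invented for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution

def logistic(t, r, K, v0=0.1):
    """Logistic growth: volume at time t given rate r and carrying capacity K."""
    return K / (1 + (K / v0 - 1) * np.exp(-r * t))

# Heterogeneous calibration data: (time in days, observed volume, tolerance).
data = [(5, 0.4, 0.1), (15, 2.5, 0.5), (40, 9.0, 1.0)]

def loss(params):
    """Penalize only predictions outside each observation's tolerance band."""
    r, K = params
    return sum(max(0.0, abs(logistic(t, r, K) - v) - tol) ** 2
               for t, v, tol in data)

result = differential_evolution(loss, bounds=[(0.01, 1.0), (1.0, 20.0)],
                                seed=0, tol=1e-8)
r_fit, K_fit = result.x
print(f"fitted growth rate r = {r_fit:.3f}, carrying capacity K = {K_fit:.2f}")
print(f"residual loss = {result.fun:.2e}")
```

Treating data as constraints rather than exact targets is what lets scarce, noisy, multi-source observations jointly pin down parameters whose initial values the literature does not provide.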


Subjects
Adenocarcinoma of Lung; Models, Biological; Animals; Calibration
4.
Mol Syst Biol; 16(4): e9495, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32337855

ABSTRACT

The prevalence of non-alcoholic fatty liver disease (NAFLD) continues to increase dramatically, and there is no approved medication for its treatment. Recently, we predicted the underlying molecular mechanisms involved in the progression of NAFLD using network analysis and identified metabolic cofactors that might be beneficial as supplements to decrease human liver fat. Here, we first assessed the tolerability of the combined metabolic cofactors, including l-serine, N-acetyl-l-cysteine (NAC), nicotinamide riboside (NR), and l-carnitine, in a 7-day rat toxicology study. Second, we performed a human calibration study with supplementation of the combined metabolic cofactors, together with a control study, to characterize the kinetics of these metabolites in the plasma of healthy subjects with and without supplementation. We measured clinical parameters and observed no immediate side effects. Next, we generated plasma metabolomics and inflammatory protein marker data to reveal the acute changes associated with supplementation of the metabolic cofactors. We also integrated the metabolomics data using personalized genome-scale metabolic modeling and observed that such supplementation significantly affects global human lipid, amino acid, and antioxidant metabolism. Finally, we predicted blood concentrations of these compounds during daily long-term supplementation with an ordinary differential equation model, predicted liver concentrations of serine with a pharmacokinetic model, and adjusted the doses of the individual metabolic cofactors for future human clinical trials.
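Predicting blood concentrations under daily long-term supplementation, as described above, is a repeated-dosing pharmacokinetics exercise. The sketch below uses a one-compartment model with first-order absorption and superposition of single-dose curves; the parameter values (dose, ka, ke, V) are illustrative placeholders, not the fitted values from the study.

```python
import numpy as np

def conc(t, dose=500.0, ka=1.0, ke=0.05, V=40.0, interval=24.0):
    """Plasma concentration (mg/L) at time t (h) under repeated daily dosing,
    by superposing one-compartment first-order-absorption curves."""
    c = 0.0
    n_doses = int(t // interval) + 1
    for i in range(n_doses):
        tau = t - i * interval          # time since the i-th dose
        if tau >= 0:
            c += dose * ka / (V * (ka - ke)) * (np.exp(-ke * tau) - np.exp(-ka * tau))
    return c

# Trough concentrations just before each daily dose approach steady state.
troughs = [conc(24.0 * d - 0.01) for d in range(1, 11)]
for d, c in enumerate(troughs, 1):
    print(f"day {d:2d} trough: {c:.3f} mg/L")
```

The troughs rise geometrically toward a steady-state plateau; in practice this is the kind of curve used to choose a dose that keeps long-term exposure in the desired range.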


Subjects
Acetylcysteine/administration & dosage; Carnitine/administration & dosage; Metabolomics/methods; Niacinamide/analogs & derivatives; Serine/administration & dosage; Acetylcysteine/blood; Adult; Animals; Carnitine/blood; Dietary Supplements; Drug Therapy, Combination; Healthy Volunteers; Humans; Male; Models, Animal; Niacinamide/administration & dosage; Niacinamide/blood; Non-alcoholic Fatty Liver Disease/diet therapy; Precision Medicine; Pyridinium Compounds; Rats; Serine/blood
5.
CPT Pharmacometrics Syst Pharmacol; 10(11): 1332-1342, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34327869

ABSTRACT

A model is needed to quantitatively characterize the effect of evinacumab, an investigational monoclonal antibody against angiopoietin-like protein 3 (ANGPTL3), on lipid trafficking. A quantitative systems pharmacology (QSP) approach was developed to predict the transient responses of different triglyceride (TG)-rich lipoprotein particles to evinacumab administration. A previously published hepatic lipid model was modified to address specific queries relevant to the mechanism of evinacumab and its effect on lipid metabolism. Modifications included the addition of intermediate-density and low-density lipoprotein compartments to address the modulation of lipoprotein lipase (LPL) activity by evinacumab, ANGPTL3 biosynthesis and clearance, and a target-mediated drug disposition model. A sensitivity analysis guided the creation of virtual patients (VPs). The drug-free QSP model agreed well with the clinical data published with the initial hepatic liver model over simulations ranging from 20 to 365 days in duration. The QSP model, including the interaction between LPL and ANGPTL3, was validated against clinical data for total evinacumab, total ANGPTL3, and TG concentrations, as well as inhibition of apolipoprotein CIII. Free ANGPTL3 concentration and LPL activity were also modeled. In total, seven VPs were created; their lipid levels matched the range of responses observed in evinacumab clinical trial data. The QSP model results agreed with clinical data across subjects and were shown to characterize known TG physiology and drug effects in a range of patient populations with varying TG levels, enabling hypothesis testing of evinacumab's effects on lipid metabolism.
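The target-mediated drug disposition component mentioned above has a standard core structure: free drug L binds free target R into a complex P that is internalized, while the target is continuously synthesized and degraded. The toy system below shows that structure and the resulting suppression and recovery of free target; all rate constants and the dose are invented for illustration and are not the evinacumab model parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rate constants (1/h or amount/h); not fitted values.
kon, koff = 0.5, 0.1     # drug-target binding / unbinding
kel, kint = 0.05, 0.2    # free-drug elimination / complex internalization
ksyn, kdeg = 1.0, 0.1    # target synthesis / degradation

def tmdd(t, y):
    """Basic TMDD ODEs: free drug L, free target R, complex P."""
    L, R, P = y
    bind = kon * L * R - koff * P
    return [-kel * L - bind,
            ksyn - kdeg * R - bind,
            bind - kint * P]

R0 = ksyn / kdeg                      # pre-dose target steady state
sol = solve_ivp(tmdd, (0, 100), [50.0, R0, 0.0],
                method="LSODA", dense_output=True)

R_free = sol.sol(np.linspace(0, 100, 11))[1]
print("free target over time:", np.round(R_free, 2))
```

Free target drops sharply while drug is abundant and recovers as the drug is eliminated, which is the qualitative behavior a TMDD submodel contributes to the larger lipid-trafficking QSP model.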


Subjects
Antibodies, Monoclonal; Network Pharmacology; Angiopoietin-Like Protein 3; Angiopoietin-Like Proteins; Antibodies, Monoclonal/pharmacology; Humans; Triglycerides/metabolism
6.
Drug Discov Today; 22(10): 1532-1538, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28736156

ABSTRACT

Nonalcoholic steatohepatitis (NASH) is a severe form of nonalcoholic fatty liver disease (NAFLD). We surveyed NASH therapies currently in development and found a wide variety of targets and approaches. Evaluating and clinically testing these targets is an expensive and time-consuming process, and systems biology approaches could enable quantitative evaluation of the likely efficacy and safety of different targets. This motivated our review of recent systems biology studies that focus on identifying targets and developing effective treatments for NASH. We discuss the potential broader use of genome-scale metabolic models and integrated networks in the validation of drug targets, which could facilitate more productive and efficient drug-development decisions for the treatment of NASH.


Subjects
Liver/drug effects; Non-alcoholic Fatty Liver Disease/drug therapy; Pharmaceutical Preparations/administration & dosage; Animals; Humans; Systems Biology/methods
7.
PLoS One; 11(2): e0147215, 2016.
Article in English | MEDLINE | ID: mdl-26863229

ABSTRACT

A striking contrast runs through the last 60 years of biopharmaceutical discovery, research, and development. Huge scientific and technological gains should have increased the quality of academic science and raised industrial R&D efficiency. However, academia faces a "reproducibility crisis"; inflation-adjusted industrial R&D costs per novel drug increased nearly 100-fold between 1950 and 2010; and drugs are more likely to fail in clinical development today than in the 1970s. The contrast is explicable only if powerful headwinds reversed the gains and/or if many "gains" have proved illusory. However, discussions of reproducibility and R&D productivity rarely address this point explicitly. The main objectives of the primary research in this paper are: (a) to provide quantitatively and historically plausible explanations of the contrast; and (b) to identify factors to which R&D efficiency is sensitive. We present a quantitative decision-theoretic model of the R&D process. The model represents therapeutic candidates (e.g., putative drug targets, molecules in a screening library) within a "measurement space", with candidates' positions determined by their performance on a variety of assays (e.g., binding affinity, toxicity, in vivo efficacy) whose results correlate to a greater or lesser degree. We apply decision rules to segment the space and assess the probability of correct R&D decisions. We find that when searching for rare positives (e.g., candidates that will successfully complete clinical development), changes in the predictive validity of screening and disease models that many people working in drug discovery would regard as small and/or unknowable (i.e., a 0.1 absolute change in the correlation coefficient between model output and clinical outcomes in humans) can offset large (e.g., 10-fold, even 100-fold) changes in models' brute-force efficiency.
We also show how validity and reproducibility correlate across a population of simulated screening and disease models. We hypothesize that screening and disease models with high predictive validity are more likely to yield good answers and good treatments, and so tend to render themselves and their diseases academically and commercially redundant. Perhaps there has also been too much enthusiasm for reductionist molecular models with insufficient predictive validity. We therefore hypothesize that the average predictive validity of the stock of academically and industrially "interesting" screening and disease models has declined over time, with even small falls able to offset large gains in scientific knowledge and brute-force efficiency. The rate of creation of valid screening and disease models may be the major constraint on R&D productivity.
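The decision-theoretic argument above can be sketched with a bivariate-normal toy model: each candidate has a latent clinical "truth" score and a screening readout correlated with it at strength rho (the model's predictive validity). We then pick the top candidates by readout and count how many are true positives. All the numbers (base rate, cohort sizes, the specific rho values) are illustrative choices, not the paper's calibrated parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

def hit_rate(rho, n_screened, n_picked=1000, base_rate=0.01):
    """Fraction of true positives among the top-n_picked candidates ranked by
    a screening readout with predictive validity rho."""
    truth = rng.normal(size=n_screened)
    assay = rho * truth + np.sqrt(1 - rho**2) * rng.normal(size=n_screened)
    positives = truth > np.quantile(truth, 1 - base_rate)   # rare true positives
    picked = np.argsort(assay)[-n_picked:]                  # top assay scores
    return positives[picked].mean()

same_n_low = hit_rate(rho=0.4, n_screened=100_000)
same_n_high = hit_rate(rho=0.5, n_screened=100_000)         # +0.1 validity
brute_force = hit_rate(rho=0.4, n_screened=1_000_000)       # 10x throughput

print(f"rho=0.4, 1e5 screened: hit rate {same_n_low:.3f}")
print(f"rho=0.5, 1e5 screened: hit rate {same_n_high:.3f}")
print(f"rho=0.4, 1e6 screened: hit rate {brute_force:.3f}")
```

In this toy setting a 0.1 increase in validity yields a gain in hit rate comparable to a 10-fold increase in the number of candidates screened, which is the qualitative point of the abstract's sensitivity claim.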


Subjects
Biopharmaceutics/trends; Decision Theory; Drug Discovery; Biopharmaceutics/methods; Cost-Benefit Analysis; Drug Discovery/economics; Efficiency; False Positive Reactions; High-Throughput Screening Assays; Humans; Models, Theoretical; Quality Control; Reproducibility of Results; Research