Results 1 - 20 of 127
1.
Biostatistics ; 2023 Oct 06.
Article in English | MEDLINE | ID: mdl-37805939

ABSTRACT

Joint modeling of longitudinal data, such as quality of life data, together with survival data is important for palliative care researchers to draw efficient inferences, because it can account for the associations between the two types of data. Modeling quality of life on a retrospective, time-from-death scale helps investigators interpret the results of palliative care studies, in which life expectancy is relatively short. However, informative censoring remains a complex challenge for modeling quality of life on the retrospective time scale, although it has been addressed for joint models on the prospective time scale. To fill this gap, we develop a novel joint modeling approach that addresses this challenge by allowing informative censoring events to depend on patients' quality of life and survival through a random effect. Our approach has two sub-models: a linear mixed-effect model for the longitudinal quality of life, and a competing-risk model for the death and dropout times that shares the same random effect with the longitudinal model. By appropriately modeling the informative censoring time, the approach provides unbiased estimates of the parameters of interest. Model performance is assessed with a simulation study and compared with existing approaches, and a real-world study illustrates the application of the new approach.
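The shared-random-effect structure described above can be sketched in a few lines. The following numpy simulation is a hedged illustration, not the paper's model: all parameter values and functional forms are made up. It shows how a single subject-level random effect links the longitudinal quality-of-life trajectory to both the death and dropout times, which is exactly what makes the censoring informative.

```python
import numpy as np

rng = np.random.default_rng(42)
n, n_visits = 500, 6

# Subject-level random effect shared by all sub-models.
b = rng.normal(0.0, 1.0, size=n)

# Longitudinal sub-model: quality of life declines over visits,
# shifted per subject by the shared random effect b.
t = np.tile(np.arange(n_visits), (n, 1))
qol = 60.0 - 2.0 * t + b[:, None] + rng.normal(0, 3, size=(n, n_visits))

# Competing-risk sub-model: death and dropout hazards both load on b
# (illustrative loadings), so dropout is informative about QoL.
death_time = rng.exponential(1.0 / np.exp(-2.0 + 0.5 * b))
dropout_time = rng.exponential(1.0 / np.exp(-2.5 + 0.8 * b))
dropped = dropout_time < death_time
```

Because both hazards increase with `b`, subjects with higher random effects (here, higher QoL) tend to have shorter observed times, which is the dependence an analysis ignoring informative censoring would miss.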

2.
Brief Bioinform ; 22(6)2021 11 05.
Article in English | MEDLINE | ID: mdl-34308471

ABSTRACT

The effect of cancer therapies is often tested pre-clinically via in vitro experiments, where the post-treatment viability of the cancer cell population is measured through assays estimating the number of viable cells. In this way, large libraries of compounds can be tested, comparing the efficacy of each treatment. Drug interaction studies focus on the quantification of the additional effect encountered when two drugs are combined, as opposed to using the treatments separately. In the bayesynergy R package, we implement a probabilistic approach for the description of the drug combination experiment, where the observed dose response curve is modelled as a sum of the expected response under a zero-interaction model and an additional interaction effect (synergistic or antagonistic). Although the model formulation makes use of the Bliss independence assumption, we note that the posterior estimates of the dose-response surface can also be used to extract synergy scores based on other reference models, which we illustrate for the Highest Single Agent model. The interaction is modelled in a flexible manner, using a Gaussian process formulation. Since the proposed approach is based on a statistical model, it allows the natural inclusion of replicates, handles missing data and uneven concentration grids, and provides uncertainty quantification around the results. The model is implemented in the open-source Stan programming language providing a computationally efficient sampler, a fast approximation of the posterior through variational inference, and features parallel processing for working with large drug combination screens.
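The Bliss zero-interaction baseline mentioned above is simple to compute. The sketch below is plain Python/numpy, not the bayesynergy implementation, and all viability values are made up; it contrasts an observed combination response with the Bliss expectation.

```python
import numpy as np

def bliss_expected(viab_a, viab_b):
    """Expected fractional viability of the combination under Bliss
    independence: the two drugs act as independent events."""
    return np.asarray(viab_a) * np.asarray(viab_b)

def interaction_score(viab_combo, viab_a, viab_b):
    """Negative => observed kill exceeds the Bliss expectation (synergy);
    positive => antagonism."""
    return np.asarray(viab_combo) - bliss_expected(viab_a, viab_b)

# Illustrative monotherapy viabilities on a 2x2 concentration grid.
va = np.array([[0.9, 0.6], [0.9, 0.6]])          # drug A varies by column
vb = np.array([[0.8, 0.8], [0.5, 0.5]])          # drug B varies by row
observed = np.array([[0.70, 0.40], [0.40, 0.20]])  # combination viabilities

score = interaction_score(observed, va, vb)       # negative here => synergy
```

In the package this interaction surface is not a pointwise residual but a Gaussian-process-distributed function estimated jointly with the monotherapy curves; the sketch only shows the reference-model arithmetic.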


Subjects
Bayes Theorem, Computational Biology/methods, Drug Interactions, Drug Synergism, Software, Algorithms, Cell Line, Drug Evaluation, Preclinical, Drug Therapy, Combination, Humans, In Vitro Techniques, Web Browser
3.
Stat Med ; 42(9): 1338-1352, 2023 04 30.
Article in English | MEDLINE | ID: mdl-36757145

ABSTRACT

Outcome-dependent sampling (ODS) is a commonly used class of sampling designs to increase estimation efficiency in settings where response information (and possibly adjuster covariates) is available, but the exposure is expensive and/or cumbersome to collect. We focus on ODS within the context of a two-phase study, where in Phase One the response and adjuster covariate information is collected on a large cohort that is representative of the target population, but the expensive exposure variable is not yet measured. In Phase Two, using response information from Phase One, we selectively oversample a subset of informative subjects in whom we collect expensive exposure information. Importantly, the Phase Two sample is no longer representative, and we must use ascertainment-correcting analysis procedures for valid inferences. In this paper, we focus on likelihood-based analysis procedures, particularly a conditional-likelihood approach and a full-likelihood approach. Whereas the full-likelihood retains incomplete Phase One data for subjects not selected into Phase Two, the conditional-likelihood explicitly conditions on Phase Two sample selection (ie, it is a "complete case" analysis procedure). These designs and analysis procedures are typically implemented assuming a known, parametric model for the response distribution. However, in this paper, we approach analyses implementing a novel semi-parametric extension to generalized linear models (SPGLM) to develop likelihood-based procedures with improved robustness to misspecification of distributional assumptions. We specifically focus on the common setting where standard GLM distributional assumptions are not satisfied (eg, misspecified mean/variance relationship). We aim to provide practical design guidance and flexible tools for practitioners in these settings.
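The Phase Two selection step described above can be made concrete with a short simulation. This is a hedged sketch with invented numbers, not the paper's design: it oversamples the informative tails of the Phase One response and shows why the resulting sample is no longer representative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Phase One: response y and a cheap adjuster covariate z on the full cohort.
n = 10_000
z = rng.normal(size=n)
y = 0.5 * z + rng.normal(size=n)

# Outcome-dependent sampling: oversample the informative tails of y
# (selection probabilities are illustrative).
lo, hi = np.quantile(y, [0.1, 0.9])
tail = (y < lo) | (y > hi)
p_select = np.where(tail, 0.9, 0.1)
phase2 = rng.random(n) < p_select

# Only Phase Two subjects get the expensive exposure measured; any analysis
# must correct for this ascertainment, e.g. by conditioning on selection as
# in the conditional-likelihood approach, or by retaining the incomplete
# Phase One data as in the full-likelihood approach.
x_phase2 = 0.8 * y[phase2] + rng.normal(size=phase2.sum())
```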


Subjects
Statistical Models, Humans, Linear Models, Likelihood Functions
4.
Stat Med ; 42(30): 5708-5722, 2023 12 30.
Article in English | MEDLINE | ID: mdl-37858287

ABSTRACT

As the roles of historical trials and real-world evidence in drug development have substantially increased, several approaches have been proposed to leverage external data and improve the design of clinical trials. While most of these approaches focus on methodology development for borrowing information during the analysis stage, there is a risk of inadequate or absent enrollment of concurrent control due to misspecification of heterogeneity from external data, which can result in unreliable estimates of treatment effect. In this study, we introduce a Bayesian hybrid design with flexible sample size adaptation (BEATS) that allows for adaptive borrowing of external data based on the level of heterogeneity to augment the control arm during both the design and interim analysis stages. Moreover, BEATS extends the Bayesian semiparametric meta-analytic predictive prior (BaSe-MAP) to incorporate time-to-event endpoints, enabling optimal borrowing performance. Initially, BEATS calibrates the expected sample size and initial randomization ratio based on heterogeneity among the external data. During the interim analysis, flexible sample size adaptation is performed to address conflicts between the concurrent and historical control, while also conducting futility analysis. At the final analysis, estimation is provided by incorporating the calibrated amount of external data. Therefore, our proposed design allows for an approximation of an ideal randomized controlled trial with an equal randomization ratio while controlling the size of the concurrent control to benefit patients and accelerate drug development. BEATS also offers optimal power and robust estimation through flexible sample size adaptation when conflicts arise between the concurrent control and external data.


Subjects
Statistical Models, Research Design, Humans, Sample Size, Bayes Theorem, Computer Simulation
5.
J Biopharm Stat ; 33(5): 555-574, 2023 09 03.
Article in English | MEDLINE | ID: mdl-36852969

ABSTRACT

This paper studies statistical inference for covariate-specific time-dependent receiver operating characteristic (ROC) curves when continuous biomarker values are nonignorably missing. To construct time-dependent ROC curves, we consider a joint model in which the failure time depends on the continuous biomarker and the covariates through a Cox proportional hazards model, and the continuous biomarker depends on the covariates through a semiparametric location model. Assuming a purely parametric model for the propensity score, we use instrumental variables to deal with the identifiability issue and estimate the unknown parameters of the propensity score by a simple and efficient method. With the propensity score estimated, we develop Horvitz-Thompson (HT) and augmented inverse probability weighting (AIPW) estimators of the quantities of interest. In the presence of a nonignorably missing biomarker, our AIPW estimators remain doubly robust when the true propensity score follows a particular parametric logistic model. Finally, simulation studies assess the performance of the proposed approaches, and a real data analysis illustrates their application.
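For readers unfamiliar with the AIPW construction used above, a minimal numpy sketch for a much simpler target, the mean of an outcome that is partially missing, shows the augmentation term. The covariate-specific ROC estimators in the paper are more involved; all numbers below are made up.

```python
import numpy as np

def aipw_mean(y, observed, pi, m):
    """Augmented inverse probability weighted estimate of E[Y].

    y        : outcomes (placeholder values where unobserved)
    observed : 1 if y is observed, 0 if missing
    pi       : modeled probability of being observed
    m        : outcome-model prediction of E[Y | covariates]
    Consistent if either pi or m is correctly specified (double robustness).
    """
    y = np.where(observed == 1, y, 0.0)  # unobserved values never contribute
    return np.mean(observed * y / pi + (1 - observed / pi) * m)

# Tiny deterministic example: the third outcome is missing.
y  = np.array([2.0, 4.0, 0.0, 6.0])
r  = np.array([1, 1, 0, 1])
pi = np.array([0.8, 0.8, 0.5, 0.8])
m  = np.array([2.5, 3.5, 3.0, 5.5])
est = aipw_mean(y, r, pi, m)   # = 3.78125
```

Dropping the augmentation term `(1 - observed / pi) * m` recovers the plain HT (inverse probability weighted) estimator, which is consistent only when `pi` is correct.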


Subjects
Statistical Models, Humans, ROC Curve, Computer Simulation, Propensity Score, Logistic Models, Biomarkers
6.
Lifetime Data Anal ; 29(2): 342-371, 2023 04.
Article in English | MEDLINE | ID: mdl-36472759

ABSTRACT

Nested case-control sampled event time data under a highly stratified proportional hazards model, in which the number of strata increases proportional to sample size, is described and analyzed. The data can be characterized as stratified sampling from the event time risk sets and the analysis approach of Borgan et al. (Ann Stat 23:1749-1778, 1995) is adapted to accommodate both the stratification and case-control sampling from the stratified risk sets. Conditions for the consistency and asymptotic normality of the maximum partial likelihood estimator are provided and the results are used to compare the efficiency of the stratified analysis to an unstratified analysis when the baseline hazards can be semi-parametrically modeled in two special cases. Using the stratified sampling representation of the stratified analysis, methods for absolute risk estimation described by Borgan et al. (1995) for nested case-control data are used to develop methods for absolute risk estimation under the stratified model. The methods are illustrated by a year of birth stratified analysis of radon exposure and lung cancer mortality in a cohort of uranium miners from the Colorado Plateau.


Subjects
Lung Neoplasms, Humans, Proportional Hazards Models, Case-Control Studies, Cohort Studies, Sample Size
7.
Waste Manag Res ; 41(5): 1036-1045, 2023 May.
Article in English | MEDLINE | ID: mdl-36544368

ABSTRACT

Eco-efficiency assessment of municipal solid waste (MSW) suppliers is a useful tool in the transition to a circular economy. Furthermore, it provides evidence on the economic and environmental performance of municipalities that can be used for decision-making and/or the elaboration of regulatory policies. In this study, eco-efficiency scores were computed for a sample of 140 Chilean municipalities in the provision of MSW services, applying the stochastic semi-parametric envelopment of data method. This novel technique overcomes the limitations of the parametric (stochastic frontier analysis) and non-parametric (data envelopment analysis) methods previously employed to evaluate the eco-efficiency of MSW services. The average eco-efficiency of the 140 assessed municipalities was 0.332, which indicates that they could save 66.8% of their operational costs while recycling the same amount of waste. Moreover, 61.4% of the evaluated municipalities had an eco-efficiency score lower than 0.4, whereas the remaining municipalities (38.6% of the sample) exhibited an eco-efficiency ranging between 0.4 and 0.8. Hence, none of the municipalities assessed was identified as eco-efficient, which implies that all municipalities have room to reduce operational costs in the management of MSW. Population density, tourism, and location of the municipality were identified as factors influencing the eco-efficiency of the municipalities in MSW management.


Subjects
Refuse Disposal, Waste Management, Solid Waste, Refuse Disposal/methods, Cities, Costs and Cost Analysis, Recycling
8.
Biometrics ; 78(1): 128-140, 2022 03.
Article in English | MEDLINE | ID: mdl-33249556

ABSTRACT

In biomedical practices, multiple biomarkers are often combined using a prespecified classification rule with tree structure for diagnostic decisions. The classification structure and cutoff point at each node of a tree are usually chosen on an ad hoc basis, depending on decision makers' experience. There is a lack of analytical approaches that lead to optimal prediction performance, and that guide the choice of optimal cutoff points in a pre-specified classification tree. In this paper, we propose to search for and estimate the optimal decision rule through an approach of rank correlation maximization. The proposed method is flexible, theoretically sound, and computationally feasible when many biomarkers are available for classification or prediction. Using the proposed approach, for a prespecified tree-structured classification rule, we can guide the choice of optimal cutoff points at tree nodes and estimate optimal prediction performance from multiple biomarkers combined.
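The idea of choosing cutoffs by maximizing a rank correlation can be illustrated for the simplest possible rule, a single biomarker with one cutoff. This is a hedged sketch with simulated data, not the paper's tree-structured procedure or its estimator: it grid-searches the cutoff whose binary classification best rank-correlates (Kendall's tau-a) with disease status.

```python
import numpy as np

def tau_a(u, v):
    """Kendall's tau-a between two score vectors via pairwise sign products."""
    du = np.sign(u[:, None] - u[None, :])
    dv = np.sign(v[:, None] - v[None, :])
    n = len(u)
    return (du * dv).sum() / (n * (n - 1))

def best_cutoff(biomarker, outcome, grid):
    """Pick the cutoff whose binary rule best rank-correlates with outcome."""
    scores = [tau_a((biomarker > c).astype(float), outcome) for c in grid]
    return grid[int(np.argmax(scores))]

# Illustrative data: disease shifts the biomarker upward by 2 units.
rng = np.random.default_rng(7)
outcome = np.repeat([0.0, 1.0], 100)
biomarker = rng.normal(0.0, 1.0, 200) + 2.0 * outcome

cut = best_cutoff(biomarker, outcome, np.linspace(-2.0, 4.0, 61))
```

With a prespecified tree, the same objective would be evaluated over the joint grid of cutoffs at all nodes rather than a single threshold.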


Subjects
Biomarkers
9.
Stat Med ; 41(13): 2354-2374, 2022 06 15.
Article in English | MEDLINE | ID: mdl-35274335

ABSTRACT

Semi-continuous data present challenges in both model fitting and interpretation. Parametric distributions may be inappropriate for extreme long right tails of the data. Mean effects of covariates, susceptible to extreme values, may fail to capture relevant information for most of the sample. We propose a two-component semi-parametric Bayesian mixture model, with the discrete component captured by a probability mass (typically at zero) and the continuous component of the density modeled by a mixture of B-spline densities that can be flexibly fit to any data distribution. The model includes random effects of subjects to allow for application to longitudinal data. We specify prior distributions on parameters and perform model inference using a Markov chain Monte Carlo (MCMC) Gibbs-sampling algorithm programmed in R. Statistical inference can be made for multiple quantiles of the covariate effects simultaneously providing a comprehensive view. Various MCMC sampling techniques are used to facilitate convergence. We demonstrate the performance and the interpretability of the model via simulations and analyses on the National Consortium on Alcohol and Neurodevelopment in Adolescence study (NCANDA) data on alcohol binge drinking.


Subjects
Algorithms, Statistical Models, Bayes Theorem, Humans, Markov Chains, Monte Carlo Method
10.
Stat Med ; 41(14): 2665-2687, 2022 06 30.
Article in English | MEDLINE | ID: mdl-35699116

ABSTRACT

The article develops marginal models for multivariate longitudinal responses. Overall, the model consists of five regression submodels, one for the mean and four for the covariance matrix, with the latter obtained by considering various matrix decompositions. The decompositions that we employ are intuitive, easy to understand, and do not rely on any assumptions such as the presence of an ordering among the multivariate responses. The regression submodels are semi-parametric, with unknown functions represented by basis function expansions. We use spike-and-slab priors for the regression coefficients to achieve variable selection and function regularization, and to obtain parameter estimates that account for model uncertainty. An efficient Markov chain Monte Carlo algorithm for posterior sampling is developed. The simulation study presented investigates the gains that one may have when considering multivariate longitudinal analyses instead of univariate ones, and whether these gains can counteract the negative effects of missing data. We apply the methods to a highly unbalanced longitudinal dataset with four responses observed over a period of 20 years.


Subjects
Bayes Theorem, Computer Simulation, Humans, Markov Chains, Monte Carlo Method, Multivariate Analysis
11.
Bioprocess Biosyst Eng ; 45(11): 1889-1904, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36245012

ABSTRACT

Flux balance analysis (FBA) is currently the standard method to compute metabolic fluxes in genome-scale networks. Several FBA extensions employing diverse objective functions and/or constraints have been published. Here we propose a hybrid semi-parametric FBA extension that combines mechanistic-level constraints (parametric) with empirical constraints (non-parametric) in the same linear program. A CHO dataset with 27 measured exchange fluxes obtained from 21 reactor experiments served to evaluate the method. The mechanistic constraints were deduced from a reduced CHO-K1 genome-scale network with 686 metabolites, 788 reactions and 210 degrees of freedom. The non-parametric constraints were obtained by principal component analysis of the flux dataset. The two types of constraints were integrated in the same linear program showing comparable computational cost to standard FBA. The hybrid FBA is shown to significantly improve the specific growth rate prediction under different constraints scenarios. A metabolically efficient cell growth feed targeting minimal byproducts accumulation was designed by hybrid FBA. It is concluded that integrating parametric and nonparametric constraints in the same linear program may be an efficient approach to reduce the solution space and to improve the predictive power of FBA methods when critical mechanistic information is missing.
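The core of FBA, and of the hybrid extension above, is a linear program. The sketch below is a toy network, not the CHO-K1 model: steady-state stoichiometry and flux bounds play the role of the mechanistic (parametric) constraints, and a single extra inequality stands in for an empirical (non-parametric) constraint such as one derived from a PCA of measured fluxes.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: v1 takes up metabolite A, v2 converts A -> B,
# v3 drains B into biomass. Rows = metabolites (A, B), columns = reactions.
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])

# Mechanistic constraints: steady state S v = 0 and flux bounds
# (uptake v1 capped at 10, illustrative units).
bounds = [(0.0, 10.0), (0.0, None), (0.0, None)]

# Empirical constraint standing in for the data-driven (PCA) restriction:
# here simply v2 <= 8.
A_ub = np.array([[0.0, 1.0, 0.0]])
b_ub = np.array([8.0])

# Maximize the biomass flux v3; linprog minimizes, hence the sign flip.
res = linprog(c=[0.0, 0.0, -1.0], A_ub=A_ub, b_ub=b_ub,
              A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
v_opt = res.x   # steady state forces v1 = v2 = v3, so the optimum is 8
```

Because both kinds of constraints are linear, adding the empirical rows leaves the problem a standard LP, which is why the hybrid method's computational cost is comparable to plain FBA.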


Subjects
Metabolic Networks and Pathways, Biological Models, Cricetinae, Animals, Cricetulus, CHO Cells
12.
J Environ Manage ; 323: 116170, 2022 Dec 01.
Article in English | MEDLINE | ID: mdl-36115243

ABSTRACT

Taking variations in PM2.5 as indicators for assessing the performance of authorities in air quality management will probably lead to misjudgment, as PM2.5 concentration is affected not only by anthropogenic emissions but also by uncontrollable circumstances. To solve this problem, we propose a decomposition method that attributes variations in PM2.5 to the contributions of meteorological conditions, cross-regional transport of pollutants, secondary aerosols, and local emissions. This method estimates the relationship between PM2.5 concentration and the various influencing factors using a semi-parametric generalized additive model. A case study was conducted in Shenyang, a heavily polluted city in northeast China, based on up to 595,000 hourly data samples from 2014 to 2017. The decomposition results indicated that the average PM2.5 in 2017 decreased by 39.80% compared with 2014, far exceeding the government's target of 15%, but only 11.79% of the decrease was attributable to the control of local emissions. The severe pollution event that occurred in November 2015 was induced by the combination of massive emissions from heating and meteorological conditions conducive to pollutant accumulation. Furthermore, the proposed approach can be extended to any location that has monitoring data on air pollutant concentrations and meteorological conditions.


Subjects
Air Pollutants, Air Pollution, Aerosols/analysis, Air Pollutants/analysis, Air Pollution/analysis, China, Environmental Monitoring/methods, Particulate Matter/analysis, Seasons
13.
Entropy (Basel) ; 24(2)2022 Jan 25.
Article in English | MEDLINE | ID: mdl-35205474

ABSTRACT

The estimation of the average treatment effect (ATE) as a causal parameter is carried out in two steps: in the first step, the treatment and outcome are modeled to incorporate the potential confounders, and in the second step, the predictions are inserted into ATE estimators such as the augmented inverse probability weighting (AIPW) estimator. Due to concerns about non-linear or unknown relationships between the confounders and the treatment and outcome, there has been interest in applying non-parametric methods such as machine learning (ML) algorithms instead. Part of the literature proposes to use two separate neural networks (NNs) with no regularization of the networks' parameters beyond the stochastic gradient descent (SGD) used in the NNs' optimization. Our simulations indicate that the AIPW estimator suffers extensively if no regularization is utilized. We propose a normalization of AIPW (referred to as nAIPW) which can be helpful in some scenarios. nAIPW provably has the same properties as AIPW, namely double robustness and orthogonality. Further, if the first-step algorithms converge fast enough, then under regularity conditions nAIPW will be asymptotically normal. We also compare the performance of AIPW and nAIPW in terms of bias and variance when small to moderate L1 regularization is imposed on the NNs.
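The effect of normalizing the AIPW weights can be seen on a tiny deterministic example. This sketch is not the paper's nAIPW estimator with NN-based nuisance models; it is a hedged illustration in which the normalization is a Hajek-style rescaling of the inverse-probability weights, and all data values are made up, with one deliberately extreme propensity to show why normalization matters.

```python
import numpy as np

def aipw_ate(y, a, ps, m1, m0, normalize=False):
    """AIPW estimate of E[Y(1)] - E[Y(0)].

    a     : binary treatment indicator
    ps    : estimated propensity P(A=1 | X)
    m1/m0 : outcome-model predictions under treatment/control
    With normalize=True the inverse-probability weights are rescaled to
    average one (a Hajek-style normalization, in the spirit of nAIPW).
    """
    w1, w0 = a / ps, (1 - a) / (1 - ps)
    if normalize:
        w1, w0 = w1 / w1.mean(), w0 / w0.mean()
    mu1 = np.mean(w1 * (y - m1) + m1)
    mu0 = np.mean(w0 * (y - m0) + m0)
    return mu1 - mu0

# Deterministic toy data; the last unit has an extreme propensity,
# hence an inverse-probability weight of 20.
y  = np.array([3.0, 5.0, 1.0, 2.0, 4.0])
a  = np.array([1,   1,   0,   0,   1  ])
ps = np.array([0.5, 0.9, 0.4, 0.3, 0.05])
m1 = np.array([3.0, 4.5, 2.0, 2.5, 3.5])
m0 = np.array([1.5, 2.0, 1.0, 1.8, 2.2])

plain  = aipw_ate(y, a, ps, m1, m0)
normed = aipw_ate(y, a, ps, m1, m0, normalize=True)
```

The large weight dominates the unnormalized estimate; rescaling the weights tempers its influence, at the cost of a small finite-sample bias.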

14.
Environ Dev Sustain ; 24(7): 9102-9117, 2022.
Article in English | MEDLINE | ID: mdl-34539229

ABSTRACT

Since coronavirus disease 2019 (COVID-19) was first reported in December 2019 in Wuhan, it spread rapidly to the rest of China, later becoming a global public health problem that shook global stock markets violently. We examine the causal relationships between economic development (ED) and environmental quality (EQ) in China over the period from January 2019 to May 2020, allowing for a structural break, and investigate the causal linkages between ED and EQ in the subgroups before and after the outbreak of COVID-19 using a semi-parametric model. The empirical tests show that smooth structural transitions matter for the causal linkages between ED and EQ, especially after the COVID-19 outbreak. While the Toda-Yamamoto causality analysis supports unidirectional causality between ED and EQ before the outbreak of COVID-19, the analysis under structural shifts supports bidirectional causal linkages after the outbreak. Our results further clarify that economic activity gives rise to environmental pollution and energy utilization in China mainly via the shock of COVID-19. The nonlinear causality between economic development and environmental quality may present an opportunity for China's economic recovery once the factor of COVID-19 infection is taken into account.

15.
Biotechnol Bioeng ; 118(11): 4389-4401, 2021 11.
Article in English | MEDLINE | ID: mdl-34383309

ABSTRACT

To date, a large number of experiments are performed to develop a biochemical process, and the generated data are used only once, to take decisions during development. If we could exploit data from already developed processes to make predictions for a novel process, we could significantly reduce the number of experiments needed. Processes for different products exhibit differences in behaviour, and typically only a subset behave similarly. Effective learning on process data spanning multiple products therefore requires a sensible representation of product identity. We propose to represent the product identity (a categorical feature) by embedding vectors that serve as input to a Gaussian process regression model. We demonstrate how the embedding vectors can be learned from process data and show that they capture an interpretable notion of product similarity. The improvement in performance is compared with traditional one-hot encoding on a simulated cross-product learning task. All in all, the proposed method could enable significant reductions in wet-lab experiments.
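The mechanism by which embeddings transfer information between products can be sketched with a bare-bones GP regression in numpy. This is a hedged illustration, not the paper's model: the embedding values, product names, response numbers, and kernel lengthscale are all invented, and the embeddings are treated as already learned rather than fitted.

```python
import numpy as np

# Hypothetical learned 2-D embeddings for three products; in practice these
# would be optimized jointly with the GP on historical process data.
emb = {"prodA": np.array([0.10, 0.90]),
       "prodB": np.array([0.15, 0.85]),   # close to prodA => "similar product"
       "prodC": np.array([0.90, 0.10])}   # dissimilar

def phi(product, time):
    """GP input: product-embedding coordinates plus a process variable."""
    return np.concatenate([emb[product], [time]])

def rbf(X1, X2, ls=0.5):
    """Squared-exponential kernel over the joint (embedding, time) space."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

# Training runs for two products (titre-like responses, illustrative numbers).
X = np.array([phi(p, t) for p in ("prodA", "prodC") for t in (0.0, 0.5, 1.0)])
y = np.array([0.2, 0.6, 1.0,   0.2, 0.3, 0.4])

K = rbf(X, X) + 1e-6 * np.eye(len(X))   # small jitter for numerical stability
alpha = np.linalg.solve(K, y)

# Predict the unseen prodB: its embedding sits next to prodA's, so the GP
# posterior mean tracks prodA's trajectory rather than prodC's.
Xs = np.array([phi("prodB", t) for t in (0.0, 0.5, 1.0)])
mu = rbf(Xs, X) @ alpha
```

With one-hot encoding all products are equidistant, so no such transfer between prodA and prodB would occur; the embedding geometry is what encodes "similar product".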


Subjects
Biological Models, Animals, Cell Line, Humans
16.
BMC Med Res Methodol ; 21(1): 56, 2021 03 20.
Article in English | MEDLINE | ID: mdl-33743583

ABSTRACT

BACKGROUND: Estimation that employs instrumental variables (IV) can reduce or eliminate bias due to confounding. In observational studies, instruments result from natural experiments, such as the effect of clinician preference or geographic distance on treatment selection. In randomized studies the randomization indicator is typically a valid instrument, especially if the study is blinded (e.g., no placebo effect). Estimation via instruments is a highly developed field for linear models, but the use of instruments in time-to-event analysis is far from established. Various IV-based estimators of the hazard ratio (HR) from Cox's regression models have been proposed. METHODS: We extend IV-based estimation of Cox's model beyond proportionality of hazards, and address estimation of a log-linear time-dependent hazard ratio and a piecewise constant HR. We estimate the marginal time-dependent hazard ratio, unlike other approaches that estimate the hazard ratio conditional on the omitted covariates. We use estimating equations motivated by martingale representations that resemble the partial likelihood score statistic. We conducted simulations that include the use of copulas to generate potential times-to-event that have a given marginal structural time-dependent hazard ratio but are dependent on omitted covariates. We compare our approach with the partial likelihood estimator and two other IV-based approaches, and apply it to estimation of the time-dependent hazard ratio for two vascular interventions. RESULTS: The method performs well in simulations of a stepwise time-dependent hazard ratio, but shows some bias that increases as the hazard ratio moves away from unity (the value that typically underlies the null hypothesis). It compares well with other approaches when the hazard ratio is stepwise constant, and it also performs well for estimation of a log-linear hazard ratio, where no other instrumental variable approaches exist.
CONCLUSION: The estimating equations we propose for estimating a time-dependent hazard ratio using an IV perform well in simulations. We encourage the use of our procedure for time-dependent hazard ratio estimation when unmeasured confounding is a concern and a suitable instrumental variable exists.


Subjects
Epidemiologic Confounding Factors, Bias, Computer Simulation, Humans, Linear Models, Proportional Hazards Models
17.
J Biomed Inform ; 120: 103854, 2021 08.
Article in English | MEDLINE | ID: mdl-34237438

ABSTRACT

In recent years, the comprehensive study of complex diseases with multi-view datasets (e.g., multi-omics and imaging scans) has been at the forefront of biomedical research. State-of-the-art biomedical technologies enable us to collect multi-view biomedical datasets for the study of complex diseases. While all views of the data tend to explore complementary information about a disease, the analysis of multi-view data with complex interactions is challenging for a deeper and more holistic understanding of biological systems. In this paper, we propose GKMAHCE, a novel generalized kernel machine approach to identify higher-order composite effects in multi-view biomedical datasets. This generalized semi-parametric approach (a mixed-effect linear model) includes the marginal and joint Hadamard products of features from different views of the data. The proposed kernel machine approach considers multi-view data as predictor variables to allow a more thorough and comprehensive modeling of a complex trait. We applied the GKMAHCE approach to both synthesized datasets and real multi-view datasets from an adolescent brain development study and an osteoporosis study. Our experiments demonstrate that the proposed method can effectively identify higher-order composite effects and suggest that the corresponding features (genes, regions of interest, and chemical taxonomies) function in a concerted effort. We show that the proposed method is more generalizable than existing ones. To promote reproducible research, the source code of the proposed method is available at.
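The "marginal and joint Hadamard product" construction can be illustrated in a few numpy lines. This is a hedged sketch, not the GKMAHCE mixed-model machinery: it only shows how elementwise products of features from two views produce composite interaction terms, assembled here into a simple linear kernel, and it assumes (purely for illustration) that the two views have the same number of features so the elementwise product is defined.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
genes   = rng.normal(size=(n, 4))   # view 1: e.g. expression features
imaging = rng.normal(size=(n, 4))   # view 2: e.g. ROI features (same width
                                    # here purely so the product is defined)

# Marginal effects enter through each view's own features; the higher-order
# composite effect enters through their Hadamard (elementwise) product.
joint = genes * imaging

Z = np.hstack([genes, imaging, joint])
K = Z @ Z.T   # linear kernel over marginal + composite terms (symmetric, PSD)
```

In the actual approach, kernels like `K` enter a semi-parametric mixed-effect model, and a variance-component test decides whether the composite term contributes beyond the marginal ones.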


Subjects
Algorithms, Osteoporosis, Adolescent, Brain/diagnostic imaging, Humans, Linear Models, Osteoporosis/diagnostic imaging, Software
18.
BMC Geriatr ; 21(1): 142, 2021 02 26.
Article in English | MEDLINE | ID: mdl-33637045

ABSTRACT

BACKGROUND: Independence is related to the aging process. Loss of independence is defined as the inability to make decisions and participate in activities of daily living (ADLs). Independence is related to physical, psychological, biological, and socioeconomic factors. An enhanced understanding of older people's independence trajectories and the associated risk factors would enable the development of early intervention strategies. METHODS: Independence trajectory analysis was performed on patients identified in the Unité de Prévention de Suivi et d'Analyse du Vieillissement (UPSAV) database. The UPSAV cohort is a prospective observational study. Participants were 221 community-dwelling persons aged ≥75 years, followed for 24 months between July 2011 and November 2013, who benefited from a prevention strategy. Data were collected prospectively using a questionnaire. Independence was assessed using the Functional Autonomy Measurement System (Système de Mesure de l'Autonomie Fonctionnelle, SMAF). Group-based trajectory modeling (GBTM) was performed to identify independence trajectories, and the results were compared with those of k-means and hierarchical ascending classifications. A multinomial logistic regression was performed to identify predictive factors of the independence trajectory. RESULTS: Three distinct independence trajectories were identified: a "stable functional autonomy" (SFA) trajectory (53% of patients), a "stable then declining functional autonomy" (SDFA) trajectory (33% of patients), and a "constantly declining functional autonomy" (CFAD) trajectory (14% of patients). Not being a member of an association and a previous fall were significantly associated with an SDFA trajectory (P < 0.01). Absence of financial and human assistance, no hobbies, and cognitive disorder were significantly associated with a CFAD trajectory (P < 0.01). Previous occupation and multiple pathologies were predictive factors of both declining trajectories (SDFA and CFAD).
CONCLUSIONS: Community-living older persons exhibit distinct independence trajectories with distinct predictive factors. The evidence from this study suggests that prevention and screening for loss of independence in older adults should be anticipated in order to maintain autonomy.


Subjects
Activities of Daily Living, Cognition Disorders, Aged, Aged, 80 and over, Cohort Studies, Humans, Independent Living, Prospective Studies
19.
Pharm Stat ; 20(6): 1061-1073, 2021 11.
Article in English | MEDLINE | ID: mdl-33855778

ABSTRACT

Before biomarkers can be used in clinical trials or patient management, the laboratory assays that measure their levels have to go through development and analytical validation. One of the most critical performance metrics for the validation of any assay is the minimum value that can be detected; any value below this limit is said to be below the limit of detection (LOD). Most existing approaches that model such LOD-restricted biomarkers are parametric in nature. These parametric models, however, depend heavily on distributional assumptions and can lose precision under model or distributional misspecification. Using an example from a prostate cancer clinical trial, we show how a critical relationship between a serum androgen biomarker and a prognostic factor for overall survival is completely missed by the widely used parametric Tobit model. Motivated by this example, we implement a semiparametric approach, through a pseudo-value technique, that effectively captures the important relationship between the LOD-restricted serum androgen and the prognostic factor. Our simulations show that the pseudo-value based semiparametric model outperforms a commonly used parametric model for modeling below-LOD biomarkers, with lower mean squared errors of estimation.


Subjects
Statistical Models, Biomarkers, Computer Simulation, Humans, Limit of Detection, Male
20.
BMC Med Res Methodol ; 20(1): 69, 2020 03 20.
Article in English | MEDLINE | ID: mdl-32192445

ABSTRACT

BACKGROUND: With the growth in the use of biotherapeutic drugs in various medical fields, the occurrence of anti-drug antibodies nowadays represents a serious issue. This immune response against a drug can be due either to pre-existing antibodies or to the novel production of antibodies from B-cell clones in a fraction of the exposed subjects. Identifying genetic markers associated with the immunogenicity of biotherapeutic drugs may provide new opportunities for risk stratification before the introduction of the drug. However, real-world investigations should take into account that the population under study is a mixture of pre-immune, immune-reactive, and immune-tolerant subjects. METHOD: In this work, we propose a novel test for assessing the effect of genetic markers on drug immunogenicity that takes into account that the population under study is a mixed one. The test statistic is derived from a novel two-part semiparametric improper survival model that relies on immunological mechanistic considerations. RESULTS: Simulation results show the good behavior of the proposed statistic compared with a two-part logrank test. In a study on drug immunogenicity, our results highlighted findings that would have been discarded with classical tests. CONCLUSION: We propose a novel test that can be used for analyzing drug immunogenicity and is easy to implement with standard software. This test is also applicable in situations where one wants to test the equality of improper survival distributions of semi-continuous outcomes between two or more independent groups.


Subjects
Antibodies, Pharmaceutical Preparations, Computer Simulation, Genetic Markers, Humans