Results 1 - 20 of 765
1.
Article in English | MEDLINE | ID: mdl-39154319

ABSTRACT

Visual predictive checks (VPCs) are commonly used to evaluate pharmacometric models. However, their performance may be hampered if patients with worse outcomes drop out earlier, as often occurs in clinical trials, especially in oncology. While methods accounting for dropout have appeared in the literature, they vary in assumptions, flexibility, and performance, and the differences between them are not widely understood. This manuscript aims to elucidate which methods can be used to handle VPCs with dropout, and when, along with a more informative VPC approach using confidence intervals. Additionally, we propose constructing the confidence interval from the observed data instead of the simulated data. The theoretical framework for incorporating dropout in VPCs is developed and used to propose two approaches: full and conditional. The full approach is implemented using a parametric time-to-event model, while the conditional approach is implemented using both parametric and Cox proportional-hazards (CPH) models. The practical performance of these approaches is illustrated with an application to tumor growth dynamics (TGD) modeling of data from two cancer clinical trials of nivolumab and docetaxel, in which patients were followed until disease progression. The dataset consisted of 3504 tumor size measurements from 855 subjects, described by a TGD model; subject dropout was described by a Weibull or CPH model. Simulated datasets were also used to further illustrate the properties of the VPC methods. The results showed that the more familiar full approach might not provide meaningful improvement for TGD model evaluation over the naive approach of not adjusting for dropout, and could be outperformed by the conditional approach using either the Weibull or the CPH model.
Overall, including confidence intervals in VPCs should improve interpretation; the conditional approach was shown to be more generally applicable when dropout occurs, and the nonparametric approach could provide additional robustness.
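The abstract's core proposal — pairing a model-based prediction band with a confidence interval computed from the observed data — can be sketched in a toy, single-visit form. This is not the article's implementation (which works on longitudinal tumor-size data with dropout models); all distributions and sizes below are illustrative assumptions.

```python
import random
import statistics

random.seed(1)

# Toy VPC-style check at one visit: observed data plus 500 replicate datasets
# simulated from a hypothetical fitted model with the same distribution.
observed = [random.gauss(50, 10) for _ in range(200)]
simulated = [[random.gauss(50, 10) for _ in range(200)] for _ in range(500)]

# Model-based ~95% prediction band for the median, from simulated replicates.
sim_medians = sorted(statistics.median(rep) for rep in simulated)
pred_lo, pred_hi = sim_medians[12], sim_medians[487]

# Bootstrap ~95% confidence interval around the *observed* median, in the
# spirit of the article's proposal to build the interval from observed data.
boot_medians = sorted(
    statistics.median(random.choices(observed, k=len(observed)))
    for _ in range(500)
)
obs_lo, obs_hi = boot_medians[12], boot_medians[487]

# The diagnostic: does the observed-median interval overlap the model band?
overlap = obs_lo <= pred_hi and pred_lo <= obs_hi
```

In a real VPC this comparison would be repeated per time bin and per percentile, with dropout simulated jointly with the longitudinal outcome.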

2.
medRxiv ; 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39185537

ABSTRACT

Recent advances in sequencing technologies have allowed the collection of massive genome-wide information that substantially enhances the diagnosis and prognosis of head and neck cancer. Identifying predictive markers for survival time is crucial for devising prognostic systems and for learning the underlying molecular drivers of the cancer course. In this paper, we introduce α-KIDS, a model-free feature screening procedure with false discovery rate (FDR) control for ultrahigh-dimensional right-censored data, which is robust against unknown censoring mechanisms. Specifically, our two-stage procedure initially selects a set of important features with a dual screening mechanism using nonparametric reproducing-kernel-based ANOVA statistics, followed by identifying a refined set of features under directional FDR control through a unified knockoff procedure. The finite-sample properties of our method, and its novelty in light of existing alternatives, are evaluated via simulation studies. Furthermore, we illustrate our methodology via application to a motivating right-censored head and neck (HN) cancer survival dataset derived from The Cancer Genome Atlas, with further validation on a similar HN cancer dataset from the Gene Expression Omnibus database. The methodology can be implemented via the R package DSFDRC, available on GitHub.

3.
Am J Transplant ; 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39111667

ABSTRACT

Graft failure and recipient death with functioning graft are important competing outcomes after kidney transplantation. Risk prediction models typically censor for the competing outcome, thereby overestimating the cumulative incidence. The magnitude of this overestimation is not well described in real-world transplant data. This retrospective cohort study analyzed data from the European Collaborative Transplant Study (CTS; n = 125 250) and from the American Scientific Registry of Transplant Recipients (SRTR; n = 190 258). Separate cause-specific hazard models, using donor and recipient age as continuous predictors, were developed for graft failure and recipient death. The hazard of graft failure increased quadratically with increasing donor age and decreased in a decaying fashion with increasing recipient age. The hazard of recipient death increased linearly with increasing donor and recipient age. The overestimation of cumulative incidence due to censoring for the competing risk was largest in high-risk populations for both outcomes (old donors/recipients), amounting to as much as 8.4 and 18.8 percentage points for graft failure and recipient death, respectively. In our illustrative model for post-transplant risk prediction, the absolute risk of graft failure and death is overestimated when censoring for the competing event, mainly in older donors and recipients. Prediction models for absolute risks should treat graft failure and death as competing events.
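The overestimation described here is a general property of censoring competing events, and it can be reproduced on fully synthetic data: with two competing events, "1 − Kaplan-Meier" (censoring the competing event) exceeds the Aalen-Johansen cumulative incidence. This toy sketch uses hypothetical rates, not the study's registry models.

```python
import random

random.seed(2)

# Two competing events at equal rates plus light administrative censoring.
n = 2000
data = []  # (time, cause): 1 = graft failure, 2 = death, 0 = censored
for _ in range(n):
    t_fail = random.expovariate(0.05)
    t_death = random.expovariate(0.05)
    t_cens = random.expovariate(0.01)
    t = min(t_fail, t_death, t_cens)
    cause = 0 if t == t_cens else (1 if t == t_fail else 2)
    data.append((t, cause))

def one_minus_km(data, cause):
    """1 - KM, treating the competing event as censored (the biased approach)."""
    surv, n_risk = 1.0, len(data)
    for _, c in sorted(data):
        if c == cause:
            surv *= 1 - 1 / n_risk
        n_risk -= 1
    return 1 - surv

def cum_incidence(data, cause):
    """Aalen-Johansen cumulative incidence function for one cause."""
    surv, inc, n_risk = 1.0, 0.0, len(data)
    for _, c in sorted(data):
        if c == cause:
            inc += surv / n_risk       # overall survival x cause-specific hazard
        if c != 0:
            surv *= 1 - 1 / n_risk     # any event depletes overall survival
        n_risk -= 1
    return inc

biased = one_minus_km(data, cause=1)   # overestimates the risk
proper = cum_incidence(data, cause=1)  # true cumulative incidence is ~0.5
```

With equal competing rates the true lifetime cumulative incidence of each cause is 0.5, while the censoring-based estimate drifts toward the marginal distribution of the latent failure time.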

4.
Clin Trials ; : 17407745241265628, 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39115164

ABSTRACT

Composite endpoints defined as the time to the earliest of two or more events are often used as primary endpoints in clinical trials. Component-wise censoring arises when different components of the composite endpoint are censored differently. We focus on a composite of death and a non-fatal event where death time is right censored and the non-fatal event time is interval censored because the event can only be detected during study visits. Such data are most often analysed using methods for right censored data, treating the time the non-fatal event was first detected as the time it occurred. This can lead to bias, particularly when the time between assessments is long. We describe several approaches for estimating the event-free survival curve and the effect of treatment on event-free survival via the hazard ratio that are specifically designed to handle component-wise censoring. We apply the methods to a randomized study of breastfeeding versus formula feeding for infants of mothers infected with human immunodeficiency virus.
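The bias described above — treating the first detection of an interval-censored event as its occurrence time — is easy to demonstrate on synthetic data. This sketch is a deliberately simplified illustration (no right censoring, hypothetical event rate and visit schedule), not the article's estimators.

```python
import math
import random

random.seed(3)

# The non-fatal event is only detected at the first scheduled visit after it
# occurs, so naive analysis shifts every event time to the next visit.
visit_gap = 1.0   # assessments every 1.0 time units
n = 5000

true_times, naive_times = [], []
for _ in range(n):
    t_event = random.expovariate(1.0)                 # true event time
    detected = math.ceil(t_event / visit_gap) * visit_gap
    true_times.append(t_event)
    naive_times.append(detected)

def frac_by(times, t):
    """Empirical fraction of subjects with an event by time t."""
    return sum(x <= t for x in times) / len(times)

t0 = 0.5                               # a time point before the first visit
true_frac = frac_by(true_times, t0)    # ~ 1 - e^{-0.5} ~ 0.39
naive_frac = frac_by(naive_times, t0)  # 0: nothing is detected before visit 1
```

The longer the gap between assessments, the larger the distortion — which is exactly the setting the abstract flags as most problematic.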

5.
Clin Lung Cancer ; 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-39097467

ABSTRACT

OBJECTIVES: CheckMate 227 (NCT02477826) evaluated first-line nivolumab-plus-ipilimumab versus chemotherapy in patients with metastatic non-small cell lung cancer (NSCLC) with programmed death ligand 1 (PD-L1) expression ≥ 1% or < 1% and no EGFR/ALK alterations. However, many patients randomized to chemotherapy received subsequent immunotherapy. Here, overall survival (OS) and the relative OS benefit of nivolumab-plus-ipilimumab were adjusted for potential bias introduced by treatment switching. MATERIALS AND METHODS: Treatment-switching adjustment analyses were conducted following the NICE Decision Support Unit Technical Support Document 16, for CheckMate 227 Part 1 OS data from treated patients (database lock, July 2, 2019). Inverse probability of censoring weighting (IPCW) was used in the base-case analysis; other methods were explored as sensitivity analyses. RESULTS: Of 1166 randomized patients, 391 (PD-L1 ≥ 1%) and 185 (PD-L1 < 1%) patients received nivolumab-plus-ipilimumab; 387 (PD-L1 ≥ 1%) and 183 (PD-L1 < 1%) patients received chemotherapy, with 29.3-month minimum follow-up. Among chemotherapy-treated patients, 169/387 (43.7%; PD-L1 ≥ 1%) and 66/183 (36.1%; PD-L1 < 1%) switched to immunotherapy poststudy. Among treated patients, median OS was 17.4 months with nivolumab-plus-ipilimumab versus 14.9 months with chemotherapy (hazard ratio [HR], 0.80; 95% confidence interval [CI], 0.68-0.95) in the PD-L1 ≥ 1% subgroup and 17.1 versus 12.4 months (HR, 0.62; 95% CI, 0.49-0.80) in the PD-L1 < 1% subgroup. After treatment-switching adjustment using IPCW, the HR (95% CI) for OS for nivolumab-plus-ipilimumab versus chemotherapy was reduced to 0.68 (0.56-0.83; PD-L1 ≥ 1%) and 0.53 (0.40-0.69; PD-L1 < 1%). Sensitivity analyses supported the robustness of the results. CONCLUSION: Treatment-switching adjustments resulted in a greater estimated relative OS benefit with first-line nivolumab-plus-ipilimumab versus chemotherapy in patients with metastatic NSCLC.
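The IPCW idea used in the base-case analysis can be illustrated in miniature: switchers are censored, and the remaining patients are re-weighted by the inverse probability of not switching, given covariates that drive switching. Everything below is a hypothetical toy (one binary covariate, made-up switching rates) — not the CheckMate 227 data or model.

```python
import random

random.seed(4)

# Switching depends on prognosis, so simply dropping switchers distorts the
# covariate mix of the remaining (artificially censored) chemotherapy arm.
n = 4000
rows = []
for _ in range(n):
    poor_prognosis = random.random() < 0.5
    p_switch = 0.6 if poor_prognosis else 0.2   # hypothetical switching rates
    switched = random.random() < p_switch
    rows.append((poor_prognosis, switched))

def p_stay(flag):
    """Estimated probability of NOT switching, within a covariate stratum."""
    grp = [s for f, s in rows if f == flag]
    return 1 - sum(grp) / len(grp)

# Non-switchers, each carrying an inverse-probability-of-censoring weight.
stayers = [(f, 1 / p_stay(f)) for f, s in rows if not s]

# Share of poor-prognosis patients: biased downward among crude non-switchers,
# restored to ~50% (the randomized mix) after weighting.
crude_share = sum(f for f, s in rows if not s) / len(stayers)
ipcw_share = sum(w for f, w in stayers if f) / sum(w for _, w in stayers)
```

In a real analysis the weights are time-varying and enter a weighted survival model; the principle — re-creating the population that would have been observed without switching — is the same.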

6.
Ann Work Expo Health ; 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39141417

ABSTRACT

BACKGROUND: In studies of occupational health, longitudinal environmental exposure and biomonitoring data are often right-skewed and left-censored, with measurements falling below the limit of detection (LOD). To address right-skewed data, it is common practice to log-transform the data and model the geometric mean, assuming a log-normal distribution. However, if the transformed data do not follow a known distribution, modeling the mean of exposure may result in bias and reduced efficiency. In addition, when examining longitudinal data, certain covariates may vary over time. OBJECTIVE: To develop predictive quantile regression models that address left censoring and time-dependent covariates, and to quantitatively evaluate whether previous and current covariates can predict current and/or future exposure levels. METHODS: To address these gaps, we incorporated different substitution approaches into quantile regression and utilized a method for selecting a working type of time dependency for covariates. RESULTS: In a simulation study, we demonstrated that, under different types of time-dependent covariates, multiple random value imputation outperformed the other approaches. We also applied our methods to a carbon nanotube and nanofiber exposure study. The dependent variables are the left-censored mass of elemental carbon at both the respirable and inhalable aerosol size fractions. In this study, we identified some potential time-dependent covariates with respect to worker-level determinants and job tasks. CONCLUSION: Time dependency of covariates is rarely accounted for when analyzing longitudinal environmental exposure and biomonitoring data with values below the LOD through predictive modeling. Treating time-dependent covariates as time-independent leads to a loss of efficiency in regression parameter estimation.
Therefore, we addressed time-varying covariates in longitudinal exposure and biomonitoring data with left-censored measurements and illustrated an entire conditional distribution through different quantiles.
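The substitution approaches compared in the abstract can be sketched on synthetic left-censored data. A useful property this makes visible: quantiles above the censored fraction are identical under any fill-in rule (all fills sit below the LOD), while quantiles below it are driven entirely by that rule. The log-normal sample and LOD below are illustrative assumptions, and the uniform draw is only a crude stand-in for the article's multiple random value imputation.

```python
import math
import random

random.seed(5)

# Toy left-censored exposure sample: values below the LOD are unobserved.
truth = [math.exp(random.gauss(0.0, 1.0)) for _ in range(5000)]
lod = 0.5
observed = [x if x >= lod else None for x in truth]
n_censored = sum(v is None for v in observed)

def substitute(vals, fill):
    return [fill() if v is None else v for v in vals]

half = substitute(observed, lambda: lod / 2)            # LOD/2 substitution
root2 = substitute(observed, lambda: lod / math.sqrt(2))  # LOD/sqrt(2)
rand_imp = substitute(observed, lambda: random.uniform(0, lod))  # random fill

def quantile(vals, p):
    s = sorted(vals)
    return s[int(p * len(s))]

med_true = quantile(truth, 0.5)
med_half = quantile(half, 0.5)   # unaffected by the fill rule
q10_half = quantile(half, 0.1)   # determined entirely by the fill rule
```

About 24% of this sample falls below the LOD, so the median survives substitution unchanged while the 10th percentile collapses to the substituted constant.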

7.
Front Epidemiol ; 4: 1386922, 2024.
Article in English | MEDLINE | ID: mdl-39188581

ABSTRACT

Survival analysis (also referred to as time-to-event analysis) is the study of the time elapsed from a starting date to some event of interest. In practice, these analyses can be challenging and, if methodological errors are to be avoided, require the application of appropriate techniques. By using simulations and real-life data based on the French national registry of patients with primary immunodeficiencies (CEREDIH), we sought to highlight the basic elements that need to be handled correctly when performing the initial steps in a survival analysis. We focused on non-parametric methods to deal with right censoring, left truncation, competing risks, and recurrent events. Our simulations show that ignoring these aspects induces a bias in the results; we then explain how to analyze the data correctly in these situations using non-parametric methods. Rare disease registries are extremely valuable in medical research. We discuss the application of appropriate methods for the analysis of time-to-event from the CEREDIH registry. The objective of this tutorial article is to provide clinicians and healthcare professionals with better knowledge of the issues facing them when analyzing time-to-event data.
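Among the issues this tutorial covers, left truncation is the one most often mishandled in naive implementations: a subject with delayed entry must join the risk set only after its entry time. A minimal sketch (toy numbers, not the CEREDIH registry data):

```python
# Kaplan-Meier estimator handling right censoring and left truncation
# (delayed entry): a subject is at risk at time t only if entry < t <= exit.
def km_left_truncated(entries, times, events, t_eval):
    """Survival probability at t_eval; events[i] is True for an event,
    False for a censored exit."""
    event_times = sorted({t for t, e in zip(times, events) if e and t <= t_eval})
    surv = 1.0
    for t in event_times:
        at_risk = sum(1 for en, ti in zip(entries, times) if en < t <= ti)
        d = sum(1 for ti, e in zip(times, events) if e and ti == t)
        if at_risk > 0:
            surv *= 1 - d / at_risk
    return surv

# Five subjects as (entry, exit, event?); the third enters late, at time 2,
# so it is excluded from risk sets before then.
entries = [0, 0, 2, 1, 0]
times   = [5, 3, 6, 4, 2]
events  = [True, False, True, True, False]
s4 = km_left_truncated(entries, times, events, 4)
```

At the event time t = 4 the risk set holds three subjects (the one censored at 3 and the one censored at 2 have already exited), giving a survival estimate of 2/3 at t = 4; ignoring the entry times would inflate the risk sets and bias the curve.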

8.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39177025

ABSTRACT

Interval-censored failure time data frequently arise in various scientific studies where each subject undergoes periodic examinations for the occurrence of the failure event of interest, and the failure time is only known to lie in a specific time interval. In addition, collected data may include multiple observed variables with a certain degree of correlation, leading to severe multicollinearity issues. This work proposes a factor-augmented transformation model to analyze interval-censored failure time data while reducing model dimensionality and avoiding the multicollinearity elicited by multiple correlated covariates. We provide a joint modeling framework that combines a factor analysis model, which groups multiple observed variables into a few latent factors, with a class of semiparametric transformation models that use the augmented factors to examine their effects, and those of other covariates, on the failure event. Furthermore, we propose a nonparametric maximum likelihood estimation approach and develop a computationally stable and reliable expectation-maximization algorithm for its implementation. We establish the asymptotic properties of the proposed estimators and conduct simulation studies to assess the empirical performance of the proposed method. An application to the Alzheimer's Disease Neuroimaging Initiative (ADNI) study is provided. An R package ICTransCFA is also available for practitioners. Data used in preparation of this article were obtained from the ADNI database.


Subject(s)
Alzheimer Disease , Computer Simulation , Statistical Models , Humans , Likelihood Functions , Algorithms , Neuroimaging , Statistical Factor Analysis , Statistical Data Interpretation , Time Factors
9.
J Comput Graph Stat ; 33(3): 1098-1108, 2024.
Article in English | MEDLINE | ID: mdl-39175935

ABSTRACT

The conditional survival function of a time-to-event outcome subject to censoring and truncation is a common target of estimation in survival analysis. This parameter may be of scientific interest and also often appears as a nuisance in nonparametric and semiparametric problems. In addition to classical parametric and semiparametric methods (e.g., based on the Cox proportional hazards model), flexible machine learning approaches have been developed to estimate the conditional survival function. However, many of these methods are either implicitly or explicitly targeted toward risk stratification rather than overall survival function estimation. Others apply only to discrete-time settings or require inverse probability of censoring weights, which can be as difficult to estimate as the outcome survival function itself. Here, we employ a decomposition of the conditional survival function in terms of observable regression models in which censoring and truncation play no role. This allows application of an array of flexible regression and classification methods rather than only approaches that explicitly handle the complexities inherent to survival data. We outline estimation procedures based on this decomposition, empirically assess their performance, and demonstrate their use on data from an HIV vaccine trial. Supplementary materials for this article are available online.

10.
Clin Trials ; : 17407745241268054, 2024 Aug 24.
Article in English | MEDLINE | ID: mdl-39180288

ABSTRACT

Clinical trials with random assignment of treatment provide evidence about causal effects of an experimental treatment compared to standard care. However, when disease processes involve multiple types of possibly semi-competing events, specification of target estimands and causal inference can be challenging. Intercurrent events such as study withdrawal, the introduction of rescue medication, and death further complicate matters. There has been much discussion of these issues in recent years, but guidance remains ambiguous. Some recommended approaches are formulated in terms of hypothetical settings that have little bearing on the real world. We discuss issues in formulating estimands, beginning with intercurrent events in the context of a linear model and then moving on to more complex disease history processes amenable to multistate modeling. We elucidate the meaning of estimands implicit in some recommended approaches for dealing with intercurrent events and highlight the disconnect between estimands formulated in terms of potential outcomes and the real world.

11.
Lifetime Data Anal ; 2024 Aug 24.
Article in English | MEDLINE | ID: mdl-39180601

ABSTRACT

This paper discusses regression analysis of current status data with dependent censoring, a problem that often occurs in many areas such as cross-sectional studies, epidemiological investigations and tumorigenicity experiments. Copula model-based methods are commonly employed to tackle this issue. However, these methods often face challenges in terms of model and parameter identification. The primary aim of this paper is to propose a copula-based analysis for dependent current status data, where the association parameter is left unspecified. Our method is based on a general class of semiparametric linear transformation models and parametric copulas. We demonstrate that the proposed semiparametric model is identifiable under certain regularity conditions from the distribution of the observed data. For inference, we develop a sieve maximum likelihood estimation method, using Bernstein polynomials to approximate the nonparametric functions involved. The asymptotic consistency and normality of the proposed estimators are established. Finally, to demonstrate the effectiveness and practical applicability of our method, we conduct an extensive simulation study and apply the proposed method to a real data example.

12.
Pract Radiat Oncol ; 2024 Aug 24.
Article in English | MEDLINE | ID: mdl-39187011

ABSTRACT

In oncology, "survival curves" frequently appear in journal articles and meeting presentations. The most common labels on survival curves are: Overall Survival, Relapse-Free Survival, Progression-Free Survival, Distant Metastasis-Free Survival, and Local and/or Regional Control. Unfortunately, the definition of an event often differs between authors for the same named survival analysis. Furthermore, the quality of a survival curve can be greatly affected by the methodology used for endpoint selection. This paper briefly explains widely used names and event endpoints for survival analyses in a way that will help radiation oncologists consistently present and interpret experimental findings that influence clinical practice decisions.

13.
Stat Med ; 43(20): 3943-3957, 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-38951953

ABSTRACT

Latent classification models are a class of statistical methods for identifying unobserved class membership among study samples using observed data. In this study, we proposed a latent classification model that takes a censored longitudinal binary outcome variable and uses its changing pattern over time to predict individuals' latent class membership. Assuming the time-dependent outcome variables follow a continuous-time Markov chain, the proposed method has two primary goals: (1) estimate the distribution of the latent classes and predict individuals' class membership, and (2) estimate the class-specific transition rates and rate ratios. To assess the model's performance, we conducted a simulation study and verified that our algorithm produces accurate model estimates (ie, small bias) with reasonable confidence intervals (ie, achieving approximately 95% coverage probability). Furthermore, we compared our model to four other existing latent class models and demonstrated that our approach yields higher prediction accuracy for latent classes. We applied our proposed method to analyze COVID-19 data collected in Houston, Texas, US between January 1, 2021 and December 31, 2021. Early reports on the COVID-19 pandemic showed that the severity of a SARS-CoV-2 infection tends to vary greatly by case. We found that while demographic characteristics explain some of the differences in individuals' experience with COVID-19, some unaccounted-for latent variables were associated with the disease.


Subject(s)
Algorithms , COVID-19 , Latent Class Analysis , Markov Chains , Humans , COVID-19/epidemiology , Longitudinal Studies , Computer Simulation , Statistical Models , Texas/epidemiology , SARS-CoV-2 , Female
14.
Heliyon ; 10(13): e34087, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39071643

ABSTRACT

A Bayesian method based on a learning-rate parameter η is called a generalized Bayesian method. In this study, joint hybrid censored type-I and type-II samples from k exponential populations were examined to determine the influence of the parameter η on the estimation results. To investigate the effects of the choice of learning rate and loss parameters on the estimation results, we considered two additional loss functions in the Bayesian approach: the linear and generalized entropy loss functions. We then compared the generalized Bayesian algorithm with the traditional Bayesian algorithm. We performed Monte Carlo simulations to compare the performance of the estimators under the different loss functions and different values of η. The effects of the different loss functions and learning-rate parameters are illustrated with an example.
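The core mechanism — a posterior in which the likelihood is raised to the learning-rate power η — has a closed form in the simplest exponential setting, which makes the effect of η easy to see. This sketch uses illustrative prior values and a single uncensored sample, not the article's joint hybrid censored design.

```python
import random

random.seed(7)

# Generalized (tempered) Bayes: posterior ∝ prior × likelihood^eta.
# For exponential data with a Gamma(a, b) prior on the rate, conjugacy is
# preserved under tempering:
#   posterior = Gamma(a + eta * n, b + eta * sum(x))
a, b = 2.0, 2.0          # hypothetical Gamma prior (shape, rate)
true_rate = 1.5
x = [random.expovariate(true_rate) for _ in range(200)]

def posterior_mean(eta):
    return (a + eta * len(x)) / (b + eta * sum(x))

prior_mean = a / b
standard = posterior_mean(1.0)   # ordinary Bayes (eta = 1)
tempered = posterior_mean(0.3)   # eta < 1: the data are down-weighted
```

As η shrinks toward 0 the posterior mean slides from the maximum-likelihood side back toward the prior mean, which is exactly the trade-off the learning rate controls.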

15.
Clin Trials ; : 17407745241259356, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39076157

ABSTRACT

The win ratio has been increasingly used in trials with hierarchical composite endpoints. While the outcomes involved and the rules for their comparison vary with the application, there is invariably little attention to the estimand of the resulting statistic, causing difficulties in interpretation and cross-trial comparison. We make the case for articulating the estimand as a first step in win ratio analysis and establish that the root cause of its elusiveness is its intrinsic dependency on the time frame of comparison, which, if left unspecified, is set haphazardly by trial-specific censoring. From the statistical literature, we summarize two general approaches to overcome this uncertainty: a nonparametric one that pre-specifies the time frame for all comparisons, and a semiparametric one that posits a constant win ratio across all times, each with publicly available software and real examples. Finally, we discuss unsolved challenges, such as estimand construction and inference in the presence of intercurrent events.
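The basic win ratio computation is a hierarchical pairwise comparison. The sketch below uses made-up, fully observed follow-up (no censoring) — which is precisely the simplification the article warns about, since in practice censoring silently sets the time frame of each comparison.

```python
# Win ratio for a hierarchical composite (death first, then hospitalization),
# on fully observed toy data: every treatment-control pair is compared on the
# fatal event, falling through to the non-fatal one; double ties are dropped.
def win_ratio(treat, control):
    """Each subject is (death_time, hosp_time); larger times are better."""
    wins = losses = 0
    for dt, ht in treat:
        for dc, hc in control:
            if dt != dc:                 # first priority: the fatal event
                wins += dt > dc
                losses += dt < dc
            elif ht != hc:               # second priority: hospitalization
                wins += ht > hc
                losses += ht < hc
    return wins / losses

treat = [(10, 4), (8, 7), (10, 9)]
control = [(10, 4), (6, 2), (7, 5)]
wr = win_ratio(treat, control)           # 7 wins, 1 loss
```

With censoring, a pair can only be scored over the window in which both members are under observation, so the set of decidable pairs (and hence the estimand) shifts with the censoring distribution.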

16.
Stat Methods Med Res ; : 9622802241262525, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39053567

ABSTRACT

Individualized treatment rules inform tailored treatment decisions based on a patient's information, with the goal of optimizing clinical benefit for the population. When the clinical outcome of interest is survival time, most current approaches aim to maximize expected survival time. We propose a new criterion for constructing individualized treatment rules that optimize clinical benefit with survival outcomes, termed the adjusted probability of a longer survival. This objective captures the likelihood of living longer on treatment than on the alternative, which provides an often more straightforward interpretation to communicate with clinicians and patients. We view it as an alternative to the survival-analysis standard of the hazard ratio and to the increasingly used restricted mean survival time. We develop a new method to construct the optimal individualized treatment rule by maximizing a nonparametric estimator of the adjusted probability of a longer survival for a decision rule. Simulation studies demonstrate the reliability of the proposed method across a range of scenarios. We further perform a data analysis using data collected from a randomized Phase III clinical trial (SWOG S0819).

17.
Stat Methods Med Res ; : 9622802241262523, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39053572

ABSTRACT

An important task in health research is to characterize time-to-event outcomes such as disease onset or mortality in terms of a potentially high-dimensional set of risk factors. For example, prospective cohort studies of Alzheimer's disease (AD) typically enroll older adults for observation over several decades to assess the long-term impact of genetic and other factors on cognitive decline and mortality. The accelerated failure time model is particularly well-suited to such studies, structuring covariate effects as "horizontal" changes to the survival quantiles that conceptually reflect shifts in the outcome distribution due to lifelong exposures. However, this modeling task is complicated by the enrollment of adults at differing ages, and intermittent follow-up visits leading to interval-censored outcome information. Moreover, genetic and clinical risk factors are not only high-dimensional, but characterized by underlying grouping structures, such as by function or gene location. Such grouped high-dimensional covariates require shrinkage methods that directly acknowledge this structure to facilitate variable selection and estimation. In this paper, we address these considerations directly by proposing a Bayesian accelerated failure time model with a group-structured lasso penalty, designed for left-truncated and interval-censored time-to-event data. We develop an R package with a Markov chain Monte Carlo sampler for estimation. We present a simulation study examining the performance of this method relative to an ordinary lasso penalty and apply the proposed method to identify groups of predictive genetic and clinical risk factors for AD in the Religious Orders Study and Memory and Aging Project prospective cohort studies of AD and dementia.

18.
Am Stat ; 78(3): 335-344, 2024.
Article in English | MEDLINE | ID: mdl-39070115

ABSTRACT

Despite its drawbacks, the complete case analysis is commonly used in regression models with incomplete covariates. Understanding when the complete case analysis will lead to consistent parameter estimation is vital before use. Our aim here is to demonstrate when a complete case analysis is consistent for randomly right-censored covariates and to discuss the implications of its use even when consistent. Across the censored covariate literature, different assumptions are made to ensure a complete case analysis produces a consistent estimator, which leads to confusion in practice. We make several contributions to dispel this confusion. First, we summarize the language surrounding the assumptions that lead to a consistent complete case estimator. Then, we show a unidirectional hierarchical relationship between these assumptions, which leads us to one sufficient assumption to consider before using a complete case analysis. Lastly, we conduct a simulation study to illustrate the performance of a complete case analysis with a right-censored covariate under different censoring mechanism assumptions, and we demonstrate its use with a Huntington disease data example.
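One of the assumptions under which a complete case analysis stays consistent — censoring of the covariate that is independent of the outcome — can be checked on simulated data: dropping the rows with a censored covariate still recovers the regression slope, because selection then depends only on the covariate. The data-generating values below are illustrative, not from the Huntington disease example.

```python
import random

random.seed(9)

# Linear model y = b0 + b1*x + noise, with x right-censored by an independent
# censoring time c; the complete cases are the rows with x observed (x <= c).
n = 20000
beta0, beta1 = 1.0, 2.0
complete = []
for _ in range(n):
    x = random.expovariate(1.0)
    y = beta0 + beta1 * x + random.gauss(0, 1)
    c = random.expovariate(0.7)          # censoring time, independent of y
    if x <= c:                           # keep only the complete cases
        complete.append((x, y))

# Simple least-squares slope on the complete cases.
mx = sum(x for x, _ in complete) / len(complete)
my = sum(y for _, y in complete) / len(complete)
slope = (sum((x - mx) * (y - my) for x, y in complete)
         / sum((x - mx) ** 2 for x, _ in complete))
```

Selection here depends only on x, so E[y | x, kept] is unchanged and the slope remains consistent; if the censoring time were instead correlated with the outcome's error, the same complete case estimator would be biased — which is the distinction between censoring mechanisms the abstract is drawing.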

19.
Eur J Cancer ; 207: 114192, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38959677

ABSTRACT

CDK4/6 inhibitors are oral agents inhibiting key molecules of cell cycle regulation. In patients with endocrine receptor positive (ER+), human epidermal growth factor receptor 2 negative (HER2-) breast cancer, the combination of CDK4/6 inhibitors with endocrine therapy is an effective treatment in the metastatic setting. Now, two studies in the adjuvant setting, MonarchE (2 years of abemaciclib) and NATALEE (3 years of ribociclib), report positive invasive disease-free survival. Here, we re-evaluate these seminal trials. First, an excess of drop-out or loss to follow-up occurred early in the control arms of both studies. Since both trials are open-label, there is concern that the patients who drop out do not do so at random but based on socioeconomic factors and alternative options. Is it possible that the results merely appear favorable due to loss to follow-up? Based on reconstructed Kaplan-Meier curves, we conclude that the results of these studies remain fragile, being prone to informative censoring. Second, adverse events were notably more frequent in both trials, and some of them, like the COVID-19-related deaths in NATALEE, raise serious concerns. Third, the potential costs associated with CDK4/6 inhibition given as adjuvant therapy are unprecedented. The NATALEE strategy, in particular, could affect up to 35% of patients with newly diagnosed breast cancer, which is the cancer with the highest incidence worldwide. Without confirmatory data based on a placebo-controlled trial, or better identification of patients who would benefit from the addition of CDK4/6 inhibitors in the adjuvant setting, we argue against their routine use as adjuvant therapy in ER+/HER2- early breast cancer.


Subject(s)
Breast Neoplasms , Cyclin-Dependent Kinase 4 , Cyclin-Dependent Kinase 6 , Protein Kinase Inhibitors , Female , Humans , Aminopyridines/therapeutic use , Aminopyridines/adverse effects , Antineoplastic Combined Chemotherapy Protocols/therapeutic use , Antineoplastic Combined Chemotherapy Protocols/adverse effects , Benzimidazoles/therapeutic use , Benzimidazoles/adverse effects , Breast Neoplasms/drug therapy , Breast Neoplasms/pathology , Chemotherapy, Adjuvant , Cyclin-Dependent Kinase 4/antagonists & inhibitors , Cyclin-Dependent Kinase 6/antagonists & inhibitors , Protein Kinase Inhibitors/therapeutic use , Protein Kinase Inhibitors/adverse effects , Purines/therapeutic use , Purines/adverse effects , Randomized Controlled Trials as Topic
20.
J Appl Stat ; 51(9): 1664-1688, 2024.
Article in English | MEDLINE | ID: mdl-38933139

ABSTRACT

This paper presents an effort to investigate estimation of the Weibull distribution under an improved adaptive Type-II progressive censoring scheme. This scheme effectively guarantees that the experimental time will not exceed a pre-fixed time. Point and interval estimation using two classical methods, namely maximum likelihood and maximum product of spacings, is considered for the unknown parameters as well as the reliability and hazard rate functions. Approximate confidence intervals for these quantities are obtained based on the asymptotic normality of the maximum likelihood and maximum product of spacings estimators. Bayesian estimation is also considered using MCMC techniques based on the two classical approaches. An extensive simulation study is implemented to compare the performance of the different methods. Further, we propose the use of various optimality criteria to find the optimal sampling scheme. Finally, one real dataset is applied to show how the proposed estimators and the optimality criteria work in real-life scenarios. The numerical outcomes demonstrate that the Bayesian estimates using the likelihood and product-of-spacings functions perform better than the classical estimates.
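As a point of reference for the classical piece of this machinery, Weibull maximum likelihood already requires a numerical solve even for complete (uncensored) data — a much simpler setting than the article's adaptive progressive censoring, and without the product-of-spacings alternative. The sketch below finds the shape by bisection on the profile score; the sample parameters are illustrative.

```python
import math
import random

random.seed(10)

# Complete-data Weibull MLE: the shape k solves the profile score equation
#   1/k + mean(log x) - sum(x^k log x) / sum(x^k) = 0,
# and the scale then has a closed form given k.
def weibull_mle(x):
    logs = [math.log(v) for v in x]
    mean_log = sum(logs) / len(x)

    def score(k):
        xk = [v ** k for v in x]
        return 1 / k + mean_log - sum(a * b for a, b in zip(xk, logs)) / sum(xk)

    lo, hi = 0.01, 50.0                  # score is decreasing in k: bisect
    for _ in range(100):
        mid = (lo + hi) / 2
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    k = (lo + hi) / 2
    scale = (sum(v ** k for v in x) / len(x)) ** (1 / k)
    return k, scale

# random.weibullvariate(alpha, beta) takes scale alpha and shape beta.
data = [random.weibullvariate(2.0, 1.5) for _ in range(3000)]
k_hat, scale_hat = weibull_mle(data)
```

Under progressive censoring the likelihood gains survival-function factors for the removed units, so the score no longer has this clean two-term form, but the same profile-then-solve structure carries over.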
