Results 1 - 20 of 331
1.
Stat Med ; 2024 Sep 29.
Article in English | MEDLINE | ID: mdl-39343041

ABSTRACT

As a favorable alternative to censored quantile regression, censored expectile regression has become popular in survival analysis due to its flexibility in modeling the heterogeneous effects of covariates. The existing weighted expectile regression (WER) method assumes that the censoring variable and covariates are independent, and that the covariate effects have a global linear structure. However, these two assumptions are too restrictive to capture the complex and nonlinear patterns of the underlying covariate effects. In this article, we develop a novel weighted expectile regression neural network (WERNN) method by incorporating a deep neural network structure into the censored expectile regression framework. To handle random censoring, we employ the inverse probability of censoring weighting (IPCW) technique in the expectile loss function. The proposed WERNN method is flexible enough to fit nonlinear patterns and therefore achieves more accurate predictions than the existing WER method for right-censored data. Our findings are supported by extensive Monte Carlo simulation studies and a real data application.
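As a rough illustration of the IPCW idea in this abstract (not the authors' implementation; the function names and the precomputed censoring-survival estimates `g_hat`, e.g. from a Kaplan-Meier fit to the censoring times, are assumptions), an IPCW-weighted expectile loss might be sketched as:

```python
import numpy as np

def expectile_loss(residuals, tau):
    # Asymmetric squared-error check loss: residuals above the fit get
    # weight tau, residuals below get weight (1 - tau).
    w = np.where(residuals >= 0, tau, 1.0 - tau)
    return w * residuals ** 2

def ipcw_expectile_loss(y, pred, delta, g_hat, tau=0.5):
    # IPCW reweighting: censored points (delta = 0) receive weight zero,
    # while observed events are up-weighted by 1 / G_hat(y), the estimated
    # probability of remaining uncensored at time y.
    weights = delta / np.clip(g_hat, 1e-8, None)
    return float(np.mean(weights * expectile_loss(y - pred, tau)))
```

In a neural-network setting this quantity would serve as the training objective, with `pred` produced by the network.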

2.
J Appl Stat ; 51(11): 2139-2156, 2024.
Article in English | MEDLINE | ID: mdl-39157272

ABSTRACT

The transformation model with partly interval-censored data offers a highly flexible modeling framework that can simultaneously accommodate multiple common survival models and a wide variety of censored data types. However, real data may contain unexplained heterogeneity that cannot be entirely explained by covariates and may be brought on by a variety of unmeasured regional characteristics. For this reason, we introduce the conditionally autoregressive prior into the transformation model with partly interval-censored data and take spatial frailty into account. An efficient Markov chain Monte Carlo method is proposed to handle the posterior sampling and model inference. The approach is simple to use and, owing to a four-stage data augmentation scheme, does not involve any challenging Metropolis steps. Through several simulations, the suggested method's empirical performance is assessed, and the method is then applied to a leukemia study.

3.
Heliyon ; 10(14): e34170, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39108904

ABSTRACT

In contemporary statistical research, there has been a notable surge of interest in a proposed extension of the Marshall-Olkin-G distributions, which exhibits greater flexibility than its parent distributions. In the same spirit, we present an expansion of the Marshall-Olkin-G distributions. This study utilizes a specific variant of the extension, the Marshall-Olkin-Weibull Logarithmic model, applied to both complete and censored data sets. The model proves strongly competitive in accurately characterizing both complete and censored observations in lifetime reliability problems when compared to the other models discussed in this work.

4.
Commun Stat Theory Methods ; 53(17): 6038-6054, 2024.
Article in English | MEDLINE | ID: mdl-39100716

ABSTRACT

Phase IV clinical trials are designed to monitor long-term side effects of medical treatment. For instance, childhood cancer survivors treated with chest radiation and/or anthracycline are often at risk of developing cardiotoxicity during adulthood. Often the primary focus of a study is estimating the cumulative incidence of a particular outcome of interest, such as cardiotoxicity. However, it is challenging to evaluate patients continuously, and usually this information is collected through cross-sectional surveys by following patients longitudinally. This leads to interval-censored data, since the exact time of the onset of toxicity is unknown. Rai et al. computed the transition intensity rate using a parametric model and estimated parameters using a maximum likelihood approach in an illness-death model. However, such an approach may not be suitable if the underlying parametric assumptions do not hold. This manuscript proposes a semi-parametric model, with a logit relationship for the treatment intensities in the two groups, to estimate the transition intensity rates within the context of an illness-death model. The parameters are estimated using an EM algorithm with profile likelihood. Results from the simulation studies suggest that the proposed approach is easy to implement and yields results comparable to the parametric model.

5.
Heliyon ; 10(14): e34418, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39114065

ABSTRACT

The importance of biomedical physical data is underscored by its crucial role in advancing our comprehension of human health, unraveling the mechanisms underlying diseases, and facilitating the development of innovative medical treatments and interventions. This data serves as a fundamental resource, empowering researchers, healthcare professionals, and scientists to make informed decisions, pioneer research, and ultimately enhance global healthcare quality and individual well-being. It forms a cornerstone in the ongoing pursuit of medical progress and improved healthcare outcomes. This article aims to tackle challenges in estimating unknown parameters and reliability measures of the modified Weibull distribution when applied to progressively first-failure censored biomedical data. In this context, the article proposes both classical and Bayesian techniques to derive estimates of the unknown parameters and the survival and failure rate functions. Bayesian estimates are computed under both asymmetric and symmetric loss functions. The Markov chain Monte Carlo method is employed to obtain these Bayesian estimates and their corresponding highest posterior density credible intervals. Because the inherent complexity of these estimators precludes theoretical comparison, a simulation study is conducted to evaluate the performance of the various estimation procedures. Additionally, a range of optimization criteria is utilized to identify the most effective progressive censoring plans. Lastly, the article presents a medical application to illustrate the effectiveness of the proposed estimators. Numerical findings indicate that the Bayesian estimates outperform the other estimation methods, achieving minimal root mean square errors and narrower interval lengths.
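As a small, generic illustration of Bayesian estimation under an asymmetric loss from MCMC draws (not the paper's code; LINEX is one common asymmetric choice, and the function name is ours), the Bayes estimate under LINEX loss is the log-moment transform of the posterior sample:

```python
import numpy as np

def linex_bayes_estimate(posterior_draws, a=1.0):
    # Bayes estimate under the asymmetric LINEX loss, computed from MCMC
    # draws of the posterior: -(1/a) * log E[exp(-a * theta)].
    # For a > 0 overestimation is penalized more, pulling the estimate
    # below the posterior mean (the squared-error Bayes estimate).
    theta = np.asarray(posterior_draws, float)
    return float(-np.log(np.mean(np.exp(-a * theta))) / a)
```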

6.
J Appl Stat ; 51(11): 2197-2213, 2024.
Article in English | MEDLINE | ID: mdl-39157269

ABSTRACT

In this paper, we study robust estimation and empirical likelihood for the regression parameter in generalized linear models with right-censored data. A robust estimating equation is proposed to estimate the regression parameter, and the resulting estimator is consistent and asymptotically normal. A bias-corrected empirical log-likelihood ratio statistic for the regression parameter is constructed, and it is shown that the statistic converges weakly to a standard χ² distribution. The result can be directly used to construct a confidence region for the regression parameter. We use the bias correction method to directly calibrate the empirical log-likelihood ratio, which does not need to be multiplied by an adjustment factor. We also propose a method for selecting the tuning parameters in the loss function. Simulation studies show that the estimator of the regression parameter is robust and that the bias-corrected empirical likelihood performs better than the normal approximation method. An example with a real dataset from Alzheimer's disease studies shows that the proposed method can be applied to practical problems.

7.
Am J Epidemiol ; 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39214647

ABSTRACT

To optimize colorectal cancer (CRC) surveillance, accurate information on the risk of developing CRC from premalignant lesions is essential. However, directly observing this risk is challenging since precursor lesions, i.e., advanced adenomas (AAs), are removed upon detection. Statistical methods for multistate models can estimate risks, but estimation is challenging due to low CRC incidence. We propose an outcome-dependent sampling (ODS) design for this problem in which we oversample CRCs. More specifically, we propose a three-state model for jointly estimating the time distributions from baseline colonoscopy to AA and from AA onset to CRC, accounting for the ODS design using a weighted likelihood approach. We applied the methodology to a sample from a Norwegian adenoma cohort (1993-2007) comprising 1,495 individuals (median follow-up 6.8 years [IQR: 1.1-12.8 years]), of whom 648 did and 847 did not develop CRC. We observed a 5-year AA risk of 13% and 34% for individuals having a non-advanced adenoma (NAA) and an AA removed at baseline colonoscopy, respectively. Upon AA development, the subsequent 5-year risk of developing CRC was 17% and age-dependent. These estimates provide a basis for optimizing surveillance intensity and determining the optimal trade-off between CRC prevention, costs, and use of colonoscopy resources.

8.
J Alzheimers Dis ; 101(1): 147-157, 2024.
Article in English | MEDLINE | ID: mdl-39121117

ABSTRACT

Background: Mild cognitive impairment (MCI) patients are at a high risk of developing Alzheimer's disease and related dementias (ADRD), at an estimated annual rate above 10%. It is clinically and practically important to accurately predict the MCI-to-dementia conversion time. Objective: To accurately predict the MCI-to-dementia conversion time using easily available clinical data. Methods: The dementia diagnosis often falls between two clinical visits, and such a survival outcome is known as interval-censored data. We utilized a semi-parametric model and a random forest model for interval-censored data in conjunction with a variable selection approach to select important measures for predicting the conversion time from MCI to dementia. Two large AD cohort data sets were used to build, validate, and test the predictive model. Results: We found that the semi-parametric model can improve the prediction of the conversion time for patients with MCI-to-dementia conversion, and it also has good predictive performance for all patients. Conclusions: Interval-censored data should be analyzed using models developed for interval-censored data to improve model performance.


Subject(s)
Cognitive Dysfunction, Dementia, Disease Progression, Humans, Cognitive Dysfunction/diagnosis, Female, Male, Aged, Dementia/diagnosis, Dementia/epidemiology, Dementia/psychology, Aged, 80 and over, Cohort Studies, Time Factors, Models, Statistical, Predictive Value of Tests, Alzheimer Disease/diagnosis, Neuropsychological Tests/statistics & numerical data
9.
Stat Methods Med Res ; : 9622802241262525, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39053567

ABSTRACT

Individualized treatment rules inform tailored treatment decisions based on a patient's information, where the goal is to optimize clinical benefit for the population. When the clinical outcome of interest is survival time, most current approaches aim to maximize the expected survival time. We propose a new criterion for constructing individualized treatment rules that optimize the clinical benefit with survival outcomes, termed the adjusted probability of a longer survival. This objective captures the likelihood of living longer on treatment compared to the alternative, which provides an often more straightforward interpretation to communicate with clinicians and patients. We view it as an alternative to the survival analysis standard of the hazard ratio and the increasingly used restricted mean survival time. We develop a new method to construct the optimal individualized treatment rule by maximizing a nonparametric estimator of the adjusted probability of a longer survival for a decision rule. Simulation studies demonstrate the reliability of the proposed method across a range of scenarios. We further perform a data analysis using data collected from a randomized phase III clinical trial (SWOG S0819).
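To make the criterion concrete, here is a censoring-free toy sketch of the underlying quantity, the probability that a treated subject outlives a control (the paper's estimator additionally adjusts for censoring; this simplification and the function name are ours):

```python
import numpy as np

def prob_longer_survival(t_treated, t_control):
    # Fraction of (treated, control) pairs in which the treated subject
    # survives longer; ties count one half (a Mann-Whitney-type statistic).
    t1 = np.asarray(t_treated, float)[:, None]   # column vector
    t0 = np.asarray(t_control, float)[None, :]   # row vector
    return float(np.mean((t1 > t0) + 0.5 * (t1 == t0)))
```

A value of 0.5 indicates no benefit; values near 1 indicate that treated subjects almost always outlive controls.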

10.
Entropy (Basel) ; 26(7)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39056917

ABSTRACT

This paper proposes a novel censored autoregressive conditional Fréchet (CAcF) model with a flexible evolution scheme for the time-varying parameters, which allows deciphering tail risk dynamics constrained by price limits from the viewpoints of different risk preferences. The proposed model accommodates many important empirical characteristics of financial data, such as heavy-tailedness, volatility clustering, extreme event clustering, and price limits. We then investigate tail risk dynamics via the CAcF model in price-limited stock markets, taking entropic value at risk (EVaR) as the risk measure. Our findings suggest that tail risk is seriously underestimated in price-limited stock markets when the censored nature of limit prices is ignored. Additionally, evidence from the Taiwan stock market shows that widening price limits leads to a decrease in the incidence of extreme events (hitting the limit-down) but a significant increase in tail risk. Moreover, we find that investors with different risk preferences may make opposing decisions about the same extreme event. In summary, the empirical results reveal the effectiveness of our model in interpreting and predicting time-varying tail behaviors in price-limited stock markets, providing a new tool for financial risk management.
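For readers unfamiliar with the risk measure, a minimal grid-based sketch of entropic value at risk from a Monte Carlo loss sample might look like this (the grid bounds and function name are our assumptions, not from the paper):

```python
import numpy as np

def evar(losses, alpha=0.05, z_grid=None):
    # Entropic value at risk: the infimum over z > 0 of
    # (log E[exp(z * X)] - log(alpha)) / z, approximated on a finite grid.
    # EVaR upper-bounds both VaR and CVaR at the same level alpha.
    x = np.asarray(losses, float)
    if z_grid is None:
        z_grid = np.linspace(1e-3, 10.0, 2000)
    vals = (np.log([np.mean(np.exp(z * x)) for z in z_grid]) - np.log(alpha)) / z_grid
    return float(vals.min())
```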

11.
Stat Med ; 43(20): 3921-3942, 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-38951867

ABSTRACT

For survival analysis applications, we propose a novel procedure for identifying subgroups with large treatment effects, with a focus on subgroups where treatment is potentially detrimental. The approach, termed forest search, is relatively simple and flexible. All possible subgroups are screened and selected based on hazard ratio thresholds indicative of harm, with assessment according to the standard Cox model. By reversing the role of treatment, one can seek to identify substantial benefit. We apply a splitting consistency criterion to identify a subgroup considered "maximally consistent with harm." The type-1 error and power for subgroup identification can be quickly approximated by numerical integration. To aid inference, we describe a bootstrap bias-corrected Cox model estimator with variance estimated by a jackknife approximation. We provide a detailed evaluation of operating characteristics in simulations and compare to virtual twins and generalized random forests, where we find the proposal to have favorable performance. In particular, in our simulation setting, the proposed approach favorably controls the type-1 error for falsely identifying heterogeneity, with higher power and classification accuracy for substantial heterogeneous effects. Two real data applications are provided for publicly available datasets from clinical trials in oncology and HIV.


Subject(s)
Computer Simulation, HIV Infections, Proportional Hazards Models, Humans, Survival Analysis
12.
Water Res ; 259: 121857, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38851116

ABSTRACT

Urban areas are built environments containing substantial amounts of impervious surfaces (e.g., streets, sidewalks, roof tops). These areas often include elaborately engineered drainage networks designed to collect, transport, and discharge untreated stormwater into local surface waters. When left uncontrolled, these discharges may contain unsafe levels of fecal waste from sources such as sanitary sewage and wildlife, even under dry weather conditions. This study evaluates paired measurements of host-associated genetic markers (log10 copies per reaction) indicative of human (HF183/BacR287 and HumM2), ruminant (Rum2Bac), canine (DG3), and avian (GFD) fecal sources, 12-hour cumulative precipitation (mm), four catchment land use metrics determined by geographic information system (GIS) mapping, and Escherichia coli (MPN/100 ml) from seven municipal separate storm sewer system outfall locations situated at the southern portion of the Anacostia River Watershed (District of Columbia, U.S.A.). A total of 231 discharge samples were collected twice per month (n = 24 sampling days) and after rain events (n = 9) over a 13-month period. Approximately 50% of samples (n = 116) were impaired, exceeding the local E. coli single-sample maximum of 2.613 log10 MPN/100 ml. Genetic quality controls indicated the absence of amplification inhibition in 97.8% of samples; however, 14.7% (n = 34) of samples showed bias in DNA recovery. Of eligible samples, quantifiable levels were observed for avian (84.1%), human (57.4% for HF183/BacR287 and 40% for HumM2), canine (46.7%), and ruminant (15.9%) host-associated genetic markers. Potential links between paired measurements are explored with a recently developed Bayesian qPCR censored data analysis approach. Findings indicate that human, pet, and urban wildlife sources all contribute to storm outfall discharge water quality in the District of Columbia, but pollutant source contributions vary based on 'wet' and 'dry' conditions and catchment land use, demonstrating that genetic-based fecal source identification methods combined with GIS land use mapping can complement routine E. coli monitoring to improve stormwater management in urban areas.


Subject(s)
Escherichia coli, Feces, Sewage, Feces/microbiology, Animals, Humans, Escherichia coli/genetics, Weather, Rain, Cities, Environmental Monitoring, Dogs, Birds
13.
J Appl Stat ; 51(7): 1251-1270, 2024.
Article in English | MEDLINE | ID: mdl-38835825

ABSTRACT

The accelerated hazards model is one of the most commonly used models for regression analysis of failure time data, especially when the hazard functions have a monotonicity property. Correspondingly, a large literature has been established for its estimation and inference when right-censored data are observed. Although several methods have also been developed for its inference based on interval-censored data, they apply only to limited situations or rely on assumptions such as independent censoring. In this paper, we consider the situation where one observes case K interval-censored data, the type of failure time data that occurs most often in, for example, medical research such as clinical trials or periodic follow-up studies. For inference, we propose a sieve borrow-strength method that, in particular, allows for informative censoring. The asymptotic properties of the proposed estimators are established. Simulation studies demonstrate that the proposed inference procedure performs well. The method is applied to a real data set arising from an AIDS clinical trial.

14.
BMC Public Health ; 24(1): 1674, 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38914983

ABSTRACT

BACKGROUND: Hormone therapy (HT) use among menopausal women declined after negative information from the 2002 Women's Health Initiative (WHI) HT study. The 2017 post-intervention follow-up WHI study revealed that HT did not increase long-term mortality. However, studies on the effects of the updated WHI findings are lacking. Thus, we assessed the impact of the 2017 WHI findings on HT use in Taiwan. METHODS: We identified 1,869,050 women aged 50-60 years, between June and December 2017, from health insurance claims data to compare HT use in the 3 months preceding and following September 2017. To address the limitations associated with interval-censored data, we employed an emulated repeated cross-sectional design. Using logistic regression analysis, we evaluated the impact of the 2017 WHI study on menopausal symptom-related outpatient visits and HT use. In a scenario analysis, we examined the impact of the 2002 trial on HT use to validate our study design. RESULTS: Study participants' baseline characteristics before and after the 2017 WHI study were not significantly different. Logistic regressions demonstrated that the 2017 study had no significant effect on outpatient visits for menopause-related symptoms or HT use among women with outpatient visits. The scenario analysis confirmed the negative impact of the 2002 WHI trial on HT use. CONCLUSIONS: The 2017 WHI study did not demonstrate any impact on either menopause-related outpatient visits or HT use among middle-aged women in Taiwan. Our emulated cross-sectional study design may be employed in similar population-based policy intervention studies using interval-censored data.


Subject(s)
Women's Health, Humans, Female, Cross-Sectional Studies, Middle Aged, Taiwan, Estrogen Replacement Therapy/statistics & numerical data, Menopause, Hormone Replacement Therapy/statistics & numerical data
15.
J Appl Stat ; 51(9): 1772-1791, 2024.
Article in English | MEDLINE | ID: mdl-38933141

ABSTRACT

This paper presents a novel approach for analyzing bivariate positive data, taking into account a covariate vector and left-censored observations, by introducing a hierarchical Bayesian analysis. The proposed method assumes marginal Weibull distributions and employs either a usual Weibull likelihood or a Weibull-Tobit likelihood approach. A latent variable, or frailty, is included in the model to capture the possible correlation between the bivariate responses for the same sampling unit. The posterior summaries of interest are obtained through Markov chain Monte Carlo methods. To demonstrate the effectiveness of the proposed methodology, we apply it to a bivariate data set from stellar astronomy that includes left-censored observations and covariates. Our results indicate that the new bivariate model, which incorporates the latent factor to capture the potential dependence between the two responses of interest, produces accurate inference results. We also compare the two models using the different likelihood approaches (Weibull or Weibull-Tobit) in the application. Overall, our findings suggest that the proposed hierarchical Bayesian analysis is a promising approach for analyzing bivariate positive data with left-censored observations and covariate information.
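As a hedged sketch of the Tobit-style treatment of left censoring in a Weibull marginal (a univariate simplification of the bivariate model described above; function name ours), each fully observed value contributes the log density, while a value censored at a detection limit contributes the log CDF:

```python
import numpy as np

def weibull_tobit_loglik(t, censored, shape, scale):
    # Left-censored Weibull log-likelihood: observed values contribute
    # log f(t); values left-censored at t (e.g. a detection limit)
    # contribute log F(t) = log P(T <= t).
    t = np.asarray(t, float)
    censored = np.asarray(censored, bool)
    z = (t / scale) ** shape
    log_pdf = np.log(shape / scale) + (shape - 1.0) * np.log(t / scale) - z
    log_cdf = np.log1p(-np.exp(-z))          # numerically stable log F(t)
    return float(np.sum(np.where(censored, log_cdf, log_pdf)))
```

Maximizing this over `shape` and `scale` (or placing priors on them and sampling) recovers the usual Tobit-type treatment of detection limits.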

16.
J Appl Stat ; 51(9): 1642-1663, 2024.
Article in English | MEDLINE | ID: mdl-38933143

ABSTRACT

The article proposes a new regression model based on the generalized odd log-logistic family for interval-censored data. The survival times are not observed for this type of data, and the event of interest occurs in some random interval. This family can be used in interval modeling since it generalizes some popular lifetime distributions, in addition to its ability to present various forms of the risk function. The estimation of the parameters is addressed by classical and Bayesian methods. We examine the behavior of the estimates for some sample sizes and censoring percentages. Selection criteria, likelihood ratio tests, residual analysis, and graphical techniques assess the goodness of fit of the fitted models. The usefulness of the proposed models is shown by means of two real data sets.

17.
Stat Med ; 43(19): 3742-3758, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-38897921

ABSTRACT

Biomarkers are often measured in bulk to diagnose patients, monitor patient conditions, and research novel drug pathways. The measurement of these biomarkers often suffers from detection limits that result in missing and untrustworthy measurements. Frequently, missing biomarkers are imputed so that downstream analysis can be conducted with modern statistical methods that cannot normally handle data subject to informative censoring. This work develops an empirical Bayes g-modeling method for imputing and denoising biomarker measurements. We establish superior estimation properties compared to popular methods in simulations and with real data, providing useful biomarker measurement estimates for downstream analysis.


Subject(s)
Bayes Theorem, Biomarkers, Computer Simulation, Humans, Biomarkers/analysis, Models, Statistical, Statistics, Nonparametric, Data Interpretation, Statistical
18.
Stat Med ; 43(18): 3503-3523, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-38857600

ABSTRACT

Analysis of competing risks data has been an important topic in survival analysis due to the need to account for the dependence among the competing events. Event times are also often recorded on discrete time scales, rendering models tailored to this discrete-time nature useful in the practice of survival analysis. In this work, we focus on regression analysis with discrete-time competing risks data and consider the errors-in-variables issue, where the covariates are prone to measurement error. Viewing the true covariate value as a parameter, we develop conditional score methods for various discrete-time competing risks models, including the cause-specific and subdistribution hazards models that are popular in competing risks data analysis. The proposed estimators can be implemented by efficient computational algorithms, and the associated large-sample theories can be readily obtained. Simulation results show satisfactory finite-sample performance, and an application to competing risks data from the scleroderma lung study demonstrates the utility of the proposed methods.


Subject(s)
Computer Simulation, Proportional Hazards Models, Humans, Survival Analysis, Algorithms, Models, Statistical, Regression Analysis, Risk Assessment/methods, Scleroderma, Systemic
19.
Biomed J ; : 100732, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38697480

ABSTRACT

BACKGROUND: Electrocardiogram (ECG) abnormalities have demonstrated potential as prognostic indicators of patient survival. However, the traditional statistical approach is constrained by structured data input, limiting its ability to fully leverage the predictive value of ECG data in prognostic modeling. METHODS: This study aims to introduce and evaluate a deep-learning model that simultaneously handles censored data and unstructured ECG data for survival analysis. We introduce a novel deep neural network called ECG-surv, which includes a feature extraction neural network and a time-to-event analysis neural network. The proposed model is specifically designed to predict the time to 1-year mortality by extracting and analyzing unique features from 12-lead ECG data. ECG-surv was evaluated using both an independent test set and an external test set, which were collected using different ECG devices. RESULTS: The performance of ECG-surv surpassed that of the Cox proportional hazards model, which included demographics and ECG waveform parameters, in predicting 1-year all-cause mortality, with a significantly higher concordance index (C-index) for ECG-surv than for the Cox model in both the independent test set (0.860 [95% CI: 0.859-0.861] vs. 0.796 [95% CI: 0.791-0.800]) and the external test set (0.813 [95% CI: 0.807-0.814] vs. 0.764 [95% CI: 0.755-0.770]). ECG-surv also demonstrated exceptional predictive ability for cardiovascular death (C-index of 0.891 [95% CI: 0.890-0.893]), outperforming the Framingham risk Cox model (C-index of 0.734 [95% CI: 0.715-0.752]). CONCLUSION: ECG-surv effectively utilized unstructured ECG data in a survival analysis. It outperformed traditional statistical approaches in predicting 1-year all-cause mortality and cardiovascular death, making it a valuable tool for predicting patient survival.
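The concordance index used to compare the models above can be illustrated with a minimal Harrell's C computation (a generic sketch, not the study's evaluation code):

```python
def concordance_index(time, event, risk):
    # Harrell's C: over comparable pairs (the earlier time is an observed
    # event), the fraction where the higher predicted risk fails earlier;
    # ties in predicted risk count one half.
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] and time[i] < time[j]:   # pair is comparable
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable
```

A C-index of 0.5 corresponds to random ranking, and 1.0 to perfect ranking of event times by predicted risk.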

20.
Comput Med Imaging Graph ; 115: 102395, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38729092

ABSTRACT

In this paper, we hypothesize that it is possible to localize image regions of preclinical tumors in a chest X-ray (CXR) image through weakly-supervised training of a survival prediction model using a dataset containing CXR images of healthy patients and their time-to-death labels. These visual explanations can empower clinicians in early lung cancer detection and increase patient awareness of their susceptibility to the disease. To test this hypothesis, we train a censor-aware multi-class survival prediction deep learning classifier that is robust to imbalanced training, where classes represent quantized numbers of days for time-to-death prediction. Such a multi-class model allows us to use post-hoc interpretability methods, such as Grad-CAM, to localize image regions of preclinical tumors. For the experiments, we propose a new benchmark based on the National Lung Screening Trial (NLST) dataset to test weakly-supervised preclinical tumor localization and survival prediction models; results suggest that our method achieves state-of-the-art C-index survival prediction and weakly-supervised preclinical tumor localization performance. To our knowledge, this constitutes a pioneering approach in the field, able to produce visual explanations of preclinical events associated with survival prediction results.


Subject(s)
Early Detection of Cancer, Lung Neoplasms, Humans, Lung Neoplasms/diagnostic imaging, Lung Neoplasms/mortality, Early Detection of Cancer/methods, Radiography, Thoracic, Deep Learning, Survival Analysis