Results 1 - 20 of 255
1.
Cureus ; 16(9): e68734, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39371737

ABSTRACT

Objective This study aimed to evaluate the impact of the metastatic lymph node ratio (mtLNR) on survival outcomes and prognosis in patients with rectal carcinoma, in comparison with other clinicopathological factors. Methods A retrospective cohort analysis was conducted on 97 patients with rectal adenocarcinoma who underwent surgical treatment at Erol Olçok Training and Research Hospital between January 2017 and December 2022. The inclusion criteria consisted of patients over 18 years of age and the absence of hematological disorders or concurrent inflammatory conditions. The patients' demographic data, tumor characteristics, surgical details, lymph node (LN) status, mtLNR, and survival outcomes were analyzed. The optimal cutoff value of mtLNR for predicting mortality was determined using receiver operating characteristic (ROC) curve analysis. Kaplan-Meier survival analysis was employed to estimate overall survival (OS) and disease-free survival (DFS), and differences between groups were evaluated using the log-rank test. The Cox proportional hazards model was used to calculate hazard ratios (HRs) for all-cause mortality. Statistical significance was set at p<0.05. Results The mean age of the patients was 70.31 ± 11.57 years, with 65.98% being male. Low anterior resection (LAR) was performed in 83.51% of the patients, and laparoscopic surgery was conducted in 26.8%. The median OS for the entire cohort was 24 months (range: 3-60). Patients were divided into two groups based on mtLNR, with the cutoff value set at 0.2183. A high mtLNR was significantly associated with poorer DFS and OS (p=0.021 and p=0.003, respectively). Moreover, patients with an mtLNR>0.2183 exhibited significantly higher rates of recurrence, lymphovascular invasion (LVI), and perineural invasion (PNI) compared to those with a lower mtLNR (all p<0.001). 
The optimal cutoff value of mtLNR predicted mortality with a specificity of 81.4% and a sensitivity of 48.1% (area under the curve (AUC) 0.662, p=0.012). Kaplan-Meier analysis showed a significant difference in survival between the two groups; the risk of all-cause mortality was 3.71 times higher in patients with mtLNR>0.2183 (p=0.002). Conclusion The mtLNR is a strong determinant of survival and prognosis in patients with rectal carcinoma. High mtLNR values are associated with worse survival outcomes and more aggressive tumor characteristics. These findings suggest that mtLNR could be a valuable prognostic tool in clinical decision-making.
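Several records in this listing rely on the Kaplan-Meier product-limit estimator used above. As a concrete illustration, a minimal pure-Python sketch; the follow-up times and event flags below are invented for illustration, not taken from the study:

```python
def kaplan_meier(times, events):
    """Product-limit estimate of S(t).

    times  : observed follow-up times
    events : 1 = event (death), 0 = censored
    Returns a list of (time, survival probability) at each event time.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = 0
        n_at_t = 0
        # group all subjects tied at the same time point
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            n_at_t += 1
            i += 1
        if deaths:
            surv *= 1 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= n_at_t
    return curve

# illustrative data: months of follow-up, 1 = death observed
t = [3, 5, 8, 12, 12, 18, 24, 30, 41, 60]
e = [1, 0, 1, 1, 0, 1, 1, 0, 1, 0]
curve = kaplan_meier(t, e)
```

The curve drops only at observed event times; censored subjects leave the risk set without a drop, which is exactly what makes the estimator suitable for incomplete follow-up.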

2.
Stat Med ; 2024 Oct 08.
Article in English | MEDLINE | ID: mdl-39379012

ABSTRACT

It is becoming increasingly common for researchers to consider leveraging information from external sources to enhance the analysis of small-scale studies. While much attention has focused on univariate survival data, correlated survival data are prevalent in epidemiological investigations. In this article, we propose a unified framework to improve the estimation of the marginal accelerated failure time model with correlated survival data by integrating additional information given in the form of covariate effects evaluated in a reduced accelerated failure time model. Such auxiliary information can be summarized by using valid estimating equations and hence can be combined with the internal linear rank-estimating equations via the generalized method of moments. We investigate the asymptotic properties of the proposed estimator and show that it is more efficient than the conventional estimator using internal data only. When population heterogeneity exists, we revise the proposed estimation procedure and present a shrinkage estimator to protect against bias and loss of efficiency. Moreover, the proposed estimation procedure can be further refined to accommodate the non-negligible uncertainty in the auxiliary information, leading to more trustworthy inferences. Simulation results demonstrate the finite sample performance of the proposed methods, and an empirical application to the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial substantiates its practical relevance.

3.
Biostatistics ; 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39255367

ABSTRACT

Random effect models for time-to-event data, also known as frailty models, provide a conceptually appealing way of quantifying association between survival times and of representing heterogeneities resulting from factors which may be difficult or impossible to measure. In the literature, the random effect is usually assumed to have a continuous distribution. However, in some areas of application, discrete frailty distributions may be more appropriate. The present paper is about the implementation and interpretation of the Addams family of discrete frailty distributions. We propose methods of estimation for this family of densities in the context of shared frailty models for the hazard rates for case I interval-censored data. Our optimization framework allows for stratification of random effect distributions by covariates. We highlight interpretational advantages of the Addams family of discrete frailty distributions and the K-point distribution as compared to other frailty distributions. A unique feature of the Addams family and the K-point distribution is that the support of the frailty distribution depends on its parameters. This feature is best exploited by imposing a model on the distributional parameters, resulting in a model with non-homogeneous covariate effects that can be analyzed using standard measures such as the hazard ratio. Our methods are illustrated with applications to multivariate case I interval-censored infection data.

4.
Sci Rep ; 14(1): 20967, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39251622

ABSTRACT

This paper presents the exponentiated alpha-power log-logistic (EAPLL) distribution, which extends the log-logistic distribution. The EAPLL distribution emphasizes its suitability for survival data modeling by providing analytical simplicity and accommodating both monotone and non-monotone failure rates. We derive some of its mathematical properties and test eight estimation methods using an extensive simulation study. To determine the best estimation approach, we rank mean estimates, mean square errors, and average absolute biases on a partial and overall ranking. Furthermore, we use the EAPLL distribution to examine three real-life survival data sets, demonstrating its superior performance over competing log-logistic distributions. This study adds vital insights to survival analysis methodology and provides a solid framework for modeling various survival data scenarios.
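For readers unfamiliar with the base distribution being extended, a small sketch of the standard log-logistic survival and hazard functions, which already exhibit the non-monotone (unimodal) failure rate mentioned above when the shape parameter exceeds 1; the EAPLL extension itself is defined in the paper and not reproduced here:

```python
def loglogistic_survival(t, alpha, beta):
    """S(t) for the log-logistic distribution; alpha = scale, beta = shape."""
    return 1.0 / (1.0 + (t / alpha) ** beta)

def loglogistic_hazard(t, alpha, beta):
    """h(t) = f(t)/S(t); unimodal (rises then falls) when beta > 1."""
    u = (t / alpha) ** beta
    return (beta / t) * u / (1.0 + u)

# with beta > 1 the hazard rises to a peak and then declines
hs = [loglogistic_hazard(t / 10, alpha=1.0, beta=2.0) for t in range(1, 60)]
peak = hs.index(max(hs))
```

With beta <= 1 the same hazard is monotonically decreasing, which is why the family accommodates both monotone and non-monotone failure rates.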

5.
Med Decis Making ; : 272989X241279459, 2024 Sep 20.
Article in English | MEDLINE | ID: mdl-39305058

ABSTRACT

HIGHLIGHTS: The net value of reducing decision uncertainty by collecting additional data is quantified by the expected net benefit of sampling (ENBS). This tutorial presents a general-purpose algorithm for computing the ENBS for collecting survival data, along with a step-by-step implementation in R. The algorithm is based on recently published methods for simulating survival data and computing the expected value of sample information that do not require the survival data to follow any particular parametric distribution and that can take into account any arbitrary censoring process. We demonstrate, in a case study based on a previous cancer technology appraisal, that ENBS calculations are useful not only for designing new studies but also for optimizing reimbursement decisions for new health technologies based on immature evidence from ongoing trials.
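As a toy illustration of the ENBS idea (not the tutorial's R algorithm, which handles survival data and arbitrary censoring), the textbook normal-normal case can be sketched in a few lines; all numbers, the prior, and the cost structure below are assumptions:

```python
import math

def norm_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def evsi_normal(mu0, tau0, sigma, n):
    """Per-patient EVSI when the incremental net benefit has a N(mu0, tau0^2)
    prior and the proposed study yields n observations with sd sigma."""
    v = tau0 ** 4 / (tau0 ** 2 + sigma ** 2 / n)  # var of the preposterior mean
    s = math.sqrt(v)
    # E[max(M, 0)] for M ~ N(mu0, s^2), minus the value of deciding now
    e_max = mu0 * norm_cdf(mu0 / s) + s * norm_pdf(mu0 / s)
    return e_max - max(mu0, 0.0)

def enbs(n, mu0, tau0, sigma, pop, cost_fixed, cost_per_subject):
    """Expected net benefit of sampling: population-scaled EVSI minus cost."""
    return pop * evsi_normal(mu0, tau0, sigma, n) - (cost_fixed + cost_per_subject * n)

# pick the study size maximizing ENBS over a grid (all inputs invented)
best_n = max(range(50, 2001, 50),
             key=lambda n: enbs(n, 0.0, 1.0, 10.0, 10**6, 50000, 100))
```

EVSI grows with n but flattens toward the expected value of perfect information, while cost grows linearly, so the ENBS curve has an interior optimum — the optimal sample size.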

6.
J Alzheimers Dis ; 101(1): 147-157, 2024.
Article in English | MEDLINE | ID: mdl-39121117

ABSTRACT

Background: Mild cognitive impairment (MCI) patients are at high risk of developing Alzheimer's disease and related dementias (ADRD), at an estimated annual rate above 10%. Objective: To accurately predict MCI-to-dementia conversion time using easily available clinical data, which is clinically and practically important. Methods: The dementia diagnosis often falls between two clinical visits, and such a survival outcome is known as interval-censored data. We utilized a semi-parametric model and a random forest model for interval-censored data, in conjunction with a variable selection approach, to select important measures for predicting the conversion time from MCI to dementia. Two large AD cohort data sets were used to build, validate, and test the predictive model. Results: We found that the semi-parametric model improves prediction of the conversion time for patients with MCI-to-dementia conversion, and it also has good predictive performance for all patients. Conclusions: Interval-censored data should be analyzed with models developed for interval-censored data to improve model performance.


Subject(s)
Cognitive Dysfunction , Dementia , Disease Progression , Humans , Cognitive Dysfunction/diagnosis , Female , Male , Aged , Dementia/diagnosis , Dementia/epidemiology , Dementia/psychology , Aged, 80 and over , Cohort Studies , Time Factors , Models, Statistical , Predictive Value of Tests , Alzheimer Disease/diagnosis , Neuropsychological Tests/statistics & numerical data
7.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39136277

ABSTRACT

Time-to-event data are often recorded on a discrete scale with multiple, competing risks as potential causes for the event. In this context, application of continuous survival analysis methods with a single risk suffers from biased estimation. Therefore, we propose the multivariate Bernoulli detector for competing risks with discrete times involving a multivariate change point model on the cause-specific baseline hazards. Through the prior on the number of change points and their location, we impose dependence between change points across risks, as well as allowing for data-driven learning of their number. Then, conditionally on these change points, a multivariate Bernoulli prior is used to infer which risks are involved. Focus of posterior inference is cause-specific hazard rates and dependence across risks. Such dependence is often present due to subject-specific changes across time that affect all risks. Full posterior inference is performed through a tailored local-global Markov chain Monte Carlo (MCMC) algorithm, which exploits a data augmentation trick and MCMC updates from nonconjugate Bayesian nonparametric methods. We illustrate our model in simulations and on ICU data, comparing its performance with existing approaches.


Subject(s)
Algorithms , Bayes Theorem , Computer Simulation , Markov Chains , Monte Carlo Method , Humans , Survival Analysis , Models, Statistical , Multivariate Analysis , Biometry/methods
8.
Heliyon ; 10(15): e35250, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39170474

ABSTRACT

Here we propose a model-free, non-parametric method to solve an ill-posed inverse problem arising in several fields. It consists of determining a probability density of the lifetime, or the probability of survival of an individual, from the knowledge of the fractional moments of its probability distribution. The two problems are related, but they are different because of the natural normalization condition in each case. We provide a maximum entropy based approach to solve both problems. This problem provides a concrete framework to analyze an interesting problem in the theory of exponential models for probability densities. The central issue that comes up concerns the choice of the fractional moments and their number. We find that there are many possible choices that lead to solutions compatible with the data but in all of them, no more than four moments are necessary. The fact that a given data set can be accurately described by different exponential families poses a challenging problem for the model builder when attaching theoretical meaning to the resulting exponential density.

9.
Biometrika ; 111(1): 255-272, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38948429

ABSTRACT

Quantile regression has become a widely used tool for analysing competing risk data. However, quantile regression for competing risk data with a continuous mark is still scarce. The mark variable is an extension of cause of failure in a classical competing risk model where cause of failure is replaced by a continuous mark only observed at uncensored failure times. An example of the continuous mark variable is the genetic distance that measures dissimilarity between the infecting virus and the virus contained in the vaccine construct. In this article, we propose a novel mark-specific quantile regression model. The proposed estimation method borrows strength from data in a neighbourhood of a mark and is based on an induced smoothed estimation equation, which is very different from the existing methods for competing risk data with discrete causes. The asymptotic properties of the resulting estimators are established across mark and quantile continuums. In addition, a mark-specific quantile-type vaccine efficacy is proposed and its statistical inference procedures are developed. Simulation studies are conducted to evaluate the finite sample performances of the proposed estimation and hypothesis testing procedures. An application to the first HIV vaccine efficacy trial is provided.

10.
Cancers (Basel) ; 16(13)2024 Jun 24.
Article in English | MEDLINE | ID: mdl-39001373

ABSTRACT

BACKGROUND: Most liver cancer scoring systems focus on patients with preexisting liver diseases such as chronic viral hepatitis or liver cirrhosis. Patients with diabetes are at higher risk of developing liver cancer than the general population. However, liver cancer scoring systems for patients in the absence of liver diseases or those with diabetes remain rare. This study aims to develop a risk scoring system for liver cancer prediction among diabetes patients and a sub-model among diabetes patients without cirrhosis/chronic viral hepatitis. METHODS: A retrospective cohort study was performed using electronic health records of Hong Kong. Patients who received diabetes care in general outpatient clinics between 2010 and 2019 without cancer history were included and followed up until December 2019. The outcome was diagnosis of liver cancer during follow-up. A risk scoring system was developed by applying random survival forest in variable selection, and Cox regression in weight assignment. RESULTS: The liver cancer incidence was 0.92 per 1000 person-years. Patients who developed liver cancer (n = 1995) and those who remained free of cancer (n = 1969) during follow-up (median: 6.2 years) were selected for model building. In the final time-to-event scoring system, presence of chronic hepatitis B/C, alanine aminotransferase, age, presence of cirrhosis, and sex were included as predictors. The concordance index was 0.706 (95%CI: 0.676-0.741). In the sub-model for patients without cirrhosis/chronic viral hepatitis, alanine aminotransferase, age, triglycerides, and sex were selected as predictors. CONCLUSIONS: The proposed scoring system may provide a parsimonious score for liver cancer risk prediction among diabetes patients.
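The general recipe of turning Cox regression coefficients into an additive point score (the weight-assignment step described above) can be sketched as follows; the coefficient values and predictor names are hypothetical, not the published model's:

```python
# Hypothetical log hazard ratios -- illustrative only, NOT the published model.
coefs = {
    "chronic_hepatitis_b_or_c": 1.40,
    "cirrhosis": 1.10,
    "alt_above_upper_limit": 0.60,
    "male": 0.45,
    "age_per_decade_over_50": 0.35,
}

def to_points(coefs):
    """Scale log-HRs so the smallest coefficient maps to 1 point, then round."""
    base = min(abs(c) for c in coefs.values())
    return {name: round(c / base) for name, c in coefs.items()}

def score(patient, points):
    """Sum the points for the risk factors the patient actually has."""
    return sum(p for name, p in points.items() if patient.get(name))

points = to_points(coefs)
patient = {"cirrhosis": True, "male": True, "alt_above_upper_limit": True}
total = score(patient, points)
```

Rounding to integer points trades a little discrimination for a score that can be tallied at the bedside, which is the usual motivation for a parsimonious system like the one proposed.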

11.
J R Soc Interface ; 21(216): 20230682, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39081111

ABSTRACT

Monitoring disease progression often involves tracking biomarker measurements over time. Joint models (JMs) for longitudinal and survival data provide a framework to explore the relationship between time-varying biomarkers and patients' event outcomes, offering the potential for personalized survival predictions. In this article, we introduce the linear state space dynamic survival model for handling longitudinal and survival data. This model enhances the traditional linear Gaussian state space model by including survival data. It differs from the conventional JMs by offering an alternative interpretation via differential or difference equations, eliminating the need for creating a design matrix. To showcase the model's effectiveness, we conduct a simulation case study, emphasizing its performance under conditions of limited observed measurements. We also apply the proposed model to a dataset of pulmonary arterial hypertension patients, demonstrating its potential for enhanced survival predictions when compared with conventional risk scores.
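The linear Gaussian state space machinery underlying the proposed model can be illustrated with a scalar Kalman filter; the model matrices and noise variances below are illustrative defaults, and `None` stands in for a visit with a missing biomarker measurement:

```python
def kalman_1d(y, a=1.0, c=1.0, q=0.01, r=0.25, m0=0.0, p0=1.0):
    """Filtered means for the scalar linear Gaussian state space model
        x_t = a * x_{t-1} + N(0, q),   y_t = c * x_t + N(0, r).
    A None entry in y is a visit without a measurement: the prediction
    step still runs, the update step is skipped."""
    m, p = m0, p0
    means = []
    for obs in y:
        m, p = a * m, a * a * p + q           # predict
        if obs is not None:                   # update
            k = p * c / (c * c * p + r)       # Kalman gain
            m = m + k * (obs - c * m)
            p = (1 - k * c) * p
        means.append(m)
    return means

# illustrative biomarker track with one missed visit
means = kalman_1d([1.0, 1.1, None, 1.3, 1.2])
```

The difference-equation form makes the state propagate through gaps in the measurement schedule, which is the property the abstract highlights for limited observed measurements.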


Subject(s)
Models, Statistical , Humans , Longitudinal Studies , Survival Analysis
12.
Pharm Stat ; 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38924620

ABSTRACT

Subgroup analysis may be used to investigate treatment effect heterogeneity among subsets of the study population defined by baseline characteristics. Several methodologies have been proposed in recent years and with these, statistical issues such as multiplicity, complexity, and selection bias have been widely discussed. Some methods adjust for one or more of these issues; however, few of them discuss or consider the stability of the subgroup assignments. We propose exploring the stability of subgroups as a sensitivity analysis step for stratified medicine to assess the robustness of the identified subgroups besides identifying possible factors that may drive this instability. After applying Bayesian credible subgroups, a nonparametric bootstrap can be used to assess stability at subgroup-level and patient-level. Our findings illustrate that when the treatment effect is small or not so evident, patients are more likely to switch to different subgroups (jumpers) across bootstrap resamples. In contrast, when the treatment effect is large or extremely convincing, patients generally remain in the same subgroup. While the proposed subgroup stability method is illustrated through Bayesian credible subgroups method on time-to-event data, this general approach can be used with other subgroup identification methods and endpoints.
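The bootstrap stability check described above can be sketched with a deliberately simple subgroup rule (score above the resample mean, a stand-in for the Bayesian credible subgroups step); the per-patient benefit scores are invented:

```python
import random

def subgroup_stability(scores, n_boot=500, seed=7):
    """For each patient, the share of bootstrap resamples in which the patient
    falls in the 'benefit' subgroup -- here simply: score above the threshold
    re-estimated on that resample (its mean). Values near 0 or 1 indicate a
    stable assignment; values near 0.5 flag 'jumpers'."""
    rng = random.Random(seed)
    n = len(scores)
    hits = [0] * n
    for _ in range(n_boot):
        resample = [scores[rng.randrange(n)] for _ in range(n)]
        threshold = sum(resample) / n        # subgroup rule refit per resample
        for i, s in enumerate(scores):
            hits[i] += s > threshold
    return [h / n_boot for h in hits]

# invented benefit scores; the middle patients are the likely jumpers
stab = subgroup_stability([0.1, 0.2, 0.45, 0.5, 0.55, 0.9, 1.0])
```

Patients far from the decision boundary stay put across resamples, while those near it switch subgroups — the same pattern the authors report for small versus convincing treatment effects.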

13.
Stat Med ; 43(19): 3563-3577, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-38880963

ABSTRACT

In cancer and other medical studies, time-to-event (e.g., death) data are common. A major task in analyzing time-to-event (or survival) data is to compare two medical interventions (e.g., a treatment and a control) regarding their effect on patients' hazard of experiencing the event of interest. In such cases, we need to compare the hazard curves of the two related patient groups. In practice, a medical treatment often has a time-lag effect; that is, the treatment effect can only be observed after a time period has elapsed since the treatment was applied. In such cases, the two hazard curves are similar in an initial time period, and traditional testing procedures, such as the log-rank test, are ineffective in detecting the treatment effect, because the similarity between the two hazard curves in the initial period attenuates the difference between the curves that is reflected in the related test statistics. In this paper, we suggest a new method for comparing two hazard curves when there is a potential treatment time-lag effect, based on a weighted log-rank test with a flexible weighting scheme. The new method is shown to be more effective than representative existing methods in various cases when a treatment time-lag effect is present.
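A weighted log-rank test of the kind discussed above can be implemented compactly; the weight function here follows the Fleming-Harrington style of upweighting late event times, which is one natural choice under a time-lag effect, not necessarily the paper's scheme:

```python
def weighted_logrank(times, events, groups, weight=lambda t, s: 1.0):
    """Weighted log-rank z-statistic for two groups coded 0/1.

    weight(t, s) is evaluated at each event time t, where s is the pooled
    Kaplan-Meier estimate just before t; e.g. weight=lambda t, s: 1 - s
    (Fleming-Harrington rho=0, gamma=1) upweights late differences."""
    data = sorted(zip(times, events, groups))
    n0 = groups.count(0)
    n1 = len(groups) - n0
    s = 1.0
    num = var = 0.0
    i = 0
    while i < len(data):
        t = data[i][0]
        d = d1 = leave0 = leave1 = 0
        while i < len(data) and data[i][0] == t:   # gather ties at time t
            _, e, g = data[i]
            d += e
            d1 += e * g
            leave1 += g
            leave0 += 1 - g
            i += 1
        n = n0 + n1
        if d:
            if n0 and n1:
                w = weight(t, s)
                num += w * (d1 - d * n1 / n)       # observed minus expected
                var += w * w * d * (n1 / n) * (n0 / n) * ((n - d) / max(n - 1, 1))
            s *= 1 - d / n                         # pooled Kaplan-Meier
        n0 -= leave0
        n1 -= leave1
    return num / var ** 0.5 if var else 0.0

# toy data: the treated group (1) fails uniformly later
z = weighted_logrank([1, 2, 3, 4, 5, 6], [1] * 6, [0, 0, 0, 1, 1, 1])
```

With the constant weight this reduces to the ordinary log-rank test; a late-emphasis weight keeps power when the curves separate only after a lag, at the cost of some power against early differences.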


Subject(s)
Proportional Hazards Models , Humans , Time Factors , Survival Analysis , Computer Simulation , Female
14.
J Magn Reson Imaging ; 2024 May 13.
Article in English | MEDLINE | ID: mdl-38739014

ABSTRACT

Time-to-event endpoints are widely used as measures of patients' well-being and indicators of prognosis. In imaging-based biomarker studies, increasingly many studies examine imaging biomarkers' prognostic or predictive utility for those endpoints, whether in a trial or an observational study setting. In this educational review article, we briefly introduce some basic concepts of time-to-event endpoints and point out potential pitfalls in the context of imaging biomarker research, in the hope of improving radiologists' understanding of the related subjects. We have also included a review and discussion of the benefits of using time-to-event endpoints and considerations in selecting overall survival or progression-free survival for the primary analysis. LEVEL OF EVIDENCE: 5. TECHNICAL EFFICACY: Stage 3.

15.
Health Sci Rep ; 7(6): e2135, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38812715

ABSTRACT

Background and Aims: Severe acute malnutrition remains a significant health challenge, particularly in low- and middle-income countries. The aim of this study was to determine the survival time of under-five children with severe acute malnutrition. Methods: A retrospective cohort study was conducted, focusing on under-five children with severe acute malnutrition. The study included 322 inpatients admitted to Chiro hospital in Chiro, Ethiopia, between September 2019 and August 2020, whose data were obtained from medical records. Survival functions were analysed using Kaplan-Meier plots and log-rank tests. Survival time was further analysed using the Cox proportional hazards model and Bayesian parametric survival models, employing integrated nested Laplace approximation methods. Results: Among the 322 patients, 118 (36.6%) died of severe acute malnutrition. The estimated median survival time for inpatients was 2 weeks. Model selection criteria favored the Bayesian Weibull accelerated failure time model, which showed that age, body temperature, pulse rate, nasogastric (NG) tube usage, hypoglycemia, anemia, diarrhea, dehydration, malaria, and pneumonia significantly influenced survival time. Conclusions: Among children under five with severe acute malnutrition, those below 24 months of age, with altered body temperature or pulse rate, NG tube usage, hypoglycemia, or comorbidities such as anemia, diarrhea, dehydration, malaria, and pneumonia had shorter survival times. To reduce the death rate of children under 5 years of age, community management of acute malnutrition should be designed to ensure early detection and to improve access and coverage for children who are malnourished.

16.
BMC Res Notes ; 17(1): 85, 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38504305

ABSTRACT

BACKGROUND: Neisseria meningitidis, Streptococcus pneumoniae, and Haemophilus influenzae type b are frequently linked to bacterial meningitis (BM) in children. It is an infectious disease that kills and severely debilitates children. For a variety of reasons, bacterial meningitis remains a global public health concern; most cases and deaths occur in Sub-Saharan Africa, particularly in Ethiopia. Even though vaccination has made BM more preventable, children worldwide are still severely harmed by this serious illness. Age, sex, and co-morbidity are among the risk factors that have been identified for BM. Therefore, the main objective of this study was to identify the variables influencing the time to recovery for children with bacterial meningitis at Jigjiga University referral hospital in the Somali regional state of Ethiopia. METHOD: A retrospective cohort of 535 children with bacterial meningitis who received antibiotic treatment was the subject of this study. Parametric shared frailty and AFT models were employed, with log-likelihood, BIC, and AIC methods of model selection. The frailty models all employed the patients' kebele as a clustering factor. RESULTS: The number of cases of BM declined in young children during the 2-year, 11-month study period, but not in the older children. Streptococcus pneumoniae (50%), Haemophilus influenzae (30.5%), and Neisseria meningitidis (15%) were the most frequent causes of BM. The time to recovery was significantly influenced by the covariates male sex (ϕ = 0.927; 95% CI (0.866, 0.984); p-value = 0.014), no vaccination history (ϕ = 0.898; 95% CI (0.834, 0.965); p-value = 0.0037), and not breastfeeding (ϕ = 0.616; 95% CI (0.404, 0.039); p-value = 0.024). The recovery times for male and non-breastfed children with bacterial meningitis are 7.9% and 48.4% shorter, respectively. In contrast to children with comorbidity, the recovery time for children without comorbidity increased by 8.7%. CONCLUSION: Age group, sex, vaccination status, co-morbidity, breastfeeding, and medication regimen were the main determinants of the time to recovery of patients with bacterial meningitis. Patients with co-morbidities require close attention from the doctors at Jigjiga University Referral Hospital.


Subject(s)
Frailty , Influenza, Human , Meningitis, Bacterial , Meningitis, Meningococcal , Pneumonia , Child , Humans , Male , Infant , Child, Preschool , Aged , Ethiopia/epidemiology , Somalia , Retrospective Studies , Universities , Meningitis, Bacterial/epidemiology , Meningitis, Bacterial/microbiology , Hospitals , Referral and Consultation
17.
Stat Med ; 43(8): 1509-1526, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38320545

ABSTRACT

We propose a new simultaneous variable selection and estimation procedure with the Gaussian seamless-L0 (GSELO) penalty for the Cox proportional hazards model and the additive hazards model. The GSELO procedure shows good potential to improve existing variable selection methods by drawing strength from both best subset selection (BSS) and regularization. In addition, we develop an iterative algorithm to implement the proposed procedure in a computationally efficient way. Theoretically, we establish the convergence properties of the algorithm and the asymptotic properties of the proposed procedure. Since parameter tuning is crucial to the performance of the GSELO procedure, we also propose an extended Bayesian information criterion (EBIC) parameter selector for the GSELO procedure. Simulated and real data studies have demonstrated the prediction performance and effectiveness of the proposed method over several state-of-the-art methods.


Subject(s)
Algorithms , Humans , Bayes Theorem , Proportional Hazards Models
18.
Stat Methods Med Res ; 33(3): 392-413, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38332489

ABSTRACT

The estimation of heterogeneous treatment effects has attracted considerable interest in many disciplines, most prominently in medicine and economics. Contemporary research has so far primarily focused on continuous and binary responses where heterogeneous treatment effects are traditionally estimated by a linear model, which allows the estimation of constant or heterogeneous effects even under certain model misspecifications. More complex models for survival, count, or ordinal outcomes require stricter assumptions to reliably estimate the treatment effect. Most importantly, the noncollapsibility issue necessitates the joint estimation of treatment and prognostic effects. Model-based forests allow simultaneous estimation of covariate-dependent treatment and prognostic effects, but only for randomized trials. In this paper, we propose modifications to model-based forests to address the confounding issue in observational data. In particular, we evaluate an orthogonalization strategy originally proposed by Robinson (1988, Econometrica) in the context of model-based forests targeting heterogeneous treatment effect estimation in generalized linear models and transformation models. We found that this strategy reduces confounding effects in a simulated study with various outcome distributions. We demonstrate the practical aspects of heterogeneous treatment effect estimation for survival and ordinal outcomes by an assessment of the potentially heterogeneous effect of Riluzole on the progress of Amyotrophic Lateral Sclerosis.


Subject(s)
Amyotrophic Lateral Sclerosis , Treatment Effect Heterogeneity , Humans , Riluzole , Linear Models
19.
Geroscience ; 46(1): 1331-1342, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37544968

ABSTRACT

Telomere shortening is a biological aging hallmark. The effect of short telomere length may be targeted by increased physical activity to reduce the risk of multiple aging-related diseases, including coronary heart disease (CHD). The objective was to assess the moderation effect of accelerometer-based physical activity (aPA) on the association between shorter leukocyte telomere length (LTL) relative to the population sample and incident CHD. Data were from the UK Biobank participants with well-calibrated accelerometer data for at least 6.5 days (n = 54,180). Relative mean LTL at baseline (5-6 years prior to aPA assessment) was measured in T/S ratio, using a multiplex quantitative polymerase chain reaction (qPCR) technology, by comparing the amount of the telomere amplification product (T) to that of a single-copy gene (S). aPA measures included total number of events (at least 10-s continued physical activity > 32 milligravities [mg]), total volume, mean duration, mean intensity, and peak intensity of all events. LTL, aPA measures, and their interactions were associated with incident CHD (mean follow-up 6.8 years) using Cox proportional hazards models adjusting for covariates. Longer LTL (relative to the sample distribution) was associated with reduced incidence of CHD (adjusted hazard ratio [aHR] = 0.94 per standard deviation [SD] increase in LTL, [95% CI, 0.90 to 0.99], P = .010). Incidence of CHD was reduced by higher total volume of aPA (aHR = 0.82 per SD increase in total volume, [95% CI, 0.71 to 0.95], P = .010) but increased by higher total number of events (aHR = 1.11 per SD increase in total number of events, [95% CI, 1.02 to 1.21], P = .020) after controlling for other aPA measures and covariates. However, none of the interactions between LTL and aPA measures was statistically significant (P = .171).


Subject(s)
Biological Specimen Banks , Coronary Disease , Humans , UK Biobank , Coronary Disease/epidemiology , Coronary Disease/genetics , Leukocytes , Telomere/genetics , Exercise
20.
J Appl Stat ; 50(15): 3031-3047, 2023.
Article in English | MEDLINE | ID: mdl-37969546

ABSTRACT

The joint models for longitudinal and survival data have recently received significant attention in medical and epidemiological studies. Joint models typically combine linear mixed effects models for repeated measurement data with Cox models for survival time. When jointly modeling longitudinal and survival data, variable selection and efficient estimation of parameters are especially important for performing reliable statistical analyses, both of which are currently lacking in the literature. In this paper we discuss pretest and shrinkage estimation methods for jointly modeling longitudinal data and survival time data when some of the covariates in both the longitudinal and survival components may not be relevant for predicting survival times. In this situation, we fit two models: the full model that contains all the covariates and the subset model that contains a reduced number of covariates. We combine the full model estimators and the estimators restricted by a linear hypothesis to define pretest and shrinkage estimators. We provide their numerical mean squared errors (MSE) and relative MSE. We show that if the shrinkage dimension exceeds two, the risk of the shrinkage estimators is strictly less than that of the full model estimators. Our proposed methods are illustrated by extensive simulation studies and a real-data example.
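The idea of pulling full-model estimates toward a restricted submodel can be sketched numerically; the positive-part factor below is the classical Stein-type form, and the coefficient vectors and test statistic are invented:

```python
def shrinkage_estimate(beta_full, beta_sub, test_stat):
    """Positive-part Stein-type shrinkage of the full-model estimates toward
    the restricted (submodel) estimates. test_stat is the statistic testing
    the restriction; a small value means the data do not contradict the
    submodel, so we pool harder. k is the shrinkage dimension."""
    k = len(beta_full)
    assert k > 2, "the risk improvement holds when the shrinkage dimension > 2"
    factor = max(0.0, 1.0 - (k - 2) / test_stat)
    return [bs + factor * (bf - bs) for bf, bs in zip(beta_full, beta_sub)]

# invented coefficient vectors: a small test statistic pools to the submodel
beta = shrinkage_estimate([0.5, -0.3, 0.8, 0.1], [0.4, -0.1, 0.6, 0.0], test_stat=1.0)
```

The positive-part truncation prevents over-shrinkage past the restricted estimator, and the k > 2 condition mirrors the abstract's statement that the risk gain requires a shrinkage dimension exceeding two.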
