Results 1-20 of 18,373
3.
Biom J ; 66(6): e202300185, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39101657

ABSTRACT

There has been growing research interest in developing methodology to evaluate health care providers' performance with respect to a patient outcome. Random and fixed effects models are traditionally used for such a purpose. We propose a new method, using a fusion penalty to cluster health care providers based on quasi-likelihood. Without any a priori knowledge of grouping information, our method provides a desirable data-driven approach for automatically clustering health care providers into different groups based on their performance. Further, the quasi-likelihood is more flexible and robust than the regular likelihood in that no distributional assumption is needed. An efficient alternating direction method of multipliers (ADMM) algorithm is developed to implement the proposed method. We show that the proposed method enjoys the oracle properties; namely, it performs as well as if the true group structure were known in advance. The consistency and asymptotic normality of the estimators are established. Simulation studies and analysis of the national kidney transplant registry data demonstrate the utility and validity of our method.
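To make the idea concrete, here is a minimal sketch, assuming a simple least-squares loss in place of the quasi-likelihood and a generic solver instead of the authors' ADMM algorithm; the provider effects, data, and tuning constant lambda are all illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
true_effects = np.repeat([0.0, 1.5], 4)            # two latent provider groups
y = [true_effects[i] + rng.normal(0, 0.3, 30) for i in range(8)]

def objective(theta, lam=2.0):
    # squared-error loss per provider plus an L1 fusion penalty on all pairs
    loss = sum(((yi - t) ** 2).sum() / 2 for yi, t in zip(y, theta))
    fusion = sum(abs(theta[i] - theta[j])
                 for i in range(8) for j in range(i + 1, 8))
    return loss + lam * fusion

theta0 = np.array([yi.mean() for yi in y])
theta_hat = minimize(objective, theta0, method="Powell").x
# providers whose penalized effects coincide (up to tolerance) form a group
print("effects:", theta_hat.round(2), "groups:", np.unique(theta_hat.round(1)))
```

The fusion penalty pulls similar provider effects to a common value, so the grouping emerges from the data rather than being prespecified.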


Subject(s)
Biometry , Health Personnel , Cluster Analysis , Likelihood Functions , Humans , Health Personnel/statistics & numerical data , Biometry/methods , Kidney Transplantation , Algorithms
4.
BMC Ophthalmol ; 24(1): 349, 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39152392

ABSTRACT

BACKGROUND: Accurate prediction of postoperative vault in implantable collamer lens (ICL) implantation is crucial; however, current formulas often fail to account for individual anatomical variations, leading to suboptimal visual outcomes and necessitating improved predictive models. We aimed to verify the prediction accuracy of our new predictive model for vaulting based on anterior and posterior chamber structural parameters. METHODS: This retrospective observational study included 137 patients (240 eyes) who previously underwent ICL surgery. Patients were randomly divided into the model establishment (192 eyes) or validation (48 eyes) groups. Preoperative measurements of the anterior and posterior chamber structures were obtained using Pentacam, CASIA2 anterior segment optical coherence tomography (AS-OCT), ultrasound biomicroscopy, and other devices. Stepwise multiple linear regression analysis was used to evaluate the relationship between the vault and each variable (WL formula). The Friedman test was performed on the vault predictions of the WL, NK (Ver. 3), and KS (Ver. 4) formulas in CASIA2 AS-OCT, as well as the Zhu formula and the vault measurements. The proportions of prediction error within ± 250 µm per formula were compared. RESULTS: The predicted vault values of the WL, NK, KS, and Zhu formulas and the vault measurements were 668.74 ± 162.12, 650.85 ± 248.47, 546.56 ± 128.99, 486.56 ± 210.76, and 716.06 ± 233.84 µm, respectively, with a significant difference (χ² = 69.883, P < 0.001). Significant differences were also found between the measured vault value and the Zhu formula, the measured vault value and the KS formula, the WL and Zhu formulas, the WL and KS formulas, the NK and KS formulas, and the NK and Zhu formulas (P < 0.001), but not between the other pairs. The proportions of prediction error within ± 250 µm per formula were as follows: WL formula (81.3%) > NK formula (70.8%) > KS formula (66.7%) > Zhu formula (54.2%). CONCLUSIONS: The WL formula, which accounts for the complexity of the anterior and posterior chamber structures, demonstrates greater calculation accuracy than the KS (Ver. 4) and Zhu formulas. The proportion of absolute prediction error ≤ 250 µm is higher with the WL formula than with the NK formula (Ver. 3). This enhanced predictive capability can improve ICL sizing decisions, thereby increasing the safety and efficacy of ICL implantation surgeries.
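As a small illustration of the two summary analyses, the sketch below runs a Friedman test across formulas and computes the proportion of prediction errors within ± 250 µm on synthetic vault data; the error distributions are assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(1)
measured = rng.normal(716, 234, 48)                  # measured vault (µm)
preds = {"WL": measured + rng.normal(-47, 150, 48),  # illustrative error models
         "NK": measured + rng.normal(-65, 220, 48),
         "KS": measured + rng.normal(-170, 200, 48)}

stat, p = friedmanchisquare(measured, *preds.values())
print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}")

for name, pred in preds.items():
    within = 100 * np.mean(np.abs(pred - measured) <= 250)
    print(f"{name}: {within:.1f}% of errors within ±250 µm")
```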


Subject(s)
Lens Implantation, Intraocular , Tomography, Optical Coherence , Humans , Retrospective Studies , Female , Male , Tomography, Optical Coherence/methods , Adult , Lens Implantation, Intraocular/methods , Anterior Chamber/diagnostic imaging , Phakic Intraocular Lenses , Myopia/surgery , Microscopy, Acoustic/methods , Young Adult , Middle Aged , Visual Acuity , Biometry/methods , Refraction, Ocular/physiology
5.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39101548

ABSTRACT

We consider the setting where (1) an internal study builds a linear regression model for prediction based on individual-level data, (2) some external studies have fitted similar linear regression models that use only subsets of the covariates and provide coefficient estimates for the reduced models without individual-level data, and (3) there is heterogeneity across these study populations. The goal is to integrate the external model summary information into fitting the internal model to improve prediction accuracy. We adapt the James-Stein shrinkage method to propose estimators that are no worse, and often better, in prediction mean squared error after information integration, regardless of the degree of study population heterogeneity. We conduct comprehensive simulation studies to investigate the numerical performance of the proposed estimators. We also apply the method to enhance a prediction model for patella bone lead level in terms of blood lead level and other covariates by integrating summary information from published literature.
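The classic positive-part James-Stein form conveys the core idea: shrink the internal estimate toward the external one, with the amount of shrinkage driven by how far apart they are. This is a generic sketch, not the authors' estimator (which additionally handles reduced covariate sets and population heterogeneity); sigma2 is an assumed known variance of the internal coefficient estimates.

```python
import numpy as np

def js_combine(beta_internal, beta_external, sigma2):
    """Positive-part James-Stein shrinkage of the internal coefficient
    estimate toward the external summary estimate (requires p >= 3)."""
    diff = beta_internal - beta_external
    p = diff.size
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / float(diff @ diff))
    return beta_external + shrink * diff

beta_int = np.array([1.2, -0.4, 0.8, 0.1])   # internal OLS fit (illustrative)
beta_ext = np.array([1.0, -0.5, 0.7, 0.0])   # external published coefficients
print(js_combine(beta_int, beta_ext, sigma2=0.05))
```

When the estimates nearly agree, shrink is close to 0 and the result borrows heavily from the external study; when they disagree strongly, shrink approaches 1 and the internal fit dominates.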


Subject(s)
Computer Simulation , Humans , Linear Models , Biometry/methods , Lead/blood , Patella , Models, Statistical , Data Interpretation, Statistical
6.
Arq Bras Oftalmol ; 87(5): e20230009, 2024.
Article in English | MEDLINE | ID: mdl-39109702

ABSTRACT

This document on myopia control is derived from a compilation of the medical literature and the collective clinical expertise of an expert committee comprising members from the Brazilian Society of Pediatric Ophthalmology and the Brazilian Society of Contact Lenses and Cornea. To manage myopia in children, the committee recommends corneal topography and biannual visits with cycloplegic refraction, along with annual optical biometry. For fast-progressing myopia, biannual biometry should be considered. Myopic progression is defined as an increase in spherical equivalent greater than 0.50 D/year or an increase in axial length greater than 0.3 mm/year (up to 10 years of age) or 0.2 mm/year (above 11 years of age). The proposed treatments for myopia progression include environmental control, low-concentration atropine, defocus glasses, contact lenses, or Ortho-K lenses, and combinations of these methods may be necessary for uncontrolled cases. Treatment should be sustained for at least 2 years. This document serves as a comprehensive guideline for diagnosing, treating, and monitoring pre-myopic and myopic children in Brazil.
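The committee's progression criteria reduce to a small check; a minimal sketch, assuming the spherical-equivalent change is expressed as the magnitude of the annual myopic shift and that ages between the two stated cutoffs fall under the stricter axial-length limit.

```python
def is_progressing(se_shift_d_per_year, al_growth_mm_per_year, age_years):
    """Flag myopic progression per the guideline thresholds:
    SE shift > 0.50 D/year, or axial-length growth > 0.3 mm/year up to
    age 10 (0.2 mm/year above age 11)."""
    al_limit = 0.3 if age_years <= 10 else 0.2
    return se_shift_d_per_year > 0.50 or al_growth_mm_per_year > al_limit

print(is_progressing(0.25, 0.25, age_years=12))  # True: AL growth over 0.2 mm
```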


Subject(s)
Disease Progression , Myopia , Humans , Child , Myopia/prevention & control , Myopia/therapy , Brazil , Refraction, Ocular/physiology , Corneal Topography/methods , Biometry/methods
7.
Transl Vis Sci Technol ; 13(8): 16, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39120886

ABSTRACT

Purpose: To develop and validate machine learning (ML) models for predicting cycloplegic refractive error and myopia status using noncycloplegic refractive error and biometric data. Methods: Cross-sectional study of children aged 5 to 18 years who underwent biometry and autorefraction before and after cycloplegia. Myopia was defined as cycloplegic spherical equivalent refraction (SER) ≤ -0.5 diopter (D). Models were evaluated for predicting SER using R2 and mean absolute error (MAE) and myopia status using area under the receiver operating characteristic (ROC) curve (AUC). Best-performing models were further evaluated using sensitivity/specificity and comparison of observed versus predicted myopia prevalence rate overall and in each age group. Independent data sets were used for training (n = 1938) and validation (n = 1476). Results: In the validation dataset, ML models predicted cycloplegic SER with high R2 (0.913-0.935) and low MAE (0.393-0.480 D). The AUC for predicting myopia was high (0.984-0.987). The best-performing model for SER (XGBoost) had high sensitivity and specificity (91.1% and 97.2%). Random forest (RF), the best-performing model for myopia, had high sensitivity and specificity (92.2% and 96.9%). Within each age group, the difference between predicted and actual myopia prevalence was within 4%. Conclusions: Using noncycloplegic refractive error and ocular biometric data, ML models performed well for predicting cycloplegic SER and myopia status. When measuring cycloplegic SER is not feasible, ML may provide a useful tool for estimating cycloplegic SER and myopia prevalence rate in epidemiological studies. Translational Relevance: Using ML to predict cycloplegic refraction based on noncycloplegic data is a powerful tool for large, population-based studies of refractive error.
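A minimal sketch of this workflow on synthetic data: fit a random forest to predict cycloplegic SER from noncycloplegic SER and a biometric feature, then score it with R2, MAE, and AUC for the derived myopia label. Feature names and the data-generating model are assumptions, not the study's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_absolute_error, roc_auc_score

rng = np.random.default_rng(2)
n = 2000
noncyclo_ser = rng.normal(-1.0, 2.0, n)
axial_length = 24.0 - 0.4 * noncyclo_ser + rng.normal(0, 0.3, n)
cyclo_ser = noncyclo_ser + 0.5 + rng.normal(0, 0.3, n)  # accommodation offset
X = np.column_stack([noncyclo_ser, axial_length])

X_tr, X_val = X[:1400], X[1400:]
y_tr, y_val = cyclo_ser[:1400], cyclo_ser[1400:]
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_val)

print("R2:", r2_score(y_val, pred), "MAE:", mean_absolute_error(y_val, pred))
# myopia = cycloplegic SER <= -0.5 D; use -predicted SER as the ranking score
print("AUC:", roc_auc_score(y_val <= -0.5, -pred))
```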


Subject(s)
Machine Learning , Mydriatics , Myopia , Refraction, Ocular , Humans , Child , Cross-Sectional Studies , Male , Female , Myopia/epidemiology , Myopia/diagnosis , Adolescent , Child, Preschool , Mydriatics/administration & dosage , Refraction, Ocular/physiology , China/epidemiology , Biometry/methods , Refractive Errors/epidemiology , Refractive Errors/diagnosis , ROC Curve , Prevalence , Area Under Curve , Students , East Asian People
8.
Sensors (Basel) ; 24(15)2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39123851

ABSTRACT

This work presents a novel approach to enhancing iris recognition systems through a two-module pipeline focusing on low-level image preprocessing techniques and advanced feature extraction. The primary contributions of this paper include: (i) the development of a robust preprocessing module utilizing the Canny algorithm for edge detection and the circle-based Hough transform for precise iris extraction, and (ii) the implementation of Binary Statistical Image Features (BSIF) with domain-specific filters trained on iris-specific data for improved biometric identification. By combining these advanced image preprocessing techniques, the proposed method addresses key challenges in iris recognition, such as occlusions, varying pigmentation, and textural diversity. Experimental results on the Human-inspired Domain-specific Binarized Image Features (HDBIF) Dataset, consisting of 1892 iris images, confirm the significant enhancements achieved. Moreover, this paper offers a comprehensive and reproducible research framework by providing source code and access to the testing database through the University of Notre Dame dataset website, thereby facilitating further application and study. Future research will focus on exploring adaptive algorithms and integrating machine learning techniques to improve performance across diverse and unpredictable real-world scenarios.
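The preprocessing module maps onto standard OpenCV primitives. A minimal sketch on a synthetic eye image; the threshold and radius parameters are illustrative, not the paper's tuned settings.

```python
import cv2
import numpy as np

# synthetic grayscale "eye": a darker disk standing in for the iris
img = np.full((240, 320), 200, np.uint8)
cv2.circle(img, (160, 120), 60, 90, -1)

edges = cv2.Canny(img, 50, 150)  # hysteresis thresholds for edge detection
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                           param1=150, param2=20, minRadius=30, maxRadius=100)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)
    iris = img[max(y - r, 0):y + r, max(x - r, 0):x + r]  # cropped iris region
    print("iris located at", (x, y), "radius", r)
```

The cropped region would then be passed to the BSIF filters for feature extraction.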


Subject(s)
Algorithms , Biometric Identification , Image Processing, Computer-Assisted , Iris , Iris/diagnostic imaging , Humans , Biometric Identification/methods , Image Processing, Computer-Assisted/methods , Biometry/methods , Databases, Factual , Machine Learning
9.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39136276

ABSTRACT

Clustered coefficient regression (CCR) extends the classical regression model by allowing regression coefficients to vary across observations and form clusters of observations. It has become an increasingly useful tool for modeling the heterogeneous relationship between the predictor and response variables. A typical issue of existing CCR methods is that the estimation and clustering results can be unstable in the presence of multicollinearity. To address the instability issue, this paper introduces a low-rank structure of the CCR coefficient matrix and proposes a penalized non-convex optimization problem with an adaptive group fusion-type penalty tailor-made for this structure. An iterative algorithm is developed to solve this non-convex optimization problem with guaranteed convergence. An upper bound for the coefficient estimation error is also obtained to show the statistical property of the estimator. Empirical studies on both simulated datasets and a COVID-19 mortality rate dataset demonstrate the superiority of the proposed method over existing methods.
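A schematic form of such an objective helps fix ideas; the notation below is assumed for illustration, not taken from the paper.

```latex
% Observation-specific coefficients beta_i, collected as B = (beta_1,...,beta_n),
% estimated under a low-rank constraint with an adaptive group fusion penalty:
\min_{B \,:\, \operatorname{rank}(B) \le r}\;
  \frac{1}{2} \sum_{i=1}^{n} \bigl( y_i - x_i^{\top} \beta_i \bigr)^2
  \;+\; \lambda \sum_{i<j} w_{ij}\, p_{\gamma}\!\bigl( \lVert \beta_i - \beta_j \rVert_2 \bigr)
```

Here p_γ is a concave (hence non-convex) fusion penalty and w_ij are adaptive weights; observations whose β_i are fused to a common value form a cluster, while the rank constraint stabilizes estimation under multicollinearity.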


Subject(s)
Algorithms , COVID-19 , Computer Simulation , Models, Statistical , Humans , Cluster Analysis , Regression Analysis , SARS-CoV-2 , Biometry/methods , Data Interpretation, Statistical
10.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39136277

ABSTRACT

Time-to-event data are often recorded on a discrete scale with multiple, competing risks as potential causes for the event. In this context, application of continuous survival analysis methods with a single risk suffers from biased estimation. Therefore, we propose the multivariate Bernoulli detector for competing risks with discrete times, involving a multivariate change point model on the cause-specific baseline hazards. Through the prior on the number of change points and their locations, we impose dependence between change points across risks, as well as allowing for data-driven learning of their number. Then, conditionally on these change points, a multivariate Bernoulli prior is used to infer which risks are involved. The focus of posterior inference is on cause-specific hazard rates and dependence across risks. Such dependence is often present due to subject-specific changes across time that affect all risks. Full posterior inference is performed through a tailored local-global Markov chain Monte Carlo (MCMC) algorithm, which exploits a data augmentation trick and MCMC updates from nonconjugate Bayesian nonparametric methods. We illustrate our model in simulations and on ICU data, comparing its performance with existing approaches.
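For readers less familiar with the discrete-time competing-risks setup, the standard definitions the model builds on are (notation assumed here):

```latex
% Cause-specific discrete hazard for cause k at time t, and overall survival:
\lambda_k(t) = \Pr(T = t,\ \delta = k \mid T \ge t), \qquad
S(t) = \prod_{s \le t} \Bigl( 1 - \sum_{k} \lambda_k(s) \Bigr)
```

The change points partition the discrete time axis into intervals on which each cause-specific baseline hazard is constant, and the multivariate Bernoulli prior governs which causes are involved at a given change point location.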


Subject(s)
Algorithms , Bayes Theorem , Computer Simulation , Markov Chains , Monte Carlo Method , Humans , Survival Analysis , Models, Statistical , Multivariate Analysis , Biometry/methods
11.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39101549

ABSTRACT

Many existing methodologies for analyzing spatiotemporal point patterns are developed based on the assumption of stationarity in both space and time for the second-order intensity or pair correlation. In practice, however, such an assumption often lacks validity or proves to be unrealistic. In this paper, we propose a novel and flexible nonparametric approach for estimating the second-order characteristics of spatiotemporal point processes, accommodating non-stationary temporal correlations. Our proposed method employs kernel smoothing and effectively accounts for spatial and temporal correlations differently. Under a spatially increasing-domain asymptotic framework, we establish consistency of the proposed estimators, which can be constructed using different first-order intensity estimators to enhance practicality. Simulation results reveal that our method, in comparison with existing approaches, significantly improves statistical efficiency. An application to a COVID-19 dataset further illustrates the flexibility and interpretability of our procedure.
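The second-order target can be stated with the standard definition (not paper-specific notation): with first-order intensity λ and second-order intensity λ₂ of the spatiotemporal point process,

```latex
g\bigl((s,t),(s',t')\bigr)
  = \frac{\lambda_2\bigl((s,t),(s',t')\bigr)}{\lambda(s,t)\,\lambda(s',t')}
```

Stationarity would make g depend only on the lags (s - s', t - t'); the proposed estimator relaxes this in time, treating spatial and temporal correlation differently in the kernel smoothing and plugging in a first-order intensity estimate.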


Subject(s)
COVID-19 , Computer Simulation , Spatio-Temporal Analysis , Humans , Statistics, Nonparametric , Models, Statistical , SARS-CoV-2 , Biometry/methods , Data Interpretation, Statistical
12.
BMC Ophthalmol ; 24(1): 326, 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39103785

ABSTRACT

PURPOSE: To assess the accuracy of intraocular lens (IOL) calculation formulas and investigate the effect of anterior chamber depth (ACD) and lens thickness (LT) measured by a swept-source optical coherence tomography biometer (IOLMaster 700) in patients with posterior chamber phakic IOL (PC-pIOL). METHODS: Retrospective case series. The IOLMaster 700 biometer was used to measure axial length (AL) and anterior segment parameters. The traditional formulas (SRK/T, Holladay 1, and Haigis), with or without Wang-Koch (WK) AL adjustment, and new-generation formulas (Barrett Universal II [BUII], Emmetropia Verifying Optical [EVO] v2.0, Kane, and Pearl-DGS) were used for IOL power calculation. RESULTS: This study enrolled 24 eyes of 24 patients undergoing combined PC-pIOL removal and cataract surgery at Xiamen Eye Center of Xiamen University, Xiamen, Fujian, China. The median absolute prediction error in ascending order was EVO 2.0 (0.33), Kane (0.35), SRK/T-WKmodified (0.42), Holladay 1-WKmodified (0.44), Haigis-WKC1 (0.46), Pearl-DGS (0.47), BUII (0.58), Haigis (0.75), SRK/T (0.79), and Holladay 1 (1.32). The root-mean-square absolute error in ascending order was Haigis-WKC1 (0.591), Holladay 1-WKmodified (0.622), SRK/T-WKmodified (0.623), EVO (0.673), Kane (0.678), Pearl-DGS (0.753), BUII (0.863), Haigis (1.061), SRK/T (1.188), and Holladay 1 (1.513). A detailed analysis of ACD and LT measurement error revealed negligible impact on refractive outcomes for BUII and EVO 2.0 whether these parameters were incorporated in or omitted from the formula calculation. CONCLUSION: The Kane, EVO 2.0, and traditional formulas with WK AL adjustment displayed high prediction accuracy. Furthermore, ACD and LT measurement error does not exert a significant influence on the accuracy of IOL power calculation formulas in highly myopic eyes implanted with PC-pIOL.
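The two accuracy summaries used here are straightforward to reproduce; a minimal sketch, with placeholder refraction values in diopters:

```python
import numpy as np

def prediction_error_summary(predicted, achieved):
    """Median absolute prediction error and root-mean-square absolute
    error of a formula's predicted vs. achieved refraction (D)."""
    err = np.asarray(predicted) - np.asarray(achieved)
    return np.median(np.abs(err)), np.sqrt(np.mean(err ** 2))

med_ae, rmsae = prediction_error_summary([-0.25, 0.50, -1.00],
                                         [0.00, 0.25, -0.50])
print(f"median AE = {med_ae:.2f} D, RMS AE = {rmsae:.3f} D")
```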


Subject(s)
Biometry , Cataract , Phakic Intraocular Lenses , Refraction, Ocular , Tomography, Optical Coherence , Humans , Retrospective Studies , Tomography, Optical Coherence/methods , Female , Male , Middle Aged , Biometry/methods , Refraction, Ocular/physiology , Cataract/complications , Adult , Optics and Photonics , Reproducibility of Results , Aged , Axial Length, Eye/diagnostic imaging , Axial Length, Eye/pathology , Anterior Chamber/diagnostic imaging , Visual Acuity/physiology , Lens Implantation, Intraocular/methods
13.
Biom J ; 66(6): e202300242, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39126674

ABSTRACT

Subset selection methods aim to choose a nonempty subset of populations that includes the best population with some prespecified probability. An example application involves location parameters that quantify agricultural yields, with the aim of selecting the best wheat variety. This is quite different from variable selection problems, for instance, in regression. Unfortunately, subset selection methods can become very conservative when the parameter configuration is not least favorable. This leads to the selection of many non-best populations, making the set of selected populations less informative. To solve this issue, we propose less conservative adaptive approaches based on estimating the number of best populations. We also discuss variants of our adaptive approaches that are applicable when the sample sizes and/or variances differ between populations. Using simulations, we show that our methods yield a desirable performance. As an illustration of potential gains, we apply them to two real datasets, one on the yield of wheat varieties and the other obtained via genome sequencing of repeated samples.
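A toy version of the classical (nonadaptive) rule for normal means conveys the mechanics: retain every population whose sample mean lies within d of the best. In practice d is calibrated so that the best population is included with the prespecified probability P*; here it is simply an assumed constant.

```python
import numpy as np

rng = np.random.default_rng(3)
true_means = np.array([5.0, 5.1, 5.1, 4.2, 3.8])   # latent variety yields
samples = rng.normal(true_means[:, None], 0.5, size=(5, 20))
xbar = samples.mean(axis=1)

d = 0.4                                            # assumed calibration constant
selected = np.flatnonzero(xbar >= xbar.max() - d)
print("selected:", selected, "sample means:", xbar.round(2))
```

When many populations are nearly tied but the configuration is not least favorable, this rule tends to retain most of them, which is exactly the conservativeness the adaptive approaches aim to reduce.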


Subject(s)
Biometry , Triticum , Triticum/genetics , Biometry/methods
14.
Biom J ; 66(6): e202400014, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39162087

ABSTRACT

Random survival forests (RSF) can be applied to many time-to-event research questions and are particularly useful in situations where the relationship between the independent variables and the event of interest is rather complex. However, in many clinical settings, the occurrence of the event of interest is affected by competing events, which means that a patient can experience an outcome other than the event of interest. Neglecting the competing event (i.e., regarding competing events as censoring) will typically result in biased estimates of the cumulative incidence function (CIF). A popular approach for competing events is Fine and Gray's subdistribution hazard model, which directly estimates the CIF by fitting a single-event model defined on a subdistribution timescale. Here, we integrate concepts from the subdistribution hazard modeling approach into the RSF. We develop several imputation strategies that use weights as in a discrete-time subdistribution hazard model to impute censoring times in cases where a competing event is observed. Our simulations show that the CIF is well estimated if the imputation already takes place outside the forest on the overall dataset. Especially in settings with a low rate of the event of interest or a high censoring rate, competing events must not be neglected, that is, treated as censoring. When applied to a real-world epidemiological dataset on chronic kidney disease, the imputation approach resulted in highly plausible predictor-response relationships and CIF estimates of renal events.
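The quantities involved are standard; for cause k,

```latex
% Cumulative incidence function and the subdistribution hazard it induces:
F_k(t) = \Pr(T \le t,\ \text{cause} = k), \qquad
\lambda_k^{\mathrm{sd}}(t) = -\frac{\mathrm{d}}{\mathrm{d}t} \log\bigl(1 - F_k(t)\bigr)
```

Modeling the subdistribution hazard therefore estimates the CIF directly, and the weights of the discrete-time subdistribution approach carry this idea into the censoring-time imputation used with the RSF.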


Subject(s)
Biometry , Humans , Biometry/methods , Survival Analysis , Models, Statistical , Proportional Hazards Models
15.
Biom J ; 66(6): e202300198, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39162085

ABSTRACT

Lesion-symptom mapping studies provide insight into what areas of the brain are involved in different aspects of cognition. This is commonly done via behavioral testing in patients with a naturally occurring brain injury or lesions (e.g., strokes or brain tumors). This results in high-dimensional observational data where lesion status (present/absent) is nonuniformly distributed, with some voxels having lesions in very few (or no) subjects. In this situation, mass univariate hypothesis tests have severe power heterogeneity where many tests are known a priori to have little to no power. Recent advancements in multiple testing methodologies allow researchers to weigh hypotheses according to side information (e.g., information on power heterogeneity). In this paper, we propose the use of p-value weighting for voxel-based lesion-symptom mapping studies. The weights are created using the distribution of lesion status and spatial information to estimate different non-null prior probabilities for each hypothesis test through some common approaches. We provide a monotone minimum weight criterion, which requires minimum a priori power information. Our methods are demonstrated on dependent simulated data and an aphasia study investigating which regions of the brain are associated with the severity of language impairment among stroke survivors. The results demonstrate that the proposed methods have robust error control and can increase power. Further, we showcase how weights can be used to identify regions that are inconclusive due to lack of power.
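In its simplest form, p-value weighting is a weighted Bonferroni rule; a minimal sketch, assuming weights that average to 1 (the paper's weights are built from lesion prevalence and spatial information):

```python
import numpy as np

def weighted_bonferroni(pvals, weights, alpha=0.05):
    """Reject H_i when p_i <= w_i * alpha / m; weights averaging to 1
    preserve familywise error control at level alpha."""
    pvals, weights = np.asarray(pvals), np.asarray(weights)
    assert np.isclose(weights.mean(), 1.0), "weights must average to 1"
    return pvals <= weights * alpha / pvals.size

p = np.array([0.0004, 0.02, 0.8, 0.3])
w = np.array([2.0, 1.5, 0.4, 0.1])   # up-weight voxels with a priori power
print(weighted_bonferroni(p, w))
```

Voxels with few lesioned subjects receive small weights, diverting the error budget to tests that can actually achieve power.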


Subject(s)
Biometry , Humans , Biometry/methods , Aphasia/physiopathology , Brain/diagnostic imaging , Brain Mapping/methods , False Positive Reactions
16.
BMC Ophthalmol ; 24(1): 321, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090603

ABSTRACT

BACKGROUND: Assessing refractive errors under cycloplegia is recommended for paediatric patients; however, this may not always be feasible. In these situations, refraction has to rely on measurements made under active accommodation, which may increase measurement variability and error. Therefore, evaluating the accuracy and precision of non-cycloplegic refraction and biometric measurements is clinically relevant. The Myopia Master, a novel instrument combining autorefraction and biometry, is designed for monitoring refractive error and ocular biometry in myopia management. This study assessed its repeatability and agreement for autorefraction and biometric measurements pre- and post-cycloplegia. METHODS: A prospective cross-sectional study evaluated a cohort of 96 paediatric patients who underwent ophthalmologic examination. An optometrist performed two repeated measurements of autorefraction and biometry pre- and post-cycloplegia. Test-retest repeatability (TRT) was assessed as differences between consecutive measurements, and agreement as differences between post- and pre-cycloplegia measurements, for spherical equivalent (SE), refractive and keratometric J0/J45 astigmatic components, mean keratometry (Km), and axial length (AL). RESULTS: Cycloplegia significantly improved the SE repeatability (TRT, pre-cyclo: 0.65 D; post-cyclo: 0.31 D). SE measurements were more repeatable in myopes and emmetropes than in hyperopes. Keratometry (Km) repeatability did not change with cycloplegia (TRT, pre-cyclo: 0.25 D; post-cyclo: 0.27 D), and AL repeatability improved marginally (TRT, pre-cyclo: 0.14 mm; post-cyclo: 0.09 mm). Regarding pre- and post-cycloplegia agreement, SE became more positive by +0.79 D, varying with refractive error. Myopic eyes showed a mean difference of +0.31 D, while hyperopes differed by +1.57 D. Mean keratometry, refractive and keratometric J0/J45, and AL showed no clinically significant differences. CONCLUSIONS: Refractive error measurements using the Myopia Master were 2.5× less precise pre-cycloplegia than post-cycloplegia. The error of pre-cycloplegic refractive measurements often exceeded the clinically significant threshold (0.25 D) and was refractive error dependent. The higher precision of AL measurements compared with autorefraction, their pre- and post-cycloplegia agreement, and their independence from refractive error emphasize the superiority of AL in refractive error monitoring.
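One common way to compute such repeatability and agreement summaries from paired measurements is sketched below; the repeatability coefficient convention (1.96 · √2 · within-subject SD) is an assumption, since the study defines TRT via consecutive-measurement differences.

```python
import numpy as np

def paired_summaries(first, second):
    """Mean difference, 95% limits of agreement (Bland-Altman), and a
    repeatability coefficient 1.96 * sqrt(2) * within-subject SD."""
    d = np.asarray(first) - np.asarray(second)
    mean_d, sd_d = d.mean(), d.std(ddof=1)
    loa = (mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d)
    sw = np.sqrt(np.mean(d ** 2) / 2)   # within-subject SD from paired diffs
    return mean_d, loa, 1.96 * np.sqrt(2) * sw
```

Applied to consecutive measurements this yields a TRT-type index; applied to post- versus pre-cycloplegia values it yields the agreement statistics.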


Subject(s)
Axial Length, Eye , Biometry , Mydriatics , Myopia , Refraction, Ocular , Humans , Prospective Studies , Cross-Sectional Studies , Female , Male , Refraction, Ocular/physiology , Mydriatics/administration & dosage , Child , Myopia/physiopathology , Biometry/methods , Adolescent , Reproducibility of Results , Pupil/drug effects , Pupil/physiology , Cornea/pathology , Cornea/physiopathology
17.
Biom J ; 66(6): e202300334, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39104093

ABSTRACT

Adaptive platform trials allow treatments to be added or dropped during the study, meaning that the control arm may be active for longer than the experimental arms. This leads to nonconcurrent controls, which provide nonrandomized information that may increase efficiency but may introduce bias from temporal confounding and other factors. Various methods have been proposed to control confounding from nonconcurrent controls, based on adjusting for time period. We demonstrate that time adjustment is insufficient to prevent bias in some circumstances where nonconcurrent controls are present in adaptive platform trials, and we propose a more general analytical framework that accounts for nonconcurrent controls in such circumstances. We begin by defining nonconcurrent controls using the concept of a concurrently randomized cohort, which is a subgroup of participants all subject to the same randomized design. We then use cohort adjustment rather than time adjustment. Due to flexibilities in platform trials, more than one randomized design may be in force at any time, meaning that cohort-adjusted and time-adjusted analyses may be quite different. Using simulation studies, we demonstrate that time-adjusted analyses may be biased while cohort-adjusted analyses remove this bias. We also demonstrate that the cohort-adjusted analysis may be interpreted as a synthesis of randomized and indirect comparisons analogous to mixed treatment comparisons in network meta-analysis. This allows the use of network meta-analysis methodology to separate the randomized and nonrandomized components and to assess their consistency. Whenever nonconcurrent controls are used in platform trials, the separate randomized and indirect contributions to the treatment effect should be presented.
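The distinction between time adjustment and cohort adjustment amounts to which factor enters the outcome model. A minimal sketch with a continuous outcome and synthetic data (the data frame and column names are assumptions):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 600
df = pd.DataFrame({
    "cohort": rng.integers(0, 3, n),   # concurrently randomized cohort
    "period": rng.integers(0, 4, n),   # calendar-time period
    "treat": rng.integers(0, 2, n),
})
df["y"] = 0.5 * df["treat"] + 0.3 * df["cohort"] + rng.normal(0, 1, n)

time_adj = smf.ols("y ~ treat + C(period)", data=df).fit()     # time-adjusted
cohort_adj = smf.ols("y ~ treat + C(cohort)", data=df).fit()   # cohort-adjusted
print(time_adj.params["treat"], cohort_adj.params["treat"])
```

In this toy data treatment is randomized independently of cohort, so both estimates are unbiased; the paper's point is that under realistic platform-trial allocation patterns, period and cohort need not coincide, and only the cohort-adjusted analysis removes the bias.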


Subject(s)
Biometry , Humans , Biometry/methods , Randomized Controlled Trials as Topic
18.
Biom J ; 66(6): e202300257, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39104134

ABSTRACT

We introduce a new class of long-term survival models, assuming that the number of competing causes follows a mixture of the Poisson and Birnbaum-Saunders distributions. In this context, we present some statistical properties of our model and demonstrate that the promotion time model emerges as a limiting case. We delve into detailed discussions of specific models within this class. Notably, we examine the expected number of competing causes, which depends on covariates. This allows for direct modeling of the cure rate as a function of covariates. We present an Expectation-Maximization (EM) algorithm for parameter estimation, discuss estimation via maximum likelihood (ML), and provide insights into parameter inference for this model. Additionally, we outline sufficient conditions for ensuring the consistency and asymptotic normality of the ML estimators. To evaluate the performance of our estimation method, we conduct a Monte Carlo simulation to assess asymptotic properties and a power study of the likelihood ratio (LR) test, contrasting our methodology with the promotion time model. To demonstrate the practical applicability of our model, we apply it to a real medical dataset from a population-based study of breast cancer incidence in São Paulo, Brazil. Our results illustrate that the proposed model can outperform traditional approaches in terms of model fit, highlighting its potential utility in real-world scenarios.
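The competing-causes construction underlying such models is standard and clarifies how the promotion time model arises as a limit. With N latent causes, cause-specific survival S₀, and probability generating function G_N (notation assumed here):

```latex
S_{\mathrm{pop}}(t) = \mathbb{E}\bigl[ S_0(t)^{N} \bigr] = G_N\bigl( S_0(t) \bigr),
\qquad p_0 = G_N(0) = \Pr(N = 0)
```

The cure fraction is p₀, and taking N ∼ Poisson(θ) gives G_N(s) = exp{-θ(1 - s)}, so S_pop(t) = exp{-θ F₀(t)}: exactly the promotion time model recovered as the limiting case.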


Subject(s)
Biometry , Breast Neoplasms , Models, Statistical , Breast Neoplasms/epidemiology , Breast Neoplasms/therapy , Humans , Biometry/methods , Female , Monte Carlo Method , Likelihood Functions , Survival Analysis , Algorithms
19.
Biom J ; 66(6): e202300271, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39132909

ABSTRACT

Many clinical trials assess time-to-event endpoints. To describe the difference between groups in terms of time to event, we often employ hazard ratios. However, the hazard ratio is only informative in the case of proportional hazards (PHs) over time. There exist many other effect measures that do not require PHs. One of them is the average hazard ratio (AHR). Its core idea is to utilize a time-dependent weighting function that accounts for time variation. Though advocated in methodological research papers, the AHR is rarely used in practice. To facilitate its application, we present approaches for the sample size calculation of an AHR test. We assess the reliability of the sample size calculation by extensive simulation studies covering various survival and censoring distributions with proportional as well as nonproportional hazards (N-PHs). The findings suggest that a simulation-based sample size calculation approach can be useful for designing clinical trials with N-PHs. Using the AHR can result in increased statistical power to detect differences between groups with more efficient sample sizes.
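The simulation-based sample size idea is generic: simulate trials at a candidate n, estimate the rejection rate, and increase n until the target power is reached. A minimal sketch; for brevity the two-sample comparison is a hand-rolled logrank statistic standing in for the AHR test, which would slot into the same loop.

```python
import numpy as np

rng = np.random.default_rng(5)

def logrank_z(time, event, group):
    obs = exp = var = 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        d = ((time == t) & (event == 1)).sum()          # deaths at t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        obs += ((time == t) & (event == 1) & (group == 1)).sum()
        exp += d * n1 / n
        if n > 1:                                        # hypergeometric variance
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return (obs - exp) / np.sqrt(var)

def empirical_power(n_per_arm, hr=0.7, reps=500):
    rejections = 0
    for _ in range(reps):
        raw = np.concatenate([rng.exponential(1.0, n_per_arm),       # control
                              rng.exponential(1.0 / hr, n_per_arm)]) # treated
        cens = rng.uniform(0, 2.0, 2 * n_per_arm)
        time, event = np.minimum(raw, cens), (raw <= cens).astype(int)
        group = np.repeat([0, 1], n_per_arm)
        rejections += abs(logrank_z(time, event, group)) > 1.96
    return rejections / reps

for n in (100, 150, 200):                 # search over candidate sample sizes
    print(n, empirical_power(n))
```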


Subject(s)
Proportional Hazards Models , Sample Size , Humans , Clinical Trials as Topic , Biometry/methods
20.
Biom J ; 66(6): e202200371, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39149839

ABSTRACT

Analysis of the restricted mean survival time (RMST) has become increasingly common in biomedical studies during the last decade as a means of estimating treatment or covariate effects on survival. Advantages of RMST over the hazard ratio (HR) include increased interpretability and lack of reliance on the often tenuous proportional hazards assumption. Some authors have argued that RMST regression should generally be the frontline analysis as opposed to methods based on counting process increments. However, in order for the use of the RMST to be more mainstream, it is necessary to broaden the range of data structures to which pertinent methods can be applied. In this report, we address this issue from two angles. First, most existing methodological development for directly modeling RMST has focused on multiplicative models. An additive model may be preferred due to goodness of fit and/or parameter interpretation. Second, many settings encountered nowadays feature high-dimensional categorical (nuisance) covariates, for which parameter estimation is best avoided. Motivated by these considerations, we propose stratified additive models for direct RMST analysis. The proposed methods feature additive covariate effects. Moreover, nuisance factors can be factored out of the estimation, akin to stratification in Cox regression, such that focus can be appropriately directed to the parameters of chief interest. Large-sample properties of the proposed estimators are derived, and a simulation study is performed to assess finite-sample performance. In addition, we provide techniques for evaluating a fitted model with respect to risk discrimination and predictive accuracy. The proposed methods are then applied to liver transplant data to estimate the effects of donor characteristics on posttransplant survival time.
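The estimand and model form can be written compactly (schematic notation assumed):

```latex
% RMST at horizon tau, and a stratified additive RMST model with strata s:
\mu(\tau) = E\bigl[\min(T, \tau)\bigr] = \int_0^{\tau} S(t)\,\mathrm{d}t,
\qquad
\mu_s(\tau \mid Z) = \mu_{0s}(\tau) + \beta^{\top} Z
```

Each stratum carries its own unspecified baseline RMST μ₀ₛ(τ), which drops out of estimation much as the baseline hazard does in stratified Cox regression, leaving the shared additive effects β as the parameters of interest.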


Subject(s)
Models, Statistical , Humans , Survival Analysis , Liver Transplantation , Proportional Hazards Models , Biometry/methods