ABSTRACT
Binary spatial observations arise in environmental and ecological studies, where Markov random field (MRF) models are often applied. Despite the prevalence and long history of MRF models for spatial binary data, appropriate model diagnostics have remained an unresolved issue in practice. A complicating factor is that such models involve neighborhood specifications, which are difficult to assess for binary data. To address this, we propose a formal goodness-of-fit (GOF) test for diagnosing an MRF model for spatial binary values. The test statistic involves a type of conditional Moran's I based on the fitted conditional probabilities, which can detect departures in model form, including neighborhood structure. Numerical studies show that the GOF test performs well in detecting deviations from a null model, with particular attention to neighborhood misspecification as the difficult case. We illustrate the test with applications to Besag's historical endive data and to the breeding pattern of grasshopper sparrows across Iowa.
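As a rough illustration of the idea, a conditional Moran's I can be computed from the residuals between observed binary values and fitted conditional probabilities on a lattice. The sketch below is a minimal, hypothetical version: the rook-neighbor weight matrix, the stand-in "fitted" probabilities, and the data are toy choices, not the paper's construction.

```python
import numpy as np

def morans_i(z, w):
    """Moran's I for residuals z with spatial weight matrix w (zero diagonal)."""
    z = z - z.mean()
    num = z @ w @ z            # sum_ij w_ij z_i z_j
    den = (z ** 2).sum()
    return len(z) / w.sum() * num / den

# hypothetical 3x3 lattice with rook (4-nearest) neighbors
coords = [(i, j) for i in range(3) for j in range(3)]
w = np.array([[1.0 if abs(a - c) + abs(b - d) == 1 else 0.0
               for (c, d) in coords] for (a, b) in coords])

y = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0], dtype=float)  # binary field, row-major
p_hat = np.full(9, y.mean())   # stand-in for fitted conditional probabilities
stat = morans_i(y - p_hat, w)
```

A value of `stat` well above its null distribution would signal residual spatial clustering that the fitted neighborhood structure failed to absorb.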
Subjects
Markov Chains , Models, Statistical , Animals , Sparrows , Biometry/methods , Computer Simulation , Iowa , Spatial Analysis
ABSTRACT
Network deconvolution (ND) is a method to reconstruct a direct-effect network describing direct (or conditional) effects (or associations) between any two nodes from a given network depicting total (or marginal) effects (or associations). Its key idea is that, in a directed graph, a total effect can be decomposed into the sum of a direct effect and an indirect effect, with the latter further decomposed as the sum of various products of direct effects. This yields a simple closed-form solution for the direct-effect network, facilitating its important applications to distinguish direct and indirect effects. Although ND has also been applied to undirected graphs, why the method works in that setting is not well understood, inviting skepticism. We first clarify the implicit linear model assumption underlying ND, then derive a surprisingly simple result on the equivalence between ND and the use of precision matrices, offering an insightful justification and interpretation for the application of ND to undirected graphs. We also establish a formal result characterizing the effect of scaling a total-effect graph. Finally, leveraging large-scale genome-wide association study data, we show a novel application of ND to contrast marginal versus conditional genetic correlations between body height and risk of coronary artery disease; the results align with a causal directed graph inferred using ND. We conclude that ND is a promising approach, easily and widely applicable to both directed and undirected graphs.
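The closed form can be checked numerically: if T collects total effects as the sum of all directed-path products, T = D + D² + D³ + … = D(I − D)⁻¹, then ND recovers the direct-effect matrix as D = T(I + T)⁻¹. A minimal sketch with a made-up 3-node directed graph (not any network from the paper):

```python
import numpy as np

# Hypothetical direct-effect matrix (strictly upper triangular, so the series converges)
D = np.array([[0.0, 0.5, 0.0],
              [0.0, 0.0, 0.4],
              [0.0, 0.0, 0.0]])

# Total effects: sum of all directed-path products, T = D (I - D)^{-1}
I3 = np.eye(3)
T = D @ np.linalg.inv(I3 - D)   # here T = D + D^2, since D^3 = 0

# Network deconvolution recovers D in closed form: D = T (I + T)^{-1}
D_rec = T @ np.linalg.inv(I3 + T)
```

Here the indirect 0→2 effect appears in T (0.5 × 0.4 = 0.2) and is stripped out again by the deconvolution step.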
Subjects
Genome-Wide Association Study , Humans , Genome-Wide Association Study/statistics & numerical data , Biometry/methods , Models, Statistical , Linear Models , Body Height , Computer Simulation , Data Interpretation, Statistical , Causality
ABSTRACT
The identification of surrogate markers is motivated by their potential to support earlier decisions about a treatment effect. However, few methods have been developed to actually use a surrogate marker to test for a treatment effect in a future study. Most existing methods consider combining surrogate marker and primary outcome information to test for a treatment effect, rely on fully parametric methods that impose strict assumptions on the relationship between the surrogate and the outcome, and/or assume the surrogate marker is measured at only a single time point. Recent work has proposed a nonparametric test for a treatment effect using only surrogate marker information measured at a single time point, borrowing information learned from a prior study where both the surrogate and primary outcome were measured. In this paper, we build on this nonparametric test and propose group sequential procedures that allow for early stopping of treatment effect testing in a setting where the surrogate marker is measured repeatedly over time. We derive the properties of the correlated surrogate-based nonparametric test statistics at multiple time points and compute stopping boundaries that allow for early stopping for a significant treatment effect or for futility. We examine the performance of our proposed test in a simulation study and illustrate the method using data from two distinct AIDS clinical trials.
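For intuition, O'Brien-Fleming-type group sequential designs spend very little alpha at early looks, giving wide early boundaries that shrink toward the fixed-sample critical value. The sketch below uses the classical c·sqrt(K/k) boundary shape with an unadjusted normal quantile as the constant; it is a crude approximation that ignores the correlation of the test statistics across looks, which the paper's procedure accounts for when computing exact boundaries.

```python
from statistics import NormalDist

def obf_boundaries(k_looks, alpha=0.05):
    """Approximate two-sided O'Brien-Fleming-shaped boundaries at k_looks
    equally spaced analyses; the constant is the unadjusted z-quantile,
    so overall type I error is only roughly controlled."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return [z * (k_looks / k) ** 0.5 for k in range(1, k_looks + 1)]

bounds = obf_boundaries(3)  # boundaries for looks 1, 2, 3
```

Early stopping occurs at look k if the standardized test statistic exceeds `bounds[k-1]`; a symmetric lower boundary (or a separate futility rule) handles stopping for futility.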
Subjects
Biomarkers , Computer Simulation , Biomarkers/analysis , Humans , Treatment Outcome , Models, Statistical , HIV Infections/drug therapy , Statistics, Nonparametric , Biometry/methods , Data Interpretation, Statistical
ABSTRACT
In survival analysis, it often happens that some individuals, referred to as cured individuals, never experience the event of interest. When analyzing time-to-event data with a cure fraction, it is crucial to check the assumption of "sufficient follow-up," which means that the right extreme of the censoring time distribution is larger than that of the survival time distribution for the noncured individuals. However, few methods are available in the literature to test this assumption. In this article, we study the problem of testing whether follow-up is sufficient for light-tailed distributions and develop a simple novel test. The proposed test statistic compares an estimator of the noncure proportion under the assumption of sufficient follow-up to one made without that assumption. A bootstrap procedure is employed to approximate the critical values of the test. We also carry out extensive simulations to evaluate the finite-sample performance of the test and illustrate its practical use with applications to leukemia and breast cancer data sets.
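The bootstrap step can be sketched generically: resample the data, recompute the test statistic, and read off an empirical critical value. The statistic below (a sample mean) is only a placeholder for the paper's difference of noncure-proportion estimators, and the data are made up.

```python
import random

def bootstrap_critical_value(sample, statistic, n_boot=1000, level=0.95, seed=7):
    """Approximate the upper critical value of `statistic` by resampling
    with replacement. Generic sketch of the bootstrap calibration step;
    the paper's actual statistic is not reproduced here."""
    rng = random.Random(seed)
    n = len(sample)
    stats = sorted(statistic([rng.choice(sample) for _ in range(n)])
                   for _ in range(n_boot))
    return stats[int(level * n_boot)]

# toy usage: critical value for the mean of event indicators
data = [0, 1, 1, 0, 1, 1, 1, 0, 1, 1]
crit = bootstrap_critical_value(data, lambda s: sum(s) / len(s))
```

The observed statistic would then be compared against `crit` to decide whether follow-up appears sufficient.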
Subjects
Breast Neoplasms , Humans , Survival Analysis , Breast Neoplasms/mortality , Leukemia/mortality , Follow-Up Studies , Models, Statistical , Biometry/methods , Data Interpretation, Statistical , Female , Computer Simulation
ABSTRACT
In this work, a method to regularize Cox frailty models is proposed that accommodates time-varying covariates and time-varying coefficients and is based on the full likelihood instead of the partial likelihood. A particular advantage of this framework is that the baseline hazard can be modeled explicitly in a smooth, semiparametric way, for example, via P-splines. Regularization for variable selection is performed via a lasso penalty, and via a group lasso for categorical variables, while a second penalty regularizes the wiggliness of the smooth estimates of the time-varying coefficients and the baseline hazard. Additionally, adaptive weights are included to stabilize the estimation. The method is implemented in the R function coxlasso, which is now integrated into the package PenCoxFrail, and is compared to other packages for regularized Cox regression.
Subjects
Biometry , Proportional Hazards Models , Likelihood Functions , Biometry/methods , Humans , Frailty
ABSTRACT
PURPOSE: To compare the accuracy of the Z CALC2 calculator and the Barrett toric calculator in toric intraocular lens (IOL) calculation. METHODS: Eighty-five eyes of 85 patients who underwent uneventful cataract surgery with toric IOL implantation were included. Accuracy was compared between the Z CALC2 calculator and the Barrett toric calculator under two calculation modes: using simulated keratometry (SimK) from the IOLMaster 700 (Carl Zeiss Meditec AG) for posterior corneal astigmatism (PCA) prediction, and employing total corneal astigmatism (total corneal refractive power [TCRP] or measured PCA) obtained from the Pentacam (Oculus Optikgeräte GmbH). The centroid of prediction errors, the percentage of eyes with prediction errors within ±0.50 diopter (D), the mean prediction error, and the mean absolute prediction error were calculated. Subgroup analyses were conducted based on the orientation and magnitude of anterior corneal astigmatism and on axial length. RESULTS: When using SimK, the two calculators with predicted PCA showed comparable accuracy. When employing total corneal astigmatism, the Barrett toric calculator with measured PCA showed a lower centroid error (0.15 vs 0.38 D), a higher percentage of eyes with prediction errors within ±0.50 D (47.1% vs 32.9%, P = .018), and a lower mean prediction error (0.57 vs 0.71 D, P = .033) compared to the Z CALC2 calculator with TCRP in the 4-mm zone. In subgroup analysis, when employing total corneal astigmatism, the Barrett toric calculator with measured PCA exhibited superior accuracy, especially in the with-the-rule and anterior corneal astigmatism of 2.00 D or less subgroups. CONCLUSIONS: When using SimK, the Z CALC2 calculator and the Barrett toric calculator yield comparable accuracy. When employing total corneal astigmatism, the Barrett toric calculator with measured PCA may be preferable. [J Refract Surg. 2024;40(10):e681-e691.].
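A "centroid of prediction errors" for astigmatism is conventionally computed by mapping each error (magnitude at an axis) to double-angle Cartesian coordinates, averaging, and converting back. A small sketch with made-up errors, not the study's data:

```python
import math

def centroid(errors):
    """Centroid of astigmatism prediction errors via double-angle vectors.
    errors: list of (magnitude_D, axis_deg). Standard double-angle
    convention; a sketch, not the calculators' internal code."""
    xs = [m * math.cos(math.radians(2 * a)) for m, a in errors]
    ys = [m * math.sin(math.radians(2 * a)) for m, a in errors]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    mag = math.hypot(mx, my)                       # centroid magnitude (D)
    axis = math.degrees(math.atan2(my, mx)) / 2 % 180   # back to 0-180 deg axis
    return mag, axis

mag, axis = centroid([(0.50, 90), (0.25, 90), (0.30, 0)])
```

Because opposing axes cancel in the double-angle plane, the centroid magnitude is smaller than the mean absolute error whenever errors scatter in direction.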
Subjects
Astigmatism , Biometry , Lenses, Intraocular , Optics and Photonics , Refraction, Ocular , Humans , Male , Aged , Female , Refraction, Ocular/physiology , Astigmatism/physiopathology , Astigmatism/surgery , Middle Aged , Biometry/methods , Reproducibility of Results , Phacoemulsification , Lens Implantation, Intraocular , Aged, 80 and over , Retrospective Studies , Visual Acuity/physiology , Axial Length, Eye/pathology , Cornea/physiopathology , Cornea/pathology
ABSTRACT
BACKGROUND: Student wellness is of increasing concern in medical education. Increased rates of burnout, sleep disturbances, and psychological concerns in medical students are well documented. These concerns affect current educational goals and may set students on a path toward long-term health consequences. METHODS: Undergraduate medical students were recruited to participate in a novel longitudinal wellness tracking project. The project used validated wellness surveys to assess emotional health, sleep health, and burnout at multiple timepoints. Biometric information was collected from participants' Fitbit devices, which tracked longitudinal sleep patterns. RESULTS: Eighty-one students from three cohorts were assessed during the first semester of their M1 preclinical curriculum. Biometric data showed that nearly 30% of the students had frequent short sleep episodes (<6 hours of sleep for at least 30% of recorded days), and nearly 68% of students had at least one episode of three or more consecutive days of short sleep. Students who had consecutive short sleep episodes had higher rates of stress (8.3%) and depression (5.4%) symptoms and decreased academic efficiency (1.72%). CONCLUSIONS: Biometric data significantly predicted psychological health and academic experiences in medical students. Biometrically assessed sleep is poor in medical students, and consecutive days of short sleep duration are particularly impactful in relation to other measures of wellness. Longitudinal biometric data tracking is feasible, can enable students to self-monitor health behaviors, and can support low-intensity health interventions.
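Consecutive short-sleep episodes of the kind flagged here can be detected with a simple run-length scan over daily sleep totals; the 6-hour threshold matches the abstract, but the data below are illustrative, not the study's:

```python
def longest_short_sleep_run(hours, threshold=6.0):
    """Longest run of consecutive days with sleep below `threshold` hours."""
    best = cur = 0
    for h in hours:
        cur = cur + 1 if h < threshold else 0
        best = max(best, cur)
    return best

# one made-up week of Fitbit-style nightly totals
week = [7.2, 5.5, 5.9, 5.0, 7.8, 6.4, 5.1]
run = longest_short_sleep_run(week)  # -> 3
```

Flagging students with `run >= 3` would reproduce the "three or more consecutive days of short sleep" criterion used in the analysis.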
Subjects
Education, Medical, Undergraduate , Mental Health , Sleep , Students, Medical , Humans , Students, Medical/psychology , Male , Female , Stress, Psychological/epidemiology , Stress, Psychological/psychology , Depression/epidemiology , Depression/psychology , Young Adult , Biometry , Longitudinal Studies , Adult , Sleep Wake Disorders/epidemiology , Sleep Wake Disorders/psychology
ABSTRACT
The objective was to evaluate the parameters significantly related to the calculation of implanted lens power and to determine the importance of different biometric, retinal, and corneal aberration variables. A retrospective cross-sectional observational study used a database of 422 patients who underwent cataract surgery at the Oftalvist Center in Almeria between January 2021 and December 2022. A random forest based on machine learning techniques was proposed to rank the importance of preoperative variables for calculating IOL power. Correlations were explored between implanted IOL power and the most important variables in the random forests. The importance of each variable was analyzed using the random forest technique, which established a ranking of feature selections based on different criteria. A positive correlation was found between implanted IOL power and the top-ranked variables of the random forest selection: axial length (AL), preoperative keratometry, anterior chamber depth (ACD, measured from the corneal epithelium to the lens), corneal diameter, lens constant, and astigmatism aberration. The variables coma aberration (p-value = 0.12) and macular thickness (p-value = 0.10) fell just short of significance. In cataract surgery, the implanted IOL power is mainly correlated with axial length, anterior chamber depth, corneal diameter, lens constant, and preoperative keratometry. New variables such as astigmatism and anterior coma aberration, and retinal variables such as the preoperative central macular thickness, could be included in a new generation of biometric formulas based on artificial intelligence techniques.
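Random forest variable importance of the kind used here can be read off from impurity-based scores. A toy sketch with synthetic stand-ins for two biometric predictors plus pure noise; the data, model, and column meanings are invented, not the study's:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 3))   # columns: "AL", "ACD", pure noise (synthetic)
y = 118.0 - 2.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.1, size=n)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
imp = rf.feature_importances_   # impurity-based importances, sum to 1
```

In this construction the "AL" column carries most of the signal, so it tops the importance ranking, mirroring how axial length dominated the study's feature selection.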
Subjects
Biometry , Lenses, Intraocular , Humans , Male , Female , Biometry/methods , Cross-Sectional Studies , Retrospective Studies , Aged , Middle Aged , Cataract Extraction , Retina/diagnostic imaging , Lens Implantation, Intraocular , Cornea/surgery , Cornea/pathology , Aged, 80 and over , Refraction, Ocular/physiology , Axial Length, Eye
ABSTRACT
Establishing cause-effect relationships from observational data often relies on untestable assumptions. It is crucial to know whether, and to what extent, the conclusions drawn from non-experimental studies are robust to potential unmeasured confounding. In this paper, we focus on the average causal effect (ACE) as our target of inference. We generalize the sensitivity analysis approach developed by Robins et al., Franks et al., and Zhou and Yao. We use semiparametric theory to derive the nonparametric efficient influence function of the ACE, for fixed sensitivity parameters. We use this influence function to construct a one-step, split-sample, truncated estimator of the ACE. Our estimator depends on semiparametric models for the distribution of the observed data; importantly, these models do not impose any restrictions on the values of the sensitivity analysis parameters. We establish sufficient conditions ensuring that our estimator has $\sqrt{n}$ asymptotics. We use our methodology to evaluate the causal effect of smoking during pregnancy on birth weight. We also evaluate the performance of the estimation procedure in a simulation study.
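With the sensitivity parameters set to zero (no unmeasured confounding), a one-step estimator built from the efficient influence function reduces to the familiar augmented inverse-probability-weighted (AIPW) form. The sketch below uses simulated data and oracle nuisance values; the paper's version uses estimated, split-sample nuisances, truncation, and nonzero sensitivity parameters.

```python
import numpy as np

def aipw_ace(y, a, p_score, mu1, mu0):
    """One-step (AIPW) estimate of the ACE, E[Y(1) - Y(0)], from outcome y,
    treatment a, propensity scores, and outcome-regression values."""
    phi1 = a * (y - mu1) / p_score + mu1
    phi0 = (1 - a) * (y - mu0) / (1 - p_score) + mu0
    return float(np.mean(phi1 - phi0))

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-x))                     # true propensity
a = rng.binomial(1, p)
y = 2.0 * a + x + rng.normal(scale=0.5, size=n)   # true ACE = 2
est = aipw_ace(y, a, p, mu1=2.0 + x, mu0=x)       # oracle nuisances
```

Because the influence function has mean zero at the truth, the estimate concentrates around the true effect of 2 even though treatment assignment depends on x.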
Subjects
Causality , Computer Simulation , Confounding Factors, Epidemiologic , Models, Statistical , Observational Studies as Topic , Humans , Pregnancy , Female , Observational Studies as Topic/statistics & numerical data , Birth Weight , Smoking/adverse effects , Biometry/methods , Data Interpretation, Statistical , Sensitivity and Specificity
ABSTRACT
BACKGROUND: To analyze the difference and agreement between measurements obtained by a new fully automatic optical biometer, the SW-9000 µm Plus, based on optical low-coherence reflectometry (OLCR), and a commonly used optical biometer (Pentacam AXL) based on Scheimpflug imaging with partial coherence interferometry (PCI). METHODS: The central corneal thickness (CCT), anterior chamber depth (ACD, from epithelium to anterior lens surface), lens thickness (LT), mean keratometry (Km), corneal astigmatism, corneal diameter (CD), pupil diameter (PD), and axial length (AL) of 74 eyes (from 74 healthy subjects) were measured using the SW-9000 µm Plus and the Pentacam AXL to determine the agreement. Double-angle plots were used for astigmatism vector analysis. Bland-Altman analysis with 95% limits of agreement (LoA) was performed. RESULTS: Statistically significant differences were detected for all parameters except the J0 vector. The Bland-Altman analysis of AL, CCT, ACD, Km, CD, J0, and J45 indicated a high level of agreement between the two devices. For AL, CCT, ACD, Km, J0, J45, CD, and PD, the 95% LoA ranged from -0.07 to 0.05 mm, -9.67 to 7.34 µm, -0.11 to 0.04 mm, -0.25 to 0.50 D, -0.22 to 0.20 D, -0.15 to 0.20 D, -0.23 to 0.35 mm, and 1.55 to 3.77 mm, respectively. CONCLUSIONS: The measurements of AL, CCT, ACD, Km, corneal astigmatism, and CD showed narrow LoA and may be used interchangeably between the new OLCR optical biometer and the Scheimpflug/PCI biometer in healthy subjects; however, poor agreement was noted for PD values.
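Bland-Altman limits of agreement are simply the mean paired difference ± 1.96 SD of the differences. A minimal sketch on made-up axial length readings, not the study's measurements:

```python
from statistics import mean, stdev

def bland_altman(x, y):
    """Bias and 95% limits of agreement between two devices' paired readings."""
    d = [a - b for a, b in zip(x, y)]
    bias = mean(d)
    s = stdev(d)
    return bias, (bias - 1.96 * s, bias + 1.96 * s)

# hypothetical AL readings (mm) from two biometers on the same five eyes
al_dev1 = [23.10, 24.05, 22.87, 25.30, 23.75]
al_dev2 = [23.12, 24.01, 22.90, 25.33, 23.70]
bias, (lo, hi) = bland_altman(al_dev1, al_dev2)
```

Devices are judged interchangeable when the (lo, hi) interval is narrower than the clinically tolerable difference for that parameter.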
Subjects
Biometry , Cornea , Interferometry , Humans , Male , Interferometry/instrumentation , Interferometry/methods , Female , Adult , Biometry/instrumentation , Biometry/methods , Middle Aged , Cornea/diagnostic imaging , Young Adult , Axial Length, Eye/diagnostic imaging , Reproducibility of Results , Anterior Chamber/diagnostic imaging , Photography/instrumentation , Photography/methods , Lens, Crystalline/diagnostic imaging , Healthy Volunteers , Aged , Pupil
ABSTRACT
The modified Poisson and least-squares regression analyses for binary outcomes have been widely used as effective multivariable analysis methods to provide risk ratio and risk difference estimates in clinical and epidemiological studies. However, their operating characteristics under small and sparse data settings have not been rigorously assessed, and no effective methods have been proposed for these regression analyses to address this issue. In this article, we show that the modified Poisson regression provides seriously biased estimates under small and sparse data settings. By contrast, the modified least-squares regression provides unbiased estimates under these settings. We further show that the ordinary robust variance estimators for both methods have certain biases in situations involving small or moderate sample sizes. To address these issues, we propose Firth-type penalized methods for the modified Poisson and least-squares regressions. The adjustment methods lead to a more accurate and stable risk ratio estimator under small and sparse data settings, although the risk difference estimator is not invariant. In addition, to improve the inferences for the effect measures, we provide an improved robust variance estimator for these regression analyses. We conducted extensive simulation studies to assess the performance of the proposed methods under real-world conditions and found that the accuracy of the point and interval estimates was markedly improved by the proposed methods. We illustrate the effectiveness of these methods by applying them to a clinical study of epilepsy.
Subjects
Biometry , Least-Squares Analysis , Humans , Poisson Distribution , Regression Analysis , Biometry/methods , Models, Statistical , Epilepsy
ABSTRACT
The analysis of multiple time-to-event outcomes in a randomized controlled clinical trial can be accomplished with existing methods. However, depending on the characteristics of the disease under investigation and the circumstances in which the study is planned, it may be of interest to conduct interim analyses and adapt the study design if necessary. Because the endpoints are expected to be dependent, standard methods may not make use of the full information available on them for this purpose. We suggest a solution to this problem by embedding the endpoints in a multistate model. If this model is Markovian, it is possible to take the disease history of the patients into account and allow for data-dependent design adaptations. To this end, we introduce a flexible test procedure for a variety of applications, but are particularly concerned with the simultaneous consideration of progression-free survival (PFS) and overall survival (OS). This setting is of key interest in oncological trials. We conduct simulation studies to determine the properties for small sample sizes and demonstrate an application based on data from the NB2004-HR study.
Subjects
Biometry , Markov Chains , Models, Statistical , Humans , Biometry/methods , Clinical Trials as Topic/methods , Research Design , Randomized Controlled Trials as Topic , Endpoint Determination , Progression-Free Survival
ABSTRACT
Randomized trials seek efficient treatment effect estimation within target populations, yet scientific interest often also centers on subpopulations. Although there are typically too few subjects within each subpopulation to efficiently estimate these subpopulation treatment effects, one can gain precision by borrowing strength across subpopulations, as is the case in a basket trial. While dynamic borrowing has been proposed as an efficient approach to estimating subpopulation treatment effects on primary endpoints, additional efficiency could be gained by leveraging the information found in secondary endpoints. We propose a multisource exchangeability model (MEM) that incorporates secondary endpoints to more efficiently assess subpopulation exchangeability. Across simulation studies, our proposed model almost uniformly reduces the mean squared error when compared to the standard MEM that only considers data from the primary endpoint by gaining efficiency when subpopulations respond similarly to the treatment and reducing the magnitude of bias when the subpopulations are heterogeneous. We illustrate our model's feasibility using data from a recently completed trial of very low nicotine content cigarettes to estimate the effect on abstinence from smoking within three priority subpopulations. Our proposed model led to increases in the effective sample size two to four times greater than under the standard MEM.
Subjects
Computer Simulation , Models, Statistical , Smoking Cessation , Humans , Smoking Cessation/methods , Smoking Cessation/statistics & numerical data , Randomized Controlled Trials as Topic/statistics & numerical data , Endpoint Determination/statistics & numerical data , Endpoint Determination/methods , Data Interpretation, Statistical , Biometry/methods , Sample Size , Treatment Outcome
ABSTRACT
Optimizing doses for multiple indications is challenging. The pooled approach of finding a single optimal biological dose (OBD) for all indications ignores that dose-response or dose-toxicity curves may differ between indications, resulting in varying OBDs. Conversely, indication-specific dose optimization often requires a large sample size. To address this challenge, we propose a Randomized two-stage basket trial design that Optimizes doses in Multiple Indications (ROMI). In stage 1, for each indication, response and toxicity are evaluated for a high dose, which may be a previously obtained maximum tolerated dose, with a rule that stops accrual to indications where the high dose is unsafe or ineffective. Indications not terminated proceed to stage 2, where patients are randomized between the high dose and a specified lower dose. A latent-cluster Bayesian hierarchical model is employed to borrow information between indications, while considering the potential heterogeneity of OBD across indications. Indication-specific utilities are used to quantify response-toxicity trade-offs. At the end of stage 2, for each indication with at least one acceptable dose, the dose with highest posterior mean utility is selected as optimal. Two versions of ROMI are presented, one using only stage 2 data for dose optimization and the other optimizing doses using data from both stages. Simulations show that both versions have desirable operating characteristics compared to designs that either ignore indications or optimize dose independently for each indication.
Subjects
Bayes Theorem , Dose-Response Relationship, Drug , Maximum Tolerated Dose , Randomized Controlled Trials as Topic , Humans , Randomized Controlled Trials as Topic/statistics & numerical data , Research Design , Computer Simulation , Sample Size , Models, Statistical , Biometry/methods
ABSTRACT
PURPOSE: To evaluate the agreement of ocular biometry measured using a swept-source optical coherence tomography device (Casia 2) and a dual Scheimpflug tomography device (Galilei G6) in keratoconus. METHODS: This retrospective study included 102 eyes from 102 keratoconus patients, each examined using both devices. Parameters compared included flat (Kf) and steep (Ks) keratometry; astigmatism of anterior, posterior, and total keratometry; and central (CCT) and thinnest (TCT) corneal thickness. To assess the agreement, the intraclass correlation coefficient (ICC) and Bland-Altman analysis with 95% limits of agreement (LoA) were used. RESULTS: Anterior and total Ks and Kf showed moderate or good agreement, with a 95% LoA range over 9.75 D. Posterior Ks and Kf were lower in the Galilei G6 (all p < 0.001). Astigmatism showed moderate, poor, and moderate agreement for anterior, posterior, and total keratometry, respectively. CCT and TCT showed excellent agreement; however, the 95% LoA range was over 60 µm. CONCLUSION: The agreement between the two devices was not excellent for most parameters used to diagnose keratoconus and assess disease progression, and the differences were clinically significant. Therefore, the measurements from these two devices are not interchangeable for patients with keratoconus.
Subjects
Keratoconus , Tomography, Optical Coherence , Humans , Keratoconus/diagnostic imaging , Keratoconus/diagnosis , Male , Female , Adult , Tomography, Optical Coherence/methods , Tomography, Optical Coherence/instrumentation , Retrospective Studies , Young Adult , Corneal Topography/methods , Corneal Topography/instrumentation , Adolescent , Cornea/diagnostic imaging , Cornea/pathology , Anterior Eye Segment/diagnostic imaging , Anterior Eye Segment/pathology , Middle Aged , Biometry/methods , Biometry/instrumentation
ABSTRACT
In this paper, we introduce functional generalized canonical correlation analysis, a new framework for exploring associations between multiple random processes observed jointly. The framework is based on the multiblock regularized generalized canonical correlation analysis framework. It is robust to sparsely and irregularly observed data, making it applicable in many settings. We establish the monotonic property of the solving procedure and introduce a Bayesian approach for estimating canonical components. We propose an extension of the framework that allows the integration of a univariate or multivariate response into the analysis, paving the way for predictive applications. We evaluate the method's efficiency in simulation studies and present a use case on a longitudinal dataset.
Subjects
Bayes Theorem , Computer Simulation , Models, Statistical , Longitudinal Studies , Humans , Biometry/methods , Data Interpretation, Statistical
ABSTRACT
Health care is being revolutionized, and an important component of that revolution is disease prevention and health improvement from home. A natural approach to this problem is monitoring changes in people's behavior or activities, since such changes can be indicators of potential health problems. However, due to a person's daily pattern, changes will be observed throughout each day, with, e.g., an increase of events around meal times and fewer events during the night. We do not wish to detect such within-day changes but rather changes in the daily behavior pattern from one day to the next. To this end, we treat the set of event times within a given day as a single observation, modeled as the realization of an inhomogeneous Poisson process whose rate function can vary with the time of day. We then propose to detect changes in the sequence of inhomogeneous Poisson processes. This approach is appropriate for many phenomena, particularly for home activity data. Our methodology is evaluated on simulated data. Overall, our approach uses local change information to detect changes across days while allowing us to visualize and interpret the results, changes, and trends over time, enabling the detection of potential health decline.
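A day's events under the assumed model can be simulated with Lewis-Shedler thinning: propose points from a homogeneous process at a dominating rate and keep each with probability rate(t)/rate_max. The bimodal "meal-time" intensity below is a toy choice, not the paper's fitted rate function.

```python
import math
import random

def simulate_nhpp(rate, t_max, rate_max, seed=42):
    """Simulate one day's event times from an inhomogeneous Poisson process
    by Lewis-Shedler thinning; rate(t) must not exceed rate_max (materially)."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_max)     # next candidate from dominating process
        if t > t_max:
            return events
        if rng.random() < rate(t) / rate_max:
            events.append(t)               # accept with probability rate(t)/rate_max

# toy daily intensity (events/hour): peaks near hour 8 and hour 19
rate = lambda h: 1.0 + 4.0 * (math.exp(-((h - 8) ** 2) / 2)
                              + math.exp(-((h - 19) ** 2) / 2))
day = simulate_nhpp(rate, t_max=24.0, rate_max=5.0)
```

Change detection then amounts to asking whether the rate function generating `day` differs from the one generating previous days.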
Subjects
Activities of Daily Living , Computer Simulation , Poisson Distribution , Humans , Models, Statistical , Biometry/methods , Data Interpretation, Statistical
ABSTRACT
Swimming goggles (SG) are widely used in water sports, and this study aimed to evaluate the acute effects of wearing SG on intraocular pressure (IOP), anterior chamber biometrics, axial length (AL), and optic nerve head (ONH) morphology. Twenty-eight healthy young adults participated in this cross-sectional study, with assessed parameters including IOP, central corneal thickness (CCT), anterior chamber depth (ACD), anterior chamber angle (ACA), AL, and optical coherence tomography (OCT) imaging of the ONH, specifically Bruch membrane opening (BMO), Bruch membrane opening-minimum rim width (BMO-MRW), lamina cribrosa depth (LCD), and prelaminar tissue (PLT). Measurements were taken at four time points: before wearing SG, at the 1st and 10th minutes of wearing, and immediately after removal. The results showed a significant increase in IOP at the 1st and 10th minutes of SG wear compared to pre-wear and post-removal values. Additionally, decreases in CCT, ACD, and ACA, along with an increase in AL, were observed while wearing SG. However, these changes reverted to baseline after the goggles were removed. No significant alterations were detected in ONH parameters during the study. The findings suggest that wearing SG induces an acute rise in IOP and changes in anterior segment parameters, likely due to oculopression, but does not appear to affect ONH morphology in the short term. Further studies are needed to investigate any potential long-term effects.
Subjects
Anterior Chamber , Biometry , Eye Protective Devices , Intraocular Pressure , Optic Disk , Swimming , Tomography, Optical Coherence , Humans , Intraocular Pressure/physiology , Male , Anterior Chamber/diagnostic imaging , Female , Young Adult , Optic Disk/diagnostic imaging , Adult , Tomography, Optical Coherence/methods , Cross-Sectional Studies
ABSTRACT
Purpose: In cataract surgery, accurate intraocular lens (IOL) power calculations are crucial for optimal postoperative refractive outcomes. This study explores the impact of prioritizing the reduction of the standard deviation (SD) of prediction errors before mean prediction error (PE) adjustment on IOL calculation formula precision and accuracy. Methods: We conducted a retrospective analysis of 4885 eyes from 2611 patients, all implanted with the same IOL model, comparing four traditional IOL power calculation formulas: SRK/T, Holladay 1, Haigis, and Hoffer Q. We introduced new constants aiming to minimize the SD of PE (new_const) against traditionally optimized constants (classic_const), using a heteroscedastic statistical method for comparison. Validation of precision improvements used a secondary dataset of 262 eyes from 132 patients. Results: We observed significant reductions in mean absolute error (MAE) across training and test sets for Hoffer Q, Holladay, and Haigis formulas, indicating accuracy enhancements. Optimized constants significantly reduced SDs for Haigis from 0.3255 to 0.3153 and for Hoffer Q from 0.3521 to 0.3387. These optimizations also increased the proportion of eyes achieving PE within ±0.25 D. SRK/T showed improved SD from 0.3596 to 0.3585. However, Holladay 1 showed minimal change with no significant improvement. In the test dataset, significant reductions in SD were observed for Haigis and Hoffer Q. Conclusions: Prioritizing SD minimization before adjusting mean PE significantly improves the precision of selected IOL power formulas, enhancing postoperative refractive outcomes. The effectiveness varies among formulas, underscoring the need for formula-specific adjustments. Translational Relevance: The study presents a novel two-step approach for optimizing IOL power calculations.
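The two-step idea — first choose the constant minimizing the SD of prediction errors, then absorb the remaining mean error with an additive offset — can be sketched on toy data where the constant acts multiplicatively. Real IOL formulas use their constants nonlinearly, and the paper compares SDs with a heteroscedastic test not shown here; all numbers below are invented.

```python
from statistics import mean, pstdev

def two_step_optimize(base, obs, grid):
    """Step 1: pick the constant minimizing the SD of prediction errors
    obs - c * base over a grid. Step 2: zero the mean error with an offset.
    Toy multiplicative role for the constant (an assumption, not a formula)."""
    best_c = min(grid, key=lambda c: pstdev([o - c * b for b, o in zip(base, obs)]))
    offset = mean([o - best_c * b for b, o in zip(base, obs)])
    return best_c, offset

base = [1.00, 2.00, 3.00, 4.00]          # toy formula outputs per eye
obs = [1.35, 2.60, 3.85, 5.10]           # toy observed refractions
c, off = two_step_optimize(base, obs, [x / 100 for x in range(100, 151)])
```

Here the data were built as obs = 1.25 · base + 0.10, so SD minimization recovers c = 1.25 and the mean-zeroing step recovers the 0.10 offset — the precision (SD) and accuracy (mean) adjustments are decoupled, which is the point of the two-step approach.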
Subjects
Lenses, Intraocular , Optics and Photonics , Refraction, Ocular , Humans , Retrospective Studies , Female , Refraction, Ocular/physiology , Male , Aged , Middle Aged , Lens Implantation, Intraocular , Visual Acuity/physiology , Biometry/methods , Cataract Extraction/methods , Phacoemulsification
ABSTRACT
With the continual advance of modern technologies, it has become increasingly common for the number of collected confounders to exceed the number of subjects in a data set. However, matching-based methods for estimating causal treatment effects in their original forms cannot handle high-dimensional confounders, and their various modified versions lack statistical support and valid inference tools. In this article, we propose a new approach for estimating the causal treatment effect, defined as the difference of the restricted mean survival time (RMST) under different treatments, in a high-dimensional setting for survival data. We combine the factor model and sufficient dimension reduction techniques to construct a propensity score and a prognostic score. Based on these scores, we develop a kernel-based doubly robust estimator of the RMST difference. We demonstrate its link to matching and establish the consistency and asymptotic normality of the estimator. We illustrate our method by analyzing a dataset from a study aimed at comparing the effects of two alternative treatments on the RMST of patients with diffuse large B cell lymphoma.
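The RMST itself is the area under the survival curve up to a horizon τ. A minimal Kaplan-Meier-based sketch (distinct observed times assumed; none of the paper's weighting or doubly robust machinery):

```python
def rmst(times, events, tau):
    """Restricted mean survival time up to tau: area under the Kaplan-Meier
    curve. times: follow-up times; events: 1 = event, 0 = censored.
    Assumes distinct observed times for simplicity."""
    data = sorted(zip(times, events))
    n_risk = len(data)
    t_prev, surv, area = 0.0, 1.0, 0.0
    for t, d in data:
        if t > tau:
            break
        area += surv * (t - t_prev)        # rectangle under current step
        if d:
            surv *= (n_risk - 1) / n_risk  # KM drop at an event time
        n_risk -= 1
        t_prev = t
    area += surv * (tau - t_prev)          # final rectangle up to tau
    return area

value = rmst([1, 2, 3, 4], [1, 0, 1, 1], tau=4.0)
```

The causal contrast targeted in the abstract is the difference of such areas between the two treatment arms, here estimated doubly robustly rather than by naive arm-wise Kaplan-Meier curves.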