Results 1 - 20 of 83
1.
MethodsX ; 13: 102802, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39105092

ABSTRACT

This study proposes the development of a nonparametric regression model combined with geographically weighted regression. The model accounts for geographical factors and for data patterns that do not follow a parametric form, thereby addressing spatial heterogeneity and unknown regression functions. This study aims to model provincial food security index data in Indonesia with the GWSNR model, which requires finding the optimal knot point and the best geographic weighting. We propose selecting the optimal knot points using the Cross Validation (CV) and Generalized Cross Validation (GCV) methods. The optimal knot point controls the accuracy of the regression curve; we also consider the MSE value in assessing the performance of the model. In addition, we determine the best geographic weighting and test the significance of the model parameters. We demonstrate the GWSNR model on food security index data. The best GWSNR model uses the Gaussian kernel weighting function and selects one knot point based on the lowest CV and GCV values. Simultaneous and partial parameter tests show that there are 10 area classifications with different effects in each classification group. Some of the highlights of the proposed approach are:
•The method develops a nonparametric regression model with geographic weighting, combining nonparametric and spatial regression to model the national food security index.
•Three knot points are tested in nonparametric truncated spline regression, along with three geographic weightings in spatial regression; the optimal knot point and best bandwidth are then determined using Cross Validation and Generalized Cross Validation.
•This article determines regional groupings in Indonesia in 2022 based on significant predictors in modeling the national food security index.
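As a rough sketch of GCV-based knot selection for a truncated spline (the data, grid, and function names below are invented for illustration, not taken from the study):

```python
import numpy as np

def truncated_basis(x, knots):
    # design matrix of a truncated linear spline: [1, x, (x - k)_+ for each knot]
    cols = [np.ones_like(x), x] + [np.maximum(x - k, 0.0) for k in knots]
    return np.column_stack(cols)

def gcv(x, y, knots):
    # Generalized Cross Validation score for a given knot set
    X = truncated_basis(x, knots)
    H = X @ np.linalg.pinv(X)                 # hat matrix of the least-squares fit
    resid = y - H @ y
    n = len(y)
    return (resid @ resid / n) / (1.0 - np.trace(H) / n) ** 2

# synthetic data with a kink at x = 0.5
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 80))
y = np.where(x < 0.5, x, 1.0 - x) + rng.normal(0.0, 0.05, 80)

knot_grid = np.linspace(0.1, 0.9, 17)
best = min(knot_grid, key=lambda k: gcv(x, y, (k,)))
print("GCV-optimal knot:", best)
```

The GWSNR model additionally weights each location's fit by a geographic kernel; the GCV search above would then run per bandwidth as well.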

2.
J Am Stat Assoc ; 119(546): 1102-1111, 2024.
Article in English | MEDLINE | ID: mdl-39184839

ABSTRACT

We propose a nonparametric bivariate time-varying coefficient model for longitudinal measurements with the occurrence of a terminal event that is subject to right censoring. The time-varying coefficients capture the longitudinal trajectories of covariate effects along both the follow-up time and the residual lifetime. The proposed model extends the parametric conditional approach given terminal event time in the recent literature, and thus avoids potential model misspecification. We consider a kernel smoothing method for estimating regression coefficients in our model and use cross-validation for bandwidth selection, applying undersmoothing in the final analysis to eliminate the asymptotic bias of the kernel estimator. We show that the kernel estimates follow a finite-dimensional normal distribution asymptotically under mild regularity conditions, and provide an easily computed sandwich covariance matrix estimator. We conduct extensive simulations that show the desirable performance of the proposed approach, and apply the method to analyzing medical cost data for patients with end-stage renal disease.
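A one-dimensional sketch of the kernel smoothing plus cross-validated bandwidth idea (this is not the paper's bivariate estimator; the 0.8 undersmoothing factor is an arbitrary illustration of the undersmoothing step):

```python
import numpy as np

def nw_smooth(x0, x, y, h):
    # Nadaraya-Watson estimate at points x0 with a Gaussian kernel of bandwidth h
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def loo_cv(x, y, h):
    # leave-one-out cross-validation score for bandwidth h
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    np.fill_diagonal(w, 0.0)                  # hold out each point from its own fit
    return np.mean((y - (w @ y) / w.sum(axis=1)) ** 2)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 2.0 * np.pi, 120))
y = np.sin(x) + rng.normal(0.0, 0.2, 120)

grid = np.linspace(0.05, 1.0, 20)
h_cv = grid[np.argmin([loo_cv(x, y, h) for h in grid])]
h_final = 0.8 * h_cv                          # undersmooth to shrink asymptotic bias
yhat = nw_smooth(x, x, y, h_final)
```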

3.
Spat Stat ; 61, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38779141

ABSTRACT

Particulate matter (PM) has emerged as a primary air quality concern due to its substantial impact on human health. Many recent research works suggest that PM2.5 concentrations depend on meteorological conditions. Enhancing current pollution control strategies necessitates a more holistic comprehension of PM2.5 dynamics and the precise quantification of spatiotemporal heterogeneity in the relationship between meteorological factors and PM2.5 levels. The spatiotemporal varying coefficient model stands as a prominent spatial regression technique adept at addressing this heterogeneity. Amidst the challenges posed by the substantial scale of modern spatiotemporal datasets, we propose a pioneering distributed estimation method (DEM) founded on multivariate spline smoothing across a domain's triangulation. This DEM algorithm ensures an easily implementable, highly scalable, and communication-efficient strategy, demonstrating almost linear speedup potential. We validate the effectiveness of our proposed DEM through extensive simulation studies, demonstrating that it achieves coefficient estimations akin to those of global estimators derived from complete datasets. Applying the proposed model and method to the US daily PM2.5 and meteorological data, we investigate the influence of meteorological variables on PM2.5 concentrations, revealing both spatial and seasonal variations in this relationship.
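The distributed-estimation idea can be caricatured as one-shot averaging of local fits, with each worker smoothing its own shard and only coefficients being communicated (ordinary polynomials stand in here for the paper's bivariate splines over a triangulation; all constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
f = lambda x: np.sin(2 * np.pi * x)

# data scattered across 8 workers; each fits locally, then coefficients are averaged
local_coefs = []
for _ in range(8):
    x = rng.uniform(0.0, 1.0, 500)
    y = f(x) + rng.normal(0.0, 0.3, 500)
    local_coefs.append(np.polyfit(x, y, 7))    # local polynomial stand-in for splines

avg_coefs = np.mean(local_coefs, axis=0)       # one communication round: averaging

grid = np.linspace(0.0, 1.0, 200)
rmse = np.sqrt(np.mean((np.polyval(avg_coefs, grid) - f(grid)) ** 2))
print(f"RMSE of the averaged estimator: {rmse:.4f}")
```

Because only coefficient vectors travel between workers, communication cost is independent of the local sample sizes, which is the property the DEM algorithm exploits at scale.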

4.
Biom J ; 66(3): e2300039, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38581095

ABSTRACT

In this paper, we propose a general framework for selecting tuning parameters for nonparametric derivative estimation. The new framework broadens the scope of the previously proposed generalized C_p criterion by replacing the empirical derivative with any other linear nonparametric smoother. We provide theoretical support for the proposed derivative estimation in a random design and justify it through simulation studies. The practical application of the proposed framework is demonstrated in the study of the age effect on hippocampal gray matter volume in healthy adults from the IXI dataset and the study of the effect of age and body mass index on blood pressure from the Pima Indians dataset.


Subject(s)
Statistics, Nonparametric , Humans , Computer Simulation , Body Mass Index , Blood Pressure
5.
MethodsX ; 12: 102536, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38274699

ABSTRACT

Geographically Weighted Nonparametric Regression (GWNR) is a variant of the Geographically Weighted Regression (GWR) model that has more parameters than the GWR model. Models with more parameters usually fit better, which is an advantage, while models with fewer parameters are easier to use and interpret. A model with more parameters should therefore be used only if it is proven to be significantly superior. The purpose of this study was thus to develop a goodness-of-fit hypothesis test for the GWNR model. The goodness-of-fit test was performed on real data, and we found that the GWNR model was more suitable than the mixed nonparametric regression model. Some highlights of the proposed method are:
•A new GWR-type model that handles the unknown regression function using a mixed truncated spline and Fourier series estimator in nonparametric regression.
•A goodness-of-fit test for GWNR that compares the fit of the mixed nonparametric regression model against the GWNR model.
•Application of the goodness-of-fit test to poverty data on Sulawesi Island and infant mortality data in East Java.

6.
Stat Med ; 43(6): 1103-1118, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38183296

ABSTRACT

Regression modeling is the workhorse of statistics and there is a vast literature on estimation of the regression function. It has been realized in recent years that in regression analysis the ultimate aim may be the estimation of a level set of the regression function, i.e., the set of covariate values for which the regression function exceeds a predefined level, instead of the estimation of the regression function itself. The published work on estimation of the level set has thus far focused mainly on nonparametric regression, especially on point estimation. In this article, the construction of confidence sets for the level set of linear regression is considered. In particular, 1 - α level upper, lower and two-sided confidence sets are constructed for the normal-error linear regression. It is shown that these confidence sets can be easily constructed from the corresponding 1 - α level simultaneous confidence bands. It is also pointed out that the construction method is readily applicable to other parametric regression models where the mean response depends on a linear predictor through a monotonic link function, which include generalized linear models, linear mixed models and generalized linear mixed models. Therefore, the method proposed in this article is widely applicable. Simulation studies with both linear and generalized linear models are conducted to assess the method and real examples are used to illustrate the method.
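A minimal sketch of deriving level-set confidence sets from a simultaneous band, under simple assumptions (normal-error simple linear regression, Scheffé-type band; the critical value 2.53 ≈ sqrt(2·F_{0.95;2,48}) is precomputed for this n and α, and all data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
x = np.linspace(0.0, 10.0, n)
y = 1.0 + 0.5 * x + rng.normal(0.0, 1.0, n)

# ordinary least squares fit
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (n - 2)
XtX_inv = np.linalg.inv(X.T @ X)

grid = np.linspace(0.0, 10.0, 201)
G = np.column_stack([np.ones_like(grid), grid])
fit = G @ beta
se = np.sqrt(sigma2 * np.einsum("ij,jk,ik->i", G, XtX_inv, G))

# Scheffe-type simultaneous band over all x
crit = 2.53
lower, upper = fit - crit * se, fit + crit * se

level = 4.0
inner_set = grid[lower >= level]   # confidently inside {x : m(x) >= level}
outer_set = grid[upper >= level]   # cannot be excluded from the level set
```

The inner set is a lower confidence set for the level set (every point in it clears the level with simultaneous coverage), while the complement of the outer set is confidently outside it.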


Subject(s)
Models, Statistical , Humans , Linear Models , Regression Analysis , Computer Simulation
7.
MethodsX ; 11: 102468, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37964783

ABSTRACT

The nonparametric regression model with the Fourier series approach was first introduced by Bilodeau in 1994. In later years, several researchers developed nonparametric regression models with the Fourier series approach. However, this research is limited to parameter estimation; there is no research related to parameter hypothesis testing. Parameter hypothesis testing is a statistical method used to test the significance of parameters. In the nonparametric regression model with the Fourier series approach, parameter hypothesis testing determines whether or not the estimated parameters have a significant influence on the model. Therefore, the purpose of this research is parameter hypothesis testing in the nonparametric regression model with the Fourier series approach. The method we use for hypothesis testing is the likelihood ratio test (LRT), which compares the likelihood functions under the parameter spaces of the null hypothesis and the alternative hypothesis. Using the LRT method, we obtain the form of the test statistic and its distribution, as well as the rejection region of the null hypothesis. To apply the method, we use ROA data from 47 publicly listed banks on the Indonesia stock exchange in 2020. The highlights of this research are:
•The Fourier series function is assumed to be a non-smooth function.
•The form of the test statistic is obtained using the LRT method and follows an F distribution.
•The estimated parameters in modelling the ROA data have a significant influence on the model.

8.
Stat Sin ; 33(1): 127-148, 2023 Jan.
Article in English | MEDLINE | ID: mdl-37153711

ABSTRACT

The goal of nonparametric regression is to recover an underlying regression function from noisy observations, under the assumption that the regression function belongs to a prespecified infinite-dimensional function space. In the online setting, in which the observations come in a stream, it is generally computationally infeasible to refit the whole model repeatedly. As yet, there are no methods that are both computationally efficient and statistically rate optimal. In this paper, we propose an estimator for online nonparametric regression. Notably, our estimator is an empirical risk minimizer in a deterministic linear space, which is quite different from existing methods that use random features and a functional stochastic gradient. Our theoretical analysis shows that this estimator obtains a rate-optimal generalization error when the regression function is known to live in a reproducing kernel Hilbert space. We also show, theoretically and empirically, that the computational cost of our estimator is much lower than that of other rate-optimal estimators proposed for this online setting.

9.
MethodsX ; 10: 101994, 2023.
Article in English | MEDLINE | ID: mdl-36691670

ABSTRACT

This study proposes the development of nonparametric regression for data containing spatial heterogeneity, with local parameter estimates for each observation location. GWTSNR combines Truncated Spline Nonparametric Regression (TSNR) and Geographically Weighted Regression (GWR). It is therefore necessary to determine the optimum knot point from TSNR and the best geographic weighting (bandwidth) from GWR, choosing both using Generalized Cross Validation (GCV). The case study analyzed the Morbidity Rate in North Sumatra in 2020. The model is estimated using 1, 2, and 3 knot points and four kernel weighting functions: Gaussian, bisquare, tricube, and exponential. Based on the data analysis, the best model for the Morbidity Rate data in North Sumatra 2020, by minimum GCV value, uses one knot and the bisquare kernel function. Based on the GWTSNR model, the significant predictors in each district/city were grouped into eight groups. Furthermore, the GWTSNR models morbidity rates in North Sumatra 2020 better than the TSNR, with an adjusted R-squared of 96.235 versus 70.159. Some of the highlights of the proposed approach are:
•The method combines nonparametric and spatial regression for morbidity rate modeling.
•Three knot points were tested in the truncated spline nonparametric regression and four geographic weightings in the spatial regression; the best knot and bandwidth were then determined using Generalized Cross Validation.
•This paper determines regional groupings in North Sumatra 2020 based on significant predictors in modeling morbidity rates.
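The kernel weighting functions compared in such GWR-type models differ mainly in their tails; a minimal sketch of two of them (the distances and bandwidth below are invented for illustration, and in practice the bandwidth would be chosen by CV/GCV as in the paper):

```python
import math

def gaussian_w(d, h):
    # Gaussian kernel: smooth, never exactly zero, so every location gets some weight
    return math.exp(-0.5 * (d / h) ** 2)

def bisquare_w(d, h):
    # bisquare kernel: compact support, exactly zero beyond the bandwidth
    return (1.0 - (d / h) ** 2) ** 2 if d < h else 0.0

# distances (e.g., in km) from one regression location to its neighbors
dists = [0.0, 12.0, 35.0, 80.0]
h = 50.0  # hypothetical bandwidth
print([round(gaussian_w(d, h), 3) for d in dists])
print([round(bisquare_w(d, h), 3) for d in dists])
```

With a bisquare kernel, observations beyond the bandwidth drop out of each local fit entirely, which is often why it is preferred when spatial effects are genuinely local.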

10.
Biometrics ; 79(3): 2394-2403, 2023 09.
Article in English | MEDLINE | ID: mdl-36511353

ABSTRACT

In data analysis using dimension reduction methods, the main goal is to summarize how the response is related to the covariates through a few linear combinations. One key issue is to determine the number of independent, relevant covariate combinations, which is the dimension of the sufficient dimension reduction (SDR) subspace. In this work, we propose an easily applied approach to conduct inference for the dimension of the SDR subspace, based on augmentation of the covariate set with simulated pseudo-covariates. Applying the partitioning principle to the possible dimensions, we use rigorous sequential testing to select the dimensionality, by comparing the strength of the signal arising from the actual covariates to that appearing to arise from the pseudo-covariates. We show that under a "uniform direction" condition, our approach can be used in conjunction with several popular SDR methods, including sliced inverse regression. In these settings, the test statistic asymptotically follows a beta distribution and therefore is easily calibrated. Moreover, the family-wise type I error rate of our sequential testing is rigorously controlled. Simulation studies and an analysis of newborn anthropometric data demonstrate the robustness of the proposed approach, and indicate that the power is comparable to or greater than that of the alternatives.
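A toy version of the augmentation idea, using sliced inverse regression and simply inspecting how much of each estimated direction falls on the pseudo-covariates (the paper's calibrated, beta-distributed sequential test is not reproduced here; all data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, p_aug = 600, 5, 5

X = rng.normal(size=(n, p))
v = X[:, 0] + X[:, 1]                        # true SDR direction: (1, 1, 0, 0, 0)
y = v + 0.3 * v ** 3 + rng.normal(0.0, 0.5, n)

# augment the covariates with pure-noise pseudo-covariates
A = np.hstack([X, rng.normal(size=(n, p_aug))])

# sliced inverse regression on the augmented matrix
A_std = (A - A.mean(0)) / A.std(0)
order = np.argsort(y)
slices = np.array_split(order, 10)
M = np.stack([A_std[s].mean(0) for s in slices])   # slice means of the covariates
evals, evecs = np.linalg.eigh(M.T @ M / len(slices))
evals, evecs = evals[::-1], evecs[:, ::-1]         # sort in descending order

# how much of each leading direction lies on the pseudo-covariates
for k in range(3):
    noise_mass = float(np.sum(evecs[p:, k] ** 2))
    print(f"direction {k}: eigenvalue {evals[k]:.3f}, pseudo-covariate mass {noise_mass:.3f}")
```

The genuine direction carries a large eigenvalue with almost no mass on the pseudo-covariates, while spurious directions look just like the augmented noise, which is the signal-comparison idea behind the sequential test.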


Subject(s)
Computer Simulation , Statistics as Topic
11.
J Comput Graph Stat ; 31(3): 802-812, 2022.
Article in English | MEDLINE | ID: mdl-36407675

ABSTRACT

Smoothing splines have been used pervasively in nonparametric regression. However, the computational burden of smoothing splines is significant when the sample size n is large. When the number of predictors d ≥ 2, the computational cost of smoothing splines is of order O(n^3) using the standard approach. Many methods have been developed to approximate smoothing spline estimators by using q basis functions instead of n, reducing the computational cost to order O(nq^2). These are called basis selection methods. Despite their algorithmic benefits, most basis selection methods require the assumption that the sample is uniformly distributed on a hypercube, and their performance may deteriorate when this assumption is not met. To overcome this obstacle, we develop an efficient algorithm that is adaptive to the unknown probability density function of the predictors. Theoretically, we show the proposed estimator has the same convergence rate as the full-basis estimator when q is roughly of order O(n^{2d/[(pr+1)(d+2)]}), where p ∈ [1, 2] and r ≈ 4 are constants that depend on the type of spline. Numerical studies on various synthetic datasets demonstrate the superior performance of the proposed estimator in comparison with mainstream competitors.

12.
Sensors (Basel) ; 22(21)2022 Oct 30.
Article in English | MEDLINE | ID: mdl-36366031

ABSTRACT

Unmanned ground vehicles (UGVs) are technically complex machines built to operate in difficult or dangerous environmental conditions. In recent years, there has been an increase in research on so-called "following vehicles". This concept introduces a guide: an object that sets the route the platform should follow. The role of the UGV is then to reproduce that path. The article is based on the field test results of an outdoor localization subsystem using ultra-wideband technology. It focuses on determining the guide's route using a smoothing spline for constructing a UGV path-planning subsystem, which is one of the stages in implementing a "follow-me" system. It has been shown that the use of a smoothing spline, due to the implemented mathematical model, allows the guide's path to be recreated even during data dropouts lasting up to several seconds. The innovation of this article lies in studying how the smoothing parameter influences the estimation errors of the guide's location.

13.
BMC Med Res Methodol ; 22(1): 113, 2022 04 18.
Article in English | MEDLINE | ID: mdl-35436861

ABSTRACT

BACKGROUND: Traditional mediation analysis typically examines the relations among an intervention, a time-invariant mediator, and a time-invariant outcome variable. Although there may be a total effect of the intervention on the outcome, there is a need to understand the process by which the intervention affects the outcome (i.e., the indirect effect through the mediator). This indirect effect is frequently assumed to be time-invariant. With improvements in data collection technology, it is possible to obtain repeated assessments over time resulting in intensive longitudinal data. This calls for an extension of traditional mediation analysis to incorporate time-varying variables as well as time-varying effects. METHODS: We focus on estimation and inference for the time-varying mediation model, which allows mediation effects to vary as a function of time. We propose a two-step approach to estimate the time-varying mediation effect. Moreover, we use a simulation-based approach to derive the corresponding point-wise confidence band for the time-varying mediation effect. RESULTS: Simulation studies show that the proposed procedures perform well when comparing the confidence band and the true underlying model. We further apply the proposed model and the statistical inference procedure to data collected from a smoking cessation study. CONCLUSIONS: We present a model for estimating time-varying mediation effects that allows both time-varying outcomes and mediators. Simulation-based inference is also proposed and implemented in a user-friendly R package.


Subject(s)
Models, Statistical , Negotiating , Causality , Computer Simulation , Humans , Time
14.
Med Decis Making ; 42(5): 612-625, 2022 07.
Article in English | MEDLINE | ID: mdl-34967237

ABSTRACT

BACKGROUND: Decisions about new health technologies are increasingly being made while trials are still in an early stage, which may result in substantial uncertainty around key decision drivers such as estimates of life expectancy and time to disease progression. Additional data collection can reduce uncertainty, and its value can be quantified by computing the expected value of sample information (EVSI), which has typically been described in the context of designing a future trial. In this article, we develop new methods for computing the EVSI of extending an existing trial's follow-up, first for an assumed survival model and then extending to capture uncertainty about the true survival model. METHODS: We developed a nested Markov Chain Monte Carlo procedure and a nonparametric regression-based method. We compared the methods by computing single-model and model-averaged EVSI for collecting additional follow-up data in 2 synthetic case studies. RESULTS: There was good agreement between the 2 methods. The regression-based method was fast and straightforward to implement, and scales easily to include any number of candidate survival models in the model uncertainty case. The nested Monte Carlo procedure, on the other hand, was extremely computationally demanding when we included model uncertainty. CONCLUSIONS: We present a straightforward regression-based method for computing the EVSI of extending an existing trial's follow-up, both where a single known survival model is assumed and where we are uncertain about the true survival model. EVSI for ongoing trials can help decision makers determine whether early patient access to a new technology can be justified on the basis of the current evidence or whether more mature evidence is needed. 
HIGHLIGHTS: Decisions about new health technologies are increasingly being made while trials are still in an early stage, which may result in substantial uncertainty around key decision drivers such as estimates of life expectancy and time to disease progression. Additional data collection can reduce uncertainty, and its value can be quantified by computing the expected value of sample information (EVSI), which has typically been described in the context of designing a future trial. In this article, we have developed new methods for computing the EVSI of extending a trial's follow-up, both where a single known survival model is assumed and where we are uncertain about the true survival model. We extend a previously described nonparametric regression-based method for computing EVSI, which we demonstrate in synthetic case studies is fast, straightforward to implement, and scales easily to include any number of candidate survival models in the EVSI calculations. The EVSI methods that we present in this article can quantify the need for collecting additional follow-up data before making an adoption decision, given any decision-making context.
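The regression-based EVSI calculation can be sketched in a few lines (all numbers below are hypothetical; a cubic polynomial stands in for the flexible nonparametric smoother, and a single comparator with zero net benefit is assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# probabilistic sensitivity analysis (PSA) draws of an uncertain treatment effect
theta = rng.normal(0.1, 0.2, n)              # hypothetical incremental effect
inb = 20000.0 * theta - 1500.0               # incremental net benefit per patient

# summary statistic of the proposed extra follow-up: a noisy estimate of theta
xsum = theta + rng.normal(0.0, 0.15, n)      # sampling error shrinks with longer follow-up

# regression step: estimate E[INB | summary] with a flexible curve
coefs = np.polyfit(xsum, inb, 3)             # cubic stand-in for a spline smoother
fitted = np.polyval(coefs, xsum)

# EVSI = value of deciding with the fitted conditional means vs. deciding now
evsi = np.mean(np.maximum(fitted, 0.0)) - max(np.mean(inb), 0.0)
print(f"EVSI per patient: {evsi:.0f}")
```

Because the regression replaces the inner Monte Carlo loop, averaging over candidate survival models only requires mixing PSA draws from each model before the fit, which is why the approach scales in the model-uncertainty case.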


Subject(s)
Monte Carlo Method , Cost-Benefit Analysis , Disease Progression , Humans , Markov Chains , Regression Analysis , Uncertainty
15.
Curr Diabetes Rev ; 18(7): e171121197990, 2022.
Article in English | MEDLINE | ID: mdl-34789135

ABSTRACT

BACKGROUND: Blood sugar and lifestyle problems have long been central issues in diabetes, and much research has addressed them. However, the number of diabetic patients is still increasing, and many patients are not aware of the onset of the disease. We therefore consider it very important to examine these two main problems of diabetes using a more flexible statistical approach, to obtain more specific results regarding the patient's condition. OBJECTIVE: Data for type 2 diabetes patients take the form of repeated measurements, so they are approached through a longitudinal study. We investigated the various intervals of pattern change that can occur in blood glucose, namely fasting, random, and 2 hours after meals, based on blood pressure and carbohydrate diets in diabetic patients in South Sulawesi Province, Indonesia. METHODS: This research is a longitudinal study proposing a flexible and accurate statistical approach: a weighted spline multi-response nonparametric regression model. This model is able to detect any pattern of change in irregular, high-dimensional data. The data were obtained from Hasanuddin University Teaching Hospital in South Sulawesi Province, Indonesia. The number of samples analyzed was 418 from 50 patients with different measurements. RESULTS: The optimal spline model was obtained at 2 knots for blood pressure and 3 knots for carbohydrate diets. Three blood pressure intervals give different patterns of increase in patient blood glucose levels, namely below 126.6 mmHg, 126.6-163.3 mmHg, and above 163.3 mmHg. Blood sugar was found to rise sharply at blood pressure above 163.3 mmHg. Furthermore, four carbohydrate diet intervals are formed: below 118.6 g, 118.6-161.8 g, 161.8-205 g, and above 205 g. Blood sugar decreased significantly over the carbohydrate diet interval of 161.8-205 g.
CONCLUSION: Blood glucose increases with a very high increase in blood pressure, whereas for a carbohydrate diet, there is no guarantee that a high diet will significantly reduce blood glucose. This may be affected by the patient's saturation with a very high carbohydrate diet.


Subject(s)
Blood Glucose , Diabetes Mellitus, Type 2 , Dietary Carbohydrates , Humans , Life Style , Longitudinal Studies
16.
Biostatistics ; 24(1): 52-67, 2022 12 12.
Article in English | MEDLINE | ID: mdl-33948617

ABSTRACT

Functional connectivity is defined as the undirected association between two or more functional magnetic resonance imaging (fMRI) time series. Increasingly, subject-level functional connectivity data have been used to predict and classify clinical outcomes and subject attributes. We propose a single-index model wherein response variables and sparse functional connectivity network valued predictors are linked by an unspecified smooth function in order to accommodate potentially nonlinear relationships. We exploit the network structure of functional connectivity by imposing meaningful sparsity constraints, which lead not only to the identification of association of interactions between regions with the response but also the assessment of whether or not the functional connectivity associated with a brain region is related to the response variable. We demonstrate the effectiveness of the proposed model in simulation studies and in an application to a resting-state fMRI data set from the Human Connectome Project to model fluid intelligence and sex and to identify predictive links between brain regions.


Subject(s)
Connectome , Nerve Net , Humans , Nerve Net/diagnostic imaging , Nerve Net/physiology , Connectome/methods , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Brain/physiology , Computer Simulation
17.
Ann Stat ; 50(5): 2848-2871, 2022 Oct.
Article in English | MEDLINE | ID: mdl-38169958

ABSTRACT

The goal of regression is to recover an unknown underlying function that best links a set of predictors to an outcome from noisy observations. In nonparametric regression, one assumes that the regression function belongs to a pre-specified infinite-dimensional function space (the hypothesis space). In the online setting, where the observations come in a stream, it is computationally preferable to iteratively update an estimate rather than refit an entire model repeatedly. Inspired by nonparametric sieve estimation and stochastic approximation methods, we propose a sieve stochastic gradient descent estimator (Sieve-SGD) when the hypothesis space is a Sobolev ellipsoid. We show that Sieve-SGD has rate-optimal mean squared error (MSE) under a set of simple and direct conditions. The proposed estimator can be constructed with low computational (time and space) expense: we also formally show that Sieve-SGD requires almost minimal memory usage among all statistically rate-optimal estimators.
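A minimal sketch of the sieve-plus-SGD idea: maintain coefficients on a cosine basis, grow the number of active basis functions slowly with the sample count, and take decaying gradient steps (the growth rate, step size, and target function here are arbitrary illustrations, not the paper's tuning):

```python
import math, random

random.seed(3)

def basis(x, j):
    # orthonormal cosine basis on [0, 1], a standard sieve for Sobolev ellipsoids
    return 1.0 if j == 0 else math.sqrt(2.0) * math.cos(math.pi * j * x)

J_MAX = 25
beta = [0.0] * J_MAX                             # sieve coefficients, updated online

f_true = lambda x: math.cos(2.0 * math.pi * x)   # equals basis(x, 2) / sqrt(2)

for t in range(1, 20001):
    x = random.random()
    y = f_true(x) + random.gauss(0.0, 0.3)
    J_t = min(J_MAX, 1 + int(t ** 0.25))         # slowly growing sieve size
    lr = 0.5 * t ** -0.5                         # decaying step size
    err = sum(beta[j] * basis(x, j) for j in range(J_t)) - y
    for j in range(J_t):
        beta[j] -= lr * err * basis(x, j)        # one SGD step, O(J_t) work

# mean squared error on an evaluation grid
grid = [i / 100 for i in range(100)]
mse = sum((sum(beta[j] * basis(g, j) for j in range(J_MAX)) - f_true(g)) ** 2
          for g in grid) / len(grid)
print(f"grid MSE after one pass: {mse:.4f}")
```

Each observation is touched once and only O(J_t) numbers are stored, which is the memory-efficiency property the abstract emphasizes.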

18.
Entropy (Basel) ; 23(10)2021 Oct 12.
Article in English | MEDLINE | ID: mdl-34682056

ABSTRACT

To monitor the Earth's surface, the satellites of the NASA Landsat program constantly provide image sequences of any region on Earth over time. These image sequences are a unique resource for studying the Earth's surface, changes in the Earth's resources over time, and their implications in agriculture, geology, forestry, and more. Besides the natural sciences, image sequences are also commonly used in functional magnetic resonance imaging (fMRI) in medical studies for understanding the functioning of brains and other organs. In practice, observed images almost always contain noise and other contamination. For reliable subsequent image analysis, it is important to remove such contamination in advance. This paper focuses on image sequence denoising, which has not yet been well discussed in the literature. To this end, an edge-preserving image denoising procedure is suggested. The suggested method is based on a jump-preserving local smoothing procedure, in which the bandwidths are chosen such that possible spatio-temporal correlations in the observed image intensities are accommodated properly. Both theoretical arguments and numerical studies show that this method works well in the various cases considered.

19.
Stat Med ; 40(24): 5188-5198, 2021 10 30.
Article in English | MEDLINE | ID: mdl-34181277

ABSTRACT

Observational studies usually include participants representing a wide, heterogeneous population. The conditional causal effect, the treatment effect conditional on baseline characteristics, is of practical importance. Its estimation is subject to two challenges. First, the causal effect is not observable in any individual due to counterfactuality. Second, high-dimensional baseline variables are involved to satisfy the ignorable treatment selection assumption and to attain better estimation efficiency. In this work, a nonparametric estimation procedure, along with a pseudo-response, is proposed to estimate the conditional treatment effect through a "characteristic score", a parsimonious representation of baseline variable influence on treatment benefit. Adopting sparse dimension reduction with variable prescreening in the proposed estimation, we aim to identify the key baseline variables that impact the conditional treatment effect and to uncover the characteristic score that best predicts the treatment effect. This approach is applied to an HIV study for assessing the benefit of antiretroviral regimens and identifying the beneficiary subpopulation.


Subject(s)
Causality , Humans
20.
Article in English | MEDLINE | ID: mdl-33918420

ABSTRACT

(1) Background: As diabetes mellitus (DM) can affect the microvasculature, this study evaluates different clinical parameters and the vascular density of the ocular surface microvasculature in diabetic patients. (2) Methods: In this cross-sectional study, red-free conjunctival photographs of diabetic individuals aged 30-60 were taken under defined conditions and analyzed using a Radon transform-based algorithm for vascular segmentation. The Areas Occupied by Vessels (AOV) of different diameters were calculated to establish the sum of AOV of different-sized vessels. We adopt a novel approach, the Tilted Additive Model (TAM), to investigate the association between clinical characteristics as predictors and AOV as the outcome. We use a tilted nonparametric regression estimator to estimate the nonlinear effect of predictors on the outcome in the additive setting for the first time. (3) Results: The results show that Age (p-value = 0.019) and Mean Arterial Pressure (MAP) (p-value = 0.034) have significant linear effects on AOV. We also find a nonlinear association between Body Mass Index (BMI), daily Urinary Protein Excretion (UPE), Hemoglobin A1C, and Blood Urea Nitrogen (BUN) with AOV. (4) Conclusions: As many predictors do not have a linear relationship with the outcome, we conclude that the TAM will help better elucidate the effect of the different predictors. The highest level of AOV can be seen at Hemoglobin A1C of 9%, and AOV increases when the daily UPE exceeds 600 mg. These effects need to be considered in future studies of ocular surface vessels of diabetic patients.


Subject(s)
Diabetes Mellitus , Eye/blood supply , Adult , Algorithms , Cross-Sectional Studies , Glycated Hemoglobin , Humans , Middle Aged