Results 1 - 20 of 103
1.
Sci Rep ; 14(1): 20865, 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39242750

ABSTRACT

Partial accelerated life tests (PALTs) are employed when the results of accelerated life testing cannot be extrapolated to usage conditions. This work discusses the challenge of different estimation strategies in constant PALT with complete data. The lifetime distribution of the test item is assumed to follow the power half-logistic distribution. Several classical and Bayesian estimation techniques are presented to estimate the distribution parameters and the acceleration factor of the power half-logistic distribution. These techniques include Anderson-Darling, maximum likelihood, Cramér-von Mises, ordinary least squares, weighted least squares, maximum product of spacing, and Bayesian estimation. Additionally, Bayesian credible intervals and approximate confidence intervals are constructed. A simulation study is provided to compare the outcomes of the various estimation methods based on mean squared error (MSE), absolute average bias, length of intervals, and coverage probabilities. The study shows that, among the classical options, maximum product of spacing estimation is the most effective strategy in most circumstances, attaining the minimum values of MSE and average bias; when the Bayesian method is included, it outperforms the other methods in the majority of situations with respect to both MSE and average bias. When comparing approximate confidence intervals to Bayesian credible intervals, the latter have a higher coverage probability and smaller average length. Two real data sets are examined for illustrative purposes, showing that the presented methods are workable and applicable to engineering-related problems.
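Among the listed techniques, maximum product of spacing (MPS) is the least familiar; the sketch below illustrates it on simulated data. The CDF parameterization F(x) = (1 - e^(-λx^a))/(1 + e^(-λx^a)) for the power half-logistic distribution is an assumption for illustration, not necessarily the paper's exact form:

```python
import numpy as np
from scipy.optimize import minimize

def phl_cdf(x, a, lam):
    # Assumed power half-logistic CDF: F(x) = (1 - e^{-lam x^a}) / (1 + e^{-lam x^a})
    z = np.exp(-lam * x ** a)
    return (1.0 - z) / (1.0 + z)

def neg_mps(params, x_sorted):
    # MPS maximizes the mean log-spacing of the fitted CDF over the sorted sample
    a, lam = params
    if a <= 0 or lam <= 0:
        return np.inf
    F = phl_cdf(x_sorted, a, lam)
    spacings = np.diff(np.concatenate(([0.0], F, [1.0])))
    return -np.mean(np.log(np.clip(spacings, 1e-12, None)))

# simulate from the assumed model (a = 2, lam = 1) by inverting the CDF
rng = np.random.default_rng(1)
u = rng.uniform(size=200)
x = np.sort((-np.log((1 - u) / (1 + u))) ** 0.5)

fit = minimize(neg_mps, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
print("MPS estimates (a, lambda):", fit.x)
```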

2.
Talanta ; 275: 126078, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38678921

ABSTRACT

A method for the simultaneous determination of nitrogen content and 15N isotope abundance in plants was established by elemental analysis-gas isotope ratio mass spectrometry. Taking poplar leaves and L-glutamic acid as standards, nitrogen content was determined using a standard curve established by weighted least squares regression between the mass of elemental nitrogen and the total peak height intensity at m/z 28 and 29. The 15N isotope abundance was then calculated from the peak height intensities at m/z 28 and 29. Through the comparison of several sets of experiments, the impacts of the mass discrimination effect, tin capsule consumables, the isotope memory effect, and the quality of nitrogen on the results were assessed. The results showed that, with a weight of 1/x², the standard curve has a coefficient of determination (R²) of 0.9996. Compared to the traditional Kjeldahl method, the measured nitrogen content deviated by less than 0.2%, and the standard deviation (SD) was less than 0.2%. Compared to the sodium hypobromite method, the 15N isotopic abundances differed by less than 0.2 atom% 15N, and the SD was less than 0.2 atom% 15N. The established method is fast, simple, accurate, and high-throughput, providing a novel approach for the simultaneous determination of nitrogen content and 15N isotope abundance in plant samples.
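The 1/x² weighting reported here is what keeps the relative, rather than absolute, error roughly constant across the calibration range. A minimal sketch of such a weighted standard curve, with hypothetical masses and intensities (the abstract does not give the raw numbers):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical calibration data: nitrogen mass (mg) vs. summed peak
# height intensity at m/z 28 and 29 (arbitrary units)
mass_n = np.array([0.02, 0.05, 0.10, 0.20, 0.40, 0.80])
peak = np.array([410., 1020., 2050., 4090., 8150., 16300.])

X = sm.add_constant(mass_n)
wls = sm.WLS(peak, X, weights=1.0 / mass_n**2).fit()  # weight w = 1/x^2
print(wls.params)    # intercept and slope of the standard curve
print(wls.rsquared)  # coefficient of determination R^2

# invert the curve to quantify nitrogen in an unknown sample
b0, b1 = wls.params
unknown_peak = 3000.0
print("estimated N mass:", (unknown_peak - b0) / b1)
```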


Subjects
Nitrogen Isotopes; Nitrogen; Nitrogen Isotopes/analysis; Nitrogen/analysis; Nitrogen/chemistry; Plant Leaves/chemistry; Mass Spectrometry/methods; Populus/chemistry
3.
J Environ Manage ; 351: 119881, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38150925

ABSTRACT

In today's world, where economic development and environmental sustainability are increasingly important aspects of national strategy, attention to the impact of different economic sectors on climate change has become an integral part of scientific research. This article analyzes the impact of the development of the primary and secondary economic sectors on carbon dioxide (CO2) emissions at the sub-national level in Russia from 2005 to 2019. The aim of the study is to provide an in-depth understanding of the relationships between the dynamics of these sectors and CO2 emission levels in different regions of the country. Weighted regression and panel data methods were applied to better identify the patterns of the impact. The results show that population size and electricity consumption have the highest impact on CO2 emissions. Thus, the expansion of nuclear and gas generation capacity, as well as significant improvements in energy efficiency, are of crucial importance for reducing emissions. Other sectors have a heterogeneous impact and require more differentiated approaches that consider the specifics of the regions. Given the significant differences between the Russian constituent entities, this paper emphasizes the low informativeness of assessments at the national level and their inadequacy for improving the efficiency of domestic management, including decarbonization policies.


Subjects
Carbon Dioxide; Economic Development; Carbon Dioxide/analysis; Industries; Climate Change; Russian Federation
4.
Sensors (Basel) ; 23(12)2023 Jun 17.
Article in English | MEDLINE | ID: mdl-37420826

ABSTRACT

This work presents a data-driven factor graph (FG) model designed to perform anchor-based positioning. The system uses the FG to compute the target position, given distance measurements to anchor nodes that know their own positions. The aim was to design a hybrid structure (involving both data and modeling approaches) to address positioning models from a Bayesian point of view, customizing them for each technology and scenario. The weighted geometric dilution of precision (WGDOP) metric, which measures the effect on the positioning solution of the distance error to the corresponding anchor node and of the network geometry of the anchor nodes, was taken into account. The presented algorithms were tested with simulated data and also with real-life data collected from IEEE 802.15.4-compliant sensor network nodes with a physical layer based on ultra-wide band (UWB) technology, in scenarios with one target node, three or four anchor nodes, and a time-of-arrival-based ranging technique. The results showed that the presented FG-based algorithm provided better positioning results than least squares-based algorithms and even UWB-based commercial systems in various scenarios, with different setups in terms of geometries and propagation conditions.
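For readers unfamiliar with the WGDOP metric, the following sketch computes a generic weighted GDOP for range-based positioning from the anchor geometry and per-anchor range variances; it is a simplified illustration, not the paper's FG model:

```python
import numpy as np

def wgdop(target, anchors, range_var):
    """Weighted GDOP for range-based positioning (generic sketch).

    target:    position at which the metric is evaluated, shape (d,)
    anchors:   anchor coordinates, shape (k, d)
    range_var: variance of each distance measurement, shape (k,)
    """
    diff = anchors - target
    H = diff / np.linalg.norm(diff, axis=1, keepdims=True)  # unit line-of-sight vectors
    W = np.diag(1.0 / np.asarray(range_var))
    cov = np.linalg.inv(H.T @ W @ H)  # (d, d) geometry/weight covariance
    return np.sqrt(np.trace(cov))

anchors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
print(wgdop(np.array([5., 5.]), anchors, range_var=[0.04] * 4))    # good geometry
print(wgdop(np.array([30., 30.]), anchors, range_var=[0.04] * 4))  # poor geometry
```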


Subjects
Algorithms; Technology; Bayes Theorem
5.
Sensors (Basel) ; 23(14)2023 Jul 09.
Article in English | MEDLINE | ID: mdl-37514549

ABSTRACT

This paper develops a new time difference of arrival (TDOA) emitter localization algorithm in 3D space, employing conic approximations of the hyperboloids associated with TDOA measurements. TDOA measurements are first converted to 1D angle of arrival (1D-AOA) measurements that define TDOA cones centred about the axes connecting the corresponding TDOA sensor pairs. Then, the emitter location is calculated from the triangulation of the 1D-AOAs, which is formulated as a system of nonlinear equations and solved by a low-complexity two-stage estimation algorithm composed of an iterative weighted least squares (IWLS) estimator and a Taylor series estimator aimed at refining the IWLS estimate. Important conclusions are reached about the optimality of sensor-emitter and sensor array geometries. The approximate efficiency of the IWLS estimator is also established under mild conditions. Numerical simulations show that the new two-stage estimator can outperform the maximum likelihood estimator while performing very close to the Cramér-Rao lower bound in poor sensor-emitter geometries and under large noise.
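The IWLS stage can be illustrated with a generic Gauss-Newton iteration on the range-difference equations; the sketch below works directly with hyperbolic residuals rather than the paper's conic 1D-AOA conversion, and the weights and starting point are placeholders:

```python
import numpy as np

def tdoa_iwls(sensors, tdoa_ranges, x0, weights, iters=20):
    """Iterative weighted least squares for TDOA localization (generic sketch).

    sensors:     (k, 3) sensor positions; sensors[0] is the reference
    tdoa_ranges: (k-1,) measured range differences ||x - s_i|| - ||x - s_0||
    """
    x = np.asarray(x0, dtype=float)
    W = np.diag(weights)
    for _ in range(iters):
        d = np.linalg.norm(sensors - x, axis=1)
        r = tdoa_ranges - (d[1:] - d[0])   # residuals of the range differences
        u = (x - sensors) / d[:, None]     # unit vectors sensor -> x
        J = u[1:] - u[0]                   # Jacobian of the range differences
        dx = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        x = x + dx
        if np.linalg.norm(dx) < 1e-9:
            break
    return x

sensors = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0],
                    [0, 0, 100], [100, 100, 50]], float)
emitter = np.array([40., 60., 30.])
d = np.linalg.norm(sensors - emitter, axis=1)
meas = (d[1:] - d[0]) + np.random.default_rng(0).normal(0, 0.1, 4)
print(tdoa_iwls(sensors, meas, x0=[50, 50, 50], weights=np.ones(4)))
```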

6.
J Clin Epidemiol ; 157: 53-58, 2023 05.
Article in English | MEDLINE | ID: mdl-36889450

ABSTRACT

OBJECTIVES: To evaluate how well meta-analysis mean estimators represent reported medical research and establish which meta-analysis method is better using widely accepted model selection measures: Akaike information criterion (AIC) and Bayesian information criterion (BIC). STUDY DESIGN AND SETTING: We compiled 67,308 meta-analyses from the Cochrane Database of Systematic Reviews (CDSR) published between 1997 and 2020, collectively encompassing nearly 600,000 medical findings. We compared unrestricted weighted least squares (UWLS) vs. random effects (RE); fixed effect was also secondarily considered. RESULTS: The probability that a randomly selected systematic review from the CDSR would favor UWLS over RE is 79.4% (95% confidence interval [CI95%]: 79.1; 79.7). The odds ratio that a Cochrane systematic review would substantially favor UWLS over RE is 9.33 (CI95%: 8.94; 9.73) using the conventional criterion that a difference in AIC (or BIC) of two or larger represents a 'substantial' improvement. UWLS's advantage over RE is most prominent in the presence of low heterogeneity. However, UWLS also has a notable advantage in high heterogeneity research, across different sizes of meta-analyses and types of outcomes. CONCLUSION: UWLS frequently dominates RE in medical research, often substantially. Thus, the UWLS should be reported routinely in the meta-analysis of clinical trials.
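The AIC comparison at the heart of this study can be reproduced for a single meta-analysis along the following lines. This is a hedged sketch: UWLS is fit as y_i ~ N(μ, φ·se_i²) with a plug-in variance multiplier, and the RE model uses the DerSimonian-Laird moment estimate of τ² rather than full maximum likelihood, so the likelihoods are approximate:

```python
import numpy as np

def gauss_loglik(y, mu, var):
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (y - mu) ** 2 / var)

def compare_uwls_re(y, se):
    """AIC for UWLS vs. random effects; both models carry two parameters."""
    v = se ** 2
    w = 1.0 / v
    k = len(y)

    # UWLS: y_i ~ N(mu, phi * se_i^2)
    mu_u = np.sum(w * y) / np.sum(w)
    phi = np.mean((y - mu_u) ** 2 / v)       # ML plug-in multiplier
    ll_u = gauss_loglik(y, mu_u, phi * v)

    # RE: y_i ~ N(mu, se_i^2 + tau^2), DerSimonian-Laird tau^2
    Q = np.sum(w * (y - mu_u) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)
    w_re = 1.0 / (v + tau2)
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    ll_re = gauss_loglik(y, mu_re, v + tau2)

    return {"AIC_UWLS": 4 - 2 * ll_u, "AIC_RE": 4 - 2 * ll_re}

rng = np.random.default_rng(7)
se = rng.uniform(0.1, 0.4, 15)
y = 0.3 + rng.normal(0, se)   # homogeneous true effects favour UWLS
print(compare_uwls_re(y, se))
```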


Subjects
Biomedical Research; Humans; Least-Squares Analysis; Bayes Theorem; Systematic Reviews as Topic
7.
J Appl Stat ; 50(3): 703-723, 2023.
Article in English | MEDLINE | ID: mdl-36819074

ABSTRACT

Feature selection is an important data dimension reduction method, and it has been widely used in applications involving high-dimensional data such as genetic data analysis and image processing. To achieve robust feature selection, recent works apply the ℓ2,1- or ℓ2,p-norm of a matrix to the loss function and regularization terms in regression, and have achieved encouraging results. However, these existing works rigidly set the matrix norms used in the loss function and the regularization terms to the same ℓ2,1- or ℓ2,p-norm, which limits their applications. In addition, the algorithms they present for the solutions either have high computational complexity and are not suitable for large data sets, or cannot provide satisfactory performance due to approximate calculations. To address these problems, we present a generalized ℓ2,p-norm regression-based feature selection (ℓ2,p-RFS) method based on a new optimization criterion. The criterion extends the ℓ2,p-RFS optimization criterion to the case where the loss function and the regularization terms in regression use different matrix norms. We cast the new optimization criterion in a regression framework without regularization. In this framework, the new optimization criterion can be solved using an iteratively re-weighted least squares (IRLS) procedure, in which each least squares subproblem is solved efficiently by the least squares QR decomposition (LSQR) algorithm. We have conducted extensive experiments to evaluate the proposed algorithm on various well-known gene expression and image data sets, and compare it with other related feature selection methods.
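A minimal sketch of the IRLS-plus-LSQR strategy for a single response vector, so the row norm reduces to |r_i|; the ε floor that keeps the re-weighting bounded is a standard safeguard, and multi-output data would use the ℓ2 norm of each residual row instead:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def l2p_regression_irls(X, y, p=1.0, iters=30, eps=1e-6):
    """Minimize sum_i |x_i^T w - y_i|^p by IRLS, solving each weighted
    least squares subproblem with LSQR."""
    w = lsqr(X, y)[0]  # OLS start
    for _ in range(iters):
        r = np.abs(X @ w - y)
        # square root of the IRLS weights |r_i|^{p-2}, floored at eps
        d = np.power(np.maximum(r, eps), (p - 2) / 2.0)
        w_new = lsqr(X * d[:, None], y * d)[0]
        if np.linalg.norm(w_new - w) < 1e-8:
            return w_new
        w = w_new
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = np.zeros(10); w_true[:3] = [2.0, -1.0, 0.5]
y = X @ w_true + rng.standard_t(df=2, size=200) * 0.5  # heavy-tailed noise
print(np.round(l2p_regression_irls(X, y, p=1.0), 2))
```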

8.
Sensors (Basel) ; 23(3)2023 Feb 01.
Article in English | MEDLINE | ID: mdl-36772637

ABSTRACT

To achieve low cost and robustness, an indoor location system using simple visual tags is designed by comprehensively considering accuracy and computational complexity. Only color and shape features are used for tag detection, which reduces both algorithm complexity and data storage requirements. To manage the non-uniqueness problem caused by the simple tag features, a fast query and matching method is further presented that uses the camera's field of view and the tag azimuth. Then, based on an analysis of the relationship between the spatial distribution of tags and location error, a pose and position estimation method using the weighted least squares algorithm is designed; it works together with the interactive algorithm through the designed switching strategy. With these techniques, a favorable balance is achieved between algorithm complexity and location accuracy. Simulation and experiment results show that the proposed method can effectively manage the singularity problem of the overdetermined equations and attenuate the negative effect of unfavorable tag groups. Compared with ultra-wideband technology, the location error is reduced by more than 62%.

9.
Front Genet ; 13: 913354, 2022.
Article in English | MEDLINE | ID: mdl-36531249

ABSTRACT

Here, we report the use of genome-wide association study (GWAS) for the analysis of canine whole-genome sequencing (WGS) repository data using breed phenotypes. Single-nucleotide polymorphisms (SNPs) were called from WGS data from 648 dogs that included 119 breeds from the Dog10K Genomes Project. Next, we assigned breed phenotypes for hip dysplasia (Orthopedic Foundation for Animals (OFA) HD, n = 230 dogs from 27 breeds; hospital HD, n = 279 dogs from 38 breeds), elbow dysplasia (ED, n = 230 dogs from 27 breeds), and anterior cruciate ligament rupture (ACL rupture, n = 279 dogs from 38 breeds), the three most important canine spontaneous complex orthopedic diseases. Substantial morbidity is common with these diseases. Previous within- and between-breed GWAS for HD, ED, and ACL rupture using array SNPs have identified disease-associated loci. Individual disease phenotypes are lacking in repository data. There is a critical knowledge gap regarding the optimal approach to undertaking categorical GWAS without individual phenotypes. We considered four GWAS approaches: a classical linear mixed model, a haplotype-based model, a binary case-control model, and a weighted least squares model using SNP average allelic frequency. We found that categorical GWAS was able to validate HD candidate loci. Additionally, we discovered novel candidate loci and genes for all three diseases, including FBXO25, IL1A, IL1B, COL27A1, SPRED2 (HD), UGDH, FAF1 (ED), TGIF2 (ED & ACL rupture), and IL22, IL26, CSMD1, LDHA, and TNS1 (ACL rupture). Therefore, categorical GWAS of ancestral dog populations may contribute to the understanding of any disease for which breed epidemiological risk data are available, including diseases for which GWAS has not been performed and candidate loci remain elusive.
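One plausible reading of the "weighted least squares model using SNP average allelic frequency" is a breed-level regression of disease risk on breed allele frequency, weighted by the number of dogs per breed; the sketch below illustrates that idea with hypothetical numbers and is not the authors' exact pipeline:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical breed-level data for one SNP: average alternate-allele
# frequency per breed, breed disease risk (e.g., HD prevalence), and
# number of sequenced dogs per breed
freq = np.array([0.10, 0.22, 0.35, 0.41, 0.55, 0.62, 0.71, 0.80])
risk = np.array([0.05, 0.09, 0.14, 0.18, 0.22, 0.27, 0.30, 0.36])
n_dogs = np.array([30, 12, 25, 8, 20, 15, 10, 18])

X = sm.add_constant(freq)
fit = sm.WLS(risk, X, weights=n_dogs).fit()  # breeds with more dogs count more
print(fit.params, fit.pvalues[1])            # slope p-value ~ SNP association

# A genome scan would repeat this per SNP and correct for multiple testing.
```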

10.
Sensors (Basel) ; 22(19)2022 Sep 21.
Article in English | MEDLINE | ID: mdl-36236260

ABSTRACT

Visible light positioning (VLP) has attracted intensive attention from both academic and industrial communities thanks to its high accuracy, immunity to electromagnetic interference, and low deployment cost. In general, the receiver in a VLP system determines its own position by exploiting the received signal strength (RSS) from the transmitter according to a pre-built RSS attenuation model. In such model-based methods, the LED's emission power and the receiver's height are usually required to be known, constant parameters to obtain reasonable positioning accuracy. However, the LED's emission power is normally time-varying, because the optical output power changes with the LED's temperature, and the receiver's height is random in realistic application scenarios. To this end, we propose a height-independent three-dimensional (3D) VLP scheme based on the RSS ratio (RSSR) rather than the RSS alone. Unlike existing RSS-based VLP methods, our method can independently find the horizontal coordinate, i.e., the two-dimensional (2D) position, without a priori height information about the receiver, and it also avoids the negative effect caused by fluctuations of the LED's emission power. Moreover, we can further infer the height of the receiver to achieve 3D positioning by iterating the 2D results back into the positioning equations. To verify the proposed scheme, we conduct theoretical analysis with mathematical proof and experiments with real data, which confirm that the proposed scheme can achieve high positioning accuracy without knowledge of the receiver's height or the LED's emission power. We also implement a VLP prototype with five LED transmitters, and experimental results show that the proposed scheme achieves very low average errors of 2.73 cm in 2D and 7.20 cm in 3D.

11.
Stat Methods Med Res ; 31(12): 2352-2367, 2022 12.
Article in English | MEDLINE | ID: mdl-36113153

ABSTRACT

The distribution of time-to-event outcomes is usually right-skewed. While the mean and median are appropriate location measures for symmetric and moderately skewed data, the mode is preferable for heavily skewed data, as it better represents the center of the distribution. Mode regression has been introduced for uncensored data to model the relationship between covariates and the mode of the outcome. Starting from nonparametric kernel-density-based mode regression, we examine the use of inverse probability of censoring weights to extend mode regression to right-censored data. We add a semiparametric predictor to give the model further flexibility, and we construct a pseudo-Akaike information criterion to select the bandwidth and smoothing parameters. We use simulations to evaluate the performance of the proposed approach, and we demonstrate the benefit of adding mode regression to one's toolbox for analyzing survival data on a pancreatic cancer data set from a prospectively maintained cancer registry.
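The core idea, re-weighting uncensored observations by the inverse Kaplan-Meier estimate of the censoring survival function inside a kernel-smoothed mode objective, can be sketched as follows. This is a simplified illustration (fixed bandwidth, linear predictor only), not the paper's semiparametric model with pseudo-AIC tuning:

```python
import numpy as np
from scipy.optimize import minimize

def censoring_survival(times, delta):
    # Kaplan-Meier estimate of the censoring survival S_C, treating
    # censored observations (delta == 0) as the "events"
    order = np.argsort(times)
    t_sorted = times[order]
    cens = (delta[order] == 0).astype(float)
    at_risk = len(times) - np.arange(len(times))
    surv = np.cumprod(1.0 - cens / at_risk)
    def S(t):
        idx = np.searchsorted(t_sorted, t, side="right") - 1
        return np.where(idx < 0, 1.0, surv[np.clip(idx, 0, None)])
    return S

def ipcw_mode_regression(x, times, delta, h):
    # Gaussian-kernel mode objective with inverse-probability-of-censoring weights
    S_C = censoring_survival(times, delta)
    w = delta / np.clip(S_C(times), 1e-8, None)  # zero weight for censored rows
    X = np.column_stack([np.ones(len(times)), x])
    def neg_obj(beta):
        r = (times - X @ beta) / h
        return -np.sum(w * np.exp(-0.5 * r ** 2))
    beta0 = np.linalg.lstsq(X, times, rcond=None)[0]  # OLS start
    return minimize(neg_obj, beta0, method="Nelder-Mead").x

rng = np.random.default_rng(3)
x = rng.uniform(0, 2, 300)
t_event = 0.5 + 0.8 * x + rng.gumbel(0, 0.3, 300)  # right-skewed, mode ~ 0.5 + 0.8x
c = rng.uniform(0.5, 4.0, 300)
times, delta = np.minimum(t_event, c), (t_event <= c).astype(float)
print(ipcw_mode_regression(x, times, delta, h=0.3))
```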


Subjects
Models, Statistical; Computer Simulation; Probability
12.
Sensors (Basel) ; 22(15)2022 Jul 26.
Article in English | MEDLINE | ID: mdl-35898071

ABSTRACT

Until now, RTK (real-time kinematic) and NRTK (network-based RTK) have been the most popular approaches for cm-level accurate real-time positioning based on Global Navigation Satellite System (GNSS) signals. The tropospheric delay is a major source of RTK errors, especially for medium and long baselines. This source of error is difficult to quantify due to its dependence on highly variable atmospheric humidity. In this paper, we use the NRTK approach to estimate double-differenced zenith tropospheric delays (ZTDs) alongside ambiguities and positions based on a complete set of multi-GNSS data in a sample 6-station network in Europe. The ZTD files published by the IGS were used to validate the estimated ZTDs. The results confirmed a good agreement, with an average root mean square error (RMSE) of about 12 mm. Although increasing the number of unknowns makes the mathematical model less reliable in correctly fixing integer ambiguities, adding a priori interpolated ZTDs as quasi-observations can improve positioning accuracy and integer ambiguity resolution (IAR) performance. In this work, weighted least squares (WLS) was performed using interpolated ZTD values from nearby reference stations of the IGS network. When using the well-known Kriging interpolation, the weights depend on the semivariogram, and a higher network density is required to obtain the correct covariance function. Hence, we used a simple interpolation strategy, which minimized the impact of altitude variability within the network. Compared to standard RTK, where the ZTD is assumed to be unknown, this technique improves the positioning accuracy by about 50%. It also increased the success rate for IAR by nearly 1.

13.
Metabolites ; 12(7)2022 Jun 23.
Article in English | MEDLINE | ID: mdl-35888712

ABSTRACT

Flux balance analysis (FBA) is a key method for the constraint-based analysis of metabolic networks. A technical problem may occur in FBA when known (e.g., measured) fluxes of certain reactions are integrated into an FBA scenario rendering the underlying linear program (LP) infeasible, for example, due to inconsistencies between some of the measured fluxes causing a violation of the steady-state or other constraints. Here, we present and compare two methods, one based on an LP and one on a quadratic program (QP), to find minimal corrections for the given flux values so that the FBA problem becomes feasible. We provide a general guide on how to treat infeasible FBA systems in practice and discuss relevant examples of potentially infeasible scenarios in core and genome-scale metabolic models. Finally, we also highlight and clarify the relationships to classical metabolic flux analysis, where solely algebraic approaches are used to compute unknown metabolic rates from measured fluxes and to balance infeasible flux scenarios.
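The QP variant can be sketched directly with scipy on a toy chain network: the measured fluxes violate the steady-state constraint, and the program finds the smallest squared correction that restores feasibility (the network, bounds, and measurements are invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Toy chain network A -> B -> C with uptake and export reactions:
#   r0: -> A,  r1: A -> B,  r2: B -> C,  r3: C ->
S = np.array([[ 1, -1,  0,  0],   # A balance
              [ 0,  1, -1,  0],   # B balance
              [ 0,  0,  1, -1]])  # C balance
bounds = [(0.0, 10.0)] * 4

# Inconsistent measurements: at steady state all chain fluxes must be
# equal, so fixing r0 = 5 and r3 = 7 makes the FBA problem infeasible.
measured = {0: 5.0, 3: 7.0}
idx = list(measured)
m = np.array([measured[i] for i in idx])

res = minimize(lambda v: np.sum((v[idx] - m) ** 2),  # minimal squared correction
               x0=np.ones(4),
               constraints=[{"type": "eq", "fun": lambda v: S @ v}],  # S v = 0
               bounds=bounds, method="SLSQP")
print("corrected fluxes:", np.round(res.x, 3))  # both measurements move to ~6.0
```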

14.
Clin Chem Lab Med ; 60(7): 989-994, 2022 06 27.
Article in English | MEDLINE | ID: mdl-35531706

ABSTRACT

OBJECTIVES: Recently, the linearity evaluation protocol of the Clinical & Laboratory Standards Institute (CLSI) was revised from EP6-A to EP6-ED2, with the statistical method for interpreting linearity evaluation data changed from polynomial regression to weighted least squares linear regression (WLS). We analyzed and compared analytical measurement range (AMR) verification results according to the current and prior linearity evaluation guidelines. METHODS: Verification of the AMR of clinical chemistry tests was performed using five samples with two replicates in three different laboratories. After analyzing the same evaluation data in each laboratory by both polynomial regression analysis and the WLS method, the results were compared to determine whether linearity was verified across the five sample concentrations. In addition, whether the 90% confidence interval of the deviation from linearity by WLS fell within the allowable deviation from linearity (ADL) was compared. RESULTS: Linearity was verified by polynomial regression analysis for 42.3-56.8% of the chemistry items in the three laboratories. When the same data were analyzed by WLS, linearity was verified for 63.5-78.3% of the test items, with the deviation from linearity of all five samples within the ADL criteria; the proportion of cases in which the 90% confidence intervals of all deviations from linearity overlapped the ADL was 78.8-91.3%. CONCLUSIONS: Interpreting AMR verification data by the WLS method according to the newly revised CLSI document EP6-ED2 could reduce laboratory workload, enabling efficient laboratory practice.
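A simplified sketch of the WLS interpretation: fit a weighted straight line through replicate results at five levels, then check each level's deviation from linearity and its 90% confidence interval against the ADL. The data, the 1/x² weights (a constant-CV assumption), and the ADL value are all invented, and EP6-ED2's exact interval construction differs in detail:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Hypothetical AMR verification: five levels, two replicates each
level = np.repeat([10.0, 20.0, 30.0, 40.0, 50.0], 2)
meas = np.array([9.8, 10.1, 19.7, 20.4, 30.2, 29.6, 41.0, 40.2, 50.9, 49.5])

X = sm.add_constant(level)
fit = sm.WLS(meas, X, weights=1.0 / level**2).fit()  # constant-CV weighting
b0, b1 = fit.params

ADL = 1.5  # allowable deviation from linearity, analyte units (assumed)
for lv in np.unique(level):
    reps = meas[level == lv]
    dev = reps.mean() - (b0 + b1 * lv)  # deviation from linearity at this level
    se = reps.std(ddof=1) / np.sqrt(len(reps))
    t90 = stats.t.ppf(0.95, len(reps) - 1)
    lo, hi = dev - t90 * se, dev + t90 * se
    print(f"level {lv:4.0f}: dev {dev:+.2f}, 90% CI ({lo:+.2f}, {hi:+.2f}), "
          f"overlaps ADL band: {lo <= ADL and hi >= -ADL}")
```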


Subjects
Clinical Chemistry Tests; Laboratories; Humans; Least-Squares Analysis; Linear Models; Reference Standards
15.
Stat Med ; 41(13): 2403-2416, 2022 06 15.
Article in English | MEDLINE | ID: mdl-35277866

ABSTRACT

Negative binomial regression is commonly employed to analyze overdispersed count data. With small to moderate sample sizes, the maximum likelihood estimator of the dispersion parameter may be subject to a significant bias, which in turn affects inference on the mean parameters. This article proposes inference for negative binomial regression based on adjustments of the score function aimed at mean or median bias reduction. The resulting estimating equations generalize those available for improved inference in generalized linear models and can be solved using a suitable extension of iterative weighted least squares. Simulation studies confirm the good properties of the new methods, which are also found to solve, in many cases, numerical problems of maximum likelihood estimation. The methods are illustrated and evaluated using two case studies: an Ames salmonella assay data set and data on epileptic seizures. Inference based on adjusted scores turns out to generally improve on maximum likelihood, and even on explicit bias correction, with median bias reduction being overall preferable.


Subjects
Models, Statistical; Bias; Computer Simulation; Humans; Least-Squares Analysis; Likelihood Functions; Sample Size
16.
Entropy (Basel) ; 24(1)2022 Jan 07.
Article in English | MEDLINE | ID: mdl-35052121

ABSTRACT

Liquid financial markets, such as the options market on the S&P 500 index, create vast amounts of data every day, i.e., so-called intraday data. However, this highly granular data is often reduced to a single time point per day when used to estimate financial quantities. This under-utilization of the data may reduce the quality of the estimates. In this paper, we study the impact on estimation quality of using intraday data to estimate dividends. The methodology is based on earlier linear regression (ordinary least squares) estimates, which have been adapted to intraday data. Further, the method is generalized in two aspects. First, the dividends are expressed as present values of future dividends rather than dividend yields. Second, to account for heteroscedasticity, the estimation methodology is formulated as a weighted least squares problem, where the weights are determined from the market data. This method is compared with a traditional method on out-of-sample S&P 500 European options market data. The results show that estimates based on intraday data have, with statistical significance, a higher quality than the corresponding single-time estimates. Additionally, the two generalizations of the methodology are shown to improve the estimation quality further.
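The present-value formulation can be illustrated through put-call parity: C - P = (S - PV(D)) - K·e^(-rT), so a weighted regression of C - P on strike K recovers the discount factor from the slope and PV(D) from the intercept. In this sketch the quotes are synthetic, and the inverse-squared-spread weights are just one plausible market-derived weighting, not necessarily the paper's:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
S, r, T, pv_div = 4500.0, 0.03, 0.5, 60.0
K = np.linspace(4000, 5000, 21)
spread = rng.uniform(0.5, 3.0, K.size)  # hypothetical bid-ask spreads
# synthetic call-minus-put mid quotes with spread-proportional noise
cp = (S - pv_div) - K * np.exp(-r * T) + rng.normal(0, spread / 2)

X = sm.add_constant(K)
# weight quotes by inverse squared spread (assumed heteroscedasticity proxy)
fit = sm.WLS(cp, X, weights=1.0 / spread**2).fit()
b0, b1 = fit.params
print("implied discount factor:", -b1)  # ~ exp(-r*T)
print("PV of dividends:", S - b0)       # ~ 60
```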

17.
Methodol Comput Appl Probab ; 24(4): 2633-2645, 2022.
Article in English | MEDLINE | ID: mdl-36619375

ABSTRACT

In the context of estimating stochastically ordered distribution functions, the pool-adjacent-violators algorithm (PAVA) can be modified such that the computation times are reduced substantially. This is achieved by studying the dependence of antitonic weighted least squares fits on the response vector to be approximated.
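For reference, a plain textbook implementation of the weighted PAVA, without the speed-ups proposed in the paper; the antitonic case is obtained by negating the response:

```python
import numpy as np

def pava(y, w):
    """Weighted least squares fit of a non-decreasing sequence to y."""
    y, w = list(map(float, y)), list(map(float, w))
    blocks = []  # each block is [mean, total weight, size]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # pool adjacent blocks while they violate monotonicity
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            blocks.append([(w1 * m1 + w2 * m2) / (w1 + w2), w1 + w2, n1 + n2])
    return np.concatenate([[m] * n for m, _, n in blocks])

y = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 4.5])
w = np.array([1.0, 1.0, 2.0, 1.0, 3.0, 1.0])
print(pava(y, w))  # non-decreasing, pooling the violating neighbours
```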

18.
Wirel Pers Commun ; 124(2): 1623-1644, 2022.
Article in English | MEDLINE | ID: mdl-34873380

ABSTRACT

Location-enabled Internet of things (IoT) has attracted much attention from the scientific and industrial communities given its high relevance in application domains such as agriculture, wildlife management, and infectious disease control. The frequency and accuracy of location information play an important role in the success of these applications. However, frequent and accurate self-localization of IoT devices is challenging due to their resource-constrained nature. In this paper, we propose a new algorithm for self-localization of IoT devices using noisy received signal strength indicator (RSSI) measurements and perturbed anchor node position estimates. In the proposed algorithm, we minimize a weighted sum-square-distance-error cost function in an iterative fashion utilizing the gradient-descent method. We calculate the weights using the statistical properties of the perturbations in the measurements. We assume a log-normal distribution for the RSSI-induced distance estimates, since we consider the log-distance path-loss model with normally distributed perturbations for the RSSI measurements in the logarithmic scale. We also assume normally distributed perturbations in the anchor position estimates. We compare the performance of the proposed algorithm with that of an existing algorithm that takes a similar approach but only accounts for the perturbations in the RSSI measurements. Our simulation results show that, by taking into account the error in the anchor positions, a significant improvement in localization accuracy can be achieved. The proposed algorithm uses only a single measurement of RSSI and one estimate of each anchor position. This makes the proposed algorithm suitable for frequent and accurate localization of IoT devices.
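The core update is plain gradient descent on a weighted sum of squared distance errors. The sketch below shows that structure, with weights simply taken as inverse squared distance estimates; the paper instead derives the weights from the statistics of both the log-normal RSSI-induced range errors and the anchor-position perturbations:

```python
import numpy as np

def localize(anchors, dist_est, weights, x0, lr=1.0, iters=1000):
    """Gradient descent on C(x) = sum_i w_i (||x - a_i|| - d_i)^2."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        diff = x - anchors
        d = np.linalg.norm(diff, axis=1)
        # gradient: 2 * sum_i w_i (||x - a_i|| - d_i) * (x - a_i) / ||x - a_i||
        g = 2 * np.sum((weights * (d - dist_est) / d)[:, None] * diff, axis=0)
        x = x - lr * g
    return x

rng = np.random.default_rng(2)
anchors = np.array([[0., 0.], [20., 0.], [0., 20.], [20., 20.]])
true = np.array([7., 12.])
d_true = np.linalg.norm(anchors - true, axis=1)
d_meas = d_true * np.exp(rng.normal(0, 0.05, 4))  # log-normal range errors
w = 1.0 / d_meas**2                               # down-weight far anchors
print(localize(anchors, d_meas, w, x0=[10., 10.]))
```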

19.
Sensors (Basel) ; 21(24)2021 Dec 17.
Article in English | MEDLINE | ID: mdl-34960531

ABSTRACT

This paper evaluates the performance of an integrity monitoring algorithm of global navigation satellite systems (GNSS) for the Kalman filter (KF), termed KF receiver autonomous integrity monitoring (RAIM). The algorithm checks measurement inconsistencies in the range domain and requires the Schmidt KF (SKF) as the navigation processor. First, realistic carrier-smoothed pseudorange measurement error models of GNSS are integrated into KF RAIM, overcoming an important limitation of prior work. More precisely, the error covariance matrix for fault detection is modified to capture the temporal variations of individual errors with different time constants. Uncertainties of the model parameters are also taken into account. The performance of the modified KF RAIM is then analyzed with simulated signals of the Global Positioning System (GPS) and the Navigation with Indian Constellation (NavIC) for different phases of aircraft flight. Weighted least squares (WLS) RAIM, used for comparison purposes, is shown to have lower protection levels. This work, however, is important because KF-based integrity monitors are required to ensure the reliability of advanced navigation methods, such as multi-sensor integration and vector receivers. A key finding of the performance analyses is as follows. Innovation-based tests with an extended KF navigation processor confuse slow ramp faults with the residual measurement errors that the filter estimates, leading to missed detections. RAIM with the SKF, on the other hand, can successfully detect such faults. Thus, it offers a promising solution for developing KF integrity monitoring algorithms in the range domain. The modified KF RAIM completes processing in time on a low-end computer. Some salient features are also studied to gain insight into its working principles.

20.
PeerJ ; 9: e12005, 2021.
Article in English | MEDLINE | ID: mdl-34466291

ABSTRACT

Remote sensing using the normalized difference vegetation index (NDVI) has the potential to rapidly detect the effect of water stress on field crops. However, this detection has typically been accomplished only after the stress effect has led to significant changes in crop green biomass, leaf area index, and leaf angle and position, and few studies have attempted to estimate the uncertainties of the regression models. These limitations have restricted the informed interpretation of NDVI data in agricultural applications. We built a ground-based sensing cart and used it to calibrate the relationships between NDVI and leaf water potential (LWP) for wheat, corn, and cotton growing under field conditions. Both ordinary least squares (OLS) and weighted least squares (WLS) methods were employed in the data analysis, and measurement errors in both LWP and NDVI were considered. We also used statistical resampling to test the effect of LWP measurement errors on the uncertainties of the model coefficients. Our data showed that obtaining a high value of the coefficient of determination did not guarantee high prediction precision in the resulting regression models. Large prediction uncertainties were estimated for all three crops, and the regressions obtained were not always significant. The best models were obtained for cotton, with a prediction uncertainty of 27%. We found that considering measurement errors in both LWP and NDVI led to reduced uncertainties in the model coefficients. Also, reducing the sample size of the LWP measurements led to significantly increased uncertainties in the coefficients of the linear models describing the LWP-NDVI relationship. Finally, potential strategies for reducing the uncertainty relative to the range of NDVI measurement are discussed.
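A small sketch of the two analysis ingredients: an OLS versus WLS fit of an LWP-NDVI calibration, and a bootstrap check of how the LWP sample size inflates the slope uncertainty. All numbers are synthetic; the heteroscedastic noise model stands in for the measurement-error treatment in the paper:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
# Hypothetical calibration pairs: leaf water potential (MPa) vs. NDVI
lwp = rng.uniform(-2.5, -0.5, 40)
noise_sd = 0.02 + 0.02 * np.abs(lwp + 2.5)      # heteroscedastic NDVI noise
ndvi = 0.9 + 0.12 * lwp + rng.normal(0, noise_sd)

X = sm.add_constant(lwp)
ols = sm.OLS(ndvi, X).fit()
wls = sm.WLS(ndvi, X, weights=1.0 / noise_sd**2).fit()  # assumed per-point variances

def boot_slope_sd(n_sub, reps=500):
    # resample LWP-NDVI pairs to see how sample size drives slope uncertainty
    sds = [sm.OLS(ndvi[i], X[i]).fit().params[1]
           for i in (rng.choice(len(lwp), n_sub, replace=True) for _ in range(reps))]
    return np.std(sds)

print("OLS slope:", ols.params[1], " WLS slope:", wls.params[1])
print("slope SD, n=40:", boot_slope_sd(40), " n=15:", boot_slope_sd(15))
```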
