Results 1 - 20 of 27
1.
Plant Phenomics ; 6: 0191, 2024.
Article in English | MEDLINE | ID: mdl-38895609

ABSTRACT

Crop uniformity is a comprehensive indicator used to describe crop growth and is important for assessing crop yield and biomass potential. However, there is still a lack of continuous monitoring of uniformity throughout the growing season to explain its effects on yield and biomass. Therefore, this paper proposes a wheat uniformity quantification method based on unmanned aerial vehicle imaging technology to monitor and analyze the dynamic changes in wheat uniformity. The leaf area index (LAI), soil plant analysis development (SPAD), and fractional vegetation cover were estimated from hyperspectral images, while plant height was estimated by a point cloud model from RGB images. Based on these 4 agronomic parameters, a total of 20 uniformity indices covering multiple growing stages were calculated. The changing trends in the uniformity indices were consistent with the results of visual interpretation. The uniformity indices strongly correlated with yield and biomass were selected to construct multiple linear regression models for estimating yield and biomass. The results showed that Pielou's index of LAI had the strongest correlation with yield and biomass, with correlation coefficients of -0.760 and -0.801, respectively. The accuracies of the yield (coefficient of determination [R2] = 0.616, root mean square error [RMSE] = 1.189 Mg/ha) and biomass estimation models (R2 = 0.798, RMSE = 1.952 Mg/ha) using uniformity indices were better than those of the models using the mean values of the 4 agronomic parameters. Therefore, the proposed uniformity monitoring method can be used to effectively evaluate the temporal and spatial variations in wheat uniformity and can provide new insights into the prediction of yield and biomass.
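As a sketch of the index-plus-regression idea above: Pielou's evenness can be computed from a histogram of a trait such as LAI, and the resulting indices fed into an ordinary least-squares model. The histogram binning, the helper names, and the synthetic data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pielou_index(values, bins=10):
    """Pielou's evenness J = H / ln(S) over a histogram of trait values.

    H is the Shannon entropy of the bin proportions and S is the number of
    occupied bins (the binning itself is an assumption of this sketch).
    """
    counts, _ = np.histogram(values, bins=bins)
    p = counts[counts > 0] / counts.sum()
    s = len(p)
    if s <= 1:
        return 0.0  # all values fall in one bin: no evenness to measure
    return float(-(p * np.log(p)).sum() / np.log(s))

def fit_linear(X, y):
    """Ordinary least squares with an intercept column; returns coefficients."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef
```

A yield model would then be fit with `fit_linear` on a matrix whose columns are the selected uniformity indices, one row per plot.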

2.
PLoS One ; 18(9): e0290703, 2023.
Article in English | MEDLINE | ID: mdl-37713375

ABSTRACT

Acid sulfate soil is characterized by pyrite (FeS2), which produces high acidity (soil pH < 3.5) and releases high amounts of Al3+ and Fe2+. Application of 4 t ha-1 of ground magnesium limestone (GML) is the rate commonly used on acid sulfate soil by rice farmers in Malaysia. Therefore, this study was conducted to evaluate the combined effect of GML and calcium silicate and to determine the optimal combination for acid sulfate soils in Malaysia. The acid sulfate soils were incubated under submerged conditions for 120 days with GML (0, 2, 4, 6 t ha-1) in combination with calcium silicate (0, 1, 2, 3 t ha-1), arranged in a completely randomized design (CRD). The soil was sampled after 30, 60, 90, and 120 days of incubation and analyzed for soil pH, exchangeable Al, Ca, Mg, and K, and available Si. A total of 2 out of 16 combinations met the desired soil requirements for rice cultivation. The desired chemical soil characteristics for rice cultivation are soil pH > 4, exchangeable Al < 2 cmolc kg-1, exchangeable Ca > 2 cmolc kg-1, exchangeable Mg > 1 cmolc kg-1, and Si content > 43 mg kg-1. These combinations were i) 2 t ha-1 calcium silicate + 2 t ha-1 GML and ii) 3 t ha-1 calcium silicate + 2 t ha-1 GML. Soil acidity was reduced by the gradual release of Ca2+ and SiO32- from calcium silicate, which continuously filled the exchange sites and reduced the availability of free H+ in the soil system. The combination of calcium silicate and GML shows an ameliorative effect through i) the release of Ca, ii) the binding of Al3+ into inert Al-hydroxides, and iii) the binding of H+ to produce water molecules.


Subject(s)
Calcium Carbonate; Oryza; Magnesium; Sulfates; Heartburn; Soil; Sulfur Oxides
3.
Plant Phenomics ; 5: 0048, 2023.
Article in English | MEDLINE | ID: mdl-37363145

ABSTRACT

Detailed observation of the phenotypic changes in the rice panicle substantially helps us to understand yield formation. In recent studies, phenotyping of rice panicles during the heading-flowering stage still lacks comprehensive analysis, especially of panicle development under different nitrogen treatments. In this work, we proposed a pipeline to automatically acquire detailed panicle traits from time-series images by using the YOLO v5, ResNet50, and DeepSORT models. Combined with field observation data, the proposed method was used to test whether it can identify subtle differences in panicle development under different nitrogen treatments. The results show that panicle counting throughout the heading-flowering stage achieved high accuracy (R2 = 0.96 and RMSE = 1.73), and the heading date was estimated with an absolute error of 0.25 days. In addition, by identical-panicle tracking based on the time-series images, we analyzed detailed flowering phenotypic changes of a single panicle, such as flowering duration and individual panicle flowering time. For the rice population, with an increase in nitrogen application: panicle number increased; heading date changed little, but its duration was slightly extended; and cumulative flowering panicle number increased, with the flowering initiation date arriving earlier and the ending date later, so the flowering duration became longer. For a single panicle, identical-panicle tracking revealed that higher nitrogen application led to an earlier flowering initiation date, significantly more flowering days, and a significantly longer total duration from the beginning of vigorous flowering to the end (total DBE). However, the beginning time of vigorous flowering showed no significant differences, and there was a slight decrease in daily DBE.

4.
Sensors (Basel) ; 23(9)2023 May 08.
Article in English | MEDLINE | ID: mdl-37177776

ABSTRACT

The leaf phenotypic traits of plants have a significant impact on the efficiency of canopy photosynthesis. However, traditional methods such as destructive sampling hinder the continuous monitoring of plant growth, while manual measurements in the field are both time-consuming and laborious. Nondestructive and accurate measurements of leaf phenotypic parameters can be achieved through the use of 3D canopy models and object segmentation techniques. This paper proposed an automatic branch-leaf segmentation pipeline based on a lidar point cloud and conducted automatic measurement of leaf inclination angle, length, width, and area, using a pear canopy as an example. Firstly, a three-dimensional model was established from the lidar point cloud using SCENE software. Next, 305 pear tree branches were manually divided into branch points and leaf points, and 45 branch samples were selected as test data. Leaf points were further marked as 572 leaf instances in these test data. The PointNet++ model was used, with 260 point clouds as training input, to carry out semantic segmentation of branches and leaves. Using the leaf point clouds in the test dataset as input, single leaf instances were extracted by means of a mean shift clustering algorithm. Finally, based on the single leaf point cloud, the leaf inclination angle was calculated by plane fitting, while the leaf length, width, and area were calculated by midrib fitting and triangulation. The semantic segmentation model was tested on 45 branches, with a mean Precision_sem, mean Recall_sem, mean F1-score, and mean intersection over union (IoU) of branches and leaves of 0.93, 0.94, 0.93, and 0.88, respectively. For single leaf extraction, the Precision_ins, Recall_ins, and mean coverage (mCoV) were 0.89, 0.92, and 0.87, respectively.
Using the proposed method, the estimated leaf inclination, length, width, and area of pear leaves showed a high correlation with manual measurements, with correlation coefficients of 0.94 (root mean squared error: 4.44°), 0.94 (root mean squared error: 0.43 cm), 0.91 (root mean squared error: 0.39 cm), and 0.93 (root mean squared error: 5.21 cm2), respectively. These results demonstrate that the method can automatically and accurately measure the phenotypic parameters of pear leaves. This has great significance for monitoring pear tree growth, simulating canopy photosynthesis, and optimizing orchard management.
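The plane-fitting step for leaf inclination described above can be sketched with a small SVD-based routine. This is an illustrative reconstruction, not the authors' code, and it assumes the z-axis of the point cloud is vertical.

```python
import numpy as np

def leaf_inclination(points):
    """Fit a plane to a leaf point cloud (N x 3) and return the inclination
    angle in degrees between the plane and the horizontal.

    The plane normal is the right singular vector with the smallest singular
    value of the centered points; the inclination is the angle between that
    normal and the vertical axis's complement, i.e. the tilt of the plane.
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                      # least-variance direction
    cos_tilt = abs(normal[2]) / np.linalg.norm(normal)
    return float(np.degrees(np.arccos(np.clip(cos_tilt, -1.0, 1.0))))
```

A perfectly horizontal leaf returns 0 degrees; a leaf lying on the plane z = x returns 45 degrees.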


Subject(s)
Imaging, Three-Dimensional; Pyrus; Imaging, Three-Dimensional/methods; Trees; Plants; Plant Leaves
5.
Plant Phenomics ; 5: 0026, 2023.
Article in English | MEDLINE | ID: mdl-36939414

ABSTRACT

Developing automated soybean seed counting tools will help automate yield prediction before harvesting and improve selection efficiency in breeding programs. An integrated approach to counting and localization is ideal for subsequent analysis. The traditional method of object counting is labor-intensive and error-prone and has low localization accuracy. To quantify soybean seeds directly rather than sequentially, we propose a P2PNet-Soy method. Several strategies were considered to adjust the architecture and subsequent postprocessing to maximize model performance in seed counting and localization. First, unsupervised clustering was applied to merge closely located overcounts. Second, low-level features were included with high-level features to provide more information. Third, atrous convolution with different kernel sizes was applied to low- and high-level features to extract scale-invariant features that factor in soybean size variation. Fourth, channel and spatial attention effectively separated the foreground and background for easier soybean seed counting and localization. Finally, the input image was added to these extracted features to improve model performance. Using 24 soybean accessions as experimental materials, we trained the model on field images of individual soybean plants obtained from one side and tested it on images obtained from the opposite side, with all the above strategies. The superiority of the proposed P2PNet-Soy over the original P2PNet in soybean seed counting and localization was confirmed by a reduction in the mean absolute error from 105.55 to 12.94. Furthermore, the trained model worked effectively on images obtained directly from the field without background interference.
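The idea of merging closely located overcounts can be illustrated with a simple greedy radius-based merge. The paper applies unsupervised clustering for this step, so treat the routine below, with its assumed pixel radius, as a minimal stand-in rather than the actual postprocessing.

```python
import numpy as np

def merge_close_points(points, radius=5.0):
    """Greedily merge predicted seed coordinates that fall within `radius`
    pixels of each other, replacing each cluster with its centroid.

    A point-based counter can emit several points for one seed; merging
    near-duplicates reduces such overcounting.
    """
    pts = [np.asarray(p, dtype=float) for p in points]
    merged = []
    used = [False] * len(pts)
    for i, p in enumerate(pts):
        if used[i]:
            continue
        cluster = [p]
        used[i] = True
        for j in range(i + 1, len(pts)):
            if not used[j] and np.linalg.norm(pts[j] - p) <= radius:
                cluster.append(pts[j])
                used[j] = True
        merged.append(np.mean(cluster, axis=0))
    return np.array(merged)
```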

6.
Breed Sci ; 72(1): 1, 2022 Mar.
Article in English | MEDLINE | ID: mdl-36045889
7.
Breed Sci ; 72(1): 3-18, 2022 Mar.
Article in English | MEDLINE | ID: mdl-36045897

ABSTRACT

In contrast to the rapid advances made in plant genotyping, plant phenotyping is considered a bottleneck in plant science. This has promoted high-throughput plant phenotyping (HTP) studies, resulting in an exponential increase in phenotyping-related publications. HTP technologies were originally developed for model plant species under controlled indoor environments, but the focus has since shifted to HTP for crops in the field. Although field HTP is much more difficult to conduct than HTP in controlled environments due to unstable environmental conditions, recent advances in HTP technology have allowed these difficulties to be overcome, enabling rapid, efficient, non-destructive, non-invasive, quantitative, repeatable, and objective phenotyping. Recent HTP developments have been accelerated by advances in data analysis, sensors, and robot technologies, including machine learning, image analysis, three-dimensional (3D) reconstruction, image sensors, laser sensors, environmental sensors, and drones, along with high-speed computational resources. This article provides an overview of recent HTP technologies, focusing mainly on canopy-based phenotypes of major crops, such as canopy height, canopy coverage, canopy biomass, and canopy stressed appearance, in addition to crop organ detection and counting in the field. Current topics in field HTP are also presented, followed by a discussion of the low rates of adoption of HTP in practical breeding programs.

8.
Sensors (Basel) ; 22(15)2022 Jul 25.
Article in English | MEDLINE | ID: mdl-35898050

ABSTRACT

An increase in the number of tillers of rice significantly affects grain yield. However, this is measured only by manually counting emerging tillers, most commonly by counting by hand, touching each tiller. This study develops an efficient, non-destructive method for estimating the number of tillers during the vegetative and reproductive stages under flooded conditions. Unlike popular deep-learning-based approaches requiring training data and computational resources, we propose a simple image-processing pipeline following the empirical principles of synchronously emerging leaves and tillers in rice morphogenesis. Field images were taken by an unmanned aerial vehicle at a very low flying height for UAV imaging (1.5 to 3 m above the rice canopy). Subsequently, the proposed image-processing pipeline, which includes binarization, skeletonization, and leaf-tip detection, was used to count the number of long-growing leaves. The tiller number was estimated from the number of long-growing leaves. The estimated tiller number in a 1.1 m × 1.1 m area is significantly correlated with the actual number of tillers, with 60% of hills having an error of less than ±3 tillers. This study demonstrates the potential of the proposed image-sensing-based tiller-counting method to help agronomists with efficient, non-destructive field phenotyping.
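The binarization-skeletonization-tip-detection pipeline reduces, at its final step, to finding skeleton endpoints. A minimal sketch of that step, assuming a precomputed binary skeleton image (not the authors' implementation):

```python
import numpy as np

def count_leaf_tips(skeleton):
    """Count endpoints of a binary skeleton image: foreground pixels with
    exactly one 8-connected foreground neighbour.

    In a leaf-tip detection pipeline, each leaf tip appears as a skeleton
    endpoint, so the endpoint count approximates the number of visible
    long-growing leaves.
    """
    sk = np.asarray(skeleton, dtype=int)
    padded = np.pad(sk, 1)  # zero border so every pixel has a 3x3 window
    tips = 0
    for r in range(1, padded.shape[0] - 1):
        for c in range(1, padded.shape[1] - 1):
            if padded[r, c]:
                neighbours = padded[r - 1:r + 2, c - 1:c + 2].sum() - 1
                if neighbours == 1:
                    tips += 1
    return tips
```

A straight skeleton segment, for example, has exactly two endpoints.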


Subject(s)
Oryza; Edible Grain; Image Processing, Computer-Assisted; Plant Leaves
9.
Plant Phenomics ; 2022: 9795275, 2022.
Article in English | MEDLINE | ID: mdl-35280929

ABSTRACT

Training deep learning models typically requires a huge amount of labeled data, which is expensive to acquire, especially in dense prediction tasks such as semantic segmentation. Moreover, plant phenotyping datasets pose the additional challenges of heavy occlusion and varied lighting conditions, which make annotations more time-consuming to obtain. Active learning helps reduce the annotation cost by selecting for labeling the samples that are most informative to the model, thus improving model performance with fewer annotations. Active learning for semantic segmentation has been well studied on datasets such as PASCAL VOC and Cityscapes. However, its effectiveness on plant datasets has not received much attention. To bridge this gap, we empirically study and benchmark the effectiveness of four uncertainty-based active learning strategies on three natural plant organ segmentation datasets. We also study their behaviour in response to variations in training configurations in terms of the augmentations used, the scale of training images, active learning batch sizes, and train-validation set splits.
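One common uncertainty-based query strategy of the kind benchmarked in such studies is mean per-pixel predictive entropy. A minimal sketch, with array shapes assumed for illustration (this names a generic strategy, not any specific one from the paper):

```python
import numpy as np

def entropy_query(prob_maps, batch_size):
    """Rank unlabeled images by mean per-pixel predictive entropy and return
    the indices of the `batch_size` most uncertain ones.

    `prob_maps` holds softmax outputs with shape (n_images, n_classes, H, W);
    higher mean entropy = model is less sure = more informative to label.
    """
    p = np.clip(np.asarray(prob_maps, dtype=float), 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=1)   # (n_images, H, W)
    scores = entropy.mean(axis=(1, 2))       # one uncertainty score per image
    return np.argsort(scores)[::-1][:batch_size]
```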

10.
Front Genet ; 12: 803636, 2021.
Article in English | MEDLINE | ID: mdl-35027920

ABSTRACT

It is not yet fully understood what environmental stimuli cause genotype-by-environment (G × E) interactions in real fields, when they occur, and what genes react to them. Large-scale multi-environment data sets are attractive data sources for these purposes because they potentially cover various environmental conditions. Here we developed a data-driven approach termed Environmental Covariate Search Affecting Genetic Correlations (ECGC) to identify environmental stimuli and genes responsible for G × E interactions from large-scale multi-environment data sets. ECGC was applied to a soybean (Glycine max) data set that consisted of 25,158 records collected at 52 environments. ECGC illustrated what meteorological factors shaped the G × E interactions in six traits including yield, flowering time, and protein content, and when these factors were involved in the interactions. For example, it illustrated the relevance of precipitation around sowing dates and hours of sunshine just before maturity to the interactions observed for yield. Moreover, genome-wide association mapping on the sensitivities to the identified stimuli discovered candidate and known genes responsible for the G × E interactions. Our results demonstrate the capability of data-driven approaches to bring novel insights into the G × E interactions observed in fields.

11.
Ecol Evol ; 10(21): 12318-12326, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33209290

ABSTRACT

Recent advances in unmanned aerial vehicles (UAVs) and image processing have made high-throughput field phenotyping possible at the plot/canopy level in mass-grown experiments. Such techniques are now expected to be used for individual-level phenotyping in single-grown experiments. We found two main challenges in phenotyping individual plants in single-grown experiments: plant segmentation from weedy backgrounds and the estimation of complex traits that are difficult to measure manually. In this study, we proposed a methodological framework for field-based individual plant phenotyping by UAV. Two contributions, weed elimination for individual plant segmentation and the extraction of complex traits (volume and outline), have been developed. The framework demonstrated its utility in the phenotyping of Helianthus tuberosus (Jerusalem artichoke), an herbaceous perennial plant species. The proposed framework can be applied to both small- and large-scale phenotyping experiments.

12.
Sensors (Basel) ; 20(10)2020 May 25.
Article in English | MEDLINE | ID: mdl-32466108

ABSTRACT

Automatic detection of intact tomatoes on plants is highly desirable for low-cost and optimal management in tomato farming. Mature tomato detection has been widely studied, while immature tomato detection, which is more important for long-term yield prediction, is difficult to perform using traditional image analysis, especially when fruits are occluded by leaves. Therefore, tomato detection that generalizes well to real tomato cultivation scenes and is robust to issues such as fruit occlusion and variable lighting conditions is highly desired. In this study, we build a tomato detection model to automatically detect intact green tomatoes regardless of occlusion or fruit growth stage using deep learning approaches. The model uses a faster region-based convolutional neural network (Faster R-CNN) with a ResNet-101 backbone, transfer-learned from the Common Objects in Context (COCO) dataset. Detection on the test dataset achieved a high average precision of 87.83% (intersection over union ≥ 0.5) and showed high accuracy in tomato counting (R2 = 0.87). In addition, all the detected boxes were merged into one image to compile a tomato location map and estimate fruit size along one row in the greenhouse. Through tomato detection, counting, location, and size estimation, this method shows great potential for ripeness and yield prediction.
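The reported average precision counts a detection as correct when its intersection over union (IoU) with a ground-truth box is at least 0.5. The IoU of two axis-aligned boxes is computed as:

```python
def box_iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2).

    IoU = area of overlap / area of union; 1.0 for identical boxes, 0.0 for
    disjoint ones. Detection benchmarks commonly score a prediction as a true
    positive when IoU >= 0.5.
    """
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```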


Subject(s)
Deep Learning; Solanum lycopersicum; Fruit; Image Processing, Computer-Assisted; Neural Networks, Computer
13.
Plant Methods ; 16: 34, 2020.
Article in English | MEDLINE | ID: mdl-32161624

ABSTRACT

BACKGROUND: Panicle density of cereal crops such as wheat and sorghum is one of the main components used by plant breeders and agronomists in understanding the yield of their crops. To phenotype panicle density effectively, researchers agree there is a significant need for computer vision-based object detection techniques. Especially in recent times, research in deep learning-based object detection has shown promising results in various agricultural studies. However, training such systems usually requires a lot of bounding-box-labeled data. Since crops vary with both environmental and genetic conditions, acquiring huge labeled image datasets for each crop is expensive and time-consuming. Thus, to catalyze the widespread use of automatic object detection for crop phenotyping, a cost-effective method to develop such automated systems is essential. RESULTS: We propose a point-supervision-based active learning approach for panicle detection in cereal crops. In our approach, the model constantly interacts with a human annotator by iteratively querying the labels for only the most informative images, as opposed to all images in a dataset. Our query method is specifically designed for cereal crops, which usually tend to have panicles with low variance in appearance. Our method reduces labeling costs by intelligently leveraging low-cost weak labels (object centers) to pick the most informative images for which strong labels (bounding boxes) are required. We show promising results on two publicly available cereal crop datasets, Sorghum and Wheat. On Sorghum, 6 variants of our proposed method outperform the best baseline method with more than 55% savings in labeling time. Similarly, on Wheat, 3 variants of our proposed method outperform the best baseline method with more than 50% savings in labeling time. CONCLUSION: We proposed a cost-effective method to train reliable panicle detectors for cereal crops. A low-cost panicle detection method for cereal crops is highly beneficial to both breeders and agronomists. Plant breeders can obtain quick crop yield estimates to make important crop management decisions. Similarly, obtaining real-time visual crop analysis is valuable for researchers analyzing the crop's response to various experimental conditions.

14.
Plant Methods ; 15: 76, 2019.
Article in English | MEDLINE | ID: mdl-31338116

ABSTRACT

BACKGROUND: Accurate estimation of the heading date of paddy rice greatly helps breeders to understand the adaptability of different crop varieties in a given location. The heading date also plays a vital role in determining grain yield in research experiments. Visual examination of the crop is laborious and time-consuming. Therefore, quick and precise estimation of the heading date of paddy rice is highly desirable. RESULTS: In this work, we propose a simple pipeline to detect regions containing flowering panicles from ground-level RGB images of paddy rice. Given a fixed region size for an image, the number of regions containing flowering panicles is directly proportional to the number of flowering panicles present. Consequently, we use the flowering panicle region counts to estimate the heading date of the crop. The method is based on image classification using convolutional neural networks. We evaluated the performance of our algorithm on five time-series image sequences of three different varieties of rice. Compared to previous work on this dataset, the accuracy and general versatility of the method have been improved, and the heading date has been estimated with a mean absolute error of less than 1 day. CONCLUSION: An efficient heading date estimation method has been described for rice crops using time-series RGB images of the crop under natural field conditions. This study demonstrated that our method can reliably be used as a replacement for manual observation to detect the heading date of rice crops.
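Turning per-image flowering-region counts into a heading-date estimate can be sketched with a simple thresholding rule. The fraction-of-peak criterion below is an illustrative assumption, not the paper's exact definition (the actual pipeline first classifies image regions with a CNN):

```python
def estimate_heading_date(dates, region_counts, fraction=0.5):
    """Pick the first date on which the flowering-region count reaches a given
    fraction of its seasonal maximum.

    `dates` and `region_counts` are parallel sequences; the 0.5 fraction is an
    illustrative choice for this sketch.
    """
    peak = max(region_counts)
    for d, c in zip(dates, region_counts):
        if c >= fraction * peak:
            return d
    return dates[-1]
```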

15.
Plant Phenomics ; 2019: 1525874, 2019.
Article in English | MEDLINE | ID: mdl-33313521

ABSTRACT

The yield of cereal crops such as sorghum (Sorghum bicolor L. Moench) depends on the distribution of crop-heads in varying branching arrangements. Therefore, counting the head number per unit area is critical for plant breeders to correlate with the genotypic variation in a specific breeding field. However, measuring such phenotypic traits manually is an extremely labor-intensive process and suffers from low efficiency and human errors. Moreover, the process is almost infeasible for large-scale breeding plantations or experiments. Machine learning-based approaches like deep convolutional neural network (CNN) based object detectors are promising tools for efficient object detection and counting. However, a significant limitation of such deep learning-based approaches is that they typically require a massive amount of hand-labeled images for training, which is still a tedious process. Here, we propose an active-learning-inspired weakly supervised deep learning framework for sorghum head detection and counting from UAV-based images. We demonstrate that it is possible to significantly reduce human labeling effort without compromising final model performance (R2 between human count and machine count is 0.88) by using a semitrained CNN model (i.e., trained with limited labeled data) to perform synthetic annotation. In addition, we also visualize key features that the network learns. This improves trustworthiness by enabling users to better understand and trust the decisions that the trained deep learning model makes.

16.
Plant Phenomics ; 2019: 2591849, 2019.
Article in English | MEDLINE | ID: mdl-33313523

ABSTRACT

Microplot extraction (MPE) is a necessary image-processing step in unmanned aerial vehicle (UAV)-based research on breeding fields. At present, it is performed manually using ArcGIS, QGIS, or other GIS-based software, but achieving the desired accuracy is time-consuming. We therefore developed an intuitive, easy-to-use semiautomatic program for MPE, called Easy MPE, to enable researchers and others to extract reliable plot data from UAV images of whole fields under variable field conditions. The program uses four major steps: (1) binary segmentation, (2) microplot extraction, (3) production of *.shp files to enable further file manipulation, and (4) projection of individual microplots generated from the orthomosaic back onto the raw aerial UAV images to preserve image quality. Crop rows were successfully identified in all trial fields. The performance of the proposed method was evaluated by calculating the intersection-over-union (IoU) ratio between microplots determined manually and by Easy MPE: the average IoU (±SD) of all trials was 91% (±3).

18.
Hortic Res ; 5: 74, 2018.
Article in English | MEDLINE | ID: mdl-30564372

ABSTRACT

In orchards, measuring crown characteristics is essential for monitoring the dynamics of tree growth and optimizing farm management. However, a rapid and reliable method of extracting the features of trees with an irregular crown shape, such as trained peach trees, is lacking. Here, we propose an efficient method of segmenting individual trees and measuring the crown width and crown projection area (CPA) of peach trees with time-series information, based on gathered images. Images of peach trees were collected by unmanned aerial vehicles in an orchard in Okayama, Japan, and a digital surface model was then generated using Structure from Motion (SfM) and Multi-View Stereo (MVS) based software. After individual trees were identified through the use of an adaptive threshold and marker-controlled watershed segmentation in the digital surface model, the crown widths and CPA were calculated, and the accuracy was evaluated against manual delineation and field measurement, respectively. Taking manual delineation of 12 trees as reference, the root-mean-square errors of the proposed method were 0.08 m (R2 = 0.99) and 0.15 m (R2 = 0.93) for the two orthogonal crown widths, and 3.87 m2 for CPA (R2 = 0.89), while those taking field measurement of 44 trees as reference were 0.47 m (R2 = 0.91), 0.51 m (R2 = 0.74), and 4.96 m2 (R2 = 0.88). The change in growth rate of CPA showed that the peach trees grew faster from May to July than from July to September, with wide variation in relative growth rates among trees. Not only can this method save labour by replacing field measurement, but it can also allow farmers to monitor the growth of orchard trees dynamically.

19.
Front Plant Sci ; 9: 1544, 2018.
Article in English | MEDLINE | ID: mdl-30405675

ABSTRACT

Sorghum (Sorghum bicolor L. Moench) is a C4 tropical grass that plays an essential role in providing nutrition to humans and livestock, particularly in marginal rainfall environments. The timing of head development and the number of heads per unit area are key adaptation traits to consider in agronomy and breeding but are time-consuming and labor-intensive to measure. We propose a two-step machine-based image-processing method to detect and count the number of heads from high-resolution images captured by unmanned aerial vehicles (UAVs) in a breeding trial. To demonstrate the performance of the proposed method, 52 images were manually labeled; the precision and recall of head detection were 0.87 and 0.98, respectively, and the coefficient of determination (R2) between the manual and new counting methods was 0.84. To verify the utility of the method in breeding programs, a geolocation-based plot segmentation method was applied to pre-processed ortho-mosaic images to extract >1000 plots from the original RGB images. Forty of these plots were randomly selected and labeled manually; the precision and recall of detection were 0.82 and 0.98, respectively, and the coefficient of determination between manual and algorithm counting was 0.56, with the major source of error related to plant morphology: heads were displayed both within and outside the plot in which the plants were sown, i.e., were allocated to a neighboring plot. Finally, potential applications in yield estimation from UAV-based imagery in agronomy experiments and the scouting of production fields are also discussed.

20.
Sensors (Basel) ; 17(6)2017 Jun 05.
Article in English | MEDLINE | ID: mdl-28587238

ABSTRACT

The measurement of air temperature is strongly influenced by environmental factors such as solar radiation, humidity, wind speed and rainfall. This is problematic in low-cost air temperature sensors, which lack a radiation shield or a forced aspiration system, exposing them to direct sunlight and condensation. In this study, we developed a machine learning-based calibration method for air temperature measurement by a low-cost sensor. An artificial neural network (ANN) was used to balance the effect of multiple environmental factors on the measurements. Data were collected over 305 days, at three different locations in Japan, and used to evaluate the performance of the approach. Data collected at the same location and at different locations were used for training and testing, and the former was also used for k-fold cross-validation, demonstrating an average improvement in mean absolute error (MAE) from 1.62 to 0.67 by applying our method. Some calibration failures were noted, due to abrupt changes in environmental conditions such as solar radiation or rainfall. The MAE was shown to decrease even when the data collected in different nearby locations were used for training and testing. However, the results also showed that negative effects arose when data obtained from widely-separated locations were used, because of the significant environmental differences between them.
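The calibration idea above, regressing the reference temperature on the raw reading plus environmental covariates, can be sketched as follows. The study balances the factors with an ANN; as a minimal stand-in, this sketch uses ordinary least squares, and the synthetic error model (over-reading proportional to solar radiation) is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data under an assumed error model: the low-cost sensor over-reads
# in proportion to solar radiation and slightly under-reads with humidity.
true_temp = rng.uniform(5, 35, size=300)
solar = rng.uniform(0, 1, size=300)
humidity = rng.uniform(0.2, 1.0, size=300)
raw = true_temp + 3.0 * solar - 0.5 * humidity + rng.normal(0, 0.2, size=300)

# Calibration: regress the reference temperature on the raw reading plus the
# environmental covariates (OLS here; the paper trains an ANN for this role).
A = np.column_stack([np.ones_like(raw), raw, solar, humidity])
coef, *_ = np.linalg.lstsq(A, true_temp, rcond=None)
calibrated = A @ coef

mae_raw = np.abs(raw - true_temp).mean()        # error before calibration
mae_cal = np.abs(calibrated - true_temp).mean()  # error after calibration
```

On this synthetic data the calibrated MAE drops well below the raw MAE, mirroring the kind of improvement the study reports for its ANN.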
