1.
Front Plant Sci ; 15: 1409194, 2024.
Article in English | MEDLINE | ID: mdl-38966142

ABSTRACT

Introduction: Cotton yield estimation is crucial in the agricultural process, where the accuracy of boll detection during the flocculation period significantly influences yield estimates in cotton fields. Unmanned Aerial Vehicles (UAVs) are frequently employed for plant detection and counting because of their cost-effectiveness and adaptability. Methods: To address the challenges posed by small cotton boll targets and the low resolution of UAV imagery, this paper introduces a transfer-learning method based on the YOLO v8 framework, named YOLO small-scale pyramid depth-aware detection (SSPD). The method combines space-to-depth, non-strided convolution (SPD-Conv) with a small-target detection head and integrates a simple, parameter-free attention module (SimAM), which significantly improves boll detection accuracy. Results: The YOLO SSPD achieved a boll detection accuracy of 0.874 on UAV-scale imagery. It also recorded a coefficient of determination (R2) of 0.86, with a root mean square error (RMSE) of 12.38 and a relative root mean square error (RRMSE) of 11.19% for boll counts. Discussion: The findings indicate that YOLO SSPD can significantly improve the accuracy of cotton boll detection in UAV imagery, thereby supporting the cotton production process. The method offers a robust solution for high-precision cotton monitoring, enhancing the reliability of cotton yield estimates.
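
The SPD-Conv building block mentioned above replaces strided downsampling with a lossless space-to-depth rearrangement, which helps preserve small targets such as bolls. A minimal PyTorch sketch of that idea follows; the layer sizes are illustrative assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class SPDConv(nn.Module):
    # Space-to-depth rearrangement followed by a non-strided convolution:
    # 2x2 spatial blocks are moved into the channel dimension, so the
    # downsampling step discards no fine-grained information.
    def __init__(self, in_channels, out_channels, scale=2):
        super().__init__()
        self.scale = scale
        self.conv = nn.Conv2d(in_channels * scale * scale, out_channels,
                              kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        s = self.scale
        b, c, h, w = x.shape
        x = x.view(b, c, h // s, s, w // s, s)            # split each spatial dim into (size//s, s)
        x = x.permute(0, 1, 3, 5, 2, 4).reshape(b, c * s * s, h // s, w // s)
        return self.conv(x)

feat = torch.randn(1, 64, 80, 80)                          # dummy feature map
print(SPDConv(64, 128)(feat).shape)                        # -> torch.Size([1, 128, 40, 40])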

2.
Sci Rep ; 14(1): 7097, 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38528045

ABSTRACT

Accurately estimating large-area crop yields, especially for soybeans, is essential for addressing global food security challenges. This study introduces a deep learning framework for precise county-level soybean yield estimation in the United States, drawing on a wide range of multi-variable remote sensing data. The model is a state-of-the-art CNN-BiGRU enhanced by the GOA and a novel attention mechanism (GCBA), which excels at handling intricate time series and diverse remote sensing datasets. Compared with five leading machine learning and deep learning models, the GCBA model demonstrates superior performance, particularly in the 2019 and 2020 evaluations, achieving remarkable R2, RMSE, MAE and MAPE values and setting a new benchmark in yield estimation accuracy. Importantly, the study highlights the significance of integrating multi-source remote sensing data: synthesizing information from various sensors and incorporating photosynthesis-related parameters significantly enhances yield estimation precision. These advancements not only provide transformative insights for precision agricultural management but also establish a solid scientific foundation for informed decision-making in global agricultural production and food security.
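
The abstract reports R2, RMSE, MAE and MAPE; for reference, here is a generic sketch of how these regression metrics are commonly computed, not tied to the paper's data or model.

import numpy as np

def yield_metrics(y_true, y_pred):
    # Standard regression metrics for yield estimation.
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    resid = y_true - y_pred
    r2 = 1.0 - np.sum(resid**2) / np.sum((y_true - y_true.mean())**2)
    rmse = np.sqrt(np.mean(resid**2))
    mae = np.mean(np.abs(resid))
    mape = np.mean(np.abs(resid / y_true)) * 100.0
    return {"R2": r2, "RMSE": rmse, "MAE": mae, "MAPE_%": mape}

# e.g. county-level yields in t/ha (illustrative numbers)
print(yield_metrics([3.1, 2.8, 3.5, 3.0], [3.0, 2.9, 3.3, 3.2]))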

3.
Sensors (Basel) ; 24(6)2024 Mar 16.
Article in English | MEDLINE | ID: mdl-38544178

ABSTRACT

In the context of Industry 4.0, one of the most significant challenges is enhancing efficiency in sectors like agriculture by using intelligent sensors and advanced computing. Specifically, the task of fruit detection and counting in orchards represents a complex issue that is crucial for efficient orchard management and harvest preparation. Traditional techniques often fail to provide the timely and precise data necessary for these tasks. With the agricultural sector increasingly relying on technological advancements, the integration of innovative solutions is essential. This study presents a novel approach that combines artificial intelligence (AI), deep learning (DL), and unmanned aerial vehicles (UAVs). The proposed approach demonstrates superior real-time capabilities in fruit detection and counting, utilizing a combination of AI techniques and multi-UAV systems. The core innovation of this approach is its ability to simultaneously capture and synchronize video frames from multiple UAV cameras, converting them into a cohesive data structure and, ultimately, a continuous image. This integration is further enhanced by image quality optimization techniques, ensuring the high-resolution and accurate detection of targeted objects during UAV operations. Its effectiveness is proven by experiments, achieving a high mean average precision rate of 86.8% in fruit detection and counting, which surpasses existing technologies. Additionally, it maintains low average error rates, with a false positive rate at 14.7% and a false negative rate at 18.3%, even under challenging weather conditions like cloudiness. Overall, the practical implications of this multi-UAV imaging and DL-based approach are vast, particularly for real-time fruit recognition in orchards, marking a significant stride forward in the realm of digital agriculture that aligns with the objectives of Industry 4.0.


Subject(s)
Artificial Intelligence; Deep Learning; Fruit; Intelligence; Diagnostic Imaging
4.
Data Brief ; 52: 109952, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38226042

ABSTRACT

Conventional methods of crop yield estimation are costly, inefficient, and prone to error, resulting in poor yield estimates. This affects the ability of farmers to appropriately plan and manage their crop production pipelines and market processes. There is therefore a need to develop automated methods of crop yield estimation. However, the development of accurate machine-learning methods for crop yield estimation depends on the availability of appropriate datasets, and such datasets are lacking, especially in sub-Saharan Africa. We present curated image datasets of coffee and cashew nuts acquired in Uganda during two crop harvest seasons. The datasets were collected over nine months, from September 2022 to May 2023. The data were collected using a high-resolution camera mounted on an Unmanned Aerial Vehicle. The datasets contain 3000 coffee and 3086 cashew nut images, 6086 images in total. Annotated objects of interest in the coffee dataset consist of five classes, namely unripe, ripening, ripe, spoilt, and coffee_tree. Annotated objects of interest in the cashew nut dataset consist of six classes, namely tree, flower, premature, unripe, ripe, and spoilt. The datasets may be used for various machine-learning tasks, including flowering intensity estimation, fruit maturity stage analysis, disease diagnosis, crop variety identification, and yield estimation.

5.
Data Brief ; 51: 109772, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38020434

ABSTRACT

Bangladesh's economy is primarily driven by the agriculture sector, and rice is one of the staple foods of Bangladesh. The count of panicles per unit area serves as a widely used indicator for estimating rice yield, facilitating breeding efforts, and conducting phenotypic analysis. By counting the panicles within a given area, researchers and farmers can assess crop density, plant health, and prospective production. The conventional method of estimating rice yields in Bangladesh is time-consuming, inaccurate, and inefficient. To address the challenge of detecting rice panicles, this article provides a comprehensive dataset of annotated rice panicle images from Bangladesh. Data collection was carried out with a drone equipped with a 4K-resolution camera on April 25, 2023, in Bonkhoria, Gazipur, Bangladesh. During the day, the drone captured the rice field from various heights and perspectives. After various image processing techniques were employed for curation and annotation, the dataset was generated from images extracted from the drone video clips, which were then annotated with information regarding rice panicles. The dataset is the largest publicly accessible collection of rice panicle images from Bangladesh, consisting of 2193 original images and 5701 augmented images.

6.
Front Plant Sci ; 14: 1188216, 2023.
Article in English | MEDLINE | ID: mdl-37575912

ABSTRACT

Introduction: To stabilize the edible oil market, oil yield needs to be determined in advance, so accurate and fast techniques for estimating rapeseed yield are of great significance in agricultural production. Because rapeseed has a long flowering period and petal colors clearly distinct from other crops, the flowering stage deserves careful consideration in crop classification and yield estimation. Methods: A field experiment was conducted to obtain unmanned aerial vehicle (UAV) multispectral images. Field measurements consisted of the reflectance of flowers, leaves, and soils at the flowering stage and rapeseed yield at physiological maturity. In addition, GF-1 and Sentinel-2 satellite images were collected to compare the applicability of the yield estimation methods. The abundances of different rapeseed organs were extracted by spectral mixture analysis (SMA) and multiplied by vegetation indices (VIs) to estimate yield. Results: At the UAV scale, the product of VIs and leaf abundance (AbdLF) was closely related to rapeseed yield and outperformed the VI-only models, with coefficients of determination (R2) above 0.78. The yield estimation models based on the products of the normalized difference yellowness index (NDYI) and the enhanced vegetation index (EVI) with AbdLF had the highest accuracy, with coefficients of variation (CVs) below 10%. At the satellite scale, most models based on the product of VIs and rapeseed AbdLF were also improved compared with the VI-only models. The models based on the products of AbdLF with the renormalized difference VI (RDVI) and EVI (RDVI×AbdLF and EVI×AbdLF) showed steady improvement, with CVs below 13.1%. Furthermore, the models based on the products of AbdLF with the normalized difference VI (NDVI), the visible atmospherically resistant index (VARI), RDVI, and EVI performed consistently at both UAV and satellite scales. Discussion: The results showed that incorporating SMA can overcome the limitation of using only VIs to retrieve rapeseed yield at the flowering stage. Our results indicate that rapeseed leaf abundance can be a potential indicator for yield prediction during the flowering stage.
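
As a rough illustration of the two steps described above, linear spectral unmixing to obtain organ abundances followed by multiplying a vegetation index by the leaf abundance, here is a minimal sketch with made-up endmember reflectances and band values; the paper's measured spectra and band set are not reproduced here.

import numpy as np
from scipy.optimize import nnls

# Endmember reflectances (columns: flower, leaf, soil) for blue, green, red, NIR.
# These numbers are illustrative assumptions, not the study's field measurements.
endmembers = np.array([
    [0.08, 0.10, 0.12],   # blue
    [0.35, 0.12, 0.16],   # green
    [0.30, 0.06, 0.20],   # red
    [0.50, 0.48, 0.28],   # nir
])
pixel = np.array([0.10, 0.17, 0.14, 0.44])     # observed mixed-pixel reflectance

abund, _ = nnls(endmembers, pixel)             # non-negative least-squares unmixing (SMA)
abund = abund / abund.sum()                    # sum-to-one normalisation
abd_flower, abd_leaf, abd_soil = abund

blue, green, red, nir = pixel
ndyi = (green - blue) / (green + blue)                        # yellowness of flowers
evi = 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1)    # enhanced vegetation index

# Yield predictors of the form VI x leaf abundance, as in the abstract's models
print("NDYI x AbdLF =", ndyi * abd_leaf, " EVI x AbdLF =", evi * abd_leaf)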

7.
Appl Radiat Isot ; 200: 110925, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37459682

ABSTRACT

The present work reports an analysis of the production yield of residues from the 6Li + 181Ta reaction in a low-energy regime. The experimental yields of 183gOs, 183mOs, 182Os, 183Re, 183Ta, 182m2Ta, and 180Ta have been measured in the 27-43 MeV energy window and compared with equilibrium and pre-equilibrium model calculations under the framework of the nuclear reaction model code EMPIRE-3.2.2. The maximum yield measured for 183gOs is 80.5 ± 14.9 MBq/C at 40.2 MeV in a 2.3 mg/cm2 thick Ta target, corresponding to a cross-section of 360.1 ± 34.4 mb from the 181Ta(6Li,4n)183Os reaction, and that for 183Re is 1.36 ± 0.4 MBq/C at 42.75 MeV in a 2.4 mg/cm2 thick target. The model estimations agree well with the experimental yields of 183gOs and 183Re. The possible production of stable residues has been estimated using the model-predicted cross-sections in the studied energy range. A comparison of the production yields of 183m,gOs from 6Li- and 7Li-induced reactions on Ta shows that the 6Li-induced reaction is the better candidate. Thick-target yields have been evaluated for the Os and Re isotopes.
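
For context, thick-target yields of the kind evaluated here are conventionally obtained by integrating the excitation function over the energy lost by the beam in the target; a standard textbook form, not specific to this paper's evaluation, is

Y = n_t \int_{E_{\mathrm{out}}}^{E_{\mathrm{in}}} \frac{\sigma(E)}{\left| dE/dx \right|} \, dE , \qquad n_t = \frac{\rho N_A}{A}

where n_t is the number density of target atoms, \sigma(E) the reaction cross-section, and dE/dx the stopping power of the beam in the target. The quoted activity yields in MBq/C further depend on the product's decay constant and the irradiation conditions.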

8.
Field Crops Res ; 296: 108907, 2023 May 15.
Article in English | MEDLINE | ID: mdl-37193044

ABSTRACT

Context: Photosynthetic stimulation has shown promising outcomes in improving crop photosynthesis, including in soybean. However, it is still unclear to what extent these changes can impact photosynthetic assimilation and yield under long-term field climate conditions. Objective: In this paper, we present a systematic evaluation of the response of canopy photosynthesis and yield to two critical leaf photosynthesis parameters: the maximum carboxylation rate of ribulose-1,5-bisphosphate carboxylase/oxygenase (Vcmax) and the maximum electron transport rate for ribulose-1,5-bisphosphate regeneration (Jmax). Methods: Using the field-scale crop model Soybean-BioCro and ten years of observed climate data in Urbana, Illinois, U.S., we conducted sensitivity experiments to estimate the changes in canopy photosynthesis, leaf area index, and biomass due to changes in Vcmax and Jmax. Results: The results show that 1) both the canopy photosynthetic assimilation (An) and pod biomass yields were more sensitive to changes in Jmax, particularly at high atmospheric carbon-dioxide concentrations ([CO2]); 2) higher [CO2] undermined the effectiveness of increasing the two parameters to improve An and yield; 3) under the same [CO2], canopy light interception and canopy respiration were key factors that limited improvements in An and yield; 4) a canopy with a smaller leaf area index tended to have a higher yield improvement; and 5) increases in assimilation and yield were highly dependent on growing-season climatic conditions. Solar radiation, temperature, and relative humidity were the main climate drivers of yield improvement, and they had opposite correlations with improved yield during the vegetative phase compared to the reproductive phase. Conclusions: In a world with elevated [CO2], genetic engineering of crop photosynthesis should focus more on improving Jmax. Further, long-term climate conditions and seasonal variations must be considered when determining improvements in soybean canopy photosynthesis and yield at the field scale. Implications: Quantifying the effectiveness of changing Vcmax and Jmax helps in understanding their individual and combined contributions to potential improvements in assimilation and yield. This work provides a framework for evaluating how altering these photosynthetic rate parameters impacts soybean yield and assimilation under different seasonal climate scenarios at the field scale.
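
For background on where Vcmax and Jmax enter, the Farquhar-von Caemmerer-Berry leaf photosynthesis model, on which field-scale crop models of this kind are typically built, takes net assimilation as the minimum of a Rubisco-limited and an electron-transport-limited rate. A standard formulation, not necessarily Soybean-BioCro's exact parameterisation, is

A_n = \min(A_c, A_j) - R_d

A_c = V_{c\max} \, \frac{C_i - \Gamma^*}{C_i + K_c \left(1 + O_i / K_o\right)} , \qquad A_j = \frac{J}{4} \cdot \frac{C_i - \Gamma^*}{C_i + 2\Gamma^*} , \qquad J \le J_{\max}

where C_i and O_i are the intercellular CO2 and O2 concentrations, \Gamma^* the CO2 compensation point in the absence of dark respiration, K_c and K_o the Michaelis constants for carboxylation and oxygenation, and R_d the dark respiration rate. Raising Vcmax raises A_c, while raising Jmax raises A_j, which explains why the sensitivity to Jmax grows at high [CO2], where the electron-transport-limited rate tends to dominate.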

9.
Front Plant Sci ; 14: 1132909, 2023.
Article in English | MEDLINE | ID: mdl-36950357

ABSTRACT

Longan yield estimation is an important practice before longan harvests. Statistical longan yield data provide an important reference for market pricing and for improving harvest efficiency, and directly determine the economic benefits of longan orchards. At present, statistical work on longan yields incurs high labor costs. Targeting the task of longan yield estimation, this study combined deep learning and regression analysis to propose a method for calculating longan yield in a complex natural environment. First, a UAV was used to collect video images of the longan canopy at the mature stage. Second, the CF-YD model and SF-YD model were constructed to identify Cluster_Fruits and Single_Fruits, respectively, so that the number of targets could be identified automatically and directly from images. Finally, based on sample data collected from real orchards, a regression analysis was carried out between the target quantity detected by the models and the real target quantity, and estimation models were constructed for determining the number of Cluster_Fruits on a single longan tree and the number of Single_Fruits on a single Cluster_Fruit. An error analysis was then conducted between the manual counts and the estimation models: the average error rate for the number of Cluster_Fruits was 2.66%, while the average error rate for the number of Single_Fruits was 2.99%. The results show that the method proposed in this paper is effective at estimating longan yields and can provide guidance for improving the efficiency of longan fruit harvests.
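
A minimal sketch of the calibration step described above, regressing model-detected counts against manual counts and reporting an average error rate, with illustrative numbers rather than the paper's data:

import numpy as np

detected = np.array([118, 95, 140, 102, 87], dtype=float)   # Cluster_Fruits detected per tree
actual   = np.array([124, 101, 151, 108, 90], dtype=float)  # manual ground-truth counts

slope, intercept = np.polyfit(detected, actual, deg=1)       # least-squares calibration line
estimated = slope * detected + intercept

error_rate = np.mean(np.abs(estimated - actual) / actual) * 100
print(f"estimate = {slope:.3f} * detected + {intercept:.3f}; mean error {error_rate:.2f}%")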

10.
Sensors (Basel) ; 23(2)2023 Jan 11.
Article in English | MEDLINE | ID: mdl-36679645

ABSTRACT

The potential of image proximal sensing for agricultural applications has been a prolific scientific subject in the recent literature. Its main appeal lies in the sensing of precise information about plant status, which is either harder or impossible to extract from lower-resolution downward-looking image sensors such as satellite or drone imagery. Yet, many theoretical and practical problems arise when dealing with proximal sensing, especially on perennial crops such as vineyards. Indeed, vineyards exhibit challenging physical obstacles and many degrees of variability in their layout. In this paper, we present the design of a mobile camera suited to vineyards and harsh experimental conditions, as well as the results and assessments of 8 years' worth of studies using that camera. These projects ranged from in-field yield estimation (berry counting) to disease detection, providing new insights on typical viticulture problems that could also be generalized to orchard crops. Different recommendations are then provided using small case studies, such as the difficulties related to framing plots with different structures or the mounting of the sensor on a moving vehicle. While results stress the obvious importance and strong benefits of a thorough experimental design, they also indicate some inescapable pitfalls, illustrating the need for more robust image analysis algorithms and better databases. We believe sharing that experience with the scientific community can only benefit the future development of these innovative approaches.


Subject(s)
Agriculture; Algorithms; Farms; Feedback; Agriculture/methods; Image Processing, Computer-Assisted; Crops, Agricultural
11.
ACS Synth Biol ; 12(2): 524-532, 2023 02 17.
Article in English | MEDLINE | ID: mdl-36696234

ABSTRACT

DNA origami is a milestone in DNA nanotechnology: it is robust and efficient for constructing arbitrary two- and three-dimensional nanostructures. The shape and size of origami structures vary, and atomic force microscopes, transmission electron microscopes, and other microscopes are used to characterize them. However, identifying the various origami nanostructures depends heavily on the experience of researchers. In this study, we used a deep learning method (an improved YOLOX) to detect multiple DNA origami structures and estimate their yield. We designed a feature enhancement fusion network with an attention mechanism and investigated the related parameters. Experiments conducted to verify the proposed method showed that its detection accuracy was higher than that of other methods. The method can detect and estimate DNA origami yield in complex environments, and its detection speed is in the millisecond range.


Subject(s)
Deep Learning; Nanostructures; Nucleic Acid Conformation; Nanostructures/chemistry; Nanotechnology/methods; DNA/chemistry
12.
Plant Methods ; 19(1): 8, 2023 Jan 28.
Article in English | MEDLINE | ID: mdl-36709313

ABSTRACT

BACKGROUND: The number of soybean pods is one of the most important indicators of soybean yield; pod counting is therefore crucial for yield estimation, cultivation management, and variety breeding. Counting pods manually is slow and laborious. For crop counting, using an object detection network is common practice, but the scattered and overlapping pods make detection and counting difficult. RESULTS: We propose an approach named YOLO POD, based on the YOLO X framework. On top of YOLO X, we added a block for predicting the number of pods and modified the loss function, thus constructing a multi-task model, and introduced the Convolutional Block Attention Module (CBAM). We achieved accurate identification and counting of pods without reducing inference speed. The results showed that the R2 between the number predicted by YOLO POD and the ground truth reached 0.967, an improvement of 0.049 over YOLO X, while the inference time increased by only 0.08 s. Moreover, the MAE, MAPE, and RMSE are only 4.18, 10.0%, and 6.48, respectively; the deviation is very small. CONCLUSIONS: We have achieved the first accurate counting of soybean pods and propose a new solution for the detection and counting of dense objects.
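
A minimal PyTorch sketch of the multi-task idea described above, a count-regression head added alongside detection with a combined loss; the layer sizes, loss weighting, and the placeholder detection loss are assumptions for illustration, not YOLO POD's implementation.

import torch
import torch.nn as nn

class CountHead(nn.Module):
    # Global pooling + small MLP that regresses the number of pods in an image.
    def __init__(self, in_channels=256):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(nn.Flatten(), nn.Linear(in_channels, 128),
                                 nn.ReLU(), nn.Linear(128, 1))

    def forward(self, feat):
        return self.mlp(self.pool(feat)).squeeze(-1)

def multitask_loss(det_loss, count_pred, count_true, weight=0.1):
    # total loss = detection loss + weighted count-regression loss
    count_loss = nn.functional.smooth_l1_loss(count_pred, count_true)
    return det_loss + weight * count_loss

# Example with a dummy backbone feature map and a dummy detection loss value
feat = torch.randn(2, 256, 20, 20)
count_pred = CountHead(256)(feat)
loss = multitask_loss(torch.tensor(1.5), count_pred, torch.tensor([42.0, 37.0]))
print(count_pred.shape, loss.item())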

13.
Fundam Res ; 3(6): 951-959, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38933002

ABSTRACT

Providing accurate crop yield estimations at large spatial scales and understanding yield losses under extreme climate stress is an urgent challenge for sustaining global food security. While the data-driven deep learning approach has shown great capacity in predicting yield patterns, its capacity to detect and attribute the impacts of climatic extremes on yields remains unknown. In this study, we developed a deep neural network based multi-task learning framework to estimate variations of maize yield at the county level over the US Corn Belt from 2006 to 2018, with a special focus on the extreme yield loss in 2012. We found that our deep learning model hindcasted the yield variations with good accuracy for 2006-2018 (R2 = 0.81) and well reproduced the extreme yield anomalies in 2012 (R2 = 0.79). Further attribution analysis indicated that extreme heat stress was the major cause for yield loss, contributing to 72.5% of the yield loss, followed by anomalies of vapor pressure deficit (17.6%) and precipitation (10.8%). Our deep learning model was also able to estimate the accumulated impact of climatic factors on maize yield and identify that the silking phase was the most critical stage shaping the yield response to extreme climate stress in 2012. Our results provide a new framework of spatio-temporal deep learning to assess and attribute the crop yield response to climate variations in the data rich era.

14.
Ying Yong Sheng Tai Xue Bao ; 34(12): 3347-3356, 2023 Dec.
Article in Chinese | MEDLINE | ID: mdl-38511374

ABSTRACT

Establishing a remote sensing yield estimation model for wheat-maize rotation cropland makes it possible to estimate the combined grain yield in a timely and accurate way. Taking the winter wheat-summer maize rotation cropland in Caoxian County, Shandong Province as the test object and using Sentinel-2 images from 2018 to 2019, we compared time-series feature classification on the QGIS platform with a support vector machine algorithm to select the better method for extracting the sown area of the wheat-maize rotation cropland. Based on the correlation between the wheat and maize vegetation indices and the statistical yield, we screened the sensitive vegetation indices and their growth periods, and obtained the vegetation index integrals over the sensitive spectral period using the Newton-trapezoid integration method. We then constructed a multiple linear regression model and three machine learning models (random forest, RF; BP neural network, BP; support vector machine, SVM) based on combinations of the integral values to obtain the optimal yield estimation model. The results showed that the accuracy of extracting the wheat and maize sown area based on time-series features with the QGIS platform reached 94.6%, with overall accuracy and Kappa coefficient 5.9% and 0.12 higher, respectively, than those of the support vector machine algorithm. Remote sensing yield estimation over the sensitive spectral period was better than that over a single growth period. The integral group of the normalized difference vegetation index and ratio vegetation index for wheat, and of the enhanced vegetation index and structure insensitive pigment index for maize, aggregated spectral information more effectively. The optimal vegetation index integral combination was the difference value group, and the fitting accuracy of the machine learning algorithms was higher than that of the empirical statistical model. The optimal yield estimation model was the difference value group-random forest (DVG-RF) model (R2=0.843, root mean square error=2.822 kg·hm-2), with a yield estimation accuracy of 93.4%. We explored the use of the QGIS platform to extract the sown area and carried out a systematic case study of grain yield estimation for wheat-maize rotation cropland. The established multi-vegetation-index integral combination model was effective and feasible and could improve the accuracy and efficiency of yield estimation.
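
A minimal sketch of the integration step described above: trapezoidal integration of a vegetation index time series over the sensitive period to obtain a single predictor per field. The dates and index values are illustrative, not the study's data.

import numpy as np

doy  = np.array([105, 115, 130, 145, 160])        # day of year of Sentinel-2 acquisitions
ndvi = np.array([0.42, 0.61, 0.78, 0.74, 0.55])   # NDVI during the sensitive spectral period

# Trapezoidal rule: sum of (mean of adjacent values) x (time step)
ndvi_integral = np.sum((ndvi[1:] + ndvi[:-1]) / 2 * np.diff(doy))
print("NDVI integral over the sensitive period:", ndvi_integral)

# Integrals of several indices per field would then serve as inputs to the
# regression / random forest yield models mentioned in the abstract.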


Subject(s)
Triticum; Zea mays; Remote Sensing Technology/methods; Edible Grain; China
15.
Sensors (Basel) ; 22(21)2022 Nov 07.
Article in English | MEDLINE | ID: mdl-36366269

ABSTRACT

Rice is one of the vital foods consumed in most countries throughout the world. To estimate yield, crop counting is used, which also helps indicate improper growth, identify loam land, and control weeds. As the demand for food supplies increases, it is becoming necessary to grow crops healthily, precisely, and efficiently. Traditional counting methods have numerous disadvantages, such as long delay times and high sensitivity to disturbance by noise. In this research, rice plants are detected and counted using an unmanned aerial vehicle (UAV) and aerial images combined with a geographic information system (GIS). The technique is applied to forty acres of rice crop in Tando Adam, Sindh, Pakistan. To validate the performance of the proposed system, the obtained results are compared with standard plant count techniques and were approved by an agronomist after testing the soil and monitoring the rice crop count in each acre of the rice fields. The results show that the proposed system is precise, detects rice crops accurately, differentiates them from other objects, and estimates soil health based on plant counting data; however, in the case of clusters, counting is performed in semi-automated mode.


Subject(s)
Oryza; Soil; Geographic Information Systems; Crops, Agricultural; Plant Weeds
16.
Front Plant Sci ; 13: 1001779, 2022.
Article in English | MEDLINE | ID: mdl-36275598

ABSTRACT

Scientific and accurate estimation of rice yield is of great significance for food security and agricultural economic development. Because of the weak penetration of the high-frequency microwave band, most of the backscattering comes from the rice canopy, and the backscattering coefficient is highly correlated with panicle weight, which provides a basis for inverting the wet biomass of rice ears. To address rice yield estimation at the field scale, a modified water-cloud model based on the panicle layer, driven by Ku-band radar data, was constructed on top of the traditional water-cloud model to estimate rice yield at the panicle stage. Scattering models for the wet weight of rice ears and for the grain number per ear were constructed at the field scale for yield estimation. The grain production functional area in Xiashe Village, Xin'an Town, Deqing County, Zhejiang Province, China was taken as the study area. For the first time, the MiniSAR radar system carried by a DJI M600 UAV was used, in September 2019, to acquire Ku-band SAR data under HH polarization over the study area as the data source. Rice yield was then estimated using the newly constructed modified water-cloud model based on the panicle layer, and a field investigation was carried out simultaneously for verification. The results show that the inversion accuracies of the ear wet weight scattering model and the grain number per ear scattering model in parcel B were 95.03% and 94.15%, and the accuracies of both models in parcels C+D+E were over 91.8%. In addition, growth stage affected yield estimation accuracy. For fully mature rice, the yield estimation accuracies based on ear wet weight and on grain number per ear were basically similar, both exceeding 94%. For rice at the grouting stage, the yield estimation accuracy based on ear wet weight was 92.7%, better than that based on grain number per ear. This demonstrates that the modified water-cloud model based on the panicle layer constructed in this paper can effectively estimate rice yield at the panicle stage at the field scale.
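
For reference, the classical water-cloud model that the panicle-layer modification builds on expresses canopy backscatter as vegetation scattering plus two-way-attenuated soil scattering. In its standard form, not the modified panicle-layer version developed in the paper:

\sigma^0 = A \, m_v \cos\theta \, \left(1 - \tau^2\right) + \tau^2 \, \sigma^0_{\mathrm{soil}} , \qquad \tau^2 = \exp\!\left(-\frac{2 B m_v}{\cos\theta}\right)

where m_v is the vegetation water content (here, of the panicle layer), \theta the radar incidence angle, A and B empirical parameters, \tau^2 the two-way canopy attenuation, and \sigma^0_{\mathrm{soil}} the bare-soil backscatter. The paper's modification separates out a panicle layer so that ear wet weight and grain number per ear can be inverted from the Ku-band backscatter.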

17.
Front Plant Sci ; 13: 965425, 2022.
Article in English | MEDLINE | ID: mdl-36017261

ABSTRACT

The fast and precise detection of dense litchi fruits and the determination of their maturity are of great practical significance for yield estimation in litchi orchards and for robot harvesting. Factors such as the complex growth environment, dense distribution, and random occlusion by leaves, branches, and other litchi fruits easily cause computer vision predictions to deviate from the actual values. This study proposes a fast and precise litchi fruit detection method and application software based on an improved You Only Look Once version 5 (YOLOv5) model, which can be used for the detection and yield estimation of litchi in orchards. First, a dataset of litchi at different maturity levels was established. Second, the YOLOv5s model was chosen as the base version of the improved model. ShuffleNet v2 was used as the improved backbone network, which was then fine-tuned to simplify the model structure. In the feature fusion stage, the CBAM module was introduced to further refine litchi's effective feature information. Considering the small size of dense litchi fruits, an input size of 1,280 × 1,280 was used for the improved model while the network structure was optimized. To evaluate the performance of the proposed method, we performed ablation experiments and compared it with other models on the test set. The results showed that the improved model achieved a 3.5% improvement in mean average precision (mAP) and a 62.77% reduction in model size compared with the original model. The improved model size is 5.1 MB, and the frame rate is 78.13 frames/s at a confidence threshold of 0.5. The model performs well in precision and robustness in different scenarios. In addition, we developed an Android application for litchi counting and yield estimation based on the improved model. In the experiment, the correlation coefficient R2 between the application test and the actual results was 0.9879. In summary, our improved method achieves high precision, a lightweight model, and fast detection at large scales, and can provide a technical means for portable yield estimation and for the visual recognition of litchi harvesting robots.

18.
Data Brief ; 43: 108466, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35873279

ABSTRACT

National and international Vitis variety catalogues can be used as image datasets for computer vision in viticulture. These databases archive ampelographic features, phenology of several grape varieties, and images of plant structures (e.g., leaf, bunch, shoots). Although these archives represent a potential database for computer vision in viticulture, plant structure images are acquired singly and mostly not directly in the vineyard. Localization computer vision models would benefit from multiple objects in the same image, allowing more efficient training. The present image and label dataset was designed to overcome such limitations and provide suitable images for multiple-cluster identification in white grape varieties. A group of 373 images was acquired from a lateral view in vertical shoot position vineyards at six different Italian locations at different phenological stages. Images were then labelled in the YOLO labelling format. The dataset is made available both as images and as labels. For a group of images in this dataset, the real number of bunches counted in the field and the number of bunches visible in the image (not covered by other vine structures) were recorded.
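
For readers unfamiliar with the YOLO labelling format used here, each label line stores a class id and a bounding box normalised to the image size. A minimal reading sketch follows; the file name is illustrative, not taken from the dataset.

# Each line of a YOLO label file is:
#   <class_id> <x_center> <y_center> <width> <height>
# with the four box values normalised to [0, 1] relative to image width/height.

def read_yolo_labels(path):
    boxes = []
    with open(path) as f:
        for line in f:
            cls, xc, yc, w, h = line.split()
            boxes.append((int(cls), float(xc), float(yc), float(w), float(h)))
    return boxes

for cls, xc, yc, w, h in read_yolo_labels("vineyard_0001.txt"):   # hypothetical file name
    print(f"bunch class {cls}: centre=({xc:.3f}, {yc:.3f}) size=({w:.3f}, {h:.3f})")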

19.
Front Plant Sci ; 13: 911473, 2022.
Article in English | MEDLINE | ID: mdl-35747884

ABSTRACT

Accurate detection of pear flowers is an important measure for pear orchard yield estimation, which plays a vital role in improving pear yield and predicting pear price trends. This study proposed an improved YOLOv4 model, called the YOLO-PEFL model, for accurate pear flower detection in the natural environment. Pear flower targets were artificially synthesized using the surface features of pear flowers, and the synthetic pear flower targets together with the backgrounds of the original pear flower images were used as the inputs of the YOLO-PEFL model. ShuffleNetv2 embedded with the SENet (Squeeze-and-Excitation Networks) module replaced the original backbone network of the YOLOv4 model to form the backbone of the YOLO-PEFL model. The parameters of the YOLO-PEFL model were fine-tuned to change the size of the initial anchor boxes. The experimental results showed that the average precision of the YOLO-PEFL model was 96.71%, the model size was reduced by about 80%, and the average detection time was 0.027 s. Compared with the YOLOv4 model and the YOLOv4-tiny model, the YOLO-PEFL model performed better in model size, detection accuracy, and detection speed, which effectively reduced the model deployment cost and improved the model efficiency. This implies that the proposed YOLO-PEFL model can accurately and efficiently detect pear flowers in the natural environment.

20.
Sensors (Basel) ; 22(11)2022 May 31.
Article in English | MEDLINE | ID: mdl-35684799

ABSTRACT

This work investigates the performance of five depth cameras in relation to their potential for grape yield estimation. The technologies used by these cameras include structured light (Kinect V1), active infrared stereoscopy (RealSense D415), time of flight (Kinect V2 and Kinect Azure), and LiDAR (Intel L515). To evaluate their suitability for grape yield estimation, a range of factors were investigated including their performance in and out of direct sunlight, their ability to accurately measure the shape of the grapes, and their potential to facilitate counting and sizing of individual berries. The depth cameras' performance was benchmarked using high-resolution photogrammetry scans. All the cameras except the Kinect V1 were able to operate in direct sunlight. Indoors, the RealSense D415 camera provided the most accurate depth scans of grape bunches, with a 2 mm average depth error relative to photogrammetric scans. However, its performance was reduced in direct sunlight. The time of flight and LiDAR cameras provided depth scans of grapes that had about an 8 mm depth bias. Furthermore, the individual berries manifested in the scans as pointed shape distortions. This led to an underestimation of berry sizes when applying the RANSAC sphere fitting but may help with the detection of individual berries with more advanced algorithms. Applying an opaque coating to the surface of the grapes reduced the observed distance bias and shape distortion. This indicated that these are likely caused by the cameras' transmitted light experiencing diffused scattering within the grapes. More work is needed to investigate if this distortion can be used for enhanced measurement of grape properties such as ripeness and berry size.
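
A minimal sketch of RANSAC sphere fitting of the kind mentioned above for berry sizing, run on a synthetic point cloud; the inlier threshold, berry radius, and point counts are illustrative assumptions.

import numpy as np

def fit_sphere(points):
    # Linear least-squares sphere fit: |p|^2 = 2 c.p + (r^2 - |c|^2)
    A = np.c_[2 * points, np.ones(len(points))]
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre, k = sol[:3], sol[3]
    radius = np.sqrt(k + centre @ centre)
    return centre, radius

def ransac_sphere(points, iters=200, threshold=0.002, rng=np.random.default_rng(0)):
    # Repeatedly fit a sphere to 4 random points and keep the fit with most inliers.
    best = (None, None, -1)
    for _ in range(iters):
        sample = points[rng.choice(len(points), 4, replace=False)]
        centre, radius = fit_sphere(sample)
        residual = np.abs(np.linalg.norm(points - centre, axis=1) - radius)
        inliers = int((residual < threshold).sum())
        if inliers > best[2]:
            best = (centre, radius, inliers)
    return best

# Synthetic berry-like cloud: 12 mm radius sphere with 0.5 mm noise, in metres
rng = np.random.default_rng(1)
d = rng.normal(size=(500, 3)); d /= np.linalg.norm(d, axis=1, keepdims=True)
cloud = d * 0.012 + np.array([0.1, 0.2, 0.5]) + rng.normal(0, 0.0005, (500, 3))
centre, radius, inliers = ransac_sphere(cloud)
print("estimated radius (m):", radius, "inliers:", inliers)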


Subject(s)
Vitis; Algorithms; Fruit