Results 1 - 20 of 474
1.
Front Robot AI ; 11: 1426269, 2024.
Article in English | MEDLINE | ID: mdl-39360224

ABSTRACT

High agility, maneuverability, and payload capacity, combined with small footprints, make legged robots well-suited for precision agriculture applications. In this study, we introduce a novel bionic hexapod robot designed for agricultural applications to address the limitations of traditional wheeled and aerial robots. The robot features a terrain-adaptive gait and adjustable clearance to ensure stability and robustness over various terrains and obstacles. Equipped with a high-precision Inertial Measurement Unit (IMU), the robot monitors its attitude in real time to maintain balance. To enhance obstacle detection and self-navigation, we have also designed an advanced version of the robot with an optional sensing system comprising LiDAR, stereo cameras, and distance sensors. We tested the standard version of the robot under different ground conditions, including hard concrete floors, rugged grass, slopes, and an uneven field with obstacles. The robot maintained good stability, with pitch angle fluctuations ranging from -11.5° to 8.6° across all conditions, and can walk on slopes with gradients of up to 17°. These trials demonstrated the robot's adaptability to complex field environments and validated its ability to maintain stability and efficiency. In addition, the terrain-adaptive algorithm is more energy efficient than traditional obstacle avoidance algorithms, reducing energy consumption by 14.4% for each obstacle crossed. Combined with its flexible and lightweight design, our robot shows significant potential for improving agricultural practices by increasing efficiency, lowering labor costs, and enhancing sustainability. In future work, we will further improve the robot's energy efficiency, its durability in various environmental conditions, and its compatibility with different crops and farming methods.
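
A minimal sketch of the attitude-monitoring idea described above, assuming pitch samples arrive from an IMU driver; the function names and the threshold band (taken from the pitch range reported in the trials) are illustrative, not the robot's actual controller code.

    PITCH_LIMITS_DEG = (-11.5, 8.6)   # stable pitch band observed in the field trials

    def needs_gait_adjustment(pitch_deg: float) -> bool:
        """Return True when the measured pitch leaves the stable band."""
        low, high = PITCH_LIMITS_DEG
        return pitch_deg < low or pitch_deg > high

    # Example: pitch samples as they might arrive from a ~50 Hz IMU driver
    for pitch in [2.0, -5.3, 9.4, -12.1]:
        if needs_gait_adjustment(pitch):
            print(f"adjust gait, pitch={pitch:.1f} deg")   # hand off to the terrain-adaptive gait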

2.
Sci Rep ; 14(1): 23887, 2024 Oct 12.
Article in English | MEDLINE | ID: mdl-39396063

ABSTRACT

The development of soft computing methods has had a significant influence on the field of autonomous intelligent agriculture. This paper presents a system for autonomous greenhouse navigation that employs a fuzzy control algorithm and a deep learning-based disease classification model for tomato plants, identifying diseases from photographs of tomato leaves. The primary novelty of this study is the introduction of an upgraded Deep Convolutional Generative Adversarial Network (DCGAN) that generates augmented images of diseased tomato leaves from the original samples, considerably enlarging the training dataset. To find the optimal model, four deep learning networks (VGG19, Inception-v3, DenseNet-201, and ResNet-152) were compared on a dataset of nine tomato leaf disease classes, achieving validation accuracies of 92.32%, 90.83%, 96.61%, and 97.07%, respectively, on the original PlantVillage dataset. Using the DCGAN-augmented dataset, ResNet-152 reaches an accuracy of 99.69%, compared with 97.07% on the original dataset. This improvement demonstrates the value of the proposed DCGAN for improving the performance of the deep learning model for greenhouse plant monitoring and disease detection. Furthermore, the proposed approach may have broader use in various agricultural scenarios, potentially transforming the field of autonomous intelligent agriculture.


Subject(s)
Agriculture , Deep Learning , Plant Diseases , Plant Leaves , Solanum lycopersicum , Solanum lycopersicum/growth & development , Agriculture/methods , Robotics/methods , Algorithms , Neural Networks, Computer , Soft Computing
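
To make the transfer-learning step above concrete, here is a minimal PyTorch sketch of fine-tuning a pretrained ResNet-152 on a nine-class tomato-leaf folder that mixes original and DCGAN-augmented images; the directory path and hyperparameters are assumptions, not the authors' settings.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms
    from torchvision.models import resnet152, ResNet152_Weights

    tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    train_set = datasets.ImageFolder("tomato_leaves/train_augmented", transform=tfm)  # hypothetical path
    loader = DataLoader(train_set, batch_size=32, shuffle=True)

    model = resnet152(weights=ResNet152_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, 9)   # nine tomato disease classes

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:   # one epoch shown for brevity
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()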
3.
Plants (Basel) ; 13(17)2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39273919

ABSTRACT

In this study, a deep learning method combining a knowledge graph and a diffusion Transformer is proposed for cucumber disease detection. By incorporating a diffusion attention mechanism and a diffusion loss function, the approach aims to enhance the model's ability to recognize complex agricultural disease features and to address sample imbalance efficiently. Experimental results demonstrate that the proposed method outperforms existing deep learning models in cucumber disease detection tasks. Specifically, the method achieved a precision of 93%, a recall of 89%, an accuracy of 92%, and a mean average precision (mAP) of 91%, with a frame rate of 57 frames per second (FPS). Additionally, the study successfully implemented model lightweighting, enabling effective operation on mobile devices and supporting rapid on-site diagnosis of cucumber diseases. This research not only optimizes the performance of cucumber disease detection but also opens new possibilities for the application of deep learning in agricultural disease detection.

4.
Front Plant Sci ; 15: 1396568, 2024.
Article in English | MEDLINE | ID: mdl-39228840

ABSTRACT

Precision weed management (PWM), driven by machine vision and deep learning (DL) advancements, not only enhances agricultural product quality and optimizes crop yield but also provides a sustainable alternative to herbicide use. However, existing DL-based algorithms for weed detection are mainly developed with supervised learning approaches, typically demanding large-scale datasets with manually labeled annotations, which can be time-consuming and labor-intensive. As such, label-efficient learning methods, especially semi-supervised learning, have gained increased attention in the broader domain of computer vision and have demonstrated promising performance. These methods aim to use a small number of labeled samples along with a large number of unlabeled samples to develop models comparable to supervised counterparts trained on large amounts of labeled data. In this study, we assess the effectiveness of a semi-supervised learning framework for multi-class weed detection, employing two well-known object detection frameworks, namely FCOS (Fully Convolutional One-Stage Object Detection) and Faster-RCNN (Faster Region-based Convolutional Networks). Specifically, we evaluate a generalized student-teacher framework with an improved pseudo-label generation module that produces reliable pseudo-labels for the unlabeled data. To enhance generalization, an ensemble student network is employed to facilitate the training process. Experimental results show that the proposed approach achieves detection accuracies of approximately 76% and 96%, comparable to the supervised methods, with only 10% of the labeled data on CottonWeedDet3 and CottonWeedDet12, respectively. We offer access to the source code (https://github.com/JiajiaLi04/SemiWeeds), contributing a valuable resource for ongoing semi-supervised learning research in weed detection and beyond.
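
The pseudo-labelling step of a student-teacher detector can be sketched as follows: teacher predictions on unlabeled images are kept only above a confidence threshold and then used as training targets for the student. The function names, box format, and threshold are illustrative assumptions, not the released SemiWeeds code.

    from typing import List, Tuple

    Box = Tuple[float, float, float, float]       # (x1, y1, x2, y2)
    Detection = Tuple[Box, int, float]            # (box, class_id, score)

    def teacher_predict(image) -> List[Detection]:
        """Placeholder for the teacher network's forward pass on one image."""
        raise NotImplementedError

    def generate_pseudo_labels(unlabeled_images, score_thresh: float = 0.7):
        """Pair each unlabeled image with the teacher detections kept as pseudo-labels."""
        pseudo = []
        for img in unlabeled_images:
            dets = teacher_predict(img)
            kept = [(box, cls) for box, cls, score in dets if score >= score_thresh]
            pseudo.append((img, kept))            # reliable targets for the student network
        return pseudo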

5.
Data Brief ; 56: 110837, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39252779

ABSTRACT

The WeedCube dataset consists of hyperspectral images of three crops (canola, soybean, and sugarbeet) and four invasive weed species (kochia, common waterhemp, redroot pigweed, and common ragweed). Plants were grown in two separate greenhouses, and plant canopies were captured from a top-down camera angle. A push-broom hyperspectral sensor covering the visible-near-infrared region of 400-1000 nm was used for data collection. The dataset includes 160 calibrated images; the number of images can be further increased by selecting smaller regions of interest (ROIs). The dataset is supplemented by Jupyter Notebook scripts that support data augmentation, spectral pre-processing, ROI selection for points and images, and data visualization. The primary purpose of this dataset is to support weed classification and identification studies by enhancing existing training datasets and validating the generalization capabilities of existing models. Owing to the three-dimensional (3D) nature of hyperspectral images, the dataset can also be used by researchers and educators across various domains for developing and testing deep learning algorithms, creating automated data-processing pipelines for 3D data, developing tools for 3D data visualization, devising data-compression solutions, and addressing system-memory issues associated with high-dimensional data.
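
A small numpy sketch of how the calibrated cubes could be expanded by cropping smaller regions of interest, as the description suggests; the cube dimensions and ROI size are placeholders rather than properties of the WeedCube files.

    import numpy as np

    def extract_rois(cube: np.ndarray, roi: int = 64, stride: int = 64) -> np.ndarray:
        """Slide a square window over a (rows, cols, bands) hyperspectral cube."""
        rows, cols, _ = cube.shape
        patches = []
        for r in range(0, rows - roi + 1, stride):
            for c in range(0, cols - roi + 1, stride):
                patches.append(cube[r:r + roi, c:c + roi, :])
        return np.stack(patches)

    cube = np.random.rand(320, 320, 240)   # stand-in for a calibrated VNIR cube
    rois = extract_rois(cube)
    print(rois.shape)                      # (n_patches, 64, 64, 240)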

6.
Heliyon ; 10(17): e36808, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39281636

ABSTRACT

This study leverages the BERTopic algorithm to analyze the evolution of research within precision agriculture, identifying 37 distinct topics categorized into eight subfields: Data Analysis, IoT, UAVs, Soil and Water Management, Crop and Pest Management, Livestock, Sustainable Agriculture, and Technology Innovation. By employing BERTopic, based on a transformer architecture, this research enhances topic refinement and diversity, distinguishing it from traditional reviews. The findings highlight a significant shift towards IoT innovations, such as security and privacy, reflecting the integration of smart technologies with traditional agricultural practices. Notably, this study introduces a comprehensive popularity index that integrates trend intensity with topic proportion, providing nuanced insights into topic dynamics across countries and journals. The analysis shows that regions with robust research and development, such as the USA and Germany, are advancing in technologies like Machine Learning and IoT, while the diversity in research topics, assessed through information entropy, indicates a varied global research scope. These insights assist scholars and research institutions in selecting research directions and provide newcomers with an understanding of the field's dynamics.
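
The paper does not give the formula of its popularity index, so the following is only a hypothetical illustration of combining a topic's overall proportion with its trend intensity (here, the slope of its yearly share); it should not be read as the authors' definition.

    import numpy as np

    def popularity_index(topic_counts: np.ndarray, totals: np.ndarray) -> float:
        """Toy score: overall topic share, boosted by a positive trend in yearly share."""
        proportion = topic_counts.sum() / totals.sum()
        yearly_share = topic_counts / totals
        slope = np.polyfit(np.arange(len(topic_counts)), yearly_share, 1)[0]  # trend intensity
        return proportion * (1.0 + max(slope, 0.0))

    counts = np.array([3, 5, 9, 14, 21])          # documents in one topic per year
    totals = np.array([120, 130, 150, 160, 170])  # all documents per year
    print(round(popularity_index(counts, totals), 4))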

7.
Front Plant Sci ; 15: 1408047, 2024.
Article in English | MEDLINE | ID: mdl-39119495

ABSTRACT

In both plant breeding and crop management, interpretability plays a crucial role in instilling trust in AI-driven approaches and enabling actionable insights. The primary objective of this research is to explore and evaluate the potential contributions of deep learning network architectures that employ stacked LSTMs for end-of-season maize grain yield prediction. A secondary aim is to expand the capabilities of these networks by adapting them to better accommodate and leverage the multi-modal properties of remote sensing data. In this study, a multi-modal deep learning architecture that assimilates inputs from heterogeneous data streams, including high-resolution hyperspectral imagery, LiDAR point clouds, and environmental data, is proposed to forecast maize crop yields. The architecture includes attention mechanisms that assign varying levels of importance to different modalities and temporal features that reflect the dynamics of plant growth and environmental interactions. The interpretability of the attention weights is investigated in multi-modal networks that seek both to improve predictions and to attribute crop yield outcomes to genetic and environmental variables, which also increases the interpretability of the model's predictions. The temporal attention weight distributions highlighted relevant factors and critical growth stages that contribute to the predictions. The results affirm that the attention weights are consistent with recognized biological growth stages, substantiating the network's capability to learn biologically interpretable features. The model's yield-prediction accuracies ranged from 0.82 to 0.93 (R2) in this genetics-focused study, further highlighting the potential of attention-based models. This research also facilitates understanding of how multi-modal remote sensing aligns with the physiological stages of maize. The proposed architecture shows promise in improving predictions and offering interpretable insights into the factors affecting maize crop yields, while demonstrating the impact of data collected by different modalities through the growing season. By identifying relevant factors and critical growth stages, the model's attention weights provide valuable information that can be used in both plant breeding and crop management. The consistency of attention weights with biological growth stages reinforces the potential of deep learning networks in agricultural applications, particularly in leveraging remote sensing data for yield prediction. To the best of our knowledge, this is the first study to investigate the use of hyperspectral and LiDAR UAV time-series data for explaining and interpreting plant growth stages within deep learning networks and for forecasting plot-level maize grain yield using late-fusion modalities with attention mechanisms.
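
A condensed PyTorch sketch of attention-based late fusion over per-modality embeddings (for example hyperspectral, LiDAR, and environmental streams); the layer sizes and regression head are illustrative assumptions, not the published architecture.

    import torch
    import torch.nn as nn

    class LateFusionAttention(nn.Module):
        def __init__(self, dim: int = 128):
            super().__init__()
            self.score = nn.Linear(dim, 1)   # one attention score per modality embedding
            self.head = nn.Linear(dim, 1)    # plot-level yield regression head

        def forward(self, embeddings: torch.Tensor):
            # embeddings: (batch, n_modalities, dim), one vector per data stream
            weights = torch.softmax(self.score(embeddings), dim=1)   # (batch, M, 1)
            fused = (weights * embeddings).sum(dim=1)                # attention-weighted sum
            return self.head(fused), weights.squeeze(-1)

    model = LateFusionAttention()
    x = torch.randn(4, 3, 128)                # 4 plots, 3 modalities
    yield_pred, attn = model(x)
    print(attn.shape)                         # (4, 3): modality importances to inspect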

8.
Front Plant Sci ; 15: 1415884, 2024.
Article in English | MEDLINE | ID: mdl-39119504

ABSTRACT

The pollination process of kiwifruit flowers plays a crucial role in kiwifruit yield. Achieving accurate and rapid identification of the four stages of kiwifruit flowers is essential for enhancing pollination efficiency. In this study, to improve the efficiency of kiwifruit pollination, we propose a novel full-stage kiwifruit flower pollination detection algorithm named KIWI-YOLO, based on the fusion of frequency-domain features. Our algorithm leverages frequency-domain and spatial-domain information to improve the recognition of contour-detail features and integrates decision-making with contextual information. Additionally, we incorporate the Bi-Level Routing Attention (BRA) mechanism with C3 to enhance the algorithm's focus on critical areas, resulting in accurate, lightweight, and fast detection. The algorithm achieves an mAP0.5 of 91.6% with only 1.8M parameters, and the AP of the Female and Male classes reaches 95% and 93.5%, respectively; these represent improvements of 3.8%, 1.2%, and 6.2% over the original algorithm. Furthermore, the Recall and F1-score of the algorithm are enhanced by 5.5% and 3.1%, respectively. Moreover, our model demonstrates a significant advantage in detection speed, taking only 0.016 s to process an image. The experimental results show that the proposed model can better assist kiwifruit pollination in precision agriculture production and support the development of the kiwifruit industry.

9.
Sensors (Basel) ; 24(16)2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39204965

ABSTRACT

Winter is the season of main concern for beekeepers, since temperature, humidity, and potential infection from mites and other diseases can lead to the death of the colony. As a consequence, beekeepers perform invasive checks on the colonies, exposing them to further harm. This paper proposes a novel design of an instrumented beehive involving color cameras placed inside the beehive and at its bottom, paving the way for new frontiers in beehive monitoring. The overall acquisition system is described, focusing on design choices towards an effective solution for internal, contactless, and stress-free beehive monitoring. To validate our approach, we conducted an experimental campaign in 2023 and analyzed the collected images with YOLOv8 to understand whether the proposed solution can be useful for beekeepers and what kind of information can be derived from this kind of monitoring, including the presence of Varroa destructor mites inside the beehive. We found experimentally that the observation point inside the beehive is the most challenging, due to the frequent movements of the bees and the difficulty of obtaining in-focus images. However, from these images it is possible to detect Varroa destructor mites. On the other hand, the observation point at the bottom of the beehive showed great potential for understanding the overall activity of the colony.


Subject(s)
Varroidae , Bees/physiology , Bees/parasitology , Animals , Varroidae/physiology , Varroidae/pathogenicity , Beekeeping/methods
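
A short sketch of running YOLOv8 inference on in-hive images with the ultralytics package, as in the analysis described above; the weights file and class names (for example a "varroa" class) are assumptions standing in for the authors' trained model.

    from ultralytics import YOLO

    model = YOLO("beehive_yolov8.pt")                        # hypothetical fine-tuned weights
    results = model.predict("bottom_board_frame.jpg", conf=0.25)

    for r in results:
        for box, cls, conf in zip(r.boxes.xyxy, r.boxes.cls, r.boxes.conf):
            label = model.names[int(cls)]                    # e.g., "bee" or "varroa"
            print(label, float(conf), box.tolist())          # class, score, bounding box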
10.
Sensors (Basel) ; 24(16)2024 Aug 17.
Article in English | MEDLINE | ID: mdl-39205020

ABSTRACT

(1) Background: Yield-monitoring systems are widely used in grain crops but are less advanced for hay and forage. Current commercial systems are generally limited to weighing individual bales, limiting the spatial resolution of hay yield maps. This study evaluated an Uncrewed Aerial Vehicle (UAV)-based imaging system to estimate hay yield. (2) Methods: Data were collected from three 0.4 ha plots and a 35 ha hay field of red clover and timothy grass in September 2020. A multispectral camera on the UAV captured images at 30 m (20 mm pixel⁻¹) and 50 m (35 mm pixel⁻¹) flight heights. Eleven Vegetation Indices (VIs) and five texture features were calculated from the images to estimate biomass yield. Multivariate regression models (VIs and texture features vs. biomass) were evaluated. (3) Results: Model R2 values ranged from 0.31 to 0.68. (4) Conclusions: Despite strong correlations between standard VIs and biomass, challenges such as variable image resolution and clarity affected accuracy. Further research is needed before UAV-based yield estimation can provide accurate, high-resolution hay yield maps.


Subject(s)
Biomass , Remote Sensing Technology , Remote Sensing Technology/methods , Unmanned Aerial Devices , Crops, Agricultural/growth & development
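
A minimal sketch of the modelling step: derive a vegetation index per plot and regress biomass on several predictors with scikit-learn. The synthetic band values and texture features are placeholders, not the study's data.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
        """Normalized Difference Vegetation Index from reflectance bands."""
        return (nir - red) / (nir + red + 1e-9)

    rng = np.random.default_rng(0)
    n_plots = 40
    mean_ndvi = np.array([ndvi(rng.random(100), rng.random(100)).mean() for _ in range(n_plots)])
    texture = rng.random((n_plots, 4))                      # stand-in texture features
    X = np.column_stack([mean_ndvi, texture])
    y = 3.0 * mean_ndvi + 0.5 * texture[:, 0] + rng.normal(0, 0.1, n_plots)   # synthetic biomass

    model = LinearRegression().fit(X, y)
    print("R2:", round(r2_score(y, model.predict(X)), 2))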
11.
Sensors (Basel) ; 24(16)2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39205103

ABSTRACT

Precision agriculture has revolutionized crop management and agricultural production, with LiDAR technology attracting significant interest among various technological advancements. This extensive review examines the applications of LiDAR in precision agriculture, with particular emphasis on its role in crop cultivation and harvesting. The introduction provides an overview of precision agriculture, highlighting the need for effective agricultural management and the growing significance of LiDAR technology. The prospective advantages of LiDAR for increasing productivity, optimizing resource utilization, managing crop diseases and pesticides, and reducing environmental impact are discussed. The introduction also covers LiDAR technology in precision agriculture comprehensively, detailing airborne, terrestrial, and mobile systems along with their specialized applications in the field. The paper then reviews the various uses of LiDAR in crop cultivation, including crop growth and yield estimation, disease detection, weed control, and plant health evaluation. The use of LiDAR for soil analysis and management, including soil mapping and classification and the measurement of moisture content and nutrient levels, is reviewed. Additionally, the article examines how LiDAR is used for harvesting crops, including its use in autonomous harvesting systems, post-harvest quality evaluation, and the prediction of crop maturity and yield. Future perspectives, emergent trends, and innovative developments in LiDAR technology for precision agriculture are discussed, along with the critical challenges and research gaps that must be addressed. The review concludes by emphasizing potential solutions and future directions for maximizing LiDAR's potential in precision agriculture. This in-depth review offers helpful insights for academics, practitioners, and stakeholders interested in using LiDAR for effective and environmentally friendly crop management, ultimately contributing to the development of precision agricultural methods.


Subject(s)
Agriculture , Crops, Agricultural , Crops, Agricultural/growth & development , Agriculture/methods , Soil/chemistry , Crop Production/methods , Remote Sensing Technology/methods
12.
J Imaging ; 10(8)2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39194976

ABSTRACT

This study focuses on semantic segmentation of crop Opuntia spp. orthomosaics, a significant challenge due to the inherent variability in the captured images. Manual measurement of Opuntia spp. vegetation areas can be slow and inefficient, highlighting the need for more advanced and accurate methods. For this reason, we propose the use of deep learning techniques to provide a more precise and efficient measurement of the vegetation area. Our research focuses on the unique difficulties posed by segmenting high-resolution images exceeding 2000 pixels, a common problem when generating orthomosaics for agricultural monitoring. The research was carried out on an Opuntia spp. cultivation located in the agricultural region of Tulancingo, Hidalgo, Mexico. The images used in this study were obtained by drones and processed using advanced semantic segmentation architectures, including DeepLabV3+, UNet, and UNet Style Xception. The results offer a comparative analysis of the performance of these architectures in the semantic segmentation of Opuntia spp., contributing to the development and improvement of deep learning-based crop analysis techniques. This work sets a precedent for future research applying deep learning techniques in agriculture.
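
One common way to handle orthomosaics larger than 2000 pixels is to tile them into network-sized patches, segment each patch, and stitch the masks back together; the sketch below illustrates that idea, with segment_patch standing in for DeepLabV3+/UNet inference rather than reproducing the authors' pipeline.

    import numpy as np

    def segment_patch(patch: np.ndarray) -> np.ndarray:
        """Placeholder for per-patch inference with a segmentation network."""
        raise NotImplementedError

    def segment_orthomosaic(image: np.ndarray, tile: int = 512, overlap: int = 64) -> np.ndarray:
        """Tile a large (H, W, 3) orthomosaic, segment each tile, and reassemble the mask."""
        h, w, _ = image.shape
        mask = np.zeros((h, w), dtype=np.uint8)
        step = tile - overlap
        for y in range(0, h, step):
            for x in range(0, w, step):
                patch = image[y:y + tile, x:x + tile]
                mask[y:y + patch.shape[0], x:x + patch.shape[1]] = segment_patch(patch)
        return mask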

13.
Heliyon ; 10(13): e34117, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39091949

ABSTRACT

The fraction of absorbed photosynthetically active radiation (FAPAR) and the photosynthesis rate (Pn) of maize canopies are essential photosynthetic parameters for accurately estimating vegetation growth and productivity from multispectral vegetation indices (VIs). Despite their importance, few studies have compared the effectiveness of multispectral imagery and various machine learning techniques in estimating these photosynthetic traits under high vegetation coverage. In this study, seventeen multispectral VIs and four machine learning (ML) algorithms were used to determine the most suitable model for estimating maize FAPAR and Pn during the kharif and rabi seasons at Tamil Nadu Agricultural University, Coimbatore, India. Results show that indices such as OSAVI, SAVI, EVI-2, and MSAVI-2 during the kharif season and MNDVIRE and MSRRE during the rabi season outperformed the others in estimating FAPAR and Pn values. Among the four ML methods considered, namely random forest (RF), extreme gradient boosting (XGBoost), support vector regression (SVR), and multiple linear regression (MLR), RF consistently showed the best fitting performance and XGBoost the lowest fitting accuracy for FAPAR and Pn estimation. However, SVR with R2 = 0.873 and RMSE = 0.045 during the kharif season and MLR with R2 = 0.838 and RMSE = 0.053 during the rabi season demonstrated higher fitting accuracy, particularly notable for FAPAR prediction. Similarly, in the prediction of Pn, MLR showed higher fitting accuracy, with R2 = 0.741 and RMSE = 2.531 during the kharif season and R2 = 0.955 and RMSE = 1.070 during the rabi season. This study demonstrates the potential of combining UAV-derived VIs with ML to develop accurate FAPAR and Pn prediction models, overcoming VI saturation in dense vegetation. It underscores the importance of optimizing these models to improve the accuracy of maize vegetation assessments during various growing seasons.
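
A compact scikit-learn sketch of the model comparison: fit several regressors on VI features and report R2 and RMSE on a held-out split. The synthetic arrays stand in for the UAV-derived VIs and measured FAPAR, and XGBoost is omitted to keep the example dependency-free.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.svm import SVR
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score, mean_squared_error

    rng = np.random.default_rng(1)
    X = rng.random((200, 17))                                       # 17 vegetation indices per sample
    y = 0.6 * X[:, 0] + 0.3 * X[:, 5] + rng.normal(0, 0.03, 200)    # FAPAR proxy

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    models = {
        "RF": RandomForestRegressor(n_estimators=300, random_state=0),
        "SVR": SVR(kernel="rbf"),
        "MLR": LinearRegression(),
    }
    for name, m in models.items():
        pred = m.fit(X_tr, y_tr).predict(X_te)
        rmse = mean_squared_error(y_te, pred) ** 0.5
        print(name, "R2:", round(r2_score(y_te, pred), 3), "RMSE:", round(rmse, 3))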

14.
HardwareX ; 19: e00557, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39108458

ABSTRACT

Spectral signatures allow the characterization of a surface from the reflected or emitted energy along the electromagnetic spectrum. This type of measurement has several potential applications in precision agriculture. However, capturing the spectral signatures of plants requires specialized instruments, either in the field or in the laboratory. The cost of these instruments is high, so they are not widely incorporated into crop-monitoring tasks, given the low investment in agricultural technology. This paper presents a low-cost clamp for capturing spectral leaf signatures in the laboratory and the field. The clamp can be 3D printed using PLA (polylactic acid) and allows the connection of two optical fibers: one for a spectrometer and one for a light source. It is designed for ease of use and holds a leaf firmly without causing damage, allowing data to be collected with less disturbance. The article compares signatures captured directly with a fiber and with the proposed clamp; noise reduction across the spectrum is achieved with the clamp.

15.
Heliyon ; 10(15): e35050, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39170417

ABSTRACT

Sensors used in precision agriculture to detect heavy metals in irrigation water are generally expensive, and their deployment and maintenance often represent an ongoing investment to keep them in operation, leaving a lasting polluting footprint at the end of their lifespan. This represents an opportunity to design new biological devices that can replace part or all of the sensors currently used. In this article, a novel workflow is proposed to carry out the complete process of design, modeling, and simulation of reprogrammable microorganisms in silico. As a proof of concept, the workflow has been used to design three whole-cell biosensors for the detection of heavy metals in irrigation water, namely arsenic, mercury, and lead. These biosensors comply with the concentration limits established by the World Health Organization (WHO). The proposed workflow allows the design of a wide variety of biodevices entirely in silico, which aids in solving problems that cannot be easily addressed with classical computing. The workflow is based on two technologies typical of synthetic biology: the design of synthetic genetic circuits, and in silico synthetic engineering, which allows the design of reprogrammable microorganisms using software and hardware to develop theoretical models. These models enable prediction of the behavior of complex biological systems. The output of the workflow is exported as complete genomes in SBOL, GenBank, and FASTA formats, enabling subsequent in vivo implementation in a laboratory. The proposal enables computer science professionals to collaborate in biotechnological processes from a theoretical perspective, prior or complementary to a design process carried out directly in the laboratory by molecular biologists. Key results of this work therefore include the fully in silico workflow, which leads to designs that can be tested in the lab in vitro or in vivo, and a proof of concept in which the workflow generated synthetic circuits in the form of three whole-cell heavy-metal biosensors that were designed, modeled, and simulated. The simulations show realistic spatial distributions of biosensors reacting to different concentrations of heavy metals (zero, low, and threshold level) and at different growth phases (stationary and exponential), backed by the design and modeling phases of the workflow.

16.
Sensors (Basel) ; 24(15)2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39123884

ABSTRACT

In strawberry cultivation, precise disease management is crucial for maximizing yields and reducing unnecessary fungicide use. Traditional methods for measuring leaf wetness duration (LWD), a critical factor in assessing the risk of fungal diseases such as botrytis fruit rot and anthracnose, have relied on sensors with known limitations in accuracy and reliability and difficulties with calibration. To overcome these limitations, this study introduced an innovative leaf wetness detection system employing high-resolution imaging and deep learning technologies, including convolutional neural networks (CNNs). Implemented at the University of Florida's Plant Science Research and Education Unit (PSREU) in Citra, FL, USA, and expanded to three additional locations across Florida, USA, the system captured and analyzed images of a reference plate to accurately determine wetness and, consequently, the LWD. Comparison of system outputs with manual observations across diverse environmental conditions demonstrated the enhanced accuracy and reliability of the artificial intelligence-driven approach. By integrating this system into the Strawberry Advisory System (SAS), this study provides an efficient means to improve disease risk assessment and fungicide application strategies, promising significant economic benefits and sustainability advances in strawberry production.


Subject(s)
Artificial Intelligence , Fragaria , Plant Diseases , Plant Leaves , Fragaria/microbiology , Plant Diseases/microbiology , Neural Networks, Computer , Algorithms , Botrytis
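
A minimal PyTorch sketch of a CNN that classifies reference-plate image crops as wet or dry; the architecture and input size are illustrative assumptions, not the model integrated into SAS.

    import torch
    import torch.nn as nn

    class WetnessNet(nn.Module):
        """Tiny binary classifier: wet vs. dry reference plate."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 32 * 32, 2)

        def forward(self, x):                       # x: (batch, 3, 128, 128)
            return self.classifier(self.features(x).flatten(1))

    logits = WetnessNet()(torch.randn(1, 3, 128, 128))
    print(logits.shape)                             # torch.Size([1, 2])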
17.
Sensors (Basel) ; 24(15)2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39123990

ABSTRACT

Biological nitrogen fixation (BNF) by symbiotic bacteria plays a vital role in sustainable agriculture. However, current quantification methods are often expensive and impractical. This study explores the potential of Raman spectroscopy, a non-invasive technique, for rapid assessment of BNF activity in soybeans. Raman spectra were obtained from soybean plants grown with and without rhizobia to identify spectral signatures associated with BNF, and δ15N isotope ratio mass spectrometry (IRMS) was used to determine actual BNF percentages. Partial least squares regression (PLSR) was employed to develop a model for BNF quantification based on the Raman spectra; the model explained 80% of the variation in BNF activity. To enhance the model's specificity for BNF detection regardless of nitrogen availability, a subsequent elastic net (Enet) regularisation strategy was implemented. This approach provided insights into the key wavenumbers and biochemicals associated with BNF in soybeans.


Subject(s)
Glycine max , Nitrogen Fixation , Spectrum Analysis, Raman , Nitrogen Fixation/physiology , Spectrum Analysis, Raman/methods , Glycine max/metabolism , Glycine max/chemistry , Least-Squares Analysis , Fabaceae/metabolism , Nitrogen/metabolism , Symbiosis/physiology
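
The PLSR step can be sketched with scikit-learn as below, regressing the IRMS-derived BNF percentage on Raman spectra; the random arrays and component count are placeholders for the real spectra and tuning.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    X = rng.random((60, 1000))                                # 60 plants x 1000 Raman wavenumber bins
    y = 100 * X[:, :50].mean(axis=1) + rng.normal(0, 2, 60)   # stand-in BNF percentage per plant

    pls = PLSRegression(n_components=10)
    scores = cross_val_score(pls, X, y, cv=5, scoring="r2")
    print("mean cross-validated R2:", round(scores.mean(), 2))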
18.
Foods ; 13(14)2024 Jul 18.
Article in English | MEDLINE | ID: mdl-39063357

ABSTRACT

Indoor production of basil (Ocimum basilicum L.) is influenced by the light spectrum, photosynthetic photon flux density (PPFD), and photoperiod. To investigate the effects of different lighting conditions on growth, chlorophyll content, and secondary metabolism, basil plants were grown from seedlings to fully expanded plants in microcosm devices under different light conditions: (a) white light at 250 and 380 µmol·m⁻²·s⁻¹ under a 16/8 h light/dark cycle and (b) white light at 380 µmol·m⁻²·s⁻¹ under 16/8 and 24/0 h light/dark cycles. A higher yield was recorded under 380 µmol·m⁻²·s⁻¹ than under 250 µmol·m⁻²·s⁻¹ (fresh and dry biomasses 260.6 ± 11.3 g vs. 144.9 ± 14.6 g and 34.1 ± 2.6 g vs. 13.2 ± 1.4 g, respectively), but not under the longer photoperiod. No differences in plant height or chlorophyll content index were recorded, regardless of PPFD level and photoperiod length. Nearly the same volatile organic compounds (VOCs) were detected under the different lighting treatments, belonging to terpenes, aldehydes, alcohols, esters, and ketones. Linalool, eucalyptol, and eugenol were the main VOCs regardless of the lighting conditions. Multivariate data analysis showed a sharp separation of non-volatile metabolites between apical and middle leaves, but this was not related to the different PPFD levels. Higher levels of sesquiterpenes and monoterpenes were detected in plants grown under 250 µmol·m⁻²·s⁻¹ and 380 µmol·m⁻²·s⁻¹, respectively. A weak separation of non-volatile metabolites based on photoperiod length and an overexpression of VOCs under the longer photoperiod were also observed.

19.
Sensors (Basel) ; 24(14)2024 Jul 12.
Article in English | MEDLINE | ID: mdl-39065926

ABSTRACT

Vineyard soils vary considerably between regions and plots, and there is frequently large soil heterogeneity within plots. Clay content in vineyard soils is of interest with respect to soil management, environmental monitoring, and wine quality. However, spatially resolved clay mapping is laborious and expensive. Gamma-ray spectrometry (GS) is a suitable tool for predicting clay content in precision agriculture when locally calibrated, but it has scarcely been tested site-independently or in vineyards. This study evaluated GS for predicting clay content with a site-independent calibration and four machine learning algorithms (Support Vector Machines, Random Forest, k-Nearest Neighbors, and Bayesian regularized neural networks) in eight vineyards from four German vine-growing regions. Clay content in the studied soils ranged from 62 to 647 g kg⁻¹. The Random Forest calibration was the most suitable. Test set evaluation revealed good model performance for the entire dataset, with RPIQ = 4.64, RMSEP = 56.7 g kg⁻¹, and R2 = 0.87; however, prediction quality varied between sites. Overall, GS with the Random Forest calibration was appropriate for predicting clay content and its spatial distribution, even in heterogeneous geopedological settings and in individual plots. GS is therefore considered a valuable tool for soil mapping in vineyards, where clay content and product quality are closely linked.
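
A brief sketch of the site-independent evaluation: train a Random Forest on gamma-ray features pooled from several vineyards and score a held-out site with RMSEP and RPIQ (interquartile range divided by RMSEP). The feature arrays are placeholders, not the survey data.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(3)
    X_train = rng.random((500, 4))             # e.g., K, U, Th, and total-count features
    y_train = rng.uniform(62, 647, 500)        # clay content, g/kg
    X_test = rng.random((100, 4))
    y_test = rng.uniform(62, 647, 100)

    rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_train, y_train)
    pred = rf.predict(X_test)

    rmsep = float(np.sqrt(np.mean((y_test - pred) ** 2)))
    q1, q3 = np.percentile(y_test, [25, 75])
    print(f"RMSEP={rmsep:.1f} g/kg, RPIQ={(q3 - q1) / rmsep:.2f}")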

20.
Data Brief ; 55: 110649, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39035837

ABSTRACT

Technology infusion in agriculture has been progressing steadily, touching upon various spheres such as crop identification, soil classification, yield prediction, disease detection, and weed-crop discrimination. On-demand crop type detection, often realized as crop mapping, is a primary requirement in agriculture. Alongside topographic LiDAR and thermal imaging, hyperspectral remote sensing is a versatile technique for mapping and predicting various parameters of interest in agriculture. Ongoing developments in the methods and algorithms of remote sensing data analysis for crop mapping require curated, high-resolution hyperspectral datasets varied by crop type and nutrient supply (nitrogen level) and accompanied by ground truth data. Aimed at enabling the development and validation of approaches for crop mapping at the plant level, we present a high-resolution ground-based hyperspectral imaging dataset acquired over fields of two vegetable crops (cabbage and eggplant). These crops were grown on experimental plots of the University of Agricultural Sciences, Bengaluru, India, maintained at three different nitrogen levels (high, medium, and low). The dataset contains hyperspectral imagery of the vegetable crops in two configurations: (i) scenes containing only a single crop type, and (ii) scenes containing both crops. In both configurations, each crop has plots representing the three nitrogen levels. Ultra-high-spatial-resolution hyperspectral imaging data were acquired in the 400-900 nm range with an effective spectral resolution of 3 nm and a spatial resolution of 3 mm using a ground-based push-broom hyperspectral imaging system (Headwall Photonics, USA). Ground truth data are also provided. The dataset is valuable for developing and validating methods and algorithms for precision agriculture applications, such as machine learning methods for crop mapping at the plant level and for estimating crop growth responses to different nitrogen levels.
