Results 1 - 10 of 10
1.
Front Plant Sci ; 13: 876069, 2022.
Article in English | MEDLINE | ID: mdl-35685013

ABSTRACT

Wheat stripe rust is responsible for major production losses and economic damage in the wheat industry. Accurate detection of wheat stripe rust is therefore critical to improving wheat quality and the agricultural economy. At present, existing detection methods based on convolutional neural networks (CNNs) give unsatisfactory results because wheat stripe rust lesions appear at arbitrary orientations and have large aspect ratios. To address these problems, this study develops WSRD-Net, a CNN-based method for detecting wheat stripe rust. The model is a refined single-stage rotation detector built on RetinaNet: a feature refinement module (FRM) is added to the rotated RetinaNet network to resolve the feature misalignment caused by the large aspect ratio of wheat stripe rust. Furthermore, we have built an oriented-annotation dataset of in-field wheat stripe rust images, called the wheat stripe rust dataset 2021 (WSRD2021). The performance of WSRD-Net is compared to that of state-of-the-art oriented object detection models; the results show that WSRD-Net obtains 60.8% AP and 73.8% recall on the wheat stripe rust dataset, higher than the other four oriented object detection models. Moreover, a comparison with horizontal object detection models shows that WSRD-Net outperforms them in localizing the corresponding disease areas.
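The oriented annotations in WSRD2021 imply a rotated-box representation. As a minimal sketch (assuming the common (cx, cy, w, h, θ) parameterization, which the abstract does not specify), the corner points of such a box can be computed as:

```python
import math

def obb_corners(cx, cy, w, h, theta):
    """Corners of an oriented bounding box given centre, size, and rotation
    angle theta in radians, listed from the rotated top-left offset."""
    dx, dy = w / 2.0, h / 2.0
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    # Rotate each half-extent offset about the box centre.
    return [(cx + ox * cos_t - oy * sin_t, cy + ox * sin_t + oy * cos_t)
            for ox, oy in ((-dx, -dy), (dx, -dy), (dx, dy), (-dx, dy))]
```

With θ = 0 this reduces to an ordinary axis-aligned box, which is one reason the parameterization is a popular choice for elongated targets such as rust stripes.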

2.
PLoS One ; 17(2): e0263401, 2022.
Article in English | MEDLINE | ID: mdl-35130303

ABSTRACT

In research on energy-efficient networking methods for precision agriculture, a hot topic is the energy budget of the sensing nodes in wireless sensor networks. Sensing nodes should provide good service with limited energy so as to support wide-range, multi-scenario acquisition and transmission of three-dimensional crop information, and their life cycle should be maximized under that energy constraint. Our method considers the transmission direction and node power consumption, and selects forward, high-energy nodes as the preferred cluster heads or data-forwarding nodes. Taking cropland cultivation of ginseng as the application background, we put forward a particle swarm optimization-based networking algorithm for wireless sensor networks. The algorithm can be used in precision agriculture to achieve an optimal equipment configuration in a network under limited energy while ensuring reliable communication. The node scale is configured from 50 to 300 nodes in a 500 × 500 m2 area, and simulated tests are conducted against the LEACH, BCDCP, and ECHERP routing protocols. Compared with these existing protocols, the proposed networking method prolongs the network lifetime and mitigates both the extent and the rate of decrease of the distance between the sensing nodes and the center nodes of the network, yielding a longer network life cycle and stronger environmental adaptability. It is an effective method for improving sensing-node lifetime in wireless sensor networks applied to cropland cultivation of ginseng.
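A hedged sketch of the kind of fitness such a particle swarm might minimize, combining member-to-head distance with head-to-sink distance scaled by inverse residual energy (favouring forward, high-energy heads). The weighting and exact terms are illustrative assumptions, not the paper's formulation:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def fitness(nodes, head_idx, sink, alpha=0.5):
    """Score a candidate cluster-head set; lower is better.
    nodes: list of (x, y, residual_energy); head_idx: indices of heads."""
    heads = [nodes[i] for i in head_idx]
    # Intra-cluster cost: each node joins its nearest cluster head.
    intra = sum(min(dist((x, y), (hx, hy)) for hx, hy, _ in heads)
                for x, y, _ in nodes)
    # Relay cost: distance to the sink, penalized for low-energy heads.
    relay = sum(dist((hx, hy), sink) / max(he, 1e-9) for hx, hy, he in heads)
    return alpha * intra + (1 - alpha) * relay
```

In a PSO loop, each particle would encode one candidate head set and the swarm would evolve toward the lowest-fitness configuration.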


Subjects
Agriculture , Algorithms , Computer Communication Networks , Panax/growth & development , Agriculture/instrumentation , Agriculture/methods , Agriculture/organization & administration , Biosensing Techniques/instrumentation , Biosensing Techniques/methods , China , Computer Communication Networks/instrumentation , Computer Communication Networks/organization & administration , Computer Simulation , Crops, Agricultural/growth & development , Data Collection/instrumentation , Data Collection/methods , Humans , Wireless Technology/instrumentation , Wireless Technology/organization & administration
3.
Pest Manag Sci ; 78(2): 711-721, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34672074

ABSTRACT

BACKGROUND: Pests cause significant damage to agricultural crops and reduce crop yields. Use of manual methods of pest forecasting for integrated pest management is labor-intensive and time-consuming. Here, we present an automatic system for monitoring pests in large fields, with the aim of replacing manual forecasting. The system comprises an automatic detection and counting system and a human-computer data statistical fitting system. Image data sets of the target pests from large fields are first input into the system. The number of pests in the image is then counted both manually and using the automatic system. Finally, a mapping relationship between counts obtained using the automated system and by agricultural experts is established using the statistical fitting system. RESULTS: Trends in the pest-count curves produced using the manual and automated counting methods were very similar. To sample the number of pests for manual statistics, plants were shaken to transfer the pests from the plant to a plate. Hence, pests hiding within plant crevices were also sampled and included in the count, whereas the automatic method counted only the pests visible in the images. Therefore, the computer index threshold was much lower than the manual index threshold. However, the proposed system correctly reflected trends in pest numbers obtained using computer vision. CONCLUSION: The experimental results demonstrate that our automatic pest-monitoring system can generate pest grades and can replace manual forecasting methods in large fields. © 2021 Society of Chemical Industry.
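The human-computer statistical fitting step maps automated counts onto the (higher) manual index scale. The abstract does not state the fitted model, so a simple least-squares line is only one plausible form of such a mapping:

```python
def fit_linear(auto_counts, expert_counts):
    """Least-squares fit expert ≈ a * auto + b, mapping automated
    image-based counts onto the manual (shake-and-count) index scale."""
    n = len(auto_counts)
    mx = sum(auto_counts) / n
    my = sum(expert_counts) / n
    sxx = sum((x - mx) ** 2 for x in auto_counts)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(auto_counts, expert_counts))
    a = sxy / sxx
    b = my - a * mx
    return a, b
```

Because the automated method only sees pests visible in the image, the fitted slope would typically exceed 1, matching the abstract's observation that the computer index threshold sits well below the manual one.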


Subjects
Crops, Agricultural , Pest Control , Agriculture , Computers , Data Interpretation, Statistical
4.
Front Plant Sci ; 13: 1033544, 2022.
Article in English | MEDLINE | ID: mdl-36777532

ABSTRACT

One of the main techniques in smart plant protection is pest detection using deep learning, which is convenient, cost-effective, and responsive. However, existing deep-learning-based methods can detect only a dozen or so common types of bulk agricultural pests in structured environments. Such methods also generally require large-scale, well-labeled pest data sets for base-class training and novel-class fine-tuning, which significantly hinders the wider adoption of deep convolutional neural network approaches in pest detection for economic crops, forestry, and emergent invasive pests. In this paper, a few-shot pest detection network is introduced to detect rarely collected pest species in natural scenarios. Firstly, a prior-knowledge auxiliary architecture for few-shot pest detection in the wild is presented. Secondly, a hierarchical few-shot pest detection data set, collected in the wild in China over the past few years, has been built. Thirdly, a pest ontology relation module is proposed to combine insect taxonomy with inter-image similarity information. Several experiments following a standard few-shot detection protocol show that the presented model achieves performance comparable to several representative few-shot detection algorithms in terms of both mean average precision (mAP) and mean average recall (mAR). The results demonstrate the promise of the proposed few-shot detection architecture.
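One plausible reading of combining insect taxonomy with inter-image similarity is a weighted blend of visual similarity and a taxonomy-derived prior. The function below is an illustrative assumption, not the paper's formulation; `lam` and the tree-distance prior are invented for the sketch:

```python
def combined_similarity(visual_sim, taxonomic_dist, max_dist, lam=0.5):
    """Blend visual similarity (0..1) with a taxonomy prior: classes close
    in the taxonomic tree (small tree distance) get a higher prior, helping
    few-shot classes borrow evidence from well-sampled relatives."""
    taxo_prior = 1.0 - taxonomic_dist / max_dist
    return lam * visual_sim + (1.0 - lam) * taxo_prior
```

Under this sketch, a novel pest visually similar to a base class in the same genus scores higher than one equally similar but taxonomically distant.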

5.
Sensors (Basel) ; 21(5)2021 Feb 25.
Article in English | MEDLINE | ID: mdl-33668820

ABSTRACT

The recent explosion of large-scale standard datasets of annotated images has offered promising opportunities for deep learning in effective and efficient object detection. However, because of the large quality gap between these standardized datasets and practical raw data, how to maximize the utility of deep learning in practical agricultural applications remains a critical problem. Here, we introduce a domain-specific benchmark dataset for tiny wild pest recognition and detection, called AgriPest, providing researchers and communities with a standard large-scale dataset of practical wild pest images and annotations, as well as evaluation procedures. Collected over the past seven years with our purpose-designed image collection equipment in field environments, AgriPest comprises 49.7K images of four crops containing 14 species of pests. All of the images are manually annotated by agricultural experts, with up to 264.7K bounding boxes locating pests. This paper also offers a detailed analysis of AgriPest in which the validation set is split into four types of scenes common in practical pest monitoring applications. We explore and evaluate the performance of state-of-the-art deep learning techniques on AgriPest. We believe that the scale, accuracy, and diversity of AgriPest offer great opportunities to researchers in computer vision as well as in pest monitoring applications.


Subjects
Agriculture , Deep Learning , Benchmarking , Crops, Agricultural , Pest Control
6.
Micromachines (Basel) ; 13(1)2021 Dec 31.
Article in English | MEDLINE | ID: mdl-35056238

ABSTRACT

Video object detection and human action detection are applied in many fields, such as video surveillance and face recognition. Video object detection covers both object classification and object localization within the frame; human action recognition is the detection of human actions. Video detection is usually more challenging than image detection, since video frames are often blurrier than still images, and video detection faces additional difficulties such as video defocus, motion blur, and partial occlusion. Nowadays, video detection technology can achieve real-time detection, or highly accurate detection on blurry video frames. In this paper, various video object and human action detection approaches are reviewed and discussed, many of which have achieved state-of-the-art results. We mainly review and discuss classic video detection methods based on supervised learning. In addition, the frequently used video object detection and human action recognition datasets are reviewed. Finally, a summary of video detection is presented: video object and human action detection methods can be classified into frame-by-frame (frame-based) detection, key-frame-based detection, and detection using temporal information; the main ways of exploiting the temporal information of adjacent frames are optical flow, Long Short-Term Memory, and convolution across adjacent frames.
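The frame-by-frame versus key-frame distinction in the taxonomy above can be sketched as follows; `detect` stands in for any still-image detector, and reusing the last key-frame result is a minimal placeholder for the flow- or LSTM-based propagation the review discusses:

```python
def detect_video(frames, detect, key_stride=1):
    """Frame-by-frame detection when key_stride == 1; key-frame detection
    otherwise, where non-key frames reuse the most recent key-frame result
    instead of running the (expensive) detector again."""
    results = []
    last = []
    for i, frame in enumerate(frames):
        if i % key_stride == 0:
            last = detect(frame)  # full detection on key frames only
        results.append(last)
    return results
```

Key-frame schemes trade per-frame accuracy for speed; temporal-information methods then try to recover that accuracy by propagating or aggregating features across adjacent frames.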

7.
Sensors (Basel) ; 20(3)2020 Jan 21.
Article in English | MEDLINE | ID: mdl-31973039

ABSTRACT

Increasing grain production is essential in areas where food is scarce, and controlling crop diseases and pests in time is an effective way to do so. To move toward a real-time video detection system for crop diseases and pests, we propose a deep learning-based video detection architecture with a custom backbone for detecting plant diseases and pests in videos. We first split the video into still frames, then send each frame to a still-image detector, and finally re-synthesize the frames into a video. The still-image detector uses Faster R-CNN as its framework, and we use image-trained models to detect relatively blurry videos. Additionally, a set of video-based evaluation metrics built on a machine learning classifier is proposed, which reflected the quality of video detection effectively in the experiments. Experiments showed that, in our experimental environment, the system with the custom backbone was better suited to detecting untrained rice videos than systems with VGG16, ResNet-50, or ResNet-101 backbones, or YOLOv3.


Subjects
Machine Learning , Neural Networks, Computer , Oryza/parasitology , Plant Diseases/parasitology , Video Recording/methods , Deep Learning
8.
Sci Rep ; 9(1): 7024, 2019 05 07.
Article in English | MEDLINE | ID: mdl-31065055

ABSTRACT

Insect pests are known to be a major cause of damage to agricultural crops. This paper proposes a deep learning-based pipeline for localizing and counting agricultural pests in images by self-learning saliency feature maps. Our method integrates a convolutional neural network (CNN) based on ZF Net (the Zeiler and Fergus model) and a region proposal network (RPN) with non-maximum suppression (NMS) to remove overlapping detections. First, the convolutional layers of ZF Net, without the average pooling layer and fully connected layers, are used to compute image feature maps, whose smaller convolution kernels better retain the original pixel information. Then, several critical parameters of the method are optimized, including the output size, score threshold, and NMS threshold. To demonstrate the practical applicability of our method, different feature extraction networks are explored, including AlexNet, ResNet, and ZF Net. Finally, the model trained on smaller multi-scale images is tested on the original large images. Experimental results showed that our method achieved a precision of 0.93 with a miss rate of 0.10, and a mean Average Precision (mAP) of 0.885.
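Greedy NMS, which the pipeline uses to remove overlapping detections, is standard and fits in a few lines; the 0.5 threshold below is illustrative, since the abstract says the NMS threshold was tuned:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes overlapping
    it above `thresh`, and repeat on the remainder; returns kept indices."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thresh]
    return keep
```

For dense pest images, the NMS threshold matters: set too low, it merges genuinely adjacent insects into one detection, inflating the miss rate.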


Subjects
Crops, Agricultural/parasitology , Image Processing, Computer-Assisted/methods , Insects/growth & development , Algorithms , Animals , Crops, Agricultural/growth & development , Deep Learning , Models, Biological
9.
Sensors (Basel) ; 19(6)2019 Mar 18.
Article in English | MEDLINE | ID: mdl-30889917

ABSTRACT

Human driving behaviors are personalized and unique, and a driver's "automobile fingerprint" could help automatically identify different drivers, with applications in fields such as anti-theft systems. Current research suggests that in-vehicle Controller Area Network bus (CAN-BUS) data can serve as an effective representation of driving behavior for recognizing different drivers. However, traditional methods struggle to capture the complex temporal features of driving behaviors. This paper proposes an end-to-end deep learning framework that fuses convolutional neural networks and recurrent neural networks with an attention mechanism, which is better suited to time-series CAN-BUS sensor data. The proposed method automatically learns features of driving behaviors and models temporal features without requiring professional feature-engineering knowledge. Moreover, it captures the salient structural features of high-dimensional sensor data and exploits the correlations among multi-sensor data for rich feature representations of driving behaviors. Experimental results show that the proposed framework performs well on a real-world driving behavior identification task, outperforming state-of-the-art methods.
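The attention mechanism's role, weighting salient time steps before pooling, can be sketched with plain softmax attention over per-time-step hidden vectors. This is a generic sketch of the technique, not the paper's exact architecture:

```python
import math

def attention_pool(hidden, scores):
    """Attention pooling: softmax the per-time-step relevance scores, then
    take the weighted sum of hidden vectors, so salient CAN-BUS time steps
    dominate the pooled driving-behavior representation."""
    m = max(scores)                      # subtract max for numerical stability
    exp = [math.exp(s - m) for s in scores]
    z = sum(exp)
    weights = [e / z for e in exp]
    dim = len(hidden[0])
    return [sum(w * h[d] for w, h in zip(weights, hidden))
            for d in range(dim)]
```

In the CNN-RNN setting, `hidden` would be the recurrent states over a driving window and `scores` would come from a small learned scoring layer; uniform scores reduce this to average pooling.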

10.
Sensors (Basel) ; 18(12)2018 Nov 27.
Article in English | MEDLINE | ID: mdl-30486481

ABSTRACT

Regarding crop growth, insect pest outbreaks are one of the important factors affecting crop yield. Since many insect species are extremely similar in appearance, insect detection on field crops such as rice and soybean is more challenging than generic object detection. At present, distinguishing insects in crop fields relies mainly on manual classification, which is an extremely time-consuming and expensive process. This work proposes a convolutional neural network model for the multi-class classification of crop insects. The model exploits the advantages of neural networks to comprehensively extract multifaceted insect features. During the region proposal stage, a Region Proposal Network is adopted instead of the traditional selective search technique to generate a smaller number of proposal windows, which is especially important for improving prediction accuracy and accelerating computation. Experimental results show that the proposed method achieves heightened accuracy and is superior to state-of-the-art traditional insect classification algorithms.
