Results 1 - 9 of 9
1.
J Sci Food Agric ; 104(11): 6615-6625, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-38523076

ABSTRACT

BACKGROUND: Visual grading of tomato quality is hampered by smooth skin, uneven illumination, and defects that are difficult to see. Intelligent detection of postharvest epidermal defects helps to further improve the economic value of postharvest tomatoes. RESULTS: An image acquisition device based on fluorescence technology was designed to capture a dataset of tomato skin defects covering rot defects, crack defects, and imperceptible defects. The YOLOv5m model was improved by introducing the Convolutional Block Attention Module and replacing part of the convolution kernels in the backbone network with Switchable Atrous Convolution. Comparison and ablation experiments show that the Precision, Recall, and mean Average Precision of the improved YOLOv5m model were 89.93%, 82.33%, and 87.57%, higher than those of YOLOv5m, Faster R-CNN, and YOLOv7, while the average detection time was reduced by 47.04 ms per image. CONCLUSION: This study uses fluorescence imaging and an improved YOLOv5m model to detect tomato epidermal defects, yielding better identification of imperceptible defects and detection of multiple defect categories. This provides strong technical support for intelligent detection and quality grading of tomatoes. © 2024 Society of Chemical Industry.


Subjects
Fruit, Plant Epidermis, Solanum lycopersicum, Solanum lycopersicum/chemistry, Fruit/chemistry, Plant Epidermis/chemistry, Fluorescence, Optical Imaging/instrumentation, Optical Imaging/methods
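The Convolutional Block Attention Module mentioned above reweights a feature map with channel attention followed by spatial attention. A minimal NumPy sketch of that two-step reweighting, with random (untrained) weights purely for illustration — not the authors' YOLOv5m implementation:

```python
import numpy as np

def channel_attention(fmap, reduction=2):
    # fmap: (C, H, W). Global avg- and max-pool per channel, pass both
    # through a shared two-layer MLP, and gate the channels with a sigmoid.
    C = fmap.shape[0]
    avg = fmap.mean(axis=(1, 2))
    mx = fmap.max(axis=(1, 2))
    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((C // reduction, C)) * 0.1   # random weights:
    W2 = rng.standard_normal((C, C // reduction)) * 0.1   # in CBAM these are learned
    mlp = lambda v: W2 @ np.maximum(W1 @ v, 0.0)
    att = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))
    return fmap * att[:, None, None]

def spatial_attention(fmap):
    # Channel-wise avg and max maps gate each spatial location.
    avg = fmap.mean(axis=0)
    mx = fmap.max(axis=0)
    att = 1.0 / (1.0 + np.exp(-(avg + mx)))
    return fmap * att[None, :, :]

x = np.random.default_rng(1).standard_normal((4, 8, 8))
y = spatial_attention(channel_attention(x))
print(y.shape)  # (4, 8, 8)
```

Because both attention maps lie in (0, 1), the module only rescales activations; the output keeps the input's shape, so it can be dropped between existing backbone layers.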
2.
Front Plant Sci ; 13: 868745, 2022.
Article in English | MEDLINE | ID: mdl-35651761

ABSTRACT

As one of the representative algorithms of deep learning, the convolutional neural network (CNN), with its advantages of local perception and parameter sharing, has developed rapidly. CNN-based detection technology has been widely used in computer vision, natural language processing, and other fields. Fresh fruit production is an important socioeconomic activity, and CNN-based deep learning detection technology has been successfully applied to its key links. To the best of our knowledge, this review is the first to cover the whole fresh fruit production process. We first introduce the network architecture and implementation principle of CNNs and describe the training process of a CNN-based deep learning model in detail. A large number of articles were surveyed that use CNN-based deep learning detection technology to address challenges in the key links of fresh fruit production, including fruit flower detection, fruit detection, fruit harvesting, and fruit grading. CNN-based object detection is elaborated from data acquisition to model training, and different CNN-based detection methods are compared for each link of fresh fruit production. The survey shows that improved CNN deep learning models can realize their full detection potential when combined with the characteristics of each link of fruit production. It also suggests that CNN-based detection may overcome the challenges posed by environmental issues, exploration of new areas, and multi-task execution in future fresh fruit production.
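The local perception and parameter sharing attributed to CNNs above come from sliding one small kernel over the whole image, so the same few weights respond to every local patch. A toy NumPy sketch (deep-learning-style cross-correlation, no learned weights):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation (what deep learning calls convolution):
    one small kernel slides over the image, so a handful of shared weights
    produces the whole output feature map."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(25, dtype=float).reshape(5, 5)   # gradient-like toy image
edge = np.array([[1.0, -1.0]])                   # horizontal-difference kernel
feat = conv2d(img, edge)
print(feat.shape)  # (5, 4)
```

The two-weight kernel here plays the role of a learned filter; a trained CNN would stack many such filters and learn their weights by backpropagation.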

3.
Sci Total Environ ; 837: 155807, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-35537509

ABSTRACT

The development of machine learning and deep learning has provided solutions for predicting microbiota responses to environmental change from microbial high-throughput sequencing data. However, few studies have specifically compared the performance and practicality of the two types of binary classification models to identify the better algorithms for microbiota data analysis. Here, for the first time, we evaluated the performance, accuracy, and running time of binary classification models built with three machine learning methods - random forest (RF), support vector machine (SVM), and logistic regression (LR) - and one deep learning method, the back-propagation neural network (BPNN). The models were built on microbiota datasets from which low-quality variables had been removed and in which the class imbalance problem had been addressed, and were further optimized by parameter tuning. Our study demonstrates that dataset pre-processing is a necessary step in model construction. Among the four binary classification models, BPNN and RF were the most suitable methods for constructing microbiota binary classifiers. When the four models were used to predict multiple microbial datasets, BPNN showed the highest accuracy and the most robust performance, with RF ranked second. We also constructed optimal models by adjusting the epochs of the BPNN and the n_estimators of the RF six times. This evaluation of model performance provides a road map for applying artificial intelligence to the assessment of microbial ecology.


Subjects
Artificial Intelligence, Neural Networks, Computer, Algorithms, High-Throughput Nucleotide Sequencing, Machine Learning, Support Vector Machine
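Of the four compared classifiers, logistic regression is the simplest to sketch from scratch. A minimal NumPy version trained by gradient descent on toy two-class data — illustrative only; the study's microbiota datasets, imbalance handling, and tuning are not reproduced here:

```python
import numpy as np

def train_logistic(X, y, lr=0.5, epochs=500):
    # Plain logistic regression fitted by batch gradient descent --
    # the "LR" baseline among the compared binary classifiers.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid probabilities
        grad_w = X.T @ (p - y) / len(y)          # gradient of log-loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

rng = np.random.default_rng(0)
# Toy "abundance" matrix: two well-separated classes, 2 features each.
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y = np.r_[np.zeros(50), np.ones(50)]
w, b = train_logistic(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
acc = np.mean(pred == y)
print(acc)
```

On real microbiota tables the pre-processing the abstract stresses (dropping low-quality variables, rebalancing classes) would happen before this fitting step.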
4.
Animals (Basel) ; 12(8)2022 Apr 18.
Article in English | MEDLINE | ID: mdl-35454293

ABSTRACT

In precision dairy farming, computer vision-based approaches are widely employed to monitor cattle conditions (e.g., physical state, physiology, health, and welfare). To this end, accurate and effective identification of individual cows is a prerequisite. In this paper, a deep learning re-identification network model, the Global and Part Network (GPN), is proposed to identify individual cow faces. The GPN model, with ResNet50 as the backbone network to generate pooled feature maps, builds three branch modules (Middle branch, Global branch, and Part branch) to learn more discriminative and robust feature representations from the maps. Specifically, the Middle branch and the Global branch separately extract middle-dimension and high-dimension global features from the maps, while the Part branch extracts local features from uniform blocks; all of these are integrated to form the feature representation for cow face re-identification. With these strategies, the GPN model not only extracts discriminative global and local features but also learns the subtle differences among cow faces. To further improve performance, a Global and Part Network with Spatial Transform (GPN-ST) model is also developed, which incorporates an attention mechanism module in the Part branch. Additionally, to test the efficiency of the proposed approach, a large-scale cow face dataset was constructed containing 130,000 images of 3000 cows under different conditions (e.g., occlusion, changes of viewpoint and illumination, blur, and background clutter). The results of various contrast experiments show that the GPN outperforms representative re-identification methods, and the improved GPN-ST model achieves a higher accuracy rate (up by 2.8% and 2.2% in Rank-1 and mAP, respectively) compared with the GPN model.
In conclusion, the Global and Part feature deep network with an attention mechanism can effectively improve the performance of cow face re-identification.
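The Part branch's idea of pooling local features from uniform blocks can be sketched as horizontal-stripe pooling, a common pattern in re-identification networks. The NumPy sketch below is illustrative only, not the GPN architecture:

```python
import numpy as np

def part_branch(fmap, n_parts=3):
    """Split a (C, H, W) feature map into n_parts horizontal stripes and
    average-pool each one -- a toy stand-in for a part-based branch."""
    H = fmap.shape[1]
    stripes = np.array_split(np.arange(H), n_parts)
    return np.stack([fmap[:, idx, :].mean(axis=(1, 2)) for idx in stripes])

def global_branch(fmap):
    # One descriptor summarizing the whole map.
    return fmap.mean(axis=(1, 2))

x = np.random.default_rng(0).standard_normal((8, 6, 6))   # fake backbone output
parts = part_branch(x)                     # (3, 8): one descriptor per stripe
glob = global_branch(x)                    # (8,): global descriptor
feature = np.concatenate([glob, parts.ravel()])   # fused representation
print(feature.shape)  # (32,)
```

Concatenating stripe descriptors with the global one is what lets such a network keep both whole-face appearance and the subtle local differences between faces.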

5.
J Acoust Soc Am ; 150(5): 3329, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34852569

ABSTRACT

The connections between the vortical and near-acoustic fields of three-stream, high-speed jets are investigated with the ultimate purpose of developing linear surface-based models for the noise source. Those models would be informed by low-cost Reynolds-averaged Navier-Stokes (RANS) computations of the flow field. The study uses two triple-stream jets: one coaxial, the other with an eccentric tertiary flow that yields noise suppression in preferred directions. Large eddy simulations (LES) validate the RANS-based models for the convective velocity Uc of the noise-generating turbulent eddies. In addition, the LES results help define a "radiator surface" on which the jet noise source model would be prescribed. The radiator surface lies near the boundary between the rotational and irrotational fields and is defined as the surface on which the Uc distribution obtained from space-time correlations of the pressure matches that inferred from the RANS model. The edge of the mean vorticity field is nearly coincident with the radiator surface, which suggests a RANS-based criterion for locating this surface. The two-dimensional space-time correlations show how the asymmetry of the tertiary stream, and the resulting thicker low-speed flow, weakens the generation of acoustic disturbances by the vortical field.
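The convective velocity Uc described above is inferred from the lag of the space-time correlation peak between pressure signals at two points. A toy NumPy sketch with a synthetic delayed signal — the probe spacing, sampling rate, and speed are invented numbers, not the paper's jets:

```python
import numpy as np

# Two "probes" a distance dx apart: the downstream signal is a delayed
# copy of the upstream one, and the cross-correlation peak lag gives
# Uc = dx / lag_time.
rng = np.random.default_rng(0)
fs = 1000.0          # sampling rate, Hz (assumed)
dx = 0.2             # probe separation, m (assumed)
uc_true = 100.0      # convective speed to recover, m/s (assumed)
lag = int(round(dx / uc_true * fs))        # delay in samples

s = rng.standard_normal(4096)
p1 = s                                     # upstream pressure trace
p2 = np.roll(s, lag)                       # delayed downstream trace

corr = np.correlate(p2, p1, mode="full")
best = np.argmax(corr) - (len(s) - 1)      # peak lag in samples
uc_est = dx / (best / fs)
print(uc_est)  # ~100.0
```

In the actual study the correlations come from LES pressure fields rather than point probes, and the recovered Uc distribution is what the RANS-based model is checked against.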

6.
Front Plant Sci ; 12: 705737, 2021.
Article in English | MEDLINE | ID: mdl-34557214

ABSTRACT

The accurate detection of green citrus in natural environments is a key step toward the intelligent robotic harvesting of citrus. At present, visual detection algorithms for green citrus in natural environments still have poor accuracy and robustness due to the color similarity between fruits and backgrounds. This study proposes a multi-scale convolutional neural network (CNN) named YOLO BP to detect green citrus in natural environments. First, the backbone network, CSPDarknet53, was trimmed to extract high-quality features and improve the real-time performance of the network. Then, by removing the redundant nodes of the Path Aggregation Network (PANet) and adding additional connections, a bi-directional feature pyramid network (Bi-PANet) was proposed to efficiently fuse the multilayer features. Finally, three groups of green citrus detection experiments were designed to evaluate the network performance. The results showed that the accuracy, recall, and mean average precision (mAP) of YOLO BP were 86%, 91%, and 91.55%, respectively, with a detection speed of 18 frames per second (FPS) - 2, 7, and 4.3 percentage points and 1 FPS higher than those of YOLO v4. The proposed detection algorithm showed strong robustness and high accuracy in the complex orchard environment, which provides technical support for green fruit detection in natural environments.
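YOLO-family detectors such as YOLO BP prune overlapping candidate boxes with non-maximum suppression before precision and recall are scored. A minimal greedy NMS sketch — generic, not the YOLO BP code:

```python
import numpy as np

def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop every remaining box overlapping it by IoU >= thresh, repeat."""
    order = list(np.argsort(scores)[::-1])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(int(i))
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]
```

The second box overlaps the first heavily (IoU 0.81) and is suppressed, while the distant third box survives — exactly the behavior needed when several anchors fire on one green fruit.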

7.
Sensors (Basel) ; 19(2)2019 Jan 21.
Article in English | MEDLINE | ID: mdl-30669645

ABSTRACT

Fruit detection in real outdoor conditions is necessary for automatic guava harvesting, and the branch-dependent pose of fruits is also crucial to guide a robot to approach and detach the target fruit without colliding with its mother branch. To conduct automatic, collision-free picking, this study investigates a fruit detection and pose estimation method by using a low-cost red-green-blue-depth (RGB-D) sensor. A state-of-the-art fully convolutional network is first deployed to segment the RGB image to output a fruit and branch binary map. Based on the fruit binary map and RGB-D depth image, Euclidean clustering is then applied to group the point cloud into a set of individual fruits. Next, a multiple three-dimensional (3D) line-segments detection method is developed to reconstruct the segmented branches. Finally, the 3D pose of the fruit is estimated using its center position and nearest branch information. A dataset was acquired in an outdoor orchard to evaluate the performance of the proposed method. Quantitative experiments showed that the precision and recall of guava fruit detection were 0.983 and 0.948, respectively; the 3D pose error was 23.43° ± 14.18°; and the execution time per fruit was 0.565 s. The results demonstrate that the developed method can be applied to a guava-harvesting robot.
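The Euclidean clustering step that groups the fruit point cloud into individual guavas can be sketched as greedy region growing with a distance threshold. The NumPy version below is a toy stand-in, not the study's implementation:

```python
import numpy as np

def euclidean_cluster(points, tol=0.1):
    """Greedy region-growing Euclidean clustering: any point closer than
    `tol` to a cluster member joins that cluster. Points are (N, 3)."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], [seed]
        while queue:
            i = queue.pop()
            d = np.linalg.norm(points - points[i], axis=1)
            near = [j for j in unvisited if d[j] < tol]
            for j in near:
                unvisited.remove(j)
            queue.extend(near)
            cluster.extend(near)
        clusters.append(sorted(cluster))
    return clusters

rng = np.random.default_rng(0)
# Two tight synthetic "fruits" far apart in 3D (units arbitrary).
fruit_a = rng.normal([0, 0, 0], 0.01, (20, 3))
fruit_b = rng.normal([1, 1, 1], 0.01, (20, 3))
clusters = euclidean_cluster(np.vstack([fruit_a, fruit_b]))
print(len(clusters))  # 2
```

In practice the threshold would be set from fruit size and sensor noise, and the clustering would run only on points inside the fruit binary mask from the segmentation network.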

8.
Sensors (Basel) ; 18(3)2018 Feb 26.
Article in English | MEDLINE | ID: mdl-29495421

ABSTRACT

Non-destructive testing of litchi fruit is of great significance for the fresh-keeping, storage, and transportation of harvested litchis. To achieve quick and accurate micro-damage detection, a non-destructive grading test method for litchi fruits was studied using 400-1000 nm hyperspectral imaging technology. The Huaizhi litchi was chosen for this study, and the average hyperspectral data for the region of interest (ROI) of each litchi fruit were extracted for spectral analysis. Hyperspectral data samples of fresh and micro-damaged litchi fruits were then selected, and partial least squares discriminant analysis (PLS-DA) was used to establish a prediction model for the qualitative analysis of litchis of different qualities. For the external validation set, the mean per-class recall and precision were 94.10% and 93.95%, respectively. Principal component analysis (PCA) was used to determine the wavelengths sensitive to litchi quality characteristics: the wavelengths corresponding to the local extrema of the PC3 weight coefficients, i.e., 694, 725, and 798 nm. The single-band images corresponding to each sensitive wavelength were then analyzed. Finally, seven-dimensional features of the PC3 image were extracted using the gray-level co-occurrence matrix (GLCM). Through image processing, a least squares support vector machine (LS-SVM) model was built to classify litchis of different qualities. The model was validated on the experimental data: the average accuracy on the validation set was 93.75%, and on the external validation set 95%. The results indicate the feasibility of using hyperspectral imaging technology for postharvest non-destructive detection and classification of litchi.


Subjects
Litchi, Fruit, Least-Squares Analysis, Principal Component Analysis, Spectroscopy, Near-Infrared, Support Vector Machine
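Selecting sensitive wavelengths from PCA weight coefficients, as done for PC3 above, amounts to finding extrema of a component's loading vector. A NumPy sketch on synthetic spectra — the 700 nm "damage band" is invented for illustration, not the paper's data:

```python
import numpy as np

# Synthetic spectra: 30 samples over 400-1000 nm whose main variation is a
# hypothetical absorption band near 700 nm, plus small noise.
rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 1000, 200)
band = np.exp(-((wavelengths - 700) / 20) ** 2)      # assumed damage band
spectra = rng.standard_normal((30, 1)) * band + rng.normal(0, 0.05, (30, 200))

# PCA via SVD of the mean-centred matrix; a component's loading vector
# weights each wavelength, so its extrema mark the sensitive bands.
X = spectra - spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
loading = Vt[0]                                      # first-PC loadings here
sensitive = wavelengths[np.argmax(np.abs(loading))]
print(sensitive)  # close to 700 nm
```

With real litchi spectra the informative component happened to be PC3 and showed three local extrema (694, 725, 798 nm); the selection principle is the same.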
9.
Sensors (Basel) ; 18(4)2018 Mar 25.
Article in English | MEDLINE | ID: mdl-29587378

ABSTRACT

Night-time fruit-picking technology is important for picking robots. This paper proposes a night-time detection and picking-point positioning method for green grape-picking robots, to solve the difficult problem of detecting and picking green grapes at night under artificial lighting. Taking a representative green grape, Centennial Seedless, as the research object, daytime and night-time grape images were captured with a custom-designed visual system. Detection proceeded in the following steps: (1) the RGB (red, green, and blue) color model was selected for night-time green grape detection through analysis of the color features of grape images under daytime natural light and night-time artificial lighting; the R component of the RGB color model was rotated and the image resolution was compressed; (2) the improved Chan-Vese (C-V) level set model and morphological processing were used to remove the image background, leaving the grape fruit; (3) based on the vertical suspension of the grapes, combining the minimum circumscribed rectangle of the fruit with Hough straight-line detection, a straight line was fitted to the fruit stem, and the picking point was calculated on stems whose fitted line made an angle of less than 15° with the vertical. The visual detection experiments showed that the accuracy of grape fruit detection was 91.67% and the average running time of the proposed algorithm was 0.46 s. The picking-point calculation experiments showed that the highest accuracy of the picking-point calculation was 92.5% and the lowest was 80%. These results demonstrate that the proposed method of night-time green grape detection and picking-point calculation can provide technical support for grape-picking robots.
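The 15° stem-angle test in step (3) reduces to comparing the fitted line's deviation from vertical. A small sketch with hypothetical endpoint coordinates (the midpoint-as-cut-position rule is an illustrative simplification, not the paper's exact formula):

```python
import math

def stem_angle_from_vertical(p1, p2):
    """Angle in degrees between the line through p1-p2 and the vertical
    axis, in image coordinates (y grows downward)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.degrees(math.atan2(abs(dx), abs(dy)))

def picking_point(p1, p2, max_angle=15.0):
    # Accept the stem only when it hangs within max_angle of vertical,
    # then take the midpoint of the fitted segment as the cut position.
    if stem_angle_from_vertical(p1, p2) >= max_angle:
        return None
    return ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)

print(picking_point((100, 40), (104, 90)))   # near-vertical stem -> midpoint
print(picking_point((100, 40), (160, 60)))   # oblique line -> rejected (None)
```

Rejecting oblique lines this way filters out spurious Hough detections (e.g., canopy edges) that do not correspond to a hanging stem.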
