Results 1 - 9 of 9
1.
Spectrochim Acta A Mol Biomol Spectrosc ; 323: 124897, 2024 Dec 15.
Article in English | MEDLINE | ID: mdl-39094271

ABSTRACT

Assessing crop seed phenotypic traits is essential for breeding innovations and germplasm enhancement. However, the tough outer layers of thin-shelled seeds pose significant challenges for traditional methods aimed at rapid assessment of their internal structures and quality attributes. This study explores the potential of combining terahertz (THz) time-domain spectroscopy and imaging with semantic segmentation models for the rapid, non-destructive examination of these traits. A total of 120 watermelon seed samples from three distinct varieties were curated for this study, enabling a comprehensive analysis of both their outer layers and inner kernels. Using a transmission imaging modality, THz spectral images were acquired and subsequently reconstructed with a correlation coefficient method. Deep learning-based SegNet and DeepLab V3+ models were employed for automatic tissue segmentation. Our research revealed that DeepLab V3+ significantly surpassed SegNet in both speed and accuracy. Specifically, DeepLab V3+ achieved a pixel accuracy of 96.69% and an intersection over union (IoU) of 91.3% for the outer layer, with the inner kernel results close behind. These results underscore the proficiency of DeepLab V3+ in distinguishing between the seed coat and kernel, thereby furnishing precise phenotypic trait analyses for thin-shelled seeds. Moreover, this study highlights the instrumental role of deep learning technologies in advancing agricultural research and practice.
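As a concrete illustration of the two metrics reported above, here is a minimal sketch (not the authors' code) of pixel accuracy and per-class IoU, assuming integer-labelled masks with hypothetical labels 0 = background, 1 = outer layer, 2 = kernel:

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    """Fraction of pixels whose predicted label matches the ground truth."""
    return float((pred == gt).mean())

def class_iou(pred: np.ndarray, gt: np.ndarray, cls: int) -> float:
    """Intersection over union for a single class label."""
    inter = np.logical_and(pred == cls, gt == cls).sum()
    union = np.logical_or(pred == cls, gt == cls).sum()
    return float(inter / union) if union > 0 else float("nan")

# Hypothetical usage on predicted and ground-truth seed masks:
# acc = pixel_accuracy(pred_mask, gt_mask)          # paper reports 96.69%
# iou_shell = class_iou(pred_mask, gt_mask, cls=1)  # outer-layer IoU, 91.3%
```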


Subjects
Citrullus ; Seeds ; Seeds/chemistry ; Citrullus/chemistry ; Terahertz Imaging/methods ; Deep Learning ; Image Processing, Computer-Assisted/methods ; Terahertz Spectroscopy/methods ; Semantics
2.
Front Plant Sci ; 14: 1124939, 2023.
Article in English | MEDLINE | ID: mdl-37426958

ABSTRACT

The field of computer vision has shown great potential for the identification of crops at large scales based on multispectral images. However, the challenge in designing crop identification networks lies in striking a balance between accuracy and a lightweight framework, and accurate recognition methods for non-large-scale crops are lacking. In this paper, we propose an improved encoder-decoder framework based on DeepLab v3+ to accurately identify crops with different planting patterns. The network employs ShuffleNet v2 as the backbone to extract features at multiple levels. The decoder module integrates a convolutional block attention module that combines channel and spatial attention mechanisms to fuse attention features across the channel and spatial dimensions. We establish two datasets, DS1 and DS2: DS1 is obtained from areas with large-scale crop planting, and DS2 from areas with scattered crop planting. On DS1, the improved network achieves a mean intersection over union (mIoU) of 0.972, an overall accuracy (OA) of 0.981, and a recall of 0.980, improvements of 7.0%, 5.0%, and 5.7%, respectively, over the original DeepLab v3+. On DS2, the improved network improves mIoU, OA, and recall by 5.4%, 3.9%, and 4.4%, respectively. Notably, the number of parameters and giga floating-point operations (GFLOPs) required by the proposed Deep-agriNet are significantly smaller than those of DeepLab v3+ and other classic networks. Our findings demonstrate that Deep-agriNet performs better in identifying crops at different planting scales and can serve as an effective tool for crop identification in various regions and countries.
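For reference, the block below is a minimal PyTorch sketch of the standard CBAM formulation (channel attention followed by spatial attention); the paper's exact decoder wiring is not specified in the abstract, so this is an assumption of the usual design:

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional block attention module: channel then spatial attention."""
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        # Channel attention: a shared MLP over average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: a conv over channel-wise average and max maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True)) +
                           self.mlp(x.amax((2, 3), keepdim=True)))
        x = x * ca                                   # reweight channels
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sa                                # reweight spatial positions
```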

3.
J Imaging ; 8(10)2022 Sep 21.
Article in English | MEDLINE | ID: mdl-36286351

ABSTRACT

Compared with traditional manual inspection, inspection robots not only meet the all-weather, real-time, and accurate inspection needs of substations, but also reduce the workload of operation and maintenance personnel and the probability of safety accidents. To meet the urgent demand for more intelligent substation inspection robots, this paper proposes an environment understanding algorithm based on an improved DeepLab V3+ neural network. The improved network replaces the original dilation-rate combination in the ASPP (atrous spatial pyramid pooling) module with a new combination that segments object edges more accurately, and adds a CBAM (convolutional block attention module) to each of the two up-sampling stages. To allow deployment on embedded platforms with limited computing resources, the improved network is also compressed. Multiple sets of comparative experiments were conducted on the standard PASCAL VOC 2012 dataset and a substation dataset. The results show that, compared with DeepLab V3+, the improved DeepLab V3+ achieves a mean intersection-over-union (mIoU) of 57.65% across eight categories on the substation dataset, an improvement of 6.39%, with a model size of 13.9 M, a reduction of 147.1 M.
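The ASPP modification described above boils down to changing the dilation rates of the parallel atrous convolutions. A simplified sketch (omitting the image-pooling branch of the full module) follows; the improved rate combination itself is not stated in the abstract, so the values are placeholders:

```python
import torch
import torch.nn as nn

class SimpleASPP(nn.Module):
    """Parallel atrous convolutions with a configurable dilation-rate combination."""
    def __init__(self, in_ch: int, out_ch: int, rates=(6, 12, 18)):  # placeholder rates
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 1, bias=False)] +               # 1x1 branch
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False)
             for r in rates])                                         # dilated 3x3 branches
        self.project = nn.Conv2d(out_ch * (len(rates) + 1), out_ch, 1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))
```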

4.
Plant Methods ; 18(1): 109, 2022 Sep 06.
Article in English | MEDLINE | ID: mdl-36068606

ABSTRACT

BACKGROUND: Automatic and accurate estimation of disease severity is critical for disease management and yield loss prediction. Conventional disease severity estimation is performed using images with simple backgrounds, which limits its practical application. There is therefore an urgent need for a method that estimates plant disease severity from leaf images captured in field conditions, a challenging task because the intensity of sunlight is constantly changing and the image background is complicated. RESULTS: This study developed a simple and accurate image-based disease severity estimation method using an optimized neural network. A semantic segmentation model optimized with hybrid attention and transfer learning was proposed to obtain the disease segmentation map. Severity was calculated as the ratio of lesion pixels to leaf pixels. The proposed method was validated on cucumber downy mildew and powdery mildew leaves collected under natural conditions. The results showed that hybrid attention, through the interaction of spatial and channel attention, can extract fine lesion and leaf features, and that transfer learning further improves the segmentation accuracy of the model. The proposed method accurately segments healthy leaves and lesions (MIoU = 81.23%, FWIoU = 91.89%) and accurately estimates the severity of cucumber leaf disease (R2 = 0.9578, RMSE = 1.1385). Moreover, the proposed model was compared with six different backbones and four semantic segmentation models; it outperforms the compared models under complex conditions, refines lesion segmentation, and accurately estimates disease severity. CONCLUSIONS: The proposed method is an efficient tool for disease severity estimation in field conditions. This study can facilitate the implementation of artificial intelligence for rapid disease severity estimation and control in agriculture.
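The severity computation itself is a simple pixel ratio over the segmentation map. Below is a sketch assuming a three-label mask (0 = background, 1 = healthy leaf, 2 = lesion); whether "leaf pixels" includes the lesion area is not stated in the abstract, so the denominator here is an assumption:

```python
import numpy as np

def disease_severity(mask: np.ndarray) -> float:
    """Severity (%) = lesion pixels / whole-leaf pixels (healthy + lesion)."""
    lesion = (mask == 2).sum()
    leaf = (mask == 1).sum() + lesion  # assumes the leaf region includes lesions
    return 100.0 * lesion / leaf if leaf > 0 else 0.0
```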

5.
Front Comput Neurosci ; 16: 895680, 2022.
Article in English | MEDLINE | ID: mdl-35720773

ABSTRACT

College students typically learn vocabulary under the direction of teachers and school administrators. According to multi-modal discourse analysis theory, analyzing English words through the synergy of different modalities improves students' motivation and the effectiveness of word learning, yet problems remain, such as weak visual memory support from pictures, incomplete word meanings, little interaction between users, and a lack of resource-expansion functions. To this end, this paper proposes a stepped image semantic segmentation network based on multi-scale feature fusion and boundary optimization. The network improves model accuracy by optimizing the spatial pyramid pooling module of the DeepLab V3+ network and by replacing the original non-linear activation function with the vision-oriented Funnel ReLU (FReLU) to recover accuracy, improving overall segmentation accuracy through precise prediction of the boundaries of each class and reducing intra-class errors in the prediction results. Experimental results on the Englishhnd dataset demonstrate that the improved network achieves 96.35% accuracy on English characters with the same network parameters, training data, and test data.
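FReLU (funnel activation) has a published form, y = max(x, T(x)), where T is a per-channel depthwise convolution with batch normalization; the sketch below follows that form and is an illustration, not the paper's implementation:

```python
import torch
import torch.nn as nn

class FReLU(nn.Module):
    """Funnel ReLU: y = max(x, T(x)), with T a depthwise conv + batch norm."""
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.funnel = nn.Conv2d(channels, channels, kernel_size,
                                padding=kernel_size // 2, groups=channels, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.maximum(x, self.bn(self.funnel(x)))
```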

6.
Front Plant Sci ; 13: 795410, 2022.
Article in English | MEDLINE | ID: mdl-35242151

ABSTRACT

The common method for evaluating the extent of grape disease is to classify disease spots according to their area, which first requires accurately segmenting the spots. This paper presents an improved DeepLab v3+ deep learning network for segmenting black rot spots on grapevine leaves. The ResNet101 network is used as the backbone of DeepLab v3+, and a channel attention module is inserted into the residual module. Moreover, a feature fusion branch based on a feature pyramid network is added to the DeepLab v3+ encoder to fuse feature maps of different levels. Test set TS1 from PlantVillage and test set TS2 from an orchard field were used to verify the segmentation performance of the method. On TS1, the improved DeepLab v3+ achieved a mean intersection over union (mIoU) of 0.848, recall of 0.881, and F1-score of 0.918, which were 3.0, 2.3, and 1.7% greater than the original DeepLab v3+, respectively. On TS2, the improved DeepLab v3+ improved mIoU, recall, and F1-score by 3.3, 2.5, and 1.9%, respectively. The test results show that the improved DeepLab v3+ has better segmentation performance, is better suited to segmenting grape leaf black rot spots, and can serve as an effective tool for grape disease grade assessment.
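The abstract does not name the channel attention variant inserted into the residual module, so the sketch below assumes the common squeeze-and-excitation form as one plausible realization:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel reweighting."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.fc(x.mean((2, 3)))      # squeeze: global average pooling
        return x * w[:, :, None, None]   # excite: per-channel reweighting

# Inside a residual block, one would apply it to the convolutional path:
# out = relu(channel_attention(conv_path(x)) + shortcut(x))
```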

7.
Front Plant Sci ; 12: 622429, 2021.
Article in English | MEDLINE | ID: mdl-33643352

ABSTRACT

This study aims to provide an effective image analysis method for clover detection and botanical composition (BC) estimation in clover-grass mixture fields. Three transfer learning methods, namely fine-tuned DeepLab V3+, SegNet, and fully convolutional network-8s (FCN-8s), were used to detect clover fractions (on an area basis). The detected clover fraction (CF_detected), together with auxiliary variables, viz. measured clover height (H_clover) and grass height (H_grass), was used to build multiple linear regression (MLR) and back-propagation neural network (BPNN) models for BC estimation. A total of 347 clover-grass images were used to build the clover fraction and BC estimation models. Of the 347 samples, 226 images were augmented to 904 images for training, 25 were selected for validation, and the remaining 96 samples were used as an independent test dataset. Testing showed that the intersection-over-union (IoU) values for DeepLab V3+, SegNet, and FCN-8s were 0.73, 0.57, and 0.60, respectively, with root mean square error (RMSE) values of 8.5, 10.6, and 10.0%. Subsequently, BPNN and MLR models were built to estimate BC, using either CF_detected alone or CF_detected, grass height, and clover height together. The BPNN was generally superior to the MLR in estimating BC: the BPNN model using only CF_detected had an RMSE of 8.7%, whereas the BPNN model using all three variables (CF_detected, H_clover, and H_grass) as inputs had an RMSE of 6.6%, implying that DeepLab V3+ together with a BPNN can provide good estimates of BC and offers a promising method for improving forage management.
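As a sketch of the two regressors compared above, the snippet below fits an MLR and a small back-propagation network on placeholder arrays standing in for [CF_detected, H_clover, H_grass] and the measured BC; the hidden-layer size and iteration count are assumptions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((96, 3))  # placeholder for [CF_detected, H_clover, H_grass]
y = rng.random(96)       # placeholder for measured botanical composition

mlr = LinearRegression().fit(X, y)
bpnn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)

def rmse(model):
    return float(np.sqrt(np.mean((model.predict(X) - y) ** 2)))

print(f"MLR RMSE: {rmse(mlr):.3f}, BPNN RMSE: {rmse(bpnn):.3f}")
```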

8.
Comput Methods Programs Biomed ; 207: 106210, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34130088

ABSTRACT

OBJECTIVE: To improve the efficiency of recognizing gastric cancer pathological slice images and segmenting cancerous regions, this paper proposes an automatic gastric cancer segmentation model based on the Deeplab v3+ neural network. METHODS: Based on 1240 gastric cancer pathological slice images, this paper proposes a multi-scale-input Deeplab v3+ network and compares it with SegNet and ICNet in terms of sensitivity, specificity, accuracy, and Dice coefficient. RESULTS: The sensitivity of Deeplab v3+ is 91.45%, the specificity is 92.31%, the accuracy is 95.76%, and the Dice coefficient reaches 91.66%, more than 12% higher than the SegNet and Faster-RCNN models, while the parameter scale of the model is also greatly reduced. CONCLUSION: Our automatic gastric cancer segmentation model based on the Deeplab v3+ neural network achieves better results in improving segmentation accuracy while saving computing resources. Deeplab v3+ is worthy of further promotion in medical image analysis and the diagnosis of gastric cancer.
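The four reported metrics have standard confusion-matrix definitions on binary masks; a minimal sketch (illustrative, not the paper's code) follows, assuming label 1 marks the cancerous region:

```python
import numpy as np

def seg_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Sensitivity, specificity, accuracy, and Dice over binary masks."""
    tp = np.logical_and(pred == 1, gt == 1).sum()
    tn = np.logical_and(pred == 0, gt == 0).sum()
    fp = np.logical_and(pred == 1, gt == 0).sum()
    fn = np.logical_and(pred == 0, gt == 1).sum()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
    }
```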


Subjects
Image Processing, Computer-Assisted ; Stomach Neoplasms ; Humans ; Neural Networks, Computer ; Stomach Neoplasms/diagnostic imaging
9.
Diagnostics (Basel) ; 10(3)2020 Feb 28.
Article in English | MEDLINE | ID: mdl-32121281

ABSTRACT

This research is concerned with malignant pulmonary nodule detection (PND) in low-dose CT scans. Owing to its crucial role in the early diagnosis of lung cancer, PND has considerable potential to improve the survival rate of patients. We propose a two-stage framework that exploits the ever-growing advances in deep neural network models and that comprises a semantic segmentation stage followed by localization and classification. We employ the recently published DeepLab model for semantic segmentation and show that it significantly improves the accuracy of nodule detection compared with the classical U-Net model and its most recent variants. Using the widely adopted Lung Nodule Analysis dataset (LUNA16), we evaluate the performance of the semantic segmentation stage with two network backbones, MobileNet-V2 and Xception. We present the impact of various model training parameters and of computational time on detection accuracy, achieving a 79.1% mean intersection-over-union (mIoU) and an 88.34% Dice coefficient. This represents an mIoU increase of 60% and a Dice coefficient increase of 30% over U-Net. The second stage feeds the output of the DeepLab-based semantic segmentation to a localization-then-classification stage, realized using Faster RCNN and SSD with Inception-V2 as the backbone. On LUNA16, the two-stage framework attained a sensitivity of 96.4%, outperforming other recent models in the literature, including deep models. Finally, we show that adopting a transfer learning approach, specifically reusing the DeepLab model weights from the first stage of the framework, to infer binary (malignant-benign) labels on the Kaggle dataset for pulmonary nodules achieves a classification accuracy of 95.66%, approximately a 4% improvement over the recent literature.
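One simple way to bridge the two stages, sketched below, is to turn the segmentation mask into candidate regions via connected components before detection; the actual framework uses Faster RCNN/SSD detectors, so this bridging step is an illustrative assumption:

```python
import numpy as np
from scipy import ndimage

def candidate_boxes(mask: np.ndarray) -> list:
    """Extract (row0, row1, col0, col1) boxes for each connected mask region."""
    labeled, num = ndimage.label(mask > 0)  # group foreground pixels into regions
    return [(sl[0].start, sl[0].stop, sl[1].start, sl[1].stop)
            for sl in ndimage.find_objects(labeled)]
```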
