Results 1 - 8 of 8
1.
Accid Anal Prev ; 203: 107617, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38772193

ABSTRACT

The rapid detection of internal rail defects is critical to maintaining railway safety, but this task faces a significant challenge due to the limited computational resources of onboard detection systems. This paper presents YOLOv8n-LiteCBAM, an advanced network designed to enhance the efficiency of rail defect detection. The network replaces the existing CSPDarkNet backbone with a lightweight DepthStackNet backbone. Further optimization is achieved through model pruning techniques and the incorporation of a novel Bidirectional Convolutional Block Attention Module (BiCBAM). Additionally, inference acceleration is realized via ONNX Runtime. Experimental results on the rail defect dataset demonstrate that our model achieves 92.9% mAP with inference speeds of 136.79 FPS on the GPU and 38.36 FPS on the CPU. The model's inference speed outperforms that of other lightweight models and meets the real-time detection requirements of Rail Flaw Detection (RFD) vehicles traveling at 80 km/h. Consequently, the YOLOv8n-LiteCBAM network shows potential for industrial application in the expedited detection of internal rail defects.


Subjects
Railways; Safety; Humans; Neural Networks, Computer; Algorithms
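As a rough illustration of the attention idea this abstract builds on (the bidirectional BiCBAM variant is not detailed in the abstract), the following NumPy sketch implements standard CBAM-style channel attention; the weight shapes and reduction ratio are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    # CBAM-style channel attention on a (C, H, W) feature map:
    # avg- and max-pooled channel descriptors share a 2-layer MLP,
    # and the summed outputs gate each channel through a sigmoid.
    avg = x.mean(axis=(1, 2))                        # (C,)
    mx = x.max(axis=(1, 2))                          # (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ avg, 0.0)
                   + w2 @ np.maximum(w1 @ mx, 0.0))  # (C,)
    return x * gate[:, None, None]

rng = np.random.default_rng(0)
C, r = 8, 2                                   # channels, reduction ratio
x = rng.standard_normal((C, 4, 4))
w1 = rng.standard_normal((C // r, C)) * 0.1   # squeeze weights
w2 = rng.standard_normal((C, C // r)) * 0.1   # excite weights
y = channel_attention(x, w1, w2)
```

Because the gate lies in (0, 1), the output is a per-channel attenuation of the input with unchanged shape.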
2.
Comput Biol Med ; 173: 108381, 2024 May.
Article in English | MEDLINE | ID: mdl-38569237

ABSTRACT

Multimodal medical image fusion (MMIF) technology plays a crucial role in medical diagnosis and treatment by integrating different images to obtain fusion images with comprehensive information. Deep learning-based fusion methods have demonstrated superior performance, but some of them still encounter challenges such as imbalanced retention of color and texture information and low fusion efficiency. To alleviate these issues, this paper presents a real-time MMIF method, called the lightweight residual fusion network (LRFNet). First, a feature extraction framework with three branches is designed. Two independent branches are used to fully extract brightness and texture information. The fusion branch enables different modal information to be interactively fused at a shallow level, thereby better retaining brightness and texture information. Furthermore, a lightweight residual unit is designed to replace the conventional residual convolution in the model, improving the fusion efficiency and reducing the overall model size to approximately one-fifth. Finally, considering that the high-frequency image decomposed by the wavelet transform contains abundant edge and texture information, an adaptive strategy is proposed for assigning weights to the loss function based on the information content in the high-frequency image. This strategy effectively guides the model toward preserving intricate details. The experimental results on MRI and functional images demonstrate that the proposed method exhibits superior fusion performance and efficiency compared to alternative approaches. The code of LRFNet is available at https://github.com/HeDan-11/LRFNet.


Subjects
Image Processing, Computer-Assisted; Wavelet Analysis
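The adaptive loss-weighting idea in this abstract (weighting by the information content of the wavelet high-frequency sub-bands) can be sketched as follows; the one-level Haar transform and the energy-ratio weight are illustrative assumptions, not the paper's exact formula:

```python
import numpy as np

def haar2d(img):
    # One-level 2-D Haar transform on an even-sized image:
    # returns the low-pass band LL and detail bands (LH, HL, HH).
    a = (img[0::2] + img[1::2]) / 2.0   # vertical average
    d = (img[0::2] - img[1::2]) / 2.0   # vertical detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def hf_weight(img, eps=1e-8):
    # Adaptive weight: share of signal energy in the high-frequency bands.
    ll, (lh, hl, hh) = haar2d(img)
    hf = np.sum(lh**2) + np.sum(hl**2) + np.sum(hh**2)
    return hf / (hf + np.sum(ll**2) + eps)

checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
w_flat = hf_weight(np.ones((8, 8)))   # no detail -> weight near 0
w_edge = hf_weight(checker)           # strong detail -> larger weight
```

A flat region contributes almost nothing to the detail-preservation term, while a textured region raises its weight.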
3.
Med Biol Eng Comput ; 62(1): 61-70, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37615845

ABSTRACT

Deep learning technology has been employed for precise medical image segmentation in recent years. However, due to the limited available datasets and real-time processing requirements, the inherently complicated structure of deep learning models restricts their application in the field of medical image processing. In this work, we present a novel lightweight LMU-Net network with improved accuracy for medical image segmentation. The multilayer perceptron (MLP) and depth-wise separable convolutions are adopted in both the encoder and decoder of LMU-Net to reduce feature loss and the number of training parameters. In addition, a lightweight channel attention mechanism and a convolution operation with a larger kernel are introduced in the proposed architecture to further improve the segmentation performance. Furthermore, we employ batch normalization (BN) and group normalization (GN) interchangeably in our module to minimize the estimation shift in the network. Finally, the proposed network is evaluated and compared to other architectures on the publicly accessible ISIC and BUSI datasets by carrying out robust experiments with sufficient ablation considerations. The experimental results show that the proposed LMU-Net achieves better overall performance than existing techniques while using fewer parameters.


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer
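The parameter savings from the depth-wise separable convolutions mentioned above are easy to quantify; a minimal sketch with standard textbook counts (the layer sizes are illustrative, not the paper's configuration):

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: one k x k x c_in filter per output channel.
    return k * k * c_in * c_out

def dw_separable_params(k, c_in, c_out):
    # Depth-wise separable convolution: one k x k filter per input
    # channel (depth-wise), then a 1x1 point-wise channel mixer.
    return k * k * c_in + c_in * c_out

std = conv_params(3, 64, 128)            # 73,728 parameters
lite = dw_separable_params(3, 64, 128)   # 8,768 parameters
```

For a 3x3 layer with 64 input and 128 output channels, the separable form needs roughly an eighth of the parameters, which is the mechanism behind the reduced training-parameter count the abstract reports.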
4.
Front Plant Sci ; 14: 1276728, 2023.
Article in English | MEDLINE | ID: mdl-37965007

ABSTRACT

The rapid development of image processing technology and the improvement of computing power in recent years have made deep learning one of the main methods for plant disease identification. Currently, many neural network models have shown good performance in plant disease identification. Typically, performance improvement is achieved by increasing the depth of the network. However, this also increases the computational complexity, memory requirements, and training time, which is detrimental to deploying the model on mobile devices. To address this problem, a novel lightweight convolutional neural network has been proposed for plant disease detection. Skip connections are introduced into the conventional MobileNetV3 network to enrich the input features of the deep network, and the feature fusion weight parameters in the skip connections are optimized using an improved whale optimization algorithm to achieve higher classification accuracy. In addition, the bias loss replaces the conventional cross-entropy loss to reduce the interference caused by redundant data during the learning process. The proposed model is pre-trained on a plant classification task dataset instead of the classical ImageNet, which further enhances the performance and robustness of the model. The constructed network achieved high performance with fewer parameters, reaching an accuracy of 99.8% on the PlantVillage dataset. Encouragingly, it also achieved a prediction accuracy of 97.8% on an apple leaf disease dataset with a complex outdoor background. The experimental results show that compared with existing advanced plant disease diagnosis models, the proposed model has fewer parameters, higher recognition accuracy, and lower complexity.
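The whale optimization algorithm used above to tune the skip-connection fusion weights can be sketched in simplified form; this is a generic WOA minimizing a toy sphere objective, not the paper's improved variant, and the population size and coefficients are illustrative assumptions:

```python
import numpy as np

def woa_minimize(f, dim, n=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    # Minimal Whale Optimization Algorithm sketch: whales either
    # encircle the best solution, search around a random whale, or
    # spiral toward the best, with coefficient `a` shrinking over time.
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))
    best = min(X, key=f).copy()
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters            # linearly decreasing
        for i in range(n):
            A = 2 * a * rng.random(dim) - a
            C = 2 * rng.random(dim)
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):    # exploit: encircle best
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                        # explore: random whale
                    rand = X[rng.integers(n)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                            # spiral update around best
                D = np.abs(best - X[i])
                l = rng.uniform(-1, 1)
                X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
            if f(X[i]) < f(best):
                best = X[i].copy()
    return best

sphere = lambda v: float(np.sum(v ** 2))
opt = woa_minimize(sphere, dim=3)   # should approach the origin
```

In the paper's setting, the decision vector would hold the fusion weights and the objective would be the validation classification loss rather than this toy function.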

5.
Bioengineering (Basel) ; 10(8)2023 Aug 03.
Article in English | MEDLINE | ID: mdl-37627804

ABSTRACT

Computer vision (CV) technology and convolutional neural networks (CNNs) demonstrate superior feature extraction capabilities in the field of bioengineering. However, during the capture of finger-vein images, translation can cause a decline in the accuracy rate of the model, making it challenging to apply CNNs to real-time and highly accurate finger-vein recognition in various real-world environments. Moreover, despite their high accuracy, CNNs require many parameters, and existing research has confirmed their lack of shift-invariant features. Based on these considerations, this study introduces an improved lightweight convolutional neural network (ILCNN) for finger-vein recognition. The proposed model incorporates a diverse branch block (DBB), adaptive polyphase sampling (APS), and a coordinate attention mechanism (CoAM) with the aim of improving the model's performance in accurately identifying finger-vein features. To evaluate the effectiveness of the model in finger-vein recognition, we employed the Finger Vein University Sains Malaysia (FV-USM) and PLUSVein dorsal-palmar finger-vein (PLUSVein-FV3) public databases for analysis and comparative evaluation against recent research methodologies. The experimental results indicate that the finger-vein recognition model proposed in this study achieves impressive recognition accuracy rates of 99.82% and 95.90% on the FV-USM and PLUSVein-FV3 public databases, respectively, while utilizing just 1.23 million parameters. Moreover, compared to the finger-vein recognition approaches proposed in previous studies, the ILCNN introduced in this work demonstrated superior performance.
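Adaptive polyphase sampling (APS), cited above, tackles the lack of shift invariance by choosing which polyphase component to keep during downsampling instead of always taking a fixed grid; a minimal sketch under the assumption that the component with the largest energy is selected:

```python
import numpy as np

def aps_downsample(x):
    # Stride-2 adaptive polyphase sampling: split the image into its
    # four 2x2 polyphase components and keep the one with the largest
    # l2 norm, so a one-pixel shift of the input selects the shifted
    # component instead of losing the signal.
    comps = [x[i::2, j::2] for i in range(2) for j in range(2)]
    norms = [np.linalg.norm(c) for c in comps]
    return comps[int(np.argmax(norms))]

x = np.zeros((4, 4))
x[1::2, 1::2] = 1.0                      # energy on one polyphase grid
y0 = aps_downsample(x)                   # keeps the active component
y1 = aps_downsample(np.roll(x, 1, axis=0))  # same output after a shift
```

A fixed stride-2 sampler would return all zeros for one of the two inputs; APS returns the same ones-filled component for both, which is the shift-robustness the abstract refers to.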

6.
Sensors (Basel) ; 23(10)2023 May 16.
Article in English | MEDLINE | ID: mdl-37430704

ABSTRACT

The accurate detection and segmentation of accessible surface regions in water scenarios is one of the indispensable capabilities of unmanned surface vehicle systems. Most existing methods focus on accuracy and ignore lightweight and real-time demands; therefore, they are not suitable for embedded devices, which are widely used in practical applications. An edge-aware lightweight water scenario segmentation method (ELNet), which establishes a lighter yet better network with lower computation, is proposed. ELNet utilizes two-stream learning and edge-prior information. In addition to the context stream, a spatial stream is expanded to learn spatial details in low-level layers with no extra computation cost in the inference stage. Meanwhile, edge-prior information is introduced to the two streams, which expands the perspectives of pixel-level visual modeling. The experimental results are 45.21 FPS, 98.5% detection robustness, and a 75.1% F-score on the MODS benchmark, as well as 97.82% precision and a 93.96% F-score on the USV Inland dataset. This demonstrates that ELNet uses fewer parameters to achieve comparable accuracy and better real-time performance.

7.
Front Public Health ; 10: 892418, 2022.
Article in English | MEDLINE | ID: mdl-35692314

ABSTRACT

Accurate and automated segmentation of coronary arteries in X-ray angiograms is essential for cardiologists to diagnose coronary artery disease in clinics. Existing deep learning-based coronary artery segmentation models focus on using complex networks to improve segmentation accuracy while ignoring the computational cost. However, running such segmentation networks requires a high-performance device with a powerful GPU and high-bandwidth memory. To address this issue, in this study, a lightweight deep learning network is developed for a better balance between computational cost and segmentation accuracy. We have made two efforts in designing the network. On the one hand, we adopt bottleneck residual blocks to replace the internal components in the encoder and decoder of the traditional U-Net to make the network more lightweight. On the other hand, we embed two attention modules to model long-range dependencies in the spatial and channel dimensions for segmentation accuracy. In addition, we employ top-hat transforms and contrast-limited adaptive histogram equalization (CLAHE) as the pre-processing strategy to enhance the coronary arteries and further improve accuracy. Experimental evaluations conducted on the coronary angiogram dataset show that the proposed lightweight network performs well for accurate coronary artery segmentation, achieving a sensitivity, specificity, accuracy, and area under the curve (AUC) of 0.8770, 0.9789, 0.9729, and 0.9910, respectively. It is noteworthy that the proposed network contains only 0.75 M parameters and achieves the best performance in comparative experiments with popular segmentation networks (such as U-Net, with 31.04 M parameters). Experimental results demonstrate that our network can achieve better performance with an extremely low number of parameters.
Furthermore, the generalization experiments indicate that our network can accurately segment coronary angiograms from other databases, which demonstrates its strong generalization and robustness.


Subjects
Coronary Vessels; Image Processing, Computer-Assisted; Angiography; Coronary Vessels/diagnostic imaging; Databases, Factual; Image Processing, Computer-Assisted/methods; X-Rays
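The white top-hat transform used in the pre-processing step subtracts the morphological opening from the image, keeping bright, thin structures such as vessels; a minimal sketch with a flat square structuring element (the paper's element shape and size are not given in the abstract):

```python
import numpy as np

def gray_filter(img, size, op):
    # Sliding-window grayscale morphology with edge padding:
    # op=np.min gives erosion, op=np.max gives dilation.
    p = np.pad(img, size // 2, mode="edge")
    out = np.empty_like(img, float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = op(p[i:i + size, j:j + size])
    return out

def white_tophat(img, size=3):
    # White top-hat: image minus its opening (erosion then dilation).
    # Structures brighter than their surroundings and smaller than
    # `size` survive; smooth background is removed.
    opened = gray_filter(gray_filter(img, size, np.min), size, np.max)
    return img - opened

spot = np.zeros((7, 7))
spot[3, 3] = 5.0                      # a thin bright "vessel" pixel
th = white_tophat(spot)               # the spot is preserved
flat_th = white_tophat(np.ones((5, 5)))  # flat background -> all zeros
```

Applied before CLAHE, this suppresses the slowly varying angiographic background while leaving the narrow high-intensity vessel tree for the network to segment.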
8.
Sensors (Basel) ; 22(1)2022 Jan 02.
Article in English | MEDLINE | ID: mdl-35009871

ABSTRACT

Recently, many super-resolution reconstruction (SR) feedforward networks based on deep learning have been proposed. These networks enable the reconstructed images to achieve convincing results. However, due to the large amount of computation and parameters, SR technology is greatly limited on devices with limited computing power. To trade off network performance against network parameters, in this paper we propose an efficient image super-resolution network via self-calibrated feature fuse, named SCFFN, by constructing the self-calibrated feature fuse block (SCFFB). Specifically, to recover as much of the high-frequency detail information of the image as possible, we propose the SCFFB based on self-transformation and self-fusion of features. In addition, to accelerate network training while reducing the computational complexity of the network, we employ an attention mechanism in the reconstruction part of the network, called U-SCA. Compared with the existing transposed convolution, it can greatly reduce the computational burden of the network without reducing the reconstruction effect. We have conducted full quantitative and qualitative experiments on public datasets, and the experimental results show that the network achieves performance comparable to other networks while requiring fewer parameters and computational resources.


Subjects
Algorithms; Image Processing, Computer-Assisted; Magnetic Resonance Imaging
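Transposed convolution, which the abstract's reconstruction stage moves away from, is often contrasted with the cheaper pixel-shuffle (sub-pixel) upsampling common in SR networks; a minimal sketch of pixel shuffle as an illustration of such lightweight reconstruction (this is not the paper's U-SCA module):

```python
import numpy as np

def pixel_shuffle(x, r):
    # Rearrange a (C*r^2, H, W) feature map into (C, H*r, W*r):
    # each group of r^2 channels fills one r x r output patch, so
    # upsampling costs only a memory rearrangement, no extra FLOPs.
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

x = np.arange(16, dtype=float).reshape(4, 2, 2)  # 4 channels, 2x2
y = pixel_shuffle(x, 2)                          # 1 channel, 4x4
```

Because the channel-to-space mapping is fixed, a convolution before the shuffle carries all the learned parameters, which is why this style of reconstruction stays light compared with a transposed convolution of the same output size.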