1.
Comput Methods Programs Biomed; 252: 108235, 2024 May 18.
Article in English | MEDLINE | ID: mdl-38776830

BACKGROUND AND OBJECTIVE: Computer-based biomedical image segmentation plays a crucial role in planning assisted diagnosis and therapy. However, because segmentation targets vary in size and have irregular shapes, constructing an effective medical image segmentation architecture remains a challenge. Hybrid architectures based on convolutional neural networks (CNNs) and transformers have recently been proposed, but most current backbones simply replace one or all convolutional layers with transformer blocks, regardless of the semantic gap between features. How to effectively eliminate this semantic gap while combining global and local information therefore remains a critical challenge. METHODS: To address this challenge, we propose a novel architecture, called BiU-Net, which integrates CNNs and transformers with a two-stage fusion strategy. In the first stage, Single-Scale Fusion (SSF), the encoding layers of the CNN and the transformer are coupled at matching feature-map sizes, so that each encoding block reconstructs local features with the CNN and long-range dependencies with the transformer. In the second stage, Multi-Scale Fusion (MSF), BiU-Net fuses multi-scale features from the various encoding layers to eliminate the semantic gap between deep and shallow layers. Furthermore, a Context-Aware Block (CAB) is embedded in the bottleneck to reinforce multi-scale features in the decoder. RESULTS: Experiments were conducted on four public datasets. On the BUSI dataset, BiU-Net achieved 85.50% Dice coefficient (Dice), 76.73% intersection over union (IoU), and 97.23% accuracy (ACC), improving Dice by 1.17% over the previous state-of-the-art method. On the MoNuSeg dataset, the proposed method attained the highest scores, reaching 80.27% Dice and 67.22% IoU. BiU-Net also achieved Dice scores of 95.33% and 81.22% on the PH2 and DRIVE datasets, respectively. CONCLUSIONS: Our experiments show that BiU-Net outperforms existing state-of-the-art methods on four publicly available biomedical datasets. Owing to its powerful multi-scale feature extraction, BiU-Net is a versatile segmentation framework for various types of medical images. The source code is available at https://github.com/ZYLandy/BiU-Net.
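As a rough illustration of the two-branch encoder design described in the METHODS section, the following minimal PyTorch sketch shows how a Single-Scale Fusion (SSF) block might couple a convolutional branch with a transformer branch on a same-resolution feature map. This is not the authors' implementation; the module names, the fusion-by-concatenation choice, and all hyperparameters are assumptions.

```python
# Minimal sketch (not the authors' code) of the SSF idea: a CNN branch and a
# transformer branch operate on the same-resolution feature map, then fuse.
import torch
import torch.nn as nn


class SSFBlock(nn.Module):
    """Couple a convolutional branch (local features) with a transformer
    branch (long-range dependencies) at one scale, then fuse the two."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Local branch: two 3x3 convolutions, as in a plain U-Net encoder stage.
        self.conv_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Global branch: one standard transformer encoder layer over the
        # flattened spatial positions.
        self.transformer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads, batch_first=True
        )
        # Fusion: concatenate both branches and project back to `channels`.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.conv_branch(x)
        tokens = x.flatten(2).transpose(1, 2)              # (B, H*W, C)
        global_ = self.transformer(tokens)                 # same shape
        global_ = global_.transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(torch.cat([local, global_], dim=1))


if __name__ == "__main__":
    block = SSFBlock(channels=64)
    out = block(torch.randn(1, 64, 32, 32))
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```

Concatenation followed by a 1x1 projection is only one plausible fusion; the MSF stage and the Context-Aware Block are not sketched here.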

2.
Neurophotonics; 11(2): 025001, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38660382

Significance: Early diagnosis of depression is crucial for effective treatment. Our study uses functional near-infrared spectroscopy (fNIRS) and machine learning to accurately classify mild and severe depression, providing an objective auxiliary diagnostic tool for mental health workers. Aim: To develop prediction models that distinguish severe from mild depression using fNIRS data. Approach: We collected fNIRS data from 140 subjects during a verbal fluency task and removed noise with a combined denoising method based on complete ensemble empirical mode decomposition with adaptive noise and wavelet packet transform thresholding (CEEMDAN-WPT). Temporal features (TF) and correlation features (CF) extracted from 18 prefrontal-lobe channels served as predictors. Using recursive feature elimination with cross-validation, we identified the optimal TF or CF and examined their role in distinguishing severe from mild depression. Machine learning algorithms were then used for classification. Results: Using the combination of TF and CF as model inputs yielded higher classification accuracy than using either TF or CF alone. Among the prediction models, the SVM-based model performed best in nested cross-validation, achieving an accuracy of 92.8%. Conclusions: The proposed model can effectively distinguish mild depression from severe depression.
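To make the evaluation protocol concrete, here is a minimal sketch, assuming scikit-learn and synthetic placeholder data, of recursive feature elimination with cross-validation feeding an SVM scored by nested cross-validation. The CEEMDAN-WPT denoising and the actual TF/CF extraction from fNIRS signals are not reproduced; the data shapes and hyperparameter grid are illustrative assumptions.

```python
# Sketch of the selection + nested-CV scheme described above (synthetic data).
import numpy as np
from sklearn.feature_selection import RFECV
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(140, 36))    # placeholder: TF + CF features, 18 channels
y = rng.integers(0, 2, size=140)  # placeholder: 0 = mild, 1 = severe

pipe = Pipeline([
    ("scale", StandardScaler()),
    # A linear-kernel SVM exposes coef_, which RFECV needs to rank features.
    ("select", RFECV(SVC(kernel="linear"), step=1, cv=5)),
    ("clf", SVC()),
])

# Inner loop tunes the classifier; outer loop gives an unbiased accuracy.
inner = GridSearchCV(pipe, {"clf__C": [0.1, 1, 10]}, cv=5)
outer_scores = cross_val_score(inner, X, y, cv=5, scoring="accuracy")
print(f"nested-CV accuracy: {outer_scores.mean():.3f} +/- {outer_scores.std():.3f}")
```

Wrapping the tuned pipeline in an outer cross_val_score is what keeps feature selection and hyperparameter tuning from leaking into the reported accuracy.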

3.
J Healthc Eng; 2023: 4387134, 2023.
Article in English | MEDLINE | ID: mdl-36844948

In recent years, brain magnetic resonance imaging (MRI) segmentation has drawn considerable attention: segmentation results provide a basis for medical diagnosis and directly influence clinical treatment. Nevertheless, MRI images suffer from noise and grayscale (intensity) inhomogeneity, and the performance of traditional segmentation algorithms still needs improvement. In this paper, we propose a novel brain MRI segmentation algorithm based on fuzzy C-means (FCM) clustering to improve segmentation accuracy. First, we introduce a multitask learning strategy into FCM to extract information shared among different segmentation tasks. Combining the advantages of both approaches, the algorithm exploits both the shared information among tasks and the individual information within each task. Then, we design an adaptive task-weight learning mechanism, yielding a weighted multitask fuzzy C-means (WMT-FCM) clustering algorithm in which each task obtains its optimal weight and achieves better clustering performance. Simulated MRI images from the McConnell BrainWeb database were used to evaluate the proposed algorithm. Experimental results demonstrate that the proposed method provides more accurate and stable segmentation results than its competitors on MRI images with various levels of noise and intensity inhomogeneity.


MeSH terms: Brain; Image Processing, Computer-Assisted; Humans; Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods; Fuzzy Logic; Magnetic Resonance Imaging/methods; Algorithms; Cluster Analysis
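For context on the base algorithm this paper extends, the following NumPy sketch implements standard single-task fuzzy C-means. The multitask formulation and adaptive task weighting that define WMT-FCM are the paper's contribution and are not reproduced here; all names and parameters are illustrative.

```python
# Minimal sketch of plain fuzzy C-means (FCM), the baseline behind WMT-FCM.
import numpy as np


def fcm(X, n_clusters, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Cluster rows of X (n_samples, n_features); returns (centers, U)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)        # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m                          # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance from every sample to every center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                # avoid division by zero
        # Update rule: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1))).sum(axis=2)
        if np.linalg.norm(U_new - U) < tol:
            U = U_new
            break
        U = U_new
    return centers, U


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(size=(50, 2)), rng.normal(size=(50, 2)) + 4.0])
    centers, U = fcm(X, n_clusters=2)
    print(centers)           # two cluster centers
    print(U.argmax(axis=1))  # hard labels derived from fuzzy memberships
```

WMT-FCM would, roughly, run several such clustering tasks jointly and learn a weight per task; that coupling is not shown here.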