Results 1 - 2 of 2
1.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 41(3): 494-502, 2024 Jun 25.
Article in Chinese | MEDLINE | ID: mdl-38932535

ABSTRACT

In fetal electrocardiogram (ECG) extraction, the single scale of the U-Net same-level convolutional encoder ignores the differences in size and shape between the maternal and fetal ECG characteristic waves, and the temporal information of the ECG signal is not used when the thresholds of the encoder's residual shrinkage module are learned. This paper proposes a fetal ECG extraction method based on a multi-scale residual shrinkage U-Net model. First, Inception blocks and time-domain attention were introduced into the residual shrinkage module to strengthen the multi-scale feature extraction ability of the same-level convolutional encoder and its use of the time-domain information of the fetal ECG signal. To retain more local detail of the ECG waveform, the max pooling in U-Net was replaced with SoftPool. Finally, a decoder composed of residual modules and up-sampling layers gradually reconstructed the fetal ECG signal. Experiments on clinical ECG signals showed that, compared with other fetal ECG extraction algorithms, the proposed method extracts clearer fetal ECG signals. Sensitivity, positive predictive value, and F1 score on the 2013 competition data set reached 93.33%, 99.36%, and 96.09%, respectively, indicating that the method can effectively extract fetal ECG signals and has practical value for perinatal fetal health monitoring.
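At the heart of a residual shrinkage module is channel-wise soft thresholding, where a small attention branch learns a threshold that suppresses noise-dominated activations. A minimal NumPy sketch of the thresholding step itself (the function name and the fixed threshold below are illustrative, not taken from the paper):

```python
import numpy as np

def soft_threshold(x, tau):
    """Shrink each value toward zero; magnitudes below tau map to 0.

    In a residual shrinkage module, tau is produced per channel by a
    small attention sub-network, so noise-dominated channels are
    suppressed while informative ones pass through largely unchanged.
    """
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# Small-amplitude interference is zeroed; larger deflections survive,
# shrunk by tau: [-1.5, -0.2, 0.0, 0.3, 2.0] -> [-1.0, 0, 0, 0, 1.5]
x = np.array([-1.5, -0.2, 0.0, 0.3, 2.0])
denoised = soft_threshold(x, 0.5)
```

In the paper's setting, learning tau from time-domain attention lets the threshold adapt to the signal rather than being fixed as it is here.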


Subject(s)
Algorithms , Electrocardiography , Signal Processing, Computer-Assisted , Humans , Electrocardiography/methods , Pregnancy , Female , Fetal Monitoring/methods , Fetus/physiology
2.
Med Image Anal ; 75: 102293, 2022 01.
Article in English | MEDLINE | ID: mdl-34800787

ABSTRACT

Computer-Aided Diagnosis (CAD) for dermatological diseases is one of the most notable showcases in which deep learning technologies match and surpass human experts. In such a CAD pipeline, a critical step is segmenting skin lesions from dermoscopic images. Despite the remarkable successes of recent deep learning efforts, much improvement is still needed for challenging cases, e.g., lesions that are irregularly shaped, have low contrast, or possess blurry boundaries. To address these inadequacies, this study proposes a novel Multi-scale Residual Encoding and Decoding network (Ms RED) for skin lesion segmentation, which can accurately, reliably, and efficiently segment a variety of lesions. Specifically, a multi-scale residual encoding fusion module (MsR-EFM) is employed in the encoder, and a multi-scale residual decoding fusion module (MsR-DFM) in the decoder, to fuse multi-scale features adaptively. In addition, to enhance the representation learning capability of the proposed pipeline, we propose a novel multi-resolution, multi-channel feature fusion module (M2F2), which replaces the conventional convolutional layers in the encoder and decoder networks. Furthermore, we introduce a novel pooling module (Soft-pool) to medical image segmentation for the first time, retaining more useful information during down-sampling and yielding better segmentation performance. To validate the effectiveness and advantages of the proposed network, we compare it with several state-of-the-art methods on the ISIC 2016, 2017, and 2018 and PH2 data sets. Experimental results consistently demonstrate that Ms RED attains significantly superior segmentation performance across five widely used evaluation criteria. Last but not least, the new model uses far fewer parameters than its peers, which reduces the number of labeled samples required for training and in turn yields substantially faster training convergence. The source code is available at https://github.com/duweidai/Ms-RED.
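Both records replace max pooling with SoftPool, which weights each activation in the pooling window by its exponential, so down-sampling keeps a softened trace of every value instead of only the maximum. A minimal 1-D NumPy sketch of the idea (the helper name and ragged-tail handling are illustrative assumptions, not the papers' implementation):

```python
import numpy as np

def softpool_1d(x, k=2):
    """Exponentially weighted pooling over non-overlapping windows of size k.

    Each activation contributes in proportion to exp(activation), so the
    result leans toward the window maximum but still reflects the other
    values, retaining more local detail than hard max pooling.
    """
    n = (len(x) // k) * k                      # drop a ragged tail, if any
    windows = np.asarray(x[:n], dtype=float).reshape(-1, k)
    w = np.exp(windows)                        # exponential weights
    return (w * windows).sum(axis=1) / w.sum(axis=1)

x = np.array([0.0, 1.0, 3.0, -2.0])
# Window [0, 1] pools to ~0.73 (between the mean 0.5 and the max 1);
# window [3, -2] pools to ~2.97 (close to, but below, the max 3).
pooled = softpool_1d(x)
```

Because every input still contributes to the output, SoftPool is smoothly differentiable everywhere, unlike max pooling, which routes gradients only through the single maximal activation.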


Subject(s)
Image Processing, Computer-Assisted , Diagnosis, Computer-Assisted , Disease Progression , Humans , Software