Results 1 - 9 of 9
1.
Front Bioeng Biotechnol ; 12: 1414605, 2024.
Article in English | MEDLINE | ID: mdl-38994123

ABSTRACT

In recent years, deep convolutional neural network-based segmentation methods have achieved state-of-the-art performance on many medical image analysis tasks. However, most of these approaches focus on optimizing the U-Net structure or adding new functional modules, overlooking the complementarity and fusion of coarse-grained and fine-grained semantic information. To address these issues, we propose a 2D medical image segmentation framework called Progressive Learning Network (PL-Net), which comprises Internal Progressive Learning (IPL) and External Progressive Learning (EPL). PL-Net offers the following advantages: 1) IPL divides feature extraction into two steps, mixing receptive fields of different sizes and capturing semantic information from coarse to fine granularity without introducing additional parameters; 2) EPL divides the training process into two stages, optimizing parameters and fusing coarse-grained information in the first stage and fine-grained information in the second stage. We conducted comprehensive evaluations of the proposed method on five medical image segmentation datasets, and the experimental results demonstrate that PL-Net achieves competitive segmentation performance. Notably, PL-Net does not introduce any additional learnable parameters compared with other U-Net variants.
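To illustrate the receptive-field mixing idea described in the abstract, the minimal PyTorch sketch below applies one shared set of convolution weights in two passes with different dilation rates, so coarse- and fine-grained context is captured without adding learnable parameters. This is not the authors' code; the block name, dilation rates, and layer layout are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStepSharedBlock(nn.Module):
    """Hypothetical sketch: one weight tensor reused for a coarse pass
    (dilation 2) and a fine pass (dilation 1), mixing receptive fields
    without extra parameters. PL-Net's actual IPL design may differ."""
    def __init__(self, channels: int):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(channels, channels, 3, 3))
        nn.init.kaiming_normal_(self.weight)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Step 1: coarse-grained pass with an enlarged receptive field.
        coarse = self.act(F.conv2d(x, self.weight, padding=2, dilation=2))
        # Step 2: fine-grained pass reusing the SAME weights (no new parameters).
        return self.act(F.conv2d(coarse, self.weight, padding=1, dilation=1))
```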

2.
Eur J Radiol ; 171: 111277, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38160541

ABSTRACT

OBJECTIVES: To explore the feasibility of automatic diagnosis of congenital heart disease (CHD) and pulmonary arterial hypertension associated with CHD (PAH-CHD) from chest radiographs using artificial intelligence (AI), and to evaluate whether AI assistance improves clinical diagnostic accuracy. MATERIALS AND METHODS: A total of 3255 frontal preoperative chest radiographs (1174 CHD of any type and 2081 non-CHD) were retrospectively obtained. We adopted a ResNet18 model pretrained on the ImageNet database to establish the diagnostic models. Radiologists diagnosed CHD/PAH-CHD from 330/165 chest radiographs twice: the first time, half of the images were accompanied by AI-based classification; one month later, the remaining half were accompanied by AI-based classification. Diagnostic results were compared between the radiologists and the AI models, and between radiologists with and without AI assistance. RESULTS: The AI model achieved an average area under the receiver operating characteristic curve (AUC) of 0.948 (sensitivity: 0.970, specificity: 0.982) for CHD diagnosis and an AUC of 0.778 (sensitivity: 0.632, specificity: 0.925) for identifying PAH-CHD. In the balanced testing set of 330 radiographs (165 CHD and 165 non-CHD), the AI model achieved higher AUCs than all five radiologists in the identification of CHD (radiologist AUCs: 0.670-0.858) and PAH-CHD (0.610-0.688). With AI assistance, the mean ± standard error AUC of the radiologists improved significantly for both CHD (ΔAUC +0.096, 95% CI: 0.001-0.190; P = 0.048) and PAH-CHD (ΔAUC +0.066, 95% CI: 0.010-0.122; P = 0.031) diagnosis. CONCLUSION: Chest radiograph-based AI models can detect CHD and PAH-CHD automatically. AI assistance improved radiologists' diagnostic accuracy, which may facilitate timely initial diagnosis of CHD and PAH-CHD.
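For reference, the snippet below shows how an ImageNet-pretrained ResNet18 of the kind described in the abstract could be set up for CHD vs. non-CHD classification with torchvision, and how an AUC is typically computed with scikit-learn. The layer replacement, class count, and metric call are generic assumptions, not details taken from the study.

```python
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score

def build_chd_classifier(num_classes: int = 2) -> nn.Module:
    # ImageNet-pretrained ResNet18 with its final layer replaced
    # for CHD vs. non-CHD classification (torchvision >= 0.13 API).
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# Typical AUC evaluation: y_true holds 0/1 labels, y_score the predicted
# probability of CHD for each radiograph.
# auc = roc_auc_score(y_true, y_score)
```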


Subjects
Heart Defects, Congenital; Hypertension, Pulmonary; Pulmonary Arterial Hypertension; Humans; Pulmonary Arterial Hypertension/complications; Artificial Intelligence; Retrospective Studies; Heart Defects, Congenital/complications; Heart Defects, Congenital/diagnostic imaging
3.
Phys Med Biol ; 68(24)2023 Dec 08.
Article in English | MEDLINE | ID: mdl-37802056

ABSTRACT

Objective. Deep convolutional neural networks (CNNs) have been widely applied in medical image analysis and have achieved satisfactory performance. While most CNN-based methods exhibit strong feature representation capabilities, they struggle to encode long-range interaction information because of their limited receptive fields. The Transformer has recently been proposed to alleviate this issue, but at the cost of a greatly enlarged model size, which may hinder its adoption. Approach. To combine strong long-range interaction modeling with a small model size, we propose a Transformer-like, block-based U-shaped network for medical image segmentation, dubbed SCA-Former. Furthermore, we propose a novel stream-cross attention (SCA) module that encourages the network to balance local and global representations by extracting multi-scale, interactive features along the spatial and channel dimensions. SCA effectively extracts channel, multi-scale spatial, and long-range information for a more comprehensive feature representation. Main results. Experimental results demonstrate that SCA-Former outperforms current state-of-the-art (SOTA) methods on three public datasets: GlaS, ISIC 2017, and LUNG. Significance. This work presents a promising method for enhancing the feature representation of convolutional neural networks and improving segmentation performance.
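The stream-cross attention module itself is not specified in the abstract; as a rough, hedged sketch of attention applied along the channel and spatial dimensions, a CBAM-style gate such as the one below conveys the general idea. All names and layer choices here are assumptions, and the actual SCA module is more involved.

```python
import torch
import torch.nn as nn

class ChannelSpatialGate(nn.Module):
    """Simplified channel + spatial attention; an illustrative stand-in,
    not the SCA-Former module."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_gate(x)  # reweight channels
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * self.spatial_gate(pooled)  # reweight spatial locations
```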


Subjects
Neural Networks, Computer; Rivers; Image Processing, Computer-Assisted
4.
Technol Health Care ; 31(1): 181-195, 2023.
Article in English | MEDLINE | ID: mdl-35754242

ABSTRACT

BACKGROUND: The results of medical image segmentation can provide reliable evidence for clinical diagnosis and treatment. The previously proposed U-Net has been widely used in the field of medical image segmentation. Its encoder extracts semantic features at different scales at different stages, but does not process the features of each scale in a dedicated way. OBJECTIVE: To improve the feature expression ability and segmentation performance of U-Net, we propose a feature supplement and optimization U-Net (FSOU-Net). METHODS: First, we put forward the view that semantic features of different scales should be treated differently. Based on this view, we classify the semantic features automatically extracted by the encoder into two categories: shallow semantic features and deep semantic features. Then, we propose the shallow feature supplement module (SFSM), which obtains fine-grained semantic features through up-sampling to supplement the shallow semantic information. Finally, we propose the deep feature optimization module (DFOM), which uses dilated convolutions with different receptive fields to obtain multi-scale features and then performs multi-scale feature fusion to optimize the deep semantic information. RESULTS: The proposed model was evaluated on three public medical image segmentation datasets, and the experimental results support the proposed idea. The segmentation performance of the model exceeds that of advanced medical image segmentation models. Compared with the baseline U-Net, the Dice index is 0.75% higher on the RITE dataset, 2.3% higher on the Kvasir-SEG dataset, and 0.24% higher on the GlaS dataset. CONCLUSIONS: The proposed method can substantially improve the feature representation ability and segmentation performance of the model.
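The deep feature optimization module relies on dilated convolutions with different receptive fields followed by multi-scale fusion. The sketch below illustrates that general pattern; the dilation rates, branch count, and fusion layer are assumptions rather than the paper's actual DFOM configuration.

```python
import torch
import torch.nn as nn

class MultiScaleDilatedFusion(nn.Module):
    """Parallel dilated 3x3 convolutions at several rates, fused by a 1x1
    convolution; an illustrative stand-in for a DFOM-like module."""
    def __init__(self, channels: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r) for r in rates
        )
        self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x):
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(multi_scale)
```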

5.
Front Psychol ; 13: 1006412, 2022.
Article in English | MEDLINE | ID: mdl-36337546

ABSTRACT

Based on a 12-year bibliographic record collected from the Web of Science (Thomson Reuters) database, the present study provides a macroscopic overview of the knowledge domain of financial decision making (FDM). A scientometric and bibliometric analysis of the literature published in the field from 2010 to 2021 was conducted using the CiteSpace software. The analysis focuses on co-occurring categories, geographic distributions, key references, the distribution of topics, and the research fronts and emerging trends of finance-related decision making. The steady year-by-year increase in published papers demonstrates growing interest in this topic at the international level. The scientometric analysis showed that financial decisions, investment decisions, and financing decisions stood out within the research on FDM, suggesting their important role in FDM and its study. The citation burst analysis suggests that the impact of individual differences, such as financial literacy, gender, and age, on FDM will be a focal topic in the coming years. Unlike a traditional literature review, this bibliometric analysis offers a scientometric approach that reveals the status quo and development trends of FDM through macro-level, quantitative means. In addition, future research directions for the field are recommended.

6.
Technol Health Care ; 30(1): 129-143, 2022.
Article in English | MEDLINE | ID: mdl-34057109

ABSTRACT

BACKGROUND: The automatic segmentation of medical images is an important task in clinical applications. However, due to the complexity of organ backgrounds, unclear boundaries, and the variable sizes of different organs, some features are lost during network learning, and segmentation accuracy is low. OBJECTIVE: To address these issues, we studied whether it is possible to better preserve the deep feature information of the image and to solve the problem of low segmentation accuracy caused by unclear image boundaries. METHODS: In this study, we (1) build a reliable deep learning network framework, named BGRANet, to improve segmentation performance for medical images; (2) propose a packet rotation convolutional fusion encoder network to extract features; (3) build a boundary-enhanced guided packet rotation dual attention decoder network, which enhances the boundaries of the segmentation map and effectively fuses more prior information; and (4) propose a multi-resolution fusion module to generate high-resolution feature maps. We demonstrate the effectiveness of the proposed method on two publicly available datasets. RESULTS: BGRANet was trained and tested on the prepared datasets, and the experimental results show that the proposed model has better segmentation performance. For 4-class classification (CHAOS dataset), the average Dice similarity coefficient reached 91.73%. For 2-class classification (Herlev dataset), the precision, sensitivity, specificity, accuracy, and Dice score reached 93.75%, 94.30%, 98.19%, 97.43%, and 98.08%, respectively. The experimental results show that BGRANet can improve the segmentation effect for medical images. CONCLUSION: We propose a boundary-enhanced guided packet rotation dual attention decoder network. It achieves high segmentation accuracy with a reduced number of parameters.
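The Dice similarity coefficient reported in the results is a standard overlap measure; the helper below shows its usual definition for binary masks. It is generic evaluation code, not the authors' implementation.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient: DSC = 2|P & T| / (|P| + |T|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```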


Subjects
Image Processing, Computer-Assisted; Attention; Humans
7.
Med Image Anal ; 76: 102313, 2022 02.
Article in English | MEDLINE | ID: mdl-34911012

ABSTRACT

In recent years, deep learning technology has shown superior performance in different fields of medical image analysis. Several deep learning architectures have been proposed and used for computational pathology classification, segmentation, and detection tasks. Owing to their simple, modular structure, most downstream applications still use ResNet and its variants as the backbone network. This paper proposes a modular group attention block that can capture feature dependencies in medical images along two independent dimensions: channel and space. By stacking these group attention blocks in a ResNet-style manner, we obtain a new ResNet variant called ResGANet. The stacked ResGANet architecture has 1.51-3.47 times fewer parameters than the original ResNet and can be used directly for downstream medical image segmentation tasks. Extensive experiments show that the proposed ResGANet is superior to state-of-the-art backbone models in medical image classification tasks. Applying it to different segmentation networks improves the baseline models in medical image segmentation tasks without changing the network architecture. We hope that this work provides a promising method for enhancing the feature representation of convolutional neural networks (CNNs) in the future.
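The parameter-count comparison quoted in the abstract (1.51-3.47 times fewer parameters than ResNet) can be checked with a simple counting helper like the one below; the ResGANet model itself is not shown, so only a baseline count is computed here.

```python
from torchvision import models

def count_parameters(model) -> int:
    """Number of learnable parameters in a model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Baseline reference point; a ResGANet implementation would be compared the same way.
resnet50_params = count_parameters(models.resnet50())
print(f"ResNet50: {resnet50_params / 1e6:.1f}M parameters")  # roughly 25.6M
```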


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Disease Progression; Image Processing, Computer-Assisted/methods
8.
Comput Methods Programs Biomed ; 214: 106566, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34890992

ABSTRACT

BACKGROUND AND OBJECTIVE: Segmentation is a key step in biomedical image analysis tasks. Recently, convolutional neural networks (CNNs) have been increasingly applied in the field of medical image processing; however, standard models still have some drawbacks. Because of the significant loss of spatial information at the encoding stage, it is often difficult to restore the details of low-level visual features using simple deconvolution, and the generated feature maps are sparse, which degrades performance. This prompted us to study whether it is possible to better preserve the deep feature information of the image in order to solve the sparsity problem of image segmentation models. METHODS: In this study, we (1) build a reliable deep learning network framework, named DCACNet, to improve the segmentation performance for medical images; (2) propose a multiscale cross-fusion encoding network to extract features; (3) build a dual context aggregation module to fuse the context features at different scales and capture more fine-grained deep features; and (4) propose an attention-guided cross deconvolution decoding network to generate dense feature maps. We demonstrate the effectiveness of the proposed method on two publicly available datasets. RESULTS: DCACNet was trained and tested on the prepared datasets, and the experimental results show that the proposed model has better segmentation performance than previous models. For 4-class classification (CHAOS dataset), the mean Dice similarity coefficient (DSC) reached 91.03%. For 2-class classification (Herlev dataset), the accuracy, precision, sensitivity, specificity, and Dice score reached 96.77%, 90.40%, 94.20%, 97.50%, and 97.69%, respectively. The experimental results show that DCACNet can improve the segmentation effect for medical images. CONCLUSION: DCACNet achieved promising results on the prepared datasets and improved segmentation performance. It retains the deep feature information of the image better than other models and solves the sparsity problem of medical image segmentation models.
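The accuracy, precision, sensitivity, specificity, and Dice figures reported for the Herlev experiments follow the standard confusion-matrix definitions; the snippet below states them explicitly for binary masks. It is generic metric code, not the authors' evaluation script.

```python
import numpy as np

def binary_segmentation_metrics(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> dict:
    """Confusion-matrix-based metrics for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.sum(pred & target)
    tn = np.sum(~pred & ~target)
    fp = np.sum(pred & ~target)
    fn = np.sum(~pred & target)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn + eps),
        "precision": tp / (tp + fp + eps),
        "sensitivity": tp / (tp + fn + eps),   # recall / true positive rate
        "specificity": tn / (tn + fp + eps),
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
    }
```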


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Attention; Data Collection
9.
Artif Intell Med ; 107: 101899, 2020 07.
Article in English | MEDLINE | ID: mdl-32828447

ABSTRACT

In this paper, we embed two types of attention modules in a dilated fully convolutional network (FCN) to solve biomedical image segmentation tasks efficiently and accurately. Unlike previous work on image segmentation through multiscale feature fusion, we propose the fully convolutional attention network (FCANet) to aggregate contextual information over both long-range and short-range distances. Specifically, we add two types of attention modules, a spatial attention module and a channel attention module, to the Res2Net network with a dilated strategy. The features of each location are aggregated through the spatial attention module, so that similar features reinforce each other across spatial positions. At the same time, the channel attention module treats each channel of the feature map as a feature detector and emphasizes the dependency between any two channel maps. Finally, we compute a weighted sum of the output features of the two attention modules to retain feature information over long-range and short-range distances, further improving the feature representation and making biomedical image segmentation more accurate. In particular, we verify that the proposed attention modules can be seamlessly connected to any end-to-end network with minimal overhead. We perform comprehensive experiments on three public biomedical image segmentation datasets, i.e., the Chest X-ray collection, the Kaggle 2018 Data Science Bowl, and the Herlev dataset. The experimental results show that FCANet can improve the segmentation of biomedical images. The source code and models are available at https://github.com/luhongchun/FCANet.
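As a rough illustration of the weighted combination of spatial and channel attention outputs described above, the sketch below pairs a non-local-style spatial branch with a channel-affinity branch and sums them with learnable scalars. The exact FCANet modules are in the linked repository; the names and layer choices here are assumptions.

```python
import torch
import torch.nn as nn

class WeightedDualAttention(nn.Module):
    """Spatial (position) attention plus channel attention, combined by a
    learnable weighted sum; an illustrative stand-in, not FCANet's code."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.alpha = nn.Parameter(torch.zeros(1))  # weight of the spatial branch
        self.beta = nn.Parameter(torch.zeros(1))   # weight of the channel branch

    def forward(self, x):
        b, c, h, w = x.shape
        # Spatial branch: every location attends to every other location.
        q = self.query(x).flatten(2).transpose(1, 2)            # (b, hw, c')
        k = self.key(x).flatten(2)                               # (b, c', hw)
        attn = torch.softmax(q @ k, dim=-1)                      # (b, hw, hw)
        v = self.value(x).flatten(2)                             # (b, c, hw)
        spatial = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        # Channel branch: channel-to-channel affinity reweights each channel.
        feat = x.flatten(2)                                      # (b, c, hw)
        chan_attn = torch.softmax(feat @ feat.transpose(1, 2), dim=-1)
        channel = (chan_attn @ feat).view(b, c, h, w)
        # Weighted sum of the two attention outputs added back to the input.
        return x + self.alpha * spatial + self.beta * channel
```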


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans