Results 1 - 5 of 5
1.
Methods ; 202: 40-53, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34029714

ABSTRACT

Automatic medical image segmentation plays an important role as a diagnostic aid in the identification and treatment of diseases in clinical settings. Recently proposed methods based on Convolutional Neural Networks (CNNs) have demonstrated their potential in image processing tasks, including some medical image analysis tasks. These methods can learn various feature representations with numerous weight-shared convolutional kernels; however, the missed diagnosis rate of regions of interest (ROIs) in medical image segmentation remains high. Two crucial, often overlooked factors behind this shortcoming are the small size of ROIs in medical images and the limited contextual information captured by existing network models. To reduce the missed diagnosis rate of ROIs in medical images, we propose a new segmentation framework that enhances the representative capability of small ROIs (particularly in deep layers) and explicitly learns global contextual dependencies in multi-scale feature spaces. In particular, the local features and their global dependencies in each feature space are adaptively aggregated along both the spatial and the channel dimensions. Moreover, visual comparisons of the features learned by our framework further improve the interpretability of the neural networks. Experimental results show that, compared with popular medical image segmentation and general image segmentation methods, the proposed framework achieves state-of-the-art performance on liver tumor segmentation (91.18% Sensitivity), COVID-19 lung infection segmentation (75.73% Sensitivity), and retinal vessel detection (82.68% Sensitivity). Moreover, (parts of) the proposed framework can be integrated into most recently proposed fully CNN-based models to improve their effectiveness in medical image segmentation tasks.
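The adaptive aggregation of local features with their global dependencies described in this abstract is, at its core, a self-attention operation over spatial positions. The following NumPy sketch is our own illustration, not the paper's implementation; the function names, shapes, and scaling choice are assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_spatial_attention(feat):
    """feat: (C, H, W) feature map. Every output position aggregates
    information from all positions, weighted by feature similarity
    (a non-local / self-attention style operation)."""
    c, h, w = feat.shape
    x = feat.reshape(c, h * w)                       # flatten spatial dims: (C, N)
    # pairwise similarity between all N = H*W positions; rows sum to 1
    attn = softmax(x.T @ x / np.sqrt(c), axis=-1)    # (N, N)
    out = x @ attn.T                                 # each position mixes all others
    return out.reshape(c, h, w)

feat = np.random.rand(8, 4, 4)
out = global_spatial_attention(feat)
```

Because each output value is a convex combination of that channel's input values, the aggregation can highlight globally consistent evidence without pushing activations outside their original range.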


Subjects
COVID-19 , Liver Neoplasms , Algorithms , COVID-19/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer
2.
Eur J Nucl Med Mol Imaging ; 47(10): 2248-2268, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32222809

ABSTRACT

PURPOSE: Unlike normal organ segmentation, automatic tumor segmentation is a more challenging task because tumors and their surroundings share similar visual characteristics, especially on computed tomography (CT) images with severely low contrast resolution, and because data acquisition procedures and devices vary widely. Consequently, most recently proposed methods are difficult to apply to a different tumor dataset with good results, and some tumor segmentors fail to generalize beyond the datasets and modalities used in their original evaluation experiments. METHODS: To alleviate some of these problems, we propose a novel, unified, end-to-end adversarial learning framework for automatic segmentation of any kind of tumor from CT scans, called CTumorGAN, consisting of a Generator network and a Discriminator network. Specifically, the Generator attempts to generate segmentation results that are close to their corresponding gold standards, while the Discriminator aims to distinguish generated samples from real tumor ground truths. More importantly, we deliberately design different modules to account for well-known obstacles, e.g., severe class imbalance, small tumor localization, and label noise caused by poor expert annotation quality, and then use these modules to guide the CTumorGAN training process by exploiting multi-level supervision more effectively. RESULTS: We conduct a comprehensive evaluation of diverse loss functions for tumor segmentation and find that mean squared error is the most suitable for the CT tumor segmentation task.
Furthermore, extensive experiments with multiple evaluation criteria on three well-established datasets, covering lung, kidney, and liver tumors, demonstrate that CTumorGAN achieves stable and competitive performance compared with state-of-the-art approaches for CT tumor segmentation. CONCLUSION: To overcome the key challenges arising from CT datasets and to address some of the main problems of current deep learning-based methods, we propose a novel unified CTumorGAN framework, which generalizes effectively to many kinds of tumor datasets with superior performance.
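The loss-function comparison mentioned in the RESULTS section can be made concrete with a toy example. This is our own minimal sketch (the real losses operate on full CT volumes inside the adversarial training loop): a pixel-wise mean squared error, the loss the abstract reports as most suitable, alongside a Dice score as a standard overlap metric for evaluation:

```python
import numpy as np

def mse_loss(pred, target):
    """Pixel-wise mean squared error between a predicted probability map
    and a binary ground-truth mask."""
    return float(np.mean((pred - target) ** 2))

def dice_score(pred, target, thresh=0.5, eps=1e-7):
    """Dice overlap of the thresholded prediction, a common segmentation metric."""
    p = (pred >= thresh).astype(float)
    inter = (p * target).sum()
    return float((2 * inter + eps) / (p.sum() + target.sum() + eps))

target = np.array([[0.0, 1.0], [1.0, 0.0]])
pred   = np.array([[0.1, 0.9], [0.8, 0.2]])
# mse_loss(pred, target) == mean([0.01, 0.01, 0.04, 0.04]) == 0.025
loss = mse_loss(pred, target)
dice = dice_score(pred, target)   # thresholded prediction matches the mask: 1.0
```

Unlike the hard Dice overlap, the MSE penalty decreases smoothly as the probability map approaches the mask, which is one plausible reason a regression-style loss can be easier to optimize on low-contrast CT.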


Subjects
Liver Neoplasms , Lung Neoplasms , Databases, Factual , Humans , Image Processing, Computer-Assisted , Tomography, X-Ray Computed
3.
IEEE Trans Cybern ; 53(11): 6776-6787, 2023 Nov.
Article in English | MEDLINE | ID: mdl-36044511

ABSTRACT

Automatic tumor or lesion segmentation is a crucial step in medical image analysis for computer-aided diagnosis. Although existing methods based on convolutional neural networks (CNNs) have achieved state-of-the-art performance, many challenges remain in medical tumor segmentation. While the human visual system can detect symmetries in 2-D images effectively, regular CNNs exploit only translation invariance, overlooking further symmetries inherent in medical images, such as rotations and reflections. To solve this problem, we propose a novel group equivariant segmentation framework that encodes these inherent symmetries to learn more precise representations. First, kernel-based equivariant operations are devised for each orientation, which effectively addresses the gaps in learning symmetries left by existing approaches. Then, to keep the segmentation network globally equivariant, we design distinctive group layers with layer-wise symmetry constraints. Finally, extensive experiments on real-world clinical data demonstrate that a group equivariant Res-UNet (called GER-UNet) built on our framework outperforms its regular CNN-based counterpart and state-of-the-art segmentation methods on hepatic tumor segmentation, COVID-19 lung infection segmentation, and retinal vessel detection. More importantly, GER-UNet also shows potential for reducing sample complexity and filter redundancy, upgrading current segmentation CNNs, and delineating organs in other medical imaging modalities.
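The rotation equivariance that this framework builds into its layers can be demonstrated with plain NumPy. The sketch below is our own simplification (the paper's group layers are far more general): correlating an image with all four 90-degree rotations of one kernel and pooling over orientations yields a response map that rotates exactly with the input:

```python
import numpy as np

def correlate_valid(img, ker):
    """Plain 'valid' 2-D cross-correlation."""
    kh, kw = ker.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

def p4_response(img, ker):
    """Correlate with all four 90-degree rotations of one kernel and
    max-pool over the orientation axis (a p4 group convolution followed
    by orientation pooling)."""
    maps = [correlate_valid(img, np.rot90(ker, k)) for k in range(4)]
    return np.max(np.stack(maps), axis=0)

rng = np.random.default_rng(0)
img = rng.standard_normal((6, 6))
ker = rng.standard_normal((3, 3))
# equivariance: rotating the input rotates the pooled response map
lhs = p4_response(np.rot90(img), ker)
rhs = np.rot90(p4_response(img, ker))
```

A regular CNN with a single kernel has no such guarantee; sharing one set of weights across the four orientations is also what allows group-equivariant layers to cut filter redundancy, as the abstract notes.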


Subjects
COVID-19 , Neoplasms , Humans , COVID-19/diagnostic imaging , Neural Networks, Computer , Diagnosis, Computer-Assisted , Image Processing, Computer-Assisted/methods
4.
Neural Netw ; 140: 203-222, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33780873

ABSTRACT

Compared with traditional analysis of computed tomography scans, automatic liver tumor segmentation can supply precise tumor volumes and reduce inter-observer variability in estimating tumor size and tumor burden, which can further help physicians make better therapeutic choices for hepatic diseases and monitor treatment. Among current mainstream segmentation approaches, multi-layer, multi-kernel convolutional neural networks (CNNs) have attracted much attention in diverse biomedical/medical image segmentation tasks with remarkable performance. However, an arbitrary stacking of feature maps makes CNNs quite inconsistent in imitating human cognition and visual attention for a specific visual task. To mitigate the lack of a reasonable feature selection mechanism in CNNs, we propose a novel and effective network architecture, called Tumor Attention Networks (TA-Net), which mines adaptive features by embedding tumor attention layers with multi-functional modules to assist liver tumor segmentation. In particular, each tumor attention layer adaptively highlights valuable tumor features and suppresses unrelated ones among feature maps from both 3D and 2D perspectives. Moreover, an analysis of visualization results illustrates the effectiveness of our tumor attention modules and the interpretability of CNNs for liver tumor segmentation. Furthermore, we explore different arrangements of skip connections for information fusion, and a thorough ablation study illustrates the effects of different attention strategies for hepatic tumors. Extensive experiments demonstrate that the proposed TA-Net improves liver tumor segmentation performance with lower computational cost and a small parameter overhead compared with state-of-the-art methods, under various evaluation metrics on clinical benchmark data.
In addition, two further medical image datasets are used to evaluate the generalization capability of TA-Net, including a comparison with general semantic segmentation methods and a non-tumor segmentation task. All program code has been released at https://github.com/shuchao1212/TA-Net.
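The "highlight valuable features, suppress unrelated ones" behavior of an attention layer can be sketched with a squeeze-and-excitation style channel gate. This is our own toy illustration, not TA-Net's actual tumor attention module (which operates from both 3D and 2D perspectives); the weight shapes and reduction ratio are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style gating: global-average-pool each channel
    into a descriptor, pass it through a small two-layer bottleneck, and
    rescale the channels so informative maps are emphasized and others
    suppressed."""
    c = feat.shape[0]
    squeeze = feat.reshape(c, -1).mean(axis=1)         # (C,) channel descriptor
    gate = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0))   # (C,) weights in (0, 1)
    return feat * gate[:, None, None], gate

rng = np.random.default_rng(1)
feat = rng.random((8, 5, 5))       # (C, H, W) feature maps
w1 = rng.standard_normal((2, 8))   # bottleneck: reduce C=8 to 2
w2 = rng.standard_normal((8, 2))   # expand back to C=8
out, gate = channel_attention(feat, w1, w2)
```

In a trained network the bottleneck weights are learned, so channels correlated with tumor evidence receive gates near 1 while distracting channels are damped toward 0.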


Subjects
Image Processing, Computer-Assisted/methods , Liver Neoplasms/diagnostic imaging , Neural Networks, Computer , Tomography, X-Ray Computed/methods , Humans , Image Processing, Computer-Assisted/standards , Tomography, X-Ray Computed/standards
5.
Med Biol Eng Comput ; 57(1): 107-121, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30003400

ABSTRACT

With the advent of biomedical imaging technology, the number of captured and stored biomedical images in hospitals, imaging laboratories, and biomedical institutions is rapidly increasing. More robust biomedical image analysis technology is therefore needed to meet the requirements of diagnosing and classifying various kinds of diseases from biomedical images. However, current biomedical image classification methods and general non-biomedical image classifiers cannot extract sufficiently compact biomedical image features or capture the subtle differences between similar images showing different diseases from the same category. In this paper, we propose a novel fused convolutional neural network that combines shallow-layer and deep-layer features from the proposed deep architecture to build a more accurate and highly efficient classifier for biomedical images. In our analysis, we observed that the shallow layers provide more detailed local features, which can distinguish different diseases within the same category, while the deep layers convey higher-level semantic information that helps classify diseases across categories. A detailed comparison of our approach with traditional classification algorithms and popular deep classifiers across several public biomedical image datasets showed the superior performance of the proposed method for biomedical image classification. In addition, we evaluated the performance of our method for modality classification of medical images using the ImageCLEFmed dataset. Graphical abstract: the fused deep convolutional neural network architecture proposed for biomedical image classification, showing the feature-fusion process from the shallow and deep layers.
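The shallow/deep fusion idea this abstract describes can be sketched in a few lines. This is our own minimal illustration under assumed shapes, not the paper's architecture: pool a detailed shallow feature map and a semantic deep feature map into descriptors and concatenate them for the final classifier:

```python
import numpy as np

def fuse_features(shallow, deep):
    """Global-average-pool a shallow and a deep feature map, then
    concatenate the two descriptors into one fused vector that carries
    both local detail and high-level semantics."""
    s = shallow.reshape(shallow.shape[0], -1).mean(axis=1)  # (C_shallow,)
    d = deep.reshape(deep.shape[0], -1).mean(axis=1)        # (C_deep,)
    return np.concatenate([s, d])

shallow = np.random.rand(16, 32, 32)   # early layer: detailed local features
deep    = np.random.rand(64, 4, 4)     # late layer: high-level semantics
fused = fuse_features(shallow, deep)   # vector of length 16 + 64 = 80
```

A classifier head trained on the fused vector can then use the shallow detail to separate similar diseases within a category and the deep semantics to separate categories.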


Subjects
Diagnostic Imaging/classification , Image Interpretation, Computer-Assisted/methods , Neural Networks, Computer , Algorithms , Deep Learning , Humans