Results 1 - 8 of 8
1.
Comput Med Imaging Graph ; 108: 102258, 2023 09.
Article in English | MEDLINE | ID: mdl-37315396

ABSTRACT

Lung cancer has the highest mortality rate among cancers. Its diagnosis and treatment analysis depend upon accurate segmentation of the tumor, which becomes tedious if done manually, as radiologists are overburdened with numerous medical imaging tests owing to the rise in cancer patients and the COVID-19 pandemic. Automatic segmentation techniques therefore play an essential role in assisting medical experts. Segmentation approaches based on convolutional neural networks have provided state-of-the-art performance; however, they cannot capture long-range relations because the convolutional operator is region-based. Vision transformers can resolve this issue by capturing global multi-contextual features. To exploit this advantage, we propose an approach for lung tumor segmentation that amalgamates a vision transformer with a convolutional neural network. We design the network as an encoder-decoder structure, with convolution blocks in the initial layers of the encoder to capture features carrying essential information and corresponding blocks in the final layers of the decoder. The deeper layers use transformer blocks with a self-attention mechanism to capture more detailed global feature maps. For network optimization, we use a recently proposed unified loss function that combines cross-entropy and Dice-based losses. We trained the network on the publicly available NSCLC-Radiomics dataset and tested its generalizability on a dataset collected from a local hospital, achieving average Dice coefficients of 0.7468 and 0.6847 and Hausdorff distances of 15.336 and 17.435 on the public and local test data, respectively.
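As an illustration of the unified loss mentioned in this abstract, the sketch below combines a binary cross-entropy term with a soft Dice term on flat probability/label lists. This is a minimal pure-Python reading of such a loss, not the authors' implementation; the function name and the equal weighting of the two terms are assumptions.

```python
import math

def unified_loss(pred, target, eps=1e-7):
    """Combined cross-entropy + Dice loss for a binary mask.

    `pred` holds per-pixel foreground probabilities in (0, 1);
    `target` holds ground-truth labels in {0, 1}.
    """
    # Binary cross-entropy, averaged over pixels.
    ce = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
              for p, t in zip(pred, target)) / len(pred)
    # Soft Dice term: 1 - 2|P.T| / (|P| + |T|).
    inter = sum(p * t for p, t in zip(pred, target))
    dice = (2 * inter + eps) / (sum(pred) + sum(target) + eps)
    return ce + (1 - dice)
```

A near-perfect prediction yields a loss close to zero, while a confidently wrong one is heavily penalized by both terms.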


Subject(s)
COVID-19 , Carcinoma, Non-Small-Cell Lung , Lung Neoplasms , Humans , Lung Neoplasms/diagnostic imaging , Diffusion Magnetic Resonance Imaging , Neural Networks, Computer , Image Processing, Computer-Assisted
2.
Comput Biol Med ; 147: 105781, 2022 08.
Article in English | MEDLINE | ID: mdl-35777084

ABSTRACT

Lung nodule segmentation plays a crucial role in early-stage lung cancer diagnosis, and early detection of lung cancer can improve patients' survival rates. Approaches based on convolutional neural networks (CNNs) have outperformed traditional image processing approaches in various computer vision applications, including medical image analysis. Although multiple CNN-based techniques have provided state-of-the-art performance for medical image segmentation tasks, challenges remain. Two main challenges are data scarcity and class imbalance, which can cause overfitting and thus poor performance. In this study, we propose an approach based on a 3D conditional generative adversarial network for lung nodule segmentation, which produces better segmentation results by learning the data distribution. The generator is based on the well-known U-Net architecture with a concurrent squeeze-and-excitation module. The discriminator is a simple classification network with a spatial squeeze and channel excitation module that differentiates between ground-truth and generated segmentations. To mitigate overfitting, we adopt patch-based training. We evaluated the proposed approach on two datasets, LUNA16 and a local dataset, and achieved significantly improved performance, with Dice coefficients of 80.74% and 76.36% and sensitivities of 85.46% and 82.56% on the LUNA16 test set and the local dataset, respectively.
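The patch-based training this abstract mentions boils down to tiling a 3-D CT volume with overlapping patches. A minimal sketch of the index arithmetic, with patch and stride sizes chosen only for illustration (the paper does not state them):

```python
def patch_starts(dim, patch, stride):
    """Start indices along one axis so patches cover the full extent."""
    starts = list(range(0, max(dim - patch, 0) + 1, stride))
    # Ensure the final patch touches the volume border.
    if starts[-1] != dim - patch and dim > patch:
        starts.append(dim - patch)
    return starts

def volume_patches(shape, patch=(64, 64, 64), stride=(32, 32, 32)):
    """All (z, y, x) patch origins tiling a 3-D volume of `shape`."""
    zs = patch_starts(shape[0], patch[0], stride[0])
    ys = patch_starts(shape[1], patch[1], stride[1])
    xs = patch_starts(shape[2], patch[2], stride[2])
    return [(z, y, x) for z in zs for y in ys for x in xs]
```

Overlapping patches both augment the effective training set and keep memory use bounded, which is why the technique helps against overfitting on scarce data.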


Subject(s)
Image Processing, Computer-Assisted , Lung Neoplasms , Humans , Image Processing, Computer-Assisted/methods , Lung/diagnostic imaging , Lung Neoplasms/diagnostic imaging , Neural Networks, Computer
3.
Biomed Res Int ; 2022: 7340902, 2022.
Article in English | MEDLINE | ID: mdl-35155680

ABSTRACT

Screening for interstitial lung disease (ILD) with high-resolution computed tomography (HRCT) images can help improve healthcare quality. However, most earlier ILD classification work involves time-consuming manual identification of the region of interest (ROI) from the lung HRCT image before the deep learning classification algorithm is applied. This paper develops a two-stage hybrid deep learning approach for ILD classification. In the first stage, a conditional generative adversarial network (c-GAN) with a multiscale feature extraction module segments the lung from the HRCT image, giving accurate lung segmentation even in the presence of lung abnormalities. In the second stage, a pretrained ResNet50 extracts features from the segmented lung image, which a support vector machine classifier uses to assign one of six ILD classes. The proposed two-stage algorithm takes a whole HRCT image as input, eliminating the need to extract an ROI, and classifies the image into an ILD class. The classifier's performance improves considerably because each stage refines the input to the next.
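At inference time, the second stage's SVM reduces to scoring the extracted feature vector with one linear decision function per class and taking the argmax, as in a one-vs-rest scheme. A toy sketch of that decision step (the weights, biases, and feature dimensionality here are placeholders, not values from the paper):

```python
def linear_scores(features, weights, biases):
    """One-vs-rest linear decision functions: w.x + b for each class."""
    return [sum(w_i * f_i for w_i, f_i in zip(w, features)) + b
            for w, b in zip(weights, biases)]

def classify_ild(features, weights, biases):
    """Return the index of the highest-scoring ILD class."""
    scores = linear_scores(features, weights, biases)
    return scores.index(max(scores))
```

In the paper's setting, `features` would be the ResNet50 embedding of the segmented lung and there would be six weight/bias pairs, one per ILD class.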


Subject(s)
Deep Learning , Lung Diseases, Interstitial/classification , Lung Diseases, Interstitial/diagnostic imaging , Humans , Tomography, X-Ray Computed
4.
Comput Methods Programs Biomed ; 213: 106501, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34752959

ABSTRACT

Automatic liver and tumor segmentation are essential steps for decisive action in hepatic disease detection, therapeutic planning, and post-treatment assessment. The computed tomography (CT) scan has become the modality of choice for diagnosing hepatic anomalies. However, with advances in CT acquisition protocols, the volume of CT data is growing, and manual delineation of the liver and tumor from a CT volume becomes cumbersome and tedious for medical experts, making the outcome highly reliant on the operator's proficiency. Further, automatic liver and tumor segmentation from CT images is challenging due to complicated parenchyma, highly variable shapes, low voxel-intensity variation among the liver, tumor, and neighbouring organs, and discontinuities in liver boundaries. Recently, deep learning (DL) has exhibited extraordinary potential in medical image interpretation, and DL-based convolutional neural networks (CNNs) have gained significant interest in the medical realm. The proposed HFRU-Net is derived from the UNet architecture by modifying the skip pathways with a local feature reconstruction and feature fusion mechanism that represents detailed contextual information in the high-level features. The fused features are then adaptively recalibrated by learning channel-wise interdependencies with a squeeze-and-excitation network (SENet) to acquire the prominent details of the modified high-level features. In the bottleneck layer, we employ an atrous spatial pyramid pooling (ASPP) module to represent multiscale features with dissimilar receptive fields, capturing the rich spatial information in the low-level features. These amendments improve segmentation performance while reducing computational complexity compared with competing methods.
The efficacy of the proposed model is demonstrated by extensive experimentation on two publicly available datasets (LiTS and 3DIRCADb). The proposed model attained Dice similarity coefficients of 0.966 and 0.972 for liver segmentation and 0.771 and 0.776 for liver tumor segmentation on the LiTS and 3DIRCADb datasets, respectively. The robustness of HFRU-Net is further confirmed on the independent LiTS challenge test dataset, where it attained a global Dice score of 95.0% for liver segmentation and 61.4% for tumor segmentation, comparable with state-of-the-art methods.
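The squeeze-and-excitation recalibration used by HFRU-Net can be sketched in a few lines: global-average-pool each channel, push the channel descriptor through a small bottleneck (ReLU, then sigmoid), and scale each channel by its gate. The toy weights and flat channel lists below are illustrative assumptions, not the paper's parameters:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_recalibrate(feature_maps, w1, w2):
    """Squeeze-and-Excitation channel recalibration (sketch).

    `feature_maps`: list of channels, each a flat list of activations.
    `w1`: C -> C//r bottleneck weights; `w2`: C//r -> C expansion weights.
    """
    # Squeeze: one scalar per channel via global average pooling.
    z = [sum(ch) / len(ch) for ch in feature_maps]
    # Excitation: bottleneck MLP with ReLU then sigmoid gating.
    hidden = [max(0.0, sum(w * zc for w, zc in zip(row, z))) for row in w1]
    gates = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w2]
    # Rescale each channel by its learned gate.
    return [[g * a for a in ch] for g, ch in zip(gates, feature_maps)]
```

With strongly positive or negative expansion weights, the gates approach 1 or 0, amplifying informative channels and suppressing the rest, which is the channel-wise interdependency learning the abstract describes.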


Subject(s)
Image Processing, Computer-Assisted , Liver Neoplasms , Humans , Liver Neoplasms/diagnostic imaging , Neural Networks, Computer , Tomography, X-Ray Computed
5.
Comput Med Imaging Graph ; 89: 101885, 2021 04.
Article in English | MEDLINE | ID: mdl-33684731

ABSTRACT

Automatic liver and tumor segmentation play a significant role in the clinical interpretation and treatment planning of hepatic diseases. Segmenting the liver and tumors manually from hundreds of computed tomography (CT) images is tedious and labor-intensive, making the segmentation expert-dependent. In this paper, we propose a multi-scale approach that improves the receptive field of a convolutional neural network (CNN) by representing multi-scale features, extracting global and local features at a more granular level. We also recalibrate the channel-wise responses of the aggregated multi-scale features, which enhances the network's high-level feature description ability. Experimental results demonstrate the efficacy of the proposed model on the publicly available 3DIRCADb dataset, where it achieved a Dice similarity score of 97.13% for the liver and 84.15% for the tumor. A statistical significance test showed that the improvement is significant at the 0.05 level (p < 0.05). The multi-scale approach improves the segmentation performance of the network while reducing computational complexity and network parameters, and the proposed method outperforms state-of-the-art methods.
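The idea of representing features "at a more granular level" across scales can be illustrated with simple average pooling: the same signal is summarized at several window sizes and the views are concatenated, mixing fine local detail with coarser context. This is a 1-D didactic sketch under assumed scales, not the paper's architecture:

```python
def avg_pool(signal, size):
    """Non-overlapping average pooling; drops any ragged tail."""
    return [sum(signal[i:i + size]) / size
            for i in range(0, len(signal) - size + 1, size)]

def multiscale_features(signal, scales=(1, 2, 4)):
    """Concatenate pooled views of the input at several scales."""
    feats = []
    for s in scales:
        feats.extend(avg_pool(signal, s))
    return feats
```

Larger pooling windows act like a larger receptive field: each output value summarizes a wider span of the input.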


Subject(s)
Neoplasms , Tomography, X-Ray Computed , Humans , Image Processing, Computer-Assisted , Liver/diagnostic imaging , Neural Networks, Computer
6.
J Healthc Eng ; 2018: 5940436, 2018.
Article in English | MEDLINE | ID: mdl-30356422

ABSTRACT

Breast cancer is the most prevalent cancer among women across the globe. Automatic detection of breast cancer with computer-aided diagnosis (CAD) systems suffers from false positives (FPs), so reducing FPs is one of the key challenges in improving such diagnosis systems. In the present work, a new FP reduction technique is proposed for breast cancer diagnosis, based on an appropriate integration of preprocessing, self-organizing map (SOM) clustering, region of interest (ROI) extraction, and FP reduction. In preprocessing, contrast enhancement of mammograms is achieved using a local entropy maximization algorithm. The unsupervised SOM clusters an image into a number of segments to identify the cancerous region and extracts tumor regions (i.e., ROIs). However, it also detects some FPs, which affects the efficiency of the algorithm. Therefore, to reduce the FPs, the output of the SOM is passed to the FP reduction step, which classifies the extracted ROIs as normal or abnormal. FP reduction consists of feature mining from the ROIs using the proposed local sparse curvelet coefficients, followed by classification using an artificial neural network (ANN). The performance of the proposed algorithm has been validated on a local dataset from TMCH (Tata Memorial Cancer Hospital) and on the publicly available MIAS (Suckling et al., 1994) and DDSM (Heath et al., 2000) databases. The proposed technique reduces FPs from 0.85 to 0.02 FP/image for MIAS, from 4.81 to 0.16 FP/image for DDSM, and from 2.32 to 0.05 FP/image for TMCH, reflecting a substantial improvement in the classification of mammograms.
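The quantity that local entropy maximization drives up is the Shannon entropy of a region's intensity histogram; a flat histogram (high contrast) maximizes it, a constant region minimizes it. A minimal sketch of that measure (the enhancement algorithm itself is not reproduced here):

```python
import math

def shannon_entropy(pixels, levels=256):
    """Entropy (bits) of a grayscale region's intensity histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist if c)
```

Four pixels spread over four distinct levels give 2 bits of entropy, whereas a constant patch gives 0, so maximizing local entropy spreads intensities apart and enhances contrast.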


Subject(s)
Breast Neoplasms/diagnostic imaging , Breast/diagnostic imaging , Diagnosis, Computer-Assisted/methods , Mammography , Algorithms , Biopsy , Cluster Analysis , Databases, Factual , False Positive Reactions , Female , Humans , Image Processing, Computer-Assisted/methods , Pattern Recognition, Automated , Radiographic Image Interpretation, Computer-Assisted/methods , Sensitivity and Specificity , Software
7.
Comput Methods Programs Biomed ; 163: 1-20, 2018 Sep.
Article in English | MEDLINE | ID: mdl-30119844

ABSTRACT

BACKGROUND AND OBJECTIVE: Early detection is key to reducing the breast cancer mortality rate. Detecting mammographic abnormalities as subtle signs of breast cancer is essential for proper diagnosis and treatment. The aim of this preliminary study is to develop algorithms that detect suspicious lesions and characterize them, reducing diagnostic errors with respect to both false positives and false negatives. METHODS: The proposed hybrid mechanism detects suspicious lesions automatically using connected component labeling and an adaptive fuzzy region growing algorithm. A novel neighboring pixel selection algorithm reduces the computational complexity of the seeded region growing algorithm used to finalize lesion contours. The lesions are characterized using radiomic features and then classified as benign masses or malignant tumors using k-NN and SVM classifiers. Two datasets of 460 full-field digital mammograms (FFDM) used in this clinical study consist of 210 images with malignant tumors, 30 with benign masses, and 220 normal breast images, validated by radiologists with expertise in mammography. RESULTS: Qualitative assessment of the segmentation results by expert radiologists shows 91.67% sensitivity and 58.33% specificity. The effects of seven geometric and 48 textural features on classification accuracy, false positives per image (FPsI), sensitivity, and specificity are studied separately and together. Combined, the features achieved sensitivities of 84.44% and 85.56%, specificities of 91.11% and 91.67%, and FPsI of 0.54 and 0.55 using the k-NN and SVM classifiers, respectively, on the local dataset. CONCLUSIONS: The overall breast cancer detection performance of the proposed scheme, after combining geometric and textural features with both classifiers, is improved in terms of sensitivity, specificity, and FPsI.
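The k-NN stage of such a benign/malignant classifier is simple enough to sketch end to end: rank training feature vectors by Euclidean distance to the query and take a majority vote among the k closest. The feature vectors and labels below are toy values; the paper's actual radiomic features are not reproduced:

```python
import math
from collections import Counter

def knn_classify(query, samples, labels, k=3):
    """Majority vote among the k nearest (Euclidean) neighbours."""
    dists = sorted(
        (math.dist(query, s), lbl) for s, lbl in zip(samples, labels))
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]
```

With k = 3 the vote smooths over a single mislabeled or atypical neighbour, one reason odd small k values are a common default.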


Subject(s)
Breast Neoplasms/diagnostic imaging , Breast/diagnostic imaging , Mammography/methods , Radiographic Image Enhancement/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Algorithms , Breast/pathology , Case-Control Studies , Diagnosis, Computer-Assisted , False Positive Reactions , Female , Fuzzy Logic , Humans , Reproducibility of Results , Sensitivity and Specificity , Software
8.
Comput Biol Med ; 81: 64-78, 2017 02 01.
Article in English | MEDLINE | ID: mdl-28013026

ABSTRACT

Neurocysticercosis (NCC) is a parasitic infection caused by the tapeworm Taenia solium in its larval stage, which affects the central nervous system of the human host. It results in the formation of multiple lesions in the brain at different locations during its various stages. During diagnosis of symptomatic patients, these lesions can be better visualized using a feature-based fusion of computed tomography (CT) and magnetic resonance imaging (MRI). This paper presents a novel approach to multimodality medical image fusion (MMIF) for analyzing lesions for diagnosis and post-treatment review of NCC. The MMIF presented here combines CT and MRI data of the same patient into a new slice using a nonsubsampled rotated complex wavelet transform (NSRCxWT). The forward NSRCxWT is applied to both source modalities separately to extract complementary and edge-related features. These features are then combined into a composite spectral plane using average and maximum-value selection fusion rules. The inverse transform on this composite plane yields a new, visually better, enriched fused image. The proposed technique is tested on pilot-study datasets of patients infected with NCC, and the quality of the fused images is measured using objective and subjective evaluation. Objective evaluation estimates fusion metrics such as entropy, fusion factor, image quality index, edge quality measure, and mean structural similarity index measure. The fused images are also evaluated for visual quality through subjective analysis by three expert radiologists. Experimental results on 43 image datasets from 17 patients are promising and superior to state-of-the-art wavelet-based fusion algorithms.
The proposed algorithm can be part of a computer-aided detection and diagnosis (CADD) system assisting radiologists in clinical practice.
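The two fusion rules named in this abstract are easy to state on coefficient arrays: averaging blends smooth (approximation) content, while maximum-value selection keeps the coefficient of larger magnitude so the strongest edge response from either modality survives. A sketch on flat coefficient lists, a simplification of the actual NSRCxWT subbands:

```python
def fuse_average(a, b):
    """Average fusion rule: blend coefficients from the two modalities."""
    return [(x + y) / 2.0 for x, y in zip(a, b)]

def fuse_max_abs(a, b):
    """Maximum-value selection rule: keep the coefficient with the
    larger magnitude, preserving the stronger edge response."""
    return [x if abs(x) >= abs(y) else y for x, y in zip(a, b)]
```

In a full pipeline, the fused subbands produced by these rules would be passed to the inverse transform to reconstruct the composite image.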


Subject(s)
Brain Diseases/diagnostic imaging , Magnetic Resonance Imaging/methods , Multimodal Imaging/methods , Neurocysticercosis/diagnostic imaging , Pattern Recognition, Automated/methods , Subtraction Technique , Tomography, X-Ray Computed/methods , Algorithms , Humans , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Machine Learning , Reproducibility of Results , Sample Size , Sensitivity and Specificity , Wavelet Analysis