ABSTRACT
Brain extraction, or skull-stripping, is an essential pre-processing step in neuro-imaging that has a direct impact on the quality of all subsequent processing and analysis steps. It is also a key requirement in multi-institutional collaborations to comply with privacy-preserving regulations. Existing automated methods, including Deep Learning (DL) based methods that have obtained state-of-the-art results in recent years, have primarily targeted brain extraction without considering pathologically-affected brains. Accordingly, they perform sub-optimally when applied to magnetic resonance imaging (MRI) brain scans with apparent pathologies, such as brain tumors. Furthermore, existing methods focus on using only T1-weighted MRI scans, even though multi-parametric MRI (mpMRI) scans are routinely acquired for patients with suspected brain tumors. In this study, we present a comprehensive performance evaluation of recent deep learning architectures for brain extraction, training models on mpMRI scans of pathologically-affected brains, with a particular focus on seeking a practically-applicable, low-computational-footprint approach that generalizes across multiple institutions and further facilitates collaborations. We identified a large retrospective multi-institutional dataset of n=3340 mpMRI brain tumor scans, with manually-inspected and approved gold-standard segmentations, acquired during standard clinical practice under varying acquisition protocols, both from private institutional data and public (TCIA) collections. To facilitate optimal utilization of the rich mpMRI data, we further introduce and evaluate a novel "modality-agnostic training" technique that can be applied using any available modality, without the need for model retraining. Our results indicate that the modality-agnostic approach obtains accurate results, providing a generic and practical tool for brain extraction on scans with brain tumors.
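The abstract does not spell out how "modality-agnostic training" works internally, so the sketch below shows only one plausible realization under assumed names: each training example feeds the network a single, randomly chosen modality, so one single-channel model can later run on whichever modality happens to be available, with no retraining.

```python
# Hypothetical sketch of modality-agnostic sampling (illustrative only;
# not the paper's implementation). Modality keys are assumptions.
import random
import torch

MODALITIES = ["t1", "t1ce", "t2", "flair"]

def modality_agnostic_batch(volumes: dict) -> torch.Tensor:
    """volumes maps modality name -> tensor of shape (B, 1, D, H, W);
    returns the batch for one randomly selected modality."""
    return volumes[random.choice(MODALITIES)]

# In a training loop (model, loss_fn, optimizer, masks defined elsewhere):
#   x = modality_agnostic_batch(volumes)
#   loss = loss_fn(model(x), masks); loss.backward(); optimizer.step()
```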
Subjects
Brain Neoplasms/diagnostic imaging; Brain/diagnostic imaging; Glioma/diagnostic imaging; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Databases, Factual; Deep Learning; Humans; Retrospective Studies
ABSTRACT
Accurate automatic lung nodule segmentation is of prime importance for lung cancer analysis and is a fundamental step in computer-aided diagnosis (CAD) systems. However, the diverse types of nodules and their visual similarity to the surrounding chest region make it challenging to develop a lung nodule segmentation algorithm. In this paper, we propose a Deep Deconvolutional Residual Network (DDRN) based approach for lung nodule segmentation from CT images. Our approach is based on two key insights. First, the proposed deep deconvolutional residual network is trained end to end and captures the diverse variety of nodules from 2D sets of CT images. Second, a summation-based long skip connection from the convolutional to the deconvolutional part of the network preserves the spatial information lost during pooling operations and captures full-resolution features. The proposed method is evaluated on the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC/IDRI) dataset. Results indicate that our method can successfully segment nodules, achieving an average Dice score of 94.97% and a Jaccard index of 88.68%.
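A minimal PyTorch illustration (not the authors' code) of the summation-based long skip connection described above: an encoder feature map is added element-wise to the upsampled decoder feature map, restoring spatial detail lost during pooling.

```python
import torch
import torch.nn as nn

class SumSkipBlock(nn.Module):
    """Upsample a decoder feature map and add the matching encoder map."""
    def __init__(self, channels: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(channels, channels, kernel_size=2, stride=2)
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, decoder_feat, encoder_feat):
        x = self.up(decoder_feat)      # back to the encoder's resolution
        x = x + encoder_feat           # summation-based long skip connection
        return torch.relu(self.conv(x))

block = SumSkipBlock(64)
dec = torch.randn(1, 64, 32, 32)
enc = torch.randn(1, 64, 64, 64)
print(block(dec, enc).shape)           # torch.Size([1, 64, 64, 64])
```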
Subjects
Lung Neoplasms; Solitary Pulmonary Nodule; Algorithms; Diagnosis, Computer-Assisted; Humans; Lung/diagnostic imaging; Lung Neoplasms/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted; Solitary Pulmonary Nodule/diagnostic imaging; Tomography, X-Ray Computed
ABSTRACT
[This corrects the article DOI: 10.37349/etat.2023.00158.].
ABSTRACT
Lung cancer has the highest mortality rate among cancers. Its diagnosis and treatment analysis depend upon accurate segmentation of the tumor, which becomes tedious if done manually, as radiologists are overburdened with numerous medical imaging tests due to the increase in cancer patients and the COVID pandemic. Automatic segmentation techniques therefore play an essential role in assisting medical experts. Segmentation approaches based on convolutional neural networks have provided state-of-the-art performance; however, they cannot capture long-range relations due to the region-based convolutional operator. Vision Transformers can resolve this issue by capturing global multi-contextual features. To exploit this advantage of the vision transformer, we propose an approach for lung tumor segmentation using an amalgamation of the vision transformer and the convolutional neural network. We design the network as an encoder-decoder structure, with convolution blocks deployed in the initial layers of the encoder, to capture features carrying essential information, and in the corresponding final layers of the decoder. The deeper layers utilize transformer blocks with a self-attention mechanism to capture more detailed global feature maps. We use a recently proposed unified loss function that combines cross-entropy and Dice-based losses for network optimization. We trained our network on the publicly available NSCLC-Radiomics dataset and tested its generalizability on a dataset collected from a local hospital. We achieved average Dice coefficients of 0.7468 and 0.6847 and Hausdorff distances of 15.336 and 17.435 on the public and local test data, respectively.
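A common form of such a combined cross-entropy + Dice objective is sketched below in PyTorch; the specific unified loss cited in the paper may weight or formulate the terms differently, so treat this as an illustrative baseline.

```python
import torch
import torch.nn.functional as F

def ce_dice_loss(logits, target, smooth=1.0, alpha=0.5):
    """logits: (B, 1, H, W) raw scores; target: (B, 1, H, W) binary mask.
    alpha balances the cross-entropy and Dice terms (assumed here as 0.5)."""
    ce = F.binary_cross_entropy_with_logits(logits, target)
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = (2.0 * inter + smooth) / (union + smooth)
    return alpha * ce + (1 - alpha) * (1 - dice.mean())
```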
Subjects
COVID-19; Carcinoma, Non-Small-Cell Lung; Lung Neoplasms; Humans; Lung Neoplasms/diagnostic imaging; Diffusion Magnetic Resonance Imaging; Neural Networks, Computer; Image Processing, Computer-Assisted
ABSTRACT
Skin cancer is a serious condition that requires accurate diagnosis and treatment. One way to assist clinicians in this task is to use computer-aided diagnosis tools that automatically segment skin lesions from dermoscopic images. We propose a novel adversarial learning-based framework called Efficient-GAN (EGAN) that uses an unsupervised generative network to generate accurate lesion masks. It consists of a generator module with a top-down squeeze-excitation-based compound-scaled path and an asymmetric lateral connection-based bottom-up path, and a discriminator module that distinguishes between original and synthetic masks. A morphology-based smoothing loss is also implemented to encourage the network to produce smooth semantic boundaries of lesions. The framework is evaluated on the International Skin Imaging Collaboration Lesion Dataset. It outperforms current state-of-the-art skin lesion segmentation approaches with a Dice coefficient, Jaccard similarity, and accuracy of 90.1%, 83.6%, and 94.5%, respectively. We also design a lightweight segmentation framework called Mobile-GAN (MGAN) that achieves performance comparable to EGAN with an order of magnitude fewer training parameters, resulting in faster inference for low-compute-resource settings.
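The exact morphology-based smoothing loss is not detailed in the abstract; below is a hedged, differentiable approximation in PyTorch that penalizes the gap between a predicted probability mask and its morphological opening (erosion implemented as negated max-pooling), which discourages thin protrusions and jagged boundaries.

```python
import torch
import torch.nn.functional as F

def dilate(x, k=3):
    return F.max_pool2d(x, k, stride=1, padding=k // 2)

def erode(x, k=3):
    return -F.max_pool2d(-x, k, stride=1, padding=k // 2)

def smoothing_loss(probs):
    """probs: (B, 1, H, W) in [0, 1]. Distance to the morphological
    opening; an assumed surrogate, not the paper's exact formulation."""
    opened = dilate(erode(probs))
    return (probs - opened).abs().mean()
```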
Subjects
Accidental Injuries; Skin Diseases; Skin Neoplasms; Humans; Skin Diseases/diagnostic imaging; Skin Neoplasms/diagnostic imaging; Diagnosis, Computer-Assisted; Learning
ABSTRACT
Aim: The aim of this study was to investigate the feasibility of developing a deep learning (DL) algorithm for classifying brain metastases from non-small cell lung cancer (NSCLC) into epidermal growth factor receptor (EGFR) mutation and anaplastic lymphoma kinase (ALK) rearrangement groups, and to compare its accuracy with classification based on semantic imaging features. Methods: A dataset of 117 patients from 2014 to 2018 was analysed, of which 33 patients were EGFR positive, 43 were ALK positive, and 41 were negative for either alteration. The convolutional neural network (CNN) architecture EfficientNet was used to study classification accuracy using the T1-weighted (T1W), T2-weighted (T2W), T1W post-contrast (T1post), and fluid-attenuated inversion recovery (FLAIR) MRI sequences. The dataset was divided into 80% training and 20% testing. The associations between mutation status and semantic features, specifically sex, smoking history, EGFR mutation and ALK rearrangement status, extracranial metastasis, performance status, and imaging variables of brain metastasis, were analysed using descriptive analysis [chi-square test (χ2)] and univariate and multivariate logistic regression analysis assuming a 95% confidence interval (CI). Results: In this study of 117 patients, semantic analysis showed that 79.2% of ALK-positive patients were non-smokers, compared with the double-negative group (P = 0.03). There was a 10-fold increase in ALK positivity compared with EGFR positivity in patients with ring-enhancing lesions (P = 0.015), and a 6.4-fold increase in ALK positivity compared with the double-negative group in patients with meningeal involvement (P = 0.004). Using the EfficientNet DL model, the study achieved 76% accuracy in classifying ALK rearrangement and EGFR mutation without manual segmentation of metastatic lesions; analysis of the manually segmented dataset improved the model's accuracy to 89%. Conclusions: Both semantic features and the DL model showed comparable accuracy in classifying EGFR mutation and ALK rearrangement. Both methods can be used clinically to predict mutation status while biopsy or genetic testing is undertaken.
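As a hedged illustration of the classification setup (the exact EfficientNet variant and classification head are not specified in the abstract), a recent-torchvision EfficientNet can be re-headed for the three groups studied here:

```python
# Illustrative sketch: fine-tuning torchvision's EfficientNet-B0 for a
# three-way EGFR+ / ALK+ / double-negative classification. The variant
# choice and training details are assumptions, not the study's setup.
import torch.nn as nn
from torchvision import models

model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
in_feats = model.classifier[1].in_features        # input size of final linear layer
model.classifier[1] = nn.Linear(in_feats, 3)      # EGFR+, ALK+, double-negative
# ...then train with standard cross-entropy on (MRI slice, label) pairs.
```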
ABSTRACT
Lung nodule segmentation plays a crucial role in early-stage lung cancer diagnosis, and early detection of lung cancer can improve patient survival rates. Approaches based on convolutional neural networks (CNNs) have outperformed traditional image processing approaches in various computer vision applications, including medical image analysis. Although multiple CNN-based techniques have provided state-of-the-art performance for medical image segmentation tasks, challenges remain. Two main challenges are data scarcity and class imbalance, which can cause overfitting and, in turn, poor performance. In this study, we propose an approach based on a 3D conditional generative adversarial network for lung nodule segmentation, which produces better segmentation results by learning the data distribution, leading to better accuracy. The generator in the proposed network is based on the well-known U-Net architecture with a concurrent squeeze & excitation module. The discriminator is a simple classification network with a spatial squeeze & channel excitation module that differentiates between ground-truth and generated segmentations. To deal with overfitting, we implement patch-based training. We evaluated the proposed approach on two datasets, LUNA16 and a local dataset, achieving significantly improved performance with Dice coefficients of 80.74% and 76.36% and sensitivities of 85.46% and 82.56% for the LUNA16 test set and the local dataset, respectively.
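The concurrent squeeze & excitation idea referenced above combines channel excitation (cSE) with spatial excitation (sSE); a 2D PyTorch sketch follows (the paper's network is 3D, so this is illustrative only).

```python
import torch
import torch.nn as nn

class ConcurrentSE(nn.Module):
    """Concurrent spatial & channel squeeze-and-excitation (scSE)."""
    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        self.cse = nn.Sequential(                 # channel excitation branch
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.sse = nn.Sequential(                 # spatial excitation branch
            nn.Conv2d(channels, 1, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.cse(x) + x * self.sse(x)  # concurrent recalibration
```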
Subjects
Image Processing, Computer-Assisted; Lung Neoplasms; Humans; Image Processing, Computer-Assisted/methods; Lung/diagnostic imaging; Lung Neoplasms/diagnostic imaging; Neural Networks, Computer
ABSTRACT
Lung cancer is one of the deadliest types of cancer. Computed Tomography (CT) is a widely used technique to detect tumors inside the lungs, and delineation of such tumors is particularly essential for analysis and treatment purposes. With advances in hardware, Machine Learning and Deep Learning methods are outperforming traditional methods in the field of medical imaging. To delineate lung cancer tumors, we propose a deep learning-based methodology that includes a maximum intensity projection based pre-processing method, two novel deep learning networks, and an ensemble strategy. The two proposed networks, Deep Residual Separable Convolutional Neural Networks 1 and 2 (DRS-CNN1 and DRS-CNN2), achieved better performance than the state-of-the-art U-Net and other segmentation networks. For a fair comparison, we evaluated all networks on the Medical Segmentation Decathlon (MSD) and StructSeg 2019 datasets. DRS-CNN2 achieved a mean Dice Similarity Coefficient (DSC) of 0.649, a mean 95th-percentile Hausdorff Distance (HD95) of 18.26, a mean sensitivity of 0.737, and a mean precision of 0.765 on independent test sets.
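For reference, a maximum intensity projection collapses a slab of neighbouring CT slices into a single image by keeping the brightest voxel along the slice axis; a minimal NumPy sketch is below (the slab size is an illustrative choice, not the paper's setting).

```python
import numpy as np

def mip(volume: np.ndarray, center: int, slab: int = 5) -> np.ndarray:
    """volume: (slices, H, W) CT volume; returns the maximum intensity
    projection over a slab of `slab` slices centred on `center`."""
    lo = max(0, center - slab // 2)
    hi = min(volume.shape[0], center + slab // 2 + 1)
    return volume[lo:hi].max(axis=0)
```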
Subjects
Image Processing, Computer-Assisted; Lung Neoplasms; Humans; Image Processing, Computer-Assisted/methods; Lung; Lung Neoplasms/diagnostic imaging; Neural Networks, Computer; Tomography, X-Ray Computed/methods
ABSTRACT
High-resolution computed tomography (HRCT) imaging in interstitial lung disease (ILD) screening can help improve healthcare quality. However, most earlier ILD classification work involves time-consuming manual identification of the region of interest (ROI) from the lung HRCT image before applying a deep learning classification algorithm. This paper develops a two-stage hybrid approach of deep learning networks for ILD classification. In the first stage, a conditional generative adversarial network (c-GAN) segments the lung from the HRCT image; the c-GAN with a multiscale feature extraction module provides accurate lung segmentation even from HRCT images with lung abnormalities. In the second stage, a pretrained ResNet50 extracts features from the segmented lung image, which are classified into six ILD classes using a support vector machine classifier. The proposed two-stage algorithm takes a whole HRCT image as input, eliminating the need to extract the ROI, and classifies it into an ILD class. The stage-wise improvements of the deep learning components considerably improve the overall performance of the proposed ILD classifier.
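A hedged sketch of the second stage as described: a pretrained ResNet50 used as a fixed feature extractor, with an SVM classifying the resulting features into six ILD classes. Data loading, normalization, and the first-stage c-GAN segmentation are assumed done elsewhere.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()        # emit 2048-d features instead of logits
backbone.eval()

@torch.no_grad()
def extract_features(batch):       # batch: (B, 3, 224, 224), ImageNet-normalized
    return backbone(batch).numpy()

# With features/labels computed over segmented-lung images (names assumed):
#   clf = SVC(kernel="rbf").fit(train_features, train_labels)
#   predictions = clf.predict(test_features)
```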
Subjects
Deep Learning; Lung Diseases, Interstitial/classification; Lung Diseases, Interstitial/diagnostic imaging; Humans; Tomography, X-Ray Computed
ABSTRACT
Automatic liver and tumor segmentation are essential steps for decisive action in hepatic disease detection, therapeutic planning, and post-treatment assessment. The computed tomography (CT) scan has become the modality of choice for medical experts to diagnose hepatic anomalies. However, with advancements in CT image acquisition protocols, CT scan data are growing, and manual delineation of the liver and tumor from CT volumes has become cumbersome and tedious for medical experts, making the outcome highly reliant on the operator's proficiency. Furthermore, automatic liver and tumor segmentation from CT images is challenging due to complicated parenchyma, highly variable shape, low voxel-intensity variation among the liver, tumor, and neighbouring organs, and discontinuity in liver boundaries. Recently, deep learning (DL) has exhibited extraordinary potential in medical image interpretation, and DL-based convolutional neural networks (CNNs) have gained significant interest in the medical realm. The proposed HFRU-Net is derived from the UNet architecture by modifying the skip pathways using a local feature reconstruction and feature fusion mechanism that represents the detailed contextual information in the high-level features. Further, the fused features are adaptively recalibrated by learning the channel-wise interdependencies, acquiring the prominent details of the modified high-level features using a squeeze-and-excitation network (SENet). In the bottleneck layer, we employ an atrous spatial pyramid pooling (ASPP) module to capture multiscale features with dissimilar receptive fields, representing the rich spatial information in the low-level features. These amendments improve the segmentation performance and reduce the computational complexity of the model compared with competing methods. The efficacy of the proposed model is demonstrated by extensive experimentation on two publicly available datasets (LiTS and 3DIRCADb). The proposed model attained Dice similarity coefficients of 0.966 and 0.972 for liver segmentation and 0.771 and 0.776 for liver tumor segmentation on the LiTS and 3DIRCADb datasets, respectively. Further, the robustness of HFRU-Net is confirmed on the independent LiTS challenge test dataset, where the model attained a global Dice of 95.0% for liver segmentation and 61.4% for tumor segmentation, comparable with state-of-the-art methods.
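The ASPP module mentioned above is a standard construction: parallel dilated convolutions with different rates capture dissimilar receptive fields and are fused by a 1x1 projection. A 2D PyTorch sketch (the dilation rates here are illustrative, not the paper's):

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated 3x3 convolutions
    merge multiscale context from dissimilar receptive fields."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))
```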
Subjects
Image Processing, Computer-Assisted; Liver Neoplasms; Humans; Liver Neoplasms/diagnostic imaging; Neural Networks, Computer; Tomography, X-Ray Computed
ABSTRACT
Automatic liver and tumor segmentation play a significant role in clinical interpretation and treatment planning of hepatic diseases. Segmenting the liver and tumor manually from hundreds of computed tomography (CT) images is tedious and labor-intensive, making segmentation expert dependent. In this paper, we propose a multi-scale approach that improves the receptive field of a Convolutional Neural Network (CNN) by representing multi-scale features that extract global and local features at a more granular level. We also recalibrate the channel-wise responses of the aggregated multi-scale features, which enhances the high-level feature-description ability of the network. The experimental results demonstrate the efficacy of the proposed model on the publicly available 3DIRCADb dataset. The proposed approach achieved a Dice similarity score of 97.13% for the liver and 84.15% for the tumor. Statistical analysis demonstrated that the improvement of the proposed model is significant at the 0.05 level (p-value < 0.05). The multi-scale approach improves the segmentation performance of the network while reducing its computational complexity and number of parameters, and the experimental results show that the proposed method outperforms state-of-the-art methods.
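A hedged 2D sketch of the two ideas combined above: parallel convolutions at several kernel sizes provide multi-scale features, and a squeeze-and-excitation-style gate recalibrates the aggregated channels. The abstract does not specify the exact block design, so this is an assumed illustration.

```python
import torch
import torch.nn as nn

class MultiScaleSE(nn.Module):
    """Parallel multi-scale convolutions, concatenated and then
    recalibrated channel-wise (squeeze-and-excitation style)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.scales = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (1, 3, 5)
        )
        c = out_ch * 3
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(c, c // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c // 4, c, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        feats = torch.cat([s(x) for s in self.scales], dim=1)
        return feats * self.se(feats)     # channel-wise recalibration
```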
Subjects
Neoplasms; Tomography, X-Ray Computed; Humans; Image Processing, Computer-Assisted; Liver/diagnostic imaging; Neural Networks, Computer
ABSTRACT
Detecting various types of cells in and around the tumor matrix holds special significance in characterizing the tumor micro-environment for cancer prognostication and research. Automating the tasks of detecting, segmenting, and classifying nuclei can free up pathologists' time for higher-value tasks and reduce errors due to fatigue and subjectivity. To encourage the computer vision research community to develop and test algorithms for these tasks, we prepared a large and diverse dataset of nucleus boundary annotations and class labels. The dataset contains over 46,000 nuclei from 37 hospitals, 71 patients, four organs, and four nucleus types. We also organized a challenge around this dataset as a satellite event at the International Symposium on Biomedical Imaging (ISBI) in April 2020. The challenge saw wide participation from across the world, and the top methods were able to match inter-human concordance on the challenge metric. In this paper, we summarize the dataset and the key findings of the challenge, including the commonalities and differences between the methods developed by various participants. We have released the MoNuSAC2020 dataset to the public.
Subjects
Algorithms; Cell Nucleus; Humans; Image Processing, Computer-Assisted
ABSTRACT
Glioblastoma is a WHO grade IV brain tumor that leads to poor overall survival (OS) of patients. For precise surgical and treatment planning, OS prediction for glioblastoma (GBM) patients is highly desired by clinicians and oncologists. Radiomics research attempts to predict disease prognosis from a variety of imaging features extracted from multiple MR images, thus providing beneficial information for personalized treatment. In this study, first-order, intensity-based, volume- and shape-based, and textural radiomic features are extracted from fluid-attenuated inversion recovery (FLAIR) and T1ce MRI data. The region of interest is further decomposed with the stationary wavelet transform, with low-pass and high-pass filtering, and radiomic features are additionally extracted from these decomposed images, which helps capture directional information. The efficiency of the proposed algorithm is evaluated on the Brain Tumor Segmentation (BraTS) challenge training, validation, and test datasets, on which the proposed approach achieved 0.695, 0.571, and 0.558, respectively. The proposed approach secured the third position in the BraTS 2018 challenge for the OS prediction task.
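To illustrate the wavelet step, a one-level stationary wavelet decomposition of an ROI with PyWavelets yields four sub-bands, from which first-order statistics can be computed; the study's full radiomic feature set is far richer than this sketch, and the wavelet choice here is an assumption.

```python
import numpy as np
import pywt

roi = np.random.rand(128, 128)        # placeholder for a tumor ROI slice
# One-level 2D stationary (undecimated) wavelet transform:
(cA, (cH, cV, cD)), = pywt.swt2(roi, "haar", level=1)

# Simple first-order features per sub-band (mean, std, variance):
for name, band in {"LL": cA, "LH": cH, "HL": cV, "HH": cD}.items():
    print(name, band.mean(), band.std(), band.var())
```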
ABSTRACT
Purpose: Gliomas are the most common primary brain malignancies, with varying degrees of aggressiveness and prognosis. Understanding tumor biology and intra-tumor heterogeneity is necessary for planning personalized therapy and predicting response to therapy. Accurate tumoral and intra-tumoral segmentation on MRI is the first step toward understanding tumor biology through computational methods. The purpose of this study was to design a segmentation algorithm and evaluate its performance on pre-treatment brain MRIs obtained from patients with gliomas. Materials and Methods: We designed a novel 3D U-Net architecture that segments various radiologically identifiable sub-regions, such as edema, enhancing tumor, and necrosis. A weighted patch-extraction scheme focused on tumor border regions is proposed to address the class imbalance between tumorous and non-tumorous patches. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. The Deep Convolutional Neural Network (DCNN) based architecture was trained on 285 patients, validated on 66 patients, and tested on 191 patients with glioma from the Brain Tumor Segmentation (BraTS) 2018 challenge dataset. Three-dimensional patches were extracted from the multi-channel BraTS training dataset to train the 3D U-Net architecture. The efficacy of the proposed approach was also tested on an independent dataset of 40 patients with high-grade glioma from our tertiary cancer center. Segmentation results were assessed in terms of Dice score, sensitivity, specificity, and Hausdorff 95 distance. Results: Our proposed architecture achieved Dice scores of 0.88, 0.83, and 0.75 for the whole tumor, tumor core, and enhancing tumor, respectively, on the BraTS validation dataset, and 0.85, 0.77, and 0.67 on the test dataset. The results were similar on the independent patient dataset from our hospital, with Dice scores of 0.92, 0.90, and 0.81 for the whole tumor, tumor core, and enhancing tumor, respectively. Conclusion: The results of this study show the potential of a patch-based 3D U-Net for accurate intra-tumoral segmentation. From the experiments, it is observed that the weighted patch-based segmentation approach gives performance comparable to the pixel-based approach when there is a thin boundary between tumor subparts.
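A hedged sketch of weighted patch extraction from tumor border regions: a border band (dilation minus erosion of the label mask) is given higher sampling weight, so more training patches come from the ambiguous boundary. The band width and weight below are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy import ndimage

def border_weighted_centers(mask: np.ndarray, n: int, border_weight=5.0):
    """mask: boolean tumor mask; returns n patch-center coordinates sampled
    with extra probability mass on a band around the tumor boundary."""
    border = ndimage.binary_dilation(mask, iterations=3) ^ \
             ndimage.binary_erosion(mask, iterations=3)
    weights = np.ones(mask.shape) + border_weight * border
    probs = (weights / weights.sum()).ravel()
    idx = np.random.choice(mask.size, size=n, p=probs)
    return np.column_stack(np.unravel_index(idx, mask.shape))
```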
ABSTRACT
Skull-stripping is an essential pre-processing step in computational neuro-imaging that directly impacts subsequent analyses. Existing skull-stripping methods have primarily targeted non-pathologically-affected brains. Accordingly, they may perform suboptimally when applied to brain Magnetic Resonance Imaging (MRI) scans that have clearly discernible pathologies, such as brain tumors. Furthermore, existing methods focus on using only T1-weighted MRI scans, even though multi-parametric MRI (mpMRI) scans are routinely acquired for patients with suspected brain tumors. Here we present a performance evaluation of publicly available implementations of established 3D Deep Learning architectures for semantic segmentation (namely DeepMedic, 3D U-Net, FCN), with a particular focus on identifying a skull-stripping approach that performs well on brain tumor scans and also has a low computational footprint. We identified a retrospective dataset of 1,796 mpMRI brain tumor scans, with corresponding manually-inspected and verified gold-standard brain tissue segmentations, acquired during standard clinical practice under varying acquisition protocols at the Hospital of the University of Pennsylvania. Our quantitative evaluation identified DeepMedic as the best performing method (Dice = 97.9, Hausdorff95 = 2.68). We release this pre-trained model through the Cancer Imaging Phenomics Toolkit (CaPTk) platform.
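For reference, the two reported metrics can be computed from binary masks as follows; this is an illustrative NumPy/SciPy sketch assuming isotropic 1 mm voxels, not the study's evaluation code.

```python
import numpy as np
from scipy import ndimage

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """a, b: boolean masks; overlap as 2|A∩B| / (|A|+|B|)."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray) -> float:
    """95th-percentile symmetric surface distance between boolean masks."""
    surf_a = a ^ ndimage.binary_erosion(a)        # boundary voxels of a
    surf_b = b ^ ndimage.binary_erosion(b)
    d_to_a = ndimage.distance_transform_edt(~surf_a)
    d_to_b = ndimage.distance_transform_edt(~surf_b)
    dists = np.concatenate([d_to_a[surf_b], d_to_b[surf_a]])
    return float(np.percentile(dists, 95))
```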
ABSTRACT
Breast cancer is the most prevalent cancer among women across the globe. Automatic detection of breast cancer using Computer-Aided Diagnosis (CAD) systems suffers from false positives (FPs); thus, FP reduction is one of the challenging tasks in improving the performance of diagnosis systems. In the present work, a new FP-reduction technique is proposed for breast cancer diagnosis, based on an appropriate integration of preprocessing, self-organizing map (SOM) clustering, region of interest (ROI) extraction, and FP reduction. In preprocessing, contrast enhancement of mammograms is achieved using a local entropy maximization algorithm. The unsupervised SOM clusters an image into a number of segments to identify the cancerous region and extract tumor regions (i.e., ROIs). However, it also detects some FPs, which affect the efficiency of the algorithm. Therefore, to reduce the FPs, the output of the SOM is passed to the FP-reduction step, which classifies the extracted ROIs into normal and abnormal classes. FP reduction consists of feature mining from the ROIs using the proposed local sparse curvelet coefficients, followed by classification using an artificial neural network (ANN). The performance of the proposed algorithm has been validated on a local TMCH (Tata Memorial Cancer Hospital) dataset and the publicly available MIAS (Suckling et al., 1994) and DDSM (Heath et al., 2000) databases. The proposed technique reduces FPs from 0.85 to 0.02 FP/image for MIAS, 4.81 to 0.16 FP/image for DDSM, and 2.32 to 0.05 FP/image for TMCH, reflecting a substantial improvement in the classification of mammograms.
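A hedged sketch of the SOM clustering stage using the third-party minisom package (not the authors' implementation): pixel intensities are clustered into a small number of segments, from which candidate tumor regions can then be extracted.

```python
import numpy as np
from minisom import MiniSom   # third-party package, assumed installed

def som_segment(image: np.ndarray, n_clusters: int = 4) -> np.ndarray:
    """Cluster gray levels with a 1 x n_clusters SOM and return a label
    map of the same shape as the input image."""
    pixels = image.reshape(-1, 1).astype(float)
    som = MiniSom(1, n_clusters, input_len=1, sigma=0.5, learning_rate=0.5)
    som.train_random(pixels, 1000)
    labels = np.array([som.winner(p)[1] for p in pixels])
    return labels.reshape(image.shape)
```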
Subjects
Breast Neoplasms/diagnostic imaging; Breast/diagnostic imaging; Diagnosis, Computer-Assisted/methods; Mammography; Algorithms; Biopsy; Cluster Analysis; Databases, Factual; False Positive Reactions; Female; Humans; Image Processing, Computer-Assisted/methods; Pattern Recognition, Automated; Radiographic Image Interpretation, Computer-Assisted/methods; Sensitivity and Specificity; Software
ABSTRACT
BACKGROUND AND OBJECTIVE: Early detection is key to reducing the breast cancer mortality rate. Detecting mammographic abnormalities as subtle signs of breast cancer is essential for proper diagnosis and treatment. The aim of this preliminary study is to develop algorithms that detect suspicious lesions and characterize them to reduce diagnostic errors regarding false positives and false negatives. METHODS: The proposed hybrid mechanism detects suspicious lesions automatically using connected component labeling and an adaptive fuzzy region-growing algorithm. A novel neighboring-pixel selection algorithm reduces the computational complexity of the seeded region-growing algorithm used to finalize lesion contours. These lesions are characterized using radiomic features and then classified as benign masses or malignant tumors using k-NN and SVM classifiers. The two datasets of 460 full-field digital mammograms (FFDM) utilized in this clinical study consist of 210 images with malignant tumors, 30 with benign masses, and 220 normal breast images, validated by radiologists expert in mammography. RESULTS: Qualitative assessment of the segmentation results by the expert radiologists shows 91.67% sensitivity and 58.33% specificity. The effects of seven geometric and 48 textural features on classification accuracy, false positives per image (FPsI), sensitivity, and specificity are studied separately and together. Together, the features achieved sensitivities of 84.44% and 85.56% and specificities of 91.11% and 91.67%, with FPsI of 0.54 and 0.55, using the k-NN and SVM classifiers, respectively, on the local dataset. CONCLUSIONS: The overall breast cancer detection performance of the proposed scheme, after combining geometric and textural features with both classifiers, is improved in terms of sensitivity, specificity, and FPsI.
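For context, a minimal seeded region-growing routine is shown below; the paper builds on this baseline with fuzzy adaptation and its novel neighboring-pixel selection step, which are not reproduced here.

```python
import numpy as np
from collections import deque

def region_grow(img: np.ndarray, seed: tuple, tol: float) -> np.ndarray:
    """Grow a region from `seed`, accepting 4-connected pixels whose
    intensity is within `tol` of the seed intensity."""
    grown = np.zeros(img.shape, dtype=bool)
    ref = float(img[seed])
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if grown[r, c]:
            continue
        if abs(float(img[r, c]) - ref) <= tol:
            grown[r, c] = True
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < img.shape[0] and 0 <= nc < img.shape[1] \
                        and not grown[nr, nc]:
                    queue.append((nr, nc))
    return grown
```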
Subjects
Breast Neoplasms/diagnostic imaging; Breast/diagnostic imaging; Mammography/methods; Radiographic Image Enhancement/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Algorithms; Breast/pathology; Case-Control Studies; Diagnosis, Computer-Assisted; False Positive Reactions; Female; Fuzzy Logic; Humans; Reproducibility of Results; Sensitivity and Specificity; Software
ABSTRACT
Most medical images suffer from inadequate contrast and brightness, which leads to blurred or weak edges (low contrast) between adjacent tissues, resulting in poor segmentation and errors in tissue classification. Thus, contrast enhancement to improve visual information is extremely important in the development of computational approaches for obtaining quantitative measurements from medical images. In this research, a contrast enhancement algorithm that applies a gray-level S-curve transformation locally to medical images obtained from various modalities is investigated. The S-curve transformation is an extended gray-level transformation that produces a curve similar to a sigmoid function through a pixel-to-pixel mapping. This curve increases the difference between the minimum and maximum gray values and the image gradient locally, thereby strengthening edges between adjacent tissues. The performance of the proposed technique is determined by measuring several parameters, namely, edge content (improvement in image gradient), enhancement measure (degree of contrast enhancement), absolute mean brightness error (luminance distortion caused by the enhancement), and feature similarity index measure (preservation of the original image features). Based on medical image datasets comprising 1937 images from various modalities, such as ultrasound, mammograms, fluorescent images, fundus, X-ray radiographs, and MR images, it is found that the local gray-level S-curve transformation outperforms existing techniques in terms of improved contrast and brightness, resulting in clear and strong edges between adjacent tissues. The proposed technique can be used as a preprocessing tool for effective segmentation and classification of tissue structures in medical images.
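The S-curve is essentially the sigmoid remapping O = 1 / (1 + exp(gain * (cutoff - I))); scikit-image ships this as `exposure.adjust_sigmoid`, applied below in local tiles to mimic the paper's local application. The tiling scheme and parameters here are illustrative assumptions, not the paper's exact method.

```python
import numpy as np
from skimage import exposure

def local_s_curve(img: np.ndarray, win: int = 64, cutoff=0.5, gain=10):
    """img: float image scaled to [0, 1]; apply a sigmoid (S-curve)
    gray-level stretch independently in each win x win tile."""
    out = img.copy()
    for r in range(0, img.shape[0], win):
        for c in range(0, img.shape[1], win):
            tile = img[r:r + win, c:c + win]
            out[r:r + win, c:c + win] = exposure.adjust_sigmoid(
                tile, cutoff=cutoff, gain=gain)
    return out
```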
Subjects
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Humans; Reproducibility of Results; Sensitivity and Specificity
ABSTRACT
Knee osteoarthritis (OA) progression can be monitored by measuring changes in subchondral bone structure, such as area and shape, from MR images as an imaging biomarker. However, measurement of these minute changes is highly dependent on accurate segmentation of bone tissue from MR images, which is a challenging task due to the complex tissue structure and inadequate image contrast/brightness. In this paper, a fully automated method for segmenting subchondral bone from knee MR images is proposed. The contrast of knee MR images is first enhanced using a gray-level S-curve transformation, followed by automatic seed-point detection using a three-dimensional multi-edge overlapping technique. Subsequently, bone regions are initially extracted using distance-regularized level-set evolution, followed by identification and correction of leakages along the bone boundary regions using a boundary displacement technique. The performance of the developed technique is evaluated against ground truths by measuring sensitivity, specificity, Dice similarity coefficient (DSC), average surface distance (AvgD), and root mean square surface distance (RMSD). For femur segmentation in 8 datasets, an average sensitivity of 91.14%, specificity of 99.12%, and DSC of 90.28% are achieved, with 95% confidence intervals (CI) of 89.74-92.54%, 98.93-99.31%, and 88.68-91.88%, respectively. For the tibia, an average sensitivity of 90.69%, specificity of 99.65%, and DSC of 91.35% are achieved, with 95% CI of 88.59-92.79%, 99.50-99.80%, and 88.68-91.88%, respectively. AvgD and RMSD values are 1.43 ± 0.23 mm and 2.10 ± 0.35 mm for the femur and 0.95 ± 0.28 mm and 1.30 ± 0.42 mm for the tibia, demonstrating acceptable error between the proposed method and the ground truths. In conclusion, the results obtained in this work demonstrate consistent and robust performance, making the proposed method applicable to large-scale and longitudinal knee OA studies in clinical settings.
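For reference, the two surface-distance metrics reported above (AvgD, RMSD) can be computed from binary masks with distance transforms, as in this illustrative sketch assuming isotropic voxel spacing (not the study's evaluation code).

```python
import numpy as np
from scipy import ndimage

def _surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels: mask minus its erosion."""
    return mask ^ ndimage.binary_erosion(mask)

def avgd_rmsd(seg: np.ndarray, gt: np.ndarray):
    """seg, gt: boolean masks. Returns (AvgD, RMSD) over the symmetric
    surface distances, in voxel units (scale by spacing for mm)."""
    d_to_gt = ndimage.distance_transform_edt(~_surface(gt))
    d_to_seg = ndimage.distance_transform_edt(~_surface(seg))
    d = np.concatenate([d_to_gt[_surface(seg)], d_to_seg[_surface(gt)]])
    return d.mean(), np.sqrt((d ** 2).mean())
```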
Subjects
Imaging, Three-Dimensional/methods; Knee/diagnostic imaging; Magnetic Resonance Imaging/methods; Algorithms; Humans; Osteoarthritis, Knee/diagnostic imaging
ABSTRACT
Neurocysticercosis (NCC) is a parasitic infection caused by the tapeworm Taenia solium in its larval stage, which affects the central nervous system of the human body (a definitive host). It results in the formation of multiple lesions in the brain at different locations during its various stages. During diagnosis of symptomatic patients, these lesions can be better visualized using a feature-based fusion of Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). This paper presents a novel approach to Multimodality Medical Image Fusion (MMIF) used for analysis of the lesions for diagnostic purposes and post-treatment review of NCC. The MMIF presented here combines CT and MRI data of the same patient into a new slice using a Nonsubsampled Rotated Complex Wavelet Transform (NSRCxWT). The forward NSRCxWT is applied to both source modalities separately to extract complementary and edge-related features. These features are then combined into a composite spectral plane using average and maximum-value-selection fusion rules, and the inverse transformation of this composite plane yields a new, visually improved, and enriched fused image. The proposed technique is tested on pilot-study datasets of patients infected with NCC. The quality of the fused images is measured using objective and subjective evaluation metrics: objective evaluation estimates fusion parameters such as entropy, fusion factor, image quality index, edge quality measure, and mean structural similarity index measure, while subjective visual quality is assessed with the help of three expert radiologists. The experimental results on 43 image datasets from 17 patients are promising and superior compared with state-of-the-art wavelet-based fusion algorithms. The proposed algorithm can be part of a computer-aided detection and diagnosis (CADD) system that assists radiologists in clinical practice.
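As a stand-in for the NSRCxWT (which has no common library implementation), the fusion rules themselves can be illustrated with a standard single-level DWT from PyWavelets: approximation coefficients are averaged, and detail coefficients are fused by maximum absolute value. The wavelet choice is an assumption; the paper's transform differs.

```python
import numpy as np
import pywt

def fuse_ct_mri(ct: np.ndarray, mri: np.ndarray, wavelet="db2"):
    """ct, mri: co-registered 2D slices of the same shape."""
    cA1, d1 = pywt.dwt2(ct, wavelet)
    cA2, d2 = pywt.dwt2(mri, wavelet)
    cA = (cA1 + cA2) / 2.0                                   # average rule
    details = tuple(np.where(np.abs(a) >= np.abs(b), a, b)   # max rule
                    for a, b in zip(d1, d2))
    return pywt.idwt2((cA, details), wavelet)                # fused slice
```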