ABSTRACT
A brain tumor is one of the deadliest diseases today. A tumor is a cluster of abnormal cells grouped in the inner portion of the human brain. It affects the brain by squeezing and damaging healthy tissue, and it raises intracranial pressure; as a result, tumor cells grow rapidly, which may lead to death. It is therefore desirable to detect brain tumors at an early stage, which may increase the patient's survival rate. The major objective of this research work is to present a new technique for tumor detection. The proposed architecture accurately segments and classifies benign and malignant tumor cases. Different spatial-domain methods are applied to enhance and accurately segment the input images. AlexNet and GoogLeNet are then utilized for classification, each yielding a score vector after its softmax layer. Both score vectors are fused and supplied, along with the softmax layer, to multiple classifiers. The proposed model is evaluated on leading Medical Image Computing and Computer-Assisted Intervention (MICCAI) challenge datasets: Multimodal Brain Tumor Segmentation (BRATS) 2013, 2014, 2015, and 2016, and Ischemic Stroke Lesion Segmentation (ISLES) 2018.
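The score-vector fusion step described above can be sketched as follows. This is a minimal illustration assuming each network's softmax layer yields one probability vector per input slice; the variable names and two-class shapes are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fuse_scores(scores_a, scores_b):
    """Serially fuse two softmax score vectors into one feature vector."""
    return np.concatenate([scores_a, scores_b], axis=-1)

# Hypothetical softmax outputs for a benign/malignant case.
alexnet_scores = np.array([0.8, 0.2])
googlenet_scores = np.array([0.6, 0.4])

# The fused vector is what would be handed to the downstream classifiers.
fused = fuse_scores(alexnet_scores, googlenet_scores)
print(fused.shape)  # (4,)
```

Concatenation (rather than averaging) preserves each network's full score distribution, letting the downstream classifier weight the two models independently.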
Subjects
Brain Neoplasms/diagnostic imaging, Brain Neoplasms/pathology, Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Algorithms, Humans, Magnetic Resonance Imaging/methods, Tomography, X-Ray Computed/methods

ABSTRACT
Brain tumor detection is a challenging task because of variations in tumor shape, size, and appearance. In this manuscript, a deep learning model is deployed to predict input slices as tumor (unhealthy) or non-tumor (healthy). A high-pass-filtered image is used to highlight the field-inhomogeneity effects of the MR slices and is fused with the input slices. A median filter is then applied to the fused slices, yielding smoothed slices with highlighted edges. After that, based on slice intensity, a 4-connected seeded region growing algorithm is applied, in which an optimal threshold clusters similar pixels in the input slices. The segmented slices are then supplied to the proposed fine-tuned two-layer stacked sparse autoencoder (SSAE) model. The hyperparameters of the model are selected after extensive experiments: 200 hidden units are utilized at the first layer and 400 at the second. Testing is performed on the softmax layer to predict whether images contain tumors. The suggested model is trained and tested on the BRATS datasets: 2012 (challenge and synthetic), 2013, 2013 Leaderboard, 2014, and 2015. The presented model is evaluated with a number of performance metrics, which demonstrate improved performance.
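The 4-connected seeded region growing step can be sketched as below. This is a minimal illustration, assuming a single seed pixel and a fixed intensity threshold; the toy image, seed location, and threshold are illustrative, not values from the paper.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, thresh):
    """Grow a region from `seed`, adding 4-connected pixels whose intensity
    differs from the seed intensity by at most `thresh`."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = int(img[seed])
    q = deque([seed])
    mask[seed] = True
    while q:
        r, c = q.popleft()
        # Visit the four edge-adjacent neighbors (4-connectivity).
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and abs(int(img[nr, nc]) - seed_val) <= thresh:
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask

img = np.array([[10, 11, 50],
                [12, 13, 52],
                [90, 91, 51]], dtype=np.uint8)
mask = region_grow(img, (0, 0), thresh=5)
print(mask.sum())  # 4 pixels grouped with the seed
```

Only the top-left 2×2 block lies within 5 intensity levels of the seed, so the grown region has four pixels.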
Subjects
Algorithms, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/pathology, Deep Learning, Diagnosis, Computer-Assisted/methods, Humans, Image Processing, Computer-Assisted/methods

ABSTRACT
Space-occupying lesions (SOL) of the brain detected on MRI comprise benign and malignant tumors. Several brain tumor segmentation algorithms have been developed, but there is a need for a clinically acquired dataset usable with real-time images. This research was done to facilitate reporting of MRI performed for brain tumor detection by incorporating computer-aided detection. Another objective was to make reporting unbiased by decreasing inter-observer errors and to expedite daily reporting sessions, decreasing radiologists' workload. This is an experimental study. The proposed dataset contains clinically acquired multiplanar, multi-sequential MRI (MPMSI) slices, which are used as input to the segmentation model without any preprocessing. The proposed AJBDS-2023 consists of 10,667 images of real patients' imaging data with a size of 320×320×3. Acquired images have T1W, T2W, FLAIR, T1W contrast, ADC, and DWI sequences. Pixel-based ground-truth annotations of the tumor core and edema were made manually for 6,334 slices under the supervision of a radiologist. Quantitative assessment of AJBDS-2023 images was done with a novel U-network on 4,333 MRI slices. The diagnostic performance of the U-Net trained on AJBDS-2023 was 77.4 precision, 82.3 DSC, 87.4 specificity, 93.8 sensitivity, and a 90.4 confidence interval. This experimental analysis shows that the proposed AJBDS-2023 dataset, with its unpreprocessed images, is more challenging, provides a more realistic platform for the evaluation and analysis of newly developed algorithms in this domain, and helps radiologists report brain MRI more realistically.
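The pixel-wise metrics used to evaluate the U-Net above (Dice similarity coefficient, sensitivity, specificity, precision) can be computed as in this sketch. The two toy masks are illustrative, not dataset slices.

```python
import numpy as np

def seg_metrics(pred, gt):
    """Standard binary segmentation metrics from predicted and ground-truth masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)    # tumor pixels correctly found
    tn = np.sum(~pred & ~gt)  # background correctly rejected
    fp = np.sum(pred & ~gt)   # background wrongly marked as tumor
    fn = np.sum(~pred & gt)   # tumor pixels missed
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
    }

pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
gt   = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 0]])
m = seg_metrics(pred, gt)
print(round(float(m["dice"]), 2))  # 0.8
```

With two true positives, one false positive, and no misses, Dice is 2·2/(2·2+1+0) = 0.8 while sensitivity is a perfect 1.0, illustrating why both are reported.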
ABSTRACT
Worldwide, more than 1.5 million deaths occur due to liver cancer every year. The use of computed tomography (CT) for early detection of liver cancer could save millions of lives per year. There is also an urgent need for a computerized method to interpret, detect, and analyze CT scans reliably, easily, and correctly. However, precise segmentation of minute tumors is a difficult task because of variations in the shape, intensity, size, and low contrast of the tumor and the adjacent liver tissue. To address these concerns, a model comprising three parts is proposed: synthetic image generation, localization, and segmentation. An optimized generative adversarial network (GAN) is utilized to generate synthetic images. The generated images are localized by an improved localization model, in which deep features are extracted from a pre-trained ResNet-50 model and fed into a YOLOv3 detector. The proposed modified model localizes and classifies minute liver tumors with 0.99 mean average precision (mAP). The third part is segmentation, in which a pre-trained InceptionResNetV2 is employed as the base network of DeepLabv3 and subsequently trained with fine-tuned parameters on annotated ground-truth masks. The experiments show that the proposed approach achieved greater than 95% accuracy in the testing phase and, in comparison to recently published work in this domain, localizes and segments the liver and minute liver tumors more accurately.
ABSTRACT
A brain tumor is an abnormal growth of cells that can be fatal if not properly diagnosed. Early detection of a brain tumor is critical for clinical practice and survival rates. Brain tumors arise in a variety of shapes, sizes, and appearances, with variable treatment options. Manual detection of tumors is difficult, time-consuming, and error-prone. There is therefore a significant need for computerized diagnostic systems for accurate brain tumor detection. In this research, deep features are extracted from the InceptionV3 model; the score vector acquired from its softmax layer is supplied to a quantum variational classifier (QVC) to discriminate between glioma, meningioma, no tumor, and pituitary tumor. The classified tumor images are then passed to the proposed Seg-network, where the actual infected region is segmented to analyze the tumor severity level. The reported research is evaluated on three benchmark datasets: Kaggle, BRATS 2020, and locally collected images. The model achieved detection scores greater than 90%, proving the proposed model's effectiveness.
Subjects
Brain Neoplasms, Glioma, Brain, Brain Neoplasms/diagnosis, Glioma/diagnosis, Humans, Learning, Machine Learning, Magnetic Resonance Imaging/methods

ABSTRACT
According to the World Health Organization, detection of viral RNA from sputum has a comparatively poor positive rate in the initial/early stages of COVID-19. On computed tomography (CT), the disease manifests a morphological structure different from that of healthy images. COVID-19 diagnosis at an early stage can aid the timely cure of patients, lowering the mortality rate. In this research, a three-phase model is proposed for COVID-19 detection. In Phase I, noise is removed from CT images using a denoising convolutional neural network (DnCNN). In Phase II, the actual lesion region is segmented from the enhanced CT images using DeepLabv3 with ResNet-18. In Phase III, the segmented images are passed to a stacked sparse autoencoder (SSAE) deep learning model comprising two sparse autoencoders (SAEs) with selected hidden layers. The designed SSAE model is based on both SAE and softmax layers for COVID-19 classification. The proposed method is evaluated on actual patient data from Pakistan Ordnance Factories and on other public benchmark datasets acquired with different scanners/mediums. The proposed method achieved a global segmentation accuracy of 0.96 and a classification accuracy of 0.97.
Subjects
COVID-19, COVID-19 Testing, Humans, Neural Networks, Computer, SARS-CoV-2, Tomography, X-Ray Computed

ABSTRACT
COVID-19 spread quickly to over 10 million people globally; the overall number of infected patients worldwide is now estimated at around 133,381,413, and the infection rate rises daily. The disease has also had a devastating effect on the world economy and public health. Early-stage detection is essential to reduce the mortality rate, and artificial intelligence plays a vital role in COVID-19 detection at an initial stage using chest radiographs. The proposed method comprises two phases. In Phase I, deep features (DFs) are derived from the last fully connected layers of pre-trained models such as AlexNet and MobileNet; these feature vectors are fused serially, the best features are selected through PCA-based feature selection, and the result is passed to SVM and KNN classifiers. In Phase II, a quantum transfer learning model is utilized, in which a pre-trained ResNet-18 model is applied for DF collection and these features are supplied as input to a 4-qubit quantum circuit for model training with tuned hyperparameters. The proposed technique is evaluated on two publicly available X-ray imaging datasets and achieved an accuracy of 99.0% over three classes: coronavirus-positive images, normal images, and pneumonia radiographs. The experimental findings show that the proposed approach outperforms other recently published work.
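The Phase-I pipeline above (serial fusion of two deep-feature vectors, then PCA before classification) can be sketched as follows. The feature matrices here are synthetic stand-ins for the AlexNet and MobileNet fully-connected-layer outputs, and the component count is an illustrative assumption; PCA is implemented directly via SVD to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)
alexnet_feats   = rng.normal(size=(100, 64))  # 100 images, 64-d features
mobilenet_feats = rng.normal(size=(100, 32))  # same images, 32-d features

# Serial (horizontal) fusion: concatenate along the feature axis.
fused = np.hstack([alexnet_feats, mobilenet_feats])  # (100, 96)

# PCA via SVD: center, then project onto the top-k principal directions.
k = 10
centered = fused - fused.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ vt[:k].T  # (100, 10), ready for an SVM/KNN classifier
print(reduced.shape)
```

Serial fusion grows the feature dimension additively, which is exactly why a selection/reduction step such as PCA is applied before the classifiers.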
ABSTRACT
COVID-19 is caused by an infection of the respiratory system by an RNA virus that can infect animal and human species; in the severe stage, it causes pneumonia in human beings. In this research, hand-crafted and deep microscopic features are used to classify lung infection. The proposed work consists of two phases. In Phase I, the infected lung region is segmented using a proposed U-Net deep learning model; hand-crafted features such as histogram of oriented gradients (HOG), noise-to-harmonics ratio (NHr), and segmentation-based fractal texture analysis (SFTA) are extracted from the segmented image, and optimum features are selected from each feature vector using entropy. In Phase II, local binary patterns (LBP), speeded-up robust features (SURF), and deep features are extracted from the input CT images using pre-trained networks such as InceptionV3 and ResNet101, and optimum features are again selected based on entropy. Finally, the selected features are fused in two ways: (i) the hand-crafted features (HOG, NHr, SFTA, LBP, SURF) are horizontally concatenated, and (ii) the hand-crafted features are combined with the deep features. The fused feature vectors are passed to ensemble models (boosted tree, bagged tree, and RUSBoosted tree) for COVID-19 classification in the same two ways: (i) classification using fused hand-crafted features and (ii) classification using the fusion of hand-crafted and deep features. The proposed methodology is evaluated on three benchmark datasets; experiments on two of them show that the fusion of hand-crafted and deep microscopic features provides better results than fused hand-crafted features alone.
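One plausible reading of the entropy-based selection step above is to score each feature column by its Shannon entropy (estimated from a histogram) and keep the top-k most informative columns; that interpretation, along with the bin count and k, is an illustrative assumption rather than the paper's exact rule.

```python
import numpy as np

def feature_entropy(col, bins=16):
    """Shannon entropy (bits) of one feature column, from a histogram estimate."""
    hist, _ = np.histogram(col, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return -np.sum(p * np.log2(p))

def select_by_entropy(X, k):
    """Keep the k columns of X with the highest entropy, preserving order."""
    scores = np.array([feature_entropy(X[:, j]) for j in range(X.shape[1])])
    top = np.sort(np.argsort(scores)[::-1][:k])
    return X[:, top]

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 20))      # e.g., a HOG/SFTA feature matrix
X_sel = select_by_entropy(X, k=8)  # reduced matrix fed to the ensembles
print(X_sel.shape)  # (50, 8)
```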
Subjects
COVID-19, Humans, Intelligence, Neural Networks, Computer, SARS-CoV-2

ABSTRACT
BACKGROUND AND OBJECTIVE: Brain tumors occur because of anomalous growth of cells and are one of the major causes of death in adults around the globe. Millions of deaths could be prevented through early detection of brain tumors. Early brain tumor detection using Magnetic Resonance Imaging (MRI) may increase the patient's survival rate, since MRI shows the tumor more clearly and aids further treatment. This work aims to detect tumors at an early phase. METHODS: In this manuscript, a Wiener filter with different wavelet bands is used to de-noise and enhance the input slices. Subsets of tumor pixels are found with Potential Field (PF) clustering. Furthermore, a global threshold and different mathematical morphology operations are used to isolate the tumor region in Fluid-Attenuated Inversion Recovery (FLAIR) and T2 MRI. For accurate classification, Local Binary Pattern (LBP) and Gabor Wavelet Transform (GWT) features are fused. RESULTS: The proposed approach is evaluated in terms of peak signal-to-noise ratio (PSNR), mean squared error (MSE), and structural similarity index (SSIM), yielding 76.38, 0.037, and 0.98 on T2 and 76.2, 0.039, and 0.98 on FLAIR, respectively. The segmentation results are evaluated at the pixel level and with individual and fused features. At the pixel level, the proposed approach is compared with ground-truth slices and validated in terms of foreground (FG) pixels, background (BG) pixels, error region (ER), and pixel quality (Q). The approach achieved 0.93 FG and 0.98 BG precision with 0.010 ER on a local dataset. On the multimodal brain tumor segmentation challenge dataset BRATS 2013, 0.93 FG and 0.99 BG precision with 0.005 ER were acquired; similarly, on BRATS 2015, 0.97 FG and 0.98 BG precision with 0.015 ER were obtained. In terms of quality, the average Q value and deviation are 0.88 and 0.017.
At the fused-feature level, specificity, sensitivity, accuracy, area under the curve (AUC), and dice similarity coefficient (DSC) are 1.00, 0.92, 0.93, 0.96, and 0.96 on BRATS 2013; 0.90, 1.00, 0.97, 0.98, and 0.98 on BRATS 2015; and 0.90, 0.91, 0.90, 0.77, and 0.95 on the local dataset, respectively. CONCLUSION: The presented approach outperformed existing approaches.
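The LBP half of the fused feature set above can be sketched with the classic 3×3-neighborhood, 8-bit formulation: each interior pixel gets a code whose bits record which neighbors are at least as bright as the center. The bit ordering and border handling here are illustrative choices, and the toy image is not from the datasets.

```python
import numpy as np

def lbp(img):
    """Return 8-bit LBP codes for the interior pixels of a 2-D image."""
    img = img.astype(int)
    c = img[1:-1, 1:-1]  # center pixels (borders are skipped)
    # Neighbors clockwise from top-left; each contributes one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dr, dc) in enumerate(offsets):
        # Shifted view of the image aligned with the center pixels.
        nb = img[1 + dr:img.shape[0] - 1 + dr, 1 + dc:img.shape[1] - 1 + dc]
        codes |= (nb >= c).astype(int) << bit
    return codes

img = np.array([[5, 9, 1],
                [3, 6, 7],
                [2, 8, 4]])
print(lbp(img))  # [[42]]
```

For the single interior pixel (value 6), only the neighbors 9, 7, and 8 are ≥ 6, setting bits 1, 3, and 5, so the code is 2 + 8 + 32 = 42. A histogram of such codes over an image region is the LBP feature vector that gets fused with the GWT features.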