ABSTRACT
Brain tumor analysis is essential to the timely diagnosis and effective treatment of patients. Tumor analysis is challenging because of tumor morphology factors such as size, location, texture, and heteromorphic appearance in medical images. In this regard, a novel two-phase deep learning-based framework is proposed to detect and categorize brain tumors in magnetic resonance images (MRIs). In the first phase, a novel deep-boosted feature space and ensemble classifiers (DBFS-EC) scheme is proposed to effectively distinguish tumor MRI images from those of healthy individuals. The deep-boosted feature space is obtained from customized, well-performing deep convolutional neural networks (CNNs) and subsequently fed into an ensemble of machine learning (ML) classifiers. In the second phase, a new hybrid feature-fusion-based brain tumor classification approach is proposed, comprising both static and dynamic features with an ML classifier to categorize different tumor types. The dynamic features are extracted from the proposed brain region-edge net (BRAIN-RENet) CNN, which is able to learn the heteromorphic and inconsistent behavior of various tumors, while the static features are extracted using a histogram of oriented gradients (HOG) feature descriptor. The effectiveness of the proposed two-phase brain tumor analysis framework is validated on two standard benchmark datasets, collected from Kaggle and Figshare, which contain different types of tumors, including glioma, meningioma, pituitary, and normal images. Experimental results suggest that the proposed DBFS-EC detection scheme outperforms standard models, achieving accuracy (99.56%), precision (0.9991), recall (0.9899), F1-score (0.9945), MCC (0.9892), and AUC-PR (0.9990). The classification scheme, based on the fusion of the feature spaces of the proposed BRAIN-RENet and HOG, significantly outperforms state-of-the-art methods in terms of recall (0.9913), precision (0.9906), accuracy (99.20%), and F1-score (0.9909) on the CE-MRI dataset.
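A minimal sketch of the two-phase pipeline described above may help make the structure concrete. It assumes a pretrained ResNet-18 as a stand-in for the paper's customized CNNs and an illustrative three-member soft-voting ensemble; the actual DBFS-EC members, BRAIN-RENet architecture, and HOG settings are not reproduced here.

    # Hedged sketch, not the authors' implementation: a pretrained backbone and an
    # assumed soft-voting ensemble stand in for the customized CNNs and DBFS-EC.
    import numpy as np
    import torch
    import torchvision.models as models
    from skimage.feature import hog
    from sklearn.ensemble import VotingClassifier
    from sklearn.svm import SVC
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier

    # Phase 1: deep feature space -> ensemble classifier (tumor vs. healthy).
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()            # expose 512-d deep features
    backbone.eval()

    def deep_features(batch):                    # batch: (N, 3, 224, 224) tensor
        with torch.no_grad():
            return backbone(batch).numpy()

    ensemble = VotingClassifier(
        estimators=[("svm", SVC(probability=True)),
                    ("lr", LogisticRegression(max_iter=1000)),
                    ("knn", KNeighborsClassifier())],
        voting="soft")
    # ensemble.fit(deep_features(train_batch), train_labels)

    # Phase 2: fuse dynamic (CNN) and static (HOG) features for tumor-type classification.
    def fused_features(image_gray, image_batch):
        static = hog(image_gray, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
        dynamic = deep_features(image_batch)[0]
        return np.concatenate([dynamic, static])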
Subjects
Brain Neoplasms , Glioma , Meningeal Neoplasms , Brain Neoplasms/diagnostic imaging , Humans , Machine Learning , Magnetic Resonance Imaging/methods
ABSTRACT
Brain tumor classification is essential for clinical diagnosis and treatment planning. Deep learning models have shown great promise in this task, but they are often challenged by the complex and diverse nature of brain tumors. To address this challenge, we propose a novel deep residual and region-based convolutional neural network (CNN) architecture, called Res-BRNet, for brain tumor classification using magnetic resonance imaging (MRI) scans. Res-BRNet employs a systematic combination of regional and boundary-based operations within modified spatial and residual blocks. The spatial blocks extract homogeneity, heterogeneity, and boundary-related features of brain tumors, while the residual blocks significantly capture local and global texture variations. We evaluated the performance of Res-BRNet on a challenging dataset collected from Kaggle repositories, Br35H, and figshare, containing various tumor categories, including meningioma, glioma, pituitary, and healthy images. Res-BRNet outperformed standard CNN models, achieving excellent accuracy (98.22%), sensitivity (0.9811), F1-score (0.9841), and precision (0.9822). Our results suggest that Res-BRNet is a promising tool for brain tumor classification, with the potential to improve the accuracy and efficiency of clinical diagnosis and treatment planning.
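A hedged sketch of the block structure implied above is given below. The channel widths, block counts, and ordering are assumptions for illustration, since the published Res-BRNet configuration is not reproduced in this abstract.

    # Illustrative spatial and residual blocks in PyTorch (assumed sizes, not the
    # authors' exact Res-BRNet).
    import torch
    import torch.nn as nn

    class SpatialBlock(nn.Module):
        # Region/boundary-oriented block: convolution followed by max-pooling,
        # emphasizing edges and homogeneous regions.
        def __init__(self, c_in, c_out):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, padding=1),
                nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))
            self.pool = nn.MaxPool2d(2)
        def forward(self, x):
            return self.pool(self.conv(x))

    class ResidualBlock(nn.Module):
        # Standard residual block, capturing local and global texture variations.
        def __init__(self, c):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(c, c, 3, padding=1), nn.BatchNorm2d(c), nn.ReLU(inplace=True),
                nn.Conv2d(c, c, 3, padding=1), nn.BatchNorm2d(c))
        def forward(self, x):
            return torch.relu(x + self.body(x))

    class ResBRNetSketch(nn.Module):
        def __init__(self, num_classes=4):       # meningioma, glioma, pituitary, healthy
            super().__init__()
            self.features = nn.Sequential(
                SpatialBlock(1, 32), SpatialBlock(32, 64),
                ResidualBlock(64), ResidualBlock(64),
                nn.AdaptiveAvgPool2d(1))
            self.classifier = nn.Linear(64, num_classes)
        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))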
ABSTRACT
Early detection of abnormalities in chest X-rays is essential for COVID-19 diagnosis and analysis. It can support control of pandemic spread through contact tracing, as well as effective treatment of COVID-19 infection. In this work, we present a deep hybrid learning-based framework for the detection of COVID-19 using chest X-ray images. We developed a novel, computationally light, and optimized deep convolutional neural network (CNN)-based framework for chest X-ray analysis. We propose a new COV-Net to learn COVID-specific patterns from chest X-rays and employ several machine learning classifiers to enhance the discrimination power of the presented framework. Systematic exploitation of max-pooling operations helps the proposed COV-Net learn the boundaries of infected patterns in chest X-rays and supports multi-class classification of two diverse infection types along with normal images. The proposed framework has been evaluated on a publicly available benchmark dataset containing X-ray images of coronavirus-infected, pneumonia-infected, and normal patients. The empirical performance of the proposed method, with the developed COV-Net and a support vector machine, is compared with state-of-the-art deep models; the comparison shows that the proposed deep hybrid learning-based method achieves 96.69% recall, 96.72% precision, 96.73% accuracy, and 96.71% F-score. For multi-class and binary classification of COVID-19 and pneumonia, the proposed model achieved 99.21% recall, 99.22% precision, 99.21% F-score, and 99.23% accuracy.
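The sketch below illustrates the hybrid idea of pairing a max-pooling CNN feature extractor with a classical SVM; the layer sizes and the name SmallCovNet are assumptions for illustration, not the actual COV-Net.

    # Assumed small max-pooling CNN feeding deep features to an SVM, mirroring the
    # hybrid deep-feature + ML-classifier design described above.
    import torch
    import torch.nn as nn
    from sklearn.svm import SVC

    class SmallCovNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.AdaptiveMaxPool2d(1))         # max-pooling emphasizes infection boundaries
        def forward(self, x):
            return self.features(x).flatten(1)   # 64-d deep feature vector per X-ray

    net = SmallCovNet().eval()

    def extract(batch):                          # batch: (N, 1, H, W) chest X-ray tensor
        with torch.no_grad():
            return net(batch).numpy()

    # Deep features feed an SVM for COVID-19 / pneumonia / normal classification.
    svm = SVC(kernel="rbf")
    # svm.fit(extract(train_images), train_labels); svm.predict(extract(test_images))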