Multimodal Brain Tumor Classification Using Convolutional Tumnet Architecture.
Usha, M Padma; Kannan, G; Ramamoorthy, M.
Affiliation
  • Usha MP; Department of Electronics and Communication Engineering, B.S. Abdur Rahman Crescent Institute of Science and Technology, Vandalur, Chennai, India.
  • Kannan G; Department of Electronics and Communication Engineering, B.S. Abdur Rahman Crescent Institute of Science and Technology, Vandalur, Chennai, India.
  • Ramamoorthy M; Department of Artificial Intelligence and Machine Learning, Saveetha School of Engineering, SIMATS, Chennai, 600124, India.
Behav Neurol ; 2024: 4678554, 2024.
Article in En | MEDLINE | ID: mdl-38882177
ABSTRACT
Brain malignancy is among the most common and aggressive tumors, and grade IV disease is associated with a short life expectancy. The medical plan is therefore a crucial step toward improving a patient's well-being, and it comprises both diagnosis and therapy. Brain tumors are commonly imaged with magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT). In this paper, multimodal fused imaging with classification and segmentation of brain tumors is proposed using a deep learning method. MRI and CT brain tumor images of the same slices (308 slices of meningioma and sarcoma) are combined using three different pixel-level fusion methods. The presence or absence of a tumor is classified with the proposed Tumnet technique, and the tumor area is located accordingly. Tumnet is also applied to single-modal MRI/CT data (561 image slices) for classification. The proposed Tumnet is modeled with 5 convolutional layers, 3 pooling layers with the ReLU activation function, and 3 fully connected layers. For the average fusion method applied to MRI-CT images, the first-order statistical fusion metrics are SSIM (tissue) of 83%, SSIM (bone) of 84%, accuracy of 90%, sensitivity of 96%, and specificity of 95%; the second-order statistical fusion metrics are a standard deviation of the fused images of 79% and an entropy of 0.99. The entropy value confirms the presence of additional features in the fused image. The proposed Tumnet yields a sensitivity of 96%, an accuracy of 98%, a specificity of 99%, and normalized values of mean 0.75, standard deviation 0.4, variance 0.16, and entropy 0.90.
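As a rough illustration of the pipeline the abstract describes, the sketch below pairs a pixel-level average fusion of co-registered MRI and CT slices with a Tumnet-like CNN of 5 convolutional layers, 3 pooling layers with ReLU, and 3 fully connected layers, written in PyTorch. The channel widths, kernel sizes, 224x224 input resolution, and the names `average_fusion` and `TumnetLike` are assumptions for illustration only; the record does not give the authors' actual layer configuration or code.

```python
# Minimal sketch (not the authors' implementation): pixel-level average fusion
# of co-registered MRI/CT slices, then a Tumnet-like CNN classifier with
# 5 conv layers, 3 max-pooling layers, ReLU activations, and 3 FC layers.
# Channel widths, kernel sizes, and the 224x224 input size are assumptions.
import torch
import torch.nn as nn


def average_fusion(mri: torch.Tensor, ct: torch.Tensor) -> torch.Tensor:
    """Pixel-level average fusion of two co-registered single-channel slices."""
    return (mri + ct) / 2.0


class TumnetLike(nn.Module):
    """CNN with 5 conv layers, 3 pooling layers (ReLU), and 3 FC layers."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),    # conv 1
            nn.MaxPool2d(2),                                          # pool 1
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),   # conv 2
            nn.MaxPool2d(2),                                          # pool 2
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),   # conv 3
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),   # conv 4
            nn.MaxPool2d(2),                                          # pool 3
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),  # conv 5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 28 * 28, 256), nn.ReLU(),  # FC 1 (224 / 2^3 = 28)
            nn.Linear(256, 64), nn.ReLU(),             # FC 2
            nn.Linear(64, num_classes),                # FC 3: tumor / no tumor
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


if __name__ == "__main__":
    mri = torch.rand(1, 1, 224, 224)  # dummy co-registered MRI slice
    ct = torch.rand(1, 1, 224, 224)   # dummy co-registered CT slice
    fused = average_fusion(mri, ct)   # pixel-level average fusion
    logits = TumnetLike()(fused)      # presence/absence scores
    print(logits.shape)               # torch.Size([1, 2])
```

Average fusion is only one of the three pixel-level fusion methods mentioned; other standard choices (e.g., maximum or wavelet-based fusion) would slot into the same place in the pipeline.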
Subjects

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Brain Neoplasms / Magnetic Resonance Imaging / Tomography, X-Ray Computed / Multimodal Imaging / Deep Learning / Meningioma Limit: Humans Language: En Journal: Behav Neurol Journal subject: Behavioral Sciences / Neurology Publication year: 2024 Document type: Article Affiliation country: India