Results 1 - 20 of 150
1.
Curr Med Imaging ; 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39297463

ABSTRACT

BACKGROUND: Brain tumours represent a diagnostic challenge, especially in imaging, where normal and pathologic tissues must be differentiated precisely. Up-to-date machine learning techniques can greatly improve the accuracy of brain tumor identification from MRI data. OBJECTIVE: This paper evaluates the efficiency of a federated learning method that joins two classifiers, convolutional neural networks (CNNs) and random forests (RFs), with dual U-Net segmentation. This procedure benefits the image identification task on preprocessed, already-categorized MRI scans. METHODS: In addition to using a variety of datasets, federated learning was utilized to train the CNN-RF model while taking data privacy into account. The MRI images were processed with Median, Gaussian, and Wiener filters to reduce noise and make feature extraction easy and efficient. The segmentation stage used a dual U-Net layout, and performance was assessed with precision, recall, F1-score, and accuracy. RESULTS: The model achieved excellent classification performance on local datasets, with macro, micro, and weighted averages ranging from 91.28% to 95.52%. Through federated averaging, the collective model reached 97% accuracy, while models at individual clients reached up to 99%. Federated averaging converts the insights of individual models into a consistent global model while keeping all personal data private. CONCLUSION: The combined structure of the federated learning framework, CNN-RF hybrid model, and dual U-Net segmentation is a robust and privacy-preserving approach for identifying brain tumors in MRI images. The results show that the technique is promising for improving brain tumor categorization and provides a pathway to practical use in clinical settings.
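As background for the federated averaging step this abstract describes, a minimal FedAvg sketch, size-weighted averaging of per-client model parameters, can look like the following (NumPy only; the layer name and client sizes are illustrative, not taken from the paper):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg).

    client_weights: list of dicts mapping layer name -> np.ndarray
    client_sizes:   number of training samples held by each client
    """
    total = sum(client_sizes)
    global_weights = {}
    for name in client_weights[0]:
        global_weights[name] = sum(
            (n / total) * w[name] for w, n in zip(client_weights, client_sizes)
        )
    return global_weights

# Two toy clients sharing a single 2x2 layer.
c1 = {"conv1": np.ones((2, 2))}
c2 = {"conv1": np.zeros((2, 2))}
merged = federated_average([c1, c2], client_sizes=[75, 25])
print(merged["conv1"][0, 0])  # 0.75: client 1 holds 75% of the data
```

Only parameters leave each client; the raw MRI data never does, which is the privacy property the abstract emphasizes.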

2.
Heliyon ; 10(18): e37804, 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39323802

ABSTRACT

Brain tumors are one of the leading causes of cancer death; early screening is the best strategy to diagnose and treat them. Magnetic Resonance Imaging (MRI) is extensively utilized for brain tumor diagnosis; nevertheless, achieving improved accuracy and performance, a critical challenge for most previously reported automated medical diagnostics, remains a complex problem. The study introduces the Dual Vision Transformer-DSUNET model, which incorporates feature fusion techniques to differentiate precisely and efficiently between brain tumors and other brain regions by leveraging multi-modal MRI data. The impetus for this study arises from the need to automate the segmentation of brain tumors in medical imaging, a critical component of diagnosis and therapy planning. The BRATS 2020 dataset, an extensively utilized benchmark for brain tumor segmentation, is employed to tackle this issue. This dataset encompasses multi-modal MRI images, including T1-weighted, T2-weighted, T1Gd (contrast-enhanced), and FLAIR modalities. The proposed model incorporates the dual vision idea to comprehensively capture the heterogeneous properties of brain tumors across several imaging modalities. Moreover, feature fusion techniques are implemented to improve the amalgamation of data originating from several modalities, enhancing the accuracy and dependability of tumor segmentation. The model's performance is evaluated using the Dice Coefficient, a prevalent metric for quantifying segmentation accuracy. The experimental results exhibit remarkable performance, with Dice Coefficient values of 91.47% for enhancing tumors, 92.38% for core tumors, and 90.88% for edema; the cumulative Dice score across all classes is 91.29%. In addition, the model attains an accuracy of roughly 99.93%, which underscores its robustness and efficacy in segmenting brain tumors. These findings demonstrate the validity of the suggested architecture and its improved detection accuracy for several brain diseases.
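The Dice Coefficient these abstracts report can be computed per class directly from two label maps; a minimal sketch on toy 4x4 masks (not BRATS data):

```python
import numpy as np

def dice_coefficient(pred, target, cls):
    """Dice overlap for one class label in two segmentation masks."""
    p = (pred == cls)
    t = (target == cls)
    intersection = np.logical_and(p, t).sum()
    denom = p.sum() + t.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 4x4 "slices": label 1 = tumour, 0 = background.
pred = np.array([[0, 1, 1, 0]] * 4)
target = np.array([[0, 1, 0, 0]] * 4)
print(dice_coefficient(pred, target, cls=1))  # 2*4 / (8+4) = 0.666...
```

Per-region scores (enhancing tumor, core, edema) come from running the same computation once per class label.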

3.
Chin Clin Oncol ; 13(Suppl 1): AB093, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39295411

ABSTRACT

BACKGROUND: Central nervous system (CNS) tumours, especially gliomas, are a complex disease, and many challenges are encountered in their treatment. Artificial intelligence (AI) has made a colossal impact in many walks of life at a low cost; however, this avenue still needs to be explored in healthcare settings, demanding investment of resources in this area. We aim to develop machine learning (ML) algorithms to facilitate accurate diagnosis and precise mapping of brain tumours. METHODS: We queried data from 2019 to 2022, and brain magnetic resonance imaging (MRI) scans of glioma patients were extracted. Images with both T1-contrast and T2-fluid-attenuated inversion recovery (T2-FLAIR) volume sequences available were included. MRI images were annotated by a team supervised by a neuroradiologist. The extracted MRIs were then fed to a preprocessing pipeline that performed brain extraction using SynthStrip, and subsequently to deep learning-based semantic segmentation pipelines built on a UNet architecture with a convolutional neural network (CNN) backbone. The algorithm was then tested to assess its efficacy in the pixel-wise diagnosis of tumours. RESULTS: In total, 69 samples of low-grade glioma (LGG) were used, of which 62 were used for fine-tuning a pre-trained model trained on the brain tumor segmentation (BraTS) 2020 dataset and 7 were used for testing. The Dice coefficient was used as the evaluation metric; the average Dice coefficient on the 7 test samples was 0.94. CONCLUSIONS: With the advent of technology, AI continues to modify our lifestyles, and it is critical to adopt this technology in healthcare to improve the provision of patient care. We present our preliminary data on the use of ML algorithms in the diagnosis and segmentation of glioma. The promising result, with comparable accuracy, highlights the importance of early adoption of this nascent technology.


Subjects
Deep Learning; Glioma; Magnetic Resonance Imaging; Humans; Glioma/classification; Glioma/pathology; Magnetic Resonance Imaging/methods; Male; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/classification; Brain Neoplasms/pathology; Female
4.
J Appl Clin Med Phys ; : e14527, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39284311

ABSTRACT

BACKGROUND AND OBJECTIVE: Accurate segmentation of brain tumors from multimodal magnetic resonance imaging (MRI) holds significant importance in clinical diagnosis and surgical intervention. However, current deep learning methods cope with multimodal MRI through an early fusion strategy that implicitly assumes the modal relationships are linear, which tends to ignore the complementary information between modalities and negatively impacts model performance. Meanwhile, long-range relationships between voxels cannot be captured because of the localized character of the convolution procedure. METHOD: To address this problem, we propose a multimodal segmentation network based on a late fusion strategy that employs multiple encoders and a decoder for the segmentation of brain tumors. Each encoder is specialized for processing a distinct modality. Notably, our framework includes a feature fusion module based on a 3D discrete wavelet transform aimed at extracting complementary features among the encoders. Additionally, a 3D global context-aware module was introduced to capture the long-range dependencies of tumor voxels at a high feature level. The decoder combines fused and global features to enhance the network's segmentation performance. RESULT: Our proposed model is evaluated on the publicly available BraTS2018 and BraTS2021 datasets. The experimental results show competitiveness with state-of-the-art methods. CONCLUSION: The results demonstrate that our approach applies a novel concept of multimodal fusion within deep neural networks and delivers more accurate and promising brain tumor segmentation, with the potential to assist physicians in diagnosis.
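To illustrate the kind of transform behind the 3D discrete-wavelet-transform fusion module, here is a one-level 3D Haar decomposition in NumPy. This is a generic sketch, not the paper's module; the subband naming and helper functions are ours:

```python
import numpy as np

def haar_split(x, axis):
    """One-level orthonormal Haar split along one axis: (low, high) halves."""
    x = np.moveaxis(x, axis, 0)
    low = (x[0::2] + x[1::2]) / np.sqrt(2)
    high = (x[0::2] - x[1::2]) / np.sqrt(2)
    return np.moveaxis(low, 0, axis), np.moveaxis(high, 0, axis)

def dwt3d(volume):
    """One-level 3D Haar DWT -> 8 subbands keyed 'LLL' ... 'HHH'."""
    bands = {"": volume}
    for axis in range(3):
        bands = {key + tag: sub
                 for key, v in bands.items()
                 for tag, sub in zip("LH", haar_split(v, axis))}
    return bands

vol = np.random.rand(8, 8, 8)
bands = dwt3d(vol)
print(len(bands), bands["LLL"].shape)  # 8 subbands, each (4, 4, 4)
```

The low-pass subband carries coarse structure and the high-pass subbands carry edges, which is what makes wavelet subbands a natural space for combining complementary features from per-modality encoders.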

5.
Heliyon ; 10(16): e36119, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39224363

ABSTRACT

Currently, surgery remains the primary treatment for craniocerebral tumors. Before performing surgery, doctors need to determine the surgical plan according to the shape, location, and size of the tumor; however, the varied conditions of different patients make the tumor segmentation task challenging. To improve the accuracy of determining tumor shape and to realize edge segmentation, a U-shaped network combining a residual pyramid module and a dual feature attention module is proposed. The residual pyramid module can enlarge the receptive field, extract multiscale features, and fuse original information, which solves the problem in feature pyramid pooling that local information is not related to remote information. In addition, the dual feature attention module is proposed to replace the skip connection in the original U-Net network, enrich the features, and improve the model's attention to information-rich spatial and channel features for more accurate brain tumor segmentation. To evaluate the performance of the proposed model, experiments were conducted on the public datasets Kaggle_3M and BraTS2021. Because the proposed model is applicable to two-dimensional image segmentation, cross-sectional slices of the FLAIR modality in the BraTS2021 dataset had to be obtained in advance. Results show that the model accuracy, Jaccard similarity coefficient, Dice similarity coefficient, and false negative rate (FNR) on the Kaggle_3M dataset were 0.9395, 0.8812, 0.8958, and 0.007, respectively, and on the BraTS2021 dataset were 0.9375, 0.9072, 0.8981, and 0.0087, respectively. Compared with existing algorithms, all indicators of the proposed algorithm are improved, but the proposed model still has certain limitations and has not been applied in actual clinical trials. For specific datasets, the generalization ability of the model needs to be further improved; future work will address these limitations.
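The four metrics reported above all derive from the same confusion counts, and Jaccard and Dice are mutually recoverable via J = D / (2 - D); a small sketch with toy binary masks:

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Accuracy, Jaccard, Dice and false-negative rate for binary masks."""
    tp = np.sum(pred & target)
    tn = np.sum(~pred & ~target)
    fp = np.sum(pred & ~target)
    fn = np.sum(~pred & target)
    acc = (tp + tn) / (tp + tn + fp + fn)
    jaccard = tp / (tp + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    fnr = fn / (fn + tp)
    return acc, jaccard, dice, fnr

pred = np.array([1, 1, 0, 0], dtype=bool)
target = np.array([1, 0, 1, 0], dtype=bool)
acc, jac, dice, fnr = segmentation_metrics(pred, target)
# Dice and Jaccard are linked: jaccard == dice / (2 - dice)
print(acc, jac, dice, fnr)
```

Note that accuracy is dominated by the background class in brain MRI, which is why overlap measures such as Dice and Jaccard are reported alongside it.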

6.
Med Image Anal ; 97: 103301, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39146701

ABSTRACT

The burgeoning field of brain health research increasingly leverages artificial intelligence (AI) to analyze and interpret neuroimaging data. Medical foundation models have shown promise of superior performance with better sample efficiency. This work introduces a novel approach towards creating 3-dimensional (3D) medical foundation models for multimodal neuroimage segmentation through self-supervised training. Our approach involves a two-stage pretraining procedure using vision transformers. The first stage encodes anatomical structures in generally healthy brains from a large-scale unlabeled dataset of multimodal brain magnetic resonance imaging (MRI) scans from 41,400 participants; this stage of pretraining focuses on identifying key features such as the shapes and sizes of different brain structures. The second pretraining stage identifies disease-specific attributes, such as the geometric shapes of tumors and lesions and their spatial placement within the brain. This dual-phase methodology significantly reduces the extensive data requirements usually necessary for AI model training in neuroimage segmentation, with the flexibility to adapt to various imaging modalities. We rigorously evaluate our model, BrainSegFounder, using the Brain Tumor Segmentation (BraTS) challenge and Anatomical Tracings of Lesions After Stroke v2.0 (ATLAS v2.0) datasets. BrainSegFounder demonstrates a significant performance gain, surpassing the previous winning solutions that used fully supervised learning. Our findings underscore the impact of scaling up both model complexity and the volume of unlabeled training data derived from generally healthy brains; both factors enhance the accuracy and predictive capability of the model in neuroimage segmentation tasks. Our pretrained models and code are at https://github.com/lab-smile/BrainSegFounder.
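A core ingredient of this kind of self-supervised pretraining is hiding sub-volumes and training the network to reconstruct them. A generic sketch of random patch masking on a toy 3D volume (the parameter names and masking ratio are our assumptions, not the paper's):

```python
import numpy as np

def mask_patches(volume, patch=4, ratio=0.6, seed=0):
    """Randomly hide cubic patches; returns masked volume and visibility mask."""
    rng = np.random.default_rng(seed)
    cells = tuple(s // patch for s in volume.shape)
    keep = rng.random(cells) >= ratio                     # True = patch visible
    mask = np.kron(keep.astype(np.uint8),
                   np.ones((patch,) * 3, dtype=np.uint8)).astype(bool)
    return np.where(mask, volume, 0.0), mask

vol = np.random.rand(16, 16, 16)
masked, mask = mask_patches(vol)
# The pretraining target is to reconstruct vol at the hidden (~60%) positions.
```

The same masking machinery works regardless of imaging modality, which is what gives this style of pretraining its flexibility.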


Subjects
Imaging, Three-Dimensional; Magnetic Resonance Imaging; Neuroimaging; Humans; Magnetic Resonance Imaging/methods; Imaging, Three-Dimensional/methods; Neuroimaging/methods; Brain Neoplasms/diagnostic imaging; Artificial Intelligence; Brain/diagnostic imaging; Algorithms
7.
J Neurosci Methods ; 410: 110247, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39128599

ABSTRACT

The prevalence of brain tumor disorders is currently a global issue. In general, radiography, which produces a large number of images, is an efficient method for diagnosing these life-threatening disorders. The biggest issue in this area is that examining all the images takes a radiologist a long time and is physically strenuous. As a result, research into developing machine learning systems to assist radiologists in diagnosis continues to rise daily. Convolutional neural networks (CNNs), one type of deep learning approach, have been pivotal in achieving state-of-the-art results in several medical imaging applications, including the identification of brain tumors. CNN hyperparameters are typically set manually for segmentation and classification, which can take a while and increases the chance of using suboptimal hyperparameters for both tasks. Bayesian optimization is a useful method for finding the optimal hyperparameters of a deep CNN. The CNN, however, can be considered a "black box" model because its complexity makes the information it stores difficult to comprehend. This problem can be addressed with Explainable Artificial Intelligence (XAI) tools, which provide doctors with a realistic explanation of the CNN's assessments. Implementation of deep learning-based systems in real-time diagnosis is still rare. One of the causes could be that these methods do not quantify the uncertainty in their predictions, which could undermine trust in AI-based diagnosis. To be used in real-time medical diagnosis, CNN-based models must be realistic and appealing, and their uncertainty needs to be evaluated. A novel three-phase strategy is therefore proposed for segmenting and classifying brain tumors. Segmentation of brain tumors using the DeepLabv3+ model is first performed, with hyperparameters tuned by Bayesian optimization. For classification, features from the state-of-the-art deep learning models Darknet-53 and MobileNetV2 are extracted and fed to an SVM, whose hyperparameters are also optimized with a Bayesian approach. The second step is to identify, using XAI algorithms, which portions of the images the CNN uses for feature extraction. Finally, the uncertainty of the Bayesian-optimized classifier is quantified using confusion entropy. Based on this Bayesian-optimized deep learning framework, the experimental findings demonstrate that the proposed method outperforms earlier techniques, achieving a 97% classification accuracy and a 0.98 global accuracy.
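The confusion-entropy idea used above for uncertainty quantification can be sketched as the mean Shannon entropy of the row-normalised confusion matrix. The paper's exact definition may differ; this is one common formulation, shown for intuition only:

```python
import numpy as np

def confusion_entropy(cm):
    """Mean Shannon entropy (bits) of the row-normalised confusion matrix.

    0 means each true class maps to a single predicted label (certain);
    larger values mean predictions are spread across labels (uncertain).
    """
    cm = np.asarray(cm, dtype=float)
    rows = np.clip(cm / cm.sum(axis=1, keepdims=True), 1e-12, 1.0)
    return float((-(rows * np.log2(rows)).sum(axis=1)).mean())

perfect = [[50, 0], [0, 50]]   # every sample classified consistently
fuzzy = [[25, 25], [25, 25]]   # predictions spread evenly across labels
print(confusion_entropy(perfect), confusion_entropy(fuzzy))  # ~0.0 and 1.0
```

A low confusion entropy supports the claim that the classifier's decisions are stable enough for real-time diagnostic use.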


Subjects
Bayes Theorem; Brain Neoplasms; Deep Learning; Magnetic Resonance Imaging; Humans; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/classification; Magnetic Resonance Imaging/methods; Magnetic Resonance Imaging/standards; Neural Networks, Computer; Neuroimaging/methods; Neuroimaging/standards
8.
Med Image Underst Anal ; 14122: 48-63, 2024.
Article in English | MEDLINE | ID: mdl-39156493

ABSTRACT

Acquiring properly annotated data is expensive in the medical field as it requires experts, time-consuming protocols, and rigorous validation. Active learning attempts to minimize the need for large annotated samples by actively sampling the most informative examples for annotation. These examples contribute significantly to improving the performance of supervised machine learning models, and thus, active learning can play an essential role in selecting the most appropriate information in deep learning-based diagnosis, clinical assessments, and treatment planning. Although some existing works have proposed methods for sampling the best examples for annotation in medical image analysis, they are not task-agnostic and do not use multimodal auxiliary information in the sampler, which has the potential to increase robustness. Therefore, in this work, we propose a Multimodal Variational Adversarial Active Learning (M-VAAL) method that uses auxiliary information from additional modalities to enhance the active sampling. We applied our method to two datasets: i) brain tumor segmentation and multi-label classification using the BraTS2018 dataset, and ii) chest X-ray image classification using the COVID-QU-Ex dataset. Our results show a promising direction toward data-efficient learning under limited annotations.
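M-VAAL itself uses an adversarial, multimodal sampler, but the underlying acquisition idea, annotate the samples the model is least sure about, can be shown with the simpler entropy-based baseline (a sketch for intuition, not the paper's method; the probabilities are toy values):

```python
import numpy as np

def select_for_annotation(probs, k):
    """Pick the k most uncertain samples by predictive entropy."""
    probs = np.clip(probs, 1e-12, 1.0)
    entropy = -np.sum(probs * np.log(probs), axis=1)
    return np.argsort(entropy)[::-1][:k]

probs = np.array([
    [0.98, 0.02],   # confident
    [0.55, 0.45],   # uncertain
    [0.90, 0.10],
    [0.50, 0.50],   # most uncertain
])
print(select_for_annotation(probs, k=2))  # picks samples 3 and 1
```

Replacing this heuristic scorer with a learned, task-agnostic discriminator that also sees auxiliary modalities is the step the paper contributes.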

9.
Neural Netw ; 180: 106657, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39186839

ABSTRACT

Different brain tumor magnetic resonance imaging (MRI) modalities provide diverse tumor-specific information. Previous works have enhanced brain tumor segmentation performance by integrating multiple MRI modalities. However, multi-modal MRI data are often unavailable in clinical practice. An incomplete modality leads to missing tumor-specific information, which degrades the performance of existing models. Various strategies have been proposed to transfer knowledge from a full modality network (teacher) to an incomplete modality one (student) to address this issue. However, they neglect the fact that brain tumor segmentation is a structural prediction problem that requires voxel semantic relations. In this paper, we propose a Reconstruct Incomplete Relation Network (RIRN) that transfers voxel semantic relational knowledge from the teacher to the student. Specifically, we propose two types of voxel relations to incorporate structural knowledge: Class-relative relations (CRR) and Class-agnostic relations (CAR). The CRR groups voxels into different tumor regions and constructs a relation between them. The CAR builds a global relation between all voxel features, complementing the local inter-region relation. Moreover, we use adversarial learning to align the holistic structural prediction between the teacher and the student. Extensive experimentation on both the BraTS 2018 and BraTS 2020 datasets establishes that our method outperforms all state-of-the-art approaches.
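The class-relative relation idea, building a relation between tumor regions from their voxel features, can be sketched with per-class prototype (mean) features and cosine similarity. This is our simplification of the paper's CRR for illustration; the feature values and class count are toy assumptions:

```python
import numpy as np

def class_relation_matrix(features, labels, num_classes):
    """Cosine similarity between per-class mean voxel features."""
    protos = np.stack([features[labels == c].mean(axis=0)
                       for c in range(num_classes)])
    norm = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    return norm @ norm.T

# 6 voxels with 2-dim features, 2 tumour sub-regions.
feats = np.array([[1.0, 0.0], [0.9, 0.1], [1.1, -0.1],
                  [0.0, 1.0], [0.1, 0.9], [-0.1, 1.1]])
labels = np.array([0, 0, 0, 1, 1, 1])
rel = class_relation_matrix(feats, labels, 2)
print(rel.round(3))  # diagonal = 1, off-diagonal near 0
```

Matching the student's relation matrix to the teacher's transfers structural knowledge even when the student sees fewer modalities.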

10.
Front Bioeng Biotechnol ; 12: 1392807, 2024.
Article in English | MEDLINE | ID: mdl-39104626

ABSTRACT

Radiologists encounter significant challenges when segmenting and determining brain tumors in patients because this information assists in treatment planning. The utilization of artificial intelligence (AI), especially deep learning (DL), has emerged as a useful tool in healthcare, aiding radiologists in their diagnostic processes. This empowers radiologists to understand the biology of tumors better and provide personalized care to patients with brain tumors. The segmentation of brain tumors using multi-modal magnetic resonance imaging (MRI) images has received considerable attention. In this survey, we first discuss multi-modal and available magnetic resonance imaging modalities and their properties. Subsequently, we discuss the most recent DL-based models for brain tumor segmentation using multi-modal MRI. We divide this section into three parts based on the architecture: the first is for models that use the backbone of convolutional neural networks (CNN), the second is for vision transformer-based models, and the third is for hybrid models that use both convolutional neural networks and transformer in the architecture. In addition, in-depth statistical analysis is performed of the recent publication, frequently used datasets, and evaluation metrics for segmentation tasks. Finally, open research challenges are identified and suggested promising future directions for brain tumor segmentation to improve diagnostic accuracy and treatment outcomes for patients with brain tumors. This aligns with public health goals to use health technologies for better healthcare delivery and population health management.

11.
Quant Imaging Med Surg ; 14(7): 4579-4604, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39022265

ABSTRACT

Background: The information between multimodal magnetic resonance imaging (MRI) sequences is complementary. Combining multiple modalities for brain tumor image segmentation can improve segmentation accuracy, which has great significance for disease diagnosis and treatment. However, different degrees of missing modality data often occur in clinical practice, which may lead to serious performance degradation or even failure of brain tumor segmentation methods that rely on full-modality sequences. To solve these problems, this study aimed to design a new deep learning network for incomplete multimodal brain tumor segmentation. Methods: We propose a novel cross-modal attention fusion-based deep neural network (CMAF-Net) for incomplete multimodal brain tumor segmentation, which is based on a three-dimensional (3D) U-Net architecture with an encoding and decoding structure, a 3D Swin block, and a cross-modal attention fusion (CMAF) block. A convolutional encoder is initially used to extract modality-specific features, and an effective 3D Swin block is constructed to model long-range dependencies and obtain richer information for brain tumor segmentation. Then, a cross-attention-based CMAF module is proposed that can deal with different missing-modality situations by fusing features between modalities to learn shared representations of the tumor regions. Finally, the fused latent representation is decoded to obtain the final segmentation result. Additionally, a channel attention module (CAM) and a spatial attention module (SAM) are incorporated into the network to further improve robustness: the CAM helps focus on important feature channels, and the SAM learns the importance of different spatial regions.
Results: Evaluation experiments on the widely-used BraTS 2018 and BraTS 2020 datasets demonstrated the effectiveness of the proposed CMAF-Net which achieved average Dice scores of 87.9%, 81.8%, and 64.3%, as well as Hausdorff distances of 4.21, 5.35, and 4.02 for whole tumor, tumor core, and enhancing tumor on the BraTS 2020 dataset, respectively, outperforming several state-of-the-art segmentation methods in missing modalities situations. Conclusions: The experimental results show that the proposed CMAF-Net can achieve accurate brain tumor segmentation in the case of missing modalities with promising application potential.
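The squeeze-and-gate idea behind a channel attention module like the CAM above can be sketched in a few lines; the paper's module additionally contains learned layers, so this is intuition only (shapes and the sigmoid gate are our assumptions):

```python
import numpy as np

def channel_attention(feature_map):
    """Channel attention sketch: squeeze by global average pool, gate by sigmoid.

    feature_map: (C, D, H, W) array; returns per-channel reweighted features.
    """
    squeeze = feature_map.mean(axis=(1, 2, 3))       # (C,) channel descriptors
    gate = 1.0 / (1.0 + np.exp(-squeeze))            # sigmoid weight per channel
    return feature_map * gate[:, None, None, None]

x = np.random.rand(8, 4, 4, 4)                       # 8 channels, 4^3 voxels
y = channel_attention(x)
print(y.shape)  # (8, 4, 4, 4)
```

A spatial attention module applies the same squeeze-and-gate pattern across spatial positions instead of channels.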

12.
Sci Rep ; 14(1): 17615, 2024 07 30.
Article in English | MEDLINE | ID: mdl-39080324

ABSTRACT

The process of brain tumour segmentation entails locating the tumour precisely in images. Magnetic Resonance Imaging (MRI) is typically used by doctors to find brain tumours or tissue abnormalities. Using region-based Convolutional Neural Network (R-CNN) masks, Grad-CAM, and transfer learning, this work offers an effective method for the detection of brain tumours, with the goal of helping doctors make highly accurate diagnoses. A transfer learning-based model is proposed that offers high sensitivity and accuracy scores for brain tumour detection when segmentation is done using R-CNN masks. The Inception V3, VGG-16, and ResNet-50 architectures were utilised to train the model, which was developed on the Brain MRI Images for Brain Tumour Detection dataset. Performance is evaluated and reported in terms of recall, specificity, sensitivity, accuracy, precision, and F1 score. A thorough analysis compares the proposed model operating with the three distinct architectures, and the VGG-16-based variant was also compared against related works. Achieving high sensitivity and accuracy percentages was the main goal; using this approach, an accuracy and sensitivity of around 99% were obtained, much greater than in current efforts.


Subjects
Brain Neoplasms; Magnetic Resonance Imaging; Neural Networks, Computer; Humans; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/pathology; Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted/methods; Image Interpretation, Computer-Assisted/methods; Algorithms; Sensitivity and Specificity
13.
Diagnostics (Basel) ; 14(12)2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38928672

ABSTRACT

Currently, brain tumors are extremely harmful and prevalent. Deep learning technologies, including CNNs, UNet, and Transformers, have been applied to brain tumor segmentation for many years and have achieved some success. However, traditional CNNs and UNet capture insufficient global information, while Transformers cannot provide sufficient local information. Fusing the global information from Transformers with the local information of convolutions is an important step toward improving brain tumor segmentation. We propose the Group Normalization Shuffle and Enhanced Channel Self-Attention Network (GETNet), a network combining the pure Transformer structure with convolution operations based on VT-UNet, which considers both global and local information. The network includes the proposed group normalization shuffle block (GNS) and enhanced channel self-attention block (ECSA). The GNS is used after the VT Encoder Block and before the downsampling block to improve information extraction. An ECSA module is added to the bottleneck layer to effectively utilize the detailed features of the bottom layer. We also conducted experiments on the BraTS2021 dataset to demonstrate the performance of our network. The Dice scores for the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) regions were 91.77, 86.03, and 83.64, respectively. The results show that the proposed model achieves state-of-the-art performance compared with more than eleven benchmarks.
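The "shuffle" in a group-normalization-shuffle block refers to mixing channels across normalisation groups so information flows between them; a ShuffleNet-style channel shuffle in NumPy (a generic sketch of the operation, not the exact GNS block):

```python
import numpy as np

def channel_shuffle(x, groups):
    """ShuffleNet-style channel shuffle: interleave channels across groups."""
    c = x.shape[0]
    return (x.reshape(groups, c // groups, *x.shape[1:])
             .swapaxes(0, 1)
             .reshape(c, *x.shape[1:]))

# 6 channels labelled 0..5 so the reordering is visible, spatial size 2.
x = np.arange(6)[:, None] * np.ones((6, 2))
print(channel_shuffle(x, groups=2)[:, 0])  # channels reordered 0, 3, 1, 4, 2, 5
```

Without the shuffle, group normalization (and grouped convolutions) would keep each group's channels isolated from the others.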

14.
Comput Biol Med ; 178: 108799, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38925087

ABSTRACT

Magnetic resonance imaging (MRI) has become an essential, frontline technique in the detection of brain tumors. However, segmenting tumors manually from scans is laborious and time-consuming, which has led to an increasing trend towards fully automated methods for precise tumor segmentation in MRI scans. Accurate tumor segmentation is crucial for improved diagnosis, treatment, and prognosis. This study benchmarks and evaluates four widely used CNN-based methods for brain tumor segmentation: CaPTk, 2DVNet, EnsembleUNets, and ResNet50. Using 1251 multimodal MRI scans from the BraTS2021 dataset, we compared the performance of these methods against a reference dataset of segmented images assisted by radiologists. This comparison was conducted both on the segmented images directly and on radiomic features extracted from them using pyRadiomics. Performance was assessed using the Dice Similarity Coefficient (DSC) and Hausdorff Distance (HD). EnsembleUNets excelled, achieving a DSC of 0.93 and an HD of 18, outperforming the other methods. Further comparative analysis of radiomic features confirmed EnsembleUNets as the most precise segmentation method: it recorded a Concordance Correlation Coefficient (CCC) of 0.79, a Total Deviation Index (TDI) of 1.14, and a Root Mean Square Error (RMSE) of 0.53, underscoring its superior performance. We also performed validation on an independent dataset of 611 samples (UPENN-GBM), which further supported the accuracy of EnsembleUNets, with a DSC of 0.85 and an HD of 17.5. These findings provide valuable insight into the efficacy of EnsembleUNets, supporting informed decisions for accurate brain tumor segmentation.
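The Hausdorff Distance (HD) used alongside DSC above is the largest of all nearest-neighbour distances between two boundaries; a brute-force sketch on toy 2D point sets (real evaluations run on voxel surfaces, often with the 95th-percentile variant):

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between point sets a (N, 2) and b (M, 2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise dists
    return max(d.min(axis=1).max(),   # farthest a-point from its nearest b
               d.min(axis=0).max())   # farthest b-point from its nearest a

a = np.array([[0, 0], [1, 0]])
b = np.array([[0, 0], [4, 0]])
print(hausdorff_distance(a, b))  # 3.0: point (4,0) is 3 away from nearest in a
```

Unlike Dice, HD is sensitive to a single outlying boundary point, which is why the two metrics are reported together.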


Subjects
Benchmarking; Brain Neoplasms; Magnetic Resonance Imaging; Humans; Brain Neoplasms/diagnostic imaging; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Image Interpretation, Computer-Assisted/methods; Multimodal Imaging/methods; Brain/diagnostic imaging; Databases, Factual
15.
Technol Health Care ; 32(S1): 183-195, 2024.
Article in English | MEDLINE | ID: mdl-38759048

ABSTRACT

BACKGROUND: Brain tumor is a highly destructive, aggressive, and fatal disease. The presence of brain tumors can disrupt the brain's ability to control body movements, consciousness, sensations, thoughts, speech, and memory. Brain tumors are often accompanied by symptoms like epilepsy, headaches, and sensory loss, leading to varying degrees of cognitive impairment in affected patients. OBJECTIVE: The study goal is to develop an effective method to detect and segment brain tumors with high accuracy. METHODS: This paper proposes a novel U-Net++ network using EfficientNet as the encoder to segment brain tumors based on MRI images. We adjust the original U-Net++ model by removing the dense skip connections between sub-networks to reduce computational complexity and improve model efficiency, while the connections between feature maps at the same resolution level are retained to bridge the semantic gap. RESULTS: The proposed segmentation model is trained and tested on Kaggle's LGG brain tumor dataset and obtains a satisfactory performance, with a Dice coefficient of 0.9180. CONCLUSION: This paper investigates brain tumor segmentation, using the U-Net++ network with EfficientNet as an encoder to segment brain tumors based on MRI images. We adjust the original U-Net++ model to simplify computation while maintaining rich semantic spatial features. Multiple loss functions are compared in this study and their effectiveness is discussed. The experimental results show that the model achieves a high-quality segmentation result, with a Dice coefficient of 0.9180.
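One loss commonly included in such comparisons is the soft Dice loss, the differentiable counterpart of the Dice coefficient reported above; a minimal sketch (the smoothing constant and toy values are illustrative, and which losses the paper actually compares is not listed in the abstract):

```python
import numpy as np

def dice_loss(pred_probs, target, eps=1.0):
    """Soft Dice loss with additive smoothing; lower is better, 0 is perfect."""
    inter = np.sum(pred_probs * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred_probs) + np.sum(target) + eps)

pred = np.array([0.9, 0.8, 0.1, 0.05])   # predicted tumour probabilities
target = np.array([1.0, 1.0, 0.0, 0.0])  # ground-truth mask
print(round(dice_loss(pred, target), 4))  # small positive loss
```

Because it optimises overlap directly, Dice loss is less affected by the heavy background/tumour class imbalance than plain cross-entropy.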


Subjects
Brain Neoplasms; Magnetic Resonance Imaging; Neural Networks, Computer; Humans; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/pathology; Magnetic Resonance Imaging/methods; Algorithms
16.
Comput Biol Med ; 176: 108547, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38728994

ABSTRACT

Self-supervised pre-training followed by fully supervised fine-tuning has received much attention as a paradigm for solving the data annotation problem in deep learning. Compared with traditional pre-training on large natural image datasets, medical self-supervised learning methods learn rich representations derived from the unlabeled data itself, thus avoiding the distribution shift between different image domains. However, current state-of-the-art medical pre-training methods are designed for specific downstream tasks, making them less flexible and difficult to apply to new tasks. In this paper, we propose grid mask image modeling, a flexible and general self-supervised method to pre-train medical vision transformers for 3D medical image segmentation. Our goal is to guide networks to learn the correlations between organs and tissues by reconstructing original images from partial observations; these relationships are consistent within the human body and invariant to disease type or imaging modality. To achieve this, we design a Siamese framework consisting of an online branch and a target branch. An adaptive and hierarchical masking strategy is employed in the online branch to (1) learn the boundaries of, and small contextual mutation regions within, images and (2) learn high-level semantic representations from the deeper layers of the multiscale encoder. In addition, the target branch provides representations for contrastive learning to further reduce representation redundancy. We evaluate our method through segmentation performance on two public datasets. The experimental results demonstrate that our method outperforms other self-supervised methods. Code is available at https://github.com/mobiletomb/Gmim.
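The "grid mask" of the title can be illustrated with a regular cell pattern over a 3D volume. Here is one simple checkerboard-style variant; the cell size and keep rule are our assumptions, and the paper's adaptive, hierarchical strategy is considerably more elaborate:

```python
import numpy as np

def grid_mask(shape, cell=4):
    """Checkerboard-style grid mask over a 3D volume (True = visible voxel)."""
    cells = [s // cell for s in shape]
    parity = np.indices(cells).sum(axis=0) % 2            # alternate grid cells
    return np.kron(parity.astype(np.uint8),
                   np.ones((cell,) * 3, dtype=np.uint8)).astype(bool)

mask = grid_mask((8, 8, 8))
volume = np.random.rand(8, 8, 8)
corrupted = np.where(mask, volume, 0.0)  # the reconstruction input
print(mask.mean())  # 0.5: half of the cells are hidden
```

The regular layout guarantees that every neighbourhood contains both visible and hidden cells, so the network must use organ-and-tissue context, rather than nearby copies of the same texture, to reconstruct the masked regions.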


Subjects
Imaging, Three-Dimensional; Humans; Imaging, Three-Dimensional/methods; Deep Learning; Algorithms; Supervised Machine Learning
17.
Comput Biol Med ; 175: 108412, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38691914

ABSTRACT

Brain tumor segmentation and classification play a crucial role in the diagnosis and treatment planning of brain tumors. Accurate and efficient methods for identifying tumor regions and classifying tumor types are essential for guiding medical interventions. This study comprehensively reviews brain tumor segmentation and classification techniques, covering approaches based on image processing, machine learning, and deep learning; it discusses their advantages and limitations and highlights recent advancements in the field. The impact of existing segmentation and classification techniques on automated brain tumor detection is also critically examined using various open-source datasets of Magnetic Resonance Images (MRI) of different modalities. The study further highlights the challenges posed by these techniques and by datasets spanning multiple MRI modalities, to help researchers develop innovative and robust solutions for automated brain tumor detection. The results contribute to the development of automated and robust tools for analyzing brain tumors, ultimately aiding medical professionals in making informed decisions and providing better patient care.


Subjects
Brain Neoplasms; Magnetic Resonance Imaging; Humans; Brain Neoplasms/diagnostic imaging; Magnetic Resonance Imaging/methods; Deep Learning; Image Interpretation, Computer-Assisted/methods; Brain/diagnostic imaging; Machine Learning; Image Processing, Computer-Assisted/methods; Neuroimaging/methods
18.
Med Biol Eng Comput ; 62(10): 3179-3191, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38789839

ABSTRACT

Accurate brain tumor segmentation with multi-modal MRI images is crucial, but missing modalities in clinical practice often reduce accuracy. The aim of this study is to propose a mixture-of-experts and semantic-guided network to tackle the issue of missing modalities in brain tumor segmentation. We introduce a transformer-based encoder with novel mixture-of-experts blocks. In each block, four modality experts perform modality-specific feature learning. Learnable modality embeddings are employed to alleviate the negative effect of missing modalities. We also introduce a decoder guided by semantic information, designed to pay higher attention to various tumor regions. Finally, we conduct extensive comparison experiments with other models as well as ablation experiments to validate the performance of the proposed model on the BraTS2018 dataset. The proposed model can accurately segment brain tumor sub-regions even with missing modalities. It achieves an average Dice score of 0.81 for the whole tumor, 0.66 for the tumor core, and 0.52 for the enhancing tumor across the 15 modality combinations, achieving top or near-top results in most cases while also exhibiting a lower computational cost. Our mixture-of-experts and semantic-guided network achieves accurate and reliable brain tumor segmentation results with missing modalities, indicating its significant potential for clinical applications. Our source code is available at https://github.com/MaggieLSY/MESG-Net.
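The missing-modality mechanism described above — one expert per modality, with a learnable embedding substituted when a modality is absent — can be illustrated with a toy stand-in. This is a hedged NumPy sketch, not the paper's transformer block: the experts are plain weight matrices, and all names (`experts`, `missing_embed`, `fuse`) and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
MODALITIES = ["T1", "T1ce", "T2", "FLAIR"]
DIM = 16

# One small "expert" (here just a weight matrix) per MRI modality,
# plus a learnable embedding that stands in for a missing modality.
experts = {m: rng.standard_normal((DIM, DIM)) * 0.1 for m in MODALITIES}
missing_embed = {m: rng.standard_normal(DIM) * 0.1 for m in MODALITIES}

def fuse(features):
    """Combine modality-specific expert outputs into one representation.

    `features` maps modality name -> feature vector, with None for a
    modality absent at inference time.
    """
    outputs = []
    for m in MODALITIES:
        x = features.get(m)
        if x is None:
            # Substitute the learnable embedding for the missing input.
            outputs.append(missing_embed[m])
        else:
            outputs.append(experts[m] @ x)
    return np.mean(outputs, axis=0)
```

The point of the substitution is that the network always sees a full set of four inputs, so the same fused representation pipeline runs for any of the 15 modality combinations.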


Subjects
Brain Neoplasms; Magnetic Resonance Imaging; Semantics; Humans; Brain Neoplasms/diagnostic imaging; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Algorithms; Image Processing, Computer-Assisted/methods; Image Interpretation, Computer-Assisted/methods
19.
Int J Neural Syst ; 34(8): 2450036, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38686911

ABSTRACT

Magnetic Resonance Imaging (MRI) is an important diagnostic technique for brain tumors because it generates images without tissue damage or skull artifacts, and MRI images are therefore widely used for brain tumor segmentation. This paper is the first to discuss the use of optimization spiking neural P systems to improve threshold segmentation of brain tumor images. Specifically, a threshold segmentation approach based on optimization numerical spiking neural P systems with adaptive multi-mutation operators (ONSNPSamos) is proposed to segment brain tumor images. An ONSNPSamo with a multi-mutation strategy is introduced to balance exploration and exploitation, and an approach combining the ONSNPSamo with connectivity algorithms is proposed to address the brain tumor segmentation problem. Experimental results on the CEC 2017 benchmarks (basic, shifted and rotated, hybrid, and composition function optimization problems) demonstrate that the ONSNPSamo outperforms or is comparable to 12 optimization algorithms. Furthermore, case studies on BraTS 2019 show that the approach combining the ONSNPSamo and connectivity algorithms segments brain tumor images more effectively than most of the algorithms considered.
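Threshold segmentation as an optimization problem can be sketched with a classic objective, Otsu's between-class variance, maximized here by a simple mutation-based search. This is a toy single-candidate stand-in for the paper's population-based spiking neural P system optimizer with adaptive multi-mutation operators; the objective choice and all function names are assumptions.

```python
import numpy as np

def between_class_variance(hist, t):
    """Otsu's between-class variance for threshold t over a 256-bin histogram."""
    p = hist / hist.sum()
    w0, w1 = p[:t].sum(), p[t:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    bins = np.arange(256)
    mu0 = (bins[:t] * p[:t]).sum() / w0
    mu1 = (bins[t:] * p[t:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2

def mutation_search(hist, iters=200, seed=0):
    """Maximize the objective by randomly mutating a candidate threshold.

    Accepts a mutated threshold only when it strictly improves the
    between-class variance.
    """
    rng = np.random.default_rng(seed)
    t = 128
    best = between_class_variance(hist, t)
    for _ in range(iters):
        cand = int(np.clip(t + rng.integers(-32, 33), 1, 255))
        score = between_class_variance(hist, cand)
        if score > best:
            t, best = cand, score
    return t
```

Applying the result is then a one-liner, `segmentation = image >= t`; the connectivity step described in the abstract would post-process this binary mask.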


Subjects
Algorithms; Brain Neoplasms; Magnetic Resonance Imaging; Neural Networks, Computer; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/physiopathology; Humans; Image Processing, Computer-Assisted/methods; Mutation
20.
Phys Med Biol ; 69(11)2024 May 27.
Article in English | MEDLINE | ID: mdl-38636503

ABSTRACT

Objective. Brain tumor segmentation on magnetic resonance imaging (MRI) plays an important role in assisting the diagnosis and treatment of cancer patients. Recently, cascaded U-Net models have achieved excellent performance by conducting coarse-to-fine segmentation of MRI brain tumors. However, they are still restricted by obvious global and local differences among various brain tumors, which are difficult to solve with conventional convolutions. Approach. To address this issue, this study proposes a novel Adaptive Cascaded Transformer U-Net (ACTransU-Net) for MRI brain tumor segmentation, which simultaneously integrates Transformer and dynamic convolution into a single cascaded U-Net architecture to adaptively capture global information and local details of brain tumors. ACTransU-Net first cascades two 3D U-Nets into a two-stage network to segment brain tumors from coarse to fine. It then integrates omni-dimensional dynamic convolution modules into the second-stage shallow encoder and decoder, enhancing the local detail representation of various brain tumors by dynamically adjusting convolution kernel parameters. Moreover, 3D Swin Transformer modules are introduced into the second-stage deep encoder and decoder to capture long-range image dependencies, which helps adapt the global representation of brain tumors. Main results. Extensive experiments on the public BraTS 2020 and BraTS 2021 brain tumor datasets demonstrate the effectiveness of ACTransU-Net, with average DSC of 84.96% and 91.37%, and HD95 of 10.81 and 7.31 mm, proving competitive with state-of-the-art methods. Significance. The proposed method focuses on adaptively capturing both global information and local details of brain tumors, aiding physicians in accurate diagnosis. It also has the potential to extend to segmenting other types of lesions. The source code is available at: https://github.com/chenbn266/ACTransUnet.
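The coarse-to-fine cascade pattern — a cheap first stage localizes the tumor, then a second stage refines within the cropped region of interest — can be sketched independently of the network internals. In this minimal NumPy illustration the two "stages" are simple thresholds standing in for the two 3D U-Nets; all function names and thresholds are assumptions.

```python
import numpy as np

def coarse_stage(img, thresh=0.5):
    """Stage 1: cheap coarse mask over the full image (placeholder for U-Net #1)."""
    return img > thresh

def crop_roi(img, mask, pad=1):
    """Crop a padded bounding box around the coarse mask."""
    idx = np.argwhere(mask)
    lo = np.maximum(idx.min(axis=0) - pad, 0)
    hi = np.minimum(idx.max(axis=0) + pad + 1, img.shape)
    sl = tuple(slice(l, h) for l, h in zip(lo, hi))
    return sl, img[sl]

def fine_stage(roi, thresh=0.7):
    """Stage 2: refined mask inside the ROI (placeholder for U-Net #2)."""
    return roi > thresh

def cascade_segment(img):
    """Coarse mask -> ROI crop -> fine mask pasted back into the full frame."""
    coarse = coarse_stage(img)
    out = np.zeros_like(img, dtype=bool)
    if coarse.any():
        sl, roi = crop_roi(img, coarse)
        out[sl] = fine_stage(roi)
    return out
```

The design benefit shown here is that the second stage only ever processes the cropped ROI, which is what lets the cascaded models spend capacity on fine tumor detail.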


Subjects
Brain Neoplasms; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Brain Neoplasms/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods