J Appl Clin Med Phys: e14527, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39284311

ABSTRACT

BACKGROUND AND OBJECTIVE: Accurate segmentation of brain tumors from multimodal magnetic resonance imaging (MRI) is important for clinical diagnosis and surgical intervention. However, current deep learning methods handle multimodal MRI with an early fusion strategy that implicitly assumes the relationships between modalities are linear, which tends to ignore complementary information between modalities and degrades the model's performance. Moreover, the localized nature of the convolution operation prevents long-range relationships between voxels from being captured.

METHOD: To address these problems, we propose a multimodal segmentation network based on a late fusion strategy that employs multiple encoders and a single decoder for brain tumor segmentation. Each encoder is specialized for processing a distinct modality. Notably, our framework includes a feature fusion module based on a 3D discrete wavelet transform that extracts complementary features across the encoders. Additionally, a 3D global context-aware module is introduced to capture long-range dependencies among tumor voxels at a high feature level. The decoder combines the fused and global features to enhance the network's segmentation performance.

RESULT: The proposed model was evaluated on the publicly available BraTS2018 and BraTS2021 datasets. The experimental results are competitive with state-of-the-art methods.

CONCLUSION: The results demonstrate that our approach applies a novel concept of multimodal fusion within deep neural networks and delivers more accurate brain tumor segmentation, with the potential to assist physicians in diagnosis.
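As a rough illustration of the architecture described in the METHOD section, the sketch below outlines a late-fusion network in PyTorch, with one encoder per MRI modality, a wavelet-style fusion module, and a simple global context block. It is not the authors' implementation: the class names (HaarFusion3D, GlobalContext3D, LateFusionSegNet), the channel widths, the assumption of four BraTS modalities, and the use of an average-pool low/high frequency split as a stand-in for the paper's 3D discrete wavelet transform are all illustrative assumptions.

```python
# Illustrative sketch only; assumes four BraTS modalities (T1, T1ce, T2, FLAIR).
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """Two 3D conv layers with instance norm and ReLU (a common choice for BraTS)."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )


class HaarFusion3D(nn.Module):
    """Fusion across per-modality features. An average-pool low/high split is
    used here as a crude stand-in for the paper's 3D discrete wavelet transform."""

    def __init__(self, channels, num_modalities):
        super().__init__()
        self.fuse_low = nn.Conv3d(channels * num_modalities, channels, 1)
        self.fuse_high = nn.Conv3d(channels * num_modalities, channels, 1)

    @staticmethod
    def split(x):
        low = F.avg_pool3d(x, 2)                     # low-frequency approximation
        low = F.interpolate(low, size=x.shape[2:], mode="trilinear",
                            align_corners=False)
        return low, x - low                          # (low, high-frequency detail)

    def forward(self, feats):                        # feats: list of (B, C, D, H, W)
        lows, highs = zip(*(self.split(f) for f in feats))
        return self.fuse_low(torch.cat(lows, 1)) + self.fuse_high(torch.cat(highs, 1))


class GlobalContext3D(nn.Module):
    """Rough stand-in for the global context-aware module: channel reweighting
    from globally pooled statistics, so every voxel sees a volume-wide summary."""

    def __init__(self, channels):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels), nn.ReLU(inplace=True),
            nn.Linear(channels, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        g = x.mean(dim=(2, 3, 4))                    # (B, C) global summary
        return x * self.mlp(g)[:, :, None, None, None]


class LateFusionSegNet(nn.Module):
    """One encoder per modality, fusion of their features, then a shared decoder."""

    def __init__(self, num_modalities=4, base=16, num_classes=4):
        super().__init__()
        self.encoders = nn.ModuleList(conv_block(1, base) for _ in range(num_modalities))
        self.fusion = HaarFusion3D(base, num_modalities)
        self.context = GlobalContext3D(base)
        self.decoder = conv_block(base, base)
        self.head = nn.Conv3d(base, num_classes, 1)

    def forward(self, x):                            # x: (B, num_modalities, D, H, W)
        feats = [enc(x[:, i:i + 1]) for i, enc in enumerate(self.encoders)]
        fused = self.context(self.fusion(feats))
        return self.head(self.decoder(fused))


if __name__ == "__main__":
    model = LateFusionSegNet()
    dummy = torch.randn(1, 4, 32, 32, 32)            # tiny synthetic volume
    print(model(dummy).shape)                        # torch.Size([1, 4, 32, 32, 32])
```

In this sketch each modality is processed by its own encoder and the streams only meet at the fusion step, which is the essence of the late fusion strategy the abstract contrasts with early, input-level fusion.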
