Results 1 - 20 of 151
1.
J Appl Clin Med Phys ; : e14527, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39284311

ABSTRACT

BACKGROUND AND OBJECTIVE: Accurate segmentation of brain tumors from multimodal magnetic resonance imaging (MRI) is of great importance in clinical diagnosis and surgical intervention. However, current deep learning methods handle multimodal MRI with an early fusion strategy that implicitly assumes the relationships between modalities are linear, which tends to ignore the complementary information between modalities and degrades the model's performance. Moreover, the localized nature of the convolution operation prevents long-range relationships between voxels from being captured. METHOD: To address these problems, we propose a multimodal segmentation network based on a late fusion strategy that employs multiple encoders and a decoder for the segmentation of brain tumors. Each encoder is specialized for processing a distinct modality. Notably, our framework includes a feature fusion module based on a 3D discrete wavelet transform aimed at extracting complementary features among the encoders. Additionally, a 3D global context-aware module is introduced to capture the long-range dependencies of tumor voxels at a high feature level. The decoder combines fused and global features to enhance the network's segmentation performance. RESULT: The proposed model was evaluated on the publicly available BraTS2018 and BraTS2021 datasets, and the experimental results show that it is competitive with state-of-the-art methods. CONCLUSION: The results demonstrate that our approach applies a novel concept for multimodal fusion within deep neural networks and delivers more accurate and promising brain tumor segmentation, with the potential to assist physicians in diagnosis.
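The paper's 3D discrete wavelet transform fusion module is not specified in implementable detail in the abstract; as a rough illustration only, a one-level separable 3D Haar DWT plus a toy averaging fusion rule might look like this in NumPy (`haar_3d`, `fuse_lowpass`, and all shapes are illustrative, not the authors' code):

```python
import numpy as np

def haar_1d(x, axis):
    """One level of an orthonormal 1D Haar transform along `axis`:
    returns (low-pass, high-pass) half-resolution bands."""
    a = np.take(x, range(0, x.shape[axis], 2), axis=axis)
    b = np.take(x, range(1, x.shape[axis], 2), axis=axis)
    return (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)

def haar_3d(volume):
    """One level of a separable 3D Haar DWT: 8 sub-bands (LLL ... HHH).
    bands[0] is the LLL (approximation) band."""
    bands = [volume]
    for axis in (0, 1, 2):
        bands = [sub for band in bands for sub in haar_1d(band, axis)]
    return bands

def fuse_lowpass(feat_a, feat_b):
    """Toy fusion rule: average the LLL bands of two encoder feature
    volumes (a stand-in for the paper's wavelet fusion module)."""
    return (haar_3d(feat_a)[0] + haar_3d(feat_b)[0]) / 2
```

Because the Haar basis is orthonormal, the eight sub-bands together preserve the energy of the input volume, which is why wavelet sub-bands are a lossless way to expose complementary low- and high-frequency structure to a fusion module.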

2.
Entropy (Basel) ; 26(2)2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38392421

ABSTRACT

Brain tumor segmentation using neural networks presents challenges in accurately capturing diverse tumor shapes and sizes while maintaining real-time performance. Additionally, addressing class imbalance is crucial for achieving accurate clinical results. To tackle these issues, this study proposes a novel N-shaped lightweight network that combines multiple feature pyramid paths and U-Net architectures. Furthermore, we integrate hybrid attention mechanisms at various locations in the depth-wise separable convolution module to improve efficiency, with channel attention found to be the most effective for the skip connections of the proposed network. Moreover, we introduce a combination loss function that incorporates a newly designed weighted cross-entropy loss and Dice loss to effectively tackle class imbalance. Extensive experiments were conducted on four publicly available datasets, i.e., UCSF-PDGM, BraTS 2021, BraTS 2019, and MSD Task 01, to evaluate the performance of different methods. The results demonstrate that the proposed network achieves superior segmentation accuracy compared to state-of-the-art methods. The proposed network not only improves overall segmentation performance but also provides favorable computational efficiency, making it a promising approach for clinical applications.
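The abstract's combination of a weighted cross-entropy loss and a Dice loss can be sketched as below; this is a generic NumPy illustration of the idea, not the authors' exact weighting scheme (`alpha` and the class-weight vector are assumptions):

```python
import numpy as np

def weighted_ce(probs, target, class_weights):
    """Class-weighted cross-entropy. probs: (N, C) softmax outputs,
    target: (N,) integer labels, class_weights: (C,) per-class weights."""
    eps = 1e-7
    w = class_weights[target]
    return float(np.mean(-w * np.log(probs[np.arange(len(target)), target] + eps)))

def soft_dice_loss(probs, target, num_classes):
    """1 - mean soft Dice over classes; insensitive to class frequency."""
    eps = 1e-7
    one_hot = np.eye(num_classes)[target]              # (N, C)
    inter = (probs * one_hot).sum(axis=0)
    union = probs.sum(axis=0) + one_hot.sum(axis=0)
    return float(1 - ((2 * inter + eps) / (union + eps)).mean())

def combo_loss(probs, target, class_weights, alpha=0.5):
    """Convex combination of the two terms, as in many combined losses."""
    return (alpha * weighted_ce(probs, target, class_weights)
            + (1 - alpha) * soft_dice_loss(probs, target, len(class_weights)))
```

The cross-entropy term gives smooth per-pixel gradients while the Dice term directly targets region overlap, which is what makes the combination popular for imbalanced segmentation.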

3.
BMC Med Imaging ; 23(1): 172, 2023 10 30.
Article in English | MEDLINE | ID: mdl-37904116

ABSTRACT

PURPOSE: Automatic segmentation of brain tumors by deep learning algorithms is one of the research hotspots in the field of medical image segmentation. An improved FPN network is proposed to improve brain tumor segmentation. MATERIALS AND METHODS: To address the weak processing ability of the traditional fully convolutional network (FCN), which leads to the loss of detail in tumor segmentation, this paper proposes a brain tumor image segmentation method based on an improved feature pyramid network (FPN). To improve segmentation, we introduced the FPN structure into the U-Net architecture, capturing multi-scale context by combining the different-scale information in the U-Net model with the multi-receptive-field high-level features of the FPN, and improving the adaptability of the model to features of different scales. RESULTS: Performance evaluation shows that the proposed improved FPN model achieves 99.1% accuracy, a Dice score of 92%, and a Jaccard index of 86%, outperforming other segmentation models in each metric. In addition, the qualitative segmentation results show that our algorithm's outputs are closer to the ground truth and preserve more brain tumor details, while the outputs of other algorithms are smoother. CONCLUSIONS: The experimental results show that this method can effectively segment brain tumor regions, generalizes reasonably well, and segments better than other networks. It has positive significance for the clinical diagnosis of brain tumors.
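The Dice score and Jaccard index reported above are both overlap ratios between a predicted and a ground-truth binary mask, and are related by J = D / (2 - D). A minimal sketch of how they are computed:

```python
import numpy as np

def dice_jaccard(pred, truth):
    """Dice coefficient and Jaccard index for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum())
    jaccard = inter / np.logical_or(pred, truth).sum()
    return dice, jaccard
```

Because of the J = D / (2 - D) relation, the reported 92% Dice and 86% Jaccard are consistent only approximately; in practice the two metrics are usually computed per case and averaged separately.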


Subject(s)
Brain Neoplasms; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Algorithms; Brain Neoplasms/diagnostic imaging; Brain
4.
Sensors (Basel) ; 23(4)2023 Feb 20.
Article in English | MEDLINE | ID: mdl-36850942

ABSTRACT

Brain tumors are among the deadliest forms of cancer, characterized by abnormal proliferation of brain cells. While early identification of brain tumors can greatly aid in their therapy, the process of manual segmentation performed by expert doctors, which is often time-consuming, tedious, and prone to human error, can act as a bottleneck in the diagnostic process. This motivates the development of automated algorithms for brain tumor segmentation. However, accurately segmenting the enhanced and core tumor regions is complicated due to high levels of inter- and intra-tumor heterogeneity in terms of texture, morphology, and shape. This study proposes a fully automatic method called the selective deeply supervised multi-scale attention network (SDS-MSA-Net) for segmenting brain tumor regions using a multi-scale attention network with novel selective deep supervision (SDS) mechanisms for training. The method utilizes a 3D input composed of five consecutive slices, in addition to a 2D slice, to maintain sequential information. The proposed multi-scale architecture includes two encoding units to extract meaningful global and local features from the 3D and 2D inputs, respectively. These coarse features are then passed through attention units to filter out redundant information by assigning lower weights. The refined features are fed into a decoder block, which upscales the features at various levels while learning patterns relevant to all tumor regions. The SDS block is introduced to immediately upscale features from intermediate layers of the decoder, with the aim of producing segmentations of the whole, enhanced, and core tumor regions. The proposed framework was evaluated on the BraTS2020 dataset and showed improved performance in brain tumor region segmentation, particularly in the segmentation of the core and enhancing tumor regions, demonstrating the effectiveness of the proposed approach. Our code is publicly available.


Subject(s)
Brain Neoplasms; Physicians; Humans; Brain Neoplasms/diagnostic imaging; Algorithms; Learning
5.
J Digit Imaging ; 36(4): 1794-1807, 2023 08.
Article in English | MEDLINE | ID: mdl-36856903

ABSTRACT

Multi-modal brain magnetic resonance imaging (MRI) data has been widely applied in vision-based brain tumor segmentation methods because of the complementary diagnostic information provided by the different modalities. Since multi-modal image data is likely to be corrupted by noise or artifacts during practical scanning, making it difficult to build a universal model for subsequent segmentation and diagnosis from incomplete input data, image completion has become one of the most attractive fields in medical image pre-processing. It not only helps clinicians observe the patient's lesion area more intuitively and comprehensively, but also saves costs for patients and reduces their psychological burden during tedious pathological examinations. Recently, many deep learning-based methods have been proposed to complete multi-modal image data and have shown good performance. However, current methods cannot fully capture the continuous semantic information between adjacent slices or the structural information of intra-slice features, resulting in limited completion quality and efficiency. To solve these problems, we propose a novel generative adversarial network (GAN) framework, named random generative adversarial network (RAGAN), to complete the missing T1, T1ce, and FLAIR data from the given T2 modality in real brain MRI. It consists of the following parts: (1) For the generator, we use T2 images and multi-modal classification labels from the same sample for cyclically supervised training of image generation, realizing the restoration of arbitrary modalities. (2) For the discriminator, a multi-branch network is proposed in which the primary branch judges whether a generated modality is similar to the target modality, while the auxiliary branch judges whether its essential visual features are similar to those of the target modality.
We conducted qualitative and quantitative experimental validation on the BraTS2018 dataset, generating 10,686 MRI volumes for each missing modality. Real brain tumor morphology images were compared with the synthetic ones using PSNR and SSIM as evaluation metrics. Experiments demonstrate that the brightness, resolution, location, and morphology of brain tissue under different modalities are well reconstructed. We also used segmentation as a further validation experiment, blending synthetic and real images into a segmentation network. With the classic UNet as the segmentation network, the segmentation score is 77.58%. To further demonstrate the value of the proposed method, we used the stronger RES_UNet with deep supervision as the segmentation model, reaching a segmentation accuracy of 88.76%. Although our method does not significantly outperform other algorithms, its Dice value is 2% higher than that of the current state-of-the-art data completion algorithm, TC-MGAN.
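The entry above evaluates synthetic against real images with PSNR (and SSIM). PSNR is simply a log-scaled inverse of the mean squared error; a minimal sketch (the `max_val` parameter is an assumption about the image intensity range):

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between a reference and a test
    image; higher is better, identical images give infinity."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')
    return 10 * np.log10(max_val ** 2 / mse)
```

SSIM is more involved (local means, variances, and covariances over sliding windows), which is why libraries such as scikit-image are usually used for it rather than a hand-rolled version.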


Subject(s)
Brain Neoplasms; Humans; Brain Neoplasms/diagnostic imaging; Brain/diagnostic imaging; Algorithms; Artifacts; Image Processing, Computer-Assisted; Magnetic Resonance Imaging
6.
J Digit Imaging ; 36(5): 2075-2087, 2023 10.
Article in English | MEDLINE | ID: mdl-37340197

ABSTRACT

Deep convolutional neural networks (DCNNs) have shown promise in brain tumor segmentation from multi-modal MRI sequences, accommodating heterogeneity in tumor shape and appearance. The fusion of multiple MRI sequences allows networks to explore complementary tumor information for segmentation. However, developing a network that maintains clinical relevance when certain MRI sequence(s) are unavailable or unusable poses a significant challenge. While one solution is to train multiple models with different MRI sequence combinations, it is impractical to train a model for every possible sequence combination. In this paper, we propose a DCNN-based brain tumor segmentation framework incorporating a novel sequence dropout technique, in which networks are trained to be robust to missing MRI sequences while employing all other available sequences. Experiments were performed on the RSNA-ASNR-MICCAI BraTS 2021 Challenge dataset. When all MRI sequences were available, there were no significant differences in performance between the models with and without dropout for enhancing tumor (ET), tumor core (TC), and whole tumor (WT) (p-values 1.000, 1.000, and 0.799, respectively), demonstrating that the addition of dropout improves robustness without hindering overall performance. When key sequences were unavailable, the network with sequence dropout performed significantly better. For example, when tested on only the T1, T2, and FLAIR sequences together, the Dice similarity coefficient (DSC) for ET, TC, and WT increased from 0.143 to 0.486, 0.431 to 0.680, and 0.854 to 0.901, respectively. Sequence dropout represents a relatively simple yet effective approach for brain tumor segmentation with missing MRI sequences.
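Sequence dropout amounts to zeroing out whole MRI sequences (input channels) at random during training. A minimal sketch of the idea, assuming a channels-first `(C, D, H, W)` layout and a drop probability `p` (both assumptions, not the paper's stated configuration):

```python
import numpy as np

def sequence_dropout(x, p=0.25, rng=None):
    """Randomly zero entire MRI sequences (channels) of a (C, D, H, W)
    volume so the network learns to cope with missing modalities.
    At least one sequence is always kept."""
    if rng is None:
        rng = np.random.default_rng()
    keep = rng.random(x.shape[0]) >= p       # one keep/drop decision per sequence
    if not keep.any():
        keep[rng.integers(x.shape[0])] = True
    out = x.copy()
    out[~keep] = 0.0
    return out, keep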


Subject(s)
Brain Neoplasms; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/pathology; Neural Networks, Computer; Magnetic Resonance Imaging/methods
7.
BMC Bioinformatics ; 22(Suppl 5): 636, 2022 Dec 13.
Article in English | MEDLINE | ID: mdl-36513986

ABSTRACT

BACKGROUND: Brain tumor segmentation plays a significant role in clinical treatment and surgical planning. Recently, several deep convolutional networks have been proposed for brain tumor segmentation and have achieved impressive performance. However, most state-of-the-art models use 3D convolution networks, which require high computational costs. This makes it difficult to apply these models to medical equipment in the future. Additionally, due to the large diversity of brain tumors and uncertain boundaries between sub-regions, some models cannot segment multiple tumors in the brain well at the same time. RESULTS: In this paper, we propose a lightweight hierarchical convolution network, called LHC-Net. Our network uses a multi-scale strategy in which the common 3D convolution is replaced by hierarchical convolutions with residual-like connections. This improves multi-scale feature extraction and greatly reduces parameters and computational resources. On the BraTS2020 dataset, LHC-Net achieves Dice scores of 76.38%, 90.01% and 83.32% for ET, WT and TC, respectively, better than 3D U-Net's 73.50%, 89.42% and 81.92%. On the multi-tumor subset in particular, our model shows a significant performance improvement. In addition, LHC-Net has 1.65M parameters and 35.58G FLOPs, roughly half the parameters and a third of the computation of 3D U-Net. CONCLUSION: Our proposed method achieves automatic segmentation of tumor sub-regions from four-modality brain MRI images. LHC-Net achieves competitive segmentation performance with fewer parameters and less computation than state-of-the-art models, meaning it can be deployed under limited medical computing resources. By using the multi-scale strategy on channels, LHC-Net can segment multiple tumors in a patient's brain well. It has great potential for application to other multi-scale segmentation tasks.


Subject(s)
Brain Neoplasms; Humans; Brain Neoplasms/diagnostic imaging; Brain; Neuroimaging; Uncertainty; Image Processing, Computer-Assisted
8.
NMR Biomed ; 35(5): e4657, 2022 05.
Article in English | MEDLINE | ID: mdl-34859922

ABSTRACT

Automatic brain tumor segmentation on MRI is a prerequisite for quantitative and intuitive assistance in clinical diagnosis and treatment. Meanwhile, 3D deep neural network-based brain tumor segmentation models have demonstrated considerable accuracy improvements over corresponding 2D methodologies; however, they generally suffer from high computational cost. Motivated by the recently proposed 3D dilated multi-fiber network (DMF-Net) architecture, which emphasizes reducing computational cost, we present in this work a novel encoder-decoder neural network, i.e., a 3D asymmetric expectation-maximization attention network (AEMA-Net), to automatically segment brain tumors. We modify DMF-Net by introducing an asymmetric convolution block into the multi-fiber unit and the dilated multi-fiber unit to capture more powerful deep features for brain tumor segmentation. In addition, AEMA-Net incorporates an expectation-maximization attention (EMA) module into DMF-Net by embedding the EMA block in the third-stage skip connection, which focuses on capturing long-range context dependencies. We extensively evaluate AEMA-Net on three MRI brain tumor segmentation benchmarks: the BraTS 2018, 2019 and 2020 datasets. Experimental results demonstrate that AEMA-Net outperforms both 3D U-Net and DMF-Net, and achieves competitive performance compared with state-of-the-art brain tumor segmentation methods.


Subject(s)
Brain Neoplasms; Image Processing, Computer-Assisted; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/pathology; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Motivation; Neural Networks, Computer
9.
J Biomed Inform ; 133: 104173, 2022 09.
Article in English | MEDLINE | ID: mdl-35998815

ABSTRACT

Glioma is one of the most threatening tumors, and the survival rate of affected patients is low. Automatic segmentation of tumors by reliable algorithms can reduce diagnosis time. In this paper, a novel 3D multi-threading dilated convolutional network (MTDC-Net) is proposed for automatic brain tumor segmentation. First, a multi-threading dilated convolution (MTDC) strategy is introduced in the encoder part, so that low-dimensional structural features can be better extracted and integrated. At the same time, a pyramid matrix fusion (PMF) algorithm is used to better integrate characteristic structural information. Second, in order to make better use of contextual semantic information, this paper proposes a spatial pyramid convolution (SPC) operation: by using convolutions with different kernel sizes, the model can aggregate more semantic information. Finally, a multi-threading adaptive pooling up-sampling (MTAU) strategy is used to increase the weight of semantic information and improve the recognition ability of the model, and a pixel-based post-processing method is used to mitigate the effects of erroneous predictions. On the Brain Tumor Segmentation Challenge 2018 (BraTS2018) public validation dataset, the Dice scores of MTDC-Net are 0.832, 0.892 and 0.809 for the core, whole and enhancing tumor, respectively. On the BraTS2020 public validation dataset, the Dice scores of MTDC-Net are 0.833, 0.896 and 0.797 for the core tumor, whole tumor and enhancing tumor, respectively. Extensive numerical experiments show that MTDC-Net is a state-of-the-art network for automatic brain tumor segmentation.
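Dilated convolution, the building block named above, spaces the kernel taps `dilation` samples apart, enlarging the receptive field without adding parameters. A minimal 1D sketch (following the deep-learning convention of cross-correlation, i.e. no kernel flip):

```python
import numpy as np

def dilated_conv1d(signal, kernel, dilation=1):
    """'Valid' 1D convolution with a dilated kernel: taps are spaced
    `dilation` samples apart. dilation=1 is ordinary convolution."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field of one output
    out = np.empty(len(signal) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * signal[i + j * dilation] for j in range(k))
    return out
```

Stacking layers with growing dilation rates (1, 2, 4, ...) grows the receptive field exponentially with depth, which is the property dilated encoders such as the one described here exploit.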


Subject(s)
Brain Neoplasms; Image Processing, Computer-Assisted; Algorithms; Brain Neoplasms/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Software
10.
BMC Med Imaging ; 22(1): 14, 2022 01 27.
Article in English | MEDLINE | ID: mdl-35086482

ABSTRACT

BACKGROUND: In the encoding part of U-Net3+, the brain tumor feature extraction ability is insufficient; as a result, features cannot be fused well during up-sampling and segmentation accuracy decreases. METHODS: In this study, we put forward an improved U-Net3+ segmentation network based on stage residuals. In the encoder part, an encoder based on the stage residual structure is used to solve the vanishing gradient problem caused by increasing network depth and to enhance the feature extraction ability of the encoder, which supports full feature fusion during up-sampling. Moreover, we replaced the batch normalization (BN) layers with filter response normalization (FRN) layers to eliminate the impact of batch size on the network. Based on the improved two-dimensional (2D) U-Net3+ model with stage residuals, a three-dimensional (3D) model, IResUnet3+, is constructed, with appropriate methods for handling 3D data that achieve accurate segmentation. RESULTS: The experimental results showed that the sensitivity for WT, TC, and ET increased by 1.34%, 4.6%, and 8.44%, respectively, and the Dice coefficients of ET and WT further increased by 3.43% and 1.03%, respectively. To facilitate further research, the source code is available at: https://github.com/YuOnlyLookOne/IResUnet3Plus . CONCLUSION: The improved network brings a significant improvement on the BraTS2018 brain tumor segmentation task; compared with the classical networks U-Net, V-Net, ResUNet and U-Net3+, the proposed network has fewer parameters and significantly improved accuracy.
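Filter response normalization, which the entry above swaps in for batch normalization, normalizes each channel by its own mean squared activation over the spatial dimensions, so the statistics never mix examples in a batch. A minimal NumPy sketch (scalar `gamma`, `beta`, `tau` here; in a real layer they are learned per channel):

```python
import numpy as np

def frn(x, gamma=1.0, beta=0.0, tau=0.0, eps=1e-6):
    """Filter response normalization over the spatial dims of an
    (N, C, H, W) tensor, followed by the thresholded linear unit (TLU)
    max(y, tau). Statistics are per sample and per channel."""
    nu2 = np.mean(x ** 2, axis=(2, 3), keepdims=True)   # mean squared activation
    y = gamma * x / np.sqrt(nu2 + eps) + beta
    return np.maximum(y, tau)
```

Because no batch statistics are involved, the output for one example is identical whether it is normalized alone or inside a batch, which is exactly the batch-size independence the paper is after.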


Subject(s)
Brain Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods; Neuroimaging/methods; Deep Learning; Disease Progression; Humans; Imaging, Three-Dimensional/methods
11.
J Digit Imaging ; 35(4): 938-946, 2022 08.
Article in English | MEDLINE | ID: mdl-35293605

ABSTRACT

Diagnosis of brain tumor gliomas is a challenging task in medical image analysis due to their complexity, the irregularity of tumor structures, and the diversity of tissue textures and shapes. Semantic segmentation approaches using deep learning have consistently outperformed previous methods on this challenging task. However, deep learning alone does not provide the required local features related to tissue texture changes caused by tumor growth. This paper designs a hybrid method arising from this need, which incorporates machine-learned and hand-crafted features. A semantic segmentation network (SegNet) generates the machine-learned features, while grey-level co-occurrence matrix (GLCM)-based texture features constitute the hand-crafted features. In addition, the proposed approach takes only the region of interest (ROI), which covers the extent of the complete tumor structure, as input, and suppresses the intensity of irrelevant areas. A decision tree (DT) classifies the pixels of ROI MRI images into different parts of the tumor, i.e., edema, necrosis and enhanced tumor. The method was evaluated on the BraTS 2017 dataset. The results demonstrate that the proposed model provides promising segmentation of brain tumor structure. The F-measures for automatic brain tumor segmentation against ground truth are 0.98, 0.75 and 0.69 for whole tumor, core and enhanced tumor, respectively.
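The GLCM texture features used above are derived from a co-occurrence table of quantized grey levels at a fixed pixel offset. A small sketch of one GLCM plus two classic Haralick features (the offset and level count are illustrative; real pipelines average several offsets):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Grey-level co-occurrence matrix of an integer-quantized image for
    one pixel offset (dx, dy), normalized to a joint probability table."""
    H, W = img.shape
    P = np.zeros((levels, levels))
    for y in range(H - dy):
        for x in range(W - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def glcm_features(P):
    """Two classic Haralick features computed from the GLCM."""
    i, j = np.indices(P.shape)
    return {"contrast": float(((i - j) ** 2 * P).sum()),
            "energy": float((P ** 2).sum())}
```

A perfectly uniform region concentrates all mass on the GLCM diagonal (contrast 0, energy 1), while textured tumor tissue spreads mass off-diagonal, which is what makes these features discriminative alongside learned ones.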


Subject(s)
Brain Neoplasms; Glioma; Algorithms; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/pathology; Glioma/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Neural Networks, Computer
12.
BMC Med Inform Decis Mak ; 21(Suppl 2): 63, 2021 07 30.
Article in English | MEDLINE | ID: mdl-34330265

ABSTRACT

BACKGROUND: Accurately segmenting the tumor region in MRI images is important for brain tumor diagnosis and radiotherapy planning. At present, manual segmentation is widely adopted in clinical practice, and there is a strong need for an automatic and objective system to alleviate the workload of radiologists. METHODS: We propose a parallel multi-scale feature fusing architecture to generate rich feature representations for accurate brain tumor segmentation. It comprises two parts: (1) a Feature Extraction Network (FEN) for brain tumor feature extraction at different levels, and (2) a Multi-scale Feature Fusing Network (MSFFN) that merges features of all scales in a parallel manner. In addition, we use two hybrid loss functions to optimize the proposed network against the class imbalance issue. RESULTS: We validated our method on BRATS 2015, with Dice scores of 0.86, 0.73 and 0.61 for the three tumor regions (complete, core and enhancing), and a model parameter size of only 6.3 MB. Without any post-processing operations, our method still outperforms published state-of-the-art methods on complete tumor regions and obtains competitive performance in the other two regions. CONCLUSIONS: The proposed parallel structure can effectively fuse multi-level features to generate rich feature representations for high-resolution results. Moreover, the hybrid loss functions alleviate the class imbalance issue and guide the training process. The proposed method can be applied to other medical segmentation tasks.


Subject(s)
Brain Neoplasms; Image Processing, Computer-Assisted; Brain; Brain Neoplasms/diagnostic imaging; Humans; Magnetic Resonance Imaging; Neural Networks, Computer
13.
Sensors (Basel) ; 21(22)2021 Nov 12.
Article in English | MEDLINE | ID: mdl-34833602

ABSTRACT

MRI images are visually inspected by domain experts for the analysis and quantification of tumorous tissues. Due to the large volumetric data, manual reporting on the images is subjective, cumbersome, and error-prone. To address these problems, automatic image analysis tools are employed for tumor segmentation and subsequent statistical analysis. However, prior to tumor analysis and quantification, an important challenge lies in the pre-processing. In the present study, permutations of different pre-processing methods are comprehensively investigated. In particular, the study focused on Gibbs ringing artifact removal, bias field correction, intensity normalization, and adaptive histogram equalization (AHE). The pre-processed MRI data is then passed to a 3D U-Net for automatic segmentation of brain tumors. The segmentation results demonstrated the best performance with the combination of two techniques, i.e., Gibbs ringing artifact removal and bias field correction. The proposed technique achieved mean Dice scores of 0.91, 0.86, and 0.70 for the whole tumor, tumor core, and enhancing tumor, respectively. The testing mean Dice scores achieved by the system are 0.90, 0.83, and 0.71 for the whole tumor, core tumor, and enhancing tumor, respectively. The novelty of this work concerns a robust pre-processing sequence for improving the segmentation accuracy of MR images; the proposed method surpassed the testing Dice scores of state-of-the-art methods. The results are benchmarked against the existing techniques used in the Brain Tumor Segmentation Challenge (BraTS) 2018.
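One of the pre-processing steps named above, intensity normalization, is commonly implemented as a per-volume z-score restricted to a brain mask so background voxels do not skew the statistics. A minimal sketch of that step (the masking convention is an assumption, not this paper's exact recipe):

```python
import numpy as np

def zscore_normalize(volume, mask=None):
    """Z-score intensity normalization of an MRI volume, with the mean
    and standard deviation computed over the (optional) brain mask only."""
    mask = np.ones(volume.shape, bool) if mask is None else mask.astype(bool)
    vals = volume[mask]
    return (volume - vals.mean()) / (vals.std() + 1e-8)
```

After this step the in-mask intensities have zero mean and unit variance, which puts scans from different scanners and acquisition settings on a comparable scale before segmentation.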


Subject(s)
Brain Neoplasms; Magnetic Resonance Imaging; Brain/diagnostic imaging; Brain Neoplasms/diagnostic imaging; Humans; Image Enhancement; Image Processing, Computer-Assisted
14.
J Digit Imaging ; 34(4): 905-921, 2021 08.
Article in English | MEDLINE | ID: mdl-34327627

ABSTRACT

The development of an automated glioma segmentation system from MRI volumes is a difficult task because of the data imbalance problem. The ability of deep learning models to incorporate different layers for data representation assists medical experts such as radiologists in recognizing the condition of the patient, making medical practice easier and more automatic. State-of-the-art deep learning algorithms enable advances in medical image segmentation, such as segmenting the volumes into sub-tumor classes. For this task, fully convolutional network (FCN)-based architectures are used to build end-to-end segmentation solutions. In this paper, we propose a multi-level Kronecker convolutional neural network (ML-KCNN) that captures information at different levels to obtain both local and global contextual information. Our ML-KCNN uses Kronecker convolution, which overcomes the missing-pixels problem of dilated convolution. Moreover, we use a post-processing technique to minimize false positives in the segmented outputs, and the generalized Dice loss (GDL) function handles the data imbalance problem. Furthermore, the combination of connected component analysis (CCA) with conditional random fields (CRF) used as post-processing achieves reduced Hausdorff distance (HD) scores of 3.76 on enhancing tumor (ET), 4.88 on whole tumor (WT), and 5.85 on tumor core (TC), and Dice similarity coefficients (DSC) of 0.74 on ET, 0.90 on WT, and 0.83 on TC. Qualitative and visual evaluation shows that the proposed segmentation method achieves performance that can compete with other brain tumor segmentation techniques.
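The generalized Dice loss (GDL) mentioned above handles imbalance by weighting each class by the inverse square of its ground-truth volume, so rare classes count as much as dominant ones. A minimal sketch over flattened `(N, C)` arrays:

```python
import numpy as np

def generalized_dice_loss(probs, one_hot, eps=1e-7):
    """Generalized Dice loss. probs: (N, C) predicted probabilities,
    one_hot: (N, C) ground truth. Class weights are inverse squared
    class volumes, equalizing rare and dominant classes."""
    w = 1.0 / (one_hot.sum(axis=0) ** 2 + eps)
    inter = (w * (probs * one_hot).sum(axis=0)).sum()
    union = (w * (probs + one_hot).sum(axis=0)).sum()
    return float(1 - 2 * inter / (union + eps))
```

With uniform weights this reduces to the ordinary soft Dice loss; the inverse-square weighting is what makes missing a tiny class (e.g. enhancing tumor) expensive.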


Subject(s)
Glioma; Image Processing, Computer-Assisted; Algorithms; Glioma/diagnostic imaging; Humans; Magnetic Resonance Imaging; Neural Networks, Computer
15.
BMC Med Imaging ; 20(1): 17, 2020 02 11.
Article in English | MEDLINE | ID: mdl-32046685

ABSTRACT

Accurate segmentation of brain lesions from MR images (MRIs) is important for improving cancer diagnosis, surgical planning, and outcome prediction. However, manual, accurate segmentation of brain lesions from 3D MRIs is highly expensive, time-consuming, and prone to user bias. We present an efficient yet conceptually simple brain segmentation network (referred to as Brain SegNet), a 3D residual framework for automatic voxel-wise segmentation of brain lesions. Our model directly predicts dense voxel segmentations of brain tumor or ischemic stroke regions in 3D brain MRIs. The proposed 3D segmentation network runs at about 0.5 s per MRI, about 50 times faster than previous approaches (Med Image Anal 43:98-111, 2018; Med Image Anal 36:61-78, 2017). Our model is evaluated on the BRATS 2015 benchmark for brain tumor segmentation, where it obtains state-of-the-art results, surpassing the recently published results reported in those works. We further applied the proposed Brain SegNet to ischemic stroke lesion outcome prediction, with impressive results achieved on the Ischemic Stroke Lesion Segmentation (ISLES) 2017 database.


Subject(s)
Brain Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods; Humans; Imaging, Three-Dimensional; Magnetic Resonance Imaging; Neural Networks, Computer; Radiographic Image Interpretation, Computer-Assisted
16.
Sensors (Basel) ; 20(15)2020 Jul 28.
Article in English | MEDLINE | ID: mdl-32731598

ABSTRACT

The high human labor demand involved in collecting paired medical imaging data severely impedes the application of deep learning methods to medical image processing tasks such as tumor segmentation. The situation is further worsened when collecting multi-modal image pairs. However, this issue can be resolved through the help of generative adversarial networks, which can be used to generate realistic images. In this work, we propose a novel framework, named TumorGAN, to generate image segmentation pairs based on unpaired adversarial training. To improve the quality of the generated images, we introduce a regional perceptual loss to enhance the performance of the discriminator. We also develop a regional L1 loss to constrain the color of the imaged brain tissue. Finally, we verify the performance of TumorGAN on a public brain tumor data set, BraTS 2017. The experimental results demonstrate that the synthetic data pairs generated by our proposed method can practically improve tumor segmentation performance when applied to segmentation network training.


Subject(s)
Brain Neoplasms; Brain/diagnostic imaging; Brain Neoplasms/diagnostic imaging; Humans; Image Processing, Computer-Assisted
17.
J Xray Sci Technol ; 28(4): 709-726, 2020.
Article in English | MEDLINE | ID: mdl-32444591

RESUMEN

BACKGROUND: Brain tumor segmentation plays an important role in assisting disease diagnosis, treatment planning, and surgical navigation. OBJECTIVE: This study aims to improve the accuracy of tumor boundary segmentation using a multi-scale U-Net network. METHODS: A novel U-Net with dilated convolution (DCU-Net) structure is proposed for brain tumor segmentation based on the classic U-Net structure. First, the MR brain tumor images are pre-processed to alleviate the class-imbalance problem by reducing the input of background pixels. Then, multi-scale spatial pyramid pooling replaces the max pooling at the end of the down-sampling path; it expands the feature receptive field while maintaining image resolution. Finally, a dilated convolution residual block is incorporated into the skip connections to improve the network's ability to recognize tumor details. RESULTS: The proposed model was evaluated on the Brain Tumor Segmentation (BRATS) 2018 Challenge training dataset and achieved Dice similarity coefficient (DSC) scores of 0.91, 0.78, and 0.83 for whole tumor, tumor core, and enhancing tumor segmentation, respectively. CONCLUSIONS: The experimental results indicate that the proposed model yields promising performance in automated brain tumor segmentation.
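The DSC scores reported in this and several of the other abstracts are the Dice similarity coefficient, 2|A∩B| / (|A|+|B|) for predicted mask A and ground-truth mask B. A minimal sketch over flattened binary masks:

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |intersection| / (|pred| + |truth|); 1.0 is a perfect overlap."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0  # two empty masks agree

pred  = [1, 1, 0, 1, 0]
truth = [1, 0, 0, 1, 1]
score = dice(pred, truth)  # 2*2 / (3+3) = 0.666...
```

Because it normalizes by the combined mask sizes, DSC is far less sensitive to the background-foreground imbalance typical of tumor volumes than plain pixel accuracy.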


Subject(s)
Brain Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted/methods , Algorithms , Brain Neoplasms/pathology , Humans , Neural Networks, Computer
18.
J Med Syst ; 43(4): 84, 2019 Feb 27.
Article in English | MEDLINE | ID: mdl-30810822

RESUMEN

A brain tumor arises from the uncontrolled growth of abnormal cells in brain tissue and can be of two kinds: benign or malignant. A benign brain tumor does not affect adjacent normal and healthy tissue, whereas a malignant tumor can invade neighboring brain tissue and may lead to death. Early detection of brain tumors is therefore required to improve patient survival. Brain tumors are usually detected by MRI scanning. However, radiologists cannot always provide effective tumor segmentation in MRI images because of the irregular shapes and varying positions of tumors in the brain. Accurate brain tumor segmentation is needed to locate the tumor, guide the correct treatment for a patient, and inform the surgeon who must operate. In this paper, a novel deep learning algorithm (kernel-based CNN) combined with a multiclass support vector machine (M-SVM) is presented to segment tumors automatically and efficiently. The presented work comprises several steps: preprocessing, feature extraction, image classification, and brain tumor segmentation. The MRI image is smoothed and enhanced by Laplacian of Gaussian (LoG) filtering with Contrast-Limited Adaptive Histogram Equalization (CLAHE), and features are extracted from it based on tumor position, shape, and surface characteristics. Image classification is then performed with the M-SVM on the selected features, and the tumor is segmented from the MRI image with the kernel-based CNN method. Experimental results show that the presented technique performs brain tumor segmentation accurately, reaching almost 84% in evaluation against existing algorithms.
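The LoG preprocessing step named above convolves the image with the second derivative of a Gaussian, which responds strongly at intensity edges. A minimal 1-D sketch of the kernel itself (the paper applies the 2-D version to MRI slices; `sigma` and `radius` here are illustrative parameters, not values from the paper):

```python
import math

def log_kernel_1d(sigma, radius):
    """1-D Laplacian-of-Gaussian kernel: the second derivative of a
    Gaussian, g''(x) = (x^2/sigma^4 - 1/sigma^2) * g(x). It sums to
    roughly zero, so flat regions are suppressed and edges enhanced."""
    kernel = []
    for x in range(-radius, radius + 1):
        g = math.exp(-x * x / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))
        kernel.append((x * x / sigma ** 4 - 1.0 / sigma ** 2) * g)
    return kernel

k = log_kernel_1d(sigma=1.0, radius=4)
# Center tap is negative, flanked by positive lobes: a "Mexican hat" flipped.
```

Convolving with this kernel (or its 2-D counterpart) before CLAHE sharpens tumor boundaries that the later classification stages depend on.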


Subject(s)
Brain Neoplasms/diagnosis , Brain Neoplasms/pathology , Deep Learning , Early Detection of Cancer , Image Processing, Computer-Assisted/methods , Algorithms , Brain Neoplasms/diagnostic imaging , Humans , Magnetic Resonance Imaging/methods , Support Vector Machine , Time Factors
19.
MAGMA ; 30(4): 397-405, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28321524

RESUMEN

OBJECTIVE: To evaluate the reliability of the standard planimetric methodology of volumetric analysis and three different open-source semi-automated approaches to brain tumor segmentation. MATERIALS AND METHODS: The volumes of subependymal giant cell astrocytomas (SEGA) in 30 MRI studies of 10 patients from a previous everolimus-related trial (EMINENTS study) were estimated using four methods: the planimetric method (modified MacDonald ellipsoid method), ITK-Snap (pixel clustering, geodesic active contours, region competition), 3D Slicer (level-set thresholding), and NIRFast (k-means clustering, Markov random fields). The methods were compared, and a trial simulation was performed to determine how the choice of approach could influence the final decision about progression or response. RESULTS: The intraclass correlation coefficient was high (0.95; 95% CI 0.91-0.98). The planimetric method always overestimated the size of the tumor, while virtually no mean difference was found between ITK-Snap and 3D Slicer (P = 0.99). NIRFast underestimated the volume and presented a proportional bias. During the trial simulation, a moderate level of agreement between all the methods (kappa 0.57-0.71, P < 0.002) was noted. CONCLUSION: Semi-automated segmentation can ease oncological follow-up, but the moderate level of agreement between segmentation methods suggests that the reference-standard volumetric method for SEGA tumors should be revised and chosen carefully, as the choice of volumetry tool may influence the conclusion about tumor progression or response.
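The planimetric ellipsoid method referenced above is commonly written as V ≈ abc/2, where a, b, c are the tumor's three orthogonal diameters; this sketch assumes that common form rather than the exact protocol used in the EMINENTS study:

```python
def ellipsoid_volume(a, b, c):
    """Planimetric (ellipsoid-style) tumor-volume estimate from three
    orthogonal diameters: V ~ a*b*c/2, a bedside approximation of the
    exact ellipsoid volume (pi/6)*a*b*c."""
    return a * b * c / 2.0

v = ellipsoid_volume(3.0, 2.0, 2.0)  # 6.0 (cm^3 for 3 x 2 x 2 cm diameters)
```

Because it reduces an irregular lesion to three linear measurements, this estimate is quick but shape-sensitive, which is consistent with the systematic bias the study reports for planimetry against the voxel-counting tools.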


Subject(s)
Astrocytoma/diagnostic imaging , Brain Neoplasms/diagnostic imaging , Neuroimaging/methods , Computer Simulation , Humans , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Imaging, Three-Dimensional/statistics & numerical data , Magnetic Resonance Imaging/methods , Magnetic Resonance Imaging/statistics & numerical data , Neuroimaging/statistics & numerical data , Reproducibility of Results , Tumor Burden
20.
J Xray Sci Technol ; 25(2): 301-312, 2017.
Article in English | MEDLINE | ID: mdl-28269819

RESUMEN

BACKGROUND: Brain tumor segmentation is a challenging task because of variation in intensity, caused by the inhomogeneous content of tumor tissue and the choice of imaging modality. In 2010, Zhang et al. developed the Selective Binary Gaussian Filtering Regularizing Level Set (SBGFRLS) model, which combined the merits of edge-based and region-based segmentation. OBJECTIVE: To improve the SBGFRLS method by modifying the signed pressure force (SPF) term with multiple sources of image information, and to demonstrate the effectiveness of the proposed method on clinical images. METHODS: In the original SBGFRLS model, the direction of contour evolution depends mainly on the SPF. By introducing a directional term into the SPF, the model can control the evolution direction. The SPF is altered by statistics computed over the region enclosed by the contour, a concept that can be extended to jointly incorporate multiple sources of image information. The new SPF term is expected to address the blurred-edge problem in brain tumor segmentation. The proposed method is validated on clinical images, including pre- and post-contrast magnetic resonance images. Accuracy and robustness are assessed using sensitivity, specificity, the Dice similarity coefficient, and the Jaccard similarity index. RESULTS: Experimental results show improvement, in particular an increase in sensitivity at the same specificity, in segmenting all types of tumors except diffuse tumors. CONCLUSION: The novel brain tumor segmentation method is clinically oriented, with a fast, robust, and accurate implementation and minimal user interaction. The method effectively segmented homogeneously enhanced, non-enhanced, heterogeneously enhanced, and ring-enhanced tumors under MR imaging. Although the method is limited in identifying edema and diffuse tumors, several possible solutions are suggested to turn the curve evolution into a fully functional clinical diagnosis tool.
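The SPF term that this abstract modifies is, in the original SBGFRLS formulation, spf(x) = (I(x) − (c1+c2)/2) / max|I − (c1+c2)/2|, where c1 and c2 are the mean intensities inside and outside the current contour. A minimal sketch over a flattened intensity list (the full model applies this inside an iterative level-set evolution, which is omitted here):

```python
def spf(intensities, inside_mask):
    """Signed pressure force of the baseline SBGFRLS model. Pixels
    brighter than the inside/outside midpoint get a positive force
    (contour expands there); darker pixels get a negative one."""
    inside = [i for i, m in zip(intensities, inside_mask) if m]
    outside = [i for i, m in zip(intensities, inside_mask) if not m]
    c1 = sum(inside) / len(inside)    # mean intensity inside the contour
    c2 = sum(outside) / len(outside)  # mean intensity outside the contour
    mid = (c1 + c2) / 2.0
    peak = max(abs(i - mid) for i in intensities)
    return [(i - mid) / peak for i in intensities]

forces = spf([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
# Bright pixels -> positive force, dark pixels -> negative force.
```

The improvement described in the abstract adds directional and multi-image terms to this baseline; the sign-driven push/pull shown here is what those modifications steer.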


Subject(s)
Brain Neoplasms/diagnostic imaging , Brain/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Algorithms , Humans , Normal Distribution