Results 1 - 4 of 4
1.
J Digit Imaging; 34(4): 905-921, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34327627

ABSTRACT

Developing an automated glioma segmentation system from MRI volumes is a difficult task because of the data imbalance problem. The ability of deep learning models to learn multi-layer data representations helps medical experts such as radiologists assess a patient's condition and makes clinical workflows easier and more automated. State-of-the-art deep learning algorithms have advanced medical image segmentation, for example by segmenting volumes into sub-tumor classes. For this task, fully convolutional network (FCN)-based architectures are used to build end-to-end segmentation solutions. In this paper, we propose a multi-level Kronecker convolutional neural network (ML-KCNN) that captures information at different levels to obtain both local and global contextual information. Our ML-KCNN uses Kronecker convolution, which overcomes the missing-pixels problem of dilated convolution. Moreover, we use a post-processing technique to minimize false positives in the segmented outputs, and the generalized dice loss (GDL) function handles the data imbalance problem. The combination of connected component analysis (CCA) with conditional random fields (CRF) used as post-processing achieves a reduced Hausdorff distance (HD) of 3.76 on enhancing tumor (ET), 4.88 on whole tumor (WT), and 5.85 on tumor core (TC), with a Dice similarity coefficient (DSC) of 0.74 on ET, 0.90 on WT, and 0.83 on TC. Qualitative and visual evaluation shows that the proposed segmentation method achieves performance competitive with other brain tumor segmentation techniques.
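The class-imbalance handling described above rests on the generalized dice loss. Below is a minimal PyTorch sketch of that loss for multi-class 3D segmentation, following the commonly cited formulation with inverse squared-volume class weights; the function name, tensor layout, and epsilon value are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the generalized dice loss (GDL) for class-imbalanced
# multi-class 3D segmentation. Names and tensor layout are illustrative.
import torch

def generalized_dice_loss(probs: torch.Tensor,
                          target: torch.Tensor,
                          eps: float = 1e-6) -> torch.Tensor:
    """probs:  (N, C, D, H, W) softmax probabilities per class.
    target: (N, C, D, H, W) one-hot ground-truth labels."""
    dims = (0, 2, 3, 4)                          # sum over batch and spatial axes
    ref_vol = target.sum(dim=dims)               # per-class reference volume
    weights = 1.0 / (ref_vol ** 2 + eps)         # small classes get large weights
    intersection = (probs * target).sum(dim=dims)
    union = (probs + target).sum(dim=dims)
    return 1.0 - 2.0 * (weights * intersection).sum() / ((weights * union).sum() + eps)
```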


Subject(s)
Glioma; Image Processing, Computer-Assisted; Algorithms; Glioma/diagnostic imaging; Humans; Magnetic Resonance Imaging; Neural Networks, Computer
2.
J Digit Imaging; 33(6): 1443-1464, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32666364

ABSTRACT

Many neuroimaging processing applications consider skull stripping a crucial pre-processing step. Because of the complex anatomical structure of the brain and intensity variations in brain magnetic resonance imaging (MRI), accurate skull stripping is essential. Skull stripping removes the skull region from brain MRI so that subsequent brain segmentation and clinical analysis operate on brain tissue only, and its accuracy and efficiency are crucial for diagnostic purposes. Differentiating brain regions from skull regions demands accurate and detailed methods and remains a challenging task. This paper focuses on the transition from conventional to machine- and deep-learning-based automated skull stripping methods for brain MRI images. The study observes that deep learning approaches have outperformed conventional and machine learning techniques in many respects, but they have their own limitations. The paper also includes a comparative analysis of current state-of-the-art skull stripping methods, a critical discussion of open challenges, quantitative evaluation parameters, and directions for future work.
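As a concrete illustration of the "conventional" end of the spectrum this review covers, the sketch below implements a simple threshold-plus-morphology skull-stripping pipeline in Python. The function name, structuring choices, and iteration counts are illustrative assumptions; published conventional methods (e.g., FSL's BET) are considerably more elaborate.

```python
# Minimal sketch of a conventional, morphology-based skull-stripping pipeline:
# threshold, connected-component analysis, and morphological refinement.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def simple_skull_strip(volume: np.ndarray) -> np.ndarray:
    """volume: 3D MRI intensity array. Returns a binary brain mask."""
    mask = volume > threshold_otsu(volume)              # separate head from background
    mask = ndimage.binary_erosion(mask, iterations=2)   # break thin skull/brain bridges
    labels, n = ndimage.label(mask)                     # connected-component analysis
    if n > 1:
        sizes = np.bincount(labels.ravel())[1:]         # voxels per component (skip background)
        mask = labels == (np.argmax(sizes) + 1)         # keep the largest component as brain
    mask = ndimage.binary_dilation(mask, iterations=2)  # restore the eroded boundary
    return ndimage.binary_fill_holes(mask)              # close interior gaps
```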


Subject(s)
Deep Learning; Skull; Algorithms; Brain/diagnostic imaging; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Neuroimaging; Skull/diagnostic imaging
3.
Comput Biol Med; 133: 104410, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33894501

ABSTRACT

Medical image segmentation is a complex yet essential task for diagnostic procedures such as brain tumor detection. Several 3D convolutional neural network (CNN) architectures have achieved remarkable results in brain tumor segmentation. However, due to the black-box nature of CNNs, integrating such models into decisions about diagnosis and treatment carries high risk in healthcare. The lack of interpretability makes it difficult to explain the rationale behind a model's predictions. Hence, successful deployment of deep learning models in the medical domain requires predictions that are both accurate and transparent. In this paper, we generate 3D visual explanations to analyze a 3D brain tumor segmentation model by extending a post-hoc interpretability technique. We explore the advantages of a gradient-free interpretability approach over gradient-based approaches. Moreover, we interpret the behavior of the segmentation model with respect to the input magnetic resonance imaging (MRI) images and investigate the model's prediction strategy. We also evaluate the extended interpretability methodology quantitatively for medical image segmentation tasks, validating that our visual explanations do not convey false information. We find that the information captured by the model is coherent with the domain knowledge of human experts, making it more trustworthy. We use the BraTS-2018 dataset to train the 3D brain tumor segmentation network and perform interpretability experiments to generate visual explanations.
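The gradient-free, post-hoc explanation idea described above can be illustrated with a simple occlusion-sensitivity pass over a 3D segmentation network: occlude a cubic patch, re-run the model, and record how much the predicted tumor probability mass drops. This is a hedged stand-in for the paper's extended method, not its actual procedure; the function name, patch size, and the "sum of non-background softmax" score are assumptions.

```python
# Minimal sketch of a gradient-free occlusion-sensitivity map for a
# 3D segmentation model; names and scoring choices are illustrative.
import torch

@torch.no_grad()
def occlusion_sensitivity_3d(model, volume, patch=16, stride=16, fill=0.0):
    """volume: (1, C, D, H, W) input MRI; returns a (D, H, W) saliency map."""
    base = model(volume).softmax(dim=1)[:, 1:].sum()   # baseline tumor "mass"
    _, _, D, H, W = volume.shape
    saliency = torch.zeros(D, H, W)
    for z in range(0, D, stride):
        for y in range(0, H, stride):
            for x in range(0, W, stride):
                occluded = volume.clone()
                occluded[..., z:z+patch, y:y+patch, x:x+patch] = fill
                drop = base - model(occluded).softmax(dim=1)[:, 1:].sum()
                saliency[z:z+patch, y:y+patch, x:x+patch] = drop
    return saliency
```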


Subject(s)
Brain Neoplasms; Image Processing, Computer-Assisted; Brain; Brain Neoplasms/diagnostic imaging; Humans; Magnetic Resonance Imaging; Neural Networks, Computer
4.
J Med Imaging (Bellingham); 8(1): 014003, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33585661

ABSTRACT

Purpose: A brain tumor is deadly, and its exact extraction is tricky; at times, its removal is the only way to save a patient, leaving very little room for doctors to make a mistake. Image segmentation algorithms can be used to detect tumors in magnetic resonance imaging (MRI). Irregularity in the size, location, and shape of brain tumors, together with an imbalanced class distribution in the dataset, makes this a challenging task. To deal with these challenges, a region of interest (ROI) is extracted from the images by removing redundant information. Approach: We present a process for extracting ROIs by converting images into the neutrosophic domain. Two modalities, FLAIR and T2, were fused to reduce inhomogeneity in non-tumorous regions, and anisotropic diffusion was then applied to reduce noise. The ROIs, which are the tumorous regions, were extracted using a neutrosophic technique based on the modified S-function. Finally, the extracted ROIs were refined with morphological operations. Results: We evaluated the proposed method on three datasets, including BraTS 2019, and compared the results with state-of-the-art methods. Sensitivity, false negative rate, and the ratio of ROI area to slice area were calculated to evaluate the method. These metrics indicate that the proposed method achieved more than 98% sensitivity and a 1.5% false negative rate, and removed more than 80% of the redundancy. Conclusions: The evaluation metrics indicate that the proposed method removes most of the redundant data from MRI images and that the extracted ROIs consist of tumorous regions.
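For orientation, the sketch below maps MRI intensities into a neutrosophic true-membership channel using a standard piecewise-quadratic S-function; the paper's modified S-function, its parameter selection, and its exact FLAIR/T2 fusion step may differ, so the thresholds and the commented usage lines are illustrative assumptions only.

```python
# Minimal sketch of a standard S-function used as the neutrosophic
# true-membership map; thresholds a < b < c are illustrative placeholders.
import numpy as np

def s_function(x: np.ndarray, a: float, b: float, c: float) -> np.ndarray:
    """Piecewise-quadratic S-function mapping intensities to [0, 1]."""
    t = np.zeros_like(x, dtype=float)
    left = (x > a) & (x <= b)
    right = (x > b) & (x <= c)
    t[left] = (x[left] - a) ** 2 / ((b - a) * (c - a))
    t[right] = 1.0 - (x[right] - c) ** 2 / ((c - b) * (c - a))
    t[x > c] = 1.0
    return t

# Hypothetical usage: fuse FLAIR and T2 slices, then map the fused slice to the
# true-membership channel; high T values flag candidate hyperintense regions.
# fused = 0.5 * (flair_slice + t2_slice)
# a, c = np.percentile(fused, [50, 99]); b = 0.5 * (a + c)
# T = s_function(fused, a, b, c)
```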
