Results 1 - 20 of 65

1.
Neuroimage ; 290: 120560, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38431181

ABSTRACT

Brain extraction and image quality assessment are two fundamental steps in fetal brain magnetic resonance imaging (MRI) 3D reconstruction and quantification. However, the randomness of fetal position and orientation, the variability of fetal brain morphology, the maternal organs surrounding the fetus, and the scarcity of data samples all add excessive noise and pose a great challenge to automated brain extraction and quality assessment of fetal MRI slices. Conventionally, brain extraction and quality assessment are performed independently. However, both focus on the brain image representation, so they can be jointly optimized to ensure the network learns more effective features and avoids overfitting. To this end, we propose a novel two-stage dual-task deep learning framework with a brain localization stage and a dual-task stage for joint brain extraction and quality assessment of fetal MRI slices. Specifically, the dual-task module compactly combines a feature extraction module, a quality assessment head, and a segmentation head with feature fusion for simultaneous brain extraction and quality assessment. In addition, a transformer architecture is introduced into the feature extraction module and the segmentation head. We use a multi-step training strategy to guarantee stable and successful training of all modules. Finally, we validate our method with a 5-fold cross-validation and an ablation study on a dataset of fetal brain MRI slices of varying quality, and additionally perform a cross-dataset validation. Experiments show that the proposed framework achieves very promising performance.


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Humans , Pregnancy , Female , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Head , Fetus/diagnostic imaging
2.
Eur Radiol ; 34(2): 1190-1199, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37615767

ABSTRACT

OBJECTIVES: Existing brain extraction models should be further optimized to provide more information for oncological analysis. We aimed to develop an nnU-Net-based deep learning model for automated brain extraction on contrast-enhanced T1-weighted (T1CE) images in the presence of brain tumors. METHODS: This is a multi-center, retrospective study involving 920 patients. A total of 720 cases with four types of intracranial tumors from private institutions were collected and used as the training group and the internal test group. The Mann-Whitney U test (U test) was used to investigate whether model performance was associated with pathological type and tumor characteristics. The generalization of the model was then independently tested on public datasets consisting of 100 glioma and 100 vestibular schwannoma cases. RESULTS: In the internal test, the model achieved promising performance, with a median Dice similarity coefficient (DSC) of 0.989 (interquartile range (IQR), 0.988-0.991) and a Hausdorff distance (HD) of 6.403 mm (IQR, 5.099-8.426 mm). The U test suggested slightly lower performance in the meningioma and vestibular schwannoma groups. The U test also suggested a significant difference in the peritumoral edema group, with a median DSC of 0.990 (IQR, 0.989-0.991, p = 0.002) and a median HD of 5.916 mm (IQR, 5.000-8.000 mm, p = 0.049). In the external test, our model also showed robust performance, with a median DSC of 0.991 (IQR, 0.983-0.998) and an HD of 8.972 mm (IQR, 6.164-13.710 mm). CONCLUSIONS: For automated processing of MRI neuroimaging data in the presence of brain tumors, the proposed model can perform brain extraction that preserves important superficial structures for oncological analysis. CLINICAL RELEVANCE STATEMENT: The proposed model serves as a radiological tool for image preprocessing in tumor cases, focusing on superficial brain structures, which could streamline the workflow and enhance the efficiency of subsequent radiological assessments.
KEY POINTS: • The nnU-Net-based model is capable of segmenting significant superficial structures in brain extraction. • The proposed model showed feasible performance, regardless of pathological types or tumor characteristics. • The model showed generalization in the public datasets.
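The Dice similarity coefficient and Hausdorff distance reported in these records can be computed directly from binary masks; a minimal sketch (not the authors' evaluation code; note that `directed_hausdorff` operates on voxel-coordinate lists and ignores voxel spacing):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between foreground voxels of two masks."""
    pa, pb = np.argwhere(a), np.argwhere(b)  # N x ndim coordinate lists
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# Toy 2D example: two overlapping 5x5 squares
m1 = np.zeros((10, 10), dtype=bool); m1[2:7, 2:7] = True
m2 = np.zeros((10, 10), dtype=bool); m2[3:8, 3:8] = True
dsc = dice_coefficient(m1, m2)   # 2*16 / (25+25) = 0.64
hd = hausdorff_distance(m1, m2)  # sqrt(2): corners offset by one voxel
```

To report the distance in millimetres, as the papers do, the voxel coordinates would first be scaled by the voxel spacing.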


Subject(s)
Brain Neoplasms , Meningeal Neoplasms , Neuroma, Acoustic , Humans , Retrospective Studies , Neuroma, Acoustic/diagnostic imaging , Image Processing, Computer-Assisted/methods , Brain , Magnetic Resonance Imaging/methods , Brain Neoplasms/diagnostic imaging
3.
BMC Med Imaging ; 23(1): 124, 2023 09 12.
Article in English | MEDLINE | ID: mdl-37700250

ABSTRACT

BACKGROUND: Brain extraction is an essential prerequisite for the automated diagnosis of intracranial lesions and determines, to a certain extent, the accuracy of subsequent lesion recognition, localization, and segmentation. Segmentation using a fully convolutional neural network (FCN) yields high accuracy but a relatively slow extraction speed. METHODS: This paper proposes an integrated algorithm, FABEM, to address the above issues. The method first uses threshold segmentation, a morphological closing operation, a convolutional neural network (CNN), and image filling to generate a specific mask. It then counts the connected regions of the mask. If the number of connected regions equals 1, the extraction is completed by directly multiplying the mask with the original image. Otherwise, for images with a single-region brain distribution, the mask is further segmented using the region growing method; conversely, for images with a multi-region brain distribution, Deeplabv3+ is used to adjust the mask. Finally, the mask is multiplied with the original image to complete the extraction. RESULTS: The algorithm and 5 FCN models were tested on 24 datasets containing different lesions. The algorithm achieved MPA = 0.9968, MIoU = 0.9936, and MBF = 0.9963, comparable to Deeplabv3+, yet with a much faster extraction speed: it can complete the brain extraction of a head CT image in about 0.43 s, roughly 3.8 times faster than Deeplabv3+. CONCLUSION: This method can thus achieve accurate brain extraction from head CT images faster, creating a good basis for subsequent brain volume measurement and feature extraction of intracranial lesions.
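The classical first stage described above (threshold, closing, hole filling, connected-region count, mask multiplication) can be sketched with SciPy; this is an illustrative reconstruction, not the FABEM code, and the CNN and Deeplabv3+ refinement stages are omitted. The threshold and 3x3 structuring element are assumptions:

```python
import numpy as np
from scipy import ndimage

def rough_brain_mask(image, threshold):
    """First-stage mask generation: threshold segmentation, morphological
    closing, hole filling, then a connected-region count (the CNN and
    Deeplabv3+ stages of the paper are omitted in this sketch)."""
    mask = image > threshold                              # threshold segmentation
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_fill_holes(mask)                # image filling
    _, n_regions = ndimage.label(mask)                    # connected regions
    return mask, n_regions

def extract(image, mask):
    """Multiply the mask with the original image to complete the extraction."""
    return image * mask

# Toy CT slice: a single bright blob on a dark background
img = np.zeros((20, 20))
img[5:15, 5:15] = 100.0
mask, n_regions = rough_brain_mask(img, threshold=50.0)
```

With `n_regions == 1`, the extraction would proceed by direct multiplication, mirroring the paper's decision rule.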


Subject(s)
Algorithms , Brain , Humans , Brain/diagnostic imaging , Neural Networks, Computer , Tomography, X-Ray Computed
4.
Neuroimage ; 260: 119474, 2022 10 15.
Article in English | MEDLINE | ID: mdl-35842095

ABSTRACT

The removal of non-brain signal from magnetic resonance imaging (MRI) data, known as skull-stripping, is an integral component of many neuroimage analysis streams. Despite their abundance, popular classical skull-stripping methods are usually tailored to images with specific acquisition properties, namely near-isotropic resolution and T1-weighted (T1w) MRI contrast, which are prevalent in research settings. As a result, existing tools tend to adapt poorly to other image types, such as stacks of thick slices acquired with fast spin-echo (FSE) MRI that are common in the clinic. While learning-based approaches for brain extraction have gained traction in recent years, these methods face a similar burden, as they are only effective for image types seen during the training procedure. To achieve robust skull-stripping across a landscape of imaging protocols, we introduce SynthStrip, a rapid, learning-based brain-extraction tool. By leveraging anatomical segmentations to generate an entirely synthetic training dataset with anatomies, intensity distributions, and artifacts that far exceed the realistic range of medical images, SynthStrip learns to successfully generalize to a variety of real acquired brain images, removing the need for training data with target contrasts. We demonstrate the efficacy of SynthStrip for a diverse set of image acquisitions and resolutions across subject populations, ranging from newborn to adult. We show substantial improvements in accuracy over popular skull-stripping baselines - all with a single trained model. Our method and labeled evaluation data are available at https://w3id.org/synthstrip.


Subject(s)
Brain , Skull , Adult , Brain/diagnostic imaging , Brain/pathology , Contrast Media , Head , Humans , Image Processing, Computer-Assisted/methods , Infant, Newborn , Magnetic Resonance Imaging/methods , Skull/diagnostic imaging , Skull/pathology
5.
Neuroimage ; 258: 119341, 2022 09.
Article in English | MEDLINE | ID: mdl-35654376

ABSTRACT

Brain extraction (masking of extra-cerebral tissues) and alignment are fundamental first steps of most neuroimage analysis pipelines. The lack of automated solutions for 3D ultrasound (US) has therefore limited its potential as a neuroimaging modality for studying fetal brain development using routinely acquired scans. In this work, we propose a convolutional neural network (CNN) that accurately and consistently aligns and extracts the fetal brain from minimally pre-processed 3D US scans. Our multi-task CNN, Brain Extraction and Alignment Network (BEAN), consists of two independent branches: 1) a fully-convolutional encoder-decoder branch for brain extraction of unaligned scans, and 2) a two-step regression-based branch for similarity alignment of the brain to a common coordinate space. BEAN was tested on 356 fetal head 3D scans spanning the gestational range of 14 to 30 weeks, significantly outperforming all current alternatives for fetal brain extraction and alignment. BEAN achieved state-of-the-art performance for both tasks, with a mean Dice Similarity Coefficient (DSC) of 0.94 for the brain extraction masks, and a mean DSC of 0.93 for the alignment of the target brain masks. The presented experimental results show that brain structures such as the thalamus, choroid plexus, cavum septum pellucidum, and Sylvian fissure, are consistently aligned throughout the dataset and remain clearly visible when the scans are averaged together. The BEAN implementation and related code can be found under www.github.com/felipemoser/kelluwen.


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Brain/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Neuroimaging/methods
6.
J Digit Imaging ; 35(2): 374-384, 2022 04.
Article in English | MEDLINE | ID: mdl-35083619

ABSTRACT

This study proposed and evaluated a two-dimensional (2D) slice-based multi-view U-Net (MVU-Net) architecture for skull stripping. The proposed model fused all three T1-weighted brain magnetic resonance imaging (MRI) views, i.e., axial, coronal, and sagittal. This 2D method performed as well as a three-dimensional (3D) skull stripping model while using fewer computational resources. The predictions of all three views were fused linearly, producing a final brain mask with better accuracy and efficiency. Two publicly available datasets, the Internet Brain Segmentation Repository (IBSR) and the Neurofeedback Skull-stripped (NFBS) repository, were used for training and testing. The MVU-Net, U-Net, and skip connection U-Net (SCU-Net) architectures were then compared. On the IBSR dataset, the MVU-Net architecture attained a better mean Dice similarity coefficient (DSC), sensitivity, and specificity than U-Net and SCU-Net, at 0.9184, 0.9397, and 0.9908, respectively. Similarly, on the NFBS dataset the MVU-Net architecture achieved a better mean DSC, sensitivity, and specificity, at 0.9681, 0.9763, and 0.9954, respectively, than U-Net and SCU-Net.
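The linear fusion of the three view-wise predictions amounts to a weighted average of per-view probability maps followed by thresholding; a toy sketch (equal weights and the 0.5 threshold are assumptions, not the published MVU-Net configuration):

```python
import numpy as np

def fuse_views(p_axial, p_coronal, p_sagittal, weights=(1/3, 1/3, 1/3), thr=0.5):
    """Linearly fuse per-view brain-probability maps (already resampled into a
    common space) into a single binary brain mask."""
    fused = (weights[0] * p_axial
             + weights[1] * p_coronal
             + weights[2] * p_sagittal)
    return fused >= thr

# Toy 4x4 "slice": ground-truth brain in the centre, three imperfect views
truth = np.zeros((4, 4)); truth[1:3, 1:3] = 1.0
p_ax = np.clip(truth + 0.10, 0.0, 1.0)  # slightly over-confident view
p_co = np.clip(truth - 0.20, 0.0, 1.0)  # slightly under-confident view
p_sa = truth.copy()                     # ideal view
mask = fuse_views(p_ax, p_co, p_sa)
```

Averaging lets views that disagree on a voxel be outvoted, which is why the fused mask can beat any single view.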


Subject(s)
Image Processing, Computer-Assisted , Neurofeedback , Humans , Image Processing, Computer-Assisted/methods , Internet , Magnetic Resonance Imaging/methods , Skull/diagnostic imaging
7.
J Magn Reson Imaging ; 2021 Jun 16.
Article in English | MEDLINE | ID: mdl-34137113

ABSTRACT

BACKGROUND: Manual brain extraction from magnetic resonance (MR) images is time-consuming and prone to intra- and inter-rater variability. Several automated approaches have been developed to alleviate these constraints, including deep learning pipelines. However, these methods tend to perform worse on data from unseen magnetic resonance imaging (MRI) scanner vendors and different imaging protocols. PURPOSE: To present and evaluate for clinical use PARIETAL, a pre-trained deep learning brain extraction method. We compare its reproducibility in a scan/rescan analysis and its robustness across scanners of different manufacturers. STUDY TYPE: Retrospective. POPULATION: Twenty-one subjects (12 women), age range 22-48 years, scanned on three different MRI scanners with scan/rescan acquisitions on each. FIELD STRENGTH/SEQUENCE: T1-weighted images acquired on a 3-T Siemens scanner with a magnetization-prepared rapid gradient-echo sequence and on two 1.5-T scanners, Philips and GE, with spin-echo and spoiled gradient-recalled (SPGR) sequences, respectively. ASSESSMENT: Analysis of the intracranial cavity volumes obtained for each subject on the three different scanners and the scan/rescan acquisitions. STATISTICAL TESTS: Parametric permutation tests of the differences in volumes to rank and statistically evaluate the performance of PARIETAL compared to state-of-the-art methods. RESULTS: The mean absolute intracranial volume differences obtained by PARIETAL in the scan/rescan analysis were 1.88 mL, 3.91 mL, and 4.71 mL for the Siemens, GE, and Philips scanners, respectively. PARIETAL was the best-ranked method on the Siemens and GE scanners, dropping to Rank 2 on the Philips images. Intracranial volume differences for the same subject between scanners were 5.46 mL, 27.16 mL, and 30.44 mL for the GE/Philips, Siemens/Philips, and Siemens/GE comparisons, respectively. The permutation tests revealed that PARIETAL was always in Rank 1, obtaining the most similar volumetric results between scanners. DATA CONCLUSION: PARIETAL accurately segments the brain and generalizes to images acquired at different sites without the need for retraining or fine-tuning. PARIETAL is publicly available. LEVEL OF EVIDENCE: 2. TECHNICAL EFFICACY STAGE: 2.
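The volumetric comparison above reduces to converting each binary intracranial mask into millilitres and taking absolute differences between acquisitions; a minimal sketch with synthetic masks (not the PARIETAL code):

```python
import numpy as np

def intracranial_volume_ml(mask, voxel_size_mm=(1.0, 1.0, 1.0)):
    """Volume of a binary intracranial mask in millilitres (1 mL = 1000 mm^3)."""
    voxel_mm3 = float(np.prod(voxel_size_mm))
    return mask.sum() * voxel_mm3 / 1000.0

# Synthetic 1 mm isotropic scan/rescan masks differing by 5,000 voxels
scan = np.zeros((100, 100, 100), dtype=bool)
scan[10:60, 10:60, 10:60] = True            # 125,000 voxels = 125 mL
rescan = scan.copy()
rescan[60:62, 10:60, 10:60] = True          # adds 5,000 voxels = 5 mL
abs_diff_ml = abs(intracranial_volume_ml(rescan) - intracranial_volume_ml(scan))
```

With real data the voxel size would come from the image header, since scanners rarely share an identical sampling grid.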

8.
Sensors (Basel) ; 21(21)2021 Oct 28.
Article in English | MEDLINE | ID: mdl-34770479

ABSTRACT

Ischemic stroke is one of the leading causes of death among the aged population worldwide. Experimental stroke models with rodents play a fundamental role in the investigation of the mechanisms and impairments of cerebral ischemia. For its speed and reliability, 2,3,5-triphenyltetrazolium chloride (TTC) staining of rat brains has been extensively adopted to visualize the infarction, which is subsequently photographed for further processing. Two important tasks are to segment the brain regions and to compute the midline that separates the hemispheres. This paper investigates automatic brain extraction and hemisphere segmentation algorithms for camera-based TTC-stained rat brain images. For rat brain extraction, a saliency region detection scheme on a superpixel image is exploited to extract the brain regions from the complex raw image. Subsequently, the initial brain slices are refined using a parametric deformable model associated with a color image transformation. For rat hemisphere segmentation, open curve evolution guided by the gradient vector flow in a medial subimage is developed to compute the midline. A wide variety of TTC-stained rat brain images captured by a smartphone were produced and utilized to evaluate the proposed segmentation frameworks. Experimental results on the segmentation of rat brains and cerebral hemispheres indicated that the developed schemes achieved high accuracy, with average Dice scores of 92.33% and 97.15%, respectively. The established segmentation algorithms show promise for facilitating experimental stroke studies with TTC-stained rat brain images.


Subject(s)
Brain Ischemia , Cerebrum , Stroke , Algorithms , Animals , Brain/diagnostic imaging , Brain Ischemia/diagnostic imaging , Image Processing, Computer-Assisted , Rats , Stroke/diagnostic imaging , Tetrazolium Salts
9.
Neuroimage ; 220: 117081, 2020 10 15.
Article in English | MEDLINE | ID: mdl-32603860

ABSTRACT

Brain extraction, or skull-stripping, is an essential pre-processing step in neuro-imaging that has a direct impact on the quality of all subsequent processing and analysis steps. It is also a key requirement in multi-institutional collaborations to comply with privacy-preserving regulations. Existing automated methods, including Deep Learning (DL) based methods that have obtained state-of-the-art results in recent years, have primarily targeted brain extraction without considering pathologically-affected brains. Accordingly, they perform sub-optimally when applied on magnetic resonance imaging (MRI) brain scans with apparent pathologies such as brain tumors. Furthermore, existing methods focus on using only T1-weighted MRI scans, even though multi-parametric MRI (mpMRI) scans are routinely acquired for patients with suspected brain tumors. In this study, we present a comprehensive performance evaluation of recent deep learning architectures for brain extraction, training models on mpMRI scans of pathologically-affected brains, with a particular focus on seeking a practically-applicable, low computational footprint approach, generalizable across multiple institutions, further facilitating collaborations. We identified a large retrospective multi-institutional dataset of n=3340 mpMRI brain tumor scans, with manually-inspected and approved gold-standard segmentations, acquired during standard clinical practice under varying acquisition protocols, both from private institutional data and public (TCIA) collections. To facilitate optimal utilization of rich mpMRI data, we further introduce and evaluate a novel "modality-agnostic training" technique that can be applied using any available modality, without the need for model retraining. Our results indicate that the modality-agnostic approach obtains accurate results, providing a generic and practical tool for brain extraction on scans with brain tumors.
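One plausible reading of "modality-agnostic training", training so that any single available modality can be fed to the model, can be sketched as a per-sample modality selection step in the data loader (a hypothetical illustration; the modality names and the sampling scheme are assumptions, not the paper's implementation):

```python
import numpy as np

def sample_modality(volumes, rng):
    """Pick one available modality at random for a training sample.
    `volumes` maps modality names to image arrays, with None standing in
    for modalities missing from this particular acquisition."""
    available = [name for name, vol in volumes.items() if vol is not None]
    name = str(rng.choice(available))
    return name, volumes[name]

# Hypothetical subject with only T1 and T2 available (names illustrative)
rng = np.random.default_rng(42)
subject = {"t1": np.zeros((4, 4)), "t1ce": None,
           "t2": np.ones((4, 4)), "flair": None}
name, vol = sample_modality(subject, rng)
```

Because the network only ever sees one modality at a time, no retraining is needed when a site can supply only part of the mpMRI protocol.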


Subject(s)
Brain Neoplasms/diagnostic imaging , Brain/diagnostic imaging , Glioma/diagnostic imaging , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Databases, Factual , Deep Learning , Humans , Retrospective Studies
10.
J Digit Imaging ; 33(6): 1443-1464, 2020 12.
Article in English | MEDLINE | ID: mdl-32666364

ABSTRACT

Several neuroimaging processing applications consider skull stripping a crucial pre-processing step. Due to the complex anatomical structure of the brain and intensity variations in brain magnetic resonance imaging (MRI), appropriate skull stripping is an important and challenging part of the pipeline. Skull stripping removes the skull region for clinical analysis in brain segmentation tasks, and its accuracy and efficiency are crucial for diagnostic purposes; it requires accurate and detailed methods for differentiating brain regions from skull regions. This paper focuses on the transition from conventional to machine- and deep-learning-based automated skull stripping methods for brain MRI images. This study observes that deep learning approaches have outperformed conventional and machine learning techniques in many ways, but they have their own limitations. The paper also includes a comparative analysis of current state-of-the-art skull stripping methods, a critical discussion of open challenges, quantitative evaluation parameters, and directions for future work.


Subject(s)
Deep Learning , Skull , Algorithms , Brain/diagnostic imaging , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Neuroimaging , Skull/diagnostic imaging
11.
Hum Brain Mapp ; 40(17): 4952-4964, 2019 12 01.
Article in English | MEDLINE | ID: mdl-31403237

ABSTRACT

Brain extraction is a critical preprocessing step in the analysis of neuroimaging studies conducted with magnetic resonance imaging (MRI) and influences the accuracy of downstream analyses. The majority of brain extraction algorithms are, however, optimized for processing healthy brains and thus frequently fail in the presence of pathologically altered brains or when applied to heterogeneous MRI datasets. Here we introduce a new, rigorously validated algorithm (termed HD-BET) relying on artificial neural networks that aims to overcome these limitations. We demonstrate that HD-BET outperforms six popular, publicly available brain extraction algorithms in several large-scale neuroimaging datasets, including one from a prospective multicentric trial in neuro-oncology, yielding state-of-the-art performance with median improvements of +1.16 to +2.50 points for the Dice coefficient and -0.66 to -2.51 mm for the Hausdorff distance. Importantly, the HD-BET algorithm, which shows robust performance in the presence of pathology or treatment-induced tissue alterations, is applicable to a broad range of MRI sequence types and is not influenced by variations in MRI hardware and acquisition parameters encountered in both research and clinical practice. For broader accessibility, the HD-BET prediction algorithm is made freely available (www.neuroAI-HD.org) and may become an essential component for robust, automated, high-throughput processing of MRI neuroimaging data.


Subject(s)
Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Neural Networks, Computer , Algorithms , Humans , Neuroimaging/methods
12.
Neuroimage ; 175: 32-44, 2018 07 15.
Article in English | MEDLINE | ID: mdl-29604454

ABSTRACT

Brain extraction or skull stripping of magnetic resonance images (MRI) is an essential step in neuroimaging studies, the accuracy of which can severely affect subsequent image processing procedures. Current automatic brain extraction methods demonstrate good results on human brains, but are often far from satisfactory on nonhuman primates, which are a necessary part of neuroscience research. To overcome the challenges of brain extraction in nonhuman primates, we propose a fully-automated brain extraction pipeline combining deep Bayesian convolutional neural network (CNN) and fully connected three-dimensional (3D) conditional random field (CRF). The deep Bayesian CNN, Bayesian SegNet, is used as the core segmentation engine. As a probabilistic network, it is not only able to perform accurate high-resolution pixel-wise brain segmentation, but also capable of measuring the model uncertainty by Monte Carlo sampling with dropout in the testing stage. Then, fully connected 3D CRF is used to refine the probability result from Bayesian SegNet in the whole 3D context of the brain volume. The proposed method was evaluated with a manually brain-extracted dataset comprising T1w images of 100 nonhuman primates. Our method outperforms six popular publicly available brain extraction packages and three well-established deep learning based methods with a mean Dice coefficient of 0.985 and a mean average symmetric surface distance of 0.220 mm. A better performance against all the compared methods was verified by statistical tests (all p-values < 10⁻⁴, two-sided, Bonferroni corrected). The maximum uncertainty of the model on nonhuman primate brain extraction has a mean value of 0.116 across all the 100 subjects. The behavior of the uncertainty was also studied, which shows the uncertainty increases as the training set size decreases, the number of inconsistent labels in the training set increases, or the inconsistency between the training set and the testing set increases.
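The Monte Carlo dropout uncertainty measure described above amounts to keeping dropout active at test time, running several stochastic forward passes, and summarizing their spread; a numeric sketch with simulated passes standing in for a real network:

```python
import numpy as np

def mc_dropout_uncertainty(stochastic_probs):
    """Summarize T stochastic forward passes (a T x H x W array of foreground
    probabilities sampled with dropout kept active at test time) into a
    predictive mean and a per-pixel uncertainty (predictive variance)."""
    mean = stochastic_probs.mean(axis=0)
    variance = stochastic_probs.var(axis=0)
    return mean, variance

# Simulated passes: the left half is confident (passes agree), the right
# half is uncertain (passes disagree); no actual network is involved here.
rng = np.random.default_rng(1)
confident = np.full((8, 2, 2), 0.95)
uncertain = rng.uniform(0.0, 1.0, size=(8, 2, 2))
passes = np.concatenate([confident, uncertain], axis=2)  # shape (8, 2, 4)
mean, var = mc_dropout_uncertainty(passes)
```

Pixels where the passes disagree receive high variance, which is the behaviour the paper exploits to flag unreliable regions.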


Subject(s)
Brain/diagnostic imaging , Deep Learning , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Animals , Bayes Theorem , Female , Macaca mulatta , Male
13.
Neuroimage ; 176: 431-445, 2018 08 01.
Article in English | MEDLINE | ID: mdl-29730494

ABSTRACT

Brain extraction from 3D medical images is a common pre-processing step. A variety of approaches exist, but they are frequently only designed to perform brain extraction from images without strong pathologies. Extracting the brain from images exhibiting strong pathologies, for example, the presence of a brain tumor or of a traumatic brain injury (TBI), is challenging. In such cases, tissue appearance may substantially deviate from normal tissue appearance and hence violates algorithmic assumptions for standard approaches to brain extraction; consequently, the brain may not be correctly extracted. This paper proposes a brain extraction approach which can explicitly account for pathologies by jointly modeling normal tissue appearance and pathologies. Specifically, our model uses a three-part image decomposition: (1) normal tissue appearance is captured by principal component analysis (PCA), (2) pathologies are captured via a total variation term, and (3) the skull and surrounding tissue is captured by a sparsity term. Due to its convexity, the resulting decomposition model allows for efficient optimization. Decomposition and image registration steps are alternated to allow statistical modeling of normal tissue appearance in a fixed atlas coordinate system. As a beneficial side effect, the decomposition model allows for the identification of potentially pathological areas and the reconstruction of a quasi-normal image in atlas space. We demonstrate the effectiveness of our approach on four datasets: the publicly available IBSR and LPBA40 datasets which show normal image appearance, the BRATS dataset containing images with brain tumors, and a dataset containing clinical TBI images. We compare the performance with other popular brain extraction models: ROBEX, BEaST, MASS, BET, BSE and a recently proposed deep learning approach. Our model performs better than these competing approaches on all four datasets. 
Specifically, our model achieves the best median (97.11) and mean (96.88) Dice scores over all datasets. The two best performing competitors, ROBEX and MASS, achieve scores of 96.23/95.62 and 96.67/94.25 respectively. Hence, our approach is an effective method for high quality brain extraction for a wide variety of images.


Subject(s)
Brain/diagnostic imaging , Brain/pathology , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Models, Theoretical , Neuroimaging/methods , Humans , Principal Component Analysis
14.
Neuroimage ; 170: 482-494, 2018 04 15.
Article in English | MEDLINE | ID: mdl-28807870

ABSTRACT

This paper presents an open, multi-vendor, multi-field strength magnetic resonance (MR) T1-weighted volumetric brain imaging dataset, named Calgary-Campinas-359 (CC-359). The dataset is composed of images of older healthy adults (29-80 years) acquired on scanners from three vendors (Siemens, Philips, and General Electric) at both 1.5 T and 3 T. CC-359 comprises 359 datasets, approximately 60 subjects per vendor and magnetic field strength. The dataset is approximately age- and gender-balanced, subject to the constraints of the available images. It provides consensus brain extraction masks for all volumes, generated using supervised classification. Manual segmentation results for twelve randomly selected subjects, performed by an expert, are also provided. The CC-359 dataset allows investigation of 1) the influence of both vendor and magnetic field strength on quantitative analysis of brain MR; 2) parameter optimization for automatic segmentation methods; and potentially 3) machine learning classifiers with big data, specifically those based on deep learning methods, as these approaches require a large amount of data. To illustrate the utility of this dataset, we compared the results of a supervised classifier with those of eight publicly available skull stripping methods and one publicly available consensus algorithm. A linear mixed effects model analysis indicated that vendor (p-value < 0.001) and magnetic field strength (p-value < 0.001) have statistically significant impacts on skull stripping results.


Subject(s)
Brain/diagnostic imaging , Consensus , Datasets as Topic , Image Processing, Computer-Assisted/methods , Machine Learning , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Adult , Aged , Aged, 80 and over , Female , Humans , Magnetic Fields , Male , Middle Aged , Skull/diagnostic imaging , Software
15.
Hum Brain Mapp ; 39(11): 4241-4257, 2018 11.
Article in English | MEDLINE | ID: mdl-29972616

ABSTRACT

Brain extraction is an important first step in many magnetic resonance neuroimaging studies. Due to variability in brain morphology and in the appearance of the brain caused by differences in scanner acquisition parameters, the development of a generally applicable brain extraction algorithm has proven challenging. Learning-based brain extraction algorithms in particular perform well when the target and training images are sufficiently similar, but often perform worse when this condition is not met. In this study, we propose a new patch-based multi-atlas segmentation method for brain extraction which is specifically developed for accurate and robust processing across datasets. Using a diverse collection of labeled images from 5 different datasets, extensive comparisons were made with 9 other commonly used brain extraction methods, both before and after applying error correction (a machine learning method for automatically correcting segmentation errors) to each method. The proposed method performed as well as or better than the other methods in each of two segmentation scenarios: a challenging inter-dataset segmentation scenario in which no dataset-specific atlases were used (mean Dice coefficient 98.57%, volumetric correlation 0.994 across datasets following error correction), and an intra-dataset segmentation scenario in which only dataset-specific atlases were used (mean Dice coefficient 99.02%, volumetric correlation 0.998 across datasets following error correction). Furthermore, combined with error correction, the proposed method runs in less than one-tenth of the time required by the other top-performing methods in the challenging inter-dataset comparisons. Validation on an independent multi-centre dataset also confirmed the excellent performance of the proposed method.
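The core patch-based multi-atlas idea, labelling each target patch by its most similar atlas patches, can be illustrated with a tiny nearest-patch search (a toy 1-nearest-neighbour sketch; the published method aggregates many non-local patches and adds error correction):

```python
import numpy as np

def nearest_patch_label(target_patch, atlas_patches, atlas_labels):
    """Label a target patch with the label of the atlas patch that has the
    smallest sum-of-squared intensity difference (1-nearest-neighbour fusion)."""
    dists = [((target_patch - p) ** 2).sum() for p in atlas_patches]
    return atlas_labels[int(np.argmin(dists))]

# Toy atlas library: bright patches are brain (1), dark ones background (0)
atlas_patches = [np.full((3, 3), v) for v in (0.1, 0.2, 0.8, 0.9)]
atlas_labels = [0, 0, 1, 1]
bright = np.full((3, 3), 0.82)  # resembles the bright atlas patches
dark = np.full((3, 3), 0.12)    # resembles the dark atlas patches
```

In practice the search runs over a spatial neighbourhood in every atlas and the retrieved labels are fused by weighted voting rather than a single nearest match.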


Subject(s)
Brain/diagnostic imaging , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Adult , Aged , Algorithms , Atlases as Topic , Child , Female , Humans , Male , Multicenter Studies as Topic , Pattern Recognition, Automated/methods , Young Adult
16.
Neuroimage ; 146: 132-147, 2017 02 01.
Article in English | MEDLINE | ID: mdl-27864083

ABSTRACT

Automatic skull-stripping or brain extraction of magnetic resonance (MR) images is often a fundamental step in many neuroimage processing pipelines. The accuracy of subsequent image processing relies on the accuracy of the skull-stripping. Although many automated stripping methods have been proposed in the past, it remains an active area of research, particularly in the context of brain pathology. Most stripping methods are validated on T1-w MR images of normal brains, especially because high-resolution T1-w sequences are widely acquired and ground-truth manual brain mask segmentations are publicly available for normal brains. However, different MR acquisition protocols can provide complementary information about the brain tissues, which can be exploited for better distinction between brain, cerebrospinal fluid, and unwanted tissues such as skull, dura, marrow, or fat. This is especially true in the presence of pathology, where hemorrhages or other types of lesions can have intensities similar to skull in a T1-w image. In this paper, we propose a sparse patch-based Multi-cONtrast brain STRipping method (MONSTR), where non-local patch information from one or more atlases, which contain multiple MR sequences and reference delineations of brain masks, is combined to generate a target brain mask. We compared MONSTR with four state-of-the-art, publicly available methods: BEaST, SPECTRE, ROBEX, and OptiBET. We evaluated the performance of these methods on 6 datasets consisting of both healthy subjects and patients with various pathologies. Three datasets (ADNI, MRBrainS, NAMIC) are publicly available, consisting of 44 healthy volunteers and 10 patients with schizophrenia. The other three, in-house datasets comprising 87 subjects in total, consisted of patients with mild to severe traumatic brain injury, brain tumors, and various movement disorders. A combination of T1-w and T2-w images was used to skull-strip these datasets.
We show significant improvement in stripping over the competing methods on both healthy and pathological brains. We also show that our multi-contrast framework is robust and maintains accurate performance across different types of acquisitions and scanners, even when using normal brains as atlases to strip pathological brains, demonstrating that our algorithm is applicable even when reference segmentations of pathological brains are not available to be used as atlases.
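The non-local patch fusion idea summarized above can be sketched for a single voxel: candidate patches from the atlases vote on the brain-mask label, weighted by their intensity similarity to the target patch. This is an illustrative sketch, not the authors' MONSTR implementation; the function name, the decay parameter `h`, and the toy data are all hypothetical.

```python
import numpy as np

def fuse_patch_labels(target_patch, atlas_patches, atlas_labels, h=0.1):
    """Non-local weighted label vote for one voxel (illustrative sketch).

    target_patch : (C*P,) flattened multi-contrast patch around the voxel
    atlas_patches: (N, C*P) candidate patches from the atlas search window
    atlas_labels : (N,) brain-mask label (0/1) at each candidate's centre
    h            : decay parameter controlling how fast weights fall off
    """
    # Similarity weights: smaller intensity distance -> larger weight
    d2 = np.sum((atlas_patches - target_patch) ** 2, axis=1)
    w = np.exp(-d2 / (h * atlas_patches.shape[1]))
    # Weighted vote over the candidate labels, in [0, 1]
    return float(np.dot(w, atlas_labels) / (w.sum() + 1e-12))

# Toy example: the target patch resembles the "brain" candidates
target = np.ones(8)
cands = np.vstack([np.ones((3, 8)), np.zeros((2, 8))])  # 3 brain-like, 2 background-like
labels = np.array([1, 1, 1, 0, 0], dtype=float)
print(fuse_patch_labels(target, cands, labels))  # close to 1 -> brain
```

Thresholding this soft vote at 0.5 over all voxels would yield a binary brain mask; MONSTR's actual sparse, multi-atlas machinery is considerably more involved.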


Subject(s)
Brain Mapping, Brain/anatomy & histology, Brain/pathology, Magnetic Resonance Imaging, Skull/anatomy & histology, Skull/pathology, Algorithms, Atlases as Topic, Brain/diagnostic imaging, Contrast Media, Databases, Factual, Humans, Image Enhancement, Image Processing, Computer-Assisted, Pattern Recognition, Automated, Skull/diagnostic imaging
17.
Neuroimage ; 155: 460-472, 2017 07 15.
Article in English | MEDLINE | ID: mdl-28408290

ABSTRACT

Most fetal brain MRI reconstruction algorithms rely only on brain tissue-relevant voxels of low-resolution (LR) images to enhance the quality of inter-slice motion correction and image reconstruction. Consequently, the fetal brain needs to be localized and extracted as a first step, which is usually a laborious and time-consuming manual or semi-automatic task. In this work we propose to use age-matched template images as prior knowledge to automate brain localization and extraction. This is achieved through a novel automatic brain localization and extraction method based on robust template-to-slice block matching and deformable slice-to-template registration. Our template-based approach also enables the reconstruction of fetal brain images in standard radiological anatomical planes in a common coordinate space. We have integrated this approach into our new reconstruction pipeline, which involves intensity normalization, inter-slice motion correction, and super-resolution (SR) reconstruction. To this end we have adopted a novel approach based on projecting every slice of the LR brain masks into the template space using a fusion strategy. This enables the refinement of brain masks in the LR images at each motion correction iteration. The overall brain localization and extraction algorithm produces brain masks that are very close to manually drawn brain masks, with an average Dice overlap measure of 94.5%. We also demonstrate that adopting slice-to-template registration and propagating the brain mask slice-by-slice leads to a significant improvement in brain extraction performance compared to global rigid brain extraction, and consequently in the quality of the final reconstructed images. Ratings performed by two expert observers show that the proposed pipeline can achieve reconstruction quality similar to a reference reconstruction based on manual slice-by-slice brain extraction.
The proposed brain mask refinement and reconstruction method provides promising results in automatic fetal brain MRI segmentation and volumetry in 26 fetuses with a gestational age range of 23 to 38 weeks.
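The Dice overlap measure reported above (94.5%) is the standard agreement score between two binary masks: twice the intersection divided by the sum of the mask sizes. A minimal sketch on toy 2D masks (not the authors' evaluation code):

```python
import numpy as np

def dice_overlap(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks, in [0, 1]."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy masks: automated mask covers 4 voxels, manual covers 6, 4 shared
auto = np.zeros((4, 4), dtype=int)
auto[1:3, 1:3] = 1
manual = np.zeros((4, 4), dtype=int)
manual[1:3, 1:4] = 1
print(dice_overlap(auto, manual))  # 2*4 / (4+6) = 0.8
```

The same formula applies unchanged to 3D brain masks; a score of 0.945 means the automated and manual masks agree almost voxel-for-voxel.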


Subject(s)
Brain/diagnostic imaging, Brain/embryology, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Neuroimaging/methods, Prenatal Diagnosis/methods, Female, Gestational Age, Humans, Pregnancy
18.
Neuroimage ; 129: 460-469, 2016 Apr 01.
Article in English | MEDLINE | ID: mdl-26808333

ABSTRACT

Brain extraction from magnetic resonance imaging (MRI) is crucial for many neuroimaging workflows. Current methods demonstrate good results on non-enhanced T1-weighted images, but struggle when confronted with other modalities and pathologically altered tissue. In this paper we present a 3D convolutional deep learning architecture to address these shortcomings. In contrast to existing methods, we are not limited to non-enhanced T1w images. When trained appropriately, our approach handles an arbitrary number of modalities, including contrast-enhanced scans. Its applicability to MRI data comprising four channels (non-enhanced and contrast-enhanced T1w, T2w, and FLAIR contrasts) is demonstrated on a challenging clinical data set containing brain tumors (N=53), where our approach significantly outperforms six commonly used tools with a mean Dice score of 95.19. Further, the proposed method at least matches state-of-the-art performance, as demonstrated on three publicly available data sets: IBSR, LPBA40, and OASIS, totaling N=135 volumes. For the IBSR (96.32) and LPBA40 (96.96) data sets the convolutional neural network (CNN) obtains the highest average Dice scores, albeit not significantly different from the second-best performing method. For the OASIS data the second-best Dice score (95.02) is achieved, with no statistical difference from the best performing tool. For all data sets the method achieves the highest average specificity, whereas its sensitivity is about average. Adjusting the cut-off threshold used to generate binary masks from the CNN's probability output can increase the sensitivity of the method. This comes at the cost of decreased specificity, and the trade-off has to be decided per application. Using an optimized GPU implementation, predictions can be obtained in less than one minute. The proposed method may prove useful for large-scale studies and clinical trials.
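The sensitivity/specificity trade-off from thresholding a probability map, as described above, can be illustrated with fully synthetic data (this is not the paper's network or data; the toy probability map is just ground truth plus Gaussian noise):

```python
import numpy as np

def binarize(prob_map, threshold=0.5):
    """Turn a voxel-wise probability map into a binary brain mask."""
    return (prob_map >= threshold).astype(np.uint8)

def sensitivity_specificity(pred, truth):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic "CNN output": true mask plus noise, clipped to [0, 1]
rng = np.random.default_rng(0)
truth = np.zeros((32, 32), dtype=bool)
truth[8:24, 8:24] = True
prob = np.clip(truth + rng.normal(0, 0.3, truth.shape), 0, 1)

for t in (0.25, 0.5, 0.75):
    sens, spec = sensitivity_specificity(binarize(prob, t), truth)
    print(f"threshold={t}: sensitivity={sens:.3f} specificity={spec:.3f}")
```

Lowering the threshold admits more voxels into the mask, raising sensitivity while lowering specificity, which is exactly the application-specific choice the abstract refers to.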


Subject(s)
Brain Neoplasms/diagnostic imaging, Image Interpretation, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Neuroimaging/methods, Humans, Image Enhancement/methods, Machine Learning, Magnetic Resonance Imaging/methods, Neural Networks, Computer, Skull
19.
J Digit Imaging ; 29(3): 365-79, 2016 06.
Article in English | MEDLINE | ID: mdl-26628083

ABSTRACT

High-resolution magnetic resonance (MR) brain images contain non-brain tissues such as skin, fat, muscle, neck, and eyeballs, in contrast to functional images, namely positron emission tomography (PET), single photon emission computed tomography (SPECT), and functional magnetic resonance imaging (fMRI), which usually contain relatively little non-brain tissue. The presence of these non-brain tissues is a major obstacle for automatic brain image segmentation and analysis techniques. Therefore, quantitative morphometric studies of MR brain images often require a preliminary processing step to isolate the brain from extra-cranial or non-brain tissues, commonly referred to as skull stripping. This paper describes the available skull stripping methods and presents an exploratory review of the recent literature on them.


Subject(s)
Brain/diagnostic imaging, Magnetic Resonance Imaging/methods, Neuroimaging/methods, Skull/diagnostic imaging, Brain Mapping, Humans
20.
Neuroimage ; 114: 379-85, 2015 Jul 01.
Article in English | MEDLINE | ID: mdl-25862260

ABSTRACT

BACKGROUND: X-ray computed tomography (CT) imaging of the brain is commonly used in diagnostic settings. Although CT scans are primarily used in clinical practice, they are increasingly used in research. A fundamental processing step in brain imaging research is brain extraction, the process of separating the brain tissue from all other tissues. Methods for brain extraction have either been 1) validated but not fully automated, or 2) fully automated and informally proposed, but never formally validated. AIM: To systematically analyze and validate the performance of FSL's brain extraction tool (BET) on head CT images of patients with intracranial hemorrhage. This was done by comparing the manual gold standard with the results of several versions of automatic brain extraction and by estimating the reliability of automated segmentation of longitudinal scans. The effects of the choice of BET parameters and of data smoothing are studied and reported. METHODS: All images were thresholded using a 0-100 Hounsfield unit (HU) range. In one variant of the pipeline, data were smoothed using a 3-dimensional Gaussian kernel (σ = 1 mm³) and re-thresholded to 0-100 HU; in the other, data were not smoothed. BET was applied using 1 of 3 fractional intensity (FI) thresholds: 0.01, 0.1, or 0.35, and any holes in the brain mask were filled. For validation against a manual segmentation, 36 images from patients with intracranial hemorrhage were selected from 19 different centers in the MISTIE (Minimally Invasive Surgery plus recombinant tissue plasminogen activator for Intracerebral Evacuation) stroke trial. Intracranial masks of the brain were manually created by one expert CT reader. The resulting brain tissue masks were quantitatively compared to the manual segmentations using sensitivity, specificity, accuracy, and the Dice Similarity Index (DSI). Brain extraction performance across smoothing and FI thresholds was compared using the Wilcoxon signed-rank test.
The intracranial volume (ICV) of each scan was estimated by multiplying the number of voxels in the brain mask by the dimensions of each voxel for that scan. From this, we calculated the ICV ratio comparing manual and automated segmentation: ICV_automated / ICV_manual. To estimate performance on a large number of scans, brain masks were generated from the 6 BET pipelines for 1095 longitudinal scans from 129 patients. Failure rates were estimated from visual inspection. The ICV of each scan was estimated, and an intraclass correlation (ICC) was estimated using a one-way ANOVA. RESULTS: Smoothing images improves brain extraction results using BET for all measures except specificity (all p<0.01, uncorrected), irrespective of the FI threshold. Using an FI of 0.01 or 0.1 performed better than 0.35. Thus, all reported results refer only to smoothed data using an FI of 0.01 or 0.1. Using an FI of 0.01 had a higher median sensitivity (0.9901) than an FI of 0.1 (0.9884; median difference: 0.0014, p<0.001), accuracy (0.9971 vs. 0.9971; median difference: 0.0001, p<0.001), and DSI (0.9895 vs. 0.9894; median difference: 0.0004, p<0.001), and lower specificity (0.9981 vs. 0.9982; median difference: -0.0001, p<0.001). These measures are all very high, indicating that a range of FI values may produce visually indistinguishable brain extractions. Using smoothed data and an FI of 0.01, the mean (SD) ICV ratio was 1.002 (0.008); the mean being close to 1 indicates that the ICV estimates are similar for automated and manual segmentation. In the 1095 longitudinal scans, this pipeline had a low failure rate (5.2%), and the ICC estimate was high (0.929, 95% CI: 0.91, 0.945) for successfully extracted brains. CONCLUSION: BET performs well at brain extraction on thresholded, 1 mm³ smoothed CT images with an FI of 0.01 or 0.1. Smoothing before applying BET is an important step not previously discussed in the literature. Analysis code is provided.
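The ICV computation described in METHODS (voxel count times single-voxel volume, then the automated/manual ratio) can be sketched as follows. The masks and voxel dimensions here are synthetic placeholders; this is not the trial's analysis code.

```python
import numpy as np

def intracranial_volume_ml(mask, voxel_dims_mm):
    """ICV in millilitres: number of mask voxels times one voxel's volume."""
    voxel_vol_mm3 = float(np.prod(voxel_dims_mm))
    return mask.astype(bool).sum() * voxel_vol_mm3 / 1000.0  # mm^3 -> mL

# Synthetic "brain masks": automated slightly smaller than manual
auto = np.ones((100, 100, 130), dtype=np.uint8)
manual = np.ones((100, 100, 131), dtype=np.uint8)
dims = (1.0, 1.0, 1.0)  # 1 mm isotropic voxels

icv_auto = intracranial_volume_ml(auto, dims)
icv_manual = intracranial_volume_ml(manual, dims)
print(icv_auto / icv_manual)  # ICV ratio; close to 1 -> good agreement
```

A mean ratio of 1.002, as reported in the abstract, means the automated pipeline over-estimates ICV by about 0.2% on average relative to the manual gold standard.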


Subject(s)
Brain/pathology, Intracranial Hemorrhages/pathology, Tomography, X-Ray Computed/methods, Female, Head, Humans, Image Processing, Computer-Assisted/methods, Male, Middle Aged, Pattern Recognition, Automated/methods, Reproducibility of Results