ABSTRACT
Skull-stripping is the removal of background and non-brain anatomical features from brain images. While many skull-stripping tools exist, few target pediatric populations. With the emergence of multi-institutional pediatric data acquisition efforts to broaden the understanding of perinatal brain development, it is essential to develop robust and well-tested tools ready for the relevant data processing. However, the broad range of neuroanatomical variation in the developing brain, combined with additional challenges such as high motion levels, as well as shoulder and chest signal in the images, leaves many adult-specific tools ill-suited for pediatric skull-stripping. Building on an existing framework for robust and accurate skull-stripping, we propose developmental SynthStrip (d-SynthStrip), a skull-stripping model tailored to pediatric images. This framework exposes networks to highly variable images synthesized from label maps. Our model substantially outperforms pediatric baselines across scan types and age cohorts. In addition, the <1-minute runtime of our tool compares favorably to the fastest baselines. We distribute our model at https://w3id.org/synthstrip.
ABSTRACT
Motion artifacts are a pervasive problem in MRI, leading to misdiagnosis or mischaracterization in population-level imaging studies. Current retrospective rigid intra-slice motion correction techniques jointly optimize estimates of the image and the motion parameters. In this paper, we use a deep network to reduce the joint image-motion parameter search to a search over rigid motion parameters alone. Our network produces a reconstruction as a function of two inputs: corrupted k-space data and motion parameters. We train the network using simulated, motion-corrupted k-space data generated with known motion parameters. At test-time, we estimate unknown motion parameters by minimizing a data consistency loss between the motion parameters, the network-based image reconstruction given those parameters, and the acquired measurements. Intra-slice motion correction experiments on simulated and realistic 2D fast spin echo brain MRI achieve high reconstruction fidelity while providing the benefits of explicit data consistency optimization. Our code is publicly available at https://www.github.com/nalinimsingh/neuroMoCo.
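The test-time reduction described above can be illustrated with a deliberately tiny 1D toy. All names are hypothetical: the paper uses a trained network where `reconstruct` appears here, rigid intra-slice motion instead of a cyclic shift, and gradient-based rather than grid search; only the idea of searching over motion parameters alone while minimizing data consistency is kept.

```python
def shift(signal, k):
    """Forward 'motion' model: cyclic shift by k samples."""
    n = len(signal)
    return [signal[(i - k) % n] for i in range(n)]

def reconstruct(measured, k):
    """Stand-in for the network: invert the assumed motion, then apply a
    crude image prior (here: the signal is known to vanish at both ends)."""
    image = shift(measured, -k)
    image[0] = 0.0
    image[-1] = 0.0
    return image

def data_consistency(image, k, measured):
    """Squared error between re-simulated and acquired measurements."""
    resim = shift(image, k)
    return sum((a - b) ** 2 for a, b in zip(resim, measured))

def estimate_motion(measured, candidates):
    """Search over motion parameters alone, reusing the reconstruction."""
    return min(candidates, key=lambda k: data_consistency(
        reconstruct(measured, k), k, measured))

true_image = [0.0, 1.0, 4.0, 9.0, 2.0, 0.0]
measured = shift(true_image, 2)            # acquisition with unknown motion
k_hat = estimate_motion(measured, range(len(true_image)))
recovered = reconstruct(measured, k_hat)   # data-consistent reconstruction
```

Only the true shift makes the prior-constrained reconstruction exactly consistent with the measurements, which is why the search identifies it.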
ABSTRACT
Surface-based cortical registration is an important topic in medical image analysis and facilitates many downstream applications. Current approaches for cortical registration are mainly driven by geometric features, such as sulcal depth and curvature, and often assume that registration of folding patterns leads to alignment of brain function. However, functional variability of anatomically corresponding areas across subjects has been widely reported, particularly in higher-order cognitive areas. In this work, we present JOSA, a novel cortical registration framework that jointly models the mismatch between geometry and function while simultaneously learning an unbiased population-specific atlas. Using a semi-supervised training strategy, JOSA achieves superior registration performance in both geometry and function to the state-of-the-art methods but without requiring functional data at inference. This learning framework can be extended to any auxiliary data to guide spherical registration that is available during training but is difficult or impossible to obtain during inference, such as parcellations, architectonic identity, transcriptomic information, and molecular profiles. By recognizing the mismatch between geometry and function, JOSA provides new insights into the future development of registration methods using joint analysis of brain structure and function.
Subject(s)
Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Brain/anatomy & histology , Algorithms , Imaging, Three-Dimensional/methods , Image Processing, Computer-Assisted/methods , Atlases as Topic
ABSTRACT
Affine image registration is a cornerstone of medical-image analysis. While classical algorithms can achieve excellent accuracy, they solve a time-consuming optimization for every image pair. Deep-learning (DL) methods learn a function that maps an image pair to an output transform. Evaluating the function is fast, but capturing large transforms can be challenging, and networks tend to struggle if a test-image characteristic, such as resolution, shifts away from the training domain. Most affine methods are agnostic to the anatomy the user wishes to align, meaning the registration will be inaccurate if algorithms consider all structures in the image. We address these shortcomings with SynthMorph, a fast, symmetric, diffeomorphic, and easy-to-use DL tool for joint affine-deformable registration of any brain image without preprocessing. First, we leverage a strategy that trains networks with widely varying images synthesized from label maps, yielding robust performance across acquisition specifics unseen at training. Second, we optimize the spatial overlap of select anatomical labels. This enables networks to distinguish anatomy of interest from irrelevant structures, removing the need for preprocessing that excludes content which would impinge on anatomy-specific registration. Third, we combine the affine model with a deformable hypernetwork that lets users choose the optimal deformation-field regularity for their specific data, at registration time, in a fraction of the time required by classical methods. This framework is applicable to learning anatomy-aware, acquisition-agnostic registration of any anatomy with any architecture, as long as label maps are available for training. We analyze how competing architectures learn affine transforms and compare state-of-the-art registration tools across an extremely diverse set of neuroimaging data, aiming to truly capture the behavior of methods in the real world.
SynthMorph demonstrates high accuracy and is available at https://w3id.org/synthmorph, as a single complete end-to-end solution for registration of brain magnetic resonance imaging (MRI) data.
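The second ingredient above, optimizing the spatial overlap of select anatomical labels, is commonly scored with a Dice coefficient averaged over the labels of interest. A minimal sketch on flattened segmentations (in practice this would be a soft, differentiable Dice over network probability maps; function names are illustrative):

```python
def dice(mask_a, mask_b):
    """Dice overlap of two binary masks given as flat lists of 0/1."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

def mean_label_dice(seg_a, seg_b, labels):
    """Average Dice over the anatomical labels of interest only, so
    irrelevant structures never influence the score."""
    scores = []
    for lab in labels:
        a = [1 if v == lab else 0 for v in seg_a]
        b = [1 if v == lab else 0 for v in seg_b]
        scores.append(dice(a, b))
    return sum(scores) / len(scores)
```

Restricting the average to selected labels is what lets a training loss of this form teach a network to ignore structures outside the anatomy of interest.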
ABSTRACT
We present open-source tools for three-dimensional (3D) analysis of photographs of dissected slices of human brains, which are routinely acquired in brain banks but seldom used for quantitative analysis. Our tools can: (1) 3D reconstruct a volume from the photographs and, optionally, a surface scan; and (2) produce a high-resolution 3D segmentation into 11 brain regions per hemisphere (22 in total), independently of the slice thickness. Our tools can be used as a substitute for ex vivo magnetic resonance imaging (MRI), which requires access to an MRI scanner, ex vivo scanning expertise, and considerable financial resources. We tested our tools on synthetic and real data from two NIH Alzheimer's Disease Research Centers. The results show that our methodology yields accurate 3D reconstructions, segmentations, and volumetric measurements that are highly correlated to those from MRI. Our method also detects expected differences between post mortem confirmed Alzheimer's disease cases and controls. The tools are available in our widespread neuroimaging suite 'FreeSurfer' (https://surfer.nmr.mgh.harvard.edu/fswiki/PhotoTools).
Every year, thousands of human brains are donated to science. These brains are used to study normal aging, as well as neurological diseases like Alzheimer's or Parkinson's. Donated brains usually go to 'brain banks', institutions where the brains are dissected to extract tissues relevant to different diseases. During this process, it is routine to take photographs of brain slices for archiving purposes. Often, studies of dead brains rely on qualitative observations, such as 'the hippocampus displays some atrophy', rather than concrete 'numerical' measurements. This is because the gold standard to take three-dimensional measurements of the brain is magnetic resonance imaging (MRI), which is an expensive technique that requires high expertise especially with dead brains. The lack of quantitative data means it is not always straightforward to study certain conditions. To bridge this gap, Gazula et al. have developed an openly available software that can build three-dimensional reconstructions of dead brains based on photographs of brain slices. The software can also use machine learning methods to automatically extract different brain regions from the three-dimensional reconstructions and measure their size. These data can be used to take precise quantitative measurements that can be used to better describe how different conditions lead to changes in the brain, such as atrophy (reduced volume of one or more brain regions). The researchers assessed the accuracy of the method in two ways. First, they digitally sliced MRI-scanned brains and used the software to compute the sizes of different structures based on these synthetic data, comparing the results to the known sizes. Second, they used brains for which both MRI data and dissection photographs existed and compared the measurements taken by the software to the measurements obtained with MRI images. Gazula et al. 
show that, as long as the photographs satisfy some basic conditions, they can provide good estimates of the sizes of many brain structures. The tools developed by Gazula et al. are publicly available as part of FreeSurfer, a widespread neuroimaging software that can be used by any researcher working at a brain bank. This will allow brain banks to obtain accurate measurements of dead brains, allowing them to cheaply perform quantitative studies of brain structures, which could lead to new findings relating to neurodegenerative diseases.
Subject(s)
Alzheimer Disease , Brain , Imaging, Three-Dimensional , Machine Learning , Humans , Imaging, Three-Dimensional/methods , Alzheimer Disease/diagnostic imaging , Alzheimer Disease/pathology , Brain/diagnostic imaging , Brain/pathology , Photography/methods , Dissection , Magnetic Resonance Imaging/methods , Neuropathology/methods , Neuroimaging/methods
ABSTRACT
We tackle classification based on brain connectivity derived from diffusion magnetic resonance images. We propose a machine-learning model inspired by graph convolutional networks (GCNs), which takes a brain-connectivity input graph and processes the data separately through a parallel GCN mechanism with multiple heads. The proposed network is a simple design that employs different heads involving graph convolutions focused on edges and nodes, thoroughly capturing representations from the input data. To test the ability of our model to extract complementary and representative features from brain connectivity data, we chose the task of sex classification. This quantifies the degree to which the connectome varies depending on the sex, which is important for improving our understanding of health and disease in both sexes. We show experiments on two publicly available datasets: PREVENT-AD (347 subjects) and OASIS3 (771 subjects). The proposed model demonstrates the highest performance compared to the existing machine-learning algorithms we tested, including classical methods and (graph and non-graph) deep learning. We provide a detailed analysis of each component of our model.
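The parallel-head design can be sketched with tiny dense matrices in plain Python. Every name here is hypothetical: the actual model's head definitions, normalisation, and nonlinearities are not given in the abstract, so this only illustrates "same node features, different graph views per head, concatenated per node":

```python
def graph_conv(adj, feats, weight):
    """One graph convolution: aggregate neighbor features (A @ X),
    then mix channels with a weight matrix (@ W)."""
    n, fin = len(feats), len(feats[0])
    agg = [[sum(adj[i][k] * feats[k][j] for k in range(n))
            for j in range(fin)] for i in range(n)]
    fout = len(weight[0])
    return [[sum(agg[i][k] * weight[k][j] for k in range(fin))
             for j in range(fout)] for i in range(n)]

def parallel_heads(adj_node, adj_edge, feats, w_node, w_edge):
    """Two heads over different graph views, concatenated per node --
    a toy analogue of node- and edge-focused heads."""
    h1 = graph_conv(adj_node, feats, w_node)
    h2 = graph_conv(adj_edge, feats, w_edge)
    return [a + b for a, b in zip(h1, h2)]
```

Because each head sees its own adjacency, the concatenated output carries complementary representations of the same connectome.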
ABSTRACT
Deep learning has allowed for remarkable progress in many medical scenarios. Deep learning prediction models often require 10^5-10^7 examples. It is currently unknown whether deep learning can also enhance predictions of symptoms post-stroke in real-world samples of stroke patients that are often several orders of magnitude smaller. Such stroke outcome predictions, however, could be particularly instrumental in guiding acute clinical and rehabilitation care decisions. Here, we compared the capacities of classically used linear and novel deep learning algorithms in their prediction of stroke severity. Our analyses relied on a total of 1430 patients assembled from the MRI-Genetics Interface Exploration collaboration and a Massachusetts General Hospital-based study. The outcome of interest was National Institutes of Health Stroke Scale-based stroke severity in the acute phase after ischaemic stroke onset, which we predict by means of MRI-derived lesion location. We automatically derived lesion segmentations from diffusion-weighted clinical MRI scans, performed spatial normalization and included a principal component analysis step, retaining 95% of the variance of the original data. We then repeatedly separated a train, validation and test set to investigate the effects of sample size; we subsampled the train set to 100, 300 and 900 and trained the algorithms to predict the stroke severity score for each sample size with regularized linear regression and an eight-layered neural network. We selected hyperparameters on the validation set. We evaluated model performance based on the explained variance (R2) in the test set. While linear regression performed significantly better for a sample size of 100 patients, deep learning started to significantly outperform linear regression when trained on 900 patients.
Average prediction performance improved by ~20% when increasing the sample size 9× [maximum for 100 patients: 0.279 ± 0.005 (R2, 95% confidence interval), 900 patients: 0.337 ± 0.006]. In summary, for sample sizes of 900 patients, deep learning showed a higher prediction performance than typically employed linear methods. These findings suggest the existence of non-linear relationships between lesion location and stroke severity that can be utilized for an improved prediction performance for larger sample sizes.
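The evaluation metric above, explained variance in the test set, has a short closed form; a minimal implementation:

```python
def r_squared(y_true, y_pred):
    """Explained variance R^2 = 1 - SS_res / SS_tot: the fraction of
    outcome variance around the mean explained by the predictions."""
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    return 1.0 - ss_res / ss_tot
```

Perfect predictions give 1.0, while always predicting the mean gives 0.0, which frames the reported 0.279 vs 0.337 scores.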
ABSTRACT
Brain cells are arranged in laminar, nuclear, or columnar structures, spanning a range of scales. Here, we construct a reliable cell census in the frontal lobe of human cerebral cortex at micrometer resolution in a magnetic resonance imaging (MRI)-referenced system using innovative imaging and analysis methodologies. MRI establishes a macroscopic reference coordinate system of laminar and cytoarchitectural boundaries. Cell counting is obtained with a digital stereological approach on the 3D reconstruction at cellular resolution from a custom-made inverted confocal light-sheet fluorescence microscope (LSFM). Mesoscale optical coherence tomography enables the registration of the distorted histological cell typing obtained with LSFM to the MRI-based atlas coordinate system. The outcome is an integrated high-resolution cellular census of Broca's area in a human postmortem specimen, within a whole-brain reference space atlas.
Subject(s)
Broca Area , Cerebral Cortex , Humans , Brain/diagnostic imaging , Magnetic Resonance Imaging/methods , Brain Mapping
ABSTRACT
Brain surface-based image registration, an important component of brain image analysis, establishes spatial correspondence between cortical surfaces. Existing iterative and learning-based approaches focus on accurate registration of folding patterns of the cerebral cortex, and assume that geometry predicts function and thus functional areas will also be well aligned. However, structural and functional variability of anatomically corresponding areas across subjects has been widely reported. In this work, we introduce a learning-based cortical registration framework, JOSA, which jointly aligns folding patterns and functional maps while simultaneously learning an optimal atlas. We demonstrate that JOSA can substantially improve registration performance in both anatomical and functional domains over existing methods. By employing a semi-supervised training strategy, the proposed framework obviates the need for functional data during inference, enabling its use in broad neuroscientific domains where functional data may not be observed. The source code of JOSA will be released to the public at https://voxelmorph.net.
ABSTRACT
We present KeyMorph, a deep learning-based image registration framework that relies on automatically detecting corresponding keypoints. State-of-the-art deep learning methods for registration often are not robust to large misalignments, are not interpretable, and do not incorporate the symmetries of the problem. In addition, most models produce only a single prediction at test-time. Our core insight which addresses these shortcomings is that corresponding keypoints between images can be used to obtain the optimal transformation via a differentiable closed-form expression. We use this observation to drive the end-to-end learning of keypoints tailored for the registration task, and without knowledge of ground-truth keypoints. This framework not only leads to substantially more robust registration but also yields better interpretability, since the keypoints reveal which parts of the image are driving the final alignment. Moreover, KeyMorph can be designed to be equivariant under image translations and/or symmetric with respect to the input image ordering. Finally, we show how multiple deformation fields can be computed efficiently and in closed-form at test time corresponding to different transformation variants. We demonstrate the proposed framework in solving 3D affine and spline-based registration of multi-modal brain MRI scans. In particular, we show registration accuracy that surpasses current state-of-the-art methods, especially in the context of large displacements. Our code is available at https://github.com/alanqrwang/keymorph.
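The closed-form step can be illustrated in 2D: given corresponding keypoints, the least-squares affine transform follows from the normal equations. A self-contained sketch (the actual method works in 3D on keypoints detected by the network; all names here are illustrative):

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def affine_from_keypoints(src, dst):
    """Least-squares 2D affine mapping src -> dst keypoints.
    Returns the rows (a, b, tx) and (c, d, ty) of the transform."""
    X = [[x, y, 1.0] for x, y in src]
    # Normal equations: (X^T X) p = X^T u, solved per output coordinate.
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    rows = []
    for dim in range(2):
        Xtu = [sum(r[i] * d[dim] for r, d in zip(X, dst)) for i in range(3)]
        rows.append(solve3(XtX, Xtu))
    return rows
```

Because this solve is differentiable in the keypoint coordinates, gradients can flow through it to train the detector end-to-end, which is the core insight the abstract describes.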
Subject(s)
Deep Learning , Humans , Brain/diagnostic imaging , Magnetic Resonance Imaging/methods , Neuroimaging , Image Processing, Computer-Assisted/methods
ABSTRACT
Subject motion can cause artifacts in clinical MRI, frequently necessitating repeat scans. We propose to alleviate this inefficiency by predicting artifact scores from partial multi-shot multi-slice acquisitions, which may guide the operator in aborting corrupted scans early.
ABSTRACT
Motion artifacts can negatively impact diagnosis, patient experience, and radiology workflow, especially when a patient recall is required. Detecting motion artifacts while the patient is still in the scanner could potentially improve workflow and reduce costs by enabling immediate corrective action. We demonstrate in a clinical k-space dataset that using cross-correlation between adjacent phase-encoding lines can detect motion artifacts directly from raw k-space in multi-shot multi-slice scans. We train a split-attention residual network to examine the performance in predicting motion artifact severity. The network is trained on simulated data and tested on real clinical data.
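A minimal sketch of the correlation cue, assuming magnitude k-space lines as plain Python lists; the threshold and helper names are illustrative, and the actual system feeds a trained split-attention network rather than thresholding directly:

```python
def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def flag_motion(kspace_lines, threshold=0.5):
    """Flag a slice when any pair of adjacent phase-encoding lines
    decorrelates -- consistent data keeps neighbors similar, while
    inter-shot motion breaks that similarity."""
    corrs = [pearson([abs(v) for v in kspace_lines[i]],
                     [abs(v) for v in kspace_lines[i + 1]])
             for i in range(len(kspace_lines) - 1)]
    return min(corrs) < threshold, corrs
```

The vector of adjacent-line correlations is exactly the kind of low-level feature that can be computed during the scan, before reconstruction.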
ABSTRACT
Learning-based image reconstruction models, such as those based on the U-Net, require a large set of labeled images if good generalization is to be guaranteed. In some imaging domains, however, labeled data with pixel- or voxel-level label accuracy are scarce due to the cost of acquiring them. This problem is exacerbated further in domains like medical imaging, where there is no single ground truth label, resulting in large amounts of repeat variability in the labels. Therefore, training reconstruction networks to generalize better by learning from both labeled and unlabeled examples (called semi-supervised learning) is a problem of practical and theoretical interest. However, traditional semi-supervised learning methods for image reconstruction often necessitate handcrafting a differentiable regularizer specific to some given imaging problem, which can be extremely time-consuming. In this work, we propose "supervision by denoising" (SUD), a framework to supervise reconstruction models using their own denoised output as labels. SUD unifies stochastic averaging and spatial denoising techniques under a spatio-temporal denoising framework and alternates denoising and model weight update steps in an optimization framework for semi-supervision. As example applications, we apply SUD to two problems from biomedical imaging-anatomical brain reconstruction (3D) and cortical parcellation (2D)-to demonstrate a significant improvement in reconstruction over supervised-only and ensembling baselines. Our code is available at https://github.com/seannz/sud.
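The alternation described above, temporal averaging of successive predictions followed by spatial denoising, can be sketched on 1D signals. The helper names, `beta`, and the 3-tap smoother are illustrative stand-ins, not the paper's exact operators:

```python
def ema(prev, current, beta=0.9):
    """Temporal (stochastic) averaging of successive model predictions."""
    return [beta * p + (1 - beta) * c for p, c in zip(prev, current)]

def box_smooth(signal):
    """Spatial denoising: 3-tap moving average with edge replication."""
    padded = [signal[0]] + list(signal) + [signal[-1]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3.0
            for i in range(len(signal))]

def denoised_label(prev_label, prediction, beta=0.9):
    """One SUD-style step: average over time, then smooth over space,
    and use the result as the training target for unlabeled examples."""
    return box_smooth(ema(prev_label, prediction, beta))
```

The weight-update step (not shown) would then fit the model to these denoised pseudo-labels before the next denoising pass, replacing a handcrafted regularizer with the denoiser itself.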
ABSTRACT
The convolutional neural network (CNN) is one of the most commonly used architectures for computer vision tasks. The key building block of a CNN is the convolutional kernel that aggregates information from the pixel neighborhood and shares weights across all pixels. A standard CNN's capacity, and thus its performance, is directly related to the number of learnable kernel weights, which is determined by the number of channels and the kernel size (support). In this paper, we present the hyper-convolution, a novel building block that implicitly encodes the convolutional kernel using spatial coordinates. Unlike a regular convolutional kernel, whose weights are independently learned, hyper-convolution kernel weights are correlated through an encoder that maps spatial coordinates to their corresponding values. Hyper-convolutions decouple kernel size from the total number of learnable parameters, enabling a more flexible architecture design. We demonstrate in our experiments that replacing regular convolutions with hyper-convolutions can improve performance with fewer parameters, and increase robustness against noise. We provide our code here: https://github.com/tym002/Hyper-Convolution.
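The decoupling can be sketched in 1D: a tiny "encoder" with two parameters generates a kernel weight at every tap coordinate, so enlarging the kernel adds taps but no learnable parameters. The paper uses a learned multilayer encoder; the linear one here is only illustrative:

```python
def kernel_from_coords(coords, encoder):
    """Hyper-convolution idea: kernel weights are a function of tap
    coordinates, so kernel size is decoupled from parameter count."""
    return [encoder(c) for c in coords]

def conv1d_valid(signal, kernel):
    """Plain 'valid' 1D correlation with the generated kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# Toy 'encoder': two learnable numbers (a, b) define w(c) = a + b * c for
# every tap coordinate c -- growing the kernel adds no parameters.
a, b = 0.5, -0.25
coords = [-1, 0, 1]                      # a 3-tap kernel; could be any size
kernel = kernel_from_coords(coords, lambda c: a + b * c)
```

Widening `coords` to five or seven taps changes the kernel support while `a` and `b` remain the only learnable quantities, which is the flexibility the abstract highlights.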
Subject(s)
Algorithms , Neural Networks, Computer , Humans
ABSTRACT
Despite advances in data augmentation and transfer learning, convolutional neural networks (CNNs) struggle to generalise to unseen domains. When segmenting brain scans, CNNs are highly sensitive to changes in resolution and contrast: even within the same MRI modality, performance can decrease across datasets. Here we introduce SynthSeg, the first segmentation CNN robust against changes in contrast and resolution. SynthSeg is trained with synthetic data sampled from a generative model conditioned on segmentations. Crucially, we adopt a domain randomisation strategy where we fully randomise the contrast and resolution of the synthetic training data. Consequently, SynthSeg can segment real scans from a wide range of target domains without retraining or fine-tuning, which enables straightforward analysis of huge amounts of heterogeneous clinical data. Because SynthSeg only requires segmentations to be trained (no images), it can learn from labels obtained by automated methods on diverse populations (e.g., ageing and diseased), thus achieving robustness to a wide range of morphological variability. We demonstrate SynthSeg on 5,000 scans of six modalities (including CT) and ten resolutions, where it exhibits unparalleled generalisation compared with supervised CNNs, state-of-the-art domain adaptation, and Bayesian segmentation. Finally, we demonstrate the generalisability of SynthSeg by applying it to cardiac MRI and CT scans.
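The generative step can be caricatured in a few lines: sample a random mean intensity per label and add noise, so every draw shows the same anatomy under a new contrast. The real model also randomises resolution, bias fields, and deformations; all names here are hypothetical:

```python
import random

def synthesize(label_map, rng):
    """Generate one training image from a label map: each label receives
    a random mean intensity, plus per-voxel noise -- a crude stand-in for
    contrast randomisation over a generative model."""
    labels = sorted(set(label_map))
    means = {lab: rng.uniform(0.0, 1.0) for lab in labels}
    return [means[lab] + rng.gauss(0.0, 0.05) for lab in label_map]

rng = random.Random(0)
label_map = [0, 0, 1, 1, 2, 2]
image_a = synthesize(label_map, rng)   # two draws give two different
image_b = synthesize(label_map, rng)   # contrasts for the same anatomy
```

Training on an endless stream of such draws is what forces the network to rely on shape rather than on any particular intensity profile.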
Subject(s)
Magnetic Resonance Imaging , Neuroimaging , Humans , Bayes Theorem , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods
ABSTRACT
BACKGROUND AND OBJECTIVES: While chronological age is one of the most influential determinants of poststroke outcomes, little is known of the impact of neuroimaging-derived biological "brain age." We hypothesized that radiomics analyses of T2-FLAIR image texture would provide brain age estimates and that advanced brain age of patients with stroke would be associated with cardiovascular risk factors and worse functional outcomes. METHODS: We extracted radiomics from T2-FLAIR images acquired during acute stroke clinical evaluation. Brain age was determined from brain parenchyma radiomics using an ElasticNet linear regression model. Subsequently, relative brain age (RBA), which expresses brain age in comparison with chronological age-matched peers, was estimated. Finally, we built a linear regression model of RBA using clinical cardiovascular characteristics as inputs and a logistic regression model of favorable functional outcomes taking RBA as input. RESULTS: We reviewed 4,163 patients from a large multisite ischemic stroke cohort (mean age = 62.8 years, 42.0% female patients). T2-FLAIR radiomics predicted chronological ages (mean absolute error = 6.9 years, r = 0.81). After adjustment for covariates, RBA was higher and therefore described older-appearing brains in patients with hypertension, diabetes mellitus, a history of smoking, and a history of a prior stroke. In multivariate analyses, age, RBA, NIHSS, and a history of prior stroke were all significantly associated with functional outcome (respective adjusted odds ratios: 0.58, 0.76, 0.48, 0.55; all p-values < 0.001). Moreover, the negative effect of RBA on outcome was especially pronounced in minor strokes. DISCUSSION: T2-FLAIR radiomics can be used to predict brain age and derive RBA. Older-appearing brains, characterized by a higher RBA, reflect cardiovascular risk factor accumulation and are linked to worse outcomes after stroke.
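One common way to operationalize relative brain age, assumed here since the abstract does not give the exact formula, is the residual of predicted brain age with respect to the cohort's age regression line; positive residuals describe older-appearing brains:

```python
def fit_line(x, y):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    slope = num / den
    return slope, my - slope * mx

def relative_brain_age(chrono_age, predicted_age):
    """RBA sketch: predicted brain age minus the brain age expected for
    chronological-age-matched peers (the cohort regression line)."""
    slope, intercept = fit_line(chrono_age, predicted_age)
    return [p - (slope * c + intercept)
            for c, p in zip(chrono_age, predicted_age)]
```

By construction these residuals are centered on zero across the cohort, so RBA isolates the deviation from age-matched peers rather than age itself.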
Subject(s)
Brain Ischemia , Ischemic Stroke , Stroke , Child , Female , Humans , Male , Middle Aged , Brain/diagnostic imaging , Brain Ischemia/diagnostic imaging , Brain Ischemia/complications , Ischemic Stroke/complications , Magnetic Resonance Imaging/methods , Stroke/complications
ABSTRACT
This study aimed to investigate the influence of stroke lesions in predefined highly interconnected (rich-club) brain regions on functional outcome post-stroke, determine their spatial specificity and explore the effects of biological sex on their relevance. We analyzed MRI data recorded at index stroke and ~3-month modified Rankin Scale (mRS) data from patients with acute ischemic stroke enrolled in the multisite MRI-GENIE study. Spatially normalized structural stroke lesions were parcellated into 108 atlas-defined bilateral (sub)cortical brain regions. Unfavorable outcome (mRS > 2) was modeled in a Bayesian logistic regression framework. Effects of individual brain regions were captured as two compound effects for (i) six bilateral rich club and (ii) all further non-rich club regions. In spatial specificity analyses, we randomized the split into "rich club" and "non-rich club" regions and compared the effect of the actual rich club regions to the distribution of effects from 1000 combinations of six random regions. In sex-specific analyses, we introduced an additional hierarchical level in our model structure to compare male and female-specific rich club effects. A total of 822 patients (age: 64.7[15.0], 39% women) were analyzed. Rich club regions had substantial relevance in explaining unfavorable functional outcome (mean of posterior distribution: 0.08, area under the curve: 0.8). In particular, the rich club-combination had a higher relevance than 98.4% of random constellations. Rich club regions were substantially more important in explaining long-term outcome in women than in men. All in all, lesions in rich club regions were associated with increased odds of unfavorable outcome. These effects were spatially specific and more pronounced in women.
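The spatial-specificity comparison can be sketched exhaustively on a toy per-region effect vector: rank the chosen region set against all equally sized subsets. The study instead samples 1000 random six-region combinations from 108 regions; names here are illustrative:

```python
from itertools import combinations

def specificity_rank(effects, chosen):
    """Fraction of equally sized region subsets whose summed effect lies
    below that of the chosen (e.g., rich-club) subset -- an exhaustive
    version of the randomized comparison described above."""
    size = len(chosen)
    chosen_effect = sum(effects[r] for r in chosen)
    others = [sum(effects[r] for r in combo)
              for combo in combinations(range(len(effects)), size)]
    return sum(e < chosen_effect for e in others) / len(others)
```

A rank near 1.0, like the reported 98.4%, indicates the chosen constellation outweighs almost every random constellation of the same size.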