Results 1 - 20 of 63

1.
Hum Brain Mapp ; 44(4): 1579-1592, 2023 03.
Article in English | MEDLINE | ID: mdl-36440953

ABSTRACT

This study aimed to investigate the influence of stroke lesions in predefined highly interconnected (rich-club) brain regions on functional outcome post-stroke, determine their spatial specificity and explore the effects of biological sex on their relevance. We analyzed MRI data recorded at index stroke and ~3-months modified Rankin Scale (mRS) data from patients with acute ischemic stroke enrolled in the multisite MRI-GENIE study. Spatially normalized structural stroke lesions were parcellated into 108 atlas-defined bilateral (sub)cortical brain regions. Unfavorable outcome (mRS > 2) was modeled in a Bayesian logistic regression framework. Effects of individual brain regions were captured as two compound effects for (i) six bilateral rich-club regions and (ii) all further non-rich-club regions. In spatial specificity analyses, we randomized the split into "rich club" and "non-rich club" regions and compared the effect of the actual rich-club regions to the distribution of effects from 1000 combinations of six random regions. In sex-specific analyses, we introduced an additional hierarchical level in our model structure to compare male- and female-specific rich-club effects. A total of 822 patients (age: 64.7 [15.0], 39% women) were analyzed. Rich-club regions had substantial relevance in explaining unfavorable functional outcome (mean of posterior distribution: 0.08, area under the curve: 0.8). In particular, the rich-club combination had a higher relevance than 98.4% of random constellations. Rich-club regions were substantially more important in explaining long-term outcome in women than in men. Overall, lesions in rich-club regions were associated with increased odds of unfavorable outcome. These effects were spatially specific and more pronounced in women.
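
As a rough illustration of the spatial-specificity analysis described above, the sketch below compares a compound rich-club effect against a null distribution built from 1000 random six-region constellations. It is a simplification that works on point estimates rather than the full Bayesian hierarchy; all region indices and effect values are placeholders, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: one fitted effect per region (108 regions) and the
# indices of the six predefined rich-club regions (placeholders).
region_effects = rng.normal(0.0, 0.05, size=108)
rich_club_idx = np.array([10, 11, 42, 43, 77, 78])

def compound_effect(effects, idx):
    """Summarise a constellation of regions by the mean of their effects."""
    return effects[idx].mean()

observed = compound_effect(region_effects, rich_club_idx)

# Null distribution: 1000 random constellations of six regions.
null = np.array([
    compound_effect(region_effects, rng.choice(108, size=6, replace=False))
    for _ in range(1000)
])

# Fraction of random constellations with a smaller compound effect.
specificity = (null < observed).mean()
print(f"rich-club effect exceeds {specificity:.1%} of random constellations")
```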


Subject(s)
Ischemic Stroke , Stroke , Female , Humans , Male , Middle Aged , Bayes Theorem , Brain , Ischemic Stroke/diagnostic imaging , Ischemic Stroke/pathology , Models, Neurological
2.
Neuroimage ; 260: 119474, 2022 10 15.
Article in English | MEDLINE | ID: mdl-35842095

ABSTRACT

The removal of non-brain signal from magnetic resonance imaging (MRI) data, known as skull-stripping, is an integral component of many neuroimage analysis streams. Despite their abundance, popular classical skull-stripping methods are usually tailored to images with specific acquisition properties, namely near-isotropic resolution and T1-weighted (T1w) MRI contrast, which are prevalent in research settings. As a result, existing tools tend to adapt poorly to other image types, such as stacks of thick slices acquired with fast spin-echo (FSE) MRI that are common in the clinic. While learning-based approaches for brain extraction have gained traction in recent years, these methods face a similar burden, as they are only effective for image types seen during the training procedure. To achieve robust skull-stripping across a landscape of imaging protocols, we introduce SynthStrip, a rapid, learning-based brain-extraction tool. By leveraging anatomical segmentations to generate an entirely synthetic training dataset with anatomies, intensity distributions, and artifacts that far exceed the realistic range of medical images, SynthStrip learns to successfully generalize to a variety of real acquired brain images, removing the need for training data with target contrasts. We demonstrate the efficacy of SynthStrip for a diverse set of image acquisitions and resolutions across subject populations, ranging from newborn to adult. We show substantial improvements in accuracy over popular skull-stripping baselines - all with a single trained model. Our method and labeled evaluation data are available at https://w3id.org/synthstrip.
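
The core idea of training on synthetic images generated from anatomical segmentations can be sketched roughly as follows: sample a random intensity distribution per label, then corrupt the image with a bias field and blur. This is a simplified stand-in for SynthStrip's actual generative model; the label map, parameter ranges, and function names are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def synth_image_from_labels(labels: np.ndarray) -> np.ndarray:
    """Draw one synthetic training image from an anatomical label map."""
    image = np.zeros(labels.shape, dtype=np.float32)
    for lab in np.unique(labels):
        # Random mean/std per structure, deliberately wider than real MRI.
        mu, sigma = rng.uniform(0, 255), rng.uniform(1, 25)
        mask = labels == lab
        image[mask] = rng.normal(mu, sigma, size=mask.sum())
    # Smooth multiplicative bias field and random blur, mimicking artifacts.
    bias = gaussian_filter(rng.normal(0, 0.3, labels.shape), sigma=20)
    image *= np.exp(bias)
    return gaussian_filter(image, sigma=rng.uniform(0, 1.5))

# Toy 3D label map (e.g., 0 = background, 1-3 = brain structures).
labels = rng.integers(0, 4, size=(64, 64, 64))
print(synth_image_from_labels(labels).shape)
```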


Subject(s)
Brain , Skull , Adult , Brain/diagnostic imaging , Brain/pathology , Contrast Media , Head , Humans , Image Processing, Computer-Assisted/methods , Infant, Newborn , Magnetic Resonance Imaging/methods , Skull/diagnostic imaging , Skull/pathology
3.
Neuroimage ; 244: 118610, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34571161

ABSTRACT

A tool was developed to automatically segment several subcortical limbic structures (nucleus accumbens, basal forebrain, septal nuclei, hypothalamus without mammillary bodies, the mammillary bodies, and fornix) using only a T1-weighted MRI as input. This tool fills an unmet need, as there are few, if any, publicly available tools to segment these clinically relevant structures. A U-Net with spatial, intensity, contrast, and noise augmentation was trained using 39 manually labeled MRI data sets. In general, the Dice scores, true positive rates, false discovery rates, and manual-automatic volume correlations were very good relative to comparable tools for other structures. A diverse data set of 698 subjects was segmented using the tool; evaluation of the resulting labelings showed that the tool failed in less than 1% of cases. Test-retest reliability of the tool was excellent. The automatically segmented volumes of all structures except the mammillary bodies were effective at detecting clinical AD effects, age effects, or both. This tool will be publicly released with FreeSurfer (surfer.nmr.mgh.harvard.edu/fswiki/ScLimbic). Together with the other cortical and subcortical limbic segmentations, this tool will allow FreeSurfer to provide a comprehensive view of the limbic system in an automated way.
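
The evaluation metrics mentioned above (Dice score, true positive rate, false discovery rate) can be computed from binary masks as in the minimal sketch below; the masks and values here are synthetic placeholders, not the tool's actual outputs.

```python
import numpy as np

def overlap_metrics(auto: np.ndarray, manual: np.ndarray):
    """Dice, true positive rate and false discovery rate for binary masks."""
    auto, manual = auto.astype(bool), manual.astype(bool)
    tp = np.logical_and(auto, manual).sum()
    dice = 2 * tp / (auto.sum() + manual.sum())
    tpr = tp / manual.sum()               # sensitivity w.r.t. the manual label
    fdr = (auto.sum() - tp) / auto.sum()  # fraction of automatic voxels not in manual
    return dice, tpr, fdr

# Toy example with random masks.
rng = np.random.default_rng(0)
a, m = rng.random((32, 32, 32)) > 0.5, rng.random((32, 32, 32)) > 0.5
print(overlap_metrics(a, m))
```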


Asunto(s)
Aprendizaje Profundo , Sistema Límbico/diagnóstico por imagen , Imagen por Resonancia Magnética/métodos , Adolescente , Adulto , Anciano , Anciano de 80 o más Años , Prosencéfalo Basal/diagnóstico por imagen , Femenino , Fórnix/diagnóstico por imagen , Humanos , Masculino , Persona de Mediana Edad , Núcleo Accumbens/diagnóstico por imagen , Reproducibilidad de los Resultados , Núcleos Septales/diagnóstico por imagen , Adulto Joven
4.
Neuroimage ; 245: 118758, 2021 12 15.
Article in English | MEDLINE | ID: mdl-34838949

ABSTRACT

The default mode network (DMN) mediates self-awareness and introspection, core components of human consciousness. Therapies to restore consciousness in patients with severe brain injuries have historically targeted subcortical sites in the brainstem, thalamus, hypothalamus, basal forebrain, and basal ganglia, with the goal of reactivating cortical DMN nodes. However, the subcortical connectivity of the DMN has not been fully mapped, and optimal subcortical targets for therapeutic neuromodulation of consciousness have not been identified. In this work, we created a comprehensive map of DMN subcortical connectivity by combining high-resolution functional and structural datasets with advanced signal processing methods. We analyzed 7 Tesla resting-state functional MRI (rs-fMRI) data from 168 healthy volunteers acquired in the Human Connectome Project. The rs-fMRI blood-oxygen-level-dependent (BOLD) data were temporally synchronized across subjects using the BrainSync algorithm. Cortical and subcortical DMN nodes were jointly analyzed and identified at the group level by applying a novel Nadam-Accelerated SCAlable and Robust (NASCAR) tensor decomposition method to the synchronized dataset. The subcortical connectivity map was then overlaid on a 7 Tesla 100 µm ex vivo MRI dataset for neuroanatomic analysis using automated segmentation of nuclei within the brainstem, thalamus, hypothalamus, basal forebrain, and basal ganglia. We further compared the NASCAR subcortical connectivity map with its counterpart generated from canonical seed-based correlation analyses. The NASCAR method revealed that BOLD signal in the central lateral nucleus of the thalamus and ventral tegmental area of the midbrain is strongly correlated with that of the DMN. In an exploratory analysis, additional subcortical sites in the median and dorsal raphe, lateral hypothalamus, and caudate nuclei were correlated with the cortical DMN. We also found that the putamen and globus pallidus are negatively correlated (i.e., anti-correlated) with the DMN, providing rs-fMRI evidence for the mesocircuit hypothesis of human consciousness, whereby a striatopallidal feedback system modulates anterior forebrain function via disinhibition of the central thalamus. Seed-based analyses yielded similar subcortical DMN connectivity, but the NASCAR result showed stronger contrast and better spatial alignment with dopamine immunostaining data. The DMN subcortical connectivity map identified here advances understanding of the subcortical regions that contribute to human consciousness and can be used to inform the selection of therapeutic targets in clinical trials for patients with disorders of consciousness.
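
For comparison with the NASCAR decomposition, a canonical seed-based correlation analysis can be sketched as below: the mean BOLD time course within a (DMN) seed mask is correlated with every voxel. The array shapes and seed location are illustrative assumptions, not the Human Connectome Project data layout.

```python
import numpy as np

def seed_correlation(bold: np.ndarray, seed_mask: np.ndarray) -> np.ndarray:
    """Correlate the mean seed time course with every voxel's time course.

    bold: array of shape (T, X, Y, Z); seed_mask: boolean array (X, Y, Z).
    """
    T = bold.shape[0]
    seed = bold[:, seed_mask].mean(axis=1)        # (T,) mean seed time course
    vox = bold.reshape(T, -1)                     # (T, V) voxel time courses
    seed_z = (seed - seed.mean()) / seed.std()
    vox_z = (vox - vox.mean(0)) / (vox.std(0) + 1e-8)
    r = (seed_z @ vox_z) / T                      # Pearson r per voxel
    return r.reshape(bold.shape[1:])

rng = np.random.default_rng(0)
bold = rng.normal(size=(200, 16, 16, 16))         # toy rs-fMRI run
seed = np.zeros((16, 16, 16), bool)
seed[6:10, 6:10, 8] = True                        # hypothetical seed region
print(seed_correlation(bold, seed).shape)
```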


Subject(s)
Basal Ganglia/physiology , Brain Mapping , Brain Stem/physiology , Consciousness/physiology , Default Mode Network/physiology , Hypothalamus/physiology , Mesencephalon/physiology , Thalamus/physiology , Adult , Basal Ganglia/diagnostic imaging , Brain Mapping/methods , Brain Stem/diagnostic imaging , Connectome , Default Mode Network/diagnostic imaging , Echo-Planar Imaging/methods , Humans , Hypothalamus/diagnostic imaging , Mesencephalon/diagnostic imaging , Thalamus/diagnostic imaging
5.
Neuroimage ; 237: 118113, 2021 08 15.
Article in English | MEDLINE | ID: mdl-33940143

ABSTRACT

Accurate and reliable whole-brain segmentation is critical to longitudinal neuroimaging studies. We undertake a comparative analysis of two subcortical segmentation methods, Automatic Segmentation (ASEG) and Sequence Adaptive Multimodal Segmentation (SAMSEG), recently provided in the open-source neuroimaging package FreeSurfer 7.1, with regard to reliability, bias, sensitivity to detect longitudinal change, and diagnostic sensitivity to Alzheimer's disease. First, we assess intra- and inter-scanner reliability for eight bilateral subcortical structures: amygdala, caudate, hippocampus, lateral ventricles, nucleus accumbens, pallidum, putamen and thalamus. For intra-scanner analysis we use a large sample of participants (n = 1629) distributed across the lifespan (age range = 4-93 years) and acquired on 1.5T Siemens Avanto (n = 774) and 3T Siemens Skyra (n = 855) scanners. For inter-scanner analysis we use a sample of 24 participants scanned on the same day with three models of Siemens scanners: 1.5T Avanto, 3T Skyra and 3T Prisma. Second, we test how each method detects volumetric age change using longitudinal follow-up scans (n = 491 for Avanto and n = 245 for Skyra; interscan interval = 1-10 years). Finally, we test sensitivity to clinically relevant change. We compare the annual rate of hippocampal atrophy in cognitively normal older adults (n = 20), patients with mild cognitive impairment (n = 20) and Alzheimer's disease (n = 20). We find that both ASEG and SAMSEG are reliable and detect within-person longitudinal change, although with notable differences between age trajectories for most structures, including hippocampus and amygdala. In summary, SAMSEG yields significantly lower differences between repeated measures for intra- and inter-scanner analyses without compromising sensitivity to change, and demonstrates the ability to detect clinically relevant longitudinal changes.
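
One simple way to quantify inter-scanner reliability of the kind examined here is the symmetrised absolute percent volume difference between repeated measures, sketched below with made-up hippocampal volumes; this is not the exact statistic reported in the study.

```python
import numpy as np

def absolute_percent_difference(v1, v2):
    """Symmetrised absolute volume difference between two repeated measures."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    return 100 * np.abs(v1 - v2) / ((v1 + v2) / 2)

# Toy hippocampal volumes (mm^3) from two scanners for five participants.
avanto = np.array([4100, 3950, 4320, 3880, 4010])
skyra = np.array([4150, 3900, 4290, 3940, 3980])
print(absolute_percent_difference(avanto, skyra).mean())
```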


Asunto(s)
Envejecimiento , Enfermedad de Alzheimer/diagnóstico por imagen , Encéfalo/diagnóstico por imagen , Disfunción Cognitiva/diagnóstico por imagen , Imagen por Resonancia Magnética/normas , Neuroimagen/normas , Adolescente , Adulto , Anciano , Anciano de 80 o más Años , Enfermedad de Alzheimer/patología , Atrofia , Encéfalo/patología , Niño , Preescolar , Disfunción Cognitiva/patología , Femenino , Hipocampo/diagnóstico por imagen , Hipocampo/patología , Humanos , Interpretación de Imagen Asistida por Computador , Procesamiento de Imagen Asistido por Computador , Estudios Longitudinales , Masculino , Persona de Mediana Edad , Reproducibilidad de los Resultados , Sensibilidad y Especificidad , Adulto Joven
6.
Neuroimage ; 221: 117161, 2020 11 01.
Article in English | MEDLINE | ID: mdl-32702486

ABSTRACT

Non-rigid cortical registration is an important and challenging task due to the geometric complexity of the human cortex and the high degree of inter-subject variability. A conventional solution is to use a spherical representation of surface properties and perform registration by aligning cortical folding patterns in that space. This strategy produces accurate spatial alignment, but often comes at high computational cost. Recently, convolutional neural networks (CNNs) have demonstrated the potential to dramatically speed up volumetric registration. However, due to distortions introduced by projecting a sphere to a 2D plane, a direct application of recent learning-based methods to surfaces yields poor results. In this study, we present SphereMorph, a deep-network-based diffeomorphic registration framework for cortical surfaces that addresses these issues. SphereMorph uses a UNet-style network associated with a spherical kernel to learn the displacement field and warps the sphere using a modified spatial transformer layer. We propose a resampling weight in computing the data fitting loss to account for distortions introduced by polar projection, and demonstrate the performance of our proposed method on two tasks: cortical parcellation and group-wise functional area alignment. The experiments show that the proposed SphereMorph is capable of modeling the geometric registration problem in a CNN framework and demonstrate superior registration accuracy and computational efficiency. The source code of SphereMorph will be released to the public upon acceptance of this manuscript at https://github.com/voxelmorph/spheremorph.
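
A plausible form of the resampling weight mentioned above is to down-weight the data-fitting loss near the poles, where an equirectangular (polar) projection oversamples the sphere; the sketch below uses a sin(theta) weighting and is an assumption about the general idea rather than SphereMorph's exact implementation.

```python
import numpy as np

def polar_resampling_weights(height: int, width: int) -> np.ndarray:
    """Per-pixel weights for a 2D equirectangular projection of the sphere.

    Rows near the poles cover far less surface area than rows near the
    equator, so the data-fitting loss is down-weighted there by sin(theta).
    """
    theta = (np.arange(height) + 0.5) * np.pi / height  # polar angle per row
    w = np.sin(theta)[:, None].repeat(width, axis=1)
    return w / w.mean()                                 # normalise to mean 1

weights = polar_resampling_weights(256, 512)
# weighted_loss = (weights * (warped - target) ** 2).mean()
print(weights.min(), weights.max())
```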


Subject(s)
Aging , Alzheimer Disease/diagnostic imaging , Cerebral Cortex/diagnostic imaging , Cognitive Dysfunction/diagnostic imaging , Deep Learning , Magnetic Resonance Imaging/methods , Models, Theoretical , Neuroimaging/methods , Unsupervised Machine Learning , Adult , Aged , Aged, 80 and over , Female , Humans , Male , Young Adult
7.
Neuroimage ; 223: 117287, 2020 12.
Article in English | MEDLINE | ID: mdl-32853816

ABSTRACT

Despite the crucial role of the hypothalamus in the regulation of the human body, neuroimaging studies of this structure and its nuclei are scarce. Such scarcity partially stems from the lack of automated segmentation tools, since manual delineation suffers from scalability and reproducibility issues. Due to the small size of the hypothalamus and the lack of image contrast in its vicinity, automated segmentation is difficult and has long been neglected by widespread neuroimaging packages like FreeSurfer or FSL. Nonetheless, recent advances in deep machine learning are enabling us to tackle difficult segmentation problems with high accuracy. In this paper we present a fully automated tool based on a deep convolutional neural network for the segmentation of the whole hypothalamus and its subregions from T1-weighted MRI scans. We use aggressive data augmentation in order to make the model robust to T1-weighted MR scans from a wide array of different sources, without any need for preprocessing. We rigorously assess the performance of the presented tool through extensive analyses, including: inter- and intra-rater variability experiments between human observers; comparison of our tool with manual segmentation; comparison with an automated method based on multi-atlas segmentation; assessment of robustness by quality control analysis of a larger, heterogeneous dataset (ADNI); and indirect evaluation with a volumetric study performed on ADNI. The presented model outperforms multi-atlas segmentation as well as the inter-rater accuracy level, and approaches intra-rater precision. Our method does not require any preprocessing and runs in less than a second on a GPU and in approximately 10 seconds on a CPU. The source code as well as the trained model are publicly available at https://github.com/BBillot/hypothalamus_seg, and will also be distributed with FreeSurfer.
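
Aggressive intensity augmentation of the kind described can be sketched as below (random gamma/contrast, random noise level, random flip); the parameter ranges are illustrative guesses, not those used to train the released model.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_t1(scan: np.ndarray) -> np.ndarray:
    """One aggressive intensity augmentation of a normalised T1w volume."""
    out = scan.copy()
    out = out ** rng.uniform(0.5, 2.0)                     # random gamma / contrast
    out += rng.normal(0, rng.uniform(0, 0.1), out.shape)   # random noise level
    if rng.random() < 0.5:                                 # random left-right flip
        out = out[::-1]
    return np.clip(out, 0, 1)

scan = rng.random((64, 64, 64)).astype(np.float32)
print(augment_t1(scan).mean())
```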


Subject(s)
Brain Mapping/methods , Hypothalamus/anatomy & histology , Hypothalamus/diagnostic imaging , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Aged , Alzheimer Disease/diagnostic imaging , Alzheimer Disease/pathology , Deep Learning , Female , Humans , Male
8.
Article in English | MEDLINE | ID: mdl-38665679

ABSTRACT

We tackle classification based on brain connectivity derived from diffusion magnetic resonance images. We propose a machine-learning model inspired by graph convolutional networks (GCNs), which takes a brain-connectivity input graph and processes the data separately through a parallel GCN mechanism with multiple heads. The proposed network is a simple design that employs different heads involving graph convolutions focused on edges and nodes, thoroughly capturing representations from the input data. To test the ability of our model to extract complementary and representative features from brain connectivity data, we chose the task of sex classification. This quantifies the degree to which the connectome varies depending on the sex, which is important for improving our understanding of health and disease in both sexes. We show experiments on two publicly available datasets: PREVENT-AD (347 subjects) and OASIS3 (771 subjects). The proposed model demonstrates the highest performance compared to the existing machine-learning algorithms we tested, including classical methods and (graph and non-graph) deep learning. We provide a detailed analysis of each component of our model.
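
A minimal sketch of the parallel-head idea, assuming PyTorch and a single weighted connectome as input: one head aggregates node features and one head aggregates edge profiles (rows of the adjacency matrix), and their pooled representations are concatenated for classification. Layer sizes and the readout are illustrative choices, not the published architecture.

```python
import torch
import torch.nn as nn

class TwoHeadGCN(nn.Module):
    """Toy parallel-head graph convolution on one brain-connectivity graph."""
    def __init__(self, n_nodes: int, n_feat: int, hidden: int = 32):
        super().__init__()
        self.node_head = nn.Linear(n_feat, hidden)    # node-focused head
        self.edge_head = nn.Linear(n_nodes, hidden)   # edge-focused head (rows of A)
        self.classifier = nn.Linear(2 * hidden, 2)    # e.g., sex classification

    def forward(self, adj: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # Symmetrically normalised adjacency, as in standard GCNs.
        deg = adj.sum(-1).clamp(min=1e-6)
        norm = adj / torch.sqrt(deg[:, None] * deg[None, :])
        h_node = torch.relu(norm @ self.node_head(feats))  # aggregate node features
        h_edge = torch.relu(norm @ self.edge_head(adj))    # aggregate edge profiles
        h = torch.cat([h_node.mean(0), h_edge.mean(0)])    # graph-level readout
        return self.classifier(h)

adj = torch.rand(84, 84)
adj = (adj + adj.T) / 2                                    # toy symmetric connectome
logits = TwoHeadGCN(n_nodes=84, n_feat=84)(adj, torch.eye(84))
print(logits.shape)
```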

9.
ArXiv ; 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38463507

ABSTRACT

Skull-stripping is the removal of background and non-brain anatomical features from brain images. While many skull-stripping tools exist, few target pediatric populations. With the emergence of multi-institutional pediatric data acquisition efforts to broaden the understanding of perinatal brain development, it is essential to develop robust and well-tested tools ready for the relevant data processing. However, the broad range of neuroanatomical variation in the developing brain, combined with additional challenges such as high motion levels, as well as shoulder and chest signal in the images, leaves many adult-specific tools ill-suited for pediatric skull-stripping. Building on an existing framework for robust and accurate skull-stripping, we propose developmental SynthStrip (d-SynthStrip), a skull-stripping model tailored to pediatric images. This framework exposes networks to highly variable images synthesized from label maps. Our model substantially outperforms pediatric baselines across scan types and age cohorts. In addition, the <1-minute runtime of our tool compares favorably to the fastest baselines. We distribute our model at https://w3id.org/synthstrip.

10.
Brain Commun ; 6(1): fcae007, 2024.
Article in English | MEDLINE | ID: mdl-38274570

ABSTRACT

Deep learning has allowed for remarkable progress in many medical scenarios. Deep learning prediction models often require 10⁵-10⁷ examples. It is currently unknown whether deep learning can also enhance predictions of symptoms post-stroke in real-world samples of stroke patients that are often several orders of magnitude smaller. Such stroke outcome predictions, however, could be particularly instrumental in guiding acute clinical and rehabilitation care decisions. We here compared the capacities of classically used linear and novel deep learning algorithms in their prediction of stroke severity. Our analyses relied on a total of 1430 patients assembled from the MRI-Genetics Interface Exploration collaboration and a Massachusetts General Hospital-based study. The outcome of interest was National Institutes of Health Stroke Scale-based stroke severity in the acute phase after ischaemic stroke onset, which we predicted by means of MRI-derived lesion location. We automatically derived lesion segmentations from diffusion-weighted clinical MRI scans, performed spatial normalization and included a principal component analysis step, retaining 95% of the variance of the original data. We then repeatedly separated train, validation and test sets to investigate the effects of sample size; we subsampled the train set to 100, 300 and 900 patients and trained the algorithms to predict the stroke severity score for each sample size with regularized linear regression and an eight-layer neural network. We selected hyperparameters on the validation set. We evaluated model performance based on the explained variance (R²) in the test set. While linear regression performed significantly better for a sample size of 100 patients, deep learning started to significantly outperform linear regression when trained on 900 patients. Average prediction performance improved by ∼20% when increasing the sample size 9× [maximum for 100 patients: 0.279 ± 0.005 (R², 95% confidence interval), 900 patients: 0.337 ± 0.006]. In summary, for sample sizes of 900 patients, deep learning showed a higher prediction performance than typically employed linear methods. These findings suggest the existence of non-linear relationships between lesion location and stroke severity that can be utilized for an improved prediction performance for larger sample sizes.
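
The experimental setup (PCA retaining 95% of the variance, then regularized linear regression versus a deep network at several training-set sizes) could be reproduced in spirit with scikit-learn as sketched below; the random arrays stand in for lesion maps and NIHSS scores, and the exact regularization and network configuration are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import RidgeCV
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1430, 2000))   # stand-in for voxel-wise lesion maps
y = rng.normal(size=1430)           # stand-in for NIHSS-based stroke severity

X_pca = PCA(n_components=0.95).fit_transform(X)   # keep 95% of the variance
X_tr, X_te, y_tr, y_te = train_test_split(X_pca, y, test_size=300, random_state=0)

for n in (100, 300, 900):           # growing training-set sizes
    lin = RidgeCV().fit(X_tr[:n], y_tr[:n])
    net = MLPRegressor(hidden_layer_sizes=(64,) * 8, max_iter=500,
                       random_state=0).fit(X_tr[:n], y_tr[:n])
    print(n, r2_score(y_te, lin.predict(X_te)), r2_score(y_te, net.predict(X_te)))
```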

11.
bioRxiv ; 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-37333251

ABSTRACT

We present open-source tools for 3D analysis of photographs of dissected slices of human brains, which are routinely acquired in brain banks but seldom used for quantitative analysis. Our tools can: (i) 3D reconstruct a volume from the photographs and, optionally, a surface scan; and (ii) produce a high-resolution 3D segmentation into 11 brain regions per hemisphere (22 in total), independently of the slice thickness. Our tools can be used as a substitute for ex vivo magnetic resonance imaging (MRI), which requires access to an MRI scanner, ex vivo scanning expertise, and considerable financial resources. We tested our tools on synthetic and real data from two NIH Alzheimer's Disease Research Centers. The results show that our methodology yields accurate 3D reconstructions, segmentations, and volumetric measurements that are highly correlated to those from MRI. Our method also detects expected differences between post mortem confirmed Alzheimer's disease cases and controls. The tools are available in our widespread neuroimaging suite "FreeSurfer" ( https://surfer.nmr.mgh.harvard.edu/fswiki/PhotoTools ).

12.
Elife ; 12: 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38896568

ABSTRACT

We present open-source tools for three-dimensional (3D) analysis of photographs of dissected slices of human brains, which are routinely acquired in brain banks but seldom used for quantitative analysis. Our tools can: (1) 3D reconstruct a volume from the photographs and, optionally, a surface scan; and (2) produce a high-resolution 3D segmentation into 11 brain regions per hemisphere (22 in total), independently of the slice thickness. Our tools can be used as a substitute for ex vivo magnetic resonance imaging (MRI), which requires access to an MRI scanner, ex vivo scanning expertise, and considerable financial resources. We tested our tools on synthetic and real data from two NIH Alzheimer's Disease Research Centers. The results show that our methodology yields accurate 3D reconstructions, segmentations, and volumetric measurements that are highly correlated to those from MRI. Our method also detects expected differences between post mortem confirmed Alzheimer's disease cases and controls. The tools are available in our widespread neuroimaging suite 'FreeSurfer' (https://surfer.nmr.mgh.harvard.edu/fswiki/PhotoTools).


Every year, thousands of human brains are donated to science. These brains are used to study normal aging, as well as neurological diseases like Alzheimer's or Parkinson's. Donated brains usually go to 'brain banks', institutions where the brains are dissected to extract tissues relevant to different diseases. During this process, it is routine to take photographs of brain slices for archiving purposes. Often, studies of dead brains rely on qualitative observations, such as 'the hippocampus displays some atrophy', rather than concrete 'numerical' measurements. This is because the gold standard for taking three-dimensional measurements of the brain is magnetic resonance imaging (MRI), which is an expensive technique that requires high expertise, especially with dead brains. The lack of quantitative data means it is not always straightforward to study certain conditions. To bridge this gap, Gazula et al. have developed openly available software that can build three-dimensional reconstructions of dead brains based on photographs of brain slices. The software can also use machine learning methods to automatically extract different brain regions from the three-dimensional reconstructions and measure their size. These data provide precise quantitative measurements that can better describe how different conditions lead to changes in the brain, such as atrophy (reduced volume of one or more brain regions). The researchers assessed the accuracy of the method in two ways. First, they digitally sliced MRI-scanned brains and used the software to compute the sizes of different structures based on these synthetic data, comparing the results to the known sizes. Second, they used brains for which both MRI data and dissection photographs existed and compared the measurements taken by the software to the measurements obtained with MRI images. Gazula et al. show that, as long as the photographs satisfy some basic conditions, they can provide good estimates of the sizes of many brain structures. The tools developed by Gazula et al. are publicly available as part of FreeSurfer, a widespread neuroimaging software package that can be used by any researcher working at a brain bank. This will allow brain banks to obtain accurate measurements of dead brains, enabling them to cheaply perform quantitative studies of brain structures, which could lead to new findings relating to neurodegenerative diseases.


Subject(s)
Alzheimer Disease , Brain , Imaging, Three-Dimensional , Machine Learning , Humans , Imaging, Three-Dimensional/methods , Alzheimer Disease/diagnostic imaging , Alzheimer Disease/pathology , Brain/diagnostic imaging , Brain/pathology , Photography/methods , Dissection , Magnetic Resonance Imaging/methods , Neuropathology/methods , Neuroimaging/methods
13.
ArXiv ; 2023 Sep 20.
Article in English | MEDLINE | ID: mdl-37205262

ABSTRACT

We tackle classification based on brain connectivity derived from diffusion magnetic resonance images. We propose a machine-learning model inspired by graph convolutional networks (GCNs), which takes a brain connectivity input graph and processes the data separately through a parallel GCN mechanism with multiple heads. The proposed network is a simple design that employs different heads involving graph convolutions focused on edges and nodes, capturing representations from the input data thoroughly. To test the ability of our model to extract complementary and representative features from brain connectivity data, we chose the task of sex classification. This quantifies the degree to which the connectome varies depending on the sex, which is important for improving our understanding of health and disease in both sexes. We show experiments on two publicly available datasets: PREVENT-AD (347 subjects) and OASIS3 (771 subjects). The proposed model demonstrates the highest performance compared to the existing machine-learning algorithms we tested, including classical methods and (graph and non-graph) deep learning. We provide a detailed analysis of each component of our model.

14.
Med Image Anal ; 86: 102796, 2023 05.
Article in English | MEDLINE | ID: mdl-36948069

ABSTRACT

The convolutional neural network (CNN) is one of the most commonly used architectures for computer vision tasks. The key building block of a CNN is the convolutional kernel that aggregates information from the pixel neighborhood and shares weights across all pixels. A standard CNN's capacity, and thus its performance, is directly related to the number of learnable kernel weights, which is determined by the number of channels and the kernel size (support). In this paper, we present the hyper-convolution, a novel building block that implicitly encodes the convolutional kernel using spatial coordinates. Unlike a regular convolutional kernel, whose weights are independently learned, hyper-convolution kernel weights are correlated through an encoder that maps spatial coordinates to their corresponding values. Hyper-convolutions decouple kernel size from the total number of learnable parameters, enabling a more flexible architecture design. We demonstrate in our experiments that replacing regular convolutions with hyper-convolutions can improve performance with fewer parameters and increase robustness against noise. We provide our code here: https://github.com/tym002/Hyper-Convolution.
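
A minimal PyTorch sketch of a hyper-convolution, assuming a small MLP that maps 2D kernel coordinates to kernel weights so the parameter count is independent of kernel size; layer widths and the coordinate encoding are illustrative, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperConv2d(nn.Module):
    """Kernel weights generated by an MLP from spatial coordinates."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 7, hidden: int = 16):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, kernel_size
        # Coordinate grid in [-1, 1] x [-1, 1], one (x, y) pair per kernel tap.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, kernel_size),
            torch.linspace(-1, 1, kernel_size), indexing="ij")
        self.register_buffer("coords", torch.stack([xs, ys], dim=-1).reshape(-1, 2))
        self.mlp = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, in_ch * out_ch))    # weights for every channel pair

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.mlp(self.coords)                              # (k*k, in*out)
        w = w.T.reshape(self.out_ch, self.in_ch, self.k, self.k)
        return F.conv2d(x, w, padding=self.k // 2)

layer = HyperConv2d(in_ch=1, out_ch=8, kernel_size=7)
print(layer(torch.randn(2, 1, 64, 64)).shape)   # torch.Size([2, 8, 64, 64])
```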


Subject(s)
Algorithms , Neural Networks, Computer , Humans
15.
Article in English | MEDLINE | ID: mdl-37692094

ABSTRACT

Subject motion can cause artifacts in clinical MRI, frequently necessitating repeat scans. We propose to alleviate this inefficiency by predicting artifact scores from partial multi-shot multi-slice acquisitions, which may guide the operator in aborting corrupted scans early.

16.
Med Image Anal ; 90: 102962, 2023 12.
Article in English | MEDLINE | ID: mdl-37769550

ABSTRACT

We present KeyMorph, a deep learning-based image registration framework that relies on automatically detecting corresponding keypoints. State-of-the-art deep learning methods for registration are often not robust to large misalignments, are not interpretable, and do not incorporate the symmetries of the problem. In addition, most models produce only a single prediction at test time. Our core insight, which addresses these shortcomings, is that corresponding keypoints between images can be used to obtain the optimal transformation via a differentiable closed-form expression. We use this observation to drive the end-to-end learning of keypoints tailored for the registration task, without knowledge of ground-truth keypoints. This framework not only leads to substantially more robust registration but also yields better interpretability, since the keypoints reveal which parts of the image are driving the final alignment. Moreover, KeyMorph can be designed to be equivariant under image translations and/or symmetric with respect to the input image ordering. Finally, we show how multiple deformation fields can be computed efficiently and in closed form at test time, corresponding to different transformation variants. We demonstrate the proposed framework in solving 3D affine and spline-based registration of multi-modal brain MRI scans. In particular, we show registration accuracy that surpasses current state-of-the-art methods, especially in the context of large displacements. Our code is available at https://github.com/alanqrwang/keymorph.
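
The closed-form step can be illustrated with a least-squares affine fit to corresponding keypoints, as in the sketch below; the keypoints here are synthetic, and KeyMorph's actual networks and transformation variants are not reproduced.

```python
import numpy as np

def closed_form_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares 3D affine mapping src keypoints onto dst keypoints.

    src, dst: arrays of shape (N, 3) of corresponding keypoint coordinates.
    Returns a 3x4 matrix [A | t] such that dst ~= src @ A.T + t.
    """
    src_h = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    params, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    return params.T                                    # (3, 4)

rng = np.random.default_rng(0)
src = rng.normal(size=(64, 3))                         # detected keypoints, image 1
A_true = np.array([[1.1, 0.0, 0.1], [0.0, 0.9, 0.0], [-0.1, 0.0, 1.0]])
dst = src @ A_true.T + np.array([2.0, -1.0, 0.5])      # same points, image 2
print(np.round(closed_form_affine(src, dst), 2))       # recovers [A_true | t]
```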


Subject(s)
Deep Learning , Humans , Brain/diagnostic imaging , Magnetic Resonance Imaging/methods , Neuroimaging , Image Processing, Computer-Assisted/methods
17.
ArXiv ; 2023 Oct 16.
Article in English | MEDLINE | ID: mdl-37744470

ABSTRACT

Brain surface-based image registration, an important component of brain image analysis, establishes spatial correspondence between cortical surfaces. Existing iterative and learning-based approaches focus on accurate registration of folding patterns of the cerebral cortex, and assume that geometry predicts function and thus functional areas will also be well aligned. However, structural/functional variability of anatomically corresponding areas across subjects has been widely reported. In this work, we introduce a learning-based cortical registration framework, JOSA, which jointly aligns folding patterns and functional maps while simultaneously learning an optimal atlas. We demonstrate that JOSA can substantially improve registration performance in both anatomical and functional domains over existing methods. By employing a semi-supervised training strategy, the proposed framework obviates the need for functional data during inference, enabling its use in broad neuroscientific domains where functional data may not be observed. The source code of JOSA will be released to the public at https://voxelmorph.net.

18.
Article in English | MEDLINE | ID: mdl-37505997

ABSTRACT

Learning-based image reconstruction models, such as those based on the U-Net, require a large set of labeled images if good generalization is to be guaranteed. In some imaging domains, however, labeled data with pixel- or voxel-level label accuracy are scarce due to the cost of acquiring them. This problem is exacerbated further in domains like medical imaging, where there is no single ground truth label, resulting in large amounts of repeat variability in the labels. Therefore, training reconstruction networks to generalize better by learning from both labeled and unlabeled examples (called semi-supervised learning) is a problem of practical and theoretical interest. However, traditional semi-supervised learning methods for image reconstruction often necessitate handcrafting a differentiable regularizer specific to some given imaging problem, which can be extremely time-consuming. In this work, we propose "supervision by denoising" (SUD), a framework to supervise reconstruction models using their own denoised output as labels. SUD unifies stochastic averaging and spatial denoising techniques under a spatio-temporal denoising framework and alternates denoising and model weight update steps in an optimization framework for semi-supervision. As example applications, we apply SUD to two problems from biomedical imaging, anatomical brain reconstruction (3D) and cortical parcellation (2D), to demonstrate a significant improvement in reconstruction over supervised-only and ensembling baselines. Our code is available at https://github.com/seannz/sud.
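
The alternating structure of SUD (a denoising step that refreshes pseudo-labels from the model's own outputs, followed by a weight-update step) might look roughly like the toy loop below; the model, the averaging rate, and the spatial denoiser are stand-ins chosen for brevity.

```python
import torch
import torch.nn as nn

# Toy reconstruction model and data; the loop structure is what matters here.
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 3, padding=1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_lab, y_lab = torch.randn(4, 1, 32, 32), torch.randn(4, 1, 32, 32)
x_unl = torch.randn(8, 1, 32, 32)

ema = torch.zeros(8, 1, 32, 32)               # temporally averaged predictions
blur = nn.AvgPool2d(3, stride=1, padding=1)   # crude stand-in for spatial denoising

for step in range(100):
    # (1) Denoising step: refresh pseudo-labels from the model's own outputs.
    with torch.no_grad():
        ema = 0.9 * ema + 0.1 * model(x_unl)  # stochastic (temporal) averaging
        pseudo = blur(ema)                    # spatial denoising of the average
    # (2) Weight-update step: supervised loss plus self-supervision on pseudo-labels.
    loss = nn.functional.mse_loss(model(x_lab), y_lab) \
        + 0.5 * nn.functional.mse_loss(model(x_unl), pseudo)
    opt.zero_grad()
    loss.backward()
    opt.step()
```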

19.
Med Image Anal ; 86: 102789, 2023 05.
Article in English | MEDLINE | ID: mdl-36857946

ABSTRACT

Despite advances in data augmentation and transfer learning, convolutional neural networks (CNNs) still generalise poorly to unseen domains. When segmenting brain scans, CNNs are highly sensitive to changes in resolution and contrast: even within the same MRI modality, performance can decrease across datasets. Here we introduce SynthSeg, the first segmentation CNN robust against changes in contrast and resolution. SynthSeg is trained with synthetic data sampled from a generative model conditioned on segmentations. Crucially, we adopt a domain randomisation strategy where we fully randomise the contrast and resolution of the synthetic training data. Consequently, SynthSeg can segment real scans from a wide range of target domains without retraining or fine-tuning, which enables straightforward analysis of huge amounts of heterogeneous clinical data. Because SynthSeg requires only segmentations for training (no images), it can learn from labels obtained by automated methods on diverse populations (e.g., ageing and diseased), thus achieving robustness to a wide range of morphological variability. We demonstrate SynthSeg on 5,000 scans of six modalities (including CT) and ten resolutions, where it exhibits unparalleled generalisation compared with supervised CNNs, state-of-the-art domain adaptation, and Bayesian segmentation. Finally, we demonstrate the generalisability of SynthSeg by applying it to cardiac MRI and CT scans.
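
The resolution half of the domain-randomisation strategy can be sketched as randomly downsampling a training volume and resampling it back to the reference grid, so no fixed resolution is ever seen during training; the factor range and interpolation order below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)

def randomise_resolution(volume: np.ndarray) -> np.ndarray:
    """Simulate a random acquisition resolution, then resample to the original grid."""
    factors = rng.uniform(1.0, 6.0, size=3)              # random spacing per axis
    low = zoom(volume, 1.0 / factors, order=1)           # simulate coarse acquisition
    back = zoom(low, np.array(volume.shape) / np.array(low.shape), order=1)
    return back

vol = rng.random((64, 64, 64)).astype(np.float32)
print(randomise_resolution(vol).shape)
```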


Subject(s)
Magnetic Resonance Imaging , Neuroimaging , Humans , Bayes Theorem , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods
20.
Article in English | MEDLINE | ID: mdl-37565069

ABSTRACT

Motion artifacts can negatively impact diagnosis, patient experience, and radiology workflow, especially when a patient recall is required. Detecting motion artifacts while the patient is still in the scanner could potentially improve workflow and reduce costs by enabling immediate corrective action. We demonstrate on a clinical k-space dataset that cross-correlation between adjacent phase-encoding lines can detect motion artifacts directly from raw k-space in multi-shot, multi-slice scans. We train a split-attention residual network and examine its performance in predicting motion artifact severity. The network is trained on simulated data and tested on real clinical data.
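
A rough sketch of the k-space cross-correlation idea: compute the correlation between each pair of adjacent phase-encoding lines and look for drops that may indicate inter-shot motion. The synthetic k-space and the magnitude-based correlation are simplifying assumptions, not the exact detection statistic used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def adjacent_line_correlation(kspace: np.ndarray) -> np.ndarray:
    """Magnitude correlation between adjacent phase-encoding lines of one slice.

    kspace: complex array of shape (n_phase_encodes, n_readout). Sudden drops
    in the correlation profile are a plausible marker of inter-shot motion.
    """
    mag = np.abs(kspace)
    a, b = mag[:-1], mag[1:]
    a = (a - a.mean(1, keepdims=True)) / (a.std(1, keepdims=True) + 1e-8)
    b = (b - b.mean(1, keepdims=True)) / (b.std(1, keepdims=True) + 1e-8)
    return (a * b).mean(axis=1)               # one correlation per adjacent pair

k = rng.normal(size=(128, 256)) + 1j * rng.normal(size=(128, 256))
print(adjacent_line_correlation(k).shape)     # (127,)
```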
