ABSTRACT
Brain extraction and image quality assessment are two fundamental steps in fetal brain magnetic resonance imaging (MRI) 3D reconstruction and quantification. However, the randomness of fetal position and orientation, the variability of fetal brain morphology, the maternal organs around the fetus, and the scarcity of data samples all add excessive noise and pose a great challenge to automated brain extraction and quality assessment of fetal MRI slices. Conventionally, brain extraction and quality assessment are performed independently. However, both focus on the brain image representation, so they can be jointly optimized to ensure the network learns more effective features and avoids overfitting. To this end, we propose a novel two-stage dual-task deep learning framework with a brain localization stage and a dual-task stage for joint brain extraction and quality assessment of fetal MRI slices. Specifically, the dual-task module compactly contains a feature extraction module, a quality assessment head, and a segmentation head with feature fusion for simultaneous brain extraction and quality assessment. Besides, a transformer architecture is introduced into the feature extraction module and the segmentation head. We utilize a multi-step training strategy to guarantee stable and successful training of all modules. Finally, we validate our method by 5-fold cross-validation and an ablation study on a dataset of fetal brain MRI slices of varying quality, and additionally perform a cross-dataset validation. Experiments show that the proposed framework achieves very promising performance.
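The joint optimization described above can be illustrated with a minimal numpy sketch of a dual-task objective: a segmentation (Dice) term for brain extraction plus a cross-entropy term for slice quality classification. This is an assumed, simplified formulation for intuition only; the exact losses, the weighting `alpha`, and all function names are illustrative, not from the paper:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary mask."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def cross_entropy(probs, label, eps=1e-12):
    """Cross-entropy for the slice-level quality class."""
    return -np.log(probs[label] + eps)

def dual_task_loss(seg_pred, seg_gt, quality_probs, quality_label, alpha=1.0):
    """Joint objective: brain extraction + quality assessment (alpha is an
    assumed balancing weight)."""
    return dice_loss(seg_pred, seg_gt) + alpha * cross_entropy(quality_probs, quality_label)

# Toy example: a 4x4 slice with a perfect mask and a confident quality prediction
mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1.0
loss = dual_task_loss(mask, mask, np.array([0.1, 0.8, 0.1]), 1)
```

In a real pipeline, `seg_pred` and `quality_probs` would come from the two heads on top of one shared feature extractor, so gradients from both terms update the shared features, which is what ties the two tasks together.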
Subject(s)
Computer-Assisted Image Processing, Magnetic Resonance Imaging, Humans, Pregnancy, Female, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods, Brain/diagnostic imaging, Head, Fetus/diagnostic imaging

ABSTRACT
Precise segmentation of subcortical structures from infant brain magnetic resonance (MR) images plays an essential role in studying early subcortical structural and functional developmental patterns and in the diagnosis of related brain disorders. However, due to the dynamic appearance changes, low tissue contrast, and tiny subcortical sizes in infant brain MR images, infant subcortical segmentation is a challenging task. In this paper, we propose a context-guided, attention-based, coarse-to-fine deep framework to precisely segment the infant subcortical structures. At the coarse stage, we aim to directly predict the signed distance maps (SDMs) from multi-modal intensity images, including T1w, T2w, and the ratio of T1w to T2w images, with an SDM-Unet, which can leverage spatial context information, including the structural position information and the shape information of the target structure, to generate high-quality SDMs. At the fine stage, the predicted SDMs, which encode the spatial-context information of each subcortical structure, are integrated with the multi-modal intensity images as the input to a multi-source and multi-path attention Unet (M2A-Unet) to achieve refined segmentation. Both 3D spatial and channel attention blocks are added to guide the M2A-Unet to focus more on the important subregions and channels. We additionally incorporate the inner and outer subcortical boundaries as extra labels to help precisely estimate the ambiguous boundaries. We validate our method on an infant MR image dataset and on an unrelated neonatal MR image dataset. Compared to eleven state-of-the-art methods, the proposed framework consistently achieves higher segmentation accuracy in both qualitative and quantitative evaluations of infant MR images and also exhibits good generalizability on the neonatal dataset.
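The signed distance maps (SDMs) used as coarse-stage targets can be computed from a binary segmentation by brute force on small grids. A hedged sketch: the sign convention (negative inside, positive outside) is one common choice, and production code would use a fast distance transform (e.g., `scipy.ndimage.distance_transform_edt`) instead of this exhaustive search:

```python
import numpy as np

def signed_distance_map(mask):
    """Brute-force signed Euclidean distance map for a 2D binary mask:
    negative inside the structure, positive outside (assumed convention)."""
    fg = np.argwhere(mask > 0)   # foreground pixel coordinates
    bg = np.argwhere(mask == 0)  # background pixel coordinates
    sdm = np.zeros(mask.shape, float)
    for p in np.argwhere(np.ones_like(mask)):  # iterate over all pixels
        if mask[tuple(p)] > 0:
            # inside: negated distance to the nearest background pixel
            sdm[tuple(p)] = -np.min(np.linalg.norm(bg - p, axis=1))
        else:
            # outside: distance to the nearest foreground pixel
            sdm[tuple(p)] = np.min(np.linalg.norm(fg - p, axis=1))
    return sdm
```

Regressing such maps instead of hard labels gives the coarse network a smooth, shape-aware target that encodes both position and distance to the structure boundary.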
Subject(s)
Brain Diseases, Brain, Newborn Infant, Humans, Infant, Magnetic Resonance Imaging/methods, Computer-Assisted Image Processing/methods

ABSTRACT
Spatiotemporal (four-dimensional) infant-dedicated brain atlases are essential for neuroimaging analysis of early dynamic brain development. However, due to the substantial technical challenges in the acquisition and processing of infant brain MR images, 4D atlases densely covering the dynamic brain development during infancy are still scarce. The few existing ones generally have fuzzy tissue contrast and low spatiotemporal resolution, leading to degraded accuracy of atlas-based normalization and subsequent analyses. To address this issue, in this paper, we construct a 4D structural MRI atlas for infant brains based on the UNC/UMN Baby Connectome Project (BCP) dataset, which features a high spatial resolution, extensive age-range coverage, and densely sampled time points. Specifically, 542 longitudinal T1w and T2w scans from 240 typically developing infants up to 26 months of age were utilized for our atlas construction. To improve the co-registration accuracy of the infant brain images, which typically exhibit dynamic appearance with low tissue contrast, we employed a state-of-the-art registration method and leveraged our generated reliable brain tissue probability maps in addition to the intensity images to improve the alignment of individual images. To achieve consistent region labeling on both infant and adult brain images for facilitating region-based analysis across ages, we mapped the widely used Desikan cortical parcellation onto our atlas in an age-decreasing mapping manner. Meanwhile, the typical subcortical structures were manually delineated to facilitate studies related to the subcortex. Compared with the existing infant brain atlases, our 4D atlas has much higher spatiotemporal resolution and preserves more structural details, and thus can boost accuracy in neurodevelopmental analysis during infancy.
Subject(s)
Connectome, Adult, Brain/diagnostic imaging, Cohort Studies, Humans, Computer-Assisted Image Processing/methods, Infant, Magnetic Resonance Imaging/methods, Neuroimaging/methods

ABSTRACT
Longitudinal brain imaging atlases with densely sampled time-points and ancillary anatomical information are of fundamental importance in studying the early developmental characteristics of human and non-human primate brains during infancy, which feature extremely dynamic imaging appearance, brain shape, and size. However, for non-human primates, which are highly valuable animal models for understanding human brains, the existing brain atlases are mainly developed based on adults or adolescents, denoting a notable lack of temporally densely-sampled atlases covering dynamic early brain development. To fill this critical gap, in this paper, we construct a comprehensive set of longitudinal brain atlases and associated tissue probability maps (gray matter, white matter, and cerebrospinal fluid) with 12 time-points in total from birth to 4 years of age (i.e., 1, 2, 3, 4, 5, 6, 9, 12, 18, 24, 36, and 48 months of age) based on 175 longitudinal structural MRI scans from 39 typically-developing cynomolgus macaques, by leveraging state-of-the-art computational techniques tailored for early developing brains. Furthermore, to facilitate region-based analysis using our atlases, we also provide two popular hierarchical parcellations, i.e., cortical hierarchy maps (6 levels) and subcortical hierarchy maps (6 levels), on our longitudinal macaque brain atlases. These early developing atlases, which have the densest time-points during infancy (to the best of our knowledge), will greatly facilitate studies of macaque brain development.
Subject(s)
Brain/growth & development, Magnetic Resonance Imaging/methods, Neuroimaging/methods, Animals, Gray Matter/growth & development, Computer-Assisted Image Processing, Macaca fascicularis, White Matter/growth & development

ABSTRACT
As non-human primates, macaques have a close phylogenetic relationship to human beings and have been proven to be a valuable and widely used animal model in human neuroscience research. Accurate skull stripping (a.k.a. brain extraction) of brain magnetic resonance imaging (MRI) is a crucial prerequisite in the neuroimaging analysis of macaques. Most current skull stripping methods can achieve satisfactory results for human brains, but when applied to macaque brains, especially during early brain development, the results are often unsatisfactory. In fact, the early dynamic, regionally-heterogeneous development of macaque brains, accompanied by poor and age-related contrast between different anatomical structures, poses significant challenges for accurate skull stripping. To overcome these challenges, we propose a fully-automated framework to effectively fuse age-specific intensity information and domain-invariant prior knowledge as important guiding information for robust skull stripping of developing macaques from 0 to 36 months of age. Specifically, we generate a Signed Distance Map (SDM) and a Center of Gravity Distance Map (CGDM) based on the intermediate segmentation results as guidance. Instead of using local convolution, we fuse all information using the Dual Self-Attention Module (DSAM), which can capture global spatial and channel-dependent information of feature maps. To extensively evaluate the performance, we adopt two relatively large, challenging MRI datasets from rhesus macaques and cynomolgus macaques, respectively, with a total of 361 scans from two different scanners with different imaging protocols. We perform cross-validation by using one dataset for training and the other one for testing. Our method outperforms five popular brain extraction tools and three deep-learning-based methods on cross-source MRI datasets without any transfer learning.
Subject(s)
Brain Mapping/methods, Brain/anatomy & histology, Deep Learning, Computer-Assisted Image Processing/methods, Animals, Macaca, Magnetic Resonance Imaging

ABSTRACT
Temporally consistent and accurate registration and parcellation of longitudinal cortical surfaces is of great importance in studying longitudinal morphological and functional changes of human brains. However, most existing methods are developed for registration or parcellation of a single cortical surface. When applied to longitudinal studies, these methods independently register/parcellate each surface from longitudinal scans, thus often generating longitudinally inconsistent and inaccurate results, especially in small or ambiguous cortical regions. Essentially, longitudinal cortical surface registration and parcellation are highly correlated tasks with inherently shared constraints on both spatial and temporal feature representations, which are unfortunately ignored in existing methods. To this end, we propose a novel semi-supervised learning framework to exploit these inherent relationships from limited labeled data and extensive unlabeled data for more robust and consistent registration and parcellation of longitudinal cortical surfaces. Our method utilizes the spherical topology characteristic of cortical surfaces. It employs a spherical network as an encoder to extract high-level cortical features. Subsequently, we build two specialized decoders dedicated to the tasks of registration and parcellation, respectively. To extract more meaningful spatial features, we design a novel parcellation map similarity loss that utilizes the relationship between the registration and parcellation tasks, i.e., the parcellation map warped by the deformation field in registration should match the atlas parcellation map, thereby providing extra supervision for the registration task and augmented data for the parcellation task by warping the atlas parcellation map to unlabeled surfaces.
To enable a temporally more consistent feature representation, we additionally enforce longitudinal consistency among longitudinal surfaces after registering them together using their concatenated features. Experiments on two longitudinal datasets of infants and adults have shown that our method achieves significant improvements in both registration/parcellation accuracy and longitudinal consistency compared to existing methods, especially in small and challenging cortical regions.
Subject(s)
Cerebral Cortex, Magnetic Resonance Imaging, Supervised Machine Learning, Humans, Magnetic Resonance Imaging/methods, Longitudinal Studies, Cerebral Cortex/diagnostic imaging, Cerebral Cortex/anatomy & histology, Algorithms, Computer-Assisted Image Processing/methods

ABSTRACT
Fetal Magnetic Resonance Imaging (MRI) is challenged by fetal movements and maternal breathing. Although fast MRI sequences allow artifact-free acquisition of individual 2D slices, motion frequently occurs in the acquisition of spatially adjacent slices. Motion correction for each slice is thus critical for the reconstruction of 3D fetal brain MRI. In this paper, we propose a novel multi-task learning framework that adopts a coarse-to-fine strategy to jointly learn the pose estimation parameters for motion correction and the tissue segmentation map of each slice in fetal MRI. In particular, we design a regression-based segmentation loss as deep supervision to learn anatomically more meaningful features for pose estimation and segmentation. In the coarse stage, a U-Net-like network learns the features shared by both tasks. In the refinement stage, to fully utilize the anatomical information, signed distance maps constructed from the coarse segmentation are introduced to guide the feature learning for both tasks. Finally, iterative incorporation of the signed distance maps progressively improves the performance of both regression and segmentation. Experimental results of cross-validation across two different fetal datasets acquired with different scanners and imaging protocols demonstrate the effectiveness of the proposed method in reducing the pose estimation error and obtaining superior tissue segmentation results simultaneously, compared with state-of-the-art methods.
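The per-slice pose estimated for motion correction is typically a 3D rigid transform (three rotations plus a translation) that maps slice coordinates into the volume space. A minimal sketch of how such parameters act on a slice point, with an assumed ZYX Euler convention and a slice lifted to the z=0 plane (not necessarily the parameterization used in the paper):

```python
import numpy as np

def rigid_matrix(rx, ry, rz, t):
    """4x4 homogeneous rigid transform from Euler angles (ZYX order assumed)
    and a translation vector t."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    M = np.eye(4)
    M[:3, :3] = Rz @ Ry @ Rx
    M[:3, 3] = t
    return M

def map_slice_point(u, v, M):
    """Map in-plane slice coordinates (u, v) into 3D volume space."""
    p = M @ np.array([u, v, 0.0, 1.0])
    return p[:3]
```

The regression head predicts the six parameters `(rx, ry, rz, t)` per slice; stacking the transformed slices then allows the 3D reconstruction step.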
Subject(s)
Computer-Assisted Image Processing, Magnetic Resonance Imaging, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods, Fetus/diagnostic imaging, Motion (Physics), Brain/diagnostic imaging

ABSTRACT
Spherical mapping of cortical surface meshes provides a more convenient and accurate space for cortical surface registration and analysis, and thus has been widely adopted in the neuroimaging field. Conventional approaches typically first inflate and project the original cortical surface mesh onto a sphere to generate an initial spherical mesh, which contains large distortions. Then, they iteratively reshape the spherical mesh to minimize the metric (distance), area, or angle distortions. However, these methods suffer from two major issues: 1) the iterative optimization process is computationally expensive, making them unsuitable for large-scale data processing; 2) when the metric distortion cannot be further minimized, either area or angle distortion is minimized at the expense of the other, which is not flexible enough to generate application-specific meshes based on both of them. To address these issues, for the first time, we propose a deep learning-based algorithm to learn the mapping between the original cortical surface and spherical surface meshes. Specifically, we take advantage of the Spherical U-Net model to learn the spherical diffeomorphic deformation field for minimizing the distortions between the icosahedron-reparameterized original surface and spherical surface meshes. The end-to-end unsupervised learning scheme is very flexible in incorporating various optimization objectives. We further integrate it into a coarse-to-fine multi-resolution framework for better correcting fine-scaled distortions. We have validated our method on 800+ cortical surfaces, demonstrating reduced distortions compared with FreeSurfer (the most widely used tool), while speeding up the process from 20 min to 5 s.
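The area-distortion objective being minimized can be made concrete as per-triangle log area ratios between the original cortical mesh and its spherical image. This is an illustrative metric sketch; the actual distortion terms and their weighting in the optimization objective may differ:

```python
import numpy as np

def tri_area(v0, v1, v2):
    """Area of a 3D triangle via the cross product."""
    return 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0))

def area_distortion(orig_verts, sph_verts, faces):
    """Per-triangle area distortion between the original mesh and its
    spherical mapping, as |log(area ratio)| (0 means area-preserving)."""
    d = []
    for f in faces:
        a_orig = tri_area(*orig_verts[f])
        a_sph = tri_area(*sph_verts[f])
        d.append(abs(np.log(a_sph / a_orig)))
    return np.array(d)
```

Metric and angle distortions admit analogous per-edge and per-corner formulas; a learned deformation field can then be trained against any weighted combination of them, which is the flexibility claimed above.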
ABSTRACT
Brain cortical surfaces, which have an intrinsic spherical topology, are typically represented by triangular meshes and mapped onto a spherical manifold in neuroimaging analysis. Inspired by the strong feature-learning capability of Convolutional Neural Networks (CNNs), spherical CNNs have been developed accordingly and achieved many successes in cortical surface analysis. Motivated by the recent success of the transformer, in this paper, for the first time, we extend the transformer into the spherical space and propose the spherical transformer, which can better learn contextual and structural features than spherical CNNs. We applied the spherical transformer to the important task of automatic quality assessment of infant cortical surfaces, which is a necessary procedure for identifying problematic cases due to extremely low tissue contrast and strong motion effects in pediatric brain MRI studies. Experiments on 1,860 infant cortical surfaces validated its superior effectiveness and efficiency in comparison with spherical CNNs.
ABSTRACT
Motivated by the recent great success of attention modeling in computer vision, it is highly desired to extend the Transformer architecture from the conventional Euclidean space to non-Euclidean spaces. Given the intrinsic spherical topology of brain cortical surfaces in neuroimaging, in this study, we propose a novel Spherical Transformer, an effective general-purpose backbone using the self-attention mechanism for analysis of cortical surface data represented by triangular meshes. By mapping the cortical surface onto a sphere and splitting it uniformly into overlapping spherical surface patches, we encode the long-range dependency within each patch by the self-attention operation and formulate the cross-patch feature transmission via overlapping regions. By limiting the self-attention computation to local patches, our proposed Spherical Transformer preserves detailed contextual information and enjoys great efficiency with linear computational complexity with respect to the patch size. Moreover, to better process longitudinal cortical surfaces, which are increasingly popular in neuroimaging studies, we propose the spatiotemporal self-attention operation to jointly extract the spatial context and dynamic developmental patterns within a single layer, thus further enlarging the expressive power of the generated representation. To comprehensively evaluate the performance of our Spherical Transformer, we validate it on a surface-level prediction task and a vertex-level dense prediction task, i.e., cognition prediction and cortical thickness map development prediction, respectively, which are important in early brain development mapping. Both applications demonstrate the competitive performance of our Spherical Transformer in comparison with the state-of-the-art methods.
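The patch-restricted self-attention at the core of this design can be sketched in a few lines of numpy. This is a single-head, single-patch toy version; multi-head projections, patch overlap handling, and the spatiotemporal variant are omitted, and all names are illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def patch_self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention restricted to one spherical surface patch.
    X is (n_vertices_in_patch, d). Cost is quadratic in the patch size only,
    hence overall cost grows linearly with the number of patches covering
    the sphere."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]))  # (n, n) attention weights
    return A @ V
```

Running this independently per patch, with overlapping vertices shared between neighboring patches, is what lets features propagate across patch boundaries without ever forming the full all-vertices attention matrix.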
ABSTRACT
Cortical surface registration and parcellation are two essential steps in neuroimaging analysis. Conventionally, they are performed independently as two tasks, ignoring the inherent connections between these two closely-related tasks. Essentially, both tasks rely on meaningful cortical feature representations, so they can be jointly optimized by learning shared useful cortical features. To this end, we propose a deep learning framework for joint cortical surface registration and parcellation. Specifically, our approach leverages the spherical topology of cortical surfaces and uses a spherical network as the shared encoder to first learn shared features for both tasks. Then we train two task-specific decoders for registration and parcellation, respectively. We further exploit the more explicit connection between them by incorporating a novel parcellation map similarity loss to enforce the boundary consistency of regions, thereby providing extra supervision for the registration task. Conversely, parcellation network training also benefits from the registration, which provides a large amount of augmented data by warping one surface with a manual parcellation map to another surface, especially when only a few manually-labeled surfaces are available. Experiments on a dataset with more than 600 cortical surfaces show that our approach achieves large improvements in both parcellation and registration accuracy (over separately trained networks) and enables training high-quality parcellation and registration models using much less labeled data.
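The parcellation map similarity loss can be approximated, for intuition, as one minus the mean Dice overlap between the warped parcellation and the target parcellation. This is a hard-label sketch with hypothetical names; a trainable version would be defined on soft, differentiable label maps:

```python
import numpy as np

def label_dice(a, b, labels):
    """Mean Dice overlap between two parcellation maps over a label set."""
    scores = []
    for lab in labels:
        pa, pb = (a == lab), (b == lab)
        denom = pa.sum() + pb.sum()
        scores.append(2.0 * np.logical_and(pa, pb).sum() / denom if denom else 1.0)
    return float(np.mean(scores))

def parcellation_similarity_loss(warped_parc, target_parc, labels):
    """Loss encouraging region-boundary consistency after registration:
    0 when the warped and target parcellations agree perfectly."""
    return 1.0 - label_dice(warped_parc, target_parc, labels)
```

Minimizing this term pushes the registration decoder to align region boundaries, while the same warping machinery applied to labeled surfaces yields augmented training pairs for the parcellation decoder.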
ABSTRACT
A spatiotemporal (4D) cortical surface atlas during infancy plays an important role in surface-based visualization, normalization, and analysis of dynamic early brain development. Conventional atlas construction methods typically rely on classical group-wise registration on sub-populations and ignore longitudinal constraints, thus having three main issues: 1) they construct templates only at discrete time points; 2) they result in longitudinal inconsistency among atlases of different ages; and 3) they take extremely long runtimes. To address these issues, in this paper, we propose a fast unsupervised learning-based surface atlas construction framework incorporating longitudinal constraints to enforce the within-subject temporal correspondence in the atlas space. To handle the difficulty of learning large deformations, we propose a multi-level multimodal spherical registration network to perform cortical surface registration in a coarse-to-fine manner. Thus, only small deformations need to be estimated at each resolution level by the registration network, which further improves registration accuracy and atlas quality. Our constructed 4D infant cortical surface atlas, based on 625 longitudinal scans from 291 infants, is temporally continuous, in contrast to the state-of-the-art UNC 4D Infant Surface Atlas, which only provides atlases at a few discrete, sparse time points. As shown by evaluating the intra- and inter-subject spatial normalization accuracy after alignment onto the atlas, our atlas demonstrates more detailed and fine-grained cortical patterns, thus leading to higher accuracy in surface registration.
ABSTRACT
Cortical surface registration is an essential step and prerequisite for surface-based neuroimaging analysis. It aligns cortical surfaces across individuals and time points to establish cross-sectional and longitudinal cortical correspondences and facilitate neuroimaging studies. Though achieving good performance, available methods are either time-consuming or not flexible enough to extend to multiple or high-dimensional features. Considering the explosive availability of large-scale and multimodal brain MRI data, fast surface registration methods that can flexibly handle multimodal features are desired. In this study, we develop a Superfast Spherical Surface Registration (S3Reg) framework for the cerebral cortex. Leveraging an end-to-end unsupervised learning strategy, S3Reg offers great flexibility in the choice of input feature sets and output similarity measures for registration, and meanwhile reduces the registration time significantly. Specifically, we exploit the powerful learning capability of spherical Convolutional Neural Networks (CNNs) to directly learn the deformation fields in spherical space and implement a diffeomorphic design with "scaling and squaring" layers to guarantee topology-preserving deformations. To handle the polar-distortion issue, we construct a novel spherical CNN model using three orthogonal Spherical U-Nets. Experiments are performed on two different datasets to align both adult and infant multimodal cortical features. Results demonstrate that our S3Reg shows superior or comparable performance with respect to state-of-the-art methods, while reducing the registration time from 1 min to 10 s.
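The "scaling and squaring" layers mentioned above integrate a stationary velocity field by first dividing it by 2^N and then composing the resulting small deformation with itself N times. A 1D numpy sketch of the idea (the real implementation operates on spherical deformation fields with spherical interpolation):

```python
import numpy as np

def compose(phi, psi, xs):
    """Compose displacement fields sampled on grid xs:
    (phi o psi)(x) = psi(x) + phi(x + psi(x)), with linear interpolation."""
    return psi + np.interp(xs + psi, xs, phi)

def scaling_and_squaring(v, xs, steps=6):
    """Integrate a stationary velocity field v: start from v / 2**steps,
    then square (self-compose) `steps` times to approximate exp(v)."""
    phi = v / (2 ** steps)
    for _ in range(steps):
        phi = compose(phi, phi, xs)
    return phi
```

Because each squaring step only composes an already-invertible small deformation with itself, the final field stays diffeomorphic, which is exactly the topology-preservation guarantee the layer provides.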
Subject(s)
Deep Learning, Adult, Cross-Sectional Studies, Humans, Computer-Assisted Image Processing, Infant, Magnetic Resonance Imaging, Neural Networks (Computer), Neuroimaging

ABSTRACT
Longitudinal infant-dedicated cerebellum atlases play a fundamental role in characterizing and understanding the dynamic cerebellum development during infancy. However, due to the limited spatial resolution, low tissue contrast, tiny folding structures, and rapid growth of the cerebellum during this stage, it is challenging to build such atlases while preserving clear folding details. Furthermore, the existing atlas construction methods typically build discrete atlases independently based on samples for each age group, without considering the within-subject temporal consistency, which is critical for large-scale longitudinal studies. To fill this gap, we propose an age-conditional multi-stage learning framework to construct longitudinally consistent 4D infant cerebellum atlases. Specifically, 1) a joint affine and deformable atlas construction framework is proposed to accurately build temporally continuous atlases based on the entire cohort and rapidly warp new images to the atlas space; 2) a longitudinal constraint is employed to enforce the within-subject temporal consistency during atlas building; and 3) a correntropy-based regularization loss is further exploited to enhance the robustness of our framework. Our atlases are constructed based on 405 longitudinal scans from 187 healthy infants with ages ranging from 6 to 27 months, and are compared to atlases built by state-of-the-art algorithms. Results demonstrate that our atlases preserve more structural details and fine-grained cerebellum folding patterns, which ensure higher accuracy in subsequent atlas-based registration and segmentation tasks.
ABSTRACT
Brain atlases are of fundamental importance for analyzing dynamic neurodevelopment in fetal brain studies. Since the brain size, shape, and anatomical structures change rapidly during the prenatal period, it is essential to construct a spatiotemporal (4D) atlas equipped with tissue probability maps, which can preserve sharper early brain folding patterns for accurately characterizing dynamic changes in fetal brains and provide tissue priors for related tasks, e.g., segmentation, registration, and parcellation. In this work, we propose a novel unsupervised age-conditional learning framework to build temporally continuous fetal brain atlases by incorporating tissue segmentation maps, which outperforms previous traditional atlas construction methods in three aspects. First, our framework enables learning age-conditional deformable templates by leveraging the entire collection. Second, we leverage reliable brain tissue segmentation maps in addition to the low-contrast, noisy intensity images to enhance the alignment of individual images. Third, a novel loss function is designed to enforce the similarity between the learned tissue probability map on the atlas and each subject's tissue segmentation map after registration, thereby providing extra anatomical-consistency supervision for atlas building. Our 4D temporally-continuous fetal brain atlases are constructed based on 82 healthy fetuses from 22 to 32 gestational weeks. Compared with the atlases built by state-of-the-art algorithms, our atlases preserve more structural details and sharper folding patterns. Together with the learned tissue probability maps, our 4D fetal atlases provide a valuable reference for spatial normalization and analysis of fetal brain development.
ABSTRACT
Convolutional Neural Networks (CNNs) have achieved overwhelming success in learning-related problems for 2D/3D images in the Euclidean space. However, unlike in the Euclidean space, the shapes of many structures in medical imaging have an inherent spherical topology in a manifold space, e.g., the convoluted brain cortical surfaces represented by triangular meshes. There is no consistent neighborhood definition and thus no straightforward convolution/pooling operations for such cortical surface data. In this paper, leveraging the regular and hierarchical geometric structure of the resampled spherical cortical surfaces, we create the 1-ring filter on spherical cortical triangular meshes and accordingly develop convolution/pooling operations for constructing Spherical U-Net for cortical surface data. However, the regular nature of the 1-ring filter makes it inherently limited to model fixed geometric transformations. To further enhance the transformation modeling capability of Spherical U-Net, we introduce the deformable convolution and deformable pooling to cortical surface data and accordingly propose the Spherical Deformable U-Net (SDU-Net). Specifically, spherical offsets are learned to freely deform the 1-ring filter on the sphere to adaptively localize cortical structures with different sizes and shapes. We then apply the SDU-Net to two challenging and scientifically important tasks in neuroimaging: cortical surface parcellation and cortical attribute map prediction. Both applications validate the competitive performance of our approach in accuracy and computational efficiency in comparison with state-of-the-art methods.
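The 1-ring filter has a simple form on a resampled icosahedral mesh: each vertex's output is a learned linear map of its own feature concatenated with its ordered neighbors' features. An illustrative numpy version; the neighbor ordering, the padding of the twelve original 5-neighbor vertices (here by duplicating a neighbor), and the shapes are assumptions for the sketch:

```python
import numpy as np

def one_ring_conv(features, neighbors, weights):
    """1-ring convolution on a spherical mesh.
    features : (n_verts, c) per-vertex feature vectors
    neighbors: (n_verts, 6) index array of each vertex's 1-ring
               (5-neighbor vertices padded by repeating one neighbor)
    weights  : (7 * c, c_out) shared filter over [self, 6 neighbors]."""
    n, c = features.shape
    out = np.empty((n, weights.shape[1]))
    for i in range(n):
        # gather the ordered patch: the vertex itself, then its 1-ring
        patch = np.concatenate([features[i:i + 1], features[neighbors[i]]], axis=0)
        out[i] = patch.reshape(-1) @ weights
    return out
```

With a fixed, consistent neighbor ordering on the resampled sphere, this behaves exactly like a shared 2D convolution kernel, which is what makes U-Net-style architectures transferable to the spherical setting; the deformable variant additionally learns per-vertex offsets of the 1-ring sampling locations.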
Subject(s)
Computer-Assisted Image Processing, Neural Networks (Computer), Three-Dimensional Imaging

ABSTRACT
Current spherical surface registration methods achieve good performance on alignment and spatial normalization of cortical surfaces across individuals in neuroimaging analysis. However, they are computationally intensive, since they have to optimize an objective function independently for each pair of surfaces. In this paper, we present a fast learning-based algorithm that makes use of recent developments in spherical Convolutional Neural Networks (CNNs) for spherical cortical surface registration. Given a set of surface pairs without supervised information, such as ground-truth deformation fields or anatomical landmarks, we formulate the registration as a parametric function and learn its parameters by enforcing the feature similarity between one surface and the other warped by the deformation field estimated using the function. Then, given a new pair of surfaces, we can quickly infer the spherical deformation field registering one surface to the other. We model this parametric function using three orthogonal Spherical U-Nets and use spherical transform layers to warp the spherical surfaces, while imposing smoothness constraints on the deformation field. All the layers in the network are well-defined and differentiable, so the parameters can be effectively learned. We show that our method achieves accurate cortical alignment results on 102 subjects, comparable to two state-of-the-art methods, Spherical Demons and MSM, while running much faster.
ABSTRACT
Fetal Magnetic Resonance Imaging (MRI) is challenged by fetal movements and maternal breathing. Although fast MRI sequences allow artifact-free acquisition of individual 2D slices, motion commonly occurs between slice acquisitions. Motion correction for each slice is thus very important for the reconstruction of 3D fetal brain MRI, but it is highly operator-dependent and time-consuming. Approaches based on convolutional neural networks (CNNs) have achieved encouraging performance on the prediction of 3D motion parameters of arbitrarily oriented 2D slices, which, however, do not capitalize on important brain structural information. To address this problem, we propose a new multi-task learning framework to jointly learn the transformation parameters and tissue segmentation map of each slice, providing brain anatomical information to guide the mapping from 2D slices to 3D volumetric space in a coarse-to-fine manner. In the coarse stage, the first network learns the features shared by both the regression and segmentation tasks. In the refinement stage, to fully utilize the anatomical information, distance maps constructed from the coarse segmentation are introduced to the second network. Finally, incorporating the signed distance maps to jointly guide the regression and segmentation improves the performance of both tasks. Experimental results indicate that the proposed method achieves superior performance in reducing the motion prediction error and obtaining satisfactory tissue segmentation results simultaneously, compared with state-of-the-art methods.
ABSTRACT
Structured illumination microscopy (SIM) surpasses the optical diffraction limit and offers a two-fold enhancement in resolution over diffraction-limited microscopy. However, it requires both intense illumination and multiple acquisitions to produce a single high-resolution image. Using deep learning to augment SIM, we obtain a five-fold reduction in the number of raw images required for super-resolution SIM and generate images under extreme low-light conditions (at least 100× fewer photons). We validate the performance of deep neural networks on different cellular structures and achieve multi-color, live-cell super-resolution imaging with greatly reduced photobleaching.
Subject(s)
Microscopy/methods, Animals, Deep Learning, Fibroblasts/chemistry, Computer-Assisted Image Processing, Mice, Microscopy/instrumentation

ABSTRACT
A growing number of multi-site infant neuroimaging datasets are facilitating research on early brain development with larger sample sizes and greater statistical power. However, a joint analysis of cortical properties (e.g., cortical thickness) unavoidably faces the problem of non-biological variance introduced by differences in MRI scanners. To address this issue, in this paper, we propose cycle-consistent adversarial networks based on the spherical cortical surface to harmonize cortical thickness maps between different scanners. We combine the Spherical U-Net and CycleGAN to construct a surface-to-surface CycleGAN (S2SGAN). Specifically, we model the harmonization from scanner X to scanner Y as a surface-to-surface translation task. The first goal of harmonization is to learn a mapping G_X : X → Y such that the distribution of surface thickness maps from G_X(X) is indistinguishable from Y. Since this mapping is highly under-constrained, with the second goal of harmonization to preserve individual differences, we utilize the inverse mapping G_Y : Y → X and the cycle consistency loss to enforce G_Y(G_X(X)) ≈ X (and vice versa). Furthermore, we incorporate the correlation coefficient loss to guarantee the structural consistency between the original and the generated surface thickness maps. Quantitative evaluation on both synthesized and real infant cortical data demonstrates the superior ability of our method to remove unwanted scanner effects while preserving individual differences, compared to state-of-the-art methods.
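The two non-adversarial losses named above are straightforward to write down. A numpy sketch with an assumed L1 cycle-consistency term and a Pearson-correlation structure term; the adversarial (discriminator) losses of the CycleGAN are omitted:

```python
import numpy as np

def cycle_loss(x, x_rec):
    """Cycle consistency: L1 distance between x and its round trip
    G_Y(G_X(x)), pushing the mappings toward mutual inverses."""
    return np.mean(np.abs(x - x_rec))

def correlation_loss(x, y, eps=1e-8):
    """1 - Pearson correlation between the original and harmonized
    thickness maps, encouraging the spatial structure (and thus
    individual differences) to be preserved."""
    xc, yc = x - x.mean(), y - y.mean()
    r = (xc * yc).sum() / (np.sqrt((xc ** 2).sum() * (yc ** 2).sum()) + eps)
    return 1.0 - r
```

Note that the correlation term is invariant to the global shift and scale that harmonization is allowed to change, while it penalizes any reshuffling of the vertex-wise pattern, which is the intended division of labor between the two losses.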