Results 1 - 20 of 754
1.
Article in English | MEDLINE | ID: mdl-32386147

ABSTRACT

Recently, the outbreak of Coronavirus Disease 2019 (COVID-19) has spread rapidly across the world. Due to the large number of infected patients and the heavy workload for doctors, computer-aided diagnosis with machine learning algorithms is urgently needed; it could largely reduce the effort of clinicians and accelerate the diagnosis process. Chest computed tomography (CT) has been recognized as an informative tool for diagnosis of the disease. In this study, we propose to conduct the diagnosis of COVID-19 with a series of features extracted from CT images. To fully explore multiple features describing CT images from different views, a unified latent representation is learned that completely encodes information from different aspects of the features and is endowed with a promising class structure for separability. Specifically, completeness is guaranteed with a group of backward neural networks (one for each type of features), while, by using class labels, the representation is enforced to be compact within COVID-19/community-acquired pneumonia (CAP) and a large margin is guaranteed between the different types of pneumonia. In this way, our model can well avoid overfitting compared with directly projecting high-dimensional features into classes. Extensive experimental results show that the proposed method outperforms all comparison methods, and rather stable performance is observed when varying the number of training samples.
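
A minimal sketch of the flavor of this multi-view latent learning, assuming PyTorch and invented dimensions (the paper's exact architecture and loss weights are not given here): each view gets a "backward" decoder that reconstructs its features from a shared latent code, while a supervised term pulls codes toward their class centers and keeps the centers separated by a margin.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: 100 subjects, two feature views per CT scan, 2 classes.
n_subj, view_dims, latent_dim, n_classes = 100, [64, 32], 16, 2

class MultiViewLatent(nn.Module):
    def __init__(self):
        super().__init__()
        self.latent = nn.Parameter(torch.randn(n_subj, latent_dim))  # shared codes
        # One "backward" network per view: latent code -> reconstructed features.
        self.decoders = nn.ModuleList([nn.Linear(latent_dim, d) for d in view_dims])
        self.centers = nn.Parameter(torch.randn(n_classes, latent_dim))

    def loss(self, views, labels, margin=4.0):
        # Completeness: every view must be reconstructable from the shared code.
        recon = sum(((dec(self.latent) - v) ** 2).mean()
                    for dec, v in zip(self.decoders, views))
        # Compactness: each code stays near its own class center ...
        intra = ((self.latent - self.centers[labels]) ** 2).sum(1).mean()
        # ... while distinct class centers keep at least `margin` apart.
        d = torch.cdist(self.centers, self.centers)
        off_diag = ~torch.eye(n_classes, dtype=torch.bool)
        sep = torch.relu(margin - d[off_diag]).sum()
        return recon + 0.1 * intra + 0.1 * sep

model = MultiViewLatent()
views = [torch.randn(n_subj, d) for d in view_dims]
labels = torch.randint(0, n_classes, (n_subj,))
print(model.loss(views, labels).item())
```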

2.
Article in English | MEDLINE | ID: mdl-32396089

ABSTRACT

In this paper, we introduce an image quality assessment (IQA) method for pediatric T1- and T2-weighted MR images. IQA is first performed slice-wise using a nonlocal residual neural network (NR-Net) and then volume-wise by agglomerating the slice QA results using a random forest. Our method requires only a small amount of quality-annotated images for training and is designed to be robust to annotation noise that might occur due to rater errors and the inevitable mix of good and bad slices in an image volume. Using a small set of quality-assessed images, we pre-train NR-Net to annotate each image slice with an initial quality rating (i.e., pass, questionable, fail), which we then refine by semi-supervised learning and iterative self-training. Experimental results demonstrate that our method, trained using only samples of modest size, exhibits great generalizability and is capable of real-time (milliseconds per volume) large-scale IQA with near-perfect accuracy.
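
A hedged sketch of the volume-wise aggregation step with scikit-learn and made-up summary features (the paper's actual slice descriptors are not specified here): per-slice ratings from the network are summarized into one vector per volume and fed to a random forest.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def volume_features(slice_probs):
    # slice_probs: (n_slices, 3) softmax outputs (pass / questionable / fail)
    # from the slice-wise network; summarize them into a fixed-length vector.
    frac_fail = np.array([(slice_probs.argmax(1) == 2).mean()])
    return np.concatenate([slice_probs.mean(0), slice_probs.max(0), frac_fail])

# Toy training data: 50 volumes with random slice counts and binary QA labels.
rng = np.random.default_rng(0)
X = np.stack([volume_features(rng.random((rng.integers(20, 40), 3)))
              for _ in range(50)])
y = rng.integers(0, 2, 50)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(rf.predict(X[:3]))
```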

3.
Med Image Anal ; 63: 101709, 2020 Apr 23.
Article in English | MEDLINE | ID: mdl-32417715

ABSTRACT

Functional connectivity networks (FCNs) based on functional magnetic resonance imaging (fMRI) have been widely applied to analyzing and diagnosing brain diseases, such as Alzheimer's disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). Existing studies usually use the Pearson correlation coefficient (PCC) to construct FCNs, and then extract network measures (e.g., clustering coefficients) as features to learn a diagnostic model. However, the valuable observation information in network construction (e.g., specific contributions of different time points), as well as high-level and high-order network features, are neglected in these studies. In this paper, we first define a novel weighted correlation kernel (called wc-kernel) to measure the correlation of brain regions, in which weighting factors are learned in a data-driven manner to characterize the contributions of different time points, thus conveying richer interaction information among brain regions than the PCC method. Furthermore, we build a wc-kernel based convolutional neural network (CNN) framework (called wck-CNN) for learning hierarchical (i.e., from local to global and from low-level to high-level) features for disease diagnosis using fMRI data. Specifically, we first define a layer to build dynamic FCNs using our proposed wc-kernels. Then, we define another three layers to sequentially extract local (brain region specific), global (brain network specific), and temporal features from the constructed dynamic FCNs for classification. Experimental results on 174 subjects (563 scans in total) with resting-state fMRI (rs-fMRI) data from the ADNI database demonstrate the efficacy of our proposed method.
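
A small numpy sketch of the core idea of a weighted correlation, under the assumption that each time point carries a nonnegative weight (the paper learns these weights inside the CNN; here they are fixed for illustration):

```python
import numpy as np

def weighted_correlation(ts, w):
    """Weighted Pearson-like correlation between brain regions.
    ts: (T, R) BOLD time series for R regions; w: (T,) nonnegative weights."""
    w = w / w.sum()
    mu = w @ ts                           # weighted mean per region
    xc = (ts - mu) * np.sqrt(w)[:, None]  # weight each time point's contribution
    cov = xc.T @ xc                       # weighted covariance
    sd = np.sqrt(np.diag(cov))
    return cov / np.outer(sd, sd)

T, R = 120, 10
rng = np.random.default_rng(1)
fc = weighted_correlation(rng.standard_normal((T, R)), rng.random(T))
print(fc.shape, np.allclose(np.diag(fc), 1.0))
```

With uniform weights this reduces exactly to the usual Pearson correlation matrix, which is what makes the weighted version a strict generalization of the PCC baseline.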

4.
IEEE Rev Biomed Eng ; 2020 Apr 16.
Article in English | MEDLINE | ID: mdl-32305937

ABSTRACT

The pandemic of coronavirus disease 2019 (COVID-19) is spreading all over the world. Medical imaging such as X-ray and computed tomography (CT) plays an essential role in the global fight against COVID-19, and the recently emerging artificial intelligence (AI) technologies further strengthen the power of these imaging tools and help medical specialists. We hereby review the rapid responses of the medical imaging community (empowered by AI) toward COVID-19. For example, AI-empowered image acquisition can significantly help automate the scanning procedure and reshape the workflow with minimal contact with patients, providing the best protection to imaging technicians. AI can also improve work efficiency through accurate delineation of infections in X-ray and CT images, facilitating subsequent quantification. Moreover, computer-aided platforms help radiologists make clinical decisions, e.g., for disease diagnosis, tracking, and prognosis. In this review paper, we thus cover the entire pipeline of medical imaging and analysis techniques involved with COVID-19, including image acquisition, segmentation, diagnosis, and follow-up. We particularly focus on the integration of AI with X-ray and CT, both of which are widely used in frontline hospitals, in order to depict the latest progress of medical imaging and radiology in fighting against COVID-19.

5.
Article in English | MEDLINE | ID: mdl-32305905

ABSTRACT

Resting-state functional magnetic resonance imaging (rs-fMRI) reflects the functional activity of brain regions through blood-oxygen-level-dependent (BOLD) signals. To date, many computer-aided diagnosis methods based on rs-fMRI have been developed for Autism Spectrum Disorder (ASD). These methods are mostly binary classification approaches that determine whether a subject is an ASD patient or not. However, the disease often consists of several sub-categories, which are complex and thus still confusing to many automatic classification methods. Besides, existing methods usually focus on functional connectivity (FC) features in gray matter regions, which account for only a small portion of the rs-fMRI data. Recently, the possibility of revealing connectivity information in the white matter regions of rs-fMRI has drawn considerable attention. To this end, we propose to use patch-based functional correlation tensor (PBFCT) features extracted from rs-fMRI in white matter, in addition to the traditional FC features from gray matter, to develop a novel multi-class ASD diagnosis method. Our method has two stages. Specifically, in the first stage of multi-source domain adaptation (MSDA), the source subjects belonging to multiple clinical centers (thus called source domains) are all transformed into the same target feature space, so that each subject in the target domain can be linearly reconstructed from the transformed subjects. In the second stage of multi-view sparse representation (MVSR), a multi-view classifier for multi-class ASD diagnosis is developed by jointly using both views of the FC and PBFCT features. Experimental results using the ABIDE dataset verify the effectiveness of our method, which is capable of accurately classifying each subject into a respective ASD sub-category.

6.
Article in English | MEDLINE | ID: mdl-32340932

ABSTRACT

OBJECTIVE: To estimate a patient-specific reference bone shape model for a patient with craniomaxillofacial (CMF) defects due to facial trauma. METHODS: We proposed an automatic facial bone shape estimation framework using pre-traumatic conventional portrait photos and post-traumatic head computed tomography (CT) scans, via 3D face reconstruction and a deformable shape model. Specifically, a three-dimensional (3D) face was first reconstructed from the patient's pre-traumatic portrait photos. Second, a correlation model between the skin and bone surfaces was constructed using a sparse representation based on the CT images of training normal subjects. Third, by feeding the reconstructed 3D face into the correlation model, an initial reference shape model was generated. We then refined the initial estimate by applying non-rigid surface matching between the initially estimated shape and the patient's post-traumatic bone, based on the adaptive-focus deformable shape model (AFDSM). Furthermore, a statistical shape model, built from the training normal subjects, was utilized to constrain the deformation process and avoid overfitting. RESULTS AND CONCLUSION: The proposed method was evaluated using both synthetic and real patient data. Experimental results show that the patient's abnormal facial bony structure can be recovered using our method, and the estimated reference shape model is considered clinically acceptable by an experienced CMF surgeon. SIGNIFICANCE: The proposed method is well suited to complex CMF defects in CMF reconstructive surgical planning.

7.
Hum Brain Mapp ; 2020 Mar 12.
Article in English | MEDLINE | ID: mdl-32163221

ABSTRACT

Brain functional networks have been increasingly used in understanding brain functions and diseases. While many network construction methods have been proposed, progress in the field still largely relies on static, pairwise Pearson's correlation-based functional networks and group-level comparisons. We introduce a "Brain Network Construction and Classification (BrainNetClass)" toolbox to bring more advanced brain network construction methods to the field, including state-of-the-art methods recently developed to capture complex and high-order interactions among brain regions. The toolbox also integrates a well-accepted and rigorous classification framework based on brain connectome features toward individualized disease diagnosis, in the hope that advanced network modeling can boost the subsequent classification. BrainNetClass is a MATLAB-based, open-source, cross-platform toolbox with both a user-friendly graphical interface and a command-line mode, targeting cognitive neuroscientists and clinicians and promoting the reliability, reproducibility, and interpretability of connectome-based, computer-aided diagnosis. It generates abundant classification-related results, from network representations to contributing features that have been largely ignored by most studies, granting users the ability to evaluate the disease diagnostic model along with its robustness and generalizability. We demonstrate the effectiveness of the toolbox on real resting-state functional MRI datasets. BrainNetClass (v1.0) is available at https://github.com/zzstefan/BrainNetClass.

8.
Med Image Anal ; 62: 101663, 2020 May.
Article in English | MEDLINE | ID: mdl-32120269

ABSTRACT

Ultra-high-field 7T MRI scanners, while producing images with exceptional anatomical detail, are cost-prohibitive and hence highly inaccessible. In this paper, we introduce a novel deep learning network that fuses complementary information from the spatial and wavelet domains to synthesize 7T T1-weighted images from their 3T counterparts. Our network leverages the wavelet transform to facilitate effective multi-scale reconstruction, taking into account both low-frequency tissue contrast and high-frequency anatomical detail. It utilizes a novel wavelet-based affine transformation (WAT) layer, which modulates feature maps from the spatial domain with information from the wavelet domain. Extensive experimental results demonstrate the capability of the proposed method to synthesize high-quality 7T images with better tissue contrast and greater detail, outperforming state-of-the-art methods.
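
A hedged PyTorch sketch of the kind of modulation such a layer performs, with invented shapes and a crude average-pooled band standing in for a real wavelet decomposition (the paper's actual WAT design is not reproduced here): wavelet-domain features predict a per-channel scale and shift for the spatial-domain feature maps, in the spirit of feature-wise affine (FiLM-style) conditioning.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WaveletAffine(nn.Module):
    """Toy FiLM-style layer: wavelet-band features -> per-channel scale and shift."""
    def __init__(self, channels):
        super().__init__()
        self.to_affine = nn.Conv2d(channels, 2 * channels, kernel_size=1)

    def forward(self, spatial_feat, wavelet_feat):
        scale, shift = self.to_affine(wavelet_feat).chunk(2, dim=1)
        return spatial_feat * (1 + scale) + shift

def lowpass_band(x):
    # Crude single-level low-frequency band via 2x2 averaging, standing in
    # for a proper wavelet decomposition.
    return F.avg_pool2d(x, 2)

feat = torch.randn(1, 8, 32, 32)                           # spatial feature maps
band = F.interpolate(lowpass_band(feat), scale_factor=2)   # back to 32x32
print(WaveletAffine(8)(feat, band).shape)                  # [1, 8, 32, 32]
```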

9.
Article in English | MEDLINE | ID: mdl-32217472

ABSTRACT

Multi-modal neuroimages, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), can provide complementary structural and functional information about the brain, thus facilitating automated brain disease identification. Incomplete data are unavoidable in multi-modal neuroimaging studies due to patient dropout and/or poor data quality. Conventional methods usually discard subjects with missing data, significantly reducing the number of training samples. Even though several deep learning methods have been proposed, they usually rely on pre-defined regions-of-interest in the neuroimages, requiring disease-specific expert knowledge. To this end, we propose a spatially-constrained Fisher representation framework for brain disease diagnosis with incomplete multi-modal neuroimages. We first impute missing PET images from their corresponding MRI scans using a hybrid generative adversarial network. With the complete (after imputation) MRI and PET data, we then develop a spatially-constrained Fisher representation network to extract statistical descriptors of the neuroimages for disease diagnosis, assuming that these descriptors follow a Gaussian mixture model with a strong spatial constraint (i.e., images from different subjects have similar anatomical structures). Experimental results on three databases suggest that our method can synthesize reasonable neuroimages and achieves promising results in brain disease identification, compared with several state-of-the-art methods.
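
To make the "Fisher representation" idea concrete, here is a hedged, minimal Fisher-vector computation with scikit-learn, keeping only the gradient with respect to the GMM means and omitting the paper's spatial constraint entirely:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8))          # toy local descriptors from one image
gmm = GaussianMixture(n_components=4, covariance_type="diag",
                      random_state=0).fit(X)

# Fisher vector (mean part):
# G_k = (1 / (N * sqrt(w_k))) * sum_i gamma_ik * (x_i - mu_k) / sigma_k
gamma = gmm.predict_proba(X)               # (N, K) soft assignments
diff = (X[:, None, :] - gmm.means_) / np.sqrt(gmm.covariances_)  # (N, K, D)
fv = (gamma[:, :, None] * diff).sum(0)     # (K, D)
fv /= len(X) * np.sqrt(gmm.weights_)[:, None]
fv = fv.ravel()                            # final descriptor of length K * D
print(fv.shape)
```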

10.
Hum Brain Mapp ; 41(4): 865-881, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32026598

ABSTRACT

Major depressive disorder (MDD) is a serious mental illness characterized by dysfunctional connectivity among distributed brain regions. Previous connectome studies based on functional magnetic resonance imaging (fMRI) have focused primarily on undirected functional connectivity, while existing directed effective connectivity (EC) studies mostly concerned task-based fMRI and incorporated only a few brain regions. To overcome these limitations and understand whether MDD is mediated by within-network or between-network connectivity, we applied spectral dynamic causal modeling to estimate the EC of a large-scale network with 27 regions of interest from four distributed functional brain networks (default mode, executive control, salience, and limbic networks), based on a large resting-state fMRI sample consisting of 100 healthy subjects and 100 individuals with first-episode drug-naive MDD. We applied the newly developed parametric empirical Bayes (PEB) framework to test specific hypotheses. We showed that MDD altered EC both within and between high-order functional networks. Specifically, MDD is associated with reduced excitatory connectivity, mainly within the default mode network (DMN) and between the default mode and salience networks. In addition, the network-averaged inhibitory EC within the DMN was found to be significantly elevated in MDD. The coexistence of reduced excitatory but increased inhibitory causal connections within the DMN may underlie the disrupted self-recognition and emotional control in MDD. Overall, this study emphasizes that MDD could be associated with altered causal interactions among high-order brain functional networks.

11.
Adv Exp Med Biol ; 1213: 23-44, 2020.
Article in English | MEDLINE | ID: mdl-32030661

ABSTRACT

Medical images have been widely used in clinics, providing visual representations of the tissues beneath the skin of the human body. By applying different imaging protocols, diverse modalities of medical images with unique visualization characteristics can be produced. Considering the cost of scanning high-quality single-modality images or homogeneous multiple modalities of images, medical image synthesis methods have been extensively explored for clinical applications. Among them, deep learning approaches, especially convolutional neural networks (CNNs) and generative adversarial networks (GANs), have rapidly become dominant for medical image synthesis in recent years. In this chapter, based on a general review of medical image synthesis methods, we focus on introducing typical CNN and GAN models for medical image synthesis. In particular, we elaborate on our recent work on low-dose to high-dose PET image synthesis and cross-modality MR image synthesis using these models.


Subjects
Deep Learning; Image Processing, Computer-Assisted/methods; Humans
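
As a hedged illustration of the GAN-based synthesis this chapter discusses, here is a minimal paired image-to-image translation objective in PyTorch (adversarial plus L1 fidelity terms, a common recipe; the chapter's specific architectures are not reproduced, and all shapes and weights below are invented):

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))   # toy generator: low -> high dose
D = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2), nn.ReLU(),
                  nn.Flatten(), nn.LazyLinear(1))    # toy discriminator
bce = nn.BCEWithLogitsLoss()

ld_pet = torch.randn(4, 1, 32, 32)   # fake paired low-dose / high-dose PET slices
hd_pet = torch.randn(4, 1, 32, 32)
fake = G(ld_pet)

# Discriminator: real high-dose images -> 1, synthesized ones -> 0.
d_loss = bce(D(hd_pet), torch.ones(4, 1)) + bce(D(fake.detach()), torch.zeros(4, 1))
# Generator: fool D while staying close to the target image (L1 fidelity term).
g_loss = bce(D(fake), torch.ones(4, 1)) + 10.0 * (fake - hd_pet).abs().mean()
print(d_loss.item(), g_loss.item())
```
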
12.
Article in English | MEDLINE | ID: mdl-32031933

ABSTRACT

Precisely labeling teeth on digitized 3D dental surface models is a precondition for tooth position rearrangement in orthodontic treatment planning. However, it is a challenging task, primarily due to the abnormal and varying appearance of patients' teeth. The emerging use of intraoral scanners (IOSs) in clinics further increases the difficulty of automated tooth labeling, as the raw surfaces acquired by an IOS are typically of low quality in gingival and deep intraoral regions. In recent years, some pioneering end-to-end methods (e.g., PointNet) have been proposed in the computer vision and graphics communities to directly consume raw surfaces for 3D shape segmentation. Although these methods are potentially applicable to our task, most of them fail to capture the fine-grained local geometric context that is critical to identifying small teeth with varying shapes and appearances. In this paper, we propose an end-to-end deep learning method, called MeshSegNet, for automated tooth labeling on raw dental surfaces. Using multiple raw surface attributes as inputs, MeshSegNet integrates a series of graph-constrained learning modules along its forward path to hierarchically extract multi-scale local contextual features. A dense fusion strategy then combines local-to-global geometric features for the learning of higher-level features for mesh cell annotation. The predictions produced by MeshSegNet are further post-processed by a graph-cut refinement step for final segmentation. We evaluated MeshSegNet using a real-patient dataset consisting of raw maxillary surfaces acquired by 3D IOS. Experimental results, obtained via 5-fold cross-validation, demonstrate that MeshSegNet significantly outperforms state-of-the-art deep learning methods for 3D shape segmentation.
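
A hedged numpy sketch of what a graph-constrained learning module boils down to, with invented dimensions and a simple distance-threshold graph (MeshSegNet's actual modules are more elaborate): each mesh cell's features are averaged with those of nearby cells before the next learned transform, so predictions respect local surface geometry.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, feat_dim = 200, 15
centroids = rng.random((n_cells, 3))           # toy mesh-cell centroids
X = rng.standard_normal((n_cells, feat_dim))   # toy per-cell input features

# Adjacency: cells within a small radius are neighbors (self-loops included,
# since each cell is within distance 0 of itself).
dists = np.linalg.norm(centroids[:, None] - centroids[None], axis=-1)
A = (dists < 0.15).astype(float)

# Symmetric normalization, then one round of graph-constrained smoothing.
d = A.sum(1)
A_hat = A / np.sqrt(np.outer(d, d))
W = rng.standard_normal((feat_dim, 32)) * 0.1  # stand-in for a learned weight
H = np.maximum(A_hat @ X @ W, 0)               # aggregate neighbors + ReLU
print(H.shape)
```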

13.
Med Image Anal ; 61: 101654, 2020 04.
Article in English | MEDLINE | ID: mdl-32066065

ABSTRACT

Objective and quantitative assessment of fundus image quality is essential for the diagnosis of retinal diseases. The major factors in fundus image quality assessment are image artifacts, clarity, and field definition. Unfortunately, most existing quality assessment methods focus on the quality of the overall image, without interpretable quality feedback for real-time adjustment. Furthermore, these models are often sensitive to specific imaging devices and cannot generalize well under different imaging conditions. This paper presents a new multi-task domain adaptation framework to automatically assess fundus image quality. The proposed framework provides interpretable quality assessment with both quantitative scores and quality visualization for potential real-time image recapture with proper adjustment. In particular, the approach can detect the optic disc and fovea structures as landmarks to assist the assessment through coarse-to-fine feature encoding. The framework also exploits semi-tied adversarial discriminative domain adaptation to make the model generalizable across different data sources. Experimental results demonstrate that the proposed algorithm outperforms different state-of-the-art approaches and achieves an area under the ROC curve of 0.9455 for overall quality classification.

14.
Article in English | MEDLINE | ID: mdl-32091997

ABSTRACT

Accurate segmentation of organs at risk (OARs) from head and neck (H&N) CT images is crucial for effective H&N cancer radiotherapy. However, existing deep learning methods are often not trained in an end-to-end fashion, i.e., they independently predetermine the regions of target organs before organ segmentation, causing limited information sharing between related tasks and thus leading to suboptimal segmentation results. Furthermore, when a conventional segmentation network is used to segment all the OARs simultaneously, the results often favor large OARs over small ones. Thus, existing methods often train a specific model for each OAR, ignoring the correlation between different segmentation tasks. To address these issues, we propose a new multi-view spatial aggregation framework for joint localization and segmentation of multiple OARs using H&N CT images. The core of our framework is a region-of-interest (ROI)-based fine-grained representation convolutional neural network (CNN), which is used to generate multi-OAR probability maps from each 2D view (i.e., axial, coronal, and sagittal) of the CT images. Specifically, our ROI-based fine-grained representation CNN (1) unifies the OAR localization and segmentation tasks and trains them in an end-to-end fashion, and (2) improves the segmentation results of various-sized OARs via a novel ROI-based fine-grained representation. Our multi-view spatial aggregation framework then spatially aggregates and assembles the generated multi-view multi-OAR probability maps to segment all the OARs simultaneously. We evaluate our framework using two sets of H&N CT images and achieve competitive and highly robust segmentation performance for OARs of various sizes.
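
The final aggregation step lends itself to a short sketch; the hedged numpy snippet below simply averages resampled per-view probability maps, which is one plausible reading of "spatially aggregates and assembles" (the paper's exact fusion rule may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
D, n_oars = 32, 5  # toy volume size and number of organs at risk

# Per-view probability volumes (axial / coronal / sagittal), assumed already
# resampled back into the same 3D grid; each has shape (n_oars, D, D, D).
views = [rng.random((n_oars, D, D, D)) for _ in range(3)]

fused = np.mean(views, axis=0)   # simple multi-view averaging
labels = fused.argmax(axis=0)    # per-voxel OAR label map
print(labels.shape, np.unique(labels))
```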

15.
Article in English | MEDLINE | ID: mdl-31905135

ABSTRACT

Glioblastoma (GBM) is the most common and deadly malignant brain tumor. For personalized treatment, an accurate pre-operative prognosis for GBM patients is highly desired. Recently, many machine learning-based methods have been adopted to predict overall survival (OS) time based on the pre-operative mono- or multi-modal imaging phenotype. The genotypic information of GBM has been proven to be strongly indicative of prognosis; however, it has not been considered in existing imaging-based OS prediction methods. The main reason is that the tumor genotype is unavailable pre-operatively unless derived from a craniotomy. In this paper, we propose a new deep learning-based OS prediction method for GBM patients, which derives tumor genotype-related features from pre-operative multimodal magnetic resonance imaging (MRI) brain data and feeds them into OS prediction. Specifically, we propose a multi-task convolutional neural network (CNN) to accomplish the tumor genotype and OS prediction tasks jointly. As the network can benefit from learning genotype-related features for genotype prediction, the accuracy of predicting OS time can be prominently improved. In the experiments, a multimodal brain MRI dataset of 120 GBM patients, with as many as four different genotypic/molecular biomarkers, is used to evaluate our method. Our method achieves the highest OS prediction accuracy compared with other state-of-the-art methods.
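
A hedged PyTorch sketch of the multi-task idea, with an invented backbone, heads, and loss weight (the paper's architecture and biomarker set are not reproduced): one shared encoder feeds both a genotype classification head and an OS regression head, so genotype-related features inform survival prediction.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, n_genotypes=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv3d(4, 8, 3, stride=2), nn.ReLU(),
                                     nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.genotype_head = nn.Linear(8, n_genotypes)  # biomarker classification
        self.os_head = nn.Linear(8, 1)                  # survival-time regression

    def forward(self, x):
        z = self.encoder(x)
        return self.genotype_head(z), self.os_head(z)

net = MultiTaskNet()
mri = torch.randn(2, 4, 32, 32, 32)  # toy batch: 4 MRI modalities per subject
geno_logits, os_pred = net(mri)
# Joint objective: genotype cross-entropy + weighted OS regression loss.
loss = (nn.functional.cross_entropy(geno_logits, torch.tensor([0, 2]))
        + 0.5 * nn.functional.mse_loss(os_pred.squeeze(1),
                                       torch.tensor([14.0, 9.0])))
print(loss.item())
```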

16.
Med Image Anal ; 60: 101630, 2020 02.
Article in English | MEDLINE | ID: mdl-31927474

ABSTRACT

Fusing multi-modality data is crucial for accurate identification of brain disorders, as different modalities can provide complementary perspectives on complex neurodegenerative diseases. However, there are at least four common issues associated with existing fusion methods. First, many existing fusion methods simply concatenate features from each modality without considering the correlations among modalities. Second, most existing methods make predictions based on a single classifier, which might not be able to address the heterogeneity of Alzheimer's disease (AD) progression. Third, many existing methods employ feature selection (or reduction) and classifier training in two independent steps, ignoring the fact that the two pipelined steps are highly related to each other. Fourth, neuroimaging data are missing for some participants (e.g., missing PET data), due to participant "no-shows" or dropout. In this paper, to address these issues, we propose an early AD diagnosis framework via a novel multi-modality latent-space-inducing ensemble SVM classifier. Specifically, we first project the neuroimaging data from different modalities into a latent space, and then map the learned latent representations into the label space to learn multiple diversified classifiers. Finally, we obtain more reliable classification results by using an ensemble strategy. More importantly, we present a Complete Multi-modality Latent Space (CMLS) learning model for complete multi-modality data and an Incomplete Multi-modality Latent Space (IMLS) learning model for incomplete multi-modality data. Extensive experiments using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that our proposed models outperform other state-of-the-art methods.
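
For flavor, a hedged scikit-learn sketch of the "diversified classifiers plus ensemble" stage, with random projections standing in for the learned latent mappings (the paper learns those jointly with the classifiers, which this toy version does not):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))   # toy concatenated multi-modality features
y = rng.integers(0, 2, 100)

svms, projections = [], []
for seed in range(5):                # several diversified "latent" views
    P = np.random.default_rng(seed).standard_normal((50, 10)) / np.sqrt(50)
    projections.append(P)
    svms.append(SVC(probability=True).fit(X @ P, y))

# Ensemble by averaging class probabilities across the diversified SVMs.
probs = np.mean([clf.predict_proba(X @ P)
                 for clf, P in zip(svms, projections)], axis=0)
print((probs.argmax(1) == y).mean())
```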

17.
Hum Brain Mapp ; 41(8): 1985-2003, 2020 Jun 01.
Article in English | MEDLINE | ID: mdl-31930620

ABSTRACT

Studying the early dynamic development of cortical folding, with its remarkable individual variability, is critical for understanding normal early brain development and related neurodevelopmental disorders. This study focuses on the fingerprinting capability and individual variability of cortical folding during early brain development. Specifically, we aim to explore (a) whether the developing neonatal cortical folding is unique enough to be considered a "fingerprint" that can reliably identify an individual within a cohort of infants; (b) which cortical regions manifest more individual variability and thus contribute more to infant identification; and (c) whether infant twins can be distinguished by cortical folding. Hence, for the first time, we conduct infant individual identification and individual variability analysis involving twins based on developing cortical folding features (mean curvature, average convexity, and sulcal depth) in 472 neonates with 1,141 longitudinal MRI scans. Experimental results show that infant individual identification achieves 100% accuracy when using neonatal cortical folding features to predict the identities of 1- and 2-year-olds. Besides, we observe high identification capability in the high-order association cortices (i.e., prefrontal, lateral temporal, and inferior parietal regions) and two unimodal cortices (i.e., precentral gyrus and lateral occipital cortex), which largely overlap with the regions encoding remarkable individual variability in cortical folding during the first 2 years. For the twin study, we show that even for monozygotic twins with identical genes and similar developmental environments, cortical folding features are unique enough for accurate individual identification; moreover, in some high-order association cortices, the differences between monozygotic twin pairs are significantly lower than those between dizygotic twins. This study thus provides important insights into individual identification and individual variability based on cortical folding during infancy.
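
Fingerprint-style identification of this kind is commonly run as a nearest-neighbor match on feature correlation; the hedged numpy sketch below shows that baseline procedure on toy data (the study's actual pipeline and surface-based features are richer):

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_feats = 50, 300

# Toy cortical-folding feature vectors at birth and age 1 for the same subjects:
# a shared per-subject signal plus independent noise at each time point.
base = rng.standard_normal((n_subjects, n_feats))
neonate = base + 0.3 * rng.standard_normal((n_subjects, n_feats))
one_year = base + 0.3 * rng.standard_normal((n_subjects, n_feats))

def zscore(a):
    return (a - a.mean(1, keepdims=True)) / a.std(1, keepdims=True)

# Identify each 1-year-old by the most correlated neonatal feature vector.
sim = zscore(one_year) @ zscore(neonate).T / n_feats   # pairwise Pearson r
predicted = sim.argmax(1)
print("identification accuracy:",
      (predicted == np.arange(n_subjects)).mean())
```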

18.
Article in English | MEDLINE | ID: mdl-31940526

ABSTRACT

Sufficient data with complete annotation is essential for training deep models to perform automatic and accurate segmentation of CT male pelvic organs, especially when such data presents great challenges such as low contrast and large shape variation. However, manual annotation is expensive in terms of both cost and human effort, which usually results in insufficient completely annotated data in real applications. To this end, we propose a novel deep framework to segment male pelvic organs in CT images with incomplete annotation delineated in a very user-friendly manner. Specifically, we design a hybrid loss network derived from both voxel classification and boundary regression to jointly improve organ segmentation performance in an iterative way. Moreover, we introduce a label completion strategy to complete the labels of the rich unannotated voxels and then embed them into the training data to enhance the model's capability. To reduce computational complexity and improve segmentation performance, we locate the pelvic region based on salient bone structures to focus on the candidate segmentation organs. Experimental results on a large planning CT pelvic organ dataset show that our proposed method with incomplete annotation achieves segmentation performance comparable to state-of-the-art methods with complete annotation. Moreover, our proposed method requires much less manual contouring effort from medical professionals, so that an institution-specific model can be more easily established.
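
A hedged PyTorch sketch of a hybrid voxel-classification plus boundary-regression loss, under assumed shapes and a simple distance-map target (the paper's exact boundary formulation is not given here):

```python
import torch
import torch.nn.functional as F

def hybrid_loss(logits, dist_pred, labels, dist_target, alpha=0.5):
    """logits: (B, C, D, H, W) voxel class scores; dist_pred / dist_target:
    (B, 1, D, H, W) signed distances to the organ boundary."""
    cls = F.cross_entropy(logits, labels)           # voxel classification term
    bnd = F.smooth_l1_loss(dist_pred, dist_target)  # boundary regression term
    return cls + alpha * bnd

logits = torch.randn(2, 3, 16, 16, 16)              # toy 3-class predictions
labels = torch.randint(0, 3, (2, 16, 16, 16))
dp = torch.randn(2, 1, 16, 16, 16)
dt = torch.randn(2, 1, 16, 16, 16)
print(hybrid_loss(logits, dp, labels, dt).item())
```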

19.
Article in English | MEDLINE | ID: mdl-31995472

ABSTRACT

Obtaining accurate segmentation of the prostate and nearby organs at risk (e.g., bladder and rectum) in CT images is critical for radiotherapy of prostate cancer. Currently, the leading automatic segmentation algorithms are based on Fully Convolutional Networks (FCNs), which achieve remarkable performance but usually need large-scale datasets with high-quality voxel-wise annotations for fully supervised training. Unfortunately, such annotations are difficult to acquire, which becomes a bottleneck to building accurate segmentation models in real clinical applications. In this paper, we propose a novel weakly supervised segmentation approach that only needs 3D bounding box annotations covering the organs of interest to start training. Obviously, the bounding box includes many non-organ voxels that carry noisy labels and can mislead the segmentation model. To this end, we propose a label denoising module and embed it into the iterative training scheme of a label denoising network (LDnet) for segmentation. The labels of the training voxels are predicted by the tentative LDnet, while the label denoising module identifies voxels with unreliable labels. As only the good training voxels are preserved, the iteratively re-trained LDnet can gradually refine its segmentation capability. Our results are remarkable: the method reaches ~94% (prostate), ~91% (bladder), and ~86% (rectum) of the Dice Similarity Coefficients (DSCs) obtained by fully supervised learning on high-quality voxel-wise annotations, and is also superior to several state-of-the-art approaches. To the best of our knowledge, this is the first work to achieve voxel-wise segmentation in CT images from simple 3D bounding box annotations, which can greatly reduce labeling effort and meet the demands of practical clinical applications.
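
One common way to realize such a label denoising module is to keep only voxels whose confident tentative prediction agrees with the noisy bounding-box label; the hedged sketch below shows that filtering rule on toy arrays (the paper's actual criterion may be more involved, and the 0.3 confidence band is an invented threshold):

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 10_000

noisy_labels = rng.integers(0, 2, n_voxels)  # labels from the 3D bounding boxes
probs = rng.random(n_voxels)                 # tentative network P(organ) per voxel
pred = (probs > 0.5).astype(int)

# Keep voxels where a confident prediction agrees with the noisy label;
# only these survive into the next re-training round.
confident = np.abs(probs - 0.5) > 0.3
keep = confident & (pred == noisy_labels)
print(f"kept {keep.mean():.1%} of voxels for re-training")
```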

20.
Neuroinformatics ; 18(2): 319-331, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31898145

ABSTRACT

Segmentation of medical images using multiple atlases has recently gained immense attention due to its improved robustness against inter-subject variability. These atlas-based methods typically comprise three steps: atlas selection, image registration, and finally label fusion. Image registration is one of the core steps in this process, and its accuracy directly affects the final labeling performance. However, due to inter-subject anatomical variations, registration errors are inevitable. The aim of this paper is to develop a deep learning-based confidence estimation method to alleviate the potential effects of registration errors. We first propose a fully convolutional network (FCN) with residual connections to learn the relationship between an image patch pair (i.e., patches from the target subject and the atlas) and the related label confidence patch. With the obtained label confidence patch, we can identify potential errors in the warped atlas labels and correct them. Then, we use two label fusion methods to fuse the corrected atlas labels. The proposed methods are validated on a publicly available dataset for hippocampus segmentation. Experimental results demonstrate that our proposed methods outperform state-of-the-art segmentation methods.
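
The fusion stage reduces to a weighted vote once per-voxel confidences are available; the hedged numpy sketch below shows one straightforward confidence-weighted majority vote with toy data (the paper evaluates two specific fusion methods that are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
n_atlases, n_voxels, n_labels = 5, 1_000, 3

warped = rng.integers(0, n_labels, (n_atlases, n_voxels))  # warped atlas labels
conf = rng.random((n_atlases, n_voxels))                   # per-voxel confidence

# Accumulate confidence-weighted votes per label, then take the argmax.
votes = np.zeros((n_labels, n_voxels))
for a in range(n_atlases):
    np.add.at(votes, (warped[a], np.arange(n_voxels)), conf[a])
fused = votes.argmax(0)
print(fused.shape, np.bincount(fused))
```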
