Results 1 - 20 of 827
1.
Med Image Anal ; 71: 102060, 2021 Apr 20.
Article in English | MEDLINE | ID: mdl-33957558

ABSTRACT

The dearth of annotated data is a major hurdle in building reliable image segmentation models. Manual annotation of medical images is tedious, time-consuming, and significantly variable across imaging modalities. The need for annotation can be ameliorated by leveraging an annotation-rich source modality in learning a segmentation model for an annotation-poor target modality. In this paper, we introduce a diverse data augmentation generative adversarial network (DDA-GAN) to train a segmentation model for an unannotated target image domain by borrowing information from an annotated source image domain. This is achieved by generating diverse augmented data for the target domain by one-to-many source-to-target translation. The DDA-GAN uses unpaired images from the source and target domains and is an end-to-end convolutional neural network that (i) explicitly disentangles domain-invariant structural features related to segmentation from domain-specific appearance features, (ii) combines structural features from the source domain with appearance features randomly sampled from the target domain for data augmentation, and (iii) trains the segmentation model with the augmented data in the target domain and the annotations from the source domain. The effectiveness of our method is demonstrated both qualitatively and quantitatively in comparison with the state of the art for segmentation of craniomaxillofacial bony structures via MRI and cardiac substructures via CT.
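The disentangle-and-recombine augmentation at the heart of DDA-GAN is learned by a CNN; as a rough, runnable sketch of the underlying idea, the AdaIN-style recombination below swaps channel-wise appearance statistics while keeping structure (pure NumPy; function names and shapes are illustrative, not the paper's implementation):

```python
import numpy as np

def adain_mix(structure_feat, appearance_feat, eps=1e-5):
    """Re-style source-domain structure features with target-domain
    appearance statistics (channel-wise mean/std), in the spirit of the
    disentangle-and-recombine augmentation described above."""
    # structure_feat, appearance_feat: (C, H, W) feature maps
    mu_s = structure_feat.mean(axis=(1, 2), keepdims=True)
    std_s = structure_feat.std(axis=(1, 2), keepdims=True)
    mu_a = appearance_feat.mean(axis=(1, 2), keepdims=True)
    std_a = appearance_feat.std(axis=(1, 2), keepdims=True)
    # normalize away source appearance, then inject target appearance
    return (structure_feat - mu_s) / (std_s + eps) * std_a + mu_a

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (8, 16, 16))   # source structural features
tgt = rng.normal(3.0, 2.0, (8, 16, 16))   # randomly sampled target appearance
aug = adain_mix(src, tgt)                  # augmented features carry target statistics
```

After mixing, each channel of `aug` has the target's mean and (approximately) its spread, while spatial structure still comes from the source.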

2.
J Cereb Blood Flow Metab ; : 271678X211005875, 2021 Apr 04.
Article in English | MEDLINE | ID: mdl-33818186

ABSTRACT

Perivascular space facilitates cerebral interstitial water clearance. However, it is unclear how dilated perivascular space (dPVS) affects the interstitial water of surrounding white matter. We aimed to determine the presence and extent of changes in normal-appearing white matter water components around dPVS in different populations. Twenty healthy elderly subjects and 15 elderly subjects with severe cerebral small vessel disease (CSVD, with lacunar infarction 6 months before the scan) were included in our study. Another 28 healthy adult subjects were enrolled under a different scanning protocol to test whether the results were comparable. The normal-appearing white matter around dPVS was categorized into 10 layers (1 mm thickness each) based on distance to dPVS. We evaluated the mean isotropic-diffusing water volume fraction in each layer. We found a significantly reduced free-water content in the layers closely adjacent to the dPVS in the healthy elderly subjects; however, this reduction was weaker in the CSVD subjects. We also found an elevated free-water content within dPVS. dPVS thus appears to play different roles in healthy and CSVD subjects. The reduced water content around dPVS in healthy subjects suggests that these MR-visible PVSs are not always related to fluid stagnation.
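The layered analysis described above can be sketched with a Euclidean distance transform: each normal-appearing white matter voxel is binned into a 1 mm-thick shell by its distance to the dPVS mask. A minimal NumPy/SciPy illustration (parameters are illustrative, not the study's exact protocol):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def label_layers(pvs_mask, n_layers=10, voxel_mm=1.0):
    """Assign each voxel outside the PVS mask a layer index 1..n_layers
    by its Euclidean distance to the mask; 0 marks the mask itself and
    voxels beyond the last layer."""
    dist = distance_transform_edt(~pvs_mask) * voxel_mm  # mm to nearest PVS voxel
    layers = np.ceil(dist).astype(int)
    layers[layers > n_layers] = 0
    return layers

# toy 2D "dPVS" mask; per-layer means would be averaged over each shell
mask = np.zeros((20, 20), dtype=bool)
mask[9:11, 9:11] = True
layers = label_layers(mask, n_layers=3)
```

The free-water fraction per layer is then just the mean of the water map over `layers == k` for each k.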

3.
IEEE Trans Med Imaging ; PP, 2021 Apr 13.
Article in English | MEDLINE | ID: mdl-33848243

ABSTRACT

Accurate segmentation of the prostate is a key step in external beam radiation therapy. In this paper, we tackle the challenging task of prostate segmentation in CT images with a two-stage network: the first stage quickly localizes the prostate, and the second stage accurately segments it. To segment the prostate precisely in the second stage, we formulate prostate segmentation as a multi-task learning problem, with a main task to segment the prostate and an auxiliary task to delineate the prostate boundary. The auxiliary task provides additional guidance for the unclear prostate boundary in CT images. Moreover, conventional multi-task deep networks typically share most parameters (i.e., feature representations) across all tasks, which may limit their data-fitting ability, as the specificity of different tasks is inevitably ignored. By contrast, we address this with a hierarchically fused U-Net structure, HF-UNet. HF-UNet has two complementary branches for the two tasks, with a novel attention-based task-consistency learning block that lets the two decoding branches communicate at each level. HF-UNet thus learns shared representations for the different tasks hierarchically while preserving the specificity of the representations learned for each task. We extensively evaluated the proposed method on a large planning CT image dataset and a benchmark prostate zonal dataset. The experimental results show that HF-UNet outperforms conventional multi-task network architectures and state-of-the-art methods.
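The auxiliary boundary-delineation target can be derived from the segmentation mask itself. A minimal sketch of one common way to build such labels (mask minus its morphological erosion) — an assumption for illustration, not necessarily how the paper constructs them:

```python
import numpy as np
from scipy.ndimage import binary_erosion

def boundary_map(mask):
    """Auxiliary boundary label from a binary segmentation mask:
    voxels that lie in the mask but not in its erosion."""
    return mask & ~binary_erosion(mask)

seg = np.zeros((8, 8), dtype=bool)
seg[2:6, 2:6] = True          # toy "prostate" mask
bnd = boundary_map(seg)       # one-pixel-thick contour of the mask
```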

4.
Med Image Anal ; 71: 102039, 2021 Mar 23.
Article in English | MEDLINE | ID: mdl-33831595

ABSTRACT

Fully convolutional networks (FCNs), including UNet and VNet, are widely used architectures for semantic segmentation in recent studies. However, a conventional FCN is typically trained by the cross-entropy or Dice loss, which calculates the error between predictions and ground-truth labels for each pixel independently. This often results in non-smooth neighborhoods in the predicted segmentation, a problem that becomes more serious in CT prostate segmentation because CT images usually have low tissue contrast. To address this problem, we propose a two-stage framework: the first stage quickly localizes the prostate region, and the second stage precisely segments the prostate with a multi-task UNet architecture. We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network. The proposed network therefore has a dual-branch architecture that tackles two tasks: (1) a segmentation sub-network that generates the prostate segmentation, and (2) a voxel-metric learning sub-network that improves the quality of the learned feature space, supervised by a metric loss. Specifically, the voxel-metric learning sub-network samples tuples (including triplets and pairs) at the voxel level from the intermediate feature maps. Unlike conventional deep metric learning methods that generate triplets or pairs at the image level before the training phase, our voxel-wise tuples are sampled online and operated in an end-to-end fashion via multi-task learning. To evaluate the proposed method, we conducted extensive experiments on a real CT image dataset consisting of 339 patients. The ablation studies show that our method learns more representative voxel-level features than conventional training with cross-entropy or Dice loss, and the comparisons show that the proposed method outperforms the state-of-the-art methods by a reasonable margin.
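The online voxel-wise tuple sampling can be sketched as follows. This NumPy mock-up samples triplets from an intermediate feature map and evaluates a standard margin-based triplet loss; shapes, the sampling policy, and the loss variant are illustrative, not the paper's exact design:

```python
import numpy as np

def sample_voxel_triplets(feat, label, n, rng):
    """Sample (anchor, positive, negative) voxel features online from an
    intermediate feature map, as in a voxel-metric learning branch."""
    C = feat.shape[0]
    f = feat.reshape(C, -1).T                 # (n_voxels, C)
    y = label.reshape(-1)
    pos_idx, neg_idx = np.where(y == 1)[0], np.where(y == 0)[0]
    a = rng.choice(pos_idx, n)                # anchors from foreground
    p = rng.choice(pos_idx, n)                # positives from foreground
    ng = rng.choice(neg_idx, n)               # negatives from background
    return f[a], f[p], f[ng]

def triplet_loss(a, p, n, margin=1.0):
    d_ap = np.linalg.norm(a - p, axis=1)
    d_an = np.linalg.norm(a - n, axis=1)
    return np.maximum(0.0, d_ap - d_an + margin).mean()

rng = np.random.default_rng(1)
feat = rng.normal(size=(4, 8, 8))             # toy (C, H, W) feature map
label = (rng.random((8, 8)) > 0.5).astype(int)
a, p, n = sample_voxel_triplets(feat, label, 16, rng)
loss = triplet_loss(a, p, n)
```

Because the tuples are drawn from the current feature maps at each step, the metric supervision adapts online rather than being fixed before training.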

5.
Med Image Anal ; 71: 102076, 2021 Apr 20.
Article in English | MEDLINE | ID: mdl-33930828

ABSTRACT

Structural magnetic resonance imaging (MRI) has shown great clinical and practical values in computer-aided brain disorder identification. Multi-site MRI data increase sample size and statistical power, but are susceptible to inter-site heterogeneity caused by different scanners, scanning protocols, and subject cohorts. Multi-site MRI harmonization (MMH) helps alleviate the inter-site difference for subsequent analysis. Some MMH methods performed at imaging level or feature extraction level are concise but lack robustness and flexibility to some extent. Even though several machine/deep learning-based methods have been proposed for MMH, some of them require a portion of labeled data in the to-be-analyzed target domain or ignore the potential contributions of different brain regions to the identification of brain disorders. In this work, we propose an attention-guided deep domain adaptation (AD2A) framework for MMH and apply it to automated brain disorder identification with multi-site MRIs. The proposed framework does not need any category label information of target data, and can also automatically identify discriminative regions in whole-brain MR images. Specifically, the proposed AD2A is composed of three key modules: (1) an MRI feature encoding module to extract representations of input MRIs, (2) an attention discovery module to automatically locate discriminative dementia-related regions in each whole-brain MRI scan, and (3) a domain transfer module trained with adversarial learning for knowledge transfer between the source and target domains. Experiments have been performed on 2572 subjects from four benchmark datasets with T1-weighted structural MRIs, with results demonstrating the effectiveness of the proposed method in both tasks of brain disorder identification and disease progression prediction.

6.
BMC Med Imaging ; 21(1): 57, 2021 03 23.
Article in English | MEDLINE | ID: mdl-33757431

ABSTRACT

BACKGROUND: Spatial and temporal lung infection distributions of coronavirus disease 2019 (COVID-19) and their changes could reveal important patterns that help us better understand the disease and its time course. This paper presents a pipeline to statistically analyze these patterns by automatically segmenting the infection regions and registering them onto a common template. METHODS: A VB-Net is designed to automatically segment infection regions in CT images. After training and validating the model, we segmented all the CT images in the study. The segmentation results are then warped onto a pre-defined template CT image using deformable registration based on lung fields. The spatial distributions of infection regions, and their changes over the course of the disease, are then calculated at the voxel level. Visualization and quantitative comparison can be performed between different groups. We compared the distribution maps between COVID-19 and community-acquired pneumonia (CAP), between severe and critical COVID-19, and across the time course of the disease. RESULTS: Comparing the segmentation results with manually annotated ground truth, the average Dice for infection segmentation is 91.6% ± 10.0%, close to the inter-rater difference between two radiologists (Dice 96.1% ± 3.5%). The distribution map of infection regions shows that high-probability regions lie in the peripheral subpleural area (probability up to 35.1%). COVID-19 GGO lesions are more widely spread than consolidations, and the latter are located more peripherally. Onset images of severe COVID-19 (inpatients) show similar lesion distributions, but with smaller areas of significant difference in the right lower lobe, compared to critical COVID-19 (intensive care unit patients).
Regarding the disease course, critical COVID-19 patients showed four successive patterns (progression, absorption, enlargement, and further absorption) in our collected dataset, with remarkable concurrent HU patterns for GGO and consolidations. CONCLUSIONS: By segmenting the infection regions with a VB-Net and registering all the CT images and segmentation results onto a template, spatial distribution patterns of infections can be computed automatically. The algorithm provides an effective tool to visualize and quantify the spatial patterns of lung infections and their changes during the disease course. Our results demonstrate different patterns between COVID-19 and CAP, between severe and critical COVID-19, as well as four successive disease-course patterns in the critical COVID-19 patients studied, with remarkable concurrent HU patterns for GGO and consolidations.
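The Dice overlap used to report segmentation performance above is straightforward to compute; a minimal sketch on toy masks:

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice overlap between two binary masks, the metric used to compare
    automatic and manual infection segmentations."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

auto = np.zeros((10, 10), dtype=bool); auto[2:6, 2:6] = True
manual = np.zeros((10, 10), dtype=bool); manual[3:7, 3:7] = True
score = dice(auto, manual)   # 2*9 / (16 + 16) = 0.5625
```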


Subjects
/diagnostic imaging, Community-Acquired Infections/diagnostic imaging, Radiographic Image Interpretation, Computer-Assisted/methods, Algorithms, Disease Progression, Humans, Pneumonia/diagnostic imaging, Tomography, X-Ray Computed/methods
7.
IEEE Trans Med Imaging ; PP, 2021 Mar 30.
Article in English | MEDLINE | ID: mdl-33784617

ABSTRACT

Cortical surface registration is an essential step and prerequisite for surface-based neuroimaging analysis. It aligns cortical surfaces across individuals and time points to establish cross-sectional and longitudinal cortical correspondences for neuroimaging studies. Though they achieve good performance, available methods are either time-consuming or not flexible enough to extend to multiple or high-dimensional features. Considering the explosive availability of large-scale, multimodal brain MRI data, fast surface registration methods that can flexibly handle multimodal features are desired. In this study, we develop a Superfast Spherical Surface Registration (S3Reg) framework for the cerebral cortex. Leveraging an end-to-end unsupervised learning strategy, S3Reg offers great flexibility in the choice of input feature sets and output similarity measures for registration, while significantly reducing registration time. Specifically, we exploit the learning capability of the spherical Convolutional Neural Network (CNN) to directly learn deformation fields in spherical space, and implement a diffeomorphic design with "scaling and squaring" layers to guarantee topology-preserving deformations. To handle the polar-distortion issue, we construct a novel spherical CNN model using three orthogonal Spherical U-Nets. Experiments are performed on two different datasets to align both adult and infant multimodal cortical features. Results demonstrate that S3Reg shows performance superior or comparable to state-of-the-art methods, while reducing the registration time from 1 min to 10 s.
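The "scaling and squaring" layers mentioned above integrate a stationary velocity field into a diffeomorphic deformation: the field is divided by 2^n, turned into a small displacement, and composed with itself n times. A planar NumPy/SciPy sketch of the scheme (the paper applies it on the sphere; the step count and toy field here are illustrative):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def scaling_and_squaring(vel, n_steps=6):
    """Integrate a stationary 2D velocity field (2, H, W) into a
    displacement field via scaling and squaring: scale v by 2^-n,
    then square (self-compose) n times."""
    disp = vel / (2 ** n_steps)               # small initial step
    H, W = vel.shape[1:]
    grid = np.mgrid[0:H, 0:W].astype(float)   # identity coordinates
    for _ in range(n_steps):
        coords = grid + disp                  # where each point lands
        # compose: disp(x) <- disp(x) + disp(x + disp(x))
        warped = np.stack([map_coordinates(disp[c], coords, order=1,
                                           mode='nearest')
                           for c in range(2)])
        disp = disp + warped
    return disp

vel = np.zeros((2, 16, 16)); vel[0] += 2.0    # constant shift along axis 0
disp = scaling_and_squaring(vel)              # recovers the 2.0 shift
```

Because each step composes an invertible small deformation with itself, the integrated map stays topology-preserving.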

8.
Phys Med Biol ; 66(6): 065031, 2021 03 17.
Article in English | MEDLINE | ID: mdl-33729998

ABSTRACT

The worldwide spread of coronavirus disease (COVID-19) has become a threat to global public health. It is of great importance to rapidly and accurately screen and distinguish patients with COVID-19 from those with community-acquired pneumonia (CAP). In this study, a total of 1,658 patients with COVID-19 and 1,027 CAP patients who underwent thin-section CT were enrolled. All images were preprocessed to obtain segmentations of the infections and lung fields. A set of handcrafted location-specific features was proposed to best capture the COVID-19 distribution pattern, in comparison to the conventional CT severity score (CT-SS) and radiomics features. An infection size-aware random forest method (iSARF) was proposed for discriminating COVID-19 from CAP. Experimental results show that the proposed method yielded its best performance when using the handcrafted features, with a sensitivity of 90.7%, a specificity of 87.2%, and an accuracy of 89.4% over state-of-the-art classifiers. Additional tests on 734 subjects with thick-slice images demonstrate great generalizability. It is anticipated that our proposed framework could assist clinical decision making.


Subjects
/diagnostic imaging, Community-Acquired Infections/diagnostic imaging, Pneumonia/diagnostic imaging, Tomography, X-Ray Computed, Adult, Aged, Diagnosis, Computer-Assisted, Diagnosis, Differential, Female, Humans, Image Processing, Computer-Assisted, Lung/diagnostic imaging, Lung/virology, Male, Middle Aged, Reproducibility of Results, Retrospective Studies, Sensitivity and Specificity
9.
Comput Med Imaging Graph ; 89: 101899, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33761446

ABSTRACT

Computed tomography (CT) screening is essential for early lung cancer detection. With the development of artificial intelligence techniques, it is particularly desirable to explore the ability of current state-of-the-art methods and to analyze nodule features across a large population. In this paper, we present an artificial-intelligence lung image analysis system (ALIAS) for nodule detection and segmentation. After segmenting the nodules, their locations, sizes, and imaging features are computed at the population level to study the differences between benign and malignant nodules. The results provide a better understanding of the underlying imaging features and their value for early lung cancer diagnosis.

10.
Med Image Anal ; 70: 101918, 2021 May.
Article in English | MEDLINE | ID: mdl-33676100

ABSTRACT

Tumor classification and segmentation are two important tasks in computer-aided diagnosis (CAD) with 3D automated breast ultrasound (ABUS) images. However, they are challenging due to the significant shape variation of breast tumors and the fuzzy nature of ultrasound images (e.g., low contrast and signal-to-noise ratio). Considering the correlation between tumor classification and segmentation, we argue that learning these two tasks jointly can improve the outcomes of both. In this paper, we propose a novel multi-task learning framework for joint segmentation and classification of tumors in ABUS images. The proposed framework consists of two sub-networks: an encoder-decoder network for segmentation and a lightweight multi-scale network for classification. To account for the fuzzy boundaries of tumors in ABUS images, our framework uses an iterative training strategy to refine feature maps with the help of probability maps obtained from previous iterations. Experimental results on a clinical dataset of 170 3D ABUS volumes collected from 107 patients indicate that the proposed multi-task framework improves tumor segmentation and classification over its single-task counterparts.

11.
Article in English | MEDLINE | ID: mdl-33656999

ABSTRACT

Accurate prediction of clinical scores (of neuropsychological tests) based on noninvasive structural magnetic resonance imaging (MRI) helps understand the pathological stage of dementia (e.g., Alzheimer's disease (AD)) and forecast its progression. Existing machine/deep learning approaches typically preselect dementia-sensitive brain locations for MRI feature extraction and model construction, potentially leading to undesired heterogeneity between different stages and degraded prediction performance. Moreover, these methods usually rely on prior anatomical knowledge (e.g., a brain atlas) and time-consuming nonlinear registration for the preselection of brain locations, thereby ignoring individual-specific structural changes during dementia progression, because all subjects share the same preselected brain regions. In this article, we propose a multi-task weakly-supervised attention network (MWAN) for the joint regression of multiple clinical scores from baseline MRI scans. Three sequential components are included in MWAN: 1) a backbone fully convolutional network for extracting MRI features; 2) a weakly supervised dementia attention block for automatically identifying subject-specific discriminative brain locations; and 3) an attention-aware multitask regression block for jointly predicting multiple clinical scores. The proposed MWAN is an end-to-end and fully trainable deep learning model in which dementia-aware holistic feature learning and multitask regression model construction are integrated into a unified framework. Our MWAN method was evaluated on two public AD data sets for estimating clinical scores of the mini-mental state examination (MMSE), clinical dementia rating sum of boxes (CDRSB), and AD assessment scale cognitive subscale (ADAS-Cog). Quantitative experimental results demonstrate that our method produces superior regression performance compared with state-of-the-art methods.
Importantly, qualitative results indicate that the dementia-sensitive brain locations automatically identified by our MWAN method well retain individual specificities and are biologically meaningful.

12.
Orthod Craniofac Res ; 2021 Mar 12.
Article in English | MEDLINE | ID: mdl-33711187

ABSTRACT

OBJECTIVE: This study aimed to quantify the 3D asymmetry of the maxilla in patients with unilateral cleft lip and palate (UCP) and investigate the defect factors responsible for the variability of the maxilla on the cleft side using a deep-learning-based CBCT image segmentation protocol. SETTING AND SAMPLE POPULATION: Cone beam computed tomography (CBCT) images of 60 patients with UCP were acquired. The samples in this study consisted of 39 males and 21 females, with a mean age of 11.52 years (SD = 3.27 years; range of 8-18 years). MATERIALS AND METHODS: The deep-learning-based protocol was used to segment the maxilla and defect initially, followed by manual refinement. Paired t-tests were performed to characterize the maxillary asymmetry. A multiple linear regression was carried out to investigate the relationship between the defect parameters and those of the cleft side of the maxilla. RESULTS: The cleft side of the maxilla demonstrated a significant decrease in maxillary volume and length as well as alveolar length, anterior width, posterior width, anterior height and posterior height. A significant increase in maxillary anterior width was demonstrated on the cleft side of the maxilla. There was a close relationship between the defect parameters and those of the cleft side of the maxilla. CONCLUSIONS: Based on the 3D volumetric segmentations, significant hypoplasia of the maxilla on the cleft side existed in the pyriform aperture and alveolar crest area near the defect. The defect structures appeared to contribute to the variability of the maxilla on the cleft side.
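The paired t-tests used above to characterize maxillary asymmetry compare the cleft and non-cleft sides within the same subjects; a SciPy illustration on simulated paired volumes (the numbers below are made up for illustration, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical paired maxillary volumes (cm^3) for 60 subjects:
# the cleft side is simulated as systematically smaller (hypoplasia)
noncleft = rng.normal(50.0, 5.0, 60)
cleft = noncleft - rng.normal(4.0, 1.5, 60)

t, p = stats.ttest_rel(cleft, noncleft)   # paired t-test across subjects
significant = p < 0.05                    # negative t => cleft side smaller
```

`ttest_rel` tests the mean of the within-subject differences against zero, which is exactly the paired design used for side-to-side comparisons.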

13.
Med Image Anal ; 69: 101978, 2021 04.
Article in English | MEDLINE | ID: mdl-33588121

ABSTRACT

How to quickly and accurately assess the severity of COVID-19 is an essential problem when millions of people are suffering from the pandemic around the world. Chest CT is currently regarded as a popular and informative imaging tool for COVID-19 diagnosis. However, we observe two issues, weak annotation and insufficient data, that may obstruct automatic COVID-19 severity assessment with CT images. To address these challenges, we propose a novel three-component method: 1) a deep multiple instance learning component with instance-level attention to jointly classify the bag and weigh the instances, 2) a bag-level data augmentation component to generate virtual bags by reorganizing high-confidence instances, and 3) a self-supervised pretext component to aid the learning process. We systematically evaluated our method on the CT images of 229 COVID-19 cases, including 50 severe and 179 non-severe cases. Our method obtained an average accuracy of 95.8%, with 93.6% sensitivity and 96.4% specificity, outperforming previous works.


Subjects
/diagnostic imaging, Adolescent, Adult, Aged, Aged, 80 and over, Child, Child, Preschool, Deep Learning, Female, Humans, Infant, Infant, Newborn, Male, Middle Aged, Severity of Illness Index, Supervised Machine Learning, Tomography, X-Ray Computed, Young Adult
14.
Phys Med Biol ; 2021 Feb 19.
Article in English | MEDLINE | ID: mdl-33607630

ABSTRACT

The worldwide spread of coronavirus disease (COVID-19) has become a threat to global public health. It is of great importance to rapidly and accurately distinguish patients with COVID-19 from those with community-acquired pneumonia (CAP). In this study, a total of 1,658 patients with COVID-19 and 1,027 CAP patients underwent thin-section CT. All images were preprocessed to obtain segmentations of the infections and lung fields. A set of handcrafted location-specific features was proposed to best capture the COVID-19 distribution pattern, in comparison to the conventional CT severity score (CT-SS) and radiomics features. An infection size-aware random forest method (iSARF) was used for classification. Experimental results show that the proposed method yielded its best performance when using the handcrafted features, with a sensitivity of 91.6%, a specificity of 86.8%, and an accuracy of 89.8% over state-of-the-art classifiers. Additional tests on 734 subjects with thick-slice images demonstrate great generalizability. It is anticipated that our proposed framework could assist clinical decision making. Furthermore, the extracted features will be made available after the review process.

15.
IEEE Trans Med Imaging ; 40(4): 1279-1289, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33444133

ABSTRACT

Brain connectivity alterations associated with mental disorders have been widely reported in both functional MRI (fMRI) and diffusion MRI (dMRI). However, extracting useful information from the vast amount of data afforded by brain networks remains a great challenge. By capturing network topology, graph convolutional networks (GCNs) have proven superior in learning network representations tailored to identifying specific brain disorders. Existing graph construction techniques generally rely on a specific brain parcellation to define regions of interest (ROIs), often limiting the analysis to a single spatial scale. In addition, most methods focus on pairwise relationships between ROIs and ignore high-order associations between subjects. In this letter, we propose a mutual multi-scale triplet graph convolutional network (MMTGCN) to analyze functional and structural connectivity for brain disorder diagnosis. We first employ several templates with different scales of ROI parcellation to construct coarse-to-fine brain connectivity networks for each subject. A triplet GCN (TGCN) module is then developed to learn functional/structural representations of the brain connectivity networks at each scale, with the triplet relationship among subjects explicitly incorporated into the learning process. Finally, we propose a template mutual learning strategy to train the TGCNs of different scales collaboratively for disease classification. Experimental results on 1,160 subjects from three datasets with fMRI or dMRI data demonstrate that MMTGCN outperforms several state-of-the-art methods in identifying three types of brain disorders.
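The GCN building block referred to above follows the standard propagation rule H' = sigma(D^-1/2 (A+I) D^-1/2 H W); a minimal NumPy sketch on a toy ROI graph (sizes are illustrative):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step, H' = ReLU(D^-1/2 (A+I) D^-1/2 H W),
    the propagation rule underlying GCN-based connectivity models."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

rng = np.random.default_rng(0)
A = (rng.random((5, 5)) > 0.6).astype(float)
A = np.maximum(A, A.T); np.fill_diagonal(A, 0)  # symmetric, no self-loops
H = rng.normal(size=(5, 8))                     # node (ROI) features
W = rng.normal(size=(8, 4))                     # learnable weights
H1 = gcn_layer(A, H, W)                         # updated node features
```

Stacking such layers (one graph per parcellation scale, in MMTGCN's case) lets each ROI's representation absorb information from its topological neighborhood.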

16.
Med Image Anal ; 69: 101949, 2021 04.
Article in English | MEDLINE | ID: mdl-33387908

ABSTRACT

Automatic and accurate segmentation of dental models is a fundamental task in computer-aided dentistry. Previous methods achieve satisfactory segmentation results on normal dental models; however, they fail to robustly handle challenging clinical cases, such as dental models with missing, crowded, or misaligned teeth before orthodontic treatment. In this paper, we propose a novel end-to-end learning-based method, called TSegNet, for robust and efficient tooth segmentation on 3D scanned point clouds of dental models. Our algorithm first detects all teeth using a distance-aware tooth centroid voting scheme, which ensures accurate localization of tooth objects even at irregular positions on abnormal dental models. Then, a confidence-aware cascade segmentation module in the second stage segments each individual tooth and resolves ambiguities caused by the aforementioned challenging cases. We evaluated our method on a large-scale real-world dataset of dental models scanned before or after orthodontic treatment. Extensive evaluations, ablation studies, and comparisons demonstrate that our method generates accurate tooth labels robustly in various challenging cases, significantly outperforming state-of-the-art approaches by 6.5% in Dice coefficient and 3.0% in F1 score, while achieving a 20-fold speedup in computation time.
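The centroid-voting stage can be illustrated by clustering per-point votes (each point plus its predicted offset toward its tooth centroid); the greedy radius-based grouping below is a simplified stand-in for illustration, not TSegNet's actual distance-aware scheme:

```python
import numpy as np

def centroid_vote(points, offsets, radius=1.0):
    """Cluster per-point centroid votes (point + predicted offset) with a
    greedy radius-based grouping; each cluster mean is one detected
    centroid."""
    votes = points + offsets
    centroids = []
    remaining = votes.copy()
    while len(remaining) > 0:
        seed = remaining[0]
        near = np.linalg.norm(remaining - seed, axis=1) < radius
        centroids.append(remaining[near].mean(axis=0))
        remaining = remaining[~near]
    return np.array(centroids)

rng = np.random.default_rng(0)
true_c = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])  # two "tooth" centroids
pts = np.repeat(true_c, 50, axis=0) + rng.normal(0, 0.3, (100, 3))
offs = np.repeat(true_c, 50, axis=0) - pts + rng.normal(0, 0.05, (100, 3))
cents = centroid_vote(pts, offs)   # recovers two centroids near the truth
```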

17.
IEEE Trans Med Imaging ; 40(4): 1217-1228, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33417540

ABSTRACT

Convolutional Neural Networks (CNNs) have achieved overwhelming success in learning-related problems for 2D/3D images in the Euclidean space. However, unlike in the Euclidean space, the shapes of many structures in medical imaging have an inherent spherical topology in a manifold space, e.g., the convoluted brain cortical surfaces represented by triangular meshes. There is no consistent neighborhood definition and thus no straightforward convolution/pooling operations for such cortical surface data. In this paper, leveraging the regular and hierarchical geometric structure of the resampled spherical cortical surfaces, we create the 1-ring filter on spherical cortical triangular meshes and accordingly develop convolution/pooling operations for constructing Spherical U-Net for cortical surface data. However, the regular nature of the 1-ring filter makes it inherently limited to model fixed geometric transformations. To further enhance the transformation modeling capability of Spherical U-Net, we introduce the deformable convolution and deformable pooling to cortical surface data and accordingly propose the Spherical Deformable U-Net (SDU-Net). Specifically, spherical offsets are learned to freely deform the 1-ring filter on the sphere to adaptively localize cortical structures with different sizes and shapes. We then apply the SDU-Net to two challenging and scientifically important tasks in neuroimaging: cortical surface parcellation and cortical attribute map prediction. Both applications validate the competitive performance of our approach in accuracy and computational efficiency in comparison with state-of-the-art methods.

18.
IEEE Trans Med Imaging ; 40(5): 1363-1376, 2021 May.
Article in English | MEDLINE | ID: mdl-33507867

ABSTRACT

To better understand early brain development in health and disorder, it is critical to accurately segment infant brain magnetic resonance (MR) images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). Deep learning-based methods have achieved state-of-the-art performance; however, one of their major limitations is the multi-site issue: models trained on a dataset from one site may not be applicable to datasets acquired from other sites with different imaging protocols/scanners. To promote methodological development in the community, the iSeg-2019 challenge (http://iseg2019.web.unc.edu) provides a set of 6-month-old infant subjects from multiple sites with different protocols/scanners for the participating methods. Training/validation subjects are from UNC (MAP), and testing subjects are from UNC/UMN (BCP), Stanford University, and Emory University. By the time of writing, 30 automatic segmentation methods had participated in iSeg-2019. In this article, the 8 top-ranked methods are reviewed by detailing their pipelines/implementations, presenting experimental results, and evaluating performance across different sites in terms of whole brain, regions of interest, and gyral landmark curves. We further point out their limitations and possible directions for addressing the multi-site issue. We find that multi-site consistency is still an open issue. We hope that the multi-site dataset in iSeg-2019 and this review article will attract more researchers to address the challenging and critical multi-site issue in practice.

19.
Med Image Anal ; 69: 101953, 2021 04.
Article in English | MEDLINE | ID: mdl-33460880

ABSTRACT

Alzheimer's disease (AD) is a complex neurodegenerative disease whose early diagnosis and treatment have been a major concern of researchers. Multi-modality data representation learning for this disease is gradually becoming an emerging research field attracting widespread attention. In practice, however, data from multiple modalities are often only partially available, and most existing multi-modal learning algorithms cannot deal with incomplete multi-modality data. In this paper, we propose an Auto-Encoder based Multi-View missing data Completion framework (AEMVC) to learn common representations for AD diagnosis. Specifically, we first map the original complete view to a latent space using an auto-encoder network. Then, the latent representations measuring statistical dependence learned from the complete view are used to complete the kernel matrix of the incomplete view in the kernel space. Meanwhile, the structural information of the original data and the inherent association between views are maintained by graph regularization and Hilbert-Schmidt Independence Criterion (HSIC) constraints. Finally, a kernel-based multi-view method is applied to the learned kernel matrix to acquire the common representations. Experimental results on Alzheimer's Disease Neuroimaging Initiative (ADNI) datasets validate the effectiveness of the proposed method.
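The HSIC constraint mentioned above has a simple empirical form on kernel matrices; a minimal NumPy sketch with linear kernels (data and kernel choice are illustrative):

```python
import numpy as np

def hsic(K, L):
    """Empirical Hilbert-Schmidt Independence Criterion between two
    kernel matrices: HSIC = trace(K Hc L Hc) / (n-1)^2, with the
    centering matrix Hc = I - (1/n) 11^T."""
    n = K.shape[0]
    Hc = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ Hc @ L @ Hc) / (n - 1) ** 2

def linear_kernel(X):
    return X @ X.T

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
# two statistically dependent "views" give a large HSIC,
# two independent draws give a near-zero one
dep = hsic(linear_kernel(x), linear_kernel(2 * x + 0.1))
ind = hsic(linear_kernel(x), linear_kernel(rng.normal(size=(200, 1))))
```

Maximizing such a dependence measure between latent representations and the complete view is what ties the completed kernel matrix to the observed data.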

20.
Artif Intell Med ; 111: 101998, 2021 01.
Article in English | MEDLINE | ID: mdl-33461691

ABSTRACT

Due to low tissue contrast, irregular shape, and large location variance, segmenting objects from different medical imaging modalities (e.g., CT, MR) is an important yet challenging task. In this paper, a novel method is presented for interactive medical image segmentation with the following merits. (1) Its design is fundamentally different from previous pure patch-based and image-based segmentation methods. We observe that during delineation, the physician repeatedly checks the intensity from inside to outside the object to determine the boundary, which indicates that comparison in an inside-out manner is extremely important. Thus, the method innovatively models the segmentation task as learning the representation of bi-directional sequential patches, starting from (or ending at) a given central point of the object. This is realized by the proposed ConvRNN network embedded with a gated memory propagation unit. (2) Unlike previous interactive methods (which require a bounding box or seed points), the proposed method only asks the physician to click on the rough central point of the object before segmentation, which simultaneously enhances performance and reduces segmentation time. (3) The method is utilized in a multi-level framework for better performance. It has been systematically evaluated on three different segmentation tasks, including CT kidney tumor, MR prostate, and the PROMISE12 challenge, showing promising results compared with state-of-the-art methods.
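The bi-directional sequential-patch representation can be sketched by extracting patches that march outward from the clicked central point and reversing the order for the opposite direction; a toy 2D version (patch size, step, and direction handling are illustrative, not the paper's exact sampling):

```python
import numpy as np

def radial_patch_sequence(image, center, n_steps=4, patch=3, step=2):
    """Extract a sequence of small patches marching outward from a
    clicked central point along one direction, plus the mirrored
    (inward) ordering - a minimal sketch of the bi-directional
    sequential-patch idea described above."""
    r = patch // 2
    cy, cx = center
    seq = []
    for k in range(n_steps):
        y = cy + k * step                       # step outward along one ray
        seq.append(image[y - r:y + r + 1, cx - r:cx + r + 1])
    outward = np.stack(seq)                     # inside-object -> outside-object
    inward = outward[::-1]                      # reversed traversal direction
    return outward, inward

img = np.arange(400, dtype=float).reshape(20, 20)
out_seq, in_seq = radial_patch_sequence(img, center=(5, 10))
```

A recurrent model (the ConvRNN in the paper) would then consume these ordered patch sequences to learn where intensities transition from inside to outside the object.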
