Results 1 - 20 of 22
1.
Med Image Anal ; 93: 103095, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38310678

ABSTRACT

Segmenting the prostate from magnetic resonance imaging (MRI) is a critical procedure in prostate cancer staging and treatment planning. Given the scarcity of labeled data for medical images, semi-supervised learning (SSL) becomes an appealing solution since it can simultaneously exploit limited labeled data and a large amount of unlabeled data. However, SSL relies on the assumption that unlabeled images are abundant, which may not be satisfied when the local institute has limited image collection capabilities. An intuitive solution is to seek support from other centers to enrich the unlabeled image pool. However, this further introduces data heterogeneity, which can impede SSL methods that assume identically distributed data. Aiming at this under-explored yet valuable scenario, in this work, we propose a separated collaborative learning (SCL) framework for semi-supervised prostate segmentation with multi-site unlabeled MRI data. Specifically, on top of the teacher-student framework, SCL exploits multi-site unlabeled data by: (i) local learning, which advocates local distribution fitting, including pseudo-label learning that reinforces confirmation of low-entropy easy regions, and cyclic propagated real-label learning that leverages class prototypes to regularize the distribution of intra-class features; and (ii) external multi-site learning, which aims to robustly mine informative clues from external data, mainly including local-support category mutual-dependence learning, built on the insight that mutual information can effectively measure the amount of information shared by two variables even from different domains, and stability learning under strong adversarial perturbations to enhance robustness to heterogeneity.
Extensive experiments on prostate MRI data from six different clinical centers show that our method can effectively generalize SSL on multi-site unlabeled data and significantly outperform other semi-supervised segmentation methods. Besides, we validate the extensibility of our method on the multi-class cardiac MRI segmentation task with data from four different clinical centers.
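The teacher-student backbone mentioned in this abstract is conventionally trained by updating the teacher as an exponential moving average (EMA) of the student's weights. A minimal numpy sketch of that update; the decay value and the one-layer "network" are illustrative assumptions, not details from the paper:

```python
import numpy as np

def ema_update(teacher_w, student_w, decay=0.99):
    """Update each teacher weight tensor as an exponential moving
    average of the corresponding student tensor."""
    return [decay * t + (1.0 - decay) * s for t, s in zip(teacher_w, student_w)]

# Toy example: a "network" with a single weight vector.
teacher = [np.zeros(3)]
student = [np.ones(3)]
teacher = ema_update(teacher, student, decay=0.9)
```

With decay 0.9, each teacher entry moves 10% of the way toward the student after one update, which is what gives the teacher its temporally smoothed, more stable predictions.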


Subject(s)
Interdisciplinary Placement , Prostatic Neoplasms , Male , Humans , Prostate/diagnostic imaging , Prostatic Neoplasms/diagnostic imaging , Entropy , Magnetic Resonance Imaging
2.
Med Phys ; 51(3): 1832-1846, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37672318

ABSTRACT

BACKGROUND: View planning for the acquisition of cardiac magnetic resonance (CMR) imaging remains a demanding task in clinical practice. PURPOSE: Existing approaches to its automation relied either on an additional volumetric image not typically acquired in clinical routine, or on laborious manual annotations of cardiac structural landmarks. This work presents a clinic-compatible, annotation-free system for automatic CMR view planning. METHODS: The system mines the spatial relationship (more specifically, it locates the intersecting lines) between the target planes and source views, and trains U-Net-based deep networks to regress heatmaps defined by distances from the intersecting lines. On the one hand, the intersecting lines are the prescription lines prescribed by the technologists at the time of image acquisition using cardiac landmarks, and are retrospectively identified from the spatial relationship. On the other hand, as the spatial relationship is self-contained in properly stored data, for example, in the DICOM format, the need for additional manual annotation is eliminated. In addition, the interplay of the multiple target planes predicted in a source view is utilized in a stacked hourglass architecture consisting of repeated U-Net-style building blocks to gradually improve the regression. Then, a multiview planning strategy is proposed to aggregate information from the predicted heatmaps for all the source views of a target plane, for a globally optimal prescription, mimicking the strategy practiced by skilled human prescribers. For performance evaluation, the retrospectively identified planes prescribed by the technologists are used as the ground truth, and the plane angle differences and localization distances between the planes prescribed by our system and the ground truth are compared. RESULTS: The retrospective experiments include 181 clinical CMR exams, which are randomly split into training, validation, and test sets in the ratio of 64:16:20.
Our system yields a mean angular difference of 5.68° and a mean point-to-plane distance of 3.12 mm on the held-out test set. It not only achieves superior accuracy to existing approaches, including conventional atlas-based and newer deep-learning-based methods, in prescribing the four standard CMR planes, but also demonstrates prescription of the first cardiac-anatomy-oriented plane(s) from the body-oriented scout. CONCLUSIONS: The proposed system demonstrates accurate automatic CMR view plane prescription based on deep learning on properly archived data, without the need for further manual annotation. This work opens a new direction for automatic view planning of anatomy-oriented medical imaging beyond CMR.
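The core regression target described above is a heatmap whose value decays with distance from an intersecting line in the source view. A hedged sketch of how such a target could be built; the Gaussian form and the sigma value are illustrative assumptions:

```python
import numpy as np

def line_distance_heatmap(h, w, point, direction, sigma=3.0):
    """Heatmap over an h x w view, peaking on the line that passes
    through `point` along `direction` (both in (x, y) order) and
    decaying with perpendicular distance from that line."""
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    dx, dy = direction
    norm = np.hypot(dx, dy)
    dx, dy = dx / norm, dy / norm
    # Signed perpendicular distance of each pixel to the line.
    dist = (xs - point[0]) * (-dy) + (ys - point[1]) * dx
    return np.exp(-dist**2 / (2 * sigma**2))

# A horizontal line through row 16 of a 32 x 32 view.
hm = line_distance_heatmap(32, 32, point=(0, 16), direction=(1, 0), sigma=2.0)
```

A network regressing such maps recovers the prescription line as the ridge of the predicted heatmap, which is what the multiview aggregation step then combines across source views.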


Subject(s)
Heart , Magnetic Resonance Imaging, Cine , Humans , Retrospective Studies , Magnetic Resonance Imaging, Cine/methods , Heart/diagnostic imaging , Magnetic Resonance Imaging , Automation
3.
Med Image Anal ; 91: 103019, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37944431

ABSTRACT

Layer segmentation is important to quantitative analysis of retinal optical coherence tomography (OCT). Recently, deep learning based methods have been developed to automate this task and yield remarkable performance. However, due to the large spatial gap and potential mismatch between the B-scans of an OCT volume, all of them were based on 2D segmentation of individual B-scans, which may lose the continuity and diagnostic information of the retinal layers in 3D space. Moreover, most of these methods required dense annotation of the OCT volumes, which is labor-intensive and expertise-demanding. This work presents a novel framework based on hybrid 2D-3D convolutional neural networks (CNNs) to obtain continuous 3D retinal layer surfaces from OCT volumes, which works well with both full and sparse annotations. The 2D features of individual B-scans are extracted by an encoder consisting of 2D convolutions. These 2D features are then used to produce the alignment displacement vectors and layer segmentation by two 3D decoders coupled via a spatial transformer module. Two losses are proposed to utilize the retinal layers' natural property of being smooth for B-scan alignment and layer segmentation, respectively, and are the key to the semi-supervised learning with sparse annotation. The entire framework is trained end-to-end. To the best of our knowledge, this is the first work that attempts 3D retinal layer segmentation in volumetric OCT images based on CNNs. Experiments on a synthetic dataset and three public clinical datasets show that our framework can effectively align the B-scans for potential motion correction, and achieves superior performance to state-of-the-art 2D deep learning methods in terms of both layer segmentation accuracy and cross-B-scan 3D continuity in both fully and semi-supervised settings, thus offering more clinical value than previous works.
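The smoothness losses mentioned above exploit the fact that retinal layer surfaces are naturally smooth. A minimal sketch of one way to penalize curvature of a predicted surface given as per-A-scan heights; the array shapes and the squared second-difference penalty are illustrative assumptions:

```python
import numpy as np

def smoothness_loss(surface):
    """surface: (num_B_scans, num_A_scans) array of layer heights (pixels).
    Penalizes second differences (curvature) along both axes."""
    d2_a = surface[:, 2:] - 2 * surface[:, 1:-1] + surface[:, :-2]
    d2_b = surface[2:, :] - 2 * surface[1:-1, :] + surface[:-2, :]
    return float(np.mean(d2_a**2) + np.mean(d2_b**2))

flat = np.tile(np.arange(8.0), (8, 1))   # a tilted plane has zero curvature
bumpy = flat + np.random.RandomState(0).randn(8, 8)
```

Because the penalty vanishes on any plane, it discourages jagged predictions without penalizing global tilt, which is why such a prior can supervise unannotated B-scans in the sparse-annotation setting.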


Subject(s)
Retina , Tomography, Optical Coherence , Humans , Retina/diagnostic imaging , Neural Networks, Computer , Supervised Machine Learning
4.
Med Image Anal ; 88: 102880, 2023 08.
Article in English | MEDLINE | ID: mdl-37413792

ABSTRACT

Semi-supervised learning has greatly advanced medical image segmentation since it effectively alleviates the need for acquiring abundant annotations from experts, wherein the mean-teacher model, known as a milestone of perturbed consistency learning, commonly serves as a standard and simple baseline. Inherently, learning from consistency can be regarded as learning from stability under perturbations. Recent improvements lean toward more complex consistency learning frameworks, yet little attention is paid to the consistency target selection. Considering that the ambiguous regions from unlabeled data contain more informative complementary clues, in this paper, we improve the mean-teacher model to a novel ambiguity-consensus mean-teacher (AC-MT) model. Particularly, we comprehensively introduce and benchmark a family of plug-and-play strategies for ambiguous target selection from the perspectives of entropy, model uncertainty and label noise self-identification, respectively. Then, the estimated ambiguity map is incorporated into the consistency loss to encourage consensus between the two models' predictions in these informative regions. In essence, our AC-MT aims to identify the most worthwhile voxel-wise targets from the unlabeled data, and the model especially learns from the perturbed stability of these informative regions. The proposed methods are extensively evaluated on left atrium segmentation and brain tumor segmentation. Encouragingly, our strategies bring substantial improvement over recent state-of-the-art methods. The ablation study further supports our hypothesis and shows impressive results under various extreme annotation conditions.
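Of the ambiguity-selection strategies listed above, the entropy-based one is the simplest to sketch: enforce student-teacher consistency only on the highest-entropy (most ambiguous) voxels. A hedged numpy illustration; the top-fraction value and the MSE form of the consistency term are illustrative assumptions:

```python
import numpy as np

def entropy_map(probs):
    """Voxel-wise predictive entropy; probs has shape (classes, ...)."""
    return -np.sum(probs * np.log(probs + 1e-8), axis=0)

def ambiguity_masked_consistency(student_p, teacher_p, top_frac=0.2):
    """MSE consistency restricted to the most ambiguous voxels,
    judged by the entropy of the teacher's prediction."""
    ent = entropy_map(teacher_p)
    mask = ent >= np.quantile(ent, 1.0 - top_frac)
    sq = np.sum((student_p - teacher_p) ** 2, axis=0)
    return float(sq[mask].mean())
```

Restricting the loss to the ambiguity map is what concentrates the consistency signal on informative regions instead of easy background voxels.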


Subject(s)
Benchmarking , Brain Neoplasms , Humans , Brain Neoplasms/diagnostic imaging , Consensus , Entropy , Heart Atria , Supervised Machine Learning , Image Processing, Computer-Assisted
5.
IEEE Trans Pattern Anal Mach Intell ; 45(11): 13553-13566, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37432804

ABSTRACT

Unsupervised domain adaptation has been widely adopted in tasks with scarce annotated data. Unfortunately, mapping the target-domain distribution to the source-domain unconditionally may distort the essential structural information of the target-domain data, leading to inferior performance. To address this issue, we first propose to introduce active sample selection to assist domain adaptation for the semantic segmentation task. By innovatively adopting multiple anchors instead of a single centroid, both source and target domains can be better characterized as multimodal distributions, whereby more complementary and informative samples are selected from the target domain. With only a small workload to manually annotate these active samples, the distortion of the target-domain distribution can be effectively alleviated, achieving a large performance gain. In addition, a powerful semi-supervised domain adaptation strategy is proposed to alleviate the long-tail distribution problem and further improve the segmentation performance. Extensive experiments are conducted on public datasets, and the results demonstrate that the proposed approach outperforms state-of-the-art methods by large margins and achieves similar performance to the fully-supervised upper bound, i.e., 71.4% mIoU on GTA5 and 71.8% mIoU on SYNTHIA. The effectiveness of each component is also verified by thorough ablation studies.
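The multi-anchor idea above can be sketched in a few lines: multiple anchors (here obtained by a crude k-means) characterize a multimodal source distribution, and the target samples farthest from every anchor are proposed for manual annotation. The feature dimensions and the farthest-sample selection rule are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def kmeans_anchors(feats, k, iters=10, seed=0):
    """Crude k-means yielding k anchors for a (possibly multimodal) feature set."""
    rng = np.random.RandomState(seed)
    anchors = feats[rng.choice(len(feats), k, replace=False)].copy()
    for _ in range(iters):
        assign = np.linalg.norm(feats[:, None] - anchors[None], axis=2).argmin(1)
        for j in range(k):
            if (assign == j).any():
                anchors[j] = feats[assign == j].mean(0)
    return anchors

def select_active(target_feats, anchors, budget):
    """Indices of the `budget` target samples least covered by any anchor."""
    d = np.linalg.norm(target_feats[:, None] - anchors[None], axis=2).min(1)
    return np.argsort(d)[-budget:]
```

A single centroid would place the mode of a two-cluster source between the clusters; multiple anchors avoid that, so "far from all anchors" better captures genuinely un-source-like target samples.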

6.
IEEE Trans Med Imaging ; 42(12): 3579-3589, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37440389

ABSTRACT

Medical contrastive vision-language pretraining has shown great promise in many downstream tasks, such as data-efficient/zero-shot recognition. Current studies pretrain the network with a contrastive loss by treating the paired image-reports as positive samples and the unpaired ones as negative samples. However, unlike natural datasets, many medical images or reports from different cases can be highly similar, especially for normal cases, and treating all the unpaired ones as negative samples could undermine the learned semantic structure and impose an adverse effect on the representations. Therefore, we design a simple yet effective approach for better contrastive learning in the medical vision-language field. Specifically, by simplifying the computation of similarity between medical image-report pairs into the calculation of the inter-report similarity, the image-report tuples are divided into positive, negative, and additional neutral groups. With this better categorization of samples, a more suitable contrastive loss is constructed. For evaluation, we perform extensive experiments by applying the proposed model-agnostic strategy to two state-of-the-art pretraining frameworks. The consistent improvements on four common downstream tasks, including cross-modal retrieval, zero-shot/data-efficient image classification, and image segmentation, demonstrate the effectiveness of the proposed strategy in the medical field.
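The neutral-group idea can be illustrated with a small InfoNCE-style loss in which unpaired tuples whose reports are highly similar are dropped from the negative set instead of being pushed apart. The similarity threshold and the exact loss form are illustrative assumptions:

```python
import numpy as np

def neutral_aware_nce(logits, report_sim, neutral_thresh=0.9):
    """logits: (n, n) image-to-report similarity scores, diagonal = paired.
    report_sim: (n, n) inter-report similarity used to flag neutral tuples."""
    n = len(logits)
    eye = np.eye(n, dtype=bool)
    neutral = (report_sim >= neutral_thresh) & ~eye
    exp = np.exp(logits) * ~neutral          # neutrals drop out of the denominator
    log_prob = logits[eye] - np.log(exp.sum(1))
    return float(-log_prob.mean())
```

When an off-diagonal pair is flagged neutral, it no longer contributes repulsion, so near-duplicate normal cases stop distorting the learned semantic structure.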


Subject(s)
Semantics , Triage , Language
7.
Comput Biol Med ; 159: 106595, 2023 06.
Article in English | MEDLINE | ID: mdl-37087780

ABSTRACT

BACKGROUND: Medical images such as Optical Coherence Tomography (OCT) images acquired from different devices may show significantly different intensity profiles. An automatic segmentation model trained on images from one device may perform poorly when applied to images acquired using another device, resulting in a lack of generalizability. This study addresses this issue using domain adaptation methods built on Cycle-Consistent Generative Adversarial Networks (CycleGAN), especially when the ground-truth labels are only available in the source domain. METHODS: A two-stage pipeline is proposed to generate segmentation in the target domain. The first stage involves the training of a state-of-the-art segmentation model in the source domain. The second stage adapts the images from the target domain to the source domain. The adapted target-domain images are then segmented using the model from the first stage. Ablation tests were performed with integration of different loss functions, and the statistical significance of these models is reported. Both the segmentation performance and the adapted image quality metrics were evaluated. RESULTS: Regarding the segmentation Dice score, the proposed model ssppg achieves a significant improvement of 46.24% compared to no adaptation and reaches 87.4% of the upper limit of the segmentation performance. Furthermore, image quality metrics, including FID and KID scores, indicate that adapted images with better segmentation also have better image quality. CONCLUSION: The proposed method demonstrates the effectiveness of segmentation-driven domain adaptation in retinal image processing. It reduces the labor cost of manual labeling, incorporates prior anatomic information to regulate and guide domain adaptation, and provides insights into improving segmentation quality in image domains without labels.


Subject(s)
Retina , Tomography, Optical Coherence , Retina/diagnostic imaging , Image Processing, Computer-Assisted/methods
8.
Bioinformatics ; 38(19): 4654-4655, 2022 09 30.
Article in English | MEDLINE | ID: mdl-35951750

ABSTRACT

SUMMARY: Recent whole-brain mapping projects are collecting increasingly larger sets of high-resolution brain images using a variety of imaging, labeling and sample preparation techniques. Both mining and analysis of these data require reliable and robust cross-modal registration tools. We recently developed mBrainAligner, a pipeline for performing cross-modal registration of the whole mouse brain. However, using this tool requires scripting or command-line skills to assemble and configure the different modules of mBrainAligner to accommodate different registration requirements and platform settings. In this application note, we present mBrainAligner-Web, a web server with a user-friendly interface that allows users to configure and run mBrainAligner locally or remotely across platforms. AVAILABILITY AND IMPLEMENTATION: mBrainAligner-Web is available at http://mbrainaligner.ahu.edu.cn/ with source code at https://github.com/reaneyli/mBrainAligner-web. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


Subject(s)
Computers , Software , Animals , Mice , Brain/diagnostic imaging
9.
IEEE Trans Med Imaging ; 41(11): 3062-3073, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35604969

ABSTRACT

Manually segmenting medical images is expertise-demanding, time-consuming and laborious. Acquiring massive high-quality labeled data from experts is often infeasible. Unfortunately, without sufficient high-quality pixel-level labels, the usual data-driven learning-based segmentation methods often struggle with deficient training. As a result, we are often forced to collect additional labeled data from multiple sources with varying label qualities. However, directly introducing additional data with low-quality noisy labels may mislead the network training and undesirably offset the efficacy provided by those high-quality labels. To address this issue, we propose a Mean-Teacher-assisted Confident Learning (MTCL) framework constructed by a teacher-student architecture and a label self-denoising process to robustly learn segmentation from a small set of high-quality labeled data and plentiful low-quality noisy labeled data. Particularly, such a synergistic framework is capable of simultaneously and robustly exploiting (i) the additional dark knowledge inside the images of low-quality labeled set via perturbation-based unsupervised consistency, and (ii) the productive information of their low-quality noisy labels via explicit label refinement. Comprehensive experiments on left atrium segmentation with simulated noisy labels and hepatic and retinal vessel segmentation with real-world noisy labels demonstrate the superior segmentation performance of our approach as well as its effectiveness on label denoising.
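The explicit label-refinement step described above can be sketched in simplified form: where the network is sufficiently confident and disagrees with a given noisy label, the label is replaced by the prediction. The confidence threshold and this exact rule are illustrative assumptions, a much-reduced stand-in for the paper's confident-learning procedure:

```python
import numpy as np

def refine_labels(noisy_labels, probs, conf_thresh=0.9):
    """probs: (classes, ...) softmax output; noisy_labels: integer array.
    Flip a noisy label to the model prediction only where the model is
    confident and disagrees with the given label."""
    pred = probs.argmax(0)
    conf = probs.max(0)
    refined = noisy_labels.copy()
    flip = (conf >= conf_thresh) & (pred != noisy_labels)
    refined[flip] = pred[flip]
    return refined
```

Gating on confidence is what keeps the refinement from propagating the network's own mistakes back into the training labels.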

10.
IEEE J Biomed Health Inform ; 26(7): 3174-3184, 2022 07.
Article in English | MEDLINE | ID: mdl-35324450

ABSTRACT

Semi-supervised learning has substantially advanced medical image segmentation since it alleviates the heavy burden of acquiring costly expert-examined annotations. Especially, the consistency-based approaches have attracted more attention for their superior performance, wherein the real labels are only utilized to supervise their paired images via a supervised loss while the unlabeled images are exploited by enforcing perturbation-based "unsupervised" consistency without explicit guidance from those real labels. However, intuitively, the expert-examined real labels contain more reliable supervision signals. Observing this, we ask an unexplored but interesting question: can we exploit the unlabeled data via explicit real label supervision for semi-supervised training? To this end, we discard the previous perturbation-based consistency but absorb the essence of non-parametric prototype learning. Based on the prototypical networks, we then propose a novel cyclic prototype consistency learning (CPCL) framework, which is constructed by a labeled-to-unlabeled (L2U) prototypical forward process and an unlabeled-to-labeled (U2L) backward process. These two processes synergistically enhance the segmentation network by encouraging more discriminative and compact features. In this way, our framework turns previous "unsupervised" consistency into new "supervised" consistency, obtaining the "all-around real label supervision" property of our method. Extensive experiments on brain tumor segmentation from MRI and kidney segmentation from CT images show that our CPCL can effectively exploit the unlabeled data and outperform other state-of-the-art semi-supervised medical image segmentation methods.
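The non-parametric prototype learning at the heart of CPCL can be sketched in two steps: class prototypes are formed from labeled features (masked average pooling), and unlabeled features receive nearest-prototype pseudo-labels, as in the L2U direction. Feature sizes and the L2 distance are illustrative assumptions:

```python
import numpy as np

def class_prototypes(feats, labels, num_classes):
    """One prototype vector per class: the mean of that class's features."""
    return np.stack([feats[labels == c].mean(0) for c in range(num_classes)])

def prototype_predict(feats, prototypes):
    """Nearest-prototype (L2) pseudo-labels for unlabeled features."""
    d = np.linalg.norm(feats[:, None] - prototypes[None], axis=2)
    return d.argmin(1)
```

The U2L backward direction would analogously build prototypes from the pseudo-labeled unlabeled features and check them against the real labels, closing the cycle.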


Subject(s)
Brain Neoplasms , Supervised Machine Learning , Humans , Kidney , Magnetic Resonance Imaging
11.
Article in English | MEDLINE | ID: mdl-37250854

ABSTRACT

In order to tackle the difficulty associated with the ill-posed nature of the image registration problem, regularization is often used to constrain the solution space. For most learning-based registration approaches, the regularization usually has a fixed weight and only constrains the spatial transformation. Such convention has two limitations: (i) Besides the laborious grid search for the optimal fixed weight, the regularization strength of a specific image pair should be associated with the content of the images, thus the "one value fits all" training scheme is not ideal; (ii) Only spatially regularizing the transformation may neglect some informative clues related to the ill-posedness. In this study, we propose a mean-teacher based registration framework, which incorporates an additional temporal consistency regularization term by encouraging the teacher model's prediction to be consistent with that of the student model. More importantly, instead of searching for a fixed weight, the teacher enables automatically adjusting the weights of the spatial regularization and the temporal consistency regularization by taking advantage of the transformation uncertainty and appearance uncertainty. Extensive experiments on the challenging abdominal CT-MRI registration show that our training strategy can promisingly advance the original learning-based method in terms of efficient hyperparameter tuning and a better tradeoff between accuracy and smoothness.

12.
Nature ; 598(7879): 174-181, 2021 10.
Article in English | MEDLINE | ID: mdl-34616072

ABSTRACT

Dendritic and axonal morphology reflects the input and output of neurons and is a defining feature of neuronal types [1,2], yet our knowledge of its diversity remains limited. Here, to systematically examine complete single-neuron morphologies on a brain-wide scale, we established a pipeline encompassing sparse labelling, whole-brain imaging, reconstruction, registration and analysis. We fully reconstructed 1,741 neurons from cortex, claustrum, thalamus, striatum and other brain regions in mice. We identified 11 major projection neuron types with distinct morphological features and corresponding transcriptomic identities. Extensive projectional diversity was found within each of these major types, on the basis of which some types were clustered into more refined subtypes. This diversity follows a set of generalizable principles that govern long-range axonal projections at different levels, including molecular correspondence, divergent or convergent projection, axon termination pattern, regional specificity, topography, and individual cell variability. Although clear concordance with transcriptomic profiles is evident at the level of major projection type, fine-grained morphological diversity often does not readily correlate with transcriptomic subtypes derived from unsupervised clustering, highlighting the need for single-cell cross-modality studies. Overall, our study demonstrates the crucial need for quantitative description of complete single-cell anatomy in cell-type classification, as single-cell morphological diversity reveals a plethora of ways in which different cell types and their individual members may contribute to the configuration and function of their respective circuits.


Subject(s)
Brain/cytology , Cell Shape , Neurons/classification , Neurons/metabolism , Single-Cell Analysis , Atlases as Topic , Biomarkers/metabolism , Brain/anatomy & histology , Brain/embryology , Brain/metabolism , Gene Expression Regulation, Developmental , Humans , Neocortex/anatomy & histology , Neocortex/cytology , Neocortex/embryology , Neocortex/metabolism , Neurogenesis , Neuroglia/cytology , Neurons/cytology , RNA-Seq , Reproducibility of Results
13.
Comput Med Imaging Graph ; 94: 101988, 2021 12.
Article in English | MEDLINE | ID: mdl-34717264

ABSTRACT

Computer-assisted diagnosis of retinal disease relies heavily on the accurate detection of retinal boundaries and other pathological features such as fluid accumulation. Optical coherence tomography (OCT) is a non-invasive ophthalmological imaging technique that has become a standard modality in the field due to its ability to detect cross-sectional retinal pathologies at the micrometer level. In this work, we presented a novel framework to achieve simultaneous retinal layer and fluid segmentation. A dual-branch deep neural network, termed LF-UNet, was proposed, which combines the expansion path of the U-Net and the original fully convolutional network with a dilated network. In addition, we introduced a cascaded network framework to include the anatomical awareness embedded in the volumetric image. Cross-validation experiments showed that the proposed LF-UNet has superior performance compared to the state-of-the-art methods, and that incorporating the relative positional map as structural prior information could further improve the performance regardless of the network. The generalizability of the proposed network was demonstrated on an independent dataset acquired with the same type of device but a different field of view, and on images acquired with a different device.


Subject(s)
Retinal Diseases , Tomography, Optical Coherence , Cross-Sectional Studies , Humans , Neural Networks, Computer , Retina/diagnostic imaging , Retinal Diseases/diagnostic imaging , Tomography, Optical Coherence/methods
14.
Front Neurosci ; 14: 853, 2020.
Article in English | MEDLINE | ID: mdl-33192235

ABSTRACT

Background: Alzheimer's disease (AD) and frontotemporal dementia (FTD) are the first and third most common forms of dementia. Due to their similar clinical symptoms, they are easily misdiagnosed as each other even with sophisticated clinical guidelines. For disease-specific intervention and treatment, it is essential to develop a computer-aided system to improve the accuracy of their differential diagnosis. Recent advances in deep learning have delivered some of the best performance for medical image recognition tasks. However, its application to the differential diagnosis of AD and FTD pathology has not been explored. Approach: In this study, we proposed a novel deep learning based framework to distinguish between brain images of normal aging individuals and subjects with AD and FTD. Specifically, we combined multi-scale and multi-type MRI-based image features with a Generative Adversarial Network (GAN) data augmentation technique to improve the differential diagnosis accuracy. Results: Each of the multi-scale, multi-type, and data augmentation methods improved the ability for differential diagnosis for both AD and FTD. A 10-fold cross validation experiment performed on a large sample of 1,954 images using the proposed framework achieved a high overall accuracy of 88.28%. Conclusions: The salient contributions of this study are three-fold: (1) our experiments demonstrate that the combination of multiple structural features extracted at different scales with our proposed deep neural network yields superior performance than individual features; (2) we show that the use of a GAN for data augmentation could further improve the discriminant ability of the network on challenging tasks such as differentiating dementia sub-types; (3) and finally, we show that an ensemble classifier strategy could make the network more robust and stable.

15.
Hum Hered ; 84(2): 59-72, 2019.
Article in English | MEDLINE | ID: mdl-31430752

ABSTRACT

BACKGROUND/AIMS: Alzheimer's disease (AD) is a chronic neurodegenerative disease that causes memory loss and a decline in cognitive abilities. AD is the sixth leading cause of death in the USA, affecting an estimated 5 million Americans. To assess the association between multiple genetic variants and multiple measurements of structural changes in the brain, a recent study of AD used a multivariate measure of linear dependence, the RV coefficient. The authors decomposed the RV coefficient into contributions from individual variants and displayed these contributions graphically. METHODS: We investigate the properties of such a "contribution plot" in terms of an underlying linear model, and discuss shrinkage estimation of the components of the plot when the correlation signal may be sparse. RESULTS: The contribution plot is applied to simulated data and to genomic and brain imaging data from the Alzheimer's Disease Neuroimaging Initiative (ADNI). CONCLUSIONS: The contribution plot with shrinkage estimation can reveal truly associated explanatory variables.
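The RV coefficient referenced above measures linear dependence between two column-centered data blocks observed on the same samples (here, genetic variants and brain measurements). A direct implementation of the standard definition:

```python
import numpy as np

def rv_coefficient(X, Y):
    """RV coefficient between two data blocks with matched rows (samples).
    A matrix generalization of the squared correlation, ranging from 0 to 1."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Sx, Sy = X @ X.T, Y @ Y.T   # sample-by-sample cross-product matrices
    return float(np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy)))
```

The contribution plot discussed in the abstract decomposes the numerator, trace(Sx Sy), into per-variant terms; the shrinkage estimation the authors investigate is applied to those components.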


Subject(s)
Alzheimer Disease/diagnostic imaging , Alzheimer Disease/genetics , Biomarkers/metabolism , Brain/diagnostic imaging , Neuroimaging , Apolipoproteins E/genetics , Computer Simulation , Genotype , Humans , Phenotype , Polymorphism, Single Nucleotide/genetics
16.
Med Image Anal ; 54: 100-110, 2019 05.
Article in English | MEDLINE | ID: mdl-30856455

ABSTRACT

As a non-invasive imaging modality, optical coherence tomography (OCT) can provide micrometer-resolution 3D images of retinal structures. These images can help reveal disease-related alterations below the surface of the retina, such as the presence of edema, or accumulation of fluid which can distort vision, and are an indication of disruptions in the vasculature of the retina. In this paper, a new framework is proposed for multiclass fluid segmentation and detection in the retinal OCT images. Based on the intensity of OCT images and retinal layer segmentations provided by a graph-cut algorithm, a fully convolutional neural network was trained to recognize and label the fluid pixels. Random forest classification was performed on the segmented fluid regions to detect and reject the falsely labeled fluid regions. The proposed framework won the first place in the MICCAI RETOUCH challenge in 2017 on both the segmentation performance (mean Dice: 0.7667) and the detection performance (mean AUC: 1.00) tasks.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Retina/diagnostic imaging , Tomography, Optical Coherence/methods , Humans , Imaging, Three-Dimensional
17.
IEEE Trans Med Imaging ; 38(8): 1858-1874, 2019 08.
Article in English | MEDLINE | ID: mdl-30835214

ABSTRACT

Retinal swelling due to the accumulation of fluid is associated with the most vision-threatening retinal diseases. Optical coherence tomography (OCT) is the current standard of care in assessing the presence and quantity of retinal fluid and image-guided treatment management. Deep learning methods have made their impact across medical imaging, and many retinal OCT analysis methods have been proposed. However, it is currently not clear how successful they are in interpreting the retinal fluid on OCT, due to the lack of standardized benchmarks. To address this, we organized the RETOUCH challenge in conjunction with MICCAI 2017, with eight teams participating. The challenge consisted of two tasks: fluid detection and fluid segmentation. For the first time, it featured all three retinal fluid types, with annotated images provided by two clinical centers, acquired with devices from the three most common OCT vendors from patients with two different retinal diseases. The analysis revealed that in the detection task, the performance on the automated fluid detection was within the inter-grader variability. However, in the segmentation task, fusing the automated methods produced segmentations that were superior to all individual methods, indicating the need for further improvements in the segmentation performance.


Subject(s)
Image Interpretation, Computer-Assisted/methods , Retina/diagnostic imaging , Tomography, Optical Coherence/methods , Algorithms , Databases, Factual , Humans , Retinal Diseases/diagnostic imaging
18.
Hum Brain Mapp ; 40(5): 1507-1527, 2019 04 01.
Article in English | MEDLINE | ID: mdl-30431208

ABSTRACT

When analyzing large multicenter databases, the effects of multiple confounding covariates increase the variability in the data and may reduce the ability to detect changes due to the actual effect of interest, for example, changes due to disease. Efficient ways to evaluate the effect of covariates toward data harmonization are therefore important. In this article, we showcase techniques to assess the "goodness of harmonization" of covariates. We analyze 7,656 MR images in the multisite, multiscanner Alzheimer's Disease Neuroimaging Initiative (ADNI) database. We present a comparison of three methods for estimating total intracranial volume to assess their robustness, and correct the brain structure volumes using the residual method and the proportional (normalization by division) method. We then evaluated the distribution of brain structure volumes over the entire ADNI database before and after accounting for multiple covariates such as total intracranial volume, scanner field strength, sex, and age using two techniques: (a) Zscapes, a panoramic visualization technique to analyze the entire database and (b) empirical cumulative distribution functions. The results from this study highlight the importance of assessing the goodness of data harmonization as a necessary preprocessing step when pooling large data sets with multiple covariates, prior to further statistical data analysis.
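The two volume-correction schemes compared in this abstract are simple enough to sketch directly. A hedged illustration (the variable names and the re-added cohort mean in the residual method are conventions assumed here, not details from the article):

```python
import numpy as np

def residual_correct(volume, tiv):
    """Residual method: remove the linear effect of total intracranial
    volume (TIV) via regression, then re-add the cohort mean."""
    slope, intercept = np.polyfit(tiv, volume, 1)
    return volume - (slope * tiv + intercept) + volume.mean()

def proportional_correct(volume, tiv):
    """Proportional method: normalization by division with TIV."""
    return volume / tiv
```

When a structure's volume scales linearly with head size, both methods remove the TIV effect; they diverge when the relationship has a nonzero intercept, which is one reason the goodness of harmonization needs to be checked rather than assumed.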


Subject(s)
Alzheimer Disease/diagnostic imaging; Brain/diagnostic imaging; Aged; Aged, 80 and over; Aging; Cognitive Dysfunction/diagnostic imaging; Cross-Sectional Studies; Data Interpretation, Statistical; Databases, Factual; Disease Progression; Female; Humans; Image Processing, Computer-Assisted; Longitudinal Studies; Magnetic Resonance Imaging; Male; Reproducibility of Results; Sex Characteristics
19.
Neuroimage Clin ; 18: 802-813, 2018.
Article in English | MEDLINE | ID: mdl-29876266

ABSTRACT

Fluorodeoxyglucose positron emission tomography (FDG-PET) based 3D topographic brain glucose metabolism patterns from normal controls (NC) and individuals with dementia of Alzheimer's type (DAT) are used to train a novel multi-scale ensemble classification model. This ensemble model outputs an FDG-PET DAT score (FPDS) between 0 and 1, denoting the probability that a subject will be clinically diagnosed with DAT based on their metabolism profile. A novel seven-group image stratification scheme is devised that groups images not only by their associated clinical diagnosis but also by the past and future trajectories of those diagnoses, yielding a more continuous representation of the stages of the DAT spectrum that mimics a real-world clinical setting. The potential of FPDS as a DAT biomarker was validated on a large set of FDG-PET images (N=2984) from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database spanning the proposed stratification, achieving a good classification AUC (area under the curve) of 0.78 in distinguishing images of subjects on a DAT trajectory from images of subjects not progressing to a DAT diagnosis. Further, the FPDS biomarker achieved state-of-the-art performance on the mild cognitive impairment (MCI) to DAT conversion prediction task, with AUCs of 0.81, 0.80, and 0.77 for the 2-, 3-, and 5-year conversion windows, respectively.


Subject(s)
Alzheimer Disease/diagnosis; Brain/diagnostic imaging; Cognitive Dysfunction/diagnosis; Positron-Emission Tomography; Aged; Aged, 80 and over; Biomarkers/metabolism; Brain/metabolism; Disease Progression; Humans; Neuroimaging/methods; Positron-Emission Tomography/methods; Radiopharmaceuticals/metabolism
20.
Sci Rep ; 8(1): 5697, 2018 04 09.
Article in English | MEDLINE | ID: mdl-29632364

ABSTRACT

Alzheimer's disease (AD) is a progressive neurodegenerative disease for which pathophysiology-based biomarkers may provide objective measures for diagnosis and staging. Structural scans acquired with MRI and metabolism images obtained with FDG-PET provide in-vivo measurements of structure and function (glucose metabolism) in a living brain. It is hypothesized that combining multiple image modalities providing complementary information could help improve early diagnosis of AD. In this paper, we propose a novel deep-learning-based framework to discriminate individuals with AD using a multimodal and multiscale deep neural network. Our method delivers 82.4% accuracy in identifying individuals with mild cognitive impairment (MCI) who will convert to AD 3 years prior to conversion (86.4% combined accuracy for conversion within 1-3 years), a 94.23% sensitivity in classifying individuals with a clinical diagnosis of probable AD, and an 86.3% specificity in classifying non-demented controls, improving upon previously published results.
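The sensitivity and specificity quoted above are the standard confusion-matrix rates. A minimal sketch of their computation on toy binary labels (illustrative data, not the paper's cohort):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative
    rate) from binary ground-truth labels and binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: one miss and one false alarm out of four subjects.
sens, spec = sensitivity_specificity([1, 1, 0, 0], [1, 0, 0, 1])
```

Reporting both rates together, as the abstract does, guards against the trivial classifier that maximizes one rate by predicting a single class for everyone.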


Subject(s)
Alzheimer Disease/diagnostic imaging; Brain/diagnostic imaging; Cognitive Dysfunction/diagnostic imaging; Fluorodeoxyglucose F18/metabolism; Multimodal Imaging/methods; Radiopharmaceuticals/metabolism; Aged; Aged, 80 and over; Alzheimer Disease/metabolism; Brain/metabolism; Case-Control Studies; Cognitive Dysfunction/metabolism; Deep Learning; Early Diagnosis; Humans; Magnetic Resonance Imaging/methods; Middle Aged; Neural Networks, Computer; Positron-Emission Tomography/methods; Sensitivity and Specificity