ABSTRACT
Research on segmentation of the hippocampus in magnetic resonance images through deep learning convolutional neural networks (CNNs) shows promising results, suggesting that these methods can identify small structural abnormalities of the hippocampus, which are among the earliest and most frequent brain changes associated with Alzheimer disease (AD). However, CNNs typically achieve the highest accuracy on datasets acquired from the same domain as the training dataset. Transfer learning allows domain adaptation through further training on a limited dataset. In this study, we applied transfer learning on a network called spatial warping network segmentation (SWANS), developed and trained in a previous study. We used MR images of patients with clinical diagnoses of mild cognitive impairment (MCI) and AD, segmented by two different raters. By using transfer learning techniques, we developed four new models, using different training methods. Testing was performed using 26% of the original dataset, which was excluded from training as a hold-out test set. In addition, 10% of the overall training dataset was used as a hold-out validation set. Results showed that all the new models achieved better hippocampal segmentation quality than the baseline SWANS model (ps < .001), with high similarity to the manual segmentations (mean dice [best model] = 0.878 ± 0.003). The best model was chosen based on visual assessment and volume percentage error (VPE). The increased precision in estimating hippocampal volumes allows the detection of small hippocampal abnormalities already present in the MCI phase (SD = [3.9 ± 0.6]%), which may be crucial for early diagnosis.
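The two evaluation measures named above can be made concrete. Below is a minimal sketch of the Dice coefficient and a volume percentage error (VPE) for binary masks; the exact VPE formula used in the study is not given in the abstract, so the absolute-difference-over-reference definition here is an assumption.

```python
def dice_coefficient(pred, ref):
    """Dice overlap between two binary masks given as flat lists of 0/1."""
    intersection = sum(p * r for p, r in zip(pred, ref))
    total = sum(pred) + sum(ref)
    return 2.0 * intersection / total if total else 1.0

def volume_percentage_error(pred, ref):
    """Assumed VPE: absolute volume difference as a percentage of the
    reference (manual) volume."""
    v_pred, v_ref = sum(pred), sum(ref)
    return abs(v_pred - v_ref) / v_ref * 100.0

# Toy 8-voxel masks: prediction shifted by one voxel relative to reference.
ref  = [1, 1, 1, 1, 0, 0, 0, 0]
pred = [1, 1, 1, 0, 1, 0, 0, 0]
print(dice_coefficient(pred, ref))         # 0.75
print(volume_percentage_error(pred, ref))  # 0.0 (same volume, different shape)
```

The example illustrates why the abstract reports both measures: a segmentation can match the reference volume exactly (VPE = 0) while still misplacing voxels (Dice < 1).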
Subjects
Alzheimer Disease , Cognitive Dysfunction , Deep Learning , Alzheimer Disease/diagnostic imaging , Cognitive Dysfunction/diagnostic imaging , Hippocampus/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer
ABSTRACT
Increasingly large MRI neuroimaging datasets are becoming available, including many multi-site, multi-scanner datasets. Combining the data from the different scanners is vital for increased statistical power; however, this leads to an increase in variance due to nonbiological factors such as differences in acquisition protocols and hardware, which can mask signals of interest. We propose a deep learning based training scheme, inspired by domain adaptation techniques, which uses an iterative update approach to create scanner-invariant features while simultaneously maintaining performance on the main task of interest, thus reducing the influence of scanner on network predictions. We demonstrate the framework for regression, classification and segmentation tasks with two different network architectures. We show that not only can the framework harmonise multi-site datasets but it can also adapt to many data scenarios, including biased datasets and limited training labels. Finally, we show that the framework can be extended to remove other known confounds in addition to scanner. The overall framework is therefore flexible and should be applicable to a wide range of neuroimaging studies.
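A common objective for driving features towards scanner-invariance in schemes like this is a "confusion" loss: the cross-entropy between the scanner classifier's softmax output and a uniform distribution, which is minimised when the classifier cannot tell scanners apart. The abstract does not spell out its exact loss, so the formulation below is a hedged illustration of the idea, not the paper's implementation.

```python
import math

def scanner_confusion_loss(probs):
    """Cross-entropy between predicted scanner probabilities and a uniform
    target over k scanners; minimal (= ln k) when the classifier is
    maximally confused, i.e. the features carry no scanner information."""
    k = len(probs)
    return -sum((1.0 / k) * math.log(p) for p in probs)

# A confident scanner prediction incurs a high confusion loss ...
print(scanner_confusion_loss([0.98, 0.01, 0.01]))
# ... while a uniform prediction attains the minimum, ln(3) ~= 1.0986.
print(scanner_confusion_loss([1 / 3, 1 / 3, 1 / 3]))
```

In an iterative update scheme, this term would be minimised with respect to the feature extractor while the main-task loss is minimised jointly, alternating with updates to the scanner classifier itself.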
Subjects
Datasets as Topic , Deep Learning , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Brain/physiology , Humans
ABSTRACT
Both normal ageing and neurodegenerative diseases cause morphological changes to the brain. Age-related brain changes are subtle, nonlinear, and spatially and temporally heterogeneous, both within a subject and across a population. Machine learning models are particularly suited to capture these patterns and can produce a model that is sensitive to changes of interest, despite the large variety in healthy brain appearance. In this paper, the power of convolutional neural networks (CNNs) and the rich UK Biobank dataset, the largest database currently available, are harnessed to address the problem of predicting brain age. We developed a 3D CNN architecture to predict chronological age, using a training dataset of 12,802 T1-weighted MRI images and a further 6,885 images for testing. The proposed method shows competitive performance on age prediction, but, most importantly, the CNN prediction errors ΔBrainAge = Age_predicted − Age_true correlated significantly with many clinical measurements from the UK Biobank in both the female and male groups. In addition, having used images from only one imaging modality in this experiment, we examined the relationship between ΔBrainAge and the image-derived phenotypes (IDPs) from all other imaging modalities in the UK Biobank, showing correlations consistent with known patterns of ageing. Furthermore, we show that the use of nonlinearly registered images to train CNNs can lead to the network being driven by artefacts of the registration process and missing subtle indicators of ageing, limiting the clinical relevance. Due to the longitudinal aspect of the UK Biobank study, in the future it will be possible to explore whether the ΔBrainAge from models such as this network is predictive of any health outcomes.
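The downstream analysis described above, correlating ΔBrainAge with clinical measures, reduces to computing the delta per subject and a correlation coefficient against each measure. A minimal sketch with entirely made-up numbers (the ages and the "clinical measure" below are hypothetical, not UK Biobank data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical subjects: true age, CNN-predicted age, and a clinical measure.
age_true = [55.0, 62.0, 70.0, 48.0, 66.0]
age_pred = [58.0, 61.0, 75.0, 47.0, 70.0]
delta = [p - t for p, t in zip(age_pred, age_true)]  # ΔBrainAge per subject
measure = [132, 120, 150, 118, 141]                  # hypothetical measure
print(delta)  # [3.0, -1.0, 5.0, -1.0, 4.0]
print(pearson_r(delta, measure))
```

In practice the study uses many such measures and must correct for multiple comparisons, but the per-measure computation is exactly this shape.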
Subjects
Aging , Brain/diagnostic imaging , Magnetic Resonance Imaging , Neural Networks, Computer , Adult , Aged , Aged, 80 and over , Female , Humans , Imaging, Three-Dimensional , Male , Middle Aged , Phenotype
ABSTRACT
Since the rise of deep learning, new medical segmentation methods have rapidly been proposed with extremely promising results, often reporting marginal improvements on the previous state-of-the-art (SOTA) method. However, visual inspection often reveals errors, such as topological mistakes (e.g. holes or folds), that are not detected using traditional evaluation metrics. Incorrect topology can often lead to errors in clinically required downstream image processing tasks. Therefore, there is a need for new methods to focus on ensuring segmentations are topologically correct. In this work, we present TEDS-Net: a segmentation network that preserves anatomical topology whilst maintaining segmentation performance that is competitive with SOTA baselines. Further, we show how current SOTA segmentation methods can introduce problematic topological errors. TEDS-Net achieves anatomically plausible segmentation by using learnt topology-preserving fields to deform a prior. Traditionally, topology-preserving fields are described in the continuous domain and begin to break down when working in the discrete domain. Here, we introduce additional modifications that more strictly enforce topology preservation. We illustrate our method on an open-source medical heart dataset, performing both single and multi-structure segmentation, and show that the generated fields contain no folding voxels, which corresponds to full topology preservation on individual structures whilst vastly outperforming the other baselines on overall scene topology. The code is available at: https://github.com/mwyburd/TEDS-Net.
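"No folding voxels" is usually verified by checking that the Jacobian determinant of the deformation is positive everywhere on the grid. Below is a hedged 2D sketch of that check using forward differences (TEDS-Net itself works in 3D and its exact discretisation may differ; this only illustrates the criterion).

```python
def count_folding_points(u, v):
    """Count grid points where the forward-difference Jacobian determinant of
    the deformation phi(x, y) = (x + u, y + v) is non-positive (a fold)."""
    rows, cols = len(u), len(u[0])
    folds = 0
    for i in range(rows - 1):
        for j in range(cols - 1):
            # Partial derivatives of phi approximated by forward differences.
            dphi1_dx = 1.0 + (u[i][j + 1] - u[i][j])
            dphi1_dy = u[i + 1][j] - u[i][j]
            dphi2_dx = v[i][j + 1] - v[i][j]
            dphi2_dy = 1.0 + (v[i + 1][j] - v[i][j])
            det = dphi1_dx * dphi2_dy - dphi1_dy * dphi2_dx
            if det <= 0.0:
                folds += 1
    return folds

# A smooth, gently stretching displacement field: no folding anywhere.
u = [[0.1 * j for j in range(4)] for _ in range(4)]
v = [[0.0 for _ in range(4)] for _ in range(4)]
print(count_folding_points(u, v))  # 0

# A displacement that reverses order along x: every cell folds.
u_bad = [[-1.5 * j for j in range(4)] for _ in range(4)]
print(count_folding_points(u_bad, v))  # 9
```

A field passing this check is locally orientation-preserving, which is what "full topology preservation" on an individual structure requires.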
Subjects
Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Deep Learning , Algorithms , Magnetic Resonance Imaging/methods
ABSTRACT
Deep learning approaches for clinical predictions based on magnetic resonance imaging data have shown great promise as a translational technology for diagnosis and prognosis in neurological disorders, but their clinical impact has been limited. This is partially attributed to the opaqueness of deep learning models, causing insufficient understanding of what underlies their decisions. To overcome this, we trained convolutional neural networks on structural brain scans to differentiate dementia patients from healthy controls, and applied layerwise relevance propagation to procure individual-level explanations of the model predictions. Through extensive validations we demonstrate that deviations recognized by the model corroborate existing knowledge of structural brain aberrations in dementia. By employing the explainable dementia classifier in a longitudinal dataset of patients with mild cognitive impairment, we show that the spatially rich explanations complement the model prediction when forecasting transition to dementia and help characterize the biological manifestation of disease in the individual brain. Overall, our work exemplifies the clinical potential of explainable artificial intelligence in precision medicine.
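The core of layerwise relevance propagation is redistributing an output relevance score backwards, layer by layer, in proportion to each input's contribution, while conserving total relevance. A minimal sketch of the basic z-rule for a single linear layer follows (the study's CNN uses convolutional layers and likely a stabilised rule variant; the weights and activations here are made up):

```python
def lrp_linear(activations, weights, relevance_out, eps=1e-9):
    """Basic LRP z-rule for one linear layer:
    R_j = sum_k (a_j * w_jk) / (sum_j' a_j' * w_j'k) * R_k."""
    n_in, n_out = len(weights), len(weights[0])
    relevance_in = [0.0] * n_in
    for k in range(n_out):
        z = [activations[j] * weights[j][k] for j in range(n_in)]
        denom = sum(z)
        denom += eps if denom >= 0 else -eps  # tiny stabiliser
        for j in range(n_in):
            relevance_in[j] += z[j] / denom * relevance_out[k]
    return relevance_in

# Hypothetical layer: 3 inputs, 2 outputs.
a = [1.0, 2.0, 0.5]
w = [[0.5, -0.2], [0.1, 0.4], [0.3, 0.3]]
r_out = [1.0, 2.0]
r_in = lrp_linear(a, w, r_out)
print(r_in)
print(sum(r_in))  # ~3.0: total relevance is conserved across the layer
```

Applying such a rule through every layer of the trained classifier yields a voxel-level relevance map, which is what provides the "spatially rich explanations" described above.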
ABSTRACT
Acquisition of high quality manual annotations is vital for the development of segmentation algorithms. However, to create them we require a substantial amount of expert time and knowledge. Large numbers of labels are required to train convolutional neural networks due to the vast number of parameters that must be learned in the optimisation process. Here, we develop the STAMP algorithm to allow the simultaneous training and pruning of a UNet architecture for medical image segmentation, with targeted channelwise dropout to make the network robust to the pruning. We demonstrate the technique across segmentation tasks and imaging modalities. We then show that, through online pruning, networks can be trained to much higher performance than the equivalent standard UNet models while reducing their size by more than 85% in terms of parameters. This has the potential to allow networks to be directly trained on datasets where very low numbers of labels are available.
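A single pruning step in a train-and-prune loop typically ranks channels by some importance score and removes the weakest. The sketch below uses a generic L1 weight-magnitude criterion as an illustration; STAMP's actual scoring rule and schedule are not specified in the abstract, so treat this as an assumption.

```python
def prune_channels(channel_weights, fraction):
    """One magnitude-based pruning step: drop the `fraction` of channels with
    the smallest L1 weight norm and return the surviving channel indices."""
    norms = [sum(abs(w) for w in ws) for ws in channel_weights]
    order = sorted(range(len(norms)), key=lambda i: norms[i])
    n_drop = int(len(norms) * fraction)
    dropped = set(order[:n_drop])
    return [i for i in range(len(norms)) if i not in dropped]

# Four hypothetical channels; channel 2 has near-zero weights.
channels = [[0.1, -0.2], [1.0, 0.8], [0.01, 0.0], [0.5, -0.6]]
print(prune_channels(channels, 0.25))  # [0, 1, 3] -- channel 2 is removed
```

In an online scheme like STAMP this step alternates with further training epochs (with the targeted channelwise dropout keeping the network robust to the removals), rather than pruning once after training is complete.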
Subjects
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Algorithms , Humans , Image Processing, Computer-Assisted/methods , Learning , Magnetic Resonance Imaging/methods , Neural Networks, Computer
ABSTRACT
Combining deep learning image analysis methods and large-scale imaging datasets offers many opportunities to neuroscience imaging and epidemiology. However, despite these opportunities and the success of deep learning when applied to a range of neuroimaging tasks and domains, significant barriers continue to limit the impact of large-scale datasets and analysis tools. Here, we examine the main challenges and the approaches that have been explored to overcome them. We focus on issues relating to data availability, interpretability, evaluation, and logistical challenges and discuss the problems that still need to be tackled to enable the success of "big data" deep learning approaches beyond research.
Subjects
Machine Learning
ABSTRACT
Robust automated segmentation of white matter hyperintensities (WMHs) in different datasets (domains) is highly challenging due to differences in acquisition (scanner, sequence), population (WMH amount and location) and limited availability of manual segmentations to train supervised algorithms. In this work we explore various domain adaptation techniques such as transfer learning and domain adversarial learning methods, including domain adversarial neural networks and domain unlearning, to improve the generalisability of our recently proposed triplanar ensemble network, which is our baseline model. We used datasets with variations in intensity profile and lesion characteristics, acquired using different scanners. For the source domain, we considered a dataset consisting of data acquired from 3 different scanners, while the target domain consisted of 2 datasets. We evaluated the domain adaptation techniques on the target domain datasets, and additionally evaluated the performance on the source domain test dataset for the adversarial techniques. For transfer learning, we also studied various training options, such as the minimal number of unfrozen layers and subjects required for fine-tuning in the target domain. On comparing the performance of the different techniques on the target datasets, domain adversarial training of the neural network gave the best performance, making the technique promising for robust WMH segmentation.
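The distinctive component of domain adversarial neural network training is the gradient reversal between the domain classifier and the feature extractor: forward activations pass through unchanged, but the domain-loss gradient is negated (and scaled) on the way back, pushing features towards domain-invariance. A minimal sketch of just the backward rule (the scaling factor and the surrounding training loop are illustrative assumptions):

```python
def reverse_gradient(grad, lam=1.0):
    """Backward pass of a gradient-reversal 'layer': the forward pass is the
    identity, but the gradient flowing from the domain classifier into the
    feature extractor is negated and scaled by lam."""
    return [-lam * g for g in grad]

# Gradient of the domain (scanner) loss w.r.t. some feature activations.
grad_from_domain_loss = [0.2, -0.1, 0.05]
# The feature extractor receives the reversed gradient, so it is updated
# *against* the direction that would help the domain classifier.
print(reverse_gradient(grad_from_domain_loss, lam=0.5))
```

During training, this reversed gradient is summed with the ordinary gradient of the segmentation loss, so the features stay useful for WMH segmentation while becoming progressively harder to attribute to a particular scanner.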