ABSTRACT
BACKGROUND: Clinical data warehouses provide access to massive amounts of medical images, but these images are often heterogeneous. They can, for instance, include images acquired with or without the injection of a gadolinium-based contrast agent. Harmonizing such data sets is thus fundamental to guarantee unbiased results, for example when performing differential diagnosis. Furthermore, classical neuroimaging software tools for feature extraction are typically applied only to images without gadolinium. The objective of this work is to evaluate how image translation can be useful to exploit a highly heterogeneous data set containing both contrast-enhanced and non-contrast-enhanced images from a clinical data warehouse. METHODS: We propose and compare different 3D U-Net and conditional GAN models to convert contrast-enhanced T1-weighted (T1ce) brain MRI into non-contrast-enhanced (T1nce) brain MRI. These models were trained using 230 image pairs and tested on 77 image pairs from the clinical data warehouse of the Greater Paris area. RESULTS: Validation using standard image similarity measures demonstrated that the similarity between real and synthetic T1nce images was higher than between real T1nce and T1ce images for all the models compared. The best performing models were further validated on a segmentation task. We showed that tissue volumes extracted from synthetic T1nce images were closer to those of real T1nce images than volumes extracted from T1ce images. CONCLUSION: We showed that deep learning models initially developed with research-quality data could synthesize T1nce from T1ce images of clinical quality and that reliable features could be extracted from the synthetic images, thus demonstrating the ability of such methods to help exploit a data set coming from a clinical data warehouse.
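As a rough illustration of the image-similarity validation mentioned above, the sketch below compares a synthetic T1nce volume to the corresponding real T1nce volume with SSIM, PSNR and mean absolute error. The file names and the choice of nibabel/scikit-image are assumptions for this sketch, not the authors' actual evaluation code.

```python
# Minimal sketch (assumed file names and libraries, not the authors' evaluation code):
# compare a synthetic T1nce volume to the real T1nce acquired for the same subject.
import nibabel as nib
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

real = nib.load("sub-01_real_T1nce.nii.gz").get_fdata().astype(np.float32)
synth = nib.load("sub-01_synthetic_T1nce.nii.gz").get_fdata().astype(np.float32)

data_range = float(real.max() - real.min())      # intensity range used by SSIM and PSNR
ssim = structural_similarity(real, synth, data_range=data_range)
psnr = peak_signal_noise_ratio(real, synth, data_range=data_range)
mae = float(np.mean(np.abs(real - synth)))       # mean absolute intensity error

print(f"SSIM={ssim:.3f}  PSNR={psnr:.1f} dB  MAE={mae:.2f}")
```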
Subjects
Data Warehousing, Gadolinium, Humans, Brain/diagnostic imaging, Magnetic Resonance Imaging/methods, Neuroimaging/methods, Computer-Assisted Image Processing/methods
ABSTRACT
To achieve precision medicine and improve patients' quality of life, machine learning is increasingly used in medicine. Brain disorders are often complex and heterogeneous, and several modalities such as demographic, clinical, imaging, genetic and environmental data have been studied to improve their understanding. Deep learning, a subfield of machine learning, provides complex algorithms that can learn from such varied data. It has become state of the art in numerous fields, including computer vision and natural language processing, and is increasingly being applied in medicine. In this article, we review the use of deep learning for brain disorders. More specifically, we identify the main applications, the disorders concerned, and the types of architectures and data used. Finally, we provide guidelines to bridge the gap between research studies and clinical routine.
Subjects
Brain Diseases/therapy, Deep Learning, Brain Diseases/classification, Brain Diseases/genetics, Differential Diagnosis, Disease Progression, Humans, Precision Medicine/methods, Smartphone, Treatment Outcome
ABSTRACT
PURPOSE OF REVIEW: Machine learning is an artificial intelligence technique that allows computers to perform a task without being explicitly programmed. Machine learning can be used to assist the diagnosis and prognosis of brain disorders. Although the earliest articles date from more than ten years ago, research in this area continues to grow at a very fast pace. RECENT FINDINGS: Recent works using machine learning for diagnosis have moved from classification of a given disease versus controls to differential diagnosis. Intense research has been devoted to the prediction of the future patient state. Although many earlier works focused on neuroimaging as the data source, the current trend is toward the integration of multimodal data. In terms of targeted diseases, dementia remains dominant, but approaches have been developed for a wide variety of neurological and psychiatric diseases. SUMMARY: Machine learning is extremely promising for assisting diagnosis and prognosis in brain disorders. Nevertheless, we argue that key challenges remain to be addressed by the community to bring these tools into clinical routine: good practices regarding validation and reproducible research need to be more widely adopted; extensive generalization studies are required; and interpretable models are needed to overcome the limitations of black-box approaches.
Subjects
Brain Diseases/diagnosis, Machine Learning, Neuroimaging/methods, Artificial Intelligence, Brain Diseases/diagnostic imaging, Humans, Prognosis
ABSTRACT
A large number of papers have introduced novel machine learning and feature extraction methods for the automatic classification of Alzheimer's disease (AD). However, while the vast majority of these works use the public dataset ADNI for evaluation, they are difficult to reproduce because different key components of the validation are often not readily available. These components include the selected participants and input data, image preprocessing and cross-validation procedures. The performance of the different approaches is also difficult to compare objectively. In particular, it is often difficult to assess which part of the method (e.g. preprocessing, feature extraction or classification algorithms) provides a real improvement, if any. In the present paper, we propose a framework for reproducible and objective classification experiments in AD using three publicly available datasets (ADNI, AIBL and OASIS). The framework comprises: i) automatic conversion of the three datasets into a standard format (BIDS); ii) a modular set of preprocessing pipelines, feature extraction and classification methods, together with an evaluation framework, that provide a baseline for benchmarking the different components. We demonstrate the use of the framework for a large-scale evaluation on 1960 participants using T1 MRI and FDG PET data. In this evaluation, we assess the influence of different modalities, preprocessing, feature types (regional or voxel-based features), classifiers, training set sizes and datasets. Performance was in line with the state of the art. FDG PET outperformed T1 MRI for all classification tasks. No difference in performance was found for the use of different atlases, image smoothing, partial volume correction of FDG PET images, or feature type. Linear SVM and L2-logistic regression resulted in similar performance and both outperformed random forests. The classification performance increased along with the number of subjects used for training. Classifiers trained on ADNI generalized well to AIBL and OASIS. All the code of the framework and the experiments is publicly available: general-purpose tools have been integrated into the Clinica software (www.clinica.run) and the paper-specific code is available at: https://gitlab.icm-institute.org/aramislab/AD-ML.
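To make the kind of baseline used in such a framework concrete, here is a minimal, hedged sketch of a cross-validated linear SVM on regional features with scikit-learn. The feature matrix and labels are random placeholders standing in for the regional measurements and diagnostic labels; this is not the framework's code.

```python
# Illustrative baseline only (random placeholder features, not ADNI data):
# cross-validated linear SVM on regional features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 120))    # 200 subjects x 120 regional features (placeholder)
y = rng.integers(0, 2, size=200)   # 0 = cognitively normal, 1 = AD (placeholder labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy")
print("balanced accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```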
Assuntos
Doença de Alzheimer/diagnóstico por imagem , Interpretação Estatística de Dados , Conjuntos de Dados como Assunto , Processamento de Imagem Assistida por Computador/métodos , Aprendizado de Máquina , Imageamento por Ressonância Magnética/métodos , Neuroimagem/métodos , Tomografia por Emissão de Pósitrons/métodos , Idoso , Idoso de 80 Anos ou mais , Doença de Alzheimer/metabolismo , Doença de Alzheimer/patologia , Atlas como Assunto , Feminino , Fluordesoxiglucose F18 , Humanos , Masculino , Pessoa de Meia-Idade , Compostos RadiofarmacêuticosRESUMO
AIM: To accurately quantify the radioactivity concentration measured by PET, emission data need to be corrected for photon attenuation; however, the MRI signal cannot easily be converted into attenuation values, making attenuation correction (AC) in PET/MRI challenging. In order to further improve the current vendor-implemented MR-AC methods for absolute quantification, a number of prototype methods have been proposed in the literature. These can be categorized into three types: template/atlas-based, segmentation-based, and reconstruction-based. These proposed methods in general demonstrated improvements compared to vendor-implemented AC, and many studies report deviations in PET uptake after AC of only a few percent from a gold standard CT-AC. Using a unified quantitative evaluation with identical metrics, subject cohort, and common CT-based reference, the aims of this study were to evaluate a selection of novel methods proposed in the literature, and identify the ones suitable for clinical use. METHODS: In total, 11 AC methods were evaluated: two vendor-implemented (MR-AC_DIXON and MR-AC_UTE), five based on template/atlas information (MR-AC_SEGBONE (Koesters et al., 2016), MR-AC_ONTARIO (Anazodo et al., 2014), MR-AC_BOSTON (Izquierdo-Garcia et al., 2014), MR-AC_UCL (Burgos et al., 2014), and MR-AC_MAXPROB (Merida et al., 2015)), one based on simultaneous reconstruction of attenuation and emission (MR-AC_MLAA (Benoit et al., 2015)), and three based on image-segmentation (MR-AC_MUNICH (Cabello et al., 2015), MR-AC_CAR-RiDR (Juttukonda et al., 2015), and MR-AC_RESOLUTE (Ladefoged et al., 2015)). We selected 359 subjects who were scanned using one of the following radiotracers: [18F]FDG (210), [11C]PiB (51), and [18F]florbetapir (98). The comparison to AC with a gold standard CT was performed both globally and regionally, with a special focus on robustness and outlier analysis. RESULTS: The average performance in PET tracer uptake was within ±5% of CT for all of the proposed methods, with the average ± SD global percentage bias in PET FDG uptake for each method being: MR-AC_DIXON (-11.3±3.5)%, MR-AC_UTE (-5.7±2.0)%, MR-AC_ONTARIO (-4.3±3.6)%, MR-AC_MUNICH (3.7±2.1)%, MR-AC_MLAA (-1.9±2.6)%, MR-AC_SEGBONE (-1.7±3.6)%, MR-AC_UCL (0.8±1.2)%, MR-AC_CAR-RiDR (-0.4±1.9)%, MR-AC_MAXPROB (-0.4±1.6)%, MR-AC_BOSTON (-0.3±1.8)%, and MR-AC_RESOLUTE (0.3±1.7)%, ordered by average bias. The overall best performing methods (MR-AC_BOSTON, MR-AC_MAXPROB, MR-AC_RESOLUTE and MR-AC_UCL, ordered alphabetically) showed regional average errors within ±3% of PET with CT-AC in all regions of the brain with FDG, and the same four methods, as well as MR-AC_CAR-RiDR, showed that for 95% of the patients, 95% of brain voxels had an uptake that deviated by less than 15% from the reference. Comparable performance was obtained with PiB and florbetapir. CONCLUSIONS: All of the proposed novel methods have an average global performance within likely acceptable limits (±5% of CT-based reference), and the main difference among the methods was found in the robustness, outlier analysis, and clinical feasibility. Overall, the best performing methods were MR-AC_BOSTON, MR-AC_MAXPROB, MR-AC_RESOLUTE and MR-AC_UCL, ordered alphabetically. These methods all minimized the number of outliers, standard deviation, and average global and local error. The methods MR-AC_MUNICH and MR-AC_CAR-RiDR were both within acceptable quantitative limits, so these methods should be considered if processing time is a factor.
The method MR-AC_SEGBONE also demonstrates promising results, and performs well within the likely acceptable quantitative limits. For clinical routine scans where processing time can be a key factor, this vendor-provided solution currently outperforms most methods. With the performance of the methods presented here, it may be concluded that the challenge of improving the accuracy of MR-AC in adult brains with normal anatomy has been solved to a quantitatively acceptable degree, with a residual error smaller than the quantification reproducibility of PET imaging.
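For context, the global percentage-bias metric reported above can be sketched as the mean relative difference in PET uptake between an MR-based attenuation correction and the CT-AC reference within a brain mask. The arrays below are synthetic placeholders, not data from the study.

```python
# Sketch of the global percentage-bias metric (synthetic arrays, not study data):
# mean relative difference in PET uptake between an MR-based AC and the CT-AC reference.
import numpy as np

def global_percent_bias(pet_mrac, pet_ctac, brain_mask):
    """Mean voxel-wise relative difference (in %) inside the brain mask."""
    ref = pet_ctac[brain_mask]
    test = pet_mrac[brain_mask]
    return 100.0 * np.mean((test - ref) / ref)

# Toy example: a volume that under-estimates uptake by 2% everywhere.
pet_ct = np.full((64, 64, 64), 10.0)
pet_mr = pet_ct * 0.98
mask = np.ones_like(pet_ct, dtype=bool)
print(global_percent_bias(pet_mr, pet_ct, mask))   # -> approximately -2.0
```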
Subjects
Brain/diagnostic imaging, Cognitive Dysfunction/diagnostic imaging, Dementia/diagnostic imaging, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods, Positron Emission Tomography/methods, Adult, Aged, Aged 80 and over, Cohort Studies, Female, Humans, Computer-Assisted Image Processing/standards, Magnetic Resonance Imaging/standards, Male, Middle Aged, Positron Emission Tomography/standards, Radiopharmaceuticals, Young Adult
ABSTRACT
BACKGROUND: Increasing age is the biggest risk factor for dementia, of which Alzheimer's disease is the commonest cause. The pathological changes underpinning Alzheimer's disease are thought to develop at least a decade prior to the onset of symptoms. Molecular positron emission tomography and multi-modal magnetic resonance imaging allow key pathological processes underpinning cognitive impairment - including β-amyloid deposition, vascular disease, network breakdown and atrophy - to be assessed repeatedly and non-invasively. This enables potential determinants of dementia to be delineated earlier, and therefore opens a pre-symptomatic window where intervention may prevent the onset of cognitive symptoms. METHODS/DESIGN: This paper outlines the clinical, cognitive and imaging protocol of "Insight 46", a neuroscience sub-study of the MRC National Survey of Health and Development. This is one of the oldest British birth cohort studies and has followed 5362 individuals since their birth in England, Scotland and Wales during one week in March 1946. These individuals have been tracked in 24 waves of data collection incorporating a wide range of health and functional measures, including repeat measures of cognitive function. Now aged 71 years, only a small fraction of these individuals have overt dementia, but estimates suggest that ~1/3 of individuals in this age group may be in the preclinical stages of Alzheimer's disease. Insight 46 is recruiting 500 study members selected at random from those who attended a clinical visit at 60-64 years and on whom relevant lifecourse data are available. We describe the sub-study design and protocol, which involves a prospective two time-point (0, 24 month) data collection covering clinical, neuropsychological, β-amyloid positron emission tomography and magnetic resonance imaging, biomarker and genetic information. Data collection started in 2015 (age 69) and aims to be completed in 2019 (age 73). DISCUSSION: Through the integration of data on the socioeconomic environment and on physical, psychological and cognitive function from 0 to 69 years, coupled with genetics, structural and molecular imaging, and intensive cognitive and neurological phenotyping, Insight 46 aims to identify lifetime factors which influence brain health and cognitive ageing, with particular focus on Alzheimer's disease and cerebrovascular disease. This will provide an evidence base for the rational design of disease-modifying trials.
Subjects
Early Diagnosis, Research Design, Aged, Alzheimer Disease/diagnosis, Biomarkers/analysis, Dementia/diagnosis, England, Female, Humans, Male, Middle Aged, Scotland
ABSTRACT
Positron Emission Tomography/Magnetic Resonance Imaging (PET/MR) scanners are expected to offer a new range of clinical applications. Attenuation correction is an essential requirement for quantification of PET data, but MRI images do not directly provide a patient-specific attenuation map. METHODS: We further validate and extend a Computed Tomography (CT) and attenuation map (µ-map) synthesis method based on pre-acquired MRI-CT image pairs. The validation consists of comparing the CT images synthesised with the proposed method to the original CT images. PET images were acquired using two different tracers (18F-FDG and 18F-florbetapir). They were then reconstructed and corrected for attenuation using the synthetic µ-maps and compared to the reference PET images corrected with the CT-based µ-maps. During the validation, we observed that the CT synthesis was inaccurate in areas such as the neck and the cerebellum, and propose a refinement to mitigate these problems, as well as an extension of the method to multi-contrast MRI data. RESULTS: With the improvements proposed, a significant enhancement in CT synthesis, which results in a reduced absolute error and a decrease in the bias when reconstructing PET images, was observed. For both tracers, on average, the absolute difference between the reference PET images and the PET images corrected with the proposed method was less than 2%, with a bias below 1%. CONCLUSION: With the proposed method, attenuation information can be accurately derived from MRI images by synthesising CT using routine anatomical sequences. MRI sequences, or combinations of sequences, can be used to synthesise CT images, as long as they provide sufficient anatomical information.
Subjects
Aniline Compounds, Ethylene Glycols, Fluorodeoxyglucose F18, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging, Multimodal Imaging, Positron Emission Tomography, Brain/diagnostic imaging, Humans, Radioactive Tracers, Sensitivity and Specificity, X-Ray Computed Tomography
ABSTRACT
Containing the medical data of millions of patients, clinical data warehouses (CDWs) represent a great opportunity to develop computational tools. Magnetic resonance images (MRIs) are particularly sensitive to patient movements during image acquisition, which result in artefacts (blurring, ghosting and ringing) in the reconstructed image. As a result, a significant number of MRIs in CDWs are corrupted by these artefacts and may be unusable. Since their manual detection is impossible due to the large number of scans, it is necessary to develop tools to automatically exclude (or at least identify) images with motion in order to fully exploit CDWs. In this paper, we propose a novel transfer learning method from research to clinical data for the automatic detection of motion in 3D T1-weighted brain MRI. The method consists of two steps: pre-training on research data using synthetic motion, followed by a fine-tuning step to generalise the pre-trained model to clinical data, relying on the labelling of 4045 images. The objectives were (1) to exclude images with severe motion and (2) to detect mild motion artefacts. Our approach achieved excellent accuracy for the first objective, with a balanced accuracy close to that of the annotators (balanced accuracy > 80%). However, for the second objective, the performance was weaker and substantially lower than that of human raters. Overall, our framework will be useful for taking advantage of CDWs in medical imaging and highlights the importance of a clinical validation of models trained on research data.
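To illustrate the two-step idea (pre-training followed by fine-tuning on clinical labels), here is a hedged PyTorch sketch with a generic 3D CNN. The architecture, weight file name and tensor shapes are assumptions for illustration, not the published model.

```python
# Illustrative two-step transfer-learning sketch in PyTorch (generic 3D CNN;
# the architecture, file name and shapes are assumptions, not the published model).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
    nn.Flatten(), nn.Linear(16, 2),              # 2 classes: motion / no motion
)

# Step 1: pre-training on research data with synthetic motion would happen here;
# the resulting weights would then be loaded (path is a placeholder):
# model.load_state_dict(torch.load("pretrained_on_synthetic_motion.pt"))

# Step 2: fine-tuning on labelled clinical images -- freeze the convolutional trunk
# and only update the final classification layer.
for p in model.parameters():
    p.requires_grad = False
for p in model[-1].parameters():
    p.requires_grad = True

optimizer = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-4)
criterion = nn.CrossEntropyLoss()

volumes = torch.randn(2, 1, 32, 32, 32)          # placeholder mini-batch of 3D patches
labels = torch.tensor([0, 1])
loss = criterion(model(volumes), labels)
loss.backward()
optimizer.step()
```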
Subjects
Artifacts, Data Warehousing, Humans, Motion (Physics), Brain/diagnostic imaging, Magnetic Resonance Imaging
ABSTRACT
In this paper, we propose a new method to perform data augmentation in a reliable way in the High Dimensional Low Sample Size (HDLSS) setting using a geometry-based variational autoencoder (VAE). Our approach combines 1) a new VAE model, whose latent space is modeled as a Riemannian manifold and which combines Riemannian metric learning and normalizing flows, and 2) a new generation scheme which produces more meaningful samples, especially in the context of small data sets. The method is tested through a wide experimental study in which its robustness to the data set, classifier and training sample size is assessed. It is also validated on a medical imaging classification task on the challenging ADNI database, where a small number of 3D brain magnetic resonance images (MRIs) are considered and augmented using the proposed VAE framework. In each case, the proposed method allows for a significant and reliable gain in the classification metrics. For instance, balanced accuracy jumps from 66.3% to 74.3% for a state-of-the-art convolutional neural network classifier trained with 50 MRIs of cognitively normal (CN) subjects and 50 Alzheimer's disease (AD) patients, and from 77.7% to 86.3% when trained with 243 CN and 210 AD, while greatly improving sensitivity and specificity.
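For readers unfamiliar with VAE-based augmentation, the sketch below shows a plain VAE with the reparameterization trick and how synthetic samples would be drawn from its decoder. It deliberately omits the Riemannian metric learning, normalizing flows and generation scheme that are the actual contributions of the paper; names and dimensions are illustrative.

```python
# Minimal *plain* VAE sketch in PyTorch, for context only -- it does not include the
# Riemannian metric learning, normalizing flows or generation scheme proposed in the paper.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, in_dim=256, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)    # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    rec = ((recon - x) ** 2).sum(dim=1).mean()                     # reconstruction term
    kld = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    return rec + kld

model = TinyVAE()
x = torch.randn(4, 256)                       # 4 placeholder feature vectors
recon, mu, logvar = model(x)
loss = vae_loss(x, recon, mu, logvar)

# After training, synthetic samples used for augmentation would be obtained by
# decoding random latent vectors: new_samples = model.dec(torch.randn(10, 8))
```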
ABSTRACT
A variety of algorithms have been proposed for computer-aided diagnosis of dementia from anatomical brain MRI. These approaches achieve high accuracy when applied to research data sets, but their performance on real-life clinical routine data has not been evaluated yet. The aim of this work was to study the performance of such approaches on clinical routine data, based on a hospital data warehouse, and to compare the results to those obtained on a research data set. The clinical data set was extracted from the hospital data warehouse of the Greater Paris area, which includes 39 different hospitals. The research set was composed of data from the Alzheimer's Disease Neuroimaging Initiative data set. In the clinical set, the population of interest was identified by exploiting the diagnostic codes from the 10th revision of the International Classification of Diseases that are assigned to each patient. We studied how the imbalance of the training sets, in terms of contrast agent injection and image quality, may bias the results. We demonstrated that computer-aided diagnosis performance was strongly biased upwards (by over 17 percentage points of balanced accuracy) by the confounders of image quality and contrast agent injection, a phenomenon known as the Clever Hans effect or shortcut learning. When these biases were removed, the performance was very poor. In any case, the performance was considerably lower than on the research data set. Our study highlights that there are still considerable challenges for translating dementia computer-aided diagnosis systems to clinical routine.
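A minimal way to expose such a shortcut is to evaluate the same predictions on a test set where the confounder is correlated with the diagnosis and on one where it is balanced, as sketched below with placeholder data (gadolinium injection as the confounder); this is not the study's code.

```python
# Sketch of a simple shortcut-learning check (placeholder data, not the study's code):
# evaluate a "Clever Hans" model that only reads the confounder (gadolinium injection)
# on a biased test set and on a test set where the confounder is balanced across classes.
import numpy as np
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)
n = 400
gado = rng.integers(0, 2, n)             # confounder: contrast agent injected (0/1)
pred = gado.copy()                       # model prediction driven only by the confounder

label_biased = gado.copy()               # biased set: injection perfectly predicts diagnosis
label_balanced = rng.integers(0, 2, n)   # balanced set: injection independent of diagnosis

print("biased test set:  ", balanced_accuracy_score(label_biased, pred))    # ~1.0
print("balanced test set:", balanced_accuracy_score(label_balanced, pred))  # ~0.5
```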
Subjects
Alzheimer Disease, Contrast Media, Humans, Data Warehousing, Brain/diagnostic imaging, Magnetic Resonance Imaging/methods, Alzheimer Disease/diagnostic imaging, Machine Learning, Computers
ABSTRACT
INTRODUCTION: The Centiloid scale aims to harmonize amyloid beta (Aβ) positron emission tomography (PET) measures across different analysis methods. As Centiloids were created using PET/computerized tomography (CT) data and are influenced by scanner differences, we investigated the Centiloid transformation with data from Insight 46 acquired with PET/magnetic resonance imaging (MRI). METHODS: We transformed standardized uptake value ratios (SUVRs) from 432 florbetapir PET/MRI scans processed using whole cerebellum (WC) and white matter (WM) references, with and without partial volume correction. Gaussian-mixture-modelling-derived cutpoints for Aβ PET positivity were converted. RESULTS: The Centiloid cutpoint was 14.2 for WC SUVRs. The relationship between WM and WC uptake differed between the calibration and testing datasets, producing implausibly low WM-based Centiloids. Linear adjustment produced a WM-based cutpoint of 18.1. DISCUSSION: Transformation of PET/MRI florbetapir data to Centiloids is valid. However, further understanding of the effects of acquisition or biological factors on the transformation using a WM reference is needed. HIGHLIGHTS: Centiloid conversion of amyloid beta positron emission tomography (PET) data aims to standardize results. Centiloid values can be influenced by differences in acquisition. We converted florbetapir PET/magnetic resonance imaging data from a large birth cohort. Whole cerebellum referenced values could be reliably transformed to Centiloids. White matter referenced values may be less generalizable between datasets.
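For reference, the standard Centiloid mapping is a linear rescaling of SUVR anchored on young controls (0) and typical AD (100). The sketch below uses made-up anchor values; in practice they come from the Centiloid calibration procedure.

```python
# Sketch of the standard Centiloid linear rescaling of SUVR (anchor values below are
# made up; in practice they come from the Centiloid calibration procedure).
def suvr_to_centiloid(suvr, suvr_young_controls, suvr_typical_ad):
    """Map an SUVR to Centiloids: 0 = mean of young controls, 100 = mean of typical AD."""
    return 100.0 * (suvr - suvr_young_controls) / (suvr_typical_ad - suvr_young_controls)

# Example with hypothetical whole-cerebellum anchors.
print(suvr_to_centiloid(1.20, suvr_young_controls=1.05, suvr_typical_ad=1.65))  # 25.0
```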
ABSTRACT
BACKGROUND AND OBJECTIVE: As deep learning faces a reproducibility crisis and studies on deep learning applied to neuroimaging are contaminated by methodological flaws, there is an urgent need to provide a safe environment for deep learning users to help them avoid common pitfalls that will bias and discredit their results. Several tools have been proposed to help deep learning users design their framework for neuroimaging data sets. SOFTWARE OVERVIEW: We present here ClinicaDL, one of these software tools. ClinicaDL interacts with BIDS, a standard format in the neuroimaging field, and its derivatives, so it can be used with a large variety of data sets. Moreover, it checks for the absence of data leakage when applying trained networks to new data, and saves all the information necessary to guarantee the reproducibility of results. The combination of ClinicaDL and its companion project Clinica allows performing an end-to-end neuroimaging analysis, from the download of raw data sets to the interpretation of trained networks, including neuroimaging preprocessing, quality check, label definition, architecture search, and network training and evaluation. CONCLUSIONS: We implemented ClinicaDL to address three common issues encountered by deep learning users who are not always familiar with neuroimaging data: (1) the format and preprocessing of neuroimaging data sets, (2) the contamination of the evaluation procedure by data leakage and (3) a lack of reproducibility. We hope that its use by researchers will lead to more reliable and thus more valuable scientific studies in our field.
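One concrete example of a leakage check of this kind is ensuring that all images of a given subject fall on the same side of a train/test split. The sketch below uses scikit-learn's GroupShuffleSplit with placeholder subject IDs; it illustrates the idea and is not ClinicaDL's implementation.

```python
# Sketch of one such check (scikit-learn, placeholder subject IDs; not ClinicaDL code):
# every image of a given subject must fall on the same side of the train/test split.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

subjects = np.array(["sub-01", "sub-01", "sub-02", "sub-03", "sub-03", "sub-04"])
images = np.arange(len(subjects))               # one entry per image/session

splitter = GroupShuffleSplit(n_splits=1, test_size=0.5, random_state=0)
train_idx, test_idx = next(splitter.split(images, groups=subjects))

assert not set(subjects[train_idx]) & set(subjects[test_idx]), "data leakage detected"
print("train subjects:", sorted(set(subjects[train_idx])))
print("test subjects: ", sorted(set(subjects[test_idx])))
```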
Subjects
Deep Learning, Software, Computer-Assisted Image Processing/methods, Neuroimaging/methods, Reproducibility of Results
ABSTRACT
Many studies on machine learning (ML) for computer-aided diagnosis have so far been mostly restricted to high-quality research data. Clinical data warehouses, gathering routine examinations from hospitals, offer great promise for training and validation of ML models in a realistic setting. However, the use of such clinical data warehouses requires quality control (QC) tools. Visual QC by experts is time-consuming and does not scale to large datasets. In this paper, we propose a convolutional neural network (CNN) for the automatic QC of 3D T1-weighted brain MRI for a large heterogeneous clinical data warehouse. To that end, we used the data warehouse of the hospitals of the Greater Paris area (Assistance Publique-Hôpitaux de Paris [AP-HP]). Specifically, the objectives were: 1) to identify images which are not proper T1-weighted brain MRIs; 2) to identify acquisitions for which gadolinium was injected; 3) to rate the overall image quality. We used 5000 images for training and validation and a separate set of 500 images for testing. In order to train and validate the CNN, the data were annotated by two trained raters according to a visual QC protocol that we specifically designed for application in the setting of a data warehouse. For objectives 1 and 2, our approach achieved excellent accuracy (balanced accuracy and F1-score > 90%), similar to the human raters. For objective 3, the performance was good but substantially lower than that of human raters. Nevertheless, the automatic approach accurately identified (balanced accuracy and F1-score > 80%) low-quality images, which would typically need to be excluded. Overall, our approach should be useful for exploiting hospital data warehouses in medical image computing.
Subjects
Data Warehousing, Magnetic Resonance Imaging, Brain/diagnostic imaging, Humans, Neural Networks (Computer), Quality Control
ABSTRACT
BACKGROUND: Temporary disruption of the blood-brain barrier (BBB) using pulsed ultrasound leads to the clearance of both amyloid and tau from the brain, increased neurogenesis, and mitigation of cognitive decline in pre-clinical models of Alzheimer's disease (AD), while also increasing BBB penetration of therapeutic antibodies. The goal of this pilot clinical trial was to investigate the safety and efficacy of this approach in patients with mild AD using an implantable ultrasound device. METHODS: An implantable, 1-MHz ultrasound device (SonoCloud-1) was implanted under local anesthesia in the skull (extradural) of 10 mild AD patients to target the left supra-marginal gyrus. Over 3.5 months, seven ultrasound sessions in combination with intravenous infusion of microbubbles were performed twice per month to temporarily disrupt the BBB. 18F-florbetapir and 18F-fluorodeoxyglucose positron emission tomography (PET) imaging were performed on a combined PET/MRI scanner at inclusion and at 4 and 8 months after the initiation of sonications to monitor brain metabolism and amyloid levels, along with cognitive evaluations. The evolution of cognitive and neuroimaging features was compared to that of a matched sample of control participants taken from the Alzheimer's Disease Neuroimaging Initiative (ADNI). RESULTS: A total of 63 BBB opening procedures were performed in nine subjects. The procedure was well tolerated. A non-significant decrease in amyloid accumulation at 4 months of -6.6% (SD = 7.2%) on 18F-florbetapir PET imaging in the sonicated gray matter targeted by the ultrasound transducer was observed compared to baseline in the six subjects who completed treatment and had evaluable imaging scans. No differences in the longitudinal change in glucose metabolism were observed compared to the neighboring or contralateral regions or to the change observed in the same region in ADNI participants. No significant effect on cognitive evolution was observed in comparison with the ADNI participants, as expected given the small sample size and duration of the trial. CONCLUSIONS: These results demonstrate the safety of ultrasound-based BBB disruption and the potential of this technology to be used as a therapy for AD patients. Studying this technique in a larger clinical trial with a device designed to sonicate larger volumes of tissue, and in combination with disease-modifying drugs, may further enhance the observed effects. TRIAL REGISTRATION: ClinicalTrials.gov, NCT03119961.
Subjects
Alzheimer Disease, Cognitive Dysfunction, Alzheimer Disease/diagnostic imaging, Alzheimer Disease/metabolism, Alzheimer Disease/therapy, Blood-Brain Barrier/diagnostic imaging, Blood-Brain Barrier/metabolism, Brain/diagnostic imaging, Brain/metabolism, Cognitive Dysfunction/metabolism, Humans, Neuroimaging/methods, Pilot Projects, Positron Emission Tomography/methods
ABSTRACT
Purpose: In clinical practice, positron emission tomography (PET) images are mostly analyzed visually, but the sensitivity and specificity of this approach greatly depend on the observer's experience. Quantitative analysis of PET images would alleviate this problem by helping define an objective limit between normal and pathological findings. We present an anomaly detection framework for the individual analysis of PET images. Approach: We created subject-specific abnormality maps that summarize the pathology's topographical distribution in the brain by comparing the subject's PET image to a model of healthy PET appearance that is specific to the subject under investigation. This model was generated from demographically and morphologically matched PET scans from a control dataset. Results: We generated abnormality maps for healthy controls, patients at different stages of Alzheimer's disease and with different frontotemporal dementia syndromes. We showed that no anomalies were detected for the healthy controls and that the anomalies detected from the patients with dementia coincided with the regions where abnormal uptake was expected. We also validated the proposed framework using the abnormality maps as inputs of a classifier and obtained higher classification accuracies than when using the PET images themselves as inputs. Conclusions: The proposed method was able to automatically locate and characterize the areas characteristic of dementia from PET images. The abnormality maps are expected to (i) help clinicians in their diagnosis by highlighting, in a data-driven fashion, the pathological areas, and (ii) improve the interpretability of subsequent analyses, such as computer-aided diagnosis or spatiotemporal modeling.
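The simplest instantiation of a subject-versus-matched-controls comparison is a voxel-wise z-score map, sketched below on synthetic arrays. The paper's model of healthy PET appearance is more elaborate; this only illustrates the general abnormality-map idea.

```python
# Minimal voxel-wise z-score sketch (synthetic arrays): compare a subject's PET image
# to matched control images. The paper's model of healthy appearance is richer; this
# only illustrates the general abnormality-map idea.
import numpy as np

rng = np.random.default_rng(0)
controls = rng.normal(loc=1.0, scale=0.05, size=(20, 16, 16, 16))   # 20 matched controls
subject = rng.normal(loc=1.0, scale=0.05, size=(16, 16, 16))
subject[4:8, 4:8, 4:8] -= 0.3                                       # simulated hypometabolism

mu = controls.mean(axis=0)
sigma = controls.std(axis=0) + 1e-6
zmap = (subject - mu) / sigma                                       # abnormality map

hypometabolic = zmap < -2.0
print("number of abnormal voxels:", int(hypometabolic.sum()))
```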
ABSTRACT
Alzheimer's disease (AD) is characterized by progressive alterations seen in brain images, which give rise to the onset of various sets of symptoms. The variability in the dynamics of changes in both brain images and cognitive impairments remains poorly understood. This paper introduces AD Course Map, a spatiotemporal atlas of Alzheimer's disease progression. It summarizes the variability in the progression of a series of neuropsychological assessments, the propagation of hypometabolism and cortical thinning across brain regions, and the deformation of the shape of the hippocampus. The analysis of these variations highlights strong genetic determinants of the progression, as well as possible compensatory mechanisms at play during disease progression. AD Course Map also predicts the patient's cognitive decline with better accuracy than the 56 methods benchmarked in the open challenge TADPOLE. Finally, AD Course Map is used to simulate cohorts of virtual patients developing Alzheimer's disease. AD Course Map therefore offers new tools for exploring the progression of AD and personalizing patient care.
Subjects
Alzheimer Disease, Brain, Aged, Humans, Male, Neuroimaging
ABSTRACT
Diffusion MRI is the modality of choice to study alterations of white matter. In past years, various works have used diffusion MRI for the automatic classification of Alzheimer's disease. However, the classification performance obtained with different approaches is difficult to compare because of variations in components such as input data, participant selection, image preprocessing, feature extraction, feature rescaling (FR), feature selection (FS) and cross-validation (CV) procedures. Moreover, these studies are also difficult to reproduce because these different components are not readily available. In a previous work (Samper-González et al. 2018), we proposed an open-source framework for the reproducible evaluation of AD classification from T1-weighted (T1w) MRI and PET data. In the present paper, we first extend this framework to diffusion MRI data. Specifically, we add: conversion of diffusion MRI ADNI data into the BIDS standard and pipelines for diffusion MRI preprocessing and feature extraction. We then apply the framework to compare different components. First, FS has a positive impact on classification results: the highest balanced accuracy (BA) improved from 0.76 to 0.82 for the task CN vs AD. Second, voxel-wise features generally give better performance than regional features. Fractional anisotropy (FA) and mean diffusivity (MD) provided comparable results for voxel-wise features. Moreover, we observe that the poor performance obtained in tasks involving MCI was potentially caused by the small data samples, rather than by the data imbalance. Furthermore, no substantial difference in classification performance was found across degrees of smoothing or registration methods. In addition, we demonstrate that using non-nested validation of FS leads to unreliable and over-optimistic results: a 5% to 40% relative increase in BA. Lastly, with proper FR and FS, the performance of diffusion MRI features is comparable to that of T1w MRI. All the code of the framework and the experiments is publicly available: general-purpose tools have been integrated into the Clinica software package (www.clinica.run) and the paper-specific code is available at: https://github.com/aramis-lab/AD-ML.
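The non-nested versus nested validation issue can be reproduced in a few lines: with pure-noise features and random labels, selecting features on the whole data set before cross-validation inflates the balanced accuracy, whereas refitting the selection inside each fold does not. The sketch below uses scikit-learn and placeholder data, not the paper's pipeline.

```python
# Sketch contrasting non-nested vs. nested feature selection (pure-noise features and
# random labels, so the honest balanced accuracy should be close to 0.5).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5000))     # many more features than subjects
y = rng.integers(0, 2, size=60)

# Biased: features are selected on the *whole* data set before cross-validation.
X_sel = SelectKBest(f_classif, k=20).fit_transform(X, y)
biased = cross_val_score(SVC(kernel="linear"), X_sel, y, cv=5, scoring="balanced_accuracy")

# Correct: the selection step is refit inside each training fold (nested in the pipeline).
pipe = make_pipeline(SelectKBest(f_classif, k=20), SVC(kernel="linear"))
nested = cross_val_score(pipe, X, y, cv=5, scoring="balanced_accuracy")

print("non-nested (optimistic):", round(biased.mean(), 2))   # typically well above 0.5
print("nested (honest):        ", round(nested.mean(), 2))   # close to 0.5
```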
Subjects
Alzheimer Disease/classification, Alzheimer Disease/diagnostic imaging, Diffusion Magnetic Resonance Imaging/methods, Computer-Assisted Image Interpretation/methods, Machine Learning, Aged, Aged 80 and over, Alzheimer Disease/pathology, Brain/diagnostic imaging, Brain/pathology, Female, Humans, Male
ABSTRACT
We performed a systematic review of studies focusing on the automatic prediction of the progression of mild cognitive impairment to Alzheimer's disease (AD) dementia, and a quantitative analysis of the methodological choices impacting performance. This review included 172 articles, from which 234 experiments were extracted. For each of them, we reported the data set used, the feature types, the algorithm type, the performance and potential methodological issues. The impact of these characteristics on performance was evaluated using multivariate mixed-effects linear regressions. We found that using cognitive, fluorodeoxyglucose-positron emission tomography or potentially electroencephalography and magnetoencephalography variables significantly improved predictive performance compared to not including them, whereas including other modalities, in particular T1 magnetic resonance imaging, did not show a significant effect. The good performance of cognitive assessments questions the wide use of imaging for predicting the progression to AD and advocates for exploring further fine domain-specific cognitive assessments. We also identified several methodological issues, including the absence of a test set, or its use for feature selection or parameter tuning, in nearly a fourth of the papers. Other issues, found in 15% of the studies, cast doubt on the relevance of the method to clinical practice. We also highlight that short-term predictions are likely no better than simply predicting that subjects remain stable over time. These issues highlight the importance of adhering to good practices for the use of machine learning as a decision support system in clinical practice.
Subjects
Alzheimer Disease, Cognitive Dysfunction, Cognitive Dysfunction/diagnostic imaging, Disease Progression, Humans, Machine Learning, Magnetic Resonance Imaging, Positron Emission Tomography
ABSTRACT
We present Clinica (www.clinica.run), an open-source software platform designed to make clinical neuroscience studies easier and more reproducible. Clinica aims to help researchers (i) spend less time on data management and processing, (ii) perform reproducible evaluations of their methods, and (iii) easily share data and results within their institution and with external collaborators. The core of Clinica is a set of automatic pipelines for the processing and analysis of multimodal neuroimaging data (currently, T1-weighted MRI, diffusion MRI, and PET data), as well as tools for statistics, machine learning, and deep learning. It relies on the Brain Imaging Data Structure (BIDS) for the organization of raw neuroimaging datasets and on established tools written by the community to build its pipelines. It also provides converters of public neuroimaging datasets to BIDS (currently ADNI, AIBL, OASIS, and NIFD). Processed data include image-valued scalar fields (e.g., tissue probability maps), meshes, surface-based scalar fields (e.g., cortical thickness maps), or scalar outputs (e.g., regional averages). These data follow the ClinicA Processed Structure (CAPS) format, which shares the same philosophy as BIDS. Consistent organization of raw and processed neuroimaging files facilitates the execution of single pipelines and of sequences of pipelines, as well as the integration of processed data into statistics or machine learning frameworks. The target audience of Clinica is neuroscientists or clinicians conducting clinical neuroscience studies involving multimodal imaging, and researchers developing advanced machine learning algorithms applied to neuroimaging data.
ABSTRACT
We ranked third in the Predictive Analytics Competition (PAC) 2019 challenge by achieving a mean absolute error (MAE) of 3.33 years in predicting age from T1-weighted MRI brain images. Our approach combined seven algorithms that allow generating predictions when the number of features exceeds the number of observations, in particular, two versions of the best linear unbiased predictor (BLUP), support vector machine (SVM), two shallow convolutional neural networks (CNNs), and the well-known ResNet and Inception V1 architectures. Ensemble weights were estimated via linear regression on a hold-out subset of the training sample. We further evaluated and identified factors that could influence prediction accuracy: choice of algorithm, ensemble learning, and the features used as input/MRI image processing. Our prediction error was correlated with age, and the absolute error was greater for older participants, suggesting that the training sample should be enlarged for this subgroup. Our results may guide researchers in building age predictors on healthy individuals, which can be used in research and in the clinic as non-specific predictors of disease status.
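The ensembling step can be sketched as stacking: fit a linear regression that maps the individual models' predictions to the true ages on a hold-out subset, then apply it to the test predictions. Everything below (synthetic ages, stand-in predictors) is a placeholder, not the competition pipeline.

```python
# Sketch of the ensembling idea (synthetic ages and stand-in predictors, not the
# competition pipeline): learn linear stacking weights on a hold-out subset, then
# apply the weighted combination to the test set and report the MAE.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
age_holdout = rng.uniform(20, 90, 300)
age_test = rng.uniform(20, 90, 200)

def fake_predictor(age, bias, noise):        # stand-in for BLUP/SVM/CNN predictions
    return age + bias + rng.normal(0, noise, age.shape)

settings = [(2.0, 5.0), (-1.0, 6.0), (0.0, 4.0)]
preds_holdout = np.column_stack([fake_predictor(age_holdout, b, s) for b, s in settings])
preds_test = np.column_stack([fake_predictor(age_test, b, s) for b, s in settings])

stacker = LinearRegression().fit(preds_holdout, age_holdout)
mae = mean_absolute_error(age_test, stacker.predict(preds_test))
print("ensemble MAE: %.2f years" % mae)
```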