ABSTRACT
Automated quantification of brain tissues on MR images has greatly contributed to the diagnosis and follow-up of neurological pathologies across various life stages. However, existing solutions are specifically designed for certain age ranges, limiting their applicability in monitoring brain development from infancy to late adulthood. This retrospective study aims to develop and validate a brain segmentation model across pediatric and adult populations. First, we trained a deep learning model to segment tissues and brain structures using T1-weighted MR images from 390 patients (age range: 2-81 years) across four different datasets. Subsequently, the model was validated on a cohort of 280 patients from six distinct test datasets (age range: 4-90 years). In the initial experiment, the proposed deep learning-based pipeline, icobrain-dl, demonstrated segmentation accuracy comparable to both pediatric and adult-specific models across diverse age groups. Subsequently, we evaluated intra- and inter-scanner variability in measurements of various tissues and structures in both pediatric and adult populations computed by icobrain-dl. Results demonstrated significantly higher reproducibility compared to similar brain quantification tools, including childmetrix, FastSurfer, and the medical device icobrain v5.9 (p-value < 0.01). Finally, we explored the potential clinical applications of icobrain-dl measurements in diagnosing pediatric patients with Cerebral Visual Impairment and adult patients with Alzheimer's Disease.
Subjects
Brain , Deep Learning , Magnetic Resonance Imaging , Humans , Adult , Brain/diagnostic imaging , Aged , Child , Adolescent , Preschool Child , Aged 80 and over , Middle Aged , Young Adult , Female , Male , Magnetic Resonance Imaging/methods , Retrospective Studies , Image Processing Computer-Assisted/methods , Reproducibility of Results
ABSTRACT
Most data-driven methods are very susceptible to data variability. This problem is particularly apparent when applying Deep Learning (DL) to brain Magnetic Resonance Imaging (MRI), where intensities and contrasts vary due to acquisition protocol, scanner- and center-specific factors. Most publicly available brain MRI datasets originate from the same center and are homogeneous in terms of scanner and used protocol. As such, devising robust methods that generalize to multi-scanner and multi-center data is crucial for transferring these techniques into clinical practice. We propose a novel data augmentation approach based on Gaussian Mixture Models (GMM-DA) with the goal of increasing the variability of a given dataset in terms of intensities and contrasts. The approach allows us to augment the training dataset so that its variability is comparable to what is seen in real-world clinical data, while preserving anatomical information. We compare the performance of a state-of-the-art U-Net model trained for segmenting brain structures with and without the addition of GMM-DA. The models are trained and evaluated on single- and multi-scanner datasets. Additionally, we verify the consistency of test-retest results on same-patient images (same and different scanners). Finally, we investigate how the presence of bias field influences the performance of a model trained with GMM-DA. We found that the addition of GMM-DA improves the generalization capability of the DL model to other scanners not present in the training data, even when the training set is already multi-scanner. Moreover, the consistency between same-patient segmentation predictions is improved, both for same-scanner and different-scanner repetitions. We conclude that GMM-DA could increase the transferability of DL models into clinical scenarios.
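The GMM-DA idea described above can be sketched in a simplified one-dimensional form. The function below is a hypothetical illustration, not the paper's implementation: it fits a Gaussian Mixture Model to a scan's intensity histogram, treats the mixture components as rough tissue classes, and applies a random intensity shift per component, so contrast between tissues changes while voxel positions (anatomy) stay fixed. The function name, the number of components, and the `shift_scale` parameter are assumptions for this sketch.

```python
# Hedged sketch of GMM-based intensity augmentation (GMM-DA), assuming a
# simplified 1D formulation: fit a GMM to the image's intensity values,
# then shift each tissue component's intensities independently to mimic
# scanner/contrast variability. Anatomy is preserved: only intensities change.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_augment(image, n_components=3, shift_scale=0.1, rng=None):
    """Return an intensity-augmented copy of `image` (array of any shape)."""
    rng = np.random.default_rng(rng)
    voxels = image.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(voxels)
    # Soft assignment of each voxel to a mixture component (tissue-class proxy).
    resp = gmm.predict_proba(voxels)               # (n_voxels, n_components)
    # Random intensity shift per component, scaled by that component's std.
    stds = np.sqrt(gmm.covariances_.reshape(-1))
    shifts = rng.normal(0.0, shift_scale, n_components) * stds
    # Responsibility-weighted shift applied to every voxel.
    augmented = voxels[:, 0] + resp @ shifts
    return augmented.reshape(image.shape)

# Usage: augment a toy "image" with two distinct intensity populations.
img = np.concatenate([np.random.normal(50, 5, 500),
                      np.random.normal(150, 5, 500)]).reshape(10, 100)
aug = gmm_augment(img, n_components=2, shift_scale=0.5, rng=0)
```

A real pipeline would also need to handle bias field and clip or rescale intensities to the scanner's range; this sketch only shows the core component-wise remapping.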
ABSTRACT
Brain volumes computed from magnetic resonance images have potential for assisting with the diagnosis of individual dementia patients, provided that they have low measurement error and high reliability. In this paper we describe and validate icobrain dm, an automatic tool that segments brain structures that are relevant for differential diagnosis of dementia, such as the hippocampi and cerebral lobes. Experiments were conducted in comparison to the widely used FreeSurfer software. The hippocampus segmentations were compared against manual segmentations, with significantly higher Dice coefficients obtained with icobrain dm (25-75th quantiles: 0.86-0.88) than with FreeSurfer (25-75th quantiles: 0.80-0.83). Other brain structures were also compared against manual delineations, with icobrain dm showing lower volumetric errors overall. Test-retest experiments show that the precision of all measurements is higher for icobrain dm than for FreeSurfer except for the parietal cortex volume. Finally, when comparing volumes obtained from Alzheimer's disease patients against age-matched healthy controls, all measures achieved high diagnostic performance levels when discriminating patients from cognitively healthy controls, with the temporal cortex volume measured by icobrain dm reaching the highest diagnostic performance level (area under the receiver operating characteristic curve = 0.99) in this dataset.
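The Dice coefficients reported above measure the overlap between automatic and manual segmentations. A minimal sketch of the metric, for binary masks:

```python
# Dice coefficient: twice the overlap of two binary masks divided by the
# total number of labelled voxels in both. 1.0 means perfect agreement.
import numpy as np

def dice(mask_a, mask_b):
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:            # both masks empty: conventionally perfect overlap
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom

# Usage: two toy 1D "segmentations" agreeing on 2 of their labelled voxels.
auto_seg   = np.array([1, 1, 1, 0, 0])
manual_seg = np.array([0, 1, 1, 1, 0])
print(dice(auto_seg, manual_seg))    # → 0.6666666666666666
```

In the study, such masks are 3D hippocampus segmentations and the coefficient is computed per subject, yielding the quantile ranges quoted above.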
Subjects
Alzheimer Disease/diagnostic imaging , Image Interpretation Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Software , Humans
ABSTRACT
In medical image analysis applications, the availability of large amounts of annotated data is becoming increasingly critical. However, annotated medical data is often scarce and costly to obtain. In this paper, we address the problem of synthesizing retinal color images by applying recent techniques based on adversarial learning. In this setting, a generative model is trained to maximize a loss function provided by a second model attempting to classify its output into real or synthetic. In particular, we propose to implement an adversarial autoencoder for the task of retinal vessel network synthesis. We use the generated vessel trees as an intermediate stage for the generation of color retinal images, which is accomplished with a generative adversarial network. Both models require the optimization of almost everywhere differentiable loss functions, which allows us to train them jointly. The resulting model offers an end-to-end retinal image synthesis system capable of generating as many retinal images as the user requires, with their corresponding vessel networks, by sampling from a simple probability distribution that we impose on the associated latent space. We show that the learned latent space contains a well-defined semantic structure, implying that we can perform calculations in the space of retinal images, e.g., smoothly interpolating new data points between two retinal images. Visual and quantitative results demonstrate that the synthesized images are substantially different from those in the training set, while also being anatomically consistent and displaying a reasonable visual quality.
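The latent-space interpolation mentioned above can be sketched generically. The decoder below is a hypothetical stand-in (a fixed linear map), not the paper's trained generator; it only illustrates how intermediate images are obtained by linearly blending two latent codes and decoding each blend.

```python
# Hedged sketch of latent-space interpolation between two images, assuming
# a generic decoder from latent codes to images. `decode` is a placeholder
# for a trained GAN generator; the linear map W is purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
latent_dim, image_pixels = 8, 64
W = rng.normal(size=(image_pixels, latent_dim))   # toy "decoder" weights

def decode(z):
    """Hypothetical decoder: latent code -> flat image."""
    return W @ z

def interpolate(z_a, z_b, steps=5):
    """Decode images along the straight line between two latent codes."""
    alphas = np.linspace(0.0, 1.0, steps)
    return [decode((1 - a) * z_a + a * z_b) for a in alphas]

z1, z2 = rng.normal(size=latent_dim), rng.normal(size=latent_dim)
frames = interpolate(z1, z2, steps=5)
# Endpoints reproduce the two source images; middle frames blend them.
```

The smoothness of such interpolations is what the abstract refers to when it says the latent space has a well-defined semantic structure.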
Subjects
Image Processing Computer-Assisted/methods , Neural Networks Computer , Retinal Vessels/diagnostic imaging , Algorithms , Diagnostic Techniques Ophthalmological , Humans , Retina/diagnostic imaging
ABSTRACT
Twenty patients with TMJ syndrome presenting with pain at the start of treatment were treated with transcutaneous electrical nerve stimulation. An anatomical criterion, based on a previous cadaver study, was used to position the electrodes over cutaneous branches of the trigeminal nerve and the cervical plexus. Complete pain relief was achieved in 70 percent of patients after 3 consecutive one-hour sessions on different days. The remaining 30 percent of patients also experienced analgesia, though to a lesser degree, with no negative side effects.