Results 1 - 20 of 42

1.
J Biomed Inform ; 149: 104567, 2024 01.
Article in English | MEDLINE | ID: mdl-38096945

ABSTRACT

Acute ischemic stroke is a leading cause of mortality and morbidity worldwide. Timely identification of the extent of a stroke is crucial for effective treatment, and spatio-temporal (4D) Computed Tomography Perfusion (CTP) imaging plays a critical role in this process. Recently, the first deep learning-based methods that leverage the full spatio-temporal nature of perfusion imaging to predict stroke lesion outcomes have been proposed. However, clinical information is typically not integrated into the learning process, even though it could improve tissue outcome prediction given the known influence of physiological, demographic, and treatment factors on lesion growth. Cross-attention, a multimodal fusion strategy, has been used successfully to combine information from multiple sources, but it has yet to be applied to stroke lesion outcome prediction. This work therefore aimed to develop and evaluate a novel multimodal, spatio-temporal deep learning model that uses cross-attention to combine information from 4D CTP and clinical metadata to predict stroke lesion outcomes. The proposed model was evaluated on a dataset of 70 acute ischemic stroke patients, demonstrating significantly improved volume estimates (mean error = 19 ml) compared to a baseline unimodal approach (mean error = 35 ml, p < 0.05). The proposed model can generate attention maps and counterfactual outcome scenarios to investigate the relevance of clinical variables in predicting stroke lesion outcomes at the patient level, providing a better understanding of the model's decision-making process.


Subject(s)
Brain Ischemia , Ischemic Stroke , Stroke , Humans , Brain Ischemia/diagnostic imaging , Brain Ischemia/therapy , Four-Dimensional Computed Tomography , Stroke/diagnostic imaging , Stroke/therapy , Spatio-Temporal Analysis , Perfusion
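The cross-attention fusion described in this abstract can be illustrated with a minimal numpy sketch. The token shapes, dimensions, and random projections below are hypothetical, not the authors' architecture: imaging tokens supply the queries while embedded clinical variables supply the keys and values, so each row of the attention map shows how strongly an imaging location attends to each clinical variable.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(img_tokens, clin_tokens, d_k=16, seed=0):
    """Imaging tokens attend to clinical-metadata tokens.

    img_tokens:  (N_img, D)  flattened spatio-temporal imaging features
    clin_tokens: (N_clin, D) embedded clinical variables (hypothetical)
    """
    rng = np.random.default_rng(seed)
    D = img_tokens.shape[1]
    Wq = rng.standard_normal((D, d_k)) / np.sqrt(D)  # query projection
    Wk = rng.standard_normal((D, d_k)) / np.sqrt(D)  # key projection
    Wv = rng.standard_normal((D, d_k)) / np.sqrt(D)  # value projection

    Q = img_tokens @ Wq           # queries from the imaging branch
    K = clin_tokens @ Wk          # keys/values from the clinical branch
    V = clin_tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (N_img, N_clin) attention map
    return attn @ V, attn

img = np.random.default_rng(1).standard_normal((8, 32))
clin = np.random.default_rng(2).standard_normal((4, 32))
fused, attn = cross_attention(img, clin)
```

Inspecting `attn` per patient is, in spirit, how such a model can expose which clinical variables drive its predictions.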
2.
Hum Brain Mapp ; 43(8): 2554-2566, 2022 06 01.
Article in English | MEDLINE | ID: mdl-35138012

ABSTRACT

Biological brain age predicted using machine learning models based on high-resolution imaging data has been suggested as a potential biomarker for neurological and cerebrovascular diseases. In this work, we aimed to develop deep learning models to predict biological brain age using structural magnetic resonance imaging and angiography datasets from a large database of 2074 adults (21-81 years). Since different imaging modalities can provide complementary information, combining them might allow the identification of more complex aging patterns, with angiography data, for instance, showing vascular aging effects complementary to the atrophic brain tissue changes seen in T1-weighted MRI sequences. We used saliency maps to investigate the contribution of cortical, subcortical, and arterial structures to the prediction. Our results show that combining T1-weighted and angiography MR data led to significantly improved brain age prediction accuracy, with a mean absolute error of 3.85 years between predicted and chronological age. The most predictive brain regions included the lateral sulcus, the fourth ventricle, and the amygdala, while the brain arteries contributing the most to the prediction included the basilar artery, the middle cerebral artery M2 segments, and the left posterior cerebral artery. Our study proposes a framework for brain age prediction using multimodal imaging that gives accurate predictions and allows identification of the most predictive regions for this task, which can serve as a surrogate for the brain regions most affected by aging.


Subject(s)
Brain , Magnetic Resonance Imaging , Adult , Aged , Aged, 80 and over , Aging , Angiography , Brain/diagnostic imaging , Brain/pathology , Child, Preschool , Humans , Machine Learning , Magnetic Resonance Angiography , Magnetic Resonance Imaging/methods , Middle Aged , Young Adult
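The accuracy metric reported above, mean absolute error between predicted and chronological age, and the per-subject brain-age gap often used as an aging marker, can be computed directly. The ages below are made up for illustration:

```python
import numpy as np

# Hypothetical predicted vs. chronological ages for five subjects.
predicted = np.array([34.2, 58.9, 71.5, 25.0, 47.3])
chronological = np.array([30.0, 62.0, 68.0, 27.0, 45.0])

mae = np.mean(np.abs(predicted - chronological))  # model accuracy metric
brain_age_gap = predicted - chronological         # per-subject aging marker
```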
3.
Sensors (Basel) ; 21(11)2021 Jun 04.
Article in English | MEDLINE | ID: mdl-34199735

ABSTRACT

Recent research in computer vision has shown that original images used for training deep learning models can be reconstructed using so-called inversion attacks. However, the feasibility of this attack type has not been investigated for complex 3D medical images. Thus, the aim of this study was to examine the vulnerability of deep learning techniques used in medical imaging to model inversion attacks and to investigate multiple quantitative metrics for evaluating the quality of the reconstructed images. For the development and evaluation of model inversion attacks, the public LPBA40 database, consisting of 40 brain MRI scans with corresponding segmentations of the gyri and deep grey matter brain structures, was used to train two popular deep convolutional neural networks, namely a U-Net and a SegNet, and corresponding inversion decoders. Matthews correlation coefficient, the structural similarity index measure (SSIM), and the magnitude of the deformation field resulting from non-linear registration of the original and reconstructed images were used to evaluate the reconstruction accuracy. A comparison of the similarity metrics revealed that the SSIM is best suited to evaluate the reconstruction accuracy, followed closely by the magnitude of the deformation field. The quantitative evaluation of the reconstructed images revealed SSIM scores of 0.73±0.12 and 0.61±0.12 for the U-Net and the SegNet, respectively. The qualitative evaluation showed that training images can be reconstructed with some degradation due to blurring but can be correctly matched to the original images in the majority of cases. In conclusion, the results of this study indicate that it is possible to reconstruct patient data used for training convolutional neural networks and that the SSIM is a good metric to assess the reconstruction accuracy.


Subject(s)
Deep Learning , Humans , Image Processing, Computer-Assisted , Imaging, Three-Dimensional , Magnetic Resonance Imaging , Neural Networks, Computer
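The SSIM scores reported above are typically computed with a sliding window (e.g. scikit-image's `structural_similarity`); a single-window version over the whole image makes the underlying luminance/contrast/structure formula explicit. This is an illustrative sketch, not the study's exact evaluation pipeline:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM over whole images; the standard windowed SSIM
    averages this quantity over local patches."""
    c1 = (0.01 * data_range) ** 2   # stabilizer for the luminance term
    c2 = (0.03 * data_range) ** 2   # stabilizer for the contrast term
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

x = np.random.default_rng(0).random((32, 32))
```

Identical images score 1.0; unrelated images with similar brightness score near 0, which is why SSIM can rank reconstruction quality.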
4.
Sensors (Basel) ; 20(5)2020 Mar 04.
Article in English | MEDLINE | ID: mdl-32143297

ABSTRACT

Deformable image registration remains a challenge when the considered images have strong variations in appearance and large initial misalignment. A large performance gap currently remains for fast-moving regions in videos and strong deformations of natural objects. We present a new semantically guided, two-step deep deformation network that is particularly well suited for estimating large deformations. We combine a U-Net architecture, weakly supervised with segmentation information to extract semantically meaningful features, with multiple stages of nonrigid spatial transformer networks parameterized with low-dimensional B-spline deformations. Combining alignment and semantic loss functions with a regularization penalty to obtain smooth and plausible deformations, we achieve superior alignment quality compared to previous approaches that considered only a label-driven alignment loss. Our network model advances the state of the art for inter-subject face part alignment and motion tracking in medical cardiac magnetic resonance imaging (MRI) sequences compared to FlowNet and Label-Reg, two recent deep-learning registration frameworks. The models are compact, fast at inference, and demonstrate clear potential for a variety of challenging tracking and/or alignment tasks in computer vision and medical image analysis.
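The composite objective sketched above, alignment term plus semantic term plus a smoothness regularizer on the deformation, can be illustrated with a simple numpy version. The second-difference penalty below is a common bending-energy-style regularizer; the paper's exact loss terms and weighting are not reproduced here:

```python
import numpy as np

def bending_energy(disp):
    """Second-difference smoothness penalty on a 2D displacement
    field of shape (H, W, 2); zero for constant or affine fields."""
    dxx = disp[:, 2:] - 2 * disp[:, 1:-1] + disp[:, :-2]
    dyy = disp[2:] - 2 * disp[1:-1] + disp[:-2]
    return (dxx ** 2).mean() + (dyy ** 2).mean()

def registration_loss(alignment, semantic, disp, lam=1.0):
    # total = image alignment term + label-driven semantic term + regularizer
    return alignment + semantic + lam * bending_energy(disp)
```

The regularizer only penalizes curvature of the field, so rigid shifts stay free while implausible folds become expensive.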

5.
Sensors (Basel) ; 20(11)2020 Jun 03.
Article in English | MEDLINE | ID: mdl-32503190

ABSTRACT

3D facial landmarks are known to be diagnostically relevant biometrics for many genetic syndromes. The objective of this study was to extend a state-of-the-art image-based 2D facial landmarking algorithm to the challenging task of 3D landmark identification on subjects with genetic syndromes, who often have moderate to severe facial dysmorphia. The automatic 3D facial landmarking algorithm presented here uses 2D image-based facial detection and landmarking models to identify 12 landmarks on 3D facial surface scans. The landmarking algorithm was evaluated using a test set of 444 facial scans with ground truth landmarks identified by two different human observers. Three hundred sixty-nine of the subjects in the test set had a genetic syndrome associated with facial dysmorphology. For comparison purposes, the manual landmarks were also used to initialize a non-linear surface-based registration of a non-syndromic atlas to each subject scan. Compared to the average intra- and inter-observer landmark distances of 1.1 mm and 1.5 mm, respectively, the average distance between the manual landmark positions and those produced by the automatic image-based landmarking algorithm was 2.5 mm. The average error of the registration-based approach was 3.1 mm. Comparing the distributions of Procrustes distances from the mean for each landmarking approach showed that the surface registration algorithm produces a systematic bias towards the atlas shape. In summary, the image-based automatic landmarking approach performed well on this challenging test set, outperforming a semi-automatic surface registration approach and producing landmark errors comparable to state-of-the-art 3D geometry-based facial landmarking algorithms evaluated on non-syndromic subjects.


Subject(s)
Face , Genetic Diseases, Inborn/diagnostic imaging , Imaging, Three-Dimensional , Algorithms , Face/diagnostic imaging , Humans
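The millimeter figures quoted above (1.1, 1.5, 2.5, 3.1 mm) are mean Euclidean distances over corresponding landmarks. A minimal sketch with two made-up 3D landmarks shows the computation:

```python
import numpy as np

def mean_landmark_error(pred, manual):
    """Mean Euclidean distance over corresponding 3D landmarks (N, 3), in mm."""
    return np.linalg.norm(pred - manual, axis=1).mean()

# Hypothetical manual vs. predicted landmark coordinates (mm).
manual = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
pred = np.array([[0.0, 0.0, 2.5], [10.0, 2.5, 0.0]])
```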
6.
Comput Med Imaging Graph ; 114: 102376, 2024 06.
Article in English | MEDLINE | ID: mdl-38537536

ABSTRACT

Acute ischemic stroke is a critical health condition that requires timely intervention. Following admission, clinicians typically use perfusion imaging to facilitate treatment decision-making. While deep learning models leveraging perfusion data have demonstrated the ability to predict post-treatment tissue infarction for individual patients, predictions are often represented as binary or probabilistic masks that are not straightforward to interpret or easy to obtain. Moreover, these models typically rely on large amounts of subjectively segmented data and non-standard perfusion analysis techniques. To address these challenges, we propose a novel deep learning approach that directly predicts follow-up computed tomography images from full spatio-temporal 4D perfusion scans through a temporal compression. The results show that this method leads to realistic follow-up image predictions containing the infarcted tissue outcomes. The proposed compression method achieves comparable prediction results to using perfusion maps as inputs but without the need for perfusion analysis or arterial input function selection. Additionally, separate models trained on 45 patients treated with thrombolysis and 102 treated with thrombectomy showed that each model correctly captured the different patient-specific treatment effects as shown by image difference maps. The findings of this work clearly highlight the potential of our method to provide interpretable stroke treatment decision support without requiring manual annotations.


Subject(s)
Brain Ischemia , Ischemic Stroke , Stroke , Humans , Ischemic Stroke/diagnostic imaging , Ischemic Stroke/therapy , Four-Dimensional Computed Tomography , Brain Ischemia/diagnostic imaging , Stroke/diagnostic imaging , Stroke/therapy , Perfusion Imaging/methods , Perfusion
7.
NPJ Parkinsons Dis ; 10(1): 43, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38409244

ABSTRACT

Parkinson's disease (PD) is the second most common neurodegenerative disease. Accurate PD diagnosis is crucial for effective treatment and prognosis but can be challenging, especially at early disease stages. This study aimed to develop and evaluate an explainable deep learning model for PD classification from multimodal neuroimaging data. The model was trained using one of the largest collections of T1-weighted and diffusion-tensor magnetic resonance imaging (MRI) datasets. A total of 1264 datasets from eight different studies were collected, including 611 PD patients and 653 healthy controls (HC). These datasets were pre-processed and non-linearly registered to the MNI PD25 atlas. Six imaging maps describing the macro- and micro-structural integrity of brain tissues complemented with age and sex parameters were used to train a convolutional neural network (CNN) to classify PD/HC subjects. Explainability of the model's decision-making was achieved using SmoothGrad saliency maps, highlighting important brain regions. The CNN was trained using a 75%/10%/15% train/validation/test split stratified by diagnosis, sex, age, and study, achieving a ROC-AUC of 0.89, accuracy of 80.8%, specificity of 82.4%, and sensitivity of 79.1% on the test set. Saliency maps revealed that diffusion tensor imaging data, especially fractional anisotropy, was more important for the classification than T1-weighted data, highlighting subcortical regions such as the brainstem, thalamus, amygdala, hippocampus, and cortical areas. The proposed model, trained on a large multimodal MRI database, can classify PD patients and HC subjects with high accuracy and clinically reasonable explanations, suggesting that micro-structural brain changes play an essential role in the disease course.

8.
Article in English | MEDLINE | ID: mdl-38942737

ABSTRACT

OBJECTIVE: Artificial intelligence (AI) models trained using medical images for clinical tasks often exhibit bias in the form of subgroup performance disparities. However, since not all sources of bias in real-world medical imaging data are easily identifiable, it is challenging to comprehensively assess their impacts. In this article, we introduce an analysis framework for systematically and objectively investigating the impact of biases in medical images on AI models. MATERIALS AND METHODS: Our framework utilizes synthetic neuroimages with known disease effects and sources of bias. We evaluated the impact of bias effects and the efficacy of 3 bias mitigation strategies in counterfactual data scenarios on a convolutional neural network (CNN) classifier. RESULTS: The analysis revealed that training a CNN model on the datasets containing bias effects resulted in expected subgroup performance disparities. Moreover, reweighing was the most successful bias mitigation strategy for this setup. Finally, we demonstrated that explainable AI methods can aid in investigating the manifestation of bias in the model using this framework. DISCUSSION: The value of this framework is showcased in our findings on the impact of bias scenarios and efficacy of bias mitigation in a deep learning model pipeline. This systematic analysis can be easily expanded to conduct further controlled in silico trials in other investigations of bias in medical imaging AI. CONCLUSION: Our novel methodology for objectively studying bias in medical imaging AI can help support the development of clinical decision-support tools that are robust and responsible.
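Reweighing, the most successful mitigation strategy in the abstract above, assigns each training sample the weight w(g, y) = P(g)·P(y) / P(g, y), so that group membership and label become statistically independent in the weighted data. A minimal sketch of the standard formulation (Kamiran and Calders' reweighing), with toy group/label lists:

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """w(g, y) = P(g) * P(y) / P(g, y): upweights group/label combinations
    that are under-represented relative to independence."""
    n = len(labels)
    pg = Counter(groups)                 # marginal group counts
    py = Counter(labels)                 # marginal label counts
    pgy = Counter(zip(groups, labels))   # joint counts
    return [pg[g] * py[y] / (n * pgy[(g, y)]) for g, y in zip(groups, labels)]
```

On perfectly balanced data every weight is 1; when one group is over-represented in one class, its samples are down-weighted and the rare combinations up-weighted.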

9.
Front Artif Intell ; 7: 1301997, 2024.
Article in English | MEDLINE | ID: mdl-38384277

ABSTRACT

Distributed learning is a promising alternative to central learning for machine learning (ML) model training, overcoming data-sharing problems in healthcare. Previous studies exploring federated learning (FL) or the traveling model (TM) setup for medical image-based disease classification often relied on large databases with a limited number of centers or simulated artificial centers, raising doubts about real-world applicability. This study develops and evaluates a convolutional neural network (CNN) for Parkinson's disease classification using data acquired by 83 diverse real centers around the world, most of which contributed small training samples. Our approach specifically makes use of the TM setup, which has proven effective in scenarios with limited data availability but had never been used for image-based disease classification. Our findings reveal that TM is effective for training CNN models, even in complex real-world scenarios with variable data distributions. After sufficient training cycles, the TM-trained CNN matches or slightly surpasses the performance of the centrally trained counterpart (AUROC of 83% vs. 80%). Our study highlights, for the first time, the effectiveness of TM in 3D medical image classification, especially in scenarios with limited training samples and heterogeneous distributed data. These insights are relevant for situations where ML models must be trained using data from small or remote medical centers, and for rare diseases with sparse cases. The simplicity of this approach enables broad application to many deep learning tasks, enhancing its clinical utility across various contexts and medical facilities.
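The traveling model idea is that a single model visits each center in turn and is updated on that center's local data, which never leaves the site. A toy numpy sketch with a linear "model" and ten small synthetic centers (all names and sizes hypothetical, standing in for the paper's CNN) shows the training loop:

```python
import numpy as np

def travel_model(centers, cycles=20, lr=0.1):
    """Traveling-model training: one shared weight vector visits each
    center in turn and takes a gradient step on local data only."""
    w = np.zeros(centers[0][0].shape[1])
    for _ in range(cycles):
        for X, y in centers:                      # model travels center to center
            grad = X.T @ (X @ w - y) / len(y)     # local least-squares gradient
            w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
# Many small centers, each holding only a handful of samples.
centers = []
for _ in range(10):
    X = rng.standard_normal((5, 2))
    centers.append((X, X @ true_w))
w = travel_model(centers)
```

Despite no center having enough data to fit the model alone, the traveling weights converge to the shared solution after enough cycles.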

10.
IEEE J Biomed Health Inform ; 28(4): 2047-2054, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38198251

ABSTRACT

Sharing multicenter imaging datasets can be advantageous for increasing data diversity and size but may lead to spurious correlations between site-related biological and non-biological image features and target labels, which machine learning (ML) models may exploit as shortcuts. To date, studies analyzing whether and how deep learning models may use such effects as shortcuts are scarce. Thus, the aim of this work was to investigate whether site-related effects are encoded in the feature space of an established deep learning model designed for Parkinson's disease (PD) classification based on T1-weighted MRI datasets. To this end, all layers of the PD classifier were frozen, except for the last layer of the network, which was replaced by a linear layer exclusively re-trained to predict three potential bias types (biological sex, scanner type, and originating site). Our findings, based on a large database of 1880 MRI scans collected across 41 centers, show that the feature space of the established PD model (74% accuracy) can be used to classify sex (75% accuracy), scanner type (79% accuracy), and site location (71% accuracy) with high accuracy despite this information never being explicitly provided to the PD model during original training. Overall, the results of this study suggest that trained image-based classifiers may use unwanted shortcuts that are not meaningful for the actual clinical task at hand. This finding may explain why many image-based deep learning models do not perform well when applied to data from centers that did not contribute to the training set.


Subject(s)
Parkinson Disease , Humans , Parkinson Disease/diagnostic imaging , Magnetic Resonance Imaging/methods , Machine Learning , Support Vector Machine
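The probing setup described above, freeze the trained network, replace only the last layer, and re-train it to predict a bias variable, amounts to fitting a linear classifier on frozen penultimate-layer features. A self-contained numpy sketch with synthetic "frozen features" carrying a hypothetical scanner signal (the real study's features and labels are not reproduced here):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_probe(features, labels, n_classes, steps=500, lr=0.5):
    """Logistic-regression probe: only this linear head is trained;
    the feature extractor (here, the arrays themselves) stays frozen."""
    n, d = features.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[labels]
    for _ in range(steps):
        P = softmax(features @ W)
        W -= lr * features.T @ (P - Y) / n   # cross-entropy gradient
    return W

rng = np.random.default_rng(0)
# Toy frozen features where a "scanner" signal is linearly separable.
f0 = rng.standard_normal((50, 8)) + np.array([2.0] + [0.0] * 7)
f1 = rng.standard_normal((50, 8)) - np.array([2.0] + [0.0] * 7)
X = np.vstack([f0, f1])
y = np.array([0] * 50 + [1] * 50)
W = train_probe(X, y, 2)
acc = ((X @ W).argmax(axis=1) == y).mean()
```

High probe accuracy means the bias variable is linearly decodable from the frozen features, the paper's evidence that site information is encoded.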
11.
Heliyon ; 9(11): e21567, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38027770

ABSTRACT

Although gray matter atrophy is commonly observed with aging, it is highly variable, even among healthy people of the same age. This raises the question of what other factors may contribute to gray matter atrophy. Previous studies have reported that risk factors for cardiometabolic diseases are associated with accelerated brain aging. However, these studies were primarily based on standard correlation analyses, which do not unveil causal relationships. While randomized controlled trials are typically required to investigate true causality, in this work we investigated an alternative: applying data-driven causal discovery and inference techniques to observational data. Accordingly, this feasibility study used clinical and quantified gray matter volume data from 22,793 subjects from the UK Biobank cohort without any known neurological disease. Our method identified that age, sex, body mass index (BMI), body fat percentage (BFP), and smoking exhibit a causal relationship with gray matter volume. Interventions on the causal network revealed that higher BMI and BFP values significantly increased the chance of gray matter atrophy in males, whereas this was not the case in females.

12.
J Am Med Inform Assoc ; 30(12): 1925-1933, 2023 11 17.
Article in English | MEDLINE | ID: mdl-37669158

ABSTRACT

OBJECTIVE: This work investigates if deep learning (DL) models can classify originating site locations directly from magnetic resonance imaging (MRI) scans with and without correction for intensity differences. MATERIAL AND METHODS: A large database of 1880 T1-weighted MRI scans collected across 41 sites originally for Parkinson's disease (PD) classification was used to classify sites in this study. Forty-six percent of the datasets are from PD patients, while 54% are from healthy participants. After preprocessing the T1-weighted scans, 2 additional data types were generated: intensity-harmonized T1-weighted scans and log-Jacobian deformation maps resulting from nonlinear atlas registration. Corresponding DL models were trained to classify sites for each data type. Additionally, logistic regression models were used to investigate the contribution of biological (age, sex, disease status) and non-biological (scanner type) variables to the models' decision. RESULTS: A comparison of the 3 different types of data revealed that DL models trained using T1-weighted and intensity-harmonized T1-weighted scans can classify sites with an accuracy of 85%, while the model using log-Jacobian deformation maps achieved a site classification accuracy of 54%. Disease status and scanner type were found to be significant confounders. DISCUSSION: Our results demonstrate that MRI scans encode relevant site-specific information that models could use as shortcuts that cannot be removed using simple intensity harmonization methods. CONCLUSION: The ability of DL models to exploit site-specific biases as shortcuts raises concerns about their reliability, generalization, and deployability in clinical settings.


Subject(s)
Brain , Deep Learning , Humans , Brain/diagnostic imaging , Brain/pathology , Reproducibility of Results , Magnetic Resonance Imaging/methods , Neuroimaging
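The log-Jacobian deformation maps mentioned above are the voxel-wise log-determinants of the Jacobian of the registration mapping: values of 0 mean no local volume change, positive values local expansion, negative values compression. A minimal 2D numpy sketch, with a toy mapping rather than a real atlas registration:

```python
import numpy as np

def log_jacobian_2d(phi):
    """Voxel-wise log |J| of a 2D mapping phi of shape (H, W, 2),
    where phi[i, j] gives the mapped (row, col) coordinates."""
    dphi_dy = np.gradient(phi, axis=0)   # partials w.r.t. row index
    dphi_dx = np.gradient(phi, axis=1)   # partials w.r.t. column index
    det = dphi_dy[..., 0] * dphi_dx[..., 1] - dphi_dy[..., 1] * dphi_dx[..., 0]
    return np.log(det)

H, W = 16, 16
ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
identity = np.stack([ys, xs], axis=-1).astype(float)  # identity mapping
```

The identity mapping yields a log-Jacobian of 0 everywhere; uniform doubling of all coordinates yields log 4 (area scaled by 4), matching the volume-change interpretation.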
13.
Neuroinformatics ; 21(1): 45-55, 2023 01.
Article in English | MEDLINE | ID: mdl-36083416

ABSTRACT

Although current research aims to improve deep learning networks by applying knowledge about the healthy human brain and vice versa, the potential of using such networks to model and study neurodegenerative diseases remains largely unexplored. In this work, we present an in-depth feasibility study modeling progressive dementia in silico with deep convolutional neural networks. To this end, networks were trained to perform visual object recognition and then progressively injured by applying neuronal as well as synaptic injury. After each iteration of injury, network object recognition accuracy, saliency map similarity between the intact and injured networks, and internal activations of the degenerating models were evaluated. The evaluation revealed that cognitive function of the network progressively decreased with increasing injury load, with this effect being much more pronounced for synaptic damage. The effects of neurodegeneration found for the in silico model are especially similar to the loss of visual cognition seen in patients with posterior cortical atrophy.


Subject(s)
Deep Learning , Dementia , Humans , Neural Networks, Computer , Brain/diagnostic imaging , Computer Simulation
14.
Int J Comput Assist Radiol Surg ; 18(5): 827-836, 2023 May.
Article in English | MEDLINE | ID: mdl-36607506

ABSTRACT

PURPOSE: Multiple medical imaging modalities are used for clinical follow-up ischemic stroke analysis. Mixed-modality datasets are challenging, both for clinical rating purposes and for training machine learning models. While image-to-image translation methods have been applied to harmonize stroke patient images to a single modality, they have only been used for paired data so far. In the more common unpaired scenario, the standard cycle-consistent generative adversarial network (CycleGAN) method is not able to translate the stroke lesions properly. Thus, the aim of this work was to develop and evaluate a novel image-to-image translation regularization approach for unpaired 3D follow-up stroke patient datasets. METHODS: A modified CycleGAN was used to translate images between 238 non-contrast computed tomography (NCCT) and 244 fluid-attenuated inversion recovery (FLAIR) MRI datasets, two of the most relevant follow-up modalities in clinical practice. We introduced an additional attention-guided mechanism to encourage an improved translation of the lesion and a gradient-consistency loss to preserve structural brain morphology. RESULTS: The proposed modifications were able to preserve the overall quality provided by the CycleGAN translation. This was confirmed by the FID score and gradient correlation results. Furthermore, the lesion preservation was significantly improved compared to a standard CycleGAN. This was evaluated for location and volume with segmentation models, which were trained on real datasets and applied to the translated test images. Here, the Dice score coefficient resulted in 0.81 and 0.62 for datasets translated to FLAIR and NCCT, respectively, compared to 0.57 and 0.50 for the corresponding datasets translated using a standard CycleGAN. Finally, an analysis of the distribution of mean lesion intensities showed substantial improvements. 
CONCLUSION: The results of this work show that the proposed image-to-image translation method is effective at preserving stroke lesions in unpaired modality translation, supporting its potential as a tool for stroke image analysis in real-life scenarios.


Subject(s)
Deep Learning , Ischemic Stroke , Humans , Magnetic Resonance Imaging/methods , Tomography, X-Ray Computed/methods , Image Processing, Computer-Assisted/methods
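The gradient-consistency idea used above, preserving structural brain morphology by keeping the translated image's spatial gradients correlated with the source's, can be sketched as a normalized cross-correlation of image gradients. This is an illustrative stand-in, not the paper's exact loss:

```python
import numpy as np

def gradient_correlation(a, b):
    """Mean normalized correlation of spatial gradients between two 2D
    images; 1 for identical structure, -1 for inverted structure."""
    corrs = []
    for axis in (0, 1):
        ga, gb = np.gradient(a, axis=axis), np.gradient(b, axis=axis)
        ga, gb = ga - ga.mean(), gb - gb.mean()
        corrs.append((ga * gb).sum() /
                     (np.linalg.norm(ga) * np.linalg.norm(gb) + 1e-8))
    return float(np.mean(corrs))

a = np.random.default_rng(0).random((16, 16))
```

A loss such as `1 - gradient_correlation(source, translated)` penalizes the generator for altering edge structure even when intensities legitimately change between modalities.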
15.
Neuroimage Clin ; 38: 103405, 2023.
Article in English | MEDLINE | ID: mdl-37079936

ABSTRACT

INTRODUCTION: Parkinson's disease (PD) is a severe neurodegenerative disease that affects millions of people. Early diagnosis is important to facilitate prompt interventions to slow down disease progression. However, accurate PD diagnosis can be challenging, especially in the early disease stages. The aim of this work was to develop and evaluate a robust explainable deep learning model for PD classification trained on one of the largest collections of T1-weighted magnetic resonance imaging datasets. MATERIALS AND METHODS: A total of 2,041 T1-weighted MRI datasets from 13 different studies were collected, including 1,024 datasets from PD patients and 1,017 datasets from age- and sex-matched healthy controls (HC). The datasets were skull stripped, resampled to isotropic resolution, bias field corrected, and non-linearly registered to the MNI PD25 atlas. The Jacobian maps derived from the deformation fields, together with basic clinical parameters, were used to train a state-of-the-art convolutional neural network (CNN) to classify PD and HC subjects. Saliency maps were generated to display the brain regions contributing the most to the classification task as a means of explainable artificial intelligence. RESULTS: The CNN model was trained using an 85%/5%/10% train/validation/test split stratified by diagnosis, sex, and study. The model achieved an accuracy of 79.3%, precision of 80.2%, specificity of 81.3%, sensitivity of 77.7%, and AUC-ROC of 0.87 on the test set while performing similarly on an independent test set. Saliency maps computed for the test set data highlighted frontotemporal regions, the orbital-frontal cortex, and multiple deep gray matter structures as most important. CONCLUSION: The developed CNN model, trained on a large heterogeneous database, was able to differentiate PD patients from HC subjects with high accuracy and clinically feasible classification explanations. Future research should aim to investigate the combination of multiple imaging modalities with deep learning and to validate these results in a prospective trial as a clinical decision support system.


Subject(s)
Deep Learning , Neurodegenerative Diseases , Parkinson Disease , Humans , Artificial Intelligence , Magnetic Resonance Imaging/methods , Parkinson Disease/pathology , Prospective Studies , Male , Female
16.
Front Comput Neurosci ; 17: 1274824, 2023.
Article in English | MEDLINE | ID: mdl-38105786

ABSTRACT

The aim of this work was to enhance the biological feasibility of a deep convolutional neural network-based in silico model of neurodegeneration of the visual system by equipping it with a mechanism to simulate neuroplasticity. To this end, deep convolutional networks of multiple sizes were trained for object recognition tasks and progressively lesioned to simulate neurodegeneration of the visual cortex. More specifically, the injured parts of the network remained injured while we investigated how added retraining steps were able to recover some of the model's baseline object recognition performance. The results showed that, with retraining, the model's object recognition abilities decline more smoothly and gradually with increasing injury levels than without retraining, and are therefore more similar to the longitudinal cognitive impairments of patients diagnosed with Alzheimer's disease (AD). Moreover, with retraining, the injured model exhibits internal activation patterns closer to those of the healthy baseline model than the injured model without retraining does. Furthermore, we conducted this analysis on a network that had been extensively pruned, resulting in an optimized number of parameters or synapses. Our findings show that this network exhibited a remarkably similar capability to recover task performance with decreasingly viable pathways through the network. In conclusion, adding a retraining step to the in silico setup that simulates neuroplasticity improves the model's biological feasibility considerably and could prove valuable for testing different rehabilitation approaches in silico.

17.
Eur J Hum Genet ; 31(9): 1010-1016, 2023 09.
Article in English | MEDLINE | ID: mdl-36750664

ABSTRACT

Human genetic syndromes are often challenging to diagnose clinically. Facial phenotype is a key diagnostic indicator for hundreds of genetic syndromes and computer-assisted facial phenotyping is a promising approach to assist diagnosis. Most previous approaches to automated face-based syndrome diagnosis have analyzed different datasets of either 2D images or surface mesh-based 3D facial representations, making direct comparisons of performance challenging. In this work, we developed a set of subject-matched 2D and 3D facial representations, which we then analyzed with the aim of comparing the performance of 2D and 3D image-based approaches to computer-assisted syndrome diagnosis. This work represents the most comprehensive subject-matched analyses to date on this topic. In our analyses of 1907 subject faces representing 43 different genetic syndromes, 3D surface-based syndrome classification models significantly outperformed 2D image-based models trained and evaluated on the same subject faces. These results suggest that the clinical adoption of 3D facial scanning technology and continued collection of syndromic 3D facial scan data may substantially improve face-based syndrome diagnosis.


Subject(s)
Face; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Syndrome; Imaging, Three-Dimensional/methods
18.
IEEE Trans Biomed Eng ; 69(9): 2947-2957, 2022 09.
Article in English | MEDLINE | ID: mdl-35271438

ABSTRACT

OBJECTIVE: Statistical shape models have been successfully used in numerous biomedical image analysis applications where prior shape information is helpful, such as organ segmentation or data augmentation when training deep learning models. However, training such models requires large data sets, which are often not available and, hence, shape models frequently fail to represent local details of unseen shapes. This work introduces a kernel-based method to alleviate this problem via so-called model localization. It is specifically designed to be used in large-scale shape modeling scenarios like deep learning data augmentation and fits seamlessly into the classical shape modeling framework. METHOD: Relying on recent advances in multi-level shape model localization via distance-based covariance matrix manipulations and Grassmannian-based level fusion, this work proposes a novel and computationally efficient kernel-based localization technique. Moreover, a novel way to improve the specificity of such models via normalizing flow-based density estimation is presented. RESULTS: The method is evaluated on the publicly available JSRT/SCR chest X-ray and IXI brain data sets. The results confirm the effectiveness of the kernelized formulation and also highlight the models' improved specificity when utilizing the proposed density estimation method. CONCLUSION: This work shows that flexible and specific shape models can be generated from few training samples in a computationally efficient way by combining ideas from kernel theory and normalizing flows. SIGNIFICANCE: The proposed method, together with its publicly available implementation, allows shape models to be built from few training samples that are directly usable for applications like data augmentation.
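The distance-based covariance manipulation mentioned in the METHOD section can be sketched in its most basic form: taper the sample covariance of landmark coordinates with a Gaussian kernel of inter-landmark distance, so that distant landmarks are decorrelated. This is only the core localization idea, not the paper's full multi-level, Grassmannian-fused method; the 1D landmark layout and kernel width are illustrative assumptions.

```python
import numpy as np

# Distance-based covariance localization: the elementwise (Schur) product of
# two positive semi-definite matrices is positive semi-definite, so tapering
# the sample covariance with a Gaussian distance kernel yields a valid,
# localized covariance even from very few training shapes.
rng = np.random.default_rng(1)

n_landmarks, n_samples = 30, 8          # few samples, as in the small-data setting
positions = np.linspace(0.0, 1.0, n_landmarks)[:, None]   # 1D landmark layout
shapes = rng.normal(size=(n_samples, n_landmarks))

cov = np.cov(shapes, rowvar=False)      # rank-deficient sample covariance

d = np.abs(positions - positions.T)     # pairwise landmark distances
sigma = 0.15                            # kernel width (illustrative choice)
kernel = np.exp(-0.5 * (d / sigma) ** 2)

cov_local = cov * kernel                # Schur product = localized covariance
eigvals = np.linalg.eigvalsh(cov_local)
```

The tapered matrix keeps each landmark's variance intact while suppressing spurious long-range correlations that the few samples cannot support, which typically raises the model's effective rank.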


Subject(s)
Algorithms; Models, Statistical; Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods; Radiography
19.
Int J Comput Assist Radiol Surg ; 17(7): 1213-1224, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35128605

ABSTRACT

PURPOSE: This work aims at a systematic comparison of popular shape and appearance models. Two statistical and four deep-learning-based shape and appearance models are compared and evaluated in terms of their expressiveness, described by their generalization ability and specificity, as well as further properties like input data format, interpretability, and latent space distribution and dimension. METHODS: Classical shape models and their locality-based extension are considered alongside autoencoders, variational autoencoders, diffeomorphic autoencoders and generative adversarial networks. The approaches are evaluated in terms of generalization ability, specificity and likeness depending on the amount of training data. Furthermore, various latent space metrics are presented in order to capture further major characteristics of the models. RESULTS: The experiments showed that locality-based statistical shape models yield the best results in terms of generalization ability for 2D and 3D shape modeling. However, the deep learning approaches show strongly improved specificity. In the case of simultaneous shape and appearance modeling, the neural networks are able to generate more realistic and diverse appearances. A major drawback of the deep-learning models, however, is their impaired interpretability and the ambiguity of the latent space. CONCLUSIONS: It can be concluded that for applications not requiring particularly good specificity, shape modeling can be reliably established with locality-based statistical shape models, especially for 3D shapes. However, deep learning approaches are more worthwhile for appearance modeling.
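The two expressiveness metrics used throughout this comparison can be made concrete with a minimal sketch for a plain PCA shape model: generalization as the reconstruction error of unseen shapes, and specificity as the distance of random model samples to the nearest training shape. The synthetic data and metric variants here are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

# Generalization and specificity for a PCA shape model on synthetic data.
rng = np.random.default_rng(2)

train = rng.normal(size=(40, 20))       # 40 training "shapes", 20 coordinates
test = rng.normal(size=(10, 20))        # held-out shapes

mean = train.mean(axis=0)
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)

def generalization(k):
    """Mean reconstruction error of held-out shapes using k principal modes."""
    B = Vt[:k]                                   # orthonormal mode matrix
    recon = mean + (test - mean) @ B.T @ B       # project onto model subspace
    return float(np.mean(np.linalg.norm(test - recon, axis=1)))

def specificity(k, n_draws=200):
    """Mean distance of random model samples to their closest training shape."""
    coeff = rng.normal(size=(n_draws, k)) * (S[:k] / np.sqrt(len(train) - 1))
    samples = mean + coeff @ Vt[:k]
    dists = np.linalg.norm(samples[:, None, :] - train[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())

gen = [generalization(k) for k in (1, 5, 15)]
spec = specificity(5)
```

Generalization error is non-increasing in the number of modes, while specificity tends to worsen (grow) as the model gains freedom to produce shapes unlike any training example, which is the trade-off the RESULTS section describes.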


Subject(s)
Models, Statistical; Neural Networks, Computer; Humans
20.
Med Image Anal ; 82: 102610, 2022 11.
Article in English | MEDLINE | ID: mdl-36103772

ABSTRACT

For the diagnosis and precise treatment of acute ischemic stroke, predicting the final location and volume of lesions is of great clinical interest. Current deep learning-based prediction methods mainly use perfusion parameter maps, which can be calculated from spatio-temporal (4D) CT perfusion (CTP) imaging data, to estimate the tissue outcome of an acute ischemic stroke. However, this calculation relies on a deconvolution operation, an ill-posed problem requiring strong regularization and the definition of an arterial input function. Thus, improved predictions might be achievable if the deep learning models were applied directly to acute 4D CTP data rather than perfusion maps. In this work, a novel deep spatio-temporal convolutional neural network is proposed for predicting treatment-dependent stroke lesion outcomes by making full use of raw 4D CTP data. By merging a U-Net-like architecture with temporal convolutional networks, we efficiently process the spatio-temporal information available in CTP datasets to make a tissue outcome prediction. The proposed method was evaluated on 147 patients using 10-fold cross-validation, which demonstrated that the proposed 3D+time model (mean Dice=0.45) significantly outperforms both a 2D+time variant of our approach (mean Dice=0.43) and a state-of-the-art method that uses perfusion maps (mean Dice=0.38). These results show that 4D CTP datasets include more predictive information than perfusion parameter maps, and that the proposed method is an efficient approach to make use of this complex data.
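The core idea of processing raw 4D CTP with temporal convolutions, rather than first collapsing time into perfusion maps via deconvolution, can be sketched as a causal 1D convolution applied along the time axis of every voxel's attenuation curve. This toy numpy version only shows the temporal-filtering step; the tensor sizes and single shared filter are illustrative assumptions, not the paper's U-Net-based architecture.

```python
import numpy as np

# Voxelwise causal temporal convolution over a toy 4D CTP volume.
rng = np.random.default_rng(3)

T, D, H, W = 16, 4, 8, 8                 # time frames and volume size
ctp = rng.normal(size=(T, D, H, W))      # toy 4D CTP acquisition
weights = rng.normal(size=5)             # one 5-tap temporal filter

def temporal_conv(x, w):
    """Causal 1D convolution along the time axis, shared across all voxels."""
    k = len(w)
    pad = np.concatenate([np.zeros((k - 1,) + x.shape[1:]), x], axis=0)
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        # out[t] depends only on frames t-k+1 .. t (causality)
        window = pad[t:t + k]
        out[t] = np.tensordot(w[::-1], window, axes=(0, 0))
    return out

features = temporal_conv(ctp, weights)   # same 4D shape, temporally filtered
```

Stacking several such filters with nonlinearities (the temporal convolutional network part) lets the model learn its own time-curve features end to end, avoiding the ill-posed deconvolution and arterial input function entirely.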


Subject(s)
Brain Ischemia; Ischemic Stroke; Stroke; Humans; Brain Ischemia/diagnostic imaging; Four-Dimensional Computed Tomography; Neural Networks, Computer; Perfusion Imaging/methods; Stroke/diagnostic imaging