Results 1 - 4 of 4
1.
IEEE Trans Pattern Anal Mach Intell; 45(7): 8081-8093, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37018678

ABSTRACT

Natural science datasets frequently violate assumptions of independence. Samples may be clustered (e.g., by study site, subject, or experimental batch), leading to spurious associations, poor model fitting, and confounded analyses. While largely unaddressed in deep learning, this problem has been handled in the statistics community through mixed effects models, which separate cluster-invariant fixed effects from cluster-specific random effects. We propose a general-purpose framework for Adversarially-Regularized Mixed Effects Deep learning (ARMED) models through non-intrusive additions to existing neural networks: 1) an adversarial classifier constraining the original model to learn only cluster-invariant features, 2) a random effects subnetwork capturing cluster-specific features, and 3) an approach to apply random effects to clusters unseen during training. We apply ARMED to dense, convolutional, and autoencoder neural networks on 4 datasets including simulated nonlinear data, dementia prognosis and diagnosis, and live-cell image analysis. Compared to prior techniques, ARMED models better distinguish confounded from true associations in simulations and learn more biologically plausible features in clinical applications. They can also quantify inter-cluster variance and visualize cluster effects in data. Finally, ARMED matches or improves performance on data from clusters seen during training (5-28% relative improvement) and generalization to unseen clusters (2-9% relative improvement) versus conventional models.
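The ARMED design lends itself to a compact illustration. Below is a minimal, hypothetical sketch (not the authors' released code) of the two non-intrusive additions wrapped around a base feed-forward network: an adversarial cluster classifier fed through a gradient-reversal layer, and a random-effects subnetwork simplified here to a learned per-cluster intercept. The class name ARMEDClassifier, the layer sizes, and the lambd weight are illustrative assumptions.

```python
# Hypothetical sketch of an ARMED-style model (simplified; not the authors' code).
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips and scales gradients on the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class ARMEDClassifier(nn.Module):
    def __init__(self, n_features, n_clusters, hidden=64, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        # Fixed-effects path: learns cluster-invariant features.
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.fixed_head = nn.Linear(hidden, 1)
        # Adversary tries to predict the cluster from the shared features.
        self.adversary = nn.Linear(hidden, n_clusters)
        # Random-effects subnetwork, reduced here to a per-cluster intercept
        # learned from a one-hot cluster ID (a simplification of the paper's design).
        self.random_effects = nn.Linear(n_clusters, 1, bias=False)

    def forward(self, x, cluster_onehot):
        z = self.encoder(x)
        y_fixed = self.fixed_head(z)                               # cluster-invariant prediction
        y_mixed = y_fixed + self.random_effects(cluster_onehot)    # add cluster-specific effect
        cluster_logits = self.adversary(GradReverse.apply(z, self.lambd))
        return y_mixed, y_fixed, cluster_logits
```

In training, a task loss on y_mixed would be combined with a cross-entropy loss on cluster_logits; because of the gradient reversal, minimizing the adversary's loss pushes the encoder toward cluster-invariant features. At inference on clusters unseen during training, y_fixed supplies the cluster-invariant prediction (the paper additionally describes applying random effects to unseen clusters, which this sketch omits).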

2.
Clin Ophthalmol; 16: 2685-2697, 2022.
Article in English | MEDLINE | ID: mdl-36003072

ABSTRACT

Purpose: To establish optical coherence tomography (OCT)/angiography (OCTA) parameter ranges for healthy eyes (HE) and glaucomatous eyes (GE) in a North Texas-based population; to develop a machine learning (ML) tool; and to identify the most accurate diagnostic parameters for clinical glaucoma diagnosis.
Patients and Methods: In this retrospective cross-sectional study, we included 1371 eligible eyes, 462 HE and 909 GE (377 ocular hypertension, 160 mild, 156 moderate, 216 severe), from 735 subjects. Demographic data and full OCTA parameters were collected. A Kruskal-Wallis test was used to produce the normative database. Models were trained to solve a two-class problem (HE vs GE) and a four-class problem (HE vs mild vs moderate vs severe GE). A rigorous nested, stratified, grouped, 5×10-fold cross-validation strategy was applied to partition the data. Six ML algorithms were compared using classical and deep learning approaches. Over 2500 ML models were optimized using random search, with performance compared using mean validation accuracy. Final performance was reported on held-out test data using accuracy and F1 score. Decision trees and feature importances were produced for the final model.
Results: We found differences across glaucoma severities in age, gender, hypertension, Black and Asian race, and all OCTA parameters except foveal avascular zone area and perimeter (p<0.05). The XGBoost algorithm achieved the highest test performance for both the two-class (F1 score 83.8%; accuracy 83.9%; standard deviation 0.03%) and four-class (F1 score 62.4%; accuracy 71.3%; standard deviation 0.013%) problems. A set of interpretable decision trees provided the most important predictors of the final model; inferior temporal and inferior hemisphere vessel density and peripapillary retinal nerve fiber layer thickness were identified as key diagnostic parameters.
Conclusion: This study established a normative database for our North Texas-based population and created ML tools utilizing OCT/OCTA that may aid clinicians in glaucoma management.
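For readers curious about the validation design, here is a minimal sketch (not the study's code) of a nested, stratified, grouped cross-validation over an XGBoost classifier for the two-class problem. The arrays X (OCTA parameters), y (healthy vs glaucoma labels), and groups (subject IDs, so both eyes of a subject stay in the same fold), as well as the hyperparameter grid, are hypothetical.

```python
# Hypothetical sketch of nested, stratified, grouped cross-validation with XGBoost.
# X, y, groups are assumed to be NumPy arrays: features, binary labels, subject IDs.
import numpy as np
from sklearn.model_selection import StratifiedGroupKFold, RandomizedSearchCV
from sklearn.metrics import accuracy_score, f1_score
from xgboost import XGBClassifier


def nested_cv(X, y, groups, n_outer=5, n_inner=10, n_iter=50, seed=0):
    outer = StratifiedGroupKFold(n_splits=n_outer, shuffle=True, random_state=seed)
    param_dist = {                       # illustrative search space, not the paper's
        "n_estimators": [100, 300, 500],
        "max_depth": [3, 4, 6, 8],
        "learning_rate": [0.01, 0.05, 0.1, 0.3],
        "subsample": [0.6, 0.8, 1.0],
    }
    scores = []
    for train_idx, test_idx in outer.split(X, y, groups):
        inner = StratifiedGroupKFold(n_splits=n_inner, shuffle=True, random_state=seed)
        search = RandomizedSearchCV(
            XGBClassifier(eval_metric="logloss"),
            param_dist, n_iter=n_iter, cv=inner,
            scoring="accuracy", random_state=seed,
        )
        # groups must be passed so the inner splitter also respects subject IDs
        search.fit(X[train_idx], y[train_idx], groups=groups[train_idx])
        pred = search.predict(X[test_idx])
        scores.append((accuracy_score(y[test_idx], pred),
                       f1_score(y[test_idx], pred)))
    return np.mean(scores, axis=0)       # mean (accuracy, F1) across outer folds
```

The outer loop yields the held-out estimate of accuracy and F1, while the inner RandomizedSearchCV mirrors the random-search hyperparameter optimization; 5 outer and 10 inner folds follow the 5×10 scheme described above.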

3.
Nat Commun; 13(1): 3328, 2022 06 09.
Article in English | MEDLINE | ID: mdl-35680911

ABSTRACT

Gene expression covaries with brain activity as measured by resting-state functional magnetic resonance imaging (fMRI). However, it is unclear how genomic differences driven by disease state can affect this relationship. Here, we integrate datasets of gene expression in brains of neurotypical individuals and individuals with autism spectrum disorder (ASD) with regionally matched brain activity measurements from the ABIDE I and II fMRI cohorts. We identify genes linked with brain activity whose association is disrupted in ASD, including a subset of genes that show a differential developmental trajectory in individuals with ASD compared with controls. These genes are enriched in voltage-gated ion channels and inhibitory neurons, pointing to an excitation-inhibition imbalance in ASD. We further assess differences at the regional level, showing that the primary visual cortex is the most affected region in ASD. Our results link disrupted brain expression patterns in individuals with ASD to brain activity and show developmental, cell-type, and regional enrichment of activity-linked genes.
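As a rough, hypothetical illustration of the kind of integration described (not the authors' pipeline), one can correlate each gene's regional expression profile with regionally matched activity separately in controls and in ASD, and flag genes whose expression-activity association shifts. The inputs expr, activity_ctrl, activity_asd and the delta threshold below are assumed, not taken from the paper.

```python
# Hypothetical illustration: flag genes whose expression-activity correlation
# differs between control and ASD groups across matched brain regions.
import numpy as np
from scipy.stats import spearmanr


def activity_linked_genes(expr, activity_ctrl, activity_asd, delta=0.3):
    """expr: (genes x regions) expression matrix; activity_*: region-wise
    activity vectors (e.g., mean regional fMRI activity) for each group.
    Returns indices of genes whose correlation shifts by more than delta."""
    diffs = []
    for g in range(expr.shape[0]):
        r_ctrl, _ = spearmanr(expr[g], activity_ctrl)   # gene-activity link in controls
        r_asd, _ = spearmanr(expr[g], activity_asd)     # same link in ASD
        diffs.append(r_ctrl - r_asd)
    diffs = np.asarray(diffs)
    return np.where(np.abs(diffs) > delta)[0], diffs
```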


Subject(s)
Autism Spectrum Disorder , Autism Spectrum Disorder/diagnostic imaging , Autism Spectrum Disorder/genetics , Brain/diagnostic imaging , Brain Mapping/methods , Gene Expression , Humans , Magnetic Resonance Imaging/methods , Neural Pathways
4.
Neuroimage; 241: 118402, 2021 11 01.
Article in English | MEDLINE | ID: mdl-34274419

ABSTRACT

Magnetoencephalography (MEG) is a functional neuroimaging tool that records the magnetic fields induced by neuronal activity; however, signal from non-neuronal sources can corrupt the data. Eye-blinks, saccades, and cardiac activity are three of the most common sources of non-neuronal artifacts. They can be measured by affixing eye-proximal electrodes, as in electrooculography (EOG), and chest electrodes, as in electrocardiography (ECG); however, this complicates imaging setup, decreases patient comfort, and can induce further artifacts from movement. This work proposes an EOG- and ECG-free approach to identifying eye-blink, saccade, and cardiac activity signals for automated artifact suppression. The contribution of this work is threefold. First, using a data-driven, multivariate decomposition approach based on Independent Component Analysis (ICA), a highly accurate artifact classifier is constructed as an amalgam of deep 1-D and 2-D Convolutional Neural Networks (CNNs) to automate the identification and removal of ubiquitous whole-brain artifacts, including eye-blink, saccade, and cardiac artifacts. The specific architecture of this network is optimized through an unbiased, computer-based hyperparameter random search. Second, visualization methods are applied to the learned abstractions to reveal which features the model uses and to bolster user confidence in the model's training and potential for generalization. Finally, the model is trained and tested on both resting-state and task MEG data from 217 subjects and achieves a new state of the art in artifact detection accuracy of 98.95%, including 96.74% sensitivity and 99.34% specificity on the held-out test set. This work automates MEG processing for both clinical and research use, adapts to the acquisition length, and can obviate the need for EOG or ECG electrodes for artifact detection.
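A minimal sketch of the pipeline's shape, under stated assumptions (MNE-Python for the ICA decomposition, PyTorch for the classifier; this is not the authors' released model and keeps only a 1-D branch over component time courses, omitting the 2-D topography CNN). The untrained ICArtifactCNN below stands in for a classifier that would first need to be trained on labeled independent components.

```python
# Hypothetical sketch: ICA-decompose a Raw MEG recording, score each component's
# time course with a small 1-D CNN, and remove components flagged as artifacts.
import torch
import torch.nn as nn
from mne.preprocessing import ICA


class ICArtifactCNN(nn.Module):
    """Tiny 1-D CNN scoring one IC time course as artifact (1) vs brain (0)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(8, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 1),
        )

    def forward(self, x):                # x: (batch, 1, n_samples)
        return torch.sigmoid(self.net(x))


def suppress_artifacts(raw, model, n_components=20, threshold=0.5):
    """raw: an mne.io.Raw MEG recording; model: a trained ICArtifactCNN."""
    ica = ICA(n_components=n_components, random_state=0)
    ica.fit(raw)
    sources = ica.get_sources(raw).get_data()            # (n_components, n_times)
    x = torch.tensor(sources, dtype=torch.float32).unsqueeze(1)
    with torch.no_grad():
        scores = model(x).squeeze(1)                      # artifact probability per IC
    bad = torch.nonzero(scores > threshold).flatten().tolist()
    return ica.apply(raw.copy(), exclude=bad)             # reconstruct without artifact ICs
```

Because the classifier pools over time with adaptive average pooling, the same model can score components from recordings of any length, consistent with the abstract's note that the approach adapts to the acquisition length.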


Subject(s)
Artifacts , Brain/physiology , Magnetoencephalography/methods , Neural Networks, Computer , Signal Processing, Computer-Assisted , Adolescent , Adult , Aged , Blinking/physiology , Child , Female , Humans , Magnetoencephalography/standards , Male , Middle Aged , Young Adult