Results 1 - 7 of 7
1.
J Transl Med ; 22(1): 265, 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38468358

ABSTRACT

BACKGROUND: Identifying individuals with mild cognitive impairment (MCI) at risk of progressing to Alzheimer's disease (AD) provides a unique opportunity for early interventions. Therefore, accurate and long-term prediction of the conversion from MCI to AD is desired but, to date, remains challenging. Here, we developed an interpretable deep learning model featuring a novel design that incorporates interaction effects and multimodality to improve the prediction accuracy and horizon for MCI-to-AD progression. METHODS: This multi-center, multi-cohort retrospective study collected structural magnetic resonance imaging (sMRI), clinical assessments, and genetic polymorphism data of 252 patients with MCI at baseline from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our deep learning model was cross-validated on the ADNI-1 and ADNI-2/GO cohorts and further generalized in the ongoing ADNI-3 cohort. We evaluated the model performance using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, and F1 score. RESULTS: On the cross-validation set, our model achieved superior results for predicting MCI conversion within 4 years (AUC, 0.962; accuracy, 92.92%; sensitivity, 88.89%; specificity, 95.33%) compared to all existing studies. In the independent test, our model exhibited consistent performance with an AUC of 0.939 and an accuracy of 92.86%. Integrating interaction effects and multimodal data into the model significantly increased prediction accuracy by 4.76% (P = 0.01) and 4.29% (P = 0.03), respectively. Furthermore, our model demonstrated robustness to inter-center and inter-scanner variability, while generating interpretable predictions by quantifying the contribution of multimodal biomarkers. 
CONCLUSIONS: The proposed deep learning model presents a novel perspective by combining interaction effects and multimodality, leading to more accurate and longer-term predictions of AD progression, which promises to improve pre-dementia patient care.


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Deep Learning , Humans , Alzheimer Disease/diagnostic imaging , Alzheimer Disease/genetics , Retrospective Studies , Magnetic Resonance Imaging/methods , Cognitive Dysfunction/diagnostic imaging , Cognitive Dysfunction/genetics , Cognitive Dysfunction/pathology , Disease Progression
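The metrics reported in this abstract (AUC, accuracy, sensitivity, specificity) are standard binary-classification measures. A minimal pure-Python sketch of how they are computed from labels and model scores — a generic illustration on toy data, not the authors' code:

```python
def confusion_metrics(y_true, y_pred):
    """Accuracy, sensitivity (true-positive rate) and specificity
    (true-negative rate) from binary labels and binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

def auc(y_true, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    random positive is scored higher than a random negative
    (ties count as 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Unlike accuracy, the AUC is computed from continuous scores before thresholding, which is why abstracts typically report both.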
2.
Cancer Sci ; 114(2): 690-701, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36114747

ABSTRACT

Accurately predicting patient survival is essential for cancer treatment decisions. However, a prognostic prediction model based on histopathological images of stomach cancer patients has yet to be developed. We propose a deep learning-based model (MultiDeepCox-SC) that predicts overall survival in patients with stomach cancer by integrating histopathological images, clinical data, and gene expression data. MultiDeepCox-SC not only automatically selects the patches most informative for survival prediction, without manual labeling of histopathological images, but also identifies genetic and clinical risk factors associated with survival in stomach cancer. The prognostic accuracy of MultiDeepCox-SC (C-index = 0.744) surpasses that of a model based on histopathological images alone (C-index = 0.660). The risk score of our model remained an independent predictor of survival outcome after adjustment for potential confounders, including pathologic stage, grade, age, race, and gender, on The Cancer Genome Atlas dataset (hazard ratio 1.555, p = 3.53e-08) and the external test set (hazard ratio 2.912, p = 9.42e-4). Our fully automated online prognostic tool based on histopathological images, clinical data, and gene expression data could be utilized to improve pathologists' efficiency and accuracy (https://yu.life.sjtu.edu.cn/DeepCoxSC).


Subject(s)
Deep Learning , Stomach Neoplasms , Humans , Stomach Neoplasms/genetics , Prognosis , Risk Factors
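The C-index quoted above measures how well predicted risk scores rank survival times under censoring. A generic pure-Python sketch of Harrell's concordance index — illustrative only, not the MultiDeepCox-SC implementation:

```python
def c_index(times, events, risks):
    """Harrell's concordance index.

    A pair (i, j) is comparable when the subject with the shorter
    follow-up time actually had the event (events[i] == 1).  The pair
    is concordant when that subject also has the higher predicted
    risk; ties in risk count as 0.5.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:  # i had the event first
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which puts the abstract's 0.744 vs. 0.660 comparison in context.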
3.
Bioinformatics ; 37(20): 3436-3443, 2021 Oct 25.
Article in English | MEDLINE | ID: mdl-33978703

ABSTRACT

MOTIVATION: Enhancers are important functional elements in genome sequences. Identifying enhancers is very challenging because of the great diversity of enhancer sequences and their flexible localization within genomes, and the interactions between enhancers and genes are still not fully understood. To speed up studies of the regulatory roles of enhancers, computational tools for enhancer prediction have emerged in recent years. In particular, thanks to the ENCODE project and advances in high-throughput experimental techniques, a large number of experimentally verified enhancers have been annotated on the human genome, enabling large-scale prediction of unknown enhancers with data-driven methods. However, outside of human and a few model organisms, validated enhancer annotations are scarce for most species, making the computational identification of enhancers in their genomes more difficult. RESULTS: In this study, we propose a deep learning-based predictor for enhancers, named CrepHAN, featuring a hierarchical attention neural network and word embedding-based representations of DNA sequences. We train the model on experimentally supported data from the human genome and perform experiments on human and other mammals, including mouse, cow, and dog. The experimental results show that CrepHAN is particularly advantageous for cross-species predictions and outperforms existing models by a large margin. For human-mouse cross-predictions, the area under the receiver operating characteristic curve (AUC) is increased by 0.033∼0.145 on the combined-tissue dataset and 0.032∼0.109 on tissue-specific datasets. AVAILABILITY AND IMPLEMENTATION: bcmi.sjtu.edu.cn/∼yangyang/CrepHAN.html. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
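Word embedding-based representations of DNA, as used by CrepHAN, conventionally treat a sequence as a sentence of overlapping k-mer "words". A minimal sketch of that tokenization step — a generic illustration (the value k=3 is chosen arbitrarily, not taken from the paper):

```python
def kmer_tokens(seq, k=3, stride=1):
    """Split a DNA sequence into overlapping k-mer 'words', the usual
    preprocessing before training word embeddings on genomic text."""
    seq = seq.upper()
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, stride)]

def build_vocab(sequences, k=3):
    """Map each distinct k-mer to an integer id, as consumed by an
    embedding layer."""
    vocab = {}
    for seq in sequences:
        for tok in kmer_tokens(seq, k):
            vocab.setdefault(tok, len(vocab))
    return vocab
```

The resulting integer ids index learned embedding vectors, so downstream attention layers see dense k-mer representations rather than raw one-hot bases.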

4.
Cell Rep Methods ; 4(4): 100742, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38554701

ABSTRACT

The pathogenesis of Alzheimer disease (AD) involves complex gene regulatory changes across different cell types. To help decipher this complexity, we introduce single-cell Bayesian biclustering (scBC), a framework for identifying cell-specific gene network biomarkers in scRNA-seq and snRNA-seq data. Through biclustering, scBC enables the analysis of perturbations in functional gene modules at the single-cell level. Applying the scBC framework to AD snRNA-seq data reveals perturbations within gene modules across distinct cell groups and sheds light on gene-cell correlations during AD progression. Notably, our method helps to overcome common challenges in single-cell data analysis, including batch effects and dropout events. Incorporating prior knowledge further enables the framework to yield more biologically interpretable results. Comparative analyses on simulated and real-world datasets demonstrate the precision and robustness of our approach compared to other state-of-the-art biclustering methods. scBC holds potential for unraveling the mechanisms underlying polygenic diseases characterized by intricate gene coexpression patterns.


Subject(s)
Alzheimer Disease , Disease Progression , Single-Cell Analysis , Transcriptome , Humans , Alzheimer Disease/genetics , Alzheimer Disease/metabolism , Alzheimer Disease/pathology , Single-Cell Analysis/methods , Transcriptome/genetics , Cluster Analysis , Bayes Theorem , Gene Expression Profiling/methods , Gene Regulatory Networks/genetics
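A bicluster, as sought here, is a subset of genes coherently expressed in a subset of cells. scBC's Bayesian model is not spelled out in the abstract, so as a generic illustration of what "coherent submatrix" means, here is the classical Cheng-Church mean squared residue score (not scBC's criterion):

```python
def mean_squared_residue(matrix, rows, cols):
    """Cheng-Church mean squared residue of a candidate bicluster:
    0 for a perfectly additive submatrix (row effect + column effect),
    larger for incoherent ones."""
    sub = [[matrix[i][j] for j in cols] for i in rows]
    n, m = len(rows), len(cols)
    row_mean = [sum(r) / m for r in sub]
    col_mean = [sum(sub[i][j] for i in range(n)) / n for j in range(m)]
    total = sum(row_mean) / n
    return sum((sub[i][j] - row_mean[i] - col_mean[j] + total) ** 2
               for i in range(n) for j in range(m)) / (n * m)
```

A low residue means the selected genes rise and fall together across the selected cells, which is the pattern a biclustering method, Bayesian or otherwise, is trying to isolate.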
5.
Nat Commun ; 15(1): 5700, 2024 Jul 07.
Article in English | MEDLINE | ID: mdl-38972896

ABSTRACT

Identifying spatially variable genes (SVGs) is crucial for understanding the spatiotemporal characteristics of diseases and tissue structures, and poses a distinctive challenge in spatial transcriptomics research. We propose HEARTSVG, a distribution-free, test-based method for fast and accurate identification of spatially variable genes in large-scale spatial transcriptomic data. Extensive simulations demonstrate that HEARTSVG outperforms state-of-the-art methods, with higher F1 scores (average F1 score = 0.948), improved computational efficiency and scalability, and fewer false positives (FPs). In analyses of twelve real datasets from various spatial transcriptomic technologies, HEARTSVG identifies a greater number of biologically significant SVGs (average AUC = 0.792) than competing methods, without prespecifying spatial patterns. Furthermore, by clustering SVGs, we uncover two distinct tumor spatial domains characterized by unique spatial expression patterns, spatiotemporal locations, and biological functions in human colorectal cancer data, unraveling the complexity of tumors.


Subject(s)
Gene Expression Profiling , Transcriptome , Humans , Gene Expression Profiling/methods , Colorectal Neoplasms/genetics , Computational Biology/methods , Algorithms , Gene Expression Regulation, Neoplastic , Computer Simulation , Databases, Genetic
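The abstract does not define HEARTSVG's test statistic, so as a generic illustration of the "distribution-free, test-based" idea, here is a simple permutation test for spatial variability: shuffle a gene's expression across spots and ask how often random arrangements look as spatially structured as the observed one (the neighbor-product statistic is an assumption for illustration, not the paper's):

```python
import random

def neighbor_score(expr, neighbors):
    """Sum of products of (mean-centered) expression over neighboring
    spot pairs: high when nearby spots deviate in the same direction."""
    mean = sum(expr) / len(expr)
    x = [e - mean for e in expr]
    return sum(x[i] * x[j] for i, j in neighbors)

def permutation_pvalue(expr, neighbors, n_perm=999, seed=0):
    """Distribution-free p-value: shuffle expression across spots and
    count permutations scoring at least as high as the observed one."""
    rng = random.Random(seed)
    observed = neighbor_score(expr, neighbors)
    hits = 0
    perm = list(expr)
    for _ in range(n_perm):
        rng.shuffle(perm)
        if neighbor_score(perm, neighbors) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction

# Usage sketch: 10 spots in a line, expression jumps halfway along.
chain = [(i, i + 1) for i in range(9)]
p = permutation_pvalue([0.0] * 5 + [10.0] * 5, chain)
```

Because the null distribution is built by permutation rather than assumed, no parametric model of the counts is needed, which is what "distribution-free" buys.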
6.
Cell Rep Med ; 5(5): 101536, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38697103

ABSTRACT

Spatial transcriptomics (ST) provides insights into the tumor microenvironment (TME), which is closely associated with cancer prognosis, but ST has limited clinical availability. In this study, we provide a powerful deep learning system to augment TME information based on histological images for patients without ST data, thereby empowering precise cancer prognosis. The system provides two connections to bridge existing gaps. The first is the integrated graph and image deep learning (IGI-DL) model, which predicts ST expression based on histological images, with a 0.171 increase in mean correlation across three cancer types compared with five existing methods. The second connection is the cancer prognosis prediction model, based on the TME depicted by spatial gene expression. Our survival model, using graphs with predicted ST features, achieves superior accuracy, with concordance indices of 0.747 and 0.725 for The Cancer Genome Atlas breast cancer and colorectal cancer cohorts, outperforming other survival models. On the external Molecular and Cellular Oncology colorectal cancer cohort, our survival model maintains a stable advantage.


Subject(s)
Deep Learning , Neoplasms , Tumor Microenvironment , Humans , Prognosis , Neoplasms/pathology , Neoplasms/genetics , Neoplasms/diagnosis , Transcriptome/genetics , Gene Expression Regulation, Neoplastic , Female , Breast Neoplasms/pathology , Breast Neoplasms/genetics , Breast Neoplasms/diagnosis
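The "mean correlation" used to evaluate predicted-versus-measured spot expression above is ordinarily a per-gene Pearson correlation averaged over genes. A minimal sketch of the underlying computation — a generic illustration, not the IGI-DL evaluation code:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between, e.g., predicted and measured
    expression of one gene across spatial spots."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)
```

An increase of 0.171 in this quantity, as reported, means predicted expression profiles track the measured ones substantially more closely on average.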
7.
J Hematol Oncol ; 14(1): 154, 2021 09 26.
Article in English | MEDLINE | ID: mdl-34565412

ABSTRACT

BACKGROUND: Liver cancer remains one of the leading causes of cancer death globally, and treatment strategies differ for each type of malignant hepatic tumor. However, the differential diagnosis before surgery is challenging and subjective. This study aims to build an automatic diagnostic model for differentiating malignant hepatic tumors based on patients' multimodal medical data, including multi-phase contrast-enhanced computed tomography (CECT) and clinical features. METHODS: Our study included 723 patients from two centers who were pathologically diagnosed with hepatocellular carcinoma (HCC), intrahepatic cholangiocarcinoma (ICC), or metastatic liver cancer. The training set and the test set consisted of 499 and 113 patients from center 1, respectively. The external test set consisted of 111 patients from center 2. We propose a deep learning model with the modular design SpatialExtractor-TemporalEncoder-Integration-Classifier (STIC), which takes advantage of a deep CNN and a gated RNN to effectively extract and integrate the diagnosis-related radiological and clinical features of patients. The code is publicly available at https://github.com/ruitian-olivia/STIC-model. RESULTS: The STIC model achieved an accuracy of 86.2% and an AUC of 0.893 for classifying HCC and ICC on the test set. When extended to the differential diagnosis of malignant hepatic tumors, the STIC model achieved an accuracy of 72.6% on the test set, comparable to the diagnostic level of doctors' consensus (70.8%). With the assistance of the STIC model, doctors outperformed their consensus diagnosis, with an average increase of 8.3% in accuracy and 26.9% in sensitivity for ICC diagnosis. On the external test set from center 2, the STIC model achieved an accuracy of 82.9%, verifying its generalization ability. CONCLUSIONS: We incorporated a deep CNN and a gated RNN in the STIC model design for differentiating malignant hepatic tumors based on multi-phase CECT and clinical features.
Our model can assist doctors in achieving better diagnostic performance, and is expected to serve as an AI assistance system promoting the precise treatment of liver cancer.


Subject(s)
Carcinoma, Hepatocellular/diagnostic imaging , Liver Neoplasms/diagnostic imaging , Liver/diagnostic imaging , Deep Learning , Diagnosis, Computer-Assisted , Diagnosis, Differential , Humans , Tomography, X-Ray Computed
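The "gated RNN" in the TemporalEncoder refers to recurrent units such as the GRU, whose gates decide how much of the previous hidden state to keep at each CT phase. A scalar single-step sketch of the standard GRU update — a generic illustration of the mechanism, not the STIC model's weights (biases omitted for brevity):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, w):
    """One scalar GRU step.  z is the update gate (how much new state
    to blend in), r is the reset gate (how much old state feeds the
    candidate), h_cand is the candidate hidden state."""
    z = sigmoid(w["wz"] * x + w["uz"] * h)
    r = sigmoid(w["wr"] * x + w["ur"] * h)
    h_cand = math.tanh(w["wh"] * x + w["uh"] * (r * h))
    return (1 - z) * h + z * h_cand
```

Run over the per-phase CNN features of a multi-phase CECT scan, such gated updates let the encoder accumulate evidence across phases while discarding uninformative ones.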