Results 1 - 4 of 4
1.
Brief Bioinform ; 25(2)2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38349062

ABSTRACT

Single-cell RNA sequencing (scRNA-seq) has emerged as a powerful tool for gaining biological insights at the cellular level. However, due to technical limitations of existing sequencing technologies, low gene expression values are often omitted, leading to inaccurate gene counts. Existing methods, including advanced deep learning techniques, struggle to impute gene expression reliably because they lack mechanisms that explicitly incorporate the underlying biological knowledge of the system. In reality, it has long been recognized that gene-gene interactions may serve as reflective indicators of underlying biological processes, presenting discriminative signatures of the cells. A genomic data analysis framework capable of leveraging these gene-gene interactions is thus highly desirable, as it could enable more reliable identification of distinctive patterns through the extraction and integration of intricate biological characteristics of the data. Here we tackle the problem in two steps that exploit the gene-gene interactions of the system. We first reposition the genes onto a 2D grid such that their spatial configuration reflects their interactive relationships. To alleviate the need for labeled ground-truth gene expression datasets, a self-supervised 2D convolutional neural network is then employed to extract the contextual features of the interactions from the spatially configured genes and impute the omitted values. Extensive experiments with both simulated and experimental scRNA-seq datasets demonstrate the superior performance of the proposed strategy over existing imputation methods.
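The two-step idea in this abstract (reposition genes on a 2D grid according to their interaction structure, then impute dropouts from spatial context) can be illustrated with a toy sketch. This is not the authors' method: the greedy correlation-based ordering and the neighbor-mean imputation below are simplified stand-ins for the learned self-supervised CNN described in the paper, and all function names are hypothetical.

```python
import numpy as np

def genes_to_grid(expr, side):
    """Greedily order genes so that correlated genes sit next to each other,
    then lay them out row-major on a side x side grid.
    expr: (cells, genes) matrix; side*side must equal the gene count."""
    corr = np.corrcoef(expr.T)
    n = expr.shape[1]
    order = [0]
    remaining = set(range(1, n))
    while remaining:
        last = order[-1]
        # pick the remaining gene most correlated with the last-placed one
        nxt = max(remaining, key=lambda g: corr[last, g])
        order.append(nxt)
        remaining.remove(nxt)
    return np.array(order).reshape(side, side)

def impute_dropouts(cell_expr, grid):
    """Replace zero (dropout) entries with the mean of their nonzero
    4-neighbours on the gene grid -- a crude stand-in for the learned
    convolutional imputation described in the abstract."""
    img = cell_expr[grid].astype(float)
    out = img.copy()
    side = grid.shape[0]
    for i in range(side):
        for j in range(side):
            if img[i, j] == 0:
                neigh = [img[x, y]
                         for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= x < side and 0 <= y < side and img[x, y] > 0]
                if neigh:
                    out[i, j] = np.mean(neigh)
    return out
```

The grid layout matters because a convolution only mixes spatially adjacent values; placing interacting genes next to each other is what lets local filters capture their joint behavior.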


Subject(s)
Deep Learning, Epistasis, Genetic, Data Analysis, Genomics, Gene Expression, Gene Expression Profiling, Sequence Analysis, RNA
2.
IEEE Trans Med Imaging ; 42(7): 1932-1943, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37018314

ABSTRACT

The collection and curation of large-scale medical datasets from multiple institutions is essential for training accurate deep learning models, but privacy concerns often hinder data sharing. Federated learning (FL) is a promising solution that enables privacy-preserving collaborative learning among different institutions, but it generally suffers from performance deterioration due to heterogeneous data distributions and a lack of quality labeled data. In this paper, we present a robust and label-efficient self-supervised FL framework for medical image analysis. Our method introduces a novel Transformer-based self-supervised pre-training paradigm that pre-trains models directly on decentralized target task datasets using masked image modeling, to facilitate more robust representation learning on heterogeneous data and effective knowledge transfer to downstream models. Extensive empirical results on simulated and real-world medical imaging non-IID federated datasets show that masked image modeling with Transformers significantly improves the robustness of models against various degrees of data heterogeneity. Notably, under severe data heterogeneity, our method, without relying on any additional pre-training data, achieves an improvement of 5.06%, 1.53% and 4.58% in test accuracy on retinal, dermatology and chest X-ray classification compared to the supervised baseline with ImageNet pre-training. In addition, we show that our federated self-supervised pre-training methods yield models that generalize better to out-of-distribution data and perform more effectively when fine-tuning with limited labeled data, compared to existing FL algorithms. The code is available at https://github.com/rui-yan/SSL-FL.
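The two generic ingredients of the framework described above, masked image modeling and server-side federated averaging, can be sketched in a few lines. This is a hedged illustration, not the released SSL-FL code (see the linked repository for the actual Transformer-based implementation); the helper names below are hypothetical, and plain FedAvg stands in for the aggregation step.

```python
import numpy as np

def random_patch_mask(h, w, patch, ratio, rng):
    """Boolean mask over an h x w image: True where a patch is hidden.
    Masked-image-modeling pre-training asks the model to reconstruct
    the pixels under the True entries from the visible context."""
    gh, gw = h // patch, w // patch
    n_mask = int(round(gh * gw * ratio))
    idx = rng.choice(gh * gw, size=n_mask, replace=False)
    mask = np.zeros((gh, gw), dtype=bool)
    mask.ravel()[idx] = True
    # expand each patch flag to its patch x patch pixel block
    return np.kron(mask, np.ones((patch, patch), dtype=bool))

def fedavg(client_params, client_sizes):
    """Server step of federated averaging: each client's parameter list
    is weighted by its local dataset size, so no raw data leaves a site."""
    total = sum(client_sizes)
    return [sum(p[k] * (n / total) for p, n in zip(client_params, client_sizes))
            for k in range(len(client_params[0]))]
```

Because the reconstruction target is the image itself, each institution can pre-train on its own unlabeled data; only the averaged parameters are exchanged, which is what makes the scheme both label-efficient and privacy-preserving.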


Subject(s)
Algorithms, Diagnostic Imaging, Radiography, Retina
3.
Med Image Comput Comput Assist Interv ; 13431: 231-240, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36321855

ABSTRACT

The white-matter (micro-)structural architecture of the brain promotes synchrony among neuronal populations, giving rise to richly patterned functional connections. A fundamental problem for systems neuroscience is determining the best way to relate structural and functional networks quantified by diffusion tensor imaging and resting-state functional MRI. As one of the state-of-the-art approaches for network analysis, graph convolutional networks (GCN) have been separately used to analyze functional and structural networks, but have not been applied to explore inter-network relationships. In this work, we propose to couple the two networks of an individual by adding inter-network edges between corresponding brain regions, so that the joint structure-function graph can be directly analyzed by a single GCN. The weights of inter-network edges are learnable, reflecting non-uniform structure-function coupling strength across the brain. We apply our Joint-GCN to predict age and sex of 662 participants from the public dataset of the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA) based on their functional and micro-structural white-matter networks. Our results show that the proposed Joint-GCN outperforms existing multi-modal graph learning approaches for analyzing structural and functional networks.
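The coupling construction (inter-network edges between corresponding regions, with per-region weights, feeding a single GCN) can be sketched as follows. This is a minimal illustration under the standard Kipf-Welling propagation rule, not the authors' Joint-GCN: in the paper the coupling weights are learnable parameters, whereas here they are passed in as a fixed vector, and the function names are hypothetical.

```python
import numpy as np

def joint_adjacency(A_struct, A_func, coupling):
    """Couple structural and functional graphs over the same N regions by
    inter-network edges linking region i in one graph to region i in the
    other, weighted by a per-region coupling strength (learnable in the
    paper, fixed here)."""
    assert A_struct.shape == A_func.shape
    C = np.diag(coupling)
    return np.block([[A_struct, C],
                     [C, A_func]])

def gcn_layer(A, X, W):
    """One graph-convolution step: add self-loops, symmetrically normalise
    the adjacency, propagate features, then apply ReLU."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)
```

The block structure is the whole trick: because structure-to-function edges live in the off-diagonal blocks, a single GCN pass mixes information across modalities exactly where the coupling weights allow it.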

4.
Front Oncol ; 12: 878061, 2022.
Article in English | MEDLINE | ID: mdl-35875110

ABSTRACT

Background and Aims: Microvascular invasion (MVI) is a well-known risk factor for poor prognosis in hepatocellular carcinoma (HCC). This study aimed to develop a deep convolutional neural network (DCNN) model based on contrast-enhanced ultrasound (CEUS) to predict MVI, and thus prognosis, in patients with HCC. Methods: A total of 436 patients with surgically resected HCC who underwent preoperative CEUS were retrospectively enrolled. Patients were divided into training (n = 301), validation (n = 102), and test (n = 33) sets. A clinical model (Clinical model), a CEUS video-based DCNN model (CEUS-DCNN model), and a fusion model based on CEUS video and clinical variables (CECL-DCNN model) were built to predict MVI. Survival analysis was used to evaluate the clinical performance of the predicted MVI. Results: Compared with the Clinical model, the CEUS-DCNN model exhibited similar sensitivity but higher specificity (71.4% vs. 38.1%, p = 0.03) in the test group. The CECL-DCNN model showed significantly higher specificity (81.0% vs. 38.1%, p = 0.005) and accuracy (78.8% vs. 51.5%, p = 0.009) than the Clinical model, with an AUC of 0.865. The Clinical-model-predicted MVI could not significantly distinguish overall survival (OS) or recurrence-free survival (RFS) (both p > 0.05), while the CEUS-DCNN-predicted MVI could only predict earlier recurrence (hazard ratio [HR] with 95% confidence interval [CI]: 2.92 [1.1-7.75], p = 0.024). However, the CECL-DCNN-predicted MVI was a significant prognostic factor for both OS (HR with 95% CI: 6.03 [1.7-21.39], p = 0.009) and RFS (HR with 95% CI: 3.3 [1.23-8.91], p = 0.011) in the test group. Conclusions: The proposed CECL-DCNN model based on preoperative CEUS video can serve as a noninvasive tool to predict MVI status in HCC, and thereby poor prognosis.
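The test-set comparisons above rest on sensitivity, specificity, and accuracy derived from a binary confusion matrix. A minimal sketch of how such metrics are computed (the function name is hypothetical, and this is not the study's code):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy for binary labels, where
    1 = MVI-positive and 0 = MVI-negative. Assumes both classes occur
    in y_true."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / y_true.size
    return sensitivity, specificity, accuracy
```

Specificity is the metric the fusion model improves most here: with similar sensitivity, a higher specificity means fewer MVI-negative patients are wrongly flagged as positive.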
