Results 1 - 3 of 3
1.
IEEE Trans Med Imaging; 42(8): 2462-2473, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37028064

ABSTRACT

Cancer survival prediction requires exploiting related multimodal information (e.g., pathological, clinical, and genomic features), and it is even more challenging in clinical practice because patients' multimodal data are often incomplete. Furthermore, existing methods lack sufficient intra- and inter-modal interactions and suffer from significant performance degradation when modalities are missing. This manuscript proposes a novel hybrid graph convolutional network, termed HGCN, equipped with an online masked autoencoder paradigm for robust multimodal cancer survival prediction. In particular, we pioneer modeling a patient's multimodal data as flexible and interpretable multimodal graphs with modality-specific preprocessing. HGCN integrates the advantages of graph convolutional networks (GCNs) and a hypergraph convolutional network (HCN) through node message passing and a hyperedge mixing mechanism, which together facilitate intra- and inter-modal interactions between multimodal graphs. With HGCN, multimodal data yield markedly more reliable predictions of patient survival risk than prior methods. Most importantly, to compensate for missing patient modalities in clinical scenarios, we incorporate an online masked autoencoder paradigm into HGCN, which effectively captures the intrinsic dependence between modalities and seamlessly generates missing hyperedges for model inference. Extensive experiments and analysis on six cancer cohorts from TCGA show that our method significantly outperforms state-of-the-art methods in both complete and missing modality settings. Our code is available at https://github.com/lin-lcx/HGCN.
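To make the two interaction patterns concrete, below is a minimal PyTorch sketch of intra-modal node message passing followed by a hypergraph convolution that mixes modality-level hyperedges. All layer names, shapes, and the toy fusion scheme are illustrative assumptions, not the released HGCN implementation; see the authors' repository for the actual code.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One round of mean-aggregated node message passing within a modality."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.linear = nn.Linear(dim_in, dim_out)

    def forward(self, x, adj):
        # adj: (N, N) adjacency with self-loops; row-normalize, then transform.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.linear((adj / deg) @ x))

class HypergraphMix(nn.Module):
    """Hypergraph convolution X' = Dv^-1 H De^-1 H^T X W, where each
    hyperedge groups the nodes of one modality (a hedged reading of the
    paper's hyperedge mixing mechanism)."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x, H):
        # H: (N, E) incidence matrix, one hyperedge per modality.
        de = H.sum(dim=0).clamp(min=1.0)                 # hyperedge degrees
        dv = H.sum(dim=1, keepdim=True).clamp(min=1.0)   # node degrees
        edge_feat = (H.t() @ x) / de.unsqueeze(1)        # nodes -> hyperedges
        node_feat = (H @ edge_feat) / dv                 # hyperedges -> nodes
        return torch.relu(self.linear(node_feat))

# Toy usage: 4 pathology nodes + 3 genomic nodes in one patient graph.
x = torch.randn(7, 16)
adj = torch.eye(7)            # intra-modal edges would go here
H = torch.zeros(7, 2)
H[:4, 0] = 1.0                # hyperedge covering the pathology modality
H[4:, 1] = 1.0                # hyperedge covering the genomic modality
out = HypergraphMix(16)(SimpleGCNLayer(16, 16)(x, adj), H)
```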


Subject(s)
Genomics; Neoplasms; Humans; Neoplasms/diagnostic imaging
2.
IEEE J Biomed Health Inform; 26(6): 2660-2669, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34855605

ABSTRACT

Survival prediction for esophageal cancer is essential for doctors to make personalized cancer treatment plans. However, handcrafted features from medical images require prior medical knowledge, which is usually limited and incomplete, yielding unsatisfactory survival predictions. To address these challenges, we propose a novel and efficient deep learning-based survival prediction framework for evaluating clinical outcomes before concurrent chemoradiotherapy. The proposed model consists of two key components: a 3D Coordinate Attention Convolutional Autoencoder (CACA) and an Uncertainty-based jointly Optimizing Cox Model (UOCM). The CACA is built upon an autoencoder structure with 3D coordinate attention layers, capturing latent representations and encoding 3D spatial characteristics with precise positional information. The UOCM jointly optimizes the CACA and the survival prediction task, modeling the interactions between a patient's feature signatures and clinical outcome to predict a reliable hazard ratio for each patient. To verify the effectiveness of our model, we conducted extensive experiments on a dataset of computed tomography scans from 285 patients with esophageal cancer. Experimental results demonstrate that the proposed method achieves a C-index of 0.72, outperforming the state-of-the-art method.
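A hedged sketch of the kind of joint objective the abstract implies: a negative Cox partial log-likelihood for the survival head combined with an autoencoder reconstruction loss, weighted by learned homoscedastic uncertainties. The exact UOCM formulation in the paper may differ, and all names here are invented for illustration.

```python
import torch
import torch.nn as nn

def cox_partial_likelihood(risk, time, event):
    """Negative Cox partial log-likelihood.
    risk: (N,) predicted log hazard ratios; time: (N,) follow-up times;
    event: (N,) 1 if death observed, 0 if censored."""
    event = event.float()
    order = torch.argsort(time, descending=True)   # build risk sets by sorting
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)   # log-sum-exp over each risk set
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1.0)

class UncertaintyWeightedLoss(nn.Module):
    """Combine reconstruction and Cox losses with learned log-variances,
    in the style of homoscedastic multi-task weighting (an assumption)."""
    def __init__(self):
        super().__init__()
        self.log_var = nn.Parameter(torch.zeros(2))  # one log-variance per task

    def forward(self, loss_recon, loss_cox):
        w = torch.exp(-self.log_var)                 # precision weights
        return w[0] * loss_recon + w[1] * loss_cox + self.log_var.sum()
```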


Subject(s)
Esophageal Neoplasms; Esophageal Neoplasms/diagnostic imaging; Esophageal Neoplasms/therapy; Humans; Tomography, X-Ray Computed
3.
Med Image Anal; 72: 102092, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34030101

ABSTRACT

Automatic surveillance of early neoplasia in Barrett's esophagus (BE) is of great significance for improving the survival rate of esophageal cancer. It remains, however, a challenging task due to (1) the large variation of early neoplasia, (2) the existence of hard mimics, (3) the complicated anatomical and lighting environment in endoscopic images, and (4) the intrinsic real-time requirement of this application. We propose a novel end-to-end network equipped with an attentive hierarchical aggregation module and a self-distillation mechanism to comprehensively address these challenges. The hierarchical aggregation module captures the complementarity of adjacent layers and hence strengthens the representation capability of each aggregated feature. Meanwhile, an attention mask selectively integrates the logits of each feature, which not only improves prediction accuracy but also enhances prediction interpretability. Furthermore, an efficient self-distillation mechanism is implemented on a teacher-student architecture, where the student captures abstract high-level features while the teacher brings in low-level semantic details to calibrate the classification results. The proposed techniques are effective yet lightweight, improving classification performance without sacrificing inference time and thus achieving real-time inference. We extensively evaluate the proposed method on the MICCAI EndoVis Challenge Dataset. Experimental results demonstrate that the proposed method achieves competitive accuracy at a much faster speed than state-of-the-art methods.
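A minimal sketch of the two ideas described above: per-level logits blended by a learned attention mask, and a softened KL term that distills between teacher and student heads. Temperatures, head structure, and the aggregation details are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveLogitFusion(nn.Module):
    """Fuse per-level logits with a softmax attention mask; the mask also
    indicates which feature levels drove the prediction (interpretability)."""
    def __init__(self, num_levels):
        super().__init__()
        self.mask = nn.Parameter(torch.zeros(num_levels))

    def forward(self, level_logits):             # list of (N, C) tensors
        stacked = torch.stack(level_logits, 0)   # (L, N, C)
        attn = torch.softmax(self.mask, 0).view(-1, 1, 1)
        return (attn * stacked).sum(0)           # (N, C) fused logits

def self_distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Cross-entropy on ground truth plus a temperature-softened KL term
    aligning the student head with the (detached) teacher head."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits.detach() / T, dim=1),
                  reduction="batchmean") * (T * T)
    return alpha * ce + (1 - alpha) * kd
```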


Subject(s)
Barrett Esophagus; Esophageal Neoplasms; Attention; Barrett Esophagus/diagnostic imaging; Esophageal Neoplasms/diagnostic imaging; Humans