Results 1 - 3 of 3

1.
Article in English | MEDLINE | ID: mdl-37788195

ABSTRACT

Survival prediction based on histopathological whole slide images (WSIs) is of great significance for risk-benefit assessment and clinical decision-making. However, the complex microenvironments and heterogeneous tissue structures in WSIs make it challenging to learn informative prognosis-related representations. Additionally, previous studies mainly focus on modeling with mono-scale WSIs, which commonly ignores the useful subtle differences that exist across multi-zoom WSIs. To this end, we propose a deep multi-dictionary learning framework for cancer survival prediction with multi-zoom histopathological WSIs. The framework can recognize and learn discriminative clusters (i.e., microenvironments) based on multi-scale deep representations for survival analysis. Specifically, we learn multi-scale features from multi-zoom WSI tiles via a stacked deep autoencoder network and then group them into different microenvironments with a clustering algorithm. Based on the multi-scale deep features of these clusters, a multi-dictionary learning method with a post-pruning strategy is devised to learn discriminative representations from selected prognosis-related clusters in a task-driven manner. Finally, a survival model (i.e., EN-Cox) is constructed to estimate the risk index of an individual patient. The proposed model is evaluated on three datasets derived from The Cancer Genome Atlas (TCGA), and the experimental results demonstrate that it outperforms several state-of-the-art survival analysis approaches.
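Below is a minimal, illustrative sketch (assumed details, not the authors' implementation) of the first two steps described above: learning tile-level deep features with a stacked autoencoder and grouping them into candidate microenvironments with a clustering algorithm. The layer sizes, the use of k-means, and the pre-extracted 1024-dimensional tile descriptors are assumptions; WSI tiling, the multi-dictionary learning step, and the EN-Cox survival model are omitted.

# Minimal illustrative sketch (assumed details, not the authors' code): stacked
# autoencoder features for multi-zoom tiles, then k-means to group "microenvironments".
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class StackedAutoencoder(nn.Module):
    def __init__(self, in_dim=1024):                      # 1024-d tile descriptors assumed
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, 64))
        self.decoder = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, in_dim))
    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def cluster_microenvironments(tile_features, n_clusters=8, epochs=50):
    """tile_features: (n_tiles, in_dim) float tensor of multi-zoom tile descriptors."""
    model = StackedAutoencoder(in_dim=tile_features.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):                               # unsupervised reconstruction training
        z, recon = model(tile_features)
        loss = nn.functional.mse_loss(recon, tile_features)
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        z, _ = model(tile_features)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(z.numpy())
    return z, labels                                      # deep features and cluster assignments

In a pipeline of the kind the abstract describes, the returned cluster assignments would then feed the dictionary-learning and survival-modeling stages.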


Subject(s)
Algorithms; Neoplasms; Humans; Neoplasms/genetics; Tumor Microenvironment
2.
IEEE Trans Neural Netw Learn Syst; 34(7): 3737-3750, 2023 Jul.
Article in English | MEDLINE | ID: mdl-34596560

ABSTRACT

The Cox proportional hazards model has been widely applied to cancer prognosis prediction. Nowadays, multi-modal data, such as histopathological images and gene data, have advanced this field by providing histologic phenotype and genotype information. However, efficiently fusing and selecting the complementary information in high-dimensional multi-modal data remains challenging for the Cox model, as it is generally not equipped with a feature fusion/selection mechanism. Previous studies typically perform feature fusion/selection in the original feature space before Cox modeling. Alternatively, it is desirable to learn a latent shared feature space that is tailored to the Cox model while simultaneously maintaining sparsity. In addition, existing Cox-based models commonly pay little attention to the actual length of the observed survival time, which may help to boost model performance. In this article, we propose a novel Cox-driven multi-constraint latent representation learning framework for prognosis analysis with multi-modal data. Specifically, for efficient feature fusion, a multi-modal latent space is learned via a bi-mapping approach under ranking and regression constraints. The ranking constraint uses the log-partial likelihood of the Cox model to induce the learning of discriminative representations in a task-oriented manner. Meanwhile, the representations also benefit from the regression constraint, which imposes the supervision of the specific survival time on representation learning. To improve generalization and alleviate overfitting, we further introduce similarity and sparsity constraints to encourage extra consistency and sparseness. Extensive experiments on three datasets acquired from The Cancer Genome Atlas (TCGA) demonstrate that the proposed method is superior to state-of-the-art Cox-based models.
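As a rough illustration of how a ranking constraint (the Cox log-partial likelihood) and a regression constraint on survival time can be combined, the sketch below writes both terms in PyTorch. The linear projections w_cox / w_reg, the masking of censored samples in the regression term, and the weights lambda_reg / lambda_l1 are assumptions, not the paper's exact formulation; the bi-mapping and similarity constraints are omitted.

# Illustrative only: Cox negative log-partial likelihood ("ranking" term) plus a
# survival-time regression term and an L1 sparsity term; weights are assumed values.
import torch

def cox_neg_log_partial_likelihood(risk, time, event):
    """risk: (n,) scores; time: (n,) follow-up; event: (n,) 1.0 = death, 0.0 = censored."""
    order = torch.argsort(time, descending=True)          # prefix of the sort = risk set
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)          # log sum of exp(risk) over risk set
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1.0)

def multi_constraint_loss(z, w_cox, w_reg, time, event, lambda_reg=0.1, lambda_l1=1e-3):
    """z: (n, d) latent representations shared across modalities; w_*: (d,) weight vectors."""
    risk = z @ w_cox                                      # ranking (Cox) branch
    pred_time = z @ w_reg                                 # regression branch on survival time
    loss_rank = cox_neg_log_partial_likelihood(risk, time, event)
    loss_reg = ((pred_time - time) ** 2 * event).mean()   # supervise uncensored samples only
    loss_sparse = w_cox.abs().sum() + w_reg.abs().sum()   # sparsity constraint
    return loss_rank + lambda_reg * loss_reg + lambda_l1 * loss_sparse

In a framework like the one described, a combined loss of this form would be minimized jointly over the latent representations and the projection weights, so the shared space is shaped by both the ranking and the regression supervision.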


Subject(s)
Learning; Neural Networks, Computer; Generalization, Psychological; Prognosis; Probability
3.
IEEE Trans Med Imaging; 41(1): 186-198, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34460368

ABSTRACT

The integrative analysis of complementary phenotype information contained in multi-modality data (e.g., histopathological images and genomic data) has advanced the prognostic evaluation of cancers. However, multi-modality-based prognosis analysis confronts two challenges: (1) how to explore the underlying relations inherent in data from different modalities in order to learn compact and discriminative multi-modality representations; (2) how to take full account of incomplete multi-modality data when constructing an accurate and robust prognostic model, since complete multi-modality data are not always available. Additionally, many existing multi-modality-based prognostic methods ignore relevant clinical variables (e.g., grade and stage), which may provide supplementary information that improves model performance. In this paper, we propose a relation-aware shared representation learning method for prognosis analysis of cancers, which makes full use of clinical information and incomplete multi-modality data. The proposed method learns a multi-modal shared space tailored to the prognostic model via a dual mapping. Within the shared space, relational regularizers are employed to explore the potential relations (i.e., feature-label and feature-feature relations) among multi-modality data, inducing discriminative representations while also obtaining extra sparsity to alleviate overfitting. Moreover, the method regresses and incorporates multiple auxiliary clinical attributes with dynamic coefficients to improve performance. Furthermore, in the training stage, a partial mapping strategy is employed to extend and train a more reliable model with incomplete multi-modality data. We have evaluated our method on three public datasets derived from The Cancer Genome Atlas (TCGA) project, and the experimental results demonstrate the superior performance of the proposed method.
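The sketch below illustrates, under stated assumptions, one generic way to realize a shared space with dual (modality-specific) mappings, an auxiliary clinical-attribute head, and a mask-based handling of incomplete data so that patients with a missing modality still contribute during training. Layer sizes, the masked-average fusion, and zero-filled placeholders for missing inputs are illustrative choices, not the paper's formulation.

# Hedged sketch: modality-specific mappings into a shared space, masked fusion for
# incomplete data, and auxiliary clinical-attribute regression; sizes are assumptions.
import torch
import torch.nn as nn

class SharedRepresentation(nn.Module):
    def __init__(self, img_dim=512, gene_dim=2000, shared_dim=64, n_clinical=2):
        super().__init__()
        self.map_img = nn.Linear(img_dim, shared_dim)         # histopathology branch
        self.map_gene = nn.Linear(gene_dim, shared_dim)        # genomic branch
        self.risk_head = nn.Linear(shared_dim, 1)              # prognostic risk score
        self.clinical_head = nn.Linear(shared_dim, n_clinical) # e.g., grade and stage
    def forward(self, x_img, x_gene, has_img, has_gene):
        """has_img / has_gene: (n, 1) float masks; missing inputs are assumed zero-filled."""
        z_img = self.map_img(x_img) * has_img                  # drop contribution if missing
        z_gene = self.map_gene(x_gene) * has_gene
        denom = (has_img + has_gene).clamp(min=1.0)
        z = (z_img + z_gene) / denom                           # average over available modalities
        return self.risk_head(z).squeeze(-1), self.clinical_head(z), z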


Subject(s)
Genomics; Neoplasms; Humans; Neoplasms/diagnostic imaging; Prognosis