Results 1 - 4 of 4
1.
Comput Biol Med; 178: 108746, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38878403

ABSTRACT

Multi-phase computed tomography (CT) has been widely used for the preoperative diagnosis of kidney cancer due to its non-invasive nature and its ability to characterize renal lesions. However, since the enhancement patterns of renal lesions vary across CT phases even for the same lesion type, visual assessment by radiologists suffers from inter-observer variability in clinical practice. Although deep learning-based approaches have recently been explored for the differential diagnosis of kidney cancer, they do not explicitly model the relationships between CT phases in the network design, limiting diagnostic performance. In this paper, we propose a novel lesion-aware cross-phase attention network (LACPANet) that effectively captures temporal dependencies of renal lesions across CT phases to accurately classify lesions into five major pathological subtypes from time-series multi-phase CT images. We introduce a 3D inter-phase lesion-aware attention mechanism that learns 3D lesion features used to estimate attention weights describing the inter-phase relations of the enhancement patterns. We also present a multi-scale attention scheme that captures and aggregates temporal patterns of lesion features at different spatial scales for further improvement. Extensive experiments on multi-phase CT scans of kidney cancer patients from our collected dataset demonstrate that LACPANet outperforms state-of-the-art approaches in diagnostic accuracy.
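No code accompanies this abstract; the following is a minimal PyTorch sketch of the core idea of attending across CT phases over 3D lesion features. The single-head design, tensor shapes, and module names are illustrative assumptions, not LACPANet's actual implementation (its lesion-aware weighting and multi-scale scheme are not reproduced here).

```python
# Hypothetical sketch: attention across CT phases for 3D feature maps.
import torch
import torch.nn as nn


class CrossPhaseAttention(nn.Module):
    """Attends across CT phases at each spatial location of a 3D feature map."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv3d(channels, channels, kernel_size=1)
        self.key = nn.Conv3d(channels, channels, kernel_size=1)
        self.value = nn.Conv3d(channels, channels, kernel_size=1)
        self.scale = channels ** -0.5

    def forward(self, phase_feats: torch.Tensor) -> torch.Tensor:
        # phase_feats: (batch, phases, channels, D, H, W)
        b, p, c, d, h, w = phase_feats.shape
        x = phase_feats.reshape(b * p, c, d, h, w)
        q = self.query(x).reshape(b, p, c, -1)   # (b, p, c, d*h*w)
        k = self.key(x).reshape(b, p, c, -1)
        v = self.value(x).reshape(b, p, c, -1)
        # Phase-to-phase attention weights, shared over channels and locations.
        attn = torch.einsum('bpcn,bqcn->bpq', q, k) * self.scale
        attn = attn.softmax(dim=-1)
        out = torch.einsum('bpq,bqcn->bpcn', attn, v)
        return out.reshape(b, p, c, d, h, w) + phase_feats  # residual connection


if __name__ == '__main__':
    feats = torch.randn(2, 4, 16, 8, 16, 16)  # toy batch: 4 CT phases per case
    print(CrossPhaseAttention(16)(feats).shape)
```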

2.
IEEE J Biomed Health Inform; 26(12): 6093-6104, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36327174

ABSTRACT

Multi-phase computed tomography (CT) is widely adopted for the diagnosis of kidney cancer due to the complementary information among phases. However, the complete set of multi-phase CT is often unavailable in practical clinical applications. In recent years, several studies have attempted to generate the missing-phase images from the available data. Nevertheless, the generated images are not guaranteed to be effective for the diagnosis task. In this paper, we propose a unified framework for kidney cancer diagnosis with incomplete multi-phase CT that simultaneously recovers missing CT images and classifies cancer subtypes using the completed set of images. The advantage of our framework is that it encourages the synthesis model to explicitly learn to generate missing CT phases that are helpful for classifying cancer subtypes. We further incorporate a lesion segmentation network into our framework to exploit lesion-level features for effective cancer classification in whole CT volumes. The proposed framework is based on fully 3D convolutional neural networks to jointly optimize both synthesis and classification of 3D CT volumes. Extensive experiments on both in-house and external datasets demonstrate the effectiveness of our framework for diagnosis with incomplete data compared with state-of-the-art baselines. In particular, cancer subtype classification using the CT data completed by our method achieves higher performance than classification using the given incomplete data.
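As a rough illustration of the joint objective described in this abstract, the sketch below couples an L1 synthesis loss for a missing phase with a classification loss computed on the completed volume, so that gradients from the classifier reach the generator and push it toward diagnostically useful synthesis. The module stand-ins, the 4-phase/1-missing assumption, and the weight `lambda_cls` are hypothetical; the paper's actual networks and losses are not reproduced here.

```python
# Hypothetical joint synthesis + classification step for incomplete multi-phase CT.
import torch
import torch.nn as nn
import torch.nn.functional as F


def joint_step(generator: nn.Module,
               classifier: nn.Module,
               phases: torch.Tensor,        # (B, 4, D, H, W), missing phase zeroed
               missing_idx: int,
               target_phase: torch.Tensor,  # ground-truth missing phase (B, D, H, W)
               labels: torch.Tensor,        # subtype labels (B,)
               lambda_cls: float = 1.0) -> torch.Tensor:
    """One joint step: reconstruction loss + classification loss on completed data."""
    available = torch.cat([phases[:, i:i + 1] for i in range(phases.size(1))
                           if i != missing_idx], dim=1)
    synthesized = generator(available)                        # (B, 1, D, H, W)
    recon_loss = F.l1_loss(synthesized.squeeze(1), target_phase)

    completed = phases.clone()
    completed[:, missing_idx:missing_idx + 1] = synthesized   # gradient flows back
    cls_loss = F.cross_entropy(classifier(completed), labels)

    return recon_loss + lambda_cls * cls_loss


if __name__ == '__main__':
    gen = nn.Conv3d(3, 1, kernel_size=3, padding=1)           # toy stand-in modules
    cls = nn.Sequential(nn.Flatten(), nn.Linear(4 * 8 * 16 * 16, 5))
    x = torch.randn(2, 4, 8, 16, 16)
    loss = joint_step(gen, cls, x, 1, torch.randn(2, 8, 16, 16), torch.tensor([0, 3]))
    loss.backward()
```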


Subjects
Kidney Neoplasms; Neural Networks, Computer; Humans; Tomography, X-Ray Computed/methods; Kidney; Kidney Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods
3.
Sci Rep; 12(1): 21948, 2022 Dec 19.
Article in English | MEDLINE | ID: mdl-36536017

ABSTRACT

Deep-learning-based survival prediction can assist doctors by providing additional diagnostic information through estimates of either the risk or the time of death. The former focuses on ranking patients by risk of death based on the Cox model, whereas the latter directly predicts each patient's survival time. However, survival time prediction can produce incorrectly ordered estimates, particularly for patients with close observation times, leading to low prediction accuracy. Therefore, in this paper, we present a whole slide image (WSI)-based survival time prediction method that takes advantage of both risk and time prediction. Specifically, we combine the two approaches by extracting risk prediction features and using them to guide survival time prediction. Considering the high resolution of WSIs, we extract tumor patches from the WSIs using a pre-trained tumor classifier and apply a graph convolutional network to aggregate information across these patches effectively. Extensive experiments demonstrate that the proposed method significantly improves time prediction accuracy compared with direct prediction of survival times without guidance, and outperforms existing methods.
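A minimal sketch of the risk-guided idea summarized above, assuming pre-extracted patch features and a simple patch adjacency graph: one graph-convolution layer aggregates patch features into a slide embedding, a risk branch produces a Cox-style risk score, and its intermediate features are concatenated into the survival-time head. All layer sizes, the graph construction, and the concatenation-based guidance are assumptions rather than the paper's exact architecture.

```python
# Hypothetical risk-guided survival time prediction over WSI tumor patches.
import torch
import torch.nn as nn


class RiskGuidedSurvivalNet(nn.Module):
    def __init__(self, in_dim: int = 512, hid: int = 128):
        super().__init__()
        self.gcn = nn.Linear(in_dim, hid)                       # graph-conv weights
        self.risk_branch = nn.Sequential(nn.Linear(hid, hid), nn.ReLU())
        self.risk_head = nn.Linear(hid, 1)                      # relative risk (Cox-style)
        self.time_head = nn.Linear(hid + hid, 1)                # time head, risk-guided

    def forward(self, patch_feats: torch.Tensor, adj: torch.Tensor):
        # patch_feats: (num_patches, in_dim); adj: (num_patches, num_patches)
        # Symmetric normalization A_hat = D^-1/2 (A + I) D^-1/2, then one GCN layer.
        a = adj + torch.eye(adj.size(0))
        d_inv_sqrt = a.sum(dim=1).pow(-0.5)
        a_hat = d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]
        h = torch.relu(a_hat @ self.gcn(patch_feats))           # (num_patches, hid)
        slide = h.mean(dim=0, keepdim=True)                     # slide-level embedding
        risk_feat = self.risk_branch(slide)                     # risk-branch features
        risk = self.risk_head(risk_feat)
        time = self.time_head(torch.cat([slide, risk_feat], dim=1))
        return risk.squeeze(-1), time.squeeze(-1)


if __name__ == '__main__':
    feats = torch.randn(30, 512)                 # 30 tumor patches from one WSI
    adj = (torch.rand(30, 30) > 0.8).float()
    adj = ((adj + adj.t()) > 0).float()          # symmetric toy adjacency
    print(RiskGuidedSurvivalNet()(feats, adj))
```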


Subjects
Awareness; Physicians; Humans; Records; Risk Factors
4.
NPJ Precis Oncol; 5(1): 54, 2021 Jun 18.
Article in English | MEDLINE | ID: mdl-34145374

ABSTRACT

In 2020, an estimated 73,750 kidney cancer cases were diagnosed and 14,830 people died of the disease in the United States. Preoperative multi-phase abdominal computed tomography (CT) is often used for detecting lesions and classifying histologic subtypes of renal tumors to avoid unnecessary biopsy or surgery. However, there is inter-observer variability due to subtle differences in the imaging features of tumor subtypes, which makes treatment decisions challenging. While deep learning has recently been applied to the automated diagnosis of renal tumors, classification across a wide range of subtype classes has not been sufficiently studied. In this paper, we propose an end-to-end deep learning model for the differential diagnosis of five major histologic subtypes of renal tumors, including both benign and malignant tumors, on multi-phase CT. Our model is a unified framework that simultaneously identifies lesions and classifies subtypes for diagnosis without manual intervention. We trained and tested the model using CT data from 308 patients who underwent nephrectomy for renal tumors. The model achieved an area under the curve (AUC) of 0.889 and outperformed radiologists for most subtypes. We further validated the model on an independent dataset of 184 patients from The Cancer Imaging Archive (TCIA). The AUC for this dataset was 0.855, and the model performed comparably to radiologists. These results indicate that our model can achieve similar or better diagnostic performance than radiologists in differentiating a wide range of renal tumors on multi-phase CT.
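A minimal sketch of a shared-encoder design consistent with this description: a single 3D backbone over the multi-phase CT volume feeds both a lesion segmentation head and a subtype classification head, with classification features pooled inside the predicted lesion mask so that detection and subtype classification are trained in one network. Layer sizes, the soft masked pooling, and the toy demo are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical joint lesion identification + subtype classification on multi-phase CT.
import torch
import torch.nn as nn


class JointLesionSubtypeNet(nn.Module):
    def __init__(self, phases: int = 4, ch: int = 16, num_subtypes: int = 5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(phases, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv3d(ch, 1, kernel_size=1)   # lesion probability map
        self.cls_head = nn.Linear(ch, num_subtypes)       # subtype logits

    def forward(self, ct: torch.Tensor):
        # ct: (B, phases, D, H, W)
        feats = self.encoder(ct)                           # (B, ch, D, H, W)
        seg_logits = self.seg_head(feats)                  # (B, 1, D, H, W)
        mask = torch.sigmoid(seg_logits)
        # Soft lesion-masked average pooling of encoder features.
        pooled = (feats * mask).sum(dim=(2, 3, 4)) / (mask.sum(dim=(2, 3, 4)) + 1e-6)
        return seg_logits, self.cls_head(pooled)


if __name__ == '__main__':
    seg, logits = JointLesionSubtypeNet()(torch.randn(1, 4, 16, 32, 32))
    print(seg.shape, logits.shape)   # (1, 1, 16, 32, 32), (1, 5)
```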
