Results 1 - 2 of 2
1.
Am J Pathol ; 194(7): 1285-1293, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38588853

ABSTRACT

Bronchial premalignant lesions (PMLs) precede the development of invasive lung squamous cell carcinoma (LUSC), posing a significant challenge: distinguishing lesions likely to advance to LUSC from those that might regress without intervention. This study introduced a novel computational approach, the Graph Perceiver Network, which leverages hematoxylin and eosin-stained whole slide images to stratify endobronchial biopsies of PMLs across a spectrum from normal to tumor lung tissue. The Graph Perceiver Network outperformed existing frameworks in classification accuracy when predicting LUSC, lung adenocarcinoma, and nontumor lung tissue on The Cancer Genome Atlas and Clinical Proteomic Tumor Analysis Consortium datasets of lung resection tissues, while efficiently generating pathologist-aligned, class-specific heatmaps. The network was further tested on endobronchial biopsies from two data cohorts spanning normal to carcinoma in situ histology. It demonstrated a unique capability to differentiate carcinoma in situ lung squamous PMLs by their progression status to invasive carcinoma. The network may have utility in stratifying PMLs for chemoprevention trials or more aggressive follow-up.
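The core architectural idea behind a Perceiver-style slide classifier, as described above, is that a small, fixed set of learned latent vectors cross-attends to an arbitrary number of patch embeddings, compressing a variable-size whole slide image into a fixed-size representation. A minimal sketch of that pooling step, with all names, dimensions, and the mean-pooling readout being illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_pool(patch_feats, latents):
    """One cross-attention step: the latents (queries) attend to the
    patch embeddings (keys/values), so a slide with any number of
    patches is compressed into a fixed number of latent vectors."""
    d = patch_feats.shape[-1]
    scores = latents @ patch_feats.T / np.sqrt(d)   # (L, N)
    weights = softmax(scores, axis=-1)              # attention over patches
    return weights @ patch_feats                    # (L, D) pooled latents

rng = np.random.default_rng(0)
patches = rng.normal(size=(500, 64))   # 500 patch embeddings, 64-dim (toy)
latents = rng.normal(size=(8, 64))     # 8 learned latent queries (toy)
pooled = cross_attention_pool(patches, latents)
slide_vector = pooled.mean(axis=0)     # fixed-size slide representation
print(pooled.shape, slide_vector.shape)  # → (8, 64) (64,)
```

The key property is that `pooled` has the same shape no matter how many patches the slide contains, which is what lets one classifier head handle biopsies and resections of very different sizes.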


Subject(s)
Carcinoma, Squamous Cell; Lung Neoplasms; Precancerous Lesions; Humans; Precancerous Lesions/pathology; Lung Neoplasms/pathology; Lung Neoplasms/genetics; Carcinoma, Squamous Cell/pathology
2.
IEEE Trans Med Imaging; 41(11): 3003-3015, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35594209

ABSTRACT

Deep learning is a powerful tool for whole slide image (WSI) analysis. Typically, in supervised deep learning, a WSI is divided into small patches, a model is trained on those patches, and the patch-level outcomes are aggregated to estimate disease grade. However, patch-based methods introduce label noise during training by assuming that each patch is an independent sample with the same label as the WSI, and they neglect the overall WSI-level information that is significant for disease grading. Here we present a Graph-Transformer (GT), called GTP, that fuses a graph-based representation of a WSI with a vision transformer for processing pathology images to predict disease grade. We selected 4,818 WSIs from the Clinical Proteomic Tumor Analysis Consortium (CPTAC), the National Lung Screening Trial (NLST), and The Cancer Genome Atlas (TCGA), and used GTP to distinguish adenocarcinoma (LUAD) and squamous cell carcinoma (LSCC) from adjacent non-cancerous tissue (normal). First, using NLST data, we developed a contrastive learning framework to generate a feature extractor. This allowed us to compute feature vectors for individual WSI patches, which represent the nodes of the graph on which the GTP framework is constructed. Our model trained on the CPTAC data achieved consistently high performance on three-label classification (normal versus LUAD versus LSCC: mean accuracy = 91.2 ± 2.5%) based on five-fold cross-validation, and mean accuracy = 82.3 ± 1.0% on external test data (TCGA). We also introduced a graph-based saliency mapping technique, called GraphCAM, that can identify regions highly associated with the class label. Our findings demonstrate that GTP is an interpretable and effective deep learning framework for WSI-level classification.
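The patch-to-graph construction described above (patch feature vectors as nodes, with edges linking patches that are spatially adjacent on the slide) can be sketched roughly as follows. The 8-connected-neighbour adjacency rule, the patch size, and the toy features are assumptions for illustration, not the exact GTP code:

```python
import numpy as np

def build_patch_graph(coords, feats, patch_size=512):
    """Build a WSI graph: each patch is a node carrying its feature
    vector; edges connect patches that are spatial neighbours
    (within one patch step in x and y, i.e. 8-connectivity)."""
    n = len(coords)
    adj = np.zeros((n, n), dtype=np.float32)
    for i in range(n):
        for j in range(i + 1, n):
            dx = abs(coords[i][0] - coords[j][0])
            dy = abs(coords[i][1] - coords[j][1])
            if max(dx, dy) <= patch_size and (dx or dy):
                adj[i, j] = adj[j, i] = 1.0  # undirected edge
    return adj, feats

# toy 2x2 grid of patches with 4-dim "contrastive" features
coords = [(0, 0), (512, 0), (0, 512), (512, 512)]
feats = np.eye(4, dtype=np.float32)
adj, node_feats = build_patch_graph(coords, feats)
print(adj.sum(axis=1))  # → [3. 3. 3. 3.]  (each corner patch has 3 neighbours)
```

The resulting adjacency matrix and node-feature matrix are exactly the inputs a graph layer (and, downstream, a transformer over node embeddings) consumes; a saliency map in the style of GraphCAM would then assign an importance score to each node and project it back onto the patch coordinates.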


Subject(s)
Image Processing, Computer-Assisted; Proteomics; Image Processing, Computer-Assisted/methods; Guanosine Triphosphate