Results 1 - 13 of 13
1.
Nat Med ; 30(4): 1174-1190, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38641744

ABSTRACT

Despite increasing numbers of regulatory approvals, deep learning-based computational pathology systems often overlook the impact of demographic factors on performance, potentially leading to biases. This concern is all the more important as computational pathology has leveraged large public datasets that underrepresent certain demographic groups. Using publicly available data from The Cancer Genome Atlas and the EBRAINS brain tumor atlas, as well as internal patient data, we show that whole-slide image classification models display marked performance disparities across different demographic groups when used to subtype breast and lung carcinomas and to predict IDH1 mutations in gliomas. For example, when using common modeling approaches, we observed performance gaps (in area under the receiver operating characteristic curve) between white and Black patients of 3.0% for breast cancer subtyping, 10.9% for lung cancer subtyping and 16.0% for IDH1 mutation prediction in gliomas. We found that richer feature representations obtained from self-supervised vision foundation models reduce performance variations between groups. These representations provide improvements upon weaker models even when those weaker models are combined with state-of-the-art bias mitigation strategies and modeling choices. Nevertheless, self-supervised vision foundation models do not fully eliminate these discrepancies, highlighting the continuing need for bias mitigation efforts in computational pathology. Finally, we demonstrate that our results extend to other demographic factors beyond patient race. Given these findings, we encourage regulatory and policy agencies to integrate demographic-stratified evaluation into their assessment guidelines.
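The demographic-stratified evaluation the authors call for is simple to operationalize: compute the metric separately per subgroup and report the gap. A minimal Python sketch (variable names and the synthetic data are illustrative, not from the paper's code):

```python
# Hedged sketch of demographic-stratified evaluation, assuming you already
# have per-patient model scores, binary labels, and a demographic field.
import numpy as np
from sklearn.metrics import roc_auc_score

def stratified_auc(scores, labels, groups):
    """Return AUC per demographic group and the largest pairwise gap."""
    aucs = {}
    for g in np.unique(groups):
        mask = groups == g
        # AUC is undefined if a subgroup contains only one class.
        if len(np.unique(labels[mask])) == 2:
            aucs[g] = roc_auc_score(labels[mask], scores[mask])
    gap = max(aucs.values()) - min(aucs.values()) if len(aucs) > 1 else 0.0
    return aucs, gap

# Example with synthetic data:
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 500)
groups = rng.choice(["white", "Black"], 500)
scores = labels * 0.5 + rng.normal(0, 0.5, 500)  # imperfect classifier
per_group, gap = stratified_auc(scores, labels, groups)
print(per_group, f"gap = {gap:.3f}")
```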


Subjects
Glioma, Lung Neoplasms, Humans, Bias, Black or African American, Black People, Demography, Diagnostic Errors, Glioma/diagnosis, Glioma/genetics, White People
2.
Nat Med ; 30(3): 850-862, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38504018

ABSTRACT

Quantitative evaluation of tissue images is crucial for computational pathology (CPath) tasks, requiring the objective characterization of histopathological entities from whole-slide images (WSIs). The high resolution of WSIs and the variability of morphological features present significant challenges, complicating the large-scale annotation of data for high-performance applications. To address this challenge, current efforts have proposed the use of pretrained image encoders through transfer learning from natural image datasets or self-supervised learning on publicly available histopathology datasets, but have not been extensively developed and evaluated across diverse tissue types at scale. We introduce UNI, a general-purpose self-supervised model for pathology, pretrained using more than 100 million images from over 100,000 diagnostic H&E-stained WSIs (>77 TB of data) across 20 major tissue types. The model was evaluated on 34 representative CPath tasks of varying diagnostic difficulty. In addition to outperforming previous state-of-the-art models, we demonstrate new modeling capabilities in CPath such as resolution-agnostic tissue classification, slide classification using few-shot class prototypes, and disease subtyping generalization in classifying up to 108 cancer types in the OncoTree classification system. UNI advances unsupervised representation learning at scale in CPath in terms of both pretraining data and downstream evaluation, enabling data-efficient artificial intelligence models that can generalize and transfer to a wide range of diagnostically challenging tasks and clinical workflows in anatomic pathology.
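As one concrete reading of "slide classification using few-shot class prototypes": average the embeddings of a handful of labeled slides per class into prototypes, then assign new slides by nearest-prototype similarity. The sketch below assumes slide-level embeddings have already been extracted by a pretrained encoder such as UNI; the class-mean/cosine scheme is a standard prototype formulation, not necessarily the paper's exact recipe:

```python
# Hedged sketch of few-shot class-prototype classification over precomputed
# slide embeddings. Dimensions and data are illustrative.
import numpy as np

def build_prototypes(embeddings, labels):
    """Average the few labeled embeddings of each class into one prototype."""
    return {c: embeddings[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(query, prototypes):
    """Assign the query slide to the class of the nearest (cosine) prototype."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(prototypes, key=lambda c: cos(query, prototypes[c]))

# Example: 4 support slides per class, 1024-dim embeddings.
rng = np.random.default_rng(0)
support = rng.normal(size=(8, 1024))
support_labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
protos = build_prototypes(support, support_labels)
print(classify(rng.normal(size=1024), protos))
```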


Subjects
Artificial Intelligence, Workflow
3.
ArXiv ; 2023 Aug 29.
Article in English | MEDLINE | ID: mdl-37693180

ABSTRACT

Tissue phenotyping is a fundamental computational pathology (CPath) task in learning objective characterizations of histopathologic biomarkers in anatomic pathology. However, whole-slide imaging (WSI) poses a complex computer vision problem in which the large-scale image resolutions of WSIs and the enormous diversity of morphological phenotypes preclude large-scale data annotation. Current efforts have proposed using pretrained image encoders with either transfer learning from natural image datasets or self-supervised pretraining on publicly available histopathology datasets, but these have not been extensively developed and evaluated across diverse tissue types at scale. We introduce UNI, a general-purpose self-supervised model for pathology, pretrained using over 100 million tissue patches from over 100,000 diagnostic haematoxylin and eosin-stained WSIs across 20 major tissue types, and evaluated on 33 representative clinical tasks in CPath of varying diagnostic difficulty. In addition to outperforming previous state-of-the-art models, we demonstrate new modeling capabilities in CPath such as resolution-agnostic tissue classification, slide classification using few-shot class prototypes, and disease subtyping generalization in classifying up to 108 cancer types in the OncoTree code classification system. UNI advances unsupervised representation learning at scale in CPath in terms of both pretraining data and downstream evaluation, enabling data-efficient AI models that can generalize and transfer to a gamut of diagnostically challenging tasks and clinical workflows in anatomic pathology.

4.
Cancer Cell ; 40(10): 1095-1110, 2022 10 10.
Article in English | MEDLINE | ID: mdl-36220072

ABSTRACT

In oncology, the patient state is characterized by a whole spectrum of modalities, ranging from radiology, histology, and genomics to electronic health records. Current artificial intelligence (AI) models operate mainly in the realm of a single modality, neglecting the broader clinical context, which inevitably diminishes their potential. Integration of different data modalities provides opportunities to increase the robustness and accuracy of diagnostic and prognostic models, bringing AI closer to clinical practice. AI models are also capable of discovering novel patterns within and across modalities that can explain differences in patient outcomes or treatment resistance. The insights gleaned from such models can guide exploration studies and contribute to the discovery of novel biomarkers and therapeutic targets. To support these advances, here we present a synopsis of AI methods and strategies for multimodal data fusion and association discovery. We outline approaches for AI interpretability and directions for AI-driven exploration through multimodal data interconnections. We examine challenges in clinical adoption and discuss emerging solutions.


Subjects
Artificial Intelligence, Radiology, Electronic Health Records, Genomics, Humans, Medical Oncology
5.
Cancer Cell ; 40(8): 865-878.e6, 2022 08 08.
Article in English | MEDLINE | ID: mdl-35944502

ABSTRACT

The rapidly emerging field of computational pathology has demonstrated promise in developing objective prognostic models from histology images. However, most prognostic models are either based on histology or genomics alone and do not address how these data sources can be integrated to develop joint image-omic prognostic models. Additionally, it is of interest to identify, from these models, explainable morphological and molecular descriptors that govern prognosis. We use multimodal deep learning to jointly examine pathology whole-slide images and molecular profile data from 14 cancer types. Our weakly supervised, multimodal deep-learning algorithm is able to fuse these heterogeneous modalities to predict outcomes and discover prognostic features that correlate with poor and favorable outcomes. We present all analyses for morphological and molecular correlates of patient prognosis across the 14 cancer types at both a disease and a patient level in an interactive open-access database to allow for further exploration, biomarker discovery, and feature assessment.
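One common recipe for this kind of joint image-omic survival modeling, shown here only as a hedged sketch: concatenate a slide-level embedding with a molecular-profile vector and train a small network under the Cox partial-likelihood loss. The architecture and dimensions below are illustrative assumptions, not the paper's published model:

```python
# Hedged sketch of late image-omic fusion with a Cox survival objective.
import torch
import torch.nn as nn

class LateFusionSurvival(nn.Module):
    def __init__(self, wsi_dim=1024, omic_dim=200, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(wsi_dim + omic_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar log-risk score per patient
        )

    def forward(self, wsi_emb, omic_vec):
        return self.net(torch.cat([wsi_emb, omic_vec], dim=-1)).squeeze(-1)

def cox_loss(risk, time, event):
    """Negative Cox partial likelihood; `event` is 1.0 for observed deaths."""
    order = torch.argsort(time, descending=True)  # risk sets by decreasing time
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)  # log of risk-set denominators
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1)
```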


Subjects
Deep Learning, Neoplasms, Algorithms, Genomics/methods, Humans, Neoplasms/genetics, Neoplasms/pathology, Prognosis
6.
Nat Med ; 28(3): 575-582, 2022 03.
Article in English | MEDLINE | ID: mdl-35314822

ABSTRACT

Endomyocardial biopsy (EMB) screening represents the standard of care for detecting allograft rejections after heart transplant. Manual interpretation of EMBs is affected by substantial interobserver and intraobserver variability, which often leads to inappropriate treatment with immunosuppressive drugs, unnecessary follow-up biopsies and poor transplant outcomes. Here we present a deep learning-based artificial intelligence (AI) system for automated assessment of gigapixel whole-slide images obtained from EMBs, which simultaneously addresses detection, subtyping and grading of allograft rejection. To assess model performance, we curated a large dataset from the United States, as well as independent test cohorts from Turkey and Switzerland, which together include wide variability across populations, sample preparation protocols and slide-scanning instrumentation. The model detects allograft rejection with an area under the receiver operating characteristic curve (AUC) of 0.962; assesses the cellular and antibody-mediated rejection type with AUCs of 0.958 and 0.874, respectively; detects Quilty B lesions, benign mimics of rejection, with an AUC of 0.939; and differentiates between low-grade and high-grade rejections with an AUC of 0.833. In a human reader study, the AI system showed non-inferior performance to conventional assessment and reduced interobserver variability and assessment time. This robust evaluation of cardiac allograft rejection paves the way for clinical trials to establish the efficacy of AI-assisted EMB assessment and its potential for improving heart transplant outcomes.
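The abstract implies a multi-task layout: one shared slide-level representation feeding separate heads for detection, rejection type, Quilty B lesions, and grade. A hedged PyTorch sketch of that layout (feature dimension and head design are assumptions, not the published architecture):

```python
# Illustrative multi-task heads over a shared slide-level feature vector.
import torch
import torch.nn as nn

class RejectionHeads(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.detect = nn.Linear(feat_dim, 1)   # rejection vs. no rejection
        self.subtype = nn.Linear(feat_dim, 2)  # cellular vs. antibody-mediated
        self.quilty = nn.Linear(feat_dim, 1)   # Quilty B lesion present
        self.grade = nn.Linear(feat_dim, 1)    # low- vs. high-grade rejection

    def forward(self, slide_feat):
        # Each head is evaluated (and can be AUC-scored) independently.
        return {
            "detect": self.detect(slide_feat),
            "subtype": self.subtype(slide_feat),
            "quilty": self.quilty(slide_feat),
            "grade": self.grade(slide_feat),
        }
```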


Subjects
Deep Learning, Graft Rejection, Allografts, Artificial Intelligence, Biopsy, Graft Rejection/diagnosis, Humans, Myocardium/pathology
7.
IEEE Trans Med Imaging ; 41(4): 757-770, 2022 04.
Article in English | MEDLINE | ID: mdl-32881682

ABSTRACT

Cancer diagnosis, prognosis, and therapeutic response predictions are based on morphological information from histology slides and molecular profiles from genomic data. However, most deep learning-based objective outcome prediction and grading paradigms are based on histology or genomics alone and do not make use of the complementary information in an intuitive manner. In this work, we propose Pathomic Fusion, an interpretable strategy for end-to-end multimodal fusion of histology image and genomic (mutations, CNV, RNA-Seq) features for survival outcome prediction. Our approach models pairwise feature interactions across modalities by taking the Kronecker product of unimodal feature representations, and controls the expressiveness of each representation via a gating-based attention mechanism. Following supervised learning, we are able to interpret and saliently localize features across each modality, and understand how feature importance shifts when conditioning on multimodal input. We validate our approach using glioma and clear cell renal cell carcinoma datasets from The Cancer Genome Atlas (TCGA), which contain paired whole-slide image, genotype, and transcriptome data with ground-truth survival and histologic grade labels. In a 15-fold cross-validation, our results demonstrate that the proposed multimodal fusion paradigm improves upon prognostic determinations from ground-truth grading and molecular subtyping, as well as upon unimodal deep networks trained on histology or genomic data alone. The proposed method provides insight into how to train deep networks on multimodal biomedical data in an intuitive manner, which will be useful for other problems in medicine that seek to combine heterogeneous data streams for understanding diseases and predicting response and resistance to treatment. Code and trained models are made available at: https://github.com/mahmoodlab/PathomicFusion.
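The two ingredients named here, gating-based attention and a Kronecker product of unimodal representations, can be sketched compactly. The simplification to two modalities and the exact layer shapes are ours; the released code at the repository above is the authoritative implementation:

```python
# Hedged sketch of gated Kronecker-product fusion for two modalities.
import torch
import torch.nn as nn

class GatedKroneckerFusion(nn.Module):
    def __init__(self, dim_h=32, dim_g=32):
        super().__init__()
        self.gate_h = nn.Sequential(nn.Linear(dim_h + dim_g, dim_h), nn.Sigmoid())
        self.gate_g = nn.Sequential(nn.Linear(dim_h + dim_g, dim_g), nn.Sigmoid())

    def forward(self, h, g):          # h: histology features, g: genomic features
        joint = torch.cat([h, g], dim=-1)
        h = h * self.gate_h(joint)    # attention gates control expressiveness
        g = g * self.gate_g(joint)
        ones = torch.ones(h.size(0), 1, device=h.device)
        # Appending a constant 1 keeps unimodal terms alongside the
        # pairwise interactions in the outer product.
        h1, g1 = torch.cat([h, ones], -1), torch.cat([g, ones], -1)
        fused = torch.bmm(h1.unsqueeze(2), g1.unsqueeze(1)).flatten(1)
        return fused                  # shape: (batch, (dim_h+1)*(dim_g+1))
```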


Subjects
Genomics, Glioma, Genomics/methods, Glioma/pathology, Histological Techniques, Humans, Prognosis
9.
Nat Biomed Eng ; 5(6): 555-570, 2021 06.
Article in English | MEDLINE | ID: mdl-33649564

ABSTRACT

Deep-learning methods for computational pathology require either manual annotation of gigapixel whole-slide images (WSIs) or large datasets of WSIs with slide-level labels, and typically suffer from poor domain adaptation and interpretability. Here we report an interpretable weakly supervised deep-learning method for data-efficient WSI processing and learning that requires only slide-level labels. The method, which we named clustering-constrained-attention multiple-instance learning (CLAM), uses attention-based learning to identify subregions of high diagnostic value to accurately classify whole slides, and instance-level clustering over the identified representative regions to constrain and refine the feature space. By applying CLAM to the subtyping of renal cell carcinoma and non-small-cell lung cancer as well as the detection of lymph node metastasis, we show that it can be used to localize well-known morphological features on WSIs without the need for spatial labels, that it outperforms standard weakly supervised classification algorithms, and that it is adaptable to independent test cohorts, smartphone microscopy and varying tissue content.
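The core of CLAM's pooling step, attention-based multiple-instance learning, fits in a few lines. This sketch omits the instance-level clustering branch and uses illustrative dimensions; see the released CLAM code (github.com/mahmoodlab/CLAM) for the full model:

```python
# Minimal attention-based MIL pooling over precomputed patch embeddings.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=1024, attn_dim=256, n_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, attn_dim), nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patches):                        # (n_patches, feat_dim)
        a = torch.softmax(self.attention(patches), dim=0)  # (n_patches, 1)
        slide = (a * patches).sum(dim=0)               # attention-weighted pooling
        return self.classifier(slide), a               # slide logits + attention
```

The returned attention weights are what make localization possible without spatial labels: high-attention patches mark the regions driving the slide-level prediction.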


Subjects
Carcinoma, Non-Small-Cell Lung/diagnostic imaging, Carcinoma, Renal Cell/diagnostic imaging, Deep Learning, Image Interpretation, Computer-Assisted/statistics & numerical data, Kidney Neoplasms/diagnostic imaging, Lung Neoplasms/diagnostic imaging, Area Under Curve, Carcinoma, Non-Small-Cell Lung/pathology, Carcinoma, Renal Cell/pathology, Histocytochemistry/methods, Histocytochemistry/statistics & numerical data, Humans, Kidney Neoplasms/pathology, Lung Neoplasms/pathology, Lymphatic Metastasis, Microscopy/methods, Microscopy/statistics & numerical data, Smartphone
10.
Med Image Anal ; 70: 101990, 2021 05.
Article in English | MEDLINE | ID: mdl-33609920

ABSTRACT

Current capsule endoscopes and next-generation robotic capsules for diagnosis and treatment of gastrointestinal diseases are complex cyber-physical platforms that must orchestrate complex software and hardware functions. The desired tasks for these systems include visual localization, depth estimation, 3D mapping, disease detection and segmentation, automated navigation, active control, path realization and optional therapeutic modules such as targeted drug delivery and biopsy sampling. Data-driven algorithms promise to enable many advanced functionalities for capsule endoscopes, but real-world data are challenging to obtain. Physically realistic simulations providing synthetic data have emerged as a solution for the development of data-driven algorithms. In this work, we present a comprehensive simulation platform for capsule endoscopy operations and introduce VR-Caps, a virtual active capsule environment that simulates a range of normal and abnormal tissue conditions (e.g., inflated, dry, wet), varied organ types, capsule endoscope designs (e.g., mono, stereo, dual and 360° camera), and the type, number, strength, and placement of internal and external magnetic sources that enable active locomotion. VR-Caps makes it possible to develop, optimize, and test medical imaging and analysis software, independently or jointly, for current and next-generation endoscopic capsule systems. To validate this approach, we train state-of-the-art deep neural networks to accomplish various medical image analysis tasks using simulated data from VR-Caps and evaluate the performance of these models on real medical data. Results demonstrate the usefulness and effectiveness of the proposed virtual platform in developing algorithms that quantify fractional coverage, camera trajectory, 3D map reconstruction, and disease classification. All of the code, pretrained weights and 3D organ models of the virtual environment, with detailed instructions on how to set up and use the environment, are made publicly available at https://github.com/CapsuleEndoscope/VirtualCapsuleEndoscopy; a video demonstration can be seen in the supplementary videos (Video-I).


Subjects
Capsule Endoscopy, Robotics, Algorithms, Computer Simulation, Endoscopy, Humans, Neural Networks, Computer
11.
IEEE Trans Med Imaging ; 39(12): 4297-4309, 2020 12.
Article in English | MEDLINE | ID: mdl-32795966

ABSTRACT

Although wireless capsule endoscopy is the preferred modality for diagnosis and assessment of small bowel diseases, the poor camera resolution is a substantial limitation for both subjective and automated diagnostics. Enhanced-resolution endoscopy has been shown to improve adenoma detection rates for conventional endoscopy and is likely to do the same for capsule endoscopy. In this work, we propose and quantitatively validate EndoL2H, a novel framework that learns a mapping from low- to high-resolution endoscopic images. We combine conditional adversarial networks with a spatial attention block to improve resolution by factors of 8×, 10× and 12×. Quantitative and qualitative studies demonstrate the superiority of EndoL2H over the state-of-the-art deep super-resolution methods Deep Back-Projection Networks (DBPN), Deep Residual Channel Attention Networks (RCAN) and Super-Resolution Generative Adversarial Network (SRGAN). Mean Opinion Score (MOS) tests were performed by 30 gastroenterologists to qualitatively assess and confirm the clinical relevance of the approach. EndoL2H is generally applicable to any endoscopic capsule system and has the potential to improve diagnosis and better harness computational approaches for polyp detection and characterization. Our code and trained models are available at https://github.com/CapsuleEndoscope/EndoL2H.
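A generic spatial attention block of the kind described, a learned single-channel map that reweights feature locations, can be sketched as follows; this is a common formulation, not the exact EndoL2H block:

```python
# Hedged sketch of a spatial attention block for a super-resolution generator.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=1), nn.ReLU(),
            nn.Conv2d(channels // 2, 1, kernel_size=1), nn.Sigmoid(),
        )

    def forward(self, x):            # x: (batch, channels, H, W)
        return x * self.attn(x)      # per-pixel gate in [0, 1] reweights features
```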


Subjects
Capsule Endoscopy
12.
IEEE Trans Med Imaging ; 39(11): 3257-3267, 2020 11.
Article in English | MEDLINE | ID: mdl-31283474

ABSTRACT

Nuclei segmentation is a fundamental task for various computational pathology applications, including nuclei morphology analysis, cell type classification, and cancer grading. Deep learning has emerged as a powerful approach to segmenting nuclei, but the accuracy of convolutional neural networks (CNNs) depends on the volume and quality of labeled histopathology data available for training. In particular, conventional CNN-based approaches lack structured prediction capabilities, which are required to distinguish overlapping and clumped nuclei. Here, we present an approach to nuclei segmentation that overcomes these challenges by utilizing a conditional generative adversarial network (cGAN) trained with synthetic and real data. We generate a large dataset of H&E training images with perfect nuclei segmentation labels using an unpaired GAN framework. This synthetic data, along with real histopathology data from six different organs, is used to train a conditional GAN with spectral normalization and gradient penalty for nuclei segmentation. This adversarial regression framework enforces higher-order spatial consistency compared with conventional CNN models. We demonstrate that this nuclei segmentation approach generalizes across different organs, sites, patients and disease states, and outperforms conventional approaches, especially in isolating individual and overlapping nuclei.
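The two training stabilizers named in the abstract, spectral normalization on the discriminator and a gradient penalty, look roughly like this in PyTorch; channel counts and network depth are illustrative assumptions:

```python
# Hedged sketch: spectrally normalized patch discriminator + gradient penalty.
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

def make_discriminator(in_ch=4):  # e.g., 3 image channels + 1 mask channel
    return nn.Sequential(
        spectral_norm(nn.Conv2d(in_ch, 64, 4, stride=2, padding=1)), nn.LeakyReLU(0.2),
        spectral_norm(nn.Conv2d(64, 128, 4, stride=2, padding=1)), nn.LeakyReLU(0.2),
        spectral_norm(nn.Conv2d(128, 1, 4, padding=1)),  # patch-level scores
    )

def gradient_penalty(disc, real, fake):
    """Penalize the discriminator's gradient norm on real/fake interpolates."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(disc(mix).sum(), mix, create_graph=True)[0]
    return ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
```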


Subjects
Image Processing, Computer-Assisted, Neural Networks, Computer, Humans
13.
PLoS One ; 10(11): e0142554, 2015.
Article in English | MEDLINE | ID: mdl-26565809

ABSTRACT

Human pluripotent stem cells (hPSCs) represent very promising resources for cell-based regenerative medicine. It is essential to determine the biological implications of some fundamental physiological processes (such as glycogen metabolism) in these stem cells. In this report, we employ electron microscopy, immunofluorescence microscopy, and biochemical methods to study glycogen synthesis in hPSCs. Our results indicate that there is a high level of glycogen synthesis (0.28 to 0.62 µg/µg protein) in undifferentiated human embryonic stem cells (hESCs) compared with the glycogen levels (0 to 0.25 µg/µg protein) reported in human cancer cell lines. Moreover, we found that glycogen synthesis was regulated by bone morphogenetic protein 4 (BMP-4) and the glycogen synthase kinase 3 (GSK-3) pathway. Our observation of glycogen bodies and sustained expression of the pluripotency factor Oct-4 mediated by the potent GSK-3 inhibitor CHIR-99021 reveals an altered pluripotent state in hPSC culture. We further confirmed glycogen variations under different naïve pluripotent cell growth conditions based on the addition of the GSK-3 inhibitor BIO. Our data suggest that primed hPSCs treated with naïve growth conditions acquire altered pluripotent states, similar to those of naïve-like hPSCs, with increased glycogen synthesis. Furthermore, we found that suppression of phosphorylated glycogen synthase was an underlying mechanism responsible for the altered glycogen synthesis. Thus, our findings regarding the dynamic changes in glycogen metabolism provide new markers to assess the energetic and pluripotent states of hPSCs. The components of glycogen metabolic pathways offer new assays to delineate previously unrecognized properties of hPSCs under different growth conditions.


Subjects
Glycogen/metabolism, Pluripotent Stem Cells/cytology, Pluripotent Stem Cells/metabolism, Bone Morphogenetic Protein 4/metabolism, Cell Differentiation, Cell Line, Cell Proliferation, Glycogen Synthase/metabolism, Glycogen Synthase Kinase 3/metabolism, Humans, Induced Pluripotent Stem Cells/cytology, Induced Pluripotent Stem Cells/metabolism