Results 1 - 16 of 16
1.
Nat Rev Cancer ; 2024 May 16.
Article in English | MEDLINE | ID: mdl-38755439

ABSTRACT

Artificial intelligence (AI) has been commoditized: it has evolved from a specialty resource into a readily accessible tool for cancer researchers. AI-based tools can boost research productivity in daily workflows, but they can also extract hidden information from existing data, thereby enabling new scientific discoveries. Building basic literacy in these tools is useful for every cancer researcher. Researchers with a traditional biological science focus can use AI-based tools through off-the-shelf software, whereas those who are more computationally inclined can develop their own AI-based software pipelines. In this article, we provide a practical guide for non-computational cancer researchers to understand how AI-based tools can benefit them. We convey general principles of AI for applications in image analysis, natural language processing and drug discovery. In addition, we give examples of how non-computational researchers can get started on using AI productively in their own work.

2.
Histopathology ; 84(7): 1139-1153, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38409878

ABSTRACT

BACKGROUND: Artificial intelligence (AI) has numerous applications in pathology, supporting diagnosis and prognostication in cancer. However, most AI models are trained on highly selected data, typically one tissue slide per patient. In reality, especially for large surgical resection specimens, dozens of slides can be available for each patient. Manually sorting and labelling whole-slide images (WSIs) is a very time-consuming process, hindering the direct application of AI to the tissue samples collected from large cohorts. In this study, we addressed this issue by developing a deep-learning (DL)-based method for automatic curation of large pathology datasets with several slides per patient. METHODS: We collected multiple large multicentric datasets of colorectal cancer histopathological slides from the United Kingdom (FOXTROT, N = 21,384 slides; CR07, N = 7985 slides) and Germany (DACHS, N = 3606 slides). These datasets contained multiple types of tissue slides, including bowel resection specimens, endoscopic biopsies, lymph node resections, immunohistochemistry-stained slides, and tissue microarrays. We developed, trained, and tested a deep convolutional neural network model to predict the type of slide from the slide overview (thumbnail) image. The primary statistical endpoint was the macro-averaged area under the receiver operating characteristic curve (AUROC) for detection of the type of slide. RESULTS: In the primary dataset (FOXTROT), the algorithm achieved high classification performance, with an AUROC of 0.995 (95% confidence interval [CI]: 0.994-0.996), and accurately predicted the type of slide from the thumbnail image alone. In the two external test cohorts (CR07, DACHS), AUROCs of 0.982 (95% CI: 0.979-0.985) and 0.875 (95% CI: 0.864-0.887) were observed, indicating that the trained model generalizes to unseen datasets. With a confidence threshold of 0.95, the model reached an accuracy of 94.6% (7331 classified cases) in the CR07 cohort and 85.1% (2752 classified cases) in the DACHS cohort. CONCLUSION: Our findings show that the low-resolution thumbnail image is sufficient to accurately classify the type of slide in digital pathology. This can help researchers make the vast resource of existing pathology archives accessible to modern AI models with only minimal manual annotation.
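As an illustration of the approach described above, the following sketch shows how a thumbnail-level slide-type classifier with a confidence threshold could be assembled from standard libraries. The backbone, class names and file handling are assumptions for illustration, not the published implementation.

    # Sketch: classify slide thumbnails by slide type; keep only confident predictions.
    # Class names and the ResNet-18 backbone are illustrative assumptions.
    import torch
    import torch.nn.functional as F
    from torchvision import models, transforms
    from PIL import Image

    CLASSES = ["bowel_resection", "biopsy", "lymph_node", "IHC", "TMA"]  # assumed labels

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    model = models.resnet18(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))
    model.eval()  # in practice, weights would be loaded from a trained checkpoint

    def classify_thumbnail(path, threshold=0.95):
        """Return (label, confidence), or None if the prediction falls below the threshold."""
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            probs = F.softmax(model(x), dim=1).squeeze(0)
        conf, idx = probs.max(dim=0)
        return (CLASSES[idx], conf.item()) if conf >= threshold else None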


Subject(s)
Colorectal Neoplasms , Deep Learning , Humans , Colorectal Neoplasms/pathology , Colorectal Neoplasms/diagnosis , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/methods
3.
Nat Commun ; 14(1): 8290, 2023 Dec 14.
Article in English | MEDLINE | ID: mdl-38092727

ABSTRACT

Primary liver cancer arises from either hepatocytic or biliary lineage cells, giving rise to hepatocellular carcinoma (HCC) or intrahepatic cholangiocarcinoma (ICCA). Combined hepatocellular-cholangiocarcinomas (cHCC-CCA) exhibit equivocal or mixed features of both, causing diagnostic uncertainty and difficulty in determining proper management. Here, we perform comprehensive deep learning-based phenotyping of multiple patient cohorts. We show that deep learning can reproduce the diagnosis of HCC vs. CCA with high performance. We analyze a series of 405 cHCC-CCA patients and demonstrate that the model can reclassify these tumors as HCC or ICCA, and that the predictions are consistent with clinical outcomes, genetic alterations and in situ spatial gene expression profiling. This type of approach could improve treatment decisions and ultimately clinical outcomes for patients with rare and biphenotypic cancers such as cHCC-CCA.


Subject(s)
Bile Duct Neoplasms , Carcinoma, Hepatocellular , Cholangiocarcinoma , Deep Learning , Liver Neoplasms , Humans , Carcinoma, Hepatocellular/diagnosis , Carcinoma, Hepatocellular/genetics , Carcinoma, Hepatocellular/pathology , Liver Neoplasms/diagnosis , Liver Neoplasms/genetics , Liver Neoplasms/pathology , Cholangiocarcinoma/genetics , Cholangiocarcinoma/pathology , Bile Ducts, Intrahepatic , Bile Duct Neoplasms/diagnosis , Bile Duct Neoplasms/genetics , Bile Duct Neoplasms/pathology , Retrospective Studies
4.
Clin Cancer Res ; 29(2): 316-323, 2023 01 17.
Article in English | MEDLINE | ID: mdl-36083132

ABSTRACT

Immunotherapy with immune checkpoint inhibitors has become a standard treatment strategy for many types of solid tumors. However, the majority of patients with cancer will not respond, and predicting response to this therapy remains a challenge. Artificial intelligence (AI) methods can extract meaningful information from complex data, such as image data. In clinical routine, radiology and histopathology images are ubiquitously available. AI has been used to predict the response to immunotherapy from radiology or histopathology images, either directly or indirectly via surrogate markers. While none of these methods are currently used in clinical routine, academic and commercial developments are pointing toward potential clinical adoption in the near future. Here, we summarize the state of the art in AI-based image biomarkers for immunotherapy response based on radiology and histopathology images. We point out limitations, caveats, and pitfalls, including biases, generalizability, and explainability, which are relevant for researchers and health care providers alike, and we outline key clinical use cases of this new class of predictive biomarkers.


Subject(s)
Neoplasms , Radiology , Humans , Artificial Intelligence , Neoplasms/therapy , Biomarkers , Immunotherapy
5.
Nat Cancer ; 3(9): 1026-1038, 2022 09.
Article in English | MEDLINE | ID: mdl-36138135

ABSTRACT

Artificial intelligence (AI) methods have multiplied our capabilities to extract quantitative information from digital histopathology images. AI is expected to reduce workload for human experts, improve the objectivity and consistency of pathology reports, and have a clinical impact by extracting hidden information from routinely available data. Here, we describe how AI can be used to predict cancer outcome, treatment response, genetic alterations and gene expression from digitized histopathology slides. We summarize the underlying technologies and emerging approaches, noting limitations, including the need for data sharing and standards. Finally, we discuss the broader implications of AI in cancer research and oncology.


Subject(s)
Artificial Intelligence , Neoplasms , Humans , Medical Oncology/methods , Neoplasms/diagnosis , Research
7.
Nat Commun ; 13(1): 5711, 2022 09 29.
Article in English | MEDLINE | ID: mdl-36175413

ABSTRACT

Artificial intelligence (AI) can support diagnostic workflows in oncology by aiding diagnosis and providing biomarkers directly from routine pathology slides. However, AI applications are vulnerable to adversarial attacks. Hence, it is essential to quantify and mitigate this risk before widespread clinical use. Here, we show that convolutional neural networks (CNNs) are highly susceptible to white- and black-box adversarial attacks in clinically relevant weakly supervised classification tasks. Adversarially robust training and dual batch normalization (DBN) are possible mitigation strategies but require precise knowledge of the type of attack used at inference. We demonstrate that vision transformers (ViTs) perform on par with CNNs at baseline, but are orders of magnitude more robust to white- and black-box attacks. At a mechanistic level, we show that this is associated with a more robust latent representation of clinically relevant categories in ViTs compared with CNNs. Our results are in line with previous theoretical studies and provide empirical evidence that ViTs are robust learners in computational pathology. This implies that the large-scale rollout of AI models in computational pathology should rely on ViTs rather than CNN-based classifiers to provide inherent protection against perturbation of the input data, especially adversarial attacks.
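To make the white-box setting concrete, the sketch below implements the fast gradient sign method (FGSM), one standard white-box attack; the specific attacks, models and perturbation budgets used in the study are not reproduced here.

    # Sketch: FGSM white-box attack on a generic image classifier (illustrative only).
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=8 / 255):
        """Perturb a batch x so that the classifier's loss on the true labels y increases."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Step in the direction of the sign of the input gradient, then clip to a valid range.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

    # Usage (assumes `model` maps image batches to class logits):
    # adversarial_images = fgsm_attack(model, images, labels)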


Subject(s)
Artificial Intelligence , Neural Networks, Computer , Electric Power Supplies , Knowledge , Workflow
8.
Bull Math Biol ; 84(11): 130, 2022 09 29.
Article in English | MEDLINE | ID: mdl-36175705

ABSTRACT

Several mathematical models for predicting tumor growth over time have been developed in recent decades. A central aspect of such models is the interaction of tumor cells with immune effector cells. The Kuznetsov model (Kuznetsov et al. in Bull Math Biol 56(2):295-321, 1994) is the most prominent of these models and has been used as a basis for many other related models and theoretical studies. However, none of these models have been validated with large-scale real-world data from human patients treated with cancer immunotherapy. In addition, parameter estimation for these models remains a major bottleneck on the way to model-based and data-driven medical treatment. In this study, we quantitatively fit Kuznetsov's model to a large dataset of 1472 patients, of whom 210 have more than six data points, by estimating the model parameters of each patient individually. We also conduct a global practical identifiability analysis of the estimated parameters, which demonstrates that several combinations of parameter values can fit the data accurately. This opens the potential for global parameter estimation of the model, in which the values of all or some parameters are fixed across patients. Furthermore, by omitting the last two or three data points, we show that the model can be extrapolated to predict future tumor dynamics. This paves the way for a more clinically relevant application of mathematical tumor modeling, in which the treatment strategy could be adjusted in advance according to the model's predictions.
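For reference, the Kuznetsov model cited above couples effector cells E and tumor cells T through two ordinary differential equations. The form below is the standard textbook presentation with generic parameter symbols, which may differ slightly from the notation used in the study:

    \begin{aligned}
    \frac{dE}{dt} &= s + \frac{p\,E\,T}{g + T} - m\,E\,T - d\,E, \\
    \frac{dT}{dt} &= a\,T\,(1 - b\,T) - n\,E\,T,
    \end{aligned}

where s is the constant influx of effector cells, the term with p and g models tumor-stimulated recruitment, m and d are effector inactivation and death rates, a and b parameterize logistic tumor growth, and n is the rate at which effector cells kill tumor cells.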


Subject(s)
Mathematical Concepts , Neoplasms , Cell Count , Humans , Immunotherapy , Models, Biological , Neoplasms/therapy
10.
Med Image Anal ; 79: 102474, 2022 07.
Article in English | MEDLINE | ID: mdl-35588568

ABSTRACT

Artificial intelligence (AI) can extract visual information from histopathological slides and yield biological insight and clinical biomarkers. Whole-slide images are cut into thousands of tiles, and classification problems are often weakly supervised: the ground truth is known only for the slide, not for every single tile. In classical weakly supervised analysis pipelines, all tiles inherit the slide label, whereas in multiple-instance learning (MIL), only bags of tiles inherit the label. However, it is still unclear how these widely used but markedly different approaches perform relative to each other. We implemented and systematically compared six methods in six clinically relevant end-to-end prediction tasks using data from N = 2980 patients for training, with rigorous external validation. We tested three classical weakly supervised approaches with convolutional neural networks and vision transformers (ViTs), and three MIL-based approaches with and without an additional attention module. Our results empirically demonstrate that histological tumor subtyping of renal cell carcinoma is an easy task, in which all approaches achieve an area under the receiver operating characteristic curve (AUROC) above 0.9. In contrast, we report significant performance differences for the clinically relevant tasks of mutation prediction in colorectal, gastric, and bladder cancer. In these mutation prediction tasks, classical weakly supervised workflows outperformed MIL-based methods, which is surprising given their simplicity. This shows that new end-to-end image analysis pipelines in computational pathology should be compared to classical weakly supervised methods. These findings also motivate the development of new methods which combine the elegant assumptions of MIL with the empirically observed higher performance of classical weakly supervised approaches. We make all source code publicly available at https://github.com/KatherLab/HIA, allowing easy application of all methods to any similar task.
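To illustrate the distinction drawn above, the sketch below contrasts a classical weakly supervised pooling step (every tile is scored as if it carried the slide label and the tile predictions are averaged) with a simple attention-based MIL pooling layer. Feature dimensions and layer sizes are illustrative assumptions, not those of the benchmarked methods.

    # Sketch: two ways to turn tile-level features into a slide-level prediction.
    import torch
    import torch.nn as nn

    class ClassicalPooling(nn.Module):
        """Classical weak supervision: each tile is scored as if it carried the slide
        label, and the slide-level output is the mean of the per-tile predictions."""
        def __init__(self, feat_dim=512, n_classes=2):
            super().__init__()
            self.tile_head = nn.Linear(feat_dim, n_classes)

        def forward(self, tile_feats):                       # (n_tiles, feat_dim)
            return self.tile_head(tile_feats).mean(dim=0)    # (n_classes,)

    class AttentionMIL(nn.Module):
        """MIL: the slide (bag) label supervises an attention-weighted sum of tiles."""
        def __init__(self, feat_dim=512, n_classes=2, attn_dim=128):
            super().__init__()
            self.attn = nn.Sequential(nn.Linear(feat_dim, attn_dim), nn.Tanh(),
                                      nn.Linear(attn_dim, 1))
            self.bag_head = nn.Linear(feat_dim, n_classes)

        def forward(self, tile_feats):                               # (n_tiles, feat_dim)
            weights = torch.softmax(self.attn(tile_feats), dim=0)    # (n_tiles, 1)
            bag_feat = (weights * tile_feats).sum(dim=0)             # (feat_dim,)
            return self.bag_head(bag_feat)                           # (n_classes,)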


Subject(s)
Deep Learning , Artificial Intelligence , Benchmarking , Humans , Neural Networks, Computer , Supervised Machine Learning
11.
Nat Med ; 28(6): 1232-1239, 2022 06.
Article in English | MEDLINE | ID: mdl-35469069

ABSTRACT

Artificial intelligence (AI) can predict the presence of molecular alterations directly from routine histopathology slides. However, training robust AI systems requires large datasets for which data collection faces practical, ethical and legal obstacles. These obstacles could be overcome with swarm learning (SL), in which partners jointly train AI models while avoiding data transfer and monopolistic data governance. Here, we demonstrate the successful use of SL in large, multicentric datasets of gigapixel histopathology images from over 5,000 patients. We show that AI models trained using SL can predict BRAF mutational status and microsatellite instability directly from hematoxylin and eosin (H&E)-stained pathology slides of colorectal cancer. We trained AI models on three patient cohorts from Northern Ireland, Germany and the United States, and validated the prediction performance in two independent datasets from the United Kingdom. Our data show that SL-trained AI models outperform most locally trained models, and perform on par with models that are trained on the merged datasets. In addition, we show that SL-based AI models are data efficient. In the future, SL can be used to train distributed AI models for any histopathology image analysis task, eliminating the need for data transfer.
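The key technical ingredient shared by swarm learning and related decentralized training schemes is the periodic merging of locally trained model parameters, so that raw data never leaves a partner site. The sketch below shows only this weighted parameter-averaging step; the blockchain-based coordination and permissioning that distinguish swarm learning from simple federated averaging are deliberately omitted and are not part of this illustration.

    # Sketch: weighted averaging of locally trained model weights. This is only the
    # merge step of decentralized training; coordination between partners is omitted.
    import torch

    def merge_state_dicts(state_dicts, n_samples):
        """Average parameters from several partners, weighted by local sample counts."""
        total = float(sum(n_samples))
        merged = {}
        for key in state_dicts[0]:
            merged[key] = sum(sd[key].float() * (n / total)
                              for sd, n in zip(state_dicts, n_samples))
        return merged

    # Usage: each partner trains locally, then all partners load the merged weights.
    # merged = merge_state_dicts([m1.state_dict(), m2.state_dict(), m3.state_dict()],
    #                            n_samples=[1200, 800, 3000])
    # model.load_state_dict(merged)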


Subject(s)
Artificial Intelligence , Neoplasms , Humans , Image Processing, Computer-Assisted , Neoplasms/genetics , Staining and Labeling , United Kingdom
12.
Sci Rep ; 12(1): 4829, 2022 03 22.
Article in English | MEDLINE | ID: mdl-35318364

ABSTRACT

Artificial intelligence (AI) is widely used to analyze gastrointestinal (GI) endoscopy image data. AI has led to several clinically approved algorithms for polyp detection, but the application of AI beyond this specific task is limited by the high cost of manual annotations. Here, we show that a weakly supervised AI system can be trained on data from a clinical routine database to learn visual patterns of GI diseases without any manual labeling or annotation. We trained a deep neural network on a dataset of N = 29,506 gastroscopy and N = 18,942 colonoscopy examinations from a large endoscopy unit serving patients in Germany, the Netherlands and Belgium, using only routine diagnosis data for the 42 most common diseases. Despite high data heterogeneity, the AI system reached high performance for the diagnosis of multiple diseases, including inflammatory, degenerative, infectious and neoplastic diseases. Specifically, a cross-validated area under the receiver operating characteristic curve (AUROC) above 0.70 was reached for 13 diseases, and an AUROC above 0.80 was reached for two diseases in the primary dataset. In an external validation set including six disease categories, the AI system significantly predicted the presence of diverticulosis, candidiasis, and colon and rectal cancer, with AUROCs above 0.76. Reverse engineering the predictions demonstrated that plausible patterns were learned at the level of whole images and within images, and identified potential confounders. In summary, our study demonstrates the potential of weakly supervised AI to generate high-performing classifiers and identify clinically relevant visual patterns based on non-annotated routine image data in GI endoscopy and potentially other clinical imaging modalities.
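The reported figures are per-disease one-vs-rest AUROCs. As a minimal, self-contained illustration of how such per-label scores are computed, the sketch below uses scikit-learn on randomly generated placeholder labels and scores; the disease names and data are purely illustrative.

    # Sketch: per-disease AUROC for a multi-label classifier, on placeholder data.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    diseases = ["diverticulosis", "candidiasis", "colon cancer", "rectal cancer"]
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=(500, len(diseases)))   # one binary column per disease
    y_score = rng.random((500, len(diseases)))               # model output probabilities

    for i, name in enumerate(diseases):
        auc = roc_auc_score(y_true[:, i], y_score[:, i])
        print(f"{name}: AUROC = {auc:.3f}")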


Subject(s)
Artificial Intelligence , Neural Networks, Computer , Algorithms , Area Under Curve , Endoscopy, Gastrointestinal/methods , Humans
13.
PLoS Comput Biol ; 18(2): e1009822, 2022 02.
Article in English | MEDLINE | ID: mdl-35120124

ABSTRACT

Classical mathematical models of tumor growth have shaped our understanding of cancer and have broad practical implications for treatment scheduling and dosage. However, even the simplest textbook models have barely been validated with real-world data from human patients. In this study, we fitted a range of differential equation models to tumor volume measurements of patients undergoing chemotherapy or cancer immunotherapy for solid tumors. We used a large dataset of 1472 patients with three or more measurements per target lesion, of whom 652 had six or more data points. We show that early treatment response correlates only moderately with final treatment response, demonstrating the need for nuanced models. We then perform a head-to-head comparison of six classical models that are widely used in the field: the Exponential, Logistic, Classic Bertalanffy, General Bertalanffy, Classic Gompertz and General Gompertz models. Several models provide a good fit to tumor volume measurements, with the Gompertz model providing the best balance between goodness of fit and number of parameters. Similarly, when fitted to early treatment data, the General Bertalanffy and Gompertz models yield the lowest mean absolute error on forecast data points, indicating that these models could potentially be effective at predicting treatment outcome. In summary, we provide a quantitative benchmark for classical textbook models and state-of-the-art models of human tumor growth. We publicly release an anonymized version of our original data, providing the first benchmark set of human tumor growth data for the evaluation of mathematical models.
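For orientation, one common formulation of the six textbook models compared here, written as ordinary differential equations for the tumor volume V(t) with generic parameters a and b and an additional shape exponent λ in the general variants, is given below; the exact parameterization used in the paper may differ.

    \begin{aligned}
    \text{Exponential:}         \quad & \frac{dV}{dt} = a\,V \\
    \text{Logistic:}            \quad & \frac{dV}{dt} = a\,V\left(1 - \frac{V}{b}\right) \\
    \text{Classic Bertalanffy:} \quad & \frac{dV}{dt} = a\,V^{2/3} - b\,V \\
    \text{General Bertalanffy:} \quad & \frac{dV}{dt} = a\,V^{\lambda} - b\,V \\
    \text{Classic Gompertz:}    \quad & \frac{dV}{dt} = a\,V\,\ln\!\left(\frac{b}{V}\right) \\
    \text{General Gompertz:}    \quad & \frac{dV}{dt} = a\,V^{\lambda}\,\ln\!\left(\frac{b}{V}\right)
    \end{aligned}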


Subject(s)
Models, Biological , Neoplasms , Humans , Immunotherapy , Models, Theoretical , Neoplasms/drug therapy , Neoplasms/pathology , Tumor Burden
14.
J Pathol ; 256(1): 50-60, 2022 01.
Article in English | MEDLINE | ID: mdl-34561876

ABSTRACT

Deep learning is a powerful tool in computational pathology: it can be used for tumor detection and for predicting genetic alterations based on histopathology images alone. Conventionally, tumor detection and prediction of genetic alterations are two separate workflows. Newer methods have combined them, but require complex, manually engineered computational pipelines, restricting reproducibility and robustness. To address these issues, we present a new method for simultaneous tumor detection and prediction of genetic alterations: the Slide-Level Assessment Model (SLAM) uses a single off-the-shelf neural network to predict molecular alterations directly from routine pathology slides without any manual annotations, improving upon previous methods by automatically excluding normal and non-informative tissue regions. SLAM requires only standard programming libraries and is conceptually simpler than previous approaches. We have extensively validated SLAM for clinically relevant tasks using two large multicentric cohorts of colorectal cancer patients, Darmkrebs: Chancen der Verhütung durch Screening (DACHS) from Germany and the Yorkshire Cancer Research Bowel Cancer Improvement Programme (YCR-BCIP) from the UK. We show that SLAM yields reliable slide-level classification of tumor presence with an area under the receiver operating characteristic curve (AUROC) of 0.980 (confidence interval 0.975, 0.984; n = 2,297 tumor and n = 1,281 normal slides). In addition, SLAM can detect microsatellite instability (MSI)/mismatch repair deficiency (dMMR) versus microsatellite stability/mismatch repair proficiency with an AUROC of 0.909 (0.888, 0.929; n = 2,039 patients), and BRAF mutational status with an AUROC of 0.821 (0.786, 0.852; n = 2,075 patients). The improvement with respect to previous methods was validated in a large external testing cohort, in which MSI/dMMR status was detected with an AUROC of 0.900 (0.864, 0.931; n = 805 patients). In addition, SLAM provides human-interpretable visualization maps, enabling the analysis of multiplexed network predictions by human experts. In summary, SLAM is a new, simple and powerful method for computational pathology that could be applied to multiple disease contexts. © 2021 The Authors. The Journal of Pathology published by John Wiley & Sons, Ltd. on behalf of The Pathological Society of Great Britain and Ireland.
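A minimal sketch of the underlying idea, aggregating tile-level predictions into a slide-level score while discarding tiles that a tumor-detection head judges to be normal or non-informative, is given below. The tile-scoring functions, threshold and aggregation rule are illustrative placeholders, not the published SLAM implementation.

    # Sketch: slide-level molecular prediction restricted to predicted tumor tiles.
    # `tumor_prob` and `target_prob` stand in for two trained network heads; both are
    # placeholders for illustration.
    import numpy as np

    def slide_level_score(tiles, tumor_prob, target_prob, tumor_threshold=0.5):
        """Average the molecular-target score over tiles classified as tumor."""
        p_tumor = np.array([tumor_prob(t) for t in tiles])
        keep = p_tumor >= tumor_threshold        # exclude normal/non-informative tiles
        if not keep.any():
            return None                          # no tumor detected on this slide
        return float(np.mean([target_prob(t) for t, k in zip(tiles, keep) if k]))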


Subject(s)
Brain Neoplasms/genetics , Brain Neoplasms/pathology , Colorectal Neoplasms/genetics , Colorectal Neoplasms/pathology , Microsatellite Instability , Mutation/genetics , Neoplastic Syndromes, Hereditary/genetics , Neoplastic Syndromes, Hereditary/pathology , Adult , Aged , Aged, 80 and over , Brain Neoplasms/diagnosis , Cohort Studies , Colorectal Neoplasms/diagnosis , Deep Learning , Female , Genotype , Humans , Male , Middle Aged , Neoplastic Syndromes, Hereditary/diagnosis , Reproducibility of Results
15.
J Pathol ; 256(3): 269-281, 2022 03.
Article in English | MEDLINE | ID: mdl-34738636

ABSTRACT

The spread of early-stage (T1 and T2) adenocarcinomas to locoregional lymph nodes is a key event in the disease progression of colorectal cancer (CRC). The cellular mechanisms behind this event are not completely understood and existing predictive biomarkers are imperfect. Here, we used an end-to-end deep learning algorithm to identify risk factors for lymph node metastasis (LNM) status in digitized histopathology slides of the primary CRC and its surrounding tissue. In two large population-based cohorts, we show that this system can predict the presence of more than one LNM in pT2 CRC patients with an area under the receiver operating characteristic curve (AUROC) of 0.733 (0.67-0.758) and the presence of any LNM with an AUROC of 0.711 (0.597-0.797). Similarly, in pT1 CRC patients, the presence of more than one LNM or of any LNM was predictable with AUROCs of 0.733 (0.644-0.778) and 0.567 (0.542-0.597), respectively. Based on these findings, we used the deep learning system to guide human pathology experts towards the most predictive regions for LNM in the whole-slide images. This hybrid approach of human observer and deep learning identified inflamed adipose tissue as the most predictive feature for LNM presence. Our study is a first proof of concept that artificial intelligence (AI) systems may be able to discover new biological mechanisms in cancer progression. Our deep learning algorithm is publicly available and can be used for biomarker discovery in any disease setting. © 2021 The Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.


Subject(s)
Adipose Tissue/pathology , Colorectal Neoplasms/pathology , Deep Learning , Diagnosis, Computer-Assisted , Early Detection of Cancer , Image Interpretation, Computer-Assisted , Lymph Nodes/pathology , Microscopy , Biopsy , Humans , Lymphatic Metastasis , Neoplasm Staging , Predictive Value of Tests , Proof of Concept Study , Reproducibility of Results , Retrospective Studies , Risk Assessment , Risk Factors
16.
Front Genet ; 12: 806386, 2021.
Article in English | MEDLINE | ID: mdl-35251119

ABSTRACT

In the last four years, advances in deep learning technology have enabled the inference of selected mutational alterations directly from routine histopathology slides. In particular, recent studies have shown that genetic changes in clinically relevant driver genes are reflected in the histological phenotype of solid tumors and can be inferred by analysing routine haematoxylin and eosin (H&E)-stained tissue sections with deep learning. However, these studies mostly focused on selected individual genes in selected tumor types. In addition, genetic changes in solid tumors primarily act by altering signaling pathways that regulate cell behaviour. In this study, we hypothesized that deep learning networks can be trained to directly predict alterations of genes and pathways across a spectrum of solid tumors. We manually outlined tumor tissue in H&E-stained tissue sections from 7,829 patients with 23 different tumor types from The Cancer Genome Atlas. We then trained convolutional neural networks end-to-end to detect alterations in the most clinically relevant pathways or genes directly from histology images. Using this automatic approach, we found that alterations in 12 of 14 clinically relevant pathways, as well as numerous single-gene alterations, appear to be detectable in tissue sections, many of which have not been reported before. Interestingly, we show that the prediction performance for single-gene alterations is better than that for pathway alterations. Collectively, these data demonstrate the predictability of genetic alterations directly from routine cancer histology images and show that individual genes leave a stronger morphological signature than genetic pathways.
