1.
Pediatr Dermatol ; 2024 May 21.
Article in English | MEDLINE | ID: mdl-38770539

ABSTRACT

BACKGROUND: Ultraviolet (UV)-exposure behaviors can directly impact an individual's skin cancer risk, with many habits formed during childhood and adolescence. We explored the utility of a photoaging smartphone application to motivate youth to improve sun safety practices. METHODS: Participants completed a preintervention survey to gather baseline sun safety perceptions and behaviors. Participants then used a photoaging mobile application to view the projected effects of chronic UV exposure on an image of their own face over time, followed by a postintervention survey to assess motivation to engage in future sun safety practices. RESULTS: The study sample included 87 participants (median [interquartile range (IQR)] age, 14 [11-16] years). Most participants were White (50.6%) and reported a skin type that burns a little and tans easily (42.5%). Preintervention sun exposure behaviors revealed that 33 participants (37.9%) mostly or always used sunscreen on a sunny day, 48 (55.2%) experienced at least one sunburn over the past year, 26 (30.6%) engaged in outdoor sunbathing at least once during the past year, and zero (0%) used indoor tanning beds. Participants without skin of color (18 [41.9%], p = .02) and older participants (24 [41.4%], p = .007) more often agreed that they felt better with a tan. Most participants agreed the intervention increased their motivation to practice sun-protective behaviors (wear sunscreen, 74 [85.1%]; wear hats, 64 [74.4%]; avoid indoor tanning, 73 [83.9%]; avoid outdoor tanning, 68 [79%]). CONCLUSION: The findings of this cross-sectional study suggest that a photoaging smartphone application may serve as a useful tool to promote sun safety behaviors from a young age.

2.
Med Image Anal ; 94: 103149, 2024 May.
Article in English | MEDLINE | ID: mdl-38574542

ABSTRACT

The variation in histologic staining between different medical centers is one of the most profound challenges in the field of computer-aided diagnosis. The appearance disparity of pathological whole-slide images causes algorithms to become less reliable, which in turn impedes the widespread applicability of downstream tasks like cancer diagnosis. Furthermore, different stainings introduce biases into training that, in the case of domain shifts, negatively affect test performance. Therefore, in this paper we propose MultiStain-CycleGAN, a multi-domain approach to stain normalization based on CycleGAN. Our modifications to CycleGAN allow us to normalize images of different origins without retraining or using different models. We perform an extensive evaluation of our method using various metrics and compare it to commonly used methods that are multi-domain capable. First, we evaluate how well our method fools a domain classifier that tries to assign a medical center to an image. Then, we test our normalization on the tumor classification performance of a downstream classifier. Furthermore, we evaluate the image quality of the normalized images using the structural similarity index (SSIM) and the ability to reduce the domain shift using the Fréchet inception distance. We show that our method is multi-domain capable, provides very high image quality among the compared methods, and most reliably fools the domain classifier while keeping the tumor classifier performance high. By reducing the domain influence, biases in the data can be removed on the one hand, and the origin of the whole-slide image can be disguised on the other, thus enhancing patient data privacy.
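As an illustrative aside (not the authors' code), the structural similarity index used above for image-quality evaluation can be sketched in a simplified, single-window form; real evaluations typically compute a windowed SSIM over local patches and average the results:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Global (single-window) SSIM between two grayscale images.

    A simplification of the windowed SSIM usually reported;
    shown only to illustrate the structure of the metric."""
    c1 = (0.01 * data_range) ** 2  # stabilizers from the SSIM definition
    c2 = (0.03 * data_range) ** 2
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )
```

Identical images score exactly 1; strong structural disagreement drives the score toward (or below) zero.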


Subject(s)
Coloring Agents , Neoplasms , Humans , Coloring Agents/chemistry , Staining and Labeling , Algorithms , Diagnosis, Computer-Assisted , Image Processing, Computer-Assisted/methods
4.
Lab Invest ; 104(6): 102049, 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38513977

ABSTRACT

Although pathological tissue analysis is typically performed on single 2-dimensional (2D) histologic reference slides, 3-dimensional (3D) reconstruction from a sequence of histologic sections could provide novel opportunities for spatial analysis of the extracted tissue. In this review, we analyze recent works published after 2018 and report information on the extracted tissue types, the section thickness, and the number of sections used for reconstruction. By analyzing the technological requirements for 3D reconstruction, we observe that software tools exist, both free and commercial, which include the functionality to perform 3D reconstruction from a sequence of histologic images. Through the analysis of the most recent works, we provide an overview of the workflows and tools that are currently used for 3D reconstruction from histologic sections and address points for future work, such as a missing common file format or computer-aided analysis of the reconstructed model.

5.
Histopathology ; 84(7): 1139-1153, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38409878

ABSTRACT

BACKGROUND: Artificial intelligence (AI) has numerous applications in pathology, supporting diagnosis and prognostication in cancer. However, most AI models are trained on highly selected data, typically one tissue slide per patient. In reality, especially for large surgical resection specimens, dozens of slides can be available for each patient. Manually sorting and labelling whole-slide images (WSIs) is a very time-consuming process, hindering the direct application of AI to the collected tissue samples from large cohorts. In this study we addressed this issue by developing a deep-learning (DL)-based method for automatic curation of large pathology datasets with several slides per patient. METHODS: We collected multiple large multicentric datasets of colorectal cancer histopathological slides from the United Kingdom (FOXTROT, N = 21,384 slides; CR07, N = 7985 slides) and Germany (DACHS, N = 3606 slides). These datasets contained multiple types of tissue slides, including bowel resection specimens, endoscopic biopsies, lymph node resections, immunohistochemistry-stained slides, and tissue microarrays. We developed, trained, and tested a deep convolutional neural network model to predict the type of slide from the slide overview (thumbnail) image. The primary statistical endpoint was the macro-averaged area under the receiver operating characteristic curve (AUROC) for detection of the type of slide. RESULTS: In the primary dataset (FOXTROT), the algorithm achieved a high classification performance, with an AUROC of 0.995 (95% confidence interval [CI]: 0.994-0.996), and was able to accurately predict the type of slide from the thumbnail image alone. In the two external test cohorts (CR07, DACHS), AUROCs of 0.982 (95% CI: 0.979-0.985) and 0.875 (95% CI: 0.864-0.887) were observed, which indicates the generalizability of the trained model to unseen datasets. With a confidence threshold of 0.95, the model reached an accuracy of 94.6% (7331 classified cases) in CR07 and 85.1% (2752 classified cases) in the DACHS cohort. CONCLUSION: Our findings show that the low-resolution thumbnail image is sufficient to accurately classify the type of slide in digital pathology. This can help researchers make the vast resource of existing pathology archives accessible to modern AI models with only minimal manual annotation.
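The confidence-threshold selection reported above can be illustrated with a minimal sketch (not the authors' code); `probs` is assumed to hold per-class softmax outputs, and only cases whose top probability reaches the threshold are classified:

```python
import numpy as np

def selective_accuracy(probs, labels, threshold=0.95):
    """Keep only predictions whose top softmax probability reaches
    `threshold`; return (coverage, accuracy on the kept cases)."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    conf = probs.max(axis=1)          # top-class confidence per case
    pred = probs.argmax(axis=1)       # predicted class per case
    kept = conf >= threshold
    coverage = kept.mean()
    accuracy = (pred[kept] == labels[kept]).mean() if kept.any() else float("nan")
    return coverage, accuracy
```

Raising the threshold trades coverage (fraction of cases classified) for accuracy on the retained cases, which is exactly the trade-off the reported 94.6%/85.1% figures describe.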


Subject(s)
Colorectal Neoplasms , Deep Learning , Humans , Colorectal Neoplasms/pathology , Colorectal Neoplasms/diagnosis , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/methods
6.
JAMA Dermatol ; 160(3): 303-311, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38324293

ABSTRACT

Importance: The development of artificial intelligence (AI)-based melanoma classifiers typically calls for large, centralized datasets, requiring hospitals to give away their patient data, which raises serious privacy concerns. To address this concern, decentralized federated learning has been proposed, where classifier development is distributed across hospitals. Objective: To investigate whether a more privacy-preserving federated learning approach can achieve comparable diagnostic performance to a classical centralized (ie, single-model) and ensemble learning approach for AI-based melanoma diagnostics. Design, Setting, and Participants: This multicentric, single-arm diagnostic study developed a federated model for melanoma-nevus classification using histopathological whole-slide images prospectively acquired at 6 German university hospitals between April 2021 and February 2023 and benchmarked it using both a holdout and an external test dataset. Data analysis was performed from February to April 2023. Exposures: All whole-slide images were retrospectively analyzed by an AI-based classifier without influencing routine clinical care. Main Outcomes and Measures: The area under the receiver operating characteristic curve (AUROC) served as the primary end point for evaluating the diagnostic performance. Secondary end points included balanced accuracy, sensitivity, and specificity. Results: The study included 1025 whole-slide images of clinically melanoma-suspicious skin lesions from 923 patients, consisting of 388 histopathologically confirmed invasive melanomas and 637 nevi. The median (range) age at diagnosis was 58 (18-95) years for the training set, 57 (18-93) years for the holdout test dataset, and 61 (18-95) years for the external test dataset; the median (range) Breslow thickness was 0.70 (0.10-34.00) mm, 0.70 (0.20-14.40) mm, and 0.80 (0.30-20.00) mm, respectively. 
The federated approach (0.8579; 95% CI, 0.7693-0.9299) performed significantly worse than the classical centralized approach (0.9024; 95% CI, 0.8379-0.9565) in terms of AUROC on a holdout test dataset (pairwise Wilcoxon signed-rank, P < .001) but performed significantly better (0.9126; 95% CI, 0.8810-0.9412) than the classical centralized approach (0.9045; 95% CI, 0.8701-0.9331) on an external test dataset (pairwise Wilcoxon signed-rank, P < .001). Notably, the federated approach performed significantly worse than the ensemble approach on both the holdout (0.8867; 95% CI, 0.8103-0.9481) and external test dataset (0.9227; 95% CI, 0.8941-0.9479). Conclusions and Relevance: The findings of this diagnostic study suggest that federated learning is a viable approach for the binary classification of invasive melanomas and nevi on a clinically representative distributed dataset. Federated learning can improve privacy protection in AI-based melanoma diagnostics while simultaneously promoting collaboration across institutions and countries. Moreover, it may have the potential to be extended to other image classification tasks in digital cancer histopathology and beyond.
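As a hedged illustration of the federated setting described above (not the study's implementation), one aggregation round of the standard FedAvg scheme, which averages each client's model parameters weighted by its local dataset size, can be sketched as:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """One FedAvg aggregation round.

    `client_weights`: list of dicts mapping parameter name -> np.ndarray,
    one dict per participating hospital/client.
    `client_sizes`: number of local training samples per client."""
    sizes = np.asarray(client_sizes, dtype=float)
    frac = sizes / sizes.sum()  # weight each client by its data share
    merged = {}
    for name in client_weights[0]:
        merged[name] = sum(f * w[name] for f, w in zip(frac, client_weights))
    return merged
```

Only the parameter updates leave each site; the raw whole-slide images never do, which is the privacy argument the abstract makes.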


Subject(s)
Dermatology , Melanoma , Nevus , Skin Neoplasms , Humans , Melanoma/diagnosis , Artificial Intelligence , Retrospective Studies , Skin Neoplasms/diagnosis , Nevus/diagnosis
8.
Nat Commun ; 15(1): 524, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38225244

ABSTRACT

Artificial intelligence (AI) systems have been shown to help dermatologists diagnose melanoma more accurately; however, they lack transparency, hindering user acceptance. Explainable AI (XAI) methods can help to increase transparency, yet often lack precise, domain-specific explanations. Moreover, the impact of XAI methods on dermatologists' decisions has not yet been evaluated. Building upon previous research, we introduce an XAI system that provides precise and domain-specific explanations alongside its differential diagnoses of melanomas and nevi. Through a three-phase study, we assess its impact on dermatologists' diagnostic accuracy, diagnostic confidence, and trust in the XAI support. Our results show strong alignment between XAI and dermatologist explanations. We also show that dermatologists' confidence in their diagnoses and their trust in the support system increase significantly with XAI compared with conventional AI. This study highlights dermatologists' willingness to adopt such XAI systems, promoting future use in the clinic.


Subject(s)
Melanoma , Trust , Humans , Artificial Intelligence , Dermatologists , Melanoma/diagnosis , Diagnosis, Differential
9.
PLoS One ; 19(1): e0297146, 2024.
Article in English | MEDLINE | ID: mdl-38241314

ABSTRACT

Pathologists routinely use immunohistochemical (IHC)-stained tissue slides against MelanA in addition to hematoxylin and eosin (H&E)-stained slides to improve their accuracy in diagnosing melanomas. The use of diagnostic deep learning (DL)-based support systems for automated examination of tissue morphology and cellular composition has been well studied for standard H&E-stained tissue slides. In contrast, few studies have analyzed IHC slides using DL. We therefore investigated the separate and joint performance of ResNets trained on MelanA and corresponding H&E-stained slides. The MelanA classifier achieved an area under the receiver operating characteristic curve (AUROC) of 0.82 and 0.74 on out-of-distribution (OOD) datasets, similar to the H&E-based benchmark classification of 0.81 and 0.75, respectively. A combined classifier using MelanA and H&E achieved AUROCs of 0.85 and 0.81 on the OOD datasets. DL MelanA-based assistance systems thus match the performance of the benchmark H&E classification and may be improved by multi-stain classification to assist pathologists in their clinical routine.
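The AUROC metric reported above, and a naive late-fusion combination of the two unimodal classifiers, can be sketched as follows (illustrative only; the paper does not specify its exact combination rule, and probability averaging is just one common choice):

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive scores higher than a randomly chosen
    negative (ties count 0.5)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

def late_fusion(p_melan_a, p_he):
    """Average the two classifiers' predicted probabilities --
    a simple late-fusion scheme for a combined classifier."""
    return (np.asarray(p_melan_a, dtype=float) + np.asarray(p_he, dtype=float)) / 2.0
```

An AUROC of 1.0 means perfect ranking of positives above negatives; 0.5 is chance level.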


Subject(s)
Deep Learning , Melanoma , Humans , Melanoma/diagnosis , Immunohistochemistry , MART-1 Antigen , ROC Curve
10.
Lancet Digit Health ; 6(1): e33-e43, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38123254

ABSTRACT

BACKGROUND: Precise prognosis prediction in patients with colorectal cancer (ie, forecasting survival) is pivotal for individualised treatment and care. Histopathological tissue slides of colorectal cancer specimens contain rich prognostically relevant information. However, existing studies lack multicentre external validation with real-world sample processing protocols, and algorithms are not yet widely used in clinical routine. METHODS: In this retrospective, multicentre study, we collected tissue samples from four cohorts of patients with resected colorectal cancer from Australia, Germany, and the USA. We developed and externally validated a deep learning-based prognostic-stratification system for automatic prediction of overall and cancer-specific survival in patients with resected colorectal cancer. We used the model-predicted risk scores to stratify patients into different risk groups and compared survival outcomes between these groups. Additionally, we evaluated the prognostic value of these risk groups after adjusting for established prognostic variables. FINDINGS: We trained and validated our model on a total of 4428 patients. We found that patients could be divided into high-risk and low-risk groups on the basis of the deep learning-based risk score. On the internal test set, the group with a high-risk score had a worse prognosis than the group with a low-risk score, as reflected by a hazard ratio (HR) of 4·50 (95% CI 3·33-6·09) for overall survival and 8·35 (5·06-13·78) for disease-specific survival (DSS). We found consistent performance across three large external test sets. In a test set of 1395 patients, the high-risk group had a lower DSS than the low-risk group, with an HR of 3·08 (2·44-3·89). In two additional test sets, the HRs for DSS were 2·23 (1·23-4·04) and 3·07 (1·78-5·30). We showed that the prognostic value of the deep learning-based risk score is independent of established clinical risk factors.
INTERPRETATION: Our findings indicate that attention-based self-supervised deep learning can robustly offer a prognosis on clinical outcomes in patients with colorectal cancer, generalising across different populations and serving as a potentially new prognostic tool in clinical decision making for colorectal cancer management. We release all source codes and trained models under an open-source licence, allowing other researchers to reuse and build upon our work. FUNDING: The German Federal Ministry of Health, the Max-Eder-Programme of German Cancer Aid, the German Federal Ministry of Education and Research, the German Academic Exchange Service, and the EU.
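As an illustrative sketch only (not the authors' pipeline), the risk-group survival comparisons described above rest on the Kaplan-Meier product-limit estimator, which can be computed per risk group as:

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.

    `events` is 1 for an observed death, 0 for censoring.
    Returns (sorted unique event times, survival probability after each)."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    uniq = np.unique(times[events == 1])  # only observed-event times
    surv, s = [], 1.0
    for t in uniq:
        at_risk = (times >= t).sum()                      # still under observation
        deaths = ((times == t) & (events == 1)).sum()     # events exactly at t
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return uniq, np.array(surv)
```

Plotting these curves separately for the model's high-risk and low-risk groups, and comparing them via a hazard ratio from a Cox model, yields figures like the HRs quoted in the abstract.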


Subject(s)
Colorectal Neoplasms , Deep Learning , Humans , Retrospective Studies , Prognosis , Risk Factors , Colorectal Neoplasms/diagnosis , Colorectal Neoplasms/pathology
12.
Eur J Cancer ; 195: 113390, 2023 12.
Article in English | MEDLINE | ID: mdl-37890350

ABSTRACT

BACKGROUND: Sentinel lymph node (SLN) status is a clinically important prognostic biomarker in breast cancer and is used to guide therapy, especially for hormone receptor-positive, HER2-negative cases. However, invasive lymph node staging is increasingly omitted before therapy, and studies such as the randomised Intergroup Sentinel Mamma (INSEMA) trial address the potential for further de-escalation of axillary surgery. It would therefore be helpful to accurately predict pretherapeutic sentinel status from medical images. METHODS: Using a ResNet-50 architecture pretrained on ImageNet and a previously successful training strategy, we trained deep learning (DL)-based image analysis algorithms to predict sentinel status on hematoxylin/eosin-stained images of predominantly luminal, primary breast tumours from the INSEMA trial and three additional, independent cohorts (The Cancer Genome Atlas (TCGA) and cohorts from the University hospitals of Mannheim and Regensburg), and compared their performance with that of a logistic regression using clinical data only. Performance on an INSEMA hold-out set was investigated in a blinded manner. RESULTS: None of the generated image analysis algorithms achieved areas under the receiver operating characteristic curve significantly better than random on the test sets, including the hold-out test set from INSEMA. In contrast, the logistic regression fitted on the Mannheim cohort retained better-than-random performance on INSEMA and Regensburg. Including the image analysis model output in the logistic regression did not further improve performance on INSEMA. CONCLUSIONS: Employing DL-based image analysis on histological slides, we could not predict SLN status for unseen cases in the INSEMA trial and other predominantly luminal cohorts.


Subject(s)
Breast Neoplasms , Deep Learning , Lymphadenopathy , Sentinel Lymph Node , Female , Humans , Axilla/pathology , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/surgery , Breast Neoplasms/genetics , Lymph Node Excision/methods , Lymph Nodes/pathology , Lymphatic Metastasis/pathology , Sentinel Lymph Node/pathology , Sentinel Lymph Node Biopsy/methods
13.
NPJ Precis Oncol ; 7(1): 98, 2023 Sep 26.
Article in English | MEDLINE | ID: mdl-37752266

ABSTRACT

Studies have shown that colorectal cancer (CRC) prognosis can be predicted by deep learning-based analysis of histological tissue sections of the primary tumor. So far, this has been achieved using a binary prediction. Survival curves might contain more detailed information and thus enable a more fine-grained risk prediction. Therefore, we established survival curve-based CRC survival predictors and benchmarked them against standard binary survival predictors, comparing their performance extensively on the clinical high- and low-risk subsets of one internal and three external cohorts. Survival curve-based risk prediction achieved a risk stratification very similar to binary risk prediction for this task. Exchanging other components of the pipeline, namely input tissue and feature extractor, had largely identical effects on model performance, independently of the type of risk prediction. An ensemble of all survival curve-based models exhibited a more robust performance, as did a similar ensemble based on binary risk prediction. Patients could be further stratified within clinical risk groups. However, performance still varied across cohorts, indicating limited generalization of all investigated image analysis pipelines, whereas models using clinical data performed robustly on all cohorts.

14.
Eur J Cancer ; 193: 113294, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37690178

ABSTRACT

BACKGROUND: Historically, cancer diagnoses have been made by pathologists using two-dimensional histological slides. However, with the advent of digital pathology and artificial intelligence, slides are being digitised, providing new opportunities to integrate their information. Since nature is 3-dimensional (3D), it seems intuitive to digitally reassemble the 3D structure for diagnosis. OBJECTIVE: To develop the first human 3D melanoma histology model with full data and code availability; further, to evaluate the 3D simulation together with experienced pathologists in the field and discuss the implications of digital 3D models for the future of digital pathology. METHODS: A malignant melanoma of the skin was digitised via 3 µm cuts by a slide scanner; open-source software was then leveraged to construct the 3D model. A total of nine pathologists from four different countries, each with at least 10 years of experience in the histologic diagnosis of melanoma, tested the model and discussed their experiences as well as implications for future pathology. RESULTS: We successfully constructed and tested the first 3D model of human melanoma. Based on testing, 88.9% of pathologists believe that the technology is likely to enter routine pathology within the next 10 years; advantages include a better reflection of the anatomy, 3D assessment of symmetry, and the opportunity to evaluate different tissue levels simultaneously; limitations include the high consumption of tissue and a resolution that is still inferior owing to computational limitations. CONCLUSIONS: 3D histology models are promising for digital pathology of cancer, and of melanoma specifically; however, there are still limitations that need to be carefully addressed.

15.
Med Image Anal ; 89: 102914, 2023 10.
Article in English | MEDLINE | ID: mdl-37544085

ABSTRACT

In the past years, deep learning has seen increasing use in the domain of histopathological applications. However, while these approaches have shown great potential, in high-risk environments deep learning models need to be able to judge their uncertainty and reject inputs when there is a significant chance of misclassification. In this work, we conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of whole-slide images, with a focus on the task of selective classification, where the model should reject the classification in situations in which it is uncertain. We conduct our experiments at tile level, under the aspects of domain shift and label noise, as well as at slide level. In our experiments, we compare deep ensembles, Monte Carlo dropout, stochastic variational inference, and test-time data augmentation, as well as ensembles of the latter approaches. We observe that ensembles of methods generally lead to better uncertainty estimates as well as increased robustness towards domain shifts and label noise, while, contrary to results from classical computer vision benchmarks, no systematic gain can be shown for the other methods. Across methods, rejecting the most uncertain samples reliably leads to a significant increase in classification accuracy on both in-distribution and out-of-distribution data. Furthermore, we compare these methods under varying conditions of label noise. Lastly, we publish our code framework to facilitate further research on uncertainty estimation on histopathological data.
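A minimal sketch of ensemble-based selective classification in the spirit of the evaluation above (illustrative only, not the benchmarked framework): average the members' softmax outputs, rank cases by predictive entropy, and reject the most uncertain fraction:

```python
import numpy as np

def ensemble_selective(member_probs, labels, reject_frac=0.2):
    """Deep-ensemble selective classification.

    `member_probs`: array-like of shape (n_members, n_samples, n_classes)
    holding each ensemble member's softmax outputs.
    Returns accuracy on the retained (most confident) cases."""
    p = np.mean(member_probs, axis=0)                 # ensemble-averaged probs
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)    # predictive entropy
    n_keep = int(round(len(labels) * (1.0 - reject_frac)))
    keep = np.argsort(entropy)[:n_keep]               # most confident first
    pred = p.argmax(axis=1)
    return (pred[keep] == np.asarray(labels)[keep]).mean()
```

Rejecting the highest-entropy cases typically raises accuracy on the remainder, which is the effect the abstract reports across all compared methods.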


Subject(s)
Benchmarking , Humans , Uncertainty , Probability
16.
World J Urol ; 41(8): 2233-2241, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37382622

ABSTRACT

PURPOSE: To develop and validate an interpretable deep learning model to predict overall and disease-specific survival (OS/DSS) in clear cell renal cell carcinoma (ccRCC). METHODS: Digitised haematoxylin and eosin-stained slides from The Cancer Genome Atlas were used as a training set for a vision transformer (ViT) to extract image features with a self-supervised model called DINO (self-distillation with no labels). Extracted features were used in Cox regression models to prognosticate OS and DSS. Kaplan-Meier analyses for univariable evaluation and Cox regression analyses for multivariable evaluation of the DINO-ViT risk groups were performed for prediction of OS and DSS. For validation, a cohort from a tertiary care centre was used. RESULTS: A significant risk stratification was achieved in univariable analysis for OS and DSS in the training (n = 443, log-rank test, p < 0.01) and validation sets (n = 266, p < 0.01). In multivariable analysis, including age, metastatic status, tumour size, and grading, the DINO-ViT risk stratification was a significant predictor for OS (hazard ratio [HR] 3.03; 95% confidence interval [95% CI] 2.11-4.35; p < 0.01) and DSS (HR 4.90; 95% CI 2.78-8.64; p < 0.01) in the training set, but only for DSS in the validation set (HR 2.31; 95% CI 1.15-4.65; p = 0.02). DINO-ViT visualisation showed that features were mainly extracted from nuclei, cytoplasm, and peritumoural stroma, demonstrating good interpretability. CONCLUSION: DINO-ViT can identify high-risk patients using histological images of ccRCC. This model might improve individual risk-adapted renal cancer therapy in the future.


Subject(s)
Carcinoma, Renal Cell , Kidney Neoplasms , Humans , Carcinoma, Renal Cell/pathology , Kidney Neoplasms/pathology , Proportional Hazards Models , Risk Factors , Endoscopy , Prognosis
17.
N Biotechnol ; 76: 106-117, 2023 Sep 25.
Article in English | MEDLINE | ID: mdl-37146681

ABSTRACT

The limited ability of convolutional neural networks (CNNs) to generalize to images from previously unseen domains is a major limitation, in particular for safety-critical clinical tasks such as dermoscopic skin cancer classification. In order to translate CNN-based applications into the clinic, it is essential that they are able to adapt to domain shifts. Such new conditions can arise through the use of different image acquisition systems or varying lighting conditions. In dermoscopy, shifts can also occur as a change in patient age or the occurrence of rare lesion localizations (e.g. palms). These are not prominently represented in most training datasets and can therefore lead to a decrease in performance. In order to verify the generalizability of classification models in real-world clinical settings, it is crucial to have access to data that mimics such domain shifts. To our knowledge, no dermoscopic image dataset exists where such domain shifts are properly described and quantified. We therefore grouped publicly available images from the ISIC archive based on their metadata (e.g. acquisition location, lesion localization, patient age) to generate meaningful domains. To verify that these domains are in fact distinct, we used multiple quantification measures to estimate the presence and intensity of domain shifts. Additionally, we analyzed the performance on these domains with and without an unsupervised domain adaptation technique. We observed that domain shifts do in fact exist in most of our grouped domains. Based on our results, we believe these datasets to be helpful for testing the generalization capabilities of dermoscopic skin cancer classifiers.


Subject(s)
Dermoscopy , Skin Neoplasms , Humans , Dermoscopy/methods , Skin Neoplasms/pathology , Neural Networks, Computer
19.
Cell Rep Med ; 4(4): 100980, 2023 04 18.
Article in English | MEDLINE | ID: mdl-36958327

ABSTRACT

Deep learning (DL) can predict microsatellite instability (MSI) from routine histopathology slides of colorectal cancer (CRC). However, it is unclear whether DL can also predict other biomarkers with high performance and whether DL predictions generalize to external patient populations. Here, we acquire CRC tissue samples from two large multi-centric studies. We systematically compare six different state-of-the-art DL architectures to predict biomarkers from pathology slides, including MSI and mutations in BRAF, KRAS, NRAS, and PIK3CA. Using a large external validation cohort to provide a realistic evaluation setting, we show that models using self-supervised, attention-based multiple-instance learning consistently outperform previous approaches while offering explainable visualizations of the indicative regions and morphologies. While the prediction of MSI and BRAF mutations reaches a clinical-grade performance, mutation prediction of PIK3CA, KRAS, and NRAS was clinically insufficient.
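The attention-based multiple-instance learning aggregation referenced above can be sketched as follows (an illustrative simplification in the style of Ilse et al.'s attention pooling, not the authors' code); `V` and `w` stand in for hypothetical learned parameters:

```python
import numpy as np

def attention_mil_pool(instances, V, w):
    """Attention-based MIL pooling over one slide ("bag") of tile
    embeddings: score each instance, softmax the scores over the bag,
    and return the attention-weighted bag embedding plus the weights.

    `instances`: (n_instances, d) tile features; V: (h, d); w: (h,)."""
    scores = np.tanh(instances @ V.T) @ w      # (n_instances,) raw attention
    scores = scores - scores.max()             # numerical stability
    attn = np.exp(scores) / np.exp(scores).sum()
    bag = attn @ instances                     # (d,) slide-level embedding
    return bag, attn
```

The attention weights double as the "explainable visualizations of indicative regions" mentioned in the abstract: high-weight tiles are the ones driving the slide-level prediction.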


Subject(s)
Colorectal Neoplasms , Deep Learning , Humans , Retrospective Studies , Proto-Oncogene Proteins B-raf/genetics , Proto-Oncogene Proteins p21(ras)/genetics , Colorectal Neoplasms/genetics , Colorectal Neoplasms/pathology , Biomarkers , Microsatellite Instability , Class I Phosphatidylinositol 3-Kinases/genetics
20.
Eur J Cancer ; 183: 131-138, 2023 04.
Article in English | MEDLINE | ID: mdl-36854237

ABSTRACT

BACKGROUND: In machine learning, multimodal classifiers can provide more generalised performance than unimodal classifiers. In clinical practice, physicians likewise rely on a range of information from different examinations for diagnosis. In this study, we used BRAF mutation status prediction in melanoma as a model system to analyse the contribution of different data types in a combined classifier, because BRAF status can be determined accurately by sequencing as the current gold standard, thus nearly eliminating label noise. METHODS: We built a combined classifier from individually trained random forests on image, clinical, and methylation data to predict BRAF-V600 mutation status in primary and metastatic melanomas of The Cancer Genome Atlas cohort. RESULTS: With our multimodal approach, we achieved an area under the receiver operating characteristic curve (AUROC) of 0.80, whereas the individual classifiers yielded AUROCs of 0.63 (histopathologic image data), 0.66 (clinical data), and 0.66 (methylation data) on an independent dataset. CONCLUSIONS: Our combined approach can predict BRAF status to some extent by identifying BRAF-V600-specific patterns at the histologic, clinical, and epigenetic levels. The multimodal classifier showed improved generalisability in predicting BRAF mutation status.
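A hedged sketch of late fusion across modalities in the spirit of the approach above (the study's exact combination rule may differ); `prob_by_modality` maps hypothetical modality names to per-class predicted probabilities from the unimodal classifiers:

```python
import numpy as np

def soft_vote(prob_by_modality, weights=None):
    """Combine per-modality class probabilities by an (optionally
    weighted) average -- one simple way to build a multimodal
    classifier from individually trained unimodal ones."""
    mods = list(prob_by_modality)
    p = np.stack([np.asarray(prob_by_modality[m], dtype=float) for m in mods])
    if weights is None:
        weights = np.ones(len(mods))
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalise modality weights
    return np.tensordot(w, p, axes=1)     # weighted average over modalities
```

With equal weights this reduces to plain averaging; unequal weights let a more reliable modality (e.g. methylation) dominate the vote.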


Subject(s)
Melanoma , Skin Neoplasms , Humans , Proto-Oncogene Proteins B-raf/genetics , Melanoma/pathology , Skin Neoplasms/pathology , Mutation , Epigenesis, Genetic