Results 1 - 20 of 39
1.
World J Urol; 41(8): 2233-2241, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37382622

ABSTRACT

PURPOSE: To develop and validate an interpretable deep learning model to predict overall and disease-specific survival (OS/DSS) in clear cell renal cell carcinoma (ccRCC). METHODS: Digitised haematoxylin and eosin-stained slides from The Cancer Genome Atlas were used as a training set for a vision transformer (ViT) to extract image features with a self-supervised model called DINO (self-distillation with no labels). Extracted features were used in Cox regression models to prognosticate OS and DSS. Kaplan-Meier for univariable evaluation and Cox regression analyses for multivariable evaluation of the DINO-ViT risk groups were performed for prediction of OS and DSS. For validation, a cohort from a tertiary care centre was used. RESULTS: A significant risk stratification was achieved in univariable analysis for OS and DSS in the training (n = 443, log rank test, p < 0.01) and validation set (n = 266, p < 0.01). In multivariable analysis, including age, metastatic status, tumour size and grading, the DINO-ViT risk stratification was a significant predictor for OS (hazard ratio [HR] 3.03; 95%-confidence interval [95%-CI] 2.11-4.35; p < 0.01) and DSS (HR 4.90; 95%-CI 2.78-8.64; p < 0.01) in the training set but only for DSS in the validation set (HR 2.31; 95%-CI 1.15-4.65; p = 0.02). DINO-ViT visualisation showed that features were mainly extracted from nuclei, cytoplasm, and peritumoural stroma, demonstrating good interpretability. CONCLUSION: The DINO-ViT can identify high-risk patients using histological images of ccRCC. This model might improve individual risk-adapted renal cancer therapy in the future.
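
As a rough illustration of the pipeline described in this abstract, the sketch below extracts tile embeddings with a vision transformer and fits a Cox model on a per-patient feature table. It is a minimal sketch only: torchvision's ViT-B/16 stands in for the DINO-pretrained backbone, the feature table is toy data, and lifelines is assumed as the Cox implementation.

```python
# Minimal sketch, not the study's code: tile embeddings from a ViT backbone,
# then a Cox proportional hazards model on a per-patient feature table.
import torch
import pandas as pd
from torchvision.models import vit_b_16
from lifelines import CoxPHFitter

backbone = vit_b_16(weights=None)      # the DINO-pretrained weights would be loaded here
backbone.heads = torch.nn.Identity()   # keep the 768-dim embedding, drop the classification head
backbone.eval()

@torch.no_grad()
def slide_embedding(tiles: torch.Tensor) -> torch.Tensor:
    """Average the tile embeddings of one whole-slide image (tiles: N x 3 x 224 x 224)."""
    return backbone(tiles).mean(dim=0)

tiles = torch.rand(4, 3, 224, 224)     # toy stand-in for H&E tiles of one slide
print(slide_embedding(tiles).shape)    # torch.Size([768])

# Toy per-patient table: two embedding-derived features plus follow-up time and event status.
df = pd.DataFrame({
    "feat_0": [0.1, 0.9, 0.3, 0.8, 0.2, 0.4, 0.5, 0.7],
    "feat_1": [1.2, 0.2, 0.9, 0.4, 1.0, 0.8, 0.3, 0.6],
    "time_months": [60, 24, 48, 12, 36, 30, 55, 18],
    "event": [0, 1, 0, 1, 0, 1, 0, 1],  # 1 = death (OS) or disease-specific death (DSS)
})
cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
cph.print_summary()                     # hazard ratios with confidence intervals
```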


Subjects
Renal Cell Carcinoma, Kidney Neoplasms, Humans, Renal Cell Carcinoma/pathology, Kidney Neoplasms/pathology, Proportional Hazards Models, Risk Factors, Endoscopy, Prognosis
2.
Laryngorhinootologie; 102(7): 496-503, 2023 07.
Article in German | MEDLINE | ID: mdl-36580975

ABSTRACT

The incidence of malignant melanoma is increasing worldwide. If detected early, melanoma is highly treatable, so early detection is vital. Early skin cancer detection has improved significantly in recent decades, for example through the introduction of screening in 2008 and of dermoscopy. Nevertheless, the visual detection of early melanomas in particular remains challenging because they show many morphological overlaps with nevi. Hence, there continues to be a high medical need to further develop methods for early skin cancer detection in order to reliably diagnose melanomas at a very early stage. Routine diagnostics for melanoma detection include visual whole-body inspection, often supplemented by dermoscopy, which can significantly increase the diagnostic accuracy of experienced dermatologists. A procedure additionally offered in some practices and clinics is whole-body photography combined with digital dermoscopy for the early detection of malignant melanoma, especially for monitoring high-risk patients. In recent decades, numerous noninvasive adjunctive diagnostic techniques have been developed for the examination of suspicious pigmented moles that may have the potential to allow improved and, in some cases, automated evaluation of these lesions. Confocal laser microscopy should be mentioned here first, as well as electrical impedance spectroscopy, multiphoton laser tomography, multispectral analysis, Raman spectroscopy and optical coherence tomography. These diagnostic techniques usually focus on high sensitivity to avoid malignant melanomas being overlooked. However, this usually implies lower specificity, which may lead to unnecessary excision of benign lesions in screening. Some of the procedures are also time-consuming and costly, which further limits their applicability in skin cancer screening. In the near future, the use of artificial intelligence might change skin cancer diagnostics in many ways. The most promising approach may be the analysis of routine macroscopic and dermoscopic images by artificial intelligence. For the classification of pigmented skin lesions based on macroscopic and dermoscopic images, artificial intelligence, especially in the form of neural networks, has achieved diagnostic accuracies comparable to those of dermatologists under experimental conditions in numerous studies. In particular, it achieved high accuracies in the binary melanoma/nevus classification task, but it also performed comparably to dermatologists in the multiclass differentiation of various skin diseases. However, proof of the basic applicability and utility of such systems in clinical practice is still pending. Prerequisites that remain to be established before such diagnostic systems can be translated into routine dermatological practice are means that allow users to comprehend the system's decisions, as well as uniformly high performance of the algorithms on image data from other hospitals and practices. At present, evidence is accumulating that computer-aided diagnosis systems could provide their greatest benefit as assistance systems, since studies indicate that a combination of human and machine achieves the best results. Diagnostic systems based on artificial intelligence can detect morphological characteristics quickly, quantitatively, objectively and reproducibly, and could thus provide a more objective analytical basis in addition to medical experience.


Subjects
Melanoma, Skin Neoplasms, Humans, Artificial Intelligence, Skin Neoplasms/diagnosis, Skin Neoplasms/pathology, Melanoma/diagnosis, Algorithms, Dermoscopy/methods, Sensitivity and Specificity, Cutaneous Malignant Melanoma
3.
BJU Int; 128(3): 352-360, 2021 09.
Article in English | MEDLINE | ID: mdl-33706408

ABSTRACT

OBJECTIVE: To develop a new digital biomarker based on the analysis of primary tumour tissue by a convolutional neural network (CNN) to predict lymph node metastasis (LNM) in a cohort matched for already established risk factors. PATIENTS AND METHODS: Haematoxylin and eosin (H&E) stained primary tumour slides from 218 patients (102 N+; 116 N0), matched for Gleason score, tumour size, venous invasion, perineural invasion and age, who underwent radical prostatectomy were selected to train a CNN and evaluate its ability to predict LN status. RESULTS: With 10 models trained on the same data, a mean area under the receiver operating characteristic curve (AUROC) of 0.68 (95% confidence interval [CI] 0.678-0.682) and a mean balanced accuracy of 61.37% (95% CI 60.05-62.69%) were achieved. The mean sensitivity and specificity were 53.09% (95% CI 49.77-56.41%) and 69.65% (95% CI 68.21-71.1%), respectively. These results were confirmed via cross-validation. The probability score for LNM prediction was significantly higher on image sections from N+ samples (mean [SD] N+ probability score 0.58 [0.17] vs 0.47 [0.15] N0 probability score, P = 0.002). In multivariable analysis, the probability score of the CNN (odds ratio [OR] 1.04 per percentage probability, 95% CI 1.02-1.08; P = 0.04) and lymphovascular invasion (OR 11.73, 95% CI 3.96-35.7; P < 0.001) proved to be independent predictors of LNM. CONCLUSION: In our present study, CNN-based image analyses showed promising results as a potential novel low-cost method to extract relevant prognostic information directly from H&E histology to predict the LN status of patients with prostate cancer. Our ubiquitously available technique might contribute to improved LN status prediction.
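
For readers who want to see how a CNN-derived probability score can enter a multivariable model like the one reported above, here is a hedged sketch using statsmodels; the data are simulated and the variable names (cnn_probability_pct, lymphovascular_invasion, ln_metastasis) are illustrative assumptions, not the study's dataset.

```python
# Sketch with simulated data: a CNN probability score and lymphovascular invasion
# as predictors of lymph node metastasis in a multivariable logistic regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "cnn_probability_pct": rng.uniform(20, 90, n),        # CNN score in percent
    "lymphovascular_invasion": rng.integers(0, 2, n),
})
# Toy outcome loosely coupled to both predictors.
logits = -4 + 0.04 * df["cnn_probability_pct"] + 1.5 * df["lymphovascular_invasion"]
df["ln_metastasis"] = rng.binomial(1, 1 / (1 + np.exp(-logits.to_numpy())))

exog = sm.add_constant(df[["cnn_probability_pct", "lymphovascular_invasion"]])
result = sm.Logit(df["ln_metastasis"], exog).fit(disp=False)
print(np.exp(result.params))      # odds ratios, analogous to the ORs reported in the abstract
print(np.exp(result.conf_int()))  # 95% CIs on the odds-ratio scale
```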


Subjects
Deep Learning, Lymphatic Metastasis, Neural Networks (Computer), Prostatic Neoplasms/pathology, Aged, Humans, Male, Middle Aged, Neoplasm Grading, Prognosis, Retrospective Studies
4.
J Med Internet Res; 23(3): e21695, 2021 03 25.
Article in English | MEDLINE | ID: mdl-33764307

ABSTRACT

BACKGROUND: Studies have shown that artificial intelligence achieves similar or better performance than dermatologists in specific dermoscopic image classification tasks. However, artificial intelligence is susceptible to the influence of confounding factors within images (eg, skin markings), which can lead to false diagnoses of cancerous skin lesions. Image segmentation can remove lesion-adjacent confounding factors but greatly changes the image representation. OBJECTIVE: The aim of this study was to compare the performance of 2 image classification workflows in which images were either segmented or left unprocessed before the subsequent training and evaluation of a binary skin lesion classifier. METHODS: Separate binary skin lesion classifiers (nevus vs melanoma) were trained and evaluated on segmented and unsegmented dermoscopic images. For a more informative result, separate classifiers were trained on 2 distinct training data sets (human against machine [HAM] and International Skin Imaging Collaboration [ISIC]). Each training run was repeated 5 times. The mean performance of the 5 runs was evaluated on a multi-source test set (n=688) consisting of a holdout and an external component. RESULTS: Our findings showed that when trained on HAM, the segmented classifiers showed a higher overall balanced accuracy (75.6% [SD 1.1%]) than the unsegmented classifiers (66.7% [SD 3.2%]), a difference that was significant in 4 out of 5 runs (P<.001). The overall balanced accuracy was numerically higher for the unsegmented ISIC classifiers (78.3% [SD 1.8%]) than for the segmented ISIC classifiers (77.4% [SD 1.5%]), a difference that was significant in 1 out of 5 runs (P=.004). CONCLUSIONS: Image segmentation does not result in an overall performance decrease, but it does beneficially remove lesion-adjacent confounding factors. It is therefore a viable option for addressing the negative impact that confounding factors have on deep learning models in dermatology. However, the segmentation step might introduce new pitfalls, which require further investigation.
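
A small sketch of the segmentation step compared in this study: everything outside a binary lesion mask is blanked before the image is passed to the classifier. The mask and image here are synthetic placeholders; the study's actual segmentation model is not reproduced.

```python
# Sketch: remove lesion-adjacent confounders (markings, rulers, hair) with a binary mask.
import numpy as np

def apply_lesion_mask(image: np.ndarray, mask: np.ndarray, fill_value: int = 0) -> np.ndarray:
    """Blank every pixel outside the lesion mask (image: H x W x 3, mask: H x W in {0, 1})."""
    segmented = image.copy()
    segmented[mask == 0] = fill_value
    return segmented

image = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)  # toy dermoscopic image
mask = np.zeros((224, 224), dtype=np.uint8)
mask[56:168, 56:168] = 1                                               # toy central lesion region
segmented = apply_lesion_mask(image, mask)
print(segmented.shape, segmented[0, 0], segmented[112, 112])           # border blanked, centre kept
```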


Subjects
Melanoma, Skin Neoplasms, Algorithms, Artificial Intelligence, Dermoscopy, Humans, Neural Networks (Computer), Skin Neoplasms/diagnostic imaging
5.
J Med Internet Res; 23(2): e23436, 2021 02 02.
Article in English | MEDLINE | ID: mdl-33528370

ABSTRACT

BACKGROUND: An increasing number of studies within digital pathology show the potential of artificial intelligence (AI) to diagnose cancer using histological whole slide images, which requires large and diverse data sets. While diversification may result in more generalizable AI-based systems, it can also introduce hidden variables. If neural networks are able to distinguish/learn hidden variables, these variables can introduce batch effects that compromise the accuracy of classification systems. OBJECTIVE: The objective of the study was to analyze the learnability of an exemplary selection of hidden variables (patient age, slide preparation date, slide origin, and scanner type) that are commonly found in whole slide image data sets in digital pathology and could create batch effects. METHODS: We trained four separate convolutional neural networks (CNNs) to learn four variables using a data set of digitized whole slide melanoma images from five different institutes. For robustness, each CNN training and evaluation run was repeated multiple times, and a variable was only considered learnable if the lower bound of the 95% confidence interval of its mean balanced accuracy was above 50.0%. RESULTS: A mean balanced accuracy above 50.0% was achieved for all four tasks, even when considering the lower bound of the 95% confidence interval. Performance between tasks showed wide variation, ranging from 56.1% (slide preparation date) to 100% (slide origin). CONCLUSIONS: Because all of the analyzed hidden variables are learnable, they have the potential to create batch effects in dermatopathology data sets, which negatively affect AI-based classification systems. Practitioners should be aware of these and similar pitfalls when developing and evaluating such systems and address these and potentially other batch effect variables in their data sets through sufficient data set stratification.
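
The learnability criterion used in this study can be written down in a few lines; the sketch below checks whether the lower bound of the 95% confidence interval of the mean balanced accuracy exceeds 50%. The per-run accuracies are invented placeholders for repeated CNN evaluations, and the t-based interval is an assumption about how such a CI might be computed.

```python
# Sketch of the learnability check: is the lower bound of the 95% CI of the
# mean balanced accuracy above chance level (50%)?
import numpy as np
from scipy import stats

def is_learnable(balanced_accuracies, chance_level=0.50, confidence=0.95):
    accs = np.asarray(balanced_accuracies, dtype=float)
    mean = accs.mean()
    sem = stats.sem(accs)
    lower, _ = stats.t.interval(confidence, df=len(accs) - 1, loc=mean, scale=sem)
    return lower > chance_level

print(is_learnable([0.58, 0.55, 0.57, 0.54, 0.56]))  # clearly above chance -> True
print(is_learnable([0.52, 0.49, 0.51, 0.48, 0.50]))  # indistinguishable from chance -> False
```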


Subjects
Artificial Intelligence/standards, Deep Learning/standards, Neural Networks (Computer), Pathology/methods, Humans
6.
J Med Internet Res; 23(7): e20708, 2021 07 02.
Article in English | MEDLINE | ID: mdl-34255646

ABSTRACT

BACKGROUND: Recent years have witnessed a substantial improvement in the accuracy of skin cancer classification using convolutional neural networks (CNNs). CNNs perform on par with or better than dermatologists on single-image classification tasks. However, in clinical practice, dermatologists also use other patient data beyond the visual aspects present in a digitized image, further increasing their diagnostic accuracy. Several pilot studies have recently investigated the effects of integrating different subtypes of patient data into CNN-based skin cancer classifiers. OBJECTIVE: This systematic review focuses on the current research investigating the impact of merging information from image features and patient data on the performance of CNN-based skin cancer image classification. This study aims to explore the potential of this field of research by evaluating the types of patient data used, the ways in which the nonimage data are encoded and merged with the image features, and the impact of the integration on the classifier performance. METHODS: Google Scholar, PubMed, MEDLINE, and ScienceDirect were screened for peer-reviewed studies published in English that dealt with the integration of patient data within a CNN-based skin cancer classification. The search terms skin cancer classification, convolutional neural network(s), deep learning, lesions, melanoma, metadata, clinical information, and patient data were combined. RESULTS: A total of 11 publications fulfilled the inclusion criteria. All of them reported an overall improvement in different skin lesion classification tasks with patient data integration. The most commonly used patient data were age, sex, and lesion location. The patient data were mostly one-hot encoded. The encoded patient data were processed with deep learning methods of varying complexity before and after being fused with the image features for a combined classifier. CONCLUSIONS: This study indicates the potential benefits of integrating patient data into CNN-based diagnostic algorithms. However, how exactly the individual patient data enhance classification performance, especially in the case of multiclass classification problems, is still unclear. Moreover, a substantial fraction of the patient data used by dermatologists remains to be analyzed in the context of CNN-based skin cancer classification. Further exploratory analyses in this promising field may optimize patient data integration into CNN-based skin cancer diagnostics for patients' benefit.
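
The fusion pattern most often found by this review (one-hot encoded metadata concatenated with CNN image features) can be sketched as below. Feature dimensions, the small metadata MLP, and the three metadata fields are illustrative assumptions rather than any specific study's architecture.

```python
# Sketch: concatenate one-hot encoded patient data with CNN image features before the classifier head.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, image_feature_dim=512, metadata_dim=12, num_classes=2):
        super().__init__()
        self.metadata_mlp = nn.Sequential(nn.Linear(metadata_dim, 32), nn.ReLU())
        self.head = nn.Linear(image_feature_dim + 32, num_classes)

    def forward(self, image_features, metadata_onehot):
        fused = torch.cat([image_features, self.metadata_mlp(metadata_onehot)], dim=1)
        return self.head(fused)

image_features = torch.randn(4, 512)                  # e.g. pooled CNN features of 4 lesions
metadata = torch.randint(0, 2, (4, 12)).float()       # one-hot age group, sex and lesion location
logits = FusionClassifier()(image_features, metadata)
print(logits.shape)                                   # torch.Size([4, 2])
```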


Subjects
Melanoma, Skin Neoplasms, Dermoscopy, Humans, Melanoma/diagnosis, Neural Networks (Computer), Skin Neoplasms/diagnosis
7.
J Med Internet Res; 22(9): e18091, 2020 09 11.
Article in English | MEDLINE | ID: mdl-32915161

ABSTRACT

BACKGROUND: Early detection of melanoma can be lifesaving but this remains a challenge. Recent diagnostic studies have revealed the superiority of artificial intelligence (AI) in classifying dermoscopic images of melanoma and nevi, concluding that these algorithms should assist a dermatologist's diagnoses. OBJECTIVE: The aim of this study was to investigate whether AI support improves the accuracy and overall diagnostic performance of dermatologists in the dichotomous image-based discrimination between melanoma and nevus. METHODS: Twelve board-certified dermatologists were presented disjoint sets of 100 unique dermoscopic images of melanomas and nevi (total of 1200 unique images), and they had to classify the images based on personal experience alone (part I) and with the support of a trained convolutional neural network (CNN, part II). Additionally, dermatologists were asked to rate their confidence in their final decision for each image. RESULTS: While the mean specificity of the dermatologists based on personal experience alone remained almost unchanged (70.6% vs 72.4%; P=.54) with AI support, the mean sensitivity and mean accuracy increased significantly (59.4% vs 74.6%; P=.003 and 65.0% vs 73.6%; P=.002, respectively) with AI support. Out of the 10% (10/94; 95% CI 8.4%-11.8%) of cases where dermatologists were correct and AI was incorrect, dermatologists on average changed to the incorrect answer for 39% (4/10; 95% CI 23.2%-55.6%) of cases. When dermatologists were incorrect and AI was correct (25/94, 27%; 95% CI 24.0%-30.1%), dermatologists changed their answers to the correct answer for 46% (11/25; 95% CI 33.1%-58.4%) of cases. Additionally, the dermatologists' average confidence in their decisions increased when the CNN confirmed their decision and decreased when the CNN disagreed, even when the dermatologists were correct. Reported values are based on the mean of all participants. Whenever absolute values are shown, the denominator and numerator are approximations as every dermatologist ended up rating a varying number of images due to a quality control step. CONCLUSIONS: The findings of our study show that AI support can improve the overall accuracy of the dermatologists in the dichotomous image-based discrimination between melanoma and nevus. This supports the argument for AI-based tools to aid clinicians in skin lesion classification and provides a rationale for studies of such classifiers in real-life settings, wherein clinicians can integrate additional information such as patient age and medical history into their decisions.


Subjects
Artificial Intelligence/standards, Dermatologists/standards, Dermoscopy/methods, Diagnostic Imaging/classification, Melanoma/diagnostic imaging, Skin Neoplasms/diagnostic imaging, Humans, Internet, Melanoma/diagnosis, Skin Neoplasms/diagnosis, Surveys and Questionnaires
8.
J Dtsch Dermatol Ges; 18(11): 1236-1243, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32841508

ABSTRACT

Malignant melanoma is the skin tumor that causes the most deaths in Germany. At an early stage, melanoma is well treatable, so early detection is essential. However, the skin cancer screening program in Germany has been criticized because, although melanomas have been diagnosed more frequently since the introduction of the program, mortality from malignant melanoma has not decreased. This indicates that the observed increase in melanoma diagnoses may be due to overdiagnosis, i.e. to the detection of lesions that would never have created serious health problems for the patients. One of the reasons is the challenging distinction between some benign and malignant lesions. In addition, there may be lesions that are biologically equivocal, and other lesions that are classified as malignant according to current criteria but grow so slowly that they would never have posed a threat to the patient's life. So far, these "indolent" melanomas cannot be identified reliably due to a lack of biomarkers. Moreover, the likelihood that an in-situ melanoma will progress to an invasive tumor still cannot be determined with any certainty. When benign lesions are diagnosed as melanoma, the consequences are unnecessary psychological and physical stress for the affected patients and avoidable therapy costs. Conversely, underdiagnosis in the sense of overlooked melanomas can adversely affect patients' prognoses and may necessitate more intense therapies. Novel diagnostic options could reduce the number of over- and underdiagnoses and contribute to more objective diagnoses in borderline cases. One strategy that has yielded promising results in pilot studies is the use of artificial intelligence-based diagnostic tools. However, these applications still await translation into clinical and pathological routine.


Subjects
Melanoma, Skin Neoplasms, Artificial Intelligence, Germany, Humans, Medical Overuse
12.
Med Monatsschr Pharm; 40(2): 48-54, 2017 Feb.
Article in English, German | MEDLINE | ID: mdl-29952494

ABSTRACT

Cancer is characterized by the proliferation of cells that have managed to evade central endogenous control mechanisms. Cancers are grouped according to their organ or tissue of origin, but increasingly also based on the molecular characteristics of the respective cancer cells. Due to the rapid technological advances of recent years, it is now possible to analyze the molecular makeup of different cancer types in detail within short time periods. The accumulating knowledge about the development and progression of cancer can be used to develop more precise diagnostics and more effective and/or less toxic cancer therapies. In the long run, the goal is to offer every cancer patient a therapeutic regimen that is optimally tailored to their individual disease and situation.


Subjects
Antineoplastic Agents/therapeutic use, Neoplasms/drug therapy, Antineoplastic Drug Resistance, Humans, Neoplasms/pathology, Precision Medicine
14.
PLoS One; 19(1): e0297146, 2024.
Article in English | MEDLINE | ID: mdl-38241314

ABSTRACT

Pathologists routinely use immunohistochemical (IHC)-stained tissue slides against MelanA in addition to hematoxylin and eosin (H&E)-stained slides to improve their accuracy in diagnosing melanomas. The use of diagnostic deep learning (DL)-based support systems for the automated examination of tissue morphology and cellular composition has been well studied for standard H&E-stained tissue slides. In contrast, few studies have analyzed IHC slides using DL. Therefore, we investigated the separate and joint performance of ResNets trained on MelanA and corresponding H&E-stained slides. The MelanA classifier achieved an area under the receiver operating characteristic curve (AUROC) of 0.82 and 0.74 on out-of-distribution (OOD) datasets, similar to the H&E-based benchmark classification of 0.81 and 0.75, respectively. A combined classifier using MelanA and H&E achieved AUROCs of 0.85 and 0.81 on the OOD datasets. DL MelanA-based assistance systems show the same performance as the benchmark H&E classification and may be improved by multi-stain classification to assist pathologists in their clinical routine.
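
One simple way to obtain a combined MelanA/H&E classifier, as evaluated above, is late fusion of the two stain-specific probability outputs; the sketch below averages them and scores each variant with AUROC. The probabilities and labels are toy values, and averaging is only one possible combination strategy.

```python
# Sketch: late fusion of a MelanA-based and an H&E-based classifier by averaging probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])                       # 1 = melanoma, 0 = nevus
p_melana = np.array([0.2, 0.4, 0.7, 0.6, 0.3, 0.8, 0.5, 0.1])     # MelanA ResNet probabilities
p_he = np.array([0.3, 0.2, 0.6, 0.8, 0.4, 0.7, 0.6, 0.2])         # H&E ResNet probabilities

p_combined = (p_melana + p_he) / 2
for name, probs in [("MelanA", p_melana), ("H&E", p_he), ("combined", p_combined)]:
    print(name, round(roc_auc_score(y_true, probs), 3))
```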


Subjects
Deep Learning, Melanoma, Humans, Melanoma/diagnosis, Immunohistochemistry, MART-1 Antigen, ROC Curve
15.
JAMA Dermatol; 160(3): 303-311, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38324293

ABSTRACT

Importance: The development of artificial intelligence (AI)-based melanoma classifiers typically calls for large, centralized datasets, requiring hospitals to give away their patient data, which raises serious privacy concerns. To address this concern, decentralized federated learning has been proposed, where classifier development is distributed across hospitals. Objective: To investigate whether a more privacy-preserving federated learning approach can achieve comparable diagnostic performance to a classical centralized (ie, single-model) and ensemble learning approach for AI-based melanoma diagnostics. Design, Setting, and Participants: This multicentric, single-arm diagnostic study developed a federated model for melanoma-nevus classification using histopathological whole-slide images prospectively acquired at 6 German university hospitals between April 2021 and February 2023 and benchmarked it using both a holdout and an external test dataset. Data analysis was performed from February to April 2023. Exposures: All whole-slide images were retrospectively analyzed by an AI-based classifier without influencing routine clinical care. Main Outcomes and Measures: The area under the receiver operating characteristic curve (AUROC) served as the primary end point for evaluating the diagnostic performance. Secondary end points included balanced accuracy, sensitivity, and specificity. Results: The study included 1025 whole-slide images of clinically melanoma-suspicious skin lesions from 923 patients, consisting of 388 histopathologically confirmed invasive melanomas and 637 nevi. The median (range) age at diagnosis was 58 (18-95) years for the training set, 57 (18-93) years for the holdout test dataset, and 61 (18-95) years for the external test dataset; the median (range) Breslow thickness was 0.70 (0.10-34.00) mm, 0.70 (0.20-14.40) mm, and 0.80 (0.30-20.00) mm, respectively. The federated approach (0.8579; 95% CI, 0.7693-0.9299) performed significantly worse than the classical centralized approach (0.9024; 95% CI, 0.8379-0.9565) in terms of AUROC on a holdout test dataset (pairwise Wilcoxon signed-rank, P < .001) but performed significantly better (0.9126; 95% CI, 0.8810-0.9412) than the classical centralized approach (0.9045; 95% CI, 0.8701-0.9331) on an external test dataset (pairwise Wilcoxon signed-rank, P < .001). Notably, the federated approach performed significantly worse than the ensemble approach on both the holdout (0.8867; 95% CI, 0.8103-0.9481) and external test dataset (0.9227; 95% CI, 0.8941-0.9479). Conclusions and Relevance: The findings of this diagnostic study suggest that federated learning is a viable approach for the binary classification of invasive melanomas and nevi on a clinically representative distributed dataset. Federated learning can improve privacy protection in AI-based melanoma diagnostics while simultaneously promoting collaboration across institutions and countries. Moreover, it may have the potential to be extended to other image classification tasks in digital cancer histopathology and beyond.
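
The core idea of the federated approach evaluated here is that only model parameters, never whole-slide images, leave the participating hospitals; a coordinating server averages the locally trained weights. The sketch below shows plain federated averaging with tiny placeholder models; the study's actual architecture, training schedule, and aggregation details are not reproduced.

```python
# Sketch: federated averaging (FedAvg-style) of locally trained site models.
import copy
import torch
import torch.nn as nn

def federated_average(site_models):
    """Return a model whose parameters are the element-wise mean over all site models."""
    global_model = copy.deepcopy(site_models[0])
    global_state = global_model.state_dict()
    for key in global_state:
        global_state[key] = torch.stack(
            [m.state_dict()[key].float() for m in site_models]
        ).mean(dim=0)
    global_model.load_state_dict(global_state)
    return global_model

# Stand-ins for classifiers trained locally at 6 university hospitals.
site_models = [nn.Linear(16, 2) for _ in range(6)]
global_model = federated_average(site_models)
print(global_model.weight.shape)   # torch.Size([2, 16]); the images stayed at the sites
```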


Subjects
Dermatology, Melanoma, Nevus, Skin Neoplasms, Humans, Melanoma/diagnosis, Artificial Intelligence, Retrospective Studies, Skin Neoplasms/diagnosis, Nevus/diagnosis
16.
Nat Commun; 15(1): 524, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38225244

ABSTRACT

Artificial intelligence (AI) systems have been shown to help dermatologists diagnose melanoma more accurately; however, they lack transparency, which hinders user acceptance. Explainable AI (XAI) methods can help to increase transparency, yet they often lack precise, domain-specific explanations. Moreover, the impact of XAI methods on dermatologists' decisions has not yet been evaluated. Building upon previous research, we introduce an XAI system that provides precise and domain-specific explanations alongside its differential diagnoses of melanomas and nevi. Through a three-phase study, we assess its impact on dermatologists' diagnostic accuracy, diagnostic confidence, and trust in the XAI support. Our results show strong alignment between XAI and dermatologist explanations. We also show that dermatologists' confidence in their diagnoses and their trust in the support system increase significantly with XAI compared with conventional AI. This study highlights dermatologists' willingness to adopt such XAI systems, promoting future use in the clinic.


Subjects
Melanoma, Trust, Humans, Artificial Intelligence, Dermatologists, Melanoma/diagnosis, Differential Diagnosis
17.
Eur J Cancer; 183: 131-138, 2023 04.
Article in English | MEDLINE | ID: mdl-36854237

ABSTRACT

BACKGROUND: In machine learning, multimodal classifiers can provide more generalised performance than unimodal classifiers. In clinical practice, physicians usually also rely on a range of information from different examinations for diagnosis. In this study, we used BRAF mutation status prediction in melanoma as a model system to analyse the contribution of different data types in a combined classifier because BRAF status can be determined accurately by sequencing as the current gold standard, thus nearly eliminating label noise. METHODS: We trained a deep learning-based classifier by combining individually trained random forests of image, clinical and methylation data to predict BRAF-V600 mutation status in primary and metastatic melanomas of The Cancer Genome Atlas cohort. RESULTS: With our multimodal approach, we achieved an area under the receiver operating characteristic curve of 0.80, whereas the individual classifiers yielded areas under the receiver operating characteristic curve of 0.63 (histopathologic image data), 0.66 (clinical data) and 0.66 (methylation data) on an independent data set. CONCLUSIONS: Our combined approach can predict BRAF status to some extent by identifying BRAF-V600 specific patterns at the histologic, clinical and epigenetic levels. The multimodal classifiers have improved generalisability in predicting BRAF mutation status.
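
As a rough analogue of the multimodal setup described above, the sketch below trains one random forest per modality and fuses their predicted probabilities with a small meta-classifier. The simulated features, the split, and the logistic-regression fusion layer are all assumptions standing in for the study's actual pipeline.

```python
# Sketch with simulated data: per-modality random forests fused by a meta-classifier
# for a binary BRAF-V600 mutation status prediction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
y = rng.integers(0, 2, n)                                        # 1 = BRAF-V600 mutated, 0 = wild type
modalities = {
    "image": rng.normal(size=(n, 50)) + 0.2 * y[:, None],        # histopathologic image features
    "clinical": rng.normal(size=(n, 10)) + 0.2 * y[:, None],
    "methylation": rng.normal(size=(n, 100)) + 0.1 * y[:, None],
}

train, test = slice(0, 200), slice(200, n)
probs = []
for X in modalities.values():
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[train], y[train])
    probs.append(rf.predict_proba(X)[:, 1])        # one probability column per modality
stacked = np.column_stack(probs)

meta = LogisticRegression().fit(stacked[train], y[train])
print("held-out accuracy:", round(meta.score(stacked[test], y[test]), 3))
```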


Subjects
Melanoma, Skin Neoplasms, Humans, Proto-Oncogene Proteins B-raf/genetics, Melanoma/pathology, Skin Neoplasms/pathology, Mutation, Genetic Epigenesis
18.
Eur J Cancer; 195: 113390, 2023 12.
Article in English | MEDLINE | ID: mdl-37890350

ABSTRACT

BACKGROUND: Sentinel lymph node (SLN) status is a clinically important prognostic biomarker in breast cancer and is used to guide therapy, especially for hormone receptor-positive, HER2-negative cases. However, invasive lymph node staging is increasingly omitted before therapy, and studies such as the randomised Intergroup Sentinel Mamma (INSEMA) trial address the potential for further de-escalation of axillary surgery. Therefore, it would be helpful to accurately predict the pretherapeutic sentinel status using medical images. METHODS: Using a ResNet-50 architecture pretrained on ImageNet and a previously successful strategy, we trained deep learning (DL)-based image analysis algorithms to predict sentinel status on hematoxylin/eosin-stained images of predominantly luminal, primary breast tumours from the INSEMA trial and three additional, independent cohorts (The Cancer Genome Atlas (TCGA) and cohorts from the University hospitals of Mannheim and Regensburg), and compared their performance with that of a logistic regression using clinical data only. Performance on an INSEMA hold-out set was investigated in a blinded manner. RESULTS: None of the generated image analysis algorithms yielded an area under the receiver operating characteristic curve significantly better than random on the test sets, including the hold-out test set from INSEMA. In contrast, the logistic regression fitted on the Mannheim cohort retained better-than-random performance on INSEMA and Regensburg. Including the image analysis model output in the logistic regression did not improve performance further on INSEMA. CONCLUSIONS: Employing DL-based image analysis on histological slides, we could not predict SLN status for unseen cases in the INSEMA trial and other predominantly luminal cohorts.
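
The two model families compared in this study can be set up in a few lines: an ImageNet-pretrained ResNet-50 with its head replaced for the binary sentinel-node endpoint, and a logistic regression on clinical variables only. The clinical feature set below is simulated and its composition is an assumption; pretrained weights are skipped so the sketch runs offline.

```python
# Sketch: image model (ResNet-50 with a 2-class head) vs. a clinical-data-only baseline.
import numpy as np
import torch.nn as nn
from torchvision.models import resnet50
from sklearn.linear_model import LogisticRegression

# Image branch: in the actual setup, ImageNet weights would be loaded via
# torchvision's ResNet50_Weights.IMAGENET1K_V1 (omitted here to avoid a download).
image_model = resnet50(weights=None)
image_model.fc = nn.Linear(image_model.fc.in_features, 2)   # pN0 vs pN+ head

# Clinical baseline: logistic regression on a handful of tabular variables.
rng = np.random.default_rng(0)
X_clinical = rng.normal(size=(100, 4))   # e.g. age, tumour size, grading, lymphovascular invasion
y = rng.integers(0, 2, 100)
baseline = LogisticRegression().fit(X_clinical, y)
print("baseline training accuracy:", round(baseline.score(X_clinical, y), 3))
```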


Subjects
Breast Neoplasms, Deep Learning, Lymphadenopathy, Sentinel Lymph Node, Female, Humans, Axilla/pathology, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/surgery, Breast Neoplasms/genetics, Lymph Node Excision/methods, Lymph Nodes/pathology, Lymphatic Metastasis/pathology, Sentinel Lymph Node/pathology, Sentinel Lymph Node Biopsy/methods
19.
NPJ Precis Oncol; 7(1): 98, 2023 Sep 26.
Article in English | MEDLINE | ID: mdl-37752266

ABSTRACT

Studies have shown that colorectal cancer (CRC) prognosis can be predicted by deep learning-based analysis of histological tissue sections of the primary tumor. So far, this has been achieved using a binary prediction. Survival curves might contain more detailed information and thus enable a more fine-grained risk prediction. Therefore, we established survival curve-based CRC survival predictors and benchmarked them against standard binary survival predictors, comparing their performance extensively on the clinical high- and low-risk subsets of one internal and three external cohorts. Survival curve-based risk prediction achieved a very similar risk stratification to binary risk prediction for this task. Exchanging other components of the pipeline, namely the input tissue and the feature extractor, had largely identical effects on model performance independently of the type of risk prediction. An ensemble of all survival curve-based models exhibited a more robust performance, as did a similar ensemble based on binary risk prediction. Patients could be further stratified within clinical risk groups. However, performance still varied across cohorts, indicating limited generalization of all investigated image analysis pipelines, whereas models using clinical data performed robustly on all cohorts.
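
To make the comparison between curve-based and binary predictors concrete, a predicted survival curve can be collapsed into a single risk score (for example, a restricted mean survival time) and then dichotomised into risk groups. The curves below are invented placeholders, not model output, and the RMST-based reduction is one possible choice among several.

```python
# Sketch: reduce predicted survival curves to a risk score and a binary risk group.
import numpy as np

time_grid_years = np.array([1, 2, 3, 4, 5])
predicted_curves = np.array([          # P(survival > t) per patient, one row each
    [0.98, 0.95, 0.92, 0.90, 0.88],    # flat curve  -> low risk
    [0.90, 0.75, 0.60, 0.45, 0.35],    # steep curve -> high risk
    [0.95, 0.88, 0.80, 0.74, 0.70],
])

# Restricted mean survival time over the grid via the trapezoidal rule (larger value = lower risk).
dt = np.diff(time_grid_years)
rmst = ((predicted_curves[:, :-1] + predicted_curves[:, 1:]) / 2 * dt).sum(axis=1)
risk_group = np.where(rmst < np.median(rmst), "high risk", "low risk")
for score, group in zip(rmst.round(2), risk_group):
    print(score, group)
```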

20.
Eur J Cancer; 160: 80-91, 2022 01.
Article in English | MEDLINE | ID: mdl-34810047

ABSTRACT

BACKGROUND: Over the past decade, the development of molecular high-throughput methods (omics) has advanced rapidly and provided new insights for cancer research. In parallel, deep learning approaches have revealed enormous potential for medical image analysis, especially in digital pathology. Combining image and omics data with deep learning tools may enable the discovery of new cancer biomarkers and a more precise prediction of patient prognosis. This systematic review addresses different methods for the multimodal fusion of convolutional neural network-based image analyses with omics data, focussing on the impact of data combination on classification performance. METHODS: PubMed was screened for peer-reviewed articles published in English between January 2015 and June 2021 by two independent researchers. Search terms related to deep learning, digital pathology, omics, and multimodal fusion were combined. RESULTS: We identified a total of 11 studies meeting the inclusion criteria, namely studies that used convolutional neural networks for haematoxylin and eosin image analysis of patients with cancer in combination with integrated omics data. Publications were categorised according to their endpoints: 7 studies focused on survival analysis and 4 studies on the prediction of cancer subtypes, malignancy or microsatellite instability with spatial analysis. CONCLUSIONS: Image-based classifiers already show high performance in prognostic and predictive cancer diagnostics. The integration of omics data led to improved performance in all studies described here. However, these are very early studies that still require external validation to demonstrate their generalisability and robustness. Further and more comprehensive studies with larger sample sizes are needed to evaluate performance and determine clinical benefits.


Subjects
Deep Learning/standards, Genomics/methods, Computer-Assisted Image Processing/methods, Neoplasms/genetics, Humans, Neoplasms/pathology