Results 1 - 20 of 20
1.
World J Urol ; 41(8): 2233-2241, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37382622

ABSTRACT

PURPOSE: To develop and validate an interpretable deep learning model to predict overall and disease-specific survival (OS/DSS) in clear cell renal cell carcinoma (ccRCC). METHODS: Digitised haematoxylin and eosin-stained slides from The Cancer Genome Atlas were used as a training set for a vision transformer (ViT) to extract image features with a self-supervised model called DINO (self-distillation with no labels). Extracted features were used in Cox regression models to prognosticate OS and DSS. Kaplan-Meier for univariable evaluation and Cox regression analyses for multivariable evaluation of the DINO-ViT risk groups were performed for prediction of OS and DSS. For validation, a cohort from a tertiary care centre was used. RESULTS: A significant risk stratification was achieved in univariable analysis for OS and DSS in the training (n = 443, log rank test, p < 0.01) and validation set (n = 266, p < 0.01). In multivariable analysis, including age, metastatic status, tumour size and grading, the DINO-ViT risk stratification was a significant predictor for OS (hazard ratio [HR] 3.03; 95%-confidence interval [95%-CI] 2.11-4.35; p < 0.01) and DSS (HR 4.90; 95%-CI 2.78-8.64; p < 0.01) in the training set but only for DSS in the validation set (HR 2.31; 95%-CI 1.15-4.65; p = 0.02). DINO-ViT visualisation showed that features were mainly extracted from nuclei, cytoplasm, and peritumoural stroma, demonstrating good interpretability. CONCLUSION: The DINO-ViT can identify high-risk patients using histological images of ccRCC. This model might improve individual risk-adapted renal cancer therapy in the future.
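The univariable evaluation described above rests on Kaplan-Meier estimates per risk group. As an illustration only (not the authors' implementation, and with invented follow-up data), a minimal pure-Python Kaplan-Meier estimator can be sketched as:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimator.

    times:  observed follow-up times (e.g. months)
    events: 1 = event observed (death), 0 = censored
    Returns a list of (time, survival_probability) step points.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = 0
        removed = 0
        # group all subjects sharing the same observed time
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:  # survival only drops at event times
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed
    return curve

# two hypothetical risk groups (times in months, 1 = death)
high_risk = kaplan_meier([5, 8, 12, 20, 30], [1, 1, 1, 0, 1])
low_risk = kaplan_meier([15, 40, 50, 60, 70], [0, 1, 0, 0, 1])
```

Comparing such curves between DINO-ViT risk groups (e.g. with a log-rank test) is the univariable analysis the abstract reports.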


Subjects
Renal Cell Carcinoma, Kidney Neoplasms, Humans, Renal Cell Carcinoma/pathology, Kidney Neoplasms/pathology, Proportional Hazards Models, Risk Factors, Endoscopy, Prognosis
2.
BJU Int ; 128(3): 352-360, 2021 09.
Article in English | MEDLINE | ID: mdl-33706408

ABSTRACT

OBJECTIVE: To develop a new digital biomarker based on the analysis of primary tumour tissue by a convolutional neural network (CNN) to predict lymph node metastasis (LNM) in a cohort matched for already established risk factors. PATIENTS AND METHODS: Haematoxylin and eosin (H&E) stained primary tumour slides from 218 patients (102 N+; 116 N0), matched for Gleason score, tumour size, venous invasion, perineural invasion and age, who underwent radical prostatectomy were selected to train a CNN and evaluate its ability to predict LN status. RESULTS: With 10 models trained with the same data, a mean area under the receiver operating characteristic curve (AUROC) of 0.68 (95% confidence interval [CI] 0.678-0.682) and a mean balanced accuracy of 61.37% (95% CI 60.05-62.69%) was achieved. The mean sensitivity and specificity was 53.09% (95% CI 49.77-56.41%) and 69.65% (95% CI 68.21-71.1%), respectively. These results were confirmed via cross-validation. The probability score for LNM prediction was significantly higher on image sections from N+ samples (mean [SD] N+ probability score 0.58 [0.17] vs 0.47 [0.15] N0 probability score, P = 0.002). In multivariable analysis, the probability score of the CNN (odds ratio [OR] 1.04 per percentage probability, 95% CI 1.02-1.08; P = 0.04) and lymphovascular invasion (OR 11.73, 95% CI 3.96-35.7; P < 0.001) proved to be independent predictors for LNM. CONCLUSION: In our present study, CNN-based image analyses showed promising results as a potential novel low-cost method to extract relevant prognostic information directly from H&E histology to predict the LN status of patients with prostate cancer. Our ubiquitously available technique might contribute to an improved LN status prediction.
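The AUROC and balanced accuracy reported above can be computed directly from predictions. A minimal sketch with made-up predictions, using the rank-based identity for the AUROC (probability that a random positive scores above a random negative, ties counted as 0.5):

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    total = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                total += 1.0
            elif p == n:
                total += 0.5
    return total / (len(pos) * len(neg))

def balanced_accuracy(preds, labels):
    """Mean of sensitivity and specificity for binary predictions."""
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    tn = sum(p == 0 and y == 0 for p, y in zip(preds, labels))
    sensitivity = tp / sum(labels)
    specificity = tn / (len(labels) - sum(labels))
    return (sensitivity + specificity) / 2

# hypothetical probability scores and N+/N0 labels
print(auroc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))
```

Balanced accuracy, rather than plain accuracy, is the natural choice here because the N+ and N0 groups differ in size.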


Subjects
Deep Learning, Lymphatic Metastasis, Neural Networks (Computer), Prostatic Neoplasms/pathology, Aged, Humans, Male, Middle Aged, Neoplasm Grading, Prognosis, Retrospective Studies
3.
J Med Internet Res ; 23(3): e21695, 2021 03 25.
Article in English | MEDLINE | ID: mdl-33764307

ABSTRACT

BACKGROUND: Studies have shown that artificial intelligence achieves similar or better performance than dermatologists in specific dermoscopic image classification tasks. However, artificial intelligence is susceptible to the influence of confounding factors within images (eg, skin markings), which can lead to false diagnoses of cancerous skin lesions. Image segmentation can remove lesion-adjacent confounding factors but greatly change the image representation. OBJECTIVE: The aim of this study was to compare the performance of 2 image classification workflows where images were either segmented or left unprocessed before the subsequent training and evaluation of a binary skin lesion classifier. METHODS: Separate binary skin lesion classifiers (nevus vs melanoma) were trained and evaluated on segmented and unsegmented dermoscopic images. For a more informative result, separate classifiers were trained on 2 distinct training data sets (human against machine [HAM] and International Skin Imaging Collaboration [ISIC]). Each training run was repeated 5 times. The mean performance of the 5 runs was evaluated on a multi-source test set (n=688) consisting of a holdout and an external component. RESULTS: Our findings showed that when trained on HAM, the segmented classifiers showed a higher overall balanced accuracy (75.6% [SD 1.1%]) than the unsegmented classifiers (66.7% [SD 3.2%]), which was significant in 4 out of 5 runs (P<.001). The overall balanced accuracy was numerically higher for the unsegmented ISIC classifiers (78.3% [SD 1.8%]) than for the segmented ISIC classifiers (77.4% [SD 1.5%]), which was significantly different in 1 out of 5 runs (P=.004). CONCLUSIONS: Image segmentation does not decrease overall performance, and it beneficially removes lesion-adjacent confounding factors. Thus, it is a viable option for addressing the negative impact that confounding factors have on deep learning models in dermatology. However, the segmentation step might introduce new pitfalls, which require further investigation.
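The "segmented" workflow essentially masks out everything except the lesion before classification. A toy sketch of that masking step (hypothetical 2×2 grayscale image and binary mask, not the study's actual pipeline):

```python
def apply_segmentation_mask(image, mask):
    """Keep pixels inside the lesion mask and zero out the
    lesion-adjacent surroundings before classification."""
    return [[px if keep else 0 for px, keep in zip(row, mask_row)]
            for row, mask_row in zip(image, mask)]

# hypothetical grayscale image and lesion mask (1 = lesion pixel)
masked = apply_segmentation_mask([[5, 7], [9, 3]], [[1, 0], [0, 1]])
```

Zeroing the surroundings removes confounders such as skin markings, but, as the abstract notes, it also changes the image representation the classifier sees.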


Subjects
Melanoma, Skin Neoplasms, Algorithms, Artificial Intelligence, Dermoscopy, Humans, Neural Networks (Computer), Skin Neoplasms/diagnostic imaging
4.
J Med Internet Res ; 23(2): e23436, 2021 02 02.
Article in English | MEDLINE | ID: mdl-33528370

ABSTRACT

BACKGROUND: An increasing number of studies within digital pathology show the potential of artificial intelligence (AI) to diagnose cancer using histological whole slide images, which requires large and diverse data sets. While diversification may result in more generalizable AI-based systems, it can also introduce hidden variables. If neural networks are able to distinguish/learn hidden variables, these variables can introduce batch effects that compromise the accuracy of classification systems. OBJECTIVE: The objective of the study was to analyze the learnability of an exemplary selection of hidden variables (patient age, slide preparation date, slide origin, and scanner type) that are commonly found in whole slide image data sets in digital pathology and could create batch effects. METHODS: We trained four separate convolutional neural networks (CNNs) to learn four variables using a data set of digitized whole slide melanoma images from five different institutes. For robustness, each CNN training and evaluation run was repeated multiple times, and a variable was only considered learnable if the lower bound of the 95% confidence interval of its mean balanced accuracy was above 50.0%. RESULTS: A mean balanced accuracy above 50.0% was achieved for all four tasks, even when considering the lower bound of the 95% confidence interval. Performance between tasks showed wide variation, ranging from 56.1% (slide preparation date) to 100% (slide origin). CONCLUSIONS: Because all of the analyzed hidden variables are learnable, they have the potential to create batch effects in dermatopathology data sets, which negatively affect AI-based classification systems. Practitioners should be aware of these and similar pitfalls when developing and evaluating such systems and address these and potentially other batch effect variables in their data sets through sufficient data set stratification.
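The learnability criterion above (lower bound of the 95% confidence interval of the mean balanced accuracy above 50.0%) can be checked directly from repeated-run results. A hedged sketch with invented run results; the t critical value (2.776 for 5 runs, i.e. 4 degrees of freedom) is passed in explicitly:

```python
import math

def is_learnable(balanced_accuracies, t_crit):
    """A hidden variable counts as learnable if the lower bound of the
    95% confidence interval of its mean balanced accuracy exceeds 0.50."""
    n = len(balanced_accuracies)
    mean = sum(balanced_accuracies) / n
    # sample variance and t-based half-width of the confidence interval
    var = sum((x - mean) ** 2 for x in balanced_accuracies) / (n - 1)
    half_width = t_crit * math.sqrt(var / n)
    return mean - half_width > 0.50

# five hypothetical repeated runs; 2.776 is the t critical value for 4 df
print(is_learnable([0.56, 0.58, 0.55, 0.57, 0.59], 2.776))
```

The point of using the interval's lower bound rather than the mean is robustness: a variable is only flagged when performance above chance survives the run-to-run variation.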


Subjects
Artificial Intelligence/standards, Deep Learning/standards, Neural Networks (Computer), Pathology/methods, Humans
5.
J Med Internet Res ; 23(7): e20708, 2021 07 02.
Article in English | MEDLINE | ID: mdl-34255646

ABSTRACT

BACKGROUND: Recent years have witnessed a substantial improvement in the accuracy of skin cancer classification using convolutional neural networks (CNNs). CNNs perform on par with or better than dermatologists with respect to the classification tasks of single images. However, in clinical practice, dermatologists also use other patient data beyond the visual aspects present in a digitized image, further increasing their diagnostic accuracy. Several pilot studies have recently investigated the effects of integrating different subtypes of patient data into CNN-based skin cancer classifiers. OBJECTIVE: This systematic review focuses on the current research investigating the impact of merging information from image features and patient data on the performance of CNN-based skin cancer image classification. This study aims to explore the potential in this field of research by evaluating the types of patient data used, the ways in which the nonimage data are encoded and merged with the image features, and the impact of the integration on the classifier performance. METHODS: Google Scholar, PubMed, MEDLINE, and ScienceDirect were screened for peer-reviewed studies published in English that dealt with the integration of patient data within a CNN-based skin cancer classification. The search terms skin cancer classification, convolutional neural network(s), deep learning, lesions, melanoma, metadata, clinical information, and patient data were combined. RESULTS: A total of 11 publications fulfilled the inclusion criteria. All of them reported an overall improvement in different skin lesion classification tasks with patient data integration. The most commonly used patient data were age, sex, and lesion location. The patient data were mostly one-hot encoded. The studies differed in the complexity of the deep learning methods used to process the encoded patient data, both before and after fusing them with the image features in a combined classifier.
CONCLUSIONS: This study indicates the potential benefits of integrating patient data into CNN-based diagnostic algorithms. However, how exactly the individual patient data enhance classification performance, especially in the case of multiclass classification problems, is still unclear. Moreover, a substantial fraction of patient data used by dermatologists remains to be analyzed in the context of CNN-based skin cancer classification. Further exploratory analyses in this promising field may optimize patient data integration into CNN-based skin cancer diagnostics for patients' benefits.
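A common fusion pattern among the reviewed studies is one-hot encoding the patient data and concatenating it with the image feature vector before the final classifier. A sketch of that idea; the category lists and age scaling below are hypothetical choices, not taken from any specific study:

```python
def one_hot(value, categories):
    """One-hot encode a categorical patient attribute."""
    return [1.0 if value == c else 0.0 for c in categories]

def fuse(image_features, age, sex, site):
    """Late fusion by concatenation: append encoded patient data
    to the image feature vector before the final classifier head."""
    meta = [age / 100.0]  # simple numeric scaling (illustrative)
    meta += one_hot(sex, ["female", "male"])
    meta += one_hot(site, ["head", "trunk", "limbs"])
    return image_features + meta

# two hypothetical image features plus encoded age, sex, lesion location
fused = fuse([0.12, 0.87], age=55, sex="male", site="trunk")
```

In practice the image feature vector comes from the CNN's penultimate layer; some studies additionally pass the encoded metadata through dense layers before or after this concatenation, which is the complexity difference the review describes.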


Subjects
Melanoma, Skin Neoplasms, Dermoscopy, Humans, Melanoma/diagnosis, Neural Networks (Computer), Skin Neoplasms/diagnosis
6.
J Med Internet Res ; 22(9): e18091, 2020 09 11.
Article in English | MEDLINE | ID: mdl-32915161

ABSTRACT

BACKGROUND: Early detection of melanoma can be lifesaving but this remains a challenge. Recent diagnostic studies have revealed the superiority of artificial intelligence (AI) in classifying dermoscopic images of melanoma and nevi, concluding that these algorithms should assist a dermatologist's diagnoses. OBJECTIVE: The aim of this study was to investigate whether AI support improves the accuracy and overall diagnostic performance of dermatologists in the dichotomous image-based discrimination between melanoma and nevus. METHODS: Twelve board-certified dermatologists were presented disjoint sets of 100 unique dermoscopic images of melanomas and nevi (total of 1200 unique images), and they had to classify the images based on personal experience alone (part I) and with the support of a trained convolutional neural network (CNN, part II). Additionally, dermatologists were asked to rate their confidence in their final decision for each image. RESULTS: While the mean specificity of the dermatologists based on personal experience alone remained almost unchanged (70.6% vs 72.4%; P=.54) with AI support, the mean sensitivity and mean accuracy increased significantly (59.4% vs 74.6%; P=.003 and 65.0% vs 73.6%; P=.002, respectively) with AI support. Out of the 10% (10/94; 95% CI 8.4%-11.8%) of cases where dermatologists were correct and AI was incorrect, dermatologists on average changed to the incorrect answer for 39% (4/10; 95% CI 23.2%-55.6%) of cases. When dermatologists were incorrect and AI was correct (25/94, 27%; 95% CI 24.0%-30.1%), dermatologists changed their answers to the correct answer for 46% (11/25; 95% CI 33.1%-58.4%) of cases. Additionally, the dermatologists' average confidence in their decisions increased when the CNN confirmed their decision and decreased when the CNN disagreed, even when the dermatologists were correct. Reported values are based on the mean of all participants. 
Whenever absolute values are shown, the denominator and numerator are approximations as every dermatologist ended up rating a varying number of images due to a quality control step. CONCLUSIONS: The findings of our study show that AI support can improve the overall accuracy of the dermatologists in the dichotomous image-based discrimination between melanoma and nevus. This supports the argument for AI-based tools to aid clinicians in skin lesion classification and provides a rationale for studies of such classifiers in real-life settings, wherein clinicians can integrate additional information such as patient age and medical history into their decisions.


Subjects
Artificial Intelligence/standards, Dermatologists/standards, Dermoscopy/methods, Diagnostic Imaging/classification, Melanoma/diagnostic imaging, Skin Neoplasms/diagnostic imaging, Humans, Internet, Melanoma/diagnosis, Skin Neoplasms/diagnosis, Surveys and Questionnaires
7.
J Dtsch Dermatol Ges ; 18(11): 1236-1243, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32841508

ABSTRACT

Malignant melanoma is the skin tumor that causes the most deaths in Germany. At an early stage, melanoma is well treatable, so early detection is essential. However, the skin cancer screening program in Germany has been criticized because although melanomas have been diagnosed more frequently since the introduction of the program, the mortality from malignant melanoma has not decreased. This indicates that the observed increase in melanoma diagnoses may be due to overdiagnosis, i.e. to the detection of lesions that would never have created serious health problems for the patients. One of the reasons is the challenging distinction between some benign and malignant lesions. In addition, there may be lesions that are biologically equivocal, and other lesions that are classified as malignant according to current criteria, but that grow so slowly that they would never have posed a threat to the patient's life. So far, these "indolent" melanomas cannot be identified reliably due to a lack of biomarkers. Moreover, the likelihood that an in-situ melanoma will progress to an invasive tumor still cannot be determined with any certainty. When benign lesions are diagnosed as melanoma, the consequences are unnecessary psychological and physical stress for the affected patients and unnecessarily incurred therapy costs. Vice versa, underdiagnoses in the sense of overlooked melanomas can adversely affect patients' prognoses and may necessitate more intense therapies. Novel diagnostic options could reduce the number of over- and underdiagnoses and contribute to more objective diagnoses in borderline cases. One strategy that has yielded promising results in pilot studies is the use of artificial intelligence-based diagnostic tools. However, these applications still await translation into clinical and pathological routine.


Subjects
Melanoma, Skin Neoplasms, Artificial Intelligence, Germany, Humans, Medical Overuse
11.
JAMA Dermatol ; 160(3): 303-311, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38324293

ABSTRACT

Importance: The development of artificial intelligence (AI)-based melanoma classifiers typically calls for large, centralized datasets, requiring hospitals to give away their patient data, which raises serious privacy concerns. To address this concern, decentralized federated learning has been proposed, where classifier development is distributed across hospitals. Objective: To investigate whether a more privacy-preserving federated learning approach can achieve comparable diagnostic performance to a classical centralized (ie, single-model) and ensemble learning approach for AI-based melanoma diagnostics. Design, Setting, and Participants: This multicentric, single-arm diagnostic study developed a federated model for melanoma-nevus classification using histopathological whole-slide images prospectively acquired at 6 German university hospitals between April 2021 and February 2023 and benchmarked it using both a holdout and an external test dataset. Data analysis was performed from February to April 2023. Exposures: All whole-slide images were retrospectively analyzed by an AI-based classifier without influencing routine clinical care. Main Outcomes and Measures: The area under the receiver operating characteristic curve (AUROC) served as the primary end point for evaluating the diagnostic performance. Secondary end points included balanced accuracy, sensitivity, and specificity. Results: The study included 1025 whole-slide images of clinically melanoma-suspicious skin lesions from 923 patients, consisting of 388 histopathologically confirmed invasive melanomas and 637 nevi. The median (range) age at diagnosis was 58 (18-95) years for the training set, 57 (18-93) years for the holdout test dataset, and 61 (18-95) years for the external test dataset; the median (range) Breslow thickness was 0.70 (0.10-34.00) mm, 0.70 (0.20-14.40) mm, and 0.80 (0.30-20.00) mm, respectively. 
The federated approach (0.8579; 95% CI, 0.7693-0.9299) performed significantly worse than the classical centralized approach (0.9024; 95% CI, 0.8379-0.9565) in terms of AUROC on a holdout test dataset (pairwise Wilcoxon signed-rank, P < .001) but performed significantly better (0.9126; 95% CI, 0.8810-0.9412) than the classical centralized approach (0.9045; 95% CI, 0.8701-0.9331) on an external test dataset (pairwise Wilcoxon signed-rank, P < .001). Notably, the federated approach performed significantly worse than the ensemble approach on both the holdout (0.8867; 95% CI, 0.8103-0.9481) and external test dataset (0.9227; 95% CI, 0.8941-0.9479). Conclusions and Relevance: The findings of this diagnostic study suggest that federated learning is a viable approach for the binary classification of invasive melanomas and nevi on a clinically representative distributed dataset. Federated learning can improve privacy protection in AI-based melanoma diagnostics while simultaneously promoting collaboration across institutions and countries. Moreover, it may have the potential to be extended to other image classification tasks in digital cancer histopathology and beyond.
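At the core of the federated approach is periodic aggregation of locally trained model weights without moving patient data between sites. A FedAvg-style, sample-size-weighted average can be sketched as follows (the parameter vectors and site sizes are invented; real models aggregate millions of parameters per round):

```python
def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: average model parameters across
    hospitals, weighted by each hospital's number of training samples.

    client_weights: list of parameter vectors (one per site)
    client_sizes:   number of samples contributed by each site
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    merged = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            merged[i] += w * size / total
    return merged

# three hypothetical hospital models with two parameters each
global_model = federated_average(
    [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]],
    [100, 100, 200],
)
```

Only these aggregated parameters leave each site, which is the privacy advantage over pooling whole-slide images centrally.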


Subjects
Dermatology, Melanoma, Nevus, Skin Neoplasms, Humans, Melanoma/diagnosis, Artificial Intelligence, Retrospective Studies, Skin Neoplasms/diagnosis, Nevus/diagnosis
12.
Eur J Cancer ; 195: 113390, 2023 12.
Article in English | MEDLINE | ID: mdl-37890350

ABSTRACT

BACKGROUND: Sentinel lymph node (SLN) status is a clinically important prognostic biomarker in breast cancer and is used to guide therapy, especially for hormone receptor-positive, HER2-negative cases. However, invasive lymph node staging is increasingly omitted before therapy, and studies such as the randomised Intergroup Sentinel Mamma (INSEMA) trial address the potential for further de-escalation of axillary surgery. Therefore, it would be helpful to accurately predict the pretherapeutic sentinel status using medical images. METHODS: Using a ResNet 50 architecture pretrained on ImageNet and a previously successful strategy, we trained deep learning (DL)-based image analysis algorithms to predict sentinel status on hematoxylin/eosin-stained images of predominantly luminal, primary breast tumours from the INSEMA trial and three additional, independent cohorts (The Cancer Genome Atlas (TCGA) and cohorts from the University hospitals of Mannheim and Regensburg), and compared their performance with that of a logistic regression using clinical data only. Performance on an INSEMA hold-out set was investigated in a blinded manner. RESULTS: None of the generated image analysis algorithms yielded significantly better than random areas under the receiver operating characteristic curves on the test sets, including the hold-out test set from INSEMA. In contrast, the logistic regression fitted on the Mannheim cohort retained a better than random performance on INSEMA and Regensburg. Including the image analysis model output in the logistic regression did not improve performance further on INSEMA. CONCLUSIONS: Employing DL-based image analysis on histological slides, we could not predict SLN status for unseen cases in the INSEMA trial and other predominantly luminal cohorts.


Subjects
Breast Neoplasms, Deep Learning, Lymphadenopathy, Sentinel Lymph Node, Female, Humans, Axilla/pathology, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/surgery, Breast Neoplasms/genetics, Lymph Node Excision/methods, Lymph Nodes/pathology, Lymphatic Metastasis/pathology, Sentinel Lymph Node/pathology, Sentinel Lymph Node Biopsy/methods
13.
Minerva Urol Nephrol ; 74(5): 538-550, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35274903

ABSTRACT

INTRODUCTION: Artificial intelligence (AI) has been successfully applied for automatic tumor detection and grading in histopathological image analysis in urologic oncology. The aim of this review was to assess the applicability of these approaches in image-based oncological outcome prediction. EVIDENCE ACQUISITION: A systematic literature search was conducted using the databases MEDLINE through PubMed and Web of Science up to April 20, 2021. Studies investigating AI approaches to determine the risk of recurrence, metastasis, or survival directly from H&E-stained tissue sections in prostate, renal cell or urothelial carcinoma were included. Characteristics of the AI approach and performance metrics were extracted and summarized. Risk of bias (RoB) was assessed using the PROBAST tool. EVIDENCE SYNTHESIS: Sixteen studies, comprising a total of 6658 patients and reporting on 17 outcome predictions, were included. Six studies focused on renal cell, six on prostate and three on urothelial carcinoma, while one study investigated renal cell and urothelial carcinoma. Handcrafted feature extraction was used in five, a convolutional neural network (CNN) in six and deep feature extraction in four studies. One study compared a CNN with handcrafted feature extraction. In seven outcome predictions, a multivariable comparison with clinicopathological parameters was reported. Five of them showed statistically significant hazard ratios for the AI model's prediction. However, RoB was high in 15 outcome predictions and unclear in two. CONCLUSIONS: The included studies are promising but predominantly early pilot studies, and therefore primarily highlight the potential of AI approaches. Additional well-designed studies are needed to assess the actual clinical applicability.


Subjects
Transitional Cell Carcinoma, Urinary Bladder Neoplasms, Urology, Artificial Intelligence, Eosine Yellowish-(YS), Hematoxylin, Humans, Male
14.
PLoS One ; 17(8): e0272656, 2022.
Article in English | MEDLINE | ID: mdl-35976907

ABSTRACT

For clear cell renal cell carcinoma (ccRCC), risk-dependent diagnostic and therapeutic algorithms are routinely implemented in clinical practice. Artificial intelligence-based image analysis has the potential to improve outcome prediction and thereby risk stratification. Thus, we investigated whether a convolutional neural network (CNN) can extract relevant image features from a representative hematoxylin and eosin-stained slide to predict 5-year overall survival (5y-OS) in ccRCC. The CNN was trained to predict 5y-OS in a binary manner using slides from TCGA and validated using an independent in-house cohort. Multivariable logistic regression was used to combine the CNN's prediction with clinicopathological parameters. A mean balanced accuracy of 72.0% (standard deviation [SD] = 7.9%), sensitivity of 72.4% (SD = 10.6%), specificity of 71.7% (SD = 11.9%) and area under the receiver operating characteristic curve (AUROC) of 0.75 (SD = 0.07) was achieved on the TCGA training set (n = 254 patients / WSIs) using 10-fold cross-validation. On the external validation cohort (n = 99 patients / WSIs), mean accuracy, sensitivity, specificity and AUROC were 65.5% (95%-confidence interval [CI]: 62.9-68.1%), 86.2% (95%-CI: 81.8-90.5%), 44.9% (95%-CI: 40.2-49.6%), and 0.70 (95%-CI: 0.69-0.71). A multivariable model including age, tumor stage and metastasis yielded an AUROC of 0.75 on the TCGA cohort. The inclusion of the CNN-based classification (odds ratio = 4.86, 95%-CI: 2.70-8.75, p < 0.01) raised the AUROC to 0.81. On the validation cohort, both models showed an AUROC of 0.88. In univariable Cox regression, the CNN showed a hazard ratio of 3.69 (95%-CI: 2.60-5.23, p < 0.01) on TCGA and 2.13 (95%-CI: 0.92-4.94, p = 0.08) on external validation. The results demonstrate that the CNN's image-based prediction of survival is promising, and thus this widely applicable technique should be further investigated with the aim of improving existing risk stratification in ccRCC.
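The multivariable combination step can be pictured as a logistic model whose inputs include the CNN output; a logistic coefficient β corresponds to an odds ratio of exp(β). A sketch with a hypothetical single-predictor model (the coefficient is invented for illustration):

```python
import math

def logistic_predict(intercept, coefs, features):
    """Multivariable logistic model: combine predictors (e.g. the CNN's
    classification plus age, stage, metastasis) into a probability."""
    z = intercept + sum(c * x for c, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-z))

# a coefficient of log(4.86) corresponds to an odds ratio of 4.86
beta_cnn = math.log(4.86)
p = logistic_predict(0.0, [beta_cnn], [1.0])
```

With more covariates, the same call simply takes longer coefficient and feature lists; fitting the coefficients (e.g. by maximum likelihood) is omitted here.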


Subjects
Renal Cell Carcinoma, Deep Learning, Kidney Neoplasms, Artificial Intelligence, Renal Cell Carcinoma/diagnosis, Renal Cell Carcinoma/genetics, Humans, Kidney Neoplasms/diagnosis, Kidney Neoplasms/genetics, Neural Networks (Computer), Retrospective Studies
15.
Eur J Cancer ; 154: 227-234, 2021 09.
Article in English | MEDLINE | ID: mdl-34298373

ABSTRACT

AIM: Sentinel lymph node status is a central prognostic factor for melanomas. However, the surgical excision involves some risks for affected patients. In this study, we therefore aimed to develop a digital biomarker that can predict lymph node metastasis non-invasively from digitised H&E slides of primary melanoma tumours. METHODS: A total of 415 H&E slides from primary melanoma tumours with known sentinel node (SN) status from three German university hospitals and one private pathological practice were digitised (150 SN positive/265 SN negative). Two hundred ninety-one slides were used to train artificial neural networks (ANNs). The remaining 124 slides were used to test the ability of the ANNs to predict sentinel status. ANNs were trained and/or tested on data sets that were matched or not matched between SN-positive and SN-negative cases for patient age, ulceration, and tumour thickness, factors that are known to correlate with lymph node status. RESULTS: The best performance was achieved by an ANN that was trained and tested on unmatched cases, reaching an area under the receiver operating characteristic curve (AUROC) of 61.8% ± 0.2%. In contrast, ANNs that were trained and/or tested on matched cases achieved an AUROC of 55.0% ± 3.5% or less. CONCLUSION: Our results indicate that the image classifier can predict lymph node status to some, albeit so far not clinically relevant, extent. It may do so by mostly detecting equivalents of factors on histological slides that are already known to correlate with lymph node status. Our results provide a basis for future research with larger data cohorts.
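Matching SN-positive and SN-negative cases for known correlates, as described above, can be done greedily; a toy sketch where the matching tolerances and example cases are invented (a real analysis would also match on ulceration):

```python
def greedy_match(positives, negatives, max_age_diff=5, max_thickness_diff=0.5):
    """Greedily pair each SN-positive case with an unused SN-negative case
    that is close in age and tumour thickness (illustrative criteria)."""
    used = set()
    pairs = []
    for p in positives:
        for idx, n in enumerate(negatives):
            if idx in used:
                continue
            if (abs(p["age"] - n["age"]) <= max_age_diff
                    and abs(p["thickness"] - n["thickness"]) <= max_thickness_diff):
                used.add(idx)  # each negative case is matched at most once
                pairs.append((p, n))
                break
    return pairs

# hypothetical cases: one SN-positive, two SN-negative candidates
pairs = greedy_match(
    [{"age": 60, "thickness": 1.0}],
    [{"age": 80, "thickness": 3.0}, {"age": 62, "thickness": 1.2}],
)
```

The study's finding, that performance drops on matched data, suggests the classifier was largely picking up these matched confounders rather than independent image signal.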


Subjects
Deep Learning, Melanoma/pathology, Sentinel Lymph Node/pathology, Adult, Aged, Humans, Lymphatic Metastasis, Middle Aged
16.
Front Med (Lausanne) ; 7: 233, 2020.
Article in English | MEDLINE | ID: mdl-32671078

ABSTRACT

Background: Artificial intelligence (AI) has shown promise in numerous experimental studies, particularly in skin cancer diagnostics. Translation of these findings into the clinic is the logical next step. This translation can only be successful if patients' concerns and questions are addressed suitably. We therefore conducted a survey to evaluate the patients' view of artificial intelligence in melanoma diagnostics in Germany, with a particular focus on patients with a history of melanoma. Participants and Methods: A web-based questionnaire was designed using LimeSurvey, sent by e-mail to university hospitals and melanoma support groups and advertised on social media. The anonymous questionnaire evaluated patients' expectations and concerns toward artificial intelligence in general as well as their attitudes toward different application scenarios. Descriptive analysis was performed with expression of categorical variables as percentages and 95% confidence intervals. Statistical tests were performed to investigate associations between sociodemographic data and selected items of the questionnaire. Results: 298 individuals (154 with a melanoma diagnosis, 143 without) responded to the questionnaire. About 94% [95% CI = 0.91-0.97] of respondents supported the use of artificial intelligence in medical approaches. 88% [95% CI = 0.85-0.92] would even make their own health data anonymously available for the further development of AI-based applications in medicine. Only 41% [95% CI = 0.35-0.46] of respondents were amenable to the use of artificial intelligence as a stand-alone system, whereas 94% [95% CI = 0.92-0.97] were amenable to its use as an assistance system for physicians. In sub-group analyses, only minor differences were detectable. Respondents with a previous history of melanoma were more amenable to the use of AI applications for early detection even at home. They would prefer an application scenario where physician and AI classify the lesions independently.
With respect to AI-based applications in medicine, patients were concerned about insufficient data protection, impersonality and susceptibility to errors, but expected faster, more precise and unbiased diagnostics, less diagnostic errors and support for physicians. Conclusions: The vast majority of participants exhibited a positive attitude toward the use of artificial intelligence in melanoma diagnostics, especially as an assistance system.

17.
Front Med (Lausanne) ; 7: 177, 2020.
Article in English | MEDLINE | ID: mdl-32435646

ABSTRACT

Recent studies have shown that deep learning is capable of classifying dermatoscopic images at least as well as dermatologists. However, many studies in skin cancer classification utilize non-biopsy-verified training images. This imperfect ground truth introduces a systematic error, but the effects on classifier performance are currently unknown. Here, we systematically examine the effects of label noise by training and evaluating convolutional neural networks (CNN) with 804 images of melanoma and nevi labeled either by dermatologists or by biopsy. The CNNs are evaluated on a test set of 384 images by means of 4-fold cross validation comparing the outputs with either the corresponding dermatological or the biopsy-verified diagnosis. With identical ground truths of training and test labels, high accuracies with 75.03% (95% CI: 74.39-75.66%) for dermatological and 73.80% (95% CI: 73.10-74.51%) for biopsy-verified labels can be achieved. However, if the CNN is trained and tested with different ground truths, accuracy drops significantly to 64.53% (95% CI: 63.12-65.94%, p < 0.01) on a non-biopsy-verified and to 64.24% (95% CI: 62.66-65.83%, p < 0.01) on a biopsy-verified test set. In conclusion, deep learning methods for skin cancer classification are highly sensitive to label noise and future work should use biopsy-verified training images to mitigate this problem.

18.
Biosens Bioelectron ; 131: 95-103, 2019 Apr 15.
Article in English | MEDLINE | ID: mdl-30826656

ABSTRACT

Electroporation has been a widely established method for delivering DNA and other material into cells in vitro. Conventional electroporation infrastructure is typically immobile, non-customizable, non-transparent regarding the characteristics of output pulses, and expensive. Here, we describe a portable electroporator for DNA delivery into bacterial cells that can quickly be reconstructed using 3D desktop printing and off-the-shelf components. The device is light weight (700 g), small (70 × 180 × 210 mm) and extremely low-cost (

Subjects
Biosensing Techniques, DNA/genetics, Electroporation, Gene Transfer Techniques/trends, Biomedical Research/trends, Biotechnology/trends, Humans, 3D Printing/trends
19.
Eur J Cancer ; 118: 91-96, 2019 09.
Article in English | MEDLINE | ID: mdl-31325876

ABSTRACT

BACKGROUND: The diagnosis of most cancers is made by a board-certified pathologist based on a tissue biopsy under the microscope. Recent research reveals a high discordance between individual pathologists. For melanoma, the literature reports on 25-26% of discordance for classifying a benign nevus versus malignant melanoma. A recent study indicated the potential of deep learning to lower these discordances. However, the performance of deep learning in classifying histopathologic melanoma images was never compared directly to human experts. The aim of this study is to perform such a first direct comparison. METHODS: A total of 695 lesions were classified by an expert histopathologist in accordance with current guidelines (350 nevi/345 melanoma). Only the haematoxylin & eosin (H&E) slides of these lesions were digitalised via a slide scanner and then randomly cropped. A total of 595 of the resulting images were used to train a convolutional neural network (CNN). The additional 100 H&E image sections were used to test the results of the CNN in comparison to 11 histopathologists. Three combined McNemar tests comparing the results of the CNNs test runs in terms of sensitivity, specificity and accuracy were predefined to test for significance (p < 0.05). FINDINGS: The CNN achieved a mean sensitivity/specificity/accuracy of 76%/60%/68% over 11 test runs. In comparison, the 11 pathologists achieved a mean sensitivity/specificity/accuracy of 51.8%/66.5%/59.2%. Thus, the CNN was significantly (p = 0.016) superior in classifying the cropped images. INTERPRETATION: With limited image information available, a CNN was able to outperform 11 histopathologists in the classification of histopathological melanoma images and thus shows promise to assist human melanoma diagnoses.
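The predefined McNemar tests compare paired classifications using only the discordant cases. A minimal sketch with the continuity-corrected chi-square approximation; the discordant counts below are invented, not the study's data:

```python
import math

def mcnemar(b, c):
    """McNemar test on paired predictions.

    b: cases the first classifier got right and the second got wrong
    c: cases the first got wrong and the second got right
    Returns the continuity-corrected chi-square statistic and an
    approximate two-sided p-value (chi-square with 1 df)."""
    if b + c == 0:
        return 0.0, 1.0
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # survival function of chi-square with 1 df via the complementary
    # error function: P(X > x) = erfc(sqrt(x / 2))
    p_value = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p_value

chi2, p = mcnemar(15, 5)  # hypothetical discordant counts
```

Concordant cases (both classifiers right or both wrong) carry no information about which classifier is better and therefore do not enter the statistic.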


Subjects
Deep Learning, Computer-Assisted Diagnosis, Computer-Assisted Image Interpretation, Melanoma/pathology, Microscopy, Nevus/pathology, Pathologists, Skin Neoplasms/pathology, Biopsy, Differential Diagnosis, Humans, Melanoma/classification, Nevus/classification, Observer Variation, Predictive Value of Tests, Reproducibility of Results, Skin Neoplasms/classification