Results 1 - 3 of 3
1.
Analyst; 148(20): 5022-5032, 2023 Oct 05.
Article in English | MEDLINE | ID: mdl-37702617

ABSTRACT

While infrared microscopy provides spatially resolved molecular information in a label-free manner, exploiting both spatial and molecular information for classifying the disease status of tissue samples constitutes a major challenge. One strategy to mitigate this problem is to embed high-dimensional pixel spectra in lower dimensions, aiming to preserve molecular information in a more compact representation; this reduces the amount of data and promises to make subsequent disease classification more accessible for machine learning procedures. In this study, we compare several dimensionality reduction approaches and their effect on identifying cancer in the context of a colon carcinoma study. We observe surprisingly small differences between convolutional neural networks trained on dimensionality-reduced spectra and those trained on full spectra, indicating a clear tendency of the convolutional networks to focus on spatial rather than spectral information when classifying disease status.


Subjects
Deep Learning; Microscopy; Neural Networks, Computer; Machine Learning
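For orientation, a minimal Python sketch of the general idea in this abstract — embedding per-pixel IR spectra in a lower-dimensional space before convolutional classification — is given below. The tile size, number of spectral bands, PCA as the reduction method, and the network layout are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative sketch only: PCA-reduced pixel spectra as CNN input channels.
# Shapes, component count, and network layout are assumptions, not the study's setup.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

H, W, BANDS, N_COMPONENTS = 64, 64, 427, 16   # hypothetical tile size / spectral bands

# Fake hyperspectral tile: one spectrum per pixel.
tile = np.random.rand(H, W, BANDS).astype(np.float32)

# Embed each pixel spectrum in a lower-dimensional space (here: PCA).
pca = PCA(n_components=N_COMPONENTS)
reduced = pca.fit_transform(tile.reshape(-1, BANDS))   # (H*W, N_COMPONENTS)
reduced = reduced.reshape(H, W, N_COMPONENTS)

# Small CNN operating on the reduced channels; a full-spectra variant would
# simply use BANDS input channels instead of N_COMPONENTS.
class TileClassifier(nn.Module):
    def __init__(self, in_channels, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, channels, H, W)
        return self.head(self.features(x).flatten(1))

x = torch.from_numpy(reduced).float().permute(2, 0, 1).unsqueeze(0)  # (1, C, H, W)
logits = TileClassifier(N_COMPONENTS)(x)
print(logits.shape)                            # torch.Size([1, 2])
```

Swapping `N_COMPONENTS` for `BANDS` (and skipping the PCA step) gives the full-spectra baseline the abstract compares against.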
2.
Eur J Cancer; 182: 122-131, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36773401

ABSTRACT

PURPOSE: Microsatellite instability (MSI) due to mismatch repair (MMR) defects accounts for 15-20% of colon cancers (CC). MSI testing is currently standard of care in CC, with immunohistochemistry of the four MMR proteins representing the gold standard. Label-free quantum cascade laser (QCL)-based infrared (IR) imaging combined with artificial intelligence (AI) may instead classify MSI/microsatellite stability (MSS) in unstained tissue sections in a user-independent and tissue-preserving manner.

METHODS: Paraffin-embedded unstained tissue sections of early CC from patients participating in the multicentre AIO ColoPredict Plus (CPP) 2.0 registry were divided into three groups (training, test, and validation) and analysed. IR images of the tissue sections acquired with QCL-IR microscopes were classified by AI (convolutional neural networks [CNN]) using a two-step approach: the first CNN (modified U-Net) detected areas of cancer, while the second CNN (VGG-Net) classified MSI/MSS. End-points were the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC).

RESULTS: Cancer detection in the first step was based on 629 patients (train n = 273, test n = 138, validation n = 218); the resulting classification AUROC was 1.0 for the validation dataset. The second step, classifying MSI/MSS, was performed on 547 patients (train n = 331, test n = 69, validation n = 147), reaching an AUROC of 0.9 and an AUPRC of 0.74 for the validation cohort.

CONCLUSION: Our novel label-free digital pathology approach accurately and rapidly classifies MSI vs. MSS. The tissue sections analysed were not processed, leaving the sample unmodified for subsequent analyses. Our approach demonstrates an AI-based decision support tool that could drive improved patient stratification and precision oncology in the future.


Subjects
Colonic Neoplasms; Colorectal Neoplasms; Humans; Artificial Intelligence; Precision Medicine; Colonic Neoplasms/pathology; Microsatellite Repeats; Microsatellite Instability; Colorectal Neoplasms/pathology
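The record above describes a two-step CNN pipeline (cancer detection, then MSI/MSS classification) evaluated by AUROC and AUPRC. The sketch below only wires up that two-step pattern with tiny placeholder networks and scikit-learn metrics; the model definitions, tile-level threshold, shapes, and fake labels are all assumptions standing in for the described modified U-Net and VGG-Net.

```python
# Minimal sketch of a two-step tile pipeline and the reported end-points.
# The segmentation and classification networks here are placeholders, not the
# study's U-Net/VGG models; thresholds, shapes, and labels are illustrative.
import numpy as np
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score, average_precision_score

class TinySegmenter(nn.Module):                 # stand-in for the modified U-Net
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 1, 1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x)                      # per-pixel tumor probability

class TinyClassifier(nn.Module):                # stand-in for the VGG-style MSI/MSS net
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(8, 1))
    def forward(self, x):
        return torch.sigmoid(self.net(x))       # P(MSI) per tile

segmenter, classifier = TinySegmenter(), TinyClassifier()
tiles = torch.rand(10, 1, 64, 64)               # fake IR image tiles

# Step 1: detect cancer regions; Step 2: classify MSI/MSS only on cancer tiles.
with torch.no_grad():
    tumor_prob = segmenter(tiles)
    is_cancer = tumor_prob.mean(dim=(1, 2, 3)) > 0.5        # illustrative threshold
    msi_prob = classifier(tiles[is_cancer]).squeeze(1)

# Reported end-points: AUROC and area under the precision-recall curve.
labels = np.random.randint(0, 2, size=int(is_cancer.sum()))  # fake MSI ground truth
scores = msi_prob.numpy()
if scores.size and len(np.unique(labels)) == 2:              # both classes required
    print("AUROC:", roc_auc_score(labels, scores))
    print("AUPRC:", average_precision_score(labels, scores))
```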
3.
Med Image Anal; 82: 102594, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36058053

ABSTRACT

In recent years, deep learning has been the key driver of breakthrough developments in computational pathology and other image-based approaches that support medical diagnosis and treatment. The underlying neural networks, as inherent black boxes, lack transparency and are often accompanied by approaches to explain their output. However, formally defining explainability has been a notorious unsolved riddle. Here, we introduce a hypothesis-based framework for falsifiable explanations of machine learning models. A falsifiable explanation is a hypothesis that connects an intermediate space induced by the model with the sample from which the data originate. We instantiate this framework in a computational pathology setting using hyperspectral infrared microscopy. The intermediate space is an activation map, which is trained with an inductive bias to localize tumor. An explanation is constituted by the hypothesis that activation corresponds to tumor and associated structures, which we validate by histological staining as an independent secondary experiment.


Subjects
Machine Learning; Neoplasms; Humans; Neural Networks, Computer; Microscopy
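As a rough illustration of the validation step described in the abstract above — testing the hypothesis that high activation corresponds to tumor against an independent, staining-derived tumor mask — the short sketch below computes the overlap between a thresholded activation map and a ground-truth mask. The arrays, the threshold, and the choice of Dice as the overlap criterion are assumptions, not the paper's protocol.

```python
# Sketch of validating a falsifiable explanation: test the hypothesis
# "high activation == tumor" against an independent, staining-derived mask.
# Arrays, threshold, and the Dice criterion are illustrative assumptions.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / max(a.sum() + b.sum(), 1)

# Intermediate space induced by the model: a per-pixel activation map.
activation_map = np.random.rand(256, 256)        # stand-in for the trained map
predicted_tumor = activation_map > 0.5           # hypothesis: activation marks tumor

# Independent secondary experiment: tumor mask from histological staining.
stained_tumor = np.random.rand(256, 256) > 0.5   # stand-in staining annotation

# The hypothesis is falsifiable: low overlap with the staining-derived mask
# would refute "activation corresponds to tumor and associated structures".
overlap = dice(predicted_tumor, stained_tumor)
print(f"Dice overlap with staining-derived mask: {overlap:.2f}")
```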