Results 1 - 14 of 14
1.
Med Image Anal ; 96: 103203, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38810517

ABSTRACT

The classification of gigapixel Whole Slide Images (WSIs) is an important task in the emerging area of computational pathology. There has been a surge of interest in deep learning models for WSI classification with clinical applications such as cancer detection or prediction of cellular mutations. Most supervised methods require expensive and labor-intensive manual annotations by expert pathologists. Weakly supervised Multiple Instance Learning (MIL) methods have recently demonstrated excellent performance; however, they still require large-scale slide-level labeled training datasets that require a careful inspection of each slide by an expert pathologist. In this work, we propose a fully unsupervised WSI classification algorithm based on mutual transformer learning. The instances (i.e., patches) from gigapixel WSIs are transformed into a latent space and then inverse-transformed to the original space. Using the transformation loss, pseudo labels are generated and cleaned using a transformer label cleaner. The proposed transformer-based pseudo-label generator and cleaner modules mutually train each other iteratively in an unsupervised manner. A discriminative learning mechanism is introduced to improve normal versus cancerous instance labeling. In addition to the unsupervised learning, we demonstrate the effectiveness of the proposed framework for weakly supervised learning and cancer subtype classification as downstream analysis. Extensive experiments on four publicly available datasets show better performance of the proposed algorithm compared to the existing state-of-the-art methods.
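
The core mechanism described above can be illustrated with a toy sketch: an encode/decode transform is fitted to unlabeled patch features, and the per-instance transformation (reconstruction) loss is thresholded into initial pseudo labels. The network sizes, threshold and feature dimensions below are illustrative assumptions, not the paper's actual transformer modules or label-cleaning procedure.

```python
import torch
import torch.nn as nn

# Minimal sketch: fit a transform / inverse-transform on patch features and turn
# the per-instance transformation loss into initial pseudo labels.
encoder = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 64))
decoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 512))
optim = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

patches = torch.randn(1000, 512)              # stand-in for patch features from one WSI

for _ in range(10):                            # fit the transform / inverse-transform
    recon = decoder(encoder(patches))
    loss = ((recon - patches) ** 2).mean()
    optim.zero_grad(); loss.backward(); optim.step()

# Per-instance transformation loss -> initial pseudo labels
with torch.no_grad():
    err = ((decoder(encoder(patches)) - patches) ** 2).mean(dim=1)
pseudo_labels = (err > err.quantile(0.9)).long()   # e.g. top-10% error flagged as candidate tumour instances
# In the paper, such labels are then refined iteratively by a transformer label cleaner.
```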


Subject(s)
Algorithms; Image Interpretation, Computer-Assisted; Humans; Image Interpretation, Computer-Assisted/methods; Unsupervised Machine Learning; Deep Learning; Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods
2.
Med Image Anal ; 91: 102997, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37866169

ABSTRACT

Semantic segmentation of various tissue and nuclei types in histology images is fundamental to many downstream tasks in the area of computational pathology (CPath). In recent years, Deep Learning (DL) methods have been shown to perform well on segmentation tasks, but DL methods generally require a large amount of pixel-wise annotated data. Such annotation often requires expert knowledge and time, making it laborious and costly to obtain. In this paper, we present a consistency-based semi-supervised learning (SSL) approach that can help mitigate this challenge by exploiting a large amount of unlabelled data for model training, thus alleviating the need for a large annotated dataset. However, SSL models can also be susceptible to changing contexts and feature perturbations, exhibiting poor generalisation due to the limited training data. We propose an SSL method that learns robust features from both labelled and unlabelled images by enforcing consistency against varying contexts and feature perturbations. The proposed method incorporates context-aware consistency by contrasting pairs of overlapping images in a pixel-wise manner across changing contexts, resulting in robust and context-invariant features. We show that cross-consistency training makes the encoder features invariant to different perturbations and improves the prediction confidence. Finally, entropy minimisation is employed to further boost the confidence of the final prediction maps from unlabelled data. We conduct an extensive set of experiments on two publicly available large datasets (BCSS and MoNuSeg) and show superior performance compared to the state-of-the-art methods.
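
A minimal sketch of the two unsupervised loss terms mentioned above (prediction consistency between perturbed views and entropy minimisation on unlabelled predictions); the tensor shapes, loss weighting and pairing of overlapping crops are assumptions rather than the paper's exact training setup.

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_a, logits_b):
    """Pixel-wise consistency between predictions of two differently perturbed views."""
    return F.mse_loss(logits_a.softmax(dim=1), logits_b.softmax(dim=1))

def entropy_loss(logits):
    """Entropy minimisation on unlabelled predictions to sharpen the output maps."""
    p = logits.softmax(dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

# logits_a, logits_b: (batch, classes, H, W) predictions for two views of the same unlabelled crop
logits_a, logits_b = torch.randn(2, 4, 64, 64), torch.randn(2, 4, 64, 64)
unsup_loss = consistency_loss(logits_a, logits_b) + 0.1 * entropy_loss(logits_b)
```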


Subject(s)
Cell Nucleus; Semantics; Humans; Entropy; Histological Techniques; Supervised Machine Learning; Image Processing, Computer-Assisted
3.
Article in English | MEDLINE | ID: mdl-37021915

ABSTRACT

Automatic tissue classification is a fundamental task in computational pathology for profiling tumor micro-environments. Deep learning has advanced tissue classification performance at the cost of significant computational power. Shallow networks have also been trained end-to-end using direct supervision; however, their performance degrades because they fail to capture robust tissue heterogeneity. Knowledge distillation has recently been employed to improve the performance of shallow networks used as student networks by using additional supervision from deep neural networks used as teacher networks. In the current work, we propose a novel knowledge distillation algorithm to improve the performance of shallow networks for tissue phenotyping in histology images. For this purpose, we propose multi-layer feature distillation such that a single layer in the student network receives supervision from multiple teacher layers. In the proposed algorithm, the feature maps of the two layers are matched in size using a learnable multi-layer perceptron, and the distance between the feature maps of the two layers is then minimized during the training of the student network. The overall objective function is computed by summing the losses over multiple layer combinations, weighted with a learnable attention-based parameter. The proposed algorithm is named Knowledge Distillation for Tissue Phenotyping (KDTP). Experiments are performed on five different publicly available histology image classification datasets using several teacher-student network combinations within the KDTP algorithm. Our results demonstrate a significant performance increase in the student networks by using the proposed KDTP algorithm compared to direct supervision-based training methods.
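
A rough sketch of the multi-layer feature distillation idea under simplifying assumptions: several teacher feature maps supervise one student feature map after a learnable projection (a 1x1 convolution stands in here for the paper's multi-layer perceptron), and the per-pair losses are combined with learnable attention weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

student_feat = torch.randn(8, 64, 28, 28)                  # one student layer (stand-in features)
teacher_feats = [torch.randn(8, 256, 28, 28),              # multiple teacher layers (stand-in features)
                 torch.randn(8, 512, 14, 14)]

projections = nn.ModuleList([nn.Conv2d(64, 256, 1), nn.Conv2d(64, 512, 1)])  # match channel dims
attn = nn.Parameter(torch.zeros(len(teacher_feats)))        # learnable combination weights

losses = []
for proj, t in zip(projections, teacher_feats):
    s = proj(student_feat)
    s = F.adaptive_avg_pool2d(s, t.shape[-2:])               # match spatial size
    losses.append(F.mse_loss(s, t))                          # distance between student and teacher maps

weights = attn.softmax(dim=0)
distill_loss = (weights * torch.stack(losses)).sum()         # attention-weighted sum over layer pairs
```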

4.
BMJ Open ; 12(10): e067140, 2022 Oct 05.
Article in English | MEDLINE | ID: mdl-36198471

ABSTRACT

INTRODUCTION: Whole-body MRI (WB-MRI) is recommended by the National Institute of Clinical Excellence as the first-line imaging tool for diagnosis of multiple myeloma. Reporting WB-MRI scans requires expertise to interpret and can be challenging for radiologists who need to meet rapid turn-around requirements. Automated computational tools based on machine learning (ML) could assist the radiologist in terms of sensitivity and reading speed and would facilitate improved accuracy, productivity and cost-effectiveness. The MALIMAR study aims to develop and validate a ML algorithm to increase the diagnostic accuracy and reading speed of radiological interpretation of WB-MRI compared with standard methods. METHODS AND ANALYSIS: This phase II/III imaging trial will perform retrospective analysis of previously obtained clinical radiology MRI scans and scans from healthy volunteers obtained prospectively to implement training and validation of an ML algorithm. The study will comprise three project phases using approximately 633 scans to (1) train the ML algorithm to identify active disease, (2) clinically validate the ML algorithm and (3) determine change in disease status following treatment via a quantification of burden of disease in patients with myeloma. Phase 1 will primarily train the ML algorithm to detect active myeloma against an expert assessment ('reference standard'). Phase 2 will use the ML output in the setting of radiology reader study to assess the difference in sensitivity when using ML-assisted reading or human-alone reading. Phase 3 will assess the agreement between experienced readers (with and without ML) and the reference standard in scoring both overall burden of disease before and after treatment, and response. ETHICS AND DISSEMINATION: MALIMAR has ethical approval from South Central-Oxford C Research Ethics Committee (REC Reference: 17/SC/0630). IRAS Project ID: 233501. CPMS Portfolio adoption (CPMS ID: 36766). Participants gave informed consent to participate in the study before taking part. MALIMAR is funded by National Institute for Healthcare Research Efficacy and Mechanism Evaluation funding (NIHR EME Project ID: 16/68/34). Findings will be made available through peer-reviewed publications and conference dissemination. TRIAL REGISTRATION NUMBER: NCT03574454.


Subject(s)
Machine Learning; Magnetic Resonance Imaging; Multiple Myeloma; Whole Body Imaging; Chlorobenzenes; Clinical Trials, Phase II as Topic; Clinical Trials, Phase III as Topic; Cross-Sectional Studies; Diagnostic Tests, Routine; Humans; Magnetic Resonance Imaging/methods; Multiple Myeloma/diagnostic imaging; Multiple Myeloma/therapy; Retrospective Studies; Sulfides; Whole Body Imaging/methods
5.
Sensors (Basel) ; 22(20), 2022 Oct 21.
Article in English | MEDLINE | ID: mdl-36298412

ABSTRACT

Sensor fusion is the process of merging data from many sources, such as radar, lidar and camera sensors, to provide less uncertain information than that collected from a single source [...].


Subject(s)
Algorithms; Deep Learning; Radar; Vision; Computers
6.
NPJ Precis Oncol ; 6(1): 37, 2022 Jun 15.
Article in English | MEDLINE | ID: mdl-35705792

ABSTRACT

Understanding the factors that impact prognosis for cancer patients has high clinical relevance for treatment decisions and monitoring of the disease outcome. Advances in artificial intelligence (AI) and digital pathology offer an exciting opportunity to capitalize on the use of whole slide images (WSIs) of hematoxylin and eosin (H&E) stained tumor tissue for objective prognosis and prediction of response to targeted therapies. AI models often require hand-delineated annotations for effective training, which may not be readily available for larger data sets. In this study, we investigated whether AI models can be trained without region-level annotations and solely on patient-level survival data. We present a weakly supervised survival convolutional neural network (WSS-CNN) approach equipped with a visual attention mechanism for predicting overall survival. The inclusion of visual attention provides insights into regions of the tumor microenvironment with pathological interpretation, which may improve our understanding of the disease pathomechanism. We performed this analysis on two independent, multi-center patient data sets of lung (publicly available data) and bladder urothelial carcinoma. We perform univariable and multivariable analyses and show that WSS-CNN features are prognostic of overall survival in both tumor indications. The presented results highlight the significance of computational pathology algorithms for predicting prognosis using H&E stained images alone and underpin the use of computational methods to improve the efficiency of clinical trial studies.
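
The attention-based aggregation of patch features into a slide-level prediction can be sketched as below; the dimensions, the single linear risk head and the absence of a survival loss (e.g. a Cox objective) are simplifications, not the WSS-CNN architecture itself.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Minimal attention pooling over patch features for one whole slide image."""
    def __init__(self, dim=512):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.risk = nn.Linear(dim, 1)                 # slide-level risk score (survival head stand-in)

    def forward(self, patch_feats):                   # patch_feats: (num_patches, dim)
        a = self.score(patch_feats).softmax(dim=0)    # attention over patches
        slide_feat = (a * patch_feats).sum(dim=0)     # attention-weighted slide representation
        return self.risk(slide_feat), a               # attention map highlights informative regions

model = AttentionPool()
risk, attention = model(torch.randn(3000, 512))        # stand-in features for 3000 patches
```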

7.
Med Image Anal ; 79: 102480, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35598521

ABSTRACT

Identification of nuclear components in the histology landscape is an important step towards developing computational pathology tools for profiling the tumor micro-environment. Most existing methods for the identification of such components are limited in scope due to the heterogeneous nature of the nuclei. Graph-based methods offer a natural way to formulate the nucleus classification problem so as to incorporate both the appearance and the geometric locations of the nuclei. The main challenge is to define models that can handle such an unstructured domain. Current approaches focus on learning better features and then employ well-known classifiers for identifying distinct nuclear phenotypes. In contrast, we propose a message passing network that is a fully learnable framework built on the classical network flow formulation. Based on the physical interaction of the nuclei, a nearest neighbor graph is constructed such that the nodes represent the nuclei centroids. For each node and edge, appearance and geometric features are computed, which are then used to construct messages that diffuse contextual information to the neighboring nodes. Such an algorithm can infer global information over an entire network and predict biologically meaningful nuclear communities. We show that learning such communities improves the performance of the nucleus classification task in histology images. The proposed algorithm can be used as a component in existing state-of-the-art methods, resulting in improved nucleus classification performance across four different publicly available datasets.
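
A toy sketch of the graph construction and a single hand-wired message-passing step; the feature definitions, neighbour count and fixed mixing weights are illustrative, whereas the actual model learns the messages.

```python
import numpy as np
from scipy.spatial import cKDTree

centroids = np.random.rand(500, 2) * 1000        # nuclei centroid coordinates (x, y), stand-in values
feats = np.random.rand(500, 16)                  # per-nucleus appearance + geometric features, stand-in

tree = cKDTree(centroids)
_, nbrs = tree.query(centroids, k=6)             # each nucleus plus its 5 nearest neighbours

# One message-passing step: aggregate neighbour features and mix with the node's own features
messages = feats[nbrs[:, 1:]].mean(axis=1)       # mean over the 5 neighbours
updated = 0.5 * feats + 0.5 * messages           # fixed mixing here; learned in the actual model
```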


Subject(s)
Histological Techniques; Algorithms; Cell Nucleus; Humans
8.
Article in English | MEDLINE | ID: mdl-31001524

ABSTRACT

High-resolution microscopy images of tissue specimens provide detailed information about the morphology of normal and diseased tissue. Image analysis of tissue morphology can help cancer researchers develop a better understanding of cancer biology. Segmentation of nuclei and classification of tissue images are two common tasks in tissue image analysis. Development of accurate and efficient algorithms for these tasks is a challenging problem because of the complexity of tissue morphology and tumor heterogeneity. In this paper we present two computer algorithms: one designed for segmentation of nuclei and the other for classification of whole slide tissue images. The segmentation algorithm implements a multiscale deep residual aggregation network to accurately segment nuclear material and then separate clumped nuclei into individual nuclei. The classification algorithm initially carries out patch-level classification via a deep learning method, then patch-level statistical and morphological features are used as input to a random forest regression model for whole slide image classification. The segmentation and classification algorithms were evaluated in the MICCAI 2017 Digital Pathology challenge. The segmentation algorithm achieved an accuracy score of 0.78. The classification algorithm achieved an accuracy score of 0.81. These scores were the highest in the challenge.
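
The second stage of the classification pipeline (patch-level outputs aggregated into slide-level features for a random forest regressor) can be sketched roughly as follows; the specific summary statistics and dimensions are assumptions, not the authors' exact feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def slide_features(patch_probs):
    """patch_probs: (num_patches, num_classes) softmax outputs from the patch-level classifier."""
    return np.concatenate([patch_probs.mean(axis=0),
                           patch_probs.std(axis=0),
                           np.percentile(patch_probs, 90, axis=0)])

rng = np.random.default_rng(0)
X = np.stack([slide_features(rng.random((200, 4))) for _ in range(50)])   # 50 synthetic slides
y = rng.integers(0, 4, size=50).astype(float)                              # synthetic slide-level labels

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
pred = model.predict(X[:5])                                                 # slide-level predictions
```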

9.
Med Image Anal ; 55: 1-14, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30991188

ABSTRACT

Tumor segmentation in whole-slide images of histology slides is an important step towards computer-assisted diagnosis. In this work, we propose a tumor segmentation framework based on the novel concept of persistent homology profiles (PHPs). For a given image patch, the homology profiles are derived by efficient computation of persistent homology, which is an algebraic tool from homology theory. We propose an efficient way of computing the topological persistence of an image, as an alternative to simplicial homology. The PHPs are devised to distinguish tumor regions from their normal counterparts by modeling the atypical characteristics of tumor nuclei. We propose two variants of our method for tumor segmentation: one that targets speed without compromising accuracy and the other that targets higher accuracy. The fast version is based on a selection of exemplar image patches from a convolutional neural network (CNN) and patch classification by quantifying the divergence between the PHPs of the exemplars and the input image patch. Detailed comparative evaluation shows that the proposed algorithm is significantly faster than competing algorithms while achieving comparable results. The accurate version combines the PHPs and high-level CNN features and employs a multi-stage ensemble strategy for image patch labeling. Experimental results demonstrate that the combination of PHPs and CNN features outperforms competing algorithms. This study is performed on two independently collected colorectal datasets containing adenoma, adenocarcinoma, signet, and healthy cases. Collectively, the accurate tumor segmentation produces the highest average patch-level F1-score, as compared with competing algorithms, on malignant and healthy cases from both datasets. Overall, the proposed framework highlights the utility of persistent homology for histopathology image analysis.
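
As a very rough illustration of the "profile over a filtration" idea, the sketch below counts connected components (Betti-0) of sublevel sets over a threshold sweep and compares two patches by a divergence between their normalised profiles; the paper computes proper persistent homology, so this is only a conceptual stand-in.

```python
import numpy as np
from scipy import ndimage

def betti0_profile(gray_patch, thresholds):
    """Count connected components of the sublevel sets of an intensity image."""
    counts = []
    for t in thresholds:
        _, n = ndimage.label(gray_patch <= t)    # sublevel set at threshold t
        counts.append(n)
    return np.array(counts, dtype=float)

patch_a = np.random.rand(128, 128)               # stand-in grayscale patches
patch_b = np.random.rand(128, 128)
ts = np.linspace(0, 1, 32)
profile_a, profile_b = betti0_profile(patch_a, ts), betti0_profile(patch_b, ts)

# Compare patches by the divergence between their normalised profiles
pa = profile_a / profile_a.sum()
pb = profile_b / profile_b.sum()
divergence = np.sum(pa * np.log((pa + 1e-8) / (pb + 1e-8)))   # KL-style divergence
```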


Subject(s)
Adenocarcinoma/diagnostic imaging; Adenocarcinoma/pathology; Algorithms; Colorectal Neoplasms/diagnostic imaging; Colorectal Neoplasms/pathology; Image Processing, Computer-Assisted/methods; Cell Proliferation; Deep Learning; Histological Techniques; Humans
10.
Med Image Anal ; 54: 111-121, 2019 May.
Article in English | MEDLINE | ID: mdl-30861443

ABSTRACT

Tumor proliferation is an important biomarker indicative of the prognosis of breast cancer patients. Assessment of tumor proliferation in a clinical setting is a highly subjective and labor-intensive task. Previous efforts to automate tumor proliferation assessment by image analysis only focused on mitosis detection in predefined tumor regions. However, in a real-world scenario, automatic mitosis detection should be performed in whole-slide images (WSIs) and an automatic method should be able to produce a tumor proliferation score given a WSI as input. To address this, we organized the TUmor Proliferation Assessment Challenge 2016 (TUPAC16) on prediction of tumor proliferation scores from WSIs. The challenge dataset consisted of 500 training and 321 testing breast cancer histopathology WSIs. In order to ensure fair and independent evaluation, only the ground truth for the training dataset was provided to the challenge participants. The first task of the challenge was to predict mitotic scores, i.e., to reproduce the manual method of assessing tumor proliferation by a pathologist. The second task was to predict the gene expression based PAM50 proliferation scores from the WSI. The best-performing automatic method for the first task achieved a quadratic-weighted Cohen's kappa score of κ = 0.567, 95% CI [0.464, 0.671] between the predicted scores and the ground truth. For the second task, the predictions of the top method had a Spearman's correlation coefficient of r = 0.617, 95% CI [0.581, 0.651] with the ground truth. This was the first comparison study that investigated tumor proliferation assessment from WSIs. The achieved results are promising given the difficulty of the tasks and weakly-labeled nature of the ground truth. However, further research is needed to improve the practical utility of image analysis methods for this task.
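
The two evaluation metrics quoted above are standard and can be reproduced on toy data as follows (quadratic-weighted Cohen's kappa via scikit-learn and Spearman's correlation via SciPy); the data below are synthetic, not challenge results.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from scipy.stats import spearmanr

# Task 1 style: ordinal mitotic scores, agreement measured with quadratic-weighted kappa
y_true = np.array([1, 2, 3, 3, 2, 1, 2, 3])            # pathologist scores (toy)
y_pred = np.array([1, 2, 3, 2, 2, 1, 3, 3])            # predicted scores (toy)
kappa = cohen_kappa_score(y_true, y_pred, weights="quadratic")

# Task 2 style: continuous PAM50 proliferation scores, agreement measured with Spearman's rho
pam50_true = np.random.rand(50)
pam50_pred = pam50_true + 0.3 * np.random.rand(50)
rho, pval = spearmanr(pam50_true, pam50_pred)
```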


Subject(s)
Biomarkers, Tumor/analysis; Breast Neoplasms/pathology; Deep Learning; Image Processing, Computer-Assisted/methods; Biomarkers, Tumor/genetics; Breast Neoplasms/genetics; Cell Proliferation; Female; Gene Expression; Humans; Mitosis; Pathology/methods; Predictive Value of Tests; Prognosis
11.
IEEE Trans Med Imaging ; 38(11): 2620-2631, 2019 Nov.
Article in English | MEDLINE | ID: mdl-30908205

ABSTRACT

Estimating over-amplification of human epidermal growth factor receptor 2 (HER2) in invasive breast cancer is regarded as a significant predictive and prognostic marker. We propose a novel deep reinforcement learning (DRL)-based model that treats immunohistochemical (IHC) scoring of HER2 as a sequential learning task. For a given image tile sampled from a multi-resolution gigapixel whole slide image (WSI), the model learns to sequentially identify some of the diagnostically relevant regions of interest (ROIs) by following a parameterized policy. The selected ROIs are processed by recurrent and residual convolution networks to learn the discriminative features for different HER2 scores and to predict the next location, without needing to process all the sub-image patches of a given tile, mimicking a histopathologist who would not usually analyze every part of the slide at the highest magnification. The proposed model incorporates a task-specific regularization term and an inhibition-of-return mechanism to prevent the model from revisiting previously attended locations. We evaluated our model on two IHC datasets: a publicly available dataset from the HER2 scoring challenge contest and another dataset consisting of WSIs of gastroenteropancreatic neuroendocrine tumor sections stained with the Glo1 marker. We demonstrate that the proposed model outperforms other methods based on state-of-the-art deep convolutional networks. To the best of our knowledge, this is the first study using DRL for IHC scoring, and it could potentially lead to wider use of DRL in the domain of computational pathology, reducing the computational burden of analyzing large multi-gigapixel histology images.
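
A minimal REINFORCE-style sketch of sequential ROI selection with an inhibition-of-return mask; the recurrent policy, candidate-location grid, reward and feature stand-ins are assumptions and are far simpler than the model described in the abstract.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

class ROIPolicy(nn.Module):
    """Recurrent policy scoring candidate next locations from the features seen so far."""
    def __init__(self, feat_dim=128, n_locations=16):
        super().__init__()
        self.encoder = nn.GRUCell(feat_dim, 128)   # recurrent summary of visited ROIs
        self.head = nn.Linear(128, n_locations)    # scores for candidate next locations

    def forward(self, roi_feat, hidden):
        hidden = self.encoder(roi_feat, hidden)
        return self.head(hidden), hidden

policy = ROIPolicy()
optim = torch.optim.Adam(policy.parameters(), lr=1e-4)

# One episode over a tile with 16 candidate ROI positions, attending to only a few of them
hidden = torch.zeros(1, 128)
visited, log_probs = [], []
roi_feat = torch.randn(1, 128)                     # stand-in for CNN features of the first ROI
for _ in range(4):
    logits, hidden = policy(roi_feat, hidden)
    if visited:                                    # inhibition of return: mask visited locations
        logits[0, visited] = float("-inf")
    dist = Categorical(logits=logits)
    a = dist.sample()
    log_probs.append(dist.log_prob(a))
    visited.append(a.item())
    roi_feat = torch.randn(1, 128)                 # in practice: features of the newly selected ROI

reward = 1.0                                       # e.g. +1 if the final HER2 score prediction is correct
loss = -(torch.stack(log_probs).sum() * reward)    # REINFORCE objective
optim.zero_grad(); loss.backward(); optim.step()
```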


Subject(s)
Deep Learning; Image Interpretation, Computer-Assisted/methods; Immunohistochemistry/methods; Algorithms; Biomarkers, Tumor/analysis; Breast/chemistry; Breast/diagnostic imaging; Breast Neoplasms/chemistry; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Female; Humans; Receptor, ErbB-2/analysis
12.
Histopathology ; 72(2): 227-238, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28771788

ABSTRACT

AIMS: Evaluating expression of the human epidermal growth factor receptor 2 (HER2) by visual examination of immunohistochemistry (IHC) on invasive breast cancer (BCa) is a key part of the diagnostic assessment of BCa due to its recognized importance as a predictive and prognostic marker in clinical practice. However, visual scoring of HER2 is subjective and consequently prone to interobserver variability. Given the prognostic and therapeutic implications of HER2 scoring, a more objective method is required. In this paper, we report on a recent automated HER2 scoring contest, held in conjunction with the annual PathSoc meeting in Nottingham in June 2016, aimed at systematically comparing and advancing the state-of-the-art artificial intelligence (AI)-based automated methods for HER2 scoring. METHODS AND RESULTS: The contest data set comprised digitized whole slide images (WSI) of sections from 86 cases of invasive breast carcinoma stained with both haematoxylin and eosin (H&E) and IHC for HER2. The contesting algorithms automatically predicted scores of the IHC slides for an unseen subset of the data set, and the predicted scores were compared with the 'ground truth' (a consensus score from at least two experts). We also report on a simple 'Man versus Machine' contest for the scoring of HER2 and show that the automated methods could beat the pathology experts on this contest data set. CONCLUSIONS: This paper presents a benchmark for comparing the performance of automated algorithms for scoring of HER2. It also demonstrates the enormous potential of automated algorithms in assisting the pathologist with objective IHC scoring.


Subject(s)
Algorithms; Biomarkers, Tumor/analysis; Breast Neoplasms/diagnosis; Image Interpretation, Computer-Assisted/methods; Receptor, ErbB-2/analysis; Female; Humans; Immunohistochemistry
13.
JAMA ; 318(22): 2199-2210, 2017 Dec 12.
Article in English | MEDLINE | ID: mdl-29234806

ABSTRACT

Importance: Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency. Objective: To assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin-stained tissue sections of lymph nodes of women with breast cancer and compare it with pathologists' diagnoses in a diagnostic setting. Design, Setting, and Participants: Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining was provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The same test set of corresponding glass slides was also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands to ascertain the likelihood of nodal metastases for each slide in a flexible 2-hour session, simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC). Exposures: Deep learning algorithms submitted as part of a challenge competition or pathologist interpretation. Main Outcomes and Measures: The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image using receiver operating characteristic curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor. Results: The area under the receiver operating characteristic curve (AUC) for the algorithms ranged from 0.556 to 0.994. The top-performing algorithm achieved a lesion-level, true-positive fraction comparable with that of the pathologist WOTC (72.4% [95% CI, 64.3%-80.4%]) at a mean of 0.0125 false-positives per normal whole-slide image. For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95% CI, 0.983-0.999]) performed significantly better than the pathologists WTC in a diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P < .001). The top 5 algorithms had a mean AUC that was comparable with the pathologist interpreting the slides in the absence of time constraints (mean AUC, 0.960 [range, 0.923-0.994] for the top 5 algorithms vs 0.966 [95% CI, 0.927-0.998] for the pathologist WOTC). Conclusions and Relevance: In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting.
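
Slide-level evaluation of this kind boils down to ROC analysis of per-slide scores against the reference standard; a toy example using scikit-learn is sketched below with synthetic labels and scores (not challenge data).

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=129)                              # 0 = no metastasis, 1 = metastasis
y_score = np.clip(y_true * 0.7 + rng.random(129) * 0.5, 0, 1)      # toy per-slide algorithm scores

auc = roc_auc_score(y_true, y_score)                               # area under the ROC curve
fpr, tpr, thresholds = roc_curve(y_true, y_score)                  # full ROC curve if needed
```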


Subject(s)
Breast Neoplasms/pathology; Lymphatic Metastasis/diagnosis; Machine Learning; Pathologists; Algorithms; Female; Humans; Lymphatic Metastasis/pathology; Pathology, Clinical; ROC Curve
14.
Oncotarget ; 8(44): 76961-76973, 2017 Sep 29.
Article in English | MEDLINE | ID: mdl-29100361

ABSTRACT

BACKGROUND: The glyoxalase-1 gene (GLO1) is a hotspot for copy-number variation (CNV) in human genomes. Increased GLO1 copy-number is associated with multidrug resistance in tumour chemotherapy, but the prevalence of GLO1 CNV in gastro-entero-pancreatic neuroendocrine tumours (GEP-NET) is unknown. METHODS: GLO1 copy-number variation was measured in 39 patients with GEP-NET (midgut NET, n = 25; pancreatic NET, n = 14) after curative or debulking surgical treatment. Primary tumour tissue, surrounding healthy tissue and, where applicable, additional metastatic tumour tissue were analysed using real-time qPCR. Progression and survival following surgical treatment were monitored over 4.2 ± 0.5 years. RESULTS: In the pooled GEP-NET cohort, GLO1 copy-number in healthy tissue was 2.0 in all samples but was significantly increased in primary tumour tissue in 43% of patients with pancreatic NET and in 72% of patients with midgut NET, mainly driven by significantly higher GLO1 copy-number in midgut NET. In tissue from additional metastasis resections (18 midgut NET and one pancreatic NET), GLO1 copy-number was also increased compared with healthy tissue, but was not significantly different from that in primary tumour tissue. During a mean follow-up of 3-5 years, 8 patients died and 16 patients showed radiological progression. In midgut NET, a high GLO1 copy-number was associated with earlier progression. In NETs with increased GLO1 copy-number, there was increased Glo1 protein expression compared with non-malignant tissue. CONCLUSIONS: GLO1 copy-number was increased in a large percentage of patients with GEP-NET and correlated positively with increased Glo1 protein in tumour tissue. Analysis of GLO1 copy-number variation, particularly in patients with midgut NET, could be a novel prognostic marker for tumour progression.
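
Relative copy-number estimation from qPCR is commonly done with the 2^-ddCt arithmetic sketched below; the assay design and calibrator used in the study may differ, so the function and numbers are illustrative only.

```python
def copy_number(ct_target_sample, ct_ref_sample, ct_target_calibrator, ct_ref_calibrator,
                calibrator_copies=2.0):
    """Estimate target gene copy number relative to a two-copy calibrator (standard 2^-ddCt method)."""
    d_ct_sample = ct_target_sample - ct_ref_sample                # target vs reference gene, tumour tissue
    d_ct_calibrator = ct_target_calibrator - ct_ref_calibrator    # same difference in healthy (2-copy) tissue
    dd_ct = d_ct_sample - d_ct_calibrator
    return calibrator_copies * 2.0 ** (-dd_ct)

# Example: GLO1 amplifies one cycle earlier in tumour than expected -> roughly 4 copies
print(copy_number(24.0, 25.0, 25.0, 25.0))   # 4.0
```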
