Results 1 - 20 of 20
1.
Mod Pathol ; 37(3): 100416, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38154653

ABSTRACT

In recent years, artificial intelligence (AI) has demonstrated exceptional performance in mitosis identification and quantification. However, the implementation of AI in clinical practice needs to be evaluated against the existing methods. This study aimed to assess the optimal method of using AI-based mitotic figure scoring in breast cancer (BC). We utilized whole slide images from a large cohort of BC with extended follow-up comprising a discovery (n = 1715) and a validation (n = 859) set (Nottingham cohort). The Cancer Genome Atlas of breast invasive carcinoma (TCGA-BRCA) cohort (n = 757) was used as an external test set. Employing automated mitosis detection, the mitotic count was assessed using 3 different methods: the mitotic count per tumor area (MCT; calculated by dividing the number of mitotic figures by the total tumor area), the mitotic index (MI; defined as the average number of mitotic figures per 1000 malignant cells), and the mitotic activity index (MAI; defined as the number of mitotic figures in a 3 mm² area within the mitotic hotspot). These automated metrics were evaluated and compared based on their correlation with the well-established visual scoring method of the Nottingham grading system and the Ki67 score, clinicopathologic parameters, and patient outcomes. AI-based mitotic scores derived from the 3 methods (MCT, MI, and MAI) were significantly correlated with clinicopathologic characteristics and patient survival (P < .001). However, the mitotic counts and the derived cutoffs varied significantly between the 3 methods. Only MAI and MCT were positively correlated with the gold-standard visual scoring method used in the Nottingham grading system (r = 0.8 and r = 0.7, respectively) and Ki67 scores (r = 0.69 and r = 0.55, respectively), and MAI was the only independent predictor of survival (P < .05) in multivariate Cox regression analysis. For clinical applications, the optimal method of scoring mitosis using AI needs to be considered. MAI can provide reliable and reproducible results and can accurately quantify mitotic figures in BC.
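For illustration, the three metric definitions above reduce to simple arithmetic; the function and example counts below are ours, not the study's (MAI is simply the count within the reported 3 mm² hotspot):

```python
def mitotic_scores(n_mitoses_total, tumor_area_mm2, n_malignant_cells,
                   n_mitoses_in_hotspot):
    """Compute the three AI-derived mitotic metrics as defined in the abstract."""
    # MCT: mitotic count per tumor area (figures per mm^2 of tumor)
    mct = n_mitoses_total / tumor_area_mm2
    # MI: average number of mitotic figures per 1000 malignant cells
    mi = 1000 * n_mitoses_total / n_malignant_cells
    # MAI: number of mitotic figures within the 3 mm^2 hotspot
    mai = n_mitoses_in_hotspot
    return mct, mi, mai

# Hypothetical values for a single slide
print(mitotic_scores(240, 120.0, 400_000, 14))  # (2.0, 0.6, 14)
```

The divergent denominators (area, cell count, hotspot) explain why the three methods yield different counts and cutoffs despite detecting the same figures.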


Subjects
Breast Neoplasms, Humans, Female, Breast Neoplasms/pathology, Ki-67 Antigen, Artificial Intelligence, Mitosis, Mitotic Index
2.
Gut ; 72(9): 1709-1721, 2023 09.
Article in English | MEDLINE | ID: mdl-37173125

ABSTRACT

OBJECTIVE: To develop an interpretable artificial intelligence algorithm to rule out normal large bowel endoscopic biopsies, saving pathologist resources and helping with early diagnosis. DESIGN: A graph neural network was developed incorporating pathologist domain knowledge to classify 6591 whole-slide images (WSIs) of endoscopic large bowel biopsies from 3291 patients (approximately 54% female, 46% male) as normal or abnormal (non-neoplastic and neoplastic) using clinically driven interpretable features. One UK National Health Service (NHS) site was used for model training and internal validation. External validation was conducted on data from two other NHS sites and one Portuguese site. RESULTS: Model training and internal validation were performed on 5054 WSIs of 2080 patients, resulting in an area under the curve-receiver operating characteristic (AUC-ROC) of 0.98 (SD=0.004) and AUC-precision-recall (PR) of 0.98 (SD=0.003). The performance of the model, named Interpretable Gland-Graphs using a Neural Aggregator (IGUANA), was consistent in testing over 1537 WSIs of 1211 patients from three independent external datasets with mean AUC-ROC=0.97 (SD=0.007) and AUC-PR=0.97 (SD=0.005). At a high sensitivity threshold of 99%, the proposed model can reduce the number of normal slides to be reviewed by a pathologist by approximately 55%. IGUANA also provides an explainable output highlighting potential abnormalities in a WSI in the form of a heatmap, as well as numerical values associating the model prediction with various histological features. CONCLUSION: The model achieved consistently high accuracy, showing its potential in optimising increasingly scarce pathologist resources. Explainable predictions can guide pathologists in their diagnostic decision-making and help boost their confidence in the algorithm, paving the way for its future clinical adoption.


Subjects
Artificial Intelligence, State Medicine, Humans, Male, Female, Retrospective Studies, Algorithms, Biopsy
3.
Br J Cancer ; 129(11): 1747-1758, 2023 11.
Article in English | MEDLINE | ID: mdl-37777578

ABSTRACT

BACKGROUND: Tumour infiltrating lymphocytes (TILs) are a prognostic parameter in triple-negative and human epidermal growth factor receptor 2 (HER2)-positive breast cancer (BC). However, their role in luminal (oestrogen receptor positive and HER2 negative (ER+/HER2-)) BC remains unclear. In this study, we used artificial intelligence (AI) to assess the prognostic significance of TILs in a large well-characterised cohort of luminal BC. METHODS: Supervised deep learning model analysis of Haematoxylin and Eosin (H&E)-stained whole slide images (WSIs) was applied to a cohort of 2231 luminal early-stage BC patients with long-term follow-up. Stromal TILs (sTILs) and intratumoural TILs (tTILs) were quantified, and their spatial distribution within tumour tissue, as well as the proportion of stroma involved by sTILs, were assessed. The association of TILs with clinicopathological parameters and patient outcome was determined. RESULTS: A strong positive linear correlation was observed between sTILs and tTILs. High sTILs and tTILs counts, as well as their proximity to stromal and tumour cells (co-occurrence), were associated with poor clinical outcomes and unfavourable clinicopathological parameters, including high tumour grade, lymph node metastasis, large tumour size, and young age. AI-based assessment of the proportion of stroma composed of sTILs (as assessed visually in routine practice) was not predictive of patient outcome. tTILs were an independent predictor of worse patient outcome in multivariate Cox regression analysis. CONCLUSION: AI-based detection of TIL counts and their spatial distribution provides prognostic value in luminal early-stage BC patients. The utilisation of AI algorithms could provide a comprehensive assessment of TILs as a morphological variable in WSIs beyond subjective visual estimation.


Subjects
Breast Neoplasms, Triple Negative Breast Neoplasms, Humans, Female, Breast Neoplasms/pathology, Lymphocytes, Tumor-Infiltrating/pathology, Artificial Intelligence, Prognosis, Triple Negative Breast Neoplasms/pathology, Biomarkers, Tumor/metabolism
4.
Mod Pathol ; 36(10): 100254, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37380057

ABSTRACT

Tumor-associated stroma in breast cancer (BC) is complex and exhibits a high degree of heterogeneity. To date, no standardized assessment method has been established. Artificial intelligence (AI) could provide an objective morphologic assessment of tumors and stroma, with the potential to identify new features not discernible by visual microscopy. In this study, we used AI to assess the clinical significance of (1) the stroma-to-tumor ratio (S:TR) and (2) the spatial arrangement of stromal cells, tumor cell density, and tumor burden in BC. Whole-slide images of a large cohort (n = 1968) of well-characterized luminal BC cases were examined. Region- and cell-level annotation was performed, and supervised deep learning models were applied for automated quantification of tumor and stromal features. S:TR was calculated in terms of surface area and cell count ratio, and the S:TR heterogeneity and spatial distribution were also assessed. Tumor cell density and tumor size were used to estimate tumor burden. Cases were divided into discovery (n = 1027) and test (n = 941) sets for validation of the findings. In the whole cohort, the mean stroma-to-tumor surface area ratio was 0.74, and the stromal cell density heterogeneity score was high (0.7/1). BC with high S:TR showed features characteristic of good prognosis and longer patient survival in both the discovery and test sets. Heterogeneous spatial distribution of S:TR areas was predictive of worse outcome. Higher tumor burden was associated with aggressive tumor behavior and shorter survival and was an independent predictor of worse outcome (BC-specific survival: hazard ratio 1.7, 95% CI 1.04-2.83, P = .03; distant metastasis-free survival: hazard ratio 1.64, 95% CI 1.01-2.62, P = .04), superior to absolute tumor size. The study concludes that AI provides a tool to assess major and subtle morphologic stromal features in BC with prognostic implications. Tumor burden is more prognostically informative than tumor size.
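The two S:TR definitions used above (surface area and cell count) reduce to simple ratios; a minimal sketch with hypothetical inputs (names are ours, not the study's):

```python
def stroma_tumor_ratios(stroma_area_mm2, tumor_area_mm2,
                        n_stromal_cells, n_tumor_cells):
    """S:TR by surface area and by cell count, as described in the abstract."""
    area_ratio = stroma_area_mm2 / tumor_area_mm2
    cell_ratio = n_stromal_cells / n_tumor_cells
    return area_ratio, cell_ratio

# Hypothetical case: 74 mm^2 of stroma against 100 mm^2 of tumor
print(stroma_tumor_ratios(74.0, 100.0, 7_400, 10_000))  # (0.74, 0.74)
```

In the study these per-slide ratios were further summarized by their heterogeneity and spatial distribution across the slide, which is where the prognostic signal was found.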

5.
Med Image Anal ; 93: 103071, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38199068

ABSTRACT

Colorectal cancer (CRC) is a major global health concern, and identifying the molecular pathways, genetic subtypes, and mutations associated with CRC is crucial for precision medicine. However, traditional measurement techniques such as gene sequencing are costly and time-consuming, while most deep learning methods proposed for this task lack interpretability. This study offers a new approach to enhance state-of-the-art deep learning methods for molecular pathway and key mutation prediction by incorporating cell network information. We build cell graphs with nuclei as nodes and nuclei connections as edges of the network, and leverage Social Network Analysis (SNA) measures to extract abstract, perceivable, and interpretable features that explicitly describe the cell network characteristics in an image. Our approach does not rely on precise nuclei segmentation or feature extraction, is computationally efficient, and is easily scalable. In this study, we utilize the TCGA-CRC-DX dataset, comprising 499 patients and 502 diagnostic slides from primary colorectal tumours, sourced from 36 distinct medical centres in the United States. By incorporating the SNA features alongside deep features in two multiple instance learning frameworks, we demonstrate improved performance for chromosomal instability (CIN), hypermutated tumour (HM), TP53 gene, BRAF gene, and microsatellite instability (MSI) status prediction tasks (on average, a 2.4-4% improvement in AUROC and 7-8.8% in AUPRC). Additionally, our method achieves outstanding performance on MSI prediction in an external PAIP dataset (99% AUROC and 98% AUPRC), demonstrating its generalizability. Our findings highlight the discriminative power of SNA features and how they can be beneficial to deep learning models' performance, and provide insights into the correlation of cell network profiles with molecular pathways and key mutations.
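As a rough sketch of the cell-graph idea (nuclei as nodes, proximity as edges), two common SNA measures can be computed directly from nuclei centroids; the distance threshold and the feature choices below are our illustrative assumptions, not the paper's exact configuration:

```python
import itertools
import math

def cell_graph_sna_features(centroids, max_dist=30.0):
    """Build a proximity cell graph and extract simple SNA measures."""
    n = len(centroids)
    adj = {i: set() for i in range(n)}
    # Connect every pair of nuclei closer than the distance threshold
    for i, j in itertools.combinations(range(n), 2):
        if math.dist(centroids[i], centroids[j]) <= max_dist:
            adj[i].add(j)
            adj[j].add(i)

    # Degree centrality: fraction of all other nuclei each nucleus connects to
    mean_degree_centrality = sum(len(adj[i]) / (n - 1) for i in range(n)) / n

    # Local clustering: how tightly each nucleus's neighbours interconnect
    def clustering(i):
        nbrs, k = adj[i], len(adj[i])
        if k < 2:
            return 0.0
        links = sum(1 for u, v in itertools.combinations(nbrs, 2) if v in adj[u])
        return 2 * links / (k * (k - 1))

    mean_clustering = sum(clustering(i) for i in range(n)) / n
    return {"mean_degree_centrality": mean_degree_centrality,
            "mean_clustering": mean_clustering}
```

In a pipeline like the one described, such per-image summaries would be concatenated with deep features inside the multiple instance learning framework.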


Subjects
Colorectal Neoplasms, Deep Learning, Humans, Proto-Oncogene Proteins B-raf/genetics, Social Network Analysis, Mutation, Colorectal Neoplasms/genetics, Colorectal Neoplasms/pathology, Microsatellite Instability
6.
Med Image Anal ; 94: 103132, 2024 May.
Article in English | MEDLINE | ID: mdl-38442527

ABSTRACT

Counting of mitotic figures is a fundamental step in grading and prognostication of several cancers. However, manual mitosis counting is tedious and time-consuming. In addition, variation in the appearance of mitotic figures causes a high degree of discordance among pathologists. With advances in deep learning models, several automatic mitosis detection algorithms have been proposed but they are sensitive to domain shift often seen in histology images. We propose a robust and efficient two-stage mitosis detection framework, which comprises mitosis candidate segmentation (Detecting Fast) and candidate refinement (Detecting Slow) stages. The proposed candidate segmentation model, termed EUNet, is fast and accurate due to its architectural design. EUNet can precisely segment candidates at a lower resolution to considerably speed up candidate detection. Candidates are then refined using a deeper classifier network, EfficientNet-B7, in the second stage. We make sure both stages are robust against domain shift by incorporating domain generalization methods. We demonstrate state-of-the-art performance and generalizability of the proposed model on the three largest publicly available mitosis datasets, winning the two mitosis domain generalization challenge contests (MIDOG21 and MIDOG22). Finally, we showcase the utility of the proposed algorithm by processing the TCGA breast cancer cohort (1,124 whole-slide images) to generate and release a repository of more than 620K potential mitotic figures (not exhaustively validated).


Subjects
Breast Neoplasms, Mitosis, Humans, Female, Algorithms, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/pathology, Histological Techniques, Image Processing, Computer-Assisted/methods
7.
NPJ Precis Oncol ; 8(1): 137, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38942998

ABSTRACT

Oral epithelial dysplasia (OED) is a premalignant histopathological diagnosis given to lesions of the oral cavity. Its grading suffers from significant inter-/intra-observer variability and does not reliably predict malignancy progression, potentially leading to suboptimal treatment decisions. To address this, we developed an artificial intelligence (AI) algorithm that assigns an Oral Malignant Transformation (OMT) risk score based on Haematoxylin and Eosin (H&E) stained whole slide images (WSIs). Our AI pipeline leverages an in-house segmentation model to detect and segment both nuclei and epithelium. Subsequently, a shallow neural network utilises interpretable morphological and spatial features, emulating histological markers, to predict progression. We conducted internal cross-validation on our development cohort (Sheffield; n = 193 cases) and independent validation on two external cohorts (Birmingham and Belfast; n = 89 cases). On external validation, the proposed OMTscore achieved an AUROC = 0.75 (Recall = 0.92) in predicting OED progression, outperforming other grading systems (Binary: AUROC = 0.72, Recall = 0.85). Survival analyses showed the prognostic value of our OMTscore (C-index = 0.60, p = 0.02), compared to WHO (C-index = 0.64, p = 0.003) and binary grades (C-index = 0.65, p < 0.001). Nuclear analyses elucidated the presence of peri-epithelial and intra-epithelial lymphocytes in highly predictive patches of transforming cases (p < 0.001). This is the first study to propose a completely automated, explainable, and externally validated algorithm for predicting OED transformation. Our algorithm shows comparable-to-human-level performance, offering a promising solution to the challenges of grading OED in routine clinical practice.

8.
J Pathol Clin Res ; 10(1): e346, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37873865

ABSTRACT

Early-stage estrogen receptor positive and human epidermal growth factor receptor 2 negative (ER+/HER2-) luminal breast cancer (BC) is quite heterogeneous and accounts for about 70% of all BCs. Ki67 is a proliferation marker that has significant prognostic value in luminal BC despite the challenges in its assessment. There is increasing evidence that spatial colocalization, which measures the evenness of different types of cells, is clinically important in several types of cancer. However, reproducible quantification of intra-tumor spatial heterogeneity remains largely unexplored. We propose an automated pipeline for prognostication of luminal BC based on the analysis of the spatial distribution of Ki67 expression in tumor cells using a large well-characterized cohort (n = 2,081). The proposed Ki67 colocalization (Ki67CL) score can stratify ER+/HER2- BC patients with high significance in terms of BC-specific survival (p < 0.00001) and distant metastasis-free survival (p = 0.0048). The Ki67CL score is shown to be highly significant compared with the standard Ki67 index. In addition, we show that the proposed Ki67CL score can help identify luminal BC patients who can potentially benefit from adjuvant chemotherapy.


Subjects
Breast Neoplasms, Humans, Female, Breast Neoplasms/pathology, Prognosis, Ki-67 Antigen, Receptor, ErbB-2/genetics, Receptor, ErbB-2/metabolism, Artificial Intelligence
9.
Med Image Anal ; 94: 103155, 2024 May.
Article in English | MEDLINE | ID: mdl-38537415

ABSTRACT

Recognition of mitotic figures in histologic tumor specimens is highly relevant to patient outcome assessment. This task is challenging for algorithms and human experts alike, with deterioration of algorithmic performance under shifts in image representations. Considerable covariate shifts occur when assessment is performed on different tumor types, images are acquired using different digitization devices, or specimens are produced in different laboratories. This observation motivated the inception of the 2022 challenge on MItosis Domain Generalization (MIDOG 2022). The challenge provided annotated histologic tumor images from six different domains and evaluated the algorithmic approaches for mitotic figure detection provided by nine challenge participants on ten independent domains. Ground truth for mitotic figure detection was established in two ways: a three-expert majority vote and an independent, immunohistochemistry-assisted set of labels. This work represents an overview of the challenge tasks, the algorithmic strategies employed by the participants, and potential factors contributing to their success. With an F1 score of 0.764 for the top-performing team, we summarize that domain generalization across various tumor domains is possible with today's deep learning-based recognition pipelines. However, we also found that domain characteristics not present in the training set (feline as new species, spindle cell shape as new morphology and a new scanner) led to small but significant decreases in performance. When assessed against the immunohistochemistry-assisted reference standard, all methods resulted in reduced recall scores, with only minor changes in the order of participants in the ranking.


Subjects
Laboratories, Mitosis, Humans, Animals, Cats, Algorithms, Image Processing, Computer-Assisted/methods, Reference Standards
10.
Med Image Anal ; 92: 103047, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38157647

ABSTRACT

Nuclear detection, segmentation, and morphometric profiling are essential in helping us further understand the relationship between histology and patient outcome. To drive innovation in this area, we set up a community-wide challenge using the largest available dataset of its kind to assess nuclear segmentation and cellular composition. Our challenge, named CoNIC, stimulated the development of reproducible algorithms for cellular recognition with real-time result inspection on public leaderboards. We conducted an extensive post-challenge analysis based on the top-performing models using 1,658 whole-slide images of colon tissue. With around 700 million detected nuclei per model, associated features were used for dysplasia grading and survival analysis, where we demonstrated that the challenge's improvement over the previous state-of-the-art led to significant boosts in downstream performance. Our findings also suggest that eosinophils and neutrophils play an important role in the tumour microenvironment. We release challenge models and WSI-level results to foster the development of further methods for biomarker discovery.


Subjects
Algorithms, Image Processing, Computer-Assisted, Humans, Image Processing, Computer-Assisted/methods, Cell Nucleus/pathology, Histological Techniques/methods
11.
Med Image Anal ; 83: 102685, 2023 01.
Article in English | MEDLINE | ID: mdl-36410209

ABSTRACT

The recent surge in performance for image analysis of digitised pathology slides can largely be attributed to the advances in deep learning. Deep models can be used to initially localise various structures in the tissue and hence facilitate the extraction of interpretable features for biomarker discovery. However, these models are typically trained for a single task and therefore scale poorly as we wish to adapt the model for an increasing number of different tasks. Also, supervised deep learning models are very data hungry and therefore rely on large amounts of training data to perform well. In this paper, we present a multi-task learning approach for segmentation and classification of nuclei, glands, lumina and different tissue regions that leverages data from multiple independent data sources. While ensuring that our tasks are aligned by the same tissue type and resolution, we enable meaningful simultaneous prediction with a single network. As a result of feature sharing, we also show that the learned representation can be used to improve the performance of additional tasks via transfer learning, including nuclear classification and signet ring cell detection. As part of this work, we train our developed Cerberus model on a huge amount of data, consisting of over 600 thousand objects for segmentation and 440 thousand patches for classification. We use our approach to process 599 colorectal whole-slide images from TCGA, where we localise 377 million, 900 thousand and 2.1 million nuclei, glands and lumina respectively. We make this resource available to remove a major barrier in the development of explainable models for computational pathology.


Subjects
Biomedical Research, Humans
12.
Cell Rep Med ; 4(12): 101313, 2023 12 19.
Article in English | MEDLINE | ID: mdl-38118424

ABSTRACT

Identification of the gene expression state of a cancer patient from routine pathology imaging and characterization of its phenotypic effects have significant clinical and therapeutic implications. However, prediction of expression of individual genes from whole slide images (WSIs) is challenging due to co-dependent or correlated expression of multiple genes. Here, we use a purely data-driven approach to first identify groups of genes with co-dependent expression and then predict their status from WSIs using a bespoke graph neural network. These gene groups allow us to capture the gene expression state of a patient with a small number of binary variables that are biologically meaningful and carry histopathological insights for clinical and therapeutic use cases. Prediction of gene expression state based on these gene groups allows associating histological phenotypes (cellular composition, mitotic counts, grading, etc.) with underlying gene expression patterns and opens avenues for gaining biological insights from routine pathology imaging directly.


Subjects
Breast Neoplasms, Gene Expression Profiling, Humans, Female, Transcriptome/genetics, Neural Networks, Computer, Phenotype, Breast Neoplasms/genetics
13.
Lancet Digit Health ; 5(11): e786-e797, 2023 11.
Article in English | MEDLINE | ID: mdl-37890902

ABSTRACT

BACKGROUND: Histopathological examination is a crucial step in the diagnosis and treatment of many major diseases. Aiming to facilitate diagnostic decision making and reduce the workload of pathologists, we developed an artificial intelligence (AI)-based prescreening tool that analyses whole-slide images (WSIs) of large-bowel biopsies to identify typical, non-neoplastic, and neoplastic biopsies. METHODS: This retrospective cohort study was conducted with an internal development cohort of slides acquired from a hospital in the UK and three external validation cohorts of WSIs acquired from two hospitals in the UK and one clinical laboratory in Portugal. To learn the differential histological patterns from digitised WSIs of large-bowel biopsy slides, our proposed weakly supervised deep-learning model (Colorectal AI Model for Abnormality Detection [CAIMAN]) used slide-level diagnostic labels and no detailed cell or region-level annotations. The method was developed with an internal development cohort of 5054 biopsy slides from 2080 patients that were labelled with corresponding diagnostic categories assigned by pathologists. The three external validation cohorts, with a total of 1536 slides, were used for independent validation of CAIMAN. Each WSI was classified into one of three classes (ie, typical, atypical non-neoplastic, and atypical neoplastic). Prediction scores of image tiles were aggregated into three prediction scores for the whole slide, one for its likelihood of being typical, one for its likelihood of being non-neoplastic, and one for its likelihood of being neoplastic. The assessment of the external validation cohorts was conducted by the trained and frozen CAIMAN model. To evaluate model performance, we calculated area under the convex hull of the receiver operating characteristic curve (AUROC), area under the precision-recall curve, and specificity compared with our previously published iterative draw and rank sampling (IDaRS) algorithm.
We also generated heat maps and saliency maps to analyse and visualise the relationship between the WSI diagnostic labels and spatial features of the tissue microenvironment. The main outcome of this study was the ability of CAIMAN to accurately identify typical and atypical WSIs of colon biopsies, which could potentially facilitate automatic removal of typical biopsies from the diagnostic workload in clinics. FINDINGS: A randomly selected subset of all large bowel biopsies was obtained between Jan 1, 2012, and Dec 31, 2017. The AI training, validation, and assessments were done between Jan 1, 2021, and Sept 30, 2022. WSIs with diagnostic labels were collected between Jan 1 and Sept 30, 2022. Our analysis showed no statistically significant differences across prediction scores from CAIMAN for typical and atypical classes based on anatomical sites of the biopsy. At 0·99 sensitivity, CAIMAN (specificity 0·5592) was more accurate than an IDaRS-based weakly supervised WSI-classification pipeline (0·4629) in identifying typical and atypical biopsies on cross-validation in the internal development cohort (p<0·0001). At 0·99 sensitivity, CAIMAN was also more accurate than IDaRS for two external validation cohorts (p<0·0001), but not for a third external validation cohort (p=0·10). CAIMAN provided higher specificity than IDaRS at some high-sensitivity thresholds (0·7763 vs 0·6222 for 0·95 sensitivity, 0·7126 vs 0·5407 for 0·97 sensitivity, and 0·5615 vs 0·3970 for 0·99 sensitivity on one of the external validation cohorts) and showed high classification performance in distinguishing between neoplastic biopsies (AUROC 0·9928, 95% CI 0·9927-0·9929), inflammatory biopsies (0·9658, 0·9655-0·9661), and atypical biopsies (0·9789, 0·9786-0·9792). On the three external validation cohorts, CAIMAN had AUROC values of 0·9431 (95% CI 0·9165-0·9697), 0·9576 (0·9568-0·9584), and 0·9636 (0·9615-0·9657) for the detection of atypical biopsies.
Saliency maps supported the representation of disease heterogeneity in model predictions and its association with relevant histological features. INTERPRETATION: CAIMAN, with its high sensitivity in detecting atypical large-bowel biopsies, might be a promising improvement in clinical workflow efficiency and diagnostic decision making in prescreening of typical colorectal biopsies. FUNDING: The Pathology Image Data Lake for Analytics, Knowledge and Education Centre of Excellence; the UK Government's Industrial Strategy Challenge Fund; and Innovate UK on behalf of UK Research and Innovation.
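The fixed-sensitivity operating points reported above (specificity at 0·95, 0·97, or 0·99 sensitivity) come from choosing a decision threshold on validation scores. A minimal sketch of that selection, with a hypothetical function name and toy data:

```python
import math

def threshold_for_sensitivity(scores, labels, target_sensitivity=0.99):
    """Highest threshold on the 'atypical' score that still captures at least
    the target fraction of truly atypical slides (label == 1)."""
    positives = sorted((s for s, y in zip(scores, labels) if y == 1),
                       reverse=True)
    needed = math.ceil(target_sensitivity * len(positives))
    return positives[needed - 1]  # classify score >= threshold as atypical

# Toy example: keep at least 75% of the 4 positive slides
print(threshold_for_sensitivity([0.9, 0.8, 0.7, 0.2, 0.1],
                                [1, 1, 1, 1, 0], 0.75))  # 0.7
```

Specificity at that operating point is then simply the fraction of negatives scoring below the chosen threshold.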


Subjects
Artificial Intelligence, Colorectal Neoplasms, Humans, Portugal, Retrospective Studies, Biopsy, United Kingdom, Tumor Microenvironment
14.
NPJ Precis Oncol ; 7(1): 122, 2023 Nov 15.
Article in English | MEDLINE | ID: mdl-37968376

ABSTRACT

Breast cancer (BC) grade is a well-established subjective prognostic indicator of tumour aggressiveness. Tumour heterogeneity and subjective assessment result in a high degree of variability among observers in BC grading. Here we propose an objective Haematoxylin & Eosin (H&E) image-based prognostic marker for early-stage luminal/HER2-negative BReAst CancEr that we term the BRACE marker. The proposed BRACE marker is derived from AI-based assessment of heterogeneity in BC at a detailed level using the power of deep learning. The prognostic ability of the marker is validated in two well-annotated cohorts (Cohort-A/Nottingham: n = 2122 and Cohort-B/Coventry: n = 311) of early-stage luminal/HER2-negative BC patients treated with endocrine therapy and with long-term follow-up. The BRACE marker is able to stratify patients for both distant metastasis-free survival (p = 0.001, C-index: 0.73) and BC-specific survival (p < 0.0001, C-index: 0.84), showing comparable prediction accuracy to the Nottingham Prognostic Index and Magee scores, which are both derived from manual histopathological assessment, to identify luminal BC patients that may be likely to benefit from adjuvant chemotherapy.

15.
Med Image Anal ; 84: 102699, 2023 02.
Article in English | MEDLINE | ID: mdl-36463832

ABSTRACT

The density of mitotic figures (MF) within tumor tissue is known to be highly correlated with tumor proliferation and thus is an important marker in tumor grading. Recognition of MF by pathologists is subject to a strong inter-rater bias, limiting its prognostic value. State-of-the-art deep learning methods can support experts but have been observed to strongly deteriorate when applied in a different clinical environment. The variability caused by using different whole slide scanners has been identified as one decisive component in the underlying domain shift. The goal of the MICCAI MIDOG 2021 challenge was the creation of scanner-agnostic MF detection algorithms. The challenge used a training set of 200 cases, split across four scanning systems. As test set, an additional 100 cases split across four scanning systems, including two previously unseen scanners, were provided. In this paper, we evaluate and compare the approaches that were submitted to the challenge and identify methodological factors contributing to better performance. The winning algorithm yielded an F1 score of 0.748 (CI95: 0.704-0.781), exceeding the performance of six experts on the same task.


Subjects
Algorithms, Mitosis, Humans, Neoplasm Grading, Prognosis
16.
Commun Med (Lond) ; 2: 120, 2022.
Article in English | MEDLINE | ID: mdl-36168445

ABSTRACT

Background: Computational pathology has seen rapid growth in recent years, driven by advanced deep-learning algorithms. Due to the sheer size and complexity of multi-gigapixel whole-slide images, to the best of our knowledge, there is no open-source software library providing a generic end-to-end API for pathology image analysis using best practices. Most researchers have designed custom pipelines from the bottom up, restricting the development of advanced algorithms to specialist users. To help overcome this bottleneck, we present TIAToolbox, a Python toolbox designed to make computational pathology accessible to computational, biomedical, and clinical researchers. Methods: By creating modular and configurable components, we enable the implementation of computational pathology algorithms in a way that is easy to use, flexible and extensible. We consider common sub-tasks including reading whole slide image data, patch extraction, stain normalization and augmentation, model inference, and visualization. For each of these steps, we provide a user-friendly application programming interface for commonly used methods and models. Results: We demonstrate the use of the interface to construct a full computational pathology deep-learning pipeline. We show, with the help of examples, how state-of-the-art deep-learning algorithms can be reimplemented in a streamlined manner using our library with minimal effort. Conclusions: We provide a usable and adaptable library with efficient, cutting-edge, and unit-tested tools for data loading, pre-processing, model inference, post-processing, and visualization. This enables a range of users to easily build upon recent deep-learning developments in the computational pathology literature.
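As one of the sub-tasks listed above, patch extraction can be sketched in a few lines; the function below is our simplified stand-in, not TIAToolbox's actual API, and omits resolution handling and tissue masking:

```python
import numpy as np

def extract_patches(region, patch_size=224, stride=224):
    """Tile a slide region (H, W, 3) into patches, dropping partial border tiles."""
    h, w, _ = region.shape
    patches = [
        region[y:y + patch_size, x:x + patch_size]
        for y in range(0, h - patch_size + 1, stride)
        for x in range(0, w - patch_size + 1, stride)
    ]
    return np.stack(patches)

region = np.zeros((448, 672, 3), dtype=np.uint8)  # stand-in for a WSI region
print(extract_patches(region).shape)  # (6, 224, 224, 3)
```

A library like the one described wraps this step, along with slide reading, stain normalization, and model inference, behind a configurable API so pipelines need not be rebuilt from scratch.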

17.
J Pathol Clin Res ; 8(2): 116-128, 2022 03.
Article in English | MEDLINE | ID: mdl-35014198

ABSTRACT

Recent advances in whole-slide imaging (WSI) technology have led to the development of a myriad of computer vision and artificial intelligence-based diagnostic, prognostic, and predictive algorithms. Computational Pathology (CPath) offers an integrated solution to utilise information embedded in pathology WSIs beyond what can be obtained through visual assessment. For automated analysis of WSIs and validation of machine learning (ML) models, annotations at the slide, tissue, and cellular levels are required. The annotation of salient visual constructs in pathology images is thus a key component of CPath projects. Improper annotations can result in algorithms that are hard to interpret and can potentially produce inaccurate and inconsistent results. Despite the crucial role of annotations in CPath projects, there are no well-defined guidelines or best practices on how annotations should be carried out. In this paper, we address this shortcoming by presenting the experience and best practices acquired during the execution of a large-scale annotation exercise involving a multidisciplinary team of pathologists, ML experts, and researchers as part of the Pathology image data Lake for Analytics, Knowledge and Education (PathLAKE) consortium. We present a real-world case study along with examples of different annotation types, a diagnostic algorithm, an annotation data dictionary, and annotation constructs. The analyses reported in this work highlight best-practice recommendations that can be used as annotation guidelines over the lifecycle of a CPath project.
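To make the idea of an annotation data dictionary concrete, a minimal record schema with validation might look as follows. The field names here are hypothetical illustrations, not the actual PathLAKE data dictionary; the point is that a machine-checkable schema catches malformed annotations before they reach model training.

```python
# Illustrative annotation record schema -- field names are hypothetical,
# not the PathLAKE data dictionary itself.
ANNOTATION_SCHEMA = {
    "slide_id": str,   # which WSI the annotation belongs to
    "level": str,      # "slide" | "tissue" | "cell"
    "label": str,      # controlled-vocabulary term, e.g. "tumour"
    "annotator": str,  # who drew it (supports inter-rater checks)
    "polygon": list,   # [[x, y], ...] vertices in base-level pixels
}

def validate(record):
    """Reject records that would later break training or interpretation."""
    for field, ftype in ANNOTATION_SCHEMA.items():
        if not isinstance(record.get(field), ftype):
            raise ValueError(f"bad or missing field: {field}")
    if len(record["polygon"]) < 3:
        raise ValueError("polygon needs at least 3 vertices")
    return True
```

Keeping the label field tied to a controlled vocabulary is one way to enforce the consistency the paper argues for.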


Subjects
Artificial Intelligence; Semantics; Algorithms; Humans; Pathologists
18.
Med Image Anal ; 65: 101771, 2020 10.
Article in English | MEDLINE | ID: mdl-32769053

ABSTRACT

Object segmentation is an important step in the workflow of computational pathology. Deep learning-based models generally require large amounts of labeled data for precise and reliable prediction. However, collecting labeled data is expensive because it often requires expert knowledge, particularly in the medical imaging domain, where labels are the result of a time-consuming analysis by one or more human experts. As nuclei, cells, and glands are fundamental objects for downstream analysis in computational pathology/cytology, in this paper we propose NuClick, a CNN-based approach to speed up the collection of annotations for these objects while requiring minimal interaction from the annotator. We show that for nuclei and cells in histology and cytology images, one click inside each object is enough for NuClick to yield a precise annotation. For multicellular structures such as glands, we propose a novel approach that provides NuClick with a squiggle as a guiding signal, enabling it to segment the glandular boundaries. These supervisory signals are fed to the network as auxiliary inputs along with the RGB channels. With detailed experiments, we show that NuClick is applicable to a wide range of object scales, robust against variations in the user input, adaptable to new domains, and delivers reliable annotations. An instance segmentation model trained on masks generated by NuClick achieved first rank in the LYON19 challenge. As exemplar outputs of our framework, we are releasing two datasets: 1) lymphocyte annotations within IHC images, and 2) segmented WBCs in blood smear images.
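The auxiliary-input idea can be sketched as follows: guiding point maps are stacked with the RGB channels to form the network input. This is a simplified illustration of click-guided segmentation in general; NuClick's actual preprocessing (e.g. how inclusion and exclusion signals are encoded and smoothed) differs in detail.

```python
import numpy as np

def make_guided_input(rgb, inclusion_clicks, exclusion_clicks):
    """Build an H x W x 5 network input from RGB plus two point maps.

    One binary map marks the clicked (target) object, the other marks
    neighbouring objects the network should exclude. Clicks are given
    as (x, y) pixel coordinates.
    """
    h, w = rgb.shape[:2]
    inc = np.zeros((h, w), dtype=np.float32)
    exc = np.zeros((h, w), dtype=np.float32)
    for x, y in inclusion_clicks:
        inc[y, x] = 1.0
    for x, y in exclusion_clicks:
        exc[y, x] = 1.0
    # channels: normalized RGB + inclusion map + exclusion map
    return np.dstack([rgb.astype(np.float32) / 255.0, inc, exc])
```

In practice the point maps are often blurred into small disks or Gaussians so the guiding signal survives downsampling inside the CNN.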


Subjects
Deep Learning; Histological Techniques; Humans; Image Processing, Computer-Assisted
19.
IEEE Trans Med Imaging ; 39(5): 1380-1391, 2020 05.
Article in English | MEDLINE | ID: mdl-31647422

ABSTRACT

Generalized nucleus segmentation techniques can contribute greatly to reducing the time to develop and validate visual biomarkers for new digital pathology datasets. We summarize the results of the MoNuSeg 2018 Challenge, whose objective was to develop generalizable nucleus segmentation techniques for digital pathology. The challenge was an official satellite event of the MICCAI 2018 conference, in which 32 teams with more than 80 participants from geographically diverse institutes took part. Contestants were given a training set of 30 images from seven organs with annotations of 21,623 individual nuclei. A test dataset of 14 images taken from seven organs, including two organs that did not appear in the training set, was released without annotations. Entries were evaluated based on the average Aggregated Jaccard Index (AJI) on the test set to prioritize accurate instance segmentation over mere semantic segmentation. More than half the teams that completed the challenge outperformed a previous baseline. Among the trends that contributed to increased accuracy were the use of color normalization and heavy data augmentation. Additionally, fully convolutional networks inspired by variants of U-Net, FCN, and Mask R-CNN were popular, typically built on ResNet or VGG base architectures. Watershed segmentation on predicted semantic segmentation maps was a popular post-processing strategy. Several of the top techniques compared favorably to an individual human annotator and can be used with confidence for nuclear morphometrics.
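A simplified version of the AJI metric can be written down from its definition: each ground-truth nucleus is matched to the predicted instance with the highest Jaccard overlap, matched intersections accumulate in the numerator and unions in the denominator, and unmatched predicted instances inflate the denominator as pure error. This sketch differs from the official challenge implementation in minor matching details.

```python
import numpy as np

def aggregated_jaccard_index(gt, pred):
    """Simplified AJI on integer instance label maps (0 = background)."""
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [i for i in np.unique(pred) if i != 0]
    used = set()
    inter_sum = union_sum = 0
    for g in gt_ids:
        gmask = gt == g
        best_j, best_p = 0.0, None
        for p in pred_ids:
            if p in used:
                continue
            pmask = pred == p
            inter = np.logical_and(gmask, pmask).sum()
            union = np.logical_or(gmask, pmask).sum()
            j = inter / union if union else 0.0
            if j > best_j:
                best_j, best_p = j, p
        if best_p is None:
            union_sum += gmask.sum()  # missed nucleus: all error
        else:
            pmask = pred == best_p
            inter_sum += np.logical_and(gmask, pmask).sum()
            union_sum += np.logical_or(gmask, pmask).sum()
            used.add(best_p)
    for p in pred_ids:  # false-positive instances penalize the score
        if p not in used:
            union_sum += (pred == p).sum()
    return inter_sum / union_sum if union_sum else 0.0
```

Because merged or split nuclei leave unmatched pixels in the denominator, AJI punishes instance-level mistakes that plain pixel IoU would miss, which is exactly why the challenge chose it.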


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Cell Nucleus; Humans
20.
IEEE J Biomed Health Inform ; 23(2): 509-518, 2019 03.
Article in English | MEDLINE | ID: mdl-29994323

ABSTRACT

Lesion segmentation is the first step in most automatic melanoma recognition systems. Deficiencies and difficulties in dermoscopic images, such as color inconstancy, hair occlusion, dark corners, and color charts, make lesion segmentation an intricate task. To detect the lesion in the presence of these problems, we propose a supervised saliency detection method tailored for dermoscopic images, based on discriminative regional feature integration (DRFI). The DRFI method incorporates multilevel segmentation; regional contrast, property, and background descriptors; and a random forest regressor to create saliency scores for each region in the image. In our improved saliency detection method, mDRFI, we add new features to the regional property descriptors. In addition, to achieve more robust regional background descriptors, a thresholding algorithm is proposed to obtain a new pseudo-background region. Findings reveal that mDRFI is superior to DRFI in detecting the lesion as the salient object in dermoscopic images. The proposed overall lesion segmentation framework uses the detected saliency map to construct an initial mask of the lesion through thresholding and post-processing operations. The initial mask is then evolved in a level-set framework to fit the lesion's boundaries more closely. Evaluation on three public datasets shows that our proposed segmentation method outperforms conventional state-of-the-art segmentation algorithms and performs comparably with recent approaches based on deep convolutional neural networks.
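The step that turns a saliency map into an initial lesion mask can be sketched as thresholding followed by keeping the largest connected component, one plausible post-processing choice; the paper's subsequent level-set refinement is omitted here.

```python
import numpy as np
from collections import deque

def initial_lesion_mask(saliency, thresh=0.5):
    """Threshold a saliency map and keep the largest 4-connected
    component as the initial lesion mask (a sketch; the threshold
    value and connectivity are illustrative assumptions)."""
    binary = saliency >= thresh
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    best = np.zeros((h, w), dtype=bool)
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                # BFS flood fill of one connected component
                comp, q = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) > best.sum():
                    cur = np.zeros((h, w), dtype=bool)
                    for y, x in comp:
                        cur[y, x] = True
                    best = cur
    return best
```

Discarding small components suppresses spurious salient blobs (hair fragments, color-chart patches) before the mask is handed to the level-set evolution.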


Subjects
Dermoscopy/methods; Image Interpretation, Computer-Assisted/methods; Skin Neoplasms/diagnostic imaging; Algorithms; Databases, Factual; Humans; Skin/diagnostic imaging; Supervised Machine Learning