Results 1 - 12 of 12
1.
NPJ Precis Oncol ; 8(1): 5, 2024 Jan 06.
Article in English | MEDLINE | ID: mdl-38184744

ABSTRACT

Drug sensitivity prediction models can aid in personalising cancer therapy, biomarker discovery, and drug design. Such models require survival data from randomised controlled trials, which can be time-consuming and expensive. In this proof-of-concept study, we demonstrate for the first time that deep learning can link histological patterns in whole slide images (WSIs) of Haematoxylin & Eosin (H&E) stained breast cancer sections with drug sensitivities inferred from cell lines. We employ patient-wise drug sensitivities imputed from gene expression-based mapping of drug effects on cancer cell lines to train a deep learning model that predicts patients' sensitivity to multiple drugs from WSIs. We show that it is possible to use routine WSIs to predict the drug sensitivity profile of a cancer patient for a number of approved and experimental drugs. We also show that the proposed approach can identify cellular and histological patterns associated with drug sensitivity profiles of cancer patients.
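
A minimal sketch of the slide-level regression idea above, assuming patch embeddings have already been extracted by a pretrained CNN; the class name, feature dimensions, mean-pooling aggregation and placeholder tensors are illustrative assumptions, not the authors' architecture.

```python
# Illustrative sketch only: regress imputed drug-sensitivity scores from a bag
# of WSI patch embeddings. Names and the simple mean-pooling aggregation are
# assumptions for demonstration, not the published model.
import torch
import torch.nn as nn

class DrugSensitivityRegressor(nn.Module):
    def __init__(self, embed_dim: int = 512, n_drugs: int = 10):
        super().__init__()
        # In practice the embeddings would come from a pretrained CNN applied
        # to H&E patches; here we model only the slide-level head.
        self.head = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.ReLU(), nn.Linear(256, n_drugs)
        )

    def forward(self, patch_embeddings: torch.Tensor) -> torch.Tensor:
        # patch_embeddings: (n_patches, embed_dim) for one slide
        slide_embedding = patch_embeddings.mean(dim=0)  # naive average pooling
        return self.head(slide_embedding)               # one score per drug

model = DrugSensitivityRegressor()
patches = torch.randn(200, 512)   # placeholder patch features for one WSI
targets = torch.randn(10)         # imputed, cell-line-derived sensitivities
loss = nn.functional.mse_loss(model(patches), targets)
loss.backward()
```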

2.
Med Image Anal ; 92: 103047, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38157647

ABSTRACT

Nuclear detection, segmentation and morphometric profiling are essential in helping us further understand the relationship between histology and patient outcome. To drive innovation in this area, we set up a community-wide challenge using the largest available dataset of its kind to assess nuclear segmentation and cellular composition. Our challenge, named CoNIC, stimulated the development of reproducible algorithms for cellular recognition with real-time result inspection on public leaderboards. We conducted an extensive post-challenge analysis based on the top-performing models using 1,658 whole-slide images of colon tissue. With around 700 million detected nuclei per model, associated features were used for dysplasia grading and survival analysis, where we demonstrated that the challenge's improvement over the previous state-of-the-art led to significant boosts in downstream performance. Our findings also suggest that eosinophils and neutrophils play an important role in the tumour microenvironment. We release challenge models and WSI-level results to foster the development of further methods for biomarker discovery.
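
To make the notion of "cellular composition" concrete, the short sketch below counts detected nuclei per class and converts the counts into per-image fractions; the class names are placeholders rather than the exact challenge taxonomy.

```python
# Illustrative sketch: derive cellular composition (per-class counts and
# fractions) from a list of per-nucleus class predictions. The class labels
# here are placeholders, not necessarily the challenge's exact label set.
from collections import Counter

def cellular_composition(nucleus_classes):
    counts = Counter(nucleus_classes)
    total = sum(counts.values())
    fractions = {cls: n / total for cls, n in counts.items()} if total else {}
    return counts, fractions

detected = ["epithelial", "lymphocyte", "neutrophil", "epithelial", "eosinophil"]
counts, fractions = cellular_composition(detected)
print(counts)     # per-class nucleus counts for one image
print(fractions)  # per-class fractions, e.g. {'epithelial': 0.4, ...}
```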


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Cell Nucleus/pathology , Histological Techniques/methods
3.
Med Image Anal ; 85: 102743, 2023 04.
Article in English | MEDLINE | ID: mdl-36702037

ABSTRACT

Diagnostic, prognostic and therapeutic decision-making of cancer in pathology clinics can now be carried out based on analysis of multi-gigapixel tissue images, also known as whole-slide images (WSIs). Recently, deep convolutional neural networks (CNNs) have been proposed to derive unsupervised WSI representations; these are attractive as they rely less on expert annotation, which is cumbersome. However, a major trade-off is that higher predictive power generally comes at the cost of interpretability, posing a challenge to their clinical use where transparency in decision-making is generally expected. To address this challenge, we present a handcrafted framework based on deep CNNs for constructing holistic WSI-level representations. Building on recent findings about the internal workings of the Transformer in the domain of natural language processing, we break down its processes and handcraft them into a more transparent framework that we term the Handcrafted Histological Transformer (H2T). Based on our experiments involving various datasets consisting of a total of 10,042 WSIs, the results demonstrate that H2T-based holistic WSI-level representations offer competitive performance compared to recent state-of-the-art methods and can be readily utilized for various downstream analysis tasks. Finally, our results demonstrate that the H2T framework can be up to 14 times faster than the Transformer models.
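
The sketch below illustrates the general prototype-based idea of building a holistic WSI-level representation from patch features: cluster patch features into prototypical patterns, then summarise each slide against those prototypes. It is a simplified stand-in under those assumptions, not the H2T method itself.

```python
# Simplified illustration of a prototype-based WSI representation: cluster
# patch features into "prototypical patterns", then describe each slide by the
# mean feature of its patches assigned to each prototype. Not the H2T code.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
all_patch_feats = rng.normal(size=(5000, 128))   # pooled features from many WSIs

n_prototypes = 8
kmeans = KMeans(n_clusters=n_prototypes, n_init=10, random_state=0)
kmeans.fit(all_patch_feats)

def wsi_representation(patch_feats: np.ndarray) -> np.ndarray:
    """Concatenate per-prototype mean features into one holistic WSI vector."""
    assignments = kmeans.predict(patch_feats)
    parts = []
    for k in range(n_prototypes):
        members = patch_feats[assignments == k]
        parts.append(members.mean(axis=0) if len(members)
                     else np.zeros(patch_feats.shape[1]))
    return np.concatenate(parts)

one_slide = rng.normal(size=(300, 128))          # patch features for a single WSI
print(wsi_representation(one_slide).shape)       # (n_prototypes * 128,)
```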


Subject(s)
Histology , Neural Networks, Computer , Humans , Histology/instrumentation
4.
Med Image Anal ; 83: 102685, 2023 01.
Article in English | MEDLINE | ID: mdl-36410209

ABSTRACT

The recent surge in performance for image analysis of digitised pathology slides can largely be attributed to the advances in deep learning. Deep models can be used to initially localise various structures in the tissue and hence facilitate the extraction of interpretable features for biomarker discovery. However, these models are typically trained for a single task and therefore scale poorly as we wish to adapt the model for an increasing number of different tasks. Also, supervised deep learning models are very data-hungry and therefore rely on large amounts of training data to perform well. In this paper, we present a multi-task learning approach for segmentation and classification of nuclei, glands, lumina and different tissue regions that leverages data from multiple independent data sources. While ensuring that our tasks are aligned by the same tissue type and resolution, we enable meaningful simultaneous prediction with a single network. As a result of feature sharing, we also show that the learned representation can be used to improve the performance of additional tasks via transfer learning, including nuclear classification and signet ring cell detection. As part of this work, we train our Cerberus model on a large amount of data, consisting of over 600 thousand objects for segmentation and 440 thousand patches for classification. We use our approach to process 599 colorectal whole-slide images from TCGA, where we localise 377 million nuclei, 900 thousand glands and 2.1 million lumina. We make this resource available to remove a major barrier in the development of explainable models for computational pathology.
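
A minimal sketch of the multi-task layout described above, with one shared encoder feeding several task-specific heads; the toy architecture, channel counts and task list are assumptions for illustration, not the Cerberus model.

```python
# Minimal multi-task sketch: a shared encoder with one decoder head per task.
# The toy architecture and task list are illustrative, not the Cerberus model.
import torch
import torch.nn as nn

class MultiTaskSegmenter(nn.Module):
    def __init__(self, tasks=None):
        super().__init__()
        tasks = tasks or {"nuclei": 2, "glands": 2, "lumina": 2, "tissue": 5}
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # One lightweight head per task; all heads share the encoder features.
        self.heads = nn.ModuleDict(
            {name: nn.Conv2d(64, n_classes, 1) for name, n_classes in tasks.items()}
        )

    def forward(self, x):
        feats = self.encoder(x)
        return {name: head(feats) for name, head in self.heads.items()}

model = MultiTaskSegmenter()
out = model(torch.randn(1, 3, 256, 256))
print({k: v.shape for k, v in out.items()})  # per-task logit maps
```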


Subject(s)
Biomedical Research , Humans
5.
Commun Med (Lond) ; 2: 120, 2022.
Article in English | MEDLINE | ID: mdl-36168445

ABSTRACT

Background: Computational pathology has seen rapid growth in recent years, driven by advanced deep-learning algorithms. Due to the sheer size and complexity of multi-gigapixel whole-slide images, to the best of our knowledge, there is no open-source software library providing a generic end-to-end API for pathology image analysis using best practices. Most researchers have designed custom pipelines from the bottom up, restricting the development of advanced algorithms to specialist users. To help overcome this bottleneck, we present TIAToolbox, a Python toolbox designed to make computational pathology accessible to computational, biomedical, and clinical researchers. Methods: By creating modular and configurable components, we enable the implementation of computational pathology algorithms in a way that is easy to use, flexible and extensible. We consider common sub-tasks including reading whole slide image data, patch extraction, stain normalization and augmentation, model inference, and visualization. For each of these steps, we provide a user-friendly application programming interface for commonly used methods and models. Results: We demonstrate the use of the interface to construct a full computational pathology deep-learning pipeline. We show, with the help of examples, how state-of-the-art deep-learning algorithms can be reimplemented in a streamlined manner using our library with minimal effort. Conclusions: We provide a usable and adaptable library with efficient, cutting-edge, and unit-tested tools for data loading, pre-processing, model inference, post-processing, and visualization. This enables a range of users to easily build upon recent deep-learning developments in the computational pathology literature.
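
The snippet below sketches the kind of slide-reading and patch-extraction workflow the library exposes. It follows the publicly documented WSIReader interface, but module paths, argument names and defaults may differ between TIAToolbox versions, so treat it as an approximation and consult the official documentation; the file path is a placeholder.

```python
# Approximate usage sketch of TIAToolbox's WSI reading API. Calls follow the
# public documentation but may vary across versions; "slide.svs" is a
# placeholder path.
from tiatoolbox.wsicore.wsireader import WSIReader

wsi = WSIReader.open("slide.svs")   # factory method picks a suitable backend

# Low-resolution overview of the slide.
thumbnail = wsi.slide_thumbnail(resolution=1.25, units="power")

# A 256 x 256 patch read at 0.5 microns-per-pixel around a given location.
patch = wsi.read_rect(
    location=(10_000, 10_000),      # (x, y) in baseline coordinates
    size=(256, 256),
    resolution=0.5,
    units="mpp",
)
print(thumbnail.shape, patch.shape)
```

The same reader object can then feed the library's patch extraction, stain normalization and model inference components, which is the end-to-end pipeline the abstract describes.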

6.
IEEE J Biomed Health Inform ; 25(2): 348-357, 2021 02.
Article in English | MEDLINE | ID: mdl-32396112

ABSTRACT

Grading for cancer, based upon the degree of cancer differentiation, plays a major role in describing the characteristics and behavior of the cancer and determining the treatment plan for patients. The grade is determined by a subjective and qualitative assessment of tissues under the microscope, which suffers from high inter- and intra-observer variability among pathologists. Digital pathology offers an alternative means to automate the procedure as well as to improve the accuracy and robustness of cancer grading. However, most such methods tend to mimic or reproduce cancer grades determined by human experts. Herein, we propose an alternative, quantitative means of assessing and characterizing cancers in an unsupervised manner. The proposed method utilizes conditional generative adversarial networks to characterize tissues. The proposed method is evaluated using whole slide images (WSIs) and tissue microarrays (TMAs) of colorectal cancer specimens. The results suggest that the proposed method holds potential for quantifying cancer characteristics and improving cancer pathology.
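
For readers unfamiliar with the building block, the sketch below shows a bare-bones conditional GAN in which both the generator and the discriminator are conditioned on a tissue label; it only illustrates the general mechanism and is not the architecture used in the paper.

```python
# Bare-bones conditional GAN sketch (label-conditioned generator/discriminator)
# to illustrate the general mechanism only; not the paper's architecture.
import torch
import torch.nn as nn

Z_DIM, N_CLASSES, IMG_DIM = 64, 4, 32 * 32 * 3

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_CLASSES, N_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + N_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh(),
        )

    def forward(self, z, y):
        return self.net(torch.cat([z, self.embed(y)], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_CLASSES, N_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM + N_CLASSES, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, self.embed(y)], dim=1))

G, D = Generator(), Discriminator()
z = torch.randn(8, Z_DIM)
labels = torch.randint(0, N_CLASSES, (8,))
fake_images = G(z, labels)            # flattened synthetic tissue patches
fake_scores = D(fake_images, labels)  # discriminator logits for the fakes
print(fake_images.shape, fake_scores.shape)
```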


Subject(s)
Image Processing, Computer-Assisted , Neoplasms , Humans
7.
IEEE Trans Med Imaging ; 40(12): 3413-3423, 2021 12.
Article in English | MEDLINE | ID: mdl-34086562

ABSTRACT

Detecting various types of cells in and around the tumor matrix holds special significance in characterizing the tumor micro-environment for cancer prognostication and research. Automating the tasks of detecting, segmenting, and classifying nuclei can free up the pathologists' time for higher-value tasks and reduce errors due to fatigue and subjectivity. To encourage the computer vision research community to develop and test algorithms for these tasks, we prepared a large and diverse dataset of nucleus boundary annotations and class labels. The dataset has over 46,000 nuclei from 37 hospitals, 71 patients, four organs, and four nucleus types. We also organized a challenge around this dataset as a satellite event at the International Symposium on Biomedical Imaging (ISBI) in April 2020. The challenge saw wide participation from across the world, and the top methods were able to match inter-human concordance for the challenge metric. In this paper, we summarize the dataset and the key findings of the challenge, including the commonalities and differences between the methods developed by various participants. We have released the MoNuSAC2020 dataset to the public.


Subject(s)
Algorithms , Cell Nucleus , Humans , Image Processing, Computer-Assisted
8.
IEEE Trans Med Imaging ; 39(5): 1380-1391, 2020 05.
Article in English | MEDLINE | ID: mdl-31647422

ABSTRACT

Generalized nucleus segmentation techniques can contribute greatly to reducing the time to develop and validate visual biomarkers for new digital pathology datasets. We summarize the results of the MoNuSeg 2018 Challenge, whose objective was to develop generalizable nuclei segmentation techniques in digital pathology. The challenge was an official satellite event of the MICCAI 2018 conference in which 32 teams with more than 80 participants from geographically diverse institutes participated. Contestants were given a training set with 30 images from seven organs with annotations of 21,623 individual nuclei. A test dataset with 14 images taken from seven organs, including two organs that did not appear in the training set, was released without annotations. Entries were evaluated based on the average Aggregated Jaccard Index (AJI) on the test set to prioritize accurate instance segmentation as opposed to mere semantic segmentation. More than half the teams that completed the challenge outperformed a previous baseline. Among the trends observed that contributed to increased accuracy were the use of color normalization as well as heavy data augmentation. Additionally, fully convolutional networks inspired by variants of U-Net, FCN, and Mask R-CNN were popularly used, typically based on ResNet or VGG base architectures. Watershed segmentation on predicted semantic segmentation maps was a popular post-processing strategy. Several of the top techniques compared favorably to an individual human annotator and can be used with confidence for nuclear morphometrics.
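
Because the Aggregated Jaccard Index is central to the evaluation above, a compact reference sketch for labelled instance masks (0 = background) follows; it implements the standard AJI definition, though the official challenge evaluation code may differ in minor details such as tie-breaking.

```python
# Compact sketch of the Aggregated Jaccard Index (AJI) for labelled instance
# masks (0 = background). Follows the standard definition; the official
# challenge code may differ in minor details.
import numpy as np

def aggregated_jaccard_index(gt: np.ndarray, pred: np.ndarray) -> float:
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [i for i in np.unique(pred) if i != 0]
    used = set()
    intersection_sum, union_sum = 0, 0
    for g in gt_ids:
        g_mask = gt == g
        # If no prediction overlaps this nucleus, its whole area counts in U.
        best_iou, best_p, best_inter, best_union = 0.0, None, 0, int(g_mask.sum())
        for p in pred_ids:
            p_mask = pred == p
            inter = int(np.logical_and(g_mask, p_mask).sum())
            if inter == 0:
                continue
            union = int(np.logical_or(g_mask, p_mask).sum())
            iou = inter / union
            if iou > best_iou:
                best_iou, best_p, best_inter, best_union = iou, p, inter, union
        intersection_sum += best_inter
        union_sum += best_union
        if best_p is not None:
            used.add(best_p)
    # Unmatched predictions are penalised by adding their area to the union.
    for p in pred_ids:
        if p not in used:
            union_sum += int((pred == p).sum())
    return intersection_sum / union_sum if union_sum else 0.0

gt = np.zeros((8, 8), dtype=int); gt[1:4, 1:4] = 1; gt[5:7, 5:7] = 2
pred = np.zeros((8, 8), dtype=int); pred[1:4, 2:5] = 1; pred[5:7, 5:8] = 2
print(round(aggregated_jaccard_index(gt, pred), 3))
```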


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Cell Nucleus , Humans
9.
Comput Methods Programs Biomed ; 173: 119-129, 2019 May.
Article in English | MEDLINE | ID: mdl-31046986

ABSTRACT

BACKGROUND AND OBJECTIVE: Segmenting different tissue components in histopathological images is of great importance for analyzing tissues and tumor environments. In recent years, the encoder-decoder family of convolutional neural networks has been increasingly adopted to develop automated segmentation tools. While the encoder has been the main focus of most investigations, the role of the decoder has so far not been well studied and understood. Herein, we propose an improved decoder design for the segmentation of epithelium and stroma components in histopathology images. METHODS: The proposed decoder is built upon a multi-path layout and dense shortcut connections between layers to maximize the learning and inference capability. Equipped with the proposed decoder, neural networks are built using three types of encoders (VGG, ResNet and pre-activated ResNet). To assess the proposed method, breast and prostate tissue datasets are utilized, including 108 and 52 hematoxylin and eosin (H&E) breast tissue images and 224 H&E prostate tissue images. RESULTS: Combining the pre-activated ResNet encoder and the proposed decoder, we achieved a pixel-wise accuracy (ACC) of 0.9122, a Rand index (RAND) of 0.8398, an area under the receiver operating characteristic curve (AUC) of 0.9716, a Dice coefficient for stroma (DICE_STR) of 0.9092 and a Dice coefficient for epithelium (DICE_EPI) of 0.9150 on the breast tissue dataset. The same network obtained an ACC of 0.9074, a RAND of 0.8320, an AUC of 0.9719, a DICE_EPI of 0.9021 and a DICE_STR of 0.9121 on the prostate dataset. CONCLUSIONS: Overall, the experimental results confirm that the proposed decoder is superior to a conventional decoder when paired with the same encoders. Therefore, the proposed decoder could aid in improving tissue analysis in histopathology images.
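
For reference, a small sketch of two of the reported metrics, pixel-wise accuracy and the Dice coefficient on binary masks, follows; the toy masks are placeholders.

```python
# Reference sketch of two of the reported metrics: pixel-wise accuracy and the
# Dice coefficient, computed on binary masks (e.g. epithelium = 1, stroma = 0).
import numpy as np

def pixel_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    return float((pred == gt).mean())

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps))

gt = np.array([[1, 1, 0], [0, 1, 0]])
pred = np.array([[1, 0, 0], [0, 1, 1]])
print(pixel_accuracy(pred, gt), dice_coefficient(pred, gt))
```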


Subject(s)
Breast/diagnostic imaging , Histology , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Prostate/diagnostic imaging , Algorithms , Area Under Curve , Epithelium/diagnostic imaging , Female , Humans , Male , Pattern Recognition, Automated , ROC Curve , Reproducibility of Results , Software
10.
Med Image Anal ; 58: 101563, 2019 12.
Article in English | MEDLINE | ID: mdl-31561183

ABSTRACT

Nuclear segmentation and classification within Haematoxylin & Eosin stained histology images is a fundamental prerequisite in the digital pathology work-flow. The development of automated methods for nuclear segmentation and classification enables the quantitative analysis of tens of thousands of nuclei within a whole-slide pathology image, opening up possibilities of further analysis of large-scale nuclear morphometry. However, automated nuclear segmentation and classification is faced with a major challenge in that there are several different types of nuclei, some of them exhibiting large intra-class variability, such as the nuclei of tumour cells. Additionally, some of the nuclei are often clustered together. To address these challenges, we present a novel convolutional neural network for simultaneous nuclear segmentation and classification that leverages the instance-rich information encoded within the vertical and horizontal distances of nuclear pixels to their centres of mass. These distances are then utilised to separate clustered nuclei, resulting in an accurate segmentation, particularly in areas with overlapping instances. Then, for each segmented instance, the network predicts the type of nucleus via a dedicated up-sampling branch. We demonstrate state-of-the-art performance compared to other methods on multiple independent multi-tissue histology image datasets. As part of this work, we introduce a new dataset of Haematoxylin & Eosin stained colorectal adenocarcinoma image tiles, containing 24,319 exhaustively annotated nuclei with associated class labels.
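
The horizontal/vertical distance encoding described above can be sketched as follows: for each labelled nucleus, every pixel stores its x and y offset from that instance's centre of mass, normalised per instance. This is a simplified construction of the regression targets, not the published implementation.

```python
# Simplified construction of horizontal/vertical distance-map targets from a
# labelled instance mask (0 = background): each nucleus pixel stores its offset
# from the instance centre of mass, normalised per instance to roughly [-1, 1].
# Illustrative only, not the published code.
import numpy as np

def hv_maps(instance_mask: np.ndarray):
    h_map = np.zeros(instance_mask.shape, dtype=np.float32)
    v_map = np.zeros(instance_mask.shape, dtype=np.float32)
    for inst_id in np.unique(instance_mask):
        if inst_id == 0:
            continue
        ys, xs = np.nonzero(instance_mask == inst_id)
        cx, cy = xs.mean(), ys.mean()
        dx, dy = xs - cx, ys - cy
        # Normalise each instance independently.
        h_map[ys, xs] = dx / (np.abs(dx).max() or 1.0)
        v_map[ys, xs] = dy / (np.abs(dy).max() or 1.0)
    return h_map, v_map

mask = np.zeros((6, 6), dtype=int)
mask[1:4, 1:4] = 1   # one 3x3 nucleus
mask[4:6, 4:6] = 2   # one 2x2 nucleus
h, v = hv_maps(mask)
print(h[2], v[:, 2])  # a row/column passing through the first nucleus
```

Gradients of these maps are high at nucleus boundaries, which is what allows touching nuclei to be split into separate instances.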


Subject(s)
Cell Nucleus/pathology , Cell Nucleus/ultrastructure , Histological Techniques/methods , Neural Networks, Computer , Adenocarcinoma/pathology , Colorectal Neoplasms/pathology , Datasets as Topic , Humans , Staining and Labeling
11.
Article in English | MEDLINE | ID: mdl-31001524

ABSTRACT

High-resolution microscopy images of tissue specimens provide detailed information about the morphology of normal and diseased tissue. Image analysis of tissue morphology can help cancer researchers develop a better understanding of cancer biology. Segmentation of nuclei and classification of tissue images are two common tasks in tissue image analysis. Development of accurate and efficient algorithms for these tasks is a challenging problem because of the complexity of tissue morphology and tumor heterogeneity. In this paper, we present two computer algorithms: one designed for segmentation of nuclei and the other for classification of whole slide tissue images. The segmentation algorithm implements a multiscale deep residual aggregation network to accurately segment nuclear material and then separate clumped nuclei into individual nuclei. The classification algorithm initially carries out patch-level classification via a deep learning method, then patch-level statistical and morphological features are used as input to a random forest regression model for whole slide image classification. The segmentation and classification algorithms were evaluated in the MICCAI 2017 Digital Pathology challenge. The segmentation algorithm achieved an accuracy score of 0.78. The classification algorithm achieved an accuracy score of 0.81. These scores were the highest in the challenge.
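
The two-stage classification strategy above (patch-level CNN outputs aggregated into slide-level features for a random forest) can be sketched as below; the feature set, the placeholder data and the use of a classifier rather than the paper's regression variant are simplifying assumptions.

```python
# Sketch of the second stage only: aggregate patch-level class probabilities
# into slide-level statistics and feed them to a random forest. The feature set
# and the classifier (vs. the paper's regression variant) are simplifications.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def slide_features(patch_probs: np.ndarray) -> np.ndarray:
    """patch_probs: (n_patches, n_classes) softmax outputs from a patch CNN."""
    predicted = patch_probs.argmax(axis=1)
    class_fractions = np.bincount(predicted, minlength=patch_probs.shape[1]) / len(predicted)
    return np.concatenate([patch_probs.mean(axis=0),
                           patch_probs.std(axis=0),
                           class_fractions])

# Placeholder data: 40 slides, each with 100 patches and 4 patch classes.
X = np.stack([slide_features(rng.dirichlet(np.ones(4), size=100)) for _ in range(40)])
y = rng.integers(0, 2, size=40)   # placeholder slide-level labels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```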

12.
Med Image Anal ; 56: 122-139, 2019 08.
Article in English | MEDLINE | ID: mdl-31226662

ABSTRACT

Breast cancer is the most common invasive cancer in women, affecting more than 10% of women worldwide. Microscopic analysis of a biopsy remains one of the most important methods to diagnose the type of breast cancer. This requires specialized analysis by pathologists, in a task that i) is highly time- and cost-consuming and ii) often leads to non-consensual results. The relevance and potential of automatic classification algorithms using hematoxylin-eosin stained histopathological images have already been demonstrated, but the reported results are still sub-optimal for clinical use. With the goal of advancing the state-of-the-art in automatic classification, the Grand Challenge on BreAst Cancer Histology images (BACH) was organized in conjunction with the 15th International Conference on Image Analysis and Recognition (ICIAR 2018). BACH aimed at the classification and localization of clinically relevant histopathological classes in microscopy and whole-slide images from a large annotated dataset, specifically compiled and made publicly available for the challenge. Following a positive response from the scientific community, a total of 64 submissions, out of 677 registrations, effectively entered the competition. The submitted algorithms improved the state-of-the-art in automatic classification of breast cancer with microscopy images to an accuracy of 87%. Convolutional neural networks were the most successful methodology in the BACH challenge. Detailed analysis of the collective results allowed the identification of remaining challenges in the field and recommendations for future developments. The BACH dataset remains publicly available to promote further improvements to the field of automatic classification in digital pathology.


Subject(s)
Breast Neoplasms/pathology , Neural Networks, Computer , Pattern Recognition, Automated , Algorithms , Female , Humans , Microscopy , Staining and Labeling