1.
Med Image Comput Comput Assist Interv ; 14225: 704-713, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37841230

ABSTRACT

We introduce a new AI-ready computational pathology dataset containing restained and co-registered digitized images from eight head-and-neck squamous cell carcinoma patients. Specifically, the same tumor sections were stained first with the expensive multiplex immunofluorescence (mIF) assay and then restained with the cheaper multiplex immunohistochemistry (mIHC). This is the first public dataset to demonstrate the equivalence of these two staining methods, which in turn enables several use cases: because of the equivalence, our cheaper mIHC staining protocol can offset the need for expensive mIF staining/scanning, which requires highly skilled lab technicians. As opposed to the subjective and error-prone immune cell annotations from individual pathologists (disagreement > 50%) that drive SOTA deep learning approaches, this dataset provides objective immune and tumor cell annotations via mIF/mIHC restaining for more reproducible and accurate characterization of the tumor immune microenvironment (e.g., for immunotherapy). We demonstrate the effectiveness of this dataset in three use cases: (1) IHC quantification of CD3/CD8 tumor-infiltrating lymphocytes via style transfer, (2) virtual translation of cheap mIHC stains to more expensive mIF stains, and (3) virtual tumor/immune cellular phenotyping on standard hematoxylin images. The dataset is available at https://github.com/nadeemlab/DeepLIIF.
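Once positive-cell masks are available, the CD3/CD8 quantification of use case (1) reduces to counting marker-positive pixels (or cells) inside the tumor region. A minimal sketch; the mask arrays and the density definition here are illustrative assumptions, not part of the dataset's tooling:

```python
import numpy as np

def til_density(positive_mask: np.ndarray, tumor_mask: np.ndarray) -> float:
    """Fraction of tumor-region pixels positive for a marker (e.g. CD3).

    positive_mask, tumor_mask: boolean arrays of identical shape.
    """
    in_tumor = tumor_mask.sum()
    if in_tumor == 0:
        return 0.0
    return float((positive_mask & tumor_mask).sum() / in_tumor)

# Toy 4x4 example: 2 of the 8 tumor pixels are CD3-positive.
tumor = np.zeros((4, 4), dtype=bool)
tumor[:2, :] = True                     # top half is tumor
cd3 = np.zeros((4, 4), dtype=bool)
cd3[0, :2] = True                       # two positive pixels inside the tumor
print(til_density(cd3, tumor))          # → 0.25
```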

2.
Comput Med Imaging Graph ; 106: 102202, 2023 06.
Article in English | MEDLINE | ID: mdl-36857953

ABSTRACT

Oral squamous cell carcinoma (OSCC) is the most prevalent type of oral cancer across the globe. Histopathology examination is the gold standard for OSCC assessment, where stained histopathology slides help in studying and analyzing cell structures under a microscope to determine the stage and grade of OSCC. One popular staining method, H&E staining, is used to produce differential coloration, highlight key tissue features, and improve contrast, which makes cell analysis easier. However, stained H&E histopathology images exhibit inter- and intra-variation due to staining techniques, incubation times, and staining reagents. These variations negatively impact the accuracy and development of computer-aided diagnosis (CAD) and machine learning algorithms. A pre-processing procedure called stain normalization must be employed to reduce the negative impacts of stain variance. Numerous state-of-the-art stain normalization methods have been introduced; however, a robust multi-domain stain normalization approach is still required because, in a real-world situation, OSCC histopathology images will include more than two color variations involving several domains. In this paper, a multi-domain stain translation method is proposed. The proposed method is an attention-gated generator based on a conditional generative adversarial network (cGAN) with a novel objective function that enforces color-distribution and perceptual resemblance between the source and target domains. Instead of using WSI scanner images like previous techniques, the proposed method is evaluated on OSCC histopathology images obtained by several conventional microscopes coupled with cameras. In inference mode, the proposed method receives the L* channel from the L*a*b* color space and generates the color-adapted G(a*b*) channels.
The proposed technique uses mappings learned during the training phase to translate the source domain to the target domain; the mappings are learned from the whole color distribution of the target domain rather than from a single reference image. The suggested technique outperforms four state-of-the-art methods in multi-domain OSCC histopathological translation, a claim supported by results of both quantitative and qualitative assessment.
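The inference-time wiring described above — keep the L* (lightness) channel, replace a*b* (color) with the generator's prediction — can be sketched as follows. `toy_generator` is a stand-in for the trained attention-gated cGAN, which this sketch does not reproduce:

```python
import numpy as np

def infer_ab(lab_image: np.ndarray, generator) -> np.ndarray:
    """Recolor an image by preserving its L* channel and substituting the
    a*b* channels predicted by the learned mapping."""
    L = lab_image[..., :1]              # structure/lightness, passed through
    ab_pred = generator(L)              # color channels from the mapping
    return np.concatenate([L, ab_pred], axis=-1)

def toy_generator(L: np.ndarray) -> np.ndarray:
    """Stand-in generator: shifts every pixel toward one fixed target color."""
    h, w, _ = L.shape
    return np.full((h, w, 2), [20.0, -15.0])   # constant a* = 20, b* = -15

lab = np.random.default_rng(0).uniform(0, 100, size=(8, 8, 3))
out = infer_ab(lab, toy_generator)
assert np.array_equal(out[..., 0], lab[..., 0])   # lightness is untouched
```

Because the lightness channel is carried through unchanged, tissue structure is preserved and only the color statistics are adapted to the target domain.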


Subject(s)
Carcinoma, Squamous Cell , Head and Neck Neoplasms , Mouth Neoplasms , Humans , Coloring Agents/chemistry , Carcinoma, Squamous Cell/diagnostic imaging , Squamous Cell Carcinoma of Head and Neck , Mouth Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted/methods , Color
3.
J Pathol Inform ; 14: 100195, 2023.
Article in English | MEDLINE | ID: mdl-36844704

ABSTRACT

Background: Deep learning tasks, which require large numbers of images, are widely applied in digital pathology. This poses challenges especially for supervised tasks, since manual image annotation is an expensive and laborious process. The situation deteriorates further when the images are highly variable. Coping with this problem requires methods such as image augmentation and synthetic image generation. In this regard, unsupervised stain translation via GANs has gained much attention recently, but a separate network must be trained for each pair of source and target domains. This work enables unsupervised many-to-many translation of histopathological stains with a single network while seeking to maintain the shape and structure of the tissues. Methods: StarGAN-v2 is adapted for unsupervised many-to-many stain translation of histopathology images of breast tissue. An edge detector is incorporated to encourage the network to maintain the shape and structure of the tissues and to produce an edge-preserving translation. Additionally, a subjective test is conducted with medical and technical experts in the field of digital pathology to evaluate the quality of the generated images and to verify that they are indistinguishable from real images. As a proof of concept, breast cancer classifiers are trained with and without the generated images to quantify the effect of augmentation with the synthesized images on classification accuracy. Results: The results show that adding an edge detector helps improve the quality of translated images and preserve the general structure of tissues. Quality control and subjective tests with our medical and technical experts show that real and artificial images cannot be distinguished, confirming that the synthetic images are technically plausible.
Moreover, this research shows that augmenting the training dataset with the outputs of the proposed stain translation method improves the accuracy of ResNet-50 and VGG-16 breast cancer classifiers by 8.0% and 9.3%, respectively. Conclusions: This research indicates that translation from an arbitrary source stain to other stains can be performed effectively within the proposed framework. The generated images are realistic and could be employed to train deep neural networks to improve their performance and cope with the problem of insufficient numbers of annotated images.
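An edge-preservation term of the kind described can be sketched by comparing Sobel edge maps of the real and translated images. This is an illustrative numpy version, not the paper's exact detector or loss weighting:

```python
import numpy as np

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Gradient magnitude via 3x3 Sobel filters (zero-padded, grayscale)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(img, 1)
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(3):                     # unrolled 3x3 cross-correlation
        for j in range(3):
            patch = p[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def edge_loss(real: np.ndarray, fake: np.ndarray) -> float:
    """L1 distance between edge maps: penalizes translations that distort
    tissue boundaries while leaving color changes unpunished."""
    return float(np.abs(sobel_edges(real) - sobel_edges(fake)).mean())

img = np.zeros((16, 16)); img[:, 8:] = 1.0       # image with one vertical edge
assert edge_loss(img, img) == 0.0                # identical structure: no loss
assert edge_loss(img, np.zeros((16, 16))) > 0.0  # a lost edge is penalized
```

A structure-only loss like this lets the generator recolor freely while being pushed to keep tissue boundaries where they were.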

4.
Comput Med Imaging Graph ; 105: 102185, 2023 04.
Article in English | MEDLINE | ID: mdl-36764189

ABSTRACT

Fibrosis is an inevitable stage in the development of chronic liver disease and has an irreplaceable role in characterizing its degree of progression. Histopathological diagnosis is the gold standard for the interpretation of fibrosis parameters. Conventional hematoxylin-eosin (H&E) staining can only reflect the gross structure of the tissue and the distribution of hepatocytes, while Masson trichrome can highlight specific types of collagen fiber structure, thus providing the structural information needed for fibrosis scoring. The high costs in time, money, and patient specimens, as well as the non-uniform preparation and staining process, make the conversion of existing H&E staining into virtual Masson trichrome staining an attractive solution for fibrosis evaluation. However, existing translation approaches fail to extract fiber features accurately enough, and the staining decoder is unable to converge due to the inconsistent colors of physical staining. In this work, we propose a prior-guided generative adversarial network, based on unpaired data, for effective generation of Masson trichrome stained images from the corresponding H&E stained images. Trained on a small training set, our method takes full advantage of prior knowledge to set up better constraints on both the encoder and the decoder. Experiments indicate the superior performance of our method, which surpasses previous approaches. For various liver diseases, our results demonstrate a high correlation between the staging of real and virtual stains (ρ=0.82; 95% CI: 0.73-0.89). In addition, our fine-tuning strategy is able to standardize the staining color and reduce the memory and computational burden, so it can be employed in clinical assessment.
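The reported agreement (ρ=0.82) is a rank correlation between fibrosis staging on real and virtual stains. A minimal Spearman sketch with hypothetical stage values; the study's actual scores are not reproduced here, and ties are not handled:

```python
import numpy as np

def spearman_rho(x, y) -> float:
    """Spearman rank correlation between two score lists (no tie handling,
    which suffices for this illustration)."""
    rx = np.argsort(np.argsort(x)).astype(float)   # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)   # ranks of y
    rx -= rx.mean(); ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx**2).sum() * (ry**2).sum()))

# Hypothetical fibrosis stages (F0-F4) read from real vs. virtual Masson stains.
real_stage    = [0, 1, 2, 3, 4]
virtual_stage = [0, 1, 3, 2, 4]   # one adjacent pair swapped
print(spearman_rho(real_stage, virtual_stage))   # → 0.9
```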


Subject(s)
Coloring Agents , Humans , Staining and Labeling , Eosine Yellowish-(YS) , Fibrosis
5.
J Pathol Inform ; 13: 100107, 2022.
Article in English | MEDLINE | ID: mdl-36268068

ABSTRACT

Background: In digital pathology, many image analysis tasks are challenged by the need for large and time-consuming manual data annotation to cope with various sources of variability in the image domain. Unsupervised domain adaptation based on image-to-image translation is gaining importance in this field by addressing variability without the manual overhead. Here, we tackle the variation among different histological stains through unsupervised stain-to-stain translation to enable stain-independent applicability of a deep learning segmentation model. Methods: We use CycleGANs for stain-to-stain translation in kidney histopathology and propose two novel approaches to improve translation effectiveness. First, we integrate a prior segmentation network into the CycleGAN for self-supervised, application-oriented optimization of the translation through semantic guidance; second, we incorporate extra channels into the translation output to implicitly separate artificial meta-information that is otherwise encoded to tackle underdetermined reconstructions. Results: The latter showed partially superior performance to the unmodified CycleGAN, but the former performed best in all stains, providing instance-level Dice scores between 78% and 92% for most kidney structures, such as glomeruli, tubules, and veins. However, CycleGANs showed only limited performance in the translation of other structures, e.g., arteries. Our study also found somewhat lower performance for all structures in all stains when compared to segmentation in the original stain. Conclusions: Our study suggests that, with current unsupervised technologies, it seems unlikely that "generally" applicable simulated stains can be produced.
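The Dice scores used above to evaluate segmentation after translation measure the overlap between predicted and ground-truth masks, 2|A∩B| / (|A|+|B|). A minimal sketch with toy masks (the per-instance matching used in the study is omitted):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 1.0 if total == 0 else float(2.0 * inter / total)

# Toy "glomerulus" masks: prediction overlaps 6 of 8 ground-truth pixels.
truth = np.zeros((6, 6), dtype=bool); truth[1:3, 1:5] = True   # 8 px
pred  = np.zeros((6, 6), dtype=bool); pred[1:3, 2:6]  = True   # 8 px, 6 shared
print(dice(pred, truth))   # → 0.75
```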
