Stain transfer using Generative Adversarial Networks and disentangled features.
Moghadam, Atefeh Ziaei; Azarnoush, Hamed; Seyyedsalehi, Seyyed Ali; Havaei, Mohammad.
Affiliation
  • Moghadam AZ; Department of Biomedical Engineering, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran.
  • Azarnoush H; Department of Biomedical Engineering, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran. Electronic address: azarnoush@aut.ac.ir.
  • Seyyedsalehi SA; Department of Biomedical Engineering, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran.
  • Havaei M; Imagia, Montreal, Canada.
Comput Biol Med; 142: 105219, 2022 Mar.
Article in En | MEDLINE | ID: mdl-35026572
ABSTRACT
With the digitization of histopathology, machine learning algorithms have been developed to assist pathologists. Color variation in histopathology images degrades the performance of these algorithms. Many models have been proposed to mitigate the impact of color variation by transferring histopathology images to a single stain style. Major shortcomings of existing approaches include manual feature extraction, bias toward a reference image, being limited to one-to-one style transfer, dependence on style labels for both source and target domains, and information loss. We propose two models that address these shortcomings. Our main novelty is combining Generative Adversarial Networks (GANs) with feature disentanglement. The models extract color-related and structural features with neural networks, so features are not hand-crafted. Disentangling these features lets our models perform many-to-one stain transformations while requiring only target-style labels, and, by exploiting GANs, they do not need a reference image. Our first model uses one network per stain style transformation, whereas the second uses a single network for many-to-many stain style transformations. We compare our models with six state-of-the-art models on the Mitosis-Atypia Dataset. Both proposed models achieve good results, but the second outperforms the others in terms of the Histogram Intersection Score (HIS). The proposed models were applied to three datasets to test their performance, and their efficacy was also evaluated on a classification task. Our second model obtained the best results in all experiments, with HIS of 0.88, 0.85, and 0.75 for the L-, a-, and b-channels on the Mitosis-Atypia Dataset and an accuracy of 90.3% for classification.
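As a rough illustration of the evaluation metric, the sketch below computes a per-channel Histogram Intersection Score in CIELAB space: each image is converted to Lab, a normalized histogram is built for each channel, and the score is the sum of bin-wise minima between the two histograms (1.0 for identical distributions). The bin count and the use of NumPy/scikit-image are assumptions made only for illustration; the abstract names the metric and the L, a, b channels but not an implementation.

    # Hedged sketch of a per-channel Histogram Intersection Score (HIS) in Lab space.
    # The bin count (64) and library choices are assumptions, not the paper's code.
    import numpy as np
    from skimage.color import rgb2lab

    def channel_histogram(channel, bins, value_range):
        hist, _ = np.histogram(channel, bins=bins, range=value_range)
        return hist / hist.sum()  # normalize so the intersection lies in [0, 1]

    def histogram_intersection_score(img_rgb_a, img_rgb_b, bins=64):
        """Return HIS for the L, a, and b channels of two RGB images."""
        lab_a, lab_b = rgb2lab(img_rgb_a), rgb2lab(img_rgb_b)
        # Typical value ranges for CIELAB channels.
        ranges = {"L": (0.0, 100.0), "a": (-128.0, 127.0), "b": (-128.0, 127.0)}
        scores = {}
        for idx, name in enumerate(["L", "a", "b"]):
            h_a = channel_histogram(lab_a[..., idx], bins, ranges[name])
            h_b = channel_histogram(lab_b[..., idx], bins, ranges[name])
            scores[name] = float(np.minimum(h_a, h_b).sum())  # bin-wise minima
        return scores

Under this convention, the reported scores of 0.88, 0.85, and 0.75 would be the intersections between the channel histograms of transferred images and images in the target stain style, with higher values indicating closer color distributions.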

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Image Processing, Computer-Assisted / Coloring Agents Study type: Prognostic_studies Language: En Publication year: 2022 Document type: Article
