Sparse Autoencoder for Unsupervised Nucleus Detection and Representation in Histopathology Images.
Hou, Le; Nguyen, Vu; Kanevsky, Ariel B; Samaras, Dimitris; Kurc, Tahsin M; Zhao, Tianhao; Gupta, Rajarsi R; Gao, Yi; Chen, Wenjin; Foran, David; Saltz, Joel H.
Affiliation
  • Hou L; Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA.
  • Nguyen V; Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA.
  • Kanevsky AB; Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA.
  • Samaras D; Montreal Institute for Learning Algorithms, University of Montreal, Montreal, Canada.
  • Kurc TM; Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA.
  • Zhao T; Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA.
  • Gupta RR; Dept. of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA.
  • Gao Y; Oak Ridge National Laboratory, Oak Ridge, TN, USA.
  • Chen W; Dept. of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA.
  • Foran D; Dept. of Pathology, Stony Brook University Medical Center, Stony Brook, NY, USA.
  • Saltz JH; Dept. of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA.
Pattern Recognit ; 86: 188-200, 2019 Feb.
Article in En | MEDLINE | ID: mdl-30631215
ABSTRACT
We propose a sparse Convolutional Autoencoder (CAE) for simultaneous nucleus detection and feature extraction in histopathology tissue images. Our CAE detects nuclei in tissue image patches and encodes them into sparse feature maps that capture both the location and appearance of each nucleus. A primary contribution of our work is the development of an unsupervised detection network that exploits the characteristics of histopathology image patches. The pretrained nucleus detection and feature extraction modules in our CAE can be fine-tuned for supervised learning in an end-to-end fashion. We evaluate our method on four datasets and achieve state-of-the-art results. In addition, we achieve comparable performance at only 5% of the fully-supervised annotation cost.
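The following is a minimal, hypothetical PyTorch sketch of the general idea described in the abstract, not the authors' released code: a convolutional encoder produces a sparsified nucleus-detection map plus gated appearance features, and a decoder reconstructs the input patch so the model can be trained with only a reconstruction loss. All layer sizes, the top-k sparsification rule, and the class name SparseCAE are illustrative assumptions.

# Hypothetical sketch of a sparse convolutional autoencoder for unsupervised
# nucleus detection; layer sizes and the top-k sparsity rule are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseCAE(nn.Module):
    def __init__(self, feat_channels=32, k=20):
        super().__init__()
        self.k = k  # number of spatial locations kept active (sparsity level)
        # Encoder: downsamples the RGB patch into a coarse feature grid.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Heads: one channel scoring "is there a nucleus here?", plus
        # appearance features describing what each detected nucleus looks like.
        self.detect_head = nn.Conv2d(64, 1, 1)
        self.feature_head = nn.Conv2d(64, feat_channels, 1)
        # Decoder: reconstructs the patch from the sparse, gated features.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def sparsify(self, scores):
        # Keep only the top-k spatial responses per image; zero out the rest.
        b, _, h, w = scores.shape
        flat = scores.view(b, -1)
        thresh = flat.topk(self.k, dim=1).values[:, -1:]  # k-th largest score
        mask = (flat >= thresh).float().view(b, 1, h, w)
        return torch.sigmoid(scores) * mask

    def forward(self, x):
        z = self.encoder(x)
        detection = self.sparsify(self.detect_head(z))   # sparse nucleus map
        features = self.feature_head(z) * detection      # gated appearance code
        recon = self.decoder(features)
        return recon, detection, features


if __name__ == "__main__":
    model = SparseCAE()
    patch = torch.rand(4, 3, 64, 64)           # a batch of RGB tissue patches
    recon, detection, features = model(patch)
    loss = F.mse_loss(recon, patch)             # purely unsupervised objective
    loss.backward()
    print(recon.shape, detection.shape, loss.item())

In this sketch, the pretrained encoder and detection/feature heads could later be fine-tuned with labels end-to-end, mirroring the supervised fine-tuning step mentioned in the abstract.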
Keywords

Full text: 1 Collections: 01-international Database: MEDLINE Study type: Diagnostic_studies Language: En Journal: Pattern Recognit Publication year: 2019 Document type: Article Country of affiliation: United States
