Super-resolution and segmentation deep learning for breast cancer histopathology image analysis.
Juhong, Aniwat; Li, Bo; Yao, Cheng-You; Yang, Chia-Wei; Agnew, Dalen W; Lei, Yu Leo; Huang, Xuefei; Piyawattanametha, Wibool; Qiu, Zhen.
Affiliation
  • Juhong A; Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48823, USA.
  • Li B; Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA.
  • Yao CY; Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48823, USA.
  • Yang CW; Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA.
  • Agnew DW; Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA.
  • Lei YL; Department of Biomedical Engineering, Michigan State University, East Lansing, MI 48824, USA.
  • Huang X; Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA.
  • Piyawattanametha W; Department of Chemistry, Michigan State University, East Lansing, MI 48824, USA.
  • Qiu Z; College of Veterinary Medicine, Michigan State University, East Lansing, MI 48824, USA.
Biomed Opt Express; 14(1): 18-36, 2023 Jan 01.
Article in En | MEDLINE | ID: mdl-36698665
ABSTRACT
Traditionally, a high-performance microscope with a large numerical aperture is required to acquire high-resolution images. However, such images are typically very large, making them inconvenient to manage, transfer across a computer network, or store in a limited computer storage system. As a result, image compression is commonly applied to reduce file size, at the cost of image resolution. Here, we demonstrate custom convolutional neural networks (CNNs) for super-resolution enhancement of low-resolution images and for characterization of cells and nuclei in hematoxylin and eosin (H&E)-stained breast cancer histopathological images. The super-resolution model combines generator and discriminator networks in a super-resolution generative adversarial network based on aggregated residual transformations (SRGAN-ResNeXt), intended to facilitate cancer diagnosis in low-resource settings. The results show substantial enhancement in image quality: the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of our network's outputs exceed 30 dB and 0.93, respectively, outperforming both bicubic interpolation and the well-known SRGAN deep-learning method. In addition, another custom CNN performs image segmentation on the high-resolution breast cancer images generated by our model, achieving an average Intersection over Union (IoU) of 0.869 and an average Dice similarity coefficient of 0.893 on the H&E segmentation results. Finally, we propose jointly trained SRGAN-ResNeXt and Inception U-Net models, which use the weights of the individually trained SRGAN-ResNeXt and Inception U-Net models as pre-trained weights for transfer learning. The jointly trained model's results are progressively improved and promising.
We anticipate that these custom CNNs can help compensate for the inaccessibility of advanced microscopes or whole slide imaging (WSI) systems by recovering high-resolution images from low-performance microscopes in remote, resource-constrained settings.
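As context for the metrics reported in the abstract (PSNR for super-resolution quality, IoU and Dice for segmentation overlap), the standard definitions can be sketched as below. This is an illustrative NumPy implementation under common conventions, not the authors' code; the `max_val` parameter and the binary-mask inputs are assumptions. SSIM is more involved (windowed means, variances, and covariances) and is typically computed with a library such as scikit-image rather than reimplemented.

```python
import numpy as np

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image.

    max_val is the assumed dynamic range of the images (255 for 8-bit data).
    """
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def iou(pred, target):
    """Intersection over Union between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

def dice(pred, target):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2.0 * inter / total if total else 1.0
```

In a pipeline like the one described, the segmentation metrics would be averaged over all test images to yield the reported mean IoU and mean Dice values.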