A deep learning-based framework for retinal fundus image enhancement.
Lee, Kang Geon; Song, Su Jeong; Lee, Soochahn; Yu, Hyeong Gon; Kim, Dong Ik; Lee, Kyoung Mu.
Affiliation
  • Lee KG; Department of Electrical and Computer Engineering, ASRI, Seoul National University, Seoul, South Korea.
  • Song SJ; Department of Ophthalmology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, South Korea.
  • Lee S; Biomedical Institute for Convergence (BICS), Sungkyunkwan University, Suwon, South Korea.
  • Yu HG; School of Electrical Engineering, Kookmin University, Seoul, South Korea.
  • Kim DI; Sky Eye Clinic, Seoul, South Korea.
  • Lee KM; HanGil Eye Hospital, Incheon, South Korea.
PLoS One ; 18(3): e0282416, 2023.
Article in En | MEDLINE | ID: mdl-36928209
PROBLEM: Low-quality fundus images with complex degradation can cause costly re-examinations of patients or inaccurate clinical diagnosis.

AIM: This study aims to create an automatic fundus macular image enhancement framework that improves low-quality fundus images and removes complex image degradation.

METHOD: We propose a new deep learning-based model that automatically enhances low-quality retinal fundus images suffering from complex degradation. We collected a dataset comprising 1068 pairs of high-quality (HQ) and low-quality (LQ) fundus images from the Kangbuk Samsung Hospital's health screening program and ophthalmology department from 2017 to 2019. We then used this dataset to develop data augmentation methods that simulate the major aspects of retinal image degradation and to design a customized convolutional neural network (CNN) architecture that enhances LQ images according to the nature of the degradation. Peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), r-value (linear index of fuzziness), and the proportion of ungradable fundus photographs before and after enhancement were calculated to assess the performance of the proposed model. A comparative evaluation was conducted on an external database and four open-source databases.

RESULTS: Evaluation on the external test dataset showed a significant increase in PSNR and SSIM compared with the original LQ images. Moreover, PSNR and SSIM increased by over 4 dB and 0.04, respectively, compared with previous state-of-the-art methods (P < 0.05). The proportion of ungradable fundus photographs decreased from 42.6% to 26.4% (P = 0.012).

CONCLUSION: Our enhancement process significantly improves LQ fundus images that suffer from complex degradation, and our customized CNN outperforms existing state-of-the-art methods. Overall, the framework can reduce re-examinations and improve diagnostic accuracy in clinical practice.
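The abstract describes data augmentation that simulates retinal image degradation on HQ images. A minimal sketch of what such degradation-simulating augmentation could look like is given below; the specific operations (Gaussian blur, a radial illumination gain field, additive noise) and all parameter ranges are illustrative assumptions, not the authors' pipeline.

# Hypothetical sketch of degradation-simulating augmentation for fundus images.
# Operations and parameter ranges are illustrative assumptions only.
import numpy as np
import cv2

def simulate_degradation(hq_img, rng=None):
    """Turn a high-quality fundus image (H x W x 3, uint8) into a synthetic
    low-quality counterpart via blur, uneven illumination, and noise."""
    rng = rng or np.random.default_rng()
    img = hq_img.astype(np.float32) / 255.0

    # 1) Out-of-focus / media-opacity blur (assumed Gaussian).
    sigma = rng.uniform(1.0, 3.0)
    img = cv2.GaussianBlur(img, (0, 0), sigma)

    # 2) Uneven illumination: multiply by a smooth radial gain field.
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = rng.uniform(0.3, 0.7) * h, rng.uniform(0.3, 0.7) * w
    r = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2) / np.sqrt(h ** 2 + w ** 2)
    gain = 1.0 - rng.uniform(0.2, 0.6) * r
    img *= gain[..., None]

    # 3) Sensor noise (assumed additive Gaussian).
    img += rng.normal(0.0, rng.uniform(0.005, 0.02), img.shape)

    return (np.clip(img, 0.0, 1.0) * 255).astype(np.uint8)

Applying this to each HQ image would yield synthetic (LQ, HQ) training pairs in addition to the 1068 collected pairs; how the authors actually parameterized their augmentation is not stated in the abstract.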
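For the evaluation metrics named in the abstract, PSNR and SSIM are standard and available in scikit-image; the r-value (linear index of fuzziness) sketch below uses the common textbook definition, and the paper's exact variant may differ.

# Hedged sketch of the reported image-quality metrics.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(enhanced, reference):
    """Both inputs are H x W x 3 uint8 arrays; returns (PSNR in dB, SSIM)."""
    psnr = peak_signal_noise_ratio(reference, enhanced, data_range=255)
    ssim = structural_similarity(reference, enhanced,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim

def linear_index_of_fuzziness(img):
    """Textbook definition: 2/(MN) * sum(min(mu, 1 - mu)) over normalized
    gray levels mu; a lower r-value indicates a crisper image."""
    gray = img.mean(axis=-1) / 255.0
    return 2.0 * np.minimum(gray, 1.0 - gray).mean()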
Full text: 1 Database: MEDLINE Main subject: Deep Learning Language: En Year of publication: 2023 Document type: Article