DCCMED-Net: Densely connected and concatenated multi Encoder-Decoder CNNs for retinal vessel extraction from fundus images.
Budak, Ümit; Cömert, Zafer; Çibuk, Musa; Sengür, Abdulkadir.
Affiliation
  • Budak Ü; Department of Electrical and Electronics Engineering, Bitlis Eren University, Bitlis, Turkey. Electronic address: ubudak@beu.edu.tr.
  • Cömert Z; Department of Software Engineering, Samsun University, Samsun, Turkey.
  • Çibuk M; Department of Computer Engineering, Bitlis Eren University, Bitlis, Turkey.
  • Sengür A; Department of Electrical and Electronics Engineering, Technology Faculty, Firat University, Elazig, Turkey.
Med Hypotheses ; 134: 109426, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31622926
ABSTRACT
Recent studies have shown that convolutional neural networks (CNNs) can be trained more accurately and efficiently, and made substantially deeper, when they include direct connections that transfer activation maps from layers close to the input to layers close to the output. Motivated by this observation, this study introduces a new CNN model, the Densely Connected and Concatenated Multi Encoder-Decoder (DCCMED) network, for retinal vessel extraction from fundus images. DCCMED concatenates multiple encoder-decoder CNNs and connects selected layers to the input of the subsequent encoder-decoder block in a feed-forward fashion. The DCCMED model has advantageous properties such as reducing pixel vanishing and encouraging feature reuse. A patch-based data augmentation strategy is also developed for training the proposed DCCMED model, which increases the generalization ability of the network. Experiments are carried out on two publicly available datasets, namely Digital Retinal Images for Vessel Extraction (DRIVE) and Structured Analysis of the Retina (STARE). Evaluation criteria such as sensitivity (Se), specificity (Sp), accuracy (Acc), Dice and area under the receiver operating characteristic curve (AUC) are used to verify the effectiveness of the proposed method. The results are compared with several supervised and unsupervised state-of-the-art methods based on AUC scores, and demonstrate that the proposed DCCMED model yields the best performance according to accuracy and AUC scores.
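For reference, below is a minimal sketch (not the authors' code) of how the evaluation criteria listed in the abstract are commonly computed for pixel-wise retinal vessel segmentation. The function name vessel_metrics, the NumPy/scikit-learn implementation, and the 0.5 binarization threshold are illustrative assumptions; the paper's own evaluation pipeline may differ.

import numpy as np
from sklearn.metrics import roc_auc_score

def vessel_metrics(prob_map, ground_truth, threshold=0.5):
    # prob_map     : float array of predicted vessel probabilities in [0, 1]
    # ground_truth : binary array (1 = vessel pixel, 0 = background)
    y_true = ground_truth.astype(bool).ravel()
    y_prob = prob_map.ravel()
    y_pred = y_prob >= threshold

    tp = np.sum(y_pred & y_true)      # vessel pixels correctly detected
    tn = np.sum(~y_pred & ~y_true)    # background pixels correctly rejected
    fp = np.sum(y_pred & ~y_true)     # background pixels marked as vessel
    fn = np.sum(~y_pred & y_true)     # vessel pixels missed

    se = tp / (tp + fn)                 # sensitivity (Se)
    sp = tn / (tn + fp)                 # specificity (Sp)
    acc = (tp + tn) / y_true.size       # pixel accuracy (Acc)
    dice = 2 * tp / (2 * tp + fp + fn)  # Dice overlap coefficient
    auc = roc_auc_score(y_true, y_prob) # threshold-free AUC on probabilities

    return {"Se": se, "Sp": sp, "Acc": acc, "Dice": dice, "AUC": auc}

Note that Se, Sp, Acc and Dice depend on the chosen binarization threshold, whereas AUC is computed directly from the probability map, which is why the abstract singles it out for cross-method comparison.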

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Retinal Vessels / Image Processing, Computer-Assisted / Diagnosis, Computer-Assisted / Deep Learning / Fundus Oculi Study type: Diagnostic_studies / Prognostic_studies Limits: Humans Language: En Journal: Med Hypotheses Publication year: 2020 Document type: Article