2D medical image segmentation via learning multi-scale contextual dependencies.
Pang, Shuchao; Du, Anan; Yu, Zhenmei; Orgun, Mehmet A.
Affiliation
  • Pang S; Department of Computing, Macquarie University, North Ryde, NSW 2109, Australia. Electronic address: shuchao.pang@hdr.mq.edu.au.
  • Du A; School of Electrical and Data Engineering, University of Technology Sydney, NSW 2007, Australia. Electronic address: anan.du@student.uts.edu.au.
  • Yu Z; School of Data and Computer Science, Shandong Women's University, Jinan 250014, China. Electronic address: zhenmei_yu@sdwu.edu.cn.
  • Orgun MA; Department of Computing, Macquarie University, North Ryde, NSW 2109, Australia; Faculty of Information Technology, Macau University of Science and Technology, Avenida Wai Long, Taipa 999078, Macau. Electronic address: mehmet.orgun@mq.edu.au.
Methods 202: 40-53, 2022 Jun.
Article in En | MEDLINE | ID: mdl-34029714
ABSTRACT
Automatic medical image segmentation plays an important role as a diagnostic aid in the identification of diseases and their treatment in clinical settings. Recently proposed methods based on Convolutional Neural Networks (CNNs) have demonstrated their potential in image processing tasks, including some medical image analysis tasks. Those methods can learn various feature representations with numerous weight-shared convolutional kernels; however, the missed diagnosis rate of regions of interest (ROIs) remains high in medical image segmentation. Two crucial but overlooked factors behind this shortcoming are the small size of ROIs in medical images and the limited contextual information captured by existing network models. In order to reduce the missed diagnosis rate of ROIs in medical images, we propose a new segmentation framework which enhances the representative capability of small ROIs (particularly in deep layers) and explicitly learns global contextual dependencies in multi-scale feature spaces. In particular, the local features and their global dependencies from each feature space are adaptively aggregated along both the spatial and the channel dimensions. Moreover, visual comparisons of the features learned by our framework further improve the interpretability of neural networks. Experimental results show that, in comparison to popular medical image segmentation and general image segmentation methods, our proposed framework achieves state-of-the-art performance on the liver tumor segmentation task with 91.18% Sensitivity, the COVID-19 lung infection segmentation task with 75.73% Sensitivity, and the retinal vessel detection task with 82.68% Sensitivity. Moreover, it is possible to integrate (parts of) the proposed framework into most recently proposed fully CNN-based models, in order to improve their effectiveness in medical image segmentation tasks.
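The record does not include the framework's exact architecture, but the abstract's key idea, adaptively aggregating local features with their global dependencies along both the channel and the spatial dimensions, resembles a dual-attention-style aggregation. The following NumPy sketch illustrates that general pattern under that assumption; the function names, the affinity-map formulation, and the mixing weights `alpha`/`beta` are all hypothetical, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention(feat):
    # feat: (C, H, W). Models global dependencies between channels
    # via a (C, C) channel-affinity map.
    C, H, W = feat.shape
    f = feat.reshape(C, -1)            # flatten spatial dims: (C, N)
    attn = softmax(f @ f.T, axis=-1)   # (C, C)
    return (attn @ f).reshape(C, H, W)

def spatial_attention(feat):
    # feat: (C, H, W). Models global dependencies between spatial
    # positions via an (N, N) position-affinity map, N = H * W.
    C, H, W = feat.shape
    f = feat.reshape(C, -1)            # (C, N)
    attn = softmax(f.T @ f, axis=-1)   # (N, N)
    return (f @ attn).reshape(C, H, W)

def aggregate(feat, alpha=0.5, beta=0.5):
    # Residual sum of the two attention branches with the local features;
    # in a trained model, alpha and beta would be learned (adaptive) weights.
    return feat + alpha * channel_attention(feat) + beta * spatial_attention(feat)

# Toy feature map from one scale of a feature pyramid.
feat = np.random.rand(4, 8, 8).astype(np.float32)
out = aggregate(feat)
print(out.shape)  # same shape as the input: (4, 8, 8)
```

In a multi-scale setting, such an aggregation block would be applied independently to the feature map at each scale before the decoder fuses them.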
Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: COVID-19 / Liver Neoplasms Type of study: Prognostic_studies Limits: Humans Language: En Journal: Methods Journal subject: BIOQUIMICA Year: 2022 Document type: Article