GFNet: Automatic segmentation of COVID-19 lung infection regions using CT images based on boundary features.
Fan, Chaodong; Zeng, Zhenhuan; Xiao, Leyi; Qu, Xilong.
Affiliation
  • Fan C; School of Computer Science and Technology, Hainan University, Haikou 570228, China.
  • Zeng Z; School of Computer Science, Xiangtan University, Xiangtan 411100, China.
  • Xiao L; Foshan Green Intelligent Manufacturing Research Institute of Xiangtan University, Foshan 528000, China.
  • Qu X; School of Information Technology and Management, Hunan University of Finance and Economics, Changsha 410205, China.
Pattern Recognit ; 132: 108963, 2022 Dec.
Article in En | MEDLINE | ID: mdl-35966970
ABSTRACT
In early 2020, the global spread of COVID-19 presented the world with a serious health crisis. Due to the large number of infected patients, automatic segmentation of lung infections from computed tomography (CT) images has great potential to enhance traditional medical strategies. However, the segmentation of infected regions in CT slices still faces many challenges. In particular, the core problem is the high variability of infection characteristics and the low contrast between infected and normal regions, which leads to fuzzy regions in lung CT segmentation. To address this problem, we have designed a novel global feature network (GFNet) for COVID-19 lung infection segmentation. With VGG16 as the backbone, we design an edge-guidance module (Eg) that fuses the features of each layer. First, features are extracted by a reverse attention module and combined with Eg. This series of steps enables each layer to fully extract boundary details that previous models struggle to capture, thus addressing the fuzziness of infected regions. The multi-layer output features are fused into the final output to achieve automatic and accurate segmentation of infected areas. We compare GFNet with the traditional medical segmentation networks UNet and UNet++, the recent model Inf-Net, and methods from the few-shot learning field. Experiments show that our model is superior to the above models in Dice, Sensitivity, Specificity, and other evaluation metrics, and our segmentation results are visually clear and accurate, which demonstrates the effectiveness of GFNet. In addition, we verify the generalization ability of GFNet on another "never seen" dataset, and the results show that our model still generalizes better than the above models. Our code has been shared at https://github.com/zengzhenhuan/GFNet.
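The abstract describes a refinement scheme in which each decoder layer applies reverse attention to backbone features and fuses the result with edge-guidance features before contributing to the final prediction. The following is a minimal PyTorch-style sketch of how such a block and the Dice metric could be formulated; the class and function names (EdgeGuidedReverseAttention, dice_score), channel arguments, and exact fusion order are illustrative assumptions, not the released GFNet implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EdgeGuidedReverseAttention(nn.Module):
    """Illustrative refinement block (not the authors' code): reverse attention
    over backbone features, fused with an edge-guidance map, predicting a
    residual correction to a coarse segmentation map."""

    def __init__(self, feat_channels, edge_channels):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(feat_channels + edge_channels, feat_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, 1, 3, padding=1),
        )

    def forward(self, feat, edge_feat, coarse_pred):
        # Bring the coarse prediction and edge features to this layer's resolution.
        size = feat.shape[2:]
        coarse = F.interpolate(coarse_pred, size=size, mode="bilinear", align_corners=False)
        edge = F.interpolate(edge_feat, size=size, mode="bilinear", align_corners=False)
        # Reverse attention: emphasize regions the coarse map has not yet covered.
        attended = feat * (1.0 - torch.sigmoid(coarse))
        # Fuse with edge guidance and predict a residual refinement.
        residual = self.fuse(torch.cat([attended, edge], dim=1))
        return coarse + residual


def dice_score(pred, target, eps=1e-6):
    """Dice coefficient between a binarized prediction and a ground-truth mask."""
    pred = (pred > 0.5).float()
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

In this sketch the multi-layer outputs would be produced by stacking such blocks over successive backbone stages and fusing their upsampled predictions, mirroring the multi-layer fusion described in the abstract; tensor shapes and channel counts are hypothetical.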
Keywords

Full text: 1 Collections: 01-international Database: MEDLINE Study type: Prognostic_studies Language: En Journal: Pattern Recognit Year of publication: 2022 Document type: Article