Deep Learning for Variational Multimodality Tumor Segmentation in PET/CT.
Li, Laquan; Zhao, Xiangming; Lu, Wei; Tan, Shan.
Affiliation
  • Li L; Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China.
  • Zhao X; College of Science, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
  • Lu W; Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China.
  • Tan S; Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065, USA.
Neurocomputing (Amst); 392: 277-295, 2020 Jun 07.
Article in English | MEDLINE | ID: mdl-32773965
ABSTRACT
Positron emission tomography/computed tomography (PET/CT) imaging simultaneously acquires functional metabolic information and anatomical information of the human body. Rationally fusing the complementary information in PET/CT for accurate tumor segmentation remains challenging. In this study, a novel deep-learning-based variational method was proposed to automatically fuse multimodality information for tumor segmentation in PET/CT. A 3D fully convolutional network (FCN) was first designed and trained to produce a probability map from the CT image. The learned probability map describes the probability of each CT voxel belonging to the tumor or the background, and roughly distinguishes the tumor from its surrounding soft tissues. A fuzzy variational model was then proposed to incorporate the probability map and the PET intensity image for accurate multimodality tumor segmentation, with the probability map acting as a membership-degree prior. A split Bregman algorithm was used to minimize the variational model. The proposed method was validated on a non-small cell lung cancer dataset of 84 PET/CT images. Experimental results demonstrated that: (1) only a few training samples were needed to train the designed network to produce the probability map; (2) the proposed method can be applied to the small datasets typically encountered in clinical research; (3) the proposed method successfully fused the complementary information in PET/CT, outperforming two existing deep-learning-based multimodality segmentation methods as well as multimodality segmentation methods using traditional fusion strategies (without deep learning); and (4) the proposed method segmented tumors well, even those with fluorodeoxyglucose (FDG) uptake inhomogeneity and blurred tumor edges (two major challenges in PET single-modality segmentation) or complex surrounding soft tissues (a major challenge in CT single-modality segmentation), achieving an average Dice similarity index (DSI) of 0.86 ± 0.05, sensitivity (SE) of 0.86 ± 0.07, positive predictive value (PPV) of 0.87 ± 0.10, volume error (VE) of 0.16 ± 0.12, and classification error (CE) of 0.30 ± 0.12.
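To make the fusion step concrete, here is a minimal illustrative sketch under assumed forms, not the paper's actual equations: a convex fuzzy-region energy in which the PET intensity image f drives a two-region data term, the FCN probability map p (derived from CT) enters as a membership-degree prior, and a total variation (TV) term regularizes the soft membership u in [0, 1]:

    E(u) = \int u\,(f - c_1)^2\,dx + \int (1 - u)\,(f - c_2)^2\,dx
           + \frac{\lambda}{2}\int (u - p)^2\,dx + \mu\int |\nabla u|\,dx

Split Bregman targets the non-smooth TV term: substituting d = ∇u with a Bregman variable b decouples it, so each iteration alternates a linear u-subproblem with a closed-form shrinkage on d. The 2D Python/NumPy sketch below follows this pattern; all parameter names, the linear data term, and the single Jacobi sweep per iteration are simplifying assumptions made for brevity, not details from the paper.

    import numpy as np

    def shrink(x, gamma):
        # Soft-thresholding: closed-form solution of the d-subproblem.
        return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

    def grad(u):
        # Forward differences (periodic boundaries for brevity).
        return np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u

    def div(px, py):
        # Backward-difference divergence, the negative adjoint of grad above.
        return (px - np.roll(px, 1, 0)) + (py - np.roll(py, 1, 1))

    def fuzzy_segment(f_pet, prob_map, lam=1.0, mu=0.1, beta=1.0, n_iter=200):
        """Illustrative split Bregman loop for a TV-regularized fuzzy-region
        energy with a CT-derived probability-map prior (hypothetical form;
        the paper's exact energy and solver details are not reproduced)."""
        u = prob_map.copy()                           # membership initialized from the prior
        dx = np.zeros_like(u); dy = np.zeros_like(u)  # auxiliary gradient variables d
        bx = np.zeros_like(u); by = np.zeros_like(u)  # Bregman variables b
        for _ in range(n_iter):
            # Region means under the current soft membership.
            c1 = (u * f_pet).sum() / (u.sum() + 1e-8)
            c2 = ((1 - u) * f_pet).sum() / ((1 - u).sum() + 1e-8)
            r = (f_pet - c1) ** 2 - (f_pet - c2) ** 2   # pointwise data force
            # u-subproblem: one Jacobi sweep on (lam*I - beta*Laplacian) u = rhs.
            rhs = -r + lam * prob_map - beta * div(dx - bx, dy - by)
            nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                  + np.roll(u, 1, 1) + np.roll(u, -1, 1))
            u = (rhs + beta * nb) / (lam + 4.0 * beta)
            u = np.clip(u, 0.0, 1.0)                    # project onto [0, 1]
            # d-subproblem: anisotropic shrinkage, then Bregman update.
            ux, uy = grad(u)
            dx, dy = shrink(ux + bx, mu / beta), shrink(uy + by, mu / beta)
            bx += ux - dx
            by += uy - dy
        return u

Thresholding the returned membership at 0.5 gives a hard segmentation; the clip step is a simple projection of u back onto [0, 1] after each update, standing in for an exact constrained solve.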
Full text: 1 Collections: 01-international Database: MEDLINE Study type: Prognostic studies Language: English Journal: Neurocomputing (Amst) Publication year: 2020 Document type: Article Affiliation country: China
