Unsupervised X-ray image segmentation with task driven generative adversarial networks.
Zhang, Yue; Miao, Shun; Mansi, Tommaso; Liao, Rui.
Affiliation
  • Zhang Y; Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ, USA; Department of Mathematics, Applied Mathematics and Statistics, Case Western Reserve University, Cleveland, OH, USA. Electronic address: yue.zhang@siemens-healthineers.com.
  • Miao S; Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ, USA. Electronic address: shwinmiao@gmail.com.
  • Mansi T; Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ, USA. Electronic address: tommaso.mansi@siemens-healthineers.com.
  • Liao R; Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ, USA. Electronic address: rui.liao@siemens-healthineers.com.
Med Image Anal ; 62: 101664, 2020 05.
Article in En | MEDLINE | ID: mdl-32120268
ABSTRACT
Semantic parsing of anatomical structures in X-ray images is a critical task in many clinical applications. Modern methods leverage deep convolutional networks and generally require a large amount of labeled data for model training. However, obtaining accurate pixel-wise labels on X-ray images is very challenging due to overlapping anatomy and complex texture patterns. In comparison, labeled CT data are more accessible, since organs in 3D CT scans preserve clearer structures and can therefore be delineated more easily. In this paper, we propose a model framework for learning automatic X-ray image parsing from labeled 3D CT scans. Specifically, a Deep Image-to-Image network (DI2I) for multi-organ segmentation is first trained on X-ray-like Digitally Reconstructed Radiographs (DRRs) rendered from 3D CT volumes. We then build a Task Driven Generative Adversarial Network (TD-GAN) to achieve simultaneous synthesis and parsing for unseen real X-ray images. The entire model pipeline does not require any annotations from the X-ray image domain. In numerical experiments, we validate the proposed model on over 800 DRRs and 300 topograms. While the vanilla DI2I trained on DRRs without any adaptation fails completely at segmenting the topograms, the proposed model requires no topogram labels and achieves a promising average Dice score of 86%, close to the accuracy of supervised training (89%). Furthermore, we demonstrate the generality of TD-GAN through quantitative and qualitative studies on a widely used public dataset.
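The abstract reports segmentation quality as an average Dice score (86% unsupervised vs. 89% supervised). As a reminder of how that metric is computed on binary masks — a minimal illustrative sketch, not the authors' evaluation code — one might write:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary segmentation masks.

    Dice = 2 * |pred ∩ target| / (|pred| + |target|); eps avoids
    division by zero when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: each has 3 foreground pixels, 2 of which overlap.
a = np.zeros((4, 4), dtype=bool); a[0, 0:3] = True
b = np.zeros((4, 4), dtype=bool); b[0, 1:4] = True
score = dice_score(a, b)  # 2*2 / (3+3) ≈ 0.667
```

In multi-organ settings such as the one described here, the Dice score is typically computed per organ label and then averaged; the function names and averaging scheme above are assumptions for illustration.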
Subjects
Keywords

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Image Processing, Computer-Assisted / Tomography, X-Ray Computed Study type: Diagnostic_studies / Qualitative_research Limit: Humans Language: En Publication year: 2020 Document type: Article