Fully automated multiorgan segmentation in abdominal magnetic resonance imaging with deep neural networks.
Chen, Yuhua; Ruan, Dan; Xiao, Jiayu; Wang, Lixia; Sun, Bin; Saouaf, Rola; Yang, Wensha; Li, Debiao; Fan, Zhaoyang.
Affiliation
  • Chen Y; Department of Bioengineering, University of California, Los Angeles, CA, USA.
  • Ruan D; Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA.
  • Xiao J; Department of Bioengineering, University of California, Los Angeles, CA, USA.
  • Wang L; Department of Radiation Oncology, University of California, Los Angeles, CA, USA.
  • Sun B; Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA.
  • Saouaf R; Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA.
  • Yang W; Department of Radiology, Chaoyang Hospital, Capital Medical University, Beijing, China.
  • Li D; Department of Radiology, Fujian Medical University Union Hospital, Fuzhou, Fujian, China.
  • Fan Z; Department of Imaging, Cedars-Sinai Medical Center, Los Angeles, CA, USA.
Med Phys; 47(10): 4971-4982, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32748401
ABSTRACT

PURPOSE:

Segmentation of multiple organs-at-risk (OARs) is essential for magnetic resonance (MR)-only radiation therapy treatment planning and MR-guided adaptive radiotherapy of abdominal cancers. Current practice requires manual delineation that is labor-intensive, time-consuming, and prone to intra- and interobserver variations. We developed a deep learning (DL) technique for fully automated segmentation of multiple OARs on clinical abdominal MR images with high accuracy, reliability, and efficiency.

METHODS:

We developed the Automated deep Learning-based Abdominal Multiorgan segmentation (ALAMO) technique, based on a two-dimensional U-net with a densely connected network structure and a tailored design of the data augmentation and training procedures, including deep connection, auxiliary supervision, and multiview input. The model takes multislice MR images as input and outputs the corresponding segmentation labels. 3.0-Tesla T1 VIBE (Volumetric Interpolated Breath-hold Examination) images of 102 subjects were used in our study and split into 66 for training, 16 for validation, and 20 for testing. Ten OARs were studied: the liver, spleen, pancreas, left and right kidneys, stomach, duodenum, small intestine, spinal cord, and vertebral bodies. An experienced radiologist manually labeled each OAR, and a senior radiologist re-edited the labels where necessary to create the ground truth. Performance was measured using volume overlap and surface distance.
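As a rough illustration of the kind of architecture described above, the sketch below shows a two-dimensional U-net-style encoder-decoder with densely connected convolution blocks, multislice input, and an auxiliary (deeply supervised) output head. It is not the authors' implementation; the framework (PyTorch), layer counts, growth rate, slice count, and the 11-class output (10 OARs plus background) are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a 2-D U-net-style network with
# densely connected convolution blocks, multislice input, and an auxiliary
# supervision head. All layer sizes and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseBlock(nn.Module):
    """Two 3x3 conv layers whose outputs are concatenated (dense connectivity)."""
    def __init__(self, in_ch, growth=32):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, growth, 3, padding=1)
        self.conv2 = nn.Conv2d(in_ch + growth, growth, 3, padding=1)
        self.out_ch = in_ch + 2 * growth

    def forward(self, x):
        x = torch.cat([x, F.relu(self.conv1(x))], dim=1)
        x = torch.cat([x, F.relu(self.conv2(x))], dim=1)
        return x


class DenseUNet2D(nn.Module):
    """Encoder-decoder with dense blocks and an auxiliary supervision head."""
    def __init__(self, in_slices=3, n_classes=11):  # 10 OARs + background (assumed)
        super().__init__()
        self.enc1 = DenseBlock(in_slices)
        self.enc2 = DenseBlock(self.enc1.out_ch)
        self.bottleneck = DenseBlock(self.enc2.out_ch)
        self.up2 = nn.ConvTranspose2d(self.bottleneck.out_ch, self.enc2.out_ch, 2, stride=2)
        self.dec2 = DenseBlock(2 * self.enc2.out_ch)
        self.up1 = nn.ConvTranspose2d(self.dec2.out_ch, self.enc1.out_ch, 2, stride=2)
        self.dec1 = DenseBlock(2 * self.enc1.out_ch)
        self.head = nn.Conv2d(self.dec1.out_ch, n_classes, 1)
        # Auxiliary head on a coarser decoder level (auxiliary/deep supervision).
        self.aux_head = nn.Conv2d(self.dec2.out_ch, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool2d(e1, 2))
        b = self.bottleneck(F.max_pool2d(e2, 2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        main = self.head(d1)
        aux = F.interpolate(self.aux_head(d2), size=x.shape[2:],
                            mode="bilinear", align_corners=False)
        return main, aux  # both outputs would contribute to the training loss


if __name__ == "__main__":
    net = DenseUNet2D()
    # A batch of 2 samples, each a stack of 3 adjacent MR slices of 128 x 128.
    main, aux = net(torch.randn(2, 3, 128, 128))
    print(main.shape, aux.shape)  # torch.Size([2, 11, 128, 128]) for both
```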

RESULTS:

The ALAMO technique generated segmentation labels in good agreement with the manual results. Specifically, nine of the ten OARs achieved high Dice similarity coefficients (DSCs) in the range of 0.87-0.96; the exception was the duodenum, with a DSC of 0.80. Inference completed within 1 min for a three-dimensional volume of 320 × 288 × 180 voxels. Overall, the ALAMO model matched state-of-the-art techniques in performance.
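For reference, the volume-overlap metric reported above is the Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|). The snippet below is a minimal, generic implementation for binary organ masks; the example array shapes and the convention for two empty masks are assumptions, not taken from the paper.

```python
# Minimal sketch of the Dice similarity coefficient for binary masks.
import numpy as np


def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated here as perfect agreement (assumption)
    return 2.0 * np.logical_and(pred, truth).sum() / denom


# Example: two 3-D masks that overlap in half of their voxels.
a = np.zeros((4, 4, 4), dtype=bool); a[:2] = True
b = np.zeros((4, 4, 4), dtype=bool); b[1:3] = True
print(round(dice_coefficient(a, b), 2))  # 0.5
```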

CONCLUSION:

The proposed ALAMO technique allows for fully automated abdominal MR segmentation with high accuracy and practical memory and computation time demands.

Full text: 1 Database: MEDLINE Main subject: Image Processing, Computer-Assisted / Tomography, X-Ray Computed Study type: Guideline / Prognostic_studies Limit: Humans Language: English Journal: Med Phys Year: 2020 Document type: Article Country of affiliation: United States
