A novel deep learning-based 3D cell segmentation framework for future image-based disease detection.
Wang, Andong; Zhang, Qi; Han, Yang; Megason, Sean; Hormoz, Sahand; Mosaliganti, Kishore R; Lam, Jacqueline C K; Li, Victor O K.
Affiliation
  • Wang A; Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China.
  • Zhang Q; Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China.
  • Han Y; Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China.
  • Megason S; Department of Systems Biology, Harvard Medical School, Boston, MA, USA.
  • Hormoz S; Department of Systems Biology, Harvard Medical School, Boston, MA, USA.
  • Mosaliganti KR; Department of Systems Biology, Harvard Medical School, Boston, MA, USA.
  • Lam JCK; Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China. jcklam@eee.hku.hk.
  • Li VOK; Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China. vli@eee.hku.hk.
Sci Rep; 12(1): 342, 2022 Jan 10.
Article in English | MEDLINE | ID: mdl-35013443
ABSTRACT
Cell segmentation plays a crucial role in understanding, diagnosing, and treating diseases. Despite the recent success of deep learning-based cell segmentation methods, it remains challenging to accurately segment densely packed cells in 3D cell membrane images. Existing approaches also require fine-tuning multiple manually selected hyperparameters on new datasets. We develop a deep learning-based 3D cell segmentation pipeline, 3DCellSeg, to address these challenges. Compared to existing methods, our approach offers the following novelties: (1) a robust two-stage pipeline requiring only one hyperparameter; (2) a lightweight deep convolutional neural network (3DCellSegNet) to efficiently output voxel-wise masks; (3) a custom loss function (3DCellSeg Loss) to tackle the clumped cell problem; and (4) an efficient touching area-based clustering algorithm (TASCAN) to separate 3D cells from the foreground masks. Cell segmentation experiments conducted on four different cell datasets show that 3DCellSeg outperforms the baseline models on the ATAS (plant), HMS (animal), and LRP (plant) datasets with an overall accuracy of 95.6%, 76.4%, and 74.7%, respectively, while achieving an accuracy comparable to the baselines on the Ovules (plant) dataset with an overall accuracy of 82.2%. Ablation studies show that the individual improvements in accuracy are attributable to 3DCellSegNet, 3DCellSeg Loss, and TASCAN, with 3DCellSeg demonstrating robustness across different datasets and cell shapes. Our results suggest that 3DCellSeg can serve as a powerful biomedical and clinical tool, for example in histopathological image analysis for cancer diagnosis and grading.
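The abstract describes a two-stage design: a lightweight 3D CNN predicts voxel-wise foreground/membrane masks, and a touching area-based clustering step (TASCAN) then separates individual cells using a single hyperparameter. The Python sketch below illustrates one plausible reading of that clustering idea, assuming a watershed-based over-segmentation, a hypothetical `tascan_like_clustering` helper, and a single `min_touch_area` threshold; it is an illustration of the concept under those assumptions, not the authors' published implementation.

```python
# Hypothetical sketch of touching area-based clustering in the spirit of TASCAN:
# over-segment the predicted foreground into fragments, then merge fragments that
# share a large contact surface, with one threshold as the only hyperparameter.
# The helper name, the seeding rule, and the merge rule are assumptions made for
# illustration; they are not taken from the paper's code.
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed


def tascan_like_clustering(foreground, min_touch_area=50):
    """Turn a binary 3D foreground mask into a labelled cell-instance volume."""
    # Over-segment: seeds from distance-transform peaks, fragments via watershed.
    distance = ndimage.distance_transform_edt(foreground)
    seeds, _ = ndimage.label(distance > 0.5 * distance.max())
    fragments = watershed(-distance, seeds, mask=foreground)

    # Count the face-adjacent voxel pairs shared by every pair of fragments.
    touch = {}
    for axis in range(3):
        a = fragments.take(range(fragments.shape[axis] - 1), axis=axis)
        b = fragments.take(range(1, fragments.shape[axis]), axis=axis)
        adjacent = (a != b) & (a > 0) & (b > 0)
        for la, lb in zip(a[adjacent], b[adjacent]):
            key = (min(la, lb), max(la, lb))
            touch[key] = touch.get(key, 0) + 1

    # Merge fragments whose shared surface exceeds the single threshold
    # (simple union-find over fragment labels).
    n = int(fragments.max())
    parent = list(range(n + 1))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for (la, lb), area in touch.items():
        if area >= min_touch_area:
            parent[find(la)] = find(lb)

    relabel = np.array([find(i) for i in range(n + 1)])
    return relabel[fragments]
```

In this reading, fragments that share a contact surface of at least `min_touch_area` voxels are merged into one cell, which is one way a single threshold could govern the entire post-processing stage.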
Subject(s)

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Image Interpretation, Computer-Assisted / Cell Membrane / Imaging, Three-Dimensional / Deep Learning / Microscopy Type of study: Diagnostic_studies / Prognostic_studies Limits: Animals Language: En Journal: Sci Rep Year: 2022 Document type: Article Country of affiliation: China