Structure preserving adversarial generation of labeled training samples for single-cell segmentation.
Cell Rep Methods; 3(9): 100592, 2023 Sep 25.
Article in English | MEDLINE | ID: mdl-37725984
We introduce a generative data augmentation strategy to improve the accuracy of instance segmentation of microscopy data for complex tissue structures. Our pipeline uses regular and conditional generative adversarial networks (GANs) for image-to-image translation to construct synthetic microscopy images along with their corresponding masks, simulating the distribution, shape, and appearance of the objects. The synthetic samples are then used to train an instance segmentation network (for example, StarDist or Cellpose). We show on two single-cell-resolution tissue datasets that our method improves the accuracy of downstream instance segmentation tasks compared with traditional training strategies using either the raw data or basic augmentations. We also compare the quality of the object masks with those generated by a traditional cell population simulation method, finding that our synthesized masks are closer to the ground truth in terms of Fréchet inception distance.
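As an illustration of the downstream training step described in the abstract, the sketch below shows how GAN-synthesized image/mask pairs could be pooled with real annotations to train a StarDist model. This is a minimal sketch, not the authors' pipeline: it assumes the standard StarDist Python API (Config2D, StarDist2D) and uses hypothetical loaders load_real_pairs / load_synthetic_pairs that return lists of images and matching integer-valued instance label masks.

# Minimal sketch, not the authors' code: train StarDist on real annotations
# pooled with GAN-synthesized image/mask pairs.
from csbdeep.utils import normalize
from stardist.models import Config2D, StarDist2D

# Hypothetical loaders: each returns a list of images and a list of
# integer-valued instance label masks of matching shapes.
X_real, Y_real = load_real_pairs()
X_syn, Y_syn = load_synthetic_pairs()

# Hold out a few *real* pairs for validation so the benefit of the
# synthetic samples is measured against real annotations only.
n_val = max(1, len(X_real) // 10)
X_val = [normalize(x, 1, 99.8) for x in X_real[:n_val]]
Y_val = Y_real[:n_val]

# Pool the remaining real pairs with the synthetic ones for training.
X_trn = [normalize(x, 1, 99.8) for x in X_real[n_val:] + X_syn]
Y_trn = Y_real[n_val:] + Y_syn

conf = Config2D(n_rays=32, grid=(2, 2))
model = StarDist2D(conf, name="stardist_gan_augmented", basedir="models")
model.train(X_trn, Y_trn, validation_data=(X_val, Y_val),
            epochs=50, steps_per_epoch=100)

The same pooled training set could equally be passed to Cellpose's training interface; the key design choice is that the validation set stays purely real, so any gain from the synthetic samples is assessed on genuine annotations.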
Keywords:
Full text: 1
Collection: 01-international
Database: MEDLINE
Main subject: Masks / Microscopy
Study type: Clinical_trials / Prognostic_studies
Language: English
Journal: Cell Rep Methods
Year: 2023
Document type: Article
Country of publication: United States