Leveraging multimodal microscopy to optimize deep learning models for cell segmentation.
APL Bioeng; 5(1): 016101, 2021 Mar.
Article in En | MEDLINE | ID: mdl-33415313
Deep learning provides an opportunity to automatically segment and extract cellular features from high-throughput microscopy images. Many labeling strategies have been developed for this purpose, ranging from the use of fluorescent markers to label-free approaches. However, differences in the channels available to each respective training dataset make it difficult to directly compare the effectiveness of these strategies across studies. Here, we explore training models using subimage stacks composed of channels sampled from larger, "hyper-labeled," image stacks. This allows us to directly compare a variety of labeling strategies and training approaches on identical cells. This approach revealed that fluorescence-based strategies generally provided higher segmentation accuracies but were less accurate than label-free models when labeling was inconsistent. The relative strengths of labeled and label-free techniques could be combined by merging fluorescence channels and using out-of-focus brightfield images. Beyond comparing labeling strategies, using subimage stacks for training was also found to provide a method of simulating a wide range of labeling conditions, increasing the ability of the final model to accommodate a greater range of candidate cell labeling strategies.
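The channel-subsampling idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: the array layout (channels x height x width), the channel names ("fluor_*", "brightfield_defocused"), and the functions sample_subimage_stack and random_labeling_condition are hypothetical placeholders, meant only to show how subimage stacks might be drawn from a hyper-labeled stack to simulate varied labeling conditions during training.

```python
import numpy as np

def sample_subimage_stack(hyper_stack, channel_names, chosen):
    """Assemble a training input from a subset of channels of a
    "hyper-labeled" stack (shape: channels x height x width), so that
    different labeling strategies can be compared on identical cells."""
    idx = [channel_names.index(name) for name in chosen]
    sub = hyper_stack[idx].astype(np.float32)
    # Per-channel normalization keeps strategies with different dynamic
    # ranges on a comparable scale.
    mean = sub.mean(axis=(1, 2), keepdims=True)
    std = sub.std(axis=(1, 2), keepdims=True)
    return (sub - mean) / (std + 1e-6)

def random_labeling_condition(channel_names, n_fluorescent, rng):
    """Simulate one labeling condition: a random subset of fluorescence
    channels plus an out-of-focus brightfield channel."""
    fluor = [c for c in channel_names if c.startswith("fluor_")]
    picked = list(rng.choice(fluor, size=n_fluorescent, replace=False))
    return picked + ["brightfield_defocused"]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    names = ["fluor_nuclear", "fluor_membrane", "fluor_cytoplasm",
             "brightfield_in_focus", "brightfield_defocused"]
    hyper = rng.random((len(names), 256, 256))   # stand-in for a real hyper-labeled stack
    condition = random_labeling_condition(names, n_fluorescent=2, rng=rng)
    x = sample_subimage_stack(hyper, names, condition)
    print(condition, x.shape)   # e.g. 3 sampled channels, 256 x 256
```

In a training pipeline of this kind, a condition would typically be redrawn for each sampled crop or epoch, so the final segmentation model sees many candidate labeling strategies rather than a single fixed channel set.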
Full text: 1
Collections: 01-international
Database: MEDLINE
Language: En
Journal: APL Bioeng
Publication year: 2021
Document type: Article
Country of affiliation: Canada