Extracting quantitative biological information from bright-field cell images using deep learning.
Helgadottir, Saga; Midtvedt, Benjamin; Pineda, Jesús; Sabirsh, Alan; B Adiels, Caroline; Romeo, Stefano; Midtvedt, Daniel; Volpe, Giovanni.
Affiliation
  • Helgadottir S; Department of Physics, University of Gothenburg, Gothenburg, Sweden.
  • Midtvedt B; Department of Physics, University of Gothenburg, Gothenburg, Sweden.
  • Pineda J; Department of Physics, University of Gothenburg, Gothenburg, Sweden.
  • Sabirsh A; Advanced Drug Delivery, Pharmaceutical Sciences, R&D, AstraZeneca, Gothenburg, Sweden.
  • B Adiels C; Department of Physics, University of Gothenburg, Gothenburg, Sweden.
  • Midtvedt D; Department of Physics, University of Gothenburg, Gothenburg, Sweden.
  • Volpe G; Department of Physics, University of Gothenburg, Gothenburg, Sweden.
Biophys Rev (Melville); 2(3): 031401, 2021 Sep.
Article in English | MEDLINE | ID: mdl-38505631
ABSTRACT
Quantitative analysis of cell structures is essential for biomedical and pharmaceutical research. The standard imaging approach relies on fluorescence microscopy, where cell structures of interest are labeled by chemical staining techniques. However, these techniques are often invasive and sometimes even toxic to the cells, in addition to being time consuming, labor intensive, and expensive. Here, we introduce an alternative deep-learning-powered approach based on the analysis of bright-field images by a conditional generative adversarial neural network (cGAN). We show that this is a robust and fast-converging approach to generate virtually stained images from the bright-field images and, in subsequent downstream analyses, to quantify the properties of cell structures. Specifically, we train a cGAN to virtually stain lipid droplets, cytoplasm, and nuclei using bright-field images of human stem-cell-derived fat cells (adipocytes), which are of particular interest for nanomedicine and vaccine development. Subsequently, we use these virtually stained images to extract quantitative measures about these cell structures. Generating virtually stained fluorescence images is less invasive, less expensive, and more reproducible than standard chemical staining; furthermore, it frees up the fluorescence microscopy channels for other analytical probes, thus increasing the amount of information that can be extracted from each cell. To make this deep-learning-powered approach readily available for other users, we provide a Python software package, which can be easily personalized and optimized for specific virtual-staining and cell-profiling applications.
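The core of the approach described above is an image-to-image translation objective: a generator produces a virtually stained image from a bright-field input, and is trained both to fool a discriminator conditioned on that input and to stay close to the real fluorescence image. The sketch below illustrates this kind of combined objective (in the style of pix2pix-like cGANs) on toy NumPy arrays; the array shapes, the placeholder discriminator, and the weight `lam` are illustrative assumptions, not the authors' software package or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a bright-field input, a real fluorescence image,
# and the generator's virtually stained output (hypothetical shapes).
brightfield = rng.random((64, 64, 1))
real_stain = rng.random((64, 64, 3))
fake_stain = rng.random((64, 64, 3))

def discriminator_score(condition, image):
    # Placeholder "discriminator": a fixed sigmoid probe on mean
    # intensities, standing in for a trained conditional network.
    return 1.0 / (1.0 + np.exp(-(image.mean() + condition.mean() - 1.0)))

# Combined generator objective: an adversarial term (fool the
# conditional discriminator) plus an L1 reconstruction term that keeps
# the virtual stain close to the real stain, weighted by lam.
lam = 100.0
adv_loss = -np.log(discriminator_score(brightfield, fake_stain) + 1e-12)
l1_loss = np.abs(real_stain - fake_stain).mean()
generator_loss = adv_loss + lam * l1_loss
```

In practice the generator and discriminator would be convolutional networks trained jointly; the L1 term is what keeps the virtually stained output pixel-wise faithful to the chemical stain rather than merely realistic-looking.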

Full text: 1 Database: MEDLINE Language: English Journal: Biophys Rev (Melville) Year of publication: 2021 Document type: Article Country of affiliation: Sweden