A deep generative model of 3D single-cell organization.
Donovan-Maiye, Rory M; Brown, Jackson M; Chan, Caleb K; Ding, Liya; Yan, Calysta; Gaudreault, Nathalie; Theriot, Julie A; Maleckar, Mary M; Knijnenburg, Theo A; Johnson, Gregory R.
Affiliation
  • Donovan-Maiye RM; Allen Institute for Cell Science, Seattle, Washington, United States of America.
  • Brown JM; Allen Institute for Cell Science, Seattle, Washington, United States of America.
  • Chan CK; Allen Institute for Cell Science, Seattle, Washington, United States of America.
  • Ding L; Allen Institute for Cell Science, Seattle, Washington, United States of America.
  • Yan C; Allen Institute for Cell Science, Seattle, Washington, United States of America.
  • Gaudreault N; Allen Institute for Cell Science, Seattle, Washington, United States of America.
  • Theriot JA; Department of Biology and Howard Hughes Medical Institute, University of Washington, Seattle, Washington, United States of America.
  • Maleckar MM; Allen Institute for Cell Science, Seattle, Washington, United States of America.
  • Knijnenburg TA; Allen Institute for Cell Science, Seattle, Washington, United States of America.
  • Johnson GR; Allen Institute for Cell Science, Seattle, Washington, United States of America.
PLoS Comput Biol; 18(1): e1009155, 2022 Jan.
Article in En | MEDLINE | ID: mdl-35041651
ABSTRACT
We introduce a framework for end-to-end integrative modeling of 3D single-cell multi-channel fluorescent image data of diverse subcellular structures. We employ stacked conditional β-variational autoencoders to first learn a latent representation of cell morphology, and then learn a latent representation of subcellular structure localization which is conditioned on the learned cell morphology. Our model is flexible and can be trained on images of arbitrary subcellular structures and at varying degrees of sparsity and reconstruction fidelity. We train our full model on 3D cell image data and explore design trade-offs in the 2D setting. Once trained, our model can be used to predict plausible locations of structures in cells where these structures were not imaged. The trained model can also be used to quantify the variation in the location of subcellular structures by generating plausible instantiations of each structure in arbitrary cell geometries. We apply our trained model to a small drug perturbation screen to demonstrate its applicability to new data. We show that the latent representations of drugged cells differ from those of unperturbed cells in ways consistent with the expected on-target effects of the drugs.
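For readers unfamiliar with the architecture named in the abstract, the sketch below illustrates the core idea of a conditional β-variational autoencoder in PyTorch: the encoder and decoder both receive a conditioning signal (here standing in for the learned cell morphology), and a weight β > 1 on the KL term trades reconstruction fidelity for a more structured latent space. This is a minimal toy example; the class name, layer sizes, flattened single-channel inputs, and β value are assumptions for illustration and are not taken from the authors' implementation, which stacks two such models over 3D multi-channel images.

    # Minimal sketch of a conditional beta-VAE (assumed names and sizes, not the paper's code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConditionalBetaVAE(nn.Module):
        def __init__(self, x_dim=64 * 64, cond_dim=64 * 64, z_dim=16, beta=4.0):
            super().__init__()
            self.beta = beta
            # Encoder sees the structure channel concatenated with the conditioning channel.
            self.enc = nn.Sequential(
                nn.Linear(x_dim + cond_dim, 512), nn.ReLU(),
                nn.Linear(512, 2 * z_dim),        # outputs mean and log-variance
            )
            # Decoder reconstructs the structure channel from z plus the condition.
            self.dec = nn.Sequential(
                nn.Linear(z_dim + cond_dim, 512), nn.ReLU(),
                nn.Linear(512, x_dim),
            )

        def forward(self, x, cond):
            mu, logvar = self.enc(torch.cat([x, cond], dim=1)).chunk(2, dim=1)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
            return self.dec(torch.cat([z, cond], dim=1)), mu, logvar

        def loss(self, x, cond):
            recon, mu, logvar = self(x, cond)
            recon_loss = F.mse_loss(recon, x, reduction="sum") / x.size(0)
            kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
            return recon_loss + self.beta * kl  # beta > 1 weights the KL term more heavily

    # Toy usage with random flattened images standing in for structure and morphology channels.
    model = ConditionalBetaVAE()
    x = torch.rand(8, 64 * 64)
    cond = torch.rand(8, 64 * 64)
    model.loss(x, cond).backward()

In the paper's setting, the conditioning input would itself be a latent code produced by a first (unconditional) β-VAE trained on cell and nuclear shape, so that sampling the second model generates structure localizations consistent with a given cell geometry.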
Subject(s)
Cell Nucleus / Intracellular Space / Cell Shape / Induced Pluripotent Stem Cells / Models, Biological
Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Cell Nucleus / Intracellular Space / Cell Shape / Induced Pluripotent Stem Cells / Models, Biological Type of study: Prognostic_studies Limits: Humans Language: En Journal: PLoS Comput Biol Journal subject: BIOLOGIA / INFORMATICA MEDICA Year: 2022 Document type: Article Affiliation country: United States