ABSTRACT
In response to different stimuli, many transcription factors (TFs) display distinct activation dynamics that trigger the expression of specific sets of target genes, suggesting that promoters have a way to decode dynamics. Here, we use optogenetics to directly manipulate the nuclear localization of a synthetic TF in mammalian cells without affecting other processes. We generate pulsatile or sustained TF dynamics and employ live-cell microscopy and mathematical modelling to analyse the behaviour of a library of reporter constructs. We find that decoding of TF dynamics occurs only when the coupling between TF binding and transcription pre-initiation complex formation is inefficient, and that a promoter's ability to decode TF dynamics is amplified by inefficient translation initiation. Using this knowledge, we build a synthetic circuit that yields two distinct gene expression programs depending solely on TF dynamics. Finally, we show that some of the promoter features identified in our study can be used to distinguish natural promoters that have previously been experimentally characterized as responsive to either sustained or pulsatile p53 and NF-κB signals. These results help elucidate how gene expression is regulated in mammalian cells and open up the possibility of building complex synthetic circuits steered by TF dynamics.
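The central modelling result here, that inefficient coupling between TF binding and pre-initiation complex (PIC) formation is needed to discriminate TF dynamics, can be illustrated with a toy two-step kinetic model. This is a minimal sketch, not the authors' model: all rate constants, the 50% duty cycle, and the forward-Euler integration are illustrative assumptions.

```python
# Toy two-step promoter model (illustrative, not the paper's):
# TF occupancy -> PIC formation -> mRNA, integrated with forward Euler.
def simulate(tf_input, k_on, k_off=1.0, k_tx=1.0, k_deg=0.1, dt=0.01, T=100.0):
    """Return the mRNA level at time T for a given TF input function.
    k_on controls how efficiently TF occupancy is coupled to PIC formation."""
    pic, mrna, t = 0.0, 0.0, 0.0
    while t < T:
        tf = tf_input(t)
        pic += dt * (k_on * tf * (1.0 - pic) - k_off * pic)   # PIC formation/loss
        mrna += dt * (k_tx * pic - k_deg * mrna)              # transcription/decay
        t += dt
    return mrna

sustained = lambda t: 1.0                                 # TF always nuclear
pulsatile = lambda t: 1.0 if (t % 20.0) < 10.0 else 0.0   # 50% duty-cycle pulses

# Comparing the sustained/pulsatile output ratio at low vs. high coupling
# efficiency probes how strongly the promoter discriminates the two dynamics.
ratio_slow = simulate(sustained, k_on=0.05) / simulate(pulsatile, k_on=0.05)
ratio_fast = simulate(sustained, k_on=5.0) / simulate(pulsatile, k_on=5.0)
```

Because the sustained input dominates the pulsatile one at every time point and the dynamics are monotone in the TF level, the sustained output always exceeds the pulsatile output; how much it exceeds it depends on the coupling rate `k_on`.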
Subject(s)
Gene Expression Regulation , Promoter Regions, Genetic , Transcription Factors , Animals , Mammals , NF-kappa B/genetics , NF-kappa B/metabolism , Protein Binding , Transcription Factors/genetics , Transcription Factors/metabolism
ABSTRACT
Genome editing simplifies the generation of new animal models for congenital disorders. However, the detailed and unbiased phenotypic assessment of altered embryonic development remains a challenge. Here, we explore how deep learning (U-Net) can automate segmentation tasks in various imaging modalities, and we quantify phenotypes of altered renal, neural and craniofacial development in Xenopus embryos in comparison with normal variability. We demonstrate the utility of this approach in embryos with polycystic kidneys (pkd1 and pkd2) and craniofacial dysmorphia (six1). We highlight how in toto light-sheet microscopy facilitates accurate reconstruction of brain and craniofacial structures within X. tropicalis embryos upon dyrk1a and six1 loss of function or treatment with retinoic acid inhibitors. These tools increase the sensitivity and throughput of evaluating developmental malformations caused by chemical or genetic disruption. Furthermore, we provide a library of pre-trained networks and detailed instructions for applying deep learning to the reader's own datasets. We demonstrate the versatility, precision and scalability of deep neural network phenotyping on embryonic disease models. By combining light-sheet microscopy and deep learning, we provide a framework for higher-throughput characterization of embryonic model organisms. This article has an associated 'The people behind the papers' interview.
Subject(s)
Deep Learning , Embryonic Development/genetics , Phenotype , Animals , Craniofacial Abnormalities/embryology , Craniofacial Abnormalities/genetics , Craniofacial Abnormalities/pathology , Disease Models, Animal , Image Processing, Computer-Assisted , Mice , Microscopy , Mutation , Neural Networks, Computer , Neurodevelopmental Disorders/genetics , Neurodevelopmental Disorders/pathology , Polycystic Kidney Diseases/embryology , Polycystic Kidney Diseases/genetics , Polycystic Kidney Diseases/pathology , Xenopus Proteins/genetics , Xenopus laevis
ABSTRACT
In the version of this paper originally published, one of the affiliations for Dominic Mai was incorrect: "Center for Biological Systems Analysis (ZBSA), Albert-Ludwigs-University, Freiburg, Germany" should have been "Life Imaging Center, Center for Biological Systems Analysis, Albert-Ludwigs-University, Freiburg, Germany." This change required some renumbering of subsequent author affiliations. These corrections have been made in the PDF and HTML versions of the article, as well as in any cover sheets for associated Supplementary Information.
ABSTRACT
U-Net is a generic deep-learning solution for frequently occurring quantification tasks such as cell detection and shape measurements in biomedical image data. We present an ImageJ plugin that enables non-machine-learning experts to analyze their data with U-Net on either a local computer or a remote server/cloud service. The plugin comes with pretrained models for single-cell segmentation and allows for U-Net to be adapted to new tasks on the basis of a few annotated samples.
Subject(s)
Cell Count , Deep Learning , Cloud Computing , Neural Networks, Computer , Software Design
ABSTRACT
We present a combined report on the results of three editions of the Cell Tracking Challenge, an ongoing initiative aimed at promoting the development and objective evaluation of cell segmentation and tracking algorithms. With 21 participating algorithms and a data repository consisting of 13 data sets from various microscopy modalities, the challenge displays today's state-of-the-art methodology in the field. We analyzed the challenge results using performance measures for segmentation and tracking that rank all participating methods. We also analyzed the performance of all of the algorithms in terms of biological measures and practical usability. Although some methods scored high in all technical aspects, none obtained fully correct solutions. We found that methods that either take prior information into account using learning strategies or analyze cells in a global spatiotemporal video context performed better than other methods under the segmentation and tracking scenarios included in the challenge.
Subject(s)
Algorithms , Cell Tracking/methods , Image Interpretation, Computer-Assisted , Benchmarking , Cell Line , Humans
ABSTRACT
Precise three-dimensional (3D) mapping of a large number of gene expression patterns, neuronal types and connections to an anatomical reference helps us to understand the vertebrate brain and its development. We developed the Virtual Brain Explorer (ViBE-Z), a software that automatically maps gene expression data with cellular resolution to a 3D standard larval zebrafish (Danio rerio) brain. ViBE-Z enhances the data quality through fusion and attenuation correction of multiple confocal microscope stacks per specimen and uses a fluorescent stain of cell nuclei for image registration. It automatically detects 14 predefined anatomical landmarks for aligning new data with the reference brain. ViBE-Z performs colocalization analysis in expression databases for anatomical domains or subdomains defined by any specific pattern; here we demonstrate its utility for mapping neurons of the dopaminergic system. The ViBE-Z database, atlas and software are provided via a web interface.
Subject(s)
Brain , Databases, Genetic , Gene Expression , Imaging, Three-Dimensional/methods , Zebrafish , Animals , Brain/embryology , Brain/metabolism , Brain/ultrastructure , Embryonic Development/genetics , Larva , Neurons/metabolism , Neurons/ultrastructure , Software , Zebrafish/embryology , Zebrafish/genetics
ABSTRACT
How do brains, biological or artificial, respond and adapt to an ever-changing environment? In a recent meeting, experts from various fields of neuroscience and artificial intelligence met to discuss internal world models in brains and machines, arguing for an interdisciplinary approach to gain deeper insights into the underlying mechanisms.
Subject(s)
Artificial Intelligence , Brain , Animals , Humans , Brain/physiology , Models, Neurological , Neurosciences
ABSTRACT
Mutations of inversin cause type II nephronophthisis, an infantile autosomal recessive disease characterized by cystic kidney disease and developmental defects. Inversin regulates Wnt signaling and is required for convergent extension movements during early embryogenesis. We now show that Inversin is essential for Xenopus pronephros formation, involving two distinct and opposing forms of cell movements. Knockdown of Inversin abrogated both proximal pronephros extension and distal tubule differentiation, phenotypes similar to those of Xenopus embryos deficient in Frizzled-8. Exogenous Inversin rescued the pronephric defects caused by lack of Frizzled-8, indicating that Inversin acts downstream of Frizzled-8 in pronephros morphogenesis. Depletion of Inversin prevented the recruitment of Dishevelled in response to Frizzled-8 and impeded the accumulation of Dishevelled at the apical membrane of tubular epithelial cells in vivo. Thus, defective tubule morphogenesis seems to contribute to the renal pathology observed in patients with nephronophthisis type II.
Subject(s)
Intracellular Signaling Peptides and Proteins/metabolism , Kidney/embryology , Receptors, Cell Surface/metabolism , Signal Transduction/physiology , Transcription Factors/metabolism , Xenopus Proteins/metabolism , Adaptor Proteins, Signal Transducing/metabolism , Animals , Dishevelled Proteins , Fluorescence , In Situ Hybridization , Kidney/metabolism , Mice , Microscopy, Confocal , Oligonucleotides/genetics , Phosphoproteins/metabolism , Wnt Proteins/metabolism , Xenopus
ABSTRACT
The impact of spontaneous movements on neuronal activity has created the need to quantify behavior. We present a versatile framework to directly capture the 3D motion of freely definable body points in a marker-free manner with high precision and reliability. Combining the tracking with neural recordings revealed multiplexing of information in the motor cortex neurons of freely moving rats. By integrating multiple behavioral variables into a model of the neural response, we derived a virtual head fixation for which the influence of specific body movements was removed. This strategy enabled us to analyze the behavior of interest (e.g., front paw movements). Thus, we unveiled an unexpectedly large fraction of neurons in the motor cortex with tuning to the paw movements, which was previously masked by body posture tuning. Once established, our framework can be efficiently applied to large datasets while minimizing the experimental workload caused by animal training and manual labeling.
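The "virtual head fixation" idea, removing the modelled influence of specific body movements from the neural response, can be sketched as regression residualization: fit the nuisance variable's contribution to the firing rate and subtract it. This is a single-predictor ordinary-least-squares sketch under stated assumptions, not the paper's multi-variable model, and the data below are made up.

```python
# Sketch of virtual head fixation via residualization (illustrative):
# regress firing rates on a nuisance behavioral variable (e.g., body
# posture) and keep the residuals, in which tuning to other variables
# (e.g., paw movements) can then be assessed.
def residualize(rates, posture):
    """Remove the linear contribution of 'posture' from 'rates' (OLS)."""
    n = len(rates)
    mx = sum(posture) / n
    my = sum(rates) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(posture, rates))
    var = sum((x - mx) ** 2 for x in posture)
    slope = cov / var
    intercept = my - slope * mx
    return [y - (intercept + slope * x) for x, y in zip(posture, rates)]

posture = [0.0, 1.0, 2.0, 3.0]
rates = [1.0, 3.0, 5.0, 7.0]   # toy neuron whose firing is fully posture-driven
residual = residualize(rates, posture)   # ~0 everywhere: posture tuning removed
```

For this toy neuron the residuals are essentially zero because posture explains all of the firing; for a real neuron, structure remaining in the residuals reflects tuning that posture cannot account for.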
Subject(s)
Motor Cortex , Movement , Animals , Motor Cortex/physiology , Motor Neurons/physiology , Movement/physiology , Posture/physiology , Rats , Reproducibility of Results
ABSTRACT
Our knowledge about neuronal activity in the sensorimotor cortex relies primarily on stereotyped movements that are strictly controlled in experimental settings. It remains unclear how results can be carried over to less constrained behavior like that of freely moving subjects. Toward this goal, we developed a self-paced behavioral paradigm that encouraged rats to engage in different movement types. We employed bilateral electrophysiological recordings across the entire sensorimotor cortex and simultaneous paw tracking. These techniques revealed behavioral coupling of neurons with lateralization and an anterior-posterior gradient from the premotor to the primary sensory cortex. The structure of population activity patterns was conserved across animals despite the severe under-sampling of the total number of neurons and variations in electrode positions across individuals. We demonstrated cross-subject and cross-session generalization in a decoding task through alignments of low-dimensional neural manifolds, providing evidence of a conserved neuronal code.
Subject(s)
Sensorimotor Cortex , Rats , Animals , Neurons , Cardiac Electrophysiology , Electrodes , Generalization, Psychological
ABSTRACT
Automatic prostate tumor segmentation often fails to identify the lesion even when multi-parametric MRI data are used as input, and the segmentation output is difficult to verify owing to the lack of clinically established ground-truth images. In this work we use an explainable deep learning method to interpret the predictions of a convolutional neural network (CNN) for prostate tumor segmentation. The CNN uses a U-Net architecture trained on multi-parametric MRI data from 122 patients to automatically segment the prostate gland and prostate tumor lesions. In addition, co-registered ground-truth data from whole-mount histopathology images were available for 15 patients, which were used as a test set during CNN testing. To interpret the segmentation results of the CNN, heat maps were generated using the Gradient-weighted Class Activation Mapping (Grad-CAM) method. The CNN achieved a mean Dice-Sørensen coefficient of 0.62 for the prostate gland and 0.31 for the tumor lesions with the radiologist-drawn ground truth, and 0.32 with the whole-mount histology ground truth for the tumor lesions. Dice-Sørensen coefficients between CNN predictions and manual segmentations from MRI and histology data were not significantly different. In the prostate, the Grad-CAM heat maps could differentiate between tumor and healthy prostate tissue, indicating that the image information in the tumor was essential for the CNN segmentation.
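The Grad-CAM computation itself is simple: each activation channel of a convolutional layer is weighted by the spatial average of the class-score gradient flowing into that channel, and the weighted sum is passed through a ReLU. Below is a minimal, dependency-free sketch on toy tensors; the arrays are illustrative stand-ins for real CNN activations and gradients, not data from this study.

```python
# Minimal Grad-CAM sketch on nested lists (channels x height x width).
def grad_cam(activations, gradients):
    """Return the Grad-CAM heat map: ReLU(sum_k w_k * A_k), where
    w_k is the spatial average of the gradient for channel k."""
    n_ch = len(activations)
    h, w = len(activations[0]), len(activations[0][0])
    # w_k: global average pooling of the score gradient over channel k
    weights = [sum(sum(row) for row in gradients[k]) / (h * w)
               for k in range(n_ch)]
    # Weighted sum of activation channels, clipped at zero (ReLU)
    return [[max(0.0, sum(weights[k] * activations[k][i][j]
                          for k in range(n_ch)))
             for j in range(w)] for i in range(h)]

acts = [[[1.0, 0.0], [0.0, 2.0]],      # toy layer: 2 channels of 2x2
        [[0.0, 1.0], [1.0, 0.0]]]
grads = [[[0.5, 0.5], [0.5, 0.5]],     # channel 0 supports the class score
         [[-1.0, -1.0], [-1.0, -1.0]]] # channel 1 opposes it
heatmap = grad_cam(acts, grads)        # -> [[0.5, 0.0], [0.0, 1.0]]
```

The ReLU keeps only locations where the weighted evidence supports the class, which is why the maps highlight tumor rather than healthy tissue when tumor information drives the prediction.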
Subject(s)
Multiparametric Magnetic Resonance Imaging , Prostatic Neoplasms , Humans , Magnetic Resonance Imaging/methods , Male , Neural Networks, Computer , Prostatic Neoplasms/diagnostic imaging
ABSTRACT
Several tissues contain cells with multiple motile cilia that generate a fluid or particle flow to support development and organ functions; defective motility causes human disease. Developmental cues orient motile cilia, but how cilia are locked into their final position to maintain a directional flow is not understood. Here we find that the actin cytoskeleton is highly dynamic during early development of multiciliated cells (MCCs). While apical actin bundles become increasingly more static, subapical actin filaments are nucleated from the distal tip of ciliary rootlets. Anchorage of these subapical actin filaments requires the presence of microridge-like structures formed during MCC development, and the activity of Nonmuscle Myosin II. Optogenetic manipulation of Ezrin, a core component of the microridge actin-anchoring complex, or inhibition of Myosin Light Chain Kinase interfere with rootlet anchorage and orientation. These observations identify microridge-like structures as an essential component of basal body rootlet anchoring in MCCs.
Subject(s)
Actins , Cilia , Actin Cytoskeleton , Basal Bodies , Cilia/physiology , Cytoskeleton , Humans
ABSTRACT
The ability to understand visual information from limited labeled data is an important aspect of machine learning. While image-level classification has been extensively studied in a semi-supervised setting, dense pixel-level classification with limited data has only drawn attention recently. In this work, we propose an approach for semi-supervised semantic segmentation that learns from limited pixel-wise annotated samples while exploiting additional annotation-free images. The proposed approach relies on adversarial training with a feature matching loss to learn from unlabeled images. It uses two network branches that link semi-supervised classification with semi-supervised segmentation, including self-training. The dual-branch approach reduces both the low-level and the high-level artifacts typical of training with few labels. The approach attains significant improvement over existing methods, especially when trained with very few labeled samples. On several standard benchmarks (PASCAL VOC 2012, PASCAL-Context, and Cityscapes), the approach sets a new state of the art in semi-supervised learning.
ABSTRACT
This paper contributes two novel techniques in the context of image restoration by nonlocal filtering. First, we introduce an efficient implementation of the nonlocal means filter based on arranging the data in a cluster tree. This structuring of the data allows for a fast and accurate preselection of similar patches. In contrast to previous approaches, the preselection is based on the same distance measure as used by the filter itself. It allows for large speedups, especially when the search for similar patches covers the whole image domain, i.e., when the filter is truly nonlocal. Even in the windowed version of the filter, however, the cluster-tree approach compares favorably to previous techniques with respect to quality versus computational cost. Second, we suggest an iterative version of the filter that is derived from a variational principle and is designed to yield nontrivial steady states. It proves particularly useful for restoring regular, textured patterns.
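For intuition, the baseline filter being accelerated works as follows: each sample is replaced by a weighted average of all samples, with weights given by the similarity of the surrounding patches. The brute-force 1-D sketch below illustrates only this baseline; the paper's contribution, the cluster-tree preselection of candidate patches, is deliberately omitted, and the smoothing parameter `h` and toy signal are arbitrary.

```python
# Brute-force ("truly nonlocal") nonlocal means on a 1-D signal.
import math

def nl_means_1d(signal, patch_radius=1, h=0.5):
    """Each output value is a patch-similarity-weighted average of
    the whole signal. O(n^2) per signal; this is exactly the cost the
    cluster-tree preselection is designed to reduce."""
    n = len(signal)
    def patch(i):   # patch around i, with clamped borders
        return [signal[min(max(j, 0), n - 1)]
                for j in range(i - patch_radius, i + patch_radius + 1)]
    out = []
    for i in range(n):
        pi = patch(i)
        weights = []
        for j in range(n):   # compare against every position in the signal
            d2 = sum((a - b) ** 2 for a, b in zip(pi, patch(j)))
            weights.append(math.exp(-d2 / (h * h)))
        z = sum(weights)
        out.append(sum(w * s for w, s in zip(weights, signal)) / z)
    return out

noisy = [0.0, 0.1, 0.0, 1.0, 0.9, 1.0]   # a noisy step edge
denoised = nl_means_1d(noisy)
```

Because similar patches cluster on each side of the step, the filter averages within each plateau and preserves the edge, which is the behavior the cluster tree must reproduce after preselection.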
Subject(s)
Algorithms , Artifacts , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Reproducibility of Results , Sensitivity and Specificity
ABSTRACT
Models for computer vision are commonly defined either w.r.t. low-level concepts such as pixels that are to be grouped, or w.r.t. high-level concepts such as semantic objects that are to be detected and tracked. Combining bottom-up grouping with top-down detection and tracking, although highly desirable, is a challenging problem. We state this joint problem as a co-clustering problem that is principled and tractable by existing algorithms. We demonstrate the effectiveness of this approach by combining bottom-up motion segmentation by grouping of point trajectories with high-level multiple object tracking by clustering of bounding boxes. We show that solving the joint problem is beneficial at the low level, in terms of the FBMS59 motion segmentation benchmark, and at the high level, in terms of the multiple object tracking benchmarks MOT15, MOT16 and the MOT17 challenge, achieving state-of-the-art results in some metrics.
ABSTRACT
We train generative 'up-convolutional' neural networks which are able to generate images of objects given object style, viewpoint, and color. We train the networks on rendered 3D models of chairs, tables, and cars. Our experiments show that the networks do not merely learn all images by heart, but rather find a meaningful representation of 3D models allowing them to assess the similarity of different models, interpolate between given views to generate the missing ones, extrapolate views, and invent new objects not present in the training set by recombining training instances, or even two different object classes. Moreover, we show that such generative networks can be used to find correspondences between different objects from the dataset, outperforming existing approaches on this task.
ABSTRACT
The popularity of level sets for segmentation is mainly based on the sound and convenient treatment of regions and their boundaries. Unfortunately, this convenience has so far been missing from level set methods applied to images with more than two regions. This communication introduces a comparatively simple way to extend active contours to multiple regions while keeping the familiar quality of the two-phase case. We further suggest a strategy to determine the optimum number of regions as well as initializations for the contours.
Subject(s)
Algorithms , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Information Storage and Retrieval/methods , Pattern Recognition, Automated/methods
ABSTRACT
Deep convolutional networks have proven to be very successful in learning task-specific features that allow for unprecedented performance on various computer vision tasks. Training of such networks mostly follows the supervised learning paradigm, where sufficiently many input-output pairs are required for training. Acquisition of large training sets is one of the key challenges when approaching a new task. In this paper, we aim for generic feature learning and present an approach for training a convolutional network using only unlabeled data. To this end, we train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. In contrast to supervised network training, the resulting feature representation is not class specific. It rather provides robustness to the transformations that have been applied during training. This generic feature representation allows for classification results that outperform the state of the art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101, Caltech-256). While features learned with our approach cannot compete with class-specific features from supervised training on a classification task, we show that they are advantageous on geometric matching problems, where they also outperform the SIFT descriptor.
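The surrogate-class construction can be sketched in a few lines: each randomly chosen seed patch defines its own class, and that class's training examples are transformed copies of the patch. The transformations below (flips, transposition) and the toy patches are illustrative stand-ins for the richer set of crops, rotations, and color changes described in the abstract.

```python
# Sketch of surrogate-class dataset construction for unsupervised
# feature learning: one class per seed patch, populated by transforms.
import random

def hflip(p):     return [row[::-1] for row in p]          # mirror left/right
def vflip(p):     return p[::-1]                           # mirror top/bottom
def transpose(p): return [list(r) for r in zip(*p)]        # swap axes

TRANSFORMS = [lambda p: p, hflip, vflip, transpose]

def surrogate_dataset(seed_patches, n_per_class, rng):
    """Return (patch, label) pairs; the label is the seed patch's index.
    A network trained to predict these labels must become invariant to
    the applied transformations."""
    data = []
    for label, seed in enumerate(seed_patches):
        for _ in range(n_per_class):
            data.append((rng.choice(TRANSFORMS)(seed), label))
    return data

rng = random.Random(0)
seeds = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]   # two toy 2x2 'seed' patches
dataset = surrogate_dataset(seeds, n_per_class=4, rng=rng)
```

Classifying such a dataset requires no human labels at all, which is the key point: the supervision signal is manufactured from the transformations themselves.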