Results 1 - 20 of 29
1.
Nat Neurosci ; 23(1): 138-151, 2020 01.
Article in English | MEDLINE | ID: mdl-31844315

ABSTRACT

To understand how the brain processes sensory information to guide behavior, we must know how stimulus representations are transformed throughout the visual cortex. Here we report an open, large-scale physiological survey of activity in the awake mouse visual cortex: the Allen Brain Observatory Visual Coding dataset. This publicly available dataset includes the cortical activity of nearly 60,000 neurons from six visual areas, four layers, and 12 transgenic mouse lines in a total of 243 adult mice, in response to a systematic set of visual stimuli. We classify neurons on the basis of joint reliabilities to multiple stimuli and validate this functional classification with models of visual responses. While most classes are characterized by responses to specific subsets of the stimuli, the largest class is not reliably responsive to any of the stimuli and becomes progressively larger in higher visual areas. These classes reveal a functional organization wherein putative dorsal areas show specialization for visual motion signals.
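For readers who want to work with this dataset directly, a minimal access sketch using the AllenSDK is shown below. The cache class and method names follow the publicly documented SDK, but the targeted structure, Cre line and manifest path are illustrative choices, not values taken from the paper.

```python
# Sketch: pull dF/F traces for one Visual Coding session via the AllenSDK.
# Assumes `pip install allensdk`; structure/Cre-line values are illustrative.
from allensdk.core.brain_observatory_cache import BrainObservatoryCache

boc = BrainObservatoryCache(manifest_file="brain_observatory/manifest.json")

# Experiment containers for primary visual cortex in one transgenic line.
containers = boc.get_experiment_containers(targeted_structures=["VISp"],
                                           cre_lines=["Cux2-CreERT2"])
experiments = boc.get_ophys_experiments(
    experiment_container_ids=[c["id"] for c in containers])

# Download one session (NWB file) and read its dF/F traces.
session = boc.get_ophys_experiment_data(experiments[0]["id"])
timestamps, dff = session.get_dff_traces()
print(dff.shape)  # (n_cells, n_timepoints)
```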


Subject(s)
Visual Cortex/anatomy & histology , Visual Cortex/physiology , Animals , Datasets as Topic , Mice
2.
PLoS One ; 14(5): e0213924, 2019.
Article in English | MEDLINE | ID: mdl-31042712

ABSTRACT

The visual cortex is organized into discrete sub-regions, or areas, that are arranged into a hierarchy and serve different functions in the processing of visual information. In retinotopic maps of mouse cortex, there appear to be substantial mouse-to-mouse differences in visual area location, size and shape. Here we quantify the variation in the size, shape and location of 11 visual areas in the mouse, after separating biological variation from measurement noise. We find that the locations and sizes of visual areas show genuine biological variation beyond measurement noise.
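The abstract does not spell out the estimator, but one standard way to separate measurement noise from biological variation, assuming each mouse is mapped more than once, is a simple variance decomposition; the sketch below is illustrative only and is not the method used in the paper.

```python
# Sketch: decompose the observed variance of one area property (e.g. area size)
# into within-mouse (measurement) and across-mouse (biological) components.
import numpy as np

# rows = mice, columns = repeated mapping sessions (hypothetical numbers)
sizes = np.array([[0.92, 0.95, 0.90],
                  [1.10, 1.07, 1.12],
                  [0.85, 0.88, 0.84]])

n_repeats = sizes.shape[1]
within_var = sizes.var(axis=1, ddof=1).mean()        # measurement noise
across_var = sizes.mean(axis=1).var(ddof=1)          # spread of per-mouse means
biological_var = max(across_var - within_var / n_repeats, 0.0)
print(within_var, across_var, biological_var)
```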


Subject(s)
Visual Cortex/anatomy & histology , Animals , Brain Mapping , Male , Mice , Visual Cortex/physiology , Visual Pathways/physiology
3.
Brain Inform ; 5(2): 3, 2018 Jun 06.
Article in English | MEDLINE | ID: mdl-29876679

ABSTRACT

Reconstructing three-dimensional (3D) morphology of neurons is essential for understanding brain structures and functions. Over the past decades, a number of neuron tracing tools, including manual, semiautomatic, and fully automatic approaches, have been developed to extract and analyze 3D neuronal structures. Nevertheless, most of them were built by coding specific rules to extract and connect the structural components of a neuron, and show limited performance on complicated neuron morphologies. Recently, deep learning has outperformed many other machine learning methods in a wide range of image analysis and computer vision tasks. Here we developed a new open-source toolbox, DeepNeuron, which uses deep learning networks to learn features and rules from data and trace neuron morphology in light microscopy images. DeepNeuron provides a family of modules to solve basic yet challenging problems in neuron tracing. These problems include, but are not limited to: (1) detecting neuron signal under different image conditions, (2) connecting neuronal signals into tree(s), (3) pruning and refining tree morphology, (4) quantifying the quality of morphology, and (5) classifying dendrites and axons in real time. We have tested DeepNeuron on light microscopy images, including bright-field and confocal images of human and mouse brain, on which it demonstrates robustness and accuracy in neuron tracing.
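DeepNeuron's published networks are not reproduced here, but the signal-detection module in item (1) can be pictured as a small 3D patch classifier. The PyTorch sketch below is a generic stand-in under that assumption; the architecture and patch size are illustrative, not the toolbox's actual models.

```python
# Sketch: a toy 3D patch classifier (neurite signal vs. background), in the
# spirit of a learned signal detector. Not DeepNeuron's published architecture.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Flatten(),
            nn.Linear(16 * 4 * 4 * 4, 2),   # logits: background vs. neurite
        )

    def forward(self, x):                    # x: (batch, 1, 16, 16, 16)
        return self.net(x)

model = PatchClassifier()
patches = torch.randn(4, 1, 16, 16, 16)      # four random 16^3 image patches
print(model(patches).shape)                   # torch.Size([4, 2])
```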

4.
Science ; 360(6389): 660-663, 2018 05 11.
Article in English | MEDLINE | ID: mdl-29748285

ABSTRACT

Glioblastoma is an aggressive brain tumor that carries a poor prognosis. The tumor's molecular and cellular landscapes are complex, and their relationships to histologic features routinely used for diagnosis are unclear. We present the Ivy Glioblastoma Atlas, an anatomically based transcriptional atlas of human glioblastoma that aligns individual histologic features with genomic alterations and gene expression patterns, thus assigning molecular information to the most important morphologic hallmarks of the tumor. The atlas and its clinical and genomic database are freely accessible online data resources that will serve as a valuable platform for future investigations of glioblastoma pathogenesis, diagnosis, and treatment.


Subject(s)
Brain Neoplasms/genetics , Brain Neoplasms/pathology , Glioblastoma/genetics , Glioblastoma/pathology , Atlases as Topic , Databases, Genetic , Gene Expression Profiling , Humans , Prognosis
5.
Neuroinformatics ; 13(4): 487-99, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26036213

ABSTRACT

Characterizing the identity and types of neurons in the brain, as well as their associated function, requires a means of quantifying and comparing 3D neuron morphology. Presently, neuron comparison methods are based on statistics of neuronal morphology, such as size and number of branches, which are not fully suitable for detecting local similarities and differences in the detailed structure. We developed BlastNeuron to compare neurons in terms of their global appearance, detailed arborization patterns, and topological similarity. BlastNeuron first compares and clusters 3D neuron reconstructions based on global morphology features and moment invariants, independent of their orientations, sizes, level of reconstruction and other variations. Subsequently, BlastNeuron performs local alignment between any pair of retrieved neurons via a tree-topology-driven dynamic programming method. A 3D correspondence map can thus be generated at the resolution of single reconstruction nodes. We applied BlastNeuron to three datasets: (1) 10,000+ neuron reconstructions from a public morphology database, (2) 681 new, manually reconstructed neurons, and (3) neuron reconstructions produced by several independent reconstruction methods. Our approach was able to accurately and efficiently retrieve morphologically and functionally similar neuron structures from a large morphology database, identify local common structures, and find clusters of neurons that share similarities in both morphology and molecular profiles.
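To make the first (global) stage concrete, here is a minimal sketch of clustering neurons by a vector of global morphology features. The feature set is illustrative, and the local tree-alignment stage that follows it in BlastNeuron is not shown.

```python
# Sketch: cluster neurons by global morphology features (illustrative feature
# set; BlastNeuron also uses moment invariants and a tree-alignment stage).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# One row per neuron: [total length (um), n_branches, n_tips, max branch order]
features = np.array([[1200.0, 35, 40, 8],
                     [1150.0, 33, 37, 7],
                     [4300.0, 120, 131, 15],
                     [4500.0, 126, 140, 16]])

# z-score each feature so no single unit dominates the distance
z = (features - features.mean(axis=0)) / features.std(axis=0)
labels = fcluster(linkage(pdist(z), method="average"), t=2, criterion="maxclust")
print(labels)   # e.g. [1 1 2 2]: two morphology clusters
```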


Subject(s)
Database Management Systems , Imaging, Three-Dimensional , Models, Neurological , Neurons/physiology , Algorithms , Animals , Brain/cytology , Cluster Analysis , Humans , Nonlinear Dynamics
6.
Nat Commun ; 5: 4342, 2014 Jul 11.
Article in English | MEDLINE | ID: mdl-25014658

ABSTRACT

Three-dimensional (3D) bioimaging, visualization and data analysis are in pressing need of powerful 3D exploration techniques. We develop virtual finger (VF) to generate 3D curves, points and regions of interest in the 3D space of a volumetric image with a single finger operation, such as a computer-mouse stroke, click or zoom, on the 2D projection plane of the image as visualized on a computer. VF provides efficient methods for acquisition, visualization and analysis of 3D images for roundworm, fruitfly, dragonfly, mouse, rat and human. Specifically, VF enables instant 3D optical zoom-in imaging, 3D free-form optical microsurgery, and 3D visualization and annotation of terabytes of whole-brain image volumes. VF also improves the efficiency of automated 3D reconstruction of neurons and similar biostructures by orders of magnitude over our previous systems. We use VF to generate a projectome of a Drosophila brain from images of 1,107 Drosophila GAL4 lines.
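The core idea of mapping a 2D stroke into 3D can be sketched as follows: for each 2D point on the stroke, search along the viewing ray through the volume and keep the depth with the strongest signal. This is a deliberate simplification of VF's actual ray-shooting estimators, which are more robust than a plain argmax.

```python
# Sketch: lift a 2D mouse stroke into a 3D curve by taking, for each stroke
# point, the depth of maximal intensity along the viewing ray (the z axis here).
import numpy as np

def stroke_to_3d(volume, stroke_xy):
    """volume: (Z, Y, X) image stack; stroke_xy: iterable of (x, y) pixel coords."""
    curve = []
    for x, y in stroke_xy:
        ray = volume[:, y, x]          # intensities along the viewing ray
        z = int(np.argmax(ray))        # depth with the strongest signal
        curve.append((x, y, z))
    return curve

rng = np.random.default_rng(0)
vol = rng.random((32, 64, 64))
print(stroke_to_3d(vol, [(10, 12), (11, 13), (12, 14)]))
```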


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Microsurgery/methods , User-Computer Interface , Animals , Brain/cytology , Caenorhabditis elegans , Drosophila , Lung/cytology , Mice , Models, Animal , Muscle Cells/cytology , Neurons/cytology
7.
Development ; 141(12): 2524-32, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24917506

ABSTRACT

A major limitation in understanding embryonic development is the lack of cell type-specific markers. Existing gene expression and marker atlases provide valuable tools, but they typically have one or more limitations: a lack of single-cell resolution; an inability to register multiple expression patterns to determine their precise relationship; an inability to be upgraded by users; an inability to compare novel patterns with the database patterns; and a lack of three-dimensional images. Here, we develop new 'atlas-builder' software that overcomes each of these limitations. A newly generated atlas is three-dimensional, allows the precise registration of an unlimited number of cell type-specific markers, is searchable and is open-ended. Our software can be used to create an atlas of any tissue in any organism that contains stereotyped cell positions. We used the software to generate an 'eNeuro' atlas of the Drosophila embryonic CNS containing eight transcription factors that mark the major CNS cell types (motor neurons, glia, neurosecretory cells and interneurons). We found that neuronal, but not glial, nuclei occupy stereotyped locations. We added 75 new Gal4 markers to the atlas to identify over 50% of all interneurons in the ventral CNS, and these lines allowed functional access to those interneurons for the first time. We expect the atlas-builder software to benefit a large proportion of the developmental biology community, and the eNeuro atlas to serve as a publicly accessible hub for integrating neuronal attributes (cell lineage, gene expression patterns, axon/dendrite projections, neurotransmitters) and linking them to individual neurons.


Subject(s)
Central Nervous System/cytology , Databases, Genetic , Drosophila melanogaster/embryology , Drosophila melanogaster/genetics , Animals , Axons/metabolism , Cell Lineage , Computational Biology , Dendrites/metabolism , Drosophila Proteins/metabolism , Gene Expression Profiling , Gene Expression Regulation, Developmental , Genetic Markers , Interneurons/metabolism , Mice , Neurons/metabolism , Neurotransmitter Agents , Rats , Software
8.
Nat Protoc ; 9(1): 193-208, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24385149

ABSTRACT

Open-Source 3D Visualization-Assisted Analysis (Vaa3D) is a software platform for the visualization and analysis of large-scale multidimensional images. In this protocol we describe how to use several popular features of Vaa3D, including (i) multidimensional image visualization, (ii) 3D image object generation and quantitative measurement, (iii) 3D image comparison, fusion and management, (iv) visualization of heterogeneous images and respective surface objects and (v) extension of Vaa3D functions using its plug-in interface. We also briefly demonstrate how to integrate these functions for complicated applications of microscopic image visualization and quantitative analysis using three exemplar pipelines, including an automated pipeline for image filtering, segmentation and surface generation; an automated pipeline for 3D image stitching; and an automated pipeline for neuron morphology reconstruction, quantification and comparison. Once a user is familiar with Vaa3D, visualization usually runs in real time and analysis takes less than a few minutes for a simple data set.
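For readers scripting such pipelines, Vaa3D plugins can also be driven from the command line. The sketch below wraps the documented "-x plugin -f function" invocation style in Python; the specific plugin and function names and file names are placeholders, not a verbatim pipeline from this protocol.

```python
# Sketch: call Vaa3D plugins from a script. The "-x/-f/-i/-o/-p" flags follow
# Vaa3D's documented CLI; plugin/function names below are placeholders.
import subprocess

def run_vaa3d_plugin(plugin, func, infile, outfile, params=None):
    cmd = ["vaa3d", "-x", plugin, "-f", func, "-i", infile, "-o", outfile]
    if params:
        cmd += ["-p", params]
    subprocess.run(cmd, check=True)

# Hypothetical three-step pipeline: smooth -> segment -> trace to an SWC file
run_vaa3d_plugin("gaussianfilter", "gf", "raw.v3draw", "smoothed.v3draw")
run_vaa3d_plugin("simplethreshold", "threshold", "smoothed.v3draw", "mask.v3draw")
run_vaa3d_plugin("neurontracer", "trace", "smoothed.v3draw", "neuron.swc")
```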


Subject(s)
Imaging, Three-Dimensional/methods , Software , Animals , Brain/anatomy & histology , Computer Simulation , Drosophila/anatomy & histology , Neurons/ultrastructure , User-Computer Interface
9.
Bioinformatics ; 29(13): i18-26, 2013 Jul 01.
Article in English | MEDLINE | ID: mdl-23812982

ABSTRACT

MOTIVATION: Advances in high-resolution microscopy have recently made possible the analysis of gene expression at the level of individual cells. The fixed lineage of cells in the adult worm Caenorhabditis elegans makes this organism an ideal model for studying complex biological processes like development and aging. However, annotating individual cells in images of adult C. elegans typically requires expertise and significant manual effort. Automation of this task is therefore critical to enabling high-resolution studies of a large number of genes. RESULTS: In this article, we describe an automated method for annotating a subset of 154 cells (including various muscle, intestinal and hypodermal cells) in high-resolution images of adult C. elegans. We formulate the task of labeling cells within an image as a combinatorial optimization problem, where the goal is to minimize a scoring function that compares cells in a test input image with cells from a training atlas of manually annotated worms according to various spatial and morphological characteristics. We propose an approach for solving this problem based on reduction to minimum-cost maximum-flow and apply a cross-entropy-based learning algorithm to tune the weights of our scoring function. We achieve 84% median accuracy across a set of 154 cell labels in this highly variable system. These results demonstrate the feasibility of the automatic annotation of microscopy-based images of adult C. elegans.
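The combinatorial core (match each test-image cell to an atlas label at minimum total cost) can be sketched with a standard assignment solver. The paper's actual reduction is to minimum-cost maximum-flow with a learned, multi-term scoring function; the example below is a simplified one-to-one variant using only spatial distance.

```python
# Sketch: assign segmented cells to atlas labels by minimizing total spatial
# mismatch (Hungarian algorithm). A simplification of the paper's min-cost
# max-flow reduction and learned scoring function.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
atlas_xyz = rng.random((5, 3))                               # atlas cell positions
test_xyz = atlas_xyz + 0.02 * rng.standard_normal((5, 3))    # noisy observed cells

# cost[i, j] = distance between observed cell i and atlas cell j
cost = np.linalg.norm(test_xyz[:, None, :] - atlas_xyz[None, :, :], axis=2)
rows, cols = linear_sum_assignment(cost)
print(dict(zip(rows.tolist(), cols.tolist())))  # observed cell -> atlas label index
```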


Subject(s)
Caenorhabditis elegans/cytology , Gene Expression Profiling , Imaging, Three-Dimensional/methods , Algorithms , Animals , Caenorhabditis elegans/genetics , Caenorhabditis elegans/metabolism , Cell Division , Cell Lineage , Microscopy, Confocal
10.
Curr Biol ; 23(8): 633-43, 2013 Apr 22.
Article in English | MEDLINE | ID: mdl-23541733

ABSTRACT

BACKGROUND: The insect brain can be divided into neuropils that are formed by neurites of both local and remote origin. The complexity of the interconnections obscures how these neuropils are established and interconnected through development. The Drosophila central brain develops from a fixed number of neuroblasts (NBs) that deposit neurons in regional clusters. RESULTS: By determining individual NB clones and pursuing their projections into specific neuropils, we unravel the regional development of the brain neural network. Exhaustive clonal analysis revealed 95 stereotyped neuronal lineages with characteristic cell-body locations and neurite trajectories. Most clones show complex projection patterns, but despite the complexity, neighboring clones often coinnervate the same local neuropil or neuropils and further target a restricted set of distant neuropils. CONCLUSIONS: These observations argue for regional clonal development of both neuropils and neuropil connectivity throughout the Drosophila central brain.


Subject(s)
Drosophila melanogaster/growth & development , Drosophila melanogaster/metabolism , Animals , Brain/cytology , Brain/growth & development , Brain/metabolism , Cell Lineage , Clone Cells/cytology , Clone Cells/metabolism , Drosophila melanogaster/cytology , Drosophila melanogaster/genetics , Female , Larva/cytology , Larva/genetics , Larva/growth & development , Larva/metabolism , Male , Microscopy, Confocal , Neural Stem Cells/cytology , Neural Stem Cells/metabolism , Neuropil/cytology , Neuropil/metabolism
11.
Cell Rep ; 2(4): 991-1001, 2012 Oct 25.
Article in English | MEDLINE | ID: mdl-23063364

ABSTRACT

We established a collection of 7,000 transgenic lines of Drosophila melanogaster. Expression of GAL4 in each line is controlled by a different, defined fragment of genomic DNA that serves as a transcriptional enhancer. We used confocal microscopy of dissected nervous systems to determine the expression patterns driven by each fragment in the adult brain and ventral nerve cord. We present image data on 6,650 lines. Using both manual and machine-assisted annotation, we describe the expression patterns in the most useful lines. We illustrate the utility of these data for identifying novel neuronal cell types, revealing brain asymmetry, and describing the nature and extent of neuronal shape stereotypy. The GAL4 lines allow expression of exogenous genes in distinct, small subsets of the adult nervous system. The set of DNA fragments, each driving a documented expression pattern, will facilitate the generation of additional constructs for manipulating neuronal function.


Subject(s)
Drosophila Proteins/metabolism , Drosophila melanogaster/metabolism , Nervous System/metabolism , Transcription Factors/metabolism , Animals , Animals, Genetically Modified , Brain/metabolism , Databases, Factual , Drosophila Proteins/genetics , Drosophila melanogaster/genetics , Immunohistochemistry , Microscopy, Confocal , Transcription Factors/genetics , Transcription, Genetic
12.
PLoS Comput Biol ; 8(6): e1002519, 2012.
Article in English | MEDLINE | ID: mdl-22719236

ABSTRACT

In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline these visualization and analysis modules using examples of profiling the single-cell gene-expression of C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain.
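As an example of what the segmentation step in such a pipeline looks like in practice, here is a minimal scikit-image sketch (global thresholding, connected-component labeling and per-object measurements). It is a generic illustration, not code from the primer.

```python
# Sketch: a minimal 3D segmentation step: Otsu threshold, connected-component
# labeling and per-object measurements. Generic illustration only.
import numpy as np
from skimage import filters, measure

rng = np.random.default_rng(2)
stack = rng.random((16, 64, 64))
stack[4:8, 10:20, 10:20] += 2.0                 # one bright synthetic "cell"

mask = stack > filters.threshold_otsu(stack)    # global threshold
labels = measure.label(mask)                    # connected components in 3D
for obj in measure.regionprops(labels):
    print(obj.label, obj.area, obj.centroid)    # voxel count and 3D centroid
```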


Subject(s)
Imaging, Three-Dimensional/methods , Microscopy/methods , Animals , Brain/anatomy & histology , Caenorhabditis elegans/cytology , Caenorhabditis elegans/genetics , Computational Biology , Computer Simulation , Drosophila melanogaster/anatomy & histology , Gene Expression Profiling/statistics & numerical data , Imaging, Three-Dimensional/statistics & numerical data , Microscopy/statistics & numerical data
14.
Bioinformatics ; 27(20): 2895-902, 2011 Oct 15.
Article in English | MEDLINE | ID: mdl-21849395

ABSTRACT

MOTIVATION: Automatic recognition of cell identities is critical for quantitative measurement, targeting and manipulation of cells of model animals at single-cell resolution. It has been shown to be a powerful tool for studying gene expression and regulation, cell lineages and cell fates. Existing methods first segment cells and then apply a recognition algorithm in a second step. As a result, segmentation errors in the first step directly affect and complicate the subsequent cell recognition step. Moreover, in new experimental settings, some of the image features that have previously been relied upon to recognize cells may not be easy to reproduce, due to limitations on the number of color channels available for fluorescent imaging or the cost of building transgenic animals. An approach that is more accurate and relies on only a single signal channel is clearly desirable. RESULTS: We have developed a new method, called simultaneous recognition and segmentation (SRS) of cells, and applied it to 3D image stacks of the model organism Caenorhabditis elegans. Given a 3D image stack of the animal and a 3D atlas of target cells, SRS is effectively an atlas-guided voxel classification process: cell recognition is realized by smoothly deforming the atlas to best fit the image, and the segmentation is obtained naturally via classification of all image voxels. The method achieved a 97.7% overall recognition accuracy on a key class of marker cells, the body wall muscle (BWM) cells, in a dataset of 175 C. elegans image stacks containing 14,118 manually curated BWM cells that provide the ground truth for accuracy. This result was achieved without any additional fiducial image features. SRS also automatically identified 14 of the image stacks as involving ±90° rotations. With these stacks excluded from the dataset, the recognition accuracy rose to 99.1%. We also show that SRS is generally applicable to other cell types, e.g. intestinal cells. AVAILABILITY: The supplementary movies can be downloaded from our web site http://penglab.janelia.org/proj/celegans_seganno. The method has been implemented as a plug-in program within the V3D system (http://penglab.janelia.org/proj/v3d) and will be released in the V3D plugin source code repository. CONTACT: pengh@janelia.hhmi.org.
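The atlas-guided idea can be pictured in two steps: align the atlas to the image with a smooth transform, then label every foreground voxel by the nearest deformed atlas cell. The sketch below substitutes a rigid (Kabsch) fit for SRS's actual smooth deformation, so it is only a rough stand-in.

```python
# Sketch: label voxels by the nearest atlas cell after rigidly aligning the
# atlas (Kabsch fit). SRS uses a smooth deformation; this is a rough stand-in.
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src, dst):
    """Rotation R and translation t such that src @ R.T + t approximates dst."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    if np.linalg.det(Vt.T @ U.T) < 0:       # avoid reflections
        Vt[-1] *= -1
    R = Vt.T @ U.T
    t = dst.mean(0) - src.mean(0) @ R.T
    return R, t

rng = np.random.default_rng(3)
atlas_cells = rng.random((10, 3))                        # atlas cell centers
image_cells = atlas_cells + np.array([0.1, -0.05, 0.2])  # shifted "image" cells

R, t = rigid_fit(atlas_cells, image_cells)
tree = cKDTree(atlas_cells @ R.T + t)                    # warped atlas positions

voxels = image_cells + 0.01 * rng.standard_normal((10, 3))   # foreground voxels
_, nearest_atlas_cell = tree.query(voxels)
print(nearest_atlas_cell)                                # atlas label index per voxel
```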


Subject(s)
Algorithms , Caenorhabditis elegans/cytology , Imaging, Three-Dimensional/methods , Animals , Single-Cell Analysis
15.
Bioinformatics ; 27(13): i239-47, 2011 Jul 01.
Article in English | MEDLINE | ID: mdl-21685076

ABSTRACT

MOTIVATION: Digital reconstruction, or tracing, of 3D neuron structures is critical for reverse engineering the wiring and functions of a brain. However, despite a number of existing studies, this task remains challenging, especially when a 3D microscopic image has a low signal-to-noise ratio (SNR) and fragmented neuron segments. Published work can handle these hard situations only by introducing global prior information, such as where a neurite segment starts and terminates. However, manual incorporation of such global information can be very time consuming. Thus, a completely automatic approach for these hard situations is highly desirable. RESULTS: We have developed an automatic graph algorithm, called all-path pruning (APP), to trace the 3D structure of a neuron. To avoid potential mis-tracing of parts of a neuron, APP first produces an initial over-reconstruction by tracing the optimal geodesic shortest path from the seed location to every possible destination voxel/pixel location in the image. Since this initial reconstruction contains all possible paths and thus may contain redundant structural components, we simplify the entire reconstruction without compromising its connectedness by pruning the redundant structural elements, using a new maximal-covering minimal-redundant (MCMR) subgraph algorithm. We show that MCMR has linear computational complexity and will converge. We examined the performance of our method using challenging 3D neuronal image datasets of model organisms (e.g. fruit fly). AVAILABILITY: The software is available upon request. We plan to eventually release the software as a plugin of the V3D-Neuron package at http://penglab.janelia.org/proj/v3d. CONTACT: pengh@janelia.hhmi.org.
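The first stage (geodesic shortest paths from the seed to every voxel) can be sketched with an ordinary shortest-path solver on a voxel graph whose edge costs make bright voxels cheap to traverse; the MCMR pruning stage, which is the paper's main contribution, is not reproduced here.

```python
# Sketch: APP's first stage on a toy 2D image: geodesic shortest paths from a
# seed pixel to every pixel, with bright pixels cheap to traverse. The MCMR
# pruning stage of the paper is omitted.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

img = np.array([[0.1, 0.9, 0.9, 0.1],
                [0.1, 0.1, 0.9, 0.1],
                [0.1, 0.1, 0.9, 0.9]])
h, w = img.shape
idx = lambda r, c: r * w + c

graph = lil_matrix((h * w, h * w))
for r in range(h):
    for c in range(w):
        for dr, dc in ((0, 1), (1, 0)):              # 4-connected neighbors
            rr, cc = r + dr, c + dc
            if rr < h and cc < w:
                # bright pixel pairs get low cost, dark pairs high cost
                graph[idx(r, c), idx(rr, cc)] = 2.0 - img[r, c] - img[rr, cc]

seed = idx(0, 1)
dist, pred = dijkstra(graph.tocsr(), directed=False, indices=seed,
                      return_predecessors=True)
print(dist.reshape(h, w))     # geodesic distance of every pixel to the seed
```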


Subject(s)
Algorithms , Neurons/cytology , Software , Animals , Brain/cytology , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods
16.
Neuroinformatics ; 9(2-3): 247-61, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21547564

ABSTRACT

Digital reconstruction of neurons from microscope images is an important and challenging problem in neuroscience. In this paper, we propose a model-based method to tackle this problem. We first formulate a model structure, then develop an algorithm for computing it by carefully taking into account morphological characteristics of neurons, as well as the image properties under typical imaging protocols. The method has been tested on the data sets used in the DIADEM competition and produced promising results for four out of the five data sets.


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Models, Neurological , Neurons/cytology , Software/trends , Animals , Computer Simulation , Humans , Image Processing, Computer-Assisted/trends , Imaging, Three-Dimensional/trends , Neurons/physiology , Software Design , Software Validation
17.
Nat Methods ; 8(6): 493-500, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21532582

ABSTRACT

Analyzing Drosophila melanogaster neural expression patterns in thousands of three-dimensional image stacks of individual brains requires registering them into a canonical framework based on a fiducial reference of neuropil morphology. Given a target brain labeled with predefined landmarks, the BrainAligner program automatically finds the corresponding landmarks in a subject brain and maps it to the coordinate system of the target brain via a deformable warp. Using a neuropil marker (the antibody nc82) as a reference of the brain morphology and a target brain that is itself a statistical average of data for 295 brains, we achieved a registration accuracy of 2 µm on average, permitting assessment of stereotypy, potential connectivity and functional mapping of the adult fruit fly brain. We used BrainAligner to generate an image pattern atlas of 2954 registered brains containing 470 different expression patterns that cover all the major compartments of the fly brain.
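The deformable-warp step can be pictured as interpolating a displacement field from the matched landmarks. The sketch below uses SciPy's thin-plate-spline RBF interpolator as a stand-in for BrainAligner's actual warp model; the landmark coordinates are synthetic.

```python
# Sketch: landmark-driven deformable mapping from a subject brain into target
# space via a thin-plate-spline interpolator. A stand-in for BrainAligner's
# warp model, fitted here to synthetic landmarks.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(4)
subject_landmarks = rng.random((20, 3)) * 100.0                  # detected (um)
target_landmarks = subject_landmarks + 5.0 * rng.standard_normal((20, 3))

warp = RBFInterpolator(subject_landmarks, target_landmarks,
                       kernel="thin_plate_spline")

# Map arbitrary voxel coordinates of the subject brain into target coordinates.
subject_points = rng.random((5, 3)) * 100.0
print(warp(subject_points))
```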


Subject(s)
Algorithms , Brain/anatomy & histology , Drosophila melanogaster/anatomy & histology , Image Processing, Computer-Assisted/statistics & numerical data , Animals , Animals, Genetically Modified , Brain/metabolism , Drosophila Proteins/genetics , Drosophila melanogaster/genetics , Gene Expression , Green Fluorescent Proteins/genetics , Neuropil/cytology , Recombinant Proteins/genetics , Software , Transcription Factors/genetics
19.
Nat Biotechnol ; 28(4): 348-53, 2010 Apr.
Article in English | MEDLINE | ID: mdl-20231818

ABSTRACT

The V3D system provides three-dimensional (3D) visualization of gigabyte-sized microscopy image stacks in real time on current laptops and desktops. V3D streamlines the online analysis, measurement and proofreading of complicated image patterns by combining ergonomic functions for selecting a location in an image directly in 3D space and for displaying biological measurements, such as from fluorescent probes, using the overlaid surface objects. V3D runs on all major computer platforms and can be enhanced by software plug-ins to address specific biological problems. To demonstrate this extensibility, we built a V3D-based application, V3D-Neuron, to reconstruct complex 3D neuronal structures from high-resolution brain images. V3D-Neuron can precisely digitize the morphology of a single neuron in a fruitfly brain in minutes, with about a 17-fold improvement in reliability and tenfold savings in time compared with other neuron reconstruction tools. Using V3D-Neuron, we demonstrate the feasibility of building a 3D digital atlas of neurite tracts in the fruitfly brain.


Subject(s)
Computer Graphics , Databases, Factual , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Information Storage and Retrieval/methods , Microscopy/methods , User-Computer Interface , Radiology Information Systems
20.
Methods ; 50(2): 63-9, 2010 Feb.
Article in English | MEDLINE | ID: mdl-19698789

ABSTRACT

Automatic alignment (registration) of 3D images of adult fruit fly brains is often hampered by the significant displacement of the relative locations of the two optic lobes (OLs) and the central brain (CB). In one of our ongoing efforts to produce a better image alignment pipeline for adult fruit fly brains, we separate the CB and OLs and align them independently. This paper reports our automatic method for segregating the CB and OLs, in particular under conditions where the signal-to-noise ratio (SNR) is low, the variation of the image intensity is large, and the relative displacement of the OLs and CB is substantial. We design an algorithm to find a minimum-cost 3D surface in a 3D image stack that best separates an OL (of either side, left or right) from the CB. This surface is defined as an aggregation of the respective minimum-cost curves detected in each individual 2D image slice. Each curve is defined by a list of control points that best segregate the OL and CB. To obtain the locations of these control points, we derive an energy function that includes an image energy term defined by local pixel intensities and two internal energy terms that constrain the curve's smoothness and length. A gradient descent method is used to optimize this energy function. To improve both the speed and robustness of the method, for each stack, the optimized control-point locations of one slice are used to initialize the next slice. We have tested this approach on simulated and real 3D fly brain image stacks and demonstrated that the method can reasonably segregate OLs from the CB despite the aforementioned difficulties.
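The per-slice optimization can be sketched as gradient descent on control-point positions of an energy that combines an image term with smoothness and length penalties. The sketch below is a bare-bones variant (one x position per image row, numerical gradients) rather than the paper's exact formulation.

```python
# Sketch: a separating curve parameterized as one x position per image row,
# optimized by gradient descent on image + smoothness + length energy terms.
# A bare-bones stand-in for the paper's formulation.
import numpy as np

def energy(xs, slice_img, alpha=1.0, beta=0.1):
    rows = np.arange(len(xs))
    cols = np.clip(np.round(xs).astype(int), 0, slice_img.shape[1] - 1)
    image_term = slice_img[rows, cols].sum()          # prefer low-signal gaps
    smooth_term = alpha * np.sum(np.diff(xs, 2) ** 2) # curvature penalty
    length_term = beta * np.sum(np.diff(xs) ** 2)     # stretch penalty
    return image_term + smooth_term + length_term

def descend(xs, slice_img, lr=0.5, steps=200, eps=1.0):
    for _ in range(steps):
        grad = np.zeros_like(xs)
        for i in range(len(xs)):                      # central-difference gradient
            d = np.zeros_like(xs)
            d[i] = eps
            grad[i] = (energy(xs + d, slice_img) - energy(xs - d, slice_img)) / (2 * eps)
        xs = xs - lr * grad
    return xs

rng = np.random.default_rng(5)
slice_img = rng.random((32, 64))
slice_img[:, 28:36] *= 0.1               # a dark seam between an OL and the CB
xs0 = np.full(32, 20.0)                  # initialization, e.g. from the previous slice
print(descend(xs0, slice_img)[:5])       # optimized x positions for the first rows
```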


Subject(s)
Brain Mapping/methods , Brain/pathology , Microscopy, Confocal/methods , Optic Lobe, Nonmammalian/anatomy & histology , Algorithms , Animals , Automation , Computer Graphics , Computer Simulation , Drosophila melanogaster , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Models, Statistical , Optic Lobe, Nonmammalian/physiology , Reproducibility of Results