ABSTRACT
Three-dimensional (3D) reconstruction of living brain tissue down to the level of individual synapses would create opportunities for decoding the dynamics and structure-function relationships of the brain's complex and dense information-processing network. However, this has been hindered by insufficient 3D resolution, inadequate signal-to-noise ratio and prohibitive light burden in optical imaging, whereas electron microscopy is inherently static. Here we solved these challenges by developing an integrated optical/machine-learning technology, LIONESS (live information-optimized nanoscopy enabling saturated segmentation). LIONESS leverages optical modifications to stimulated emission depletion microscopy in comprehensively, extracellularly labeled tissue, together with prior information on sample structure via machine learning, to simultaneously achieve isotropic super-resolution, high signal-to-noise ratio and compatibility with living tissue. This allows dense deep-learning-based instance segmentation and 3D reconstruction at the synapse level, incorporating molecular, activity and morphodynamic information. LIONESS opens up avenues for studying the dynamic functional (nano-)architecture of living brain tissue.
Subjects
Brain , Synapses , Microscopy, Fluorescence/methods , Image Processing, Computer-Assisted
ABSTRACT
Over the past century, multichannel fluorescence imaging has been pivotal in myriad scientific breakthroughs by enabling the spatial visualization of proteins within a biological sample. With the shift to digital methods and visualization software, experts can now flexibly pseudocolor and combine image channels, each corresponding to a different protein, to explore their spatial relationships. We thus propose Psudo, an interactive system that allows users to create optimal color palettes for multichannel spatial data. In Psudo, a novel optimization method generates palettes that maximize the perceptual differences between channels while mitigating confusing color blending in overlapping channels. We integrate this method into a system that allows users to explore multichannel image data and to compare and evaluate color palettes for their data. An interactive lensing approach provides on-demand feedback on channel overlap and a color-confusion metric while giving context to the underlying channel values. Color palettes can be applied globally or, using the lens, to local regions of interest. We evaluate our palette-optimization approach using three graphical perception tasks in a crowdsourced user study with 150 participants, showing that users are more accurate at discerning and comparing the underlying data with our approach. Additionally, we showcase Psudo in a case study exploring complex immune responses in cancer tissue data with a biologist.
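The core idea of palette optimization, maximizing perceptual separation between channel colors, can be sketched in a toy form. The snippet below is an illustration only, not Psudo's actual method: it exhaustively picks the subset of candidate colors whose minimum pairwise distance is largest, using Euclidean RGB distance as a crude stand-in for a true perceptual metric (real systems would use a color-appearance space such as CIELAB and optimize continuously).

```python
import itertools

def min_pairwise_distance(palette):
    # smallest distance between any two colors in the palette
    return min(
        sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5
        for c1, c2 in itertools.combinations(palette, 2)
    )

def optimize_palette(candidates, k):
    # exhaustive search over a small candidate pool; maximizes the
    # minimum pairwise color distance (a max-min criterion)
    best = max(itertools.combinations(candidates, k), key=min_pairwise_distance)
    return list(best)

# hypothetical candidate pool: two near-identical reds, a gray, and primaries
candidates = [
    (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0),
    (1.0, 1.0, 0.0), (0.9, 0.1, 0.1), (0.5, 0.5, 0.5),
]
palette = optimize_palette(candidates, 3)  # picks the three primaries
```

The max-min criterion naturally rejects the near-duplicate red and the low-contrast gray, which is the intuition behind keeping channels visually discriminable.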
ABSTRACT
To fully understand how the human brain works, knowledge of its structure at high resolution is needed. Presented here is a computationally intensive reconstruction of the ultrastructure of a cubic millimeter of human temporal cortex that was surgically removed to gain access to an underlying epileptic focus. It contains about 57,000 cells, about 230 millimeters of blood vessels, and about 150 million synapses and comprises 1.4 petabytes. Our analysis showed that glia outnumber neurons 2:1, oligodendrocytes were the most common cell, deep layer excitatory neurons could be classified on the basis of dendritic orientation, and among thousands of weak connections to each neuron, there exist rare powerful axonal inputs of up to 50 synapses. Further studies using this resource may bring valuable insights into the mysteries of the human brain.
Subjects
Cerebral Cortex , Humans , Axons/physiology , Axons/ultrastructure , Cerebral Cortex/blood supply , Cerebral Cortex/ultrastructure , Dendrites/physiology , Neurons/ultrastructure , Oligodendroglia/ultrastructure , Synapses/physiology , Synapses/ultrastructure , Temporal Lobe/ultrastructure , Microscopy
ABSTRACT
Recent advances in high-resolution connectomics provide researchers with access to accurate petascale reconstructions of neuronal circuits and brain networks for the first time. Neuroscientists are analyzing these networks to better understand information processing in the brain. In particular, scientists are interested in identifying specific small network motifs, i.e., repeating subgraphs of the larger brain network that are believed to be neuronal building blocks. Although such motifs are typically small (e.g., 2-6 neurons), the vast data sizes and intricate data complexity present significant challenges to the search and analysis process. To analyze these motifs, it is crucial to review instances of a motif in the brain network and then map the graph structure to detailed 3D reconstructions of the involved neurons and synapses. We present Vimo, an interactive visual approach to analyze neuronal motifs and motif chains in large brain networks. Experts can sketch network motifs intuitively in a visual interface and specify structural properties of the involved neurons and synapses to query large connectomics datasets. Motif instances (MIs) can be explored in high-resolution 3D renderings. To simplify the analysis of MIs, we designed a continuous focus&context metaphor inspired by visual abstractions. This allows users to transition from a highly detailed rendering of the anatomical structure to views that emphasize the underlying motif structure and synaptic connectivity. Furthermore, Vimo supports the identification of motif chains in which a motif is used repeatedly (e.g., 2-4 times) to form a larger network structure. We evaluate Vimo in a user study and an in-depth case study with seven domain experts on motifs in a large connectome of the fruit fly, including more than 21,000 neurons and 20 million synapses. We find that Vimo enables hypothesis generation and confirmation through fast analysis iterations and connectivity highlighting.
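The underlying query problem, finding every instance of a small directed motif in a connectivity graph, can be illustrated with a deliberately naive sketch. This is a hypothetical toy, not Vimo's actual query engine (which must scale to millions of synapses): it enumerates node assignments for a motif given as edges over placeholder nodes 0..k-1.

```python
from itertools import permutations

def find_motif_instances(edges, motif_edges, num_motif_nodes):
    # brute-force subgraph matching: try every ordered assignment of
    # graph nodes to motif nodes and keep those where all motif edges exist
    nodes = sorted({n for e in edges for n in e})
    edge_set = set(edges)
    instances = []
    for mapping in permutations(nodes, num_motif_nodes):
        if all((mapping[u], mapping[v]) in edge_set for u, v in motif_edges):
            instances.append(mapping)
    return instances

# toy connectome with four neurons; the motif is a feedforward triad:
# node 0 -> node 1, node 0 -> node 2, node 1 -> node 2
connectome = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]
triads = find_motif_instances(connectome, [(0, 1), (0, 2), (1, 2)], 3)
# only (a, b, c) satisfies all three motif edges
```

Production motif search replaces this factorial enumeration with indexed subgraph matching, but the input/output contract, a sketched motif in and a list of concrete neuron assignments out, is the same.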
ABSTRACT
The National Cancer Institute (NCI) supports many research programs and consortia, many of which use imaging as a major modality for characterizing cancerous tissue. A trans-consortia Image Analysis Working Group (IAWG) was established in 2019 with a mission to disseminate imaging-related work and foster collaborations. In 2022, the IAWG held a virtual hackathon focused on the challenges of analyzing high-dimensional datasets from fixed cancerous tissues. Standard image-processing techniques have automated feature extraction, but the next generation of imaging data requires more advanced methods to fully utilize the available information. In this perspective, we discuss the current limitations of automated analysis of multiplexed tissue images, the first steps toward a deeper understanding of these limitations, the solutions that have been developed, the new or refined approaches developed during the Image Analysis Hackathon 2022, and where further effort is required. The outstanding problems addressed in the hackathon fell into three main themes: 1) challenges to cell type classification and assessment, 2) translation and visual representation of spatial aspects of high-dimensional data, and 3) scaling digital image analyses to large (multi-terabyte) datasets. We describe the rationale for each specific challenge and the progress made toward addressing it during the hackathon. We also suggest areas that would benefit from more focus and offer insight into broader challenges that the community will need to address as new technologies are developed and integrated into the broad range of image-based modalities and analytical resources already in use within the cancer research community.
ABSTRACT
Advances in electron microscopy, image segmentation and computational infrastructure have given rise to large-scale, richly annotated connectomic datasets that are increasingly shared across communities. To enable collaboration, users need to be able to concurrently create new annotations and correct errors in the automated segmentation by proofreading. In large datasets, every proofreading edit relabels the cell identities of millions of voxels and thousands of annotations such as synapses. For analysis, users require immediate and reproducible access to this constantly changing and expanding data landscape. Here we present the Connectome Annotation Versioning Engine (CAVE), a computational infrastructure for immediate and reproducible connectome analysis in up to petascale datasets (~1 mm³) while proofreading and annotation are ongoing. For segmentation, CAVE provides a distributed proofreading infrastructure for continuous versioning of large reconstructions. Annotations in CAVE are defined by locations so that they can be quickly assigned to the underlying segment, which enables fast analysis queries of CAVE's data at arbitrary time points. CAVE supports schematized, extensible annotations, so that researchers can readily design novel annotation types. CAVE is already used for many connectomics datasets, including the largest datasets available to date.
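The versioning idea, annotations that store only a location and are resolved to a segment ID relative to any requested timestamp, can be sketched minimally. This is an illustration of the concept, not CAVE's actual API or data model: each proofreading edit commits a new segmentation version, and queries bind an annotation to whichever version was current at the requested time, which is what makes past analyses reproducible.

```python
import bisect

class VersionedSegmentation:
    """Toy timestamp-versioned location -> segment-ID lookup (hypothetical)."""

    def __init__(self):
        self.timestamps = []   # sorted commit times of each version
        self.versions = []     # one {location: segment_id} map per version

    def commit(self, timestamp, seg_map):
        # each proofreading edit produces a new immutable version
        self.timestamps.append(timestamp)
        self.versions.append(dict(seg_map))

    def segment_at(self, location, timestamp):
        # resolve against the latest version committed at or before `timestamp`
        i = bisect.bisect_right(self.timestamps, timestamp) - 1
        if i < 0:
            raise ValueError("no segmentation exists before this timestamp")
        return self.versions[i][location]

seg = VersionedSegmentation()
seg.commit(100, {(1, 2, 3): 7})   # initial automated segmentation
seg.commit(200, {(1, 2, 3): 9})   # a proofreading edit relabels the voxel
```

Querying `seg.segment_at((1, 2, 3), t)` returns segment 7 for any time in [100, 200) and 9 afterwards, so an analysis pinned to a timestamp always sees the same answer.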
ABSTRACT
Computing and visualizing features in fluid flow often depends on the observer, or reference frame, relative to which the input velocity field is given. A desired property of feature detectors is therefore that they are objective, meaning independent of the input reference frame. However, the standard definition of objectivity is only given for Euclidean domains and cannot be applied in curved spaces. We build on methods from mathematical physics and Riemannian geometry to generalize objectivity to curved spaces, using the powerful notion of symmetry groups as the basis for definition. From this, we develop a general mathematical framework for the objective computation of observer fields for curved spaces, relative to which other computed measures become objective. An important property of our framework is that it works intrinsically in 2D, instead of in the 3D ambient space. This enables a direct generalization of the 2D computation via optimization of observer fields in flat space to curved domains, without having to perform optimization in 3D. We specifically develop the case of unsteady 2D geophysical flows given on spheres, such as the Earth. Our observer fields in curved spaces then enable objective feature computation as well as the visualization of the time evolution of scalar and vector fields, such that the automatically computed reference frames follow moving structures like vortices in a way that makes them appear to be steady.
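The notion of objectivity can be illustrated with a much simpler flat-space analogue than the paper's Riemannian setting. In the toy below (an illustration only, not the paper's method), the raw velocity values change when the same flow is viewed from a uniformly moving frame, but the residual relative to a least-squares-optimal constant "observer" velocity is identical in both frames, which is the essence of computing measures relative to an optimized observer field.

```python
def relative_to_observer(velocities):
    # the constant minimizing sum of squared residuals is the mean;
    # subtracting it plays the role of a (trivial) optimized observer field
    observer = sum(velocities) / len(velocities)
    return [v - observer for v in velocities]

field = [1.0, 2.0, 4.0, 1.0]
shifted = [v + 3.0 for v in field]   # same flow, seen from a moving frame

# raw values differ between frames, but the observer-relative field does not
assert field != shifted
assert relative_to_observer(field) == relative_to_observer(shifted)
```

The paper's contribution is, in effect, making this frame-invariance work when the constant shift is replaced by the symmetry group of a curved 2D domain such as a sphere, with the observer field computed intrinsically in 2D rather than in the 3D ambient space.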