Results 1 - 20 of 32
1.
IEEE Trans Vis Comput Graph ; 30(1): 55-65, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37874718

ABSTRACT

This paper describes a novel method for detecting and visualizing vortex structures in unsteady 2D fluid flows. The method is based on an interactive local reference frame estimation that minimizes the observed time derivative of the input flow field v(x, t). A locally optimal reference frame w(x, t) assists the user in the identification of physically observable vortex structures in Observed Line Integral Convolution (LIC) visualizations. The observed LIC visualizations are interactively computed and displayed in a user-steered vortex lens region, embedded in the context of a conventional LIC visualization outside the lens. The locally optimal reference frame is then used to detect observed critical points, where v = w, which are used to seed vortex core lines. Each vortex core line is computed as a solution of the ordinary differential equation (ODE) dc/dt = w(c(t), t), with an observed critical point as initial condition (c(t0), t0). During integration, we enforce a strict error bound on the difference between the extracted core line and the integration of a path line of the input vector field, i.e., a solution to the ODE dx/dt = v(x(t), t). We experimentally verify that this error depends on the step size of the core line integration. This ensures that our method extracts Lagrangian vortex core lines that are the simultaneous solution of both ODEs with a numerical error that is controllable by the integration step size. We show the usability of our method in the context of an interactive system using a lens metaphor, and evaluate the results in comparison to state-of-the-art vortex core line extraction methods.
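The simultaneous integration of the two ODEs with an enforced error bound can be sketched as follows. This is our own simplified illustration (forward Euler, toy fields); the paper's integrator and error control are more sophisticated, and the names `integrate_core_line`, `w`, `v` are ours.

```python
import numpy as np

def integrate_core_line(w, v, c0, t0, t1, dt, err_bound):
    """Integrate the core line dc/dt = w(c, t) from an observed critical
    point c0, while also stepping the path line dx/dt = v(x, t) from the
    same point, and stop when the distance between the two exceeds the
    error bound. (Forward-Euler sketch only.)"""
    c, x, t = np.array(c0, float), np.array(c0, float), t0
    core = [c.copy()]
    while t < t1:
        c = c + dt * w(c, t)   # core line step in the observer field w
        x = x + dt * v(x, t)   # path line step in the input field v
        if np.linalg.norm(c - x) > err_bound:
            break              # error bound violated: stop integration
        core.append(c.copy())
        t += dt
    return np.array(core)

# toy example: a rigid rotation; at the vortex center w == v, so the
# core line and the path line coincide and the error stays zero
rot = lambda p, t: np.array([-p[1], p[0]])
line = integrate_core_line(rot, rot, [0.0, 0.0], 0.0, 1.0, 0.01, 1e-6)
```

Smaller step sizes tighten the agreement between the two ODE solutions, which is the sense in which the numerical error is controllable by the integration step size.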

2.
IEEE Trans Vis Comput Graph ; 30(1): 1380-1390, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37889813

ABSTRACT

We present a hybrid multi-volume rendering approach based on a novel Residency Octree that combines the advantages of out-of-core volume rendering using page tables with those of standard octrees. Octree approaches work by performing hierarchical tree traversal. However, in octree volume rendering, tree traversal and the selection of data resolution are intrinsically coupled. This makes fine-grained empty-space skipping costly. Page tables, on the other hand, allow access to any cached brick from any resolution. However, they do not offer a clear and efficient strategy for substituting missing high-resolution data with lower-resolution data. We enable flexible mixed-resolution out-of-core multi-volume rendering by decoupling the cache residency of multi-resolution data from a resolution-independent spatial subdivision determined by the tree. Instead of one-to-one node-to-brick correspondences, each residency octree node is mapped to a set of bricks from different resolution levels. This makes it possible to efficiently and adaptively choose and mix resolutions, adapt sampling rates, and compensate for cache misses. At the same time, residency octrees support fine-grained empty-space skipping, independent of the data subdivision used for caching. Finally, to facilitate collaboration and outreach, and to eliminate local data storage, our implementation is a web-based, pure client-side renderer using WebGPU and WebAssembly. Our method is faster than prior approaches and efficient for many data channels with a flexible and adaptive choice of data resolution.
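The central decoupling idea, one node mapping to a set of bricks from different resolution levels, can be sketched as below. This is an illustrative simplification under our own naming (`Node`, `finest_resident`); the actual renderer manages GPU page tables rather than Python sets.

```python
# A residency-octree node references bricks from several resolution
# levels instead of exactly one; a lookup returns the finest brick that
# is currently resident in the cache, falling back to coarser data to
# compensate for cache misses.

class Node:
    def __init__(self, bricks):
        # bricks: {level: brick_id}, level 0 = finest resolution
        self.bricks = bricks

def finest_resident(node, cache):
    """Finest-resolution cached brick for this node, or None."""
    for level in sorted(node.bricks):
        brick = node.bricks[level]
        if brick in cache:
            return level, brick
    return None  # nothing resident: caller must request a load

node = Node({0: "b_fine", 2: "b_coarse"})
cache = {"b_coarse"}                     # fine brick not yet loaded
assert finest_resident(node, cache) == (2, "b_coarse")
cache.add("b_fine")
assert finest_resident(node, cache) == (0, "b_fine")
```

Because the spatial subdivision is resolution-independent, empty-space skipping can stay fine-grained even while the data resolution served per node varies with cache residency.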

3.
IEEE Trans Vis Comput Graph ; 29(2): 1491-1505, 2023 Feb.
Article in English | MEDLINE | ID: mdl-34653000

ABSTRACT

The rapidly growing size and complexity of 3D geological models has increased the need for level-of-detail techniques and compact encodings to facilitate interactive visualization. For large-scale hexahedral meshes, state-of-the-art approaches often employ wavelet schemes for level of detail as well as for data compression. Here, wavelet transforms serve two purposes: (1) they achieve substantial compression for data reduction; and (2) the multiresolution encoding provides levels of detail for visualization. However, in coarser detail levels, important geometric features, such as geological faults, often get too smoothed out or lost, due to linear translation-invariant filtering. The same is true for attribute features, such as discontinuities in porosity or permeability. We present a novel, integrated approach addressing both purposes above, while preserving critical data features of both model geometry and its attributes. Our first major contribution is that we completely decouple the computation of levels of detail from data compression, and perform nonlinear filtering in a high-dimensional data space jointly representing the geological model geometry with its attributes. Computing detail levels in this space enables us to jointly preserve features in both geometry and attributes. While designed in a general way, our framework specifically employs joint bilateral filters, computed efficiently on a high-dimensional permutohedral grid. For data compression, after the computation of all detail levels, each level is separately encoded with a standard wavelet transform. Our second major contribution is a compact GPU data structure for the encoded mesh and attributes that enables direct real-time GPU visualization without prior decoding.
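The feature-preserving effect of nonlinear (bilateral) filtering, as opposed to linear translation-invariant filtering, can be illustrated in 1D. This is a deliberately minimal sketch: the paper filters jointly in a high-dimensional space on a permutohedral grid, whereas here we filter a single attribute along one row of cells.

```python
import numpy as np

def bilateral_filter_1d(values, sigma_s, sigma_r):
    """Edge-preserving smoothing of a 1D attribute (e.g. porosity along
    a row of cells): weights fall off with both spatial distance and
    attribute difference, so a discontinuity (a 'fault') is not smeared
    out the way a linear low-pass filter would smear it."""
    n = len(values)
    out = np.empty(n)
    idx = np.arange(n)
    for i in range(n):
        ws = np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2))             # spatial
        wr = np.exp(-((values - values[i]) ** 2) / (2 * sigma_r ** 2))  # range
        w = ws * wr
        out[i] = np.sum(w * values) / np.sum(w)
    return out

# a step discontinuity survives bilateral filtering almost unchanged
step = np.array([0.0] * 8 + [1.0] * 8)
smooth = bilateral_filter_1d(step, sigma_s=2.0, sigma_r=0.1)
```

A Gaussian with the same spatial sigma would blur the step across several cells; the range term suppresses averaging across the discontinuity, which is exactly the property needed to keep geological faults visible in coarse detail levels.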

4.
Brief Bioinform ; 24(1)2023 01 19.
Article in English | MEDLINE | ID: mdl-36434788

ABSTRACT

Ultraliser is a neuroscience-specific software framework capable of creating accurate and biologically realistic 3D models of complex neuroscientific structures at intracellular (e.g. mitochondria and endoplasmic reticula), cellular (e.g. neurons and glia) and even multicellular scales of resolution (e.g. cerebral vasculature and minicolumns). Resulting models are exported as triangulated surface meshes and annotated volumes for multiple applications in in silico neuroscience, allowing scalable supercomputer simulations that can unravel intricate cellular structure-function relationships. Ultraliser implements a high-performance and unconditionally robust voxelization engine adapted to create optimized watertight surface meshes and annotated voxel grids from arbitrary non-watertight triangular soups, digitized morphological skeletons or binary volumetric masks. The framework represents a major leap forward in simulation-based neuroscience, making it possible to employ high-resolution 3D structural models for quantification of surface areas and volumes, which are of the utmost importance for cellular and system simulations. The power of Ultraliser is demonstrated with several use cases in which hundreds of models are created for potential application in diverse types of simulations. Ultraliser is publicly released under the GNU GPL3 license on GitHub (BlueBrain/Ultraliser). SIGNIFICANCE: There is clear evidence that cell shape influences signaling mechanisms. Structural models can therefore provide insight into function; the more realistic the structure, the deeper the insight into function that can be gained. Creating realistic structural models from existing ones is challenging, particularly when they are needed for detailed subcellular simulations. We present Ultraliser, a neuroscience-dedicated framework capable of building structural models with realistic and detailed cellular geometries that can be used for simulations.


Subjects
Neurons, Software, Computer Simulation
5.
IEEE Trans Vis Comput Graph ; 29(1): 646-656, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36155448

ABSTRACT

Large-scale scientific data, such as weather and climate simulations, often comprise a large number of attributes for each data sample, like temperature, pressure, humidity, and many more. Interactive visualization and analysis require filtering according to any desired combination of attributes, in particular logical AND operations, which is challenging for large data and many attributes. Many general data structures for this problem are built for and scale with a fixed number of attributes, and scalability of joint queries with arbitrary attribute subsets remains a significant problem. We propose a flexible probabilistic framework for multivariate range queries that decouples all attribute dimensions via projection, allowing any subset of attributes to be queried with full efficiency. Moreover, our approach is output-sensitive, mainly scaling with the cardinality of the query result rather than with the input data size. This is particularly important for joint attribute queries, where the query output is usually much smaller than the whole data set. Additionally, our approach can split query evaluation between user interaction and rendering, achieving much better scalability for interactive visualization than the previous state of the art. Furthermore, even when a multi-resolution strategy is used for visualization, queries are jointly evaluated at the finest data granularity, because our framework does not limit query accuracy to a fixed spatial subdivision.
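The decoupling of attribute dimensions via projection can be sketched with exact per-attribute indexes: a joint AND query over any attribute subset becomes an intersection of per-attribute candidate sets, so cost tracks the result cardinality rather than the number of stored attributes. This set-based sketch is ours; the paper uses a probabilistic, output-sensitive structure rather than exact sorted lists.

```python
from bisect import bisect_left, bisect_right

def build_index(samples, attr):
    """Sorted (value, sample_id) list for one attribute dimension."""
    return sorted((s[attr], i) for i, s in enumerate(samples))

def range_ids(index, lo, hi):
    """IDs of samples whose attribute value lies in [lo, hi]."""
    a = bisect_left(index, (lo, -1))
    b = bisect_right(index, (hi, float("inf")))
    return {i for _, i in index[a:b]}

def joint_query(indexes, ranges):
    """Logical AND over any subset of attributes via set intersection."""
    sets = [range_ids(indexes[a], lo, hi) for a, (lo, hi) in ranges.items()]
    return set.intersection(*sets) if sets else set()

samples = [{"t": 10, "p": 1.0}, {"t": 25, "p": 0.8}, {"t": 30, "p": 0.9}]
idx = {a: build_index(samples, a) for a in ("t", "p")}
hits = joint_query(idx, {"t": (20, 35), "p": (0.85, 1.0)})  # only sample 2
```

Because each attribute is indexed independently, adding an attribute adds one index rather than forcing a rebuild of a combined multi-dimensional structure, which is what makes arbitrary attribute subsets queryable with full efficiency.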

6.
IEEE Trans Vis Comput Graph ; 28(1): 281-290, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34596555

ABSTRACT

State-of-the-art computation and visualization of vortices in unsteady fluid flow employ objective vortex criteria, which makes them independent of reference frames or observers. However, objectivity by itself, although crucial, is not sufficient to guarantee that one can identify physically-realizable observers that would perceive or detect the same vortices. Moreover, a significant challenge is that a single reference frame is often not sufficient to accurately observe multiple vortices that follow different motions. This paper presents a novel framework for the exploration and use of an interactively-chosen set of observers, of the resulting relative velocity fields, and of objective vortex structures. We show that our approach facilitates the objective detection and visualization of vortices relative to well-adapted reference frame motions, while at the same time guaranteeing that these observers are in fact physically realizable. In order to represent and manipulate observers efficiently, we make use of the low-dimensional vector space structure of the Lie algebra of physically-realizable observer motions. We illustrate that our framework facilitates the efficient choice and guided exploration of objective vortices in unsteady 2D flow, on planar as well as on spherical domains, using well-adapted reference frames.
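The low-dimensional vector space structure of physically-realizable 2D observer motions can be made concrete: in the plane, rigid motions form the Lie algebra se(2), so each observer is a 3-vector (ux, uy, omega) inducing a velocity field, and observers can be combined linearly. This sketch (our own naming) shows the planar case only; the spherical case in the paper works analogously with a different symmetry group.

```python
import numpy as np

def observer_velocity(xi, p):
    """Velocity at point p induced by the se(2) element xi = (ux, uy, omega):
    a translation part plus a rotation part omega * perp(p)."""
    ux, uy, omega = xi
    return np.array([ux - omega * p[1], uy + omega * p[0]])

# the vector-space structure lets us blend observers componentwise
pure_rot = (0.0, 0.0, 1.0)     # unit rotation about the origin
pure_trans = (1.0, 0.0, 0.0)   # unit translation in x
blend = tuple(0.5 * (a + b) for a, b in zip(pure_rot, pure_trans))

p = np.array([0.0, 1.0])
v_rot = observer_velocity(pure_rot, p)    # [-1, 0] at (0, 1)
```

Representing observers as such low-dimensional vectors is what makes their interactive choice and guided exploration efficient: manipulating an observer is manipulating three numbers, and every such vector corresponds to a physically realizable rigid motion.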

7.
IEEE Trans Vis Comput Graph ; 28(1): 573-582, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34587033

ABSTRACT

Achieving high rendering quality in the visualization of large particle data, for example from large-scale molecular dynamics simulations, requires a significant amount of sub-pixel super-sampling, due to very high numbers of particles per pixel. Although it is impossible to super-sample all particles of large-scale data at interactive rates, efficient occlusion culling can decouple the overall data size from a high effective sampling rate of visible particles. However, while the latter is essential for domain scientists to be able to see important data features, performing occlusion culling by sampling or sorting the data is usually slow or error-prone due to visibility estimates of insufficient quality. We present a novel probabilistic culling architecture for super-sampled high-quality rendering of large particle data. Occlusion is dynamically determined at the sub-pixel level, without explicit visibility sorting or data simplification. We introduce confidence maps to probabilistically estimate confidence in the visibility data gathered so far. This enables progressive, confidence-based culling, helping to avoid wrong visibility decisions. In this way, we determine particle visibility with high accuracy, although only a small part of the data set is sampled. This enables extensive super-sampling of (partially) visible particles for high rendering quality, at a fraction of the cost of sampling all particles. For real-time performance with millions of particles, we exploit novel features of recent GPU architectures to group particles into two hierarchy levels, combining fine-grained culling with high frame rates.
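The role of confidence in avoiding wrong culling decisions can be sketched with a toy per-pixel model: a running front-depth estimate plus a confidence value that grows with the number of samples seen, where a particle is culled only when it lies behind the estimated front and confidence is high. This is our own simplified model of the idea, not the paper's GPU architecture.

```python
import random

class PixelVisibility:
    def __init__(self, needed=32):
        self.front = float("inf")  # nearest depth sampled so far
        self.samples = 0
        self.needed = needed       # samples required for full confidence

    def observe(self, depth):
        self.front = min(self.front, depth)
        self.samples += 1

    def confidence(self):
        return min(1.0, self.samples / self.needed)

    def cull(self, depth, threshold=0.9):
        """Conservative: cull only when confidently behind the front."""
        return depth > self.front and self.confidence() >= threshold

random.seed(0)
px = PixelVisibility()
assert not px.cull(5.0)            # no samples yet: never cull
for _ in range(64):
    px.observe(random.uniform(1.0, 2.0))
culled = px.cull(5.0)              # far particle behind a confident front
```

Early in a frame, low confidence keeps everything alive; as sampling progresses, occluded particles are culled progressively, so the expensive super-sampling budget concentrates on (partially) visible particles.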

8.
IEEE Trans Vis Comput Graph ; 27(2): 283-293, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33048741

ABSTRACT

Computing and visualizing features in fluid flow often depends on the observer, or reference frame, relative to which the input velocity field is given. A desired property of feature detectors is therefore that they are objective, meaning independent of the input reference frame. However, the standard definition of objectivity is only given for Euclidean domains and cannot be applied in curved spaces. We build on methods from mathematical physics and Riemannian geometry to generalize objectivity to curved spaces, using the powerful notion of symmetry groups as the basis for definition. From this, we develop a general mathematical framework for the objective computation of observer fields for curved spaces, relative to which other computed measures become objective. An important property of our framework is that it works intrinsically in 2D, instead of in the 3D ambient space. This enables a direct generalization of the 2D computation via optimization of observer fields in flat space to curved domains, without having to perform optimization in 3D. We specifically develop the case of unsteady 2D geophysical flows given on spheres, such as the Earth. Our observer fields in curved spaces then enable objective feature computation as well as the visualization of the time evolution of scalar and vector fields, such that the automatically computed reference frames follow moving structures like vortices in a way that makes them appear to be steady.

9.
J Vis Exp ; (151)2019 09 28.
Article in English | MEDLINE | ID: mdl-31609327

ABSTRACT

Serial sectioning and subsequent high-resolution imaging of biological tissue using electron microscopy (EM) allow for the segmentation and reconstruction of the imaged stacks, revealing ultrastructural patterns that cannot be resolved using 2D images. Indeed, 2D images alone can lead to a misinterpretation of morphologies, as in the case of mitochondria; the use of 3D models is therefore increasingly common and is applied to the formulation of morphology-based functional hypotheses. The use of 3D models generated from light or electron image stacks makes qualitative visual assessments, as well as quantification, more convenient to perform directly in 3D. As these models are often extremely complex, setting up a virtual reality environment is also important to overcome occlusion and to take full advantage of the 3D structure. Here, a step-by-step guide from image segmentation to reconstruction and analysis is described in detail.


Subjects
Three-Dimensional Imaging/methods, Electron Microscopy/methods, Neuroglia/cytology, Neurons/cytology, Virtual Reality
10.
Prog Neurobiol ; 183: 101696, 2019 12.
Article in English | MEDLINE | ID: mdl-31550514

ABSTRACT

With the rapid evolution in the automation of serial electron microscopy in life sciences, the acquisition of terabyte-sized datasets is becoming increasingly common. High resolution serial block-face imaging (SBEM) of biological tissues offers the opportunity to segment and reconstruct nanoscale structures to reveal spatial features previously inaccessible with simple, single section, two-dimensional images. In particular, we focussed here on glial cells, whose reconstruction efforts in literature are still limited, compared to neurons. We imaged a 750,000 cubic micron volume of the somatosensory cortex from a juvenile P14 rat, with 20 nm accuracy. We recognized a total of 186 cells using their nuclei, and classified them as neuronal or glial based on features of the soma and the processes. We reconstructed for the first time 4 almost complete astrocytes and neurons, 4 complete microglia and 4 complete pericytes, including their intracellular mitochondria, 186 nuclei and 213 myelinated axons. We then performed quantitative analysis on the three-dimensional models. Out of the data that we generated, we observed that neurons have larger nuclei, which correlated with their lesser density, and that astrocytes and pericytes have a higher surface to volume ratio, compared to other cell types. All reconstructed morphologies represent an important resource for computational neuroscientists, as morphological quantitative information can be inferred, to tune simulations that take into account the spatial compartmentalization of the different cell types.


Subjects
Astrocytes/ultrastructure, Brain/cytology, Brain/diagnostic imaging, Three-Dimensional Imaging, Microglia/ultrastructure, Scanning Electron Microscopy, Neurons/ultrastructure, Pericytes/ultrastructure, Animals, Electron Microscopy, Rats, Somatosensory Cortex/cytology, Somatosensory Cortex/diagnostic imaging
11.
Front Neurosci ; 12: 664, 2018.
Article in English | MEDLINE | ID: mdl-30319342

ABSTRACT

One will not understand the brain without an integrated exploration of structure and function, these attributes being two sides of the same coin: together they form the currency of biological computation. Accordingly, biologically realistic models require the re-creation of the architecture of the cellular components in which biochemical reactions are contained. We describe here a process of reconstructing a functional oligocellular assembly that is responsible for energy supply management in the brain and creating a computational model of the associated biochemical and biophysical processes. The reactions that underwrite thought are both constrained by and take advantage of brain morphologies pertaining to neurons, astrocytes and the blood vessels that deliver oxygen, glucose and other nutrients. Each component of this neuro-glio-vasculature ensemble (NGV) carries out delegated tasks, as the dynamics of this system provide for each cell type its own energy requirements while including mechanisms that allow cooperative energy transfers. Our process for recreating the ultrastructure of cellular components and modeling the reactions that describe energy flow uses an amalgam of state-of-the-art techniques, including digital reconstructions of electron micrographs, advanced data analysis tools, computational simulations and in silico visualization software. While we demonstrate this process with the NGV, it is equally well adapted to any cellular system for integrating multimodal cellular data in a coherent framework.

12.
Article in English | MEDLINE | ID: mdl-30136947

ABSTRACT

With the rapid increase in raw volume data sizes, such as terabyte-sized microscopy volumes, the corresponding segmentation label volumes have become extremely large as well. We focus on integer label data, whose efficient representation in memory, as well as fast random data access, pose an even greater challenge than the raw image data. Often, it is crucial to be able to rapidly identify which segments are located where, whether for empty space skipping for fast rendering, or for spatial proximity queries. We refer to this process as culling. In order to enable efficient culling of millions of labeled segments, we present a novel hybrid approach that combines deterministic and probabilistic representations of label data in a data-adaptive hierarchical data structure that we call the label list tree. In each node, we adaptively encode label data using either a probabilistic constant-time access representation for fast conservative culling, or a deterministic logarithmic-time access representation for exact queries. We choose the best data structures for representing the labels of each spatial region while building the label list tree. At run time, we further employ a novel query-adaptive culling strategy. While filtering a query down the tree, we prune it successively, and in each node adaptively select the representation that is best suited for evaluating the pruned query, depending on its size. We show an analysis of the efficiency of our approach with several large data sets from connectomics, including a brain scan with more than 13 million labeled segments, and compare our method to conventional culling approaches. Our approach achieves significant reductions in storage size as well as faster query times.
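The two per-node representations, probabilistic constant-time versus deterministic logarithmic-time, can be sketched as a Bloom-filter bitset next to a sorted list. The class names are ours, and real nodes would use tuned filter sizes; the key contract shown is that the probabilistic side may report false positives ("maybe present") but never false negatives, which is what makes it safe for conservative culling.

```python
import hashlib
from bisect import bisect_left

class BloomLabels:
    """Probabilistic, constant-time membership: no false negatives."""
    def __init__(self, labels, bits=256, hashes=3):
        self.bits, self.hashes = bits, hashes
        self.field = 0
        for lab in labels:
            for slot in self._slots(lab):
                self.field |= 1 << slot

    def _slots(self, lab):
        h = hashlib.sha256(str(lab).encode()).digest()
        return [int.from_bytes(h[4 * i:4 * i + 4], "big") % self.bits
                for i in range(self.hashes)]

    def maybe_contains(self, lab):
        return all(self.field >> s & 1 for s in self._slots(lab))

class ExactLabels:
    """Deterministic, O(log n) membership for exact queries."""
    def __init__(self, labels):
        self.sorted = sorted(labels)

    def contains(self, lab):
        i = bisect_left(self.sorted, lab)
        return i < len(self.sorted) and self.sorted[i] == lab

node_labels = [3, 17, 42]
bloom, exact = BloomLabels(node_labels), ExactLabels(node_labels)
```

A builder choosing per node between the two, by region size and query profile, gives the data-adaptive hierarchy the abstract describes: cheap conservative pruning high in the tree, exact evaluation where it matters.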

13.
Article in English | MEDLINE | ID: mdl-30130222

ABSTRACT

Flow fields are usually visualized relative to a global observer, i.e., a single frame of reference. However, often no global frame can depict all flow features equally well. Likewise, objective criteria for detecting features such as vortices often use either a global reference frame, or compute a separate frame for each point in space and time. We propose the first general framework that enables choosing a smooth trade-off between these two extremes. Using global optimization to minimize specific differential geometric properties, we compute a time-dependent observer velocity field that describes the motion of a continuous field of observers adapted to the input flow. This requires developing the novel notion of an observed time derivative. While individual observers are restricted to rigid motions, overall we compute an approximate Killing field, corresponding to almost-rigid motion. This enables continuous transitions between different observers. Instead of focusing only on flow features, we furthermore develop a novel general notion of visualizing how all observers jointly perceive the input field. This in fact requires introducing the concept of an observation time, with respect to which a visualization is computed. We develop the corresponding notions of observed stream, path, streak, and time lines. For efficiency, these characteristic curves can be computed using standard approaches, by first transforming the input field accordingly. Finally, we prove that the input flow perceived by the observer field is objective. This makes derived flow features, such as vortices, objective as well.
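A simplified view of what an observer field buys us: given the input field v and an observer velocity field u, the flow perceived by the observers is, pointwise, the relative field v - u, and a well-adapted observer field makes a moving feature appear steady. This is our own reduction for illustration; the paper's observed time derivative additionally accounts for the observers' motion over time, which plain pointwise subtraction does not capture.

```python
import numpy as np

def relative_field(v, u):
    """Pointwise relative velocity of the flow v as seen by observers u."""
    return lambda p, t: v(p, t) - u(p, t)

omega = 2.0
v = lambda p, t: omega * np.array([-p[1], p[0]])  # steadily rotating vortex
u = v                                             # perfectly adapted observers
w = relative_field(v, u)

p = np.array([0.3, -0.7])
perceived = w(p, 0.0)   # the co-rotating observers see no motion at all
```

In the general case no single rigid observer achieves this everywhere, which is exactly why the method optimizes a continuous field of observers (an approximate Killing field) instead of one global frame.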

14.
IEEE Trans Vis Comput Graph ; 24(1): 944-953, 2018 01.
Article in English | MEDLINE | ID: mdl-28866508

ABSTRACT

Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.

15.
IEEE Trans Vis Comput Graph ; 24(1): 974-983, 2018 01.
Article in English | MEDLINE | ID: mdl-28866532

ABSTRACT

Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activate a different set of objects, which makes it unfeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.
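The heart of the per-pixel ray segment list is interval merging: the depth ranges of the rasterized occupancy boxes covering a pixel are merged into a minimal list of non-empty intervals, and the ray-caster then samples only inside those intervals, leaping over everything between them without any hierarchy traversal. The merging logic can be sketched as below (function names are ours):

```python
def merge_segments(boxes):
    """boxes: per-pixel [(t_enter, t_exit), ...] from rasterized bounding
    boxes -> minimal sorted list of non-empty ray intervals."""
    segs = []
    for start, end in sorted(boxes):
        if segs and start <= segs[-1][1]:               # overlaps previous
            segs[-1] = (segs[-1][0], max(segs[-1][1], end))
        else:
            segs.append((start, end))
    return segs

def sample_positions(segments, step):
    """Ray sample parameters; empty space between segments is skipped."""
    ts = []
    for start, end in segments:
        t = start
        while t <= end:
            ts.append(round(t, 6))
            t += step
    return ts

segs = merge_segments([(0.2, 0.5), (0.4, 0.7), (1.5, 1.6)])
ts = sample_positions(segs, 0.1)   # no samples between t=0.7 and t=1.5
```

Because the lists are built in the object-order rasterization stage, the image-order ray-casting loop stays branch-light, which is where the performance characteristic described above comes from.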

16.
IEEE Trans Vis Comput Graph ; 24(1): 853-861, 2018 01.
Article in English | MEDLINE | ID: mdl-28866534

ABSTRACT

This paper presents Abstractocyte, a system for the visual analysis of astrocytes and their relation to neurons, in nanoscale volumes of brain tissue. Astrocytes are glial cells, i.e., non-neuronal cells that support neurons and the nervous system. The study of astrocytes has immense potential for understanding brain function. However, their complex and widely-branching structure requires high-resolution electron microscopy imaging and makes visualization and analysis challenging. Furthermore, the structure and function of astrocytes is very different from neurons, and therefore requires the development of new visualization and analysis tools. With Abstractocyte, biologists can explore the morphology of astrocytes using various visual abstraction levels, while simultaneously analyzing neighboring neurons and their connectivity. We define a novel, conceptual 2D abstraction space for jointly visualizing astrocytes and neurons. Neuroscientists can choose a specific joint visualization as a point in this space. Interactively moving this point allows them to smoothly transition between different abstraction levels in an intuitive manner. In contrast to simply switching between different visualizations, this preserves the visual context and correlations throughout the transition. Users can smoothly navigate from concrete, highly-detailed 3D views to simplified and abstracted 2D views. In addition to investigating astrocytes, neurons, and their relationships, we enable the interactive analysis of the distribution of glycogen, which is of high importance to neuroscientists. We describe the design of Abstractocyte, and present three case studies in which neuroscientists have successfully used our system to assess astrocytic coverage of synapses, glycogen distribution in relation to synapses, and astrocytic-mitochondria coverage.


Subjects
Astrocytes/cytology, Connectome/methods, Three-Dimensional Imaging/methods, Software, Computer Graphics, Humans, Neurons/cytology
17.
IEEE Trans Vis Comput Graph ; 22(1): 738-46, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26529725

ABSTRACT

In the field of connectomics, neuroscientists acquire electron microscopy volumes at nanometer resolution in order to reconstruct a detailed wiring diagram of the neurons in the brain. The resulting image volumes, which often are hundreds of terabytes in size, need to be segmented to identify cell boundaries, synapses, and important cell organelles. However, the segmentation process of a single volume is very complex, time-intensive, and usually performed using a diverse set of tools and many users. To tackle the associated challenges, this paper presents NeuroBlocks, which is a novel visualization system for tracking the state, progress, and evolution of very large volumetric segmentation data in neuroscience. NeuroBlocks is a multi-user web-based application that seamlessly integrates the diverse set of tools that neuroscientists currently use for manual and semi-automatic segmentation, proofreading, visualization, and analysis. NeuroBlocks is the first system that integrates this heterogeneous tool set, providing crucial support for the management, provenance, accountability, and auditing of large-scale segmentations. We describe the design of NeuroBlocks, starting with an analysis of the domain-specific tasks, their inherent challenges, and our subsequent task abstraction and visual representation. We demonstrate the utility of our design based on two case studies that focus on different user roles and their respective requirements for performing and tracking the progress of segmentation and proofreading in a large real-world connectomics project.


Subjects
Computer Graphics, Connectome/methods, Computer-Assisted Image Processing/methods, Neurosciences/methods, Humans, Internet, User-Computer Interface
18.
IEEE Trans Vis Comput Graph ; 22(1): 1025-34, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26529746

ABSTRACT

Sparse volume data structures enable the efficient representation of large but sparse volumes in GPU memory for computation and visualization. However, the choice of a specific data structure for a given data set depends on several factors, such as the memory budget, the sparsity of the data, and data access patterns. In general, there is no single optimal sparse data structure, but a set of several candidates with individual strengths and drawbacks. One solution to this problem are hybrid data structures which locally adapt themselves to the sparsity. However, they typically suffer from increased traversal overhead which limits their utility in many applications. This paper presents JiTTree, a novel sparse hybrid volume data structure that uses just-in-time compilation to overcome these problems. By combining multiple sparse data structures and reducing traversal overhead we leverage their individual advantages. We demonstrate that hybrid data structures adapt well to a large range of data sets. They are especially superior to other sparse data structures for data sets that locally vary in sparsity. Possible optimization criteria are memory, performance and a combination thereof. Through just-in-time (JIT) compilation, JiTTree reduces the traversal overhead of the resulting optimal data structure. As a result, our hybrid volume data structure enables efficient computations on the GPU, while being superior in terms of memory usage when compared to non-hybrid data structures.
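The "no single optimal sparse data structure" observation reduces to a per-brick cost model: for each brick, pick the candidate representation (here dense array, sparse index-value map, or single constant) that minimizes memory, and then specialize the traversal code to the chosen mix via JIT compilation. Only the cost-model half is sketched below, with made-up byte costs; the JIT half is what removes the per-sample branching that plain hybrid structures suffer from.

```python
def choose_representation(brick, bytes_per_voxel=4):
    """Pick the cheapest representation for one brick of voxel values.
    (Illustrative cost model; real candidates and costs differ.)"""
    values = set(brick)
    nonzero = sum(1 for v in brick if v != 0)
    dense_cost = len(brick) * bytes_per_voxel
    sparse_cost = nonzero * (bytes_per_voxel + 4)   # value + index
    if len(values) == 1:
        return "constant"                           # one value, ~zero cost
    return "sparse" if sparse_cost < dense_cost else "dense"

assert choose_representation([0.0] * 64) == "constant"
assert choose_representation([0.0] * 60 + [1.0] * 4) == "sparse"
assert choose_representation(list(range(64))) == "dense"
```

The optimization criterion could equally be access performance or a weighted combination, matching the memory/performance trade-off the abstract mentions.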

19.
IEEE Trans Vis Comput Graph ; 20(12): 2417-26, 2014 Dec.
Article in English | MEDLINE | ID: mdl-26146475

ABSTRACT

This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
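Why classifying a pdf differs from classifying a mean value can be shown with a tiny Gaussian-mixture voxel: applying a transfer function to the pdf is a convolution, i.e. the expected opacity is the mixture-weighted integral of the transfer function against each Gaussian. The quadrature below is our own naive approximation of that convolution, not the paper's fast scheme.

```python
import math

def expected_tf(mixture, tf, lo=0.0, hi=1.0, n=512):
    """Expected transfer-function value for a voxel whose intensity pdf
    is a Gaussian mixture [(weight, mu, sigma), ...], via midpoint
    quadrature of the convolution integral over [lo, hi]."""
    h = (hi - lo) / n
    total = 0.0
    for w, mu, sigma in mixture:
        for k in range(n):
            s = lo + (k + 0.5) * h
            g = (math.exp(-0.5 * ((s - mu) / sigma) ** 2)
                 / (sigma * math.sqrt(2 * math.pi)))
            total += w * tf(s) * g * h
    return total

# a voxel mixing two materials keeps both intensity peaks; a step
# transfer function at 0.5 then yields the fraction of the bright
# material, whereas the TF applied to the mean intensity (0.5) gives 0
mix = [(0.5, 0.2, 0.02), (0.5, 0.8, 0.02)]
tf = lambda s: 1.0 if s > 0.5 else 0.0
alpha = expected_tf(mix, tf)
```

This is the consistency property across resolution levels: a down-sampled voxel classified through its pdf reacts to the transfer function the way its constituent fine voxels would, instead of producing the artifacts of classifying a low-pass-filtered mean.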


Subjects
Computer Graphics, Three-Dimensional Imaging/methods, Humans, Normal Distribution, Imaging Phantoms, Visible Human Projects
20.
IEEE Trans Vis Comput Graph ; 20(12): 2369-78, 2014 Dec.
Article in English | MEDLINE | ID: mdl-26356951

ABSTRACT

We present NeuroLines, a novel visualization technique designed for scalable detailed analysis of neuronal connectivity at the nanoscale level. The topology of 3D brain tissue data is abstracted into a multi-scale, relative distance-preserving subway map visualization that allows domain scientists to conduct an interactive analysis of neurons and their connectivity. Nanoscale connectomics aims at reverse-engineering the wiring of the brain. Reconstructing and analyzing the detailed connectivity of neurons and neurites (axons, dendrites) will be crucial for understanding the brain and its development and diseases. However, the enormous scale and complexity of nanoscale neuronal connectivity pose big challenges to existing visualization techniques in terms of scalability. NeuroLines offers a scalable visualization framework that can interactively render thousands of neurites, and that supports the detailed analysis of neuronal structures and their connectivity. We describe and analyze the design of NeuroLines based on two real-world use-cases of our collaborators in developmental neuroscience, and investigate its scalability to large-scale neuronal connectivity data.


Subjects
Brain/physiology, Computer Graphics, Connectome/methods, Three-Dimensional Imaging/methods, Nerve Net/physiology, Neurons/physiology, Brain/cytology, Humans, Theoretical Models, Neurons/cytology, Railroads