Results 1 - 20 of 32
1.
Brief Bioinform; 24(1), 2023 Jan 19.
Article in English | MEDLINE | ID: mdl-36434788

ABSTRACT

Ultraliser is a neuroscience-specific software framework capable of creating accurate and biologically realistic 3D models of complex neuroscientific structures at intracellular (e.g. mitochondria and endoplasmic reticula), cellular (e.g. neurons and glia) and even multicellular scales of resolution (e.g. cerebral vasculature and minicolumns). Resulting models are exported as triangulated surface meshes and annotated volumes for multiple applications in in silico neuroscience, allowing scalable supercomputer simulations that can unravel intricate cellular structure-function relationships. Ultraliser implements a high-performance and unconditionally robust voxelization engine adapted to create optimized watertight surface meshes and annotated voxel grids from arbitrary non-watertight triangular soups, digitized morphological skeletons or binary volumetric masks. The framework represents a major leap forward in simulation-based neuroscience, making it possible to employ high-resolution 3D structural models for quantification of surface areas and volumes, which are of the utmost importance for cellular and system simulations. The power of Ultraliser is demonstrated with several use cases in which hundreds of models are created for potential application in diverse types of simulations. Ultraliser is publicly released under the GNU GPL3 license on GitHub (BlueBrain/Ultraliser). SIGNIFICANCE: There is clear evidence that a cell's shape influences its signaling mechanisms. Structural models can therefore offer insight into function: the more realistic the structure, the deeper the insight into the function it supports. Creating realistic structural models from existing ones is challenging, particularly when they are needed for detailed subcellular simulations. We present Ultraliser, a neuroscience-dedicated framework capable of building these structural models with realistic and detailed cellular geometries that can be used for simulations.


Subject(s)
Neurons; Software; Computer Simulation
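
As a loose illustration of the surface voxelization step mentioned in this abstract (and not Ultraliser's actual robust engine; the function names and the barycentric sampling strategy below are assumptions made for the sketch), the following Python snippet marks every voxel touched by a triangle soup:

    import numpy as np

    def voxelize_triangle_soup(triangles, grid_shape, bbox_min, bbox_max, samples=8):
        """Mark voxels touched by a triangle soup (a crude surface mask,
        neither watertight nor strictly conservative).

        triangles: (N, 3, 3) array of vertex coordinates.
        """
        grid = np.zeros(grid_shape, dtype=bool)
        scale = np.asarray(grid_shape) / (np.asarray(bbox_max) - np.asarray(bbox_min))
        # Sample each triangle on a regular barycentric lattice and mark the voxels hit.
        for a, b, c in triangles:
            for i in range(samples + 1):
                for j in range(samples + 1 - i):
                    u, v = i / samples, j / samples
                    p = a + u * (b - a) + v * (c - a)          # point on the triangle
                    idx = np.floor((p - bbox_min) * scale).astype(int)
                    idx = np.clip(idx, 0, np.asarray(grid_shape) - 1)
                    grid[tuple(idx)] = True
        return grid

    # Toy usage: one triangle voxelized into a 16^3 grid.
    tri = np.array([[[0.1, 0.1, 0.5], [0.9, 0.1, 0.5], [0.5, 0.9, 0.5]]])
    mask = voxelize_triangle_soup(tri, (16, 16, 16), np.zeros(3), np.ones(3))
    print(mask.sum(), "surface voxels marked")
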
2.
IEEE Trans Vis Comput Graph; 30(1): 55-65, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37874718

ABSTRACT

This paper describes a novel method for detecting and visualizing vortex structures in unsteady 2D fluid flows. The method is based on an interactive local reference frame estimation that minimizes the observed time derivative of the input flow field v(x, t). A locally optimal reference frame w(x, t) assists the user in the identification of physically observable vortex structures in Observed Line Integral Convolution (LIC) visualizations. The observed LIC visualizations are interactively computed and displayed in a user-steered vortex lens region, embedded in the context of a conventional LIC visualization outside the lens. The locally optimal reference frame is then used to detect observed critical points, where v = w, which are used to seed vortex core lines. Each vortex core line is computed as a solution of the ordinary differential equation (ODE) ċ(t) = w(c(t), t), with an observed critical point as initial condition (c(t0), t0). During integration, we enforce a strict error bound on the difference between the extracted core line and the integration of a path line of the input vector field, i.e., a solution to the ODE ċ(t) = v(c(t), t). We experimentally verify that this error depends on the step size of the core line integration. This ensures that our method extracts Lagrangian vortex core lines that are the simultaneous solution of both ODEs with a numerical error that is controllable by the integration step size. We show the usability of our method in the context of an interactive system using a lens metaphor, and evaluate the results in comparison to state-of-the-art vortex core line extraction methods.
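
The following toy sketch (hedged: synthetic v and w fields, plain RK4, and a simple deviation check rather than the strict error bound of the paper) illustrates the idea of integrating the core line ODE in the observed frame w and comparing it against a path line of the input field v:

    import numpy as np

    def rk4_step(f, x, t, h):
        k1 = f(x, t)
        k2 = f(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = f(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = f(x + h * k3, t + h)
        return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    # Toy unsteady 2D fields: v is the input flow, w a stand-in locally optimal frame.
    def v(x, t):
        return np.array([-x[1] + 0.2 * np.sin(t), x[0]])

    def w(x, t):
        return np.array([-x[1], x[0]])

    x_core = np.array([0.0, 0.0])        # observed critical point (v is close to w here)
    x_path = x_core.copy()
    t, h = 0.0, 0.01
    for _ in range(200):
        x_core = rk4_step(w, x_core, t, h)   # core line: solution of dc/dt = w(c, t)
        x_path = rk4_step(v, x_path, t, h)   # path line: solution of dc/dt = v(c, t)
        t += h

    print("deviation after integration:", np.linalg.norm(x_core - x_path))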

3.
IEEE Trans Vis Comput Graph; 30(1): 1380-1390, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37889813

ABSTRACT

We present a hybrid multi-volume rendering approach based on a novel Residency Octree that combines the advantages of out-of-core volume rendering using page tables with those of standard octrees. Octree approaches work by performing hierarchical tree traversal. However, in octree volume rendering, tree traversal and the selection of data resolution are intrinsically coupled. This makes fine-grained empty-space skipping costly. Page tables, on the other hand, allow access to any cached brick from any resolution. However, they do not offer a clear and efficient strategy for substituting missing high-resolution data with lower-resolution data. We enable flexible mixed-resolution out-of-core multi-volume rendering by decoupling the cache residency of multi-resolution data from a resolution-independent spatial subdivision determined by the tree. Instead of one-to-one node-to-brick correspondences, each residency octree node is mapped to a set of bricks from different resolution levels. This makes it possible to efficiently and adaptively choose and mix resolutions, adapt sampling rates, and compensate for cache misses. At the same time, residency octrees support fine-grained empty-space skipping, independent of the data subdivision used for caching. Finally, to facilitate collaboration and outreach, and to eliminate local data storage, our implementation is a web-based, pure client-side renderer using WebGPU and WebAssembly. Our method is faster than prior approaches and efficient for many data channels with a flexible and adaptive choice of data resolution.
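
A minimal sketch of the central data-structure idea, with illustrative names and a simplified level convention (larger index = coarser), not the paper's implementation: each spatial node records which bricks of which resolution levels are currently resident in the cache and substitutes coarser data on a miss.

    from dataclasses import dataclass, field

    @dataclass
    class ResidencyNode:
        """Spatial subdivision node, decoupled from any single data resolution."""
        children: list = field(default_factory=list)          # up to 8 children (unused in sketch)
        resident_bricks: dict = field(default_factory=dict)   # level -> brick id in cache

        def mark_resident(self, level, brick_id):
            self.resident_bricks[level] = brick_id

        def evict(self, level):
            self.resident_bricks.pop(level, None)

        def best_brick(self, requested_level):
            """Return (level, brick) of the finest cached brick not finer than requested,
            falling back to coarser data on a cache miss instead of failing."""
            candidates = [l for l in self.resident_bricks if l >= requested_level]
            if not candidates:
                return None                      # caller may ascend to a parent node
            level = min(candidates)              # smallest index = finest allowed
            return level, self.resident_bricks[level]

    node = ResidencyNode()
    node.mark_resident(3, "brick_42")            # a coarse brick is cached
    node.mark_resident(1, "brick_7")             # a finer brick is cached as well
    print(node.best_brick(requested_level=0))    # -> (1, 'brick_7'): finest resident substitute
    print(node.best_brick(requested_level=2))    # -> (3, 'brick_42')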

4.
IEEE Trans Vis Comput Graph; 29(2): 1491-1505, 2023 Feb.
Article in English | MEDLINE | ID: mdl-34653000

ABSTRACT

The rapidly growing size and complexity of 3D geological models has increased the need for level-of-detail techniques and compact encodings to facilitate interactive visualization. For large-scale hexahedral meshes, state-of-the-art approaches often employ wavelet schemes for level of detail as well as for data compression. Here, wavelet transforms serve two purposes: (1) they achieve substantial compression for data reduction; and (2) the multiresolution encoding provides levels of detail for visualization. However, in coarser detail levels, important geometric features, such as geological faults, often get too smoothed out or lost, due to linear translation-invariant filtering. The same is true for attribute features, such as discontinuities in porosity or permeability. We present a novel, integrated approach addressing both purposes above, while preserving critical data features of both model geometry and its attributes. Our first major contribution is that we completely decouple the computation of levels of detail from data compression, and perform nonlinear filtering in a high-dimensional data space jointly representing the geological model geometry with its attributes. Computing detail levels in this space enables us to jointly preserve features in both geometry and attributes. While designed in a general way, our framework specifically employs joint bilateral filters, computed efficiently on a high-dimensional permutohedral grid. For data compression, after the computation of all detail levels, each level is separately encoded with a standard wavelet transform. Our second major contribution is a compact GPU data structure for the encoded mesh and attributes that enables direct real-time GPU visualization without prior decoding.
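
To give a flavor of feature-preserving filtering for level-of-detail computation, here is a brute-force 1D stand-in, not the high-dimensional permutohedral-grid implementation described above; the signal, parameters and names are made up for the sketch:

    import numpy as np

    def joint_bilateral_filter(values, guide, sigma_s=2.0, sigma_r=0.1):
        """Brute-force joint bilateral filter on a 1D signal.
        'guide' carries the feature (e.g. a fault or permeability jump) to preserve."""
        n = len(values)
        out = np.empty(n)
        radius = int(3 * sigma_s)
        for i in range(n):
            lo, hi = max(0, i - radius), min(n, i + radius + 1)
            idx = np.arange(lo, hi)
            w_spatial = np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2))
            w_range = np.exp(-((guide[idx] - guide[i]) ** 2) / (2 * sigma_r ** 2))
            w = w_spatial * w_range
            out[i] = np.sum(w * values[idx]) / np.sum(w)
        return out

    # Toy porosity profile with a sharp discontinuity (a "fault").
    x = np.linspace(0, 1, 100)
    porosity = np.where(x < 0.5, 0.1, 0.3) + 0.01 * np.random.randn(100)
    smoothed = joint_bilateral_filter(porosity, guide=porosity)
    print("jump across the fault:", smoothed[55] - smoothed[45])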

5.
IEEE Trans Vis Comput Graph; 29(1): 646-656, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36155448

ABSTRACT

Large-scale scientific data, such as weather and climate simulations, often comprise a large number of attributes for each data sample, like temperature, pressure, humidity, and many more. Interactive visualization and analysis require filtering according to any desired combination of attributes, in particular logical AND operations, which is challenging for large data and many attributes. Many general data structures for this problem are built for and scale with a fixed number of attributes, and scalability of joint queries with arbitrary attribute subsets remains a significant problem. We propose a flexible probabilistic framework for multivariate range queries that decouples all attribute dimensions via projection, allowing any subset of attributes to be queried with full efficiency. Moreover, our approach is output-sensitive, mainly scaling with the cardinality of the query result rather than with the input data size. This is particularly important for joint attribute queries, where the query output is usually much smaller than the whole data set. Additionally, our approach can split query evaluation between user interaction and rendering, achieving much better scalability for interactive visualization than the previous state of the art. Furthermore, even when a multi-resolution strategy is used for visualization, queries are jointly evaluated at the finest data granularity, because our framework does not limit query accuracy to a fixed spatial subdivision.
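
A much-simplified, exact stand-in for the idea of decoupling attribute dimensions (the paper's framework is probabilistic and output-sensitive; the index layout and names here are assumptions): each attribute gets its own sorted index, any subset can be queried, and candidate sets are intersected smallest-first so the work tracks the result size rather than the data size.

    import bisect
    import numpy as np

    class AttributeIndex:
        """One sorted index per attribute, built independently of all others."""
        def __init__(self, values):
            order = np.argsort(values)
            self.sorted_vals = np.asarray(values)[order]
            self.sorted_ids = order

        def range_ids(self, lo, hi):
            a = bisect.bisect_left(self.sorted_vals, lo)
            b = bisect.bisect_right(self.sorted_vals, hi)
            return set(self.sorted_ids[a:b].tolist())

    def joint_query(indices, ranges):
        """AND-combine range predicates over any subset of attributes."""
        candidate_sets = [indices[name].range_ids(lo, hi) for name, (lo, hi) in ranges.items()]
        candidate_sets.sort(key=len)                 # intersect smallest sets first
        result = candidate_sets[0]
        for s in candidate_sets[1:]:
            result &= s
        return result

    rng = np.random.default_rng(0)
    data = {"temperature": rng.uniform(250, 320, 10000),
            "pressure": rng.uniform(900, 1100, 10000),
            "humidity": rng.uniform(0, 1, 10000)}
    indices = {k: AttributeIndex(v) for k, v in data.items()}
    hits = joint_query(indices, {"temperature": (300, 320), "humidity": (0.8, 1.0)})
    print(len(hits), "samples match the joint query")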

6.
IEEE Trans Vis Comput Graph; 28(1): 281-290, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34596555

ABSTRACT

State-of-the-art computation and visualization of vortices in unsteady fluid flow employ objective vortex criteria, which makes them independent of reference frames or observers. However, objectivity by itself, although crucial, is not sufficient to guarantee that one can identify physically-realizable observers that would perceive or detect the same vortices. Moreover, a significant challenge is that a single reference frame is often not sufficient to accurately observe multiple vortices that follow different motions. This paper presents a novel framework for the exploration and use of an interactively-chosen set of observers, of the resulting relative velocity fields, and of objective vortex structures. We show that our approach facilitates the objective detection and visualization of vortices relative to well-adapted reference frame motions, while at the same time guaranteeing that these observers are in fact physically realizable. In order to represent and manipulate observers efficiently, we make use of the low-dimensional vector space structure of the Lie algebra of physically-realizable observer motions. We illustrate that our framework facilitates the efficient choice and guided exploration of objective vortices in unsteady 2D flow, on planar as well as on spherical domains, using well-adapted reference frames.
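
A hedged sketch of representing physically realizable 2D observers as elements of the Lie algebra of rigid motions, se(2): an observer is a pair (angular velocity omega, translational velocity b), its induced velocity field is evaluated pointwise, and observers can be blended linearly in this low-dimensional vector space. The fields and parameters are toy examples, not the paper's system.

    import numpy as np

    def observer_velocity(x, omega, b):
        """Velocity field of a rigid observer motion in se(2):
        u(x) = b + omega * (-x1, x0), i.e. translation plus rotation about the origin."""
        return np.array([b[0] - omega * x[1], b[1] + omega * x[0]])

    def relative_velocity(v, x, t, omega, b):
        """Flow as seen by the observer: the relative field v - u."""
        return v(x, t) - observer_velocity(x, omega, b)

    # Linear combinations of observers stay in the (3-dimensional) Lie algebra,
    # which is what makes interactive blending of observers cheap.
    def blend(params1, params2, alpha):
        (o1, b1), (o2, b2) = params1, params2
        return (1 - alpha) * o1 + alpha * o2, (1 - alpha) * np.asarray(b1) + alpha * np.asarray(b2)

    def v(x, t):                     # toy unsteady flow: a vortex plus drift
        return np.array([-x[1] + 0.5, x[0]])

    obs = blend((1.0, (0.0, 0.0)), (0.0, (0.5, 0.0)), alpha=0.5)
    print(relative_velocity(v, np.array([0.3, -0.2]), 0.0, *obs))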

7.
IEEE Trans Vis Comput Graph; 28(1): 573-582, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34587033

ABSTRACT

Achieving high rendering quality in the visualization of large particle data, for example from large-scale molecular dynamics simulations, requires a significant amount of sub-pixel super-sampling, due to very high numbers of particles per pixel. Although it is impossible to super-sample all particles of large-scale data at interactive rates, efficient occlusion culling can decouple the overall data size from a high effective sampling rate of visible particles. However, while the latter is essential for domain scientists to be able to see important data features, performing occlusion culling by sampling or sorting the data is usually slow or error-prone due to visibility estimates of insufficient quality. We present a novel probabilistic culling architecture for super-sampled high-quality rendering of large particle data. Occlusion is dynamically determined at the sub-pixel level, without explicit visibility sorting or data simplification. We introduce confidence maps to probabilistically estimate confidence in the visibility data gathered so far. This enables progressive, confidence-based culling, helping to avoid wrong visibility decisions. In this way, we determine particle visibility with high accuracy, although only a small part of the data set is sampled. This enables extensive super-sampling of (partially) visible particles for high rendering quality, at a fraction of the cost of sampling all particles. For real-time performance with millions of particles, we exploit novel features of recent GPU architectures to group particles into two hierarchy levels, combining fine-grained culling with high frame rates.
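
The following sketch conveys the flavor of confidence-based culling (the update rule and thresholds are illustrative assumptions; the actual method works per sub-pixel sample on the GPU): a per-pixel occlusion depth estimate only culls particles once enough samples have confirmed it.

    import numpy as np

    class ConfidenceBuffer:
        """Per-pixel depth estimate of the nearest opaque surface plus a confidence
        in [0, 1] that grows as more samples confirm the estimate."""
        def __init__(self, width, height):
            self.depth = np.full((height, width), np.inf)
            self.confidence = np.zeros((height, width))

        def update(self, px, py, sample_depth):
            if sample_depth < self.depth[py, px]:
                self.depth[py, px] = sample_depth
                self.confidence[py, px] = 0.1          # new estimate: low confidence
            else:
                # Sample agrees with (or lies behind) the estimate: gain confidence.
                self.confidence[py, px] = min(1.0, self.confidence[py, px] + 0.1)

        def cull(self, px, py, particle_depth, threshold=0.7):
            """Cull only if the particle is behind a *trusted* occluder estimate;
            otherwise keep it, avoiding wrong visibility decisions."""
            return (self.confidence[py, px] >= threshold
                    and particle_depth > self.depth[py, px])

    buf = ConfidenceBuffer(4, 4)
    for _ in range(10):                 # repeated samples at depth 1.0 build confidence
        buf.update(1, 1, 1.0)
    print(buf.cull(1, 1, particle_depth=2.5))   # True: safely skipped
    print(buf.cull(1, 1, particle_depth=0.5))   # False: potentially visible, keep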

8.
IEEE Trans Vis Comput Graph; 27(2): 283-293, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33048741

ABSTRACT

Computing and visualizing features in fluid flow often depends on the observer, or reference frame, relative to which the input velocity field is given. A desired property of feature detectors is therefore that they are objective, meaning independent of the input reference frame. However, the standard definition of objectivity is only given for Euclidean domains and cannot be applied in curved spaces. We build on methods from mathematical physics and Riemannian geometry to generalize objectivity to curved spaces, using the powerful notion of symmetry groups as the basis for definition. From this, we develop a general mathematical framework for the objective computation of observer fields for curved spaces, relative to which other computed measures become objective. An important property of our framework is that it works intrinsically in 2D, instead of in the 3D ambient space. This enables a direct generalization of the 2D computation via optimization of observer fields in flat space to curved domains, without having to perform optimization in 3D. We specifically develop the case of unsteady 2D geophysical flows given on spheres, such as the Earth. Our observer fields in curved spaces then enable objective feature computation as well as the visualization of the time evolution of scalar and vector fields, such that the automatically computed reference frames follow moving structures like vortices in a way that makes them appear to be steady.

9.
IEEE Trans Vis Comput Graph; 15(6): 1343-1350, 2009.
Article in English | MEDLINE | ID: mdl-19834207

ABSTRACT

This paper describes advanced volume visualization and quantification for applications in non-destructive testing (NDT), which results in novel and highly effective interactive workflows for NDT practitioners. We employ a visual approach to explore and quantify the features of interest, based on transfer functions in the parameter spaces of specific application scenarios. Examples are the orientations of fibres or the roundness of particles. The applicability and effectiveness of our approach are illustrated using two specific scenarios of high practical relevance. First, we discuss the analysis of Steel Fibre Reinforced Sprayed Concrete (SFRSpC). We investigate the orientations of the enclosed steel fibres and their distribution, depending on the concrete's application direction. This is a crucial step in assessing the material's behavior under mechanical stress, which is still in its infancy and therefore a hot topic in the building industry. The second application scenario is the designation of the microstructure of ductile cast irons with respect to the contained graphite. This corresponds to the requirements of the ISO standard 945-1, which deals with 2D metallographic samples. We illustrate how the necessary analysis steps can be carried out much more efficiently using our system for 3D volumes. Overall, we show that a visual approach with custom transfer functions in specific application domains offers significant benefits and has the potential to greatly improve and optimize the workflows of domain scientists and engineers.

10.
IEEE Trans Vis Comput Graph; 15(6): 1505-1514, 2009.
Article in English | MEDLINE | ID: mdl-19834227

ABSTRACT

Recent advances in scanning technology provide high-resolution EM (electron microscopy) datasets that allow neuroscientists to reconstruct complex neural connections in a nervous system. However, due to the enormous size and complexity of the resulting data, segmentation and visualization of neural processes in EM data are usually difficult and very time-consuming tasks. In this paper, we present NeuroTrace, a novel EM volume segmentation and visualization system that consists of two parts: a semi-automatic multiphase level set segmentation with 3D tracking for reconstruction of neural processes, and a specialized volume rendering approach for visualization of EM volumes. It employs view-dependent on-demand filtering and evaluation of a local histogram edge metric, as well as on-the-fly interpolation and ray-casting of implicit surfaces for segmented neural structures. Both methods are implemented on the GPU for interactive performance. NeuroTrace is designed to be scalable to large datasets and data-parallel hardware architectures. A comparison of NeuroTrace with a commonly used manual EM segmentation tool shows that our interactive workflow is faster and easier to use for the reconstruction of complex neural processes.


Subject(s)
Computer Graphics; Databases, Factual; Image Processing, Computer-Assisted/methods; Microscopy, Electron/methods; Nerve Net/anatomy & histology; Algorithms; Animals; Cerebral Cortex/anatomy & histology; Chi-Square Distribution; Mice; Normal Distribution
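
A rough 1D illustration of a local histogram edge metric of the kind mentioned above (the window split, bin count and chi-squared distance are assumptions of this sketch, not NeuroTrace's GPU implementation):

    import numpy as np

    def local_histogram_edge_metric(samples, center, half_width=8, bins=16, value_range=(0.0, 1.0)):
        """Edge strength at 'center' of a 1D array of ray samples: chi-squared
        distance between intensity histograms of the two half-windows."""
        left = samples[max(0, center - half_width):center]
        right = samples[center:center + half_width]
        h_l, _ = np.histogram(left, bins=bins, range=value_range, density=True)
        h_r, _ = np.histogram(right, bins=bins, range=value_range, density=True)
        eps = 1e-9
        return 0.5 * np.sum((h_l - h_r) ** 2 / (h_l + h_r + eps))

    # Toy ray through a membrane-like intensity step.
    ray = np.concatenate([np.full(32, 0.2), np.full(32, 0.8)]) + 0.02 * np.random.randn(64)
    print("at the edge:    ", local_histogram_edge_metric(ray, 32))
    print("inside a region:", local_histogram_edge_metric(ray, 16))
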
11.
J Vis Exp; (151), 2019 Sep 28.
Article in English | MEDLINE | ID: mdl-31609327

ABSTRACT

Serial sectioning and subsequent high-resolution imaging of biological tissue using electron microscopy (EM) allow for the segmentation and reconstruction of high-resolution imaged stacks to reveal ultrastructural patterns that could not be resolved using 2D images. Indeed, 2D images alone can lead to misinterpretation of morphologies, as in the case of mitochondria; the use of 3D models is therefore increasingly common and is applied to the formulation of morphology-based functional hypotheses. 3D models generated from light or electron microscopy image stacks make qualitative visual assessment, as well as quantification, more convenient to perform directly in 3D. Because these models are often extremely complex, it is also important to set up a virtual reality environment to overcome occlusion and to take full advantage of the 3D structure. Here, a step-by-step guide from image segmentation to reconstruction and analysis is described in detail.


Subject(s)
Imaging, Three-Dimensional/methods; Microscopy, Electron/methods; Neuroglia/cytology; Neurons/cytology; Virtual Reality
12.
Prog Neurobiol; 183: 101696, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31550514

ABSTRACT

With the rapid evolution in the automation of serial electron microscopy in life sciences, the acquisition of terabyte-sized datasets is becoming increasingly common. High-resolution serial block-face imaging (SBEM) of biological tissues offers the opportunity to segment and reconstruct nanoscale structures to reveal spatial features previously inaccessible with simple, single-section, two-dimensional images. In particular, we focused here on glial cells, whose reconstructions in the literature are still limited compared to those of neurons. We imaged a 750,000 cubic micron volume of the somatosensory cortex from a juvenile P14 rat, with 20 nm accuracy. We recognized a total of 186 cells using their nuclei, and classified them as neuronal or glial based on features of the soma and the processes. We reconstructed for the first time 4 almost complete astrocytes and neurons, 4 complete microglia and 4 complete pericytes, including their intracellular mitochondria, 186 nuclei and 213 myelinated axons. We then performed quantitative analysis on the three-dimensional models. From the data we generated, we observed that neurons have larger nuclei, which correlated with their lower density, and that astrocytes and pericytes have a higher surface-to-volume ratio compared to other cell types. All reconstructed morphologies represent an important resource for computational neuroscientists, as morphological quantitative information can be inferred to tune simulations that take into account the spatial compartmentalization of the different cell types.


Subject(s)
Astrocytes/ultrastructure; Brain/cytology; Brain/diagnostic imaging; Imaging, Three-Dimensional; Microglia/ultrastructure; Microscopy, Electron, Scanning; Neurons/ultrastructure; Pericytes/ultrastructure; Animals; Microscopy, Electron; Rats; Somatosensory Cortex/cytology; Somatosensory Cortex/diagnostic imaging
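
Quantities such as the surface-to-volume ratio reported above can be computed from a closed, consistently oriented triangle mesh; a minimal sketch (not the pipeline used in the study):

    import numpy as np

    def surface_area_and_volume(vertices, faces):
        """Surface area and enclosed volume of a closed, consistently oriented
        triangle mesh (vertices: (N,3), faces: (M,3) vertex indices)."""
        v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
        cross = np.cross(v1 - v0, v2 - v0)
        area = 0.5 * np.linalg.norm(cross, axis=1).sum()
        # Signed volume: sum of tetrahedra spanned by each face and the origin.
        volume = np.abs(np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum()) / 6.0
        return area, volume

    # Unit cube as a toy "cell": 12 triangles, expected area 6 and volume 1.
    verts = np.array([[0,0,0],[1,0,0],[1,1,0],[0,1,0],[0,0,1],[1,0,1],[1,1,1],[0,1,1]], float)
    faces = np.array([[0,2,1],[0,3,2],[4,5,6],[4,6,7],[0,1,5],[0,5,4],
                      [1,2,6],[1,6,5],[2,3,7],[2,7,6],[3,0,4],[3,4,7]])
    area, volume = surface_area_and_volume(verts, faces)
    print("surface-to-volume ratio:", area / volume)   # 6.0 for the unit cube
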
13.
IEEE Trans Vis Comput Graph; 14(6): 1507-1514, 2008.
Article in English | MEDLINE | ID: mdl-18989003

ABSTRACT

This paper presents a novel method for interactive exploration of industrial CT volumes such as cast metal parts, with the goal of interactively detecting, classifying, and quantifying features using a visualization-driven approach. The standard approach for defect detection builds on region growing, which requires manually tuning parameters such as target ranges for density and size, variance, as well as the specification of seed points. If the results are not satisfactory, region growing must be performed again with different parameters. In contrast, our method allows interactive exploration of the parameter space, completely separated from region growing in an unattended pre-processing stage. The pre-computed feature volume tracks a feature size curve for each voxel over time, which is identified with the main region growing parameter such as variance. A novel 3D transfer function domain over (density, feature size, time) allows for interactive exploration of feature classes. Features and feature size curves can also be explored individually, which helps with transfer function specification and allows coloring individual features and disabling features resulting from CT artifacts. Based on the classification obtained through exploration, the classified features can be quantified immediately.


Subject(s)
Equipment Failure Analysis/methods; Imaging, Three-Dimensional/methods; Industry/instrumentation; Materials Testing/methods; Pattern Recognition, Automated/methods; Tomography, X-Ray Computed/methods; User-Computer Interface; Algorithms; Computer Graphics
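
A toy version of the pre-computed feature-size-curve idea (a naive flood fill over a synthetic block; the real system uses an unattended region-growing pre-process over density, size and variance): for each seed voxel, the grown region size is recorded as a function of the growing parameter.

    import numpy as np
    from collections import deque

    def region_size(volume, seed, tol):
        """Size of the region grown from 'seed' with |density - seed density| <= tol."""
        target = volume[seed]
        visited = {seed}
        queue = deque([seed])
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in ((1,0,0),(-1,0,0),(0,1,0),(0,-1,0),(0,0,1),(0,0,-1)):
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < volume.shape[i] for i in range(3)) and n not in visited \
                   and abs(volume[n] - target) <= tol:
                    visited.add(n)
                    queue.append(n)
        return len(visited)

    def feature_size_curve(volume, seed, tolerances):
        """One curve per voxel: region size as a function of the growing parameter."""
        return [region_size(volume, seed, t) for t in tolerances]

    # Toy CT block with a small low-density "pore" inside denser material.
    vol = np.full((12, 12, 12), 1.0)
    vol[4:7, 4:7, 4:7] = 0.2
    curve = feature_size_curve(vol, seed=(5, 5, 5), tolerances=[0.1, 0.3, 0.9])
    print(curve)   # stays at the pore size, then jumps once the tolerance spans the density step
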
14.
Article in English | MEDLINE | ID: mdl-30130222

ABSTRACT

Flow fields are usually visualized relative to a global observer, i.e., a single frame of reference. However, often no global frame can depict all flow features equally well. Likewise, objective criteria for detecting features such as vortices often use either a global reference frame, or compute a separate frame for each point in space and time. We propose the first general framework that enables choosing a smooth trade-off between these two extremes. Using global optimization to minimize specific differential geometric properties, we compute a time-dependent observer velocity field that describes the motion of a continuous field of observers adapted to the input flow. This requires developing the novel notion of an observed time derivative. While individual observers are restricted to rigid motions, overall we compute an approximate Killing field, corresponding to almost-rigid motion. This enables continuous transitions between different observers. Instead of focusing only on flow features, we furthermore develop a novel general notion of visualizing how all observers jointly perceive the input field. This in fact requires introducing the concept of an observation time, with respect to which a visualization is computed. We develop the corresponding notions of observed stream, path, streak, and time lines. For efficiency, these characteristic curves can be computed using standard approaches, by first transforming the input field accordingly. Finally, we prove that the input flow perceived by the observer field is objective. This makes derived flow features, such as vortices, objective as well.
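
One simple reading of the remark that observed characteristic curves can be obtained by first transforming the input field (hedged: a purely translational toy observer and forward Euler integration, not the paper's optimized observer fields):

    import numpy as np

    def v(x, t):
        """Toy input flow: a unit-strength vortex whose centre drifts to the right."""
        d = x - np.array([0.5 * t, 0.0])
        return np.array([-d[1], d[0]]) + np.array([0.5, 0.0])

    def observer_frame(t):
        """Rigid observer translating with the vortex: position offset and velocity."""
        return np.array([0.5 * t, 0.0]), np.array([0.5, 0.0])

    def observed_path_line(x0, t0, t1, steps=400):
        """Path line in the observer's frame: transform the field, then integrate
        with a standard scheme (forward Euler here, purely for illustration)."""
        c0, _ = observer_frame(t0)
        y = np.array(x0, float) - c0             # position relative to the observer
        t, h = t0, (t1 - t0) / steps
        path = [y.copy()]
        for _ in range(steps):
            c, c_dot = observer_frame(t)
            y = y + h * (v(y + c, t) - c_dot)    # observed (relative) velocity
            t += h
            path.append(y.copy())
        return np.array(path)

    path = observed_path_line([0.3, 0.0], 0.0, 2 * np.pi)
    print("observed start:", path[0], " observed end:", path[-1])
    # In the co-moving frame the drifting vortex appears nearly steady: the observed
    # path line closes into (approximately) a circle around the origin.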

15.
IEEE Trans Vis Comput Graph; 24(1): 944-953, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28866508

ABSTRACT

Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.
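
A toy per-pixel normal distribution function (NDF), to illustrate the idea of re-lighting from binned normals without re-rendering particles; the (theta, phi) binning, bin counts and Lambertian shading are assumptions of this sketch, not the paper's S-NDF/S-MIP encoding.

    import numpy as np

    def bin_normals(normals, n_theta=8, n_phi=16):
        """Histogram of unit normals over a (theta, phi) grid: a per-pixel NDF."""
        theta = np.arccos(np.clip(normals[:, 2], -1.0, 1.0))          # [0, pi]
        phi = np.arctan2(normals[:, 1], normals[:, 0]) + np.pi        # [0, 2*pi]
        ti = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
        pi_ = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)
        hist = np.zeros((n_theta, n_phi))
        np.add.at(hist, (ti, pi_), 1.0)
        return hist

    def relight(hist, light_dir, n_theta=8, n_phi=16):
        """Lambertian re-shading from the NDF alone: one dot product per bin,
        weighted by how many normals fell into that bin (no re-rendering)."""
        t = (np.arange(n_theta) + 0.5) / n_theta * np.pi
        p = (np.arange(n_phi) + 0.5) / n_phi * 2 * np.pi - np.pi
        tt, pp = np.meshgrid(t, p, indexing="ij")
        dirs = np.stack([np.sin(tt) * np.cos(pp), np.sin(tt) * np.sin(pp), np.cos(tt)], axis=-1)
        diffuse = np.maximum(dirs @ np.asarray(light_dir), 0.0)
        return float((hist * diffuse).sum() / max(hist.sum(), 1.0))

    normals = np.random.randn(5000, 3)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    ndf = bin_normals(normals)
    print("pixel shade, light from +z:", relight(ndf, [0.0, 0.0, 1.0]))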

16.
IEEE Trans Vis Comput Graph; 24(1): 974-983, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28866532

ABSTRACT

Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activate a different set of objects, which makes it unfeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.
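
A sketch of the per-pixel ray segment list idea (CPU-side and simplified; in SparseLeap the intervals come from rasterizing view-independent bounding boxes on the GPU): depth intervals of non-empty boxes are merged into disjoint segments, and only those segments are sampled.

    def merge_ray_segments(intervals):
        """Merge per-pixel [t_entry, t_exit] intervals of non-empty bounding boxes
        into a minimal list of disjoint segments along the ray."""
        segments = []
        for start, end in sorted(intervals):
            if segments and start <= segments[-1][1]:
                segments[-1][1] = max(segments[-1][1], end)   # extend current segment
            else:
                segments.append([start, end])                 # gap: start a new segment
        return segments

    def sample_positions(segments, step):
        """Sampling positions restricted to non-empty segments (empty space is leapt over)."""
        positions = []
        for start, end in segments:
            t = start
            while t <= end:
                positions.append(t)
                t += step
        return positions

    # Bounding boxes hit by one ray, in arbitrary order, with overlaps.
    boxes = [(0.9, 1.4), (0.2, 0.5), (0.4, 0.7), (2.0, 2.3)]
    segments = merge_ray_segments(boxes)
    print("segments:", segments)            # [[0.2, 0.7], [0.9, 1.4], [2.0, 2.3]]
    print("samples taken:", len(sample_positions(segments, step=0.05)))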

17.
IEEE Trans Vis Comput Graph; 24(1): 853-861, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28866534

ABSTRACT

This paper presents Abstractocyte, a system for the visual analysis of astrocytes and their relation to neurons, in nanoscale volumes of brain tissue. Astrocytes are glial cells, i.e., non-neuronal cells that support neurons and the nervous system. The study of astrocytes has immense potential for understanding brain function. However, their complex and widely-branching structure requires high-resolution electron microscopy imaging and makes visualization and analysis challenging. Furthermore, the structure and function of astrocytes is very different from neurons, and therefore requires the development of new visualization and analysis tools. With Abstractocyte, biologists can explore the morphology of astrocytes using various visual abstraction levels, while simultaneously analyzing neighboring neurons and their connectivity. We define a novel, conceptual 2D abstraction space for jointly visualizing astrocytes and neurons. Neuroscientists can choose a specific joint visualization as a point in this space. Interactively moving this point allows them to smoothly transition between different abstraction levels in an intuitive manner. In contrast to simply switching between different visualizations, this preserves the visual context and correlations throughout the transition. Users can smoothly navigate from concrete, highly-detailed 3D views to simplified and abstracted 2D views. In addition to investigating astrocytes, neurons, and their relationships, we enable the interactive analysis of the distribution of glycogen, which is of high importance to neuroscientists. We describe the design of Abstractocyte, and present three case studies in which neuroscientists have successfully used our system to assess astrocytic coverage of synapses, glycogen distribution in relation to synapses, and astrocytic-mitochondria coverage.


Subject(s)
Astrocytes/cytology; Connectome/methods; Imaging, Three-Dimensional/methods; Software; Computer Graphics; Humans; Neurons/cytology
18.
Article in English | MEDLINE | ID: mdl-30136947

ABSTRACT

With the rapid increase in raw volume data sizes, such as terabyte-sized microscopy volumes, the corresponding segmentation label volumes have become extremely large as well. We focus on integer label data, whose efficient representation in memory, as well as fast random data access, pose an even greater challenge than the raw image data. Often, it is crucial to be able to rapidly identify which segments are located where, whether for empty space skipping for fast rendering, or for spatial proximity queries. We refer to this process as culling. In order to enable efficient culling of millions of labeled segments, we present a novel hybrid approach that combines deterministic and probabilistic representations of label data in a data-adaptive hierarchical data structure that we call the label list tree. In each node, we adaptively encode label data using either a probabilistic constant-time access representation for fast conservative culling, or a deterministic logarithmic-time access representation for exact queries. We choose the best data structures for representing the labels of each spatial region while building the label list tree. At run time, we further employ a novel query-adaptive culling strategy. While filtering a query down the tree, we prune it successively, and in each node adaptively select the representation that is best suited for evaluating the pruned query, depending on its size. We show an analysis of the efficiency of our approach with several large data sets from connectomics, including a brain scan with more than 13 million labeled segments, and compare our method to conventional culling approaches. Our approach achieves significant reductions in storage size as well as faster query times.
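
A hedged sketch of the per-node choice between a deterministic and a probabilistic label representation (the Bloom filter size, hash scheme and threshold below are illustrative assumptions): small nodes answer exactly in logarithmic time, large nodes answer conservatively in constant time, which is all that conservative culling requires.

    import bisect
    import hashlib

    class BloomFilter:
        """Tiny Bloom filter: constant-time, conservative membership (no false negatives)."""
        def __init__(self, n_bits=1 << 16, n_hashes=3):
            self.bits = bytearray(n_bits // 8)
            self.n_bits, self.n_hashes = n_bits, n_hashes

        def _positions(self, label):
            for k in range(self.n_hashes):
                h = hashlib.blake2b(f"{k}:{label}".encode(), digest_size=8).digest()
                yield int.from_bytes(h, "little") % self.n_bits

        def add(self, label):
            for p in self._positions(label):
                self.bits[p // 8] |= 1 << (p % 8)

        def maybe_contains(self, label):
            return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(label))

    class LabelNode:
        """Per-node label set: exact sorted list when small, Bloom filter when large."""
        def __init__(self, labels, exact_limit=64):
            self.exact = sorted(labels) if len(labels) <= exact_limit else None
            if self.exact is None:
                self.bloom = BloomFilter()
                for l in labels:
                    self.bloom.add(l)

        def query(self, label):
            if self.exact is not None:                       # deterministic, log-time
                i = bisect.bisect_left(self.exact, label)
                return i < len(self.exact) and self.exact[i] == label
            return self.bloom.maybe_contains(label)          # probabilistic, constant-time

    small = LabelNode(range(10))
    large = LabelNode(range(10000))
    print(small.query(7), small.query(99))            # exact: True False
    print(large.query(7), large.query(123456))        # True, and usually False (may be a false positive)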

19.
Front Neurosci; 12: 664, 2018.
Article in English | MEDLINE | ID: mdl-30319342

ABSTRACT

One will not understand the brain without an integrated exploration of structure and function, these attributes being two sides of the same coin: together they form the currency of biological computation. Accordingly, biologically realistic models require the re-creation of the architecture of the cellular components in which biochemical reactions are contained. We describe here a process of reconstructing a functional oligocellular assembly that is responsible for energy supply management in the brain and creating a computational model of the associated biochemical and biophysical processes. The reactions that underwrite thought are both constrained by and take advantage of brain morphologies pertaining to neurons, astrocytes and the blood vessels that deliver oxygen, glucose and other nutrients. Each component of this neuro-glio-vasculature ensemble (NGV) carries out delegated tasks, as the dynamics of this system provide for each cell type its own energy requirements while including mechanisms that allow cooperative energy transfers. Our process for recreating the ultrastructure of cellular components and modeling the reactions that describe energy flow uses an amalgam of state-of-the-art techniques, including digital reconstructions of electron micrographs, advanced data analysis tools, computational simulations and in silico visualization software. While we demonstrate this process with the NGV, it is equally well adapted to any cellular system for integrating multimodal cellular data in a coherent framework.

20.
IEEE Trans Vis Comput Graph; 13(6): 1592-1599, 2007.
Article in English | MEDLINE | ID: mdl-17968114

ABSTRACT

This paper presents a scalable framework for real-time raycasting of large unstructured volumes that employs a hybrid bricking approach. It adaptively combines original unstructured bricks in important (focus) regions, with structured bricks that are resampled on demand in less important (context) regions. The basis of this focus+context approach is interactive specification of a scalar degree of interest (DOI) function. Thus, rendering always considers two volumes simultaneously: a scalar data volume, and the current DOI volume. The crucial problem of visibility sorting is solved by raycasting individual bricks and compositing in visibility order from front to back. In order to minimize visual errors at the grid boundary, it is always rendered accurately, even for resampled bricks. A variety of different rendering modes can be combined, including contour enhancement. A very important property of our approach is that it supports a variety of cell types natively, i.e., it is not constrained to tetrahedral grids, even when interpolation within cells is used. Moreover, our framework can handle multi-variate data, e.g., multiple scalar channels such as temperature or pressure, as well as time-dependent data. The combination of unstructured and structured bricks with different quality characteristics such as the type of interpolation or resampling resolution in conjunction with custom texture memory management yields a very scalable system.
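
A minimal sketch of the degree-of-interest (DOI) driven focus+context decision described above (the threshold, brick layout and names are assumptions; the actual system resamples context bricks on demand and manages texture memory):

    import numpy as np

    def classify_bricks(doi_volume, brick_size, focus_threshold=0.5):
        """Split the DOI volume into bricks and tag each one as focus or context.
        Returns a dict mapping brick grid coordinates to 'unstructured' / 'resampled'."""
        shape = doi_volume.shape
        plan = {}
        for bz in range(0, shape[0], brick_size):
            for by in range(0, shape[1], brick_size):
                for bx in range(0, shape[2], brick_size):
                    brick = doi_volume[bz:bz + brick_size,
                                       by:by + brick_size,
                                       bx:bx + brick_size]
                    # Keep the expensive original representation only where the user cares.
                    kind = "unstructured" if brick.max() >= focus_threshold else "resampled"
                    plan[(bz // brick_size, by // brick_size, bx // brick_size)] = kind
        return plan

    # Toy DOI: interest concentrated in one corner of a 32^3 domain.
    doi = np.zeros((32, 32, 32))
    doi[:8, :8, :8] = 1.0
    plan = classify_bricks(doi, brick_size=8)
    print(sum(kind == "unstructured" for kind in plan.values()), "focus bricks of", len(plan))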
