ABSTRACT
Applying certain visualization techniques to datasets described on unstructured grids requires the interpolation of variables of interest at arbitrary locations within the dataset's domain of definition. Typical solutions to the problem of finding the grid element enclosing a given interpolation point make use of a variety of spatial subdivision schemes. However, existing solutions are memory-intensive, do not scale well to large grids, or do not work reliably on grids describing complex geometries. In this paper, we propose a data structure and associated construction algorithm for fast cell location in unstructured grids, and apply it to the interpolation problem. Based on the concept of bounding interval hierarchies, the proposed approach is memory-efficient, fast, and numerically robust. We examine the performance characteristics of the proposed approach and compare it to existing approaches using a number of benchmark problems related to vector field visualization. Furthermore, we demonstrate that our approach can successfully accommodate large datasets, and discuss application to visualization on both CPUs and GPUs.
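To make the idea concrete, the following minimal Python sketch (not the paper's implementation) builds a bounding interval hierarchy over per-cell bounding boxes and collects the candidate cells that may contain a query point; the exact point-in-cell test against each candidate would follow. All names such as build_bih and candidate_cells are hypothetical.

```python
import numpy as np

class BIHNode:
    """Inner nodes store a split axis and two clip planes; leaves store cell indices."""
    def __init__(self, axis=None, left_max=None, right_min=None,
                 left=None, right=None, cells=None):
        self.axis, self.left_max, self.right_min = axis, left_max, right_min
        self.left, self.right, self.cells = left, right, cells

def build_bih(boxes, indices=None, leaf_size=4):
    # boxes: (N, 2, dim) array of per-cell (min, max) corners.
    if indices is None:
        indices = np.arange(len(boxes))
    if len(indices) <= leaf_size:
        return BIHNode(cells=indices)
    lo = boxes[indices, 0].min(axis=0)
    hi = boxes[indices, 1].max(axis=0)
    axis = int(np.argmax(hi - lo))                  # split along the longest extent
    centers = boxes[indices].mean(axis=1)[:, axis]
    order = np.argsort(centers)
    half = len(indices) // 2
    left_idx, right_idx = indices[order[:half]], indices[order[half:]]
    return BIHNode(axis=axis,
                   left_max=boxes[left_idx, 1, axis].max(),    # clip plane of left child
                   right_min=boxes[right_idx, 0, axis].min(),  # clip plane of right child
                   left=build_bih(boxes, left_idx, leaf_size),
                   right=build_bih(boxes, right_idx, leaf_size))

def candidate_cells(node, p, out):
    """Collect cells from every leaf whose node intervals contain p (candidates for exact tests)."""
    if node.cells is not None:
        out.extend(int(i) for i in node.cells)
        return
    if p[node.axis] <= node.left_max:
        candidate_cells(node.left, p, out)
    if p[node.axis] >= node.right_min:
        candidate_cells(node.right, p, out)

# Tiny usage example with random axis-aligned boxes standing in for cell bounds.
rng = np.random.default_rng(0)
mins = rng.random((100, 3))
boxes = np.stack([mins, mins + 0.05], axis=1)
tree = build_bih(boxes)
hits = []
candidate_cells(tree, np.array([0.5, 0.5, 0.5]), hits)
print("candidate cells:", hits)   # an exact containment test per candidate follows
```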
ABSTRACT
A new material interface reconstruction method for volume fraction data is presented. Our method comprises two components: first, we generate an initial interface topology; then, using a combination of smoothing and volumetric forces within an active interface model, we iteratively transform the initial material interfaces into high-quality surfaces that accurately approximate the problem's volume fractions. Unlike all previous work, our new method produces material interfaces that are smooth and continuous across cell boundaries and that segment cells into regions with proper volume. These properties are critical during visualization and analysis. Generating high-quality mesh representations of material interfaces is required for accurate calculations of interface statistics, and dramatically increases the utility of material boundary visualizations.
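The abstract does not spell out the update rules; as a rough 2D illustration of an active-interface iteration, the sketch below alternates Laplacian smoothing with a volume (area) restoring force along vertex normals. The functions polygon_area and relax_interface and all weights are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def polygon_area(p):
    """Signed area of a closed 2D polygon given as an (N, 2) vertex array."""
    x, y = p[:, 0], p[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)

def relax_interface(p, target_area, steps=200, smooth_w=0.3, vol_w=0.5):
    """Alternate Laplacian smoothing with an area-restoring force along vertex normals."""
    for _ in range(steps):
        # Smoothing force: pull each vertex toward the midpoint of its neighbors.
        lap = 0.5 * (np.roll(p, 1, axis=0) + np.roll(p, -1, axis=0)) - p
        # Outward vertex normals (rotate the central-difference tangents by 90 degrees).
        tang = np.roll(p, -1, axis=0) - np.roll(p, 1, axis=0)
        normals = np.stack([tang[:, 1], -tang[:, 0]], axis=1)
        normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12
        # Volumetric force: expand or shrink until the enclosed area matches the target.
        area_err = target_area - polygon_area(p)
        p = p + smooth_w * lap + vol_w * area_err * normals / len(p)
    return p

# Usage: start from a jagged circle and relax toward a smooth curve enclosing area 1.0.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
init = np.stack([np.cos(t), np.sin(t)], axis=1) * (1 + 0.2 * np.sin(7 * t))[:, None]
final = relax_interface(init, target_area=1.0)
print("area after relaxation:", polygon_area(final))
```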
ABSTRACT
Integral surfaces are ideal tools to illustrate vector fields and fluid flow structures. However, these surfaces can be visually complex and exhibit difficult geometric properties, owing to strong stretching, shearing, and folding of the flow from which they are derived. Many techniques for non-photorealistic rendering have been presented previously. It is, however, unclear how these techniques can be applied to integral surfaces. In this paper, we examine how transparency and texturing techniques can be used with integral surfaces to convey both shape and directional information. We present a rendering pipeline that combines these techniques with the aim of representing integral surfaces faithfully and accurately while improving visualization insight. The presented pipeline is implemented directly on the GPU, providing real-time interaction for all rendering modes, and does not require expensive preprocessing of integral surfaces after computation.
ABSTRACT
Time and streak surfaces are ideal tools to illustrate time-varying vector fields, since they directly appeal to the intuition of coherently moving particles. However, efficient generation of high-quality time and streak surfaces for complex, large, and time-varying vector field data has been elusive due to the computational effort involved. In this work, we propose a novel algorithm for computing such surfaces. Our approach is based on decoupling surface advection from surface adaptation; it yields improved efficiency over other surface-tracking methods and lets us leverage the inherent parallelism in surface advection, resulting in faster parallel computation. Moreover, our algorithm produces the entire evolution of a time or streak surface in a compact representation, allowing for interactive, high-quality rendering, visualization, and exploration of the evolving surface. Finally, we discuss a number of ways to improve surface depiction through advanced rendering and texturing while preserving interactivity, provide examples for real-world datasets, and analyze the behavior of our algorithm on them.
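A minimal sketch of the decoupling idea, reduced to a single advancing front in a 2D analytic field: an advection pass moves every vertex independently (the trivially parallel part), and a separate adaptation pass inserts vertices where the front has stretched. The field, step sizes, and thresholds are hypothetical stand-ins, not the paper's scheme.

```python
import numpy as np

def velocity(p, t):
    """Analytic stand-in for a time-varying 2D vector field (a pulsating gyre)."""
    x, y = p[..., 0], p[..., 1]
    return np.stack([-np.sin(np.pi * x) * np.cos(np.pi * y) * np.cos(t),
                      np.cos(np.pi * x) * np.sin(np.pi * y) * np.cos(t)], axis=-1)

def advect_rk4(pts, t, dt):
    """Advection pass: integrate every front vertex independently (trivially parallel)."""
    k1 = velocity(pts, t)
    k2 = velocity(pts + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(pts + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(pts + dt * k3, t + dt)
    return pts + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def adapt(pts, max_edge=0.05):
    """Adaptation pass: split edges that stretched beyond a length threshold."""
    out = [pts[0]]
    for a, b in zip(pts[:-1], pts[1:]):
        if np.linalg.norm(b - a) > max_edge:
            out.append(0.5 * (a + b))   # midpoint insert; a real tracker would re-seed here
        out.append(b)
    return np.array(out)

# Evolve a seeding line; storing each advected front keeps the surface's full evolution.
front = np.stack([np.full(50, 0.1), np.linspace(0.1, 0.9, 50)], axis=1)
fronts, t, dt = [front], 0.0, 0.01
for _ in range(100):
    front = adapt(advect_rk4(front, t, dt))
    t += dt
    fronts.append(front)
print("vertices in final front:", len(fronts[-1]))
```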
ABSTRACT
We present a practical approach to generate stochastic anisotropic samples with Poisson-disk characteristic over a two-dimensional domain. In contrast to isotropic samples, we understand anisotropic samples as non-overlapping ellipses whose size and density match a given anisotropic metric. Anisotropic noise samples are useful for many visualization and graphics applications. The spot samples can be used as input for texture generation, e.g., line integral convolution (LIC), but can also be used directly for visualization. The definition of the spot samples using a metric tensor makes them especially suitable for the visualization of tensor fields that can be translated into a metric. Our work combines ideas from sampling theory and mesh generation. To generate samples with the desired properties, we construct an initial set of non-overlapping ellipses whose distribution closely matches the underlying metric. This set of samples is used as input for a generalized anisotropic Lloyd relaxation to distribute noise samples more evenly. Instead of computing the Voronoi tessellation explicitly, we introduce a discrete approach which combines the Voronoi cell and centroid computation in one step. Our method supports automatic packing of the elliptical samples, resulting in textures similar to those generated by anisotropic reaction-diffusion methods. We use Fourier analysis tools for quality measurement of uniformly distributed samples. The resulting samples have favorable sampling properties; for example, they satisfy a blue-noise property in which low frequencies in the power spectrum are reduced to a minimum.
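As a rough sketch of the first stage only, the snippet below performs anisotropic dart throwing under a hypothetical spatially varying metric; the generalized Lloyd relaxation described above would follow as a second stage. The metric definition and the conflict threshold are assumptions chosen for illustration.

```python
import numpy as np

def metric(p):
    """Hypothetical anisotropic metric: ellipses elongated along x, growing with y."""
    a = 0.05 * (0.5 + p[1])        # semi-axis along x
    b = 0.02 * (0.5 + p[1])        # semi-axis along y
    return np.diag([1.0 / a**2, 1.0 / b**2])

def conflicts(c, samples, metrics):
    """Reject candidate c if it lies too close to an accepted sample in the averaged metric."""
    Mc = metric(c)
    for s, Ms in zip(samples, metrics):
        d = c - s
        # (Mc + Ms) * 0.125 approximates the non-overlap condition for two unit ellipses
        # (center distance of at least the sum of the radii in the averaged metric).
        if d @ ((Mc + Ms) * 0.125) @ d < 1.0:
            return True
    return False

def anisotropic_darts(n_attempts=5000, seed=1):
    """Plain dart throwing; the paper additionally applies a generalized Lloyd relaxation."""
    rng = np.random.default_rng(seed)
    samples, metrics = [], []
    for _ in range(n_attempts):
        c = rng.random(2)
        if not conflicts(c, samples, metrics):
            samples.append(c)
            metrics.append(metric(c))
    return np.array(samples)

pts = anisotropic_darts()
print("accepted anisotropic Poisson-disk samples:", len(pts))
```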
Subjects
Computer Graphics/statistics & numerical data , Algorithms , Anisotropy , Computer Simulation , Fourier Analysis , Magnetic Resonance Imaging/statistics & numerical data , Stochastic Processes
ABSTRACT
We present a novel approach for the direct computation of integral surfaces in time-dependent vector fields. As opposed to previous work, which we analyze in detail, our approach is based on a separation of integral surface computation into two stages: surface approximation and generation of a graphical representation. This allows us to overcome several limitations of existing techniques. We first describe an algorithm for surface integration that approximates a series of time lines using iterative refinement and computes a skeleton of the integral surface. In a second step, we generate a well-conditioned triangulation. Our approach allows a highly accurate treatment of very large time-varying vector fields in an efficient, streaming fashion. We examine the properties of the presented methods on several example datasets and perform a numerical study of their correctness and accuracy. Finally, we investigate some visualization aspects of integral surfaces.
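A minimal sketch of the time-line refinement idea: a seed segment is subdivided adaptively until neighboring trajectories map to nearby points at the target time. The analytic velocity field, the Euler integrator, and the tolerance are stand-ins chosen for brevity, not the paper's scheme.

```python
import numpy as np

def velocity(p, t):
    """Analytic stand-in for a time-dependent 2D vector field."""
    return np.array([1.0, np.sin(3.0 * p[0] + t)])

def flow(seed, t0, t1, n=200):
    """Integrate a single trajectory from (seed, t0) to time t1 with explicit Euler steps."""
    p, dt = np.array(seed, float), (t1 - t0) / n
    for i in range(n):
        p = p + dt * velocity(p, t0 + i * dt)
    return p

def refine_time_line(s_left, s_right, t0, t1, tol=0.02, depth=0, max_depth=8):
    """Adaptively subdivide the seed segment until mapped neighbors are close enough."""
    a, b = flow(s_left, t0, t1), flow(s_right, t0, t1)
    if np.linalg.norm(a - b) <= tol or depth >= max_depth:
        return [a, b]
    mid = 0.5 * (np.array(s_left) + np.array(s_right))
    left = refine_time_line(s_left, mid, t0, t1, tol, depth + 1, max_depth)
    right = refine_time_line(mid, s_right, t0, t1, tol, depth + 1, max_depth)
    return left[:-1] + right          # drop the duplicated shared vertex

# Approximate the time line at t = 2 seeded along a segment at t = 0; stitching successive
# refined time lines into triangle strips would yield the integral surface skeleton.
line = refine_time_line([0.0, 0.0], [0.0, 1.0], t0=0.0, t1=2.0)
print("vertices on refined time line:", len(line))
```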
ABSTRACT
The visualization and analysis of simulations based on adaptive mesh refinement (AMR) are integral to the process of obtaining new insight in scientific research. We present a new method for performing query-driven visualization and analysis on AMR data, with specific emphasis on time-varying AMR data. Our method directly addresses the dynamic spatial and temporal properties of AMR grids that challenge many existing visualization techniques. Further, we present the first implementation of query-driven visualization on the GPU that uses a GPU-based indexing structure both to answer queries and to utilize GPU memory efficiently. We apply our method to two different science domains to demonstrate its broad applicability.
ABSTRACT
Our ability to generate ever-larger, increasingly complex data has established the need for scalable methods that identify, and provide insight into, important variable trends and interactions. Query-driven methods are among the small subset of techniques that are able to address both large and highly complex datasets. This paper presents a new method that increases the utility of query-driven techniques by visually conveying statistical information about the trends that exist between variables in a query. In this method, correlation fields, created between pairs of variables, are used with the cumulative distribution functions of variables expressed in a user's query. This integrated use of cumulative distribution functions and correlation fields visually reveals, with respect to the solution space of the query, statistically important interactions between any three variables, and allows for trends between these variables to be readily identified. We demonstrate our method by analyzing interactions between variables in two flame-front simulations.
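As an illustration of the ingredients only (hypothetical data and windowing, not the paper's definitions), the sketch below computes a windowed correlation field between two 2D variables and evaluates an empirical cumulative distribution function for a simple threshold query.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_correlation(u, v, win=5):
    """Correlation field: Pearson correlation of u and v in a sliding window per grid point."""
    pad = win // 2
    up = np.pad(u, pad, mode='edge')
    vp = np.pad(v, pad, mode='edge')
    uw = sliding_window_view(up, (win, win)).reshape(*u.shape, -1)
    vw = sliding_window_view(vp, (win, win)).reshape(*v.shape, -1)
    uc = uw - uw.mean(-1, keepdims=True)
    vc = vw - vw.mean(-1, keepdims=True)
    return (uc * vc).sum(-1) / (np.linalg.norm(uc, axis=-1) * np.linalg.norm(vc, axis=-1) + 1e-12)

def cdf_value(samples, x):
    """Empirical cumulative distribution function evaluated at x."""
    return np.searchsorted(np.sort(samples.ravel()), x) / samples.size

# Hypothetical 2D fields standing in for two simulation variables.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:128, 0:128] / 128.0
temp = np.sin(4 * x) + 0.1 * rng.standard_normal(x.shape)
fuel = np.cos(4 * x) * (y > 0.5) + 0.1 * rng.standard_normal(x.shape)

corr = local_correlation(temp, fuel)
# Query: cells where temp exceeds its 90th percentile; report how fuel correlates there.
threshold = np.quantile(temp, 0.9)
query = temp > threshold
print("CDF of temp at query threshold:", round(cdf_value(temp, threshold), 3))
print("mean local correlation inside query region:", round(float(corr[query].mean()), 3))
```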
ABSTRACT
We present a novel approach to out-of-core time-varying isosurface visualization. We attempt to interactively visualize time-varying datasets that are too large to fit into main memory using a technique that differs substantially from existing algorithms. Inspired by video encoding techniques, we examine the data differences between time steps to extract isosurface information. We exploit span-space extraction techniques to retrieve the operations necessary to update isosurface geometry from neighboring time steps. Because only the changes between time steps need to be retrieved from disk, I/O bandwidth requirements are minimized. We apply temporal compression to further reduce disk access and employ a point-based previewing technique that is refined in idle interaction cycles. Our experiments on computational simulation data indicate that this method is a practical and efficient solution for large time-varying isosurface visualization. Our work advances the state of the art by enabling all isosurfaces to be represented by a compact set of operations.
ABSTRACT
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers, as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that is paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end user of the quality of fit between the distribution at a given location and its assigned class. Finally, for the special application of evaluating the stability of bimodal regions, we develop local and regional metrics.
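The abstract does not specify the classifier; the sketch below illustrates one simple way to classify per-location ensemble distributions by modality, counting prominent peaks of a kernel density estimate. The bandwidth rule, height cutoff, and function names are assumptions, not the paper's confidence metrics.

```python
import numpy as np

def count_modes(samples, grid=256, bandwidth=None, min_height=0.1):
    """Estimate modality by counting peaks of a Gaussian kernel density estimate."""
    s = np.asarray(samples, float)
    if bandwidth is None:                       # Silverman's rule of thumb
        bandwidth = 1.06 * s.std() * len(s) ** (-1 / 5) + 1e-12
    xs = np.linspace(s.min() - 3 * bandwidth, s.max() + 3 * bandwidth, grid)
    dens = np.exp(-0.5 * ((xs[:, None] - s[None, :]) / bandwidth) ** 2).sum(axis=1)
    dens /= dens.max() + 1e-12
    # A peak is an interior local maximum whose normalized height exceeds the cutoff.
    peaks = (dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:]) & (dens[1:-1] > min_height)
    return int(peaks.sum())

def classify(samples):
    m = count_modes(samples)
    return {1: "unimodal", 2: "bimodal"}.get(m, "multimodal")

# Hypothetical ensemble of 50 members at two grid locations.
rng = np.random.default_rng(2)
loc_a = rng.normal(10.0, 1.0, 50)                                              # agreeing members
loc_b = np.concatenate([rng.normal(5.0, 0.5, 25), rng.normal(9.0, 0.5, 25)])   # split members
print(classify(loc_a), classify(loc_b))
```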
ABSTRACT
Visualization and analysis techniques play a key role in the discovery of relevant features in ensemble data. Trends, in the form of persisting commonalities or differences in time-varying ensemble datasets, constitute one of the most expressive feature types in ensemble analysis. We develop a flow-graph representation as the core of a system designed for the visual analysis of trends in time-varying ensembles. In our interactive analysis framework, this graph is linked to a representation of the ensemble parameter space and the ensemble itself. This facilitates a detailed examination of trends and their correlations to properties of the input space. We demonstrate the utility of the proposed trend-analysis framework on several benchmark datasets, highlighting its capability to support goal-driven design of time-varying simulations.
ABSTRACT
The real-time display of huge geometry and imagery databases involves view-dependent approximations, typically through the use of precomputed hierarchies that are selectively refined at runtime. A classic motivating problem is terrain visualization, in which planetary databases involving billions of elevation and color values are displayed on PC graphics hardware at high frame rates. This paper introduces a new diamond data structure for the basic selective-refinement processing, which is a streamlined method of representing the well-known hierarchies of right triangles that have enjoyed much success in real-time, view-dependent terrain display. Regular-grid tiles are proposed as the payload data per diamond for both geometry and texture. The use of 4-8 grid refinement and coarsening schemes allows level-of-detail transitions that are twice as gradual as traditional quadtree-based hierarchies, as well as very high-quality low-pass filtering compared to subsampling-based hierarchies. An out-of-core storage organization is introduced based on Sierpinski indices per diamond, along with a tile preprocessing framework based on fine-to-coarse, same-level, and coarse-to-fine gathering operations. To attain optimal frame-to-frame coherence and processing-order priorities, dual split and merge queues are developed, similar to those of the Real-time Optimally Adapting Meshes (ROAM) algorithm, along with an adaptation of the ROAM frustum-culling technique. Example applications of lake detection and procedural terrain generation demonstrate the flexibility of the tile processing framework.
Subjects
Algorithms , Computer Graphics , Earth (Planet) , Geographic Information Systems , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , User-Computer Interface , Computer Systems , Database Management Systems , Databases, Factual , Information Storage and Retrieval/methods , Numerical Analysis, Computer-Assisted , Online Systems
ABSTRACT
Effective display and visual analysis of complex 3D data is a challenging task. Occlusions, overlaps, and projective distortions, as frequently caused by typical 3D rendering techniques, can be major obstacles to unambiguous and robust data analysis. Slicing planes are a ubiquitous tool to resolve several of these issues. They act as simple clipping geometry to provide clear cut-away views of the data. We propose to enhance the visualization and analysis process by providing methods for automatic placement of such slicing planes based on local optimization of gradient vector flow. The resulting slicing planes maximize the total amount of information displayed with respect to a pre-specified importance function. We demonstrate how such automated slicing-plane placement is able to support and enrich 3D data visualization and analysis in multiple scenarios, such as volume or surface rendering, and evaluate its performance on several benchmark datasets.
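As a simplified stand-in for the gradient-vector-flow optimization described above, the sketch below scores candidate slicing planes by the importance they expose in a hypothetical volume and improves the plane by random hill climbing; the scoring and search scheme are assumptions chosen for brevity.

```python
import numpy as np

def plane_score(importance, normal, offset, n_samples=4000, rng=None):
    """Mean importance sampled in a thin slab around the plane normal . x = offset."""
    if rng is None:
        rng = np.random.default_rng(0)
    normal = normal / np.linalg.norm(normal)
    pts = rng.random((n_samples, 3))                     # sample the unit cube
    on_plane = np.abs(pts @ normal - offset) < 0.01      # keep points near the plane
    if not on_plane.any():
        return 0.0
    shape = np.array(importance.shape)
    idx = np.minimum((pts[on_plane] * shape).astype(int), shape - 1)
    return float(importance[idx[:, 0], idx[:, 1], idx[:, 2]].mean())

def optimize_plane(importance, iters=300, step=0.1, seed=3):
    """Hill climbing over plane orientation and offset to maximize exposed importance."""
    rng = np.random.default_rng(seed)
    normal, offset = np.array([0.0, 0.0, 1.0]), 0.5
    best = plane_score(importance, normal, offset, rng=rng)
    for _ in range(iters):
        cand_n = normal + step * rng.standard_normal(3)
        cand_o = np.clip(offset + step * rng.standard_normal(), 0.05, 0.95)
        score = plane_score(importance, cand_n, cand_o, rng=rng)
        if score > best:
            normal, offset, best = cand_n / np.linalg.norm(cand_n), cand_o, score
    return normal, offset, best

# Hypothetical importance volume: a bright shell around a region of interest.
z, y, x = np.mgrid[0:32, 0:32, 0:32] / 32.0
importance = np.exp(-50 * (np.sqrt((x - 0.6)**2 + (y - 0.4)**2 + (z - 0.5)**2) - 0.25)**2)
n, d, s = optimize_plane(importance)
print("plane normal:", np.round(n, 2), "offset:", round(d, 2), "score:", round(s, 3))
```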
ABSTRACT
Particle tracing in time-varying flow fields is traditionally performed by numerical integration of the underlying vector field. This procedure can become computationally expensive, especially in scattered, particle-based flow fields, which complicate interpolation due to the lack of an explicit neighborhood structure. If such a particle-based flow field allows for the identification of consecutive particle positions, an alternative approach to particle tracing can be employed: we substitute repeated numerical integration of vector data by geometric interpolation in the highly dynamic particle system as defined by the particle-based simulation. To allow for efficient and accurate location and interpolation of changing particle neighborhoods, we develop a modified k-d tree representation that is capable of creating a dynamic partitioning of even highly compressible data sets with strongly varying particle densities. With this representation we are able to efficiently perform pathline computation by identifying, tracking, and updating an enclosing, dynamic particle neighborhood as particles move over time. We investigate and evaluate the complexity, accuracy, and robustness of this interpolation-based alternative approach to trajectory generation in compressible and incompressible particle systems generated by simulation techniques such as Smoothed Particle Hydrodynamics (SPH).
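A minimal sketch of the interpolation-based idea, with SciPy's cKDTree rebuilt per output pair standing in for the modified dynamic k-d tree described above (an assumption): the tracer is advanced by the inverse-distance-weighted displacement of its k nearest particles between consecutive simulation outputs.

```python
import numpy as np
from scipy.spatial import cKDTree

def advance_tracer(tracer, particles_t0, particles_t1, k=8):
    """Move a tracer by the IDW-interpolated displacement of its k nearest particles.

    particles_t0 and particles_t1 hold corresponding particle positions at consecutive
    simulation outputs (same ordering, so row i is the same particle at both times).
    """
    tree = cKDTree(particles_t0)                  # stand-in for the paper's dynamic k-d tree
    dist, idx = tree.query(tracer, k=k)
    w = 1.0 / (dist + 1e-9)
    w /= w.sum()
    displacement = particles_t1[idx] - particles_t0[idx]
    return tracer + w @ displacement

# Hypothetical particle data: 2000 particles rotating about the origin between outputs.
rng = np.random.default_rng(4)
p0 = rng.uniform(-1, 1, size=(2000, 2))
theta = 0.05
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
p1 = p0 @ rot.T

tracer = np.array([0.5, 0.0])
path = [tracer]
for _ in range(20):                               # chain output pairs into a pathline
    tracer = advance_tracer(tracer, p0, p1, k=8)
    p0, p1 = p1, p1 @ rot.T
    path.append(tracer)
print("pathline end point:", np.round(path[-1], 3))
```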
ABSTRACT
We present a new method for topological segmentation in steady three-dimensional vector fields. Depending on desired properties, the algorithm replaces the original vector field by a derived segmented data set, which is utilized to produce separating surfaces in the vector field. We define the concept of a segmented data set, develop methods that produce the segmented data by sampling the vector field with streamlines, and describe algorithms that generate the separating surfaces. This method is applied to generate local separatrices, defined by a movable boundary region placed in the field. The resulting partitions can be visualized using standard techniques, yielding a visualization of the vector field at a higher level of abstraction.
Subjects
Algorithms , Computer Graphics , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Pattern Recognition, Automated , Video Recording/methods , Computer Simulation , Information Storage and Retrieval/methods , Numerical Analysis, Computer-Assisted , Reproducibility of Results , Rheology/methods , Sensitivity and Specificity , Signal Processing, Computer-Assisted , User-Computer Interface
ABSTRACT
We present an algorithm for adaptively extracting and rendering isosurfaces from compressed time-varying volume data sets. Tetrahedral meshes defined by longest edge bisection are used to create a multiresolution representation of the volume in the spatial domain that is adapted over time to approximate the time-varying volume. The reextraction of the isosurface at each time step is accelerated with the vertex programming capabilities of modern graphics hardware. A data layout scheme which follows the access pattern indicated by mesh refinement is used to access the volume in a spatially and temporally coherent manner. This data layout scheme allows our algorithm to be used for out-of-core visualization.
ABSTRACT
We present a new construction of lifted biorthogonal wavelets on surfaces of arbitrary two-manifold topology for compression and multiresolution representation. Our method combines three approaches: subdivision surfaces of arbitrary topology, B-spline wavelets, and the lifting scheme for biorthogonal wavelet construction. The simple building blocks of our wavelet transform are local lifting operations performed on polygonal meshes with subdivision hierarchy. Starting with a coarse, irregular polyhedral base mesh, our transform creates a subdivision hierarchy of meshes converging to a smooth limit surface. At every subdivision level, geometric detail can be expanded from wavelet coefficients and added to the surface. We present wavelet constructions for bilinear, bicubic, and biquintic B-spline subdivision. While the bilinear and bicubic constructions perform well in numerical experiments, the biquintic construction turns out to be unstable. For lossless compression, our transform can be computed in integer arithmetic, mapping integer coordinates of control points to integer wavelet coefficients. Our approach provides a highly efficient and progressive representation for complex geometries of arbitrary topology.
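The lifting building block is easiest to see in one dimension; the sketch below shows a linear-prediction lifted transform (CDF(2,2)-style) with an exact inverse, assuming periodic boundaries. It illustrates only the predict/update steps, not the surface-based construction described above.

```python
import numpy as np

def lift_forward(signal):
    """One level of a linear-prediction lifted wavelet transform (CDF(2,2)-style)."""
    even, odd = signal[0::2].astype(float), signal[1::2].astype(float)
    # Predict: detail coefficient = odd sample minus prediction from its even neighbors.
    detail = odd - 0.5 * (even + np.roll(even, -1))
    # Update: coarse coefficient = even sample plus a correction preserving the local average.
    coarse = even + 0.25 * (detail + np.roll(detail, 1))
    return coarse, detail

def lift_inverse(coarse, detail):
    """Invert the lifting steps in reverse order with opposite signs."""
    even = coarse - 0.25 * (detail + np.roll(detail, 1))
    odd = detail + 0.5 * (even + np.roll(even, -1))
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

# Round-trip check on a hypothetical 1-D "geometric detail" signal (periodic boundary assumed).
sig = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False)) * 100
c, d = lift_forward(sig)
print("max reconstruction error:", np.abs(lift_inverse(c, d) - sig).max())
# For lossless integer-to-integer transforms, each lifting step is rounded before it is added,
# which stays exactly invertible because the same rounded value is subtracted on inversion.
```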
Subjects
Algorithms , Computer Graphics , Data Compression/methods , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods
ABSTRACT
Sets of simulation runs based on parameter and model variation, so-called ensembles, are increasingly used to model physical behaviors whose parameter space is too large or complex to be explored automatically. Visualization plays a key role in conveying important properties in ensembles, such as the degree to which members of the ensemble agree or disagree in their behavior. For ensembles of time-varying vector fields, there are numerous challenges for providing an expressive comparative visualization, among which is the requirement to relate the effect of individual flow divergence to joint transport characteristics of the ensemble. Yet, techniques developed for scalar ensembles are of little use in this context, as the notion of transport induced by a vector field cannot be modeled using such tools. We develop a Lagrangian framework for the comparison of flow fields in an ensemble. Our techniques evaluate individual and joint transport variance and introduce a classification space that facilitates incorporation of these properties into a common ensemble visualization. Variances of Lagrangian neighborhoods are computed using pathline integration and principal component analysis (PCA). This allows for an inclusion of uncertainty measurements into the visualization and analysis approach. Our results demonstrate the usefulness and expressiveness of the presented method on several practical examples.
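A minimal sketch of the Lagrangian measurement, with two-dimensional analytic fields standing in for ensemble members (an assumption): a small seed neighborhood is advected through each member by pathline integration, and the spread of the advected positions is summarized per member and jointly via a principal component analysis.

```python
import numpy as np

def member_velocity(p, t, strength):
    """Hypothetical ensemble member: gyre-like 2D field whose strength varies per member."""
    x, y = p[..., 0], p[..., 1]
    return strength * np.stack([-np.sin(np.pi * x) * np.cos(np.pi * y),
                                 np.cos(np.pi * x) * np.sin(np.pi * y)], axis=-1)

def advect(seeds, strength, t_end=1.0, dt=0.01):
    """Pathline integration of the seed neighborhood with explicit Euler steps."""
    pts, t = seeds.copy(), 0.0
    while t < t_end:
        pts = pts + dt * member_velocity(pts, t, strength)
        t += dt
    return pts

def transport_pca(endpoints):
    """Principal component analysis of advected neighborhood endpoints (pooled over members)."""
    centered = endpoints - endpoints.mean(axis=0)
    cov = centered.T @ centered / (len(endpoints) - 1)
    return np.linalg.eigvalsh(cov)[::-1]   # wide eigenvalue spread -> anisotropic transport

# Seed a small neighborhood and advect it in each (hypothetical) ensemble member.
rng = np.random.default_rng(5)
seeds = np.array([0.3, 0.4]) + 0.02 * rng.standard_normal((50, 2))
members = [0.8, 1.0, 1.2]                 # per-member field strengths
ends = np.concatenate([advect(seeds, s) for s in members])

per_member_var = [advect(seeds, s).var(axis=0).sum() for s in members]
print("individual transport variance per member:", np.round(per_member_var, 5))
print("joint PCA eigenvalues over all members:", np.round(transport_pca(ends), 5))
```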
Subjects
Computer Graphics , Imaging, Three-Dimensional/methods , Models, Theoretical , Numerical Analysis, Computer-Assisted , Rheology/methods , Subtraction Technique , User-Computer Interface , Algorithms , Models, Statistical
ABSTRACT
Numerical ensemble forecasting is a powerful tool that drives many risk analysis efforts and decision making tasks. These ensembles are composed of individual simulations that each uniquely model a possible outcome for a common event of interest: e.g., the direction and force of a hurricane, or the path of travel and mortality rate of a pandemic. This paper presents a new visual strategy to help quantify and characterize a numerical ensemble's predictive uncertainty: i.e., the ability for ensemble constituents to accurately and consistently predict an event of interest based on ground truth observations. Our strategy employs a Bayesian framework to first construct a statistical aggregate from the ensemble. We extend the information obtained from the aggregate with a visualization strategy that characterizes predictive uncertainty at two levels: a global level, which assesses the ensemble as a whole, and a local level, which examines each of the ensemble's constituents. Through this approach, modelers are able to better assess the predictive strengths and weaknesses of the ensemble as a whole, as well as of individual models. We apply our method to two datasets to demonstrate its broad applicability.
Subjects
Algorithms , Bayes Theorem , Computer Graphics , Data Interpretation, Statistical , Models, Statistical , Pattern Recognition, Automated/methods , User-Computer Interface , Computer Simulation , Reproducibility of Results , Sensitivity and Specificity
ABSTRACT
Multifluid simulations often create volume fraction data, representing fluid volumes per region or cell of a fluid data set. Accurate and visually realistic extraction of fluid boundaries is a challenging and essential task for efficient analysis of multifluid data. In this work, we present a new material interface reconstruction method for such volume fraction data. Within each cell of the data set, our method utilizes a gradient field approximation based on trilinearly blended Coons patches to generate a volume fraction function, representing the change in volume fractions over the cell. A continuously varying isovalue field is applied to this function to produce a smooth interface that preserves the given volume fractions well. Further, the method allows a user-controlled balance between volume accuracy and physical plausibility of the interface. The method works on two- and three-dimensional Cartesian grids, and handles multiple materials. Calculations are performed locally and utilize only the one-ring of cells surrounding a given cell, allowing visualizations of the material interfaces to be easily generated on a GPU or in a large-scale distributed parallel environment. Our results demonstrate the robustness, accuracy, and flexibility of the developed algorithms.
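The Coons-patch blending and the continuously varying isovalue field are beyond a short example; the sketch below illustrates only the volume-matching ingredient, bisecting the isovalue of a trilinear reconstruction within one cell until the enclosed sub-volume (estimated by Monte Carlo sampling) matches a prescribed volume fraction. All data and names are hypothetical.

```python
import numpy as np

def trilinear(corner_vals, pts):
    """Evaluate a trilinear function on [0,1]^3 from its 8 corner values (shape (2,2,2))."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    c = corner_vals
    return (c[0,0,0]*(1-x)*(1-y)*(1-z) + c[1,0,0]*x*(1-y)*(1-z) +
            c[0,1,0]*(1-x)*y*(1-z)     + c[0,0,1]*(1-x)*(1-y)*z +
            c[1,1,0]*x*y*(1-z)         + c[1,0,1]*x*(1-y)*z +
            c[0,1,1]*(1-x)*y*z         + c[1,1,1]*x*y*z)

def isovalue_for_fraction(corner_vals, target_fraction, n=200000, seed=6):
    """Bisect the isovalue so the fraction of the cell with f >= iso matches the target."""
    rng = np.random.default_rng(seed)
    f = trilinear(corner_vals, rng.random((n, 3)))     # Monte Carlo samples of the cell
    lo, hi = f.min(), f.max()
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if (f >= mid).mean() > target_fraction:
            lo = mid                                    # too much volume above -> raise isovalue
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical cell: per-corner material indicator values blended trilinearly.
corners = np.array([[[0.9, 0.8], [0.6, 0.4]],
                    [[0.7, 0.5], [0.3, 0.1]]])
iso = isovalue_for_fraction(corners, target_fraction=0.5)
vals = trilinear(corners, np.random.default_rng(7).random((100000, 3)))
print("isovalue:", round(float(iso), 3),
      "achieved fraction:", round(float((vals >= iso).mean()), 3))
```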