Results 1 - 6 of 6
1.
Article in English | MEDLINE | ID: mdl-39255107

ABSTRACT

This paper presents a novel end-to-end framework for closed-form computation and visualization of critical point uncertainty in 2D uncertain scalar fields. Critical points are fundamental topological descriptors used in the visualization and analysis of scalar fields. The uncertainty inherent in data (e.g., observational and experimental data, approximations in simulations, and compression), however, creates uncertainty regarding critical point positions. Uncertainty in critical point positions therefore cannot be ignored, given its impact on downstream data analysis tasks. In this work, we study uncertainty in critical points as a function of uncertainty in data modeled with probability distributions. Although Monte Carlo (MC) sampling techniques have been used in prior studies to quantify critical point uncertainty, they are often expensive and are infrequently used in production-quality visualization software. We therefore propose a new end-to-end framework to address these challenges, comprising a threefold contribution. First, we derive the critical point uncertainty in closed form, which is more accurate and efficient than conventional MC sampling methods. Specifically, we provide closed-form and semianalytical (a mix of closed-form and MC) solutions for parametric models (e.g., uniform, Epanechnikov) and nonparametric models (e.g., histograms) with finite support. Second, we accelerate critical point probability computations using a parallel implementation with the VTK-m library, which is platform portable. Finally, we integrate our implementation with the ParaView software system and demonstrate near-real-time results on real datasets.
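To make the contrast between Monte Carlo sampling and a distribution-based formulation concrete, here is a minimal sketch (not the paper's implementation) for a single grid vertex under an independent uniform noise model: the probability that the vertex is a local minimum is written as an integral over its noise distribution, evaluated numerically, and cross-checked against plain Monte Carlo sampling. The vertex values, neighbor values, and noise widths below are illustrative assumptions.

```python
# Minimal sketch: probability that a grid vertex is a local minimum under
# independent uniform noise, via numerical integration vs. Monte Carlo.
import numpy as np
from scipy import integrate, stats

rng = np.random.default_rng(0)

# Hypothetical vertex and four neighbors, each an independent uniform variable.
center = stats.uniform(loc=0.9, scale=0.4)            # values in [0.9, 1.3]
neighbors = [stats.uniform(loc=m - 0.2, scale=0.4)    # values in [m-0.2, m+0.2]
             for m in (1.0, 1.1, 1.2, 1.3)]

# Semi-analytic form: P(center is the minimum) = ∫ f_c(x) Π_n (1 - F_n(x)) dx
def integrand(x):
    p = center.pdf(x)
    for nb in neighbors:
        p *= 1.0 - nb.cdf(x)
    return p

p_integral, _ = integrate.quad(integrand, 0.9, 1.3)

# Monte Carlo baseline: sample all vertices and count how often the center wins.
n = 200_000
c = center.rvs(n, random_state=rng)
nbs = np.stack([nb.rvs(n, random_state=rng) for nb in neighbors])
p_mc = np.mean(c < nbs.min(axis=0))

print(f"integral estimate: {p_integral:.4f}, Monte Carlo: {p_mc:.4f}")
```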

2.
IEEE Trans Vis Comput Graph ; 29(1): 613-623, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36155460

ABSTRACT

Visualization and analysis of multivariate data and their uncertainty are top research challenges in data visualization. Constructing fiber surfaces is a popular technique for multivariate data visualization that generalizes the idea of level-set visualization for univariate data to multivariate data. In this paper, we present a statistical framework to quantify positional probabilities of fibers extracted from uncertain bivariate fields. Specifically, we extend the state-of-the-art Gaussian models of uncertainty for bivariate data to other parametric distributions (e.g., uniform and Epanechnikov) and more general nonparametric probability distributions (e.g., histograms and kernel density estimation) and derive the corresponding spatial probabilities of fibers. In our proposed framework, we leverage Green's theorem for closed-form computation of fiber probabilities when bivariate data are assumed to have independent parametric or nonparametric noise. Additionally, we present a nonparametric approach combined with numerical integration to study the positional probability of fibers when bivariate data are assumed to have correlated noise. For uncertainty analysis, we visualize the derived probability volumes for fibers via volume rendering and by extracting level sets at probability thresholds. We demonstrate the utility of our proposed techniques through experiments on synthetic and simulation datasets.
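As a point of reference for the quantity being derived, the sketch below (an assumed setup, not the paper's Green's-theorem closed form) estimates by Monte Carlo sampling the probability that one uncertain bivariate sample falls inside a fiber-surface control polygon in range space. The polygon, mean values, and independent uniform noise model are illustrative.

```python
# Monte Carlo estimate of the fiber probability at one grid point of an
# uncertain bivariate field, given a control polygon in (f1, f2) range space.
import numpy as np
from matplotlib.path import Path

rng = np.random.default_rng(1)

# Hypothetical control polygon in the (f1, f2) range space.
polygon = Path([(0.2, 0.2), (0.8, 0.3), (0.7, 0.9), (0.3, 0.8)])

# One grid point of the bivariate field: mean values plus independent
# uniform noise of half-width 0.15 in each component.
mean = np.array([0.5, 0.6])
half_width = 0.15
n = 100_000
samples = mean + rng.uniform(-half_width, half_width, size=(n, 2))

# The fraction of noisy samples landing inside the polygon approximates
# the fiber probability at this grid point.
prob = polygon.contains_points(samples).mean()
print(f"estimated fiber probability: {prob:.3f}")
```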

3.
IEEE Comput Graph Appl ; 36(3): 48-58, 2016.
Article in English | MEDLINE | ID: mdl-28113158

ABSTRACT

One of the most critical challenges for high-performance computing (HPC) scientific visualization is execution on massively threaded processors. Of the many fundamental changes we are seeing in HPC systems, one of the most profound is a reliance on new processor types optimized for execution bandwidth over latency hiding. Our current production scientific visualization software is not designed for these new types of architectures. To address this issue, the VTK-m framework serves as a container for algorithms, provides flexible data representation, and simplifies the design of visualization algorithms for new and future computer architectures.
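As a rough illustration of the programming model such frameworks target (expressed here in Python purely for exposition; VTK-m itself is a C++ library and none of this is its API), the sketch below writes a visualization kernel as an independent per-block operation mapped over a decomposed field, the style that lets a runtime spread work across many cores. The field, kernel, and block count are assumptions, and halo exchange at block boundaries is ignored for brevity.

```python
# Generic data-parallel map pattern: decompose, apply the same kernel to every
# block concurrently, reassemble. (Boundary rows of each slab differ slightly
# from a whole-field gradient; a real decomposition would exchange halos.)
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def gradient_magnitude(block):
    """Per-block kernel: finite-difference gradient magnitude of a 2D slab."""
    gy, gx = np.gradient(block)
    return np.sqrt(gx * gx + gy * gy)

def main():
    field = np.random.rand(2048, 2048)          # stand-in 2D scalar field
    blocks = np.array_split(field, 8, axis=0)   # decompose into row slabs

    # Map the same kernel over every block in parallel, then reassemble.
    with ProcessPoolExecutor() as pool:
        result = np.concatenate(list(pool.map(gradient_magnitude, blocks)))
    print(result.shape)

if __name__ == "__main__":
    main()
```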

4.
IEEE Trans Vis Comput Graph ; 16(6): 1431-40, 2010.
Article in English | MEDLINE | ID: mdl-20975184

ABSTRACT

In the development of magnetic confinement fusion, which is potentially a future source of low-cost power, physicists must be able to analyze the magnetic field that confines the burning plasma. Although the magnetic field can be described as a vector field, traditional techniques for analyzing the field's topology cannot be used because of its Hamiltonian nature. In this paper, we describe a technique, developed as a collaboration between physicists and computer scientists, that determines the topology of a toroidal magnetic field using fieldlines with near-minimal lengths. More specifically, we analyze the Poincaré map of the sampled fieldlines in a Poincaré section, including the identification of critical points and other topological features of interest to physicists. The technique has been deployed in an interactive parallel visualization tool that physicists are using to gain new insight into simulations of magnetically confined burning plasmas.
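For readers unfamiliar with Poincaré maps, the following self-contained sketch (an analytic toy field, not the paper's simulation data or tool) shows the basic operation: follow a fieldline around the torus and record its punctures through a fixed toroidal plane. The field model and constants are illustrative assumptions; for this integrable toy field the punctures trace nested circles (flux surfaces).

```python
# Follow fieldlines of a toy toroidal field and sample the phi = 0 plane.
import numpy as np
from scipy.integrate import solve_ivp

R0, B0, C = 1.0, 1.0, 0.2   # major radius, toroidal field, poloidal strength

def fieldline_rhs(phi, y):
    """dR/dphi, dZ/dphi for a toy toroidal field with circular flux surfaces."""
    R, Z = y
    b_phi = B0 * R0 / R                 # toroidal component ~ 1/R
    b_r, b_z = -C * Z, C * (R - R0)     # poloidal component circling the axis
    return [R * b_r / b_phi, R * b_z / b_phi]

def poincare_points(r_start, n_turns=200):
    """Integrate n_turns times around the torus, sampling the phi = 0 plane."""
    phis = 2.0 * np.pi * np.arange(n_turns + 1)
    sol = solve_ivp(fieldline_rhs, (phis[0], phis[-1]), [R0 + r_start, 0.0],
                    t_eval=phis, rtol=1e-8, atol=1e-10)
    return sol.y.T                      # rows of (R, Z) punctures

# Seed several fieldlines at different minor radii; the punctures of each one
# lie on a closed curve around the magnetic axis.
for r in (0.05, 0.10, 0.15):
    pts = poincare_points(r)
    print(f"r={r:.2f}: first puncture {pts[1].round(3)}, last {pts[-1].round(3)}")
```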

5.
IEEE Comput Graph Appl ; 30(3): 22-31, 2010.
Article in English | MEDLINE | ID: mdl-20650715

ABSTRACT

This article presents the results of experiments studying how the pure-parallelism paradigm scales to massive data sets, including runs on 16,000 or more cores processing trillion-cell meshes, the largest data sets published to date in the visualization literature. The findings on scaling characteristics and bottlenecks contribute to understanding how pure parallelism will perform in the future.
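The pure-parallelism pattern itself can be summarized in a few lines; the sketch below (hypothetical data and mpi4py, not the authors' code) partitions work across ranks, runs the same local computation everywhere, and communicates only in a final reduction.

```python
# Pure parallelism: each rank processes its own block independently;
# communication happens only at the end.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns one block of a (here randomly generated) scalar field.
local_block = np.random.default_rng(rank).random((64, 64, 64))

# Embarrassingly parallel work: e.g., count cells above an isovalue locally.
local_count = int(np.count_nonzero(local_block > 0.95))

# Single collective at the end; everything before it needed no communication.
total = comm.reduce(local_count, op=MPI.SUM, root=0)
if rank == 0:
    print(f"cells above isovalue across {size} ranks: {total}")
```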

6.
IEEE Trans Vis Comput Graph ; 13(4): 798-809, 2007.
Article in English | MEDLINE | ID: mdl-17495338

ABSTRACT

This paper describes the first use of a Network Processing Unit (NPU) to perform hardware-based image composition in a distributed rendering system. The image composition step is a notorious bottleneck in clustered rendering systems. Furthermore, image compositing algorithms do not necessarily scale as data size and the number of nodes increase. Previous researchers have addressed the composition problem with software and/or custom-built hardware. We used the heterogeneous multicore computation architecture of the Intel IXP28XX NPU, a fully programmable commercial off-the-shelf (COTS) technology, to perform the image composition step. With this design, we attained a nearly fourfold performance increase over traditional software-based compositing methods, achieving sustained compositing rates of 22-28 fps on a 1,024 x 1,024 image. The system is fully scalable with a negligible penalty in frame rate, is entirely COTS, and is flexible with regard to operating system, rendering software, graphics cards, and node architecture. The NPU-based compositor has the additional advantage of being a modular compositing component that is eminently suitable for integration into existing distributed visualization software packages.
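The composition operation being offloaded is simple to state in software terms; the sketch below (NumPy, not the NPU implementation) merges two partial renderings by keeping, at each pixel, the fragment closest to the camera. In a cluster, this pairwise merge is applied repeatedly across nodes (e.g., with direct-send or binary-swap schedules), which is the step the paper moves onto the NPU.

```python
# Software z-comparison compositing of two partial renderings (color + depth).
import numpy as np

rng = np.random.default_rng(2)
H = W = 1024

# Partial results from two render nodes: RGB color plus per-pixel depth.
color_a = rng.random((H, W, 3), dtype=np.float32)
depth_a = rng.random((H, W), dtype=np.float32)
color_b = rng.random((H, W, 3), dtype=np.float32)
depth_b = rng.random((H, W), dtype=np.float32)

# Per pixel, keep the color whose depth is smaller (closer to the camera).
closer_a = depth_a < depth_b
composite_color = np.where(closer_a[..., None], color_a, color_b)
composite_depth = np.minimum(depth_a, depth_b)

print(composite_color.shape, composite_depth.min())
```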


Subjects
Algorithms; Computer Communication Networks/instrumentation; Computer Graphics; Image Enhancement/instrumentation; Image Interpretation, Computer-Assisted/instrumentation; Imaging, Three-Dimensional/instrumentation; Signal Processing, Computer-Assisted/instrumentation; Equipment Design; Equipment Failure Analysis; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods