Results 1 - 17 of 17
1.
IEEE Trans Vis Comput Graph ; 30(1): 1358-1368, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37922179

ABSTRACT

In scientific simulations, observations, and experiments, the transfer of data to and from disk and across networks has become a major bottleneck for data analysis and visualization. Compression techniques have been employed to tackle this challenge, but traditional lossy methods often demand conservative error tolerances to meet the numerical accuracy requirements of both anticipated and unknown data analysis tasks. Progressive data compression and retrieval has emerged as a promising solution, where each analysis task dictates its own accuracy needs. However, few analysis algorithms inherently support progressive data processing, and adapting compression techniques, file formats, client/server frameworks, and APIs to support progressivity can be challenging. This paper presents a framework that enables progressive-precision data queries for any data compressor or numerical representation. Our strategy hinges on a multi-component representation that successively reduces the error between the original and compressed field, allowing each field in the progressive sequence to be expressed as a partial sum of components. We have implemented this approach with four established scientific data compressors and assessed its effectiveness using real-world data sets from the SDRBench collection. The results show that our framework competes in accuracy with the standalone compressors it is based upon. Additionally, (de)compression time is proportional to the number of components requested by the user. Finally, our framework allows for fully lossless compression using lossy compressors when a sufficient number of components are employed.
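The partial-sum construction can be sketched generically; here `compress` and `decompress` are placeholders for any lossy compressor pair (hypothetical signatures, not the paper's API):

```python
import numpy as np

def build_components(field, tolerances, compress, decompress):
    # Each component encodes the residual left by all previous components,
    # so any prefix of the sequence is a valid approximation of the field.
    components = []
    residual = field.astype(np.float64)
    for tol in tolerances:                      # e.g., geometrically decreasing
        comp = compress(residual, tol)          # hypothetical: (array, tol) -> bytes
        components.append(comp)
        residual = residual - decompress(comp)  # error still to be corrected
    return components

def reconstruct(components, decompress, k):
    # Partial sum of the first k components; more components, less error.
    return sum(decompress(c) for c in components[:k])
```

With enough components the residual eventually reaches zero, which is the lossless limit the abstract mentions.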

2.
Article in English | MEDLINE | ID: mdl-37027261

ABSTRACT

Scientific simulations and observations using particles have been creating large datasets that require effective and efficient data reduction to store, transfer, and analyze. However, current approaches either compress only small data well while being inefficient for large data, or handle large data but with insufficient compression. Toward effective and scalable compression/decompression of particle positions, we introduce new kinds of particle hierarchies and corresponding traversal orders that quickly reduce reconstruction error while being fast and low in memory footprint. Our solution to compression of large-scale particle data is a flexible block-based hierarchy that supports progressive, random-access, and error-driven decoding, where error estimation heuristics can be supplied by the user. For low-level node encoding, we introduce new schemes that effectively compress both uniform and densely structured particle distributions.
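A minimal sketch of what error-driven progressive decoding can look like, with assumed `decode_children` and `error_estimate` hooks standing in for the paper's hierarchy and user-supplied heuristics:

```python
import heapq
from itertools import count

def progressive_decode(root, decode_children, error_estimate, budget):
    # Greedily refine whichever node currently contributes the largest
    # estimated reconstruction error; the estimate is a user-supplied heuristic.
    tie = count()                                  # breaks ties between equal errors
    heap = [(-error_estimate(root), next(tie), root)]
    decoded = []
    while heap and len(decoded) < budget:
        _, _, node = heapq.heappop(heap)           # node with largest error
        decoded.append(node)
        for child in decode_children(node):        # hypothetical decode hook
            heapq.heappush(heap, (-error_estimate(child), next(tie), child))
    return decoded
```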

3.
IEEE Trans Vis Comput Graph ; 28(6): 2350-2363, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35394910

ABSTRACT

Adaptive representations are increasingly indispensable for reducing the in-memory and on-disk footprints of large-scale data. Usual solutions are designed broadly along two themes: reducing data precision, e.g., through compression, or adapting data resolution, e.g., using spatial hierarchies. Recent research suggests that combining the two approaches, i.e., adapting both resolution and precision simultaneously, can offer significant gains over using them individually. However, there currently exist no practical solutions to creating and evaluating such representations at scale. In this work, we present a new resolution-precision-adaptive representation to support hybrid data reduction schemes and offer an interface to existing tools and algorithms. Through novelties in spatial hierarchy, our representation, Adaptive Multilinear Meshes (AMM), provides considerable reduction in the mesh size. AMM creates a piecewise multilinear representation of uniformly sampled scalar data and can selectively relax or enforce constraints on conformity, continuity, and coverage, delivering a flexible adaptive representation. AMM also supports representing the function using mixed-precision values to further the achievable gains in data reduction. We describe a practical approach to creating AMM incrementally using arbitrary orderings of data and demonstrate AMM on six types of resolution and precision data streams. By interfacing with state-of-the-art rendering tools through VTK, we demonstrate the practical and computational advantages of our representation for visualization techniques. With an open-source release of our tool to create AMM, we make such evaluation of data reduction accessible to the community, which we hope will foster new opportunities and future data reduction schemes.
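A 1D analogue conveys the resolution-adaptive half of the idea (sketch only; AMM itself builds multilinear cells in 2D/3D under conformity and continuity constraints, and additionally adapts precision):

```python
import numpy as np

def adapt(samples, lo, hi, tol, cells=None):
    # Keep [lo, hi] as one linear cell if interpolating its endpoints
    # reproduces all interior samples within tol; otherwise split it.
    if cells is None:
        cells = []
    x = np.arange(lo, hi + 1)
    interp = np.interp(x, [lo, hi], [samples[lo], samples[hi]])
    if hi - lo <= 1 or np.max(np.abs(interp - samples[lo:hi + 1])) <= tol:
        cells.append((lo, hi))                  # coarse cell is accurate enough
    else:
        mid = (lo + hi) // 2
        adapt(samples, lo, mid, tol, cells)
        adapt(samples, mid, hi, tol, cells)
    return cells

data = np.sin(np.linspace(0, 2 * np.pi, 1025)) ** 3
print(len(adapt(data, 0, 1024, 1e-3)), "cells instead of 1024")
```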

4.
IEEE Trans Vis Comput Graph ; 27(2): 603-613, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33048707

ABSTRACT

To address the problem of ever-growing scientific data sizes making data movement a major hindrance to analysis, we introduce a novel encoding for scalar fields: a unified tree of resolution and precision, specifically constructed so that valid cuts correspond to sensible approximations of the original field in the precision-resolution space. Furthermore, we introduce a highly flexible encoding of such trees that forms a parameterized family of data hierarchies. We discuss how different parameter choices lead to different trade-offs in practice, and show how specific choices result in known data representation schemes such as zfp [52], idx [58], and jpeg2000 [76]. Finally, we provide system-level details and empirical evidence on how such hierarchies facilitate common approximate queries with minimal data movement and time, using real-world data sets ranging from a few gigabytes to nearly a terabyte in size. Experiments suggest that our new strategy of combining reductions in resolution and precision is competitive with state-of-the-art compression techniques with respect to data quality, while being significantly more flexible and orders of magnitude faster, and requiring significantly reduced resources.

5.
IEEE Trans Vis Comput Graph ; 26(9): 2891-2903, 2020 Sep.
Article in English | MEDLINE | ID: mdl-30869621

ABSTRACT

Memory and network bandwidth are decisive bottlenecks when handling high-resolution multidimensional data sets in visualization applications, and they increasingly demand suitable data compression strategies. We introduce a novel lossy compression algorithm for multidimensional data over regular grids. It leverages the higher-order singular value decomposition (HOSVD), a generalization of the SVD to three dimensions and higher, together with bit-plane, run-length, and arithmetic coding to compress the HOSVD transform coefficients. Our scheme degrades the data particularly smoothly and achieves lower mean squared error than other state-of-the-art algorithms at low-to-medium bit rates, as required in data archiving and management for visualization purposes. Further advantages of the proposed algorithm include very fine bit rate selection granularity and the ability to manipulate data at very small cost in the compressed domain, for example to reconstruct filtered and/or subsampled versions of all (or selected parts) of the data set.
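The transform stage can be sketched with plain numpy as a truncated Tucker decomposition via HOSVD; the paper's bit-plane, run-length, and arithmetic coding of the coefficients is omitted here:

```python
import numpy as np

def unfold(t, mode):
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def mode_product(t, m, mode):
    # Mode-m product: contract the matrix's columns with the given tensor mode.
    return np.moveaxis(np.tensordot(m, np.moveaxis(t, mode, 0), axes=1), 0, mode)

def hosvd_truncate(t, ranks):
    factors = [np.linalg.svd(unfold(t, mode), full_matrices=False)[0][:, :r]
               for mode, r in enumerate(ranks)]   # top-r basis per mode
    core = t
    for mode, u in enumerate(factors):            # project onto the bases
        core = mode_product(core, u.T, mode)
    return core, factors

def tucker_reconstruct(core, factors):
    t = core
    for mode, u in enumerate(factors):
        t = mode_product(t, u, mode)
    return t

x = np.linspace(0, 1, 32)
t = np.einsum('i,j,k->ijk', np.sin(4 * x), np.cos(3 * x), x) \
    + 0.01 * np.random.rand(32, 32, 32)
core, factors = hosvd_truncate(t, ranks=(4, 4, 4))
print(np.linalg.norm(tucker_reconstruct(core, factors) - t) / np.linalg.norm(t))
```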

6.
Article in English | MEDLINE | ID: mdl-30136951

ABSTRACT

There currently exist two dominant strategies to reduce data sizes in analysis and visualization: reducing the precision of the data, e.g., through quantization, or reducing its resolution, e.g., by subsampling. Both have advantages and disadvantages and both face fundamental limits at which the reduced information ceases to be useful. The paper explores the additional gains that could be achieved by combining both strategies. In particular, we present a common framework that allows us to study the trade-off in reducing precision and/or resolution in a principled manner. We represent data reduction schemes as progressive streams of bits and study how various bit orderings such as by resolution, by precision, etc., impact the resulting approximation error across a variety of data sets as well as analysis tasks. Furthermore, we compute streams that are optimized for different tasks to serve as lower bounds on the achievable error. Scientific data management systems can use the results presented in this paper as guidance on how to store and stream data to make efficient use of the limited storage and bandwidth in practice.
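The trade-off is easy to make concrete at an equal bit budget: half the samples at 32 bits versus all samples at 16 bits (a toy sketch, not the paper's bit-stream framework):

```python
import numpy as np

def reduce_resolution(data):
    # Keep every other sample; reconstruct the rest by linear interpolation.
    kept = np.arange(0, len(data), 2)
    return np.interp(np.arange(len(data)), kept, data[kept])

def reduce_precision(data, bits):
    # Uniform scalar quantization to the given number of bits.
    lo, hi = data.min(), data.max()
    levels = 2 ** bits - 1
    return np.round((data - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo

rng = np.random.default_rng(0)
data = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.01 * rng.standard_normal(1024)
for name, approx in [("half resolution, 32-bit", reduce_resolution(data)),
                     ("full resolution, 16-bit", reduce_precision(data, 16))]:
    print(name, "RMSE:", np.sqrt(np.mean((approx - data) ** 2)))
```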

7.
IEEE Trans Vis Comput Graph ; 22(1): 985-94, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26529742

ABSTRACT

Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. We focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells.
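A 1D sketch of the interpolating linear-wavelet building block via lifting (predict step only; the paper's tensor-product B-spline wavelets and adaptive-mesh bookkeeping go well beyond this):

```python
import numpy as np

def forward_level(x):
    # Split into evens (coarse) and odds; predict each odd sample by the
    # average of its even neighbors. The residual is the detail coefficient.
    even, odd = x[::2].copy(), x[1::2]
    right = np.append(even[1:], even[-1])      # clamp at the right boundary
    detail = odd - 0.5 * (even + right)
    return even, detail

def inverse_level(even, detail):
    right = np.append(even[1:], even[-1])
    odd = detail + 0.5 * (even + right)
    out = np.empty(len(even) + len(odd))
    out[::2], out[1::2] = even, odd
    return out

x = np.sin(np.linspace(0, np.pi, 16))
even, detail = forward_level(x)
assert np.allclose(inverse_level(even, detail), x)   # exact reconstruction
```

Details near zero can be discarded, and regions whose details are all dropped need no refinement, which is what yields a sparse, adaptively refined mesh.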

8.
IEEE Trans Vis Comput Graph ; 20(1): 84-98, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24201328

ABSTRACT

We present Grouper: an all-in-one compact file format, random-access data structure, and streamable representation for large triangle meshes. Similarly to the recently published SQuad representation, Grouper represents the geometry and connectivity of a mesh by grouping vertices and triangles into fixed-size records, most of which store two adjacent triangles and a shared vertex. Unlike SQuad, however, Grouper interleaves geometry with connectivity and uses a new connectivity representation to ensure that vertices and triangles can be stored in a coherent order that enables memory-efficient sequential stream processing. We present a linear-time construction algorithm that allows streaming out Grouper meshes using a small memory footprint while preserving the initial ordering of vertices. As a part of this construction, we show how the problem of assigning vertices and triangles to groups reduces to a well-known NP-hard optimization problem, and present a simple yet effective heuristic solution that performs well in practice. Our array-based Grouper representation also doubles as a triangle mesh data structure that allows direct access to vertices and triangles. Storing only about two integer references per triangle--i.e., less than the three vertex references stored with each triangle in a conventional indexed mesh format--Grouper answers both incidence and adjacency queries in amortized constant time. Our compact representation enables data-parallel processing on multicore computers, instant partitioning and fast transmission for distributed processing, as well as efficient out-of-core access. We demonstrate the versatility and performance benefits of Grouper using a suite of example meshes and processing kernels.
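A hypothetical picture of one fixed-size record (field names and exact layout are illustrative, not the paper's encoding):

```python
from dataclasses import dataclass

@dataclass
class GrouperRecord:
    # One shared vertex plus its two grouped triangles. Each triangle's
    # remaining corners are references to other records, so connectivity
    # costs roughly four integers per record, i.e., about two per triangle.
    x: float
    y: float
    z: float
    tri0: tuple[int, int]   # remaining two corners of the first triangle
    tri1: tuple[int, int]   # remaining two corners of the second triangle
```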

9.
IEEE Trans Vis Comput Graph ; 20(12): 2674-83, 2014 Dec.
Article in English | MEDLINE | ID: mdl-26356981

ABSTRACT

Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
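This scheme underlies the zfp library referenced in entry 4. Assuming its Python bindings (zfpy) are installed, fixed-rate usage looks roughly like this:

```python
import numpy as np
import zfpy  # zfp's Python bindings; pip install zfpy

data = np.random.rand(64, 64, 64)                 # 3D double-precision field
compressed = zfpy.compress_numpy(data, rate=8.0)  # fixed 8 bits per value
restored = zfpy.decompress_numpy(compressed)      # shape/dtype come from the header
print("max abs error:", np.max(np.abs(restored - data)))
```

Because every 4x4x4 block occupies the same number of bits in rate mode, any block can be located and decoded independently, which is what enables random access.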

10.
IEEE Trans Vis Comput Graph ; 17(12): 1842-51, 2011 Dec.
Article in English | MEDLINE | ID: mdl-22034301

ABSTRACT

We present topological spines--a new visual representation that preserves the topological and geometric structure of a scalar field. This representation encodes the spatial relationships of the extrema of a scalar field together with the local volume and nesting structure of the surrounding contours. Unlike other topological representations, such as contour trees, our approach preserves the local geometric structure of the scalar field, including structural cycles that are useful for exposing symmetries in the data. To obtain this representation, we describe a novel mechanism based on the extraction of extremum graphs--sparse subsets of the Morse-Smale complex that retain the important structural information without the clutter and occlusion problems that arise from visualizing the entire complex directly. Extremum graphs form a natural multiresolution structure that allows the user to suppress noise and enhance topological features via the specification of a persistence range. Applications of our approach include the visualization of 3D scalar fields without occlusion artifacts, and the exploratory analysis of high-dimensional functions.
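The persistence-driven noise suppression can be illustrated in 1D with a merge-tree sketch (far simpler than the extremum graphs and Morse-Smale complexes the paper works with): each local minimum is paired with the saddle value at which its basin merges into a deeper one, and extrema whose persistence falls below a chosen threshold are treated as noise.

```python
import numpy as np

def minima_persistence(f):
    # Process samples in increasing order, tracking connected components of
    # the sublevel set with a tiny union-find; a merge kills the basin whose
    # minimum is shallower, at persistence f[saddle] - f[its minimum].
    parent, comp_min, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in np.argsort(f):
        roots = {find(j) for j in (i - 1, i + 1) if j in parent}
        parent[i] = i
        if not roots:
            comp_min[i] = i                         # a new basin is born here
        else:
            ordered = sorted(roots, key=lambda r: f[comp_min[r]])
            parent[i] = ordered[0]                  # join the deeper basin
            if len(ordered) == 2:                   # i is a saddle: merge
                dying = ordered[1]
                pairs.append((int(comp_min[dying]),
                              float(f[i] - f[comp_min[dying]])))
                parent[dying] = ordered[0]
    return pairs  # (index of minimum, persistence); the global minimum never dies

f = np.array([3.0, 1.0, 2.0, 0.5, 2.5, 1.8, 2.2])
print(minima_persistence(f))   # [(1, 1.0), (5, ~0.7)]: two shallow basins
```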

11.
IEEE Trans Vis Comput Graph ; 17(12): 1852-61, 2011 Dec.
Article in English | MEDLINE | ID: mdl-22034302

ABSTRACT

Sparse, irregular sampling is becoming a necessity for reconstructing large and high-dimensional signals. However, the analysis of this type of data remains a challenge. One issue is the robust selection of neighborhoods--a crucial part of analytic tools such as topological decomposition, clustering and gradient estimation. When extracting the topology of sparsely sampled data, common neighborhood strategies such as k-nearest neighbors may lead to inaccurate results, either due to missing neighborhood connections, which introduce false extrema, or due to spurious connections, which conceal true extrema. Other neighborhoods, such as the Delaunay triangulation, are costly to compute and store even in relatively low dimensions. In this paper, we address these issues. We present two new types of neighborhood graphs: a variation on and a generalization of empty region graphs, which considerably improve the robustness of neighborhood-based analysis tools, such as topological decomposition. Our findings suggest that these neighborhood graphs lead to more accurate topological representations of low- and high-dimensional data sets at relatively low cost, both in terms of storage and computation time. We describe the implications of our work in the analysis and visualization of scalar functions, and provide general strategies for computing and applying our neighborhood graphs towards robust data analysis.
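For a concrete member of the empty-region-graph family (the family the paper varies and generalizes, not necessarily its exact constructions), consider the Gabriel graph:

```python
import numpy as np

def gabriel_graph(points):
    # Connect p and q iff the ball whose diameter is the segment pq contains
    # no other sample point. Brute force O(n^3); fine for a sketch.
    n = len(points)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            mid = 0.5 * (points[i] + points[j])
            r2 = np.sum((points[i] - mid) ** 2)          # squared radius
            d2 = np.sum((points - mid) ** 2, axis=1)
            d2[[i, j]] = np.inf                          # ignore the endpoints
            if np.all(d2 > r2):                          # empty region test
                edges.append((i, j))
    return edges

pts = np.random.default_rng(1).random((50, 3))           # works in any dimension
print(len(gabriel_graph(pts)), "edges")
```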

12.
IEEE Comput Graph Appl ; 30(3): 32-44, 2010.
Article in English | MEDLINE | ID: mdl-20650716

ABSTRACT

A hybrid parallel and out-of-core algorithm pads blocks from a structured grid with layers of ghost data from adjacent blocks. This enables end-to-end streaming computations on very large data sets that gracefully adapt to available computing resources, from a single-processor machine to parallel visualization clusters.
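The core indexing step reduces to clamped slicing (illustrative numpy sketch; the paper's contribution is performing this out-of-core and in parallel across many blocks):

```python
import numpy as np

def pad_with_ghosts(grid, lo, hi, ghost=1):
    # Extract the block grid[lo:hi] per axis, padded with `ghost` layers of
    # neighbor data; clamps at the domain boundary where no neighbor exists.
    slices = tuple(slice(max(l - ghost, 0), min(h + ghost, s))
                   for l, h, s in zip(lo, hi, grid.shape))
    return grid[slices]

block = pad_with_ghosts(np.random.rand(128, 128, 128),
                        lo=(32, 32, 32), hi=(64, 64, 64))
print(block.shape)   # (34, 34, 34): a 32^3 block plus one ghost layer per side
```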

13.
IEEE Trans Vis Comput Graph ; 13(6): 1536-43, 2007.
Article in English | MEDLINE | ID: mdl-17968107

ABSTRACT

With the exponential growth in size of geometric data, it is becoming increasingly important to make effective use of multilevel caches, limited disk storage, and bandwidth. As a result, recent work in the visualization community has focused either on designing sequential access compression schemes or on producing cache-coherent layouts of (uncompressed) meshes for random access. Unfortunately, combining these two strategies is challenging as they fundamentally assume conflicting modes of data access. In this paper, we propose a novel order-preserving compression method that supports transparent random access to compressed triangle meshes. Our decompression method selectively fetches from disk, decodes, and caches in memory requested parts of a mesh. We also provide a general mesh access API for seamless mesh traversal and incidence queries. While the method imposes no particular mesh layout, it is especially suitable for cache-oblivious layouts, which minimize the number of decompression I/O requests and provide high cache utilization during access to decompressed, in-memory portions of the mesh. Moreover, the transparency of our scheme enables improved performance without the need for application code changes. We achieve compression rates on the order of 20:1 and significantly improved I/O performance due to reduced data transfer. To demonstrate the benefits of our method, we implement two common applications as benchmarks. By using cache-oblivious layouts for the input models, we observe 2-6 times overall speedup compared to using uncompressed meshes.
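The transparent access pattern can be sketched as an LRU cache over decoded mesh pieces, with hypothetical `fetch` and `decode` hooks standing in for the on-disk format and codec:

```python
from collections import OrderedDict

class CompressedMesh:
    def __init__(self, fetch_block, decode_block, capacity=64):
        self.fetch, self.decode = fetch_block, decode_block   # I/O + codec hooks
        self.cache, self.capacity = OrderedDict(), capacity

    def block(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)        # mark as recently used
        else:
            self.cache[block_id] = self.decode(self.fetch(block_id))
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)      # evict least recently used
        return self.cache[block_id]
```

A cache-coherent layout keeps consecutive queries inside the same few blocks, which is why it pairs well with this kind of block-granularity decoding.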

14.
IEEE Trans Vis Comput Graph ; 13(1): 145-55, 2007.
Article in English | MEDLINE | ID: mdl-17093343

ABSTRACT

Unstructured tetrahedral meshes are commonly used in scientific computing to represent scalar, vector, and tensor fields in three dimensions. Visualization of these meshes can be difficult to perform interactively due to their size and complexity. By reducing the size of the data, we can accomplish real-time visualization necessary for scientific analysis. We propose a two-step approach for streaming simplification of large tetrahedral meshes. Our algorithm arranges the data on disk in a streaming, I/O-efficient format that allows coherent access to the tetrahedral cells. A quadric-based simplification is sequentially performed on small portions of the mesh in-core. Our output is a coherent streaming mesh which facilitates future processing. Our technique is fast, produces high quality approximations, and operates out-of-core to process meshes too large for main memory.
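The quadric-based step follows Garland-Heckbert-style error quadrics; a minimal sketch of the error measure (the streaming layout and the collapse loop are omitted):

```python
import numpy as np

def plane_quadric(p0, p1, p2):
    # Quadric of the plane through a (non-degenerate) triangle: K = pp^T
    # for the homogeneous plane vector p = [a, b, c, d], ax + by + cz + d = 0.
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    plane = np.append(n, -np.dot(n, p0))
    return np.outer(plane, plane)

def vertex_quadrics(vertices, triangles):
    # Each face contributes its plane quadric to its three vertices.
    Q = np.zeros((len(vertices), 4, 4))
    for i, j, k in triangles:
        K = plane_quadric(vertices[i], vertices[j], vertices[k])
        Q[i] += K
        Q[j] += K
        Q[k] += K
    return Q

def error(Q, v):
    # Sum of squared distances from v to all planes accumulated in Q.
    vh = np.append(v, 1.0)
    return vh @ Q @ vh
```

Simplification then repeatedly collapses the edge whose combined quadric yields the smallest error, here applied to one in-core portion of the stream at a time.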


Subjects
Algorithms; Computer Graphics; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Information Storage and Retrieval/methods; User-Computer Interface; Computer Systems
15.
IEEE Trans Vis Comput Graph ; 12(5): 1213-20, 2006.
Article in English | MEDLINE | ID: mdl-17080854

ABSTRACT

Current computer architectures employ caching to improve the performance of a wide variety of applications. One of the main characteristics of such cache schemes is the use of block fetching whenever an uncached data element is accessed. To maximize the benefit of the block fetching mechanism, we present novel cache-aware and cache-oblivious layouts of surface and volume meshes that improve the performance of interactive visualization and geometric processing algorithms. Based on a general I/O model, we derive new cache-aware and cache-oblivious metrics that have high correlations with the number of cache misses when accessing a mesh. In addition to guiding the layout process, our metrics can be used to quantify the quality of a layout, e.g., for comparing different layouts of the same mesh and for determining whether a given layout is amenable to significant improvement. We show that layouts of unstructured meshes optimized for our metrics result in improvements over conventional layouts in the performance of visualization applications such as isosurface extraction and view-dependent rendering. Moreover, we improve upon recent cache-oblivious mesh layouts in terms of performance, applicability, and accuracy.
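A toy stand-in for such a metric (the paper's cache-aware and cache-oblivious metrics are more refined): divide the layout into fetch blocks of B vertices and count mesh edges that straddle two blocks, a rough proxy for cache misses during traversal.

```python
import random

def cross_block_edges(layout, edges, B=64):
    # layout: vertex ids in storage order; edges: (u, v) vertex id pairs.
    pos = {v: i for i, v in enumerate(layout)}
    return sum(pos[u] // B != pos[v] // B for u, v in edges)

# Example: a path graph laid out in order vs. shuffled.
n = 1024
edges = [(i, i + 1) for i in range(n - 1)]
ordered = list(range(n))
shuffled = random.sample(ordered, n)
print(cross_block_edges(ordered, edges), "vs", cross_block_edges(shuffled, edges))
```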

16.
IEEE Trans Vis Comput Graph ; 12(5): 1245-50, 2006.
Article in English | MEDLINE | ID: mdl-17080858

ABSTRACT

Large-scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data that needs to be transferred. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data.
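Why good prediction enables lossless coding can be shown in a few lines (residual stage only; the paper's data-dependent predictors and fast entropy coder are the actual contribution): XORing each value's bits with its prediction leaves runs of leading zeros that compress well.

```python
import numpy as np

def previous_value(values):
    # A trivial predictor: each value is predicted by its predecessor.
    preds = np.empty_like(values)
    preds[0] = 0.0
    preds[1:] = values[:-1]
    return preds

def xor_residuals(values):
    bits = values.view(np.uint64)                 # raw float64 bit patterns
    preds = previous_value(values).view(np.uint64)
    return bits ^ preds                           # matching leading bits cancel to 0

data = np.cumsum(np.random.randn(1000) * 1e-3)    # smooth, simulation-like signal
res = xor_residuals(data)
zeros = [64 - int(r).bit_length() for r in res]   # leading zeros per residual
print("mean leading zeros:", sum(zeros) / len(zeros))  # what an entropy coder exploits
```

Because the transform is an exact XOR, inverting it recovers the original bits exactly, with no quantization onto an integer grid.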

17.
J Neurooncol ; 68(3): 199-205, 2004 Jul.
Article in English | MEDLINE | ID: mdl-15332322

ABSTRACT

It has been shown that human malignant glioma tumours consist of several subpopulations of tumour cells. Owing to heterogeneity and differing degrees of vascularisation, these subpopulations vary in their resistance to chemotherapy and radiation therapy, so successful treatment depends on the ability to target tumour cells specifically. Boron neutron capture therapy (BNCT) is a bimodal radiation therapy that exploits the ability of the stable isotope boron-10 to capture neutrons; the resulting disintegration products deposit large amounts of energy within a short range, approximately one cell diameter. Selective irradiation of a target cell can therefore be accomplished if a sufficient amount of boron has accumulated, making the cell-associated boron concentration critically important. We investigated the accumulation of the boron carrier boronophenylalanine (BPA) in two human glioma cell subpopulations and a human fibroblast cell line in vitro. The cells were incubated at low boron concentrations (0-5 microg B/ml); oil filtration was then used to separate extracellular from cell-associated boron, and inductively coupled plasma atomic emission spectroscopy (ICP-AES) was used for boron determination. Significant (P < 0.05) differences in accumulation ratio (the ratio of cell-associated to extracellular boron concentration) were found between the human malignant glioma cell lines. Human fibroblasts, used to represent normal cells, showed growth-dependent uptake and a lower accumulation ratio than the glioma cells. Our findings indicate that BPA concentration, incubation time, and differences in boron uptake between cell subpopulations should be considered in BNCT.


Subjects
Boron Neutron Capture Therapy; Boron/pharmacokinetics; Brain Neoplasms/metabolism; Brain Neoplasms/radiotherapy; Glioma/metabolism; Glioma/radiotherapy; Phenylalanine/analogs & derivatives; Boranes/pharmacokinetics; Brain Neoplasms/pathology; Cell Line; Cell Line, Tumor; Fibroblasts/cytology; Fibroblasts/metabolism; Glioblastoma/metabolism; Glioblastoma/pathology; Glioblastoma/radiotherapy; Glioma/pathology; Humans; Phenylalanine/pharmacokinetics