Results 1 - 20 of 27
1.
IEEE Trans Vis Comput Graph ; 30(1): 965-974, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37883276

ABSTRACT

Scene representation networks (SRNs) have recently been proposed for compression and visualization of scientific data. However, state-of-the-art SRNs do not adapt the allocation of available network parameters to the complex features found in scientific data, leading to a loss in reconstruction quality. We address this shortcoming with an adaptively placed multi-grid SRN (APMGSRN) and propose a domain decomposition training and inference technique for accelerated parallel training on multi-GPU systems. We also release an open-source neural volume rendering application that allows plug-and-play rendering with any PyTorch-based SRN. Our proposed APMGSRN architecture uses multiple spatially adaptive feature grids that learn where to be placed within the domain to dynamically allocate more neural network resources where error is high in the volume, improving state-of-the-art reconstruction accuracy of SRNs for scientific data without requiring the expensive octree refining, pruning, and traversal of previous adaptive models. In our domain decomposition approach for representing large-scale data, we train a set of APMGSRNs in parallel on separate bricks of the volume to reduce training time while avoiding the overhead of an out-of-core solution for volumes too large to fit in GPU memory. After training, the lightweight SRNs are used for real-time neural volume rendering in our open-source renderer, where arbitrary view angles and transfer functions can be explored. A copy of this paper, all code, all models used in our experiments, and all supplemental materials and videos are available at https://github.com/skywolf829/APMGSRN.
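The central idea, spatially adaptive feature grids that the network learns to place within the domain, can be illustrated with a minimal PyTorch sketch. The class and parameter names below are hypothetical and greatly simplified relative to the released APMGSRN code; the sketch only shows one learnable affine placement per grid plus trilinear feature sampling.

```python
# Minimal sketch (not the authors' implementation) of a spatially adaptive
# feature grid: a learnable affine transform places the grid in the domain,
# and features are sampled with trilinear interpolation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveFeatureGrid(nn.Module):
    def __init__(self, channels=8, res=16):
        super().__init__()
        # Learnable feature volume: (1, C, D, H, W)
        self.features = nn.Parameter(torch.randn(1, channels, res, res, res) * 0.01)
        # Learnable placement: scale and translation in the [-1, 1]^3 domain.
        self.scale = nn.Parameter(torch.ones(3))
        self.shift = nn.Parameter(torch.zeros(3))

    def forward(self, xyz):                      # xyz: (N, 3) in [-1, 1]^3
        local = (xyz - self.shift) * self.scale  # map into the grid's local frame
        grid = local.view(1, 1, 1, -1, 3)        # grid_sample expects (1, D, H, W, 3)
        feat = F.grid_sample(self.features, grid,
                             mode='bilinear', align_corners=True)
        return feat.view(self.features.shape[1], -1).t()   # (N, C)

# A query point gathers features from every grid; a small MLP decodes them.
grids = nn.ModuleList([AdaptiveFeatureGrid() for _ in range(4)])
decoder = nn.Sequential(nn.Linear(4 * 8, 64), nn.ReLU(), nn.Linear(64, 1))
pts = torch.rand(1024, 3) * 2 - 1
pred = decoder(torch.cat([g(pts) for g in grids], dim=-1))  # (1024, 1) scalar values
```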

2.
IEEE Trans Vis Comput Graph ; 29(12): 5483-5495, 2023 Dec.
Article in English | MEDLINE | ID: mdl-36251892

ABSTRACT

We present a novel technique for hierarchical super resolution (SR) with neural networks (NNs), which upscales volumetric data represented with an octree data structure to a high-resolution uniform grid with minimal seam artifacts on octree node boundaries. Our method uses existing state-of-the-art SR models and adds the flexibility to upscale input data with varying levels of detail across the domain, instead of only the uniform grid data supported by previous approaches. The key is to use a hierarchy of SR NNs, each trained to perform 2× SR between two levels of detail, together with a hierarchical SR algorithm that minimizes seam artifacts by starting from the coarsest level of detail and working up. We show that our hierarchical approach outperforms baseline interpolation and hierarchical upscaling methods, and we demonstrate its usefulness across three use cases: data reduction using hierarchical downsampling+SR instead of uniform downsampling+SR, computation savings for hierarchical finite-time Lyapunov exponent field calculation, and super-resolving low-resolution simulation results for high-resolution approximate visualization.
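A rough sketch of the hierarchical upscaling loop described above, assuming hypothetical `sr_2x` networks (one per level of detail) and a dictionary of stored data blocks per level; the actual method's seam handling is more involved than this simple overwrite.

```python
# Illustrative sketch (assumptions, not the paper's code): hierarchically
# upscale an octree-style multiresolution volume to a uniform fine grid.
import numpy as np

def hierarchical_sr(coarse_volume, blocks_at, sr_2x, coarsest, finest):
    """coarse_volume: full 3D array at the coarsest level.
    blocks_at[level]: {(z, y, x) origin: 3D block array} stored at that level.
    sr_2x[level]: trained 2x SR network mapping a 3D array to one twice as large."""
    volume = coarse_volume
    for level in range(coarsest, finest):
        volume = sr_2x[level](volume)                 # 2x SR to the next level of detail
        # Wherever the finer level stores real data, overwrite the SR estimate so
        # subsequent upscaling starts from ground truth and seams stay small.
        for (z, y, x), block in blocks_at.get(level + 1, {}).items():
            dz, dy, dx = block.shape
            volume[z:z+dz, y:y+dy, x:x+dx] = block
    return volume

# Toy run: nearest-neighbor upsampling stands in for the trained 2x SR networks.
upsample = lambda v: np.kron(v, np.ones((2, 2, 2)))
coarse = np.random.rand(4, 4, 4)
blocks = {1: {(0, 0, 0): np.random.rand(4, 4, 4)}}    # finer data for one octant
fine = hierarchical_sr(coarse, blocks, {0: upsample, 1: upsample}, coarsest=0, finest=2)
print(fine.shape)                                      # (16, 16, 16)
```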

3.
IEEE Trans Vis Comput Graph ; 29(12): 5434-5450, 2023 Dec.
Article in English | MEDLINE | ID: mdl-36251895

ABSTRACT

The objective of this work is to develop error-bounded lossy compression methods that preserve topological features in 2D and 3D vector fields. Specifically, we explore the preservation of critical points in piecewise linear and bilinear vector fields. We define the preservation of critical points as, without any false positive, false negative, or false type in the decompressed data, (1) keeping each critical point in its original cell and (2) retaining the type of each critical point (e.g., saddle or attracting node). The key to our method is to derive a vertex-wise error bound for each grid point and to compress the input data together with the error bound field using a modified lossy compressor. Our compression algorithm can also be embarrassingly parallelized for large data handling and in situ processing. We benchmark our method against existing lossy compressors in terms of false positive/negative/type rates, compression ratio, and various vector field visualizations with several scientific applications.
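As an illustration of the "false type" criterion, a 2D critical point can be classified from the eigenvalues of the velocity Jacobian and the classification compared before and after decompression. The sketch below is an assumption-level simplification (degenerate and center cases are ignored), not the paper's algorithm.

```python
# Sketch of the "false type" check: classify a 2D critical point from the
# eigenvalues of the velocity Jacobian, then compare types between the
# original and decompressed fields.
import numpy as np

def critical_point_type(jacobian):
    re = np.linalg.eigvals(jacobian).real
    if np.all(re < 0):
        return 'attracting node/focus'
    if np.all(re > 0):
        return 'repelling node/focus'
    return 'saddle'        # degenerate/center cases omitted for brevity

def is_false_type(jac_original, jac_decompressed):
    return critical_point_type(jac_original) != critical_point_type(jac_decompressed)

J = np.array([[-1.0, 0.2], [0.1, -2.0]])      # original Jacobian at a critical point
J_dec = J + 0.05 * np.random.randn(2, 2)      # perturbation from lossy compression
print(critical_point_type(J), is_false_type(J, J_dec))
```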

4.
IEEE Trans Vis Comput Graph ; 29(6): 3052-3066, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35130159

ABSTRACT

We explore an online reinforcement learning (RL) paradigm to dynamically optimize parallel particle tracing performance in distributed-memory systems. Our method combines three novel components: (1) a work donation algorithm, (2) a high-order workload estimation model, and (3) a communication cost model. First, we design an RL-based work donation algorithm. Our algorithm monitors the workloads of processes and creates RL agents to donate data blocks and particles from high-workload processes to low-workload processes to minimize program execution time. The agents learn the donation strategy on the fly based on reward and cost functions designed to account for processes' workload changes and the data transfer costs of donation actions. Second, we propose a workload estimation model that helps RL agents estimate the workload distribution of processes in future computations. Third, we design a communication cost model that considers both block and particle data exchange costs, helping RL agents make effective decisions with minimized communication costs. We demonstrate that our algorithm adapts to different flow behaviors in large-scale fluid dynamics, ocean, and weather simulation data. In evaluations with up to 16,384 processors, our algorithm improves parallel particle tracing performance in terms of parallel efficiency, load balance, and I/O and communication costs.

5.
Nat Commun ; 13(1): 368, 2022 01 18.
Article in English | MEDLINE | ID: mdl-35042872

ABSTRACT

Reinforcement learning (RL) approaches that combine a tree search with deep learning have found remarkable success in searching exorbitantly large, albeit discrete, action spaces, as in chess, Shogi, and Go. Many real-world materials discovery and design applications, however, involve multi-dimensional search problems and learning domains that have continuous action spaces. Exploring high-dimensional potential energy models of materials is an example. Traditionally, these searches are time-consuming (often several years for a single bulk system) and driven by human intuition and/or expertise, and more recently by global/local optimization searches that have issues with convergence and/or do not scale well with the search dimensionality. Here, in a departure from discrete-action and other gradient-based approaches, we introduce an RL strategy based on decision trees that incorporates modified rewards for improved exploration, efficient sampling during playouts, and a "window scaling scheme" for enhanced exploitation, to enable efficient and scalable search for continuous action space problems. Using high-dimensional artificial landscapes and control RL problems, we successfully benchmark our approach against popular global optimization schemes and state-of-the-art policy gradient methods, respectively. We demonstrate its efficacy in parameterizing potential models (physics-based and high-dimensional neural networks) for 54 different elemental systems across the periodic table as well as alloys. We analyze error trends across different elements in the latent space and trace their origin to elemental structural diversity and the smoothness of the element energy surface. Broadly, our RL strategy will be applicable to many other physical science problems involving search over continuous action spaces.

6.
IEEE Trans Vis Comput Graph ; 27(8): 3463-3480, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33856997

ABSTRACT

We present the Feature Tracking Kit (FTK), a framework that simplifies, scales, and delivers various feature-tracking algorithms for scientific data. The key to FTK is our simplicial spacetime meshing scheme, which generalizes both regular and unstructured spatial meshes to spacetime while tessellating spacetime mesh elements into simplices. The benefits of using simplicial spacetime meshes include (1) reducing ambiguity cases for feature extraction and tracking, (2) simplifying the handling of degeneracies using symbolic perturbations, and (3) enabling scalable and parallel processing. The use of simplicial spacetime meshing simplifies and improves the implementation of several feature-tracking algorithms for critical points, quantum vortices, and isosurfaces. As a software framework, FTK provides end users with VTK/ParaView filters, Python bindings, a command line interface, and programming interfaces for feature-tracking applications. We demonstrate use cases as well as scalability studies through both synthetic data and scientific applications including tokamak, fluid dynamics, and superconductivity simulations. We also conduct end-to-end performance studies on the Summit supercomputer. FTK is open source under the MIT license: https://github.com/hguo/ftk.
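For background, extracting a critical point of a piecewise-linear 2D vector field over one simplex amounts to solving for the barycentric coordinates at which the interpolated vector vanishes. The sketch below illustrates only this local test; FTK itself works on spacetime simplices and resolves degeneracies with symbolic perturbation.

```python
# Simplified illustration (not FTK's code) of critical-point extraction in a
# piecewise-linear 2D vector field over one triangle: find barycentric
# coordinates where the linearly interpolated vector vanishes.
import numpy as np

def critical_point_in_triangle(verts, vecs):
    """verts: (3, 2) triangle vertex positions; vecs: (3, 2) vector values there.
    Returns the critical point position, or None if it lies outside the triangle."""
    # Solve sum_i mu_i * vecs[i] = 0 with sum_i mu_i = 1 for barycentric mu.
    A = np.vstack([vecs.T, np.ones(3)])        # 3x3 system
    b = np.array([0.0, 0.0, 1.0])
    try:
        mu = np.linalg.solve(A, b)
    except np.linalg.LinAlgError:
        return None                            # degenerate; FTK handles this symbolically
    if np.all(mu >= 0) and np.all(mu <= 1):
        return mu @ verts                      # interpolate the position
    return None

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
v = np.array([[-1.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
print(critical_point_in_triangle(tri, v))      # ~[0.333, 0.333]
```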

7.
IEEE Trans Vis Comput Graph ; 27(6): 2808-2820, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33877980

ABSTRACT

We present a novel distributed union-find algorithm that features asynchronous parallelism and k-d tree based load balancing for scalable visualization and analysis of scientific data. Applications of union-find include level set extraction and critical point tracking, but distributed union-find can suffer from high synchronization costs and imbalanced workloads across parallel processes. In this study, we prove that global synchronizations in existing distributed union-find can be eliminated without changing final results, allowing overlapped communications and computations for scalable processing. We also use a k-d tree decomposition to redistribute inputs, in order to improve workload balancing. We benchmark the scalability of our algorithm with up to 1,024 processes using both synthetic and application data. We demonstrate the use of our algorithm in critical point tracking and super-level set extraction with high-speed imaging experiments and fusion plasma simulations, respectively.
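For readers unfamiliar with the primitive, the serial union-find structure that the paper distributes and load-balances looks roughly as follows (this is textbook union by rank with path compression, not the asynchronous distributed variant contributed by the paper).

```python
# Serial union-find building block (background only).
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):                    # with path compression
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):                # union by rank
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

# Example: connect grid cells that belong to the same super-level set component.
uf = UnionFind(6)
for a, b in [(0, 1), (1, 2), (4, 5)]:
    uf.union(a, b)
print(uf.find(2) == uf.find(0), uf.find(3) == uf.find(0))   # True False
```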

8.
IEEE Trans Vis Comput Graph ; 26(1): 23-33, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31425097

ABSTRACT

We propose InSituNet, a deep-learning-based surrogate model to support parameter space exploration for ensemble simulations that are visualized in situ. In situ visualization, which generates visualizations at simulation time, is becoming prevalent for handling large-scale simulations because of I/O and storage constraints. However, in situ visualization approaches limit the flexibility of post hoc exploration because the raw simulation data are no longer available. Although multiple image-based approaches have been proposed to mitigate this limitation, they lack the ability to explore the simulation parameters. Our approach allows flexible exploration of parameter space for large-scale ensemble simulations by taking advantage of recent advances in deep learning. Specifically, we design InSituNet as a convolutional regression model that learns the mapping from the simulation and visualization parameters to the visualization results. With the trained model, users can generate new images for different simulation parameters under various visualization settings, enabling in-depth analysis of the underlying ensemble simulations. We demonstrate the effectiveness of InSituNet in combustion, cosmology, and ocean simulations through quantitative and qualitative evaluations.

9.
IEEE Trans Vis Comput Graph ; 26(4): 1716-1731, 2020 Apr.
Article in English | MEDLINE | ID: mdl-30418881

ABSTRACT

We propose the surface density estimate (SDE) to model the spatial distribution of surface features (isosurfaces, ridge surfaces, and streamsurfaces) in 3D ensemble simulation data. The inputs to SDE computation are surface features represented as polygon meshes; no field datasets (e.g., scalar fields or vector fields) are required. The SDE is defined as the kernel density estimate of the infinite set of points on the input surfaces and is approximated by accumulating the surface densities of triangular patches. We also propose an algorithm to guide the selection of a proper kernel bandwidth for SDE computation. An ensemble Feature Exploration method based on Surface densiTy EstimAtes (eFESTA) is then proposed to extract and visualize the major trends of ensemble surface features. For an ensemble of surface features, each surface is first transformed into a density field based on its contribution to the SDE, and the resulting density fields are organized into a hierarchical representation based on the pairwise distances between them. The hierarchical representation is then used to guide visual exploration of the density fields as well as the underlying surface features. We demonstrate the application of our method using isosurfaces in ensemble scalar fields, Lagrangian coherent structures in uncertain unsteady flows, and streamsurfaces in ensemble fluid flows.
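A crude way to approximate such a surface density estimate is to sample points on each triangle and accumulate an area-weighted Gaussian kernel onto a regular grid. The sketch below is a brute-force illustration under that assumption, not the paper's bandwidth-selection or hierarchical machinery.

```python
# Brute-force illustration of a surface density estimate: sample points on
# each triangle and accumulate an area-weighted Gaussian kernel on a grid.
import numpy as np

def surface_density_estimate(triangles, grid_shape, bandwidth, samples_per_tri=32):
    """triangles: (T, 3, 3) vertex coordinates in [0, 1]^3."""
    zz, yy, xx = np.meshgrid(*[np.linspace(0, 1, n) for n in grid_shape], indexing='ij')
    grid_pts = np.stack([zz, yy, xx], axis=-1).reshape(-1, 3)
    density = np.zeros(len(grid_pts))
    for tri in triangles:
        # Uniform samples on the triangle via barycentric coordinates.
        r1, r2 = np.random.rand(2, samples_per_tri)
        s = np.sqrt(r1)
        bary = np.stack([1 - s, s * (1 - r2), s * r2], axis=1)        # (S, 3)
        pts = bary @ tri                                              # (S, 3)
        area = 0.5 * np.linalg.norm(np.cross(tri[1] - tri[0], tri[2] - tri[0]))
        d2 = ((grid_pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)  # (G, S)
        density += area / samples_per_tri * np.exp(-d2 / (2 * bandwidth ** 2)).sum(1)
    return density.reshape(grid_shape) / (density.sum() + 1e-12)      # normalize

tris = np.random.rand(10, 3, 3)
sde = surface_density_estimate(tris, (16, 16, 16), bandwidth=0.1)
```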

10.
IEEE Trans Vis Comput Graph ; 25(7): 2349-2361, 2019 Jul.
Article in English | MEDLINE | ID: mdl-29994004

ABSTRACT

We present an algorithm for parallel volume rendering that is a hybrid between classical object-order and image-order techniques. The algorithm operates on unstructured grids (as well as structured ones) and thus can deal with block boundaries that interleave in complex ways. It also deals effectively with cases that are prone to load imbalance, i.e., cases where cell sizes differ dramatically, either because of the nature of the input data or because of the effects of the camera transformation. The algorithm divides work over resources such that each phase of its processing is bounded in the amount of computation it can perform. We demonstrate its efficacy through a series of studies, varying camera position, data set size, transfer function, image size, and processor count. At the largest scale, our experiments ran on 8,192 processors and operated on data sets with more than one billion cells. Overall, we find that our hybrid algorithm performs well in all cases, because it naturally adapts its computation based on workload and can operate like either an object-order or an image-order technique in scenarios where those techniques are efficient.

11.
IEEE Trans Vis Comput Graph ; 25(9): 2710-2724, 2019 09.
Article in English | MEDLINE | ID: mdl-30047883

ABSTRACT

We present an efficient and scalable solution for estimating uncertain transport behaviors, stochastic flow maps (SFMs), for visualizing and analyzing uncertain unsteady flows. Computing flow maps from uncertain flow fields is extremely expensive because it requires many Monte Carlo runs to trace densely seeded particles in the flow. We reduce the computational cost by decoupling the time dependencies in SFMs so that we can process shorter sub time intervals independently and then compose them for longer time periods. Adaptive refinement is also used to reduce the number of runs for each location. We parallelize over tasks (packets of particles in our design) to achieve high efficiency in MPI/thread hybrid programming. This task model also enables CPU/GPU coprocessing. We show the scalability on two supercomputers, Mira (up to 256K Blue Gene/Q cores) and Titan (up to 128K Opteron cores and 8K GPUs), tracing billions of particles in seconds.
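The decoupling idea can be sketched as flow-map composition: the map over a long interval is the later sub-interval map evaluated, by interpolation, at the positions produced by the earlier one. The example below assumes 2D flow maps on a regular grid and uses SciPy interpolation; it is illustrative only.

```python
# Sketch of flow-map composition over sub-intervals:
# Phi[t0 -> t2](x) = Phi[t1 -> t2](Phi[t0 -> t1](x)).
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def compose_flow_maps(xs, ys, fmap_01, fmap_12):
    """xs, ys: 1D grid coordinates; fmap_01, fmap_12: (nx, ny, 2) end positions."""
    interp_x = RegularGridInterpolator((xs, ys), fmap_12[..., 0],
                                       bounds_error=False, fill_value=None)
    interp_y = RegularGridInterpolator((xs, ys), fmap_12[..., 1],
                                       bounds_error=False, fill_value=None)
    mid = fmap_01.reshape(-1, 2)                      # where seeds end up at t1
    composed = np.stack([interp_x(mid), interp_y(mid)], axis=-1)
    return composed.reshape(fmap_01.shape)            # end positions at t2

# Toy example: each sub-interval shifts particles by (0.1, 0) on a unit domain.
xs = ys = np.linspace(0, 1, 32)
X, Y = np.meshgrid(xs, ys, indexing='ij')
shift = np.stack([X + 0.1, Y], axis=-1)
print(compose_flow_maps(xs, ys, shift, shift)[0, 0])   # ~[0.2, 0.0]
```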

12.
IEEE Trans Vis Comput Graph ; 24(1): 954-963, 2018 01.
Article in English | MEDLINE | ID: mdl-28866518

ABSTRACT

We propose a dynamically load-balanced algorithm for parallel particle tracing, which periodically attempts to evenly redistribute particles across processes based on a k-d tree decomposition. Each process is assigned (1) a statically partitioned, axis-aligned data block that partially overlaps with neighboring blocks in other processes and (2) a dynamically determined k-d tree leaf node that bounds the active particles for computation; the bounds of the k-d tree nodes are constrained by the geometries of the data blocks. Given a certain degree of overlap between blocks, our method balances the number of particles per process as much as possible. Compared with other load-balancing algorithms for parallel particle tracing, the proposed method does not require any preanalysis, does not use any heuristics based on flow features, does not make any assumptions about seed distribution, does not move any data blocks during the run, and does not need any master process for work redistribution. Based on a comprehensive performance study with up to 8K processes on a Blue Gene/Q system, the proposed algorithm outperforms baseline approaches in both load balance and scalability on various flow visualization and analysis problems.
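The k-d tree decomposition idea can be sketched as a recursive median split of the active particle positions, one leaf per process, so that leaves hold roughly equal particle counts. The function below is a simplification; the paper additionally constrains leaf bounds by the data-block geometry.

```python
# Simplified k-d tree style partition of particle positions into balanced leaves.
import numpy as np

def kdtree_partition(points, n_leaves):
    leaves = [points]
    while len(leaves) < n_leaves:
        # Split the largest leaf to keep particle counts balanced.
        i = int(np.argmax([len(p) for p in leaves]))
        pts = leaves.pop(i)
        axis = int(np.argmax(pts.max(axis=0) - pts.min(axis=0)))   # widest extent
        order = np.argsort(pts[:, axis])
        half = len(pts) // 2
        leaves += [pts[order[:half]], pts[order[half:]]]
    return leaves

particles = np.random.rand(10000, 3)
leaves = kdtree_partition(particles, 8)
print([len(l) for l in leaves])   # roughly 1250 particles per process
```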

13.
Nano Lett ; 17(12): 7696-7701, 2017 12 13.
Article in English | MEDLINE | ID: mdl-29086574

ABSTRACT

Visualizing the dynamical response of material heterointerfaces is increasingly important for the design of hybrid materials and structures with tailored properties for use in functional devices. In situ characterization of nanoscale heterointerfaces such as metal-semiconductor interfaces, which exhibit a complex interplay between lattice strain, electric potential, and heat transport at subnanosecond time scales, is particularly challenging. In this work, we use a laser pump/X-ray probe form of Bragg coherent diffraction imaging (BCDI) to visualize in three dimensions the deformation of the core of a model core/shell semiconductor-metal (ZnO/Ni) nanorod following laser heating of the shell. We observe a rich interplay of radial, axial, and shear deformation modes acting at different time scales that are induced by the strain from the Ni shell. We construct experimentally informed models by directly importing the reconstructed crystal from the ultrafast experiment into a thermo-electromechanical continuum model. The model elucidates the origin of the deformation modes observed experimentally. Our integrated imaging approach represents an invaluable tool for probing strain dynamics across mixed interfaces under operando conditions.

14.
Phys Rev B ; 95(10)2017 Mar 01.
Article in English | MEDLINE | ID: mdl-28752135

ABSTRACT

Modern integrated circuits (ICs) employ a myriad of materials organized at nanoscale dimensions, and certain critical tolerances must be met for them to function. To understand departures from intended functionality, it is essential to examine ICs as manufactured so as to adjust design rules, ideally in a non-destructive way so that imaged structures can be correlated with electrical performance. Electron microscopes can do this on thin regions or on exposed surfaces, but the required processing alters or even destroys functionality. Microscopy with multi-keV x-rays provides an alternative approach with greater penetration, but the spatial resolution of x-ray imaging lenses has not allowed one to see the required detail in the latest generation of ICs. X-ray ptychography provides a way to obtain images of ICs without lens-imposed resolution limits, with past work delivering 20-40 nm resolution on thinned ICs. We describe a simple model for estimating the required exposure and use it to estimate the future potential of this technique. Here we show for the first time that this approach can be used to image circuit detail through an unprocessed 300 µm thick silicon wafer; sub-20 nm detail is clearly resolved after mechanical polishing to 240 µm thickness, which eliminated image contrast caused by scratches on the Si wafer surface. By using continuous x-ray scanning, massively parallel computation, and a new generation of synchrotron light sources, this approach should enable entire non-etched ICs to be imaged at 10 nm resolution or better while maintaining their ability to function in electrical tests.

15.
Sci Rep ; 7(1): 445, 2017 03 27.
Article in English | MEDLINE | ID: mdl-28348401

ABSTRACT

X-ray microscopy can be used to image whole, unsectioned cells in their native hydrated state. It complements the higher resolution of electron microscopy for submicrometer-thick specimens and the molecule-specific imaging capabilities of fluorescence light microscopy. We describe here the first use of fast, continuous x-ray scanning of frozen hydrated cells for simultaneous sub-20 nm resolution ptychographic transmission imaging with high contrast, and sub-100 nm resolution deconvolved x-ray fluorescence imaging of diffusible and bound ions at native concentrations, without the need to add specific labels. By working with cells that have been rapidly frozen without the use of chemical fixatives, and imaging them under cryogenic conditions, we are able to obtain images with well-preserved structural and chemical composition, and sufficient stability against radiation damage to allow multiple images to be obtained with no observable change.


Subjects
Freezing, Computer-Assisted Image Processing, Fluorescence Microscopy/methods, Water/chemistry, Chlamydomonas/cytology, X-Rays
16.
Nano Lett ; 17(2): 1102-1108, 2017 02 08.
Article in English | MEDLINE | ID: mdl-28026962

ABSTRACT

Imaging the dynamical response of materials following ultrafast excitation can reveal energy transduction mechanisms and their dissipation pathways, as well as material stability under conditions far from equilibrium. Such dynamical behavior is challenging to characterize, especially operando at nanoscopic spatiotemporal scales. In this letter, we use X-ray coherent diffractive imaging to show that ultrafast laser excitation of a ZnO nanocrystal induces a rich set of deformation dynamics, including characteristic "hard" (inhomogeneous) and "soft" (homogeneous) modes at different time scales, corresponding respectively to the propagation of acoustic phonons and resonant oscillation of the crystal. By integrating the 3D nanocrystal structure obtained from the ultrafast X-ray measurements with a continuum thermo-electro-mechanical finite element model, we elucidate the deformation mechanisms following laser excitation, in particular a torsional mode that generates a 50% greater electric potential gradient than that resulting from the flexural mode. Understanding the time dependence of these mechanisms at ultrafast scales has significant implications for the development of new materials for nanoscale power generation.


Subjects
Nanoparticles/chemistry, Zinc Oxide/chemistry, Crystallization, Three-Dimensional Imaging, Kinetics, Lasers, Materials Testing, Phonons, Physical Phenomena, X-Rays
17.
IEEE Trans Vis Comput Graph ; 22(6): 1672-1682, 2016 06.
Article in English | MEDLINE | ID: mdl-26955037

ABSTRACT

The objective of this paper is to understand transport behavior in uncertain time-varying flow fields by redefining the finite-time Lyapunov exponent (FTLE) and Lagrangian coherent structure (LCS) as stochastic counterparts of their traditional deterministic definitions. Three new concepts are introduced: the distribution of the FTLE (D-FTLE), the FTLE of distributions (FTLE-D), and uncertain LCS (U-LCS). The D-FTLE is the probability density function of FTLE values for every spatiotemporal location, which can be visualized with different statistical measurements. The FTLE-D extends the deterministic FTLE by measuring the divergence of particle distributions. It gives a statistical overview of how transport behaviors vary in neighborhood locations. The U-LCS, the probabilities of finding LCSs over the domain, can be extracted with stochastic ridge finding and density estimation algorithms. We show that our approach produces better results than existing variance-based methods do. Our experiments also show that the combination of D-FTLE, FTLE-D, and U-LCS can help users understand transport behaviors and find separatrices in ensemble simulations of atmospheric processes.
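For reference, the deterministic FTLE at a grid location is computed from the flow-map gradient (the Cauchy-Green tensor), and D-FTLE is then the per-location distribution of that value over Monte Carlo realizations of the uncertain flow. The sketch below illustrates this with stand-in flow maps.

```python
# Sketch: deterministic FTLE from the flow-map gradient, and D-FTLE as its
# per-location distribution over Monte Carlo realizations of an uncertain flow.
import numpy as np

def ftle_field(flow_map, dx, T):
    """flow_map: (nx, ny, 2) end positions of particles seeded on a regular grid."""
    ftle = np.zeros(flow_map.shape[:2])
    gx = np.gradient(flow_map[..., 0], dx, axis=(0, 1))   # d(phi_x)/dx, d(phi_x)/dy
    gy = np.gradient(flow_map[..., 1], dx, axis=(0, 1))
    for i in range(flow_map.shape[0]):
        for j in range(flow_map.shape[1]):
            F = np.array([[gx[0][i, j], gx[1][i, j]],
                          [gy[0][i, j], gy[1][i, j]]])     # flow-map Jacobian
            lam_max = np.linalg.eigvalsh(F.T @ F)[-1]      # Cauchy-Green max eigenvalue
            ftle[i, j] = np.log(np.sqrt(max(lam_max, 1e-12))) / abs(T)
    return ftle

# D-FTLE: one FTLE field per Monte Carlo realization; per-location distribution.
realizations = [np.random.rand(32, 32, 2) for _ in range(50)]   # stand-in flow maps
d_ftle = np.stack([ftle_field(fm, dx=1.0, T=10.0) for fm in realizations])
print(d_ftle.mean(axis=0).shape, d_ftle.std(axis=0).shape)       # (32, 32) each
```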

18.
Phys Rev E ; 93(2): 023305, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26986437

ABSTRACT

In type-II superconductors, the dynamics of magnetic flux vortices determine their transport properties. In the Ginzburg-Landau theory, vortices correspond to topological defects in the complex order parameter field. Earlier, in Phillips et al. [Phys. Rev. E 91, 023311 (2015)], we introduced a method for extracting vortices from the discretized complex order parameter field generated by a large-scale simulation of vortex matter. With this method, at a fixed time step, each vortex [simplistically, a one-dimensional (1D) curve in 3D space] can be represented as a connected graph extracted from the discretized field. Here we extend this method to the time dimension as well. A vortex now corresponds to a 2D space-time sheet embedded in 4D space-time that can be represented as a connected graph extracted from the discretized field over both space and time. Vortices that interact by merging or splitting correspond to the disappearance and appearance of holes in the connected graph along the time direction. This method of tracking vortices, which makes no assumptions about the scale or behavior of the vortices, can track them with a resolution as good as the discretization of the temporally evolving complex scalar field. Additionally, even details of the trajectory between time steps can be reconstructed from the connected graph. With this form of vortex tracking, the details of vortex dynamics in a model of a superconducting material can be understood in greater detail than previously possible.

19.
IEEE Trans Vis Comput Graph ; 22(1): 827-36, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26529730

ABSTRACT

We propose a method for extracting and tracking superconducting magnetic flux vortices in both structured and unstructured mesh data. In the Ginzburg-Landau theory, magnetic flux vortices are well-defined features in a complex-valued order parameter field, and their dynamics determine electromagnetic properties in type-II superconductors. Our method represents each vortex line (a 1D curve embedded in 3D space) as a connected graph extracted from the discretized field in both space and time. For a time-varying discrete dataset, our vortex extraction and tracking method is as accurate as the data discretization. We then apply 3D visualization and 2D event diagrams to the extraction and tracking results to help scientists understand vortex dynamics and macroscale superconductor behavior in greater detail than previously possible.
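The local detector underlying this kind of vortex extraction is the standard phase-winding test: summing wrapped phase differences of the order parameter around each mesh face, where a winding of ±2π marks a vortex line crossing. The sketch below shows the test on a structured 2D slice; the papers' contribution is the graph construction and tracking built on top of such tests.

```python
# Standard plaquette test for locating flux vortices in a discretized complex
# order parameter field (local detector only, not the full graph-based tracking).
import numpy as np

def wrap(dphi):
    return (dphi + np.pi) % (2 * np.pi) - np.pi      # wrap to (-pi, pi]

def vortex_plaquettes(psi):
    """psi: (nx, ny) complex order parameter on a 2D slice. Returns a boolean
    array marking plaquettes with nonzero winding (a vortex line crossing)."""
    phase = np.angle(psi)
    d1 = wrap(phase[1:, :-1] - phase[:-1, :-1])       # edge (i, j)   -> (i+1, j)
    d2 = wrap(phase[1:, 1:] - phase[1:, :-1])         # edge (i+1, j) -> (i+1, j+1)
    d3 = wrap(phase[:-1, 1:] - phase[1:, 1:])         # edge (i+1, j+1) -> (i, j+1)
    d4 = wrap(phase[:-1, :-1] - phase[:-1, 1:])       # edge (i, j+1) -> (i, j)
    winding = np.rint((d1 + d2 + d3 + d4) / (2 * np.pi))
    return winding != 0

# Toy field with a single vortex at the domain center.
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64), indexing='ij')
psi = (x + 1j * y) / np.sqrt(x**2 + y**2 + 1e-12)
print(np.argwhere(vortex_plaquettes(psi)))            # plaquette(s) near (31, 31)
```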

20.
IEEE Trans Vis Comput Graph ; 22(1): 965-74, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26529740

ABSTRACT

A notable recent trend in time-varying volumetric data analysis and visualization is to extract data relationships and represent them in a low-dimensional abstract graph view for visual understanding and for making connections to the underlying data. Nevertheless, the ever-growing size and complexity of data demand novel techniques that go beyond standard brushing and linking, in order to significantly reduce cognitive overhead and interaction cost. In this paper, we present a mining approach that automatically extracts meaningful features from a graph-based representation for exploring time-varying volumetric data. This is achieved through a series of graph analysis techniques including graph simplification, community detection, and visual recommendation. We investigate the most important transition relationships for time-varying data and evaluate our solution with several time-varying data sets of different sizes and characteristics. For gaining insights from the data, we show that our solution is more efficient and effective than simply asking users to extract relationships via standard interaction techniques, especially when the data set is large and the relationships are complex. We also collect expert feedback to confirm the usefulness of our approach.
