1.
J Synchrotron Radiat ; 27(Pt 1): 1-10, 2020 Jan 01.
Article in English | MEDLINE | ID: mdl-31868729

ABSTRACT

A new visualization tool, Cinema:Bandit, is presented and demonstrated with a continuous workflow for analyzing shock physics experiments and visually exploring the data in real time at X-ray light sources. Cinema:Bandit is an open-source, web-based visualization application in which the experimenter may explore an aggregated dataset to inform real-time beamline decisions and enable post hoc data analysis. The tool integrates with experimental workflows that process raw detector data into a simple database format, and it allows visualization of disparate data types, including experimental parameters, line graphs, and images. Use of parallel coordinates accommodates the irregular sampling of experimental parameters and allows for display and filtering of both experimental inputs and measurements. The tool is demonstrated on a dataset of shock-compressed titanium collected at the Matter in Extreme Conditions hutch at the Linac Coherent Light Source.
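
As an illustration of the parallel-coordinates idea described above, the sketch below plots a handful of hypothetical shot records with pandas' built-in plotting; the column names and values are invented, and this stands in for, rather than reproduces, Cinema:Bandit.

```python
import pandas as pd
from pandas.plotting import parallel_coordinates
import matplotlib.pyplot as plt

# Hypothetical shots: experimental inputs (drive energy, probe delay)
# and a derived measurement (peak pressure), one row per shot.
df = pd.DataFrame({
    "drive_energy_J":    [10, 12, 15, 11, 14, 13],
    "delay_ns":          [2.0, 2.5, 3.0, 2.2, 2.8, 2.6],
    "peak_pressure_GPa": [35, 42, 55, 38, 50, 46],
})
# Tag each shot so the polylines can be colored by a categorical label.
df["regime"] = ["low" if p < 45 else "high" for p in df["peak_pressure_GPa"]]

# Each vertical axis is one parameter; each polyline is one shot, so
# irregular sampling of the inputs poses no layout problem.
parallel_coordinates(df, class_column="regime", colormap="coolwarm")
plt.show()
```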

2.
Entropy (Basel) ; 21(7)2019 Jul 16.
Article in English | MEDLINE | ID: mdl-33267413

ABSTRACT

With the increasing computing capability of modern supercomputers, the size of the data generated by scientific simulations is growing rapidly. As a result, application scientists need effective data summarization techniques that can reduce large-scale multivariate spatiotemporal data sets while preserving the important data properties, so that the reduced data can answer domain-specific queries involving multiple variables with sufficient accuracy. While analyzing complex scientific events, domain experts often analyze and visualize two or more variables together to obtain a better understanding of the characteristics of the data features. Therefore, data summarization techniques are required to analyze multi-variable relationships in detail and then perform data reduction such that the important features involving multiple variables are preserved in the reduced data. To achieve this, we propose a data sub-sampling algorithm for statistical data summarization that leverages pointwise information-theoretic measures to quantify the statistical association of data points across multiple variables, and that generates sub-sampled data preserving this multivariate statistical association. Using such reduced data, we show that multivariate feature query and analysis can be done effectively. The efficacy of the proposed multivariate-association-driven sampling algorithm is demonstrated by applying it to several scientific data sets.
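
Reading the abstract's pointwise information-theoretic measure as pointwise mutual information (PMI), a minimal two-variable, histogram-based interpretation might look like the sketch below. This is an assumption-laden illustration, not the authors' algorithm; `pmi_sample` and its parameters are invented.

```python
import numpy as np

def pmi_sample(x, y, n_keep, bins=64):
    """Keep the n_keep points whose (x, y) values show the strongest
    statistical association: PMI = log p(x,y) / (p(x) p(y)),
    estimated from a joint histogram."""
    joint, xe, ye = np.histogram2d(x, y, bins=bins)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)       # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)       # marginal p(y)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(joint / (px * py))
    # Look up each point's bin and hence its PMI score.
    ix = np.clip(np.digitize(x, xe[1:-1]), 0, bins - 1)
    iy = np.clip(np.digitize(y, ye[1:-1]), 0, bins - 1)
    score = np.where(np.isfinite(pmi[ix, iy]), pmi[ix, iy], -np.inf)
    return np.argsort(score)[-n_keep:]          # indices of kept points

# Example: keep 1% of a correlated bivariate data set.
x = np.random.randn(100_000)
y = 0.8 * x + 0.2 * np.random.randn(100_000)
kept = pmi_sample(x, y, n_keep=1_000)
```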

3.
IEEE Trans Vis Comput Graph ; 30(1): 727-737, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37938968

ABSTRACT

Molecular Dynamics (MD) simulations are ubiquitous in cutting-edge physicochemical research. They provide critical insights into how a physical system evolves over time given a model of interatomic interactions. Understanding a system's evolution is key to selecting the best candidates for new drugs, materials for manufacturing, and countless other practical applications. With today's technology, these simulations can encompass millions of unit transitions between discrete molecular structures, spanning up to several milliseconds of real time. Attempting a brute-force analysis of datasets this size is computationally impractical and would not shed light on the physically relevant features of the data. Moreover, there is a need to analyze simulation ensembles in order to compare similar processes in differing environments. These problems call for an approach that is analytically transparent, computationally efficient, and flexible enough to handle the variety found in materials-based research. To address these problems, we introduce MolSieve, a progressive visual analytics system that enables the comparison of multiple long-duration simulations. Using MolSieve, analysts are able to quickly identify and compare regions of interest within immense simulations through its combination of control charts, data-reduction techniques, and highly informative visual components. A simple programming interface allows experts to fit MolSieve to their needs. To demonstrate the efficacy of our approach, we present two case studies of MolSieve and report on findings from domain collaborators.

4.
IEEE Comput Graph Appl ; 42(4): 114-119, 2022.
Article in English | MEDLINE | ID: mdl-35839167

ABSTRACT

Scientific visualization is a key approach to understanding the growing massive streams of data from scientific simulations and experiments. In this article, I review technology trends including the positive effects of Moore's law on science, the significant gap between processing and data storage speeds, the emergence of hardware accelerators for ray-tracing, and the availability of robust machine learning techniques. These trends represent changes to the status quo and present the scientific visualization community with a new set of challenges. A major challenge involves extending our approaches to visualize the modern scientific process, which includes scientific verification and validation. Another key challenge to the community is the growing number, size, and complexity of scientific datasets. A final challenge is to take advantage of emerging technology trends in custom hardware and machine learning to significantly improve the large-scale data visualization process.


Subjects
Information Storage and Retrieval; Machine Learning; Technology
5.
IEEE Trans Vis Comput Graph ; 28(10): 3471-3485, 2022 10.
Article in English | MEDLINE | ID: mdl-33684039

ABSTRACT

Contour trees are used for topological data analysis in scientific visualization. They were originally computed with serial algorithms, and recent work has introduced a vector-parallel algorithm; however, that algorithm is relatively slow for fully augmented contour trees, which are needed for many practical data analysis tasks. We therefore introduce a representation called the hyperstructure that enables efficient searches through the contour tree, and we use it to construct a fully augmented contour tree in data-parallel fashion, on average six times faster than the state-of-the-art parallel algorithm in the Topology ToolKit (TTK).


Subjects
Computer Graphics; Algorithms
6.
Epidemics ; 41: 100632, 2022 12.
Article in English | MEDLINE | ID: mdl-36182803

ABSTRACT

INTRODUCTION: School-age children play a key role in the spread of airborne viruses like influenza due to the prolonged and close contacts they have in school settings. As a result, school closures and other non-pharmaceutical interventions were recommended as the first line of defense in response to the novel coronavirus pandemic (COVID-19). METHODS: We used an agent-based model that simulates communities across the United States, including daycares and primary and secondary schools, to quantify the relative health outcomes of reopening schools for the period of August 15, 2020 to April 11, 2021. Our simulation was carried out in early September 2020 and was based on the then-latest Centers for Disease Control and Prevention (CDC) Pandemic Planning Scenarios, released in May 2020. We explored different reopening scenarios, including virtual learning, in-person school, and several hybrid options that stratify the student population into cohorts in order to reduce exposure and pathogen spread. RESULTS: Scenarios where cohorts of students return to school in non-overlapping formats, which we refer to as hybrid scenarios, resulted in significant decreases, by as much as 75%, in the percentage of individuals symptomatic with COVID-19. These hybrid scenarios have only slightly worse COVID-19 health outcomes than a 100% virtual learning scenario. Hybrid scenarios can avert a significant number of COVID-19 cases at the national scale, approximately 28 to 60 million depending on the scenario, over the simulated eight-month period. We found the results of our simulations to be highly dependent on the number of workplaces assumed to be open for in-person business, as well as on the initial level of COVID-19 incidence within the simulated community. CONCLUSION: In an evolving pandemic, while a large proportion of people remain susceptible, reducing the number of students attending school leads to better health outcomes; part-time in-classroom education substantially reduces health risks.


Subjects
COVID-19; Child; United States/epidemiology; Humans; COVID-19/epidemiology; Retrospective Studies; Pandemics/prevention & control; SARS-CoV-2; Schools
7.
IEEE Trans Vis Comput Graph ; 27(12): 4439-4454, 2021 Dec.
Article in English | MEDLINE | ID: mdl-32746272

ABSTRACT

Although supercomputers are becoming increasingly powerful, their components have thus far not scaled proportionately. Compute power is growing enormously and is enabling finely resolved simulations that produce never-before-seen features. However, I/O capabilities lag by orders of magnitude, which means only a fraction of the simulation data can be stored for post hoc analysis. Prespecified plans for saving features and quantities of interest do not work for features that have not been seen before. Data-driven intelligent sampling schemes are needed to detect and save important parts of the simulation while it is running. Here, we propose a novel sampling scheme that reduces the size of the data by orders of magnitude while still preserving important regions. The approach we develop selects points with unusual data values and high gradients. We demonstrate that our approach outperforms traditional sampling schemes on a number of tasks.
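
A minimal sketch of the idea, rarity plus gradient magnitude as an importance score, assuming a regular 2D grid; the scoring weights and histogram size are made up, and this is not the paper's exact scheme.

```python
import numpy as np

def sample_important(field, n_keep, bins=64, alpha=0.5):
    """Keep grid points whose values are rare (low histogram count)
    or that sit on steep gradients; blend the two scores with alpha."""
    counts, edges = np.histogram(field, bins=bins)
    which = np.clip(np.digitize(field, edges[1:-1]), 0, bins - 1)
    rarity = 1.0 / (counts[which] + 1)            # unusual data values

    gy, gx = np.gradient(field)
    grad = np.hypot(gx, gy)                       # high gradients

    norm = lambda a: (a - a.min()) / (np.ptp(a) + 1e-12)
    score = alpha * norm(rarity) + (1 - alpha) * norm(grad)

    flat = np.argsort(score, axis=None)[-n_keep:] # top-scoring samples
    return np.unravel_index(flat, field.shape)

# Example: a smooth field with one sharp bump; keep ~1% of the grid.
yy, xx = np.mgrid[0:256, 0:256]
field = np.exp(-((xx - 128)**2 + (yy - 128)**2) / 200.0)
rows, cols = sample_important(field, n_keep=655)
```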

8.
IEEE Trans Vis Comput Graph ; 27(4): 2437-2454, 2021 04.
Article in English | MEDLINE | ID: mdl-31689193

ABSTRACT

As data sets grow to exascale, automated data analysis and visualization are increasingly important, both to support human understanding and to reduce demands on disk storage via in situ analysis. Trends in the architecture of high-performance computing systems require analysis algorithms to make effective use of combinations of massively multicore and distributed systems. One of the principal analytic tools is the contour tree, which analyses relationships between contours to identify features of more than local importance. Unfortunately, the predominant algorithms for computing the contour tree are explicitly serial and founded on serial metaphors, which has limited the scalability of this form of analysis. While there is some work on distributed contour tree computation, and separately on hybrid GPU-CPU computation, there is no efficient algorithm with strong formal guarantees on performance allied with fast practical performance. We report the first shared-memory (SMP) algorithm for fully parallel contour tree computation, with formal guarantees of O(lg V lg t) parallel steps and O(V lg V) work for data with V samples and t contour tree supernodes, and implementations with more than 30× parallel speedup on both CPU (using TBB) and GPU (using Thrust), and up to 70× speedup compared to the serial sweep-and-merge algorithm.
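
For intuition about the structure these algorithms compute, here is a deliberately serial, union-find sketch of the simpler join tree (one half of a contour tree) on a 1D field. It illustrates how maxima and saddles arise; it is not the paper's parallel algorithm and makes none of its complexity guarantees.

```python
import numpy as np

def join_tree_1d(values):
    """Serial join-tree sketch: sweep samples from highest to lowest.
    A sample with no processed neighbor starts a component (a maximum);
    one that connects two components is a saddle joining them."""
    order = np.argsort(values)[::-1]
    parent, birth = {}, {}

    def find(i):                          # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    maxima, merges = [], []
    for i in order:
        parent[i] = i
        roots = {find(j) for j in (i - 1, i + 1) if j in parent}
        if not roots:
            maxima.append(i); birth[i] = i         # local maximum
        else:
            survivor, *rest = roots
            for r in rest:                         # saddle: components join
                merges.append((i, birth[survivor], birth[r]))
                parent[r] = survivor
            parent[i] = survivor
    return maxima, merges

# Example: two bumps yield two maxima and one merge (the saddle).
x = np.linspace(0, 1, 101)
vals = np.exp(-((x - 0.3) / 0.08)**2) + 0.8 * np.exp(-((x - 0.7) / 0.08)**2)
maxima, merges = join_tree_1d(vals)
```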

9.
IEEE Trans Vis Comput Graph ; 15(6): 1539-46, 2009.
Article in English | MEDLINE | ID: mdl-19834231

ABSTRACT

Visualization is essential for understanding the increasing volumes of digital data. However, the process required to create insightful visualizations is involved and time-consuming. Although several visualization tools are available, including tools with sophisticated visual interfaces, they are out of reach for users who have little or no knowledge of visualization techniques and/or who do not have programming expertise. In this paper, we propose VisMashup, a new framework for streamlining the creation of customized visualization applications. Because these applications can be customized for very specific tasks, they can hide much of the complexity in a visualization specification and make it easier for users to explore visualizations by manipulating a small set of parameters. We describe the framework and how it supports the various tasks a designer needs to carry out to develop an application, from mining and exploring a set of visualization specifications (pipelines), to the creation of simplified views of the pipelines, to the automatic generation of the application and its interface. We also describe the implementation of the system and demonstrate its use in two real application scenarios.


Subjects
Computer Graphics; Databases, Factual; Image Processing, Computer-Assisted/methods; User-Computer Interface; Brain/anatomy & histology; Brain/physiology; Electroencephalography; Internet
10.
IEEE Comput Graph Appl ; 38(1): 119-127, 2018 01.
Article in English | MEDLINE | ID: mdl-29535077

ABSTRACT

In situ processing produces reduced-size, persistent representations of a simulation's state while the simulation is running. The need for in situ visualization and data analysis is usually described in terms of supercomputer size and performance relative to available storage size.

11.
Article in English | MEDLINE | ID: mdl-30136980

ABSTRACT

We present a direct manipulation technique that allows material scientists to interactively highlight relevant parameterized simulation instances located in dimensionally reduced spaces, enabling a user-defined understanding of a continuous parameter space. Our goals are twofold: first, to build a user-directed intuition of dimensionally reduced data, and second, to provide a mechanism for creatively exploring parameter relationships in parameterized simulation sets, called ensembles. We start by visualizing ensemble data instances in dimensionally reduced scatter plots. To understand these abstract views, we employ user-defined virtual data instances that, through direct manipulation, search an ensemble for similar instances. Users can create several such direct-manipulation queries to visually annotate the spaces with sets of highlighted ensemble data instances. User-defined goals are thereby translated into custom illustrations projected onto the dimensionally reduced spaces. Combined forward and inverse searches of the parameter space follow naturally, allowing for continuous parameter-space prediction and visual query comparison in the context of an ensemble. The potential of this visualization technique is confirmed via expert user feedback for a shock physics application and a synthetic model analysis.
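
A minimal sketch of the query mechanism under stated assumptions: ensemble members are reduced with PCA, and a user-defined virtual instance retrieves its nearest neighbors in the reduced space. The names and data are hypothetical; the paper's projection and interaction details may differ.

```python
import numpy as np

# Hypothetical ensemble: 200 simulations, 5 input parameters each.
rng = np.random.default_rng(0)
ensemble = rng.normal(size=(200, 5))

# PCA via SVD: center, then project onto the top two directions.
mean = ensemble.mean(axis=0)
_, _, vt = np.linalg.svd(ensemble - mean, full_matrices=False)
project = lambda pts: (pts - mean) @ vt[:2].T
coords2d = project(ensemble)                  # scatter-plot coordinates

def query(virtual_instance, k=10):
    """Direct-manipulation query: place a virtual data instance and
    highlight the k most similar ensemble members in the 2D view."""
    q = project(np.atleast_2d(virtual_instance))
    d = np.linalg.norm(coords2d - q, axis=1)
    return np.argsort(d)[:k]                  # indices to highlight

highlighted = query([0.5, -1.0, 0.0, 0.2, 1.5])
```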

12.
IEEE Trans Vis Comput Graph ; 24(1): 923-933, 2018 01.
Article in English | MEDLINE | ID: mdl-28866507

ABSTRACT

A myriad of design rules for what constitutes a "good" colormap can be found in the literature. Some common rules include order, uniformity, and high discriminative power. However, the meaning of many of these terms is often ambiguous or open to interpretation. At times, different authors may use the same term to describe different concepts or the same rule is described by varying nomenclature. These ambiguities stand in the way of collaborative work, the design of experiments to assess the characteristics of colormaps, and automated colormap generation. In this paper, we review current and historical guidelines for colormap design. We propose a specified taxonomy and provide unambiguous mathematical definitions for the most common design rules.
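Two of the rules named here, order and uniformity, admit simple operational checks; the sketch below tests monotone CIELAB lightness and even perceptual (ΔE76) step sizes for a matplotlib colormap, as one plausible reading of such definitions rather than the paper's own formulations.

```python
import numpy as np
import matplotlib.pyplot as plt

def srgb_to_lab(rgb):
    """sRGB in [0,1] -> CIELAB (D65 white point), vectorized."""
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ m.T / np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > (6/29)**3, np.cbrt(xyz), xyz / (3*(6/29)**2) + 4/29)
    L = 116*f[..., 1] - 16
    a = 500*(f[..., 0] - f[..., 1])
    b = 200*(f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

cmap = plt.get_cmap("viridis")
lab = srgb_to_lab(cmap(np.linspace(0, 1, 256))[:, :3])

steps = np.linalg.norm(np.diff(lab, axis=0), axis=1)   # Delta-E 1976
ordered = np.all(np.diff(lab[:, 0]) > 0)               # monotone lightness
uniform = steps.std() / steps.mean()                   # low = uniform
print(f"ordered by lightness: {ordered}, step variation: {uniform:.2f}")
```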

13.
J Appl Crystallogr ; 51(Pt 3): 943-951, 2018 Jun 01.
Article in English | MEDLINE | ID: mdl-29896062

ABSTRACT

Cinema:Debye-Scherrer, a tool for visualizing the results of a series of Rietveld analyses, is presented. Its multi-axis visualization of the high-dimensional data sets resulting from powder diffraction analyses allows identification of analysis problems, prediction of suitable starting values, identification of gaps in the experimental parameter space, and acceleration of scientific insight from the experimental data. The tool is demonstrated with analysis results from 59 U-Nb alloy samples with different compositions, annealing times, and annealing temperatures, as well as with a high-temperature study of the crystal structure of CsPbBr3. A script to extract parameters from a series of Rietveld analyses employing the widely used GSAS Rietveld software is also described. Both software tools are available for download.
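
Since Cinema databases are at bottom CSV tables with one row per analysis, a parameter-extraction step can be sketched as below. The per-sample "name value" result-file layout is hypothetical and stands in for the GSAS-specific parsing the paper describes.

```python
import csv
import glob
import os

# Hypothetical layout: one text file per refined sample under results/,
# each line holding a "parameter value" pair from the Rietveld fit.
rows = []
for path in sorted(glob.glob("results/*.txt")):
    row = {"sample": os.path.splitext(os.path.basename(path))[0]}
    with open(path) as f:
        for line in f:
            name, value = line.split()[:2]
            row[name] = value
    rows.append(row)

# A Cinema-style database is a plain CSV table: one row per analysis,
# one column per parameter, ready for multi-axis visualization.
fields = ["sample"] + sorted({k for r in rows for k in r} - {"sample"})
with open("data.csv", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=fields, restval="NaN")
    writer.writeheader()
    writer.writerows(rows)
```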

14.
IEEE Trans Vis Comput Graph ; 22(1): 857-66, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26353372

ABSTRACT

An eddy is a feature associated with a rotating body of fluid, surrounded by a ring of shearing fluid. In the ocean, eddies are 10 to 150 km in diameter, are spawned by boundary currents and baroclinic instabilities, may live for hundreds of days, and travel for hundreds of kilometers. Eddies are important in climate studies because they transport heat, salt, and nutrients through the world's oceans and are vessels of biological productivity. The study of eddies in global ocean-climate models requires large-scale, high-resolution simulations. This poses a problem for feasible (timely) eddy analysis, as ocean simulations generate massive amounts of data, causing a bottleneck for traditional analysis workflows. To enable eddy studies, we have developed an in situ workflow for the quantitative and qualitative analysis of MPAS-Ocean, a high-resolution ocean climate model, integrated into the ocean model's research and development process. Planned eddy analysis at high spatial and temporal resolutions will not be possible with a postprocessing workflow due to constraints such as storage size and I/O time, but the in situ workflow enables it and scales well to ten thousand processing elements.

16.
IEEE Trans Vis Comput Graph ; 17(12): 2088-95, 2011 Dec.
Article in English | MEDLINE | ID: mdl-22034327

ABSTRACT

We consider the problem of extracting discrete two-dimensional vortices from a turbulent flow. In our approach we use a reference model describing the expected physics and geometry of an idealized vortex. The model allows us to derive a novel correlation between the size of the vortex and its strength, measured as the square of its strain minus the square of its vorticity. For vortex detection in real models we use the strength parameter to locate potential vortex cores, then measure the similarity of our ideal analytical vortex and the real vortex core for different strength thresholds. This approach provides a metric for how well a vortex core is modeled by an ideal vortex. Moreover, this provides insight into the problem of choosing the thresholds that identify a vortex. By selecting a target coefficient of determination (i.e., statistical confidence), we determine on a per-vortex basis what threshold of the strength parameter would be required to extract that vortex at the chosen confidence. We validate our approach on real data from a global ocean simulation and derive from it a map of expected vortex strengths over the global ocean.
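
The strength parameter described here, the square of the strain minus the square of the vorticity, is straightforward to compute on a gridded 2D velocity field. The sketch below assumes a regular grid; the thresholding convention at the end is a common choice but arbitrary, not the per-vortex, confidence-driven thresholds the paper derives.

```python
import numpy as np

def vortex_strength(u, v, dx=1.0, dy=1.0):
    """Strength W = strain^2 - vorticity^2 on a 2D velocity field
    (u, v) with shape (ny, nx). W < 0 marks regions where rotation
    dominates strain, i.e., candidate vortex cores."""
    du_dy, du_dx = np.gradient(u, dy, dx)
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    s_normal = du_dx - dv_dy
    s_shear = dv_dx + du_dy
    vorticity = dv_dx - du_dy
    return s_normal**2 + s_shear**2 - vorticity**2

# Example: a solid-body vortex has W < 0 throughout its core.
yy, xx = np.mgrid[-1:1:128j, -1:1:128j]
u, v = -yy, xx                        # counterclockwise rotation
W = vortex_strength(u, v, dx=2/127, dy=2/127)
cores = W < -0.2 * W.std()            # one common thresholding choice
```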

18.
IEEE Comput Graph Appl ; 30(6): 16-28, 2010.
Article in English | MEDLINE | ID: mdl-24807895

ABSTRACT

This article presents a visualization-assisted process that verifies scientific-simulation codes. Code verification is necessary because scientists require accurate predictions to interpret data confidently. This verification process integrates iterative hypothesis verification with comparative, feature, and quantitative visualization. Following this process can help identify differences in cosmological and oceanographic simulations.
