Results 1 - 20 of 48
1.
Comput Sci Eng ; 23(1): 106, 2021 Jan.
Article in English | MEDLINE | ID: mdl-35921167

ABSTRACT

[This corrects the article DOI: 10.1109/MCSE.2020.3023288.].

2.
Comput Sci Eng ; 22(6): 48-59, 2020 Nov.
Article in English | MEDLINE | ID: mdl-35916873

ABSTRACT

We introduce a trans-disciplinary collaboration between researchers, healthcare practitioners, and community health partners in the Southwestern U.S. to enable improved management of, response to, and recovery from the current pandemic and future health emergencies. Our Center work enables effective and efficient decision-making through interactive, human-guided analytical environments. We discuss our PanViz 2.0 system, a visual analytics application for supporting pandemic preparedness through a tightly coupled epidemiological model and interactive interface. We discuss our framework, current work, and plans to extend the system with exploration of what-if scenarios, interactive machine learning for model parameter inference, and analysis of mitigation strategies to facilitate decision-making during public health crises.

3.
BMC Bioinformatics ; 13 Suppl 8: S6, 2012.
Article in English | MEDLINE | ID: mdl-22607515

ABSTRACT

When analyzing metabolomics data, cancer care researchers are searching for differences between known healthy samples and unhealthy samples. By analyzing and understanding these differences, researchers hope to identify cancer biomarkers. Due to the size and complexity of the data produced, however, analysis can still be very slow and time-consuming. This is further complicated by the fact that datasets obtained will exhibit incidental differences in intensity and retention time, not related to actual chemical differences in the samples being evaluated. Additionally, automated tools to correct these errors do not always produce reliable results. This work presents a new analytics system that enables interactive comparative visualization and analytics of metabolomics data obtained by two-dimensional gas chromatography-mass spectrometry (GC × GC-MS). The key features of this system are the ability to produce visualizations of multiple GC × GC-MS data sets, and to explore those data sets interactively, allowing a user to discover differences and features in real time. The system provides statistical support in the form of difference, standard deviation, and kernel density estimation calculations to aid users in identifying meaningful differences between samples. These are combined with novel transfer functions and multiform, linked visualizations in order to provide researchers with a powerful new tool for GC × GC-MS exploration and biomarker discovery.
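
The difference and standard-deviation support described above can be sketched in a few lines. This is an illustrative toy, not the system's code: the intensity grids, values, and the 2-sigma cutoff are invented for the example, and the kernel density estimation part is omitted.

```python
import math

# Hypothetical 4x4 intensity grids from two GC x GC-MS runs
# (rows = first-column retention time, cols = second-column retention time).
healthy = [[1.0, 2.0, 1.5, 0.5],
           [0.8, 5.0, 2.0, 0.4],
           [0.6, 1.2, 1.1, 0.3],
           [0.2, 0.4, 0.5, 0.1]]
sample  = [[1.1, 2.1, 1.4, 0.6],
           [0.9, 9.0, 2.2, 0.5],
           [0.5, 1.3, 1.0, 0.2],
           [0.3, 0.5, 0.6, 0.2]]

# Per-cell difference map: positive values mark peaks elevated in the sample.
diff = [[s - h for s, h in zip(srow, hrow)]
        for srow, hrow in zip(sample, healthy)]

# Standard deviation of the differences, used to flag cells that deviate
# by more than 2 sigma from the mean difference.
flat = [v for row in diff for v in row]
mean = sum(flat) / len(flat)
std = math.sqrt(sum((v - mean) ** 2 for v in flat) / len(flat))
hotspots = [(i, j) for i, row in enumerate(diff)
            for j, v in enumerate(row) if abs(v - mean) > 2 * std]
print(hotspots)  # candidate cells worth inspecting interactively
```

In the real system these statistics would be computed over full chromatograms and explored through linked views rather than printed.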


Subjects
Gas Chromatography-Mass Spectrometry/methods , Metabolomics/methods , Neoplasms/metabolism , Software , Animals , Dogs , Gas Chromatography-Mass Spectrometry/instrumentation , Humans , Metabolomics/instrumentation , Mice , Regression Analysis
4.
J Med Internet Res ; 14(2): e58, 2012 Apr 13.
Article in English | MEDLINE | ID: mdl-22504018

ABSTRACT

BACKGROUND: The development of a mobile telephone food record has the potential to ameliorate much of the burden associated with current methods of dietary assessment. When using the mobile telephone food record, respondents capture an image of their foods and beverages before and after eating. Methods of image analysis and volume estimation allow for automatic identification and volume estimation of foods. To obtain a suitable image, all foods and beverages and a fiducial marker must be included in the image. OBJECTIVE: To evaluate a defined set of skills among adolescents and adults when using the mobile telephone food record to capture images and to compare the perceptions and preferences between adults and adolescents regarding their use of the mobile telephone food record. METHODS: We recruited 135 volunteers (78 adolescents, 57 adults) to use the mobile telephone food record for one or two meals under controlled conditions. Volunteers received instruction for using the mobile telephone food record prior to their first meal, captured images of foods and beverages before and after eating, and participated in a feedback session. We used chi-square for comparisons of the set of skills, preferences, and perceptions between the adults and adolescents, and the McNemar test for comparisons within the adolescents and adults. RESULTS: Adults were more likely than adolescents to include all foods and beverages in the before and after images, but both age groups had difficulty including the entire fiducial marker. Compared with adolescents, significantly more adults had to capture more than one image before (38% vs 58%, P = .03) and after (25% vs 50%, P = .008) meal session 1 to obtain a suitable image. Despite being less efficient when using the mobile telephone food record, adults were more likely than adolescents to perceive remembering to capture images as easy (P < .001). CONCLUSIONS: A majority of both age groups were able to follow the defined set of skills; however, adults were less efficient when using the mobile telephone food record. Additional interactive training will likely be necessary for all users to provide extra practice in capturing images before entering a free-living situation. These results will inform age-specific development of the mobile telephone food record that may translate to a more accurate method of dietary assessment.
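
The chi-square and McNemar comparisons used in the study reduce to closed-form statistics for 2x2 tables. A minimal sketch, with counts invented to roughly match the reported percentages (not the study's actual data):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def mcnemar_statistic(b, c):
    """McNemar chi-square for paired proportions, where b and c are the two
    discordant counts (changed in one direction vs the other)."""
    return (b - c) ** 2 / (b + c)

# Invented counts approximating the reported rates: participants in each age
# group needing more than one capture attempt (38% of 78 adolescents,
# 58% of 57 adults).
adol_multi, adol_single = 30, 48
adult_multi, adult_single = 33, 24
chi2 = chi_square_2x2(adol_multi, adol_single, adult_multi, adult_single)
print(chi2 > 3.84)  # True means significant at alpha = .05 with df = 1
```

The McNemar variant would be applied to paired before/after counts within one age group rather than across groups.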


Subjects
Cell Phone , Diet Records , Energy Intake , Adolescent , Adult , Humans , Self Efficacy
5.
Public Health Nutr ; 14(7): 1184-91, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21324224

ABSTRACT

OBJECTIVE: To evaluate adolescents' abilities to identify foods and estimate the portion size of foods consumed in order to inform development of the mobile telephone food record (mpFR). DESIGN: Data were collected from two samples of adolescents (11-18 years). Adolescents in sample 1 participated in one lunch (n 63) and fifty-five of the sixty-three adolescents (87 %) returned for breakfast the next morning. Sample 2 volunteers received all meals and snacks for a 24 h period. At mealtime, sample 1 participants were asked to write down the names of the foods. Sample 2 participants identified foods in an image of their meal 10-14 h postprandial. Adolescents in sample 2 also estimated portion sizes of their breakfast foods and snacks. RESULTS: Sample 1 identified thirty of the thirty-eight food items correctly, and of the misidentified foods all were identified within the correct major food group. For sample 2, eleven of the thirteen food items were identified correctly 100 % of the time. Half of the breakfast and snack foods had at least one portion size estimate within 10 % of the true amount using a variety of measurement descriptors. CONCLUSIONS: The results provide evidence that adolescents can correctly identify familiar foods and they can look at an image of their meal and identify the foods in the image up to 14·5 h postprandial. The results of the present study not only inform the development of the mpFR but also provide strong evidence of the use of digital images of eating occasions in research and clinical settings.


Subjects
Diet Records , Diet Surveys/instrumentation , Diet Surveys/methods , Food/classification , Nutrition Assessment , Adolescent , Adolescent Behavior , Child , Energy Intake , Female , Humans , Male , Photography , Postprandial Period , Time Factors , United States
6.
IEEE Trans Vis Comput Graph ; 16(2): 205-20, 2010.
Article in English | MEDLINE | ID: mdl-20075482

ABSTRACT

As data sources become larger and more complex, the ability to effectively explore and analyze patterns among varying sources becomes a critical bottleneck in analytic reasoning. Incoming data contain multiple variables, a low signal-to-noise ratio, and a degree of uncertainty, all of which hinder exploration, hypothesis generation/exploration, and decision making. To facilitate the exploration of such data, advanced tool sets are needed that allow the user to interact with their data in a visual environment that provides direct analytic capability for finding data aberrations or hotspots. In this paper, we present a suite of tools designed to facilitate the exploration of spatiotemporal data sets. Our system allows users to search for hotspots in both space and time, combining linked views and interactive filtering to provide users with contextual information about their data and allow the user to develop and explore their hypotheses. Statistical data models and alert detection algorithms are provided to help draw user attention to critical areas. Demographic filtering can then be further applied as hypotheses generated become fine tuned. This paper demonstrates the use of such tools on multiple geospatiotemporal data sets.


Subjects
Algorithms , Artificial Intelligence , Computer Graphics , Image Interpretation, Computer-Assisted/methods , Information Storage and Retrieval/methods , Models, Theoretical , User-Computer Interface , Computer Simulation
7.
IEEE Trans Vis Comput Graph ; 26(1): 353-363, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31425085

ABSTRACT

Many evaluation methods have been used to assess the usefulness of Visual Analytics (VA) solutions. These methods stem from a variety of origins with different assumptions and goals, which cause confusion about their proofing capabilities. Moreover, the lack of discussion about the evaluation processes may limit our potential to develop new evaluation methods specialized for VA. In this paper, we present an analysis of evaluation methods that have been used to summatively evaluate VA solutions. We provide a survey and taxonomy of the evaluation methods that have appeared in the VAST literature in the past two years. We then analyze these methods in terms of validity and generalizability of their findings, as well as the feasibility of using them. We propose a new metric called summative quality to compare evaluation methods according to their ability to prove usefulness, and make recommendations for selecting evaluation methods based on their summative quality in the VA domain.

8.
IEEE Trans Vis Comput Graph ; 26(1): 874-883, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31425086

ABSTRACT

Social media platforms are filled with social spambots. Detecting these malicious accounts is essential, yet challenging, as they continually evolve to evade detection techniques. In this article, we present VASSL, a visual analytics system that assists in the process of detecting and labeling spambots. Our tool enhances the performance and scalability of manual labeling by providing multiple connected views and utilizing dimensionality reduction, sentiment analysis and topic modeling, enabling insights for the identification of spambots. The system allows users to select and analyze groups of accounts in an interactive manner, which enables the detection of spambots that may not be identified when examined individually. We present a user study to objectively evaluate the performance of VASSL users, as well as to capture subjective opinions about the usefulness and ease of use of the tool.

9.
IEEE Trans Vis Comput Graph ; 26(1): 558-568, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31442995

ABSTRACT

Various domain users are increasingly leveraging real-time social media data to gain rapid situational awareness. However, due to the high noise in the deluge of data, effectively determining semantically relevant information can be difficult, further complicated by the changing definition of relevancy by each end user for different events. The majority of existing methods for short text relevance classification fail to incorporate users' knowledge into the classification process. Existing methods that incorporate interactive user feedback focus on historical datasets. Therefore, classifiers cannot be interactively retrained for specific events or user-dependent needs in real-time. This limits real-time situational awareness, as streaming data that is incorrectly classified cannot be corrected immediately, permitting the possibility for important incoming data to be incorrectly classified as well. We present a novel interactive learning framework to improve the classification process in which the user iteratively corrects the relevancy of tweets in real-time to train the classification model on-the-fly for immediate predictive improvements. We computationally evaluate our classification model adapted to learn at interactive rates. Our results show that our approach outperforms state-of-the-art machine learning models. In addition, we integrate our framework with the extended Social Media Analytics and Reporting Toolkit (SMART) 2.0 system, allowing the use of our interactive learning framework within a visual analytics system tailored for real-time situational awareness. To demonstrate our framework's effectiveness, we provide domain expert feedback from first responders who used the extended SMART 2.0 system.
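
The interactive retraining loop can be illustrated with a deliberately simple stand-in: an online logistic-regression classifier over bag-of-words features, updated on every user correction. This is a sketch of the general idea only, not the authors' SMART 2.0 model; all tokens and labels are invented.

```python
import math

weights = {}          # token -> weight, grown on the fly
LEARNING_RATE = 0.5

def predict(tokens):
    """Probability that a tweet with these tokens is relevant."""
    z = sum(weights.get(t, 0.0) for t in tokens)
    return 1 / (1 + math.exp(-z))

def user_correction(tokens, relevant):
    """One interactive feedback step: gradient update toward the user's label."""
    error = (1.0 if relevant else 0.0) - predict(tokens)
    for t in tokens:
        weights[t] = weights.get(t, 0.0) + LEARNING_RATE * error

# Simulated feedback loop: the user relabels two tweets for a flood event.
user_correction(["river", "flooding", "downtown"], relevant=True)
user_correction(["flooding", "sale", "discount"], relevant=False)
print(predict(["river", "flooding"]))  # leans relevant after retraining
```

Because each correction updates the model immediately, subsequent streaming tweets are scored by the corrected model — the property the paper argues is missing from classifiers trained only on historical data.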

10.
IEEE Trans Vis Comput Graph ; 26(1): 1193-1203, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31425117

ABSTRACT

Evaluating employee performance in organizations with varying workloads and tasks is challenging. Specifically, it is important to understand how quantitative measurements of employee achievements relate to supervisor expectations, what the main drivers of good performance are, and how to combine these complex and flexible performance evaluation metrics into an accurate portrayal of organizational performance in order to identify shortcomings and improve overall productivity. To facilitate this process, we summarize common organizational performance analyses into four visual exploration task categories. Additionally, we develop MetricsVis, a visual analytics system composed of multiple coordinated views to support the dynamic evaluation and comparison of individual, team, and organizational performance in public safety organizations. MetricsVis provides four primary visual components to expedite performance evaluation: (1) a priority adjustment view to support direct manipulation on evaluation metrics; (2) a reorderable performance matrix to demonstrate the details of individual employees; (3) a group performance view that highlights aggregate performance and individual contributions for each group; and (4) a projection view illustrating employees with similar specialties to facilitate shift assignments and training. We demonstrate the usability of our framework with two case studies from medium-sized law enforcement agencies and highlight its broader applicability to other domains.
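
The priority-adjustment idea — an overall score formed as a supervisor-weighted sum of normalized metrics — can be sketched as follows. The metric names, weights, and values are hypothetical, not drawn from the paper's case studies.

```python
# Hypothetical per-officer metrics (names, values, and weights are invented).
officers = {
    "A": {"arrests": 12, "reports": 40, "response_min": 6.0},
    "B": {"arrests": 20, "reports": 25, "response_min": 9.0},
}
weights = {"arrests": 0.4, "reports": 0.35, "response_min": 0.25}

def normalized(metric, value):
    """Min-max normalize a metric across officers; invert response time,
    since a lower response time is better."""
    vals = [o[metric] for o in officers.values()]
    lo, hi = min(vals), max(vals)
    x = (value - lo) / (hi - lo) if hi > lo else 1.0
    return 1.0 - x if metric == "response_min" else x

def score(name):
    """Supervisor-weighted overall performance score in [0, 1]."""
    return sum(w * normalized(m, officers[name][m]) for m, w in weights.items())

ranking = sorted(officers, key=score, reverse=True)
print(ranking)
```

Adjusting the weights and watching the ranking re-sort is the kind of direct manipulation the priority adjustment view supports.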


Subjects
Computer Graphics , Employee Performance Appraisal/classification , Image Processing, Computer-Assisted/methods , Algorithms , Humans , Police
11.
BMC Med Inform Decis Mak ; 9: 21, 2009 Apr 21.
Article in English | MEDLINE | ID: mdl-19383138

ABSTRACT

BACKGROUND: Public health surveillance is the monitoring of data to detect and quantify unusual health events. Monitoring pre-diagnostic data, such as emergency department (ED) patient chief complaints, enables rapid detection of disease outbreaks. There are many sources of variation in such data; statistical methods need to accurately model them as a basis for timely and accurate disease outbreak detection methods. METHODS: Our new methods for modeling daily chief complaint counts are based on a seasonal-trend decomposition procedure based on loess (STL) and were developed using data from the 76 EDs of the Indiana surveillance program from 2004 to 2008. Square root counts are decomposed into inter-annual, yearly-seasonal, day-of-the-week, and random-error components. Using this decomposition method, we develop a new synoptic-scale (days to weeks) outbreak detection method and carry out a simulation study to compare detection performance to four well-known methods for nine outbreak scenarios. RESULTS: The components of the STL decomposition reveal insights into the variability of the Indiana ED data. Day-of-the-week components tend to peak Sunday or Monday, fall steadily to a minimum Thursday or Friday, and then rise to the peak. Yearly-seasonal components show seasonal influenza, some with bimodal peaks. Some inter-annual components increase slightly due to increasing patient populations. A new outbreak detection method based on the decomposition modeling performs well with 90 days or more of data. Control limits were set empirically so that all methods had a specificity of 97%. STL had the largest sensitivity in all nine outbreak scenarios. The STL method also exhibited a well-behaved false positive rate when run on the data with no outbreaks injected. CONCLUSION: The STL decomposition method for chief complaint counts leads to a rapid and accurate detection method for disease outbreaks, and requires only 90 days of historical data to be put into operation. The visualization tools that accompany the decomposition and outbreak methods provide much insight into patterns in the data, which is useful for surveillance operations.
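
The decomposition-based detector can be approximated in a few lines: square-root-transform the daily counts, estimate a day-of-week component from per-slot means, and alert when a new day's remainder exceeds an empirical control limit. This toy stands in for the paper's loess-based STL procedure, and the counts are invented.

```python
import math

# Three weeks of baseline daily chief-complaint counts (one value per day,
# weekday slots repeating every 7 days), then a test week with a hidden spike.
baseline = [64, 60, 49, 42, 40, 45, 56,
            66, 58, 47, 44, 41, 47, 55,
            62, 57, 48, 43, 42, 46, 57]
test_week = [65, 59, 90, 44, 40, 46, 55]  # spike at day index 2

# The square-root transform stabilizes the variance of the counts.
sqrt_base = [math.sqrt(c) for c in baseline]

# Day-of-week component: mean of the transformed counts in each weekday slot.
dow = [sum(sqrt_base[d::7]) / len(sqrt_base[d::7]) for d in range(7)]

# Empirical control limit from the baseline remainder's standard deviation.
resid = [v - dow[i % 7] for i, v in enumerate(sqrt_base)]
sd = math.sqrt(sum(r * r for r in resid) / len(resid))

# Alert on any test-week day whose remainder exceeds 3 standard deviations.
alerts = [i for i, c in enumerate(test_week)
          if math.sqrt(c) - dow[i % 7] > 3 * sd]
print(alerts)
```

The real method additionally models inter-annual and yearly-seasonal components with loess, which is why it needs around 90 days of history before going into operation.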


Subjects
Bioterrorism/statistics & numerical data , Disease Outbreaks/statistics & numerical data , Emergency Service, Hospital/statistics & numerical data , Mathematical Computing , Models, Statistical , Population Surveillance/methods , Respiratory Tract Infections/epidemiology , Algorithms , Cross-Sectional Studies , Data Collection/statistics & numerical data , Documentation/statistics & numerical data , Early Diagnosis , Humans , Indiana , Longitudinal Studies , Medical Informatics Computing , Poisson Distribution , Respiratory Tract Infections/diagnosis , Seasons , Syndrome
12.
IEEE Trans Vis Comput Graph ; 15(1): 77-86, 2009.
Article in English | MEDLINE | ID: mdl-19008557

ABSTRACT

Volume illustration can be used to provide insight into source data from CT/MRI scanners in much the same way as medical illustration depicts the important details of anatomical structures. As such, proven techniques used in medical illustration should be transferable to volume illustration, providing scientists with new tools to visualize their data. In recent years, a number of techniques have been developed to enhance the rendering pipeline and create illustrative effects similar to the ones found in medical textbooks and surgery manuals. Such effects usually highlight important features of the subject while subjugating its context and providing depth cues for correct perception. Inspired by traditional visual and line-drawing techniques found in medical illustration, we have developed a collection of fast algorithms for more effective emphasis/de-emphasis of data as well as conveyance of spatial relationships. Our techniques utilize effective outlining techniques and selective depth enhancement to provide perceptual cues of object importance as well as spatial relationships in volumetric datasets. Moreover, we have used illustration principles to effectively combine and adapt basic techniques so that they work together to provide consistent visual information and a uniform style.


Subjects
Algorithms , Computer Graphics , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Subtraction Technique , Tomography, X-Ray Computed/methods
13.
IEEE Trans Vis Comput Graph ; 15(2): 221-34, 2009.
Article in English | MEDLINE | ID: mdl-19147887

ABSTRACT

The availability of commodity volumetric displays provides ordinary users with a new means of visualizing 3D data. Many of these displays are in the class of isotropically emissive light devices, which are designed to directly illuminate voxels in a 3D frame buffer, producing X-ray-like visualizations. While this technology can offer intuitive insight into a 3D object, the visualizations are perceptually different from what a computer graphics or visualization system would render on a 2D screen. This paper formalizes rendering on isotropically emissive displays and introduces a novel technique that emulates traditional rendering effects on isotropically emissive volumetric displays, delivering results that are much closer to what is traditionally rendered on regular 2D screens. Such a technique can significantly broaden the capability and usage of isotropically emissive volumetric displays. Our method takes a 3D dataset or object as the input, creates an intermediate light field, and outputs a special 3D volume dataset called a lumi-volume. This lumi-volume encodes approximated rendering effects in a form suitable for display with accumulative integrals along unobtrusive rays. When a lumi-volume is fed directly into an isotropically emissive volumetric display, it creates a 3D visualization with surface shading effects that are familiar to the users. The key to this technique is an algorithm for creating a 3D lumi-volume from a 4D light field. In this paper, we discuss a number of technical issues, including transparency effects due to the dimension reduction and sampling rates for light fields and lumi-volumes. We show the effectiveness and usability of this technique with a selection of experimental results captured from an isotropically emissive volumetric display, and we demonstrate its potential capability and scalability with computer-simulated high-resolution results.


Subjects
Computer Graphics , Image Processing, Computer-Assisted , Imaging, Three-Dimensional , Algorithms , Computer Simulation , Humans , Imaging, Three-Dimensional/methods , Light
14.
IEEE Trans Vis Comput Graph ; 15(6): 1473-80, 2009.
Article in English | MEDLINE | ID: mdl-19834223

ABSTRACT

The use of multi-dimensional transfer functions for direct volume rendering has been shown to be an effective means of extracting materials and their boundaries for both scalar and multivariate data. The most common multi-dimensional transfer function consists of a two-dimensional (2D) histogram with axes representing a subset of the feature space (e.g., value vs. value gradient magnitude), with each entry in the 2D histogram being the number of voxels at a given feature space pair. Users then assign color and opacity to the voxel distributions within the given feature space through the use of interactive widgets (e.g., box, circular, triangular selection). Unfortunately, such tools lead users through a trial-and-error approach as they assess which data values within the feature space map to a given area of interest within the volumetric space. In this work, we propose the addition of non-parametric clustering within the transfer function feature space in order to extract patterns and guide transfer function generation. We apply a non-parametric kernel density estimation to group voxels of similar features within the 2D histogram. These groups are then binned and colored based on their estimated density, and the user may interactively grow and shrink the binned regions to explore feature boundaries and extract regions of interest. We also extend this scheme to temporal volumetric data in which time steps of 2D histograms are composited into a histogram volume. A three-dimensional (3D) density estimation is then applied, and users can explore regions within the feature space across time without adjusting the transfer function at each time step. Our work enables users to effectively explore the structures found within a feature space of the volume and provide a context in which the user can understand how these structures relate to their volumetric data. We provide tools for enhanced exploration and manipulation of the transfer function, and we show that the initial transfer function generation serves as a reasonable base for volumetric rendering, reducing the trial-and-error overhead typically found in transfer function design.
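
The non-parametric clustering step can be illustrated with a small stand-in: smooth a 2D feature-space histogram with a Gaussian kernel and bin cells by estimated density, so that dense voxel groups can be grown or shrunk interactively. The histogram values and bandwidth are invented for the example, not taken from the paper.

```python
import math

# Toy 5x5 histogram over (value, gradient magnitude); entries are voxel
# counts. Two dense groups are hidden around (1, 2) and (4, 4).
hist = [[0, 1, 2, 1, 0],
        [1, 4, 8, 4, 0],
        [0, 2, 4, 2, 0],
        [0, 0, 1, 6, 7],
        [0, 0, 0, 7, 9]]
H, W, BANDWIDTH = len(hist), len(hist[0]), 1.0

def density(y, x):
    """Gaussian kernel density estimate at histogram cell (y, x)."""
    total = 0.0
    for j in range(H):
        for i in range(W):
            d2 = (y - j) ** 2 + (x - i) ** 2
            total += hist[j][i] * math.exp(-d2 / (2 * BANDWIDTH ** 2))
    return total

# Bin each cell into low/medium/high density for transfer-function coloring.
dens = [[density(y, x) for x in range(W)] for y in range(H)]
peak = max(v for row in dens for v in row)
bins = [[min(2, int(3 * v / peak)) for v in row] for row in dens]
print(bins)  # 0 = sparse background, 2 = dense cluster cores
```

In the paper's scheme the user would then interactively expand or contract these density bins to probe material boundaries, rather than hand-drawing widgets.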


Subjects
Algorithms , Computer Graphics , Image Processing, Computer-Assisted/methods , Statistics, Nonparametric , Cluster Analysis , Diagnostic Imaging/methods , Humans
15.
IEEE Trans Vis Comput Graph ; 15(6): 1425-32, 2009.
Article in English | MEDLINE | ID: mdl-19834217

ABSTRACT

Medical illustration has demonstrated its effectiveness to depict salient anatomical features while hiding the irrelevant details. Current solutions are ineffective for visualizing fibrous structures such as muscle, because typical datasets (CT or MRI) do not contain directional details. In this paper, we introduce a new muscle illustration approach that leverages diffusion tensor imaging (DTI) data and example-based texture synthesis techniques. Beginning with a volumetric diffusion tensor image, we reformulate it into a scalar field and an auxiliary guidance vector field to represent the structure and orientation of a muscle bundle. A muscle mask derived from the input diffusion tensor image is used to classify the muscle structure. The guidance vector field is further refined to remove noise and clarify structure. To simulate the internal appearance of the muscle, we propose a new two-dimensional example based solid texture synthesis algorithm that builds a solid texture constrained by the guidance vector field. Illustrating the constructed scalar field and solid texture efficiently highlights the global appearance of the muscle as well as the local shape and structure of the muscle fibers in an illustrative fashion. We have applied the proposed approach to five example datasets (four pig hearts and a pig leg), demonstrating plausible illustration and expressiveness.


Subjects
Computer Graphics , Diffusion Magnetic Resonance Imaging/methods , Heart/anatomy & histology , Image Processing, Computer-Assisted/methods , Muscle, Skeletal/anatomy & histology , Algorithms , Animals , Hindlimb/anatomy & histology , Myocardium/pathology , Normal Distribution , Swine
16.
IEEE Trans Vis Comput Graph ; 25(1): 364-373, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30130197

ABSTRACT

Interpretation and diagnosis of machine learning models have gained renewed interest in recent years with breakthroughs in new approaches. We present Manifold, a framework that utilizes visual analysis techniques to support interpretation, debugging, and comparison of machine learning models in a more transparent and interactive manner. Conventional techniques usually focus on visualizing the internal logic of a specific model type (i.e., deep neural networks), lacking the ability to extend to a more complex scenario where different model types are integrated. To this end, Manifold is designed as a generic framework that does not rely on or access the internal logic of the model and solely observes the input (i.e., instances or features) and the output (i.e., the predicted result and probability distribution). We describe the workflow of Manifold as an iterative process consisting of three major phases that are commonly involved in the model development and diagnosis process: inspection (hypothesis), explanation (reasoning), and refinement (verification). The visual components supporting these tasks include a scatterplot-based visual summary that overviews the models' outcome and a customizable tabular view that reveals feature discrimination. We demonstrate current applications of the framework on the classification and regression tasks and discuss other potential machine learning use scenarios where Manifold can be applied.

17.
IEEE Trans Vis Comput Graph ; 24(3): 1287-1300, 2018 Mar.
Article in English | MEDLINE | ID: mdl-28186901

ABSTRACT

Geographic visualization research has focused on a variety of techniques to represent and explore spatiotemporal data. The goal of those techniques is to enable users to explore events and interactions over space and time in order to facilitate the discovery of patterns, anomalies and relationships within the data. However, it is difficult to extract and visualize data flow patterns over time for non-directional statistical data without trajectory information. In this work, we develop a novel flow analysis technique to extract, represent, and analyze flow maps of non-directional spatiotemporal data unaccompanied by trajectory information. We estimate a continuous distribution of these events over space and time, and extract flow fields for spatial and temporal changes utilizing a gravity model. Then, we visualize the spatiotemporal patterns in the data by employing flow visualization techniques. The user is presented with temporal trends of geo-referenced discrete events on a map. As such, overall spatiotemporal data flow patterns help users analyze geo-referenced temporal events, such as disease outbreaks, crime patterns, etc. To validate our model, we discard the trajectory information in an origin-destination dataset, apply our technique to the data, and compare the derived trajectories with the originals. Finally, we present spatiotemporal trend analysis for statistical datasets including Twitter data, maritime search and rescue events, and syndromic surveillance.
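
The gravity-model flow extraction can be caricatured in one dimension: treat each cell's gain in event density as an attracting mass and sum inverse-square pulls to get a flow value per cell. This is a loose illustration of the idea, not the authors' formulation; the density changes are invented.

```python
# Change in event density between two time steps on a 1D strip of cells
# (positive = the cell gained events between the snapshots).
delta = [-3.0, -1.0, 0.0, 1.0, 3.0]

def flow_at(i):
    """Net pull on cell i toward gaining cells: gain / squared distance,
    signed by direction (positive means the flow points right)."""
    pull = 0.0
    for j, gain in enumerate(delta):
        if j == i or gain <= 0:
            continue  # only cells that gained events act as attractors
        d = j - i
        pull += gain / (d * d) * (1 if d > 0 else -1)
    return pull

flows = [round(flow_at(i), 2) for i in range(len(delta))]
print(flows)  # losing cells on the left are pulled toward the gains on the right
```

In two dimensions the same sum produces a vector field over the map, which standard flow visualization techniques can then render.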

18.
IEEE Comput Graph Appl ; 37(2): 42-53, 2017.
Article in English | MEDLINE | ID: mdl-28320644

ABSTRACT

Visual clutter is a common challenge when visualizing large rank time series data. WikiTopReader, a reader of Wikipedia page rank, lets users explore connections among top-viewed pages by connecting page-rank behaviors with page-link relations. Such a combination enhances the unweighted Wikipedia page-link network and focuses attention on the page of interest. A set of user evaluations shows that the system effectively represents evolving ranking patterns and page-wise correlation.

19.
IEEE Trans Vis Comput Graph ; 12(5): 1061-8, 2006.
Article in English | MEDLINE | ID: mdl-17080835

ABSTRACT

The Network for Computational Nanotechnology (NCN) has developed a science gateway at nanoHUB.org for nanotechnology education and research. Remote users can browse through online seminars and courses, and launch sophisticated nanotechnology simulation tools, all within their web browser. Simulations are supported by a middleware that can route complex jobs to grid supercomputing resources. But what is truly unique about the middleware is the way that it uses hardware-accelerated graphics to support both problem setup and result visualization. This paper describes the design and integration of a remote visualization framework into the nanoHUB for interactive visual analytics of nanotechnology simulations. Our services flexibly handle a variety of nanoscience simulations, render them utilizing graphics hardware acceleration in a scalable manner, and deliver them seamlessly through the middleware to the user. Rendering is done only on-demand, as needed, so each graphics hardware unit can simultaneously support many user sessions. Additionally, a novel node distribution scheme further improves our system's scalability. Our approach is not only efficient but also cost-effective. Only a half-dozen render nodes are anticipated to support hundreds of active tool sessions on the nanoHUB. Moreover, this architecture and visual analytics environment provides capabilities that can serve many areas of scientific simulation and analysis beyond nanotechnology with its ability to interactively analyze and visualize multivariate scalar and vector fields.


Subjects
Computer Graphics/instrumentation , Internet , Models, Theoretical , Nanostructures/chemistry , Nanostructures/ultrastructure , Nanotechnology/instrumentation , Nanotechnology/methods , Signal Processing, Computer-Assisted/instrumentation , User-Computer Interface , Computer Simulation , Computers
20.
IEEE Trans Vis Comput Graph ; 12(5): 1157-64, 2006.
Article in English | MEDLINE | ID: mdl-17080847

ABSTRACT

Meteorological research involves the analysis of multi-field, multi-scale, and multi-source data sets. In order to better understand these data sets, models and measurements at different resolutions must be analyzed. Unfortunately, traditional atmospheric visualization systems only provide tools to view a limited number of variables and small segments of the data. These tools are often restricted to two-dimensional contour or vector plots or three-dimensional isosurfaces. The meteorologist must mentally synthesize the data from multiple plots to glean the information needed to produce a coherent picture of the weather phenomenon of interest. In order to provide better tools to meteorologists and reduce system limitations, we have designed an integrated atmospheric visual analysis and exploration system for interactive analysis of weather data sets. Our system allows for the integrated visualization of 1D, 2D, and 3D atmospheric data sets in common meteorological grid structures and utilizes a variety of rendering techniques. These tools provide meteorologists with new abilities to analyze their data and answer questions on regions of interest, ranging from physics-based atmospheric rendering to illustrative rendering containing particles and glyphs. In this paper, we will discuss the use and performance of our visual analysis for two important meteorological applications. The first application is warm rain formation in small cumulus clouds. Here, our three-dimensional, interactive visualization of modeled drop trajectories within spatially correlated fields from a cloud simulation has provided researchers with new insight. Our second application is improving and validating severe storm models, specifically the Weather Research and Forecasting (WRF) model. This is done through correlative visualization of WRF model and experimental Doppler storm data.
