Results 1 - 15 of 15
1.
Front Psychiatry ; 15: 1337030, 2024.
Article in English | MEDLINE | ID: mdl-38333893

ABSTRACT

Background: Campus lockdown orders were issued to prevent and control COVID-19, which resulted in psychological problems among college students. However, students' experiences during the pandemic may also lead to positive personal changes, including posttraumatic growth (PTG). The current study examined the mediating roles of belief in a just world and meaning in life between social support and PTG during the COVID-19 campus lockdown. Method: An online survey was conducted among 1,711 college students in Hebei Province, China, and a structural equation model was established based on the survey results. Results: Social support positively predicted PTG. Belief in a just world and meaning in life each mediated the relationship between social support and PTG. Social support also predicted PTG through the serial mediating effect of belief in a just world and meaning in life. Conclusion: These results indicate mechanisms by which social support influences PTG and provide insights into how to promote posttraumatic growth among university students in the post-pandemic period.
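The serial mediation structure described in this abstract (social support → belief in a just world → meaning in life → PTG) can be sketched with ordinary least-squares regressions on synthetic data; the variables and effect sizes below are illustrative stand-ins, not the study's instruments or estimates, and plain OLS stands in for the paper's structural equation model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1711  # sample size reported in the abstract

# Synthetic data following the hypothesized causal chain
support = rng.normal(size=n)
just_world = 0.5 * support + rng.normal(size=n)                   # mediator 1
meaning = 0.4 * support + 0.3 * just_world + rng.normal(size=n)   # mediator 2
ptg = 0.2 * support + 0.3 * just_world + 0.3 * meaning + rng.normal(size=n)

def ols(y, *xs):
    """Return OLS coefficients (intercept first)."""
    X = np.column_stack([np.ones_like(y), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Serial indirect effect = a1 * d21 * b2 (product of the three path coefficients)
a1 = ols(just_world, support)[1]            # support -> just_world
d21 = ols(meaning, support, just_world)[2]  # just_world -> meaning, controlling support
b2 = ols(ptg, support, just_world, meaning)[3]  # meaning -> PTG, controlling the rest
serial_indirect = a1 * d21 * b2
print(round(serial_indirect, 3))
```

A positive product of the three path coefficients is what "serial mediation" amounts to numerically; real analyses would add bootstrap confidence intervals.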

2.
IEEE Trans Vis Comput Graph ; 30(1): 944-954, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37878446

ABSTRACT

Computational notebooks have become increasingly popular for exploratory data analysis due to their ability to support data exploration and explanation within a single document. Effective documentation for explaining chart findings during the exploration process is essential, as it helps recall and share data analysis. However, documenting chart findings remains a challenge due to its time-consuming and tedious nature. While existing automatic methods alleviate some of the burden on users, they often fail to cater to users' specific interests. In response to these limitations, we present InkSight, a mixed-initiative computational notebook plugin that generates finding documentation based on the user's intent. InkSight allows users to express their intent in specific data subsets by intuitively sketching atop visualizations. To facilitate this, we designed two types of sketches: open-path and closed-path sketches. Upon receiving a user's sketch, InkSight identifies the sketch type and the corresponding selected data items. It then filters data fact types based on the sketch and selected data items before employing existing automatic data fact recommendation algorithms to infer data facts. Using a large language model (GPT-3.5), InkSight converts data facts into effective natural language documentation. Users can conveniently fine-tune the generated documentation within InkSight. A user study with 12 participants demonstrated the usability and effectiveness of InkSight in expressing user intent and facilitating chart finding documentation.
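The open-path versus closed-path distinction above can be sketched with a simple geometric heuristic: a stroke whose endpoints nearly meet relative to its total length is a closed (lasso-like) selection. This is a minimal stand-in, not InkSight's actual sketch classifier; the threshold is an assumption.

```python
import math

def classify_sketch(points, closure_ratio=0.2):
    """Classify a freehand stroke as 'closed' (lasso-like) or 'open'
    by comparing the start-end gap to the total stroke length."""
    length = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
    if length == 0:
        return "open"
    gap = math.dist(points[0], points[-1])
    return "closed" if gap / length < closure_ratio else "open"

# A rough circle closes on itself; a horizontal stroke does not.
circle = [(math.cos(t / 10), math.sin(t / 10)) for t in range(63)]
line = [(x / 10, 0.0) for x in range(63)]
print(classify_sketch(circle), classify_sketch(line))
```

A closed sketch would then select the data items whose marks fall inside the lasso, while an open one would select items near the path.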

3.
IEEE Trans Vis Comput Graph ; 30(1): 262-272, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37883259

ABSTRACT

Transformer models are revolutionizing machine learning, but their inner workings remain mysterious. In this work, we present a new visualization technique designed to help researchers understand the self-attention mechanism in transformers that allows these models to learn rich, contextual relationships between elements of a sequence. The main idea behind our method is to visualize a joint embedding of the query and key vectors used by transformer models to compute attention. Unlike previous attention visualization techniques, our approach enables the analysis of global patterns across multiple input sequences. We create an interactive visualization tool, AttentionViz (demo: http://attentionviz.com), based on these joint query-key embeddings, and use it to study attention mechanisms in both language and vision transformers. We demonstrate the utility of our approach in improving model understanding and offering new insights about query-key interactions through several application scenarios and expert feedback.
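The core idea of a joint query-key embedding can be sketched by stacking the query and key vectors of one attention head and projecting both into a shared low-dimensional space. Here PCA via SVD stands in for whatever dimensionality reduction AttentionViz uses, and all data and projection matrices are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
seq_len, d_model, d_head = 8, 16, 4

# Hypothetical token representations and learned projection matrices
x = rng.normal(size=(seq_len, d_model))
w_q = rng.normal(size=(d_model, d_head))
w_k = rng.normal(size=(d_model, d_head))
queries, keys = x @ w_q, x @ w_k

# Joint embedding: stack queries and keys, center, project to 2-D via PCA
joint = np.vstack([queries, keys])
joint = joint - joint.mean(axis=0)
_, _, vt = np.linalg.svd(joint, full_matrices=False)
coords = joint @ vt[:2].T  # (2*seq_len, 2): queries first, then keys

print(coords.shape)
```

Plotting `coords` with queries and keys in different colors would show, in miniature, the kind of global query-key pattern the tool visualizes across many input sequences.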

4.
IEEE Comput Graph Appl ; 43(5): 83-90, 2023.
Article in English | MEDLINE | ID: mdl-37713213

ABSTRACT

In the past two decades, research in visual analytics (VA) applications has made tremendous progress, not just in terms of scientific contributions, but also in real-world impact across wide-ranging domains including bioinformatics, urban analytics, and explainable AI. Despite these success stories, questions on the rigor and value of VA application research have emerged as a grand challenge. This article outlines a research and development agenda for making VA application research more rigorous and impactful. We first analyze the characteristics of VA application research and explain how they cause the rigor and value problem. Next, we propose a research ecosystem for improving scientific value and rigor, and outline an agenda with 12 open challenges spanning four areas: foundation, methodology, application, and community. We encourage discussions, debates, and innovative efforts toward more rigorous and impactful VA research.

5.
Article in English | MEDLINE | ID: mdl-37028006

ABSTRACT

Dashboards, which comprise multiple views on a single display, help analyze and communicate multiple perspectives of data simultaneously. However, creating effective and elegant dashboards is challenging since it requires careful and logical arrangement and coordination of multiple visualizations. To solve the problem, we propose DMiner, a data-driven approach for mining design rules from dashboards and automating dashboard organization. Specifically, we focus on two prominent aspects of the organization: arrangement, which describes the position, size, and layout of each view in the display space; and coordination, which indicates the interaction between pairwise views. We build a new dataset containing 854 dashboards crawled online, and develop feature engineering methods for describing the single views and view-wise relationships in terms of data, encoding, layout, and interactions. Further, we identify design rules among those features and develop a recommender for dashboard design. We demonstrate the usefulness of DMiner through an expert study and a user study. The expert study shows that our extracted design rules are reasonable and conform to the design practice of experts. Moreover, a comparative user study shows that our recommender could help automate dashboard organization and reach human-level performance. In summary, our work offers a promising starting point for applying design mining to visualizations to build recommenders.

6.
IEEE Trans Vis Comput Graph ; 29(3): 1638-1650, 2023 Mar.
Article in English | MEDLINE | ID: mdl-34780329

ABSTRACT

Data visualizations have been increasingly used in oral presentations to communicate data patterns to the general public. Clear verbal introductions of visualizations to explain how to interpret the visually encoded information are essential to convey the takeaways and avoid misunderstandings. We contribute a series of studies to investigate how to effectively introduce visualizations to the audience with varying degrees of visualization literacy. We begin with understanding how people are introducing visualizations. We crowdsource 110 introductions of visualizations and categorize them based on their content and structures. From these crowdsourced introductions, we identify different introduction strategies and generate a set of introductions for evaluation. We conduct experiments to systematically compare the effectiveness of different introduction strategies across four visualizations with 1,080 participants. We find that introductions explaining visual encodings with concrete examples are the most effective. Our study provides both qualitative and quantitative insights into how to construct effective verbal introductions of visualizations in presentations, inspiring further research in data storytelling.

7.
IEEE Trans Vis Comput Graph ; 29(8): 3685-3697, 2023 Aug.
Article in English | MEDLINE | ID: mdl-35446768

ABSTRACT

Appropriate gestures can enhance message delivery and audience engagement in both daily communication and public presentations. In this article, we contribute a visual analytic approach that assists professional public speaking coaches in improving their practice of gesture training through analyzing presentation videos. Manually checking and exploring gesture usage in the presentation videos is often tedious and time-consuming. There is no efficient method to help users conduct gesture exploration, which is challenging due to the intrinsically temporal evolution of gestures and their complex correlation to speech content. In this article, we propose GestureLens, a visual analytics system to facilitate gesture-based and content-based exploration of gesture usage in presentation videos. Specifically, the exploration view enables users to obtain a quick overview of the spatial and temporal distributions of gestures. The dynamic hand movements are first aggregated through a heatmap in the gesture space for uncovering spatial patterns, and then decomposed into two mutually perpendicular timelines for revealing temporal patterns. The relation view allows users to explicitly explore the correlation between speech content and gestures by enabling linked analysis and intuitive glyph designs. The video view and dynamic view show the context and overall dynamic movement of the selected gestures, respectively. Two usage scenarios and expert interviews with professional presentation coaches demonstrate the effectiveness and usefulness of GestureLens in facilitating gesture exploration and analysis of presentation videos.


Subject(s)
Computer Graphics, Gestures, Speech, Hand, Movement
8.
IEEE Trans Vis Comput Graph ; 29(1): 1026-1036, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36179000

ABSTRACT

The last decade has witnessed many visual analytics (VA) systems that make successful applications to wide-ranging domains like urban analytics and explainable AI. However, their research rigor and contributions have been extensively challenged within the visualization community. We come in defence of VA systems by contributing two interview studies gathering criticisms of VA systems and responses to those criticisms. First, we interview 24 researchers to collect criticisms from the review comments on their VA work. Through an iterative coding and refinement process, the interview feedback is summarized into a list of 36 common criticisms. Second, we interview 17 researchers to validate our list and collect their responses, thereby discussing implications for defending and improving the scientific values and rigor of VA systems. We highlight that the presented knowledge is deep, extensive, but also imperfect, provocative, and controversial, and thus recommend reading with an inclusive and critical eye. We hope our work can provide thoughts and foundations for conducting VA research and spark discussions to promote the research field forward more rigorously and vibrantly.

9.
IEEE Trans Vis Comput Graph ; 29(1): 690-700, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36179003

ABSTRACT

Analytical dashboards are popular in business intelligence to facilitate insight discovery with multiple charts. However, creating an effective dashboard is highly demanding, which requires users to have adequate data analysis background and be familiar with professional tools, such as Power BI. To create a dashboard, users have to configure charts by selecting data columns and exploring different chart combinations to optimize the communication of insights, a trial-and-error process. Recent research has started to use deep learning methods for dashboard generation to lower the burden of visualization creation. However, such efforts are greatly hindered by the lack of large-scale and high-quality datasets of dashboards. In this work, we propose using deep reinforcement learning to generate analytical dashboards, leveraging both well-established visualization knowledge and the estimation capacity of reinforcement learning. Specifically, we use visualization knowledge to construct a training environment and rewards for agents to explore and imitate human exploration behavior with a well-designed agent network. The usefulness of the deep reinforcement learning model is demonstrated through ablation studies and user studies. In conclusion, our work opens up new opportunities to develop effective ML-based visualization recommenders without training datasets available beforehand.

10.
IEEE Trans Vis Comput Graph ; 28(1): 162-172, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34587058

ABSTRACT

We contribute a deep-learning-based method that assists in designing analytical dashboards for analyzing a data table. Given a data table, data workers usually need to go through a tedious and time-consuming process to select meaningful combinations of data columns for creating charts. This process is further complicated by the need to create dashboards composed of multiple views that unveil different perspectives of data. Existing automated approaches for recommending multiple-view visualizations mainly build on manually crafted design rules, producing sub-optimal or irrelevant suggestions. To address this gap, we present a deep learning approach for selecting data columns and recommending multiple charts. More importantly, we integrate the deep learning models into a mixed-initiative system. Our model can make recommendations given optional user-input selections of data columns. The model, in turn, learns from provenance data of authoring logs in an offline manner. We compare our deep learning model with existing methods for visualization recommendation and conduct a user study to evaluate the usefulness of the system.

11.
IEEE Trans Vis Comput Graph ; 28(12): 5049-5070, 2022 12.
Article in English | MEDLINE | ID: mdl-34310306

ABSTRACT

Visualizations themselves have become a data format. Akin to other data formats such as text and images, visualizations are increasingly created, stored, shared, and (re-)used with artificial intelligence (AI) techniques. In this survey, we probe the underlying vision of formalizing visualizations as an emerging data format and review recent advances in applying AI techniques to visualization data (AI4VIS). We define visualization data as the digital representations of visualizations in computers and focus on data visualization (e.g., charts and infographics). We build our survey upon a corpus spanning ten different fields in computer science with an eye toward identifying important common interests. Our resulting taxonomy is organized around WHAT is visualization data and its representation, WHY and HOW to apply AI to visualization data. We highlight a set of common tasks that researchers apply to the visualization data and present a detailed discussion of AI approaches developed to accomplish those tasks. Drawing upon our literature review, we discuss several important research questions surrounding the management and exploitation of visualization data, as well as the role of AI in support of those processes. We make the list of surveyed papers and related material available online.


Subject(s)
Artificial Intelligence, Data Visualization, Computer Graphics, Surveys and Questionnaires
12.
IEEE Trans Vis Comput Graph ; 27(2): 1492-1502, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33048713

ABSTRACT

GIFs are enjoying increasing popularity on social media as a format for data-driven storytelling with visualization; simple visual messages are embedded in short animations that usually last less than 15 seconds and are played in automatic repetition. In this paper, we ask the question, "What makes a data-GIF understandable?" While other storytelling formats such as data videos, infographics, or data comics are relatively well studied, we have little knowledge about the design factors and principles for "data-GIFs". To close this gap, we provide results from semi-structured interviews and an online study with a total of 118 participants investigating the impact of design decisions on the understandability of data-GIFs. The study and our consequent analysis are informed by a systematic review and structured design space of 108 data-GIFs that we found online. Our results show the impact of design dimensions from our design space, such as animation encoding, context preservation, or repetition, on viewers' understanding of the GIF's core message. The paper concludes with a list of suggestions for creating more effective data-GIFs.

13.
IEEE Trans Vis Comput Graph ; 27(2): 464-474, 2021 02.
Article in English | MEDLINE | ID: mdl-33074819

ABSTRACT

We contribute MobileVisFixer, a new method to make visualizations more mobile-friendly. Although mobile devices have become the primary means of accessing information on the web, many existing visualizations are not optimized for small screens and can lead to a frustrating user experience. Currently, practitioners and researchers have to engage in a tedious and time-consuming process to ensure that their designs scale to screens of different sizes, and existing toolkits and libraries provide little support in diagnosing and repairing issues. To address this challenge, MobileVisFixer automates a mobile-friendly visualization re-design process with a novel reinforcement learning framework. To inform the design of MobileVisFixer, we first collected and analyzed SVG-based visualizations on the web, and identified five common mobile-friendly issues. MobileVisFixer addresses four of these issues on single-view Cartesian visualizations with linear or discrete scales by a Markov Decision Process model that is both generalizable across various visualizations and fully explainable. MobileVisFixer deconstructs charts into declarative formats, and uses a greedy heuristic based on Policy Gradient methods to find solutions to this difficult, multi-criteria optimization problem in reasonable time. In addition, MobileVisFixer can be easily extended to incorporate optimization algorithms for data visualizations. Quantitative evaluation on two real-world datasets demonstrates the effectiveness and generalizability of our method.

14.
IEEE Trans Vis Comput Graph ; 26(7): 2429-2442, 2020 07.
Article in English | MEDLINE | ID: mdl-30582544

ABSTRACT

While much research in the educational field has identified many presentation techniques, these techniques often overlap and are occasionally contradictory. Exploring presentation techniques used in TED Talks could provide evidence for a practical guideline. This study aims to explore the verbal and non-verbal presentation techniques from a collection of TED Talks. However, such analysis is challenging due to the difficulties of analyzing multimodal video collections consisting of frame images, text, and metadata. This paper proposes a visual analytic system to analyze multimodal content in video collections. The system features three views at different levels: the Projection View with novel glyphs to facilitate cluster analysis regarding presentation styles; the Comparison View to present temporal distribution and concurrences of presentation techniques and support intra-cluster analysis; and the Video View to enable contextualized exploration of a video. We conduct a case study with language education experts and university students to provide anecdotal evidence about the effectiveness of our approach, and report new findings about presentation techniques in TED Talks. Quantitative feedback from a user study confirms the usefulness of our visual system for multimodal analysis of video collections.

15.
IEEE Trans Vis Comput Graph ; 26(1): 927-937, 2020 01.
Article in English | MEDLINE | ID: mdl-31443002

ABSTRACT

Emotions play a key role in human communication and public presentations. Human emotions are usually expressed through multiple modalities. Therefore, exploring multimodal emotions and their coherence is of great value for understanding emotional expressions in presentations and improving presentation skills. However, manually watching and studying presentation videos is often tedious and time-consuming. There is a lack of tool support to help conduct an efficient and in-depth multi-level analysis. Thus, in this paper, we introduce EmoCo, an interactive visual analytics system to facilitate efficient analysis of emotion coherence across facial, text, and audio modalities in presentation videos. Our visualization system features a channel coherence view and a sentence clustering view that together enable users to obtain a quick overview of emotion coherence and its temporal evolution. In addition, a detail view and word view enable detailed exploration and comparison from the sentence level and word level, respectively. We thoroughly evaluate the proposed system and visualization techniques through two usage scenarios based on TED Talk videos and interviews with two domain experts. The results demonstrate the effectiveness of our system in gaining insights into emotion coherence in presentations.


Subject(s)
Emotions/classification, Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Pattern Recognition, Automated/methods, Video Recording, Computer Graphics, Humans, Semantics
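At its simplest, the cross-modal emotion coherence analyzed by EmoCo (entry 15) can be sketched as an agreement rate between per-sentence emotion labels produced by two modality channels. The labels below are illustrative, and this agreement rate is a stand-in for, not a description of, EmoCo's actual pipeline.

```python
def coherence(labels_a, labels_b):
    """Fraction of sentences where two modality channels
    (e.g., face and text) agree on the emotion label."""
    assert len(labels_a) == len(labels_b)
    return sum(a == b for a, b in zip(labels_a, labels_b)) / len(labels_a)

# Hypothetical per-sentence labels from a facial and a text channel
face = ["happy", "neutral", "happy", "sad"]
text = ["happy", "happy", "happy", "sad"]
print(coherence(face, text))  # 0.75
```

Sentences where the channels disagree (here, the second one) are exactly the spots a coherence view would surface for closer inspection.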