Results 1 - 20 of 36
1.
Children (Basel) ; 10(8)2023 Aug 07.
Article in English | MEDLINE | ID: mdl-37628354

ABSTRACT

Data tracking is a common feature of pain e-health applications; however, viewing visualizations of these data has not been investigated for its potential as an intervention in itself. We conducted a pilot feasibility parallel randomized cross-over trial with a 1:1 allocation ratio. Participants were youth aged 12-18 years recruited from a tertiary-level pediatric chronic pain clinic in Western Canada. Participants completed two weeks of Ecological Momentary Assessment (EMA) data collection, one of which also included access to a data visualization platform to view their results. The order of the weeks was randomized, and participants were not masked to group assignment. Objectives were to establish feasibility related to recruitment, retention, and participant experience. Of 146 youth approached, 48 were eligible and consented to participate; two actively withdrew prior to the EMA. Most participants reported satisfaction with the process and provided feedback on additional variables of interest. Technical issues with the data collection platform impacted participant experience and data analysis, and only 48% viewed the visualizations. Four youth reported adverse events not related to the visualizations. Data visualization offers a promising clinical tool, and patient experience feedback is critical for modifying the platform and addressing technical issues to prepare for deployment in a larger trial.

2.
Pilot Feasibility Stud ; 8(1): 223, 2022 Oct 03.
Article in English | MEDLINE | ID: mdl-36192779

ABSTRACT

BACKGROUND: Chronic pain is a common and costly condition in youth, associated with negative implications that reach far beyond the pain experience itself (e.g., interference with recreational, social, and academic activities, mental health sequelae). As a self-appraised condition, pain experience is influenced by patients' biases and meaning-making in relation to their symptoms and triggers. We propose that interacting with self-reported data will impact the experience of pain by altering understanding and expectations of symptom experience and how pain interacts with other factors (e.g., sleep, emotions, social interactions). In this study, we aim to establish the feasibility and acceptability of using a data visualization platform to track and monitor symptoms and their relationship with other factors, versus simple daily reporting of symptoms using a smartphone-based Ecological Momentary Assessment (EMA). METHODS: This protocol is for a randomized, single-center, open-label crossover trial. We aim to recruit 50 typically developing youth aged 12-18 years with chronic pain to take part in two phases of data collection. The trial will utilize an A-B counterbalanced design in which participants will be randomly assigned to receive either Part A (EMA alone for 7 days) or Part B (EMA plus visualization platform for 7 days) first and then receive the opposite phase after a 7-day break (washout period). Key outcomes will be participant reports of acceptability and feasibility, EMA completion rates, barriers, and perceptions of the benefits or risks of participation. Secondary exploratory analyses will examine the relationship between EMA-reported symptoms over time and in relation to baseline measures, as well as pilot data on any improvements in symptoms related to engaging with the data visualization platform.
DISCUSSION: This protocol describes the feasibility and pilot testing of a novel approach to promoting self-management and facilitating symptom appraisal using visualized data. We aim to determine whether there is a sufficient rationale, both from the perspective of feasibility and patient satisfaction/acceptability, to conduct a larger randomized controlled trial of this intervention. This intervention has the potential to support clinical care for youth with chronic pain and other conditions where self-appraisal and understanding of symptom patterns are a critical component of functional recovery. TRIAL REGISTRATION: Open Science Framework, https://doi.org/10.17605/OSF.IO/HQX7C. Registered on October 25, 2021 (osf.io/hqx7c).

3.
IEEE Trans Vis Comput Graph ; 28(6): 2500-2516, 2022 06.
Article in English | MEDLINE | ID: mdl-35120005

ABSTRACT

Graph neural networks (GNNs) are a class of powerful machine learning tools that model node relations for making predictions of nodes or links. GNN developers rely on quantitative metrics of the predictions to evaluate a GNN, but, as with many other neural networks, it is difficult for them to understand whether the GNN truly learns characteristics of a graph as expected. We propose an approach that relates an input graph to its node embedding (aka latent space), a common component of GNNs that is later used for prediction. We abstract the data and tasks, and develop an interactive multi-view interface called CorGIE to instantiate the abstraction. As the key function in CorGIE, we propose the K-hop graph layout to show topological neighbors in hops and their clustering structure. To evaluate the functionality and usability of CorGIE, we demonstrate how to use CorGIE in two usage scenarios, and conduct a case study with five GNN experts. Availability: Open-source code at https://github.com/zipengliu/corgie-ui/, supplemental materials & video at https://osf.io/tr3sb/.
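The K-hop grouping that underlies CorGIE's layout can be sketched as a breadth-first traversal that buckets nodes by hop distance from a focal node. The function name, adjacency-dict representation, and example graph below are illustrative assumptions, not CorGIE's actual code:

```python
from collections import deque

def k_hop_neighbors(adj, source, max_hops):
    """Group nodes by their exact hop distance from `source` using BFS.

    adj: dict mapping node -> iterable of neighbor nodes.
    Returns {hop: set of nodes first reached at that hop}.
    """
    seen = {source}
    frontier = deque([source])
    hops = {}
    for hop in range(1, max_hops + 1):
        next_frontier = deque()
        for node in frontier:
            for nbr in adj.get(node, ()):
                if nbr not in seen:
                    seen.add(nbr)
                    next_frontier.append(nbr)
        if not next_frontier:
            break
        hops[hop] = set(next_frontier)
        frontier = next_frontier
    return hops

# A small example graph with edges 0-1, 0-2, 1-3, 3-4
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1, 4], 4: [3]}
print(k_hop_neighbors(adj, 0, 3))  # {1: {1, 2}, 2: {3}, 3: {4}}
```

CorGIE's layout then positions each hop's bucket as a band and shows clustering structure within it; this sketch covers only the hop computation.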


Subject(s)
Computer Graphics , Neural Networks, Computer , Cluster Analysis , Machine Learning , Software
4.
IEEE Trans Vis Comput Graph ; 28(1): 747-757, 2022 01.
Article in English | MEDLINE | ID: mdl-34596545

ABSTRACT

Visualization collections, accessed by platforms such as Tableau Online or Power BI, are used by millions of people to share and access diverse analytical knowledge in the form of interactive visualization bundles. Result snippets, compact previews of these bundles, are presented to users to help them identify relevant content when browsing collections. Our engagement with Tableau product teams and review of existing snippet designs on five platforms showed us that current practices fail to help people judge the relevance of bundles because they include only the title and one image. Users frequently need to undertake the time-consuming endeavour of opening a bundle within its visualization system to examine its many views and dashboards. In response, we contribute the first systematic approach to visualization snippet design. We propose a framework for snippet design that addresses eight key challenges that we identify. We present a computational pipeline to compress the visual and textual content of bundles into representative previews that is adaptive to a provided pixel budget and provides high information density with multiple images and carefully chosen keywords. We also reflect on the method of visual inspection through random sampling to gain confidence in model and parameter choices.

5.
IEEE Trans Vis Comput Graph ; 28(12): 4855-4872, 2022 12.
Article in English | MEDLINE | ID: mdl-34449391

ABSTRACT

Genomic Epidemiology (genEpi) is a branch of public health that uses many different data types including tabular, network, genomic, and geographic, to identify and contain outbreaks of deadly diseases. Due to the volume and variety of data, it is challenging for genEpi domain experts to conduct data reconnaissance; that is, have an overview of the data they have and make assessments toward its quality, completeness, and suitability. We present an algorithm for data reconnaissance through automatic visualization recommendation, GEViTRec. Our approach handles a broad variety of dataset types and automatically generates visually coherent combinations of charts, in contrast to existing systems that primarily focus on singleton visual encodings of tabular datasets. We automatically detect linkages across multiple input datasets by analyzing non-numeric attribute fields, creating a data source graph within which we analyze and rank paths. For each high-ranking path, we specify chart combinations with positional and color alignments between shared fields, using a gradual binding approach to transform initial partial specifications of singleton charts to complete specifications that are aligned and oriented consistently. A novel aspect of our approach is its combination of domain-agnostic elements with domain-specific information that is captured through a domain-specific visualization prevalence design space. Our implementation is applied to both synthetic data and real Ebola outbreak data. We compare GEViTRec's output to what previous visualization recommendation systems would generate, and to manually crafted visualizations used by practitioners. We conducted formative evaluations with ten genEpi experts to assess the relevance and interpretability of our results. Code, Data, and Study Materials Availability: https://github.com/amcrisan/GEVitRec.
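The linkage-detection step described above — finding shared non-numeric attribute fields across input datasets to build a data source graph — can be sketched in a few lines. The column-dict table representation, function names, and toy epidemiology tables below are illustrative assumptions, not GEViTRec's implementation:

```python
def non_numeric_fields(table):
    """Return the column names whose values are not all numeric.

    table: dict mapping column name -> list of values.
    """
    fields = set()
    for col, values in table.items():
        if not all(isinstance(v, (int, float)) for v in values):
            fields.add(col)
    return fields

def linkage_graph(datasets):
    """Build edges between datasets that share a non-numeric field
    with at least one overlapping value (a candidate linking key)."""
    edges = {}
    names = list(datasets)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = non_numeric_fields(datasets[a]) & non_numeric_fields(datasets[b])
            keys = {f for f in shared
                    if set(datasets[a][f]) & set(datasets[b][f])}
            if keys:
                edges[(a, b)] = keys
    return edges

cases = {"case_id": ["c1", "c2"], "age": [34, 51]}
genomes = {"case_id": ["c1", "c3"], "snp_count": [12, 7]}
print(linkage_graph({"cases": cases, "genomes": genomes}))
# {('cases', 'genomes'): {'case_id'}}
```

In the full approach, paths through this graph are ranked and each high-ranking path drives a combination of aligned charts; this sketch covers only the edge-detection step.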


Subject(s)
Computer Graphics , Software , Prevalence , Genomics , Genome
6.
IEEE Trans Vis Comput Graph ; 27(2): 957-966, 2021 02.
Article in English | MEDLINE | ID: mdl-33074823

ABSTRACT

For the many journalists who use data and computation to report the news, data wrangling is an integral part of their work. Despite an abundance of literature on data wrangling in the context of enterprise data analysis, little is known about the specific operations, processes, and pain points journalists encounter while performing this tedious, time-consuming task. To better understand the needs of this user group, we conduct a technical observation study of 50 public repositories of data and analysis code authored by 33 professional journalists at 26 news organizations. We develop two detailed and cross-cutting taxonomies of data wrangling in computational journalism, for actions and for processes. We observe the extensive use of multiple tables, a notable gap in previous wrangling analyses. We develop a concise, actionable framework for general multi-table data wrangling that includes wrangling operations documented in our taxonomy that are without clear parallels in other work. This framework, the first to incorporate tables as first-class objects, will support future interactive wrangling tools for both computational journalism and general-purpose use. We assess the generative and descriptive power of our framework through discussion of its relationship to our set of taxonomies.

7.
IEEE Trans Vis Comput Graph ; 27(2): 495-505, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33048709

ABSTRACT

Cloud-based visualization services have made visual analytics accessible to a much wider audience than ever before. Systems such as Tableau have started to amass increasingly large repositories of analytical knowledge in the form of interactive visualization workbooks. When shared, these collections can form a visual analytic knowledge base. However, as the size of a collection increases, so does the difficulty in finding relevant information. Content-based recommendation (CBR) systems could help analysts in finding and managing workbooks relevant to their interests. Toward this goal, we focus on text-based content that is representative of the subject matter of visualizations rather than the visual encodings and style. We discuss the challenges associated with creating a CBR based on visualization specifications and explore more concretely how to implement the relevance measures required using Tableau workbook specifications as the source of content data. We also demonstrate what information can be extracted from these visualization specifications and how various natural language processing techniques can be used to compute similarity between workbooks as one way to measure relevance. We report on a crowd-sourced user study to determine if our similarity measure mimics human judgement. Finally, we choose latent Dirichlet allocation (LDA) as a specific model and instantiate it in a proof-of-concept recommender tool to demonstrate the basic function of our similarity measure.
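The core relevance computation — comparing the text extracted from two workbook specifications — can be illustrated without the LDA model itself (which would require a library such as scikit-learn or gensim). As a simplified, dependency-free stand-in, the sketch below computes cosine similarity over bag-of-words term counts; the workbook strings and function names are invented for illustration:

```python
import math
import re
from collections import Counter

def term_vector(text):
    """Lowercase bag-of-words counts for a workbook's extracted text."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine_similarity(a, b):
    """Cosine of the angle between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

wb1 = term_vector("Quarterly sales by region, sales growth dashboard")
wb2 = term_vector("Regional sales dashboard with yearly growth")
wb3 = term_vector("Patient wait times in the emergency department")
print(cosine_similarity(wb1, wb2) > cosine_similarity(wb1, wb3))  # True
```

An LDA-based measure replaces the raw term vectors with per-workbook topic distributions before comparing them, which makes the similarity robust to vocabulary differences between workbooks on the same subject.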

8.
IEEE Trans Vis Comput Graph ; 26(6): 2180-2191, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32012018

ABSTRACT

Graph drawing readability metrics are routinely used to assess and create node-link layouts of network data. Existing readability metrics fall short in three ways. The many count-based metrics such as edge-edge or node-edge crossings simply provide integer counts, missing the opportunity to quantify the amount of overlap between items, which may vary in size, at a more fine-grained level. Current metrics focus solely on single-level topological structure, ignoring the possibility of multi-level structure such as large and thus highly salient metanodes. Most current metrics focus on the measurement of clutter in the form of crossings and overlaps, and do not take into account the trade-off between the clutter and the information sparsity of the drawing, which we refer to as sprawl. We propose an area-aware approach to clutter metrics that tracks the extent of geometric overlaps between node-node, node-edge, and edge-edge pairs in detail. It handles variable-size nodes and explicitly treats metanodes and leaf nodes uniformly. We call the combination of a sprawl metric and an area-aware clutter metric a sprawlter metric. We present an instantiation of the sprawlter metrics featuring a formal and thorough discussion of the crucial component, the penalty mapping function. We implement and validate our proposed metrics with extensive computational analysis of graph layouts, considering four layout algorithms and 56 layouts encompassing both real-world data and synthetic examples illustrating specific configurations of interest.
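The area-aware idea — quantifying how much two layout items overlap rather than just counting crossings — can be sketched with axis-aligned rectangles. This toy score omits the paper's crucial penalty mapping function and its sprawl component; the rectangle representation and function names are illustrative assumptions:

```python
def overlap_area(a, b):
    """Overlap area of two axis-aligned rectangles (x0, y0, x1, y1)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return w * h if w > 0 and h > 0 else 0.0

def clutter_score(nodes):
    """Sum pairwise overlap, each normalized by the smaller node's area,
    so fully containing a node contributes 1.0 regardless of node size."""
    total = 0.0
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            a, b = nodes[i], nodes[j]
            smaller = min((a[2] - a[0]) * (a[3] - a[1]),
                          (b[2] - b[0]) * (b[3] - b[1]))
            if smaller > 0:
                total += overlap_area(a, b) / smaller
    return total

# Two unit squares overlapping by a 0.5 x 1.0 strip, plus one disjoint square
nodes = [(0, 0, 1, 1), (0.5, 0, 1.5, 1), (3, 3, 4, 4)]
print(clutter_score(nodes))  # 0.5
```

Normalizing by the smaller item's area is one way to treat variable-size nodes and large metanodes uniformly, as a partial overlap of a tiny leaf node is perceptually more severe than the same absolute overlap of a huge metanode.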

9.
IEEE Trans Vis Comput Graph ; 26(9): 2732-2747, 2020 09.
Article in English | MEDLINE | ID: mdl-30736000

ABSTRACT

We address the visual comparison of multiple phylogenetic trees that arises in evolutionary biology, specifically between one reference tree and a collection of dozens to hundreds of other trees. We abstract the domain questions of phylogenetic tree comparison as tasks to look for supporting or conflicting evidence for hypotheses that requires inspection of both topological structure and attribute values at different levels of detail in the tree collection. We introduce the new visual encoding idiom of aggregated dendrograms to concisely summarize the topological relationships between interactively chosen focal subtrees according to biologically meaningful criteria, and provide a layout algorithm that automatically adapts to the available screen space. We design and implement the ADView system, which represents trees at multiple levels of detail across multiple views: the entire collection, a subset of trees, an individual tree, specific subtrees of interest, and the individual branch level. We benchmark the algorithms developed for ADView, compare its information density to previous work, and demonstrate its utility for quickly gathering evidence about biological hypotheses through usage scenarios with data from recently published phylogenetic analysis and case studies of expert use with real-world data, drawn from a summative interview study.


Subject(s)
Cluster Analysis , Computer Graphics , Phylogeny , Humans , Software
10.
Bioinformatics ; 35(6): 1070-1072, 2019 03 15.
Article in English | MEDLINE | ID: mdl-30875428

ABSTRACT

SUMMARY: Adjutant is an open-source, interactive and R-based application to support mining PubMed for literature reviews. Given a PubMed-compatible search query, Adjutant downloads the relevant articles and allows the user to perform an unsupervised clustering analysis to identify data-driven topic clusters. Following clustering, users can also sample documents using different strategies to obtain a more manageable dataset for further analysis. Adjutant makes explicit trade-offs between speed and accuracy, which are modifiable by the user, such that a complete analysis of several thousand documents can take a few minutes. All analytic datasets generated by Adjutant are saved, allowing users to easily conduct other downstream analyses that Adjutant does not explicitly support. AVAILABILITY AND IMPLEMENTATION: Adjutant is implemented in R, using Shiny, and is available at https://github.com/amcrisan/Adjutant. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


Subject(s)
Software , Cluster Analysis , PubMed
11.
Bioinformatics ; 35(10): 1668-1676, 2019 05 15.
Article in English | MEDLINE | ID: mdl-30256887

ABSTRACT

MOTIVATION: Data visualization is an important tool for exploring and communicating findings from genomic and healthcare datasets. Yet, without a systematic way of organizing and describing the design space of data visualizations, researchers may not be aware of the breadth of possible visualization design choices or how to distinguish between good and bad options. RESULTS: We have developed a method that systematically surveys data visualizations using the analysis of both text and images. Our method supports the construction of a visualization design space that is explorable along two axes: why the visualization was created and how it was constructed. We applied our method to a corpus of scientific research articles from infectious disease genomic epidemiology and derived a Genomic Epidemiology Visualization Typology (GEViT) that describes how visualizations were created from a series of chart types, combinations and enhancements. We have also implemented an online gallery that allows others to explore our resulting design space of visualizations. Our results have important implications for visualization design and for researchers intending to develop or use data visualization tools. Finally, the method that we introduce is extensible to constructing visualizations design spaces across other research areas. AVAILABILITY AND IMPLEMENTATION: Our browsable gallery is available at http://gevit.net and all project code can be found at https://github.com/amcrisan/gevitAnalysisRelease. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


Subject(s)
Data Visualization , Software , Genome , Genomics , Surveys and Questionnaires
12.
Article in English | MEDLINE | ID: mdl-29541635

ABSTRACT

Time-lapse imaging of cell colonies in microfluidic chambers provides time series of bioimages, i.e., biomovies. They show the behavior of cells over time under controlled conditions. One of the main remaining bottlenecks in this area of research is the analysis of experimental data and the extraction of cell growth characteristics, such as lineage information. The extraction of the cell line by human observers is time-consuming and error-prone. Previously proposed methods often fail because of their reliance on the accurate detection of a single cell, which is not possible for high density, high diversity of cell shapes and numbers, and high-resolution images with high noise. Our task is to characterize subpopulations in biomovies. In order to shift the analysis of the data from the individual cell level to cellular groups with similar fluorescence, or even subpopulations, we propose to represent the cells by two new abstractions: the particle and the patch. We use a three-step framework: preprocessing, particle tracking, and construction of the patch lineage. First, preprocessing improves the signal-to-noise ratio and spatially aligns the biomovie frames. Second, cell sampling is performed by assuming particles, which represent a part of a cell, a cell, or a group of contiguous cells in space. Particle analysis includes particle tracking, trajectory linking, filtering, and color information. Particle tracking consists of following the spatiotemporal position of a particle and gives rise to coherent particle trajectories over time. Typical tracking problems may occur (e.g., appearance or disappearance of cells, spurious artifacts). They are effectively processed using trajectory linking and filtering. Third, the construction of the patch lineage consists of joining particle trajectories that share common attributes (i.e., proximity and fluorescence intensity) and feature common ancestry. This step is based on patch finding, patching trajectory propagation, patch splitting, and patch merging. The main idea is to group together the trajectories of particles in order to gain spatial coherence. The final result of CYCASP is the complete graph of the patch lineage. Finally, the graph encodes the temporal and spatial coherence of the development of cellular colonies. We present results showing a computation time of less than 5 min for biomovies and simulated films. The method presented here allowed for the separation of colonies into subpopulations and allowed us to interpret the growth of colonies in a timely manner.
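The particle-tracking step — linking each particle's position in one frame to its nearest plausible match in the next — can be sketched as greedy nearest-neighbor assignment with a distance cutoff. This is a generic simplification of frame-to-frame linking, not the CYCASP algorithm; the names and toy positions are illustrative:

```python
import math

def link_particles(frame_a, frame_b, max_dist):
    """Greedily link particle positions between two consecutive frames,
    matching closest pairs first; particles left unmatched correspond to
    appearances or disappearances.

    frame_a, frame_b: lists of (x, y) positions.
    Returns a sorted list of (index_in_a, index_in_b) links.
    """
    candidates = []
    for i, p in enumerate(frame_a):
        for j, q in enumerate(frame_b):
            d = math.dist(p, q)
            if d <= max_dist:
                candidates.append((d, i, j))
    candidates.sort()  # closest pairs claim their match first
    links, used_a, used_b = [], set(), set()
    for d, i, j in candidates:
        if i not in used_a and j not in used_b:
            links.append((i, j))
            used_a.add(i)
            used_b.add(j)
    return sorted(links)

frame1 = [(0.0, 0.0), (5.0, 5.0)]
frame2 = [(0.4, 0.1), (5.2, 4.9), (9.0, 9.0)]
print(link_particles(frame1, frame2, max_dist=1.0))  # [(0, 0), (1, 1)]
```

Chaining such links across all frames yields the coherent particle trajectories that the subsequent patch-lineage construction joins by proximity and fluorescence intensity.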

13.
PeerJ ; 6: e4218, 2018.
Article in English | MEDLINE | ID: mdl-29340235

ABSTRACT

BACKGROUND: Microbial genome sequencing is now being routinely used in many clinical and public health laboratories. Understanding how to report complex genomic test results to stakeholders who may have varying familiarity with genomics (including clinicians, laboratorians, epidemiologists, and researchers) is critical to the successful and sustainable implementation of this new technology; however, there are no evidence-based guidelines for designing such a report in the pathogen genomics domain. Here, we describe an iterative, human-centered approach to creating a report template for communicating tuberculosis (TB) genomic test results. METHODS: We used Design Study Methodology, a human-centered approach drawn from the information visualization domain, to redesign an existing clinical report. We used expert consults and an online questionnaire to discover various stakeholders' needs around the types of data and tasks related to TB that they encounter in their daily workflow. We also evaluated their perceptions of and familiarity with genomic data, as well as its utility at various clinical decision points. These data shaped the design of multiple prototype reports that were compared against the existing report through a second online survey, with the resulting qualitative and quantitative data informing the final, redesigned, report. RESULTS: We recruited 78 participants, 65 of whom were clinicians, nurses, laboratorians, researchers, and epidemiologists involved in TB diagnosis, treatment, and/or surveillance. Our first survey indicated that participants were largely enthusiastic about genomic data, with the majority agreeing on its utility for certain TB diagnosis and treatment tasks and many reporting some confidence in their ability to interpret this type of data (between 58.8% and 94.1%, depending on the specific data type).
When we compared our four prototype reports against the existing design, we found that for the majority (86.7%) of design comparisons, participants preferred the alternative prototype designs over the existing version, and that both clinicians and non-clinicians expressed similar design preferences. Participants showed clearer design preferences when asked to compare individual design elements versus entire reports. Both the quantitative and qualitative data informed the design of a revised report, available online as a LaTeX template. CONCLUSIONS: We show how a human-centered design approach integrating quantitative and qualitative feedback can be used to design an alternative report for representing complex microbial genomic data. We suggest experimental and design guidelines to inform future design studies in the bioinformatics and microbial genomics domains, and suggest that this type of mixed-methods study is important to facilitate the successful translation of pathogen genomics in the clinic, not only for clinical reports but also more complex bioinformatics data visualization software.

14.
IEEE Trans Vis Comput Graph ; 24(1): 435-445, 2018 01.
Article in English | MEDLINE | ID: mdl-28880179

ABSTRACT

Visualization researchers and practitioners engaged in generating or evaluating designs are faced with the difficult problem of transforming the questions asked and actions taken by target users from domain-specific language and context into more abstract forms. Existing abstract task classifications aim to provide support for this endeavour by providing a carefully delineated suite of actions. Our experience is that this bottom-up approach is part of the challenge: low-level actions are difficult to interpret without a higher-level context of analysis goals and the analysis process. To bridge this gap, we propose a framework based on analysis reports derived from open-coding 20 design study papers published at IEEE InfoVis 2009-2015, to build on the previous work of abstractions that collectively encompass a broad variety of domains. The framework is organized in two axes illustrated by nine analysis goals. It helps situate the analysis goals by placing each goal under axes of specificity (Explore, Describe, Explain, Confirm) and number of data populations (Single, Multiple). The single-population types are Discover Observation, Describe Observation, Identify Main Cause, and Collect Evidence. The multiple-population types are Compare Entities, Explain Differences, and Evaluate Hypothesis. Each analysis goal is scoped by an input and an output and is characterized by analysis steps reported in the design study papers. We provide examples of how we and others have used the framework in a top-down approach to abstracting domain problems: visualization designers or researchers first identify the analysis goals of each unit of analysis in an analysis stream, and then encode the individual steps using existing task classifications with the context of the goal, the level of specificity, and the number of populations involved in the analysis.

15.
IEEE Trans Vis Comput Graph ; 23(9): 2151-2164, 2017 09.
Article in English | MEDLINE | ID: mdl-28113509

ABSTRACT

There are many ways to visualize event sequences as timelines. In a storytelling context where the intent is to convey multiple narrative points, a richer set of timeline designs may be more appropriate than the narrow range that has been used for exploratory data analysis by the research community. Informed by a survey of 263 timelines, we present a design space for storytelling with timelines that balances expressiveness and effectiveness, identifying 14 design choices characterized by three dimensions: representation, scale, and layout. Twenty combinations of these choices are viable timeline designs that can be matched to different narrative points, while smooth animated transitions between narrative points allow for the presentation of a cohesive story, an important aspect of both interactive storytelling and data videos. We further validate this design space by realizing the full set of viable timeline designs and transitions in a proof-of-concept sandbox implementation that we used to produce seven example timeline stories. Ultimately, this work is intended to inform and inspire the design of future tools for storytelling with timelines.

16.
IEEE Trans Vis Comput Graph ; 22(1): 300-9, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26529709

ABSTRACT

We present TimeLineCurator, a browser-based authoring tool that automatically extracts event data from temporal references in unstructured text documents using natural language processing and encodes them along a visual timeline. Our goal is to facilitate the timeline creation process for journalists and others who tell temporal stories online. Current solutions involve manually extracting and formatting event data from source documents, a process that tends to be tedious and error-prone. With TimeLineCurator, a prospective timeline author can quickly identify the extent of time encompassed by a document, as well as the distribution of events occurring along this timeline. Authors can speculatively browse possible documents to quickly determine whether they are appropriate sources of timeline material. TimeLineCurator provides controls for curating and editing events on a timeline, the ability to combine timelines from multiple source documents, and the ability to export curated timelines for online deployment. We evaluate TimeLineCurator through a benchmark comparison of entity extraction error against a manual timeline curation process, a preliminary evaluation of the user experience of timeline authoring, a brief qualitative analysis of its visual output, and a discussion of prospective use cases suggested by members of the target author communities following its deployment.
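The extraction step — turning temporal references in free text into candidate timeline events — can be sketched with a simple year-matching pass. TimeLineCurator itself uses full NLP date parsing (handling relative and partial dates); the regex, function name, and example text below are a deliberately minimal illustration:

```python
import re

def extract_year_events(text):
    """Pull (year, sentence) pairs from unstructured text as candidate
    timeline events, sorted chronologically. Matches four-digit years
    from 1800-2099 only; a real extractor parses full date expressions."""
    events = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        for year in re.findall(r"\b(1[89]\d{2}|20\d{2})\b", sentence):
            events.append((int(year), sentence))
    return sorted(events)

doc = ("The company was founded in 1998. It went public in 2004, "
       "and by 2010 it had expanded overseas.")
for year, sentence in extract_year_events(doc):
    print(year, "->", sentence)
```

Each extracted event would then be placed on the timeline for the author to curate, edit, or discard, which is where the tool's interactive controls take over from the automatic pass.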


Subject(s)
Computer Graphics , Data Mining/methods , User-Computer Interface , Humans , Journalism/classification , Time Factors
17.
IEEE Trans Vis Comput Graph ; 22(1): 449-58, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26336126

ABSTRACT

The energy performance of large building portfolios is challenging to analyze and monitor, as current analysis tools are not scalable or they present derived and aggregated data at too coarse of a level. We conducted a visualization design study, beginning with a thorough work domain analysis and a characterization of data and task abstractions. We describe generalizable visual encoding design choices for time-oriented data framed in terms of matches and mismatches, as well as considerations for workflow design. Our designs address several research questions pertaining to scalability, view coordination, and the inappropriateness of line charts for derived and aggregated data due to a combination of data semantics and domain convention. We also present guidelines relating to familiarity and trust, as well as methodological considerations for visualization design studies. Our designs were adopted by our collaborators and incorporated into the design of an energy analysis software application that will be deployed to tens of thousands of energy workers in their client base.


Subject(s)
Computer Graphics , Energy Transfer , Research Design , Software , Construction Materials , Workflow
18.
IEEE Trans Vis Comput Graph ; 20(12): 2271-80, 2014 Dec.
Article in English | MEDLINE | ID: mdl-26356941

ABSTRACT

For an investigative journalist, a large collection of documents obtained from a Freedom of Information Act request or a leak is both a blessing and a curse: such material may contain multiple newsworthy stories, but it can be difficult and time-consuming to find relevant documents. Standard text search is useful, but even if the search target is known it may not be possible to formulate an effective query. In addition, summarization is an important non-search task. We present Overview, an application for the systematic analysis of large document collections based on document clustering, visualization, and tagging. This work contributes to the small set of design studies which evaluate a visualization system "in the wild", and we report on six case studies where Overview was voluntarily used by self-initiated journalists to produce published stories. We find that the frequently used language of "exploring" a document collection is both too vague and too narrow to capture how journalists actually used our application. Our iterative process, including multiple rounds of deployment and observations of real world usage, led to a much more specific characterization of tasks. We analyze and justify the visual encoding and interaction techniques used in Overview's design with respect to our final task abstractions, and propose generalizable lessons for visualization design methodology.


Subject(s)
Computer Graphics , Data Mining/methods , Journalism , Cluster Analysis , Humans , Models, Theoretical
19.
IEEE Trans Vis Comput Graph ; 19(12): 2376-85, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24051804

ABSTRACT

The considerable previous work characterizing visualization usage has focused on low-level tasks or interactions and high-level tasks, leaving a gap between them that is not addressed. This gap leads to a lack of distinction between the ends and means of a task, limiting the potential for rigorous analysis. We contribute a multi-level typology of visualization tasks to address this gap, distinguishing why and how a visualization task is performed, as well as what the task inputs and outputs are. Our typology allows complex tasks to be expressed as sequences of interdependent simpler tasks, resulting in concise and flexible descriptions for tasks of varying complexity and scope. It provides abstract rather than domain-specific descriptions of tasks, so that useful comparisons can be made between visualization systems targeted at different application domains. This descriptive power supports a level of analysis required for the generation of new designs, by guiding the translation of domain-specific problems into abstract tasks, and for the qualitative evaluation of visualization usage. We demonstrate the benefits of our approach in a detailed case study, comparing task descriptions from our typology to those derived from related work. We also discuss the similarities and differences between our typology and over two dozen extant classification systems and theoretical frameworks from the literatures of visualization, human-computer interaction, information retrieval, communications, and cartography.


Subject(s)
Algorithms , Artificial Intelligence , Task Performance and Analysis , User-Computer Interface , Visual Perception/physiology , Computer Simulation , Humans , Models, Theoretical , Reproducibility of Results , Sensitivity and Specificity
20.
IEEE Trans Vis Comput Graph ; 19(12): 2546-55, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24051821

ABSTRACT

Scientists use DNA sequence differences between an individual's genome and a standard reference genome to study the genetic basis of disease. Such differences are called sequence variants, and determining their impact in the cell is difficult because it requires reasoning about both the type and location of the variant across several levels of biological context. In this design study, we worked with four analysts to design a visualization tool supporting variant impact assessment for three different tasks. We contribute data and task abstractions for the problem of variant impact assessment, and the carefully justified design and implementation of the Variant View tool. Variant View features an information-dense visual encoding that provides maximal information at the overview level, in contrast to the extensive navigation required by currently-prevalent genome browsers. We provide initial evidence that the tool simplified and accelerated workflows for these three tasks through three case studies. Finally, we reflect on the lessons learned in creating and refining data and task abstractions that allow for concise overviews of sprawling information spaces that can reduce or remove the need for the memory-intensive use of navigation.


Subject(s)
Algorithms , Chromosome Mapping/methods , Computer Graphics , DNA Mutational Analysis/methods , DNA/genetics , Sequence Analysis, DNA/methods , User-Computer Interface , Animals , Base Sequence , Humans , Information Storage and Retrieval/methods , Molecular Sequence Data , Sequence Alignment/methods