Results 1 - 20 of 48
1.
Article in English | MEDLINE | ID: mdl-37922174

ABSTRACT

Visual clustering is a common perceptual task in scatterplots that supports diverse analytics tasks (e.g., cluster identification). However, even with the same scatterplot, the ways of perceiving clusters (i.e., conducting visual clustering) can differ due to the differences among individuals and ambiguous cluster boundaries. Although such perceptual variability casts doubt on the reliability of data analysis based on visual clustering, we lack a systematic way to efficiently assess this variability. In this research, we study perceptual variability in conducting visual clustering, which we call Cluster Ambiguity. To this end, we introduce CLAMS, a data-driven visual quality measure for automatically predicting cluster ambiguity in monochrome scatterplots. We first conduct a qualitative study to identify key factors that affect the visual separation of clusters (e.g., proximity or size difference between clusters). Based on study findings, we deploy a regression module that estimates the human-judged separability of two clusters. Then, CLAMS predicts cluster ambiguity by analyzing the aggregated results of all pairwise separability between clusters that are generated by the module. CLAMS outperforms widely-used clustering techniques in predicting ground truth cluster ambiguity. Meanwhile, CLAMS exhibits performance on par with human annotators. We conclude our work by presenting two applications for optimizing and benchmarking data mining techniques using CLAMS. The interactive demo of CLAMS is available at clusterambiguity.dev.
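The aggregation step the abstract describes (combining all pairwise cluster separability scores into one ambiguity value) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name and the simple mean-based aggregation are assumptions, and the pairwise scores would come from the paper's learned regression module.

```python
from itertools import combinations

def cluster_ambiguity(separability, n_clusters):
    """Aggregate pairwise separability scores into one ambiguity score.

    separability maps each pair (i, j), i < j, to a score in [0, 1]
    (1 = the two clusters are clearly separated). Returns the mean
    inseparability (1 - separability) over all pairs: 0 means every
    pair separates cleanly, 1 means no pair does.
    """
    pairs = list(combinations(range(n_clusters), 2))
    if not pairs:
        return 0.0
    return sum(1.0 - separability[p] for p in pairs) / len(pairs)
```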

2.
Article in English | MEDLINE | ID: mdl-37922177

ABSTRACT

A common way to evaluate the reliability of dimensionality reduction (DR) embeddings is to quantify how well labeled classes form compact, mutually separated clusters in the embeddings. This approach is based on the assumption that the classes stay as clear clusters in the original high-dimensional space. However, in reality, this assumption can be violated; a single class can be fragmented into multiple separated clusters, and multiple classes can be merged into a single cluster. We thus cannot always assure the credibility of the evaluation using class labels. In this paper, we introduce two novel quality measures, Label-Trustworthiness and Label-Continuity (Label-T&C), advancing the process of DR evaluation based on class labels. Instead of assuming that classes are well-clustered in the original space, Label-T&C work by (1) estimating the extent to which classes form clusters in the original and embedded spaces and (2) evaluating the difference between the two. A quantitative evaluation showed that Label-T&C outperform widely used DR evaluation measures (e.g., Trustworthiness and Continuity, Kullback-Leibler divergence) in terms of accuracy in assessing how well DR embeddings preserve cluster structure, and are also scalable. Moreover, we present case studies demonstrating that Label-T&C can be successfully used to reveal the intrinsic characteristics of DR techniques and their hyperparameters.
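The core move of Label-T&C (score how well classes cluster in each space, then compare the two scores instead of trusting the labels) can be illustrated with a toy silhouette-style score. This sketch is not the published measure; both function names and the scoring formula are assumptions for illustration only.

```python
import math

def class_clusteredness(points, labels):
    """Silhouette-style score in [-1, 1]: for each point, compare its
    mean same-class distance (a) with its nearest other-class distance
    (b); higher means classes form tighter, better-separated clusters."""
    scores = []
    for i, p in enumerate(points):
        same = [math.dist(p, q) for j, q in enumerate(points)
                if j != i and labels[j] == labels[i]]
        other = [math.dist(p, q) for j, q in enumerate(points)
                 if labels[j] != labels[i]]
        if not same or not other:
            continue
        a, b = sum(same) / len(same), min(other)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

def label_score_gap(high_points, low_points, labels):
    """A small gap means the embedding preserves however (un)clustered
    the classes already were in the original space."""
    return abs(class_clusteredness(high_points, labels)
               - class_clusteredness(low_points, labels))
```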

3.
Article in English | MEDLINE | ID: mdl-38019635

ABSTRACT

Partitioning a dynamic network into subsets (i.e., snapshots) based on disjoint time intervals is a widely used technique for understanding how structural patterns of the network evolve. However, selecting an appropriate time window (i.e., slicing a dynamic network into snapshots) is challenging and time-consuming, often involving a trial-and-error approach to investigating underlying structural patterns. To address this challenge, we present MoNetExplorer, a novel interactive visual analytics system that leverages temporal network motifs to provide recommendations for window sizes and support users in visually comparing different slicing results. MoNetExplorer provides a comprehensive analysis based on window size, including (1) a temporal overview to identify the structural information, (2) temporal network motif composition, and (3) node-link-diagram-based details to enable users to identify and understand structural patterns at various temporal resolutions. To demonstrate the effectiveness of our system, we conducted a case study with network researchers using two real-world dynamic network datasets. Our case studies show that the system effectively supports users in gaining valuable insights into the temporal and structural aspects of dynamic networks.

4.
Interact J Med Res ; 12: e37604, 2023 Sep 12.
Article in English | MEDLINE | ID: mdl-37698913

ABSTRACT

BACKGROUND: Insufficient physical activity due to social distancing and suppressed outdoor activities increases vulnerability to diseases such as cardiovascular disease, sarcopenia, and severe COVID-19. While bodyweight exercises, such as squats, effectively boost physical activity, incorrect postures risk abnormal muscle activation and joint strain, leading to ineffective sessions or even injuries. Avoiding incorrect postures is challenging for novices without expert guidance. Existing solutions for remote coaching and computer-assisted posture correction often prove costly or inefficient. OBJECTIVE: This study aimed to use deep neural networks to develop a personal workout assistant that offers feedback on squat postures using only mobile devices (smartphones and tablets). Deep learning mimicked experts' visual assessments of proper exercise postures. The effectiveness of the mobile app was evaluated by comparing it with exercise videos, a popular at-home workout choice. METHODS: Twenty participants without squat exercise experience were recruited and randomly assigned to an experimental group (EXP; 10 individuals aged 21.90 (SD 2.18) years, mean BMI 20.75 (SD 2.11)) or a control group (CTL; 10 individuals aged 22.60 (SD 1.95) years, mean BMI 18.72 (SD 1.23)). A data set with over 20,000 squat videos annotated by experts was created, and a deep learning model was trained using pose estimation and video classification to analyze workout postures. Subsequently, a mobile workout assistant app, Home Alone Exercise, was developed, and a 2-week interventional study, in which the EXP used the app while the CTL only followed workout videos, examined how the app helps people improve their squat exercise.
RESULTS: The EXP significantly improved their squat postures evaluated by the app after 2 weeks (Pre: 0.20 vs Mid: 4.20 vs Post: 8.00, P=.001), whereas the CTL (without the app) showed no significant change in squat posture (Pre: 0.70 vs Mid: 1.30 vs Post: 3.80, P=.13). Significant differences were observed in the left (Pre: 75.06 vs Mid: 76.24 vs Post: 63.13, P=.02) and right (Pre: 71.99 vs Mid: 76.68 vs Post: 62.82, P=.03) knee joint angles in the EXP before and after exercise, with no significant effect found for the CTL in the left (Pre: 73.27 vs Mid: 74.05 vs Post: 70.70, P=.68) and right (Pre: 70.82 vs Mid: 74.02 vs Post: 70.23, P=.61) knee joint angles. CONCLUSIONS: EXP participants trained with the app experienced faster improvement and learned more nuanced details of the squat exercise. The proposed mobile app, offering cost-effective self-discovery feedback, effectively taught users about squat exercises without expensive in-person trainer sessions. TRIAL REGISTRATION: Clinical Research Information Service KCT0008178 (retrospectively registered); https://cris.nih.go.kr/cris/search/detailSearch.do/24006.

5.
PLoS One ; 18(2): e0281422, 2023.
Article in English | MEDLINE | ID: mdl-36758038

ABSTRACT

PubMed is the most extensively used database and search engine in the biomedical and healthcare fields. However, users can experience difficulty in locating their target papers among massive numbers of search results, especially in unfamiliar fields. We therefore developed a novel user interface for PubMed and conducted a three-step study: step A, a preliminary user survey with 76 medical experts regarding the current usability of PubMed for biomedical literature search tasks; step B, implementation of EEEvis, a novel interactive visual analytics system for the search task; and step C, a randomized user study comparing PubMed and EEEvis. First, we conducted a Google survey of 76 medical experts regarding the unmet needs of PubMed and the user requirements for a novel search interface. Based on the preliminary survey data, we implemented EEEvis, a novel interactive visual analytics system for biomedical literature search. EEEvis provides enhanced literature data analysis functions, including (1) an overview of bibliographic features such as publication date, citation count, and impact factor, (2) an overview of the co-authorship network, and (3) interactive sorting, filtering, and highlighting. In the randomized user study of 24 medical experts, the search speed of EEEvis was not inferior to that of PubMed in the time to reach the first article (median difference 3 sec, 95% CI -2.1 to 8.5, P = 0.535) or in the search completion time (median difference 8 sec, 95% CI -4.7 to 19.1, P = 0.771). However, 22 participants (91.7%) responded that they would use EEEvis as their first choice for a biomedical literature search task, and 21 participants (87.5%) cited the bibliographic sorting and filtering functionalities of EEEvis as a major advantage. EEEvis could be a supplementary interface for PubMed that enhances the user experience in the search for biomedical literature.


Subject(s)
Search Engine , Humans , MEDLINE , PubMed , Databases, Factual
6.
J Med Internet Res ; 25: e43634, 2023 02 24.
Article in English | MEDLINE | ID: mdl-36826976

ABSTRACT

BACKGROUND: Maternal-fetal attachment (MFA) has been reported to be associated with the postpartum mother-infant relationship. Seeing the fetus through ultrasound might influence MFA, and the effect could be increased by more realistic images, such as those generated in virtual reality (VR). OBJECTIVE: The aim was to determine the effect of fetal images generated in VR on MFA and depressive symptoms through a prenatal-coaching mobile app. METHODS: This 2-arm parallel randomized controlled trial involved a total of 80 pregnant women. Eligible women were randomly assigned to either a mobile app-only group (n=40) or an app plus VR group (n=40). The VR group experienced their own baby's images generated in VR based on images obtained from fetal ultrasonography. The prenatal-coaching mobile app recommended health behavior for the pregnant women according to gestational age, provided feedback on entered data for maternal weight, blood pressure, and glucose levels, and included a private diary service for fetal ultrasound images. Both groups received the same app, but the VR group also viewed fetal images produced in VR; these images were stored in the app. All participants filled out questionnaires to assess MFA, depressive symptoms, and other basic medical information. The questionnaires were filled out again after the interventions. RESULTS: Basic demographic data were comparable between the 2 groups. Most of the assessments showed comparable results for the 2 groups, but the mean score to assess interaction with the fetus was significantly higher for the VR group than the control group (0.4 vs 0.1, P=.004). The proportion of participants with an increased score for this category after the intervention was significantly higher in the VR group than the control group (43% vs 13%, P=.005). The feedback questionnaire revealed that scores for the degree of perception of fetal appearance all increased after the intervention in the VR group. 
CONCLUSIONS: The use of a mobile app with fetal images in VR significantly increased maternal interaction with the fetus. TRIAL REGISTRATION: ClinicalTrials.gov NCT04942197; https://clinicaltrials.gov/ct2/show/NCT04942197.


Subject(s)
Mobile Applications , Virtual Reality , Infant , Humans , Pregnancy , Female , Prenatal Care , Postpartum Period , Fetus
7.
IEEE Trans Vis Comput Graph ; 29(3): 1799-1817, 2023 Mar.
Article in English | MEDLINE | ID: mdl-34851827

ABSTRACT

We present RCMVis, a visual analytics system to support interactive Route Choice Modeling analysis. It aims to model which characteristics of routes, such as distance and the number of traffic lights, affect travelers' route choice behaviors and how much they affect the choice during their trips. Through close collaboration with domain experts, we designed a visual analytics framework for Route Choice Modeling. The framework supports three interactive analysis stages: exploration, modeling, and reasoning. In the exploration stage, we help analysts interactively explore trip data from multiple origin-destination (OD) pairs and choose a subset of data they want to focus on. To this end, we provide coordinated multiple OD views with different foci that allow analysts to inspect, rank, and compare OD pairs in terms of their multidimensional attributes. In the modeling stage, we integrate a k-medoids clustering method and a path-size logit model into our system to enable analysts to model route choice behaviors from trips with support for feature selection, hyperparameter tuning, and model comparison. Finally, in the reasoning stage, we help analysts rationalize and refine the model by selectively inspecting the trips that strongly support the modeling result. For evaluation, we conducted a case study and interviews with domain experts. The domain experts discovered unexpected insights from numerous modeling results, allowing them to explore the hyperparameter space more effectively to gain better results. In addition, they gained OD- and road-level insights into which data mainly supported the modeling result, enabling further discussion of the model.
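The path-size logit model integrated into the system above can be sketched in its generic textbook form: route utility is a linear function of route attributes plus a path-size correction ln(PS) that penalizes routes overlapping with alternatives. This is not the system's implementation; the uniform link lengths, coefficients, and function names are illustrative assumptions.

```python
import math

def path_size(route, all_routes):
    """PS in (0, 1]: each link contributes its length share (uniform
    link lengths assumed here) divided by the number of routes using
    it; a route that overlaps with no alternative scores 1."""
    return sum((1 / len(route)) / sum(link in r for r in all_routes)
               for link in route)

def psl_probabilities(attrs, betas, routes, beta_ps=1.0):
    """Choice probabilities from a softmax over
    V_i = sum_k betas[k] * attrs[i][k] + beta_ps * ln(PS_i)."""
    v = [sum(b * x for b, x in zip(betas, a))
         + beta_ps * math.log(path_size(r, routes))
         for a, r in zip(attrs, routes)]
    m = max(v)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in v]
    total = sum(exps)
    return [e / total for e in exps]
```

With two fully disjoint routes and identical attributes, the correction vanishes (PS = 1) and the model reduces to a plain logit with equal probabilities.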

8.
IEEE Trans Vis Comput Graph ; 28(7): 2563-2576, 2022 07.
Article in English | MEDLINE | ID: mdl-33201820

ABSTRACT

We introduce Parallel Histogram Plot (PHP), a technique that overcomes the innate limitations of parallel coordinates plot (PCP) by attaching stacked-bar histograms with discrete color schemes to PCP. The color-coded histograms enable users to see an overview of the whole data without cluttering or scalability issues. Each rectangle in the PHP histograms is color coded according to the data ranking by a selected attribute. This color-coding scheme allows users to visually examine relationships between attributes, even between those that are displayed far apart, without repositioning or reordering axes. We adopt the Visual Information Seeking Mantra so that the polylines of the original PCP can be used to show details of a small number of selected items when the cluttering problem subsides. We also design interactions, such as a focus+context technique, to help users investigate small regions of interest in a space-efficient manner. We provide a real-world example in which PHP is effectively utilized compared with other visualizations, and we perform a controlled user study to evaluate the performance of PHP in helping users estimate the correlation between attributes. The results demonstrate that the performance of PHP was consistent in the estimation of correlations between two attributes regardless of the distance between them.


Subject(s)
Computer Graphics , Speech Disorders , Humans
9.
IEEE Trans Vis Comput Graph ; 28(1): 551-561, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34587063

ABSTRACT

We propose Steadiness and Cohesiveness, two novel metrics to measure the inter-cluster reliability of multidimensional projection (MDP), specifically how well the inter-cluster structures are preserved between the original high-dimensional space and the low-dimensional projection space. Measuring inter-cluster reliability is crucial as it directly affects how well inter-cluster tasks (e.g., identifying cluster relationships in the original space from a projected view) can be conducted; however, despite the importance of inter-cluster tasks, we found that previous metrics, such as Trustworthiness and Continuity, fail to measure inter-cluster reliability. Our metrics consider two aspects of inter-cluster reliability: Steadiness measures the extent to which clusters in the projected space form clusters in the original space, and Cohesiveness measures the opposite. They extract random clusters with arbitrary shapes and positions in one space and evaluate how much the clusters are stretched or dispersed in the other space. Furthermore, our metrics can quantify pointwise distortions, allowing for the visualization of inter-cluster reliability in a projection, which we call a reliability map. Through quantitative experiments, we verify that our metrics precisely capture the distortions that harm inter-cluster reliability, while previous metrics have difficulty doing so. A case study also demonstrates that our metrics and the reliability map (1) support users in selecting proper projection techniques or hyperparameters and (2) prevent misinterpretation while performing inter-cluster tasks, thus allowing adequate identification of the inter-cluster structure.
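The core idea behind Steadiness (take groups found in the projected space and check how dispersed they are back in the original space) can be illustrated with a toy sketch. This is not the published metric, which extracts arbitrary-shaped random clusters and measures stretching and dispersion far more carefully; every name and constant here is an assumption.

```python
import math
import random

def dispersion(points):
    """Mean pairwise distance within a point set."""
    n = len(points)
    pair_dists = [math.dist(points[i], points[j])
                  for i in range(n) for j in range(i + 1, n)]
    return sum(pair_dists) / len(pair_dists)

def steadiness_sketch(high, low, k=5, trials=20, seed=0):
    """For random seed points, gather the k nearest neighbors in the
    projected (low) space and compare their dispersion back in the
    original (high) space against the global dispersion. Mean ratios
    well below 1 mean projected clusters stay compact when traced back
    to the original space."""
    rng = random.Random(seed)
    base = dispersion(high)
    ratios = []
    for _ in range(trials):
        i = rng.randrange(len(low))
        nbrs = sorted(range(len(low)),
                      key=lambda j: math.dist(low[i], low[j]))[:k]
        ratios.append(dispersion([high[j] for j in nbrs]) / base)
    return sum(ratios) / trials
```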

10.
IEEE Trans Vis Comput Graph ; 27(7): 3109-3122, 2021 07.
Article in English | MEDLINE | ID: mdl-31880556

ABSTRACT

We present a new visual exploration concept, Progressive Visual Analytics with Safeguards, that helps people manage the uncertainty arising from progressive data exploration. Despite its potential benefits, intermediate knowledge from progressive analytics can be incorrect due to various machine and human factors, such as sampling bias or misinterpretation of uncertainty. To alleviate this problem, we introduce PVA-Guards, safeguards people can leave on uncertain intermediate knowledge that needs to be verified, and derive seven PVA-Guards based on previous visualization task taxonomies. PVA-Guards provide a means of ensuring the correctness of the conclusion and understanding the reason when intermediate knowledge becomes invalid. We also present ProReveal, a proof-of-concept system designed and developed to integrate the seven safeguards into progressive data exploration. Finally, we report a user study with 14 participants, which showed that people voluntarily employed PVA-Guards to safeguard their findings and that ProReveal's PVA-Guard view provides an overview of uncertain intermediate knowledge. We believe our new concept can also offer better consistency in progressive data exploration, alleviating people's heterogeneous interpretations of uncertainty.

11.
IEEE Trans Vis Comput Graph ; 27(2): 656-666, 2021 02.
Article in English | MEDLINE | ID: mdl-33048722

ABSTRACT

Git metadata contains rich information for developers to understand the overall context of a large software development project. Thus it can help new developers, managers, and testers understand the history of development without needing to dig into a large pile of unfamiliar source code. However, the current tools for Git visualization are not adequate to analyze and explore the metadata: They focus mainly on improving the usability of Git commands instead of on helping users understand the development history. Furthermore, they do not scale for large and complex Git commit graphs, which can play an important role in understanding the overall development history. In this paper, we present Githru, an interactive visual analytics system that enables developers to effectively understand the context of development history through the interactive exploration of Git metadata. We design an interactive visual encoding idiom to represent a large Git graph in a scalable manner while preserving the topological structures in the Git graph. To enable scalable exploration of a large Git commit graph, we propose novel techniques (graph reconstruction, clustering, and Context-Preserving Squash Merge (CSM) methods) to abstract a large-scale Git commit graph. Based on these Git commit graph abstraction techniques, Githru provides an interactive summary view to help users gain an overview of the development history and a comparison view in which users can compare different clusters of commits. The efficacy of Githru has been demonstrated by case studies with domain experts using real-world, in-house datasets from a large software development team at a major international IT company. A controlled user study with 12 developers comparing Githru to previous tools also confirms the effectiveness of Githru in terms of task completion time.


Subject(s)
Computer Graphics , Metadata , Data Interpretation, Statistical , Software
12.
IEEE Trans Vis Comput Graph ; 27(2): 1525-1535, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33052858

ABSTRACT

We present a systematic review of three comparative layouts (juxtaposition, superposition, and explicit encoding), which are information visualization (InfoVis) layouts designed to support comparison tasks. For the last decade, these layouts have served as fundamental idioms in designing many visualization systems. However, we found that the layouts have been used with inconsistent terms and confusion, and the lessons from previous studies are fragmented. The goal of our research is to distill the results from previous studies into a consistent and reusable framework. We review 127 research papers, including 15 papers with quantitative user studies, which employed comparative layouts. We first alleviate the ambiguous boundaries in the design space of comparative layouts by suggesting lucid terminology (e.g., chart-wise and item-wise juxtaposition). We then identify the diverse aspects of comparative layouts, such as the advantages and concerns of using each layout in real-world scenarios and researchers' approaches to overcoming the concerns. Building on the initial insights gained from Gleicher et al.'s survey [19], we elaborate on relevant empirical evidence distilled from our survey (e.g., the actual effectiveness of the layouts in different study settings) and identify novel facets that the original work did not cover (e.g., the familiarity of the layouts to people). Finally, we show the consistent and contradictory results on the performance of comparative layouts and offer practical implications for using the layouts by suggesting trade-offs and seven actionable guidelines.

13.
IEEE Trans Vis Comput Graph ; 26(2): 1347-1360, 2020 Feb.
Article in English | MEDLINE | ID: mdl-30222575

ABSTRACT

We present PANENE, a progressive algorithm for approximate nearest neighbor indexing and querying. Although the use of k-nearest neighbor (KNN) libraries is common in many data analysis methods, most KNN algorithms can only be queried after the whole dataset has been indexed, i.e., they are not online. Even the few online implementations are not progressive in the sense that the time to index incoming data is not bounded and cannot satisfy the latency requirements of progressive systems. This long latency has significantly limited the use of many machine learning methods, such as t-SNE, in interactive visual analytics. PANENE is a novel algorithm for Progressive Approximate k-NEarest NEighbors, enabling fast KNN queries while continuously indexing new batches of data. Following the progressive computation paradigm, PANENE operations can be bounded in time, allowing analysts to access running results within an interactive latency. PANENE can also incrementally build and maintain a cache data structure, a KNN lookup table, to enable constant-time lookups for KNN queries. Finally, we present three progressive applications of PANENE: regression, density estimation, and responsive t-SNE, opening up new opportunities to use complex algorithms in interactive systems.
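The progressive-computation contract the abstract describes (each indexing call does a bounded amount of work, so a UI can interleave indexing with queries) can be sketched with a toy class. This is not PANENE's approximate k-d forest; all names are illustrative, and the query here is exact over whatever has been indexed so far.

```python
import heapq

class ProgressiveKNNIndexSketch:
    """Toy stand-in for a progressive index: every run() call performs
    at most `batch_limit` units of indexing work before returning, so a
    caller can keep latency bounded and query partial results anytime."""

    def __init__(self, batch_limit=100):
        self.indexed = []      # points available for querying
        self.pending = []      # points waiting to be indexed
        self.batch_limit = batch_limit

    def add(self, points):
        self.pending.extend(points)

    def run(self):
        """Index one bounded batch; return the number still pending."""
        batch = self.pending[:self.batch_limit]
        self.pending = self.pending[self.batch_limit:]
        self.indexed.extend(batch)
        return len(self.pending)

    def query(self, q, k):
        """k nearest (by squared distance) among points indexed so far."""
        sqdist = lambda p: sum((a - b) ** 2 for a, b in zip(p, q))
        return heapq.nsmallest(k, self.indexed, key=sqdist)
```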

14.
Pharmaceutics ; 10(2)2018 Jun 07.
Article in English | MEDLINE | ID: mdl-29880732

ABSTRACT

Fabry disease is a rare lysosomal storage disorder resulting from the lack of α-Gal A gene activity. Globotriaosylceramide (GB3, ceramide trihexoside) is a novel endogenous biomarker that predicts the incidence of Fabry disease. For early-stage efficacy/biomarker studies, a rapid method to determine this biomarker simultaneously in plasma and in all relevant tissues related to this disease is required. However, the limited sample volume, as well as the varying levels of GB3 in different matrices, makes GB3 quantitation very challenging. Here, we developed a rapid method to determine GB3 in mouse plasma and various tissues. Preliminary stability tests were also performed under three different conditions: short-term, freeze-thaw, and long-term. The calibration curve was well fitted over the concentration range of 0.042-10 µg/mL for GB3 in plasma and 0.082-20 µg/g for GB3 in various tissues. This method was successfully applied for the comparison of GB3 levels in Fabry model mice (B6;129-Glatm1Kul/J), which has not been performed previously to the best of our knowledge.

15.
Methods ; 124: 78-88, 2017 07 15.
Article in English | MEDLINE | ID: mdl-28600227

ABSTRACT

In this paper, we present miRTarVis+, a Web-based interactive visual analytics tool for miRNA target prediction and the integrative analysis of multiple prediction results. Various microRNA (miRNA) target prediction algorithms have been developed to improve sequence-based miRNA target prediction by exploiting miRNA-mRNA expression profile data. There are also a few analytics tools that help researchers predict targets of miRNAs. However, there is still a need to improve the performance of miRNA target prediction algorithms and, more importantly, a need for interactive visualization tools for an integrative analysis of multiple prediction results. miRTarVis+ has an intuitive interface that supports the analysis pipeline of load, filter, predict, and visualize. It can predict targets of miRNA by adopting Bayesian inference and maximal information-based nonparametric exploration (MINE) analyses, as well as conventional correlation and mutual information analyses. miRTarVis+ supports an integrative analysis of multiple prediction results by providing an overview of multiple prediction results and then allowing users to examine a selected miRNA-mRNA network in an interactive treemap and node-link diagram. To evaluate the effectiveness of miRTarVis+, we conducted two case studies using miRNA-mRNA expression profile data of asthma and breast cancer patients and demonstrated that miRTarVis+ helps users more comprehensively analyze targets of miRNA from miRNA-mRNA expression profile data. miRTarVis+ is available at http://hcil.snu.ac.kr/research/mirtarvisplus.
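The conventional correlation analysis mentioned above can be illustrated with a minimal sketch: a candidate target is kept when its mRNA expression profile anti-correlates with the miRNA profile across samples (miRNAs down-regulate their targets). The threshold, function names, and gene labels are assumptions for illustration, not the tool's implementation.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def filter_targets(mirna_profile, mrna_profiles, threshold=-0.7):
    """Keep candidate genes whose expression profile is strongly
    anti-correlated with the miRNA profile across samples."""
    return [gene for gene, profile in mrna_profiles.items()
            if pearson(mirna_profile, profile) <= threshold]
```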


Subject(s)
Asthma/genetics , Breast Neoplasms/genetics , MicroRNAs/genetics , RNA, Messenger/genetics , Sequence Analysis, RNA/methods , User-Computer Interface , Algorithms , Asthma/diagnosis , Asthma/metabolism , Asthma/pathology , Base Sequence , Bayes Theorem , Binding Sites , Breast Neoplasms/diagnosis , Breast Neoplasms/metabolism , Breast Neoplasms/pathology , Female , Gene Expression Profiling , Gene Expression Regulation , Humans , Internet , MicroRNAs/metabolism , RNA, Messenger/metabolism
16.
J Hum Genet ; 62(2): 167-174, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27829684

ABSTRACT

Hunter syndrome is an X-linked lysosomal storage disease caused by a deficiency in the enzyme iduronate-2-sulfatase (IDS), leading to the accumulation of glycosaminoglycans (GAGs). Two recombinant enzymes, idursulfase and idursulfase beta, are currently available for enzyme replacement therapy for Hunter syndrome. These two enzymes exhibited some differences in various clinical parameters in a recent clinical trial. Given the similarities and differences of these enzymes, previous research has characterized their biochemical and physicochemical properties. We compared the in vitro and in vivo efficacy of the two enzymes in patient fibroblasts and a mouse model. Both enzymes were taken up into cells and degraded the GAGs accumulated in fibroblasts. In vivo studies of the two enzymes revealed similar organ distribution and decreased urinary GAG excretion. In particular, idursulfase beta exhibited enhanced in vitro efficacy at lower treatment concentrations and greater in vivo efficacy in the degradation of tissue GAGs and the improvement of bones, and it showed lower anti-drug antibody formation. A biochemical analysis showed that both enzymes exhibit a largely similar glycosylation pattern, but several peaks differed and the quantity of aggregates was lower for idursulfase beta.


Subject(s)
Enzyme Replacement Therapy/methods , Iduronate Sulfatase/pharmacology , Iduronate Sulfatase/pharmacokinetics , Iduronate Sulfatase/therapeutic use , Mucopolysaccharidosis II/drug therapy , Animals , Cell Line , Glycoproteins/genetics , Glycosaminoglycans/metabolism , Humans , Mice , Mice, Inbred C57BL , Mice, Knockout , Mucopolysaccharidosis II/genetics , Spectrometry, Mass, Matrix-Assisted Laser Desorption-Ionization
17.
IEEE Trans Vis Comput Graph ; 23(1): 311-320, 2017 01.
Article in English | MEDLINE | ID: mdl-27875147

ABSTRACT

We present an interactive visual analytics framework, GazeDx (short for GazeDiagnosis), for the comparative analysis of gaze data from multiple readers examining volumetric images, integrating important contextual information with the gaze data. Gaze pattern comparison is essential to understanding how radiologists examine medical images and to identifying factors influencing the examination. Most prior work depended upon comparisons of manually juxtaposed static images of gaze tracking results. Comparative gaze analysis with volumetric images is more challenging due to the additional cognitive load of 3D perception. A recent study proposed a visualization design based on direct volume rendering (DVR) for visualizing gaze patterns in volumetric images; however, effective and comprehensive gaze pattern comparison remains challenging due to a lack of interactive visualization tools for comparative gaze analysis. GazeDx takes on this challenge by integrating crucial contextual information, such as pupil size and windowing, into the analysis process for more in-depth and ecologically valid findings. Among the interactive visualization components of GazeDx, a context-embedded interactive scatterplot is specially designed to help users examine abstract gaze data in diverse contexts by embedding medical imaging representations well known to radiologists. We present the results of two case studies with two experienced radiologists, in which they compared the gaze patterns of 14 radiologists reading two patients' volumetric CT images.


Subject(s)
Eye Movements/physiology , Imaging, Three-Dimensional , Medical Informatics/methods , Algorithms , Humans , Radiography, Abdominal , Radiography, Thoracic , Radiologists , Tomography, X-Ray Computed , User-Computer Interface
18.
BMC Proc ; 9(Suppl 6 Proceedings of the 5th Symposium on Biological Data): S2, 2015.
Article in English | MEDLINE | ID: mdl-26361498

ABSTRACT

BACKGROUND: MicroRNAs (miRNAs) are short nucleotide sequences that down-regulate their target genes. Various miRNA target prediction algorithms have used sequence complementarity between a miRNA and its targets. Recently, other algorithms have tried to improve sequence-based miRNA target prediction by exploiting miRNA-mRNA expression profile data. Some web-based tools have also been introduced to help researchers predict targets of miRNAs from miRNA-mRNA expression profile data. There remains a demand for a miRNA-mRNA visual analysis tool that features novel miRNA prediction algorithms and more interactive visualization techniques. RESULTS: We designed and implemented miRTarVis, an interactive visual analysis tool that predicts targets of miRNAs from miRNA-mRNA expression profile data and visualizes the resulting miRNA-target interaction network. miRTarVis has an intuitive interface design in accordance with the analysis procedure of load, filter, predict, and visualize. It predicts targets of miRNA by adopting Bayesian inference and MINE analyses, as well as conventional correlation and mutual information analyses. It visualizes a resulting miRNA-mRNA network in an interactive Treemap, as well as a conventional node-link diagram. miRTarVis is available at http://hcil.snu.ac.kr/~rati/miRTarVis/index.html. CONCLUSIONS: We reported findings from miRNA-mRNA expression profile data of asthma patients using miRTarVis in a case study. miRTarVis helps to predict and understand targets of miRNA from miRNA-mRNA expression profile data.

19.
BMC Bioinformatics ; 16 Suppl 11: S5, 2015.
Article in English | MEDLINE | ID: mdl-26328893

ABSTRACT

BACKGROUND: Though cluster analysis has become a routine analytic task for bioinformatics research, it is still arduous for researchers to assess the quality of a clustering result. To select the best clustering method and its parameters for a dataset, researchers have to run multiple clustering algorithms and compare them. However, such a comparison task with multiple clustering results is cognitively demanding and laborious. RESULTS: In this paper, we present XCluSim, a visual analytics tool that enables users to interactively compare multiple clustering results based on the Visual Information Seeking Mantra. We build a taxonomy for categorizing existing techniques of clustering results visualization in terms of the Gestalt principles of grouping. Using the taxonomy, we choose the most appropriate interactive visualizations for presenting individual clustering results from different types of clustering algorithms. The efficacy of XCluSim is shown through case studies with a bioinformatician. CONCLUSIONS: Compared to other relevant tools, XCluSim enables users to compare multiple clustering results in a more scalable manner. Moreover, XCluSim supports diverse clustering algorithms and dedicated visualizations and interactions for different types of clustering results, allowing more effective exploration of details on demand. Through case studies with a bioinformatics researcher, we received positive feedback on the functionalities of XCluSim, including its ability to help identify stably clustered items across multiple clustering results.


Subject(s)
Algorithms , Cluster Analysis , Computational Biology/methods , Computer Graphics , Software , Genome, Human , Humans
20.
IEEE Comput Graph Appl ; 35(6): 20-8, 2015.
Article in English | MEDLINE | ID: mdl-26415159

ABSTRACT

Wordle has been commonly used to summarize texts, with each word size-coded by its frequency of occurrence: the more often a word occurs in a text, the bigger it appears. The interactive authoring tool WordlePlus leverages natural interaction and animation to give users more control over wordle creation. WordlePlus supports direct manipulation of words with pen and touch interaction. It introduces two-word multitouch manipulations, such as concatenating and grouping two words, and provides pen interaction for adding and deleting words. In addition, WordlePlus employs animation to help users create more dynamic and engaging wordles.
