ABSTRACT
Acute stroke demands prompt diagnosis and treatment to achieve optimal patient outcomes. However, the intricate and irregular nature of clinical data associated with acute stroke, particularly blood pressure (BP) measurements, presents substantial obstacles to effective visual analytics and decision-making. Through a year-long collaboration with experienced neurologists, we developed PhenoFlow, a visual analytics system that leverages the collaboration between humans and Large Language Models (LLMs) to analyze the extensive and complex data of acute ischemic stroke patients. PhenoFlow pioneers an innovative workflow in which the LLM serves as a data wrangler while neurologists explore and supervise the output using visualizations and natural language interactions. This approach enables neurologists to focus more on decision-making with reduced cognitive load. To protect sensitive patient information, PhenoFlow only utilizes metadata to make inferences and synthesize executable code, without accessing raw patient data. This ensures that the results are both reproducible and interpretable while maintaining patient privacy. The system incorporates a slice-and-wrap design that employs temporal folding to create an overlaid circular visualization. Combined with a linear bar graph, this design aids in exploring meaningful patterns within irregularly measured BP data. Through case studies, PhenoFlow has demonstrated its capability to support iterative analysis of extensive clinical datasets, reducing cognitive load and enabling neurologists to make well-informed decisions. Grounded in long-term collaboration with domain experts, our research demonstrates the potential of utilizing LLMs to tackle current challenges in data-driven clinical decision-making for acute ischemic stroke patients.
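As a rough illustration of the metadata-only approach described above, the sketch below builds an LLM prompt from a pandas table's column names, types, missing-value counts, and numeric ranges without exposing any raw rows. The helper name, prompt wording, and synthetic table are assumptions for illustration; this is not PhenoFlow's implementation, and the LLM call itself is omitted.

```python
# Hypothetical sketch of a metadata-only prompt, in the spirit of the workflow above.
# Only column names, dtypes, missing-value counts, and numeric ranges are shared with
# the LLM; no raw patient rows appear in the prompt. The helper name is illustrative.
import pandas as pd

def build_wrangling_prompt(df: pd.DataFrame, request: str) -> str:
    """Summarize a clinical table as metadata and embed it in an LLM prompt."""
    meta_lines = []
    for col in df.columns:
        s = df[col]
        desc = f"- {col}: dtype={s.dtype}, n_missing={int(s.isna().sum())}"
        if pd.api.types.is_numeric_dtype(s):
            desc += f", min={s.min():.2f}, max={s.max():.2f}"
        meta_lines.append(desc)
    return (
        "You are a data wrangler. Write pandas code for the request below.\n"
        "Table metadata (no raw patient rows are provided):\n"
        + "\n".join(meta_lines)
        + f"\n\nRequest: {request}\nReturn only executable Python code."
    )

# Synthetic, non-patient example data:
df = pd.DataFrame({
    "patient_id": [1, 1, 2],
    "sbp": [150.0, 142.0, 131.0],
    "measured_at": pd.to_datetime(["2023-01-01 08:00", "2023-01-01 14:30",
                                   "2023-01-02 09:10"]),
})
print(build_wrangling_prompt(df, "Resample each patient's SBP into hourly bins."))
```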
ABSTRACT
OBJECTIVES: To determine the effects of cerclage on twin pregnancies. METHODS: A multicenter, retrospective, cohort study was conducted at 10 tertiary centers using a web-based data collection platform. The study population included twin pregnancies delivered after 20 weeks of gestation. Patients with one or two fetal deaths before 20 weeks of gestation were excluded. Maternal characteristics, including prenatal cervical length (CL) and obstetric outcomes, were retrieved from the electronic medical records. RESULTS: A total of 1,473 patients had available data regarding the CL measured before 24 weeks of gestation. Seven patients without CL data obtained prior to cerclage were excluded from the analysis. The study population was divided into two groups according to the CL measured during the mid-trimester: the CL ≤2.5 cm group (n = 127) and the CL >2.5 cm group (n = 1,339). A total of 127 patients (8.7%) were included in the CL ≤2.5 cm group, of whom 41.7% (53/127) received cerclage. Patients in the CL >2.5 cm group who received cerclage had a significantly lower gestational age at delivery than the control group (hazard ratio (HR): 1.8; 95% confidence interval (CI): 1.11-2.87; p = .016). Patients in the CL ≤2.5 cm group who received cerclage had a significantly higher gestational age at delivery than the control group (HR: 0.5; 95% CI: 0.30-0.82; p = .006). CONCLUSIONS: In twin pregnancies with a CL ≤2.5 cm, cerclage significantly prolongs gestation. However, unnecessary cerclage in women with a CL >2.5 cm may result in a higher risk of preterm labor and histologic chorioamnionitis, although this study has limitations originating from its retrospective design.
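The abstract reports hazard ratios with confidence intervals for gestational age at delivery; one plausible way to produce such estimates is a Cox proportional-hazards model, sketched below with the lifelines library on synthetic rows. The model choice, column names, and data are assumptions for illustration, not the study's actual analysis.

```python
# Hedged sketch: a Cox proportional-hazards fit with the lifelines library, one
# plausible way to obtain hazard ratios like those reported above. The model choice,
# column names, and rows are illustrative assumptions, not the study's analysis.
import pandas as pd
from lifelines import CoxPHFitter

data = pd.DataFrame({
    "ga_weeks":  [34.1, 36.3, 37.0, 35.2, 38.1, 33.4, 36.8, 37.5, 34.9, 36.0],
    "delivered": [1] * 10,                        # delivery observed for every row
    "cerclage":  [1, 0, 1, 0, 0, 1, 0, 1, 0, 1],  # exposure of interest
})

cph = CoxPHFitter()
cph.fit(data, duration_col="ga_weeks", event_col="delivered")
cph.print_summary()  # the exp(coef) column is the hazard ratio with its 95% CI
```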
Subjects
Cervical Cerclage , Pregnancy Outcome , Twin Pregnancy , Humans , Female , Pregnancy , Cervical Cerclage/statistics & numerical data , Cervical Cerclage/methods , Retrospective Studies , Twin Pregnancy/statistics & numerical data , Adult , Pregnancy Outcome/epidemiology , Cervical Length Measurement , Premature Birth/prevention & control , Premature Birth/epidemiology , Gestational Age , Uterine Cervical Incompetence/surgery
ABSTRACT
With the growing complexity and volume of data, visualizations have become more intricate, often requiring advanced techniques to convey insights. These complex charts are prevalent in everyday life, and individuals who lack knowledge in data visualization may find them challenging to understand. This paper investigates using Large Language Models (LLMs) to help users with low data literacy understand complex visualizations. While previous studies focus on text interactions with users, we noticed that visual cues are also critical for interpreting charts. We introduce an LLM application that supports both text and visual interaction for guiding chart interpretation. Our study with 26 participants revealed that the in-situ support effectively assisted users in interpreting charts and enhanced learning by addressing specific chart-related questions and encouraging further exploration. Visual communication allowed participants to convey their interests straightforwardly, eliminating the need for textual descriptions. However, the LLM assistance led users to engage less with the system, resulting in fewer insights from the visualizations. This suggests that users, particularly those with lower data literacy and motivation, may have over-relied on the LLM agent. We discuss opportunities for deploying LLMs to enhance visualization literacy while emphasizing the need for a balanced approach.
ABSTRACT
Visual clustering is a common perceptual task in scatterplots that supports diverse analytics tasks (e.g., cluster identification). However, even with the same scatterplot, the ways of perceiving clusters (i.e., conducting visual clustering) can differ due to the differences among individuals and ambiguous cluster boundaries. Although such perceptual variability casts doubt on the reliability of data analysis based on visual clustering, we lack a systematic way to efficiently assess this variability. In this research, we study perceptual variability in conducting visual clustering, which we call Cluster Ambiguity. To this end, we introduce CLAMS, a data-driven visual quality measure for automatically predicting cluster ambiguity in monochrome scatterplots. We first conduct a qualitative study to identify key factors that affect the visual separation of clusters (e.g., proximity or size difference between clusters). Based on study findings, we deploy a regression module that estimates the human-judged separability of two clusters. Then, CLAMS predicts cluster ambiguity by analyzing the aggregated results of all pairwise separability between clusters that are generated by the module. CLAMS outperforms widely-used clustering techniques in predicting ground truth cluster ambiguity. Meanwhile, CLAMS exhibits performance on par with human annotators. We conclude our work by presenting two applications for optimizing and benchmarking data mining techniques using CLAMS. The interactive demo of CLAMS is available at clusterambiguity.dev.
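A minimal sketch of the aggregation idea described above follows: estimate a separability score for every pair of clusters and combine the scores into a single ambiguity value. The centroid-distance proxy stands in for CLAMS's regression module trained on human separability judgments and is only an assumption for illustration.

```python
# Illustrative sketch (not the CLAMS model): aggregate pairwise cluster-separability
# estimates into one ambiguity score. The centroid-distance proxy below stands in for
# the paper's regression module trained on human separability judgments.
import numpy as np
from itertools import combinations
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, cluster_std=1.5, random_state=0)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

def pairwise_separability(X, labels, a, b):
    """Proxy for human-judged separability of clusters a and b (higher = clearer)."""
    Xa, Xb = X[labels == a], X[labels == b]
    gap = np.linalg.norm(Xa.mean(axis=0) - Xb.mean(axis=0))
    spread = Xa.std() + Xb.std()
    return gap / (gap + spread)

seps = [pairwise_separability(X, labels, a, b)
        for a, b in combinations(np.unique(labels), 2)]
ambiguity = 1.0 - float(np.mean(seps))  # low average separability -> high ambiguity
print(f"estimated cluster ambiguity: {ambiguity:.3f}")
```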
ABSTRACT
A common way to evaluate the reliability of dimensionality reduction (DR) embeddings is to quantify how well labeled classes form compact, mutually separated clusters in the embeddings. This approach is based on the assumption that the classes stay as clear clusters in the original high-dimensional space. However, in reality, this assumption can be violated; a single class can be fragmented into multiple separated clusters, and multiple classes can be merged into a single cluster. We thus cannot always assure the credibility of the evaluation using class labels. In this paper, we introduce two novel quality measures-Label-Trustworthiness and Label-Continuity (Label-T&C)-advancing the process of DR evaluation based on class labels. Instead of assuming that classes are well-clustered in the original space, Label-T&C work by (1) estimating the extent to which classes form clusters in the original and embedded spaces and (2) evaluating the difference between the two. A quantitative evaluation showed that Label-T&C outperform widely used DR evaluation measures (e.g., Trustworthiness and Continuity, Kullback-Leibler divergence) in terms of the accuracy in assessing how well DR embeddings preserve the cluster structure, and are also scalable. Moreover, we present case studies demonstrating that Label-T&C can be successfully used for revealing the intrinsic characteristics of DR techniques and their hyperparameters.
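The sketch below illustrates only the general recipe described above: score how well class labels form clusters in the original space and in the embedding, then compare the two. Silhouette scores are used as a stand-in; the actual Label-T&C estimators differ.

```python
# Hedged sketch, not the published Label-T&C measures: it only mirrors the recipe of
# scoring how well classes form clusters in both spaces and comparing the two,
# using silhouette scores as a stand-in for the paper's estimators.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score

X, y = load_digits(return_X_y=True)
Z = PCA(n_components=2, random_state=0).fit_transform(X)  # the DR embedding

orig_score = silhouette_score(X, y)  # how clustered the classes are originally
emb_score = silhouette_score(Z, y)   # how clustered they appear after DR
print(f"original: {orig_score:.3f}, embedded: {emb_score:.3f}, "
      f"difference: {emb_score - orig_score:+.3f}")
```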
ABSTRACT
Partitioning a dynamic network into subsets (i.e., snapshots) based on disjoint time intervals is a widely used technique for understanding how structural patterns of the network evolve. However, selecting an appropriate time window (i.e., slicing a dynamic network into snapshots) is challenging and time-consuming, often involving a trial-and-error approach to investigating underlying structural patterns. To address this challenge, we present MoNetExplorer, a novel interactive visual analytics system that leverages temporal network motifs to provide recommendations for window sizes and support users in visually comparing different slicing results. MoNetExplorer provides a comprehensive analysis based on window size, including (1) a temporal overview to identify the structural information, (2) temporal network motif composition, and (3) node-link-diagram-based details to enable users to identify and understand structural patterns at various temporal resolutions. To demonstrate the effectiveness of our system, we conducted a case study with network researchers using two real-world dynamic network datasets. Our case studies show that the system effectively supports users to gain valuable insights into the temporal and structural aspects of dynamic networks.
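As a minimal sketch of the slicing step described above, the snippet below partitions a timestamped edge list into disjoint snapshots for a given window size. The toy edge list and the two candidate window sizes are illustrative; motif counting and MoNetExplorer's recommendation logic are not reproduced.

```python
# Minimal sketch of the slicing step described above: partition a timestamped edge
# list into disjoint snapshots for a given window size. The toy edges and window
# sizes are illustrative; motif counting and recommendation are not reproduced.
from collections import defaultdict

edges = [(0, "a", "b"), (30, "b", "c"), (95, "a", "c"), (130, "c", "d"), (260, "a", "d")]

def slice_network(edges, window):
    """Group (timestamp, src, dst) edges into snapshots of `window` time units."""
    snapshots = defaultdict(list)
    for t, src, dst in edges:
        snapshots[t // window].append((src, dst))
    return dict(snapshots)

for w in (60, 120):  # compare two candidate window sizes
    print(f"window={w}:", slice_network(edges, w))
```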
ABSTRACT
BACKGROUND: Insufficient physical activity due to social distancing and suppressed outdoor activities increases vulnerability to diseases like cardiovascular diseases, sarcopenia, and severe COVID-19. While bodyweight exercises, such as squats, effectively boost physical activity, incorrect postures risk abnormal muscle activation and joint strain, leading to ineffective sessions or even injuries. Avoiding incorrect postures is challenging for novices without expert guidance. Existing solutions for remote coaching and computer-assisted posture correction often prove costly or inefficient. OBJECTIVE: This study aimed to use deep neural networks to develop a personal workout assistant that offers feedback on squat postures using only mobile devices (smartphones and tablets). Deep learning mimicked experts' visual assessments of proper exercise postures. The effectiveness of the mobile app was evaluated by comparing it with exercise videos, a popular at-home workout choice. METHODS: Twenty participants without squat exercise experience were recruited and randomly assigned to an experimental group (EXP) of 10 individuals aged 21.90 (SD 2.18) years with a mean BMI of 20.75 (SD 2.11) and a control group (CTL) of 10 individuals aged 22.60 (SD 1.95) years with a mean BMI of 18.72 (SD 1.23). A data set with over 20,000 squat videos annotated by experts was created, and a deep learning model was trained using pose estimation and video classification to analyze workout postures. Subsequently, a mobile workout assistant app, Home Alone Exercise, was developed, and a 2-week interventional study, in which the EXP used the app while the CTL only followed workout videos, showed how the app helps people improve their squat exercise. RESULTS: The EXP significantly improved their squat postures as evaluated by the app after 2 weeks (Pre: 0.20 vs Mid: 4.20 vs Post: 8.00, P=.001), whereas the CTL (without the app) showed no significant change in squat posture (Pre: 0.70 vs Mid: 1.30 vs Post: 3.80, P=.13). Significant differences were observed in the left (Pre: 75.06 vs Mid: 76.24 vs Post: 63.13, P=.02) and right (Pre: 71.99 vs Mid: 76.68 vs Post: 62.82, P=.03) knee joint angles in the EXP before and after exercise, with no significant effect found for the CTL in the left (Pre: 73.27 vs Mid: 74.05 vs Post: 70.70, P=.68) and right (Pre: 70.82 vs Mid: 74.02 vs Post: 70.23, P=.61) knee joint angles. CONCLUSIONS: EXP participants trained with the app experienced faster improvement and learned more nuanced details of the squat exercise. The proposed mobile app, offering cost-effective self-discovery feedback, effectively taught users about squat exercises without expensive in-person trainer sessions. TRIAL REGISTRATION: Clinical Research Information Service KCT0008178 (retrospectively registered); https://cris.nih.go.kr/cris/search/detailSearch.do/24006.
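The study reports knee joint angles before and after training; the sketch below shows one common way such an angle can be computed from pose-estimation keypoints (hip, knee, ankle). The keypoint values are synthetic, and the study's deep learning model and mobile app are not reproduced here.

```python
# Hedged sketch of one reported measurement: the knee joint angle computed from
# pose-estimation keypoints (hip, knee, ankle). The keypoints below are synthetic;
# the study's deep learning model and mobile app are not reproduced here.
import numpy as np

def joint_angle(hip, knee, ankle):
    """Angle at the knee (degrees) between the knee->hip and knee->ankle segments."""
    v1 = np.asarray(hip) - np.asarray(knee)
    v2 = np.asarray(ankle) - np.asarray(knee)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Example normalized image keypoints (x, y), with y increasing downward:
print(round(joint_angle(hip=(0.30, 0.60), knee=(0.50, 0.65), ankle=(0.48, 0.90)), 1))
```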
ABSTRACT
BACKGROUND: Maternal-fetal attachment (MFA) has been reported to be associated with the postpartum mother-infant relationship. Seeing the fetus through ultrasound might influence MFA, and the effect could be increased by more realistic images, such as those generated in virtual reality (VR). OBJECTIVE: The aim was to determine the effect of fetal images generated in VR on MFA and depressive symptoms through a prenatal-coaching mobile app. METHODS: This 2-arm parallel randomized controlled trial involved a total of 80 pregnant women. Eligible women were randomly assigned to either a mobile app-only group (n=40) or an app plus VR group (n=40). The VR group experienced their own baby's images generated in VR based on images obtained from fetal ultrasonography. The prenatal-coaching mobile app recommended health behavior for the pregnant women according to gestational age, provided feedback on entered data for maternal weight, blood pressure, and glucose levels, and included a private diary service for fetal ultrasound images. Both groups received the same app, but the VR group also viewed fetal images produced in VR; these images were stored in the app. All participants filled out questionnaires to assess MFA, depressive symptoms, and other basic medical information. The questionnaires were filled out again after the interventions. RESULTS: Basic demographic data were comparable between the 2 groups. Most of the assessments showed comparable results for the 2 groups, but the mean score to assess interaction with the fetus was significantly higher for the VR group than the control group (0.4 vs 0.1, P=.004). The proportion of participants with an increased score for this category after the intervention was significantly higher in the VR group than the control group (43% vs 13%, P=.005). The feedback questionnaire revealed that scores for the degree of perception of fetal appearance all increased after the intervention in the VR group. CONCLUSIONS: The use of a mobile app with fetal images in VR significantly increased maternal interaction with the fetus. TRIAL REGISTRATION: ClinicalTrials.gov NCT04942197; https://clinicaltrials.gov/ct2/show/NCT04942197.
Subjects
Mobile Applications , Virtual Reality , Infant , Humans , Pregnancy , Female , Prenatal Care , Postpartum Period , Fetus
ABSTRACT
PubMed is the most extensively used database and search engine in the biomedical and healthcare fields. However, users may experience difficulties in finding their target papers when facing massive numbers of search results, especially in unfamiliar fields. Therefore, we developed a novel user interface for PubMed and conducted a three-step study: step A, a preliminary user survey with 76 medical experts regarding the current usability of PubMed for biomedical literature search tasks; step B, the implementation of EEEvis, a novel interactive visual analytics system for the search task; and step C, a randomized user study comparing PubMed and EEEvis. First, we conducted a Google survey of 76 medical experts regarding the unmet needs of PubMed and the user requirements for a novel search interface. Based on the data from this preliminary Google survey, we implemented a novel interactive visual analytics system for biomedical literature search. EEEvis provides enhanced literature data analysis functions, including (1) an overview of bibliographic features, including publication date, citation count, and impact factors, (2) an overview of the co-authorship network, and (3) interactive sorting, filtering, and highlighting. In the randomized user study of 24 medical experts, the search speed of EEEvis was not inferior to that of PubMed in the time to reach the first article (median difference 3 sec, 95% CI -2.1 to 8.5, P = 0.535) or in the search completion time (median difference 8 sec, 95% CI -4.7 to 19.1, P = 0.771). However, 22 participants (91.7%) responded that they would be willing to use EEEvis as their first choice for a biomedical literature search task, and 21 participants (87.5%) cited the bibliographic sorting and filtering functionalities of EEEvis as a major advantage. EEEvis could be a supplementary interface for PubMed that enhances the user experience in searching for biomedical literature.
Subjects
Search Engine , Humans , MEDLINE , PubMed , Factual Databases
ABSTRACT
We present RCMVis, a visual analytics system to support interactive Route Choice Modeling analysis. It aims to model which characteristics of routes, such as distance and the number of traffic lights, affect travelers' route choice behaviors and how much they affect the choice during their trips. Through close collaboration with domain experts, we designed a visual analytics framework for Route Choice Modeling. The framework supports three interactive analysis stages: exploration, modeling, and reasoning. In the exploration stage, we help analysts interactively explore trip data from multiple origin-destination (OD) pairs and choose a subset of data they want to focus on. To this end, we provide coordinated multiple OD views with different foci that allow analysts to inspect, rank, and compare OD pairs in terms of their multidimensional attributes. In the modeling stage, we integrate a k-medoids clustering method and a path-size logit model into our system to enable analysts to model route choice behaviors from trips with support for feature selection, hyperparameter tuning, and model comparison. Finally, in the reasoning stage, we help analysts rationalize and refine the model by selectively inspecting the trips that strongly support the modeling result. For evaluation, we conducted a case study and interviews with domain experts. The domain experts discovered unexpected insights from numerous modeling results, allowing them to explore the hyperparameter space more effectively to gain better results. In addition, they gained OD- and road-level insights into which data mainly supported the modeling result, enabling further discussion of the model.
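The modeling stage combines clustering with a path-size logit model; the sketch below illustrates the path-size logit choice probability in a simplified, assumed form with hand-picked parameters. It is not RCMVis's estimation procedure, and the route attributes and coefficients are illustrative.

```python
# Hedged, simplified sketch of a path-size logit choice probability with hand-picked
# parameters; RCMVis estimates such models from trip data, which is not shown here.
# Route attributes, taste parameters, and path sizes are illustrative assumptions.
import numpy as np

# Toy choice set of 3 routes: [distance_km, n_traffic_lights, path_size in (0, 1]]
routes = np.array([[5.0, 4, 1.00],
                   [5.5, 2, 0.60],   # overlaps other routes -> smaller path size
                   [7.0, 1, 0.80]])
beta = np.array([-0.5, -0.2])        # assumed taste parameters for the two attributes
beta_ps = 1.0                        # weight on the log path-size correction

utility = routes[:, :2] @ beta + beta_ps * np.log(routes[:, 2])
prob = np.exp(utility) / np.exp(utility).sum()  # softmax over the choice set
print(np.round(prob, 3))
```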
ABSTRACT
We propose Steadiness and Cohesiveness, two novel metrics to measure the inter-cluster reliability of multidimensional projection (MDP), specifically how well the inter-cluster structures are preserved between the original high-dimensional space and the low-dimensional projection space. Measuring inter-cluster reliability is crucial as it directly affects how well inter-cluster tasks (e.g., identifying cluster relationships in the original space from a projected view) can be conducted; however, despite the importance of inter-cluster tasks, we found that previous metrics, such as Trustworthiness and Continuity, fail to measure inter-cluster reliability. Our metrics consider two aspects of the inter-cluster reliability: Steadiness measures the extent to which clusters in the projected space form clusters in the original space, and Cohesiveness measures the opposite. They extract random clusters with arbitrary shapes and positions in one space and evaluate how much the clusters are stretched or dispersed in the other space. Furthermore, our metrics can quantify pointwise distortions, allowing for the visualization of inter-cluster reliability in a projection, which we call a reliability map. Through quantitative experiments, we verify that our metrics precisely capture the distortions that harm inter-cluster reliability while previous metrics have difficulty capturing the distortions. A case study also demonstrates that our metrics and the reliability map 1) support users in selecting the proper projection techniques or hyperparameters and 2) prevent misinterpretation while performing inter-cluster tasks, thus allowing adequate identification of inter-cluster structure.
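The sketch below mimics only the core loop described above: extract a random neighborhood in one space and check how dispersed those same points are in the other space. The dispersion ratio used here is a stand-in, not the published Steadiness and Cohesiveness definitions.

```python
# Illustrative sketch only, not the published metrics: it mimics the core loop of
# extracting a random neighborhood in one space and checking how dispersed those
# same points are in the other space. The dispersion ratio is a stand-in measure.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = load_iris().data
Z = PCA(n_components=2, random_state=0).fit_transform(X)  # the projection

def dispersion_ratio(src, dst, k=15, trials=50):
    """Average spread in dst of random k-neighborhoods sampled in src."""
    nn = NearestNeighbors(n_neighbors=k).fit(src)
    ratios = []
    for _ in range(trials):
        seed = rng.integers(len(src))
        idx = nn.kneighbors(src[seed:seed + 1], return_distance=False)[0]
        ratios.append(dst[idx].std() / dst.std())
    return float(np.mean(ratios))

print("projected -> original (Steadiness-like):", round(dispersion_ratio(Z, X), 3))
print("original -> projected (Cohesiveness-like):", round(dispersion_ratio(X, Z), 3))
```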
ABSTRACT
We introduce Parallel Histogram Plot (PHP), a technique that overcomes the innate limitations of parallel coordinates plot (PCP) by attaching stacked-bar histograms with discrete color schemes to PCP. The color-coded histograms enable users to see an overview of the whole data without cluttering or scalability issues. Each rectangle in the PHP histograms is color coded according to the data ranking by a selected attribute. This color-coding scheme allows users to visually examine relationships between attributes, even between those that are displayed far apart, without repositioning or reordering axes. We adopt the Visual Information Seeking Mantra so that the polylines of the original PCP can be used to show details of a small number of selected items when the cluttering problem subsides. We also design interactions, such as a focus+context technique, to help users investigate small regions of interest in a space-efficient manner. We provide a real-world example in which PHP is effectively utilized compared with other visualizations, and we perform a controlled user study to evaluate the performance of PHP in helping users estimate the correlation between attributes. The results demonstrate that the performance of PHP was consistent in the estimation of correlations between two attributes regardless of the distance between them.
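A minimal sketch of the color-coding scheme described above follows: items are ranked by a selected attribute, binned into discrete color classes, and counted per class inside another axis's histogram. Quartile bins and the synthetic attributes are assumptions for illustration, and the rendering itself is omitted.

```python
# Minimal sketch of the color-coding scheme described above (quartile bins assumed):
# rank items by a selected attribute, assign discrete color classes, and count how
# the classes stack inside another axis's histogram. Rendering itself is omitted.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({"mpg": rng.normal(25, 5, 200),
                   "weight": rng.normal(3000, 400, 200),
                   "horsepower": rng.normal(120, 30, 200)})

selected = "mpg"
df["color_bin"] = pd.qcut(df[selected].rank(method="first"), q=4,
                          labels=["Q1", "Q2", "Q3", "Q4"])  # discrete color classes

axis = "weight"
df["axis_bin"] = pd.cut(df[axis], bins=10)  # histogram bins on another axis
stacked = df.pivot_table(index="axis_bin", columns="color_bin",
                         values=selected, aggfunc="count", observed=False)
print(stacked)  # counts per (histogram bin, color class) -> stacked-bar heights
```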
Subjects
Computer Graphics , Speech Disorders , Humans
ABSTRACT
We present a new visual exploration concept-Progressive Visual Analytics with Safeguards-that helps people manage the uncertainty arising from progressive data exploration. Despite its potential benefits, intermediate knowledge from progressive analytics can be incorrect due to various machine and human factors, such as a sampling bias or misinterpretation of uncertainty. To alleviate this problem, we introduce PVA-Guards, safeguards people can leave on uncertain intermediate knowledge that needs to be verified, and derive seven PVA-Guards based on previous visualization task taxonomies. PVA-Guards provide a means of ensuring the correctness of the conclusion and understanding the reason when intermediate knowledge becomes invalid. We also present ProReveal, a proof-of-concept system designed and developed to integrate the seven safeguards into progressive data exploration. Finally, we report a user study with 14 participants, which shows people voluntarily employed PVA-Guards to safeguard their findings and ProReveal's PVA-Guard view provides an overview of uncertain intermediate knowledge. We believe our new concept can also offer better consistency in progressive data exploration, alleviating people's heterogeneous interpretation of uncertainty.
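As a toy illustration of the safeguard idea (not ProReveal's design), the snippet below pins a simple claim to a running estimate and re-checks it as data arrives in progressive batches. The claim, batch sizes, and synthetic stream are illustrative assumptions.

```python
# Toy illustration of the safeguard idea, not ProReveal's design: a claim pinned to
# an intermediate estimate is re-checked as data arrives in progressive batches.
# The claim, batch sizes, and synthetic stream are illustrative assumptions.
import numpy as np

def guard(running_mean: float) -> bool:
    """The analyst's safeguarded claim: 'the mean exceeds 0.5'."""
    return running_mean > 0.5

rng = np.random.default_rng(7)
stream = rng.normal(loc=0.48, scale=0.3, size=10_000)  # full data, seen in chunks

seen = np.array([])
for chunk in np.array_split(stream, 10):  # progressive batches
    seen = np.concatenate([seen, chunk])
    mean = seen.mean()
    print(f"n={len(seen):5d}  mean={mean:.3f}  guard holds: {guard(mean)}")
```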
ABSTRACT
Git metadata contains rich information for developers to understand the overall context of a large software development project. Thus it can help new developers, managers, and testers understand the history of development without needing to dig into a large pile of unfamiliar source code. However, the current tools for Git visualization are not adequate to analyze and explore the metadata: They focus mainly on improving the usability of Git commands instead of on helping users understand the development history. Furthermore, they do not scale for large and complex Git commit graphs, which can play an important role in understanding the overall development history. In this paper, we present Githru, an interactive visual analytics system that enables developers to effectively understand the context of development history through the interactive exploration of Git metadata. We design an interactive visual encoding idiom to represent a large Git graph in a scalable manner while preserving the topological structures in the Git graph. To enable scalable exploration of a large Git commit graph, we propose novel techniques (graph reconstruction, clustering, and Context-Preserving Squash Merge (CSM) methods) to abstract a large-scale Git commit graph. Based on these Git commit graph abstraction techniques, Githru provides an interactive summary view to help users gain an overview of the development history and a comparison view in which users can compare different clusters of commits. The efficacy of Githru has been demonstrated by case studies with domain experts using real-world, in-house datasets from a large software development team at a major international IT company. A controlled user study with 12 developers comparing Githru to previous tools also confirms the effectiveness of Githru in terms of task completion time.
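The sketch below illustrates one ingredient of graph abstraction in the spirit described above: collapsing linear chains of commits in a commit DAG so a long history reads as a smaller graph. networkx and the toy DAG are assumptions for illustration; this is not Githru's reconstruction, clustering, or CSM algorithm.

```python
# Hedged sketch of one abstraction ingredient in the spirit described above: collapse
# linear chains of commits in a commit DAG so a long history reads as a smaller graph.
# networkx and the toy DAG are assumptions; this is not Githru's CSM algorithm.
import networkx as nx

# Toy commit DAG: edges point from parent commit to child commit.
g = nx.MultiDiGraph([("a", "b"), ("b", "c"), ("c", "d"), ("d", "f"),
                     ("a", "e"), ("e", "f"), ("f", "g")])

def collapse_linear_chains(dag: nx.MultiDiGraph) -> nx.MultiDiGraph:
    """Contract commits that originally have exactly one parent and one child."""
    collapsible = [n for n in dag if dag.in_degree(n) == 1 and dag.out_degree(n) == 1]
    out = dag.copy()
    for node in collapsible:
        parent = next(out.predecessors(node))
        child = next(out.successors(node))
        out.add_edge(parent, child)  # bypass the contracted commit
        out.remove_node(node)
    return out

print(sorted(collapse_linear_chains(g).edges()))  # branch and merge points survive
```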
Subjects
Computer Graphics , Metadata , Statistical Data Interpretation , Software
ABSTRACT
We present a systematic review on three comparative layouts-juxtaposition, superposition, and explicit encoding-which are information visualization (InfoVis) layouts designed to support comparison tasks. For the last decade, these layouts have served as fundamental idioms in designing many visualization systems. However, we found that the layouts have been used with inconsistent terminology, causing confusion, and the lessons from previous studies are fragmented. The goal of our research is to distill the results from previous studies into a consistent and reusable framework. We review 127 research papers, including 15 papers with quantitative user studies, which employed comparative layouts. We first alleviate the ambiguous boundaries in the design space of comparative layouts by suggesting lucid terminology (e.g., chart-wise and item-wise juxtaposition). We then identify the diverse aspects of comparative layouts, such as the advantages and concerns of using each layout in real-world scenarios and researchers' approaches to overcome the concerns. Building our knowledge on top of the initial insights gained from Gleicher et al.'s survey [19], we elaborate on relevant empirical evidence that we distilled from our survey (e.g., the actual effectiveness of the layouts in different study settings) and identify novel facets that the original work did not cover (e.g., the familiarity of the layouts to people). Finally, we show the consistent and contradictory results on the performance of comparative layouts and offer practical implications for using the layouts by suggesting trade-offs and seven actionable guidelines.
ABSTRACT
We present PANENE, a progressive algorithm for approximate nearest neighbor indexing and querying. Although the use of k-nearest neighbor (KNN) libraries is common in many data analysis methods, most KNN algorithms can only be queried when the whole dataset has been indexed, i.e., they are not online. Even the few online implementations are not progressive in the sense that the time to index incoming data is not bounded and cannot satisfy the latency requirements of progressive systems. This long latency has significantly limited the use of many machine learning methods, such as t-SNE, in interactive visual analytics. PANENE is a novel algorithm for Progressive Approximate k-NEarest NEighbors, enabling fast KNN queries while continuously indexing new batches of data. Following the progressive computation paradigm, PANENE operations can be bounded in time, allowing analysts to access running results within an interactive latency. PANENE can also incrementally build and maintain a cache data structure, a KNN lookup table, to enable constant-time lookups for KNN queries. Finally, we present three progressive applications of PANENE: regression, density estimation, and responsive t-SNE, which open up new opportunities to use complex algorithms in interactive systems.
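The snippet below sketches only the progressive interface described above: each call performs a bounded amount of indexing work, and queries can be answered at any time from the points indexed so far. The brute-force search is a stand-in for PANENE's progressive index structures, and the class and parameter names are illustrative.

```python
# Sketch of the progressive interface only (not the PANENE index). Each run() call
# performs a bounded amount of indexing work, and queries can be answered at any time
# from the points indexed so far. Brute-force search stands in for progressive trees.
import numpy as np

class ProgressiveKNN:
    def __init__(self, data: np.ndarray, k: int = 5):
        self.data = data
        self.k = k
        self.n_indexed = 0

    def run(self, ops: int) -> int:
        """Index at most `ops` more points; return how many are indexed so far."""
        self.n_indexed = min(self.n_indexed + ops, len(self.data))
        return self.n_indexed

    def query(self, point: np.ndarray) -> np.ndarray:
        """k nearest neighbors among the points indexed so far."""
        indexed = self.data[: self.n_indexed]
        dists = np.linalg.norm(indexed - point, axis=1)
        return np.argsort(dists)[: self.k]

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 16))
knn = ProgressiveKNN(X, k=5)
while knn.run(ops=2_000) < len(X):             # bounded work per iteration
    print("running result:", knn.query(X[0]))  # accessible at interactive latency
print("final result:", knn.query(X[0]))
```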
ABSTRACT
Fabry disease is a rare lysosomal storage disorder resulting from the lack of α-Gal A gene activity. Globotriaosylceramide (GB3, ceramide trihexoside) is a novel endogenous biomarker that predicts the incidence of Fabry disease. For early-stage efficacy/biomarker studies, a rapid method is required to determine this biomarker simultaneously in plasma and in all tissues relevant to this disease. However, the limited sample volume, as well as the varying levels of GB3 in different matrices, makes GB3 quantitation very challenging. Here, we developed a rapid method to identify GB3 in mouse plasma and various tissues. Preliminary stability tests were also performed under three different conditions: short-term, freeze-thaw, and long-term. The calibration curve was well fitted over the concentration range of 0.042-10 µg/mL for GB3 in plasma and 0.082-20 µg/g for GB3 in various tissues. This method was successfully applied to the comparison of GB3 levels in Fabry model mice (B6;129-Glatm1Kul/J), which, to the best of our knowledge, has not been performed previously.
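For readers unfamiliar with calibration over such concentration ranges, the sketch below fits a simple linear calibration curve. The standard concentrations and instrument responses are synthetic, not the study's measurements, and the actual analytical workflow is not represented.

```python
# Hedged illustration of fitting a linear calibration curve over a concentration
# range like the one reported above. The standards and responses below are synthetic,
# not the study's measurements, and the analytical workflow itself is not shown.
import numpy as np

conc = np.array([0.042, 0.1, 0.5, 1.0, 2.5, 5.0, 10.0])         # standards, ug/mL
response = np.array([0.9, 2.1, 10.4, 20.8, 52.0, 103.5, 207.0])  # instrument response

slope, intercept = np.polyfit(conc, response, 1)
r = np.corrcoef(conc, response)[0, 1]
print(f"response = {slope:.2f} * conc + {intercept:.2f}, r^2 = {r ** 2:.4f}")
```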
ABSTRACT
In this paper, we present miRTarVis+, a Web-based interactive visual analytics tool for miRNA target prediction and the integrative analysis of multiple prediction results. Various microRNA (miRNA) target prediction algorithms have been developed to improve sequence-based miRNA target prediction by exploiting miRNA-mRNA expression profile data. There are also a few analytics tools that help researchers predict targets of miRNAs. However, there is still a need to improve the performance of miRNA target prediction algorithms and, more importantly, to provide interactive visualization tools for an integrative analysis of multiple prediction results. miRTarVis+ has an intuitive interface that supports the analysis pipeline of load, filter, predict, and visualize. It can predict targets of miRNA by adopting Bayesian inference and maximal information-based nonparametric exploration (MINE) analyses as well as conventional correlation and mutual information analyses. miRTarVis+ supports an integrative analysis of multiple prediction results by providing an overview of the results and then allowing users to examine a selected miRNA-mRNA network in an interactive treemap and node-link diagram. To evaluate the effectiveness of miRTarVis+, we conducted two case studies using miRNA-mRNA expression profile data of asthma and breast cancer patients and demonstrated that miRTarVis+ helps users analyze miRNA targets more comprehensively from miRNA-mRNA expression profile data. miRTarVis+ is available at http://hcil.snu.ac.kr/research/mirtarvisplus.
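The sketch below illustrates only the simplest of the analyses listed above, a correlation screen over miRNA-mRNA expression profiles in which strongly negative correlations suggest candidate targets. The gene names and values are synthetic examples; the Bayesian and MINE analyses of miRTarVis+ are not reproduced.

```python
# Hedged illustration of the simplest analysis listed above: a correlation screen
# over miRNA-mRNA expression profiles, where strongly negative correlations suggest
# candidate targets. Gene names and values are synthetic examples; the Bayesian and
# MINE analyses of miRTarVis+ are not reproduced here.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
mirna = pd.Series(rng.normal(size=30), name="miR-21")
mrnas = pd.DataFrame({
    "PTEN":  -0.8 * mirna + rng.normal(scale=0.3, size=30),  # behaves like a target
    "GAPDH": pd.Series(rng.normal(size=30)),                  # unrelated control
})

scores = mrnas.corrwith(mirna)             # Pearson correlation per candidate mRNA
predicted = scores[scores < -0.5].sort_values()
print(predicted)
```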
Subjects
Asthma/genetics , Breast Neoplasms/genetics , MicroRNAs/genetics , Messenger RNA/genetics , RNA Sequence Analysis/methods , User-Computer Interface , Algorithms , Asthma/diagnosis , Asthma/metabolism , Asthma/pathology , Base Sequence , Bayes Theorem , Binding Sites , Breast Neoplasms/diagnosis , Breast Neoplasms/metabolism , Breast Neoplasms/pathology , Female , Gene Expression Profiling , Gene Expression Regulation , Humans , Internet , MicroRNAs/metabolism , Messenger RNA/metabolism
ABSTRACT
We present an interactive visual analytics framework, GazeDx (abbr. of GazeDiagnosis), for the comparative analysis of gaze data from multiple readers examining volumetric images, integrating important contextual information with the gaze data. Gaze pattern comparison is essential to understanding how radiologists examine medical images and to identifying factors influencing the examination. Most prior work depended upon comparisons of manually juxtaposed static images of gaze tracking results. Comparative gaze analysis with volumetric images is more challenging due to the additional cognitive load of 3D perception. A recent study proposed a visualization design based on direct volume rendering (DVR) for visualizing gaze patterns in volumetric images; however, effective and comprehensive gaze pattern comparison remains challenging due to a lack of interactive visualization tools for comparative gaze analysis. We take on this challenge with GazeDx, integrating crucial contextual information such as pupil size and windowing into the analysis process for more in-depth and ecologically valid findings. Among the interactive visualization components in GazeDx, a context-embedded interactive scatterplot is specifically designed to help users examine abstract gaze data in diverse contexts by embedding medical imaging representations that are well known to radiologists. We present the results of two case studies with two experienced radiologists, in which they compared the gaze patterns of 14 radiologists reading two patients' volumetric CT images.
Subjects
Eye Movements/physiology , Three-Dimensional Imaging , Medical Informatics/methods , Algorithms , Humans , Abdominal Radiography , Thoracic Radiography , Radiologists , X-Ray Computed Tomography , User-Computer Interface
ABSTRACT
Hunter syndrome is an X-linked lysosomal storage disease caused by a deficiency in the enzyme iduronate-2-sulfatase (IDS), leading to the accumulation of glycosaminoglycans (GAGs). Two recombinant enzymes, idursulfase and idursulfase beta, are currently available for enzyme replacement therapy for Hunter syndrome. These two enzymes exhibited some differences in various clinical parameters in a recent clinical trial. Regarding the similarities and differences of these enzymes, previous research has characterized their biochemical and physicochemical properties. We compared the in vitro and in vivo efficacy of the two enzymes in patient fibroblasts and a mouse model. Both enzymes were taken up into cells and degraded the GAGs accumulated in fibroblasts. In vivo studies of the two enzymes revealed similar organ distribution and decreased urinary GAG excretion. In particular, idursulfase beta exhibited enhanced in vitro efficacy at lower treatment concentrations, greater in vivo efficacy in the degradation of tissue GAGs and the improvement of bones, and lower anti-drug antibody formation. A biochemical analysis showed that both enzymes have a largely similar glycosylation pattern, although several peaks differed and the quantity of aggregates in idursulfase beta was lower.