ABSTRACT
Temporal action localization aims to identify the boundaries and categories of actions in videos, such as scoring a goal in a football match. Single-frame supervision has emerged as a labor-efficient way to train action localizers, as it requires only one annotated frame per action. However, it often suffers from poor performance due to the lack of precise boundary annotations. To address this issue, we propose a visual analysis method that aligns similar actions and then propagates a few user-provided annotations (e.g., boundaries, category labels) to similar actions via the generated alignments. Our method models the alignment between actions as a heaviest path problem and the annotation propagation as a quadratic optimization problem. As the automatically generated alignments may not accurately match the associated actions and could produce inaccurate localization results, we develop a storyline visualization to explain the localization results of actions and their alignments. This visualization helps users correct wrong localization results and misalignments. The corrections are then used to improve the localization results of other actions. The effectiveness of our method in improving localization performance is demonstrated through quantitative evaluation and a case study.
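The alignment step described above — modeling the correspondence between two actions as a heaviest path problem — can be sketched as a dynamic program over a frame-similarity matrix. This is a minimal illustration of the idea, not the paper's exact formulation; the function name and inputs are assumptions.

```python
import numpy as np

def heaviest_path_alignment(sim):
    """Find the heaviest monotone path through a frame-similarity
    matrix from (0, 0) to (n-1, m-1); a sketch of the alignment idea."""
    n, m = sim.shape
    score = np.full((n, m), -np.inf)
    back = {}
    score[0, 0] = sim[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            # predecessors: diagonal, up, left (keeps the path monotone)
            cands = [(score[i + di, j + dj], (i + di, j + dj))
                     for di, dj in ((-1, -1), (-1, 0), (0, -1))
                     if i + di >= 0 and j + dj >= 0]
            best, prev = max(cands, key=lambda c: c[0])
            score[i, j] = best + sim[i, j]
            back[(i, j)] = prev
    # backtrack the alignment path
    path = [(n - 1, m - 1)]
    while path[-1] != (0, 0):
        path.append(back[path[-1]])
    return path[::-1], float(score[n - 1, m - 1])
```

For an identity similarity matrix the heaviest path runs along the diagonal, matching each frame to its counterpart; annotation propagation would then transfer labels along such matched frame pairs.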
ABSTRACT
Breaking news and first-hand reports often trend on social media platforms before traditional news outlets cover them. The real-time analysis of posts on such platforms can reveal valuable and timely insights for journalists, politicians, business analysts, and first responders, but the high number and diversity of new posts pose a challenge. In this work, we present an interactive system that enables the visual analysis of streaming social media data at scale in real time. We propose an efficient and explainable dynamic clustering algorithm that powers a continuously updated visualization of the current thematic landscape as well as detailed visual summaries of specific topics of interest. Our parallel clustering strategy provides an adaptive stream with a digestible but diverse selection of recent posts related to relevant topics. We also integrate familiar visual metaphors that are highly interlinked to enable both exploratory and more focused monitoring tasks. Analysts can gradually increase the resolution to dive deeper into particular topics. In contrast to previous work, our system also works with non-geolocated posts and avoids extensive preprocessing such as detecting events. We evaluate our dynamic clustering algorithm and discuss several use cases that show the utility of our system.
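The core of dynamic stream clustering — assigning each incoming post to a sufficiently similar existing cluster, or opening a new one, with decay so that recent posts dominate the thematic landscape — might be sketched as follows. The threshold and decay values are illustrative assumptions, not the system's actual parameters.

```python
import numpy as np

class StreamClusterer:
    """Minimal incremental clustering sketch for streaming posts."""

    def __init__(self, threshold=0.5, decay=0.99):
        self.threshold = threshold  # minimum cosine similarity to join
        self.decay = decay          # per-post weight decay for recency
        self.centroids = []         # running mean vectors per cluster
        self.weights = []           # decayed member counts per cluster

    def add(self, vec):
        """Assign one post embedding; return its cluster index."""
        vec = vec / np.linalg.norm(vec)
        # decay existing clusters so older posts lose influence
        self.weights = [w * self.decay for w in self.weights]
        if self.centroids:
            sims = [float(vec @ c / np.linalg.norm(c)) for c in self.centroids]
            best = int(np.argmax(sims))
            if sims[best] >= self.threshold:
                w = self.weights[best]
                self.centroids[best] = (self.centroids[best] * w + vec) / (w + 1)
                self.weights[best] = w + 1
                return best
        # no cluster is similar enough: open a new one
        self.centroids.append(vec)
        self.weights.append(1.0)
        return len(self.centroids) - 1
```

A real system would additionally retire clusters whose decayed weight falls below a floor, which keeps the number of active topics bounded on an unbounded stream.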
ABSTRACT
Investigating relationships between variables in multi-dimensional data sets is a common task for data analysts and engineers. More specifically, it is often valuable to understand which ranges of which input variables lead to particular values of a given target variable. Unfortunately, with an increasing number of independent variables, this process may become cumbersome and time-consuming due to the many possible combinations that have to be explored. In this paper, we propose a novel approach to visualize correlations between input variables and a target output variable that scales to hundreds of variables. We developed a visual model based on neural networks that can be explored in a guided way to help analysts find and understand such correlations. First, we train a neural network to predict the target from the input variables. Then, we visualize the inner workings of the resulting model to help understand relations within the data set. We further introduce a new regularization term for the backpropagation algorithm that encourages the neural network to learn representations that are easier to interpret visually. We apply our method to artificial and real-world data sets to show its utility.
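The abstract does not spell out the regularization term, so as a labeled stand-in, the general idea of nudging training toward representations that are easier to interpret can be illustrated with an L1 sparsity penalty on a linear toy model: irrelevant input weights are driven toward zero, so the surviving weights directly point at the inputs that matter. All values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: the target depends only on the first of 10 inputs
X = rng.normal(size=(200, 10))
y = 2.0 * X[:, 0]

w = np.zeros(10)
lam = 0.1   # strength of the interpretability-oriented penalty (assumed)
lr = 0.05
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)   # squared-error gradient
    grad += lam * np.sign(w)            # L1 term sparsifies the weights
    w -= lr * grad

# after training, w[0] dominates and the irrelevant weights are near zero,
# making the learned relation easy to read off visually
```

In the paper's setting the penalty is added to the backpropagation objective of a full neural network; the linear model above only shows the mechanism in its simplest form.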
Subjects
Computer Graphics; Neural Networks, Computer; Algorithms
ABSTRACT
It is difficult to explore large text collections if no or little information is available on the contained documents. Hence, starting analytic tasks on such corpora is challenging for many stakeholders from various domains. As a remedy, recent visualization research suggests using visual spatializations of representative text documents or tags to explore text collections. With PyramidTags, we introduce a novel approach for summarizing large text collections visually. In contrast to previous work, PyramidTags in particular aims at creating an improved representation that incorporates both the temporal evolution and the semantic relationships of visualized tags within the summarized document collection. As a result, it equips analysts with a visual starting point for interactive exploration, not only to get an overview of the main terms and phrases of the corpus, but also to grasp important ideas and stories. Analysts can hover over and select multiple tags to explore relationships and retrieve the most relevant documents. In this work, we apply PyramidTags to hundreds of thousands of web-crawled news reports. Our benchmarks suggest that PyramidTags creates time- and context-aware layouts, while preserving the inherent word order of important pairs.
ABSTRACT
Storyline visualizations are an effective means to present the evolution of plots and reveal the scenic interactions among characters. However, the design of storyline visualizations is a difficult task, as users need to balance aesthetic goals against narrative constraints. Although optimization-based methods have improved significantly in producing aesthetic and legible layouts, existing (semi-)automatic methods remain limited regarding 1) efficient exploration of the storyline design space and 2) flexible customization of storyline layouts. In this work, we propose a reinforcement learning framework to train an AI agent that assists users in exploring the design space efficiently and generating well-optimized storylines. Based on the framework, we introduce PlotThread, an authoring tool that integrates a set of flexible interactions to support easy customization of storyline visualizations. To seamlessly integrate the AI agent into the authoring process, we employ a mixed-initiative approach where both the agent and designers work on the same canvas to boost the collaborative design of storylines. We evaluate the reinforcement learning model through qualitative and quantitative experiments and demonstrate the usage of PlotThread using a collection of use cases.
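A typical aesthetic cost that a storyline layout optimizer (or, in this setting, an RL reward) penalizes is the number of line crossings between adjacent time steps. A minimal sketch, assuming each time step is given as an ordered list of character names (not PlotThread's actual reward function):

```python
def crossings(order_a, order_b):
    """Count pairwise line crossings between two adjacent time steps,
    a standard aesthetic cost for storyline layouts."""
    chars = [c for c in order_a if c in order_b]
    n = 0
    for i in range(len(chars)):
        for j in range(i + 1, len(chars)):
            a_i, a_j = order_a.index(chars[i]), order_a.index(chars[j])
            b_i, b_j = order_b.index(chars[i]), order_b.index(chars[j])
            # a crossing occurs when the relative order of two
            # characters flips between the two time steps
            if (a_i - a_j) * (b_i - b_j) < 0:
                n += 1
    return n
```

An agent exploring the design space would trade such crossing counts off against other terms (wiggles, white space) and against the narrative constraints the user imposes.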
ABSTRACT
At present, the production of tissue engineered cartilage requires the concurrent production of two identical transplants. One transplant is used for destructive quality control and the second one is implanted into the patient. A non-invasive characterization of such tissue engineering samples would be a promising tool to achieve a production process of just one transplant that is both implanted and tested. Raman spectroscopy is a method that satisfies this requirement by analyzing cells without lysis, fixation, or the use of any chemicals. This purely optical technique is based on inelastic scattering of laser photons by molecular vibrations of biopolymers. Characteristic peaks in Raman spectra of cells can be assigned to typical biochemical molecules present in biological samples. For the analysis of chondrocytes present in cartilage transplants, the determination of cell vitality as well as the discrimination of another cell type has been studied by Raman spectroscopy. Another bottleneck in such biological processes under GMP conditions is sterility control, as most of the commonly used methods require long cultivation times. Raman spectroscopy provides a good alternative to conventional methods in terms of time savings. In this study, the potential of Raman spectroscopy as a quality and sterility control tool for tissue engineering applications was studied by analyzing and comparing the spectra of cell and bacteria cultures.
Subjects
Cartilage, Articular/chemistry; Spectrum Analysis, Raman; Tissue Engineering; Animals; Cartilage/transplantation; Cartilage, Articular/surgery; Cell Culture Techniques/instrumentation; Chondrocytes/chemistry; Feasibility Studies; Principal Component Analysis; Quality Control; Spectrum Analysis, Raman/methods; Spectrum Analysis, Raman/standards; Swine
ABSTRACT
Magic-lens-based focus+context techniques are powerful means for exploring document spatializations. Typically, they only offer additional summarized or abstracted views on focused documents. As a consequence, users might miss important information that is either not shown in aggregated form or that never happens to get focused. In this work, we present the design process and user study results for improving a magic-lens-based document exploration approach with exemplary visual quality cues to guide users in steering the exploration and support them in interpreting the summarization results. We contribute a thorough analysis of potential sources of information loss involved in these techniques, which include the visual spatialization of text documents, user-steered exploration, and the visual summarization. With lessons learned from previous research, we highlight the various ways those information losses could hamper the exploration. Furthermore, we formally define measures for the aforementioned different types of information loss and bias. Finally, we present the visual cues to depict these quality measures, which are seamlessly integrated into the exploration approach. These visual cues guide users during the exploration, reduce the risk of misinterpretation, and accelerate insight generation. We conclude with the results of a controlled user study and discuss the benefits and challenges of integrating quality guidance in exploration techniques.
ABSTRACT
Immersive technologies like stereo rendering, virtual reality, or augmented reality (AR) are often used in the field of molecular visualisation. Modern, comparably lightweight and affordable AR headsets like Microsoft's HoloLens open up new possibilities for immersive analytics in molecular visualisation. A crucial factor for a comprehensive analysis of molecular data in AR is the rendering speed. HoloLens, however, has limited hardware capabilities due to requirements like battery life, fanless cooling, and weight. Consequently, insights from best practices for powerful desktop hardware may not be transferable. Therefore, we evaluate the capabilities of the HoloLens hardware for modern, GPU-enabled, high-quality rendering methods for the space-filling model commonly used in molecular visualisation. We also assess the scalability for large molecular data sets. Based on the results, we discuss ideas and possibilities for immersive molecular analytics. Besides more obvious benefits like the stereoscopic rendering offered by the device, this specifically includes natural user interfaces that use physical navigation instead of the traditional virtual one. Furthermore, we consider different scenarios for such an immersive system, ranging from educational use to collaborative scenarios.
Subjects
Computer Graphics; Computer Simulation; Software; Virtual Reality; Humans; Models, Structural; User-Computer Interface
ABSTRACT
The increasingly large number of available writings describing technical and scientific progress calls for advanced analytic tools for their efficient analysis. This is true for many application scenarios in science and industry and for different types of writings, comprising patents and scientific articles. Despite important differences between patents and scientific articles, both have a variety of common characteristics that lead to similar search and analysis tasks. However, the analysis and visualization of these documents is not a trivial task due to the complexity of the documents as well as the large number of possible relations between their multivariate attributes. In this survey, we review interactive analysis and visualization approaches of patents and scientific articles, ranging from exploration tools to sophisticated mining methods. In a bottom-up approach, we categorize them according to two aspects: (a) data type (text, citations, authors, metadata, and combinations thereof), and (b) task (finding and comparing single entities, seeking elementary relations, finding complex patterns, and in particular temporal patterns, and investigating connections between multiple behaviours). Finally, we identify challenges and research directions in this area that ask for future investigations.
ABSTRACT
We have created and made available to all a dataset with information about every paper that has appeared at the IEEE Visualization (VIS) set of conferences: InfoVis, SciVis, VAST, and Vis. The information about each paper includes its title, abstract, authors, and citations to other papers in the conference series, among many other attributes. This article describes the motivation for creating the dataset, as well as our process of coalescing and cleaning the data, and a set of three visualizations we created to facilitate exploration of the data. This data is meant to be useful to the broad data visualization community to help understand the evolution of the field and as an example document collection for text data visualization research.
ABSTRACT
The exploration and analysis of scientific literature collections is an important task for effective knowledge management. Past interest in such document sets has spurred the development of numerous visualization approaches for their interactive analysis. They either focus on the textual content of publications, or on document metadata including authors and citations. Previously presented approaches for citation analysis aim primarily at the visualization of the structure of citation networks and their exploration. We extend the state-of-the-art by presenting an approach for the interactive visual analysis of the contents of scientific documents, and combine it with a new and flexible technique to analyze their citations. This technique facilitates user-steered aggregation of citations which are linked to the content of the citing publications using a highly interactive visualization approach. Through enriching the approach with additional interactive views of other important aspects of the data, we support the exploration of the dataset over time and enable users to analyze citation patterns, spot trends, and track long-term developments. We demonstrate the strengths of our approach through a use case and discuss it based on expert user feedback.
ABSTRACT
Evaluation has become a fundamental part of visualization research, and researchers have employed many approaches from the field of human-computer interaction, like measures of task performance, thinking-aloud protocols, and analysis of interaction logs. Recently, eye tracking has also become popular for analyzing the visual strategies of users in this context. This has added another modality and more data, which requires specialized visualization techniques for analysis. However, only a few approaches exist that aim at an integrated analysis of multiple concurrent evaluation procedures. The variety, complexity, and sheer amount of such coupled multi-source data streams require a visual analytics approach. Our approach provides a highly interactive visualization environment to display and analyze thinking-aloud, interaction, and eye movement data in close relation. Automatic pattern finding algorithms allow an efficient exploratory search and support the reasoning process to derive common eye-interaction-thinking patterns between participants. In addition, our tool equips researchers with mechanisms for searching and verifying expected usage patterns. We apply our approach to a user study involving a visual analytics application and we discuss insights gained from this joint analysis. We anticipate our approach to be applicable to other combinations of evaluation techniques and a broad class of visualization applications.
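One simple form of cross-participant pattern finding is to intersect short event n-grams over the coupled streams, keeping only subsequences that every participant produced. This is a toy sketch, not the tool's actual algorithm; the event names are hypothetical.

```python
def common_patterns(streams, n=2):
    """Return event n-grams that occur in every participant's stream.

    streams -- list of event sequences, one per participant, where each
    event is a label from the merged eye/interaction/thinking channels.
    """
    per_participant = []
    for events in streams:
        grams = {tuple(events[i:i + n]) for i in range(len(events) - n + 1)}
        per_participant.append(grams)
    # a pattern is "common" only if all participants exhibit it
    return set.intersection(*per_participant)
```

In practice such candidate patterns would then be ranked and inspected visually, since frequency alone says little about whether a shared subsequence reflects a meaningful strategy.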
Subjects
Computer Graphics; Eye Movements/physiology; User-Computer Interface; Adult; Female; Humans; Male; Task Performance and Analysis; Young Adult
ABSTRACT
Interactive visualization provides valuable support for exploring, analyzing, and understanding textual documents. Certain tasks, however, require that insights derived from visual abstractions are verified by a human expert perusing the source text. So far, this problem is typically solved by offering overview-detail techniques, which present different views with different levels of abstraction. This often leads to problems with visual continuity. Focus+context techniques, on the other hand, succeed in accentuating interesting subsections of large text documents but are normally not suited for integrating visual abstractions. With VarifocalReader we present a technique that helps to solve some of these approaches' problems by combining characteristics from both. In particular, our method simplifies working with large and potentially complex text documents by simultaneously offering abstract representations of varying detail, based on the inherent structure of the document, and access to the text itself. In addition, VarifocalReader supports intra-document exploration through advanced navigation concepts and facilitates visual analysis tasks. The approach enables users to apply machine learning techniques and search mechanisms as well as to assess and adapt these techniques. This helps to extract entities, concepts, and other artifacts from texts. In combination with the automatic generation of intermediate text levels through topic segmentation for thematic orientation, users can test hypotheses or develop interesting new research questions. To illustrate the advantages of our approach, we provide usage examples from literature studies.
ABSTRACT
The number of microblog posts published daily has reached a level that hampers the effective retrieval of relevant messages, and the amount of information conveyed through services such as Twitter is still increasing. Analysts require new methods for monitoring their topic of interest, dealing with the data volume and its dynamic nature. It is of particular importance to provide situational awareness for decision making in time-critical tasks. Current tools for monitoring microblogs typically filter messages based on user-defined keyword queries and metadata restrictions. Used on their own, such methods can have drawbacks with respect to filter accuracy and adaptability to changes in trends and topic structure. We suggest ScatterBlogs2, a new approach to let analysts build task-tailored message filters in an interactive and visual manner based on recorded messages of well-understood previous events. These message filters include supervised classification and query creation backed by the statistical distribution of terms and their co-occurrences. The created filter methods can be orchestrated and adapted afterwards for interactive, visual real-time monitoring and analysis of microblog feeds. We demonstrate the feasibility of our approach for analyzing the Twitter stream in emergency management scenarios.
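The idea of deriving a message filter from the term statistics of a well-understood past event can be sketched as a log-odds term scorer: terms over-represented in messages of the recorded event get positive weights, background terms get negative ones. This is a toy stand-in, not ScatterBlogs2's actual classifier, and the example messages are hypothetical.

```python
from collections import Counter
import math

def build_filter(relevant_msgs, background_msgs):
    """Return a scoring function learned from labeled past messages."""
    rel = Counter(w for m in relevant_msgs for w in m.lower().split())
    bg = Counter(w for m in background_msgs for w in m.lower().split())
    vocab = set(rel) | set(bg)
    n_rel, n_bg = sum(rel.values()), sum(bg.values())
    # smoothed log-odds of each term under "relevant" vs. "background"
    weights = {w: math.log((rel[w] + 1) / (n_rel + len(vocab)))
                  - math.log((bg[w] + 1) / (n_bg + len(vocab)))
               for w in vocab}

    def score(msg):
        # unseen terms contribute nothing; high scores suggest relevance
        return sum(weights.get(w, 0.0) for w in msg.lower().split())

    return score
```

In the monitoring setting, such a scorer would be one building block; the system's filters additionally combine supervised classification with interactive query creation and can be adapted as trends shift.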
Subjects
Algorithms; Blogging/statistics & numerical data; Computer Graphics; Information Storage and Retrieval/methods; Social Media/statistics & numerical data; Software; User-Computer Interface; Computer Systems; Reproducibility of Results; Sensitivity and Specificity
ABSTRACT
Stem cells offer great potential for regenerative medicine because they regenerate damaged tissue by cell replacement and/or by stimulating endogenous repair mechanisms. Although stem cells are defined by their functional properties, such as the potential to proliferate, to self-renew, and to differentiate into specific cell types, their identification based on the expression of specific markers remains vague. Here, profiles of stem cell metabolism might highlight stem cell function more than the expression of single genes/markers. Thus, systematic approaches including spectroscopy might yield insight into stem cell function, identity, and stemness. We review the findings gained by means of metabolic and spectroscopic profiling methodologies, for example, nuclear magnetic resonance spectroscopy (NMRS), mass spectrometry (MS), and Raman spectroscopy (RS), with a focus on neural stem cells and neurogenesis.
Subjects
Cytological Techniques/methods; Metabolomics/methods; Spectrum Analysis/methods; Stem Cells/chemistry; Stem Cells/metabolism; Humans
ABSTRACT
Noninvasive monitoring of tissue-engineered (TE) constructs during their in vitro maturation or postimplantation in vivo is highly relevant for graft evaluation. However, traditional methods for studying cell and matrix components in engineered tissues, such as histology, immunohistochemistry, or biochemistry, require invasive tissue processing, resulting in the need to sacrifice TE constructs. Raman spectroscopy offers the unique possibility to analyze living cells label-free in situ and in vivo solely based on their phenotype-specific biochemical fingerprint. In this study, we aimed to determine the applicability of Raman spectroscopy for the noninvasive identification and spectral separation of primary human skin fibroblasts, keratinocytes, and melanocytes, as well as immortalized keratinocytes (HaCaT cells). Multivariate analysis of cell-type-specific Raman spectra enabled the discrimination between living primary and immortalized keratinocytes. We further noninvasively distinguished between fibroblasts, keratinocytes, and melanocytes. Our findings are especially relevant for the engineering of in vitro skin models and for the production of artificial skin, where both the biopsy and the transplant consist of several cell types. To realize a reproducible quality of TE skin, the determination of the purity of the cell populations as well as the detection of potential molecular changes are important. We conclude therefore that Raman spectroscopy is a suitable tool for the noninvasive in situ quality control of cells used in skin tissue engineering applications.
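The multivariate separation of cell-type spectra can be illustrated with principal component analysis on synthetic "spectra". The data below are simulated Gaussian peaks with noise, not real Raman measurements; the point is only that PCA on mean-centred spectra separates groups with distinct biochemical fingerprints along the first component.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "spectra": two cell types with peaks at different positions
wav = np.linspace(0.0, 1.0, 100)

def spectrum(peak):
    return np.exp(-((wav - peak) ** 2) / 0.002) + 0.05 * rng.normal(size=100)

A = np.array([spectrum(0.3) for _ in range(20)])  # cell type 1
B = np.array([spectrum(0.7) for _ in range(20)])  # cell type 2
X = np.vstack([A, B])

# PCA via SVD of the mean-centred data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]  # scores on the first principal component

# the two groups separate cleanly along PC1 (sign of PC1 is arbitrary)
```

In practice, such unsupervised separation is usually followed by a supervised classifier on the leading components to assign new spectra to known cell types.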