Results 1 - 20 of 86
1.
Article in English | MEDLINE | ID: mdl-34882554

ABSTRACT

Augmented Reality (AR) embeds digital information into objects of the physical world. Data can be shown in-situ, thereby enabling real-time visual comparisons and object search in real-life user tasks, such as comparing products and looking up scores in a sports game. While there have been studies on designing AR interfaces for situated information retrieval, there has only been limited research on AR object labeling for visual search tasks in the spatial environment. In this paper, we identify and categorize different design aspects in AR label design and report on a formal user study on labels for out-of-view objects to support visual search tasks in AR. We design three visualization techniques for out-of-view object labeling in AR, which respectively encode the relative physical position (height-encoded), the rotational direction (angle-encoded), and the label values (value-encoded) of the objects. We further implement two traditional in-view object labeling techniques, where labels are placed either next to the respective objects (situated) or at the edge of the AR FoV (boundary). We evaluate these five different label conditions in three visual search tasks for static objects. Our study shows that out-of-view object labels are beneficial when searching for objects outside the FoV, when maintaining spatial orientation, and when comparing multiple spatially sparse objects. Angle-encoded labels with directional cues of the surrounding objects have the overall best performance with the highest user satisfaction. We discuss the implications of our findings for future immersive AR interface design.
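The angle-encoded condition relies on computing each out-of-view object's rotational direction relative to the viewer's heading. A minimal 2D sketch of that computation (the function names and the planar simplification are ours, not the paper's):

```python
import math

def signed_yaw_to_target(viewer_xy, heading_deg, target_xy):
    """Signed horizontal angle (degrees) from the viewer's heading to a target.

    Positive values mean the target lies counter-clockwise (to the viewer's
    left), negative values clockwise (to the right). Range is [-180, 180).
    """
    bearing = math.degrees(math.atan2(target_xy[1] - viewer_xy[1],
                                      target_xy[0] - viewer_xy[0]))
    return (bearing - heading_deg + 180.0) % 360.0 - 180.0

def is_out_of_view(delta_deg, fov_deg):
    # An object is out of view when its angular offset exceeds half the FoV.
    return abs(delta_deg) > fov_deg / 2.0
```

The signed angle can then drive a directional cue (e.g., an arrow) attached to the out-of-view object's label.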

2.
Nat Biomed Eng ; 2021 Nov 08.
Article in English | MEDLINE | ID: mdl-34750536

ABSTRACT

Multiplexed tissue imaging facilitates the diagnosis and understanding of complex disease traits. However, the analysis of such digital images heavily relies on the experience of anatomical pathologists for the review, annotation and description of tissue features. In addition, the wider use of data from tissue atlases in basic and translational research and in classrooms would benefit from software that facilitates the easy visualization and sharing of the images and the results of their analyses. In this Perspective, we describe the ecosystem of software available for the analysis of tissue images and discuss the need for interactive online guides that help histopathologists make complex images comprehensible to non-specialists. We illustrate this idea via a software interface (Minerva), accessible via web browsers, that integrates multi-omic and tissue-atlas features. We argue that such interactive narrative guides can effectively disseminate digital histology data and aid their interpretation.

3.
Article in English | MEDLINE | ID: mdl-34606456

ABSTRACT

Inspection of tissues using a light microscope is the primary method of diagnosing many diseases, notably cancer. Highly multiplexed tissue imaging builds on this foundation, enabling the collection of up to 60 channels of molecular information plus cell and tissue morphology using antibody staining. This provides unique insight into disease biology and promises to help with the design of patient-specific therapies. However, a substantial gap remains with respect to visualizing the resulting multivariate image data and effectively supporting pathology workflows in digital environments on screen. We therefore developed Scope2Screen, a scalable software system for focus+context exploration and annotation of whole-slide, high-plex, tissue images. Our approach scales to analyzing 100 GB images of 10⁹ or more pixels per channel, containing millions of individual cells. A multidisciplinary team of visualization experts, microscopists, and pathologists identified key image exploration and annotation tasks involving finding, magnifying, quantifying, and organizing regions of interest (ROIs) in an intuitive and cohesive manner. Building on a scope-to-screen metaphor, we present interactive lensing techniques that operate at single-cell and tissue levels. Lenses are equipped with task-specific functionality and descriptive statistics, making it possible to analyze image features, cell types, and spatial arrangements (neighborhoods) across image channels and scales. A fast sliding-window search guides users to regions similar to those under the lens; these regions can be analyzed and considered either separately or as part of a larger image collection. A novel snapshot method enables linked lens configurations and image statistics to be saved, restored, and shared with these regions. We validate our designs with domain experts and apply Scope2Screen in two case studies involving lung and colorectal cancers to discover cancer-relevant image features.

4.
Article in English | MEDLINE | ID: mdl-34671767

ABSTRACT

The developmental process of embryos follows a monotonic order. An embryo can progressively cleave from one cell to multiple cells and finally transform into a morula and blastocyst. For time-lapse videos of embryos, most existing developmental stage classification methods conduct per-frame predictions using an image frame at each time step. However, classification using only images suffers from overlapping between cells and imbalance between stages. Temporal information can be valuable in addressing this problem by capturing movements between neighboring frames. In this work, we propose a two-stream model for developmental stage classification. Unlike previous methods, our two-stream model accepts both temporal and image information. We develop a linear-chain conditional random field (CRF) on top of neural network features extracted from the temporal and image streams to make use of both modalities. The linear-chain CRF formulation enables tractable training of global sequential models over multiple frames while also making it possible to inject monotonic development order constraints into the learning process explicitly. We demonstrate our algorithm on two time-lapse embryo video datasets: i) mouse and ii) human embryo datasets. Our method achieves 98.1% and 80.6% accuracy for mouse and human embryo stage classification, respectively. Our approach will enable more profound clinical and biological studies and suggests a new direction for developmental stage classification by utilizing temporal information.
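The monotonic order constraint can be injected into linear-chain decoding by forbidding transitions that decrease the stage index. A simplified Viterbi sketch over plain per-frame scores (the paper uses learned CRF potentials; this is an illustrative reduction):

```python
def monotonic_viterbi(emission):
    """Decode the best stage sequence under a non-decreasing-stage constraint.

    emission[t][s] is the score of stage s at frame t. Because only
    transitions from stage p <= s are allowed, the best predecessor of
    stage s is a running prefix maximum over the previous frame's scores.
    """
    T, S = len(emission), len(emission[0])
    score = list(emission[0])
    back = []  # back[t-1][s]: best predecessor stage for s at frame t
    for t in range(1, T):
        prev_best, prev_arg = float("-inf"), 0
        new, args = [], []
        for s in range(S):
            if score[s] > prev_best:          # extend prefix max to stage s
                prev_best, prev_arg = score[s], s
            new.append(emission[t][s] + prev_best)
            args.append(prev_arg)
        score, _ = new, back.append(args)
    s = max(range(S), key=score.__getitem__)
    path = [s]
    for args in reversed(back):               # backtrack
        s = args[s]
        path.append(s)
    return path[::-1]
```

For example, a frame whose raw argmax would jump backward in development is overridden by the order constraint.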

5.
Article in English | MEDLINE | ID: mdl-34587072

ABSTRACT

Table2Text systems generate textual output based on structured data utilizing machine learning. These systems are essential for fluent natural language interfaces in tools such as virtual assistants; however, left to generate freely, these ML systems often produce misleading or unexpected outputs. GenNI (Generation Negotiation Interface) is an interactive visual system for high-level human-AI collaboration in producing descriptive text. The tool utilizes a deep learning model designed with explicit control states. These controls allow users to globally constrain model generations without sacrificing the representation power of the deep learning models. The visual interface makes it possible for users to interact with AI systems following a Refine-Forecast paradigm to ensure that the generation system acts in a manner human users find suitable. We report on multiple use cases in two experiments, showing improvements over uncontrolled generation approaches while providing fine-grained control. A demo and source code are available at https://genni.vizhub.ai.

6.
IEEE Comput Graph Appl ; 41(6): 37-47, 2021.
Article in English | MEDLINE | ID: mdl-34559644

ABSTRACT

We present how to integrate Design Sprints and project-based learning into introductory visualization courses. A design sprint is a unique process based on rapid prototyping and user testing to define goals and validate ideas before starting costly development. The well-defined, interactive, and time-constrained design cycle makes design sprints a promising option for teaching project-based and active-learning-centered courses to increase student engagement and hands-on experience. Over the past five years, we have adjusted the design sprint methodology for teaching a range of visualization courses. We present a detailed guide on incorporating design sprints into large undergraduate and small professional development courses in both online and on-campus settings. Design sprint results, including quantitative and qualitative student feedback, show that design sprints engage students and help them practice and apply visualization and design skills. We provide design sprint teaching materials, show examples of student-created work, and discuss limitations and lessons learned.

7.
Nature ; 593(7858): 238-243, 2021 05.
Article in English | MEDLINE | ID: mdl-33828297

ABSTRACT

Genome-wide association studies (GWAS) have identified thousands of noncoding loci that are associated with human diseases and complex traits, each of which could reveal insights into the mechanisms of disease [1]. Many of the underlying causal variants may affect enhancers [2,3], but we lack accurate maps of enhancers and their target genes to interpret such variants. We recently developed the activity-by-contact (ABC) model to predict which enhancers regulate which genes and validated the model using CRISPR perturbations in several cell types [4]. Here we apply this ABC model to create enhancer-gene maps in 131 human cell types and tissues, and use these maps to interpret the functions of GWAS variants. Across 72 diseases and complex traits, ABC links 5,036 GWAS signals to 2,249 unique genes, including a class of 577 genes that appear to influence multiple phenotypes through variants in enhancers that act in different cell types. In inflammatory bowel disease (IBD), causal variants are enriched in predicted enhancers by more than 20-fold in particular cell types such as dendritic cells, and ABC achieves higher precision than other regulatory methods at connecting noncoding variants to target genes. These variant-to-function maps reveal an enhancer that contains an IBD risk variant and that regulates the expression of PPIF to alter the membrane potential of mitochondria in macrophages. Our study reveals principles of genome regulation, identifies genes that affect IBD and provides a resource and generalizable strategy to connect risk variants of common diseases to their molecular and cellular functions.
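The ABC model scores each candidate enhancer for a gene as the element's activity times its contact frequency with the gene's promoter, normalized over all candidate elements for that gene. A minimal sketch of the scoring step (inputs here are toy numbers, not real activity or Hi-C measurements):

```python
def abc_scores(activity, contact):
    """ABC score for element e and gene g:
    score_e = (A_e * C_e,g) / sum over all elements e' of (A_e' * C_e',g).

    activity and contact are parallel lists over candidate elements for
    one gene; scores therefore sum to 1 across those elements.
    """
    products = [a * c for a, c in zip(activity, contact)]
    total = sum(products)
    return [p / total for p in products]
```

In practice a threshold on the score (rather than an argmax) is used to call enhancer-gene links.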


Subjects
Enhancer Elements, Genetic/genetics , Genetic Predisposition to Disease , Genetic Variation/genetics , Genome, Human/genetics , Genome-Wide Association Study , Inflammatory Bowel Diseases/genetics , Cell Line , Chromosomes, Human, Pair 10/genetics , Cyclophilins/genetics , Dendritic Cells , Female , Humans , Macrophages/metabolism , Male , Mitochondria/metabolism , Organ Specificity/genetics , Phenotype
8.
IEEE Trans Vis Comput Graph ; 27(2): 1214-1224, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33048730

ABSTRACT

Abstract data has no natural scale, so interactive data visualizations must provide techniques to allow the user to choose their viewpoint and scale. Such techniques are well established in desktop visualization tools. The two most common techniques are zoom+pan and overview+detail. However, how best to enable the analyst to navigate and view abstract data at different levels of scale in immersive environments has not previously been studied. We report the findings of the first systematic study of immersive navigation techniques for 3D scatterplots. We tested four conditions that represent our best attempt to adapt standard 2D navigation techniques to data visualization in an immersive environment while still providing standard immersive navigation techniques through physical movement and teleportation. We compared room-sized visualization versus a zooming interface, each with and without an overview. We find significant differences in participants' response times and accuracy for a number of standard visual analysis tasks. Both zoom and overview provide benefits over standard locomotion support alone (i.e., physical movement and pointer teleportation). However, which variation is superior depends on the task. We obtain a more nuanced understanding of the results by analyzing them in terms of a time-cost model for the different components of navigation: way-finding, travel, number of travel steps, and context switching.

9.
IEEE Trans Vis Comput Graph ; 27(2): 283-293, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33048741

ABSTRACT

Computing and visualizing features in fluid flow often depends on the observer, or reference frame, relative to which the input velocity field is given. A desired property of feature detectors is therefore that they are objective, meaning independent of the input reference frame. However, the standard definition of objectivity is only given for Euclidean domains and cannot be applied in curved spaces. We build on methods from mathematical physics and Riemannian geometry to generalize objectivity to curved spaces, using the powerful notion of symmetry groups as the basis for definition. From this, we develop a general mathematical framework for the objective computation of observer fields for curved spaces, relative to which other computed measures become objective. An important property of our framework is that it works intrinsically in 2D, instead of in the 3D ambient space. This enables a direct generalization of the 2D computation via optimization of observer fields in flat space to curved domains, without having to perform optimization in 3D. We specifically develop the case of unsteady 2D geophysical flows given on spheres, such as the Earth. Our observer fields in curved spaces then enable objective feature computation as well as the visualization of the time evolution of scalar and vector fields, such that the automatically computed reference frames follow moving structures like vortices in a way that makes them appear to be steady.
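For context, the standard Euclidean definition of objectivity that the paper generalizes: a quantity is objective if it transforms consistently under every smooth, time-dependent rotation $Q(t)$ and translation $c(t)$ of the observer:

```latex
x^{*}(t) = Q(t)\,x(t) + c(t), \qquad Q(t) \in SO(3).
```

An objective tensor field then satisfies $T^{*} = Q\,T\,Q^{\mathsf T}$, whereas the velocity field itself is not objective, since it picks up frame-motion terms: $v^{*} = Q\,v + \dot{Q}\,x + \dot{c}$. This definition presumes a Euclidean ambient space, which is why the paper must rebuild it from symmetry groups for curved domains such as the sphere.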

10.
IEEE Trans Vis Comput Graph ; 27(2): 358-368, 2021 02.
Article in English | MEDLINE | ID: mdl-33026994

ABSTRACT

Small multiples are miniature representations of visual information used generically across many domains. Handling large numbers of small multiples imposes challenges on many analytic tasks like inspection, comparison, navigation, or annotation. To address these challenges, we developed a framework and implemented a library called Piling.js for designing interactive piling interfaces. Based on the piling metaphor, such interfaces afford flexible organization, exploration, and comparison of large numbers of small multiples by interactively aggregating visual objects into piles. Based on a systematic analysis of previous work, we present a structured design space to guide the design of visual piling interfaces. To enable designers to efficiently build their own visual piling interfaces, Piling.js provides a declarative interface to avoid having to write low-level code and implements common aspects of the design space. An accompanying GUI additionally supports the dynamic configuration of the piling interface. We demonstrate the expressiveness of Piling.js with examples from machine learning, immunofluorescence microscopy, genomics, and public health.

11.
Comput Vis ECCV ; 12363: 103-120, 2020 Aug.
Article in English | MEDLINE | ID: mdl-33345257

ABSTRACT

For large-scale vision tasks in biomedical images, the labeled data is often limited to train effective deep models. Active learning is a common solution, where a query suggestion method selects representative unlabeled samples for annotation, and the new labels are used to improve the base model. However, most query suggestion models optimize their learnable parameters only on the limited labeled data and consequently become less effective for the more challenging unlabeled data. To tackle this, we propose a two-stream active query suggestion approach. In addition to the supervised feature extractor, we introduce an unsupervised one optimized on all raw images to capture diverse image features, which can later be improved by fine-tuning on new labels. As a use case, we build an end-to-end active learning framework with our query suggestion method for 3D synapse detection and mitochondria segmentation in connectomics. With the framework, we curate, to our best knowledge, the largest connectomics dataset with dense synapses and mitochondria annotation. On this new dataset, our method outperforms previous state-of-the-art methods by 3.1% for synapse and 3.8% for mitochondria in terms of region-of-interest proposal accuracy. We also apply our method to image classification, where it outperforms previous approaches on CIFAR-10 under the same limited annotation budget. The project page is https://zudi-lin.github.io/projects/#two_stream_active.
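One way to read the two-stream idea: rank unlabeled samples by their novelty relative to the labeled set in both the supervised and the unsupervised feature space. The distance measure and the weighting `alpha` below are illustrative assumptions, not the paper's exact suggestion criterion:

```python
import math

def _min_dist(x, refs):
    # Distance from a feature vector to its nearest labeled neighbor.
    return min(math.dist(x, r) for r in refs)

def suggest_queries(labeled_ids, unlabeled_ids, feats_sup, feats_unsup,
                    k=1, alpha=0.5):
    """Suggest the k unlabeled samples farthest from the labeled set,
    combining novelty in a supervised and an unsupervised feature space."""
    refs_s = [feats_sup[i] for i in labeled_ids]
    refs_u = [feats_unsup[i] for i in labeled_ids]
    def novelty(i):
        return (alpha * _min_dist(feats_sup[i], refs_s)
                + (1 - alpha) * _min_dist(feats_unsup[i], refs_u))
    return sorted(unlabeled_ids, key=novelty, reverse=True)[:k]
```

Samples suggested this way cover regions of feature space the labeled data has not yet reached, which is the gap the unsupervised stream is meant to close.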

12.
Article in English | MEDLINE | ID: mdl-33283211

ABSTRACT

Interest is growing rapidly in using deep learning to classify biomedical images, and interpreting these deep-learned models is necessary for life-critical decisions and scientific discovery. Effective interpretation techniques accelerate biomarker discovery and provide new insights into the etiology, diagnosis, and treatment of disease. Most interpretation techniques aim to discover spatially-salient regions within images, but few techniques consider imagery with multiple channels of information. For instance, highly multiplexed tumor and tissue images have 30-100 channels and require interpretation methods that work across many channels to provide deep molecular insights. We propose a novel channel embedding method that extracts features from each channel. We then use these features to train a classifier for prediction. Using this channel embedding, we apply an interpretation method to rank the most discriminative channels. To validate our approach, we conduct an ablation study on a synthetic dataset. Moreover, we demonstrate that our method aligns with biological findings on highly multiplexed images of breast cancer cells while outperforming baseline pipelines. Code is available at https://sabdelmagid.github.io/miccai2020-project/.
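The channel-ranking step can be approximated by ablation: measure how much a classifier's output drops when each channel is removed. A toy sketch over channel-mean features (the dict representation and the linear score function are illustrative, not the paper's embedding pipeline):

```python
def rank_channels_by_ablation(score_fn, image, n_channels):
    """Rank channels by how much the model score drops when each channel
    is zeroed out; a larger drop marks a more discriminative channel.

    image maps channel index -> a summary feature (e.g., mean intensity).
    """
    base = score_fn(image)
    drops = []
    for c in range(n_channels):
        ablated = {ch: (v if ch != c else 0.0) for ch, v in image.items()}
        drops.append((base - score_fn(ablated), c))
    return [c for _, c in sorted(drops, reverse=True)]
```

With a multiplexed image of 30-100 channels, this ordering points biologists at the markers driving a given prediction.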

13.
Med Image Comput Comput Assist Interv ; 12265: 66-76, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33283212

ABSTRACT

Electron microscopy (EM) allows the identification of intracellular organelles such as mitochondria, providing insights for clinical and scientific studies. However, public mitochondria segmentation datasets only contain hundreds of instances with simple shapes. It is unclear if existing methods achieving human-level accuracy on these small datasets are robust in practice. To this end, we introduce the MitoEM dataset, a 3D mitochondria instance segmentation dataset with two (30 µm)³ volumes from human and rat cortices, respectively, 3,600× larger than previous benchmarks. With around 40K instances, we find a great diversity of mitochondria in terms of shape and density. For evaluation, we tailor the implementation of the average precision (AP) metric for 3D data with a 45× speedup. On MitoEM, we find existing instance segmentation methods often fail to correctly segment mitochondria with complex shapes or close contacts with other instances. Thus, our MitoEM dataset poses new challenges to the field. We release our code and data: https://donglaiw.github.io/page/mitoEM/index.html.
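Evaluating 3D instance segmentation hinges on matching predicted to ground-truth instances by volumetric overlap before computing AP. A toy sketch with voxel-coordinate sets and greedy IoU matching (the paper's actual implementation is heavily optimized for 40K-instance volumes):

```python
def voxel_iou(a, b):
    """Intersection over union of two instances given as sets of voxel coords."""
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter)

def match_instances(gt, pred, thr=0.5):
    """Greedy one-to-one matching: consider all (gt, pred) pairs in
    decreasing IoU order and accept pairs above the threshold whose
    instances are still unmatched. Returns (gt_index, pred_index) pairs."""
    pairs = sorted(((voxel_iou(g, p), gi, pi)
                    for gi, g in enumerate(gt)
                    for pi, p in enumerate(pred)), reverse=True)
    used_g, used_p, matches = set(), set(), []
    for v, gi, pi in pairs:
        if v >= thr and gi not in used_g and pi not in used_p:
            used_g.add(gi)
            used_p.add(pi)
            matches.append((gi, pi))
    return matches
```

Matched pairs are true positives; unmatched predictions and ground-truth instances become false positives and false negatives for the precision-recall curve.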

14.
Article in English | MEDLINE | ID: mdl-32946396

ABSTRACT

Blazars are celestial bodies of high interest to astronomers. In particular, through the analysis of photometric and polarimetric observations of blazars, astronomers aim to understand the physics of the blazar's relativistic jet. However, it is challenging to recognize correlations and time variations of the observed polarization, intensity, and color of the emitted light. In our prior study, we proposed TimeTubes to visualize a blazar dataset as a 3D volumetric tube. In this paper, we build primarily on the TimeTubes representation of blazar datasets to present a new visual analytics environment named TimeTubesX, into which we have integrated sophisticated feature and pattern detection techniques for effective location of observable and recurring time variation patterns in long-term, multi-dimensional datasets. Automatic feature extraction detects time intervals corresponding to well-known blazar behaviors. Dynamic visual querying allows users to search long-term observations for time intervals similar to a time interval of interest (query-by-example) or a sketch of temporal patterns (query-by-sketch). Users are also allowed to build up another visual query guided by the time interval of interest found in the previous process and refine the results. We demonstrate how TimeTubesX has been used successfully by domain experts for the detailed analysis of blazar datasets and report on the results.
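Query-by-example over observation sequences reduces, at its core, to sliding the query interval over the data and ranking positions by distance. A 1D sketch (TimeTubesX operates on multi-dimensional photometric and polarimetric features; the Euclidean distance here is an illustrative choice):

```python
def best_matching_windows(series, query, k=3):
    """Slide the query over the series and return the k start positions
    whose windows are closest to the query in Euclidean distance."""
    m = len(query)
    scored = []
    for start in range(len(series) - m + 1):
        window = series[start:start + m]
        dist = sum((a - b) ** 2 for a, b in zip(window, query)) ** 0.5
        scored.append((dist, start))
    return [s for _, s in sorted(scored)[:k]]
```

The same ranking backbone serves query-by-sketch once the user's drawn pattern is sampled into a sequence of values.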

15.
Article in English | MEDLINE | ID: mdl-33768192

ABSTRACT

Advances in highly multiplexed tissue imaging are transforming our understanding of human biology by enabling detection and localization of 10-100 proteins at subcellular resolution (Bodenmiller, 2016). Efforts are now underway to create public atlases of multiplexed images of normal and diseased tissues (Rozenblatt-Rosen et al., 2020). Both research and clinical applications of tissue imaging benefit from recording data from complete specimens so that data on cell state and composition can be studied in the context of overall tissue architecture. As a practical matter, specimen size is limited by the dimensions of microscopy slides (2.5 × 7.5 cm, or ~2-8 cm² of tissue depending on shape). With current microscopy technology, specimens of this size can be imaged at sub-micron resolution across ~60 spectral channels and ~10⁶ cells, resulting in image files of terabyte size. However, the rich detail and multiscale properties of these images pose a substantial computational challenge (Rashid et al., 2020). See Rashid et al. (2020) for a comparison of existing visualization tools targeting these multiplexed tissue images.

16.
IEEE Trans Vis Comput Graph ; 26(1): 884-894, 2020 01.
Article in English | MEDLINE | ID: mdl-31425116

ABSTRACT

Automation of tasks can have critical consequences when humans lose agency over decision processes. Deep learning models are particularly susceptible since current black-box approaches lack explainable reasoning. We argue that both the visual interface and model structure of deep learning systems need to take into account interaction design. We propose a framework of collaborative semantic inference (CSI) for the co-design of interactions and models to enable visual collaboration between humans and algorithms. The approach exposes the intermediate reasoning process of models, allowing semantic interactions with the visual metaphors of a problem; a user can thus both understand and control parts of the model's reasoning process. We demonstrate the feasibility of CSI with a co-designed case study of a document summarization system.


Subjects
Computer Graphics , Deep Learning , Semantics , Computer Simulation , Humans , User-Computer Interface , Writing
17.
IEEE Trans Vis Comput Graph ; 26(1): 611-621, 2020 01.
Article in English | MEDLINE | ID: mdl-31442989

ABSTRACT

We present Scalable Insets, a technique for interactively exploring and navigating large numbers of annotated patterns in multiscale visualizations such as gigapixel images, matrices, or maps. Exploration of many but sparsely-distributed patterns in multiscale visualizations is challenging as visual representations change across zoom levels, context and navigational cues get lost upon zooming, and navigation is time consuming. Our technique visualizes annotated patterns too small to be identifiable at certain zoom levels using insets, i.e., magnified thumbnail views of the annotated patterns. Insets support users in searching, comparing, and contextualizing patterns while reducing the amount of navigation needed. They are dynamically placed either within the viewport or along the boundary of the viewport to offer a compromise between locality and context preservation. Annotated patterns are interactively clustered by location and type. They are visually represented as an aggregated inset to provide scalable exploration within a single viewport. In a controlled user study with 18 participants, we found that Scalable Insets can speed up visual search and improve the accuracy of pattern comparison at the cost of slower frequency estimation compared to a baseline technique. A second study with 6 experts in the field of genomics showed that Scalable Insets is easy to learn and provides first insights into how Scalable Insets can be applied in an open-ended data exploration scenario.
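The boundary placement can be sketched as projecting each off-viewport pattern toward the viewport center and pinning its inset where that ray exits the viewport. A 2D sketch (the function name and the axis-aligned viewport are our simplifications of the technique):

```python
def boundary_inset_position(center, half_w, half_h, target):
    """Place an inset for an annotated pattern at `target`: in-view patterns
    keep their own position; off-view patterns are projected along the ray
    from the viewport center onto the viewport boundary."""
    dx, dy = target[0] - center[0], target[1] - center[1]
    if abs(dx) <= half_w and abs(dy) <= half_h:
        return target  # in-view: inset can sit at the pattern itself
    # Scale the ray so it just touches the nearer viewport edge.
    scale = min(half_w / abs(dx) if dx else float("inf"),
                half_h / abs(dy) if dy else float("inf"))
    return (center[0] + dx * scale, center[1] + dy * scale)
```

Keeping the inset on the ray toward the pattern preserves its direction as a navigational cue, which is the locality/context trade-off the paper describes.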


Subjects
Computer Graphics , Data Curation , User-Computer Interface , Adolescent , Adult , Algorithms , Databases, Factual , Female , Genomics , Humans , Male , Maps as Topic , Task Performance and Analysis , Young Adult
18.
IEEE Trans Vis Comput Graph ; 26(1): 227-237, 2020 01.
Article in English | MEDLINE | ID: mdl-31514138

ABSTRACT

Facetto is a scalable visual analytics application that is used to discover single-cell phenotypes in high-dimensional multi-channel microscopy images of human tumors and tissues. Such images represent the cutting edge of digital histology and promise to revolutionize how diseases such as cancer are studied, diagnosed, and treated. Highly multiplexed tissue images are complex, comprising 10⁹ or more pixels, 60-plus channels, and millions of individual cells. This makes manual analysis challenging and error-prone. Existing automated approaches are also inadequate, in large part, because they are unable to effectively exploit the deep knowledge of human tissue biology available to anatomic pathologists. To overcome these challenges, Facetto enables a semi-automated analysis of cell types and states. It integrates unsupervised and supervised learning into the image and feature exploration process and offers tools for analytical provenance. Experts can cluster the data to discover new types of cancer and immune cells and use clustering results to train a convolutional neural network that classifies new cells accordingly. Likewise, the output of classifiers can be clustered to discover aggregate patterns and phenotype subsets. We also introduce a new hierarchical approach to keep track of analysis steps and data subsets created by users; this assists in the identification of cell types. Users can build phenotype trees and interact with the resulting hierarchical structures of both high-dimensional feature and image spaces. We report on use-cases in which domain scientists explore various large-scale fluorescence imaging datasets. We demonstrate how Facetto assists users in steering the clustering and classification process, inspecting analysis results, and gaining new scientific insights into cancer biology.


Subjects
Image Interpretation, Computer-Assisted/methods , Machine Learning , Neoplasms , Neural Networks, Computer , Cluster Analysis , Humans , Neoplasms/classification , Neoplasms/diagnostic imaging , Neoplasms/pathology , Phenotype , Software , Systems Biology
19.
Comput Graph Forum ; 39(3): 167-179, 2020 Jun.
Article in English | MEDLINE | ID: mdl-34334852

ABSTRACT

We present Peax, a novel feature-based technique for interactive visual pattern search in sequential data, like time series or data mapped to a genome sequence. Visually searching for patterns by similarity is often challenging because of the large search space, the visual complexity of patterns, and the user's perception of similarity. For example, in genomics, researchers try to link patterns in multivariate sequential data to cellular or pathogenic processes, but a lack of ground truth and high variance makes automatic pattern detection unreliable. We have developed a convolutional autoencoder for unsupervised representation learning of regions in sequential data that can capture more visual details of complex patterns compared to existing similarity measures. Using this learned representation as features of the sequential data, our accompanying visual query system enables interactive feedback-driven adjustments of the pattern search to adapt to the users' perceived similarity. Using an active learning sampling strategy, Peax collects user-generated binary relevance feedback. This feedback is used to train a model for binary classification, to ultimately find other regions that exhibit patterns similar to the search target. We demonstrate Peax's features through a case study in genomics and report on a user study with eight domain experts to assess the usability and usefulness of Peax. Moreover, we evaluate the effectiveness of the learned feature representation for visual similarity search in two additional user studies. We find that our models retrieve significantly more similar patterns than other commonly used techniques.
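The feedback loop can be approximated as re-ranking candidate windows in the learned feature space by their affinity to user-labeled positives versus negatives. The centroid-based score below is a stand-in for Peax's trained binary classifier, not its actual model:

```python
import math

def rank_by_feedback(embeddings, positive_ids, negative_ids):
    """Rank unlabeled windows by (distance to the negative centroid) minus
    (distance to the positive centroid) in the learned feature space, so
    windows resembling the user's positive examples come first."""
    def centroid(ids):
        vecs = [embeddings[i] for i in ids]
        return tuple(sum(c) / len(vecs) for c in zip(*vecs))
    pos, neg = centroid(positive_ids), centroid(negative_ids)
    labeled = set(positive_ids) | set(negative_ids)
    def score(i):
        return math.dist(embeddings[i], neg) - math.dist(embeddings[i], pos)
    unlabeled = [i for i in embeddings if i not in labeled]
    return sorted(unlabeled, key=score, reverse=True)
```

Each round of binary relevance feedback shifts the centroids (or, in Peax, retrains the classifier), progressively adapting the ranking to the user's perceived similarity.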

20.
Cell Rep ; 29(9): 2849-2861.e6, 2019 11 26.
Article in English | MEDLINE | ID: mdl-31775050

ABSTRACT

During postnatal development, cerebellar climbing fibers alter their innervation strengths onto supernumerary Purkinje cell targets, generating a one-to-few connectivity pattern in adulthood. To gain insight into the processes responsible for this remapping, we reconstructed serial electron microscopy datasets from mice during the first postnatal week. Between days 3 and 7, individual climbing fibers selectively add many synapses onto a subset of Purkinje targets in a positive-feedback manner, without pruning synapses from other targets. Active zone sizes of synapses associated with powerful versus weak inputs are indistinguishable. Changes in synapse number are thus the predominant form of early developmental plasticity. Finally, the numbers of climbing fibers and Purkinje cells in a local region nearly match. Initial over-innervation of Purkinje cells by climbing fibers is therefore economical: the number of axons entering a region is enough to assure that each ultimately retains a postsynaptic target and that none branched there in vain.


Subjects
Cerebellum/physiopathology , Nerve Fibers/metabolism , Synapses/metabolism , Animals , Humans , Mice