Results 1 - 20 of 21
1.
Article in English | MEDLINE | ID: mdl-34882554

ABSTRACT

Augmented Reality (AR) embeds digital information into objects of the physical world. Data can be shown in-situ, thereby enabling real-time visual comparisons and object search in real-life user tasks, such as comparing products and looking up scores in a sports game. While there have been studies on designing AR interfaces for situated information retrieval, there has only been limited research on AR object labeling for visual search tasks in the spatial environment. In this paper, we identify and categorize different design aspects in AR label design and report on a formal user study on labels for out-of-view objects to support visual search tasks in AR. We design three visualization techniques for out-of-view object labeling in AR, which respectively encode the relative physical position (height-encoded), the rotational direction (angle-encoded), and the label values (value-encoded) of the objects. We further implement two traditional in-view object labeling techniques, where labels are placed either next to the respective objects (situated) or at the edge of the AR FoV (boundary). We evaluate these five different label conditions in three visual search tasks for static objects. Our study shows that out-of-view object labels are beneficial when searching for objects outside the FoV, for spatial orientation, and when comparing multiple spatially sparse objects. Angle-encoded labels with directional cues of the surrounding objects have the overall best performance with the highest user satisfaction. We discuss the implications of our findings for future immersive AR interface design.
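
A minimal sketch of the kind of computation an angle-encoded label needs: the signed horizontal angle from the current view direction to an out-of-view object, which can drive a directional cue. The function name, coordinate convention (Y-up, heading in the XZ plane), and degree output are illustrative assumptions, not details from the study.

```python
import math

def angle_to_object(user_pos, view_dir, obj_pos):
    """Signed horizontal angle (degrees) from the viewing direction to an object,
    measured in the XZ ground plane. A label widget could use the sign to point
    its cue toward the shorter turning direction and the magnitude to show how
    far the object lies outside the field of view."""
    obj_heading = math.atan2(obj_pos[0] - user_pos[0], obj_pos[2] - user_pos[2])
    view_heading = math.atan2(view_dir[0], view_dir[2])
    diff = math.degrees(obj_heading - view_heading)
    return (diff + 180.0) % 360.0 - 180.0        # wrap into [-180, 180)

# Example: user at the origin looking along +Z, object 90 degrees off axis
print(angle_to_object((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (5.0, 0.0, 0.0)))  # 90.0
```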

2.
Article in English | MEDLINE | ID: mdl-34587072

ABSTRACT

Table2Text systems generate textual output based on structured data utilizing machine learning. These systems are essential for fluent natural language interfaces in tools such as virtual assistants; however, left to generate freely, these ML systems often produce misleading or unexpected outputs. GenNI (Generation Negotiation Interface) is an interactive visual system for high-level human-AI collaboration in producing descriptive text. The tool utilizes a deep learning model designed with explicit control states. These controls allow users to globally constrain model generations without sacrificing the representation power of the deep learning models. The visual interface makes it possible for users to interact with AI systems following a Refine-Forecast paradigm to ensure that the generation system acts in a manner human users find suitable. We report multiple use cases on two experiments that improve over uncontrolled generation approaches while at the same time providing fine-grained control. A demo and source code are available at https://genni.vizhub.ai.
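
To make the idea of explicit control states concrete, here is a toy, hedged sketch of constrained decoding: at each generation step only tokens permitted by the active control state are considered. The vocabulary, scores, and masking scheme are invented for illustration; GenNI's actual controls operate on a learned Table2Text model.

```python
def constrained_decode(step_logits, allowed_per_step, vocab):
    """Greedy decoding under per-step constraints: tokens outside the active
    control state's allowed set are masked out before picking the best token."""
    output = []
    for logits, allowed in zip(step_logits, allowed_per_step):
        candidates = {tok: score for tok, score in zip(vocab, logits) if tok in allowed}
        output.append(max(candidates, key=candidates.get))
    return output

vocab = ["won", "lost", "drew", "5", "9"]
step_logits = [[0.1, 2.0, 0.3, 0.0, 0.0],   # model prefers "lost" ...
               [0.0, 0.0, 0.0, 1.2, 0.4]]
allowed = [{"won", "drew"}, {"5", "9"}]     # ... but the control state forbids it
print(constrained_decode(step_logits, allowed, vocab))   # ['drew', '5']
```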

3.
IEEE Comput Graph Appl ; 41(6): 37-47, 2021.
Article in English | MEDLINE | ID: mdl-34559644

ABSTRACT

We present how to integrate Design Sprints and project-based learning into introductory visualization courses. A design sprint is a unique process based on rapid prototyping and user testing to define goals and validate ideas before starting costly development. The well-defined, interactive, and time-constrained design cycle makes design sprints a promising option for teaching project-based and active-learning-centered courses to increase student engagement and hands-on experience. Over the past five years, we have adjusted the design sprint methodology for teaching a range of visualization courses. We present a detailed guide on incorporating design sprints into large undergraduate and small professional development courses in both online and on-campus settings. Design sprint results, including quantitative and qualitative student feedback, show that design sprints engage students and help them practice and apply visualization and design skills. We provide design sprint teaching materials, show examples of student-created work, and discuss limitations and lessons learned.

4.
IEEE Trans Vis Comput Graph ; 27(2): 1214-1224, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33048730

ABSTRACT

Abstract data has no natural scale, and so interactive data visualizations must provide techniques to allow the user to choose their viewpoint and scale. Such techniques are well established in desktop visualization tools. The two most common techniques are zoom+pan and overview+detail. However, how best to enable the analyst to navigate and view abstract data at different levels of scale in immersive environments has not previously been studied. We report the findings of the first systematic study of immersive navigation techniques for 3D scatterplots. We tested four conditions that represent our best attempt to adapt standard 2D navigation techniques to data visualization in an immersive environment while still providing standard immersive navigation techniques through physical movement and teleportation. We compared room-sized visualization versus a zooming interface, each with and without an overview. We find significant differences in participants' response times and accuracy for a number of standard visual analysis tasks. Both zoom and overview provide benefits over standard locomotion support alone (i.e., physical movement and pointer teleportation). However, which variation is superior depends on the task. We obtain a more nuanced understanding of the results by analyzing them in terms of a time-cost model for the different components of navigation: way-finding, travel, number of travel steps, and context switching.
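
The abstract names four cost components of navigation; below is a hedged reading of what an additive time-cost model over those components could look like. The decomposition and numbers are illustrative assumptions, not the authors' fitted model.

```python
def navigation_time(wayfinding, travel_per_step, n_steps, context_switch, n_switches):
    """Total navigation time as a sum of the components named in the abstract:
    way-finding, per-step travel cost times number of travel steps, and
    context-switching cost times number of switches."""
    return wayfinding + n_steps * travel_per_step + n_switches * context_switch

# e.g. a zooming interface might trade fewer travel steps for more context switches
zoom     = navigation_time(wayfinding=2.0, travel_per_step=1.5, n_steps=2,
                           context_switch=0.8, n_switches=4)
physical = navigation_time(wayfinding=2.0, travel_per_step=1.5, n_steps=6,
                           context_switch=0.8, n_switches=1)
print(zoom, physical)   # 8.2 vs 11.8 seconds under these made-up parameters
```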

5.
IEEE Trans Vis Comput Graph ; 27(2): 283-293, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33048741

ABSTRACT

Computing and visualizing features in fluid flow often depends on the observer, or reference frame, relative to which the input velocity field is given. A desired property of feature detectors is therefore that they are objective, meaning independent of the input reference frame. However, the standard definition of objectivity is only given for Euclidean domains and cannot be applied in curved spaces. We build on methods from mathematical physics and Riemannian geometry to generalize objectivity to curved spaces, using the powerful notion of symmetry groups as the basis for definition. From this, we develop a general mathematical framework for the objective computation of observer fields for curved spaces, relative to which other computed measures become objective. An important property of our framework is that it works intrinsically in 2D, instead of in the 3D ambient space. This enables a direct generalization of the 2D computation via optimization of observer fields in flat space to curved domains, without having to perform optimization in 3D. We specifically develop the case of unsteady 2D geophysical flows given on spheres, such as the Earth. Our observer fields in curved spaces then enable objective feature computation as well as the visualization of the time evolution of scalar and vector fields, such that the automatically computed reference frames follow moving structures like vortices in a way that makes them appear to be steady.
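
For context, the standard Euclidean definition of objectivity that the paper generalizes can be written compactly; this is the classical textbook formulation (invariance under time-dependent rotations and translations of the observer), quoted as background rather than taken from the paper itself.

```latex
% Change of Euclidean observer (reference frame):
\[
  \mathbf{x}^{*}(t) \;=\; Q(t)\,\mathbf{x}(t) + \mathbf{c}(t),
  \qquad Q(t) \in SO(3).
\]
% A scalar field s, vector field v, and tensor field T are objective iff
\[
  s^{*} = s, \qquad
  \mathbf{v}^{*} = Q(t)\,\mathbf{v}, \qquad
  T^{*} = Q(t)\,T\,Q(t)^{\mathsf{T}}.
\]
```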

6.
Article in English | MEDLINE | ID: mdl-32946396

ABSTRACT

Blazars are celestial bodies of high interest to astronomers. In particular, through the analysis of photometric and polarimetric observations of blazars, astronomers aim to understand the physics of the blazar's relativistic jet. However, it is challenging to recognize correlations and time variations of the observed polarization, intensity, and color of the emitted light. In our prior study, we proposed TimeTubes to visualize a blazar dataset as a 3D volumetric tube. In this paper, we build primarily on the TimeTubes representation of blazar datasets to present a new visual analytics environment named TimeTubesX, into which we have integrated sophisticated feature and pattern detection techniques for effective location of observable and recurring time variation patterns in long-term, multi-dimensional datasets. Automatic feature extraction detects time intervals corresponding to well-known blazar behaviors. Dynamic visual querying allows users to search long-term observations for time intervals similar to a time interval of interest (query-by-example) or a sketch of temporal patterns (query-by-sketch). Users are also allowed to build up another visual query guided by the time interval of interest found in the previous process and refine the results. We demonstrate how TimeTubesX has been used successfully by domain experts for the detailed analysis of blazar datasets and report on the results.
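
As a hedged illustration of query-by-example over long multidimensional observations, the sketch below slides a query interval across a series and ranks candidate intervals by Euclidean distance; TimeTubesX's actual matching and feature extraction are more sophisticated, and the array shapes here are invented.

```python
import numpy as np

def query_by_example(series, query, top_k=5):
    """Rank all intervals of the same length as `query` (time x dims arrays)
    by Euclidean distance to it and return the best-matching start indices."""
    n, m = len(series), len(query)
    scores = [(np.linalg.norm(series[s:s + m] - query), s) for s in range(n - m + 1)]
    return sorted(scores)[:top_k]

# toy data: 500 time steps, 4 observed dimensions (e.g. flux, colour, polarization)
rng = np.random.default_rng(0)
obs = rng.normal(size=(500, 4))
print(query_by_example(obs, obs[100:130]))   # the query interval itself ranks first
```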

7.
IEEE Trans Vis Comput Graph ; 26(1): 227-237, 2020 01.
Article in English | MEDLINE | ID: mdl-31514138

ABSTRACT

Facetto is a scalable visual analytics application that is used to discover single-cell phenotypes in high-dimensional multi-channel microscopy images of human tumors and tissues. Such images represent the cutting edge of digital histology and promise to revolutionize how diseases such as cancer are studied, diagnosed, and treated. Highly multiplexed tissue images are complex, comprising 10⁹ or more pixels, 60-plus channels, and millions of individual cells. This makes manual analysis challenging and error-prone. Existing automated approaches are also inadequate, in large part, because they are unable to effectively exploit the deep knowledge of human tissue biology available to anatomic pathologists. To overcome these challenges, Facetto enables a semi-automated analysis of cell types and states. It integrates unsupervised and supervised learning into the image and feature exploration process and offers tools for analytical provenance. Experts can cluster the data to discover new types of cancer and immune cells and use clustering results to train a convolutional neural network that classifies new cells accordingly. Likewise, the output of classifiers can be clustered to discover aggregate patterns and phenotype subsets. We also introduce a new hierarchical approach to keep track of analysis steps and data subsets created by users; this assists in the identification of cell types. Users can build phenotype trees and interact with the resulting hierarchical structures of both high-dimensional feature and image spaces. We report on use-cases in which domain scientists explore various large-scale fluorescence imaging datasets. We demonstrate how Facetto assists users in steering the clustering and classification process, inspecting analysis results, and gaining new scientific insights into cancer biology.
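
A deliberately small sketch of the cluster-then-classify loop the abstract describes, using scikit-learn stand-ins (k-means and logistic regression) in place of Facetto's interactive clustering and CNN; the feature table and all parameters are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
features = rng.normal(size=(10_000, 12))      # toy per-cell features: 10k cells x 12 channels

# 1) unsupervised step: propose candidate phenotypes by clustering
cluster_labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(features)

# 2) supervised step: treat the (expert-curated) cluster labels as training data
classifier = LogisticRegression(max_iter=1000).fit(features, cluster_labels)

# 3) classify newly imaged cells with the trained model
new_cells = rng.normal(size=(100, 12))
print(classifier.predict(new_cells)[:10])
```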


Subject(s)
Image Interpretation, Computer-Assisted/methods , Machine Learning , Neoplasms , Neural Networks, Computer , Cluster Analysis , Humans , Neoplasms/classification , Neoplasms/diagnostic imaging , Neoplasms/pathology , Phenotype , Software , Systems Biology
8.
Article in English | MEDLINE | ID: mdl-30136947

ABSTRACT

With the rapid increase in raw volume data sizes, such as terabyte-sized microscopy volumes, the corresponding segmentation label volumes have become extremely large as well. We focus on integer label data, whose efficient representation in memory, as well as fast random data access, pose an even greater challenge than the raw image data. Often, it is crucial to be able to rapidly identify which segments are located where, whether for empty space skipping for fast rendering, or for spatial proximity queries. We refer to this process as culling. In order to enable efficient culling of millions of labeled segments, we present a novel hybrid approach that combines deterministic and probabilistic representations of label data in a data-adaptive hierarchical data structure that we call the label list tree. In each node, we adaptively encode label data using either a probabilistic constant-time access representation for fast conservative culling, or a deterministic logarithmic-time access representation for exact queries. We choose the best data structures for representing the labels of each spatial region while building the label list tree. At run time, we further employ a novel query-adaptive culling strategy. While filtering a query down the tree, we prune it successively, and in each node adaptively select the representation that is best suited for evaluating the pruned query, depending on its size. We show an analysis of the efficiency of our approach with several large data sets from connectomics, including a brain scan with more than 13 million labeled segments, and compare our method to conventional culling approaches. Our approach achieves significant reductions in storage size as well as faster query times.
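
To make the hybrid deterministic/probabilistic idea concrete, here is a hedged toy node that keeps small label sets exactly and summarizes large ones with a Bloom filter for conservative ("might contain") culling queries; the class names, thresholds, and filter sizes are assumptions, not the paper's data structure.

```python
import hashlib

class BloomSet:
    """Constant-time probabilistic membership with possible false positives --
    enough for conservative culling ("this region might contain the label")."""
    def __init__(self, labels, bits=1 << 17, hashes=3):
        self.bits, self.hashes, self.mask = bits, hashes, 0
        for label in labels:
            for pos in self._positions(label):
                self.mask |= 1 << pos

    def _positions(self, label):
        for i in range(self.hashes):
            digest = hashlib.blake2b(f"{i}:{label}".encode(), digest_size=8).digest()
            yield int.from_bytes(digest, "little") % self.bits

    def might_contain(self, label):
        return all(self.mask >> pos & 1 for pos in self._positions(label))

class LabelNode:
    """One node of a toy label-list tree: small label sets are stored exactly
    for exact queries, large ones probabilistically for fast conservative culling."""
    def __init__(self, labels, exact_limit=64):
        labels = list(labels)
        self.exact = frozenset(labels) if len(labels) <= exact_limit else None
        self.bloom = BloomSet(labels) if self.exact is None else None

    def might_contain(self, label):
        return label in self.exact if self.exact is not None else self.bloom.might_contain(label)

node = LabelNode(range(10_000))                              # large set -> Bloom representation
print(node.might_contain(123), node.might_contain(10**9))   # True, (almost certainly) False
```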

9.
IEEE Trans Vis Comput Graph ; 24(1): 974-983, 2018 01.
Article in English | MEDLINE | ID: mdl-28866532

ABSTRACT

Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activate a different set of objects, which makes it unfeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.
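
A hedged sketch of what the ray-casting stage consumes: a per-pixel list of non-empty ray segments, so the sampling loop simply leaps from one segment to the next without any hierarchy traversal. The compositing loop and the toy sample function are illustrative, not SparseLeap's GPU implementation.

```python
def march_segments(segments, step, sample):
    """Front-to-back compositing restricted to precomputed non-empty ray segments.
    `segments` is a list of (t_enter, t_exit) intervals along the ray; `sample(t)`
    returns an (r, g, b) colour and an opacity at ray parameter t. Everything
    between segments is empty space and is skipped outright."""
    color, alpha = [0.0, 0.0, 0.0], 0.0
    for t_enter, t_exit in segments:
        t = t_enter
        while t < t_exit and alpha < 0.99:           # early ray termination
            (r, g, b), a = sample(t)
            weight = (1.0 - alpha) * a
            color = [c + weight * s for c, s in zip(color, (r, g, b))]
            alpha += weight
            t += step
    return color, alpha

# toy medium: constant semi-transparent red inside two non-empty segments
print(march_segments([(0.0, 0.2), (0.7, 0.9)], 0.05, lambda t: ((1.0, 0.0, 0.0), 0.1)))
```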

10.
IEEE Trans Vis Comput Graph ; 24(1): 853-861, 2018 01.
Article in English | MEDLINE | ID: mdl-28866534

ABSTRACT

This paper presents Abstractocyte, a system for the visual analysis of astrocytes and their relation to neurons, in nanoscale volumes of brain tissue. Astrocytes are glial cells, i.e., non-neuronal cells that support neurons and the nervous system. The study of astrocytes has immense potential for understanding brain function. However, their complex and widely-branching structure requires high-resolution electron microscopy imaging and makes visualization and analysis challenging. Furthermore, the structure and function of astrocytes is very different from neurons, and therefore requires the development of new visualization and analysis tools. With Abstractocyte, biologists can explore the morphology of astrocytes using various visual abstraction levels, while simultaneously analyzing neighboring neurons and their connectivity. We define a novel, conceptual 2D abstraction space for jointly visualizing astrocytes and neurons. Neuroscientists can choose a specific joint visualization as a point in this space. Interactively moving this point allows them to smoothly transition between different abstraction levels in an intuitive manner. In contrast to simply switching between different visualizations, this preserves the visual context and correlations throughout the transition. Users can smoothly navigate from concrete, highly-detailed 3D views to simplified and abstracted 2D views. In addition to investigating astrocytes, neurons, and their relationships, we enable the interactive analysis of the distribution of glycogen, which is of high importance to neuroscientists. We describe the design of Abstractocyte, and present three case studies in which neuroscientists have successfully used our system to assess astrocytic coverage of synapses, glycogen distribution in relation to synapses, and astrocytic-mitochondria coverage.


Subject(s)
Astrocytes/cytology , Connectome/methods , Imaging, Three-Dimensional/methods , Software , Computer Graphics , Humans , Neurons/cytology
11.
IEEE Trans Vis Comput Graph ; 24(1): 457-467, 2018 01.
Article in English | MEDLINE | ID: mdl-28866590

ABSTRACT

We report on a controlled user study comparing three visualization environments for common 3D exploration. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The novel head-mounted HoloLens display projects stereoscopic images of virtual content into a user's real world and allows for interaction in-situ at the spatial position of the 3D hologram. The tablet is able to interact with 3D content through touch, spatial positioning, and tangible markers; however, 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that better match human perceptual and interaction capabilities to the task at hand improve understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study is located at a different position along these three dimensions. We asked 15 participants to perform four tasks, each task having different levels of difficulty for both spatial perception and degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that generally the desktop environment is still fastest and most precise in almost all cases.


Subject(s)
Computer Graphics , Holography/methods , Imaging, Three-Dimensional/methods , User-Computer Interface , Virtual Reality , Female , Humans , Male , Perception , Task Performance and Analysis
12.
Oncotarget ; 7(25): 38408-38426, 2016 Jun 21.
Article in English | MEDLINE | ID: mdl-27224909

ABSTRACT

We have previously shown that stromal cells desensitize breast cancer cells to the anti-estrogen fulvestrant and, along with it, downregulate the expression of TMEM26 (transmembrane protein 26). In an effort to study the function and regulation of TMEM26 in breast cancer cells, we found that breast cancer cells express non-glycosylated and N-glycosylated isoforms of the TMEM26 protein and demonstrate that N-glycosylation is important for its retention at the plasma membrane. Fulvestrant induced significant changes in expression and in the N-glycosylation status of TMEM26. In primary breast cancer, TMEM26 protein expression was higher in ERα (estrogen receptor α)/PR (progesterone receptor)-positive cancers. These data suggest that ERα is a major regulator of TMEM26. Significant changes in TMEM26 expression and N-glycosylation were also found, when MCF-7 and T47D cells acquired fulvestrant resistance. Furthermore, patients who received aromatase inhibitor treatment tend to have a higher risk of recurrence when tumoral TMEM26 protein expression is low. In addition, TMEM26 negatively regulates the expression of integrin ß1, an important factor involved in endocrine resistance. Data obtained by spheroid formation assays confirmed that TMEM26 and integrin ß1 can have opposite effects in breast cancer cells. These data are consistent with the hypothesis that, in ERα-positive breast cancer, TMEM26 may function as a tumor suppressor by impeding the acquisition of endocrine resistance. In contrast, in ERα-negative breast cancer, particularly triple-negative cancer, high TMEM26 expression was found to be associated with a higher risk of recurrence. This implies that TMEM26 has different functions in ERα-positive and -negative breast cancer.


Subject(s)
Breast Neoplasms/drug therapy , Breast Neoplasms/metabolism , Membrane Proteins/biosynthesis , Biomarkers, Pharmacological/metabolism , Breast Neoplasms/genetics , Breast Neoplasms/pathology , Cell Line, Tumor , Drug Resistance, Neoplasm , Estradiol/analogs & derivatives , Estradiol/pharmacology , Estrogen Receptor alpha/biosynthesis , Female , Fulvestrant , Humans , Integrin beta1/biosynthesis , MCF-7 Cells , Membrane Proteins/genetics , Neoplasm Recurrence, Local/genetics , Neoplasm Recurrence, Local/metabolism , Neoplasm Recurrence, Local/pathology , RNA/genetics , RNA/metabolism
13.
IEEE Trans Vis Comput Graph ; 22(1): 738-46, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26529725

ABSTRACT

In the field of connectomics, neuroscientists acquire electron microscopy volumes at nanometer resolution in order to reconstruct a detailed wiring diagram of the neurons in the brain. The resulting image volumes, which often are hundreds of terabytes in size, need to be segmented to identify cell boundaries, synapses, and important cell organelles. However, the segmentation process of a single volume is very complex, time-intensive, and usually performed using a diverse set of tools and many users. To tackle the associated challenges, this paper presents NeuroBlocks, a novel visualization system for tracking the state, progress, and evolution of very large volumetric segmentation data in neuroscience. NeuroBlocks is a multi-user web-based application that seamlessly integrates the diverse set of tools that neuroscientists currently use for manual and semi-automatic segmentation, proofreading, visualization, and analysis. NeuroBlocks is the first system that integrates this heterogeneous tool set, providing crucial support for the management, provenance, accountability, and auditing of large-scale segmentations. We describe the design of NeuroBlocks, starting with an analysis of the domain-specific tasks, their inherent challenges, and our subsequent task abstraction and visual representation. We demonstrate the utility of our design based on two case studies that focus on different user roles and their respective requirements for performing and tracking the progress of segmentation and proofreading in a large real-world connectomics project.


Subject(s)
Computer Graphics , Connectome/methods , Image Processing, Computer-Assisted/methods , Neurosciences/methods , Humans , Internet , User-Computer Interface
14.
IEEE Trans Vis Comput Graph ; 20(12): 2369-78, 2014 Dec.
Article in English | MEDLINE | ID: mdl-26356951

ABSTRACT

We present NeuroLines, a novel visualization technique designed for scalable detailed analysis of neuronal connectivity at the nanoscale level. The topology of 3D brain tissue data is abstracted into a multi-scale, relative distance-preserving subway map visualization that allows domain scientists to conduct an interactive analysis of neurons and their connectivity. Nanoscale connectomics aims at reverse-engineering the wiring of the brain. Reconstructing and analyzing the detailed connectivity of neurons and neurites (axons, dendrites) will be crucial for understanding the brain and its development and diseases. However, the enormous scale and complexity of nanoscale neuronal connectivity pose big challenges to existing visualization techniques in terms of scalability. NeuroLines offers a scalable visualization framework that can interactively render thousands of neurites, and that supports the detailed analysis of neuronal structures and their connectivity. We describe and analyze the design of NeuroLines based on two real-world use-cases of our collaborators in developmental neuroscience, and investigate its scalability to large-scale neuronal connectivity data.
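
The core layout idea, flattening a 3D neurite onto a 1D "subway line" while preserving relative distances, can be sketched as normalized arc-length positions; branching and the multi-scale aggregation NeuroLines performs are beyond this hedged toy.

```python
import math

def linearize(points):
    """Map a 3D polyline (one neurite) to 1D positions proportional to the
    accumulated distance along it, so the relative spacing between points
    survives the projection onto the subway-map axis."""
    positions = [0.0]
    for p, q in zip(points, points[1:]):
        positions.append(positions[-1] + math.dist(p, q))
    total = positions[-1] or 1.0
    return [pos / total for pos in positions]

print(linearize([(0, 0, 0), (3, 4, 0), (3, 4, 12)]))   # [0.0, 0.294..., 1.0]
```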


Subject(s)
Brain/physiology , Computer Graphics , Connectome/methods , Imaging, Three-Dimensional/methods , Nerve Net/physiology , Neurons/physiology , Brain/cytology , Humans , Models, Theoretical , Neurons/cytology , Railroads
15.
IEEE Trans Vis Comput Graph ; 20(12): 2466-75, 2014 Dec.
Article in English | MEDLINE | ID: mdl-26356960

ABSTRACT

Proofreading refers to the manual correction of automatic segmentations of image data. In connectomics, electron microscopy data is acquired at nanometer-scale resolution and results in very large image volumes of brain tissue that require fully automatic segmentation algorithms to identify cell boundaries. However, these algorithms require hundreds of corrections per cubic micron of tissue. Even though this task is time consuming, it is fairly easy for humans to perform corrections through splitting, merging, and adjusting segments during proofreading. In this paper we present the design and implementation of Mojo, a fully-featured single-user desktop application for proofreading, and Dojo, a multi-user web-based application for collaborative proofreading. We evaluate the accuracy and speed of Mojo, Dojo, and Raveler, a proofreading tool from Janelia Farm, through a quantitative user study. We designed a between-subjects experiment and asked non-experts to proofread neurons in a publicly available connectomics dataset. Our results show a significant improvement of corrections using web-based Dojo, when given the same amount of time. In addition, all participants using Dojo reported better usability. We discuss our findings and provide an analysis of requirements for designing visual proofreading software.


Subject(s)
Computer Graphics , Connectome/methods , Imaging, Three-Dimensional/methods , Adult , Algorithms , Animals , Female , Humans , Internet , Male , Mice , Software , Young Adult
16.
IEEE Trans Vis Comput Graph ; 19(12): 2868-77, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24051854

ABSTRACT

This paper presents ConnectomeExplorer, an application for the interactive exploration and query-guided visual analysis of large volumetric electron microscopy (EM) data sets in connectomics research. Our system incorporates a knowledge-based query algebra that supports the interactive specification of dynamically evaluated queries, which enable neuroscientists to pose and answer domain-specific questions in an intuitive manner. Queries are built step by step in a visual query builder, composing more complex queries from combinations of simpler ones. Our application is based on a scalable volume visualization framework that scales to multiple volumes of several teravoxels each, enabling the concurrent visualization and querying of the original EM volume, additional segmentation volumes, neuronal connectivity, and additional metadata comprising a variety of neuronal data attributes. We evaluate our application on a data set of roughly one terabyte of EM data and 750 GB of segmentation data, containing over 4,000 segmented structures and 1,000 synapses. We demonstrate typical use-case scenarios of our collaborators in neuroscience, where our system has enabled them to answer specific scientific questions using interactive querying and analysis on the full-size data for the first time.
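
As a hedged picture of building queries step by step from simpler ones, the toy below treats primitive queries as sets of segment IDs and composes them with set operators; the attributes and predicates are invented, not ConnectomeExplorer's actual algebra.

```python
# toy per-segment metadata table (IDs -> neuronal data attributes)
segments = {
    1: {"type": "axon",     "synapses": 12},
    2: {"type": "dendrite", "synapses": 3},
    3: {"type": "axon",     "synapses": 1},
    4: {"type": "dendrite", "synapses": 9},
}

def where(predicate):
    """A primitive query: the set of segment IDs whose attributes satisfy the predicate."""
    return {sid for sid, attrs in segments.items() if predicate(attrs)}

axons      = where(lambda a: a["type"] == "axon")
well_wired = where(lambda a: a["synapses"] >= 5)
combined   = axons & well_wired        # step-by-step composition of simpler queries
print(sorted(combined))                # -> [1]
```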


Subject(s)
Brain/ultrastructure , Connectome/methods , Data Mining/methods , Imaging, Three-Dimensional/methods , Microscopy, Electron/methods , Nerve Fibers, Myelinated/ultrastructure , User-Computer Interface , Algorithms , Computer Graphics , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
17.
IEEE Comput Graph Appl ; 33(4): 50-61, 2013.
Article in English | MEDLINE | ID: mdl-24808059

ABSTRACT

Recent advances in high-resolution microscopy let neuroscientists acquire neural-tissue volume data of extremely large sizes. However, the tremendous resolution and the high complexity of neural structures present big challenges to storage, processing, and visualization at interactive rates. A proposed system provides interactive exploration of petascale (petavoxel) volumes resulting from high-throughput electron microscopy data streams. The system can concurrently handle multiple volumes and can support the simultaneous visualization of high-resolution voxel segmentation data. Its visualization-driven design restricts most computations to a small subset of the data. It employs a multiresolution virtual-memory architecture for better scalability than previous approaches and for handling incomplete data. Researchers have employed it for a 1-teravoxel mouse cortex volume, of which several hundred axons and dendrites as well as synapses have been segmented and labeled.
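
A hedged, drastically simplified sketch of the multiresolution virtual-memory idea: voxel lookups fall back from the finest resident brick to coarser levels instead of stalling when data is not in memory. Brick size, level layout, and names are assumptions for illustration only.

```python
class BrickedVolume:
    BRICK = 32   # voxels per brick edge (illustrative)

    def __init__(self, levels):
        # levels[0] is the finest level; each level maps brick coordinates to
        # resident voxel data, and a missing key means "not currently in memory"
        self.levels = levels

    def lookup(self, x, y, z):
        """Return (level, brick) for the finest resident brick covering the voxel,
        or None if nothing is resident and a fetch must be scheduled."""
        for lod, bricks in enumerate(self.levels):
            key = ((x >> lod) // self.BRICK, (y >> lod) // self.BRICK, (z >> lod) // self.BRICK)
            if key in bricks:
                return lod, bricks[key]
        return None

vol = BrickedVolume([{}, {(0, 0, 0): "coarse brick data"}])
print(vol.lookup(10, 40, 7))   # falls back to level 1: (1, 'coarse brick data')
```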


Subject(s)
Computer Graphics , Connectome , Database Management Systems , Image Processing, Computer-Assisted/methods , Microscopy, Electron , Animals , Brain/cytology , Brain/physiology , Brain Chemistry , Imaging, Three-Dimensional/methods , Mice , Rats
18.
IEEE Comput Graph Appl ; 30(3): 58-70, 2010.
Article in English | MEDLINE | ID: mdl-20650718

ABSTRACT

Data sets imaged with modern electron microscopes can range from tens of terabytes to about one petabyte. Two new tools, Ssecrett and NeuroTrace, support interactive exploration and analysis of large-scale optical- and electron-microscopy images to help scientists reconstruct complex neural circuits of the mammalian nervous system.


Subject(s)
Brain/anatomy & histology , Computer Graphics , Microscopy, Electron , Models, Neurological , Neurosciences/methods , Software , Brain/physiology , Computational Biology , Databases, Factual , Diagnostic Imaging , Humans , Image Processing, Computer-Assisted
19.
IEEE Trans Vis Comput Graph ; 15(6): 1505-14, 2009.
Article in English | MEDLINE | ID: mdl-19834227

ABSTRACT

Recent advances in scanning technology provide high resolution EM (Electron Microscopy) datasets that allow neuroscientists to reconstruct complex neural connections in a nervous system. However, due to the enormous size and complexity of the resulting data, segmentation and visualization of neural processes in EM data is usually a difficult and very time-consuming task. In this paper, we present NeuroTrace, a novel EM volume segmentation and visualization system that consists of two parts: a semi-automatic multiphase level set segmentation with 3D tracking for reconstruction of neural processes, and a specialized volume rendering approach for visualization of EM volumes. It employs view-dependent on-demand filtering and evaluation of a local histogram edge metric, as well as on-the-fly interpolation and ray-casting of implicit surfaces for segmented neural structures. Both methods are implemented on the GPU for interactive performance. NeuroTrace is designed to be scalable to large datasets and data-parallel hardware architectures. A comparison of NeuroTrace with a commonly used manual EM segmentation tool shows that our interactive workflow is faster and easier to use for the reconstruction of complex neural processes.


Subject(s)
Computer Graphics , Databases, Factual , Image Processing, Computer-Assisted/methods , Microscopy, Electron/methods , Nerve Net/anatomy & histology , Algorithms , Animals , Cerebral Cortex/anatomy & histology , Chi-Square Distribution , Mice , Normal Distribution
20.
IEEE Trans Vis Comput Graph ; 13(6): 1696-703, 2007.
Article in English | MEDLINE | ID: mdl-17968127

ABSTRACT

Surgical approaches tailored to an individual patient's anatomy and pathology have become standard in neurosurgery. Precise preoperative planning of these procedures, however, is necessary to achieve an optimal therapeutic effect. Therefore, multiple radiological imaging modalities are used prior to surgery to delineate the patient's anatomy, neurological function, and metabolic processes. Developing a three-dimensional perception of the surgical approach, however, is traditionally still done by mentally fusing multiple modalities. Concurrent 3D visualization of these datasets can, therefore, improve the planning process significantly. In this paper we introduce an application for planning of individual neurosurgical approaches with high-quality interactive multimodal volume rendering. The application consists of three main modules that allow the user to (1) plan the optimal skin incision and opening of the skull tailored to the underlying pathology; (2) visualize superficial brain anatomy, function, and metabolism; and (3) plan the patient-specific approach for surgery of deep-seated lesions. The visualization is based on direct multi-volume raycasting on graphics hardware, where multiple volumes from different modalities can be displayed concurrently at interactive frame rates. Graphics memory limitations are avoided by performing raycasting on bricked volumes. For preprocessing tasks such as registration or segmentation, the visualization modules are integrated into a larger framework, thus supporting the entire workflow of preoperative planning.
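
A toy sketch of combining classified samples from several co-registered modalities at a single ray position; the blending rule and weights are invented stand-ins for what the described GPU multi-volume raycaster does per sample.

```python
def sample_multimodal(modalities, weights, pos):
    """Blend per-modality (colour, opacity) contributions at one ray position.
    `modalities` maps a name to a callable returning ((r, g, b), alpha) at `pos`."""
    rgb, alpha = [0.0, 0.0, 0.0], 0.0
    for name, sample in modalities.items():
        (r, g, b), a = sample(pos)
        w = weights.get(name, 1.0) * a
        rgb = [c + w * s for c, s in zip(rgb, (r, g, b))]
        alpha = min(1.0, alpha + w)
    return rgb, alpha

modalities = {"MRI": lambda p: ((0.8, 0.8, 0.8), 0.2),
              "PET": lambda p: ((1.0, 0.2, 0.0), 0.5)}
print(sample_multimodal(modalities, {"PET": 0.6}, (10.0, 20.0, 5.0)))
```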


Subject(s)
Computer Graphics , Image Enhancement/methods , Imaging, Three-Dimensional/methods , Models, Neurological , Neurosurgery/methods , Surgery, Computer-Assisted/methods , User-Computer Interface , Algorithms , Computer Simulation , Humans , Image Interpretation, Computer-Assisted/methods , Models, Anatomic , Preoperative Care/methods , Reproducibility of Results , Sensitivity and Specificity