Results 1 - 20 of 29
1.
IEEE Trans Vis Comput Graph ; 29(12): 4832-4844, 2023 12.
Article in English | MEDLINE | ID: mdl-35914058

ABSTRACT

The MD-Cave is an immersive analytics system that provides enhanced stereoscopic visualizations to support visual diagnoses performed by radiologists. The system harnesses contemporary paradigms in immersive visualization and 3D interaction, which are better suited for investigating 3D volumetric data. We retain practicality through efficient utilization of desk space and radiologist comfort during frequent, long-duration use. MD-Cave is general and incorporates: (1) high-resolution stereoscopic visualizations through a surround triple-monitor setup, (2) 3D interactions through head and hand tracking, and (3) a general framework that supports 3D visualization of deep-seated anatomical structures without the need for explicit segmentation algorithms. Such a general framework expands the utility of our system to many diagnostic scenarios. We have developed MD-Cave through close collaboration with and feedback from two expert radiologists who evaluated the utility of MD-Cave and the 3D interactions in the context of radiological examinations. We also provide evaluation of MD-Cave through case studies performed by an expert radiologist and concrete examples on multiple real-world diagnostic scenarios, such as pancreatic cancer, shoulder CT, and COVID-19 chest CT examinations.


Subjects
Algorithms, Computer Graphics, Humans, Tomography, X-Ray Computed, Feedback, Radiologists
2.
IEEE Trans Vis Comput Graph ; 29(7): 3182-3194, 2023 Jul.
Article in English | MEDLINE | ID: mdl-35213310

ABSTRACT

The growing complexity of spatial and structural information in 3D data makes data inspection and visualization a challenging task. We describe a method to create a planar embedding of 3D treelike structures using their skeleton representations. Our method maintains the original geometry, without overlaps, to the best extent possible, allowing exploration of the topology within a single view. We present a novel camera view generation method which maximizes the visible geometric attributes (segment shape and relative placement between segments). Camera views are created for individual segments and are used to determine local bending angles at each node by projecting them to 2D. The final embedding is generated by minimizing an energy function (the weights of which are user adjustable) based on branch length and the 2D angles, while avoiding intersections. The user can also interactively modify segment placement within the 2D embedding, and the overall embedding will update accordingly. A global to local interactive exploration is provided using hierarchical camera views that are created for subtrees within the structure. We evaluate our method both qualitatively and quantitatively and demonstrate our results by constructing planar visualizations of line data (traced neurons) and volume data (CT vascular and bronchial data).
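The 2D layout described above is obtained by minimizing a user-weighted energy over branch lengths and projected bending angles. A minimal sketch of such an energy term (the function name, quadratic penalties, and default weights are illustrative assumptions, not the paper's implementation):

```python
def embedding_energy(lengths_3d, lengths_2d, angles_2d, target_angles,
                     w_length=1.0, w_angle=0.5):
    """Toy energy of a planar tree embedding: penalize deviation of the
    2D branch lengths from the original 3D lengths, plus deviation of
    the 2D bending angles from the camera-projected target angles.
    The weights are user-adjustable, mirroring the abstract; the
    quadratic penalty form is an illustrative choice."""
    e_len = sum((l2 - l3) ** 2 for l2, l3 in zip(lengths_2d, lengths_3d))
    e_ang = sum((a - t) ** 2 for a, t in zip(angles_2d, target_angles))
    return w_length * e_len + w_angle * e_ang
```

A layout optimizer would then adjust the 2D lengths and angles, subject to the no-intersection constraint described above, to drive this energy down.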

3.
IEEE Trans Vis Comput Graph ; 29(3): 1651-1663, 2023 Mar.
Article in English | MEDLINE | ID: mdl-34780328

ABSTRACT

We present a novel approach for volume exploration that is versatile yet effective in isolating semantic structures in both noisy and clean data. Specifically, we describe a hierarchical active contours approach based on Bhattacharyya gradient flow which is easier to control, robust to noise, and can incorporate various types of statistical information to drive an edge-agnostic exploration process. To facilitate a time-bound user-driven volume exploration process that is applicable to a wide variety of data sources, we present an efficient multi-GPU implementation that (1) is approximately 400 times faster than a single-threaded CPU implementation, (2) allows hierarchical exploration of 2D and 3D images, (3) supports customization through multidimensional attribute spaces, and (4) is applicable to a variety of data sources and semantic structures. The exploration system follows a 2-step process. It first applies active contours to isolate semantically meaningful subsets of the volume. It then applies transfer functions to the isolated regions locally to produce clear and clutter-free visualizations. We show the effectiveness of our approach in isolating and visualizing structures-of-interest without needing any specialized segmentation methods on a variety of data sources, including 3D optical microscopy, multi-channel optical volumes, abdominal and chest CT, micro-CT, MRI, simulation, and synthetic data. We also gathered feedback from a medical trainee regarding the usefulness of our approach and discuss potential applications in clinical workflows.

4.
Article in English | MEDLINE | ID: mdl-37966931

ABSTRACT

We present Submerse, an end-to-end framework for visualizing flooding scenarios on large and immersive display ecologies. Specifically, we reconstruct a surface mesh from input flood simulation data and generate a to-scale 3D virtual scene by incorporating geographical data such as terrain, textures, buildings, and additional scene objects. To optimize computation and memory performance for large simulation datasets, we discretize the data on an adaptive grid using dynamic quadtrees and support level-of-detail-based rendering. Moreover, to provide a perception of flooding direction for a time instance, we animate the surface mesh by synthesizing water waves. As interaction is key for effective decision-making and analysis, we introduce two novel techniques for flood visualization in immersive systems: (1) an automatic scene-navigation method using optimal camera viewpoints generated for marked points-of-interest based on the display layout, and (2) an AR-based focus+context technique using an auxiliary display system. Submerse was developed in collaboration between computer scientists and atmospheric scientists. We evaluate the effectiveness of our system and application by conducting workshops with emergency managers, domain experts, and concerned stakeholders in the Stony Brook Reality Deck, an immersive gigapixel facility, to visualize a superstorm flooding scenario in New York City.
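The adaptive-grid discretization can be pictured as a quadtree that keeps subdividing only where the simulated water depth varies strongly. A minimal sketch under that assumption (the cell layout, key names, and corner-variation test are illustrative, not Submerse's actual data structure):

```python
def build_quadtree(depth_fn, x, y, size, max_depth, threshold):
    """Recursively subdivide a square cell while the water-depth field
    sampled at its corners varies by more than `threshold`; otherwise
    store a single averaged leaf. `depth_fn(x, y)` returns water depth."""
    corners = [depth_fn(x, y), depth_fn(x + size, y),
               depth_fn(x, y + size), depth_fn(x + size, y + size)]
    if max_depth == 0 or max(corners) - min(corners) <= threshold:
        return {"x": x, "y": y, "size": size, "depth": sum(corners) / 4.0}
    half = size / 2.0
    return {"children": [
        build_quadtree(depth_fn, x,        y,        half, max_depth - 1, threshold),
        build_quadtree(depth_fn, x + half, y,        half, max_depth - 1, threshold),
        build_quadtree(depth_fn, x,        y + half, half, max_depth - 1, threshold),
        build_quadtree(depth_fn, x + half, y + half, half, max_depth - 1, threshold),
    ]}
```

Flat regions collapse into single leaves, which is what saves memory on large simulation domains; level-of-detail rendering can then choose a tree depth per view distance.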

5.
Article in English | MEDLINE | ID: mdl-38096098

ABSTRACT

We present VoxAR, a method to facilitate an effective visualization of volume-rendered objects in optical see-through head-mounted displays (OST-HMDs). The potential of augmented reality (AR) to integrate digital information into the physical world provides new opportunities for visualizing and interpreting scientific data. However, a limitation of OST-HMD technology is that rendered pixels of a virtual object can interfere with the colors of the real world, making it challenging to perceive the augmented virtual information accurately. We address this challenge in a two-step approach. First, VoxAR determines an appropriate placement of the volume-rendered object in the real-world scene by evaluating a set of spatial and environmental objectives, managed as user-selected preferences and predefined constraints. We achieve a real-time solution by implementing the objectives in a GPU shader language. Next, VoxAR adjusts the colors of the input transfer function (TF) based on the real-world placement region. Specifically, we introduce a novel optimization method that adjusts the TF colors such that the resulting volume-rendered pixels are discernible against the background and the TF maintains the perceptual mapping between the colors and data intensity values. Finally, we present an assessment of our approach through objective evaluations and subjective user studies.

6.
IEEE Trans Vis Comput Graph ; 29(3): 1625-1637, 2023 03.
Article in English | MEDLINE | ID: mdl-34757909

ABSTRACT

Recent advances in high-resolution microscopy have allowed scientists to better understand the underlying brain connectivity. However, due to the limitation that biological specimens can only be imaged at a single timepoint, studying changes to neural projections over time is limited to observations gathered using population analysis. In this article, we introduce NeuRegenerate, a novel end-to-end framework for the prediction and visualization of changes in neural fiber morphology within a subject across specified age-timepoints. To predict projections, we present neuReGANerator, a deep-learning network based on cycle-consistent generative adversarial network (GAN) that translates features of neuronal structures across age-timepoints for large brain microscopy volumes. We improve the reconstruction quality of the predicted neuronal structures by implementing a density multiplier and a new loss function, called the hallucination loss. Moreover, to alleviate artifacts that occur due to tiling of large input volumes, we introduce a spatial-consistency module in the training pipeline of neuReGANerator. Finally, to visualize the change in projections, predicted using neuReGANerator, NeuRegenerate offers two modes: (i) neuroCompare to simultaneously visualize the difference in the structures of the neuronal projections, from two age domains (using structural view and bounded view), and (ii) neuroMorph, a vesselness-based morphing technique to interactively visualize the transformation of the structures from one age-timepoint to the other. Our framework is designed specifically for volumes acquired using wide-field microscopy. We demonstrate our framework by visualizing the structural changes within the cholinergic system of the mouse brain between a young and old specimen.


Subjects
Computer Graphics, Image Processing, Computer-Assisted, Animals, Mice, Image Processing, Computer-Assisted/methods, Brain/diagnostic imaging, Head, Microscopy
7.
IEEE Trans Vis Comput Graph ; 28(1): 227-237, 2022 01.
Article in English | MEDLINE | ID: mdl-34587075

ABSTRACT

Significant work has been done on deep learning (DL) models for automatic lung and lesion segmentation and classification of COVID-19 on chest CT data. However, comprehensive visualization systems focused on supporting the dual visual+DL diagnosis of COVID-19 are non-existent. We present COVID-view, a visualization application tailored for radiologists to diagnose COVID-19 from chest CT data. The system incorporates a complete pipeline of automatic lung segmentation, localization/isolation of lung abnormalities, followed by visualization, visual and DL analysis, and measurement/quantification tools. Our system combines the traditional 2D workflow of radiologists with newer 2D and 3D visualization techniques, with DL support for a more comprehensive diagnosis. COVID-view incorporates a novel DL model for classifying patients into positive/negative COVID-19 cases, which acts as a reading aid for the radiologist and provides an attention heatmap as an explainable-DL view of the model output. We designed and evaluated COVID-view through suggestions, close feedback, and case studies of real-world patient data conducted by expert radiologists who have substantial experience diagnosing chest CT scans for COVID-19, pulmonary embolism, and other forms of lung infections. We present requirements and task analysis for the diagnosis of COVID-19 that motivate our design choices and result in a practical system capable of handling real-world patient cases.


Subjects
COVID-19, Computer Graphics, Humans, Lung/diagnostic imaging, SARS-CoV-2, Tomography, X-Ray Computed
8.
IEEE Trans Vis Comput Graph ; 28(3): 1457-1468, 2022 03.
Article in English | MEDLINE | ID: mdl-32870794

ABSTRACT

We present 3D virtual pancreatography (VP), a novel visualization procedure and application for non-invasive diagnosis and classification of pancreatic lesions, the precursors of pancreatic cancer. Currently, non-invasive screening of patients is performed through visual inspection of 2D axis-aligned CT images, though the relevant features are often neither clearly visible nor automatically detected. VP is an end-to-end visual diagnosis system that includes: a machine learning based automatic segmentation of the pancreatic gland and the lesions, a semi-automatic approach to extract the primary pancreatic duct, a machine learning based automatic classification of lesions into four prominent types, and specialized 3D and 2D exploratory visualizations of the pancreas, lesions, and surrounding anatomy. We combine volume rendering with pancreas- and lesion-centric visualizations and measurements for effective diagnosis. We designed VP through close collaboration and feedback from expert radiologists, and evaluated it on multiple real-world CT datasets with various pancreatic lesions and case studies examined by the expert radiologists.


Subjects
Pancreatic Neoplasms, Tomography, X-Ray Computed, Computer Graphics, Humans, Machine Learning, Pancreatic Neoplasms/diagnostic imaging, Tomography, X-Ray Computed/methods
9.
IEEE Trans Vis Comput Graph ; 28(12): 4951-4965, 2022 12.
Article in English | MEDLINE | ID: mdl-34478372

ABSTRACT

We introduce NeuroConstruct, a novel end-to-end application for the segmentation, registration, and visualization of brain volumes imaged using wide-field microscopy. NeuroConstruct offers a Segmentation Toolbox with various annotation helper functions that aid experts in effectively and precisely annotating micrometer-resolution neurites. It also offers automatic neurite segmentation using convolutional neural networks (CNNs) trained on the Toolbox annotations, and soma segmentation using thresholding. To visualize neurites in a given volume, NeuroConstruct offers hybrid rendering that combines iso-surface rendering of high-confidence classified neurites with real-time rendering of the raw volume using a 2D transfer function over voxel classification score versus voxel intensity value. For a complete reconstruction of the 3D neurites, we introduce a Registration Toolbox that provides automatic coarse-to-fine alignment of serially sectioned samples. Quantitative and qualitative analyses show that NeuroConstruct outperforms the state-of-the-art in all design aspects. NeuroConstruct was developed as a collaboration between computer scientists and neuroscientists, with an application to the study of cholinergic neurons, which are severely affected in Alzheimer's disease.


Subjects
Brain, Imaging, Three-Dimensional, Microscopy, Neural Networks, Computer, Brain/diagnostic imaging, Computer Graphics, Image Processing, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Neurites
10.
IEEE Trans Vis Comput Graph ; 27(3): 2174-2185, 2021 03.
Article in English | MEDLINE | ID: mdl-31613771

ABSTRACT

Machine learning is a powerful and effective tool for medical image analysis to perform computer-aided diagnosis (CAD). Despite their great potential for improving the accuracy of a diagnosis, CAD systems are often analyzed only in terms of final accuracy, leading to a limited understanding of the internal decision process, an inability to gain insights, and, ultimately, skepticism from clinicians. We present a visual analytics approach to uncover the decision-making process of a CAD system for classifying pancreatic cystic lesions. This CAD algorithm consists of two distinct components: a random forest (RF), which classifies a set of predefined features, including demographic features, and a convolutional neural network (CNN), which analyzes radiological (imaging) features of the lesions. We study the class probabilities generated by the RF and the semantic meaning of the features learned by the CNN. We also use an eye tracker to better understand which radiological features are particularly useful for a radiologist to make a diagnosis, and to quantitatively compare them with the features that lead the CNN to its final classification decision. Additionally, we evaluate the effects and benefits of supplying the CAD system with a case-based visual aid in a second-reader setting.


Assuntos
Interpretação de Imagem Assistida por Computador/métodos , Aprendizado de Máquina , Neoplasias Pancreáticas/diagnóstico por imagem , Adulto , Algoritmos , Feminino , Humanos , Masculino , Redes Neurais de Computação , Pâncreas/diagnóstico por imagem , Radiografia Abdominal , Adulto Jovem
11.
IEEE Trans Vis Comput Graph ; 15(5): 802-14, 2009.
Article in English | MEDLINE | ID: mdl-19590106

ABSTRACT

The Lattice Boltzmann method (LBM) for visual simulation of fluid flow generally employs cubic Cartesian (CC) lattices such as the D3Q13 and D3Q19 lattices for the particle transport. However, the CC lattices lead to suboptimal representation of the simulation space. We introduce the face-centered cubic (FCC) lattice, fD3Q13, for LBM simulations. Compared to the CC lattices, the fD3Q13 lattice creates a more isotropic sampling of the simulation domain and its single lattice speed (i.e., link length) simplifies the computations and data storage. Furthermore, the fD3Q13 lattice can be decomposed into two independent interleaved lattices, one of which can be discarded, which doubles the simulation speed. The resulting LBM simulation can be efficiently mapped to the GPU, further increasing the computational performance. We show the numerical advantages of the FCC lattice on channeled flow in 2D and the flow-past-a-sphere benchmark in 3D. In both cases, the comparison is against the corresponding CC lattices using the analytical solutions for the systems as well as velocity field visualizations. We also demonstrate the performance advantages of the fD3Q13 lattice for interactive simulation and rendering of hot smoke in an urban environment using thermal LBM.
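The fD3Q13 velocity set pairs a rest vector with the 12 nearest-neighbor links of the FCC lattice, i.e. the sign combinations of (±1, ±1, 0) and its axis permutations, so every moving direction has the same link length √2. A small sketch of generating that set (the function name is illustrative):

```python
def fcc_d3q13_velocities():
    """Build the fD3Q13 velocity set: one rest vector plus the 12
    face-centered-cubic nearest-neighbor links (all sign combinations
    of (+-1, +-1, 0) across the three axis pairs). Every moving link
    has squared length 2, i.e. a single lattice speed."""
    links = [(sx, sy, 0) for sx in (1, -1) for sy in (1, -1)]
    links += [(sx, 0, sz) for sx in (1, -1) for sz in (1, -1)]
    links += [(0, sy, sz) for sy in (1, -1) for sz in (1, -1)]
    return [(0, 0, 0)] + links
```

A single link length means one lattice speed for all moving populations, which is consistent with the simplification in computation and data storage that the abstract describes.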

12.
IEEE Trans Vis Comput Graph ; 24(12): 3111-3122, 2018 12.
Article in English | MEDLINE | ID: mdl-29990124

ABSTRACT

Visualization of medical organs and biological structures is a challenging task because of their complex geometry and the resultant occlusions. Global spherical and planar mapping techniques simplify the complex geometry and resolve the occlusions to aid in visualization. However, while resolving the occlusions these techniques do not preserve the geometric context, making them less suitable for mission-critical biomedical visualization tasks. In this paper, we present a shape-preserving local mapping technique for resolving occlusions locally while preserving the overall geometric context. More specifically, we present a novel visualization algorithm, LMap, for conformally parameterizing and deforming a selected local region-of-interest (ROI) on an arbitrary surface. The resultant shape-preserving local mappings help to visualize complex surfaces while preserving the overall geometric context. The algorithm is based on the robust and efficient extrinsic Ricci flow technique, and uses the dynamic Ricci flow algorithm to guarantee the existence of a local map for a selected ROI on an arbitrary surface. We show the effectiveness and efficacy of our method in three challenging use cases: (1) multimodal brain visualization, (2) optimal coverage of virtual colonoscopy centerline flythrough, and (3) molecular surface visualization.


Subjects
Computer Graphics, Imaging, Three-Dimensional/methods, Algorithms, Brain/diagnostic imaging, Colonography, Computed Tomographic/methods, Humans, Multimodal Imaging, Surface Properties
13.
IEEE Trans Vis Comput Graph ; 24(7): 2209-2222, 2018 07.
Article in English | MEDLINE | ID: mdl-28600252

ABSTRACT

We introduce a novel approach for flame volume reconstruction from videos using inexpensive charge-coupled device (CCD) consumer cameras. The approach includes an economical data capture technique using inexpensive CCD cameras. Leveraging the smear feature of the CCD chip, we present a technique for synchronizing CCD cameras while capturing flame videos from different views. Our reconstruction is based on the radiative transport equation which enables complex phenomena such as emission, extinction, and scattering to be used in the rendering process. Both the color intensity and temperature reconstructions are implemented using the CUDA parallel computing framework, which provides real-time performance and allows visualization of reconstruction results after every iteration. We present the results of our approach using real captured data and physically-based simulated data. Finally, we also compare our approach against the other state-of-the-art flame volume reconstruction methods and demonstrate the efficacy and efficiency of our approach in four different applications: (1) rendering of reconstructed flames in virtual environments, (2) rendering of reconstructed flames in augmented reality, (3) flame stylization, and (4) reconstruction of other semitransparent phenomena.

14.
IEEE Trans Vis Comput Graph ; 13(1): 179-89, 2007.
Article in English | MEDLINE | ID: mdl-17093346

ABSTRACT

We provide a physically-based framework for simulating the natural phenomena related to heat interaction between objects and the surrounding air. We introduce a heat transfer model between the heat source objects and the ambient flow environment, which includes conduction, convection, and radiation. The heat distribution of the objects is represented by a novel temperature texture. We simulate the thermal flow dynamics that models the air flow interacting with the heat by a hybrid thermal lattice Boltzmann model (HTLBM). The computational approach couples a multiple-relaxation-time LBM (MRTLBM) with a finite difference discretization of a standard advection-diffusion equation for temperature. In heat shimmering and mirage, the changes in the index of refraction of the surrounding air are attributed to temperature variation. A nonlinear ray tracing method is used for rendering. Interactive performance is achieved by accelerating the computation of both the MRTLBM and the heat transfer, as well as the rendering on contemporary graphics hardware (GPU).


Subjects
Algorithms, Computer Graphics, Hot Temperature, Image Interpretation, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Models, Theoretical, Optical Illusions, Computer Simulation, Image Enhancement/methods, Information Storage and Retrieval/methods, Reproducibility of Results, Sensitivity and Specificity, Thermodynamics
15.
IEEE Trans Vis Comput Graph ; 23(1): 171-180, 2017 01.
Article in English | MEDLINE | ID: mdl-27514050

ABSTRACT

We present a novel visualization framework, AnaFe, targeted at observing changes in the spleen over time through multiple image-derived features. Accurate monitoring of progressive changes is crucial for diseases that result in enlargement of the organ. Our system comprises multiple linked views combining visualization of temporal 3D organ data, related measurements, and features. It thus enables the observation of progression and allows for simultaneous comparison within and between subjects. AnaFe offers insights into the overall distribution of robustly extracted and reproducible quantitative imaging features and their changes within the population, and also enables detailed analysis of individual cases. It performs similarity comparison of a temporal series of one subject against all other series in both sick and healthy groups. We demonstrate our system through two use-case scenarios on a population of 189 spleen datasets from 68 subjects with various conditions observed over time.


Subjects
Computer Graphics, Image Processing, Computer-Assisted, Models, Biological, Spleen/diagnostic imaging, Female, Humans, Male
16.
Med Image Comput Comput Assist Interv ; 10435: 150-158, 2017 Sep.
Article in English | MEDLINE | ID: mdl-29881827

ABSTRACT

There are many different types of pancreatic cysts. These range from completely benign to malignant, and identifying the exact cyst type can be challenging in clinical practice. This work describes an automatic classification algorithm that classifies the four most common types of pancreatic cysts using computed tomography images. The proposed approach utilizes the general demographic information about a patient as well as the imaging appearance of the cyst. It is based on a Bayesian combination of the random forest classifier, which learns subclass-specific demographic, intensity, and shape features, and a new convolutional neural network that relies on fine texture information. Quantitative assessment of the proposed method was performed using 10-fold cross-validation on 134 patients and yielded a classification accuracy of 83.6%.
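The Bayesian combination of the two component classifiers can be sketched as a product rule over per-class probabilities, assuming the RF and CNN outputs are conditionally independent given the class (the paper's exact combination rule and priors may differ; the function name is illustrative):

```python
def combine_classifiers(p_rf, p_cnn, prior=None):
    """Combine per-class probabilities from two classifiers via a
    naive-Bayes product rule: divide out the shared class prior
    (uniform by default), multiply the posteriors, and renormalize.
    A sketch of the idea, not the paper's exact formulation."""
    n = len(p_rf)
    prior = prior or [1.0 / n] * n
    joint = [rf * cnn / pr for rf, cnn, pr in zip(p_rf, p_cnn, prior)]
    total = sum(joint)
    return [j / total for j in joint]
```

When both classifiers agree confidently, the combined posterior sharpens: combining [0.9, 0.1] with [0.8, 0.2] under a uniform prior pushes the first class above 0.97.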

17.
IEEE Trans Vis Comput Graph ; 22(2): 1076-87, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26731452

ABSTRACT

We have developed a novel visualization system based on the reconstruction of high resolution and high frame rate images from a multi-tiered stream of samples that are rendered framelessly. This decoupling of the rendering system from the display system is particularly suitable when dealing with very high resolution displays or expensive rendering algorithms, where the latency of generating complete frames may be prohibitively high for interactive applications. In contrast to the traditional frameless rendering technique, we generate the lowest latency samples on the optimal sampling lattice in the 3D domain. This approach avoids many of the artifacts associated with existing sample caching and reprojection methods during interaction that may not be acceptable in many visualization applications. Advanced visualization effects are generated remotely and streamed into the reconstruction system using tiered samples with varying latencies and quality levels. We demonstrate the use of our visualization system for the exploration of volumetric data at stable guaranteed frame rates on high resolution displays, including a 470 megapixel tiled display as part of the Reality Deck immersive visualization facility.

18.
IEEE Trans Vis Comput Graph ; 10(4): 410-21, 2004.
Article in English | MEDLINE | ID: mdl-18579969

ABSTRACT

We present an innovative modeling and rendering primitive, called the O-buffer, as a framework for sample-based graphics. The 2D or 3D O-buffer is, in essence, a conventional image or a volume, respectively, except that samples are not restricted to a regular grid. A sample position in the O-buffer is recorded as an offset to the nearest grid point of a regular base grid (hence the name O-buffer). The O-buffer can greatly improve the expressive power of images and volumes. Image quality can be improved by storing more spatial information with samples and by avoiding multiple resamplings. It can be exploited to represent and render unstructured primitives, such as points, particles, and curvilinear or irregular volumes. The O-buffer is therefore a unified representation for a variety of graphics primitives and supports mixing them in the same scene. It is a semiregular structure that lends itself to efficient construction and rendering. O-buffers may assume a variety of forms including 2D O-buffers, 3D O-buffers, uniform O-buffers, nonuniform O-buffers, adaptive O-buffers, layered-depth O-buffers, and O-buffer trees. We demonstrate the effectiveness of the O-buffer in a variety of applications, such as image-based rendering, point sample rendering, and volume rendering.
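The offset-to-nearest-grid-point encoding described above can be sketched as follows, with the offset quantized to a few bits per axis (the bit width and packing are illustrative storage choices, not the paper's exact format):

```python
def encode_obuffer_sample(pos, grid_spacing=1.0, bits=4):
    """Record a sample position as (nearest base-grid point, quantized
    offset) -- the core O-buffer idea: samples live near, not on, a
    regular base grid. Each offset axis is quantized to 2**bits levels."""
    levels = 1 << bits
    cell, offset_q = [], []
    for p in pos:
        g = round(p / grid_spacing)            # nearest base-grid point
        off = p / grid_spacing - g             # offset in [-0.5, 0.5]
        q = min(levels - 1, int((off + 0.5) * levels))
        cell.append(g)
        offset_q.append(q)
    return tuple(cell), tuple(offset_q)

def decode_obuffer_sample(cell, offset_q, grid_spacing=1.0, bits=4):
    """Reconstruct the approximate sample position from a grid point
    and its quantized offset, using quantization-bin centers."""
    levels = 1 << bits
    return tuple((g + (q + 0.5) / levels - 0.5) * grid_spacing
                 for g, q in zip(cell, offset_q))
```

With 4 bits per axis the round-trip error is bounded by half a quantization bin, i.e. grid_spacing/32, while the storage per sample stays close to that of a regular grid.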


Subjects
Algorithms, Computer Graphics, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Pattern Recognition, Automated/methods, Reproducibility of Results, Sensitivity and Specificity
19.
IEEE Trans Vis Comput Graph ; 10(2): 230-40, 2004.
Article in English | MEDLINE | ID: mdl-15384648

ABSTRACT

We study texture projection based on a four-region subdivision: magnification, minification, and two mixed regions. We propose improved versions of existing techniques by providing exact filtering methods which reduce both aliasing and overblurring, especially in the mixed regions. We further present a novel texture mapping algorithm called FAST (Footprint Area Sampled Texturing), which not only delivers high quality, but is also efficient. By utilizing coherence between neighboring pixels, performing prefiltering, and applying an area sampling scheme, we guarantee a minimum number of samples sufficient for effective antialiasing. Unlike existing methods (e.g., MIP-map, Feline), our method adapts the sampling rate in each chosen MIP-map level separately, to avoid undersampling in the lower level l for effective antialiasing and to avoid oversampling in the higher level l + 1 for efficiency. Our method has been shown to deliver superior image quality to Feline and other methods while retaining the same efficiency. We also provide implementation trade-offs to apply a variable degree of accuracy versus speed.


Subjects
Algorithms, Computer Graphics, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Information Storage and Retrieval/methods, Pattern Recognition, Automated, Signal Processing, Computer-Assisted, Computer Simulation, Imaging, Three-Dimensional/methods, Numerical Analysis, Computer-Assisted, Reproducibility of Results, Sample Size, Sensitivity and Specificity
20.
IEEE Trans Vis Comput Graph ; 10(1): 15-28, 2004.
Article in English | MEDLINE | ID: mdl-15382695

ABSTRACT

We present an efficient stereoscopic rendering algorithm supporting interactive navigation through large-scale 3D voxel-based environments. In this algorithm, most of the pixel values of the right image are derived from the left image by a fast 3D warping based on a specific stereoscopic projection geometry. An accelerated volumetric ray casting then fills the remaining gaps in the warped right image. Our algorithm has been parallelized on a multiprocessor by employing effective task partitioning schemes and achieved a high cache coherency and load balancing. We also extend our stereoscopic rendering to include view-dependent shading and transparency effects. We have applied our algorithm in two virtual navigation systems, flythrough over terrain and virtual colonoscopy, and reached interactive stereoscopic rendering rates of more than 10 frames per second on a 16-processor SGI Challenge.


Subjects
Algorithms, Computer Graphics, Imaging, Three-Dimensional/methods, Online Systems, Photogrammetry/methods, User-Computer Interface, Video Recording/methods, Environment, Image Interpretation, Computer-Assisted/methods, Pattern Recognition, Automated, Signal Processing, Computer-Assisted, Telemedicine/methods