Results 1 - 20 of 35
1.
Ann Surg ; 270(2): 384-389, 2019 08.
Article in English | MEDLINE | ID: mdl-29672404

ABSTRACT

OBJECTIVE: This study investigates the benefits of a surgical telementoring system based on an augmented reality head-mounted display (ARHMD) that overlays surgical instructions directly onto the surgeon's view of the operating field, without workspace obstruction. SUMMARY BACKGROUND DATA: In conventional telestrator-based telementoring, the surgeon views annotations of the surgical field by shifting focus to a nearby monitor, which substantially increases cognitive load. As an alternative, tablets have been used between the surgeon and the patient to display instructions; however, tablets impose additional obstructions of the surgeon's motions. METHODS: Twenty medical students performed anatomical marking (Task 1) and abdominal incision (Task 2) on a patient simulator, in one of two telementoring conditions: ARHMD and telestrator. The dependent variables were placement error, number of focus shifts, and completion time. Furthermore, workspace efficiency was quantified as the number and duration of potential surgeon-tablet collisions avoided by the ARHMD. RESULTS: The ARHMD condition yielded smaller placement errors (Task 1: 45%, P < 0.001; Task 2: 14%, P = 0.01), fewer focus shifts (Task 1: 93%, P < 0.001; Task 2: 88%, P = 0.0039), and longer completion times (Task 1: 31%, P < 0.001; Task 2: 24%, P = 0.013). Furthermore, the ARHMD avoided potential tablet collisions (4.8 collisions over 3.2 seconds in Task 1; 3.8 collisions over 1.3 seconds in Task 2). CONCLUSION: The ARHMD system promises to improve accuracy and to eliminate focus shifts in surgical telementoring. Because ARHMD participants were able to refine their execution of instructions, task completion time increased. Unlike a tablet system, the ARHMD does not require modifying natural motions to avoid collisions.


Subjects
Augmented Reality, Medical Education/methods, General Surgery/education, Intraoperative Monitoring/methods, Patient Simulation, Operative Surgical Procedures/education, Telemedicine/methods, Adult, Female, Humans, Three-Dimensional Imaging, Male, Surgeons/education, Young Adult
2.
IEEE Trans Vis Comput Graph ; 30(5): 2337-2346, 2024 May.
Article in English | MEDLINE | ID: mdl-38437098

ABSTRACT

VR headsets have limited rendering capability, which limits the size and detail of the virtual environment (VE) that can be used in VR applications. One solution is cloud VR, where the "thin" VR clients are assisted by a server. This paper describes CloVR, a cloud VR system that provides fast loading times, as needed to let users see and interact with the VE quickly at session startup or after teleportation. The server reduces the original VE to a compact representation through near-far partitioning. The server renders the far region to an environment map, which it sends to the client together with the near-region geometry, from which the client renders quality frames locally, with low latency. The near region starts out small and grows progressively, with strict visual continuity, minimizing startup time. The low-latency and fast-startup advantages of CloVR have been validated in a user study where groups of 8 participants wearing all-in-one VR headsets (Quest 2) were supported by a laptop server to run a collaborative VR application with a 25 million triangle VE.
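To make the near-far partitioning idea concrete, here is a minimal sketch (not from the paper; the function name and the fixed-radius criterion are illustrative assumptions): triangles whose centroids lie within a radius of the viewpoint are kept as near-region geometry, and the rest would be baked by the server into an environment map.

```python
# Illustrative sketch: split a mesh into a near region (kept as geometry) and
# a far region (to be baked into an environment map by a server). Names and
# the centroid-distance criterion are assumptions, not the paper's method.
import numpy as np

def partition_near_far(vertices, triangles, viewpoint, near_radius):
    """Split triangles by the distance of their centroid from the viewpoint."""
    centroids = vertices[triangles].mean(axis=1)          # (T, 3)
    dist = np.linalg.norm(centroids - viewpoint, axis=1)  # (T,)
    near_mask = dist <= near_radius
    return triangles[near_mask], triangles[~near_mask]

# Example: grow the near region progressively by increasing the radius.
verts = np.random.rand(100, 3) * 50.0
tris = np.random.randint(0, 100, size=(200, 3))
eye = np.array([25.0, 25.0, 25.0])
for radius in (2.0, 5.0, 10.0):
    near, far = partition_near_far(verts, tris, eye, radius)
    print(radius, len(near), "near triangles,", len(far), "far triangles")
```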

3.
IEEE Trans Vis Comput Graph ; 30(5): 2454-2463, 2024 May.
Article in English | MEDLINE | ID: mdl-38437137

ABSTRACT

The paper introduces PreVR, a method that allows the user of a VR application to preview a virtual environment (VE) around any number of corners. This way the user can gain line of sight to any part of the VE, no matter how distant or how heavily occluded it is. PreVR relies on a multiperspective visualization that implements a higher-order disocclusion effect with piecewise linear rays that bend multiple times as needed to reach the visualization target. PreVR was evaluated in a user study (N = 88) that investigates four points on the VR interface design continuum defined by the maximum disocclusion order δ. A first control condition (CC0), with δ = 0, corresponds to conventional VR exploration with no preview capability. A second control condition (CC1), with δ = 1, corresponds to the prior-art approach of giving the user a preview around the first corner. In a first experimental condition (EC3), δ = 3, so PreVR provided up to third-order disocclusion. In a second experimental condition (ECN), δ was not capped, so PreVR could provide a disocclusion effect of any order, as needed to reach any location in the VE. Participants searched for a stationary target, for a dynamic target moving on a random continuous trajectory, and for a transient dynamic target that appeared at random locations in the maze and disappeared 5 s later. The study quantified VE exploration efficiency with four metrics: viewpoint translation, view direction rotation, number of teleportations, and task completion time. Results show that the previews afforded by PreVR bring a significant VE exploration efficiency advantage. ECN outperforms EC3, CC1, and CC0 for all metrics and all tasks, and EC3 frequently outperforms CC1 and CC0.

4.
Article in English | MEDLINE | ID: mdl-37027709

ABSTRACT

This paper proposes a general handheld stick haptic redirection method that allows the user to experience complex shapes with haptic feedback through both tapping and extended contact, such as in contour tracing. As the user extends the stick to make contact with a virtual object, the contact point with the virtual object and the targeted contact point with the physical object are continually updated, and the virtual stick is redirected to synchronize the virtual and real contacts. Redirection is applied either just to the virtual stick, or to both the virtual stick and hand. A user study (N = 26) confirms the effectiveness of the proposed redirection method. A first experiment following a two-interval forced-choice design reveals that the offset detection thresholds are [-15cm, +15cm]. A second experiment asks participants to guess the shape of an invisible virtual object by tapping it and by tracing its contour with the handheld stick, using a real world disk as a source of passive haptic feedback. The experiment reveals that using our haptic redirection method participants can identify the invisible object with 78% accuracy.
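As a rough illustration of how such redirection can be driven, the following sketch uses a common body-warping style interpolation: the rendered stick tip is offset toward the virtual contact point as the physical tip approaches the physical prop, so that the two contacts coincide at the moment of touch. This is a sketch of the general technique, not the paper's exact formulation; all names and the linear blend weight are assumptions.

```python
# Minimal sketch of body-warping redirection for a handheld stick. At contact
# the rendered (virtual) tip coincides with the virtual contact point while the
# physical tip touches the physical prop. Illustrative only.
import numpy as np

def redirected_tip(tracked_tip, physical_contact, virtual_contact, start_dist):
    """Warp the rendered stick tip so virtual and real contacts coincide."""
    d = np.linalg.norm(physical_contact - tracked_tip)
    w = np.clip(1.0 - d / start_dist, 0.0, 1.0)   # 0 far away, 1 at contact
    return tracked_tip + w * (virtual_contact - physical_contact)

tip = np.array([0.0, 0.0, 0.40])          # tracked physical stick tip (m)
p_contact = np.array([0.10, 0.0, 0.0])    # targeted point on the real disk
v_contact = np.array([0.0, 0.0, 0.0])     # contact point on the virtual shape
print(redirected_tip(tip, p_contact, v_contact, start_dist=0.50))
```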

5.
Article in English | MEDLINE | ID: mdl-37390001

ABSTRACT

This paper presents two from-point visibility algorithms: one aggressive and one exact. The aggressive algorithm efficiently computes a nearly complete visible set, with the guarantee of finding all triangles of a front surface, no matter how small their image footprint. The exact algorithm starts from the aggressive visible set and finds the remaining visible triangles efficiently and robustly. The algorithms are based on the idea of generalizing the set of sampling locations defined by the pixels of an image. Starting from a conventional image with one sampling location at each pixel center, the aggressive algorithm adds sampling locations to make sure that a triangle is sampled at all the pixels it touches. Thereby, the aggressive algorithm finds all triangles that are completely visible at a pixel regardless of geometric level of detail, distance from viewpoint, or view direction. The exact algorithm builds an initial visibility subdivision from the aggressive visible set, which it then uses to find most of the hidden triangles. The triangles whose visibility status is yet to be determined are processed iteratively, with the help of additional sampling locations. Since the initial visible set is almost complete, and since each additional sampling location finds a new visible triangle, the algorithm converges in a few iterations.
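To make the sampling-location idea concrete, here is a minimal sketch (not the paper's algorithm or code; all names are illustrative): a projected triangle is clipped against each pixel square it touches, and whenever the overlap is non-empty a sample location is placed inside that overlap, guaranteeing the triangle is sampled at every pixel it covers regardless of how small its footprint is.

```python
# Illustrative sketch: add one sample location per pixel touched by a 2D
# triangle, by clipping the triangle to each pixel square (Sutherland-Hodgman)
# and taking the centroid of the overlap. Not the paper's implementation.
import math

def clip_poly_to_pixel(poly, x, y):
    """Clip a 2D polygon against the pixel square [x, x+1] x [y, y+1]."""
    def clip(pts, inside, intersect):
        out = []
        for i, p in enumerate(pts):
            q = pts[i - 1]
            if inside(p):
                if not inside(q):
                    out.append(intersect(q, p))
                out.append(p)
            elif inside(q):
                out.append(intersect(q, p))
        return out
    def x_cut(a, b, xc):
        t = (xc - a[0]) / (b[0] - a[0])
        return (xc, a[1] + t * (b[1] - a[1]))
    def y_cut(a, b, yc):
        t = (yc - a[1]) / (b[1] - a[1])
        return (a[0] + t * (b[0] - a[0]), yc)
    poly = clip(poly, lambda p: p[0] >= x,     lambda a, b: x_cut(a, b, x))
    poly = clip(poly, lambda p: p[0] <= x + 1, lambda a, b: x_cut(a, b, x + 1))
    poly = clip(poly, lambda p: p[1] >= y,     lambda a, b: y_cut(a, b, y))
    poly = clip(poly, lambda p: p[1] <= y + 1, lambda a, b: y_cut(a, b, y + 1))
    return poly

def extra_samples(tri):
    """One sample location per pixel touched by the 2D triangle `tri`."""
    xs = [v[0] for v in tri]; ys = [v[1] for v in tri]
    samples = []
    for y in range(math.floor(min(ys)), math.ceil(max(ys))):
        for x in range(math.floor(min(xs)), math.ceil(max(xs))):
            overlap = clip_poly_to_pixel(list(tri), x, y)
            if overlap:
                cx = sum(p[0] for p in overlap) / len(overlap)
                cy = sum(p[1] for p in overlap) / len(overlap)
                samples.append((cx, cy))  # inside both triangle and pixel
    return samples

print(extra_samples([(0.2, 0.2), (2.7, 0.4), (1.1, 1.9)]))
```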

6.
IEEE Trans Vis Comput Graph ; 28(9): 3154-3167, 2022 Sep.
Article in English | MEDLINE | ID: mdl-33476271

ABSTRACT

This article presents an occlusion management approach that handles fine-grain occlusions, and that quantifies and localizes occlusions as a user explores a virtual environment (VE). Fine-grain occlusions are handled by finding the VE region where they occur, and by constructing a multiperspective visualization that lets the user explore the region from the current location, with intuitive head motions, without first having to walk to the region. VE geometry close to the user is rendered conventionally, from the user's viewpoint, to anchor the user, avoiding disorientation and simulator sickness. Given a viewpoint, residual occlusions are quantified and localized as VE voxels that cannot be seen from the given viewpoint but that can be seen from nearby viewpoints. This residual occlusion quantification and localization helps the user ascertain that a VE region has been explored exhaustively. The occlusion management approach was tested in three controlled studies, which confirmed the exploration efficiency benefit of the approach, and in perceptual experiments, which confirmed that exploration efficiency does not come at the cost of reducing spatial awareness and sense of presence, or of increasing simulator sickness.

7.
IEEE Trans Vis Comput Graph ; 27(11): 4087-4096, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34449378

ABSTRACT

A common approach for Augmented Reality labeling is to display the label text on a flag planted into the real world element at a 3D anchor point. When there are more than just a few labels, the efficiency of the interface decreases as the user has to search for a given label sequentially. The search can be accelerated by sorting the labels alphabetically, but sorting all labels results in long and intersecting leader lines from the anchor points to the labels. This paper proposes a partially-sorted concentric label layout that leverages the search efficiency of sorting while avoiding the label display problems of long or intersecting leader lines. The labels are partitioned into a small number of sorted sequences displayed on circles of increasing radii. Since the labels on a circle are sorted, the user can quickly search each circle. A tight upper bound derived from circular permutation theory limits the number of circles and thereby the complexity of the label layout. For example, 12 labels require at most three circles. When the application allows it, the labels are presorted to further reduce the number of circles in the layout. The layout was tested in a user study where it significantly reduced the label searching time compared to a conventional single-circle layout.
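One simple way to partition labels into sorted runs, one run per circle, is the greedy assignment sketched below (assuming the labels are visited in angular order around the anchor cluster). This linear-run greedy illustrates the idea only; the paper's layout and its circular-permutation bound exploit cyclic ordering on each circle and generally need fewer circles. All names are illustrative.

```python
# Sketch: assign each label, visited in angular order, to the first circle
# whose last label alphabetically precedes it; otherwise open a new circle.
# Each circle is then alphabetically sorted and can be searched quickly.
def partition_into_sorted_circles(labels_in_angular_order):
    circles = []  # each circle is a list kept sorted by construction
    for label in labels_in_angular_order:
        for circle in circles:
            if circle[-1] <= label:
                circle.append(label)
                break
        else:
            circles.append([label])
    return circles

labels = ["pump", "valve", "gauge", "hose", "filter", "bolt", "seal", "clamp"]
for i, circle in enumerate(partition_into_sorted_circles(labels), start=1):
    print(f"circle {i}: {circle}")
```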

8.
Phys Rev Lett ; 104(23): 236403, 2010 Jun 11.
Article in English | MEDLINE | ID: mdl-20867256

ABSTRACT

Random substitutional A(x)B(1-x) alloys lack formal translational symmetry and thus cannot be described by the language of band-structure dispersion E(k). Yet, many alloy experiments are interpreted phenomenologically precisely by constructs derived from the wave vector k, e.g., effective masses or van Hove singularities. Here we use large supercells with randomly distributed A and B atoms, whereby many different local environments are allowed to coexist, and transform the eigenstates into an effective band structure (EBS) in the primitive cell using a spectral decomposition. The resulting EBS reveals the extent to which band characteristics are preserved or lost at different compositions, band indices, and k points, showing in (In,Ga)N the rapid disintegration of the valence-band Bloch character and in Ga(N,P) the appearance of a pinned impurity band.
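A commonly used form of the spectral decomposition in supercell band unfolding (a sketch of the general idea, not necessarily the exact expressions of this Letter) projects each supercell eigenstate onto primitive-cell Bloch states and accumulates the resulting spectral weight into an effective band structure:

```latex
% Sketch of supercell band unfolding: spectral weight of supercell state |K m>
% on primitive-cell wave vector k_i, and the resulting EBS spectral function.
% Notation is illustrative; see the Letter for the precise formulation.
\begin{align}
  P_{\vec{K}m}(\vec{k}_i) &= \sum_{n} \bigl| \langle \vec{K} m \,|\, \vec{k}_i n \rangle \bigr|^2 , \\
  A(\vec{k}_i, E)         &= \sum_{m} P_{\vec{K}m}(\vec{k}_i)\, \delta\!\bigl(E_{m}(\vec{K}) - E\bigr) ,
\end{align}
```

where the k_i run over the primitive-cell wave vectors that fold onto the supercell wave vector K, and |k_i n> are primitive-cell Bloch states.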

9.
IEEE Trans Vis Comput Graph ; 16(6): 1235-42, 2010.
Article in English | MEDLINE | ID: mdl-20975163

ABSTRACT

Most images used in visualization are computed with the planar pinhole camera. This classic camera model has important advantages such as simplicity, which enables efficient software and hardware implementations, and similarity to the human eye, which yields images familiar to the user. However, the planar pinhole camera has only a single viewpoint, which limits images to parts of the scene to which there is direct line of sight. In this paper we introduce the curved ray camera to address the single viewpoint limitation. Rays are C1-continuous curves that bend to circumvent occluders. Our camera is designed to provide a fast 3-D point projection operation, which enables interactive visualization. The camera supports both 3-D surface and volume datasets. The camera is a powerful tool that enables seamless integration of multiple perspectives for overcoming occlusions in visualization while minimizing distortions.

10.
IEEE Trans Vis Comput Graph ; 16(5): 777-90, 2010.
Article in English | MEDLINE | ID: mdl-20616393

ABSTRACT

We introduce the general pinhole camera (GPC), defined by a center of projection (i.e., the pinhole), an image plane, and a set of sampling locations in the image plane. We demonstrate the advantages of the GPC in the contexts of remote visualization, focus-plus-context visualization, and extreme antialiasing, which benefit from the GPC sampling flexibility. For remote visualization, we describe a GPC that allows zooming-in at the client without the need for transferring additional data from the server. For focus-plus-context visualization, we describe a GPC with multiple regions of interest with sampling rate continuity to the surrounding areas. For extreme antialiasing, we describe a GPC variant that allows supersampling locally with a very high number of color samples per output pixel (e.g., 1,024×), supersampling levels that are out of reach for conventional approaches that supersample the entire image. The GPC supports many types of data, including surface geometry, volumetric, and image data, as well as many rendering modes, including highly view-dependent effects such as volume rendering. Finally, GPC visualization is efficient: GPC images are rendered and resampled with the help of graphics hardware at interactive rates.
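The following sketch illustrates the definition above: a center of projection, an image plane, and an arbitrary set of sampling locations in that plane, here denser in a focus region. The class name, the nearest-sample lookup, and the focus-plus-context example are illustrative assumptions, not the paper's implementation.

```python
# Sketch of a generalized pinhole projection with a free set of image-plane
# sampling locations. Assumes orthonormal u, v plane axes for simplicity.
import numpy as np

class GeneralPinholeCamera:
    def __init__(self, cop, plane_origin, u_axis, v_axis, samples_uv):
        self.cop = np.asarray(cop, float)
        self.o = np.asarray(plane_origin, float)
        self.u = np.asarray(u_axis, float)            # image-plane basis
        self.v = np.asarray(v_axis, float)
        self.n = np.cross(self.u, self.v)             # plane normal
        self.samples = np.asarray(samples_uv, float)  # (S, 2) plane coords

    def project(self, p):
        """Intersect the ray cop->p with the image plane; return (u, v)."""
        d = np.asarray(p, float) - self.cop
        t = np.dot(self.o - self.cop, self.n) / np.dot(d, self.n)
        hit = self.cop + t * d - self.o
        return np.array([np.dot(hit, self.u), np.dot(hit, self.v)])

    def nearest_sample(self, p):
        """Index of the sampling location closest to the projection of p."""
        uv = self.project(p)
        return int(np.argmin(np.linalg.norm(self.samples - uv, axis=1)))

# Denser samples near the center model a focus-plus-context region of interest.
samples = np.concatenate([np.random.rand(64, 2),                # context
                          0.4 + 0.2 * np.random.rand(256, 2)])  # focus region
cam = GeneralPinholeCamera(cop=[0, 0, 0], plane_origin=[-0.5, -0.5, 1],
                           u_axis=[1, 0, 0], v_axis=[0, 1, 0],
                           samples_uv=samples)
print(cam.nearest_sample([0.1, 0.05, 3.0]))
```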

11.
Mil Med ; 185(Suppl 1): 513-520, 2020 01 07.
Article in English | MEDLINE | ID: mdl-32074347

ABSTRACT

INTRODUCTION: Point-of-injury (POI) care requires immediate specialized assistance, but delays and expertise lapses can lead to complications. In such scenarios, telementoring can benefit health practitioners by transmitting guidance from remote specialists. However, current telementoring systems are not appropriate for POI care. This article clinically evaluates our System for Telementoring with Augmented Reality (STAR), a novel telementoring system based on an augmented reality head-mounted display. The system is portable, self-contained, and displays virtual surgical guidance onto the operating field. These capabilities can facilitate telementoring in POI scenarios while mitigating limitations of conventional telementoring systems. METHODS: Twenty participants performed leg fasciotomies on cadaveric specimens under one of two experimental conditions: telementoring using STAR, or no telementoring but with prior review of the procedure. An expert surgeon evaluated the participants' performance in terms of completion time, number of errors, and procedure-related scores. Additional metrics included a self-reported confidence score and postexperiment questionnaires. RESULTS: STAR effectively delivered surgical guidance to nonspecialist health practitioners: participants using STAR performed fewer errors and obtained higher procedure-related scores. CONCLUSIONS: This work validates STAR as a viable surgical telementoring platform, which could be further explored to aid in scenarios where life-saving care must be delivered in a prehospital setting.


Subjects
Continuing Medical Education/standards, Fasciotomy/methods, Mentoring/standards, Telemedicine/standards, Augmented Reality, Cadaver, Continuing Medical Education/methods, Continuing Medical Education/statistics & numerical data, Fasciotomy/statistics & numerical data, Humans, Indiana, Mentoring/methods, Mentoring/statistics & numerical data, Program Evaluation/methods, Telemedicine/methods, Telemedicine/statistics & numerical data
12.
Surgery ; 167(4): 724-731, 2020 04.
Article in English | MEDLINE | ID: mdl-31916990

ABSTRACT

BACKGROUND: The surgical workforce, particularly in rural regions, needs novel approaches to reinforce the skills and confidence of health practitioners. Although conventional telementoring systems have proven beneficial in addressing this gap, the benefits of augmented reality-based telementoring platforms for the coaching and confidence of medical personnel are yet to be evaluated. METHODS: A total of 20 participants were guided by remote expert surgeons to perform leg fasciotomies on cadavers under one of two conditions: (1) telementoring (with our System for Telementoring with Augmented Reality) or (2) independently reviewing the procedure beforehand. Using the Individual Performance Score and the Weighted Individual Performance Score, two on-site expert surgeons evaluated the participants. Postexperiment metrics included number of errors, procedure completion time, and self-reported confidence scores. A total of six objective measurements were obtained to describe the self-reported confidence scores and the overall quality of the coaching. Additional analyses were performed based on the participants' expertise level. RESULTS: Participants using the System for Telementoring with Augmented Reality received 10% greater Weighted Individual Performance Scores (P = .03) and performed 67% fewer errors (P = .04). Moreover, participants with lower surgical expertise who used the System for Telementoring with Augmented Reality received 17% greater Individual Performance Scores (P = .04) and 32% greater Weighted Individual Performance Scores (P < .01), and performed 92% fewer errors (P < .001). In addition, participants using the System for Telementoring with Augmented Reality reported 25% more confidence in all evaluated aspects (P < .03). On average, participants using the System for Telementoring with Augmented Reality received augmented reality guidance 19 times, and were guided for 47% of their total task completion time. CONCLUSION: Participants using the System for Telementoring with Augmented Reality performed leg fasciotomies with fewer errors and received better performance scores. In addition, participants using the System for Telementoring with Augmented Reality reported being more confident when performing fasciotomies under telementoring. Augmented reality head-mounted display-based telementoring successfully provided confidence and coaching to medical personnel.


Subjects
Augmented Reality, General Surgery/education, Mentoring/methods, Telemedicine/methods, Adult, Female, Humans, Male
13.
NPJ Digit Med ; 3: 75, 2020.
Article in English | MEDLINE | ID: mdl-32509972

ABSTRACT

Telementoring platforms can help transfer surgical expertise remotely. However, most telementoring platforms are not designed to assist in austere, pre-hospital settings. This paper evaluates the System for Telementoring with Augmented Reality (STAR), a portable and self-contained telementoring platform based on an augmented reality head-mounted display (ARHMD). The system is designed to assist in austere scenarios: a stabilized first-person view of the operating field is sent to a remote expert, who creates surgical instructions that a local first responder wearing the ARHMD can visualize as three-dimensional models projected onto the patient's body. We hypothesized that remote guidance with STAR would lead to better performance of a surgical procedure than remote audio-only guidance. Remote expert surgeons guided first responders through training cricothyroidotomies in a simulated austere scenario, and on-site surgeons evaluated the participants using standardized evaluation tools. The evaluation included completion time and technique performance of specific cricothyroidotomy steps. The analyses also considered the participants' years of experience as first responders and their experience performing cricothyroidotomies. A linear mixed model analysis showed that using STAR was associated with higher procedural and non-procedural scores, and overall better performance. Additionally, a binary logistic regression analysis showed that using STAR was associated with safer and more successful executions of cricothyroidotomies. This work demonstrates that remote mentors can use STAR to provide first responders with guidance and surgical knowledge, and represents a first step towards the adoption of ARHMDs to convey clinical expertise remotely in austere scenarios.

14.
IEEE Trans Vis Comput Graph ; 25(11): 3073-3082, 2019 11.
Article in English | MEDLINE | ID: mdl-31403415

ABSTRACT

Photogrammetry is a popular method of 3D reconstruction that uses conventional photos as input. This method can achieve high quality reconstructions so long as the scene is densely acquired from multiple views with sufficient overlap between nearby images. However, it is challenging for a human operator to know during acquisition if sufficient coverage has been achieved. Insufficient coverage of the scene can result in holes, missing regions, or even a complete failure of reconstruction. These errors require manually repairing the model or returning to the scene to acquire additional views, which is time-consuming and often infeasible. We present a novel approach to photogrammetric acquisition that uses an AR HMD to predict a set of covering views and to interactively guide an operator to capture imagery from each view. The operator wears an AR HMD and uses a handheld camera rig that is tracked relative to the AR HMD with a fiducial marker. The AR HMD tracks its pose relative to the environment and automatically generates a coarse geometric model of the scene, which our approach analyzes at runtime to generate a set of human-reachable acquisition views covering the scene with consistent camera-to-scene distance and image overlap. The generated view locations are rendered to the operator on the AR HMD. Interactive visual feedback informs the operator how to align the camera to assume each suggested pose. When the camera is in range, an image is automatically captured. In this way, a set of images suitable for 3D reconstruction can be captured in a matter of minutes. In a user study, participants who were novices at photogrammetry were tasked with acquiring a challenging and complex scene either without guidance or with our AR HMD based guidance. Participants using our guidance achieved improved reconstructions without cases of reconstruction failure as in the control condition. Our AR HMD based approach is self-contained, portable, and provides specific acquisition guidance tailored to the geometry of the scene being captured.

15.
IEEE Trans Vis Comput Graph ; 25(6): 2242-2254, 2019 Jun.
Article in English | MEDLINE | ID: mdl-29993912

ABSTRACT

We present a method for the fast computation of the intersection between a ray and the geometry of a scene. The scene geometry is simplified with a 2D array of voxelizations computed from different directions, sampling the space of all possible directions. The 2D array of voxelizations is compressed using a vector quantization approach. The ray-scene intersection is approximated using the voxelization whose rows are most closely aligned with the ray. The voxelization row that contains the ray is looked up, the row is truncated to the extent of the ray using bit operations, and a truncated row with non-zero bits indicates that the ray intersects the scene. We support dynamic scenes with rigidly moving objects by building a separate 2D array of voxelizations for each type of object, and by using the same 2D array of voxelizations for all instances of an object type. We support complex dynamic scenes and scenes with deforming geometry by computing and rotating a single voxelization on the fly. We demonstrate the benefits of our method in the context of interactive rendering of scenes with thousands of moving lights, where we compare our method to ray tracing, to conventional shadow mapping, and to imperfect shadow maps.
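The following sketch illustrates the row-truncation test described above, with a voxelization row stored as a bit string. The function name and the single-row lookup are illustrative assumptions; the paper maintains a 2D array of voxelizations over sampled directions and selects the one best aligned with the ray.

```python
# Sketch: a voxelization row is a bit string (here a Python int); the row
# containing the ray is truncated to the ray's extent with bit operations,
# and a non-zero result signals a possible ray-scene intersection.
def ray_hits_row(row_bits, i0, i1, row_length):
    """Test occupancy of row cells i0..i1 (inclusive); cell 0 is leftmost."""
    width = i1 - i0 + 1
    mask = ((1 << width) - 1) << (row_length - 1 - i1)
    return (row_bits & mask) != 0

# A 16-cell row with occupied cells at indices 3 and 9.
row = (1 << (15 - 3)) | (1 << (15 - 9))
print(ray_hits_row(row, 0, 2, 16))    # False: segment ends before cell 3
print(ray_hits_row(row, 2, 5, 16))    # True: segment covers occupied cell 3
print(ray_hits_row(row, 10, 15, 16))  # False: segment starts after cell 9
```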

16.
IEEE Trans Vis Comput Graph ; 25(5): 2083-2092, 2019 May.
Article in English | MEDLINE | ID: mdl-30762556

ABSTRACT

Virtual Reality (VR) applications allow a user to explore a scene intuitively through a tracked head-mounted display (HMD). However, in complex scenes, occlusions make scene exploration inefficient, as the user has to navigate around occluders to gain line of sight to potential regions of interest. When a scene region proves to be of no interest, the user has to retrace their path, and such a sequential scene exploration implies significant amounts of wasted navigation. Furthermore, as the virtual world is typically much larger than the tracked physical space hosting the VR application, the intuitive one-to-one mapping between the virtual and real space has to be temporarily suspended for the user to teleport or redirect in order to conform to the physical space constraints. In this paper we introduce a method for improving VR exploration efficiency by automatically constructing a multiperspective visualization that removes occlusions. For each frame, the scene is first rendered conventionally; the z-buffer is analyzed to detect horizontal and vertical depth discontinuities; the discontinuities are used to define disocclusion portals, which are 3D scene rectangles for routing rays around occluders; and the disocclusion portals are used to render a multiperspective image that alleviates occlusions. The user controls the multiperspective disocclusion effect, deploying and retracting it with small head translations. We have quantified the VR exploration efficiency brought by our occlusion removal method in a study where participants searched for a stationary target and chased a dynamic target. Our method showed an advantage over conventional VR exploration in terms of reducing the navigation distance, the view direction rotation, the number of redirections, and the task completion time. These advantages did not come at the cost of a reduction in depth perception or situational awareness, or of an increase in simulator sickness.
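A minimal sketch of the depth-discontinuity detection step described above, assuming a numpy z-buffer; grouping discontinuities into portal rectangles and the multiperspective rendering are not shown, and all names are illustrative.

```python
# Sketch: mark adjacent z-buffer samples whose depths differ by more than a
# threshold; such discontinuities trace occluder silhouettes and would seed
# disocclusion portals in a full implementation.
import numpy as np

def depth_discontinuities(zbuffer, threshold):
    """Boolean masks of horizontal and vertical depth discontinuities."""
    dz_h = np.abs(np.diff(zbuffer, axis=1)) > threshold   # between x-neighbors
    dz_v = np.abs(np.diff(zbuffer, axis=0)) > threshold   # between y-neighbors
    return dz_h, dz_v

z = np.full((4, 6), 10.0)
z[1:3, 2:4] = 2.0          # a near occluder in front of a far background
h, v = depth_discontinuities(z, threshold=1.0)
print(np.argwhere(h))       # pixel pairs straddling vertical occluder edges
print(np.argwhere(v))       # pixel pairs straddling horizontal occluder edges
```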

17.
Mil Med ; 184(Suppl 1): 57-64, 2019 03 01.
Article in English | MEDLINE | ID: mdl-30901394

ABSTRACT

Combat trauma injuries require urgent and specialized care. When patient evacuation is infeasible, critical life-saving care must be given at the point of injury in real time and under the austere conditions associated with forward operating bases. Surgical telementoring allows local generalists to receive remote instruction from specialists thousands of miles away. However, current telementoring systems have limited annotation capabilities and lack direct visualization of the future result of the surgical actions recommended by the specialist. The System for Telementoring with Augmented Reality (STAR) is a surgical telementoring platform that improves the transfer of medical expertise by integrating a full-size interaction table, on which mentors create graphical annotations, with augmented reality (AR) devices that display the surgical annotations directly onto the generalist's field of view. Along with an explanation of the system's features, this paper provides results of user studies that validate STAR as a comprehensive AR surgical telementoring platform. In addition, potential future applications of STAR are discussed, as desired features that state-of-the-art AR medical telementoring platforms should have when they target combat trauma scenarios.


Subjects
Mentoring/methods, Remote Consultation/methods, Teaching/standards, Virtual Reality, Humans, Teaching/trends
18.
Simul Healthc ; 14(1): 59-66, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30395078

ABSTRACT

INTRODUCTION: Surgical telementoring connects expert mentors with trainees performing urgent care in austere environments. However, such environments impose unreliable network quality, with significant latency and low bandwidth. We have developed an augmented reality telementoring system that includes future step visualization of the medical procedure. Pregenerated video instructions of the procedure are dynamically overlaid onto the trainee's view of the operating field when the network connection with a mentor is unreliable. METHODS: Our future step visualization uses a tablet suspended above the patient's body, through which the trainee views the operating field. Before trainee use, an expert records a "future library" of step-by-step video footage of the operation. Videos are displayed to the trainee as semitransparent graphical overlays. We conducted a study where participants completed a cricothyroidotomy under telementored guidance. Participants used one of two telementoring conditions: conventional telestrator or our system with future step visualization. During the operation, the connection between trainee and mentor was bandwidth throttled. Recorded metrics were idle time ratio, recall error, and task performance. RESULTS: Participants in the future step visualization condition had a 48% smaller idle time ratio (14.5% vs. 27.9%, P < 0.001), 26% lower recall error (119 vs. 161, P = 0.042), and 10% higher task performance scores (rater 1 = 90.83 vs. 81.88, P = 0.008; rater 2 = 88.54 vs. 79.17, P = 0.042) than participants in the telestrator condition. CONCLUSIONS: Future step visualization in surgical telementoring is an important fallback mechanism when the trainee/mentor network connection is poor, and it is a key step toward semiautonomous and then completely mentor-free medical assistance systems.


Subjects
Mentors, Operative Surgical Procedures/education, Telemedicine/instrumentation, User-Computer Interface, Clinical Competence, Handheld Computers, Humans, Time Factors
19.
IEEE Trans Vis Comput Graph ; 14(4): 937-47, 2008.
Article in English | MEDLINE | ID: mdl-18467766

ABSTRACT

In this application paper, we describe the efforts of a multidisciplinary team towards producing a visualization of the September 11 Attack on the North Tower of New York's World Trade Center. The visualization was designed to meet two requirements. First, the visualization had to depict the impact with high fidelity, by closely following the laws of physics. Second, the visualization had to be eloquent to a nonexpert user. This was achieved by first designing and computing a finite-element analysis (FEA) simulation of the impact between the aircraft and the top 20 stories of the building, and then by visualizing the FEA results with a state-of-the-art commercial animation system. The visualization was enabled by an automatic translator that converts the simulation data into an animation system 3D scene. We built upon a previously developed translator. The translator was substantially extended to enable and control visualization of fire and of disintegrating elements, to better scale with the number of nodes and number of states, to handle beam elements with complex profiles, and to handle smoothed particle hydrodynamics liquid representation. The resulting translator is a powerful automatic and scalable tool for high-quality visualization of FEA results.


Subjects
Algorithms, Computer Graphics, Three-Dimensional Imaging/methods, Computer-Assisted Numerical Analysis, September 11 Terrorist Attacks, User-Computer Interface
20.
IEEE Trans Vis Comput Graph ; 24(12): 3069-3080, 2018 12.
Article in English | MEDLINE | ID: mdl-29990065

ABSTRACT

Immersive navigation in virtual reality (VR) and augmented reality (AR) leverages physical locomotion through pose tracking of the head-mounted display. While this navigation modality is intuitive, regions of interest in the scene may suffer from occlusion and require significant viewpoint translation. Moreover, limited physical space and user mobility need to be taken into consideration. Some regions of interest may require viewpoints that are physically unreachable without less intuitive methods such as walking in-place or redirected walking. We propose a novel approach for increasing navigation efficiency in VR and AR using multiperspective visualization. Our approach samples occluded regions of interest from additional perspectives, which are integrated seamlessly into the user's perspective. This approach improves navigation efficiency by bringing simultaneously into view multiple regions of interest, allowing the user to explore more while moving less. We have conducted a user study that shows that our method brings significant performance improvement in VR and AR environments, on tasks that include tracking, matching, searching, and ambushing objects of interest.
