Results 1 - 20 of 35
1.
IEEE Trans Vis Comput Graph ; 30(5): 2337-2346, 2024 May.
Article in English | MEDLINE | ID: mdl-38437098

ABSTRACT

VR headsets have limited rendering capability, which constrains the size and detail of the virtual environment (VE) that can be used in VR applications. One solution is cloud VR, where "thin" VR clients are assisted by a server. This paper describes CloVR, a cloud VR system that provides fast loading times, as needed to let users see and interact with the VE quickly at session startup or after teleportation. The server reduces the original VE to a compact representation through near-far partitioning. The server renders the far region to an environment map, which it sends to the client together with the near-region geometry, from which the client renders quality frames locally, with low latency. The near region starts out small and grows progressively, with strict visual continuity, minimizing startup time. The low-latency and fast-startup advantages of CloVR were validated in a user study where groups of 8 participants wearing all-in-one VR headsets (Quest 2s) were supported by a laptop server to run a collaborative VR application with a 25-million-triangle VE.
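The near-far partitioning described above can be sketched as follows. This is a minimal illustration under assumed conventions: the function name `near_far_partition`, the centroid distance test, and the fixed `radius` are illustrative assumptions, not necessarily the system's actual partitioning criterion.

```python
import math

def near_far_partition(triangles, eye, radius):
    """Split scene triangles into a near set (sent to the client as
    geometry) and a far set (rendered server-side to an environment map),
    by the distance of each triangle's centroid from the user's viewpoint.
    Hypothetical sketch; the real system's criterion may differ."""
    near, far = [], []
    for tri in triangles:
        centroid = tuple(sum(v[i] for v in tri) / 3.0 for i in range(3))
        if math.dist(eye, centroid) <= radius:
            near.append(tri)
        else:
            far.append(tri)
    return near, far
```

As the near region grows progressively, triangles would migrate from the far set (environment map) to the near set (client-side geometry), preserving visual continuity.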

2.
IEEE Trans Vis Comput Graph ; 30(5): 2454-2463, 2024 May.
Article in English | MEDLINE | ID: mdl-38437137

ABSTRACT

The paper introduces PreVR, a method for allowing the user of a VR application to preview a virtual environment (VE) around any number of corners. This way the user can gain line of sight to any part of the VE, no matter how distant or how heavily occluded it is. PreVR relies on a multiperspective visualization that implements a higher-order disocclusion effect with piecewise linear rays that bend multiple times, as needed to reach the visualization target. PreVR was evaluated in a user study (N = 88) that investigated four points on the VR interface design continuum defined by the maximum disocclusion order δ. The first control condition (CC0), with δ = 0, corresponds to conventional VR exploration with no preview capability. The second control condition (CC1), with δ = 1, corresponds to the prior-art approach of giving the user a preview around the first corner. In the first experimental condition (EC3), δ = 3, so PreVR provided up to third-order disocclusion. In the second experimental condition (ECN), δ was not capped, so PreVR could provide a disocclusion effect of any order, as needed to reach any location in the VE. Participants searched for a stationary target, for a dynamic target moving on a random continuous trajectory, and for a transient dynamic target that appeared at random locations in the maze and disappeared 5 s later. The study quantified VE exploration efficiency with four metrics: viewpoint translation, view direction rotation, number of teleportations, and task completion time. Results show that the previews afforded by PreVR bring a significant VE exploration efficiency advantage. ECN outperforms EC3, CC1, and CC0 for all metrics and all tasks, and EC3 frequently outperforms CC1 and CC0.

3.
Article in English | MEDLINE | ID: mdl-37390001

ABSTRACT

This paper presents two from-point visibility algorithms: one aggressive and one exact. The aggressive algorithm efficiently computes a nearly complete visible set, with the guarantee of finding all triangles of a front surface, no matter how small their image footprint. The exact algorithm starts from the aggressive visible set and finds the remaining visible triangles efficiently and robustly. The algorithms are based on the idea of generalizing the set of sampling locations defined by the pixels of an image. Starting from a conventional image with one sampling location at each pixel center, the aggressive algorithm adds sampling locations to make sure that a triangle is sampled at all the pixels it touches. Thereby, the aggressive algorithm finds all triangles that are completely visible at a pixel regardless of geometric level of detail, distance from viewpoint, or view direction. The exact algorithm builds an initial visibility subdivision from the aggressive visible set, which it then uses to find most of the hidden triangles. The triangles whose visibility status is yet to be determined are processed iteratively, with the help of additional sampling locations. Since the initial visible set is almost complete, and since each additional sampling location finds a new visible triangle, the algorithm converges in a few iterations.
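The guarantee of sampling a triangle at every pixel it touches can be illustrated with a deliberately conservative, bounding-box-based enumeration. This is a sketch only: the function name `touched_pixels` is an assumption, and the paper's algorithm places additional sampling locations more precisely than a bounding box does.

```python
import math

def touched_pixels(tri):
    """Conservatively enumerate the integer pixel cells overlapped by a
    2D triangle's bounding box. Adding a sampling location in each such
    pixel guarantees that even a sub-pixel triangle is sampled at least
    once, regardless of its image footprint. (Bounding boxes overestimate
    for sliver triangles; an exact method would clip the triangle against
    each pixel square.)"""
    xs = [p[0] for p in tri]
    ys = [p[1] for p in tri]
    return [(x, y)
            for y in range(math.floor(min(ys)), math.ceil(max(ys)))
            for x in range(math.floor(min(xs)), math.ceil(max(xs)))]
```

For example, a triangle entirely inside pixel (0, 0) still yields that one pixel, so it cannot be missed by sampling only at pixel centers.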

4.
Article in English | MEDLINE | ID: mdl-37027709

ABSTRACT

This paper proposes a general handheld-stick haptic redirection method that allows the user to experience complex shapes with haptic feedback through both tapping and extended contact, such as in contour tracing. As the user extends the stick to make contact with a virtual object, the contact point with the virtual object and the targeted contact point with the physical object are continually updated, and the virtual stick is redirected to synchronize the virtual and real contacts. Redirection is applied either just to the virtual stick, or to both the virtual stick and hand. A user study (N = 26) confirms the effectiveness of the proposed redirection method. A first experiment following a two-interval forced-choice design reveals offset detection thresholds of [-15 cm, +15 cm]. A second experiment asks participants to guess the shape of an invisible virtual object by tapping it and by tracing its contour with the handheld stick, using a real-world disk as a source of passive haptic feedback. The experiment reveals that, using our haptic redirection method, participants can identify the invisible object with 78% accuracy.

5.
IEEE Trans Vis Comput Graph ; 28(9): 3154-3167, 2022 Sep.
Article in English | MEDLINE | ID: mdl-33476271

ABSTRACT

This article presents an occlusion management approach that handles fine-grain occlusions, and that quantifies and localizes occlusions as a user explores a virtual environment (VE). Fine-grain occlusions are handled by finding the VE region where they occur, and by constructing a multiperspective visualization that lets the user explore the region from the current location, with intuitive head motions, without first having to walk to the region. VE geometry close to the user is rendered conventionally, from the user's viewpoint, to anchor the user, avoiding disorientation and simulator sickness. Given a viewpoint, residual occlusions are quantified and localized as VE voxels that cannot be seen from the given viewpoint but that can be seen from nearby viewpoints. This residual occlusion quantification and localization helps the user ascertain that a VE region has been explored exhaustively. The occlusion management approach was tested in three controlled studies, which confirmed the exploration efficiency benefit of the approach, and in perceptual experiments, which confirmed that exploration efficiency does not come at the cost of reducing spatial awareness and sense of presence, or of increasing simulator sickness.

6.
IEEE Trans Vis Comput Graph ; 27(11): 4087-4096, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34449378

ABSTRACT

A common approach for Augmented Reality labeling is to display the label text on a flag planted into the real-world element at a 3D anchor point. When there are more than just a few labels, the efficiency of the interface decreases, as the user has to search for a given label sequentially. The search can be accelerated by sorting the labels alphabetically, but sorting all labels results in long and intersecting leader lines from the anchor points to the labels. This paper proposes a partially-sorted concentric label layout that leverages the search efficiency of sorting while avoiding the label display problems of long or intersecting leader lines. The labels are partitioned into a small number of sorted sequences displayed on circles of increasing radii. Since the labels on a circle are sorted, the user can quickly search each circle. A tight upper bound derived from circular permutation theory limits the number of circles and thereby the complexity of the label layout. For example, 12 labels require at most three circles. When the application allows it, the labels are presorted to further reduce the number of circles in the layout. The layout was tested in a user study where it significantly reduced the label searching time compared to a conventional single-circle layout.
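The partition into sorted circular sequences can be sketched with a greedy assignment. This is an assumption-laden illustration: labels are taken in their anchor order around the object, and each is appended to the first circle whose sequence remains cyclically sorted. The greedy choice is not guaranteed to achieve the paper's tight upper bound from circular permutation theory.

```python
def cyclically_sorted(seq):
    """A sequence read around a circle is sorted iff it has at most one
    'descent', i.e. at most one position where the next element (with
    wrap-around) sorts before the current one."""
    n = len(seq)
    return sum(seq[i] > seq[(i + 1) % n] for i in range(n)) <= 1

def partition_into_circles(labels):
    """Greedily assign labels, in anchor order, to the first circle whose
    sequence stays cyclically sorted; open a new circle when none fits.
    Sketch only -- not the paper's bound-achieving construction."""
    circles = []
    for label in labels:
        for seq in circles:
            if cyclically_sorted(seq + [label]):
                seq.append(label)
                break
        else:
            circles.append([label])
    return circles
```

Each resulting circle can be searched quickly because, starting anywhere on it, the labels read in sorted order up to a single wrap-around point.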

7.
NPJ Digit Med ; 3: 75, 2020.
Article in English | MEDLINE | ID: mdl-32509972

ABSTRACT

Telementoring platforms can help transfer surgical expertise remotely. However, most telementoring platforms are not designed to assist in austere, pre-hospital settings. This paper evaluates the System for Telementoring with Augmented Reality (STAR), a portable and self-contained telementoring platform based on an augmented reality head-mounted display (ARHMD). The system is designed to assist in austere scenarios: a stabilized first-person view of the operating field is sent to a remote expert, who creates surgical instructions that a local first responder wearing the ARHMD can visualize as three-dimensional models projected onto the patient's body. We hypothesized that remote guidance with STAR would lead to better surgical performance than remote audio-only guidance. Remote expert surgeons guided first responders through training cricothyroidotomies in a simulated austere scenario, and on-site surgeons evaluated the participants using standardized evaluation tools. The evaluation comprised completion time and technique performance on specific cricothyroidotomy steps. The analyses also accounted for the participants' years of experience as first responders and their experience performing cricothyroidotomies. A linear mixed model analysis showed that using STAR was associated with higher procedural and non-procedural scores, and overall better performance. Additionally, a binary logistic regression analysis showed that using STAR was associated with safer and more successful executions of cricothyroidotomies. This work demonstrates that remote mentors can use STAR to provide first responders with guidance and surgical knowledge, and represents a first step towards the adoption of ARHMDs to convey clinical expertise remotely in austere scenarios.

8.
Mil Med ; 185(Suppl 1): 513-520, 2020 01 07.
Article in English | MEDLINE | ID: mdl-32074347

ABSTRACT

INTRODUCTION: Point-of-injury (POI) care requires immediate specialized assistance, but delays and expertise lapses can lead to complications. In such scenarios, telementoring can benefit health practitioners by transmitting guidance from remote specialists. However, current telementoring systems are not appropriate for POI care. This article clinically evaluates our System for Telementoring with Augmented Reality (STAR), a novel telementoring system based on an augmented reality head-mounted display. The system is portable, self-contained, and displays virtual surgical guidance onto the operating field. These capabilities can facilitate telementoring in POI scenarios while mitigating limitations of conventional telementoring systems. METHODS: Twenty participants performed leg fasciotomies on cadaveric specimens under one of two experimental conditions: telementoring using STAR, or no telementoring but reviewing the procedure beforehand. An expert surgeon evaluated the participants' performance in terms of completion time, number of errors, and procedure-related scores. Additional metrics included a self-reported confidence score and postexperiment questionnaires. RESULTS: STAR effectively delivered surgical guidance to nonspecialist health practitioners: participants using STAR performed fewer errors and obtained higher procedure-related scores. CONCLUSIONS: This work validates STAR as a viable surgical telementoring platform, which could be further explored to aid in scenarios where life-saving care must be delivered in a prehospital setting.


Subject(s)
Education, Medical, Continuing/standards; Fasciotomy/methods; Mentoring/standards; Telemedicine/standards; Augmented Reality; Cadaver; Education, Medical, Continuing/methods; Education, Medical, Continuing/statistics & numerical data; Fasciotomy/statistics & numerical data; Humans; Indiana; Mentoring/methods; Mentoring/statistics & numerical data; Program Evaluation/methods; Telemedicine/methods; Telemedicine/statistics & numerical data
9.
Surgery ; 167(4): 724-731, 2020 04.
Article in English | MEDLINE | ID: mdl-31916990

ABSTRACT

BACKGROUND: The surgical workforce, particularly in rural regions, needs novel approaches to reinforce the skills and confidence of health practitioners. Although conventional telementoring systems have proven beneficial to address this gap, the benefits of augmented reality-based telementoring platforms for the coaching and confidence of medical personnel are yet to be evaluated. METHODS: A total of 20 participants were guided by remote expert surgeons to perform leg fasciotomies on cadavers under one of two conditions: (1) telementoring (with our System for Telementoring with Augmented Reality) or (2) independently reviewing the procedure beforehand. Using the Individual Performance Score and the Weighted Individual Performance Score, two on-site expert surgeons evaluated the participants. Postexperiment metrics included number of errors, procedure completion time, and self-reported confidence scores. A total of six objective measurements were obtained to describe the self-reported confidence scores and the overall quality of the coaching. Additional analyses were performed based on the participants' expertise level. RESULTS: Participants using the System for Telementoring with Augmented Reality received 10% greater Weighted Individual Performance Scores (P = .03) and performed 67% fewer errors (P = .04). Moreover, participants with lower surgical expertise who used the System for Telementoring with Augmented Reality received 17% greater Individual Performance Scores (P = .04) and 32% greater Weighted Individual Performance Scores (P < .01), and performed 92% fewer errors (P < .001). In addition, participants using the System for Telementoring with Augmented Reality reported 25% more confidence in all evaluated aspects (P < .03). On average, participants using the System for Telementoring with Augmented Reality received augmented reality guidance 19 times, for 47% of their total task completion time.
CONCLUSION: Participants using the System for Telementoring with Augmented Reality performed leg fasciotomies with fewer errors and received better performance scores. In addition, participants using the System for Telementoring with Augmented Reality reported being more confident when performing fasciotomies under telementoring. Augmented Reality Head-Mounted Display-based telementoring successfully provided confidence and coaching to medical personnel.


Subject(s)
Augmented Reality; General Surgery/education; Mentoring/methods; Telemedicine/methods; Adult; Female; Humans; Male
10.
IEEE Trans Vis Comput Graph ; 25(11): 3073-3082, 2019 11.
Article in English | MEDLINE | ID: mdl-31403415

ABSTRACT

Photogrammetry is a popular method of 3D reconstruction that uses conventional photos as input. This method can achieve high quality reconstructions so long as the scene is densely acquired from multiple views with sufficient overlap between nearby images. However, it is challenging for a human operator to know during acquisition if sufficient coverage has been achieved. Insufficient coverage of the scene can result in holes, missing regions, or even a complete failure of reconstruction. These errors require manually repairing the model or returning to the scene to acquire additional views, which is time-consuming and often infeasible. We present a novel approach to photogrammetric acquisition that uses an AR HMD to predict a set of covering views and to interactively guide an operator to capture imagery from each view. The operator wears an AR HMD and uses a handheld camera rig that is tracked relative to the AR HMD with a fiducial marker. The AR HMD tracks its pose relative to the environment and automatically generates a coarse geometric model of the scene, which our approach analyzes at runtime to generate a set of human-reachable acquisition views covering the scene with consistent camera-to-scene distance and image overlap. The generated view locations are rendered to the operator on the AR HMD. Interactive visual feedback informs the operator how to align the camera to assume each suggested pose. When the camera is in range, an image is automatically captured. In this way, a set of images suitable for 3D reconstruction can be captured in a matter of minutes. In a user study, participants who were novices at photogrammetry were tasked with acquiring a challenging and complex scene either without guidance or with our AR HMD based guidance. Participants using our guidance achieved improved reconstructions without cases of reconstruction failure as in the control condition. 
Our AR HMD based approach is self-contained, portable, and provides specific acquisition guidance tailored to the geometry of the scene being captured.

11.
Mil Med ; 184(Suppl 1): 57-64, 2019 03 01.
Article in English | MEDLINE | ID: mdl-30901394

ABSTRACT

Combat trauma injuries require urgent and specialized care. When patient evacuation is infeasible, critical life-saving care must be given at the point of injury in real time, under the austere conditions associated with forward operating bases. Surgical telementoring allows local generalists to receive remote instruction from specialists thousands of miles away. However, current telementoring systems have limited annotation capabilities and lack direct visualization of the future result of the specialist's surgical actions. The System for Telementoring with Augmented Reality (STAR) is a surgical telementoring platform that improves the transfer of medical expertise by integrating a full-size interaction table, on which mentors create graphical annotations, with augmented reality (AR) devices that display the surgical annotations directly onto the generalist's field of view. Along with an explanation of the system's features, this paper provides results of user studies that validate STAR as a comprehensive AR surgical telementoring platform. In addition, potential future applications of STAR are discussed: desired features that state-of-the-art AR medical telementoring platforms should have when targeting combat trauma scenarios.


Subject(s)
Mentoring/methods; Remote Consultation/methods; Teaching/standards; Virtual Reality; Humans; Teaching/trends
12.
IEEE Trans Vis Comput Graph ; 25(5): 2083-2092, 2019 May.
Article in English | MEDLINE | ID: mdl-30762556

ABSTRACT

Virtual Reality (VR) applications allow a user to explore a scene intuitively through a tracked head-mounted display (HMD). However, in complex scenes, occlusions make scene exploration inefficient, as the user has to navigate around occluders to gain line of sight to potential regions of interest. When a scene region proves to be of no interest, the user has to retrace their path, and such sequential scene exploration implies significant amounts of wasted navigation. Furthermore, as the virtual world is typically much larger than the tracked physical space hosting the VR application, the intuitive one-to-one mapping between the virtual and real space has to be temporarily suspended for the user to teleport or redirect in order to conform to the physical space constraints. In this paper we introduce a method for improving VR exploration efficiency by automatically constructing a multiperspective visualization that removes occlusions. For each frame, the scene is first rendered conventionally; the z-buffer is analyzed to detect horizontal and vertical depth discontinuities; the discontinuities are used to define disocclusion portals, i.e., 3D scene rectangles for routing rays around occluders; and the disocclusion portals are used to render a multiperspective image that alleviates occlusions. The user controls the multiperspective disocclusion effect, deploying and retracting it with small head translations. We have quantified the VR exploration efficiency brought by our occlusion removal method in a study where participants searched for a stationary target and chased a dynamic target. Our method showed an advantage over conventional VR exploration in terms of reducing the navigation distance, the view direction rotation, the number of redirections, and the task completion time. These advantages did not come at the cost of a reduction in depth perception or situational awareness, or of an increase in simulator sickness.
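The z-buffer analysis step can be sketched as follows. This is a minimal illustration under assumed conventions: the function name `depth_discontinuities` and the fixed `threshold` are assumptions, and the paper's portal construction involves more than this per-pixel test.

```python
import numpy as np

def depth_discontinuities(zbuf, threshold):
    """Flag horizontal depth discontinuities in a z-buffer: adjacent
    pixel pairs whose depth difference exceeds `threshold`. Vertical
    discontinuities are found the same way on the transposed buffer.
    Returns a boolean mask one column narrower than the z-buffer."""
    return np.abs(np.diff(zbuf, axis=1)) > threshold
```

The flagged pixel pairs mark silhouettes of occluders, where disocclusion portals would be anchored.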

13.
Ann Surg ; 270(2): 384-389, 2019 08.
Article in English | MEDLINE | ID: mdl-29672404

ABSTRACT

OBJECTIVE: This study investigates the benefits of a surgical telementoring system based on an augmented reality head-mounted display (ARHMD) that overlays surgical instructions directly onto the surgeon's view of the operating field, without workspace obstruction. SUMMARY BACKGROUND DATA: In conventional telestrator-based telementoring, the surgeon views annotations of the surgical field by shifting focus to a nearby monitor, which substantially increases cognitive load. As an alternative, tablets have been used between the surgeon and the patient to display instructions; however, tablets impose additional obstructions of the surgeon's motions. METHODS: Twenty medical students performed anatomical marking (Task1) and abdominal incision (Task2) on a patient simulator, in 1 of 2 telementoring conditions: ARHMD and telestrator. The dependent variables were placement error, number of focus shifts, and completion time. Furthermore, workspace efficiency was quantified as the number and duration of potential surgeon-tablet collisions avoided by the ARHMD. RESULTS: The ARHMD condition yielded smaller placement errors (Task1: 45%, P < 0.001; Task2: 14%, P = 0.01), fewer focus shifts (Task1: 93%, P < 0.001; Task2: 88%, P = 0.0039), and longer completion times (Task1: 31%, P < 0.001; Task2: 24%, P = 0.013). Furthermore, the ARHMD avoided potential tablet collisions (4.8 for 3.2 seconds in Task1; 3.8 for 1.3 seconds in Task2). CONCLUSION: The ARHMD system promises to improve accuracy and to eliminate focus shifts in surgical telementoring. Because ARHMD participants were able to refine their execution of instructions, task completion time increased. Unlike a tablet system, the ARHMD does not require modifying natural motions to avoid collisions.


Subject(s)
Augmented Reality; Education, Medical/methods; General Surgery/education; Monitoring, Intraoperative/methods; Patient Simulation; Surgical Procedures, Operative/education; Telemedicine/methods; Adult; Female; Humans; Imaging, Three-Dimensional; Male; Surgeons/education; Young Adult
14.
Simul Healthc ; 14(1): 59-66, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30395078

ABSTRACT

INTRODUCTION: Surgical telementoring connects expert mentors with trainees performing urgent care in austere environments. However, such environments impose unreliable network quality, with significant latency and low bandwidth. We have developed an augmented reality telementoring system that includes future step visualization of the medical procedure. Pregenerated video instructions of the procedure are dynamically overlaid onto the trainee's view of the operating field when the network connection with a mentor is unreliable. METHODS: Our future step visualization uses a tablet suspended above the patient's body, through which the trainee views the operating field. Before trainee use, an expert records a "future library" of step-by-step video footage of the operation. Videos are displayed to the trainee as semitransparent graphical overlays. We conducted a study where participants completed a cricothyroidotomy under telementored guidance. Participants used one of two telementoring conditions: conventional telestrator or our system with future step visualization. During the operation, the connection between trainee and mentor was bandwidth throttled. Recorded metrics were idle time ratio, recall error, and task performance. RESULTS: Participants in the future step visualization condition had 48% smaller idle time ratio (14.5% vs. 27.9%, P < 0.001), 26% less recall error (119 vs. 161, P = 0.042), and 10% higher task performance scores (rater 1 = 90.83 vs. 81.88, P = 0.008; rater 2 = 88.54 vs. 79.17, P = 0.042) than participants in the telestrator condition. CONCLUSIONS: Future step visualization in surgical telementoring is an important fallback mechanism when trainee/mentor network connection is poor, and it is a key step towards semiautonomous and then completely mentor-free medical assistance systems.


Subject(s)
Mentors; Surgical Procedures, Operative/education; Telemedicine/instrumentation; User-Computer Interface; Clinical Competence; Computers, Handheld; Humans; Time Factors
15.
IEEE Trans Vis Comput Graph ; 25(6): 2242-2254, 2019 Jun.
Article in English | MEDLINE | ID: mdl-29993912

ABSTRACT

We present a method for the fast computation of the intersection between a ray and the geometry of a scene. The scene geometry is simplified with a 2D array of voxelizations computed from different directions, sampling the space of all possible directions. The 2D array of voxelizations is compressed using a vector quantization approach. The ray-scene intersection is approximated using the voxelization whose rows are most closely aligned with the ray. The voxelization row that contains the ray is looked up, the row is truncated to the extent of the ray using bit operations, and a truncated row with non-zero bits indicates that the ray intersects the scene. We support dynamic scenes with rigidly moving objects by building a separate 2D array of voxelizations for each type of object, and by using the same 2D array of voxelizations for all instances of an object type. We support complex dynamic scenes and scenes with deforming geometry by computing and rotating a single voxelization on the fly. We demonstrate the benefits of our method in the context of interactive rendering of scenes with thousands of moving lights, where we compare our method to ray tracing, to conventional shadow mapping, and to imperfect shadow maps.
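The row truncation with bit operations can be sketched as follows. The function name `row_intersects` and the packing convention (bit i set iff cell i of the row contains geometry) are illustrative assumptions.

```python
def row_intersects(row_bits, i0, i1):
    """Approximate ray-scene intersection test against one voxelization
    row. `row_bits` packs the row into an integer, bit i set iff cell i
    is occupied; [i0, i1] is the span of cells covered by the ray. The
    row is truncated to the ray's extent with a bit mask; any surviving
    set bit means the ray intersects scene geometry in this row."""
    mask = ((1 << (i1 - i0 + 1)) - 1) << i0
    return (row_bits & mask) != 0
```

Because the test reduces to a mask and a comparison, it is far cheaper than a geometric ray-triangle intersection, at the cost of voxel-level precision.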

16.
IEEE Trans Vis Comput Graph ; 24(12): 3069-3080, 2018 12.
Article in English | MEDLINE | ID: mdl-29990065

ABSTRACT

Immersive navigation in virtual reality (VR) and augmented reality (AR) leverages physical locomotion through pose tracking of the head-mounted display. While this navigation modality is intuitive, regions of interest in the scene may suffer from occlusion and require significant viewpoint translation. Moreover, limited physical space and user mobility need to be taken into consideration. Some regions of interest may require viewpoints that are physically unreachable without less intuitive methods such as walking in place or redirected walking. We propose a novel approach for increasing navigation efficiency in VR and AR using multiperspective visualization. Our approach samples occluded regions of interest from additional perspectives, which are integrated seamlessly into the user's perspective. This approach improves navigation efficiency by bringing multiple regions of interest into view simultaneously, allowing the user to explore more while moving less. We have conducted a user study showing that our method brings significant performance improvements in VR and AR environments, on tasks that include tracking, matching, searching, and ambushing objects of interest.

17.
IEEE Comput Graph Appl ; 37(4): 72-83, 2017.
Article in English | MEDLINE | ID: mdl-28829295

ABSTRACT

Education research has shown that instructor gestures can help capture, maintain, and direct the student's attention during a lecture as well as enhance learning and retention. Traditional education research on instructor gestures relies on video stimuli, which are time consuming to produce, especially when gesture precision and consistency across conditions are strictly enforced. The proposed system allows users to efficiently create accurate and effective stimuli for complex studies on gesture, without the need for computer animation expertise or artist talent.

18.
Mil Med ; 182(S1): 310-315, 2017 03.
Article in English | MEDLINE | ID: mdl-28291491

ABSTRACT

Telementoring can improve treatment of combat trauma injuries by connecting remote experienced surgeons with local less-experienced surgeons in an austere environment. Current surgical telementoring systems force the local surgeon to regularly shift focus away from the operating field to receive expert guidance, which can lead to surgery delays or even errors. The System for Telementoring with Augmented Reality (STAR) integrates expert-created annotations directly into the local surgeon's field of view. The local surgeon views the operating field by looking at a tablet display suspended between the patient and the surgeon that captures video of the surgical field. The remote surgeon remotely adds graphical annotations to the video. The annotations are sent back and displayed to the local surgeon while being automatically anchored to the operating field elements they describe. A technical evaluation demonstrates that STAR robustly anchors annotations despite tablet repositioning and occlusions. In a user study, participants used either STAR or a conventional telementoring system to precisely mark locations on a surgical simulator under a remote surgeon's guidance. Participants who used STAR completed the task with fewer focus shifts and with greater accuracy. STAR reduces the local surgeon's need to shift attention during surgery, allowing him or her to continuously work while looking "through" the tablet screen.


Subject(s)
Mentoring/methods; Patient Simulation; Remote Consultation/methods; Surgeons/standards; Telemedicine/methods; Clinical Competence/standards; Humans; Mentoring/standards; Remote Consultation/standards; Telemedicine/standards; Warfare
19.
Cogn Sci ; 41(2): 518-535, 2017 03.
Article in English | MEDLINE | ID: mdl-27128822

ABSTRACT

A beneficial effect of gesture on learning has been demonstrated in multiple domains, including mathematics, science, and foreign language vocabulary. However, because gesture is known to co-vary with other non-verbal behaviors, including eye gaze and prosody along with face, lip, and body movements, it is possible the beneficial effect of gesture is instead attributable to these other behaviors. We used a computer-generated animated pedagogical agent to control both verbal and non-verbal behavior. Children viewed lessons on mathematical equivalence in which an avatar either gestured or did not gesture, while eye gaze, head position, and lip movements remained identical across gesture conditions. Children who observed the gesturing avatar learned more, and they solved problems more quickly. Moreover, those children who learned were more likely to transfer and generalize their knowledge. These findings provide converging evidence that gesture facilitates math learning, and they reveal the potential for using technology to study non-verbal behavior in controlled experiments.


Subject(s)
Gestures; Learning; Mathematics; Teaching; Transfer, Psychology; Child; Female; Humans; Male; Problem Solving
20.
IEEE Trans Vis Comput Graph ; 22(5): 1555-67, 2016 May.
Article in English | MEDLINE | ID: mdl-27045911

ABSTRACT

Occlusions are a severe bottleneck for the visualization of large and complex datasets. Conventional images only show dataset elements to which there is a direct line of sight, which significantly limits the information bandwidth of the visualization. Multiperspective visualization is a powerful approach for alleviating occlusions to show more than what is visible from a single viewpoint. However, constructing and rendering multiperspective visualizations is challenging. We present a framework for designing multiperspective focus+context visualizations with great flexibility by manipulating the underlying camera model. The focus region viewpoint is adapted to alleviate occlusions. The framework supports multiperspective visualization in three scenarios. In a first scenario, the viewpoint is altered independently for individual image regions to avoid occlusions. In a second scenario, conventional input images are connected into a multiperspective image. In a third scenario, one or several data subsets of interest (i.e., targets) are visualized where they would be seen in the absence of occluders, as the user navigates or the targets move. The multiperspective images are rendered at interactive rates, leveraging the camera model's fast projection operation. We demonstrate the framework on terrain, urban, and molecular biology geometric datasets, as well as on volume rendered density datasets.


Subject(s)
Computer Graphics; Imaging, Three-Dimensional/methods; Algorithms