Results 1 - 9 of 9
1.
IEEE Trans Vis Comput Graph ; 29(11): 4611-4621, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37788213

ABSTRACT

In this paper, we present a prototype system for sharing a user's hand force in mixed reality (MR) remote collaboration on physical tasks, where hand force is estimated using a wearable surface electromyography (sEMG) sensor. In a remote collaboration between a worker and an expert, hand activity plays a crucial role. However, the force exerted by the worker's hand has not been extensively investigated. Our sEMG-based system reliably captures the worker's hand force during physical tasks and conveys this information to the expert through hand force visualization overlaid on the worker's view or on the worker's avatar. A user study was conducted to evaluate the impact of visualizing a worker's hand force on collaboration, employing three distinct visualization methods across two view modes. Our findings demonstrate that sensing and sharing hand force in MR remote collaboration improves the expert's awareness of the worker's task, significantly enhances the expert's perception of the collaborator's hand force and the weight of the interacting object, and promotes a heightened sense of social presence for the expert. Based on these findings, we provide design implications for future mixed reality remote collaboration systems that incorporate hand force sensing and visualization.


Subject(s)
Augmented Reality, Wearable Electronic Devices, Computer Graphics, Muscles
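
The abstract above does not spell out how the sEMG signal is turned into the force visualization, so the following is only an illustrative Python sketch: it rectifies and smooths a raw sEMG trace into an envelope, normalizes it by a maximum-voluntary-contraction level, and maps it to an RGBA overlay color. The function names, the moving-average window, and the green-to-red color mapping are assumptions, not details from the paper.

```python
import numpy as np

def semg_envelope(emg, fs, win_s=0.2):
    """Rectify the raw sEMG signal and smooth it with a moving-average window."""
    win = max(1, int(win_s * fs))
    kernel = np.ones(win) / win
    return np.convolve(np.abs(emg), kernel, mode="same")

def force_overlay_color(envelope, mvc_level):
    """Map activation normalized by maximum voluntary contraction (0..1) to an RGBA
    overlay: faint green at low force, opaque red at high force (assumed mapping)."""
    f = np.clip(envelope / mvc_level, 0.0, 1.0)
    return np.stack([f, 1.0 - f, np.zeros_like(f), 0.2 + 0.8 * f], axis=-1)

# Example: 2 s of synthetic sEMG at 1 kHz, with effort ramping up in the second half.
fs = 1000
t = np.linspace(0.0, 2.0, 2 * fs)
emg = np.random.randn(t.size) * (0.2 + 0.8 * np.clip(t - 1.0, 0.0, 1.0))
env = semg_envelope(emg, fs)
colors = force_overlay_color(env, mvc_level=env.max())
print(colors.shape)  # (2000, 4): one RGBA color per sample for the overlay
```

In an MR client, a color stream like this could drive the hue and opacity of the force cue drawn over the worker's hand or avatar; the actual rendering path is not described in the abstract.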
2.
IEEE Trans Vis Comput Graph ; 29(11): 4578-4588, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37782600

ABSTRACT

Despite the importance of avatar representation for user experience in Mixed Reality (MR) remote collaboration involving various device environments and large amounts of task-related information, studies on how controlling visual parameters of avatars can benefit users in such situations have been scarce. Thus, we conducted a user study comparing the effects of three avatars with different transparency levels (Non-transparent, Semi-transparent, and Near-transparent) on social presence for users in Augmented Reality (AR) and Virtual Reality (VR) during task-centric MR remote collaboration. Results show that avatars with a strong visual presence are not required in situations where accomplishing the collaborative task is prioritized over social interaction. However, AR users preferred more vivid avatars than VR users did. Based on our findings, we suggest guidelines on how different levels of avatar transparency should be applied based on the context of the task and the device type in MR remote collaboration.

3.
IEEE Trans Vis Comput Graph ; 29(12): 5137-5148, 2023 Dec.
Article in English | MEDLINE | ID: mdl-36054403

ABSTRACT

A critical yet unresolved challenge in designing space-adaptive narratives for Augmented Reality (AR) is to provide consistently immersive user experiences anywhere, regardless of physical features specific to a space. To this end, we present a comprehensive analysis of a series of user studies investigating how the size, density, and layout of real indoor spaces affect users playing Fragments, a space-adaptive AR detective game. Based on the studies, we assert that moderate levels of traversability and visual complexity, afforded by counteracting combinations of size and complexity, are beneficial for the narrative experience. To confirm our argument, we combined the experimental data of the studies (n=112) to compare how five different spatial complexity conditions impact narrative experience when applied to contrasting room sizes. Results show that whereas factors of narrative experience are rated significantly higher in relatively simple settings for a small space, they are less affected by complexity in a large space. Ultimately, we establish guidelines on the design and placement of space-adaptive augmentations in location-independent AR narratives to compensate for the lack or excess of affordances in various real spaces and enhance user experiences therein.

4.
Virtual Real ; 26(3): 1059-1077, 2022.
Article in English | MEDLINE | ID: mdl-35013665

ABSTRACT

There have been attempts to provide new cinematic experiences by connecting TV or movie content to suitable locations through augmented reality (AR). However, few studies have suggested a method to manage breakdowns in continuity due to spatial transitions. Thus, we propose a method to manage the spatial transitions that occur when a TV show trajectory is created by mapping TV show scenes with spatiotemporal information to the real world. Our approach involves two steps. The first is to reduce spatial transitions by considering the sequence, location, and importance of TV show scenes when creating the TV show trajectory in the authoring tool. The second is to fill the remaining spatial transitions with additional TV show scenes, considering sequence, importance, and user interest, when providing the TV show trajectory in the mobile application. The user study results showed that reducing spatial transitions increases narrative engagement by allowing participants to see important content within the trajectory. The additional content in spatial transitions decreased the physical demand and effort in terms of perceived workload, although it increased the task completion time. Integrated spatial transition management improved the overall cinematic augmented reality (CAR) experience of the TV show. Furthermore, we suggest design implications for realizing the CAR of TV shows based on our findings.
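
The abstract describes a two-step method (reduce spatial transitions when authoring the trajectory, then fill remaining transitions with extra scenes) without giving the underlying algorithm. As a loose sketch of the first step only, the snippet below greedily orders geotagged scenes by trading scene importance against walking distance; the scoring function, data layout, and scene data are invented for illustration and are not taken from the paper.

```python
import math

def build_trajectory(scenes, start, max_scenes=5):
    """Greedy sketch: from the current location, repeatedly pick the scene with the
    best trade-off between importance (higher is better) and walking distance
    (shorter is better), so the resulting ordering keeps spatial transitions small."""
    remaining = list(scenes)
    trajectory, here = [], start
    while remaining and len(trajectory) < max_scenes:
        def score(s):
            dist = math.dist(here, s["location"])
            return s["importance"] / (1.0 + dist)   # assumed trade-off, not the paper's
        best = max(remaining, key=score)
        remaining.remove(best)
        trajectory.append(best)
        here = best["location"]
    return trajectory

# Toy geotagged scenes (coordinates in meters from an arbitrary origin).
scenes = [
    {"name": "cafe scene", "location": (0.0, 120.0), "importance": 0.9},
    {"name": "rooftop scene", "location": (300.0, 40.0), "importance": 0.7},
    {"name": "alley scene", "location": (20.0, 150.0), "importance": 0.4},
]
print([s["name"] for s in build_trajectory(scenes, start=(0.0, 0.0))])
```

The second step of the paper (inserting additional scenes into long transitions based on user interest) would operate on the gaps between consecutive entries of a trajectory like this one.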

5.
IEEE Trans Vis Comput Graph ; 27(5): 2746-2756, 2021 May.
Article in English | MEDLINE | ID: mdl-33760735

ABSTRACT

This paper proposes a novel panoramic texture mapping-based rendering system for real-time, photorealistic reproduction of large-scale urban scenes at street level. Various image-based rendering (IBR) methods have recently been employed to synthesize high-quality novel views, although they require an excessive number of adjacent input images or detailed geometry just to render local views. While the growth of global data sources, such as Google Street View, has accelerated interactive IBR techniques for urban scenes, such methods have hardly been aimed at high-quality street-level rendering. To provide users with free walk-through experiences along urban streets worldwide, our system effectively covers large-scale scenes by using sparsely sampled panoramic street-view images and simplified scene models, which are easily obtainable from open databases. Our key concept is to extract semantic information from the given street-view images and deploy it at the proper intermediate steps of the proposed pipeline, which improves both rendering accuracy and performance. Furthermore, our method supports real-time semantic 3D inpainting to handle occluded and untextured areas, which often appear when the user's viewpoint changes dynamically. Experimental results validate the effectiveness of this method in comparison with state-of-the-art approaches. We also present real-time demos in various urban streets.
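
The abstract summarizes the pipeline at a high level; the snippet below illustrates just one building block that any panoramic texture-mapping renderer needs: selecting the nearest sparsely sampled panorama to the current viewpoint and mapping a 3D scene point to equirectangular texture coordinates of that panorama. The zero-heading assumption, axis convention (y up), and coordinates are invented for illustration, not taken from the paper.

```python
import numpy as np

def nearest_panorama(view_pos, pano_centers):
    """Pick the sparsely sampled street-view panorama closest to the user's viewpoint."""
    d = np.linalg.norm(pano_centers - view_pos, axis=1)
    return int(np.argmin(d))

def equirect_uv(point, pano_center):
    """Map a 3D scene point to (u, v) texture coordinates of an equirectangular
    panorama captured at pano_center. Assumes y is up and zero heading offset."""
    d = point - pano_center
    yaw = np.arctan2(d[0], d[2])                           # azimuth in [-pi, pi]
    pitch = np.arcsin(d[1] / (np.linalg.norm(d) + 1e-9))   # elevation in [-pi/2, pi/2]
    u = (yaw + np.pi) / (2.0 * np.pi)
    v = 0.5 - pitch / np.pi
    return u, v

# Toy example: two panorama capture positions along a street, one query viewpoint.
pano_centers = np.array([[0.0, 2.5, 0.0], [12.0, 2.5, 0.0]])
idx = nearest_panorama(np.array([3.0, 1.7, 1.0]), pano_centers)
print(idx, equirect_uv(np.array([5.0, 3.0, 8.0]), pano_centers[idx]))
```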

6.
IEEE Trans Vis Comput Graph ; 26(5): 2002-2011, 2020 May.
Article in English | MEDLINE | ID: mdl-32070961

ABSTRACT

In mixed reality (MR), augmenting virtual objects consistently with real-world illumination is one of the key factors in providing a realistic and immersive user experience. For this purpose, we propose a novel deep learning-based method to estimate high dynamic range (HDR) illumination from a single RGB image of a reference object. To obtain the illumination of the current scene, previous approaches inserted a special camera into that scene, which may interfere with the user's immersion, or analyzed radiances reflected from a passive light probe with a specific type of material or a known shape. The proposed method does not require any additional gadgets or strong prior cues, and aims to predict illumination from a single image of an observed object covering a wide range of homogeneous materials and shapes. To effectively solve this ill-posed inverse rendering problem, three sequential deep neural networks are employed based on a physically inspired design. These networks perform end-to-end regression to gradually decrease the dependency on material and shape. To cover various conditions, the proposed networks are trained on a large synthetic dataset generated by physically based rendering. Finally, the reconstructed HDR illumination enables realistic image-based lighting of virtual objects in MR. Experimental results demonstrate the effectiveness of this approach compared against state-of-the-art methods. The paper also suggests some interesting MR applications in indoor and outdoor scenes.
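
The abstract states that three sequential networks gradually reduce dependency on material and shape, but it does not give their architectures. The PyTorch skeleton below only illustrates the idea of chaining three image-to-image regressors end to end; the layer choices, tensor sizes, and stage names are hypothetical, not the paper's design.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """Strided conv + BatchNorm + ReLU (hypothetical building block)."""
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
                         nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class StageNet(nn.Module):
    """One stage of the cascade: maps the previous stage's image-like estimate
    to a refined one (architecture invented for illustration)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.encoder = nn.Sequential(conv_block(c_in, 32), conv_block(32, 64))
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, c_out, 3, padding=1))

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Hypothetical cascade: RGB crop of the reference object -> material-independent map
# -> shape-independent map -> HDR environment map (e.g., log-radiance values).
material_net, shape_net, illum_net = StageNet(3, 3), StageNet(3, 3), StageNet(3, 3)

rgb = torch.rand(1, 3, 128, 128)               # single LDR image of the observed object
env = illum_net(shape_net(material_net(rgb)))  # end-to-end chained regression
print(env.shape)                               # torch.Size([1, 3, 128, 128])
```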

7.
IEEE Trans Vis Comput Graph ; 26(5): 1891-1901, 2020 May.
Article in English | MEDLINE | ID: mdl-32070966

ABSTRACT

We present a sensor-fusion method that exploits a depth camera and a gyroscope to track the articulation of a hand in the presence of excessive motion blur. In the case of slow and smooth hand motions, existing methods estimate the hand pose fairly accurately and robustly, despite challenges due to the high dimensionality of the problem, self-occlusions, the uniform appearance of hand parts, etc. However, the accuracy of hand pose estimation drops considerably for fast-moving hands because the depth image is severely distorted by motion blur. Moreover, when hands move fast, the actual hand pose is far from the one estimated in the previous frame; therefore, the assumption of temporal continuity on which tracking methods rely is no longer valid. In this paper, we track fast-moving hands with the combination of a gyroscope and a depth camera. As a first step, we calibrate a depth camera and a gyroscope attached to the hand so as to identify their time and pose offsets. Following that, we fuse the rotation information of the calibrated gyroscope with model-based hierarchical particle filter tracking. A series of quantitative and qualitative experiments demonstrate that the proposed method performs more accurately and robustly in the presence of motion blur than state-of-the-art algorithms, especially in the case of very fast hand rotations.
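
The abstract does not detail how the calibrated gyroscope is fused with the hierarchical particle filter. A minimal numpy sketch of one plausible reading is shown below: the gyro's measured angular velocity, integrated over the frame interval, rotates the global orientation of every particle in the prediction step, with a small random perturbation for diversity. The rotation parameterization, noise level, and function names are assumptions.

```python
import numpy as np

def rotvec_to_matrix(r):
    """Rodrigues' formula: axis-angle vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(r)
    if theta < 1e-9:
        return np.eye(3)
    k = r / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def predict_particles(particle_rots, gyro_omega, dt, noise_std=0.02, rng=None):
    """Prediction step: rotate every particle's global hand orientation by the
    gyro-measured increment, then add small rotational noise for diversity."""
    if rng is None:
        rng = np.random.default_rng()
    delta = rotvec_to_matrix(gyro_omega * dt)       # measured rotation over one frame
    out = []
    for R in particle_rots:
        noise = rotvec_to_matrix(rng.normal(0.0, noise_std, size=3))
        out.append(noise @ delta @ R)
    return out

# Toy usage: 8 particles, a fast yaw rotation of the hand at 30 fps.
particles = [np.eye(3) for _ in range(8)]
omega = np.array([0.0, 6.0, 0.0])                   # rad/s from the gyroscope
particles = predict_particles(particles, omega, dt=1 / 30)
```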

8.
IEEE Trans Vis Comput Graph ; 18(9): 1449-59, 2012 Sep.
Article in English | MEDLINE | ID: mdl-21931174

ABSTRACT

The contribution of this paper is twofold. First, we show how to extend the ESM algorithm to handle motion blur in 3D object tracking. ESM is a powerful algorithm for template matching-based tracking, but it can fail under motion blur. We introduce an image formation model that explicitly considers the possibility of blur, which results in a generalization of the original ESM algorithm. This allows the tracker to converge faster, more accurately, and more robustly even under large amounts of blur. Our second contribution is an efficient method for rendering virtual objects under the estimated motion blur. It renders two images of the object under 3D perspective and warps them to create many intermediate images. By fusing these images, we obtain a final image of the virtual objects that is blurred consistently with the captured image. Because warping is much faster than 3D rendering, we can create realistically blurred images at very low computational cost.
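
The second contribution (rendering the virtual object with blur consistent with the camera exposure) can be illustrated with a rough OpenCV sketch: take a render at the start-of-exposure pose, warp it with intermediate transforms toward its end-of-exposure position, and average the stack. Linearly interpolating the homography is only a crude stand-in for the paper's warping between two 3D renders, and the rectangle below merely stands in for a real render.

```python
import numpy as np
import cv2

def motion_blurred_overlay(render_start, homography, n_steps=16):
    """Approximate motion blur for a rendered virtual object: warp the start-of-exposure
    render toward its end-of-exposure position with intermediate transforms, then average."""
    h, w = render_start.shape[:2]
    acc = np.zeros_like(render_start, dtype=np.float32)
    identity = np.eye(3, dtype=np.float32)
    for i in range(n_steps):
        t = i / (n_steps - 1)
        H_t = (1.0 - t) * identity + t * homography.astype(np.float32)
        acc += cv2.warpPerspective(render_start, H_t, (w, h)).astype(np.float32)
    return (acc / n_steps).astype(render_start.dtype)

render = np.zeros((240, 320, 3), dtype=np.uint8)
cv2.rectangle(render, (100, 80), (180, 160), (0, 255, 0), -1)       # stand-in for a render
H = np.array([[1, 0, 40], [0, 1, 5], [0, 0, 1]], dtype=np.float32)  # object slides right
blurred = motion_blurred_overlay(render, H)
```

Because each intermediate image is produced by a 2D warp rather than a full 3D render, the whole blur pass stays cheap, which is the point the abstract makes about warping versus rendering.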

9.
IEEE Trans Pattern Anal Mach Intell ; 33(7): 1429-41, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21079278

ABSTRACT

In this paper, we present a method for extracting consistent foreground regions when multiple views of a scene are available. We propose a framework that automatically identifies such regions in images under the assumption that, in each image, background and foreground regions exhibit different color properties. To achieve this task, monocular color information is not sufficient, so we exploit the spatial consistency constraint that several image projections of the same space region must satisfy. Combining the monocular color consistency constraint with multiview spatial constraints allows us to automatically and simultaneously segment the foreground and background regions in multiview images. In contrast to standard background subtraction methods, the proposed approach requires neither a priori knowledge of the background nor user interaction. Experimental results under realistic scenarios demonstrate the effectiveness of the method for multiple-camera setups.
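
The abstract describes combining per-view color cues with a multiview spatial-consistency constraint but gives no formulas. The toy sketch below keeps a 3D sample only when its projection looks foreground-like in every view, which is one simple way to encode that constraint; the red-dominance color test and the pinhole cameras are invented stand-ins for the paper's color modelling and calibration.

```python
import numpy as np

def project(P, X):
    """Pinhole projection of 3xN world points X with a 3x4 camera matrix P."""
    Xh = np.vstack([X, np.ones((1, X.shape[1]))])
    x = P @ Xh
    return x[:2] / x[2]

def toy_fg_prob(img, uv):
    """Toy stand-in for the per-view color model: red-dominant pixels count as foreground."""
    h, w = img.shape[:2]
    u = np.clip(np.round(uv[0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[1]).astype(int), 0, h - 1)
    px = img[v, u].astype(float)
    return ((px[:, 0] > px[:, 1]) & (px[:, 0] > px[:, 2])).astype(float)

def consistent_foreground(points, cameras, images, fg_prob, thresh=0.5):
    """Keep a 3D sample only if every view agrees that its projection falls on
    foreground-like colors (the multiview spatial-consistency constraint)."""
    keep = np.ones(points.shape[1], dtype=bool)
    for P, img in zip(cameras, images):
        keep &= fg_prob(img, project(P, points)) > thresh
    return points[:, keep]

# Two toy cameras looking down +z with shared intrinsics K; a red blob is the "foreground".
K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
cams = [K @ np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])]),
        K @ np.hstack([np.eye(3), np.array([[0.5], [0.0], [5.0]])])]
imgs = [np.zeros((100, 100, 3), np.uint8) for _ in cams]
for img in imgs:
    img[30:70, 30:70, 0] = 255
pts = np.random.uniform(-1.5, 1.5, size=(3, 500))
print(consistent_foreground(pts, cams, imgs, toy_fg_prob).shape)
```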
