ABSTRACT
In this article, we investigated the representation of wind in urban spaces through computational fluid dynamics simulations in virtual environments (VE). We compared wind perception (force and direction) as well as the sense of presence and embodiment in the VE using different display technologies: head-mounted displays (HMD) and large screens, with or without an avatar. The tactile wind display was found to be most effective for conveying wind characteristics and enhancing presence and embodiment in the virtual scenes, regardless of the visual display type. Wind force and overall presence showed no significant differences between display technologies, but the perception of wind direction varied, which can be attributed to the head tracking of the HMD. In addition, gender differences emerged: females reported a 7.42% higher presence with large screens, while males reported a 23.13% higher presence with the HMD (avatar present). These results highlight nuances in wind perception, the influence of display technology, and gender differences in VE.
ABSTRACT
In virtual reality, the avatar - the user's digital representation - is an important element that can drastically influence the immersive experience. In this paper, we focus in particular on the use of "dissimilar" avatars, i.e., avatars diverging from the real appearance of the user, whether or not they preserve an anthropomorphic aspect. Previous studies have reported that dissimilar avatars can positively impact the user experience, for example in terms of interaction, perception or behaviour. However, given the sparsity and multi-disciplinary character of research related to dissimilar avatars, this body of work tends to lack a common understanding and methodology, hampering the establishment of novel knowledge on the topic. In this paper, we propose to address these limitations by discussing: (i) a methodology for characterizing dissimilar avatars, (ii) their impacts on the user experience, (iii) their different fields of application, and finally, (iv) future research directions on this topic. Taken together, we believe that this paper can support future research related to dissimilar avatars and help designers of VR applications leverage dissimilar avatars appropriately.
ABSTRACT
Technological developments provide solutions to alleviate the tremendous impact that dementia-related navigation impairments have on health and autonomy. We systematically reviewed the literature on devices tested to assist people with dementia during indoor, outdoor and virtual navigation (PROSPERO ID number: 215585). The Medline and Scopus databases were searched from inception. Our aim was to summarize the results from the literature to guide future developments. Twenty-three articles were included in our study. Three types of information were extracted from these studies. First, the types of navigation advice the devices provided were assessed through: (i) the sensory modality of presentation, e.g., visual and tactile stimuli, (ii) the navigation content, e.g., landmarks, and (iii) the timing of presentation, e.g., systematically at intersections. Second, we analyzed the technology that the devices were based on, e.g., smartphones. Third, the experimental methodology used to assess the devices and the navigation outcomes was evaluated. We report and discuss the results from the literature based on these three main characteristics. Finally, based on these considerations, recommendations are drawn, challenges are identified and potential solutions are suggested. Augmented reality-based devices, intelligent tutoring systems and social support should be explored further in particular.
Subjects
Augmented Reality, Dementia, Humans, Computer Graphics, Factual Databases, Smartphone, Dementia/diagnosis, Dementia/therapy
ABSTRACT
Augmented reality (AR) is an emerging technology that is applied in many fields. One of the limitations that still prevents AR from being even more widely used relates to the accessibility of devices. Indeed, the devices currently used are usually high-end, expensive glasses or mobile devices. vSLAM (visual simultaneous localization and mapping) algorithms circumvent this problem by requiring only relatively cheap cameras for AR. vSLAM algorithms can be classified as direct or indirect methods based on the type of data they use. Each class of algorithms works optimally on a particular type of scene (e.g., textured or untextured), unfortunately with little overlap. In this work, a method is proposed to fuse a direct and an indirect method in order to achieve higher robustness and to allow AR to move seamlessly between different types of scenes. Our method is tested on three datasets against state-of-the-art direct (LSD-SLAM), semi-direct (LCSD) and indirect (ORBSLAM2) algorithms in two different scenarios: a trajectory planning scenario and an AR scenario in which a virtual object is displayed on top of the video feed; furthermore, a similar method (LCSD SLAM) is also compared to our proposal. Results show that our fusion algorithm is generally as efficient as the best individual algorithm, both in terms of trajectory (mean errors with respect to ground-truth trajectory measurements) and in terms of quality of the augmentation (robustness and stability). In short, we propose a fusion algorithm that, in our tests, takes the best of both the direct and indirect methods.
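The abstract does not detail how the direct and indirect estimates are combined; the following minimal Python/NumPy sketch (hypothetical function, parameter names and threshold values, not taken from the paper) merely illustrates one plausible per-frame fusion policy: trust whichever tracker currently reports a healthy internal quality score, and fall back to relocalization when both fail.

    import numpy as np

    def fuse_pose(direct_pose, direct_residual, indirect_pose, indirect_inliers,
                  residual_thresh=0.05, inlier_thresh=30):
        # Pick one camera pose (4x4 homogeneous matrix) per frame.
        # direct_residual:  mean photometric error of the direct tracker (lower is better).
        # indirect_inliers: number of feature inliers of the indirect tracker (higher is better).
        # Both thresholds are illustrative values only.
        direct_ok = direct_residual < residual_thresh
        indirect_ok = indirect_inliers >= inlier_thresh
        if direct_ok and not indirect_ok:        # weakly textured scene: trust the direct method
            return direct_pose
        if indirect_ok and not direct_ok:        # textured scene: trust the indirect method
            return indirect_pose
        if direct_ok and indirect_ok:            # both healthy: blend translations,
            fused = np.array(indirect_pose)      # keep the indirect rotation for simplicity
            fused[:3, 3] = 0.5 * (direct_pose[:3, 3] + indirect_pose[:3, 3])
            return fused
        return None                              # both trackers lost: trigger relocalization

Per-frame selection of this kind is what would let the augmentation persist when the camera moves between textured and untextured scenes, which is the behaviour the abstract reports.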
ABSTRACT
Research in Virtual Reality (VR) has shown that embodiment can influence participants' perceptions and behavior when they are embodied in a different yet plausible virtual body. In this paper, we study the changes an obese virtual body has on product perception (e.g., taste) and purchase behavior (e.g., number of products purchased) in an immersive virtual retail store. Participants (of a normal BMI on average) were embodied in a normal (N) or an obese (OB) virtual body and were asked to buy and evaluate food products in the immersive virtual store. Based on stereotypes classically associated with obese people, we expected that the group embodied in obese avatars would show a more unhealthy diet (i.e., buy more food products overall, and more products high in energy intake or saturated fat) and would rate unhealthy food as tastier and healthier than participants embodied in "normal weight" avatars. Our participants also rated the perception of their virtual body: the OB group perceived their virtual body as significantly heavier and older. They also rated their sense of embodiment and presence within the immersive virtual store; these measures did not show any significant difference between groups. Finally, we asked them to rate different food products in terms of tastiness, healthiness, sustainability and price. The only differences we noticed were that participants embodied in an obese avatar (OB group) rated the coke as significantly tastier and the apple as significantly healthier. Nevertheless, while we hypothesized that participants embodied in a virtual body with obesity would show differences in their shopping patterns (e.g., more "unhealthy" products bought), there were no significant differences between the groups. Stereotype activation failed for our participants embodied in obese avatars, who did not exhibit a shopping behavior following the (negative) stereotypes related to obese people. Conversely, while the opposite hypothesis (that participants embodied in obese avatars would buy significantly more healthy products in order to "transform" their virtual bodies) could have been made, it was not supported either. We discuss these results and propose hypotheses as to why the behavior of the manipulated group differed from what we expected. Indeed, unlike previous research, our participants were embodied in virtual avatars that differed greatly from their real bodies: obese avatars do not merely modify users' visual characteristics such as hair or skin color. We hypothesize that an obese virtual body may require some additional non-visual stimulus, e.g., the sensation of the extra weight or of the change in body size. This main difference could explain why we did not notice any important modification of participants' behavior and perceptions of food products. We also hypothesize that the absence of stereotype activation, and thus of statistical difference between our N and OB groups, might be due to higher-level cognitive processes involved while purchasing food products. Indeed, our participants might have rejected their virtual bodies when performing the shopping task (even though the embodiment and presence ratings did not show significant differences) and purchased products based on their real (non-obese) bodies. This could mean that stereotype activation is more complex than previously thought.
ABSTRACT
We present a method that can quickly and robustly match 2D and 3D point patterns based solely on their spatial distribution, although it can also handle other cues if available. The method can easily be adapted to many transformations, such as similarity transformations in 2D/3D, and affine and perspective transformations in 2D. It is based on local geometric consensus among several local matchings, followed by a refinement scheme. We provide two implementations of this general scheme, one for the 2D homography case (which can be used for marker or image tracking) and one for the 3D similarity case. We demonstrate the robustness and speed of our proposal on both synthetic and real images and show that our method can be used to augment any (textured or textureless) planar object as well as 3D objects.
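The abstract only names the local geometric consensus idea; the following Python sketch (hypothetical, restricted to the 2D similarity case, with illustrative parameter values) shows the general hypothesise-and-count flavour of such a matcher: sample a local point pair in each set, derive the similarity transform it implies, and keep the transform with the largest consensus among the remaining points. The paper's actual method additionally handles other transformation groups, the 3D case, and a refinement step.

    import numpy as np

    def align_point_sets_2d(P, Q, trials=2000, tol=0.01, rng=None):
        # Roughly align two 2D point sets P (N,2) and Q (M,2) with a similarity
        # transform (scale s, rotation R, translation t), without known
        # correspondences.  Returns (s, R, t, inlier_count) or None.
        rng = rng if rng is not None else np.random.default_rng(0)
        P, Q = np.asarray(P, float), np.asarray(Q, float)
        best_model, best_inliers = None, -1
        for _ in range(trials):
            i, j = rng.choice(len(P), 2, replace=False)    # a local pair in P
            k, m = rng.choice(len(Q), 2, replace=False)    # a candidate pair in Q
            a, b = P[j] - P[i], Q[m] - Q[k]
            na, nb = np.linalg.norm(a), np.linalg.norm(b)
            if na < 1e-9 or nb < 1e-9:
                continue
            s = nb / na                                    # hypothesised scale
            ca, cb = a / na, b / nb
            cos_t = ca @ cb                                # rotation aligning a with b
            sin_t = ca[0] * cb[1] - ca[1] * cb[0]
            R = np.array([[cos_t, -sin_t], [sin_t, cos_t]])
            t = Q[k] - s * (R @ P[i])                      # translation fixing one anchor
            mapped = (s * (R @ P.T)).T + t                 # apply hypothesis to all of P
            d = np.linalg.norm(mapped[:, None, :] - Q[None, :, :], axis=2)
            inliers = int((d.min(axis=1) < tol).sum())     # consensus score
            if inliers > best_inliers:
                best_model, best_inliers = (s, R, t), inliers
        return (*best_model, best_inliers) if best_model else None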
ABSTRACT
Recent studies have shown that a fake body part can be incorporated into the human body representation through synchronous multisensory stimulation of the fake and corresponding real body part - the most famous example being the Rubber Hand Illusion. However, the extent to which gross asymmetries in the fake body can be assimilated remains unknown. Participants experienced, through a head-tracked stereo head-mounted display, a virtual body coincident with their real body. There were 5 conditions in a between-groups experiment, with 10 participants per condition. In all conditions there was visuo-motor congruence between the real and virtual dominant arm. In an Incongruent condition (I), where the virtual arm length was equal to the real length, there was visuo-tactile incongruence. In four Congruent conditions there was visuo-tactile congruence, but the virtual arm lengths were either equal to (C1), double (C2), triple (C3) or quadruple (C4) the real ones. Questionnaire scores and defensive withdrawal movements in response to a threat showed that the overall level of ownership was high in both C1 and I, with no significant difference between these conditions. Additionally, participants experienced ownership over the virtual arm up to three times the length of the real one, and less strongly at four times the length; the illusion did decline, however, with the length of the virtual arm. In the C2-C4 conditions, although a measure of proprioceptive drift positively correlated with virtual arm length, there was no correlation between the drift and ownership of the virtual arm, suggesting different underlying mechanisms for ownership and drift. Overall, these findings extend and enrich previous results showing that multisensory and sensorimotor information can reconstruct our perception of body shape, size and symmetry, even when this is not consistent with normal body proportions.
ABSTRACT
Immersive virtual reality (IVR) typically generates in participants the illusion that they are in the displayed virtual scene, where they can experience and interact in events as if they were really happening. Teleoperator (TO) systems place people at a remote physical destination, embodied as a robotic device, where participants typically have the sensation of being at the destination, with the ability to interact with entities there. In this paper, we show how to combine IVR and TO to allow a new class of application. The participant in the IVR is represented at the destination by a physical robot (TO) and, simultaneously, the remote place and the entities within it are represented to the participant in the IVR. Hence, the IVR participant has a normal virtual reality experience, but one in which his or her actions and behaviour control the remote robot and can therefore have physical consequences. Here, we show how such a system can be deployed to allow a human and a rat to operate together, with the human interacting with the rat at human scale, and the rat interacting with the human at rat scale. The human is represented in a rat arena by a small robot that is slaved to the human's movements, whereas the tracked rat is represented to the human in the virtual reality by a humanoid avatar. We describe the system and also a study designed to test whether humans can successfully play a game with the rat. The results show that the system functioned well and that the humans were able to interact with the rat to fulfil the tasks of the game. This system opens up the possibility of new applications in the life sciences involving participant observation of, and interaction with, animals, but at human scale.
Subjects
Virtual Reality Exposure Therapy/instrumentation, Animals, Humans, Interpersonal Relations, Movement, Rats, Robotics, Time Factors
ABSTRACT
BACKGROUND: Body change illusions have been of great interest in recent years for understanding how the brain represents the body. Appropriate multisensory stimulation can induce an illusion of ownership over a rubber or virtual arm, simple types of out-of-the-body experiences, and even ownership with respect to an alternate whole body. Here we use immersive virtual reality to investigate whether the illusion of a dramatic increase in belly size can be induced in males through (a) a first person perspective position, (b) synchronous visual-motor correlation between real and virtual arm movements, and (c) self-induced synchronous visual-tactile stimulation in the stomach area. METHODOLOGY: Twenty-two participants entered a virtual reality (VR) delivered through a stereo, head-tracked, wide field-of-view head-mounted display. They saw, from a first person perspective, a virtual body substituting their own that had an inflated belly. For four minutes they repeatedly prodded their real belly with a rod that had a virtual counterpart visible in the VR. In a synchronous condition their prodding movements were synchronous with what they felt and saw, and in an asynchronous condition this was not the case. The experiment was repeated twice for each participant in counter-balanced order. Responses were measured by questionnaire, and also by comparing before and after self-estimates of belly size, produced by direct visual manipulation of the virtual body seen from the first person perspective. CONCLUSIONS: The results show that a first person perspective of a virtual body that substitutes for one's own body in virtual reality, together with synchronous multisensory stimulation, can temporarily produce changes in body representation towards the larger belly size. This was demonstrated by (a) the questionnaire results, (b) the difference between the self-estimated belly size, judged from a first person perspective, after and before the experimental manipulation, and (c) significant positive correlations between these two measures. We discuss this result in the general context of body ownership illusions, and suggest applications including treatment for body size distortion illnesses.