Results 1 - 5 of 5
1.
IEEE Trans Vis Comput Graph ; 27(8): 3534-3545, 2021 08.
Article in English | MEDLINE | ID: mdl-31869794

ABSTRACT

In this article, we investigate the effects of the physical influence of a virtual human (VH) in the context of face-to-face interaction in a mixed reality environment. In Experiment 1, participants played a tabletop game with a VH in which each player takes a turn and moves their own token along the designated spots on the shared table. We compared two conditions: the VH in the virtual condition moves a virtual token that can only be seen through augmented reality (AR) glasses, while the VH in the physical condition moves a physical token just as the participants do, so the VH's token can be seen even in the periphery of the AR glasses. For the physical condition, we designed an actuator system underneath the table. The actuator moves a magnet under the table, which in turn moves the VH's physical token over the surface of the table. Our results indicate that participants felt higher co-presence with the VH in the physical condition and assessed the VH as a more physical entity compared to the VH in the virtual condition. We further observed transference effects, in which participants attributed the VH's ability to move physical objects to other elements in the real world. The VH's physical influence also improved participants' overall experience with the VH. In Experiment 2, we examined how the physical-virtual latency in movements affected the perceived plausibility of the VH's interaction with the real world. Our results indicate that a slight temporal delay between the virtual hand's movement and the physical token's reaction increased the perceived realism and causality of the mixed reality interaction. We discuss potential explanations for these findings and implications for future shared mixed reality tabletop setups.
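The actuator-driven token and the latency manipulation of Experiment 2 suggest a simple control pattern: the magnet target follows the virtual hand's release position after a configurable delay. The sketch below is a minimal illustration of that idea; the class, method names, and queue-based design are assumptions, not the paper's implementation.

```python
import time

class TokenActuator:
    """Hypothetical controller for an under-table magnet actuator.

    The magnet follows the virtual hand's target position after a
    configurable delay, mimicking the physical-virtual latency
    manipulated in Experiment 2.
    """

    def __init__(self, latency_s=0.2):
        self.latency_s = latency_s   # delay before the physical token reacts
        self._pending = []           # queue of (due_time, target_xy)

    def on_virtual_hand_move(self, target_xy, now=None):
        """Record where the VH's virtual hand placed the token."""
        now = time.monotonic() if now is None else now
        self._pending.append((now + self.latency_s, target_xy))

    def update(self, now=None):
        """Return magnet targets whose delay has elapsed; call every frame."""
        now = time.monotonic() if now is None else now
        due = [xy for t, xy in self._pending if t <= now]
        self._pending = [(t, xy) for t, xy in self._pending if t > now]
        return due
```

Passing an explicit `now` keeps the delay deterministic for testing; in a live loop the monotonic clock is used instead.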


Subjects
Augmented Reality , Computer Graphics , Social Interaction , Video Games , Virtual Reality , Adolescent , Adult , Female , Humans , Male , Movement/physiology , Smart Glasses , Time Factors , Young Adult
2.
IEEE Trans Vis Comput Graph ; 27(11): 4321-4331, 2021 11.
Article in English | MEDLINE | ID: mdl-34449376

ABSTRACT

360-degree experiences such as cinematic virtual reality and 360-degree videos are becoming increasingly popular. In most examples, viewers can freely explore the content by changing their orientation. However, in some cases, this increased freedom may lead to viewers missing important events within such experiences. Thus, a recent research thrust has focused on studying mechanisms for guiding viewers' attention while maintaining their sense of presence and fostering a positive user experience. One approach is the use of diegetic mechanisms, characterized by an internal consistency with respect to the narrative and the environment, for attention guidance. While such mechanisms are highly attractive, their uses and potential implementations are still not well understood. Additionally, acknowledging the user in 360-degree experiences has been linked to a higher sense of presence and connection; however, less is known about the effects when the acknowledging behaviors are carried out by attention-guiding mechanisms. To close these gaps, we conducted a within-subjects user study with five conditions: no guide, virtual arrows, birds, dogs, and dogs that acknowledge the user and the environment. Through our mixed-methods analysis, we found that the diegetic virtual animals resulted in a more positive user experience and were all at least as effective as the non-diegetic arrow in guiding users towards target events. The acknowledging dog received the most positive responses from our participants in terms of preference and user experience and significantly improved their sense of presence compared to the non-diegetic arrow. Lastly, three themes emerged from a qualitative analysis of our participants' feedback, indicating the guide's blending in, its acknowledging behavior, and participants' positive associations as the main factors behind our participants' preferences.

3.
Front Psychol ; 11: 554706, 2020.
Article in English | MEDLINE | ID: mdl-33281659

ABSTRACT

Recent times have seen increasing interest in conversational assistants (e.g., Amazon Alexa) designed to help users in their daily tasks. In military settings, it is critical to design assistants that are, simultaneously, helpful and able to minimize the user's cognitive load. Here, we show that embodiment plays a key role in achieving that goal. We present an experiment where participants engaged in an augmented reality version of the relatively well-known desert survival task. Participants were paired with a voice assistant, an embodied assistant, or no assistant. The assistants made suggestions verbally throughout the task, whereas the embodied assistant further used gestures and emotion to communicate with the user. Our results indicate that both assistant conditions led to higher performance over the no assistant condition, but the embodied assistant achieved this with less cognitive burden on the decision maker than the voice assistant, which is a novel contribution. We discuss implications for the design of intelligent collaborative systems for the warfighter.

4.
IEEE Trans Vis Comput Graph ; 26(5): 1934-1944, 2020 05.
Article in English | MEDLINE | ID: mdl-32070964

ABSTRACT

Human gaze awareness is important for social and collaborative interactions. Recent technological advances in augmented reality (AR) displays and sensors provide us with the means to extend collaborative spaces with real-time dynamic AR indicators of one's gaze, for example via three-dimensional cursors or rays emanating from a partner's head. However, such gaze cues are only as useful as the quality of the underlying gaze estimation and the accuracy of the display mechanism. Depending on the type of visualization and the characteristics of the errors, AR gaze cues could either enhance or interfere with collaborations. In this paper, we present two human-subject studies in which we investigate the influence of angular and depth errors, target distance, and the type of gaze visualization on participants' performance and subjective evaluation during a collaborative task with a virtual human partner, where participants identified targets within a dynamically walking crowd. First, our results show that there is a significant difference in performance between the two gaze visualizations, ray and cursor, in conditions with simulated angular and depth errors: the ray visualization provided significantly faster response times and fewer errors compared to the cursor visualization. Second, our results show that under optimal conditions, among four different gaze visualization methods, a ray without depth information provides the worst performance and is rated lowest, while a combination of a ray and cursor with depth information is rated highest. We discuss the subjective and objective performance thresholds and provide guidelines for practitioners in this field.
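The simulated angular and depth errors can be illustrated with a small perturbation of a gaze cue. The sketch below is not the paper's implementation; the coordinate-frame convention (y-up, angular error applied to yaw only) and all names are simplifying assumptions. It returns both the unit ray direction and the 3-D point where a cursor cue would be rendered.

```python
import math

def perturbed_gaze(yaw_deg, pitch_deg, depth_m,
                   angular_err_deg=0.0, depth_err_m=0.0):
    """Simulate a noisy shared-gaze cue (illustrative only).

    A partner's gaze is given as yaw/pitch angles and a fixation depth.
    We add an angular offset to the yaw and an offset to the depth, then
    return the unit ray direction and the 3-D cursor point.
    """
    yaw = math.radians(yaw_deg + angular_err_deg)
    pitch = math.radians(pitch_deg)
    # Unit direction in a y-up frame; yaw=0, pitch=0 looks along +z
    d = (math.cos(pitch) * math.sin(yaw),
         math.sin(pitch),
         math.cos(pitch) * math.cos(yaw))
    depth = max(0.0, depth_m + depth_err_m)   # clamp to non-negative depth
    cursor = tuple(depth * c for c in d)
    return d, cursor
```

A ray-only cue would use just `d`; a cursor or ray-plus-cursor cue also needs the depth-dependent `cursor` point, which is why depth error matters more for those visualizations.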


Subjects
Augmented Reality , Computer Graphics , Eye-Tracking Technology , Fixation, Ocular/physiology , Adolescent , Adult , Female , Humans , Male , Task Performance and Analysis , Young Adult
5.
J Rehabil Assist Technol Eng ; 6: 2055668319841309, 2019.
Article in English | MEDLINE | ID: mdl-31245034

ABSTRACT

INTRODUCTION: A large body of research in the field of virtual reality is focused on making user interfaces more natural and intuitive by leveraging natural body movements to explore a virtual environment. For example, head-tracked user interfaces allow users to naturally look around a virtual space by moving their head. However, such approaches may not be appropriate for users with temporary or permanent limitations of their head movement. METHODS: In this paper, we present techniques that allow these users to obtain the benefits of such natural interfaces from a reduced range of physical movement. Specifically, we describe two techniques that augment virtual rotations relative to physical movement thresholds. RESULTS: We describe how each of the two techniques can be implemented with either a head tracker or an eye tracker, e.g., in cases where no physical head rotation is possible. CONCLUSIONS: We discuss their differences and limitations, and we provide guidelines for the practical use of such augmented user interfaces.
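The thresholded rotation augmentation described in METHODS can be sketched as a mapping from physical to virtual rotation. This is a hypothetical single-axis illustration, assuming a hard threshold and a constant gain; the function name, parameter names, and default values are not from the paper.

```python
import math

def amplified_yaw(physical_deg, threshold_deg=15.0, gain=3.0):
    """Map a physical head (or eye) yaw to an amplified virtual yaw.

    Rotations within +/- threshold map one-to-one; beyond the threshold,
    additional physical rotation is amplified by `gain`, so a user with a
    limited range of motion can still reach the full 180 degrees to
    either side. The result is clamped to +/- 180 degrees.
    """
    mag = abs(physical_deg)
    if mag <= threshold_deg:
        virtual = mag                               # 1:1 inside the threshold
    else:
        virtual = threshold_deg + gain * (mag - threshold_deg)
    return math.copysign(min(virtual, 180.0), physical_deg)
```

With these illustrative defaults, a 30-degree physical turn yields a 60-degree virtual turn; driving the same mapping from eye-tracker gaze angles instead of head yaw would correspond to the no-head-rotation case mentioned in RESULTS.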
