Results 1 - 2 of 2

1.
IEEE Trans Vis Comput Graph; 30(5): 2206-2216, 2024 May.
Article in English | MEDLINE | ID: mdl-38437082

ABSTRACT

In Mixed Reality (MR), users' heads are largely (if not completely) occluded by the MR Head-Mounted Display (HMD) they are wearing. As a consequence, one cannot see their facial expressions and other communication cues when interacting locally. In this paper, we investigate how displaying virtual avatars' heads on top of the (HMD-occluded) heads of participants in a Video See-Through (VST) Mixed Reality local collaborative task could improve their collaboration and social presence. We hypothesized that virtual heads would convey communicative cues (such as eye direction or facial expressions) otherwise hidden by the MR HMDs and thus lead to better collaboration and social presence. To test this, we conducted a between-subjects study (n = 88) with two independent variables: the type of avatar (CartoonAvatar/RealisticAvatar/NoAvatar) and the level of facial expression provided (HighExpr/LowExpr). The experiment involved two dyadic communication tasks: (i) the "20-question" game, where one participant asks questions to guess a hidden word known by the other participant, and (ii) an urban planning problem, where participants have to solve a puzzle collaboratively. Each pair of participants performed both tasks using a specific type of avatar and facial animation. Our results indicate that while adding an avatar's head does not necessarily improve social presence, the amount of facial expression provided during the social interaction does have an impact. Moreover, participants rated their performance higher when observing a realistic avatar but rated the cartoon avatars as less uncanny. Taken together, our results contribute to a better understanding of the role of partial avatars in local MR collaboration and pave the way for further research exploring collaboration in different scenarios, with different avatar types or MR setups.


Subject(s)
Augmented Reality, Avatar, Humans, User-Computer Interface, Computer Graphics, Facial Expression
2.
IEEE Trans Vis Comput Graph; 29(11): 4438-4448, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37782596

ABSTRACT

Text entry in Virtual Reality (VR) is becoming an increasingly important task as the availability of hardware increases and the range of VR applications widens. This is especially true for industrial VR applications, where users need to input data frequently. Large-scale industrial adoption of VR is still hampered by the productivity gap between entering data via a physical keyboard and VR data entry methods. Data entry needs to be efficient, easy to use and learn, and not frustrating. In this paper, we present a new data entry method based on handwriting recognition (HWR): users input text by simply writing on a virtual surface. We conduct a user study to determine the best writing conditions with respect to surface orientation and sensory feedback, where the feedback consists of visual, haptic, and auditory cues. We find that a slanted board with sensory feedback is best for maximizing writing speed and minimizing physical demand. We also evaluate the performance of our method in terms of text entry speed, error rate, usability, and workload. The results show that handwriting in VR achieves high entry speed and usability with little training compared to other controller-based virtual text entry techniques. The system could be further improved by reducing its high error rate through more efficient handwriting recognition tools; the total error rate is 9.28% in the best condition. After 40 phrases of training, participants reach an average of 14.5 WPM, while a group with high VR familiarity reaches 16.16 WPM after the same training. The highest observed textual data entry speed is 21.11 WPM.
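
For context, WPM and total-error-rate figures like those quoted above are conventionally computed with the standard text-entry metrics of MacKenzie and Soukoreff; the abstract does not state its formulas, so the minimal Python sketch below assumes those standard definitions and is illustrative only, not the authors' code.

    # Standard text-entry metrics (MacKenzie & Soukoreff conventions).

    def words_per_minute(transcribed: str, seconds: float) -> float:
        # WPM = ((|T| - 1) / S) * 60 / 5, counting one "word" as 5 characters.
        return (len(transcribed) - 1) / seconds * 60.0 / 5.0

    def total_error_rate(correct: int, fixed: int, unfixed: int) -> float:
        # Total error rate (%) = (IF + INF) / (C + IF + INF) * 100, where IF
        # counts errors corrected during entry and INF counts errors left in
        # the final transcribed text.
        return 100.0 * (fixed + unfixed) / (correct + fixed + unfixed)

    # A 43-character phrase transcribed in 35 s gives about 14.4 WPM, on the
    # order of the speeds reported in the abstract (illustrative numbers).
    phrase = "the quick brown fox jumps over the lazy dog"
    print(round(words_per_minute(phrase, 35.0), 1))  # 14.4
    print(round(total_error_rate(correct=391, fixed=25, unfixed=15), 2))  # 9.28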
