Results 1 - 3 of 3
1.
IEEE Trans Vis Comput Graph; 30(9): 6493-6506, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38170655

ABSTRACT

Alphanumeric and special characters are essential during text entry. Text entry in virtual reality (VR) is usually performed on a virtual Qwerty keyboard to minimize the need to learn new layouts. As such, entering capitals, symbols, and numbers in VR is often a direct migration from a physical/touchscreen Qwerty keyboard; that is, mode-switching keys are used to switch between different types of characters and symbols. However, there are inherent differences between a keyboard in VR and a physical/touchscreen keyboard, so a direct adaptation of mode switching via switch keys may not be suitable for VR. The high flexibility afforded by VR opens up more possibilities for entering alphanumeric and special characters using the Qwerty layout. In this work, we designed two controller-based raycasting text entry methods for alphanumeric and special character input (Layer-ButtonSwitch and Key-ButtonSwitch) and compared them with two other methods (Standard Qwerty Keyboard and Layer-PointSwitch) derived from physical and soft Qwerty keyboards. We explored the performance and user preference of these four methods via two user studies (one short-term and one of prolonged use), in which participants were instructed to input text containing alphanumeric and special characters. Our results show that Layer-ButtonSwitch yielded the highest performance, a statistically significant difference, followed by Key-ButtonSwitch and Standard Qwerty Keyboard, while Layer-PointSwitch was slowest. With continued practice, participants' performance with Key-ButtonSwitch reached that of Layer-ButtonSwitch. Further, the results show that the key-level layout used in Key-ButtonSwitch, which displays all characters on one layer, allowed users to perform mode switching and character input in parallel. We distill three recommendations from the results that can help guide the design of text entry techniques for alphanumeric and special characters in VR.
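The contrast between the two button-switch designs can be made concrete with a small sketch. The following is a minimal illustration, not the paper's implementation; all key names and character assignments are invented for the example. It shows how a Layer-ButtonSwitch keyboard swaps the whole layout between layers, while a Key-ButtonSwitch keyboard exposes all of a key's characters on a single layer, letting the button press and the key selection happen in parallel.

```python
# Minimal sketch (not the paper's implementation) contrasting the two
# button-switch designs; key IDs and character assignments are invented.

# Layer-ButtonSwitch: the whole keyboard swaps between character layers;
# a held controller button selects which layer the pointed-at key uses.
LAYERS = {
    "lower": {"KeyA": "a", "Key1": "1"},
    "upper": {"KeyA": "A", "Key1": "!"},
}

def layer_button_switch(key_id: str, button_held: bool) -> str:
    layer = "upper" if button_held else "lower"
    return LAYERS[layer][key_id]

# Key-ButtonSwitch: every key displays all of its characters at once
# (key-level layout), so mode switching and selection happen in parallel.
KEYS = {
    "KeyA": {"default": "a", "button": "A"},
    "Key1": {"default": "1", "button": "!"},
}

def key_button_switch(key_id: str, button_held: bool) -> str:
    return KEYS[key_id]["button" if button_held else "default"]
```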

2.
IEEE Trans Vis Comput Graph; 29(11): 4622-4632, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37782613

ABSTRACT

We present a fast mid-air gesture keyboard for head-mounted optical see-through augmented reality (OST AR) that lets users articulate word patterns by moving their physical index finger relative to a virtual keyboard plane, without needing to indirectly control a visual 2D cursor on that plane. To realize this, we introduce a novel decoding method that directly translates users' three-dimensional fingertip gestural trajectories into their intended text. We evaluate the efficacy of the system in three studies that investigate various design aspects, including immediate efficacy, accelerated learning, and whether performance can be maintained without visual feedback. We find that the new 3D trajectory decoding design results in significantly higher entry rates while maintaining low error rates. In addition, we demonstrate that users can maintain their performance even without fingertip and gesture trace visualization.
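As a rough illustration of trajectory-to-word decoding, the sketch below matches a 3D fingertip trace against per-word template traces by shape distance, in the spirit of classic gesture-keyboard decoders such as SHARK2. It is an assumption-laden stand-in, not the paper's novel 3D decoder; the function names and template format are invented for the example.

```python
# Illustrative shape-matching decoder, not the paper's method.
import numpy as np

def resample(points: np.ndarray, n: int = 32) -> np.ndarray:
    """Resample a (T, 3) fingertip trajectory to n equidistant points."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, cum[-1], n)
    return np.stack(
        [np.interp(targets, cum, points[:, d]) for d in range(3)], axis=1
    )

def decode(trajectory: np.ndarray, templates: dict[str, np.ndarray]) -> str:
    """Return the word whose ideal keyboard trace is closest in shape."""
    traj = resample(trajectory)
    traj -= traj.mean(axis=0)  # translation invariance
    best_word, best_dist = "", float("inf")
    for word, template in templates.items():
        tpl = resample(template)
        tpl -= tpl.mean(axis=0)
        dist = np.linalg.norm(traj - tpl, axis=1).mean()
        if dist < best_dist:
            best_word, best_dist = word, dist
    return best_word
```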

3.
IEEE Trans Vis Comput Graph; 28(11): 3618-3628, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36048982

ABSTRACT

In this paper, we examine the task of key gesture spotting: accurate and timely online recognition of hand gestures. We specifically seek to address two key challenges that developers face when integrating key gesture spotting functionality into their applications: i) achieving high accuracy and zero or negative activation lag with single-time activation; and ii) avoiding the need for deep domain expertise in machine learning. We address the first challenge by proposing a key gesture spotting architecture consisting of a novel gesture classifier model and a novel single-time activation algorithm. This architecture was evaluated on four separate hand skeleton gesture datasets and achieved high recognition accuracy with early detection. We address the second challenge by encapsulating different data processing and augmentation strategies, as well as the proposed architecture, into a graphical user interface and an application programming interface. Two user studies demonstrate that developers can efficiently construct custom recognizers using both the graphical user interface and the application programming interface.
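To make the single-time activation idea concrete, here is a minimal sketch that fires a gesture label at most once per gesture from a stream of per-frame classifier probabilities. The thresholds, class names, and hysteresis scheme are assumptions for illustration; the paper's actual single-time activation algorithm is not reproduced here.

```python
# Illustrative single-time activation over streaming class probabilities;
# thresholds and the hysteresis scheme are invented for this sketch.

class SingleTimeActivator:
    def __init__(self, fire_threshold: float = 0.9,
                 reset_threshold: float = 0.3):
        self.fire_threshold = fire_threshold    # confidence needed to fire
        self.reset_threshold = reset_threshold  # confidence to re-arm
        self.armed = True

    def step(self, probs: dict[str, float]) -> str | None:
        """Feed one frame of class probabilities; return a label at most
        once per gesture, as early as confidence allows."""
        label, p = max(probs.items(), key=lambda kv: kv[1])
        if self.armed and p >= self.fire_threshold:
            self.armed = False  # suppress duplicate activations
            return label
        if not self.armed and p <= self.reset_threshold:
            self.armed = True   # gesture appears to have ended; re-arm
        return None
```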


Subjects
Augmented Reality; Gestures; Pattern Recognition, Automated; Computer Graphics; Algorithms; Hand