Robust Dual-Modal Speech Keyword Spotting for XR Headsets.
IEEE Trans Vis Comput Graph ; 30(5): 2507-2516, 2024 May.
Article in English | MEDLINE | ID: mdl-38437114
ABSTRACT
While speech interaction finds widespread utility within the Extended Reality (XR) domain, conventional vocal speech keyword spotting systems continue to grapple with formidable challenges, including suboptimal performance in noisy environments, impracticality in situations requiring silence, and susceptibility to inadvertent activations when others speak nearby. These challenges, however, can potentially be surmounted through the cost-effective fusion of voice and lip movement information. Consequently, we propose a novel vocal-echoic dual-modal keyword spotting system designed for XR headsets. We devise two different modal fusion approaches and conduct experiments to test the system's performance across diverse scenarios. The results show that our dual-modal system not only consistently outperforms its single-modal counterparts, demonstrating higher precision in both typical and noisy environments, but also excels in accurately identifying silent utterances. Furthermore, we have successfully applied the system in real-time demonstrations, achieving promising results. The code is available at https://github.com/caizhuojiang/VE-KWS.
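The abstract does not specify the two fusion approaches; a common baseline for combining two modalities is score-level (late) fusion, where each branch produces per-keyword posteriors that are mixed with a weight. The sketch below is a minimal illustration of that generic idea, not the paper's actual method; the function name, the weight `alpha`, and the example scores are all hypothetical.

```python
def late_fusion(audio_scores, visual_scores, alpha=0.7):
    # Hypothetical score-level (late) fusion: weighted average of the
    # per-keyword posteriors from the audio and visual branches.
    fused = [alpha * a + (1.0 - alpha) * v
             for a, v in zip(audio_scores, visual_scores)]
    total = sum(fused)
    return [s / total for s in fused]  # renormalise to a distribution

# In noise, the audio branch may favour the wrong keyword; the visual
# (lip-movement) branch can pull the fused decision back.
audio = [0.40, 0.35, 0.25]   # noisy audio posteriors (illustrative)
visual = [0.10, 0.70, 0.20]  # visual posteriors (illustrative)
fused = late_fusion(audio, visual, alpha=0.5)
best = max(range(len(fused)), key=fused.__getitem__)
print(best)  # → 1: the fused system picks keyword 1
```

A weighted average is the simplest possible combiner; feature-level fusion (concatenating embeddings before a joint classifier) is the other standard option and may be closer to what the paper evaluates.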
Full text: 1 Database: MEDLINE Main subject: Speech / Computer Graphics Language: English Journal: IEEE Trans Vis Comput Graph / IEEE trans. vis. comput. graph. (Online) / IEEE transactions on visualization and computer graphics (Online) Journal subject: MEDICAL INFORMATICS Year: 2024 Document type: Article