Can you notice my attention? A novel information vision enhancement method in MR remote collaborative assembly.
Yan, YuXiang; Bai, Xiaoliang; He, Weiping; Wang, Shuxia; Zhang, XiangYu; Wang, Peng; Liu, Liwei; Zhang, Bing.
Affiliation
  • Yan Y; Cyber-Physical Interaction Lab, Northwestern Polytechnical University, Xi'an, 710072 China.
  • Bai X; Cyber-Physical Interaction Lab, Northwestern Polytechnical University, Xi'an, 710072 China.
  • He W; Cyber-Physical Interaction Lab, Northwestern Polytechnical University, Xi'an, 710072 China.
  • Wang S; Cyber-Physical Interaction Lab, Northwestern Polytechnical University, Xi'an, 710072 China.
  • Zhang X; Cyber-Physical Interaction Lab, Northwestern Polytechnical University, Xi'an, 710072 China.
  • Wang P; School of Advanced Manufacturing Engineering, Chongqing University of Posts and Telecommunications, Chongqing, 400065 China.
  • Liu L; Cyber-Physical Interaction Lab, Northwestern Polytechnical University, Xi'an, 710072 China.
  • Zhang B; Cyber-Physical Interaction Lab, Northwestern Polytechnical University, Xi'an, 710072 China.
Int J Adv Manuf Technol ; : 1-23, 2023 Jun 02.
Article en En | MEDLINE | ID: mdl-37360661
In mixed reality (MR) remote collaborative assembly, remote experts guide local users through physical assembly tasks by sharing user cues (e.g., eye gaze and gestures) and spatial visual cues (e.g., AR annotations and virtual replicas). At present, remote experts must perform complex operations to transfer information to local users, and the fusion of virtual and real information can make the MR collaborative interface appear cluttered and redundant, so local users sometimes struggle to identify the focus of the information the expert is conveying. Our research aims to simplify the expert's operations in MR remote collaborative assembly and to enhance the visual cues that express the expert's attention, thereby promoting the expression and communication of collaborative intention and improving assembly efficiency. We developed a system (EaVAS) based on an assembly semantic association model and an expert operation visual enhancement mechanism that integrates gesture, eye gaze, and spatial visual cues. EaVAS gives experts great freedom of operation in MR remote collaborative assembly, allowing them to strengthen the visual expression of the information they want to convey to local users. EaVAS was first tested on a physical engine assembly task. The experimental results show that EaVAS achieves better time performance, cognitive performance, and user experience than a traditional MR remote collaborative assembly method (3DGAM). Our results offer guidance for research on user cognition in MR remote collaborative assembly and expand the application of MR technology to collaborative assembly tasks.
Full text: 1 Collection: 01-international Database: MEDLINE Language: En Journal: Int J Adv Manuf Technol Year: 2023 Document type: Article