Multimodal fusion of EMG and vision for human grasp intent inference in prosthetic hand control.
Zandigohar, Mehrshad; Han, Mo; Sharif, Mohammadreza; Günay, Sezen Yagmur; Furmanek, Mariusz P; Yarossi, Mathew; Bonato, Paolo; Onal, Cagdas; Padir, Taskin; Erdogmus, Deniz; Schirner, Gunar.
Affiliation
  • Zandigohar M; Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, United States.
  • Han M; Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, United States.
  • Sharif M; Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, United States.
  • Günay SY; Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, United States.
  • Furmanek MP; Department of Physical Therapy, Movement and Rehabilitation Sciences, Northeastern University, Boston, MA, United States; Institute of Sport Sciences, Academy of Physical Education in Katowice, Katowice, Poland.
  • Yarossi M; Department of Physical Therapy, Movement and Rehabilitation Sciences, Northeastern University, Boston, MA, United States.
  • Bonato P; Motion Analysis Lab, Spaulding Rehabilitation Hospital, Charlestown, MA, United States.
  • Onal C; Soft Robotics Lab, Worcester Polytechnic Institute, Worcester, MA, United States.
  • Padir T; Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, United States.
  • Erdogmus D; Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, United States.
  • Schirner G; Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, United States.
Front Robot AI ; 11: 1312554, 2024.
Article in En | MEDLINE | ID: mdl-38476118
ABSTRACT

Objective:

For transradial amputees, robotic prosthetic hands promise to restore the capability to perform activities of daily living. Current control methods based on physiological signals such as electromyography (EMG) are prone to poor inference outcomes due to motion artifacts, muscle fatigue, and other confounds. Vision sensors are a major source of information about the environment state and can play a vital role in inferring feasible and intended gestures. However, visual evidence is also susceptible to its own artifacts, most often due to object occlusion, lighting changes, and the like. Multimodal evidence fusion using physiological and vision sensor measurements is therefore a natural approach, given the complementary strengths of these modalities.

Methods:

In this paper, we present a Bayesian evidence fusion framework for grasp intent inference using eye-view video, eye gaze, and forearm EMG, each processed by neural network models. We analyze individual and fused performance as a function of time as the hand approaches the object to grasp it. For this purpose, we have also developed novel data processing and augmentation techniques to train the neural network components.
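
As a minimal sketch of the kind of Bayesian evidence fusion described above (an illustration under a naive conditional-independence assumption, not the authors' implementation), the snippet below combines per-class grasp posteriors from an EMG network and a vision network; the grasp labels, function name, and probabilities are hypothetical.

```python
# Rough sketch of Bayesian evidence fusion over grasp-type classes, assuming
# conditionally independent modalities and a known class prior. Grasp labels,
# function name, and probabilities are hypothetical, not the authors' code.
import numpy as np

GRASP_TYPES = ["power", "precision", "lateral", "tripod", "open"]  # illustrative label set

def fuse_posteriors(p_emg, p_vision, prior=None):
    """Combine per-class posteriors from an EMG and a vision classifier."""
    p_emg, p_vision = np.asarray(p_emg, float), np.asarray(p_vision, float)
    k = p_emg.size
    prior = np.full(k, 1.0 / k) if prior is None else np.asarray(prior, float)
    # Under conditional independence: p(c | emg, vision) ∝ p(c | emg) p(c | vision) / p(c)
    fused = p_emg * p_vision / prior
    return fused / fused.sum()

# EMG weakly favors "power"; vision agrees and nearly rules out "open".
p_emg = [0.55, 0.20, 0.10, 0.10, 0.05]
p_vis = [0.40, 0.25, 0.20, 0.14, 0.01]
print(dict(zip(GRASP_TYPES, fuse_posteriors(p_emg, p_vis).round(3))))
```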

Results:

Our results indicate that, on average, fusion improves the instantaneous classification accuracy of the upcoming grasp type during the reaching phase by 13.66 and 14.8 percentage points relative to EMG (81.64% non-fused) and visual evidence (80.5% non-fused), respectively, resulting in an overall fusion accuracy of 95.3%.
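
A quick arithmetic check of these figures (a reader's verification added here, not part of the paper): the reported gains are the differences between the fused accuracy and each single-modality accuracy.

```python
# Verify that the reported gains equal fused accuracy minus single-modality accuracy.
fused, emg_only, vision_only = 95.3, 81.64, 80.5
print(round(fused - emg_only, 2))     # 13.66 percentage points over EMG alone
print(round(fused - vision_only, 2))  # 14.8 percentage points over vision alone
```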

Conclusion:

Our experimental data analyses demonstrate that EMG and visual evidence show complementary strengths, and as a consequence, fusion of multimodal evidence can outperform each individual evidence modality at any given time.

Full text: 1 Collection: 01-internacional Database: MEDLINE Language: En Journal: Front Robot AI Year: 2024 Document type: Article Affiliation country: United States Country of publication: Switzerland
