ABSTRACT
Individuals suffering from quadriplegia can achieve increased independence by using an assistive robotic manipulator (ARM). However, due to their disability, the interfaces available to operate such devices are limited. A versatile intraoral tongue control interface (ITCI) has previously been developed for this user group, as the tongue is usually spared from disability. A previous study showed that the ITCI can provide direct and continuous control of 6-7 degrees of freedom (DoF) of an ARM, owing to the high number of inputs it provides (18). In the present pilot study, we investigated whether semi-automation could further improve the efficiency of the ITCI when controlling an ARM. This was achieved by mounting a camera on the end effector of the ARM and using computer vision algorithms to guide the ARM to grasp a target object. Three ITCI control schemes and one joystick control scheme were tested and compared: 1) manual Cartesian control with a base frame reference point, 2) manual Cartesian control with an end effector reference point, 3) manual Cartesian control with an end effector reference point and an autonomous grasp function, and 4) regular JACO2 joystick control. The results indicated that end effector control was superior to base frame control in total task time, number of commands issued, and path efficiency. The addition of the automatic grasp function did not improve performance, but it resulted in fewer collisions with, and displacements of, the target object when grasping.