ABSTRACT
The need for more practical, robust, and affordable prosthetic hands for amputees has led to significant advancements in their functionality. However, designing prostheses that balance dexterity, functionality, and affordability remains a challenge, and there is still a lack of prosthetic designs that can efficiently serve amputees in both heavy manual labor and social interactions while remaining lightweight and dexterous. This paper presents a design for an anthropomorphic, adaptive, lightweight, body-powered prosthetic hand built for performing Activities of Daily Living using a selectively lockable differential mechanism. The proposed differential mechanism allows users to flex and lock the fingers of the prosthesis in a wide range of poses and grasps. The performance of the body-powered prosthesis is experimentally validated with three different types of experiments: i) object grasping, ii) total grasping strength, and iii) individual finger force exertion. The resulting prosthetic hand is lightweight and comfortable to wear and is capable of grasping a wide range of objects, including items commonly used in cleaning and cooking applications where waterproofness is essential.
Subject(s)
Activities of Daily Living, Artificial Limbs, Humans, Prosthesis Design, Hand, Fingers
ABSTRACT
The increasing use of smart technical devices in our everyday lives has necessitated the use of muscle-machine interfaces (MuMI) that are intuitive and that can facilitate immersive interactions with these devices. The most common method to develop MuMIs is using Electromyography (EMG) based signals. However, due to several drawbacks of EMG-based interfaces, alternative methods to develop MuMIs are being explored. In our previous work, we presented a new MuMI called Lightmyography (LMG), which achieved outstanding results compared to a classic EMG-based interface in a five-gesture classification task. In this study, we extend our previous work by experimentally validating the efficiency of the LMG armband in classifying thirty-two different gestures from six participants using a deep learning technique called Temporal Multi-Channel Vision Transformers (TMC-ViT). The efficiency of the proposed model was assessed in terms of classification accuracy. Moreover, two different undersampling techniques are compared. The proposed thirty-two-gesture classifiers achieve accuracies as high as 92%. Finally, we employ the LMG interface in the real-time control of a robotic hand using ten different gestures, successfully reproducing several grasp types from the grasp taxonomies presented in the literature.
Subject(s)
Robotics, Humans, Hand, Electromyography/methods, Muscles, Hand Strength
ABSTRACT
Conventional muscle-machine interfaces, such as Electromyography (EMG), have significant drawbacks, including crosstalk, a non-linear relationship between the signal and the corresponding motion, and increased signal processing requirements. In this work, we introduce a new muscle-machine interfacing technique called lightmyography (LMG) that can be used to efficiently decode human hand gestures, motion, and forces from the detected contractions of the human muscles. LMG utilizes light propagation through elastic media and human tissue, measuring changes in light luminosity to detect muscle movement. Similar to forcemyography, LMG infers muscular contractions through tissue deformation and skin displacements. In this study, we examine how different characteristics of the light source and silicone medium affect the performance of LMG, and we compare LMG and EMG based gesture decoding using various machine learning techniques. To do that, we design an armband equipped with five LMG modules, and we use it to collect the required LMG data. Three different machine learning methods are employed: Random Forests, Convolutional Neural Networks, and Temporal Multi-Channel Vision Transformers. The system has also been used to efficiently decode the forces exerted during power grasping. The results demonstrate that LMG outperforms EMG for most methods and subjects.
Subject(s)
Gestures, Neural Networks (Computer), Humans, Electromyography/methods, Muscles, Motion (Physics), Algorithms, Hand
ABSTRACT
Electromyography (EMG) signals have been used in designing muscle-machine interfaces (MuMIs) for various applications, ranging from entertainment (EMG controlled games) to human assistance and human augmentation (EMG controlled prostheses and exoskeletons). For this, classical machine learning methods such as Random Forest (RF) models have been used to decode EMG signals. However, these methods depend on several stages of signal pre-processing and on the extraction of hand-crafted features to obtain the desired output. In this work, we propose EMG based frameworks for the decoding of object motions during the execution of dexterous, in-hand manipulation tasks using raw EMG signals as input and two novel deep learning (DL) techniques called Temporal Multi-Channel Transformers and Vision Transformers. The results obtained are compared, in terms of accuracy and speed of decoding the motion, with RF-based models and Convolutional Neural Networks as a benchmark. The models are trained for 11 subjects in both a motion-object specific and a motion-object generic way, using a 10-fold cross-validation procedure. This study shows that the performance of MuMIs can be improved by employing DL-based models with raw myoelectric activations instead of developing DL or classic machine learning models with hand-crafted features.
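To make the raw-signal input idea concrete, here is a minimal sketch that feeds windows of multi-channel myoelectric activations to a generic transformer encoder with a linear classification head. It is not the Temporal Multi-Channel Transformer or Vision Transformer architecture used in the paper; the channel count, window length, model dimensions, and the omission of positional encodings are simplifying assumptions.

```python
import torch
import torch.nn as nn

class RawEMGTransformer(nn.Module):
    """Generic transformer over raw EMG windows (not the paper's TMC architecture)."""
    def __init__(self, n_channels=16, n_classes=6, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)          # per-time-step channel embedding
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                   # x: (batch, time, channels) raw EMG window
        z = self.encoder(self.embed(x))     # self-attention over time steps
        return self.head(z.mean(dim=1))     # average-pool the window, then classify

model = RawEMGTransformer()
dummy = torch.randn(8, 200, 16)             # 8 windows, 200 samples, 16 muscle sites
print(model(dummy).shape)                   # torch.Size([8, 6])
```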
Subject(s)
Artificial Limbs, Hand, Electromyography/methods, Humans, Motion (Physics), Neural Networks (Computer)
ABSTRACT
Evaluating the dexterity of human and robotic hands through appropriate benchmarks, scores, and metrics is of paramount importance for determining how skillful humans are and for designing and developing new bioinspired or even biomimetic end-effectors (e.g., robotic grippers and hands). Dexterity tests have been used in industrial and medical settings to assess how dexterous the hands of workers and surgeons are, as well as in robotic rehabilitation settings to determine the improvement or deterioration of hand function after a stroke or a surgery. In robotics, a comprehensive dexterity test allows us to evaluate and compare grippers and hands irrespective of their design characteristics. However, there is a lack of well-defined metrics, benchmarks, and tests that quantify robot dexterity. Previous work has focused on a number of widely accepted functional tests that are used for the evaluation of manual dexterity and of human hand function improvement post injury. Each of these tests focuses on a different set of specific tasks and objects. Deriving from these tests, this work proposes a new modular, affordable, accessible, open-source dexterity test for both humans and robots. This test evaluates grasping and manipulation capabilities by combining the features and best practices of the aforementioned tests with new task categories specifically designed to evaluate dexterous manipulation capabilities. The dexterity test and the accompanying benchmarks allow us to determine the overall hand function recovery and the dexterity of robotic end-effectors with ease. More precisely, a dexterity score that ranges from 0 (simplistic, non-dexterous system) to 1 (human-like system) is calculated as the weighted sum of the accuracy and task execution speed subscores. It should also be noted that the dexterity of a robotic system can be evaluated by assessing the efficiency of the robotic hardware, the robotic perception system, or both. The test and the benchmarks proposed in this study have been validated using extensive human and robot trials. The human trials have been used to determine the baseline scores for the evaluation system. The results show that the time required to complete the tasks decreases significantly across trials, indicating a clear learning curve in mastering the dexterous manipulation capabilities associated with the imposed tasks. Finally, the time required to complete the tasks with restricted tactile feedback is significantly higher, indicating the importance of tactile feedback.
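The weighted-sum scoring mentioned above can be illustrated with a small sketch. The weights, the normalization of the speed subscore against a human baseline time, and the function name are illustrative assumptions rather than the exact formulation of the proposed benchmark.

```python
# Minimal sketch of a weighted-sum dexterity score in [0, 1].
# Weights and speed normalization are illustrative assumptions,
# not the exact formulation of the proposed benchmark.

def dexterity_score(accuracy, task_time, baseline_time, w_acc=0.5, w_speed=0.5):
    """accuracy: fraction of successfully completed tasks, in [0, 1].
    task_time: time the system needed to complete the task battery (s).
    baseline_time: human baseline time for the same battery (s)."""
    # Speed subscore: 1 when matching the human baseline, approaching 0 when much slower.
    speed = min(1.0, baseline_time / task_time)
    return w_acc * accuracy + w_speed * speed

# Example: a gripper that completes 80% of the tasks, twice as slowly as a human.
print(dexterity_score(accuracy=0.8, task_time=240.0, baseline_time=120.0))  # 0.65
```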
ABSTRACT
With an increasing number of robotic and prosthetic devices, there is a need for intuitive Muscle-Machine Interfaces (MuMIs) that allow the user to have an embodied interaction with the devices they are controlling. Such MuMIs can be developed using machine learning based methods that utilize myoelectric activations from the muscles of the user to decode their intention. However, the choice of the learning method is subjective and depends on the features extracted from the raw Electromyography signals as well as on the intended application. In this work, we compare the performance of five machine learning methods and eight time-domain feature extraction techniques in discriminating between different gestures executed by the user of an EMG based MuMI. The results show that the Willison Amplitude performs consistently better for all the machine learning methods compared in this study, whereas the Zero Crossings achieves the worst results for the Decision Trees and the Random Forests, and the Variance offers the worst performance for all the other learning methods. The Random Forests method achieves the best results in terms of accuracy and has the lowest variance between subjects. To experimentally validate the efficiency of the Random Forest classifier and the Willison Amplitude technique, a series of gestures was decoded in real time from the myoelectric activations of the operator and used to control a robot hand.
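The best-performing feature, the Willison Amplitude, and the weakest ones, Zero Crossings and Variance, are all simple time-domain statistics of a windowed EMG signal. The sketch below shows common formulations of the three; the threshold values are assumptions that would normally be tuned per sensor and subject.

```python
import numpy as np

# Common time-domain EMG features for a single-channel window x (1-D array).
# Threshold values are illustrative; in practice they are tuned per setup.

def willison_amplitude(x, threshold=0.01):
    # Counts how often consecutive samples differ by more than the threshold.
    return int(np.sum(np.abs(np.diff(x)) > threshold))

def zero_crossings(x, threshold=0.0):
    # Counts sign changes whose amplitude jump exceeds the threshold.
    signs = np.sign(x)
    crossings = signs[:-1] * signs[1:] < 0
    big_enough = np.abs(np.diff(x)) >= threshold
    return int(np.sum(crossings & big_enough))

def variance(x):
    return float(np.var(x))

window = np.random.randn(200) * 0.05   # stand-in for one 200-sample EMG window
print(willison_amplitude(window), zero_crossings(window), variance(window))
```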
Subject(s)
Hand, Intention, Electromyography, Gestures, Humans, Machine Learning
ABSTRACT
Robot grasping in unstructured and dynamic environments is heavily dependent on the object attributes. Although Deep Learning approaches have delivered exceptional performance in robot perception, human perception and reasoning are still superior in processing novel object classes. Furthermore, training such models requires large, difficult-to-obtain datasets. This work combines crowdsourcing and gamification to leverage human intelligence, enhancing the object recognition and attribute estimation processes of robot grasping. The framework employs an attribute matching system that encodes visual information into an online puzzle game, utilizing the collective intelligence of players to expand the attribute database and react to real-time perception conflicts. The framework is deployed and evaluated in two proof-of-concept applications: enhancing the control of a robotic exoskeleton glove and improving object identification for autonomous robot grasping. In addition, a model for estimating the framework response time is proposed. The obtained results demonstrate that the framework is capable of rapid adaptation to novel object classes, based purely on visual information and human experience.
ABSTRACT
Over the last decade, there has been an increased interest in developing aerial robotic platforms that exhibit grasping and perching capabilities, not only within the research community but also in companies across different industry sectors. Aerial robots range from standard multicopter vehicles/drones to autonomous helicopters and fixed-wing or hybrid devices. Such devices rely on a range of different solutions for achieving grasping and perching. These solutions can be classified as: 1) simple gripper systems, 2) arm-gripper systems, 3) tethered gripping mechanisms, 4) reconfigurable robot frames, 5) adhesion solutions, and 6) embedment solutions. Grasping and perching are two crucial capabilities that allow aerial robots to interact with the environment and execute a plethora of complex tasks, facilitating new applications that range from autonomous package delivery and search and rescue to autonomous inspection of dangerous or remote environments. In this review paper, we present the state-of-the-art in aerial grasping and perching mechanisms and provide a comprehensive comparison of their characteristics. Furthermore, we analyze these mechanisms by comparing the advantages and disadvantages of the proposed technologies, and we summarize the significant achievements in these two research topics. Finally, we conclude the review by suggesting a series of potential future research directions that we believe are promising.
ABSTRACT
Currently, approximately 1.5 million American deaf-blind individuals depend on the availability of interpreting services to communicate in their primary conversational language, tactile American Sign Language (ASL). In an effort to give the deaf-blind community access to a device that facilitates independent communication using tactile ASL, we developed TATUM (Tactile ASL Translational User Mechanism). TATUM employs 15 degrees of actuation in a hand-wrist system that is capable of signing the 26-letter ASL alphabet. All servo sequences that render the desired fingerspelled letters and ASL words are stored in a web application programming interface (API) provided by Interpres, an independent cloud-based service. A validation study including both deaf and deaf-blind participants confirmed that the TATUM hand mimics a human hand in both size and feel. The current design of TATUM attained an average recognition rate of 94.7% in visual validation, indicating its potential to support deaf and deaf-blind individuals in communicating via visual and tactile ASL.
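The cloud-based lookup described above can be pictured as a letter-to-servo-sequence query followed by replay on the hand, as in the sketch below. The endpoint URL, the JSON layout, and the send_to_servos helper are hypothetical placeholders, not the actual Interpres API.

```python
import requests

API_BASE = "https://example.org/interpres/api"   # hypothetical endpoint, not the real service

def send_to_servos(positions):
    # Placeholder for the hardware layer that drives the 15 actuated degrees of freedom.
    print("servo targets:", positions)

def sign_letter(letter):
    # Hypothetical JSON layout: {"sequence": [[15 servo targets], [15 servo targets], ...]}
    response = requests.get(f"{API_BASE}/letters/{letter}", timeout=5)
    response.raise_for_status()
    for frame in response.json()["sequence"]:
        send_to_servos(frame)

# sign_letter("a")   # would fetch and replay the stored sequence for the letter 'a'
```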
Subject(s)
Robotic Surgical Procedures, Sign Language, Communication, Hand, Humans, United States
ABSTRACT
Recognising and classifying human hand gestures is important for effective communication between humans and machines in applications such as human-robot interaction, human to robot skill transfer, and control of prosthetic devices. Although there are already many interfaces that enable decoding of the intention and action of humans, they are either bulky or they rely on techniques that need careful positioning of the sensors, causing inconvenience when the system needs to be used in real-life scenarios and environments. Moreover, electromyography (EMG), which is the most commonly used technique, captures signals that have a nonlinear relationship with human intention and motion. In this work, we present lightmyography (LMG), a new muscle-machine interfacing method for decoding human intention and motion. Lightmyography utilizes light propagation through elastic media and the change of light luminosity to detect silicone deformation. Lightmyography is similar to forcemyography in the sense that both record muscular contractions through skin displacements. In order to experimentally validate the efficiency of the proposed method, we designed an interface consisting of five LMG sensors to perform gesture classification experiments. Using this device, we were able to accurately detect a series of different hand postures and gestures. We also compared LMG data with processed EMG data.
Subject(s)
Intention, Muscle Contraction, Electromyography, Humans, Motion (Physics), Muscles
ABSTRACT
Over the last decade, underactuated, adaptive robot grippers and hands have received increased interest from the robotics research community. This class of robotic end-effectors can be used in many different fields and scenarios, with a very promising application being the development of prosthetic devices. Their suitability for the development of such devices is attributed to the utilization of underactuation, which provides increased functionality and dexterity with reduced weight, cost, and control complexity. The most critical components of underactuated, adaptive hands that allow them to perform a broad set of grasp poses are appropriate differential mechanisms that facilitate the actuation of multiple degrees of freedom using a single motor. In this work, we focus on the design, analysis, and experimental validation of a four-output geared differential, a series elastic differential, and a whiffletree differential that can incorporate a series of manual and automated locking mechanisms. The locking mechanisms have been developed to enhance the control of the differential outputs, allowing for efficient grasp selection with a minimal set of actuators. The differential mechanisms are applied to prosthetic hands, and we compare them, describing the benefits and disadvantages of each.
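As an intuition aid for the whiffletree differential mentioned above, the toy model below assumes an ideal, equal-arm, frictionless two-level whiffletree with rigid bars and inextensible tendons: the input displacement is the average of the four output displacements, and the input tension splits equally among the outputs, locked or not. This is a simplification, not the specific geometry or locking implementation of the paper.

```python
# Toy model of an ideal, equal-arm, two-level whiffletree with four tendon outputs.
# Assumptions: rigid bars, frictionless pivots, inextensible tendons, small rotations.

def input_displacement(output_displacements):
    """Input tendon travel needed to realize the given four output travels.
    A locked finger simply keeps its entry fixed at the locked value."""
    d1, d2, d3, d4 = output_displacements
    return (d1 + d2 + d3 + d4) / 4.0      # two levels of 2:1 averaging bars

def output_tensions(input_tension):
    """With equal moment arms, each bar splits its load evenly, so every
    output (locked or not) sees a quarter of the input tension."""
    return [input_tension / 4.0] * 4

# Example: index and middle fingers flex 20 mm, ring and little fingers locked at 0 mm.
print(input_displacement([20.0, 20.0, 0.0, 0.0]))   # 10.0 mm of input travel
print(output_tensions(40.0))                        # [10.0, 10.0, 10.0, 10.0] N
```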
ABSTRACT
Traditionally, the robotic end-effectors that are employed in unstructured and dynamic environments are rigid, and their operation requires sophisticated sensing elements and complicated control algorithms in order to handle and manipulate delicate and fragile objects. Over the last decade, considerable research effort has been put into the development of adaptive, under-actuated, soft robots that facilitate robust interactions with dynamic environments. In this paper, we present soft, retractable, pneumatically actuated, telescopic actuators that facilitate the efficient execution of stable grasps involving a plethora of everyday life objects. The efficiency of the proposed actuators is validated by employing them in two different soft and hybrid robotic grippers. The hybrid gripper uses three rigid fingers to accomplish all the tasks required of a traditional robotic gripper, while three inflatable, telescopic fingers provide soft interaction with objects. This synergistic combination of soft and rigid structures allows the gripper to cage/trap and firmly hold heavy and irregular objects. The second, simplistic, and highly affordable robotic gripper employs only the telescopic actuators, exhibiting an adaptive behavior during the execution of stable grasps of fragile and delicate objects. The experiments demonstrate that both grippers can successfully and stably grasp a wide range of objects and can exert significantly high contact forces.
ABSTRACT
Soft, underactuated, and wearable robotic exo-gloves have received increased interest over the last few years. These devices can be used to improve the capabilities of healthy individuals or to assist people who suffer from neurological and musculoskeletal diseases. Despite the significant progress in the field, most existing solutions are still heavy and expensive, require an external power source to operate, and are not wearable. In this paper, we focus on the development of an affordable, underactuated, tendon-driven, wearable exo-glove equipped with a novel four-output differential mechanism that enhances the grasping capabilities of the user. The device and the differential mechanism are experimentally tested and assessed using three different types of experiments: i) grasping tests that involve different everyday objects, ii) force exertion capability tests that assess the fingertip forces for different types of grasps, and iii) tendon tension tests that estimate the maximum tendon tension that can be obtained by employing the proposed differential. The device considerably improves the grasping capabilities of the user, weighing 690 g and offering an operation autonomy of a whole day.
Subject(s)
Exoskeleton Device, Hand, Tendons, Wearable Electronic Devices, Hand Strength, Humans
ABSTRACT
Adaptive, tendon-driven, and affordable prosthetic devices have received increased interest over the last decades. Prosthetic devices range from body-powered solutions to fully actuated systems. Despite the significant progress in the field, most existing solutions are expensive, heavy, and bulky, or they cannot be used for partial hand amputations. In this paper, we focus on the development of adaptive, tendon-driven, glove-based, affordable prostheses for partial hand amputations, and we propose two compact and lightweight devices (a body-powered and a motor-driven version). The efficiency of the devices is experimentally validated and their performance is evaluated using two different types of tests: i) grasping tests that involve different everyday objects and ii) tests that assess the force exertion capabilities of the proposed prostheses.
Subject(s)
Artificial Limbs, Hand, Physiological Adaptation, Surgical Amputation, Hand Strength, Humans, Movement, Prosthesis Design, Tendons
ABSTRACT
This paper presents an adaptive actuation mechanism that can be employed for the development of anthropomorphic, dexterous robot hands. The tendon-driven actuation mechanism achieves both flexion/extension and adduction/abduction of the finger's metacarpophalangeal joint using two actuators. Moment arm pulleys are employed to drive the tendon laterally and achieve a simultaneous execution of abduction and flexion motion. Particular emphasis has been given to the modeling and analysis of the actuation mechanism. More specifically, the analysis determines the design parameter values required to achieve desired abduction angles. A model for spatial motion is also provided that relates the actuation modes to the finger motions. A static balance analysis is performed to compute the tendon force at each joint, and a model is employed to compute the stiffness of the rotational flexure joints. The proposed mechanism has been designed and fabricated with the hybrid deposition manufacturing technique. The efficiency of the mechanism has been validated with experiments that include the assessment of the role of friction, the computation of the reachable workspace, the assessment of the force exertion capabilities, the demonstration of the feasible motions, and the evaluation of the grasping and manipulation capabilities. An anthropomorphic robot hand equipped with the proposed actuation mechanism was also fabricated to evaluate its performance. The proposed mechanism facilitates the collaboration of actuators to increase the exerted forces, improving hand dexterity and allowing the execution of dexterous manipulation tasks.
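A simplified version of the static balance described above, for a single tendon routed over elastic flexure joints with constant moment arms and no contact or friction, is sketched below; the stiffness and moment-arm values are illustrative assumptions, not the parameters of the fabricated finger.

```python
# Static balance of a tendon-driven finger with elastic flexure joints.
# Assumptions: one tendon over all joints, constant moment arms r_i,
# linear rotational stiffness k_i, no contact and no friction.

def equilibrium_angles(tendon_force, moment_arms, stiffnesses):
    """Pre-contact equilibrium: F * r_i = k_i * theta_i at every joint."""
    return [tendon_force * r / k for r, k in zip(moment_arms, stiffnesses)]

def required_tension(target_angle, moment_arm, stiffness):
    """Tension needed so a given joint reaches a target angle (rad)."""
    return stiffness * target_angle / moment_arm

r = [0.008, 0.006, 0.005]       # moment arms in m (illustrative values)
k = [0.25, 0.15, 0.10]          # flexure stiffnesses in Nm/rad (illustrative values)
print(equilibrium_angles(5.0, r, k))      # joint angles in rad for a 5 N tendon force
print(required_tension(1.0, r[0], k[0]))  # ~31 N to flex the first joint to 1 rad
```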
ABSTRACT
Electromyography (EMG) based interfaces are the most common solutions for the control of robotic, orthotic, prosthetic, assistive, and rehabilitation devices, translating myoelectric activations into meaningful actions. Over the last years, a lot of emphasis has been put on the EMG based decoding of human intention, but very few studies have focused on the continuous decoding of human motion. In this work, we present a learning scheme for the EMG based decoding of object motions in dexterous, in-hand manipulation tasks. We also study the contribution of different muscles while performing these tasks and the effect of gender and hand size on the overall decoding accuracy. To do that, we use EMG signals derived from 16 muscle sites (8 on the hand and 8 on the forearm) of 11 different subjects and an optical motion capture system that records the object motion. The object motion decoding is formulated as a regression problem using the Random Forests methodology. Regarding feature selection, we use the following time-domain features: root mean square, waveform length, and zero crossings. A 10-fold cross validation procedure is used for model assessment purposes, and the variable importance values are calculated for each feature. This study shows that subject specific, hand specific, and object specific decoding models offer better decoding accuracy than the generic models.
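As a rough illustration of the regression pipeline described above, and not the exact implementation of the paper, the sketch below extracts the three time-domain features per channel and evaluates a Random Forests regressor with 10-fold cross-validation; the window length, channel layout, and synthetic data are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

def window_features(window):
    """window: (n_samples, n_channels) raw EMG. Returns RMS, WL, ZC per channel."""
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)          # waveform length
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)    # zero crossings
    return np.concatenate([rms, wl, zc])

# Synthetic stand-in data: 500 windows, 200 samples each, 16 muscle sites, 1-D motion target.
rng = np.random.default_rng(0)
emg_windows = rng.standard_normal((500, 200, 16))
object_motion = rng.standard_normal(500)

X = np.array([window_features(w) for w in emg_windows])
model = RandomForestRegressor(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, object_motion,
                         cv=KFold(n_splits=10, shuffle=True, random_state=0))
print(scores.mean())   # with real data, this R^2 score reflects decoding quality
```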
Subject(s)
Electromyography/methods, Hand/physiology, Movement/physiology, Prostheses and Implants, Adult, Algorithms, Biomechanical Phenomena, Female, Forearm/physiology, Healthy Volunteers, Humans, Machine Learning, Male, Skeletal Muscle/physiology, Prosthesis Design, Reproducibility of Results, Robotics, Young Adult
ABSTRACT
This paper presents a compliant, underactuated finger for the development of anthropomorphic robotic and prosthetic hands. The finger achieves both flexion/extension and adduction/abduction on the metacarpophalangeal joint using two actuators. The design employs moment arm pulleys to drive the tendon laterally and amplify the abduction motion, while also maintaining the flexion motion. Particular emphasis has been given to the analysis of the mechanism. The proposed finger has been fabricated with the hybrid deposition manufacturing technique, and the actuation mechanism's efficiency has been validated with experiments that include the computation of the reachable workspace, the assessment of the exerted forces at the fingertip, the demonstration of the feasible motions, and the presentation of the grasping and manipulation capabilities. The proposed mechanism facilitates the collaboration of the two actuators to increase the exerted finger forces. Moreover, the extended workspace allows the execution of dexterous manipulation tasks.
Subject(s)
Fingers/physiology, Biomechanical Phenomena, Adaptability, Humans, Joints/physiology, Rotation, Tendons/physiology
ABSTRACT
Adaptive robot hands are typically created by introducing structural compliance either in their joints (e.g., implementation of flexure joints) or in their finger-pads. In this paper, we present a series of alternative uses of structural compliance for the development of simple, adaptive, compliant and/or under-actuated robot grippers and hands that can efficiently and robustly execute a variety of grasping and dexterous, in-hand manipulation tasks. The proposed designs utilize only one actuator per finger to control multiple degrees of freedom, and they retain the superior grasping capabilities of adaptive grasping mechanisms even under significant object pose or other environmental uncertainties. More specifically, in this work, we introduce, discuss, and evaluate: (a) a design of pre-shaped, compliant robot fingers that adapt/conform to the object geometry, (b) a hyper-adaptive finger-pad design that maximizes the area of the contact patches between the hand and the object, maximizing also the grasp stability, and (c) a design that executes compliance-adjustable manipulation tasks that can be predetermined by tuning the in-series compliance of the tendon routing system and by appropriately selecting the imposed tendon loads. The grippers are experimentally tested and their efficiency is validated using three different types of tests: (i) grasping tests that involve different everyday objects, (ii) grasp quality tests that estimate the contact area between the grippers and the objects grasped, and (iii) dexterous, in-hand manipulation experiments that evaluate the manipulation capabilities of the Compliance Adjustable Manipulation (CAM) hand. The devices employ mechanical adaptability to facilitate and simplify the efficient execution of robust grasping and dexterous, in-hand manipulation tasks.
ABSTRACT
The field of Brain Machine Interfaces (BMI) has attracted increased interest due to its multiple applications in the health and entertainment domains. A BMI enables a direct interface between the brain and machines and is capable of translating neuronal information into meaningful actions (e.g., Electromyography based control of a prosthetic hand). One of the biggest challenges in developing a surface Electromyography (sEMG) based interface is the selection of the right muscles for the execution of a desired task. In this work, we investigate optimal muscle selections for sEMG based decoding of dexterous in-hand manipulation motions. To do that, we use EMG signals derived from 14 muscle sites of interest (7 on the hand and 7 on the forearm) and an optical motion capture system that records the object motion. The regression problem is formulated using the Random Forests methodology, which is based on decision trees. Regarding feature selection, we use the following time-domain features: root mean square, waveform length, and zero crossings. A 5-fold cross validation procedure is used for model assessment purposes, and the importance values are calculated for each feature. This pilot study shows that the muscles of the hand contribute more than the muscles of the forearm to the execution of in-hand manipulation tasks and that the myoelectric activations of the hand muscles provide better estimation accuracies for the decoding of manipulation motions. These outcomes suggest that the loss of the hand muscles in certain amputations limits the amputees' ability to perform dexterous, EMG based control of a prosthesis in manipulation tasks. The results discussed can also be used to improve the efficiency and intuitiveness of EMG based interfaces for healthy subjects.
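The per-feature importance values mentioned above can be aggregated into per-muscle contributions to compare hand and forearm sites, as in the sketch below; the feature layout, the site ordering, and the use of scikit-learn's importance attribute are assumptions rather than the paper's exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Assume X has one RMS, WL, and ZC column per muscle site, ordered
# [site0_rms, ..., site13_rms, site0_wl, ..., site13_wl, site0_zc, ..., site13_zc].
n_sites, n_feature_types = 14, 3
rng = np.random.default_rng(1)
X = rng.standard_normal((400, n_sites * n_feature_types))   # stand-in feature matrix
y = rng.standard_normal(400)                                 # stand-in object-motion target

forest = RandomForestRegressor(n_estimators=200, random_state=1).fit(X, y)
importances = forest.feature_importances_.reshape(n_feature_types, n_sites)
per_site = importances.sum(axis=0)          # total importance attributed to each muscle site

# Assumed ordering: first 7 columns per feature type are hand sites, last 7 forearm sites.
hand_sites, forearm_sites = per_site[:7], per_site[7:]
print("hand:", hand_sites.sum(), "forearm:", forearm_sites.sum())
```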