Results 1 - 5 of 5
1.
PLoS One ; 18(10): e0292078, 2023.
Article in English | MEDLINE | ID: mdl-37851613

ABSTRACT

Robot-to-human communication is important for mutual understanding during human-robot collaboration. Most current collaborative robots (cobots) are designed with low levels of anthropomorphism, which limits their ability to express human-like communication. In this work, we present an open-source platform named Antropo that increases the level of anthropomorphism of the Franka Emika robot, a widely used collaborative robot arm. The Antropo platform includes three modules: a camera module for expressing eye gaze, a light module for visual feedback, and a sound module for acoustic feedback. These modules can be rapidly prototyped with 3D printers, laser cutters, and low-cost off-the-shelf components, and the platform can be easily installed on the Franka Emika robot. The added communication channels can be synchronised with the robot's motions to enhance mutual understanding. All hardware CAD design files and software files are released. The platform can be used to study human-like behaviours of cobots and the effects of these behaviours on different aspects of human-robot collaboration. We demonstrate the Antropo platform in an assembly task in which the Franka Emika robot expresses various human-like communicative behaviours via the added communication channels. We also present two industrial applications in which the Antropo platform was customised for the Universal Robots UR16e.


Subject(s)
Robotics , Humans , Communication , Acoustics , Equipment Design , Feedback, Sensory
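
The platform's core idea, synchronising added communication channels with robot motion, can be illustrated with a minimal Python sketch. This is not the released Antropo code: set_led, move_to, and the timing values are hypothetical stand-ins for the real hardware and robot-control calls.

import time

def set_led(on):
    # Placeholder for a real visual-feedback call (e.g., a GPIO or serial write).
    print("LED on" if on else "LED off")

def move_to(pose, duration_s):
    # Placeholder for a real robot motion command; here we just wait.
    print(f"moving to {pose} ...")
    time.sleep(duration_s)

def move_with_cue(pose, duration_s=1.0, lead_s=0.3):
    # Light up shortly before the motion starts so the human can anticipate it,
    # then switch off once the motion has finished.
    set_led(True)
    time.sleep(lead_s)
    move_to(pose, duration_s)
    set_led(False)

move_with_cue(pose=(0.4, 0.0, 0.3))

The same cue-before-motion pattern would extend to the sound and gaze modules.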
2.
Front Neurorobot ; 16: 923164, 2022.
Article in English | MEDLINE | ID: mdl-36524219

ABSTRACT

Locomotion mode recognition tells a prosthesis controller when to switch between walking modes, whereas gait phase detection indicates where the user is within the gait cycle. Powered prostheses often implement a different control strategy for each locomotion mode to improve the functionality of the prosthesis. Existing studies have employed several classical machine learning methods for locomotion mode recognition, but these methods are less effective for data with complex decision boundaries and lead to misclassifications. Deep learning-based methods can potentially resolve these limitations because they can model more complex decision boundaries. This study therefore evaluated three deep learning-based models for locomotion mode recognition, namely a recurrent neural network (RNN), a long short-term memory (LSTM) neural network, and a convolutional neural network (CNN), and compared their recognition performance to that of a classical machine learning model, a random forest classifier (RFC). The models were trained on data from one inertial measurement unit (IMU) placed on the lower shank of four able-bodied subjects performing four walking modes: level-ground walking (LW), standing (ST), stair ascent (SA), and stair descent (SD). The results indicate that the CNN and LSTM models outperformed the other models and are promising for real-time locomotion mode recognition in robotic prostheses.
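
As an illustration of the kind of model compared here, the following is a minimal sketch of an LSTM classifier over sliding windows of single-IMU data. It is not the authors' implementation; the window length, channel count, and layer sizes are assumed values for the demo.

import torch
import torch.nn as nn

NUM_CLASSES = 4    # LW, ST, SA, SD
NUM_CHANNELS = 6   # assumed: 3-axis accelerometer + 3-axis gyroscope from one shank IMU
WINDOW = 100       # assumed samples per window (e.g., 1 s at 100 Hz)

class LocomotionLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(NUM_CHANNELS, hidden, batch_first=True)
        self.head = nn.Linear(hidden, NUM_CLASSES)

    def forward(self, x):            # x: (batch, WINDOW, NUM_CHANNELS)
        _, (h_n, _) = self.lstm(x)   # h_n: (1, batch, hidden)
        return self.head(h_n[-1])    # logits: (batch, NUM_CLASSES)

model = LocomotionLSTM()
dummy = torch.randn(8, WINDOW, NUM_CHANNELS)  # synthetic stand-in for IMU windows
logits = model(dummy)
print(logits.argmax(dim=1))                   # predicted mode per window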

3.
PLoS One ; 15(8): e0236939, 2020.
Article in English | MEDLINE | ID: mdl-32823270

ABSTRACT

We present a dataset of behavioral data recorded from 61 children diagnosed with Autism Spectrum Disorder (ASD). The data were collected during a large-scale evaluation of Robot Enhanced Therapy (RET) and cover over 3000 therapy sessions and more than 300 hours of therapy. Half of the children interacted with the social robot NAO supervised by a therapist; the other half, constituting a control group, interacted directly with a therapist. Both groups followed the Applied Behavior Analysis (ABA) protocol. Each session was recorded with three RGB cameras and two RGBD (Kinect) cameras, providing detailed information about the children's behavior during therapy. This public release of the dataset comprises body motion, head position and orientation, and eye gaze variables, all specified as 3D data in a joint frame of reference. In addition, metadata including participant age, gender, and autism diagnosis (ADOS) variables are included. We release this data in the hope of supporting further data-driven studies towards improved therapy methods, as well as a better understanding of ASD in general.


Subject(s)
Autism Spectrum Disorder/therapy , Databases, Factual , Medical Informatics , Robotics , Behavior , Child , Evidence-Based Medicine , Female , Humans , Male
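
As a hedged example of how such 3D joint-frame data might be used (the function, field names, and layout below are illustrative assumptions, not the released format), one common derived measure is whether a child's gaze ray falls within a cone around the robot's position:

import numpy as np

def gaze_on_target(gaze_origin, gaze_dir, target_pos, cone_deg=15.0):
    # True if the gaze ray points to within cone_deg degrees of the target.
    to_target = target_pos - gaze_origin
    cos_angle = np.dot(gaze_dir, to_target) / (
        np.linalg.norm(gaze_dir) * np.linalg.norm(to_target))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))) <= cone_deg

# Synthetic example: head at the origin, gaze roughly toward a robot at (1, 0.2, 0).
print(gaze_on_target(np.zeros(3), np.array([1.0, 0.15, 0.0]),
                     np.array([1.0, 0.2, 0.0])))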
4.
Sensors (Basel) ; 20(14)2020 Jul 17.
Article in English | MEDLINE | ID: mdl-32708924

ABSTRACT

Fast and accurate gait phase detection is essential for effective powered lower-limb prostheses and exoskeletons. As both the versatility and the complexity of these robotic devices increase, so does interest in making gait detection algorithms more performant and their sensing devices smaller and more wearable. A reliable gait detection algorithm improves the precision, stability, and safety of prostheses and other rehabilitation devices. In recent years, the state of the art has advanced significantly in terms of sensors, signal processing, and gait detection algorithms. In this review, we survey studies and developments in gait event detection methods, focusing on those applied to prosthetic devices. We compare the advantages and limitations of the proposed methods and distill the relevant open questions and recommendations for future developments.


Subject(s)
Artificial Limbs , Exoskeleton Device , Algorithms , Gait , Signal Processing, Computer-Assisted
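
One classic family of methods such reviews cover is rule-based event detection from a shank-mounted gyroscope. The sketch below thresholds the sagittal-plane angular velocity to approximate toe-off and heel-strike; the threshold value and the synthetic signal are assumptions for illustration, not a method endorsed by this review.

import numpy as np

def detect_gait_events(gyro_z, fs=100.0, swing_thresh=1.0):
    # Rising threshold crossings approximate toe-off (swing onset);
    # falling crossings approximate heel-strike (swing end) under this rule.
    above = gyro_z > swing_thresh
    toe_offs = np.where(~above[:-1] & above[1:])[0] + 1
    heel_strikes = np.where(above[:-1] & ~above[1:])[0] + 1
    return toe_offs / fs, heel_strikes / fs

# Synthetic shank gyro trace: two swing phases over 4 s at 100 Hz.
t = np.arange(0, 4, 0.01)
signal = 2.5 * np.maximum(0, np.sin(2 * np.pi * 0.5 * t))
to, hs = detect_gait_events(signal)
print("toe-off (s):", to, "heel-strike (s):", hs)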
5.
Infant Behav Dev ; 42: 157-67, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26589653

ABSTRACT

This study investigates whether infants perceive an unfamiliar agent, such as the robot Keepon, as a social agent after observing an interaction between the robot and a human adult. Twenty-three infants aged 9-17 months were exposed, in a first phase, either to a contingent interaction between the active robot and an active human adult or to an interaction between an active human adult and the non-active robot. In a second phase, infants were offered the opportunity to initiate a turn-taking interaction with Keepon. The measured variables were: (1) the number of social initiations the infant directed toward the robot, and (2) the number of anticipatory orientations of attention toward the agent whose turn follows in the conversation. The results indicate a significantly higher number of initiations in the interactive robot condition than in the non-active robot condition, whereas the difference in the frequency of anticipatory turn-taking behaviors was not significant.


Subject(s)
Attention/physiology , Infant Behavior/physiology , Play and Playthings/psychology , Robotics , Adult , Child, Preschool , Cognition , Communication , Female , Humans , Infant , Male