1.
Acta Neurochir (Wien) ; 166(1): 104, 2024 Feb 24.
Article in English | MEDLINE | ID: mdl-38400918

ABSTRACT

INTRODUCTION: The current assessment and standardization of microsurgical skills are subjective, posing challenges in reliable skill evaluation. We aim to address these limitations by developing a quantitative and objective framework for accurately assessing and enhancing microsurgical anastomosis skills among surgical trainees. We hypothesize that this framework can differentiate the proficiency levels of microsurgeons, aligning with subjective assessments based on the ALI score. METHODS: We select relevant performance metrics from the literature on laparoscopic skill assessment and human motor control studies, focusing on time, instrument kinematics, and tactile information. This information is measured and estimated by a set of sensors, including cameras, a motion capture system, and tactile sensors. The recorded data is analyzed offline using our proposed evaluation framework. Our study involves 12 participants of different ages ([Formula: see text] years) and genders (nine males and three females), including six novice and six intermediate subjects, who perform surgical anastomosis procedures on a chicken leg model. RESULTS: We show that the proposed set of objective and quantitative metrics to assess skill proficiency aligns with subjective evaluations, particularly the ALI score method, and can effectively differentiate novices from more proficient microsurgeons. Furthermore, we find statistically significant disparities: microsurgeons with an intermediate level of skill proficiency surpassed novices in task speed and idle time and produced smoother, briefer hand displacements. CONCLUSION: The framework enables accurate skill assessment and provides objective feedback for improving microsurgical anastomosis skills among surgical trainees. By overcoming the subjectivity and limitations of current assessment methods, our approach contributes to the advancement of surgical education and the development of aspiring microsurgeons.
Furthermore, our framework proves able to distinguish and classify the proficiency levels (novice and intermediate) exhibited by microsurgeons.
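Two of the kinematic metrics named above, idle time and movement smoothness, can be sketched as follows. This is a minimal illustration: the speed threshold and the log-dimensionless-jerk formulation are assumptions for the sketch, not the paper's exact definitions.

```python
import numpy as np

def idle_fraction(speed, threshold=0.005):
    """Fraction of samples in which the instrument tip is nearly still
    (threshold in m/s is a hypothetical choice)."""
    speed = np.asarray(speed, float)
    return float(np.count_nonzero(speed < threshold)) / len(speed)

def log_dimensionless_jerk(position, dt):
    """Log dimensionless jerk of an (N, d) trajectory: larger (closer
    to zero) values indicate smoother motion."""
    position = np.asarray(position, float)
    vel = np.gradient(position, dt, axis=0)
    jerk = np.gradient(np.gradient(vel, dt, axis=0), dt, axis=0)
    speed = np.linalg.norm(vel, axis=1)
    duration = len(position) * dt
    integral = float(np.sum(jerk ** 2) * dt)
    return -np.log(duration ** 3 / speed.max() ** 2 * integral)
```

A jittery trajectory yields a much lower (more negative) jerk score than a smooth one, which is the kind of separation between novices and intermediates reported above.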


Subject(s)
Clinical Competence , Laparoscopy , Humans , Male , Female , Anastomosis, Surgical/methods , Microsurgery/methods
2.
Open Res Eur ; 4: 4, 2024.
Article in English | MEDLINE | ID: mdl-38385118

ABSTRACT

The importance of construction automation has grown worldwide, aiming to deliver new machinery for the automation of roads, tunnels, bridges, buildings and earth-work construction. This need is mainly driven by (i) the shortage and rising costs of skilled workers, (ii) the tremendously increased need for new infrastructure to serve daily activities and (iii) the immense demand for maintenance of ageing infrastructure. Shotcrete (sprayed concrete) is becoming an increasingly popular technology among contractors and builders, as its application is extremely economical and flexible, and the growth in construction repairs in developed countries demands extensive automation of concrete placement. Even though shotcrete technology is heavily mechanized, the actual application is still performed manually to a large extent. The RoBétArmé European project targets the Construction 4.0 transformation of construction with shotcrete through the adoption of breakthrough technologies such as sensors, augmented reality systems, high-performance computing, additive manufacturing, advanced materials, autonomous robots and simulation systems, technologies that have already been studied and applied in Industry 4.0. The paper at hand showcases the development of a novel robotic system with advanced perception, cognition and digitization capabilities for the automation of all phases of shotcrete application. In particular, the challenges and barriers in shotcrete automation are presented and the solutions suggested by RoBétArmé are outlined. We introduce a basic conceptual architecture of the system to be developed and we present the four application scenarios in which the system is designed to operate.


This paper showcases a case study in which novel robotic systems will be developed for the automation of shotcrete application. The outcomes of this research can be widely applied to other technologies in the construction domain.

3.
Sci Rep ; 13(1): 20163, 2023 11 17.
Article in English | MEDLINE | ID: mdl-37978205

ABSTRACT

During reaching actions, the human central nervous system (CNS) generates trajectories that optimize effort and time. When there is an obstacle in the path, we make sure that our arm passes the obstacle with a sufficient margin. This comfort margin varies between individuals. When passing a fragile object, risk-averse individuals may adopt a larger margin, following a longer path than risk-prone people do. However, it is not known whether this variation is associated with a personalized cost function underlying each individual's optimal control policy, nor how it is represented in brain activity. This study investigates whether such individual variations in evaluation criteria during reaching result from the differentiated weighting given to energy minimization versus comfort, and monitors brain error-related potentials (ErrPs) evoked when subjects observe a robot moving dangerously close to a fragile object. Seventeen healthy participants monitored a robot performing safe, daring and unsafe trajectories around a wine glass. Each participant displayed distinct evaluation criteria regarding the energy efficiency and comfort of robot trajectories. The ErrP-BCI outputs successfully inferred such individual variation. This study suggests that ErrPs could be used in conjunction with an optimal control approach to identify the personalized cost used by the CNS. It further opens new avenues for the use of brain-evoked potentials to train assistive robotic devices through neuroprosthetic interfaces.


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Humans , Electroencephalography/methods , Evoked Potentials/physiology , Brain , Algorithms
4.
J Neural Eng ; 19(6)2022 12 02.
Article in English | MEDLINE | ID: mdl-36384035

ABSTRACT

Objective. The limited functionality of hand prostheses remains one of the main reasons behind their lack of wide adoption by amputees. Indeed, while commercial prostheses can perform a reasonable number of grasps, they are often inadequate for manipulating the object once in hand. This lack of dexterity drastically restricts the utility of prosthetic hands. We aim at investigating a novel shared control strategy that combines autonomous control of forces exerted by a robotic hand with electromyographic (EMG) decoding to perform robust in-hand object manipulation. Approach. We conduct a three-day longitudinal study with eight healthy subjects controlling a 16-degrees-of-freedom robotic hand to insert objects in boxes of various orientations. EMG decoding from forearm muscles enables subjects to move, proportionally and simultaneously, the fingers of the robotic hand. The desired object rotation is inferred using two EMG electrodes placed on the shoulder that record the activity of muscles responsible for elevation and depression. During the object interaction phase, the autonomous controller stabilizes and rotates the object to achieve the desired pose. In this study, we compare an incremental and a proportional shoulder-decoding method in combination with two state machine interfaces offering different levels of assistance. Main results. Results indicate that robotic assistance reduces the number of failures by 41% and, when combined with incremental shoulder EMG decoding, leads to faster task completion time (median = 16.9 s) compared to other control conditions. Training to use the assistive device is fast: after one session of practice, all subjects managed to achieve tasks with 50% fewer failures. Significance. Shared control approaches that give some authority to an autonomous controller on board the prosthesis are an alternative to control schemes relying on EMG decoding alone.
This may improve the dexterity and versatility of robotic prosthetic hands for people with trans-radial amputation. By delegating control of forces to the prosthesis' on-board control, one speeds up reaction time and improves the precision of force control. Such a shared control mechanism may enable amputees to perform fine insertion tasks solely using their prosthetic hands. This may restore some of the functionality of the disabled arm.
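The incremental shoulder-decoding scheme described above can be caricatured as a tiny state update per control tick. The activation threshold and rotation step below are hypothetical values for illustration, not those used in the study.

```python
def incremental_rotation(emg_elev, emg_depr, angle, step=5.0, on=0.3):
    """One control tick of an incremental shoulder decoder: elevation
    activity above the threshold rotates the object up by a fixed step,
    depression activity rotates it down, and rest or co-activation
    leaves the pose unchanged (threshold and step are hypothetical)."""
    if emg_elev > on and emg_depr <= on:
        return angle + step
    if emg_depr > on and emg_elev <= on:
        return angle - step
    return angle
```

An incremental mapping like this trades speed for robustness: noisy EMG only ever moves the pose by one bounded step, which is consistent with the reduced failure rate reported above.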


Subject(s)
Amputees , Artificial Limbs , Robotics , Humans , Longitudinal Studies , Electromyography/methods , Hand/physiology , Hand Strength/physiology
5.
IEEE Trans Cybern ; PP2022 Nov 29.
Article in English | MEDLINE | ID: mdl-36446005

ABSTRACT

When humans interact with each other, eye-gaze movements have to support motor control as well as communication. On the one hand, we need to fixate the task goal to retrieve the visual information required for safe and precise action execution. On the other hand, gaze movements fulfil the purpose of communication, both for reading the intentions of our interaction partners and for signalling our own action intentions to others. We study this Gaze Dialogue between two participants working on a collaborative task involving two types of actions: 1) individual action and 2) action-in-interaction. We recorded the eye-gaze data of both participants during the interaction sessions in order to build a computational model, the Gaze Dialogue, encoding the interplay of eye movements during dyadic interaction. The model also captures the correlation between the different gaze fixation points and the nature of the action. This knowledge is used to infer the type of action performed by an individual. We validated the model against the recorded eye-gaze behavior of one subject, taking the eye-gaze behavior of the other subject as the input. Finally, we used the model to design a humanoid robot controller that provides interpersonal gaze coordination in human-robot interaction scenarios. During the interaction, the robot is able to: 1) adequately infer the human action from gaze cues; 2) adjust its gaze fixation according to the human eye-gaze behavior; and 3) signal nonverbal cues that correlate with the robot's own action intentions.
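One simple way to realize the "infer the human action from gaze cues" step is a maximum-likelihood match between observed fixation counts and per-action fixation profiles. The multinomial model and the profile values below are illustrative assumptions, not the published Gaze Dialogue model.

```python
import numpy as np

def infer_action(fixation_counts, profiles, actions):
    """Return the action whose typical gaze-fixation distribution best
    explains the observed fixation counts (multinomial log-likelihood)."""
    logp = np.asarray(fixation_counts, float) @ np.log(profiles).T
    return actions[int(np.argmax(logp))]

# Hypothetical fixation profiles over three targets
# (own workspace, partner's hands, partner's face):
profiles = np.array([[0.7, 0.2, 0.1],   # individual action
                     [0.1, 0.2, 0.7]])  # action-in-interaction
actions = ["individual", "interaction"]
```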

6.
Sci Rep ; 12(1): 5285, 2022 03 28.
Article in English | MEDLINE | ID: mdl-35347216

ABSTRACT

Autonomous mobility devices, such as transport, cleaning, and delivery robots, hold massive economic and social benefits. However, their deployment should not endanger bystanders, particularly vulnerable populations such as children and older adults, who are inherently smaller and more fragile. This study compared the risks faced by different pedestrian categories and determined those risks through crash testing in which a service robot hit an adult and a child dummy. Results of collisions at 3.1 m/s (11.1 km/h, 6.9 mph) showed risks of serious head (14%), neck (20%), and chest (50%) injuries in children, and of tibia fracture (33%) in adults. Furthermore, secondary impact analysis showed that both populations are at risk of severe head injuries, namely from falling to the ground. Our data and simulations show mitigation strategies for reducing impact injury risks below 5%, either by lowering the differential speed at impact below 1.5 m/s (5.4 km/h, 3.3 mph) or through the use of absorbent materials. The results presented herein may influence the design of controllers, sensing awareness, and assessment methods for the standardization of robots and small vehicles, as well as policymaking and regulations for the speed, design, and usage of these devices in populated areas.


Subject(s)
Craniocerebral Trauma , Pedestrians , Robotics , Accidental Falls , Accidents, Traffic/prevention & control , Aged , Child , Humans
7.
Commun Biol ; 4(1): 1406, 2021 12 16.
Article in English | MEDLINE | ID: mdl-34916587

ABSTRACT

Robotic assistance via motorized robotic arm manipulators can be of valuable assistance to individuals with upper-limb motor disabilities. Brain-computer interfaces (BCI) offer an intuitive means to control such assistive robotic manipulators. However, BCI performance may vary due to the non-stationary nature of electroencephalogram (EEG) signals. It hence cannot be used safely for controlling tasks where errors may be detrimental to the user. Avoiding obstacles is one such task. As there exist many techniques to avoid obstacles in robotics, we propose to give the robot control over obstacle avoidance and to leave to the user the choice of how the robot does so, a matter of personal preference, as some users may be more daring while others more careful. We enable users to train the robot controller to adapt the way it approaches obstacles, relying on a BCI that detects error-related potentials (ErrP), indicative of the user's error expectation of the robot's current strategy, so as to meet their preferences. Gaussian-process-based inverse reinforcement learning, in combination with the ErrP-BCI, infers the user's preference and updates the obstacle avoidance controller so as to generate personalized robot trajectories. We validate the approach in experiments with thirteen able-bodied subjects using a robotic arm that picks up, places and avoids real-life objects. Results show that the algorithm can learn the user's preference and adapt the robot behavior rapidly, using fewer than five, not necessarily optimal, demonstrations.
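The preference-inference loop can be sketched as a Bayesian update over candidate obstacle-clearance margins, where a detected ErrP counts as evidence against the margin just shown. This is a deliberately simplified stand-in for the Gaussian-process inverse reinforcement learning used in the paper; the margin grid and noise scale are assumptions.

```python
import numpy as np

def update_preference(prior, margins, shown_margin, errp_detected, sigma=0.05):
    """Bayesian update over candidate obstacle-clearance margins: a
    detected ErrP is evidence that the shown margin differs from the
    user's preferred one; its absence is evidence that it is close."""
    closeness = np.exp(-0.5 * ((margins - shown_margin) / sigma) ** 2)
    likelihood = (1.0 - closeness) if errp_detected else closeness
    posterior = prior * (likelihood + 1e-12)
    return posterior / posterior.sum()
```

Repeating this update over a handful of shown trajectories concentrates the posterior on the user's preferred margin, mirroring the few-demonstration adaptation reported above.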


Subject(s)
Learning , Reinforcement, Psychology , Robotics/methods , Adult , Humans , Male
8.
Front Robot AI ; 8: 714023, 2021.
Article in English | MEDLINE | ID: mdl-34660702

ABSTRACT

Human-object interaction is of great relevance for robots to operate in human environments. However, state-of-the-art robotic hands are far from replicating human skills. It is, therefore, essential to study how humans use their hands in order to develop similar robotic capabilities. This article presents a deep dive into hand-object interaction and human demonstrations, highlighting the main challenges in this research area and suggesting desirable future developments. To this end, the article presents a general definition of the hand-object interaction problem together with a concise review of each of the main subproblems involved, namely: sensing, perception, and learning. Furthermore, the article discusses the interplay between these subproblems and describes how their interaction in learning from demonstration contributes to the success of robot manipulation. In this way, the article provides a broad overview of the interdisciplinary approaches necessary for a robotic system to learn new manipulation skills by observing human behavior in the real world.

9.
J Neurophysiol ; 126(1): 195-212, 2021 07 01.
Article in English | MEDLINE | ID: mdl-34107225

ABSTRACT

Many daily tasks involve the collaboration of both hands. Humans dexterously adjust hand poses and modulate the forces exerted by fingers in response to task demands. Hand pose selection has been intensively studied in unimanual tasks, but little work has investigated bimanual tasks. This work examines hand pose selection in a bimanual high-precision screwing task taken from watchmaking. Twenty right-handed subjects dismounted a screw on the watch face with a screwdriver in two conditions. Results showed that although subjects used similar hand poses across steps within the same experimental condition, the hand poses differed significantly between the two conditions. In the free-base condition, subjects needed to stabilize the watch face on the table. The role distribution across hands was strongly influenced by hand dominance: the dominant hand manipulated the tool, whereas the nondominant hand controlled the additional degrees of freedom that might impair performance. In contrast, in the fixed-base condition, the watch face was stationary. Subjects used both hands even though a single hand would have been sufficient. Importantly, hand poses decoupled the control of task-demanded force and torque across hands through virtual fingers that grouped multiple fingers into functional units. This preference for a bimanual over a unimanual control strategy could be an effort to reduce variability caused by mechanical couplings and to alleviate intrinsic sensorimotor processing burdens. To afford analysis of this variety of observations, a novel graphical matrix-based representation of the distribution of hand pose combinations was developed. Atypical hand poses that are not documented in extant hand taxonomies are also included. NEW & NOTEWORTHY: We study hand pose selection in bimanual fine motor skills. To understand how roles and control variables are distributed across the hands and fingers, we compared two conditions when unscrewing a screw from a watch face.
When the watch face needed positioning, role distribution was strongly influenced by hand dominance; when the watch face was stationary, a variety of hand pose combinations emerged. Control of independent task demands is distributed either across hands or across distinct groups of fingers.


Subject(s)
Functional Laterality/physiology , Motor Skills/physiology , Movement/physiology , Adolescent , Adult , Female , Hand , Humans , Male , Psychomotor Performance/physiology , Young Adult
10.
IEEE Trans Neural Syst Rehabil Eng ; 28(6): 1471-1480, 2020 06.
Article in English | MEDLINE | ID: mdl-32386160

ABSTRACT

We propose a novel controller for powered prosthetic arms, in which fused EMG and gaze data predict the desired end-point for a full arm prosthesis, which could drive the forward motion of individual joints. We recorded EMG, gaze, and motion-tracking data during pick-and-place trials with 7 able-bodied subjects. Subjects positioned an object above a random target on a virtual interface, each completing around 600 trials. On average across all trials and subjects, gaze preceded EMG and followed a repeatable pattern that allowed for prediction. A computer vision algorithm was used to extract the initial and target fixations and estimate the target position in 2D space. Two support vector regressors (SVRs) were trained with EMG data to predict the x- and y-position of the hand; results showed that the y-estimate was significantly better than the x-estimate. The EMG and gaze predictions were fused using a Kalman-filter-based approach; the positional error when using EMG alone was significantly higher than with the fusion of EMG and gaze. The final target position root mean squared error (RMSE) decreased from 9.28 cm with an EMG-only prediction to 6.94 cm with gaze-EMG fusion. This error also increased significantly when removing some or all arm muscle signals. However, using fused EMG and gaze, there was no significant difference between predictors that included all muscles or only a subset of muscles.
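In its static, one-dimensional form, a fusion step of this kind reduces to the standard variance-weighted (Kalman measurement) update of two independent estimates. The sketch below illustrates the principle only; the numbers are not the study's data.

```python
def fuse(emg_est, emg_var, gaze_est, gaze_var):
    """Variance-weighted fusion of two independent position estimates
    (the static form of the Kalman measurement update): the gain shifts
    the EMG estimate toward the gaze estimate in proportion to how
    uncertain the EMG estimate is."""
    gain = emg_var / (emg_var + gaze_var)
    fused = emg_est + gain * (gaze_est - emg_est)
    fused_var = (1.0 - gain) * emg_var
    return fused, fused_var
```

Because the fused variance is always below the smaller input variance, combining the two modalities can only tighten the estimate, which is consistent with the RMSE reduction reported above.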


Subject(s)
Artificial Limbs , Algorithms , Arm , Electromyography , Hand , Humans
11.
Biol Cybern ; 114(1): 63-82, 2020 02.
Article in English | MEDLINE | ID: mdl-31907609

ABSTRACT

Tasks that require the cooperation of both hands and arms are common in human everyday life. Coordination helps to synchronize the motion of the upper limbs in space and time. In fine bimanual tasks, coordination also enables higher degrees of precision than could be obtained with a single hand. We studied the acquisition of bimanual fine manipulation skills in watchmaking tasks, which require the assembly of pieces at millimeter scale and demand years of training. We contrasted the motion kinematics of novice apprentices with those of professionals. Fifteen subjects, ten novices and five experts, participated in the study. We recorded the force applied on the watch face and the kinematics of fingers and arms. Results indicate that expert subjects wisely place their fingers on the tools to achieve higher dexterity. Compared to novices, experts also tend to align task-demanded force application with the optimal force transmission direction of the dominant arm. To understand the cognitive processes underpinning the different coordination patterns of expert and novice subjects, we followed the optimal control theoretical framework and hypothesized that the difference in task performance is caused by changes in the central nervous system's optimality criteria. We formulated kinematic metrics to evaluate the coordination patterns and exploited an inverse optimization approach to infer the optimality criteria. We interpret the human acquisition of novel coordination patterns as an alteration in the composition of the central nervous system's optimality criteria that accompanies the learning process.


Subject(s)
Biomechanical Phenomena/physiology , Hand/physiology , Models, Biological , Motor Skills/physiology , Psychomotor Performance/physiology , Adolescent , Adult , Female , Humans , Male , Time Factors , Young Adult
12.
Science ; 364(6446)2019 06 21.
Article in English | MEDLINE | ID: mdl-31221831

ABSTRACT

Dexterous manipulation is one of the primary goals in robotics. Robots with this capability could sort and package objects, chop vegetables, and fold clothes. As robots come to work side by side with humans, they must also become human-aware. Over the past decade, research has made strides toward these goals. Progress has come from advances in visual and haptic perception and in mechanics in the form of soft actuators that offer a natural compliance. Most notably, immense progress in machine learning has been leveraged to encapsulate models of uncertainty and to support improvements in adaptive and robust control. Open questions remain in terms of how to enable robots to deal with the most unpredictable agent of all, the human.


Subject(s)
Hand/physiology , Motor Skills/physiology , Robotics/trends , Humans
13.
Neural Netw ; 106: 194-204, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30081346

ABSTRACT

Language acquisition theories classically distinguish passive language understanding from active language production. However, recent findings show that brain areas such as Broca's region are shared in language understanding and production. Furthermore, these areas are also implicated in understanding and producing goal-oriented actions. These observations call into question the passive view of language development. In this work, we propose a cognitive developmental model of symbol acquisition, coherent with an active view of language learning. For that purpose, we introduce the concept of social babbling. In this view, symbols are learned in the same way as goal-oriented actions, in the context of specific caregiver-infant interactions. We show that this model allows a virtual agent to learn both symbolic words and gestures to refer to objects while interacting with a caregiver. We validate our model by reproducing results from studies on the influence of parental responsiveness on infants' language acquisition.


Subject(s)
Gestures , Interpersonal Relations , Language Development , Language , Broca Area/physiology , Cognition/physiology , Female , Humans , Infant , Learning/physiology , Male
14.
J Neuroeng Rehabil ; 15(1): 57, 2018 06 26.
Article in English | MEDLINE | ID: mdl-29940991

ABSTRACT

BACKGROUND: Active upper-limb prostheses are used to restore important hand functionalities, such as grasping. In conventional approaches, a pattern recognition system is trained on a number of static grasping gestures. However, training a classifier in a static position results in lower classification accuracy when performing dynamic motions, such as reach-to-grasp. We propose an electromyography-based learning approach that decodes the grasping intention during the reaching motion, leading to a faster and more natural response of the prosthesis. METHODS AND RESULTS: Eight able-bodied subjects and four individuals with transradial amputation gave informed consent and participated in our study. All subjects performed reach-to-grasp motions for five grasp types while the electromyographic (EMG) activity and the extension of the arm were recorded. We separated the reach-to-grasp motion into three phases with respect to the extension of the arm. A multivariate analysis of variance (MANOVA) on the muscular activity revealed significant differences among the motion phases. Additionally, we examined the classification performance on these phases. We compared the performance of three different pattern recognition methods: linear discriminant analysis (LDA), support vector machines (SVM) with linear and non-linear kernels, and an echo state network (ESN) approach. Our off-line analysis shows that it is possible to achieve classification performance above 80% before the end of the motion when using three grasp types. An on-line evaluation with an upper-limb prosthesis shows that including the reaching motion in the training of the classifier markedly improves classification accuracy and enables the detection of grasp intention early in the reaching motion. CONCLUSIONS: This method offers a more natural and intuitive control of prosthetic devices, as it enables controlling grasp closure in synergy with the reaching motion.
This work helps decrease the delay between the user's intention and the device's response and improves the coordination of the device with the motion of the arm.
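The phase-wise classification idea can be sketched with a nearest-centroid classifier, a deliberately simplified stand-in for the LDA, SVM and ESN classifiers compared in the paper; the synthetic EMG features below are purely illustrative.

```python
import numpy as np

def train_centroids(X, y):
    """Per-class mean feature vectors: a nearest-centroid classifier,
    used here as a simplified stand-in for LDA."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict_grasp(X, classes, centroids):
    """Assign each EMG feature vector to the nearest class centroid."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]
```

Training one such model per reach phase, rather than a single static model, is the essence of the approach: the decision boundary is allowed to move as the arm extends.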


Subject(s)
Artificial Limbs , Electromyography/methods , Hand Strength/physiology , Intention , Pattern Recognition, Automated/methods , Adult , Discriminant Analysis , Female , Hand/physiology , Humans , Male , Motion
16.
Sci Rep ; 7(1): 15023, 2017 11 03.
Article in English | MEDLINE | ID: mdl-29101325

ABSTRACT

Rapid progress in the area of humanoid robots offers tremendous possibilities for investigating and improving social competences in people with social deficits, but remains unexplored in schizophrenia. In this study, we examined the influence of social feedback elicited by a humanoid robot on motor coordination during human-robot interaction. Twenty-two schizophrenia patients and twenty-two matched healthy controls underwent a collaborative motor synchrony task with the iCub humanoid robot. Results revealed that positive social feedback had a facilitatory effect on motor coordination in the control participants compared to non-social positive feedback. This facilitatory effect was not present in schizophrenia patients, whose social-motor coordination was similarly impaired in the social and non-social feedback conditions. Furthermore, patients' cognitive flexibility impairment and antipsychotic dosing were negatively correlated with their ability to synchronize hand movements with iCub. Overall, our findings reveal that patients have marked difficulties in exploiting facial social cues elicited by a humanoid robot to modulate their motor coordination during human-robot interaction, partly accounted for by cognitive deficits and medication. This study opens new perspectives for the comprehension of social deficits in this mental disorder.


Subject(s)
Feedback , Robotics , Schizophrenic Psychology , Social Behavior , Adult , Female , Humans , Male , Middle Aged , Schizophrenia , Social Perception , Young Adult
17.
NPJ Schizophr ; 3: 8, 2017.
Article in English | MEDLINE | ID: mdl-28560254

ABSTRACT

We present novel, low-cost and non-invasive potential diagnostic biomarkers of schizophrenia. They are based on the 'mirror-game', a coordination task in which two partners are asked to mimic each other's hand movements. In particular, we use the patient's solo movement, recorded in the absence of a partner, and motion recorded during interaction with an artificial agent, a computer avatar or a humanoid robot. In order to discriminate between the patients and controls, we employ statistical learning techniques, which we apply to nonverbal synchrony and neuromotor features derived from the participants' movement data. The proposed classifier has 93% accuracy and 100% specificity. Our results provide evidence that statistical learning techniques, nonverbal movement coordination and neuromotor characteristics could form the foundation of decision support tools aiding clinicians in cases of diagnostic uncertainty.

18.
Schizophr Res ; 176(2-3): 506-513, 2016 10.
Article in English | MEDLINE | ID: mdl-27293136

ABSTRACT

BACKGROUND: The use of humanoid robots to play a therapeutic role in helping individuals with social disorders such as autism is a newly emerging field, but it remains unexplored in schizophrenia. As the ability of robots to convey emotion appears to be of fundamental importance for human-robot interactions, we aimed to evaluate how schizophrenia patients recognize positive and negative facial emotions displayed by a humanoid robot. METHODS: We included 21 schizophrenia outpatients and 17 healthy participants. In a reaction time task, they were shown photographs of human faces and of a humanoid robot (iCub) expressing either positive or negative emotions, as well as a non-social stimulus. Patients' symptomatology, mind perception, reaction time and number of correct answers were evaluated. RESULTS: Results indicated that patients and controls recognized the emotional valence of facial expressions better and faster when expressed by humans than by the robot. Participants were faster when responding to positive compared to negative human faces and, conversely, faster for negative compared to positive robot faces. Importantly, participants performed worse when they perceived iCub as being capable of experiencing things (experience subscale of the mind perception questionnaire). In schizophrenia patients, negative correlations emerged between negative symptoms and accuracy for both the robot's and the human's negative faces. CONCLUSIONS: Individuals do not respond similarly to human facial emotion and to non-anthropomorphic emotional signals. Humanoid robots have the potential to convey emotions to patients with schizophrenia, but their appearance seems of major importance for human-robot interactions.


Subject(s)
Facial Recognition , Schizophrenic Psychology , Adult , Analysis of Variance , Female , Humans , Male , Neuropsychological Tests , Photic Stimulation , Reaction Time , Robotics , Social Perception
19.
PLoS One ; 11(6): e0156874, 2016.
Article in English | MEDLINE | ID: mdl-27281341

ABSTRACT

BACKGROUND: The ability to follow one another's gaze plays an important role in our social cognition, especially when we synchronously perform tasks together. We investigate how gaze cues can improve performance in a simple coordination task (i.e., the mirror game), whereby two players mirror each other's hand motions. In this game, each player is either a leader or a follower. To study the effect of gaze in a systematic manner, the leader's role is played by a robotic avatar. We contrast two conditions, in which the avatar does or does not provide explicit gaze cues that indicate the next location of its hand. Specifically, we investigated (a) whether participants are able to exploit these gaze cues to improve their coordination, (b) how gaze cues affect action prediction and temporal coordination, and (c) whether introducing active gaze behavior for avatars makes them more realistic and human-like from the user's point of view. METHODOLOGY/PRINCIPAL FINDINGS: 43 subjects participated in 8 trials of the mirror game. Each subject performed the game in the two conditions (with and without gaze cues). In this within-subject study, the order of the conditions was randomized across participants, and the subjective assessment of the avatar's realism was evaluated with a post-hoc questionnaire. When gaze cues were provided, a quantitative assessment of synchrony between participants and the avatar revealed a significant improvement in subject reaction time (RT). This confirms our hypothesis that gaze cues improve the follower's ability to predict the avatar's action. An analysis of the pattern of frequency across the two players' hand movements reveals that gaze cues improve the overall temporal coordination between the two players. Finally, analysis of the subjective evaluations from the questionnaires reveals that, in the presence of gaze cues, participants found the avatar not only more human-like and realistic but also easier to interact with.
CONCLUSION/SIGNIFICANCE: This work confirms that people can exploit gaze cues to predict another person's movements and to better coordinate their motions with their partners, even when the partner is a computer-animated avatar. Moreover, this study contributes further evidence that implementing biological features, here task-relevant gaze cues, enables a humanoid robotic avatar to appear more human-like and thus increases the user's sense of affiliation.


Subject(s)
Cues , Discrimination, Psychological/physiology , Eye Movements/physiology , Interpersonal Relations , Movement , Robotics , Adult , Biomimetics , Computer Simulation , Computers , Female , Hand/physiology , Humans , Male , Reaction Time , Reality Testing , Self Concept , Social Behavior , Young Adult
20.
IEEE Trans Cybern ; 46(11): 2459-2472, 2016 Nov.
Article in English | MEDLINE | ID: mdl-26452294

ABSTRACT

In this paper, we present a robotic assistance scheme that allows for impedance compensation with stiffness, damping, and mass parameters for hand manipulation tasks, and we apply it to manual welding. The impedance compensation does not assume a preprogrammed hand trajectory. Rather, the human's intended hand movement is estimated in real time using a smooth Kalman filter. The movement is restricted by a compensatory virtual impedance in the directions perpendicular to the estimated direction of movement. In airbrush painting experiments, we test three sets of values for the impedance parameters, inspired by impedance measurements from manual welding. We apply the best of the tested sets for assistance in manual welding and perform welding experiments with professional and novice welders. We contrast three conditions: 1) welding with the robot's assistance; 2) welding with the robot when the robot is passive; and 3) welding without the robot. We demonstrate the effectiveness of the assistance through quantitative measures of both task performance and perceived user satisfaction. The performance of both novice and professional welders improves significantly with robotic assistance compared to welding with a passive robot. The assessment of user satisfaction shows that all novice and most professional welders appreciate the robotic assistance, as it suppresses tremor in the directions perpendicular to the welding movement.
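The core of the assistance, resisting only the velocity component perpendicular to the estimated movement direction, can be sketched as a pure-damping special case of the virtual impedance; the damping gain is an illustrative value, not one measured in the study.

```python
import numpy as np

def assistive_force(velocity, intent_dir, damping=20.0):
    """Virtual damping applied only perpendicular to the estimated
    movement direction: tremor across the weld line is resisted while
    motion along it is left free (the gain is illustrative)."""
    d = np.asarray(intent_dir, float)
    d = d / np.linalg.norm(d)
    v = np.asarray(velocity, float)
    v_perp = v - np.dot(v, d) * d  # remove the along-intent component
    return -damping * v_perp
```

Because the force vanishes for motion aligned with the intended direction, the welder feels no resistance along the seam while lateral tremor is damped, which is the effect the welders in the study reported appreciating.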
