Results 1 - 20 of 38
1.
Sensors (Basel) ; 24(2)2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38257661

ABSTRACT

This paper presents a model for generating expressive robot motions based on human expressive movements. The proposed data-driven approach combines variational autoencoders with a generative adversarial network framework to extract the essential features of human expressive motion and generate expressive robot motion accordingly. The primary objective was to transfer the underlying expressive features from human to robot motion. The input to the model consists of the robot task, defined by the robot's linear and angular velocities, and the expressive data, defined by the movement of a human body part and represented by its acceleration and angular velocity. The experimental results show that the model can effectively recognize and transfer expressive cues to the robot, producing new movements that incorporate the expressive qualities derived from the human input. Furthermore, the generated motions exhibited variability with different human inputs, highlighting the ability of the model to produce diverse outputs.
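As an illustration of the variational-autoencoder component described above, the sketch below shows a minimal VAE with the reparameterization trick in PyTorch; the layer sizes, the six-channel input (acceleration plus angular velocity), and the loss are illustrative assumptions, and the paper's GAN discriminator and robot-task conditioning are omitted.

```python
import torch
import torch.nn as nn

class MotionVAE(nn.Module):
    """Minimal VAE sketch: encodes a vector of motion features to a latent
    code and decodes it back; the reparameterization trick keeps sampling
    differentiable."""
    def __init__(self, n_features=6, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.to_mu = nn.Linear(64, latent_dim)
        self.to_logvar = nn.Linear(64, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, n_features))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(z), mu, logvar

x = torch.randn(32, 6)                       # hypothetical batch: accel (3) + gyro (3)
recon, mu, logvar = MotionVAE()(x)
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = nn.functional.mse_loss(recon, x) + kl  # reconstruction + KL terms
```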


Subject(s)
Robotics , Humans , Motion , Acceleration , Movement , Cues
2.
Sensors (Basel) ; 23(22)2023 Nov 12.
Article in English | MEDLINE | ID: mdl-38005524

ABSTRACT

This article presents the Network Empower and Prototyping Platform (NEP+), a flexible framework purposefully crafted to simplify the process of interactive application development, catering to both technical and non-technical users. The name "NEP+" encapsulates the platform's dual mission: to empower the network-related capabilities of ZeroMQ and to provide software tools and interfaces for prototyping and integration. NEP+ accomplishes this through a comprehensive quality model and an integrated software ecosystem encompassing middleware, user-friendly graphical interfaces, a command-line tool, and an accessible end-user programming interface. This article primarily focuses on presenting the proposed quality model and software architecture, illustrating how they can empower developers to craft cross-platform, accessible, and user-friendly interfaces for various applications, with a particular emphasis on robotics and the Internet of Things (IoT). Additionally, we provide practical insights into the applicability of NEP+ by briefly presenting real-world use cases in which human-centered projects have successfully used NEP+ to develop robotic systems. To further assess the suitability of NEP+ tools and interfaces for developers, we conducted a pilot study on usability and workload. The outcomes of this study highlight the user-friendly features of NEP+ tools, along with their ease of adoption and cross-platform capabilities. The novelty of NEP+ fundamentally lies in its holistic approach, acting as a bridge across diverse user groups, fostering inclusivity, and promoting collaboration.


Subject(s)
Software , User-Computer Interface , Humans , Information Systems , Pilot Projects
3.
Sensors (Basel) ; 23(12)2023 Jun 18.
Article in English | MEDLINE | ID: mdl-37420859

ABSTRACT

In recent years, there has been growing interest in the development of robotic systems for improving the quality of life of individuals of all ages. Humanoid robots in particular offer advantages in terms of friendliness and ease of use in such applications. This article proposes a novel system architecture that enables a commercial humanoid robot, specifically the Pepper robot, to walk side-by-side with a person while holding hands and to communicate by responding to the surrounding environment. To achieve this control, an observer is required to estimate the force applied to the robot. This was accomplished by comparing joint torques calculated from the dynamics model with actual current measurements. Additionally, object recognition was performed using Pepper's camera to facilitate communication in response to surrounding objects. By integrating these components, the system demonstrated its capability to achieve its intended purpose.
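A minimal sketch of the observer idea, comparing model-based joint torques with torques inferred from motor currents; the torque constants, currents, and model torques below are made-up placeholders, not Pepper's actual parameters.

```python
import numpy as np

# Hypothetical per-joint values; real use would take them from the robot's
# dynamics model and motor data sheet.
torque_constants = np.array([0.95, 0.80, 0.60])   # Nm/A, assumed
measured_currents = np.array([1.2, 0.4, -0.3])    # A, from the motor drivers
model_torques = np.array([1.05, 0.28, -0.15])     # Nm, from the dynamics model

# External-torque residual: the gap between what the motors actually exert
# and what the dynamics model predicts for the commanded motion.
measured_torques = torque_constants * measured_currents
external_torque = measured_torques - model_torques
print(external_torque)
```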


Subject(s)
Robotics , Humans , Quality of Life , Walking/physiology , Hand , Communication
4.
Sensors (Basel) ; 22(12)2022 Jun 14.
Article in English | MEDLINE | ID: mdl-35746280

ABSTRACT

Motor rehabilitation is used to improve motor control skills and, in turn, the patient's quality of life. Regular adjustments based on the effect of therapy are necessary, but these can be time-consuming for the clinician. This study proposes an efficient tool for high-dimensional data, using a deep learning approach for dimensionality reduction of hand movements recorded with the wireless remote controller of an Oculus Rift S. The resulting latent space serves as a visualization tool and is also used in a reinforcement learning (RL) algorithm that provides a decision-making framework. The collected data consist of motions drawn with the wireless remote controller in an immersive VR environment for six different motions called "Cube", "Cylinder", "Heart", "Infinity", "Sphere", and "Triangle". From these data, different artificial databases were created to simulate variations of the data. A latent space representation is created using an adversarial autoencoder (AAE), considering both unsupervised (UAAE) and semi-supervised (SSAAE) training. Each test point is then represented by a distance metric and used as a reward for two classes of Multi-Armed Bandit (MAB) algorithms, namely Boltzmann and Sibling Kalman filter exploration. The results showed that AAE models can represent high-dimensional data in a two-dimensional latent space and that MAB agents can efficiently and quickly learn the distance evolution in the latent space. Sibling Kalman filter exploration outperformed Boltzmann exploration, with an average cumulative weighted probability error of 7.9 versus 19.9 using the UAAE latent space representation and 8.0 versus 20.0 using SSAAE. In conclusion, this approach provides an effective way to visualize and track current motor control capabilities relative to a target, in order to reflect the patient's abilities in VR games in the context of dynamic difficulty adjustment (DDA).
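The following sketch illustrates the Boltzmann (softmax) exploration strategy mentioned above on a toy bandit; the reward is a random placeholder for the latent-space distance metric, and the temperature and learning rate are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_arms, temperature, lr = 6, 0.2, 0.1   # e.g. one arm per target motion
q = np.zeros(n_arms)                    # running reward estimates

def boltzmann_pick(q, tau):
    """Sample an arm with probability proportional to exp(q / tau)."""
    logits = q / tau
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(len(q), p=p)

for t in range(200):
    arm = boltzmann_pick(q, temperature)
    reward = rng.normal(loc=arm / n_arms)  # placeholder for the latent-space distance reward
    q[arm] += lr * (reward - q[arm])       # incremental estimate update
print(q.round(2))
```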


Subject(s)
Quality of Life , Virtual Reality , Hand , Humans , Movement , Upper Extremity
5.
Article in English | MEDLINE | ID: mdl-34138711

ABSTRACT

This study investigates the possibility of estimating lower-limb joint kinematics and performance indexes meaningful to physiotherapists during treadmill gait, based on data collected from a sparse placement of new Visual Inertial Measurement Units (VIMU) and the use of an Extended Kalman Filter (EKF). The proposed EKF takes advantage of the biomechanics of the human body and of the investigated task to reduce sensor inaccuracies. Two state-vector formulations, one based on a constant acceleration model and one based on Fourier series, were analyzed along with the tuning of their corresponding parameters. The constant acceleration model, due to its inherent inconsistency for human motion, required a cumbersome optimisation process and a priori knowledge of reference joint trajectories for EKF parameter tuning. The Fourier series formulation, on the other hand, could be used without a specific parameter tuning process. In both cases, the average root mean square difference and correlation coefficient between the estimated joint angles and those reconstructed with a reference stereophotogrammetric system were 3.5 deg and 0.70, respectively. Moreover, stride lengths were estimated with a normalized root mean square difference below 2% when using the forward kinematics model fed with the estimated joint angles. The widely used gait deviation index was also estimated and gave values close to 100 with both the proposed method and the reference stereophotogrammetric system. Such consistency was obtained using only three wireless and affordable VIMU located at the pelvis and both heels and tracked using two affordable RGB cameras. Being, in addition, easy to use and suitable for applications outside the laboratory, the proposed method represents a good compromise between accurate reference stereophotogrammetric systems and markerless systems, whose accuracy is still under debate.
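To illustrate the Fourier-series state formulation, the sketch below tracks the Fourier coefficients of a synthetic joint-angle signal with a linear Kalman filter; the fundamental frequency is assumed known, and the noise levels, harmonic count, and signal are illustrative placeholders, not the paper's actual VIMU pipeline.

```python
import numpy as np

# Track the Fourier coefficients [a0, a1, b1, a2, b2] of a periodic joint angle
# with a Kalman filter, treating them as a slowly drifting random walk.
f0, fs, n_harm = 1.0, 100.0, 2            # stride frequency (Hz), sample rate, harmonics
dim = 1 + 2 * n_harm
x = np.zeros(dim)
P = np.eye(dim)
Q = 1e-4 * np.eye(dim)                    # process noise (coefficient drift)
R = np.array([[0.5]])                     # measurement noise (deg^2)

t = np.arange(0, 5, 1 / fs)
true_angle = 20 + 15 * np.sin(2 * np.pi * f0 * t)                 # synthetic "knee" angle
meas = true_angle + np.random.default_rng(1).normal(0, 0.7, t.size)

est = np.empty_like(meas)
for k, tk in enumerate(t):
    # Time-varying measurement row: angle = a0 + sum(ai*cos(i*w*t) + bi*sin(i*w*t))
    H = np.concatenate(([1.0], *[[np.cos(2 * np.pi * i * f0 * tk),
                                  np.sin(2 * np.pi * i * f0 * tk)]
                                 for i in range(1, n_harm + 1)]))[None, :]
    P = P + Q                             # predict step (random-walk state)
    S = H @ P @ H.T + R
    K = P @ H.T / S
    x = x + (K * (meas[k] - H @ x)).ravel()
    P = (np.eye(dim) - K @ H) @ P
    est[k] = (H @ x).item()

print(np.sqrt(np.mean((est - true_angle) ** 2)))  # RMS error of the reconstruction
```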


Subject(s)
Acceleration , Gait , Biomechanical Phenomena , Humans , Motion , Photogrammetry
6.
Sensors (Basel) ; 21(4)2021 Feb 22.
Article in English | MEDLINE | ID: mdl-33671497

ABSTRACT

Fatigue increases the risk of injury during sports training and rehabilitation. Early detection of fatigue during exercise would help adapt training to prevent over-training and injury. This study lays the foundation for a data-driven model that automatically predicts the onset of fatigue and quantifies subsequent fatigue changes using a force plate (FP) or inertial measurement units (IMUs). The force plate and body-worn IMUs were used to capture movements associated with three exercises (squats, high knee jacks, and corkscrew toe-touches) and to estimate participant-specific fatigue levels in a continuous fashion using random forest (RF) regression and convolutional neural network (CNN) based regression models. Analysis of unseen data showed high correlation (up to 89%, 93%, and 94% for the squat, jack, and corkscrew exercises, respectively) between the predicted and self-reported fatigue levels. Predictions using force plate data achieved performance similar to those using IMU data; in both cases the best results were achieved with a convolutional neural network. The displacement of the center of pressure (COP) was found to be more strongly correlated with fatigue than other commonly used force-plate features. Bland-Altman analysis also confirmed that the predicted fatigue levels were close to the true values. These results contribute to the field of human motion recognition by proposing a deep neural network model that can detect fairly small changes in motion data in a continuous process and quantify the movement. Based on the successful findings with three different exercises, the general nature of the methodology is potentially applicable to a variety of other forms of exercise, thereby contributing to the future adaptation of exercise programs and the prevention of over-training and injury resulting from excessive fatigue.
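A minimal sketch of the random-forest regression branch using scikit-learn; the feature matrix and the continuous fatigue labels below are synthetic placeholders for the windowed IMU/force-plate features and self-reported fatigue described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for windowed sensor features (e.g. mean, std, range per axis)
# and continuous self-reported fatigue labels.
X = rng.normal(size=(600, 18))
y = np.clip(0.4 * X[:, 0] + 0.2 * X[:, 3] + rng.normal(0, 0.3, 600), -3, 3)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(np.corrcoef(pred, y_te)[0, 1])  # correlation with the "self-reported" fatigue
```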


Subject(s)
Exercise , Wearable Electronic Devices , Biomechanical Phenomena , Fatigue/diagnosis , Female , Humans , Male , Motion
7.
8.
Sensors (Basel) ; 20(5)2020 Mar 09.
Article in English | MEDLINE | ID: mdl-32182906

ABSTRACT

This article presents the novel Python, C# and JavaScript libraries of Node Primitives (NEP), a high-level, open, distributed, and component-based framework designed to enable easy development of cross-platform software architectures. NEP is built on top of low-level, high-performance and robust socket libraries (ZeroMQ and Nanomsg) and robot middleware (ROS 1 and ROS 2). This enables platform-independent development of Human-Robot Interaction (HRI) software architectures. We show minimal code examples for enabling Publish/Subscribe communication between Internet of Things (IoT) and Robotics modules. Two use cases performed outside the laboratory are briefly described in order to prove the technological feasibility of NEP for developing real-world applications. The first use case shows the potential of using NEP for creating End-User Development (EUD) interfaces for IoT-aided Human-Robot Interaction. The second use case describes a software architecture integrating state-of-the-art sensory devices, deep learning perceptual modules, and a ROS-based humanoid robot to enable IoT-aided HRI in a public space. Finally, a comparative study showed better latency results for NEP than for a popular state-of-the-art tool (ROS using rosbridge) when connecting different nodes executed on localhost and over a local area network (LAN).
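The abstract mentions minimal Publish/Subscribe examples; the sketch below shows the underlying ZeroMQ pattern with pyzmq in a single process (publisher and subscriber would normally be separate nodes). It illustrates the pattern NEP builds on rather than the NEP API itself, and the topic name, port, and message are arbitrary.

```python
import json
import time
import zmq

ctx = zmq.Context()

# Publisher side (e.g. an IoT sensor node)
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5555")                        # port chosen arbitrarily for the example

# Subscriber side (e.g. a robot behaviour module); normally a separate process
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://localhost:5555")
sub.setsockopt_string(zmq.SUBSCRIBE, "sensors")
time.sleep(0.5)                                 # let the subscription propagate

pub.send_string("sensors " + json.dumps({"temperature": 22.5}))
topic, payload = sub.recv_string().split(" ", 1)
print(topic, json.loads(payload))
```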


Subject(s)
Artificial Intelligence , Internet of Things , Robotics/methods , Software , Adult , Child , Humans , User-Computer Interface
9.
J Biomech ; 104: 109718, 2020 05 07.
Article in English | MEDLINE | ID: mdl-32151378

ABSTRACT

Assessment of gait parameters is commonly performed with high-end motion tracking systems, which limits measurements to sophisticated laboratory settings because of their high cost. Recently, the Microsoft Kinect (v2) sensor has become popular in clinical gait analysis due to its low cost, but the accuracy of its RGB-D image data stream for measuring joint kinematics and local dynamic stability remains an open question. This study examined the suitability of the Kinect (v2) RGB-D image data stream for assessing these gait parameters. Fifteen healthy participants walked on a treadmill while lower-body kinematics were measured simultaneously by a Kinect (v2) sensor and an optoelectronic tracking system. An extended Kalman filter was used to extract the lower-extremity joint angles from the Kinect, while inverse kinematics was used for the gold-standard system. For both systems, local dynamic stability was assessed using the maximal Lyapunov exponent. Sprague's validation metrics, root mean square error (RMSE) and normalized RMSE were computed to quantify the difference between the joint angle time series of the two systems, while the relative agreement between them was investigated through Pearson's correlation coefficient (pr). Fisher's exact test was performed on the maximal Lyapunov exponent to investigate data independence, while reliability was assessed using intraclass correlation coefficients. This study concludes that the RGB-D data stream of the Kinect sensor is efficient for estimating joint kinematics, but not suitable for measuring local dynamic stability.
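A small sketch of the agreement metrics named above (RMSE, range-normalized RMSE, and Pearson's r) applied to a synthetic joint-angle trace; the signals are placeholders, not Kinect data.

```python
import numpy as np

def validation_metrics(reference, estimate):
    """RMSE, range-normalized RMSE (%) and Pearson r between two joint-angle
    time series; a minimal stand-in for the comparison described above."""
    reference, estimate = np.asarray(reference), np.asarray(estimate)
    err = estimate - reference
    rmse = np.sqrt(np.mean(err ** 2))
    nrmse = 100 * rmse / (reference.max() - reference.min())
    r = np.corrcoef(reference, estimate)[0, 1]
    return rmse, nrmse, r

t = np.linspace(0, 10, 1000)
gold = 30 + 30 * np.sin(2 * np.pi * 1.0 * t)                     # synthetic knee flexion (deg)
kinect = gold + np.random.default_rng(0).normal(0, 3, t.size)    # pretend Kinect estimate
print(validation_metrics(gold, kinect))
```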


Subject(s)
Gait , Software , Biomechanical Phenomena , Humans , Reproducibility of Results , Walking
10.
J Biomech ; 103: 109684, 2020 04 16.
Article in English | MEDLINE | ID: mdl-32213290

ABSTRACT

The ability to visualize and interpret high-dimensional time-series data will be critical as wearable and other sensors are adopted in rehabilitation protocols. This study proposes a latent space representation of high-dimensional time-series data for data visualization. For that purpose, a deep learning model called the Adversarial AutoEncoder (AAE) is used to perform efficient data dimensionality reduction by considering unsupervised and semi-supervised adversarial training. Eighteen subjects were recruited for the experiment and performed two sets of exercises (upper and lower body) on the Wii Balance Board. The accuracy of the latent space representation was then evaluated on both sets of exercises separately. Data dimensionality reduction with conventional Machine Learning (ML) and supervised Deep Learning (DL) classification were also performed to compare the efficiency of the AAE approaches. The results showed that the AAE can outperform conventional ML approaches while providing results close to those of supervised DL classification. AAE-based data visualization is a promising approach for monitoring a subject's movements and detecting adverse events or similarity with previous data, providing an intuitive way to follow the patient's progress and potentially useful information for rehabilitation tracking.


Subject(s)
Human Activities , Machine Learning , Humans
11.
PLoS One ; 15(2): e0228869, 2020.
Article in English | MEDLINE | ID: mdl-32074124

ABSTRACT

Human activity recognition is an important and difficult topic to study because of the large variability between repetitions of a task by the same subject and between subjects. This work is motivated by the need for time-series signal classification with robust validation and test approaches. This study proposes to classify 60 signs from American Sign Language based on data provided by the LeapMotion sensor, using different conventional machine learning and deep learning models, including a model called DeepConvLSTM that integrates convolutional and recurrent layers with Long Short-Term Memory cells. A kinematic model of the right and left forearm/hand/fingers/thumb is proposed, as well as a simple data augmentation technique to improve the generalization of the neural networks. DeepConvLSTM and the convolutional neural network demonstrated the highest accuracies, 91.1 (3.8)% and 89.3 (4.0)% respectively, compared to the recurrent neural network and the multi-layer perceptron. Integrating convolutional layers in a deep learning model therefore seems to be an appropriate solution for sign language recognition from depth-sensor data.
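A minimal PyTorch sketch of a DeepConvLSTM-style architecture (1-D convolutions followed by an LSTM); the channel count, window length, and layer sizes are assumptions for illustration, not the configuration used in the study.

```python
import torch
import torch.nn as nn

class ConvLSTMClassifier(nn.Module):
    """Sketch of a DeepConvLSTM-style network: 1-D convolutions extract local
    motion features, an LSTM models their temporal evolution, and a linear
    layer scores the 60 sign classes."""
    def __init__(self, n_channels=30, n_classes=60):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, 128, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                            # x: (batch, channels, time)
        feats = self.conv(x)                         # (batch, 64, time)
        out, _ = self.lstm(feats.transpose(1, 2))    # (batch, time, 128)
        return self.head(out[:, -1])                 # classify from the last time step

logits = ConvLSTMClassifier()(torch.randn(8, 30, 120))  # 8 windows of 120 frames
print(logits.shape)                                      # torch.Size([8, 60])
```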


Subject(s)
Neural Networks, Computer , Sign Language , Algorithms , Biomechanical Phenomena , Deep Learning , Gestures , Hand , Humans , Machine Learning , Male , Movement
13.
PLoS One ; 14(6): e0217129, 2019.
Article in English | MEDLINE | ID: mdl-31226108

ABSTRACT

Object handovers between humans are common in our daily life, but the mechanisms underlying handovers are still largely unclear. A good understanding of these mechanisms is important not only for a better understanding of human social behaviors, but also for the prospect of an automatized society in which machines will need to perform similar object exchanges with humans. In this paper, we analyzed how humans determine the location of object transfer during handovers: whether they can predict the preferred handover location of a partner, how this prediction varies in 3D space, and how large a role vision plays in the process. For this we developed a paradigm that allows us to compare handovers by humans with and without on-line visual feedback. Our results show that humans have the surprising ability to modulate their handover location according to partners they have just met, such that the resulting handover errors are on the order of a few centimeters, even in the absence of vision. The handover errors are smallest along the axis joining the two partners, suggesting a limited role for visual feedback in this direction. Finally, we show that the handover locations are explained very well by a linear model considering the heights, genders and social dominance of the two partners, and the distance between them. We developed separate models for the behavior of 'givers' and 'receivers' and discuss how the behavior of the same individual changes depending on their role in the handover.
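A sketch of fitting a linear model of the kind described above with scikit-learn; the predictor encoding, the synthetic data, and the target (handover distance from the giver) are hypothetical stand-ins for the study's measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical encoding of the predictors named in the abstract: giver/receiver
# height (m), gender (0/1), a dominance-difference score, and interpersonal distance (m).
X = np.column_stack([
    rng.normal(1.72, 0.08, 200),    # giver height
    rng.normal(1.70, 0.08, 200),    # receiver height
    rng.integers(0, 2, 200),        # giver gender
    rng.integers(0, 2, 200),        # receiver gender
    rng.normal(0, 1, 200),          # social-dominance difference
    rng.normal(1.1, 0.1, 200),      # distance between partners
])
# Synthetic target: handover distance from the giver (coefficients are made up).
y = 0.5 * X[:, 5] + 0.02 * X[:, 4] + rng.normal(0, 0.02, 200)

model = LinearRegression().fit(X, y)
print(model.coef_.round(3), round(model.score(X, y), 3))
```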


Subject(s)
Feedback , Interpersonal Relations , Movement , Visual Perception , Female , Humans , Male , Social Dominance , Surveys and Questionnaires , Young Adult
14.
Comput Methods Biomech Biomed Engin ; 21(14): 740-749, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30369247

ABSTRACT

Developing tools to predict the force capabilities of the human limbs through the Force Feasible Set (FFS) may be of great interest for robot-assisted rehabilitation and digital human modelling for ergonomics. Indeed, such tools could help refine rehabilitation programs for active participation during exercise therapy and help prevent musculoskeletal disorders. In this framework, the purpose of this study is to use artificial neural networks (ANN) to predict the FFS of the upper limb based on joint centre Cartesian positions and anthropometric data. Seventeen right upper-limb musculoskeletal models based on individual anthropometric data were created. For each musculoskeletal model, the FFS was computed for 8428 different postures. For any combination of force direction and joint positions, the ANNs can predict the FFS with high coefficients of determination (R2 > 0.89) between the true and predicted data.
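A brief scikit-learn sketch of the ANN regression idea, mapping joint-centre positions, anthropometrics, and a force direction to a feasible force magnitude; all data below are synthetic and the network size is an arbitrary assumption.

```python
import numpy as np
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical inputs: joint-centre coordinates (9), anthropometrics (2) and a
# unit force direction (3); the target is the feasible force magnitude along it.
X = rng.normal(size=(2000, 14))
y = 50 + 10 * np.tanh(X[:, :3].sum(axis=1)) + rng.normal(0, 2, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)
print(r2_score(y_te, ann.predict(X_te)))   # coefficient of determination on held-out data
```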


Subject(s)
Models, Biological , Musculoskeletal System/metabolism , Neural Networks, Computer , Arm/physiology , Biomechanical Phenomena , Humans , Regression Analysis
15.
J Biomech ; 78: 166-171, 2018 09 10.
Article in English | MEDLINE | ID: mdl-30097268

ABSTRACT

The aim of this study was to use a Recurrent Neural Network (RNN) to predict the orientation and amplitude of the force applied during the push phase of manual wheelchair propulsion. Trunk and right upper-limb kinematics were recorded with an optoelectronic device (Qualisys) and the force applied on the handrim was recorded with an instrumented wheel (SMARTWheel®). Data acquisitions were performed at 60/80/100/120/140% of the freely chosen frequency under submaximal and maximal conditions. The final database consisted of d = 5708 push phases. The input data were the trunk and right upper-limb kinematics (joint angles, angular velocities and accelerations) and anthropometric data (height, weight, segment lengths), and the output data were the orientation and amplitude of the applied force. A 70/15/15 ratio was used to train, validate and test the RNN (dtrain = 3996, dvalidation = 856 and dtest = 856). The angle and amplitude errors between the measured and predicted forces were assessed on dtest. Results showed that for most of the push phase (∼70%), the force direction prediction errors were below 12°. The mean absolute amplitude errors were below 8 N and the mean absolute amplitude percentage errors were below 20% for most of the push phase (∼80%).


Subject(s)
Mechanical Phenomena , Motion , Neural Networks, Computer , Wheelchairs , Acceleration , Adult , Arm/physiology , Biomechanical Phenomena , Equipment Design , Female , Humans , Male , Torso/physiology
16.
IEEE Trans Neural Syst Rehabil Eng ; 26(2): 407-418, 2018 02.
Article in English | MEDLINE | ID: mdl-28141526

ABSTRACT

This paper proposes a method to enable the use of non-intrusive, small, wearable, and wireless sensors to estimate the pose of the lower body during gait and other periodic motions and to extract objective performance measures useful for physiotherapy. The Rhythmic Extended Kalman Filter (Rhythmic-EKF) algorithm is developed to estimate the pose, learn an individualized model of the periodic movement over time, and use the learned model to improve pose estimation. The proposed approach learns a canonical dynamical system model of the movement during online observation, which is used to accurately model the acceleration during pose estimation. The canonical dynamical system models the motion as a periodic signal. The estimated phase and frequency of the motion also allow the proposed approach to segment the motion into repetitions and extract useful features, such as gait symmetry, step length, and mean joint movement and variance. The algorithm is shown to outperform the extended Kalman filter in simulation, on healthy participant data, and on stroke patient data. For the healthy participant marching dataset, the Rhythmic-EKF improves joint acceleration and velocity estimates over the regular EKF by 40% and 37%, respectively, estimates joint angles with 2.4° root mean squared error, and segments the motion into repetitions with 96% accuracy.
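A small sketch of the repetition-segmentation step: once a phase signal has been estimated (here a synthetic phase, not the Rhythmic-EKF output), repetitions can be split at the phase wrap points.

```python
import numpy as np

def segment_repetitions(phase):
    """Split a motion into repetitions at the points where an estimated phase
    signal wraps from ~2*pi back to 0 (a simplified stand-in for the
    segmentation step described above)."""
    wraps = np.where(np.diff(phase) < -np.pi)[0] + 1
    return np.split(np.arange(phase.size), wraps)

t = np.linspace(0, 6, 600)
phase = np.mod(2 * np.pi * 1.2 * t, 2 * np.pi)   # synthetic phase of a 1.2 Hz movement
reps = segment_repetitions(phase)
print(len(reps), [len(r) for r in reps[:3]])     # number of repetitions, first few lengths
```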


Subject(s)
Algorithms , Biomechanical Phenomena , Gait Disorders, Neurologic/rehabilitation , Acceleration , Adult , Aged , Computer Simulation , Female , Healthy Volunteers , Humans , Joints/physiology , Male , Posture , Reproducibility of Results , Stroke Rehabilitation , Young Adult
17.
J Biomech ; 64: 85-92, 2017 11 07.
Article in English | MEDLINE | ID: mdl-28947159

ABSTRACT

This paper presents a method allowing a simple and efficient sensitivity analysis of the dynamic parameters of a complex whole-body human model. The proposed method is based on the ground reaction and joint moment regressor matrices, initially developed in robotic system identification theory, which appear in the equations of motion of the human body. The regressor matrices are linear with respect to the segment inertial parameters, allowing simple sensitivity analysis methods to be used. The sensitivity analysis method was applied to gait dynamics and kinematics data from nine subjects, using a 15-segment 3D model of the locomotor apparatus. According to the proposed sensitivity indices, 76 of the 150 segment inertial parameters of the mechanical model were considered not influential for gait. The main findings were that the segment masses were influential and that, with the exception of the trunk, the moments of inertia were not influential for the computation of the ground reaction forces and moments and the joint moments. The same method also shows numerically that at least 90% of the lower-limb joint moments during the stance phase can be estimated from force-plate and kinematics data alone, without knowing any of the segment inertial parameters.
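A toy sketch of the underlying idea: with joint moments written as tau = Y @ phi (regressor matrix times segment inertial parameters), a simple sensitivity index for each parameter can be taken as its column's average contribution relative to the total signal. Y, phi, and the threshold below are random placeholders, not the paper's actual indices or gait data.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_params = 500, 12
Y = rng.normal(size=(n_samples, n_params))   # placeholder regressor matrix over a trial
phi = rng.normal(size=n_params)              # placeholder segment inertial parameters

tau = Y @ phi                                # joint moments, linear in the parameters
contribution = np.abs(Y * phi)               # per-sample contribution of each parameter
sensitivity = contribution.mean(axis=0) / np.abs(tau).mean()
not_influential = np.where(sensitivity < 0.05)[0]   # threshold chosen arbitrarily
print(sensitivity.round(3), not_influential)
```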


Subject(s)
Joints/physiology , Mechanical Phenomena , Biomechanical Phenomena , Female , Gait/physiology , Humans , Lower Extremity/physiology , Male , Models, Biological
18.
PLoS One ; 12(6): e0180011, 2017.
Article in English | MEDLINE | ID: mdl-28662090

ABSTRACT

By proposing efficient methods for Body Segment Inertial Parameter (BSIP) estimation and validating them with a force plate, it is possible to improve the inverse dynamics computations that are necessary in multiple research areas. To date, a variety of studies have been conducted to improve BSIP estimation, but to our knowledge a real validation has never been completely successful. In this paper, we propose a validation method using both kinematic and kinetic parameters (contact forces) gathered from an optical motion capture system and a force plate, respectively. To compare BSIPs, we used the measured contact forces (force plate) as the ground truth and reconstructed the displacements of the Center of Pressure (COP) using inverse dynamics from two different estimation techniques. Only minor differences were seen when comparing the estimated segment masses. Their influence on the COP computation, however, is large, and the results show clearly distinguishable patterns in the COP movements. Improving BSIP techniques is crucial, as deviations in the estimates can result in large errors. This method could be used as a tool to validate BSIP estimation techniques. An advantage of this approach is that it facilitates the comparison between BSIP estimation methods and, more specifically, shows the accuracy of those parameters.
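A minimal sketch of the comparison described above: the centre of pressure is obtained from the force-plate vertical force and moments and compared with a (here synthetic) COP reconstructed from inverse dynamics. The sign convention assumes the plate origin lies at its surface, and all values are placeholders.

```python
import numpy as np

def cop_from_plate(Fz, Mx, My):
    """Centre of pressure on the plate surface from the vertical force and the
    moments about the plate origin (origin assumed at the plate surface)."""
    return -My / Fz, Mx / Fz

rng = np.random.default_rng(0)
Fz = 700 + rng.normal(0, 5, 300)                     # vertical force (N)
Mx, My = rng.normal(0, 20, 300), rng.normal(0, 20, 300)

cop_plate = np.column_stack(cop_from_plate(Fz, Mx, My))          # measured ground truth
cop_model = cop_plate + rng.normal(0, 0.004, cop_plate.shape)    # pretend BSIP-based estimate
rmse = np.sqrt(np.mean(np.sum((cop_model - cop_plate) ** 2, axis=1)))
print(f"COP RMSE: {rmse * 1000:.1f} mm")
```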


Subject(s)
Movement , Biomechanical Phenomena , Humans , Models, Biological , Pressure
19.
J Biomech ; 62: 148-155, 2017 09 06.
Article in English | MEDLINE | ID: mdl-28551098

ABSTRACT

To reduce the impact of the soft tissue artefact (STA) on the estimate of skeletal movement obtained from stereophotogrammetric and skin-marker data, multi-body kinematics optimisation (MKO) and extended Kalman filters (EKF) have been proposed. This paper assessed the feasibility and efficiency of these methods when they embed a mathematical model of the STA and simultaneously estimate the ankle, knee and hip joint kinematics and the model parameters. The STA model used provides an estimate of the STA affecting the marker cluster located on a body segment as a function of the kinematics of the adjacent joints. The MKO and the EKF were implemented with and without the STA model. To assess these methods, intra-cortical pin and skin markers located on the thigh, shank, and foot of three subjects and tracked during the stance phase of running were used. Embedding the STA model in the MKO and the EKF reduced the average RMS of marker tracking from 12.6 to 1.6 mm and from 4.3 to 1.9 mm, respectively, showing that a trial-specific calibration of the STA model is feasible. Nevertheless, with the STA model embedded in the MKO, the RMS difference between the estimated joint kinematics and the reference determined from the pin markers slightly increased (from 2.0 to 2.1 deg). On the contrary, when the STA model was embedded in the EKF, this RMS difference was slightly reduced (from 2.0 to 1.7 deg), showing a better potential of this method to attenuate STA effects and improve the accuracy of the joint kinematics estimates.


Subject(s)
Ankle Joint/physiology , Artifacts , Hip Joint/physiology , Knee Joint/physiology , Models, Biological , Running/physiology , Adult , Biomechanical Phenomena , Calibration , Humans , Male , Photogrammetry , Posture
20.
J Biomech ; 57: 131-135, 2017 05 24.
Article in English | MEDLINE | ID: mdl-28413069

ABSTRACT

In order to improve the evaluation of the force feasible set (FFS) of the upper limb, which is of great interest in the biomechanics field, this study proposes two additional techniques. The first is based on the identification of the maximal isometric force (MIF) of Hill-based muscle models from sEMG and isometric force measurements at the hand. The second considers muscle cocontraction. The FFS was computed with an upper-limb musculoskeletal model in three different cases. The first (M1) considered binary muscular activation and a simple MIF scaling method based on the subject's weight and muscle lengths. The second (M2) introduced cocontraction factors determined from sEMG. The third (M3) considered both the cocontraction factors and the MIF identification. Finally, M1, M2 and M3 were compared with end-effector force measurements. M3 outperformed the two other methods in FFS prediction, demonstrating the validity and usefulness of MIF identification and the consideration of cocontraction factors.


Subject(s)
Muscle, Skeletal/physiology , Upper Extremity/physiology , Adult , Biomechanical Phenomena , Electromyography , Female , Humans , Isometric Contraction/physiology , Male , Young Adult