Results 1 - 6 of 6
1.
Article in English | MEDLINE | ID: mdl-38722725

ABSTRACT

Utilization of hand-tracking cameras, such as Leap, for hand rehabilitation and functional assessments is an innovative approach to providing affordable alternatives for people with disabilities. However, prior to deploying these commercially available tools, a thorough evaluation of their performance for disabled populations is necessary. In this study, we provide an in-depth analysis of the accuracy of Leap's hand-tracking feature for individuals with and without upper-body disabilities for common dynamic tasks used in rehabilitation. Leap is compared against motion capture with conventional techniques such as signal correlations, mean absolute errors, and digit segment length estimation. We also propose the use of dimensionality reduction techniques, such as Principal Component Analysis (PCA), to capture the complex, high-dimensional signal spaces of the hand. We found that Leap's hand-tracking performance did not differ between individuals with and without disabilities, yielding average signal correlations between 0.7 and 0.9. Both low and high mean absolute errors (10-80 mm) were observed across participants. Overall, Leap did well with general hand posture tracking, with the largest errors associated with the tracking of the index finger. Leap's hand model was found to be most inaccurate in the proximal digit segment, underestimating digit lengths with errors as high as 18 mm. Using PCA to quantify differences between the high-dimensional spaces of Leap and motion capture showed that high correlations between latent space projections were associated with high accuracy in the original signal space. These results point to the potential of low-dimensional representations of complex hand movements to support hand rehabilitation and assessment.
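The comparison metrics named above (signal correlation, mean absolute error, and projections onto principal components) can be sketched in NumPy. This is an illustrative sketch only: the function names and the SVD-based PCA are assumptions, not the study's actual analysis pipeline.

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation between two 1-D tracking signals."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mean_abs_error(a, b):
    """Mean absolute error in the signals' native units (e.g. mm)."""
    return float(np.mean(np.abs(a - b)))

def pca_project(X, n_components=2):
    """Project (samples x channels) data onto its top principal components."""
    Xc = X - X.mean(axis=0)
    # SVD-based PCA: rows of Vt are the principal axes
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```

Latent-space correlations like those reported above would then be `pearson_r` applied column-wise to `pca_project` outputs from the two tracking systems.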


Subject(s)
Hand, Principal Component Analysis, Video Recording, Humans, Hand/physiology, Male, Female, Adult, Disabled Persons/rehabilitation, Middle Aged, Reproducibility of Results, Young Adult, Algorithms, Movement/physiology
2.
Front Bioeng Biotechnol ; 11: 1134135, 2023.
Article in English | MEDLINE | ID: mdl-37434753

ABSTRACT

In the past, linear dimensionality-reduction techniques, such as Principal Component Analysis, have been used to simplify the myoelectric control of high-dimensional prosthetic hands. Nonetheless, their nonlinear counterparts, such as Autoencoders, have been shown to be more effective at compressing and reconstructing complex hand kinematics data. As a result, they have the potential to be a more accurate tool for prosthetic hand control. Here, we present a novel Autoencoder-based controller, in which the user is able to control a high-dimensional (17D) virtual hand via a low-dimensional (2D) space. We assess the efficacy of the controller via a validation experiment with four unimpaired participants. All participants significantly decreased the time it took them to match a target gesture with the virtual hand, to an average of 6.9 s, and three of the four participants significantly improved path efficiency. Our results suggest that the Autoencoder-based controller has the potential to be used to manipulate high-dimensional hand systems via a myoelectric interface with higher accuracy than PCA; however, more exploration is needed into the most effective ways of learning such a controller.
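The decoder half of such a controller maps a 2-D control point to a 17-D hand posture. The following is a minimal sketch under loud assumptions: a single tanh layer stands in for the trained autoencoder's decoder, and the weights are random placeholders rather than learned parameters; only the 2-D input and 17-D output dimensions come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder decoder weights; in the study these would come from an
# autoencoder trained on recorded hand kinematics, not random values.
W_dec = rng.normal(size=(17, 2))  # latent (2D) -> joint angles (17D)
b_dec = np.zeros(17)

def decode(z):
    """Map a 2-D control point to a 17-D hand posture.
    tanh keeps outputs bounded; the actual nonlinearity is an assumption."""
    return np.tanh(W_dec @ z + b_dec)

# A myoelectric (or mouse) interface would supply z in real time:
posture = decode(np.array([0.3, -0.8]))
```

Each frame, the interface updates `z` and the virtual hand renders `decode(z)`, so the user only ever navigates the 2-D space.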

3.
Front Bioeng Biotechnol ; 11: 1139405, 2023.
Article in English | MEDLINE | ID: mdl-37214310

ABSTRACT

Dimensionality reduction techniques have proven useful in simplifying complex hand kinematics. They may allow for a low-dimensional kinematic or myoelectric interface to be used to control a high-dimensional hand. Controlling a high-dimensional hand, however, is difficult to learn since the relationship between the low-dimensional controls and the high-dimensional system can be hard to perceive. In this manuscript, we explore how training practices that make this relationship more explicit can aid learning. We outline three studies that explore different factors which affect learning of an autoencoder-based controller, in which a user is able to operate a high-dimensional virtual hand via a low-dimensional control space. We compare computer mouse and myoelectric control as one factor contributing to learning difficulty. We also compare training paradigms in which the dimensionality of the training task matched or did not match the true dimensionality of the low-dimensional controller (2D in both cases). The training paradigms were a) a full-dimensional task, in which the user was unaware of the underlying controller dimensionality, b) an implicit 2D training, which allowed the user to practice on a simple 2D reaching task before attempting the full-dimensional one, without establishing an explicit connection between the two, and c) an explicit 2D training, during which the user was able to observe the relationship between their 2D movements and the higher-dimensional hand. We found that operating a myoelectric interface did not pose a major challenge to learning the low-dimensional controller and was not the main reason for poor performance. Implicit 2D training was found to be as good as, but not better than, training directly on the high-dimensional hand. What truly aided the user's ability to learn the controller was the 2D training that established an explicit connection between the low-dimensional control space and the high-dimensional hand movements.

4.
ASSETS ; 20232023 Oct.
Article in English | MEDLINE | ID: mdl-38618626

ABSTRACT

Always-on, upper-body input from sensors like accelerometers, infrared cameras, and electromyography holds promise to enable accessible gesture input for people with upper-body motor impairments. When these sensors are distributed across the person's body, they can enable the use of varied body parts and gestures for device interaction. Personalized upper-body gestures that enable input from diverse body parts including the head, neck, shoulders, arms, hands and fingers, and match the abilities of each user, could be useful for ensuring that gesture systems are accessible. In this work, we characterize the personalized gesture sets designed by 25 participants with upper-body motor impairments and develop design recommendations for upper-body personalized gesture interfaces. We found that the personalized gesture sets that participants designed were highly ability-specific. Even within a specific type of disability, there were significant differences in what muscles participants used to perform upper-body gestures, with some predominantly using shoulder and upper-arm muscles, and others solely using their finger muscles. Eight percent of gestures that participants designed were performed with their head, neck, and shoulders, rather than their hands and fingers, demonstrating the importance of tracking the whole upper body. To combat fatigue, participants performed 51% of gestures with their hands resting on or barely coming off of their armrest, highlighting the importance of using sensing mechanisms that are agnostic to the location and orientation of the body. Lastly, participants activated their muscles but did not visibly move during 10% of the gestures, demonstrating the need for sensors that can detect muscle activation without movement. Both inertial measurement unit (IMU) and electromyography (EMG) wearable sensors proved promising for differentiating between personalized gestures. Personalized upper-body gesture interfaces that take advantage of each person's abilities are critical for enabling accessible upper-body gestures for people with upper-body motor impairments.

5.
Front Bioeng Biotechnol ; 9: 724626, 2021.
Article in English | MEDLINE | ID: mdl-34722477

ABSTRACT

We seek to use dimensionality reduction to simplify the difficult task of controlling a lower limb prosthesis. Though many techniques for dimensionality reduction have been described, it is not clear which is the most appropriate for human gait data. In this study, we first compare how Principal Component Analysis (PCA) and an autoencoder on poses (Pose-AE) transform human kinematics data during flat ground and stair walking. Second, we compare the performance of PCA, Pose-AE and a new autoencoder trained on full human movement trajectories (Move-AE) in order to capture the time varying properties of gait. We compare these methods for both movement classification and identifying the individual. These are key capabilities for identifying useful data representations for prosthetic control. We first find that Pose-AE outperforms PCA on dimensionality reduction by achieving a higher Variance Accounted For (VAF) across flat ground walking data, stairs data, and undirected natural movements. We then find in our second task that Move-AE significantly outperforms both PCA and Pose-AE on movement classification and individual identification tasks. This suggests the autoencoder is more suitable than PCA for dimensionality reduction of human gait, and can be used to encode useful representations of entire movements to facilitate prosthetic control tasks.
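Variance Accounted For, the reconstruction metric used above to compare Pose-AE against PCA, can be sketched as follows. The pooled-variance formulation and the SVD-based PCA reconstruction are assumptions for illustration, not necessarily the exact definitions used in the study.

```python
import numpy as np

def vaf(X, X_hat):
    """Variance Accounted For: 1 - var(residual) / var(signal),
    pooled over all kinematic channels."""
    return 1.0 - np.var(X - X_hat) / np.var(X)

def pca_reconstruct(X, k):
    """Reconstruct (samples x channels) data from its top-k principal
    components; an autoencoder would replace this linear projection."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T @ Vt[:k] + mu
```

Comparing `vaf(X, pca_reconstruct(X, k))` against the VAF of an autoencoder's reconstruction at the same latent dimension `k` is the kind of head-to-head evaluation the abstract describes.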

6.
Article in English | MEDLINE | ID: mdl-32432105

ABSTRACT

The purpose of this study was to find a parsimonious representation of hand kinematics data that could facilitate prosthetic hand control. Principal Component Analysis (PCA) and a non-linear Autoencoder Network (nAEN) were compared in their effectiveness at capturing the essential characteristics of a wide spectrum of hand gestures and actions. Performance of the two methods was compared on (a) the ability to accurately reconstruct hand kinematic data from a latent manifold of reduced dimension, (b) variance distribution across latent dimensions, and (c) the separability of hand movements in compressed and reconstructed representations derived using a linear classifier. The nAEN exhibited higher performance than PCA in its ability to more accurately reconstruct hand kinematic data from a latent manifold of reduced dimension. Whereas PCA accounted for 78% of input data variance with two latent dimensions, nAEN accounted for 94%. In addition, the nAEN latent manifold was spanned by coordinates with a more uniform share of signal variance compared to PCA. Lastly, the nAEN was able to produce a manifold of more separable movements than PCA, as different tasks, when reconstructed, were more distinguishable by a linear classifier, SoftMax regression. It is concluded that non-linear dimensionality reduction may offer a more effective platform than linear methods to control prosthetic hands.
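The SoftMax-regression separability test described above can be sketched in plain NumPy: fit a multinomial logistic classifier on latent (or reconstructed) representations and measure how distinguishable the movement classes are. The gradient-descent loop and its hyperparameters are illustrative assumptions, not the study's exact training setup.

```python
import numpy as np

def softmax(Z):
    """Row-wise softmax with a max-shift for numerical stability."""
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def train_softmax(X, y, n_classes, lr=0.1, steps=500):
    """Multinomial logistic (SoftMax) regression via batch gradient descent.
    X: (samples x features) latent codes; y: integer class labels."""
    W = np.zeros((X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                 # one-hot labels
    for _ in range(steps):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - Y) / len(X)     # cross-entropy gradient
    return W

def predict(W, X):
    return np.argmax(X @ W, axis=1)
```

Higher classification accuracy on one method's latent codes than the other's is then the operational meaning of "more separable movements."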
