Results 1 - 20 of 23
1.
Article in English | MEDLINE | ID: mdl-37831558

ABSTRACT

People with unilateral transtibial amputation generally exhibit asymmetric gait, likely due to inadequate prosthetic ankle function. This results in compensatory behavior, leading to long-term musculoskeletal impairments (e.g., osteoarthritis in the joints of the intact limb). Powered prostheses can better emulate biological ankles; however, existing control methods rely heavily on non-disabled reference data, require extensive tuning by experts, and cannot adapt to each user's unique gait patterns. This work directly addresses these limitations with a personalized, data-driven control strategy. Our controller uses a virtual setpoint trajectory within an impedance-inspired formula to adjust the dynamics of the robotic ankle-foot prosthesis as a function of stance phase. A single sensor measuring thigh motion is used to estimate the gait phase in real time. The virtual setpoint trajectory is modified via a data-driven iterative learning strategy aimed at optimizing ankle angle symmetry. The controller was experimentally evaluated on two people with transtibial amputation. The control scheme increased ankle angle symmetry between the two limbs by 24.4% compared to the passive condition. In addition, the symmetry controller significantly increased peak prosthetic ankle power output at push-off by 0.52 W/kg and significantly reduced biomechanical risk factors associated with osteoarthritis (i.e., knee and hip abduction moments) in the intact limb. This research demonstrates the benefits of personalized, data-driven symmetry controllers for robotic ankle-foot prostheses.
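
The abstract does not give the exact control law; a minimal sketch of an impedance-style ankle controller with a phase-indexed virtual setpoint, plus a simple iterative-learning update driven by an ankle-symmetry error, might look like the following. All gains, knot counts, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative impedance-inspired ankle torque law: torque is driven by the
# error between a phase-indexed virtual setpoint and the measured ankle angle.
# Gains and the setpoint parameterization are placeholders, not published values.
K_STIFF = 4.0    # N*m/(kg*deg), assumed stiffness gain
B_DAMP = 0.05    # N*m*s/(kg*deg), assumed damping gain
N_KNOTS = 10     # setpoint trajectory sampled at 10 stance-phase knots

setpoint = np.zeros(N_KNOTS)  # virtual setpoint trajectory over stance phase (deg)

def ankle_torque(phase, ankle_angle, ankle_velocity):
    """Impedance-style torque as a function of estimated stance phase (0..1)."""
    theta_set = np.interp(phase, np.linspace(0.0, 1.0, N_KNOTS), setpoint)
    return K_STIFF * (theta_set - ankle_angle) - B_DAMP * ankle_velocity

def update_setpoint(prosthetic_angle, intact_angle, learning_rate=0.1):
    """Iterative learning step: nudge the setpoint toward the intact-side
    ankle trajectory (both resampled to N_KNOTS stance-phase samples)."""
    global setpoint
    symmetry_error = intact_angle - prosthetic_angle   # per-knot angle asymmetry
    setpoint = setpoint + learning_rate * symmetry_error
```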


Subject(s)
Amputees , Artificial Limbs , Joint Prosthesis , Osteoarthritis , Robotic Surgical Procedures , Humans , Ankle , Gait , Biomechanical Phenomena , Walking
2.
Sensors (Basel) ; 23(18)2023 Sep 06.
Article in English | MEDLINE | ID: mdl-37765769

ABSTRACT

Inverse dynamics from motion capture is the most common technique for acquiring biomechanical kinetic data. However, this method is time-intensive, limited to a gait laboratory setting, and requires a large array of reflective markers to be attached to the body. A practical alternative must be developed to provide biomechanical information to high-bandwidth prosthesis control systems to enable predictive controllers. In this study, we applied deep learning to build dynamical system models capable of accurately estimating and predicting prosthetic ankle torque from inverse dynamics using only six input signals. We performed a hyperparameter optimization protocol that automatically selected the model architectures and learning parameters that resulted in the most accurate predictions. We show that the trained deep neural networks predict ankle torques one sample into the future with an average RMSE of 0.04 ± 0.02 Nm/kg, corresponding to 2.9 ± 1.6% of the ankle torque's dynamic range. Comparatively, a manually derived analytical regression model predicted ankle torques with an RMSE of 0.35 ± 0.53 Nm/kg, corresponding to 26.6 ± 40.9% of the ankle torque's dynamic range. In addition, the deep neural networks predicted ankle torque values half a gait cycle into the future with an average decrease in performance of 1.7% of the ankle torque's dynamic range when compared to the one-sample-ahead prediction. This application of deep learning provides an avenue towards the development of predictive control systems for powered limbs aimed at optimizing prosthetic ankle torque.
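
The error metric reported here, RMSE expressed as a percentage of the torque's dynamic range, is straightforward to reproduce; a small sketch follows, with hypothetical arrays standing in for the measured and predicted torque traces.

```python
import numpy as np

def normalized_rmse(y_true, y_pred):
    """RMSE in the units of y_true, plus RMSE as a percentage of the
    dynamic range (max - min) of the reference signal."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    dynamic_range = y_true.max() - y_true.min()
    return rmse, 100.0 * rmse / dynamic_range

# Hypothetical mass-normalized ankle torque traces (Nm/kg)
measured = 0.7 * np.sin(np.linspace(0, 2 * np.pi, 200))
predicted = measured + np.random.normal(0.0, 0.03, size=200)
rmse, pct = normalized_rmse(measured, predicted)
print(f"RMSE = {rmse:.3f} Nm/kg ({pct:.1f}% of dynamic range)")
```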


Subject(s)
Ankle , Deep Learning , Torque , Biomechanical Phenomena , Ankle Joint , Gait , Walking
3.
Front Bioeng Biotechnol ; 11: 1134135, 2023.
Article in English | MEDLINE | ID: mdl-37434753

ABSTRACT

In the past, linear dimensionality-reduction techniques, such as Principal Component Analysis, have been used to simplify the myoelectric control of high-dimensional prosthetic hands. Nonetheless, their nonlinear counterparts, such as Autoencoders, have been shown to be more effective at compressing and reconstructing complex hand kinematics data. As a result, they have the potential to be a more accurate tool for prosthetic hand control. Here, we present a novel Autoencoder-based controller in which the user controls a high-dimensional (17D) virtual hand via a low-dimensional (2D) space. We assess the efficacy of the controller in a validation experiment with four unimpaired participants. All participants significantly decreased the time needed to match a target gesture with the virtual hand, to an average of 6.9 s, and three out of four participants significantly improved path efficiency. Our results suggest that the Autoencoder-based controller has the potential to manipulate high-dimensional hand systems via a myoelectric interface with higher accuracy than PCA; however, more exploration is needed into the most effective ways of learning such a controller.
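
The abstract does not specify the network; as a rough sketch under assumed layer sizes, an autoencoder whose 2D latent space serves as the control interface for a 17-DOF virtual hand could be structured as follows. At control time only the decoder is exercised.

```python
import torch
import torch.nn as nn

class HandAutoencoder(nn.Module):
    """Compresses 17-DOF hand kinematics to a 2D latent space and back.
    Layer widths are illustrative assumptions, not the published architecture."""
    def __init__(self, n_joints=17, latent_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_joints, 32), nn.Tanh(),
            nn.Linear(32, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.Tanh(),
            nn.Linear(32, n_joints),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# At control time, only the decoder is used: a 2D myoelectric (or cursor)
# command is mapped to a full 17D hand posture.
model = HandAutoencoder()
command_2d = torch.tensor([[0.3, -0.8]])
hand_pose_17d = model.decoder(command_2d)
```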

4.
Article in English | MEDLINE | ID: mdl-37155400

ABSTRACT

Rich variations in gait arise from several attributes of the individual and the environment, such as age, athleticism, terrain, speed, personal "style", and mood. The effects of these attributes can be hard to quantify explicitly but are relatively straightforward to sample. We seek to generate gait that expresses these attributes, creating synthetic gait samples that exemplify a custom mix of attributes. This is difficult to perform manually and is generally restricted to simple, human-interpretable, handcrafted rules. In this manuscript, we present neural network architectures that learn representations of hard-to-quantify attributes from data and generate gait trajectories by composing multiple desirable attributes. We demonstrate this method for the two most commonly desired attribute classes: individual style and walking speed. We show that two methods, cost function design and latent space regularization, can be used individually or combined. We also show two uses of machine learning classifiers that recognize individuals and speeds. First, they can serve as quantitative measures of success: if a synthetic gait fools a classifier, it is considered a good example of that class. Second, we show that classifiers can be used in the latent-space regularizations and cost functions to improve training beyond a typical squared-error cost.
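
The abstract names two mechanisms, cost-function design and latent-space regularization, both optionally driven by pretrained classifiers. A schematic of how such a combined training loss could be assembled is sketched below; the classifier and regressor modules, loss weights, and variable names are all hypothetical.

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()
ce = nn.CrossEntropyLoss()

def gait_generation_loss(generated, target, latent,
                         style_classifier, speed_regressor,
                         style_label, speed_target,
                         w_style=0.1, w_speed=0.1):
    """Reconstruction cost plus classifier-based terms that push the
    synthetic gait toward a desired individual style and walking speed.
    The classifier/regressor modules and weights are placeholders."""
    recon = mse(generated, target)                        # squared-error cost
    style = ce(style_classifier(generated), style_label)  # 'fool the classifier'
    speed = mse(speed_regressor(latent), speed_target)    # latent-space regularizer
    return recon + w_style * style + w_speed * speed
```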


Subject(s)
Deep Learning , Humans , Gait , Walking , Walking Speed , Neural Networks, Computer
5.
Front Bioeng Biotechnol ; 11: 1139405, 2023.
Article in English | MEDLINE | ID: mdl-37214310

ABSTRACT

Dimensionality reduction techniques have proven useful in simplifying complex hand kinematics. They may allow a low-dimensional kinematic or myoelectric interface to be used to control a high-dimensional hand. Controlling a high-dimensional hand, however, is difficult to learn, since the relationship between the low-dimensional controls and the high-dimensional system can be hard to perceive. In this manuscript, we explore how training practices that make this relationship more explicit can aid learning. We outline three studies that explore different factors affecting learning of an autoencoder-based controller, in which a user operates a high-dimensional virtual hand via a low-dimensional control space. We compare computer mouse and myoelectric control as one factor contributing to learning difficulty. We also compare training paradigms in which the dimensionality of the training task matched or did not match the true dimensionality of the low-dimensional controller (both 2D). The training paradigms were a) a full-dimensional task, in which the user was unaware of the underlying controller dimensionality, b) implicit 2D training, which allowed the user to practice on a simple 2D reaching task before attempting the full-dimensional one, without establishing an explicit connection between the two, and c) explicit 2D training, during which the user could observe the relationship between their 2D movements and the higher-dimensional hand. We found that operating a myoelectric interface did not pose a major challenge to learning the low-dimensional controller and was not the main reason for poor performance. Implicit 2D training was as good as, but not better than, training directly on the high-dimensional hand. What truly aided the user's ability to learn the controller was the 2D training that established an explicit connection between the low-dimensional control space and the high-dimensional hand movements.

6.
Foot (Edinb) ; 56: 101989, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36905794

ABSTRACT

BACKGROUND: Plantar ulceration is a serious complication of diabetes. However, the mechanism of injury initiating ulceration remains unclear. The unique structure of the plantar soft tissue includes superficial and deep layers of adipocytes contained in septal chambers; however, the size of these chambers has not been quantified in diabetic or non-diabetic tissue. Computer-aided methods can be leveraged to guide microstructural measurements and to detect differences with disease status. METHODS: Adipose chambers in whole slide images of diabetic and non-diabetic plantar soft tissue were segmented with a pre-trained U-Net, and the area, perimeter, and minimum and maximum diameter of the adipose chambers were measured. Whole slide images were classified as diabetic or non-diabetic using the Axial-DeepLab network, and the attention layer was overlaid on the input image for interpretation. RESULTS: Non-diabetic deep chambers were 90%, 41%, 34%, and 39% larger than superficial chambers in area (26,954 ± 2428 µm² vs 14,157 ± 1153 µm²), maximum diameter (277 ± 13 µm vs 197 ± 8 µm), minimum diameter (140 ± 6 µm vs 104 ± 4 µm), and perimeter (405 ± 19 µm vs 291 ± 12 µm), respectively (p < 0.001). However, there was no significant difference in these parameters in diabetic specimens (area 18,695 ± 2576 µm² vs 16,627 ± 130 µm², maximum diameter 221 ± 16 µm vs 210 ± 14 µm, minimum diameter 121 ± 8 µm vs 114 ± 7 µm, perimeter 341 ± 24 µm vs 320 ± 21 µm). Between diabetic and non-diabetic chambers, only the maximum diameter of the deep chambers differed (221 ± 16 µm vs 277 ± 13 µm). The attention network achieved 82% accuracy on validation, but the attention resolution was too coarse to identify meaningful additional measurements. CONCLUSIONS: Adipose chamber size differences may provide a basis for plantar soft tissue mechanical changes with diabetes. Attention networks are promising tools for classification, but additional care is required when designing networks to identify novel features. DATA AVAILABILITY: All images, analysis code, data, and/or other resources required to replicate this work are available from the corresponding author upon reasonable request.
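
The per-chamber measurements named above (area, perimeter, maximum and minimum diameter) can be computed from any binary segmentation mask, such as a thresholded U-Net output. A minimal sketch using scikit-image is shown below; the fitted-ellipse axis lengths are used as a stand-in for the max/min chamber diameters, and the pixel-to-micron scale factor is an assumption.

```python
import numpy as np
from skimage.measure import label, regionprops

def chamber_morphometry(mask, um_per_px=1.0):
    """Per-chamber area, perimeter, and max/min diameter from a binary
    segmentation mask. Ellipse axis lengths approximate the max/min
    diameters; um_per_px must come from the slide's actual pixel size."""
    labeled = label(mask.astype(bool))
    rows = []
    for r in regionprops(labeled):
        rows.append({
            "area_um2": r.area * um_per_px ** 2,
            "perimeter_um": r.perimeter * um_per_px,
            "max_diameter_um": r.major_axis_length * um_per_px,
            "min_diameter_um": r.minor_axis_length * um_per_px,
        })
    return rows
```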


Subject(s)
Diabetes Mellitus , Diabetic Foot , Humans
7.
Sci Data ; 10(1): 26, 2023 01 12.
Article in English | MEDLINE | ID: mdl-36635316

ABSTRACT

In this manuscript, we describe a unique dataset of human locomotion captured in a variety of out-of-the-laboratory environments using Inertial Measurement Unit (IMU)-based wearable motion capture. The data contain full-body kinematics for walking with and without stops, stair ambulation, obstacle course navigation, dynamic movements intended to test agility, and negotiating common obstacles in public spaces such as chairs. The dataset contains 24.2 total hours of movement data from a college student population with an approximately equal split of males and females. In addition, for one of the activities, we captured the egocentric field of view and gaze of the subjects using an eye tracker. Finally, we provide some examples of applications using the dataset and discuss how it might open possibilities for new studies in human gait analysis.


Subject(s)
Gait , Walking , Female , Humans , Male , Biomechanical Phenomena , Locomotion
8.
Front Bioeng Biotechnol ; 10: 918939, 2022.
Article in English | MEDLINE | ID: mdl-36312532

ABSTRACT

Gait complexity is widely used to understand risk factors for injury, rehabilitation, the performance of assistive devices, and other matters of clinical interest. We analyze the complexity of out-of-the-lab locomotion activities via measures that have previously been used in the gait analysis literature, as well as measures from other domains of data analysis. We categorize these broadly as quantifying either the intrinsic dimensionality, the variability, or the regularity, periodicity, or self-similarity of the data from a nonlinear dynamical systems perspective. We perform this analysis on a novel full-body motion capture dataset collected out of the lab in a variety of indoor environments. This is a unique dataset with a large amount (over 24 h in total) of data from participants behaving without low-level instructions in out-of-the-lab indoor environments. We show that reasonable complexity measures can yield surprising, and even profoundly contradictory, results. We suggest that future complexity analyses can use these guidelines to be more specific and intentional about what aspect of complexity a quantitative measure expresses. This will become more important as wearable motion capture technology increasingly allows comparison of ecologically relevant behavior with lab-based measurements.
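
One of the complexity notions mentioned here, intrinsic dimensionality, is commonly estimated as the number of principal components needed to reach a variance threshold. A minimal numpy sketch follows; the 95% threshold and the synthetic data are arbitrary choices for illustration, not the paper's.

```python
import numpy as np

def intrinsic_dimensionality(X, var_threshold=0.95):
    """Number of principal components needed to explain `var_threshold`
    of the variance in motion data X (samples x joint channels)."""
    Xc = X - X.mean(axis=0)
    _, s, _ = np.linalg.svd(Xc, full_matrices=False)
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(explained, var_threshold) + 1)

# Hypothetical motion-capture matrix: 10,000 frames x 60 joint-angle channels
X = np.random.randn(10000, 60) @ np.random.randn(60, 60)
print(intrinsic_dimensionality(X))
```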

9.
Article in English | MEDLINE | ID: mdl-35245198

ABSTRACT

We seek to predict knee and ankle motion using wearable sensors. These predictions could serve as target trajectories for a lower limb prosthesis. In this manuscript, we investigate the use of egocentric vision to improve performance over kinematic wearable motion capture alone. We present an out-of-the-lab dataset of 23 healthy subjects navigating public classrooms, a large atrium, and stairs, totaling almost 12 hours of recording. The prediction task is difficult because the movements include avoiding obstacles and other people, idiosyncratic movements such as traversing doors, and individual choices in selecting the future path. We demonstrate that using vision improves the quality of the predicted knee and ankle trajectories, especially in congested spaces and when the visual environment provides information that does not appear simply in the movements of the body. Overall, including vision yields 7.9% and 7.0% improvements in the root mean squared error of knee and ankle angle predictions, respectively. The improvements in Pearson correlation coefficient for knee and ankle predictions are 1.5% and 12.3%, respectively. We discuss particular moments where vision greatly improved, or failed to improve, the prediction performance. We also find that the benefits of vision can be enhanced with more data. Lastly, we discuss the challenges of continuous estimation of gait in natural, out-of-the-lab datasets.


Subject(s)
Optic Flow , Ankle Joint , Biomechanical Phenomena , Gait , Humans , Knee Joint , Lower Extremity , Walking
10.
Article in English | MEDLINE | ID: mdl-35100118

ABSTRACT

Many upper-limb prostheses lack proper wrist rotation functionality, causing users to adopt poor compensatory strategies that can lead to overuse injuries or device abandonment. In this study, we investigate the validity of creating and implementing a data-driven predictive control strategy in object grasping tasks performed in virtual reality. We propose using gaze-centered vision to predict the wrist rotations of a user and conduct a user study to investigate the impact of this predictive control. We demonstrate that using this vision-based predictive system decreases compensatory movement in the shoulder as well as task completion time. We discuss the cases in which the virtual prosthesis with the predictive model did and did not yield a physical improvement across various arm movements. We also discuss the cognitive value of implementing such predictive control strategies in prosthetic controllers. We find that gaze-centered vision provides information about the user's intent when reaching for objects and that the performance of prosthetic hands improves greatly when wrist prediction is implemented. Lastly, we address the limitations of this study, both in the study itself and in any future physical implementations.


Subject(s)
Artificial Limbs , Deep Learning , Eye-Tracking Technology , Humans , Wrist , Wrist Joint
11.
Front Bioeng Biotechnol ; 10: 1034672, 2022.
Article in English | MEDLINE | ID: mdl-36588953

ABSTRACT

We anticipate wide adoption of wrist and forearm electromyographic (EMG) interface devices worn daily by the same user. This presents unique challenges that are not yet well addressed in the EMG literature, such as adapting to session-specific differences while learning a longer-term model of the specific user. In this manuscript we present two contributions toward this goal. First, we present the MiSDIREKt (Multi-Session Dynamic Interaction Recordings of EMG and Kinematics) dataset, acquired using a novel hardware design. A single participant performed four kinds of hand interaction tasks in virtual reality for 43 distinct sessions over 12 days, totaling 814 min. Second, we analyze these data using a non-linear encoder-decoder for dimensionality reduction in gesture classification. We find that an architecture which recalibrates with a small amount of single-session data achieves an accuracy of 79.5% on that session, as opposed to architectures which learn solely from the single session (49.6%) or only from the prior training data (55.2%).
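
The recalibration idea described here, keeping a model trained on prior sessions and adapting only part of it with a small amount of new-session data, can be sketched roughly as below. The architecture, layer sizes, and choice of which layers to adapt are assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

class GestureNet(nn.Module):
    """Encoder plus classifier head for EMG gesture recognition; sizes are illustrative."""
    def __init__(self, n_channels=16, latent_dim=8, n_gestures=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_channels, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.classifier = nn.Linear(latent_dim, n_gestures)

    def forward(self, x):
        return self.classifier(self.encoder(x))

def recalibrate(model, calib_x, calib_y, epochs=20, lr=1e-3):
    """Adapt only the classifier head to a new session's small calibration
    set, keeping the cross-session encoder frozen."""
    for p in model.encoder.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(model.classifier.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(calib_x), calib_y)
        loss.backward()
        opt.step()
    return model
```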

12.
Gait Posture ; 92: 383-393, 2022 02.
Article in English | MEDLINE | ID: mdl-34933229

ABSTRACT

BACKGROUND: Stair descent analysis has typically been limited to laboratory staircases of 4 or 5 steps. To date there has been no report of gait parameters during unconstrained stair descent outside of the laboratory, and few motion capture datasets are publicly available. RESEARCH QUESTION: We aim to collect a dataset and perform gait analysis for stair descent outside of the laboratory. We aim to measure basic kinematic and kinetic gait parameters and foot placement behavior. METHODS: We present a public stair descent dataset from 101 unimpaired participants aged 18-35 descending an unconstrained 13-step staircase, collected using wearable sensors. The dataset consists of kinematics (full-body joint angles and positions), kinetics (plantar normal forces, acceleration), and foot placement for 30,609 steps. RESULTS: We report the lower limb joint angle ranges (30° and 8° for hip flexion and extension, 85° and -11° for knee flexion and extension, and 31° and 28° for ankle dorsi- and plantar-flexion). The self-selected speed was 0.79 ± 0.16 m/s, with a cycle duration of 0.97 ± 0.18 s. Mean foot overhang as a percentage of foot length was 17.07 ± 6.66%, and we calculate that foot size explains only 6% of heel placement variation but 79% of toe placement variation. We also find a minor but significant asymmetry between left and right maximum hip flexion angle, though all other measured parameters were symmetrical. SIGNIFICANCE: This is the first quantitative observation of gait data from a large number (n = 101) of participants descending an unconstrained staircase outside of a laboratory. This study enables analysis of gait characteristics, including self-selected walking speed and foot placement, to better understand typical stair gait behavior. The dataset is a public resource for understanding typical stair descent.
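
The "foot size explains only 6% of heel placement variation, but 79% of toe placement variation" result is a coefficient of determination from a simple regression. A small sketch with hypothetical arrays standing in for the measured quantities:

```python
import numpy as np

def variance_explained(x, y):
    """R^2 of a simple linear regression of y on x: the fraction of
    variation in y (e.g., toe placement) explained by x (e.g., foot length)."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - residuals.var() / y.var()

# Hypothetical data: foot length (m) vs toe placement distance (m) per step
foot_length = np.random.normal(0.26, 0.015, size=500)
toe_placement = 0.6 * foot_length + np.random.normal(0.0, 0.005, size=500)
print(f"R^2 = {variance_explained(foot_length, toe_placement):.2f}")
```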


Subject(s)
Knee Joint , Walking , Adolescent , Adult , Ankle Joint , Biomechanical Phenomena , Gait , Humans , Young Adult
13.
Front Bioeng Biotechnol ; 9: 724626, 2021.
Article in English | MEDLINE | ID: mdl-34722477

ABSTRACT

We seek to use dimensionality reduction to simplify the difficult task of controlling a lower limb prosthesis. Though many techniques for dimensionality reduction have been described, it is not clear which is the most appropriate for human gait data. In this study, we first compare how Principal Component Analysis (PCA) and an autoencoder on poses (Pose-AE) transform human kinematics data during flat ground and stair walking. Second, we compare the performance of PCA, Pose-AE and a new autoencoder trained on full human movement trajectories (Move-AE) in order to capture the time varying properties of gait. We compare these methods for both movement classification and identifying the individual. These are key capabilities for identifying useful data representations for prosthetic control. We first find that Pose-AE outperforms PCA on dimensionality reduction by achieving a higher Variance Accounted For (VAF) across flat ground walking data, stairs data, and undirected natural movements. We then find in our second task that Move-AE significantly outperforms both PCA and Pose-AE on movement classification and individual identification tasks. This suggests the autoencoder is more suitable than PCA for dimensionality reduction of human gait, and can be used to encode useful representations of entire movements to facilitate prosthetic control tasks.
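
Variance Accounted For (VAF) is the comparison metric for the first task; a minimal sketch of how it can be computed for any reconstruction, whether from PCA or an autoencoder, is shown below with a placeholder kinematics matrix and a 2-component PCA reconstruction as the example.

```python
import numpy as np

def vaf(original, reconstructed):
    """Variance Accounted For: 1 - var(residual) / var(signal),
    computed per channel and then averaged."""
    residual = original - reconstructed
    per_channel = 1.0 - residual.var(axis=0) / original.var(axis=0)
    return per_channel.mean()

# Example: PCA reconstruction of a (frames x joints) kinematics matrix
X = np.random.randn(5000, 30)
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                                            # latent dimensionality
X_hat = Xc @ Vt[:k].T @ Vt[:k] + X.mean(axis=0)  # rank-k reconstruction
print(f"VAF with {k} components: {vaf(X, X_hat):.3f}")
```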

14.
Comput Biol Med ; 134: 104491, 2021 07.
Article in English | MEDLINE | ID: mdl-34090017

ABSTRACT

Histomorphological measurements can be used to identify microstructural changes related to disease pathomechanics, in particular, plantar soft tissue changes with diabetes. However, these measurements are time-consuming and susceptible to sampling and human measurement error. We investigated two approaches to automate segmentation of plantar soft tissue stained with modified Hart's stain for elastin with the eventual goal of subsequent morphological analysis. The first approach used multiple texture- and color-based features with tile-wise classification. The second approach used a convolutional neural network modified from the U-Net architecture with fewer channel dimensions and additional downsampling steps. A hybrid color and texture feature, Fourier reduced histogram of uniform improved opponent color local binary patterns (f-IOCLBP), yielded the best feature-based segmentation, but still performed 3.6% worse on average than the modified U-Net. The texture-based method was sensitive to changes in illumination and stain intensity, and segmentation errors were often in large regions of single tissues or at tissue boundaries. The U-Net was able to segment small, few-pixel tissue boundaries, and errors were often trivial to clean up with post-processing. A U-Net approach outperforms hand-crafted features for segmentation of plantar soft tissue stained with modified Hart's stain for elastin.


Subject(s)
Deep Learning , Humans , Image Processing, Computer-Assisted , Neural Networks, Computer
15.
PLoS One ; 15(6): e0234354, 2020.
Article in English | MEDLINE | ID: mdl-32530942

ABSTRACT

Soft robot fabrication by casting liquid elastomer often requires multiple casting steps or skillful manual labor. We present a novel soft robotic fabrication technique, negshell casting (negative-space eggshell casting), which reduces the steps required for fabrication by introducing 3D-printed thin-walled cores that are meant to be left in place rather than removed later in the fabrication process. Negshell casting uses two types of cores: sacrificial cores (negshell cores) and structural cores. Negshell cores are designed to be broken into small pieces that have little effect on the mechanical structure of the soft robot and can be used for creating fluidic channels and bellows for actuation. Structural cores, on the other hand, are not meant to be broken and serve to increase the stiffness of soft robotic structures, such as endoskeletons. We describe the design and fabrication concepts for both types of cores and report the mechanical characterization of the cores embedded in silicone rubber specimens. We also present an example use case of negshell casting for a single-joint soft robotic finger, along with an experiment demonstrating how negshell casting concepts can aid in force transmission. Finally, we present real-world usage of negshell casting in a 6-degree-of-freedom three-finger soft robotic gripper and a demonstration of the gripper in a robotic pick-and-place task. A companion website with further details about fabrication (as well as an introduction to molding and casting for those unfamiliar with the terms), engineering file downloads, and experimental data is provided at https://negshell.github.io/.


Subject(s)
Computer-Aided Design/instrumentation , Printing, Three-Dimensional/instrumentation , Robotics/instrumentation , Biomimetics/instrumentation , Elasticity , Elastomers , Equipment Design , Fingers/physiology , Hand Strength/physiology , Humans , Mechanical Phenomena , Silicone Elastomers
16.
J Hand Ther ; 33(2): 254-262, 2020.
Article in English | MEDLINE | ID: mdl-32482376

ABSTRACT

INTRODUCTION: Affordable virtual reality (VR) technology is now widely available. Billions of dollars are currently being invested into improving and mass-producing VR and augmented reality products. PURPOSE OF THE STUDY: The purpose of the present study is to explore the potential of immersive VR to make physical therapy/occupational therapy less painful and more fun, and to help motivate patients to cooperate with their hand therapist. DISCUSSION: The following topics are covered: a) psychological influences on pain perception, b) the logic of how VR analgesia works, c) evidence for reduction of acute procedural pain during hand therapy, d) recent major advances in VR technology, and e) future directions, including immersive VR embodiment therapy for phantom limb (chronic) pain. CONCLUSION: VR hand therapy has potential for a wide range of patient populations needing hand therapy, including acute pain and potentially chronic pain patients. Being in VR reduces patients' pain, making it easier for them to move their hand and fingers during therapy, and gamified VR can help motivate patients to perform therapeutic hand exercises while making therapy more fun. In addition, VR camera-based hand tracking technology may be used to help therapists monitor how well patients are performing their hand therapy exercises and to quantify whether adherence to treatment increases long-term functionality. Additional research and development into using VR as a tool for hand therapists is recommended for both acute pain and persistent pain patient populations.


Subject(s)
Acute Pain/therapy , Chronic Pain/therapy , Exercise Therapy , Hand , Video Games , Virtual Reality , Acute Pain/etiology , Analgesia , Chronic Pain/etiology , Humans
17.
Article in English | MEDLINE | ID: mdl-32432105

ABSTRACT

The purpose of this study was to find a parsimonious representation of hand kinematics data that could facilitate prosthetic hand control. Principal Component Analysis (PCA) and a non-linear Autoencoder Network (nAEN) were compared in their effectiveness at capturing the essential characteristics of a wide spectrum of hand gestures and actions. The performance of the two methods was compared on (a) the ability to accurately reconstruct hand kinematic data from a latent manifold of reduced dimension, (b) the variance distribution across latent dimensions, and (c) the separability of hand movements in the compressed and reconstructed representations, assessed using a linear classifier. The nAEN exhibited higher performance than PCA in its ability to accurately reconstruct hand kinematic data from a latent manifold of reduced dimension. Whereas PCA accounted for 78% of input data variance with two latent dimensions, the nAEN accounted for 94%. In addition, the nAEN latent manifold was spanned by coordinates with a more uniform share of signal variance than PCA. Lastly, the nAEN produced a manifold of more separable movements than PCA: different tasks, when reconstructed, were more distinguishable by a linear classifier (SoftMax regression). It is concluded that non-linear dimensionality reduction may offer a more effective platform than linear methods for controlling prosthetic hands.
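
The separability comparison relies on a softmax-regression (multinomial logistic) classifier applied to the compressed representations. Roughly, that procedure could look like the sketch below, using scikit-learn's logistic regression as the softmax classifier; the latent-code arrays and labels are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def separability_score(latent_codes, movement_labels):
    """Accuracy of a softmax-regression classifier on latent codes:
    higher held-out accuracy indicates more separable movements."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        latent_codes, movement_labels, test_size=0.25, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

# Placeholder 2D latent codes from PCA and from the nAEN, with task labels
pca_codes, naen_codes = np.random.randn(600, 2), np.random.randn(600, 2)
labels = np.random.randint(0, 10, size=600)
print(separability_score(pca_codes, labels), separability_score(naen_codes, labels))
```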

18.
IEEE Int Conf Rehabil Robot ; 2019: 796-802, 2019 06.
Article in English | MEDLINE | ID: mdl-31374728

ABSTRACT

Prosthetic limb controllers employ discrete modes for well-defined scenarios such as stair ascent, stair descent, or ramps. General human locomotion, however, is a continuous motion, fluidly adapting to the environment and not always categorizable into modes. It exhibits strong inter-joint coordination, and the movement of a single joint can be largely predicted from the movement of the rest of the body. We show that, using body motion from the intact limbs and trunk, a reference trajectory can be generated for a prosthetic joint at every instant in time. Previously we demonstrated that a Recurrent Neural Network (RNN) can predict ankle angle trajectories for structured activities. In this study, we apply a similar network to more unstructured activities that are hard to categorize into modes. A wearable motion capture suit was worn by 10 healthy subjects to record full-body kinematics during obstacle avoidance, sidestepping, weaving through cones, and backward walking. We used an RNN to predict right ankle kinematics from the other joint kinematics. The model was robust to subject-specific variations such as walking speed and step length. We present the performance for different activities and for different subsets of the sensors. This system demonstrates the potential for generating a reference trajectory for a prosthesis or other rehabilitation robot without explicit featurization of terrains or gait events.
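
A rough sketch of the kind of recurrent model described, mapping the kinematics of the other joints to a reference trajectory for a single prosthetic joint at every time step, is given below. The GRU choice, layer sizes, and channel count are assumptions; the abstract specifies only that an RNN was used.

```python
import torch
import torch.nn as nn

class AnklePredictor(nn.Module):
    """Maps full-body joint kinematics (excluding the target joint) to a
    reference ankle angle at every time step."""
    def __init__(self, n_inputs=40, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_inputs, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, joint_sequence):
        # joint_sequence: (batch, time, n_inputs)
        h, _ = self.rnn(joint_sequence)
        return self.out(h).squeeze(-1)       # (batch, time) ankle angle

model = AnklePredictor()
body_kinematics = torch.randn(8, 200, 40)    # 8 trials, 200 frames, 40 channels
predicted_ankle = model(body_kinematics)     # reference trajectory per frame
```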


Subject(s)
Artificial Limbs , Adult , Ankle/physiology , Biomechanical Phenomena , Female , Humans , Male
19.
IEEE Int Conf Rehabil Robot ; 2019: 1215-1220, 2019 06.
Article in English | MEDLINE | ID: mdl-31374795

ABSTRACT

Lower-limb amputees demonstrate decreased performance in stair ambulation compared to their intact-limb counterparts. An estimated 21% of amputees can navigate stairs without a handrail; almost 33% do not use stairs at all. The absence of tactile sensation on the bottom of the foot, creating uncertainty in foot placement, may be overcome by integrating sensory feedback into prosthesis design. Here we describe the design and evaluation of a haptic feedback system worn on the thigh to provide vibrotactile cues of foot placement with respect to stair steps. Tactor discrimination and foot placement awareness tests were performed to analyze system efficacy. Control participants wearing ski boots (N=10) and below-knee amputees (N=2) could discriminate individual tactor vibrations with 95.4% and 90.1% accuracy, respectively. The use of vibrotactile feedback increased accuracy in reporting foot placement by 15% and 17.5%, respectively. These results suggest that using vibrotactile arrays for sensory feedback may improve stair descent performance in lower-limb amputees.


Subject(s)
Amputees , Lower Extremity/physiology , Artificial Limbs , Feedback, Sensory/physiology , Female , Foot/physiology , Gait/physiology , Humans , Male , Prosthesis Design
20.
IEEE Trans Haptics ; 12(1): 78-86, 2019.
Article in English | MEDLINE | ID: mdl-30047898

ABSTRACT

The two-point discrimination test is a well-known means of assessing tactile sensory acuity. In this paper, we describe a similar test wherein one point of stimulation is tactile and the other is visual. We propose this test to measure how vision and touch combine to form a unified touch percept in the presence of sensory conflict. We perform the test on the leg above the knee, using virtual reality to render the visual "touch" while the real touch is applied with a monofilament. The results are compared with those obtained from a traditional two-point discrimination test performed at the same location. This information can be used to understand how to allocate tactors in a haptic feedback array as well as to better inform the design of future virtual reality accessories.


Subject(s)
Discrimination, Psychological/physiology , Touch Perception/physiology , Visual Perception/physiology , Adult , Female , Humans , Male , Young Adult