ABSTRACT
This article is the second in a two-part series analyzing human arm and hand motion during a wide range of unstructured tasks. In this work, we track the hands of healthy individuals as they perform a variety of activities of daily living (ADLs), represented in three ways decoupled from hand orientation: end-point locations of the hand trajectory, whole path trajectories of the hand, and straight-line paths generated from the start and end points of the hand. These data are examined by a clustering procedure to reduce the wide range of hand use to a smaller representative set. Hand orientations are subsequently analyzed for the end-point location clustering results, and subsets of orientations are identified in three reference frames: global, torso, and forearm. The data-driven methods used include dynamic time warping (DTW), DTW barycenter averaging (DBA), and agglomerative hierarchical clustering with Ward's linkage. Analysis of the end-point locations, path trajectories, and straight-line path trajectories identified 5, 5, and 7 ADL task categories, respectively, while hand orientation analysis identified up to 4 subsets of orientations for each task location, discretized and classified to the facets of a rhombicuboctahedron. Together, these results provide insight into hand usage in daily life and inform implementations in prosthetic or robotic devices using sequential control.
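The orientation discretization described above can be sketched as a nearest-facet lookup. The following is a minimal, hypothetical example (not the authors' code): it builds the 26 outward unit normals of a rhombicuboctahedron and assigns an orientation vector to the facet whose normal it most closely aligns with.

```python
import numpy as np
from itertools import product

def rhombicuboctahedron_normals():
    """Unit normals of the 26 facets: 6 axial squares,
    12 edge squares, and 8 corner triangles."""
    normals = [v for v in product((-1, 0, 1), repeat=3) if v != (0, 0, 0)]
    n = np.array(normals, dtype=float)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def classify_orientation(direction, normals):
    """Index of the facet whose normal is closest (largest dot
    product) to the given orientation vector."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    return int(np.argmax(normals @ d))

normals = rhombicuboctahedron_normals()
idx = classify_orientation([0.99, 0.01, 0.0], normals)
```

Because alignment is scored by the dot product, classification reduces to a single matrix-vector product and an argmax over the 26 facets.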
Subjects
Activities of Daily Living, Movement, Cluster Analysis, Hand, Humans, Motion (Physics)

ABSTRACT
This paper is the first in a two-part series analyzing human arm and hand motion during a wide range of unstructured tasks. The wide variety of motions performed by the human arm during daily tasks makes it desirable to find representative subsets that reduce the dimensionality of these movements for a variety of applications, including the design and control of robotic and prosthetic devices. This paper presents a novel method, and the results of an extensive human subjects study, for obtaining representative arm joint angle trajectories that span naturalistic motions during Activities of Daily Living (ADLs). In particular, we seek to identify sets of useful motion trajectories of the upper limb that are functions of a single variable, allowing, for instance, an entire prosthetic or robotic arm to be controlled with a single input from a user, along with a means to select between motions for different tasks. Data-driven approaches are used to discover clusters and representative motion averages for the 3-degree-of-freedom (DOF) wrist, 4-DOF elbow-wrist, and 7-DOF full-arm systems. The proposed method makes use of well-known techniques such as dynamic time warping (DTW) to obtain a divergence measure between motion segments, Ward's distance criterion to build hierarchical trees, and functional principal component analysis (fPCA) to evaluate cluster variability. The emerging clusters group the recorded motions primarily by hand start and end location for the full-arm system, by motion direction for the wrist-only system, and by a blend of these two qualities for the elbow-wrist system.
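The clustering pipeline named above (DTW divergences fed into Ward-linkage hierarchical clustering) can be sketched in a few lines. This is an illustrative reconstruction on toy 1-D "joint angle" segments, not the study's code; note also that SciPy documents Ward's method for Euclidean distances, so applying it to a precomputed DTW matrix is a pragmatic approximation.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def dtw_distance(a, b):
    """Classic dynamic-time-warping divergence between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy motion segments (hypothetical stand-ins for recorded joint angles).
segments = [np.sin(np.linspace(0, 3, 50)),
            np.sin(np.linspace(0, 3, 60)),   # same shape, warped in time
            np.cos(np.linspace(0, 3, 50)),
            np.cos(np.linspace(0, 3, 55))]

# Pairwise DTW divergence matrix -> Ward-linkage hierarchical tree.
n = len(segments)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw_distance(segments[i], segments[j])

tree = linkage(squareform(dist), method="ward")
labels = fcluster(tree, t=2, criterion="maxclust")
```

Cutting the tree at two clusters groups the two sine-shaped segments together despite their different lengths, which is exactly the time-warp invariance DTW provides.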
Subjects
Activities of Daily Living, Movement, Cluster Analysis, Humans, Range of Motion, Articular, Upper Extremity

ABSTRACT
Interactions with an object during within-hand manipulation (WIHM) constitute an assortment of gripping, sliding, and pivoting actions. In addition to its manipulation benefits, the re-orientation and motion of an object within the hand also conveys a rich array of haptic information, via these interactions, to the sensory organs of the hand. In this article, we utilize variable friction (VF) robotic fingers to execute a rolling WIHM on a variety of objects while recording 'proprioceptive' actuator data, which is then used for object classification (i.e., without tactile sensors). Rather than hand-picking a select group of features for this task, our approach begins with 66 general features, computed from the actuator position and load profiles of each object-rolling manipulation based on gradient changes. An Extra Trees classifier performs object classification while also ranking each feature's importance. Using only the six most important 'Key Features' from the general set, a classification accuracy of 86% was achieved in distinguishing the six geometric objects in our data set; for comparison, using all 66 features yields 89.8%.
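The gradient-change feature extraction can be illustrated with a small numpy sketch. The four features below are hypothetical stand-ins for the 66 general features (the abstract does not enumerate them); in practice, the per-feature importance ranking would come from scikit-learn's ExtraTreesClassifier via its `feature_importances_` attribute.

```python
import numpy as np

def profile_features(signal):
    """A few illustrative features of the kind described in the
    abstract, computed from one actuator profile via its gradient.
    (Hypothetical subset -- the study uses 66 such features.)"""
    g = np.gradient(np.asarray(signal, dtype=float))
    sign_changes = int(np.sum(np.diff(np.sign(g)) != 0))
    return {
        "mean": float(np.mean(signal)),
        "std": float(np.std(signal)),
        "max_gradient": float(np.max(np.abs(g))),
        "gradient_sign_changes": sign_changes,
    }

# Example: synthetic load profile from rolling a hexagonal object,
# producing six load peaks per revolution.
t = np.linspace(0, 2 * np.pi, 200)
load = 1.0 + 0.2 * np.abs(np.sin(3 * t))
feats = profile_features(load)
```

The gradient-sign-change count captures how often the load rises and falls during a roll, which is plausibly what distinguishes faceted objects from round ones in this setup.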
Subjects
Hand, Machine Learning, Motor Activity, Proprioception, Robotics, Touch Perception, Friction, Humans

ABSTRACT
New upper limb prosthetic devices are continuously being developed by a variety of industrial, academic, and hobbyist groups, yet little research has evaluated the long-term use of currently available prostheses in daily life activities beyond laboratory or survey studies. We seek to objectively measure how experienced unilateral upper limb prosthesis users employ their prosthetic devices and unaffected limbs for manipulation during everyday activities. In particular, our goal is to create a method for evaluating all types of amputee manipulation, including non-prehensile actions beyond conventional grasp functions, and to examine the relative use of both limbs in unilateral and bilateral cases. This study employs a head-mounted video camera to record participants' hands and arms as they complete unstructured domestic tasks within their own homes. A new 'Unilateral Prosthesis-User Manipulation Taxonomy' is presented, based on observations from 10 hours of recorded video. The taxonomy addresses manipulation actions of the intact hand, prostheses, bilateral activities, and environmental feature use (affordances). Our preliminary results involved tagging 23-minute segments of the full videos from 3 amputee participants using the taxonomy, which produced over 2,300 tag instances. Among the observations: non-prehensile interactions outnumbered prehensile interactions in the affected limb for users whose more distal amputations allowed arm mobility.
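Once a video segment has been tagged, the analysis reduces to frequency counts over tag categories. A minimal sketch with made-up labels (the published taxonomy's categories differ):

```python
from collections import Counter

# Hypothetical tag stream from annotating one video segment with the
# taxonomy; the labels are illustrative, not the published categories.
tags = [
    "intact:prehensile", "prosthesis:non_prehensile",
    "prosthesis:non_prehensile", "bilateral",
    "environment:affordance", "intact:prehensile",
    "prosthesis:non_prehensile", "prosthesis:prehensile",
]

counts = Counter(tags)
# The abstract's headline observation is a comparison of this kind:
# non-prehensile vs. prehensile use of the affected limb.
prosthesis_ratio = (counts["prosthesis:non_prehensile"]
                    / max(counts["prosthesis:prehensile"], 1))
```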
Subjects
Amputees/rehabilitation, Artificial Limbs/classification, Prosthesis Design/methods, Task Performance and Analysis, Upper Extremity/physiology, Activities of Daily Living, Aged, Female, Humans, Male, Middle Aged, Video Recording

ABSTRACT
BACKGROUND: Manual laparoscopic surgery requires extensive training and familiarization. It has been suggested that the motion inversion caused by the 'fulcrum effect' is key to its motor challenges. We investigate the potential of a conceptual semi-robotic handheld tool that negates this natural inversion. METHODS: A custom laparoscopic simulator with haptic feedback was developed to allow interactive evaluation of the conceptual tool via virtual prototyping, prior to fabricating a physical prototype. Two groups of eight participants each used either the conceptual or a regular virtual tool over a ten-week study to complete two abstract tasks of motor control and force regulation. RESULTS: Statistically significantly higher rates of skill improvement were demonstrated with the conceptual tool for motion efficiency, task completion time, and error reduction. Force regulation improved for both groups, but without significant differences. CONCLUSIONS: The results indicate the potential of fulcrum-negating hand tools to reduce the time needed to acquire motor skills.
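One common way to quantify a "motion efficiency" outcome is the ratio of straight-line displacement to the actual path length of the tool tip; the authors' exact metric is not specified in the abstract, so the following is only an illustrative proxy.

```python
import numpy as np

def path_efficiency(points):
    """Straight-line distance between a tool-tip path's start and end,
    divided by the path length actually travelled (1.0 = perfectly
    direct).  An illustrative proxy for motion efficiency, not the
    study's published metric."""
    p = np.asarray(points, dtype=float)
    path_len = np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))
    direct = np.linalg.norm(p[-1] - p[0])
    return float(direct / path_len) if path_len > 0 else 1.0

# A meandering 2-D trajectory scores lower than a direct one.
wiggly = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)]
direct = [(0, 0), (4, 0)]
```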
Subjects
Clinical Competence, Computer Simulation, Laparoscopy/instrumentation, Laparoscopy/methods, Learning, Adult, Algorithms, Computer-Assisted Instruction, Equipment Design, Feedback, Female, Humans, Male, Middle Aged, Motion (Physics), Motor Skills, Psychomotor Performance, Robotics, Stress, Mechanical, User-Computer Interface

ABSTRACT
Shape-changing interfaces are a category of device capable of altering their form in order to facilitate the communication of information. In this work, we present a shape-changing device designed for navigation assistance. 'The Animotus' (previously 'The Haptic Sandwich') resembles a cube with an articulated upper half that can rotate and extend (translate) relative to the bottom half, which is fixed in the user's grasp. This rotation and extension, generally felt via the user's fingers, is used to represent heading and proximity to navigational targets. The device is intended to provide an alternative to screen- or audio-based interfaces for visually impaired, hearing impaired, deafblind, and sighted pedestrians. The motivation and design of the haptic device are presented, followed by the results of a navigation experiment that aimed to determine the role each device DOF plays in facilitating guidance. An additional device, 'The Haptic Taco', which modulated its volume in response to target proximity (providing no directional feedback), was also compared. Results indicate that while the heading (rotational) DOF benefited motion efficiency, the proximity (translational) DOF benefited velocity; combining the two DOFs improved overall performance. The volumetric Taco performed comparably to the Animotus' extension DOF.
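The two-DOF encoding (rotation for heading, extension for proximity) can be sketched as a simple control mapping. The linear extension law and angle wrapping below are assumptions made for illustration, not the published control scheme.

```python
import math

def animotus_command(user_xy, user_heading, target_xy,
                     max_extension=1.0, max_range=20.0):
    """Map navigation state onto the device's two DOF: the upper
    half's rotation encodes heading-to-target and its extension
    encodes proximity.  The exact mapping here is an illustrative
    guess, not the published control law."""
    dx = target_xy[0] - user_xy[0]
    dy = target_xy[1] - user_xy[1]
    bearing = math.atan2(dy, dx)
    # Rotation: target bearing relative to the user's facing,
    # wrapped into [-pi, pi).
    rotation = (bearing - user_heading + math.pi) % (2 * math.pi) - math.pi
    # Extension: fully extended when far, retracted on arrival.
    distance = math.hypot(dx, dy)
    extension = max_extension * min(distance / max_range, 1.0)
    return rotation, extension
```

For example, a target straight ahead yields zero rotation and an extension proportional to its distance, while a target behind the user yields a half-turn rotation command.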
Subjects
Feedback, Sensory/physiology, Touch/physiology, Adult, Equipment Design, Female, Fingers/physiology, Humans, Male, Pedestrians, Self-Help Devices, User-Computer Interface, Young Adult

ABSTRACT
Classical robotic approaches to tactile object identification often involve rigid mechanical grippers, dense sensor arrays, and exploratory procedures (EPs). Though EPs are a natural method for humans to acquire object information, evidence also exists for meaningful tactile property inference from brief, non-exploratory motions (a 'haptic glance'). In this work, we implement tactile object identification and feature extraction techniques on data acquired during a single, unplanned grasp with a simple, underactuated robot hand equipped with inexpensive barometric pressure sensors. Our methodology utilizes two cooperating schemes, based on an advanced machine learning technique (random forests) and on parametric methods that estimate object properties. The available data are limited to actuator positions (one per two-link finger) and force sensor values (eight per finger). The schemes can work both independently and collaboratively, depending on the task scenario. When collaborating, the results of each method contribute to the other, improving the overall result in a synergistic fashion. Unlike prior work, the proposed approach does not require object exploration, re-grasping, grasp release, or force modulation, and it works for arbitrary object start positions and orientations. These factors allow the technique to be integrated into practical robotic grasping scenarios without adding time or manipulation overheads.
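The collaborative mode, in which each scheme's result informs the other, can be sketched abstractly as a fusion of class posteriors. The product-of-experts renormalization below is an assumption made for illustration; the paper's actual interaction between the learned and parametric schemes may be richer.

```python
import numpy as np

def fuse(p_learned, p_parametric):
    """Combine the two cooperating schemes' class posteriors by
    product-of-experts fusion and renormalization.  Illustrative
    only -- a sketch of the 'results of each method contribute to
    the other' idea, not the paper's exact mechanism."""
    fused = np.asarray(p_learned, dtype=float) * np.asarray(p_parametric, dtype=float)
    return fused / fused.sum()

# The random-forest posterior is unsure between classes 0 and 1;
# a parametric size estimate effectively rules out class 1.
p_rf = np.array([0.45, 0.45, 0.10])
p_param = np.array([0.50, 0.05, 0.45])
p = fuse(p_rf, p_param)
```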