Results 1 - 4 of 4
1.
Front Robot AI; 10: 1281188, 2023.
Article in English | MEDLINE | ID: mdl-38077457

ABSTRACT

Humans regularly use all inner surfaces of the hand during manipulation, whereas traditional formulations for robots tend to use only the tips of the fingers, limiting overall dexterity. In this paper, we explore the use of the whole hand during spatial robotic dexterous within-hand manipulation. We present a novel four-fingered robotic hand, the Model B, which is designed and controlled using a straightforward potential energy-based motion model derived from the hand configuration and the applied actuator torques. In this way, the hand-object system is driven to a new desired configuration, often through sliding and rolling between the object and hand, with the fingers "caging" the object to prevent ejection. This paper presents the first application of the energy model in three dimensions, which was used to compare the theoretical manipulability of popular robotic hands and in turn inspired the design of the Model B. We experimentally validate the hand's performance through extensive benchtop experimentation with test objects and real-world objects, as well as on a robotic arm, and demonstrate complex spatial caging manipulation of a variety of objects in all six object degrees of freedom (three translational and three rotational) using all inner surfaces of the fingers and the palm.
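The quasi-static idea behind a potential energy-based motion model can be sketched in a few lines: for fixed actuator torques, the hand-object system settles into a configuration that locally minimizes potential energy. The energy function, torques, and stiffnesses below are toy values for illustration only, not the Model B's actual parameters.

```python
# Hypothetical sketch of a potential energy-based motion model: constant
# actuator torques do work on the joints while elastic elements store
# energy, and the system settles at a local minimum of the total potential.
import numpy as np

def potential_energy(q, torques, stiffness):
    """Toy potential: -tau.q from constant actuator torques plus
    0.5*k*q^2 stored in spring-like elements."""
    return -torques @ q + 0.5 * stiffness @ q**2

def settle(q0, torques, stiffness, lr=0.01, steps=2000):
    """Gradient descent toward a local energy minimum, standing in for
    the quasi-static motion of the hand-object system."""
    q = q0.copy()
    for _ in range(steps):
        grad = -torques + stiffness * q  # dE/dq for the toy potential
        q -= lr * grad
    return q

torques = np.array([1.0, 0.5])    # applied actuator torques (made up)
stiffness = np.array([2.0, 2.0])  # joint stiffnesses (made up)
q_eq = settle(np.zeros(2), torques, stiffness)
# At equilibrium tau = k*q, so q_eq is approximately [0.5, 0.25]
```

The same minimization viewpoint extends to three dimensions, where the configuration vector covers all object translations and rotations.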

2.
Sci Robot; 6(54), 2021 May 12.
Article in English | MEDLINE | ID: mdl-34043534

ABSTRACT

Humans use all surfaces of the hand for contact-rich manipulation. Robot hands, in contrast, typically use only the fingertips, which can limit dexterity. In this work, we leveraged a potential energy-based whole-hand manipulation model, which does not depend on contact wrench modeling like traditional approaches, to design a robotic manipulator. Inspired by robotic caging grasps and the high levels of dexterity observed in human manipulation, a metric was developed and used in conjunction with the manipulation model to design a two-fingered dexterous hand, the Model W. This was accomplished by simulating all planar finger topologies composed of open kinematic chains of up to three serial revolute and prismatic joints, forming symmetric two-fingered hands, and evaluating their performance according to the metric. We present the best design, an unconventional robot hand capable of performing continuous object reorientation, as well as repeatedly alternating between power and pinch grasps, two contact-rich skills that have often eluded robotic hands, and we experimentally characterize the hand's manipulation capability. This hand realizes manipulation motions reminiscent of thumb-index finger manipulative movement in humans, and its topology provides the foundation for a general-purpose dexterous robot hand.
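The design-space search described above, enumerating every open chain of up to three serial revolute (R) or prismatic (P) joints, is small enough to sketch directly. The scoring function below is a placeholder; the paper's actual caging/dexterity metric is not reproduced here.

```python
# Minimal sketch of the finger-topology enumeration: all serial chains of
# 1..3 joints drawn from {R, P}, each mirrored to form a symmetric
# two-fingered hand, then ranked by a (placeholder) metric.
from itertools import product

def finger_topologies(max_joints=3):
    """Yield every serial chain of 1..max_joints joints from {R, P}."""
    for n in range(1, max_joints + 1):
        for chain in product("RP", repeat=n):
            yield "".join(chain)

topologies = list(finger_topologies())
# 2 + 4 + 8 = 14 candidate finger chains
assert len(topologies) == 14

def score(topology):
    """Placeholder for the paper's dexterity metric (illustrative only):
    here we simply favor chains with more revolute joints."""
    return topology.count("R")

best = max(topologies, key=score)  # "RRR" under this toy metric
```

In the actual work, each candidate hand would be simulated with the energy-based manipulation model and scored on its manipulation capability rather than by a joint count.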


Subjects
Hand, Robotics/instrumentation, Biomechanical Phenomena, Computer Simulation, Equipment Design, Fingers/anatomy & histology, Fingers/physiology, Hand/anatomy & histology, Hand/physiology, Hand Strength/physiology, Haptic Interfaces, Humans, Motion (Physics)
3.
Sci Robot; 6(54), 2021 May 19.
Article in English | MEDLINE | ID: mdl-34043540

ABSTRACT

The process of modeling a series of hand-object parameters is crucial for precise and controllable robotic in-hand manipulation because it enables the mapping from the hand's actuation input to the object's motion to be obtained. Without assuming that most of these model parameters are known a priori or can be easily estimated by sensors, we focus on equipping robots with the ability to actively self-identify the necessary model parameters using minimal sensing. Here, we derive algorithms based on the concept of virtual linkage-based representations (VLRs) to self-identify the underlying mechanics of hand-object systems via exploratory manipulation actions and probabilistic reasoning and, in turn, show that the self-identified VLR can enable the control of precise in-hand manipulation. To validate our framework, we instantiated the proposed system on a Yale Model O hand without joint encoders or tactile sensors. The passive adaptability of the underactuated hand greatly facilitates the self-identification process, because it naturally secures stable hand-object interactions during random exploration. Relying solely on an in-hand camera, our system can effectively self-identify the VLRs, even when some fingers are replaced with novel designs. In addition, we show in-hand manipulation applications of handwriting, marble-maze playing, and cup stacking to demonstrate the effectiveness of the VLR in precise in-hand manipulation control.


Subjects
Hand, Robotics/methods, Algorithms, Biomechanical Phenomena, Computer Simulation, Equipment Design, Hand Strength, Haptic Interfaces/statistics & numerical data, Humans, Man-Machine Systems, Robotics/instrumentation, Robotics/statistics & numerical data, Systems Theory, User-Computer Interface
4.
IEEE Trans Haptics; 13(3): 600-610, 2020.
Article in English | MEDLINE | ID: mdl-31831440

ABSTRACT

Interactions with an object during within-hand manipulation (WIHM) constitute an assortment of gripping, sliding, and pivoting actions. Beyond the manipulation benefits, the re-orientation and motion of the object within the hand also provide a rich array of additional haptic information, via these interactions, to the sensory organs of the hand. In this article, we utilize variable friction (VF) robotic fingers to execute a rolling WIHM on a variety of objects while recording 'proprioceptive' actuator data, which is then used for object classification (i.e., without tactile sensors). Rather than hand-picking a select group of features for this task, our approach begins with 66 general features, computed from the actuator position and load profiles of each object-rolling manipulation based on gradient changes. An Extra Trees classifier performs object classification while also ranking each feature's importance. Using only the six most important 'Key Features' from the general set, a classification accuracy of 86% was achieved in distinguishing the six geometric objects in our data set; with all 66 features, the accuracy is 89.8%.
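The pipeline described above, training an Extra Trees classifier and then using its built-in feature importances to select a small "Key Feature" subset, can be sketched with scikit-learn. The data below is synthetic; the 66 actuator-derived features and the paper's results are not reproduced.

```python
# Hedged sketch of Extra Trees classification with importance-based
# feature selection, on synthetic data standing in for the 66
# actuator-derived features and six object classes.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(0)
n_samples, n_features = 300, 66
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 6, size=n_samples)  # six object classes
X[:, 0] += y                            # make feature 0 informative

# Fit on all 66 features; the classifier ranks features as a side effect.
clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, y)

# Keep the six most important features, mirroring the paper's
# "Key Features" selection, and retrain on that subset.
key_features = np.argsort(clf.feature_importances_)[::-1][:6]
X_key = X[:, key_features]
clf_key = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X_key, y)
```

On real data, the two models would be compared on held-out accuracy, which is how the 86% (six features) versus 89.8% (all features) trade-off above was obtained.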


Subjects
Hand, Machine Learning, Motor Activity, Proprioception, Robotics, Touch Perception, Friction, Humans