Results 1 - 16 of 16
1.
Nat Commun ; 15(1): 1760, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38409128

ABSTRACT

Most wearable robots, such as exoskeletons and prostheses, can operate with dexterity, yet wearers do not perceive them as part of their bodies. In this Perspective, we contend that integrating environmental, physiological, and physical information through multi-modal fusion, incorporating human-in-the-loop control, utilizing neuromuscular interfaces, employing flexible electronics, and acquiring and processing human-robot information with biomechatronic chips should all be leveraged toward building the next generation of wearable robots. These technologies could improve the embodiment of wearable robots. With optimizations in mechanical structure and clinical training, the next generation of wearable robots should better facilitate human motor and sensory reconstruction and enhancement.


Subjects
Powered Exoskeleton, Robotics, Wearable Electronic Devices, Humans, Electronics, Technology
4.
Front Neurorobot ; 15: 686010, 2021.
Article in English | MEDLINE | ID: mdl-34456705

ABSTRACT

Robots are starting to play a role in our social landscape, and they are progressively becoming responsive, both physically and socially. This raises the question of how humans react to and interact with robots in a coordinated manner, and what the neural underpinnings of such behavior are. This exploratory study aims to understand the differences between human-human and human-robot interactions at a behavioral level and from a neurophysiological perspective. For this purpose, we adapted a collaborative dynamical paradigm from the literature. We asked 12 participants to hold two corners of a tablet while collaboratively guiding a ball around a circular track, either with another participant or with a robot. At irregular intervals, the ball was perturbed outward, creating an artificial error in the behavior that required corrective measures to return to the circular track. Concurrently, we recorded electroencephalography (EEG). In the behavioral data, we found increased velocity and positional error of the ball from the track in the human-human condition vs. the human-robot condition. For the EEG data, we computed event-related potentials. We found a significant difference between human and robot partners, driven by significant clusters at fronto-central electrodes. The amplitudes were stronger with a robot partner, suggesting different neural processing. All in all, our exploratory study suggests that coordinating with robots affects action-monitoring-related processing. In the investigated paradigm, human participants treat errors during human-robot interaction differently from those made during interactions with other humans. These results can improve communication between humans and robots through the use of neural activity in real time.

5.
Sci Robot ; 6(54)2021 05 26.
Article in English | MEDLINE | ID: mdl-34043538

ABSTRACT

Perceiving and handling deformable objects is an integral part of everyday life for humans. Automating tasks such as food handling, garment sorting, or assistive dressing requires open problems of modeling, perceiving, planning, and control to be solved. Recent advances in data-driven approaches, together with classical control and planning, can provide viable solutions to these open challenges. In addition, with the development of better simulation environments, we can generate and study scenarios that allow for benchmarking of various approaches and gain a better understanding of what theoretical developments need to be made and how practical systems can be implemented and evaluated to provide flexible, scalable, and robust solutions. To this end, we survey more than 100 relevant studies in this area and use them as the basis to discuss open problems. We adopt a learning perspective to unify the discussion over analytical and data-driven approaches, addressing how to use and integrate model priors and task data in perceiving and manipulating a variety of deformable objects.


Subjects
Robotics/methods, Computer Simulation, Humans, Learning, Mechanical Phenomena, Perception, Physical Phenomena, Robotics/instrumentation, Robotics/statistics & numerical data
7.
Data Brief ; 30: 105335, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32258263

ABSTRACT

Representing 3D geometry for different tasks, e.g., rendering and reconstruction, is an important goal in fields such as computer graphics, computer vision, and robotics. Robotic applications often require perception of object shape information extracted from sensory data that can be noisy and incomplete. This is a challenging task, and in order to facilitate analysis of new methods and comparison of different approaches for shape modeling (e.g., surface estimation), completion, and exploration, we provide real sensory data acquired from exploring various objects of different complexities. The dataset includes visual and tactile readings in the form of 3D point clouds obtained using two different robot setups equipped with visual and tactile sensors. During data collection, the robots touch the experiment objects in a predefined manner at various exploration configurations and gather visual and tactile points in the same coordinate frame, based on calibration between the robots and the cameras used. The goal of this exhaustive exploration procedure is to sense unseen parts of the objects that are not visible to the cameras but can be sensed via tactile sensors activated at touched areas. The data were used for shape completion and modeling via implicit surface representation and Gaussian-process-based regression in the work "Object shape estimation and modeling, based on sparse Gaussian process implicit surfaces, combining visual data and tactile exploration" [3], and were also used partially in "Enhancing visual perception of shape through tactile glances" [4], both of which study efficient exploration of objects to reduce the number of touches.
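As a rough illustration of the Gaussian-process implicit-surface idea this dataset was built for (not the authors' implementation), the sketch below fits a numpy-only GP to 2D "surface" points labeled 0, with one interior (-1) and two exterior (+1) anchors; the zero level set of the posterior mean approximates the shape. The kernel choice, length-scale, and point labels are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, ell=0.4):
    """Squared-exponential kernel between point sets a (N,2) and b (M,2)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

# Fused "visual + tactile" observations on the unit circle (surface -> value 0),
# plus one interior (-1) and two exterior (+1) anchor points.
theta = np.linspace(0, 2 * np.pi, 12, endpoint=False)
surface = np.c_[np.cos(theta), np.sin(theta)]
X = np.vstack([surface, [[0.0, 0.0]], [[2.0, 0.0]], [[0.0, 2.0]]])
y = np.r_[np.zeros(len(surface)), -1.0, 1.0, 1.0]

# GP regression weights; the posterior mean is an implicit function whose
# zero level set approximates the object surface.
K = rbf(X, X) + 1e-6 * np.eye(len(X))
alpha = np.linalg.solve(K, y)

def implicit(q):
    """Posterior mean of the GP at a query point q."""
    return (rbf(np.atleast_2d(q), X) @ alpha)[0]

print(implicit([1.0, 0.0]))  # near 0: on the estimated surface
print(implicit([0.0, 0.0]))  # negative: inside the object
```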

8.
J Chem Inf Model ; 60(3): 1302-1316, 2020 03 23.
Article in English | MEDLINE | ID: mdl-32130862

ABSTRACT

We define a molecular caging complex as a pair of molecules in which one molecule (the "host" or "cage") possesses a cavity that can encapsulate the other molecule (the "guest") and prevent it from escaping. Molecular caging complexes can be useful in applications such as molecular shape sorting, drug delivery, and molecular immobilization in materials science, to name just a few. However, the design and computational discovery of new caging complexes is a challenging task, as it is hard to predict whether one molecule can encapsulate another because their shapes can be quite complex. In this paper, we propose a computational screening method that predicts whether a given pair of molecules form a caging complex. Our method is based on a caging verification algorithm that was designed by our group for applications in robotic manipulation. We tested our algorithm on three pairs of molecules that were previously described in a pioneering work on molecular caging complexes and found that our results are fully consistent with the previously reported ones. Furthermore, we performed a screening experiment on a data set consisting of 46 hosts and four guests and used our algorithm to predict which pairs are likely to form caging complexes. Our method is computationally efficient and can be integrated into a screening pipeline to complement experimental techniques.
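The abstract does not give the caging-verification algorithm itself; as a hypothetical, heavily simplified analogue of the idea, the sketch below checks caging on a 2D occupancy grid: the "guest" is caged if breadth-first search from its cell cannot reach the grid border through free cells. The grid layout and 4-connected neighborhood are illustrative assumptions, not the paper's method.

```python
from collections import deque

def is_caged(grid, start):
    """Return True if no path of free cells (0) leads from `start` to the
    grid border, i.e. the 'guest' at `start` cannot escape the 'host'."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if r in (0, rows - 1) or c in (0, cols - 1):
            return False  # reached the outside: an escape path exists
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return True  # the free space around the guest is a closed cavity

# A host with a closed cavity (1 = host material, 0 = free space).
closed = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
open_host = [row[:] for row in closed]
open_host[1][2] = 0  # punch a hole in the cage wall
print(is_caged(closed, (2, 2)), is_caged(open_host, (2, 2)))  # → True False
```

Real caging verification must of course reason in a continuous 3D (or higher-dimensional) configuration space of the guest molecule, which is what makes the screening problem hard.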


Assuntos
Algoritmos
9.
Front Robot AI ; 7: 47, 2020.
Article in English | MEDLINE | ID: mdl-33501215

ABSTRACT

Coordinating actions with an interaction partner requires a constant exchange of sensorimotor signals. Humans acquire these skills in infancy and early childhood mostly by imitation learning and active engagement with a skilled partner. These skills require the ability to predict and adapt to one's partner during an interaction. In this work we explore these ideas in a human-robot interaction setting in which a robot is required to learn interactive tasks from a combination of observational and kinesthetic learning. To this end, we propose a deep learning framework consisting of a number of components for (1) human and robot motion embedding, (2) motion prediction of the human partner, and (3) generation of robot joint trajectories matching the human motion. As long-term motion prediction methods often suffer from the problem of regression to the mean, our technical contribution here is a novel probabilistic latent variable model which does not predict in joint space but in latent space. To test the proposed method, we collect human-human and human-robot interaction data for four interactive tasks: "hand-shake," "hand-wave," "parachute fist-bump," and "rocket fist-bump." We demonstrate experimentally the importance of predictive and adaptive components, as well as low-level abstractions, for successfully learning to imitate human behavior in interactive social tasks.
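To illustrate the idea of predicting in latent space rather than joint space, here is a minimal numpy sketch in which PCA and a linear one-step model stand in for the paper's learned probabilistic latent variable model; the data and dimensions are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "human motion": D joint angles driven by a 2-D latent phase.
T, D = 200, 10
phase = np.linspace(0, 4 * np.pi, T)
latent_true = np.c_[np.sin(phase), np.cos(phase)]
mixing = rng.normal(size=(2, D))
joints = latent_true @ mixing + 0.01 * rng.normal(size=(T, D))

# "Encoder": PCA to a 2-D latent space (stand-in for a learned embedding).
mean = joints.mean(0)
_, _, Vt = np.linalg.svd(joints - mean, full_matrices=False)
encode = lambda x: (x - mean) @ Vt[:2].T
decode = lambda z: z @ Vt[:2] + mean

# Predictor fitted in latent space: z[t+1] ≈ z[t] @ A (least squares).
Z = encode(joints)
A, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)

# One-step prediction, decoded back to joint space.
pred = decode(encode(joints[:-1]) @ A)
err = np.abs(pred - joints[1:]).mean()
print(err)  # small: the low-dimensional latent dynamics capture the motion
```

Predicting in the low-dimensional latent space keeps the forecast on the manifold of plausible motions, which is one intuition for why it mitigates regression to the mean.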

10.
Front Robot AI ; 7: 82, 2020.
Article in English | MEDLINE | ID: mdl-33501249

ABSTRACT

Manipulation of deformable objects has given rise to an important set of open problems in the field of robotics. Application areas include robotic surgery, household robotics, manufacturing, logistics, and agriculture, to name a few. Related research problems span modeling and estimation of an object's shape, estimation of an object's material properties, such as elasticity and plasticity, object tracking and state estimation during manipulation, and manipulation planning and control. In this survey article, we start by providing a tutorial on foundational aspects of models of shape and shape dynamics. We then use this as the basis for a review of existing work on learning and estimation of these models and on motion planning and control to achieve desired deformations. We also discuss potential future lines of work.

11.
Science ; 364(6446)2019 06 21.
Article in English | MEDLINE | ID: mdl-31221831

ABSTRACT

Dexterous manipulation is one of the primary goals in robotics. Robots with this capability could sort and package objects, chop vegetables, and fold clothes. As robots come to work side by side with humans, they must also become human-aware. Over the past decade, research has made strides toward these goals. Progress has come from advances in visual and haptic perception and in mechanics in the form of soft actuators that offer a natural compliance. Most notably, immense progress in machine learning has been leveraged to encapsulate models of uncertainty and to support improvements in adaptive and robust control. Open questions remain in terms of how to enable robots to deal with the most unpredictable agent of all, the human.


Subjects
Hands/physiology, Motor Skills/physiology, Robotics/trends, Humans
12.
Sci Robot ; 4(28)2019 03 13.
Article in English | MEDLINE | ID: mdl-33137744

ABSTRACT

Perching helps small unmanned aerial vehicles (UAVs) extend their time of operation by saving battery power. However, most strategies for UAV perching require complex maneuvering and rely on specific structures, such as rough walls for attaching or tree branches for grasping. Many strategies for perching neglect the UAV's mission, such that saving battery power interrupts the mission. We suggest enabling UAVs to make and stabilize contacts with the environment, which allows the UAV to consume less energy while retaining its altitude, in addition to the perching capability that has been proposed before. We term this new capability "resting." For this, we propose a modularized and actuated landing-gear framework that allows stabilizing the UAV on a wide range of structures by perching and resting. Modularization allows our framework to adapt to specific structures for resting through rapid prototyping with additive manufacturing. Actuation allows switching between different modes of perching and resting during flight and additionally enables perching by grasping. Our results show that this framework can be used to perform UAV perching and resting on a set of common structures, such as street lights and the edges or corners of buildings. We show that the design is effective in reducing power consumption, promotes increased pose stability, and preserves large vision ranges while perching or resting at height. In addition, we discuss the potential applications facilitated by our design, as well as the potential issues to be addressed for deployment in practice.

13.
Sci Robot ; 3(21)2018 08 22.
Article in English | MEDLINE | ID: mdl-33141725

ABSTRACT

Social robotics studies what it really means for humans and robots to interact and the implications of those interactions.

14.
Sci Robot ; 3(23)2018 Oct 17.
Article in English | MEDLINE | ID: mdl-33141734

ABSTRACT

Computer vision diverged from robotics and has focused on contests and data sets; reconnecting the two could solve real-world problems.

15.
PeerJ Comput Sci ; 4: e153, 2018.
Article in English | MEDLINE | ID: mdl-33816807

ABSTRACT

We describe the Coefficient-Flow algorithm for calculating the bounding chain of an $(n-1)$-boundary on an $n$-manifold-like simplicial complex $S$. We prove its correctness and show that it has a computational time complexity of $O(|S^{(n-1)}|)$ (where $S^{(n-1)}$ is the set of $(n-1)$-faces of $S$). We estimate the big-$O$ coefficient, which depends on the dimension of $S$ and the implementation. We present an implementation, experimentally evaluate the complexity of our algorithm, and compare its performance with that of solving the underlying linear system.
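A bounding chain of a boundary $b$ is any chain $c$ with $\partial c = b$. The baseline the authors compare against, solving the underlying linear system, can be sketched with the boundary matrix of two oriented triangles sharing an edge; the edge/triangle labels below are illustrative, not from the paper.

```python
import numpy as np

# Two consistently oriented triangles t0=[0,1,2], t1=[1,3,2] sharing edge (1,2).
# Rows: edges e0=(0,1), e1=(0,2), e2=(1,2), e3=(1,3), e4=(2,3).
D2 = np.array([
    [ 1,  0],   # e0 appears in t0 with +1
    [-1,  0],   # e1 appears in t0 with -1
    [ 1, -1],   # shared edge e2: +1 in t0, -1 in t1 (orientations cancel)
    [ 0,  1],   # e3 appears in t1 with +1
    [ 0, -1],   # e4 appears in t1 with -1
], dtype=float)

# The outer boundary cycle 0 -> 1 -> 3 -> 2 -> 0 as a 1-chain.
b = np.array([1.0, -1.0, 0.0, 1.0, -1.0])

# A bounding chain c satisfies D2 @ c = b; solve via least squares.
c, *_ = np.linalg.lstsq(D2, b, rcond=None)
print(np.round(c, 6))  # → [1. 1.]  (both triangles with coefficient 1)
```

Coefficient-Flow avoids assembling and solving this global system, which is how it reaches linear time in the number of $(n-1)$-faces.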

16.
Data Brief ; 11: 491-498, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28289699

ABSTRACT

We present a novel approach and database that combines the inexpensive generation of 3D object models via monocular or RGB-D camera images with 3D printing and a state-of-the-art object tracking algorithm. Unlike recent efforts toward the creation of 3D object databases for robotics, our approach does not require expensive and controlled 3D scanning setups and aims to enable anyone with a camera to scan, print, and track complex objects for manipulation research. The proposed approach results in detailed textured mesh models whose 3D-printed replicas provide close approximations of the originals. A key motivation for utilizing 3D-printed objects is the ability to precisely control and vary object properties such as size, material properties, and mass distribution in the 3D printing process to obtain reproducible conditions for robotic manipulation research. We present CapriDB - an extensible database resulting from this approach, initially containing 40 textured and 3D-printable mesh models together with tracking features to facilitate the adoption of the proposed approach.
