1.
Front Neurorobot; 18: 1342786, 2024.
Article in English | MEDLINE | ID: mdl-38895095

ABSTRACT

Symbolic task planning is a widely used approach to support robot autonomy due to its ease of understanding and deployment in engineered robot architectures. However, symbolic task planning techniques are difficult to scale in real-world, highly dynamic, human-robot collaboration scenarios because they perform poorly in planning domains where action effects may not be immediate, or when frequent re-planning is needed due to changed circumstances in the robot workspace. The long-term validity of plans, plan length, and planning time can hinder the robot's efficiency and negatively affect the overall fluency of the human-robot interaction. We present a framework, which we refer to as Teriyaki, specifically aimed at bridging the gap between symbolic task planning and machine learning approaches. The rationale is to train Large Language Models (LLMs), namely GPT-3, to act as a neurosymbolic task planner compatible with the Planning Domain Definition Language (PDDL), and then to leverage their generative capabilities to overcome a number of limitations inherent to symbolic task planners. Potential benefits include (i) better scalability as planning domain complexity increases, since LLM response time scales linearly with the combined length of the input and the output, instead of super-linearly as with symbolic task planners, and (ii) the ability to synthesize a plan action-by-action instead of end-to-end, making each action available for execution as soon as it is generated rather than waiting for the whole plan to be available, which in turn enables concurrent planning and execution. In the past year, the research community has devoted significant effort to evaluating the overall cognitive capabilities of LLMs, with mixed success. With Teriyaki, instead, we aim to provide overall planning performance comparable to traditional planners in specific planning domains, while leveraging LLM capabilities on other metrics, specifically those related to their short- and mid-term generative capabilities, which are used to build a look-ahead predictive planning model. Preliminary results in selected domains show that our method can: (i) solve 95.5% of problems in a test data set of 1,000 samples; (ii) produce plans up to 13.5% shorter than those of a traditional symbolic planner; and (iii) reduce average overall waiting times for plan availability by up to 61.4%.
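To make the action-by-action idea concrete, here is a minimal sketch in which plan steps stream out of a completion model one at a time, so execution can begin before the full plan exists. The `llm_complete` stub and the prompt handling are hypothetical stand-ins, not the authors' actual GPT-3 pipeline.

```python
# Sketch of action-by-action plan generation: each action is yielded as
# soon as it is generated, enabling concurrent planning and execution.
# `llm_complete` is a hypothetical stand-in for a completion model
# fine-tuned to emit PDDL-style plan steps.

from typing import Callable, Iterator

def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call returning the next plan action, or 'DONE'."""
    raise NotImplementedError("plug in a fine-tuned completion model")

def plan_incrementally(problem: str,
                       complete: Callable[[str], str] = llm_complete,
                       max_steps: int = 50) -> Iterator[str]:
    """Yield one action at a time, conditioning each query on the
    actions generated so far."""
    context = problem
    for _ in range(max_steps):
        action = complete(context).strip()
        if action.upper() == "DONE":
            return
        yield action               # hand to the executor immediately
        context += "\n" + action   # extend the prompt for the next step

# Usage (hypothetical executor):
# for action in plan_incrementally(pddl_problem_text):
#     executor.dispatch(action)
```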

2.
Sci Robot; 8(78): eadd5434, 2023 May 17.
Article in English | MEDLINE | ID: mdl-37196072

ABSTRACT

Human manual dexterity relies critically on touch. Robotic and prosthetic hands are much less dexterous and make little use of the many tactile sensors available. We propose a framework modeled on the hierarchical sensorimotor controllers of the nervous system to link sensing to action in human-in-the-loop, haptically enabled, artificial hands.
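To give a feel for what a hierarchical sensorimotor controller can look like in code, below is a deliberately simplified two-level loop: a fast reflex layer that reacts to slip and a slower layer that tracks a force setpoint. All names, gains, and the sensor model are illustrative assumptions, not the framework proposed in the paper.

```python
# Two-level control sketch inspired by hierarchical sensorimotor control:
# a fast "reflex" layer and a slower "policy" layer. Illustrative only.

from dataclasses import dataclass

@dataclass
class TactileState:
    normal_force: float   # N, from fingertip sensors
    slip: bool            # micro-slip detected

def low_level_reflex(state: TactileState, grip_force: float) -> float:
    """Fast spinal-reflex analogue: bump grip force when slip is felt."""
    return grip_force * 1.2 if state.slip else grip_force

def high_level_policy(state: TactileState, target_force: float) -> float:
    """Slower task-level analogue: track a grip-force setpoint."""
    return target_force + 0.5 * (target_force - state.normal_force)

def control_step(state: TactileState, grip_force: float,
                 target_force: float) -> float:
    # the high level proposes a command; the low level can override it
    proposed = high_level_policy(state, target_force)
    return low_level_reflex(state, max(grip_force, proposed))
```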


Subject(s)
Robotic Surgical Procedures, Robotics, Touch Perception, Humans, Hand/physiology, Touch/physiology
3.
IEEE Trans Cybern; 53(1): 497-513, 2023 Jan.
Article in English | MEDLINE | ID: mdl-34910648

ABSTRACT

The possibility for humans to interact with physical or virtual systems using gestures has been vastly explored by researchers and designers over the last 20 years to provide new and intuitive interaction modalities. Unfortunately, the literature on gestural interaction is not homogeneous and is characterized by a lack of shared terminology. This leads to fragmented results and makes it difficult for research activities to build on state-of-the-art results and approaches. The analysis in this article aims at creating a common conceptual design framework to support development efforts in gesture-based human-machine interaction (HMI). The main contributions of this article can be summarized as follows: 1) we provide a broad definition of the notion of functional gesture in HMI; 2) we design a flexible and expandable gesture taxonomy; and 3) we put forward a detailed problem statement for gesture-based HMI. Finally, to support our main contribution, this article presents and analyzes the 83 most pertinent articles, classified on the basis of our taxonomy and problem statement.
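To make the "flexible and expandable taxonomy" notion concrete, here is a minimal tree-structured sketch; the category names are illustrative placeholders, not the taxonomy the article defines.

```python
# Minimal expandable-taxonomy sketch: a tree that grows by adding nodes.
# Category names below are hypothetical examples, not the article's.

from dataclasses import dataclass, field

@dataclass
class TaxonomyNode:
    name: str
    children: list["TaxonomyNode"] = field(default_factory=list)

    def add(self, child_name: str) -> "TaxonomyNode":
        node = TaxonomyNode(child_name)
        self.children.append(node)
        return node

root = TaxonomyNode("functional gesture")
root.add("static")                    # pose-based, e.g. an open palm
dynamic = root.add("dynamic")         # trajectory-based
dynamic.add("deictic")                # pointing at a target
dynamic.add("manipulative")           # acting on an object
```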


Subject(s)
Gestures, User-Computer Interface, Humans, Physical Examination
4.
Front Neurorobot; 16: 808222, 2022.
Article in English | MEDLINE | ID: mdl-35280844

ABSTRACT

Tactile sensing enables robots to perceive certain physical properties of an object in contact. Robots with tactile perception can classify textures by touch. Interestingly, textures with fine micro-geometry beyond the nominal resolution of the tactile sensors can also be identified through exploratory robotic movements such as sliding. To study the problem of fine texture classification, we design a robotic sliding experiment using a finger-shaped, multi-channel capacitive tactile sensor. A feature extraction process is presented to encode the acquired tactile signals (in the form of time series) into a low-dimensional (≤7D) feature vector. The feature vector captures the frequency signature of a fabric texture such that fabrics can be classified directly. The experiment includes multiple combinations of sliding parameters, i.e., speed and pressure, to investigate the correlation between sliding parameters and the generated feature space. Results show that changing the contact pressure can greatly affect the significance of the extracted feature vectors, whereas varying the sliding speed shows no apparent effect. In summary, this paper presents a study of texture classification on fabrics by training a simple k-NN classifier, using only one modality and one type of exploratory motion (sliding). The classification accuracy reaches up to 96%. The analysis of the feature space also implies a potential parametric representation of textures for tactile perception, which could be used to adapt the motion for better classification performance.
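The shape of this pipeline is easy to sketch: encode each sliding pass as a small frequency-signature vector, then classify with k-NN. The band count and normalisation below are assumptions, not the paper's exact feature definition, and the data are toy stand-ins.

```python
# Sketch of the described pipeline: frequency-signature features + k-NN.
# Band edges and normalisation are assumptions; data are synthetic.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def frequency_signature(signal: np.ndarray, n_bands: int = 7) -> np.ndarray:
    """Summarise a 1-D tactile time series as energy in n_bands bands."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    feats = np.array([b.sum() for b in np.array_split(spectrum, n_bands)])
    return feats / (feats.sum() + 1e-12)   # scale-robust signature

rng = np.random.default_rng(0)
X = np.stack([frequency_signature(rng.standard_normal(1024))
              for _ in range(20)])          # one sliding pass per row
y = np.repeat([0, 1], 10)                   # two toy fabric classes

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict(X[:2]))                   # classify two passes
```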

5.
IEEE Trans Cybern; 52(6): 5587-5606, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34133297

ABSTRACT

We present Arianna+, a framework for designing networks of ontologies that represent the knowledge enabling smart homes to perform online human activity recognition. In the network, nodes are ontologies allowing for various forms of data contextualisation, while edges are general-purpose computational procedures elaborating data. Arianna+ provides a flexible interface between the inputs and outputs of procedures and statements, which are atomic representations of ontological knowledge. Arianna+ schedules procedures on the basis of events by employing logic-based reasoning, that is, by checking the classification of certain statements in the ontologies. Each procedure involves input and output statements that are contextualized differently in the ontologies according to specific prior knowledge. Arianna+ makes it possible to design networks that encode data within multiple contexts and, as a reference scenario, we present a modular network based on a spatial context shared among all activities and a temporal context specialized for each activity to be recognized. In the article, we argue that a network of small ontologies is more intelligible and imposes a lower computational load than a single ontology encoding the same knowledge. Arianna+ integrates heterogeneous data processing techniques in the same architecture, each of which may be better suited to a different context. Thus, we do not propose a new algorithmic approach to activity recognition; instead, we focus on the architectural aspects of accommodating logic-based and data-driven activity models in a context-oriented way. We also discuss how to leverage data contextualization and reasoning for activity recognition, and how to support an iterative development process driven by domain experts.
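A toy sketch of the statement/procedure idea follows: procedures fire when their input statements become classified, and write output statements back. This mirrors the event-driven scheduling described above in a few lines; it is not the Arianna+ implementation, which reasons over OWL ontologies.

```python
# Toy event-driven scheduling over "statements": a procedure runs when
# its input statement is classified true, producing an output statement.
# Statement names are hypothetical examples.

from typing import Callable

statements: dict[str, bool] = {"motion_in_kitchen": False}
procedures: list[tuple[str, str, Callable[[], bool]]] = []

def procedure(inp: str, out: str, fn: Callable[[], bool]) -> None:
    procedures.append((inp, out, fn))

def on_event(statement: str, value: bool) -> None:
    statements[statement] = value
    for inp, out, fn in procedures:
        if inp == statement and value:      # input became classified
            statements[out] = fn()          # contextualise into output

procedure("motion_in_kitchen", "cooking_activity", lambda: True)
on_event("motion_in_kitchen", True)
print(statements["cooking_activity"])       # True
```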


Subject(s)
Human Activities, Semantics, Humans
6.
Front Robot AI; 8: 714023, 2021.
Article in English | MEDLINE | ID: mdl-34660702

ABSTRACT

Human-object interaction is of great relevance for robots operating in human environments. However, state-of-the-art robotic hands are far from replicating human skills. It is therefore essential to study how humans use their hands in order to develop similar robotic capabilities. This article presents a deep dive into hand-object interaction and human demonstrations, highlighting the main challenges in this research area and suggesting desirable future developments. To this end, the article presents a general definition of the hand-object interaction problem together with a concise review of each of the main subproblems involved, namely sensing, perception, and learning. Furthermore, the article discusses the interplay between these subproblems and describes how their interaction in learning from demonstration contributes to the success of robot manipulation. In this way, the article provides a broad overview of the interdisciplinary approaches necessary for a robotic system to learn new manipulation skills by observing human behavior in the real world.

7.
Data Brief; 32: 106122, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32904359

ABSTRACT

The article describes a multi-sensory dataset related to the Activities of Daily Living (ADL). These are the activities that contribute to an assessment of the overall status of elderly or people with special needs, possibly suffering from mild cognitive impairments. Typical basic ADLs include walking, such postural transitions as getting up or sitting down, as well as behaviours related to feeding, such as drinking or eating with knife and fork, or personal hygiene, e.g., teeth brushing. The collection process adopted for building this dataset considers nine ADL-related activities, which have been performed in different locations and involving the usage of both left and right arms. The dataset acquisition involved 10 volunteers performing 186 ADL instances, for a grand total of over 1860 examples. The dataset contains data from six 9-axis Inertial Measurement Units (IMUs), worn by each volunteer (two for each arm, one on the back and one on the right thigh). The dataset features an accurate data labelling done via manual annotation performed thanks to videos recorded by an RGB camera. The videos recorded during the experiments have been used only for labelling purposes, and they are not published.
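As a hedged sketch of how such recordings are typically consumed, the snippet below segments a per-sensor 9-axis stream into fixed-length windows. The (T, 9) array layout is an assumption for illustration; the brief does not fix a file format here.

```python
# Segment a (T, 9) IMU stream (accel/gyro/magnetometer) into overlapping
# windows for activity classification. Layout is an assumed convention.

import numpy as np

def windows(stream: np.ndarray, length: int, step: int):
    """Yield overlapping (length, 9) windows from a (T, 9) stream."""
    for start in range(0, len(stream) - length + 1, step):
        yield stream[start:start + length]

imu = np.random.default_rng(1).standard_normal((1000, 9))  # toy stand-in
segs = list(windows(imu, length=128, step=64))
print(len(segs), segs[0].shape)    # 14 (128, 9)
```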

8.
Front Neurorobot; 13: 53, 2019.
Article in English | MEDLINE | ID: mdl-31379549

ABSTRACT

In the past few years, a new scenario for robot-based applications has emerged. Service and mobile robots have opened new market niches, and new frameworks for shop-floor robot applications have been developed. In all these contexts, robots are requested to perform tasks in open-ended, possibly dynamically varying conditions. These new requirements also call for a change of paradigm in robot design: online, safe feedback motion control becomes the core of modern robot systems. Future robots will learn autonomously, interact safely, and possess qualities like self-maintenance. Attaining these features would be relatively easy if a complete model of the environment were available, and if the robot actuators could execute motion commands perfectly relative to this model. Unfortunately, a complete world model is not available, and robots have to plan and execute tasks in the presence of environmental uncertainties, which makes sensing an important component of new-generation robots. For this reason, today's robots are equipped with more and more sensing components, and are consequently ready to actively deal with the high complexity of the real world. Complex sensorimotor tasks such as exploration require coordination between the motor system and the sensory feedback. For robot control purposes, sensory feedback should be adequately organized in terms of relevant features and the associated data representation. In this paper, we propose an overall functional picture linking sensing to action in closed-loop sensorimotor control of robots for touch (hands, fingers). Basic qualities of haptic perception in humans inspire the models and categories comprising the proposed classification. The objective is to provide a reasoned, principled perspective on the connections between the different taxonomies used in the robotics and human haptics literature. The specific case of active exploration is chosen to ground interesting use cases, for two reasons. First, in the haptics literature, exploration has been treated only to a limited extent compared to grasping and manipulation. Second, exploration involves specific robot behaviors that exploit distributed and heterogeneous sensory data.
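A minimal closed-loop sketch of the sensing-to-action picture above: a fingertip slides across a surface while a simple controller regulates contact force from tactile feedback. The gains, sensor model, and actuator interface are illustrative assumptions, not the paper's classification.

```python
# Closed-loop tactile exploration sketch: regulate contact force with a
# proportional controller while sliding. All parameters are illustrative.

def explore_surface(read_force, set_velocity, set_pressure,
                    target_force: float = 1.0, kp: float = 0.3,
                    steps: int = 100) -> None:
    """Slide at constant speed while tracking a contact-force setpoint.
    read_force/set_velocity/set_pressure are injected robot callables."""
    pressure = target_force
    for _ in range(steps):
        error = target_force - read_force()   # sensory feedback
        pressure += kp * error                # corrective action
        set_pressure(pressure)
        set_velocity(0.02)                    # m/s, gentle sliding
```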

9.
Data Brief; 24: 103837, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30993154

ABSTRACT

In the past few years, the technology of automated guided vehicles (AGVs) has advanced notably. In particular, in the context of factory and warehouse automation, different approaches have been presented for detecting and localizing pallets inside warehouses and shop-floor environments. In a related research paper (Mohamed et al., 2018), we show that an AGV can detect, localize, and track pallets using machine learning techniques based only on the data of an on-board 2D laser rangefinder. Such a sensor is very common in industrial scenarios due to its simplicity and robustness, but it can only provide a limited amount of data. Therefore, it has been neglected in the past in favor of more complex solutions. In this paper, we release to the community the data we collected in Mohamed et al. (2018) for further research activities in the field of pallet localization and tracking. The dataset comprises a collection of 565 2D scans from real-world environments, divided into 340 samples where pallets are present and 225 samples where they are not. The data have been manually labelled and are provided in different formats.
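The dataset's shape maps naturally onto a binary classification setup, sketched below with toy arrays. The number of beams per scan and the loading code are assumptions; the brief states data ship in several formats without fixing one here.

```python
# Frame the described dataset (565 2-D scans: 340 pallet / 225 no-pallet)
# as binary classification with a stratified split. Data are synthetic
# stand-ins; 541 beams per scan is an assumed sensor resolution.

import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
scans = rng.standard_normal((565, 541))     # one range scan per row
labels = np.array([1] * 340 + [0] * 225)    # pallet present / absent

X_train, X_test, y_train, y_test = train_test_split(
    scans, labels, test_size=0.2, stratify=labels, random_state=0)
print(X_train.shape, y_train.mean())        # class balance preserved
```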

10.
Sensors (Basel); 19(4), 2019 Feb 16.
Article in English | MEDLINE | ID: mdl-30781527

ABSTRACT

Tactile sensing is a key enabling technology for developing complex behaviours in robots interacting with humans or the environment. This paper discusses computational aspects that play a significant role when extracting information about contact events. Considering a large-scale, capacitance-based robot skin technology we have developed in the past few years, we analyse the classical Boussinesq–Cerruti solution and Love's approach for solving a distributed inverse contact problem, from both a qualitative and a computational perspective. Our contribution is the characterisation of the algorithms' performance using a freely available dataset and data originating from surfaces provided with robot skin.
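For context, the forward relation that such inverse methods invert is the classical Boussinesq integral for the normal surface displacement of an elastic half-space under a distributed pressure; the inverse contact problem estimates the pressure from measurements (how capacitive readings relate to displacement is a separate modelling step not covered by this formula).

```latex
% Normal surface displacement u_z at (x, y) due to a pressure
% distribution p over the contact region S on an elastic half-space,
% with Young's modulus E and Poisson's ratio \nu. The inverse contact
% problem recovers p from measured u_z.
\[
  u_z(x, y) = \frac{1 - \nu^{2}}{\pi E}
  \iint_S \frac{p(x', y')}{\sqrt{(x - x')^{2} + (y - y')^{2}}}\,
  \mathrm{d}x'\,\mathrm{d}y'
\]
```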


Subject(s)
Robotics/trends, Skin, Touch/physiology, Algorithms, Electric Capacitance, Equipment Design, Humans, Surface Properties
11.
Data Brief; 22: 109-117, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30581913

ABSTRACT

The article describes a multi-sensor dataset of human-human handovers composed of over 1,000 recordings collected from 18 volunteers. The recordings cover 76 test configurations, which consider different starting positions and roles of the volunteers, objects to pass, and motion strategies. In all experiments, we acquire 6-axis inertial data from two smartwatches, the 15-joint skeleton model of one volunteer from an RGB-D camera, and the upper-body model of both persons using a total of 20 motion capture markers. The recordings are annotated with videos and questionnaires about the perceived characteristics of the handover.

12.
Front Neurorobot; 11: 24, 2017.
Article in English | MEDLINE | ID: mdl-28588473

ABSTRACT

Assistive and rehabilitation devices are a promising and challenging field of recent robotics research. Motivated by societal needs such as aging populations, such devices can support motor functionality and subject training. The design, control, sensing, and assessment of these devices become more sophisticated because of the human in the loop. This paper gives a human-robot interaction perspective on current issues and opportunities in the field. On the topic of control and machine learning, approaches that support but do not distract subjects are reviewed. Options for providing sensory user feedback, currently missing from robotic devices, are outlined. Parallels between device acceptance and affective computing are drawn. Furthermore, requirements for functional assessment protocols that relate to real-world tasks are discussed. In all topic areas, the design of human-oriented frameworks and methods is dominated by challenges related to the close interaction between the human and the robotic device. This paper discusses the aforementioned aspects in order to open up new perspectives for future robotic solutions.

13.
IEEE Trans Syst Man Cybern B Cybern; 39(1): 212-29, 2009 Feb.
Article in English | MEDLINE | ID: mdl-19068439

ABSTRACT

This paper presents muNav, a novel approach to navigation which, with minimal requirements in terms of onboard sensing, memory, and computational power, exhibits way-finding behaviors in very complex environments. The algorithm is intrinsically robust, since it requires neither an internal geometrical representation nor self-localization capabilities. Experiments performed with both simulated and real robots validate the proposed theoretical approach.
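To illustrate what map-free, localization-free way-finding can look like, here is a reactive sketch in the same minimal-requirements spirit (bug-algorithm style). This is emphatically not the muNav algorithm, whose rules the abstract does not spell out.

```python
# Reactive way-finding sketch: steer from two cheap readings only, with
# no map, no pose estimate, and no memory. Illustrative, not muNav.

def step(goal_bearing: float, obstacle_ahead: bool) -> tuple[float, float]:
    """Return (linear, angular) velocity from the bearing to the goal
    (rad, robot frame) and a binary obstacle flag."""
    if obstacle_ahead:
        return 0.05, 0.8             # creep and turn to skirt the obstacle
    return 0.2, 0.5 * goal_bearing   # otherwise steer toward the goal
```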


Subject(s)
Environment, Motion (Physics), Robotics/methods, Algorithms, Artificial Intelligence, Computer Simulation, Reproducibility of Results