Results 1 - 8 of 8
1.
Sensors (Basel); 18(7), 2018 Jul 20.
Article in English | MEDLINE | ID: mdl-30036976

ABSTRACT

To effectively interact with people, social robots need to perceive human behaviors and in turn display their own behaviors using social communication modes such as gestures. Modeling gestures can be difficult due to the high dimensionality of the robot configuration space. Imitation learning can be used to teach a robot to implement multi-jointed arm gestures by directly observing a human teacher's arm movements (for example, using a non-contact 3D sensor) and then mapping these movements onto the robot arms. In this paper, we present a novel imitation learning system with robot self-collision awareness and avoidance. The proposed method uses a kinematic approach with bounding volumes to detect and avoid collisions with the robot itself while performing gesticulations. We conducted experiments with a dual-arm social robot and a 3D sensor to determine the effectiveness of our imitation system in mimicking gestures while avoiding self-collisions.
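As a rough illustration of the bounding-volume idea, the sketch below checks a candidate dual-arm pose for self-collision by approximating each arm with spheres centered at its joint positions, assumed to come from a forward-kinematics step that is not shown. The function names, radius, and example configurations are illustrative assumptions, not details from the paper.

```python
# Sphere-based self-collision check for a dual-arm gesture (illustrative sketch).
import numpy as np

def arms_collide(left_joint_positions, right_joint_positions, radius=0.07):
    """Return True if any bounding sphere of the left arm intersects one of the right arm."""
    for p_left in left_joint_positions:
        for p_right in right_joint_positions:
            if np.linalg.norm(np.asarray(p_left) - np.asarray(p_right)) < 2 * radius:
                return True
    return False

def safe_to_execute(left_joint_positions, right_joint_positions):
    """Gate an imitated gesture before it is sent to the robot's arm controllers."""
    return not arms_collide(left_joint_positions, right_joint_positions)

# Hypothetical 3-joint arm configurations (metres, torso frame).
left = [(0.20, 0.15, 0.40), (0.30, 0.10, 0.35), (0.40, 0.05, 0.30)]
right = [(0.20, -0.15, 0.40), (0.30, -0.10, 0.35), (0.38, 0.02, 0.31)]
print(safe_to_execute(left, right))  # False: the wrist spheres are only ~3.7 cm apart
```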

2.
J Intell Robot Syst; 108(2): 15, 2023.
Article in English | MEDLINE | ID: mdl-37275783

ABSTRACT

Swarm robotic systems comprising members with limited onboard localization capabilities rely on collaborative motion-control strategies to successfully carry out multi-task missions. Such strategies impose constraints on the trajectories of the swarm and require the swarm to be divided into worker robots that accomplish the tasks at hand, and support robots that facilitate the movement of the worker robots. Considering the constraints imposed by these strategies is essential for optimal mission-planning. Existing works have focused on swarms that use leader-based collaborative motion-control strategies for mission execution and are divided into worker and support robots prior to mission-planning. These works optimize the plan of the worker robots and then use a rule-based approach to select the plan of the support robots for movement facilitation, resulting in a sub-optimal plan for the swarm. Herein, we present a mission-planning methodology that concurrently optimizes the plan of the worker and support robots by dividing the mission-planning problem into five stages: division-of-labor, task-allocation of worker robots, worker robot path-planning, movement-concurrency, and movement-allocation. The proposed methodology concurrently searches for the optimal values of the variables of all stages. The methodology is novel as it (1) incorporates the division-of-labor of the swarm into worker and support robots into the mission-planning problem, (2) plans the paths of the swarm robots to allow for concurrent facilitation of multiple independent worker robot group movements, and (3) is applicable to any collaborative swarm motion-control strategy that utilizes support robots. A unique pre-implementation estimator was also developed to determine the possible improvement in mission-execution performance that can be achieved through the proposed methodology, allowing the user to justify the additional computational resources it requires. The estimator uses a machine learning model and estimates this improvement based on the parameters of the mission at hand. Extensive simulated experiments showed that the proposed concurrent methodology improves the mission-execution performance of the swarm by almost 40% compared to the competing sequential methodology that optimizes the plan of the worker robots first and then the plan of the support robots. The developed pre-implementation estimator was shown to achieve an estimation error of less than 5%.
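As a rough sketch of what "concurrent" optimization means here, the code below encodes one candidate solution that spans all five stages and scores it with a placeholder cost, so that a single search loop explores division-of-labor, task-allocation, path, concurrency, and movement-allocation variables together rather than stage by stage. The encoding, cost function, and random search are illustrative assumptions, not the paper's actual formulation or optimizer.

```python
# Concurrent search over all five mission-planning stages (illustrative sketch).
import random

NUM_ROBOTS, NUM_TASKS = 6, 4

def random_plan():
    # Stage 1: division-of-labor (which robots are workers vs. supports)
    workers = sorted(random.sample(range(NUM_ROBOTS), k=random.randint(1, NUM_ROBOTS - 1)))
    supports = [r for r in range(NUM_ROBOTS) if r not in workers]
    # Stage 2: task-allocation of worker robots
    allocation = {t: random.choice(workers) for t in range(NUM_TASKS)}
    # Stage 3: worker robot path-planning (visit order of each worker's tasks)
    tasks_of = {w: [t for t in range(NUM_TASKS) if allocation[t] == w] for w in workers}
    paths = {w: random.sample(ts, k=len(ts)) for w, ts in tasks_of.items()}
    # Stage 4: movement-concurrency (worker movements facilitated together)
    groups = [sorted(random.sample(workers, k=min(2, len(workers))))]
    # Stage 5: movement-allocation (support robots assigned to a movement group)
    support_assignment = {s: 0 for s in supports}
    return dict(workers=workers, supports=supports, allocation=allocation,
                paths=paths, groups=groups, support_assignment=support_assignment)

def mission_cost(plan):
    """Placeholder objective: trade off swarm size used against the longest task sequence."""
    return len(plan["workers"]) + max(len(p) for p in plan["paths"].values())

best = min((random_plan() for _ in range(1000)), key=mission_cost)
print(best["workers"], mission_cost(best))
```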

3.
IEEE Trans Cybern; 51(12): 5954-5968, 2021 Dec.
Article in English | MEDLINE | ID: mdl-32149676

ABSTRACT

For social robots to effectively engage in human-robot interaction (HRI), they need to be able to interpret human affective cues and to respond appropriately by displaying their own emotional behavior. In this article, we present a novel multimodal emotional HRI architecture to promote natural and engaging bidirectional emotional communication between a social robot and a human user. User affect is detected using a unique combination of body language and vocal intonation, and multimodal classification is performed using a Bayesian network. The Emotionally Expressive Robot utilizes the user's affect to determine its own emotional behavior via an innovative two-layer emotional model consisting of deliberative (hidden Markov model) and reactive (rule-based) layers. The proposed architecture has been implemented on a small humanoid robot to perform diet and fitness counseling during HRI. To evaluate the Emotionally Expressive Robot's effectiveness, a Neutral Robot, which can detect user affect but lacks an emotional display, was also developed. A between-subjects HRI experiment was conducted with both types of robots. Extensive results have shown that both robots can effectively detect user affect during real-time HRI. However, the Emotionally Expressive Robot can appropriately determine its own emotional response based on the situation at hand and, therefore, induces more positive user valence and less negative arousal than the Neutral Robot.
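As a rough sketch of the multimodal fusion idea, the code below combines one body-language observation and one vocal-intonation observation into a posterior over user affect under a naive independence assumption, and then maps the result to a robot display through a simple rule-based (reactive) layer; the deliberative HMM layer is omitted. The labels, priors, and likelihood values are illustrative assumptions, not values from the paper.

```python
# Two-modality affect fusion plus a rule-based reactive response (illustrative sketch).
AFFECT_STATES = ("positive", "neutral", "negative")
PRIOR = {"positive": 1 / 3, "neutral": 1 / 3, "negative": 1 / 3}

# Hypothetical P(observation | affect) tables for each modality.
BODY_LIKELIHOOD = {"open_posture": {"positive": 0.7, "neutral": 0.2, "negative": 0.1},
                   "closed_posture": {"positive": 0.1, "neutral": 0.3, "negative": 0.6}}
VOICE_LIKELIHOOD = {"rising_tone": {"positive": 0.6, "neutral": 0.3, "negative": 0.1},
                    "flat_tone": {"positive": 0.2, "neutral": 0.5, "negative": 0.3}}

def classify_affect(body_obs, voice_obs):
    """Posterior over user affect, assuming the two modalities are conditionally independent."""
    scores = {a: PRIOR[a] * BODY_LIKELIHOOD[body_obs][a] * VOICE_LIKELIHOOD[voice_obs][a]
              for a in AFFECT_STATES}
    total = sum(scores.values())
    return {a: s / total for a, s in scores.items()}

def reactive_response(posterior):
    """Rule-based (reactive) layer: mirror clearly detected affect, otherwise stay neutral."""
    best = max(posterior, key=posterior.get)
    if posterior[best] < 0.5:
        return "attentive"
    return {"positive": "smile", "neutral": "attentive", "negative": "concerned"}[best]

posterior = classify_affect("closed_posture", "flat_tone")
print(posterior, reactive_response(posterior))  # negative affect dominates -> "concerned"
```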


Subjects
Robotics; Bayes Theorem; Communication; Emotions; Humans; Social Interaction
4.
IEEE Trans Cybern; 50(2): 856-868, 2020 Feb.
Article in English | MEDLINE | ID: mdl-30369464

ABSTRACT

Locating a mobile target that cannot be tracked in real time is pertinent to numerous time-critical applications, such as wilderness search and rescue. This paper proposes a hybrid approach to this dynamic problem, in which both static and mobile sensors are utilized to detect the target. The approach is novel in that the team of robots used to deploy a static-sensor network also actively searches for the target via on-board sensors. Synergy is achieved through: 1) optimal deployment planning of the static-sensor network and 2) optimal routing and motion planning of the robots for both network deployment and target search. The static-sensor network is planned first to maximize the likelihood of target detection while ensuring (temporal and spatial) unbiasedness with respect to target motion. Robot motions are subsequently planned in two stages: 1) route planning and 2) trajectory planning. In the first stage, given a static-sensor network configuration, robot routes are planned to maximize the amount of spare time available to the mobile agents/sensors for target search in between (just-in-time) static-sensor deployments. In the second stage, given the robot routes (i.e., optimal sequences of sensor delivery locations and times), the corresponding robot trajectories are planned to make effective use of any spare time the mobile agents may have to search for the target. The proposed search strategy was validated through extensive simulations, some of which are presented in detail here. An analysis of the method's performance in terms of target-search success is also included.
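To make the route-planning objective concrete, the sketch below computes the spare time a single robot would have along a given sequence of just-in-time sensor deployments, i.e., the slack it could spend searching for the target between drops. The Deployment structure, deadlines, and unit-speed straight-line travel model are illustrative assumptions, not the paper's formulation.

```python
# Spare time available for target search along a deployment route (illustrative sketch).
from dataclasses import dataclass

@dataclass
class Deployment:
    location: tuple   # (x, y) drop point of a static sensor
    deadline: float   # latest deployment time that keeps the network plan valid

def spare_time_along_route(start, route, speed=1.0, start_time=0.0):
    """Total slack available for target search, or None if a deadline is missed."""
    dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    t, slack, prev = start_time, 0.0, start
    for d in route:
        t += dist(prev, d.location) / speed
        if t > d.deadline:
            return None            # route infeasible: a just-in-time deadline is missed
        slack += d.deadline - t    # time that can be spent searching before this drop
        t = d.deadline             # deploy exactly on time ("just in time")
        prev = d.location
    return slack

route = [Deployment((0, 3), deadline=5.0), Deployment((4, 3), deadline=12.0)]
print(spare_time_along_route(start=(0, 0), route=route))  # 5.0: 2.0 + 3.0 of slack
```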

5.
IEEE Trans Syst Man Cybern B Cybern; 37(1): 190-198, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17278571

ABSTRACT

This paper presents a novel agent-based method for the dynamic coordinated selection and positioning of active-vision cameras for the simultaneous surveillance of multiple objects of interest as they travel through a cluttered environment with a priori unknown trajectories. The proposed system dynamically adjusts not only the orientation but also the position of the cameras in order to maximize the system's performance by avoiding occlusions and acquiring images with preferred viewing angles. Sensor selection and positioning are accomplished through an agent-based approach. The proposed sensing-system reconfiguration strategy has been verified via simulations and implemented on an experimental prototype setup for automated facial recognition. Both the simulations and experimental analyses have shown that the use of dynamic sensors along with an effective online dispatching strategy can tangibly improve the surveillance performance of a sensing system.
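As a loose illustration of the agent-based assignment idea, each camera agent below bids a view-quality score for each object of interest (zero when occluded), and each object is claimed by the highest bidder; camera repositioning is not modeled. The scoring rule, the crude midpoint occlusion test, and all coordinates are illustrative assumptions, not the paper's method.

```python
# Agent-style camera-to-target bidding and assignment (illustrative sketch).
def view_quality(camera, target, obstacles):
    """Crude score: inverse distance, zeroed when an obstacle blocks the line of sight."""
    d = ((camera[0] - target[0]) ** 2 + (camera[1] - target[1]) ** 2) ** 0.5
    mid = ((camera[0] + target[0]) / 2, (camera[1] + target[1]) / 2)
    for obs in obstacles:
        # hypothetical occlusion test: obstacle point very close to the segment midpoint
        if ((obs[0] - mid[0]) ** 2 + (obs[1] - mid[1]) ** 2) ** 0.5 < 0.5:
            return 0.0
    return 1.0 / (1.0 + d)

def assign_cameras(cameras, targets, obstacles):
    """Each target is claimed by the camera agent submitting the highest bid."""
    assignment = {}
    for t in targets:
        bids = {c: view_quality(c, t, obstacles) for c in cameras}
        best = max(bids, key=bids.get)
        assignment[t] = best if bids[best] > 0 else None
    return assignment

cameras = [(0, 0), (10, 0)]
targets = [(5, 5), (9, 2)]
print(assign_cameras(cameras, targets, obstacles=[(2.5, 2.5)]))
```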


Subjects
Artificial Intelligence; Biometry/methods; Face/anatomy & histology; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Pattern Recognition, Automated/methods; Algorithms; Humans; Security Measures
6.
IEEE Trans Syst Man Cybern B Cybern; 36(6): 1432-1441, 2006 Dec.
Article in English | MEDLINE | ID: mdl-17186819

ABSTRACT

This correspondence presents a novel online trajectory-planning method for the autonomous robotic interception of moving targets, i.e., position and velocity matching (also referred to as rendezvous), in the presence of dynamic obstacles. The proposed time-optimal interception method is a hybrid algorithm that augments a novel rendezvous-guidance (RG) technique with the velocity-obstacle approach to obstacle avoidance first reported by Fiorini and Shiller. The obstacle-avoidance algorithm itself could not be used in its original form and had to be modified to ensure that the online planned path deviates minimally from the one generated by the RG algorithm. Extensive simulation and experimental analyses, some of which are reported in this correspondence, have clearly demonstrated the tangible time efficiency of the proposed interception method.
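As a rough sketch of how a guidance command can be filtered through the velocity-obstacle idea, the code below rejects candidate robot velocities whose relative velocity points into the collision cone of a single circular obstacle and returns the first collision-free candidate; the candidates are assumed to be ranked by some rendezvous-guidance law, which is not shown. The cone test, parameters, and example values are illustrative assumptions, not the paper's modified algorithm.

```python
# Velocity-obstacle filtering of guidance velocity candidates (illustrative sketch).
import math

def in_velocity_obstacle(v_rel, p_rel, combined_radius):
    """True if the relative velocity points into the collision cone toward the obstacle."""
    dist = math.hypot(*p_rel)
    if dist <= combined_radius:
        return True                      # already overlapping
    half_angle = math.asin(combined_radius / dist)
    angle_to_obs = math.atan2(p_rel[1], p_rel[0])
    v_angle = math.atan2(v_rel[1], v_rel[0])
    diff = abs((v_angle - angle_to_obs + math.pi) % (2 * math.pi) - math.pi)
    return diff < half_angle and math.hypot(*v_rel) > 0.0

def choose_velocity(candidates, obstacle_pos, obstacle_vel, robot_pos, combined_radius):
    """Pick the first candidate (ordered best-first by the guidance law) that is collision-free."""
    p_rel = (obstacle_pos[0] - robot_pos[0], obstacle_pos[1] - robot_pos[1])
    for v in candidates:
        v_rel = (v[0] - obstacle_vel[0], v[1] - obstacle_vel[1])
        if not in_velocity_obstacle(v_rel, p_rel, combined_radius):
            return v
    return None

# Hypothetical candidate velocities, ranked best-first by a guidance law (not shown).
print(choose_velocity([(1.0, 0.0), (0.8, 0.6)], obstacle_pos=(5.0, 0.0),
                      obstacle_vel=(0.0, 0.0), robot_pos=(0.0, 0.0), combined_radius=1.0))
```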

7.
IEEE Trans Cybern; 45(9): 1784-1797, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25376050

ABSTRACT

This paper presents a novel strategy for the online planning of optimal motion paths for a team of autonomous ground robots engaged in wilderness search and rescue (WiSAR). The proposed strategy, which forms part of an overall multirobot coordination (MRC) methodology, addresses the dynamic nature of WiSAR by: 1) planning initial, time-optimal, piecewise-polynomial paths for all robots; 2) implementing and regularly evaluating the optimality of the paths through a set of checks that gauge the feasibility of path completion within the available time; and 3) replanning paths online whenever deemed necessary. The MRC methodology is guided by the fundamental principle of maintaining the optimal deployment of the robots throughout the search. The proposed path-planning strategy is illustrated through a simulated, realistic WiSAR example and compared to an alternative, nonprobabilistic approach.
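As a rough sketch of the feasibility check and replanning trigger described above, the code below estimates whether the remaining portion of a path can still be completed in the time available and hands control to a replanning callback when it cannot. The polyline path representation, safety margin, and replan callback are illustrative assumptions, not the paper's actual checks.

```python
# Path-completion feasibility check with a replanning trigger (illustrative sketch).
def remaining_length(waypoints, current_index):
    """Length of the yet-to-be-travelled polyline segments."""
    segs = zip(waypoints[current_index:], waypoints[current_index + 1:])
    return sum(((b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2) ** 0.5 for a, b in segs)

def path_still_feasible(waypoints, current_index, time_left, max_speed, margin=1.1):
    """Can the robot finish the path in the time available, with a safety margin?"""
    return remaining_length(waypoints, current_index) * margin <= max_speed * time_left

def monitor(waypoints, current_index, time_left, max_speed, replan):
    """Regularly-called check; replanning is delegated to the provided callback."""
    if not path_still_feasible(waypoints, current_index, time_left, max_speed):
        return replan(waypoints, current_index, time_left)
    return waypoints

path = [(0, 0), (10, 0), (10, 10)]
print(path_still_feasible(path, current_index=0, time_left=15.0, max_speed=1.5))  # True
print(path_still_feasible(path, current_index=0, time_left=10.0, max_speed=1.5))  # False
```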

8.
IEEE Trans Syst Man Cybern B Cybern; 41(5): 1287-1298, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21571612

ABSTRACT

This paper presents a novel modular methodology for predicting a lost person's (motion) behavior for autonomous coordinated multirobot wilderness search and rescue. The new concept of isoprobability curves is introduced and developed; these curves provide a unique mechanism for identifying the target's probable location at any given time within the search area while accounting for influences such as terrain topology, target physiology and psychology, clues found, etc. The isoprobability curves are propagated over time and space. The significant tangible benefit of the proposed target-motion prediction methodology is demonstrated through a comparison with a nonprobabilistic approach, as well as through a simulated, realistic wilderness-search scenario.
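As a loose illustration of the isoprobability-curve concept, the sketch below grows a curve outward from the last known position: along each bearing, the radius at time t is a chosen quantile of the lost person's walking speed scaled by a direction-dependent terrain factor. The speed model, terrain factor, and probability interpretation are illustrative assumptions, not the paper's propagation mechanism.

```python
# Propagating a simple isoprobability curve from the last known position (illustrative sketch).
import math

def isoprobability_curve(last_known, t_hours, speed_quantile_kmh, terrain_factor,
                         num_bearings=36):
    """Return (x, y) points of the curve enclosing the target with the chosen probability."""
    points = []
    for k in range(num_bearings):
        bearing = 2 * math.pi * k / num_bearings
        r = t_hours * speed_quantile_kmh * terrain_factor(bearing)   # km
        points.append((last_known[0] + r * math.cos(bearing),
                       last_known[1] + r * math.sin(bearing)))
    return points

# Hypothetical terrain: movement to the north (bearing near pi/2) is halved by steep slopes.
terrain = lambda b: 0.5 if abs(b - math.pi / 2) < math.pi / 6 else 1.0
curve_2h = isoprobability_curve(last_known=(0.0, 0.0), t_hours=2.0,
                                speed_quantile_kmh=3.0, terrain_factor=terrain)
print(len(curve_2h), curve_2h[0])   # 36 points; the first lies 6 km due east
```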


Subjects
Models, Biological; Motor Activity/physiology; Rescue Work/methods; Robotics/instrumentation; Wilderness; Child, Preschool; Computer Simulation; Cybernetics; Humans