Results 1 - 7 of 7
1.
Front Robot AI ; 11: 1340334, 2024.
Article in English | MEDLINE | ID: mdl-39092214

ABSTRACT

Learning from demonstration is an approach that allows users to personalize a robot's tasks. While demonstrations often focus on conveying the robot's motion or task plans, they can also communicate user intentions through object attributes in manipulation tasks. For instance, users might want to teach a robot to sort fruits and vegetables into separate boxes or to place cups next to plates of matching colors. This paper introduces a novel method that enables robots to learn the semantics of user demonstrations, with a particular emphasis on the relationships between object attributes. In our approach, users demonstrate the essential task steps by manually guiding the robot through the necessary sequence of poses. We reduce the amount of data by using only robot poses instead of full trajectories, which allows us to focus on the task's goals, specifically on the objects related to these goals. At each step, known as a keyframe, we record the end-effector pose, the object poses, and the object attributes. However, the number of keyframes saved in each demonstration can vary with the user's decisions. This variability can lead to inconsistencies in the significance of individual keyframes, which complicates aligning keyframes across demonstrations to generalize the robot's motion and the user's intention. Our method addresses this issue by teaching the higher-level goals of the task using only the required keyframes and the relevant objects. It aims to teach the rationale behind object selection for a task and to generalize this reasoning to environments with previously unseen objects. We validate the proposed method on three manipulation tasks, each targeting a different object attribute constraint. In the reproduction phase, we show that even when the robot encounters previously unseen objects, it can generalize the user's intention and execute the task.
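For illustration, here is a minimal sketch of a keyframe record and of inferring which attribute relation (e.g., matching color) holds across all demonstrated keyframes so it can be applied to unseen objects. The `Keyframe` schema, the attribute names, and the `shared_attribute_relations` helper are hypothetical, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Keyframe:
    """One step of a kinesthetic demonstration (hypothetical schema)."""
    ee_pose: tuple                                       # end-effector pose, e.g. (x, y, z, qx, qy, qz, qw)
    object_poses: dict = field(default_factory=dict)     # object id -> pose
    object_attrs: dict = field(default_factory=dict)     # object id -> {"type": ..., "color": ..., "size": ...}

def shared_attribute_relations(keyframes, manipulated, reference):
    """Return the attributes whose values agree between the manipulated object and
    the reference object in every keyframe (e.g. {'color'} for color matching)."""
    shared = {"type", "color", "size"}
    for kf in keyframes:
        a, b = kf.object_attrs[manipulated], kf.object_attrs[reference]
        # keep only the attribute relations that hold in this keyframe as well
        shared = {k for k in shared if a.get(k) == b.get(k)}
    return shared
```

At reproduction time, such an inferred relation could be evaluated against the attributes of new objects to pick a target, which is the gist of generalizing the user's intention to unseen objects.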

2.
Sensors (Basel) ; 21(9)2021 May 01.
Article in English | MEDLINE | ID: mdl-34062836

ABSTRACT

The tracking of Vulnerable Road Users (VRUs) is one of the vital tasks of autonomous cars. This includes estimating the positions and velocities of the VRUs surrounding a car. To do this, VRU trackers must rely on measurements received from sensors. However, even the most accurate VRU trackers are affected by measurement noise, background clutter, and the interaction and occlusion of VRUs. Such uncertainties can cause errors in data association, leading to dangerous situations and potentially even the failure of a tracker. The initialization of data association depends on various parameters. This paper proposes steps that reveal the trade-offs between stochastic model parameters in order to improve the accuracy of data association in autonomous cars. The proposed steps can reduce the number of false tracks; moreover, they are independent of variations in measurement noise and in the number of VRUs. Our initialization can also reduce the lag between the first detection and the initialization of a VRU track. As a proof of concept, the procedure is validated with experiments, simulation data, and the publicly available KITTI dataset. Moreover, we compare our initialization method with the most popular approaches found in the literature. The results show that tracking precision and accuracy increase by up to 3.6% with the proposed initialization compared to state-of-the-art VRU tracking algorithms.
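To illustrate the kind of trade-off involved, here is a minimal sketch of a generic M-of-N track confirmation rule with distance gating. This is an illustration of the concept, not the paper's procedure; the gate size and thresholds below are arbitrary.

```python
import numpy as np

def confirm_tracks(detections_per_frame, gate=1.5, m_required=3, n_window=5):
    """Generic M-of-N track confirmation sketch: a tentative track is confirmed
    once it is associated with at least `m_required` detections within
    `n_window` frames; `gate` (metres) limits which detections may associate."""
    tentative, confirmed = [], []
    for frame in detections_per_frame:
        for trk in tentative:
            trk["age"] += 1
        for det in frame:
            det = np.asarray(det, dtype=float)
            # associate with the nearest tentative track inside the gate
            dists = [np.linalg.norm(det - t["pos"]) for t in tentative]
            if dists and min(dists) < gate:
                trk = tentative[int(np.argmin(dists))]
                trk["pos"], trk["hits"] = det, trk["hits"] + 1
            else:
                tentative.append({"pos": det, "hits": 1, "age": 1})
        # promote tracks that reached the confirmation threshold, prune stale ones
        confirmed += [t for t in tentative if t["hits"] >= m_required]
        tentative = [t for t in tentative
                     if t["hits"] < m_required and t["age"] <= n_window]
    return confirmed
```

Raising `m_required` suppresses false tracks caused by clutter but increases the lag between the first detection and a confirmed track; lowering it does the opposite, which is exactly the trade-off the abstract refers to.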

3.
Sensors (Basel) ; 21(8)2021 Apr 16.
Article in English | MEDLINE | ID: mdl-33923735

ABSTRACT

In this work, we propose and evaluate a pose-graph optimization-based, real-time multi-sensor fusion framework for vehicle positioning using low-cost automotive-grade sensors. Pose-graphs can model multiple absolute and relative vehicle positioning sensor measurements and can be optimized using nonlinear techniques. We build pose-graphs from the measurements of a precise stereo-camera-based visual odometry system, a robust odometry system using the in-vehicle velocity and yaw-rate sensors, and an automotive-grade GNSS receiver. Our evaluation is based on a dataset with 180 km of vehicle trajectories recorded in highway, urban, and rural areas, accompanied by post-processed Real-Time Kinematic GNSS as ground truth. We compare the architecture's performance for (i) vehicle odometry and GNSS fusion and (ii) stereo visual odometry, vehicle odometry, and GNSS fusion, under both offline and real-time optimization strategies. The results show a 20.86% reduction in the standard deviation of the localization error and a significant reduction in outliers when compared with automotive-grade GNSS receivers.
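A toy sketch of the underlying idea, assuming SciPy is available: relative odometry constraints and absolute GNSS constraints over a handful of 2-D positions are fused by nonlinear least squares. The measurements and noise values below are invented for illustration and are far simpler than the paper's full pose-graph over vehicle poses.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy pose graph: poses are 2-D positions, odometry gives relative displacements,
# GNSS gives noisy absolute positions. All numbers below are made up.
odom = [np.array([1.0, 0.0]), np.array([1.0, 0.1]), np.array([0.9, 0.0])]   # relative factors
gnss = {0: np.array([0.0, 0.0]), 3: np.array([2.8, 0.2])}                   # absolute factors
sigma_odom, sigma_gnss = 0.05, 0.5

def residuals(flat):
    poses = flat.reshape(-1, 2)
    res = []
    for i, d in enumerate(odom):                  # relative (odometry) constraints
        res.append((poses[i + 1] - poses[i] - d) / sigma_odom)
    for i, z in gnss.items():                     # absolute (GNSS) constraints
        res.append((poses[i] - z) / sigma_gnss)
    return np.concatenate(res)

initial = np.zeros((len(odom) + 1) * 2)           # trivial initial guess
optimized = least_squares(residuals, initial).x.reshape(-1, 2)
print(optimized)
```

Weighting each residual by its sensor's noise is what lets the accurate relative odometry smooth the noisy absolute GNSS while the GNSS anchors the trajectory globally.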

4.
Sensors (Basel) ; 21(2)2021 Jan 09.
Article in English | MEDLINE | ID: mdl-33435468

ABSTRACT

The particle filter was popularized in the early 1990s and has been used to solve estimation problems ever since. The standard algorithm can be understood and implemented with limited effort thanks to the widespread availability of tutorial material and code examples. In the years since, extensive research has advanced the standard particle filter algorithm to improve its performance and applicability in various ways. As a result, selecting and implementing an advanced version of the particle filter that goes beyond the standard algorithm and fits a specific estimation problem requires either a thorough understanding of the field or reviewing a large amount of literature. The latter can be very time-consuming, especially for those with limited hands-on experience. The lack of implementation details in theory-oriented papers complicates this task even further. The goal of this tutorial is to help readers familiarize themselves with the key concepts of advanced particle filter algorithms and to select and implement the right particle filter for the estimation problem at hand. It acts as a single entry point that provides a theoretical overview of the filter, its assumptions, and solutions to various challenges encountered when applying particle filters. In addition, it includes a running example that demonstrates and implements many of these challenges and solutions.
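For reference, here is a minimal sketch of the standard (bootstrap) particle filter the tutorial builds on, applied to a 1-D random-walk state with a direct, noisy position measurement. This is a generic textbook version with arbitrary noise parameters, not the tutorial's running example.

```python
import numpy as np

def particle_filter(measurements, n_particles=500,
                    process_std=0.5, meas_std=1.0, seed=0):
    """Standard bootstrap particle filter for a 1-D random-walk state."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)        # initial particle set
    estimates = []
    for z in measurements:
        # 1. propagate each particle through the process model
        particles += rng.normal(0.0, process_std, n_particles)
        # 2. weight particles by the measurement likelihood
        weights = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
        weights /= weights.sum()
        # 3. weighted-mean estimate, then systematic resampling to avoid degeneracy
        estimates.append(np.sum(weights * particles))
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        idx = np.minimum(np.searchsorted(np.cumsum(weights), positions),
                         n_particles - 1)
        particles = particles[idx]
    return estimates
```

The advanced variants surveyed in such tutorials mostly modify steps 1-3, for example by using better proposal distributions, adaptive resampling, or Rao-Blackwellization.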

5.
Sensors (Basel) ; 20(23)2020 Dec 03.
Article in English | MEDLINE | ID: mdl-33287292

ABSTRACT

Estimating accurate positions of multiple pedestrians is a critical task in robotics and autonomous cars. We propose a tracker that exploits typical human motion patterns to track multiple pedestrians. The paper assumes that the legs' flexion and extension angles change approximately periodically during human motion, and a Fourier series is fitted to describe this motion, that is, the position and velocity of the hip, knee, and ankle. Our tracker receives the positions of the ankle, knee, and hip as measurements. As a proof of concept, we compare our tracker with state-of-the-art methods. The proposed models have been validated on experimental data, the Human Gait Database (HuGaDB), and the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) tracking benchmark. The results indicate that our tracker is able to estimate the flexion and extension angles with a precision of 90.97%. Moreover, the comparison shows that tracking precision increases by up to 1.3% with the proposed tracker compared to a constant-velocity-based tracker.
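A minimal sketch of the core fitting step, a truncated Fourier series fitted to a periodic joint-angle signal by linear least squares. The gait period, harmonic count, and synthetic data below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def fit_fourier(t, angle, n_harmonics=3, period=1.1):
    """Fit a truncated Fourier series to a periodic joint-angle signal."""
    w = 2.0 * np.pi / period
    # design matrix columns: [1, cos(k*w*t), sin(k*w*t)] for k = 1..n_harmonics
    columns = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        columns += [np.cos(k * w * t), np.sin(k * w * t)]
    A = np.column_stack(columns)
    coeffs, *_ = np.linalg.lstsq(A, angle, rcond=None)
    return coeffs, A @ coeffs            # Fourier coefficients and fitted angles

# Example on synthetic data: a noisy, roughly periodic knee-angle trace.
t = np.linspace(0.0, 3.3, 300)
angle = 20 * np.sin(2 * np.pi * t / 1.1) + np.random.default_rng(0).normal(0, 1, t.size)
coeffs, fitted = fit_fourier(t, angle)
```

Once fitted, the series and its time derivative give smooth position and velocity predictions for the joints, which a tracker can use as a motion model instead of a plain constant-velocity assumption.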

6.
Sensors (Basel) ; 16(10)2016 Oct 11.
Article in English | MEDLINE | ID: mdl-27727171

ABSTRACT

The number of perception sensors on automated vehicles is increasing due to the growing number and complexity of advanced driver assistance system functions. Furthermore, fail-safe systems require redundancy, which increases the number of sensors even further. A one-size-fits-all multisensor data fusion architecture is not realistic due to the enormous diversity of vehicles, sensors, and applications. As an alternative, this work presents a methodology for effectively arriving at an implementation that builds a consistent model of a vehicle's surroundings. The methodology is accompanied by a software architecture; this combination minimizes the effort required to update the multisensor data fusion system whenever sensors or applications are added or replaced. A series of real-world experiments involving different sensors and algorithms demonstrates the methodology and the software architecture.
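A rough sketch of the kind of modularity such a methodology and architecture aim for, with a hypothetical `SensorSource` interface and `EnvironmentModel` registry; this is not the paper's actual software design.

```python
from abc import ABC, abstractmethod

class SensorSource(ABC):
    """Hypothetical common interface: each sensor driver converts its raw data
    into observations expressed in a shared reference frame."""
    @abstractmethod
    def observations(self):
        ...

class EnvironmentModel:
    """Fuses observations from any number of registered sources into one
    consistent model of the vehicle's surroundings."""
    def __init__(self):
        self.sources, self.objects = [], []

    def register(self, source: SensorSource):
        # adding or replacing a sensor only requires registering a new source
        self.sources.append(source)

    def update(self):
        for source in self.sources:
            for obs in source.observations():
                self.objects.append(obs)   # placeholder for association and filtering
        return self.objects
```

The point of such an interface is that applications consume the fused environment model only, so swapping a sensor or adding an application does not ripple through the rest of the system.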

7.
Open Med Inform J ; 2: 92-104, 2008.
Article in English | MEDLINE | ID: mdl-19415138

ABSTRACT

This paper investigates the impact of fast parameter identification methods, which do not require any forward simulations, on model-based glucose control, using retrospective data from the Christchurch Hospital Intensive Care Unit. The integral-based identification method has previously been clinically validated and extensively applied in a number of biomedical applications, and it is a crucial element of the model-based therapeutics approach presented here. Common non-linear regression and gradient-descent approaches are too computationally intensive and are not suitable for the glucose control applications presented. The main focus of this paper is on better characterizing and understanding the importance of the integral formulation and the effect it has on model-based drug therapy control. As a comparison, a potentially more natural derivative formulation, which has the same computational speed advantages, is investigated and shown to become unstable in the presence of modelling error, which is always present clinically. The integral method remains robust.
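A minimal sketch of the integral-based idea on a toy one-parameter glucose model; the model, symbols, and data handling below are simplified assumptions for illustration, not the clinical model used in the paper.

```python
import numpy as np

def identify_pg(t, G, u):
    """Integral-based identification sketch for a toy model dG/dt = -pG*G + u(t).
    Integrating over [t0, t] gives G(t) - G(t0) = -pG*int(G) + int(u), which is
    linear in pG and can be solved directly by least squares, with no forward
    simulation and no numerical derivative of the noisy G data."""
    # cumulative trapezoidal integrals of G and u from t0 to each sample
    int_G = np.concatenate(([0.0], np.cumsum(0.5 * (G[1:] + G[:-1]) * np.diff(t))))
    int_u = np.concatenate(([0.0], np.cumsum(0.5 * (u[1:] + u[:-1]) * np.diff(t))))
    A = -int_G[1:].reshape(-1, 1)            # regressor multiplying pG
    b = (G[1:] - G[0]) - int_u[1:]           # remaining known terms
    pG, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(pG[0])
```

Because the parameter appears linearly after integration, identification reduces to a single linear solve per parameter set; a derivative-based formulation would instead differentiate the noisy glucose measurements, which is what makes it fragile with respect to modelling and measurement error.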
