Results 1 - 20 of 30
1.
Sensors (Basel) ; 24(3)2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38339673

ABSTRACT

Modern visual perception techniques often rely on multiple heterogeneous sensors to achieve accurate and robust estimates. Knowledge of their relative positions is a mandatory prerequisite to accomplish sensor fusion. Typically, this result is obtained through a calibration procedure that correlates the sensors' measurements. In this context, we focus on LiDAR and RGB sensors that exhibit complementary capabilities. Given the sparsity of LiDAR measurements, current state-of-the-art calibration techniques often rely on complex or large calibration targets to resolve the relative pose estimation. As such, the geometric properties of the targets may hinder the calibration procedure in those cases where an ad hoc environment cannot be guaranteed. This paper addresses the problem of LiDAR-RGB calibration using common calibration patterns (i.e., A3 chessboard) with minimal human intervention. Our approach exploits the flatness of the target to find associations between the sensors' measurements, leading to robust features and retrieval of the solution through nonlinear optimization. The results of quantitative and comparative experiments with other state-of-the-art approaches show that our simple schema performs on par or better than existing methods that rely on complex calibration targets.
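The flatness-based association described above rests on recovering the target plane from sparse LiDAR returns. A minimal sketch of such a least-squares plane fit via SVD (the `fit_plane` helper and synthetic data are illustrative, not the paper's actual pipeline):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns a unit normal n and offset d with n.p + d = 0."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value spans the
    # direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return n, -n @ centroid

# Synthetic LiDAR returns on the plane z = 2 with mild range noise.
rng = np.random.default_rng(0)
pts = np.column_stack([
    rng.uniform(-1, 1, 200),
    rng.uniform(-1, 1, 200),
    2.0 + rng.normal(0, 0.005, 200),
])
n, d = fit_plane(pts)
```

With the fitted planes in both the LiDAR and camera frames, point-to-plane residuals can then feed the nonlinear optimization the abstract mentions.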

2.
Sensors (Basel) ; 24(7)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38610366

ABSTRACT

This work addresses the challenge of calibrating multiple solid-state LIDAR systems. The study focuses on three different solid-state LIDAR sensors that implement different hardware designs, leading to distinct scanning patterns for each system. Consequently, detecting corresponding points between the point clouds generated by these LIDAR systems, as required for calibration, is a complex task. To overcome this challenge, this paper proposes a method that involves several steps. First, the measurement data are preprocessed to enhance their quality. Next, features are extracted from the acquired point clouds using the Fast Point Feature Histogram method, which categorizes important characteristics of the data. Finally, the extrinsic parameters are computed using the Fast Global Registration technique. The best set of parameters for the pipeline and the calibration success are evaluated using the normalized root mean square error. In a static real-world indoor scenario, a minimum root mean square error of 7 cm was achieved. Importantly, the paper demonstrates that the presented approach is suitable for online use, indicating its potential for real-time applications. By effectively calibrating the solid-state LIDAR systems and establishing point correspondences, this research contributes to the advancement of multi-LIDAR fusion and facilitates accurate perception and mapping in various fields such as autonomous driving, robotics, and environmental monitoring.
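The normalized root mean square error used above to score registration can be read as nearest-neighbour RMSE scaled by the target cloud's extent. A brute-force sketch under that assumption (the abstract does not state the exact normalisation):

```python
import numpy as np

def nrmse(source, target):
    """RMSE of nearest-neighbour distances from source to target,
    normalised by the diagonal extent of the target cloud."""
    # Brute-force nearest neighbours; fine for small clouds.
    d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(-1)
    rmse = np.sqrt(d2.min(axis=1).mean())
    extent = np.linalg.norm(target.max(axis=0) - target.min(axis=0))
    return rmse / extent

rng = np.random.default_rng(1)
cloud = rng.uniform(0, 1, (100, 3))
```

A perfectly registered cloud scores zero; any residual misalignment raises the score, which is what makes the metric usable for pipeline-parameter selection.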

3.
Sensors (Basel) ; 24(2)2024 Jan 20.
Article in English | MEDLINE | ID: mdl-38276361

ABSTRACT

LiDAR sensors, pivotal in various fields like agriculture and robotics for tasks such as 3D object detection and map creation, are increasingly coupled with thermal cameras to harness heat information. This combination proves particularly effective in adverse conditions like darkness and rain. Ensuring seamless fusion between the sensors necessitates precise extrinsic calibration. Our innovative calibration method leverages human presence during sensor setup movements, eliminating the reliance on dedicated calibration targets. It optimizes extrinsic parameters by employing a novel evolutionary algorithm on a specifically designed loss function that measures human alignment across modalities. Our approach showcases a notable 4.43% improvement in the loss over extrinsic parameters obtained from target-based calibration in the FieldSAFE dataset. This advancement reduces costs related to target creation, saves time in diverse pose collection, mitigates repetitive calibration efforts amid sensor drift or setting changes, and broadens accessibility by obviating the need for specific targets. The adaptability of our method in various environments, like urban streets or expansive farm fields, stems from leveraging the ubiquitous presence of humans. Our method presents an efficient, cost-effective, and readily applicable means of extrinsic calibration, enhancing sensor fusion capabilities in the critical fields reliant on precise and robust data acquisition.
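The abstract's "novel evolutionary algorithm" is not specified; as a generic illustration of evolutionary search over calibration parameters, here is a minimal (1+lambda) evolution strategy on a toy two-parameter loss (all names and settings below are illustrative, not the paper's method):

```python
import numpy as np

def evolve(loss, x0, sigma=0.5, lam=20, iters=80, seed=1):
    """Minimal (1+lambda) evolution strategy: sample Gaussian mutations,
    keep the best candidate, and adapt the step size."""
    rng = np.random.default_rng(seed)
    best = np.asarray(x0, dtype=float)
    best_f = loss(best)
    for _ in range(iters):
        cand = best + sigma * rng.standard_normal((lam, best.size))
        f = np.array([loss(c) for c in cand])
        i = f.argmin()
        if f[i] < best_f:
            best, best_f = cand[i], f[i]
            sigma *= 1.1      # success: widen the search slightly
        else:
            sigma *= 0.85     # failure: tighten around the incumbent
    return best, best_f

# Toy stand-in for a cross-modal human-alignment loss, optimum at (0.3, -0.7).
toy_loss = lambda p: (p[0] - 0.3) ** 2 + (p[1] + 0.7) ** 2
p, f = evolve(toy_loss, [0.0, 0.0])
```

In the paper's setting the parameter vector would be the six extrinsic degrees of freedom and the loss the cross-modal human-alignment measure.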


Subjects
Agriculture, Algorithms, Humans, Calibration, Biological Evolution, Farms
4.
Sensors (Basel) ; 24(12)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38931765

ABSTRACT

The data fusion of a 3-D light detection and ranging (LIDAR) point cloud and a camera image during the creation of a 3-D map is important because it enables more efficient object classification by autonomous mobile robots and facilitates the construction of a fine 3-D model. The principle behind data fusion is the accurate estimation of the LIDAR-camera's external parameters through extrinsic calibration. Although several studies have proposed the use of multiple calibration targets or poses for precise extrinsic calibration, no study has clearly defined the relationship between the target positions and the data fusion accuracy. Here, we strictly investigated the effects of the deployment of calibration targets on data fusion and proposed the key factors to consider in the deployment of the targets in extrinsic calibration. Thereafter, we applied a probability method to perform a global and robust sampling of the camera external parameters. Subsequently, we proposed an evaluation method for the parameters, which utilizes the color ratio of the 3-D colored point cloud map. The derived probability density confirmed the good performance of the deployment method in estimating the camera external parameters. Additionally, the evaluation quantitatively confirmed the effectiveness of our deployments of the calibration targets in achieving high-accuracy data fusion compared with the results obtained using the previous methods.

5.
Sensors (Basel) ; 24(4)2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38400287

ABSTRACT

Accurate calibration between LiDAR and camera sensors is crucial for autonomous driving systems to perceive and understand the environment effectively. Typically, LiDAR-camera extrinsic calibration requires feature alignment and overlapping fields of view. Aligning features from different modalities can be challenging due to noise influence. Therefore, this paper proposes a targetless extrinsic calibration method for monocular cameras and LiDAR sensors that have a non-overlapping field of view. The proposed solution uses pose transformation to establish data association across different modalities. This conversion turns the calibration problem into an optimization problem within a visual SLAM system without requiring overlapping views. To improve performance, line features serve as constraints in visual SLAM. Accurate positions of line segments are obtained by utilizing an extended photometric error optimization method. Moreover, a strategy is proposed for selecting appropriate calibration methods from among several alternative optimization schemes. This adaptive calibration method selection strategy ensures robust calibration performance in urban autonomous driving scenarios with varying lighting and environmental textures while avoiding failures and excessive bias that may result from relying on a single approach.

6.
Sensors (Basel) ; 23(2)2023 Jan 13.
Article in English | MEDLINE | ID: mdl-36679732

ABSTRACT

Robotic systems are evolving to include a large number of sensors and diverse sensor modalities. In order to operate a system with multiple sensors, the geometric transformations between those sensors must be accurately estimated. The process by which these transformations are estimated is known as sensor calibration. Behind every sensor calibration approach is a formulation and a framework. The formulation is the method by which the transformations are estimated. The framework is the set of operations required to carry out the calibration procedure. This paper proposes a novel calibration framework that gives more flexibility, control and information to the user, enhancing the user interface and the user experience of calibrating a robotic system. The framework consists of several visualization and interaction functionalities useful for a calibration procedure, such as the estimation of the initial pose of the sensors, the data collection and labeling, the data review and correction and the visualization of the estimation of the extrinsic and intrinsic parameters. This framework is supported by the Atomic Transformations Optimization Method formulation, referred to as ATOM. Results show that this framework is applicable to various robotic systems with different configurations, number of sensors and sensor modalities. In addition to this, a survey comparing the frameworks of different calibration approaches shows that ATOM provides a very good user experience.


Subjects
Calibration
7.
Sensors (Basel) ; 23(7)2023 Apr 04.
Article in English | MEDLINE | ID: mdl-37050800

ABSTRACT

This paper studies the effect of reference frame selection in sensor-to-sensor extrinsic calibration when formulated as a motion-based hand-eye calibration problem. As the sensor trajectories typically contain some combination of noise, the aim is to determine which selection strategies work best under which noise conditions. Different reference selection options are tested under varying noise conditions in simulations, and the findings are validated with real data from the KITTI dataset. The study is conducted for four state-of-the-art methods, as well as two proposed cost functions for nonlinear optimization. One of the proposed cost functions incorporates outlier rejection to improve calibration performance; it was shown to significantly improve performance in the presence of outliers, and to either match or outperform the other algorithms in other noise conditions. However, the performance gain from reference frame selection was deemed larger than that from algorithm selection. In addition, we show that with realistic noise, the reference frame selection method commonly used in the literature is inferior to other tested options, and that relative error metrics are not reliable for telling which method achieves the best calibration performance.

8.
Sensors (Basel) ; 22(15)2022 Jul 26.
Article in English | MEDLINE | ID: mdl-35898082

ABSTRACT

In this contribution, we present a simple and intuitive approach for estimating the exterior (geometrical) calibration of a Lidar instrument with respect to a camera as well as their synchronization shifting (temporal calibration) during data acquisition. For the geometrical calibration, the 3D rigid transformation of the camera system was estimated with respect to the Lidar frame on the basis of the establishment of 2D to 3D point correspondences. The 2D points were automatically extracted on images by exploiting an AprilTag fiducial marker, while the detection of the corresponding Lidar points was carried out by estimating the center of a custom-made retroreflective target. Both AprilTag and Lidar reflective targets were attached to a planar board (calibration object) following an easy-to-implement set-up, which yielded high accuracy in the determination of the center of the calibration target. After the geometrical calibration procedure, the temporal calibration was carried out by matching the position of the AprilTag to the corresponding Lidar target (after being projected onto the image frame), during the recording of a steadily moving calibration target. Our calibration framework was given as an open-source software implemented in the ROS platform. We have applied our method to the calibration of a four-camera mobile mapping system (MMS) with respect to an integrated Velodyne Lidar sensor and evaluated it against a state-of-the-art chessboard-based method. Although our method was a single-camera-to-Lidar calibration approach, the consecutive calibration of all four cameras with respect to the Lidar sensor yielded highly accurate results, which were exploited in a multi-camera texturing scheme of city point clouds.

9.
Sensors (Basel) ; 22(6)2022 Mar 13.
Article in English | MEDLINE | ID: mdl-35336392

ABSTRACT

In recent years, multi-sensor fusion technology has made enormous progress in 3D reconstruction, surveying and mapping, autonomous driving, and other related fields, and extrinsic calibration is a necessary condition for multi-sensor fusion applications. This paper proposes a 3D LIDAR-to-camera automatic calibration framework based on graph optimization. The system can automatically identify the position of the pattern and build a set of virtual feature point clouds, and can simultaneously complete the calibration of the LIDAR and multiple cameras. To test this framework, a multi-sensor system is formed using a mobile robot equipped with LIDAR, monocular and binocular cameras, and the pairwise calibration of LIDAR with two cameras is evaluated quantitatively and qualitatively. The results show that this method can produce more accurate calibration results than the state-of-the-art method. The average error on the camera normalization plane is 0.161 mm, which outperforms existing calibration methods. Due to the introduction of graph optimization, the original point cloud is also optimized while optimizing the external parameters between the sensors, which can effectively correct the errors caused during data collection, so it is also robust to bad data.

10.
Sensors (Basel) ; 22(19)2022 Sep 23.
Article in English | MEDLINE | ID: mdl-36236333

ABSTRACT

Three-dimensional light detection and ranging (LiDAR) sensors have received much attention in the field of autonomous navigation owing to their accurate, robust, and rich geometric information. Autonomous vehicles are typically equipped with multiple 3D LiDARs because there are many commercially available low-cost 3D LiDARs. Extrinsic calibration of multiple LiDAR sensors is essential in order to obtain consistent geometric information. This paper presents a systematic procedure for the extrinsic calibration of multiple 3D LiDAR sensors using plane objects. At least three independent planes are required within the common field of view of the LiDAR sensors. The planes satisfying the condition can easily be found on objects such as the ground, walls, or columns in indoor and outdoor environments. Therefore, the proposed method does not require environmental modifications such as using artificial calibration objects. Multiple LiDARs typically have different viewpoints to reduce blind spots. This situation increases the difficulty of the extrinsic calibration using conventional registration algorithms. We suggest a plane registration method for cases in which correspondences are not known. The entire calibration process can easily be automated using the proposed registration technique. The presented experimental results clearly show that the proposed method generates more accurate extrinsic parameters than conventional point cloud registration methods.
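For intuition, the closed-form core of plane-based extrinsic calibration (once plane correspondences are known) is compact: with planes written as n.p = d in each LiDAR frame, the rotation aligns the normals (Kabsch) and the translation falls out of a small linear system. This is the generic textbook construction under the convention p_b = R p_a + t, not the paper's registration method for unknown correspondences:

```python
import numpy as np

def pose_from_planes(na, da, nb, db):
    """Relative pose (p_b = R @ p_a + t) from >= 3 plane pairs, where the
    i-th plane is n.p = d with row na[i], da[i] in frame A and nb[i], db[i]
    in frame B. Then nb_i = R na_i and nb_i . t = db_i - da_i."""
    # Kabsch: proper rotation best aligning the two sets of unit normals.
    M = na.T @ nb
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    # Translation from the stacked linear constraints nb @ t = db - da.
    t, *_ = np.linalg.lstsq(nb, db - da, rcond=None)
    return R, t
```

At least three planes with linearly independent normals are needed, which matches the abstract's requirement of three independent planes in the common field of view.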

11.
Sensors (Basel) ; 22(22)2022 Nov 17.
Article in English | MEDLINE | ID: mdl-36433493

ABSTRACT

RGB and depth cameras are extensively used for the 3D tracking of human pose and motion. Typically, these cameras calculate a set of 3D points representing the human body as a skeletal structure. The tracking capabilities of a single camera are often affected by noise and inaccuracies due to occluded body parts. Multiple-camera setups offer a solution to maximize coverage of the captured human body and to minimize occlusions. According to best practices, fusing information across multiple cameras typically requires spatio-temporal calibration. First, the cameras must synchronize their internal clocks. This is typically performed by physically connecting the cameras to each other using an external device or cable. Second, the pose of each camera relative to the other cameras must be calculated (extrinsic calibration). State-of-the-art methods use specialized calibration sessions and devices such as a checkerboard to perform calibration. In this paper, we introduce an approach to the spatio-temporal calibration of multiple cameras which is designed to run on-the-fly without specialized devices or equipment, requiring only the motion of the human body in the scene. As an example, the system is implemented and evaluated using Microsoft Azure Kinect. The study shows that the accuracy and robustness of this approach are on par with state-of-the-art practices.
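The clock-offset half of such on-the-fly calibration is commonly solved by cross-correlating a motion signal (for example, a joint's speed) observed by both cameras. A generic sketch of that idea, not necessarily the paper's estimator:

```python
import numpy as np

def time_offset(sig_a, sig_b):
    """Return the integer lag k (in samples) such that sig_b[n] ~ sig_a[n + k],
    found at the peak of the normalised cross-correlation."""
    a = (sig_a - sig_a.mean()) / sig_a.std()
    b = (sig_b - sig_b.mean()) / sig_b.std()
    corr = np.correlate(a, b, mode="full")
    return corr.argmax() - (len(b) - 1)

# A noisy joint-speed trace seen by camera A, and the same trace delayed
# by 5 samples as seen by camera B.
rng = np.random.default_rng(2)
speed_a = rng.standard_normal(300)
speed_b = np.roll(speed_a, 5)
```

Dividing the recovered sample lag by the frame rate gives the clock offset, replacing the physical sync cable the abstract mentions.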


Subjects
Calibration, Humans, Motion (Physics)
12.
Sensors (Basel) ; 22(1)2021 Dec 24.
Article in English | MEDLINE | ID: mdl-35009647

ABSTRACT

Light Detection and Ranging (LiDAR) is a sensor that uses a laser to represent the surrounding environment as three-dimensional information. Thanks to the development of LiDAR, LiDAR-based applications are being actively used in autonomous vehicles. In order to effectively use the information coming from LiDAR, extrinsic calibration, which finds the translation and rotation relationship between the LiDAR coordinate frame and the vehicle coordinate frame, is essential. Therefore, many studies on LiDAR extrinsic calibration are steadily in progress. The performance index (PI) of the calibration parameter is a value that quantitatively indicates whether the obtained calibration parameter is close to the true value or not. In order to effectively use the obtained calibration parameter, it is important to validate the parameter through the PI. Therefore, in this paper, we propose an algorithm to obtain the performance index for the calibration parameter between LiDAR and the motion sensor. This performance index is experimentally verified in various environments by Monte Carlo simulation and validated using CarMaker simulation data and real data. As a result of this verification, the PI of the calibration parameter obtained through the proposed algorithm takes its smallest value when the calibration parameter equals the true value, and increases as error is added to the true value. In other words, it has been shown that the PI is convex with respect to the calibration parameter. In addition, we confirm that the PI obtained using the proposed algorithm provides information on the effect of the calibration parameters on mapping and localization.

13.
Sensors (Basel) ; 21(4)2021 Feb 05.
Article in English | MEDLINE | ID: mdl-33562538

ABSTRACT

Knowledge of precise camera poses is vital for multi-camera setups. Camera intrinsics can be obtained for each camera separately in lab conditions. For fixed multi-camera setups, the extrinsic calibration can only be done in situ. Usually, some markers are used, like checkerboards, requiring some level of overlap between cameras. In this work, we propose a method for cases with little or no overlap. Laser lines are projected on a plane (e.g., floor or wall) using a laser line projector. The pose of the plane and cameras is then optimized using bundle adjustment to match the lines seen by the cameras. To find the extrinsic calibration, only a partial overlap between the laser lines and the field of view of the cameras is needed. Real-world experiments were conducted both with and without overlapping fields of view, resulting in rotation errors below 0.5°. We show that the accuracy is comparable to other state-of-the-art methods while offering a more practical procedure. The method can also be used in large-scale applications and can be fully automated.

14.
Sensors (Basel) ; 20(21)2020 Nov 05.
Article in English | MEDLINE | ID: mdl-33167580

ABSTRACT

In this paper, we introduce a novel approach to estimate the extrinsic parameters between a LiDAR and a camera. Our method is based on line correspondences between the LiDAR point clouds and camera images. We solve for the rotation matrix with 3D-2D infinity point pairs extracted from parallel lines. Then, the translation vector can be solved based on the point-on-line constraint. Unlike other target-based methods, this method can be performed simply, without preparing specific calibration objects, because parallel lines are commonly present in the environment. We validate our algorithm on both simulated and real data. Error analysis shows that our method performs well in terms of robustness and accuracy.
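Once the rotation is fixed, the point-on-line constraint mentioned above makes the translation a linear least-squares problem: every image line l (in homogeneous coordinates on the normalised image plane) through the projection of a LiDAR point p must satisfy l.(Rp + t) = 0. A self-contained sketch of just that step, as a generic construction rather than the paper's implementation:

```python
import numpy as np

def translation_from_lines(R, pts_lidar, lines_img):
    """Solve l_i . (R p_i + t) = 0 for t in the least-squares sense.
    lines_img: (N, 3) homogeneous line coefficients on the normalised image
    plane; pts_lidar: (N, 3) LiDAR points; needs >= 3 independent lines."""
    b = -np.einsum("ij,ij->i", lines_img, pts_lidar @ R.T)
    t, *_ = np.linalg.lstsq(lines_img, b, rcond=None)
    return t

# Synthetic check: build lines that pass exactly through the true projections.
rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1
R_true, t_true = Q, np.array([0.2, -0.1, 0.4])
P = rng.uniform(-1, 1, (6, 3)) + np.array([0, 0, 5.0])  # points in front
q = P @ R_true.T + t_true                                # camera-frame points
lines = np.cross(q, rng.standard_normal((6, 3)))         # lines through each projection
t = translation_from_lines(R_true, P, lines)
```

Each line contributes one scalar equation in the three unknowns of t, which is why at least three independent line correspondences are needed.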

15.
Sensors (Basel) ; 20(10)2020 May 16.
Article in English | MEDLINE | ID: mdl-32429427

ABSTRACT

This paper details a new extrinsic calibration method for scanning laser rangefinders that focuses on geometric, ground-plane-based estimation. The method is also efficient in the challenging experimental configuration of a high angle of inclination of the LiDAR. In this configuration, the calibration of the LiDAR sensor is a key problem that arises in various domains, in particular for guaranteeing the efficiency of ground-surface object detection. The proposed extrinsic calibration method can be summarized by the following steps: fitting the ground plane, estimating the extrinsic parameters (3D orientation angles and altitude), and optimizing the extrinsic parameters. Finally, the results are presented in terms of precision and robustness against variations in the LiDAR's orientation and range accuracy, showing the stability and accuracy of the proposed method, which was validated through numerical simulation and real data.
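The geometry behind ground-plane-based extrinsic estimation is compact enough to sketch: once the ground plane n.p + d = 0 is fitted in the LiDAR frame, the sensor altitude is |d| and the roll/pitch follow from the tilt of the normal. The angle convention below is one common choice, assumed here rather than taken from the paper:

```python
import numpy as np

def ground_plane_extrinsics(n, d):
    """Altitude and roll/pitch of a LiDAR from its fitted ground plane
    n.p + d = 0 (n unit length, expressed in the LiDAR frame)."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    if n[2] < 0:                     # orient the normal upward
        n, d = -n, -d
    altitude = abs(d)                # distance from sensor origin to ground
    roll = np.arctan2(n[1], n[2])    # tilt of the normal about the x-axis
    pitch = -np.arcsin(n[0])         # tilt about the y-axis
    return roll, pitch, altitude

# A LiDAR mounted 1.5 m above the ground, rolled by 0.1 rad.
roll, pitch, alt = ground_plane_extrinsics([0.0, np.sin(0.1), np.cos(0.1)], -1.5)
```

This recovers only the degrees of freedom the ground plane constrains (two tilt angles and height); yaw and the in-plane translation need other cues, which is consistent with the paper estimating "3D orientation angles and altitude".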

16.
Sensors (Basel) ; 20(7)2020 Mar 26.
Article in English | MEDLINE | ID: mdl-32224948

ABSTRACT

Multiple two-dimensional laser rangefinders (LRFs) are used in many applications such as mobile robotics, autonomous vehicles, and three-dimensional reconstruction. The extrinsic calibration between LRFs is the first step in performing data fusion and practical application. In this paper, we propose a simple method to calibrate LRFs based on a corner composed of three mutually perpendicular planes. In contrast to other methods that require a special pattern or assistance from other sensors, the trihedral corner needed in this method is common in everyday environments. In practice, we can adjust the position of the LRFs to observe the corner until the laser scanning plane intersects with all three planes of the corner. Then, we form a Perspective-Three-Point problem to solve for the position and orientation of each LRF in the common corner coordinate system. The method was validated with synthetic and real experiments, showing better performance than existing methods.

17.
Sensors (Basel) ; 21(1)2020 Dec 22.
Article in English | MEDLINE | ID: mdl-33374942

ABSTRACT

Multi-modal sensor fusion has become ubiquitous in the field of vehicle motion estimation. Achieving a consistent sensor fusion in such a set-up demands the precise knowledge of the misalignments between the coordinate systems in which the different information sources are expressed. In ego-motion estimation, even sub-degree misalignment errors lead to serious performance degradation. The present work addresses the extrinsic calibration of a land vehicle equipped with standard production car sensors and an automotive-grade inertial measurement unit (IMU). Specifically, the article presents a method for the estimation of the misalignment between the IMU and vehicle coordinate systems, while considering the IMU biases. The estimation problem is treated as a joint state and parameter estimation problem, and solved using an adaptive estimator that relies on the IMU measurements, a dynamic single-track model as well as the suspension and odometry systems. Additionally, we show that the validity of the misalignment estimates can be assessed by identifying the misalignment between a high-precision INS/GNSS and the IMU and vehicle coordinate systems. The effectiveness of the proposed calibration procedure is demonstrated using real sensor data. The results show that estimation accuracies below 0.1 degrees can be achieved in spite of moderate variations in the manoeuvre execution.

18.
Sensors (Basel) ; 19(22)2019 Nov 15.
Article in English | MEDLINE | ID: mdl-31731824

ABSTRACT

Extrinsic camera calibration is essential for any computer vision task in a camera network. Typically, researchers place a calibration object in the scene to calibrate all the cameras in a camera network. However, when installing cameras in the field, this approach can be costly and impractical, especially when recalibration is needed. This paper proposes a novel, accurate and fully automatic extrinsic calibration framework for camera networks with partially overlapping views. The proposed method considers the pedestrians in the observed scene as the calibration objects and analyzes the pedestrian tracks to obtain the extrinsic parameters. Compared to the state of the art, the new method is fully automatic and robust in various environments. Our method detects human poses in the camera images and then models walking persons as vertical sticks. We apply a brute-force method to determine the correspondence between persons in multiple camera images. This information, along with the estimated 3D locations of the tops and bottoms of the pedestrians, is then used to compute the extrinsic calibration matrices. We also propose a novel method to calibrate the camera network using only the top and centerline of a person when the bottom of the person is not visible in heavily occluded scenes. We verified the robustness of the method in different camera setups and for both single and multiple walking people. The results show that a triangulation error of a few centimeters can be obtained. Typically, less than one minute of observing walking people is required to reach this accuracy in controlled environments, and only a few minutes are needed to collect enough data for calibration in uncontrolled environments. Our proposed method performs well in various situations, including multiple people, occlusions, and even real street intersections.


Subjects
Calibration/standards; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Pattern Recognition, Automated/methods; Algorithms; Artificial Intelligence; Humans; Pattern Recognition, Automated/standards; Pedestrians
19.
Sensors (Basel) ; 19(6)2019 Mar 15.
Article in English | MEDLINE | ID: mdl-30884756

ABSTRACT

In this paper, a simple and easy high-precision calibration method is proposed for the LRF-camera combined measurement system which is widely used at present. This method can be applied not only to mainstream 2D and 3D LRF-cameras, but also to calibrate newly developed 1D LRF-camera combined systems. It only needs a calibration board to record at least three sets of data. First, the camera parameters and distortion coefficients are decoupled by the distortion center. Then, the spatial coordinates of laser spots are solved using line and plane constraints, and the estimation of LRF-camera extrinsic parameters is realized. In addition, we establish a cost function for optimizing the system. Finally, the calibration accuracy and characteristics of the method are analyzed through simulation experiments, and the validity of the method is verified through the calibration of a real system.

20.
Sensors (Basel) ; 19(7)2019 Mar 29.
Article in English | MEDLINE | ID: mdl-30934950

ABSTRACT

RGB-Depth (RGB-D) cameras are widely used in computer vision and robotics applications such as 3D modeling and human-computer interaction. To capture 3D information of an object from different viewpoints simultaneously, we need to use multiple RGB-D cameras. To minimize costs, the cameras are often sparsely distributed without shared scene features. Due to the advantage of being visible from different viewpoints, spherical objects have been used for extrinsic calibration of widely-separated cameras. Assuming that the projected shape of the spherical object is circular, this paper presents a multi-cue-based method for detecting circular regions in a single color image. Experimental comparisons with existing methods show that our proposed method accurately detects spherical objects with cluttered backgrounds under different illumination conditions. The circle detection method is then applied to extrinsic calibration of multiple RGB-D cameras, for which we propose to use robust cost functions to reduce errors due to misdetected sphere centers. Through experiments, we show that the proposed method provides accurate calibration results in the presence of outliers and performs better than a least-squares-based method.
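The effect of a robust cost against misdetected sphere centres can be illustrated with a Huber loss solved by iteratively reweighted least squares. The sketch below robustly averages repeated centre detections, down-weighting outliers; it is illustrative only, since the abstract does not give the paper's exact cost:

```python
import numpy as np

def robust_center(detections, delta=0.05, iters=20):
    """Huber-robust mean of repeated sphere-centre detections via IRLS:
    residuals within delta get weight 1, larger ones get weight delta/|r|."""
    c = detections.mean(axis=0)
    for _ in range(iters):
        r = np.linalg.norm(detections - c, axis=1)
        w = np.where(r <= delta, 1.0, delta / np.maximum(r, 1e-12))
        c = (w[:, None] * detections).sum(axis=0) / w.sum()
    return c

# 20 good detections around (1, 2, 3) plus one gross misdetection.
rng = np.random.default_rng(5)
good = np.array([1.0, 2.0, 3.0]) + rng.normal(0, 0.01, (20, 3))
dets = np.vstack([good, [[10.0, 10.0, 10.0]]])
c = robust_center(dets)
```

A plain least-squares mean is dragged noticeably toward the misdetection, while the Huber-weighted estimate stays with the inlier cluster, which is the behaviour the abstract reports against a least-squares baseline.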
