Results 1 - 20 of 1,165
1.
Proc Natl Acad Sci U S A ; 121(33): e2310157121, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39102539

ABSTRACT

The Amazon forest contains globally important carbon stocks, but in recent years, atmospheric measurements suggest that it has been releasing more carbon than it has absorbed because of deforestation and forest degradation. Accurately attributing the sources of carbon loss to forest degradation and natural disturbances remains a challenge because of the difficulty of classifying disturbances and simultaneously estimating carbon changes. We used a unique, randomized, repeated, very high-resolution airborne laser scanning survey to provide a direct, detailed, and high-resolution partitioning of aboveground carbon gains and losses in the Brazilian Arc of Deforestation. Our analysis revealed that disturbances directly attributed to human activity impacted 4.2% of the survey area, while windthrows and other disturbances affected 2.7% and 14.7%, respectively. Extrapolating the lidar-based statistics to the study area (544,300 km²), we found that 24.1, 24.2, and 14.5 Tg C y⁻¹ were lost through clearing, fires, and logging, respectively. The losses due to large windthrows (21.5 Tg C y⁻¹) and other disturbances (50.3 Tg C y⁻¹) were partially counterbalanced by forest growth (44.1 Tg C y⁻¹). Our high-resolution estimates demonstrated a greater loss of carbon through forest degradation than through deforestation and a net loss of carbon of 90.5 ± 16.6 Tg C y⁻¹ for the study region attributable to both anthropogenic and natural processes. This study highlights the role of forest degradation in the carbon balance for this critical region in the Earth system.
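To make the reported budget easier to follow, the arithmetic behind the net figure can be reproduced directly from the numbers quoted above; the short Python sketch below simply sums the published loss terms and subtracts the reported growth (it is an illustration of the bookkeeping, not code from the study).

```python
# Carbon balance bookkeeping using the values quoted in the abstract (Tg C per year).
losses = {
    "clearing": 24.1,
    "fires": 24.2,
    "logging": 14.5,
    "large_windthrows": 21.5,
    "other_disturbances": 50.3,
}
growth = 44.1

total_loss = sum(losses.values())   # 134.6 Tg C per year
net_loss = total_loss - growth      # 90.5 Tg C per year, matching the reported net loss
print(f"Gross loss: {total_loss:.1f} Tg C/yr, net loss: {net_loss:.1f} Tg C/yr")
```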


Subject(s)
Carbon, Conservation of Natural Resources, Forests, Brazil/epidemiology, Carbon/metabolism, Humans, Trees/growth & development, Carbon Cycle
2.
Trends Genet ; 39(7): 531-544, 2023 07.
Article in English | MEDLINE | ID: mdl-36907721

ABSTRACT

Insects are crucial for ecosystem health but climate change and pesticide use are driving massive insect decline. To mitigate this loss, we need new and effective monitoring techniques. Over the past decade there has been a shift to DNA-based techniques. We describe key emerging techniques for sample collection. We suggest that the selection of tools should be broadened, and that DNA-based insect monitoring data need to be integrated more rapidly into policymaking. We argue that there are four key areas for advancement, including the generation of more complete DNA barcode databases to interpret molecular data, standardisation of molecular methods, scaling up of monitoring efforts, and integrating molecular tools with other technologies that allow continuous, passive monitoring based on images and/or laser imaging, detection, and ranging (LIDAR).


Subject(s)
Biodiversity, Ecosystem, Animals, Taxonomic DNA Barcoding/methods, DNA/genetics, Insects/genetics
3.
Proc Natl Acad Sci U S A ; 119(10): e2110756119, 2022 03 08.
Article in English | MEDLINE | ID: mdl-35235447

ABSTRACT

Significance: Aerosol-cloud interaction affects the cooling of Earth's climate, mostly by activation of aerosols as cloud condensation nuclei that can increase the amount of sunlight reflected back to space. But the controlling physical processes remain uncertain in current climate models. We present a lidar-based technique as a unique remote-sensing tool without thermodynamic assumptions for simultaneously profiling diurnal aerosol and water cloud properties with high resolution. Direct lateral observations of cloud properties show that the vertical structure of low-level water clouds can be far from being perfectly adiabatic. Furthermore, our analysis reveals that, instead of an increase of liquid water path (LWP) as proposed by most general circulation models, elevated aerosol loading can cause a net decrease in LWP.
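For context, the adiabatic reference against which such cloud profiles are typically compared can be written as LWP_ad = ½ Γ_ad h², where Γ_ad is the adiabatic rate of increase of liquid water content with height and h is the cloud depth. The sketch below illustrates this comparison with assumed values for Γ_ad, cloud depth, and the retrieved LWP; it is not taken from the paper.

```python
# Illustrative adiabaticity check (assumed values, not the paper's data).
gamma_ad = 2.0e-3      # adiabatic liquid-water lapse rate, g m^-3 per m (assumed)
cloud_depth_m = 300.0  # cloud geometric depth in metres (assumed)

lwp_adiabatic = 0.5 * gamma_ad * cloud_depth_m**2   # adiabatic liquid water path, g m^-2
lwp_retrieved = 60.0                                 # hypothetical retrieved LWP, g m^-2

adiabaticity = lwp_retrieved / lwp_adiabatic         # < 1 indicates a sub-adiabatic profile
print(f"LWP_ad = {lwp_adiabatic:.0f} g m^-2, adiabaticity = {adiabaticity:.2f}")
```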

4.
Glob Chang Biol ; 30(2): e17185, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38361266

ABSTRACT

Climate change in northern latitudes is increasing the vulnerability of peatlands and the riparian transition zones between peatlands and upland forests (referred to as ecotones) to a greater frequency of wildland fires. We examined early post-fire vegetation regeneration following the 2011 Utikuma complex fire (central Alberta, Canada). The study covered 779 peatlands and adjacent ecotones over an area of ~182 km². Based on the known regional fire history, peatlands that burned in 2011 were stratified into either long return interval (LRI) fire regimes of >80 years (i.e., no recorded prior fire history) or a short fire return interval (SRI) of 55 years (i.e., within the boundary of a documented severe fire in 1956). Data from six multitemporal airborne lidar surveys were used to quantify trajectories of vegetation change for 8 years prior to and 8 years following the 2011 fire. To date, no studies have quantified the impacts of post-fire regeneration following short versus long return interval fires across this broad range of peatlands with variable environmental and post-fire successional trajectories. We found that SRI peatlands demonstrated more rapid vascular and shrub growth rates, especially in peatland centers, than LRI peatlands. Bogs and fens that burned in 1956 and had little vascular vegetation prior to the 2011 fire (classified as "open peatlands") experienced the greatest changes. These peatlands tended to transition to vascular/shrub forms following the SRI fire, while open LRI peatlands were not significantly different from pre-fire conditions. The results of this study suggest the emergence of a positive feedback, where areas experiencing SRI fires in southern boreal peatlands are expected to transition to forested vegetation forms. Along fen edges and within bog centers, SRI fires are expected to reduce local peatland groundwater moisture-holding capacity and promote favorable conditions for increased fire frequency and severity in the future.


Subject(s)
Fires, Wildfires, Forests, Wetlands, Alberta, Ecosystem
5.
Ann Bot ; 134(3): 455-466, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-38804175

ABSTRACT

BACKGROUND AND AIMS: Lidar is a promising tool for fast and accurate measurements of trees. There are several approaches to estimate above-ground woody biomass using lidar point clouds. One of the most widely used methods involves fitting geometric primitives (e.g. cylinders) to the point cloud, thereby reconstructing both the geometry and topology of the tree. However, current algorithms are not suited for accurate estimation of the volume of finer branches, because point dispersion becomes unreliable when, for example, the beam footprint is large relative to the structure diameter. METHOD: We propose a new method that couples point cloud-based skeletonization with multi-linear statistical modelling of structural data to build a model (the structural model) that accurately estimates the above-ground woody biomass of trees from high-quality lidar point clouds, including finer branches. The structural model was tested at segment, axis and branch level, and compared to a cylinder fitting algorithm and to the pipe model theory. KEY RESULTS: The model accurately predicted the biomass with 1.6 % normalized root mean square error (nRMSE) at the segment scale from a k-fold cross-validation. It also gave satisfactory results when scaled up to the branch level, with a significantly lower error (13 % nRMSE) and bias (-5 %) compared to conventional cylinder fitting to the point cloud (nRMSE: 92 %, bias: 82 %) or to the pipe model theory (nRMSE: 31 %, bias: -27 %). The model was then applied at the whole-tree scale and showed that the sampled trees had more than 1.7 km of structures on average and that 96 % of that length came from twigs (i.e. <5 cm diameter). Our results showed that neglecting twigs can lead to a significant underestimation of tree above-ground woody biomass (-21 %). CONCLUSIONS: The structural model approach is an effective method that allows a more accurate estimation of the volumes of smaller branches from lidar point clouds. The method is versatile but requires manual measurements on branches for calibration. Nevertheless, once calibrated, the model can provide unbiased and large-scale estimations of tree structure volumes, making it an excellent choice for accurate 3D reconstruction of trees and estimation of standing biomass.
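As a reference for the error figures quoted above, the two metrics (nRMSE and bias) follow their usual definitions; the sketch below applies them to hypothetical predicted versus reference segment volumes and is not taken from the paper's code.

```python
import numpy as np

def nrmse(pred, ref):
    """Root mean square error normalised by the mean reference value."""
    return np.sqrt(np.mean((pred - ref) ** 2)) / np.mean(ref)

def rel_bias(pred, ref):
    """Mean signed error normalised by the mean reference value."""
    return np.mean(pred - ref) / np.mean(ref)

ref = np.array([1.20, 0.80, 2.50, 0.30])    # reference segment volumes (illustrative)
pred = np.array([1.10, 0.90, 2.40, 0.35])   # predicted segment volumes (illustrative)

print(f"nRMSE = {100 * nrmse(pred, ref):.1f} %, bias = {100 * rel_bias(pred, ref):+.1f} %")
```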


Subject(s)
Algorithms, Biomass, Trees, Trees/growth & development, Trees/anatomy & histology, Remote Sensing Technology/methods, Wood/growth & development, Wood/anatomy & histology
6.
Environ Sci Technol ; 58(5): 2413-2422, 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38266235

ABSTRACT

Wildland fire is a major global driver in the exchange of aerosols between terrestrial environments and the atmosphere. This exchange is commonly quantified using emission factors or the mass of a pollutant emitted per mass of fuel burned. However, emission factors for microbes aerosolized by fire have yet to be determined. Using bacterial cell concentrations collected on unmanned aircraft systems over forest fires in Utah, USA, we determine bacterial emission factors (BEFs) for the first time. We estimate that 1.39 × 1010 and 7.68 × 1011 microbes are emitted for each Mg of biomass consumed in fires burning thinning residues and intact forests, respectively. These emissions exceed estimates of background bacterial emissions in other studies by 3-4 orders of magnitude. For the ∼2631 ha of similar forests in the Fishlake National Forest that burn each year on average, an estimated 1.35 × 1017 cells or 8.1 kg of bacterial biomass were emitted. BEFs were then used to parametrize a computationally scalable particle transport model that predicted over 99% of the emitted cells were transported beyond the 17.25 x 17.25 km model domain. BEFs can be used to expand understanding of global wildfire microbial emissions and their potential consequences to ecosystems, the atmosphere, and humans.
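To illustrate how a bacterial emission factor scales to a landscape, the sketch below multiplies the reported BEF for intact forests by an assumed per-hectare fuel consumption and the cited annual burned area; the fuel consumption value is an assumption for illustration only, not a figure from the paper.

```python
# cells emitted = BEF [cells per Mg fuel consumed] x fuel consumed [Mg]
BEF_INTACT_FOREST = 7.68e11        # cells per Mg of biomass consumed (reported above)
area_burned_ha = 2631              # average annual burned area cited for the Fishlake NF
fuel_consumed_Mg_per_ha = 65.0     # assumed fuel consumption per hectare (illustration)

cells_per_year = BEF_INTACT_FOREST * fuel_consumed_Mg_per_ha * area_burned_ha
print(f"Estimated emission: {cells_per_year:.2e} cells per year")   # on the order of 1e17
```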


Subject(s)
Fires, Wildfires, Humans, Ecosystem, Forests, Bacteria
7.
Neurosurg Focus ; 56(1): E6, 2024 01.
Article in English | MEDLINE | ID: mdl-38163339

ABSTRACT

OBJECTIVE: A comprehensive understanding of microsurgical neuroanatomy, familiarity with the operating room environment, patient positioning in relation to the surgery, and knowledge of surgical approaches is crucial in neurosurgical education. However, challenges such as limited patient exposure, heightened patient safety concerns, a decreased availability of surgical cases during training, and difficulties in accessing cadavers and laboratories have adversely impacted this education. Three-dimensional (3D) models and augmented reality (AR) applications can be utilized to depict the cortical and white matter anatomy of the brain, create virtual models of patient surgical positions, and simulate the operating room and neuroanatomy laboratory environment. Herein, the authors, who used a single application, aimed to demonstrate the creation of 3D models of anatomical cadaver dissections, surgical approaches, patient surgical positions, and operating room and laboratory designs as alternative educational materials for neurosurgical training. METHODS: A 3D modeling application (Scaniverse) was employed to generate 3D models of cadaveric brain specimens and surgical approaches using photogrammetry. It was also used to create virtual representations of the operating room and laboratory environment, as well as the surgical positions of patients, by utilizing light detection and ranging (LiDAR) sensor technology for accurate spatial mapping. These virtual models were then presented in AR for educational purposes. RESULTS: Virtual representations in three dimensions were created to depict cadaver specimens, surgical approaches, patient surgical positions, and the operating room and laboratory environment. These models offer the flexibility of rotation and movement in various planes for improved visualization and understanding. The operating room and laboratory environment were rendered in three dimensions to create a simulation that could be navigated using AR and mixed reality technology. Realistic cadaveric models with intricate details were showcased on internet-based platforms and AR platforms for enhanced visualization and learning. CONCLUSIONS: The utilization of this cost-effective, straightforward, and readily available approach to generate 3D models has the potential to enhance neuroanatomical and neurosurgical education. These digital models can be easily stored and shared via the internet, making them accessible to neurosurgeons worldwide for educational purposes.


Subject(s)
Neuroanatomy, Operating Rooms, Humans, Neuroanatomy/education, Laboratories, Computer Simulation, Cadaver
8.
Sensors (Basel) ; 24(3)2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38339673

ABSTRACT

Modern visual perception techniques often rely on multiple heterogeneous sensors to achieve accurate and robust estimates. Knowledge of their relative positions is a mandatory prerequisite to accomplish sensor fusion. Typically, this result is obtained through a calibration procedure that correlates the sensors' measurements. In this context, we focus on LiDAR and RGB sensors that exhibit complementary capabilities. Given the sparsity of LiDAR measurements, current state-of-the-art calibration techniques often rely on complex or large calibration targets to resolve the relative pose estimation. As such, the geometric properties of the targets may hinder the calibration procedure in those cases where an ad hoc environment cannot be guaranteed. This paper addresses the problem of LiDAR-RGB calibration using common calibration patterns (i.e., A3 chessboard) with minimal human intervention. Our approach exploits the flatness of the target to find associations between the sensors' measurements, leading to robust features and retrieval of the solution through nonlinear optimization. The results of quantitative and comparative experiments with other state-of-the-art approaches show that our simple schema performs on par or better than existing methods that rely on complex calibration targets.
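A minimal sketch of the planarity constraint that target-based LiDAR-camera calibration typically exploits is shown below: LiDAR points segmented on the chessboard must lie on the board plane expressed in the camera frame, and the extrinsics are found by nonlinear least squares. The data, the plane parameters, and the cost function are placeholders and do not reproduce the paper's pipeline.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def point_to_plane_residuals(x, lidar_pts, plane_n, plane_d):
    """x = [rx, ry, rz, tx, ty, tz]: axis-angle rotation and translation (LiDAR -> camera)."""
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    pts_cam = lidar_pts @ R.T + x[3:]       # LiDAR points expressed in the camera frame
    return pts_cam @ plane_n + plane_d      # signed distances to the board plane

# Placeholder inputs: board plane (unit normal n, offset d) from the camera-side chessboard
# pose, and LiDAR points segmented on the board (both assumed to be available upstream).
plane_n, plane_d = np.array([0.0, 0.0, 1.0]), -2.0
lidar_board_pts = np.random.default_rng(0).normal(loc=(0.0, 0.0, 2.0), scale=0.3, size=(200, 3))

solution = least_squares(point_to_plane_residuals, np.zeros(6),
                         args=(lidar_board_pts, plane_n, plane_d))
print("estimated extrinsics (axis-angle + translation):", solution.x)
```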

9.
Sensors (Basel) ; 24(6)2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38544025

ABSTRACT

An innovative mobile lidar device, developed to monitor volcanic plumes during explosive eruptions at Mt. Etna (Italy) and to analyse the optical properties of volcanic particles, was upgraded in October 2023 with the aim of improving volcanic plume retrievals. The new configuration provides data on both the optical and the microphysical properties of atmospheric aerosol: after the upgrade, the lidar can measure three backscattering coefficients, two extinction coefficients and two depolarisation ratios in a configuration referred to as a "state-of-the-art lidar", from which properties such as the particle size distribution and the refractive index can be derived. During the upgrade, we tested the system's performance through dedicated calibration measurements. A comparison in an aerosol-free region (7.2-12 km) between the lidar signals at 1064 nm, 532 nm and 355 nm and the corresponding pure molecular profiles showed a relative difference of <1% at all wavelengths, confirming the good dynamic range of the signals. The overlap correction reduced the underestimation of the backscattering coefficient from 50% to 10% below 450 m and 750 m at 355 and 532 nm, respectively. The alignment between the laser beam and the receiver optical chain was tested using the signal received from the different quadrants of the telescope, and the relative differences between the four directions were comparable to zero within the margin of error. Finally, the first measurement results are shown and compared with results obtained by other instruments to demonstrate the ability of the upgraded system to characterise aerosol optical and microphysical properties more precisely.

10.
Sensors (Basel) ; 24(2)2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38276337

ABSTRACT

SLAM (Simultaneous Localization and Mapping) based on 3D LiDAR (Laser Detection and Ranging) is an expanding field of research with numerous applications in the areas of autonomous driving, mobile robotics, and UAVs (Unmanned Aerial Vehicles). However, in most real-world scenarios, dynamic objects can negatively impact the accuracy and robustness of SLAM. In recent years, the challenge of achieving optimal SLAM performance in dynamic environments has motivated a variety of research efforts, but relatively few reviews have covered this topic. This work examines the development and current state of SLAM based on 3D LiDAR in dynamic environments. After analyzing the necessity and importance of filtering dynamic objects in SLAM, the paper is organized along two dimensions. At the solution-oriented level, mainstream methods for filtering dynamic targets in 3D point clouds are introduced in detail, such as the ray-tracing-based approach, the visibility-based approach, the segmentation-based approach, and others. Then, at the problem-oriented level, the paper classifies dynamic objects and summarizes the corresponding processing strategies for different categories in the SLAM framework, such as online real-time filtering, post-processing after mapping, and long-term SLAM. Finally, the development trends and research directions of dynamic object filtering in SLAM based on 3D LiDAR are discussed and predicted.

11.
Sensors (Basel) ; 24(10)2024 May 15.
Article in English | MEDLINE | ID: mdl-38794002

ABSTRACT

This article presents a high-precision obstacle detection algorithm using 3D mechanical LiDAR to meet railway safety requirements. To address the potential errors in the point cloud, we propose a calibration method based on projection and a novel rail extraction algorithm that effectively handles terrain variations and preserves the point cloud characteristics of the track area. We address the limitations of the traditional process involving fixed Euclidean thresholds by proposing a modulation function based on directional density variations to adjust the threshold dynamically. Finally, using PCA and local-ICP, we conduct feature analysis and classification of the clustered data to obtain the obstacle clusters. We conducted continuous experiments on the testing site, and the results showed that our system and algorithm achieved an STDR (stable detection rate) of over 95% for obstacles with a size of 15 cm × 15 cm × 15 cm in the range of ±25 m; at the same time, for obstacles of 10 cm × 10 cm × 10 cm, an STDR of over 80% was achieved within a range of ±20 m. This research provides a possible solution and approach for railway security via obstacle detection.
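The sketch below illustrates the general idea behind a range-dependent (rather than fixed) Euclidean clustering threshold: point density in a spinning-LiDAR scan drops with distance, so the tolerance is allowed to grow with range. The linear model and its coefficients are assumptions for illustration and are not the paper's density-based modulation function.

```python
import numpy as np

def dynamic_cluster_tolerance(point, base_m=0.05, growth_per_m=0.01):
    """Euclidean clustering tolerance [m] that grows with the horizontal range of a point."""
    horizontal_range = np.hypot(point[0], point[1])
    return base_m + growth_per_m * horizontal_range

p_near = np.array([2.0, 0.0, 0.0])    # a point 2 m from the sensor
p_far = np.array([25.0, 3.0, 0.0])    # a point ~25 m from the sensor
print(dynamic_cluster_tolerance(p_near), dynamic_cluster_tolerance(p_far))  # ~0.07 m vs ~0.30 m
```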

12.
Sensors (Basel) ; 24(4)2024 Feb 17.
Article in English | MEDLINE | ID: mdl-38400430

ABSTRACT

To develop socially assistive robots for monitoring older adults at home, a sensor is required to identify residents and capture activities within the room without violating privacy. We focused on 2D Light Detection and Ranging (2D-LIDAR) capable of robustly measuring human contours in a room. While horizontal 2D contour data can provide human location, identifying humans and activities from these contours is challenging. To address this issue, we developed novel methods using deep learning techniques. This paper proposes methods for person identification and activity estimation in a room using contour point clouds captured by a single 2D-LIDAR at hip height. In this approach, human contours were extracted from 2D-LIDAR data using density-based spatial clustering of applications with noise. Subsequently, the person and activity within a 10-s interval were estimated employing deep learning techniques. Two deep learning models, namely Long Short-Term Memory (LSTM) and image classification (VGG16), were compared. In the experiment, a total of 120 min of walking data and 100 min of additional activities (door opening, sitting, and standing) were collected from four participants. The LSTM-based and VGG16-based methods achieved accuracies of 65.3% and 89.7%, respectively, for person identification among the four individuals. Furthermore, these methods demonstrated accuracies of 94.2% and 97.9%, respectively, for the estimation of the four activities. Despite the 2D-LIDAR point clouds at hip height containing small features related to gait, the results indicate that the VGG16-based method has the capability to identify individuals and accurately estimate their activities.
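A minimal sketch of the contour-extraction step described above is given below: a 2D-LIDAR scan (one range per beam angle) is converted to Cartesian points and grouped with DBSCAN, and each resulting cluster is a candidate contour to pass to the classifier. The synthetic scan and the eps and min_samples values are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import DBSCAN

angles = np.linspace(-np.pi, np.pi, 720, endpoint=False)  # hypothetical 0.5 deg scan
ranges = np.full(720, 5.0)                                 # background wall at 5 m
ranges[100:115] = 1.5                                      # a person-sized blob at 1.5 m

points_xy = np.column_stack((ranges * np.cos(angles), ranges * np.sin(angles)))
labels = DBSCAN(eps=0.15, min_samples=5).fit_predict(points_xy)  # parameters assumed

for label in sorted(set(labels) - {-1}):
    cluster = points_xy[labels == label]
    print(f"cluster {label}: {len(cluster)} points, centroid {cluster.mean(axis=0).round(2)}")
```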


Subject(s)
Abdomen, Single Person, Humans, Aged, Gait, Long-Term Memory, Privacy
13.
Sensors (Basel) ; 24(6)2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38544137

ABSTRACT

This paper presents an innovative dataset designed explicitly for challenging agricultural environments, such as greenhouses, where precise localization is crucial but GNSS accuracy may be compromised by construction elements and the crop. The dataset was collected using a mobile platform equipped with a set of sensors typically used in mobile robots as it was moved through all the corridors of a typical Mediterranean greenhouse featuring tomato crops. This dataset presents a unique opportunity for constructing detailed 3D models of plants in such indoor-like spaces, with potential applications such as robotized spraying. For the first time, to the authors' knowledge, a dataset suitable for testing simultaneous localization and mapping (SLAM) methods is presented for a greenhouse environment, which poses unique challenges. The suitability of the dataset for this purpose is assessed by presenting SLAM results with state-of-the-art algorithms. The dataset is available online.

14.
Sensors (Basel) ; 24(9)2024 May 06.
Article in English | MEDLINE | ID: mdl-38733047

ABSTRACT

LiDAR-based 3D object detection and localization are crucial components of autonomous navigation systems, including autonomous vehicles and mobile robots. Most existing LiDAR-based 3D object detection and localization approaches primarily use geometric or structural feature abstractions from LiDAR point clouds. However, these approaches can be susceptible to environmental noise due to adverse weather conditions or the presence of highly scattering media. In this work, we propose an intensity-aware voxel encoder for robust 3D object detection. The proposed voxel encoder generates an intensity histogram that describes the distribution of point intensities within a voxel and is used to enhance the voxel feature set. We integrate this intensity-aware encoder into an efficient single-stage voxel-based detector for 3D object detection. Experimental results obtained using the KITTI dataset show that our method achieves results comparable to the state-of-the-art method for car objects in 3D and bird's-eye-view detection, and superior results for pedestrian and cyclist objects. Furthermore, our model achieves a detection rate of 40.7 FPS at inference time, which is higher than that of the state-of-the-art methods, while incurring a lower computational cost.
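The core idea of the intensity-aware encoder can be sketched as follows: in addition to geometric statistics, each voxel keeps a normalised histogram of the return intensities of the points it contains. Voxel size, bin count, and the intensity range below are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def voxel_intensity_histograms(points, intensities, voxel_size=0.2, n_bins=8):
    """Map each occupied voxel to a normalised histogram of its point intensities."""
    voxel_keys = np.floor(points / voxel_size).astype(np.int64)
    grouped = {}
    for key, value in zip(map(tuple, voxel_keys), intensities):
        grouped.setdefault(key, []).append(value)
    return {k: np.histogram(v, bins=n_bins, range=(0.0, 1.0))[0] / len(v)
            for k, v in grouped.items()}

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, size=(1000, 3))        # synthetic point cloud (illustrative)
inten = rng.uniform(0.0, 1.0, size=1000)           # synthetic normalised intensities
features = voxel_intensity_histograms(pts, inten)
print(len(features), "voxels, histogram length", len(next(iter(features.values()))))
```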

15.
Sensors (Basel) ; 24(3)2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38339539

ABSTRACT

Recently, new semantic segmentation and object detection methods have been proposed for the direct processing of three-dimensional (3D) LiDAR sensor point clouds. LiDAR can produce highly accurate and detailed 3D maps of natural and man-made environments and is used for sensing in many contexts due to its ability to capture more information, its robustness to dynamic changes in the environment compared to an RGB camera, and its cost, which has decreased in recent years and which is an important factor for many application scenarios. The challenge with high-resolution 3D LiDAR sensors is that they can output large amounts of 3D data with up to a few million points per second, which is difficult to process in real time when applying complex algorithms and models for efficient semantic segmentation. Most existing approaches are either only suitable for relatively small point clouds or rely on computationally intensive sampling techniques to reduce their size. As a result, most of these methods do not work in real time in realistic field robotics application scenarios, making them unsuitable for practical applications. Systematic point selection is a possible solution to reduce the amount of data to be processed. Although such an approach is memory- and computationally efficient, it selects only a small subset of points, which may cause important features to be missed. To address this problem, our proposed systematic sampling method, SyS3DS (Systematic Sampling for 3D Semantic Segmentation), incorporates a technique in which the local neighbours of each point are retained to preserve geometric details. SyS3DS is based on a graph colouring algorithm and ensures that the selected points are non-adjacent in order to obtain a subset of points that are representative of the 3D points in the scene. To take advantage of ensemble learning, we pass a different subset of nodes for each epoch. This leverages a new technique called auto-ensemble, where ensemble learning is treated as a collection of different learning models instead of tuning different hyperparameters individually during training and validation. SyS3DS has been shown to process up to 1 million points in a single pass. It outperforms the state of the art in efficient semantic segmentation on large datasets such as Semantic3D. We also present a preliminary study on the validity of the performance of LiDAR-only data, i.e., intensity values from LiDAR sensors without RGB values, for semi-autonomous robot perception.
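The non-adjacent sampling principle mentioned above can be sketched as follows: build a neighbourhood graph over the points, colour it so that adjacent points receive different colours, and take one colour class as a representative, mutually non-adjacent subset (a different class can be drawn for each training epoch). This is an illustration of the principle using generic library calls, not the SyS3DS implementation.

```python
import numpy as np
import networkx as nx
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(3)
points = rng.uniform(0.0, 10.0, size=(500, 3))                 # synthetic point cloud
adjacency = kneighbors_graph(points, n_neighbors=6, mode="connectivity")
graph = nx.from_scipy_sparse_array(adjacency)

coloring = nx.coloring.greedy_color(graph, strategy="largest_first")
subset = [node for node, color in coloring.items() if color == 0]  # one colour class
print(f"selected {len(subset)} of {len(points)} points; no two selected points are graph-adjacent")
```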

16.
Sensors (Basel) ; 24(4)2024 Feb 11.
Article in English | MEDLINE | ID: mdl-38400350

ABSTRACT

Most automated vehicles (AVs) are equipped with abundant sensors, which enable AVs to improve ride comfort by sensing road elevation features such as speed bumps. This paper proposes a method for estimating the road impulse features ahead of vehicles in urban environments with microelectromechanical system (MEMS) light detection and ranging (LiDAR). The proposed method uses real-time estimation of the vehicle pose to address the sparse sampling of the LiDAR. Considering the LiDAR error model, the proposed method builds the grid height measurement model by maximum likelihood estimation. Moreover, it fuses the height measurements with the LiDAR error model through a Kalman filter and introduces motion uncertainty to form an elevation weighting method based on the confidence ellipse. In addition, a gating strategy based on the Mahalanobis distance is integrated to handle sharp changes in elevation. The proposed method was tested in an urban environment, and the results demonstrate its effectiveness.
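A minimal sketch of a per-cell elevation filter in this spirit is shown below: each grid cell keeps a scalar height estimate with a variance, new LiDAR height measurements are fused with a Kalman update, and a Mahalanobis-distance gate flags sharp elevation jumps for separate handling. All noise values and the gate threshold are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def update_cell(height, variance, z, meas_var, gate=3.0):
    """One scalar Kalman update with a Mahalanobis gate; returns (height, variance, accepted)."""
    innovation = z - height
    innovation_var = variance + meas_var
    if abs(innovation) / np.sqrt(innovation_var) > gate:   # 1-D Mahalanobis distance
        return height, variance, False                     # gated out (e.g. a sharp jump)
    gain = variance / innovation_var
    return height + gain * innovation, (1.0 - gain) * variance, True

h, P = 0.00, 0.05                       # prior cell height [m] and variance
for z in (0.01, -0.02, 0.80):           # last measurement simulates a sharp elevation change
    h, P, accepted = update_cell(h, P, z, meas_var=0.02)
    print(f"z={z:+.2f}  accepted={accepted}  h={h:.3f}  P={P:.4f}")
```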

17.
Sensors (Basel) ; 24(11)2024 May 29.
Article in English | MEDLINE | ID: mdl-38894305

ABSTRACT

This paper presents a current-mode VCSEL driver (CMVD) implemented using 180 nm CMOS technology for application in short-range LiDAR sensors, in which current-steering logic is suggested to deliver modulation currents from 0.1 to 10 mApp and a bias current of 0.1 mA simultaneously to the VCSEL diode. For the simulations, the VCSEL diode is modeled with a 1.6 V forward-bias voltage and a 50 Ω series resistor. The post-layout simulations of the proposed CMVD clearly demonstrate large output pulses and eye-diagrams. Measurements of the CMVD demonstrate large output pulses, confirming the simulation results. The chip consumes a maximum of 11 mW from a 3.3 V supply, and the core occupies an area of 0.1 mm².

18.
Sensors (Basel) ; 24(14)2024 Jul 20.
Article in English | MEDLINE | ID: mdl-39066115

ABSTRACT

3D object detection is a challenging and promising task for autonomous driving and robotics, benefiting significantly from multi-sensor fusion, such as LiDAR and cameras. Conventional methods for sensor fusion rely on a projection matrix to align the features from LiDAR and cameras. However, these methods often suffer from inadequate flexibility and robustness, leading to lower alignment accuracy under complex environmental conditions. Addressing these challenges, in this paper, we propose a novel Bidirectional Attention Fusion module, named BAFusion, which effectively fuses the information from LiDAR and cameras using cross-attention. Unlike the conventional methods, our BAFusion module can adaptively learn the cross-modal attention weights, making the approach more flexible and robust. Moreover, drawing inspiration from advanced attention optimization techniques in 2D vision, we developed the Cross Focused Linear Attention Fusion Layer (CFLAF Layer) and integrated it into our BAFusion pipeline. This layer optimizes the computational complexity of attention mechanisms and facilitates advanced interactions between image and point cloud data, showcasing a novel approach to addressing the challenges of cross-modal attention calculations. We evaluated our method on the KITTI dataset using various baseline networks, such as PointPillars, SECOND, and Part-A2, and demonstrated consistent improvements in 3D object detection performance over these baselines, especially for smaller objects like cyclists and pedestrians. Our approach achieves competitive results on the KITTI benchmark.
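The general cross-attention fusion pattern described above can be sketched as follows (not the paper's BAFusion/CFLAF implementation): flattened LiDAR BEV feature tokens attend to flattened image feature tokens, so the cross-modal alignment is learned rather than fixed by a projection matrix. Dimensions and token counts are illustrative.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, lidar_tokens, image_tokens):
        # lidar_tokens: (B, N_bev, C) queries; image_tokens: (B, N_img, C) keys and values.
        fused, _ = self.attn(lidar_tokens, image_tokens, image_tokens)
        return self.norm(lidar_tokens + fused)              # residual connection

bev_tokens = torch.randn(2, 352, 128)                       # flattened BEV features (illustrative)
img_tokens = torch.randn(2, 4800, 128)                      # flattened image features (illustrative)
print(CrossAttentionFusion()(bev_tokens, img_tokens).shape) # torch.Size([2, 352, 128])
```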

19.
Sensors (Basel) ; 24(12)2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38931662

ABSTRACT

Extrinsic parameter calibration is the foundation and prerequisite for LiDAR-camera data fusion in autonomous systems. This technology is widely used in fields such as autonomous driving, mobile robots, intelligent surveillance, and visual measurement. Learning-based methods are one class of targetless approaches to LiDAR-camera calibration. Owing to their speed, accuracy, and robustness under complex conditions, they have moved from simple theoretical models to practical use within just a few years and have become indispensable. This paper systematically summarizes the research and development of such methods in recent years. According to the principle of calibration parameter estimation, learning-based calibration algorithms are divided into two categories: accurate calibration estimation and relative calibration prediction. The evolution routes and algorithm frameworks of these two types of algorithms are elaborated, and the methods used in the algorithms' steps are summarized. The algorithm mechanisms, advantages, limitations, and applicable scenarios are discussed. Finally, we summarize existing research issues and point out trends for future development.

20.
Sensors (Basel) ; 24(12)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38931765

ABSTRACT

The data fusion of a 3-D light detection and ranging (LIDAR) point cloud and a camera image during the creation of a 3-D map is important because it enables more efficient object classification by autonomous mobile robots and facilitates the construction of a fine 3-D model. The principle behind data fusion is the accurate estimation of the LIDAR-camera external parameters through extrinsic calibration. Although several studies have proposed the use of multiple calibration targets or poses for precise extrinsic calibration, no study has clearly defined the relationship between the target positions and the data fusion accuracy. Here, we systematically investigated the effects of the deployment of calibration targets on data fusion and identified the key factors to consider when deploying targets for extrinsic calibration. Thereafter, we applied a probability method to perform a global and robust sampling of the camera external parameters. Subsequently, we proposed an evaluation method for the parameters that utilizes the color ratio of the 3-D colored point cloud map. The derived probability density confirmed the good performance of the deployment method in estimating the camera external parameters. Additionally, the evaluation quantitatively confirmed the effectiveness of our deployment of the calibration targets in achieving high-accuracy data fusion compared with the results obtained using previous methods.
