Results 1 - 20 of 1,159
1.
Proc Natl Acad Sci U S A ; 121(33): e2310157121, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39102539

ABSTRACT

The Amazon forest contains globally important carbon stocks, but in recent years, atmospheric measurements suggest that it has been releasing more carbon than it has absorbed because of deforestation and forest degradation. Accurately attributing the sources of carbon loss to forest degradation and natural disturbances remains a challenge because of the difficulty of classifying disturbances and simultaneously estimating carbon changes. We used a unique, randomized, repeated, very high-resolution airborne laser scanning survey to provide a direct, detailed, and high-resolution partitioning of aboveground carbon gains and losses in the Brazilian Arc of Deforestation. Our analysis revealed that disturbances directly attributed to human activity impacted 4.2% of the survey area while windthrows and other disturbances affected 2.7% and 14.7%, respectively. Extrapolating the lidar-based statistics to the study area (544,300 km²), we found that 24.1, 24.2, and 14.5 Tg C y⁻¹ were lost through clearing, fires, and logging, respectively. The losses due to large windthrows (21.5 Tg C y⁻¹) and other disturbances (50.3 Tg C y⁻¹) were partially counterbalanced by forest growth (44.1 Tg C y⁻¹). Our high-resolution estimates demonstrated a greater loss of carbon through forest degradation than through deforestation and a net loss of carbon of 90.5 ± 16.6 Tg C y⁻¹ for the study region attributable to both anthropogenic and natural processes. This study highlights the role of forest degradation in the carbon balance for this critical region in the Earth system.


Subjects
Carbon, Conservation of Natural Resources, Forests, Brazil/epidemiology, Carbon/metabolism, Humans, Trees/growth & development, Carbon Cycle
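The flux components reported in this abstract can be cross-checked by simple addition; the sketch below uses only the values quoted above (all in Tg C y⁻¹) and is a consistency check, not part of the study's methodology.

```python
# Carbon flux components reported in the abstract (Tg C per year)
losses = {
    "clearing": 24.1,
    "fires": 24.2,
    "logging": 14.5,
    "large_windthrows": 21.5,
    "other_disturbances": 50.3,
}
growth = 44.1  # forest growth partially offsets the losses

net_loss = sum(losses.values()) - growth
print(f"Net carbon loss: {net_loss:.1f} Tg C per year")  # ~90.5, matching the reported 90.5 ± 16.6
```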
2.
Trends Genet ; 39(7): 531-544, 2023 07.
Article in English | MEDLINE | ID: mdl-36907721

ABSTRACT

Insects are crucial for ecosystem health but climate change and pesticide use are driving massive insect decline. To mitigate this loss, we need new and effective monitoring techniques. Over the past decade there has been a shift to DNA-based techniques. We describe key emerging techniques for sample collection. We suggest that the selection of tools should be broadened, and that DNA-based insect monitoring data need to be integrated more rapidly into policymaking. We argue that there are four key areas for advancement, including the generation of more complete DNA barcode databases to interpret molecular data, standardisation of molecular methods, scaling up of monitoring efforts, and integrating molecular tools with other technologies that allow continuous, passive monitoring based on images and/or laser imaging, detection, and ranging (LIDAR).


Subjects
Biodiversity, Ecosystem, Animals, DNA Barcoding, Taxonomic/methods, DNA/genetics, Insects/genetics
3.
Proc Natl Acad Sci U S A ; 119(10): e2110756119, 2022 03 08.
Article in English | MEDLINE | ID: mdl-35235447

ABSTRACT

Significance: Aerosol-cloud interaction affects the cooling of Earth's climate, mostly by activation of aerosols as cloud condensation nuclei that can increase the amount of sunlight reflected back to space. But the controlling physical processes remain uncertain in current climate models. We present a lidar-based technique as a unique remote-sensing tool without thermodynamic assumptions for simultaneously profiling diurnal aerosol and water cloud properties with high resolution. Direct lateral observations of cloud properties show that the vertical structure of low-level water clouds can be far from being perfectly adiabatic. Furthermore, our analysis reveals that, instead of an increase of liquid water path (LWP) as proposed by most general circulation models, elevated aerosol loading can cause a net decrease in LWP.

4.
Glob Chang Biol ; 30(2): e17185, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38361266

ABSTRACT

Climate change in northern latitudes is increasing the vulnerability of peatlands and the riparian transition zones between peatlands and upland forests (referred to as ecotones) to greater frequency of wildland fires. We examined early post-fire vegetation regeneration following the 2011 Utikuma complex fire (central Alberta, Canada). This study examined 779 peatlands and adjacent ecotones, covering an area of ~182 km². Based on the known regional fire history, peatlands that burned in 2011 were stratified into either long return interval (LRI) fire regimes of >80 years (i.e., no recorded prior fire history) or short fire return interval (SRI) of 55 years (i.e., within the boundary of a documented severe fire in 1956). Data from six multitemporal airborne lidar surveys were used to quantify trajectories of vegetation change for 8 years prior to and 8 years following the 2011 fire. To date, no studies have quantified the impacts of post-fire regeneration following short versus long return interval fires across this broad range of peatlands with variable environmental and post-fire successional trajectories. We found that SRI peatlands demonstrated more rapid vascular and shrub growth rates, especially in peatland centers, than LRI peatlands. Bogs and fens burned in 1956, and with little vascular vegetation (classified as "open peatlands") prior to the 2011 fire, experienced the greatest changes. These peatlands tended to transition to vascular/shrub forms following the SRI fire, while open LRI peatlands were not significantly different from pre-fire conditions. The results of this study suggest the emergence of a positive feedback, where areas experiencing SRI fires in southern boreal peatlands are expected to transition to forested vegetation forms. Along fen edges and within bog centers, SRI fires are expected to reduce local peatland groundwater moisture-holding capacity and promote favorable conditions for increased fire frequency and severity in the future.


Assuntos
Incêndios , Incêndios Florestais , Florestas , Áreas Alagadas , Alberta , Ecossistema
5.
Ann Bot ; 2024 May 28.
Article in English | MEDLINE | ID: mdl-38804175

ABSTRACT

BACKGROUND AND AIMS: Lidar is a promising tool for fast and accurate measurements of trees. There are several approaches to estimate aboveground woody biomass using lidar point clouds. One of the most widely used methods involves fitting geometric primitives (e.g. cylinders) to the point cloud, thereby reconstructing both the geometry and topology of the tree. However, current algorithms are not suited to accurate estimation of the volume of finer branches, because point dispersion (e.g. due to the beam footprint) is large relative to the diameter of those structures. METHODS: We propose a new method that couples point cloud-based skeletonization with multi-linear statistical modelling of structural data to build a model (the structural model) that accurately estimates the aboveground woody biomass of trees, including finer branches, from high-quality lidar point clouds. The structural model was tested at the segment, axis, and branch levels, and compared to a cylinder fitting algorithm and to the pipe model theory. KEY RESULTS: The model accurately predicted biomass with 1.6% nRMSE at the segment scale under k-fold cross-validation. It also gave satisfactory results when up-scaled to the branch level, with significantly lower error (13% nRMSE) and bias (-5%) than conventional cylinder fitting to the point cloud (nRMSE: 92%, bias: 82%) or the pipe model theory (nRMSE: 31%, bias: -27%). The model was then applied at the whole-tree scale and showed that the sampled trees had more than 1.7 km of structures on average and that 96% of that length came from twigs (i.e. <5 cm diameter). Our results showed that neglecting twigs can lead to a significant underestimation of tree aboveground woody biomass (-21%). CONCLUSIONS: The structural model approach is an effective method that allows a more accurate estimation of the volumes of smaller branches from lidar point clouds. This method is versatile but requires manual measurements on branches for calibration. Nevertheless, once the model is calibrated, it can provide unbiased and large-scale estimations of tree structure volumes, making it an excellent choice for accurate 3D reconstruction of trees and estimating standing biomass.
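For readers unfamiliar with the error metrics quoted above, the sketch below shows one common way to compute nRMSE (normalized by the observed mean; the paper may normalize differently) and relative bias; the biomass arrays are hypothetical and not the study data.

```python
import numpy as np

def nrmse_and_bias(observed, predicted):
    """Normalized RMSE (% of the observed mean) and relative bias (%)."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    rmse = np.sqrt(np.mean((predicted - observed) ** 2))
    nrmse = 100.0 * rmse / observed.mean()
    bias = 100.0 * (predicted - observed).sum() / observed.sum()
    return nrmse, bias

# Hypothetical per-branch biomass values (kg), for illustration only
obs = [1.2, 0.8, 2.5, 0.4, 1.9]
pred = [1.1, 0.9, 2.3, 0.5, 1.7]
print(nrmse_and_bias(obs, pred))
```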

6.
Environ Sci Technol ; 58(5): 2413-2422, 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38266235

ABSTRACT

Wildland fire is a major global driver in the exchange of aerosols between terrestrial environments and the atmosphere. This exchange is commonly quantified using emission factors or the mass of a pollutant emitted per mass of fuel burned. However, emission factors for microbes aerosolized by fire have yet to be determined. Using bacterial cell concentrations collected on unmanned aircraft systems over forest fires in Utah, USA, we determine bacterial emission factors (BEFs) for the first time. We estimate that 1.39 × 10¹⁰ and 7.68 × 10¹¹ microbes are emitted for each Mg of biomass consumed in fires burning thinning residues and intact forests, respectively. These emissions exceed estimates of background bacterial emissions in other studies by 3-4 orders of magnitude. For the ∼2631 ha of similar forests in the Fishlake National Forest that burn each year on average, an estimated 1.35 × 10¹⁷ cells or 8.1 kg of bacterial biomass were emitted. BEFs were then used to parametrize a computationally scalable particle transport model that predicted over 99% of the emitted cells were transported beyond the 17.25 × 17.25 km model domain. BEFs can be used to expand understanding of global wildfire microbial emissions and their potential consequences to ecosystems, the atmosphere, and humans.


Assuntos
Incêndios , Incêndios Florestais , Humanos , Ecossistema , Florestas , Bactérias
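A bacterial emission factor is applied like any other emission factor: emissions equal activity times factor. The sketch below uses the intact-forest BEF and burned area quoted above, but the per-hectare fuel consumption is a labeled assumption, not a value from the paper.

```python
# Bacterial emission factor for intact forests (cells per Mg of biomass consumed), from the abstract
BEF_INTACT = 7.68e11

burned_area_ha = 2631          # ha burned per year on average (from the abstract)
fuel_consumed_mg_per_ha = 50   # Mg of biomass consumed per ha -- assumed for illustration only

cells_emitted = BEF_INTACT * burned_area_ha * fuel_consumed_mg_per_ha
print(f"Estimated bacterial cells emitted per year: {cells_emitted:.2e}")
```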
7.
Neurosurg Focus ; 56(1): E6, 2024 01.
Article in English | MEDLINE | ID: mdl-38163339

ABSTRACT

OBJECTIVE: A comprehensive understanding of microsurgical neuroanatomy, familiarity with the operating room environment, patient positioning in relation to the surgery, and knowledge of surgical approaches is crucial in neurosurgical education. However, challenges such as limited patient exposure, heightened patient safety concerns, a decreased availability of surgical cases during training, and difficulties in accessing cadavers and laboratories have adversely impacted this education. Three-dimensional (3D) models and augmented reality (AR) applications can be utilized to depict the cortical and white matter anatomy of the brain, create virtual models of patient surgical positions, and simulate the operating room and neuroanatomy laboratory environment. Herein, the authors, who used a single application, aimed to demonstrate the creation of 3D models of anatomical cadaver dissections, surgical approaches, patient surgical positions, and operating room and laboratory designs as alternative educational materials for neurosurgical training. METHODS: A 3D modeling application (Scaniverse) was employed to generate 3D models of cadaveric brain specimens and surgical approaches using photogrammetry. It was also used to create virtual representations of the operating room and laboratory environment, as well as the surgical positions of patients, by utilizing light detection and ranging (LiDAR) sensor technology for accurate spatial mapping. These virtual models were then presented in AR for educational purposes. RESULTS: Virtual representations in three dimensions were created to depict cadaver specimens, surgical approaches, patient surgical positions, and the operating room and laboratory environment. These models offer the flexibility of rotation and movement in various planes for improved visualization and understanding. The operating room and laboratory environment were rendered in three dimensions to create a simulation that could be navigated using AR and mixed reality technology. Realistic cadaveric models with intricate details were showcased on internet-based platforms and AR platforms for enhanced visualization and learning. CONCLUSIONS: The utilization of this cost-effective, straightforward, and readily available approach to generate 3D models has the potential to enhance neuroanatomical and neurosurgical education. These digital models can be easily stored and shared via the internet, making them accessible to neurosurgeons worldwide for educational purposes.


Subjects
Neuroanatomy, Operating Rooms, Humans, Neuroanatomy/education, Laboratories, Computer Simulation, Cadaver
8.
Sensors (Basel) ; 24(3)2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38339673

ABSTRACT

Modern visual perception techniques often rely on multiple heterogeneous sensors to achieve accurate and robust estimates. Knowledge of their relative positions is a mandatory prerequisite to accomplish sensor fusion. Typically, this result is obtained through a calibration procedure that correlates the sensors' measurements. In this context, we focus on LiDAR and RGB sensors that exhibit complementary capabilities. Given the sparsity of LiDAR measurements, current state-of-the-art calibration techniques often rely on complex or large calibration targets to resolve the relative pose estimation. As such, the geometric properties of the targets may hinder the calibration procedure in those cases where an ad hoc environment cannot be guaranteed. This paper addresses the problem of LiDAR-RGB calibration using common calibration patterns (i.e., A3 chessboard) with minimal human intervention. Our approach exploits the flatness of the target to find associations between the sensors' measurements, leading to robust features and retrieval of the solution through nonlinear optimization. The results of quantitative and comparative experiments with other state-of-the-art approaches show that our simple schema performs on par or better than existing methods that rely on complex calibration targets.
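The core idea of exploiting the target's flatness can be illustrated by minimizing point-to-plane residuals over the LiDAR-to-camera transform. This is a simplified sketch, not the authors' implementation: it assumes the chessboard plane has already been estimated in the camera frame and the corresponding LiDAR points segmented, and it uses random placeholder data.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def point_to_plane_residuals(params, lidar_pts, plane_n, plane_d):
    """Signed distances of transformed LiDAR points to the camera-frame target plane."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    pts_cam = lidar_pts @ R.T + t        # LiDAR points expressed in the camera frame
    return pts_cam @ plane_n + plane_d   # point-to-plane residuals

# Hypothetical inputs: segmented chessboard points in the LiDAR frame and the board
# plane (unit normal n, offset d) estimated from the RGB image. In practice several
# board poses are stacked to fully constrain the 6-DoF transform.
lidar_pts = np.random.rand(200, 3)
plane_n = np.array([0.0, 0.0, 1.0])
plane_d = -1.5

x0 = np.zeros(6)  # initial guess: identity rotation, zero translation
sol = least_squares(point_to_plane_residuals, x0, args=(lidar_pts, plane_n, plane_d))
print("Estimated extrinsics (rotation vector, translation):", sol.x)
```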

9.
Sensors (Basel) ; 24(6)2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38544025

ABSTRACT

An innovative mobile lidar device, developed to monitor volcanic plumes during explosive eruptions at Mt. Etna (Italy) and to analyse the optical properties of volcanic particles, was upgraded in October 2023 with the aim of improving volcanic plume retrievals. The new configuration of the lidar allows it to obtain new data on both the optical and the microphysical properties of the atmospheric aerosol. After the upgrade, the lidar is able to measure three backscattering coefficients, two extinction coefficients and two depolarisation ratios in a configuration defined as a "state-of-the-art lidar", from which properties such as the particle size distribution and the refractive index can be derived. During the lidar implementation, we were able to test the system's performance through specific calibration measurements. A comparison in an aerosol-free region (7.2-12 km) between the lidar signals at 1064 nm, 532 nm and 355 nm and the corresponding pure molecular profiles showed a relative difference of <1% for all wavelengths, highlighting the good dynamic range of the signals. The overlap correction allowed us to reduce the underestimation of the backscattering coefficient from 50% to 10% below 450 m and 750 m at 355 and 532 nm, respectively. The correct alignment between the laser beam and the receiver optical chain was tested using the signal received from the different quadrants of the telescope, and the relative differences between the four directions were comparable to zero, within the margin of error. Finally, the first measurement results are shown and compared with results obtained by other instruments, with the aim of proving the ability of the upgraded system to more precisely characterise aerosol optical and microphysical properties.
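The aerosol-free comparison described above amounts to normalizing the range-corrected signal to an attenuated molecular (Rayleigh) profile and checking the relative difference inside the calibration region. The sketch below is schematic only, with hypothetical profiles and a simple mean-based normalization rather than the instrument's actual processing chain.

```python
import numpy as np

def relative_difference(signal, molecular, z, zmin=7200.0, zmax=12000.0):
    """Mean relative difference (%) between a normalized lidar signal and the
    molecular profile inside an aerosol-free calibration region (zmin-zmax, m)."""
    mask = (z >= zmin) & (z <= zmax)
    scale = np.mean(signal[mask]) / np.mean(molecular[mask])  # normalize signal to the molecular level
    return 100.0 * np.mean((signal[mask] / scale - molecular[mask]) / molecular[mask])

# Hypothetical profiles for illustration only
z = np.linspace(500, 15000, 500)
molecular = np.exp(-z / 8000.0)
signal = 1.02 * molecular + np.random.normal(0, 1e-4, z.size)
print(f"Relative difference: {relative_difference(signal, molecular, z):.2f} %")
```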

10.
Sensors (Basel) ; 24(2)2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38276337

ABSTRACT

SLAM (Simultaneous Localization and Mapping) based on 3D LiDAR (Light Detection and Ranging) is an expanding field of research with numerous applications in the areas of autonomous driving, mobile robotics, and UAVs (Unmanned Aerial Vehicles). However, in most real-world scenarios, dynamic objects can negatively impact the accuracy and robustness of SLAM. In recent years, the challenge of achieving optimal SLAM performance in dynamic environments has motivated a variety of research efforts, but relatively few reviews cover this work. This paper examines the development process and current state of SLAM based on 3D LiDAR in dynamic environments. After analyzing the necessity and importance of filtering dynamic objects in SLAM, the paper proceeds along two dimensions. At the solution-oriented level, mainstream methods of filtering dynamic targets in 3D point clouds are introduced in detail, such as the ray-tracing-based approach, the visibility-based approach, the segmentation-based approach, and others. Then, at the problem-oriented level, this paper classifies dynamic objects and summarizes the corresponding processing strategies for different categories in the SLAM framework, such as online real-time filtering, post-processing after mapping, and long-term SLAM. Finally, the development trends and research directions of dynamic object filtering in SLAM based on 3D LiDAR are discussed and predicted.
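As a concrete illustration of the occupancy/visibility intuition behind several of the surveyed filters, the toy sketch below marks voxels whose occupancy flips between two co-registered scans as candidate dynamic regions. It is not any specific published algorithm; real ray-tracing or visibility checks are considerably more involved.

```python
import numpy as np

def voxelize(points, voxel_size=0.5):
    """Return the set of occupied voxel indices for a point cloud of shape (N, 3)."""
    return set(map(tuple, np.floor(points / voxel_size).astype(int)))

def dynamic_voxels(scan_prev, scan_curr, voxel_size=0.5):
    """Voxels occupied in exactly one of two consecutive scans are treated as
    candidate dynamic regions (a crude stand-in for visibility/ray-tracing checks)."""
    return voxelize(scan_prev, voxel_size) ^ voxelize(scan_curr, voxel_size)

# Hypothetical scans, assumed already co-registered in a common frame
scan_a = np.random.rand(1000, 3) * 20
scan_b = np.random.rand(1000, 3) * 20
print(f"{len(dynamic_voxels(scan_a, scan_b))} candidate dynamic voxels")
```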

11.
Sensors (Basel) ; 24(10)2024 May 15.
Article in English | MEDLINE | ID: mdl-38794002

ABSTRACT

This article presents a high-precision obstacle detection algorithm using 3D mechanical LiDAR to meet railway safety requirements. To address potential errors in the point cloud, we propose a calibration method based on projection and a novel rail extraction algorithm that effectively handles terrain variations and preserves the point cloud characteristics of the track area. We address the limitations of the traditional process of fixed Euclidean thresholds by proposing a modulation function, based on directional density variations, that adjusts the threshold dynamically. Finally, using PCA and local ICP, we conduct feature analysis and classification of the clustered data to obtain the obstacle clusters. We conducted continuous experiments at the test site, and the results showed that our system and algorithm achieved an STDR (stable detection rate) of over 95% for obstacles with a size of 15 cm × 15 cm × 15 cm within a range of ±25 m; for obstacles of 10 cm × 10 cm × 10 cm, an STDR of over 80% was achieved within a range of ±20 m. This research provides a practical solution for railway safety via obstacle detection.
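The PCA step used for feature analysis of clustered obstacle candidates can be sketched as an eigen-decomposition of each cluster's covariance, yielding simple shape descriptors. This is an illustrative sketch with hypothetical cluster points, not the paper's classifier.

```python
import numpy as np

def pca_cluster_features(points):
    """Eigen-decompose the covariance of a cluster (N x 3) and derive simple
    shape descriptors (linearity, planarity, sphericity) for classification."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / max(len(points) - 1, 1)
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]  # eigenvalues, descending
    return {
        "linearity": (l1 - l2) / l1,
        "planarity": (l2 - l3) / l1,
        "sphericity": l3 / l1,
    }

cluster = np.random.rand(150, 3) * [0.15, 0.15, 0.15]  # ~15 cm cube of points, for illustration
print(pca_cluster_features(cluster))
```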

12.
Sensors (Basel) ; 24(4)2024 Feb 17.
Article in English | MEDLINE | ID: mdl-38400430

ABSTRACT

To develop socially assistive robots for monitoring older adults at home, a sensor is required to identify residents and capture activities within the room without violating privacy. We focused on 2D Light Detection and Ranging (2D-LIDAR) capable of robustly measuring human contours in a room. While horizontal 2D contour data can provide human location, identifying humans and activities from these contours is challenging. To address this issue, we developed novel methods using deep learning techniques. This paper proposes methods for person identification and activity estimation in a room using contour point clouds captured by a single 2D-LIDAR at hip height. In this approach, human contours were extracted from 2D-LIDAR data using density-based spatial clustering of applications with noise. Subsequently, the person and activity within a 10-s interval were estimated employing deep learning techniques. Two deep learning models, namely Long Short-Term Memory (LSTM) and image classification (VGG16), were compared. In the experiment, a total of 120 min of walking data and 100 min of additional activities (door opening, sitting, and standing) were collected from four participants. The LSTM-based and VGG16-based methods achieved accuracies of 65.3% and 89.7%, respectively, for person identification among the four individuals. Furthermore, these methods demonstrated accuracies of 94.2% and 97.9%, respectively, for the estimation of the four activities. Despite the 2D-LIDAR point clouds at hip height containing small features related to gait, the results indicate that the VGG16-based method has the capability to identify individuals and accurately estimate their activities.


Subjects
Abdomen, Single Person, Humans, Aged, Gait, Long-Term Memory, Privacy
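The contour-extraction step described in this abstract (DBSCAN on 2D-LIDAR hits) can be sketched with scikit-learn. The scan points below are synthetic and the eps/min_samples parameters are illustrative, not the study's settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic 2D-LiDAR hits (x, y) in metres: two person-like blobs plus scattered noise
rng = np.random.default_rng(0)
person_a = rng.normal([1.0, 2.0], 0.05, size=(40, 2))
person_b = rng.normal([3.0, 1.0], 0.05, size=(40, 2))
noise = rng.uniform(0, 5, size=(20, 2))
scan = np.vstack([person_a, person_b, noise])

# Label -1 marks noise points rejected by the density criterion
labels = DBSCAN(eps=0.15, min_samples=5).fit_predict(scan)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"Detected {n_clusters} human-contour candidates")
```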
13.
Sensors (Basel) ; 24(6)2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38544137

ABSTRACT

This paper presents an innovative dataset designed explicitly for challenging agricultural environments, such as greenhouses, where precise localization is crucial but GNSS accuracy may be compromised by construction elements and the crop. The dataset was collected using a mobile platform equipped with a set of sensors typically used in mobile robots as it was moved through all the corridors of a typical Mediterranean greenhouse featuring tomato crops. This dataset presents a unique opportunity for constructing detailed 3D models of plants in such indoor-like spaces, with potential applications such as robotized spraying. For the first time, to the authors' knowledge, a dataset suitable for testing simultaneous localization and mapping (SLAM) methods is presented in a greenhouse environment, which poses unique challenges. The suitability of the dataset for this purpose is assessed by presenting SLAM results with state-of-the-art algorithms. The dataset is available online.

14.
Sensors (Basel) ; 24(3)2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38339539

ABSTRACT

Recently, new semantic segmentation and object detection methods have been proposed for the direct processing of three-dimensional (3D) LiDAR sensor point clouds. LiDAR can produce highly accurate and detailed 3D maps of natural and man-made environments and is used for sensing in many contexts due to its ability to capture more information, its robustness to dynamic changes in the environment compared to an RGB camera, and its cost, which has decreased in recent years and is an important factor for many application scenarios. The challenge with high-resolution 3D LiDAR sensors is that they can output large amounts of 3D data with up to a few million points per second, which is difficult to process in real time when applying complex algorithms and models for efficient semantic segmentation. Most existing approaches are either only suitable for relatively small point clouds or rely on computationally intensive sampling techniques to reduce their size. As a result, most of these methods do not work in real time in realistic field robotics application scenarios, making them unsuitable for practical applications. Systematic point selection is a possible solution to reduce the amount of data to be processed. Although such an approach is memory- and computationally efficient, it selects only a small subset of points, which may result in important features being missed. To address this problem, our proposed systematic sampling method, called SyS3DS (Systematic Sampling for 3D Semantic Segmentation), incorporates a technique in which the local neighbours of each point are retained to preserve geometric details. SyS3DS is based on the graph colouring algorithm and ensures that the selected points are non-adjacent in order to obtain a subset of points that are representative of the 3D points in the scene. To take advantage of ensemble learning, we pass a different subset of nodes for each epoch. This leverages a new technique called auto-ensemble, in which ensemble learning is applied as a collection of different learning models instead of tuning different hyperparameters individually during training and validation. SyS3DS has been shown to process up to 1 million points in a single pass. It outperforms the state of the art in efficient semantic segmentation on large datasets such as Semantic3D. We also present a preliminary study on the validity of the performance of LiDAR-only data, i.e., intensity values from LiDAR sensors without RGB values, for semi-autonomous robot perception.
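The non-adjacency constraint at the heart of SyS3DS can be approximated by a greedy independent-set selection over a spatial neighbourhood graph. This is a rough sketch under that simplification; the published SyS3DS algorithm (graph colouring plus neighbour retention) differs in detail, and the radius value is an assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def greedy_nonadjacent_subset(points, radius=0.3):
    """Greedy maximal independent set over a radius graph: each kept point blocks
    its neighbours, yielding a spatially systematic, non-adjacent subset."""
    tree = cKDTree(points)
    kept, blocked = [], np.zeros(len(points), dtype=bool)
    for idx in range(len(points)):
        if blocked[idx]:
            continue
        kept.append(idx)
        blocked[tree.query_ball_point(points[idx], radius)] = True
    return np.array(kept)

points = np.random.rand(5000, 3) * 10  # hypothetical scene points
subset = greedy_nonadjacent_subset(points)
print(f"Kept {len(subset)} of {len(points)} points")
```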

15.
Sensors (Basel) ; 24(9)2024 May 06.
Article in English | MEDLINE | ID: mdl-38733047

ABSTRACT

LiDAR-based 3D object detection and localization are crucial components of autonomous navigation systems, including autonomous vehicles and mobile robots. Most existing LiDAR-based 3D object detection and localization approaches primarily use geometric or structural feature abstractions from LiDAR point clouds. However, these approaches can be susceptible to environmental noise due to adverse weather conditions or the presence of highly scattering media. In this work, we propose an intensity-aware voxel encoder for robust 3D object detection. The proposed voxel encoder generates an intensity histogram that describes the distribution of point intensities within a voxel and is used to enhance the voxel feature set. We integrate this intensity-aware encoder into an efficient single-stage voxel-based detector for 3D object detection. Experimental results obtained using the KITTI dataset show that our method achieves results comparable to the state-of-the-art method for car objects, in both 3D and bird's-eye-view detection, and superior results for pedestrian and cyclist objects. Furthermore, our model achieves a detection rate of 40.7 FPS at inference time, which is higher than that of the state-of-the-art methods while incurring a lower computational cost.
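The intensity-aware encoding described above boils down to augmenting each voxel's features with a histogram of its point intensities. The sketch below is a simplified, NumPy-only illustration; the voxel size, bin count, and intensity range are assumptions rather than the paper's settings.

```python
import numpy as np

def voxel_intensity_histograms(points, intensities, voxel_size=0.2, n_bins=8):
    """Group points into voxels and build a normalized intensity histogram per voxel."""
    voxel_ids = np.floor(points / voxel_size).astype(int)
    per_voxel = {}
    for vid, inten in zip(map(tuple, voxel_ids), intensities):
        per_voxel.setdefault(vid, []).append(inten)
    return {
        vid: np.histogram(vals, bins=n_bins, range=(0.0, 1.0), density=True)[0]
        for vid, vals in per_voxel.items()
    }

pts = np.random.rand(1000, 3) * 5   # hypothetical point coordinates (m)
inten = np.random.rand(1000)        # hypothetical normalized reflectance values
hists = voxel_intensity_histograms(pts, inten)
print(f"{len(hists)} voxels, histogram length {len(next(iter(hists.values())))}")
```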

16.
Sensors (Basel) ; 24(4)2024 Feb 11.
Article in English | MEDLINE | ID: mdl-38400350

ABSTRACT

Most automated vehicles (AVs) are equipped with abundant sensors, which enable AVs to improve ride comfort by sensing road elevation features such as speed bumps. This paper proposes a method for estimating road impulse features ahead of the vehicle in urban environments with microelectromechanical system (MEMS) light detection and ranging (LiDAR). The proposed method uses real-time estimation of the vehicle pose to address the sparse sampling of the LiDAR. Considering the LiDAR error model, the proposed method builds a grid height measurement model by maximum likelihood estimation. Moreover, it fuses the height measurements with the LiDAR error model via a Kalman filter and introduces motion uncertainty to form an elevation weighting method based on the confidence ellipse. In addition, a gating strategy based on the Mahalanobis distance is integrated to handle sharp changes in elevation. The proposed method is tested in an urban environment, and the results demonstrate its effectiveness.
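The per-cell elevation filter described above can be sketched as a one-dimensional Kalman update with a Mahalanobis gate that rejects implausible height jumps. All noise values and the gating threshold below are illustrative, not the paper's parameters.

```python
import numpy as np

def kalman_height_update(h, P, z, R, Q=1e-4, gate=3.0):
    """1-D Kalman update of a grid cell's elevation estimate h (variance P) with a
    new LiDAR height measurement z (variance R), gated on the Mahalanobis distance."""
    P_pred = P + Q                       # predict: inflate variance by process/motion noise
    innovation = z - h
    S = P_pred + R                       # innovation covariance
    if abs(innovation) / np.sqrt(S) > gate:
        return h, P_pred                 # gate: reject sharp elevation changes
    K = P_pred / S                       # Kalman gain
    return h + K * innovation, (1.0 - K) * P_pred

h, P = 0.0, 1.0
for z in [0.02, 0.01, 0.85, 0.03]:       # the 0.85 m jump gets gated out
    h, P = kalman_height_update(h, P, z, R=0.01)
print(f"Filtered cell height: {h:.3f} m")
```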

17.
Sensors (Basel) ; 24(14)2024 Jul 20.
Article in English | MEDLINE | ID: mdl-39066115

ABSTRACT

3D object detection is a challenging and promising task for autonomous driving and robotics, benefiting significantly from multi-sensor fusion, such as LiDAR and cameras. Conventional methods for sensor fusion rely on a projection matrix to align the features from LiDAR and cameras. However, these methods often suffer from inadequate flexibility and robustness, leading to lower alignment accuracy under complex environmental conditions. Addressing these challenges, in this paper, we propose a novel Bidirectional Attention Fusion module, named BAFusion, which effectively fuses the information from LiDAR and cameras using cross-attention. Unlike the conventional methods, our BAFusion module can adaptively learn the cross-modal attention weights, making the approach more flexible and robust. Moreover, drawing inspiration from advanced attention optimization techniques in 2D vision, we developed the Cross Focused Linear Attention Fusion Layer (CFLAF Layer) and integrated it into our BAFusion pipeline. This layer optimizes the computational complexity of attention mechanisms and facilitates advanced interactions between image and point cloud data, showcasing a novel approach to addressing the challenges of cross-modal attention calculations. We evaluated our method on the KITTI dataset using various baseline networks, such as PointPillars, SECOND, and Part-A2, and demonstrated consistent improvements in 3D object detection performance over these baselines, especially for smaller objects like cyclists and pedestrians. Our approach achieves competitive results on the KITTI benchmark.
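The bidirectional cross-attention idea can be illustrated with standard multi-head attention applied in both directions between camera and point-cloud feature tokens. This PyTorch sketch is a generic stand-in, not the published BAFusion or CFLAF layers, and all dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class BidirectionalCrossAttention(nn.Module):
    """Fuse image and point-cloud tokens with cross-attention in both directions.
    Illustrative only; the paper's BAFusion/CFLAF layers differ in detail."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.img_to_pc = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pc_to_img = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, pc_tokens, img_tokens):
        # Point-cloud tokens attend to image features, and vice versa
        pc_fused, _ = self.img_to_pc(pc_tokens, img_tokens, img_tokens)
        img_fused, _ = self.pc_to_img(img_tokens, pc_tokens, pc_tokens)
        return pc_tokens + pc_fused, img_tokens + img_fused

pc = torch.randn(2, 500, 128)    # hypothetical pillar/voxel features
img = torch.randn(2, 300, 128)   # hypothetical image patch features
pc_out, img_out = BidirectionalCrossAttention()(pc, img)
print(pc_out.shape, img_out.shape)
```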

18.
Sensors (Basel) ; 24(13)2024 Jun 25.
Article in English | MEDLINE | ID: mdl-39000898

ABSTRACT

The motivation behind this research is the lack of an open-access underground mining shaft data set in the literature. For this reason, our data set can be used for many research purposes, such as shaft inspection, 3D measurements, simultaneous localization and mapping, artificial intelligence, etc. The data collection method incorporates a rotated Velodyne VLP-16, a Velodyne Ultra Puck VLP-32c, a Livox Tele-15, an IMU Xsens MTi-30 and a Faro Focus 3D. The ground truth data were acquired with a geodetic survey comprising 15 ground control points and 6 Faro Focus 3D terrestrial laser scanner stations, yielding a total of 273,784,932 3D measurement points. This data set provides an end-user case study of realistic applications in mobile mapping technology. The goal of this research was to fill the gap in the underground mining data set domain. The result is the first open-access data set for an underground mining shaft (shaft depth: -300 m).

19.
Sensors (Basel) ; 24(13)2024 Jul 06.
Article in English | MEDLINE | ID: mdl-39001172

ABSTRACT

Studies have shown that vehicle trajectory data are effective for calibrating microsimulation models. Light Detection and Ranging (LiDAR) technology offers high-resolution 3D data, allowing for detailed mapping of the surrounding environment, including road geometry, roadside infrastructure, and moving objects such as vehicles, cyclists, and pedestrians. Unlike other traditional methods of trajectory data collection, LiDAR's high-speed data processing, fine angular resolution, high measurement accuracy, and high performance in adverse weather and low-light conditions make it well suited for applications requiring real-time response, such as autonomous vehicles. This research presents a comprehensive framework for integrating LiDAR sensor data into simulation models and their accurate calibration strategies for proactive safety analysis. Vehicle trajectory data were extracted from LiDAR point clouds collected at six urban signalized intersections in Lubbock, Texas, USA. Each study intersection was modeled with PTV VISSIM and calibrated to replicate the observed field scenarios. The Directed Brute Force method was used to calibrate two car-following and two lane-change parameters of the Wiedemann 1999 model in VISSIM, resulting in an average accuracy of 92.7%. Rear-end conflicts extracted from the calibrated models, combined with a ten-year historical crash dataset, were fitted to a Negative Binomial (NB) model to estimate the model's parameters. At all six intersections, rear-end conflict count is a statistically significant predictor (p-value < 0.05) of observed rear-end crash frequency. The outcome of this study provides transportation professionals with a framework for the combined use of LiDAR-based vehicle trajectory data, microsimulation, and surrogate safety assessment tools. This integration allows for more accurate and proactive safety evaluations, which are essential for designing safer transportation systems, developing effective traffic control strategies, and predicting future congestion problems.
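The crash-frequency model described above is a standard Negative Binomial regression. A minimal statsmodels sketch follows, using hypothetical conflict counts and crash frequencies rather than the study data; note that the GLM formulation below fixes the dispersion parameter at its default, whereas a full analysis would estimate it.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: simulated annual rear-end conflicts vs. observed rear-end crashes
rear_end_conflicts = np.array([120, 85, 200, 150, 60, 175])
observed_crashes = np.array([4, 2, 7, 5, 1, 6])

X = sm.add_constant(rear_end_conflicts)   # intercept + conflict-count predictor
nb_model = sm.GLM(observed_crashes, X,
                  family=sm.families.NegativeBinomial()).fit()
print(nb_model.summary())                 # the p-value on the conflict term tests significance
```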

20.
Sensors (Basel) ; 24(15)2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39124009

ABSTRACT

The intrusion of objects into track areas is a significant issue affecting the safety of urban rail transit systems. In recent years, obstacle detection technology based on LiDAR has been developed to identify potential issues, in which accurately extracting the track area is critical for segmentation and collision avoidance. However, because of the sparsity limitations inherent in LiDAR data, existing methods can only segment track regions over short distances, which are often insufficient given the speed and braking distance of urban rail trains. As such, a new approach is developed in this study to indirectly extract track areas by detecting references parallel to the rails (e.g., tunnel walls, protective walls, and sound barriers). Reference point selection and curve fitting are then applied to generate a reference curve on either side of the track. A centerline is then extrapolated from the two curves and expanded to produce a 2D track area with the given size specifications. Finally, the 3D track area is acquired by detecting the ground and removing points that are either too high or too low. The proposed technique was evaluated using a variety of scenes, including tunnels, elevated sections, and level urban rail transit lines. The results showed this method could successfully extract track regions from LiDAR data over significantly longer distances than conventional algorithms.
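The reference-curve and centerline construction described above can be sketched with simple polynomial fits to the wall points on each side of the track. The coordinates, polynomial degree, and track half-width below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def track_centerline(left_wall, right_wall, degree=3):
    """Fit y = f(x) curves to the left/right reference points (e.g., tunnel walls)
    and average them to obtain a centerline; the 2D track area is the centerline
    expanded laterally by a fixed half-width."""
    x = np.linspace(max(left_wall[:, 0].min(), right_wall[:, 0].min()),
                    min(left_wall[:, 0].max(), right_wall[:, 0].max()), 200)
    left_fit = np.polyval(np.polyfit(left_wall[:, 0], left_wall[:, 1], degree), x)
    right_fit = np.polyval(np.polyfit(right_wall[:, 0], right_wall[:, 1], degree), x)
    return np.column_stack([x, (left_fit + right_fit) / 2.0])

# Hypothetical wall points along a gently curving tunnel section
x = np.linspace(0, 100, 50)
left = np.column_stack([x, 2.5 + 0.001 * x**1.5 + np.random.normal(0, 0.05, x.size)])
right = np.column_stack([x, -2.5 + 0.001 * x**1.5 + np.random.normal(0, 0.05, x.size)])
centerline = track_centerline(left, right)
half_width = 1.8  # assumed half-width (m) of the 2D track envelope
print(centerline[:3], half_width)
```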
