1.
Sensors (Basel) ; 24(13)2024 Jul 06.
Article in English | MEDLINE | ID: mdl-39001172

ABSTRACT

Studies have shown that vehicle trajectory data are effective for calibrating microsimulation models. Light Detection and Ranging (LiDAR) technology offers high-resolution 3D data, allowing for detailed mapping of the surrounding environment, including road geometry, roadside infrastructure, and moving objects such as vehicles, cyclists, and pedestrians. Unlike traditional methods of trajectory data collection, LiDAR's high-speed data processing, fine angular resolution, high measurement accuracy, and strong performance in adverse weather and low-light conditions make it well suited for applications requiring real-time response, such as autonomous vehicles. This research presents a comprehensive framework for integrating LiDAR sensor data into simulation models, together with accurate calibration strategies, for proactive safety analysis. Vehicle trajectory data were extracted from LiDAR point clouds collected at six urban signalized intersections in Lubbock, Texas, USA. Each study intersection was modeled in PTV VISSIM and calibrated to replicate the observed field scenarios. The Directed Brute Force method was used to calibrate two car-following and two lane-change parameters of the Wiedemann 1999 model in VISSIM, resulting in an average accuracy of 92.7%. Rear-end conflicts extracted from the calibrated models, combined with a ten-year historical crash dataset, were fitted to a Negative Binomial (NB) model to estimate the model's parameters. At all six intersections, rear-end conflict count is a statistically significant predictor (p-value < 0.05) of observed rear-end crash frequency. The outcome of this study provides transportation professionals with a framework for the combined use of LiDAR-based vehicle trajectory data, microsimulation, and surrogate safety assessment tools. This integration allows for more accurate and proactive safety evaluations, which are essential for designing safer transportation systems, developing effective traffic control strategies, and predicting future congestion problems.
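To illustrate the abstract's final modeling step, here is a minimal sketch of fitting a Negative Binomial regression of crash frequency on simulated conflict counts with statsmodels; the column names and the six rows of data are hypothetical stand-ins, not the study's dataset.

```python
# Regress observed rear-end crash frequency on simulated conflict counts
# with a Negative Binomial GLM. Data values are hypothetical illustrations.
import pandas as pd
import statsmodels.api as sm

data = pd.DataFrame({
    "rear_end_crashes": [14, 9, 22, 17, 11, 19],           # ten-year counts, one row per intersection
    "rear_end_conflicts": [310, 180, 505, 390, 240, 450],  # from calibrated VISSIM models
})

X = sm.add_constant(data[["rear_end_conflicts"]])
nb_model = sm.GLM(data["rear_end_crashes"], X,
                  family=sm.families.NegativeBinomial()).fit()
print(nb_model.summary())  # a conflict coefficient with p < 0.05 would match the paper's finding
```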

2.
Sensors (Basel) ; 24(14)2024 Jul 11.
Article in English | MEDLINE | ID: mdl-39065873

ABSTRACT

In the context of LiDAR sensor-based autonomous vehicles, segmentation networks play a crucial role in accurately identifying and classifying objects. However, discrepancies between the types of LiDAR sensors used for training the network and those deployed in real-world driving environments can lead to performance degradation due to differences in the input tensor attributes, such as x, y, and z coordinates, and intensity. To address this issue, we propose novel intensity rendering and data interpolation techniques. Our study evaluates the effectiveness of these methods by applying them to object tracking in real-world scenarios. The proposed solutions aim to harmonize the differences between sensor data, thereby enhancing the performance and reliability of deep learning networks for autonomous vehicle perception systems. Additionally, our algorithms prevent performance degradation, even when different types of sensors are used for the training data and real-world applications. This approach allows for the use of publicly available open datasets without the need to spend extensive time on dataset construction and annotation using the actual sensors deployed, thus significantly saving time and resources. When applying the proposed methods, we observed an approximate 20% improvement in mIoU performance compared to scenarios without these enhancements.
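The abstract does not spell out the rendering and interpolation algorithms, so the following is a hedged sketch of the general idea, assuming a range-image point cloud layout: normalize intensity into a common range and interpolate vertical channels to a target sensor's channel count.

```python
# Hedged sketch of sensor-harmonization ideas like those described above.
# Not the paper's exact algorithm; layout and parameters are assumptions.
import numpy as np

def render_intensity(intensity: np.ndarray) -> np.ndarray:
    """Map raw intensity to [0, 1] via percentile normalization."""
    lo, hi = np.percentile(intensity, [1, 99])
    return np.clip((intensity - lo) / (hi - lo + 1e-9), 0.0, 1.0)

def interpolate_channels(points: np.ndarray, src_rings: int, dst_rings: int) -> np.ndarray:
    """Resample a range-image cloud so src_rings channels mimic dst_rings channels.
    points: (src_rings, cols, 4) array of x, y, z, intensity."""
    src = np.linspace(0.0, 1.0, src_rings)
    dst = np.linspace(0.0, 1.0, dst_rings)
    out = np.empty((dst_rings, points.shape[1], points.shape[2]))
    for c in range(points.shape[1]):          # interpolate each column and attribute
        for a in range(points.shape[2]):
            out[:, c, a] = np.interp(dst, src, points[:, c, a])
    return out
```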

3.
Sensors (Basel) ; 24(14)2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39066044

ABSTRACT

A system has been developed to convert manual wheelchairs into electric wheelchairs, assisting users through an implemented algorithm that ensures safe driving and obstacle avoidance. While manual wheelchairs are typically controlled indoors based on user preferences, they do not guarantee safe driving in areas outside the user's field of vision. The proposed model utilizes the dynamic window approach, adapted specifically for wheelchair use, to achieve obstacle avoidance. This method evaluates potential movements within a defined velocity space to calculate the optimal path, providing seamless and safe driving assistance in real time. This innovative approach enhances user assistance and safety by integrating state-of-the-art algorithms built on the dynamic window approach alongside advanced sensor technology. With the aid of LiDAR sensors, the system perceives the wheelchair's surroundings and generates real-time speed values within the algorithm framework to ensure secure driving. The model's ability to adapt to indoor environments and its robust performance in real-world scenarios underscore its potential for widespread application. The system has undergone various tests, which conclusively demonstrate that it aids users in avoiding obstacles and ensures safe driving. These tests show significant improvements in maneuverability and user safety, highlighting a noteworthy advancement in assistive technology for individuals with limited mobility.
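As a reference for the method named above, here is a compact, illustrative sketch of the dynamic window approach: sample admissible velocity pairs, forward-simulate each briefly, and score trajectories by obstacle clearance, goal heading, and speed. All parameters are assumptions, not the paper's tuned values.

```python
# Minimal Dynamic Window Approach (DWA) sketch; parameters are illustrative.
import numpy as np

def dwa_step(pose, v_range, w_range, obstacles, goal, dt=0.1, horizon=1.0):
    """pose: (x, y, theta); obstacles: (N, 2) array; returns best (v, w)."""
    best, best_score = (0.0, 0.0), -np.inf
    for v in np.linspace(*v_range, 8):
        for w in np.linspace(*w_range, 15):
            x, y, th = pose
            clearance = np.inf
            for _ in range(int(horizon / dt)):      # forward-simulate this trajectory
                th += w * dt
                x += v * np.cos(th) * dt
                y += v * np.sin(th) * dt
                d = np.hypot(obstacles[:, 0] - x, obstacles[:, 1] - y).min()
                clearance = min(clearance, d)
            if clearance < 0.3:                     # assumed collision margin in metres
                continue
            heading = -np.hypot(goal[0] - x, goal[1] - y)   # closer to goal is better
            score = heading + 0.5 * clearance + 0.2 * v
            if score > best_score:
                best_score, best = score, (v, w)
    return best

cmd = dwa_step((0, 0, 0), (0.0, 0.8), (-1.0, 1.0),
               obstacles=np.array([[1.5, 0.2], [2.0, -0.5]]), goal=(3.0, 0.0))
print("commanded (v, w):", cmd)
```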


Subjects
Algorithms , Wheelchairs , Humans , Equipment Design , Automobile Driving , Electricity
4.
ACS Appl Mater Interfaces ; 16(15): 19121-19136, 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38588341

ABSTRACT

Plate-type hollow black TiO2 (HL/BT) with high NIR reflectance was fabricated for the first time as a LiDAR-detectable black material. A TiO2 layer was formed on commercial-grade glass using the sol-gel method to obtain a plate-type structure. The glass template was then etched with hydrofluoric acid to form a hollow structure, and blackness was achieved through NaBH4 reduction, which converted TiO2 to black TixO2x-1, i.e., reduced Ti4+ to Ti3+ and Ti2+. The blackness of the HL/BT material was preserved by a novel approach in which etching was performed prior to reduction. The thickness of the TiO2 layer was controlled to maximize the NIR reflectance when applied as paint. The HL/BT material with a thickness of 140 nm (HL/BT140) showed a blackness (L*) of 13.3 and a high NIR reflectance of 23.6% at a wavelength of 905 nm. This is attributed to the effective light reflection at the interface created by the TiO2 layer and the hollow structure. Plate-type HL/BT140 provides excellent spreadability, durability, and thermal stability in practical paint applications compared with sphere-type materials, owing to its larger contact area with the applied surface, making it suitable for use as a LiDAR-detectable inorganic black pigment in autonomous-driving environments.

5.
Sensors (Basel) ; 24(6)2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38544108

ABSTRACT

Virtual testing and validation are building blocks in the development of autonomous systems, in particular autonomous driving. Perception sensor models have gained attention as a way to cover the entire tool chain of the sense-plan-act cycle in a realistic test setup. Various kinds of lidar sensor models are available in the literature and in state-of-the-art software tools. We present a point cloud lidar sensor model, based on ray tracing, developed for a modular software architecture and also usable stand-alone. The model is highly parametrizable and designed as a toolbox to simulate different kinds of lidar sensors. It is linked to an infrared material database to incorporate physical sensor effects introduced by the ray-surface interaction. The maximum detectable range depends on the material reflectivity, which this approach can capture. The angular dependence and maximum range for different Lambertian target materials are studied, and point clouds from a scene in an urban street environment are compared for different sensor parameters.
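The reflectivity-dependent maximum range mentioned above follows directly from the lidar range equation for a diffuse target; the sketch below solves it for a Lambertian surface under hypothetical emitter and receiver constants.

```python
# Simplified lidar range equation for a Lambertian target. All constants are
# hypothetical, not the paper's calibrated values.
import numpy as np

def max_range(p_tx=50.0, rho=0.5, theta_deg=0.0, aperture_m2=1e-4,
              efficiency=0.7, p_min=1e-8):
    """Solve P_rx = P_tx * rho * cos(theta) * A * eta / (pi * R^2) >= P_min for R."""
    theta = np.radians(theta_deg)
    return np.sqrt(p_tx * rho * np.cos(theta) * aperture_m2 * efficiency
                   / (np.pi * p_min))

for rho in (0.1, 0.5, 0.9):   # dark to bright Lambertian targets
    print(f"reflectivity {rho:.1f}: max range ~ {max_range(rho=rho):.0f} m")
```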

6.
Sensors (Basel) ; 24(3)2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38339539

ABSTRACT

Recently, new semantic segmentation and object detection methods have been proposed for the direct processing of three-dimensional (3D) LiDAR sensor point clouds. LiDAR can produce highly accurate and detailed 3D maps of natural and man-made environments and is used for sensing in many contexts due to its ability to capture more information, its robustness to dynamic changes in the environment compared to an RGB camera, and its cost, which has decreased in recent years and is an important factor for many application scenarios. The challenge with high-resolution 3D LiDAR sensors is that they can output large amounts of 3D data, up to a few million points per second, which is difficult to process in real time when applying complex algorithms and models for efficient semantic segmentation. Most existing approaches are either only suitable for relatively small point clouds or rely on computationally intensive sampling techniques to reduce their size. As a result, most of these methods do not work in real time in realistic field robotics application scenarios, making them unsuitable for practical applications. Systematic point selection is a possible solution to reduce the amount of data to be processed; although it is memory- and computation-efficient, it selects only a small subset of points, which may result in important features being missed. To address this problem, our proposed systematic sampling method, SyS3DS (Systematic Sampling for 3D Semantic Segmentation), incorporates a technique in which the local neighbours of each point are retained to preserve geometric details. SyS3DS is based on the graph colouring algorithm and ensures that the selected points are non-adjacent, yielding a subset of points that is representative of the 3D points in the scene. To take advantage of ensemble learning, we pass a different subset of nodes for each epoch. This leverages a technique called auto-ensemble, in which ensemble learning is realized as a collection of different learning models instead of tuning different hyperparameters individually during training and validation. SyS3DS has been shown to process up to 1 million points in a single pass. It outperforms the state of the art in efficient semantic segmentation on large datasets such as Semantic3D. We also present a preliminary study on the performance achievable with LiDAR-only data, i.e., intensity values from LiDAR sensors without RGB values, for semi-autonomous robot perception.
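A hedged sketch of the non-adjacency idea behind SyS3DS follows: build a k-nearest-neighbour graph over the cloud and greedily select points whose neighbours have not yet been selected (an independent set, in graph-colouring terms). The details necessarily differ from the paper's exact algorithm.

```python
# Greedy selection of a non-adjacent (independent) point subset on a k-NN graph.
import numpy as np
from scipy.spatial import cKDTree

def sample_non_adjacent(points: np.ndarray, k: int = 8) -> np.ndarray:
    """points: (N, 3). Returns indices of a non-adjacent subset."""
    tree = cKDTree(points)
    _, nbrs = tree.query(points, k=k + 1)      # first neighbour is the point itself
    blocked = np.zeros(len(points), dtype=bool)
    chosen = []
    for i in np.random.permutation(len(points)):
        if not blocked[i]:
            chosen.append(i)
            blocked[nbrs[i]] = True            # its neighbours become ineligible
    return np.array(chosen)

cloud = np.random.rand(100_000, 3)
subset = sample_non_adjacent(cloud)
print(f"kept {len(subset)} of {len(cloud)} points")
```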

7.
Sensors (Basel) ; 23(23)2023 Nov 26.
Article in English | MEDLINE | ID: mdl-38067798

ABSTRACT

Many modern automated vehicle sensor systems use light detection and ranging (LiDAR) sensors. The prevailing technology is scanning LiDAR, in which a collimated laser beam illuminates objects sequentially, point by point, to capture 3D range data. In current systems, the point clouds from the LiDAR sensors are mainly used for object detection. To estimate the velocity of an object of interest (OoI) in the point cloud, tracking of the object or sensor data fusion is needed. Scanning LiDAR sensors exhibit the motion distortion effect, which occurs when objects move relative to the sensor. This effect is often filtered out by means of sensor data fusion so that an undistorted point cloud can be used for object detection. In this study, we developed a method using an artificial neural network to estimate an object's velocity and direction of motion in the sensor's field of view (FoV) based on the motion distortion effect, without any sensor data fusion. The network was trained and evaluated with a synthetic dataset featuring the motion distortion effect. With the method presented in this paper, one can estimate the velocity and direction of an OoI that moves independently of the sensor from a single point cloud captured by a single sensor. The method achieves a root mean squared error (RMSE) of 0.1187 m/s and a two-sigma confidence interval of [-0.0008 m/s, 0.0017 m/s] for the axis-wise estimation of an object's relative velocity, and an RMSE of 0.0815 m/s and a two-sigma confidence interval of [0.0138 m/s, 0.0170 m/s] for the estimation of the resultant velocity. The extracted velocity information (4D-LiDAR) is available for motion prediction and object tracking and can lead to more reliable velocity data due to the added redundancy for sensor data fusion.
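The motion distortion effect that the network exploits can be reproduced in a few lines: because a scanning lidar fires beams sequentially over the sweep, a moving target is smeared by exactly v * T, which is the signal the velocity estimator learns from. Values below are illustrative.

```python
# Minimal illustration of scanning-lidar motion distortion.
import numpy as np

scan_period = 0.1                               # s, one full sweep
n_beams = 360
t = np.linspace(0.0, scan_period, n_beams)      # firing time of each beam
target_x0, target_v = 10.0, 5.0                 # m, m/s along x (illustrative)

# True target position when each beam fires; a static-world reconstruction
# renders these samples as one sheared ("distorted") object.
x_at_fire = target_x0 + target_v * t
distortion = x_at_fire.max() - x_at_fire.min()
print(f"apparent stretch over one sweep: {distortion:.2f} m "
      f"(= v * T = {target_v * scan_period:.2f} m)")
```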

8.
Sensors (Basel) ; 23(21)2023 Nov 05.
Article in English | MEDLINE | ID: mdl-37960680

ABSTRACT

Many fields are currently investigating the use of convolutional neural networks (CNNs) to detect specific objects in three-dimensional data. While algorithms based on three-dimensional data are more stable and less sensitive to lighting conditions than algorithms based on two-dimensional image data, they require more computation, making it difficult to run CNN algorithms on three-dimensional data in lightweight embedded systems. In this paper, we propose a method that processes three-dimensional data with a simple algorithm, instead of complex operations such as convolution, and uses the data's physical characteristics to generate ROIs for a CNN object detection algorithm based on two-dimensional image data. After preprocessing, the LiDAR point cloud data are separated into individual objects through clustering, and semantic detection is performed by a machine-learning classifier trained on extracted physical characteristics. Final object recognition is performed by a 2D image-based object detection algorithm that bypasses bounding-box tracking by generating individual 2D image regions from the location and size of the objects initially found by semantic detection. This allows the physical characteristics of 3D data to improve the accuracy of 2D image-based object detection, even in environments where it is difficult to collect data from camera sensors, resulting in a lighter system than 3D data-based object detection algorithms. The proposed model achieved an accuracy of 81.84% with the YOLO v5 algorithm on an embedded board, 1.92 percentage points higher than the baseline model. It achieves 47.41% accuracy in an environment with 40% higher brightness and 54.12% accuracy in an environment with 40% lower brightness, which is 8.97 and 13.58 percentage points higher than the baseline model, respectively, so high accuracy is maintained even in non-optimal brightness environments. The proposed technique also reduces execution time, depending on the operating environment of the detection model.
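A hedged sketch of the pipeline's first stages follows: cluster the point cloud into objects with DBSCAN, then convert each cluster's projected extent into a 2D ROI for the image detector. The intrinsics, the shared camera/lidar frame, and all thresholds are simplifying assumptions.

```python
# Cluster lidar points and project cluster extents to 2D image ROIs.
import numpy as np
from sklearn.cluster import DBSCAN

def clusters_to_rois(points_xyz, K):
    """points_xyz: (N, 3) lidar points; K: (3, 3) camera intrinsics.
    Returns a list of (u_min, v_min, u_max, v_max) image ROIs."""
    labels = DBSCAN(eps=0.6, min_samples=10).fit_predict(points_xyz)
    rois = []
    for lbl in set(labels) - {-1}:               # -1 is DBSCAN noise
        cluster = points_xyz[labels == lbl]
        pix = (K @ cluster.T).T                  # assumes camera frame == lidar frame
        pix = pix[:, :2] / pix[:, 2:3]           # perspective divide
        rois.append((pix[:, 0].min(), pix[:, 1].min(),
                     pix[:, 0].max(), pix[:, 1].max()))
    return rois

K = np.array([[700.0, 0, 640], [0, 700.0, 360], [0, 0, 1]])  # hypothetical intrinsics
pts = np.random.rand(500, 3) * [4, 2, 1] + [0, 0, 5]         # synthetic object ahead
print(clusters_to_rois(pts, K))
```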

9.
Sensors (Basel) ; 23(19)2023 Sep 28.
Article in English | MEDLINE | ID: mdl-37836973

ABSTRACT

By the end of the 2020s, full autonomy in autonomous driving may become commercially viable in certain regions. However, achieving Level 5 autonomy requires crucial collaboration between vehicles and infrastructure, necessitating high-speed data processing and low-latency capabilities. This paper introduces a vehicle tracking algorithm based on roadside LiDAR (light detection and ranging) infrastructure that reduces latency to 100 ms without compromising detection accuracy. We first develop a vehicle detection architecture based on ResNet18 that can more effectively detect vehicles at the full frame rate by improving the BEV mapping and the loss function of the optimizer. We then propose a new three-stage vehicle tracking algorithm. This algorithm enhances the Hungarian algorithm to better match objects detected in consecutive frames, while time-space logicality and trajectory similarity are introduced to address the short-term occlusion problem. Finally, the system is tested on static scenes from the KITTI dataset and a MATLAB/Simulink simulation dataset. The results show that the proposed framework outperforms other methods, with vehicle detection F1-scores of 96.97% and 98.58% for the KITTI and MATLAB/Simulink datasets, respectively. For vehicle tracking, the MOTA scores are 88.12% and 90.56%, and the IDF1 scores are 95.16% and 96.43%, improving on the traditional Hungarian algorithm. In particular, the framework achieves a significant improvement in computation speed, which is important for real-time transportation applications.
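The data-association core that the three-stage tracker enhances is the Hungarian algorithm; a minimal sketch with scipy's linear_sum_assignment and a centroid-distance cost is shown below, with synthetic centroids.

```python
# Optimal frame-to-frame assignment via the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

prev = np.array([[2.0, 1.0], [8.0, 3.0]])                # centroids, frame t-1
curr = np.array([[2.3, 1.1], [8.4, 2.8], [15.0, 0.0]])   # frame t (one new object)

cost = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=2)
rows, cols = linear_sum_assignment(cost)
for r, c in zip(rows, cols):
    if cost[r, c] < 2.0:                                 # gate out implausible matches
        print(f"track {r} -> detection {c} (cost {cost[r, c]:.2f})")
```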

10.
Sensors (Basel) ; 23(17)2023 Aug 24.
Article in English | MEDLINE | ID: mdl-37687832

ABSTRACT

This contribution focuses on a comparison of modern geomatics technologies for the derivation of growth parameters in forest management. The text summarizes the results of our measurements over the last five years. As a case project, a mountain spruce forest with planned forest logging was selected. In this locality, terrestrial laser scanning (TLS), terrestrial and drone close-range photogrammetry, PLS mobile technology (personal laser scanning), and ALS (aerial laser scanning) were used experimentally. The results of data fusion, and the usability and economics of all the technologies for forest management and ecology, are discussed. ALS is expensive for small areas, and its results were not suitable for detailed parameter derivation. The RPAS (remotely piloted aircraft systems, known as "drones") method of data acquisition combines the benefits of close-range and aerial photogrammetry. If the approximate height and number of the trees are known, one can approximately calculate the extracted cubage of wood mass before forest logging. The use of conventional terrestrial close-range photogrammetry and TLS proved to be inappropriate and practically unusable in our case, and, after consultation with forestry workers, also in standard forestry practice. On the other hand, the use of PLS is very simple and allows the required parameters to be defined quickly and further quantities, such as the cubic volume of wood stockpiles, to be calculated. Our results show that drones can be used to estimate quantities (wood cubature) and inspect the health status of spruce forests. However, PLS currently seems to be the best solution in forest management for deriving forest parameters. Our results are mainly oriented to practice and in no way diminish the general research in this area.
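The rough cubage calculation mentioned in the abstract can be illustrated with standard stem-volume arithmetic; the form factor below is a common spruce approximation, not a value from this study.

```python
# Back-of-the-envelope standing-wood volume from tree count, mean height,
# and mean diameter at breast height (DBH), via a form factor.
def stand_volume_m3(n_trees: int, mean_height_m: float,
                    mean_dbh_m: float, form_factor: float = 0.5) -> float:
    """V = n * f * (pi/4) * DBH^2 * h  (cylinder volume scaled by form factor)."""
    from math import pi
    return n_trees * form_factor * (pi / 4.0) * mean_dbh_m ** 2 * mean_height_m

print(f"~{stand_volume_m3(400, 28.0, 0.35):.0f} m3 of standing wood")  # illustrative stand
```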

11.
Sensors (Basel) ; 23(15)2023 Aug 03.
Article in English | MEDLINE | ID: mdl-37571674

ABSTRACT

In this work, we introduce a novel approach to modeling the effect of rain and fog on light detection and ranging (LiDAR) sensor performance for the simulation-based testing of LiDAR systems. The proposed methodology simulates the rain and fog effect through rigorous application of Mie scattering theory, in the time domain for transient analyses and at the point cloud level for spatial analyses. The time domain analysis permits us to benchmark the virtual LiDAR signal attenuation and signal-to-noise ratio (SNR) caused by rain and fog droplets. In addition, the detection rate (DR), false detection rate (FDR), and distance error d_error of the virtual LiDAR sensor due to rain and fog droplets are evaluated at the point cloud level. The mean absolute percentage error (MAPE) is used to quantify the agreement between simulation and real measurement results in the time domain and at the point cloud level for rain and fog droplets. The simulation and real measurement results match well in both domains when the simulated and real rain distributions are the same. Both the real and virtual LiDAR sensor performance degrades more under the influence of fog droplets than under rain.
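For orientation, the attenuation being benchmarked behaves like a Beer-Lambert two-way transmission loss; the sketch below uses assumed extinction coefficients as placeholders, whereas the paper derives them rigorously from Mie scattering theory.

```python
# Beer-Lambert two-way transmission loss for a lidar pulse.
# Extinction coefficients are illustrative placeholders, not Mie-derived values.
import numpy as np

def two_way_transmission(distance_m: float, alpha_per_m: float) -> float:
    """Fraction of pulse energy surviving the round trip: exp(-2 * alpha * d)."""
    return np.exp(-2.0 * alpha_per_m * distance_m)

for alpha in (1e-4, 5e-4, 2e-3):   # clear air -> light rain -> dense fog (assumed)
    print(f"alpha={alpha:.0e} 1/m: {two_way_transmission(100.0, alpha):.1%} "
          "of energy returns over 100 m")
```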

12.
Front Robot AI ; 10: 1064934, 2023.
Article in English | MEDLINE | ID: mdl-37064577

ABSTRACT

In recent decades, Simultaneous Localization and Mapping (SLAM) has proved to be a fundamental topic in the field of robotics, due to its many applications, ranging from autonomous driving to 3D reconstruction. Many systems have been proposed in the literature, exploiting a heterogeneous variety of sensors. State-of-the-art methods build their own map from scratch, using only data coming from the robot's own equipment and not exploiting existing reconstructions of the environment. Moreover, temporary loss of data proves to be a challenge for SLAM systems, as it demands efficient re-localization to continue the localization process. In this paper, we present a SLAM system that exploits additional information coming from mapping services like OpenStreetMap, hence the name OSM-SLAM, to face these issues. We extend an existing LiDAR-based Graph SLAM system, ART-SLAM, making it able to integrate the 2D geometry of buildings into the trajectory estimation process by matching a prior OpenStreetMap map with a single LiDAR scan. Each estimated pose of the robot is then associated with all buildings surrounding it. This association improves localization accuracy and also allows possible mistakes in the prior map to be corrected. The pose estimates coming from SLAM are then jointly optimized with the constraints associated with the various OSM buildings, which can assume one of the following types: buildings are always fixed (Prior SLAM); buildings surrounding a robot are movable in chunks, for every scan (Rigid SLAM); or every single building is free to move independently of the others (Non-rigid SLAM). Lastly, OSM maps can also be used to re-localize the robot when sensor data are lost. We compare the accuracy of the proposed system with existing methods for LiDAR-based SLAM, including the baseline, also providing a visual inspection of the results. The comparison is made by evaluating the estimated trajectory displacement using the KITTI odometry dataset. Moreover, the experimental campaign, along with an ablation study on the re-localization capabilities of the proposed system and its accuracy in loop detection-denied scenarios, allows a discussion of how the quality of prior maps influences the SLAM procedure, which may lead to worse estimates than the baseline.

13.
Sensors (Basel) ; 23(5)2023 Feb 25.
Article in English | MEDLINE | ID: mdl-36904779

ABSTRACT

Mobile edge computing has been proposed as a solution to the latency problem of traditional cloud computing. In particular, it is needed in areas such as autonomous driving, where large amounts of data must be processed without latency for safety. Indoor autonomous driving is attracting attention as one of the mobile edge computing services, and it relies on onboard sensors for location recognition because, unlike outdoor driving, it cannot use a GPS device. While an autonomous vehicle is being driven, real-time processing of external events and correction of errors are required for safety, and an efficient autonomous driving system is needed because the mobile environment has resource constraints. This study proposes neural network models as a machine-learning method for autonomous driving in an indoor environment. Each neural network model predicts the most appropriate driving command for the current location based on the range data measured with the LiDAR sensor. We designed six neural network models to be evaluated according to the number of input data points. In addition, we built a Raspberry Pi-based autonomous vehicle for driving and learning, and an indoor circular driving track for data collection and performance evaluation. Finally, we evaluated the six neural network models in terms of confusion matrix, response time, battery consumption, and driving command accuracy, and confirmed the effect of the number of inputs on resource usage. The results will inform the choice of an appropriate neural network model for an indoor autonomous vehicle.
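A hedged sketch of the kind of model evaluated in the study: a small multilayer perceptron mapping a fixed number of LiDAR range readings to discrete driving commands. The input size, layer widths, and command set are illustrative assumptions.

```python
# Small MLP: lidar range readings -> discrete driving command.
import torch
import torch.nn as nn

N_RANGES = 90                              # assumed input size, not the paper's exact value
COMMANDS = ["forward", "left", "right"]    # hypothetical command set

model = nn.Sequential(
    nn.Linear(N_RANGES, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, len(COMMANDS)),          # logits over driving commands
)

ranges = torch.rand(1, N_RANGES)           # stand-in for a LiDAR scan slice
cmd = COMMANDS[model(ranges).argmax(dim=1).item()]
print("predicted command:", cmd)
```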

14.
Sensors (Basel) ; 23(6)2023 Mar 14.
Article in English | MEDLINE | ID: mdl-36991824

ABSTRACT

Measurement performance evaluation of real and virtual automotive light detection and ranging (LiDAR) sensors is an active area of research. However, no commonly accepted automotive standards, metrics, or criteria exist to evaluate their measurement performance. ASTM International released the ASTM E3125-17 standard for the operational performance evaluation of 3D imaging systems commonly referred to as terrestrial laser scanners (TLS). This standard defines the specifications and static test procedures to evaluate the 3D imaging and point-to-point distance measurement performance of TLS. In this work, we have assessed the 3D imaging and point-to-point distance estimation performance of a commercial micro-electro-mechanical system (MEMS)-based automotive LiDAR sensor and its simulation model according to the test procedures defined in this standard. The static tests were performed in a laboratory environment, and a subset of them was repeated at the proving ground in natural environmental conditions to determine the real LiDAR sensor's 3D imaging and point-to-point distance measurement performance. In addition, real scenarios and environmental conditions were replicated in the virtual environment of commercial software to verify the working performance of the LiDAR model. The evaluation results show that the LiDAR sensor and its simulation model under analysis pass all the tests specified in the ASTM E3125-17 standard. This standard helps to establish whether sensor measurement errors are due to internal or external influences. We have also shown that the 3D imaging and point-to-point distance estimation performance of LiDAR sensors significantly impacts the working performance of object recognition algorithms. That is why this standard can be beneficial in validating real and virtual automotive LiDAR sensors, at least in the early stage of development. Furthermore, the simulation and real measurements show good agreement at the point cloud and object recognition levels.
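The central measurand in such tests is simple to state: the difference between a sensor-derived point-to-point distance and a calibrated reference distance between two targets. A small illustrative sketch:

```python
# Point-to-point distance error against a calibrated reference.
# Target centres and the reference distance are illustrative values.
import numpy as np

def point_to_point_error(p1, p2, reference_m):
    """p1, p2: measured 3D target centres; reference_m: calibrated distance."""
    measured = np.linalg.norm(np.asarray(p1) - np.asarray(p2))
    return measured - reference_m

err = point_to_point_error([0.01, 0.0, 4.98], [0.02, 0.0, 14.96], 10.0)
print(f"distance error: {err * 100:.1f} cm")
```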

15.
Sensors (Basel) ; 22(19)2022 Oct 05.
Article in English | MEDLINE | ID: mdl-36236655

ABSTRACT

This work introduces a process to develop a tool-independent, high-fidelity, ray tracing-based light detection and ranging (LiDAR) model. This virtual LiDAR sensor includes accurate modeling of the scan pattern and a complete signal processing toolchain of a LiDAR sensor. It is developed as a functional mock-up unit (FMU) using the standardized open simulation interface (OSI) 3.0.2 and functional mock-up interface (FMI) 2.0, and was subsequently integrated into two commercial virtual environment frameworks to demonstrate its exchangeability. Furthermore, the accuracy of the LiDAR sensor model is validated by comparing simulation and real measurement data in the time domain and at the point cloud level. The validation results show that the mean absolute percentage error (MAPE) of the simulated and measured time-domain signal amplitude is 1.7%. In addition, the MAPE of the number of points N_points and the mean intensity I_mean received from the virtual and real targets are 8.5% and 9.3%, respectively. To the authors' knowledge, these are the smallest errors reported for N_points and I_mean to date. Moreover, the distance error d_error is below the range accuracy of the actual LiDAR sensor, which is 2 cm for this use case. In addition, proving ground measurement results are compared between the state-of-the-art LiDAR model provided by commercial software and the proposed LiDAR model to quantify the presented model's fidelity. The results show that the complete signal processing steps and the imperfections of real LiDAR sensors need to be considered in the virtual LiDAR to obtain simulation results close to the actual sensor. Such considerable imperfections are optical losses, inherent detector effects, effects generated by the electrical amplification, and noise produced by sunlight.
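The MAPE metric used throughout this validation is shown below on illustrative arrays standing in for measured and simulated time-domain amplitudes.

```python
# Mean absolute percentage error (MAPE) between measured and simulated signals.
import numpy as np

def mape(measured: np.ndarray, simulated: np.ndarray) -> float:
    return float(np.mean(np.abs((measured - simulated) / measured)) * 100.0)

measured_amp = np.array([1.00, 0.82, 0.65, 0.51])   # stand-in real amplitudes
simulated_amp = np.array([0.98, 0.84, 0.64, 0.52])  # stand-in virtual sensor output
print(f"MAPE = {mape(measured_amp, simulated_amp):.1f} %")
```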

16.
Sensors (Basel) ; 22(15)2022 Jul 31.
Article in English | MEDLINE | ID: mdl-35957289

ABSTRACT

The safety of vehicles is one of the major goals of driving automation. The safety distance is longer for rail vehicles such as trams because of the adhesion limitations of the wheel-to-rail system. The major issues of fixed frontal sensing are fake target detection; blind spots related to rail slopes and curves; and random changes in the target's illumination or reflectivity. In this experimental study, distance measurements were performed using a scaled tram model equipped with a LiDAR sensor with a narrow field of view, under different conditions of illumination, size, and reflectivity of the target objects, and using different track configurations, to evaluate the effectiveness of such sensors in collision-avoidance systems for rail applications. The experimental findings underline the sensor's sensitivity to fake targets, objects in the sensor's blind spots, and particular optical interferences, which are important for evaluating long-range LiDAR capabilities in sensing the safety distance for vehicles. The conclusions can help developers produce a dedicated collision-prevention system for trams and identify high-risk zones along the track where additional protection methods should be used. The LiDAR sensor must be used in conjunction with additional sensors to perform all the safety tasks of an anti-collision system for the tram.
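The premise that trams need longer sensing distances can be illustrated with standard stopping-distance arithmetic; the decelerations below are typical published magnitudes, not measurements from this study.

```python
# Stopping distance from speed, reaction time, and deceleration.
# Decelerations are typical magnitudes, assumed for illustration.
def stopping_distance_m(speed_kmh: float, decel_ms2: float, reaction_s: float = 1.0) -> float:
    v = speed_kmh / 3.6                          # km/h -> m/s
    return v * reaction_s + v * v / (2.0 * decel_ms2)

for name, a in [("tram (steel on rail)", 1.2), ("car (rubber on asphalt)", 7.0)]:
    print(f"{name}: ~{stopping_distance_m(50.0, a):.0f} m to stop from 50 km/h")
```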


Assuntos
Condução de Veículo , Automação , Veículos Automotores
17.
Sensors (Basel) ; 22(16)2022 Aug 22.
Article in English | MEDLINE | ID: mdl-36016073

ABSTRACT

In the present work, three LiDAR technologies (the Faro Focus 3D X130, a terrestrial laser scanner (TLS); the Kaarta Stencil 2-16, a mobile mapping system (MMS); and the DJI Zenmuse L1, an airborne LiDAR sensor (ALS)) were tested and compared in order to assess their performance in surveying built heritage in vegetated areas. Each of these devices has its limits of usability, and different methods to capture and generate 3D point clouds need to be applied. In addition, a methodology had to be applied to position all the point clouds in the same reference system. While the TLS scans and the MMS data were geo-referenced using a set of vertical markers and spheres measured by a GNSS receiver in RTK mode, the ALS model was geo-referenced by the GNSS receiver integrated in the unmanned aerial system (UAS), which has different characteristics and accuracies. The resulting point clouds were analyzed and compared, focusing on the number of points acquired by each system, the point density, and the nearest neighbor distance.
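The comparison metrics named at the end of the abstract (point count, density, and nearest neighbor distance) can be computed with a k-d tree, as in the sketch below on a synthetic cloud.

```python
# Point count, density, and mean nearest-neighbour distance of a point cloud.
import numpy as np
from scipy.spatial import cKDTree

def cloud_stats(points: np.ndarray, area_m2: float):
    """points: (N, 3). area_m2: footprint area used for the density estimate."""
    d, _ = cKDTree(points).query(points, k=2)   # k=2: nearest neighbour beyond self
    return {"count": len(points),
            "density_pts_per_m2": len(points) / area_m2,
            "mean_nn_dist_m": float(d[:, 1].mean())}

cloud = np.random.rand(50_000, 3) * [20.0, 20.0, 5.0]   # synthetic 20 m x 20 m scene
print(cloud_stats(cloud, area_m2=20.0 * 20.0))
```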


Subjects
Data Collection , Forests , Documentation , Lasers
18.
Biosensors (Basel) ; 12(2)2022 Feb 19.
Article in English | MEDLINE | ID: mdl-35200392

ABSTRACT

Autopsy is a complex and unrepeatable procedure. It is essential to be able to review the autopsy findings, especially when the autopsy is performed for medico-legal purposes. Traditional photography is not always adequate to record forensic practice, since two-dimensional images can lead to distortion and misinterpretation. Three-dimensional (3D) reconstructions of autopsy findings could be a new way to document the autopsy. Moreover, smartphones and tablets equipped with a LiDAR sensor now make it extremely easy to build a 3D model directly in the autopsy room. Herein, the quality and trustworthiness of 3D models obtained during ten autopsies are evaluated by comparing the models with conventional autopsy photographic records. The 3D models were realistic and accurate and allowed precise measurements, and they facilitated the review of the autopsy report. Conclusions: the LiDAR sensor and 3D models have been demonstrated to be a valid tool for introducing a degree of reproducibility into autopsy practice.


Subjects
Forensic Medicine , Photography , Autopsy/methods , Documentation , Imaging, Three-Dimensional , Photography/methods , Reproducibility of Results
19.
Sensors (Basel) ; 21(23)2021 Dec 06.
Article in English | MEDLINE | ID: mdl-34884154

ABSTRACT

LiDAR sensors are needed in vehicular applications, particularly because of their good behavior in low-light environments, as they represent a possible solution for the safety systems of vehicles with a long braking distance, such as trams. Testing the dynamic response of long-range LiDAR is very important for vehicle applications because of difficult operating conditions, such as varying weather or fake targets appearing between the sensor and the tracked vehicle. The goal of the authors in this paper was to develop an experimental model for indoor testing, using a scaled vehicle that can measure distances and speeds relative to a fixed or moving obstacle. This model, containing a LiDAR sensor, was developed to operate at variable speeds, and its software functions were validated by repeated tests. Once validated, the software procedures can be applied to the full-scale model. The findings of this research include the validation of the frontal distance and relative speed measurement methodology, as well as confirmation that the measurements are independent of obstacle color and ambient light.
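A minimal sketch of how relative speed falls out of successive distance readings, as in the scaled-model tests: fit a line to distance versus time and read off the slope. The readings below are synthetic.

```python
# Relative (closing) speed from successive lidar distance readings.
import numpy as np

timestamps = np.array([0.00, 0.05, 0.10, 0.15, 0.20])   # s (synthetic)
distances = np.array([5.00, 4.88, 4.75, 4.63, 4.50])    # m to the obstacle (synthetic)

# Least-squares slope of distance vs. time = relative speed.
speed = np.polyfit(timestamps, distances, deg=1)[0]
print(f"relative speed: {speed:.2f} m/s (negative = closing in)")
```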

20.
Sensors (Basel) ; 21(12)2021 Jun 09.
Article in English | MEDLINE | ID: mdl-34207851

ABSTRACT

There have been significant advances in target detection in the autonomous vehicle context. To develop more robust systems that can overcome weather hazards as well as sensor problems, the sensor fusion approach is taking the lead in this context. Light Detection and Ranging (LiDAR) and camera sensors are two of the most used sensors for this task, since they can accurately provide important features such as the target's depth and shape. However, most current state-of-the-art target detection algorithms for autonomous cars do not take into consideration the hardware limitations of the vehicle, such as its reduced computing power in comparison with cloud servers, or the need for low latency. In this work, we propose Edge Computing Tensor Processing Unit (TPU) devices as hardware support, due to their computing capabilities for machine learning algorithms and their reduced power consumption. We developed an accurate and small target detection model for these devices. Our proposed Multi-Level Sensor Fusion model has been optimized for the network edge, specifically for the Google Coral TPU. As a result, high accuracy is obtained while reducing the memory consumption and the latency of the system on the challenging KITTI dataset.
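For context, a hedged sketch of running a compiled detection model on a Coral Edge TPU with the pycoral library follows; the model file name is a hypothetical placeholder, and the code requires actual Coral hardware and a TPU-compiled model to run.

```python
# Run a TPU-compiled TFLite detection model on a Coral Edge TPU via pycoral.
# "fusion_model_edgetpu.tflite" is a hypothetical placeholder file name.
import numpy as np
from pycoral.utils.edgetpu import make_interpreter
from pycoral.adapters import common, detect

interpreter = make_interpreter("fusion_model_edgetpu.tflite")
interpreter.allocate_tensors()

w, h = common.input_size(interpreter)
frame = np.zeros((h, w, 3), dtype=np.uint8)   # stand-in for a camera frame
common.set_input(interpreter, frame)
interpreter.invoke()

for obj in detect.get_objects(interpreter, score_threshold=0.5):
    print(obj.id, obj.score, obj.bbox)
```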


Subjects
Algorithms , Machine Learning , Automobiles , Lasers