Results 1 - 20 of 578
1.
Nano Lett ; 24(23): 6948-6956, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38810209

ABSTRACT

The concept of cross-sensor modulation, wherein one sensor modality can influence another's response, is often overlooked in traditional sensor fusion architectures, leading to missed opportunities for enhancing data accuracy and robustness. In contrast, biological systems, such as aquatic animals like crayfish, demonstrate superior sensor fusion through multisensory integration. These organisms adeptly integrate visual, tactile, and chemical cues to perform tasks such as evading predators and locating prey. Drawing inspiration from this, we propose a neuromorphic platform that integrates graphene-based chemitransistors, monolayer molybdenum disulfide (MoS2) based photosensitive memtransistors, and triboelectric tactile sensors to achieve "Super-Additive" responses to weak chemical, visual, and tactile cues and demonstrate contextual response modulation, also referred to as the "Inverse Effectiveness Effect." We believe that integrating bio-inspired sensor fusion principles across various modalities holds promise for a wide range of applications.


Subjects
Astacoidea, Graphite, Molybdenum, Touch, Animals, Molybdenum/chemistry, Graphite/chemistry, Disulfides/chemistry
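
As a purely illustrative aside (none of the code below comes from the paper), the "Super-Additive" and "Inverse Effectiveness" notions can be sketched numerically: the fused response to individually weak cues exceeds the sum of the unimodal responses, and the relative enhancement shrinks as cue strength grows. The response model and every constant here are hypothetical.

```python
import numpy as np

def unimodal_response(stimulus, threshold=0.5, gain=1.0):
    # Hypothetical saturating response of one sensory channel; cues below
    # threshold produce no unimodal output at all.
    return gain * np.tanh(np.maximum(stimulus - threshold, 0.0))

def fused_response(chem, vis, tact, coupling=2.0):
    # Toy multisensory integration: a multiplicative cross-term lets weak,
    # individually sub-threshold cues reinforce one another.
    uni = sum(unimodal_response(s) for s in (chem, vis, tact))
    return uni + coupling * np.sqrt(chem * vis * tact)

for strength in (0.2, 0.6, 1.5):  # weak, moderate, strong cues
    uni_sum = 3 * unimodal_response(strength)
    enhancement = fused_response(strength, strength, strength) - uni_sum
    rel = enhancement / uni_sum if uni_sum > 0 else float("inf")
    print(f"stimulus={strength:.1f}  relative enhancement={rel:.2f}")
```

The relative enhancement is largest for the weakest cues (infinite here, since sub-threshold cues evoke no unimodal response at all), which is the "Inverse Effectiveness Effect" in miniature.
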
2.
Sensors (Basel) ; 24(8)2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38676054

ABSTRACT

Modern vehicles equipped with Advanced Driver Assistance Systems (ADAS) rely heavily on sensor fusion to achieve a comprehensive understanding of their surrounding environment. Traditionally, the Kalman Filter (KF) has been a popular choice for this purpose, necessitating complex data association and track management to ensure accurate results. To address errors introduced by these processes, the Gaussian Mixture Probability Hypothesis Density (GM-PHD) filter is a strong alternative: it implicitly handles the association and appearance/disappearance of tracks. The approach presented here allows for the replacement of KF frameworks in many applications while achieving runtimes below 1 ms on the test system. The key innovations lie in the utilization of sensor-based parameter models to implicitly handle varying Fields of View (FoV) and sensing capabilities. These models represent sensor-specific properties such as detection probability and clutter density across the state space. Additionally, we introduce a method for propagating additional track properties such as classification with the GM-PHD filter, further contributing to its versatility and applicability. The proposed GM-PHD filter approach surpasses a KF approach on the KITTI dataset and another custom dataset, reducing the mean OSPA(2) error from 1.56 (KF) to 1.40 (GM-PHD) and showcasing its potential in ADAS perception.
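
For readers unfamiliar with this filter family, the following is a deliberately minimal, one-dimensional sketch of a single GM-PHD predict/update cycle. It assumes constant survival/detection probabilities and uniform clutter, whereas the paper's key contribution is precisely to replace those constants with sensor-based parameter models; pruning, merging, and state extraction are also omitted.

```python
import numpy as np

# One 1-D, constant-position GM-PHD predict/update cycle (illustrative).
# Each row of `gm` is a Gaussian component: [weight, mean, variance].
P_SURVIVE, P_DETECT, CLUTTER = 0.99, 0.9, 0.01   # constants assumed here;
# the paper instead models detection probability and clutter per sensor.

def normal_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def phd_predict(gm, q_noise=0.1, births=((0.05, 0.0, 4.0),)):
    gm = gm.copy()
    gm[:, 0] *= P_SURVIVE            # survival thins every component
    gm[:, 2] += q_noise              # random-walk motion model
    return np.vstack([gm, np.array(births)])

def phd_update(gm, measurements, r_noise=0.2):
    # Missed-detection components keep their state with reduced weight.
    parts = [gm * np.array([1.0 - P_DETECT, 1.0, 1.0])]
    for z in measurements:
        s = gm[:, 2] + r_noise                       # innovation variance
        k = gm[:, 2] / s                             # scalar Kalman gain
        w = P_DETECT * gm[:, 0] * normal_pdf(z, gm[:, 1], s)
        w = w / (CLUTTER + w.sum())                  # normalise per measurement
        m = gm[:, 1] + k * (z - gm[:, 1])
        p = (1.0 - k) * gm[:, 2]
        parts.append(np.column_stack([w, m, p]))
    return np.vstack(parts)                          # pruning/merging omitted

gm = np.array([[1.0, -2.0, 1.0], [1.0, 3.0, 1.0]])   # two hypothesised targets
gm = phd_update(phd_predict(gm), measurements=[-1.8, 3.3])
print("expected target count:", round(float(gm[:, 0].sum()), 2))
```

The expected number of targets is simply the sum of the component weights, which is how the PHD filter sidesteps the explicit data association and track management that a KF framework requires.
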

3.
Sensors (Basel) ; 24(8)2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38676111

ABSTRACT

This paper introduces an innovative approach to 3D environmental mapping through the integration of a compact, handheld sensor package with a two-stage sensor fusion pipeline. The sensor package, incorporating LiDAR, IMU, RGB, and thermal cameras, enables comprehensive and robust 3D mapping of various environments. By leveraging Simultaneous Localization and Mapping (SLAM) and thermal imaging, our solution performs well where global positioning is unavailable and in visually degraded environments. The sensor package runs a real-time LiDAR-Inertial SLAM algorithm, generating a dense point cloud map that accurately reconstructs the geometric features of the environment. Following the acquisition of that point cloud, we post-process these data by fusing them with images from the RGB and thermal cameras to produce a detailed, color-enriched 3D map that is adaptable to different mission requirements. We demonstrated our system in a variety of indoor and outdoor scenarios, and the results showcased the effectiveness and applicability of our sensor package and fusion pipeline. The system can serve applications ranging from autonomous navigation to smart agriculture and has the potential to deliver substantial benefits across diverse fields.
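
The color-enrichment step of such a pipeline boils down to projecting each LiDAR point into a calibrated camera and sampling the pixel it lands on. The sketch below shows that projection under a standard pinhole model; the intrinsics, extrinsics, and data are placeholders, not values from the paper.

```python
import numpy as np

def colorize_points(points_lidar, image, K, T_cam_lidar):
    """Assign each LiDAR point the pixel value it projects onto.

    points_lidar: (N, 3) points in the LiDAR frame.
    image:        (H, W, 3) RGB (or thermal) image.
    K:            (3, 3) pinhole intrinsics.
    T_cam_lidar:  (4, 4) extrinsic transform, LiDAR frame -> camera frame.
    """
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0.1            # keep points ahead of the camera
    uvw = (K @ pts_cam[in_front].T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
    h, w = image.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = np.zeros((n, 3), dtype=image.dtype)  # unseen points stay black
    colors[np.flatnonzero(in_front)[ok]] = image[uv[ok, 1], uv[ok, 0]]
    return colors

# Hypothetical usage with a 640x480 camera and random stand-in data.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
cloud = np.random.randn(1000, 3) + np.array([0.0, 0.0, 5.0])
img = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
print(colorize_points(cloud, img, K, np.eye(4)).shape)   # (1000, 3)
```

The same routine serves the thermal camera by substituting its intrinsics, extrinsics, and image.
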

4.
Sensors (Basel) ; 24(12)2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38931615

ABSTRACT

In this study, we enhanced odometry performance by integrating vision sensors with LiDAR sensors, which exhibit contrasting characteristics. Vision sensors provide rich environmental information but limited distance precision, whereas LiDAR offers highly accurate distance measurements but sparse environmental detail. By utilizing data from vision sensors, this research compensates for the weak descriptors of LiDAR features, thereby improving LiDAR feature matching performance. Traditional fusion methods, which rely on extracting depth from image features, depend heavily on vision sensors and are vulnerable under challenging conditions such as rain, darkness, or light reflection. Using vision sensors as primary sensors under such conditions can lead to significant mapping errors and, in the worst cases, system divergence. Conversely, our approach uses LiDAR as the primary sensor, mitigating the shortcomings of previous methods: vision sensors support LiDAR-based mapping when available, and LiDAR odometry performance is maintained even in environments where vision sensors are compromised. We adopted five prominent algorithms from the latest LiDAR SLAM open-source projects and conducted experiments on the KITTI odometry dataset. We then integrated a vision support module into the top three LiDAR SLAM methods, improving their performance. By making the source code of VA-LOAM publicly available, this work enhances the accessibility of the technology, fostering reproducibility and transparency within the research community.
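
The abstract does not specify the vision support module's internals, but one plausible mechanism consistent with its design goal is to use image descriptors to veto ambiguous LiDAR feature matches while falling back to pure LiDAR whenever the camera is unusable. The function below sketches that idea; the names, threshold, and descriptor format are all assumptions, not VA-LOAM's actual interface.

```python
import numpy as np

def visual_veto(matches, desc_prev, desc_curr, max_hamming=64):
    """Reject LiDAR feature matches whose image descriptors disagree.

    matches:   (i_prev, i_curr) index pairs proposed by LiDAR matching.
    desc_*:    (N, 32) uint8 binary descriptors (e.g. ORB) sampled where the
               LiDAR features project into the image; all-zero rows mean the
               feature fell outside the view or the image is unusable.
    """
    kept = []
    for i, j in matches:
        a, b = desc_prev[i], desc_curr[j]
        if not a.any() or not b.any():
            kept.append((i, j))        # no visual data: fall back to LiDAR
        elif np.unpackbits(a ^ b).sum() <= max_hamming:
            kept.append((i, j))        # descriptors agree: keep the match
    return kept

# Hypothetical usage with random descriptors for three proposed matches.
rng = np.random.default_rng(0)
d0 = rng.integers(0, 256, (3, 32), dtype=np.uint8)
d1 = d0.copy()
d1[1] = rng.integers(0, 256, 32, dtype=np.uint8)   # corrupt one match
print(visual_veto([(0, 0), (1, 1), (2, 2)], d0, d1))
```

When the camera is blinded, descriptors are unavailable and every geometric match is kept, so a module of this shape degrades to plain LiDAR odometry rather than corrupting it.
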

5.
Sensors (Basel) ; 24(7)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38610366

ABSTRACT

This work addresses the challenge of calibrating multiple solid-state LIDAR systems. The study focuses on three different solid-state LIDAR sensors that implement different hardware designs, leading to distinct scanning patterns for each system. Consequently, detecting corresponding points between the point clouds generated by these LIDAR systems, as required for calibration, is a complex task. To overcome this challenge, this paper proposes a method that involves several steps. First, the measurement data are preprocessed to enhance their quality. Next, features are extracted from the acquired point clouds using the Fast Point Feature Histogram method, which categorizes important characteristics of the data. Finally, the extrinsic parameters are computed using the Fast Global Registration technique. The best set of parameters for the pipeline and the calibration success are evaluated using the normalized root mean square error. In a static real-world indoor scenario, a minimum root mean square error of 7 cm was achieved. Importantly, the paper demonstrates that the presented approach is suitable for online use, indicating its potential for real-time applications. By effectively calibrating the solid-state LIDAR systems and establishing point correspondences, this research contributes to the advancement of multi-LIDAR fusion and facilitates accurate perception and mapping in fields such as autonomous driving, robotics, and environmental monitoring.
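
A pipeline of this shape (downsample, estimate normals, FPFH features, Fast Global Registration) can be reproduced in a few lines with Open3D. The sketch below assumes Open3D's registration API and generic parameter values; the paper's preprocessing and its normalized-RMSE evaluation are not reproduced here.

```python
import open3d as o3d

def estimate_extrinsics(source, target, voxel=0.1):
    """FPFH + Fast Global Registration between two LIDAR point clouds.

    source/target: open3d.geometry.PointCloud from two different sensors.
    Returns a 4x4 transform aligning source to target.
    """
    def prep(pc):
        down = pc.voxel_down_sample(voxel)
        down.estimate_normals(                       # FPFH requires normals
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        feat = o3d.pipelines.registration.compute_fpfh_feature(
            down,
            o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
        return down, feat

    src, src_feat = prep(source)
    tgt, tgt_feat = prep(target)
    result = o3d.pipelines.registration.registration_fgr_based_on_feature_matching(
        src, tgt, src_feat, tgt_feat,
        o3d.pipelines.registration.FastGlobalRegistrationOption(
            maximum_correspondence_distance=1.5 * voxel))
    return result.transformation
```

Sweeping the voxel size and search radii and scoring each result with an RMSE-style metric would mirror the paper's search for the best pipeline parameters.
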

6.
Sensors (Basel) ; 24(7)2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38610568

ABSTRACT

Soil organic matter (SOM) is one of the best indicators for assessing soil health and understanding soil productivity and fertility. Therefore, measuring SOM content is a fundamental practice in soil science and agricultural research. The traditional (oven-dry) approach to measuring SOM is costly, arduous, and time-consuming. However, the integration of cutting-edge technology can significantly aid in the prediction of SOM, presenting a promising alternative to traditional methods. In this study, we tested the hypothesis that an accurate estimate of SOM might be obtained by combining ground-based sensor-captured soil parameters and soil analysis data with drone images of the farm. The data were gathered using three different methods: ground-based sensors measured soil temperature, pH, humidity, nitrogen, phosphorus, and potassium; aerial photos taken by UAVs provided the vegetation index (NDVI); and the Haney soil test reported lab measurements from collected samples. Our datasets combined the soil parameters collected using ground-based sensors, the soil analysis reports, and the NDVI of the farms, and we used them to predict SOM with different machine learning algorithms. We incorporated regression and ANOVA for analyzing the dataset and explored seven machine learning algorithms to predict the soil organic matter content from the other parameters: linear regression, ridge regression, lasso regression, random forest regression, elastic net regression, support vector machine, and stochastic gradient descent regression.
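
The modeling stage described above maps directly onto scikit-learn. The sketch below cross-validates the seven named regressors on a synthetic stand-in for the fused dataset; in the real study, the sensor readings, Haney-test values, and NDVI would form the feature matrix, whereas all data here are random placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import (ElasticNet, Lasso, LinearRegression,
                                  Ridge, SGDRegressor)
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Placeholder features: [temperature, pH, humidity, N, P, K, NDVI].
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 7))
y = X @ rng.normal(size=7) + rng.normal(scale=0.3, size=120)  # stand-in SOM

models = {
    "linear": LinearRegression(),
    "ridge": Ridge(),
    "lasso": Lasso(alpha=0.01),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "elastic net": ElasticNet(alpha=0.01),
    "svm": SVR(),
    "sgd": SGDRegressor(max_iter=5000),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)  # scaling matters for SVR/SGD
    r2 = cross_val_score(pipe, X, y, cv=5, scoring="r2").mean()
    print(f"{name:>13}: mean R^2 = {r2:.3f}")
```
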

7.
Sensors (Basel) ; 24(10)2024 May 11.
Article in English | MEDLINE | ID: mdl-38793902

ABSTRACT

The development of the GPS (Global Positioning System) and related advances have made highly accurate outdoor positioning possible; for indoor positioning, however, more efficient, reliable, and cost-effective technology is required. A variety of techniques are used for indoor positioning, including those based on Wi-Fi, Bluetooth, infrared, ultrasound, magnetic fields, and visual markers. This work aims to design an accurate position estimation algorithm by combining raw distance data from ultrasonic sensors (Marvelmind Beacon) and acceleration data from an inertial measurement unit (IMU), utilizing an extended Kalman filter (EKF) with UDU factorization (the covariance expressed as the product of a triangular matrix, a diagonal matrix, and the transpose of the triangular matrix). Initially, a position estimate is calculated with a recursive least squares (RLS) method and a trilateration algorithm, using raw distance data. This solution is then combined with acceleration data collected from the Marvelmind sensor, resulting in a position solution akin to that of the GPS. The data were initially collected via the ROS (Robot Operating System) platform and then via the Pixhawk development card, with tests conducted using a combination of four fixed and one moving Marvelmind sensor, as well as three fixed and one moving sensor. The designed algorithm produces accurate position estimates with centimeter precision and was subsequently implemented on an embedded development card (Pixhawk). Furthermore, test results showed that the UDU-EKF structure integrated into the embedded system is faster than the classical EKF.
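
The trilateration seed for such an RLS/EKF pipeline can be illustrated with a standard linearization: subtracting the first range equation from the others turns the nonlinear problem into an overdetermined linear system. The beacon layout and noise level below are invented, and the UDU-factorized EKF itself is not reproduced.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares position from four or more anchor distances (3-D).

    Subtracting the first range equation from the others linearises the
    problem into an overdetermined system solved by least squares.
    """
    a0, d0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0 ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Invented beacon layout (metres) and a ground-truth position to check.
anchors = np.array([[0, 0, 3.0], [5, 0, 2.0], [0, 5, 2.5], [5, 5, 1.0]])
truth = np.array([2.0, 3.0, 1.0])
ranges = np.linalg.norm(anchors - truth, axis=1)
ranges += np.random.default_rng(0).normal(0.0, 0.01, ranges.size)  # range noise
print(np.round(trilaterate(anchors, ranges), 2))   # approx. [2. 3. 1.]
```
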

8.
Sensors (Basel) ; 24(10)2024 May 12.
Article in English | MEDLINE | ID: mdl-38793933

ABSTRACT

This paper presents an enhanced ground vehicle localization method designed to address the challenges associated with state estimation for autonomous vehicles operating in diverse environments. The focus is specifically on the precise localization of position and orientation in both local and global coordinate systems. The proposed approach integrates local estimates generated by existing visual-inertial odometry (VIO) methods into global position information obtained from the Global Navigation Satellite System (GNSS). This integration is achieved through optimizing fusion in a pose graph, ensuring precise local estimation and drift-free global position estimation. Considering the inherent complexities in autonomous driving scenarios, such as the potential failures of a visual-inertial navigation system (VINS) and restrictions on GNSS signals in urban canyons, leading to disruptions in localization outcomes, we introduce an adaptive fusion mechanism. This mechanism allows seamless switching between three modes: utilizing only VINS, using only GNSS, and normal fusion. The effectiveness of the proposed algorithm is demonstrated through rigorous testing in the Carla simulation environment and challenging UrbanNav scenarios. The evaluation includes both qualitative and quantitative analyses, revealing that the method exhibits robustness and accuracy.
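
The adaptive fusion mechanism reduces, at its core, to a per-epoch mode decision driven by the health of each subsystem. The sketch below shows one plausible decision rule; the thresholds and health signals are invented, and a real implementation would also consult the innovation statistics of the pose-graph optimiser.

```python
from enum import Enum

class Mode(Enum):
    VINS_ONLY = 1   # GNSS degraded (e.g. urban canyon): trust local VIO only
    GNSS_ONLY = 2   # VINS diverged or feature-starved: trust GNSS only
    FUSION = 3      # both healthy: normal pose-graph fusion

def select_mode(vins_healthy: bool, num_satellites: int, gnss_cov: float,
                min_sats: int = 8, max_cov: float = 4.0) -> Mode:
    """Pick the estimator configuration for the current epoch.

    The thresholds here are placeholders chosen for illustration only.
    """
    gnss_healthy = num_satellites >= min_sats and gnss_cov <= max_cov
    if vins_healthy and gnss_healthy:
        return Mode.FUSION
    if vins_healthy:
        return Mode.VINS_ONLY
    if gnss_healthy:
        return Mode.GNSS_ONLY
    return Mode.VINS_ONLY  # least-bad fallback; flag for recovery

print(select_mode(vins_healthy=True, num_satellites=5, gnss_cov=9.0))
# Mode.VINS_ONLY
```
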

9.
Sensors (Basel) ; 24(10)2024 May 13.
Article in English | MEDLINE | ID: mdl-38793942

ABSTRACT

Autonomous driving, as a pivotal technology in modern transportation, is progressively transforming the modalities of human mobility. In this domain, vehicle detection is a significant research direction that involves the intersection of multiple disciplines, including sensor technology and computer vision. In recent years, many excellent vehicle detection methods have been reported, but few studies have focused on summarizing and analyzing these algorithms. This work provides a comprehensive review of existing vehicle detection algorithms and discusses their practical applications in the field of autonomous driving. First, we provide a brief description of the tasks, evaluation metrics, and datasets for vehicle detection. Second, more than 200 classical and latest vehicle detection algorithms are summarized in detail, including those based on machine vision, LiDAR, millimeter-wave radar, and sensor fusion. Finally, this article discusses the strengths and limitations of different algorithms and sensors, and proposes future trends.

10.
Sensors (Basel) ; 24(10)2024 May 18.
Article in English | MEDLINE | ID: mdl-38794061

ABSTRACT

Detecting objects, particularly naval mines, on the seafloor is a complex task. In naval mine countermeasures (MCM) operations, sidescan or synthetic aperture sonars have been used to search large areas. However, a single sensor cannot meet the requirements of high-precision autonomous navigation. Based on the ORB-SLAM3-VI framework, we propose ORB-SLAM3-VIP, which integrates a depth sensor, an IMU, and an optical sensor. The method tightly couples the measurements of the depth sensor and the IMU into the visual SLAM algorithm, establishing a multi-sensor fusion SLAM model. Depth constraints are introduced into initialization, scale fine-tuning, tracking, and mapping to constrain the position of the sensor along the z-axis and improve the accuracy of pose estimation and map scale estimation. Tests on seven underwater multi-sensor sequences from the AQUALOC dataset show that, compared with ORB-SLAM3-VI, the proposed ORB-SLAM3-VIP system reduces the scale error in all sequences by up to 41.2% and the root mean square error of the trajectory by up to 41.6%.
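
Tightly coupling a pressure-depth reading amounts to adding one extra residual on the z component of the estimated position to the visual-inertial least-squares problem. The sketch below shows such a residual and its Jacobian; the sign convention and noise level are assumptions, and ORB-SLAM3-VIP's initialization and scale fine-tuning are not reproduced.

```python
import numpy as np

def depth_residual(t_wb, depth_meas, surface_z=0.0, sigma=0.05):
    """Whitened residual tying the estimated z-position to a pressure depth.

    t_wb:       (3,) estimated body position in the world frame.
    depth_meas: depth reported by the pressure sensor [m].
    Returns the residual and its Jacobian w.r.t. t_wb, ready to be stacked
    alongside the usual reprojection and IMU preintegration factors.
    """
    predicted_depth = surface_z - t_wb[2]     # sign convention assumed: z up
    r = (depth_meas - predicted_depth) / sigma
    J = np.array([[0.0, 0.0, 1.0 / sigma]])  # d r / d t_wb
    return r, J

r, J = depth_residual(np.array([1.0, 2.0, -9.8]), depth_meas=10.0)
print(r, J)   # residual in units of sigma; only z is constrained
```

Because the factor is observable regardless of visual conditions, it anchors the map scale even when the optical sensor degrades underwater.
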

11.
Sensors (Basel) ; 24(6)2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38543987

ABSTRACT

The use of smart indoor robotics services is gradually increasing in real-time scenarios. This paper presents a versatile approach to multi-robot backing crash prevention in indoor environments, using hardware schemes to achieve greater competence. Here, sensor fusion was initially used to analyze the state of multi-robots and their orientation within a static or dynamic scenario. The proposed novel hardware scheme-based framework integrates both static and dynamic scenarios for the execution of backing crash prevention. A round-robin (RR) scheduling algorithm was composed for the static scenario. Dynamic backing crash prevention was deployed by embedding a first come, first served (FCFS) scheduling algorithm. The behavioral control mechanism of the distributed multi-robots was integrated with FCFS and adaptive cruise control (ACC) scheduling algorithms. The integration of multiple algorithms is a challenging task for smarter indoor robotics, and the Xilinx-based partial reconfiguration method was deployed to avoid computational issues with multiple algorithms during the run-time. These methods were coded with Verilog HDL and validated using an FPGA (Zynq)-based multi-robot system.
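
In software terms (the paper implements this in Verilog HDL on a Zynq FPGA, with partial reconfiguration to swap algorithms at run-time), FCFS backing-crash prevention is essentially a queue that grants a shared backing zone to the earliest requester while the others hold position. A minimal Python stand-in, with invented robot IDs:

```python
from collections import deque

class BackingArbiter:
    """First-come, first-served grant of a shared backing zone.

    A software stand-in for the paper's FPGA scheduler: robots request the
    zone, the earliest requester reverses while the rest hold position.
    """
    def __init__(self):
        self.queue = deque()

    def request(self, robot_id):
        if robot_id not in self.queue:
            self.queue.append(robot_id)

    def granted(self, robot_id):
        return bool(self.queue) and self.queue[0] == robot_id

    def release(self, robot_id):
        if self.granted(robot_id):
            self.queue.popleft()

arbiter = BackingArbiter()
for rid in ("R1", "R2", "R3"):
    arbiter.request(rid)
print([r for r in ("R1", "R2", "R3") if arbiter.granted(r)])  # ['R1']
arbiter.release("R1")
print([r for r in ("R1", "R2", "R3") if arbiter.granted(r)])  # ['R2']
```
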

12.
Sensors (Basel) ; 24(6)2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38544217

ABSTRACT

Inertial measurement units (IMUs) are key components of various applications including navigation, robotics, aerospace, and automotive systems. IMU sensor characteristics have a significant impact on the accuracy and reliability of these applications. In particular, noise characteristics and bias stability are critical for proper filter settings to perform a combined GNSS/IMU solution. This paper presents an analysis based on the Allan deviation of different IMU sensors that correspond to different grades of micro-electromechanical systems (MEMS)-type IMUs in order to evaluate their accuracy and stability over time. The study covers three IMU sensors of different grades (ascending order): Rokubun Argonaut navigator sensor (InvenSense TDK MPU9250), Samsung Galaxy Note10 phone sensor (STMicroelectronics LSM6DSR), and NovAtel PwrPak7 sensor (Epson EG320N). The noise components of the sensors are computed using overlapped Allan deviation analysis on data collected over the course of a week in a static position. The focus of the analysis is to characterize the random walk noise and bias stability, which are the most critical for combined GNSS/IMU navigation and may differ or may not be listed in manufacturers' specifications. Noise characteristics are calculated for the studied sensors and examples of their use in loosely coupled GNSS/IMU processing are assessed. This work proposes a structured and reproducible approach for working with sensors for their use in navigation tasks in combination with GNSS, and can be used for sensors of different levels to supplement missing or incorrect sensor manufacturers' data.
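
The overlapped Allan deviation at the heart of this analysis is straightforward to compute from raw static data: integrate the rate signal, form second differences at each averaging time, and normalize. The sketch below follows the standard formula; reading off the random walk coefficient (the value on the -1/2 slope at tau = 1 s) and the bias instability (the flat minimum, conventionally scaled by 0.664) is then done on the resulting curve.

```python
import numpy as np

def overlapped_allan_deviation(rate, fs, m_list):
    """Overlapped Allan deviation of a rate signal (e.g. gyro output).

    rate:   1-D array sampled at frequency fs [Hz].
    m_list: averaging factors; the averaging time is tau = m / fs.
    """
    theta = np.cumsum(rate) / fs                  # integrate rate -> angle
    n = theta.size
    taus, adevs = [], []
    for m in m_list:
        if 2 * m >= n:
            break
        d = theta[2 * m:] - 2 * theta[m:-m] + theta[:-2 * m]
        tau = m / fs
        avar = np.sum(d ** 2) / (2 * tau ** 2 * (n - 2 * m))
        taus.append(tau)
        adevs.append(np.sqrt(avar))
    return np.array(taus), np.array(adevs)

# Sanity check on synthetic white noise: the log-log slope should be
# close to -1/2, the signature of angle/velocity random walk.
fs = 100.0
rate = np.random.default_rng(1).normal(0.0, 0.02, 200_000)
m_list = np.unique(np.logspace(0, 4, 30).astype(int))
taus, adevs = overlapped_allan_deviation(rate, fs, m_list)
print("fitted slope:",
      round(np.polyfit(np.log10(taus), np.log10(adevs), 1)[0], 2))
```
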

13.
Sensors (Basel) ; 24(2)2024 Jan 20.
Article in English | MEDLINE | ID: mdl-38276361

ABSTRACT

LiDAR sensors, pivotal in various fields like agriculture and robotics for tasks such as 3D object detection and map creation, are increasingly coupled with thermal cameras to harness heat information. This combination proves particularly effective in adverse conditions like darkness and rain. Ensuring seamless fusion between the sensors necessitates precise extrinsic calibration. Our innovative calibration method leverages human presence during sensor setup movements, eliminating the reliance on dedicated calibration targets. It optimizes extrinsic parameters by employing a novel evolutionary algorithm on a specifically designed loss function that measures human alignment across modalities. Our approach showcases a notable 4.43% improvement in the loss over extrinsic parameters obtained from target-based calibration in the FieldSAFE dataset. This advancement reduces costs related to target creation, saves time in diverse pose collection, mitigates repetitive calibration efforts amid sensor drift or setting changes, and broadens accessibility by obviating the need for specific targets. The adaptability of our method in various environments, like urban streets or expansive farm fields, stems from leveraging the ubiquitous presence of humans. Our method presents an efficient, cost-effective, and readily applicable means of extrinsic calibration, enhancing sensor fusion capabilities in the critical fields reliant on precise and robust data acquisition.


Subjects
Agriculture, Algorithms, Humans, Calibration, Biological Evolution, Farms
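
The optimizer at the heart of such a method can be as simple as a small evolution strategy over the six extrinsic parameters. The sketch below is generic rather than the paper's novel algorithm, and the loss that measures human alignment across modalities, which is the paper's actual contribution, is stubbed out with a quadratic for a smoke test.

```python
import numpy as np

def evolve_extrinsics(loss, x0, sigma=0.05, pop=32, elite=8, generations=60):
    """Tiny (mu, lambda) evolution strategy over [tx, ty, tz, roll, pitch, yaw].

    `loss` scores how well human detections align across modalities after
    applying a candidate extrinsic (lower is better).
    """
    rng = np.random.default_rng(0)
    parents = x0 + sigma * rng.normal(size=(elite, x0.size))
    for _ in range(generations):
        children = np.repeat(parents, pop // elite, axis=0)
        children += sigma * rng.normal(size=children.shape)
        scores = np.array([loss(c) for c in children])
        parents = children[np.argsort(scores)[:elite]]
        sigma *= 0.97                      # anneal the mutation step
    return parents[0]

# Smoke test on a quadratic stand-in loss: recover a known 6-DoF offset.
target = np.array([0.2, -0.1, 0.05, 0.01, -0.02, 0.03])
best = evolve_extrinsics(lambda x: float(np.sum((x - target) ** 2)),
                         np.zeros(6))
print(np.round(best, 2))
```
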
14.
Sensors (Basel) ; 24(4)2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38400380

ABSTRACT

As a fundamental issue in robotics academia and industry, indoor autonomous mobile robots (AMRs) have been extensively studied. For AMRs, it is crucial to obtain information about their working environment and themselves, which can be realized through sensors and the extraction of corresponding information from their measurements. Sensing technologies enable mobile robots to perform localization, mapping, target or obstacle recognition, and motion tasks. This paper reviews sensing technologies for autonomous mobile robots in indoor scenes. The benefits and potential problems of using a single sensor in practice are analyzed and compared, and the basic principles and popular algorithms used in processing these sensor data are introduced. In addition, some mainstream multi-sensor fusion technologies are introduced. Finally, this paper discusses future development trends in sensing technology for autonomous mobile robots in indoor scenes, as well as the challenges posed by practical application environments.

15.
Sensors (Basel) ; 24(11)2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38894396

ABSTRACT

The growing use of Unmanned Aerial Vehicles (UAVs) raises the need to improve their autonomous navigation capabilities. Visual odometry allows for dispensing with positioning systems such as GPS, especially on indoor flights. This paper reports an effort toward UAV autonomous navigation by proposing a translational velocity observer based on inertial and visual measurements for a quadrotor. The proposed observer complementarily fuses the available measurements from different domains and is synthesized following the Immersion and Invariance observer design technique. A formal Lyapunov-based proof of observer error convergence to zero is provided. The proposed observer algorithm is evaluated using numerical simulations in the Parrot Mambo Minidrone App from Simulink-Matlab.
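
The Immersion and Invariance design itself cannot be reconstructed from the abstract, but the underlying idea of complementary fusion can be illustrated with a toy observer that integrates IMU acceleration and corrects it with vision-derived velocity. Everything below (gain, noise levels, data) is invented and far simpler than the paper's observer.

```python
import numpy as np

def velocity_observer(accels, vision_vels, dt=0.005, k=4.0):
    """Toy complementary velocity observer (NOT the paper's I&I design).

    Integrates IMU acceleration and corrects with the slower, noisier
    velocity inferred from vision: v_hat' = a + k * (v_vis - v_hat).
    """
    v_hat = np.zeros(3)
    estimates = []
    for a, v_vis in zip(accels, vision_vels):
        v_hat = v_hat + dt * (a + k * (v_vis - v_hat))
        estimates.append(v_hat.copy())
    return np.array(estimates)

# Invented data: constant acceleration along x with noisy sensors.
rng = np.random.default_rng(0)
t = np.arange(0.0, 2.0, 0.005)
true_v = np.outer(t, [0.5, 0.0, 0.0])
accels = np.tile([0.5, 0.0, 0.0], (t.size, 1)) + rng.normal(0, 0.2, (t.size, 3))
vision_vels = true_v + rng.normal(0, 0.1, (t.size, 3))
est = velocity_observer(accels, vision_vels)
print("final velocity error:", np.round(est[-1] - true_v[-1], 3))
```
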

16.
Sensors (Basel) ; 24(9)2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38732934

ABSTRACT

In the field of robotics and autonomous driving, dynamic occupancy grid maps (DOGMs) are typically used to represent the position and velocity information of objects. Although three-dimensional light detection and ranging (LiDAR) sensor-based DOGMs have been actively researched, they have a key limitation: they cannot classify the types of objects. Therefore, in this study, the output of a deep learning-based camera-LiDAR sensor fusion technique is employed as input to the DOGM. Consequently, not only the position and velocity information of objects but also their class information can be updated, expanding the application areas of DOGMs. Moreover, unclassified LiDAR point measurements contribute to the formation of a map of the surrounding environment, improving the reliability of perception by registering objects that were not classified by deep learning. To achieve this, we developed update rules on the basis of the Dempster-Shafer evidence theory, incorporating class information and the uncertainty of objects occupying grid cells. Furthermore, we analyzed the accuracy of the velocity estimation using two update models: one assigns the occupancy probability only to the edges of the oriented bounding box, whereas the other assigns it to the entire area of the box. The performance of the developed perception technique is evaluated using the public nuScenes dataset. The developed DOGM with object class information will help autonomous vehicles navigate complex urban driving environments by providing rich information, such as the class and velocity of nearby obstacles.
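
Dempster-Shafer update rules of this kind build on the classical combination rule over the frame {free, occupied}, with the unassigned mass representing ignorance. The sketch below shows only that binary-occupancy base case; the paper's extension to class information and bounding-box occupancy models is not reproduced.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over {free, occupied}.

    Each argument is a dict of masses for 'free' and 'occupied'; whatever
    mass remains is implicitly assigned to the ignorance set {free, occupied}.
    """
    u1 = 1.0 - m1["free"] - m1["occupied"]     # unassigned (ignorance) mass
    u2 = 1.0 - m2["free"] - m2["occupied"]
    conflict = m1["free"] * m2["occupied"] + m1["occupied"] * m2["free"]
    norm = 1.0 - conflict                       # Dempster normalisation
    return {
        "free": (m1["free"] * m2["free"]
                 + m1["free"] * u2 + u1 * m2["free"]) / norm,
        "occupied": (m1["occupied"] * m2["occupied"]
                     + m1["occupied"] * u2 + u1 * m2["occupied"]) / norm,
    }

# A cell previously believed free receives an 'occupied' detection.
prior = {"free": 0.6, "occupied": 0.1}
measurement = {"free": 0.05, "occupied": 0.7}
print(dempster_combine(prior, measurement))
```

In this example the occupied mass overtakes the free mass after a single confident detection, while the residual ignorance mass keeps the cell revisable, which is what makes the evidential update robust to unclassified measurements.
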

17.
Sensors (Basel) ; 24(12)2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38931487

ABSTRACT

Loop-closure detection plays a pivotal role in simultaneous localization and mapping (SLAM). It serves to minimize cumulative errors and ensure the overall consistency of the generated map. This paper introduces a multi-sensor fusion-based loop-closure detection scheme (TS-LCD) to address the challenges of low robustness and inaccurate loop-closure detection encountered in single-sensor systems under varying lighting conditions and structurally similar environments. Our method comprises two innovative components: a timestamp synchronization method based on data processing and interpolation, and a two-order loop-closure detection scheme based on the fusion validation of visual and laser loops. Experimental results on the publicly available KITTI dataset reveal that the proposed method outperforms baseline algorithms, achieving a significant average reduction of 2.76% in the trajectory error (TE) and a notable decrease of 1.381 m per 100 m in the relative error (RE). Furthermore, it boosts loop-closure detection efficiency by an average of 15.5%, thereby effectively enhancing the positioning accuracy of odometry.
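
The first component, timestamp synchronization by interpolation, can be illustrated by resampling one sensor's stream onto the other's timestamps. The sketch below interpolates translation only, with made-up data; orientations would need SLERP, and the paper's exact scheme may differ.

```python
import numpy as np

def sync_poses_to_scans(scan_times, pose_times, poses_xyz):
    """Linearly interpolate (x, y, z) poses onto LiDAR scan timestamps.

    A minimal stand-in for a timestamp-synchronisation step; per-axis
    linear interpolation is only valid for translation.
    """
    return np.column_stack([
        np.interp(scan_times, pose_times, poses_xyz[:, i]) for i in range(3)
    ])

pose_times = np.array([0.00, 0.10, 0.20, 0.30])
poses = np.array([[0, 0, 0], [1, 0, 0], [2, 1, 0], [3, 3, 0]], dtype=float)
scan_times = np.array([0.05, 0.25])
print(sync_poses_to_scans(scan_times, pose_times, poses))
# [[0.5 0.  0. ]
#  [2.5 2.  0. ]]
```

With both streams on a common time base, the second stage can then require a visual loop candidate and a laser loop candidate to validate each other before the loop constraint is accepted.
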

18.
Sensors (Basel) ; 24(13)2024 Jun 21.
Article in English | MEDLINE | ID: mdl-39000829

ABSTRACT

This paper presents a new deep-learning architecture designed to enhance the spatial synchronization between CMOS and event cameras by harnessing their complementary characteristics. While CMOS cameras produce high-quality imagery, they struggle in rapidly changing environments-a limitation that event cameras overcome due to their superior temporal resolution and motion clarity. However, effective integration of these two technologies relies on achieving precise spatial alignment, a challenge unaddressed by current algorithms. Our architecture leverages a dynamic graph convolutional neural network (DGCNN) to process event data directly, improving synchronization accuracy. We found that synchronization precision strongly correlates with the spatial concentration and density of events, with denser distributions yielding better alignment results. Our empirical results demonstrate that areas with denser event clusters enhance calibration accuracy, with calibration errors increasing in more uniformly distributed event scenarios. This research pioneers scene-based synchronization between CMOS and event cameras, paving the way for advancements in mixed-modality visual systems. The implications are significant for applications requiring detailed visual and temporal information, setting new directions for the future of visual perception technologies.

19.
Sensors (Basel) ; 24(13)2024 Jun 25.
Article in English | MEDLINE | ID: mdl-39000897

ABSTRACT

Effective security surveillance is crucial in the railway sector to prevent security incidents, including vandalism, trespassing, and sabotage. This paper discusses the challenges of maintaining seamless surveillance over extensive railway infrastructure, considering both technological advances and the growing risks posed by terrorist attacks. Based on previous research, this paper discusses the limitations of current surveillance methods, particularly in managing the information overload and false alarms that result from integrating multiple sensor technologies. To address these issues, we propose a new fusion model that utilises Probabilistic Occupancy Maps (POMs) and Bayesian fusion techniques. The fusion model is evaluated on a comprehensive dataset comprising three use cases with a total of eight real-life critical scenarios. We show that, with this model, detection accuracy can be increased while simultaneously reducing false alarms in railway security surveillance systems. In this way, our approach enhances situational awareness and improves the effectiveness of railway security measures.
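
The false-alarm reduction from Bayesian fusion can be illustrated on a single map cell: independent detector likelihood ratios accumulate in log-odds space, so one weakly dissenting sensor tempers two agreeing ones. The likelihood values and prior below are invented, and the paper's POM formulation is richer than this single-cell sketch.

```python
import numpy as np

def fuse_log_odds(prior_logodds, detections):
    """Bayesian fusion of independent detector outputs on one map cell.

    detections: list of (p_given_present, p_given_absent) likelihoods from
    the individual sensors (values invented for illustration). Returns the
    posterior probability that the cell is occupied by an intruder.
    """
    L = prior_logodds
    for p_hit, p_false in detections:
        L += np.log(p_hit / p_false)     # independent likelihood ratios
    return 1.0 / (1.0 + np.exp(-L))      # back to probability

# Two sensors agree, one weakly disagrees; prior probability is 0.1.
prior = np.log(0.1 / 0.9)
print(round(fuse_log_odds(prior, [(0.8, 0.1), (0.7, 0.2), (0.3, 0.5)]), 3))
```

Because a single detector firing alone rarely pushes the posterior past an alarm threshold, fusion of this kind naturally suppresses the isolated false positives that plague single-sensor surveillance.
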

20.
Sensors (Basel) ; 24(13)2024 Jun 27.
Article in English | MEDLINE | ID: mdl-39000972

ABSTRACT

With the continuous development of new sensor features and tracking algorithms for object tracking, researchers have opportunities to experiment using different combinations. However, there is no standard or agreed method for selecting an appropriate architecture for autonomous vehicle (AV) crash reconstruction using multi-sensor-based sensor fusion. This study proposes a novel simulation method for tracking performance evaluation (SMTPE) to solve this problem. The SMTPE helps select the best tracking architecture for AV crash reconstruction. This study reveals that a radar-camera-based centralized tracking architecture of multi-sensor fusion performed the best among three different architectures tested with varying sensor setups, sampling rates, and vehicle crash scenarios. We provide a brief guideline for the best practices in selecting appropriate sensor fusion and tracking architecture arrangements, which can be helpful for future vehicle crash reconstruction and other AV improvement research.
