Results 1 - 20 of 589
1.
Nano Lett ; 24(23): 6948-6956, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38810209

ABSTRACT

The concept of cross-sensor modulation, wherein one sensor modality can influence another's response, is often overlooked in traditional sensor fusion architectures, leading to missed opportunities for enhancing data accuracy and robustness. In contrast, biological systems, such as aquatic animals like crayfish, demonstrate superior sensor fusion through multisensory integration. These organisms adeptly integrate visual, tactile, and chemical cues to perform tasks such as evading predators and locating prey. Drawing inspiration from this, we propose a neuromorphic platform that integrates graphene-based chemitransistors, monolayer molybdenum disulfide (MoS2) based photosensitive memtransistors, and triboelectric tactile sensors to achieve "Super-Additive" responses to weak chemical, visual, and tactile cues and demonstrate contextual response modulation, also referred to as the "Inverse Effectiveness Effect." We believe that integrating bio-inspired sensor fusion principles across various modalities holds promise for a wide range of applications.


Subjects
Astacoidea, Graphite, Molybdenum, Touch, Animals, Molybdenum/chemistry, Graphite/chemistry, Disulfides/chemistry
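To make the "Super-Additive" and "Inverse Effectiveness" terms concrete, here is a toy numerical sketch (an illustrative assumption, not the paper's device model): three sensory channels share a saturating threshold nonlinearity, so pooling weak cues before the nonlinearity yields a response larger than the sum of the unimodal responses, and the relative enhancement shrinks as the cues strengthen.

```python
# Toy model: three sensory channels share a saturating threshold
# nonlinearity. Gain and threshold values are illustrative assumptions.
import numpy as np

def response(drive, gain=8.0, threshold=0.5):
    """Saturating threshold nonlinearity of a sensory channel."""
    return 1.0 / (1.0 + np.exp(-gain * (drive - threshold)))

for level in (0.15, 0.5):                  # weak vs. strong cues
    unimodal = response(level)             # one cue alone
    additive = 3 * unimodal                # sum of three independent channels
    fused = response(3 * level)            # cues pooled before the nonlinearity
    print(f"cue={level}: unimodal={unimodal:.3f}, "
          f"additive sum={additive:.3f}, fused={fused:.3f}")

# Weak cues: fused (~0.40) far exceeds the additive sum (~0.17), i.e.
# super-additive. Strong cues: the fused response saturates below the sum,
# so the relative enhancement shrinks, the inverse-effectiveness pattern.
```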
2.
Sensors (Basel) ; 24(17)2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39275571

ABSTRACT

In recent years, with the widespread application of indoor inspection robots, high-precision, robust environmental perception has become essential for robotic mapping. Addressing the issues of visual-inertial estimation inaccuracies due to redundant pose degrees of freedom and accelerometer drift during the planar motion of mobile robots in indoor environments, we propose a visual SLAM perception method that integrates wheel odometry information. First, the robot's body pose is parameterized in SE(2), and the corresponding camera pose is parameterized in SE(3). On this basis, we derive the visual constraint residuals and their Jacobian matrices for reprojection observations using the camera projection model. We employ the concept of pre-integration to derive pose-constraint residuals and their Jacobian matrices and utilize marginalization theory to derive the relative pose residuals and their Jacobians for loop closure constraints. This approach solves the nonlinear optimization problem to obtain the optimal pose and landmark points of the ground-moving robot. A comparison with the ORB-SLAM3 algorithm reveals that, on the recorded indoor environment datasets, the proposed algorithm demonstrates significantly higher perception accuracy, with root mean square error (RMSE) improvements of 89.2% in translation and 98.5% in rotation for absolute trajectory error (ATE). The overall trajectory localization accuracy ranges between 5 and 17 cm, validating the effectiveness of the proposed algorithm. These findings can be applied to preliminary mapping for the autonomous navigation of indoor mobile robots and serve as a basis for path planning based on the mapping results.
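A minimal NumPy sketch of the parameterization described in this abstract: the body pose lives in SE(2), the camera pose is lifted to SE(3) through a fixed body-to-camera extrinsic, and a pinhole reprojection residual is formed per landmark. The intrinsics, poses, and landmark below are placeholder assumptions, and the paper's analytic Jacobians are not reproduced.

```python
# Sketch: SE(2) body pose -> SE(3) camera pose -> reprojection residual.
import numpy as np

def se2_to_se3(x, y, theta):
    """Lift a planar body pose (x, y, theta) to a 4x4 SE(3) matrix."""
    T = np.eye(4)
    c, s = np.cos(theta), np.sin(theta)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:2, 3] = [x, y]
    return T

def reprojection_residual(body_pose, T_body_cam, landmark_w, pixel_obs, K):
    """2D residual between an observed pixel and a projected landmark."""
    T_w_body = se2_to_se3(*body_pose)
    T_w_cam = T_w_body @ T_body_cam          # fixed extrinsic lifts the pose
    p_cam = np.linalg.inv(T_w_cam) @ np.append(landmark_w, 1.0)
    uv = (K @ p_cam[:3])[:2] / p_cam[2]      # pinhole projection
    return uv - pixel_obs

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # assumed intrinsics
r = reprojection_residual((1.0, 2.0, 0.3), np.eye(4),
                          np.array([3.0, 2.5, 0.8]), np.array([350.0, 200.0]), K)
print("residual:", r)
```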

3.
Sensors (Basel) ; 24(12)2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38931615

ABSTRACT

In this study, we enhanced odometry performance by integrating vision sensors with LiDAR sensors, which exhibit contrasting characteristics. Vision sensors provide extensive environmental information but are limited in precise distance measurement, whereas LiDAR offers high accuracy in distance metrics but lacks detailed environmental data. By utilizing data from vision sensors, this research compensates for the inadequate descriptors of LiDAR sensors, thereby improving LiDAR feature matching performance. Traditional fusion methods, which rely on extracting depth from image features, depend heavily on vision sensors and are vulnerable under challenging conditions such as rain, darkness, or light reflection. Utilizing vision sensors as primary sensors under such conditions can lead to significant mapping errors and, in the worst cases, system divergence. Conversely, our approach uses LiDAR as the primary sensor, mitigating the shortcomings of previous methods and enabling vision sensors to support LiDAR-based mapping. This maintains LiDAR odometry performance even in environments where vision sensors are compromised, while still benefiting from their support when conditions allow. We adopted five prominent algorithms from the latest LiDAR SLAM open-source projects and conducted experiments on the KITTI odometry dataset. This research proposes a novel approach that integrates a vision support module into the top three LiDAR SLAM methods, thereby improving performance. By making the source code of VA-LOAM publicly available, this work enhances the accessibility of the technology, fostering reproducibility and transparency within the research community.
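One way to picture the "vision support" idea, sketched below with OpenCV under simplified assumptions: LiDAR keypoints that project into the camera image borrow ORB descriptors from it, giving LiDAR features a richer signature to match across scans. The function names and the projection step are illustrative, not VA-LOAM's implementation.

```python
# Sketch: attach image descriptors to projected LiDAR keypoints.
import cv2
import numpy as np

def describe_lidar_points(image, uv_points):
    """Attach ORB descriptors to LiDAR keypoints projected at pixels uv.

    `image` is a uint8 grayscale frame; `uv_points` are the pixel
    coordinates of LiDAR features that fall inside the camera view.
    """
    orb = cv2.ORB_create()
    kps = [cv2.KeyPoint(float(u), float(v), 31) for u, v in uv_points]
    kps, descs = orb.compute(image, kps)
    return kps, descs

def match_scans(desc_a, desc_b, ratio=0.75):
    """Ratio-test matching between two scans' projected LiDAR features."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    return [m for m, n in matcher.knnMatch(desc_a, desc_b, k=2)
            if m.distance < ratio * n.distance]
```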

4.
Sensors (Basel) ; 24(8)2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38676054

ABSTRACT

Modern vehicles equipped with Advanced Driver Assistance Systems (ADAS) rely heavily on sensor fusion to achieve a comprehensive understanding of their surrounding environment. Traditionally, the Kalman Filter (KF) has been a popular choice for this purpose, necessitating complex data association and track management to ensure accurate results. To address the errors introduced by these processes, the Gaussian Mixture Probability Hypothesis Density (GM-PHD) filter is a compelling alternative, as it implicitly handles the association and appearance/disappearance of tracks. The approach presented here allows for the replacement of KF frameworks in many applications while achieving runtimes below 1 ms on the test system. The key innovations lie in the utilization of sensor-based parameter models to implicitly handle varying Fields of View (FoV) and sensing capabilities. These models represent sensor-specific properties such as detection probability and clutter density across the state space. Additionally, we introduce a method for propagating additional track properties, such as classification, with the GM-PHD filter, further contributing to its versatility and applicability. The proposed GM-PHD filter approach surpasses a KF approach on the KITTI dataset and another custom dataset, reducing the mean OSPA(2) error from 1.56 (KF approach) to 1.40 (GM-PHD approach) and showcasing its potential in ADAS perception.
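For readers unfamiliar with the filter, a compact linear-Gaussian GM-PHD predict/update cycle looks roughly like the sketch below. The constant-velocity model, uniform detection probability, and scalar clutter density are simplifying assumptions; the paper's contribution is precisely to replace those constants with sensor-based parameter models over the state space.

```python
# Linear-Gaussian GM-PHD sketch: no explicit data association is needed,
# because every measurement updates every mixture component by weight.
import numpy as np
from dataclasses import dataclass

@dataclass
class Gaussian:
    w: float        # component weight (expected target-count mass)
    m: np.ndarray   # mean [x, y, vx, vy]
    P: np.ndarray   # covariance

def predict(mix, F, Q, p_survive=0.99):
    return [Gaussian(p_survive * g.w, F @ g.m, F @ g.P @ F.T + Q) for g in mix]

def update(mix, zs, H, R, p_detect=0.9, clutter=1e-4):
    updated = [Gaussian((1 - p_detect) * g.w, g.m, g.P) for g in mix]  # missed
    for z in zs:
        parts = []
        for g in mix:
            S = H @ g.P @ H.T + R
            K = g.P @ H.T @ np.linalg.inv(S)
            nu = z - H @ g.m
            lik = np.exp(-0.5 * nu @ np.linalg.solve(S, nu)) / \
                  np.sqrt(np.linalg.det(2 * np.pi * S))
            parts.append(Gaussian(p_detect * g.w * lik, g.m + K @ nu,
                                  (np.eye(len(g.m)) - K @ H) @ g.P))
        norm = clutter + sum(p.w for p in parts)
        for p in parts:
            p.w /= norm
        updated.extend(parts)
    return updated  # prune/merge low-weight components in practice

dt = 0.1
F = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])
mix = [Gaussian(0.5, np.zeros(4), np.eye(4))]
mix = update(predict(mix, F, 0.01 * np.eye(4)),
             [np.array([0.2, -0.1])], H, 0.04 * np.eye(2))
print(sum(g.w for g in mix))  # expected number of targets
```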

5.
Sensors (Basel) ; 24(8)2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38676111

ABSTRACT

This paper introduces an innovative approach to 3D environmental mapping through the integration of a compact, handheld sensor package with a two-stage sensor fusion pipeline. The sensor package, incorporating LiDAR, IMU, RGB, and thermal cameras, enables comprehensive and robust 3D mapping of various environments. By leveraging Simultaneous Localization and Mapping (SLAM) and thermal imaging, our solution offers good performance in conditions where global positioning is unavailable and in visually degraded environments. The sensor package runs a real-time LiDAR-inertial SLAM algorithm, generating a dense point cloud map that accurately reconstructs the geometric features of the environment. Following the acquisition of that point cloud, we post-process the data by fusing them with images from the RGB and thermal cameras to produce a detailed, color-enriched 3D map that is useful and adaptable to different mission requirements. We demonstrated our system in a variety of scenarios, from indoor to outdoor conditions, and the results showcased the effectiveness and applicability of our sensor package and fusion pipeline. The system can be applied in a wide range of applications, from autonomous navigation to smart agriculture, and has the potential to deliver substantial benefits across diverse fields.
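The second-stage fusion described above amounts to projecting the SLAM point cloud into each calibrated camera and attaching per-point color; the same projection serves the thermal camera with its own intrinsics and extrinsic. A simplified sketch with placeholder matrices:

```python
# Sketch: colorize a world-frame point cloud from a calibrated camera.
import numpy as np

def colorize(points_w, image, T_cam_world, K):
    """Return (point, color) pairs for points that land inside the image."""
    h, w = image.shape[:2]
    pts_h = np.hstack([points_w, np.ones((len(points_w), 1))])
    pts_c = (T_cam_world @ pts_h.T).T[:, :3]          # world -> camera frame
    in_front = pts_c[:, 2] > 0.1                      # assumed z-forward camera
    uv = (K @ pts_c[in_front].T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)         # perspective divide
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return points_w[in_front][valid], image[uv[valid, 1], uv[valid, 0]]
```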

6.
Sensors (Basel) ; 24(10)2024 May 11.
Article in English | MEDLINE | ID: mdl-38793902

ABSTRACT

The development of the GPS (Global Positioning System) and related advances have made it possible to conceive of an outdoor positioning system with great accuracy; however, for indoor positioning, more efficient, reliable, and cost-effective technology is required. A variety of techniques are utilized for indoor positioning, including Wi-Fi-, Bluetooth-, infrared-, ultrasound-, magnetic-, and visual-marker-based methods. This work aims to design an accurate position estimation algorithm by combining raw distance data from ultrasonic sensors (Marvelmind Beacon) and acceleration data from an inertial measurement unit (IMU), utilizing the extended Kalman filter (EKF) with the UDU factorization (expressed as the product of a triangular, a diagonal, and the transpose of the triangular matrix) approach. Initially, a position estimate is calculated through the use of a recursive least squares (RLS) method with a trilateration algorithm, utilizing raw distance data. This solution is then combined with acceleration data collected from the Marvelmind sensor, resulting in a position solution akin to that of the GPS. The data were initially collected via the ROS (Robot Operating System) platform and then via the Pixhawk development card, with tests conducted using four fixed and one mobile Marvelmind sensor, as well as three fixed and one mobile sensor. The designed algorithm produces accurate results for position estimation and was subsequently implemented on an embedded development card (Pixhawk). The tests showed that the designed algorithm gives accurate results with centimeter precision. Furthermore, the test results showed that the UDU-EKF structure integrated into the embedded system is faster than the classical EKF.
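The trilateration stage can be sketched as follows; a batch least-squares variant is shown for brevity (the paper uses a recursive formulation), and the beacon layout and ranges are made-up test values. The resulting fix, together with IMU accelerations, would then feed the UDU-factorized EKF.

```python
import numpy as np

def trilaterate(beacons, ranges):
    """Position fix from beacon positions and measured ranges.

    Subtracting the first range equation from the others removes the
    quadratic terms, leaving a linear system in the unknown position.
    """
    b0, r0 = beacons[0], ranges[0]
    A = 2.0 * (beacons[1:] - b0)
    d = (r0**2 - ranges[1:]**2
         + np.sum(beacons[1:]**2, axis=1) - np.sum(b0**2))
    pos, *_ = np.linalg.lstsq(A, d, rcond=None)
    return pos

# Four fixed beacons (non-coplanar heights) and a simulated mobile node.
beacons = np.array([[0.0, 0.0, 2.5], [5.0, 0.0, 2.6],
                    [0.0, 5.0, 2.4], [5.0, 5.0, 2.7]])
truth = np.array([2.0, 3.0, 1.0])
ranges = np.linalg.norm(beacons - truth, axis=1)
print(trilaterate(beacons, ranges))  # ~[2.0, 3.0, 1.0]
```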

7.
Sensors (Basel) ; 24(10)2024 May 12.
Article in English | MEDLINE | ID: mdl-38793933

ABSTRACT

This paper presents an enhanced ground vehicle localization method designed to address the challenges associated with state estimation for autonomous vehicles operating in diverse environments. The focus is specifically on the precise localization of position and orientation in both local and global coordinate systems. The proposed approach integrates local estimates generated by existing visual-inertial odometry (VIO) methods with global position information obtained from the Global Navigation Satellite System (GNSS). This integration is achieved through optimization-based fusion in a pose graph, ensuring precise local estimation and drift-free global position estimation. Considering the inherent complexities of autonomous driving scenarios, such as potential failures of a visual-inertial navigation system (VINS) and restrictions on GNSS signals in urban canyons, both of which disrupt localization outcomes, we introduce an adaptive fusion mechanism. This mechanism allows seamless switching between three modes: using only VINS, using only GNSS, and normal fusion. The effectiveness of the proposed algorithm is demonstrated through rigorous testing in the Carla simulation environment and challenging UrbanNav scenarios. The evaluation includes both qualitative and quantitative analyses, revealing that the method exhibits robustness and accuracy.
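The adaptive mechanism can be pictured as a small supervisory switch; the health checks below (feature count, satellite count, and dilution-of-precision thresholds) are illustrative assumptions, not the paper's criteria.

```python
# Sketch: degrade gracefully to whichever source is still healthy.
from enum import Enum, auto

class Mode(Enum):
    FUSION = auto()     # pose-graph fusion of VIO and GNSS
    VINS_ONLY = auto()  # GNSS blocked (e.g., urban canyon)
    GNSS_ONLY = auto()  # VIO diverged or lost tracking

def select_mode(tracked_features, num_satellites, hdop):
    vio_ok = tracked_features >= 20          # assumed tracking threshold
    gnss_ok = num_satellites >= 6 and hdop < 2.0
    if vio_ok and gnss_ok:
        return Mode.FUSION
    return Mode.VINS_ONLY if vio_ok else Mode.GNSS_ONLY

print(select_mode(tracked_features=85, num_satellites=4, hdop=3.5))  # VINS_ONLY
```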

8.
Sensors (Basel) ; 24(10)2024 May 13.
Article in English | MEDLINE | ID: mdl-38793942

ABSTRACT

Autonomous driving, as a pivotal technology in modern transportation, is progressively transforming the modalities of human mobility. In this domain, vehicle detection is a significant research direction that involves the intersection of multiple disciplines, including sensor technology and computer vision. In recent years, many excellent vehicle detection methods have been reported, but few studies have focused on summarizing and analyzing these algorithms. This work provides a comprehensive review of existing vehicle detection algorithms and discusses their practical applications in the field of autonomous driving. First, we provide a brief description of the tasks, evaluation metrics, and datasets for vehicle detection. Second, more than 200 classical and latest vehicle detection algorithms are summarized in detail, including those based on machine vision, LiDAR, millimeter-wave radar, and sensor fusion. Finally, this article discusses the strengths and limitations of different algorithms and sensors, and proposes future trends.

9.
Sensors (Basel) ; 24(10)2024 May 18.
Article in English | MEDLINE | ID: mdl-38794061

ABSTRACT

Detecting objects, particularly naval mines, on the seafloor is a complex task. In naval mine countermeasures (MCM) operations, sidescan or synthetic aperture sonars have been used to search large areas. However, a single sensor cannot meet the requirements of high-precision autonomous navigation. Based on the ORB-SLAM3-VI framework, we propose ORB-SLAM3-VIP, which integrates a depth sensor, an IMU sensor, and an optical sensor. This method tightly couples the measurements of the depth and IMU sensors into the visual SLAM algorithm and establishes a multi-sensor fusion SLAM model. Depth constraints are introduced into initialization, scale fine-tuning, tracking, and mapping to constrain the sensor's position along the z-axis and improve the accuracy of pose estimation and map scale estimation. Tests on seven underwater multi-sensor sequences from the AQUALOC dataset show that, compared with ORB-SLAM3-VI, the proposed ORB-SLAM3-VIP system reduces the scale error in all sequences by up to 41.2% and the trajectory error by up to 41.2%, with the root mean square error reduced by up to 41.6%.
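The tight depth coupling can be pictured as one extra residual per pressure reading inside the joint optimization, alongside the reprojection and IMU pre-integration terms. A minimal sketch under assumed conventions (z-up world frame, depth measured positive downward, assumed sensor-to-body offset):

```python
import numpy as np

def depth_residual(t_world_body, depth_meas, z_offset=0.0, sigma=0.05):
    """Whitened residual between estimated z and pressure-derived depth."""
    return (t_world_body[2] - (z_offset - depth_meas)) / sigma

# In a factor-graph solver this term is summed with the visual and IMU
# residuals; its Jacobian w.r.t. the pose translation is [0, 0, 1/sigma],
# which is what anchors vertical drift and stabilizes the map scale.
print(depth_residual(np.array([1.0, 2.0, -9.8]), depth_meas=10.0))  # ~4.0
```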

10.
Sensors (Basel) ; 24(6)2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38543987

ABSTRACT

The use of smart indoor robotics services is gradually increasing in real-time scenarios. This paper presents a versatile approach to multi-robot backing crash prevention in indoor environments, using hardware schemes to achieve greater efficiency. Here, sensor fusion was initially used to analyze the state of the multi-robots and their orientation within a static or dynamic scenario. The proposed novel hardware scheme-based framework integrates both static and dynamic scenarios for the execution of backing crash prevention. A round-robin (RR) scheduling algorithm was designed for the static scenario. Dynamic backing crash prevention was implemented by embedding a first-come, first-served (FCFS) scheduling algorithm. The behavioral control mechanism of the distributed multi-robots was integrated with the FCFS and adaptive cruise control (ACC) scheduling algorithms. Integrating multiple algorithms is a challenging task for smarter indoor robotics, so the Xilinx-based partial reconfiguration method was deployed to avoid computational issues with multiple algorithms during run-time. These methods were coded in Verilog HDL and validated using an FPGA (Zynq)-based multi-robot system.
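The dynamic (FCFS) policy boils down to granting a contested backing zone in arrival order. The paper realizes this in Verilog HDL on the Zynq FPGA; the Python queue below is only a behavioral analogue for illustration.

```python
# Behavioral sketch of FCFS arbitration for a shared backing zone.
from collections import deque

class BackingArbiter:
    def __init__(self):
        self.queue = deque()   # robots waiting to reverse, arrival order
        self.active = None     # robot currently granted the shared zone

    def request(self, robot_id):
        if robot_id not in self.queue and robot_id != self.active:
            self.queue.append(robot_id)

    def grant_next(self):
        if self.active is None and self.queue:
            self.active = self.queue.popleft()
        return self.active

    def release(self, robot_id):
        if self.active == robot_id:
            self.active = None

arb = BackingArbiter()
for rid in ("R1", "R2", "R3"):
    arb.request(rid)
print(arb.grant_next())  # R1 backs up first; R2 and R3 wait their turn
```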

11.
Sensors (Basel) ; 24(6)2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38544217

ABSTRACT

Inertial measurement units (IMUs) are key components of various applications, including navigation, robotics, aerospace, and automotive systems. IMU sensor characteristics have a significant impact on the accuracy and reliability of these applications. In particular, noise characteristics and bias stability are critical for proper filter settings in a combined GNSS/IMU solution. This paper presents an analysis based on the Allan deviation of different IMU sensors that correspond to different grades of micro-electromechanical systems (MEMS)-type IMUs in order to evaluate their accuracy and stability over time. The study covers three IMU sensors of different grades (in ascending order): the Rokubun Argonaut navigator sensor (InvenSense TDK MPU9250), the Samsung Galaxy Note10 phone sensor (STMicroelectronics LSM6DSR), and the NovAtel PwrPak7 sensor (Epson EG320N). The noise components of the sensors are computed using overlapped Allan deviation analysis on data collected over the course of a week in a static position. The focus of the analysis is to characterize the random walk noise and bias stability, which are the most critical for combined GNSS/IMU navigation and may differ from, or be missing from, manufacturers' specifications. Noise characteristics are calculated for the studied sensors, and examples of their use in loosely coupled GNSS/IMU processing are assessed. This work proposes a structured and reproducible approach for characterizing sensors for use in navigation tasks in combination with GNSS, and it can be applied to sensors of different grades to supplement missing or incorrect manufacturer data.
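The core computation, the overlapped Allan deviation, is compact enough to sketch: integrate the rate samples to phase and evaluate second differences over log-spaced averaging times. The sampling rate and the white-noise input below are stand-ins for a week-long static log.

```python
import numpy as np

def overlapped_allan_deviation(rates, fs, max_clusters=100):
    """Overlapped Allan deviation of rate samples taken at fs Hz."""
    tau0 = 1.0 / fs
    x = np.cumsum(rates) * tau0                   # integrate rate -> phase
    n = len(x)
    ms = np.unique(np.logspace(0, np.log10((n - 1) // 2),
                               max_clusters).astype(int))
    taus, adevs = [], []
    for m in ms:
        d2 = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]  # overlapped 2nd differences
        avar = np.sum(d2**2) / (2 * (m * tau0)**2 * (n - 2 * m))
        taus.append(m * tau0)
        adevs.append(np.sqrt(avar))
    return np.array(taus), np.array(adevs)

taus, adevs = overlapped_allan_deviation(
    np.random.normal(0, 0.01, 200_000), fs=100.0)
# For white (angle random walk) noise the log-log slope is close to -1/2;
# the flat region of the curve indicates the bias-instability floor.
```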

12.
Sensors (Basel) ; 24(4)2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38400380

ABSTRACT

As a fundamental issue in robotics academia and industry, indoor autonomous mobile robots (AMRs) have been extensively studied. For AMRs, it is crucial to obtain information about their working environment and themselves, which can be realized through sensors and the extraction of corresponding information from their measurements. The application of sensing technologies can enable mobile robots to perform localization, mapping, target or obstacle recognition, and motion tasks, etc. This paper reviews sensing technologies for autonomous mobile robots in indoor scenes. The benefits and potential problems of using a single sensor in applications are analyzed and compared, and the basic principles and popular algorithms used in processing these sensor data are introduced. In addition, some mainstream technologies for multi-sensor fusion are introduced. Finally, this paper discusses the future development trends of sensing technology for autonomous mobile robots in indoor scenes, as well as the challenges posed by practical application environments.

13.
Sensors (Basel) ; 24(7)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38610366

ABSTRACT

This work addresses the challenge of calibrating multiple solid-state LIDAR systems. The study focuses on three different solid-state LIDAR sensors that implement different hardware designs, leading to distinct scanning patterns for each system. Consequently, detecting corresponding points between the point clouds generated by these LIDAR systems, as required for calibration, is a complex task. To overcome this challenge, this paper proposes a method that involves several steps. First, the measurement data are preprocessed to enhance their quality. Next, features are extracted from the acquired point clouds using the Fast Point Feature Histogram method, which categorizes important characteristics of the data. Finally, the extrinsic parameters are computed using the Fast Global Registration technique. The best set of parameters for the pipeline and the calibration success are evaluated using the normalized root mean square error. In a static real-world indoor scenario, a minimum root mean square error of 7 cm was achieved. Importantly, the paper demonstrates that the presented approach is suitable for online use, indicating its potential for real-time applications. By effectively calibrating the solid-state LIDAR systems and establishing point correspondences, this research contributes to the advancement of multi-LIDAR fusion and facilitates accurate perception and mapping in various fields such as autonomous driving, robotics, and environmental monitoring.
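Both building blocks named above, FPFH features and Fast Global Registration, ship with Open3D, so the pipeline can be sketched as below, assuming two overlapping clouds saved as PCD files; the voxel size and search radii would need tuning per sensor.

```python
import open3d as o3d

def preprocess(pcd, voxel=0.05):
    """Downsample, estimate normals, and compute FPFH features."""
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
    return down, fpfh

src, src_f = preprocess(o3d.io.read_point_cloud("lidar_a.pcd"))  # assumed files
tgt, tgt_f = preprocess(o3d.io.read_point_cloud("lidar_b.pcd"))
result = o3d.pipelines.registration.registration_fgr_based_on_feature_matching(
    src, tgt, src_f, tgt_f,
    o3d.pipelines.registration.FastGlobalRegistrationOption(
        maximum_correspondence_distance=0.075))
print(result.transformation)  # candidate extrinsic from LIDAR a to LIDAR b
```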

14.
Sensors (Basel) ; 24(7)2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38610568

ABSTRACT

Soil organic matter (SOM) is one of the best indicators for assessing soil health and understanding soil productivity and fertility. Therefore, measuring SOM content is a fundamental practice in soil science and agricultural research. The traditional (oven-drying) approach to measuring SOM is costly, arduous, and time-consuming. However, the integration of cutting-edge technology can significantly aid in the prediction of SOM, presenting a promising alternative to traditional methods. In this study, we tested the hypothesis that an accurate estimate of SOM might be obtained by combining ground-based sensor-captured soil parameters and soil analysis data with drone images of the farm. The data were gathered using three different methods: ground-based sensors detect soil parameters such as temperature, pH, humidity, nitrogen, phosphorus, and potassium; aerial photos taken by UAVs provide the vegetation index (NDVI); and Haney soil-test reports are measured in a lab from collected samples. Our datasets combined the soil parameters collected using ground-based sensors, the soil analysis reports, and the NDVI content of farms, and we performed data analysis to predict SOM using different machine learning algorithms. We incorporated regression and ANOVA for analyzing the dataset and explored seven machine learning algorithms (linear regression, Ridge regression, Lasso regression, random forest regression, Elastic Net regression, support vector machine, and Stochastic Gradient Descent regression) to predict the soil organic matter content using the other parameters as predictors.
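The modeling step can be sketched with scikit-learn as below; the CSV layout and column names are assumptions for illustration, with "som" as the target and the seven regressors named in the abstract compared by cross-validated R^2.

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import (LinearRegression, Ridge, Lasso,
                                  ElasticNet, SGDRegressor)
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

df = pd.read_csv("farm_dataset.csv")  # assumed file and column layout
X = df[["temperature", "ph", "humidity", "n", "p", "k", "ndvi", "haney_score"]]
y = df["som"]

models = {
    "linear": LinearRegression(), "ridge": Ridge(), "lasso": Lasso(),
    "elastic_net": ElasticNet(), "random_forest": RandomForestRegressor(),
    "svr": SVR(), "sgd": SGDRegressor(),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)  # scale, then regress
    r2 = cross_val_score(pipe, X, y, cv=5, scoring="r2").mean()
    print(f"{name:>13}: mean CV R^2 = {r2:.3f}")
```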

15.
Sensors (Basel) ; 24(2)2024 Jan 20.
Article in English | MEDLINE | ID: mdl-38276361

ABSTRACT

LiDAR sensors, pivotal in various fields like agriculture and robotics for tasks such as 3D object detection and map creation, are increasingly coupled with thermal cameras to harness heat information. This combination proves particularly effective in adverse conditions like darkness and rain. Ensuring seamless fusion between the sensors necessitates precise extrinsic calibration. Our innovative calibration method leverages human presence during sensor setup movements, eliminating the reliance on dedicated calibration targets. It optimizes extrinsic parameters by employing a novel evolutionary algorithm on a specifically designed loss function that measures human alignment across modalities. Our approach showcases a notable 4.43% improvement in the loss over extrinsic parameters obtained from target-based calibration in the FieldSAFE dataset. This advancement reduces costs related to target creation, saves time in diverse pose collection, mitigates repetitive calibration efforts amid sensor drift or setting changes, and broadens accessibility by obviating the need for specific targets. The adaptability of our method in various environments, like urban streets or expansive farm fields, stems from leveraging the ubiquitous presence of humans. Our method presents an efficient, cost-effective, and readily applicable means of extrinsic calibration, enhancing sensor fusion capabilities in the critical fields reliant on precise and robust data acquisition.


Subjects
Agriculture, Algorithms, Humans, Calibration, Biological Evolution, Farms
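The optimization loop at the heart of the method can be sketched as a simple (mu + lambda)-style evolutionary search over the 6-DoF extrinsic. The alignment loss below is a synthetic stand-in (a quadratic bowl) so the demo runs; in the real pipeline it would project LiDAR human points into the thermal frames and score mask overlap, and the paper's evolutionary operators are more elaborate.

```python
import numpy as np

TRUE_EXTRINSIC = np.array([0.01, -0.02, 0.03, 0.10, 0.00, -0.05])

def alignment_loss(extrinsic6):
    # Stand-in for the cross-modal human-alignment loss described above.
    return float(np.sum((extrinsic6 - TRUE_EXTRINSIC) ** 2))

def evolve(loss, pop_size=32, elite=8, sigma=0.05, generations=50):
    """Keep the best candidates, mutate them, repeat."""
    pop = np.random.normal(0, 0.1, (pop_size, 6))  # [rx, ry, rz, tx, ty, tz]
    for _ in range(generations):
        scores = np.array([loss(p) for p in pop])
        parents = pop[np.argsort(scores)[:elite]]
        children = parents[np.random.randint(elite, size=pop_size - elite)]
        children = children + np.random.normal(0, sigma, children.shape)
        pop = np.vstack([parents, children])
    return pop[np.argmin([loss(p) for p in pop])]

best = evolve(alignment_loss)
print(best)  # converges near TRUE_EXTRINSIC for this synthetic loss
```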
16.
Sensors (Basel) ; 24(17)2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39275752

ABSTRACT

Current state-of-the-art (SOTA) LiDAR-only detectors perform well for 3D object detection tasks, but point cloud data are typically sparse and lack semantic information. Detailed semantic information obtained from camera images can be combined with existing LiDAR-based detectors to create a robust 3D detection pipeline. With two different data types, a major challenge in developing multi-modal sensor fusion networks is to achieve effective data fusion while managing computational resources. With separate 2D and 3D feature extraction backbones, feature fusion can become more challenging, as these modes generate different gradients, leading to gradient conflicts and suboptimal convergence during network optimization. To this end, we propose a 3D object detection method, Attention-Enabled Point Fusion (AEPF). AEPF uses images and voxelized point cloud data as inputs and estimates the 3D bounding boxes of object locations as outputs. An attention mechanism is introduced to an existing feature fusion strategy to improve 3D detection accuracy, and two variants are proposed. These two variants, AEPF-Small and AEPF-Large, address different needs. AEPF-Small, with a lightweight attention module and fewer parameters, offers fast inference. AEPF-Large, with a more complex attention module and increased parameters, provides higher accuracy than baseline models. Experimental results on the KITTI validation set show that AEPF-Small maintains SOTA 3D detection accuracy while running inference at higher speeds. AEPF-Large achieves mean average precision scores of 91.13, 79.06, and 76.15 for the car class's easy, medium, and hard targets, respectively, on the KITTI validation set. Results from ablation experiments are also presented to support the choice of model architecture.
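A minimal PyTorch sketch of the general idea behind attention-enabled fusion: channel attention computed from the concatenated camera and LiDAR feature maps re-weights the fused features before the detection head. Layer widths are illustrative assumptions; the actual AEPF-Small and AEPF-Large modules differ in structure and size.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, c_img=64, c_pts=64):
        super().__init__()
        c = c_img + c_pts
        self.gate = nn.Sequential(          # lightweight channel attention
            nn.Conv2d(c, c // 4, 1), nn.ReLU(),
            nn.Conv2d(c // 4, c, 1), nn.Sigmoid())
        self.mix = nn.Conv2d(c, c_pts, 1)   # back to the 3D-branch width

    def forward(self, img_feat, pts_feat):
        fused = torch.cat([img_feat, pts_feat], dim=1)
        return self.mix(self.gate(fused) * fused)

fusion = AttentionFusion()
out = fusion(torch.randn(2, 64, 100, 100), torch.randn(2, 64, 100, 100))
print(out.shape)  # torch.Size([2, 64, 100, 100])
```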

17.
Sensors (Basel) ; 24(16)2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39204932

ABSTRACT

The engine in-cylinder pressure is a very important parameter for the optimization of internal combustion engines. This paper proposes an alternative recursive Kalman filter-based engine cylinder pressure reconstruction approach using sensor-fused engine speed. In the proposed approach, the fused engine speed is first obtained using a centralized sensor fusion technique, which synthesizes information from the engine vibration sensor and the engine flywheel angular speed sensor. Afterwards, with the fused speed, the engine cylinder pressure signal can be reconstructed by inverse filtering of the engine structural vibration signal. The cylinder pressure reconstruction results of the proposed approach are validated by two combustion indicators, the pressure peak Pmax and the peak location Ploc, and compared with the results obtained by the cylinder pressure reconstruction approach using the calculated engine speed. The sensor fusion results indicate that the fused speed is smoother when the vibration signal is trusted more. Furthermore, the cylinder pressure reconstruction results reveal the relationship between the sensor-fused speed and the cylinder pressure reconstruction accuracy: the more the vibration signal is trusted, the better the reconstruction becomes.
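The trust trade-off described in the last sentences can be reproduced with a one-state Kalman filter that sequentially applies both speed measurements, where the measurement variances encode how much each source is believed. All noise values below are illustrative assumptions.

```python
import numpy as np

def fuse_speed(z_flywheel, z_vibration, r_fly=4.0, r_vib=1.0,
               x0=800.0, p0=100.0, q=0.5):
    """Scalar KF over engine speed; smaller r_* means more trust."""
    x, p = x0, p0
    fused = []
    for zf, zv in zip(z_flywheel, z_vibration):
        p += q                                   # random-walk prediction
        for z, r in ((zf, r_fly), (zv, r_vib)):  # sequential scalar updates
            k = p / (p + r)
            x += k * (z - x)
            p *= (1 - k)
        fused.append(x)
    return np.array(fused)

rpm_true = 800 + 30 * np.sin(np.linspace(0, 6, 200))
fly = rpm_true + np.random.normal(0, 2.0, 200)
vib = rpm_true + np.random.normal(0, 1.0, 200)
smooth = fuse_speed(fly, vib)
# Lowering r_vib shifts belief toward the vibration channel and yields a
# smoother fused speed, mirroring the trend the abstract reports.
```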

18.
Sensors (Basel) ; 24(16)2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39205116

ABSTRACT

Corn, as one of the three major grain crops in China, plays a crucial role in ensuring national food security through its yield and quality. With the advancement of agricultural intelligence, agricultural robot technology has gained significant attention. High-precision navigation is the basis for realizing various operations of agricultural robots in corn fields and is closely related to the quality of those operations. Corn leaf and stalk recognition and ranging are the prerequisites for achieving high-precision navigation and have attracted much attention. This paper proposes a corn leaf and stalk recognition and ranging algorithm based on multi-sensor fusion. First, YOLOv8 is used to identify corn leaves and stalks. Considering the large differences in leaf morphology and the large changes in field illumination that lead to discontinuous identification, an equidistant expansion polygon algorithm is proposed to post-process the leaves, thereby increasing the average recognition completeness of the leaves to 86.4%. Second, after eliminating redundant point clouds, the IMU data are used to calculate the confidence of the LiDAR and depth camera ranging point clouds, and point cloud fusion is performed on this basis to achieve high-precision ranging of corn leaves. The average ranging error is 2.9 cm, which is lower than the measurement error of a single sensor. Finally, the stalk point cloud is processed and clustered using the FILL-DBSCAN algorithm to identify and measure the distance to the same corn stalk. The algorithm combines recognition accuracy and ranging accuracy to meet the needs of robot navigation or phenotypic measurement in corn fields, ensuring the stable and efficient operation of the robot in the corn field.


Subjects
Algorithms, Plant Leaves, Zea mays, Zea mays/anatomy & histology, Plant Leaves/anatomy & histology, Robotics, Agriculture/methods, Crops, Agricultural, China
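The confidence-weighted fusion step can be sketched as inverse-variance weighting of paired LiDAR and depth-camera ranges, with an IMU-motion term inflating the depth camera's variance. The variance model and gains are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def fuse_ranges(r_lidar, r_depth, sigma_lidar=0.02, sigma_depth=0.03,
                angular_rate=0.0, motion_gain=0.05):
    """Inverse-variance fusion of two range estimates for the same leaf."""
    var_l = sigma_lidar**2
    var_d = sigma_depth**2 + (motion_gain * angular_rate) ** 2  # IMU penalty
    w_l, w_d = 1.0 / var_l, 1.0 / var_d
    fused = (w_l * r_lidar + w_d * r_depth) / (w_l + w_d)
    return fused, np.sqrt(1.0 / (w_l + w_d))

dist, sigma = fuse_ranges(1.52, 1.58, angular_rate=0.8)
print(f"fused leaf distance: {dist:.3f} m (1-sigma {sigma:.3f} m)")
```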
19.
Sensors (Basel) ; 24(17)2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39275374

ABSTRACT

In recent years, the safety situation of high-speed railways has remained severe. Intrusions of personnel or obstacles into the railway perimeter have occurred repeatedly, causing derailments or emergency stops, especially under adverse weather such as fog, haze, or rain. According to previous research, it is difficult for a single sensor to meet the application needs of all scenarios, all weather conditions, and all time domains. Due to the complementary advantages of multi-sensor data such as images and point clouds, multi-sensor fusion detection technology for high-speed railway perimeter intrusion is becoming a research hotspot. To the best of our knowledge, there has been no review of research on multi-sensor fusion detection technology for high-speed railway perimeter intrusion. To make up for this deficiency and stimulate future research, this article first analyzes the state of high-speed railway technical defense measures and summarizes the research status of single-sensor detection. Secondly, based on an analysis of typical intrusion scenarios in high-speed railways, we introduce the research status of multi-sensor data fusion detection algorithms and data. Then, we discuss risk assessment of railway safety. Finally, the trends and challenges of multi-sensor fusion detection algorithms in the railway field are discussed. This provides effective theoretical support and technical guidance for high-speed rail perimeter intrusion monitoring.

20.
Sensors (Basel) ; 24(13)2024 Jun 21.
Article in English | MEDLINE | ID: mdl-39000829

ABSTRACT

This paper presents a new deep-learning architecture designed to enhance the spatial synchronization between CMOS and event cameras by harnessing their complementary characteristics. While CMOS cameras produce high-quality imagery, they struggle in rapidly changing environments-a limitation that event cameras overcome due to their superior temporal resolution and motion clarity. However, effective integration of these two technologies relies on achieving precise spatial alignment, a challenge unaddressed by current algorithms. Our architecture leverages a dynamic graph convolutional neural network (DGCNN) to process event data directly, improving synchronization accuracy. We found that synchronization precision strongly correlates with the spatial concentration and density of events, with denser distributions yielding better alignment results. Our empirical results demonstrate that areas with denser event clusters enhance calibration accuracy, with calibration errors increasing in more uniformly distributed event scenarios. This research pioneers scene-based synchronization between CMOS and event cameras, paving the way for advancements in mixed-modality visual systems. The implications are significant for applications requiring detailed visual and temporal information, setting new directions for the future of visual perception technologies.
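Given the reported finding that denser event clusters yield better alignment, a practical pre-check is to score a candidate window's event concentration before using it for synchronization. The grid-based score below is an illustrative heuristic, not the paper's method.

```python
import numpy as np

def event_density_score(events_xy, sensor_hw=(480, 640), cell=16):
    """Fraction of event mass in the busiest cells of a coarse grid."""
    h, w = sensor_hw
    hist, _, _ = np.histogram2d(events_xy[:, 1], events_xy[:, 0],
                                bins=(h // cell, w // cell),
                                range=[[0, h], [0, w]])
    top = np.sort(hist.ravel())[::-1][:10]   # ten densest cells
    return top.sum() / max(hist.sum(), 1.0)

# Windows scoring higher on this heuristic would be preferred for
# calibration, consistent with the density-accuracy correlation above.
```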
