Results 1 - 20 of 62
1.
Sensors (Basel). 2024 Jun 25;24(13).
Article in English | MEDLINE | ID: mdl-39000897

ABSTRACT

Effective security surveillance is crucial in the railway sector to prevent security incidents, including vandalism, trespassing, and sabotage. This paper discusses the challenges of maintaining seamless surveillance over extensive railway infrastructure, considering both technological advances and the growing risks posed by terrorist attacks. Building on previous research, it examines the limitations of current surveillance methods, particularly in managing the information overload and false alarms that result from integrating multiple sensor technologies. To address these issues, we propose a new fusion model that utilises Probabilistic Occupancy Maps (POMs) and Bayesian fusion techniques. The fusion model is evaluated on a comprehensive dataset comprising three use cases with a total of eight real-life critical scenarios. We show that, with this model, detection accuracy can be increased while false alarms are simultaneously reduced in railway security surveillance systems. In this way, our approach enhances situational awareness and improves the effectiveness of railway security measures.
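The abstract does not give implementation details, but the core idea of Bayesian fusion on a probabilistic occupancy map can be illustrated with a short sketch: per-cell occupancy is updated in log-odds form from each sensor's detection probability, assuming conditionally independent sensors. All names and values below are illustrative, not taken from the paper.

```python
import numpy as np

def logit(p):
    """Convert a probability to log-odds."""
    return np.log(p / (1.0 - p))

def fuse_occupancy(prior, sensor_probs):
    """Bayesian fusion of per-cell occupancy probabilities from several sensors.

    prior        -- 2D array of prior occupancy probabilities (the POM)
    sensor_probs -- list of 2D arrays, one detection-probability map per sensor
    Assumes conditionally independent sensors; fusion is a sum in log-odds space.
    """
    log_odds = logit(prior)
    for p in sensor_probs:
        log_odds += logit(p) - logit(prior)   # add each sensor's evidence relative to the prior
    return 1.0 / (1.0 + np.exp(-log_odds))    # back to probabilities

# Toy example: two sensors observing a 3x3 area with a 0.5 prior everywhere.
prior = np.full((3, 3), 0.5)
camera = np.full((3, 3), 0.4); camera[1, 1] = 0.9   # camera sees something in the centre
radar = np.full((3, 3), 0.45); radar[1, 1] = 0.8    # radar agrees
fused = fuse_occupancy(prior, [camera, radar])
print(fused[1, 1])   # fused occupancy is higher than either single-sensor estimate
```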

2.
Sensors (Basel). 2024 Jun 16;24(12).
Article in English | MEDLINE | ID: mdl-38931679

ABSTRACT

In the domain of mobile robot navigation, conventional path-planning algorithms typically rely on predefined rules and prior map information, which exhibit significant limitations when confronting unknown, intricate environments. With the rapid evolution of artificial intelligence technology, deep reinforcement learning (DRL) algorithms have demonstrated considerable effectiveness across various application scenarios. In this investigation, we introduce a self-exploration and navigation approach based on a deep reinforcement learning framework, aimed at resolving the navigation challenges of mobile robots in unfamiliar environments. Firstly, we fuse data from the robot's onboard lidar sensors and camera and integrate odometer readings with target coordinates to establish the instantaneous state of the decision environment. Subsequently, a deep neural network processes these composite inputs to generate motion control strategies, which are then integrated into the local planning component of the robot's navigation stack. Finally, we employ an innovative heuristic function capable of synthesizing map information and global objectives to select the optimal local navigation points, thereby guiding the robot progressively toward its global target point. In practical experiments, our methodology demonstrates superior performance compared to similar navigation methods in complex, unknown environments devoid of predefined map information.
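The heuristic selection of local navigation points is described only at a high level; the sketch below shows one plausible scoring heuristic that trades progress toward the global goal against local occupancy. The weights and the occupancy representation are assumptions, not the paper's function.

```python
import math

def select_local_goal(candidates, robot_pose, global_goal, occupancy, w_goal=1.0, w_clear=0.5):
    """Pick the candidate point that best trades off progress toward the global
    goal against local clearance. A plausible heuristic, not the paper's exact one.

    candidates  -- list of (x, y) candidate local navigation points
    robot_pose  -- (x, y) current robot position
    global_goal -- (x, y) global target point
    occupancy   -- dict mapping (x, y) -> occupancy probability in [0, 1]
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    best, best_score = None, float("inf")
    for c in candidates:
        progress = dist(c, global_goal)              # lower is better
        clearance_penalty = occupancy.get(c, 0.5)    # prefer cells believed to be free
        score = w_goal * progress + w_clear * clearance_penalty
        if score < best_score:
            best, best_score = c, score
    return best

# Usage: choose among three candidate points around the robot.
cands = [(1, 0), (0, 1), (1, 1)]
occ = {(1, 0): 0.1, (0, 1): 0.8, (1, 1): 0.2}
print(select_local_goal(cands, (0, 0), (5, 5), occ))   # -> (1, 1)
```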

3.
Sensors (Basel). 2024 Sep 21;24(18).
Article in English | MEDLINE | ID: mdl-39338859

ABSTRACT

The point cloud is a common result of local measurement and is widely used because of its high measurement accuracy, high data density, and low environmental impact. However, since the point cloud data from a single measurement generally cover only a small spatial extent, the local point cloud must be accurately globalized in order to measure large components. In this paper, a method that uses an iGPS (indoor Global Positioning System) as an external measurement device to achieve high-accuracy globalization of local point cloud data is proposed. Two calibration models are also discussed for different application scenarios. Verification experiments show that the average calibration errors of these two calibration models are 0.12 mm and 0.17 mm, respectively. The proposed method maintains calibration precision over a large spatial range (about 10 m × 10 m × 5 m), which is of high value for engineering applications.
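Globalizing a local point cloud with an external reference amounts to estimating a rigid transform from points measured in both frames. The generic SVD-based (Kabsch) estimate below illustrates the idea; it is not the paper's two calibration models, and all data are synthetic.

```python
import numpy as np

def estimate_rigid_transform(local_pts, global_pts):
    """Least-squares rigid transform (R, t) mapping local_pts onto global_pts.

    Both arrays are Nx3 and row-aligned (point i in the local frame corresponds
    to point i measured by the external device, e.g. an iGPS).
    """
    mu_l, mu_g = local_pts.mean(axis=0), global_pts.mean(axis=0)
    H = (local_pts - mu_l).T @ (global_pts - mu_g)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                             # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_g - R @ mu_l
    return R, t

def globalize(point_cloud, R, t):
    """Apply the calibrated transform to a local point cloud (Nx3)."""
    return point_cloud @ R.T + t

# Usage with synthetic data: rotate/translate known points and recover the transform.
rng = np.random.default_rng(0)
local = rng.random((6, 3))
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
t_true = np.array([2.0, -1.0, 0.5])
global_ = local @ R_true.T + t_true
R, t = estimate_rigid_transform(local, global_)
print(np.allclose(globalize(local, R, t), global_))      # True
```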

4.
Sensors (Basel). 2023 Jan 08;23(2).
Article in English | MEDLINE | ID: mdl-36679512

ABSTRACT

Today, more and more Internet public media platforms that allow people to make donations or seek help are being founded in China. However, there are few specialized sports-related public welfare platforms. In this paper, a sports-related public welfare platform based on multi-sensor technology was developed to help people who were disabled through participation in sports and people with disabilities who want to participate in sports. A multi-sensor data fusion algorithm was developed, and its estimation performance was verified by comparing it with the existing Kalman consistent filtering algorithm in terms of average estimation and average consistency errors. Experimental results show that the speed of data collection and analysis on the sports-related public welfare platform was greatly improved by the algorithm established in this paper. Data on how users used the platform showed that various factors affected users' practical satisfaction with sports-related public welfare media platforms. It is suggested that such a platform should pay attention to the effectiveness of its aid, and that specific efforts should be devoted to improving the reliability and timeliness of public welfare aid information and to ensuring the stability of the platform system.


Subjects
Sports, Humans, Reproducibility of Results, Technology, Data Collection, Algorithms
5.
Sensors (Basel). 2023 May 14;23(10).
Article in English | MEDLINE | ID: mdl-37430664

ABSTRACT

Human activity recognition (HAR) is becoming increasingly important, especially with the growing number of elderly people living at home. However, most sensors, such as cameras, do not perform well in low-light environments. To address this issue, we designed a HAR system that combines a camera and a millimeter-wave radar, taking advantage of the strengths of each sensor and a fusion algorithm to distinguish between easily confused human activities and to improve accuracy in low-light settings. To extract the spatial and temporal features contained in the multisensor fusion data, we designed an improved CNN-LSTM model. In addition, three data fusion algorithms were investigated. Compared to camera data in low-light environments, the fusion data significantly improved the HAR accuracy by at least 26.68%, 19.87%, and 21.92% under the data-level, feature-level, and decision-level fusion algorithms, respectively. Moreover, the data-level fusion algorithm reduced the best misclassification rate to 2-6%. These findings suggest that the proposed system has the potential to enhance the accuracy of HAR in low-light environments and to decrease human activity misclassification rates.
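As a rough illustration of the decision-level variant, the sketch below fuses per-class softmax scores from a camera branch and a radar branch by weighted averaging; the weights, array shapes, and class set are assumptions rather than the paper's configuration.

```python
import numpy as np

def decision_level_fusion(camera_scores, radar_scores, w_camera=0.5, w_radar=0.5):
    """Fuse per-class softmax scores from two classifiers by weighted averaging.

    camera_scores, radar_scores -- arrays of shape (n_samples, n_classes)
    Returns the fused class index per sample.
    """
    fused = w_camera * camera_scores + w_radar * radar_scores
    return fused.argmax(axis=1)

# Toy example with 3 activity classes: the camera is unsure in low light,
# the radar disambiguates, and the fused decision follows the stronger evidence.
camera = np.array([[0.40, 0.35, 0.25]])
radar = np.array([[0.10, 0.70, 0.20]])
print(decision_level_fusion(camera, radar))   # -> [1]
```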


Assuntos
Algoritmos , Atividades Humanas , Idoso , Humanos , Radar , Reconhecimento Psicológico
6.
Sensors (Basel). 2023 Nov 01;23(21).
Article in English | MEDLINE | ID: mdl-37960585

ABSTRACT

This paper presents a leader-follower mobile robot control approach using onboard sensors. The follower robot is equipped with an Intel RealSense camera mounted on a rotating platform. Camera observations and ArUco markers are used to localize the robots with respect to each other and to the workspace. The rotating platform expands the perception range, so the robot can use observations that do not lie within the camera's field of view at the same time in the localization process. The decision-making process associated with the control of camera rotation is implemented using behavior trees. In addition, measurements from encoders and IMUs are used to improve the quality of localization. Data fusion is performed using an extended Kalman filter (EKF) and allows the robots' poses to be determined. A 3D-printed cuboidal tower with four ArUco markers on its sides is added to the leader robot. Fiducial landmarks are placed on vertical surfaces in the workspace to improve the localization process. Experiments were performed to verify the effectiveness of the presented control algorithm. The Robot Operating System (ROS) was installed on both robots.
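A minimal sketch of EKF-based pose fusion, assuming a planar differential-drive model with odometry-driven prediction and an absolute position fix from a marker observation; the state layout and noise settings are illustrative, not the paper's.

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Predict step for a planar pose x = [px, py, theta] driven by odometry (v, w)."""
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0, 1]])
    return x_pred, F @ P @ F.T + Q

def ekf_update_position(x, P, z, R):
    """Update with an absolute (px, py) fix, e.g. from an ArUco marker observation."""
    H = np.array([[1.0, 0, 0], [0, 1.0, 0]])
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(3) - K @ H) @ P

# One predict/update cycle with hypothetical noise settings.
x, P = np.zeros(3), np.eye(3) * 0.1
Q, R = np.eye(3) * 1e-3, np.eye(2) * 5e-3
x, P = ekf_predict(x, P, v=0.2, w=0.1, dt=0.1, Q=Q)
x, P = ekf_update_position(x, P, z=np.array([0.021, 0.001]), R=R)
print(x)
```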

7.
Sensors (Basel). 2023 Jan 08;23(2).
Article in English | MEDLINE | ID: mdl-36679519

ABSTRACT

A single sensor is prone to reduced recognition accuracy in complex environments, while existing multi-sensor evidence theory fusion methods do not comprehensively consider the impact of evidence conflict and fuzziness. In this paper, a new evidence weight combination and probability allocation method is proposed, which calculates the degree of evidence fuzziness through the maximum entropy principle and also considers the impact of evidence conflict on the fusion results. The two impact factors are combined to calculate a trusted discount and reallocate the probability function. Finally, Dempster's combination rule is used to fuse every piece of evidence. On this basis, experiments were first conducted to show that the existing weight combination methods produce results contrary to common sense when handling high-conflict, high-clarity evidence, and comparative experiments were then conducted to demonstrate the effectiveness of the proposed evidence weight combination and probability allocation method. Moreover, it was verified on the PAMAP2 dataset that the proposed method obtains higher fusion accuracy and more reliable fusion results across all kinds of behavior recognition. Compared with traditional methods and existing improved methods, the weight allocation method proposed in this paper dynamically adjusts the weights of fuzziness and conflict in the fusion process and improves the fusion accuracy by about 3.3% and 1.7%, respectively, overcoming the limitations of the existing weight combination methods.
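The paper's specific discount formula is not reproduced here, but the general pattern of reliability discounting followed by Dempster's rule can be sketched as follows; the frame of discernment, masses, and discount factors are placeholders.

```python
from itertools import product

FRAME = frozenset({"walk", "run", "sit"})

def discount(m, alpha):
    """Shafer discounting: scale masses by reliability alpha, move the rest to the frame."""
    out = {A: alpha * v for A, v in m.items()}
    out[FRAME] = out.get(FRAME, 0.0) + (1.0 - alpha)
    return out

def dempster(m1, m2):
    """Dempster's rule of combination for two mass functions over frozensets."""
    combined, conflict = {}, 0.0
    for (A, v1), (B, v2) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2
    norm = 1.0 - conflict
    return {A: v / norm for A, v in combined.items()}

# Two sensors with different reliabilities; discounting softens the less reliable one.
m_acc = {frozenset({"walk"}): 0.8, FRAME: 0.2}
m_gyro = {frozenset({"run"}): 0.6, FRAME: 0.4}
fused = dempster(discount(m_acc, 0.9), discount(m_gyro, 0.5))
print(max(fused, key=fused.get))   # -> frozenset({'walk'})
```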


Assuntos
Reconhecimento Psicológico , Confiança , Funções Verossimilhança , Entropia
8.
Sensors (Basel). 2023 Mar 08;23(6).
Article in English | MEDLINE | ID: mdl-36991658

ABSTRACT

Intelligent connected vehicles (ICVs) play an important role in improving the intelligence of transportation systems, and improving the trajectory prediction capability of ICVs benefits traffic efficiency and safety. In this paper, a real-time trajectory prediction method based on vehicle-to-everything (V2X) communication is proposed for ICVs to improve the accuracy of their trajectory prediction. Firstly, a Gaussian mixture probability hypothesis density (GM-PHD) model is applied to construct a multidimensional dataset of ICV states. Secondly, the vehicular microscopic data with more dimensions output by the GM-PHD model are adopted as the input of an LSTM to ensure the consistency of the prediction results. Then, a signal-light factor and the Q-learning algorithm are applied to improve the LSTM model, adding features in the spatial dimension to complement the temporal features used in the LSTM; compared with previous models, more consideration is given to the dynamic spatial environment. Finally, an intersection at Fushi Road in Shijingshan District, Beijing, was selected as the field test scenario. The final experimental results show that the GM-PHD model achieved an average error of 0.1181 m, a 44.05% reduction compared to the LiDAR-based model, while the error of the proposed model can reach 0.501 m. Compared to the social LSTM model, the prediction error was reduced by 29.43% under the average displacement error (ADE) metric. The proposed method can provide data support and an effective theoretical basis for decision systems to improve traffic safety.
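For reference, the average displacement error (ADE) used in the evaluation is typically computed as the mean Euclidean distance between predicted and ground-truth positions, as in this short sketch with toy trajectories.

```python
import numpy as np

def average_displacement_error(pred, truth):
    """ADE: mean Euclidean distance between predicted and ground-truth positions.

    pred, truth -- arrays of shape (n_steps, 2) holding (x, y) positions in metres.
    """
    return float(np.linalg.norm(pred - truth, axis=1).mean())

# Toy check: a constant 0.1 m offset in x gives an ADE of ~0.1 m.
truth = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
pred = truth + np.array([0.1, 0.0])
print(average_displacement_error(pred, truth))
```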

9.
Sensors (Basel). 2023 Dec 07;23(24).
Article in English | MEDLINE | ID: mdl-38139534

ABSTRACT

Indoor fires pose significant threats in terms of casualties and economic losses globally. Thus, it is vital to accurately detect indoor fires at an early stage. To improve the accuracy of indoor fire detection on resource-constrained embedded platforms, an indoor fire detection method based on multi-sensor fusion and a lightweight convolutional neural network (CNN) is proposed. Firstly, the Savitzky-Golay (SG) filter is used to clean the three types of heterogeneous sensor data; the cleaned sensor data are then transformed by the Gramian Angular Field (GAF) method into matrices, which are finally integrated into a three-dimensional matrix. This preprocessing stage preserves temporal dependencies and amplifies the characteristics of the time-series data, which allows the number of blocks, channels, and layers in the network to be reduced, leading to a lightweight CNN for indoor fire detection. Furthermore, we use the Fire Dynamics Simulator (FDS) to simulate data for the training stage, enhancing the robustness of the network. The fire detection performance of the proposed method was verified experimentally: it achieved an accuracy of 99.1% while keeping the number of CNN parameters and the amount of computation small, which makes it well suited to the resource-constrained embedded platform of an indoor fire detection system.
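A minimal sketch of the preprocessing chain described above, assuming three generic sensor channels: each channel is smoothed with a Savitzky-Golay filter and converted to a Gramian Angular Field image, and the three images are stacked into a three-channel matrix. The channel names, window length, and polynomial order are assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter

def gramian_angular_field(series):
    """Gramian Angular Summation Field of a 1D series.

    Rescales the series to [-1, 1], maps it to angles, and returns the matrix
    cos(phi_i + phi_j), which encodes temporal dependency in a 2D image.
    """
    lo, hi = series.min(), series.max()
    x = np.clip(2.0 * (series - lo) / (hi - lo) - 1.0, -1.0, 1.0)
    phi = np.arccos(x)
    return np.cos(phi[:, None] + phi[None, :])

def preprocess(temperature, smoke, co, window=11, polyorder=3):
    """Clean three sensor channels with a Savitzky-Golay filter and stack their
    GAF images into a three-channel matrix ready for a CNN."""
    channels = []
    for raw in (temperature, smoke, co):
        smoothed = savgol_filter(raw, window_length=window, polyorder=polyorder)
        channels.append(gramian_angular_field(smoothed))
    return np.stack(channels, axis=-1)                 # shape (T, T, 3)

# Usage with synthetic readings of length 64.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 64)
x = preprocess(25 + 5 * t + rng.normal(0, 0.2, 64),
               0.10 * t + rng.normal(0, 0.010, 64),
               0.05 * t + rng.normal(0, 0.005, 64))
print(x.shape)   # (64, 64, 3)
```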

10.
Sensors (Basel). 2022 Aug 07;22(15).
Article in English | MEDLINE | ID: mdl-35957462

ABSTRACT

Heterogeneous multi-sensor devices are essential components of information-aware systems. Because of the ambiguous and contradictory nature of multi-sensor data, a data-fusion method based on the cloud model and improved evidence theory is proposed. To complete the conversion from quantitative to qualitative data, the cloud model is employed to construct the basic probability assignment (BPA) function of the evidence corresponding to each data source. To address the issue that traditional evidence theory produces results that do not correspond to the facts when fusing conflicting evidence, the three measures of the Jousselme distance, cosine similarity, and the Jaccard coefficient are combined to measure the similarity of the evidence, and the Hellinger distance of the interval is used to calculate the credibility of the evidence. The similarity and credibility are combined to improve the evidence, and the fusion is performed according to Dempster's rule to finally obtain the results. Numerical examples show that the proposed improved evidence theory method has better convergence and focus, and the confidence in the correct proposition reaches up to 100%. Applying the proposed multi-sensor data-fusion method to early indoor fire detection improves the accuracy by 0.9-6.4% and reduces the false alarm rate by 0.7-10.2% compared with traditional and other improved evidence theories, proving its validity and feasibility and providing a useful reference for multi-sensor information fusion.
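As an illustration of one of the similarity measures mentioned above, the sketch below computes the Jousselme distance between two basic probability assignments; the focal elements and masses are toy values, not data from the paper.

```python
import numpy as np

def jousselme_distance(m1, m2):
    """Jousselme distance between two basic probability assignments.

    m1, m2 -- dicts mapping frozenset focal elements to masses.
    Uses d = sqrt(0.5 * (m1 - m2)^T D (m1 - m2)), with D[A, B] = |A & B| / |A | B|.
    """
    focal = sorted(set(m1) | set(m2), key=lambda s: (len(s), sorted(s)))
    v1 = np.array([m1.get(A, 0.0) for A in focal])
    v2 = np.array([m2.get(A, 0.0) for A in focal])
    D = np.array([[len(A & B) / len(A | B) for B in focal] for A in focal])
    diff = v1 - v2
    return float(np.sqrt(0.5 * diff @ D @ diff))

# Two pieces of similar evidence over the frame {a, b}: the distance is small.
m1 = {frozenset("a"): 0.7, frozenset("ab"): 0.3}
m2 = {frozenset("a"): 0.6, frozenset("ab"): 0.4}
print(jousselme_distance(m1, m2))   # ~0.07
```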

11.
Sensors (Basel). 2022 Oct 27;22(21).
Article in English | MEDLINE | ID: mdl-36365922

ABSTRACT

Ensemble learning systems (ELS) have been widely utilized for human activity recognition (HAR) with multiple homogeneous or heterogeneous sensors. However, traditional ensemble approaches for HAR cannot always work well due to insufficient accuracy and diversity of base classifiers, the absence of ensemble pruning, as well as the inefficiency of the fusion strategy. To overcome these problems, this paper proposes a novel selective ensemble approach with group decision-making (GDM) for decision-level fusion in HAR. As a result, the fusion process in the ELS is transformed into an abstract process that includes individual experts (base classifiers) making decisions with the GDM fusion strategy. Firstly, a set of diverse local base classifiers are constructed through the corresponding mechanism of the base classifier and the sensor. Secondly, the pruning methods and the number of selected base classifiers for the fusion phase are determined by considering the diversity among base classifiers and the accuracy of candidate classifiers. Two ensemble pruning methods are utilized: mixed diversity measure and complementarity measure. Thirdly, component decision information from the selected base classifiers is combined by using the GDM fusion strategy and the recognition results of the HAR approach can be obtained. Experimental results on two public activity recognition datasets (The OPPORTUNITY dataset; Daily and Sports Activity Dataset (DSAD)) suggest that the proposed GDM-based approach outperforms the well-known fusion techniques and other state-of-the-art approaches in the literature.


Assuntos
Algoritmos , Atividades Humanas , Humanos , Tomada de Decisões
12.
Sensors (Basel). 2022 Oct 09;22(19).
Article in English | MEDLINE | ID: mdl-36236747

ABSTRACT

This paper presents a method to include the unmodeled dynamics of a load or a robot's end-effector in algorithms for collision detection or general understanding of a robot's operation context. The approach relies on the application of a previously developed modification of the Dynamic Time Warping algorithm, as well as a universally applicable algorithm for identifying kinematic parameters. The entire process can be applied to an arbitrary robot configuration, and it does not require identification of dynamic parameters. The paper addresses the two main categories of contact tasks with unmodeled dynamics, which are distinguished by whether the external contact force has a consistent profile in the end-effector or the base coordinate frame. The conclusions for the representative examples analysed in the paper are applicable to tasks such as load manipulation, press bending, and crimping for the first type of force, and to applications such as drilling, screwdriving, snap-fit, bolting, and riveting assembly for the latter category. The results presented in the paper are based on realistic testing with measurements obtained from an industrial robot.
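The paper relies on a previously developed modification of Dynamic Time Warping; for orientation, the unmodified DTW distance between two 1D signals can be computed as in the baseline sketch below.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1D sequences.

    The paper uses a modified DTW; this is the unmodified baseline for reference.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# A reference force profile and a time-shifted execution still match closely,
# which is the property that makes DTW useful for comparing contact-task signals.
reference = np.sin(np.linspace(0, np.pi, 50))
shifted = np.sin(np.linspace(0, np.pi, 50) - 0.2).clip(min=0)
print(dtw_distance(reference, shifted))
```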


Assuntos
Robótica , Algoritmos , Fenômenos Biomecânicos , Fenômenos Mecânicos , Robótica/métodos
13.
Sensors (Basel). 2022 Apr 08;22(8).
Article in English | MEDLINE | ID: mdl-35458841

ABSTRACT

The pervasive use of sensors and actuators in the Industry 4.0 paradigm has changed the way we interact with industrial systems. In such a context, modern frameworks are not limited to system telemetry but also include the detection of potentially harmful conditions. However, when the number of signals generated by a system is large, it becomes challenging to properly correlate the information for an effective diagnosis. The combination of Artificial Intelligence and sensor data fusion techniques is a valid solution to this problem, implementing models capable of extracting information from a set of heterogeneous sources. On the other hand, the constrained resources of Edge devices, where these algorithms are usually executed, pose strict limitations in terms of memory occupation and model complexity. To overcome this problem, in this paper we propose an Echo State Network architecture which exploits sensor data fusion to detect faults on a scale-replica industrial plant. Thanks to their sparse weight structure, Echo State Networks are recurrent neural network models with low complexity and a small memory footprint, which makes them suitable for deployment on an Edge device. Through the analysis of vibration and current signals, the proposed model is able to correctly detect the majority of the faults occurring in the industrial plant. Experimental results demonstrate the feasibility of the proposed approach and present a comparison with other approaches, showing that our methodology offers the best trade-off in terms of precision, recall, F1-score, and inference time.
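A minimal Echo State Network sketch, assuming a two-channel input of fused vibration and current samples and a ridge-regression readout; the reservoir size, sparsity, spectral radius, and the synthetic fault labels are illustrative, not the paper's configuration.

```python
import numpy as np

class EchoStateNetwork:
    """Minimal ESN: a fixed sparse random reservoir with a ridge-regression readout."""

    def __init__(self, n_in, n_res=200, spectral_radius=0.9, sparsity=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        W[rng.random((n_res, n_res)) > sparsity] = 0.0            # keep ~10% of weights
        W *= spectral_radius / np.abs(np.linalg.eigvals(W)).max() # scale spectral radius
        self.W, self.n_res = W, n_res
        self.W_out = None

    def _states(self, U):
        x, states = np.zeros(self.n_res), []
        for u in U:                                               # tanh reservoir update
            x = np.tanh(self.W_in @ u + self.W @ x)
            states.append(x.copy())
        return np.array(states)

    def fit(self, U, y, ridge=1e-3):
        X = self._states(U)
        self.W_out = np.linalg.solve(X.T @ X + ridge * np.eye(self.n_res), X.T @ y)

    def predict(self, U):
        return self._states(U) @ self.W_out

# Usage: learn to flag a simulated fault from two fused signal channels.
rng = np.random.default_rng(1)
T = 500
signals = np.c_[np.sin(np.linspace(0, 50, T)), rng.normal(0, 0.1, T)]   # vibration, current
labels = (np.arange(T) > 300).astype(float)       # hypothetical fault after sample 300
esn = EchoStateNetwork(n_in=2)
esn.fit(signals, labels)
print((esn.predict(signals) > 0.5).astype(int)[295:305])
```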


Assuntos
Inteligência Artificial , Redes Neurais de Computação , Algoritmos , Indústrias
14.
Sensors (Basel). 2022 Apr 14;22(8).
Article in English | MEDLINE | ID: mdl-35459007

ABSTRACT

The objective of smart cities is to improve the quality of life for citizens by using Information and Communication Technology (ICT). A smart IoT environment consists of multiple sensor devices that continuously produce large amounts of data. In an IoT system, accurate inference from multi-sensor data is imperative for making correct decisions, but sensor data are often imprecise, resulting in low-quality inference results and wrong decisions. Correspondingly, single-context data are insufficient for making an accurate decision. In this paper, a novel compound context-aware scheme based on Bayesian inference is proposed to achieve accurate fusion and inference from sensory data. In the proposed scheme, multi-sensor data are fused based on the relations and contexts of the sensor data, i.e., whether or not they are dependent on each other. Extensive computer simulations show that the proposed technique significantly improves the inference accuracy compared with two other representative Bayesian inference techniques.


Assuntos
Comunicação , Qualidade de Vida , Teorema de Bayes , Cidades , Simulação por Computador
15.
Sensors (Basel). 2022 Dec 23;23(1).
Article in English | MEDLINE | ID: mdl-36616729

ABSTRACT

The rapid development of microsystems technology, together with the availability of various machine learning algorithms, facilitates human activity recognition (HAR) and localization by low-cost and low-complexity systems in a variety of applications related to Industry 4.0, healthcare, ambient assisted living, and tracking and navigation tasks. Previous work, which provided a spatiotemporal framework for HAR by fusing sensor data generated from an inertial measurement unit (IMU) with data obtained by an RGB photodiode for visible light sensing (VLS), already demonstrated promising results for real-time HAR and room identification. Based on these results, we extended the system with time- and frequency-domain feature extraction methods to considerably improve the correct determination of common human activities in industrial scenarios in combination with room localization. This increases the correct detection of activities to over 90% accuracy. Furthermore, it is demonstrated that this solution is applicable to real-world operating conditions in ambient light.
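A small sketch of typical time- and frequency-domain feature extraction for one sensor channel window; the feature set shown here (mean, standard deviation, RMS, dominant frequency, spectral energy) is a common choice for HAR and may differ from the paper's exact set.

```python
import numpy as np

def extract_features(window, fs=50.0):
    """Time- and frequency-domain features for one sensor channel window.

    window -- 1D array of samples; fs -- sampling rate in Hz.
    """
    mean, std = window.mean(), window.std()
    rms = np.sqrt(np.mean(window ** 2))
    spectrum = np.abs(np.fft.rfft(window - mean))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    dominant_freq = freqs[spectrum.argmax()]
    spectral_energy = float(np.sum(spectrum ** 2) / len(window))
    return np.array([mean, std, rms, dominant_freq, spectral_energy])

# 2-second window of a 2 Hz oscillation sampled at 50 Hz: dominant frequency ~2 Hz.
t = np.arange(0, 2, 1 / 50.0)
acc = 9.81 + 0.5 * np.sin(2 * np.pi * 2 * t)
print(extract_features(acc))
```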


Assuntos
Inteligência Ambiental , Atividades Humanas , Humanos , Algoritmos , Aprendizado de Máquina
16.
Sensors (Basel). 2022 Sep 02;22(17).
Article in English | MEDLINE | ID: mdl-36081107

ABSTRACT

The recognition of warheads in the target cloud of the ballistic midcourse phase remains a challenging issue for missile defense systems. Considering factors such as the differing dimensions of the features between sensors and the different recognition credibility of each sensor, this paper proposes a weighted decision-level fusion architecture to take advantage of data from multiple radar sensors, and an online feature reliability evaluation method is also used to comprehensively generate sensor weight coefficients. The weighted decision-level fusion method can overcome the deficiency of a single sensor and enhance the recognition rate for warheads in the midcourse phase by considering the changes in the reliability of the sensor's performance caused by the influence of the environment, location, and other factors during observation. Based on the simulation dataset, the experiment was carried out with multiple sensors and multiple bandwidths, and the results showed that the proposed model could work well with various classifiers involving traditional learning algorithms and ensemble learning algorithms.


Assuntos
Algoritmos , Simulação por Computador , Reprodutibilidade dos Testes
17.
Sensors (Basel). 2022 Mar 10;22(6).
Article in English | MEDLINE | ID: mdl-35336348

ABSTRACT

Due to the need to monitor many ageing and complex structures, structural health monitoring (SHM) has become increasingly common over the past few decades. However, one of the main limitations to the implementation of continuous monitoring systems in real-world structures is the effect that benign influences, such as environmental and operational variations (EOVs), have on damage-sensitive features. These fluctuations may mask malign changes caused by structural damage, resulting in false structural condition assessments. When damage identification is implemented as novelty detection, due to the lack of known damage states, outliers may be part of the dataset as a result of both the benign and malign factors mentioned above. Drawing on developments in the field of robust outlier detection, this paper presents a new data fusion method based on cointegration and the minimum covariance determinant (MCD) estimator, which allows outliers in SHM data to be visualized and classified according to their origin. To validate the effectiveness of this technique, the recent case study of the KW51 bridge is considered, whose natural frequencies are subject to variations due to both EOVs and a real structural change.
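A brief sketch of the robust outlier-flagging step, assuming the cointegration residuals are already available as a feature matrix; it uses scikit-learn's MinCovDet with a chi-squared threshold, which is a common but assumed choice rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.covariance import MinCovDet
from scipy.stats import chi2

def mcd_outlier_flags(features, quantile=0.975, random_state=0):
    """Flag multivariate outliers with the minimum covariance determinant estimator.

    features -- array of shape (n_samples, n_features), e.g. cointegration residuals
                of natural-frequency time series.
    Returns a boolean mask (True = outlier) from robust squared Mahalanobis distances
    compared against a chi-squared quantile.
    """
    mcd = MinCovDet(random_state=random_state).fit(features)
    d2 = mcd.mahalanobis(features)                      # squared robust distances
    threshold = chi2.ppf(quantile, df=features.shape[1])
    return d2 > threshold

# Synthetic example: healthy data plus a few shifted samples that mimic damage.
rng = np.random.default_rng(0)
healthy = rng.normal(0, 1, size=(200, 2))
damaged = rng.normal(4, 1, size=(5, 2))
flags = mcd_outlier_flags(np.vstack([healthy, damaged]))
print(flags[-5:])   # the shifted samples are flagged
```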

18.
Sensors (Basel). 2022 Jun 23;22(13).
Article in English | MEDLINE | ID: mdl-35808239

ABSTRACT

We present a workflow for seamless real-time navigation and 3D thermal mapping in combined indoor and outdoor environments in a global reference frame. The automated workflow and its partly real-time capabilities are of special interest for inspection tasks and other time-critical applications. We use a hand-held integrated positioning system (IPS), a real-time-capable visual-aided inertial navigation technology, and augment it with an additional passive thermal infrared camera and global referencing capabilities. The global reference is realized through surveyed optical markers (AprilTags). By fusing the stereo camera data with the thermal images, the resulting georeferenced 3D point cloud is enriched with thermal intensity values. A challenging calibration approach is used to geometrically calibrate and pixel-co-register the trifocal camera system. By fusing the terrestrial dataset with additional geographic information from an unmanned aerial vehicle, we obtain a complete building-hull point cloud and automatically reconstruct a semantic 3D model. A single-family house with surroundings in the village of Morschenich near the city of Jülich (German federal state of North Rhine-Westphalia) was used as a test site to demonstrate our workflow. The presented work is a step towards automated building information modeling.


Assuntos
Imageamento Tridimensional , Semântica , Calibragem , Termografia , Visão Ocular
19.
Sensors (Basel). 2021 Oct 24;21(21).
Article in English | MEDLINE | ID: mdl-34770350

ABSTRACT

In the field of underwater vision, image matching between the two main sensors (sonar and optical camera) has always been a challenging problem. The independent imaging mechanisms of the two sensors determine the modality of each image, and the local features of images under different modalities differ significantly, which makes general matching methods based on optical images invalid. In order to make full use of underwater acoustic and optical images and to promote the development of multisensor information fusion (MSIF) technology, this letter proposes applying an image attribute transfer algorithm and an advanced local feature descriptor to solve the problem of underwater acousto-optic image matching. We use real and simulated underwater images for testing; the experimental results show that the proposed method can effectively preprocess these multimodal images to obtain accurate matching results, thus providing a new solution for the underwater multisensor image matching task.


Assuntos
Algoritmos , Processamento de Imagem Assistida por Computador , Acústica , Som , Visão Ocular
20.
Sensors (Basel). 2021 Jun 15;21(12).
Article in English | MEDLINE | ID: mdl-34203831

ABSTRACT

Technology has been driving a great transformation in farming. The introduction of robotics, the use of sensors in the field, and advances in computer vision allow new systems to be developed to assist processes such as phenotyping and crop life-cycle monitoring. This work presents what we believe to be the first system capable of generating 3D models of non-rigid corn plants, which can be used as a tool in the phenotyping process. The system is composed of two modules: a terrestrial acquisition module and a processing module. The terrestrial acquisition module consists of a robot, equipped with an RGB-D camera and three sets of temperature, humidity, and luminosity sensors, that collects data in the field. The processing module performs the non-rigid 3D plant reconstruction and merges the sensor data into these models. The work presented here also introduces a novel technique for background removal in depth images, as well as efficient techniques for processing these images and the sensor data. Experiments have shown that, from the models generated and the data collected, plant structural measurements can be performed accurately and the plant's environment can be mapped, allowing the plant's health to be evaluated and providing greater crop efficiency.
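The paper's background-removal technique for depth images is not detailed in the abstract; the naive depth-band mask below, shown purely as an assumed baseline, conveys the general idea of segmenting the plant from the RGB-D data.

```python
import numpy as np

def remove_background(depth, rgb, near=0.2, far=1.2):
    """Mask out pixels whose depth falls outside the plant's expected range.

    depth -- HxW depth image in metres; rgb -- HxWx3 colour image.
    The [near, far] band is an illustrative assumption; the paper describes its
    own background-removal technique for depth images.
    """
    mask = (depth > near) & (depth < far)
    segmented = rgb.copy()
    segmented[~mask] = 0          # zero out background pixels
    return segmented, mask

# Toy 2x2 example: only the pixel at 0.8 m is kept as foreground.
depth = np.array([[0.8, 2.5], [0.1, 3.0]])
rgb = np.full((2, 2, 3), 255, dtype=np.uint8)
seg, mask = remove_background(depth, rgb)
print(mask)
```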


Assuntos
Imageamento Tridimensional , Robótica , Agricultura , Fazendas , Zea mays