Results: 1 - 4 of 4
1.
Sensors (Basel) ; 24(3)2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38339496

ABSTRACT

Pedestrian tracking in surveillance videos is crucial and challenging for precise personnel management. Because a single camera covers only a limited area, practical applications must integrate multiple surveillance videos, making continuous cross-video pedestrian tracking essential. However, prevailing cross-video pedestrian matching methods rely mainly on appearance features, which leads to low matching accuracy and poor tracking robustness. To address these shortcomings, this paper presents a cross-video pedestrian tracking algorithm that incorporates spatial information: it combines the coordinate features of pedestrians in different videos with a linear weighting strategy focused on the overlapping field of view during tracking. Experimental results show that, compared to traditional methods, the proposed method improves the success rate of target pedestrian matching and enhances the robustness of continuous pedestrian tracking. This study provides a viable reference for pedestrian tracking and crowd management in video applications.
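The abstract describes weighting appearance features together with coordinate features. The paper's actual weighting scheme is not given here, so the following is only an illustrative sketch of one way such a linear combination could look; the function names, the `alpha` weight, and the `max_dist` cutoff are all assumptions, not the authors' method.

```python
import math

def match_score(appearance_sim, query_pos, cand_pos, alpha=0.6, max_dist=10.0):
    """Blend appearance similarity with spatial proximity.

    alpha weights the appearance term; (1 - alpha) weights the spatial term.
    Positions are (x, y) coordinates in a shared ground-plane frame.
    """
    dist = math.hypot(query_pos[0] - cand_pos[0], query_pos[1] - cand_pos[1])
    # 1.0 when co-located, falling linearly to 0.0 at max_dist and beyond
    spatial_sim = max(0.0, 1.0 - dist / max_dist)
    return alpha * appearance_sim + (1 - alpha) * spatial_sim

def best_match(query_pos, candidates, alpha=0.6):
    """candidates: list of (track_id, appearance_sim, (x, y)) tuples.

    Returns the track_id with the highest combined score.
    """
    return max(candidates, key=lambda c: match_score(c[1], query_pos, c[2], alpha))[0]
```

Under this sketch, a candidate with slightly weaker appearance similarity but a position consistent with the query can still win, which is the intuition behind adding spatial information to appearance-only matching.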

2.
Sensors (Basel) ; 19(3)2019 Feb 01.
Article in English | MEDLINE | ID: mdl-30717199

ABSTRACT

In indoor environments, pedestrian activities carry semantic information and can serve as landmarks for indoor localization. In this paper, we propose a pedestrian activity recognition method based on a convolutional neural network, designed to learn appropriate features automatically. Experiments show that the proposed method identifies nine types of activities (still, walking, upstairs, up elevator, up escalator, down elevator, down escalator, downstairs, and turning) with approximately 98% accuracy in about 2 s. Moreover, we have built a pedestrian activity database containing more than 6 GB of accelerometer, magnetometer, gyroscope, and barometer data collected with various types of smartphones, which we will make public to support academic research.
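Recognition in about 2 s implies the sensor stream is segmented into fixed-length windows before classification. The paper's exact preprocessing is not shown here; the sketch below only illustrates the common sliding-window step, assuming a hypothetical 50 Hz sampling rate so that 100 samples span 2 s.

```python
def sliding_windows(samples, window_len=100, step=50):
    """Split a sensor sample stream into fixed-length, overlapping windows.

    Assuming a 50 Hz sampling rate, window_len=100 covers 2 s of data and
    step=50 gives 50% overlap between consecutive windows. Each window would
    then be fed to the classifier (e.g., a CNN) as one input segment.
    """
    return [samples[i:i + window_len]
            for i in range(0, len(samples) - window_len + 1, step)]
```

Overlapping windows trade extra computation for lower latency: a new classification result becomes available every `step` samples rather than every `window_len` samples.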

3.
Sensors (Basel) ; 17(4)2017 Apr 14.
Article in English | MEDLINE | ID: mdl-28420108

ABSTRACT

WiFi fingerprinting-based indoor localization is widely used because it is simple and can be implemented on smartphones. Its major drawback is that constructing the radio map is labor-intensive and time-consuming. Another drawback is the Received Signal Strength (RSS) variance problem, caused by environmental changes and device diversity, which severely degrades localization accuracy. In this paper, we propose a robust crowdsourcing-based indoor localization system (RCILS). RCILS automatically constructs the radio map from crowdsourced data collected by smartphones. It abstracts the indoor map as a semantic graph in which the edges are possible user paths and the vertices are locations where users may perform specific activities. RCILS extracts the activity sequence contained in trajectories through activity detection and pedestrian dead-reckoning. Based on the semantic graph and the activity sequence, crowdsourced trajectories can be located, and a radio map is constructed from the localization results. To handle the RSS variance problem, RCILS uses a trajectory fingerprint model for indoor localization. During online localization, RCILS obtains an RSS sequence and localizes the user by matching that sequence against the radio map. To evaluate RCILS, we deployed it in an office building; experimental results demonstrate the efficiency and robustness of the system.
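The online phase matches an observed RSS sequence against the radio map. RCILS's actual trajectory fingerprint model is not detailed in this abstract, so the snippet below is only a minimal sketch of sequence matching by mean absolute RSS difference; the distance metric and the flat `{location_id: sequence}` map layout are assumptions for illustration.

```python
def rss_distance(observed, fingerprint):
    """Mean absolute RSS difference (in dB) between two equal-length sequences."""
    return sum(abs(a - b) for a, b in zip(observed, fingerprint)) / len(observed)

def locate(observed, radio_map):
    """radio_map: {location_id: reference RSS sequence}.

    Returns the location whose stored fingerprint is closest to the
    observed RSS sequence.
    """
    return min(radio_map, key=lambda loc: rss_distance(observed, radio_map[loc]))
```

Matching a whole sequence rather than a single RSS sample is one way to dampen per-sample variance from device diversity, since constant per-device offsets average out less destructively over a trajectory than at a single point.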

4.
PLoS One ; 10(6): e0128545, 2015.
Article in English | MEDLINE | ID: mdl-26030908

ABSTRACT

How should the efficiency of searching for real objects in real scenes be measured? Traditionally, when searching for artificial targets, e.g., letters or rectangles, among distractors, efficiency is measured by a reaction time (RT) × Set Size function. However, it is not clear whether the set size of real scenes is as effective a parameter for measuring search efficiency as the set size of artificial scenes. The present study investigated search efficiency in real scenes based on a combination of low-level features, e.g., visible size and target-flanker separation, and high-level features, e.g., category effect and target template. Visible size refers to the number of pixels of the visible parts of an object in a scene, whereas separation is defined as the sum of the flank distances from a target to its nearest distractors. During the experiment, observers searched for targets in various urban scenes, using pictures as the target templates. The results indicated that the effect of set size in real scenes decreased as other factors, e.g., visible size and separation, varied; increasing visible size and separation increased search efficiency. Based on these results, an RT × Visible Size × Separation function was proposed. These results suggest that the proposed function is a practicable predictor of search efficiency in real scenes.
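The proposed RT × Visible Size × Separation function is named but not specified in this abstract. Purely to illustrate its shape, the sketch below assumes a linear form in which RT decreases as visible size and separation grow; the function name, baseline, and coefficient values are all hypothetical and are not the authors' fitted model.

```python
def predict_rt(visible_size, separation,
               base_rt=2000.0, k_size=-0.5, k_sep=-10.0):
    """Illustrative linear RT predictor (all coefficients hypothetical).

    visible_size: visible pixel count of the target in the scene.
    separation:   summed flank distance from the target to its nearest
                  distractors. Negative coefficients encode the reported
                  finding that larger visible size and separation make
                  search more efficient (i.e., faster).
    """
    return base_rt + k_size * visible_size + k_sep * separation
```

In practice the coefficients would be estimated by regressing observed RTs on visible size and separation across trials; the point of the sketch is only that both predictors enter with a facilitatory (RT-reducing) sign.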


Subject(s)
Eye Movements/physiology, Pattern Recognition, Visual/physiology, Visual Perception/physiology, Adult, Attention/physiology, Cues, Female, Humans, Male, Photic Stimulation/methods, Reaction Time/physiology, Young Adult