Results 1 - 20 of 92
1.
Sensors (Basel); 24(17), 2024 Aug 24.
Article in English | MEDLINE | ID: mdl-39275411

ABSTRACT

Gait recognition based on gait silhouette profiles is currently a major approach in the field of gait recognition. In previous studies, models typically used gait silhouette images sized at 64 × 64 pixels as input data. However, in practical applications, cases may arise where silhouette images are smaller than 64 × 64, leading to a loss in detail information and significantly affecting model accuracy. To address these challenges, we propose a gait recognition system named Multi-scale Feature Cross-Fusion Gait (MFCF-Gait). At the input stage of the model, we employ super-resolution algorithms to preprocess the data. During this process, we observed that different super-resolution algorithms applied to larger silhouette images also affect training outcomes. Improved super-resolution algorithms contribute to enhancing model performance. In terms of model architecture, we introduce a multi-scale feature cross-fusion network model. By integrating low-level feature information from higher-resolution images with high-level feature information from lower-resolution images, the model emphasizes smaller-scale details, thereby improving recognition accuracy for smaller silhouette images. The experimental results on the CASIA-B dataset demonstrate significant improvements. On 64 × 64 silhouette images, the accuracies for NM, BG, and CL states reached 96.49%, 91.42%, and 78.24%, respectively. On 32 × 32 silhouette images, the accuracies were 94.23%, 87.68%, and 71.57%, respectively, showing notable enhancements.


Subject(s)
Algorithms , Gait , Gait/physiology , Humans , Image Processing, Computer-Assisted/methods , Pattern Recognition, Automated/methods
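
As a minimal sketch of the preprocessing step described above, the snippet below upsamples an undersized silhouette to the 64 × 64 model input; plain bicubic interpolation stands in for the unspecified super-resolution algorithm, and the function name is illustrative.

```python
import numpy as np
from PIL import Image

def upsample_silhouette(silhouette: np.ndarray, target: int = 64) -> np.ndarray:
    """Resize a binary gait silhouette to target x target pixels.

    Bicubic interpolation is used here only as a placeholder; the paper
    reports that stronger super-resolution models further improve accuracy.
    """
    img = Image.fromarray((silhouette * 255).astype(np.uint8))
    img = img.resize((target, target), Image.BICUBIC)
    return (np.asarray(img) > 127).astype(np.float32)  # re-binarize

# Example: a 32 x 32 silhouette brought up to the 64 x 64 model input.
small = (np.random.rand(32, 32) > 0.5).astype(np.float32)
print(upsample_silhouette(small).shape)  # (64, 64)
```
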
2.
Sensors (Basel); 24(17), 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39275485

ABSTRACT

As people age, abnormal gait recognition becomes a critical problem in the field of healthcare. Currently, some algorithms can classify gaits with different pathologies, but they cannot guarantee high accuracy while keeping the model lightweight. To address these issues, this paper proposes a lightweight network (NSVGT-ICBAM-FACN) based on the new side-view gait template (NSVGT), an improved convolutional block attention module (ICBAM), and transfer learning, which fuses convolutional features containing high-level information and attention features containing semantic information of interest to achieve robust pathological gait recognition. The NSVGT contains different levels of information, such as gait shape, gait dynamics, and the energy distribution at different parts of the body, which integrates and compensates for the strengths and limitations of each feature, making gait characterization more robust. The ICBAM employs parallel concatenation and depthwise separable convolution (DSC). The former strengthens the interaction between features; the latter improves the efficiency of processing gait information. In the classification head, we employ DSC instead of global average pooling. This method preserves spatial information and learns the weights of different locations, which solves the problem that the corner points and center points in the feature map have the same weight. The classification accuracies of this paper's model on the self-constructed dataset and the GAIT-IST dataset are 98.43% and 98.69%, which are 0.77% and 0.59% higher than those of the SOTA models, respectively. The experiments demonstrate that the method achieves a good balance between being lightweight and performing well.


Subject(s)
Gait , Thinness , Gait/physiology , Thinness/physiopathology , Attention , Cell Phone , Mobile Applications , Machine Learning , Datasets as Topic
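
As a rough illustration of the classification-head idea described above (depthwise separable convolution in place of global average pooling, so spatial positions receive learned weights), a hedged PyTorch sketch follows; channel counts and feature-map size are assumptions, not the published configuration.

```python
import torch
import torch.nn as nn

class DSCHead(nn.Module):
    """Illustrative classification head: depthwise separable convolution over
    the final feature map instead of global average pooling, so each spatial
    location gets its own learned weight (feature map size is assumed 7x7)."""
    def __init__(self, channels: int = 256, num_classes: int = 6, fmap: int = 7):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=fmap,
                                   groups=channels)         # per-channel spatial weights
        self.pointwise = nn.Conv2d(channels, num_classes, kernel_size=1)

    def forward(self, x):                                    # x: (N, C, fmap, fmap)
        return self.pointwise(self.depthwise(x)).flatten(1)  # (N, num_classes)

logits = DSCHead()(torch.randn(2, 256, 7, 7))
print(logits.shape)  # torch.Size([2, 6])
```
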
3.
Sensors (Basel); 24(11), 2024 May 23.
Article in English | MEDLINE | ID: mdl-38894144

ABSTRACT

Gait, a manifestation of one's walking pattern, intricately reflects the harmonious interplay of various bodily systems, offering valuable insights into an individual's health status. However, existing studies have shortcomings in the extraction of temporal and spatial dependencies in joint motion, resulting in inefficiencies in pathological gait classification. In this paper, we propose a Frequency Pyramid Graph Convolutional Network (FP-GCN), which complements temporal analysis and further enhances spatial feature extraction. Specifically, a spectral decomposition component is adopted to extract gait data over different time frames, which can enhance the detection of rhythmic patterns and velocity variations in human gait and allow a detailed analysis of the temporal features. Furthermore, a novel pyramidal feature extraction approach is developed to analyze the inter-sensor dependencies, which can integrate features from different pathways, enhancing both temporal and spatial feature extraction. Our experimentation on diverse datasets demonstrates the effectiveness of our approach. Notably, FP-GCN achieves an impressive accuracy of 98.78% on public datasets and 96.54% on proprietary data, surpassing existing methodologies and underscoring its potential for advancing pathological gait classification. In summary, our FP-GCN advances feature extraction and pathological gait recognition, which may offer potential advancements in healthcare provision, especially in regions with limited access to medical resources and in home-care environments. This work lays the foundation for further exploration and underscores the importance of remote health monitoring, diagnosis, and personalized interventions.


Subject(s)
Gait , Neural Networks, Computer , Humans , Gait/physiology , Algorithms , Walking/physiology
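
The spectral decomposition component is not specified in detail; the sketch below merely band-limits joint trajectories into a few frequency bands with Butterworth filters to suggest how a frequency pyramid of gait signals might be formed. The sampling rate and band edges are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def frequency_pyramid(joints: np.ndarray, fs: float = 100.0,
                      bands=((0.1, 3.0), (3.0, 8.0), (8.0, 20.0))):
    """Split joint trajectories (T, J, 3) into illustrative frequency bands.

    Each band highlights gait rhythms at a different time scale, loosely
    mirroring the spectral decomposition component described for FP-GCN.
    """
    levels = []
    for lo, hi in bands:
        b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
        levels.append(filtfilt(b, a, joints, axis=0))
    return levels  # list of (T, J, 3) arrays, one per band

pyramid = frequency_pyramid(np.random.randn(300, 17, 3))
print(len(pyramid), pyramid[0].shape)  # 3 (300, 17, 3)
```
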
4.
Sensors (Basel); 23(8), 2023 Apr 07.
Article in English | MEDLINE | ID: mdl-37112147

ABSTRACT

Gait recognition, the task of identifying an individual based on their unique walking style, can be difficult because walking styles can be influenced by external factors such as clothing, viewing angle, and carrying conditions. To address these challenges, this paper proposes a multi-model gait recognition system that integrates Convolutional Neural Networks (CNNs) and Vision Transformer. The first step in the process is to obtain a gait energy image, which is achieved by applying an averaging technique to a gait cycle. The gait energy image is then fed into three different models, DenseNet-201, VGG-16, and a Vision Transformer. These models are pre-trained and fine-tuned to encode the salient gait features that are specific to an individual's walking style. Each model provides prediction scores for the classes based on the encoded features, and these scores are then summed and averaged to produce the final class label. The performance of this multi-model gait recognition system was evaluated on three datasets, CASIA-B, OU-ISIR dataset D, and OU-ISIR Large Population dataset. The experimental results showed substantial improvement compared to existing methods on all three datasets. The integration of CNNs and ViT allows the system to learn both the pre-defined and distinct features, providing a robust solution for gait recognition even under the influence of covariates.


Subject(s)
Machine Learning , Neural Networks, Computer , Gait , Learning , Models, Biological
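
Two steps in this abstract are concrete enough to sketch: averaging the silhouettes of one gait cycle into a gait energy image, and sum-and-averaging the per-class scores of the three models. The snippet below illustrates both with placeholder data; it is not the authors' implementation.

```python
import numpy as np

def gait_energy_image(cycle: np.ndarray) -> np.ndarray:
    """Average the aligned silhouettes of one gait cycle (T, H, W) -> (H, W)."""
    return cycle.mean(axis=0)

def fuse_scores(score_lists: list[np.ndarray]) -> int:
    """Sum-and-average the class scores from several models and pick the class.

    `score_lists` holds one (num_classes,) probability vector per model,
    e.g. from DenseNet-201, VGG-16 and a Vision Transformer.
    """
    fused = np.mean(score_lists, axis=0)
    return int(np.argmax(fused))

gei = gait_energy_image(np.random.rand(30, 64, 64))
pred = fuse_scores([np.array([0.10, 0.70, 0.20]),
                    np.array([0.20, 0.50, 0.30]),
                    np.array([0.15, 0.60, 0.25])])
print(gei.shape, pred)  # (64, 64) 1
```
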
5.
Sensors (Basel); 23(10), 2023 May 22.
Article in English | MEDLINE | ID: mdl-37430892

ABSTRACT

Parkinson's disease (PD) is a neurodegenerative disorder that causes gait abnormalities. Early and accurate recognition of PD gait is crucial for effective treatment. Recently, deep learning techniques have shown promising results in PD gait analysis. However, most existing methods focus on severity estimation and frozen gait detection, while the recognition of Parkinsonian gait and normal gait from the forward video has not been reported. In this paper, we propose a novel spatiotemporal modeling method for PD gait recognition, named WM-STGCN, which utilizes a Weighted adjacency matrix with virtual connection and Multi-scale temporal convolution in a Spatiotemporal Graph Convolution Network. The weighted matrix enables different intensities to be assigned to different spatial features, including virtual connections, while the multi-scale temporal convolution helps to effectively capture the temporal features at different scales. Moreover, we employ various approaches to augment skeleton data. Experimental results show that our proposed method achieved the best accuracy of 87.1% and an F1 score of 92.85%, outperforming Long short-term memory (LSTM), K-nearest neighbors (KNN), Decision tree, AdaBoost, and ST-GCN models. Our proposed WM-STGCN provides an effective spatiotemporal modeling method for PD gait recognition that outperforms existing methods. It has the potential for clinical application in PD diagnosis and treatment.


Subject(s)
Gait , Parkinson Disease , Humans , Parkinson Disease/diagnosis , Gait Analysis , Cluster Analysis , Memory, Long-Term
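
The weighted adjacency matrix with virtual connections can be sketched generically as follows; the joint indices, virtual pairs, and weights are illustrative assumptions rather than the paper's actual values.

```python
import numpy as np

def weighted_adjacency(num_joints: int, bones, virtual, w_bone=1.0, w_virtual=0.5):
    """Build a symmetric weighted adjacency matrix for an ST-GCN-style model.

    `bones` are physical skeleton edges, `virtual` are extra long-range
    connections (e.g. hand-to-foot) given a smaller illustrative weight.
    """
    A = np.eye(num_joints)                      # self-connections
    for i, j in bones:
        A[i, j] = A[j, i] = w_bone
    for i, j in virtual:
        A[i, j] = A[j, i] = w_virtual
    deg = A.sum(axis=1)
    return A / np.sqrt(np.outer(deg, deg))      # symmetric normalization

A = weighted_adjacency(5, bones=[(0, 1), (1, 2), (2, 3)], virtual=[(0, 4)])
print(A.shape)  # (5, 5)
```
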
6.
Sensors (Basel); 23(2), 2023 Jan 05.
Article in English | MEDLINE | ID: mdl-36679401

ABSTRACT

Personal identification based on radar gait measurement is an important application of biometric technology because it enables remote and continuous identification of people, irrespective of the lighting conditions and subjects' outfits. This study explores an effective time-velocity distribution and its relevant parameters for Doppler-radar-based personal gait identification using deep learning. Most conventional studies on radar-based gait identification used a short-time Fourier transform (STFT), which is a general method to obtain time-velocity distribution for motion recognition using Doppler radar. However, the length of the window function that controls the time and velocity resolutions of the time-velocity image was empirically selected, and several other methods for calculating high-resolution time-velocity distributions were not considered. In this study, we compared four types of representative time-velocity distributions calculated from the Doppler-radar-received signals: STFT, wavelet transform, Wigner-Ville distribution, and smoothed pseudo-Wigner-Ville distribution. In addition, the identification accuracies of various parameter settings were also investigated. We observed that the optimally tuned STFT outperformed other high-resolution distributions, and a short length of the window function in the STFT process led to a reasonable accuracy; the best identification accuracy was 99% for the identification of twenty-five test subjects. These results indicate that STFT is the optimal time-velocity distribution for gait-based personal identification using the Doppler radar, although the time and velocity resolutions of the other methods were better than those of the STFT.


Subject(s)
Deep Learning , Radar , Humans , Fourier Analysis , Ultrasonography, Doppler/methods , Gait
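
The central comparison, a short-window STFT of the radar return, can be reproduced in outline with SciPy's spectrogram; the pulse repetition frequency, toy signal, and window length below are placeholders, not the radar parameters used in the study.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 2000.0                      # assumed pulse repetition frequency (Hz)
t = np.arange(0, 5, 1 / fs)
iq = np.exp(1j * 2 * np.pi * 60 * np.sin(2 * np.pi * 1.0 * t))  # toy micro-Doppler-like signal

# A short window gives coarse velocity resolution but fine time resolution,
# the regime the paper found to give the best identification accuracy.
f, tt, Sxx = spectrogram(iq, fs=fs, window="hann",
                         nperseg=64, noverlap=48,
                         return_onesided=False, mode="magnitude")
print(Sxx.shape)                 # (velocity bins, time frames)
```
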
7.
Sensors (Basel); 23(2), 2023 Jan 11.
Article in English | MEDLINE | ID: mdl-36679646

ABSTRACT

Some recent studies use a convolutional neural network (CNN) or long short-term memory (LSTM) to extract gait features, but methods based on the CNN and LSTM suffer a high loss of time-series and spatial information, respectively. Gait has obvious time-series characteristics, while a CNN only captures waveform characteristics, so using a CNN alone for gait recognition leads to a loss of time-series information. An LSTM can capture time-series characteristics, but its performance degrades when processing long sequences; a CNN, however, can compress the length of feature vectors. In this paper, a sequential convolution LSTM network for gait recognition using multimodal wearable inertial sensors, called SConvLSTM, is proposed. Based on a 1D-CNN and a bidirectional LSTM network, the method can automatically extract features from the raw acceleration and gyroscope signals without manual feature design. The 1D-CNN is first used to extract high-dimensional features of the inertial sensor signals. While retaining the time-series features of the data, it expands the feature dimension and compresses the length of the feature vectors. Then, the bidirectional LSTM network is used to extract the time-series features of the data. The proposed method uses fixed-length data frames as the input and does not require gait cycle detection, which avoids the impact of cycle detection errors on the recognition accuracy. We performed experiments on three public benchmark datasets: UCI-HAR, HuGaDB, and WISDM. The results show that SConvLSTM performs better than most of the currently best-performing methods on the three datasets.


Subject(s)
Deep Learning , Neural Networks, Computer , Gait , Acceleration , Memory, Long-Term
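
A minimal PyTorch sketch of the described pipeline (1D convolutions that expand channels and compress the temporal length, followed by a bidirectional LSTM and a classifier over fixed-length IMU frames); layer sizes are assumptions, not the published SConvLSTM configuration.

```python
import torch
import torch.nn as nn

class SConvLSTMSketch(nn.Module):
    """1D-CNN feature extractor + bidirectional LSTM, loosely following SConvLSTM."""
    def __init__(self, in_channels=6, num_classes=6):   # 3-axis accel + 3-axis gyro
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(128, 64, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * 64, num_classes)

    def forward(self, x):                      # x: (batch, channels, time)
        feats = self.cnn(x).transpose(1, 2)    # (batch, time/4, 128)
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1])             # classify from the last time step

logits = SConvLSTMSketch()(torch.randn(8, 6, 128))
print(logits.shape)  # torch.Size([8, 6])
```
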
8.
Sensors (Basel); 23(3), 2023 Jan 17.
Article in English | MEDLINE | ID: mdl-36772096

ABSTRACT

In this work, a novel Window Score Fusion post-processing technique for biometric gait recognition is proposed and successfully tested. We show that the use of this technique allows recognition rates to be greatly improved, independently of the configuration for the previous stages of the system. For this, a strict biometric evaluation protocol has been followed, using a biometric database composed of data acquired from 38 subjects by means of a commercial smartwatch in two different sessions. A cross-session test (where training and testing data were acquired in different days) was performed. Following the state of the art, the proposal was tested with different configurations in the acquisition, pre-processing, feature extraction and classification stages, achieving improvements in all of the scenarios; improvements of 100% (0% error) were even reached in some cases. This shows the advantages of including the proposed technique, whatever the system.


Subject(s)
Biometric Identification , Wearable Electronic Devices , Humans , Biometric Identification/methods , Biometry , Gait , Recognition, Psychology , Algorithms
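
The abstract does not state the exact fusion rule, so the sketch below shows one plausible reading: averaging consecutive per-segment match scores inside a sliding window before a decision is taken. The window size and averaging rule are assumptions.

```python
import numpy as np

def window_score_fusion(scores: np.ndarray, window: int = 5) -> np.ndarray:
    """Fuse consecutive verification scores with a sliding-window average.

    `scores` is a 1-D array of per-gait-segment match scores for one user;
    fusing several segments smooths out single bad windows, which is the
    effect the paper attributes to its Window Score Fusion step.
    """
    kernel = np.ones(window) / window
    return np.convolve(scores, kernel, mode="valid")

raw = np.array([0.62, 0.55, 0.71, 0.40, 0.68, 0.66, 0.73])
print(window_score_fusion(raw, window=3))   # fused score per window position
```
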
9.
Sensors (Basel); 23(1), 2023 Jan 02.
Article in English | MEDLINE | ID: mdl-36617105

ABSTRACT

Human gait recognition is one of the most interesting issues within the subject of behavioral biometrics. The most significant problems connected with the practical application of biometric systems include their accuracy as well as the speed at which they operate, understood both as the time needed to recognize a particular person and the time necessary to create and train a biometric system. The present study made use of an ensemble of heterogeneous base classifiers to address these issues. A heterogeneous ensemble is a group of classification models trained using various algorithms and combined to output an effective recognition result. A group of parameters identified on the basis of ground reaction forces was accepted as the input signals. The proposed solution was tested on a sample of 322 people (5980 gait cycles). The results concerning recognition accuracy (a Correct Classification Rate of 99.65%) and operation time (a model construction time of <12.5 min and a time needed to recognize a person of <0.1 s) should be considered very good and exceed in quality the other methods described in the literature so far.


Subject(s)
Algorithms , Biometry , Humans , Gait
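
A heterogeneous ensemble of this kind can be assembled with scikit-learn; the base learners, the soft-voting rule, and the simulated ground-reaction-force features below are illustrative placeholders, not the study's actual configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# Placeholder features: parameters derived from ground reaction forces,
# e.g. peak forces and timing values per gait cycle (random here).
X = np.random.randn(600, 20)
y = np.random.randint(0, 30, size=600)        # 30 hypothetical subjects

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                ("lr", LogisticRegression(max_iter=1000)),
                ("knn", KNeighborsClassifier(n_neighbors=5))],
    voting="soft",                             # combine class probabilities
)
ensemble.fit(X, y)
print(ensemble.predict(X[:3]))
```
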
10.
Sensors (Basel); 23(5), 2023 Mar 01.
Article in English | MEDLINE | ID: mdl-36904884

ABSTRACT

The manner of walking (gait) is a powerful biometric that is used as a unique fingerprinting method, allowing unobtrusive behavioral analytics to be performed at a distance without subject cooperation. As opposed to more traditional biometric authentication methods, gait analysis does not require explicit cooperation of the subject and can be performed in low-resolution settings, without requiring the subject's face to be unobstructed/clearly visible. Most current approaches are developed in a controlled setting, with clean, gold-standard annotated data, which powered the development of neural architectures for recognition and classification. Only recently has gait analysis ventured into using more diverse, large-scale, and realistic datasets to pretrain networks in a self-supervised manner. A self-supervised training regime enables learning diverse and robust gait representations without expensive manual human annotations. Prompted by the ubiquitous use of the transformer model in all areas of deep learning, including computer vision, in this work we explore the use of five different vision transformer architectures directly applied to self-supervised gait recognition. We adapt and pretrain the simple ViT, CaiT, CrossFormer, Token2Token, and TwinsSVT on two different large-scale gait datasets: GREW and DenseGait. We provide extensive results for zero-shot and fine-tuning on two benchmark gait recognition datasets, CASIA-B and FVG, and explore the relationship between the amount of spatial and temporal gait information used by the visual transformer. Our results show that, in designing transformer models for processing motion, using a hierarchical approach (i.e., CrossFormer models) on finer-grained movement fares comparatively better than previous whole-skeleton approaches.


Subject(s)
Gait , Recognition, Psychology , Humans , Gait Analysis , Walking , Benchmarking
11.
Sensors (Basel); 23(5), 2023 Mar 02.
Article in English | MEDLINE | ID: mdl-36904963

ABSTRACT

The performance of human gait recognition (HGR) is affected by the partial obstruction of the human body caused by the limited field of view in video surveillance. Traditional methods require an accurate bounding box to recognize human gait in video sequences; however, this is a challenging and time-consuming approach. Due to important applications such as biometrics and video surveillance, HGR has improved in performance over the last half-decade. Based on the literature, the challenging covariate factors that degrade gait recognition performance include walking while wearing a coat or carrying a bag. This paper proposes a new two-stream deep learning framework for human gait recognition. In the first step, a contrast enhancement technique based on the fusion of local and global filter information is proposed; a high-boost operation is then applied to highlight the human region in a video frame. Data augmentation is performed in the second step to increase the dimension of the preprocessed dataset (CASIA-B). In the third step, two pre-trained deep learning models, MobileNetV2 and ShuffleNet, are fine-tuned and trained on the augmented dataset using deep transfer learning. Features are extracted from the global average pooling layer instead of the fully connected layer. In the fourth step, the extracted features of both streams are fused using a serial-based approach and further refined in the fifth step by an improved equilibrium state optimization-controlled Newton-Raphson (ESOcNR) selection method. The selected features are finally classified using machine learning algorithms for the final classification accuracy. The experimental process was conducted on the 8 angles of the CASIA-B dataset, obtaining accuracies of 97.3, 98.6, 97.7, 96.5, 92.9, 93.7, 94.7, and 91.2%, respectively. Comparisons were conducted with state-of-the-art (SOTA) techniques and showed improved accuracy and reduced computational time.


Subject(s)
Deep Learning , Humans , Algorithms , Gait , Machine Learning , Biometry/methods
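
Of the five steps, the serial (concatenation-based) fusion of the two streams' global-average-pooled features is simple enough to sketch; the feature dimensions are assumptions, and the ESOcNR selection step is only hinted at with a placeholder magnitude-based mask.

```python
import numpy as np

def serial_fusion(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """Serial (concatenation) fusion of two per-sample feature vectors,
    e.g. MobileNetV2 and ShuffleNet global-average-pooling outputs."""
    return np.concatenate([feat_a, feat_b], axis=-1)

mobilenet_feat = np.random.randn(1280)     # assumed GAP output sizes
shufflenet_feat = np.random.randn(1024)
fused = serial_fusion(mobilenet_feat, shufflenet_feat)

# Placeholder for the paper's ESOcNR feature selection: keep a subset of indices.
selected = fused[np.argsort(np.abs(fused))[-512:]]
print(fused.shape, selected.shape)         # (2304,) (512,)
```
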
12.
Sensors (Basel); 23(16), 2023 Aug 19.
Article in English | MEDLINE | ID: mdl-37631809

ABSTRACT

As a biological characteristic, gait uses the posture characteristics of human walking for identification, which has the advantages of a long recognition distance and no requirement for the cooperation of subjects. This paper proposes a method for recognising gait images at the frame level, even in cases of discontinuity, based on human keypoint extraction. In order to reduce the dependence of the network on the temporal characteristics of the image sequence during training, a discontinuous frame screening module is added to the front end of the gait feature extraction network to restrict the image information input to the network. For gait feature extraction, a cross-stage partial connection (CSP) structure is added to the bottleneck structure of the spatial-temporal graph convolutional network (ResGCN) to effectively filter interference information. An XBNBlock is also inserted, on the basis of the CSP structure, to reduce the estimation error caused by deeper network layers and small-batch-size training. Our model achieves an average recognition accuracy of 79.5% on the gait dataset CASIA-B. The proposed method can also achieve 78.1% accuracy on CASIA-B after training with a limited number of image frames, which indicates that the model is robust.


Subject(s)
Gait , Research Design , Humans , Walking , Posture , Skeleton
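
The cross-stage partial (CSP) idea, routing only part of the channels through the heavy branch and fusing the untouched remainder back in, can be sketched generically in PyTorch; the inner branch below is a plain convolution standing in for the paper's ResGCN bottleneck.

```python
import torch
import torch.nn as nn

class CSPBlockSketch(nn.Module):
    """Generic cross-stage partial block: half the channels pass through the
    processing branch, the other half bypass it, and both are re-fused."""
    def __init__(self, channels: int = 64):
        super().__init__()
        half = channels // 2
        self.branch = nn.Sequential(                 # stand-in for a ResGCN bottleneck
            nn.Conv2d(half, half, kernel_size=3, padding=1),
            nn.BatchNorm2d(half),
            nn.ReLU(),
        )
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        a, b = torch.chunk(x, 2, dim=1)              # split channels
        return self.fuse(torch.cat([self.branch(a), b], dim=1))

out = CSPBlockSketch()(torch.randn(2, 64, 25, 30))   # (N, C, joints, frames)
print(out.shape)
```
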
13.
Sensors (Basel); 23(22), 2023 Nov 20.
Article in English | MEDLINE | ID: mdl-38005675

ABSTRACT

Aiming at challenges such as the high complexity of the network model, the large number of parameters, and the slow speed of training and testing in cross-view gait recognition, this paper proposes a solution: Multi-teacher Joint Knowledge Distillation (MJKD). The algorithm employs multiple complex teacher models to train gait images from a single view, extracting inter-class relationships that are then weighted and integrated into the set of inter-class relationships. These relationships guide the training of a lightweight student model, improving its gait feature extraction capability and recognition accuracy. To validate the effectiveness of the proposed Multi-teacher Joint Knowledge Distillation (MJKD), the paper performs experiments on the CASIA_B dataset using the ResNet network as the benchmark. The experimental results show that the student model trained by Multi-teacher Joint Knowledge Distillation (MJKD) achieves 98.24% recognition accuracy while significantly reducing the number of parameters and computational cost.
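
The distillation objective described here, several teachers' softened class distributions combined with weights and used to guide a lightweight student, can be written down generically; the temperature, teacher weights, and loss mix below are illustrative, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, weights,
                          labels, T: float = 4.0, alpha: float = 0.7):
    """Weighted multi-teacher knowledge distillation loss (illustrative).

    Each teacher's softened distribution is weighted and summed, then the
    student matches it via KL divergence, plus a standard CE term on labels.
    """
    teacher_soft = sum(w * F.softmax(t / T, dim=1)
                       for w, t in zip(weights, teacher_logits_list))
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  teacher_soft, reduction="batchmean") * T * T
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

loss = multi_teacher_kd_loss(torch.randn(8, 74),
                             [torch.randn(8, 74) for _ in range(3)],
                             weights=[0.5, 0.3, 0.2],
                             labels=torch.randint(0, 74, (8,)))
print(loss.item())
```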

14.
Sensors (Basel); 23(20), 2023 Oct 22.
Article in English | MEDLINE | ID: mdl-37896720

ABSTRACT

Gait recognition aims to identify a person based on his unique walking pattern. Compared with silhouettes and skeletons, skinned multi-person linear (SMPL) models can simultaneously provide human pose and shape information and are robust to viewpoint and clothing variances. However, previous approaches have only considered SMPL parameters as a whole and are yet to explore their potential for gait recognition thoroughly. To address this problem, we concentrate on SMPL representations and propose a novel SMPL-based method named GaitSG for gait recognition, which takes SMPL parameters in the graph structure as input. Specifically, we represent the SMPL model as graph nodes and employ graph convolution techniques to effectively model the human model topology and generate discriminative gait features. Further, we utilize prior knowledge of the human body and elaborately design a novel part graph pooling block, PGPB, to encode viewpoint information explicitly. The PGPB also alleviates the physical distance-unaware limitation of the graph structure. Comprehensive experiments on public gait recognition datasets, Gait3D and CASIA-B, demonstrate that GaitSG can achieve better performance and faster convergence than existing model-based approaches. Specifically, compared with the baseline SMPLGait (3D only), our model achieves approximately twice the Rank-1 accuracy and requires three times fewer training iterations on Gait3D.


Subject(s)
Gait , Walking , Humans , Knowledge , Linear Models , Physical Distancing
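
The core representation, SMPL pose parameters as nodes of a body graph processed by graph convolution, can be outlined with a single normalized-adjacency layer; the kinematic-tree edges are abbreviated and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class SMPLGraphConvSketch(nn.Module):
    """One graph-convolution layer over SMPL joint rotations (24 nodes x 3 params)."""
    def __init__(self, in_dim=3, out_dim=64, num_joints=24,
                 edges=((0, 1), (0, 2), (0, 3))):        # abbreviated kinematic tree
        super().__init__()
        A = torch.eye(num_joints)
        for i, j in edges:
            A[i, j] = A[j, i] = 1.0
        deg = A.sum(dim=1)
        self.register_buffer("A_hat", A / torch.sqrt(torch.outer(deg, deg)))
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x):                    # x: (batch, 24, 3) axis-angle parameters
        return torch.relu(self.A_hat @ self.lin(x))

feats = SMPLGraphConvSketch()(torch.randn(4, 24, 3))
print(feats.shape)  # torch.Size([4, 24, 64])
```
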
15.
Sensors (Basel); 24(1), 2023 Dec 27.
Article in English | MEDLINE | ID: mdl-38203004

ABSTRACT

Gait recognition, crucial in biometrics and behavioral analytics, has applications in human-computer interaction, identity verification, and health monitoring. Traditional sensors face limitations in complex or poorly lit settings. RF-based approaches, particularly millimeter-wave technology, are gaining traction for their privacy, insensitivity to light conditions, and high resolution in wireless sensing applications. In this paper, we propose a gait recognition system called Multidimensional Point Cloud Gait Recognition (PGGait). The system uses commercial millimeter-wave radar to extract high-quality point clouds through a specially designed preprocessing pipeline. This is followed by spatial clustering algorithms to separate users and perform target tracking. Simultaneously, we enhance the original point cloud data by increasing velocity and signal-to-noise ratio, forming the input of multidimensional point clouds. Finally, the system inputs the point cloud data into a neural network to extract spatial and temporal features for user identification. We implemented the PGGait system using a commercially available 77 GHz millimeter-wave radar and conducted comprehensive testing to validate its performance. Experimental results demonstrate that PGGait achieves up to 96.75% accuracy in recognizing single-user radial paths and exceeds 94.30% recognition accuracy in the two-person case. This research provides an efficient and feasible solution for user gait recognition with various applications.


Subject(s)
Algorithms , Radar , Humans , Biometry , Gait , Neural Networks, Computer
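
The user-separation step, spatially clustering radar detections before tracking and classification, maps naturally onto DBSCAN; the epsilon and minimum-point values below are illustrative, not the system's tuned parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Placeholder frame of mmWave detections: (x, y, z) metres plus Doppler velocity
# and SNR, the extra dimensions the paper adds to its multidimensional point cloud.
points = np.random.randn(200, 5) * [1.0, 1.0, 0.3, 0.5, 2.0]

labels = DBSCAN(eps=0.5, min_samples=8).fit_predict(points[:, :3])  # cluster on x, y, z
for user_id in set(labels) - {-1}:            # -1 marks noise points
    cluster = points[labels == user_id]
    print(f"candidate user {user_id}: {len(cluster)} points")
```
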
16.
Sensors (Basel); 23(10), 2023 May 18.
Article in English | MEDLINE | ID: mdl-37430786

ABSTRACT

Gait recognition, also known as walking pattern recognition, has attracted deep interest from the computer vision and biometrics community due to its potential to identify individuals from a distance. It has attracted increasing attention due to its potential applications and non-invasive nature. Since 2014, deep learning approaches have shown promising results in gait recognition by automatically extracting features. However, recognizing gait accurately is challenging due to covariate factors, the complexity and variability of environments, and human body representations. This paper provides a comprehensive overview of the advancements made in this field along with the challenges and limitations associated with deep learning methods. For that, it initially examines the various gait datasets used in the literature and analyzes the performance of state-of-the-art techniques. After that, a taxonomy of deep learning methods is presented to characterize and organize the research landscape in this field. Furthermore, the taxonomy highlights the basic limitations of deep learning methods in the context of gait recognition. The paper concludes by focusing on the present challenges and suggesting several research directions to improve the performance of gait recognition in the future.


Subject(s)
Gait , Walking , Humans , Biometry , Recognition, Psychology
17.
Sensors (Basel); 23(7), 2023 Mar 23.
Article in English | MEDLINE | ID: mdl-37050451

ABSTRACT

Walking gait data acquired with force platforms may be used for person re-identification (re-ID) in various authentication, surveillance, and forensics applications. Current force platform-based re-ID systems classify a fixed set of identities (IDs), which presents a problem when IDs are added or removed from the database. We formulated force platform-based re-ID as a deep metric learning (DML) task, whereby a deep neural network learns a feature representation that can be compared between inputs using a distance metric. The force platform dataset used in this study is one of the largest and the most comprehensive of its kind, containing 193 IDs with significant variations in clothing, footwear, walking speed, and time between trials. Several DML model architectures were evaluated in a challenging setting where none of the IDs were seen during training (i.e., zero-shot re-ID) and there was only one prior sample per ID to compare with each query sample. The best architecture was 85% accurate in this setting, though an analysis of changes in walking speed and footwear between measurement instances revealed that accuracy was 28% higher on same-speed, same-footwear comparisons, compared to cross-speed, cross-footwear comparisons. These results demonstrate the potential of DML algorithms for zero-shot re-ID using force platform data, and highlight challenging cases.
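
The zero-shot comparison protocol, embedding the query and the single enrolled sample per identity and matching by a distance metric, can be sketched independently of the network that produces the embeddings; the embedding dimension and the Euclidean metric are assumptions.

```python
import numpy as np

def match_identity(query_emb: np.ndarray, gallery: dict[str, np.ndarray]) -> str:
    """Return the gallery ID whose single enrolled embedding is closest to the query."""
    return min(gallery, key=lambda pid: np.linalg.norm(query_emb - gallery[pid]))

# One prior (enrolled) embedding per previously unseen identity, as in the paper's
# zero-shot setting; the 128-d vectors here are random placeholders.
gallery = {f"ID{i:03d}": np.random.randn(128) for i in range(193)}
query = gallery["ID042"] + 0.05 * np.random.randn(128)   # noisy repeat visit
print(match_identity(query, gallery))                    # expected: ID042
```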

18.
Entropy (Basel); 25(6), 2023 May 23.
Article in English | MEDLINE | ID: mdl-37372181

ABSTRACT

Gait recognition is one of the important research directions of biometric authentication technology. However, in practical applications, the original gait data is often short, and a long and complete gait video is required for successful recognition. Also, the gait images from different views have a great influence on the recognition effect. To address the above problems, we designed a gait data generation network for expanding the cross-view image data required for gait recognition, which provides sufficient data input for feature extraction branching with gait silhouette as the criterion. In addition, we propose a gait motion feature extraction network based on regional time-series coding. By independently time-series coding the joint motion data within different regions of the body, and then combining the time-series data features of each region with secondary coding, we obtain the unique motion relationships between regions of the body. Finally, bilinear matrix decomposition pooling is used to fuse spatial silhouette features and motion time-series features to obtain complete gait recognition under shorter time-length video input. We use the OUMVLP-Pose and CASIA-B datasets to validate the silhouette image branching and motion time-series branching, respectively, and employ evaluation metrics such as IS entropy value and Rank-1 accuracy to demonstrate the effectiveness of our design network. Finally, we also collect gait-motion data in the real world and test them in a complete two-branch fusion network. The experimental results show that the network we designed can effectively extract the time-series features of human motion and achieve the expansion of multi-view gait data. The real-world tests also prove that our designed method has good results and feasibility in the problem of gait recognition with short-time video as input data.
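
The final fusion step, bilinear matrix decomposition pooling of the silhouette branch and the regional time-series branch, can be approximated with a low-rank factorized bilinear layer; the dimensions, rank, and placeholder inputs below are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class LowRankBilinearFusion(nn.Module):
    """Factorized bilinear pooling of two feature vectors (illustrative).

    Projects both inputs to a shared rank-k space, multiplies them
    element-wise, and maps back out, a common low-rank stand-in for
    bilinear matrix decomposition pooling."""
    def __init__(self, dim_a=256, dim_b=128, rank=64, out_dim=256):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, rank)
        self.proj_b = nn.Linear(dim_b, rank)
        self.out = nn.Linear(rank, out_dim)

    def forward(self, a, b):
        return self.out(torch.tanh(self.proj_a(a) * self.proj_b(b)))

fused = LowRankBilinearFusion()(torch.randn(4, 256), torch.randn(4, 128))
print(fused.shape)  # torch.Size([4, 256])
```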

19.
Sensors (Basel); 22(15), 2022 Jul 29.
Article in English | MEDLINE | ID: mdl-35957239

ABSTRACT

Identifying people's identity by using behavioral biometrics has attracted many researchers' attention in the biometrics industry. Gait is a behavioral trait, whereby an individual is identified based on their walking style. Over the years, gait recognition has been performed by using handcrafted approaches. However, due to several covariates' effects, the competence of the approach has been compromised. Deep learning is an emerging algorithm in the biometrics field, which has the capability to tackle the covariates and produce highly accurate results. In this paper, a comprehensive overview of the existing deep learning-based gait recognition approach is presented. In addition, a summary of the performance of the approach on different gait datasets is provided.


Subject(s)
Deep Learning , Algorithms , Biometry , Gait , Humans , Walking
20.
Sensors (Basel); 22(20), 2022 Oct 21.
Article in English | MEDLINE | ID: mdl-36298427

ABSTRACT

Human gait analysis presents an opportunity to study complex spatiotemporal data transpiring as co-movement patterns of multiple moving objects (i.e., human joints). Such patterns are acknowledged as movement signatures specific to an individual, offering the possibility to identify each individual based on unique gait patterns. We present a spatiotemporal deep learning model, dubbed ST-DeepGait, to featurize spatiotemporal co-movement patterns of human joints, and accordingly classify such patterns to enable human gait recognition. To this end, the ST-DeepGait model architecture is designed according to the spatiotemporal human skeletal graph in order to impose learning the salient local spatial dynamics of gait as they occur over time. Moreover, we employ a multi-layer RNN architecture to induce a sequential notion of gait cycles in the model. Our experimental results show that ST-DeepGait can achieve recognition accuracy rates over 90%. Furthermore, we qualitatively evaluate the model with the class embeddings to show interpretable separability of the features in geometric latent space. Finally, to evaluate the generalizability of our proposed model, we perform a zero-shot detection on 10 classes of data completely unseen during training and achieve a recognition accuracy rate of 88% overall. With this paper, we also contribute our gait dataset captured with an RGB-D sensor containing approximately 30 video samples of each subject for 100 subjects totaling 3087 samples. While we use human gait analysis as a motivating application to evaluate ST-DeepGait, we believe that this model can be simply adopted and adapted to study co-movement patterns of multiple moving objects in other applications such as in sports analytics and traffic pattern analysis.


Subject(s)
Deep Learning , Humans , Gait , Gait Analysis
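
The sequential part of ST-DeepGait, a multi-layer RNN consuming per-frame joint features to induce a notion of gait cycles, can be sketched with a stacked GRU; the joint count, hidden sizes, and class count below are assumptions, not the published model.

```python
import torch
import torch.nn as nn

class STDeepGaitSketch(nn.Module):
    """Per-frame joint encoder followed by a multi-layer RNN, loosely
    mirroring the spatial-then-sequential design described for ST-DeepGait."""
    def __init__(self, num_joints=25, hidden=128, num_classes=100):
        super().__init__()
        self.frame_encoder = nn.Linear(num_joints * 3, hidden)   # per-frame spatial features
        self.rnn = nn.GRU(hidden, hidden, num_layers=2, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, x):                       # x: (batch, frames, joints, 3)
        b, t = x.shape[:2]
        feats = torch.relu(self.frame_encoder(x.reshape(b, t, -1)))
        out, _ = self.rnn(feats)
        return self.classifier(out[:, -1])      # subject logits from the final state

logits = STDeepGaitSketch()(torch.randn(2, 60, 25, 3))
print(logits.shape)  # torch.Size([2, 100])
```
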