1.
Sensors (Basel); 24(3), 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38339579

ABSTRACT

The recognition of human activity is crucial as the Internet of Things (IoT) progresses toward future smart homes. Wi-Fi-based motion recognition stands out due to its non-contact nature and widespread applicability. However, the channel state information (CSI) associated with human movement in indoor environments changes with the direction of movement, which poses challenges for existing Wi-Fi movement-recognition methods: only a limited set of movement directions can be detected, detection distances are short, and feature extraction is inaccurate, all of which significantly constrain the wide-scale application of Wi-Fi action recognition. To address these issues, we propose a direction-independent CSI fusion and sharing model named CSI-F, which combines Convolutional Neural Networks (CNN) and Gated Recurrent Units (GRU). Specifically, we introduce a series of signal-processing techniques that exploit antenna diversity to eliminate random phase shifts, thereby removing noise unrelated to motion information. We then amplify the Doppler frequency-shift effect through cyclic actions and generate a spectrogram, further enhancing the impact of actions on CSI. To demonstrate the effectiveness of this method, we conducted experiments on datasets collected in natural environments. We confirmed that superimposing periodic actions on CSI improves recognition accuracy. CSI-F achieves higher recognition accuracy than other methods and a monitoring coverage of up to 6 m.
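The abstract describes a CNN-plus-GRU pipeline over Doppler spectrograms derived from CSI. The sketch below is a minimal illustration of that general architecture, not the authors' CSI-F code: the spectrogram frame size (1 x 64 x 64), the layer widths, the six activity classes, and the class name CnnGruActivityNet are all assumptions made for the example.

# Minimal sketch of a CNN + GRU activity classifier over spectrogram frames.
# Not the authors' CSI-F implementation; shapes and sizes are assumptions.
import torch
import torch.nn as nn

class CnnGruActivityNet(nn.Module):
    def __init__(self, n_classes: int = 6, feat_dim: int = 64, hidden: int = 128):
        super().__init__()
        # Per-frame CNN over an assumed 1 x 64 x 64 Doppler spectrogram slice.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # GRU fuses the per-frame features across the action sequence.
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):  # x: (batch, time, 1, 64, 64)
        b, t = x.shape[:2]
        f = self.cnn(x.reshape(b * t, *x.shape[2:])).reshape(b, t, -1)
        _, h = self.gru(f)            # h: (1, batch, hidden)
        return self.head(h[-1])       # one set of class logits per sequence

# Example: a batch of 4 sequences, 10 spectrogram frames each.
logits = CnnGruActivityNet()(torch.randn(4, 10, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 6])

The per-frame CNN and the recurrent fusion are kept separate so that the temporal pattern induced by cyclic actions, which the paper relies on to amplify the Doppler effect, is modeled explicitly by the GRU.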


Subject(s)
Internet of Things , Movement , Humans , Motion , Doppler Effect , Environment
2.
Sensors (Basel); 23(16), 2023 Aug 13.
Article in English | MEDLINE | ID: mdl-37631685

ABSTRACT

In recent years, convolutional neural networks (CNNs) have played a dominant role in facial expression recognition. While CNN-based methods have achieved remarkable success, they are notorious for their excessive number of parameters and their reliance on large amounts of manually annotated data. To address these challenges, we expand the number of training samples by learning expressions from a face recognition dataset, reducing the impact of a small sample size on network training. In the proposed deep joint learning framework, the deep features of the face recognition dataset are clustered and, simultaneously, the parameters of an efficient CNN are learned, thereby labeling the data for network training automatically and efficiently. Specifically, we first develop a new efficient CNN based on the proposed affinity convolution (AC) module, with much lower computational overhead, for deep feature learning and expression classification. We then develop an expression-guided deep facial clustering approach to cluster the deep features and generate abundant expression labels from the face recognition dataset. Finally, the AC-based CNN is fine-tuned using the updated training set and a combined loss function. Our framework is evaluated on several challenging facial expression recognition datasets as well as a self-collected dataset. In the context of facial expression recognition applied to education, our proposed method achieved an accuracy of 95.87% on the self-collected dataset, surpassing existing methods.
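The core idea in this abstract is to generate expression pseudo-labels by clustering deep features of a face recognition dataset and then fine-tune a lightweight CNN on the pseudo-labeled set. The sketch below illustrates that generic pattern only; it is not the authors' expression-guided clustering or affinity convolution module. The feature dimension, the seven-cluster assumption, the tiny stand-in backbone, and the single cross-entropy loss (the paper uses a combined loss) are all assumptions made for the example.

# Minimal sketch: cluster deep features into expression pseudo-labels, then
# fine-tune a small CNN on them. Not the authors' pipeline; see lead-in.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def pseudo_label(features: np.ndarray, n_expressions: int = 7) -> np.ndarray:
    """Assign each face a pseudo expression label via k-means over its deep feature."""
    return KMeans(n_clusters=n_expressions, n_init=10, random_state=0).fit_predict(features)

# Stand-in lightweight backbone mapping a 3 x 112 x 112 face crop to 7 logits.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 7),
)

images = torch.randn(32, 3, 112, 112)        # stand-in face crops
feats = torch.randn(32, 512).numpy()         # stand-in deep features of those faces
labels = torch.as_tensor(pseudo_label(feats), dtype=torch.long)

# One fine-tuning step on the pseudo-labeled batch with plain cross-entropy.
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
optimizer.zero_grad()
loss = nn.CrossEntropyLoss()(backbone(images), labels)
loss.backward()
optimizer.step()
print(float(loss))

In the paper, the clustering is expression-guided and learned jointly with the network rather than being a one-shot k-means pass, but the label-then-fine-tune loop shown here is the structural core of that approach.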


Subject(s)
Facial Recognition , Learning , Cluster Analysis , Face , Neural Networks, Computer