ABSTRACT
To improve the real-time performance of micro-Doppler-map-based gesture recognition with mmWave radar, this paper proposes point-cloud-based gesture recognition for mmWave radar. The approach consists of two steps. The first step estimates the point cloud of the gestures using a 3D-FFT and peak grouping. The second step trains the TRANS-CNN model, which combines multi-head self-attention with a 1D convolutional network to extract deeper features from the point cloud data and categorize the gestures. In the experiments, the TI mmWave radar sensor IWR1642 is used as a benchmark to evaluate the feasibility of the proposed approach; the gesture recognition accuracy reaches 98.5%. To further demonstrate the effectiveness of the approach, a simple 2Tx2Rx radar sensor was developed in our lab, with which the recognition accuracy reaches 97.1%. The results show that the proposed gesture recognition approach achieves the best real-time performance with limited training data in comparison with existing methods.
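The first step described above (3D-FFT plus peak grouping) can be sketched roughly as follows. This is a minimal NumPy illustration under assumed array shapes, not the authors' implementation; the simple thresholding is a crude stand-in for the CFAR-style peak detection a real pipeline would use.

```python
import numpy as np

def radar_point_cloud(adc_cube, threshold_db=15.0):
    """Estimate a point cloud from a raw radar data cube via 3D-FFT.

    adc_cube: complex array of shape (samples, chirps, rx_antennas),
    mapping to (range, Doppler, angle) bins after the FFTs.
    """
    # 3D-FFT: range FFT over samples, Doppler FFT over chirps, angle FFT over antennas
    spectrum = np.fft.fftn(adc_cube, axes=(0, 1, 2))
    power_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
    # Peak grouping: keep only cells that exceed the noise floor by threshold_db
    # (a crude stand-in for CFAR detection).
    noise_floor = np.median(power_db)
    mask = power_db > noise_floor + threshold_db
    r, d, a = np.nonzero(mask)
    return np.stack([r, d, a], axis=1)  # (N, 3) bin indices: range, Doppler, angle
```

In a real pipeline the bin indices would then be scaled to meters, meters/second, and degrees using the radar's chirp parameters.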
ABSTRACT
Within the context of a smart home, detecting the operating status of appliances plays a pivotal role in estimating power consumption, issuing overuse reminders, and identifying faults. Traditional contact-based approaches require equipment upgrades such as smart sockets or high-precision electric meters. Non-contact approaches use technologies such as lasers and Ultra-Wideband (UWB) radar; the former can only monitor one appliance at a time, and the latter cannot detect appliances with extremely tiny vibrations and tends to be susceptible to interference from human activities. To address these challenges, we introduce HomeOSD, an appliance status-detection system based on mmWave radar. It simultaneously tracks multiple appliances, without interference from human activity, by measuring their extremely tiny vibrations. To reduce interference from other moving objects, such as people, we introduce a Vibration-Intensity Metric based on periodic signal characteristics. We present the Adaptive Weighted Minimum Distance Classifier (AWMDC) to counteract fluctuations in appliance vibration. Finally, we implement the system on a common mmWave radar and carry out real-world experiments to evaluate HomeOSD's performance. The detection accuracy is 95.58%, and these promising results demonstrate the feasibility and reliability of the proposed system.
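The abstract does not specify how the Vibration-Intensity Metric is computed. As a hedged illustration of a metric "based on periodic signal characteristics", one simple possibility is to score the periodicity of a detrended radar phase signal with its normalized autocorrelation: appliance vibration is strongly periodic, human motion largely is not. The function name and parameters below are assumptions for illustration only.

```python
import numpy as np

def vibration_intensity(signal, min_lag=1):
    """Score how periodic a (detrended) radar phase signal is.

    The peak of the normalized autocorrelation, excluding lag 0, is
    close to 1 for a periodic vibration and small for aperiodic motion.
    """
    x = signal - np.mean(signal)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    ac = ac / ac[0]                                    # lag-0 value becomes 1
    return float(np.max(ac[min_lag:]))
```

Thresholding such a score would let a detector keep vibrating-appliance returns and suppress returns from people walking through the scene.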
ABSTRACT
With an aging population, numerous assistive and monitoring technologies are under development to enable older adults to age in place. To facilitate aging in place, predicting risk factors such as falls and hospitalization and providing early interventions are important. Much of the work on ambient monitoring for risk prediction has centered on gait speed analysis, utilizing privacy-preserving sensors like radar. Despite compelling evidence that monitoring step length in addition to gait speed is crucial for predicting risk, radar-based methods have not explored step length measurement in the home. Furthermore, laboratory experiments on step length measurement using radars are limited to proof-of-concept studies with few healthy subjects. To address this gap, a radar-based step length measurement system for the home is proposed based on detection and tracking using a radar point cloud followed by Doppler speed profiling of the torso to obtain step lengths in the home. The proposed method was evaluated in a clinical environment involving 35 frail older adults to establish its validity. Additionally, the method was assessed in people's homes, with 21 frail older adults who had participated in the clinical assessment. The proposed radar-based step length measurement method was compared to the gold-standard Zeno Walkway Gait Analysis System, revealing a 4.5 cm/8.3% error in a clinical setting. Furthermore, it exhibited excellent reliability (ICC(2,k) = 0.91, 95% CI 0.82 to 0.96) in uncontrolled home settings. The method also proved accurate in uncontrolled home settings, as indicated by a strong consistency (ICC(3,k) = 0.81 (95% CI 0.53 to 0.92)) between home measurements and in-clinic assessments.
Subject(s)
Frailty, Humans, Aged, Radar, Reproducibility of Results, Independent Living, Walking Speed, Gait
ABSTRACT
In the field of detection and ranging, multiple complementary sensing modalities may be used to enrich information obtained from a dynamic scene. One application of this sensor fusion is in public security and surveillance, where efficacy and privacy protection measures must be continually evaluated. We present a novel deployment of sensor fusion for the discrete detection of concealed metal objects on persons whilst preserving their privacy. This is achieved by coupling off-the-shelf mmWave radar and depth camera technology with a novel neural network architecture that processes radar signals using convolutional Long Short-Term Memory (LSTM) blocks and depth signals using convolutional operations. The combined latent features are then magnified using deep feature magnification to reveal cross-modality dependencies in the data. We further propose a decoder, based on the feature extraction and embedding block, to learn an efficient upsampling of the latent space to locate the concealed object in the spatial domain through radar feature guidance. We demonstrate the ability to detect the presence and infer the 3D location of concealed metal objects. We achieve accuracies of up to 95% using a technique that is robust to multiple persons. This work provides a demonstration of the potential for cost-effective and portable sensor fusion with strong opportunities for further development.
ABSTRACT
Human gesture detection, obstacle detection, collision avoidance, parking aids, autonomous driving, medical, meteorological, industrial, agricultural, defense, space, and other related fields have all benefited from recent advancements in mmWave radar sensor technology. mmWave radars have several advantages that set them apart from other types of sensors: they can operate in bright, dazzling, or no-light conditions, they allow better antenna miniaturization than traditional radars, and they offer better range resolution. Moreover, as more datasets have become available, the potential for incorporating radar data into machine learning methods for various applications has grown significantly. This review focuses on key performance metrics of mmWave-radar-based sensing, detailed applications, and machine learning techniques used with mmWave radar for a variety of tasks. The article begins with a discussion of the working bands of mmWave radars, then covers the types of mmWave radars and their key specifications, mmWave radar data interpretation, and applications across various domains, and ends with a discussion of machine learning algorithms applied to radar data. Our review serves as a practical reference for beginners developing mmWave-radar-based applications that utilize machine learning techniques.
ABSTRACT
In this article, a novel heterogeneous fusion of convolutional neural networks that combines an RGB camera and an active mmWave radar sensor for smart parking meters is proposed. In outdoor street surroundings, traffic flow, shadows, and reflections make it exceedingly difficult for a parking fee collector to identify street parking regions. The proposed heterogeneous fusion convolutional neural networks combine an active radar sensor and image input over a specific geometric area, allowing them to detect the parking region under difficult conditions such as rain, fog, dust, snow, glare, and traffic flow. Convolutional neural networks produce the output results through individual training and fusion of the RGB camera and mmWave radar data. To achieve real-time performance, the proposed algorithm has been implemented on the GPU-accelerated embedded platform Jetson Nano with a heterogeneous hardware acceleration methodology. The experimental results show that the accuracy of the heterogeneous fusion method reaches 99.33% on average.
ABSTRACT
By integrating IoT technology, smart door locks can provide greater convenience, security, and remote access. This paper presents a novel framework for smart doors that combines face detection and recognition techniques based on mmWave radar and camera sensors. The proposed framework aims to improve accuracy and address security issues arising from camera limitations, such as overlapping and lighting conditions. By integrating mmWave radar with camera-based face detection and recognition algorithms, the system can accurately detect and identify people approaching the door, providing seamless and secure access. The framework includes four key components: person detection based on mmWave radar, camera preparation and integration, person identification, and door lock control. The experiments show that the framework can be useful for smart homes.
ABSTRACT
Because of societal changes, human activity recognition, as part of home care systems, has become increasingly important. Camera-based recognition is mainstream but raises privacy concerns and is less accurate under dim lighting. In contrast, radar sensors do not record sensitive information, avoid the invasion of privacy, and work in poor lighting; however, the collected data are often sparse. To address this issue, we propose a novel Multimodal Two-stream GNN Framework for Efficient Point Cloud and Skeleton Data Alignment (MTGEA), which improves recognition accuracy through accurate skeletal features from Kinect models. We first collected two datasets using the mmWave radar and Kinect v4 sensors, and used zero-padding, Gaussian Noise (GN), and Agglomerative Hierarchical Clustering (AHC) to increase the number of collected point clouds to 25 per frame to match the skeleton data. Next, we used the Spatial Temporal Graph Convolutional Network (ST-GCN) architecture to acquire multimodal representations in the spatio-temporal domain, focusing on skeletal features. Finally, we implemented an attention mechanism that aligns the two multimodal features to capture the correlation between point clouds and skeleton data. The resulting model was evaluated empirically on human activity data and shown to improve human activity recognition using only radar data. All datasets and code are available on our GitHub.
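The upsampling step described above (bringing each sparse radar frame to a fixed 25 points so it can be aligned with 25-joint skeleton data) can be sketched as follows. This is an assumed minimal implementation, not the MTGEA code: it substitutes simple jittered duplication (plus zero-padding for empty frames) for the paper's separate zero-padding, GN, and AHC variants, and the function name and parameters are illustrative.

```python
import numpy as np

def pad_point_cloud(points, target=25, sigma=0.01, rng=None):
    """Bring a sparse radar point cloud up to a fixed size per frame.

    Frames with fewer than `target` points are filled by duplicating random
    points with small Gaussian jitter (zero-padding when the frame is empty);
    frames with more points keep a random subset.
    """
    rng = np.random.default_rng(rng)
    pts = np.asarray(points, dtype=float).reshape(-1, 3)
    n = len(pts)
    if n == 0:
        return np.zeros((target, 3))          # zero-padding for an empty frame
    if n >= target:
        return pts[rng.choice(n, target, replace=False)]
    extra_idx = rng.choice(n, target - n, replace=True)
    extra = pts[extra_idx] + rng.normal(0.0, sigma, (target - n, 3))
    return np.vstack([pts, extra])
```

A fixed per-frame size is what lets the radar stream be batched and aligned one-to-one with the 25 Kinect joints in the two-stream network.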
ABSTRACT
Human Activity Recognition (HAR) that includes gait analysis may be useful for various rehabilitation and telemonitoring applications. Current gait analysis methods, such as wearables or cameras, have privacy and operational constraints, especially when used with older adults. Millimeter-Wave (MMW) radar is a promising solution for gait applications because of its low cost, better privacy, and resilience to ambient light and climate conditions. This paper presents a novel human gait analysis method for HAR that combines the micro-Doppler spectrogram and skeletal pose estimation using MMW radar. In our approach, we used the Texas Instruments IWR6843ISK-ODS MMW radar to obtain the micro-Doppler spectrogram and point clouds for 19 human joints. We developed a multilayer Convolutional Neural Network (CNN) to recognize and classify five different gait patterns with an accuracy of 95.7 to 98.8% using MMW radar data. During training of the CNN algorithm, we extracted the 3D coordinates of 25 joints using the Kinect V2 sensor and compared them with the point cloud data to improve the estimation. Finally, we performed a real-time simulation to observe the point cloud behavior for different activities and validated our system against ground truth values. The proposed method demonstrates the ability to distinguish between different human activities and to obtain clinically relevant gait information.
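A micro-Doppler spectrogram of the kind used above is typically obtained by a short-time Fourier transform over slow time. The sketch below is a generic NumPy illustration, not the authors' processing chain; the window and hop sizes are arbitrary assumptions.

```python
import numpy as np

def micro_doppler_spectrogram(iq, win=64, hop=16):
    """Compute a micro-Doppler spectrogram from a slow-time IQ signal.

    iq: complex samples of one range bin across chirps (slow time).
    Returns a (win, n_frames) array of Doppler power per time frame.
    """
    window = np.hanning(win)
    n_frames = 1 + (len(iq) - win) // hop
    frames = np.stack([iq[i * hop:i * hop + win] * window
                       for i in range(n_frames)])
    # FFT each windowed frame; fftshift centers zero Doppler in the output
    spec = np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1)
    return np.abs(spec).T ** 2
```

The limb and torso micro-motions of a gait then appear as time-varying Doppler traces in the resulting image, which is what a CNN classifier consumes.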
Subject(s)
Gait Analysis, Radar, Aged, Algorithms, Gait, Humans, Machine Learning
ABSTRACT
For autonomous driving, it is important to accurately detect obstacles at all scales for safety. In this paper, we propose a new spatial attention fusion (SAF) method for obstacle detection using mmWave radar and a vision sensor, where the sparsity of radar points is considered in the proposed SAF. The proposed fusion method can be embedded in the feature-extraction stage, effectively leveraging the features of both the mmWave radar and the vision sensor. Based on the SAF, an attention weight matrix is generated to fuse the vision features, in contrast to concatenation fusion and element-wise addition fusion. Moreover, the proposed SAF can be trained in an end-to-end manner together with recent deep learning object detection frameworks. In addition, we build a generation model that converts radar points to radar images for neural network training. Numerical results suggest that the newly developed fusion method achieves superior performance on a public benchmark. The source code will be released on GitHub.
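The contrast drawn above between attention-weighted fusion and concatenation or element-wise addition can be sketched in a few lines. This is a minimal NumPy illustration of the general idea only, not the paper's SAF architecture (which learns the attention map inside a trained network); all shapes and the sigmoid choice are assumptions.

```python
import numpy as np

def spatial_attention_fusion(radar_feat, vision_feat):
    """Fuse radar and vision feature maps via a spatial attention weight matrix.

    radar_feat:  (H, W) feature map from the radar branch
    vision_feat: (C, H, W) feature maps from the vision branch
    Rather than concatenating or adding the two, a sigmoid attention map
    derived from the radar branch re-weights the vision features spatially.
    """
    attn = 1.0 / (1.0 + np.exp(-radar_feat))   # (H, W) weights in (0, 1)
    return vision_feat * attn[None, :, :]      # broadcast over channels
```

Locations with strong radar response pass vision features through nearly unchanged, while radar-empty regions are suppressed; in the real method this weighting is learned end-to-end rather than fixed.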
ABSTRACT
BACKGROUND: Position verification and motion monitoring are critical for safe and precise radiotherapy (RT). Existing approaches to these tasks based on visible light or x-rays are suboptimal because they either cannot penetrate obstructions to the patient's skin or introduce additional radiation exposure. Low-cost mmWave radar is an ideal solution for these tasks, as it can monitor patient position and motion continuously throughout treatment delivery. PURPOSE: To develop and validate frequency-modulated continuous-wave (FMCW) mmWave radar for position verification and motion tracking during RT delivery. METHODS: A 77 GHz FMCW mmWave module was used in this study. A Chirp-Z-Transform-based (CZT) algorithm was developed to process the intermediate frequency (IF) signals. Absolute distances to flat Solid Water slabs and human-shaped phantoms were measured, and the accuracy of absolute distance and relative displacement was evaluated. RESULTS: Without obstruction, the mmWave radar with the CZT algorithm was able to detect absolute distance within 1 mm for a Solid Water slab that simulated the reflectivity of the human body. Through obstructive materials, the mmWave device was able to detect absolute distance within 5 mm in the worst case and within 3.5 mm in most cases. The CZT algorithm significantly improved the accuracy of absolute distance measurement compared with the Fast Fourier Transform (FFT) algorithm and achieved submillimeter displacement accuracy with and without obstructions. The surface-to-skin distance (SSD) measurement accuracy was within 8 mm on the anterior of the phantom. CONCLUSIONS: With the CZT signal processing algorithm, the mmWave radar can measure the absolute distance to a flat surface within 1 mm, but the error in absolute distance to a human-shaped phantom is as large as 8 mm at some angles. Further work is needed to improve the accuracy of SSD measurement to uneven surfaces by mmWave radar.
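The zoom-in idea behind the CZT processing above can be illustrated with a small sketch: a coarse FFT locates the beat-frequency peak, and the spectrum is then re-evaluated on a much finer grid around that peak, which is how sub-FFT-bin (and hence sub-millimeter-scale) distance resolution is obtained. Here the fine grid is evaluated by direct DFT as a stand-in for a true chirp-Z transform, and all parameter values (`fs`, `slope`, `zoom`) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def czt_range(if_signal, fs, slope, zoom=100):
    """Estimate FMCW target range from the IF signal with sub-bin resolution.

    The beat frequency f_b relates to range by R = c * f_b / (2 * slope),
    where slope is the chirp slope in Hz/s.
    """
    c = 3e8
    n = len(if_signal)
    coarse = np.abs(np.fft.rfft(if_signal))
    k0 = np.argmax(coarse)                       # coarse FFT peak bin
    # Fine frequency grid spanning one FFT bin on each side of the peak
    f = (k0 + np.linspace(-1, 1, 2 * zoom + 1)) * fs / n
    # Direct DFT evaluation on the fine grid (stand-in for the CZT)
    t = np.arange(n) / fs
    spectrum = np.exp(-2j * np.pi * np.outer(f, t)) @ if_signal
    f_beat = f[np.argmax(np.abs(spectrum))]
    return c * f_beat / (2 * slope)
```

With an FFT bin of a few kHz this refines the range estimate from roughly centimeter to sub-millimeter granularity, mirroring the FFT-versus-CZT comparison in the RESULTS.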
Subject(s)
Computer-Assisted Signal Processing, Water, Humans, Motion (Physics), Radiography
ABSTRACT
To facilitate penetrating-imaging-oriented applications such as nondestructive internal defect detection and localization in obstructed environments, a novel pixel-level information fusion strategy for mmWave and visible images is proposed. More concretely, inspired by both the advances of deep learning in universal image fusion and the maturity of near-field millimeter-wave imaging technology, an effective deep transfer learning strategy is presented to capture the information hidden in visible and millimeter-wave images. Furthermore, by implementing a fine-tuning strategy and using an improved bilateral filter, the proposed fusion strategy can robustly exploit the information in both the near-field millimeter-wave field and the visible-light field. Extensive experiments show that the proposed strategy provides superior accuracy and robustness in real-world environments.
ABSTRACT
This dataset contains complex signals from a mmWave FMCW radar system. The signals were acquired during an indoor measurement campaign aimed at assessing people's different ways of walking. The measurement setup and devices are described. The dataset consists of acquisitions of six different types of activities, performed by 29 subjects who repeated each activity several times; it therefore contains multiple experiments for each activity, for a total of 231 acquisitions. The subjects walked without any constraint and did not follow any predefined pattern, making this dataset useful not only for human gait recognition but also for evaluating the performance of different radar data processing algorithms.