Results 1 - 7 of 7
1.
Sensors (Basel) ; 24(2)2024 Jan 18.
Article in English | MEDLINE | ID: mdl-38257717

ABSTRACT

In health monitoring systems for the elderly, a crucial aspect is unobtrusively and continuously monitoring their activities to detect potentially hazardous incidents such as sudden falls as soon as they occur. However, the effectiveness of current non-contact sensor-based activity detection systems is limited by obstacles present in the environment. To overcome this limitation, a straightforward yet highly efficient approach is to use multiple sensors that collaborate seamlessly. This paper proposes a method that leverages 2D Light Detection and Ranging (Lidar) technology for activity detection. Multiple 2D Lidars are positioned in an indoor environment with varying obstacles such as furniture, working cohesively to create a comprehensive representation of ongoing activities. The data from these Lidars are concatenated and transformed into a more interpretable, image-like format. A convolutional Long Short-Term Memory (LSTM) Neural Network then processes these generated images to classify the activities. The proposed approach achieves high accuracy in three tasks: activity detection, fall detection, and unsteady gait detection, attaining accuracies of 96.10%, 99.13%, and 93.13%, respectively. This demonstrates the efficacy and promise of the method in monitoring and identifying potentially hazardous events for the elderly through 2D Lidars, a non-intrusive sensing technology.
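The concatenate-and-rasterize step described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's actual pipeline: the grid size, room extent, and sensor poses are assumptions made for the example.

```python
import numpy as np

def scan_to_image(angles, ranges, sensor_xy, grid=(64, 64), extent=8.0):
    """Rasterize one 2D lidar scan into a binary occupancy image.

    angles/ranges: polar measurements from the sensor (radians, metres).
    sensor_xy: sensor position in a shared room frame (metres).
    extent: side length of the square area covered by the image (metres).
    """
    img = np.zeros(grid, dtype=np.float32)
    # Polar -> Cartesian in the shared room frame.
    xs = sensor_xy[0] + ranges * np.cos(angles)
    ys = sensor_xy[1] + ranges * np.sin(angles)
    # Quantize to pixel indices and keep only points inside the area.
    cols = np.floor(xs / extent * grid[1]).astype(int)
    rows = np.floor(ys / extent * grid[0]).astype(int)
    ok = (rows >= 0) & (rows < grid[0]) & (cols >= 0) & (cols < grid[1])
    img[rows[ok], cols[ok]] = 1.0
    return img

def fuse_lidars(scans):
    """Fuse images from several lidars with a pixel-wise maximum, so a
    point seen by any one sensor appears in the combined image."""
    return np.maximum.reduce([scan_to_image(a, r, p) for a, r, p in scans])
```

Sequences of such fused images would then be stacked along a time axis and fed to the ConvLSTM classifier.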

2.
Sensors (Basel) ; 23(5)2023 Feb 24.
Article in English | MEDLINE | ID: mdl-36904735

ABSTRACT

Monitoring the activities of elderly people living alone is of great importance since it allows for the detection of hazardous events such as falls. In this context, the use of 2D light detection and ranging (LIDAR) has been explored, among other technologies, as a way to identify such events. Typically, a 2D LIDAR is placed near the ground and collects measurements continuously, and a computational device classifies these measurements. However, such a device is hard to operate in a realistic environment with home furniture, as it requires a direct line of sight (LOS) with its target: furniture blocks the infrared (IR) rays from reaching the monitored person, limiting the effectiveness of such sensors. Moreover, due to the sensor's fixed location, if a fall is not detected when it happens, it cannot be detected afterwards. In this context, cleaning robots present a much better alternative given their autonomy. In this paper, we propose to use a 2D LIDAR mounted on top of a cleaning robot. Through continuous movement, the robot collects distance information continuously. Although it shares the LOS limitation, by roaming the room the robot can identify whether a person is lying on the ground after falling, even some time after the fall event. To achieve this goal, the measurements captured by the moving LIDAR are transformed, interpolated, and compared to a reference state of the surroundings. A convolutional long short-term memory (LSTM) neural network is trained to classify the processed measurements and identify whether a fall event occurs or has occurred. Through simulations, we show that such a system can achieve an accuracy of 81.2% in fall detection and 99% in the detection of lying bodies. In comparison, the conventional method, which uses a static LIDAR, reaches accuracies of 69.4% and 88.6% for the same tasks, respectively.
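The transform-interpolate-compare step can be sketched as follows. This is a simplified stand-in for the paper's processing, assuming the robot's pose is known and the reference state is stored as a map from bearing angle to expected range:

```python
import numpy as np

def to_global(angles, ranges, pose):
    """Transform a scan from the robot frame to the room frame.
    pose = (x, y, heading) of the robot when the scan was taken."""
    x, y, th = pose
    xs = x + ranges * np.cos(angles + th)
    ys = y + ranges * np.sin(angles + th)
    return xs, ys

def residual_scan(angles, ranges, reference):
    """Interpolate the reference (angle -> expected range, with angles
    sorted ascending) onto the measured angles and return measured
    minus expected range. A large negative residual flags an object
    (e.g. a lying body) closer than the known furniture and walls."""
    ref_angles, ref_ranges = reference
    expected = np.interp(angles, ref_angles, ref_ranges)
    return ranges - expected
```

The residuals, accumulated over consecutive scans, would form the sequence classified by the convolutional LSTM.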


Subjects
Robotics, Aged, Humans, Accidental Falls, Human Activities, Infrared Rays, Interior Design and Furnishings
3.
Sensors (Basel) ; 22(10)2022 May 20.
Article in English | MEDLINE | ID: mdl-35632305

ABSTRACT

In this paper, we propose an activity detection system using a 24 × 32 resolution infrared array sensor placed on the ceiling. We first collect the data at different resolutions (i.e., 24 × 32, 12 × 16, and 6 × 8) and apply the advanced deep learning (DL) techniques of Super-Resolution (SR) and denoising to enhance the quality of the images. We then classify the images/sequences of images according to the activities the subject is performing, using a hybrid deep learning model combining a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network. We use data augmentation, performed by a Conditional Generative Adversarial Network (CGAN), to improve the training of the neural networks by incorporating a wider variety of samples. By enhancing the images using SR, removing the noise, and adding more training samples via data augmentation, our target is to improve the classification accuracy of the neural network. Through experiments, we show that applying these deep learning techniques to low-resolution noisy infrared images leads to a noticeable improvement in performance. The classification accuracy improved from 78.32% to 84.43% (for images with 6 × 8 resolution) and from 90.11% to 94.54% (for images with 12 × 16 resolution) when we used the CNN and CNN + LSTM networks, respectively.
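For illustration, the shape arithmetic of the enhancement stage can be sketched with crude numpy stand-ins: nearest-neighbour upsampling in place of the learned SR network, and a mean filter in place of the learned denoiser. Neither corresponds to the paper's actual models.

```python
import numpy as np

def upsample(frame, factor=4):
    """Nearest-neighbour upsampling: a crude baseline standing in for
    the learned super-resolution network (6x8 -> 24x32 at factor=4)."""
    return np.kron(frame, np.ones((factor, factor), dtype=frame.dtype))

def denoise(frame, k=3):
    """k x k mean filter as a stand-in for the learned denoiser."""
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.empty(frame.shape, dtype=np.float64)
    for i in range(frame.shape[0]):
        for j in range(frame.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out
```

The enhanced frames (and, for the LSTM branch, sequences of them) would then be passed to the CNN / CNN + LSTM classifiers.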


Subjects
Deep Learning, Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Technology
4.
Sensors (Basel) ; 22(16)2022 Aug 18.
Article in English | MEDLINE | ID: mdl-36015969

ABSTRACT

In this paper, we address the challenging task of estimating the distance between different users in a Millimeter Wave (mmWave) massive Multiple-Input Multiple-Output (mMIMO) system. Conventional Time of Arrival (ToA) and Angle of Arrival (AoA) based methods require users to be in a Line-of-Sight (LoS) scenario. Under the Non-LoS (NLoS) scenario, fingerprint-based methods can extract a fingerprint that includes the users' location information from the channel state information (CSI). However, high-accuracy CSI estimation involves a huge overhead and high computational complexity. Thus, we design a new type of fingerprint generated by beam sweeping; in other words, we do not need to know the CSI to generate the fingerprint. In general, each user can record the Received Signal Strength Indicator (RSSI) of the received beams by performing beam sweeping. The measured RSSI values, arranged in a matrix, can be seen as a beam energy image containing angle and location information. However, we do not use the beam energy image as the fingerprint directly. Instead, we use the difference between two users' beam energy images as the fingerprint to train a Deep Neural Network (DNN) that learns the relationship between the fingerprints and the distance between those two users. Because the proposed fingerprint is rich in the users' location information, the DNN can easily learn this relationship. We term this the DNN-based inter-user distance (IUD) estimation method. Furthermore, we investigate the possibility of using a super-resolution network to reduce the beam sweeping overhead: using super-resolution to increase the resolution of the low-resolution beam energy images obtained by wide-beam sweeping yields a considerable improvement in accuracy.
We evaluate the proposed DNN-based IUD estimation method using original images of resolution 4 × 4, 8 × 8, and 16 × 16. Simulation results show that our method can achieve an average distance estimation error of 0.13 m for a coverage area of 60 × 30 m². Moreover, our method outperforms state-of-the-art IUD estimation methods that rely on users' location information.
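The fingerprint construction itself is simple to sketch: reshape each user's swept-beam RSSI readings into a beam energy image, then take the element-wise difference of two users' images as the DNN input. The beam-grid dimensions below are illustrative assumptions.

```python
import numpy as np

def beam_energy_image(rssi, n_az, n_el):
    """Arrange the RSSI of swept beams into a 2D 'beam energy image'
    (one pixel per azimuth/elevation beam pair)."""
    return np.asarray(rssi, dtype=np.float64).reshape(n_az, n_el)

def iud_fingerprint(img_a, img_b):
    """Fingerprint for inter-user distance estimation: the element-wise
    difference of two users' beam energy images, flattened so it can be
    fed to a fully connected DNN."""
    return (img_a - img_b).ravel()
```

Identical images yield an all-zero fingerprint, consistent with two co-located users having zero distance.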


Subjects
COVID-19, Computer Simulation, Humans, Neural Networks, Computer
5.
Article in English | MEDLINE | ID: mdl-38083153

ABSTRACT

Automatic detection of facial action units (AUs) has recently gained attention for its applications in facial expression analysis. However, using AUs in research can be challenging since they are typically manually annotated, which can be time-consuming, repetitive, and error-prone. Advancements in automated AU detection can greatly reduce the time required for this task and improve the reliability of annotations for downstream tasks, such as pain detection. In this study, we present an efficient method for detecting AUs using only 3D face landmarks. Using the detected AUs, we trained state-of-the-art deep learning models to detect pain, which validates the effectiveness of the AU detection model. Our study also establishes a new benchmark for pain detection on the BP4D+ dataset, demonstrating an 11.13% improvement in F1-score and a 3.09% improvement in accuracy using a Transformer model compared to existing studies. Our results show that utilizing only eight predicted AUs still achieves competitive results when compared to using all 34 ground-truth AUs.
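As a toy illustration of landmark-based AU features (not the paper's model: the landmark indices, the neutral-face normalization, and the distance-based proxy are all assumptions made for the example), an AU intensity can be approximated from the change in distance between two 3D face landmarks relative to a neutral expression:

```python
import numpy as np

def au_intensity(landmarks, pair, neutral_dist):
    """Toy geometric proxy for one action unit: the relative change in
    distance between two 3D landmarks compared to a neutral face.
    `pair` holds two (hypothetical) landmark indices; `neutral_dist` is
    their distance on the subject's neutral expression."""
    i, j = pair
    d = np.linalg.norm(landmarks[i] - landmarks[j])
    return (d - neutral_dist) / neutral_dist
```

A learned model such as the one in the study would map the full landmark set to all AU scores jointly rather than using hand-picked pairs.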


Subjects
Face, Facial Expression, Humans, Reproducibility of Results, Benchmarking, Pain/diagnosis
6.
Bioengineering (Basel) ; 10(7)2023 Jul 20.
Article in English | MEDLINE | ID: mdl-37508889

ABSTRACT

Alzheimer's disease (AD) is a type of dementia that becomes more likely as people age, and it currently has no known cure. As the world's population ages quickly, early screening for AD has become increasingly important. Traditional screening methods such as brain scans or psychiatric tests are stressful and costly, so patients may be reluctant to undergo them and thus fail to receive timely intervention. While researchers have explored the use of language in dementia detection, less attention has been given to face-related features. This paper investigates how face-related features can aid in detecting dementia, using the PROMPT dataset, which contains video data collected from patients with dementia during interviews. In this work, we extracted three types of features from the videos: face mesh, Histogram of Oriented Gradients (HOG) features, and Action Units (AU). We trained traditional machine learning models and deep learning models on the extracted features and investigated their effectiveness in dementia detection. Our experiments show that HOG features achieved the highest dementia detection accuracy of 79%, followed by AU features with 71% and face mesh features with 66%. These results show that face-related features have the potential to be a crucial indicator in automated computational dementia detection.
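For intuition about the HOG features mentioned above, a minimal HOG-style descriptor can be computed with numpy alone. This sketch builds a single orientation histogram over the whole frame; the real HOG descriptor (as used in the study via standard libraries) divides the image into cells and applies block normalization.

```python
import numpy as np

def hog_descriptor(img, n_bins=9):
    """Minimal HOG-style descriptor: one unsigned-orientation histogram
    over the whole image, weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # fold to [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

A horizontal intensity ramp, for example, has all of its gradient energy in the first (0-degree) orientation bin.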

7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 4151-4155, 2020 07.
Article in English | MEDLINE | ID: mdl-33018912

ABSTRACT

To build a system for monitoring elderly people living alone, an important step is identifying the presence or absence of the monitored person and their location. Such a task has several applications, which we discuss in this paper, and remains very important. Several techniques have been proposed in the literature; however, most of them suffer from issues related to privacy, coverage, or convenience. In this paper, we propose an infrared array sensor-based approach to detect the presence or absence of a person in a room. We used a wide-angle, low-resolution sensor (i.e., 32 × 24 pixels) to collect heat-related information from the monitored area, and used Deep Learning (DL) to identify the presence of up to 3 people with an accuracy reaching 97%. Our approach also detects the presence or absence of a person with 100% accuracy. Moreover, it can identify the location of the detected people within a room of dimensions 4 × 7.4 m with a margin of 0.3 m.
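A simple baseline for the localization step (not the paper's DL model: the thresholding rule and the assumption that the sensor's field of view spans the whole room are illustrative) is to threshold the thermal frame and map the hotspot centroid to room coordinates:

```python
import numpy as np

def hotspot_location(frame, room=(4.0, 7.4)):
    """Locate the warmest region of a 24x32 thermal frame and map its
    centroid to room coordinates in metres, assuming the field of view
    covers the full room. Returns None when nothing stands out."""
    f = np.asarray(frame, dtype=np.float64)
    thresh = f.mean() + 2.0 * f.std()  # illustrative threshold choice
    rows, cols = np.nonzero(f > thresh)
    if rows.size == 0:
        return None  # nobody detected
    # Pixel centroid -> metric position (rows span the long room axis).
    y = (rows.mean() + 0.5) / f.shape[0] * room[1]
    x = (cols.mean() + 0.5) / f.shape[1] * room[0]
    return x, y
```

A learned model replaces the fixed threshold with features robust to ambient temperature drift, which is what enables the reported 0.3 m margin.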


Subjects
Deep Learning, Health Status, Remote Sensing Technology, Aged, Hot Temperature, Humans, Monitoring, Physiologic, Residence Characteristics