Results 1 - 2 of 2
1.
Entropy (Basel) ; 26(1)2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38275499

ABSTRACT

The profound impacts of severe air pollution on human health, ecological balance, and economic stability are undeniable. Precise air quality forecasting is therefore a crucial necessity, enabling governmental bodies and vulnerable communities to proactively take measures that reduce exposure to harmful pollutants. Previous research has primarily focused on predicting air quality from time-series data alone, while remote-sensing image data has received limited attention. This paper proposes a new multi-modal deep-learning model, Res-GCN, which integrates high-spatial-resolution remote-sensing images with time-series air quality data from multiple stations to forecast future air quality. Res-GCN employs two deep-learning networks: one uses a residual network to extract hidden visual information from remote-sensing images, and the other uses a dynamic spatio-temporal graph convolution network to capture spatio-temporal information from the time-series data. Extracting features from two different modalities yields improved predictive performance. To demonstrate the effectiveness of the proposed model, experiments were conducted on two real-world datasets. The results show that Res-GCN effectively extracts multi-modal features, significantly enhancing the accuracy of multi-step predictions. Compared to the best-performing baseline model, the multi-step prediction's mean absolute error, root mean square error, and mean absolute percentage error improved by approximately 6%, 7%, and 7%, respectively.
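The abstract describes fusing a residual-network image branch with a graph-convolution branch over station time-series features. As a minimal illustrative sketch only (not the paper's implementation), the fusion idea can be shown in NumPy: a residual block for the image features, one normalized graph-convolution step for the station features, then concatenation and a linear forecasting head. All function names, weight shapes, and the toy fully-connected station graph are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w):
    # ResNet idea: output = input + transformed input (skip connection)
    return relu(x + x @ w)

def graph_conv(x, adj, w):
    # One graph-convolution step: average neighbour features, then project
    deg = adj.sum(axis=1, keepdims=True)
    a_norm = adj / np.maximum(deg, 1.0)       # row-normalised adjacency
    return relu(a_norm @ x @ w)

def res_gcn_forecast(img_feat, ts_feat, adj, w_img, w_gcn, w_out):
    """Fuse an image-branch feature with a graph-branch feature per station
    and map the fused vector to a multi-step forecast (hypothetical head)."""
    h_img = residual_block(img_feat, w_img)        # (stations, d)
    h_ts = graph_conv(ts_feat, adj, w_gcn)         # (stations, d)
    fused = np.concatenate([h_img, h_ts], axis=1)  # (stations, 2d)
    return fused @ w_out                           # (stations, horizon)

rng = np.random.default_rng(0)
n, d, horizon = 4, 8, 3
adj = np.ones((n, n))  # toy fully-connected graph over 4 stations
out = res_gcn_forecast(
    rng.normal(size=(n, d)), rng.normal(size=(n, d)), adj,
    rng.normal(size=(d, d)) * 0.1, rng.normal(size=(d, d)) * 0.1,
    rng.normal(size=(2 * d, horizon)) * 0.1,
)
print(out.shape)  # one 3-step forecast per station
```

In practice each branch would be a trained deep network; the sketch only shows how the two modality features meet before the prediction head.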

2.
Front Bioeng Biotechnol ; 11: 1349372, 2023.
Article in English | MEDLINE | ID: mdl-38268935

ABSTRACT

Rehabilitation robots have gained considerable attention in recent years, aiming to help immobilized patients regain motor capabilities in their limbs. However, most current rehabilitation robots are designed specifically for either the upper or the lower limbs. This limits their ability to facilitate coordinated movement between upper and lower limbs and poses challenges in accurately identifying patients' intentions for multi-limb coordinated movement. To address this gap, this research presents a multi-posture upper and lower limb cooperative rehabilitation robot (U-LLCRR). Additionally, the study proposes a method that can be adjusted to accommodate multi-channel surface electromyographic (sEMG) signals, aiming to accurately identify upper and lower limb coordinated movement intentions during rehabilitation training. Features are optimized using genetic algorithms and dissimilarity evaluation. The Sine-BWOA-LSSVM (SBL) classification model is developed using the improved Black Widow Optimization Algorithm (BWOA) to enhance the performance of the Least Squares Support Vector Machine (LSSVM) classifier. Discrete movement recognition studies validate the high precision of the SBL classification model in limb movement recognition, achieving an average accuracy of 92.87%. Finally, the U-LLCRR undergoes online testing on continuous motion, specifically "marching in place with arm swinging". The results show that the SBL classification model maintains high accuracy in recognizing continuous motion intentions, with an average identification rate of 89.25%. This indicates its potential usefulness in future active-training methods for rehabilitation robots, a promising tool for a wide range of applications in healthcare, sports, and beyond.
