Results 1 - 5 of 5
1.
IEEE J Biomed Health Inform ; 27(2): 1060-1071, 2023 Feb.
Article in English | MEDLINE | ID: mdl-37022394

ABSTRACT

Video-based photoplethysmography (VPPG) can identify arrhythmic pulses during atrial fibrillation (AF) from facial videos, providing a convenient and cost-effective way to screen for occult AF. However, facial motions in videos distort VPPG pulse signals and thus lead to false detection of AF. Photoplethysmography (PPG) pulse signals offer a possible solution to this problem owing to their high quality and resemblance to VPPG pulse signals. Given this, a pulse feature disentanglement network (PFDNet) is proposed to discover the common features of VPPG and PPG pulse signals for AF detection. Taking a VPPG pulse signal and a synchronous PPG pulse signal as inputs, PFDNet is pre-trained to extract the motion-robust features that the two signals share. The pre-trained feature extractor of the VPPG pulse signal is then connected to an AF classifier, forming a VPPG-driven AF detector after joint fine-tuning. PFDNet has been tested on 1440 facial videos of 240 subjects (50% with AF and 50% without). It achieves a Cohen's kappa value of 0.875 (95% confidence interval: 0.840-0.910, P<0.001) on video samples with typical facial motions, which is 6.8% higher than that of the state-of-the-art method. PFDNet shows significant robustness to motion interference in the video-based AF detection task, promoting the development of opportunistic AF screening in the community.
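
The paper itself provides no code; the following is a minimal, hypothetical sketch of the two-stage training idea described above: two encoders are pre-trained on synchronized VPPG/PPG pulse segments so that their shared (motion-robust) features align, and an AF classifier is then attached to the VPPG branch for joint fine-tuning. All class and function names, the architecture, and the alignment loss are illustrative assumptions, not PFDNet's actual design.

```python
# Hypothetical sketch (not the authors' code): align features of synchronized
# VPPG/PPG pulse segments during pre-training, then fine-tune the VPPG branch
# together with an AF classifier. Architecture and losses are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PulseEncoder(nn.Module):
    """1-D CNN mapping a pulse segment of shape (batch, 1, T) to a feature vector."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):
        return self.fc(self.net(x).squeeze(-1))

vppg_enc, ppg_enc = PulseEncoder(), PulseEncoder()
af_head = nn.Linear(64, 2)  # AF vs. non-AF classifier attached after pre-training

pretrain_opt = torch.optim.Adam(
    list(vppg_enc.parameters()) + list(ppg_enc.parameters()), lr=1e-3)
finetune_opt = torch.optim.Adam(
    list(vppg_enc.parameters()) + list(af_head.parameters()), lr=1e-4)

def pretrain_step(vppg_seg, ppg_seg):
    """Pull features of synchronized VPPG and PPG segments together (shared features)."""
    loss = F.mse_loss(vppg_enc(vppg_seg), ppg_enc(ppg_seg))  # simple alignment surrogate
    pretrain_opt.zero_grad()
    loss.backward()
    pretrain_opt.step()
    return loss.item()

def finetune_step(vppg_seg, af_label):
    """Joint fine-tuning of the VPPG encoder and the AF classifier on labeled videos."""
    loss = F.cross_entropy(af_head(vppg_enc(vppg_seg)), af_label)
    finetune_opt.zero_grad()
    loss.backward()
    finetune_opt.step()
    return loss.item()
```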


Subject(s)
Atrial Fibrillation, Humans, Atrial Fibrillation/diagnosis, Electrocardiography/methods, Heart Rate, Photoplethysmography/methods
2.
Biomed Opt Express ; 13(9): 4494-4509, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-36187251

ABSTRACT

Remote photoplethysmography (RPPG) can detect heart rate from facial videos in a non-contact way. However, head movement often degrades its performance in the real world. In this paper, a novel anti-motion-interference method named t-SNE-based signal separation (TSS) is proposed to solve this problem. TSS first decomposes the observed color traces into pulse-related vectors and noise vectors using the t-SNE algorithm. It then selects the vector with the most significant spectral peak as the pulse signal for heart rate measurement. The proposed method is tested on a self-collected dataset (17 males and 8 females) and two public datasets (UBFC-RPPG and VIPL-HR). Experimental results show that the proposed method outperforms state-of-the-art methods, especially on videos containing head movements, improving the Pearson correlation coefficient by 5% over the best competing method. In summary, this work substantially strengthens the motion robustness of RPPG and thereby contributes to the development of video-based heart rate detection.
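
The t-SNE-based decomposition itself cannot be reproduced from the abstract alone, but the second step, selecting the candidate component whose spectrum shows the most dominant peak in the heart-rate band and reading the heart rate from it, can be sketched. The band limits, the dominance score, and all names below are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch of the spectral-peak selection step only (the component
# decomposition is assumed to have been done by an earlier stage). The heart-rate
# band and the dominance score are assumptions, not the paper's definitions.
import numpy as np

def select_pulse_component(components, fs, band=(0.7, 4.0)):
    """components: array of shape (n_components, n_samples); fs: sampling rate (Hz).
    Returns (pulse_component, heart_rate_bpm)."""
    freqs = np.fft.rfftfreq(components.shape[1], d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    best_idx, best_score, best_hr = 0, -np.inf, None
    for i, comp in enumerate(components):
        spectrum = np.abs(np.fft.rfft(comp - comp.mean()))
        band_spec = spectrum[in_band]
        score = band_spec.max() / (spectrum.sum() + 1e-12)  # peak dominance in HR band
        if score > best_score:
            best_idx, best_score = i, score
            best_hr = 60.0 * freqs[in_band][band_spec.argmax()]
    return components[best_idx], best_hr

# Toy example: 30 fps, 10 s of three candidate traces
fs = 30.0
t = np.arange(0, 10, 1.0 / fs)
candidates = np.stack([
    np.sin(2 * np.pi * 1.2 * t),        # pulse-like trace (~72 bpm)
    np.random.randn(t.size),            # broadband noise
    np.sin(2 * np.pi * 0.3 * t),        # slow motion/illumination drift
])
pulse, hr = select_pulse_component(candidates, fs)
print(f"Estimated heart rate: {hr:.1f} bpm")
```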

3.
Physiol Meas ; 43(11), 2022 Nov 11.
Article in English | MEDLINE | ID: mdl-36301705

ABSTRACT

Objective. Daily blood pressure (BP) monitoring is essential since BP levels reflect cardiac pumping function and vasoconstriction. Although various neural network-based BP estimation approaches have been proposed, they have practical shortcomings such as low estimation accuracy and poor model generalization. Based on the strategy of pre-training and partial fine-tuning, this work proposes a non-invasive method for BP estimation using the photoplethysmography (PPG) signal. Approach. To learn the PPG-BP relationship, a deep convolutional bidirectional recurrent neural network (DC-Bi-RNN) was pre-trained with data from the public Medical Information Mart for Intensive Care (MIMIC-III) database. A small quantity of data from the target subject was then used to fine-tune specific layers of the pre-trained model so that it learns more individual-specific information and achieves highly accurate BP estimation. Main results. The mean absolute error and the Pearson correlation coefficient (r) of the proposed algorithm are 3.21 mmHg and 0.919 for systolic BP (SBP), and 1.80 mmHg and 0.898 for diastolic BP (DBP). The experimental results show that our method outperforms other methods, meets the requirements of the Association for the Advancement of Medical Instrumentation standard, and receives an A grade under the British Hypertension Society standard. Significance. The proposed method applies the strategy of pre-training and partial fine-tuning to BP estimation and verifies its effectiveness in improving the accuracy of non-invasive BP estimation.
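
As a rough illustration of the pre-training plus partial fine-tuning strategy described above, the hypothetical sketch below uses a small convolutional plus bidirectional GRU regressor as a stand-in for the paper's DC-Bi-RNN: it is first trained on a large corpus (e.g. MIMIC-III PPG segments with reference BP), then the convolutional front end is frozen and only the recurrent layer and output head are adapted with a small amount of target-subject data. Layer sizes, the choice of frozen layers, and all names are assumptions, not the authors' configuration.

```python
# Hypothetical sketch of pre-training plus partial fine-tuning for PPG-to-BP
# regression (a stand-in for the paper's DC-Bi-RNN; names and sizes are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PPGToBP(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(              # generic PPG feature extractor
            nn.Conv1d(1, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=9, padding=4), nn.ReLU(),
        )
        self.rnn = nn.GRU(32, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 64, 2)        # predicts [SBP, DBP] in mmHg

    def forward(self, x):                       # x: (batch, 1, T)
        h = self.conv(x).transpose(1, 2)        # (batch, T, 32)
        out, _ = self.rnn(h)
        return self.head(out[:, -1])            # last time step -> BP estimate

model = PPGToBP()

# Stage 1 (assumed): pre-train on a large PPG/BP corpus such as MIMIC-III.
pretrain_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stage 2: partial fine-tuning on a small amount of target-subject data --
# freeze the convolutional front end, adapt only the recurrent layer and head.
for p in model.conv.parameters():
    p.requires_grad = False
finetune_opt = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)

def train_step(ppg_seg, bp_ref, opt):
    """ppg_seg: (batch, 1, T); bp_ref: (batch, 2) reference [SBP, DBP]."""
    loss = F.l1_loss(model(ppg_seg), bp_ref)    # MAE, matching the reported metric
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```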


Subject(s)
Blood Pressure Determination, Hypertension, Humans, Blood Pressure/physiology, Blood Pressure Determination/methods, Photoplethysmography/methods, Neural Networks, Computer, Hypertension/diagnosis
4.
IEEE J Biomed Health Inform ; 26(4): 1672-1683, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34735349

ABSTRACT

Atrial fibrillation (AF) is the most common arrhythmia, but an estimated 30% of patients with AF are unaware of their condition. The purpose of this work is to design a model, VidAF, for AF screening from facial videos, with a focus on addressing typical real-life motion disturbances such as head movements and expression changes. The model detects a pulse signal from the skin color changes in a facial video using a convolutional neural network, incorporating a phase-driven attention mechanism to suppress motion signals in the spatial domain. It then encodes the pulse signal into discriminative features for AF classification using a coding neural network, employing a de-noise coding strategy to improve the robustness of the features to motion signals in the time domain. The proposed model was tested on a dataset containing 1200 samples from 100 AF patients and 100 non-AF subjects. Experimental results demonstrated that VidAF is significantly robust to facial motions, predicting clean pulse signals with a mean absolute error of inter-pulse intervals below 100 milliseconds. In addition, the model achieved promising performance in AF identification, showing an accuracy of more than 90% in multiple challenging scenarios. VidAF provides a more convenient and cost-effective approach for opportunistic AF screening in the community.
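
The abstract's de-noise coding strategy suggests an encoder trained so that the features of a motion-corrupted pulse segment can both reconstruct the clean pulse and discriminate AF. The sketch below is one speculative reading of that idea, in the style of a denoising autoencoder with a classification head; the architecture, noise handling, and loss weighting are assumptions, not VidAF's actual design.

```python
# Speculative sketch of a "de-noise coding" style feature extractor: the encoder is
# trained so that features of a motion-corrupted pulse segment reconstruct the clean
# pulse and classify AF. Architecture and loss weights are assumptions, not VidAF's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisePulseCoder(nn.Module):
    def __init__(self, seg_len: int = 256, feat_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(seg_len, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, seg_len),
        )
        self.classifier = nn.Linear(feat_dim, 2)   # AF vs. non-AF head

    def forward(self, noisy_seg):
        feat = self.encoder(noisy_seg)
        return feat, self.decoder(feat), self.classifier(feat)

model = DenoisePulseCoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(clean_seg, noisy_seg, af_label):
    """clean_seg, noisy_seg: (batch, seg_len); af_label: (batch,) with 0/1 labels."""
    _, recon, logits = model(noisy_seg)
    loss = (F.mse_loss(recon, clean_seg)           # de-noise reconstruction term
            + F.cross_entropy(logits, af_label))   # AF discrimination term
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```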


Subject(s)
Atrial Fibrillation, Algorithms, Atrial Fibrillation/diagnosis, Electrocardiography, Heart Rate, Humans, Mass Screening/methods, Neural Networks, Computer
5.
Biomed Opt Express ; 11(4): 1876-1891, 2020 Apr 01.
Article in English | MEDLINE | ID: mdl-32341854

ABSTRACT

With the popularity of smartphones, non-contact video-based vital sign monitoring using a camera has gained increasing attention in recent years. In particular, imaging photoplethysmography (IPPG), a technique for extracting pulse waves from videos, enables daily monitoring of physiological information, including heart rate, respiration rate, and blood oxygen saturation. The main challenge for accurate pulse wave extraction from facial videos is that the facial color intensity change caused by cardiovascular activity is subtle and often heavily disturbed by noise such as illumination variation, facial expression changes, and head movements. Even slight interference can severely hinder pulse wave extraction and reduce the accuracy of the calculated vital signs. In recent years, many approaches have been proposed to eliminate noise, such as filter banks, adaptive filters, Distance-PPG, and machine learning, but these methods mainly focus on heart rate detection and neglect the retention of useful details of the pulse wave. For example, the pulse wave extracted by the filter bank method lacks the dicrotic wave and approaches a sine wave, yet dicrotic waves are essential for calculating vital signs such as blood viscosity and blood pressure. Therefore, a new framework is proposed to achieve accurate pulse wave extraction. It contains two main steps: 1) a preprocessing procedure to remove baseline offset and high-frequency random noise; and 2) a self-adaptive singular spectrum analysis algorithm to obtain cyclical components and remove aperiodic irregular noise. Experimental results show that the proposed method can extract detail-preserving pulse waves from facial videos in realistic situations and outperforms state-of-the-art methods in terms of detail preservation and real-time heart rate estimation. Furthermore, the pulse wave extracted by our approach enables the non-contact estimation of atrial fibrillation, heart rate variability, blood pressure, and other physiological indices that require a standard pulse wave.
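
To make the two-step framework concrete, the sketch below implements a simplified version: a band-pass preprocessing step to remove baseline offset and high-frequency noise, followed by a basic singular spectrum analysis (SSA) that keeps the components whose dominant frequency falls in a plausible heart-rate band. The window length, band limits, and component-selection rule are assumptions; the paper's self-adaptive grouping is more elaborate than this.

```python
# Simplified illustration of the framework above: band-pass preprocessing plus a
# basic SSA that keeps heart-rate-band components. Parameters are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, fs, lo=0.7, hi=4.0, order=3):
    """Remove baseline drift and high-frequency noise from a 1-D color trace."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def ssa_pulse(x, fs, window=60, band=(0.7, 4.0)):
    """Reconstruct the quasi-periodic pulse wave from a 1-D trace via basic SSA."""
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])   # trajectory matrix
    u, s, vt = np.linalg.svd(traj, full_matrices=False)

    recon = np.zeros(n)
    freqs = np.fft.rfftfreq(window, d=1.0 / fs)
    for i in range(len(s)):
        # keep rank-1 components whose left singular vector oscillates in the HR band
        dom = freqs[np.abs(np.fft.rfft(u[:, i])).argmax()]
        if band[0] <= dom <= band[1]:
            comp = s[i] * np.outer(u[:, i], vt[i])
            # diagonal averaging (Hankelization) back to a 1-D time series
            recon += np.array([np.mean(comp[::-1, :].diagonal(j - window + 1))
                               for j in range(n)])
    return recon

# Toy example: 30 fps trace with drift and noise on top of a pulse-like wave
fs = 30.0
t = np.arange(0, 20, 1.0 / fs)
trace = (np.sin(2 * np.pi * 1.3 * t)                 # fundamental pulse component
         + 0.3 * np.sin(2 * np.pi * 2.6 * t)         # harmonic (dicrotic-like detail)
         + 0.8 * t / t[-1]                           # baseline drift
         + 0.2 * np.random.randn(t.size))            # random noise
pulse = ssa_pulse(bandpass(trace, fs), fs)
```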
