Results 1 - 2 of 2
1.
Annu Int Conf IEEE Eng Med Biol Soc; 2020: 1194-1197, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33018201

ABSTRACT

Over the last few years, camera-based estimation of vital signs, referred to as imaging photoplethysmography (iPPG), has garnered significant attention due to the relative simplicity, ease, unobtrusiveness, and flexibility of such measurements. iPPG is expected to be integrated into a host of emerging applications in areas as diverse as autonomous cars, neonatal monitoring, and telemedicine. Despite this potential, the primary challenge of non-contact camera-based measurement is relative motion between the camera and the subject. Current techniques employ 2D feature tracking to reduce the effect of subject and camera motion, but they are limited to handling translational and in-plane motion. In this paper, we study, for the first time, the utility of 3D face tracking to allow iPPG to retain robust performance even in the presence of out-of-plane and large relative motions. We use an RGB-D camera to obtain 3D information from the subjects and use the spatial and depth information to fit a 3D face model and track it over the video frames. This allows us to estimate correspondence over the entire video with pixel-level accuracy, even in the presence of out-of-plane or large motions. We then estimate iPPG from the warped video data, which ensures per-pixel correspondence over the entire window length used for estimation. Our experiments demonstrate improved robustness when head motion is large.
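To make the downstream estimation step concrete, the following is a minimal sketch (not the paper's implementation) of how a pulse rate could be read out once per-pixel correspondence has been established: spatially average the green channel of the motion-compensated skin region, band-pass filter the trace to the physiological heart-rate band, and take the dominant spectral peak. The function name, band limits, and data layout are illustrative assumptions.

```python
# Hedged sketch of the downstream iPPG step: given per-frame skin patches that are
# already in per-pixel correspondence (e.g., via a tracked 3D face model), average
# the green channel, band-pass filter around plausible heart rates, and read the
# pulse rate off the dominant spectral peak. Names are illustrative, not from the paper.
import numpy as np
from scipy.signal import butter, filtfilt, periodogram

def estimate_pulse_rate(skin_frames, fps, low_hz=0.7, high_hz=4.0):
    """skin_frames: sequence of (H, W, 3) RGB patches, motion-compensated."""
    # Spatial average of the green channel per frame -> raw iPPG trace.
    trace = np.array([frame[..., 1].mean() for frame in skin_frames], dtype=float)
    trace -= trace.mean()  # remove the DC component

    # Band-pass filter to the physiological heart-rate band (~42-240 bpm).
    b, a = butter(3, [low_hz, high_hz], btype="bandpass", fs=fps)
    filtered = filtfilt(b, a, trace)

    # Dominant frequency of the filtered signal -> pulse rate in beats per minute.
    freqs, power = periodogram(filtered, fs=fps)
    return freqs[np.argmax(power)] * 60.0
```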


Subjects
Algorithms, Photoplethysmography, Face, Physiological Monitoring, Motion (Physics)
2.
Healthc Technol Lett; 6(6): 249-254, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32038866

ABSTRACT

For effective in situ endoscopic diagnosis and treatment, measurement of polyp sizes is important, and 3D endoscopic systems have been researched for this purpose. Among such systems, an active stereo technique, which projects a special pattern in which each feature is coded, is a promising approach because of its simplicity and high precision. However, previous work on this approach has two problems. First, the quality of 3D reconstruction depended on the stability of feature extraction from the images captured by the endoscope camera. Second, because of the limited pattern projection area, the reconstructed region was relatively small. In this Letter, the authors propose a learning-based technique using convolutional neural networks to solve the first problem, and an extended bundle adjustment technique, which integrates multiple shapes into a consistent single shape, to address the second. The effectiveness of the proposed techniques was evaluated experimentally against previous techniques.
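As a rough illustration of the shape-integration idea (not the authors' extended bundle adjustment), the sketch below rigidly aligns several partial reconstructions to a reference scan with a least-squares (Kabsch) fit over known corresponding points and stacks them into one point set; the published method jointly refines all shapes, which this simplified stand-in does not attempt. Function names and the correspondence format are assumptions.

```python
# Minimal sketch of merging partially overlapping 3D reconstructions into a single
# consistent shape via per-scan rigid alignment (Kabsch / Procrustes) against a
# reference scan over known corresponding points. Simplified stand-in only.
import numpy as np

def kabsch_align(src, dst):
    """Rigid transform (R, t) mapping points src (N,3) onto dst (N,3) in a
    least-squares sense."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def merge_partial_shapes(reference, partial_scans, correspondences):
    """reference: (M,3) points; partial_scans: list of (Ni,3) arrays;
    correspondences: list of (indices_in_scan, indices_in_reference) pairs."""
    merged = [reference]
    for scan, (scan_idx, ref_idx) in zip(partial_scans, correspondences):
        R, t = kabsch_align(scan[scan_idx], reference[ref_idx])
        merged.append(scan @ R.T + t)            # move whole scan into the reference frame
    return np.vstack(merged)
```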
