Results 1 - 6 of 6
1.
Sensors (Basel) ; 14(2): 1961-87, 2014 Jan 24.
Article in English | MEDLINE | ID: mdl-24469352

ABSTRACT

Low-cost systems that can obtain a high-quality foreground segmentation almost independently of the existing illumination conditions in indoor environments are very desirable, especially for security and surveillance applications. In this paper, a novel foreground segmentation algorithm that uses only a Kinect depth sensor is proposed to satisfy the aforementioned system characteristics. This is achieved by combining a mixture-of-Gaussians background subtraction algorithm with a new Bayesian network that robustly predicts the foreground/background regions between consecutive time steps. The Bayesian network explicitly exploits the intrinsic characteristics of the depth data by means of two dynamic models that estimate the spatial and depth evolution of the foreground/background regions. The most remarkable contribution is the depth-based dynamic model, which predicts the changes in the foreground depth distribution between consecutive time steps. This is a key difference with regard to visible imagery, where the color/gray distribution of the foreground is typically assumed to be constant. Experiments carried out on two different depth-based databases demonstrate that the proposed combination of algorithms obtains a more accurate foreground/background segmentation than other state-of-the-art approaches.
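The background subtraction stage described above can be illustrated with a minimal sketch. This is not the paper's full mixture-of-Gaussians model with the Bayesian prediction network; it is a simplified per-pixel running-Gaussian background model over depth values, with the learning rate, initial variance, and Mahalanobis threshold chosen for illustration:

```python
import numpy as np

class DepthBackgroundModel:
    """Per-pixel running Gaussian over depth values: a simplified stand-in
    for a mixture-of-Gaussians background subtraction stage on depth maps."""

    def __init__(self, shape, alpha=0.05, k=2.5):
        self.mean = np.zeros(shape, dtype=np.float64)
        self.var = np.full(shape, 400.0)  # initial variance (mm^2), assumed
        self.alpha = alpha                # learning rate, assumed
        self.k = k                        # Mahalanobis threshold, assumed
        self.initialised = False

    def apply(self, depth):
        depth = depth.astype(np.float64)
        if not self.initialised:
            self.mean[:] = depth
            self.initialised = True
            return np.zeros(depth.shape, dtype=bool)
        # A pixel far from its background Gaussian is labelled foreground.
        fg = np.abs(depth - self.mean) > self.k * np.sqrt(self.var)
        # Update the model only where the pixel matched the background.
        bg = ~fg
        d = depth - self.mean
        self.mean[bg] += self.alpha * d[bg]
        self.var[bg] += self.alpha * (d[bg] ** 2 - self.var[bg])
        return fg
```

Feeding the model a stable depth background and then a frame with a closer object yields a foreground mask over the object region; the paper's contribution is in replacing this naive per-frame decision with dynamic models that predict how the foreground depth distribution evolves over time.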

2.
Sci Data ; 10(1): 162, 2023 03 23.
Article in English | MEDLINE | ID: mdl-36959280

ABSTRACT

SPHERE is a large multidisciplinary project to research and develop a sensor network that facilitates home healthcare through activity monitoring, specifically of activities of daily living. It aims to use the latest technologies in low-powered sensors, the internet of things, machine learning, and automated decision making to provide benefits to patients and clinicians. This dataset comprises data collected from a SPHERE sensor network deployment during a set of experiments conducted in the 'SPHERE House' in Bristol, UK, during 2016, including video tracking, accelerometer, and environmental sensor data obtained from volunteers undertaking both scripted and non-scripted activities of daily living in a domestic residence. Trained annotators provided ground-truth labels annotating posture, ambulation, activity, and location. This dataset is a valuable resource both within and outside the machine learning community, particularly for developing and evaluating algorithms that identify activities of daily living from multi-modal sensor data in real-world environments. A subset of this dataset was released as a machine learning competition in association with the European Conference on Machine Learning (ECML-PKDD 2016).


Subjects
Activities of Daily Living , Ambulatory Monitoring , Humans , Algorithms , Machine Learning
3.
IEEE Trans Biomed Eng ; 65(6): 1421-1431, 2018 06.
Article in English | MEDLINE | ID: mdl-29787997

ABSTRACT

OBJECTIVE: We propose a novel depth-based photoplethysmography (dPPG) approach to reduce motion artifacts in respiratory volume-time data and improve the accuracy of remote pulmonary function testing (PFT) measures. METHOD: Following spatial and temporal calibration of two opposing RGB-D sensors, a dynamic three-dimensional model of the subject performing PFT is reconstructed and used to decouple trunk movements from respiratory motions. Depth-based volume-time data are then retrieved, calibrated, and used to compute 11 clinical PFT measures for forced vital capacity and slow vital capacity spirometry tests. RESULTS: A dataset of 35 subjects (298 sequences) was collected and used to evaluate the proposed dPPG method by comparing depth-based PFT measures to the measures provided by a spirometer. Other comparative experiments between the dPPG and the single-Kinect approach, such as Bland-Altman analysis, similarity-measure performance, intra-subject error analysis, and statistical analysis of tidal volume and main effort scaling factors, all show the superior accuracy of the dPPG approach. CONCLUSION: We introduce a depth-based whole-body photoplethysmography approach, which reduces motion artifacts in depth-based volume-time data and greatly improves the accuracy of depth-based computed measures. SIGNIFICANCE: Compared to the single-Kinect approach, the proposed dPPG method roughly halves the error mean and standard deviation of three FEF measures, IC, and ERV. These significant improvements establish the potential for unconstrained remote respiratory monitoring and diagnosis.
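One of the comparative analyses named above, Bland-Altman analysis, has a standard computation that can be sketched briefly. The function below computes the bias and 95% limits of agreement between paired measurements (e.g. depth-derived vs. spirometer PFT measures); the function name and interface are illustrative, not taken from the paper:

```python
import numpy as np

def bland_altman(device, reference):
    """Bland-Altman agreement statistics between paired measurements:
    returns the mean difference (bias) and the 95% limits of agreement."""
    device = np.asarray(device, dtype=float)
    reference = np.asarray(reference, dtype=float)
    diff = device - reference
    bias = diff.mean()
    sd = diff.std(ddof=1)             # sample standard deviation of differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    return bias, loa
```

A method agrees well with the reference when the bias is near zero and the limits of agreement are narrow relative to clinically acceptable error.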


Subjects
Photoplethysmography/methods , Remote Sensing Technology/methods , Respiratory Function Tests/methods , Signal Processing, Computer-Assisted , Whole Body Imaging/methods , Adult , Artifacts , Female , Humans , Imaging, Three-Dimensional/methods , Male , Motion (Physics)
4.
Front Physiol ; 8: 65, 2017.
Article in English | MEDLINE | ID: mdl-28223945

ABSTRACT

Introduction: There is increasing interest in technologies that may enable remote monitoring of respiratory disease. Traditional methods for assessing respiratory function, such as spirometry, can be expensive and require specialist training to perform and interpret. Remote, non-contact tracking of chest wall movement has been explored in the past using structured light, accelerometers, and impedance pneumography, but these have often been costly and their clinical utility remains to be defined. We present data from a 3-dimensional time-of-flight camera (found in gaming consoles) used to estimate chest volume during routine spirometry maneuvers. Methods: Patients were recruited from a general respiratory physiology laboratory. Spirometry was performed according to international standards using an unmodified spirometer. A Microsoft Kinect V2 time-of-flight depth sensor was used to reconstruct 3-dimensional models of the subject's thorax to estimate volume-time and flow-time curves, following the introduction of a scaling factor to transform measurements into volume estimates. The Bland-Altman method was used to assess agreement of the model's estimates with simultaneous recordings from the spirometer. Patient characteristics were used to assess predictors of error using regression analysis and to further explore the scaling factors. Results: The chest volume change estimated by the Kinect camera during spirometry tracked respiratory rate accurately and estimated forced vital capacity (FVC) and vital capacity to within 1%. Forced expiratory volume estimation did not demonstrate acceptable limits of agreement, with 61.9% of readings showing a >150 ml difference. Linear regression including age, gender, height, weight, and pack-years of smoking explained 37.0% of the variance in the scaling factor for volume estimation. This technique had a positive predictive value of 0.833 for detecting obstructive spirometry.
Conclusion: These data illustrate the potential of 3D time-of-flight cameras to remotely monitor respiratory rate. This is not a replacement for conventional spirometry and needs further refinement. Further algorithms are being developed to allow its independence from spirometry. Benefits include simplicity of set-up, no need for specialist training, and low cost. This technique warrants further refinement and validation in larger cohorts.
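The core idea in the methods above (chest displacement in a depth image, integrated over pixel area and mapped to litres via a fitted scaling factor) can be sketched minimally. This is not the paper's thorax reconstruction; the ROI, baseline frame, per-pixel area, and function names are illustrative assumptions:

```python
import numpy as np

def chest_volume_signal(depth_frames, roi, baseline, pixel_area_m2):
    """Relative chest-volume signal from a depth sequence: within an
    assumed thorax ROI (r0, r1, c0, c1), displacement toward the camera
    relative to a baseline frame is integrated over pixel area."""
    r0, r1, c0, c1 = roi
    vols = []
    for frame in depth_frames:
        disp = baseline[r0:r1, c0:c1] - frame[r0:r1, c0:c1]  # metres toward camera
        vols.append(disp.sum() * pixel_area_m2)              # relative m^3
    return np.array(vols)

def fit_scale(signal, spirometer_volume):
    """Least-squares scaling factor mapping the depth signal to
    simultaneously recorded spirometer volumes."""
    s = np.asarray(signal, dtype=float)
    v = np.asarray(spirometer_volume, dtype=float)
    return float((s @ v) / (s @ s))
```

The reported regression of patient characteristics against this scaling factor is then a way of predicting the scale without a spirometer, which is what "independence from spirometry" would require.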

5.
IEEE Trans Biomed Eng ; 64(8): 1943-1958, 2017 08.
Article in English | MEDLINE | ID: mdl-27925582

ABSTRACT

OBJECTIVE: We propose a remote, noninvasive approach to pulmonary function testing (PFT) using a depth sensor. METHOD: After generating a point cloud from scene depth values, we construct a three-dimensional model of the subject's chest. Then, by estimating the chest volume variation throughout a sequence, we generate volume-time and flow-time data for two prevalent spirometry tests: forced vital capacity (FVC) and slow vital capacity (SVC). Tidal-volume and main-effort sections of the volume-time data are analyzed and calibrated separately to remove the effects of the subject's torso motion. After automatic extraction of keypoints from the volume-time and flow-time curves, seven FVC measures (FVC, FEV1, PEF, FEF 25%, FEF 50%, FEF 75%, and FEF [Formula: see text]) and four SVC measures (VC, IC, TV, and ERV) are computed and then validated against measures from a spirometer. A dataset of 85 patients (529 sequences in total), attending a respiratory outpatient service for spirometry, was collected and used to evaluate the proposed method. RESULTS: High correlation was achieved between the proposed method and the spirometer for FVC and SVC measures, on both intra-test and intra-subject comparisons. CONCLUSION: Our proposed depth-based approach is able to remotely compute eleven clinical PFT measures, giving highly accurate results when evaluated against a spirometer on a dataset comprising 85 patients. SIGNIFICANCE: Experimental results computed over an unprecedented number of clinical patients confirm that chest surface motion is linearly related to changes in lung volume, which establishes the potential for an accurate, low-cost, and remote alternative to traditional cumbersome methods, such as spirometry.
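The keypoint extraction named above turns a volume-time curve into clinical measures. As a hedged sketch (not the paper's algorithm), two of the standard measures can be read off a forced-exhalation volume-time curve directly, assuming the curve starts at the onset of forced exhalation and is expressed as exhaled volume in litres:

```python
import numpy as np

def fvc_fev1(volume, t):
    """FVC and FEV1 from a forced-exhalation volume-time curve
    (exhaled litres sampled at times t in seconds, t[0] = exhalation onset).
    A simplified illustration of volume-time keypoint extraction."""
    volume = np.asarray(volume, dtype=float)
    t = np.asarray(t, dtype=float)
    fvc = float(volume.max())                # total exhaled volume
    fev1 = float(np.interp(1.0, t, volume))  # volume exhaled in the first second
    return fvc, fev1
```

Flow-based measures such as PEF and the FEF family are obtained analogously from the flow-time curve, i.e. the time derivative of this signal.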


Subjects
Diagnosis, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Monitoring, Ambulatory/methods , Respiratory Mechanics/physiology , Thorax/physiology , Tidal Volume/physiology , Diagnosis, Computer-Assisted/instrumentation , Humans , Imaging, Three-Dimensional/instrumentation , Monitoring, Ambulatory/instrumentation , Reproducibility of Results , Respiratory Function Tests/instrumentation , Respiratory Function Tests/methods , Sensitivity and Specificity
6.
IEEE Trans Cybern ; 43(6): 1560-71, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24273141

ABSTRACT

Low-cost depth cameras, such as the Microsoft Kinect, have completely changed the world of human-computer interaction through controller-free gaming applications. However, depth data provided by the Kinect sensor presents several noise-related problems that have to be tackled to improve its accuracy, thereby obtaining more reliable game control platforms and broadening its applicability. In this paper, we present a depth-color fusion strategy for 3-D modeling of indoor scenes with the Kinect. Accurate depth and color models of the background elements are iteratively built and used to detect moving objects in the scene. Kinect depth data is processed with an innovative adaptive joint-bilateral filter that efficiently combines depth and color by analyzing an edge-uncertainty map and the detected foreground regions. Results show that the proposed approach efficiently tackles the main problems of Kinect data: distance-dependent depth maps, spatial noise, and temporal random fluctuations are dramatically reduced; object depth boundaries are refined; and non-measured depth pixels are interpolated. Moreover, a robust depth and color background model and accurate moving-object silhouettes are generated.
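The joint-bilateral filtering idea above can be sketched in its plain, non-adaptive form (without the paper's edge-uncertainty map or foreground handling): depth is smoothed with a range weight taken from the registered intensity image, so depth edges that coincide with color edges are preserved, and zero-valued (non-measured) depth pixels are filled from valid neighbours. Parameter values and the function name are illustrative:

```python
import numpy as np

def joint_bilateral_depth(depth, gray, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Plain joint (cross) bilateral filter on a depth map, guided by a
    registered grayscale image. Invalid (zero) depth pixels get no weight,
    so holes are interpolated from valid neighbours."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            acc = wsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w):
                        continue
                    d = depth[ny, nx]
                    if d == 0:  # non-measured depth pixel: skip
                        continue
                    ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                    dr = gray[ny, nx] - gray[y, x]   # range weight from color
                    wr = np.exp(-(dr * dr) / (2 * sigma_r ** 2))
                    acc += ws * wr * d
                    wsum += ws * wr
            out[y, x] = acc / wsum if wsum > 0 else 0.0
    return out
```

The paper's adaptive variant modulates these weights using the edge-uncertainty map and the detected foreground regions, rather than applying fixed sigmas everywhere.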


Subjects
Color , Computer Graphics , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Subtraction Technique , User-Computer Interface , Video Games , Computer Simulation , Models, Theoretical