Results 1 - 6 of 6
1.
Sci Data; 10(1): 162, 2023 Mar 23.
Article in English | MEDLINE | ID: mdl-36959280

ABSTRACT

SPHERE is a large multidisciplinary project to research and develop a sensor network that facilitates home healthcare through activity monitoring, specifically of activities of daily living. It aims to use the latest technologies in low-power sensors, the Internet of Things, machine learning and automated decision making to provide benefits to patients and clinicians. This dataset comprises data collected from a SPHERE sensor network deployment during a set of experiments conducted in the 'SPHERE House' in Bristol, UK, during 2016, including video tracking, accelerometer and environmental sensor data obtained from volunteers undertaking both scripted and non-scripted activities of daily living in a domestic residence. Trained annotators provided ground-truth labels for posture, ambulation, activity and location. This dataset is a valuable resource both within and outside the machine learning community, particularly for developing and evaluating algorithms that identify activities of daily living from multi-modal sensor data in real-world environments. A subset of this dataset was released as a machine learning competition in association with the European Conference on Machine Learning (ECML-PKDD 2016).
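As an illustration of how such multi-modal recordings are typically consumed, the sketch below windows a tri-axial accelerometer stream into fixed-length segments and extracts simple per-axis statistics, a common baseline feature set for activity-of-daily-living classification. The sampling rate, window length and synthetic data are illustrative assumptions, not details of the SPHERE deployment.

```python
import numpy as np

def window_features(acc, fs=20, win_s=2.0):
    """Split a tri-axial accelerometer stream (N x 3) into fixed-length
    windows and extract per-axis mean/std features, a simple baseline
    for activity-of-daily-living classification."""
    win = int(fs * win_s)                  # samples per window
    n = (len(acc) // win) * win            # drop the ragged tail
    windows = acc[:n].reshape(-1, win, 3)  # (num_windows, win, 3)
    return np.concatenate([windows.mean(axis=1),
                           windows.std(axis=1)], axis=1)

# Synthetic stream: 10 s at 20 Hz of near-static gravity on the z axis.
rng = np.random.default_rng(0)
acc = rng.normal([0.0, 0.0, 1.0], 0.02, size=(200, 3))
X = window_features(acc)  # shape (5, 6): 5 windows x 6 features
```

The resulting feature matrix could then feed any standard classifier; the real SPHERE challenge combines these features with the video and environmental modalities.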


Subject(s)
Activities of Daily Living, Ambulatory Monitoring, Humans, Algorithms, Machine Learning
2.
IEEE Trans Biomed Eng; 65(6): 1421-1431, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29787997

ABSTRACT

OBJECTIVE: We propose a novel depth-based photoplethysmography (dPPG) approach to reduce motion artifacts in respiratory volume-time data and improve the accuracy of remote pulmonary function testing (PFT) measures. METHOD: Following spatial and temporal calibration of two opposing RGB-D sensors, a dynamic three-dimensional model of the subject performing PFT is reconstructed and used to decouple trunk movements from respiratory motions. Depth-based volume-time data are then retrieved, calibrated, and used to compute 11 clinical PFT measures for forced vital capacity and slow vital capacity spirometry tests. RESULTS: A dataset of 35 subjects (298 sequences) was collected and used to evaluate the proposed dPPG method by comparing depth-based PFT measures to those provided by a spirometer. Further comparative experiments between the dPPG and single-Kinect approaches, including Bland-Altman analysis, similarity-measure performance, intra-subject error analysis, and statistical analysis of tidal-volume and main-effort scaling factors, all show the superior accuracy of the dPPG approach. CONCLUSION: We introduce a depth-based whole-body photoplethysmography approach which reduces motion artifacts in depth-based volume-time data and greatly improves the accuracy of depth-based computed measures. SIGNIFICANCE: The proposed dPPG method roughly halves the error mean and standard deviation of the FEF, IC, and ERV measures compared to the single-Kinect approach. These improvements establish its potential for unconstrained remote respiratory monitoring and diagnosis.
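The Bland-Altman analysis used in the comparative experiments can be sketched in a few lines; this is a generic implementation of the method, not code from the paper.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics for two paired measurement
    series (e.g. depth-derived vs. spirometer PFT measures): the mean
    difference (bias) and the 95% limits of agreement (bias +/- 1.96 SD
    of the differences)."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Toy usage: a constant offset between methods gives a pure bias
# with zero-width limits of agreement.
bias, lo, hi = bland_altman([1.1, 2.1, 3.1], [1.0, 2.0, 3.0])
```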


Subject(s)
Photoplethysmography/methods, Remote Sensing Technology/methods, Respiratory Function Tests/methods, Computer-Assisted Signal Processing, Whole-Body Imaging/methods, Adult, Artifacts, Female, Humans, Three-Dimensional Imaging/methods, Male, Motion (Physics)
3.
Front Physiol; 8: 65, 2017.
Article in English | MEDLINE | ID: mdl-28223945

ABSTRACT

Introduction: There is increasing interest in technologies that may enable remote monitoring of respiratory disease. Traditional methods for assessing respiratory function, such as spirometry, can be expensive and require specialist training to perform and interpret. Remote, non-contact tracking of chest wall movement has been explored in the past using structured light, accelerometers and impedance pneumography, but these have often been costly and their clinical utility remains to be defined. We present data from a 3-dimensional time-of-flight camera (of the kind found in gaming consoles) used to estimate chest volume during routine spirometry manoeuvres. Methods: Patients were recruited from a general respiratory physiology laboratory. Spirometry was performed according to international standards using an unmodified spirometer. A Microsoft Kinect V2 time-of-flight depth sensor was used to reconstruct 3-dimensional models of the subject's thorax, and a scaling factor was introduced to transform these measurements into volume estimates and derive volume-time and flow-time curves. The Bland-Altman method was used to assess agreement of the model's estimates with simultaneous spirometer recordings. Patient characteristics were used to assess predictors of error using regression analysis and to further explore the scaling factors. Results: The chest volume change estimated by the Kinect camera during spirometry tracked respiratory rate accurately and estimated forced vital capacity (FVC) and vital capacity to within 1%. Forced expiratory volume estimation did not demonstrate acceptable limits of agreement, with 61.9% of readings differing by >150 ml. Linear regression including age, gender, height, weight, and pack-years of smoking explained 37.0% of the variance in the scaling factor for volume estimation. The technique had a positive predictive value of 0.833 for detecting obstructive spirometry.
Conclusion: These data illustrate the potential of 3D time-of-flight cameras to remotely monitor respiratory rate. The approach is not a replacement for conventional spirometry; further algorithms are under development to remove its dependence on spirometry. Benefits include simple set-up, no need for specialist training, and low cost. The technique warrants further refinement and validation in larger cohorts.
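The scaling-factor calibration described in the Methods can be illustrated with a minimal least-squares fit mapping raw camera volume units onto spirometer litres. The closed-form fit below is a sketch under that assumption, not the authors' exact procedure.

```python
import numpy as np

def fit_scale(cam_vol, spiro_vol):
    """Least-squares scaling factor s minimising ||s*cam - spiro||^2,
    mapping raw depth-camera chest-volume units onto spirometer litres
    (a sketch of the calibration idea, not the paper's procedure)."""
    cam = np.asarray(cam_vol, float)
    spiro = np.asarray(spiro_vol, float)
    # Closed-form solution of the 1-parameter least-squares problem.
    return float(cam @ spiro / (cam @ cam))

# Toy usage: camera units that are exactly half the spirometer litres.
s = fit_scale([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

The paper's regression on patient characteristics (age, gender, height, weight, pack-years) then tries to predict this factor without a spirometer.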

4.
IEEE Trans Biomed Eng; 64(8): 1943-1958, 2017 Aug.
Article in English | MEDLINE | ID: mdl-27925582

ABSTRACT

OBJECTIVE: We propose a remote, noninvasive approach to pulmonary function testing (PFT) using a depth sensor. METHOD: After generating a point cloud from scene depth values, we construct a three-dimensional model of the subject's chest. Then, by estimating the chest volume variation throughout a sequence, we generate volume-time and flow-time data for two prevalent spirometry tests: forced vital capacity (FVC) and slow vital capacity (SVC). Tidal-volume and main-effort sections of the volume-time data are analyzed and calibrated separately to remove the effects of the subject's torso motion. After automatic extraction of keypoints from the volume-time and flow-time curves, seven FVC measures (FVC, FEV1, PEF, FEF25%, FEF50%, FEF75%, and FEF25-75%) and four SVC measures (VC, IC, TV, and ERV) are computed and then validated against measures from a spirometer. A dataset of 85 patients (529 sequences in total), attending a respiratory outpatient service for spirometry, was collected and used to evaluate the proposed method. RESULTS: High correlation between the proposed method and the spirometer was observed for FVC and SVC measures, in both intra-test and intra-subject comparisons. CONCLUSION: Our depth-based approach remotely computes eleven clinical PFT measures with high accuracy when evaluated against a spirometer on a dataset of 85 patients. SIGNIFICANCE: Experimental results computed over an unprecedented number of clinical patients confirm that chest surface motion is linearly related to changes in lung volume, establishing the potential for an accurate, low-cost, remote alternative to traditional, cumbersome methods such as spirometry.
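Three of the listed keypoint measures can be illustrated directly on sampled volume-time data. The sketch below uses generic spirometry definitions of FVC, FEV1 and PEF, not the paper's extraction pipeline.

```python
import numpy as np

def fvc_measures(t, v):
    """Compute three generic spirometry keypoints from a sampled
    forced-expiration volume-time curve: FVC (total exhaled volume),
    FEV1 (volume exhaled in the first second) and PEF (peak flow)."""
    t, v = np.asarray(t, float), np.asarray(v, float)
    fvc = v[-1] - v[0]
    fev1 = np.interp(t[0] + 1.0, t, v) - v[0]
    flow = np.gradient(v, t)          # numerical flow-time curve
    return fvc, fev1, float(flow.max())

# Synthetic exhalation: ~4 L exponential decay over 6 s at 100 Hz.
t = np.linspace(0.0, 6.0, 601)
v = 4.0 * (1.0 - np.exp(-t))
fvc, fev1, pef = fvc_measures(t, v)
```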


Subject(s)
Computer-Assisted Diagnosis/methods, Three-Dimensional Imaging/methods, Ambulatory Monitoring/methods, Respiratory Mechanics/physiology, Thorax/physiology, Tidal Volume/physiology, Computer-Assisted Diagnosis/instrumentation, Humans, Three-Dimensional Imaging/instrumentation, Ambulatory Monitoring/instrumentation, Reproducibility of Results, Respiratory Function Tests/instrumentation, Respiratory Function Tests/methods, Sensitivity and Specificity
5.
Sensors (Basel); 14(2): 1961-87, 2014 Jan 24.
Article in English | MEDLINE | ID: mdl-24469352

ABSTRACT

Low-cost systems that can obtain high-quality foreground segmentation almost independently of the illumination conditions in indoor environments are very desirable, especially for security and surveillance applications. In this paper, a novel foreground segmentation algorithm that uses only a Kinect depth sensor is proposed to satisfy these system characteristics. This is achieved by combining a mixture-of-Gaussians-based background subtraction algorithm with a new Bayesian network that robustly predicts the foreground/background regions between consecutive time steps. The Bayesian network explicitly exploits the intrinsic characteristics of the depth data by means of two dynamic models that estimate the spatial and depth evolution of the foreground/background regions. The most remarkable contribution is the depth-based dynamic model that predicts the changes in the foreground depth distribution between consecutive time steps. This is a key difference with regard to visible imagery, where the color/gray distribution of the foreground is typically assumed to be constant. Experiments carried out on two different depth-based databases demonstrate that the proposed combination of algorithms obtains a more accurate foreground/background segmentation than other state-of-the-art approaches.
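A single-Gaussian, per-pixel running model conveys the flavor of the depth background subtraction described above. The paper uses a full mixture of Gaussians plus a Bayesian network; this is a deliberately simplified sketch with hypothetical parameter values.

```python
import numpy as np

def update_background(mean, var, depth, alpha=0.05, k=2.5):
    """One step of a per-pixel running-Gaussian depth background model.
    Pixels further than k standard deviations from the background mean
    are flagged as foreground; background pixels update the model with
    learning rate alpha."""
    fg = np.abs(depth - mean) > k * np.sqrt(var)
    bg = ~fg
    mean = np.where(bg, (1 - alpha) * mean + alpha * depth, mean)
    var = np.where(bg, (1 - alpha) * var + alpha * (depth - mean) ** 2, var)
    return mean, var, fg

# Toy frame: a flat wall at 2 m with one pixel occluded at 1 m.
mean0 = np.full((4, 4), 2.0)
var0 = np.full((4, 4), 0.01)
frame = np.full((4, 4), 2.0)
frame[1, 1] = 1.0
mean1, var1, fg = update_background(mean0, var0, frame)
```

A mixture model keeps several (mean, var, weight) triples per pixel so that, e.g., a swinging door can contribute two background modes; the paper's Bayesian network then propagates the foreground regions between frames.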

6.
IEEE Trans Cybern; 43(6): 1560-71, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24273141

ABSTRACT

Low-cost depth cameras, such as the Microsoft Kinect, have completely changed the world of human-computer interaction through controller-free gaming applications. Depth data provided by the Kinect sensor present several noise-related problems that have to be tackled to improve accuracy, thereby obtaining more reliable game-control platforms and broadening its applicability. In this paper, we present a depth-color fusion strategy for 3-D modeling of indoor scenes with Kinect. Accurate depth and color models of the background elements are iteratively built and used to detect moving objects in the scene. Kinect depth data are processed with an innovative adaptive joint-bilateral filter that efficiently combines depth and color by analyzing an edge-uncertainty map and the detected foreground regions. Results show that the proposed approach efficiently tackles the main Kinect data problems: distance-dependent depth maps, spatial noise, and temporal random fluctuations are dramatically reduced; object depth boundaries are refined; and non-measured depth pixels are interpolated. Moreover, a robust depth and color background model and accurate moving-object silhouettes are generated.
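A minimal joint-bilateral filter illustrates the depth-color fusion idea: spatial weights are combined with range weights taken from a color (here grayscale) guide, so depth edges aligned with color edges are preserved, and zero-valued "hole" pixels are excluded so the filter interpolates across them. This is a simplified sketch, not the paper's adaptive edge-uncertainty filter.

```python
import numpy as np

def joint_bilateral(depth, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Joint-bilateral smoothing of a depth map guided by a grayscale
    image: weight = spatial Gaussian * guide-range Gaussian, with
    zero-depth (non-measured) pixels dropped from the average."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2.0 * sigma_s**2))
    dpad = np.pad(np.asarray(depth, float), radius, mode='edge')
    gpad = np.pad(np.asarray(guide, float), radius, mode='edge')
    size = 2 * radius + 1
    for y in range(h):
        for x in range(w):
            dwin = dpad[y:y + size, x:x + size]
            gwin = gpad[y:y + size, x:x + size]
            rng_w = np.exp(-(gwin - guide[y, x])**2 / (2.0 * sigma_r**2))
            wgt = spatial * rng_w * (dwin > 0)   # skip depth holes
            s = wgt.sum()
            out[y, x] = (wgt * dwin).sum() / s if s > 0 else 0.0
    return out

# Toy scene: a flat 1 m surface with one non-measured (zero) pixel,
# which the filter fills in from its measured neighbours.
depth = np.full((6, 6), 1.0)
depth[3, 3] = 0.0
filled = joint_bilateral(depth, np.ones((6, 6)))
```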


Subject(s)
Color, Computer Graphics, Computer-Assisted Image Interpretation/methods, Three-Dimensional Imaging/methods, Subtraction Technique, User-Computer Interface, Video Games, Computer Simulation, Theoretical Models