Results 1 - 6 of 6
1.
J Clin Monit Comput ; 37(4): 1003-1010, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37010708

ABSTRACT

PURPOSE: Respiratory rate (RR) is one of the most common vital signs with numerous clinical uses. It is an important indicator of acute illness, and a significant change in RR is often an early indication of a potentially serious complication or clinical event such as respiratory tract infection, respiratory failure or cardiac arrest. Early identification of changes in RR allows for prompt intervention, whereas failing to detect a change may result in poor patient outcomes. Here, we report on the performance of a depth-sensing camera system for the continuous non-contact 'touchless' monitoring of respiratory rate. METHODS: Seven healthy subjects undertook a range of breathing rates from 4 to 40 breaths-per-minute (breaths/min). These were set rates of 4, 5, 6, 8, 10, 15, 20, 25, 30, 35 and 40 breaths/min. In total, 553 separate respiratory rate recordings were captured across a range of conditions including body posture, position within the bed, lighting levels and bed coverings. Depth information was acquired from the scene using an Intel D415 RealSense™ camera. These data were processed in real time to extract depth changes within the subject's torso region corresponding to respiratory activity. A respiratory rate, RRdepth, was calculated using our latest algorithm, output once per second from the device, and compared to a reference. RESULTS: An overall RMSD accuracy of 0.69 breaths/min with a corresponding bias of -0.034 was achieved across the target RR range of 4-40 breaths/min. Bland-Altman analysis revealed limits of agreement of -1.42 to 1.36 breaths/min. Three separate sub-ranges of low, normal and high rates, corresponding to < 12, 12-20, > 20 breaths/min, were also examined separately and each found to demonstrate RMSD accuracies of less than one breath-per-minute. CONCLUSIONS: We have demonstrated high accuracy in performance for respiratory rate based on a depth camera system.
We have shown the ability to perform well at both high and low rates which are clinically important.
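The bias, RMSD accuracy, and Bland-Altman limits of agreement reported in this abstract are standard agreement statistics for paired measurements. A minimal sketch of how they are computed, using made-up illustrative readings rather than the study's data:

```python
import numpy as np

def agreement_stats(est, ref):
    """Bias, RMSD, and Bland-Altman 95% limits of agreement
    for paired estimates vs. reference values."""
    est, ref = np.asarray(est, float), np.asarray(ref, float)
    diff = est - ref
    bias = diff.mean()                      # mean difference
    rmsd = np.sqrt(np.mean(diff ** 2))      # root-mean-square difference
    sd = diff.std(ddof=1)                   # sample SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    return bias, rmsd, loa

# illustrative paired RR readings (breaths/min), not the study data
bias, rmsd, loa = agreement_stats([4.2, 10.1, 19.8, 30.5], [4, 10, 20, 30])
```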


Subjects
Respiratory Rate, Vital Signs, Humans, Posture, Algorithms, Physiological Monitoring
2.
J Clin Monit Comput ; 36(3): 657-665, 2022 Jun.
Article in English | MEDLINE | ID: mdl-33743106

ABSTRACT

The monitoring of respiratory parameters is important across many areas of care within the hospital. Here we report on the performance of a depth-sensing camera system for the continuous non-contact monitoring of Respiratory Rate (RR) and Tidal Volume (TV), where these parameters were compared to a ventilator reference. Depth sensing data streams were acquired and processed over a series of runs on a single volunteer comprising a range of respiratory rates and tidal volumes to generate depth-based respiratory rate (RRdepth) and tidal volume (TVdepth) estimates. The bias and root mean squared difference (RMSD) accuracy between RRdepth and the ventilator reference, RRvent, across the whole data set were found to be -0.02 breaths/min and 0.51 breaths/min respectively. The least squares fit regression equation was determined to be: RRdepth = 0.96 × RRvent + 0.57 breaths/min and the resulting Pearson correlation coefficient, R, was 0.98 (p < 0.001). Correspondingly, the bias and root mean squared difference (RMSD) accuracy between TVdepth and the reference TVvent across the whole data set were found to be -0.21 L and 0.23 L respectively. The least squares fit regression equation was determined to be: TVdepth = 0.79 × TVvent - 0.01 L and the resulting Pearson correlation coefficient, R, was 0.92 (p < 0.001). In conclusion, a high degree of agreement was found between the depth-based respiration rate and its ventilator reference, indicating that RRdepth is a promising modality for accurate non-contact respiratory rate monitoring in the clinical setting. In addition, a high degree of correlation between depth-based tidal volume and its ventilator reference was found, indicating that TVdepth may provide a useful monitor of tidal volume trending in practice. Future work should aim to further test these parameters in the clinical setting.
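The least-squares regression equations and Pearson R values quoted above come from fitting a line through paired device/reference readings. A minimal sketch with illustrative numbers (not the study's data):

```python
import numpy as np

def fit_and_correlate(x, y):
    """Least-squares line y = slope * x + intercept, plus Pearson R."""
    slope, intercept = np.polyfit(x, y, 1)   # degree-1 polynomial fit
    r = np.corrcoef(x, y)[0, 1]              # Pearson correlation coefficient
    return slope, intercept, r

# illustrative ventilator-reference vs depth-derived rates (breaths/min)
slope, intercept, r = fit_and_correlate([5, 10, 20, 30, 40],
                                        [5.4, 10.2, 19.8, 29.5, 39.0])
```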


Subjects
Respiratory Rate, Mechanical Ventilators, Humans, Physiological Monitoring/methods, Artificial Respiration, Tidal Volume
3.
Sensors (Basel) ; 21(4), 2021 Feb 06.
Article in English | MEDLINE | ID: mdl-33561970

ABSTRACT

There is considerable interest in the noncontact monitoring of patients as it allows for reduced restriction of patients, the avoidance of single-use consumables and less patient-clinician contact and hence the reduction of the spread of disease. A technology that has come to the fore for noncontact respiratory monitoring is that based on depth sensing camera systems. This has great potential for the monitoring of a range of respiratory information including the provision of a respiratory waveform, the calculation of respiratory rate and tidal volume (and hence minute volume). Respiratory patterns and apneas can also be observed in the signal. Here we review the ability of this method to provide accurate and clinically useful respiratory information.


Subjects
Respiratory Rate, Humans, Physiological Monitoring, Tidal Volume
4.
J Imaging ; 10(3), 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38535147

ABSTRACT

This study advances livestock health management by combining a top-view 3D depth camera with deep learning for accurate cow lameness detection, classification, and precise segmentation, distinguishing it from 2D systems. It underscores the importance of early lameness detection in cattle and focuses on extracting depth data from the cow's body, with a specific emphasis on the back region's maximum value. Precise cow detection and tracking are achieved through the Detectron2 framework and Intersection Over Union (IOU) techniques. Across a three-day testing period, with observations conducted twice daily with varying cow populations (ranging from 56 to 64 cows per day), the study consistently achieves an impressive average detection accuracy of 99.94%. Tracking accuracy remains at 99.92% over the same observation period. Subsequently, the research extracts the cow's depth region using binary mask images derived from detection results and original depth images. Feature extraction generates a feature vector based on maximum height measurements from the cow's backbone area. This feature vector is utilized for classification, evaluating three classifiers: Random Forest (RF), K-Nearest Neighbor (KNN), and Decision Tree (DT). The study highlights the potential of top-view depth video cameras for accurate cow lameness detection and classification, with significant implications for livestock health management.
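The Intersection Over Union (IOU) used for tracking measures how much a detection in one frame overlaps a box from the previous frame. A generic box-IOU sketch (not the paper's Detectron2 pipeline):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # overlap rectangle corners
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # zero if boxes disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A tracker then links each current-frame detection to the previous-frame box with the highest IOU above some threshold.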

5.
Respir Med ; 220: 107463, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37993024

ABSTRACT

PURPOSE: Respiratory rate is a commonly used vital sign with various clinical applications. It serves as a crucial marker of acute health issues, and any significant alteration in respiratory rate may be an early warning sign of major issues such as infections in the respiratory tract, respiratory failure, or cardiac arrest. Timely recognition of changes in respiratory rate enables prompt medical action, while neglecting to detect a change may lead to adverse patient outcomes. Here, we report on the performance of respiratory rate determined using a depth sensing camera system (RRdepth), which allows for continuous, non-contact 'touchless' monitoring of this important vital sign. METHODS: Thirty adult volunteers undertook a range of set breathing rates to cover a target breathing range of 4-40 breaths/min. Depth information was acquired from the torso region of the subjects using an Intel D415 RealSense camera positioned above the bed. The depth information was processed to generate a respiratory signal from which RRdepth was calculated. This was compared to a manually scored capnograph reference (RRcap). RESULTS: An overall RMSD accuracy of 0.77 breaths/min was achieved across the target respiratory rate range with a corresponding bias of 0.05 breaths/min. This corresponded to a line of best fit given by RRdepth = 1.01 × RRcap - 0.22 breaths/min with an associated high degree of correlation (R = 0.997). A breakdown of the performance with respect to sub-ranges corresponding to respiratory rates of ≤7, >7-10, >10-20, >20-30, >30 breaths/min all exhibited RMSD accuracies of less than 1.00 breaths/min. We also evaluated performance on spontaneous breathing by the subjects during the study, finding an overall RMSD accuracy of 1.20 breaths/min with corresponding accuracies ≤1.30 breaths/min across each of the individual sub-ranges.
CONCLUSIONS: We have conducted an investigative study of a prototype depth sensing camera system for the non-contact monitoring of respiratory rate. The system achieved good performance with high accuracy across a wide range of rates including both clinically important high and low rates.


Subjects
Respiration, Respiratory Rate, Adult, Humans, Respiratory System, Technology, Physiological Monitoring/methods
6.
Sensors (Basel) ; 12(7): 8640-62, 2012.
Article in English | MEDLINE | ID: mdl-23012509

ABSTRACT

This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. This model can provide qualitative and quantitative analysis for the utilization of Kinect™ sensors as 3D perception sensors. In order to achieve this objective, we derived the propagation relationship of the uncertainties between the disparity image space and the real Cartesian space with the mapping function between the two spaces. Using this propagation relationship, we obtained the mathematical model for the covariance matrix of the measurement error, which represents the uncertainty for spatial position of visual features from Kinect™ sensors. In order to derive the quantitative model of spatial uncertainty for visual features, we estimated the covariance matrix in the disparity image space using collected visual feature data. Further, we computed the spatial uncertainty information by applying the covariance matrix in the disparity image space and the calibrated sensor parameters to the proposed mathematical model. This spatial uncertainty model was verified by comparing the uncertainty ellipsoids for spatial covariance matrices and the distribution of scattered matching visual features. We expect that this spatial uncertainty model and its analyses will be useful in various Kinect™ sensor applications.
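The propagation of a covariance matrix from disparity-image space to Cartesian space described above follows the standard first-order rule Σ_xyz = J Σ_uvd Jᵀ, where J is the Jacobian of the disparity-to-Cartesian mapping. A sketch under assumed, illustrative calibration parameters (not the paper's derivation; the mapping is the generic stereo/structured-light model and the Jacobian is estimated numerically):

```python
import numpy as np

# assumed pinhole + baseline calibration parameters (illustrative values)
f, b, cx, cy = 580.0, 0.075, 320.0, 240.0

def to_cartesian(p):
    """Map a disparity-space point (u, v, d) to Cartesian (X, Y, Z)."""
    u, v, d = p
    z = f * b / d                       # depth from disparity
    return np.array([(u - cx) * z / f, (v - cy) * z / f, z])

def propagate_cov(p, cov_uvd, eps=1e-6):
    """First-order propagation Sigma_xyz = J Sigma_uvd J^T,
    with the Jacobian J estimated by central finite differences."""
    p = np.asarray(p, float)
    J = np.empty((3, 3))
    for i in range(3):
        dp = np.zeros(3)
        dp[i] = eps
        J[:, i] = (to_cartesian(p + dp) - to_cartesian(p - dp)) / (2 * eps)
    return J @ cov_uvd @ J.T

# uncertainty ellipsoid axes are the eigenvectors/eigenvalues of the result
S = propagate_cov([400.0, 300.0, 30.0], np.eye(3) * 0.25)
```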
