Results 1 - 5 of 5
1.
JMIR Mhealth Uhealth; 9(7): e26149, 2021 Jul 30.
Article in English | MEDLINE | ID: mdl-34328440

ABSTRACT

BACKGROUND: Travel to clinics for chronic wound management is burdensome to patients. Remote assessment and management of wounds using mobile and telehealth approaches can reduce this burden and improve patient outcomes. An essential step in wound documentation is the capture of wound images, but poor image quality can compromise the reliability of the assessment. To date, no study has investigated the quality of remotely acquired wound images or whether these are suitable for wound self-management and telemedical interpretation of wound status.

OBJECTIVE: Our goal was to develop a mobile health (mHealth) tool for the remote self-assessment of digital ulcers (DUs) in patients with systemic sclerosis (SSc). We aimed to define and validate objective measures for assessing image quality, to evaluate whether an automated feedback feature based on real-time assessment of image quality improves the overall quality of acquired wound images, and to evaluate the feasibility of deploying the mHealth tool for home-based chronic wound self-monitoring by patients with SSc.

METHODS: We developed an mHealth tool composed of a wound imaging and management app, a custom color reference sticker, and a smartphone holder. We introduced 2 objective image quality parameters, based on the sharpness of the image and the presence of the color checker, to assess quality during acquisition and to enable a quality feedback mechanism in an advanced version of the app. We randomly assigned patients with SSc and DU to the 2 device groups (basic and feedback) to self-document their DU at home over 8 weeks. The color checker detection ratio (CCDR) and color checker sharpness (CCS) were compared between the 2 groups. We evaluated the feasibility of the mHealth tool by analyzing the usability feedback from questionnaires, user behavior and timings, and the overall quality of the wound images.

RESULTS: A total of 21 patients were enrolled, of whom 15 were included in the image quality analysis. The average CCDR was 0.96 (191/199) in the feedback group and 0.86 (158/183) in the basic group. The feedback group showed significantly higher CCS than the basic group (P<.001). The usability questionnaire results showed that the majority of patients were satisfied with the tool, although it could benefit from disease-specific adaptations. The median assessment duration was <50 seconds in all patients, indicating that the mHealth tool was efficient to use and could be integrated into the daily routine of patients.

CONCLUSIONS: We developed an mHealth tool that enables patients with SSc to acquire good-quality DU images and demonstrated that it is feasible to deploy such an app in this patient group. The feedback mechanism improved the overall image quality. The introduced technical solutions constitute a further step towards reliable and trustworthy digital health for home-based self-management of wounds.
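
The abstract does not detail how the two quality parameters are computed. A minimal sketch of how such a real-time quality gate could look, using the variance of the Laplacian as a sharpness proxy and an ArUco marker as a stand-in for the custom color reference sticker; the threshold, the marker choice, and all names are illustrative assumptions, not the authors' implementation:

```python
import cv2
import numpy as np

def sharpness_score(gray: np.ndarray) -> float:
    """Variance of the Laplacian: a common focus/sharpness proxy."""
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def color_checker_detected(gray: np.ndarray) -> bool:
    """Stand-in detector: the paper uses a custom color reference sticker;
    here we assume an ArUco marker printed on the sticker (hypothetical)."""
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.ArucoDetector(dictionary).detectMarkers(gray)
    return ids is not None and len(ids) > 0

def assess_wound_image(bgr: np.ndarray, min_sharpness: float = 100.0) -> dict:
    """Real-time quality gate: accept a frame only if it is sharp enough
    and the reference sticker is visible (threshold is illustrative)."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    sharp = sharpness_score(gray) >= min_sharpness
    checker = color_checker_detected(gray)
    return {"sharp_enough": sharp, "checker_found": checker,
            "accept": sharp and checker}
```

In the feedback version of the app, a rejected frame would then trigger a prompt to refocus the camera or reposition the sticker before the image is saved.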


Subject(s)
Mobile Applications, Telemedicine, Feasibility Studies, Feedback, Humans, Reproducibility of Results
2.
IEEE Trans Biomed Eng; 68(1): 350-359, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32396069

ABSTRACT

Continuous monitoring of respiratory activity is desirable in many clinical applications to detect respiratory events. Non-contact monitoring of respiration can be achieved with near- and far-infrared spectrum cameras. However, current technologies are not sufficiently robust to be used in clinical applications. For example, they fail to estimate an accurate respiratory rate (RR) during apnea. We present a novel algorithm based on multispectral data fusion that aims to estimate RR even during apnea. The algorithm addresses the RR estimation and apnea detection tasks independently. Respiratory information is extracted from multiple sources and fed into an RR estimator and an apnea detector, whose results are fused into a final respiratory activity estimation. We evaluated the system retrospectively using data from 30 healthy adults who performed diverse controlled breathing tasks while lying supine in a dark room and who reproduced central and obstructive apneic events. Combining respiratory information from multiple multispectral camera sources improved the root mean square error (RMSE) of the RR estimation from up to 4.64 breaths/min (monospectral data) down to 1.60 breaths/min. The median F1 scores for classifying obstructive (0.75 to 0.86) and central apnea (0.75 to 0.93) also improved. Furthermore, the independent treatment of apnea detection led to a more robust system (RMSE of 4.44 vs. 7.96 breaths/min). Our findings may represent a step towards the use of cameras for vital sign monitoring in medical applications.
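
The abstract specifies the architecture (per-source RR estimators plus an independent apnea detector whose outputs are fused) but not the fusion rule itself. A minimal sketch under the assumption of quality-weighted averaging with an apnea override; the weighting scheme, the override rule, and all names here are illustrative:

```python
import numpy as np

def fuse_rr(estimates: np.ndarray, qualities: np.ndarray,
            apnea_detected: bool) -> float:
    """Fuse per-source RR estimates (breaths/min) for one time window.

    estimates: RR estimate from each spectral source
    qualities: assumed per-source quality weights in [0, 1]
    If the independent apnea detector fires, report RR = 0 rather than
    trusting the per-source estimates, which are unreliable during apnea.
    """
    if apnea_detected:
        return 0.0
    weights = np.clip(qualities, 0.0, None)
    if weights.sum() == 0.0:
        return float(np.median(estimates))  # fallback: unweighted median
    return float(np.average(estimates, weights=weights))

def rmse(estimated: np.ndarray, reference: np.ndarray) -> float:
    """Root mean square error, as used to report RR accuracy."""
    return float(np.sqrt(np.mean((estimated - reference) ** 2)))
```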


Subject(s)
Apnea, Respiratory Rate, Adult, Algorithms, Humans, Physiological Monitoring, Respiration, Retrospective Studies
3.
Annu Int Conf IEEE Eng Med Biol Soc; 2020: 5939-5942, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33019326

ABSTRACT

Respiratory rate (RR) can be estimated from the photoplethysmogram (PPG) recorded by optical sensors in wearable devices. The fusion of estimates from different PPG features has led to an increase in accuracy, but it has also reduced the number of available final estimates because unreliable data are discarded. We propose a novel, tunable fusion algorithm that uses covariance intersection to estimate the RR from the PPG (CIF). The algorithm adapts to the number of available feature estimates and takes each estimate's trustworthiness into account. In a benchmarking experiment using the CapnoBase dataset with reference RR from capnography, we compared the CIF against the state-of-the-art Smart Fusion (SF) algorithm. The median root mean square error was 1.4 breaths/min for the CIF and 1.8 breaths/min for the SF. The CIF significantly increased the retention rate across all recordings from 0.46 to 0.90 (p < 0.001). The agreement with the reference RR was high, with a Pearson's correlation coefficient of 0.94, a bias of 0.3 breaths/min, and limits of agreement of -4.6 and 5.2 breaths/min. In addition, the algorithm was computationally efficient. Therefore, the CIF could contribute to a more robust RR estimation from wearable PPG recordings.
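
Covariance intersection itself is a standard fusion rule; the sketch below shows it for scalar RR estimates, with convex weights proportional to the inverse variances as one common practical choice. The paper's tunable weighting may differ, and the example values are hypothetical:

```python
import numpy as np

def covariance_intersection(x: np.ndarray, p: np.ndarray):
    """Covariance intersection for n scalar estimates.

    x: per-feature RR estimates (breaths/min)
    p: their variances, encoding each estimate's trustworthiness
    Returns the fused estimate and its fused variance.
    """
    w = 1.0 / p
    w = w / w.sum()                    # convex weights, sum to 1
    p_fused = 1.0 / np.sum(w / p)      # CI: P^-1 = sum_i w_i / P_i
    x_fused = p_fused * np.sum(w * x / p)
    return x_fused, p_fused

# Hypothetical example: three PPG-feature estimates with their variances.
rr, var = covariance_intersection(np.array([14.2, 15.1, 13.8]),
                                  np.array([0.5, 2.0, 1.0]))
```

Because the weights are convex, the fused result stays consistent even when the per-feature errors are correlated, which is what allows the CIF to retain windows that a discard-based scheme such as SF would drop.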


Subject(s)
Photoplethysmography, Respiratory Rate, Algorithms, Benchmarking, Capnography
4.
Annu Int Conf IEEE Eng Med Biol Soc; 2018: 5672-5675, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30441623

ABSTRACT

Thermal cameras enable non-contact estimation of the respiratory rate (RR). Accurate estimation of RR depends heavily on reliable detection of the region of interest (ROI), especially when using cameras with low pixel resolution. We present a novel approach for the automatic detection of the human nose ROI, based on facial landmark detection in an RGB camera stream that is fused with the thermal image after tracking. We evaluated the detection rate and spatial accuracy of the novel algorithm on recordings from 16 subjects under challenging detection scenarios. Results show a high detection rate (median: 100%, 5th-95th percentile: 92%-100%) and very good spatial accuracy, with an average root mean square error of 2 pixels in the detected ROI center compared to manual labeling. Therefore, the implementation of a multispectral camera fusion algorithm is a valid strategy to improve the reliability of non-contact RR estimation with nearable devices featuring thermal cameras.
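
The abstract does not state how the RGB landmark is transferred into the thermal frame. A minimal sketch assuming a one-off calibrated homography between the two cameras; the calibration step, the ROI size, and all names are assumptions:

```python
import numpy as np

def nose_roi_in_thermal(nose_xy_rgb, h_rgb_to_thermal, half_size=4):
    """Map a nose landmark from the RGB frame into the thermal frame.

    nose_xy_rgb: (x, y) nose tip from any RGB facial-landmark detector
    h_rgb_to_thermal: assumed 3x3 homography from a one-off calibration
    half_size: ROI half-width in thermal pixels (illustrative value)
    Returns an (x, y, w, h) ROI box in thermal image coordinates.
    """
    pt = np.array([nose_xy_rgb[0], nose_xy_rgb[1], 1.0])
    q = h_rgb_to_thermal @ pt
    x, y = q[0] / q[2], q[1] / q[2]    # back from homogeneous coordinates
    return (int(x) - half_size, int(y) - half_size,
            2 * half_size, 2 * half_size)
```

A small ROI box is appropriate here because the thermal sensors targeted by the paper have low pixel resolution, so the nose occupies only a few pixels.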


Subject(s)
Algorithms, Respiratory Rate, Face, Humans, Reproducibility of Results
5.
Annu Int Conf IEEE Eng Med Biol Soc; 2017: 4285-4288, 2017 Jul.
Article in English | MEDLINE | ID: mdl-29060844

ABSTRACT

Photoplethysmographic imaging (PPGi) enables the estimation of heart rate without body contact by analyzing temporal skin color changes in video recordings. Motion artifacts and atypical facial characteristics cause poor signals and currently limit the applicability of PPGi. We have developed a novel algorithm for locating cheek and forehead regions of interest (ROIs) with the aim of improving PPGi in challenging situations. The proposed approach is based on the fusion of RGB and far-infrared (FIR) video streams, where the FIR ROI is used as a fallback when RGB alone fails. We validated the algorithm and compared it against detection based on single sources, using videos from 8 subjects with distinctively different facial characteristics. The subjects performed three scenarios with incremental motion artifact content (head at rest, intensive head movements, speaking). The results showed that combining the two imaging sources increased the detection rate of cheeks from 75% (RGB) to 92% (RGB+FIR) in the challenging intensive head movement scenario. This work demonstrates that FIR imaging is complementary to simple RGB imaging and, when combined with it, adds robustness to the detection of ROIs in PPGi applications.
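
The fallback logic described above can be sketched directly; the detector callables below are placeholders for whatever RGB and FIR ROI detectors are actually used, and the return convention is an assumption:

```python
def detect_roi_with_fallback(rgb_frame, fir_frame, detect_rgb, detect_fir):
    """Source fusion by fallback: trust the RGB detector when it succeeds,
    otherwise fall back to the FIR detector.

    detect_rgb / detect_fir: placeholder callables that return an ROI box
    (x, y, w, h) or None when detection fails.
    """
    roi = detect_rgb(rgb_frame)
    if roi is not None:
        return roi, "rgb"
    roi = detect_fir(fir_frame)        # FIR used as fallback source
    if roi is not None:
        return roi, "fir"
    return None, "none"                # no detection in this frame
```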


Subject(s)
Temperature, Algorithms, Artifacts, Heart Rate, Motion, Video Recording