Results 1 - 2 of 2
1.
IEEE J Biomed Health Inform ; 27(7): 3129-3140, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37058373

ABSTRACT

Evidence is rapidly accumulating that multifactorial nocturnal monitoring, through the coupling of wearable devices and deep learning, may be disruptive for the early diagnosis and assessment of sleep disorders. In this work, optical, differential air-pressure, and acceleration signals acquired by a chest-worn sensor are processed into five somnographic-like signals, which are then used to feed a deep network. The network addresses a three-fold classification problem, predicting the overall signal quality (normal, corrupted), three breathing-related patterns (normal, apnea, irregular), and three sleep-related patterns (normal, snoring, noise). To promote explainability, the architecture generates additional information in the form of qualitative (saliency maps) and quantitative (confidence indices) data, which helps in interpreting the predictions. Twenty healthy subjects enrolled in this study were monitored during approximately ten hours of overnight sleep. The somnographic-like signals were manually labeled according to the three class sets to build the training dataset. Both record- and subject-wise analyses were performed to evaluate prediction performance and the coherence of the results. The network was accurate (0.96) in distinguishing normal from corrupted signals. Breathing patterns were predicted with higher accuracy (0.93) than sleep patterns (0.76). The prediction of irregular breathing was less accurate (0.88) than that of apnea (0.97). Within the sleep-pattern set, the distinction between snoring (0.73) and noise events (0.61) was less effective. The confidence index associated with each prediction helped to disambiguate uncertain predictions, and the saliency-map analysis provided useful insights for relating predictions to the input signal content.
While preliminary, this work supports the recent perspective on using deep learning to detect particular sleep events in multiple somnographic signals, representing a step towards bringing AI-based tools for sleep disorder detection incrementally closer to clinical translation.
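The confidence index described in the abstract can be illustrated with a minimal sketch: assuming each of the three classification heads produces raw logits, a simple confidence measure is the top softmax probability, which is low when the head is torn between classes (e.g. snoring vs. noise). The paper does not disclose its implementation; the logits, class orderings, and function names below are hypothetical.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - np.max(logits, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def confidence_index(logits):
    """Return the predicted class and a simple confidence index
    (the top softmax probability) for one classification head."""
    probs = softmax(np.asarray(logits, dtype=float))
    pred = int(np.argmax(probs))
    return pred, float(probs[pred])

# Hypothetical logits from the three heads of such a network:
quality_pred, quality_conf = confidence_index([2.0, -1.0])    # normal / corrupted
breath_pred, breath_conf = confidence_index([0.1, 3.0, 0.2])  # normal / apnea / irregular
sleep_pred, sleep_conf = confidence_index([0.9, 1.0, 1.1])    # normal / snoring / noise
```

Here the quality and breathing heads yield confident predictions, while the sleep head's near-uniform logits produce a low confidence index, flagging an ambiguous prediction of the kind the abstract says the index helps to elucidate.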


Subject(s)
Deep Learning, Wearable Electronic Devices, Humans, Polysomnography, Snoring/diagnosis, Apnea, Sleep
2.
Sensors (Basel) ; 22(7)2022 Mar 31.
Article in English | MEDLINE | ID: mdl-35408297

ABSTRACT

Identification of characteristic points in physiological signals, such as the peak of the R wave in the electrocardiogram and the peak of the systolic wave of the photoplethysmogram, is a fundamental step in the quantification of clinical parameters such as the pulse transit time. In this work, we present a novel neural architecture, called eMTUnet, to automate point identification in multivariate signals acquired with a chest-worn device. The eMTUnet consists of a single deep network that performs three tasks simultaneously: (i) localization in time of characteristic points (labeling task); (ii) evaluation of signal quality (classification task); and (iii) estimation of the reliability of the classification (reliability task). Preliminary results from overnight monitoring showed that characteristic points in the four signals were detected with recall indices of about 1.00, 0.90, 0.90, and 0.80, respectively. The accuracy of the signal quality classification was about 0.90, averaged over four different classes. The average confidence of correctly classified signals, versus misclassifications, was 0.93 vs. 0.52, demonstrating the value of the confidence index in qualifying point identification. These outcomes indicate that high-quality segmentation and classification are both ensured, bringing the use of a multi-modal framework, composed of wearable sensors and artificial intelligence, incrementally closer to clinical translation.
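Once the R peaks and PPG systolic peaks have been located, the pulse transit time mentioned above follows as the delay from each R peak to the next systolic peak. A minimal sketch, assuming peak indices on a shared time axis (the `pulse_transit_time` helper and the example indices are illustrative, not from the paper):

```python
import numpy as np

def pulse_transit_time(t, ecg_r_idx, ppg_peak_idx):
    """Pulse transit time: delay (in seconds) from each ECG R peak
    to the next PPG systolic peak on the shared time axis t."""
    ptt = []
    for r in ecg_r_idx:
        later = ppg_peak_idx[ppg_peak_idx > r]  # systolic peaks after this R peak
        if later.size:
            ptt.append(t[later[0]] - t[r])
    return np.array(ptt)

# Hypothetical peak indices on a 100 Hz time axis:
t = np.arange(0, 10, 0.01)
r_peaks = np.array([100, 200, 300])    # R peaks at 1.0, 2.0, 3.0 s
ppg_peaks = np.array([120, 222, 318])  # systolic peaks shortly after
ptt_values = pulse_transit_time(t, r_peaks, ppg_peaks)
```

In practice the peak indices would come from a detector such as the labeling head of the network, rather than being supplied by hand.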


Subject(s)
Artificial Intelligence, Neural Networks (Computer), Electrocardiography, Reproducibility of Results