1.
Article in English | MEDLINE | ID: mdl-38935470

ABSTRACT

Ubiquitous sensing from wearable devices in the wild holds promise for enhancing human well-being, from diagnosing clinical conditions and measuring stress to building adaptive, health-promoting scaffolds. However, the large volumes of data collected across heterogeneous contexts pose challenges for conventional supervised learning approaches. Representation learning from biological signals is an emerging field, catalyzed by recent advances in computational modeling and the abundance of publicly shared databases. The electrocardiogram (ECG) is the most widely researched modality in this context, with applications in health monitoring and stress and affect estimation. Yet most studies are limited by small-scale, controlled data collection and over-parameterized architecture choices. We introduce WildECG, a pre-trained state-space model for representation learning from ECG signals. We train this model in a self-supervised manner on 275,000 ten-second ECG recordings collected in the wild and evaluate it on a range of downstream tasks. The proposed model is a robust backbone for ECG analysis, providing competitive performance on most of the tasks considered while demonstrating efficacy in low-resource regimes.
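The abstract describes self-supervised pre-training on ten-second ECG windows. A minimal sketch of the data-preparation side is shown below; the sampling rate, windowing, and masked-reconstruction objective are illustrative assumptions, not the authors' exact training recipe.

```python
import numpy as np

def segment_ecg(signal, fs=128, window_s=10):
    """Split a 1-D ECG trace into non-overlapping 10-second windows."""
    n = fs * window_s
    usable = (len(signal) // n) * n
    return signal[:usable].reshape(-1, n)

def mask_segments(windows, mask_frac=0.15, rng=None):
    """Zero out a random fraction of samples in each window; a
    masked-reconstruction pre-training target would be to predict
    the hidden values from the visible ones."""
    rng = rng if rng is not None else np.random.default_rng(0)
    mask = rng.random(windows.shape) < mask_frac
    masked = windows.copy()
    masked[mask] = 0.0
    return masked, mask

# Example: a synthetic 60 s trace at 128 Hz yields six 10 s windows.
ecg = np.sin(np.linspace(0, 60 * 2 * np.pi, 60 * 128))
windows = segment_ecg(ecg)
masked, mask = mask_segments(windows)
```

A model pre-trained this way can then be fine-tuned, or probed with a lightweight head, on the downstream tasks the abstract mentions.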

2.
Ophthalmol Sci; 4(4): 100496, 2024.
Article in English | MEDLINE | ID: mdl-38682028

ABSTRACT

Purpose: To develop and test an artificial intelligence (AI) model to aid in differentiating pediatric pseudopapilledema from true papilledema on fundus photographs. Design: Multicenter retrospective study. Subjects: A total of 851 fundus photographs from 235 children (age < 18 years) with pseudopapilledema and true papilledema. Methods: Four pediatric neuro-ophthalmologists at 4 different institutions contributed fundus photographs of children with confirmed diagnoses of papilledema or pseudopapilledema. An AI model to classify fundus photographs as papilledema or pseudopapilledema was developed using a DenseNet backbone and a tribranch convolutional neural network. We performed 10-fold cross-validation and separately analyzed an external test set. The AI model's performance was compared with that of 2 masked human expert pediatric neuro-ophthalmologists, who performed the same classification task. Main Outcome Measures: Accuracy, sensitivity, and specificity of the AI model compared with human experts. Results: The area under the receiver operating characteristic curve of the AI model was 0.77 for the cross-validation set and 0.81 for the external test set. The accuracy of the AI model was 70.0% for the cross-validation set and 73.9% for the external test set. The sensitivity of the AI model was 73.4% for the cross-validation set and 90.4% for the external test set. The AI model's accuracy was significantly higher than that of human experts on the cross-validation set (P < 0.002), and the model's sensitivity was significantly higher on the external test set (P = 0.0002). The specificity of the AI model and human experts was similar (56.4%-67.3%). Moreover, the AI model was significantly more sensitive at detecting mild papilledema than human experts, whereas AI and humans performed similarly on photographs of moderate-to-severe papilledema. On review of the external test set, only 1 child (with nearly resolved pseudotumor cerebri) had both eyes with papilledema incorrectly classified as pseudopapilledema. Conclusions: When classifying fundus photographs of pediatric papilledema and pseudopapilledema, our AI model achieved > 90% sensitivity at detecting papilledema, superior to human experts. Due to the high sensitivity and low false-negative rate, AI may be useful to triage children with suspected papilledema requiring work-up to evaluate for serious underlying neurologic conditions. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
