Results 1 - 3 of 3
1.
Front Robot AI; 10: 1108114, 2023.
Article in English | MEDLINE | ID: mdl-36936408

ABSTRACT

Introduction: Video-based clinical rating plays an important role in assessing dystonia and monitoring the effect of treatment in dyskinetic cerebral palsy (CP). However, evaluation by clinicians is time-consuming, and the quality of rating depends on experience. The aim of the current study is to provide a proof of concept for a machine learning approach that automatically scores dystonia using 2D stick figures extracted from videos. Model performance was compared to human performance.

Methods: A total of 187 video sequences of 34 individuals with dyskinetic CP (8-23 years, all non-ambulatory) were filmed at rest during lying and supported sitting. Videos were scored by three raters according to the Dyskinesia Impairment Scale (DIS) for arm and leg dystonia (normalized scores ranging from 0 to 1). Pixel coordinates of the left and right wrist, elbow, shoulder, hip, knee and ankle were extracted using DeepLabCut, an open-source toolbox built on a pose estimation algorithm. Within a subset, tracking accuracy was assessed for a pretrained human model and for models trained with an increasing number of manually labeled frames. The mean absolute error (MAE) between DeepLabCut's predicted body-point positions and the manual labels was calculated. Movement and position features were then computed from the extracted body-point coordinates and fed into a Random Forest Regressor trained to predict the clinical scores. The performance of the model trained with data from one rater, evaluated by MAE (model-rater), was compared to inter-rater accuracy.

Results: A tracking accuracy of 4.5 pixels (approximately 1.5 cm) could be achieved by adding 15-20 manually labeled frames per video. The MAEs for the trained models were 0.21 ± 0.15 for arm dystonia and 0.14 ± 0.10 for leg dystonia (normalized DIS scores); the inter-rater MAEs were 0.21 ± 0.22 and 0.16 ± 0.20, respectively.

Conclusion: This proof-of-concept study shows the potential of using stick figures extracted from common videos in a machine learning approach to automatically assess dystonia. Sufficient tracking accuracy can be reached by manually labeling 15-20 frames per video. With a relatively small data set, it is possible to train a model that assesses dystonia with performance comparable to human scoring.
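A minimal sketch of the regression step described in this abstract, assuming movement and position features have already been computed from the DeepLabCut coordinates; the arrays `features` and `dis_scores` are illustrative placeholders, not data from the study:

```python
# Sketch: train a Random Forest Regressor on movement/position features
# derived from 2D body-point coordinates, and evaluate with MAE against
# a single rater's normalized DIS scores. Placeholder data throughout.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
features = rng.random((187, 20))   # one row per video sequence (assumed feature count)
dis_scores = rng.random(187)       # normalized DIS scores in [0, 1]

X_train, X_test, y_train, y_test = train_test_split(
    features, dis_scores, test_size=0.2, random_state=42
)

model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Model-rater MAE on held-out sequences, the quantity compared to inter-rater MAE
mae = mean_absolute_error(y_test, model.predict(X_test))
print(f"MAE (normalized DIS): {mae:.2f}")
```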

2.
Nat Commun; 12(1): 7068, 2021 Dec 3.
Article in English | MEDLINE | ID: mdl-34862392

ABSTRACT

Three-dimensional (3D) structures of protein complexes provide fundamental information to decipher biological processes at the molecular scale. The vast amount of experimentally and computationally resolved protein-protein interfaces (PPIs) offers the possibility of training deep learning models to aid the prediction of their biological relevance. We present here DeepRank, a general, configurable deep learning framework for data mining PPIs using 3D convolutional neural networks (CNNs). DeepRank maps features of PPIs onto 3D grids and trains a user-specified CNN on these 3D grids. DeepRank allows for efficient training of 3D CNNs with data sets containing millions of PPIs and supports both classification and regression. We demonstrate the performance of DeepRank on two distinct challenges: the classification of biological versus crystallographic PPIs, and the ranking of docking models. For both problems, DeepRank is competitive with or outperforms state-of-the-art methods, demonstrating the versatility of the framework for research in structural biology.
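This is not DeepRank's actual API; the following is an illustrative PyTorch sketch of a small 3D CNN of the kind that could be trained on voxelized PPI feature grids for binary classification. Grid size, channel count and layer widths are assumptions for demonstration only:

```python
# Sketch: a small 3D CNN classifier over voxelized interface feature grids.
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    def __init__(self, in_channels: int = 8, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(64),   # infers the flattened size on first call
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# A batch of 4 interface grids with 8 feature channels on a 20^3 voxel grid
grids = torch.randn(4, 8, 20, 20, 20)
logits = Simple3DCNN()(grids)
print(logits.shape)  # torch.Size([4, 2]) -> biological vs. crystallographic
```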


Subject(s)
Data Mining/methods , Deep Learning , Protein Interaction Mapping/methods , Crystallography , Datasets as Topic , Molecular Docking Simulation , Protein Interaction Domains and Motifs , Protein Interaction Maps
3.
Sci Rep; 11(1): 24, 2021 Jan 8.
Article in English | MEDLINE | ID: mdl-33420133

ABSTRACT

Accurate and low-cost sleep measurement tools are needed in both clinical and epidemiological research. To this end, wearable accelerometers are widely used, as they are both low in price and provide reasonably accurate estimates of movement. Techniques to classify sleep from high-resolution accelerometer data primarily rely on heuristic algorithms. In this paper, we explore the potential of detecting sleep using random forests. Models were trained using data from three different studies in which 134 adult participants (70 with a sleep disorder and 64 healthy good sleepers) wore an accelerometer on their wrist during a one-night polysomnography recording in the clinic. The random forests were able to distinguish sleep-wake states with an F1 score of 73.93% on a previously unseen test set of 24 participants. Detecting when the accelerometer is not worn was also successful using machine learning ([Formula: see text]), and, when combined with our sleep detection models on day-time data, this provides a sleep estimate that is correlated with self-reported habitual nap behaviour ([Formula: see text]). These random forest models have been made open source to aid further research. In line with the literature, sleep stage classification turned out to be difficult using only accelerometer data.
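A minimal sketch of the sleep-wake classification step, assuming epoch-level accelerometer features and binary labels derived from polysomnography; all data and variable names are illustrative placeholders, and the feature set is an assumption:

```python
# Sketch: random forest sleep-wake classification from epoch-level
# accelerometer features, evaluated with the F1 score on held-out data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X_train = rng.random((5000, 10))        # per-epoch wrist-acceleration features
y_train = rng.integers(0, 2, 5000)      # 1 = sleep, 0 = wake (from polysomnography)
X_test = rng.random((1000, 10))         # epochs from previously unseen participants
y_test = rng.integers(0, 2, 1000)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)

# F1 score for sleep vs. wake on the held-out epochs
print(f"F1: {f1_score(y_test, clf.predict(X_test)):.2%}")
```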


Subject(s)
Accelerometry/methods , Polysomnography/methods , Sleep/physiology , Accelerometry/instrumentation , Accelerometry/statistics & numerical data , Adolescent , Adult , Aged , Algorithms , Deep Learning , Female , Humans , Machine Learning , Male , Middle Aged , Polysomnography/instrumentation , Polysomnography/statistics & numerical data , Sleep Stages , Sleep Wake Disorders/diagnosis , Wearable Electronic Devices , Young Adult