1.
Annu Int Conf IEEE Eng Med Biol Soc; 2020: 3755-3758, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33018818

ABSTRACT

Despite recent advancements in the field of pattern recognition-based myoelectric control, the collection of a high-quality training set remains a challenge limiting its adoption. This paper proposes a framework for a possible solution by augmenting short training protocols with subject-specific synthetic electromyography (EMG) data generated using a deep generative network known as SinGAN. The aim of this work is to produce high-quality synthetic data that could improve classification accuracy when combined with a limited training protocol. SinGAN was used to generate 1000 synthetic windows of EMG data from a single window of six different motions, and results were evaluated qualitatively, quantitatively, and in a classification task. Qualitative assessment of synthetic data was conducted via visual inspection of principal component analysis projections of real and synthetic feature space. Quantitative assessment of synthetic data revealed that 11 of 32 synthetic features had similar location and scale to real features (using univariate two-sample Lepage tests), whereas multivariate distributions were found to be statistically different (p < 0.05). Finally, the addition of these synthetic data to a brief training set of real data significantly improved classification accuracy in a cross-validation testing scheme by 5.4% (p < 0.001).
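The qualitative assessment described above, projecting real and synthetic feature windows into a shared principal component space, can be sketched as follows. This is a hypothetical illustration, not the paper's code: the feature set here is the common Hudgins time-domain set (an assumption; the abstract mentions 32 features without naming them), and the "synthetic" windows are stand-in rescaled noise rather than SinGAN output.

```python
# Sketch: compare real vs. synthetic EMG feature windows via PCA projection.
import numpy as np
from sklearn.decomposition import PCA

def td_features(window):
    """Hudgins time-domain features for one 1-D EMG window."""
    mav = np.mean(np.abs(window))                         # mean absolute value
    wl = np.sum(np.abs(np.diff(window)))                  # waveform length
    zc = np.sum(np.diff(np.sign(window)) != 0)            # zero crossings
    ssc = np.sum(np.diff(np.sign(np.diff(window))) != 0)  # slope sign changes
    return np.array([mav, wl, zc, ssc])

rng = np.random.default_rng(0)
# 100 "real" windows and 100 stand-in "synthetic" windows of 200 samples each.
real = np.stack([td_features(rng.standard_normal(200)) for _ in range(100)])
synth = np.stack([td_features(0.9 * rng.standard_normal(200)) for _ in range(100)])

# Fit PCA on the real features only, then project both sets, so the
# synthetic samples are viewed along the real distribution's axes.
pca = PCA(n_components=2).fit(real)
real_2d, synth_2d = pca.transform(real), pca.transform(synth)
# Visual inspection would scatter-plot real_2d against synth_2d.
```

Fitting the projection on the real data alone is a deliberate choice: it keeps the synthetic samples from influencing the axes they are being judged against.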


Subject(s)
Electromyography, Signal Detection (Psychological), Feasibility Studies, Motion (Physics), Principal Component Analysis
2.
IEEE J Biomed Health Inform; 24(4): 1196-1205, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31403450

ABSTRACT

The Timed-Up-and-Go (TUG) test is a simple clinical tool commonly used to quickly assess the mobility of patients. Researchers have endeavored to automate the test using sensors or motion tracking systems to improve its accuracy and to extract more resolved information about its sub-phases. While some approaches have shown promise, they often require the donning of sensors or the use of specialized hardware, such as the now-discontinued Microsoft Kinect, which combines video information with depth sensors (RGBD). In this work, we leverage recent advances in computer vision to automate the TUG test using a regular RGB video camera without the need for custom hardware or additional depth sensors. Thirty healthy participants were recorded using a Kinect V2 and a standard video feed while performing multiple trials of 3 and 1.5 meter versions of the TUG test. A Mask Regional Convolutional Neural Net (R-CNN) algorithm and a Deep Multitask Architecture for Human Sensing (DMHS) were then used together to extract global 3D poses of the participants. The timing of transitions between the six key movement phases of the TUG test was then extracted using heuristic features derived from the time series of these 3D poses. The proposed video-based vTUG system yielded the same error as the standard Kinect-based system for all six key transition points, and average errors of less than 0.15 seconds from a multi-observer hand-labeled ground truth. This work describes a novel method of video-based automation of the TUG test using a single standard camera, removing the need for specialized equipment and facilitating the extraction of additional meaningful information for clinical use.
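The heuristic transition-timing idea above, detecting phase boundaries from simple kinematic signals in the 3D pose time series, can be sketched in miniature. Everything here is an assumption for illustration: the paper's actual features and all six phases are not specified in the abstract, so this toy uses only hip height with made-up thresholds to find sit-to-stand and stand-to-sit crossings.

```python
# Sketch: heuristic phase-boundary timing from a pose-derived signal.
import numpy as np

def transition_times(hip_height, fps, thresh=0.55):
    """Return (stand_up_s, sit_down_s): first and last frames where the
    hip rises above the seated/standing threshold, converted to seconds."""
    idx = np.flatnonzero(hip_height > thresh)
    return idx[0] / fps, idx[-1] / fps

fps = 30
t = np.arange(0, 10, 1 / fps)
# Toy trial: seated (hip ~0.45 m) -> standing and walking (~0.9 m)
# between t = 2 s and t = 8 s -> seated again.
hip = np.where((t > 2) & (t < 8), 0.9, 0.45)

stand_up, sit_down = transition_times(hip, fps)
# stand_up is close to 2.0 s and sit_down close to 8.0 s.
```

In the real system each of the six transitions would get its own heuristic over richer pose features (e.g. trunk angle or foot displacement), but the pattern of thresholding a kinematic time series and reading off frame indices is the same.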


Subject(s)
Deep Learning, Exercise Test/methods, Gait Analysis/methods, Image Processing (Computer-Assisted)/methods, Video Recording/methods, Adolescent, Adult, Algorithms, Automation, Female, Humans, Male, Young Adult