Results 1 - 3 of 3
1.
Article in English | MEDLINE | ID: mdl-35877796

ABSTRACT

Brain visual dynamics encode rich functional and biological patterns of the neural system and, if decoded, hold great promise for many applications such as intention understanding, cognitive load quantization, and neural disorder measurement. Here we focus on understanding the brain visual dynamics of the amyotrophic lateral sclerosis (ALS) population and propose a novel system that allows these so-called 'locked-in' patients to 'speak' with their brain visual movements. More specifically, we propose an intelligent system that decodes the eye's bio-potential signal, the electrooculogram (EOG), thereby understanding the patient's intention. We first leverage a deep learning framework for automatic feature learning and classification of the brain visual dynamics, aiming to translate the EOG into meaningful words. We then design and develop an edge computing platform on the smartphone that executes the deep learning algorithm, visualizes the brain visual dynamics, and displays the edge inference results, all in real time. Evaluated on 4,500 trials of brain visual movements performed by multiple users, our system demonstrated an eye-word recognition rate of up to 90.47%. The system is shown to be intelligent, effective, and convenient for decoding brain visual dynamics for ALS patients. This research is thus expected to greatly advance the decoding and understanding of brain visual dynamics by leveraging machine learning and edge computing innovations.


Subject(s)
Amyotrophic Lateral Sclerosis , Brain-Computer Interfaces , Deep Learning , Brain , Electrooculography/methods , Humans
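
To make the pipeline in this abstract concrete, below is a minimal PyTorch sketch of an EOG-to-word classifier of the kind described. Everything here is an assumption for illustration: the 2-channel (horizontal/vertical) input, the 250 Hz / 2 s window, the layer sizes, and the class count are not taken from the paper, whose exact architecture the abstract does not publish.

import torch
import torch.nn as nn

class EOGWordNet(nn.Module):
    """Illustrative 1-D CNN for EOG-to-word classification.

    Assumes 2-channel EOG input (horizontal/vertical electrode pairs),
    a fixed-length window, and n_words output classes.
    """
    def __init__(self, n_channels=2, n_words=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # global pooling -> fixed-size embedding
        )
        self.classifier = nn.Linear(128, n_words)

    def forward(self, x):              # x: (batch, channels, samples)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)      # unnormalized word logits

# Example: one 2-second window of 2-channel EOG at an assumed 250 Hz.
model = EOGWordNet(n_channels=2, n_words=10)
logits = model(torch.randn(1, 2, 500))
word = logits.argmax(dim=1)            # predicted word index

Global average pooling keeps the classifier head independent of the window length, which is convenient when trial durations vary across users.
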
2.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 377-381, 2021 11.
Article in English | MEDLINE | ID: mdl-34891313

ABSTRACT

Eye dynamics, a typical expression of brain activity, are an emerging modality for promising smart health applications. The electrooculogram (EOG), a natural bio-electric signal generated during eye movements, holds great potential, if decoded, to reveal the user's mind and enable voice-free communication for patients with amyotrophic lateral sclerosis (ALS). ALS patients usually lose physical movement abilities, including speech and handwriting, but can fortunately still move their eyes. In this study, we propose a novel deep transfer learning-empowered system, called "eyeSay", which leverages both deep learning and transfer learning for intelligent eye EOG-to-speech translation. More specifically, we design a multi-stage convolutional neural network (CNN), named CNN-word, to analyze eye-written words. Moreover, to reveal fundamental patterns of eye movements, we build a transferable feature extractor, CNN-stroke, upon eye strokes, the building blocks of an eye word. We then transfer the CNN-stroke model to the eye-word learning task in an innovative way: CNN-stroke serves as an additional branch of CNN-word and generates a stroke probability map. The resulting boostCNN-word model, enhanced by the transferable feature extractor, greatly improves eye-word decoding performance. This novel study will directly contribute to voice-free communication for ALS patients and greatly advance the ubiquitous eye EOG-based smart health area.


Subject(s)
Amyotrophic Lateral Sclerosis , Wearable Electronic Devices , Electrooculography , Eye Movements , Humans , Machine Learning
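
As a rough illustration of the transfer scheme described above (a hedged sketch, not the authors' code): a small stroke encoder stands in for CNN-stroke, is frozen after pretraining on stroke labels, and its per-timestep stroke probability map is concatenated with the raw EOG channels before the word branch, standing in for boostCNN-word. All layer sizes, the stroke vocabulary size of 8, and the fusion-by-concatenation detail are assumptions.

import torch
import torch.nn as nn

class StrokeBranch(nn.Module):
    """Stand-in for CNN-stroke: emits a per-timestep stroke probability map."""
    def __init__(self, n_channels=2, n_strokes=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 32, 7, padding=3), nn.ReLU(),
            nn.Conv1d(32, n_strokes, 3, padding=1),
        )
    def forward(self, x):
        return self.encoder(x).softmax(dim=1)  # (batch, n_strokes, time)

class BoostWordNet(nn.Module):
    """Stand-in for boostCNN-word: word branch plus frozen transferred stroke branch."""
    def __init__(self, n_channels=2, n_strokes=8, n_words=10):
        super().__init__()
        self.stroke = StrokeBranch(n_channels, n_strokes)
        for p in self.stroke.parameters():     # freeze transferred weights
            p.requires_grad = False
        # Word branch consumes raw EOG concatenated with the stroke map.
        self.word = nn.Sequential(
            nn.Conv1d(n_channels + n_strokes, 64, 5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(128, n_words),
        )
    def forward(self, x):
        stroke_map = self.stroke(x)            # which stroke is happening when
        return self.word(torch.cat([x, stroke_map], dim=1))

logits = BoostWordNet()(torch.randn(1, 2, 500))

Freezing the stroke branch preserves what was learned on the (presumably more abundant) stroke data, while the word branch learns to exploit the stroke map as an extra input channel set.
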
3.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 1861-1864, 2021 11.
Article in English | MEDLINE | ID: mdl-34891650

ABSTRACT

Human big data decoding has great potential to reveal the complex patterns of human dynamics such as physiological and biomechanical signals. In this study, we take special interest in brain visual dynamics, e.g., eye movement signals, and investigate how to leverage eye signal decoding to provide a voice-free communication possibility for ALS patients who have lost the ability to control their muscles. Due to the substantial complexity of visual dynamics, we propose a deep learning framework to decode the visual dynamics as the user performs eye-writing tasks. Further, to enable real-time inference on the eye signals, we design and develop a mobile edge computing platform, called UbiEi-Edge, which wirelessly receives the eye signals via Bluetooth Low Energy, executes the deep learning algorithm, and visualizes the decoding results. This real-world implementation, developed on an Android phone, provides real-time data streaming and automatic, real-time decoding of brain visual dynamics, thereby enabling a new paradigm for ALS patients to communicate with the external world. Our experiments have demonstrated the feasibility and effectiveness of the proposed mobile edge computing prototype. The study, by innovatively bridging AI, edge computing, and mobile health, will greatly advance brain dynamics decoding-empowered human-centered computing and smart health big data applications.


Subject(s)
Deep Learning , Algorithms , Big Data , Brain , Humans
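
The receive-buffer-infer loop on the phone might look roughly like the following (a hypothetical sketch in Python rather than the app's Android code, reusing the EOGWordNet stand-in from entry 1; WINDOW, the packet layout, and ble_notify are invented for illustration, and the real platform would use the Android BLE stack and an exported on-device model).

import collections
import threading
import torch

WINDOW = 500                               # samples per inference window (assumed)
buffer = collections.deque(maxlen=WINDOW)  # ring buffer of (h, v) sample pairs
lock = threading.Lock()

model = EOGWordNet()                       # stand-in classifier from entry 1's sketch
model.eval()                               # on-device this would be an exported model

def ble_notify(packet):
    """Hypothetical BLE notification callback: `packet` is assumed to be an
    iterable of (horizontal, vertical) EOG samples decoded from the payload."""
    with lock:
        buffer.extend(packet)

def decode_once():
    """Run one inference over the most recent full window, if available."""
    with lock:
        if len(buffer) < WINDOW:
            return None                    # not enough data buffered yet
        x = torch.tensor(list(buffer), dtype=torch.float32)  # (WINDOW, 2)
    with torch.no_grad():
        logits = model(x.t().unsqueeze(0)) # reshape to (1, 2, WINDOW)
    return int(logits.argmax(dim=1))       # predicted word index

Decoupling the BLE callback from inference through the shared ring buffer matches the real-time requirement the abstract emphasizes: packets arrive at the radio's pace while decoding runs at its own.
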