Results 1 - 7 of 7
1.
Article in English | MEDLINE | ID: mdl-35442889

ABSTRACT

Predicting the user's intended locomotion mode is critical for wearable robot control to support seamless transitions when walking over changing terrain. Although machine vision has recently proven to be a promising tool for identifying upcoming terrain in the travel path, existing approaches are limited to environment perception rather than the human intent recognition that is essential for coordinated wearable robot operation. Hence, in this study we aim to develop a novel system that fuses human gaze (representing user intent) and machine vision (capturing environmental information) for accurate prediction of the user's locomotion mode. The system processes multimodal visual information and recognizes the user's locomotion intent in complex scenes where multiple terrains are present. Additionally, based on the dynamic time warping algorithm, a fusion strategy was developed to align the temporal predictions from the individual modalities while producing flexible decisions on the timing of locomotion mode transitions for wearable robot control. System performance was validated on experimental data collected from five participants, showing high intent recognition accuracy (over 96% on average) and reliable decision-making on locomotion transitions with adjustable lead time. These promising results demonstrate the potential of fusing human gaze and machine vision for locomotion intent recognition in lower-limb wearable robots.
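The dynamic-time-warping alignment at the heart of the fusion strategy can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the label sequences, the 0/1 label distance, and the variable names are all assumptions for the example.

```python
def dtw_align(a, b, dist):
    """Classic dynamic time warping: returns the optimal monotone
    alignment path between sequences a and b plus its total cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = accumulated cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(a[i - 1], b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    # Backtrack from (n, m) to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, (i, j) = min((cost[i - 1][j - 1], (i - 1, j - 1)),
                        (cost[i - 1][j], (i - 1, j)),
                        (cost[i][j - 1], (i, j - 1)))
    return path[::-1], cost[n][m]

# Two temporal prediction streams (e.g. gaze-based vs. vision-based
# locomotion-mode labels) that agree on content but drift in timing.
gaze   = ["walk", "walk", "stair", "stair", "stair"]
vision = ["walk", "walk", "walk", "stair", "stair"]
path, total = dtw_align(gaze, vision, lambda x, y: 0.0 if x == y else 1.0)
```

Because DTW warps the time axis, the two streams align with zero mismatch here even though the "stair" decision arrives one frame later in the vision stream; the alignment path is what a fusion layer could use to pick a transition-timing compromise between the modalities.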


Subject(s)
Locomotion, Walking, Algorithms, Humans, Intention, Lower Extremity
2.
IEEE Trans Cybern; 52(3): 1750-1762, 2022 Mar.
Article in English | MEDLINE | ID: mdl-32520717

ABSTRACT

Computer vision has shown promising potential in wearable robotics applications (e.g., predicting human grasping targets and understanding context). In practice, however, the performance of computer vision algorithms is challenged by insufficient or biased training data, observation noise, cluttered backgrounds, etc. By leveraging Bayesian deep learning (BDL), we developed a novel, reliable vision-based framework to assist upper-limb prosthesis grasping during arm reaching. The framework measures different types of model and data uncertainty for grasping target recognition in realistic and challenging scenarios. A probability calibration network was developed to fuse the uncertainty measures into one calibrated probability for online decision making. We formulated the problem as predicting the grasping target during arm reaching. Specifically, we developed a 3-D simulation platform to simulate and analyze the performance of vision algorithms under several challenging scenarios that are common in practice. In addition, we integrated our approach into a shared control framework for a prosthetic arm and demonstrated its potential for assisting human participants in fluent target reaching and grasping tasks.
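The core BDL idea of turning repeated stochastic forward passes into an uncertainty estimate can be sketched without any deep-learning framework. Everything here is illustrative: `noisy_forward` is a hypothetical stand-in for a network with dropout left on at test time, and the noise scale and inputs are made up for the demo.

```python
import math
import random

def mc_predict(forward, x, n_samples=50, seed=0):
    """Monte Carlo predictive distribution: run a stochastic forward
    pass many times and average the class probabilities, as in
    MC-dropout-style Bayesian deep learning."""
    rng = random.Random(seed)
    sums = None
    for _ in range(n_samples):
        p = forward(x, rng)
        sums = p if sums is None else [a + b for a, b in zip(sums, p)]
    mean = [s / n_samples for s in sums]
    # Predictive entropy: one simple scalar uncertainty measure.
    entropy = -sum(p * math.log(p) for p in mean if p > 0)
    return mean, entropy

def noisy_forward(x, rng):
    # Hypothetical two-class "network": logits depend on the input x
    # plus dropout-like Gaussian noise, then a softmax.
    logits = [x + rng.gauss(0, 0.5), -x + rng.gauss(0, 0.5)]
    z = [math.exp(v) for v in logits]
    s = sum(z)
    return [v / s for v in z]

mean_conf, unc_conf = mc_predict(noisy_forward, 3.0)  # clear-cut input
mean_amb,  unc_amb  = mc_predict(noisy_forward, 0.0)  # ambiguous input
```

The ambiguous input yields a near-uniform mean and high entropy, while the clear-cut input yields a confident mean and low entropy; a calibration network like the paper's would map such uncertainty measures to one decision-ready probability.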


Subject(s)
Artificial Limbs, Robotics, Arm, Bayes Theorem, Hand Strength, Humans, Upper Extremity
3.
Accid Anal Prev; 137: 105432, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32004860

ABSTRACT

Driver distraction is a leading cause of fatal car accidents; nearly nine people are killed in the US each day because of distracting activities. Reducing the number of distraction-affected traffic accidents therefore remains an imperative. This manuscript proposes a novel algorithm for detecting drivers' manual distraction. The detection algorithm consists of two modules. The first module predicts the bounding boxes of the driver's right hand and right ear from RGB images. The second module takes the bounding boxes as input and predicts the type of distraction. 106,677 frames extracted from videos, collected from twenty participants in a driving simulator, were used for training (50%) and testing (50%). For distraction classification, the results indicated that the proposed framework could detect normal driving, touchscreen use, and talking on a phone with F1-scores of 0.84, 0.69, and 0.82, respectively. For overall distraction detection, it achieved an F1-score of 0.74. The whole framework ran at 28 frames per second. The algorithm achieved overall accuracy comparable to similar research while being more efficient than other methods. A demo video of the algorithm can be found at https://youtu.be/NKclK1bHRd4.
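The second module's role (mapping the two bounding boxes to a distraction label) can be illustrated with a rule-based stand-in. To be clear, the paper uses a learned classifier; the geometric thresholds, the normalized-coordinate convention, and the screen region below are all assumptions made purely for this sketch.

```python
def center(box):
    """Center point of an (x1, y1, x2, y2) box in normalized coords."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def classify_distraction(hand_box, ear_box, screen_region=(0.6, 0.5, 1.0, 1.0)):
    """Illustrative stand-in for the learned second module: threshold
    simple geometric relations between the right-hand and right-ear
    boxes instead of running a trained classifier."""
    hx, hy = center(hand_box)
    ex, ey = center(ear_box)
    # Hand raised next to the ear suggests a phone call.
    if ((hx - ex) ** 2 + (hy - ey) ** 2) ** 0.5 < 0.15:
        return "phone"
    # Hand over the (assumed) center-console area suggests touchscreen use.
    x1, y1, x2, y2 = screen_region
    if x1 <= hx <= x2 and y1 <= hy <= y2:
        return "touchscreen"
    return "normal"

label = classify_distraction((0.40, 0.15, 0.50, 0.30),   # hand near the ear
                             (0.42, 0.10, 0.50, 0.25))   # right-ear box
```

The point of the sketch is the pipeline shape, not the rule itself: detection produces a compact geometric state (two boxes), and classification only ever sees that state, which is what lets the full system run at interactive frame rates.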


Subject(s)
Accidents, Traffic/prevention & control, Distracted Driving, Pattern Recognition, Automated/methods, Adult, Algorithms, Data Collection, Ear/physiology, Female, Hand/physiology, Humans, Male, Neural Networks, Computer
4.
Annu Int Conf IEEE Eng Med Biol Soc; 2019: 3163-3166, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946559

ABSTRACT

This paper investigates the visual strategy of transtibial amputees as they approach transitions between level ground and stairs, and compares it with that of able-bodied individuals. To this end, we conducted a pilot study in which two transtibial amputees and two able-bodied subjects transitioned from level ground to stairs and vice versa while wearing eye-tracking glasses to record gaze fixations. To investigate how both populations use vision to prepare for locomotion on new terrain, gaze fixation behavior before reaching the new terrain was analyzed and compared between the two populations across all transition cases in the study. Our results showed that, unlike the able-bodied population, amputees directed most of their fixations at the transition region before entering the new terrain. Furthermore, amputees showed a greater need for visual information in transition regions before stepping onto stairs than before stepping onto level ground. The insights into amputees' visual behavior gained from this study may guide the future development of intention prediction and locomotion recognition technologies for amputees.
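The comparison above boils down to a simple region-of-interest statistic over fixations. A minimal sketch, assuming normalized gaze coordinates and a hypothetical rectangular transition-region ROI (the fixation points below are invented for the demo, not study data):

```python
def fixation_ratio(fixations, region):
    """Fraction of gaze fixations whose (x, y) location falls inside a
    rectangular region of interest, e.g. the terrain-transition area."""
    x1, y1, x2, y2 = region
    hits = sum(1 for (x, y) in fixations if x1 <= x <= x2 and y1 <= y <= y2)
    return hits / len(fixations) if fixations else 0.0

transition = (0.4, 0.4, 0.6, 0.6)          # assumed ROI, normalized coords
amputee_fix = [(0.45, 0.50), (0.50, 0.55), (0.55, 0.45), (0.80, 0.20)]
control_fix = [(0.45, 0.50), (0.80, 0.20), (0.20, 0.80), (0.70, 0.70)]
r_amp = fixation_ratio(amputee_fix, transition)
r_ctl = fixation_ratio(control_fix, transition)
```

A higher ratio for the amputee trace mirrors the study's finding that amputees concentrate fixations on the transition region before the new terrain.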


Subject(s)
Amputees, Artificial Limbs, Eye Movement Measurements/instrumentation, Fixation, Ocular, Gait, Biomechanical Phenomena, Humans, Pilot Projects
5.
Annu Int Conf IEEE Eng Med Biol Soc; 2018: 4623-4626, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30441382

ABSTRACT

Physiological responses are essential for health monitoring. However, modeling the complex interactions between them across activity and environmental factors can be challenging. In this paper, we introduce a framework that identifies the state of an individual based on their activity, trains predictive models for their physiological response within these states, and jointly optimizes the states and the models. We apply this framework to respiratory rate prediction based on heart rate and physical activity, and test it on a dataset of nine individuals performing various activities of daily life.
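The state-conditioned modeling idea can be sketched as one small regression per activity state. This simplifies the framework considerably: states are given rather than jointly optimized, a single predictor (heart rate) stands in for the full feature set, and the sample triples below are invented numbers.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b with one predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def fit_per_state(samples):
    """samples: (state, heart_rate, resp_rate) triples.  Fit one linear
    model per activity state, mirroring the state-conditioned idea."""
    by_state = {}
    for s, hr, rr in samples:
        xs, ys = by_state.setdefault(s, ([], []))
        xs.append(hr)
        ys.append(rr)
    return {s: fit_line(xs, ys) for s, (xs, ys) in by_state.items()}

samples = [("rest", 60, 12), ("rest", 70, 14), ("rest", 80, 16),
           ("run", 120, 30), ("run", 140, 36)]
models = fit_per_state(samples)   # {"rest": (a, b), "run": (a, b)}
```

Conditioning on the state lets each model stay simple (here, linear) while the ensemble still captures the different heart-rate/respiratory-rate couplings of rest versus exercise.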


Subject(s)
Activities of Daily Living, Exercise, Heart Rate, Respiratory Rate, Humans, Linear Models
6.
Annu Int Conf IEEE Eng Med Biol Soc; 2018: 1817-1820, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30440748

ABSTRACT

Lower-limb robotic prostheses can benefit from context awareness to provide comfort and safety to the amputee. In this work, we developed a terrain identification and surface inclination estimation system for a prosthetic leg using visual and inertial sensors. We built a dataset in which images with high sharpness are selected using the IMU signal. The images are used for terrain identification, and the surface inclination is computed simultaneously. With this information, the control of a robotic prosthetic leg can adapt to changes in its surroundings.
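Two pieces of this system lend themselves to short sketches: gating image capture on IMU motion (low angular rate as a cheap proxy for low motion blur), and recovering a static pitch angle from the gravity vector. The threshold, axis convention, and signals below are assumptions for illustration, not the paper's calibrated values.

```python
import math

def select_sharp_frames(frames, gyro_mag, thresh=0.3):
    """Keep only frames captured while the gyroscope magnitude (rad/s)
    is below a threshold -- a cheap proxy for low motion blur."""
    return [f for f, g in zip(frames, gyro_mag) if g < thresh]

def pitch_from_accel(ax, ay, az):
    """Static pitch estimate (radians) from the measured gravity
    direction, assuming the accelerometer is at rest; the x-forward,
    z-up axis convention here is an assumption for the example."""
    return math.atan2(-ax, math.sqrt(ay * ay + az * az))

sharp = select_sharp_frames(["f0", "f1", "f2"], [0.1, 0.5, 0.2])
```

In a pipeline like the paper's, the selected sharp frames feed the terrain classifier while the accelerometer-derived inclination is computed in parallel, so the prosthesis controller gets both a terrain label and a slope estimate per step.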


Subject(s)
Amputees, Artificial Limbs, Humans, Locomotion, Lower Extremity
7.
IEEE Trans Cybern; 47(11): 3706-3718, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28113386

ABSTRACT

Visual tracking is a critical task in many computer vision applications such as surveillance and robotics. However, although robustness to local corruptions has improved, prevailing trackers are still sensitive to large-scale corruptions such as occlusions and illumination variations. In this paper, we propose a novel robust object tracking technique based on a subspace-learning appearance model. Our contributions are twofold. First, mask templates produced by frame differencing are introduced into our template dictionary. Since the mask templates contain abundant structural information about corruptions, the model can encode corruptions on the object more efficiently. Meanwhile, the robustness of the tracker is further enhanced by incorporating system dynamics, which account for the moving tendency of the object. Second, we provide a theoretical guarantee that, with the modulated template dictionary, our new sparse model can be solved by the accelerated proximal gradient algorithm as efficiently as in traditional sparse tracking methods. Extensive experimental evaluations demonstrate that our method significantly outperforms 21 other cutting-edge algorithms in both speed and tracking accuracy, especially under challenges such as pose variation, occlusion, and illumination changes.
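The accelerated proximal gradient (APG) solver the abstract invokes is, at its core, FISTA applied to an L1-regularized least-squares problem over the template dictionary. A minimal sketch on a toy problem, with no claim to match the paper's modulated dictionary or step-size choices (the Frobenius-norm step bound and the toy data are assumptions):

```python
import math

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1, applied elementwise."""
    return [math.copysign(max(abs(x) - t, 0.0), x) for x in v]

def apg_lasso(D, y, lam, iters=300):
    """FISTA / accelerated proximal gradient for
    min_x 0.5*||D x - y||^2 + lam*||x||_1, with D given as rows."""
    m, n = len(D), len(D[0])
    # Crude step size: 1 / ||D||_F^2 bounds 1 / L for the smooth part.
    step = 1.0 / sum(d * d for row in D for d in row)
    x, z, t = [0.0] * n, [0.0] * n, 1.0
    for _ in range(iters):
        # Gradient of the smooth part at the momentum point z: D^T (D z - y)
        r = [sum(D[i][j] * z[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(D[i][j] * r[i] for i in range(m)) for j in range(n)]
        x_new = soft_threshold([z[j] - step * g[j] for j in range(n)], step * lam)
        t_new = (1 + math.sqrt(1 + 4 * t * t)) / 2
        z = [x_new[j] + (t - 1) / t_new * (x_new[j] - x[j]) for j in range(n)]
        x, t = x_new, t_new
    return x

# Toy sanity check: with an identity dictionary the minimizer is the
# soft-thresholded observation, so x should approach [0.9, 0.0].
x = apg_lasso([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.05], lam=0.1)
```

In a sparse tracker, the columns of `D` would be object, trivial, and (in this paper) mask templates, and the sparsity pattern of `x` identifies which templates, including corruption masks, explain the current candidate patch.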
