Subject(s)
Mental Health, Ethnology, Artificial Intelligence, Methodology as a Subject, Mass Behavior
ABSTRACT
Human-machine interfaces (HMIs) can decode a user's motor intention to control an external device. People who suffer from motor disabilities, such as spinal cord injury, can benefit from the use of such interfaces. While many solutions exist in this direction, there is still room for improvement from decoding, hardware, and subject motor-learning perspectives. Here we present, in a series of experiments with non-disabled participants, a novel decoding and training paradigm that allowed naïve participants to use their auricular muscles (AM) to control two degrees of freedom of a virtual cursor. AMs are particularly interesting because they are vestigial muscles and are often preserved after neurological diseases. Our method relies on surface electromyographic recordings and uses the contraction levels of both AMs to modulate the velocity and direction of a cursor in a two-dimensional paradigm. We used a locking mechanism to fix the current position of each axis separately, enabling the user to stop the cursor at a given location. Five volunteers completed a five-session training procedure (20-30 min per session) with a 2D center-out task. All participants increased their success rate (initial: 52.78 ± 5.56%; final: 72.22 ± 6.67%; median ± median absolute deviation) and improved their trajectory performance throughout the training. We implemented a dual task with visual distractors to assess the mental challenge of controlling the cursor while executing another task; our results suggest that the participants could perform the task under cognitively demanding conditions (success rate of 66.67 ± 5.56%). Finally, using the NASA Task Load Index questionnaire, we found that participants reported lower mental demand and effort in the last two sessions. In summary, all subjects learned to control the movement of a cursor with two degrees of freedom using their AMs, with a low impact on cognitive load.
Our study is a first step in developing AM-based decoders for HMIs for people with motor disabilities, such as spinal cord injury.
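The control scheme described in this abstract (contraction levels driving cursor velocity, with a separate locking mechanism per axis) can be sketched as follows. This is a minimal illustration under assumed class names, gains, and thresholds, not the authors' implementation; among other simplifications, it maps each muscle to positive motion along one axis only.

```python
# Hedged sketch of velocity control from two auricular-muscle contraction
# levels, with a per-axis lock that freezes the current coordinate.
# All names and thresholds here are illustrative assumptions.

class AuricularCursor:
    def __init__(self, gain=1.0, rest_threshold=0.1):
        self.x = 0.0
        self.y = 0.0
        self.gain = gain                      # velocity per unit contraction
        self.rest_threshold = rest_threshold  # contractions below this are ignored
        self.x_locked = False
        self.y_locked = False

    def toggle_lock(self, axis):
        """Freeze or release one axis, mimicking the locking mechanism."""
        if axis == "x":
            self.x_locked = not self.x_locked
        elif axis == "y":
            self.y_locked = not self.y_locked

    def update(self, left_am, right_am, dt):
        """Advance the cursor by one time step.

        left_am / right_am: normalized contraction levels in [0, 1], e.g.
        rectified and smoothed sEMG divided by a calibration maximum.
        """
        vx = self.gain * left_am if left_am > self.rest_threshold else 0.0
        vy = self.gain * right_am if right_am > self.rest_threshold else 0.0
        if not self.x_locked:
            self.x += vx * dt
        if not self.y_locked:
            self.y += vy * dt
        return self.x, self.y
```

A locked axis simply ignores velocity updates, which is what lets the user park the cursor on one coordinate while still steering the other.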
ABSTRACT
[This corrects the article DOI: 10.3389/fnbot.2022.939241.].
ABSTRACT
Stroke is the second leading cause of death and one of the leading causes of disability worldwide. According to the World Health Organization, 11 million people suffer a stroke yearly. The cost of the disease is exorbitant, and the most widely used treatment is conventional physiotherapy. Assistive technology therefore emerges to optimize rehabilitation and functional capabilities, but cost, robustness, usability, and long-term results still restrict technology selection. This work aimed to develop a low-cost ankle orthosis, the G-Exos, a wearable exoskeleton that increases motor capability by assisting dorsiflexion, plantarflexion, and ankle stability. A hybrid system provided near-natural gait movements by combining active assistance (a motor) with passive assistance (an elastic band). The system was validated with 10 volunteers with foot drop: seven with stroke, two with incomplete spinal cord injury (SCI), and one with acute inflammatory transverse myelitis (ATM). The G-Exos showed assistive functionality for gait movement. A Friedman test showed a significant difference in dorsiflexion amplitude with the use of the G-Exos compared to gait without it [χ²(3) = 98.56, p < 0.001]. There was also a significant difference in ankle eversion and inversion when comparing walking with and without the G-Exos [χ²(3) = 36.12, p < 0.001]. The G-Exos is a robust, lightweight, and flexible assistive technology device that detects the gait phase accurately and provides better human-machine interaction. G-Exos training improved the ability to deal with gait disorders, as well as usability and motor and functional recovery. Wearable assistive technologies lead to a better quality of life and support their use in activities of daily living.
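The abstract notes that the device detects the gait phase to time its assistance. One common way to do this, sketched below, is a threshold rule on foot-pressure sensors; the G-Exos abstract does not publish its actual algorithm, so the sensor names, thresholds, and phase labels here are assumptions for illustration.

```python
# Illustrative threshold-based gait-phase detection, a common approach for
# triggering dorsiflexion assistance in ankle exoskeletons. Not the G-Exos
# algorithm; signal names and thresholds are assumed.

def detect_gait_phase(heel_pressure, toe_pressure, threshold=0.2):
    """Classify the gait phase from two normalized foot-pressure readings."""
    heel_on = heel_pressure > threshold
    toe_on = toe_pressure > threshold
    if heel_on and not toe_on:
        return "heel_strike"
    if heel_on and toe_on:
        return "mid_stance"
    if not heel_on and toe_on:
        return "push_off"
    return "swing"  # foot off the ground: where foot drop occurs

def assist_dorsiflexion(phase):
    """Simple motor policy: assist dorsiflexion during swing to lift the foot."""
    return phase == "swing"
```

Assisting only during swing is what prevents the toe from dragging in foot drop, while leaving stance-phase plantarflexion free.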
ABSTRACT
Hands-free interfaces are essential for people with limited mobility to interact with biomedical or electronic devices. However, few sensing platforms can quickly tailor the interface to users with disabilities. This article therefore proposes a sensing platform that patients with mobility impairments can use to manipulate electronic devices, thereby increasing their independence. A new sensing scheme is developed using three hands-free signals as inputs: voice commands, head movements, and eye gestures. These signals are obtained with non-invasive sensors: a microphone for speech commands, an accelerometer to detect inertial head movements, and infrared oculography to register eye gestures. The signals are processed and received as the user's commands by an output unit, which provides several communication ports for sending control signals to other devices. The interaction methods are intuitive and could extend the boundaries within which people with disabilities manipulate local or remote digital systems. As a case study, two volunteers with severe disabilities used the sensing platform to steer a power wheelchair. The participants performed 15 common skills for wheelchair users, and their capacities were evaluated according to a standard test. Using head control, volunteers A and B scored 93.3% and 86.6%, respectively; using voice control, they scored 63.3% and 66.6%. These results show that the end-users achieved high performance on most of the skills with the head-movement interface, whereas they could not complete most of the skills using voice control. These results provide valuable information for tailoring the sensing platform to the end-user's needs.
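A platform like the one described must map several input modalities onto one wheelchair command vocabulary. The sketch below shows one plausible dispatch rule in which only the active modality is honored; the command names, gesture labels, and the mode-selection policy are assumptions for illustration, not the platform's actual API.

```python
# Hedged sketch of multi-modal command dispatch for a hands-free wheelchair
# interface. Vocabularies and the dispatch rule are illustrative assumptions.

HEAD_COMMANDS = {"tilt_forward": "forward", "tilt_back": "stop",
                 "tilt_left": "turn_left", "tilt_right": "turn_right"}
VOICE_COMMANDS = {"go": "forward", "stop": "stop",
                  "left": "turn_left", "right": "turn_right"}

def dispatch(head_gesture=None, voice_word=None, mode="head"):
    """Return the wheelchair command for the currently active input mode.

    Only the selected modality is honored, so an accidental gesture on the
    inactive channel cannot move the chair; anything unrecognized maps to
    a fail-safe stop.
    """
    if mode == "head" and head_gesture in HEAD_COMMANDS:
        return HEAD_COMMANDS[head_gesture]
    if mode == "voice" and voice_word in VOICE_COMMANDS:
        return VOICE_COMMANDS[voice_word]
    return "stop"
```

Gating on a single active modality is one way a platform can be "tailored" per user: the mode flag would simply be set to whichever channel the standard test shows the user controls best.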
ABSTRACT
The design of cooperative advanced driver assistance systems (C-ADAS) involves a holistic and systemic vision that considers the bidirectional interactions among three main elements: the driver, the vehicle, and the surrounding environment. The evolution of these systems reflects this need. In this work, we present a survey of C-ADAS and describe a conceptual architecture that includes the driver, vehicle, and environment and their bidirectional interactions. We address the remote operation of C-ADAS based on the Internet of Vehicles (IoV) paradigm, as well as the enabling technologies involved. We describe the state of the art and the research challenges in the development of C-ADAS. Finally, to quantify the performance of C-ADAS, we describe the principal evaluation mechanisms and performance metrics employed in these systems.
Subject(s)
Accidents, Traffic, Automobile Driving, Protective Devices, Surveys and Questionnaires, Technology
ABSTRACT
Ergonomics is now considered a consolidated scientific discipline that continues to expand globally. This current scenario is the result of different visions that have permeated the evolution of ergonomics. This article presents a historical overview of ergonomics as a discipline, considering the human factors school and the activity-oriented ergonomics school. The origins of these two schools of thought and their underlying paradigms are presented, and a comparison between them is made. The reflections on ergonomics presented in this article are based on the idea that progress is built on differences and diversity. The authors support the idea of approaching ergonomics as a single discipline, recognizing the convergence and complementarity between the two schools. Beyond the existing differences, the practice of ergonomics should focus on the design of human-centered work systems. It is hoped that the reflections made in this article will enable professionals in ergonomics and related disciplines to better understand how to approach humans at work in order to transform working conditions positively.
ABSTRACT
Every day, people interact with many types of human-machine interfaces, and their use is increasing; it is therefore necessary to design interfaces capable of responding in an intelligent, natural, inexpensive, and accessible way, regardless of a user's social, cultural, economic, or physical characteristics. In this sense, the development of small interfaces has been pursued to avoid user annoyance. In this paper, bioelectric signals are analyzed and characterized in order to propose a more natural human-machine interaction system. The proposed scheme is controlled by electromyographic signals that a person generates through arm movements. These arm signals were analyzed and characterized using a back-propagation neural network and wavelet analysis, yielding control commands from the electromyographic signals. The developed interface uses the Extensible Messaging and Presence Protocol (XMPP) to send control commands remotely. In the experiment, the interface operated a vehicle approximately 52 km away from the user, showing that a characterized electromyographic signal can suffice for controlling embedded devices such as a Raspberry Pi; in this way, the neural network and wavelet analysis can also generate control words usable within the Internet of Things. A Tiva-C board was used to acquire data instead of more popular development boards, with an adequate response. One of the most important aspects of the proposed interface is that it can be used by almost anyone, including people with different abilities and even illiterate people. Given the existence of individual efforts to characterize different types of bioelectric signals, we propose the creation of a free-access Bioelectric Control Dictionary to define and consult each characterized biosignal.
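Wavelet analysis of EMG, as mentioned in the abstract above, typically reduces a raw signal window to a small feature vector (for example, the relative energy of detail coefficients at each decomposition level) that a neural network can then classify. The sketch below illustrates that feature-extraction step with a hand-rolled Haar transform; the paper does not specify its wavelet family or feature set, so this is an assumed, simplified stand-in, not the authors' pipeline.

```python
# Illustrative Haar-wavelet energy features for an EMG window. The wavelet
# family, number of levels, and feature definition are assumptions; the
# original work's exact analysis is not published in the abstract.
import numpy as np

def haar_level(signal):
    """One level of the orthonormal Haar transform: approximation + detail.
    Assumes the input length is even."""
    s = np.asarray(signal, dtype=float)
    even, odd = s[0::2], s[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return approx, detail

def wavelet_energy_features(signal, levels=3):
    """Relative energy of the detail coefficients at each level, a common
    compact feature vector for EMG pattern recognition.
    Assumes len(signal) is divisible by 2**levels."""
    feats = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_level(approx)
        feats.append(float(np.sum(detail ** 2)))
    # Haar is orthonormal, so detail energies plus the residual
    # approximation energy equal the total signal energy.
    total = sum(feats) + float(np.sum(approx ** 2))
    return [f / total for f in feats]
```

A vector like this, computed per arm-movement window, is the kind of fixed-size input a back-propagation network can be trained on to emit discrete control words.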
Subject(s)
Neural Networks, Computer, User-Computer Interface, Algorithms, Humans, Man-Machine Systems
ABSTRACT
People with severe disabilities may have difficulty interacting with their home devices because of the limitations inherent to their disability. Simple home activities may even be impossible for this group. Although much work has been devoted to proposing new assistive technologies to improve the lives of people with disabilities, some studies have found that the abandonment rate of such technologies is quite high. This work presents a new assistive system, based on eye tracking, for controlling and monitoring a smart home built on the Internet of Things, developed following the concepts of user-centered design and usability. With this system, a person with severe disabilities was able to control everyday equipment in her residence, such as lamps, a television, a fan, and a radio. In addition, her caregiver was able to monitor her use of the system remotely, over the Internet, in real time. The user interface also includes functionalities that improved the usability of the system as a whole. The experiments were divided into two steps. In the first step, the assistive system was assembled in an actual home, where tests were conducted with 29 participants without disabilities. In the second step, the system was tested with online monitoring for seven days by a person with a severe disability (the end-user) in her own home, not only to increase convenience and comfort but also so that the system could be tested where it would in fact be used. At the end of both steps, all participants answered the System Usability Scale (SUS) questionnaire; the group of participants without disabilities and the end-user rated the assistive system with mean scores of 89.9 and 92.5, respectively.
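Eye-tracking interfaces like the one described usually trigger a device command through dwell-time selection: gaze must rest on an on-screen button for a set duration before the command fires, which suppresses accidental activations. The sketch below shows that mechanism; the class name, timing values, and target labels are illustrative assumptions, not the system's implementation.

```python
# Hedged sketch of dwell-time selection for a gaze-controlled smart-home UI.
# Timing values and target names are illustrative assumptions.

class DwellSelector:
    def __init__(self, dwell_time=1.0):
        self.dwell_time = dwell_time  # seconds of sustained gaze required
        self.current_target = None
        self.elapsed = 0.0

    def update(self, target, dt):
        """Feed one gaze sample; return the target id once dwell completes.

        target: id of the on-screen button under the gaze point (or None).
        dt: time since the previous sample, in seconds.
        """
        if target != self.current_target:
            self.current_target = target  # gaze moved: restart the timer
            self.elapsed = 0.0
            return None
        if target is None:
            return None
        self.elapsed += dt
        if self.elapsed >= self.dwell_time:
            self.elapsed = 0.0            # re-arm for the next selection
            return target                 # e.g. hand "lamp_on" to the IoT layer
        return None
```

Resetting the timer whenever the gaze leaves a button is the design choice that keeps saccades across the screen from firing commands the user never intended.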