ABSTRACT
BACKGROUND: "Ricominciare" is a single-center, prospective, pre-/post-intervention pilot study aimed at verifying the feasibility and safety of the ARC Intellicare (ARC) system (an artificial intelligence-powered and inertial motion unit-based mobile platform) in the home rehabilitation of people with disabilities due to respiratory or neurological diseases. METHODS: People with Parkinson's disease (pwPD) or post-COVID-19 condition (COV19) and an indication for exercise or home rehabilitation to optimize motor and respiratory function were enrolled. They underwent training for ARC usage and received an ARC unit to be used independently at home for 4 weeks, for 45 min 5 days/week sessions of respiratory and motor patient-tailored rehabilitation. ARC allows for exercise monitoring thanks to data from five IMU sensors, processed by an AI proprietary library to provide (i) patients with real-time feedback and (ii) therapists with information on patient adherence to the prescribed therapy. Usability (System Usability Scale, SUS), adherence, and adverse events were primary study outcomes. Modified Barthel Index (mBI), Barthel Dyspnea Index (BaDI), 2-Minute Walking Test (2MWT), Brief Fatigue Inventory (BFI), Beck Depression or Anxiety Inventory (BDI, BAI), and quality of life (EQ-5D) were also monitored pre- and post-treatment. RESULTS: A total of 21 out of 23 eligible patients were enrolled and completed the study: 11 COV19 and 10 pwPD. The mean total SUS score was 77/100. The median patients' adherence to exercise prescriptions was 80%. Clinical outcome measures (BaDI, 2MWT distance, BFI; BAI, BDI, and EQ-5D) improved significantly; no side effects were reported. CONCLUSION: ARC is usable and safe for home rehabilitation. Preliminary data suggest promising results on the effectiveness in subjects with post-COVID condition or Parkinson's disease.
Subject(s)
COVID-19, Disabled Persons, Parkinson Disease, Telerehabilitation, Humans, Pilot Projects, Artificial Intelligence, Prospective Studies, Quality of Life
ABSTRACT
The growing interest in recording physiological data and human behavior in everyday-life scenarios is paralleled by an increase in wireless devices that record brain and body signals. However, the technical issues that characterize these solutions often limit full brain-related assessments in real-life scenarios. Here we introduce the Biohub platform, a hardware/software (HW/SW) integrated wearable system for multistream synchronized acquisitions. The system consists of off-the-shelf hardware and state-of-the-art open-source software components, integrated into a low-cost solution that is complete yet easy to use outside conventional laboratories. It cooperates flexibly with several devices, regardless of manufacturer, and compensates for the possibly limited resources of the recording devices. The Biohub was validated by characterizing the quality of (i) multistream synchronization, (ii) in-lab electroencephalographic (EEG) recordings compared with a medical-grade high-density device, and (iii) a Brain-Computer Interface (BCI) in a real driving condition. Results show that this system can reliably acquire multiple data streams with high temporal accuracy and record standard-quality EEG signals, making it a valid device for advanced ergonomics studies such as driving, telerehabilitation, and occupational safety.
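The core technical claim is synchronization of multiple data streams from heterogeneous devices. As a minimal illustration of the general idea only (the abstract does not detail Biohub's actual synchronization mechanism), the sketch below resamples one timestamped stream onto another stream's time base; the sampling rates and signals are hypothetical.

import numpy as np

def align_streams(t_ref, t_other, x_other):
    """Resample one stream onto another stream's timestamps.

    Minimal post-hoc multistream alignment: both streams are assumed to carry
    timestamps from a shared clock, and the second stream is linearly
    interpolated onto the reference time base.
    """
    return np.interp(t_ref, t_other, x_other)

# Hypothetical example: a 500 Hz EEG channel and a 100 Hz accelerometer channel
# acquired over the same 10 s window, brought onto the EEG time base.
t_eeg = np.arange(0, 10, 1 / 500)
t_acc = np.arange(0, 10, 1 / 100)
acc = np.sin(2 * np.pi * 0.5 * t_acc)
acc_on_eeg_clock = align_streams(t_eeg, t_acc, acc)
print(acc_on_eeg_clock.shape)  # (5000,)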
Subject(s)
Brain-Computer Interfaces, Wearable Electronic Devices, Electroencephalography, Ergonomics, Humans, Systems Analysis
ABSTRACT
Human motion monitoring and analysis can be an essential part of a wide spectrum of applications, including physical rehabilitation among other potential areas of interest. Creating non-invasive systems for monitoring patients while they perform rehabilitation exercises, to provide them with objective feedback, is one of the current challenges. In this paper we present a wearable multi-sensor system for human motion monitoring, developed for use in rehabilitation. It is composed of a number of small modules that embed high-precision accelerometers and wireless communication to transmit body-motion information to an acquisition device. The results of a set of experiments carried out to assess its performance in real-world setups demonstrate its usefulness in human motion acquisition and tracking, as required, for example, in activity recognition, physical/athletic performance evaluation, and rehabilitation.
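The abstract does not specify how the accelerometer streams are processed downstream. As a hedged illustration of the kind of lightweight features typically derived from such data for activity recognition, consider the sketch below; the window length and sampling rate are assumptions, not values from the paper.

import numpy as np

def window_features(acc, fs=50, win_s=2.0):
    """Extract simple per-window features from tri-axial accelerometer data.

    `acc` has shape (n_samples, 3). For each non-overlapping window, the mean
    and standard deviation of the acceleration magnitude are computed -- the
    kind of descriptor often fed to activity-recognition models. Illustrative
    only, not the paper's processing pipeline.
    """
    win = int(fs * win_s)
    mag = np.linalg.norm(acc, axis=1)
    n_win = len(mag) // win
    mag = mag[: n_win * win].reshape(n_win, win)
    return np.column_stack([mag.mean(axis=1), mag.std(axis=1)])

# Hypothetical 60 s recording at 50 Hz.
acc = np.random.randn(60 * 50, 3)
print(window_features(acc).shape)  # (30, 2)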
ABSTRACT
Understanding the mental processes underlying complex human behavior is a key issue in driving, and a milestone for developing user-centered assistive driving devices. Here, we propose a hybrid method based on electroencephalographic (EEG) and electromyographic (EMG) signatures to distinguish left from right steering in driving scenarios. Twenty-four participants took part in the experiment, which consisted of recordings of 128-channel EEG and of EMG activity from the deltoids and forearm extensors during non-ecological and ecological steering tasks. Specifically, we identified that EEG mu rhythm modulation correlates with motor preparation of self-paced steering actions in the non-ecological task, while concurrent EMG activity of the left (right) deltoid correlates with right (left) steering. Consequently, we exploited the mu rhythm desynchronization from the non-ecological task to detect the steering side through cross-correlation analysis with the ecological EMG signals. Results returned significant cross-correlation values showing the coupling between the non-ecological EEG feature and the muscular activity collected in ecological driving conditions. Moreover, such cross-correlation patterns discriminate the steering side earlier than the EMG signal alone. This hybrid system overcomes the limitations of EEG signals collected in ecological settings, such as low reliability, accuracy, and adaptability, thus adding to the EMG the predictive power characteristic of the cerebral data. These results show how different physiological signals can be combined to control the level of assistance needed by the driver. Supplementary Information: The online version contains supplementary material available at 10.1007/s11571-021-09776-w.
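The key computational step is the cross-correlation between an EEG-derived feature and the EMG recorded while driving. The sketch below illustrates a lagged Pearson cross-correlation on synthetic signals; the sampling rate, lag range, and signal shapes are assumptions, not the paper's pipeline.

import numpy as np

def xcorr_at_lag(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag] (positive lag: y follows x)."""
    if lag > 0:
        a, b = x[:-lag], y[lag:]
    elif lag < 0:
        a, b = x[-lag:], y[:lag]
    else:
        a, b = x, y
    return np.corrcoef(a, b)[0, 1]

# Hypothetical signals at 250 Hz: an EEG-derived mu-desynchronization time course
# and an EMG envelope that follows it by about 0.2 s.
fs = 250
t = np.arange(0, 10, 1 / fs)
eeg_feature = np.exp(-((t - 5.0) ** 2) / 0.1)
emg_envelope = np.exp(-((t - 5.2) ** 2) / 0.1)

lags = np.arange(-fs, fs + 1)                                  # -1 s ... +1 s
xc = [xcorr_at_lag(eeg_feature, emg_envelope, lag) for lag in lags]
print(lags[int(np.argmax(xc))] / fs)                           # ~ 0.2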
ABSTRACT
Decoding motor intentions from non-invasive brain activity monitoring is one of the most challenging aspects of the Brain-Computer Interface (BCI) field. This is especially true in online settings, where classification must be performed in real time, concurrently with the user's movements. In this work, we use a topology-preserving input representation, which is fed to a novel combination of 3D-convolutional and recurrent deep neural networks capable of performing multi-class continual classification of subjects' movement intentions. Our model achieves higher accuracy than a related state-of-the-art model from the literature, despite being trained in a much more restrictive setting and using only a simple form of input signal preprocessing. The results suggest that deep learning models are well suited for deployment in challenging real-time BCI applications such as movement intention recognition.
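To make the combination of 3D convolutions and recurrence concrete, the sketch below pairs a Conv3d front end over (time x scalp-grid) volumes with an LSTM over consecutive windows, emitting one class prediction per window. All layer sizes, the 8x8 electrode grid, and the four output classes are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

class Conv3dRNN(nn.Module):
    """Sketch of a 3D-convolutional + recurrent classifier for EEG windows laid
    out as (time, grid_height, grid_width) volumes. Sizes are assumptions."""

    def __init__(self, n_classes=4, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(),
            nn.MaxPool3d((2, 2, 2)),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((4, 2, 2)),
        )
        self.rnn = nn.LSTM(16 * 4 * 2 * 2, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (batch, seq, 1, T, H, W)
        b, s = x.shape[:2]
        feats = self.conv(x.flatten(0, 1))     # (b*s, 16, 4, 2, 2)
        feats = feats.flatten(1).view(b, s, -1)
        out, _ = self.rnn(feats)               # one hidden state per window
        return self.head(out)                  # (batch, seq, n_classes)

# Hypothetical input: 2 trials, 10 consecutive windows, 32 time samples,
# electrodes arranged on an 8x8 scalp grid.
model = Conv3dRNN()
logits = model(torch.randn(2, 10, 1, 32, 8, 8))
print(logits.shape)  # torch.Size([2, 10, 4])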
ABSTRACT
Driving a car places high cognitive demands on the driver, from sustained attention to perception and action planning. Recent research has investigated the neural processes reflecting the planning of driving actions, aiming to better understand the factors leading to driving errors and to devise methodologies to anticipate and prevent such errors by monitoring the driver's cognitive state and intention. While such anticipation has been shown for discrete driving actions, such as emergency braking, there is no evidence of robust neural signatures of continuous action planning. This study aims to fill this gap by investigating continuous steering actions during a driving task in a car simulator, with multimodal recordings of behavioural and electroencephalography (EEG) signals. System identification is used to assess whether robust neurophysiological signatures emerge before steering actions. Linear decoding models are then used to determine whether such cortical signals can predict continuous steering actions with progressively longer anticipation. Results point to significant EEG signatures of continuous action planning. These neural signals show consistent dynamics across participants for anticipations up to 1 s, while steering actions could be reliably decoded from individual-subject neural activity, with future actions predicted for anticipations up to 1.8 s. Finally, we use canonical correlation analysis to attempt to disentangle brain and non-brain contributors to the EEG-based decoding. Our results suggest that low-frequency cortical dynamics are involved in the planning of steering actions and that EEG is sensitive to that neural activity. As a result, we propose a framework to investigate anticipatory neural activity in realistic continuous motor tasks.
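Linear decoding of a continuous action with a chosen anticipation is commonly set up as a lagged regression from EEG features to a future value of the behavioural trace. The sketch below shows one such setup with ridge regression; the channel count, lag window, anticipation, and penalty are illustrative assumptions, not the study's settings.

import numpy as np
from sklearn.linear_model import Ridge

def build_lagged_design(eeg, n_lags):
    """Stack the past n_lags EEG samples as features for each time point.

    eeg: (n_times, n_channels). Returns (n_times - n_lags, n_channels * n_lags);
    row i holds samples i .. i + n_lags - 1 of every channel.
    """
    n_times, n_ch = eeg.shape
    cols = [eeg[lag : n_times - n_lags + lag] for lag in range(n_lags)]
    return np.hstack(cols)

# Hypothetical data: 60 s of 8-channel EEG at 100 Hz and a steering-angle trace.
# To predict the angle `anticipation` samples ahead, the target is shifted
# forward relative to the lagged EEG features.
fs, n_lags, anticipation = 100, 50, 100          # 0.5 s of lags, 1 s ahead
rng = np.random.default_rng(0)
eeg = rng.standard_normal((60 * fs, 8))
steering = rng.standard_normal(60 * fs)

X = build_lagged_design(eeg, n_lags)
y = steering[n_lags + anticipation - 1 :]
X = X[: len(y)]

model = Ridge(alpha=1.0).fit(X, y)
print(model.predict(X[:5]).round(2))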
Subject(s)
Anticipation, Psychological/physiology, Automobile Driving/psychology, Cerebral Cortex/physiology, Canonical Correlation Analysis, Computer Simulation, Electroencephalography, Humans, Linear Models, Neural Networks, Computer, Psychomotor Performance/physiology
ABSTRACT
The capability of grasping and lifting an object in a suitable, stable, and controlled way is an outstanding feature for a robot and, thus far, one of the major unsolved problems in robotics. No robotic tool able to control a grasp as finely as, for instance, the human hand does has been demonstrated to date. Because of its importance in science and in many applications, from biomedical engineering to manufacturing, the issue has been the subject of deep scientific investigation in both neurophysiology and robotics. While the former contributes a profound understanding of the dynamics of real-time slippage and grasp-force control in the human hand, the latter increasingly tries to reproduce, or take inspiration from, nature's approach by means of hardware and software technology. In this regard, one of the major constraints robotics has to overcome is the real-time processing of the large amount of data generated by tactile sensors while grasping, which strains the available computational power. In this paper, a bio-inspired approach to tactile data processing has been followed in order to design and test a hardware-software robotic architecture that relies on the parallel processing of a large number of tactile sensing signals. The architecture's working principle is based on the cellular nonlinear/neural network (CNN) paradigm, and it uses both hand shape and spatio-temporal features obtained from an array of microfabricated force sensors to control the sensory-motor coordination of the robotic system. Prototypical grasping tasks were selected to measure the performance of the system applied to a computer-interfaced robotic hand. Successful grasps of several objects completely unknown to the robot, e.g. soft and deformable objects like plastic bottles, soft balls, and Japanese tofu, have been demonstrated.
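The cellular nonlinear/neural network paradigm mentioned above processes a 2-D array (here, a tactile image) through locally coupled cells governed by a feedback template A, a control template B, and a bias z. The sketch below implements a discrete-time Euler step of the standard Chua-Yang cell equation on a toy tactile frame; the templates, bias, and frame size are arbitrary illustrative values, not those of the paper's architecture.

import numpy as np
from scipy.ndimage import correlate

def cnn_step(x, u, A, B, z, dt=0.1):
    """One Euler step of a Chua-Yang cellular nonlinear network (CNN).

    x: cell states, u: input image (e.g. a tactile pressure map), both 2-D
    arrays of equal shape. A is the feedback template, B the control template,
    z the bias. Values here are illustrative, not taken from the paper.
    """
    y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))        # piecewise-linear cell output
    dx = -x + correlate(y, A, mode="constant") + correlate(u, B, mode="constant") + z
    return x + dt * dx

# Hypothetical 16x16 tactile frame with a pressed square in the middle.
u = np.zeros((16, 16))
u[5:11, 5:11] = 1.0
A = np.array([[0, 0, 0], [0, 2, 0], [0, 0, 0]], float)
B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float)
z = -0.5
x = u.copy()
for _ in range(50):
    x = cnn_step(x, u, A, B, z)
y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))
print(np.round(y[4:8, 4:8], 1))                       # network output after 50 steps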
Subject(s)
Hand Strength/physiology, Hand, Neural Networks, Computer, Robotics, Algorithms, Biomechanical Phenomena, Hand/anatomy & histology, Hand/physiology, Humans, Robotics/instrumentation, Robotics/methods, Touch
ABSTRACT
The objective of the present work was to identify electroencephalographic (EEG) components in order to distinguish between braking and accelerating intention in simulated car driving. To do so, we collected high-density EEG data from thirty participants while they were driving in a car simulator. The EEG was separated into independent components that were clustered across participants according to their scalp map topographies. For each component, time-frequency activity related to braking and acceleration events was determined through wavelet analysis, and the cortical generators were estimated through minimum norm source localisation. Comparisons of the time-frequency patterns of power and phase activations revealed that theta power synchronisation distinguishes braking from acceleration events 800 ms before the action and that phase-locked activity increases for braking 800 ms before foot movement in the theta-alpha frequency range. In addition, source reconstruction showed that the dorso-mesial part of the premotor cortex plays a key role in the preparation of foot movement. Overall, the results illustrate that dorso-mesial premotor areas are involved in movement preparation while driving, and that low-frequency EEG rhythms could be exploited to predict drivers' intention to brake or accelerate.
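Wavelet analysis of this kind is typically implemented by convolving each component's time course with complex Morlet wavelets and taking the squared magnitude. The sketch below shows such a computation on a synthetic single-channel epoch; the frequency range, number of cycles, and sampling rate are assumptions, not the study's parameters.

import numpy as np

def morlet_power(signal, fs, freqs, n_cycles=7):
    """Time-frequency power via convolution with complex Morlet wavelets.

    Returns an array of shape (len(freqs), len(signal)). Illustrative sketch,
    not the study's implementation.
    """
    power = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)
        t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # unit energy
        power[i] = np.abs(np.convolve(signal, wavelet, mode="same")) ** 2
    return power

# Hypothetical single-channel epoch: 2 s of EEG at 250 Hz with a theta burst
# centred 0.8 s before an event at t = 0.
fs = 250
t = np.arange(-1.5, 0.5, 1 / fs)
eeg = np.random.randn(len(t)) + 3 * np.sin(2 * np.pi * 6 * t) * np.exp(-((t + 0.8) ** 2) / 0.02)
tf = morlet_power(eeg, fs, freqs=np.arange(4, 13))
print(tf.shape)  # (9, 500)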
Subject(s)
Automobile Driving/psychology, Reaction Time/physiology, Acceleration, Adult, Automobiles, Computer Simulation, Electroencephalography/methods, Female, Humans, Intention, Male, Motor Cortex/physiology, Spatio-Temporal Analysis, Young Adult
ABSTRACT
In this study, we summarize the features that characterize pre-movement and pre-motor-imagery (i.e., before imagining the movement) electroencephalography (EEG) data in humans, from both the neuroscientist's and the engineer's point of view. We describe the state of the brain before a voluntary movement and how it has been used in practical applications such as brain-computer interfaces (BCIs). Usually, in BCI applications, the focus of study is on post-movement or motor-imagery potentials. However, this study shows that it is possible to develop BCIs based on pre-movement or pre-motor-imagery potentials such as the Bereitschaftspotential (BP). Using these potentials, we can correctly predict the onset of the upcoming movement, its direction, and even the limb engaged in the movement. This information can help in designing more efficient rehabilitation tools, as well as BCIs with shorter response times that appear more natural to users.
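The Bereitschaftspotential discussed above is usually visualized by averaging low-frequency EEG epochs time-locked to movement onset. The sketch below illustrates that procedure on synthetic data; the filter cut-off, epoch window, and baseline interval are illustrative choices, not prescriptions from the study.

import numpy as np
from scipy.signal import butter, filtfilt

def readiness_potential(eeg, onsets, fs, pre_s=2.0, post_s=0.5):
    """Average low-frequency EEG around movement onsets to expose a slow
    pre-movement negativity (Bereitschaftspotential-like waveform).

    eeg: single-channel signal; onsets: sample indices of movement start.
    """
    b, a = butter(4, 5 / (fs / 2), btype="low")       # keep < 5 Hz
    slow = filtfilt(b, a, eeg)
    pre, post = int(pre_s * fs), int(post_s * fs)
    epochs = [slow[o - pre : o + post] for o in onsets
              if o - pre >= 0 and o + post <= len(slow)]
    epochs = np.array(epochs)
    return epochs.mean(axis=0) - epochs[:, : pre // 4].mean()   # baseline-correct

# Hypothetical data: 100 s of one EEG channel at 250 Hz with movement onsets
# every 5 s; a real analysis would use EMG or a button press to mark onsets.
fs = 250
eeg = np.random.randn(100 * fs)
onsets = np.arange(5 * fs, 95 * fs, 5 * fs)
rp = readiness_potential(eeg, onsets, fs)
print(rp.shape)  # (625,)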
ABSTRACT
One of the changes seen in electroencephalography (EEG) data preceding human voluntary movement is a cortical potential called the readiness potential (RP). Detection of this potential can benefit researchers in clinical neuroscience working on the rehabilitation of brain dysfunction, as well as those working on brain-computer interfacing who need a suitable mechanism for detecting movement intention. Here, constrained blind source extraction (CBSE) is attempted for detection of the RP. A suitable constraint is defined and applied. The results are also compared with those of traditional blind source separation in terms of true positive rate, false positive rate, and computation time. The results show that the CBSE approach has superior overall performance.
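As a rough illustration of how a reference signal can guide source extraction, the sketch below runs unconstrained ICA and then selects the component most correlated with an RP-like template. This is a simplified stand-in, not the CBSE algorithm of the study (in a true CBSE the constraint is embedded in the extraction itself rather than used for post-hoc selection), and all signals are synthetic.

import numpy as np
from sklearn.decomposition import FastICA

def extract_with_reference(X, reference):
    """Run unconstrained ICA and pick the source most correlated with a reference.

    X: (n_samples, n_channels), reference: (n_samples,). Returns the selected
    source and its absolute correlation with the reference.
    """
    sources = FastICA(n_components=X.shape[1], random_state=0).fit_transform(X)
    corrs = [abs(np.corrcoef(sources[:, k], reference)[0, 1])
             for k in range(sources.shape[1])]
    best = int(np.argmax(corrs))
    return sources[:, best], corrs[best]

# Hypothetical mixture: an RP-like slow negativity buried in three noisy channels.
fs = 250
t = np.arange(-2, 0.5, 1 / fs)
rp = -np.clip(t + 2, 0, None) * (t < 0)            # ramp-like negativity before onset
rng = np.random.default_rng(1)
mixing = rng.standard_normal((3, 3))
X = np.column_stack([rp, rng.standard_normal(len(t)), rng.standard_normal(len(t))]) @ mixing.T
source, corr = extract_with_reference(X, rp)
print(round(corr, 2))                               # close to 1 if the extraction worked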