1.
Sci Rep ; 14(1): 16690, 2024 07 19.
Article in English | MEDLINE | ID: mdl-39030206

ABSTRACT

Exoskeleton-based support for patients requires the learning of individual machine-learning models to recognize movement intentions of patients based on the electroencephalogram (EEG). A major issue in EEG-based movement intention recognition is the long calibration time required to train a model. In this paper, we propose a transfer learning approach that eliminates the need for a calibration session. This approach is validated on healthy subjects in this study. We will use the proposed approach in our future rehabilitation application, where the movement intention of the affected arm of a patient can be inferred from the EEG data recorded during bilateral arm movements enabled by the exoskeleton mirroring arm movements from the unaffected to the affected arm. For the initial evaluation, we compared two trained models for predicting unilateral and bilateral movement intentions without applying a classifier transfer. For the main evaluation, we predicted unilateral movement intentions without a calibration session by transferring the classifier trained on data from bilateral movement intentions. Our results showed that the classification performance for the transfer case was comparable to that in the non-transfer case, even with only 4 or 8 EEG channels. Our results contribute to robotic rehabilitation by eliminating the need for a calibration session, since EEG data for training is recorded during the rehabilitation session, and only a small number of EEG channels are required for model training.
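
A minimal sketch of the classifier-transfer idea described in this abstract, assuming pre-extracted feature vectors per EEG epoch and a shrinkage-LDA classifier (all array names, set sizes, and the choice of LDA are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical pre-extracted feature matrices, one row per EEG epoch
# (e.g., 8 channels x 10 time bins = 80 features; labels: 1 = movement
# intention, 0 = rest).  Random data stands in for real recordings.
rng = np.random.default_rng(0)
n_features = 8 * 10
X_bilateral = rng.standard_normal((200, n_features))   # bilateral movements
y_bilateral = rng.integers(0, 2, 200)
X_unilateral = rng.standard_normal((100, n_features))  # unilateral movements
y_unilateral = rng.integers(0, 2, 100)

# Shrinkage LDA is a common choice for ERP/EEG features (an assumption here).
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
clf.fit(X_bilateral, y_bilateral)        # trained on bilateral data only

# Classifier transfer: no calibration on unilateral data, just apply the model.
print("transfer accuracy:", clf.score(X_unilateral, y_unilateral))
```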


Subject(s)
Electroencephalography; Exoskeleton Device; Intention; Movement; Humans; Electroencephalography/methods; Male; Calibration; Movement/physiology; Adult; Machine Learning; Female; Young Adult
3.
Front Neurorobot ; 17: 1297990, 2023.
Article in English | MEDLINE | ID: mdl-38162893

ABSTRACT

Robot learning based on implicitly extracted error detections (e.g., EEG-based error detections) has been well investigated in human-robot interaction (HRI). In particular, the use of the error-related potential (ErrP) evoked when recognizing errors is advantageous for robot learning when evaluation criteria cannot be explicitly defined, e.g., due to the complex behavior of robots. In most studies, erroneous behavior of robots was recognized visually. In some studies, visuo-tactile stimuli were used to evoke ErrPs, or a tactile cue was used to indicate upcoming errors. To our knowledge, there are no studies in which ErrPs are evoked when errors are recognized only via the tactile channel. Hence, we investigated ErrPs evoked by tactile recognition of errors during HRI. In our scenario, subjects tactilely recognized errors caused by incorrect behavior of an orthosis during the execution of arm movements. EEG data from eight subjects were recorded. Subjects were asked to give a motor response to ensure error detection. The latency between the occurrence of errors and the response to errors was expected to be short. We assumed that the motor-related brain activity is temporally correlated with the ErrP and might be exploited by the classifier. To better interpret and test our results, we therefore tested ErrP detection in two additional scenarios, i.e., without motor response and with delayed motor response. In addition, we transferred the trained classifier between the three scenarios (motor response, no motor response, delayed motor response). Response times to errors were short. Nevertheless, high ErrP classification performance was found for all subjects in both the motor response and the no motor response conditions. Further, ErrP classification performance was reduced for the transfer between motor response and delayed motor response, but not for the transfer between motor response and no motor response. We have shown that tactilely induced errors can be detected with high accuracy from brain activity. Our preliminary results suggest that, also for tactile ErrPs, the brain response is clear enough that a motor response is not required for classification. However, in future work, we will investigate tactile-based ErrP classification more systematically.
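
For illustration, a minimal sketch of how time-locked ErrP epochs might be cut from the continuous EEG around error events (window borders, sampling rate, and array shapes are assumptions; the study's actual preprocessing is not reproduced here):

```python
import numpy as np

def extract_errp_epochs(eeg, error_samples, sfreq, tmin=-0.2, tmax=0.8):
    """Cut time-locked windows around error onsets.

    eeg           : array (n_channels, n_samples) of continuous EEG
    error_samples : sample indices at which an error occurred
    sfreq         : sampling rate in Hz
    tmin, tmax    : epoch borders in seconds relative to the error onset
    """
    start_off, stop_off = int(tmin * sfreq), int(tmax * sfreq)
    epochs = [eeg[:, s + start_off : s + stop_off]
              for s in error_samples
              if s + start_off >= 0 and s + stop_off <= eeg.shape[1]]
    return np.stack(epochs)          # (n_epochs, n_channels, n_times)

# Toy usage with synthetic data: 64 channels, 60 s at 100 Hz.
rng = np.random.default_rng(1)
eeg = rng.standard_normal((64, 6000))
error_onsets = np.array([500, 1500, 3200, 5200])
print(extract_errp_epochs(eeg, error_onsets, sfreq=100).shape)  # (4, 64, 100)
```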

4.
Front Robot AI ; 7: 558531, 2020.
Article in English | MEDLINE | ID: mdl-33501322

ABSTRACT

During human-robot interaction, errors will occur. Hence, understanding the effects of interaction errors, and especially the effect of prior knowledge, on robot learning performance is relevant for developing appropriate approaches for learning under natural interaction conditions, since future robots will continue to learn based on what they have already learned. In this study, we investigated interaction errors that occurred under two learning conditions, i.e., in the case that the robot learned without prior knowledge (cold-start learning) and in the case that the robot had prior knowledge (warm-start learning). In our human-robot interaction scenario, the robot learns to assign the correct action to the current human intention (gesture). Gestures were not predefined; the robot had to learn their meaning. We used a contextual-bandit approach to maximize the expected payoff, taking into account (a) the current human intention (gesture) and (b) the current intrinsic human feedback after each action selection of the robot. As an intrinsic evaluation of the robot's behavior, we used the error-related potential (ErrP) in the human electroencephalogram as a reinforcement signal. Interaction errors can occur either because a gesture (human intention) is incorrectly captured and thus misinterpreted, or because the ErrP classification (human feedback) is wrong. We investigated these two types of interaction errors and their effects on the learning process. Our results show that learning and its online adaptation were successful under both learning conditions (except for one subject in cold-start learning). Furthermore, warm-start learning achieved faster convergence, while cold-start learning was less affected by online changes in the current context.
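
The learning loop can be sketched as a contextual bandit with a binary, EEG-derived reward; the epsilon-greedy action selection, the incremental value update, and the simulated 10% ErrP misclassification rate below are illustrative assumptions, not the exact algorithm of the study:

```python
import numpy as np

rng = np.random.default_rng(2)
n_gestures, n_actions = 4, 4

def init_values(warm_start):
    # Cold start: no prior knowledge.  Warm start: value estimates carried
    # over from earlier interaction, here faked as a bias toward the true
    # gesture-to-action mapping (an identity mapping in this toy example).
    if warm_start:
        return np.full((n_gestures, n_actions), 0.5) + 0.4 * np.eye(n_gestures)
    return np.zeros((n_gestures, n_actions))

def run(warm_start, n_trials=500, epsilon=0.1):
    Q = init_values(warm_start)
    counts = np.ones_like(Q)
    correct_action = np.arange(n_actions)      # hidden gesture -> action map
    hits = 0
    for _ in range(n_trials):
        g = rng.integers(n_gestures)           # observed context (gesture)
        a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[g]))
        correct = a == correct_action[g]
        # The reward stands in for the ErrP classifier output (1 = no error
        # perceived, 0 = error perceived); it is flipped with 10% probability
        # to simulate ErrP misclassification.
        reward = float(correct) if rng.random() > 0.1 else float(not correct)
        counts[g, a] += 1
        Q[g, a] += (reward - Q[g, a]) / counts[g, a]   # incremental mean update
        hits += int(correct)
    return hits / n_trials

print("cold start:", run(False), "  warm start:", run(True))
```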

5.
Front Robot AI ; 5: 43, 2018.
Article in English | MEDLINE | ID: mdl-33500929

ABSTRACT

We describe the BesMan learning platform, which allows robotic manipulation behavior to be learned. It is a stand-alone solution that can be combined with different robotic systems and applications. Behavior that is adaptive to task changes and different target platforms can be learned to solve unforeseen challenges and tasks that can occur during deployment of a robot. The learning platform is composed of components that deal with preprocessing of human demonstrations, segmenting the demonstrated behavior into basic building blocks, imitation, refinement by means of reinforcement learning, and generalization to related tasks. The core components are evaluated in an empirical study with 10 participants with respect to automation level and time requirements. We show that most of the required steps for transferring skills from humans to robots can be automated and that all steps can be performed in reasonable time, allowing the learning platform to be applied on demand.
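
As one concrete illustration of the segmentation component, a velocity-threshold heuristic that splits a demonstrated trajectory into basic building blocks at pauses; this is a common heuristic and an assumption here, not necessarily the segmentation method used by the platform:

```python
import numpy as np

def segment_demo(positions, dt, speed_thresh=0.05):
    """Split a demonstrated trajectory into building blocks at pauses.

    positions : array (n_samples, 3) of end-effector positions
    dt        : sample period in seconds
    Returns a list of (start, stop) sample-index pairs, one per building block.
    """
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    moving = speed > speed_thresh
    segments, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i
        elif not m and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(moving)))
    return segments

# Toy demonstration: a straight movement, a pause, and a second movement.
t = np.linspace(0.0, 1.0, 101)
traj = np.concatenate([np.c_[t, t, t],
                       np.full((50, 3), 1.0),
                       np.c_[1.0 + t, 1.0 + t, 1.0 + t]])
print(segment_demo(traj, dt=0.01))   # two segments: one per movement phase
```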

6.
Front Neurorobot ; 12: 84, 2018.
Article in English | MEDLINE | ID: mdl-30618706

ABSTRACT

The feeling of embodiment, i.e., experiencing the body as belonging to oneself and being able to integrate objects into one's bodily self-representation, is a key aspect of human self-consciousness and has been shown to importantly shape human cognition. An extension of such feelings toward robots has been argued to be crucial for assistive technologies aiming at restoring, extending, or simulating sensorimotor functions. Empirical and theoretical work illustrates the importance of sensory feedback for the feeling of embodiment and also for immersion; we focus on the perceptual level of touch and the role of tactile feedback in various assistive robotic devices. We critically review how different facets of tactile perception in humans, i.e., affective, social, and self-touch, might influence embodiment. This is particularly important as current assistive robotic devices, such as prostheses, orthoses, exoskeletons, and devices for teleoperation, often limit touch to low-density and spatially constrained haptic feedback, i.e., the mere touch sensation linked to an action. Here, we analyze, discuss, and propose how and to what degree tactile feedback might increase the embodiment of certain robotic devices, e.g., prostheses, and the feeling of immersion in human-robot interaction, e.g., in teleoperation. Based on recent findings from cognitive psychology on interactive processes between touch and embodiment, we discuss technical solutions for specific applications, which might be used to enhance embodiment and to facilitate the study of how embodiment might alter human-robot interactions. We postulate that high-density and large-surface sensing and stimulation are required to foster embodiment of such assistive devices.

7.
Sci Rep ; 7(1): 17562, 2017 12 14.
Article in English | MEDLINE | ID: mdl-29242555

ABSTRACT

Reinforcement learning (RL) enables robots to learn their optimal behavioral strategy in dynamic environments based on feedback. Explicit human feedback during robot RL is advantageous, since an explicit reward function can be easily adapted. However, it is very demanding and tiresome for a human to continuously and explicitly generate feedback. Therefore, the development of implicit approaches is of high relevance. In this paper, we used the error-related potential (ErrP), an event-related activity in the human electroencephalogram (EEG), as an intrinsically generated implicit feedback (reward) for RL. Initially, we validated our approach with seven subjects in a simulated robot learning scenario. ErrPs were detected online in single trials with a balanced accuracy (bACC) of 91%, which was sufficient to learn to recognize gestures and the correct mapping between human gestures and robot actions in parallel. Finally, we validated our approach in a real robot scenario, in which seven subjects freely chose gestures and the real robot correctly learned the mapping between gestures and actions (ErrP detection: 90% bACC). In this paper, we demonstrated that intrinsically generated EEG-based human feedback in RL can successfully be used to implicitly improve gesture-based robot control during human-robot interaction. We call our approach intrinsic interactive RL.
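
A short sketch of how the reported balanced accuracy (bACC) is computed, a metric chosen because error trials are typically much rarer than correct ones (the toy numbers below are invented):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of the true positive rate and the true negative rate.

    Balanced accuracy is used instead of plain accuracy because ErrP (error)
    trials are much rarer than trials with correct robot actions.
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tpr = np.mean(y_pred[y_true == 1] == 1)
    tnr = np.mean(y_pred[y_true == 0] == 0)
    return 0.5 * (tpr + tnr)

# Toy example: 20 error trials among 100 correct ones.
y_true = np.array([1] * 20 + [0] * 100)
y_pred = np.concatenate([np.ones(18), np.zeros(2),     # 18 of 20 errors found
                         np.zeros(90), np.ones(10)])   # 10 false alarms
print(f"bACC = {balanced_accuracy(y_true, y_pred):.2f}")   # (0.90 + 0.90) / 2
```

For the binary case, scikit-learn's balanced_accuracy_score returns the same value.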


Subject(s)
Evoked Potentials; Reinforcement, Psychology; Robotics; Electroencephalography; Feedback, Psychological/physiology; Female; Humans; Male
8.
Sensors (Basel) ; 17(7)2017 Jul 03.
Article in English | MEDLINE | ID: mdl-28671632

ABSTRACT

A current trend in the development of assistive devices for rehabilitation, for example exoskeletons or active orthoses, is to utilize physiological data to enhance their functionality and usability, for example by predicting the patient's upcoming movements using electroencephalography (EEG) or electromyography (EMG). However, these modalities have different temporal properties and classification accuracies, which results in specific advantages and disadvantages. To use physiological data analysis in rehabilitation devices, the processing should be performed in real time, guarantee support close to the natural movement onset, provide high mobility, and be performed by miniaturized systems that can be embedded into the rehabilitation device. We present a novel Field-Programmable Gate Array (FPGA)-based system for real-time movement prediction using physiological data. Its parallel processing capabilities allow the combination of movement predictions based on EEG and EMG and, additionally, a P300 detection, which is likely evoked by instructions of the therapist. The system is evaluated in an offline and an online study with twelve healthy subjects in total. We show that it provides high computational performance and significantly lower power consumption than a standard PC. Furthermore, despite the use of fixed-point computations, the proposed system achieves a classification accuracy similar to that of systems using double-precision floating-point arithmetic.
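
To illustrate the fixed-point aspect, a small sketch comparing a floating-point and a Q-format fixed-point evaluation of a linear classifier score (the Q-format width and the plain dot product are assumptions; the actual FPGA datapath is not reproduced):

```python
import numpy as np

FRAC_BITS = 12                       # assumed Q-format: 12 fractional bits
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Quantize floating-point values to signed fixed-point integers."""
    return np.round(np.asarray(x) * SCALE).astype(np.int64)

def fixed_dot(weights_fx, features_fx):
    """Integer multiply-accumulate followed by a shift back to Q-format,
    mimicking what a fixed-point FPGA datapath would compute."""
    return np.dot(weights_fx, features_fx) >> FRAC_BITS

rng = np.random.default_rng(3)
w = rng.standard_normal(64) * 0.1    # trained linear classifier weights
x = rng.standard_normal(64)          # one feature vector (e.g., an EEG epoch)

float_score = float(np.dot(w, x))
fixed_score = fixed_dot(to_fixed(w), to_fixed(x)) / SCALE
print(f"float: {float_score:.5f}   fixed-point: {fixed_score:.5f}")
```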


Subject(s)
Movement; Brain-Computer Interfaces; Electroencephalography; Electromyography; Humans; Orthotic Devices; Self-Help Devices
9.
Comput Biol Med ; 79: 286-298, 2016 12 01.
Article in English | MEDLINE | ID: mdl-27837720

ABSTRACT

For single-trial detection of event-related potentials (ERPs), spatial and spectral filters are two of the most commonly used pre-processing techniques for signal enhancement. Spatial filters reduce the dimensionality of the data while suppressing the noise contribution, and spectral filters attenuate frequency components that most likely belong to the noise subspace. However, the frequency spectrum of ERPs overlaps with that of the ongoing electroencephalogram (EEG) and with different types of artifacts. Therefore, proper selection of the spectral filter cutoffs is not a trivial task. In this work, we developed a supervised method to simultaneously estimate the spatial and finite impulse response (FIR) spectral filters. We evaluated the performance of the method on offline single-trial classification of ERPs in datasets recorded during an oddball paradigm. The proposed spatio-spectral filter improved the overall single-trial classification performance by almost 9% on average compared with the case in which no spatial filters were used. We also analyzed the effects of different spectral filter lengths and of the number of channels retained after spatial filtering.
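
The following sketch only shows how a spatial filter and an FIR spectral filter are applied to multichannel epochs; the supervised, simultaneous estimation of both filters proposed in the paper is not reproduced, and all shapes, cutoffs, and weights below are placeholders:

```python
import numpy as np
from scipy.signal import firwin, lfilter

sfreq = 100.0
rng = np.random.default_rng(4)
epochs = rng.standard_normal((50, 62, 100))   # (n_epochs, n_channels, n_times)

# Spectral part: a linear-phase FIR band-pass filter (order and band edges
# are assumptions).
fir = firwin(numtaps=31, cutoff=[0.5, 12.0], pass_zero=False, fs=sfreq)

# Spatial part: project 62 channels onto 4 virtual channels.  In the paper
# these weights are learned in a supervised way; random stand-ins are used here.
W_spatial = rng.standard_normal((4, 62))

filtered = lfilter(fir, [1.0], epochs, axis=-1)            # filter along time
projected = np.einsum("vc,ect->evt", W_spatial, filtered)  # spatial projection
print(projected.shape)                                     # (50, 4, 100)
```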


Subject(s)
Electroencephalography/methods; Evoked Potentials/physiology; Signal Processing, Computer-Assisted; Adult; Algorithms; Brain/physiology; Brain-Computer Interfaces; Humans; Male
10.
IEEE Trans Neural Syst Rehabil Eng ; 24(3): 320-32, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26701866

ABSTRACT

This paper proposes an application-oriented approach that enables a classifier trained within an experimental scenario to be transferred to a more complex application scenario or a specific rehabilitation situation in which sufficient training data cannot be collected within a reasonable amount of time. The proposed transfer approach is not limited to the same type of event-related potential. We show that a classifier trained to detect a certain brain pattern can be used successfully to detect another brain pattern that is expected to be similar to the first one. In particular, a classifier is transferred between two different types of error-related potentials (ErrPs) within the same subject. The classifier trained on observation ErrPs is used to detect interaction ErrPs, since twice as much training data can be collected for observation ErrPs as for interaction ErrPs during the same calibration time. Our results show that the proposed transfer approach is feasible and outperforms another approach in which a classifier is transferred between different subjects but the same type of ErrP is used to train and test the classifier. The proposed approach is a promising way to handle scarce training data and to reduce calibration time in ErrP-based brain-computer interfaces.
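
A minimal sketch of the comparison behind this transfer (train on the larger observation-ErrP set and test on interaction ErrPs, versus training directly on the smaller interaction-ErrP set); the features, set sizes, and linear SVM are placeholder assumptions:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_feat = 40

# Within the same calibration time, twice as many observation-ErrP epochs can
# be collected as interaction-ErrP epochs; the sizes and random features below
# are purely illustrative.
X_obs, y_obs = rng.standard_normal((200, n_feat)), rng.integers(0, 2, 200)
X_int, y_int = rng.standard_normal((100, n_feat)), rng.integers(0, 2, 100)

# Transfer: train on observation ErrPs, test on interaction ErrPs.
transfer_acc = SVC(kernel="linear").fit(X_obs, y_obs).score(X_int, y_int)

# No transfer: cross-validate directly on the smaller interaction-ErrP set.
no_transfer_acc = cross_val_score(SVC(kernel="linear"), X_int, y_int, cv=5).mean()

print(f"transfer: {transfer_acc:.2f}  no transfer: {no_transfer_acc:.2f}")
```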


Subject(s)
Brain-Computer Interfaces; Brain/physiology; Electroencephalography/classification; Evoked Potentials; Adult; Calibration; Female; Humans; Male; Photic Stimulation; Psychomotor Performance; Signal Processing, Computer-Assisted; Young Adult
11.
PLoS One ; 9(1): e85060, 2014.
Article in English | MEDLINE | ID: mdl-24416341

ABSTRACT

Assistive devices, like exoskeletons or orthoses, often make use of physiological data that allow the detection or prediction of movement onset. Movement onset can be detected at the executing site, the skeletal muscles, e.g., by means of electromyography. Movement intention can be detected by the analysis of brain activity, recorded by, e.g., electroencephalography, or in the behavior of the subject by, e.g., eye movement analysis. These different approaches can be used depending on the kind of neuromuscular disorder, the state of therapy, or the assistive device. In this work, we conducted experiments with healthy subjects performing self-initiated and self-paced arm movements. While other studies showed that multimodal signal analysis can improve the performance of predictions, we show that a sensible combination of electroencephalographic and electromyographic data can potentially improve the adaptability of assistive technical devices with respect to the individual demands of, e.g., early and late stages in rehabilitation therapy. In earlier stages, for patients with weak muscle or motor-related brain activity, it is important to achieve high positive detection rates to support self-initiated movements. Detecting most movement intentions from electroencephalographic or electromyographic data motivates a patient and can enhance her/his progress in rehabilitation. In a later stage, for patients with stronger muscle or brain activity, reliable movement prediction is more important, to encourage patients to behave more accurately and to invest more effort in the task. Further, the false detection rate needs to be reduced. We propose that both types of physiological data can be used in an AND combination, in which both signals must be detected to drive a movement. With this approach, the behavior of the patient during later therapy can be controlled better, and false positive detections, which can be very annoying for patients who are further advanced in rehabilitation, can be avoided.
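
A minimal sketch of the proposed AND combination, in which a movement is only supported when both modalities report a detection within a short time window (the window length and the example onset times are assumptions):

```python
import numpy as np

def and_combination(eeg_onsets, emg_onsets, max_lag=0.3):
    """Keep only movement detections confirmed by both modalities.

    eeg_onsets, emg_onsets : detection times in seconds
    max_lag                : maximal allowed time difference (an assumption)
    Returns the EMG onsets that are confirmed by a nearby EEG detection.
    """
    eeg_onsets = np.asarray(eeg_onsets)
    confirmed = []
    for t_emg in emg_onsets:
        if np.any(np.abs(eeg_onsets - t_emg) <= max_lag):
            confirmed.append(t_emg)
    return confirmed

eeg = [1.10, 4.85, 9.20]           # EEG-predicted movement intentions
emg = [1.25, 5.05, 7.00, 9.15]     # EMG-detected movement onsets
print(and_combination(eeg, emg))   # 7.00 is dropped: no EEG confirmation
```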


Subject(s)
Arm/physiology; Intention; Movement/physiology; Orthotic Devices; Self-Help Devices; Adult; Electroencephalography/methods; Electromyography/methods; Humans; Male; Predictive Value of Tests; Robotics
12.
J Neurosci Methods ; 221: 41-7, 2014 Jan 15.
Article in English | MEDLINE | ID: mdl-24056231

ABSTRACT

Electroencephalographic signals are commonly contaminated by eye artifacts, even if recorded under controlled conditions. The objective of this work was to quantitatively compare standard artifact removal methods (regression, filtered regression, Infomax, and second-order blind identification (SOBI)) and two artifact identification approaches for independent component analysis (ICA) methods, i.e., ADJUST and correlation. To this end, eye artifacts were removed and the cleaned datasets were used for single-trial classification of the P300 (a type of event-related potential elicited using the oddball paradigm). Statistical analysis of the results confirms that the combination of Infomax and ADJUST provides a relatively better performance (0.6% improvement on average across all subjects), while the combination of SOBI and correlation performs the worst. Low-pass filtering the data at lower cutoffs (here 4 Hz) can also improve the classification accuracy. Without requiring any artifact reference channel, the combination of Infomax and ADJUST improves the classification performance more than the other methods for both examined filtering cutoffs, i.e., 4 Hz and 25 Hz.
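
A hedged MNE-Python sketch of the best-performing combination (Infomax ICA plus automatic component selection); MNE does not ship ADJUST, so the excluded component indices below are a hypothetical manual stand-in, and the synthetic Raw object only serves to illustrate the processing steps:

```python
import numpy as np
import mne

# Synthetic Raw object standing in for the recorded EEG (names and sampling
# rate are assumptions).
sfreq, n_channels = 250.0, 32
info = mne.create_info([f"EEG{i:02d}" for i in range(n_channels)], sfreq, "eeg")
raw = mne.io.RawArray(np.random.randn(n_channels, int(60 * sfreq)) * 1e-5, info)

# Low-pass filtering at a low cutoff (4 Hz in the study) before P300
# classification.
raw_filt = raw.copy().filter(l_freq=None, h_freq=4.0)

# Infomax ICA decomposition (extended Infomax is a common choice).
ica = mne.preprocessing.ICA(n_components=20, method="infomax",
                            fit_params=dict(extended=True), random_state=0)
ica.fit(raw_filt)

# ADJUST would select ocular components automatically from their spatial and
# temporal features; the indices below are a hypothetical manual selection.
ica.exclude = [0, 1]
raw_clean = ica.apply(raw_filt.copy())
```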


Subject(s)
Algorithms; Artifacts; Electroencephalography; Signal Processing, Computer-Assisted; Adult; Electrooculography; Evoked Potentials/physiology; Eye Movements/physiology; Humans; Male
13.
PLoS One ; 8(12): e81732, 2013.
Article in English | MEDLINE | ID: mdl-24358125

ABSTRACT

The ability of today's robots to autonomously support humans in their daily activities is still limited. To improve this, predictive human-machine interfaces (HMIs) can be applied to better support future interaction between human and machine. To infer upcoming context-based behavior, relevant brain states of the human have to be detected. This is achieved by brain reading (BR), a passive approach for single-trial EEG analysis that makes use of supervised machine learning (ML) methods. In this work, we propose that BR is able to detect concrete states of the interacting human. To support this, we show that BR detects patterns in the electroencephalogram (EEG) that can be related to event-related activity in the EEG, like the P300, which are indicators of concrete states or brain processes such as target recognition. Further, we improve the robustness and applicability of BR in application-oriented scenarios by identifying and combining the most relevant training data for single-trial classification and by applying classifier transfer. We show that training and testing, i.e., application of the classifier, can be carried out on different classes, if the samples of both classes miss a relevant pattern. Classifier transfer is important for the use of BR in application scenarios in which only small amounts of training data are available. Finally, we demonstrate a dual BR application in an experimental setup that requires behavior similar to that performed during the teleoperation of a robotic arm. Here, target recognition processes and movement preparation processes are detected simultaneously. In summary, our findings contribute to the development of robust and stable predictive HMIs that enable the simultaneous support of different interaction behaviors.
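
A sketch of the "dual BR" idea, running two independently trained single-trial detectors on the same incoming epochs; in practice the two detectors would use different windows and features, and everything below (data, classifier choice) is a placeholder assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n_feat = 30

# Two independently trained single-trial detectors.  The random features and
# labels stand in for P300-related and movement-preparation patterns.
p300_clf = LogisticRegression(max_iter=1000).fit(
    rng.standard_normal((300, n_feat)), rng.integers(0, 2, 300))
moveprep_clf = LogisticRegression(max_iter=1000).fit(
    rng.standard_normal((300, n_feat)), rng.integers(0, 2, 300))

def dual_brain_reading(epoch_features):
    """Apply both detectors to the same incoming epoch ("dual BR")."""
    x = np.asarray(epoch_features).reshape(1, -1)
    return {
        "target_recognized": bool(p300_clf.predict(x)[0]),
        "movement_prepared": bool(moveprep_clf.predict(x)[0]),
    }

print(dual_brain_reading(rng.standard_normal(n_feat)))
```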


Subject(s)
Artificial Intelligence; Brain Mapping/methods; Brain/physiology; Robotics; User-Computer Interface; Electroencephalography; Humans
14.
IEEE Trans Haptics ; 6(3): 309-19, 2013.
Article in English | MEDLINE | ID: mdl-24808327

ABSTRACT

The goal of this study was to analyze the human ability to discriminate external forces while actively moving the arm. With the approach presented here, we give an overview, for the whole arm, of the just-noticeable differences (JNDs) for controlled movements executed separately with the wrist, elbow, and shoulder joints. The work was originally motivated by the design phase of the actuation system of a wearable exoskeleton, which is used in a teleoperation scenario in which force feedback should be provided to the subject. The amount of this force feedback has to be calibrated according to human force discrimination abilities. In the experiments presented here, 10 subjects performed a series of movements against an opposing force generated by a commercial haptic interface. Force changes had to be detected in a two-alternative forced-choice task. For each of the three joints tested, perceptual thresholds were measured as an absolute threshold (no reference force) and three JNDs corresponding to three chosen reference forces. For this, we used the outcome of the QUEST procedure after 70 trials. Using these four measurements, we computed the Weber fraction. Our results demonstrate that different Weber fractions are measured depending on the joint: 0.11, 0.13, and 0.08 for the wrist, elbow, and shoulder, respectively. It is discussed that force perception may be affected by the number of muscles involved and by the reproducibility of the movement itself. The minimum perceivable force, on average, was 0.04 N for all three joints.
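
A worked illustration of the Weber-fraction computation from an absolute threshold and three JNDs; the numeric values are invented for illustration, and the linear fit is one plausible way to obtain the fraction, not necessarily the exact procedure used in the study:

```python
import numpy as np

# Hypothetical measurements for one joint: the absolute threshold (at zero
# reference force) plus JNDs at three reference forces.  The numbers are
# invented for illustration; see the paper for the real values.
reference_forces = np.array([0.0, 1.0, 2.0, 4.0])   # N
jnds = np.array([0.04, 0.15, 0.26, 0.47])           # N

# Weber's law: JND(F) ~ k * F + c.  The Weber fraction k is then the slope of
# a linear fit through the four measurements.
k, c = np.polyfit(reference_forces, jnds, deg=1)
print(f"Weber fraction k = {k:.2f}, absolute threshold ~ {c:.2f} N")
```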


Subject(s)
Arm/physiology; Differential Threshold/physiology; Discrimination, Psychological/physiology; Feedback, Sensory/physiology; Movement/physiology; Adult; Biomechanical Phenomena/physiology; Elbow Joint/physiology; Humans; Male; Pressure; Reproducibility of Results; Shoulder Joint/physiology; Wrist Joint/physiology; Young Adult