Results 1 - 2 of 2
1.
PLoS Comput Biol; 16(2): e1007065, 2020 Feb.
Article in English | MEDLINE | ID: mdl-32012146

ABSTRACT

The limited capacity of recent memory inevitably leads to partial memory of past stimuli. There is also evidence that behavioral and neural responses to novel or rare stimuli depend on one's memory of past stimuli, so these responses may serve as a probe of different individuals' remembering and forgetting characteristics. Here, we utilize two lossy compression models of stimulus sequences that inherently involve forgetting, which, in addition to being a necessity under many conditions, has theoretical and behavioral advantages. One model is based on a simple stimulus counter and the other on the Information Bottleneck (IB) framework, which suggests a more general, theoretically justifiable principle for biological and cognitive phenomena. These models are applied to analyze a novelty-detection event-related potential commonly known as the P300. The trial-by-trial variations of the P300 response, recorded in an auditory oddball paradigm, were fitted by each model to extract two stimulus-compression parameters for each subject: memory length and representation accuracy. These parameters were then used to estimate the subjects' recent-memory capacity limit under the task conditions. The results, along with recently published findings on single neurons and the IB model, underscore how a lossy compression framework can account for trial-by-trial variability of neural responses at different spatial scales and in different individuals, while also providing estimates of individual memory characteristics at different levels of representation with a theoretically grounded, parsimonious model.
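The counter-based lossy-compression idea in this abstract can be illustrated with a toy sketch (the function name, the leaky-counter form, and the smoothing are illustrative assumptions, not the paper's model): decayed counts of past stimuli stand in for a finite memory length, and rare deviants in an oddball sequence receive high surprise, analogous to an enlarged P300 response.

```python
import math

def oddball_surprise(stimuli, tau=5.0):
    """Surprise (-log p) of each stimulus under a leaky-counter model.

    Hypothetical sketch of a 'simple stimulus counter' with forgetting:
    counts of past stimuli decay with time constant `tau` (a stand-in for
    a memory-length parameter), and the predicted probability of the next
    stimulus is its Laplace-smoothed, normalized decayed count.
    """
    decay = math.exp(-1.0 / tau)
    counts = {}  # decayed count per stimulus category
    surprises = []
    for s in stimuli:
        total = sum(counts.values())
        # smoothing gives unseen stimuli nonzero probability mass
        p = (counts.get(s, 0.0) + 1.0) / (total + 2.0)
        surprises.append(-math.log(p))
        # forget a little, then register the current stimulus
        counts = {k: v * decay for k, v in counts.items()}
        counts[s] = counts.get(s, 0.0) + 1.0
    return surprises

# Rare deviants ('D') among frequent standards ('S') yield higher surprise
surp = oddball_surprise(list("SSSSDSSSSD"))
```

With a short memory (small `tau`) the counts of standards decay quickly, so deviants look less surprising; varying `tau` is the kind of per-subject fit the abstract describes.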


Subject(s)
Memory/physiology , Reflex, Startle , Acoustic Stimulation/methods , Adult , Electroencephalography , Evoked Potentials , Female , Humans , Male
2.
J Neural Eng; 17(6): 066011, 2021 Feb 13.
Article in English | MEDLINE | ID: mdl-33586668

ABSTRACT

OBJECTIVE: One of the recent developments in the field of brain-computer interfaces (BCIs) is the reinforcement learning (RL) based BCI paradigm, which uses neural error responses as the reward feedback on the agent's action. While it has several advantages over motor-imagery-based BCIs, the reliability of an RL-BCI depends critically on the decoding accuracy of noisy neural error signals. A principled method is needed to optimally handle this inherent noise under general conditions. APPROACH: By determining a trade-off between the expected value and the informational cost of policies, the info-RL (IRL) algorithm provides optimal low-complexity policies that are robust under noisy reward conditions and achieve the maximal obtainable value. In this work, we utilize the IRL algorithm to characterize the maximal obtainable value under different noise levels, which in turn is used to extract the optimal robust policy for each noise level. MAIN RESULTS: Our simulations of a setting with Gaussian noise show that the complexity level of the optimal policy depends on the reward magnitude but not on the reward variance, whereas the variance determines whether a lower-complexity solution is favorable. We show how this analysis can be used to select optimal robust policies for an RL-BCI and demonstrate its use on EEG data. SIGNIFICANCE: We propose a principled method to determine the optimal policy complexity of an RL problem with a noisy reward, which we argue is particularly useful for RL-based BCI paradigms. This framework may be used to minimize initial training time and to allow a more dynamic and robust shared control between the agent and the operator under different conditions.
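The value/complexity trade-off described under APPROACH can be sketched with a soft-max policy (an illustrative sketch under standard rate-distortion-style assumptions; the function names are hypothetical and this is not the paper's IRL implementation). Minimizing informational cost KL(pi || prior) minus beta times expected value yields pi(a) proportional to prior(a) * exp(beta * value(a)), where beta sets the allowed policy complexity: beta near 0 recovers the prior (low complexity), large beta approaches the greedy policy.

```python
import math

def info_rl_policy(values, beta, prior=None):
    """Soft-max policy trading expected value against informational cost.

    Illustrative: pi(a) proportional to prior(a) * exp(beta * value(a)).
    `beta` controls complexity; with no prior given, a uniform prior is used.
    """
    n = len(values)
    prior = prior or [1.0 / n] * n
    w = [p * math.exp(beta * v) for p, v in zip(prior, values)]
    z = sum(w)
    return [x / z for x in w]

def policy_complexity(pi, prior):
    """Informational cost: KL divergence (nats) of the policy from the prior."""
    return sum(p * math.log(p / q) for p, q in zip(pi, prior) if p > 0)

# Two actions, the first clearly better; low beta stays near the prior
# (cheap, noise-robust), high beta commits to the greedy action (costly).
pi_lo = info_rl_policy([1.0, 0.0], beta=0.1)
pi_hi = info_rl_policy([1.0, 0.0], beta=10.0)
```

Choosing beta per noise level, as the abstract describes, amounts to deciding how much policy complexity the noisy reward channel can justify.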


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Learning , Reinforcement, Psychology , Reproducibility of Results