Results 1 - 20 of 62

1.
Exp Brain Res ; 240(6): 1873-1885, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35445861

ABSTRACT

The pupil responds to a salient stimulus appearing in the environment, in addition to its modulation by global luminance. These pupillary responses can be evoked by visual or auditory stimuli, scale with stimulus salience, and are enhanced by multisensory presentation. In addition, pupil size is modulated by various visual stimulus attributes, such as color, area, and motion. However, research that concurrently examines the influence of different factors on pupillary responses is limited. To explore how the presentation of multiple visual stimuli influences human pupillary responses, we presented arrays of visual stimuli and systematically varied their luminance, color, and set size. Saliency level, computed by a saliency model, increased systematically with set size across all conditions. Pupillary constriction responses were evoked by the appearance of the visual stimuli, with larger responses at larger set sizes. These effects persisted even though global luminance was held constant through the use of isoluminant chromatic stimuli. Furthermore, larger pupillary constriction responses were obtained in the blue condition than in the other color conditions. Together, we argue that both cortical and subcortical areas contribute to the observed pupillary constriction modulated by set size and color.


Subject(s)
Light; Pupil; Humans; Photic Stimulation; Pupil/physiology
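The set-size effect above rests on saliency computed from the stimulus array. Here is a minimal sketch of that idea, assuming a difference-of-Gaussians (center-surround) operator in the spirit of standard saliency models rather than the study's actual model; the image size, dot placement, and filter widths are illustrative assumptions. Total conspicuity energy grows with the number of items.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with a truncated kernel."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def saliency_energy(n_items, size=128):
    """Total center-surround (DoG) energy for an array of n_items bright dots."""
    img = np.zeros((size, size))
    rng = np.random.default_rng(0)
    ys, xs = rng.integers(8, size - 8, (2, n_items))
    img[ys, xs] = 1.0
    dog = gaussian_blur(img, 2) - gaussian_blur(img, 8)  # center minus surround
    return float(np.abs(dog).sum())

# Conspicuity energy grows with set size, mirroring the reported set-size effect.
print(saliency_energy(2) < saliency_energy(8) < saliency_energy(32))
```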
2.
Proc Natl Acad Sci U S A ; 114(35): 9451-9456, 2017 08 29.
Article in English | MEDLINE | ID: mdl-28808026

ABSTRACT

Models of visual attention postulate the existence of a bottom-up saliency map that is formed early in the visual processing stream. Although studies have reported evidence of a saliency map in various cortical brain areas, determining the contribution of phylogenetically older pathways is crucial to understanding its origin. Here, we compared saliency coding by neurons in two early gateways into the visual system: the primary visual cortex (V1) and the evolutionarily older superior colliculus (SC). We found that, while the response latency to visual stimulus onset was shorter for V1 neurons than for superficial visual-layer superior colliculus neurons (SCs), the saliency representation emerged earlier in SCs than in V1. Because the dominant input to the SCs arises from V1, these relative timings are consistent with the hypothesis that SCs neurons pool the inputs from multiple V1 neurons to form a feature-agnostic saliency map, which may then be relayed to other brain areas.


Subject(s)
Visual Cortex/physiology; Animals; Attention/physiology; Macaca mulatta; Male; Neurons/physiology; Photic Stimulation; Psychophysics; Reaction Time; Superior Colliculi; Visual Pathways/physiology
3.
Eur J Neurosci ; 2019 May 11.
Article in English | MEDLINE | ID: mdl-31077473

ABSTRACT

The saliency map has played a long-standing role in models and theories of visual attention, and it is now supported by neurobiological evidence from several cortical and subcortical brain areas. While visual saliency is computed during moments of active fixation, it is not known whether the same is true while observers are engaged in smooth pursuit of a moving stimulus, which is very common in real-world vision. Here, we examined extrafoveal saliency coding in the superior colliculus, a midbrain area associated with attention and gaze, during smooth pursuit eye movements. We found that SC neurons from the superficial visual layers showed a robust representation of peripheral saliency evoked by a conspicuous stimulus embedded in a wide-field array of goal-irrelevant stimuli. In contrast, visuomotor neurons from the intermediate saccade-related layers showed a poor saliency representation, even though most of these neurons were visually responsive during smooth pursuit. These results confirm and extend previous findings that place the SCs in a unique role as a saliency map that monitors peripheral vision during foveation of stationary and, now, moving objects.

4.
Neural Comput ; 31(2): 344-387, 2019 02.
Article in English | MEDLINE | ID: mdl-30576615

ABSTRACT

This work lays the foundation for a framework of cortical learning based on the idea of a competitive column, which is inspired by the functional organization of neurons in the cortex. A column describes a prototypical organization for neurons that gives rise to an ability to learn scale-, rotation-, and translation-invariant features. This is empowered by a recently developed learning rule, conflict learning, which enables the network to learn over both driving and modulatory feedforward, feedback, and lateral inputs. The framework is further supported by introducing both a notion of neural ambiguity and an adaptive threshold scheme. Ambiguity, which captures the idea that too many decisions lead to indecision, gives the network a dynamic way to resolve locally ambiguous decisions. The adaptive threshold operates over multiple timescales to regulate neural activity under the varied arrival timings of input in a highly interconnected multilayer network with feedforward and feedback. The competitive column architecture is demonstrated on a large-scale (54,000 neurons and 18 million synapses), invariant model of border ownership. The model is trained on four simple, fixed-scale shapes: two squares, one rectangle, and one symmetric L-shape. Tested on 1899 synthetic shapes of varying scale and complexity, the model correctly assigned border ownership with 74% accuracy. The model's abilities were also illustrated on contours of objects taken from natural images. Combined with conflict learning, the competitive column and ambiguity give a better intuitive understanding of how feedback, modulation, and inhibition may interact in the brain to influence activation and learning.

5.
J Vis ; 19(1): 11, 2019 01 02.
Article in English | MEDLINE | ID: mdl-30650434

ABSTRACT

Most visual saliency models that integrate top-down factors process task and context information using machine learning techniques. Although these methods have been successful in improving prediction accuracy for human attention, they require large amounts of training data and are unable to provide an understanding of what makes information relevant to a task such that it will attract gaze. This means that we still lack a general theory for the interaction between task and attention or eye movements. Recently, Tanner and Itti (2017) proposed the theory of goal relevance to explain what makes information relevant to goals. In this work, we record eye movements of 80 participants who each played one of four variants of a Mario video game and construct a combined saliency model using features from three sources: bottom-up, learned top-down, and goal relevance. We use this model to predict eye behavior and find that the addition of goal relevance significantly improves the Normalized Scanpath Saliency score of the model from 4.35 to 5.82 (p < 1 × 10^-100).


Subject(s)
Attention/physiology; Eye Movements/physiology; Goals; Visual Perception/physiology; Humans; Models, Theoretical; Pattern Recognition, Visual/physiology; Video Games
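The Normalized Scanpath Saliency (NSS) score reported above has a standard definition: z-score the saliency map, then average the normalized values at the fixated locations. A minimal sketch with a toy map; the map and fixation coordinates are illustrative, not the study's data.

```python
import numpy as np

def nss(saliency_map, fixations):
    """Normalized Scanpath Saliency: z-score the map, then average the
    normalized saliency at each fixated pixel (row, col)."""
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return float(np.mean([s[r, c] for r, c in fixations]))

# A map peaked where the observer actually looked yields a high NSS;
# a fixation on the flat background scores near zero or below.
smap = np.zeros((10, 10))
smap[4, 4] = 1.0
print(nss(smap, [(4, 4)]))   # fixation on the peak: large positive
print(nss(smap, [(0, 0)]))   # fixation off the peak: slightly negative
```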
6.
Behav Brain Sci ; 40: e140, 2017 01.
Article in English | MEDLINE | ID: mdl-29342619

ABSTRACT

Hulleman & Olivers (H&O) make a much-needed stride forward for a better understanding of visual search behavior by rejecting theories based on discrete stimulus items. I propose that the framework could be further enhanced by clearly delineating distinct mechanisms for attention guidance, selection, and enhancement during visual search, instead of conflating them into a single functional field of view.


Subject(s)
Attention; Photic Stimulation
7.
J Neurosci ; 34(2): 408-17, 2014 Jan 08.
Article in English | MEDLINE | ID: mdl-24403141

ABSTRACT

The sudden appearance of a novel stimulus in the environment initiates a series of orienting responses that include coordinated shifts of gaze and attention, and also transient changes in pupil size. Although numerous studies have identified a significant effect of stimulus saliency on shifts of gaze and attention, saliency effects on pupil size are less understood. To examine salience-evoked pupil responses, we presented visual, auditory, or audiovisual stimuli while monkeys fixated a central visual spot. Transient pupil dilation was elicited after visual stimulus presentation regardless of target luminance relative to background, and auditory stimuli also evoked similar pupil responses. Importantly, the evoked pupil response was modulated by contrast-based saliency, with faster and larger pupil responses following the presentation of more salient stimuli. The initial transient component of pupil dilation was qualitatively similar to that evoked by weak microstimulation of the midbrain superior colliculus. The pupil responses elicited by audiovisual stimuli were well predicted by a linear summation of each modality response. Together, the results suggest that the transient pupil response, as one component of orienting, is modulated by contrast-based saliency, and the superior colliculus is likely involved in its coordination.


Subject(s)
Orientation/physiology; Pupil/physiology; Superior Colliculi/physiology; Visual Perception/physiology; Acoustic Stimulation; Animals; Macaca mulatta; Male; Photic Stimulation
8.
J Vis ; 14(3): 29, 2014 Mar 24.
Article in English | MEDLINE | ID: mdl-24665092

ABSTRACT

In a very influential yet anecdotal illustration, Yarbus suggested that human eye-movement patterns are modulated top down by different task demands. While the hypothesis that it is possible to decode the observer's task from eye movements has received some support (e.g., Henderson, Shinkareva, Wang, Luke, & Olejarczyk, 2013; Iqbal & Bailey, 2004), Greene, Liu, and Wolfe (2012) argued against it by reporting a failure. In this study, we perform a more systematic investigation of this problem, probing a larger number of experimental factors than previous studies. Our main goal is to determine the informativeness of eye movements for task and mental state decoding. We perform two experiments. In the first experiment, we reanalyze the data from a previous study by Greene et al. (2012) and, contrary to their conclusion, report that it is possible to decode the observer's task from aggregate eye-movement features slightly but significantly above chance, using a Boosting classifier (34.12% correct vs. 25% chance level; binomial test, p = 1.0722e-04). In the second experiment, we repeat and extend Yarbus's original experiment by collecting eye movements of 21 observers viewing 15 natural scenes (including Yarbus's scene) under Yarbus's seven questions. We show that task decoding is possible, also moderately but significantly above chance (24.21% vs. 14.29% chance level; binomial test, p = 2.4535e-06). We thus conclude that Yarbus's idea is supported by our data and continues to be an inspiration for future computational and experimental eye-movement research. From a broader perspective, we discuss techniques, features, limitations, societal and technological impacts, and future directions in task decoding from eye movements.


Subject(s)
Attention; Eye Movements/physiology; Pattern Recognition, Visual/physiology; Cognition/physiology; Female; Humans; Male; Psychomotor Performance/physiology; Young Adult
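The decoding-above-chance claims above rest on one-sided binomial tests against the chance level. A sketch of the exact test in stdlib Python; the trial counts below are hypothetical, since the abstract reports only percentages.

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the exact one-sided tail used to
    test whether decoding accuracy exceeds chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: 205 correct out of 600 trials (~34.2%) tested
# against a 25% chance level. (Trial counts are illustrative, not the study's.)
p_value = binom_sf(205, 600, 0.25)
print(p_value < 1e-4)  # accuracy well above chance
```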
9.
J Vis ; 14(13): 3, 2014 Nov 04.
Article in English | MEDLINE | ID: mdl-25371549

ABSTRACT

Gaze direction provides an important and ubiquitous communication channel in daily behavior and social interaction of humans and some animals. While several studies have addressed gaze direction in synthesized simple scenes, few have examined how it can bias observer attention and how it might interact with early saliency during free viewing of natural and realistic scenes. Experiment 1 used a controlled, staged setting in which an actor was asked to look at two different objects in turn, yielding two images that differed only by the actor's gaze direction, to causally assess the effects of actor gaze direction. Over all scenes, the median probability of following an actor's gaze direction was higher than the median probability of looking toward the single most salient location, and higher than chance. Experiment 2 confirmed these findings over a larger set of unconstrained scenes collected from the Web and containing people looking at objects and/or other people. To further compare the strength of saliency versus gaze direction cues, we computed gaze maps by drawing a cone in the direction of gaze of the actors present in the images. Gaze maps predicted observers' fixation locations significantly above chance, although below saliency. Finally, to gauge the relative importance of actor face and eye directions in guiding observer's fixations, in Experiment 3, observers were asked to guess the gaze direction from only an actor's face region (with the rest of the scene masked), in two conditions: actor eyes visible or masked. The median probability of guessing the true gaze direction within ±9° was significantly higher when eyes were visible, suggesting that the eyes contribute significantly to gaze estimation, in addition to the face region. Our results highlight that gaze direction is a strong attentional cue in guiding eye movements, complementing low-level saliency cues, and derived from both the face and eyes of actors in the scene. Thus gaze direction should be considered in constructing more predictive visual attention models in the future.


Subject(s)
Attention/physiology; Eye Movements/physiology; Fixation, Ocular/physiology; Pattern Recognition, Visual/physiology; Cues; Face; Female; Humans; Male; Young Adult
10.
J Vis ; 14(1), 2014 Jan 06.
Article in English | MEDLINE | ID: mdl-24396046

ABSTRACT

How attention interacts with low-level visual representations to give rise to perception remains a central yet controversial question in neuroscience. While several previous studies suggest that the units of attentional selection are individual objects, other evidence points instead toward lower-level features, such as an attended color or direction of motion. We used both human fMRI and psychophysics to investigate the relationship between object-based and feature-based attention. Specifically, we focused on whether feature-based attention is modulated by object appearance, comparing three conditions: (a) features appearing as one object; (b) features appearing as two separate but identical objects; (c) features appearing as two different objects. Stimuli were two random-dot fields presented bilaterally to central fixation, and object appearance was induced by the presence of one or two boxes around the fields. In the fMRI experiment, participants performed a luminance discrimination task on one side and ignored the other side, where we probed for enhanced activity when it was perceived either as belonging to the same object or as sharing features with the task side. In the psychophysical experiments, participants performed luminance discrimination on both sides with overlapping red and green dots, now attending to either the same features (red/red or green/green) or different features (red/green or green/red) on both sides. Results show that feature-based attentional enhancement exists in all three conditions, i.e., regardless of whether features appear as one object, two identical objects, or two different objects. Our findings indicate that feature-based attention differs from object-based attention in that it is not dependent upon object appearance. Thus feature-based attention may be mediated by earlier cortical processes that operate independently of the grouping of visual features into well-formed objects.


Subject(s)
Attention/physiology; Visual Cortex/physiology; Visual Perception/physiology; Adult; Female; Humans; Magnetic Resonance Imaging; Male; Psychophysics
11.
Comput Biol Med ; 176: 108545, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38749325

ABSTRACT

Reliable classification of sleep stages is crucial in sleep medicine and neuroscience research for providing valuable insights, diagnoses, and understanding of brain states. The current gold-standard method for sleep stage classification is polysomnography (PSG). Unfortunately, PSG is an expensive and cumbersome process involving numerous electrodes, often conducted in an unfamiliar clinic and annotated by a professional. Although commercial devices like smartwatches track sleep, their performance is well below that of PSG. To address these disadvantages, we present a feed-forward neural network that achieves gold-standard levels of agreement using only a single lead of electrocardiography (ECG) data. Specifically, the median five-stage Cohen's kappa is 0.725 on a large, diverse dataset of subjects aged 5 to 90 years. Comparisons with a comprehensive meta-analysis of between-human inter-rater agreement confirm the non-inferior performance of our model. Finally, we developed a novel loss function to align the training objective with Cohen's kappa. Our method offers an inexpensive, automated, and convenient alternative for sleep stage classification, further enhanced by a real-time scoring option. Cardiosomnography, or a sleep study conducted with ECG only, could take expert-level sleep studies outside the confines of clinics and laboratories and into realistic settings. This advancement democratizes access to high-quality sleep studies, considerably enhancing the fields of sleep medicine and neuroscience. It makes less expensive, higher-quality studies accessible to a broader community, enabling improved sleep research and more personalized, accessible sleep-related healthcare interventions.


Subject(s)
Electrocardiography; Neural Networks, Computer; Sleep Stages; Humans; Electrocardiography/methods; Sleep Stages/physiology; Adult; Middle Aged; Male; Aged; Adolescent; Female; Aged, 80 and over; Child; Child, Preschool; Polysomnography/methods; Signal Processing, Computer-Assisted
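Cohen's kappa, the agreement statistic reported above, corrects raw agreement for the agreement expected by chance from each rater's label frequencies. A minimal sketch on hypothetical five-stage hypnograms; the paper's kappa-aligned loss function is not reproduced here.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for the chance agreement
    implied by each rater's marginal label frequencies."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    p_exp = sum(ca[k] * cb[k] for k in ca) / n**2
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical 5-stage hypnograms (W, N1, N2, N3, R) from a reference scorer
# and a model; the sequences are illustrative, not the study's data.
ref = ["W", "N1", "N2", "N2", "N3", "N3", "R", "R", "N2", "W"]
mdl = ["W", "N2", "N2", "N2", "N3", "N3", "R", "N2", "N2", "W"]
print(round(cohens_kappa(ref, mdl), 3))  # → 0.733
```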
12.
Sci Adv ; 10(3): eadk1525, 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38232159

ABSTRACT

Field-programmable gate arrays (FPGAs) are widely used in the acceleration of deep learning applications because of their reconfigurability, flexibility, and fast time-to-market. However, conventional FPGAs suffer from a trade-off between chip area and reconfiguration latency, making efficient FPGA accelerations that require switching between multiple configurations still elusive. Here, we propose a ferroelectric field-effect transistor (FeFET)-based context-switching FPGA supporting dynamic reconfiguration to break this trade-off, enabling the loading of an arbitrary configuration without interrupting execution of the active configuration. Leveraging the intrinsic structure and nonvolatility of FeFETs, compact FPGA primitives are proposed and experimentally verified. Evaluation results show that our design achieves a 63.0%/74.7% reduction in look-up table (LUT)/connection block (CB) area and an 82.7%/53.6% reduction in CB/switch box power consumption, with a minimal penalty in critical path delay (9.6%). In addition, our design yields significant average time savings of 78.7% and 20.3% for context-switching and dynamic reconfiguration applications, respectively.

13.
J Cogn Neurosci ; 25(10): 1754-68, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23691982

ABSTRACT

The mechanisms that underlie the integration of visual and goal-related signals for the production of saccades remain poorly understood. Here, we examined how spatial proximity of competing stimuli shapes goal-directed responses in the superior colliculus (SC), a midbrain structure closely associated with the control of visual attention and eye movements. Monkeys were trained to perform an oculomotor-capture task [Theeuwes, J., Kramer, A. F., Hahn, S., Irwin, D. E., & Zelinsky, G. J. Influence of attentional capture on oculomotor control. Journal of Experimental Psychology: Human Perception and Performance, 25, 1595-1608, 1999], in which a target singleton was revealed via an isoluminant color change in all but one item. On a portion of the trials, an additional salient item abruptly appeared near or far from the target. We quantified how spatial proximity between the abrupt onset and the target shaped the goal-directed response. We found that the appearance of an abrupt onset near the target induced a transient decrease in goal-directed discharge of SC visuomotor neurons. Although this was indicative of spatial competition, it was immediately followed by a rebound in presaccadic activation, which facilitated the saccadic response (i.e., it induced shorter saccadic reaction times). A similar suppression also occurred at most nontarget locations, even in the absence of the abrupt onset. This is indicative of a mechanism that enabled monkeys to quickly discount stimuli that shared the common nontarget feature. These results reveal a pattern of excitation/inhibition across the SC visuomotor map that acted to facilitate optimal behavior: the short-duration suppression minimized the probability of capture by salient distractors, whereas a subsequent boost in accumulation rate ensured a fast goal-directed response. Such nonlinear dynamics should be incorporated into future biologically plausible models of saccade behavior.


Subject(s)
Eye Movements/physiology; Goals; Neural Inhibition/physiology; Neurons/physiology; Superior Colliculi/cytology; Visual Perception/physiology; Action Potentials/physiology; Analysis of Variance; Animals; Humans; Macaca mulatta; Male; Photic Stimulation; Reaction Time/physiology; Statistics as Topic; Superior Colliculi/physiology
14.
J Vis ; 13(10): 18, 2013 Aug 29.
Article in English | MEDLINE | ID: mdl-23988384

ABSTRACT

Einhäuser, Spain, and Perona (2008) explored an alternative hypothesis to saliency maps (i.e., spatial image outliers) and claimed that "objects predict fixations better than early saliency." To test their hypothesis, they measured eye movements of human observers while the observers inspected 93 photographs of common natural scenes (Uncommon Places dataset by Shore, Tillman, & Schmidt-Wulen, 2004; Supplement Figure S4). Subjects were asked to observe an image and, immediately afterwards, to name objects they saw (remembered). Einhäuser et al. showed that a map made of manually drawn object regions, with each object weighted by its recall frequency, predicts fixations in individual images better than early saliency. Because of the important implications of this hypothesis, we investigate it further. The core of our analysis is explained here; please refer to the Supplement for details.


Subject(s)
Fixation, Ocular/physiology; Models, Psychological; Pattern Recognition, Visual/physiology; Female; Humans; Male
15.
Nat Biomed Eng ; 7(4): 546-558, 2023 04.
Article in English | MEDLINE | ID: mdl-34795394

ABSTRACT

For brain-computer interfaces (BCIs), obtaining sufficient training data for algorithms that map neural signals onto actions can be difficult, expensive or even impossible. Here we report the development and use of a generative model (a model that synthesizes a virtually unlimited number of new data distributions from a learned data distribution) that learns mappings between hand kinematics and the associated neural spike trains. The generative spike-train synthesizer is trained on data from one recording session with a monkey performing a reaching task and can be rapidly adapted to new sessions or monkeys by using limited additional neural data. We show that the model can be adapted to synthesize new spike trains, accelerating the training and improving the generalization of BCI decoders. The approach is fully data-driven and hence applicable to BCI applications beyond motor control.


Subject(s)
Brain-Computer Interfaces; Humans; Algorithms; Neurons; Biomechanical Phenomena
16.
Eur J Neurosci ; 35(11): 1738-52, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22639796

ABSTRACT

Here we examined the influence of the visual response in the superior colliculus (SC) (an oculomotor control structure integrating sensory, motor and cognitive signals) on the development of the motor command that drives saccadic eye movements in monkeys. We varied stimulus luminance to alter the timing and magnitude of visual responses in the SC and examined how these changes correlated with resulting saccade behavior. Increasing target luminance resulted in multiple modulations of the visual response, including increased magnitude and decreased response onset latency. These signal modulations correlated strongly with changes in saccade latency and metrics, indicating that these signal properties carry through to the neural computations that determine when, where and how fast the eyes will move. Thus, components of the earliest part of the visual response in the SC provide important building blocks for the neural basis of the sensory-motor transformation, highlighting a critical link between the properties of the visual response and saccade behavior.


Subject(s)
Psychomotor Performance/physiology; Reaction Time/physiology; Saccades/physiology; Sensory Receptor Cells/physiology; Superior Colliculi/physiology; Animals; Macaca mulatta; Male; Photic Stimulation/methods
17.
IEEE Trans Neural Netw Learn Syst ; 33(8): 3778-3791, 2022 08.
Article in English | MEDLINE | ID: mdl-33596177

ABSTRACT

The human brain is the gold standard of adaptive learning. It not only can learn and benefit from experience, but can also adapt to new situations. In contrast, deep neural networks learn only one sophisticated but fixed mapping from inputs to outputs. This limits their applicability to more dynamic situations, where the input-to-output mapping may change with different contexts. A salient example is continual learning: learning new, independent tasks sequentially without forgetting previous tasks. Continual learning of multiple tasks in artificial neural networks using gradient descent leads to catastrophic forgetting, whereby a previously learned mapping of an old task is erased when learning new mappings for new tasks. Herein, we propose a new biologically plausible type of deep neural network with extra, out-of-network, task-dependent biasing units to accommodate these dynamic situations. This allows, for the first time, a single network to learn potentially unlimited parallel input-to-output mappings and to switch on the fly between them at runtime. Biasing units are programmed by leveraging beneficial perturbations (the opposite of the well-known adversarial perturbations) for each task. Beneficial perturbations for a given task bias the network toward that task, essentially switching the network into a different mode to process that task. This largely eliminates catastrophic interference between tasks. Our approach is memory-efficient and parameter-efficient, can accommodate many tasks, and achieves state-of-the-art performance across different tasks and domains.


Subject(s)
Artificial Intelligence; Neural Networks, Computer; Brain; Humans; Learning
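The core mechanism above, out-of-network task-dependent biasing units added to a fixed network, can be sketched in a few lines. The layer sizes, the frozen random weights, and the hand-set bias vectors below are illustrative assumptions; the paper learns its biases via beneficial perturbations, which is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen shared weights stand in for the fixed deep network; one out-of-network
# bias vector per task switches the network between modes at runtime.
W = 0.1 * rng.standard_normal((4, 3))
task_bias = {0: np.array([5.0, 0.0, -5.0, 0.0]),
             1: np.array([-5.0, 0.0, 5.0, 0.0])}

def forward(x, task):
    # The shared mapping W @ x is untouched; only the task bias is swapped in,
    # biasing the network toward the selected task's output.
    return int(np.argmax(W @ x + task_bias[task]))

x = np.ones(3)
print(forward(x, 0), forward(x, 1))  # same input, task-dependent output
```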
18.
J Neurol ; 269(9): 4920-4938, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35501501

ABSTRACT

OBJECTIVES: This study (1) describes and compares saccade and pupil abnormalities in patients with manifest alpha-synucleinopathies (αSYN: Parkinson's disease (PD), multiple system atrophy (MSA)) and a tauopathy (progressive supranuclear palsy (PSP)); and (2) determines whether patients with rapid-eye-movement sleep behaviour disorder (RBD), a prodromal stage of αSYN, already have abnormal responses that may indicate a risk of developing PD or MSA. METHODS: Ninety patients with an αSYN (46 RBD, 27 PD, 17 MSA), 10 PSP patients, and 132 healthy age-matched controls (CTRL) were examined with a 10-min video-based eye-tracking task (Free Viewing). Participants were free to look anywhere on the screen while saccade and pupil behaviours were measured. RESULTS: PD, MSA, and PSP patients spent more time fixating the centre of the screen than CTRL. All patient groups made fewer macro-saccades (> 2° amplitude), with smaller amplitudes, than CTRL. Saccade frequency was greater in RBD than in the other patient groups. Following a clip change, saccades were temporarily suppressed and then rebounded at a slower pace than CTRL in all patient groups. RBD patients had distinct though subtle saccade abnormalities that were more marked in PD and MSA, and even more so in PSP. The vertical saccade rate was reduced in all patients and decreased most in PSP. Clip changes produced large increases or decreases in screen luminance, requiring pupil constriction or dilation, respectively. PSP patients showed smaller pupil constriction/dilation responses than CTRL, while MSA patients showed larger ones. CONCLUSION: RBD patients already have subtle saccade abnormalities, less pronounced than those of PD and MSA patients. Vertical gaze palsy and altered pupil control differentiate PSP from αSYN.


Subject(s)
Multiple System Atrophy; Parkinson Disease; Supranuclear Palsy, Progressive; Synucleinopathies; Biomarkers; Eye-Tracking Technology; Humans; Multiple System Atrophy/diagnosis; Parkinson Disease/complications; Parkinson Disease/diagnosis; Supranuclear Palsy, Progressive/diagnosis
19.
Eur J Neurosci ; 34(5): 766-79, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21864319

ABSTRACT

The brain's ability to ignore repeating, often redundant, information while enhancing novel information processing is paramount to survival. When stimuli are repeatedly presented, the response of visually sensitive neurons decreases in magnitude, that is, neurons adapt or habituate, although the mechanism is not yet known. We monitored the activity of visual neurons in the superior colliculus (SC) of rhesus monkeys that actively fixated while repeated visual events were presented. We dissociated adaptation from habituation as mechanisms of the response decrement by using a Bayesian model of adaptation and by employing a paradigm with rare trials containing an oddball stimulus that was either brighter or dimmer. If the mechanism is adaptation, response recovery should be seen only for the brighter stimulus; if the mechanism is habituation, response recovery ('dishabituation') should be seen for both the brighter and dimmer stimuli. We observed a reduction in the magnitude of the initial transient response and an increase in response onset latency with stimulus repetition for all visually responsive neurons in the SC. The response decrement was successfully captured by the adaptation model, which also predicted the effects of presentation rate and rare luminance changes. However, in a subset of neurons with sustained activity in response to visual stimuli, a novelty signal akin to dishabituation was observed late in the visual response profile for both brighter and dimmer stimuli, and was not captured by the model. This suggests that SC neurons integrate both rapidly discounted information about repeating stimuli and novelty information about oddball events, to support efficient selection in a cluttered dynamic world.


Subject(s)
Habituation, Psychophysiologic/physiology; Neurons/physiology; Superior Colliculi/physiology; Vision, Ocular/physiology; Visual Perception/physiology; Animals; Bayes Theorem; Behavior, Animal/physiology; Computer Simulation; Macaca mulatta; Male; Neuropsychological Tests; ROC Curve; Superior Colliculi/cytology
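The response decrement with repetition can be illustrated with a toy divisive adaptation model, a simple stand-in for the paper's Bayesian model (the equations and rates below are illustrative assumptions): an adaptation state builds up with repeated stimulation and divides the response, so a brighter oddball recovers the response while a dimmer one does not, matching the adaptation prediction rather than the habituation one.

```python
def responses(luminances, gain=1.0, rate=0.5):
    """Simulated responses of an adapting neuron to a stimulus sequence:
    the adaptation state a grows with stimulation and divides the gain."""
    a, out = 0.0, []
    for lum in luminances:
        out.append(gain * lum / (1.0 + a))  # adapted response to this stimulus
        a += rate * lum                      # state accumulates with repetition
    return out

rep = responses([1.0] * 6)             # repeated standard: response decrement
bright = responses([1.0] * 5 + [2.0])  # brighter oddball on the final trial
dim = responses([1.0] * 5 + [0.5])     # dimmer oddball on the final trial
# Decrement with repetition; recovery only for the brighter oddball.
print(rep[0] > rep[-1], bright[-1] > rep[-1], dim[-1] < rep[-1])
```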
20.
Sci Rep ; 11(1): 19020, 2021 09 24.
Article in English | MEDLINE | ID: mdl-34561503

ABSTRACT

Motor brain-machine interfaces (BMIs) directly link the brain to artificial actuators and have the potential to mitigate severe body paralysis caused by neurological injury or disease. Most BMI systems involve a decoder that analyzes neural spike counts to infer movement intent. However, many classical BMI decoders (1) fail to take advantage of temporal patterns of spike trains, possibly over long time horizons; and (2) are insufficient to achieve good BMI performance at high temporal resolution, as the underlying Gaussian assumption of decoders based on spike counts is violated. Here, we propose a new statistical feature that represents temporal patterns or temporal codes of spike events with a richer description, wavelet average coefficients (WAC), to be used as decoder input instead of spike counts. We constructed a wavelet decoder framework by using WAC features with a sliding-window approach, and compared the resulting decoder against classical decoders (the Wiener and Kalman families) and new deep-learning-based decoders (Long Short-Term Memory) using spike count features. We found that the sliding-window approach boosts decoding temporal resolution and that using WAC features significantly improves decoding performance over using spike count features.


Subject(s)
Brain-Computer Interfaces; Motor Cortex/physiology; Animals; Haplorhini; Locomotion/physiology; Machine Learning; Neurons/physiology; Wavelet Analysis
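Wavelet average coefficients over sliding windows can be sketched as follows. The paper's exact WAC definition and wavelet family may differ; this Haar-based version, with illustrative bin counts, window length, and step size, averages the coefficients at each decomposition level to yield one feature per level per window.

```python
import numpy as np

def haar_wac(window):
    """Wavelet average coefficients (WAC) for one spike-train window:
    run a Haar decomposition and average the detail coefficients at each
    level, plus the coarsest approximation (window length a power of two)."""
    x = window.astype(float)
    feats = []
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / np.sqrt(2)  # Haar approximation
        det = (x[0::2] - x[1::2]) / np.sqrt(2)  # Haar detail
        feats.append(det.mean())                 # average coefficient per level
        x = avg
    feats.append(x.mean())                       # coarsest approximation
    return np.array(feats)

def sliding_wac(spikes, win=8, step=2):
    """Stack WAC features over overlapping windows of a binned spike train."""
    return np.array([haar_wac(spikes[i:i + win])
                     for i in range(0, len(spikes) - win + 1, step)])

# A toy binned spike train: 16 bins, window of 8 bins, step of 2 bins.
spikes = np.array([0, 1, 0, 0, 2, 1, 0, 1, 3, 0, 1, 0, 0, 1, 2, 0])
print(sliding_wac(spikes).shape)  # → (5, 4): 5 windows × 4 levels
```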