Results 1 - 3 of 3
1.
Elife ; 12, 2023 10 04.
Article in English | MEDLINE | ID: mdl-37792453

ABSTRACT

Hippocampal place cell sequences have been hypothesized to serve purposes as diverse as the induction of synaptic plasticity, the formation and consolidation of long-term memories, and navigation and planning. During spatial behaviors of rodents, sequential firing of place cells at the theta timescale (known as theta sequences) encodes running trajectories, which can be considered one-dimensional behavioral sequences of traversed locations. In a two-dimensional space, however, each single location can be visited along arbitrary one-dimensional running trajectories. Thus, a place cell will generally take part in multiple different theta sequences, raising questions about how this two-dimensional topology can be reconciled with the idea of hippocampal sequences underlying memory of (one-dimensional) episodes. Here, we propose a computational model of cornu ammonis 3 (CA3) and dentate gyrus (DG), where sensorimotor input drives the direction-dependent (extrinsic) theta sequences within CA3 reflecting the two-dimensional spatial topology, whereas the intrahippocampal CA3-DG projections concurrently produce intrinsic sequences that are independent of the specific running trajectory. Consistent with experimental data, intrinsic theta sequences are less prominent, but can nevertheless be detected during theta activity, thereby serving as running-direction independent landmark cues. We hypothesize that the intrinsic sequences largely reflect replay and preplay activity during non-theta states.
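The extrinsic/intrinsic distinction in this abstract can be illustrated with a toy sketch (not the paper's model; field names, positions, and the fixed intrinsic order are made up for illustration): an extrinsic theta sequence follows the traversal order of place fields and therefore reverses with running direction, while an intrinsic sequence, driven by fixed connectivity such as the CA3-DG loop, keeps the same order either way.

```python
# Toy illustration of direction-dependent (extrinsic) vs. fixed (intrinsic)
# theta sequences. Three hypothetical place fields A, B, C lie along a
# normalized 1D track at these center positions:
field_centers = {"A": 0.2, "B": 0.5, "C": 0.8}

def extrinsic_order(running_direction):
    """Extrinsic theta sequences sweep through upcoming locations, so the
    firing order follows the traversal order and flips with direction."""
    order = sorted(field_centers, key=field_centers.get)
    return order if running_direction > 0 else order[::-1]

def intrinsic_order():
    """Intrinsic sequences reflect fixed recurrent connectivity and keep
    the same firing order regardless of running direction."""
    return ["A", "B", "C"]

print(extrinsic_order(+1))  # ['A', 'B', 'C']
print(extrinsic_order(-1))  # ['C', 'B', 'A']
print(intrinsic_order())    # ['A', 'B', 'C']  (unchanged by direction)
```

The model's claim is that both signals coexist in CA3, with the intrinsic one weaker but detectable as a direction-independent landmark cue.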


Subject(s)
Place Cells , Running , Hippocampus , Hippocampal CA3 Region , Long-Term Memory , Theta Rhythm , Action Potentials
2.
J Neurosci ; 42(11): 2282-2297, 2022 03 16.
Article in English | MEDLINE | ID: mdl-35110389

ABSTRACT

Running direction in the hippocampus is encoded by rate modulations of place field activity but also by spike timing correlations known as theta sequences. Whether directional rate codes and the directionality of place field correlations are related, however, has so far not been explored, and therefore the nature of how directional information is encoded in the cornu ammonis remains unresolved. Here, using a previously published dataset that contains the spike activity of rat hippocampal place cells in the CA1, CA2, and CA3 subregions during free foraging of male Long-Evans rats in a 2D environment, we found that rate and spike timing codes are related. Opposite to a preferred firing rate direction of a place field, spikes are more likely to undergo theta phase precession and, hence, more strongly affect paired correlations. Furthermore, we identified a subset of field pairs whose theta correlations are intrinsic in that they maintain the same firing order when the running direction is reversed. Both effects are associated with differences in theta phase distributions and are more prominent in CA3 than in CA1. We thus hypothesize that intrinsic spiking is most prominent when the directionally modulated sensory-motor drive of hippocampal firing rates is minimal, suggesting that extrinsic and intrinsic sequences contribute to phase precession as two distinct mechanisms.

SIGNIFICANCE STATEMENT Hippocampal theta sequences, on the one hand, are thought to reflect the running trajectory of an animal, connecting past and future locations. On the other hand, sequences have been proposed to reflect the rich, recursive hippocampal connectivity, related to memories of previous trajectories or even to experience-independent prestructure. Such intrinsic sequences are inherently one dimensional and cannot be easily reconciled with running trajectories in two dimensions as place fields can be approached on multiple one-dimensional paths.
In this article, we dissect phase precession along different directions in all hippocampal subareas and find that CA3 in particular shows a high level of direction-independent correlations that are inconsistent with the notion of representing running trajectories. These intrinsic correlations are associated with later spike phases.
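The phase precession dissected here is the tendency of a cell's spikes to occur at progressively earlier theta phases as the animal crosses the field, which is quantified by a circular-linear fit of phase against position. A minimal sketch of such a fit on synthetic data (not the authors' analysis pipeline; the slope grid, noise level, and scoring by resultant vector length are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic phase-precessing spikes: as the animal crosses a place field
# (normalized position 0..1), spike theta phase advances by one full cycle,
# plus circular noise.
pos = rng.uniform(0.0, 1.0, 200)
phase = (2 * np.pi * (1.0 - pos) + rng.normal(0.0, 0.3, 200)) % (2 * np.pi)

def phase_position_slope(pos, phase, slopes=np.linspace(-2, 2, 401)):
    """Crude circular-linear regression: try candidate slopes (in theta
    cycles per field traversal) and keep the one that maximizes the mean
    resultant length of the residual phases (phase - 2*pi*slope*pos)."""
    resultant = [np.abs(np.exp(1j * (phase - 2 * np.pi * a * pos)).mean())
                 for a in slopes]
    return slopes[int(np.argmax(resultant))]

slope = phase_position_slope(pos, phase)
print(round(slope, 2))  # near -1.0: one cycle of precession across the field
```

Direction-dependence can then be probed by fitting this slope separately for runs in each direction through the field; an extrinsic code predicts the sequence (and sign of the pairwise lag) flips, while an intrinsic one does not.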


Subject(s)
Place Cells , Theta Rhythm , Action Potentials , Animals , Hippocampus , Male , Neurological Models , Rats , Long-Evans Rats
3.
J Neurosci Methods ; 324: 108307, 2019 08 01.
Article in English | MEDLINE | ID: mdl-31176683

ABSTRACT

BACKGROUND: A prerequisite for many eye tracking and video-oculography (VOG) methods is an accurate localization of the pupil. Several existing techniques face challenges in images with artifacts and under naturalistic low-light conditions, e.g. with highly dilated pupils. NEW METHOD: For the first time, we propose to use a fully convolutional neural network (FCNN) for segmentation of the whole pupil area, trained on 3946 VOG images hand-annotated at our institute. We integrate the FCNN into DeepVOG, along with an established method for gaze estimation from elliptical pupil contours, which we improve upon by considering our FCNN's segmentation confidence measure. RESULTS: The FCNN output simultaneously enables us to perform pupil center localization, elliptical contour estimation and blink detection, all with a single network and with an assigned confidence value, at frame rates above 130 Hz on commercial workstations with GPU acceleration. Pupil center coordinates can be estimated with a median accuracy of around 1.0 pixel, and gaze estimation is accurate to within 0.5 degrees. The FCNN is able to robustly segment the pupil in a wide array of datasets that were not used for training. COMPARISON WITH EXISTING METHODS: We validate our method against gold-standard eye images that were artificially rendered, as well as hand-annotated VOG data from a gold-standard clinical system (EyeSeeCam) at our institute. CONCLUSIONS: Our proposed FCNN-based pupil segmentation framework is accurate, robust and generalizes well to new VOG datasets. We provide our code and pre-trained FCNN model open source and free of charge at www.github.com/pydsgz/DeepVOG.
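The step from a segmentation mask to an elliptical pupil contour can be sketched with image moments: the ellipse center is the mask centroid, and the axes and orientation come from the covariance of the pixel coordinates. This is a generic stand-in, not DeepVOG's actual contour fit, and the synthetic circular "pupil" below is invented for the demo:

```python
import numpy as np

def ellipse_from_mask(mask):
    """Fit an ellipse to a binary segmentation mask via image moments:
    center = pixel centroid; semi-axes and orientation from the
    eigendecomposition of the pixel-coordinate covariance (for a filled
    ellipse, 2*sqrt(eigenvalue) recovers each semi-axis length)."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    cov = np.cov(np.vstack([xs, ys]))
    evals, evecs = np.linalg.eigh(cov)            # ascending eigenvalues
    semi_axes = 2.0 * np.sqrt(evals)              # (minor, major)
    angle = np.arctan2(evecs[1, -1], evecs[0, -1])  # major-axis angle, rad
    return (cx, cy), semi_axes, angle

# Synthetic circular "pupil" of radius 10 centered at (32, 32):
yy, xx = np.mgrid[0:64, 0:64]
mask = (xx - 32) ** 2 + (yy - 32) ** 2 <= 10 ** 2
(cx, cy), axes, _ = ellipse_from_mask(mask)
print((round(cx, 1), round(cy, 1)))  # (32.0, 32.0)
```

In the paper's pipeline, the per-pixel segmentation confidence additionally weights the fit, and the fitted ellipse feeds an established gaze-estimation model; here the centroid alone already gives the pupil center.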


Subject(s)
Deep Learning , Ocular Fixation/physiology , Pupil/physiology , Adult , Female , Humans , Computer-Assisted Image Processing/methods , Male , Neurosciences/methods , Video Recording