Results 1 - 20 of 62
1.
bioRxiv ; 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38915704

ABSTRACT

Methodological advances in neuroscience have enabled the collection of massive datasets, which demand innovative approaches for scientific communication. Existing platforms for data storage lack intuitive tools for data exploration, limiting our ability to interact effectively with these brain-wide datasets. We introduce two public websites, Data and Atlas, developed for the International Brain Laboratory (IBL), which provide access to millions of behavioral trials and hundreds of thousands of individual neurons. These interfaces allow users to discover both the raw and processed brain-wide data released by the IBL at the scale of the whole brain, individual sessions, trials, and neurons. Because these data interfaces are hosted as websites, they are available cross-platform with no installation. Because each site's code is released as a modular open-source framework, other researchers can easily develop their own web interfaces to explore their own data. As neuroscience datasets continue to expand, customizable web interfaces offer a glimpse into a future of streamlined data exploration and act as blueprints for future tools.

2.
Nat Methods ; 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38918605

ABSTRACT

Contemporary pose estimation methods enable precise measurements of behavior via supervised deep learning with hand-labeled video frames. Although effective in many cases, the supervised approach requires extensive labeling and often produces outputs that are unreliable for downstream analyses. Here, we introduce 'Lightning Pose', an efficient pose estimation package with three algorithmic contributions. First, in addition to training on a few labeled video frames, we use many unlabeled videos and penalize the network whenever its predictions violate motion continuity, multiple-view geometry and posture plausibility (semi-supervised learning). Second, we introduce a network architecture that resolves occlusions by predicting pose on any given frame using surrounding unlabeled frames. Third, we refine the pose predictions post hoc by combining ensembling and Kalman smoothing. Together, these components render pose trajectories more accurate and scientifically usable. We released a cloud application that allows users to label data, train networks and process new videos directly from the browser.
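The post hoc refinement step described above (ensembling plus Kalman smoothing) can be illustrated with a minimal sketch. This is not the Lightning Pose API; the networks, coordinates, and noise parameters below are invented for illustration, and a simple random-walk Kalman filter with a Rauch-Tung-Striebel smoother stands in for the package's implementation:

```python
# Sketch: ensemble-average keypoint x-coordinates from several networks,
# then smooth the trajectory with a 1D random-walk Kalman filter + RTS
# smoother. Illustrative only; variances q, r are made up.

def kalman_smooth_1d(obs, q=0.01, r=1.0):
    """Random-walk Kalman filter followed by an RTS smoothing pass."""
    n = len(obs)
    xs, ps = [], []
    x, p = obs[0], 1.0
    for z in obs:                      # forward (filtering) pass
        p = p + q                      # predict
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with observation z
        p = (1 - k) * p
        xs.append(x)
        ps.append(p)
    for t in range(n - 2, -1, -1):     # backward (RTS smoothing) pass
        g = ps[t] / (ps[t] + q)
        xs[t] = xs[t] + g * (xs[t + 1] - xs[t])
    return xs

# Ensemble: average per-frame predictions from three hypothetical networks.
preds = [
    [10.0, 11.2, 12.1, 13.0, 30.0, 15.1],   # net A (glitch at frame 4)
    [10.1, 11.0, 12.0, 13.2, 14.0, 15.0],   # net B
    [ 9.9, 11.1, 11.9, 12.9, 14.2, 14.9],   # net C
]
ensemble = [sum(col) / len(col) for col in zip(*preds)]
smoothed = kalman_smooth_1d(ensemble)
```

The ensemble average dilutes the single-network glitch at frame 4, and the smoother pulls the remaining outlier toward the motion-continuous trajectory.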

4.
bioRxiv ; 2024 May 08.
Article in English | MEDLINE | ID: mdl-38766250

ABSTRACT

Computational psychiatry has suggested that individuals with autism spectrum disorder (ASD) inflexibly update their expectations (i.e., Bayesian priors). Here, we leveraged high-yield rodent psychophysics (n = 75 mice), extensive behavioral modeling (including both principled and heuristic models), and (near) brain-wide single-cell extracellular recordings (over 53,000 units in 150 brain areas) to ask (1) whether mice with different genetic perturbations associated with ASD show this same computational anomaly, and if so, (2) what neurophysiological features are shared across genotypes in subserving this deficit. We demonstrate that mice harboring mutations in Fmr1, Cntnap2, and Shank3B show a blunted update of priors during decision-making. Neurally, the differentiating factor between animals flexibly and inflexibly updating their priors was a shift in the weighting of prior encoding from sensory to frontal cortices. Further, in mouse models of ASD, frontal areas showed a preponderance of units coding for deviations from the animals' long-run prior, and sensory responses did not differentiate between expected and unexpected observations. These findings demonstrate that distinct genetic instantiations of ASD may yield common neurophysiological and behavioral phenotypes.

5.
bioRxiv ; 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-37162966

ABSTRACT

Contemporary pose estimation methods enable precise measurements of behavior via supervised deep learning with hand-labeled video frames. Although effective in many cases, the supervised approach requires extensive labeling and often produces outputs that are unreliable for downstream analyses. Here, we introduce "Lightning Pose," an efficient pose estimation package with three algorithmic contributions. First, in addition to training on a few labeled video frames, we use many unlabeled videos and penalize the network whenever its predictions violate motion continuity, multiple-view geometry, and posture plausibility (semi-supervised learning). Second, we introduce a network architecture that resolves occlusions by predicting pose on any given frame using surrounding unlabeled frames. Third, we refine the pose predictions post-hoc by combining ensembling and Kalman smoothing. Together, these components render pose trajectories more accurate and scientifically usable. We release a cloud application that allows users to label data, train networks, and predict new videos directly from the browser.

6.
Philos Trans R Soc Lond B Biol Sci ; 378(1886): 20220344, 2023 09 25.
Article in English | MEDLINE | ID: mdl-37545300

ABSTRACT

A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of causal inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood within the cadre of naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation. First, we derive a normative account specifying how observers ought to intercept hidden and moving targets given their belief about (i) whether retinal motion was caused by the target moving, and (ii) if so, with what velocity. Next, in line with the modelling results, we show that humans report targets as stationary and steer towards their initial rather than final position more often when they are themselves moving, suggesting a putative misattribution of object-motion to the self. Further, we predict that observers should misattribute retinal motion more often: (i) during passive rather than active self-motion (given the lack of an efference copy informing self-motion estimates in the former), and (ii) when targets are presented eccentrically rather than centrally (given that lateral self-motion flow vectors are larger at eccentric locations during forward self-motion). Results support both of these predictions. Lastly, analysis of eye movements shows that, while initial saccades toward targets were largely accurate regardless of the self-motion condition, subsequent gaze pursuit was modulated by target velocity during object-only motion, but not during concurrent object- and self-motion. These results demonstrate CI within action-perception loops, and suggest a protracted temporal unfolding of the computations characterizing CI. This article is part of the theme issue 'Decision and control processes in multisensory perception'.


Subjects
Motion Perception, Humans, Eye Movements, Motion (Physics), Saccades, Orientation, Photic Stimulation
7.
bioRxiv ; 2023 Jul 25.
Article in English | MEDLINE | ID: mdl-37547006

ABSTRACT

Self-initiated behavior is accompanied by the experience of willing our actions. Here, we leverage the unique opportunity to examine the full intentional chain, from will (W) to action (A) to environmental effects (E), in a tetraplegic person fitted with a primary motor cortex (M1) brain-machine interface (BMI) generating hand movements via neuromuscular electrical stimulation (NMES). This combined BMI-NMES approach allowed us to selectively manipulate each element of the intentional chain (W, A, and E) while performing extracellular recordings and probing subjective experience. Our results reveal single-cell, multi-unit, and population-level dynamics in human M1 that encode W and may predict its subjective onset. Further, we show that the proficiency of a neural decoder in M1 reflects the degree of W-A binding, tracking the participant's subjective experience of intention in (near) real time. These results point to M1 as a critical node in forming the subjective experience of intention and demonstrate the relevance of intention-related signals for translational neuroprosthetics.

8.
bioRxiv ; 2023 Jul 31.
Article in English | MEDLINE | ID: mdl-37577498

ABSTRACT

Natural behaviors occur in closed action-perception loops and are supported by dynamic and flexible beliefs abstracted away from our immediate sensory milieu. How this real-world flexibility is instantiated in neural circuits remains unknown. Here we have macaques navigate in a virtual environment by primarily leveraging sensory (optic flow) signals, or by more heavily relying on acquired internal models. We record single-unit spiking activity simultaneously from the dorsomedial superior temporal area (MSTd), parietal area 7a, and the dorsolateral prefrontal cortex (dlPFC). Results show that while animals were able to maintain adaptive task-relevant beliefs regardless of sensory context, the fine-grained statistical dependencies between neurons, particularly in 7a and dlPFC, dynamically remapped with the changing computational demands. In dlPFC, but not 7a, destroying these statistical dependencies abolished the area's ability for cross-context decoding. Lastly, correlation analyses suggested that the more unit-to-unit couplings remapped in dlPFC, and the less they did so in MSTd, the less were population codes and behavior impacted by the loss of sensory evidence. We conclude that dynamic functional connectivity between prefrontal cortex neurons maintains a stable population code and context-invariant beliefs during naturalistic behavior with closed action-perception loops.

9.
Trends Cogn Sci ; 27(7): 631-641, 2023 07.
Article in English | MEDLINE | ID: mdl-37183143

ABSTRACT

Autism impacts a wide range of behaviors and neural functions. As such, theories of autism spectrum disorder (ASD) are numerous and span different levels of description, from neurocognitive to molecular. We propose how existing behavioral, computational, algorithmic, and neural accounts of ASD may relate to one another. Specifically, we argue that ASD may be cast as a disorder of causal inference (computational level). This computation relies on marginalization, which is thought to be subserved by divisive normalization (algorithmic level). In turn, divisive normalization may be impaired by excitatory-to-inhibitory imbalances (neural implementation level). We also discuss ASD within similar frameworks, those of predictive coding and circular inference. Together, we hope to motivate work unifying the different accounts of ASD.


Subjects
Autism Spectrum Disorder, Autistic Disorder, Humans
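The algorithmic level invoked above, divisive normalization, has a very compact form: each unit's driving input is divided by the summed activity of a normalization pool plus a semi-saturation constant. A minimal sketch (parameter values are illustrative, not fitted to any data):

```python
# Divisive normalization: each unit's drive is divided by the pooled
# activity of the population plus a semi-saturation constant sigma.

def divisive_normalization(drives, sigma=1.0):
    pool = sum(drives)
    return [d / (sigma + pool) for d in drives]

drives = [4.0, 2.0, 2.0]
normalized = divisive_normalization(drives)   # [4/9, 2/9, 2/9]
```

Note that the relative ordering of responses is preserved while the population response saturates, which is what lets this operation approximate the marginalization step in causal inference.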
10.
bioRxiv ; 2023 Jan 30.
Article in English | MEDLINE | ID: mdl-36778376

ABSTRACT

A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of Bayesian Causal Inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood within the cadre of naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation. First, we derive a normative account specifying how observers ought to intercept hidden and moving targets given their belief over (i) whether retinal motion was caused by the target moving, and (ii) if so, with what velocity. Next, in line with the modeling results, we show that humans report targets as stationary and steer toward their initial rather than final position more often when they are themselves moving, suggesting a misattribution of object-motion to the self. Further, we predict that observers should misattribute retinal motion more often: (i) during passive rather than active self-motion (given the lack of an efference copy informing self-motion estimates in the former), and (ii) when targets are presented eccentrically rather than centrally (given that lateral self-motion flow vectors are larger at eccentric locations during forward self-motion). Results confirm both of these predictions. Lastly, analysis of eye movements shows that, while initial saccades toward targets are largely accurate regardless of the self-motion condition, subsequent gaze pursuit was modulated by target velocity during object-only motion, but not during concurrent object- and self-motion. These results demonstrate CI within action-perception loops, and suggest a protracted temporal unfolding of the computations characterizing CI.

11.
J Neurosci ; 42(45): 8450-8459, 2022 11 09.
Article in English | MEDLINE | ID: mdl-36351831

ABSTRACT

Since the discovery of conspicuously spatially tuned neurons in the hippocampal formation over 50 years ago, characterizing which, where, and how neurons encode navigationally relevant variables has been a major thrust of navigational neuroscience. While much of this effort has centered on the hippocampal formation and functionally adjacent structures, recent work suggests that spatial codes, in some form or another, can be found throughout the brain, even in areas traditionally associated with sensation, movement, and executive function. In this review, we highlight these unexpected results, draw insights from comparison of these codes across contexts, regions, and species, and finally suggest an avenue for future work to make sense of these diverse and dynamic navigational codes.


Subjects
Spatial Navigation, Spatial Navigation/physiology, Brain/physiology, Brain Mapping, Hippocampus/physiology, Neurons/physiology
12.
Elife ; 11, 2022 10 25.
Article in English | MEDLINE | ID: mdl-36282071

ABSTRACT

We do not understand how neural nodes operate and coordinate within the recurrent action-perception loops that characterize naturalistic self-environment interactions. Here, we record single-unit spiking activity and local field potentials (LFPs) simultaneously from the dorsomedial superior temporal area (MSTd), parietal area 7a, and dorsolateral prefrontal cortex (dlPFC) as monkeys navigate in virtual reality to 'catch fireflies'. This task requires animals to actively sample from a closed-loop virtual environment while concurrently computing continuous latent variables: (i) the distance and angle travelled (i.e., path integration) and (ii) the distance and angle to a memorized firefly location (i.e., a hidden spatial goal). We observed a patterned mixed selectivity, with the prefrontal cortex most prominently coding for latent variables, parietal cortex coding for sensorimotor variables, and MSTd most often coding for eye movements. However, even the traditionally considered sensory area (i.e., MSTd) tracked latent variables, demonstrating path integration and vector coding of hidden spatial goals. Further, global encoding profiles and unit-to-unit coupling (i.e., noise correlations) suggested a functional subnetwork composed of MSTd and dlPFC, rather than linking these areas with 7a, as anatomy would suggest. We show that the greater the unit-to-unit coupling between MSTd and dlPFC, the more the animals' gaze position was indicative of the ongoing location of the hidden spatial goal. We suggest this MSTd-dlPFC subnetwork reflects the monkeys' natural and adaptive task strategy wherein they continuously gaze toward the location of the (invisible) target. Together, these results highlight the distributed nature of neural coding during closed action-perception loops and suggest that fine-grained functional subnetworks may be dynamically established to subserve (embodied) task strategies.


Subjects
Eye Movements, Temporal Lobe, Animals, Macaca mulatta, Parietal Lobe, Prefrontal Cortex, Photic Stimulation/methods
14.
PLoS Comput Biol ; 18(9): e1010464, 2022 09.
Article in English | MEDLINE | ID: mdl-36103520

ABSTRACT

Accurately predicting contact between our bodies and environmental objects is paramount to our evolutionary survival. It has been hypothesized that multisensory neurons responding both to touch on the body and to auditory or visual stimuli occurring near it (thus delineating our peripersonal space, PPS) may be a critical player in this computation. However, we lack a normative account (i.e., a model specifying how we ought to compute) linking impact prediction and PPS encoding. Here, we leverage Bayesian Decision Theory to develop such a model and show that it recapitulates many of the characteristics of PPS. Namely, a normative model of impact prediction (i) delineates a graded boundary between near and far space, (ii) demonstrates an enlargement of PPS as the speed of incoming stimuli increases, (iii) shows stronger contact prediction for looming than for receding stimuli, while remaining nonzero for receding stimuli when observation uncertainty is nonzero, (iv) scales with the value we attribute to environmental objects, and finally (v) can account for the differing sizes of PPS for different body parts. Together, these modeling results support the conjecture that PPS reflects the computation of impact prediction, and make a number of testable predictions for future empirical studies.


Subjects
Personal Space, Touch Perception, Bayes Theorem, Neurons, Space Perception/physiology, Touch/physiology, Touch Perception/physiology
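The qualitative properties (i)-(iii) above follow from a very simple contact-prediction rule. The sketch below is not the paper's fitted model; the distance units, time horizon, and noise level are invented. It treats the probability of contact as the probability that a stimulus at noisily observed distance d, approaching at speed v, reaches the body within a horizon T under Gaussian uncertainty:

```python
import math

# P(contact within horizon T) for a stimulus at distance d moving at
# speed v toward the body (v < 0 means receding), with Gaussian
# uncertainty sigma on the predicted distance. Illustrative parameters.

def contact_probability(d, v, T=1.0, sigma=0.5):
    mu = d - v * T                       # predicted distance at horizon
    # P(predicted distance <= 0) under N(mu, sigma^2), via the error function
    return 0.5 * (1 + math.erf((0 - mu) / (sigma * math.sqrt(2))))

near = contact_probability(d=0.3, v=1.0)       # nearby, looming: high
far = contact_probability(d=3.0, v=1.0)        # far away: near zero
receding = contact_probability(d=0.3, v=-1.0)  # receding, sigma > 0: small but nonzero
```

The graded fall-off with distance gives the soft near/far boundary, higher v enlarges the high-probability zone, and receding stimuli retain a nonzero contact probability only because sigma > 0.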
15.
J Neurosci ; 42(27): 5451-5462, 2022 07 06.
Article in English | MEDLINE | ID: mdl-35641186

ABSTRACT

Sensory evidence accumulation is considered a hallmark of decision-making in noisy environments. Integration of sensory inputs has been traditionally studied using passive stimuli, segregating perception from action. Lessons learned from this approach, however, may not generalize to ethological behaviors like navigation, where there is an active interplay between perception and action. We designed a sensory-based sequential decision task in virtual reality in which humans and monkeys navigated to a memorized location by integrating optic flow generated by their own joystick movements. A major challenge in such closed-loop tasks is that subjects' actions will determine future sensory input, causing ambiguity about whether they rely on sensory input rather than on expectations based solely on a learned model of the dynamics. To test whether subjects integrated optic flow over time, we used three independent experimental manipulations: unpredictable optic flow perturbations, which pushed subjects off their trajectory; gain manipulation of the joystick controller, which changed the consequences of actions; and manipulation of the optic flow density, which changed the information borne by sensory evidence. Our results suggest that both macaques (male) and humans (female/male) relied heavily on optic flow, thereby demonstrating a critical role for sensory evidence accumulation during naturalistic action-perception closed-loop tasks.

SIGNIFICANCE STATEMENT: The temporal integration of evidence is a fundamental component of mammalian intelligence. Yet, it has traditionally been studied using experimental paradigms that fail to capture the closed-loop interaction between actions and sensations inherent in real-world continuous behaviors. These conventional paradigms use binary decision tasks and passive stimuli with statistics that remain stationary over time. Instead, we developed a naturalistic visuomotor visual navigation paradigm that mimics the causal structure of real-world sensorimotor interactions and probed the extent to which participants integrate sensory evidence by adding task manipulations that reveal complementary aspects of the computation.


Subjects
Optic Flow, Animals, Female, Humans, Male, Mammals, Movement
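The logic of the perturbation manipulation above can be sketched in a few lines. This is a toy illustration of the task structure, not the study's analysis code; the velocity traces and time step are invented. If a subject's believed displacement is the time integral of experienced optic-flow velocity, an unpredictable velocity perturbation shifts the endpoint unless it is actively compensated:

```python
# Believed displacement = time integral of experienced velocity.
# An uncompensated perturbation shifts the integrated endpoint.

def integrate_path(velocities, dt=0.1):
    pos, path = 0.0, []
    for v in velocities:
        pos += v * dt
        path.append(pos)
    return path

joystick = [1.0] * 20                               # steady forward command
perturbation = [0.0] * 10 + [0.5] * 5 + [0.0] * 5   # brief unexpected push
experienced = [j + p for j, p in zip(joystick, perturbation)]

unperturbed_end = integrate_path(joystick)[-1]      # 2.0
perturbed_end = integrate_path(experienced)[-1]     # 2.25
```

A subject who truly integrates optic flow should register (and steer to counter) the extra 0.25 of displacement; a subject relying only on a learned model of the joystick dynamics would not.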
16.
Elife ; 11, 2022 05 17.
Article in English | MEDLINE | ID: mdl-35579424

ABSTRACT

Autism spectrum disorder (ASD) is characterized by a panoply of social, communicative, and sensory anomalies. As such, a central goal of computational psychiatry is to ascribe the heterogeneous phenotypes observed in ASD to a limited set of canonical computations that may have gone awry in the disorder. Here, we posit causal inference - the process of inferring a causal structure linking sensory signals to hidden world causes - as one such computation. We show that audio-visual integration is intact in ASD and in line with optimal models of cue combination, yet multisensory behavior is anomalous in ASD because this group operates under an internal model favoring integration (vs. segregation). Paradoxically, during explicit reports of common cause across spatial or temporal disparities, individuals with ASD were less and not more likely to report common cause, particularly at small cue disparities. Formal model fitting revealed differences in both the prior probability for common cause (p-common) and choice biases, which are dissociable in implicit but not explicit causal inference tasks. Together, this pattern of results suggests (i) different internal models in attributing world causes to sensory signals in ASD relative to neurotypical individuals given identical sensory cues, and (ii) the presence of an explicit compensatory mechanism in ASD, with these individuals putatively having learned to compensate for their bias to integrate in explicit reports.


Subjects
Autism Spectrum Disorder, Causality, Cues (Psychology), Humans
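The p-common quantity fitted above comes from the standard Bayesian causal-inference model of cue combination. The sketch below implements that textbook model (in the style of Kording et al., 2007), not this study's fitted variant; the noise and prior parameters are illustrative. It returns the posterior probability that two cues x1, x2 share a common cause:

```python
import math

# Standard Bayesian causal inference for two cues x1, x2 with sensory
# noise s1, s2, a zero-mean Gaussian prior over source location (sp),
# and a prior probability p_common of a common cause.

def gauss(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def p_common_posterior(x1, x2, s1=1.0, s2=1.0, sp=10.0, p_common=0.5):
    v1, v2, vp = s1 ** 2, s2 ** 2, sp ** 2
    # Likelihood under one common source, marginalized over its location.
    denom = v1 * v2 + v1 * vp + v2 * vp
    like_c1 = math.exp(-0.5 * ((x1 - x2) ** 2 * vp
                               + x1 ** 2 * v2 + x2 ** 2 * v1) / denom) \
              / (2 * math.pi * math.sqrt(denom))
    # Likelihood under two independent sources.
    like_c2 = gauss(x1, 0.0, v1 + vp) * gauss(x2, 0.0, v2 + vp)
    post = p_common * like_c1
    return post / (post + (1 - p_common) * like_c2)

small_gap = p_common_posterior(1.0, 1.5)   # small disparity: common cause likely
large_gap = p_common_posterior(1.0, 8.0)   # large disparity: common cause unlikely
```

Raising p_common (the internal prior favoring integration) raises the posterior at every disparity, which is how the model separates a shifted prior from a choice bias in implicit tasks.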
17.
Article in English | MEDLINE | ID: mdl-33845169

ABSTRACT

BACKGROUND: Autism spectrum disorder (ASD) affects many aspects of life, from social interactions to (multi)sensory processing. Similarly, the condition expresses at a variety of levels of description, from genetics to neural circuits and interpersonal behavior. We attempt to bridge between domains and levels of description by detailing the behavioral, electrophysiological, and putative neural network basis of peripersonal space (PPS) updating in ASD during a social context, given that the encoding of this space relies on appropriate multisensory integration, is malleable by social context, and is thought to delineate the boundary between the self and others. METHODS: Fifty (20 male/30 female) young adults, either diagnosed with ASD or age- and sex-matched individuals, took part in a visuotactile reaction time task indexing PPS, while high-density electroencephalography was continuously recorded. Neural network modeling was performed in silico. RESULTS: Multisensory psychophysics demonstrates that while PPS in neurotypical individuals shrinks in the presence of others (so as to "give space"), this does not occur in ASD. Likewise, electroencephalography recordings suggest that multisensory integration is altered by social context in neurotypical individuals but not in individuals with ASD. Finally, a biologically plausible neural network model shows, as a proof of principle, that PPS updating may be inflexible in ASD owing to the altered excitatory/inhibitory balance that characterizes neural circuits in animal models of ASD. CONCLUSIONS: Findings are conceptually in line with recent statistical inference accounts, suggesting diminished flexibility in ASD, and extend these observations by suggesting, within an example relevant for social cognition, that such inflexibility may be due to excitatory/inhibitory imbalances.


Subjects
Autism Spectrum Disorder, Autistic Disorder, Female, Humans, Male, Neural Networks, Computer, Personal Space, Social Environment
18.
Annu Rev Psychol ; 73: 103-129, 2022 01 04.
Article in English | MEDLINE | ID: mdl-34546803

ABSTRACT

Navigating by path integration requires continuously estimating one's self-motion. This estimate may be derived from visual velocity and/or vestibular acceleration signals. Importantly, these senses in isolation are ill-equipped to provide accurate estimates, and thus visuo-vestibular integration is an imperative. After a summary of the visual and vestibular pathways involved, the crux of this review focuses on the human and theoretical approaches that have outlined a normative account of cue combination in behavior and neurons, as well as on the systems neuroscience efforts that are searching for its neural implementation. We then highlight a contemporary frontier in our state of knowledge: understanding how velocity cues with time-varying reliabilities are integrated into an evolving position estimate over prolonged time periods. Further, we discuss how the brain builds internal models inferring when cues ought to be integrated versus segregated-a process of causal inference. Lastly, we suggest that the study of spatial navigation has not yet addressed its initial condition: self-location.


Subjects
Motion Perception, Neurosciences, Brain/physiology, Cognition, Cues (Psychology), Humans, Motion Perception/physiology
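The normative cue-combination rule this review summarizes has a closed form: the combined self-motion estimate weights each cue by its reliability (inverse variance), and the combined variance is lower than either cue's alone. A minimal sketch with invented visual and vestibular values:

```python
# Reliability-weighted (inverse-variance) fusion of two Gaussian cues.

def fuse(mu_vis, var_vis, mu_vest, var_vest):
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_vest)
    w_vest = 1 - w_vis
    mu = w_vis * mu_vis + w_vest * mu_vest       # combined estimate
    var = 1 / (1 / var_vis + 1 / var_vest)       # combined variance
    return mu, var

# Visual cue is 3x more reliable, so the estimate sits closer to it.
mu, var = fuse(mu_vis=10.0, var_vis=1.0, mu_vest=12.0, var_vest=3.0)
```

Here the fused estimate is 10.5 (weighted 3:1 toward vision) with variance 0.75, below the visual cue's variance of 1.0, which is the behavioral signature of optimal visuo-vestibular integration.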
19.
PLoS Comput Biol ; 17(9): e1009439, 2021 09.
Article in English | MEDLINE | ID: mdl-34550974

ABSTRACT

Recent neuroscience studies demonstrate that a deeper understanding of brain function requires a deeper understanding of behavior. Detailed behavioral measurements are now often collected using video cameras, resulting in an increased need for computer vision algorithms that extract useful information from video data. Here we introduce a new video analysis tool that combines the output of supervised pose estimation algorithms (e.g. DeepLabCut) with unsupervised dimensionality reduction methods to produce interpretable, low-dimensional representations of behavioral videos that extract more information than pose estimates alone. We demonstrate this tool by extracting interpretable behavioral features from videos of three different head-fixed mouse preparations, as well as a freely moving mouse in an open field arena, and show how these interpretable features can facilitate downstream behavioral and neural analyses. We also show how the behavioral features produced by our model improve the precision and interpretation of these downstream analyses compared to using the outputs of either fully supervised or fully unsupervised methods alone.


Subjects
Algorithms, Artificial Intelligence/statistics & numerical data, Behavior, Animal, Video Recording, Animals, Computational Biology, Computer Simulation, Markov Chains, Mice, Models, Statistical, Neural Networks, Computer, Supervised Machine Learning/statistics & numerical data, Unsupervised Machine Learning/statistics & numerical data, Video Recording/statistics & numerical data
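The dimensionality-reduction step described above can be sketched in miniature. This is not the paper's model (which combines pose estimates with unsupervised video features); plain PCA via power iteration stands in for it, and the "pose" data below are synthetic keypoint coordinates generated from a single latent dimension:

```python
# Reduce multi-keypoint pose trajectories to a 1D behavioral score:
# center the data, build the covariance matrix, and extract its leading
# eigenvector by power iteration. Pure-Python PCA stand-in.

def pca_first_component(data, iters=200):
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centered) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):                        # power iteration
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    scores = [sum(r[j] * v[j] for j in range(d)) for r in centered]
    return v, scores

# Synthetic "poses": four coordinates all driven by one latent variable t.
frames = [[t, 2 * t + 0.1, -t, 0.5 * t] for t in range(10)]
component, scores = pca_first_component(frames)
```

Because the four coordinates covary perfectly, a single component captures all the variance, which is the idealized version of the low-dimensional behavioral representations the paper extracts from real videos.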
20.
PLoS Biol ; 19(5): e3001215, 2021 05.
Article in English | MEDLINE | ID: mdl-33979326

ABSTRACT

Perceptual anomalies in individuals with autism spectrum disorder (ASD) have been attributed to an imbalance in weighting incoming sensory evidence with prior knowledge when interpreting sensory information. Here, we show that sensory encoding and how it adapts to changing stimulus statistics during feedback also characteristically differs between neurotypical and ASD groups. In a visual orientation estimation task, we extracted the accuracy of sensory encoding from psychophysical data by using an information theoretic measure. Initially, sensory representations in both groups reflected the statistics of visual orientations in natural scenes, but encoding capacity was overall lower in the ASD group. Exposure to an artificial (i.e., uniform) distribution of visual orientations coupled with performance feedback altered the sensory representations of the neurotypical group toward the novel experimental statistics, while also increasing their total encoding capacity. In contrast, neither total encoding capacity nor its allocation significantly changed in the ASD group. Across both groups, the degree of adaptation was correlated with participants' initial encoding capacity. These findings highlight substantial deficits in sensory encoding, independent from and potentially in addition to deficits in decoding, in individuals with ASD.


Subjects
Autism Spectrum Disorder/physiopathology, Visual Perception/physiology, Adolescent, Autism Spectrum Disorder/metabolism, Humans, Male, Models, Theoretical