Results 1 - 20 of 2,322
1.
Cell; 175(3): 643-651.e14, 2018 Oct 18.
Article in English | MEDLINE | ID: mdl-30340039

ABSTRACT

The biophysical features of neurons shape information processing in the brain. Cortical neurons are larger in humans than in other species, but it is unclear how their size affects synaptic integration. Here, we perform direct electrical recordings from human dendrites and report enhanced electrical compartmentalization in layer 5 pyramidal neurons. Compared to rat dendrites, distal human dendrites provide limited excitation to the soma, even in the presence of dendritic spikes. Human somas also exhibit less bursting due to reduced recruitment of dendritic electrogenesis. Finally, we find that decreased ion channel densities result in higher input resistance and underlie the lower coupling of human dendrites. We conclude that the increased length of human neurons alters their input-output properties, which will impact cortical computation. VIDEO ABSTRACT.


Subject(s)
Dendrites/physiology; Pyramidal Cells/physiology; Action Potentials; Adult; Animals; Female; Humans; Ion Channels/metabolism; Male; Pyramidal Cells/cytology; Rats; Rats, Sprague-Dawley; Species Specificity; Synaptic Potentials
2.
Annu Rev Neurosci; 47(1): 211-234, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39115926

ABSTRACT

The cerebral cortex performs computations via numerous six-layer modules. The operational dynamics of these modules were studied primarily in early sensory cortices using bottom-up computation for response selectivity as a model, which has been recently revolutionized by genetic approaches in mice. However, cognitive processes such as recall and imagery require top-down generative computation. The question of whether the layered module operates similarly in top-down generative processing as in bottom-up sensory processing has become testable by advances in the layer identification of recorded neurons in behaving monkeys. This review examines recent advances in laminar signaling in these two computations, using predictive coding computation as a common reference, and shows that each of these computations recruits distinct laminar circuits, particularly in layer 5, depending on the cognitive demands. These findings highlight many open questions, including how different interareal feedback pathways, originating from and terminating at different layers, convey distinct functional signals.


Subject(s)
Cerebral Cortex; Cognition; Animals; Cognition/physiology; Cerebral Cortex/physiology; Humans; Neurons/physiology; Models, Neurological; Neural Pathways/physiology; Nerve Net/physiology; Signal Transduction/physiology
3.
Annu Rev Neurosci; 46: 17-37, 2023 Jul 10.
Article in English | MEDLINE | ID: mdl-37428604

ABSTRACT

How neurons detect the direction of motion is a prime example of neural computation: Motion vision is found in the visual systems of virtually all sighted animals, it is important for survival, and it requires interesting computations with well-defined linear and nonlinear processing steps; yet the whole process is of moderate complexity. The genetic methods available in the fruit fly Drosophila and the charting of a connectome of its visual system have led to rapid progress and unprecedented detail in our understanding of how neurons compute the direction of motion in this organism. The picture that emerged incorporates not only the identity, morphology, and synaptic connectivity of each neuron involved but also its neurotransmitters, its receptors, and their subcellular localization. Together with the neurons' membrane potential responses to visual stimulation, this information provides the basis for a biophysically realistic model of the circuit that computes the direction of visual motion.


Subject(s)
Motion Perception; Animals; Motion Perception/physiology; Visual Pathways/physiology; Drosophila/physiology; Vision, Ocular; Neurons/physiology; Photic Stimulation
4.
Annu Rev Neurosci; 46: 233-258, 2023 Jul 10.
Article in English | MEDLINE | ID: mdl-36972611

ABSTRACT

Flexible behavior requires the creation, updating, and expression of memories to depend on context. While the neural underpinnings of each of these processes have been intensively studied, recent advances in computational modeling revealed a key challenge in context-dependent learning that had been largely ignored previously: Under naturalistic conditions, context is typically uncertain, necessitating contextual inference. We review a theoretical approach to formalizing context-dependent learning in the face of contextual uncertainty and the core computations it requires. We show how this approach begins to organize a large body of disparate experimental observations, from multiple levels of brain organization (including circuits, systems, and behavior) and multiple brain regions (most prominently the prefrontal cortex, the hippocampus, and motor cortices), into a coherent framework. We argue that contextual inference may also be key to understanding continual learning in the brain. This theory-driven perspective places contextual inference as a core component of learning.


Subject(s)
Brain; Learning; Hippocampus; Prefrontal Cortex; Computer Simulation
5.
Annu Rev Neurosci; 44: 403-424, 2021 Jul 8.
Article in English | MEDLINE | ID: mdl-33863252

ABSTRACT

Neurons in the brain represent information in their collective activity. The fidelity of this neural population code depends on whether and how variability in the response of one neuron is shared with other neurons. Two decades of studies have investigated the influence of these noise correlations on the properties of neural coding. We provide an overview of the theoretical developments on the topic. Using simple, qualitative, and general arguments, we discuss, categorize, and relate the various published results. We emphasize the relevance of the fine structure of noise correlation, and we present a new approach to the issue. Throughout this review, we emphasize a geometrical picture of how noise correlations impact the neural code.


Subject(s)
Brain; Neurons; Action Potentials; Models, Neurological
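The geometrical picture this review emphasizes can be illustrated with the standard linear Fisher information (a textbook quantity, not code from the paper): for a population with tuning-curve derivative f' and noise covariance Sigma, the information about the stimulus is f'ᵀ Sigma⁻¹ f', so whether shared noise helps or hurts depends on its alignment with the signal direction. A minimal two-neuron sketch in Python, with invented numbers:

```python
import numpy as np

# Linear Fisher information I = f'^T Sigma^{-1} f' for two neurons.
# f' is the tuning-curve derivative (the "signal direction"); Sigma is
# the noise covariance. The numbers below are purely illustrative.
fprime = np.array([1.0, 1.0])

def fisher(corr):
    sigma = np.array([[1.0, corr], [corr, 1.0]])
    return float(fprime @ np.linalg.solve(sigma, fprime))

print(fisher(0.0))    # independent noise: baseline information
print(fisher(0.8))    # shared noise aligned with the signal: information drops
print(fisher(-0.8))   # shared noise orthogonal to the signal: information rises
```

The sign of the correlation relative to the signal direction, not the correlation per se, determines whether the population code degrades or improves; this is the "fine structure" the review highlights.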
6.
Annu Rev Neurosci; 44: 275-293, 2021 Jul 8.
Article in English | MEDLINE | ID: mdl-33730512

ABSTRACT

The dense reconstruction of neuronal wiring diagrams from volumetric electron microscopy data has the potential to generate fundamentally new insights into mechanisms of information processing and storage in neuronal circuits. Zebrafish provide unique opportunities for dynamical connectomics approaches that combine reconstructions of wiring diagrams with measurements of neuronal population activity and behavior. Such approaches have the power to reveal higher-order structure in wiring diagrams that cannot be detected by sparse sampling of connectivity and that is essential for neuronal computations. In the brain stem, recurrently connected neuronal modules were identified that can account for slow, low-dimensional dynamics in an integrator circuit. In the spinal cord, connectivity specifies functional differences between premotor interneurons. In the olfactory bulb, tuning-dependent connectivity implements a whitening transformation that is based on the selective suppression of responses to overrepresented stimulus features. These findings illustrate the potential of dynamical connectomics in zebrafish to analyze the circuit mechanisms underlying higher-order neuronal computations.


Subject(s)
Nerve Net; Zebrafish; Animals; Interneurons; Neurons; Olfactory Bulb
7.
Annu Rev Neurosci; 43: 249-275, 2020 Jul 8.
Article in English | MEDLINE | ID: mdl-32640928

ABSTRACT

Significant experimental, computational, and theoretical work has identified rich structure within the coordinated activity of interconnected neural populations. An emerging challenge now is to uncover the nature of the associated computations, how they are implemented, and what role they play in driving behavior. We term this computation through neural population dynamics. If successful, this framework will reveal general motifs of neural population activity and quantitatively describe how neural population dynamics implement computations necessary for driving goal-directed behavior. Here, we start with a mathematical primer on dynamical systems theory and analytical tools necessary to apply this perspective to experimental data. Next, we highlight some recent discoveries resulting from successful application of dynamical systems. We focus on studies spanning motor control, timing, decision-making, and working memory. Finally, we briefly discuss promising recent lines of investigation and future directions for the computation through neural population dynamics framework.


Asunto(s)
Encéfalo/fisiología , Biología Computacional , Aprendizaje Profundo , Red Nerviosa/fisiología , Animales , Biología Computacional/métodos , Humanos , Neuronas/fisiología , Dinámica Poblacional
8.
Mol Cell; 75(4): 769-780.e4, 2019 Aug 22.
Article in English | MEDLINE | ID: mdl-31442423

ABSTRACT

The ability to process and store information in living cells is essential for developing next-generation therapeutics and studying biology in situ. However, existing strategies have limited recording capacity and are challenging to scale. To overcome these limitations, we developed DOMINO, a robust and scalable platform for encoding logic and memory in bacterial and eukaryotic cells. Using an efficient single-nucleotide-resolution Read-Write head for DNA manipulation, DOMINO converts the living cells' DNA into an addressable, readable, and writable medium for computation and storage. DOMINO operators enable analog and digital molecular recording for long-term monitoring of signaling dynamics and cellular events. Furthermore, multiple operators can be layered and interconnected to encode order-independent, sequential, and temporal logic, allowing recording and control over the combination, order, and timing of molecular events in cells. We envision that DOMINO will lay the foundation for building robust and sophisticated computation-and-memory gene circuits for numerous biotechnological and biomedical applications.


Subject(s)
Computers, Molecular; DNA; DNA/chemistry; DNA/metabolism; HEK293 Cells; Humans
9.
Proc Natl Acad Sci U S A; 121(38): e2409160121, 2024 Sep 17.
Article in English | MEDLINE | ID: mdl-39264740

ABSTRACT

Animals are born with extensive innate behavioral capabilities, which arise from neural circuits encoded in the genome. However, the information capacity of the genome is orders of magnitude smaller than that needed to specify the connectivity of an arbitrary brain circuit, indicating that the rules encoding circuit formation must fit through a "genomic bottleneck" as they pass from one generation to the next. Here, we formulate the problem of innate behavioral capacity in the context of artificial neural networks in terms of lossy compression of the weight matrix. We find that several standard network architectures can be compressed by several orders of magnitude, yielding pretraining performance that can approach that of the fully trained network. Interestingly, for complex but not for simple test problems, the genomic bottleneck algorithm also captures essential features of the circuit, leading to enhanced transfer learning to novel tasks and datasets. Our results suggest that compressing a neural circuit through the genomic bottleneck serves as a regularizer, enabling evolution to select simple circuits that can be readily adapted to important real-world tasks. The genomic bottleneck also suggests how innate priors can complement conventional approaches to learning in designing algorithms for AI.


Subject(s)
Algorithms; Neural Networks, Computer; Animals; Genomics/methods; Genome; Humans
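The paper's genomic-bottleneck algorithm is its own construction; purely as a loose illustration of the underlying idea (a weight matrix whose essential structure survives lossy compression through a small code), here is a hypothetical rank-k SVD sketch in Python. The sizes, noise level, and the `compress` helper are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in "trained" weight matrix: low-rank structure plus weak noise.
n, true_rank = 64, 4
W = rng.normal(size=(n, true_rank)) @ rng.normal(size=(true_rank, n))
W += 0.01 * rng.normal(size=(n, n))

def compress(W, k):
    """Lossy compression: keep only the top-k singular components."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

W_hat = compress(W, 4)

# Rank-4 storage needs roughly 2*n*k numbers instead of n*n, yet the
# reconstruction error stays small because the structure is low-rank.
rel_err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
print(f"relative reconstruction error: {rel_err:.4f}")
```

The paper's point is stronger than this sketch (the compressed rule also transfers to novel tasks), but the sketch shows the sense in which orders-of-magnitude compression need not destroy a network's weights when they carry exploitable structure.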
10.
Proc Natl Acad Sci U S A; 121(12): e2314995121, 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38470918

ABSTRACT

Collective properties of complex systems composed of many interacting components such as neurons in our brain can be modeled by artificial networks based on disordered systems. We show that a disordered neural network of superconducting loops with Josephson junctions can exhibit computational properties like categorization and associative memory in the time evolution of its state in response to information from external excitations. Superconducting loops can trap multiples of fluxons in many discrete memory configurations defined by the local free energy minima in the configuration space of all possible states. A memory state can be updated by exciting the Josephson junctions to fire or allow the movement of fluxons through the network as the current through them surpasses their critical current thresholds. Simulations performed with a lumped element circuit model of a 4-loop network show that information written through excitations is translated into stable states of trapped flux and their time evolution. Experimental implementation on a high-Tc superconductor YBCO-based 4-loop network shows dynamically stable flux flow in each pathway characterized by the correlations between junction firing statistics. Neural network behavior is observed as energy barriers separating state categories in simulations in response to multiple excitations, and experimentally as junction responses characterizing different flux flow patterns in the network. The state categories that produce these patterns have different temporal stabilities relative to each other and the excitations. This provides strong evidence for time-dependent (short-to-long-term) memories that depend on the geometrical and junction parameters of the loops, as described with a network model.

11.
Proc Natl Acad Sci U S A; 121(18): e2312992121, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38648479

ABSTRACT

Cortical neurons exhibit highly variable responses over trials and time. Theoretical work posits that this variability potentially arises from the chaotic network dynamics of recurrently connected neurons. Here, we demonstrate that chaotic neural dynamics, formed through synaptic learning, allow networks to perform sensory cue integration in a sampling-based implementation. We show that the emergent chaotic dynamics provide neural substrates for generating samples not only of a static variable but also of a dynamical trajectory, where generic recurrent networks acquire these abilities with a biologically plausible learning rule through trial and error. Furthermore, the networks generalize their experience with stimulus-evoked samples to inference when part or all of the sensory information is missing, which suggests a computational role of spontaneous activity as a representation of the priors, as well as a tractable biological computation for marginal distributions. These findings suggest that chaotic neural dynamics may serve the brain as a Bayesian generative model.


Subject(s)
Models, Neurological; Neurons; Neurons/physiology; Bayes Theorem; Nerve Net/physiology; Nonlinear Dynamics; Humans; Learning/physiology; Animals; Brain/physiology
12.
Proc Natl Acad Sci U S A; 121(3): e2311885121, 2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38198531

ABSTRACT

The brain is composed of complex networks of interacting neurons that express considerable heterogeneity in their physiology and spiking characteristics. How does this neural heterogeneity influence macroscopic neural dynamics, and how might it contribute to neural computation? In this work, we use a mean-field model to investigate computation in heterogeneous neural networks, by studying how the heterogeneity of cell spiking thresholds affects three key computational functions of a neural population: the gating, encoding, and decoding of neural signals. Our results suggest that heterogeneity serves different computational functions in different cell types. In inhibitory interneurons, varying the degree of spike threshold heterogeneity allows them to gate the propagation of neural signals in a reciprocally coupled excitatory population. Whereas homogeneous interneurons impose synchronized dynamics that narrow the dynamic repertoire of the excitatory neurons, heterogeneous interneurons act as an inhibitory offset while preserving excitatory neuron function. Spike threshold heterogeneity also controls the entrainment properties of neural networks to periodic input, thus affecting the temporal gating of synaptic inputs. Among excitatory neurons, heterogeneity increases the dimensionality of neural dynamics, improving the network's capacity to perform decoding tasks. Conversely, homogeneous networks suffer in their capacity for function generation, but excel at encoding signals via multistable dynamic regimes. Drawing from these findings, we propose intra-cell-type heterogeneity as a mechanism for sculpting the computational properties of local circuits of excitatory and inhibitory spiking neurons, permitting the same canonical microcircuit to be tuned for diverse computational tasks.


Subject(s)
Interneurons; Neurons; Brain; Neural Networks, Computer; Reproduction
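The paper works with a mean-field model of spiking networks; as a much cruder, hypothetical illustration of one of its claims (heterogeneous spike thresholds turn an all-or-none population response into a graded code), here is a threshold-unit sketch in Python, with all numbers invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Population of simple threshold units driven by a shared ramp input.
# With identical thresholds the summed output is an all-or-none step;
# spreading the thresholds turns it into a graded population code.
drive = np.linspace(0.0, 1.0, 101)            # common input, 0 -> 1

def population_rate(thresholds):
    # fraction of units above threshold at each input level
    return (drive[:, None] > thresholds[None, :]).mean(axis=1)

homog = np.full(50, 0.5)                       # identical thresholds
heterog = rng.uniform(0.1, 0.9, size=50)       # heterogeneous thresholds

r_hom = population_rate(homog)
r_het = population_rate(heterog)

# The homogeneous population resolves only 2 output levels (0 or 1);
# the heterogeneous one resolves many distinct input levels.
print(len(np.unique(r_hom)), len(np.unique(r_het)))
```

This toy omits dynamics, coupling, and inhibition entirely; it only shows why a spread of thresholds increases the dimensionality of what the population can report, the intuition behind the paper's decoding result.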
13.
Proc Natl Acad Sci U S A; 120(14): e2220270120, 2023 Apr 4.
Article in English | MEDLINE | ID: mdl-36972429

ABSTRACT

Control of carbon dioxide and water vapor exchange between a leaf's interior and the surrounding air is accomplished by variations in the turgor pressures in the small epidermal and guard cells that cover the leaf's surface. These pressures respond to changes in light intensity and wavelength, temperature, CO2 concentration, and air humidity. The dynamical equations that describe such processes are formally identical to those that define computation in a two-layer, adaptive, cellular nonlinear network. This exact identification suggests that leaf gas-exchange processes can be understood as analog computation and that exploiting the output of two-layer, adaptive, cellular nonlinear networks might provide new tools in applied plant research.


Subject(s)
Plant Leaves; Plant Stomata; Light; Pressure; Carbon Dioxide
14.
Proc Natl Acad Sci U S A; 120(3): e2201699120, 2023 Jan 17.
Article in English | MEDLINE | ID: mdl-36630454

ABSTRACT

Neurons are characterized by elaborate tree-like dendritic structures that support local computations by integrating multiple inputs from upstream presynaptic neurons. It is less clear whether simple neurons, consisting of a few or even a single neurite, may perform local computations as well. To address this question, we focused on the compact neural network of Caenorhabditis elegans animals for which the full wiring diagram is available, including the coordinates of individual synapses. We find that the positions of the chemical synapses along the neurites are not randomly distributed nor can they be explained by anatomical constraints. Instead, synapses tend to form clusters, an organization that supports local compartmentalized computations. In mutually synapsing neurons, connections of opposite polarity cluster separately, suggesting that positive and negative feedback dynamics may be implemented in discrete compartmentalized regions along neurites. In triple-neuron circuits, the nonrandom synaptic organization may facilitate local functional roles, such as signal integration and coordinated activation of functionally related downstream neurons. These clustered synaptic topologies emerge as a guiding principle in the network, presumably to facilitate distinct parallel functions along a single neurite, which effectively increase the computational capacity of the neural network.


Subject(s)
Caenorhabditis elegans; Neurons; Animals; Caenorhabditis elegans/physiology; Neurons/physiology; Synapses/physiology; Neurites; Neural Networks, Computer
15.
Proc Natl Acad Sci U S A; 120(27): e2303168120, 2023 Jul 4.
Article in English | MEDLINE | ID: mdl-37339185

ABSTRACT

We briefly review the majorization-minimization (MM) principle and elaborate on the closely related notion of proximal distance algorithms, a generic approach for solving constrained optimization problems via quadratic penalties. We illustrate how the MM and proximal distance principles apply to a variety of problems from statistics, finance, and nonlinear optimization. Drawing from our selected examples, we also sketch a few ideas pertinent to the acceleration of MM algorithms: a) structuring updates around efficient matrix decompositions, b) path following in proximal distance iteration, and c) cubic majorization and its connections to trust region methods. These ideas are put to the test on several numerical examples, but for the sake of brevity, we omit detailed comparisons to competing methods. The current article, which is a mix of review and current contributions, celebrates the MM principle as a powerful framework for designing optimization algorithms and reinterpreting existing ones.
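As a self-contained illustration of the MM principle (a textbook example, not one of the article's case studies): to minimize f(x) = sum_i |x - a_i|, majorize each |x - a_i| at the current iterate x_t by the quadratic (x - a_i)^2 / (2|x_t - a_i|) + |x_t - a_i| / 2, which touches f at x_t and lies above it elsewhere; minimizing the surrogate yields a weighted-mean update that drives f downhill at every step. A Python sketch:

```python
import numpy as np

# MM for the 1-D median: minimize f(x) = sum_i |x - a_i|.
# Each absolute value is majorized by a quadratic anchored at the
# current iterate, so each MM step is a simple weighted mean.
def mm_median(a, iters=200, eps=1e-12):
    x = a.mean()                          # any starting point works
    for _ in range(iters):
        w = 1.0 / (np.abs(x - a) + eps)   # eps guards division by zero
        x = np.sum(w * a) / np.sum(w)     # minimizer of the surrogate
    return x

a = np.array([1.0, 2.0, 3.0, 10.0, 11.0])
print(mm_median(a))   # converges to the sample median, 3.0
```

The descent property is automatic: the surrogate dominates f and agrees with it at x_t, so minimizing the surrogate can never increase f. The same template (majorize, then minimize) underlies the proximal distance algorithms the article reviews.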

16.
Proc Natl Acad Sci U S A; 120(25): e2220022120, 2023 Jun 20.
Article in English | MEDLINE | ID: mdl-37307461

ABSTRACT

In the mid-1930s, the English mathematician and logician Alan Turing invented an imaginary machine which could emulate the process of manipulating finite symbolic configurations by human computers. His machine launched the field of computer science and provided a foundation for the modern-day programmable computer. A decade later, building on Turing's machine, the American-Hungarian mathematician John von Neumann invented an imaginary self-reproducing machine capable of open-ended evolution. Through his machine, von Neumann answered one of the deepest questions in Biology: Why is it that all living organisms carry a self-description in the form of DNA? The story behind how two pioneers of computer science stumbled on the secret of life many years before the discovery of the DNA double helix is not well known, not even to biologists, and you will not find it in biology textbooks. Yet, the story is just as relevant today as it was eighty years ago: Turing and von Neumann left a blueprint for studying biological systems as if they were computing machines. This approach may hold the key to answering many remaining questions in Biology and could even lead to advances in computer science.


Subject(s)
Gait; Health Personnel; Humans
17.
Proc Natl Acad Sci U S A; 120(37): e2217330120, 2023 Sep 12.
Article in English | MEDLINE | ID: mdl-37669382

ABSTRACT

DNA is an incredibly dense storage medium for digital data. However, computing on the stored information is expensive and slow, requiring rounds of sequencing, in silico computation, and DNA synthesis. Prior work on accessing and modifying data using DNA hybridization or enzymatic reactions had limited computation capabilities. Inspired by the computational power of "DNA strand displacement," we augment DNA storage with "in-memory" molecular computation using strand displacement reactions to algorithmically modify data in a parallel manner. We show programs for binary counting and Turing universal cellular automaton Rule 110, the latter of which is, in principle, capable of implementing any computer algorithm. Information is stored in the nicks of DNA, and a secondary sequence-level encoding allows high-throughput sequencing-based readout. We conducted multiple rounds of computation on 4-bit data registers, as well as random access of data (selective access and erasure). We demonstrate that large strand displacement cascades with 244 distinct strand exchanges (sequential and in parallel) can use naturally occurring DNA sequence from M13 bacteriophage without stringent sequence design, which has the potential to improve the scale of computation and decrease cost. Our work merges DNA storage and DNA computing, setting the foundation of entirely molecular algorithms for parallel manipulation of digital information preserved in DNA.


Subject(s)
Computers, Molecular; DNA; DNA Replication; Algorithms; Bacteriophage M13
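Rule 110, mentioned in the abstract, is a standard elementary cellular automaton (shown Turing complete by Matthew Cook): each cell's next state is a fixed Boolean function of its left neighbor, itself, and its right neighbor, and the rule number 110 (binary 01101110) lists the outputs for the eight neighborhoods 111 down to 000. A plain-software sketch of the update rule, independent of the paper's strand-displacement implementation:

```python
# Elementary cellular automaton Rule 110 with periodic boundaries.
# The rule number's bit k gives the next state for neighborhood value k.
RULE = 110

def step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right
        out.append((RULE >> neighborhood) & 1)
    return out

# Run a few generations from a single live cell.
cells = [0] * 15
cells[7] = 1
for _ in range(5):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

In the paper's scheme this update is carried out chemically, with state held in DNA nicks rather than a Python list; the sketch only fixes intuition for what "Rule 110" computes.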
18.
Proc Natl Acad Sci U S A; 120(32): e2300558120, 2023 Aug 8.
Article in English | MEDLINE | ID: mdl-37523562

ABSTRACT

While sensory representations in the brain depend on context, it remains unclear how such modulations are implemented at the biophysical level, and how processing layers further in the hierarchy can extract useful features for each possible contextual state. Here, we demonstrate that dendritic N-Methyl-D-Aspartate spikes can, within physiological constraints, implement contextual modulation of feedforward processing. Such neuron-specific modulations exploit prior knowledge, encoded in stable feedforward weights, to achieve transfer learning across contexts. In a network of biophysically realistic neuron models with context-independent feedforward weights, we show that modulatory inputs to dendritic branches can solve linearly nonseparable learning problems with a Hebbian, error-modulated learning rule. We also demonstrate that local prediction of whether representations originate either from different inputs, or from different contextual modulations of the same input, results in representation learning of hierarchical feedforward weights across processing layers that accommodate a multitude of contexts.


Subject(s)
Models, Neurological; N-Methylaspartate; Learning/physiology; Neurons/physiology; Perception
19.
Proc Natl Acad Sci U S A; 120(49): e2311014120, 2023 Dec 5.
Article in English | MEDLINE | ID: mdl-38039273

ABSTRACT

For quantum computing (QC) to emerge as a practically indispensable computational tool, there is a need for quantum protocols with end-to-end practical applications; in this instance, fluid dynamics. We debut here a high-performance quantum simulator which we term QFlowS (Quantum Flow Simulator), designed for fluid flow simulations using QC. Solving nonlinear flows by QC generally proceeds by solving an equivalent infinite dimensional linear system as a result of linear embedding. Thus, we first choose to simulate two well-known flows using QFlowS and demonstrate a previously unseen, full gate-level implementation of a hybrid and high precision Quantum Linear Systems Algorithms (QLSA) for simulating such flows at low Reynolds numbers. The utility of this simulator is demonstrated by extracting error estimates and power law scaling that relates [Formula: see text] (a parameter crucial to Hamiltonian simulations) to the condition number [Formula: see text] of the simulation matrix and allows the prediction of an optimal scaling parameter for accurate eigenvalue estimation. Further, we include two speedup-preserving algorithms for a) the functional form or sparse quantum state preparation and b) in situ quantum postprocessing tool for computing nonlinear functions of the velocity field. We choose the viscous dissipation rate as an example, for which the end-to-end complexity is shown to be [Formula: see text], where [Formula: see text] is the size of the linear system of equations, [Formula: see text] is the solution error, and [Formula: see text] is the error in postprocessing. This work suggests a path toward quantum simulation of fluid flows and highlights the special considerations needed at the gate-level implementation of QC.

20.
J Neurosci; 44(24), 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38670806

ABSTRACT

Visual crowding refers to the phenomenon where a target object that is easily identifiable in isolation becomes difficult to recognize when surrounded by other stimuli (distractors). Many psychophysical studies have investigated this phenomenon and proposed alternative models for the underlying mechanisms. One prominent hypothesis, albeit with mixed psychophysical support, posits that crowding arises from the loss of information due to pooled encoding of features from target and distractor stimuli in the early stages of cortical visual processing. However, neurophysiological studies have not rigorously tested this hypothesis. We studied the responses of single neurons in macaque (one male, one female) area V4, an intermediate stage of the object-processing pathway, to parametrically designed crowded displays and texture statistics-matched metameric counterparts. Our investigations reveal striking parallels between how crowding parameters (number, distance, and position of distractors) influence human psychophysical performance and V4 shape selectivity. Importantly, we also found that enhancing the salience of a target stimulus could alleviate crowding effects in highly cluttered scenes, and that this alleviation could be temporally protracted, reflecting a dynamical process. Thus, a pooled encoding of nearby stimuli cannot explain the observed responses, and we propose an alternative model where V4 neurons preferentially encode salient stimuli in crowded displays. Overall, we conclude that the magnitude of crowding effects is determined not just by the number of distractors and target-distractor separation but also by the relative salience of targets versus distractors based on their feature attributes: the similarity of distractors and the contrast between target and distractor stimuli.


Subject(s)
Macaca mulatta; Neurons; Photic Stimulation; Visual Cortex; Animals; Male; Female; Visual Cortex/physiology; Photic Stimulation/methods; Neurons/physiology; Humans; Pattern Recognition, Visual/physiology; Psychophysics