Results 1 - 20 of 317
1.
Cell ; 182(6): 1372-1376, 2020 09 17.
Article in English | MEDLINE | ID: mdl-32946777

ABSTRACT

Large scientific projects in genomics and astronomy are influential not because they answer any single question but because they enable investigation of continuously arising new questions from the same data-rich sources. Advances in automated mapping of the brain's synaptic connections (connectomics) suggest that the complicated circuits underlying brain function are ripe for analysis. We discuss benefits of mapping a mouse brain at the level of synapses.


Subject(s)
Brain/physiology , Connectome/methods , Nerve Net/physiology , Neurons/physiology , Synapses/physiology , Animals , Mice
2.
Cell ; 164(1-2): 13-15, 2016 Jan 14.
Article in English | MEDLINE | ID: mdl-26771481

ABSTRACT

To understand the origins of spatial navigational signals, Acharya et al. record the activity of hippocampal neurons in rats running in open two-dimensional environments in both the real world and in virtual reality. They find that a subset of hippocampal neurons have directional tuning that persists in virtual reality, where vestibular cues are absent.


Subject(s)
Appetitive Behavior , Hippocampus/physiology , Animals , Humans , Male
3.
Nature ; 587(7834): 432-436, 2020 11.
Article in English | MEDLINE | ID: mdl-33029013

ABSTRACT

Perceptual sensitivity varies from moment to moment. One potential source of this variability is spontaneous fluctuations in cortical activity that can travel as waves [1]. Spontaneous travelling waves have been reported during anaesthesia [2-7], but it is not known whether they have a role during waking perception. Here, using newly developed analytic techniques to characterize the moment-to-moment dynamics of noisy multielectrode data, we identify spontaneous waves of activity in the extrastriate visual cortex of awake, behaving marmosets (Callithrix jacchus). In monkeys trained to detect faint visual targets, the timing and position of spontaneous travelling waves before target onset predicted the magnitude of target-evoked activity and the likelihood of target detection. By contrast, spatially disorganized fluctuations of neural activity were much less predictive. These results reveal an important role for spontaneous travelling waves in sensory processing through the modulation of neural and perceptual sensitivity.


Subject(s)
Brain Waves , Visual Cortex/physiology , Visual Perception/physiology , Wakefulness/physiology , Action Potentials , Animals , Behavior, Animal , Callithrix/physiology , Electrodes , Evoked Potentials, Visual , Female , Male , Photic Stimulation , Probability , Retina/physiology
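A common first step in detecting a travelling wave on multielectrode data is to look for a consistent spatial gradient of analytic-signal phase across the array. The toy below (a synthetic one-dimensional array of our own invention, not the authors' newly developed techniques) illustrates the idea:

```python
import numpy as np
from scipy.signal import hilbert

fs, f0, speed = 1000.0, 10.0, 0.5        # sample rate (Hz), wave frequency (Hz), true speed (m/s)
spacing = 0.001                           # 1 mm contact spacing on a linear array
t = np.arange(0, 2.0, 1 / fs)
contacts = np.arange(16) * spacing

rng = np.random.default_rng(3)
# Synthetic local field potential: a plane wave sweeping along the array, plus noise
lfp = np.sin(2 * np.pi * f0 * (t[None, :] - contacts[:, None] / speed))
lfp += 0.05 * rng.standard_normal(lfp.shape)

phase = np.angle(hilbert(lfp, axis=1))    # instantaneous phase at each contact
# Wrapped contact-to-contact phase step, averaged over contacts and time
dphi = np.angle(np.exp(1j * np.diff(phase, axis=0))).mean()
est_speed = 2 * np.pi * f0 * spacing / abs(dphi)
print(f"estimated propagation speed ~ {est_speed:.2f} m/s")
```

A steady nonzero phase gradient distinguishes a travelling wave from the spatially disorganized fluctuations the abstract contrasts it with, for which the contact-to-contact phase steps would not align.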
4.
Proc Natl Acad Sci U S A ; 120(39): e2300445120, 2023 09 26.
Article in English | MEDLINE | ID: mdl-37738297

ABSTRACT

Animals move smoothly and reliably in unpredictable environments. Models of sensorimotor control, drawing on control theory, have assumed that sensory information from the environment leads to actions, which then act back on the environment, creating a single, unidirectional perception-action loop. However, the sensorimotor loop contains internal delays in sensory and motor pathways, which can lead to unstable control. We show here that these delays can be compensated by internal feedback signals that flow backward, from motor toward sensory areas. This internal feedback is ubiquitous in neural sensorimotor systems, and we show how internal feedback compensates internal delays. This is accomplished by filtering out self-generated and other predictable changes so that unpredicted, actionable information can be rapidly transmitted toward action by the fastest components, effectively compressing the sensory input to more efficiently use feedforward pathways: Tracts of fast, giant neurons necessarily convey less accurate signals than tracts with many smaller neurons, but they are crucial for fast and accurate behavior. We use a mathematically tractable control model to show that internal feedback has an indispensable role in achieving state estimation, localization of function (how different parts of the cortex control different parts of the body), and attention, all of which are crucial for effective sensorimotor control. This control model can explain anatomical, physiological, and behavioral observations, including motor signals in the visual cortex, heterogeneous kinetics of sensory receptors, and the presence of giant cells in the cortex of humans as well as internal feedback patterns and unexplained heterogeneity in neural systems.


Subject(s)
Behavior Observation Techniques , Sensory Receptor Cells , Animals , Humans , Feedback , Efferent Pathways , Perception
5.
PLoS Comput Biol ; 20(4): e1011800, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38656994

ABSTRACT

Biochemical signaling pathways in living cells are often highly organized into spatially segregated volumes, membranes, scaffolds, subcellular compartments, and organelles comprising small numbers of interacting molecules. At this level of granularity stochastic behavior dominates, well-mixed continuum approximations based on concentrations break down and a particle-based approach is more accurate and more efficient. We describe and validate a new version of the open-source MCell simulation program (MCell4), which supports generalized 3D Monte Carlo modeling of diffusion and chemical reaction of discrete molecules and macromolecular complexes in solution, on surfaces representing membranes, and combinations thereof. The main improvements in MCell4 compared to the previous versions, MCell3 and MCell3-R, include a Python interface and native BioNetGen reaction language (BNGL) support. MCell4's Python interface opens up completely new possibilities for interfacing with external simulators to allow creation of sophisticated event-driven multiscale/multiphysics simulations. The native BNGL support, implemented through a new open-source library libBNG (also introduced in this paper), provides the capability to run a given BNGL model spatially resolved in MCell4 and, with appropriate simplifying assumptions, also in the BioNetGen simulation environment, greatly accelerating and simplifying model validation and comparison.


Subject(s)
Monte Carlo Method , Software , Diffusion , Computer Simulation , Models, Biological , Programming Languages , Computational Biology/methods , Signal Transduction/physiology
6.
Proc Natl Acad Sci U S A ; 119(24): e2117234119, 2022 06 14.
Article in English | MEDLINE | ID: mdl-35679342

ABSTRACT

Investigating neural interactions is essential to understanding the neural basis of behavior. Many statistical methods have been used for analyzing neural activity, but estimating the direction of network interactions correctly and efficiently remains a difficult problem. Here, we derive dynamical differential covariance (DDC), a method based on dynamical network models that detects directional interactions with low bias and high noise tolerance under nonstationary conditions. Moreover, DDC scales well with the number of recording sites and the computation required is comparable to that needed for covariance. DDC was validated and compared favorably with other methods on networks with false positive motifs and multiscale neural simulations where the ground-truth connectivity was known. When applied to recordings of resting-state functional magnetic resonance imaging (rs-fMRI), DDC consistently detected regional interactions with strong structural connectivity in over 1,000 individual subjects obtained by diffusion MRI (dMRI). DDC is a promising family of methods for estimating connectivity that can be generalized to a wide range of dynamical models and recording techniques and to other applications where system identification is needed.


Subject(s)
Brain , Connectome , Nerve Net , Brain/physiology , Connectome/methods , Diffusion Magnetic Resonance Imaging/methods , Humans , Nerve Net/physiology , Neural Pathways
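The linear intuition behind DDC fits in a few lines: for a noise-driven linear system dx/dt = W x + noise, the directed connectivity W is recoverable as the differential covariance cov(dx/dt, x) times the inverse covariance of x. The toy below (our illustration of that intuition, not the authors' implementation) simulates such a system and recovers W:

```python
import numpy as np

rng = np.random.default_rng(0)
# Ground-truth directed connectivity (negative diagonal keeps the system stable)
W_true = np.array([[-1.0, 0.5, 0.0],
                   [0.0, -1.0, 0.4],
                   [0.3, 0.0, -1.0]])

dt, n_steps = 0.01, 200_000
x = np.zeros(3)
X = np.empty((n_steps, 3))
for i in range(n_steps):            # Euler-Maruyama simulation of dx/dt = Wx + noise
    x = x + dt * (W_true @ x) + np.sqrt(dt) * rng.standard_normal(3)
    X[i] = x

dX = (X[1:] - X[:-1]) / dt          # forward-difference derivative estimate
Xc = X[:-1] - X[:-1].mean(axis=0)
dXc = dX - dX.mean(axis=0)
C = Xc.T @ Xc / (len(Xc) - 1)       # ordinary covariance of the states
D = dXc.T @ Xc / (len(Xc) - 1)      # differential covariance cov(dx/dt, x)
W_est = D @ np.linalg.inv(C)        # DDC-style directed estimate
print(np.round(W_est, 1))
```

Note that the ordinary covariance C alone is symmetric and therefore blind to direction; it is the pairing with the derivative that makes the estimate directional, which is the property the abstract emphasizes.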
7.
Neural Comput ; 36(5): 781-802, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38658027

ABSTRACT

Variation in the strength of synapses can be quantified by measuring the anatomical properties of synapses. Quantifying precision of synaptic plasticity is fundamental to understanding information storage and retrieval in neural circuits. Synapses from the same axon onto the same dendrite have a common history of coactivation, making them ideal candidates for determining the precision of synaptic plasticity based on the similarity of their physical dimensions. Here, the precision and amount of information stored in synapse dimensions were quantified with Shannon information theory, expanding prior analysis that used signal detection theory (Bartol et al., 2015). The two methods were compared using dendritic spine head volumes in the middle of the stratum radiatum of hippocampal area CA1 as well-defined measures of synaptic strength. Information theory delineated the number of distinguishable synaptic strengths based on nonoverlapping bins of dendritic spine head volumes. Shannon entropy was applied to measure synaptic information storage capacity (SISC) and resulted in a lower bound of 4.1 bits and upper bound of 4.59 bits of information based on 24 distinguishable sizes. We further compared the distribution of distinguishable sizes and a uniform distribution using Kullback-Leibler divergence and discovered that there was a nearly uniform distribution of spine head volumes across the sizes, suggesting optimal use of the distinguishable values. Thus, SISC provides a new analytical measure that can be generalized to probe synaptic strengths and capacity for plasticity in different brain regions of different species and among animals raised in different conditions or during learning. How brain diseases and disorders affect the precision of synaptic plasticity can also be probed.


Subject(s)
Information Theory , Neuronal Plasticity , Synapses , Animals , Synapses/physiology , Neuronal Plasticity/physiology , Dendritic Spines/physiology , CA1 Region, Hippocampal/physiology , Models, Neurological , Information Storage and Retrieval , Male , Hippocampus/physiology , Rats
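The entropy bookkeeping described above can be illustrated with a toy calculation. The data and the naive equal-width binning below are our own stand-ins; the paper instead derives non-overlapping bins from measured CA1 spine head volumes:

```python
import numpy as np

rng = np.random.default_rng(1)
# Fake spine head volumes; the paper uses reconstructed CA1 measurements
volumes = rng.lognormal(mean=-2.0, sigma=0.5, size=5000)

n_bins = 24                                   # number of distinguishable strength levels
counts, _ = np.histogram(np.log(volumes), bins=n_bins)
p = counts / counts.sum()
nz = p > 0

# Shannon information storage capacity in bits, upper-bounded by log2(24) ~ 4.59
H = -np.sum(p[nz] * np.log2(p[nz]))

# Kullback-Leibler divergence from a uniform distribution over the same bins
kl = np.sum(p[nz] * np.log2(p[nz] * n_bins))
print(f"H = {H:.2f} bits, KL to uniform = {kl:.3f} bits")
```

With full support, KL(p || uniform) = log2(n_bins) - H, so a near-uniform distribution of spine sizes (small KL) is exactly what pushes the measured capacity toward the 4.59-bit upper bound quoted in the abstract.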
8.
PLoS Comput Biol ; 19(11): e1011618, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37983250

ABSTRACT

Animal models are used to understand principles of human biology. Within cognitive neuroscience, non-human primates are considered the premier model for studying decision-making behaviors in which direct manipulation experiments are still possible. Some prominent studies have brought to light major discrepancies between monkey and human cognition, highlighting problems with unverified extrapolation from monkey to human. Here, we use a parallel model system, artificial neural networks (ANNs), to investigate a well-established discrepancy identified between monkeys and humans with a working memory task, in which monkeys appear to use a recency-based strategy while humans use a target-selective strategy. We find that ANNs trained on the same task exhibit a progression of behavior from random behavior (untrained) to recency-like behavior (partially trained) and finally to selective behavior (further trained), suggesting monkeys and humans may occupy different points in the same overall learning progression. Surprisingly, what appears to be recency-like behavior in the ANN is in fact an emergent non-recency-based property of the organization of the neural network's state space during its development through training. We find that explicit encouragement of recency behavior during training has a dual effect, not only causing an accentuated recency-like behavior, but also speeding up the learning process altogether, resulting in an efficient shaping mechanism to achieve the optimal strategy. Our results suggest a new explanation for the discrepancy observed between monkeys and humans and reveal that what can appear to be a recency-based strategy in some cases may not be recency at all.


Subject(s)
Learning , Memory, Short-Term , Animals , Humans , Haplorhini , Cognition , Neural Networks, Computer
9.
Nature ; 562(7726): 236-239, 2018 10.
Article in English | MEDLINE | ID: mdl-30232456

ABSTRACT

Soaring birds often rely on ascending thermal plumes (thermals) in the atmosphere as they search for prey or migrate across large distances [1-4]. The landscape of convective currents is rugged and shifts on timescales of a few minutes as thermals constantly form, disintegrate or are transported away by the wind [5,6]. How soaring birds find and navigate thermals within this complex landscape is unknown. Reinforcement learning [7] provides an appropriate framework in which to identify an effective navigational strategy as a sequence of decisions made in response to environmental cues. Here we use reinforcement learning to train a glider in the field to navigate atmospheric thermals autonomously. We equipped a glider of two-metre wingspan with a flight controller that precisely controlled the bank angle and pitch, modulating these at intervals with the aim of gaining as much lift as possible. A navigational strategy was determined solely from the glider's pooled experiences, collected over several days in the field. The strategy relies on on-board methods to accurately estimate the local vertical wind accelerations and the roll-wise torques on the glider, which serve as navigational cues. We establish the validity of our learned flight policy through field experiments, numerical simulations and estimates of the noise in measurements caused by atmospheric turbulence. Our results highlight the role of vertical wind accelerations and roll-wise torques as effective mechanosensory cues for soaring birds and provide a navigational strategy that is directly applicable to the development of autonomous soaring vehicles.


Subject(s)
Air Movements , Atmosphere , Birds/physiology , Flight, Animal/physiology , Learning/physiology , Spatial Navigation/physiology , Temperature , Algorithms , Animals , Birds/anatomy & histology , Cues , Wings, Animal/anatomy & histology , Wings, Animal/physiology
10.
Proc Natl Acad Sci U S A ; 118(22)2021 06 01.
Article in English | MEDLINE | ID: mdl-34050009

ABSTRACT

Nervous systems sense, communicate, compute, and actuate movement using distributed components with severe trade-offs in speed, accuracy, sparsity, noise, and saturation. Nevertheless, brains achieve remarkably fast, accurate, and robust control performance due to a highly effective layered control architecture. Here, we introduce a driving task to study how a mountain biker mitigates the immediate disturbance of trail bumps and responds to changes in trail direction. We manipulated the time delays and accuracy of the control input from the wheel as a surrogate for manipulating the characteristics of neurons in the control loop. The observed speed-accuracy trade-offs motivated a theoretical framework consisting of two layers of control loops (a fast, but inaccurate, reflexive layer that corrects for bumps and a slow, but accurate, planning layer that computes the trajectory to follow), each with components having diverse speeds and accuracies within each physical level, such as nerve bundles containing axons with a wide range of sizes. Our model explains why the errors from two control loops are additive and shows how the errors in each control loop can be decomposed into the errors caused by the limited speeds and accuracies of the components. These results demonstrate that an appropriate diversity in the properties of neurons across layers helps to create "diversity-enabled sweet spots," so that both fast and accurate control is achieved using slow or inaccurate components.


Subject(s)
Models, Biological , Movement/physiology , Psychomotor Performance/physiology , Reaction Time/physiology , Adult , Humans , Male
11.
Nat Rev Neurosci ; 19(5): 255-268, 2018 05.
Article in English | MEDLINE | ID: mdl-29563572

ABSTRACT

Multichannel recording technologies have revealed travelling waves of neural activity in multiple sensory, motor and cognitive systems. These waves can be spontaneously generated by recurrent circuits or evoked by external stimuli. They travel along brain networks at multiple scales, transiently modulating spiking and excitability as they pass. Here, we review recent experimental findings that have found evidence for travelling waves at single-area (mesoscopic) and whole-brain (macroscopic) scales. We place these findings in the context of the current theoretical understanding of wave generation and propagation in recurrent networks. During the large low-frequency rhythms of sleep or the relatively desynchronized state of the awake cortex, travelling waves may serve a variety of functions, from long-term memory consolidation to processing of dynamic visual stimuli. We explore new avenues for experimental and computational understanding of the role of spatiotemporal activity patterns in the cortex.


Subject(s)
Brain Waves/physiology , Cerebral Cortex/physiology , Computer Simulation , Neural Pathways/physiology , Animals , Electroencephalography , Humans , Models, Neurological
12.
Neural Comput ; 35(3): 309-342, 2023 02 17.
Article in English | MEDLINE | ID: mdl-36746144

ABSTRACT

Large language models (LLMs) have been transformative. They are pretrained foundational models that are self-supervised and can be adapted with fine-tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and, more recently, LaMDA, both of them LLMs, can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been a wide range of reactions and debate on whether these LLMs understand what they are saying or exhibit signs of intelligence. This high variance is exhibited in three interviews with LLMs reaching wildly different conclusions. A new possibility was uncovered that could explain this divergence. What appears to be intelligence in LLMs may in fact be a mirror that reflects the intelligence of the interviewer, a remarkable twist that could be considered a reverse Turing test. If so, then by studying interviews, we may be learning more about the intelligence and beliefs of the interviewer than the intelligence of the LLMs. As LLMs become more capable, they may transform the way we interact with machines and how they interact with each other. Increasingly, LLMs are being coupled with sensorimotor devices. LLMs can talk the talk, but can they walk the walk? A road map for achieving artificial general autonomy is outlined with seven major improvements inspired by brain systems and how LLMs could in turn be used to uncover new insights into brain function.


Subject(s)
Artificial Intelligence , Brain , Humans , Learning , Language
13.
PLoS Comput Biol ; 18(5): e1010068, 2022 05.
Article in English | MEDLINE | ID: mdl-35533198

ABSTRACT

Chemical synapses exhibit a diverse array of internal mechanisms that affect the dynamics of transmission efficacy. Many of these processes, such as release of neurotransmitter and vesicle recycling, depend strongly on activity-dependent influx and accumulation of Ca2+. To model how each of these processes may affect the processing of information in neural circuits, and how their dysfunction may lead to disease states, requires a computationally efficient modelling framework, capable of generating accurate phenomenology without incurring a heavy computational cost per synapse. Constructing a phenomenologically realistic model requires the precise characterization of the timing and probability of neurotransmitter release. Difficulties arise in that functional forms of instantaneous release rate can be difficult to extract from noisy data without running many thousands of trials, and in biophysical synapses, facilitation of per-vesicle release probability is confounded by depletion. To overcome this, we obtained traces of free Ca2+ concentration in response to various action potential stimulus trains from a molecular MCell model of a hippocampal Schaffer collateral axon. Ca2+ sensors were placed at varying distance from a voltage-dependent calcium channel (VDCC) cluster, and Ca2+ was buffered by calbindin. Then, using the calcium traces to drive deterministic state vector models of synaptotagmin 1 and 7 (Syt-1/7), which respectively mediate synchronous and asynchronous release in excitatory hippocampal synapses, we obtained high-resolution profiles of instantaneous release rate, to which we applied functional fits. Synchronous vesicle release occurred predominantly within half a micron of the source of spike-evoked Ca2+ influx, while asynchronous release occurred more consistently at all distances. Both fast and slow mechanisms exhibited multi-exponential release rate curves, whose magnitudes decayed exponentially with distance from the Ca2+ source. Profile parameters facilitate on different time scales according to a single, general facilitation function. These functional descriptions lay the groundwork for efficient mesoscale modelling of vesicular release dynamics.


Subject(s)
Calcium , Synapses , Action Potentials/physiology , Neurotransmitter Agents , Synapses/physiology , Synaptic Transmission/physiology
14.
Proc Natl Acad Sci U S A ; 117(48): 30033-30038, 2020 12 01.
Article in English | MEDLINE | ID: mdl-31992643

ABSTRACT

Deep learning networks have been trained to recognize speech, caption photographs, and translate text between languages at high levels of performance. Although applications of deep learning networks to real-world problems have become ubiquitous, our understanding of why they are so effective is lacking. These empirical results should not be possible according to sample complexity in statistics and nonconvex optimization theory. However, paradoxes in the training and effectiveness of deep learning networks are being investigated and insights are being found in the geometry of high-dimensional spaces. A mathematical theory of deep learning would illuminate how they function, allow us to assess the strengths and weaknesses of different network architectures, and lead to major improvements. Deep learning has provided natural ways for humans to communicate with digital devices and is foundational for building artificial general intelligence. Deep learning was inspired by the architecture of the cerebral cortex and insights into autonomy and general intelligence may be found in other brain regions that are essential for planning and survival, but major breakthroughs will be needed to achieve these goals.

15.
Proc Natl Acad Sci U S A ; 117(47): 29872-29882, 2020 11 24.
Article in English | MEDLINE | ID: mdl-33154155

ABSTRACT

The prefrontal cortex encodes and stores numerous, often disparate, schemas and flexibly switches between them. Recent research on artificial neural networks trained by reinforcement learning has made it possible to model fundamental processes underlying schema encoding and storage. Yet how the brain is able to create new schemas while preserving and utilizing old schemas remains unclear. Here we propose a simple neural network framework that incorporates hierarchical gating to model the prefrontal cortex's ability to flexibly encode and use multiple disparate schemas. We show how gating naturally leads to transfer learning and robust memory savings. We then show how neuropsychological impairments observed in patients with prefrontal damage are mimicked by lesions of our network. Our architecture, which we call DynaMoE, provides a fundamental framework for how the prefrontal cortex may handle the abundance of schemas necessary to navigate the real world.


Subject(s)
Learning/physiology , Models, Neurological , Neural Networks, Computer , Prefrontal Cortex/physiology , Reinforcement, Psychology , Behavior Observation Techniques , Cognition Disorders/etiology , Cognition Disorders/physiopathology , Humans , Mental Disorders/etiology , Mental Disorders/physiopathology , Prefrontal Cortex/injuries
16.
Proc Natl Acad Sci U S A ; 117(26): 15200-15208, 2020 06 30.
Article in English | MEDLINE | ID: mdl-32527855

ABSTRACT

Do dopaminergic reward structures represent the expected utility of information similarly to a reward? Optimal experimental design models from Bayesian decision theory and statistics have proposed a theoretical framework for quantifying the expected value of information that might result from a query. In particular, this formulation quantifies the value of information before the answer to that query is known, in situations where payoffs are unknown and the goal is purely epistemic: That is, to increase knowledge about the state of the world. Whether and how such a theoretical quantity is represented in the brain is unknown. Here we use an event-related functional MRI (fMRI) task design to disentangle information expectation, information revelation and categorization outcome anticipation, and response-contingent reward processing in a visual probabilistic categorization task. We identify a neural signature corresponding to the expectation of information, involving the left lateral ventral striatum. Moreover, we show a temporal dissociation in the activation of different reward-related regions, including the nucleus accumbens, medial prefrontal cortex, and orbitofrontal cortex, during information expectation versus reward-related processing.


Subject(s)
Anticipation, Psychological/physiology , Motivation/physiology , Reward , Ventral Striatum/physiology , Adult , Humans , Magnetic Resonance Imaging , Male , Ventral Striatum/diagnostic imaging , Young Adult
17.
Proc Natl Acad Sci U S A ; 117(25): 14503-14511, 2020 06 23.
Article in English | MEDLINE | ID: mdl-32513712

ABSTRACT

The nanoscale co-organization of neurotransmitter receptors facing presynaptic release sites is a fundamental determinant of their coactivation and of synaptic physiology. At excitatory synapses, how endogenous AMPARs, NMDARs, and mGluRs are co-organized inside the synapse and their respective activation during glutamate release are still unclear. Combining single-molecule superresolution microscopy, electrophysiology, and modeling, we determined the average quantity of each glutamate receptor type, their nanoscale organization, and their respective activation. We observed that NMDARs form a unique cluster mainly at the center of the PSD, while AMPARs segregate in clusters surrounding the NMDARs. mGluR5 presents a different organization and is homogeneously dispersed at the synaptic surface. From these results, we build a model predicting the synaptic transmission properties of a unitary synapse, allowing a better understanding of synaptic physiology.


Subject(s)
Models, Neurological , Neurons/metabolism , Receptor, Metabotropic Glutamate 5/metabolism , Receptors, AMPA/metabolism , Receptors, N-Methyl-D-Aspartate/metabolism , Synaptic Transmission/physiology , Animals , Cells, Cultured , Embryo, Mammalian , Female , Glutamic Acid/metabolism , Hippocampus/cytology , Hippocampus/diagnostic imaging , Hippocampus/physiology , Intravital Microscopy , Neurons/ultrastructure , Patch-Clamp Techniques , Primary Cell Culture , Rats , Rats, Sprague-Dawley , Single Molecule Imaging
18.
Chaos ; 33(10)2023 Oct 01.
Article in English | MEDLINE | ID: mdl-37832517

ABSTRACT

Differential equations serve as models for many physical systems. But are these equations unique? We prove here that when a 3D system of ordinary differential equations for a dynamical system is transformed to the jerk or differential form, the jerk form is preserved in relation to a given variable and, therefore, the transformed system shares the time series of that given variable with the original untransformed system. Multiple algebraically different systems of ordinary differential equations can share the same jerk form. They may also share the same time series of the transformed variable depending on the parameters of the jerk form. Here, we studied 17 algebraically different Lorenz-like systems that share the same functional jerk form. There are groups of these systems that share the jerk parameters and, therefore, also have the same time series of the transformed variable.

19.
Chaos ; 33(12)2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38156987

ABSTRACT

Delay Differential Analysis (DDA) is a nonlinear method for analyzing time series based on principles from nonlinear dynamical systems. DDA is extended here to incorporate network aspects to improve the dynamical characterization of complex systems. To demonstrate its effectiveness, DDA with network capabilities was first applied to the well-known Rössler system under different parameter regimes and noise conditions. Network-motif DDA, based on cortical regions, was then applied to invasive intracranial electroencephalographic data from drug-resistant epilepsy patients undergoing presurgical monitoring. The directional network motifs between brain areas that emerge from this analysis change dramatically before, during, and after seizures. Neural systems provide a rich source of complex data, arising from varying internal states generated by network interactions.


Subject(s)
Brain , Seizures , Humans , Electrocorticography/methods , Nonlinear Dynamics , Electroencephalography/methods
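In simplified form, DDA fits the derivative of a recorded signal with a sparse polynomial in delayed copies of that signal and keeps the fitted coefficients plus the residual error as low-dimensional features. The sketch below is our simplification on a toy signal, not the authors' implementation or their network-motif extension:

```python
import numpy as np

def dda_features(x, dt, tau1, tau2):
    """Fit dx/dt ~ a1*x(t-tau1) + a2*x(t-tau2) + a3*x(t-tau1)^2 by least squares."""
    d = max(tau1, tau2)
    xd = np.gradient(x, dt)[d:-1]                 # derivative target
    x1 = x[d - tau1:-1 - tau1]                    # delayed copies of the signal
    x2 = x[d - tau2:-1 - tau2]
    A = np.column_stack([x1, x2, x1**2])          # sparse polynomial basis
    coef, *_ = np.linalg.lstsq(A, xd, rcond=None)
    rho = np.sqrt(np.mean((A @ coef - xd) ** 2))  # residual fitting error
    return np.append(coef, rho)                   # 3 coefficients + error

# Toy input: a noisy sinusoid stands in for an intracranial EEG channel
t = np.arange(0, 100, 0.01)
x = np.sin(t) + 0.01 * np.random.default_rng(2).standard_normal(t.size)
feats = dda_features(x, dt=0.01, tau1=10, tau2=25)
print(feats)
```

Tracking how such features change over sliding windows, and comparing them across channels, is the kind of per-region characterization that the network-motif version builds its directional motifs on.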
20.
PLoS Biol ; 17(2): e2006732, 2019 02.
Article in English | MEDLINE | ID: mdl-30768592

ABSTRACT

Whole-brain recordings give us a global perspective of the brain in action. In this study, we describe a method using light field microscopy to record near-whole brain calcium and voltage activity at high speed in behaving adult flies. We first obtained global activity maps for various stimuli and behaviors. Notably, we found that brain activity increased on a global scale when the fly walked but not when it groomed. This global increase with walking was particularly strong in dopamine neurons. Second, we extracted maps of spatially distinct sources of activity as well as their time series using principal component analysis and independent component analysis. The characteristic shapes in the maps matched the anatomy of subneuropil regions and, in some cases, a specific neuron type. Brain structures that responded to light and odor were consistent with previous reports, confirming the new technique's validity. We also observed previously uncharacterized behavior-related activity as well as patterns of spontaneous voltage activity.


Subject(s)
Behavior, Animal/physiology , Brain/anatomy & histology , Drosophila melanogaster/physiology , Imaging, Three-Dimensional , Photic Stimulation , Algorithms , Animals , Brain/physiology , Dopamine/metabolism , Electrophysiological Phenomena , Neurons/physiology , Neuropil Threads/metabolism , Principal Component Analysis , Time Factors , Walking