ABSTRACT
Despite music's omnipresence, the specific neural mechanisms responsible for perceiving and anticipating temporal patterns in music remain unknown. To study potential mechanisms for keeping time in rhythmic contexts, we train a biologically constrained RNN, with excitatory (E) and inhibitory (I) units, at seven different stimulus tempos (2-8 Hz) on a synchronization-continuation task, a standard experimental paradigm. Our trained RNN generates a network oscillator that uses an input current (context parameter) to control oscillation frequency and replicates key features of the dynamics observed in recordings from monkeys performing the same task. We develop a reduced three-variable rate model of the RNN and analyze its dynamic properties. Treating the mathematical structure for oscillations in the reduced model as predictive, we confirm that the same dynamical mechanisms are present in the full RNN. Our neurally plausible reduced model reveals an E-I circuit with two distinct inhibitory sub-populations, one of which is tightly synchronized with the excitatory units.
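As a rough illustration of the kind of biologically constrained network described above, the Python sketch below assembles a rate RNN whose excitatory and inhibitory units respect Dale's principle and drives it with a scalar context current. It is an untrained scaffold with placeholder sizes, weights, and time constants, not the trained model from the study.

import numpy as np

rng = np.random.default_rng(0)

N_E, N_I = 80, 20                  # placeholder population sizes
N = N_E + N_I
tau, dt = 0.02, 0.001              # 20 ms time constant, 1 ms Euler step (assumed)

# Dale's principle: columns of excitatory units are non-negative, inhibitory columns non-positive.
W = np.abs(rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N)))
W[:, N_E:] *= -4.0                 # inhibitory outgoing weights, with placeholder scaling
w_in = rng.normal(0.0, 1.0, size=N)  # projection of the scalar "context" current

def simulate(context, T=2.0):
    """Forward-Euler integration of tau * dr/dt = -r + relu(W r + w_in * context)."""
    r = np.zeros(N)
    trace = np.empty((int(T / dt), N))
    for t in range(trace.shape[0]):
        r = r + dt / tau * (-r + np.maximum(W @ r + w_in * context, 0.0))
        trace[t] = r
    return trace

# In the study the context current controls the oscillation frequency of the trained
# network; this untrained scaffold only illustrates the architecture and input pathway.
for context in (0.5, 1.0, 2.0):
    trace = simulate(context)
    print(f"context={context}: mean E rate={trace[:, :N_E].mean():.3f}, "
          f"mean I rate={trace[:, N_E:].mean():.3f}")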
Subject(s)
Neurological Models, Animals, Music, Neurons/physiology, Nerve Net/physiology, Auditory Perception/physiology
ABSTRACT
The human auditory system, in attempting to decipher ambiguous sounds, appears to resort to perceptual exploration, as evidenced by multi-stable perceptual alternations. This phenomenon has been widely investigated via the auditory streaming paradigm, employing ABA_ triplet sequences, with much research focused on perceptual bi-stability in which the alternating percepts are either a single integrated stream or two simultaneous distinct streams. We extend this inquiry with experiments and modeling to include tri-stable perception, where the segregated percepts may involve a foreground/background distinction. We collected empirical data from participants engaged in a tri-stable auditory task and used this dataset to refine a neural mechanistic model that had successfully reproduced multiple features of auditory bi-stability. Remarkably, the model emulated basic statistical characteristics of tri-stability without substantial modification. The model also allows us to demonstrate a parsimonious approach to accounting for individual variability by adjusting either the noise level or the neural adaptation strength.
Subject(s)
Acoustic Stimulation, Auditory Perception, Humans, Male, Female, Adult, Young Adult, Neurological Models, Auditory Pathways/physiology, Time Factors, Perceptual Masking
ABSTRACT
Firing rate models that describe the mean-field activity of neuronal ensembles can be used effectively to study network function and dynamics, including the synchronization and rhythmicity of excitatory-inhibitory populations. However, traditional Wilson-Cowan-like models, even when extended to include an explicit dynamic synaptic activation variable, are unable to capture some dynamics, such as Interneuronal Network Gamma (ING) oscillations. Use of an explicit delay is helpful in simulations, but at the expense of complicating mathematical analysis. We resolve this issue by introducing a dynamic variable, u, that acts as an effective delay in the negative feedback loop between the firing rate (r) and the synaptic gating of inhibition (s). In effect, u endows synaptic activation with second-order dynamics. With linear stability analysis, numerical branch-tracking, and simulations, we show that our r-u-s rate model captures key qualitative features of spiking network models for ING. We also propose an alternative formulation, a v-u-s model, in which the mean membrane potential v satisfies an averaged current-balance equation. Furthermore, we extend the framework to E-I networks. With our six-variable v-u-s model, we demonstrate in firing rate models the transition from Pyramidal-Interneuronal Network Gamma (PING) to ING by increasing the external drive to the inhibitory population without adjusting synaptic weights. Having PING and ING available in a single network, without invoking synaptic blockers, is a plausible and natural way to explain the emergence of, and transition between, these two types of gamma oscillations.
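The abstract does not spell out the model equations, so the sketch below encodes one plausible reading of the r-u-s idea: the firing rate r drives an intermediate variable u, and u in turn drives the synaptic gating s, so that inhibition feeds back onto r with an effective delay. Time constants and the feedback strength are illustrative placeholders, not values from the paper.

import numpy as np

tau_r, tau_u, tau_s = 0.002, 0.006, 0.008   # seconds (placeholders)
g_inh, I_ext = 20.0, 2.0                    # feedback strength and external drive (placeholders)

def f(x):
    return np.maximum(x, 0.0)               # threshold-linear transfer function (assumed)

def simulate(T=1.0, dt=1e-4):
    r = u = s = 0.0
    r_trace = np.empty(int(T / dt))
    for i in range(r_trace.size):
        r += dt / tau_r * (-r + f(I_ext - g_inh * s))
        u += dt / tau_u * (-u + r)          # u lags r: the "effective delay" stage
        s += dt / tau_s * (-s + u)          # s is recruited by u, closing the negative feedback loop
        r_trace[i] = r
    return r_trace

r = simulate()
late = r[r.size // 2:]
# With feedback this strong the delayed loop sustains an oscillation (tens of Hz for these
# time constants); reducing g_inh sufficiently lets the activity settle to a fixed point.
print(f"late-time r: mean = {late.mean():.3f}, std = {late.std():.3f}")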
Subject(s)
Action Potentials, Gamma Rhythm, Neurological Models, Action Potentials/physiology, Gamma Rhythm/physiology, Nerve Net/physiology, Animals, Neurons/physiology, Computer Simulation, Humans, Synapses/physiology, Neural Inhibition/physiology, Pyramidal Cells/physiology, Interneurons/physiology, Neural Networks (Computer)
ABSTRACT
Homeostatic regulation of synapses is vital for nervous system function and key to understanding a range of neurological conditions. Synaptic homeostasis is proposed to operate over hours to counteract the destabilizing influence of long-term potentiation (LTP) and long-term depression (LTD). The prevailing view holds that synaptic scaling is a slow first-order process that regulates postsynaptic glutamate receptors and fundamentally differs from LTP or LTD. Surprisingly, we find that the dynamics of scaling induced by neuronal inactivity are not exponential or monotonic, and the mechanism requires calcineurin and CaMKII, molecules dominant in LTD and LTP. Our quantitative model of these enzymes reconstructs the unexpected dynamics of homeostatic scaling and reveals how synapses can efficiently safeguard future capacity for synaptic plasticity. This mechanism of synaptic adaptation supports a broader set of homeostatic changes, including action potential autoregulation, and invites further inquiry into how such a mechanism varies in health and disease.
Subject(s)
Calcineurin, Calcium-Calmodulin-Dependent Protein Kinase Type 2, Homeostasis, Synapses, Animals, Synapses/metabolism, Synapses/physiology, Calcineurin/metabolism, Calcium-Calmodulin-Dependent Protein Kinase Type 2/metabolism, Long-Term Potentiation/physiology, Neuronal Plasticity/physiology, Long-Term Synaptic Depression/physiology, Neurons/metabolism, Neurons/physiology, Mice
ABSTRACT
Rhythmicity permeates large parts of human experience. Humans generate various motor and brain rhythms spanning a range of frequencies. We also experience and synchronize to externally imposed rhythmicity, for example from music and song or from the 24-h light-dark cycles of the sun. In the context of music, humans have the ability to perceive, generate, and anticipate rhythmic structures, for example, "the beat." Experimental and behavioral studies offer clues about the biophysical and neural mechanisms that underlie our rhythmic abilities and about the different brain areas that are involved, but many open questions remain. In this paper, we review several theoretical and computational approaches, each centered at a different level of description, that address specific aspects of musical rhythm generation, perception, attention, perception-action coordination, and learning. We survey methods and results from applications of dynamical systems theory, neuro-mechanistic modeling, and Bayesian inference. Some frameworks rely on synchronization of intrinsic brain rhythms that span the relevant frequency range; some formulations involve real-time adaptation schemes for error-correction to align the phase and frequency of a dedicated circuit; others involve learning and dynamically adjusting expectations to make rhythm tracking predictions. Each of the approaches, while initially designed to answer specific questions, offers the possibility of being integrated into a larger framework that provides insights into our ability to perceive and generate rhythmic patterns.
ABSTRACT
The Hodgkin-Huxley (HH) model and the squid axon (bathed in reduced Ca2+) fire repetitively for steady current injection. Moreover, for a current range just suprathreshold, repetitive firing coexists with a stable steady state. Neuronal excitability thus shows bistability and hysteresis, allowing the system to be switched between firing and non-firing states by transient inputs and providing the dynamical backbone for bursting oscillations. Some conditions for bistability can be derived by intricate analysis (bifurcation theory) and characterized by simulation, but conditions for the emergence and robustness of such bistability do not typically follow from intuition. Here, we use a semi-quantitative two-variable (V-w) reduction of the HH model to demonstrate features that promote or reduce bistability. Visualization of the flow and trajectories in the V-w phase plane provides an intuitive grasp of bistability. The geometry of action potential recovery involves a late phase during which the dynamic negative feedback variables, Na+ inactivation and K+ activation, overshoot and undershoot their resting values, respectively, thereby leading to hyperexcitability and an intrinsically generated opportunity to bypass the spiral-like stable rest state and initiate the next spike upstroke. We illustrate control of bistability and the dependence of the degree of hysteresis on recovery timescales and gating properties. Our dynamical dissection reveals the strongly attracting depolarized phase of the spike, enabling approximations like the resetting feature of adapting integrate-and-fire models. We extend our insights and show that the Morris-Lecar model can also exhibit robust bistability.
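A minimal numerical probe of this kind of bistability, using the standard Morris-Lecar equations with a commonly used Hopf-regime parameter set; the applied current is assumed to sit inside the coexistence window, which should be verified by scanning I_app.

import numpy as np

# Standard Morris-Lecar model, Hopf-regime parameters (as in the Rinzel-Ermentrout chapter).
C, g_L, E_L = 20.0, 2.0, -60.0
g_Ca, E_Ca, g_K, E_K = 4.4, 120.0, 8.0, -84.0
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04

def m_inf(V): return 0.5 * (1 + np.tanh((V - V1) / V2))
def w_inf(V): return 0.5 * (1 + np.tanh((V - V3) / V4))
def tau_w(V): return 1.0 / np.cosh((V - V3) / (2 * V4))

I_app = 90.0    # assumed to lie in the bistable window for this parameter set

def F(V):       # steady-state current balance (w = w_inf); its zero locates the equilibrium
    return (I_app - g_L * (V - E_L) - g_Ca * m_inf(V) * (V - E_Ca)
            - g_K * w_inf(V) * (V - E_K))

lo, hi = -80.0, 60.0                 # F changes sign on this interval
for _ in range(60):                  # bisection for the equilibrium voltage
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if F(mid) > 0 else (lo, mid)
V_eq = 0.5 * (lo + hi)

def run(V0, w0, T=2000.0, dt=0.05):
    V, w = V0, w0
    Vs = np.empty(int(T / dt))
    for i in range(Vs.size):
        dV = (I_app - g_L * (V - E_L) - g_Ca * m_inf(V) * (V - E_Ca) - g_K * w * (V - E_K)) / C
        dw = phi * (w_inf(V) - w) / tau_w(V)
        V, w = V + dt * dV, w + dt * dw
        Vs[i] = V
    return Vs

# Start once at the equilibrium and once with a depolarizing kick; whether a given kick
# switches the model into repetitive firing depends on I_app and the kick size, so scan
# both if the two runs do not differ.
for label, V0 in (("rest", V_eq), ("kicked", V_eq + 40.0)):
    late = run(V0, w_inf(V_eq))[int(1000 / 0.05):]
    verdict = "repetitive firing" if late.max() - late.min() > 20 else "steady state"
    print(f"{label:6s}: late-time V range = {late.max() - late.min():6.2f} mV -> {verdict}")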
Subject(s)
Neurological Models, Neurons, Neurons/physiology, Action Potentials/physiology, Computer Simulation
ABSTRACT
The current dominant view of the hippocampus is that it is a navigation "device" guided by environmental inputs. Yet, a critical aspect of navigation is a sequence of planned, coordinated actions. We examined the role of action in the neuronal organization of the hippocampus by training rats to jump a gap on a linear track. Recording local field potentials and ensembles of single units in the hippocampus, we found that jumping produced a stereotypic behavior associated with consistent electrophysiological patterns, including phase reset of theta oscillations, predictable global firing-rate changes, and population vector shifts of hippocampal neurons. A subset of neurons ("jump cells") was systematically affected by the gap, but only in one direction of travel. Novel place fields emerged and others were either boosted or attenuated by jumping, yet the relationship between theta spike phase and animal position remained unaltered. Thus, jumping involves an action plan for the animal to traverse the same route as without jumping, which is faithfully tracked by hippocampal neuronal activity.
Subject(s)
Hippocampus, Motor Activity, Animals, Electrophysiology, Hippocampus/cytology, Hippocampus/physiology, Motor Activity/physiology, Neurons/cytology, Neurons/physiology, Rats
ABSTRACT
Bursting is one of the fundamental rhythms that excitable cells can generate either in response to incoming stimuli or intrinsically. It has been a topic of intense research in computational biology for several decades. The classification of bursting oscillations in excitable systems has been the subject of active research since the early 1980s and is still ongoing. As a by-product, it establishes analytical and numerical foundations for studying complex temporal behaviors in multiple timescale models of cellular activity. In this review, we first present the seminal works of Rinzel and Izhikevich in classifying bursting patterns of excitable systems. We recall a complementary mathematical classification approach by Bertram and colleagues, and then by Golubitsky and colleagues, which, together with the Rinzel-Izhikevich proposals, provides the state-of-the-art foundation for these classifications. Beyond classical approaches, we review a recent bursting example that falls outside the previous classification systems. Generalizing this example leads us to propose an extended classification, which requires the analysis of both fast and slow subsystems of an underlying slow-fast model and allows the dissection of a larger class of bursters. Namely, we provide a general framework for bursting systems with both subthreshold and superthreshold oscillations. A new class of bursters with at least 2 slow variables is then added, which we denote folded-node bursters, to convey the idea that the bursts are initiated or annihilated via a folded-node singularity. Key to this mechanism are so-called canard or duck orbits, organizing the underpinning excitability structure. We describe the 2 main families of folded-node bursters, depending upon the phase (active/spiking or silent/nonspiking) of the bursting cycle during which folded-node dynamics occurs. We classify both families and give examples of minimal systems displaying these novel bursting patterns. Finally, we provide a biophysical example by reinterpreting a generic conductance-based episodic burster as a folded-node burster, showing that the associated framework can explain its subthreshold oscillations over a larger parameter region than the fast subsystem approach.
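As a concrete and deliberately classical illustration of slow-fast dissection (a square-wave burster, not the folded-node class introduced above), the sketch below integrates the standard Hindmarsh-Rose model and then freezes its slow variable at two assumed values to expose the rest state and the tonic-spiking state of the fast subsystem.

import numpy as np

# Hindmarsh-Rose model with standard textbook parameters; I = 2 is a commonly used value
# that produces square-wave bursting.
a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, x_R, I = 0.001, 4.0, -1.6, 2.0

def simulate(T=2000.0, dt=0.01, freeze_z=False, z0=2.0):
    x, y, z = -1.6, -10.0, z0
    xs = np.empty(int(T / dt))
    for i in range(xs.size):
        dx = y - a * x**3 + b * x**2 - z + I
        dy = c - d * x**2 - y
        dz = 0.0 if freeze_z else r * (s * (x - x_R) - z)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[i] = x
    return xs

# Full model: the slow variable z sweeps the fast (x, y) subsystem back and forth across
# its bistable range, alternating silent and spiking phases.
x_full = simulate()
print("full model: x range =", round(x_full.min(), 2), "to", round(x_full.max(), 2))

# Dissection: with z frozen, the fast subsystem alone sits at rest for one value and
# spikes tonically for the other (the particular z values are assumptions to tune).
for z0 in (1.5, 2.5):
    late = simulate(freeze_z=True, z0=z0)[100000:]
    print(f"frozen z={z0}: late-time x range = {late.max() - late.min():.2f}")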
Subject(s)
Computational Biology, Ducks, Action Potentials/physiology, Animals, Mathematics
ABSTRACT
The ability to estimate and produce appropriately timed responses is central to many behaviors, including speaking, dancing, and playing a musical instrument. A classical framework for estimating or producing a time interval is the pacemaker-accumulator model, in which pulses of a pacemaker are counted and compared to a stored representation. However, the neural mechanisms for how these pulses are counted remain an open question. The presence of noise and stochasticity further complicates the picture. We present a biophysical model of how to keep count of a pacemaker's pulses in the presence of various forms of stochasticity, using a system of bistable Wilson-Cowan units asymmetrically connected in a one-dimensional array; all units receive the same input pulses from a central clock but only one unit is active at any point in time. With each pulse from the clock, the position of the activated unit changes, thereby encoding the total number of pulses emitted by the clock. This neural architecture maps the counting problem into the spatial domain, which in turn translates the count into a time estimate. We further extend the model to a hierarchical structure to achieve higher counts robustly.
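A deliberately idealized, discrete caricature of the handoff idea, with binary units standing in for the bistable Wilson-Cowan units and hand-picked weights and thresholds; it is not the biophysical model itself.

import numpy as np

N = 20                  # units in the 1-D array
w_self = 1.0            # self-excitation that maintains the active ("up") state
w_fwd = 0.6             # asymmetric feedforward excitation to the next unit
w_back = -1.5           # inhibition of the predecessor once the next unit turns on
pulse_amp, theta = 0.6, 0.9

def update(state, pulse):
    """One synchronous update of the binary state vector (1 = active unit)."""
    drive = w_self * state + pulse * pulse_amp
    drive[1:] += w_fwd * state[:-1]                # excitation from the left neighbor
    new = (drive > theta).astype(float)
    suppress = np.zeros(N)
    suppress[:-1] = w_back * new[1:] * state[:-1]  # a newly active unit shuts off its predecessor
    return ((drive + suppress) > theta).astype(float)

state = np.zeros(N)
state[0] = 1.0                                     # unit 0 marks "count = 0"
for n_pulses in range(1, 8):
    state = update(state, pulse=1.0)               # a pacemaker pulse arrives
    state = update(state, pulse=0.0)               # relaxation step between pulses
    print(f"after {n_pulses} pulse(s): active unit = {int(np.argmax(state))}")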
Subject(s)
Time Perception, Time Perception/physiology
ABSTRACT
Attention is a crucial component in sound source segregation, allowing auditory objects of interest to be both singled out and held in focus. Our study utilizes a fundamental paradigm for sound source segregation: a sequence of interleaved tones, A and B, of different frequencies that can be heard as a single integrated stream or segregated into two streams (the auditory streaming paradigm). We focus on the irregular alternations between integrated and segregated that occur for long presentations, so-called auditory bistability. Psychoacoustic experiments demonstrate how attentional control, a listener's intention to experience integrated or segregated, biases perception in favour of different perceptual interpretations. Our data show that this is achieved by prolonging the dominance times of the attended percept and, to a lesser extent, by curtailing the dominance times of the unattended percept, an effect that remains consistent across a range of values for the difference in frequency between A and B. An existing neuromechanistic model describes the neural dynamics of perceptual competition downstream of primary auditory cortex (A1). The model allows us to propose plausible neural mechanisms for attentional control, as linked to different attentional strategies, in a direct comparison with behavioural data. A mechanism based on a percept-specific input gain best accounts for the effects of attentional control.
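The sketch below is a generic two-population competition model (mutual inhibition, slow adaptation, noise) with attention implemented as a multiplicative gain on the attended percept's input, in the spirit of the percept-specific input-gain mechanism favoured above; parameter values are placeholders, not fits to the behavioural data.

import numpy as np

rng = np.random.default_rng(1)

tau, tau_a = 0.010, 2.0        # fast rate and slow adaptation time constants (s)
beta, g_a, sigma = 1.2, 0.6, 0.2
I_base = 0.6

def f(x):                      # sigmoidal transfer function (assumed form)
    return 1.0 / (1.0 + np.exp(-(x - 0.2) / 0.05))

def mean_dominance(gain, T=600.0, dt=0.002):
    r, a = np.array([0.6, 0.1]), np.zeros(2)
    winner, t_last, durations = 0, 0.0, [[], []]
    for step in range(int(T / dt)):
        inp = I_base * np.array(gain) - beta * r[::-1] - g_a * a
        r += dt / tau * (-r + f(inp)) + sigma * np.sqrt(dt) * rng.normal(size=2)
        r = np.clip(r, 0.0, 1.0)
        a += dt / tau_a * (-a + r)
        if r[1 - winner] > r[winner] + 0.1:      # switch, with a small hysteresis margin
            durations[winner].append(step * dt - t_last)
            winner, t_last = 1 - winner, step * dt
    return [round(float(np.mean(d)), 2) if d else None for d in durations]

# Mean dominance durations (s) for [percept 0, percept 1]; biasing the input gain toward
# percept 0 should mainly lengthen its dominance, echoing the effect described above.
print("no bias          :", mean_dominance((1.0, 1.0)))
print("attend percept 0 :", mean_dominance((1.2, 1.0)))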
Subject(s)
Attention/physiology, Auditory Perception/physiology, Theoretical Models, Psychoacoustics, Adult, Female, Humans, Male
ABSTRACT
Recurrent neural networks (RNNs) have been widely used to model the sequential neural dynamics ("neural sequences") of cortical circuits in cognitive and motor tasks. Efforts to incorporate biological constraints and Dale's principle help to elucidate the neural representations and mechanisms of the underlying circuits. We trained an excitatory-inhibitory RNN to learn neural sequences in a supervised manner and studied the representations and dynamic attractors of the trained network. The trained RNN robustly triggered the sequence in response to various input signals and interpolated time-warped inputs for sequence representation. Interestingly, a learned sequence can repeat periodically when the RNN evolves beyond the duration of a single sequence. The eigenspectrum of the learned recurrent connectivity matrix, with growing or damping modes, together with the RNN's nonlinearity, was sufficient to generate a limit-cycle attractor. We further examined the stability of dynamic attractors while training the RNN to learn two sequences. Together, our results provide a general framework for understanding neural sequence representation in excitatory-inhibitory RNNs.
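A small sketch of the eigenspectrum check described above, applied to a random Dale-constrained matrix rather than the learned connectivity from the study; sizes and gains are placeholders.

import numpy as np

rng = np.random.default_rng(2)

N_E, N_I = 100, 25
N = N_E + N_I
g = 1.5                                    # overall gain; larger values push modes past the leak

W = np.abs(rng.normal(0.0, g / np.sqrt(N), size=(N, N)))
W[:, N_E:] *= -N_E / N_I                   # scale inhibitory columns so excitation and inhibition roughly balance

eigvals = np.linalg.eigvals(W)
lam = eigvals[np.argmax(eigvals.real)]
# For rate dynamics tau * dr/dt = -r + W r + input, a linear mode grows when Re(lambda) > 1,
# which (together with the nonlinearity) is the ingredient for a limit-cycle attractor.
print(f"leading eigenvalue: {lam.real:.3f} {lam.imag:+.3f}j")
print(f"modes with Re(lambda) > 1: {int(np.sum(eigvals.real > 1.0))}")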
Subject(s)
Neoplasms, Neural Networks (Computer), Humans, Learning
ABSTRACT
The process by which humans synchronize to a musical beat is believed to occur through error-correction, whereby an individual's estimates of the period and phase of the beat are iteratively adjusted to align with an external stimulus. Mathematically, error-correction can be described using a two-dimensional map in which convergence to a fixed point corresponds to synchronizing to the beat. In this paper, we show how a neural system, called a beat generator, learns to adapt its oscillatory behavior through error-correction to synchronize to an external periodic signal. We construct a two-dimensional event-based map that iteratively adjusts an internal parameter of the beat generator to speed up or slow down its oscillatory behavior and bring it into synchrony with the periodic stimulus. The map is novel in that the order of events defining the map is not known a priori. Instead, the type of error-correction adjustment made at each iterate of the map is determined by a sequence of expected events. The map possesses a rich repertoire of dynamics, including periodic solutions and chaotic orbits.
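For orientation, the sketch below iterates a simplified period-and-phase error-correction map with a fixed event order; the event-based map in the paper determines the order of events on the fly, which this sketch does not attempt. The correction gains are illustrative.

import numpy as np

alpha, beta = 0.6, 0.25       # phase and period correction gains (placeholders)

def iterate_map(T_stim, P0, e0, n_beats=40):
    """Iterate the asynchrony e_n and internal period P_n against an isochronous stimulus."""
    P, e = P0, e0
    history = []
    for _ in range(n_beats):
        history.append((e, P))
        P = P - beta * e                   # period correction
        e = e + (P - T_stim) - alpha * e   # next asynchrony after phase correction
    return np.array(history)

hist = iterate_map(T_stim=0.5, P0=0.42, e0=0.06)
print("initial (asynchrony, period):", np.round(hist[0], 3))
print("final   (asynchrony, period):", np.round(hist[-1], 3))
# Synchronization corresponds to convergence to the fixed point (e, P) = (0, T_stim);
# for these gains the fixed point is a stable spiral of the linearized map.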
Subject(s)
Learning, Humans
ABSTRACT
A repeating triplet sequence ABA- of non-overlapping brief tones, A and B, is a valued paradigm for studying auditory stream formation and the cocktail party problem. The stimulus is "heard" either as a galloping pattern (integration) or as two interleaved streams (segregation); the initial percept is typically integration, followed by spontaneous alternations between segregation and integration, each being dominant for a few seconds. The probability of segregation grows over seconds, from near zero to a steady value, defining the buildup function, BUF. As the difference in tone frequencies, DF, increases, the BUF's stationary level increases and the BUF rises faster. Percept durations have DF-dependent means and are gamma-like distributed. Behavioral and computational studies usually characterize triplet streaming either during alternations or during buildup. Here, our experimental design and modeling encompass both. We propose a pseudo-neuromechanistic model that takes spiking activity in primary auditory cortex, A1, as input and resolves perception along two network layers downstream of A1. Our model is straightforward and intuitive. It describes the noisy accumulation of evidence against the current percept, which generates switches when reaching a threshold. Accumulation can saturate either above or below threshold; if below, the switching dynamics resemble noise-induced transitions from an attractor state. Our model accounts quantitatively for three key features of the data: the BUFs, mean durations, and normalized dominance-duration distributions, at various DF values. It describes perceptual alternations without competition per se, and underscores that treating triplets in the sequence independently and averaging across trials, as implemented in earlier widely cited studies, is inadequate.
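A stripped-down sketch of the accumulation idea: evidence against the current percept builds up noisily and a switch occurs at threshold, from which a buildup function can be estimated across trials. Drift rates, noise level, and threshold are placeholders, and the study's model is driven by simulated A1 responses rather than the constant drifts used here.

import numpy as np

rng = np.random.default_rng(3)

dt, T, n_trials = 0.01, 15.0, 400
theta = 1.0                                            # switch threshold
drift = {"integrated": 0.35, "segregated": 0.12}       # evidence builds faster against integration
sigma = 0.25

def run_trial():
    percept, x = "integrated", 0.0                     # trials start in the integrated percept
    states = np.empty(int(T / dt), dtype=bool)
    for step in range(states.size):
        x = max(x + drift[percept] * dt + sigma * np.sqrt(dt) * rng.normal(), 0.0)
        if x >= theta:                                 # switch and restart accumulation
            percept = "segregated" if percept == "integrated" else "integrated"
            x = 0.0
        states[step] = (percept == "segregated")
    return states

seg = np.mean([run_trial() for _ in range(n_trials)], axis=0)
# The buildup function: probability of hearing segregation versus time since sequence onset.
for t_probe in (1, 3, 6, 12):
    print(f"P(segregated) at {t_probe:4.1f} s = {seg[int(t_probe / dt)]:.2f}")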
Subject(s)
Auditory Cortex/physiology, Acoustic Stimulation, Auditory Perception, Female, Humans, Male
ABSTRACT
In neuroscience, mathematical modelling involving multiple spatial and temporal scales can unveil complex oscillatory activity such as excitable responses to an input current, subthreshold oscillations, spiking, or bursting. While the number of slow and fast variables and the geometry of the system determine the type of complex oscillation, canard structures define the boundaries between them. In this study, we use geometric singular perturbation theory to identify and characterise boundaries between different dynamical regimes in multiple-timescale firing rate models of the developing spinal cord. These rate models are either three or four dimensional, with state variables chosen from an overall set of two slow and two fast variables. The fast subsystem corresponds to a recurrent excitatory network with fast activity-dependent synaptic depression, and the slow variables represent the cell firing threshold and slow activity-dependent synaptic depression, respectively. We start by demonstrating canard-induced bursting and mixed-mode oscillations in two different three-dimensional rate models. Then, in the full four-dimensional model, we show that a canard-mediated slow passage creates dynamics that combine these complex oscillations and give rise to mixed-mode bursting oscillations (MMBOs). We unveil complicated isolas along which MMBOs exist in parameter space. The profile of solutions along each isola undergoes canard-mediated transitions between the subthreshold regime and the bursting regime; these explosive transitions change the number of oscillations in each regime. Finally, we relate the MMBO dynamics to experimental recordings and discuss their effects on the silent phases of bursting patterns, as well as their potential role in creating subthreshold fluctuations that are often interpreted as noise. The mathematical framework used in this paper is relevant for modelling multiple-timescale dynamics in excitable systems.
Subject(s)
Neurological Models, Nerve Net/physiology, Action Potentials/physiology, Animals, Chick Embryo, Computer Simulation, Mathematical Concepts, Nerve Net/embryology, Spatio-Temporal Analysis, Spinal Cord/embryology, Spinal Cord/physiology, Stochastic Processes
ABSTRACT
We explore stream segregation with temporally modulated acoustic features using behavioral experiments and modelling. The auditory streaming paradigm, in which alternating high-frequency (A) and low-frequency (B) tones appear in a repeating ABA- pattern, has been shown to be perceptually bistable for extended presentations (on the order of minutes). For a fixed, repeating stimulus, perception spontaneously changes (switches) at random times, every 2-15 s, between an integrated interpretation with a galloping rhythm and segregated streams. Streaming in a natural auditory environment requires segregation of auditory objects with features that evolve over time. With the relatively idealized ABA-triplet paradigm, we explore perceptual switching in a non-static environment by considering slowly and periodically varying stimulus features. Our previously published model captures the dynamics of auditory bistability and predicts here how perceptual switches are entrained, tightly locked to the rising and falling phases of the modulation. In psychoacoustic experiments we find that entrainment depends on both the period of modulation and the intrinsic switch characteristics of individual listeners. The extended auditory streaming paradigm with slowly modulated stimulus features presented here will be of significant interest for future imaging and neurophysiology experiments by reducing the need for subjective perceptual reports of ongoing perception.
Subject(s)
Auditory Pathways/physiology, Environment, Perceptual Masking, Pitch Perception, Acoustic Stimulation, Computer Simulation, Female, Humans, Male, Neurological Models, Psychoacoustics, Young Adult
ABSTRACT
Audition is by nature dynamic, from brainstem processing on sub-millisecond time scales, to segregating and tracking sound sources with changing features, to the pleasure of listening to music and the satisfaction of getting the beat. We review recent advances from computational models of sound localization, of auditory stream segregation and of beat perception/generation. A wealth of behavioral, electrophysiological and imaging studies shed light on these processes, typically with synthesized sounds having regular temporal structure. Computational models integrate knowledge from different experimental fields and at different levels of description. We advocate a neuromechanistic modeling approach that incorporates knowledge of the auditory system from various fields, that utilizes plausible neural mechanisms, and that bridges our understanding across disciplines.
Subject(s)
Auditory Cortex, Sound Localization, Acoustic Stimulation, Auditory Perception, Sound
ABSTRACT
During non-rapid eye movement (NREM) sleep, neuronal populations in the mammalian forebrain alternate between periods of spiking and inactivity. Termed the slow oscillation in the neocortex and sharp wave-ripples in the hippocampus, these alternations are often considered separately but are both crucial for NREM functions. By directly comparing experimental observations of naturally sleeping rats with a mean-field model of an adapting, recurrent neuronal population, we find that the neocortical alternations reflect a dynamical regime in which a stable active state is interrupted by transient inactive states (slow waves) while the hippocampal alternations reflect a stable inactive state interrupted by transient active states (sharp waves). We propose that during NREM sleep in the rodent, hippocampal and neocortical populations are excitable: each in a stable state from which internal fluctuations or external perturbation can evoke the stereotyped population events that mediate NREM functions.
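The sketch below has the mean-field structure in this spirit (a recurrently excited population rate with slow adaptation and noise) and compares two drive levels standing in for the neocortex-like and hippocampus-like regimes. Parameter values are illustrative, and the fluctuation-evoked transient events described above would require appropriately tuned noise and adaptation.

import numpy as np

rng = np.random.default_rng(4)

tau_r, tau_a = 0.005, 0.5       # fast rate and slow adaptation time constants (s)
w, b, sigma = 6.0, 1.0, 0.15    # recurrent excitation, adaptation strength, noise (placeholders)

def f(x):
    return 1.0 / (1.0 + np.exp(-(x - 2.5)))   # sigmoidal activation with a threshold (assumed)

def fraction_active(I_ext, T=100.0, dt=0.001):
    r = a = 0.0
    active, steps = 0, int(T / dt)
    for _ in range(steps):
        r += dt / tau_r * (-r + f(w * r - b * a + I_ext)) + sigma * np.sqrt(dt) * rng.normal()
        r = max(r, 0.0)
        a += dt / tau_a * (-a + r)
        active += r > 0.5
    return active / steps

print("high drive (neocortex-like)  :", round(fraction_active(I_ext=1.0), 2))
print("low drive (hippocampus-like) :", round(fraction_active(I_ext=-1.0), 2))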
Subject(s)
Brain Waves/physiology, Hippocampus/physiology, Neocortex/physiology, Neurons/physiology, Slow-Wave Sleep/physiology, Animals, Electroencephalography, Male, Neurological Models, Rats, Sleep/physiology, Sleep Stages/physiology
ABSTRACT
When listening to music, humans can easily identify and move to the beat. Numerous experimental studies have identified brain regions that may be involved with beat perception and representation. Several theoretical and algorithmic approaches have been proposed to account for this ability. Related to, but different from, the issue of how we perceive a beat is the question of how we learn to generate and hold a beat. In this paper, we introduce a neuronal framework for a beat generator that is capable of learning isochronous rhythms over a range of frequencies that are relevant to music and speech. Our approach combines ideas from error-correction and entrainment models to investigate the dynamics of how a biophysically based neuronal network model synchronizes its period and phase to match those of an external stimulus. The model makes novel use of ongoing faster gamma rhythms to form a set of discrete clocks that provide estimates, but not exact information, of how well the beat generator spike times match those of a stimulus sequence. The beat generator is endowed with plasticity, allowing it to quickly learn and thereby adjust its spike times to achieve synchronization. Our model makes generalizable predictions about the existence of asymmetries in the synchronization process, as well as specific predictions about resynchronization times after changes in stimulus tempo or phase. Analysis of the model demonstrates that accurate rhythmic time keeping can be achieved over a range of frequencies relevant to music, in a manner that is robust to changes in parameters and to the presence of noise.
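A caricature of the discrete "gamma clock" idea described above: the mismatch between a beat-generator beat and the nearest stimulus beat is only known to the resolution of one gamma cycle (25 ms at an assumed 40 Hz), and that quantized estimate drives period and phase corrections. The update rule and learning rates are assumptions, not the biophysical model.

T_gamma = 0.025            # one gamma cycle at an assumed 40 Hz (s)
alpha, beta = 0.6, 0.3     # phase and period correction rates (placeholders)

def synchronize(T_stim=0.6, P0=0.75, n_beats=30):
    t_bg, P = 0.05, P0                               # beat generator starts late and too slow
    for n in range(n_beats):
        nearest = round(t_bg / T_stim) * T_stim      # nearest stimulus beat time
        err = t_bg - nearest                         # true signed asynchrony
        err_hat = round(err / T_gamma) * T_gamma     # what the gamma clock can resolve
        P = P - beta * err_hat                       # period correction
        t_bg = t_bg + P - alpha * err_hat            # phase-corrected next beat time
        if n in (0, 4, 9, 29):
            print(f"beat {n + 1:2d}: period = {P * 1000:5.1f} ms, asynchrony = {err * 1000:+6.1f} ms")

synchronize()
# The period converges to the stimulus period and the asynchrony shrinks to within roughly
# one gamma cycle, the resolution limit of this quantized error estimate.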
Subject(s)
Auditory Perception/physiology, Neurons/physiology, Acoustic Stimulation, Biomechanical Phenomena/physiology, Brain/physiology, Electroencephalography, Gamma Rhythm, Humans, Neurological Models, Music, Periodicity, Time Perception
ABSTRACT
Gamma oscillations are readily observed in a variety of brain regions during both waking and sleeping states. Computational models of gamma oscillations typically involve simulations of large networks of synaptically coupled spiking units. These networks can exhibit strongly synchronized gamma behavior, whereby neurons fire in near synchrony on every cycle, or weakly modulated gamma behavior, corresponding to stochastic, sparse firing of the individual units on each cycle of the population gamma rhythm. These spiking models offer valuable biophysical descriptions of gamma oscillations; however, because they involve many individual neuronal units, they are limited in their ability to communicate general network-level dynamics. Here we demonstrate that few-variable firing rate models with established synaptic timescales can account for both strongly synchronized and weakly modulated gamma oscillations. These models go beyond the classical formulations of rate models by including at least two dynamic variables per population: firing rate and synaptic activation. The models' flexibility to capture the broad range of gamma behavior depends directly on the timescales that represent recruitment of the excitatory and inhibitory firing rates. In particular, we find that weakly modulated gamma oscillations occur robustly when the recruitment timescale of inhibition is faster than that of excitation. We present our findings by using an extended Wilson-Cowan model and a rate model derived from a network of quadratic integrate-and-fire neurons. These biophysical rate models capture the range of weakly modulated and coherent gamma oscillations observed in spiking network models, while additionally allowing for greater tractability and systems analysis.
NEW & NOTEWORTHY Here we develop simple and tractable models of gamma oscillations, a dynamic feature observed throughout much of the brain with significant correlates to behavior and cognitive performance in a variety of experimental contexts. Our models depend on only a few dynamic variables per population, but despite this they qualitatively capture features observed in previous biophysical models of gamma oscillations that involve many individual spiking units.
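A minimal version of the two-dynamic-variables-per-population idea: each of the E and I populations carries a firing rate and a synaptic activation variable, so recruitment and synaptic timescales enter separately. Couplings and time constants are placeholders chosen only so that the loop oscillates; they are not taken from the models in the paper.

import numpy as np

tau_rE, tau_sE = 0.002, 0.004      # E recruitment and synaptic timescales (s)
tau_rI, tau_sI = 0.002, 0.006      # I recruitment and synaptic timescales (s)
w_IE, w_EI = 2.0, 3.0              # E-to-I and I-to-E coupling strengths
I_E, I_I = 2.0, 0.0                # external drives

def f(x):
    return np.maximum(x, 0.0)      # threshold-linear transfer (assumed)

def simulate(T=0.5, dt=5e-5):
    rE = rI = sE = sI = 0.0
    trace = np.empty(int(T / dt))
    for i in range(trace.size):
        rE += dt / tau_rE * (-rE + f(I_E - w_EI * sI))
        sE += dt / tau_sE * (-sE + rE)
        rI += dt / tau_rI * (-rI + f(I_I + w_IE * sE))
        sI += dt / tau_sI * (-sI + rI)
        trace[i] = rE
    return trace

late = simulate()[5000:]           # second half of the run (0.25 s)
crossings = np.sum((late[:-1] < late.mean()) & (late[1:] >= late.mean()))
# Rough frequency estimate from upward mean-crossings of the E rate.
print(f"approximate oscillation frequency: {crossings / 0.25:.0f} Hz")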
Subject(s)
Brain/physiology, Gamma Rhythm, Neurological Models, Animals, Brain/cytology, Humans, Neurons/physiology, Synaptic Potentials
ABSTRACT
Coincidence detector neurons transmit timing information by responding preferentially to concurrent synaptic inputs. Principal cells of the medial superior olive (MSO) in the mammalian auditory brainstem are superb coincidence detectors. They encode sound source location with high temporal precision, distinguishing submillisecond timing differences among inputs. We investigate computationally how dynamic coupling between the input region (soma and dendrite) and the spike-generating output region (axon and axon initial segment) can enhance coincidence detection in MSO neurons. To do this, we formulate a two-compartment neuron model and extensively characterize coincidence detection sensitivity throughout a parameter space of coupling configurations. We focus on the interaction between coupling configuration and two currents that provide dynamic, voltage-gated, negative feedback in the subthreshold voltage range: sodium current with rapid inactivation and low-threshold potassium current, IKLT. These currents reduce synaptic summation and can prevent spike generation unless inputs arrive with near simultaneity. We show that strong soma-to-axon coupling promotes the negative feedback effects of sodium inactivation and is, therefore, advantageous for coincidence detection. Furthermore, the feedforward combination of strong soma-to-axon coupling and weak axon-to-soma coupling enables spikes to be generated efficiently (few sodium channels needed) and with rapid recovery, which enhances high-frequency coincidence detection. These observations detail the functional benefit of the strongly feedforward configuration that has been observed in physiological studies of MSO neurons. We find that IKLT further enhances coincidence detection sensitivity, but with effects that depend on the coupling configuration. For instance, in models with weak soma-to-axon and weak axon-to-soma coupling, IKLT in the axon enhances coincidence detection more effectively than IKLT in the soma. By using a minimal model of soma-to-axon coupling, we connect structure, dynamics, and computation. Although we consider the particular case of MSO coincidence detectors, our method for creating and exploring a parameter space of two-compartment models can be applied to other neurons.
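A passive two-compartment caricature of the soma/axon coupling idea, with no sodium inactivation or IKLT: a small axon compartment coupled to a larger soma gives the feedforward-like asymmetry, and coincidence sensitivity appears as a peak axonal depolarization that falls off with the input time difference. All values are illustrative placeholders, not MSO measurements.

import numpy as np

C_s, C_a = 10.0, 1.0                     # compartment capacitances (pF); the small axon is driven strongly by the soma
gL_s, gL_a, g_c = 10.0, 1.0, 2.0         # leak and coupling conductances (nS)
g_syn, tau_syn, E_syn = 8.0, 0.2, 60.0   # synaptic peak conductance (nS), time-to-peak (ms), reversal (mV above rest)

def alpha_g(t):
    # Alpha-function synaptic conductance peaking at tau_syn (zero for t <= 0).
    return g_syn * (t / tau_syn) * np.exp(1.0 - t / tau_syn) if t > 0 else 0.0

def peak_axon_voltage(delta_t, T=6.0, dt=0.002):
    """Peak axonal depolarization for two somatic EPSGs separated by delta_t (ms)."""
    v_s = v_a = peak = 0.0
    for step in range(int(T / dt)):
        t = step * dt
        g = alpha_g(t) + alpha_g(t - delta_t)
        dv_s = (-gL_s * v_s + g_c * (v_a - v_s) + g * (E_syn - v_s)) / C_s
        dv_a = (-gL_a * v_a + g_c * (v_s - v_a)) / C_a
        v_s, v_a = v_s + dt * dv_s, v_a + dt * dv_a
        peak = max(peak, v_a)
    return peak

for delta_t in (0.0, 0.2, 0.5, 1.0, 2.0):
    print(f"input time difference {delta_t:.1f} ms -> peak axonal depolarization = "
          f"{peak_axon_voltage(delta_t):.2f} mV")
# A fixed spike threshold placed between the coincident and well-separated peaks would turn
# this read-out into a simple coincidence detector.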