Results 1 - 11 of 11
1.
J Neurosci; 33(32): 13225-32, 2013 Aug 07.
Article in English | MEDLINE | ID: mdl-23926274

ABSTRACT

In both vertebrates and invertebrates, evidence supports separation of luminance increments and decrements (ON and OFF channels) in early stages of visual processing (Hartline, 1938; Joesch et al., 2010); however, less is known about how these parallel pathways are recombined to encode form and motion. In Drosophila, genetic knockdown of inputs to putative ON and OFF pathways and direct recording from downstream neurons in the wide-field motion pathway reveal that local elementary motion detectors exist in pairs that separately correlate contrast polarity channels, ON with ON and OFF with OFF (Joesch et al., 2013). However, behavioral responses to reverse-phi motion of discrete features reveal additional correlations of the opposite signs (Clark et al., 2011). We here present intracellular recordings from feature detecting neurons in the dragonfly that provide direct physiological evidence for the correlation of OFF and ON pathways. These neurons show clear polarity selectivity for feature contrast, responding strongly to targets that are darker than the background and only weakly to dark contrasting edges. These dark target responses are much stronger than the linear combination of responses to ON and OFF edges. We compare these data with output from elementary motion detector-based models (Eichner et al., 2011; Clark et al., 2011), with and without stages of strong center-surround antagonism. Our data support an alternative elementary small target motion detector model, which derives dark target selectivity from the correlation of a delayed OFF with an un-delayed ON signal at each individual visual processing unit (Wiederman et al., 2008, 2009).
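The final model stage described above, correlating a delayed OFF signal with an undelayed ON signal at each visual processing unit, lends itself to a compact sketch. The following one-dimensional illustration is not the authors' implementation; the filter constant, stimulus, and function names are invented for illustration only.

```python
import numpy as np

def estmd_response(luminance, delay_alpha=0.3):
    """Minimal 1-D sketch of an elementary small target motion detector:
    correlate a delayed OFF signal with an undelayed ON signal at each
    photoreceptor. Parameters are illustrative, not fitted to the paper."""
    # Temporal derivative approximates transient ON/OFF channels.
    d = np.diff(luminance, axis=0)            # shape: (time-1, space)
    on = np.maximum(d, 0.0)                   # luminance increments
    off = np.maximum(-d, 0.0)                 # luminance decrements
    # First-order low-pass filter acts as the delay stage on the OFF channel.
    off_delayed = np.zeros_like(off)
    for t in range(1, off.shape[0]):
        off_delayed[t] = (1 - delay_alpha) * off_delayed[t - 1] + delay_alpha * off[t]
    # Multiplicative correlation: delayed OFF with undelayed ON.
    return off_delayed * on

# A dark target crossing a bright background produces an OFF edge at each
# pixel on arrival followed by an ON edge on departure, which this pairing
# detects; a bright target produces the reverse order and is ignored.
T, X = 60, 40
stim = np.ones((T, X))
for t in range(T):
    pos = t // 2                               # one pixel per two frames
    if pos + 3 < X:
        stim[t, pos:pos + 3] = 0.0             # 3-pixel-wide dark target
dark_out = estmd_response(stim).sum()
light_out = estmd_response(1.0 - stim).sum()   # same motion, bright target
```

The asymmetry between `dark_out` and `light_out` reproduces the dark-target polarity selectivity reported in the recordings.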


Subject(s)
Darkness; Models, Neurological; Motion Perception/physiology; Neurons/classification; Neurons/physiology; Visual Pathways/physiology; Action Potentials/physiology; Animals; Female; Insects; Male; Photic Stimulation; Statistics as Topic; Time Factors; Visual Fields; Visual Pathways/cytology
2.
J Exp Biol; 217(Pt 4): 558-69, 2014 Feb 15.
Article in English | MEDLINE | ID: mdl-24198267

ABSTRACT

The behavioral algorithms and neural subsystems for visual figure-ground discrimination are not sufficiently described in any model system. The fly visual system shares structural and functional similarity with that of vertebrates and, like vertebrates, flies robustly track visual figures in the face of ground motion. This computation is crucial for animals that pursue salient objects under the high performance requirements imposed by flight behavior. Flies smoothly track small objects and use wide-field optic flow to maintain flight-stabilizing optomotor reflexes. The spatial and temporal properties of visual figure tracking and wide-field stabilization have been characterized in flies, but how the two systems interact spatially to allow flies to actively track figures against a moving ground has not. We took a systems identification approach in flying Drosophila and measured wing-steering responses to velocity impulses of figure and ground motion independently. We constructed a spatiotemporal action field (STAF) - the behavioral analog of a spatiotemporal receptive field - revealing how the behavioral impulse responses to figure tracking and concurrent ground stabilization vary for figure motion centered at each location across the visual azimuth. The figure tracking and ground stabilization STAFs show distinct spatial tuning and temporal dynamics, confirming the independence of the two systems. When the figure tracking system is activated by a narrow vertical bar moving within the frontal field of view, ground motion is essentially ignored despite comprising over 90% of the total visual input.
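The systems-identification logic here (velocity impulse in, behavioral kernel out, repeated across azimuthal positions) can be illustrated with a toy linear system. The dynamics, gains, and function names below are hypothetical stand-ins, not the measured wing-steering responses.

```python
import numpy as np

def impulse_response(system, n=50):
    """Recover a system's temporal kernel from its response to a unit
    impulse (illustrative; the study measured wing-steering responses
    to velocity impulses of figure and ground motion)."""
    stim = np.zeros(n)
    stim[0] = 1.0
    return system(stim)

def make_system(gain, tau):
    """Hypothetical first-order steering dynamics for one azimuth."""
    def system(stim):
        out = np.zeros_like(stim)
        out[0] = gain * stim[0]
        for t in range(1, len(stim)):
            out[t] = out[t - 1] * np.exp(-1.0 / tau) + gain * stim[t]
        return out
    return system

# Stack the per-azimuth impulse responses into a STAF-like array:
# staf[i, t] is the response at time t to motion at azimuth i.
azimuths = np.linspace(-90, 90, 7)        # degrees across the frontal field
gains = np.cos(np.deg2rad(azimuths))      # assume strongest tracking frontally
staf = np.stack([impulse_response(make_system(g, tau=5.0)) for g in gains])
```

The real STAFs are estimated from behavioral data rather than constructed, but the resulting object has this shape: temporal impulse responses parameterized by visual azimuth.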


Subject(s)
Drosophila melanogaster/physiology; Flight, Animal; Animals; Behavior, Animal; Photic Stimulation; Space Perception; Wings, Animal/physiology
3.
Biol Cybern; 104(4-5): 339-50, 2011 May.
Article in English | MEDLINE | ID: mdl-21626306

ABSTRACT

We generated panoramic imagery by simulating a fly-like robot carrying an imaging sensor, moving in free flight through a virtual arena bounded by walls, and containing obstructions. Flight was conducted under closed-loop control by a bio-inspired algorithm for visual guidance with feedback signals corresponding to the true optic flow that would be induced on an imager (computed by known kinematics and position of the robot relative to the environment). The robot had dynamics representative of a housefly-sized organism, although simplified to two-degree-of-freedom flight to generate uniaxial (azimuthal) optic flow on the retina in the plane of travel. Surfaces in the environment contained images of natural and man-made scenes that were captured by the moving sensor. Two bio-inspired motion detection algorithms and two computational optic flow estimation algorithms were applied to sequences of image data, and their performance as optic flow estimators was evaluated by estimating the mutual information between outputs and true optic flow in an equatorial section of the visual field. Mutual information for individual estimators at particular locations within the visual field was surprisingly low (less than 1 bit in all cases) and considerably poorer for the bio-inspired algorithms than for the man-made computational algorithms. However, mutual information between weighted sums of these signals and comparable sums of the true optic flow showed significant increases for the bio-inspired algorithms, whereas such improvement did not occur for the computational algorithms. Such summation is representative of the spatial integration performed by wide-field motion-sensitive neurons in the third optic ganglion of flies.
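The mutual-information comparison at the heart of this evaluation can be sketched with a simple plug-in histogram estimator. The signals, bin count, and estimator below are illustrative and do not reproduce the paper's exact procedure.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Mutual information in bits between two signals, estimated from a
    joint histogram (a crude plug-in estimator with known positive bias;
    the paper's exact estimation procedure is not reproduced here)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

# A noisy optic flow estimate shares far less information with the true
# flow than the true flow shares with itself.
rng = np.random.default_rng(0)
true_flow = rng.normal(size=5000)
noisy_estimate = true_flow + rng.normal(scale=2.0, size=5000)
mi_noisy = mutual_information(true_flow, noisy_estimate)
mi_self = mutual_information(true_flow, true_flow)
```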


Subject(s)
Flight, Animal; Insects/physiology; Models, Biological; Optics and Photonics; Animals; Biomechanical Phenomena
4.
Front Neural Circuits; 15: 684872, 2021.
Article in English | MEDLINE | ID: mdl-34483847

ABSTRACT

Dragonflies are highly skilled and successful aerial predators that are even capable of selectively attending to one target within a swarm. Detection and tracking of prey is likely to be driven by small target motion detector (STMD) neurons identified from several insect groups. Prior work has shown that dragonfly STMD responses are facilitated by targets moving on a continuous path, enhancing the response gain at the present and predicted future location of targets. In this study, we combined detailed morphological data with computational modeling to test whether a combination of dendritic morphology and nonlinear properties of NMDA receptors could explain these observations. We developed a hybrid computational model of neurons within the dragonfly optic lobe, which integrates numerical and morphological components. The model was able to generate potent facilitation for targets moving on continuous trajectories, including a localized spotlight of maximal sensitivity close to the last seen target location, as also measured during in vivo recordings. The model did not, however, include a mechanism capable of producing a traveling or spreading wave of facilitation. Our data support a strong role for the high dendritic density seen in the dragonfly neuron in enhancing non-linear facilitation. An alternative model based on the morphology of an unrelated type of motion processing neuron from a dipteran fly required more than three times higher synaptic gain in order to elicit similar levels of facilitation, despite having only 20% fewer synapses. Our data support a potential role for NMDA receptors in target tracking and also demonstrate the feasibility of combining biologically plausible dendritic computations with more abstract computational models for basic processing as used in earlier studies.
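The NMDA-receptor nonlinearity invoked here is commonly modeled with the Jahr-Stevens magnesium-block expression. The abstract does not state the exact form used in the model, so the following is a generic illustration of the voltage dependence that makes NMDA receptors a candidate substrate for facilitation.

```python
import numpy as np

def nmda_conductance_fraction(v_mv, mg_mm=1.0):
    """Fraction of NMDA conductance unblocked at membrane potential v_mv
    (mV), using the common Jahr-Stevens phenomenological form. This is a
    standard stand-in, not necessarily the expression used in the model."""
    return 1.0 / (1.0 + (mg_mm / 3.57) * np.exp(-0.062 * v_mv))

# Unblock grows steeply with depolarization, so synaptic input arriving
# on an already-depolarized dendritic region is amplified supralinearly,
# one candidate mechanism for a localized "spotlight" of facilitation
# near the last seen target location.
rest = nmda_conductance_fraction(-70.0)
depolarized = nmda_conductance_fraction(-30.0)
```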


Subject(s)
Odonata; Animals; Computer Simulation; Insects; Neurons
5.
Front Comput Neurosci; 13: 76, 2019.
Article in English | MEDLINE | ID: mdl-31787888

ABSTRACT

Insects can detect the presence of discrete objects in their visual fields based on a range of differences in spatiotemporal characteristics between the images of object and background. This includes but is not limited to relative motion. Evidence suggests that edge detection is an integral part of this capability, and this study examines the ability of a bio-inspired processing model to detect the presence of boundaries between two regions of a one-dimensional visual field, based on general differences in image dynamics. The model consists of two parts. The first is an early vision module inspired by insect visual processing, which implements adaptive photoreception, ON and OFF channels with transient and sustained characteristics, and delayed and undelayed signal paths. This is replicated for a number of photoreceptors in a small linear array. It is followed by an artificial neural network trained to discriminate the presence vs. absence of an edge based on the array output signals. Input data are derived from natural imagery and feature both static and moving edges between regions with moving texture, flickering texture, and static patterns in all possible combinations. The model can discriminate the presence of edges, stationary or moving, at rates far higher than chance. The resources required (numbers of neurons and visual signals) are realistic relative to those available in the insect second optic ganglion, where the bulk of such processing would be likely to take place.
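The stages of the early vision module (adaptive photoreception, half-wave-rectified ON and OFF channels, delayed and undelayed signal paths) can be sketched for a single photoreceptor as follows. Filter constants are illustrative, not the values used in the trained model.

```python
import numpy as np

def lowpass(x, alpha):
    """First-order discrete low-pass filter (leaky integrator)."""
    y = np.zeros_like(x)
    y[0] = x[0]
    for t in range(1, len(x)):
        y[t] = (1 - alpha) * y[t - 1] + alpha * x[t]
    return y

def early_vision(luminance, adapt_alpha=0.05, delay_alpha=0.2):
    """Sketch of the early-vision stages for one photoreceptor: adaptive
    photoreception (subtracting a slow running baseline), rectified
    ON/OFF channels, and delayed/undelayed copies of each."""
    adapted = luminance - lowpass(luminance, adapt_alpha)
    on = np.maximum(adapted, 0.0)
    off = np.maximum(-adapted, 0.0)
    return {"on": on, "off": off,
            "on_delayed": lowpass(on, delay_alpha),
            "off_delayed": lowpass(off, delay_alpha)}

# A luminance step drives a transient ON response that adapts away, while
# the delayed ON copy peaks later, as required for temporal comparisons.
t = np.arange(200)
step = np.where(t > 100, 1.0, 0.0)
chans = early_vision(step)
```

Replicating this unit across a small linear array, then feeding the channel outputs to a trained classifier, gives the structure of the full model.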

6.
Article in English | MEDLINE | ID: mdl-25741276

ABSTRACT

A neural circuit that relies on the electrical properties of NMDA synaptic receptors is shown by numerical and theoretical analysis to be capable of realizing the winner-takes-all function, a powerful computational primitive that is often attributed to biological nervous systems. This biophysically plausible model employs global lateral inhibition in a simple feedback arrangement. As its inputs increase, high-gain and then bi- or multi-stable equilibrium states may be assumed in which there is significant depolarization of a single neuron and hyperpolarization or very weak depolarization of other neurons in the network. The state of the winning neuron conveys analog information about its input. The winner-takes-all characteristic depends on the nonmonotonic current-voltage relation of NMDA receptor ion channels, as well as neural thresholding, and the gain and nature of the inhibitory feedback. Dynamical regimes vary with input strength. Fixed points may become unstable as the network enters a winner-takes-all regime, which can lead to entrained oscillations. Under some conditions, oscillatory behavior can be interpreted as winner-takes-all in nature. Stable winner-takes-all behavior is typically recovered as inputs increase further, but with still larger inputs, the winner-takes-all characteristic is ultimately lost. Network stability may be enhanced by biologically plausible mechanisms.
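The circuit motif (every unit driving a shared inhibitory pool that feeds back onto all units) can be illustrated with a much-simplified rate model. Note that this sketch substitutes threshold-linear units for the NMDA current-voltage nonlinearity the paper analyzes, so it shows only the basic winner-takes-all behavior, not the NMDA-specific dynamical regimes.

```python
import numpy as np

def winner_takes_all(inputs, beta=5.0, steps=500, dt=0.05):
    """Rate-based sketch of winner-takes-all via global lateral
    inhibition in feedback. Threshold-linear units stand in for the
    NMDA-based neurons of the paper; parameters are illustrative."""
    x = np.zeros_like(inputs, dtype=float)     # unit activations
    for _ in range(steps):
        inhibition = beta * x.sum()            # global inhibitory feedback
        drive = inputs - inhibition
        x += dt * (np.maximum(drive, 0.0) - x) # relax toward rectified drive
    return x

out = winner_takes_all(np.array([1.0, 3.0, 2.0]))
winner = int(np.argmax(out))
```

At equilibrium only the most strongly driven unit stays active, and its activation level still scales with its input, echoing the paper's point that the winning neuron conveys analog information.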

7.
Front Neural Circuits; 8: 130, 2014.
Article in English | MEDLINE | ID: mdl-25400550

ABSTRACT

A moving visual figure may contain first-order signals defined by variation in mean luminance, as well as second-order signals defined by constant mean luminance and variation in luminance envelope, or higher-order signals that cannot be estimated by taking higher moments of the luminance distribution. Separating these properties of a moving figure to experimentally probe the visual subsystems that encode them is technically challenging and has resulted in debated mechanisms of visual object detection by flies. Our prior work took a white noise systems identification approach using a commercially available electronic display system to characterize the spatial variation in the temporal dynamics of two distinct subsystems for first- and higher-order components of visual figure tracking. The method relied on the use of single pixel displacements of two visual stimuli according to two binary maximum length shift register sequences (m-sequences) and cross-correlation of each m-sequence with time-varying flight steering measurements. The resultant spatio-temporal action fields represent temporal impulse responses parameterized by the azimuthal location of the visual figure, one STAF for first-order and another for higher-order components of compound stimuli. Here we review m-sequence and reverse correlation procedures, then describe our application in detail, provide Matlab code, validate the STAFs, and demonstrate the utility and robustness of STAFs by predicting the results of other published experimental procedures. This method has demonstrated how two relatively modest innovations on classical white noise analysis (the inclusion of space as a way to organize response kernels, and the use of linear decoupling to measure the response to two channels of visual information simultaneously) could substantially improve our basic understanding of visual processing in the fly.
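The two core ingredients reviewed here, m-sequence generation and reverse correlation, can be sketched briefly. The original provides Matlab code; this is an independent illustrative sketch in which the temporal kernel is hypothetical.

```python
import numpy as np

def m_sequence(n_bits=5, taps=(5, 3)):
    """Binary maximum-length shift-register sequence from a Fibonacci
    LFSR. Taps (5, 3) give a maximal period of 2**5 - 1 = 31."""
    state = [1] * n_bits
    seq = []
    for _ in range(2 ** n_bits - 1):
        seq.append(state[-1])
        feedback = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [feedback] + state[:-1]
    return 2 * np.array(seq) - 1            # map {0, 1} to {-1, +1}

m = m_sequence()
N = len(m)

# Hypothetical temporal kernel standing in for a steering impulse response.
kernel = np.array([0.0, 1.0, 0.6, 0.3, 0.1])
response = np.zeros(N)
for k, h in enumerate(kernel):              # circular convolution with kernel
    response += h * np.roll(m, k)

# Reverse correlation: cross-correlating the response with the stimulus
# sequence recovers the kernel, because an m-sequence's circular
# autocorrelation is nearly a delta function (N at lag 0, -1 elsewhere).
recovered = np.array(
    [np.dot(response, np.roll(m, k)) for k in range(len(kernel))]) / N
```

The small residual error in `recovered` comes from the -1 off-peak autocorrelation of finite m-sequences.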


Subject(s)
Motion Perception/physiology; Software; Vision, Ocular/physiology; Visual Fields/physiology; Visual Pathways/physiology; Analysis of Variance; Animals; Drosophila; Models, Biological; Photic Stimulation; Reproducibility of Results
8.
Curr Biol; 22(6): 482-7, 2012 Mar 20.
Article in English | MEDLINE | ID: mdl-22386313

ABSTRACT

Visual figures may be distinguished based on elementary motion or higher-order non-Fourier features, and flies track both. The canonical elementary motion detector, a compact computation for Fourier motion direction and amplitude, can also encode higher-order signals given elaborate preprocessing. However, the way in which a fly tracks a moving figure containing both elementary and higher-order signals has not been investigated. Using a novel white noise approach, we demonstrate that (1) the composite response to an object containing both elementary motion (EM) and uncorrelated higher-order figure motion (FM) reflects the linear superposition of each component; (2) the EM-driven component is velocity-dependent, whereas the FM component is driven by retinal position; (3) retinotopic variation in EM and FM responses differs between the two; (4) the FM subsystem superimposes saccadic turns upon smooth pursuit; and (5) the two systems in combination are necessary and sufficient to predict the full range of figure tracking behaviors, including those that generate no EM cues at all. This analysis requires extending the model in which fly motion vision is based on simple elementary motion detectors, and it provides a novel method to characterize the subsystems responsible for the pursuit of visual figures.


Subject(s)
Drosophila melanogaster/physiology; Vision, Ocular/physiology; Animals; Female; Models, Biological; Motion Perception/physiology; Retina/physiology; Saccades/physiology
9.
Article in English | MEDLINE | ID: mdl-23112764

ABSTRACT

Dragonflies detect and pursue targets such as other insects for feeding and conspecific interaction. They have a class of neurons highly specialized for this task in their lobula, the "small target motion detecting" (STMD) neurons. One such neuron, CSTMD1, reaches maximum response slowly over hundreds of milliseconds of target motion. Recording the intracellular response from CSTMD1 and a second neuron in this system, BSTMD1, we determined that for the neurons to reach maximum response levels, target motion must produce sequential local activation of elementary motion detecting elements. This facilitation effect is most pronounced when targets move at velocities slower than what was previously thought to be optimal. It is completely disrupted if targets are instantaneously displaced a few degrees from their current location. Additionally, we utilize a simple computational model to discount the parsimonious hypothesis that CSTMD1's slow build-up to maximum response is due to it incorporating a sluggish neural delay filter. Whilst the observed facilitation may be too slow to play a role in prey pursuit flights, which are typically rapidly resolved, we hypothesize that it helps maintain elevated sensitivity during prolonged, aerobatically intricate conspecific pursuits. Since the effect seems to be localized, it most likely enhances the relative salience of the most recently "seen" locations during such pursuit flights.

10.
Article in English | MEDLINE | ID: mdl-20700393

ABSTRACT

The tiny brains of insects presumably impose significant computational limitations on algorithms controlling their behavior. Nevertheless, they perform fast and sophisticated visual maneuvers. This includes tracking features composed of second-order motion, in which the feature is defined by higher-order image statistics, but not simple correlations in luminance. Flies can track the true direction of even theta motions, in which the first-order (luminance) motion is directed opposite the second-order moving feature. We exploited this paradoxical feature tracking response to dissect the particular image properties that flies use to track moving objects. We find that theta motion detection is not simply a result of steering toward any spatially restricted flicker. Rather, our results show that fly high-order feature tracking responses can be broken down into positional and velocity components - in other words, the responses can be modeled as a superposition of two independent steering efforts. We isolate these elements to show that each has differing influence on phase and amplitude of steering responses, and together they explain the time course of second-order motion tracking responses during flight. These observations are relevant to natural scenes, where moving features can be much more complex.
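The central decomposition here, steering as a superposition of a position-driven and a velocity-driven component with differing influence on phase and amplitude, can be illustrated with a toy model. The gains and signal names below are arbitrary, not fitted behavioral values.

```python
import numpy as np

def steering_response(position, dt, k_pos=1.0, k_vel=0.5):
    """Model the steering effort as a superposition of two independent
    components: one driven by retinal position of the feature and one
    by its velocity. Gains are illustrative."""
    velocity = np.gradient(position, dt)
    return k_pos * position + k_vel * velocity

# For sinusoidal feature motion the velocity term leads position by 90
# degrees, so the composite response is phase-advanced relative to a
# purely positional response and has a larger amplitude.
dt = 0.01
t = np.arange(0, 2 * np.pi, dt)
pos = np.sin(t)
resp = steering_response(pos, dt)
phase_lead = np.argmax(pos) - np.argmax(resp)  # samples resp peaks earlier
```

For this choice of gains the composite response is sqrt(1 + 0.5**2) * sin(t + arctan(0.5)): each component leaves a distinct signature in phase and amplitude, which is how the two steering efforts can be isolated.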

11.
PLoS One; 3(7): e2784, 2008 Jul 30.
Article in English | MEDLINE | ID: mdl-18665213

ABSTRACT

We present a computational model for target discrimination based on intracellular recordings from neurons in the fly visual system. Determining how insects detect and track small moving features, often against cluttered moving backgrounds, is an intriguing challenge, both from a physiological and a computational perspective. Previous research has characterized higher-order neurons within the fly brain, known as 'small target motion detectors' (STMD), that respond robustly to moving features, even when the velocity of the target is matched to the background (i.e. with no relative motion cues). We recorded from intermediate-order neurons in the fly visual system that are well suited as a component along the target detection pathway. This full-wave rectifying, transient cell (RTC) reveals independent adaptation to luminance changes of opposite signs (suggesting separate ON and OFF channels) and fast adaptive temporal mechanisms, similar to other cell types previously described. From these physiological data we have created a numerical model for target discrimination. This model includes nonlinear filtering based on the fly optics, the photoreceptors, the first-order interneurons (Large Monopolar Cells), and the newly derived parameters for the RTC. We show that our RTC-based target detection model is well matched to properties described for the STMDs, such as contrast sensitivity, height tuning and velocity tuning. The model output shows that the spatiotemporal profile of small targets is sufficiently rare within natural scene imagery to allow our highly nonlinear 'matched filter' to successfully detect most targets from the background. Importantly, this model can explain this type of feature discrimination without the need for relative motion cues.
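The RTC's signature properties, full-wave rectification with independently adapting ON and OFF channels, can be sketched as follows. Time constants and stimuli are illustrative, not the parameters derived from the recordings.

```python
import numpy as np

def rtc_response(luminance, on_alpha=0.3, off_alpha=0.3):
    """Sketch of a full-wave rectifying transient cell (RTC): luminance
    increments and decrements adapt independently (separate ON and OFF
    channels), and both contribute positively to the output."""
    d = np.diff(luminance, prepend=luminance[0])
    on = np.maximum(d, 0.0)
    off = np.maximum(-d, 0.0)
    out = np.zeros_like(luminance)
    on_state = off_state = 0.0              # fast adaptation states
    for t in range(len(luminance)):
        out[t] = max(on[t] - on_state, 0.0) + max(off[t] - off_state, 0.0)
        on_state = (1 - on_alpha) * on_state + on_alpha * on[t]
        off_state = (1 - off_alpha) * off_state + off_alpha * off[t]
    return out

# Both contrast polarities drive the cell: a dark flash produces OFF then
# ON transients, a bright flash the reverse, and both evoke responses.
t = np.arange(100)
dark_flash = np.where((t > 40) & (t < 60), 0.0, 1.0)
bright_flash = np.where((t > 40) & (t < 60), 2.0, 1.0)
r_dark = rtc_response(dark_flash)
r_bright = rtc_response(bright_flash)
```

A cell with these properties passes the transient, polarity-agnostic signal that the subsequent matched-filter stages of the model exploit.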


Subject(s)
Vision, Ocular; Animals; Computer Simulation; Computers; Contrast Sensitivity; Electrophysiology/methods; Insects; Models, Biological; Models, Neurological; Motion Perception; Neurons/metabolism; Software; Time Factors; Visual Pathways; Visual Perception