Results 1 - 20 of 76
1.
Proc Natl Acad Sci U S A ; 121(27): e2316608121, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38941277

ABSTRACT

Coordination of goal-directed behavior depends on the brain's ability to recover the locations of relevant objects in the world. In humans, the visual system encodes the spatial organization of sensory inputs, but neurons in early visual areas map objects according to their retinal positions, rather than where they are in the world. How the brain computes world-referenced spatial information across eye movements has been widely researched and debated. Here, we tested whether shifts of covert attention are sufficiently precise in space and time to track an object's real-world location across eye movements. We found that observers' attentional selectivity is remarkably precise and is barely perturbed by the execution of saccades. Inspired by recent neurophysiological discoveries, we developed an observer model that rapidly estimates the real-world locations of objects and allocates attention within this reference frame. The model recapitulates the human data and provides a parsimonious explanation for previously reported phenomena in which observers allocate attention to task-irrelevant locations across eye movements. Our findings reveal that visual attention operates in real-world coordinates, which can be computed rapidly at the earliest stages of cortical processing.
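The core computation described above can be illustrated with a minimal sketch (not the authors' observer model): a target's world location is the sum of its retinal position and the current gaze position, so attention allocated in world coordinates automatically lands on the correct retinal locus after a saccade. The 1-D geometry and all names are illustrative assumptions.

```python
# Sketch: tracking an object's world location across a saccade by
# combining its retinal position with the current gaze position.
# 1-D visual angles in degrees; all names are illustrative.

def world_from_retinal(retinal_pos, gaze_pos):
    """World location = where the eye points + offset on the retina."""
    return gaze_pos + retinal_pos

def attention_target_after_saccade(world_target, new_gaze):
    """Retinal locus attention must select after the eye moves,
    if attention is allocated in world coordinates."""
    return world_target - new_gaze

# Object at world position 10 deg, gaze initially at 0 deg:
world = world_from_retinal(retinal_pos=10.0, gaze_pos=0.0)
# After a saccade to 7 deg, the same world object falls at 3 deg
# on the retina; world-based attention shifts there automatically.
assert attention_target_after_saccade(world, new_gaze=7.0) == 3.0
```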


Subjects
Attention, Saccades, Humans, Attention/physiology, Saccades/physiology, Adult, Male, Female, Visual Perception/physiology, Visual Fields/physiology, Models, Neurological, Photic Stimulation/methods
2.
Proc Natl Acad Sci U S A ; 120(38): e2305759120, 2023 09 19.
Article in English | MEDLINE | ID: mdl-37695898

ABSTRACT

Movement control is critical for successful interaction with our environment. However, movement does not occur in complete isolation from sensation, and this is particularly true of eye movements. Here, we show that the neuronal eye movement commands emitted by the superior colliculus (SC), a structure classically associated with oculomotor control, encompass a robust visual sensory representation of eye movement targets. Thus, similar saccades toward different images are associated with different saccade-related "motor" bursts. Such sensory tuning in SC saccade motor commands appeared for all image manipulations that we tested, from simple visual features to real-life object images, and it was strongest in the most motor-like neurons of the deeper collicular layers. Visual-feature discrimination performance in the motor commands was also stronger than in visual responses. Comparing SC motor command feature discrimination performance to that in the primary visual cortex during steady-state gaze fixation revealed that collicular motor bursts possess a reliable perisaccadic sensory representation of the peripheral saccade target's visual appearance, exactly when retinal input is expected to be most uncertain. Our results demonstrate that SC neuronal movement commands likely serve a fundamentally sensory function.


Subjects
Eye Movements, Movement, Motor Neurons, Saccades, Discrimination, Psychological
3.
Proc Natl Acad Sci U S A ; 119(19): e2121660119, 2022 05 10.
Article in English | MEDLINE | ID: mdl-35503912

ABSTRACT

Visually active animals coordinate vision and movement to achieve spectacular tasks. An essential prerequisite to guide agile locomotion is to keep gaze level and stable. Since the eyes, head and body can move independently to control gaze, how does the brain effectively coordinate these distinct motor outputs? Furthermore, since the eyes, head, and body have distinct mechanical constraints (e.g., inertia), how does the nervous system adapt its control to these constraints? To address these questions, we studied gaze control in flying fruit flies (Drosophila) using a paradigm which permitted direct measurement of head and body movements. By combining experiments with mathematical modeling, we show that body movements are sensitive to the speed of visual motion whereas head movements are sensitive to its acceleration. This complementary tuning of the head and body permitted flies to stabilize a broader range of visual motion frequencies. We discovered that flies implement proportional-derivative (PD) control, but unlike classical engineering control systems, relay the proportional and derivative signals in parallel to two distinct motor outputs. This scheme, although derived from flies, recapitulated classic primate vision responses thus suggesting convergent mechanisms across phyla. By applying scaling laws, we quantify that animals as diverse as flies, mice, and humans as well as bio-inspired robots can benefit energetically by having a high ratio between head, body, and eye inertias. Our results provide insights into the mechanical constraints that may have shaped the evolution of active vision and present testable neural control hypotheses for visually guided behavior across phyla.
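The parallel control scheme described above can be sketched as follows: unlike a classical PD controller, which sums the proportional and derivative terms into one actuator command, the proportional (speed-sensitive) signal drives the body and the derivative (acceleration-sensitive) signal drives the head. The gains, discrete-derivative form, and error trace are illustrative assumptions, not the authors' fitted model.

```python
# Sketch of a parallel PD scheme: the proportional path commands the
# body and the derivative path commands the head, as two distinct
# motor outputs rather than one summed actuator signal.

def parallel_pd(errors, dt, kp=1.0, kd=0.1):
    """Return per-timestep (body_cmd, head_cmd) from a visual-error trace."""
    body, head = [], []
    prev = errors[0]
    for e in errors:
        body.append(kp * e)                # proportional path -> body
        head.append(kd * (e - prev) / dt)  # derivative path -> head
        prev = e
    return body, head

# Ramp error: the body command grows with the error itself, while the
# head command responds only to the error's rate of change.
body, head = parallel_pd([0.0, 1.0, 2.0, 3.0], dt=1.0)
```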


Subjects
Head Movements, Head, Animals, Eye Movements, Feedback, Head/physiology, Head Movements/physiology, Motion (Physics)
4.
Proc Natl Acad Sci U S A ; 118(34)2021 08 24.
Article in English | MEDLINE | ID: mdl-34417308

ABSTRACT

Natural vision is a dynamic and continuous process. Under natural conditions, visual object recognition typically involves continuous interactions between ocular motion and visual contrasts, resulting in dynamic retinal activations. In order to identify the dynamic variables that participate in this process and are relevant for image recognition, we used a set of images that are just above and below the human recognition threshold and whose recognition typically requires >2 s of viewing. We recorded eye movements of participants while attempting to recognize these images within trials lasting 3 s. We then assessed the activation dynamics of retinal ganglion cells resulting from ocular dynamics using a computational model. We found that while the saccadic rate was similar between recognized and unrecognized trials, the fixational ocular speed was significantly larger for unrecognized trials. Interestingly, however, retinal activation level was significantly lower during these unrecognized trials. We used retinal activation patterns and oculomotor parameters of each fixation to train a binary classifier, classifying recognized from unrecognized trials. Only retinal activation patterns could predict recognition, reaching 80% correct classifications on the fourth fixation (on average, ∼2.5 s from trial onset). We thus conclude that the information that is relevant for visual perception is embedded in the dynamic interactions between the oculomotor sequence and the image. Hence, our results suggest that ocular dynamics play an important role in recognition and that understanding the dynamics of retinal activation is crucial for understanding natural vision.
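The classification step above can be illustrated with a minimal sketch: a single-feature threshold classifier separating recognized from unrecognized trials by per-trial retinal activation level, in the spirit of the finding that activation was lower on unrecognized trials. The data and the threshold rule are illustrative assumptions, not the authors' classifier.

```python
# Sketch: classify recognized vs. unrecognized trials from one
# feature (retinal activation level) with a learned threshold.
# Label 1 = recognized, predicted when activation >= threshold.

def fit_threshold(activations, labels):
    """Pick the activation threshold that best separates the classes."""
    best_t, best_acc = min(activations), -1.0
    for t in sorted(activations):
        preds = [1 if a >= t else 0 for a in activations]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Toy data: recognized trials tend to show higher retinal activation.
acts   = [0.9, 0.8, 0.85, 0.4, 0.35, 0.5]
labels = [1,   1,   1,    0,   0,    0]
t, acc = fit_threshold(acts, labels)
```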


Subjects
Fixation, Ocular, Retina/physiology, Visual Perception/physiology, Adult, Female, Humans, Male, Pilot Projects, Saccades, Young Adult
5.
Sensors (Basel) ; 24(13)2024 Jul 07.
Article in English | MEDLINE | ID: mdl-39001185

ABSTRACT

The types of obstacles encountered in the road environment are complex and diverse, and accurate, reliable obstacle detection is key to improving traffic safety. Traditional obstacle detection methods are limited by the types of samples they are trained on and therefore cannot comprehensively detect unfamiliar obstacles. This paper therefore proposes an obstacle detection method based on longitudinal active vision. Obstacles are recognized from the height difference between obstacle imaging points and ground points in the image, so obstacle detection in the target area is achieved without accurately distinguishing obstacle categories, reducing the spatial and temporal complexity of road-environment perception. The method is compared against obstacle detection methods based on VIDAR (vision-IMU based detection and range method), VIDAR + MSER, and YOLOv8s. The experimental results show that the proposed method has high detection accuracy, verifying the feasibility of obstacle detection in road environments containing unknown obstacles.
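The height-difference idea can be sketched under a flat-ground assumption: an image row below the horizon predicts a unique ground distance, so a tracked point whose independently measured distance disagrees with that prediction must sit above the road surface, i.e., it is an obstacle point. The pinhole camera model, parameter values, and threshold below are illustrative assumptions, not the paper's calibration.

```python
# Sketch: flag obstacle points as those whose measured distance
# deviates from the flat-ground prediction for their image row.
# Camera: height cam_height (m), focal length focal_px (pixels).

def ground_distance(row, horizon_row, cam_height, focal_px):
    """Distance to the ground point imaged at `row` (row > horizon)."""
    return cam_height * focal_px / (row - horizon_row)

def is_obstacle(row, measured_dist, horizon_row=240,
                cam_height=1.5, focal_px=800, tol=0.2):
    """True if the measured distance deviates from the flat-ground
    prediction by more than `tol` (relative)."""
    predicted = ground_distance(row, horizon_row, cam_height, focal_px)
    return abs(measured_dist - predicted) / predicted > tol

# A true road point at row 360: predicted 1.5 * 800 / 120 = 10 m.
assert not is_obstacle(row=360, measured_dist=10.0)
# A point at the same row but measured at 14 m lies above the road.
assert is_obstacle(row=360, measured_dist=14.0)
```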

6.
J Neurophysiol ; 130(2): 225-237, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37377194

ABSTRACT

For successful adaptive behavior, exogenous environmental events must be sensed and reacted to as efficiently as possible. In the lab, the mechanisms underlying such efficiency are often studied with eye movements. Using controlled trials, careful measures of eye movement reaction times, directions, and kinematics suggest a form of "exogenous" oculomotor capture by external events. However, even in controlled trials, exogenous onsets necessarily come asynchronously to internal brain state. We argue that variability in the effectiveness of "exogenous" capture is inevitable. We review an extensive set of evidence demonstrating that before orienting must come interruption, a process that partially explains such variability. More importantly, we present a novel neural mechanistic account of interruption, leveraging the presence of early sensory processing capabilities in the very final stages of oculomotor control brain circuitry.


Subjects
Saccades, Superior Colliculi, Eye Movements, Reaction Time, Brain
7.
J Theor Biol ; 562: 111416, 2023 04 07.
Article in English | MEDLINE | ID: mdl-36681182

ABSTRACT

Developing a functional description of the neural control circuits and visual feedback paths underlying insect flight behaviors is an active research area. Feedback controllers incorporating engineering models of the insect visual system outputs have described some flight behaviors, yet they do not explain how insects are able to stabilize their body position relative to nearby targets such as neighbors or forage sources, especially in challenging environments in which optic flow is poor. The insect experimental community is simultaneously recording a growing library of in-flight head and eye motions that may be linked to increased perception. This study develops a quantitative model of the optic flow experienced by a flying insect or robot during head yawing rotations (distinct from the lateral peering motions in previous work) with a single other target in view. The study then applies a model of insect visuomotor feedback to show, via analysis and simulation of five species, that these head motions sufficiently enrich the optic flow and that output feedback can provide position regulation relative to the single target (asymptotic stability). In the simplifying case of pure rotation relative to the body, theoretical analysis provides a stronger stability guarantee. The results are robust to anatomical neck angle limits and body vibrations, persist in more detailed Drosophila lateral-directional flight dynamics simulations, and generalize to recent retinal motion studies. Together, these results suggest that the optic flow enrichment provided by head or pseudopupil rotation could be used in an insect's neural processing circuit to enable position regulation.


Subjects
Optic Flow, Animals, Drosophila, Flight, Animal/physiology, Insects/physiology, Retina
8.
J Neurophysiol ; 125(6): 2432-2443, 2021 06 01.
Article in English | MEDLINE | ID: mdl-34010579

ABSTRACT

Successful interaction with the environment requires the dissociation of self-induced from externally induced sensory stimulation. Temporal proximity of action and effect is often used as an indicator of whether an observed event should be interpreted as a result of one's own actions. We tested how the delay between an action (press of a touch bar) and an effect (onset of simulated self-motion) influences the processing of visually simulated self-motion in the ventral intraparietal area (VIP) of macaque monkeys. We found that a delay between the action and the start of the self-motion stimulus led to a rise of activity above baseline before motion onset in a subpopulation of 21% of the investigated neurons. In the responses to the stimulus, we found significantly lower sustained activity when the press of the touch bar and the motion onset were contiguous than when the motion onset was delayed. We speculate that this weak inhibitory effect might be part of a mechanism that sharpens the tuning of VIP neurons during self-induced motion and thus has the potential to increase the precision of heading information required to adjust the direction of self-motion in everyday navigational tasks.NEW & NOTEWORTHY Neurons in the macaque ventral intraparietal area (VIP) respond to sensory stimulation related to self-motion, e.g., visual optic flow. Here, we found that self-motion-induced activation depends on the sense of agency, i.e., it differed when optic flow was perceived as self-induced versus externally induced. This demonstrates that area VIP is well suited for studying the interplay between active behavior and sensory processing during self-motion.


Subjects
Kinesthesis/physiology, Motion Perception/physiology, Motor Activity/physiology, Optic Flow/physiology, Parietal Lobe/physiology, Animals, Electrocorticography, Macaca mulatta, Male, Neurons/physiology
9.
J Neurophysiol ; 125(4): 1121-1138, 2021 04 01.
Article in English | MEDLINE | ID: mdl-33534661

ABSTRACT

The primate superior colliculus (SC) has recently been shown to possess both a large foveal representation as well as a varied visual processing repertoire. This structure is also known to contribute to eye movement generation. Here, we describe our current understanding of how SC visual and movement-related signals interact within the realm of small eye movements associated with the foveal scale of visuomotor behavior. Within the SC's foveal representation, there is a full spectrum of visual, visual-motor, and motor-related discharge for fixational eye movements. Moreover, a substantial number of neurons only emit movement-related discharge when microsaccades are visually guided, but not when similar movements are generated toward a blank. This represents a particularly striking example of integrating vision and action at the foveal scale. Beyond that, SC visual responses themselves are strongly modulated, and in multiple ways, by the occurrence of small eye movements. Intriguingly, this impact can extend to eccentricities well beyond the fovea, causing both sensitivity enhancement and suppression in the periphery. Because of large foveal magnification of neural tissue, such long-range eccentricity effects are neurally warped into smaller differences in anatomical space, providing a structural means for linking peripheral and foveal visual modulations around fixational eye movements. Finally, even the retinal-image visual flows associated with tiny fixational eye movements are signaled fairly faithfully by peripheral SC neurons with relatively large receptive fields. 
These results demonstrate how studying active vision at the foveal scale represents an opportunity for understanding primate vision during natural behaviors involving ever-present foveating eye movements.NEW & NOTEWORTHY The primate superior colliculus (SC) is ideally suited for active vision at the foveal scale: it enables detailed foveal visual analysis by accurately driving small eye movements, and it also possesses a visual processing machinery that is sensitive to active eye movement behavior. Studying active vision at the foveal scale in the primate SC is informative for broader aspects of active perception, including the overt and covert processing of peripheral extra-foveal visual scene locations.


Subjects
Behavior, Animal/physiology, Eye Movements/physiology, Fovea Centralis/physiology, Motor Activity/physiology, Primates/physiology, Psychomotor Performance/physiology, Superior Colliculi/physiology, Visual Perception/physiology, Animals
10.
J Neurosci ; 39(41): 8064-8078, 2019 10 09.
Article in English | MEDLINE | ID: mdl-31488610

ABSTRACT

Heading perception in primates depends heavily on visual optic-flow cues. Yet during self-motion, heading percepts remain stable, even though smooth-pursuit eye movements often distort optic flow. According to theoretical work, self-motion can be represented accurately by compensating for these distortions in two ways: via retinal mechanisms or via extraretinal efference-copy signals, which predict the sensory consequences of movement. Psychophysical evidence strongly supports the efference-copy hypothesis, but physiological evidence remains inconclusive. Neurons that signal the true heading direction during pursuit are found in visual areas of monkey cortex, including the dorsal medial superior temporal area (MSTd). Here we measured heading tuning in MSTd using a novel stimulus paradigm, in which we stabilize the optic-flow stimulus on the retina during pursuit. This approach isolates the effects on neuronal heading preferences of extraretinal signals, which remain active while the retinal stimulus is prevented from changing. Our results from 3 female monkeys demonstrate a significant but small influence of extraretinal signals on the preferred heading directions of MSTd neurons. Under our stimulus conditions, which are rich in retinal cues, we find that retinal mechanisms dominate physiological corrections for pursuit eye movements, suggesting that extraretinal cues, such as predictive efference-copy mechanisms, have a limited role under naturalistic conditions.SIGNIFICANCE STATEMENT Sensory systems discount stimulation caused by an animal's own behavior. For example, eye movements cause irrelevant retinal signals that could interfere with motion perception. The visual system compensates for such self-generated motion, but how this happens is unclear. Two theoretical possibilities are a purely visual calculation or one using an internal signal of eye movements to compensate for their effects. 
The latter can be isolated by experimentally stabilizing the image on a moving retina, but this approach has never been adopted to study motion physiology. Using this method, we find that extraretinal signals have little influence on activity in visual cortex, whereas visually based corrections for ongoing eye movements have stronger effects and are likely most important under real-world conditions.


Subjects
Orientation/physiology, Retina/physiology, Temporal Lobe/physiology, Algorithms, Animals, Cues, Electrophysiological Phenomena/physiology, Female, Fixation, Ocular/physiology, Macaca mulatta, Optic Flow, Photic Stimulation, Psychomotor Performance/physiology, Pursuit, Smooth/physiology, Visual Pathways/physiology
11.
J Neurosci ; 39(32): 6265-6275, 2019 08 07.
Article in English | MEDLINE | ID: mdl-31182633

ABSTRACT

In this paper, we draw on recent theoretical work on active perception, which suggests that the brain uses an internal (i.e., generative) model to make inferences about the causes of its sensations. This view treats visual sensations as consequent on action (i.e., saccades) and implies that visual percepts must be actively constructed via a sequence of eye movements. Oculomotor control calls on a distributed set of brain sources that includes the dorsal and ventral frontoparietal (attention) networks. We argue that connections from the frontal eye fields to ventral parietal sources represent the mapping from "where" (fixation location) to information derived from "what" representations in the ventral visual stream. During scene construction, this mapping must be learned, putatively through changes in the effective connectivity of these synapses. Here, we test the hypothesis that the coupling between the dorsal frontal cortex and the right temporoparietal cortex is modulated during saccadic interrogation of a simple visual scene. Using dynamic causal modeling for magnetoencephalography with (male and female) human participants, we assess the evidence for changes in effective connectivity by comparing models that allow for this modulation with models that do not. We find strong evidence for modulation of connections between the two attention networks; namely, a disinhibition of the ventral network by its dorsal counterpart.SIGNIFICANCE STATEMENT This work draws from recent theoretical accounts of active vision and provides empirical evidence for changes in synaptic efficacy consistent with these computational models. In brief, we used magnetoencephalography in combination with eye tracking to assess the neural correlates of a form of short-term memory during a dot cancellation task.
Using dynamic causal modeling to quantify changes in effective connectivity, we found evidence that the coupling between the dorsal and ventral attention networks changed during the saccadic interrogation of a simple visual scene. Intuitively, this is consistent with the idea that these neuronal connections may encode beliefs about "what I would see if I looked there", and that this mapping is optimized as new data are obtained with each fixation.


Subjects
Attention/physiology, Models, Neurological, Visual Pathways/physiology, Visual Perception/physiology, Adolescent, Adult, Causality, Connectome, Culture, Dominance, Cerebral, Female, Fixation, Ocular/physiology, Frontal Lobe/physiology, Humans, Magnetoencephalography, Male, Nerve Net/physiology, Parietal Lobe/physiology, Perceptual Disorders/physiopathology, Photic Stimulation, Saccades/physiology, Temporal Lobe/physiology, Young Adult
12.
J Neurophysiol ; 124(1): 40-48, 2020 07 01.
Article in English | MEDLINE | ID: mdl-32432502

ABSTRACT

The term "active sensing" has been defined in multiple ways. Most strictly, the term refers to sensing that uses self-generated energy to sample the environment (e.g., echolocation). More broadly, the definition includes all sensing that occurs when the sensor is moving (e.g., tactile stimuli obtained by an immobile versus moving fingertip) and, broader still, includes all sensing guided by attention or intent (e.g., purposeful eye movements). The present work offers a framework to help disambiguate aspects of the "active sensing" terminology and reveals properties of tactile sensing unique among all modalities. The framework begins with the well-described "sensorimotor loop," which expresses the perceptual process as a cycle involving four subsystems: environment, sensor, nervous system, and actuator. Using system dynamics, we examine how information flows through the loop. This "sensory-energetic loop" reveals two distinct sensing mechanisms that subdivide active sensing into homeoactive and alloactive sensing. In homeoactive sensing, the animal can change the state of the environment, while in alloactive sensing the animal can alter only the sensor's configurational parameters and thus the mapping between input and output. Given these new definitions, examination of the sensory-energetic loop helps identify two unique characteristics of tactile sensing: 1) in tactile systems, alloactive and homeoactive sensing merge to a mutually controlled sensing mechanism, and 2) tactile sensing may require fundamentally different predictions to anticipate reafferent input. We expect this framework may help resolve ambiguities in the active sensing community and form a basis for future theoretical and experimental work regarding alloactive and homeoactive sensing.


Subjects
Attention/physiology, Behavior, Animal/physiology, Intention, Perception/physiology, Sensation/physiology, Animals, Touch/physiology, Touch Perception/physiology
13.
J Neurophysiol ; 123(3): 912-926, 2020 03 01.
Article in English | MEDLINE | ID: mdl-31967932

ABSTRACT

Segregation of objects from the background is a basic and essential property of the visual system. We studied the neural detection of objects defined by an orientation difference from the background in barn owls (Tyto alba). We presented wide-field displays of densely packed stripes with a dominant orientation. Visual objects were created by orienting a circular patch differently from the background. In head-fixed conditions, neurons in both tecto- and thalamofugal visual pathways (optic tectum and visual Wulst) were weakly responsive to these objects in their receptive fields. However, notably, in freely viewing conditions, barn owls occasionally perform peculiar side-to-side head motions (peering) when scanning the environment. In the second part of the study we thus recorded the neural response from head-fixed owls while the visual displays replicated the peering conditions; i.e., the displays (objects and backgrounds) were shifted along trajectories that induced a retinal motion identical to sampled peering motions during viewing of a static object. These conditions induced dramatic neural responses to the objects, in the very same neurons that were unresponsive to the objects in static displays. By reverting to circular motions of the display, we show that the pattern of the neural response is mostly shaped by the orientation of the background relative to motion and not the orientation of the object. Thus, our findings provide evidence that peering and/or other self-motions can facilitate orientation-based figure-ground segregation through interaction with inhibition from the surround.NEW & NOTEWORTHY Animals frequently move their sensory organs and thereby create motion cues that can enhance object segregation from background. We address a special example of such active sensing, in barn owls. When scanning the environment, barn owls occasionally perform small-amplitude side-to-side head movements called peering.
We show that the visual outcome of such peering movements elicit neural detection of objects that are rotated from the dominant orientation of the background scene and which are otherwise mostly undetected. These results suggest a novel role for self-motions in sensing objects that break the regular orientation of elements in the scene.


Subjects
Head Movements/physiology, Motion Perception/physiology, Pattern Recognition, Visual/physiology, Space Perception/physiology, Superior Colliculi/physiology, Telencephalon/physiology, Visual Pathways/physiology, Animals, Female, Male, Optical Illusions, Strigiformes
14.
Learn Behav ; 46(1): 7-22, 2018 03.
Article in English | MEDLINE | ID: mdl-29484541

ABSTRACT

Navigation is an essential skill for many animals, and understanding how animals use environmental information, particularly visual information, to navigate has a long history in both ethology and psychology. In birds, the dominant approach for investigating navigation at small scales comes from comparative psychology, which emphasizes the cognitive representations underpinning spatial memory. The majority of this work is based in the laboratory, and it is unclear whether this context itself affects the information that birds learn and use when they search for a location. Data from hummingbirds suggest that birds in the wild might use visual information in quite a different manner. To reconcile these differences, we propose a new approach to avian navigation, inspired by the sensory-driven study of navigation in insects. Using methods devised for studying insect navigation, it is possible to quantify the visual information available to navigating birds and then to determine how this information influences those birds' navigation decisions. Focusing on four areas that we consider characteristic of the insect navigation perspective, we discuss how this approach has shed light on the information insects use to navigate, and we assess the prospects of taking a similar approach with birds. Although birds and insects differ in many ways, there is nothing in the insect-inspired approach we describe that means these methods must be restricted to insects. On the contrary, adopting such an approach could provide a fresh perspective on the well-studied question of how birds navigate through a variety of environments.


Subjects
Birds/physiology, Insects/physiology, Spatial Memory/physiology, Spatial Navigation/physiology, Animals
15.
Sensors (Basel) ; 18(3)2018 Mar 07.
Article in English | MEDLINE | ID: mdl-29518902

ABSTRACT

In the last few years, there has been steadily growing interest in autonomous vehicles and robotic systems. While many of these agents are expected to have limited resources, they should nonetheless be able to dynamically interact with other objects in their environment. We present an approach in which lightweight sensory and processing techniques, requiring very limited memory and processing power, are successfully applied to the task of object retrieval using sensors of different modalities. We use the Hough framework to fuse optical and orientation information from the different views of the objects. In the presented spatio-temporal perception technique, we apply active vision: based on the analysis of initial measurements, the direction of the next view is determined so as to increase the retrieval hit rate. The performance of the proposed methods is demonstrated on three datasets contaminated with heavy noise.

16.
J Neurosci ; 36(17): 4876-87, 2016 04 27.
Article in English | MEDLINE | ID: mdl-27122042

ABSTRACT

Here, we studied neural correlates of orientation-contrast-based saliency in the optic tectum (OT) of barn owls. Neural responses in the intermediate/deep layers of the OT were recorded from lightly anesthetized owls confronted with arrays of bars in which one bar (the target) was orthogonal to the remaining bars (the distractors). Responses to target bars were compared with responses to distractor bars in the receptive field (RF). Initially, no orientation-contrast sensitivity was observed. However, if the position of the target bar in the array was randomly shuffled across trials so that it occasionally appeared in the RF, then such sensitivity emerged. The effect started to become significant after three or four positional changes of the target bar and strengthened with additional trials. Our data further suggest that this effect arises from specific adaptation to the stimulus in the RF combined with suppression from the surround. By jittering the position of the bar inside the RF across trials, we demonstrate that the adaptation has two components, one position specific and one orientation specific. The findings give rise to the hypothesis that barn owls, by actively scanning the scene, can induce adaptation of the tectal circuitry to the common orientation and thus achieve a "pop-out" of rare orientations. Such a model is consistent with several behavioral observations in owls and may be relevant to other visual features and species. SIGNIFICANCE STATEMENT: Natural scenes are often characterized by a dominant orientation, such as the scenery of a pine forest or the sand dunes in a windy desert. Orientation that contrasts with the regularity of the scene is therefore perceived as salient by many animals, as a means of breaking camouflage.
By actively moving the scene between each trial, we show here that neurons in the retinotopic map of the barn owl's optic tectum specifically adapt to the common orientation, giving rise to preferential representation of odd orientations. Based on this, we suggest a new mechanism for orientation-based camouflage breaking that links active scanning of scenes with neural adaptation. This mechanism may be relevant to pop-out in other species and visual features.


Subjects
Adaptation, Physiological/physiology, Orientation/physiology, Strigiformes/physiology, Superior Colliculi/physiology, Animals, Contrast Sensitivity, Female, Male, Neurons/physiology, Superior Colliculi/cytology, Vision, Ocular, Visual Fields/physiology
17.
Sensors (Basel) ; 17(8)2017 Aug 09.
Article in English | MEDLINE | ID: mdl-28792483

ABSTRACT

This paper presents a novel concept for real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can switch between hundreds of different views per second. By accelerating video capture, computation, and actuation to millisecond granularity for time-division multithreaded processing with ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. This enables a single active vision system to act as virtual left and right pan-tilt cameras that simultaneously shoot a pair of stereo images of the same object, observed from arbitrary viewpoints, by switching the direction of the system's mirrors frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views per second; it functions as a catadioptric active stereo rig with virtual left and right pan-tilt tracking cameras, each capturing 8-bit color 512 × 512 images at 250 fps, that mechanically tracks a fast-moving object with sufficient parallax for accurate 3D measurement. Several tracking experiments with objects moving in 3D space are described to demonstrate the performance of our monocular stereo tracking system.
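The time-division multiplexing arithmetic (500 view switches per second shared by two virtual cameras gives 250 fps each) and the depth recovery that the resulting parallax enables can be sketched with the textbook rectified pinhole-stereo relation Z = f·B/d. This is an editorial illustration; the focal length, baseline, and disparity values below are hypothetical, not taken from the paper.

```python
def virtual_frame_rate(switches_per_second: int, n_views: int) -> float:
    """Time-division multiplexing: a mirror cycling through n views
    gives each virtual camera 1/n of the total switch rate."""
    return switches_per_second / n_views

def stereo_depth(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Rectified pinhole-stereo depth: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

rate = virtual_frame_rate(500, 2)  # two virtual pan-tilt cameras -> 250.0 fps each
z = stereo_depth(f_px=800.0, baseline_m=0.20, disparity_px=40.0)  # -> 4.0 m
```

The same relation shows why a sufficiently wide virtual baseline matters: for a fixed focal length, a larger baseline yields a larger disparity for the same depth, and hence a smaller relative depth error.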

18.
J Neurosci ; 35(19): 7403-13, 2015 May 13.
Article in English | MEDLINE | ID: mdl-25972169

ABSTRACT

The brain is proposed to operate through probabilistic inference, testing and refining predictions about the world. Here, we search for neural activity compatible with the violation of active predictions, learned from the contingencies between actions and the consequent changes in sensory input. We focused on vision, where eye movements produce stimuli shifts that could, in principle, be predicted. We compared, in humans, error signals to saccade-contingent changes of veridical and inferred inputs by contrasting the electroencephalographic activity after saccades to a stimulus presented inside or outside the blind spot. We observed early (<250 ms) and late (>250 ms) error signals after stimulus change, indicating the violation of sensory and associative predictions, respectively. Remarkably, the late response was diminished for blind-spot trials. These results indicate that predictive signals occur across multiple levels of the visual hierarchy, based on generative models that differentiate between signals that originate from the outside world and those that are inferred.


Subjects
Brain Mapping , Evoked Potentials, Visual/physiology , Eye Movements/physiology , Adolescent , Adult , Electroencephalography , Female , Humans , Linear Models , Male , Photic Stimulation , Predictive Value of Tests , Reaction Time/physiology , Time Factors , Visual Fields/physiology , Young Adult
19.
J Exp Biol ; 217(Pt 15): 2633-42, 2014 Aug 01.
Article in English | MEDLINE | ID: mdl-25079890

ABSTRACT

Insects inform themselves about the 3D structure of their surroundings through motion parallax. During flight, they often simplify this task by minimising rotational image movement. Coordinated head and body movements generate rapid shifts of gaze separated by periods of almost zero rotational movement, during which the distance of objects from the insect can be estimated from pure translational optic flow. This saccadic strategy is less appropriate for assessing the distance between objects. Bees and wasps face this problem when learning the position of their nest-hole relative to objects close to it. They acquire the necessary information during specialised flights performed on leaving the nest. Here, we show that the bumblebee's saccadic strategy differs from other reported cases. In the fixations between saccades, a bumblebee's head continues to turn slowly, generating rotational flow. At specific points in learning flights, these imperfect fixations generate a form of 'pivoting parallax' that is centred on the nest and enhances the visibility of features near it. Bumblebees may thus exploit an alternative to the motion parallax delivered by the standard 'saccade and fixate' strategy, one in which residual rotational flow helps in assessing the distances of objects from a focal point of interest.
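The distance estimate available from pure translational optic flow reduces to a one-line relation: an object at angular eccentricity θ from the direction of travel sweeps across the retina at ω = v·sin(θ)/d, so d = v·sin(θ)/ω. This is the standard motion-parallax formula, given here as an editorial illustration; the numeric values are hypothetical.

```python
import math

def depth_from_flow(v_trans: float, theta_deg: float, omega_rad_s: float) -> float:
    """Depth from pure translational optic flow.

    v_trans:     translational speed of the observer (m/s)
    theta_deg:   angular eccentricity of the object from the heading direction
    omega_rad_s: measured angular velocity of the object's image (rad/s),
                 valid only once rotational flow has been nulled or subtracted
    """
    return v_trans * math.sin(math.radians(theta_deg)) / omega_rad_s

# A bee translating at 0.5 m/s sees an object 90 degrees to the side
# drift across its eye at 1.0 rad/s -> the object is 0.5 m away.
d = depth_from_flow(v_trans=0.5, theta_deg=90.0, omega_rad_s=1.0)
```

The precondition in the docstring is the point of the 'saccade and fixate' strategy: the relation holds only when rotational flow is (near) zero, which is exactly what the bumblebee's slow residual head turns violate, trading this simple cue for the nest-centred pivoting parallax described above.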


Subjects
Bees/physiology , Flight, Animal/physiology , Head Movements , Learning/physiology , Optic Flow/physiology , Animals , Movement , Orientation/physiology , Saccades/physiology , Space Perception/physiology
20.
J Exp Biol ; 217(Pt 11): 1933-9, 2014 Jun 01.
Article in English | MEDLINE | ID: mdl-24625647

ABSTRACT

Primates can analyse visual scenes extremely rapidly, making accurate decisions for presentation times of only 20 ms. We asked whether bumblebees, despite having potentially more limited processing power, could similarly detect and discriminate visual patterns presented for durations of 100 ms or less. Bumblebees detected stimuli and discriminated between differently oriented and coloured stimuli when presented as briefly as 25 ms but failed to identify ecologically relevant shapes (predatory spiders on flowers) even when presented for 100 ms. This suggests an important difference between primate and insect visual processing, so that while primates can capture entire visual scenes 'at a glance', insects might have to rely on continuous online sampling of the world around them, using a process of active vision, which requires longer integration times.


Subjects
Bees/physiology , Pattern Recognition, Visual , Visual Perception , Animals , Color Perception