Results 1 - 14 of 14
1.
Biol Cybern ; 113(4): 439-451, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31076867

ABSTRACT

Wasps and bees perform learning flights when leaving their nest or food locations for the first time, during which they acquire the visual information that enables them to return successfully. Here we present and test a set of simple control rules underlying the execution of learning flights that closely mimic those performed by ground-nesting wasps. In the simplest model, we assume that the angle between the flight direction and the nest direction, as seen from the position of the insect, is constant and flips sign only when the pivoting direction around the nest changes, resulting in a concatenation of piecewise defined logarithmic spirals. We then added characteristic properties of real learning flights, such as head saccades and the condition that the position of the nest entrance within the visual field is kept nearly constant, to describe the development of a learning flight in a head-centered frame of reference, assuming that the retinal position of the nest is known. We finally implemented a closed-loop simulation of learning flights based on a small set of visual control rules. The visual input for this model consists of rendered views generated from 3D reconstructions of natural wasp nesting sites, and the retinal nest position is controlled by means of simple template-based tracking. We show that naturalistic paths can be generated without knowledge of the absolute distance to the nest or of the flight speed. We demonstrate, in addition, that nest-tagged views recorded during such simulated learning flights are sufficient for a homing agent to pinpoint the goal by identifying the nest direction when encountering familiar views. We discuss how the information acquired during learning flights close to the nest can be integrated with long-range homing models.
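The simplest of these control rules, a constant angle between flight direction and nest direction whose sign flips when the pivoting direction changes, can be sketched in a few lines. The parameter values below (bearing offset, flip interval, speed) are hypothetical choices for illustration, not those fitted to wasp data.

```python
import numpy as np

def learning_flight(alpha_deg=115.0, n_steps=2000, dt=0.01, speed=0.3,
                    flip_every=400, start=(0.05, 0.0)):
    """Generate a piecewise logarithmic spiral: keep a constant angle
    between the flight direction and the direction to the nest (at the
    origin), and flip the sign of that angle to reverse pivoting."""
    alpha = np.radians(alpha_deg)
    p = np.array(start, dtype=float)
    sign = 1.0
    path = [p.copy()]
    for i in range(1, n_steps + 1):
        if i % flip_every == 0:
            sign = -sign                        # reverse pivoting direction
        to_nest = np.arctan2(-p[1], -p[0])      # bearing of nest from insect
        heading = to_nest + sign * alpha        # constant offset from nest bearing
        p = p + speed * dt * np.array([np.cos(heading), np.sin(heading)])
        path.append(p.copy())
    return np.array(path)

path = learning_flight()
r = np.hypot(path[:, 0], path[:, 1])            # distance from the nest
```

With a bearing offset greater than 90°, the component of each step along the nest direction is negative, so the distance from the nest grows monotonically, without the model ever knowing that distance explicitly.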


Subjects
Animal Flight/physiology, Homing Behavior/physiology, Spatial Orientation/physiology, Visual Fields/physiology, Animals, Insects, Wasps
2.
Curr Biol ; 26(4): 470-82, 2016 Feb 22.
Article in English | MEDLINE | ID: mdl-26877083

ABSTRACT

Nesting insects perform learning flights to establish a visual representation of the nest environment that allows them to subsequently return to the nest. It has remained unclear, however, what insects learn and when during these flights, what determines the flights' overall structure, and, in particular, how what is learned is used to guide an insect's return. We analyzed learning flights in ground-nesting wasps (Sphecidae: Cerceris australis) using synchronized high-speed cameras to determine 3D head position and orientation. Wasps move along arcs centered on the nest entrance, whereby rapid changes in gaze ensure that the nest is seen at lateral positions in the left or the right visual field. Between saccades, the wasps translate along arc segments around the nest while keeping gaze fixed. We reconstructed panoramic views along the paths of learning and homing wasps to test specific predictions about what wasps learn during their learning flights and how they use this information to guide their return. Our evidence suggests that wasps monitor changing views during learning flights and use the differences they experience relative to previously encountered views to decide when to begin a new arc. Upon encountering learned views, homing wasps move left or right, depending on the nest direction associated with that view, and in addition appear to be guided by features on the ground close to the nest. We test our predictions on how wasps use views for homing by simulating homing flights of a virtual wasp guided by views rendered in a 3D model of a natural wasp environment.


Subjects
Homing Behavior, Learning, Visual Perception, Wasps/physiology, Animals, Orientation
3.
PLoS One ; 11(2): e0147106, 2016.
Article in English | MEDLINE | ID: mdl-26886006

ABSTRACT

Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, and that they employ an active strategy to uncover the depth profiles. We trained individual, free-flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees also apply this strategy to recover the fine details of a surface depth profile? Analysing the bees' flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable for extracting depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, while additional rotation provided robust depth information based on the direction of the displacements. Thus, the bees' flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy make it an efficient solution for 3D-object recognition without stereo vision, one that could be employed by other flying insects or mobile robots.
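The reasoning about translation-plus-rotation maneuvers can be illustrated with the standard first-order approximation for retinal image motion, in which sideways translation contributes an angular velocity inversely proportional to depth while yaw rotation adds a depth-independent offset. All numbers here (speed, depths, yaw rate) are arbitrary illustrative values, not measurements from the study.

```python
import numpy as np

def angular_velocity(depths, v_lateral, yaw_rate=0.0):
    """Approximate retinal angular velocity (rad/s) of frontal surface
    points: translational flow scales with inverse depth, while
    rotational flow is the same for all depths."""
    return v_lateral / np.asarray(depths, dtype=float) + yaw_rate

depths = np.array([0.1, 0.2, 0.4])              # near, middle, far points (m)

# Pure sideways translation: the magnitude of image motion encodes depth order.
pure_t = angular_velocity(depths, v_lateral=0.2)

# Translation plus a counter-rotation that cancels the flow of the middle
# point: now the *sign* of image motion separates nearer from farther
# surfaces, a more robust cue than magnitude alone.
t_plus_r = angular_velocity(depths, v_lateral=0.2, yaw_rate=-1.0)
```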


Subjects
Bees/physiology, Animal Flight/physiology, Form Perception/physiology, Visual Perception/physiology, Animals, Animal Behavior, Psychological Discrimination, Theoretical Models, Motion (Physics), Optic Flow, Photic Stimulation
4.
PLoS One ; 10(8): e0128413, 2015.
Article in English | MEDLINE | ID: mdl-26308839

ABSTRACT

The control of self-motion is a basic but complex task for both technical and biological systems. Various algorithms have been proposed that allow the estimation of self-motion from the optic flow on the eyes. We show that two apparently very different approaches to this task, one technically and one biologically inspired, can be transformed into each other under certain conditions. One estimator of self-motion is based on a matched-filter approach; it was developed to describe the function of motion-sensitive cells in the fly brain. The other estimator, the Koenderink and van Doorn (KvD) algorithm, was derived analytically with a technical background. If the distances to the objects in the environment can be assumed to be known, the two estimators are linear and equivalent, but are expressed in different mathematical forms. For most situations, however, it is unrealistic to assume that the distances are known. The depth structure of the environment therefore needs to be determined in parallel with the self-motion parameters, which leads to a non-linear problem. We show that the standard least-mean-squares approach used by the KvD algorithm leads to a biased estimator, derive a modification of the algorithm that removes the bias, and demonstrate its improved performance by means of numerical simulations. For self-motion estimation it is beneficial to have a spherical visual field, as many flying insects do. We show that in this case the representation of the depth structure of the environment derived from the optic flow can be simplified. Based on this result, we develop an adaptive matched-filter approach for systems with a nearly spherical visual field; then only eight parameters describing the environment have to be memorized and updated during self-motion.
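When the distances are known, the estimation problem described above is linear and can be solved directly. The sketch below sets up spherical optic-flow equations for a known depth map and recovers translation and rotation by least squares; the flow model and all values are generic textbook choices, not the paper's matched-filter weights or the KvD formulation itself.

```python
import numpy as np

def skew(v):
    """Cross-product matrix: skew(d) @ r == d x r."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def optic_flow(T, R, dirs, dist):
    """Flow on a spherical eye: translational component scaled by inverse
    distance plus a depth-independent rotational component."""
    trans = -(T - (dirs @ T)[:, None] * dirs) / dist[:, None]
    return trans - np.cross(R, dirs)

rng = np.random.default_rng(0)
n = 200
dirs = rng.normal(size=(n, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # viewing directions
dist = rng.uniform(1.0, 5.0, size=n)                  # known distances

T_true = np.array([0.3, -0.1, 0.2])                   # translation
R_true = np.array([0.05, 0.1, -0.02])                 # rotation
f = optic_flow(T_true, R_true, dirs, dist)

# Stack the linear system A @ [T; R] = flow and solve by least squares.
A = np.zeros((3 * n, 6))
for i, (d, rho) in enumerate(zip(dirs, dist)):
    A[3*i:3*i+3, :3] = -(np.eye(3) - np.outer(d, d)) / rho  # translation part
    A[3*i:3*i+3, 3:] = skew(d)                              # -R x d = skew(d) @ R
x, *_ = np.linalg.lstsq(A, f.ravel(), rcond=None)
T_est, R_est = x[:3], x[3:]
```

With unknown distances, the depth term couples multiplicatively with the translation, which is exactly the non-linearity the abstract refers to.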


Subjects
Algorithms, Insects/physiology, Motion (Physics), Animals
5.
Article in English | MEDLINE | ID: mdl-25863682

ABSTRACT

Much evidence has accumulated in recent years demonstrating that the degree to which navigating insects rely on path integration or landmark guidance when displaced depends on the navigational information content of their specific habitat. There is thus a need to quantify this information content. Here we present one way of achieving this by constructing 3D models of natural environments using a laser scanner and purely camera-based methods that allow us to render panoramic views at any location. We provide (1) ground-truthing of such reconstructed views against panoramic images recorded at the same locations; (2) evidence of their potential to map the navigational information content of natural habitats; (3) methods to register these models with GPS or with stereo camera recordings; and (4) examples of their use in reconstructing the visual information available to walking and flying insects. We discuss the current limitations of 3D modelling, including the lack of spectral and polarisation information, but also the opportunities such models offer to map the navigational information content of natural habitats and to test visual navigation algorithms under 'real-life' conditions.


Subjects
Environment, Three-Dimensional Imaging, Biological Models, Orientation/physiology, Spatial Behavior, Animals, Australia, Cues (Psychology), Geographic Information Systems, Homing Behavior, Insects
7.
Philos Trans R Soc Lond B Biol Sci ; 369(1636): 20130034, 2014.
Article in English | MEDLINE | ID: mdl-24395961

ABSTRACT

We caught solitary foragers of the Australian Jack Jumper ant, Myrmecia croslandi, and released them in three compass directions at distances of 10 and 15 m from the nest, at locations they had never been before. We recorded the head orientation and the movements of ants within a radius of 20 cm from the release point and, in some cases, tracked their subsequent paths with a differential GPS. We find that upon surfacing from their transport vials onto a release platform, most ants move in the home direction after looking around briefly. The ants use a systematic scanning procedure, consisting of saccadic head and body rotations that sweep gaze across the scene with an average angular velocity of 90° s⁻¹ and intermittent changes in turning direction. By mapping the ants' gaze directions onto the local panorama, we find that neither the ants' gaze nor their decisions to change turning direction are clearly associated with salient or significant features in the scene. Instead, the ants look most frequently in the home direction and start walking fast when doing so. Displaced ants can thus identify the home direction with little translation, exclusively through rotational scanning. We discuss the navigational information content of the ants' habitat and how the insects' behaviour informs us about how they may acquire and retrieve that information.


Subjects
Ants/physiology, Feeding Behavior/physiology, Homing Behavior/physiology, Ocular Vision/physiology, Animals, Cues (Psychology), Memory, Orientation
8.
Front Behav Neurosci ; 5: 20, 2011.
Article in English | MEDLINE | ID: mdl-21541258

ABSTRACT

Honeybees visually pinpoint the location of a food source using landmarks. Studies on the role of visual memories have suggested that bees approach the goal by finding a close match between their current view and a memorized view of the goal location. The most relevant landmark features for this matching process seem to be their retinal positions, their size as defined by their edges, and their color. Recently, we showed that honeybees can use landmarks that are statically camouflaged, suggesting that motion cues are relevant as well. Currently, it is unclear how bees weight these different landmark features when accomplishing navigational tasks, and whether this depends on their saliency. Since natural objects are often distinguished by their texture, we investigate the behavioral relevance and the interplay of the spatial configuration and the texture of landmarks. We show that landmark texture is a feature that bees memorize, and that being given the opportunity to identify landmarks by their texture improves the bees' navigational performance. Landmark texture is weighted more strongly than landmark configuration when it provides the bees with positional information and when the texture is salient. In the vicinity of a landmark, honeybees change their flight behavior according to its texture.

9.
J Comput Neurosci ; 30(2): 409-26, 2011 Apr.
Article in English | MEDLINE | ID: mdl-20721613

ABSTRACT

The estimation of motion direction from time-varying retinal images is a fundamental task of visual systems. Neurons that respond selectively to directional visual motion are found in almost all species, and in many of them direction-selective neurons signal their preferred direction of movement already in the retina. Scientific evidence suggests that direction selectivity is carried from the retina to higher brain areas. Here we adopt a simple integrate-and-fire neuron model, inspired by recent work of Casti et al. (2008), to investigate how directional selectivity changes in cells postsynaptic to direction-selective retinal ganglion cells (DSRGCs). Our model analysis shows that directional selectivity in the postsynaptic cells increases over a wide parameter range. The degree of directional selectivity correlates positively with the probability of burst-like firing of presynaptic DSRGCs. Summation of postsynaptic potentials and the spike threshold act together as a temporal filter upon the input spike train. Rather than invoking the intricacy of neural circuitry between retina and higher brain areas, we suggest that sharpening is a straightforward result of the intrinsic spiking pattern of the DSRGCs combined with the summation of excitatory postsynaptic potentials and the spike threshold in postsynaptic neurons.
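A minimal illustration of this sharpening effect: feed a leaky integrate-and-fire unit a dense (preferred-direction) and a sparse (null-direction) input spike train and compare direction-selectivity indices before and after the threshold. The rates, weight and time constant are hypothetical, chosen only so that roughly two near-coincident input spikes are needed to reach threshold; this is not the fitted model from the paper.

```python
import numpy as np

def lif_spike_count(spikes, tau=20.0, w=0.6, theta=1.0):
    """Leaky integrate-and-fire neuron on a binary spike train sampled
    at 1 ms: leak, add weighted input, fire and reset at threshold.
    With w < theta, at least two spikes within ~8 ms are needed to
    fire, so the unit acts as a coincidence detector."""
    v, n_out = 0.0, 0
    decay = np.exp(-1.0 / tau)
    for s in spikes:
        v = v * decay + w * s
        if v >= theta:
            n_out += 1
            v = 0.0
    return n_out

rng = np.random.default_rng(0)
T = 20000                                         # 20 s at 1 ms resolution
pref_in = (rng.random(T) < 0.10).astype(float)    # preferred-direction input
null_in = (rng.random(T) < 0.03).astype(float)    # null-direction input

# Direction-selectivity index (DSI) of the inputs vs the LIF output:
dsi_in = (pref_in.sum() - null_in.sum()) / (pref_in.sum() + null_in.sum())
n_pref, n_null = lif_spike_count(pref_in), lif_spike_count(null_in)
dsi_out = (n_pref - n_null) / (n_pref + n_null)
```

Because output rate grows supralinearly with input rate under a coincidence requirement, the sparse null input is suppressed more than the dense preferred input, and the output DSI exceeds the input DSI.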


Subjects
Action Potentials/physiology, Neurological Models, Motion Perception/physiology, Orientation/physiology, Retina/cytology, Retinal Ganglion Cells/physiology, Animals, Rabbits
10.
J Exp Biol ; 213(Pt 17): 2913-23, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20709919

ABSTRACT

Visual landmarks guide humans and animals, including insects, to a goal location. Insects, with their miniature brains, have evolved a simple strategy to find their nests or profitable food sources: they approach a goal by finding a close match between the current view and a memorised retinotopic representation of the landmark constellation around the goal. Recent implementations of such a matching scheme use raw panoramic images ('image matching') and show that it is well suited to work on robots and even in natural environments. However, this matching scheme works only if relevant landmarks can be detected by their contrast and texture. We therefore tested how honeybees perform in localising a goal if the landmarks can hardly be distinguished from the background by such cues. We recorded the honeybees' flight behaviour with high-speed cameras and compared the search behaviour with computer simulations. We show that honeybees are able to use landmarks that have the same contrast and texture as the background, and suggest that the bees use relative motion cues between the landmark and the background. These cues are generated on the eyes when the bee moves in a characteristic way in the vicinity of the landmarks. This extraordinary navigation performance can be explained by a matching scheme that includes snapshots based on optic flow amplitudes ('optic flow matching'). This new matching scheme provides a robust strategy for navigation, as it depends primarily on the depth structure of the environment.


Subjects
Appetitive Behavior/physiology, Bees/physiology, Goals, Optic Flow/physiology, Animals, Animal Flight/physiology
11.
Proc Biol Sci ; 277(1689): 1899-906, 2010 Jun 22.
Article in English | MEDLINE | ID: mdl-20147329

ABSTRACT

Honeybees turn their thorax and thus their flight motor to change direction or to fly sideways. If the bee's head were fixed to its thorax, such movements would have great impact on vision. Head movements independent of thorax orientation can stabilize gaze and thus play an important and active role in shaping the structure of the visual input the animal receives. Here, we investigate how gaze and flight control interact in a homing task. We use high-speed video equipment to record the head and body movements of honeybees approaching and departing from a food source that was located between three landmarks in an indoor flight arena. During these flights, the bees' trajectories consist of straight flight segments combined with rapid turns. These short and fast yaw turns ('saccades') are in most cases accompanied by even faster head yaw turns that start about 8 ms earlier than the body saccades. Between saccades, gaze stabilization leads to a behavioural elimination of rotational components from the optical flow pattern, which facilitates depth perception from motion parallax.


Subjects
Bees/physiology, Animal Behavior/physiology, Animal Flight/physiology, Motor Activity/physiology, Ocular Physiological Phenomena, Animals, Head, Video Recording
12.
J Exp Psychol Anim Behav Process ; 34(1): 1-14, 2008 Jan.
Article in English | MEDLINE | ID: mdl-18248111

ABSTRACT

Animals relocating a target corner in a rectangular space often make rotational errors, searching not only at the target corner but also at the diagonally opposite corner. The authors tested whether view-based navigation can explain rotational errors by recording panoramic snapshots at regularly spaced locations in a rectangular box. The authors calculated the global image difference between the image at each location and the image recorded at a target location in 1 of the corners, thus creating a 2-dimensional map of image differences. The authors found the most pronounced minima of image differences at the target corner and at the diagonally opposite corner, the conditions favoring rotational errors. The authors confirmed these results in virtual reality simulations and showed that the relative salience of different visual cues determines whether image differences are dominated by geometry or by features. The geometry of space is thus implicitly contained in panoramic images and does not require explicit computation by a dedicated module. A testable prediction is that animals making rotational errors in rectangular spaces are guided by remembered views.
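The core idea, that geometry falls out of raw panoramic views, can be sketched with an artificial "panorama" of wall distances in a rectangular arena. By the point symmetry of a rectangle, the view from the location diagonally opposite the target equals the target view rotated by 180°, so an agent matching views over all headings finds a second, equally deep minimum there. The arena size, sampling resolution and use of distances as image values are illustrative simplifications, not the snapshot procedure of the study.

```python
import numpy as np

def panorama(x, y, W=4.0, H=2.0, n_dirs=72):
    """'View' at (x, y) in a W x H rectangular arena: distance to the
    nearest wall in n_dirs evenly spaced directions."""
    th = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    c, s = np.cos(th), np.sin(th)
    with np.errstate(divide='ignore'):
        tx = np.where(c > 0, (W - x) / c, np.where(c < 0, -x / c, np.inf))
        ty = np.where(s > 0, (H - y) / s, np.where(s < 0, -y / s, np.inf))
    return np.minimum(tx, ty)

def view_difference(p, target_view):
    """Best-matching RMS image difference over all headings
    (circular shifts of the panorama)."""
    v = panorama(*p)
    return min(np.sqrt(np.mean((np.roll(v, k) - target_view) ** 2))
               for k in range(len(v)))

tv = panorama(0.5, 0.5)                    # snapshot at the target corner
d_diag = view_difference((3.5, 1.5), tv)   # diagonally opposite corner
d_near = view_difference((3.5, 0.5), tv)   # geometrically 'wrong' corner
```

Here `d_diag` is essentially zero while `d_near` stays large: the rotational-error geometry is implicit in the views themselves, with no dedicated geometry module.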


Subjects
Homing Behavior, Visual Perception, Animals, Animal Behavior, Spatial Behavior, User-Computer Interface
13.
J Exp Psychol Anim Behav Process ; 34(1): 15-30, 2008 Jan.
Article in English | MEDLINE | ID: mdl-18248112

ABSTRACT

Two recent studies testing navigation of rats in swimming pools have posed problems for any account of the use of purely geometric properties of space in navigation (M. Graham, M. A. Good, A. McGregor, & J. M. Pearce, 2006; J. M. Pearce, M. A. Good, P. M. Jones, & A. McGregor, 2004). The authors simulated 1 experiment from each study in a virtual reality environment to test whether experimental results could be explained by view-based navigation. The authors recorded a reference image at the target location and then determined global panoramic image differences between this image and images taken at regularly spaced locations throughout the arena. A formal model, in which an agent attempts to minimize image differences between the reference image and current views, generated trajectories that could be compared with the search performance of rats. For both experiments, this model mimics many aspects of rat behavior. View-based navigation provides a sufficient and parsimonious explanation for a range of navigational behaviors of rats under these experimental conditions.


Subjects
Learning, Space Perception, Spatial Behavior, User-Computer Interface, Visual Perception, Animals, Animal Behavior, Rats
14.
Biol Cybern ; 96(5): 519-31, 2007 May.
Article in English | MEDLINE | ID: mdl-17443340

ABSTRACT

Panoramic image differences can be used for view-based homing under natural outdoor conditions, because they increase smoothly with distance from a reference location (Zeil et al., J Opt Soc Am A 20(3):450-469, 2003). The particular shape, slope and depth of such image difference functions (IDFs) recorded at any one place, however, depend on a number of factors that so far have only been qualitatively identified. Here we show how the shape of difference functions depends on the depth structure and the contrast of natural scenes, by quantifying the depth distribution of different outdoor scenes and by comparing it to the difference functions calculated with differently processed panoramic images recorded at the same locations. We find (1) that IDFs and catchment areas become systematically wider as the average distance of objects increases; (2) that simple image processing operations, such as subtracting the local mean, difference-of-Gaussian filtering and local contrast normalization, make difference functions robust against changes in illumination and the spurious effects of shadows; and (3), by comparing depth-dependent translational and depth-independent rotational difference functions, that IDFs of contrast-normalized snapshots are predominantly determined by the depth structure and possibly also by occluding contours in a scene. We propose a model for the shape of IDFs as a tool for quantitative comparisons between these functions in different scenes.
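Point (2), robustness through simple preprocessing, is easy to demonstrate: subtracting the mean and normalizing contrast makes the RMS difference between two panoramic intensity profiles invariant to any global brightness and contrast change. The random "panorama" and the affine illumination model below are illustrative assumptions, not the recorded scenes from the study.

```python
import numpy as np

def normalize(im):
    """Subtract the mean and scale to unit RMS contrast."""
    z = im - im.mean()
    return z / np.sqrt(np.mean(z ** 2))

def rms_diff(a, b):
    """Root-mean-square difference between two intensity profiles."""
    return np.sqrt(np.mean((a - b) ** 2))

rng = np.random.default_rng(0)
ref = rng.random(360)                 # reference panorama, one value per degree
bright = 1.5 * ref + 0.2              # same scene, brighter and higher contrast

raw_diff = rms_diff(bright, ref)      # large: illumination masquerades as change
norm_diff = rms_diff(normalize(bright), normalize(ref))   # ~0 after normalization
```

Local (windowed) versions of the same operations additionally suppress shadows, which a global normalization cannot remove.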


Subjects
Depth Perception/physiology, Neurological Models, Visual Perception/physiology, Contrast Sensitivity, Cybernetics/methods, Humans, Three-Dimensional Imaging, Size Perception/physiology