Results 1-20 of 26
1.
Sci Rep ; 13(1): 5119, 2023 Mar 29.
Article in English | MEDLINE | ID: mdl-36991062

ABSTRACT

When attempting to land on a ship deck tossed by the sea, helicopter pilots must make sure that the helicopter can develop sufficient lift to touch down safely. This reminder of affordance theory led us to model and study the affordance of deck-landing-ability, which defines whether a safe landing on a ship deck is possible given the helicopter's available lift and the deck's heave movements. Two groups of participants with no piloting experience used a laptop helicopter simulator to attempt to land either a low-lifter or a heavy-lifter helicopter on a virtual ship deck, either triggering a pre-programmed lift serving as the descent law if landing was deemed possible, or aborting the deck-landing maneuver. The deck-landing-ability was manipulated by varying the helicopter's initial altitude and the ship's heave phase between trials. We designed a visual augmentation that makes the deck-landing-ability visible, enabling participants to maximize the safety of their deck-landing attempts and reduce the number of unsafe deck-landings. Participants perceived the visual augmentation as facilitating this decision-making process. Its benefits were found to originate from the clear-cut distinction it helped them make between safe and unsafe deck-landing windows, and from the display of the optimal time for initiating the landing.

2.
Nature ; 610(7932): 485-490, 2022 10.
Article in English | MEDLINE | ID: mdl-36261554

ABSTRACT

Attitude control is an essential flight capability. Whereas flying robots commonly rely on accelerometers [1] for estimating attitude, flying insects lack an unambiguous sense of gravity [2,3]. Despite the established role of several sense organs in attitude stabilization [3-5], the dependence of flying insects on an internal estimate of the gravity direction remains unclear. Here we show how attitude can be extracted from optic flow when combined with a motion model that relates attitude to acceleration direction. Although there are conditions, such as hover, in which the attitude is unobservable, we prove that the ensuing control system is still stable, continuously moving into and out of these conditions. Flying robot experiments confirm that accommodating unobservability in this manner leads to stable, though slightly oscillatory, attitude control. Moreover, experiments with a bio-inspired flapping-wing robot show that the residual high-frequency attitude oscillations caused by the flapping motion improve observability. The presented approach holds promise for robotics, with accelerometer-less autopilots paving the road for insect-scale autonomous flying robots [6]. Finally, it forms a hypothesis on insect attitude estimation and control, with the potential to provide further insight into known biological phenomena [5,7,8] and to generate new predictions, such as reduced head and body attitude variance at higher flight speeds [9].


Subjects
Biomechanical Phenomena, Optic Flow, Robotics, Animals, Animal Flight, Insects, Biological Models, Robotics/methods, Animal Wings, Accelerometry, Biomimetics, Biomimetic Materials, Motion (Physics)
3.
Biol Lett ; 18(3): 20210534, 2022 03.
Article in English | MEDLINE | ID: mdl-35317623

ABSTRACT

To investigate altitude control in honeybees, we designed an optical configuration to manipulate or cancel the optic flow. It is widely accepted that honeybees rely on the optic flow generated by the ground to control their altitude. Here, we created an optical configuration enabling a better understanding of the mechanism of altitude control in honeybees. This configuration aims to mimic some of the conditions that honeybees experience over a natural body of water. An optical manipulation, based on a pair of opposed horizontal mirrors, was designed to remove any visual information coming from the floor and ceiling. This manipulation allowed us to come closer to the seminal experiment of Heran & Lindauer (1963, Zeitschrift für vergleichende Physiologie 47, 39-55; doi:10.1007/BF00342890). Our results confirmed that a reduction or absence of ventral optic flow leads honeybees to lose altitude and eventually collide with the floor.


Subjects
Animal Flight, Optic Flow, Altitude, Animals, Bees, Ocular Vision
4.
J R Soc Interface ; 18(182): 20210567, 2021 09.
Article in English | MEDLINE | ID: mdl-34493092

ABSTRACT

Honeybees foraging and recruiting nest-mates by performing the waggle dance need to be able to gauge the flight distance to the food source regardless of the wind and terrain conditions. Previous authors have hypothesized that the foragers' visual odometer mathematically integrates the angular velocity of the ground image sweeping backward across their ventral viewfield, known as translational optic flow. The question arises as to how mathematical integration of optic flow (usually expressed in radians/s) can reliably encode distances, regardless of the height and speed of flight. The vertical self-oscillatory movements observed in honeybees trigger expansions and contractions of the optic flow vector field, yielding an additional visual cue called optic flow divergence. We have developed a self-scaled model for the visual odometer in which the translational optic flow is scaled by the visually estimated current clearance from the ground. In simulation, this model, which we have called SOFIa, was found to be reliable in a large range of flight trajectories, terrains and wind conditions. It reduced the statistical dispersion of the estimated flight distances approximately 10-fold in comparison with the mathematically integrated raw optic flow model. The SOFIa model can be directly implemented in robotic applications based on minimalistic visual equipment.
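The core idea of the SOFIa odometer above can be sketched in a few lines: integrating raw translational optic flow (in rad/s) conflates speed and flight height, whereas scaling it by the visually estimated ground clearance recovers the actual distance travelled. A minimal illustration, with made-up speed and height profiles (not data from the paper):

```python
def raw_of_odometer(speeds, heights, dt):
    # Optic flow omega = v / h; integrating omega alone mixes speed and height,
    # so the estimate depends on the flight altitude.
    return sum((v / h) * dt for v, h in zip(speeds, heights))

def scaled_of_odometer(speeds, heights, dt):
    # Scaling omega by the (visually estimated) clearance h gives v * dt summed,
    # i.e. the distance travelled, independent of altitude.
    return sum((v / h) * h * dt for v, h in zip(speeds, heights))

speeds = [4.0, 4.0, 4.0]    # m/s ground speed (illustrative)
heights = [1.0, 2.0, 4.0]   # m clearance varying during flight (illustrative)
dt = 1.0
print(raw_of_odometer(speeds, heights, dt))     # 7.0 rad: height-dependent
print(scaled_of_odometer(speeds, heights, dt))  # 12.0 m regardless of height
```

The constant-speed flight covers the same 12 m in each case, but only the scaled odometer reports it consistently, which is the dispersion reduction the abstract describes.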


Subjects
Animal Flight, Robotics, Animals, Bees, Wind
5.
PLoS One ; 16(8): e0255779, 2021.
Article in English | MEDLINE | ID: mdl-34379645

ABSTRACT

Helicopter landing on a ship is a visually regulated "rendezvous" task during which pilots must use fine control to land a powerful rotorcraft on the deck of a moving ship tossed by the sea while minimizing the energy at impact. Although augmented reality assistance can be hypothesized to improve pilots' performance and the safety of landing maneuvers by guiding action toward optimal behavior in complex and stressful situations, the question of the optimal information to be displayed to feed the pilots' natural information-movement coupling remains to be investigated. Novice participants were instructed to land a simplified helicopter on a ship in a virtual reality simulator while minimizing energy at impact and landing duration. The wave amplitude and related ship heave were manipulated. We compared the benefits of two types of visual augmentation whose design was based on either solving cockpit-induced visual occlusion problems or strengthening the online regulation of the deceleration by keeping the current [Formula: see text] variable around an ideal value of -0.5 to conduct smooth and efficient landing. Our results showed that the second augmentation, ecologically grounded, offers benefits at several levels of analysis. It decreases the landing duration, improves the control of the helicopter displacement, and sharpens the sensitivity to changes in [Formula: see text]. This underlines the importance for designers of augmented reality systems to collaborate with psychologists to identify the relevant perceptual-motor strategy that must be encouraged before designing an augmentation that will enhance it.


Subjects
Aircraft, Aviation, Pilots, Ships, Adult, Eye Movements/physiology, Female, Humans, Male, Military Personnel, Task Performance and Analysis, Young Adult
6.
Front Robot AI ; 8: 614206, 2021.
Article in English | MEDLINE | ID: mdl-33969000

ABSTRACT

Miniature multi-rotors are promising robots for navigating subterranean networks, but maintaining a radio connection underground is challenging. In this paper, we introduce a distributed algorithm, called U-Chain (for Underground-chain), that coordinates a chain of flying robots between an exploration drone and an operator. Our algorithm only uses the measurement of the signal quality between two successive robots and an estimate of the ground speed based on an optic flow sensor. It leverages a distributed policy for each UAV and a Kalman filter to get reliable estimates of the signal quality. We evaluate our approach formally and in simulation, and we describe experimental results with a chain of 3 real miniature quadrotors (12 by 12 cm) and a base station.
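The U-Chain algorithm relies on a Kalman filter to obtain reliable estimates of the link quality between successive robots. A minimal one-dimensional sketch of such a filter, with a constant-signal model and illustrative noise variances (not values from the paper):

```python
def kalman_step(x, p, z, q=0.01, r=1.0):
    # Predict: constant-signal model, process noise variance q
    p = p + q
    # Update with measurement z of noise variance r
    k = p / (p + r)          # Kalman gain
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p

readings = [-62.0, -60.0, -65.0, -61.0]   # RSSI-like samples in dBm (made up)
x, p = readings[0], 1.0                   # initialise on the first sample
for z in readings[1:]:
    x, p = kalman_step(x, p, z)
print(round(x, 1))
```

With a small process noise the filter averages out measurement jitter, giving each UAV in the chain a smoothed signal-quality signal to feed its distributed policy.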

7.
PLoS Comput Biol ; 17(1): e1008629, 2021 01.
Article in English | MEDLINE | ID: mdl-33497381

ABSTRACT

Both neurophysiological and psychophysical experiments have pointed out the crucial role of recurrent and feedback connections to process context-dependent information in the early visual cortex. While numerous models have accounted for feedback effects at either neural or representational level, none of them were able to bind those two levels of analysis. Is it possible to describe feedback effects at both levels using the same model? We answer this question by combining Predictive Coding (PC) and Sparse Coding (SC) into a hierarchical and convolutional framework applied to realistic problems. In the Sparse Deep Predictive Coding (SDPC) model, the SC component models the internal recurrent processing within each layer, and the PC component describes the interactions between layers using feedforward and feedback connections. Here, we train a 2-layered SDPC on two different databases of images, and we interpret it as a model of the early visual system (V1 & V2). We first demonstrate that once the training has converged, SDPC exhibits oriented and localized receptive fields in V1 and more complex features in V2. Second, we analyze the effects of feedback on the neural organization beyond the classical receptive field of V1 neurons using interaction maps. These maps are similar to association fields and reflect the Gestalt principle of good continuation. We demonstrate that feedback signals reorganize interaction maps and modulate neural activity to promote contour integration. Third, we demonstrate at the representational level that the SDPC feedback connections are able to overcome noise in input images. Therefore, the SDPC captures the association field principle at the neural level which results in a better reconstruction of blurred images at the representational level.
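The SC component inside each SDPC layer solves a sparse-coding problem. A generic ISTA (iterative soft-thresholding) solver for one such layer is sketched below; this is a textbook sparse-coding iteration under a random toy dictionary, not the authors' implementation, and the SDPC additionally stacks such layers and couples them through predictive feedback, which is not reproduced here:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the L1 penalty
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(x, D, lam=0.1, n_iter=200):
    # Minimise ||x - D a||^2 + lam * ||a||_1 over the code a
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the reconstruction term
        a = soft_threshold(a - grad / L, lam / L)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((8, 16))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
a_true = np.zeros(16)
a_true[3] = 1.0                            # signal generated by one atom
x = D @ a_true
a = ista(x, D, lam=0.05)
print(int(np.argmax(np.abs(a))))           # index of the largest coefficient
```

The recovered code concentrates its energy on the generating atom, which is the kind of sparse representation the SDPC's feedback then refines across layers.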


Subjects
Deep Learning, Neurological Models, Visual Pathways, Algorithms, Animals, Computational Biology, Feedback, Female, Humans, Computer-Assisted Image Processing, Male, Visual Cortex/physiology
8.
Sci Rep ; 10(1): 18130, 2020 10 22.
Article in English | MEDLINE | ID: mdl-33093603

ABSTRACT

The homing journeys of nine loggerhead turtles translocated from their nesting beach to offshore release sites were reconstructed through Argos and GPS telemetry, while their water-related orientation was simultaneously recorded at high temporal resolution by multi-sensor data loggers featuring a three-axis magnetic sensor. All turtles managed to return to the nesting beach area, although by indirect routes comprising an initial straight leg not precisely oriented towards home and a subsequent homebound segment carried out along the coast. Logger data revealed that, after an initial period of disorientation, turtles were able to maintain a consistent direction precisely for several hours while moving in the open sea, even during night-time. Their water-related headings were in accordance with the orientation of the resulting route, showing little or no effect of current drift. This study reveals a biphasic homing strategy of displaced turtles, involving an initial orientation weakly related to home followed by a shift to coastal navigation, in line with the modern conceptual framework of animal migratory navigation as deriving from sequential mechanisms acting at different spatial scales.


Subjects
Animal Migration/physiology, Homing Behavior/physiology, Orientation/physiology, Turtles/physiology, Animals, Magnetism, Seasons, Seawater, Telemetry
9.
Neural Comput ; 32(11): 2279-2309, 2020 11.
Article in English | MEDLINE | ID: mdl-32946716

ABSTRACT

Hierarchical sparse coding (HSC) is a powerful model to efficiently represent multidimensional, structured data such as images. The simplest solution to this computationally hard problem is to decompose it into independent layer-wise subproblems. However, neuroscientific evidence suggests interconnecting these subproblems, as in predictive coding (PC) theory, which adds top-down connections between consecutive layers. In this study, we introduce a new model, 2-layer sparse predictive coding (2L-SPC), to assess the impact of this inter-layer feedback connection. In particular, the 2L-SPC is compared with a hierarchical Lasso (Hi-La) network made out of a sequence of independent Lasso layers. The 2L-SPC and the 2-layer Hi-La networks were trained on four different databases and with different sparsity parameters on each layer. First, we show that the overall prediction error generated by 2L-SPC is lower thanks to the feedback mechanism, which transfers prediction error between layers. Second, we demonstrate that the inference stage of the 2L-SPC is faster to converge and generates a refined representation in the second layer compared to the Hi-La model. Third, we show that the 2L-SPC top-down connection accelerates the learning process of the HSC problem. Finally, the analysis of the emerging dictionaries shows that the 2L-SPC features are more generic and have a larger spatial extent.

10.
J R Soc Interface ; 16(159): 20190486, 2019 10 31.
Article in English | MEDLINE | ID: mdl-31594521

ABSTRACT

For studies of how birds control their altitude, seabirds are of particular interest because they forage offshore, where the visual environment can be simply modelled as a flat world textured by waves, generating only ventral visual cues. This study suggests that optic flow, i.e. the rate at which the sea moves across the eye's retina, can explain gulls' altitude control over the sea. In particular, a new flight model that includes both energy and optical invariants helps explain the gulls' trajectories during offshore takeoff and cruising flight. A linear mixed model applied to 352 flights from 16 individual lesser black-backed gulls (Larus fuscus) revealed a statistically significant optic flow set-point of ca. 25° s⁻¹. Thereafter, an optic flow-based flight model was applied to 18 offshore takeoff flights from nine individual gulls. By introducing an upper limit in climb rate on the elevation dynamics, coupled with an optic flow set-point, the predicted altitude gives an optimized fit factor value of 63% on average (range 30-83%) with respect to the GPS data. We conclude that the optic flow regulation principle helps gulls adjust their altitude over the sea without having to measure their current altitude directly.
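The optic-flow regulation principle described above implies a simple relation: if a gull holds the ventral optic flow ω = v/h at a set-point (the abstract reports ca. 25° s⁻¹), its altitude h tracks its ground speed v without h ever being measured. A toy illustration of that relation:

```python
import math

OMEGA_SET = math.radians(25.0)   # optic-flow set-point from the abstract, rad/s

def predicted_altitude(ground_speed):
    # Holding omega = v / h at OMEGA_SET implies h = v / OMEGA_SET:
    # faster flight at the same optic flow means a proportionally higher altitude.
    return ground_speed / OMEGA_SET

print(round(predicted_altitude(10.0), 1))   # 10 m/s ground speed -> ~22.9 m
```

The 10 m/s ground speed is an illustrative value, not one from the paper; the point is only that altitude scales linearly with speed under a fixed optic-flow set-point.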


Subjects
Altitude, Charadriiformes/physiology, Animal Flight/physiology, Biological Models, Ocular Vision, Animals, Oceans and Seas
11.
Bioinspir Biomim ; 14(3): 036002, 2019 02 14.
Article in English | MEDLINE | ID: mdl-30654332

ABSTRACT

Here we present a novel bio-inspired visual processing system which enables a robot to locate and follow a target, using an artificial compound eye called CurvACE. This visual sensor actively scanned the environment at an imposed frequency (50 Hz) with an angular scanning amplitude of [Formula: see text], and succeeded in locating a textured cylindrical target with hyperacuity, i.e. much finer resolution than the coarse inter-receptor angle of the compound eye. Equipped with this small, lightweight visual scanning sensor, a Mecanum-wheeled mobile robot named ACEbot was able to follow a target at a constant distance by localizing the right and left edges of the target. The localization of the target's contrasted edges is based on a bio-inspired summation of Gaussian receptive fields in the visual system. By means of its auto-adaptive pixels, ACEbot consistently achieved similar pursuit performances under various lighting conditions, with a high level of repeatability. The robotic pursuit pattern closely mimics the behavior of the male fly Syritta pipiens L. pursuing the female. The high similarity of the trajectories, together with the biomimicry of the visual system, provides strong support for the hypothesis that flies keep the target centered and its subtended angle constant during smooth pursuit. Moreover, we discuss the fact that such a simple strategy can also produce a trajectory compatible with motion camouflage.


Subjects
Animal Behavior, Biomimetics, Diptera, Locomotion, Robotics, Animals, Male
12.
Science ; 361(6407): 1073-1074, 2018 09 14.
Article in English | MEDLINE | ID: mdl-30213902
13.
Sci Rep ; 7(1): 9231, 2017 08 23.
Article in English | MEDLINE | ID: mdl-28835634

ABSTRACT

Studies on insects' visual guidance systems have shed little light on how learning contributes to insects' altitude control system. In this study, honeybees were trained to fly along a double-roofed tunnel after entering it near either the ceiling or the floor of the tunnel. The honeybees trained to hug the ceiling therefore encountered a sudden change in the tunnel configuration midway, i.e. a "dorsal ditch": the trained honeybees met a sudden increase in the distance to the ceiling, corresponding to a sudden strong change in the visual cues available in their dorsal field of view. Honeybees reacted by rising quickly and hugging the new, higher ceiling, keeping a forward speed, distance to the ceiling and dorsal optic flow similar to those observed during the training step, whereas bees trained to follow the floor kept on following the floor regardless of the change in the ceiling height. When trained honeybees entered the tunnel via the other entry (the lower or upper entry) to that used during the training step, they quickly changed their altitude and hugged the surface they had previously learned to follow. These findings clearly show that trained honeybees control their altitude based on visual cues memorized during training. The memorized visual cues generated by the surfaces followed form a complex optic flow pattern: trained honeybees may attempt to match the visual cues they perceive with this memorized optic flow pattern by controlling their altitude.


Subjects
Altitude, Bees/physiology, Animal Behavior, Animals, Animal Flight, Spatial Learning, Ocular Vision
14.
Arthropod Struct Dev ; 46(5): 703-717, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28655645

ABSTRACT

Flying insects are able to fly smartly in an unpredictable environment. It has been found that flying insects have smart neurons inside their tiny brains that are sensitive to visual motion, also called optic flow. Consequently, flying insects rely mainly on visual motion during flight maneuvers such as takeoff or landing, terrain following, tunnel crossing, lateral and frontal obstacle avoidance, and adjusting flight speed in a cluttered environment. Optic flow can be defined as the vector field of the apparent motion of objects, surfaces, and edges in a visual scene generated by the relative motion between an observer (an eye or a camera) and the scene. Translational optic flow is particularly interesting for short-range navigation because it depends on the ratio between (i) the relative linear speed of the visual scene with respect to the observer and (ii) the distance of the observer from obstacles in the surrounding environment, without any direct measurement of either speed or distance. In flying insects, the roll stabilization reflex and yaw saccades attenuate any rotation at the eye level in roll and yaw respectively (i.e. they cancel any rotational optic flow) in order to ensure pure translational optic flow between two successive saccades. Our survey focuses on the feedback loops using translational optic flow that insects employ for collision-free navigation. Over the next decade, optic flow is likely to be one of the most important visual cues for explaining flying insects' behaviors during short-range navigation maneuvers in complex tunnels. Conversely, the biorobotic approach can help to develop innovative flight control systems for flying robots, with the aim of mimicking flying insects' abilities and better understanding their flight.
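The defining property of translational optic flow stated above, namely that it is the ratio of relative speed to obstacle distance, can be written out directly. A minimal sketch with illustrative numbers:

```python
import math

def translational_optic_flow(speed_mps, distance_m):
    # Translational optic flow is the angular velocity omega = v / d (rad/s)
    # generated by an obstacle at distance d for an observer moving at speed v.
    return speed_mps / distance_m

# Example: an insect flying at 2 m/s past a wall 0.5 m away
omega = translational_optic_flow(2.0, 0.5)
print(omega, math.degrees(omega))   # 4 rad/s, about 229 deg/s
```

Note that doubling the speed or halving the distance yields the same optic flow, which is exactly why this single cue can regulate clearance without measuring speed or distance separately.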


Subjects
Animal Flight, Insects/physiology, Optic Flow/physiology, Robotics, Animals, Environment, Biological Models
15.
Sensors (Basel) ; 17(3)2017 Mar 11.
Article in English | MEDLINE | ID: mdl-28287484

ABSTRACT

For use in autonomous micro air vehicles, visual sensors must not only be small, lightweight and insensitive to light variations; on-board autopilots also require fast and accurate optical flow measurements over a wide range of speeds. Using an auto-adaptive bio-inspired Michaelis-Menten Auto-adaptive Pixel (M²APix) analog silicon retina, in this article we present comparative tests of two optical flow calculation algorithms operating under lighting conditions from 6 × 10⁻⁷ to 1.6 × 10⁻² W·cm⁻² (i.e., from 0.2 to 12,000 lux for human vision). The contrast "time of travel" between two adjacent light-sensitive pixels was determined by thresholding and by cross-correlating the two pixels' signals, with a measurement frequency of up to 5 kHz for the 10 local motion sensors of the M²APix sensor. While both algorithms adequately measured optic flow between 25°/s and 1000°/s, thresholding gave a lower precision, mainly due to a larger number of outliers at higher speeds. Compared to thresholding, cross-correlation also allowed a higher rate of optical flow output (1195 Hz versus 99 Hz) but required substantially more computational resources.
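The "time of travel" principle compared above can be sketched with toy signals: a contrast seen by two pixels separated by an angle Δφ produces two time-shifted responses, and the optic flow is Δφ divided by the delay. Here the delay is recovered by cross-correlation, as in the better-performing of the two algorithms; the sample rate, inter-pixel angle and Gaussian contrast profiles are illustrative assumptions:

```python
import numpy as np

fs = 1000.0                               # sample rate, Hz (assumed)
dphi = np.radians(4.0)                    # inter-pixel angle (illustrative)
t = np.arange(0, 0.2, 1 / fs)
sig1 = np.exp(-((t - 0.05) ** 2) / 1e-4)  # contrast passes pixel 1 at 50 ms
sig2 = np.exp(-((t - 0.08) ** 2) / 1e-4)  # ...and pixel 2 at 80 ms

# Cross-correlate and convert the peak position to a lag in seconds
xcorr = np.correlate(sig2, sig1, mode="full")
lag = (np.argmax(xcorr) - (len(sig1) - 1)) / fs    # estimated delay, s
omega = dphi / lag                        # optic flow, rad/s
print(round(lag, 3), round(np.degrees(omega), 1))
```

A thresholding variant would instead timestamp each signal's crossing of a fixed level; the correlation approach uses the whole waveform, which is what makes it more outlier-resistant at a higher computational cost.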

16.
Bioinspir Biomim ; 11(6): 066007, 2016 11 10.
Article in English | MEDLINE | ID: mdl-27831937

ABSTRACT

Here we present a novel bio-inspired optic flow (OF) sensor and its application to visual guidance and odometry on a low-cost car-like robot called BioCarBot. The minimalistic OF sensor was robust to high-dynamic-range lighting conditions and to the various visual patterns encountered, thanks to its M²APix auto-adaptive pixels and the new cross-correlation OF algorithm implemented. The low-cost car-like robot estimated its velocity and steering angle, and therefore its position and orientation, via an extended Kalman filter (EKF) using only two downward-facing OF sensors and the Ackermann steering model. Indoor and outdoor experiments were carried out in which the robot was driven in closed-loop mode based on the velocity and steering angle estimates. The experimental results show that our novel OF sensor can deliver high-frequency measurements ([Formula: see text]) in a wide OF range (1.5-[Formula: see text]) and over a 7-decade range of light levels. The OF resolution was constant and could be adjusted as required (up to [Formula: see text]), and the OF precision obtained was relatively high (standard deviation of [Formula: see text] with an average OF of [Formula: see text], under the most demanding lighting conditions). An EKF-based algorithm gave the robot's position and orientation with relatively high accuracy (maximum errors outdoors at a very low light level: [Formula: see text] and [Formula: see text] over about [Formula: see text] and [Formula: see text]) despite the low-resolution control systems of the steering servo and the DC motor, as well as a simplified model identification and calibration. Finally, the minimalistic OF-based odometry results were compared to those obtained using measurements based on an inertial measurement unit (IMU) and a motor's speed sensor.
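The Ackermann steering model mentioned above propagates the robot's pose from its velocity and steering angle. A dead-reckoning sketch of that motion model (bicycle approximation) is below; the wheelbase and control inputs are illustrative, and the EKF fusion with optic-flow measurements used on BioCarBot is not reproduced here:

```python
import math

def ackermann_step(x, y, theta, v, delta, wheelbase, dt):
    # Propagate the pose (x, y, theta) over one time step dt:
    # the yaw rate follows from the steering angle delta and the wheelbase.
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / wheelbase) * math.tan(delta) * dt
    return x, y, theta

x = y = theta = 0.0
for _ in range(100):                     # 1 s of driving at 100 Hz
    x, y, theta = ackermann_step(x, y, theta, v=1.0, delta=0.1,
                                 wheelbase=0.25, dt=0.01)
print(round(x, 2), round(y, 2), round(theta, 2))
```

In an EKF, this model would serve as the prediction step, with the two downward-facing OF sensors supplying the velocity and steering-angle observations that correct it.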


Subjects
Algorithms, Automobiles, Optic Flow, Robotics/instrumentation, Animals, Automobile Driving, Calibration, Humans, Insects/physiology, Visual Pattern Recognition, Software
17.
Opt Express ; 23(5): 5614-35, 2015 Mar 09.
Article in English | MEDLINE | ID: mdl-25836794

ABSTRACT

In this paper, we present: (i) a novel analog silicon retina featuring auto-adaptive pixels that obey the Michaelis-Menten law, i.e. V = Vm·Iⁿ/(Iⁿ + σⁿ); (ii) a method of characterizing silicon retinas which makes it possible to accurately assess the pixels' response to transient luminous changes over a ±3-decade range, as well as changes in the initial steady-state intensity over a 7-decade range. The novel pixel, called M²APix, which stands for Michaelis-Menten Auto-Adaptive Pixel, can auto-adapt over a 7-decade range and responds appropriately to step changes of up to ±3 decades in size without saturating the Very Large Scale Integration (VLSI) transistors. Thanks to the intrinsic properties of the Michaelis-Menten equation, the pixel output always remains within a constant, limited voltage range. The range of the Analog-to-Digital Converter (ADC) was therefore adjusted so as to obtain a Least Significant Bit (LSB) voltage of 2.35 mV and an effective resolution of about 9 bits. The results presented here show that the M²APix produced a quasi-linear contrast response once it had adapted to the average luminosity. Unlike its biological counterparts, neither the sensitivity to changes in light nor the contrast response of the M²APix depends on the mean luminosity (i.e. the ambient lighting conditions). Lastly, a full comparison between the M²APix and the Delbrück auto-adaptive pixel is provided.
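The Michaelis-Menten law quoted above bounds the pixel output between 0 and Vm for any input intensity, which is why the pixel never saturates the ADC range. A small numerical sketch, with illustrative parameter values rather than the chip's:

```python
def michaelis_menten(I, Vm=1.0, n=1.0, sigma=1.0):
    # V = Vm * I^n / (I^n + sigma^n): output bounded in [0, Vm] for any I >= 0
    return Vm * I ** n / (I ** n + sigma ** n)

# Auto-adaptation shifts sigma toward the ambient intensity, recentring the
# response: the same relative intensity step then yields the same output
# across very different light levels.
for ambient in (1e-3, 1.0, 1e3):                       # three light levels
    v = michaelis_menten(2 * ambient, sigma=ambient)   # a 2x intensity step
    print(round(v, 3))                                 # ~0.667 at each level
```

The constant response to a fixed contrast across seven decades of ambient light is precisely the quasi-linear, luminosity-independent contrast response the abstract reports.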

18.
Bioinspir Biomim ; 10(2): 026003, 2015 Feb 26.
Article in English | MEDLINE | ID: mdl-25717052

ABSTRACT

Two bio-inspired guidance principles involving no reference frame are presented here and were implemented in a rotorcraft, which was equipped with panoramic optic flow (OF) sensors but (as in flying insects) no accelerometer. To test these two guidance principles, we built a tethered tandem rotorcraft called BeeRotor (80 grams), which was tested flying along a high-roofed tunnel. The aerial robot adjusts its pitch and hence its speed, hugs the ground and lands safely without any need for an inertial reference frame. The rotorcraft's altitude and forward speed are adjusted via two OF regulators piloting the lift and the pitch angle on the basis of the common-mode and differential rotor speeds, respectively. The robot equipped with two wide-field OF sensors was tested in order to assess the performances of the following two systems of guidance involving no inertial reference frame: (i) a system with a fixed eye orientation based on the curved artificial compound eye (CurvACE) sensor, and (ii) an active system of reorientation based on a quasi-panoramic eye which constantly realigns its gaze, keeping it parallel to the nearest surface followed. Safe automatic terrain following and landing were obtained with CurvACE under dim light to daylight conditions and the active eye-reorientation system over rugged, changing terrain, without any need for an inertial reference frame.


Subjects
Aircraft/instrumentation, Biomimetics/instrumentation, Animal Flight/physiology, Optic Flow/physiology, Photometry/instrumentation, Robotics/instrumentation, Accelerometry/instrumentation, Animals, Arthropod Compound Eye, Computer Simulation, Computer-Aided Design, Cues (Psychology), Equipment Design, Equipment Failure Analysis, Biological Models, Motion (Physics)
19.
Sensors (Basel) ; 14(11): 21702-21, 2014 Nov 17.
Article in English | MEDLINE | ID: mdl-25407908

ABSTRACT

The demand for bendable sensors increases constantly in the challenging field of soft and micro-scale robotics. We present here, in more detail, the flexible, functional, insect-inspired curved artificial compound eye (CurvACE) that was previously introduced in the Proceedings of the National Academy of Sciences (PNAS, 2013). This cylindrically bent sensor, with a large panoramic field of view of 180° × 60° composed of 630 artificial ommatidia, weighs only 1.75 g, is extremely compact and power-lean (0.9 W), while achieving unique visual motion sensing performance (1950 frames per second) over a five-decade range of illuminance. In particular, this paper details the innovative Very Large Scale Integration (VLSI) sensing layout, the accurate assembly fabrication process, the new fast read-out interface, as well as the auto-adaptive dynamic response of the CurvACE sensor. Starting from photodetectors and micro-optics on wafer substrates and a flexible printed circuit board, the complete assembly of CurvACE was performed in a planar configuration, ensuring high alignment accuracy and compatibility with state-of-the-art assembly processes. The characteristics of the photodetector of one artificial ommatidium have been assessed in terms of its dynamic response to light steps. We also characterized the local auto-adaptability of CurvACE photodetectors in response to large illuminance changes: this feature will certainly be of great interest for future applications in real indoor and outdoor environments.


Subjects
Biomimetics/instrumentation, Arthropod Compound Eye/physiology, Computer-Assisted Image Interpretation/instrumentation, Three-Dimensional Imaging/instrumentation, Photometry/instrumentation, Semiconductors, Computer-Assisted Signal Processing/instrumentation, Animals, Equipment Design, Equipment Failure Analysis, Artificial Eye, Insects/physiology, Lenses, Miniaturization, Systems Integration
20.
Bioinspir Biomim ; 9(3): 036003, 2014 Sep.
Article in English | MEDLINE | ID: mdl-24615558

ABSTRACT

Here we present the first systematic comparison between the visual guidance behaviour of a biomimetic robot and that of honeybees flying in similar environments. We built a miniature hovercraft which can travel safely along corridors with various configurations. For the first time, we implemented on a real physical robot the 'lateral optic flow regulation autopilot', which we had previously studied in computer simulations. This autopilot, inspired by the results of experiments on various species of hymenoptera, consists of two intertwined feedback loops, the speed and lateral control loops, each of which has its own optic flow (OF) set-point. A heading-lock system makes the robot move straight ahead as fast as 69 cm s⁻¹ with a clearance from one wall as small as 31 cm, giving an unusually high translational OF value (125° s⁻¹). Our biomimetic robot was found to navigate safely along straight, tapered and bent corridors, and to react appropriately to perturbations such as the lack of texture on one wall, the presence of a tapering or non-stationary section of the corridor, and even a sloping terrain equivalent to a wind disturbance. The front end of the visual system consists of only two local motion sensors (LMS), one on each side. This minimalistic visual system measuring the lateral OF suffices to control both the robot's forward speed and its clearance from the walls without ever measuring any speed or distance. We added two additional LMSs oriented at ±45° to improve the robot's performance in steeply tapered corridors. The simple control system accounts for worker bees' ability to navigate safely in six challenging environments: straight corridors, single walls, tapered corridors, straight corridors with part of one wall moving or missing, as well as in the presence of wind.
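The two intertwined feedback loops described above can be caricatured in a few lines: a speed loop holds the sum of the two lateral optic flows at one set-point, and a lateral loop holds the larger of the two at another, with neither speed nor distance ever measured. This is a toy 1-D sketch with made-up gains and set-points, not the autopilot's actual controller:

```python
def of_regulator_step(y, v, corridor_width, dt,
                      of_sum_set=4.0, of_side_set=2.5, kv=0.5, ky=0.02):
    d_left, d_right = y, corridor_width - y
    of_left, of_right = v / d_left, v / d_right        # lateral optic flows
    # Speed loop: regulate the sum of the two lateral OFs
    v += kv * (of_sum_set - (of_left + of_right)) * dt
    # Lateral loop: regulate the larger (nearer-wall) OF by shifting y
    err_side = of_side_set - max(of_left, of_right)
    y += (-ky if of_left > of_right else ky) * err_side * dt
    return y, v

y, v = 0.3, 0.5          # start near the left wall of a 1 m corridor
for _ in range(5000):    # 50 s at a 100 Hz control rate
    y, v = of_regulator_step(y, v, corridor_width=1.0, dt=0.01)
print(round(y, 2), round(v, 2))
```

The two loops settle jointly on a lateral position and a forward speed at which both OF set-points are met, which is how such a scheme yields wall clearance and speed control without any explicit distance or speed sensing.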


Subjects
Aircraft/instrumentation, Bees/physiology, Biomimetics/instrumentation, Animal Flight/physiology, Spatial Navigation/physiology, Visual Perception/physiology, Animals, Artificial Intelligence, Equipment Design, Equipment Failure Analysis, Physiological Feedback/physiology, Humans, Robotics/instrumentation