ABSTRACT
Attitude control is an essential flight capability. Whereas flying robots commonly rely on accelerometers [1] for estimating attitude, flying insects lack an unambiguous sense of gravity [2,3]. Despite the established role of several sense organs in attitude stabilization [3-5], it remains unclear whether flying insects depend on an internal estimate of the gravity direction. Here we show how attitude can be extracted from optic flow when combined with a motion model that relates attitude to acceleration direction. Although there are conditions, such as hover, in which attitude is unobservable, we prove that the ensuing control system is still stable, continuously moving into and out of these conditions. Flying-robot experiments confirm that accommodating unobservability in this manner leads to stable, albeit slightly oscillatory, attitude control. Moreover, experiments with a bio-inspired flapping-wing robot show that the residual high-frequency attitude oscillations caused by the flapping motion improve observability. The presented approach holds promise for robotics, with accelerometer-less autopilots paving the road towards insect-scale autonomous flying robots [6]. Finally, it forms a hypothesis on insect attitude estimation and control, with the potential to provide further insight into known biological phenomena [5,7,8] and to generate new predictions, such as a reduced head and body attitude variance at higher flight speeds [9].
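To make the estimation principle concrete, the sketch below simulates a planar vehicle whose pitch tilts the thrust vector while linear drag slows it down, and runs an extended Kalman filter whose only exteroceptive measurement is the ventral optic flow (ground speed divided by height). This is a simplified illustration under assumed dynamics, gains and noise levels, not the paper's estimator; the drag coefficient, state choice and filter tuning are all hypothetical.

```python
# Minimal planar sketch (assumed model, not the paper's estimator): pitch drives
# horizontal acceleration through a linear-drag motion model, and an EKF estimates
# the state -- including pitch -- from ventral optic flow alone (no accelerometer).
import numpy as np

g, k_drag, dt = 9.81, 0.5, 0.02           # gravity, drag coefficient, time step (assumed)

def f(x, q, dt=dt):
    """Propagate the state x = [vx, h, theta]; q is the gyro-measured pitch rate."""
    vx, h, th = x
    ax = -g * np.tan(th) - k_drag * vx    # thrust tilt plus linear drag
    return np.array([vx + ax * dt, h, th + q * dt])

def h_meas(x):
    """Ventral optic flow: ratio of ground speed to height."""
    vx, h, _ = x
    return np.array([vx / h])

def jac_f(x, dt=dt):
    _, _, th = x
    return np.array([[1.0 - k_drag * dt, 0.0, -g * dt / np.cos(th) ** 2],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])

def jac_h(x):
    vx, h, _ = x
    return np.array([[1.0 / h, -vx / h ** 2, 0.0]])

def ekf_step(x, P, q, z,
             Q=np.diag([1e-3, 1e-4, 1e-4]), R=np.array([[1e-3]])):
    """One predict/update cycle driven by pitch rate q and optic-flow measurement z."""
    x_pred, F = f(x, q), jac_f(x)
    P_pred = F @ P @ F.T + Q
    H = jac_h(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (np.atleast_1d(z) - h_meas(x_pred))
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new

# Usage: start uncertain about pitch and refine it from optic flow alone.
x, P = np.array([1.0, 2.0, 0.0]), np.eye(3)
x, P = ekf_step(x, P, q=0.0, z=0.55)
```

Pitch enters only through the process model (the −g·tan θ term), so during steady hover the optic-flow measurements carry no pitch information; this is the unobservable condition the abstract refers to, and the filter regains pitch information only once the vehicle accelerates.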
Subject(s)
Biomechanical Phenomena; Optic Flow; Robotics; Animals; Flight, Animal; Insecta; Models, Biological; Robotics/methods; Wings, Animal; Accelerometry; Biomimetics; Biomimetic Materials; Motion
ABSTRACT
Both neurophysiological and psychophysical experiments have pointed out the crucial role of recurrent and feedback connections in processing context-dependent information in the early visual cortex. While numerous models have accounted for feedback effects at either the neural or the representational level, none of them has been able to bind these two levels of analysis. Is it possible to describe feedback effects at both levels using the same model? We answer this question by combining Predictive Coding (PC) and Sparse Coding (SC) into a hierarchical and convolutional framework applied to realistic problems. In the Sparse Deep Predictive Coding (SDPC) model, the SC component models the internal recurrent processing within each layer, and the PC component describes the interactions between layers using feedforward and feedback connections. Here, we train a two-layer SDPC on two different databases of images, and we interpret it as a model of the early visual system (V1 and V2). We first demonstrate that once training has converged, the SDPC exhibits oriented and localized receptive fields in V1 and more complex features in V2. Second, we analyze the effects of feedback on the neural organization beyond the classical receptive field of V1 neurons using interaction maps. These maps are similar to association fields and reflect the Gestalt principle of good continuation. We demonstrate that feedback signals reorganize interaction maps and modulate neural activity to promote contour integration. Third, we demonstrate at the representational level that the SDPC feedback connections are able to overcome noise in input images. Therefore, the SDPC captures the association-field principle at the neural level, which results in a better reconstruction of blurred images at the representational level.
Subject(s)
Deep Learning; Models, Neurological; Visual Pathways; Algorithms; Animals; Computational Biology; Feedback; Female; Humans; Image Processing, Computer-Assisted; Male; Visual Cortex/physiology
ABSTRACT
To investigate altitude control in honeybees, an optical configuration was designed to manipulate or cancel the optic flow. It is widely accepted that honeybees rely on the optic flow generated by the ground to control their altitude. Here, we create an optical configuration enabling a better understanding of the mechanism of altitude control in honeybees, aiming to mimic some of the conditions that honeybees experience over a natural water body. An optical manipulation based on a pair of opposed horizontal mirrors was designed to remove any visual information coming from the floor and the ceiling. This manipulation allowed us to get closer to the seminal experiment of Heran & Lindauer (1963, Zeitschrift für vergleichende Physiologie 47, 39-55; doi:10.1007/BF00342890). Our results confirmed that a reduction or an absence of ventral optic flow causes honeybees to lose altitude and eventually collide with the floor.
Subject(s)
Flight, Animal; Optic Flow; Altitude; Animals; Bees; Vision, Ocular
ABSTRACT
Hierarchical sparse coding (HSC) is a powerful model for efficiently representing multidimensional, structured data such as images. The simplest way to solve this computationally hard problem is to decompose it into independent layer-wise subproblems. However, neuroscientific evidence suggests interconnecting these subproblems, as in predictive coding (PC) theory, which adds top-down connections between consecutive layers. In this study, we introduce a new model, 2-layer sparse predictive coding (2L-SPC), to assess the impact of this interlayer feedback connection. In particular, the 2L-SPC is compared with a hierarchical Lasso (Hi-La) network made of a sequence of independent Lasso layers. The 2L-SPC and the 2-layer Hi-La network are trained on four different databases and with different sparsity parameters on each layer. First, we show that the overall prediction error generated by the 2L-SPC is lower because the feedback mechanism transfers prediction error between layers. Second, we demonstrate that the inference stage of the 2L-SPC converges faster and generates a more refined representation in the second layer than the Hi-La model. Third, we show that the 2L-SPC top-down connection accelerates the learning of the HSC problem. Finally, the analysis of the emerging dictionaries shows that the 2L-SPC features are more generic and have a larger spatial extent.
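For readers unfamiliar with the layer-wise baseline, the sketch below spells out what one independent Lasso layer amounts to, using an ISTA solver; the dictionary sizes, step size and sparsity penalty are illustrative assumptions, not the authors' implementation. Stacking two such calls with no top-down term linking them is the Hi-La-style inference that the 2L-SPC feedback connection modifies.

```python
# Minimal sketch of a single sparse-coding layer solved with ISTA, the kind of
# layer-wise Lasso subproblem a Hi-La-style network stacks; dictionary shapes,
# step size and sparsity penalty are illustrative assumptions.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_layer(x, D, lam=0.1, n_iter=100):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 for one input vector x."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        a = soft_threshold(a - grad / L, lam / L)
    return a

# Layer-wise (feedback-free) inference: each layer encodes the code of the
# layer below, with no top-down connection between the two subproblems.
rng = np.random.default_rng(0)
D1 = rng.standard_normal((256, 512))       # first-layer dictionary (pixels -> a1)
D2 = rng.standard_normal((512, 128))       # second-layer dictionary (a1 -> a2)
x = rng.standard_normal(256)               # a flattened image patch
a1 = ista_layer(x, D1, lam=0.2)
a2 = ista_layer(a1, D2, lam=0.2)
```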
ABSTRACT
For use in autonomous micro air vehicles, visual sensors must not only be small, lightweight and insensitive to light variations; on-board autopilots also require fast and accurate optical flow measurements over a wide range of speeds. Using a bio-inspired Michaelis-Menten Auto-adaptive Pixel (M²APix) analog silicon retina, in this article we present comparative tests of two optical flow calculation algorithms operating under lighting conditions from 6 × 10⁻⁷ to 1.6 × 10⁻² W·cm⁻² (i.e., from 0.2 to 12,000 lux for human vision). The contrast "time of travel" between two adjacent light-sensitive pixels was determined either by thresholding or by cross-correlating the two pixels' signals, with a measurement frequency of up to 5 kHz for the 10 local motion sensors of the M²APix sensor. While both algorithms adequately measured optical flow between 25°/s and 1000°/s, thresholding gave rise to a lower precision, mainly because of a larger number of outliers at higher speeds. Compared to thresholding, cross-correlation also allowed a higher rate of optical flow output (99 Hz and 1195 Hz, respectively) but required substantially more computational resources.
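The "time of travel" principle lends itself to a compact illustration: divide the inter-receptor angle by the delay between the two pixel signals, with the delay taken from the cross-correlation peak. The sketch below is a toy version under an assumed sampling rate, inter-receptor angle and signal shape, not the sensor's embedded algorithm.

```python
# Illustrative "time of travel" optic-flow estimate: the flow seen by two adjacent
# photoreceptors is the inter-receptor angle divided by the delay between their
# signals, here estimated by cross-correlation. Sampling rate, inter-receptor
# angle and the synthetic edge are assumptions, not the sensor's values.
import numpy as np

FS = 5000.0                 # sampling frequency (Hz), assumed
DELTA_PHI = 4.0             # inter-receptor angle (degrees), assumed

def time_of_travel(sig_a, sig_b, fs=FS):
    """Delay (s) of sig_b relative to sig_a, from the cross-correlation peak."""
    sig_a = sig_a - sig_a.mean()
    sig_b = sig_b - sig_b.mean()
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)   # samples by which b lags a
    return lag / fs

def optic_flow_deg_per_s(sig_a, sig_b, fs=FS, delta_phi=DELTA_PHI):
    dt = time_of_travel(sig_a, sig_b, fs)
    return np.inf if dt == 0 else delta_phi / dt

# Synthetic check: a contrast edge crosses pixel A, then pixel B 10 ms later.
t = np.arange(0, 0.2, 1 / FS)
edge = lambda t0: 1.0 / (1.0 + np.exp(-(t - t0) * 400))
print(optic_flow_deg_per_s(edge(0.05), edge(0.06)))   # ~ 4 deg / 0.01 s = 400 deg/s
```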
ABSTRACT
In most animal species, vision is mediated by compound eyes, which offer lower resolution than vertebrate single-lens eyes, but significantly larger fields of view with negligible distortion and spherical aberration, as well as high temporal resolution in a tiny package. Compound eyes are ideally suited for fast panoramic motion perception. Engineering a miniature artificial compound eye is challenging because it requires accurate alignment of photoreceptive and optical components on a curved surface. Here, we describe a unique design method for biomimetic compound eyes featuring a panoramic, undistorted field of view in a very thin package. The design consists of three planar layers of separately produced arrays, namely, a microlens array, a neuromorphic photodetector array, and a flexible printed circuit board that are stacked, cut, and curved to produce a mechanically flexible imager. Following this method, we have prototyped and characterized an artificial compound eye bearing a hemispherical field of view with embedded and programmable low-power signal processing, high temporal resolution, and local adaptation to illumination. The prototyped artificial compound eye possesses several characteristics similar to the eye of the fruit fly Drosophila and other arthropod species. This design method opens up additional vistas for a broad range of applications in which wide field motion detection is at a premium, such as collision-free navigation of terrestrial and aerospace vehicles, and for the experimental testing of insect vision theories.
Subject(s)
Biomimetics/methods; Compound Eye, Arthropod/anatomy & histology; Models, Anatomic; Robotics/methods; Synthetic Biology/methods; Animals; Biomimetics/instrumentation; Motion Perception/physiology
ABSTRACT
In this paper, we present: (i) a novel analog silicon retina featuring auto-adaptive pixels that obey the Michaelis-Menten law, i.e. V = Vₘ·Iⁿ/(Iⁿ + σⁿ); (ii) a method for characterizing silicon retinas, which makes it possible to accurately assess the pixels' response to transient luminous changes in a ±3-decade range, as well as changes in the initial steady-state intensity in a 7-decade range. The novel pixel, called M²APix, which stands for Michaelis-Menten Auto-Adaptive Pixel, can auto-adapt in a 7-decade range and responds appropriately to step changes of up to ±3 decades in size without saturating the Very Large Scale Integration (VLSI) transistors. Thanks to the intrinsic properties of the Michaelis-Menten equation, the pixel output always remains within a constant, limited voltage range. The range of the Analog to Digital Converter (ADC) was therefore adjusted so as to obtain a Least Significant Bit (LSB) voltage of 2.35 mV and an effective resolution of about 9 bits. The results presented here show that the M²APix produces a quasi-linear contrast response once it has adapted to the average luminosity. Unlike its biological counterparts, neither the sensitivity to changes in light nor the contrast response of the M²APix depends on the mean luminosity (i.e. the ambient lighting conditions). Lastly, a full comparison between the M²APix and the Delbrück auto-adaptive pixel is provided.
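A small numeric sketch can make the auto-adaptation property explicit: if the adaptation variable σ tracks the mean luminosity, the output excursion produced by a given contrast step is the same across many decades of ambient light. The exponent, Vₘ value and adaptation rule below are illustrative assumptions rather than the chip's measured parameters.

```python
# Numeric sketch of the Michaelis-Menten auto-adaptation idea: the output
# V = Vm * I^n / (I^n + sigma^n) stays bounded in [0, Vm], and if sigma tracks
# the mean luminosity, the response to a given contrast step becomes nearly
# independent of the ambient light level. All parameter values are assumptions.
import numpy as np

def mm_response(I, sigma, Vm=3.0, n=0.7):
    """Michaelis-Menten (Naka-Rushton-like) pixel output for intensity I."""
    return Vm * I**n / (I**n + sigma**n)

def step_response(I_mean, contrast, Vm=3.0, n=0.7):
    """Output change for a contrast step, with sigma adapted to the mean level."""
    sigma = I_mean                       # assumed adaptation: sigma follows the mean
    return (mm_response(I_mean * (1 + contrast), sigma, Vm, n)
            - mm_response(I_mean, sigma, Vm, n))

# Same +30 % contrast step at ambient levels spanning 7 decades:
for I_mean in 10.0 ** np.arange(-4, 4):
    print(f"I = {I_mean:10.4g}  ->  dV = {step_response(I_mean, 0.3):.4f} V")
```

Because σ equals the mean intensity here, the step response depends only on the contrast, which is the constancy property the abstract describes.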
ABSTRACT
The demand for bendable sensors increases constantly in the challenging field of soft and micro-scale robotics. We present here, in more detail, the flexible, functional, insect-inspired curved artificial compound eye (CurvACE) that was previously introduced in the Proceedings of the National Academy of Sciences (PNAS, 2013). This cylindrically bent sensor, with a large panoramic field of view of 180° × 60° composed of 630 artificial ommatidia, weighs only 1.75 g, is extremely compact and power-lean (0.9 W), and achieves unique visual motion sensing performance (1950 frames per second) over a five-decade range of illuminance. In particular, this paper details the innovative Very Large Scale Integration (VLSI) sensing layout, the accurate assembly fabrication process, the new fast read-out interface, and the auto-adaptive dynamic response of the CurvACE sensor. Starting from photodetectors and micro-optics on wafer substrates and a flexible printed circuit board, the complete assembly of CurvACE was performed in a planar configuration, ensuring high alignment accuracy and compatibility with state-of-the-art assembly processes. The characteristics of the photodetector of one artificial ommatidium were assessed in terms of its dynamic response to light steps. We also characterized the local auto-adaptability of the CurvACE photodetectors in response to large illuminance changes: this feature will certainly be of great interest for future applications in real indoor and outdoor environments.
Subject(s)
Biomimetics/instrumentation; Compound Eye, Arthropod/physiology; Image Interpretation, Computer-Assisted/instrumentation; Imaging, Three-Dimensional/instrumentation; Photometry/instrumentation; Semiconductors; Signal Processing, Computer-Assisted/instrumentation; Animals; Equipment Design; Equipment Failure Analysis; Eye, Artificial; Insecta/physiology; Lenses; Miniaturization; Systems Integration
ABSTRACT
When attempting to land on a ship deck tossed by the sea, helicopter pilots must make sure that the helicopter can develop sufficient lift to touch down safely. This reminder of affordance theory led us to model and study the affordance of deck-landing-ability, which defines whether it is possible to land safely on a ship deck given the helicopter's available lift and the deck's heave movements. Two groups of participants with no piloting experience used a laptop helicopter simulator to attempt to land either a low-lifter or a heavy-lifter helicopter on a virtual ship deck, either by triggering a pre-programmed lift serving as the descent law if landing was deemed possible, or by aborting the deck-landing maneuver. The deck-landing-ability was manipulated by varying the helicopter's initial altitude and the ship's heave phase between trials. We designed a visual augmentation that makes the deck-landing-ability visible, enabling participants to maximize the safety of their deck-landing attempts and to reduce the number of unsafe deck-landings. Participants perceived this visual augmentation as a means of facilitating the decision-making process. Its benefits were found to originate from the clear-cut distinction it helped them make between safe and unsafe deck-landing windows and from the display of the optimal time for initiating the landing.
ABSTRACT
Honeybees foraging and recruiting nest-mates by performing the waggle dance need to be able to gauge the flight distance to the food source regardless of the wind and terrain conditions. Previous authors have hypothesized that the foragers' visual odometer mathematically integrates the angular velocity of the ground image sweeping backward across their ventral viewfield, known as translational optic flow. The question arises as to how mathematical integration of optic flow (usually expressed in radians/s) can reliably encode distances, regardless of the height and speed of flight. The vertical self-oscillatory movements observed in honeybees trigger expansions and contractions of the optic flow vector field, yielding an additional visual cue called optic flow divergence. We have developed a self-scaled model for the visual odometer in which the translational optic flow is scaled by the visually estimated current clearance from the ground. In simulation, this model, which we have called SOFIa, was found to be reliable in a large range of flight trajectories, terrains and wind conditions. It reduced the statistical dispersion of the estimated flight distances approximately 10-fold in comparison with the mathematically integrated raw optic flow model. The SOFIa model can be directly implemented in robotic applications based on minimalistic visual equipment.
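The scaling idea can be illustrated in a few lines: raw integration of the translational optic flow accumulates radians, whereas rescaling it by a clearance estimate obtained from optic-flow divergence during self-generated vertical oscillations accumulates metres. The sketch below is a rough, noise-free illustration of that principle, not the authors' SOFIa implementation; the signal names, oscillation model and thresholds are assumptions.

```python
# Rough sketch of a self-scaled visual odometer: ventral translational optic flow
# w = v/h is rescaled by a clearance estimate derived from optic-flow divergence
# (div = vz/h, with vz assumed known from the self-generated oscillation), then
# integrated into a distance. All signals and thresholds are illustrative.
import numpy as np

dt = 0.01
t = np.arange(0, 60, dt)
v = 4.0 + 0.5 * np.sin(0.1 * t)              # ground speed (m/s), varies
h = 3.0 + 1.0 * np.sin(0.3 * t)              # flight height (m), varies
vz = np.gradient(h, dt)                      # self-generated climb/descent rate

omega = v / h                                # translational optic flow (rad/s)
div = vz / h                                 # optic-flow divergence (1/s)

# Clearance estimate from divergence wherever vz is informative, held otherwise.
h_hat = np.full_like(h, np.nan)
ok = np.abs(div) > 0.02
h_hat[ok] = vz[ok] / div[ok]
h_hat = np.nan_to_num(h_hat, nan=np.nanmedian(h_hat))

raw_odometer = np.sum(omega) * dt            # integrates radians, not metres
scaled_odometer = np.sum(omega * h_hat) * dt # rescaled flow integrates to metres
true_distance = np.sum(v) * dt
print(raw_odometer, scaled_odometer, true_distance)
```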
Subject(s)
Flight, Animal; Robotics; Animals; Bees; Wind
ABSTRACT
Helicopter landing on a ship is a visually regulated "rendezvous" task during which pilots must use fine control to land a powerful rotorcraft on the deck of a moving ship tossed by the sea while minimizing the energy at impact. Although augmented-reality assistance can be hypothesized to improve pilots' performance and the safety of landing maneuvers by guiding action toward optimal behavior in complex and stressful situations, the question of the optimal information to display to feed the pilots' natural information-movement coupling remains to be investigated. Novice participants were instructed to land a simplified helicopter on a ship in a virtual reality simulator while minimizing the energy at impact and the landing duration. The wave amplitude and the related ship heave were manipulated. We compared the benefits of two types of visual augmentation, designed either to solve cockpit-induced visual occlusion problems or to strengthen the online regulation of the deceleration by keeping the current [Formula: see text] variable around an ideal value of -0.5 so as to achieve a smooth and efficient landing. Our results showed that the second, ecologically grounded augmentation offers benefits at several levels of analysis: it decreases the landing duration, improves the control of the helicopter displacement, and sharpens the sensitivity to changes in [Formula: see text]. This underlines the importance, for designers of augmented-reality systems, of collaborating with psychologists to identify the relevant perceptual-motor strategy to be encouraged before designing an augmentation that enhances it.
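The bracketed variable lost in extraction cannot be restored with certainty, but the ideal value of -0.5 strongly suggests the tau-dot quantity of Lee's general tau theory; under that assumption, a hedged reconstruction of the regulated quantity is:

```latex
\[
  \tau(t) \;=\; -\,\frac{D(t)}{\dot{D}(t)}, \qquad \dot{\tau}(t) \;\approx\; -0.5,
\]
```

where D > 0 is the remaining gap between the helicopter and the deck and Ḋ < 0 its closure rate. A constant closure speed gives τ̇ = -1, whereas holding τ̇ near -0.5 corresponds to a deceleration that brings the closure rate to zero exactly as the gap vanishes, i.e. a smooth touchdown.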
Subject(s)
Aircraft; Aviation; Pilots; Ships; Adult; Eye Movements/physiology; Female; Humans; Male; Military Personnel; Task Performance and Analysis; Young Adult
ABSTRACT
Miniature multi-rotors are promising robots for navigating subterranean networks, but maintaining a radio connection underground is challenging. In this paper, we introduce a distributed algorithm, called U-Chain (for Underground-chain), that coordinates a chain of flying robots between an exploration drone and an operator. Our algorithm uses only the measured signal quality between two successive robots and an estimate of the ground speed based on an optic flow sensor. It leverages a distributed policy for each UAV and a Kalman filter to obtain reliable estimates of the signal quality. We evaluate our approach formally and in simulation, and we describe experimental results with a chain of three real miniature quadrotors (12 cm × 12 cm) and a base station.
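As an illustration of the filtering step, the sketch below smooths a noisy per-packet link-quality reading with a scalar Kalman filter under a random-walk model; the state model, noise variances and RSSI values are assumptions for the example, not parameters from the paper.

```python
# Illustrative scalar Kalman filter for smoothing a noisy link-quality reading
# (e.g. RSSI) between two successive robots. Random-walk model and noise
# variances are assumptions, not the paper's tuning.

class ScalarKalman:
    def __init__(self, x0=0.0, p0=10.0, q=0.05, r=4.0):
        self.x, self.p = x0, p0        # state estimate and its variance
        self.q, self.r = q, r          # process and measurement noise variances

    def update(self, z):
        # predict: link quality modelled as a slowly drifting random walk
        self.p += self.q
        # correct with the new raw measurement z
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

# Usage: feed raw per-packet RSSI samples, act on the filtered value.
kf = ScalarKalman(x0=-60.0)
for rssi in [-61, -75, -59, -62, -90, -63]:   # dBm, with two outliers
    print(round(kf.update(rssi), 1))
```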
ABSTRACT
When insects are flying forward, the image of the ground sweeps backward across their ventral viewfield and forms an "optic flow," which depends on both the groundspeed and the groundheight. To explain how these animals manage to avoid the ground by using this visual motion cue, we suggest that insect navigation hinges on a visual-feedback loop we have called the optic-flow regulator, which controls the vertical lift. To test this idea, we built a micro-helicopter equipped with an optic-flow regulator and a bio-inspired optic-flow sensor. This fly-by-sight micro-robot can perform exacting tasks such as take-off, level flight, and landing. Our control scheme accounts for many hitherto unexplained findings published during the last 70 years on insects' visually guided performances; for example, it accounts for the fact that honeybees descend in a headwind, land with a constant slope, and drown when travelling over mirror-smooth water. Our control scheme explains how insects manage to fly safely without any of the instruments used onboard aircraft to measure the groundheight, groundspeed, and descent speed. An optic-flow regulator is quite simple in terms of its neural implementation and just as appropriate for insects as it would be for aircraft.
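A toy closed loop conveys the essence of the optic-flow regulator described above: the ventral optic flow vx/h is compared with a set-point and the error drives the vertical lift, so the height settles onto a value proportional to the ground speed without either quantity being measured on its own. The dynamics, gains and set-point below are illustrative assumptions, not the micro-helicopter's parameters.

```python
# Toy optic-flow regulator: the error between the measured ventral optic flow
# (vx / h) and a set-point drives the lift, with a damping term on vertical
# speed. All values are illustrative assumptions.
dt, T = 0.01, 40.0
OF_SET = 2.0                  # optic-flow set-point (rad/s), assumed
KP, KD = 0.5, 0.4             # regulator gains, assumed
m, g = 0.1, 9.81              # mass (kg) and gravity

h, vz = 1.0, 0.0              # initial height (m) and vertical speed (m/s)
for k in range(int(T / dt)):
    t = k * dt
    vx = 1.0 if t < 20 else 3.0                              # forward speed steps up at t = 20 s
    of = vx / h                                              # ventral optic flow measurement
    lift = max(0.0, m * g + KP * (of - OF_SET) - KD * vz)    # excess OF -> extra lift -> climb
    vz += (lift / m - g) * dt
    h = max(h + vz * dt, 0.05)
    if k % 500 == 0:
        print(f"t={t:5.1f}s  vx={vx:.1f}  h={h:.2f}  OF={of:.2f}")
```

At steady state the optic flow returns to the set-point, so the height tracks the forward speed (here roughly h ≈ vx/2); this is the behaviour invoked above to explain descent in a headwind and constant-slope landings.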
Subject(s)
Flight, Animal/physiology; Insecta/physiology; Models, Biological; Motion Perception/physiology; Robotics/instrumentation; Robotics/methods; Vision, Ocular/physiology; Altitude; Animals; Wind
ABSTRACT
To further elucidate the mechanisms underlying insects' height and speed control, we trained outdoor honeybees to fly along a high-roofed tunnel, part of which was equipped with a moving floor. Honeybees followed the stationary part of the floor at a given height. On encountering the moving part of the floor, which moved in the same direction as their flight, honeybees descended and flew at a lower height, thus gradually restoring their ventral optic flow (OF) to a value similar to the one they had perceived when flying over the stationary part of the floor. This was achieved not by increasing their airspeed but by lowering their height of flight. These results can be accounted for by a control system called an optic flow regulator, as proposed in previous studies. This visuo-motor control scheme explains how honeybees can navigate safely along tunnels on the sole basis of OF measurements, without any need to measure either their speed or their clearance from the surrounding walls.
Subject(s)
Bees/physiology; Behavior, Animal/physiology; Animals; Motion Perception/physiology; Motor Activity/physiology; Visual Pathways/physiology; Visual Perception/physiology
ABSTRACT
The homing journeys of nine loggerhead turtles translocated from their nesting beach to offshore release sites were reconstructed through Argos and GPS telemetry, while their water-related orientation was simultaneously recorded at high temporal resolution by multi-sensor data loggers featuring a three-axis magnetic sensor. All turtles managed to return to the nesting beach area, although along indirect routes comprising an initial straight leg not precisely oriented towards home and a subsequent homebound segment carried out along the coast. Logger data revealed that, after an initial period of disorientation, turtles were able to maintain a consistent direction precisely for several hours while moving in the open sea, even during night-time. Their water-related headings were in accordance with the orientation of the resulting route, showing little or no effect of current drift. This study reveals a biphasic homing strategy of displaced turtles, involving an initial orientation only weakly related to home and a subsequent shift to coastal navigation, which is in line with the modern conceptual framework of animal migratory navigation as deriving from sequential mechanisms acting at different spatial scales.
Subject(s)
Animal Migration/physiology; Homing Behavior/physiology; Orientation/physiology; Turtles/physiology; Animals; Magnetics; Seasons; Seawater; Telemetry
ABSTRACT
Here we present a novel bio-inspired visual processing system that enables a robot to locate and follow a target, using an artificial compound eye called CurvACE. This visual sensor actively scanned the environment at an imposed frequency (50 Hz) with an angular scanning amplitude of [Formula: see text] and succeeded in locating a textured cylindrical target with hyperacuity, i.e. a much finer resolution than the coarse inter-receptor angle of the compound eye. Equipped with this small, lightweight visual scanning sensor, a Mecanum-wheeled mobile robot named ACEbot was able to follow a target at a constant distance by localizing the target's right and left edges. The localization of the target's contrasted edges is based on a bio-inspired summation of Gaussian receptive fields in the visual system. By means of its auto-adaptive pixels, ACEbot consistently achieved similar pursuit performance under various lighting conditions with a high level of repeatability. The robotic pursuit pattern closely mimics the behavior of the male fly Syritta pipiens L. while pursuing the female. The high similarity of the trajectories, as well as the biomimicry of the visual system, provides strong support for the hypothesis that flies keep the target centered and its subtended angle constant during smooth pursuit. Moreover, we discuss the fact that such a simple strategy can also produce a trajectory compatible with motion camouflage.
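To give an idea of how overlapping Gaussian receptive fields can yield hyperacuity, the sketch below localizes a contrast edge from the normalized difference of two Gaussian-sensitivity photoreceptors; the inter-receptor angle, field width and edge model are illustrative assumptions, and this is not the ACEbot processing chain.

```python
# Illustrative hyperacute edge localization with two overlapping Gaussian
# receptive fields: the normalized difference of the two photoreceptor responses
# varies monotonically with the edge's angular position, so the position can be
# recovered far more finely than the inter-receptor angle. Values are assumed.
import numpy as np

DPHI = 4.0            # inter-receptor angle (deg), assumed
SIGMA = 2.0           # Gaussian half-width of each receptive field (deg), assumed
angles = np.linspace(-10, 10, 4001)                  # fine angular grid (deg)
step = angles[1] - angles[0]

def receptor_response(edge_pos, center):
    """Response of one receptor to a dark-to-light edge located at edge_pos."""
    field = np.exp(-0.5 * ((angles - center) / SIGMA) ** 2)
    scene = (angles > edge_pos).astype(float)         # step edge in the scene
    return np.sum(field * scene) * step

def normalized_diff(edge_pos):
    r1 = receptor_response(edge_pos, -DPHI / 2)
    r2 = receptor_response(edge_pos, +DPHI / 2)
    return (r1 - r2) / (r1 + r2)

# Pre-computed (monotonic) characteristic used as a lookup table for inversion.
probe = np.linspace(-DPHI, DPHI, 81)
char = np.array([normalized_diff(p) for p in probe])
order = np.argsort(char)

def locate_edge(edge_pos):
    s = normalized_diff(edge_pos)
    return float(np.interp(s, char[order], probe[order]))

for true_pos in (-1.3, 0.0, 0.7):                     # well below the 4 deg spacing
    print(true_pos, round(locate_edge(true_pos), 2))
```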
Subject(s)
Behavior, Animal; Biomimetics; Diptera; Locomotion; Robotics; Animals; Male
ABSTRACT
For studies of how birds control their altitude, seabirds are of particular interest because they forage offshore, where the visual environment can be modelled simply as a flat world textured by waves, generating only ventral visual cues. This study suggests that optic flow, i.e. the rate at which the sea image moves across the eye's retina, can explain gulls' altitude control over the sea. In particular, a new flight model that includes both energy and optical invariants helps explain the gulls' trajectories during offshore takeoff and cruising flight. A linear mixed model applied to 352 flights from 16 individual lesser black-backed gulls (Larus fuscus) revealed a statistically significant optic-flow set-point of ca. 25°/s. Thereafter, an optic-flow-based flight model was applied to 18 offshore takeoff flights from nine individual gulls. By introducing an upper limit on the climb rate in the elevation dynamics, coupled with an optic-flow set-point, the predicted altitude fits the GPS data with an optimized fit factor of 63% on average (range 30-83%). We conclude that the optic-flow regulation principle helps gulls adjust their altitude over the sea without having to measure their current altitude directly.
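A toy version of the kind of takeoff model sketched above can be written in a few lines: the altitude that realizes the optic-flow set-point is h* = v/ω_set, and the climb toward it is capped by a maximum climb rate. The set-point conversion, climb limit and airspeed profile below are illustrative assumptions, not the fitted parameters.

```python
# Toy takeoff/cruise altitude model: climb toward the altitude that realizes an
# optic-flow set-point, limited by a maximum climb rate. All values are assumed.
import math

W_SET = math.radians(25.0)       # optic-flow set-point, ~25 deg/s in rad/s
CLIMB_MAX = 1.0                  # upper limit on climb rate (m/s), assumed
dt = 0.1

h = 0.5                          # altitude just after leaving the water (m)
for k in range(600):             # one minute of flight
    t = k * dt
    v = min(2.0 + 0.3 * t, 12.0)             # airspeed builds up to cruise, assumed
    h_target = v / W_SET                     # altitude that realizes the OF set-point
    climb = max(-CLIMB_MAX, min(CLIMB_MAX, (h_target - h) / dt))
    h += climb * dt
    if k % 100 == 0:
        print(f"t={t:4.0f}s  v={v:4.1f} m/s  h={h:5.1f} m  OF={math.degrees(v / h):6.1f} deg/s")
```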
Subject(s)
Altitude; Charadriiformes/physiology; Flight, Animal/physiology; Models, Biological; Vision, Ocular; Animals; Oceans and Seas
ABSTRACT
Flying insects are able to fly smartly in an unpredictable environment. It has been found that flying insects have neurons inside their tiny brains that are sensitive to visual motion, also called optic flow. Consequently, flying insects rely mainly on visual motion during flight maneuvers such as takeoff and landing, terrain following, tunnel crossing, lateral and frontal obstacle avoidance, and adjusting flight speed in a cluttered environment. Optic flow can be defined as the vector field of the apparent motion of objects, surfaces, and edges in a visual scene generated by the relative motion between an observer (an eye or a camera) and the scene. Translational optic flow is particularly interesting for short-range navigation because it depends on the ratio between (i) the relative linear speed of the visual scene with respect to the observer and (ii) the distance of the observer from obstacles in the surrounding environment, without requiring any direct measurement of either speed or distance. In flying insects, the roll stabilization reflex and yaw saccades attenuate rotations at the eye level in roll and yaw, respectively (i.e. they cancel rotational optic flow), thereby ensuring purely translational optic flow between two successive saccades. Our survey focuses on the feedback loops based on translational optic flow that insects employ for collision-free navigation. Over the next decade, optic flow is likely to be one of the most important visual cues for explaining flying insects' behaviors during short-range navigation maneuvers in complex tunnels. Conversely, the biorobotic approach can help to develop innovative flight control systems for flying robots, with the aim of mimicking flying insects' abilities and better understanding their flight.
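A standard way to write the dependence described above, assuming a viewing direction at angle θ from the direction of translation and a distance D to the surface seen in that direction, is:

```latex
\[
  \omega \;=\; \frac{v}{D}\,\sin\theta ,
\]
```

so the local translational optic flow ω carries only the ratio v/D, which is why neither the speed v nor the distance D needs to be measured on its own.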
Subject(s)
Flight, Animal; Insecta/physiology; Optic Flow/physiology; Robotics; Animals; Environment; Models, Biological
ABSTRACT
Studies on insects' visual guidance systems have shed little light on how learning contributes to insects' altitude control. In this study, honeybees were trained to fly along a double-roofed tunnel after entering it near either the ceiling or the floor. The honeybees trained to hug the ceiling therefore encountered a sudden change in the tunnel configuration midway: a "dorsal ditch", i.e. a sudden increase in the distance to the ceiling, corresponding to a strong change in the visual cues available in their dorsal field of view. The trained honeybees reacted by rising quickly and hugging the new, higher ceiling, keeping a forward speed, distance to the ceiling and dorsal optic flow similar to those observed during the training step, whereas bees trained to follow the floor kept following the floor regardless of the change in ceiling height. When trained honeybees entered the tunnel via the other entry (the lower or upper one) to that used during training, they quickly changed their altitude and hugged the surface they had previously learned to follow. These findings clearly show that trained honeybees control their altitude based on visual cues memorized during training. The memorized visual cues generated by the surfaces followed form a complex optic flow pattern: trained honeybees may attempt to match the visual cues they perceive with this memorized optic flow pattern by controlling their altitude.
Subject(s)
Altitude; Bees/physiology; Behavior, Animal; Animals; Flight, Animal; Spatial Learning; Vision, Ocular
ABSTRACT
Here we present a novel bio-inspired optic flow (OF) sensor and its application to visual guidance and odometry on a low-cost car-like robot called BioCarBot. The minimalistic OF sensor was robust to high-dynamic-range lighting conditions and to the various visual patterns encountered, thanks to its M²APix auto-adaptive pixels and to the new cross-correlation OF algorithm implemented. The low-cost car-like robot estimated its velocity and steering angle, and therefore its position and orientation, via an extended Kalman filter (EKF) using only two downward-facing OF sensors and the Ackermann steering model. Indoor and outdoor experiments were carried out in which the robot was driven in closed loop based on the velocity and steering-angle estimates. The experimental results show that our novel OF sensor can deliver high-frequency measurements ([Formula: see text]) over a wide OF range (1.5-[Formula: see text]) and in a 7-decade range of light levels. The OF resolution was constant and could be adjusted as required (up to [Formula: see text]), and the OF precision obtained was relatively high (standard deviation of [Formula: see text] with an average OF of [Formula: see text], under the most demanding lighting conditions). An EKF-based algorithm gave the robot's position and orientation with a relatively high accuracy (maximum errors outdoors at a very low light level: [Formula: see text] and [Formula: see text] over about [Formula: see text] and [Formula: see text]), despite the low-resolution control of the steering servo and the DC motor and a simplified model identification and calibration. Finally, the minimalistic OF-based odometry results were compared to those obtained using an inertial measurement unit (IMU) and a motor speed sensor.
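As a pointer to what the odometry integrates, the sketch below propagates a pose with a kinematic bicycle/Ackermann model from velocity and steering-angle estimates; the wheelbase, time step and input sequence are illustrative assumptions, and the EKF that produces those estimates from the two OF sensors is not reproduced here.

```python
# Minimal odometry sketch (not the BioCarBot code): a kinematic bicycle/Ackermann
# model propagates the pose from velocity and steering-angle estimates such as
# those an EKF could extract from downward-facing optic-flow sensors.
import math

WHEELBASE = 0.26      # distance between axles (m), assumed
dt = 0.05

def ackermann_step(x, y, yaw, v, steer, dt=dt, L=WHEELBASE):
    """Propagate the pose (x, y, yaw) given speed v (m/s) and steering angle steer (rad)."""
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += (v / L) * math.tan(steer) * dt
    return x, y, yaw

# Usage: integrate a short sequence of (v, steer) estimates into a trajectory.
pose = (0.0, 0.0, 0.0)
for k in range(200):
    v_est, steer_est = 0.8, math.radians(10.0 if k < 100 else -10.0)
    pose = ackermann_step(*pose, v_est, steer_est)
print(tuple(round(p, 2) for p in pose))
```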