Results 1 - 20 of 33
1.
IEEE Trans Pattern Anal Mach Intell ; 44(1): 361-372, 2022 Jan.
Article in English | MEDLINE | ID: mdl-32750822

ABSTRACT

Optical flow is a crucial component of the feature space for early visual processing of dynamic scenes, especially in new applications such as self-driving vehicles, drones, and autonomous robots. Dynamic vision sensors are well suited to such applications because of their asynchronous, sparse, and temporally precise representation of visual dynamics. Many algorithms proposed for computing visual flow from these sensors suffer from the aperture problem, as the direction of the estimated flow is governed by the curvature of the object rather than the true motion direction. Methods that do overcome this problem through temporal windowing under-utilize the precise temporal nature of dynamic sensors. In this paper, we propose a novel multi-scale plane-fitting-based visual flow algorithm that is robust to the aperture problem while remaining computationally fast and efficient. Our algorithm performs well in many scenarios, ranging from a fixed camera recording simple geometric shapes to real-world settings such as a camera mounted on a moving car, and can perform event-by-event motion estimation of objects in the scene, allowing predictions of up to 500 ms, i.e., the equivalent of 10 to 25 frames with traditional cameras.
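The abstract names plane fitting as the core technique. Below is a minimal single-scale sketch of that step on a synthetic event patch: fit t ≈ ax + by + c to the events' (x, y, t) coordinates; the fitted gradient (a, b) yields the normal flow. The multi-scale selection logic is not described in the abstract and is not reproduced here; all names and values are illustrative.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Event { double x, y, t; };

// Fit t = a*x + b*y + c by least squares on centered coordinates
// (reduces to a 2x2 linear system for a and b).
bool fitPlane(const std::vector<Event>& evts, double& a, double& b, double& c) {
    const double n = static_cast<double>(evts.size());
    if (n < 3) return false;
    double mx = 0, my = 0, mt = 0;
    for (const auto& e : evts) { mx += e.x; my += e.y; mt += e.t; }
    mx /= n; my /= n; mt /= n;
    double cxx = 0, cxy = 0, cyy = 0, cxt = 0, cyt = 0;
    for (const auto& e : evts) {
        const double dx = e.x - mx, dy = e.y - my, dt = e.t - mt;
        cxx += dx * dx; cxy += dx * dy; cyy += dy * dy;
        cxt += dx * dt; cyt += dy * dt;
    }
    const double det = cxx * cyy - cxy * cxy;
    if (std::fabs(det) < 1e-12) return false;   // degenerate (aperture-limited) patch
    a = (cxt * cyy - cyt * cxy) / det;
    b = (cyt * cxx - cxt * cxy) / det;
    c = mt - a * mx - b * my;
    return true;
}

int main() {
    // Synthetic edge moving at 50 px/s along +x: events satisfy t = x / 50.
    std::vector<Event> patch;
    for (int x = 0; x < 5; ++x)
        for (int y = 0; y < 5; ++y)
            patch.push_back({double(x), double(y), x / 50.0});
    double a, b, c;
    if (fitPlane(patch, a, b, c)) {
        const double g = std::hypot(a, b);       // |gradient| in s/px
        std::printf("normal flow: %.1f px/s along (%.2f, %.2f)\n",
                    1.0 / g, a / g, b / g);      // expect 50 px/s along +x
    }
}
```

A multi-scale variant would repeat this fit over neighborhoods of increasing radius and keep the scale at which the fit is most consistent; the abstract does not specify the exact criterion, so none is shown.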

2.
Front Neurosci ; 14: 587, 2020.
Article in English | MEDLINE | ID: mdl-32848527

ABSTRACT

We present the first purely event-based method for face detection, using the high temporal resolution of an event-based camera to detect the presence of a face in a scene from eye blinks. Eye blinks are a unique and stable natural dynamic temporal signature of human faces across the population that can be fully captured by event-based sensors. We show that eye blinks have a distinctive temporal signature that can easily be detected by correlating the acquired local activity with a generic temporal model of eye blinks generated from a wide population of users. In a second stage, once a face has been located, a probabilistic framework can track its spatial location for each incoming event, using eye blinks to correct for drift and tracking errors. Results are shown for several indoor and outdoor experiments. We also release an annotated data set for future work on the topic.
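As a rough illustration of the correlation step described above, the sketch below slides a generic blink template over a local event-activity trace and reports windows with high normalized cross-correlation. The template and trace values are invented for illustration and are not the population model from the paper.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Normalized cross-correlation between a signal window and a template.
double ncc(const std::vector<double>& w, const std::vector<double>& tpl) {
    double mw = 0, mt = 0;
    for (size_t i = 0; i < w.size(); ++i) { mw += w[i]; mt += tpl[i]; }
    mw /= w.size(); mt /= tpl.size();
    double num = 0, dw = 0, dt = 0;
    for (size_t i = 0; i < w.size(); ++i) {
        num += (w[i] - mw) * (tpl[i] - mt);
        dw += (w[i] - mw) * (w[i] - mw);
        dt += (tpl[i] - mt) * (tpl[i] - mt);
    }
    return (dw > 0 && dt > 0) ? num / std::sqrt(dw * dt) : 0.0;
}

int main() {
    // Blink-like burst: rise and fall of the local event rate over time bins.
    std::vector<double> tpl = {0, 2, 6, 9, 6, 2, 0};        // generic blink model
    std::vector<double> trace = {1, 0, 1, 3, 7, 10, 7, 3, 1, 0};
    for (size_t s = 0; s + tpl.size() <= trace.size(); ++s) {
        std::vector<double> w(trace.begin() + s, trace.begin() + s + tpl.size());
        if (ncc(w, tpl) > 0.9)                               // detection threshold
            std::printf("blink candidate at offset %zu\n", s);
    }
}
```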

3.
Front Neurosci ; 14: 275, 2020.
Article in English | MEDLINE | ID: mdl-32327968

ABSTRACT

In this paper, we introduce a framework for dynamic gesture recognition with background suppression, operating on the output of a moving event-based camera. The system runs in real time using only the computational capabilities of a mobile phone. It introduces a new development around the concept of time-surfaces, together with a novel event-based methodology that exploits the high temporal resolution of event-based cameras to dynamically remove backgrounds. To our knowledge, this is the first Android event-based framework for vision-based recognition of dynamic gestures running on a smartphone without off-board processing. We assess performance in several indoor and outdoor scenarios, under static and dynamic conditions and uncontrolled lighting. We also introduce a new, publicly available event-based dataset for gesture recognition with static and dynamic backgrounds. The set of gestures was selected following a clinical trial to enable human-machine interaction for the visually impaired and older adults. Finally, we report comparisons with prior work on event-based gesture recognition, achieving comparable results without advanced classification techniques or power-hungry hardware.

4.
Nat Commun ; 10(1): 4884, 2019 10 25.
Article in English | MEDLINE | ID: mdl-31653848

ABSTRACT

Astrocytes play essential roles in the neural tissue, where they form a continuous network while displaying important local heterogeneity. Here, we performed multiclonal lineage tracing using combinatorial genetic markers together with a new large-volume color imaging approach to study astrocyte development in the mouse cortex. We show that cortical astrocyte clones intermix with their neighbors and display extensive variability in the spatial organization, number, and subtypes of cells generated. Clones develop through 3D spatial dispersion, while at the individual level astrocytes progressively acquire their complex morphology. Furthermore, we find that the astroglial network is supplied both before and after birth by ventricular progenitors that scatter in the neocortex and can give rise to protoplasmic as well as pial astrocyte subtypes. Altogether, these data suggest a model in which astrocyte precursors colonize the neocortex perinatally in a non-ordered manner, with the local environment likely determining astrocyte clonal expansion and final morphotype.


Subjects
Astrocytes/cytology, Cell Differentiation, Cerebral Cortex/cytology, Animals, Astrocytes/metabolism, Cell Lineage, Cell Plasticity, Cell Proliferation, Clone Cells/cytology, Mice
5.
Front Neurosci ; 13: 827, 2019.
Article in English | MEDLINE | ID: mdl-31496927

ABSTRACT

Most dynamic systems are controlled by discrete-time controllers. One of the main challenges in designing a digital control law is selecting the appropriate sampling time. A small sampling time increases the accuracy of the controlled output at the expense of heavy computation. In contrast, a large sampling time decreases the computational power needed to update the control law at the expense of a smaller stability region. In addition, once the setpoint is reached, the controlled input is still updated, making the overall controlled system energetically inefficient. To be more efficient, one can update the control law on a significant fixed change of the controlled signal (send-on-delta, or event-based, control). As with time-based discretization, the amplitude of the significant change must be chosen carefully to avoid oscillations around the setpoint (e.g., if the setpoint falls between two samples) or an unnecessary increase in the number of samples needed to reach the setpoint with a given accuracy. This paper proposes a novel non-linear event-based discretization method based on inter-event durations. We demonstrate that our method reaches an arbitrary accuracy independently of the setpoint amplitude without increasing the network data transmission bandwidth. It also decreases the overall number of samples needed to estimate the states of a dynamical system, as well as the update rate of an actuator, making the system more energy efficient.
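For readers unfamiliar with the send-on-delta scheme mentioned above, here is a minimal sketch of that baseline (emit a sample only on a significant fixed change); the paper's own inter-event-duration method is not reproduced. All values are illustrative.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double delta = 0.05;    // significant-change threshold
    const double dt = 1e-3;       // fine simulation step, 1 ms
    double last = 0.0;            // last transmitted value
    int samples = 0;
    for (int k = 0; k <= 5000; ++k) {             // 5 s of a step response
        const double t = k * dt;
        const double y = 1.0 - std::exp(-t);      // first-order approach to setpoint
        if (std::fabs(y - last) >= delta) {       // send-on-delta trigger
            last = y;
            ++samples;
        }
    }
    // Periodic 1 ms sampling would transmit 5001 samples; send-on-delta
    // transmits roughly 1/delta of them, and none once the output settles.
    std::printf("send-on-delta samples: %d\n", samples);
}
```

Note the failure mode the abstract warns about: with a setpoint lying between two delta levels, the quantized update can chatter around it, which motivates the paper's inter-event-duration refinement.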

6.
Neural Comput ; 31(6): 1114-1138, 2019 06.
Article in English | MEDLINE | ID: mdl-30979350

ABSTRACT

In this work, we propose a two-layered descriptive model for motion processing from the retina to the cortex, with event-based input from the asynchronous time-based image sensor (ATIS) camera. Spatial and spatiotemporal filtering of visual scenes by motion energy detectors is implemented in two steps: a simple layer of a lateral geniculate nucleus model and a set of three-dimensional Gabor kernels, eventually forming a probabilistic population response. The high temporal resolution of the independent, asynchronous local sensory pixels of the ATIS provides realistic stimulation for studying biological motion processing, as well as for developing bio-inspired motion processors for computer vision applications. Our study combines two significant theories in neuroscience: event-based stimulation and probabilistic sensory representation. We model how this might be done at the level of vision and suggest this framework as a generic computational principle across sensory modalities.
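A minimal sketch of a motion-energy unit of the kind the abstract describes: a quadrature pair of spatiotemporal Gabor kernels evaluated event by event. All parameter values (sigma, tau, tuning frequencies) are illustrative assumptions, not the model's.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

const double PI = 3.14159265358979323846;

struct Event { double x, y, t; };

// Gaussian-windowed spatiotemporal sinusoid; tuned speed is ft/fx = 40 px/s.
double gabor(double x, double y, double t, double phase) {
    const double sigma = 2.0, tau = 0.05;   // spatial (px) and temporal (s) widths
    const double fx = 0.25, ft = 10.0;      // cycles/px, cycles/s
    const double env = std::exp(-(x * x + y * y) / (2 * sigma * sigma)
                                - t * t / (2 * tau * tau));
    return env * std::cos(2 * PI * (fx * x + ft * t) + phase);
}

// Motion energy at (x0, y0, t0): squared quadrature responses summed over
// past events -- an event-driven evaluation of the 3D convolution.
double motionEnergy(const std::vector<Event>& evts,
                    double x0, double y0, double t0) {
    double even = 0, odd = 0;
    for (const auto& e : evts) {
        if (e.t > t0) continue;             // causal: only past events
        even += gabor(e.x - x0, e.y - y0, e.t - t0, 0.0);
        odd  += gabor(e.x - x0, e.y - y0, e.t - t0, PI / 2);
    }
    return even * even + odd * odd;
}

int main() {
    // Synthetic edge drifting at the unit's preferred speed of 40 px/s.
    std::vector<Event> evts;
    for (int k = 0; k < 200; ++k)
        evts.push_back({k * 0.2, 0.0, k * 0.005});
    // Probe where the edge sits at t0 = 0.1 s (x = 4 px): expect a strong response.
    std::printf("motion energy: %.3f\n", motionEnergy(evts, 4.0, 0.0, 0.1));
}
```

A population response, as in the paper, would pool units tuned to different speeds and directions and normalize across them to obtain a probability over motions.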


Subjects
Neurological Models, Motion Perception, Photic Stimulation/methods, Visual Cortex, Humans, Motion Perception/physiology, Probability, Retina/physiology, Ocular Vision/physiology, Visual Cortex/physiology
7.
Sci Rep ; 9(1): 3744, 2019 03 06.
Article in English | MEDLINE | ID: mdl-30842458

ABSTRACT

Depth from defocus is an important mechanism that enables vision systems to perceive depth. While machine vision has developed several algorithms to estimate depth from the amount of defocus present at the focal plane, existing techniques are slow, energy demanding, and rely mainly on numerous acquisitions and massive amounts of filtering operations on the pixels' absolute luminance values. Recent advances in neuromorphic engineering offer an alternative, with event-based silicon retinas and neural processing devices inspired by the organizing principles of the brain. In this paper, we present a low-power, compact, and computationally inexpensive setup to estimate depth in a 3D scene in real time at high rates, one that can be directly implemented with massively parallel, compact, low-latency, low-power neuromorphic devices. Exploiting the high temporal resolution of the event-based silicon retina, we extract depth at 100 Hz for a power budget below 200 mW (10 mW for the camera, 90 mW for the liquid lens, and ~100 mW for the computation). We validate the model with experimental results, highlighting features consistent with both computational neuroscience and recent findings in retinal physiology. We demonstrate its efficiency with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in the biological depth-from-defocus experiments reported in the literature.
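A hedged sketch of the timing principle suggested by the setup above (a liquid lens sweeping the focal plane): a pixel's event burst peaks when its scene point passes through focus, so the burst time indexes depth. The sweep profile, depth range, and timestamps below are invented for illustration and are not the paper's calibration.

```cpp
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical per-pixel event timestamps (s) during one 10 ms focal sweep.
    std::vector<double> ts = {0.0031, 0.0038, 0.0040, 0.0041, 0.0043, 0.0049};
    const double binW = 0.001, sweepLen = 0.010;   // 1 ms bins, 10 ms sweep
    const double zNear = 0.2, zFar = 2.0;          // sweep covers 0.2 m .. 2 m (assumed linear)
    // Histogram the events and take the densest bin as the in-focus instant.
    int bins[10] = {0};
    for (double t : ts) {
        const int b = static_cast<int>(t / binW);
        if (b >= 0 && b < 10) ++bins[b];
    }
    int best = 0;
    for (int b = 1; b < 10; ++b)
        if (bins[b] > bins[best]) best = b;
    const double tPeak = (best + 0.5) * binW;
    const double depth = zNear + (zFar - zNear) * (tPeak / sweepLen);
    std::printf("peak at %.1f ms -> depth ~ %.2f m\n", tPeak * 1e3, depth);
}
```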


Subjects
Depth Perception/physiology, Computer-Assisted Image Processing/methods, Ocular Vision/physiology, Action Potentials/physiology, Algorithms, Brain/physiology, Computers, Neurological Models, Neural Networks (Computer), Neurons/physiology, Retina/physiology
8.
Front Neurosci ; 13: 1338, 2019.
Article in English | MEDLINE | ID: mdl-31969799

ABSTRACT

This paper introduces a new open-source, header-only, modular C++ framework to facilitate the implementation of event-driven algorithms. The framework relies on three independent components: sepia (file IO), tarsier (algorithms), and chameleon (display). Our benchmarks show that algorithms implemented with tarsier are faster and have lower latency than identical implementations in other state-of-the-art frameworks, thanks to static polymorphism (compile-time pipeline assembly). The observer pattern used throughout the framework encourages implementations that better reflect the event-driven nature of the algorithms and the way they process events, easing future translation to neuromorphic hardware. The framework integrates drivers to communicate with the DVS, the DAVIS, the Opal Kelly ATIS, and the CCam ATIS.
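The speed claim rests on static polymorphism: each pipeline stage holds the next handler by value, so the whole pipeline is one concrete type assembled at compile time and every call can be inlined. A minimal illustration of the idea follows; this is not the framework's actual API.

```cpp
#include <cstdint>
#include <cstdio>
#include <utility>

struct Event { uint16_t x, y; uint64_t t; bool polarity; };

// A stage owns the next handler by value: the full pipeline is a single
// concrete type, so there is no virtual dispatch anywhere.
template <typename Next>
struct FlipX {
    Next next;
    void operator()(Event e) {
        e.x = 303 - e.x;    // mirror on a hypothetical 304-px-wide sensor
        next(e);
    }
};

template <typename Next>
FlipX<Next> makeFlipX(Next next) { return {std::move(next)}; }

int main() {
    // Assemble the pipeline at compile time; the sink is a plain lambda.
    auto pipeline = makeFlipX([](Event e) {
        std::printf("event at (%u, %u), t=%llu\n",
                    static_cast<unsigned>(e.x), static_cast<unsigned>(e.y),
                    static_cast<unsigned long long>(e.t));
    });
    pipeline({10, 20, 1000, true});   // -> event at (293, 20), t=1000
}
```

This is also an instance of the observer pattern mentioned above: each stage observes the previous one and is notified per event, which maps naturally onto event-driven hardware.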

9.
IEEE Trans Biomed Circuits Syst ; 12(6): 1467-1474, 2018 12.
Article in English | MEDLINE | ID: mdl-30334806

ABSTRACT

Johnson-Nyquist noise is the electronic noise generated by the thermal agitation of charge carriers; it increases as the sensor heats up. High-speed cameras used in low-light conditions are often cooled to reduce thermal noise and increase their signal-to-noise ratio. These sensors, however, record hundreds of frames per second, which takes time, consumes energy, and demands heavy computing power due to the substantial data load. Event-based sensors benefit from high temporal resolution and record information sparsely. Starting from an asynchronous time-based image sensor, we developed a version of this event-based camera whose pixels are designed for low-light applications, and added a Peltier-effect cooling system at the back of the sensor to reduce thermal noise. We show the benefits of thermal noise reduction and study the improvement in signal-to-noise ratio for the estimation of event-based normal flow norm and angle, and for particle tracking in microscopy.
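The physics behind the design choice: the Johnson-Nyquist noise voltage is v_rms = sqrt(4 k_B T R Δf), so it falls only as the square root of absolute temperature. A small sketch with illustrative resistance and bandwidth values (not the sensor's):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double kB = 1.380649e-23;   // Boltzmann constant, J/K
    const double R  = 1e6;            // illustrative channel resistance, ohm
    const double df = 1e6;            // illustrative bandwidth, Hz
    for (double T : {300.0, 270.0}) { // ambient vs. Peltier-cooled
        const double vrms = std::sqrt(4.0 * kB * T * R * df);
        std::printf("T = %.0f K -> v_rms = %.1f uV\n", T, vrms * 1e6);
    }
    // Noise scales as sqrt(T): a 30 K drop buys only ~5% on this term,
    // which is why cooling is combined with low-light pixel design.
}
```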


Subjects
Algorithms, Cold Temperature, Computer-Assisted Image Processing/instrumentation, Computer-Assisted Image Processing/methods, Signal-to-Noise Ratio, Equipment Design, Microscopy
10.
Article in English | MEDLINE | ID: mdl-30222585

ABSTRACT

This paper introduces an event-based, luminance-free algorithm for line and segment detection from the output of asynchronous event-based neuromorphic retinas. These recent biomimetic vision sensors are composed of autonomous pixels, each asynchronously generating visual events that encode relative changes in the pixel's illumination at high temporal resolution. This frame-free approach results in increased energy efficiency and real-time operation, making these sensors especially suitable for applications such as autonomous robotics. The proposed algorithm is based on iterative event-based weighted least-squares fitting and is consequently well suited to the high temporal resolution and asynchronous acquisition of neuromorphic cameras: the parameters of a current line are updated for each event attributed to it (i.e., spatio-temporally close), while the contribution of older events is implicitly forgotten according to a speed-tuned exponentially decaying function. A detection occurs when a measure of activity (an implicit count of contributing events using the same decay function) exceeds a given threshold. The speed-tuned decay is based on a measure of the apparent motion, i.e., the optical flow computed around each event, which ensures that the algorithm behaves independently of the edges' dynamics. Line segments are then extracted from the lines, allowing the corresponding endpoints to be tracked. We provide experiments showing the accuracy of the algorithm and study the influence of the apparent velocity and relative orientation of the observed edges. Finally, evaluations of its computational efficiency show that the algorithm is suitable for high-speed applications, such as vision-based robotic navigation.
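A minimal sketch of the core update described above: least-squares sums that decay exponentially between events, with the decayed event count serving as the activity measure. The paper's decay is speed-tuned via optical flow; a constant time constant is used here, and lines are fit as y = mx + b for brevity (C++17 aggregate initialization assumed).

```cpp
#include <cmath>
#include <cstdio>

struct DecayingLineFit {
    double tau;                                   // decay time constant, s
    double sw = 0, sx = 0, sy = 0, sxx = 0, sxy = 0;
    double lastT = 0;

    void add(double x, double y, double t) {
        const double d = std::exp(-(t - lastT) / tau);   // forget older events
        sw = sw * d + 1;  sx = sx * d + x;  sy = sy * d + y;
        sxx = sxx * d + x * x;  sxy = sxy * d + x * y;
        lastT = t;
    }
    // Weighted least-squares solution for y = m*x + b.
    bool params(double& m, double& b) const {
        const double det = sw * sxx - sx * sx;
        if (std::fabs(det) < 1e-9) return false;
        m = (sw * sxy - sx * sy) / det;
        b = (sy - m * sx) / sw;
        return true;
    }
    double activity() const { return sw; }        // decayed contributing-event count
};

int main() {
    DecayingLineFit fit{0.01};                    // tau = 10 ms
    for (int k = 0; k < 50; ++k)                  // events along y = 2x + 1
        fit.add(k * 0.1, 2.0 * (k * 0.1) + 1.0, k * 1e-4);
    double m, b;
    if (fit.params(m, b) && fit.activity() > 5.0) // detection threshold
        std::printf("line detected: y = %.2fx + %.2f\n", m, b);
}
```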

11.
IEEE Trans Neural Netw Learn Syst ; 29(9): 4223-4237, 2018 09.
Article in English | MEDLINE | ID: mdl-29989974

ABSTRACT

Object tracking is a major problem for many computer vision applications, but it remains computationally expensive. The use of bio-inspired neuromorphic event-driven dynamic vision sensors (DVSs) has heralded new methods for vision processing, exploiting a reduced amount of data and very precise timing. Previous studies have shown these neural spiking sensors to be well suited to implementing single-sensor object tracking systems, although they experience difficulties when solving ambiguities caused by object occlusion. DVSs have also performed well in 3-D reconstruction, where event-matching techniques are applied in stereo setups. In this paper, we propose a new event-driven stereo object tracking algorithm that simultaneously integrates 3-D reconstruction and cluster tracking, introducing feedback information in both tasks to improve their respective performances. This algorithm, inspired by human vision, identifies objects and learns their position and size in order to solve ambiguities. The strategy was validated in four experiments in which the 3-D positions of two objects were tracked in a stereo setup even when occlusion occurred. The objects studied were: 1) two swinging pens, the distance between which during movement was measured with an error of less than 0.5%; 2) a pen and a box, to confirm the correctness of the results with a more complex object; 3) two straws attached to a fan rotating at 6 revolutions per second, to demonstrate the high-speed capabilities of the approach; and 4) two people walking in a real-world environment.

12.
Front Neurosci ; 12: 442, 2018.
Article in English | MEDLINE | ID: mdl-30013461

ABSTRACT

3D reconstruction from multiple viewpoints is an important problem in machine vision that allows tridimensional structures to be recovered from multiple two-dimensional views of a scene. Such reconstructions are conventionally achieved through pixel luminance-based matching between views. Unlike conventional machine vision methods that solve matching ambiguities by operating only on spatial constraints and luminance, this paper introduces a fully time-based solution to stereovision using the high temporal resolution of neuromorphic asynchronous event-based cameras. These cameras output dynamic visual information in the form of "change events" that encode the time, the location, and the sign of the luminance changes. A more advanced event-based camera, the Asynchronous Time-based Image Sensor (ATIS), in addition to change events, encodes absolute luminance as time differences. The stereovision problem can then be formulated solely in the time domain, as an event-coincidence detection problem. This work improves existing event-based stereovision techniques by adding luminance information, which increases matching reliability. It also introduces a formulation that does not require building local frames from the luminances (though this remains possible), which can be costly to implement. Finally, it introduces a methodology for time-based stereovision in binocular and trinocular configurations, using a time-based event-matching criterion that combines, for the first time, space, time, luminance, and motion.
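A minimal sketch of the time-domain matching idea, assuming a rectified rig so the epipolar constraint reduces to equal rows: a left event is paired with the right event whose timestamp and encoded luminance are closest within coincidence windows. All thresholds and values are illustrative, not the paper's criterion.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Event { int x, y; double t; double lum; };  // lum: ATIS absolute luminance

int matchRight(const Event& left, const std::vector<Event>& right,
               double timeWin, double lumWin) {
    int best = -1;
    double bestCost = 1e9;
    for (size_t i = 0; i < right.size(); ++i) {
        const Event& r = right[i];
        if (r.y != left.y) continue;                     // epipolar (rectified rig)
        const double dt = std::fabs(r.t - left.t);
        const double dl = std::fabs(r.lum - left.lum);
        if (dt > timeWin || dl > lumWin) continue;       // coincidence gates
        const double cost = dt / timeWin + dl / lumWin;  // combined time+luminance score
        if (cost < bestCost) { bestCost = cost; best = static_cast<int>(i); }
    }
    return best;                                         // index of match, or -1
}

int main() {
    std::vector<Event> right = {{40, 12, 1.0002, 0.52}, {90, 12, 1.0030, 0.91}};
    Event left = {55, 12, 1.0000, 0.50};
    const int i = matchRight(left, right, 0.0005, 0.1);
    if (i >= 0)
        std::printf("disparity = %d px\n", left.x - right[i].x);  // 15 px here
}
```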

13.
Front Neurosci ; 12: 373, 2018.
Article in English | MEDLINE | ID: mdl-29946231

ABSTRACT

This paper introduces an event-based methodology to perform arbitrary linear basis transformations, encompassing a broad range of practically important signal transforms such as the discrete Fourier transform (DFT) and the discrete wavelet transform (DWT). We present a complexity analysis of the proposed method and show that the number of required multiply-and-accumulate operations is reduced compared with a frame-based method on natural video sequences, when the required temporal resolution is high enough. Experimental results on natural video sequences acquired by the asynchronous time-based neuromorphic image sensor (ATIS) support the feasibility of the method and illustrate the gain in computational resources.
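The reduction in multiply-and-accumulate operations follows from updating y = Ax one event at a time: when a single input sample changes by delta, each coefficient needs one MAC (N in total) instead of recomputing the full N×N product. A sketch for an 8-point DFT, one plausible instance of the arbitrary linear transforms the paper covers:

```cpp
#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

int main() {
    const int N = 8;
    const double PI = 3.14159265358979323846;
    std::vector<std::complex<double>> A(N * N);
    std::vector<std::complex<double>> y(N, 0.0);  // transform of an all-zero input
    for (int k = 0; k < N; ++k)
        for (int n = 0; n < N; ++n)
            A[k * N + n] = std::polar(1.0, -2.0 * PI * k * n / N);  // DFT matrix

    // A gray-level event reports that sample n changed by delta.
    auto applyEvent = [&](int n, double delta) {
        for (int k = 0; k < N; ++k)
            y[k] += A[k * N + n] * delta;         // N MACs instead of N*N
    };

    applyEvent(3, 0.25);                          // x[3]: 0 -> 0.25
    applyEvent(5, -0.10);                         // x[5]: 0 -> -0.10
    std::printf("|y[1]| = %.4f\n", std::abs(y[1]));
}
```

The break-even point is visible from the counts: per-event updates win whenever the number of changed samples between two required outputs is below N, which is exactly the sparse, high-temporal-resolution regime the abstract describes.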

14.
Front Neurosci ; 12: 135, 2018.
Article in English | MEDLINE | ID: mdl-29695948

ABSTRACT

This paper introduces a color asynchronous neuromorphic event-based camera and a methodology to process its color output to perform color segmentation and tracking at the native temporal resolution of the sensor (down to one microsecond). Our color vision sensor prototype is a combination of three Asynchronous Time-based Image Sensors, sensitive to absolute color information. We devise a color processing algorithm leveraging this information; it is designed to be computationally cheap, showing how low-level processing benefits from asynchronous acquisition and high-temporal-resolution data. The resulting color segmentation and tracking performance is assessed on an indoor controlled scene and two outdoor uncontrolled scenes. The tracker's mean error with respect to ground truth for the objects in the outdoor scenes ranges from two to twenty pixels.

16.
IEEE Trans Image Process ; 26(5): 2192-2202, 2017 May.
Article in English | MEDLINE | ID: mdl-28186889

ABSTRACT

This paper introduces a method to compute the FFT of a visual scene at a high temporal precision, on the order of one microsecond, from the output of an asynchronous event-based camera. Event-based cameras make it possible to go beyond the widespread and ingrained belief that acquiring series of images at some rate is a good way to capture visual motion. Each pixel adapts its own sampling rate to the visual input it receives and defines the timing of its own sampling points by reacting to changes in the amount of incident light. As a consequence, the sampling process is no longer governed by a fixed timing source but by the signal to be sampled itself, or more precisely by the variations of the signal in the amplitude domain. This acquisition paradigm allows going beyond the conventional way of computing the FFT. The event-driven FFT algorithm relies on a heuristic methodology designed to operate directly on incoming gray-level events, updating the FFT incrementally while reducing both computation and data load. We show that, for reasonable levels of approximation at equivalent frame rates beyond the millisecond, the method performs faster and more efficiently than conventional image acquisition. Several experiments on indoor and outdoor scenes compare conventional and event-driven FFT computation.

17.
Sci Rep ; 7: 40703, 2017 01 12.
Article in English | MEDLINE | ID: mdl-28079187

ABSTRACT

Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remain a difficult challenge. Recent advances in neuromorphic engineering offer a possible solution to this problem, with a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

18.
Front Neurosci ; 10: 391, 2016.
Article in English | MEDLINE | ID: mdl-27642275

ABSTRACT

The asynchronous time-based neuromorphic image sensor ATIS is an array of autonomously operating pixels able to encode luminance information with an exceptionally high dynamic range (>143 dB). This paper introduces an event-based methodology to display data from this type of imager, taking into account a dynamic range and temporal accuracy that go beyond available mainstream display technologies. We introduce an event-based tone mapping methodology for asynchronously acquired, time-encoded gray-level data. A global and a local tone mapping operator are proposed; both are designed to operate on a stream of incoming events rather than on time-framed windows. Experimental results on real outdoor scenes evaluate the performance of the tone mapping operators in terms of quality, temporal stability, adaptation capability, and computational time.
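A hedged sketch of what a global event-based operator might look like: a running log-luminance range, adapted per event, maps each high-dynamic-range measurement onto an 8-bit display value. The adaptation rule and rate below are illustrative assumptions, not the paper's operators.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

struct GlobalToneMapper {
    double lo = 1e9, hi = -1e9;                    // adaptive log-range bounds
    double rate = 0.01;                            // per-event adaptation rate (assumed)

    int map(double luminance) {                    // luminance decoded from the event
        const double l = std::log(luminance);
        lo = std::min(l, lo + rate * (l - lo));    // bounds track the signal and
        hi = std::max(l, hi + rate * (l - hi));    // slowly relax toward it
        const double u = (hi > lo) ? (l - lo) / (hi - lo) : 0.5;
        return static_cast<int>(255.0 * u + 0.5);  // 8-bit display gray level
    }
};

int main() {
    GlobalToneMapper tm;
    for (double lum : {0.02, 1.0, 350.0, 12.0})    // dark to bright measurements
        std::printf("%.2f -> %d\n", lum, tm.map(lum));
}
```

A local operator, as proposed in the paper, would maintain such statistics per neighborhood instead of globally; that variant is not sketched here.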

19.
Front Neurosci ; 10: 208, 2016.
Article in English | MEDLINE | ID: mdl-27242412

ABSTRACT

The goal of the Perspective-n-Point problem (PnP) is to find the relative pose between an object and a camera from a set of n pairings between 3D points and their corresponding 2D projections on the focal plane. Current state-of-the-art solutions, designed to operate on images, rely on computationally expensive minimization techniques. For the first time, this work introduces an event-based PnP algorithm designed to work on the output of a neuromorphic event-based vision sensor. The problem is formulated as a least-squares minimization whose error function is updated with every incoming event. The optimal translation is then computed in closed form, while the desired rotation is given by the evolution of a virtual mechanical system whose energy is proven to be equal to the error function. This allows for a simple yet robust solution of the problem, showing how event-based vision can simplify computer vision tasks. The approach takes full advantage of the high temporal resolution of the sensor, as the estimated pose is incrementally updated with every incoming event. Two variants are proposed, the Full and the Efficient methods, and both are compared against a state-of-the-art PnP algorithm on synthetic and real data, producing similar accuracy while being faster.

20.
Front Neurosci ; 10: 596, 2016.
Article in English | MEDLINE | ID: mdl-28220057

ABSTRACT

State-of-the-art scene flow estimation techniques are based on projections of the 3D motion onto the image, using luminance (sampled at the frame rate of the cameras) as the principal source of information. In this paper we introduce a purely time-based approach to estimate the flow from 3D point clouds, primarily output by neuromorphic event-based stereo camera rigs, but also by any existing 3D depth sensor, even one that neither provides nor uses luminance. The method formulates the scene flow problem through a local piecewise regularization of the scene flow, providing a unifying framework for estimating scene flow from synchronous and asynchronous 3D point clouds. It relies on the properties of 4D space-time, using a decomposition into its subspaces. The method naturally exploits the ability of neuromorphic asynchronous event-based vision sensors to reconstruct 3D point clouds continuously in time, and it can also handle the motion of deformable objects. Experiments using different 3D sensors are presented.
