Results 1 - 20 of 23
1.
Sensors (Basel) ; 22(18)2022 Sep 15.
Article in English | MEDLINE | ID: mdl-36146344

ABSTRACT

Dexterous manipulation in robotic hands relies on an accurate sense of artificial touch. Here we investigate neuromorphic tactile sensation with an event-based optical tactile sensor combined with spiking neural networks for edge orientation detection. The sensor incorporates an event-based vision system (mini-eDVS) into a low-form-factor artificial fingertip (the NeuroTac). Tactile information is processed through a Spiking Neural Network with unsupervised Spike-Timing-Dependent Plasticity (STDP) learning, and the resulting output is classified with a 3-nearest-neighbours classifier. Edge orientations were classified in 10-degree increments while tapping vertically downward and sliding horizontally across the edge. In both cases, we demonstrate that the sensor reliably detects edge orientation, which could lead to accurate, bio-inspired tactile processing in robotics and prosthetics applications.
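As a rough illustration of the two stages named in the abstract, the sketch below pairs an exponential pair-based STDP weight update with a 3-nearest-neighbours classification of spike-count features; all parameter values, array shapes, and the toy data are hypothetical and not taken from the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20e-3):
    """Pair-based STDP: potentiate if the presynaptic spike precedes
    the postsynaptic spike, depress otherwise (times in seconds)."""
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * np.exp(-dt / tau)
    else:
        w -= a_minus * np.exp(dt / tau)
    return np.clip(w, 0.0, 1.0)

print("updated weight:", stdp_update(0.5, t_pre=0.010, t_post=0.012))

# Toy spike-count features per edge-orientation class (placeholder data).
rng = np.random.default_rng(0)
X = rng.poisson(lam=5, size=(180, 32)).astype(float)   # 180 taps, 32 output neurons
y = np.repeat(np.arange(18), 10)                       # 18 orientations in 10-degree steps

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print("3-NN training accuracy:", clf.score(X, y))
```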


Subjects
Robotics, Touch Perception, Neural Networks (Computer), Touch
2.
Sci Robot ; 7(67): eabl8419, 2022 06 29.
Article in English | MEDLINE | ID: mdl-35767646

ABSTRACT

Neuromorphic hardware enables fast and power-efficient neural network-based artificial intelligence that is well suited to solving robotic tasks. Neuromorphic algorithms can be further developed following neural computing principles and neural network architectures inspired by biological neural systems. In this Viewpoint, we provide an overview of recent insights from neuroscience that could enhance signal processing in artificial neural networks on chip and unlock innovative applications in robotics and autonomous intelligent systems. These insights uncover computing principles, primitives, and algorithms on different levels of abstraction and call for more research into the basis of neural computation and neuronally inspired computing hardware.


Subjects
Artificial Intelligence, Robotics, Algorithms, Computers, Neural Networks (Computer)
3.
IEEE Trans Cybern ; 52(9): 9251-9262, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35533159

ABSTRACT

Falling is a serious health problem and has become one of the major causes of accidental death for elderly people living alone. In recent years, much effort has been devoted to fall recognition based on wearable sensors or standard vision sensors. However, the prior methods carry a risk of privacy leaks, and almost all of them operate on video clips, so they cannot localize where falls occur in long videos. For these reasons, this article proposes a temporal fall-localization framework based on a bioinspired vision sensor. Bioinspired vision sensors, such as the dynamic and active-pixel vision sensor (DAVIS) camera used in this work, respond to changes in pixel brightness, and each pixel operates independently and asynchronously, unlike standard vision sensors. This gives the sensor a very high dynamic range and preserves privacy. First, to better represent the event data, an adaptive temporal window conversion mechanism is developed in place of the typical constant temporal window. The temporal localization framework follows the proven proposal-and-classification paradigm. Second, for efficient, high-recall proposal generation, we depart from the traditional sliding-window scheme: the event temporal density serves as the actionness score, and a 1D watershed algorithm generates the proposals. In addition, we combine a temporal and spatial attention mechanism with our feature-extraction network to model falls over time. Finally, to evaluate the framework, 30 volunteers were recruited for simulated fall experiments. The experimental results show that our framework achieves precise temporal localization of falls and state-of-the-art performance.
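To make the proposal-generation idea concrete, here is a minimal sketch that bins event timestamps, treats per-bin event density as the actionness score, and cuts contiguous above-threshold segments into proposals; a plain threshold stands in for the paper's 1D watershed step, and the bin width, threshold, and synthetic data are assumptions.

```python
import numpy as np

def temporal_proposals(event_ts, bin_ms=50.0, thresh_ratio=0.5):
    """Bin event timestamps (in ms), use per-bin event density as an
    actionness score, and return contiguous above-threshold segments
    as (start_ms, end_ms) proposals."""
    t0, t1 = event_ts.min(), event_ts.max()
    edges = np.arange(t0, t1 + bin_ms, bin_ms)
    density, _ = np.histogram(event_ts, bins=edges)
    active = density > thresh_ratio * density.max()
    proposals, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = edges[i]
        elif not a and start is not None:
            proposals.append((start, edges[i]))
            start = None
    if start is not None:
        proposals.append((start, edges[-1]))
    return proposals

# Synthetic event stream: background noise plus a burst around 2-3 s.
rng = np.random.default_rng(1)
ts = np.concatenate([rng.uniform(0, 5000, 2000), rng.uniform(2000, 3000, 8000)])
print(temporal_proposals(ts))
```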


Assuntos
Acidentes por Quedas , Algoritmos , Acidentes por Quedas/prevenção & controle , Idoso , Humanos
4.
IEEE Trans Pattern Anal Mach Intell ; 44(1): 154-180, 2022 01.
Article in English | MEDLINE | ID: mdl-32750812

ABSTRACT

Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of µs), very high dynamic range (140 dB versus 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
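As background for how such an event stream is typically handed to frame-based algorithms, the following sketch accumulates (t, x, y, polarity) tuples into a two-channel count image; the tuple layout and sensor size are illustrative choices, not a format prescribed by the survey.

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate a list of (t, x, y, polarity) events into a two-channel
    count image (channel 0: positive events, channel 1: negative events).
    One common, simple way to hand event data to frame-based algorithms."""
    frame = np.zeros((2, height, width), dtype=np.int32)
    for t, x, y, p in events:
        frame[0 if p > 0 else 1, y, x] += 1
    return frame

# Tiny synthetic example: three events on a 4x4 sensor.
evts = [(0.001, 1, 2, +1), (0.002, 1, 2, -1), (0.003, 3, 0, +1)]
print(events_to_frame(evts, height=4, width=4))
```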


Subjects
Algorithms, Robotics, Neural Networks (Computer)
5.
IEEE Trans Vis Comput Graph ; 27(5): 2577-2586, 2021 05.
Article in English | MEDLINE | ID: mdl-33780340

ABSTRACT

The cameras in modern gaze-tracking systems suffer from fundamental bandwidth and power limitations, realistically constraining data acquisition speed to 300 Hz. This obstructs the use of mobile eye trackers to perform, e.g., low-latency predictive rendering, or to study quick and subtle eye motions like microsaccades using head-mounted devices in the wild. Here, we propose a hybrid frame-event-based near-eye gaze tracking system offering update rates beyond 10,000 Hz with an accuracy that matches that of high-end desktop-mounted commercial trackers when evaluated under the same conditions. Our system builds on emerging event cameras that simultaneously acquire regularly sampled frames and adaptively sampled events. We develop an online 2D pupil-fitting method that updates a parametric model every one or few events. Moreover, we propose a polynomial regressor for estimating the point of gaze from the parametric pupil model in real time. Using the first event-based gaze dataset, we demonstrate that our system achieves accuracies of 0.45°-1.75° for fields of view from 45° to 98°. With this technology, we hope to enable a new generation of ultra-low-latency gaze-contingent rendering and display techniques for virtual and augmented reality.
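A minimal sketch of the final regression stage only: a second-order polynomial regressor from pupil parameters to point of gaze, here reduced to a pupil-centre estimate and fitted with scikit-learn; the polynomial degree, feature choice, and calibration data are assumptions, not the paper's.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Hypothetical calibration data: pupil-centre estimates vs. known gaze targets.
rng = np.random.default_rng(2)
pupil_xy = rng.uniform(-1, 1, size=(200, 2))
gaze_xy = 30 * pupil_xy + 5 * pupil_xy**2 + rng.normal(0, 0.2, size=(200, 2))

# Second-order polynomial regressor from pupil parameters to point of gaze.
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(pupil_xy, gaze_xy)
print("Predicted gaze for pupil (0.1, -0.3):", model.predict([[0.1, -0.3]])[0])
```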

6.
Front Neurorobot ; 13: 84, 2019.
Article in English | MEDLINE | ID: mdl-31680925

ABSTRACT

Predicting the future behavior and positions of other traffic participants from observations is a key problem that human drivers and automated vehicles alike need to solve to safely navigate their environment and reach their desired goal. In this paper, we expand on previous work on an automotive environment model based on vector symbolic architectures (VSAs). We investigate a vector representation that encapsulates spatial information about multiple objects based on a convolutive power encoding. Assuming that future positions of vehicles are influenced not only by their own past positions and dynamics (e.g., velocity and acceleration) but also by the behavior of the other traffic participants in the vehicle's surroundings, our motivation is threefold: we hypothesize that our structured vector representation can capture these relations and the mutual influence between multiple traffic participants. Furthermore, the dimension of the encoding vectors remains fixed, independent of the number of other vehicles encoded in addition to the target vehicle. Finally, a VSA-based encoding allows us to combine symbol-like processing with the advantages of neural network learning. In this work, we use our vector representation as input to a long short-term memory (LSTM) network for sequence-to-sequence prediction of vehicle positions. In an extensive evaluation, we compare this approach to other LSTM-based benchmark systems using alternative data encoding schemes, to simple feed-forward neural networks, and to a simple linear prediction model for reference. We analyze the advantages and drawbacks of the presented methods and identify specific driving situations where our approach performs best. We use characteristics specifying such situations as a foundation for an online-learning mixture-of-experts prototype, which chooses at run time between several available predictors depending on the current driving situation to achieve the best possible forecast.
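The convolutive power encoding can be sketched with FFT-based circular convolution: a random unitary base vector is raised to a fractional "power" to encode a continuous position, and role vectors bind each vehicle's encoded coordinates into one fixed-dimensional scene vector. Dimensions, role vectors, and positions below are hypothetical illustrations of the general VSA technique, not the paper's exact encoding.

```python
import numpy as np

def unitary_vector(dim, rng):
    """Random unitary vector: unit-magnitude Fourier coefficients, so that
    convolutive powers stay well-conditioned."""
    phases = rng.uniform(-np.pi, np.pi, dim // 2 + 1)
    phases[0] = 0.0          # DC bin must be real for a real-valued vector
    if dim % 2 == 0:
        phases[-1] = 0.0     # Nyquist bin must be real too
    return np.fft.irfft(np.exp(1j * phases), n=dim)

def conv_power(base, exponent):
    """Fractional convolutive power: encode a continuous scalar (e.g. an
    x- or y-position) by raising the base vector's spectrum to that power."""
    return np.fft.irfft(np.fft.rfft(base) ** exponent, n=len(base))

def bind(a, b):
    """Circular convolution binding of two VSA vectors."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

# Encode two vehicles' 2D positions into one fixed-dimensional scene vector.
rng = np.random.default_rng(3)
D = 1024
X, Y = unitary_vector(D, rng), unitary_vector(D, rng)
car_a, car_b = unitary_vector(D, rng), unitary_vector(D, rng)

scene = (bind(car_a, bind(conv_power(X, 12.3), conv_power(Y, -4.1)))
         + bind(car_b, bind(conv_power(X, 55.0), conv_power(Y, 7.8))))
print("Scene vector dimension stays fixed:", scene.shape)
```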

7.
Front Neurorobot ; 13: 10, 2019.
Article in English | MEDLINE | ID: mdl-31001104

ABSTRACT

Neuromorphic vision sensors are bio-inspired cameras that naturally capture the dynamics of a scene with ultra-low latency, filtering out redundant information with low power consumption. Few works have addressed object detection with this sensor. In this work, we develop pedestrian detectors that unlock the potential of the event data by leveraging multi-cue information and different fusion strategies. To make the best of the event data, we introduce three event-stream encoding methods based on Frequency, Surface of Active Events (SAE), and Leaky Integrate-and-Fire (LIF). We further integrate them into state-of-the-art neural network architectures with two fusion approaches: channel-level fusion of the raw feature space and decision-level fusion of the probability assignments. We present a qualitative and quantitative explanation of why the different encoding methods were chosen for evaluating pedestrian detection and of which method performs best. We demonstrate the advantages of decision-level fusion by leveraging multi-cue event information and show that our approach performs well on a self-annotated event-based pedestrian dataset with 8,736 event frames. This work paves the way for further perception applications with neuromorphic vision sensors.
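Of the three encodings, the Surface of Active Events is the easiest to sketch: keep the latest timestamp per pixel and map it to [0, 1] with an exponential decay. The decay constant, event-tuple layout, and toy events below are illustrative assumptions.

```python
import numpy as np

def surface_of_active_events(events, height, width, t_ref=None, tau=50e-3):
    """SAE-style encoding sketch: store the most recent timestamp per pixel
    and map it to [0, 1] with an exponential decay relative to a reference
    time (parameters are illustrative)."""
    last_ts = np.full((height, width), -np.inf)
    for t, x, y, p in events:
        last_ts[y, x] = t
    if t_ref is None:
        t_ref = max(t for t, *_ in events)
    return np.where(np.isfinite(last_ts), np.exp(-(t_ref - last_ts) / tau), 0.0)

evts = [(0.010, 0, 0, 1), (0.040, 1, 1, -1), (0.045, 1, 1, 1)]
print(surface_of_active_events(evts, height=3, width=3))
```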

8.
Front Neurosci ; 13: 73, 2019.
Article in English | MEDLINE | ID: mdl-30809114

ABSTRACT

Neuromorphic vision sensors are a novel passive, frameless sensing modality with several advantages over conventional cameras. Frame-based cameras have an average frame rate of 30 fps, causing motion blur when capturing fast motion, e.g., hand gestures. Rather than wastefully sending entire images at a fixed frame rate, neuromorphic vision sensors only transmit the local pixel-level changes induced by movement in a scene when they occur. This leads to advantageous characteristics, including low energy consumption, high dynamic range, a sparse event stream, and low response latency. In this study, a novel representation learning method is proposed: Fixed Length Gists Representation (FLGR) learning for event-based gesture recognition. Previous methods accumulate events into video frames over a time window (e.g., 30 ms) to build an accumulated image-level representation. However, the accumulated-frame-based representation forfeits the event-driven paradigm of the neuromorphic vision sensor. New, non-accumulated-frame-based representations are needed to fill this gap and exploit the further capabilities of neuromorphic vision. The proposed FLGR is a sequence learned with a mixture density autoencoder and better preserves the nature of event-based data. FLGR has a fixed-length data format and is easy to feed to a sequence classifier. Moreover, an RNN-HMM hybrid is proposed to address the continuous gesture recognition problem. A recurrent neural network (RNN) is applied for FLGR sequence classification, while a hidden Markov model (HMM) is employed for localizing the candidate gesture and improving the result in a continuous sequence. A neuromorphic continuous hand-gesture dataset (Neuro ConGD Dataset) with 17 gesture classes was developed for the neuromorphic research community. We hope that FLGR can inspire work on highly efficient, high-speed, high-dynamic-range event-based sequence classification tasks.
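The downstream sequence-classification stage can be sketched as a small recurrent network over a sequence of fixed-length gist vectors; the FLGR mixture-density autoencoder and the HMM localization step are not reproduced here, and all dimensions below are placeholders.

```python
import torch
import torch.nn as nn

class GistSequenceClassifier(nn.Module):
    """Sketch of the classifier stage only: an RNN over a sequence of
    fixed-length event-gist vectors, producing per-gesture logits."""
    def __init__(self, gist_dim=64, hidden=128, n_classes=17):
        super().__init__()
        self.rnn = nn.GRU(gist_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):          # x: (batch, time, gist_dim)
        _, h = self.rnn(x)
        return self.head(h[-1])    # logits per gesture class

model = GistSequenceClassifier()
dummy = torch.randn(8, 30, 64)     # 8 sequences, 30 steps, 64-dim gists
print(model(dummy).shape)          # torch.Size([8, 17])
```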

9.
Sensors (Basel) ; 19(1)2019 Jan 08.
Article in English | MEDLINE | ID: mdl-30626132

ABSTRACT

Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) for motor imagery translate the subject's motor intention into control signals by classifying the EEG patterns caused by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. Extracting such features is difficult due to the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning methods allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models for decoding motor imagery movements directly from raw EEG signals without any manual feature engineering: (1) a long short-term memory network (LSTM); (2) a spectrogram-based convolutional neural network (CNN); and (3) a recurrent convolutional neural network (RCNN). Results were evaluated on our own publicly available EEG data collected from 20 subjects and on the existing 2b EEG dataset from the "BCI Competition IV". Overall, better classification performance was achieved with the deep learning models than with state-of-the-art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI.
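A minimal sketch of the spectrogram front end that a CNN of the kind listed under (2) would consume; the channel count, sampling rate, window length, and random data are illustrative and not taken from the study.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 250                                                   # sampling rate in Hz
eeg = np.random.default_rng(4).normal(size=(3, 4 * fs))    # 3 channels, 4 s trial

specs = []
for ch in eeg:
    # One log-power spectrogram "image" per EEG channel.
    f, t, Sxx = spectrogram(ch, fs=fs, nperseg=64, noverlap=32)
    specs.append(np.log1p(Sxx))
cnn_input = np.stack(specs)                # shape: (channels, freq_bins, time_bins)
print(cnn_input.shape)
```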


Assuntos
Interfaces Cérebro-Computador , Eletroencefalografia/métodos , Movimento/fisiologia , Redes Neurais de Computação , Algoritmos , Mãos/fisiologia , Humanos , Aprendizado de Máquina
10.
J Neural Eng ; 16(2): 026014, 2019 04.
Article in English | MEDLINE | ID: mdl-30577030

ABSTRACT

OBJECTIVE: The objective of this work is to use the capability of spiking neural networks to capture the spatio-temporal information encoded in time-series signals and to decode them without hand-crafted features or vector-based learning, and to realize the spiking model on low-power neuromorphic hardware. APPROACH: The NeuCube spiking model was used to classify different grasp movements directly from raw surface electromyography (sEMG) signals, to estimate the applied finger forces, and to classify two motor imagery movements from raw electroencephalography (EEG). In a parallel investigation, the designed spiking decoder was implemented on SpiNNaker neuromorphic hardware, which allows low-energy real-time processing. MAIN RESULTS: Experimental results reveal a better classification accuracy using the NeuCube model compared to traditional machine learning methods. For sEMG classification, we reached a training accuracy of 85% and a test accuracy of 84.8%, as well as less than 19% relative root mean square error (rRMSE) when estimating finger forces from six subjects. For the EEG classification, a mean accuracy of 75% was obtained when tested on raw EEG data from nine subjects from the existing 2b dataset from 'BCI competition IV'. SIGNIFICANCE: This work provides a proof of concept for a successful implementation of the NeuCube spiking model on the SpiNNaker neuromorphic platform for raw sEMG and EEG decoding, which could chart a route ahead for a new generation of portable, closed-loop, low-power neuroprostheses.
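For reference, the rRMSE metric quoted above can be computed as below; the abstract does not state the exact normalisation, so range normalisation is assumed here and the toy force traces are made up.

```python
import numpy as np

def relative_rmse(y_true, y_pred):
    """Relative root-mean-square error in percent, normalised by the range
    of the true signal (one common convention for force-decoding work)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / (np.max(y_true) - np.min(y_true))

force_true = np.array([0.0, 1.2, 2.5, 3.1, 2.0])
force_pred = np.array([0.1, 1.0, 2.7, 3.0, 2.2])
print(f"rRMSE: {relative_rmse(force_true, force_pred):.1f}%")
```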


Assuntos
Redes Neurais de Computação , Algoritmos , Eletroencefalografia , Eletromiografia , Feminino , Mãos , Força da Mão/fisiologia , Humanos , Aprendizado de Máquina , Masculino , Modelos Neurológicos , Próteses e Implantes , Desenho de Prótese , Adulto Jovem
11.
J Neural Eng ; 15(6): 065003, 2018 12.
Article in English | MEDLINE | ID: mdl-30215610

ABSTRACT

OBJECTIVE: The objective of this work is to present gumpy, a new free and open-source Python toolbox designed for hybrid brain-computer interfaces (BCIs). APPROACH: Gumpy provides state-of-the-art algorithms and includes a rich selection of signal processing methods that have been employed by the BCI community over the last 20 years. In addition, a wide range of classification methods is provided, spanning from classical machine learning algorithms to deep neural network models. Gumpy can be used for both EEG and EMG biosignal analysis, visualization, real-time streaming, and decoding. RESULTS: The usage of the toolbox was demonstrated through two different offline example studies, namely movement prediction from EEG motor imagery and the decoding of natural grasp movements with the applied finger forces from surface EMG (sEMG) signals. Additionally, gumpy was used for real-time control of a robot arm using steady-state visually evoked potentials (SSVEP) as well as for real-time prosthetic hand control using sEMG. Overall, the results obtained with the gumpy toolbox are comparable to or better than previously reported results on the same datasets. SIGNIFICANCE: Gumpy is free and open-source software that allows end-users to perform online hybrid BCIs and provides different techniques for processing and decoding EEG and EMG signals. More importantly, the achieved results reveal that gumpy's deep learning toolbox can match or outperform the state of the art in terms of accuracy. This can enable BCI researchers to develop more robust decoding algorithms using novel techniques and hence chart a route ahead for new BCI improvements.


Assuntos
Interfaces Cérebro-Computador , Software , Algoritmos , Eletroencefalografia , Eletromiografia , Mãos , Humanos , Imaginação/fisiologia , Aprendizado de Máquina , Movimento/fisiologia , Linguagens de Programação , Próteses e Implantes , Desempenho Psicomotor/fisiologia , Reprodutibilidade dos Testes
12.
Front Neurorobot ; 12: 4, 2018.
Article in English | MEDLINE | ID: mdl-29515386

ABSTRACT

In order to safely navigate and orient in their local surroundings, autonomous systems need to rapidly extract and persistently track visual features from the environment. While there are many algorithms tackling those tasks for traditional frame-based cameras, these have to deal with the fact that conventional cameras sample their environment at a fixed frequency. Most prominently, the same features have to be found in consecutive frames, and corresponding features then need to be matched using elaborate techniques, as any information between the two frames is lost. We introduce a novel method to detect and track line structures in data streams of event-based silicon retinae [also known as dynamic vision sensors (DVS)]. In contrast to conventional cameras, these biologically inspired sensors generate a quasi-continuous stream of vision information analogous to the information stream created by the ganglion cells in mammalian retinae. All pixels of a DVS operate asynchronously without a periodic sampling rate and emit a so-called DVS address event as soon as they perceive a luminance change exceeding an adjustable threshold. We use the high temporal resolution of the DVS to track features continuously through time instead of only at fixed points in time. The focus of this work lies on tracking lines in a mostly static environment observed by a moving camera, a typical setting in mobile robotics. Since DVS events are mostly generated at object boundaries and edges, which in man-made environments often form lines, lines were chosen as the feature to track. Our method is based on detecting planes of DVS address events in x-y-t space and tracing these planes through time. It is robust against noise and runs in real time on a standard computer, making it suitable for low-latency robotics. The efficacy and performance are evaluated on real-world data sets showing man-made structures in an office building, recorded with a DAVIS240C sensor using event data for tracking and frame data for ground-truth estimation.
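The core geometric idea, events from a moving line lying close to a plane in x-y-t space, can be sketched as a least-squares plane fit; the detection and tracing pipeline around it is not reproduced, and the synthetic events below are made up.

```python
import numpy as np

def fit_event_plane(x, y, t):
    """Least-squares fit of a plane t = a*x + b*y + c to DVS address events.
    A moving line edge produces events that lie close to such a plane in
    x-y-t space; this sketch covers only the fitting step."""
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, t, rcond=None)
    residuals = t - A @ coeffs
    return coeffs, np.sqrt(np.mean(residuals ** 2))

# Synthetic events from a line edge sweeping across a 128x128 sensor.
rng = np.random.default_rng(5)
x = rng.uniform(0, 128, 500)
y = rng.uniform(0, 128, 500)
t = 0.002 * x + 0.001 * y + 0.1 + rng.normal(0, 1e-4, 500)
coeffs, rms = fit_event_plane(x, y, t)
print("plane coefficients:", coeffs, "RMS residual:", rms)
```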

13.
Comput Biol Med ; 95: 271-276, 2018 04 01.
Article in English | MEDLINE | ID: mdl-29180004

ABSTRACT

Ageing affects many aspects of physical condition, and one of them is the way a person walks. This property, the gait pattern, can be observed unobtrusively by letting people walk over a sensor floor. The electric capacitance sensors built into the floor deliver information about when and where feet come into close proximity and contact with the floor during the phases of human locomotion. We processed gait patterns recorded this way by extracting a feature vector containing the discretised distribution of the geometrical extents of significant sensor readings. This feature vector is an implicit measure encoding the ratio of swing-to-stance phase timings in the gait cycle and representing how cleanly the leg swing is performed. We then used the dataset to train a Multi-Layer Perceptron to perform regression, with the age of the person as the target value and the feature vector as input. With this method and a dataset of 142 recorded persons, we achieved a mean absolute error of approximately 10 years between the true and estimated age of the person. Considering the novelty of our approach, this is an acceptable result. The combination of a floor sensor and machine learning methods for interpreting the sensor data seems promising for further research and applications in care and medicine.
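A compact sketch of the regression setup described above, using scikit-learn's MLPRegressor on placeholder histogram features; the feature extraction from the capacitance floor and the network architecture are assumptions, not the paper's.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

# Placeholder stand-in for the floor-sensor gait features: each person is
# described by a discretised distribution (histogram) of sensor-reading extents.
rng = np.random.default_rng(6)
n_people, n_bins = 142, 16
X = rng.dirichlet(np.ones(n_bins), size=n_people)       # histogram features
age = rng.uniform(5, 90, n_people)                      # target: age in years

mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
mlp.fit(X, age)
print("MAE on training data (years):", mean_absolute_error(age, mlp.predict(X)))
```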


Assuntos
Envelhecimento/fisiologia , Aprendizado de Máquina , Modelos Biológicos , Caminhada/fisiologia , Adulto , Idoso , Idoso de 80 Anos ou mais , Criança , Feminino , Humanos , Masculino , Pessoa de Meia-Idade
14.
Front Neurorobot ; 11: 28, 2017.
Article in English | MEDLINE | ID: mdl-28747883

ABSTRACT

Neuromorphic hardware emulates the dynamics of biological neural networks in electronic circuits, offering an alternative to the von Neumann computing architecture that is low-power, inherently parallel, and event-driven. This hardware makes it possible to implement neural-network-based robotic controllers in an energy-efficient way with low latency, but requires solving the problem of device variability characteristic of analog electronic circuits. In this work, we interfaced the mixed-signal analog-digital neuromorphic processor ROLLS to a neuromorphic dynamic vision sensor (DVS) mounted on a robotic vehicle and developed an autonomous neuromorphic agent that is able to perform neurally inspired obstacle avoidance and target acquisition. We developed a neural network architecture that can cope with device variability and verified its robustness in different environmental situations, e.g., moving obstacles, moving targets, clutter, and poor lighting conditions. We demonstrate how this network, combined with the properties of the DVS, allows the robot to avoid obstacles using simple biologically inspired dynamics. We also show how a Dynamic Neural Field for target acquisition can be implemented in spiking neuromorphic hardware. This work demonstrates a working implementation of obstacle avoidance and target acquisition using mixed-signal analog/digital neuromorphic hardware.

15.
Sensors (Basel) ; 16(10)2016 Oct 20.
Article in English | MEDLINE | ID: mdl-27775621

ABSTRACT

Biological and technical systems operate in a rich multimodal environment. Due to the diversity of the incoming sensory streams a system perceives and the variety of motor capabilities a system exhibits, there is no single representation and no single unambiguous interpretation of such a complex scene. In this work we propose a novel sensory processing architecture inspired by the distributed macro-architecture of the mammalian cortex. The underlying computation is performed by a network of computational maps, each representing a different sensory quantity. All the different sensory streams enter the system through multiple parallel channels. The system autonomously associates and combines them into a coherent representation, given incoming observations. These processes are adaptive and involve learning. The proposed framework introduces mechanisms for the self-creation and learning of the functional relations between the computational maps, encoding sensorimotor streams, directly from the data. Its intrinsic scalability, parallelisation, and automatic adaptation to unforeseen sensory perturbations make our approach a promising candidate for robust multisensory fusion in robotic systems. We demonstrate this by applying our model to 3D motion estimation on a quadrotor.

16.
Front Comput Neurosci ; 10: 13, 2016.
Article in English | MEDLINE | ID: mdl-26924979

ABSTRACT

After the discovery of grid cells, which are an essential component for understanding how the mammalian brain encodes spatial information, three main classes of computational models were proposed to explain their working principles. Amongst them, the one based on continuous attractor networks (CAN) is promising in terms of biological plausibility and suitable for robotic applications. However, in its current formulation, it is unable to reproduce important electrophysiological findings and cannot be used to perform path integration for long periods of time. In fact, in the absence of an appropriate resetting mechanism, the accumulation of errors over time, due to the noise intrinsic to velocity estimation and neural computation, prevents CAN models from reproducing stable spatial grid patterns. In this paper, we propose an extension of the CAN model that uses Hebbian plasticity to anchor grid-cell activity to environmental landmarks. To validate our approach, we used both artificial data and real data recorded from a robotic setup as input to the neural simulations. The additional neural mechanism can not only anchor grid patterns to external sensory cues but also recall grid patterns generated in previously explored environments. These results might be instrumental for next-generation bio-inspired robotic navigation algorithms that take advantage of neural computation in order to cope with complex and dynamic environments.
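The anchoring mechanism can be illustrated with a plain Hebbian association between a landmark (sensory-cue) population and grid-cell activity; the learning rate, normalisation, population sizes, and random activity below are illustrative assumptions rather than the paper's model.

```python
import numpy as np

def hebbian_anchor_update(w, landmark_input, grid_activity, lr=0.05):
    """Plain Hebbian update linking a landmark population to grid-cell
    activity, so that revisiting the landmark can later pull the attractor
    state back and correct accumulated path-integration error."""
    w += lr * np.outer(grid_activity, landmark_input)
    # Row-normalise to keep the weights bounded.
    return w / np.maximum(np.linalg.norm(w, axis=1, keepdims=True), 1e-9)

n_grid, n_cue = 100, 20
w = np.zeros((n_grid, n_cue))
grid_activity = np.random.default_rng(7).random(n_grid)
landmark = np.zeros(n_cue)
landmark[3] = 1.0                                     # cue 3 is currently visible
w = hebbian_anchor_update(w, landmark, grid_activity)
print("strongest grid-to-cue association:", np.unravel_index(w.argmax(), w.shape))
```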

17.
Front Neurorobot ; 10: 1, 2016.
Article in English | MEDLINE | ID: mdl-26913002

ABSTRACT

We demonstrate a hybrid neuromorphic learning paradigm that learns complex sensorimotor mappings based on a small set of hard-coded reflex behaviors. A mobile robot is first controlled by a basic set of reflexive hand-designed behaviors. All sensor data is provided via a spike-based silicon retina camera (eDVS), and all control is implemented via spiking neurons simulated on neuromorphic hardware (SpiNNaker). Given this control system, the robot is capable of simple obstacle avoidance and random exploration. To train the robot to perform more complex tasks, we observe the robot and find instances where the robot accidentally performs the desired action. Data recorded from the robot during these times is then used to update the neural control system, increasing the likelihood of the robot performing that task in the future, given a similar sensor state. As an example application of this general-purpose method of training, we demonstrate the robot learning to respond to novel sensory stimuli (a mirror) by turning right if it is present at an intersection, and otherwise turning left. In general, this system can learn arbitrary relations between sensory input and motor behavior.

19.
Article in English | MEDLINE | ID: mdl-26737015

ABSTRACT

We propose a prototype of a wearable mobility device that aims to assist the blind with navigation and object avoidance via auditory vision substitution. The described system uses two dynamic vision sensors and event-based information processing techniques to extract depth information. The 3D visual input is then processed using three different strategies and converted to a 3D output sound using an individualized head-related transfer function. The performance of the device with the different processing strategies is evaluated in initial tests with ten subjects. The outcome of these tests demonstrates promising performance of the system after training times of only a few minutes, owing to the minimal encoding of the vision-sensor outputs, which are translated into simple sound patterns that are easily interpretable for the user. The envisioned system will allow for efficient real-time algorithms on a hands-free, lightweight device with exceptional battery lifetime.


Assuntos
Cegueira/terapia , Retina/fisiologia , Visão Ocular/fisiologia , Pessoas com Deficiência Visual , Adulto , Algoritmos , Desenho de Equipamento , Humanos , Processamento de Imagem Assistida por Computador , Masculino , Som , Adulto Jovem
20.
Ann N Y Acad Sci ; 1164: 353-66, 2009 May.
Article in English | MEDLINE | ID: mdl-19645927

ABSTRACT

Eye, head, and body movements jointly control the direction of gaze and the stability of retinal images in most mammalian species. The contribution of the individual movement components, however, largely depends on the ecological niche the animal occupies and on the layout of the animal's retina, in particular its photoreceptor density distribution. Here the relative contributions of eye-in-head and head-in-world movements in cats are measured, and the results are compared to recent human data. For the cat, a lightweight custom-made head-mounted video setup was used (CatCam). Human data were acquired with the novel EyeSeeCam device, which measures eye position to control a gaze-contingent camera in real time. For both species, the analysis was based on simultaneous recordings of eye and head movements during free exploration of a natural environment. Despite the substantial differences in ecological niche, photoreceptor density, and saccade frequency, eye-movement characteristics in both species are remarkably similar. Coordinated eye and head movements dominate the dynamics of the retinal input. Interestingly, compensatory (gaze-stabilizing) movements play a more dominant role in humans than they do in cats. This finding was interpreted as a consequence of the substantially different timescales of head movements, with cats' head movements showing roughly 5-fold faster dynamics than humans'. For both species, models and laboratory experiments therefore need to account for this rich input dynamic to obtain validity for ecologically realistic settings.


Subjects
Eye Movements, Head Movements, Animals, Cats, Humans