Results 1 - 5 of 5
1.
Neuroimage; 71: 114-24, 2013 May 01.
Article in English | MEDLINE | ID: mdl-23321153

ABSTRACT

Multiple visual signals are relevant to the perception of heading direction. While the role of optic flow and depth cues has been studied extensively, little is known about the visual effects of gravity on heading perception. We used fMRI to investigate the contribution of gravity-related visual cues to the processing of vertical versus horizontal apparent self-motion. Participants experienced virtual roller-coaster rides in different scenarios, at constant speed or with 1 g acceleration/deceleration. Imaging results showed that vertical self-motion coherent with gravity engaged the posterior insula and other brain regions that have previously been associated with vertical object motion under gravity. This selective pattern of activation was also found in a second experiment that included rectilinear motion in tunnels, whose direction was cued only by the preceding open-air curves. We argue that the posterior insula might perform high-order computations on visual motion patterns, combining different sensory cues and prior information about the effects of gravity. Medial-temporal regions, including the parahippocampus and hippocampus, were more activated by horizontal motion, preferentially at constant speed, consistent with a role in inertial navigation. Overall, the results suggest partially distinct neural representations of the cardinal axes of self-motion (horizontal and vertical).


Subjects
Brain Mapping, Brain/physiology, Gravitation, Motion Perception/physiology, Adult, Cues (Psychology), Female, Humans, Magnetic Resonance Imaging, Male, Motion (Physics), Photic Stimulation, Young Adult
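
The stimulus manipulation in this study hinges on simple ride kinematics: segments either moved at constant speed or accelerated/decelerated at 1 g. As a purely illustrative aside, the short Python sketch below computes such a velocity profile; the function name, starting speed, and duration are arbitrary choices, not values taken from the study.

```python
import numpy as np

G = 9.81  # standard gravitational acceleration, m/s^2

def velocity_profile(v0, duration, accel=G, dt=0.01):
    """Speed over time for a ride segment with constant acceleration.

    The study contrasts constant-speed segments with 1 g
    acceleration/deceleration; use accel=0.0 for constant speed or
    accel=-G for deceleration. All numerical values are illustrative.
    """
    t = np.arange(0.0, duration, dt)
    return t, v0 + accel * t

if __name__ == "__main__":
    t, v = velocity_profile(v0=5.0, duration=2.0)  # 1 g acceleration
    print(f"speed after {t[-1]:.2f} s: {v[-1]:.2f} m/s")
```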
2.
Neuroimage; 66: 402-11, 2013 Feb 01.
Article in English | MEDLINE | ID: mdl-23142071

ABSTRACT

Emotional facial expressions play an important role in social communication across primates. Despite major progress in our understanding of categorical information processing, such as for objects and faces, little is known about how the primate brain evolved to process emotional cues. In this study, we used functional magnetic resonance imaging (fMRI) to compare the processing of emotional facial expressions between monkeys and humans. We used a 2×2×2 factorial design with species (human and monkey), expression (fear and chewing), and configuration (intact versus scrambled) as factors. At the whole-brain level, neural responses to conspecific emotional expressions were anatomically confined to the superior temporal sulcus (STS) in humans. Within the human STS, we found functional subdivisions, with a face-selective right posterior STS area that also responded to emotional expressions of other species and a more anterior area in the right middle STS that responded specifically to human emotions. Hence, we argue that the latter region does not show a mere emotion-dependent modulation of activity but is primarily driven by human emotional facial expressions. Conversely, in monkeys, emotional responses appeared in earlier visual cortex and outside face-selective regions, in inferior temporal (IT) cortex areas that also responded to multiple visual categories. Within monkey IT, we also found areas that were more responsive to conspecific than to non-conspecific emotional expressions, but these responses were not as specific as in the human middle STS. Overall, our results indicate that the human STS may have developed unique properties to deal with social cues such as emotional expressions.


Subjects
Cues (Psychology), Emotions/physiology, Facial Expression, Temporal Lobe/physiology, Visual Perception/physiology, Adult, Animals, Brain Mapping, Female, Humans, Image Interpretation, Computer-Assisted, Macaca mulatta, Magnetic Resonance Imaging, Male, Young Adult
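
To make the stimulus design concrete, the sketch below enumerates the eight cells of the 2×2×2 factorial design using the factor levels named in the abstract (species, expression, configuration); the code is only an illustrative enumeration, not part of the study's analysis pipeline.

```python
from itertools import product

# Factors and levels of the 2x2x2 design reported in the abstract.
SPECIES = ("human", "monkey")            # stimulus species
EXPRESSION = ("fear", "chewing")         # facial expression
CONFIGURATION = ("intact", "scrambled")  # image configuration

def list_conditions():
    """Enumerate the eight stimulus conditions of the factorial design."""
    return [
        {"species": s, "expression": e, "configuration": c}
        for s, e, c in product(SPECIES, EXPRESSION, CONFIGURATION)
    ]

if __name__ == "__main__":
    for i, condition in enumerate(list_conditions(), start=1):
        print(i, condition)
```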
3.
J Vis; 10(10): 18, 2010 Aug 18.
Article in English | MEDLINE | ID: mdl-20884483

ABSTRACT

Walking through a crowd or driving on a busy street requires monitoring your own movement and that of others. The segmentation of these other, independently moving objects is one of the most challenging tasks in vision, as it requires fast and accurate computations to disentangle independent motion from egomotion, often in cluttered scenes. Our brain accomplishes this in the dorsal visual stream, relying on heavy parallel-hierarchical processing across many areas. This study is the first to exploit the potential of such a design in an artificial vision system. We emulate large parts of the dorsal stream in an abstract way and implement an architecture with six interdependent feature extraction stages (e.g., edges, stereo, optical flow). The computationally highly demanding combination of these features is used to reliably extract moving objects in real time. In this way, by exploiting the advantages of a parallel-hierarchical design, we arrive at a novel and powerful artificial vision system that approaches the richness, speed, and accuracy of visual processing in biological systems.


Subjects
Motion Perception/physiology, User-Computer Interface, Visual Cortex/physiology, Visual Pathways/physiology, Humans
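
The abstract describes a parallel-hierarchical architecture in which several interdependent feature extraction stages are computed and then combined to segment independently moving objects. The sketch below shows, under strong simplifying assumptions, how such a stage layout might be wired in Python: only the edge, stereo, and optical-flow stages are named in the abstract, the stage bodies are placeholders (real stereo and flow operators would need an image pair or a previous frame), and the combination rule is trivial.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

# Placeholder feature extractors; only edges, stereo, and optical flow are
# named in the abstract, and real implementations would need more inputs
# (a stereo pair, a previous frame) than a single image.
def edges(frame):
    return np.zeros_like(frame)

def stereo(frame):
    return np.zeros_like(frame)

def optical_flow(frame):
    return np.zeros_like(frame)

STAGES = {"edges": edges, "stereo": stereo, "flow": optical_flow}

def extract_features(frame):
    """Run the feature-extraction stages in parallel on one frame."""
    with ThreadPoolExecutor(max_workers=len(STAGES)) as pool:
        futures = {name: pool.submit(fn, frame) for name, fn in STAGES.items()}
        return {name: fut.result() for name, fut in futures.items()}

def segment_independent_motion(features):
    """Combine the feature maps into a (here trivial) moving-object mask."""
    combined = sum(features.values())
    return combined > 0

if __name__ == "__main__":
    frame = np.zeros((240, 320), dtype=np.float32)
    mask = segment_independent_motion(extract_features(frame))
    print(mask.shape, mask.dtype)
```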
4.
Data Brief; 11: 491-498, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28289699

ABSTRACT

We present a novel approach and database that combines the inexpensive generation of 3D object models via monocular or RGB-D camera images with 3D printing and a state-of-the-art object tracking algorithm. Unlike recent efforts towards the creation of 3D object databases for robotics, our approach does not require expensive and controlled 3D scanning setups and aims to enable anyone with a camera to scan, print, and track complex objects for manipulation research. The proposed approach results in detailed textured mesh models whose 3D-printed replicas provide close approximations of the originals. A key motivation for using 3D-printed objects is the ability to precisely control and vary object properties such as size, material, and mass distribution in the 3D printing process, in order to obtain reproducible conditions for robotic manipulation research. We present CapriDB, an extensible database resulting from this approach that initially contains 40 textured, 3D-printable mesh models together with tracking features, to facilitate the adoption of the proposed approach.
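
As an illustration of how such a database entry might be organized in code, the sketch below defines a hypothetical record type pairing a textured mesh with its tracking features; CapriDB's actual schema and file layout are not described in the abstract, so every field name and path here is an assumption.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record layout for one database entry; every field name and
# file path is an assumption, not CapriDB's documented schema.
@dataclass
class ObjectModel:
    name: str                       # human-readable object name
    mesh_path: str                  # textured mesh file, e.g. OBJ or PLY
    texture_path: str               # associated texture image
    printable: bool = True          # suitable for 3D printing
    tracking_features: List[str] = field(default_factory=list)  # files used by the tracker

def load_database(entries):
    """Wrap raw entry dictionaries as ObjectModel records."""
    return [ObjectModel(**entry) for entry in entries]

if __name__ == "__main__":
    demo = load_database([{
        "name": "mug",
        "mesh_path": "models/mug.obj",
        "texture_path": "models/mug.png",
        "tracking_features": ["features/mug.bin"],
    }])
    print(demo[0])
```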

5.
Int J Neural Syst; 22(3): 1250007, 2012 Jun.
Article in English | MEDLINE | ID: mdl-23627623

ABSTRACT

We present a hybrid neural network architecture that supports the estimation of binocular disparity in a cyclopean, head-centric coordinate system without explicitly establishing retinal correspondences. Instead, the responses of binocular energy neurons are gain-modulated by oculomotor signals. The network can handle the full six degrees of freedom of binocular gaze and operates directly on image pairs of possibly varying contrast. Furthermore, we show that in the absence of an oculomotor signal the same architecture is capable of estimating the epipolar geometry directly from the population response. The increased complexity of the scenarios considered in this work provides an important step towards the application of computational models centered on gain-modulation mechanisms in real-world robotic applications. The proposed network is shown to outperform a standard computer vision technique on a disparity estimation task involving real-world stereo images.


Subjects
Head, Models, Neurological, Neurons/physiology, Visual Disparity/physiology, Binocular Vision/physiology, Visual Pathways/pathology, Algorithms, Biomimetic Materials, Computer Simulation, Ocular Fixation/physiology, Functional Laterality, Humans, Neural Networks (Computer), Orientation/physiology, Photic Stimulation
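
The core mechanism in this architecture is a binocular energy response whose gain is set multiplicatively by an oculomotor signal. The NumPy sketch below illustrates that ingredient in isolation for a single 1-D disparity detector; the Gabor parameters, the sigmoid gain, and the test signals are arbitrary choices, and the paper's full network (six degrees of freedom of gaze, epipolar-geometry estimation) is not reproduced here.

```python
import numpy as np

def gabor(x, sigma=1.0, freq=0.5, phase=0.0):
    """1-D Gabor receptive field; the phase selects the even or odd member."""
    return np.exp(-x**2 / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * freq * x + phase)

def binocular_energy(left, right, x, phase_shift):
    """Quadrature-pair binocular energy response of one disparity detector."""
    energy = 0.0
    for phase in (0.0, np.pi / 2.0):  # quadrature pair
        simple = left @ gabor(x, phase=phase) + right @ gabor(x, phase=phase + phase_shift)
        energy += simple**2
    return energy

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def gain_modulated(energy, eye_signal, gain_fn=sigmoid):
    """Multiplicative gain modulation of the energy response by an oculomotor signal."""
    return energy * gain_fn(eye_signal)

if __name__ == "__main__":
    x = np.linspace(-3.0, 3.0, 64)
    rng = np.random.default_rng(0)
    left = rng.standard_normal(64)
    right = np.roll(left, 2)  # right image patch shifted by a small disparity
    e = binocular_energy(left, right, x, phase_shift=0.5)
    print(gain_modulated(e, eye_signal=0.3))
```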