Results 1 - 20 of 31
1.
Elife ; 13, 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38506719

ABSTRACT

Current models of scene processing in the human brain include three scene-selective areas: the parahippocampal place area (or the temporal place areas), the retrosplenial cortex (or the medial place area), and the transverse occipital sulcus (or the occipital place area). Here, we challenged this model by showing that at least one other scene-selective site can also be detected within the human posterior intraparietal gyrus. Despite the smaller size of this site compared to the other scene-selective areas, the posterior intraparietal gyrus scene-selective (PIGS) site was detected consistently in a large pool of subjects (n = 59; 33 females). The reproducibility of this finding was tested based on multiple criteria, including comparing the results across sessions, utilizing different scanners (3T and 7T) and stimulus sets. Furthermore, we found that this site (but not the other three scene-selective areas) is significantly sensitive to ego-motion in scenes, thus distinguishing the role of PIGS in scene perception relative to other scene-selective areas. These results highlight the importance of including finer-scale scene-selective sites in models of scene processing - a crucial step toward a more comprehensive understanding of how scenes are encoded under dynamic conditions.


Subjects
Brain , Cerebral Cortex , Female , Humans , Reproducibility of Results , Environment , Ego
2.
Res Sq ; 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38260553

ABSTRACT

Current models of scene processing in the human brain include three scene-selective areas: the Parahippocampal Place Area (or the temporal place areas; PPA/TPA), the retrosplenial cortex (or the medial place area; RSC/MPA) and the transverse occipital sulcus (or the occipital place area; TOS/OPA). Here, we challenged this model by showing that at least one other scene-selective site can also be detected within the human posterior intraparietal gyrus. Despite the smaller size of this site compared to the other scene-selective areas, the posterior intraparietal gyrus scene-selective (PIGS) site was detected consistently in a large pool of subjects (n=59; 33 females). The reproducibility of this finding was tested based on multiple criteria, including comparing the results across sessions, utilizing different scanners (3T and 7T) and stimulus sets. Furthermore, we found that this site (but not the other three scene-selective areas) is significantly sensitive to ego-motion in scenes, thus distinguishing the role of PIGS in scene perception relative to other scene-selective areas. These results highlight the importance of including finer-scale scene-selective sites in models of scene processing - a crucial step toward a more comprehensive understanding of how scenes are encoded under dynamic conditions.

3.
Health Inf Sci Syst ; 12(1): 4, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38093716

ABSTRACT

Deep-learning-based monocular 3D reconstruction methods have been applied in many conventional fields. In medical endoscopic imaging, where ground-truth information is difficult to obtain, self-supervised deep learning has long been a focus of research. However, current research on endoscopic 3D reconstruction has mainly been conducted in laboratory environments and lacks experience with complex clinical surgical environments. In this work, we use an optical-flow-based neural network to address the problem of inconsistent brightness between frames. Additionally, attention modules and inter-layer losses are introduced to tackle the complexity of endoscopic scenes in clinical surgeries. The attention mechanism allows the network to better focus on pixel texture details and depth differences, while the inter-layer losses supervise the network at different scales. We have established a complete monocular endoscopic 3D reconstruction framework and conducted quantitative experiments on a clinical dataset using the cross-correlation coefficient as a metric. Compared with other self-supervised methods, our framework better models the mapping between adjacent frames during endoscope motion. To validate the generalization performance of our framework, we tested the model trained on the clinical dataset on the SCARED dataset and achieved equally strong results.

4.
Front Neurosci ; 16: 1010302, 2022.
Article in English | MEDLINE | ID: mdl-36507348

ABSTRACT

Evolution has honed predatory skills in the natural world, where localizing and intercepting fast-moving prey is required. The current generation of robotic systems mimics these biological systems using deep learning. High-speed processing of camera frames using convolutional neural networks (CNNs) (the frame pipeline) is resource-limited on constrained aerial edge-robots. Even with more compute resources, throughput is ultimately capped at the camera's frame rate, and frame-only traditional systems fail to capture the detailed temporal dynamics of the environment. Bio-inspired event cameras and spiking neural networks (SNNs) provide an asynchronous sensor-processor pair (the event pipeline) that captures the continuous temporal details of the scene at high speed, but lags in accuracy. In this work, we propose a target localization system that combines event-camera- and SNN-based high-speed target estimation with frame-based camera- and CNN-driven reliable object detection, fusing the complementary spatio-temporal strengths of the event and frame pipelines. One of our main contributions is the design of an SNN filter that borrows from the neural mechanism for ego-motion cancellation in houseflies: it fuses vestibular sensing with vision to cancel the activity corresponding to the predator's self-motion. We also integrate the neuro-inspired multi-pipeline processing with the task-optimized multi-neuronal pathway structure found in primates and insects. The system is validated to outperform CNN-only processing using prey-predator drone simulations in realistic 3D virtual environments, and is then demonstrated in a real-world multi-drone set-up with emulated event data. Subsequently, we use recorded sensory data from a multi-camera and inertial measurement unit (IMU) assembly to show the desired behavior while tolerating realistic noise in the vision and IMU sensors. We analyze the design space to identify optimal parameters for the spiking neurons and CNN models, and to check their effect on the performance metrics of the fused system. Finally, we map the throughput-controlling SNN and the fusion network onto an edge-compatible Zynq-7000 FPGA to show a potential 264 outputs per second even under constrained resource availability. This work may open new research directions by coupling multiple sensing and processing modalities inspired by discoveries in neuroscience to break fundamental trade-offs in frame-based computer vision.
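The ego-motion cancellation idea above lends itself to a compact illustration: predict the image flow induced purely by the camera's own rotation from IMU gyro rates, subtract it from the sensed flow, and keep the residual as candidate target motion. The sketch below is an assumption-laden simplification (pinhole motion-field equations in normalized coordinates, a fixed threshold, plain numpy rather than the paper's SNN filter):

```python
import numpy as np

def rotational_flow(xx, yy, omega):
    """Image-plane flow induced purely by camera rotation omega = (wx, wy, wz),
    from the standard pinhole motion-field equations in normalized coordinates."""
    wx, wy, wz = omega
    u = xx * yy * wx - (1 + xx**2) * wy + yy * wz
    v = (1 + yy**2) * wx - xx * yy * wy - xx * wz
    return u, v

def cancel_ego_motion(flow_u, flow_v, omega, thresh=0.05):
    """Subtract the IMU-predicted rotational flow; the residual should be
    dominated by independently moving targets (e.g., the prey drone)."""
    h, w = flow_u.shape
    xx, yy = np.meshgrid(np.linspace(-0.5, 0.5, w), np.linspace(-0.5, 0.5, h))
    pu, pv = rotational_flow(xx, yy, omega)
    residual = np.hypot(flow_u - pu, flow_v - pv)
    return residual > thresh  # mask of candidate target pixels
```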

5.
Neuroimage ; 264: 119715, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36334557

ABSTRACT

All volitional movement in three-dimensional space requires multisensory integration, in particular of visual and vestibular signals. Where and how the human brain processes and integrates self-motion signals remains enigmatic. Here, we applied visual and vestibular self-motion stimulation using fast and precise whole-brain neuroimaging to delineate and characterize the entire cortical and subcortical egomotion network in a substantial cohort (n = 131). Our results identify a core egomotion network consisting of areas in the cingulate sulcus (CSv, PcM/pCi), the cerebellum (uvula), and the temporo-parietal cortex, including area VPS and an unnamed region in the supramarginal gyrus. Based on its cerebral connectivity pattern and anatomical localization, we propose that this region represents the human homologue of macaque area 7a. Whole-brain connectivity and gradient analyses imply an essential role of the connections between the cingulate sulcus and the cerebellar uvula in egomotion perception, possibly via feedback loops involved in updating visuo-spatial and vestibular information. The unique functional connectivity patterns of PcM/pCi hint at a central role in the multisensory integration essential for the perception of self-referential spatial awareness. All cortical egomotion hubs showed modular functional connectivity with other visual, vestibular, somatosensory and higher-order motor areas, underlining their mutual function in general sensorimotor integration.


Subjects
Brain Mapping , Magnetic Resonance Imaging , Humans , Photic Stimulation , Magnetic Resonance Imaging/methods , Cerebral Cortex/physiology , Brain/physiology
6.
Sensors (Basel) ; 22(17), 2022 Aug 25.
Article in English | MEDLINE | ID: mdl-36080853

ABSTRACT

Ego-motion estimation is a foundational capability for autonomous combine harvesters, supporting high-level functions such as navigation and harvesting. This paper presents a novel approach for estimating the motion of a combine harvester from a sequence of stereo images. The proposed method starts by tracking a set of 3D landmarks triangulated from stereo-matched features. Six-degree-of-freedom (DoF) ego-motion is obtained by minimizing the reprojection error of those landmarks in the current frame. Then, local bundle adjustment is performed to jointly refine structure (i.e., landmark positions) and motion (i.e., keyframe poses) in a sliding window. Both processes are encapsulated in a two-threaded architecture to achieve real-time performance. Our method uses a stereo camera, which enables estimation at true scale and easy startup of the system. Quantitative tests were performed on real agricultural scene data, comprising several different working paths, in terms of estimation accuracy and real-time performance. The experimental results demonstrate that the proposed perception system achieves favorable accuracy, outputting poses at 10 Hz, which is sufficient for online ego-motion estimation for combine harvesters.
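As a hedged sketch of the core step described here (6-DoF pose from minimizing landmark reprojection error), the following uses SciPy's least-squares solver with an axis-angle rotation; variable names, the pinhole model, and the solver choice are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points_w, rvec, tvec, K):
    """Project 3D landmarks into the image for a pose given as
    axis-angle rotation rvec and translation tvec (world -> camera)."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    pc = points_w @ R.T + tvec       # transform into the camera frame
    uv = pc @ K.T                    # pinhole projection
    return uv[:, :2] / uv[:, 2:3]

def reprojection_residual(pose, points_w, obs_uv, K):
    return (project(points_w, pose[:3], pose[3:], K) - obs_uv).ravel()

def estimate_pose(points_w, obs_uv, K, pose0=np.zeros(6)):
    """6-DoF pose by minimizing landmark reprojection error with a
    Levenberg-Marquardt least-squares solve."""
    sol = least_squares(reprojection_residual, pose0,
                        args=(points_w, obs_uv, K), method="lm")
    return sol.x  # [rx, ry, rz, tx, ty, tz]
```

In the paper, this per-frame estimate is then refined jointly with the landmark positions by local bundle adjustment over a sliding window.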


Subjects
Ego , Motion (Physics)
7.
Sensors (Basel) ; 22(4), 2022 Feb 11.
Article in English | MEDLINE | ID: mdl-35214285

ABSTRACT

This paper proposes a novel unsupervised learning framework for depth recovery and camera ego-motion estimation from monocular video. The framework exploits optical flow (OF) properties to jointly train the depth and ego-motion models. Unlike existing unsupervised methods, our method extracts features from the optical flow rather than from the raw RGB images, thereby enhancing unsupervised learning. In addition, we exploit the forward-backward consistency check of the optical flow to generate a mask of invalid regions in the image and, accordingly, eliminate outlier regions such as occlusions and moving objects from the learning. Furthermore, in addition to using view synthesis as a supervision signal, we impose additional loss functions, including an optical flow consistency loss and a depth consistency loss, on the valid image region to further enhance training of the models. Extensive experiments on multiple benchmark datasets demonstrate that our method outperforms other unsupervised methods.
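A minimal numpy sketch of the forward-backward consistency check used to mask invalid regions; the threshold form follows the commonly used criterion |w_f + w_b(x + w_f)|^2 < a(|w_f|^2 + |w_b|^2) + b, and the constants and nearest-neighbor warp are assumptions rather than the paper's settings:

```python
import numpy as np

def backward_warp(field, flow):
    """Sample `field` at positions displaced by `flow` (nearest-neighbor
    for brevity; bilinear sampling would be used in practice)."""
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    xq = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    yq = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return field[yq, xq]

def valid_mask(flow_fwd, flow_bwd, alpha=0.01, beta=0.5):
    """Forward-backward check: a pixel is valid if following the forward
    flow and then the backward flow returns (approximately) to the start."""
    bwd_at_fwd = backward_warp(flow_bwd, flow_fwd)
    cycle = flow_fwd + bwd_at_fwd                         # should be ~0
    err = np.sum(cycle**2, axis=-1)
    mag = np.sum(flow_fwd**2, axis=-1) + np.sum(bwd_at_fwd**2, axis=-1)
    return err < alpha * mag + beta   # False marks occlusions/moving objects
```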


Subjects
Optic Flow , Ego , Motion (Physics) , Unsupervised Machine Learning
8.
Med Image Anal ; 77: 102338, 2022 04.
Article in English | MEDLINE | ID: mdl-35016079

ABSTRACT

Recently, self-supervised learning technology has been applied to calculate depth and ego-motion from monocular videos, achieving remarkable performance in autonomous driving scenarios. One widely adopted assumption of depth and ego-motion self-supervised learning is that the image brightness remains constant within nearby frames. Unfortunately, the endoscopic scene does not meet this assumption because there are severe brightness fluctuations induced by illumination variations, non-Lambertian reflections and interreflections during data collection, and these brightness fluctuations inevitably deteriorate the depth and ego-motion estimation accuracy. In this work, we introduce a novel concept referred to as appearance flow to address the brightness inconsistency problem. The appearance flow takes into consideration any variations in the brightness pattern and enables us to develop a generalized dynamic image constraint. Furthermore, we build a unified self-supervised framework to estimate monocular depth and ego-motion simultaneously in endoscopic scenes, which comprises a structure module, a motion module, an appearance module and a correspondence module, to accurately reconstruct the appearance and calibrate the image brightness. Extensive experiments are conducted on the SCARED dataset and EndoSLAM dataset, and the proposed unified framework exceeds other self-supervised approaches by a large margin. To validate our framework's generalization ability on different patients and cameras, we train our model on SCARED but test it on the SERV-CT and Hamlyn datasets without any fine-tuning, and the superior results reveal its strong generalization ability. Code is available at: https://github.com/ShuweiShao/AF-SfMLearner.
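As an illustration of the general idea only (not the paper's exact formulation, which is available at the linked repository), a brightness-calibrated photometric loss can allow a predicted per-pixel appearance change so that strict brightness constancy is no longer assumed:

```python
import numpy as np

def photometric_loss(target, warped, appearance_change, mask=None):
    """Relaxed photometric loss: instead of assuming I_t(x) == I_s(warp(x)),
    allow a predicted per-pixel brightness change C(x) so that
    I_t(x) ~= I_s(warp(x)) + C(x). `appearance_change` stands in for the
    output of an appearance module (an illustrative simplification)."""
    calibrated = warped + appearance_change
    diff = np.abs(target - calibrated)
    if mask is not None:
        return (diff * mask).sum() / (mask.sum() + 1e-7)
    return diff.mean()
```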


Subjects
Ego , Gastrointestinal Endoscopy , Humans , Motion (Physics)
9.
Neuroimage ; 244: 118581, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34543763

ABSTRACT

During real-world locomotion, continuous changes in self-motion direction (i.e., heading) are needed in order to move along a path or avoid an obstacle. Control of heading changes during locomotion requires the integration of multiple signals (i.e., visual, somatomotor, vestibular). Recent fMRI studies have shown that both somatomotor areas (human PEc [hPEc], human PE [hPE], primary somatosensory cortex [S-I]) and egomotion visual regions (cingulate sulcus visual area [CSv], posterior cingulate area [pCi], posterior insular cortex [PIC]) respond to both leg movements and egomotion-compatible visual stimulation, suggesting a role in the analysis of both visual attributes of egomotion and somatomotor signals with the aim of guiding locomotion. However, whether these regions are able to integrate egomotion-related visual signals with somatomotor inputs coming from leg movements during heading changes remains an open question. Here we used a combined approach of individual functional localizers and task-evoked fMRI activity. In thirty subjects we first localized three egomotion areas (CSv, pCi, PIC) and three somatomotor regions (S-I, hPE, hPEc). Then, we tested their responses in a multisensory integration experiment combining visual and somatomotor signals relevant to locomotion in congruent or incongruent trials. We used an fMR-adaptation paradigm to explore the sensitivity to the repeated presentation of these bimodal stimuli in the six regions of interest. Results revealed that hPE, S-I and CSv showed an adaptation effect regardless of congruency, while PIC, pCi and hPEc showed sensitivity to congruency. PIC exhibited a preference for congruent trials compared to incongruent trials. Areas pCi and hPEc exhibited an adaptation effect only for congruent and incongruent trials, respectively. The sensitivity of PIC, pCi and hPEc to the congruency relationship between visual (locomotion-compatible) cues and (leg-related) somatomotor inputs suggests that these regions are involved in multisensory integration processes, likely in order to guide/adjust leg movements during heading changes.


Subjects
Insular Cortex/physiology , Locomotion/physiology , Motor Cortex/physiology , Adult , Evoked Potentials , Female , Humans , Leg/physiology , Magnetic Resonance Imaging , Male , Young Adult
10.
Sensors (Basel) ; 21(15), 2021 Jul 23.
Article in English | MEDLINE | ID: mdl-34372242

ABSTRACT

The fusion of motion data is key in robotics and automated driving. Most existing approaches are filter-based or pose-graph-based. With filter-based approaches, parameters must be set very carefully, and motion data can usually only be fused in the time-forward direction. Pose-graph-based approaches can fuse data in both forward and backward time directions, but pre-integration is needed to apply measurements from inertial measurement units. Additionally, both approaches only provide discrete fusion results. In this work, we address this problem and present a uniform B-spline-based continuous fusion approach, which can fuse motion measurements from an inertial measurement unit with pose data from other localization systems robustly, accurately and efficiently. In our continuous fusion approach, the axis-angle representation is used for rotations, and a uniform B-spline serves as the back-end optimization basis. Evaluation on real-world data shows that our approach provides accurate, robust and continuous fusion results, which again supports our continuous fusion concept.
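A uniform cubic B-spline gives a continuous-time trajectory that can be queried at any timestamp, which is what makes this fusion continuous rather than discrete. A minimal sketch for the translation part (the rotation part, handled with axis-angle in the paper, is analogous; the matrix form of the basis is standard):

```python
import numpy as np

# Basis matrix of a uniform cubic B-spline in matrix form.
M = (1.0 / 6.0) * np.array([[ 1,  4,  1, 0],
                            [-3,  0,  3, 0],
                            [ 3, -6,  3, 0],
                            [-1,  3, -3, 1]])

def spline_position(ctrl_pts, t):
    """Evaluate a uniform cubic B-spline trajectory at normalized time t.
    ctrl_pts: (N, 3) control translations; valid t in [0, N - 3]."""
    i = min(int(t), len(ctrl_pts) - 4)   # segment index
    u = t - i                            # local parameter in [0, 1)
    uu = np.array([1.0, u, u**2, u**3])
    return uu @ M @ ctrl_pts[i:i + 4]    # smooth, C2-continuous position
```

Because the spline is C2-continuous, velocities and accelerations needed to relate the trajectory to IMU measurements follow from differentiating the same basis.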

11.
Brain Struct Funct ; 226(9): 2911-2930, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34043075

ABSTRACT

In humans, several neuroimaging studies have demonstrated that passive viewing of optic flow stimuli activates higher-level motion areas, like V6 and the cingulate sulcus visual area (CSv). In macaque, there are few studies on the sensitivity of V6 and CSv to egomotion-compatible optic flow. The only fMRI study on this issue revealed selectivity to egomotion-compatible optic flow in macaque CSv but not in V6 (Cottereau et al. Cereb Cortex 27(1):330-343, 2017, but see Fan et al. J Neurosci. 35:16303-16314, 2015). Yet, it is unknown whether monkey visual motion areas MT+ and V6 display any distinctive fMRI functional profile relative to optic flow stimulation, as is the case for the homologous human areas (Pitzalis et al., Cereb Cortex 20(2):411-424, 2010). Here, we described the sensitivity of the monkey brain to two motion stimuli (radial rings and flow fields) originally used in humans to functionally map the motion middle temporal area MT+ (Tootell et al. J Neurosci 15:3215-3230, 1995a; Nature 375:139-141, 1995b) and the motion medial parietal area V6 (Pitzalis et al. 2010), respectively. In both animals, we found regions responding only to optic flow or radial rings stimulation, and regions responding to both stimuli. A region in the parieto-occipital sulcus (likely including V6) was one of the areas most selective for coherently moving fields of dots, further demonstrating the power of this type of stimulation to activate V6 in both humans and monkeys. We did not find any evidence that putative macaque CSv responds to Flow Fields.


Subjects
Motion Perception , Optic Flow , Visual Cortex , Animals , Macaca , Magnetic Resonance Imaging , Photic Stimulation
12.
Sensors (Basel) ; 21(4), 2021 Feb 12.
Article in English | MEDLINE | ID: mdl-33673119

ABSTRACT

In this paper, we study deep learning approaches for monocular visual odometry (VO). Deep learning solutions have been shown to be effective in VO applications, replacing the need for highly engineered steps, such as feature extraction and outlier rejection, in a traditional pipeline. We propose a new architecture combining ego-motion estimation and sequence-based learning using deep neural networks. We estimate camera motion from optical flow using Convolutional Neural Networks (CNNs) and model the motion dynamics using Recurrent Neural Networks (RNNs). The network outputs the relative 6-DOF camera poses for a sequence and implicitly learns the absolute scale without the need for camera intrinsics. The entire trajectory is then integrated without any post-calibration. We evaluate the proposed method on the KITTI dataset and compare it with traditional and other deep learning approaches in the literature.
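Integrating the trajectory from the network's relative poses is plain pose composition. A small sketch under assumed conventions (axis-angle plus translation per step, composed as 4x4 homogeneous matrices):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def to_homogeneous(rvec, tvec):
    """Build a 4x4 transform from axis-angle rotation and translation."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(rvec).as_matrix()
    T[:3, 3] = tvec
    return T

def integrate_trajectory(relative_poses):
    """Chain predicted relative poses [(rvec, tvec), ...] into absolute
    camera poses, starting from the identity."""
    T = np.eye(4)
    trajectory = [T.copy()]
    for rvec, tvec in relative_poses:
        T = T @ to_homogeneous(rvec, tvec)   # compose frame-to-frame motion
        trajectory.append(T.copy())
    return trajectory
```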

13.
Sensors (Basel) ; 21(3), 2021 Jan 30.
Article in English | MEDLINE | ID: mdl-33573136

ABSTRACT

Estimating image depth and agent egomotion is important for autonomous vehicles and robots to understand the surrounding environment and avoid collisions. Most existing unsupervised methods estimate depth and camera egomotion by minimizing the photometric error between adjacent frames. However, photometric consistency sometimes does not hold in real situations, such as under brightness changes, moving objects and occlusion. To reduce the influence of brightness changes, we propose a feature pyramid matching loss (FPML), which captures the trainable feature error between the current and adjacent frames and is therefore more robust than the photometric error. In addition, we propose an occlusion-aware mask (OAM) network, which indicates occlusion according to changes in the masks, to improve the estimation accuracy of depth and camera pose. The experimental results verify that the proposed unsupervised approach is highly competitive with the state-of-the-art methods, both qualitatively and quantitatively. Specifically, our method reduces the absolute relative error (Abs Rel) by 0.017-0.088.
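For illustration, a feature-level matching loss of the kind described can be written as an L1 distance between feature maps accumulated across pyramid levels; the level weights and the feature extractor itself are assumptions here, not the paper's definition:

```python
import numpy as np

def feature_pyramid_matching_loss(feats_tgt, feats_warped, weights=None):
    """Illustrative feature-level matching loss: L1 distance between
    target-frame features and features warped from the adjacent frame,
    summed over pyramid levels (coarse to fine).
    feats_*: list of (H_l, W_l, C_l) arrays, one per level."""
    if weights is None:
        weights = [1.0] * len(feats_tgt)
    loss = 0.0
    for w, f_t, f_w in zip(weights, feats_tgt, feats_warped):
        # Learned features vary less with brightness than raw pixels do.
        loss += w * np.mean(np.abs(f_t - f_w))
    return loss
```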

14.
Proc Natl Acad Sci U S A ; 117(52): 33161-33169, 2020 12 29.
Article in English | MEDLINE | ID: mdl-33328275

ABSTRACT

There is considerable support for the hypothesis that perception of heading in the presence of rotation is mediated by instantaneous optic flow. This hypothesis, however, has never been tested. We introduce a method, termed "nonvarying phase motion," for generating a stimulus that conveys a single instantaneous optic flow field, even though the stimulus is presented for an extended period of time. In this experiment, observers viewed stimulus videos and performed a forced-choice heading discrimination task. For nonvarying phase motion, observers made large errors in heading judgments. This suggests that instantaneous optic flow is insufficient for heading perception in the presence of rotation. These errors were mostly eliminated when the velocity of phase motion was varied over time to convey the evolving sequence of optic flow fields corresponding to a particular heading. This demonstrates that heading perception in the presence of rotation relies on the time-varying evolution of optic flow. We hypothesize that the visual system accurately computes heading, despite rotation, based on optic acceleration, the temporal derivative of optic flow.
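For background, the instantaneous optic flow at a normalized image point (x, y), with scene depth Z(x, y), camera translation T and rotation omega, is given by the standard motion-field equations (textbook material, e.g. Longuet-Higgins and Prazdny; not the authors' notation):

```latex
\begin{aligned}
u(x,y) &= \frac{-T_x + x\,T_z}{Z(x,y)} + x y\,\omega_x - (1 + x^2)\,\omega_y + y\,\omega_z \\
v(x,y) &= \frac{-T_y + y\,T_z}{Z(x,y)} + (1 + y^2)\,\omega_x - x y\,\omega_y - x\,\omega_z
\end{aligned}
```

Heading corresponds to the focus of expansion (T_x/T_z, T_y/T_z) of the translational term, while the rotational term is independent of depth; a single instantaneous flow field therefore confounds the two, which is the ambiguity these experiments probe.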


Subjects
Motion Perception , Optic Flow , Acceleration , Adult , Discrimination, Psychological , Female , Humans , Male , Rotation , Time
15.
Sensors (Basel) ; 20(13), 2020 Jul 03.
Article in English | MEDLINE | ID: mdl-32635370

ABSTRACT

We propose a completely unsupervised approach to simultaneously estimate scene depth, ego-pose, ground segmentation and the ground normal vector from only monocular RGB video sequences. In our approach, the estimates for different scene structures mutually benefit each other through joint optimization. Specifically, we use a mutual information loss to pre-train the ground segmentation network before adding the corresponding self-supervised labels obtained by a geometric method. By exploiting the static nature of the ground and its normal vector, the scene depth and ego-motion can be efficiently learned in a self-supervised procedure. Extensive experimental results on both the Cityscapes and KITTI benchmarks demonstrate significant improvements in estimation accuracy for both scene depth and ego-pose. We also achieve an average error of about 3° for the estimated ground normal vectors. By deploying our proposed geometric constraints, the IoU accuracy of unsupervised ground segmentation is increased by 35% on the Cityscapes dataset.
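A common way to obtain a ground normal from labeled ground pixels is a least-squares plane fit to their back-projected 3D points; a minimal SVD-based sketch (the sign convention and the error metric are assumptions, not the paper's geometric method):

```python
import numpy as np

def ground_normal(points):
    """Least-squares plane normal from ground 3D points (N, 3): the normal
    is the right singular vector with the smallest singular value of the
    centered point cloud."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    # Resolve SVD's sign ambiguity; assumes the camera y-axis points down,
    # so an upward-pointing normal has a negative y component.
    return n if n[1] < 0 else -n

def normal_error_deg(n_est, n_gt):
    """Angular error in degrees between estimated and reference normals."""
    cosang = abs(np.dot(n_est, n_gt)) / (np.linalg.norm(n_est) * np.linalg.norm(n_gt))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```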

16.
Sensors (Basel) ; 19(23), 2019 Nov 29.
Article in English | MEDLINE | ID: mdl-31795509

ABSTRACT

Compound eyes, also known as insect eyes, have a unique structure: a hemispheric surface on which many single eyes are deployed in a regular pattern. Thanks to this unique form, compound images offer several advantages, such as a large field of view (FOV) with low aberrations. We can exploit these benefits in high-level vision applications, such as object recognition or semantic segmentation for a moving robot, by emulating the compound images that describe the scenes captured by compound eye cameras. In this paper, to the best of our knowledge, we propose the first convolutional neural network (CNN)-based ego-motion classification algorithm designed for the compound eye structure. To achieve this, we introduce a voting-based approach that fully exploits one of the unique features of compound images, namely that they consist of many single-eye images. The proposed method classifies a number of local motions by CNN, and these local classifications, which represent the motion of each single-eye image, are aggregated into the final classification by a voting procedure. For the experiments, we collected a new dataset for compound eye camera ego-motion classification containing scenes from the inside and outside of a building. Each sample of the proposed dataset consists of two consecutive emulated compound images and the corresponding ego-motion class. The experimental results show that the proposed method achieves a classification accuracy of 85.0%, which is superior to the baselines on the proposed dataset. The proposed model is also lightweight compared to conventional CNN-based image recognition algorithms such as AlexNet, ResNet50, and MobileNetV2.
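The voting step is straightforward to sketch: each single-eye image casts one vote via the local CNN, and the majority wins. A minimal numpy version (the array shapes are assumptions):

```python
import numpy as np

def vote_ego_motion(per_eye_logits):
    """Aggregate per-single-eye CNN classifications by majority vote.
    per_eye_logits: (num_eyes, num_classes) scores from the local CNN."""
    local_classes = np.argmax(per_eye_logits, axis=1)   # one vote per eye
    counts = np.bincount(local_classes, minlength=per_eye_logits.shape[1])
    return int(np.argmax(counts))                       # winning ego-motion class
```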


Subjects
Image Processing, Computer-Assisted , Motion (Physics) , Neural Networks, Computer , Video Recording/instrumentation , Algorithms , Animals , Compound Eye, Arthropod , Humans , Surface Properties
17.
Sensors (Basel) ; 19(11), 2019 May 29.
Article in English | MEDLINE | ID: mdl-31146404

ABSTRACT

Herein, we propose an unsupervised learning architecture under coupled consistency conditions to estimate depth, ego-motion, and optical flow. Previous learning techniques in computer vision adopted large ground-truth datasets for network training; a ground-truth dataset of depth and optical flow collected from the real world requires tremendous pre-processing effort due to exposure to noise artifacts. In this paper, we propose a framework that trains networks on a different type of data, with combined losses derived from a coupled consistency structure. The core concept is composed of two parts. First, we compare the optical flows estimated from the depth-plus-ego-motion pipeline and from the flow estimation network. Subsequently, to prevent artifacts from occluded regions in the estimated optical flow, we compute flow local consistency along the forward-backward directions. Second, synthesis consistency enables the exploration of the geometric correlation between the spatial and temporal domains in a stereo video. We perform extensive experiments on depth, ego-motion, and optical flow estimation on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset. We verify that the flow local consistency loss improves the optical flow accuracy in occluded regions. Furthermore, we show that the view-synthesis-based photometric loss enhances the depth and ego-motion accuracy via scene projection. The experimental results exhibit competitive performance for the estimated depth and optical flow; moreover, the induced ego-motion is comparable to that obtained from other unsupervised methods.
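The first consistency term compares the flow network's output with the "rigid flow" implied by depth plus ego-motion alone. A hedged numpy sketch of that rigid flow (pinhole intrinsics K and a 4x4 relative pose T are assumed inputs):

```python
import numpy as np

def rigid_flow(depth, T, K):
    """Optical flow implied by depth and camera ego-motion alone:
    back-project each pixel with its depth, move it by the relative pose T,
    reproject, and subtract the original pixel position."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T                 # normalized viewing rays
    pts = rays * depth.reshape(-1, 1)               # 3D points in frame t
    pts2 = pts @ T[:3, :3].T + T[:3, 3]             # move into frame t+1
    proj = pts2 @ K.T
    proj = proj[:, :2] / proj[:, 2:3]
    return (proj - pix[:, :2]).reshape(h, w, 2)     # compare to network flow
```

Pixels where this rigid flow and the flow network disagree are candidates for occlusion or independent motion, which is what the coupled consistency losses exploit.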

18.
Micromachines (Basel) ; 9(3), 2018 Mar 06.
Article in English | MEDLINE | ID: mdl-30424047

ABSTRACT

This paper proposes an adaptive absolute ego-motion estimation method using wearable visual-inertial sensors for indoor positioning. We introduce a wearable visual-inertial device to estimate not only the camera ego-motion, but also the 3D motion of moving objects in dynamic environments. First, a novel dynamic scene segmentation method is proposed using two visual geometry constraints with the help of inertial sensors. Moreover, this paper introduces the concept of a "virtual camera", treating the motion area related to each moving object as if a static object were viewed by a "virtual camera". We therefore derive the 3D moving object's motion from the motions of the real and virtual cameras, because the virtual camera's motion is actually the combined motion of the real camera and the moving object. In addition, a multi-rate linear Kalman filter (MR-LKF), from our previous work, is used to solve both the scale ambiguity in monocular camera tracking and the different sampling frequencies of the visual and inertial sensors. The performance of the proposed method is evaluated in simulation studies and practical experiments performed in both static and dynamic environments. The results show the method's robustness and effectiveness compared with ground truth from a Pioneer robot.
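The "virtual camera" logic reduces to transform composition: if the virtual camera's motion combines the real camera's motion and the object's motion, the object's motion follows by composing with the inverse of the real motion. A one-line sketch under an assumed composition convention (not necessarily the paper's):

```python
import numpy as np

def object_motion(T_virtual, T_real):
    """With all motions as 4x4 homogeneous transforms and the (assumed)
    convention T_virtual = T_object @ T_real, the moving object's motion is
    recovered by right-multiplying with the inverse of the real camera motion."""
    return T_virtual @ np.linalg.inv(T_real)
```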

19.
Sensors (Basel) ; 18(9), 2018 Aug 28.
Article in English | MEDLINE | ID: mdl-30154311

ABSTRACT

Vision-based motion estimation is an effective means of mobile robot localization and is often used in conjunction with other sensors for navigation and path planning. This paper presents a low-overhead, real-time ego-motion estimation (visual odometry) system based on either a stereo or an RGB-D sensor. The algorithm outperforms typical frame-to-frame approaches in accuracy by maintaining a limited local map, while requiring significantly less memory and computational power than the global maps common in full visual SLAM methods. The algorithm is evaluated on common publicly available datasets that span different use-cases, and performance is compared to other comparable open-source systems in terms of accuracy, frame rate and memory requirements. This paper accompanies the release of the source code as a modular software package for the robotics community, compatible with the Robot Operating System (ROS).

20.
Cereb Cortex ; 27(1): 330-343, 2017 01 01.
Article in English | MEDLINE | ID: mdl-28108489

ABSTRACT

The cortical network that processes visual cues to self-motion was characterized with functional magnetic resonance imaging in 3 awake, behaving macaques. The experimental protocol was similar to previous human studies in which the responses to a single large optic flow patch were contrasted with responses to an array of 9 similar flow patches. This distinguishes cortical regions where neurons respond to flow in their receptive fields regardless of surrounding motion from those that are sensitive to whether the overall image arises from self-motion. In all 3 animals, significant selectivity for egomotion-consistent flow was found in several areas previously associated with optic flow processing, notably the dorsal middle superior temporal area, the ventral intraparietal area, and VPS. It was also seen in areas 7a (Opt), STPm, FEFsem and FEFsac, and in a region of the cingulate sulcus that may be homologous with human area CSv. Selectivity for egomotion-compatible flow was never total but was particularly strong in VPS and putative macaque CSv. Direct comparison with the equivalent human studies reveals several commonalities but also some differences.


Subjects
Cerebral Cortex/physiology , Motion Perception/physiology , Optic Flow/physiology , Animals , Brain Mapping , Cues (Psychology) , Female , Macaca mulatta , Magnetic Resonance Imaging , Photic Stimulation