Results 1 - 3 of 3
1.
Exp Brain Res ; 194(1): 91-102, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19093105

ABSTRACT

Recognizing a natural object requires one to pool information from various sensory modalities and to ignore information from competing objects. That the same semantic knowledge can be accessed through different modalities makes it possible to explore the retrieval of supramodal object concepts. Here, object-recognition processes were investigated by manipulating the relationships between sensory modalities, specifically the semantic content and the spatial alignment of auditory and visual information. Experiments were run in a realistic virtual environment. Participants were asked to react as fast as possible to a target object presented in the visual and/or the auditory modality and to inhibit a distractor object (go/no-go task). Spatial alignment had no effect on object-recognition time; the only spatial effect observed was a stimulus-response compatibility between the auditory stimulus and the hand position. Reaction times were significantly shorter for semantically congruent bimodal stimuli than would be predicted by independent processing of the auditory and visual target information. Interestingly, this bimodal facilitation effect was twice as large as that found in previous studies that also used information-rich stimuli. An interference effect (i.e., longer reaction times to semantically incongruent stimuli than to the corresponding unimodal stimulus) was observed only when the distractor was auditory; when the distractor was visual, semantic incongruence did not interfere with object recognition. Our results show that immersive displays with large visual stimuli can produce large multimodal integration effects, and they reveal a possible asymmetry in the attentional filtering of irrelevant auditory and visual information.


Subjects
Auditory Perception, Recognition (Psychology), Visual Perception, Acoustic Stimulation, Adult, Analysis of Variance, Female, Hand, Humans, Male, Photic Stimulation, Reaction Time, Space Perception
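The comparison against an independent-processing prediction mentioned in the abstract is commonly made with a race-model bound on the reaction-time distributions. Below is a minimal sketch of such a test; the function name and the simulated reaction-time samples are illustrative assumptions, not the study's data or analysis code.

```python
import numpy as np

def race_model_test(rt_auditory, rt_visual, rt_bimodal,
                    quantiles=np.linspace(0.05, 0.95, 19)):
    """Compare bimodal reaction times with the race-model bound.

    Independent processing ("race") predicts
        P(RT_bimodal <= t) <= P(RT_auditory <= t) + P(RT_visual <= t).
    Positive violations suggest integration beyond statistical facilitation.
    """
    rt_auditory = np.asarray(rt_auditory, dtype=float)
    rt_visual = np.asarray(rt_visual, dtype=float)
    rt_bimodal = np.asarray(rt_bimodal, dtype=float)

    # Evaluate every empirical CDF at the bimodal RT quantiles.
    t = np.quantile(rt_bimodal, quantiles)
    cdf = lambda sample: (sample[:, None] <= t[None, :]).mean(axis=0)

    bound = np.minimum(cdf(rt_auditory) + cdf(rt_visual), 1.0)
    violation = cdf(rt_bimodal) - bound   # > 0 where the bound is violated
    return t, violation

# Hypothetical reaction-time samples in milliseconds (not the study's data).
rng = np.random.default_rng(0)
t, v = race_model_test(rng.normal(520, 60, 200),
                       rng.normal(540, 60, 200),
                       rng.normal(450, 50, 200))
print(v.max())  # a clearly positive maximum violates the race model
```

A positive violation at any quantile indicates that bimodal responses are faster than independent auditory and visual processing can explain, which is the kind of facilitation the abstract reports for semantically congruent stimuli.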
2.
BMC Proc ; 8(Suppl 2 Proceedings of the 3rd Annual Symposium on Biologica): S5, 2014.
Article in English | MEDLINE | ID: mdl-25237392

ABSTRACT

BACKGROUND: A complete understanding of the relationship between an amino acid sequence and the resulting protein function remains an open problem in the biophysical sciences. Current approaches often rely on diagnosing functionally relevant mutations by determining whether an amino acid frequently occurs at a specific position within the protein family. However, these methods do not account for the biophysical properties and 3D structure of the protein. We have developed an interactive visualization technique, Mu-8, that provides researchers with a holistic view of the differences between a selected protein and a family of homologous proteins. Mu-8 helps to identify areas of the protein that exhibit: (1) significantly different biochemical characteristics, (2) relative conservation in the family, and (3) proximity to other regions that show suspect behavior in the folded protein.

METHODS: Our approach quantifies and communicates the difference between a reference protein and its family based on amino acid indices, or on principal components of amino acid index classes, while accounting for conservation, proximity among residues, and overall 3D structure.

RESULTS: We demonstrate Mu-8 in a case study with data provided by the 2013 BioVis contest. When the sequence of a dysfunctional protein is compared with its functional family, Mu-8 reveals several candidate regions that may cause function to break down.
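To illustrate the kind of per-residue scoring the METHODS section describes (an amino acid index compared against a family, weighted by conservation), here is a minimal sketch. The index table is the published Kyte-Doolittle hydropathy scale, but the weighting scheme and the toy sequences are assumptions for illustration, not Mu-8's actual implementation.

```python
import numpy as np

# Kyte-Doolittle hydropathy values, one of many published amino acid indices.
HYDROPATHY = {
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
    'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
    'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
    'Y': -1.3, 'V': 4.2,
}

def position_scores(reference, family):
    """Per-position deviation of a reference sequence from an aligned family.

    The score at each position is the reference residue's distance from the
    family mean of the index, weighted by conservation (low variance in the
    family column makes a deviation count more).
    """
    scores = []
    for i, ref_aa in enumerate(reference):
        column = [HYDROPATHY[s[i]] for s in family if s[i] in HYDROPATHY]
        mean, var = float(np.mean(column)), float(np.var(column))
        conservation = 1.0 / (1.0 + var)      # 1.0 = fully conserved column
        deviation = abs(HYDROPATHY.get(ref_aa, mean) - mean)
        scores.append(conservation * deviation)
    return np.array(scores)

# Toy aligned family and a variant with Y -> I at position 5 (1-based).
family = ["MKTAYIAK", "MKTGYIAK", "MKTAYLAK"]
print(position_scores("MKTAIIAK", family))  # the mutated position dominates
```

A real Mu-8-style analysis would combine several such index classes (or their principal components) and fold in structural proximity, but the per-position "reference versus family" comparison follows this general shape.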

3.
IEEE Trans Vis Comput Graph ; 17(10): 1459-74, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21041884

ABSTRACT

We present an image-based approach to relighting photographs of tree canopies. Our goal is to minimize capture overhead; thus the only input required is a set of photographs of the tree taken at a single time of day, while allowing relighting at any other time. We first analyze lighting in a tree canopy both theoretically and using simulations. From this analysis, we observe that tree canopy lighting is similar to volumetric illumination. We assume a single-scattering volumetric lighting model for tree canopies, and diffuse leaf reflectance; we validate our assumptions with synthetic renderings. We create a volumetric representation of the tree from 10-12 images taken at a single time of day and use a single-scattering participating media lighting model. An analytical sun and sky illumination model provides consistent representation of lighting for the captured input and unknown target times. We relight the input image by applying a ratio of the target and input time lighting representations. We compute this representation efficiently by simultaneously coding transmittance from the sky and to the eye in spherical harmonics. We validate our method by relighting images of synthetic trees and comparing to path-traced solutions. We also present results for photographs, validating with time-lapse ground truth sequences.
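The relighting step described above (scaling the input photograph by a ratio of lighting representations) can be sketched as follows. The per-pixel lighting arrays here are placeholders for what the paper derives from its single-scattering volume and spherical-harmonics transmittance; the function and variable names are illustrative assumptions.

```python
import numpy as np

def relight_by_ratio(photo, lighting_input, lighting_target, eps=1e-4):
    """Relight a photograph by a ratio of lighting representations.

    `photo` is the captured image (H x W x 3, linear RGB). The two lighting
    arrays are per-pixel estimates of illumination at the capture time and
    at the target time; in the paper these come from the volumetric model,
    here they are simple placeholders.
    """
    ratio = lighting_target / np.maximum(lighting_input, eps)
    return np.clip(photo * ratio, 0.0, None)

# Toy usage with constant placeholder lighting fields.
h, w = 4, 4
photo = np.full((h, w, 3), 0.5)
capture_light = np.full((h, w, 3), 1.0)   # lighting at the captured time
target_light = np.full((h, w, 3), 0.3)    # lighting at the target time
print(relight_by_ratio(photo, capture_light, target_light)[0, 0])  # dimmed
```

The appeal of the ratio formulation is that quantities common to both times (such as leaf albedo) cancel, so only the lighting model, not the full appearance, has to be estimated at the target time.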
