Results 1 - 20 of 102
1.
J Vis ; 24(5): 15, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38814934

ABSTRACT

Temporal asynchrony is a cue for the perceptual segregation of spatial regions. Past research found attribute invariance of this phenomenon such that asynchrony induces perceptual segmentation regardless of the changing attribute type, and it does so even when asynchrony occurs between different attributes. To test the generality of this finding and obtain insights into the underlying computational mechanism, we compared the segmentation performance for changes in luminance, color, motion direction, and their combinations. Our task was to detect the target quadrant in which a periodic alternation in attribute was phase-delayed compared to the remaining quadrants. When stimulus elements made a square-wave attribute change, target detection was not clearly attribute invariant, being more difficult for motion direction change than for luminance or color changes and nearly impossible for the combination of motion direction and luminance or color. We suspect that waveform mismatch might cause anomalous behavior of motion direction since a square-wave change in motion direction is a triangular-wave change in the spatial phase (i.e., a second-order change in the direction of the spatial phase change). In agreement with this idea, we found that the segregation performance was strongly affected by the waveform type (square wave, triangular wave, or their combination), and when this factor was controlled, the performance was nearly, though not perfectly, invariant against attribute type. The results are discussed in terms of a model in which different visual attributes share a common asynchrony-based segmentation mechanism.
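
The waveform argument above can be made concrete with a small numerical sketch (illustrative assumptions only, not the authors' stimulus code): integrating a square-wave direction signal yields a triangular-wave spatial phase.

```python
import numpy as np

fs = 1000.0                        # samples per second (assumed)
t = np.arange(0.0, 2.0, 1.0 / fs)  # 2 s of simulated time
alternation_hz = 2.0               # attribute alternation rate (assumed)

# Square-wave attribute change: motion direction flips between +1 and -1.
direction = np.sign(np.sin(2 * np.pi * alternation_hz * t))

# Spatial phase is the accumulated displacement, i.e. the time integral of the
# direction signal, so a square wave in direction is a triangular wave in phase.
speed = 1.0                        # arbitrary drift speed (cycles per second)
phase = np.cumsum(direction) * speed / fs

print(phase.min(), phase.max())    # phase ramps linearly up and down (triangular)
```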


Subjects
Motion Perception, Photic Stimulation, Space Perception, Humans, Motion Perception/physiology, Photic Stimulation/methods, Space Perception/physiology, Color Perception/physiology, Cues, Adult
2.
J Vis ; 23(12): 5, 2023 10 04.
Article in English | MEDLINE | ID: mdl-37856108

ABSTRACT

To encode binocular disparity, the visual system uses a pair of left eye and right eye bandpass filters with either a position or a phase offset between them. Such pairs are considered to exist at multiple scales to encode a wide range of disparities. However, local disparity measurements by bandpass mechanisms can be ambiguous, particularly when the actual disparity is larger than a half-cycle of the preferred spatial frequency of the filter, which often occurs at fine scales. In this study, we investigated whether the visual system uses a coarse-to-fine interaction to resolve this ambiguity at finer scales for depth estimation from disparity. The stimuli were stereo grating patches composed of a target and comparison patterns. The target patterns contained spatial frequencies of 1 and 4 cycles per degree (cpd). The phase disparity of the low-frequency component was 0° (at the horopter), -90° (uncrossed), or 90° (crossed), and that of the high-frequency component was changed independently of the low-frequency disparity, in the range between -90° (uncrossed) and 90° (crossed). The observers' task was to indicate whether the target appeared closer to the comparison pattern, which always shared the disparity with the low-frequency component of the target. Regardless of whether the comparison pattern was a 1-cpd + 4-cpd compound or a 1-cpd simple grating, the perceived depth order of the target and the comparison varied in accordance with the phase disparity of the high-frequency component of the target. This effect occurred not only when the low-frequency component was at the horopter, but also when it contained a large disparity corresponding to one cycle of the high-frequency component (±90°). Our findings suggest a coarse-to-fine interaction in multiscale disparity processing, in which the depth interpretation of the high-frequency component changes based on the disparity of the low-frequency component.
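
As a worked illustration of the phase-to-position relation and the half-cycle ambiguity discussed above (a sketch using the stimulus values quoted in the abstract, not the authors' code):

```python
def positional_disparity_deg(phase_deg: float, sf_cpd: float) -> float:
    """Positional disparity (deg of visual angle) for a given phase disparity."""
    return (phase_deg / 360.0) / sf_cpd

for sf in (1.0, 4.0):
    print(f"{sf:g} cpd: 90 deg phase = {positional_disparity_deg(90.0, sf):.4f} deg, "
          f"unambiguous only up to +/- {positional_disparity_deg(180.0, sf):.4f} deg")

# The +/-90 deg disparity of the 1-cpd component (0.25 deg) equals one full cycle
# of the 4-cpd component and exceeds its half-cycle range (0.125 deg), which is
# where a coarse-to-fine interaction becomes necessary.
```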


Subjects
Depth Perception, Vision Disparity, Humans, Binocular Vision
3.
J Vis ; 22(10): 18, 2022 09 02.
Article in English | MEDLINE | ID: mdl-36149676

ABSTRACT

Theories of visual confidence have largely been grounded in the Gaussian signal detection framework. This framework is so dominant that idiosyncratic consequences of this distributional assumption have remained unappreciated. This article reports systematic comparisons of the Gaussian signal detection framework to its logistic counterpart in the measurement of metacognitive accuracy. Because of the difference in their distribution kurtosis, these frameworks are found to provide different perspectives regarding the efficiency of confidence rating relative to objective decision (the logistic model intrinsically gives a greater meta-d'/d' ratio than the Gaussian model). These frameworks can also provide opposing conclusions regarding metacognitive inefficiency along the internal evidence continuum (whether meta-d' is larger or smaller for higher levels of confidence). Previous theories developed on these lines of analysis may need to be revisited, as the Gaussian and logistic metacognitive models received roughly equivalent support in our quantitative model comparisons. Despite these discrepancies, however, we found that across-condition or across-participant comparisons of metacognitive measures are relatively robust against the distributional assumptions, which provides much assurance to conventional research practice. We hope this article promotes awareness of the significance of hidden modeling assumptions, contributing to the cumulative development of the relevant field.
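
A minimal sketch of the contrast between the two frameworks, assuming simple equal-variance models and made-up hit/false-alarm rates (not the article's analysis code):

```python
from scipy.stats import norm, logistic

hit_rate, fa_rate = 0.85, 0.20   # assumed example rates

d_gauss = norm.ppf(hit_rate) - norm.ppf(fa_rate)          # z(H) - z(F)
d_logis = logistic.ppf(hit_rate) - logistic.ppf(fa_rate)  # logit(H) - logit(F)

print(f"Gaussian d' = {d_gauss:.2f}")
print(f"Logistic separation (logit units) = {d_logis:.2f}")

# The logistic distribution has heavier tails (excess kurtosis 1.2 vs 0 for the
# Gaussian), which is what drives the different meta-d'/d' predictions above.
```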


Subjects
Metacognition, Humans, Logistic Models
4.
J Vis ; 22(2): 17, 2022 02 01.
Article in English | MEDLINE | ID: mdl-35195670

ABSTRACT

Complex visual processing involved in perceiving the object materials can be better elucidated by taking a variety of research approaches. Sharing stimulus and response data is an effective strategy to make the results of different studies directly comparable and can assist researchers with different backgrounds to jump into the field. Here, we constructed a database containing several sets of material images annotated with visual discrimination performance. We created the material images using physically based computer graphics techniques and conducted psychophysical experiments with them in both laboratory and crowdsourcing settings. The observer's task was to discriminate materials on one of six dimensions (gloss contrast, gloss distinctness of image, translucent vs. opaque, metal vs. plastic, metal vs. glass, and glossy vs. painted). The illumination consistency and object geometry were also varied. We used a nonverbal procedure (an oddity task) applicable for diverse use cases, such as cross-cultural, cross-species, clinical, or developmental studies. Results showed that the material discrimination depended on the illuminations and geometries and that the ability to discriminate the spatial consistency of specular highlights in glossiness perception showed larger individual differences than in other tasks. In addition, analysis of visual features showed that the parameters of higher order color texture statistics can partially, but not completely, explain task performance. The results obtained through crowdsourcing were highly correlated with those obtained in the laboratory, suggesting that our database can be used even when the experimental conditions are not strictly controlled in the laboratory. Several projects using our dataset are underway.


Subjects
Form Perception, Contrast Sensitivity, Form Perception/physiology, Humans, Photic Stimulation, Surface Properties, Visual Perception/physiology
5.
PLoS Comput Biol ; 16(8): e1008018, 2020 08.
Article in English | MEDLINE | ID: mdl-32813688

ABSTRACT

Visually inferring material properties is crucial for many tasks, yet poses significant computational challenges for biological vision. Liquids and gels are particularly challenging due to their extreme variability and complex behaviour. We reasoned that measuring and modelling viscosity perception is a useful case study for identifying general principles of complex visual inferences. In recent years, artificial Deep Neural Networks (DNNs) have yielded breakthroughs in challenging real-world vision tasks. However, to model human vision, the emphasis lies not on best possible performance, but on mimicking the specific pattern of successes and errors humans make. We trained a DNN to estimate the viscosity of liquids using 100,000 simulations depicting liquids with sixteen different viscosities interacting in ten different scenes (stirring, pouring, splashing, etc.). We find that a shallow feedforward network trained for only 30 epochs predicts mean observer performance better than most individual observers. This is the first successful image-computable model of human viscosity perception. Further training improved accuracy, but predicted human perception less well. We analysed the network's features using representational similarity analysis (RSA) and a range of image descriptors (e.g. optic flow, colour saturation, GIST). This revealed clusters of units sensitive to specific classes of feature. We also find a distinct population of units that are poorly explained by hand-engineered features, but which are particularly important both for physical viscosity estimation and for the specific pattern of human responses. The final layers represent many distinct stimulus characteristics, not just the viscosity on which the network was trained. Retraining the fully connected layer with a reduced number of units achieves practically identical performance, but results in representations focused on viscosity, suggesting that network capacity is a crucial parameter determining whether artificial or biological neural networks use distributed vs. localized representations.
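
A minimal sketch of the representational similarity analysis (RSA) step mentioned above, with random placeholder activations and features standing in for the network and the hand-engineered descriptors:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
stimuli, units = 160, 512                            # e.g. 16 viscosities x 10 scenes
layer_act = rng.standard_normal((stimuli, units))    # placeholder network activations
flow_feat = rng.standard_normal((stimuli, 32))       # placeholder optic-flow features

rdm_layer = pdist(layer_act, metric="correlation")   # 1 - Pearson r per stimulus pair
rdm_flow = pdist(flow_feat, metric="correlation")

rho, _ = spearmanr(rdm_layer, rdm_flow)              # second-order (RDM) similarity
print(f"RDM correlation (Spearman) = {rho:.3f}")
```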


Subjects
Neurological Models, Computer Neural Networks, Viscosity, Visual Perception/physiology, Adult, Computational Biology, Female, Humans, Male, Young Adult
6.
PLoS Comput Biol ; 14(4): e1006061, 2018 04.
Article in English | MEDLINE | ID: mdl-29702644

ABSTRACT

Visual estimation of the material and shape of an object from a single image poses a hard, ill-posed computational problem. However, in our daily life we feel we can estimate both reasonably well. The neural computation underlying this ability remains poorly understood. Here we propose that the human visual system uses different aspects of object images to separately estimate the contributions of the material and shape. Specifically, material perception relies mainly on the intensity gradient magnitude information, while shape perception relies mainly on the intensity gradient order information. A clue to this hypothesis was provided by the observation that luminance-histogram manipulation, which changes luminance gradient magnitudes but not the luminance-order map, effectively alters the material appearance but not the shape of an object. In agreement with this observation, we found that simulated physical material changes do not significantly affect the intensity order information. A series of psychophysical experiments further indicates that human surface shape perception is robust against intensity manipulations provided they do not disturb the intensity order information. In addition, we show that the two types of gradient information can be utilized for the discrimination of albedo changes from highlights. These findings suggest that the visual system relies on these diagnostic image features to estimate physical properties in the distal world.
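
The key observation, that a monotonic luminance-histogram manipulation alters gradient magnitudes while preserving the intensity-order map, can be checked with a toy one-dimensional example (an illustrative sketch, not the paper's stimuli):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random(1000)                 # toy luminance profile in [0, 1]

remapped = img ** 3.0                  # monotonic tone curve (assumed example)

grad_orig = np.abs(np.diff(img))       # intensity gradient magnitudes
grad_new = np.abs(np.diff(remapped))

order_preserved = np.array_equal(np.argsort(img), np.argsort(remapped))
print("gradient magnitudes changed:", not np.allclose(grad_orig, grad_new))
print("intensity order preserved:", order_preserved)
```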


Subjects
Form Perception/physiology, Visual Perception/physiology, Computational Biology, Computer Simulation, Humans, Computer-Assisted Image Processing, Neurological Models, Psychological Models, Photic Stimulation, Psychophysics, Surface Properties
7.
Proc Natl Acad Sci U S A ; 112(33): E4620-7, 2015 Aug 18.
Article in English | MEDLINE | ID: mdl-26240313

ABSTRACT

Human vision has a remarkable ability to perceive two layers at the same retinal locations, a transparent layer in front of a background surface. Critical image cues to perceptual transparency, studied extensively in the past, are changes in luminance or color that could be caused by light absorptions and reflections by the front layer, but such image changes may not be clearly visible when the front layer consists of a pure transparent material such as water. Our daily experiences with transparent materials of this kind suggest that an alternative potential cue of visual transparency is image deformations of a background pattern caused by light refraction. Although previous studies have indicated that these image deformations, at least static ones, play little role in perceptual transparency, here we show that dynamic image deformations of the background pattern, which could be produced by light refraction on a moving liquid's surface, can produce a vivid impression of a transparent liquid layer without the aid of any other visual cues as to the presence of a transparent layer. Furthermore, a transparent liquid layer perceptually emerges even from a randomly generated dynamic image deformation as long as it is similar to real liquid deformations in its spatiotemporal frequency profile. Our findings indicate that the brain can perceptually infer the presence of "invisible" transparent liquids by analyzing the spatiotemporal structure of dynamic image deformation, for which it uses a relatively simple computation that does not require high-level knowledge about the detailed physics of liquid deformation.
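
A rough sketch of how a random dynamic deformation field with a controlled spatiotemporal frequency profile could be generated, in the spirit of the randomly generated deformations described above; the band limits are arbitrary assumptions:

```python
import numpy as np

ny, nx, nt = 64, 64, 32
rng = np.random.default_rng(2)
noise = rng.standard_normal((nt, ny, nx))          # white spatiotemporal noise

# Band-pass filter in the 3-D (t, y, x) frequency domain to shape the profile.
ft, fy, fx = np.meshgrid(np.fft.fftfreq(nt), np.fft.fftfreq(ny),
                         np.fft.fftfreq(nx), indexing="ij")
radius = np.sqrt(ft**2 + fy**2 + fx**2)
band = (radius > 0.02) & (radius < 0.15)           # assumed pass band

deform_x = np.real(np.fft.ifftn(np.fft.fftn(noise) * band))
print(deform_x.shape)   # horizontal displacement map for each frame
```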


Subjects
Brain/physiology, Visual Perception/physiology, Computer Simulation, Cues, Depth Perception, Form Perception, Humans, Motion Perception, Psychophysics, Refractometry, Software, Video Recording, Ocular Vision, Water
8.
J Vis ; 18(8): 3, 2018 08 01.
Article in English | MEDLINE | ID: mdl-30098175

ABSTRACT

Dynamic image deformation produces the perception of a transparent material that appears to deform the background image by light refraction. Since past studies on this phenomenon have mainly used subjective judgments about the presence of a transparent layer, it remains unresolved whether this is a real perceptual transparency effect in the sense that it forms surface representations, as do conventional transparency effects. Visual computation for color and luminance transparency, induced mainly by surface-contour information, can be decomposed into two components: surface formation to determine foreground and background layers, and scission to assign color and luminance to each layer. Here we show that deformation-induced perceptual transparency aids surface formation by color transparency and consequently resolves color scission. We asked observers to report the color of the front layer in a spatial region with a neutral physical color. The layer color could be seen as either reddish or greenish depending on the spatial context producing the color transparency, which was, however, ambiguous about the order of layers. We found that adding to the display a deformation-induced transparency that could specify the front layer significantly biased color scission in the predicted way if and only if the deformation-induced transparency was spatially coincident with the interpretation of color transparency. The results indicate that deformation-induced transparency is indeed a novel type of perceptual transparency that plays a role in surface formation in cooperation with color transparency.


Subjects
Color Perception/physiology, Form Perception/physiology, Perceptual Distortion/physiology, Adult, Female, Humans, Male, Ocular Vision
9.
J Vis ; 17(13): 15, 2017 11 01.
Article in English | MEDLINE | ID: mdl-29192314

ABSTRACT

The majority of work on the perception of transparency has focused on static images with luminance-defined contour junctions, but recent work has shown that dynamic image sequences with dynamic image deformations also provide information about transparency. The present study demonstrates that when part of a static image is dynamically deformed, contour junctions at which deforming and nondeforming contours are connected facilitate the deformation-based perception of a transparent layer. We found that the impression of a transparent layer was stronger when a dynamically deforming area was adjacent to static nondeforming areas than when it was presented alone. When contour junctions were not formed at the dynamic-static boundaries, however, the impression of a transparent layer was not facilitated by the presence of static surrounding areas. The effect of the deformation-defined junctions was attenuated when the spatial pattern of luminance contrast at the junctions was inconsistent with the perceived transparency related to luminance contrast, while the effect did not change when the spatial luminance pattern was consistent with it. In addition, the results showed that contour completions across the junctions were required for the perception of a transparent layer. These results indicate that deformation-defined junctions that involve contour completion between deforming and nondeforming regions enhance the perception of a transparent layer, and that deformation-based perceptual transparency can be promoted by the simultaneous presence of appropriately configured luminance and contrast, other features that can also by themselves produce an impression of transparency.


Subjects
Brain/physiology, Form Perception/physiology, Ocular Vision/physiology, Adult, Computer Simulation, Cues, Depth Perception, Female, Humans, Male, Motion Perception, Psychophysics, Visual Perception
10.
J Vis ; 17(4): 8, 2017 04 01.
Article in English | MEDLINE | ID: mdl-28423413

ABSTRACT

We are surrounded by many textures with fine dense structures, such as human hair and fabrics, whose individual elements are often finer than the spatial resolution limit of the visual system or that of a digitized image. Here we show that human observers have an ability to visually estimate subresolution fineness of those textures. We carried out a psychophysical experiment to show that observers could correctly discriminate differences in the fineness of hair-like dense line textures even when the thinnest line element was much finer than the resolution limit of the eye or that of the display. The physical image analysis of the textures, along with a theoretical analysis based on the central limit theorem, indicates that as the fineness of texture increases and the number of texture elements per resolvable unit increases, the intensity contrast of the texture decreases and the intensity histogram approaches a Gaussian shape. Subsequent psychophysical experiments showed that these image features indeed play critical roles in fineness perception; i.e., lowering the contrast made artificial and natural textures look finer, and this effect was most evident for textures with unimodal Gaussian-like intensity distributions. These findings indicate that the human visual system is able to estimate subresolution texture fineness on the basis of diagnostic image features correlated with subresolution fineness, such as the intensity contrast and the shape of the intensity histogram.
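
The central-limit-theorem argument can be illustrated with a quick simulation (toy element statistics, not the paper's textures): averaging more independent elements per resolvable unit lowers contrast and drives the intensity histogram toward a Gaussian shape.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(3)
for n_elements in (1, 4, 16, 64):          # elements falling in one resolvable unit
    # Each resolved pixel averages n independent (skewed) element intensities.
    resolved = rng.exponential(size=(100_000, n_elements)).mean(axis=1)
    contrast = resolved.std() / resolved.mean()
    print(f"n={n_elements:3d}  contrast={contrast:.3f}  "
          f"excess kurtosis={kurtosis(resolved):.3f}")

# Contrast falls roughly as 1/sqrt(n) and excess kurtosis approaches 0 (Gaussian).
```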


Subjects
Discrimination (Psychology)/physiology, Form Perception/physiology, Visual Pattern Recognition/physiology, Contrast Sensitivity/physiology, Cues, Humans, Judgment, Male, Normal Distribution, Psychophysics
11.
J Vis ; 17(5): 7, 2017 05 01.
Article in English | MEDLINE | ID: mdl-28505665

ABSTRACT

Color vision provides humans and animals with the abilities to discriminate colors based on the wavelength composition of light and to determine the location and identity of objects of interest in cluttered scenes (e.g., ripe fruit among foliage). However, we argue that color vision can inform us about much more than color alone. Since a trichromatic image carries more information about the optical properties of a scene than a monochromatic image does, color can help us recognize complex material qualities. Here we show that human vision uses color statistics of an image for the perception of an ecologically important surface condition (i.e., wetness). Psychophysical experiments showed that overall enhancement of chromatic saturation, combined with a luminance tone change that increases the darkness and glossiness of the image, tended to make dry scenes look wetter. Theoretical analysis along with image analysis of real objects indicated that our image transformation, which we call the wetness enhancing transformation, is consistent with actual optical changes produced by surface wetting. Furthermore, we found that the wetness enhancing transformation operator was more effective for the images with many colors (large hue entropy) than for those with few colors (small hue entropy). The hue entropy may be used to separate surface wetness from other surface states having similar optical properties. While surface wetness and surface color might seem to be independent, there are higher order color statistics that can influence wetness judgments, in accord with the ecological statistics. The present findings indicate that the visual system uses color image statistics in an elegant way to help estimate the complex physical status of a scene.
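
A minimal sketch of the two ingredients described above, implemented in HSV space with assumed exponents (the published transformation parameters are not reproduced here):

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def wet_transform(rgb):
    """Boost chromatic saturation and darken the tone curve (assumed exponents)."""
    hsv = rgb_to_hsv(rgb)
    hsv[..., 1] = np.clip(hsv[..., 1] ** 0.5, 0.0, 1.0)   # raise saturation
    hsv[..., 2] = hsv[..., 2] ** 1.8                        # darker, glossier tone
    return hsv_to_rgb(hsv)

def hue_entropy(rgb, bins=36):
    """Shannon entropy (bits) of the hue histogram, one of the statistics above."""
    hue = rgb_to_hsv(rgb)[..., 0].ravel()
    p, _ = np.histogram(hue, bins=bins, range=(0.0, 1.0))
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

img = np.random.default_rng(4).random((64, 64, 3))          # placeholder image
print("hue entropy:", hue_entropy(img))
wet = wet_transform(img)
```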


Subjects
Color Perception/physiology, Color, Light, Wettability, Humans, Psychophysics, Surface Properties
12.
J Vis ; 17(6): 14, 2017 06 01.
Article in English | MEDLINE | ID: mdl-28637053

ABSTRACT

Characterization of the functional relationship between sensory inputs and neuronal or observers' perceptual responses is one of the fundamental goals of systems neuroscience and psychophysics. Conventional methods, such as reverse correlation and spike-triggered data analyses, are limited in their ability to resolve complex and inherently nonlinear neuronal/perceptual processes because these methods require input stimuli to be Gaussian with a zero mean. Recent studies have shown that analyses based on a generalized linear model (GLM) do not require such specific input characteristics and have advantages over conventional methods. GLM, however, relies on iterative optimization algorithms, and its computational cost becomes very high when estimating the nonlinear parameters of a large-scale system using large volumes of data. In this paper, we introduce a new analytical method for identifying a nonlinear system without relying on iterative calculations and yet without requiring any specific stimulus distribution. We demonstrate the results of numerical simulations, showing that our noniterative method is as accurate as GLM in estimating nonlinear parameters in many cases and outperforms conventional spike-triggered data analyses. As an example of the application of our method to actual psychophysical data, we investigated how different spatiotemporal frequency channels interact in assessments of motion direction. The nonlinear interaction estimated by our method was consistent with findings from previous vision studies and supports the validity of our method for nonlinear system identification.
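
For orientation, here is a toy version of the conventional iterative GLM baseline the new method is compared against (a Bernoulli GLM with a logistic link, fitted by iterative maximum likelihood); the paper's noniterative estimator itself is not reproduced here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n_trials, n_dims = 5000, 20
stim = rng.standard_normal((n_trials, n_dims))        # stimuli (need not be Gaussian in general)
true_w = rng.standard_normal(n_dims)                  # hidden linear filter
p_resp = 1.0 / (1.0 + np.exp(-(stim @ true_w)))       # logistic nonlinearity
resp = rng.random(n_trials) < p_resp                  # binary trial-by-trial responses

# Large C approximates an unpenalized maximum-likelihood GLM fit.
glm = LogisticRegression(C=1e6, max_iter=1000).fit(stim, resp)
print("correlation with true filter:",
      np.corrcoef(glm.coef_.ravel(), true_w)[0, 1])
```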


Subjects
Neurological Models, Motion Perception/physiology, Nonlinear Dynamics, Action Potentials, Algorithms, Humans, Linear Models, Psychophysics
13.
J Neurophysiol ; 115(3): 1620-9, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26843600

ABSTRACT

The brain can precisely encode the temporal relationship between tactile inputs. While behavioural studies have demonstrated precise interfinger temporal judgments, the underlying neural mechanism remains unknown. Computationally, two kinds of neural responses can act as the information source. One is the phase-locked response to the phase of relatively slow inputs, and the other is the response to the amplitude change of relatively fast inputs. To isolate the contributions of these components, we measured performance on a synchrony judgment task for sine wave and amplitude-modulation (AM) wave stimuli. The sine wave stimulus was a low-frequency sinusoid, with the phase shifted in the asynchronous stimulus. The AM wave stimulus was a low-frequency sinusoidal AM of a 250-Hz carrier, with only the envelope shifted in the asynchronous stimulus. In the experiment, three stimulus pairs, two synchronous ones and one asynchronous one, were sequentially presented to neighboring fingers, and participants were asked to report which one was the asynchronous pair. We found that the asynchrony of AM waves could be detected as precisely as that of a single impulse pair, with the threshold asynchrony being ∼20 ms. On the other hand, the asynchrony of sine waves could not be detected at all in the range from 5 to 30 Hz. Our results suggest that the timing signal for tactile judgments is provided not by the stimulus phase information but by the envelope of the response of the high-frequency-sensitive Pacinian channel (PC), although they do not exclude a possible contribution of the envelope of non-PC channels.
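
A sketch of the two stimulus classes described above, with assumed modulation frequency and asynchrony values (not the experimental code):

```python
import numpy as np

fs = 5000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
f_mod, shift_s = 15.0, 0.020           # 15-Hz modulation, 20-ms asynchrony (assumed)

# Sine wave stimulus: the whole low-frequency waveform is phase shifted.
sine_sync = np.sin(2 * np.pi * f_mod * t)
sine_async = np.sin(2 * np.pi * f_mod * (t - shift_s))

# AM wave stimulus: a 250-Hz carrier whose low-frequency envelope alone is shifted.
carrier = np.sin(2 * np.pi * 250.0 * t)
env_sync = 0.5 * (1 + np.sin(2 * np.pi * f_mod * t))
env_async = 0.5 * (1 + np.sin(2 * np.pi * f_mod * (t - shift_s)))
am_sync = env_sync * carrier
am_async = env_async * carrier
```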


Subjects
Judgment, Reaction Time, Touch Perception, Adult, Female, Fingers/innervation, Fingers/physiology, Humans, Male, Psychomotor Performance
14.
J Vis ; 16(15): 7, 2016 12 01.
Article in English | MEDLINE | ID: mdl-27936271

ABSTRACT

The motion of a 1D image feature, such as a line, seen through a small aperture, or the small receptive field of a neural motion sensor, is underconstrained, and it is not possible to derive the true motion direction from a single local measurement. This is referred to as the aperture problem. How the visual system solves the aperture problem is a fundamental question in visual motion research. In the estimation of motion vectors through integration of ambiguous local motion measurements at different positions, conventional theories assume that the object motion is a rigid translation, with motion signals sharing a common motion vector within the spatial region over which the aperture problem is solved. However, this strategy fails for global rotation. Here we show that the human visual system can estimate global rotation directly through spatial pooling of locally ambiguous measurements, without an intervening step that computes local motion vectors. We designed a novel ambiguous global flow stimulus, which is globally as well as locally ambiguous. The global ambiguity implies that the stimulus is simultaneously consistent with both a global rigid translation and an infinite number of global rigid rotations. By the standard view, the motion should always be seen as a global translation, but it appears to shift from translation to rotation as observers shift fixation. This finding indicates that the visual system can estimate local vectors using a global rotation constraint, and suggests that local motion ambiguity may not be resolved until consistencies with multiple global motion patterns are assessed.
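
The conventional intersection-of-constraints account referred to above can be sketched as a least-squares problem over locally ambiguous (normal-component) measurements; toy data, assuming a rigid global translation:

```python
import numpy as np

rng = np.random.default_rng(6)
true_v = np.array([1.0, 0.5])                 # hidden global translation (assumed)

theta = rng.uniform(0, np.pi, 50)             # orientations of local 1-D features
normals = np.column_stack([np.cos(theta), np.sin(theta)])
normal_speeds = normals @ true_v              # only the normal component is measurable locally

# Intersection of constraints: find the single v with normals @ v ~= normal_speeds.
v_hat, *_ = np.linalg.lstsq(normals, normal_speeds, rcond=None)
print(v_hat)                                  # recovers [1.0, 0.5] for a rigid translation

# A global rotation would instead require position-dependent constraints, which
# is the alternative interpretation the ambiguous global flow stimulus admits.
```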


Subjects
Physiological Adaptation/physiology, Motion Perception/physiology, Orientation/physiology, Photic Stimulation/methods, Humans, Rotation
15.
Proc Biol Sci ; 282(1805), 2015 Apr 22.
Article in English | MEDLINE | ID: mdl-25788590

ABSTRACT

Recent sensory experience modifies subjective timing perception. For example, when visual events repeatedly lead auditory events, such as when the sound and video tracks of a movie are out of sync, subsequent vision-leads-audio presentations are reported as more simultaneous. This phenomenon could provide insights into the fundamental problem of how timing is represented in the brain, but the underlying mechanisms are poorly understood. Here, we show that the effect of recent experience on timing perception is not just subjective; recent sensory experience also modifies relative timing discrimination. This result indicates that recent sensory history alters the encoding of relative timing in sensory areas, excluding explanations of the subjective phenomenon based only on decision-level changes. The pattern of changes in timing discrimination suggests the existence of two sensory components, similar to those previously reported for visual spatial attributes: a lateral shift in the nonlinear transducer that maps relative timing into perceptual relative timing and an increase in transducer slope around the exposed timing. The existence of these components would suggest that previous explanations of how recent experience may change the sensory encoding of timing, such as changes in sensory latencies or simple implementations of neural population codes, cannot account for the effect of sensory adaptation on timing perception.


Subjects
Auditory Perception, Time Perception, Visual Perception, Acoustic Stimulation, Physiological Adaptation, Humans, Male, Photic Stimulation
16.
J Vis ; 15(13): 2, 2015.
Article in English | MEDLINE | ID: mdl-26381833

ABSTRACT

When eyes track a moving target, a stationary background environment moves in the direction opposite to the eye movement on the observer's retina. Here, we report a novel effect in which smooth pursuit can enhance the retinal motion in the direction opposite to eye movement, under certain conditions. While performing smooth pursuit, the observers were presented with a counterphase grating on the retina. The counterphase grating consisted of two drifting component gratings: one drifting in the direction opposite to the eye movement and the other drifting in the same direction as the pursuit. Although the overall perceived motion direction should be ambiguous if only retinal information is considered, our results indicated that the stimulus almost always appeared to be moving in the direction opposite to the pursuit direction. This effect was ascribable to the perceptual dominance of the environmentally stationary component over the other. The effect was robust at suprathreshold contrasts, but it disappeared at lower overall contrasts. The effect was not associated with motion capture by a reference frame served by peripheral moving images. Our findings also indicate that the brain exploits eye-movement information not only for eye-contingent image motion suppression but also to develop an ecologically plausible interpretation of ambiguous retinal motion signals. Based on this biological assumption, we argue that visual processing has the functional consequence of reducing the apparent motion blur of a stationary background pattern during eye movements and that it does so through integration of the trajectories of pattern and color signals.
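
The decomposition underlying the stimulus, a counterphase grating being the sum of two equal-contrast gratings drifting in opposite directions, can be verified numerically (arbitrary frequencies):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)      # space (arbitrary units)
t = 0.37                            # one arbitrary time point
f, w = 3.0, 2.0                     # spatial and temporal frequency (assumed)

counterphase = np.cos(2 * np.pi * f * x) * np.cos(2 * np.pi * w * t)
two_drifts = 0.5 * (np.cos(2 * np.pi * (f * x - w * t))
                    + np.cos(2 * np.pi * (f * x + w * t)))
print(np.allclose(counterphase, two_drifts))   # True: the two descriptions match
```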


Subjects
Motion Perception/physiology, Smooth Pursuit/physiology, Contrast Sensitivity/physiology, Female, Humans, Male, Photic Stimulation/methods
17.
J Vis ; 15(1): 15.1.25, 2015 Jan 26.
Article in English | MEDLINE | ID: mdl-25624464

ABSTRACT

Whereas early visual processing has been considered primarily retinotopic, recent studies have revealed significant contributions of nonretinotopic processing to the human perception of fundamental visual features. For adult vision, it has been shown that information about color, shape, and size is nonretinotopically integrated along the motion trajectory, which could bring about clear and unblurred perception of a moving object. Since this nonretinotopic processing presumably includes tight and elaborate cooperation among functional cortical modules for different visual attributes, how this processing matures in the course of brain development is an important unexplored question. Here we show that the nonretinotopic integration of color signals is fully developed in infants at five months of age. Using preferential looking, we found significantly better temporal segregation of colors for moving patterns than for flickering patterns, even when the retinal color alternation rate was the same. This effect could be ascribed to the integration of color signals along a motion trajectory. Furthermore, the infants' color segmentation performance was comparable to that of human adults. Given that both the motion processing and color vision of 5-month-old infants are still under development, our findings suggest that nonretinotopic color processing develops concurrently with basic color and motion processing. Our findings not only support the notion of an early presence of cross-modal interactions in the brain, but also indicate the early development of a purposive cross-module interaction for elegant visual computation.


Subjects
Color Vision/physiology, Flicker Fusion/physiology, Motion Perception/physiology, Retina/physiology, Eye Movements/physiology, Female, Humans, Infant, Male
18.
J Vis ; 14(4), 2014 Apr 17.
Article in English | MEDLINE | ID: mdl-24744448

ABSTRACT

Interest in the perception of object materials has been growing. While material perception is a critical ability for animals to properly regulate behavioral interactions with surrounding objects (e.g., eating), little is known about its underlying processing. Vision and audition provide useful information for material perception; using only its visual appearance or impact sound, we can infer what an object is made of. However, what material is perceived when the visual appearance of one material is combined with the impact sound of another, and what rules govern the cross-modal integration of material information? We addressed these questions by asking 16 human participants to rate how likely it was that audiovisual stimuli (48 combinations of visual appearances of six materials and impact sounds of eight materials), along with visual-only stimuli and auditory-only stimuli, fell into each of 13 material categories. The results indicated strong interactions between audiovisual material perceptions; for example, the appearance of glass paired with a pepper sound is perceived as transparent plastic. Ratings of material-category likelihood follow a multiplicative integration rule, in that the categories judged to be likely are consistent with both visual and auditory stimuli. On the other hand, ratings of material properties, such as roughness and hardness, follow a weighted average rule. Despite the difference in their integration calculations, both rules can be interpreted as optimal Bayesian integration of independent audiovisual estimations for the two types of material judgment, respectively.
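
A small sketch of the two integration rules contrasted above, using made-up numbers (the category set, probabilities, and weights are assumptions, not the paper's data):

```python
import numpy as np

categories = ["glass", "transparent plastic", "ceramic"]
p_visual = np.array([0.6, 0.3, 0.1])      # likelihoods from visual appearance (assumed)
p_audio = np.array([0.1, 0.5, 0.4])       # likelihoods from impact sound (assumed)

p_av = p_visual * p_audio
p_av /= p_av.sum()                         # multiplicative rule for category likelihoods
print(dict(zip(categories, np.round(p_av, 3))))

hardness_v, hardness_a = 7.0, 4.0          # property ratings on an assumed scale
w_v = 0.6                                  # weight, e.g. reflecting relative reliability
hardness_av = w_v * hardness_v + (1 - w_v) * hardness_a   # weighted-average rule
print(hardness_av)
```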


Subjects
Auditory Perception/physiology, Form Perception/physiology, Visual Perception/physiology, Adult, Bayes Theorem, Female, Humans, Male, Photic Stimulation/methods, Sound, Surveys and Questionnaires, Young Adult
19.
Psychol Methods ; 2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38573668

ABSTRACT

Human decision behavior entails a graded awareness of its certainty, known as a feeling of confidence. Considerable attention has been paid to behavioral and computational dissociations of decision and confidence, which has raised an urgent need for measurement frameworks that can quantify the efficiency of confidence rating relative to decision accuracy (metacognitive efficiency). As a unique addition to such frameworks, we have developed a new signal detection theory paradigm utilizing the generalized Gaussian distribution (GGSDT). This framework evaluates the observer's metacognitive efficiency and internal standard deviation ratio through shape and scale parameters, respectively. The shape parameter quantifies the kurtosis of the internal distributions and can practically be understood in reference to the proportion of the Gaussian ideal observer's confidence being disrupted with random guessing (metacognitive lapse rate). This interpretation holds largely irrespective of the contaminating effects of decision accuracy or operating characteristic asymmetry. Thus, the GGSDT enables hitherto unexplored research protocols (e.g., direct comparison of yes/no vs. forced-choice metacognitive efficiency), which are expected to find applications in various fields of behavioral science. This article provides a detailed walkthrough of the GGSDT analysis with an accompanying R package (ggsdt). (PsycInfo Database Record (c) 2024 APA, all rights reserved.)
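
For readers unfamiliar with the distribution family, the sketch below evaluates a generalized Gaussian density via SciPy's gennorm (shape beta = 2 recovers the Gaussian); it only illustrates the distribution itself, not the ggsdt package's API:

```python
import numpy as np
from scipy.stats import gennorm, norm

x = np.linspace(-4.0, 4.0, 9)
beta, scale = 1.5, 1.0                     # assumed example shape and scale parameters

print(gennorm.pdf(x, beta, scale=scale))   # beta < 2: heavier tails than the Gaussian
print(norm.pdf(x))                         # Gaussian reference (equivalent to beta = 2)
```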

20.
Proc Biol Sci ; 280(1763): 20130991, 2013 Jul 22.
Article in English | MEDLINE | ID: mdl-23740784

ABSTRACT

Sense of agency, the experience of controlling external events through one's actions, stems from contiguity between action- and effect-related signals. Here we show that human observers link their action- and effect-related signals using a computational principle common to cross-modal sensory grouping. We first report that the detection of a delay between tactile and visual stimuli is enhanced when both stimuli are synchronized with separate auditory stimuli (experiment 1). This occurs because the synchronized auditory stimuli hinder the potential grouping between tactile and visual stimuli. We subsequently demonstrate an analogous effect on observers' key press as an action and a sensory event. This change is associated with a modulation in sense of agency; namely, sense of agency, as evaluated by apparent compressions of action-effect intervals (intentional binding) or subjective causality ratings, is impaired when both participant's action and its putative visual effect events are synchronized with auditory tones (experiments 2 and 3). Moreover, a similar role of action-effect grouping in determining sense of agency is demonstrated when the additional signal is presented in the modality identical to an effect event (experiment 4). These results are consistent with the view that sense of agency is the result of general processes of causal perception and that cross-modal grouping plays a central role in these processes.


Subjects
Causality, Perception/physiology, Psychomotor Performance/physiology, Acoustic Stimulation/methods, Adult, Humans, Photic Stimulation/methods, Touch/physiology