Results 1 - 20 of 92
1.
J Cogn Neurosci ; 36(1): 187-199, 2024 01 01.
Article in English | MEDLINE | ID: mdl-37902587

ABSTRACT

The oddball protocol has been used to study the neural and perceptual consequences of implicit predictions in the human brain. The protocol involves presenting a sequence of identical repeated events that are eventually broken by a novel "oddball" presentation. Oddball presentations have been linked to increased neural responding and to an exaggeration of perceived duration relative to repeated events. Because the number of repeated events in such protocols is circumscribed, as more repeats are encountered, the conditional probability of a further repeat decreases, whereas the conditional probability of an oddball increases. These facts have not been appreciated in many analyses of oddballs; repeats and oddballs have rather been treated as binary event categories. Here, we show that the human brain is sensitive to conditional event probabilities in an active, visual oddball paradigm. P300 responses (a relatively late component of visually evoked potentials measured with EEG) tended to be greater for less likely oddballs and repeats. By contrast, P1 responses (an earlier component) increased for repeats as a goal-relevant target presentation neared, but this effect occurred even when repeat probabilities were held constant, and oddball P1 responses were invariant. We also found that later, more likely oddballs seemed to last longer, and this effect was largely independent of the number of preceding repeats. These findings speak against a repetition suppression account of the temporal oddball effect. Overall, our data highlight an impact of event probability on later, rather than earlier, electroencephalographic measures previously related to predictive processes, and the importance of considering conditional probabilities in sequential presentation paradigms.
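The point about conditional probabilities can be sketched numerically. The protocol parameters below (an oddball equally likely at serial positions 5 through 9) are hypothetical, not taken from the study; the sketch shows that the hazard of an oddball rises with each successive repeat:

```python
from fractions import Fraction

def oddball_hazard(positions):
    """Probability that the NEXT event is the oddball, given it has not
    yet appeared, when its serial position is uniform over `positions`."""
    positions = sorted(positions)
    hazards = {}
    for p in positions:
        # Positions at which the oddball could still occur
        remaining = sum(1 for q in positions if q >= p)
        hazards[p] = Fraction(1, remaining)
    return hazards

# Hypothetical protocol: oddball equally likely at positions 5-9
hazards = oddball_hazard(range(5, 10))
for p, h in sorted(hazards.items()):
    print(p, h)  # conditional oddball probability grows with each repeat
```

Treating repeats and oddballs as binary categories discards exactly this hazard structure.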


Subjects
Electroencephalography , Evoked Potentials , Humans , Acoustic Stimulation/methods , Electroencephalography/methods , Evoked Potentials/physiology , Probability , Brain/physiology
2.
Conscious Cogn ; 123: 103728, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39018832

ABSTRACT

Humans experience feelings of confidence in their decisions. In perception, these feelings are typically accurate - we tend to feel more confident about correct decisions. The degree of insight people have into the accuracy of their decisions is known as metacognitive sensitivity. Currently popular methods of estimating metacognitive sensitivity are subject to interpretive ambiguities because they assume people have normally shaped distributions of different experiences when they are repeatedly exposed to a single input. If this normality assumption is violated, calculations can erroneously underestimate metacognitive sensitivity. Here, we describe a means of estimating metacognitive sensitivity that is more robust to violations of the normality assumption. This improved method can easily be added to standard behavioral experiments, and the authors provide Matlab code to help researchers implement these analyses and experimental procedures.
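The abstract does not spell out the authors' estimator (their code is in MATLAB); as a hypothetical illustration of a distribution-free approach in Python, the type-2 ROC area below measures how well confidence discriminates correct from incorrect trials without assuming normally shaped evidence distributions:

```python
import numpy as np

def type2_auroc(confidence, correct):
    """Nonparametric type-2 AUROC: probability that a randomly chosen
    correct trial carries higher confidence than a randomly chosen
    incorrect one (ties count half). 0.5 = no metacognitive sensitivity."""
    confidence = np.asarray(confidence, float)
    correct = np.asarray(correct, bool)
    c_conf = confidence[correct]   # confidence on correct trials
    e_conf = confidence[~correct]  # confidence on error trials
    greater = (c_conf[:, None] > e_conf[None, :]).mean()
    ties = (c_conf[:, None] == e_conf[None, :]).mean()
    return greater + 0.5 * ties

# Toy ratings (1-4 scale) and accuracies, purely illustrative
conf = [4, 3, 4, 2, 1, 2, 3, 1]
acc  = [1, 1, 1, 1, 0, 0, 0, 0]
print(type2_auroc(conf, acc))
```

Because it compares ranks rather than fitting Gaussians, this statistic is unaffected by the shape of the underlying distributions, which is the robustness property the abstract is after.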


Subjects
Metacognition , Humans , Metacognition/physiology , Decision Making/physiology
3.
J Vis ; 24(3): 5, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38506794

ABSTRACT

The ability of humans to identify and reproduce short time intervals (in the region of a second) may be affected by many factors, ranging from the gender and personality of the individual observer, through the attentional state, to the precise spatiotemporal structure of the stimulus. The relative roles of these very different factors are a challenge to describe and define; several methodological approaches have been used to achieve this with varying degrees of success. Here we describe and model the results of a paradigm affording not only a first-order measurement of the perceived duration of an interval but also a second-order metacognitive judgement of perceived time. This approach, we argue, expands the form of the data generally collected in duration judgements and allows more detailed comparison of psychophysical behavior to the underlying theory. We also describe a hierarchical Bayesian measurement model that performs a quantitative analysis of the trial-by-trial data, calculating the variability of the temporal estimates and the metacognitive judgements, and allowing direct comparison between an actual and an ideal observer. We fit the model to data collected for judgements of 750 ms (bisecting 1500 ms) and 1500 ms (bisecting 3000 ms) intervals across three stimulus modalities (visual, audio, and audiovisual). This enhanced form of data on a given interval judgement, and the ability to track its progression on a trial-by-trial basis, offers a way of looking at the different roles that subject-based, task-based and stimulus-based factors have on the perception of time.


Subjects
Metacognition , Time Perception , Humans , Bayes Theorem , Judgment
4.
Conscious Cogn ; 115: 103583, 2023 10.
Article in English | MEDLINE | ID: mdl-37839114

ABSTRACT

Human vision is shaped by historic and by predictive processes. The lingering impact of visual adaptation, for instance, can act to exaggerate differences between past and present inputs, whereas predictive processes can promote extrapolation effects that allow us to anticipate the near future. It is unclear to what extent either of these effects manifest in changes to conscious visual experience. It is also unclear how these influences combine, when acting in concert or opposition. We had people make decisions about the sizes of inputs, and report on levels of decisional confidence. Tests were either selectively subject to size adaptation, to an extrapolation effect, or to both of these effects. When these two effects were placed in opposition, extrapolation had a greater impact on decision making. However, our data suggest the influence of extrapolation is primarily decisional, whereas size adaptation more fully manifests in changes to conscious visual awareness.


Subjects
Consciousness , Visual Perception , Humans , Vision, Ocular
5.
Conscious Cogn ; 113: 103532, 2023 08.
Article in English | MEDLINE | ID: mdl-37295196

ABSTRACT

Signal-detection theory (SDT) is one of the most popular frameworks for analyzing data from studies of human behavior - including investigations of confidence. SDT-based analyses of confidence deliver both standard estimates of sensitivity (d'), and a second estimate informed by high-confidence decisions - meta d'. The extent to which meta d' estimates fall short of d' estimates is regarded as a measure of metacognitive inefficiency, quantifying the contamination of confidence by additional noise. These analyses rely on a key but questionable assumption - that repeated exposures to an input will evoke a normally-shaped distribution of perceptual experiences (the normality assumption). Here we show, via analyses inspired by an experiment and modelling, that when distributions of experience do not conform with the normality assumption, meta d' can be systematically underestimated relative to d'. Our data highlight that SDT-based analyses of confidence do not provide a ground truth measure of human metacognitive inefficiency. We explain why deviance from the normality assumption is especially a problem for some popular SDT-based analyses of confidence, in contrast to other analyses inspired by the SDT framework, which are more robust to violations of the normality assumption.
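For concreteness, here is a minimal sketch of the first-order SDT sensitivity computation the abstract builds on. The trial counts are made up, and the log-linear correction for extreme rates is one common convention, not necessarily the one the authors use:

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Standard SDT sensitivity: d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction keeps z finite when a rate would be 0 or 1."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(h) - z(f)

# Hypothetical counts: 80% hits, 20% false alarms
print(round(dprime(80, 20, 20, 80), 3))
```

Note that the `inv_cdf` step is exactly where the normality assumption enters: meta-d' analyses apply the same Gaussian machinery to confidence-conditional rates, which is why non-Gaussian experience distributions can bias the comparison of meta-d' against d'.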


Subjects
Metacognition , Humans , Signal Detection, Psychological
6.
Proc Natl Acad Sci U S A ; 117(51): 32791-32798, 2020 12 22.
Article in English | MEDLINE | ID: mdl-33293422

ABSTRACT

It is well established that speech perception is improved when we are able to see the speaker talking along with hearing their voice, especially when the speech is noisy. While we have a good understanding of where speech integration occurs in the brain, it is unclear how visual and auditory cues are combined to improve speech perception. One suggestion is that integration can occur as both visual and auditory cues arise from a common generator: the vocal tract. Here, we investigate whether facial and vocal tract movements are linked during speech production by comparing videos of the face and fast magnetic resonance (MR) image sequences of the vocal tract. The joint variation in the face and vocal tract was extracted using an application of principal components analysis (PCA), and we demonstrate that MR image sequences can be reconstructed with high fidelity using only the facial video and PCA. Reconstruction fidelity was significantly higher when images from the two sequences corresponded in time, and including implicit temporal information by combining contiguous frames also led to a significant increase in fidelity. A "Bubbles" technique was used to identify which areas of the face were important for recovering information about the vocal tract, and vice versa, on a frame-by-frame basis. Our data reveal that there is sufficient information in the face to recover vocal tract shape during speech. In addition, the facial and vocal tract regions that are important for reconstruction are those that are used to generate the acoustic speech signal.
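The joint-PCA idea — recovering one modality's features from the other via shared components — can be sketched on synthetic stand-in data. Everything below (dimensions, the 3-D "articulatory source", noise levels) is invented for illustration; the study's actual pipeline over video frames and MR images is more involved:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: 200 synchronized frames, a shared 3-D
# articulatory source driving both "face" and "vocal tract" features
source = rng.normal(size=(200, 3))
face = source @ rng.normal(size=(3, 40)) + 0.05 * rng.normal(size=(200, 40))
mr   = source @ rng.normal(size=(3, 60)) + 0.05 * rng.normal(size=(200, 60))

# PCA on the concatenated (face, MR) features via SVD
joint = np.hstack([face, mr])
joint_c = joint - joint.mean(0)
U, S, Vt = np.linalg.svd(joint_c, full_matrices=False)
k = 3
V_face, V_mr = Vt[:k, :40], Vt[:k, 40:]  # face / MR halves of the loadings

# Infer the shared scores from the face block alone, then
# back-project into the MR half to reconstruct the vocal tract
scores_from_face = (face - face.mean(0)) @ np.linalg.pinv(V_face)
mr_hat = scores_from_face @ V_mr + mr.mean(0)

r = np.corrcoef(mr_hat.ravel(), mr.ravel())[0, 1]
print(round(r, 3))  # reconstruction fidelity (correlation)
```

High correlation here only reflects the synthetic shared source; in the study, fidelity additionally depended on temporal correspondence between the two image sequences.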


Subjects
Face , Speech Perception , Vocal Cords , Adult , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Nontherapeutic Human Experimentation , Principal Component Analysis , Speech Acoustics , Visual Perception
7.
J Vis ; 22(11): 7, 2022 10 04.
Article in English | MEDLINE | ID: mdl-36223110

ABSTRACT

Exposure to a dynamic texture reduces the perceived separation between objects, altering the mapping between physical relations in the environment and their neural representations. Here we investigated the spatial tuning and spatial frame of reference of this aftereffect to understand the stage(s) of processing where adaptation-induced changes occur. In Experiment 1, we measured apparent separation at different positions relative to the adapted area, revealing a strong but tightly tuned compression effect. We next tested the spatial frame of reference of the effect, either by introducing a gaze shift between adaptation and test phase (Experiment 2) or by decoupling the spatial selectivity of adaptation in retinotopic and world-centered coordinates (Experiment 3). Results across the two experiments indicated that both retinotopic and world-centered adaptation effects can occur independently. Spatial attention to the location of the adaptor alone could not account for the world-centered transfer we observed, and retinotopic adaptation did not transfer to world-centered coordinates after a saccade (Experiment 4). Finally, we found that aftereffects in different reference frames have a similar, narrow spatial tuning profile (Experiment 5). Together, our results suggest that the neural representation of local separation resides early in the visual cortex, but it can also be modulated by activity in higher visual areas.


Subjects
Retina , Visual Cortex , Adaptation, Physiological , Humans , Photic Stimulation/methods , Saccades
8.
Proc Biol Sci ; 288(1956): 20211276, 2021 08 11.
Article in English | MEDLINE | ID: mdl-34344185

ABSTRACT

Humans experience levels of confidence in perceptual decisions that tend to scale with the precision of their judgements - but not always. Sometimes precision can be held constant while confidence changes, leading researchers to assume precision and confidence are shaped by different types of information (e.g. perceptual and decisional). To assess this, we examined how visual adaptation to oriented inputs changes tilt perception, perceptual sensitivity and confidence. Some adaptors had a greater detrimental impact on measures of confidence than on precision. We could account for this using an observer model in which precision and confidence rely on different magnitudes of sensory information. These data show that differences in perceptual sensitivity and confidence can emerge, not because these factors rely on different types of information, but because they rely on different magnitudes of sensory information.


Subjects
Judgment , Perception , Humans , Visual Perception
9.
PLoS Comput Biol ; 16(10): e1008335, 2020 10.
Article in English | MEDLINE | ID: mdl-33112846

ABSTRACT

Facial expressions carry key information about an individual's emotional state. Research into the perception of facial emotions typically employs static images of a small number of artificially posed expressions taken under tightly controlled experimental conditions. However, such approaches risk missing potentially important facial signals and within-person variability in expressions. The extent to which patterns of emotional variance in such images resemble more natural ambient facial expressions remains unclear. Here we advance a novel protocol for eliciting natural expressions from dynamic faces, using a dimension of emotional valence as a test case. Subjects were video recorded while delivering either positive or negative news to camera, but were not instructed to deliberately or artificially pose any specific expressions or actions. A PCA-based active appearance model was used to capture the key dimensions of facial variance across frames. Linear discriminant analysis distinguished facial change determined by the emotional valence of the message, and this also generalised across subjects. By sampling along the discriminant dimension, and back-projecting into the image space, we extracted a behaviourally interpretable dimension of emotional valence. This dimension highlighted changes commonly represented in traditional face stimuli such as variation in the internal features of the face, but also key postural changes that would typically be controlled away such as a dipping versus raising of the head posture from negative to positive valences. These results highlight the importance of natural patterns of facial behaviour in emotional expressions, and demonstrate the efficacy of using data-driven approaches to study the representation of these cues by the perceptual system. The protocol and model described here could be readily extended to other emotional and non-emotional dimensions of facial variance.
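The discriminant step — finding a single axis that separates positive- from negative-valence frames in appearance-model space — can be sketched with Fisher's linear discriminant. The data below are a synthetic stand-in (two Gaussian clouds separated along one hidden dimension), not the study's appearance-model scores:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical appearance-model scores: 150 "positive" and 150
# "negative" frames, differing along one hidden valence axis
shift = np.array([1.5] + [0.0] * 9)
pos = rng.normal(size=(150, 10)) + shift
neg = rng.normal(size=(150, 10)) - shift

# Fisher discriminant direction: w = Sw^{-1} (mu_pos - mu_neg)
mu_p, mu_n = pos.mean(0), neg.mean(0)
Sw = np.cov(pos, rowvar=False) + np.cov(neg, rowvar=False)
w = np.linalg.solve(Sw, mu_p - mu_n)
w /= np.linalg.norm(w)

# Project frames onto the valence axis; in the study, sampling along
# this axis and back-projecting through the model visualizes valence
proj_p, proj_n = pos @ w, neg @ w
separation = (proj_p.mean() - proj_n.mean()) / np.sqrt(
    0.5 * (proj_p.var() + proj_n.var()))
print(round(separation, 2))
```

Back-projecting sampled points on `w` into image space is what makes the dimension "behaviourally interpretable" in the abstract's sense.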


Subjects
Emotions/classification , Facial Expression , Image Processing, Computer-Assisted/methods , Adult , Algorithms , Face/anatomy & histology , Face/physiology , Female , Humans , Male , Video Recording
10.
J Neurophysiol ; 121(5): 1787-1797, 2019 05 01.
Article in English | MEDLINE | ID: mdl-30840536

ABSTRACT

Smooth pursuit eye movements (pursuit) are used to minimize the retinal motion of moving objects. During pursuit, the pattern of motion on the retina carries not only information about the object movement but also reafferent information about the eye movement itself. The latter arises from the retinal flow of the stationary world in the direction opposite to the eye movement. To extract the global direction of motion of the tracked object and stationary world, the visual system needs to integrate ambiguous local motion measurements (i.e., the aperture problem). Unlike the tracked object, the stationary world's global motion is entirely determined by the eye movement and thus can be approximately derived from motor commands sent to the eye (i.e., from an efference copy). Because retinal motion opposite to the eye movement is dominant during pursuit, different motion integration mechanisms might be used for retinal motion in the same direction and opposite to pursuit. To investigate motion integration during pursuit, we tested direction discrimination of a brief change in global object motion. The global motion stimulus was a circular array of small static apertures within which one-dimensional gratings moved. We found increased coherence thresholds and a qualitatively different reflexive ocular tracking for global motion opposite to pursuit. Both effects suggest reduced sampling of motion opposite to pursuit, which results in an impaired ability to extract coherence in motion signals in the reafferent direction. We suggest that anisotropic motion integration is an adaptation to asymmetric retinal motion patterns experienced during pursuit eye movements. NEW & NOTEWORTHY This study provides a new understanding of how the visual system achieves coherent perception of an object's motion while the eyes themselves are moving. The visual system integrates local motion measurements to create a coherent percept of object motion. An analysis of perceptual judgments and reflexive eye movements to a brief change in an object's global motion confirms that the visual and oculomotor systems pick fewer samples to extract global motion opposite to the eye movement.


Subjects
Motion Perception , Pursuit, Smooth , Adaptation, Physiological , Adolescent , Adult , Anisotropy , Discrimination, Psychological , Female , Humans , Male
11.
Perception ; 47(9): 976-984, 2018 Sep.
Article in English | MEDLINE | ID: mdl-30020018

ABSTRACT

While personality has typically been considered to influence gaze behaviour, the literature on the topic is mixed. Previously, we found no evidence that self-reported personality traits influenced preferred gaze duration between a participant and a person looking at them via a video. In this study, 77 of the original participants answered an in-depth follow-up survey containing a more comprehensive assessment of personality traits (Big Five Inventory) than was initially used, to check whether the earlier findings were caused by the personality measure being too coarse. In addition to preferred mutual gaze duration, we also examined two other factors linked to personality traits: number of blinks and total fixation duration in the eye region of observed faces. No significant correlations were found between any of these measures and participant personality traits. We suggest that effects previously reported in the literature may stem from contextual differences or modulation of arousal.


Subjects
Facial Recognition/physiology , Fixation, Ocular/physiology , Personality/physiology , Social Perception , Adolescent , Adult , Aged , Female , Humans , Male , Middle Aged , Young Adult
12.
J Vis ; 18(3): 12, 2018 03 01.
Article in English | MEDLINE | ID: mdl-29677327

ABSTRACT

It has recently been shown that adapting to a densely textured stimulus alters the perception of visual space, such that the distance between two points subsequently presented in the adapted region appears reduced (Hisakata, Nishida, & Johnston, 2016). We asked whether this form of adaptation-induced spatial compression alters visual crowding. To address this question, we first adapted observers to a dynamic dot texture presented within an annular region surrounding the test location. Following adaptation, observers perceived a test array comprised of multiple oriented dot dipoles as spatially compressed, resulting in an overall reduction in perceived size. We then tested to what extent this spatial compression influences crowding by measuring orientation discrimination of a single dipole flanked by randomly oriented dipoles across a range of separations. Following adaptation, we found that the magnitude of crowding was predicted by the physical rather than perceptual separation between center and flanking dipoles. These findings contrast with previous studies in which crowding has been shown to increase when motion-induced position shifts act to reduce apparent separation (Dakin, Greenwood, Carlson, & Bex, 2011; Maus, Fischer, & Whitney, 2011).


Subjects
Adaptation, Ocular/physiology , Crowding , Space Perception/physiology , Humans , Orientation , Psychometrics , Spatial Processing , Young Adult
13.
J Vis ; 16(14): 16, 2016 11 01.
Article in English | MEDLINE | ID: mdl-27893894

ABSTRACT

The human face is central to our everyday social interactions. Recent studies have shown that while gazing at faces, each one of us has a particular eye-scanning pattern, highly stable across time. Although variables such as culture or personality have been shown to modulate gaze behavior, we still don't know what shapes these idiosyncrasies. Moreover, most previous observations rely on static analyses of small-sized eye-position data sets averaged across time. Here, we probe the temporal dynamics of gaze to explore what information can be extracted about the observers and what is being observed. Controlling for any stimuli effect, we demonstrate that among many individual characteristics, the gender of both the participant (gazer) and the person being observed (actor) are the factors that most influence gaze patterns during face exploration. We record and exploit the largest set of eye-tracking data (405 participants, 58 nationalities) from participants watching videos of another person. Using novel data-mining techniques, we show that female gazers follow a much more exploratory scanning strategy than males. Moreover, female gazers watching female actresses look more at the eye on the left side. These results have strong implications in every field using gaze-based models from computer vision to clinical psychology.


Subjects
Face/physiology , Facial Recognition/physiology , Fixation, Ocular/physiology , Visual Perception/physiology , Adult , Eye Movements/physiology , Female , Humans , Male , Sex Factors , Young Adult
14.
J Vis ; 16(9): 2, 2016 07 01.
Article in English | MEDLINE | ID: mdl-27380471

ABSTRACT

We have shown in previous work that the perception of order in point patterns is consistent with an interval scale structure (Protonotarios, Baum, Johnston, Hunter, & Griffin, 2014). The psychophysical scaling method used relies on the confusion between stimuli with similar levels of order, and the resulting discrimination scale is expressed in just-noticeable differences (jnds). As with other perceptual dimensions, an interesting question is whether suprathreshold (perceptual) differences are consistent with distances between stimuli on the discrimination scale. To test that, we collected discrimination data, and data based on comparison of perceptual differences. The stimuli were jittered square lattices of dots, covering the range from total disorder (Poisson) to perfect order (square lattice), roughly equally spaced on the discrimination scale. Observers picked the most ordered pattern from a pair, and the pair of patterns with the greatest difference in order from two pairs. Although the judgments of perceptual difference were found to be consistent with an interval scale, like the discrimination judgments, no common interval scale that could predict both sets of data was possible. In particular, the midpattern of the perceptual scale is 11 jnds away from the ordered end, and 5 jnds from the disordered end of the discrimination scale.


Subjects
Differential Threshold/physiology , Discrimination, Psychological/physiology , Judgment/physiology , Pattern Recognition, Visual/physiology , Psychophysics/methods , Adolescent , Adult , Female , Humans , Male , Young Adult
15.
J Vis ; 16(15): 7, 2016 12 01.
Article in English | MEDLINE | ID: mdl-27936271

ABSTRACT

The motion of a 1D image feature, such as a line, seen through a small aperture, or the small receptive field of a neural motion sensor, is underconstrained, and it is not possible to derive the true motion direction from a single local measurement. This is referred to as the aperture problem. How the visual system solves the aperture problem is a fundamental question in visual motion research. In the estimation of motion vectors through integration of ambiguous local motion measurements at different positions, conventional theories assume that the object motion is a rigid translation, with motion signals sharing a common motion vector within the spatial region over which the aperture problem is solved. However, this strategy fails for global rotation. Here we show that the human visual system can estimate global rotation directly through spatial pooling of locally ambiguous measurements, without an intervening step that computes local motion vectors. We designed a novel ambiguous global flow stimulus, which is globally as well as locally ambiguous. The global ambiguity implies that the stimulus is simultaneously consistent with both a global rigid translation and an infinite number of global rigid rotations. By the standard view, the motion should always be seen as a global translation, but it appears to shift from translation to rotation as observers shift fixation. This finding indicates that the visual system can estimate local vectors using a global rotation constraint, and suggests that local motion ambiguity may not be resolved until consistencies with multiple global motion patterns are assessed.
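The "conventional" rigid-translation strategy the abstract contrasts against can be made concrete: each 1D grating constrains the true velocity v only through its normal component, n · v = s, and a common translation is the intersection of those constraint lines. A minimal least-squares sketch (stimulus values invented for illustration):

```python
import numpy as np

def intersection_of_constraints(normals, speeds):
    """Each grating seen through an aperture constrains the velocity v
    to a line n . v = s (n: unit normal, s: normal speed). Two or more
    non-parallel constraints pick out a single rigid translation."""
    N = np.asarray(normals, float)   # one unit normal per row
    s = np.asarray(speeds, float)    # measured normal speeds
    v, *_ = np.linalg.lstsq(N, s, rcond=None)
    return v

# Two gratings with normals at 0 and 90 degrees, both consistent
# with a single global translation of (3, 4)
v = intersection_of_constraints([[1, 0], [0, 1]], [3, 4])
print(v)
```

The stimulus in the paper is built so that no single v satisfies this system better than a family of global rotations does, which is what makes the percept shift with fixation.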


Subjects
Adaptation, Physiological/physiology , Motion Perception/physiology , Orientation/physiology , Photic Stimulation/methods , Humans , Rotation
16.
J Vis ; 16(11): 23, 2016 Sep 01.
Article in English | MEDLINE | ID: mdl-27690163

ABSTRACT

The synchronous change of a feature across multiple discrete elements, i.e., temporal synchrony, has been shown to be a powerful cue for grouping and segmentation. This has been demonstrated with both static and dynamic stimuli for a range of tasks. However, in addition to temporal synchrony, stimuli in previous research have included other cues which can also facilitate grouping and segmentation, such as good continuation and coherent spatial configuration. To evaluate the effectiveness of temporal synchrony for grouping and segmentation in isolation, here we measure signal detection thresholds using a global-Gabor stimulus in the presence/absence of a synchronous event. We also examine the impact of the spatial proximity of the to-be-grouped elements on the effectiveness of temporal synchrony, and the duration for which elements are bound together following a synchronous event in the absence of further segmentation cues. The results show that temporal synchrony (in isolation) is an effective cue for grouping local elements together to extract a global signal. Further, we find that the effectiveness of temporal synchrony as a cue for segmentation is modulated by the spatial proximity of signal elements. Finally, we demonstrate that following a synchronous event, elements are perceptually bound together for an average duration of 200 ms.

17.
Psychol Sci ; 26(4): 512-7, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25711281

ABSTRACT

Upright static faces are widely thought to recruit holistic representations, whereby individual features are integrated into nondecomposable wholes for recognition and interpretation. In contrast, little is known about the perceptual integration of dynamic features when viewing moving faces. People are frequently exposed to correlated eye and mouth movements, such as the characteristic changes that accompany facial emotion, yawning, sneezing, and laughter. However, it is unclear whether the visual system is sensitive to these dynamic regularities, encoding facial behavior relative to a set of dynamic global prototypes, or whether it simply forms piecemeal descriptions of feature states over time. To address this question, we sought evidence of perceptual interactions between dynamic facial features. Crucially, we found illusory slowing of feature motion in the presence of another moving feature, but it was limited to upright faces and particular relative-phase relationships. Perceptual interactions between dynamic features suggest that local changes are integrated into models of global facial change.


Subjects
Facial Expression , Perceptual Distortion , Adult , Eyelids/anatomy & histology , Face/anatomy & histology , Female , Humans , Male , Pattern Recognition, Visual , Visual Perception
18.
J Vis ; 15(6): 2, 2015.
Article in English | MEDLINE | ID: mdl-26024450

ABSTRACT

It is well established that the apparent duration of moving visual objects is greater at higher speeds than at slower speeds. Here we report the effects of acceleration and deceleration on the perceived duration of a drifting grating with average speed kept constant (10°/s). For acceleration, increasing the speed range progressively reduced perceived duration. The magnitude of apparent duration compression was determined by speed rather than temporal frequency and was proportional to speed range (independent of standard duration) rather than acceleration. The perceived duration reduction was also proportional to the standard length. The effects of increases and decreases in speed were highly asymmetric. Reducing speed through the interval induced a moderate increase in perceived duration. These results could not be explained by changes in apparent onset or offset or differences in perceived average speed between intervals containing increasing speed and intervals containing decreasing speed. Paradoxically, for intervals combining increasing speed and decreasing speed, compression only occurred when increasing speed occurred in the second half of the interval. We show that this pattern of results in the duration domain was concomitant with changes in the reported direction of apparent motion of Gaussian blobs, embedded in intervals of increasing or decreasing speed, that could be predicted from adaptive changes in the temporal impulse response function. We detected similar changes after flicker adaptation, suggesting that the two effects might be linked through changes in the temporal tuning of visual filters.


Subjects
Acceleration , Deceleration , Motion Perception/physiology , Pattern Recognition, Visual/physiology , Adaptation, Physiological/physiology , Female , Fixation, Ocular/physiology , Humans , Male , Time Factors
19.
J Vis ; 14(8): 18, 2014 Jul 24.
Article in English | MEDLINE | ID: mdl-25057943

ABSTRACT

We examined how ambiguous motion signals are integrated over space to support the unambiguous perception of global motion. The motion of a Gaussian windowed drifting sine grating (Gabor) is consistent with an infinite number of grating velocities. To extract the consistent global motion of multi-Gabor arrays, the visual system must integrate ambiguous motion signals from disparate regions of visual space. We found an interaction between spatial arrangement and global motion integration in this process. Linear arrays of variably oriented Gabor elements appeared to move more slowly, reflecting suboptimal integration, when the direction of global translation was orthogonal to the line as opposed to along it. Circular arrays of Gabor elements appeared to move more slowly when the global motion was an expansion or contraction rather than a rotation. However, there was no difference in perceived speed for densely packed annular arrays for these global motion pattern directions. We conclude that the region over which ambiguous motion is integrated is biased in the direction of global motion, and the concept of the association field, held to link like elements along a contour, needs to be extended to include global motion computation over disparate elements referencing the same global motion.


Subjects
Motion Perception/physiology , Pattern Recognition, Visual/physiology , Humans , Photic Stimulation , Psychophysics , Sensory Thresholds/physiology
20.
Psychol Sci ; 24(1): 93-8, 2013 Jan 01.
Article in English | MEDLINE | ID: mdl-23196637

ABSTRACT

Imitation of facial gestures requires the cognitive system to equate the seen-but-unfelt with the felt-but-unseen. Rival accounts propose that this "correspondence problem" is solved either by an innate supramodal mechanism (the active intermodal-mapping, or AIM, model) or by learned, direct links between the corresponding visual and proprioceptive representations of actions (the associative sequence-learning, or ASL, model). Two experiments tested these alternative models using a new technology that permits, for the first time, the automated objective measurement of imitative accuracy. Euclidean distances, measured in image-derived principal component space, were used to quantify the accuracy of adult participants' attempts to replicate their own facial expressions before, during, and after training. Results supported the ASL model. In Experiment 1, participants reliant solely on proprioceptive feedback got progressively worse at self-imitation. In Experiment 2, participants who received visual feedback that did not match their execution of facial gestures also failed to improve. However, in both experiments, groups that received visual feedback contingent on their execution of facial gestures showed progressive improvement.
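The accuracy measure — Euclidean distance in an image-derived principal-component space — reduces to a short computation. The tiny 2-component space and 4-pixel "images" below are invented for illustration; the study's components came from real face images:

```python
import numpy as np

def imitation_error(target_frames, attempt_frames, components, mean):
    """Score an imitation attempt as the mean Euclidean distance
    between target and attempt after projecting both into a
    principal-component space (smaller = more accurate)."""
    t = (np.asarray(target_frames, float) - mean) @ components.T
    a = (np.asarray(attempt_frames, float) - mean) @ components.T
    return float(np.linalg.norm(t - a, axis=1).mean())

# Toy 2-component basis over 4-pixel "images" (hypothetical values)
components = np.array([[0.5,  0.5, 0.5,  0.5],
                       [0.5, -0.5, 0.5, -0.5]])
mean = np.zeros(4)
target  = [[1.0, 0.0, 1.0, 0.0]]
perfect = [[1.0, 0.0, 1.0, 0.0]]   # exact reproduction
worse   = [[0.0, 1.0, 0.0, 1.0]]   # opposite configuration
print(imitation_error(target, perfect, components, mean),
      imitation_error(target, worse, components, mean))
```

Tracking this distance before, during, and after training is what lets the experiments quantify whether contingent visual feedback improves self-imitation.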


Subjects
Body Image/psychology , Facial Expression , Feedback, Psychological , Imitative Behavior , Proprioception , Self Concept , Visual Perception , Adult , Association , Female , Humans , Male , Models, Psychological , Practice, Psychological , Principal Component Analysis