ABSTRACT
On a daily basis, humans interact with the outside world using judgments of sensorimotor confidence, constantly evaluating our actions for success. We ask, what sensory and motor-execution cues are used in making these judgments and when are they available? Two sources of temporally distinct information are prospective cues, available prior to the action (e.g., knowledge of motor noise and past performance), and retrospective cues specific to the action itself (e.g., proprioceptive measurements). We investigated the use of these two cues in two tasks, a secondary motor-awareness task and a main task in which participants reached toward a visual target with an unseen hand and then made a continuous judgment of confidence about the success of the reach. Confidence was reported by setting the size of a circle centered on the reach-target location, where a larger circle reflects lower confidence. Points were awarded if the confidence circle enclosed the true endpoint, with fewer points returned for larger circles. This incentivized accurate reaches and attentive reporting to maximize the score. We compared three Bayesian-inference models of sensorimotor confidence based on either prospective cues, retrospective cues, or both sources of information to maximize expected gain (i.e., an ideal-performance model). Our findings primarily showed two distinct strategies: participants either performed as ideal observers, using both prospective and retrospective cues to make the confidence judgment, or relied solely on prospective information, ignoring retrospective cues. Thus, participants can make use of retrospective cues, evidenced by the behavior observed in our motor-awareness task, but these cues are not always included in the computation of sensorimotor confidence.
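The scoring trade-off can be sketched numerically. Assuming, hypothetically, that reach endpoints scatter as an isotropic 2D Gaussian and that points fall off linearly with circle radius (neither the exact payoff function nor the noise model is given here), an ideal reporter would pick the circle size that maximizes expected gain:

```python
import math

def expected_gain(r, sigma, max_points=100.0, cost_per_unit=5.0):
    """Expected points for a confidence circle of radius r when reach
    endpoints scatter as an isotropic 2D Gaussian with SD sigma.
    P(endpoint inside circle) = 1 - exp(-r^2 / (2 sigma^2))."""
    p_hit = 1.0 - math.exp(-r**2 / (2.0 * sigma**2))
    points = max(0.0, max_points - cost_per_unit * r)
    return p_hit * points

def best_radius(sigma, radii=None):
    """Grid search for the radius that maximizes expected gain."""
    radii = radii or [0.1 * i for i in range(1, 201)]
    return max(radii, key=lambda r: expected_gain(r, sigma))
```

Under these illustrative assumptions, a noisier reach (larger sigma) warrants a larger, lower-paying circle, which is exactly the trade-off the task incentivizes.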
Subjects
Cues (Psychology), Judgment, Humans, Bayes Theorem, Prospective Studies, Retrospective Studies, Psychomotor Performance
ABSTRACT
Accurate motion perception requires that the visual system integrate the 2D retinal motion signals received by the two eyes into a single representation of 3D motion. However, most experimental paradigms present the same stimulus to the two eyes, signaling motion limited to a 2D fronto-parallel plane. Such paradigms are unable to dissociate the representation of 3D head-centric motion signals (i.e., 3D object motion relative to the observer) from the associated 2D retinal motion signals. Here, we used stereoscopic displays to present separate motion signals to the two eyes and examined their representation in visual cortex using fMRI. Specifically, we presented random-dot motion stimuli that specified various 3D head-centric motion directions. We also presented control stimuli, which matched the motion energy of the retinal signals, but were inconsistent with any 3D motion direction. We decoded motion direction from BOLD activity using a probabilistic decoding algorithm. We found that 3D motion direction signals can be reliably decoded in three major clusters in the human visual system. Critically, in early visual cortex (V1-V3), we found no significant difference in decoding performance between stimuli specifying 3D motion directions and the control stimuli, suggesting that these areas represent the 2D retinal motion signals, rather than 3D head-centric motion itself. In voxels in and surrounding hMT and IPS0 however, decoding performance was consistently superior for stimuli that specified 3D motion directions compared to control stimuli. Our results reveal the parts of the visual processing hierarchy that are critical for the transformation of retinal into 3D head-centric motion signals and suggest a role for IPS0 in their representation, in addition to its sensitivity to 3D object structure and static depth.
Subjects
Motion Perception, Visual Cortex, Humans, Retina/diagnostic imaging, Visual Perception, Visual Cortex/diagnostic imaging, Motion (Physics), Photic Stimulation
ABSTRACT
Perceptual confidence is an important internal signal about the certainty of our decisions and there is a substantial debate on how it is computed. We highlight three confidence metric types from the literature: observers either use 1) the full probability distribution to compute probability correct (Probability metrics), 2) point estimates from the perceptual decision process to estimate uncertainty (Evidence-Strength metrics), or 3) heuristic confidence from stimulus-based cues to uncertainty (Heuristic metrics). These metrics are rarely tested against one another, so we examined models of all three types on a suprathreshold spatial discrimination task. Observers were shown a cloud of dots sampled from a dot generating distribution and judged if the mean of the distribution was left or right of centre. In addition to varying the horizontal position of the mean, there were two sensory uncertainty manipulations: the number of dots sampled and the spread of the generating distribution. After every two perceptual decisions, observers made a confidence forced-choice judgement whether they were more confident in the first or second decision. Model results showed that the majority of observers were best-fit by either: 1) the Heuristic model, which used dot cloud position, spread, and number of dots as cues; or 2) an Evidence-Strength model, which computed the distance between the sensory measurement and discrimination criterion, scaled according to sensory uncertainty. An accidental repetition of some sessions also allowed for the measurement of confidence agreement for identical pairs of stimuli. This N-pass analysis revealed that human observers were more consistent than their best-fitting model would predict, indicating there are still aspects of confidence that are not captured by our modelling. As such, we propose confidence agreement as a useful technique for computational studies of confidence. 
Taken together, these findings highlight the idiosyncratic nature of confidence computations for complex decision contexts and the need to consider different potential metrics and transformations in the confidence computation.
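A minimal sketch of the Evidence-Strength family described above, assuming a known criterion and Gaussian measurement noise (function names and parameters are illustrative, not the paper's):

```python
def evidence_strength_confidence(measurement, criterion, sigma):
    """Evidence-Strength metric: distance of the sensory measurement
    from the discrimination criterion, scaled by sensory uncertainty."""
    return abs(measurement - criterion) / sigma

def confidence_choice(m1, sigma1, m2, sigma2, criterion=0.0):
    """Confidence forced choice: return 1 or 2, whichever decision
    carries the larger scaled distance from the criterion."""
    c1 = evidence_strength_confidence(m1, criterion, sigma1)
    c2 = evidence_strength_confidence(m2, criterion, sigma2)
    return 1 if c1 >= c2 else 2
```

With equal distances from the criterion, the decision made under lower sensory uncertainty wins the forced choice, which is the scaling property that distinguishes this metric from a raw-distance rule.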
Subjects
Cues (Psychology), Decision Making, Humans, Judgment, Probability, Uncertainty
ABSTRACT
To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. We compared participants' behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability; less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues and inference about how likely it is that the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration.
We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.
Subjects
Auditory Perception/physiology, Spatial Processing/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Attentional Bias/physiology, Brain/physiology, Causality, Computational Biology, Cues (Psychology), Female, Humans, Male, Neurological Models, Psychological Models, Photic Stimulation, Reproducibility of Results, Sound Localization/physiology, Space Perception/physiology, Young Adult
ABSTRACT
Optimal sensory decision-making requires the combination of uncertain sensory signals with prior expectations. The effect of prior probability is often described as a shift in the decision criterion. Can observers track sudden changes in probability? To answer this question, we used a change-point detection paradigm that is frequently used to examine behavior in changing environments. In a pair of orientation-categorization tasks, we investigated the effects of changing probabilities on decision-making. In both tasks, category probability was updated using a sample-and-hold procedure: probability was held constant for a period of time before jumping to another probability state that was randomly selected from a predetermined set of probability states. We developed an ideal Bayesian change-point detection model in which the observer marginalizes over both the current run length (i.e., time since the last change) and the current category probability. We compared this model to various alternative models that correspond to different strategies, from approximately Bayesian to simple heuristics, that the observers may have adopted to update their beliefs about probabilities. While a number of models provided decent fits to the data, model comparison favored two models: one in which probability is estimated by exponential averaging with a bias towards equal priors, consistent with a conservative bias, and a flexible variant of the Bayesian change-point detection model with incorrect beliefs. We interpret the former as a simpler, more biologically plausible explanation suggesting that the mechanism underlying the change of decision criterion is a combination of on-line estimation of prior probability and a stable, long-term equal-probability prior, thus operating at two very different timescales.
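The favored heuristic, exponential averaging with a conservative pull toward equal priors, is simple enough to sketch (the parameter values here are illustrative, not fitted):

```python
def estimate_probability(outcomes, alpha=0.1, w=0.8, p0=0.5):
    """Exponential averaging of category outcomes (1 = category A,
    0 = category B), then shrinkage toward the equal-probability
    prior 0.5 with weight (1 - w), modeling the conservative bias."""
    p = p0
    for o in outcomes:
        p = (1.0 - alpha) * p + alpha * o  # on-line running average
    return w * p + (1.0 - w) * 0.5         # long-term equal-prior pull
```

Even after a long run of one category, the reported probability stays short of 1.0 because of the stable equal-probability prior, capturing the two-timescale interpretation in the abstract.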
Subjects
Psychological Adaptation, Probability, Bayes Theorem, Decision Making, Humans, Task Performance and Analysis, Uncertainty
ABSTRACT
A fundamental and nearly ubiquitous feature of sensory encoding is that neuronal responses are strongly influenced by recent experience, or adaptation. Theoretical and computational studies have proposed that many adaptation effects may result in part from changes in the strength of normalization signals. Normalization is a "canonical" computation in which a neuron's response is modulated (normalized) by the pooled activity of other neurons. Here, we test whether adaptation can alter the strength of cross-orientation suppression, or masking, a paradigmatic form of normalization evident in primary visual cortex (V1). We made extracellular recordings of V1 neurons in anesthetized male macaques and measured responses to plaid stimuli composed of two overlapping, orthogonal gratings before and after prolonged exposure to two distinct adapters. The first adapter was a plaid consisting of orthogonal gratings and led to stronger masking. The second adapter presented the same orthogonal gratings in an interleaved manner and led to weaker masking. The strength of adaptation's effects on masking depended on the orientation of the test stimuli relative to the orientation of the adapters, but was independent of neuronal orientation preference. Changes in masking could not be explained by altered neuronal responsivity. Our results suggest that normalization signals can be strengthened or weakened by adaptation depending on the temporal contingencies of the adapting stimuli. Our findings reveal an interplay between two widespread computations in cortical circuits, adaptation and normalization, that enables flexible adjustments to the structure of the environment, including the temporal relationships among sensory stimuli. SIGNIFICANCE STATEMENT: Two fundamental features of sensory responses are that they are influenced by adaptation and that they are modulated by the activity of other nearby neurons via normalization.
Our findings reveal a strong interaction between these two aspects of cortical computation. Specifically, we show that cross-orientation masking, a form of normalization, can be strengthened or weakened by adaptation depending on the temporal contingencies between sensory inputs. Our findings support theoretical proposals that some adaptation effects may involve altered normalization and offer a network-based explanation for how cortex adjusts to current sensory demands.
Subjects
Physiological Adaptation/physiology, Nerve Net/physiology, Photic Stimulation/methods, Visual Cortex/physiology, Animals, Macaca fascicularis, Male, Random Allocation, Time Factors
ABSTRACT
The motor system executes actions in a highly stereotyped manner despite the high number of degrees of freedom available. Studies of motor adaptation leverage this fact by disrupting, or perturbing, visual feedback to measure how the motor system compensates. To elicit detectable effects, perturbations are often large compared to trial-to-trial reach endpoint variability. However, awareness of large perturbations can elicit qualitatively different compensation processes than unnoticeable ones can. The current experiment measures the perturbation detection threshold, and investigates how humans combine proprioception and vision to decide whether displayed reach endpoint errors are self-generated only, or are due to experimenter-imposed perturbation. We scaled or rotated the position of the visual feedback of center-out reaches to targets and asked subjects to indicate whether visual feedback was perturbed. Subjects detected perturbations when they were at least 1.5 times the standard deviation of trial-to-trial endpoint variability. In contrast to previous studies, subjects suboptimally combined vision and proprioception. Instead of using proprioceptive input, they responded based on the final (possibly perturbed) visual feedback. These results inform methodology in motor system experimentation, and more broadly highlight the ability to attribute errors to one's own motor output and combine visual and proprioceptive feedback to make decisions.
Subjects
Physiological Adaptation/physiology, Sensory Feedback/physiology, Motor Skills/physiology, Proprioception/physiology, Visual Perception/physiology, Adult, Awareness, Female, Humans, Male, Psychomotor Performance, Sensory Threshold, Young Adult
ABSTRACT
Humans often make decisions based on uncertain sensory information. Signal detection theory (SDT) describes detection and discrimination decisions as a comparison of stimulus "strength" to a fixed decision criterion. However, recent research suggests that current responses depend on the recent history of stimuli and previous responses, suggesting that the decision criterion is updated trial-by-trial. The mechanisms underpinning criterion setting remain unknown. Here, we examine how observers learn to set a decision criterion in an orientation-discrimination task under both static and dynamic conditions. To investigate mechanisms underlying trial-by-trial criterion placement, we introduce a novel task in which participants explicitly set the criterion, and compare it to a more traditional discrimination task, allowing us to model this explicit indication of criterion dynamics. In each task, stimuli were ellipses with principal orientations drawn from two categories: Gaussian distributions with different means and equal variance. In the covert-criterion task, observers categorized a displayed ellipse. In the overt-criterion task, observers adjusted the orientation of a line that served as the discrimination criterion for a subsequently presented ellipse. We compared performance to the ideal Bayesian learner and several suboptimal models that varied in both computational and memory demands. Under static and dynamic conditions, we found that, in both tasks, observers used suboptimal learning rules. In most conditions, a model in which a belief about the category means is derived from the recent history of samples fit the data best, both for individual observers and on average. Our results reveal dynamic adjustment of the discrimination criterion, even after prolonged training, and indicate how decision criteria are updated over time.
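The best-fitting rule, a criterion derived from the recent history of category samples, can be sketched as follows (the window size k is an illustrative choice, not a fitted parameter):

```python
def running_criterion(samples, labels, k=10):
    """Set the decision criterion halfway between the means of the
    last k samples seen from each of the two categories (0 and 1)."""
    recent = {0: [], 1: []}
    for x, lab in zip(samples, labels):
        recent[lab].append(x)
        recent[lab] = recent[lab][-k:]  # keep only the k most recent
    m0 = sum(recent[0]) / len(recent[0])
    m1 = sum(recent[1]) / len(recent[1])
    return 0.5 * (m0 + m1)
```

Because only the last k samples per category are retained, early outliers are forgotten, giving the trial-by-trial criterion drift the abstract describes.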
Subjects
Decision Making/physiology, Learning/physiology, Adult, Computational Biology, Computer Simulation, Female, Humans, Male, Memory, Orientation, Task Performance and Analysis, Uncertainty, Young Adult
ABSTRACT
Recognizing materials and understanding their properties is very useful, perhaps critical, in daily life as we encounter objects and plan our interactions with them. Visually derived estimates of material properties guide where and with what force we grasp an object. However, the estimation of material properties, such as glossiness, is a classic ill-posed problem. Image cues that we rely on to estimate gloss are also affected by shape, illumination and, in visual displays, tone mapping. Here, we focus on the latter two. We define some commonalities present in the structure of natural illumination and determine whether manipulation of these natural "signatures" impedes gloss constancy. We manipulate the illumination field to violate statistical regularities of natural illumination, such that light comes from below, or the luminance distribution is no longer skewed. These manipulations result in errors in perceived gloss. Similarly, tone mapping has a dramatic effect on perceived gloss. However, when objects are viewed against an informative (rather than plain gray) background that reflects these manipulations, there are some improvements to gloss constancy: in particular, observers are far less susceptible to the effects of tone mapping when judging gloss. We suggest that observers are sensitive to some very simple statistics of the environment when judging gloss.
Subjects
Lighting, Surface Properties, Visual Perception/physiology, Contrast Sensitivity/physiology, Cues (Psychology), Form Perception/physiology, Humans, Photic Stimulation
ABSTRACT
Discrimination of the direction of motion of a noisy stimulus is an example of sensory discrimination under uncertainty. For stimuli that are extended in time, reaction time is quicker for larger signal values (e.g., discrimination of opposite directions of motion compared with neighboring orientations) and larger signal strength (e.g., stimuli with higher contrast or motion coherence, that is, lower noise). The standard model of neural responses (e.g., in lateral intraparietal cortex) and reaction time for discrimination is drift-diffusion. This model makes two clear predictions. (1) The effects of signal strength and value on reaction time should interact multiplicatively because the diffusion process depends on the signal-to-noise ratio. (2) If the diffusion process is interrupted, as in a cued-response task, the time to decision after the cue should be independent of the strength of accumulated sensory evidence. In two experiments with human participants, we show that neither prediction holds. A simple alternative model is developed that is consistent with the results. In this estimate-then-decide model, evidence is accumulated until estimation precision reaches a threshold value. Then, a decision is made with duration that depends on the signal-to-noise ratio achieved by the first stage. SIGNIFICANCE STATEMENT: Sensory decision-making under uncertainty is usually modeled as the slow accumulation of noisy sensory evidence until a threshold amount of evidence supporting one of the possible decision outcomes is reached. Furthermore, it has been suggested that this accumulation process is reflected in neural responses, e.g., in lateral intraparietal cortex. We derive two behavioral predictions of this model and show that neither prediction holds. We introduce a simple alternative model in which evidence is accumulated until a sufficiently precise estimate of the stimulus is achieved, and then that estimate is used to guide the discrimination decision. 
This model is consistent with the behavioral data.
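The standard drift-diffusion account that both experiments test can be simulated in a few lines; this sketch reproduces only its basic property that a higher signal-to-noise ratio yields faster decisions (the bound, step size, and trial count are arbitrary choices, not fitted values):

```python
import random

def ddm_mean_rt(drift, bound=1.0, noise=1.0, dt=0.01, n_trials=500, seed=1):
    """Mean time for a drift-diffusion process starting at 0 to reach
    either +bound or -bound, estimated by Euler-Maruyama simulation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
            t += dt
        total += t
    return total / n_trials
```

For symmetric bounds the analytic mean decision time is (a / mu) * tanh(a * mu / sigma^2), which depends on drift and noise only through their ratio; this is the multiplicative interaction of signal strength and value that the behavioral data contradict.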
Subjects
Decision Making/physiology, Discrimination Learning/physiology, Psychological Discrimination/physiology, Neurological Models, Motion Perception/physiology, Parietal Lobe/physiology, Adult, Computer Simulation, Cues (Psychology), Female, Form Perception, Humans, Male, Statistical Models, Reaction Time/physiology, Sensory Threshold/physiology, Signal-to-Noise Ratio, Visual Perception/physiology
ABSTRACT
Adaptation to an oriented stimulus changes both the gain and preferred orientation of neural responses in V1. Neurons tuned near the adapted orientation are suppressed, and their preferred orientations shift away from the adapter. We propose a model in which weights of divisive normalization are dynamically adjusted to homeostatically maintain response products between pairs of neurons. We demonstrate that this adjustment can be performed by a very simple learning rule. Simulations of this model closely match existing data from visual adaptation experiments. We consider several alternative models, including variants based on homeostatic maintenance of response correlations or covariance, as well as feedforward gain-control models with multiple layers, and we demonstrate that homeostatic maintenance of response products provides the best account of the physiological data. SIGNIFICANCE STATEMENT: Adaptation is a phenomenon throughout the nervous system in which neural tuning properties change in response to changes in environmental statistics. We developed a model of adaptation that combines normalization (in which a neuron's gain is reduced by the summed responses of its neighbors) and Hebbian learning (in which synaptic strength, in this case divisive normalization, is increased by correlated firing). The model is shown to account for several properties of adaptation in primary visual cortex in response to changes in the statistics of contour orientation.
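The core computation, divisive normalization, in one common parameterization (the homeostatic weight-learning rule itself is not reproduced here; parameter names are illustrative):

```python
def normalized_response(drive, pool_drives, weights, gain=1.0, sigma=0.5):
    """Divisive normalization: a neuron's driven response is divided by
    a weighted sum of the pooled activity of other neurons plus a
    semisaturation constant sigma."""
    denom = sigma ** 2 + sum(w * d for w, d in zip(weights, pool_drives))
    return gain * drive / denom
```

Strengthening a normalization weight suppresses the neuron's response, which is the knob the proposed learning rule adjusts to keep pairwise response products constant.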
Subjects
Physiological Adaptation/physiology, Neurological Models, Neurons/physiology, Orientation/physiology, Visual Pattern Recognition/physiology, Visual Cortex/cytology, Action Potentials/physiology, Animals, Computer Simulation, Humans, Photic Stimulation, Synapses/physiology
ABSTRACT
We perceive a stable environment despite the fact that visual information is essentially acquired in a sequence of snapshots separated by saccadic eye movements. The resolution of these snapshots varies, high in the fovea and lower in the periphery, and thus the formation of a stable percept presumably relies on the fusion of information acquired at different resolutions. To test if, and to what extent, foveal and peripheral information are integrated, we examined human orientation-discrimination performance across saccadic eye movements. We found that humans perform best when an oriented target is visible both before (peripherally) and after a saccade (foveally), suggesting that humans integrate the two views. Integration relied on eye movements, as we found no evidence of integration when the target was artificially moved during stationary viewing. Perturbation analysis revealed that humans combine the two views using a weighted sum, with weights assigned based on the relative precision of foveal and peripheral representations, as predicted by ideal-observer models. However, our subjects displayed a systematic overweighting of the fovea, relative to the ideal observer, indicating that human integration across saccades is slightly suboptimal.
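The ideal-observer prediction referenced above is standard inverse-variance weighting; a sketch, with made-up estimates and variances:

```python
def combine_cues(x_fovea, var_fovea, x_periph, var_periph):
    """Reliability-weighted (ideal-observer) fusion of a foveal and a
    peripheral estimate: weights are normalized inverse variances, and
    the fused estimate has lower variance than either cue alone."""
    w_f = (1.0 / var_fovea) / (1.0 / var_fovea + 1.0 / var_periph)
    estimate = w_f * x_fovea + (1.0 - w_f) * x_periph
    combined_var = 1.0 / (1.0 / var_fovea + 1.0 / var_periph)
    return estimate, combined_var
```

The reported foveal overweighting corresponds to using a w_f larger than this optimal value, which sacrifices some of the precision benefit of integration.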
ABSTRACT
How do we find a target embedded in a scene? Within the framework of signal detection theory, this task is carried out by comparing each region of the scene with a "template," i.e., an internal representation of the search target. Here we ask what form this representation takes when the search target is a complex image with uncertain orientation. We examine three possible representations. The first is the matched filter. Such a representation cannot account for the ease with which humans can find a complex search target that is rotated relative to the template. A second representation attempts to deal with this by estimating the relative orientation of target and match and rotating the intensity-based template. No intensity-based template, however, can account for the ability to easily locate targets that are defined categorically and not in terms of a specific arrangement of pixels. Thus, we define a third template that represents the target in terms of image statistics rather than pixel intensities. Subjects performed a two-alternative, forced-choice search task in which they had to localize an image that matched a previously viewed target. Target images were texture patches. In one condition, match images were the same image as the target and distractors were a different image of the same textured material. In the second condition, the match image was of the same texture as the target (but different pixels) and the distractor was an image of a different texture. Match and distractor stimuli were randomly rotated relative to the target. We compared human performance to pixel-based, pixel-based with rotation, and statistic-based search models. The statistic-based search model was most successful at matching human performance. We conclude that humans use summary statistics to search for complex visual targets.
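The contrast between an intensity-based template and a statistic-based one can be illustrated with a toy example, using just the mean and variance as stand-ins for the much richer texture statistics the actual model uses:

```python
def pixel_match(a, b):
    """Matched-filter score: normalized dot product of two images
    (given as flat lists of equal length)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def stat_match(a, b):
    """Summary-statistic score: negative distance between (mean,
    variance) pairs; these statistics are invariant to rotation."""
    def stats(img):
        m = sum(img) / len(img)
        v = sum((x - m) ** 2 for x in img) / len(img)
        return m, v
    (ma, va), (mb, vb) = stats(a), stats(b)
    return -abs(ma - mb) - abs(va - vb)

def rot90(img, n):
    """Rotate an n x n image (flat, row-major) by 90 degrees."""
    return [img[(n - 1 - c) * n + r] for r in range(n) for c in range(n)]
```

Rotating the match image degrades the pixel-based score but leaves the statistic-based score untouched, which is the qualitative reason the statistic-based template survives the random rotations in the task.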
Subjects
Statistical Models, Visual Pattern Recognition/physiology, Cues (Psychology), Psychological Discrimination, Humans, Male, Perceptual Masking, Signal Detection (Psychological)
ABSTRACT
The fundus of the superior temporal sulcus (FST) in macaques is implicated in the processing of complex motion signals, yet a human homolog remains elusive. Here we considered potential localizers and evaluated their effectiveness in delineating putative FST (pFST) from hMT and MST, two nearby motion-sensitive areas in humans. Nine healthy participants underwent scanning sessions with 2D and 3D motion localizers, as well as population receptive field (pRF) mapping. We observed consistent anterior and inferior activation relative to hMT and MST in response to stimuli that contained coherent 3D, but not 2D, motion. Motion opponency and myelination measures further validated the functional and structural distinction between pFST and hMT/MST. At the same time, standard pRF mapping techniques that reveal the visual field organization of hMT/MST proved suboptimal for delineating pFST. Our findings provide a robust framework for localizing pFST in humans, and underscore its distinct functional role in motion processing.
ABSTRACT
Incorporation of dermoscopy and artificial intelligence (AI) is improving healthcare professionals' ability to diagnose melanoma earlier, but these algorithms often suffer from a "black box" issue, where decision-making processes are not transparent, limiting their utility for training healthcare providers. To address this, an automated approach for generating melanoma imaging biomarker cues (IBCs), which mimics the screening cues used by expert dermoscopists, was developed. This study created a one-minute learning environment in which dermatologists adopted a sensory cue integration algorithm to combine a single IBC with a risk score built on many IBCs, then immediately tested their performance in differentiating melanoma from benign nevi. Ten participants evaluated 78 dermoscopic images, comprising 39 melanomas and 39 nevi, first without IBCs and then with IBCs. Participants classified each image as melanoma or nevus in both experimental conditions, enabling direct comparative analysis through paired data. With IBCs, average sensitivity improved significantly from 73.69% to 81.57% (p = 0.0051), and average specificity improved from 60.50% to 67.25% (p = 0.059) for the diagnosis of melanoma. The index of discriminability (d') increased significantly by 0.47 (p = 0.002). Therefore, the incorporation of IBCs can significantly improve physicians' sensitivity in melanoma diagnosis. While more research is needed to validate this approach across other healthcare providers, its use may positively impact melanoma screening practices.
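The discriminability gain can be roughly checked from the group-mean rates under the equal-variance signal-detection model (the reported 0.47 was presumably computed per participant and then averaged, so the pooled value only approximately agrees):

```python
from statistics import NormalDist

def d_prime(sensitivity, specificity):
    """Index of discriminability from hit rate (sensitivity) and
    correct-rejection rate (specificity): d' = z(hit) - z(false alarm),
    where the false-alarm rate is 1 - specificity."""
    z = NormalDist().inv_cdf
    return z(sensitivity) - z(1.0 - specificity)
```

Plugging in the group means (73.69%/60.50% before, 81.57%/67.25% after) gives an increase of roughly 0.45, close to the per-participant figure reported in the study.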
ABSTRACT
Humans take into account their own movement variability as well as potential consequences of different movement outcomes in planning movement trajectories. When variability increases, planned movements are altered so as to optimize expected consequences of the movement. Past research has focused on the steady-state responses to changing conditions of movement under risk. Here, we study the dynamics of such strategy adjustment in a visuomotor decision task in which subjects reach toward a display with regions that lead to rewards and penalties, under conditions of changing uncertainty. In typical reinforcement learning tasks, subjects should base subsequent strategy by computing an estimate of the mean outcome (e.g., reward) in recent trials. In contrast, in our task, strategy should be based on a dynamic estimate of recent outcome uncertainty (i.e., squared error). We find that subjects respond to increased movement uncertainty by aiming movements more conservatively with respect to penalty regions, and that the estimate of uncertainty they use is well characterized by a weighted average of recent squared errors, with higher weights given to more recent trials.
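The uncertainty tracker that characterized subjects, a weighted average of recent squared errors with higher weights on more recent trials, amounts to one update per trial (the learning rate alpha is illustrative; the study fits the effective weighting of past trials):

```python
def uncertainty_estimate(errors, alpha=0.3):
    """Running estimate of outcome uncertainty: an exponentially
    weighted average of squared errors, so recent trials dominate."""
    v = 0.0
    for e in errors:
        v = (1.0 - alpha) * v + alpha * e * e
    return v
```

Because the weights decay geometrically, a burst of large errors late in the sequence raises the estimate far more than the same burst early on, which is the recency effect the data show.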
Subjects
Decision Making/physiology, Learning/physiology, Movement/physiology, Psychomotor Performance/physiology, Risk-Taking, Visual Perception/physiology, Cues (Psychology), Female, Humans, Male, Young Adult
ABSTRACT
Eye movements function to bring detailed information onto the high-resolution region of the retina. Previous research has shown that human observers select fixation points that maximize information acquisition and minimize target location uncertainty. In this study, we ask whether human observers choose the saccade endpoint that maximizes gain when there are explicit rewards associated with correctly detecting the target. Observers performed an 8-alternative forced-choice detection task for a contrast-defined target in noise. After a single saccade, observers indicated the target location. Each potential target location had an associated reward that was known to the observer. In some conditions, the reward at one location was higher than at the other locations. We compared human saccade endpoints to those of an ideal observer that maximizes expected gain given the respective human observer's visibility map, i.e., d' for target detection as a function of retinal location. Varying the location of the highest reward had a significant effect on human observers' distribution of saccade endpoints. Both human and ideal observers show a high density of saccades made toward the highest-rewarded and actual target locations. But humans' overall spatial distributions of saccade endpoints differed significantly from the ideal observer's, as they made a greater number of saccades to locations far from the highest-rewarded and actual target locations. Suboptimal choice of saccade endpoint, possibly in combination with suboptimal integration of information across saccades, had a significant effect on human observers' ability to correctly detect the target and maximize gain.
Subjects
Visual Pattern Recognition/physiology, Saccades/physiology, Choice Behavior, Female, Ocular Fixation/physiology, Humans, Male
ABSTRACT
Multisensory integration depends on causal inference about the sensory signals. We tested whether implicit causal-inference judgements pertain to entire objects or focus on task-relevant object features. Participants in our study judged virtual visual, haptic and visual-haptic surfaces with respect to two features-slant and roughness-against an internal standard in a two-alternative forced-choice task. Modelling of participants' responses revealed that the degree to which their perceptual judgements were based on integrated visual-haptic information varied unsystematically across features. For example, a perceived mismatch between visual and haptic roughness would not deter the observer from integrating visual and haptic slant. These results indicate that participants based their perceptual judgements on a feature-specific selection of information, suggesting that multisensory causal inference proceeds not at the object level but at the level of single object features. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Subjects
Touch Perception, Humans, Touch Perception/physiology, Visual Perception/physiology, Judgment
ABSTRACT
Coordinate systems for movement planning comprise an anchor point (e.g., retinocentric coordinates) and a representation (encoding) of the desired movement. One of two representations is often assumed: a final-position code describing the desired limb endpoint position, or a vector code describing movement direction and extent. The existence of movement-planning systems using both representations is controversial. In our experiments, participants completed reaches grouped by target location (providing practice for a final-position code) and the same reaches grouped by movement vector (providing vector-code practice). Target-grouped reaches resulted in the isotropic (circular) distribution of errors predicted for position-coded reaches. The identical reaches grouped by vector resulted in error ellipses aligned with the reach direction, as predicted for vector-coded reaches. By manipulating only recent movement history to provide better learning for one or the other movement code, we provide definitive evidence that both movement representations are used in the identical task.