1.
Iperception ; 12(1): 2041669520978670, 2021.
Article in English | MEDLINE | ID: mdl-33680418

ABSTRACT

The temporal binding window (TBW), which reflects the range of temporal offsets over which audiovisual stimuli are combined into a single percept, can be reduced through training. Our research aimed to investigate whether training-induced reductions in TBW size transfer across stimulus intensities. A total of 32 observers performed simultaneity judgements at two visual intensities with a fixed auditory intensity, before and after receiving audiovisual TBW training at just one of these two intensities. We show that training individuals with a high visual intensity reduced the size of the TBW for bright stimuli, but this improvement did not transfer to dim stimuli. The reduction in TBW can be explained by shifts in decision criteria. Those trained with the dim visual stimuli, however, showed no reduction in TBW. Our main finding is that perceptual improvements following training are specific to high-intensity stimuli, potentially highlighting limitations of proposed TBW training procedures.
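The criterion-shift account lends itself to a brief simulation. The sketch below is purely illustrative rather than the authors' analysis: an observer labels an audiovisual pair 'simultaneous' whenever the perceived offset (the true SOA plus Gaussian timing noise) falls within a decision-criterion window, and tightening that criterion after training narrows the measured TBW. All parameter values and names are assumptions.

```python
# Illustrative observer model (not the authors' analysis): a pair is judged
# "simultaneous" when the perceived offset falls inside a criterion window.
import numpy as np

rng = np.random.default_rng(0)
soas = np.arange(-400, 401, 50)          # audiovisual offsets in ms (hypothetical grid)
sensory_sd = 60.0                        # ms of internal timing noise (assumed)

def prop_simultaneous(criterion_ms, n_trials=500):
    """Proportion of 'simultaneous' responses at each SOA for a given criterion."""
    p = []
    for soa in soas:
        perceived = soa + rng.normal(0.0, sensory_sd, n_trials)
        p.append(np.mean(np.abs(perceived) < criterion_ms))
    return np.array(p)

def tbw_width(p, level=0.5):
    """Rough TBW estimate: range of SOAs where P('simultaneous') exceeds `level`."""
    inside = soas[p > level]
    return inside.max() - inside.min() if inside.size else 0.0

pre  = prop_simultaneous(criterion_ms=200.0)   # pre-training criterion (assumed)
post = prop_simultaneous(criterion_ms=120.0)   # tighter post-training criterion (assumed)
print(f"TBW pre:  {tbw_width(pre):.0f} ms")
print(f"TBW post: {tbw_width(post):.0f} ms")
```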

2.
PLoS One ; 13(11): e0206218, 2018.
Article in English | MEDLINE | ID: mdl-30412590

ABSTRACT

We actively maintain postural equilibrium in everyday life, and, although we are unaware of the underlying processing, there is increasing evidence for cortical involvement in this postural control. Converging evidence shows that we make appropriate use of 'postural anchors', for example static objects in the environment, to stabilise our posture. Visually evoked postural responses (VEPR) that are caused when we counteract the illusory perception of self-motion in space (vection) are modulated in the presence of postural anchors and therefore provide a convenient behavioural measure. The aim of this study is to evaluate the factors influencing visual appraisal of the suitability of postural anchors. We are specifically interested in the effect of perceived 'reality' in VR and the expected 'stability' of visual anchors. To explore the effect of 'reality' we introduced an accommodation-vergence conflict. We show that VEPR are appropriately modulated only when virtual visual 'anchors' are rendered such that vergence and accommodation cues are consistent. In a second experiment we directly test whether cognitive assessment of the likely stability of real perceptual anchors (we contrast a 'teapot on a stand' and a 'helium balloon') affects VEPR. We show that the perceived positional stability of environmental anchors modulates postural responses. Our results confirm previous findings showing that postural sway is modulated by the configuration of the environment and further show that an assessment of the stability and reality of the environment plays an important role in this process. On this basis we propose design guidelines for VR systems; in particular, we argue that accommodation-vergence conflicts should be minimised and that high-quality motion tracking and rendering are essential for high-fidelity VR.


Subject(s)
Cognition/physiology , Postural Balance/physiology , Posture/physiology , Vision, Ocular/physiology , Evoked Potentials, Visual/physiology , Humans , Motion , Motion Perception/physiology , Musculoskeletal Physiological Phenomena , Photic Stimulation , Virtual Reality
3.
Neuropsychologia ; 78: 51-62, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26386322

ABSTRACT

Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160 ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200 ms, 220-280 ms, and 350-500 ms after stimulus onset.
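As a rough illustration of the time-window comparison reported above (not the authors' pipeline; the array shapes, channel indices, and data are placeholders), congruent and incongruent ERPs can be averaged over occipito-parietal channels in the 135-160 ms window and compared with a paired test:

```python
# Minimal sketch with placeholder data: mean ERP amplitude, congruent vs
# incongruent, in the 135-160 ms window over occipito-parietal channels.
import numpy as np
from scipy.stats import ttest_rel

fs = 500                                   # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.6, 1 / fs)       # epoch time axis in seconds
n_subj, n_chan = 16, 64                    # hypothetical dataset size

rng = np.random.default_rng(1)
erp_congruent   = rng.normal(size=(n_subj, n_chan, times.size))   # placeholder averages
erp_incongruent = rng.normal(size=(n_subj, n_chan, times.size))

occipito_parietal = [54, 55, 60, 61, 62]   # channel indices (assumed montage)
window = (times >= 0.135) & (times <= 0.160)

def window_mean(erp):
    # average over the chosen channels and the 135-160 ms window -> one value per subject
    return erp[:, occipito_parietal, :][:, :, window].mean(axis=(1, 2))

t, p = ttest_rel(window_mean(erp_congruent), window_mean(erp_incongruent))
print(f"congruent vs incongruent, 135-160 ms: t({n_subj - 1}) = {t:.2f}, p = {p:.3f}")
```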


Subject(s)
Auditory Perception/physiology , Brain/physiology , Motion Perception/physiology , Vision Disparity/physiology , Acoustic Stimulation/methods , Adult , Discrimination, Psychological/physiology , Electroencephalography , Evoked Potentials , Female , Humans , Male , Neuropsychological Tests , Photic Stimulation/methods , Psychophysics
4.
Front Psychol ; 5: 552, 2014.
Article in English | MEDLINE | ID: mdl-24982641

ABSTRACT

Current neuroimaging techniques with high spatial resolution constrain participant motion so that many natural tasks cannot be carried out. The aim of this paper is to show how a time-locked correlation-analysis of cerebral blood flow velocity (CBFV) lateralization data, obtained with functional transcranial Doppler (fTCD) ultrasound, can be used to infer cerebral activation patterns across tasks. In a first experiment we demonstrate that the proposed analysis method results in data that are comparable with the standard Lateralization Index (LI) for within-task comparisons of CBFV patterns, recorded during cued word generation (CWG) at two difficulty levels. In the main experiment we demonstrate that the proposed analysis method shows correlated blood-flow patterns for two different cognitive tasks that are known to draw on common brain areas: CWG and Music Synthesis. We show that CBFV patterns for Music and CWG are correlated only for participants with prior musical training. CBFV patterns for tasks that draw on distinct brain areas, the Tower of London and CWG, are not correlated. The proposed methodology extends conventional fTCD analysis by including temporal information in the analysis of cerebral blood-flow patterns to provide a robust, non-invasive method to infer whether common brain areas are used in different cognitive tasks. It complements conventional high-resolution imaging techniques.
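A minimal sketch of the two quantities the abstract contrasts, using simulated CBFV envelopes rather than recorded data: a conventional lateralization index computed around the peak of the left-minus-right difference, and a time-locked correlation between the lateralization time courses of two tasks. The sampling rate, epoch timing, and response shapes below are assumptions, not the study's parameters.

```python
# Toy illustration: (1) a conventional LI from an epoch-averaged left-minus-right
# CBFV difference; (2) a time-locked correlation between two tasks' time courses.
import numpy as np

fs = 25                                   # fTCD envelope sampling rate in Hz (assumed)
t = np.arange(0, 40, 1 / fs)              # one 40-s epoch, cue at t = 0 (assumed timing)
rng = np.random.default_rng(2)

def epoch_average(n_epochs, lateral_gain):
    """Simulate an epoch-averaged left-minus-right CBFV difference (toy units)."""
    response = lateral_gain * np.exp(-((t - 12) ** 2) / (2 * 4 ** 2))   # toy haemodynamic bump
    return response + rng.normal(0, 0.05, t.size) / np.sqrt(n_epochs)

dV_word  = epoch_average(20, lateral_gain=1.2)    # cued word generation (hypothetical)
dV_music = epoch_average(20, lateral_gain=0.9)    # music synthesis (hypothetical)

# Conventional LI: mean difference in a 2-s window centred on its absolute maximum.
peak = np.argmax(np.abs(dV_word))
half = fs                                          # 1 s on either side of the peak
LI = dV_word[max(0, peak - half):peak + half].mean()

# Time-locked correlation between the two tasks' lateralization time courses.
r = np.corrcoef(dV_word, dV_music)[0, 1]
print(f"LI (word generation) = {LI:.2f}, time-locked correlation r = {r:.2f}")
```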

5.
PLoS One ; 8(6): e67651, 2013.
Article in English | MEDLINE | ID: mdl-23840760

ABSTRACT

Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high-quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high-fidelity virtual environments should mimic those seen in real situations, we propose to use the observed effect as a robust objective test for presence and fidelity in VR.
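One simple way to quantify a lateral VEPR, sketched below with simulated signals (the sampling rate, stimulus frequency, and sway model are assumptions and not the study's method), is to cross-correlate lateral sway with the lateral visual motion stimulus and report the gain and lag at the correlation peak:

```python
# Illustrative sketch: gain and lag of lateral sway relative to a lateral
# visual motion stimulus, estimated via normalized cross-correlation.
import numpy as np

fs = 60                                        # motion-tracker sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)                   # one 60-s trial
stim = np.sin(2 * np.pi * 0.2 * t)             # lateral visual motion at 0.2 Hz (assumed)

rng = np.random.default_rng(3)
sway = 0.4 * np.sin(2 * np.pi * 0.2 * (t - 0.5)) + rng.normal(0, 0.2, t.size)  # toy head sway

# Cross-correlate the zero-mean signals; positive lag means sway lags the stimulus.
s, w = stim - stim.mean(), sway - sway.mean()
lags = np.arange(-len(t) + 1, len(t)) / fs
xcorr = np.correlate(w, s, mode="full") / (np.std(s) * np.std(w) * len(t))
lag = lags[np.argmax(np.abs(xcorr))]
gain = np.std(w) / np.std(s)
print(f"sway gain = {gain:.2f}, lag = {lag:.2f} s, peak |r| = {np.abs(xcorr).max():.2f}")
```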


Subject(s)
Evoked Potentials, Visual/physiology , Posture/physiology , Adolescent , Adult , Environment , Female , Humans , Male , Middle Aged , Motion Perception/physiology , Orientation/physiology , Photic Stimulation/methods , Psychomotor Performance/physiology , Young Adult
6.
Neuropsychologia ; 51(9): 1716-25, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23727570

ABSTRACT

An extensive network of cortical areas is involved in multisensory object and action recognition. This network draws on inferior frontal, posterior temporal, and parietal areas; activity is modulated by familiarity and the semantic congruency of auditory and visual component signals, even if semantic incongruences are created by combining visual and auditory signals representing very different signal categories, such as speech and whole-body actions. Here we present results from a high-density ERP study designed to examine the time-course and source location of responses to semantically congruent and incongruent audiovisual speech and body actions, to explore whether the network involved in action recognition consists of a hierarchy of sequentially activated processing modules or a network of simultaneously active processing sites. We report two main results: (1) There are no significant early differences in the processing of congruent and incongruent audiovisual action sequences. The earliest difference between congruent and incongruent audiovisual stimuli occurs between 240 and 280 ms after stimulus onset in the left temporal region. Between 340 and 420 ms, semantic congruence modulates responses in central and right frontal areas. Late differences (after 460 ms) occur bilaterally in frontal areas. (2) Source localisation (dipole modelling and LORETA) reveals that an extended network encompassing inferior frontal, temporal, parasagittal, and superior parietal sites is simultaneously active between 180 and 420 ms to process auditory-visual action sequences. Early activation (before 120 ms) can be explained by activity in mainly sensory cortices. The simultaneous activation of an extended network between 180 and 420 ms is consistent with models that posit parallel processing of complex action sequences in frontal, temporal and parietal areas rather than models that postulate hierarchical processing in a sequence of brain regions.


Subject(s)
Brain/physiology , Motion Perception/physiology , Nerve Net , Semantics , Speech Perception/physiology , Acoustic Stimulation , Adult , Brain Mapping , Electroencephalography , Female , Humans , Male , Phonetics , Photic Stimulation , Young Adult
7.
PLoS One ; 7(9): e44381, 2012.
Article in English | MEDLINE | ID: mdl-22957068

ABSTRACT

We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models to quantify human performance in simulated environments, which have emerged from research in multisensory perception, provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study: detection and categorisation of auditory and kinematic motion cues (Experiment 1); performance evaluation in a target-tracking task (Experiment 2); and transferrable learning of auditory motion cues (Experiment 3). We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in Experiment 1 do not contribute to target-tracking performance in an in-flight refuelling simulation without training (Experiment 2). In Experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues.


Subject(s)
Auditory Perception/physiology , Aviation/methods , Cues , Visual Perception/physiology , Adult , Biomechanical Phenomena , Computer Simulation , Female , Humans , Male , Middle Aged , Models, Statistical , Motion , Orientation , Pattern Recognition, Visual , Reaction Time , Reproducibility of Results , User-Computer Interface
8.
Seeing Perceiving ; 25(1): 15-28, 2012.
Article in English | MEDLINE | ID: mdl-22353566

ABSTRACT

Biological motion is usually associated with highly correlated sensory signals from more than one modality: an approaching human walker will not only have a visual representation, namely an increase in the retinal size of the walker's image, but also a synchronous auditory signal since the walker's footsteps will grow louder. We investigated whether the multisensory processing of biological motion is subject to different constraints than ecologically invalid motion. Observers were presented with a visual point-light walker and/or synchronised auditory footsteps; the walker was either approaching the observer (looming motion) or walking away (receding motion). A scrambled point-light walker served as a control. Observers were asked to detect the walker's motion as quickly and as accurately as possible. In Experiment 1 we tested whether the reaction time advantage due to redundant information in the auditory and visual modalities is specific to biological motion. We found no evidence for such an effect: the reaction time reduction was accounted for by statistical facilitation for both biological and scrambled motion. In Experiment 2, we dissociated the auditory and visual information and tested whether inconsistent motion directions across the auditory and visual modalities yield longer reaction times in comparison to consistent motion directions. Here we find an effect specific to biological motion: motion incongruency leads to longer reaction times only when the visual walker is intact and recognisable as a human figure. If the figure of the walker is abolished by scrambling, motion incongruency has no effect on the speed of the observers' judgments. In conjunction with Experiment 1, this suggests that conflicting auditory-visual motion information about an intact human walker leads to interference, thereby delaying the response.


Subject(s)
Auditory Perception/physiology , Motion Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Humans , Middle Aged , Reaction Time , Walking , Young Adult
9.
J Cogn Neurosci ; 24(3): 575-87, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22126670

ABSTRACT

The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.


Subject(s)
Auditory Perception/physiology , Brain Mapping , Motion Perception/physiology , Motor Cortex/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Motor Cortex/blood supply , Oxygen/blood , Photic Stimulation , Reaction Time/physiology , Young Adult
11.
Exp Brain Res ; 213(2-3): 203-11, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21584626

ABSTRACT

Behavioural, neuroimaging and lesion studies show that face processing has a special role in human perception. The purpose of this EEG study was to explore whether auditory information influences visual face perception. We employed a 2 × 2 factorial design and presented subjects with visual stimuli that could be cartoon faces or scrambled faces, where size changes of one of the components, the mouth in the face condition, were either congruent or incongruent with the amplitude modulation of a simultaneously presented auditory signal. Our data show a significant main effect for signal congruence at an ERP peak around 135 ms and a significant main effect of face configuration at around 200 ms. The timing and scalp topology of both effects correspond well to previously reported data on the integration of non-redundant audio-visual stimuli and face-selective processing. Our analysis did not show any significant statistical interactions. This double dissociation suggests that the early component, at 135 ms, is sensitive to auditory-visual congruency but not to facial configuration and that the later component is sensitive to facial configuration but not to AV congruency. We conclude that facial configurational processing is not influenced by the congruence of simultaneous auditory signals and is independent of featural processing, where we see evidence for multisensory integration.


Subject(s)
Auditory Perception/physiology , Discrimination, Psychological/physiology , Evoked Potentials/physiology , Face , Pattern Recognition, Visual/physiology , Brain Mapping , Electroencephalography , Functional Laterality/physiology , Humans , Photic Stimulation , Reaction Time/physiology
12.
J Cogn Neurosci ; 23(9): 2291-308, 2011 Sep.
Article in English | MEDLINE | ID: mdl-20954938

ABSTRACT

Incongruencies between auditory and visual signals negatively affect human performance and cause selective activation in neuroimaging studies; therefore, they are increasingly used to probe audiovisual integration mechanisms. An open question is whether the increased BOLD response reflects computational demands in integrating mismatching low-level signals or reflects simultaneous unimodal conceptual representations of the competing signals. To address this question, we explore the effect of semantic congruency within and across three signal categories (speech, body actions, and unfamiliar patterns) for signals with matched low-level statistics. In a localizer experiment, unimodal (auditory and visual) and bimodal stimuli were used to identify ROIs. All three semantic categories cause overlapping activation patterns. We find no evidence for areas that show greater BOLD response to bimodal stimuli than predicted by the sum of the two unimodal responses. Conjunction analysis of the unimodal responses in each category identifies a network including posterior temporal, inferior frontal, and premotor areas. Semantic congruency effects are measured in the main experiment. We find that incongruent combinations of two meaningful stimuli (speech and body actions) but not combinations of meaningful with meaningless stimuli lead to increased BOLD response in the posterior STS (pSTS) bilaterally, the left SMA, the inferior frontal gyrus, the inferior parietal lobule, and the anterior insula. These interactions are not seen in premotor areas. Our findings are consistent with the hypothesis that pSTS and frontal areas form a recognition network that combines sensory categorical representations (in pSTS) with action hypothesis generation in inferior frontal gyrus/premotor areas. We argue that the same neural networks process speech and body actions.
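The two voxel-level criteria mentioned in the abstract can be written down compactly. The sketch below uses made-up arrays rather than the study's SPM pipeline: a minimum-statistic conjunction of the unimodal responses and a superadditivity test asking whether the bimodal response exceeds the sum of the unimodal responses. The threshold and array shapes are assumed values.

```python
# Toy voxel-level expression of the two criteria (not the study's SPM analysis):
# conjunction of unimodal activations and a superadditivity test (AV > A + V).
import numpy as np

rng = np.random.default_rng(4)
shape = (20, 20, 20)                       # toy voxel grid
t_aud = rng.normal(size=shape)             # placeholder unimodal t-maps
t_vis = rng.normal(size=shape)
beta_a, beta_v, beta_av = (rng.normal(size=shape) for _ in range(3))  # placeholder betas

t_crit = 3.1                               # assumed voxelwise threshold

# Conjunction (minimum-statistic): voxels where both unimodal maps exceed threshold.
conjunction = (t_aud > t_crit) & (t_vis > t_crit)

# Superadditivity: voxels whose bimodal response exceeds the sum of unimodal responses.
superadditive = beta_av > (beta_a + beta_v)

print(f"conjunction voxels: {conjunction.sum()}, superadditive voxels: {superadditive.sum()}")
```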


Subject(s)
Brain Mapping , Cerebral Cortex/physiology , Reaction Time/physiology , Semantics , Speech/physiology , Acoustic Stimulation/methods , Adult , Analysis of Variance , Auditory Perception , Cerebral Cortex/blood supply , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Oxygen , Photic Stimulation/methods , Visual Perception/physiology , Young Adult
13.
J Vis ; 10(14): 16, 2010 Dec 16.
Article in English | MEDLINE | ID: mdl-21163957

ABSTRACT

For moving targets, bimodal facilitation of reaction time has been observed for motion in the depth plane (C. Cappe, G. Thut, B. Romei, & M. M. Murray, 2009), but it is unclear whether analogous RT facilitation is observed for auditory-visual motion stimuli in the horizontal plane, as perception of horizontal motion relies on very different cues. Here we found that bimodal motion cues resulted in significant RT facilitation at threshold level, which could not be explained using an independent decisions model (race model). Bimodal facilitation was observed at suprathreshold levels when the RTs for suprathreshold unimodal stimuli were roughly equated, and significant RT gains were observed for direction-discrimination tasks with abrupt-onset motion stimuli and with motion preceded by a stationary phase. We found no speeded responses for bimodal signals when a motion signal in one modality was paired with a spatially co-localized stationary signal in the other modality, but faster response times could be explained by statistical facilitation when the motion signals traveled in opposite directions. These results strongly suggest that integration of motion cues led to the speeded bimodal responses. Finally, our results highlight the importance of matching the unimodal reaction times to obtain response facilitation for bimodal motion signals in the linear plane.
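A race-model (independent-decisions) check in the spirit described above can be sketched as follows, using simulated reaction times rather than the study's data; the distributions, sample sizes, and latency grid are illustrative assumptions. Facilitation beyond statistical facilitation shows up where the bimodal RT distribution exceeds the sum of the two unimodal distributions.

```python
# Toy race-model check: compare the bimodal RT CDF with the bound given by the
# sum of the unimodal CDFs at each latency.
import numpy as np

rng = np.random.default_rng(5)
n = 2000
rt_a  = rng.gamma(shape=8, scale=45, size=n)            # auditory-only RTs in ms (toy)
rt_v  = rng.gamma(shape=8, scale=50, size=n)            # visual-only RTs in ms (toy)
rt_av = np.minimum(rng.gamma(8, 45, n), rng.gamma(8, 50, n)) - 15   # toy bimodal RTs

def cdf(rts, grid):
    # empirical cumulative distribution evaluated on a latency grid
    return np.searchsorted(np.sort(rts), grid, side="right") / len(rts)

grid = np.arange(150, 700, 5)                            # latencies at which to evaluate CDFs
violation = cdf(rt_av, grid) - np.minimum(1.0, cdf(rt_a, grid) + cdf(rt_v, grid))
print(f"max race-model violation: {violation.max():.3f} "
      f"({'violated' if violation.max() > 0 else 'not violated'})")
```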


Subject(s)
Cues , Motion Perception/physiology , Reaction Time/physiology , Sound Localization/physiology , Acoustic Stimulation/methods , Adult , Female , Humans , Male , Photic Stimulation/methods , Young Adult
14.
Exp Brain Res ; 166(3-4): 538-47, 2005 Oct.
Article in English | MEDLINE | ID: mdl-16143858

ABSTRACT

It is well known that the detection thresholds for stationary auditory and visual signals are lower if the signals are presented bimodally rather than unimodally, provided the signals coincide in time and space. Recent work on auditory-visual motion detection suggests that the facilitation seen for stationary signals is not seen for motion signals. We investigate the conditions under which motion perception also benefits from the integration of auditory and visual signals. We show that the integration of cross-modal local motion signals that are matched in position and speed is consistent with thresholds predicted by a neural summation model. If the signals are presented in different hemi-fields, move in different directions, or both, then behavioural thresholds are predicted by a probability-summation model. We conclude that cross-modal signals have to be co-localised and co-incident for effective motion integration. We also argue that facilitation is only seen if the signals contain all localisation cues that would be produced by physical objects.
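The contrast between the two summation rules can be illustrated with a short calculation; the exact model formulations used in the paper may differ, and the sensitivities and criterion below are assumed values. Probability summation treats the two modalities as independent detectors, whereas neural summation combines the unimodal signals before a single decision.

```python
# Toy comparison of bimodal detection predictions under probability summation
# versus a neural-summation rule in which unimodal d-primes add before decision.
import numpy as np
from scipy.stats import norm

d_aud, d_vis = 1.0, 1.0            # assumed unimodal sensitivities (d')
criterion = 1.0                    # assumed common decision criterion

def hit_rate(d_prime, c=criterion):
    return norm.sf(c - d_prime)    # yes-rate for a signal of strength d' (equal-variance SDT)

p_a, p_v = hit_rate(d_aud), hit_rate(d_vis)
p_prob_sum = 1 - (1 - p_a) * (1 - p_v)        # probability summation: independent detectors
p_neural   = hit_rate(d_aud + d_vis)          # neural summation: signals combine pre-decision

print(f"unimodal hit rates: A = {p_a:.2f}, V = {p_v:.2f}")
print(f"probability summation prediction: {p_prob_sum:.2f}")
print(f"neural summation prediction:      {p_neural:.2f}")
```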


Subject(s)
Auditory Perception/physiology , Motion Perception/physiology , Space Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Algorithms , Auditory Threshold/physiology , Fixation, Ocular , Humans , Photic Stimulation , Sensory Thresholds/physiology , Sound Localization/physiology , Visual Fields/physiology
15.
Eur J Nucl Med Mol Imaging ; 30(7): 951-60, 2003 Jul.
Article in English | MEDLINE | ID: mdl-12748833

ABSTRACT

In patients scheduled for the resection of perisylvian brain tumours, knowledge of the cortical topography of language functions is crucial in order to avoid neurological deficits. We investigated the applicability of statistical parametric mapping (SPM) without stereotactic normalisation for individual preoperative language function brain mapping using positron emission tomography (PET). Seven right-handed adult patients with left-sided brain tumours (six frontal and one temporal) underwent 12 oxygen-15 labelled water PET scans during overt verb generation and rest. Individual activation maps were calculated for P<0.005 and P<0.001 without anatomical normalisation and overlaid onto the individuals' magnetic resonance images for preoperative planning. Activations corresponding to Broca's and Wernicke's areas were found in five and six cases, respectively, for P<0.005 and in three and six cases, respectively, for P<0.001. One patient with a glioma located in the classical Broca's area without aphasic symptoms presented an activation of the adjacent inferior frontal cortex and of a right-sided area homologous to Broca's area. Four additional patients with left frontal tumours also presented activations of the right-sided Broca's homologue; two of these showed aphasic symptoms and two only a weak or no activation of Broca's area. Other frequently observed activations included bilaterally the superior temporal gyri, prefrontal cortices, anterior insulae, motor areas and the cerebellum. The middle and inferior temporal gyri were activated predominantly on the left. An SPM group analysis (P<0.05, corrected) in patients with left frontal tumours confirmed the activation pattern shown by the individual analyses. We conclude that SPM analyses without stereotactic normalisation offer a promising alternative for analysing individual preoperative language function brain mapping studies. The observed right frontal activations agree with proposed reorganisation processes, but they may also reflect an unspecific recruitment of the right-sided Broca's homologue in the effort to perform the task.
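The core statistical step, stripped of the SPM machinery, amounts to a voxelwise comparison of activation against rest scans in the subject's native space, thresholded at an uncorrected P<0.005. The sketch below is a rough stand-in with placeholder images and assumed dimensions, not the SPM implementation used in the study.

```python
# Rough stand-in for the individual-subject analysis: voxelwise paired t-test of
# activation vs rest scans, thresholded at uncorrected p < 0.005.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(6)
n_scans, shape = 6, (30, 30, 20)                 # 6 activation/rest pairs, toy voxel grid
activation = rng.normal(100, 5, size=(n_scans, *shape))   # placeholder rCBF images
rest       = rng.normal(100, 5, size=(n_scans, *shape))

t_map, p_map = ttest_rel(activation, rest, axis=0)
suprathreshold = (p_map < 0.005) & (t_map > 0)   # one-sided activation map
print(f"voxels above threshold: {suprathreshold.sum()}")
```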


Subject(s)
Brain Mapping/methods , Brain Neoplasms/diagnostic imaging , Cerebral Cortex/diagnostic imaging , Language , Preoperative Care/methods , Tomography, Emission-Computed/methods , Adult , Brain Neoplasms/physiopathology , Cerebral Cortex/physiopathology , Female , Humans , Male , Middle Aged , Models, Biological , Models, Statistical , Speech