ABSTRACT
Recent work suggests that the adult human brain is highly adaptable in its sensory processing. In this context, it has also been suggested that structural "blueprints" may fundamentally constrain neuroplastic change, e.g. in response to sensory deprivation. Here, we trained 12 blind participants and 14 sighted participants in echolocation over a 10-week period, and used MRI in a pre-post design to measure functional and structural brain changes. We found that blind participants and sighted participants together showed a training-induced increase in activation in left and right V1 in response to echoes, a finding difficult to reconcile with the view that sensory cortex is strictly organized by modality. Further, blind participants and sighted participants showed a training-induced increase in activation in right A1 in response to sounds per se (i.e. not echo-specific), and this was accompanied by an increase in gray matter density in right A1 in blind participants and in adjacent acoustic areas in sighted participants. The similarity in functional results between sighted participants and blind participants is consistent with the idea that reorganization may be governed by similar principles in the two groups, yet our structural analyses also showed differences between the groups, suggesting that a more nuanced view may be required.
Subject(s)
Auditory Cortex , Blindness , Magnetic Resonance Imaging , Visual Cortex , Humans , Blindness/physiopathology , Blindness/diagnostic imaging , Male , Adult , Female , Auditory Cortex/diagnostic imaging , Auditory Cortex/physiology , Auditory Cortex/physiopathology , Visual Cortex/diagnostic imaging , Visual Cortex/physiology , Young Adult , Neuronal Plasticity/physiology , Acoustic Stimulation , Brain Mapping , Middle Aged , Auditory Perception/physiology , Echolocation/physiology
ABSTRACT
In the investigation of the brain areas involved in human spatial navigation, the traditional focus has been on visually guided navigation in sighted people. Consequently, it is unclear whether the involved areas also support navigational abilities in other modalities. We explored this possibility by testing whether the occipital place area (OPA), a region associated with visual boundary-based navigation in sighted people, has a similar role in echo-acoustically guided navigation in blind human echolocators. We used fMRI to measure brain activity in 6 blind echolocation experts (EEs; five males, one female), 12 blind controls (BCs; six males, six females), and 14 sighted controls (SCs; eight males, six females) as they listened to prerecorded echolocation sounds that conveyed either a route taken through one of three maze environments, a scrambled (i.e., spatiotemporally incoherent) control sound, or a no-echo control sound. We found significantly greater activity in the OPA of EEs, but not the control groups, when they listened to the coherent route sounds relative to the scrambled sounds. This provides evidence that the OPA of the human navigation brain network is not strictly tied to the visual modality but can be recruited for nonvisual navigation. We also found that EEs, but not BCs or SCs, recruited early visual cortex for processing of echo acoustic information. This is consistent with the recent notion that the human brain is organized flexibly by task rather than by specific modalities.
SIGNIFICANCE STATEMENT There has been much research on the brain areas involved in visually guided navigation, but we do not know whether the same or different brain regions are involved when blind people use a sense other than vision to navigate. In this study, we show that one part of the brain (occipital place area) known to play a specific role in visually guided navigation is also active in blind human echolocators when they use reflected sound to navigate their environment. This finding opens up new ways of understanding how people navigate, and informs our ability to provide rehabilitative support to people with vision loss.
Subject(s)
Blindness , Echolocation , Male , Animals , Humans , Female , Vision, Ocular , Auditory Perception , Occipital Lobe , Magnetic Resonance Imaging
ABSTRACT
Human speech and vocalizations in animals are rich in joint spectrotemporal (S-T) modulations, wherein acoustic changes in both frequency and time are functionally related. In principle, the primate auditory system could process these complex dynamic sounds based on either an inseparable representation of S-T features or, alternatively, a separable representation. The separability hypothesis implies an independent processing of spectral and temporal modulations. We collected comparative data on the S-T hearing sensitivity in humans and macaque monkeys to a wide range of broadband dynamic spectrotemporal ripple stimuli employing a yes-no signal-detection task. Ripples were systematically varied, as a function of density (spectral modulation frequency), velocity (temporal modulation frequency), or modulation depth, to cover a listener's full S-T modulation sensitivity, derived from a total of 87 psychometric ripple detection curves. Audiograms were measured to confirm normal hearing. We determined hearing thresholds, reaction time distributions, and S-T modulation transfer functions (MTFs), both at the ripple detection thresholds and at suprathreshold modulation depths. Our psychophysically derived MTFs are consistent with the hypothesis that both monkeys and humans employ analogous perceptual strategies: S-T acoustic information is primarily processed in a separable manner. Singular value decomposition (SVD), however, revealed a small, but consistent, inseparable spectral-temporal interaction. Finally, SVD analysis of the known visual spatiotemporal contrast sensitivity function (CSF) highlights that human vision is space-time inseparable to a much larger extent than is the case for S-T sensitivity in hearing. Thus, the specificity with which the primate brain encodes natural sounds appears to be less strict than is required to adequately deal with natural images.
NEW & NOTEWORTHY We provide comparative data on primate audition of naturalistic sounds comprising hearing thresholds, reaction time distributions, and spectral-temporal modulation transfer functions. Our psychophysical experiments demonstrate that auditory information is primarily processed in a spectral-temporal-independent manner by both monkeys and humans. Singular value decomposition of known visual spatiotemporal contrast sensitivity, in comparison to our auditory spectral-temporal sensitivity, revealed a striking contrast in how the brain encodes natural sounds as opposed to natural images, as vision appears to be space-time inseparable.
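A compact way to illustrate the separability analysis described in this abstract is to apply a singular value decomposition to a spectrotemporal modulation transfer function and measure how much of its variance the first (rank-one, i.e., fully separable) component captures. The sketch below uses synthetic data and an illustrative separability index; it is not the authors' analysis code, and the Gaussian MTF shape is an assumption.

```python
import numpy as np

# Synthetic MTF sampled over spectral density (cyc/oct) and velocity (Hz).
densities = np.linspace(0.25, 4.0, 16)      # spectral modulation frequencies
velocities = np.linspace(2.0, 64.0, 16)     # temporal modulation frequencies

# Assumed separable (outer-product) MTF plus a small inseparable perturbation.
spectral_profile = np.exp(-(np.log2(densities / 1.0)) ** 2)
temporal_profile = np.exp(-((np.log2(velocities / 8.0) / 1.5) ** 2))
mtf = np.outer(spectral_profile, temporal_profile)
mtf += 0.05 * np.random.default_rng(0).standard_normal(mtf.shape)

# SVD: a perfectly separable MTF is rank one, so the first singular value
# carries all of the variance.  alpha close to 1 means "mostly separable".
singular_values = np.linalg.svd(mtf, compute_uv=False)
alpha = singular_values[0] ** 2 / np.sum(singular_values ** 2)
print(f"separability index alpha = {alpha:.3f}")
```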
Subject(s)
Speech Perception , Time Perception , Animals , Humans , Haplorhini , Auditory Perception , Hearing , Acoustic Stimulation/methods
ABSTRACT
Bodily resizing illusions typically use visual and/or tactile inputs to produce a vivid experience of one's body changing size. Naturalistic auditory input (an input that reflects the natural sounds of a stimulus) has been used to increase illusory experience during the rubber hand illusion, whilst non-naturalistic auditory input can influence estimations of finger length. We aimed to use a non-naturalistic auditory input during a hand-based resizing illusion using augmented reality, to assess whether the addition of an auditory input would increase both subjective illusion strength and measures of performance-based tasks. Forty-four participants completed the following three conditions: no finger stretching, finger stretching without tactile feedback, and finger stretching with tactile feedback. Half of the participants had an auditory input throughout all the conditions, whilst the other half did not. After each condition, the participants were given one of the following three performance tasks: stimulated (right) hand dot touch task, non-stimulated (left) hand dot touch task, and a ruler judgement task. Dot tasks involved participants reaching for the location of a virtual dot, whereas the ruler task required participants to estimate the length of their own finger on a ruler whilst the hand was hidden from view. After all trials, the participants completed a questionnaire capturing subjective illusion strength. The addition of auditory input increased subjective illusion strength for manipulations without tactile feedback but not those with tactile feedback. No facilitatory effects of audio were found for any performance task. We conclude that adding auditory input to illusory finger stretching increased subjective illusory experience in the absence of tactile feedback but did not affect performance-based measures.
Subject(s)
Illusions , Touch Perception , Humans , Touch , Proprioception , Hand , Visual Perception , Body Image
ABSTRACT
Diet can influence cognitive functioning in older adults and is a modifiable risk factor for cognitive decline. However, it is unknown if an association exists between diet and lower-level processes in the brain underpinning cognition, such as multisensory integration. We investigated whether temporal multisensory integration is associated with daily intake of fruit and vegetables (FV) or products high in fat/sugar/salt (FSS) in a large sample (N = 2,693) of older adults (mean age = 64.06 years, SD = 7.60; 56% female) from The Irish Longitudinal Study on Ageing (TILDA). Older adults completed a Food Frequency Questionnaire from which the total number of daily servings of FV and FSS items respectively was calculated. Older adults' susceptibility to the Sound Induced Flash Illusion (SIFI), measured at three audio-visual Stimulus Onset Asynchronies (SOAs) of 70, 150 and 230 ms, indexed the temporal precision of audio-visual integration. Older adults who self-reported a higher daily consumption of FV were less susceptible to the SIFI at the longest versus shortest SOAs (i.e. increased temporal precision) compared to those reporting the lowest daily consumption (p = .013). In contrast, older adults reporting a higher daily consumption of FSS items were more susceptible to the SIFI at the longer versus shortest SOAs (i.e. reduced temporal precision) compared to those reporting the lowest daily consumption (p < .001). The temporal precision of multisensory integration is differentially associated with levels of daily consumption of FV versus products high in FSS, consistent with broader evidence that habitual diet is associated with brain health.
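To make the SOA-dependent susceptibility measure concrete, the hedged sketch below computes the proportion of illusion trials at each SOA and the short-minus-long difference used as a temporal-precision index. The trial format, probabilities, and scoring are illustrative assumptions, not TILDA's actual coding.

```python
import numpy as np

# Illustrative SIFI trials: (soa_ms, reported_two_flashes) for one synthetic participant.
rng = np.random.default_rng(1)
soas = np.repeat([70, 150, 230], 20)
# Assume the illusion weakens with SOA for this synthetic observer.
p_illusion = {70: 0.7, 150: 0.5, 230: 0.3}
reports = np.array([rng.random() < p_illusion[s] for s in soas])

susceptibility = {s: reports[soas == s].mean() for s in (70, 150, 230)}
# A larger drop from the shortest to the longest SOA = higher temporal precision.
precision_index = susceptibility[70] - susceptibility[230]
print(susceptibility, f"precision index = {precision_index:.2f}")
```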
Subject(s)
Diet , Fruit , Humans , Female , Male , Aged , Middle Aged , Longitudinal Studies , Vegetables , Cognition , Ireland , Aging/physiology , Nutritional Status , Auditory Perception
ABSTRACT
Our percept of the world is not solely determined by what we perceive and process at a given moment in time, but also depends on what we processed recently. In the present study, we investigate whether the perceived emotion of a spoken sentence is contingent upon the emotion of an auditory stimulus on the preceding trial (i.e., serial dependence). To this end, participants were exposed to spoken sentences that varied in emotional affect through prosody ranging from 'happy' to 'fearful'. Participants were instructed to rate the emotion. We found a positive serial dependence for emotion processing whereby the perceived emotion was biased towards the emotion on the preceding trial. When we introduced 'no-go' trials (i.e., no rating was required), we found a negative serial dependence when participants knew in advance to withhold their response on a given trial (Experiment 2) and a positive serial dependence when participants received the information to withhold their response after the stimulus presentation (Experiment 3). We therefore established a robust serial dependence for emotion processing in speech and introduce a methodology to disentangle perceptual from post-perceptual processes. This approach can be applied to the vast majority of studies investigating sequential dependencies to separate positive from negative serial dependence.
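One standard way to quantify a serial dependence like the one reported above is to regress the current rating error on the difference between the previous and current stimulus values; a positive slope indicates attraction toward the preceding trial. The sketch below is a generic illustration on synthetic ratings under stated assumptions, not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 500
stimulus = rng.uniform(0.0, 1.0, n_trials)   # assumed prosody morph level (0 = happy, 1 = fearful)

# Synthetic observer whose rating is pulled 15% toward the previous stimulus, plus noise.
rating = stimulus.copy()
rating[1:] += 0.15 * (stimulus[:-1] - stimulus[1:]) + 0.05 * rng.standard_normal(n_trials - 1)

error = rating[1:] - stimulus[1:]
delta_prev = stimulus[:-1] - stimulus[1:]
slope = np.polyfit(delta_prev, error, 1)[0]   # > 0 => positive (attractive) serial dependence
print(f"serial-dependence slope = {slope:.3f}")
```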
Subject(s)
Emotions , Speech Perception , Humans , Female , Male , Adult , Young Adult , Speech Perception/physiology
ABSTRACT
While several methods have been proposed to assess the influence of continuous visual cues in parallel numerosity estimation, the impact of temporal magnitudes on sequential numerosity judgments has been largely ignored. To overcome this issue, we extend a recently proposed framework that makes it possible to separate the contribution of numerical and non-numerical information in numerosity comparison by introducing a novel stimulus space designed for sequential tasks. Our method systematically varies the temporal magnitudes embedded into event sequences through the orthogonal manipulation of numerosity and two latent factors, which we designate as "duration" and "temporal spacing". This allows us to measure the contribution of finer-grained temporal features on numerosity judgments in several sensory modalities. We validate the proposed method in two different experiments spanning the visual and auditory modalities: results show that adult participants discriminated sequences primarily by relying on numerosity, with similar acuity in the visual and auditory modalities. However, in both modalities participants were similarly influenced by non-numerical cues, such as the total duration of the stimuli, suggesting that temporal cues can significantly bias numerical processing. Our findings highlight the need to carefully consider the continuous properties of numerical stimuli in a sequential mode of presentation as well, with particular relevance in multimodal and cross-modal investigations. We provide the complete code for creating sequential stimuli and analyzing participants' responses.
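The orthogonal stimulus space described in this abstract can be illustrated with a small generator that crosses numerosity with two temporal factors. The parameterization below (item duration and inter-onset interval standing in for the "duration" and "temporal spacing" factors) is an assumption for illustration; the authors' released code defines the actual space.

```python
import itertools

def make_sequence(numerosity, item_duration, onset_interval):
    """Return (onset, offset) times in ms for one sequence of discrete events."""
    return [(i * onset_interval, i * onset_interval + item_duration)
            for i in range(numerosity)]

# Orthogonal manipulation: every numerosity appears at every duration/spacing level.
numerosities = (8, 12, 16)
durations = (20, 40, 80)          # ms per event ("duration" factor, assumed)
spacings = (120, 180, 240)        # ms between onsets ("temporal spacing" factor, assumed)

stimuli = {combo: make_sequence(*combo)
           for combo in itertools.product(numerosities, durations, spacings)}
print(len(stimuli), "sequences;", stimuli[(8, 20, 120)][:3])
```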
Subject(s)
Judgment , Humans , Female , Male , Adult , Judgment/physiology , Young Adult , Cues , Visual Perception/physiology , Auditory Perception/physiology , Photic Stimulation , Time Factors
ABSTRACT
Processing auditory sequences involves multiple brain networks and is crucial to complex perception associated with music appreciation and speech comprehension. We used time-resolved cortical imaging in a pitch change detection task to detail the underlying nature of human brain network activity, at the rapid time scales of neurophysiology. In response to the tone sequences presented to participants, we observed slow inter-regional signaling at the pace of tone presentations (2-4 Hz) that was directed from auditory cortex toward both inferior frontal and motor cortices. Symmetrically, motor cortex manifested directed influence onto auditory and inferior frontal cortices via bursts of faster (15-35 Hz) activity. These bursts occurred precisely at the expected latencies of each tone in a sequence. This expression of interdependency between slow/fast neurophysiological activity yielded a form of local cross-frequency phase-amplitude coupling in auditory cortex, whose strength varied dynamically and peaked when pitch changes were anticipated. We clarified the mechanistic relevance of these observations in relation to behavior by including a group of individuals afflicted by congenital amusia, as a model of altered function in processing sound sequences. In amusia, we found a depression of inter-regional slow signaling toward motor and inferior frontal cortices, and a chronic overexpression of slow/fast phase-amplitude coupling in auditory cortex. These observations are compatible with a misalignment between the respective neurophysiological mechanisms of stimulus encoding and internal predictive signaling, which was absent in controls. In summary, our study provides a functional and mechanistic account of neurophysiological activity for predictive, sequential timing of auditory inputs.
SIGNIFICANCE STATEMENT Auditory sequences are processed by extensive brain networks, involving multiple systems. In particular, fronto-temporal brain connections participate in the encoding of sequential auditory events, but so far their study has been limited to static depictions. This study details the nature of oscillatory brain activity involved in these inter-regional interactions in human participants. It demonstrates how directed, polyrhythmic oscillatory interactions between auditory and motor cortical regions provide a functional account for predictive timing of incoming items in an auditory sequence. In addition, we show the functional relevance of these observations in relation to behavior, with data from both normal hearing participants and a rare cohort of individuals afflicted by congenital amusia, which we considered here as a model of altered function in processing sound sequences.
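The slow/fast phase-amplitude coupling described above is commonly quantified with a mean-vector-length modulation index: band-pass the signal at 2-4 Hz for phase and 15-35 Hz for amplitude, then measure how strongly the fast envelope depends on the slow phase. The sketch below is a generic implementation of that textbook measure on synthetic data, not the study's exact pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 500.0
t = np.arange(0, 20, 1 / fs)
slow = np.sin(2 * np.pi * 3 * t)                      # 3 Hz slow carrier
fast = (1 + 0.6 * slow) * np.sin(2 * np.pi * 25 * t)  # 25 Hz bursts riding the slow phase
signal = slow + fast + 0.3 * np.random.default_rng(3).standard_normal(t.size)

def bandpass(x, lo, hi):
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

phase = np.angle(hilbert(bandpass(signal, 2, 4)))        # phase of the slow band
amplitude = np.abs(hilbert(bandpass(signal, 15, 35)))    # envelope of the fast band
modulation_index = np.abs(np.mean(amplitude * np.exp(1j * phase))) / amplitude.mean()
print(f"phase-amplitude coupling MI = {modulation_index:.3f}")
```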
Subject(s)
Auditory Cortex , Auditory Perceptual Disorders , Acoustic Stimulation/methods , Auditory Cortex/physiology , Brain , Humans , Pitch Perception/physiology
ABSTRACT
Understanding neural function requires quantification of the sensory signals that an animal's brain evolved to interpret. These signals in turn depend on the morphology and mechanics of the animal's sensory structures. Although the house mouse (Mus musculus) is one of the most common model species used in neuroscience, the spatial arrangement of its facial sensors has not yet been quantified. To address this gap, the present study quantifies the facial morphology of the mouse, with a particular focus on the geometry of its vibrissae (whiskers). The study develops equations that establish relationships between the three-dimensional (3D) locations of whisker basepoints, whisker geometry (arclength, curvature) and the 3D angles at which the whiskers emerge from the face. Additionally, the positions of facial sensory organs are quantified relative to bregma-lambda. Comparisons with the Norway rat (Rattus norvegicus) indicate that when normalized for head size, the whiskers of these two species have similar spacing density. The rostral-caudal distances between facial landmarks of the rat are a factor of ~2.0 greater than in the mouse, while the scale of bilateral distances is larger and more variable. We interpret these data to suggest that the larger size of rats compared with mice is a derived (apomorphic) trait. As rodents are increasingly important models in behavioral neuroscience, the morphological model developed here will help researchers generate naturalistic, multimodal patterns of stimulation for neurophysiological experiments and allow the generation of synthetic datasets and simulations to close the loop between brain, body and environment.
Subject(s)
Brain , Vibrissae , Rats , Mice , Animals , Vibrissae/physiology , Touch/physiology
ABSTRACT
Event-related potentials (ERPs) associated with the involuntary orientation of (bottom-up) attention toward an unexpected sound are of larger amplitude in high dream recallers (HR) than in low dream recallers (LR) during passive listening, suggesting different attentional functioning. We measured bottom-up and top-down attentional performance and their cerebral correlates in 18 HR (11 women, age = 22.7 years, dream recall frequency [DRF] = 5.3 days with a dream recall per week) and 19 LR (10 women, age = 22.3 years, DRF = 0.2) using EEG and the Competitive Attention Task. Between-group differences were found in ERPs but not in behavior. The results show that HR present larger ERPs to distracting sounds than LR even during active listening, arguing for enhanced bottom-up processing of irrelevant sounds. HR also presented larger contingent negative variation during target expectancy and larger P3b to target sounds than LR, pointing to enhanced recruitment of top-down attention. The attentional balance seems preserved in HR since their performances are not altered, but possibly at a higher resource cost. In HR, increased bottom-up processes would favor dream recall through awakening facilitation during sleep, and enhanced top-down processes may foster dream recall through increased awareness and/or short-term memory stability of dream content.
Subject(s)
Evoked Potentials , Sleep , Adult , Auditory Perception , Electroencephalography , Female , Humans , Memory, Short-Term , Mental Recall , Young Adult
ABSTRACT
As the world becomes more urbanized, more people are exposed to traffic, and the risks associated with higher exposure to road traffic noise increase. Excessive exposure to environmental noise could potentially interfere with functional maturation of the auditory brain in developing individuals. The aim of the present study was to assess the association between exposure to annual average road traffic noise (LAeq) in schools and functional connectivity of key elements of the central auditory pathway in schoolchildren. A total of 229 children from 34 representative schools in the city of Barcelona with ages between 8 and 12 years (49.2% girls) were evaluated. LAeq was obtained as the mean of measurements taken on two consecutive days inside classrooms before lessons started, following standard procedures, to obtain an indicator of long-term road traffic noise levels. A region-of-interest functional connectivity Magnetic Resonance Imaging (MRI) approach was adopted. Functional connectivity maps were generated for the inferior colliculus, medial geniculate body of the thalamus and primary auditory cortex as key levels of the central auditory pathway. Road traffic noise in schools was significantly associated with stronger connectivity between the inferior colliculus and a bilateral thalamic region adjacent to the medial geniculate body, and with stronger connectivity between the medial geniculate body and a bilateral brainstem region adjacent to the inferior colliculus. Such a functional connectivity strengthening effect did not extend to the cerebral cortex. The anatomy of the association, implicating subcortical relays, suggests that prolonged road traffic noise exposure in developing individuals may accelerate maturation in the basic elements of the auditory pathway. Future research is warranted to establish whether such a faster maturation in early pathway levels may ultimately reduce the developing potential in the whole auditory system.
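When sound levels in dB from repeated measurement days are combined into a single long-term indicator, they are conventionally averaged energetically rather than arithmetically. The snippet below shows that standard energetic average; the abstract does not state which averaging the study used for its two classroom days, so treat this as the conventional formula rather than the authors' exact procedure, and the level values as hypothetical.

```python
import numpy as np

def combine_laeq(levels_db):
    """Energetic (logarithmic) average of equivalent continuous sound levels."""
    levels_db = np.asarray(levels_db, dtype=float)
    return 10.0 * np.log10(np.mean(10.0 ** (levels_db / 10.0)))

day1, day2 = 52.3, 55.1              # hypothetical classroom LAeq values in dB(A)
print(f"combined LAeq = {combine_laeq([day1, day2]):.1f} dB(A)")
```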
Subject(s)
Auditory Pathways , Noise, Transportation , Child , Female , Humans , Male , Noise, Transportation/adverse effects , Geniculate Bodies , Cities , Schools , Environmental Exposure
ABSTRACT
Local circuit neurons are present in the thalamus of all vertebrates, where they are considered inhibitory. They play an important role in computation and influence the transmission of information from the thalamus to the telencephalon. In mammals, the percentage of local circuit neurons in the dorsal lateral geniculate nucleus remains relatively constant across a variety of species. In contrast, the numbers of local circuit neurons in the ventral division of the medial geniculate body in mammals vary significantly depending on the species examined. To explain these observations, the numbers of local circuit neurons in these two nuclei were investigated by reviewing the literature on mammals and on their respective homologs in sauropsids, and by providing additional data on a crocodilian. Local circuit neurons are present in the dorsal geniculate nucleus of sauropsids just as is the case for this nucleus in mammals. However, sauropsids lack local circuit neurons in the auditory thalamic nuclei homologous to the ventral division of the medial geniculate body. A cladistic analysis of these results suggests that differences in the numbers of local circuit neurons in the dorsal lateral geniculate nucleus of amniotes reflect an elaboration of these local circuit neurons as a result of evolution from a common ancestor. In contrast, the numbers of local circuit neurons in the ventral division of the medial geniculate body changed independently in several mammalian lineages.
Subject(s)
Thalamic Nuclei , Thalamus , Animals , Geniculate Bodies , Mammals , Neurons
ABSTRACT
The fission and fusion illusions provide measures of multisensory integration. The sound-induced tap fission illusion occurs when a tap is paired with two distractor sounds, resulting in the perception of two taps; the sound-induced tap fusion illusion occurs when two taps are paired with a single sound, resulting in the perception of a single tap. Using these illusions, we measured integration in three groups of children (9-, 11-, and 13-year-olds) and compared them with a group of adults. Based on accuracy, we derived a measure of the magnitude of the illusion and used a signal detection analysis to estimate perceptual discriminability and decisional criterion. All age groups showed a significant fission illusion, whereas only the three groups of children showed a significant fusion illusion. When compared with adults, the 9-year-olds showed larger fission and fusion illusions (i.e., reduced discriminability and greater bias), whereas the 11-year-olds were adult-like for fission but showed some differences for fusion: significantly worse discriminability and marginally greater magnitude and criterion. The 13-year-olds were adult-like on all measures. Based on the pattern of data, we speculate that the developmental trajectories for fission and fusion differ. We discuss these developmental results in the context of three non-mutually exclusive theoretical frameworks: sensory dominance, maximum likelihood estimation, and causal inference.
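The signal detection quantities mentioned above (perceptual discriminability d' and decisional criterion c) can be computed from hit and false-alarm rates in the usual way. The sketch below is the standard equal-variance Gaussian formulation with a log-linear correction for extreme rates; the trial counts are invented for illustration and are not the study's data.

```python
from scipy.stats import norm

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Equal-variance SDT with a log-linear correction to avoid rates of 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)   # (d', criterion c)

# Hypothetical fission-trial counts: "two taps" treated as the signal response.
d_prime, criterion = dprime_criterion(hits=28, misses=12, false_alarms=18, correct_rejections=22)
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```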
Subject(s)
Illusions , Touch Perception , Adult , Child , Humans , Visual Perception , Acoustic Stimulation/methods , Auditory Perception , Photic Stimulation/methods
ABSTRACT
We used a variant of cued auditory task switching to investigate task preparation and its relation to response-set overlap. Previous studies found increased interference with overlapping response sets across tasks relative to non-overlapping motor response sets. In the present experiments, participants classified either the pitch or the loudness of a simple tone as low or high; hence, both tasks were constructed around common underlying integrated semantic categories ranging from low to high. Manual responses overlapped in both category and modality for both tasks in Experiment 1A, whereas each task was related to a specific response category and response modality (manual vs. vocal) in Experiment 1B. Focusing on the manual responses in both experiments, the data showed that non-overlapping response sets (Experiment 1B) resulted in a decreased congruency effect, suggesting reduced response-based crosstalk and thus better task shielding, but at the same time switch costs were increased, suggesting less efficient switching between task sets. Moreover, varying preparation time (cue-stimulus interval, CSI) showed that a long CSI led to better performance overall. Our results thus suggest that when non-overlapping response sets share common semantic categories across tasks, there is no general benefit over overlapping response sets.
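For readers unfamiliar with the dependent measures, the toy sketch below shows how switch costs and the congruency effect are typically computed from trial-level reaction times (RT on task-switch minus task-repeat trials, and RT on incongruent minus congruent trials). The numbers and variable names are illustrative, not the experiment's data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400
switch = rng.random(n) < 0.5          # task switch vs. task repetition
congruent = rng.random(n) < 0.5       # response-congruent vs. incongruent trials

# Synthetic RTs (ms): baseline + assumed switch cost (60 ms) + congruency effect (30 ms).
rt = 550 + 60 * switch + 30 * (~congruent) + 40 * rng.standard_normal(n)

switch_cost = rt[switch].mean() - rt[~switch].mean()
congruency_effect = rt[~congruent].mean() - rt[congruent].mean()
print(f"switch cost = {switch_cost:.0f} ms, congruency effect = {congruency_effect:.0f} ms")
```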
Subject(s)
Cues , Psychomotor Performance , Humans , Reaction Time/physiology , Psychomotor Performance/physiology , Acoustic Stimulation/methods , Photic Stimulation/methods
ABSTRACT
One of the key objectives in developing IoT applications is to automatically detect and identify human activities of daily living (ADLs). Mobile phone users are becoming more accepting of sharing data captured by various built-in sensors. In this work, we process sounds detected by smartphones. We present a hierarchical identification system to recognize ADLs by detecting and identifying certain sounds taking place in a complex audio situation (AS). Three major categories of sound are discriminated in terms of signal duration: persistent background noise (PBN), non-impulsive long sounds (NILS), and impulsive sound (IS). We first analyze audio signals in a situation-aware manner and then map the sounds of daily living (SDLs) to ADLs. A new hierarchical audible event (AE) recognition approach is proposed that classifies atomic audible actions (AAs), then computes the energy of the pre-classified atomic AAs within one AE session, and finally outputs the maximum-likelihood ADL label. Our experiments demonstrate that the proposed hierarchical methodology is effective in recognizing SDLs and, thus, also in detecting ADLs, with remarkable performance relative to other known baseline systems.
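The hierarchical decision described in this abstract (classify atomic audible actions, pool their energy over an audible-event session, then pick the most likely ADL) can be sketched as below. The class labels, the SDL-to-ADL mapping, and the stand-in frame classifier are all assumptions for illustration; the paper's actual features and classifiers are not reproduced here.

```python
import numpy as np

# Hypothetical mapping from sounds of daily living (SDLs) to activities (ADLs).
SDL_TO_ADL = {"water_running": "washing", "chopping": "cooking",
              "sizzling": "cooking", "toothbrush": "grooming"}

def classify_frame(frame):
    """Stand-in atomic audible-action classifier: returns (label, frame energy)."""
    labels = list(SDL_TO_ADL)
    return labels[int(abs(frame.sum())) % len(labels)], float(np.sum(frame ** 2))

def recognize_adl(frames):
    """Pool per-class energy across one audible-event session, return max-likelihood ADL."""
    energy_per_adl = {}
    for frame in frames:
        label, energy = classify_frame(frame)
        adl = SDL_TO_ADL[label]
        energy_per_adl[adl] = energy_per_adl.get(adl, 0.0) + energy
    return max(energy_per_adl, key=energy_per_adl.get)

frames = np.random.default_rng(5).standard_normal((50, 1024))  # fake audio frames
print(recognize_adl(frames))
```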
Subject(s)
Activities of Daily Living , Sound , Humans , Hearing , Auditory Perception , Noise
ABSTRACT
OBJECTIVE: To design and develop a Portable Auditory Localization Acclimation Training (PALAT) system capable of producing psychoacoustically accurate localization cues; evaluate the training effect against a proven full-scale, laboratory-grade system under three listening conditions; and determine if the PALAT system is sensitive to differences among electronic level-dependent hearing protection devices (HPDs). BACKGROUND: In-laboratory auditory localization training has demonstrated the ability to improve localization performance with the open (natural) ear, that is, unoccluded, and while wearing HPDs. The military requires a portable system capable of imparting similar training benefits to those demonstrated in laboratory experiments. METHOD: In a full-factorial repeated measures design experiment, 12 audiometrically normal participants completed localization training and testing using an identical, optimized training protocol on two training systems under three listening conditions (open ear, TEP-100, and ComTac™ III). Statistical tests were performed on mean absolute accuracy score and front-back reversal errors. RESULTS: No statistical difference existed between the PALAT and laboratory-grade DRILCOM systems on the two dependent localization accuracy measures at any stage of training. In addition, the PALAT system detected the same localization performance differences among the three listening conditions. CONCLUSION: The PALAT system imparted similar training benefits to the DRILCOM system and was sensitive to HPD localization performance differences. APPLICATION: The user-operable PALAT system and optimized training protocol can be employed by the military, law enforcement, and various industries to improve auditory localization performance in conditions where auditory situation awareness is critical to safety.
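The two dependent measures in this evaluation, absolute localization accuracy and front-back reversals, can be scored from target and response azimuths roughly as follows. The reversal criterion used here (a response closer to the target's front-back mirror image, excluding targets near the interaural axis) is a common convention and an assumption, not necessarily the scoring rule used in the study.

```python
import numpy as np

def absolute_error(target_deg, response_deg):
    """Smallest angular difference between target and response azimuths (0-180 deg)."""
    diff = np.abs(np.asarray(response_deg) - np.asarray(target_deg)) % 360.0
    return np.where(diff > 180.0, 360.0 - diff, diff)

def is_front_back_reversal(target_deg, response_deg, exclusion_deg=10.0):
    """Response mirrored about the interaural (90-270 deg) axis, away from that axis."""
    target = np.asarray(target_deg) % 360.0
    response = np.asarray(response_deg) % 360.0
    mirrored = (180.0 - response) % 360.0
    near_axis = np.minimum(np.abs(target - 90.0), np.abs(target - 270.0)) < exclusion_deg
    closer_to_mirror = absolute_error(target, mirrored) < absolute_error(target, response)
    return closer_to_mirror & ~near_axis

targets = np.array([30.0, 150.0, 300.0])
responses = np.array([150.0, 160.0, 290.0])   # the first response mirrors the 30-deg target
print(absolute_error(targets, responses), is_front_back_reversal(targets, responses))
```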
ABSTRACT
Perception of sub-second auditory event timing supports multisensory integration, and speech and music perception and production. Neural populations tuned for the timing (duration and rate) of visual events were recently described in several human extrastriate visual areas. Here we ask whether the brain also contains neural populations tuned for auditory event timing, and whether these are shared with visual timing. Using 7T fMRI, we measured responses to white noise bursts of changing duration and rate. We analyzed these responses using neural response models describing different parametric relationships between event timing and neural response amplitude. This revealed auditory timing-tuned responses in the primary auditory cortex, and auditory association areas of the belt, parabelt and premotor cortex. While these areas also showed tonotopic tuning for auditory pitch, pitch and timing preferences were not consistently correlated. Auditory timing-tuned response functions differed between these areas, though without clear hierarchical integration of responses. The similarity of auditory and visual timing-tuned responses, together with the lack of overlap between the areas showing these responses for each modality, suggests that modality-specific responses to event timing are computed similarly but from different sensory inputs, and then transformed differently to suit the needs of each modality.
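As a concrete illustration of what a "timing-tuned" neural response model involves, the sketch below fits a two-dimensional Gaussian tuning function over log event duration and log event period to simulated voxel response amplitudes. This is a generic stand-in under stated assumptions, not the specific model family or fitting procedure used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def timing_tuning(X, pref_log_dur, pref_log_period, sigma, gain):
    """Gaussian response tuning in log(duration) x log(period) space."""
    log_dur, log_period = X
    d2 = (log_dur - pref_log_dur) ** 2 + (log_period - pref_log_period) ** 2
    return gain * np.exp(-d2 / (2 * sigma ** 2))

# Stimulus grid: durations 50-800 ms, periods (1/rate) 100-1600 ms, in seconds.
durations = np.repeat(np.log([0.05, 0.1, 0.2, 0.4, 0.8]), 5)
periods = np.tile(np.log([0.1, 0.2, 0.4, 0.8, 1.6]), 5)

# Synthetic voxel preferring ~200 ms events at a ~400 ms period, plus noise.
rng = np.random.default_rng(6)
responses = timing_tuning((durations, periods), np.log(0.2), np.log(0.4), 0.8, 1.0)
responses += 0.05 * rng.standard_normal(responses.size)

params, _ = curve_fit(timing_tuning, (durations, periods), responses,
                      p0=[np.log(0.1), np.log(0.2), 1.0, 1.0])
print("fitted preferred duration (ms):", 1000 * np.exp(params[0]))
```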
Subject(s)
Auditory Cortex , Music , Acoustic Stimulation , Auditory Cortex/physiology , Auditory Perception/physiology , Brain Mapping , Humans , Magnetic Resonance Imaging
ABSTRACT
A widely used example of the intricate (yet poorly understood) intertwining of multisensory signals in the brain is the audiovisual bounce inducing effect (ABE). In this effect, two identical objects move along the azimuth with uniform motion in opposite directions. The perceptual interpretation of the motion is ambiguous and is modulated if a transient (sound) is presented in coincidence with the point of overlap of the two objects' motion trajectories. This phenomenon has long been written off as reflecting simple attentional or decision-making mechanisms, although the neurological underpinnings of the effect are not well understood. Using behavioural metrics concurrently with event-related fMRI, we show that sound-induced modulations of motion perception can be further modulated by changing the motion dynamics of the visual targets. The phenomenon engages the posterior parietal cortex and the parieto-insular-vestibular cortical complex, with a close correspondence of activity in these regions with behaviour. These findings suggest that the insular cortex is engaged in deriving a probabilistic perceptual solution through the integration of multisensory data.
Subject(s)
Motion Perception , Vestibule, Labyrinth , Auditory Perception , Brain , Humans , Motion , Photic Stimulation , Visual Perception
ABSTRACT
Several factors influencing dream recall frequency (DRF) have been identified, but some remain poorly understood. One way to study DRF is to compare cognitive processes in low and high dream recallers (LR and HR). According to the arousal-retrieval model, long-term memory encoding of a dream requires wakefulness while its multisensory short-term memory is still alive. Previous studies showed contradictory results concerning short-term memory differences between LR and HR. It has also been found that extreme DRFs are associated with different electrophysiological traits related to attentional processes. However, to date, there is no evidence for attentional differences between LR and HR at the behavioural level. To further investigate attention and working memory in HR and LR, we used a newly developed, challenging paradigm called "MEMAT" (for MEMory and ATtention), which allows the study of the interaction between selective attention and working memory during memory encoding of non-verbal auditory stimuli. We manipulated the difficulty of the to-be-ignored distractor and of the memory task. The performance of the two groups was not differentially impacted by working memory load. However, HR were slower and less accurate in the presence of a hard-to-ignore rather than an easy-to-ignore distractor, while LR were much less impacted by distractor difficulty. Therefore, we provide behavioural evidence for reduced resistance to hard-to-ignore distractors in HR. Using a challenging task, we show, for the first time, attentional differences between HR and LR at the behavioural level. The impact of auditory attention and working memory on dream recall is discussed.
Subject(s)
Memory, Short-Term , Mental Recall , Attention/physiology , Humans , Memory, Short-Term/physiology , Mental Recall/physiology , Wakefulness/physiology
ABSTRACT
The aim of this work was to evaluate whether the angular elevation of a sound source could generate auditory cues that improve auditory distance perception (ADP) in a similar way to that previously reported for the visual modality. For this purpose, we compared ADP curves obtained with sources located both at the listeners' ears and at ground level. Our hypothesis was that participants could exploit the relation between elevation and distance for ground-level sources (which are geometrically linked), so we expected them to perceive the distances of these sources more accurately than those of sources at ear level. However, the responses obtained with sources located at ground level were almost identical to those obtained at the height of the listeners' ears, showing that, under the conditions of our experiment, auditory elevation cues do not influence auditory distance perception.