Results 1 - 20 of 30
1.
bioRxiv ; 2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38559254

ABSTRACT

Purpose: This study investigates the effect of parallel stimulus presentation on the place specificity of the auditory brainstem response (ABR) in human listeners. Frequency-specific stimuli do not guarantee a response from the place on the cochlea corresponding only to that characteristic frequency - especially for brief and high-level stimuli. Adding masking noise yields responses that are more place specific, and a prior modeling study has suggested similar effects when multiple frequency-specific stimuli are presented in parallel. We tested this hypothesis experimentally here, comparing the place specificity of responses to serial and parallel stimuli at two stimulus frequencies and three stimulus rates. Methods: Parallel ABR (pABR) stimuli were presented alongside high-pass filtered noise with a varied cutoff frequency. Serial presentation was also tested by isolating and presenting single-frequency stimulus trains from the pABR ensemble. Latencies of the ABRs were examined to assess place specificity of responses. Response bands were derived by subtracting responses from different high-pass noise conditions. The response amplitude from each derived response band was then used to determine how much individual frequency regions of the auditory system were contributing to the overall response. Results: We found that parallel presentation improves place specificity of ABRs for the lower stimulus frequency and at higher stimulus rates. At the higher stimulus frequency, serial and parallel presentation were equally place specific. Conclusion: Parallel presentation can provide more place-specific responses than serial presentation for lower stimulus frequencies. The improvement increases with higher stimulus rates and is in addition to the pABR's primary benefit of faster test times.
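The derived-band analysis described above can be sketched in a few lines. This is an illustrative sketch, not the authors' code: `responses` is a hypothetical mapping from high-pass-noise cutoff frequency to an averaged ABR waveform, and each derived band is the difference between the responses from adjacent cutoff conditions.

```python
import numpy as np

def derived_bands(responses, cutoffs):
    """Derive band-specific ABRs from high-pass-noise conditions.

    With masking noise high-passed at cutoff f, the response reflects
    cochlear regions below f. Subtracting the response for the lower
    cutoff from the response for the adjacent higher cutoff isolates
    the contribution of the frequency band between the two cutoffs.
    """
    bands = {}
    for lo, hi in zip(cutoffs[:-1], cutoffs[1:]):
        bands[(lo, hi)] = responses[hi] - responses[lo]
    return bands

def band_amplitude(waveform):
    # peak-to-trough amplitude of one derived-band response
    return waveform.max() - waveform.min()
```

The per-band amplitudes then quantify how much each cochlear frequency region contributes to the overall response, as in the abstract's analysis.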

2.
Sci Rep ; 14(1): 789, 2024 01 08.
Article in English | MEDLINE | ID: mdl-38191488

ABSTRACT

Music and speech are encountered daily and are unique to human beings. Both are transformed by the auditory pathway from an initial acoustical encoding to higher level cognition. Studies of cortex have revealed distinct brain responses to music and speech, but differences may emerge in the cortex or may be inherited from different subcortical encoding. In the first part of this study, we derived the human auditory brainstem response (ABR), a measure of subcortical encoding, to recorded music and speech using two analysis methods. The first method, described previously and acoustically based, yielded very different ABRs between the two sound classes. The second method, developed here and based on a physiological model of the auditory periphery, instead gave highly correlated responses to music and speech. We determined the superiority of the second method through several metrics, suggesting there is no appreciable impact of stimulus class (i.e., music vs speech) on the way stimulus acoustics are encoded subcortically. In this study's second part, we considered the cortex. Our new analysis method made the cortical responses to music and speech more similar, but differences remained. The subcortical and cortical results taken together suggest that there is evidence for stimulus-class-dependent processing of music and speech at the cortical but not subcortical level.


Subject(s)
Music , Humans , Speech , Acoustics , Auditory Pathways , Benchmarking
3.
J Clin Neurophysiol ; 2023 Oct 30.
Article in English | MEDLINE | ID: mdl-37934074

ABSTRACT

PURPOSE: The neurologic examination of patients undergoing extracorporeal membrane oxygenation (ECMO) is crucial for evaluating irreversible encephalopathy but is often obscured by sedation or neuromuscular blockade. Noninvasive neuromonitoring modalities including diffuse correlation spectroscopy and EEG measure cerebral perfusion and neuronal function, respectively. We hypothesized that encephalopathic ECMO patients with a greater degree of irreversible cerebral injury demonstrate less correlation between electrographic activity and cerebral perfusion than those whose encephalopathy is attributable to medications. METHODS: We performed a prospective observational study of adults undergoing ECMO who underwent simultaneous continuous EEG and diffuse correlation spectroscopy monitoring. The (alpha + beta)/delta ratio (ABDR) and alpha/delta ratio (ADR) derived from quantitative EEG analysis were correlated with frontal cortical blood flow index (BFI). Patients who awakened and followed commands during sedation pauses were included in group 1, whereas patients who could not follow commands for most neuromonitoring sessions were placed in group 2. ABDR-BFI and ADR-BFI correlations were compared between the groups. RESULTS: Ten patients (five in each group) underwent 39 concomitant continuous EEG and diffuse correlation spectroscopy monitoring sessions. Four patients (80%) in each group received some form of analgosedation during neuromonitoring. The ABDR-BFI correlation was significantly lower in group 2 than in group 1 (left: 0.05 vs. 0.52, P = 0.03; right: -0.12 vs. 0.39, P = 0.04). The group 2 ADR-BFI correlation was lower only over the right hemisphere (-0.06 vs. 0.47, P = 0.04). CONCLUSIONS: Correlations between ABDR and BFI were decreased in encephalopathic ECMO patients compared with awake ones, regardless of analgosedation use. The combined use of EEG and diffuse correlation spectroscopy may have utility in monitoring cerebral function in ECMO patients.
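A minimal sketch of the quantitative-EEG ratio correlated with blood flow here, assuming conventional band edges (delta 1-4 Hz, alpha 8-13 Hz, beta 13-30 Hz) and a plain FFT power estimate; the study's actual processing pipeline is not specified in the abstract.

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    # total power in [f_lo, f_hi) Hz from a one-sided FFT spectrum
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    return psd[(freqs >= f_lo) & (freqs < f_hi)].sum()

def abdr(epoch, fs):
    """(alpha + beta)/delta power ratio for one EEG epoch."""
    alpha = band_power(epoch, fs, 8, 13)
    beta = band_power(epoch, fs, 13, 30)
    delta = band_power(epoch, fs, 1, 4)
    return (alpha + beta) / delta

def qeeg_bfi_correlation(ratios, bfi):
    # Pearson correlation between ratio and blood-flow-index time series
    return np.corrcoef(ratios, bfi)[0, 1]
```

In this sketch, a session's ABDR-BFI correlation would be `qeeg_bfi_correlation` applied to the per-epoch ratio series and the concurrently sampled BFI series.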

4.
Trends Hear ; 27: 23312165231207235, 2023.
Article in English | MEDLINE | ID: mdl-37847849

ABSTRACT

Audiovisual integration of speech can benefit the listener by not only improving comprehension of what a talker is saying but also helping a listener select a particular talker's voice from a mixture of sounds. Binding, an early integration of auditory and visual streams that helps an observer allocate attention to a combined audiovisual object, is likely involved in processing audiovisual speech. Although temporal coherence of stimulus features across sensory modalities has been implicated as an important cue for non-speech stimuli (Maddox et al., 2015), the specific cues that drive binding in speech are not fully understood due to the challenges of studying binding in natural stimuli. Here we used speech-like artificial stimuli that allowed us to isolate three potential contributors to binding: temporal coherence (are the face and the voice changing synchronously?), articulatory correspondence (do visual faces represent the correct phones?), and talker congruence (do the face and voice come from the same person?). In a trio of experiments, we examined the relative contributions of each of these cues. Normal hearing listeners performed a dual task in which they were instructed to respond to events in a target auditory stream while ignoring events in a distractor auditory stream (auditory discrimination) and detecting flashes in a visual stream (visual detection). We found that viewing the face of a talker who matched the attended voice (i.e., talker congruence) offered a performance benefit. We found no effect of temporal coherence on performance in this task, prompting an important recontextualization of previous findings.


Subject(s)
Speech Perception , Speech , Humans , Auditory Perception , Cues , Sound , Visual Perception
5.
Trends Hear ; 27: 23312165231205719, 2023.
Article in English | MEDLINE | ID: mdl-37807857

ABSTRACT

While each place on the cochlea is most sensitive to a specific frequency, it will generally respond to a sufficiently high-level stimulus over a wide range of frequencies. This spread of excitation can introduce errors in clinical threshold estimation during a diagnostic auditory brainstem response (ABR) exam. Off-frequency cochlear excitation can be mitigated through the addition of masking noise to the test stimuli, but introducing a masker increases the already long test times of the typical ABR exam. Our lab has recently developed the parallel ABR (pABR) paradigm to speed up test times by utilizing randomized stimulus timing to estimate the thresholds for multiple frequencies simultaneously. There is reason to believe parallel presentation of multiple frequencies provides masking effects and improves place specificity while decreasing test times. Here, we use two computational models of the auditory periphery to characterize the predicted effect of parallel presentation on place specificity in the auditory nerve. We additionally examine the effect of stimulus rate and level. Both models show the pABR is at least as place specific as standard methods, with an improvement in place specificity for parallel presentation (vs. serial) at high levels, especially at high stimulus rates. When simulating hearing impairment in one of the models, place specificity was also improved near threshold. Rather than a tradeoff, this improved place specificity would represent a secondary benefit to the pABR's faster test times.


Subject(s)
Evoked Potentials, Auditory, Brain Stem , Perceptual Masking , Humans , Auditory Threshold/physiology , Perceptual Masking/physiology , Evoked Potentials, Auditory, Brain Stem/physiology , Noise , Brain Stem/physiology , Acoustic Stimulation
6.
Trends Hear ; 26: 23312165221136934, 2022.
Article in English | MEDLINE | ID: mdl-36384325

ABSTRACT

Listening in a noisy environment is challenging, but many previous studies have demonstrated that comprehension of speech can be substantially improved by looking at the talker's face. We recently developed a deep neural network (DNN) based system that generates movies of a talking face from speech audio and a single face image. In this study, we aimed to quantify the benefits that such a system can bring to speech comprehension, especially in noise. The target speech audio was masked with signal to noise ratios of -9, -6, -3, and 0 dB and was presented to subjects in three audio-visual (AV) stimulus conditions: (1) synthesized AV: audio with the synthesized talking face movie; (2) natural AV: audio with the original movie from the corpus; and (3) audio-only: audio with a static image of the talker. Subjects were asked to type the sentences they heard in each trial and keyword recognition was quantified for each condition. Overall, performance in the synthesized AV condition fell approximately halfway between the other two conditions, showing a marked improvement over the audio-only control but still falling short of the natural AV condition. Every subject showed some benefit from the synthetic AV stimulus. The results of this study support the idea that a DNN-based model that generates a talking face from speech audio can meaningfully enhance comprehension in noisy environments, and has the potential to be used as a visual hearing aid.


Subject(s)
Comprehension , Speech Perception , Humans , Speech , Noise/adverse effects , Neural Networks, Computer
7.
Front Neurosci ; 16: 858404, 2022.
Article in English | MEDLINE | ID: mdl-35478849

ABSTRACT

Peripheral veno-arterial extracorporeal membrane oxygenation (ECMO) artificially oxygenates and circulates blood retrograde from the femoral artery, potentially exposing the brain to asymmetric perfusion. Though ECMO patients frequently experience brain injury, neurologic exams and imaging are difficult to obtain. Diffuse correlation spectroscopy (DCS) non-invasively measures relative cerebral blood flow (rBF) at the bedside using an optical probe on each side of the forehead. In this study we observed interhemispheric rBF differences in response to mean arterial pressure (MAP) changes in adult ECMO recipients. We recruited 13 subjects aged 21-78 years (7 with cardiac arrest, 4 with acute heart failure, and 2 with acute respiratory distress syndrome). They were dichotomized via Glasgow Coma Scale Motor score (GCS-M) into comatose (GCS-M ≤ 4; n = 4) and non-comatose (GCS-M > 4; n = 9) groups. Comatose patients had greater interhemispheric rBF asymmetry (ASYMrBF) vs. non-comatose patients over a range of MAP values (29 vs. 11%, p = 0.009). ASYMrBF in comatose patients resolved near a MAP range of 70-80 mmHg, while rBF remained symmetric through a wider MAP range in non-comatose patients. Correlations between post-oxygenator pCO2 or pH vs. ASYMrBF were significantly different between comatose and non-comatose groups. Our findings indicate that comatose patients are more likely to have asymmetric cerebral perfusion.

8.
Atten Percept Psychophys ; 84(6): 2016-2026, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35211849

ABSTRACT

It is well established that in order to comprehend speech in noisy environments, listeners use the face of the talker in conjunction with the auditory speech. Yet how listeners use audiovisual speech correspondences along the multisensory speech processing pathway is not known. We engaged listeners in a pair of experiments using face rotation to partially dissociate linguistic and temporal information and two tasks to assess both overall integration and early integration specifically. In our first exploratory experiment, listeners performed a speech in noise task to determine which face rotation maximally disrupts speech comprehension and thus overall audiovisual integration. Our second experiment involved a dual pitch discrimination and visual catch task to test specifically for binding. The results showed that temporal coherence supports early integration, replicating the importance of temporal coherence seen for binding nonspeech stimuli. However, the benefit of temporal coherence was present in both upright and inverted positions, suggesting that binding is minimally affected by face rotation under these conditions. Together, our results suggest that different aspects of audio-visual speech are integrated at different stages of multisensory speech processing.


Subject(s)
Speech Perception , Speech , Acoustic Stimulation/methods , Auditory Perception , Cues , Humans , Linguistics , Visual Perception
9.
Ear Hear ; 43(2): 646-658, 2022.
Article in English | MEDLINE | ID: mdl-34593686

ABSTRACT

OBJECTIVES: Timely assessments are critical to providing early intervention and better hearing and spoken language outcomes for children with hearing loss. To facilitate faster diagnostic hearing assessments in infants, the authors developed the parallel auditory brainstem response (pABR), which presents randomly timed trains of tone pips at five frequencies to each ear simultaneously. The pABR yields high-quality waveforms that are similar to the standard, single-frequency serial ABR but in a fraction of the recording time. While well documented for standard ABRs, it is not yet known how presentation rate and level interact to affect responses collected in parallel. Furthermore, the stimuli had yet to be calibrated to perceptual thresholds. Therefore, this study aimed to determine the optimal range of parameters for the pABR and to establish the normative stimulus level correction values for the ABR stimuli. DESIGN: Two experiments were completed, each with a group of 20 adults (18-35 years old) with normal-hearing thresholds (≤20 dB HL) from 250 to 8000 Hz. First, pABR electroencephalographic (EEG) responses were recorded for six stimulation rates and two intensities. The changes in component wave V amplitude and latency were analyzed, as well as the time required for all responses to reach a criterion signal-to-noise ratio of 0 dB. Second, behavioral thresholds were measured for pure tones and for the pABR stimuli at each rate to determine the correction factors that relate the stimulus level in dB peSPL to perceptual thresholds in dB nHL. RESULTS: The pABR showed some adaptation with increased stimulation rate. A wide range of rates yielded robust responses in under 15 minutes, but 40 Hz was the optimal single presentation rate. Extending the analysis window to include later components of the response offered further time-saving advantages for the temporally broader responses to low-frequency tone pips. The perceptual thresholds to pABR stimuli changed subtly with rate, giving a relatively similar set of correction factors to convert the level of the pABR stimuli from dB peSPL to dB nHL. CONCLUSIONS: The optimal stimulation rate for the pABR is 40 Hz, but using multiple rates may prove useful. Perceptual thresholds that change subtly across rates allow for a testing paradigm that easily transitions between rates, which may be useful for quickly estimating thresholds for different configurations of hearing loss. These optimized parameters facilitate the expediency and effectiveness of the pABR for estimating hearing thresholds in a clinical setting.


Subject(s)
Deafness , Hearing Loss , Acoustic Stimulation , Adolescent , Adult , Auditory Threshold/physiology , Child , Evoked Potentials, Auditory, Brain Stem/physiology , Hearing/physiology , Hearing Loss/diagnosis , Humans , Infant , Young Adult
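The 0 dB SNR stopping criterion above can be illustrated with a standard evoked-response estimator. This is a sketch under common assumptions (noise power in the average estimated from across-epoch variance), not the paper's exact implementation.

```python
import numpy as np

def response_snr_db(epochs):
    """SNR (dB) of the averaged evoked response across epochs.

    epochs : array of shape (n_epochs, n_samples)
    Noise power remaining in the average is the across-epoch variance
    divided by the number of epochs; signal power is the variance of
    the average minus that noise estimate.
    """
    n = epochs.shape[0]
    avg = epochs.mean(axis=0)
    noise_power = epochs.var(axis=0, ddof=1).mean() / n
    signal_power = avg.var(ddof=1) - noise_power
    if signal_power <= 0:
        return -np.inf
    return 10 * np.log10(signal_power / noise_power)
```

Under this scheme, recording continues until the value reaches the 0 dB criterion, i.e., until the estimated signal power in the average equals the residual noise power.
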
10.
J Acoust Soc Am ; 150(4): 3085, 2021 10.
Article in English | MEDLINE | ID: mdl-34717460

ABSTRACT

The ability to see a talker's face improves speech intelligibility in noise, provided that the auditory and visual speech signals are approximately aligned in time. However, the importance of spatial alignment between corresponding faces and voices remains unresolved, particularly in multi-talker environments. In a series of online experiments, we investigated this using a task that required participants to selectively attend a target talker in noise while ignoring a distractor talker. In experiment 1, we found improved task performance when the talkers' faces were visible, but only when corresponding faces and voices were presented in the same hemifield (spatially aligned). In experiment 2, we tested for possible influences of eye position on this result. In auditory-only conditions, directing gaze toward the distractor voice reduced performance, but this effect could not fully explain the cost of audio-visual (AV) spatial misalignment. Lowering the signal-to-noise ratio (SNR) of the speech from +4 to -4 dB increased the magnitude of the AV spatial alignment effect (experiment 3), but accurate closed-set lipreading caused a floor effect that influenced results at lower SNRs (experiment 4). Taken together, these results demonstrate that spatial alignment between faces and voices contributes to the ability to selectively attend AV speech.


Subject(s)
Speech Perception , Voice , Humans , Lipreading , Noise/adverse effects , Speech Intelligibility
11.
Elife ; 10, 2021 02 17.
Article in English | MEDLINE | ID: mdl-33594974

ABSTRACT

Speech processing is built upon encoding by the auditory nerve and brainstem, yet we know very little about how these processes unfold in specific subcortical structures. These structures are deep and respond quickly, making them difficult to study during ongoing speech. Recent techniques have begun to address this problem, but yield temporally broad responses with consequently ambiguous neural origins. Here, we describe a method that pairs re-synthesized 'peaky' speech with deconvolution analysis of electroencephalography recordings. We show that in adults with normal hearing the method quickly yields robust responses whose component waves reflect activity from distinct subcortical structures spanning auditory nerve to rostral brainstem. We further demonstrate the versatility of peaky speech by simultaneously measuring bilateral and ear-specific responses across different frequency bands and discuss the important practical considerations such as talker choice. The peaky speech method holds promise as a tool for investigating speech encoding and processing, and for clinical applications.


Subject(s)
Brain Stem/physiology , Evoked Potentials, Auditory, Brain Stem/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Electroencephalography/methods , Female , Humans , Male , Speech
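The deconvolution at the heart of the peaky-speech method can be sketched with a regularized frequency-domain estimator. This is an illustrative stand-in, not the published pipeline: it assumes a known stimulus regressor (e.g., the pulse train underlying the re-synthesized speech) and recovers the impulse response whose convolution with that regressor best explains the EEG.

```python
import numpy as np

def deconvolve_response(eeg, regressor, n_keep):
    """Estimate an impulse response (e.g., an ABR) from continuous EEG.

    Divides the cross-spectrum of EEG and regressor by the regressor's
    auto-spectrum (with a small regularizer), i.e., circular
    deconvolution; the first n_keep samples are the response waveform.
    """
    n = len(eeg)
    R = np.fft.rfft(regressor, n)
    E = np.fft.rfft(eeg, n)
    eps = 1e-8 * (np.abs(R) ** 2).max()
    h = np.fft.irfft(E * np.conj(R) / (np.abs(R) ** 2 + eps), n)
    return h[:n_keep]
```

Because the regressor's energy is spread broadly in frequency, this estimator yields the temporally sharp component waves that allow responses to be attributed to distinct subcortical structures.
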
12.
Biomed Opt Express ; 11(11): 6551-6569, 2020 Nov 01.
Article in English | MEDLINE | ID: mdl-33282508

ABSTRACT

Extracorporeal membrane oxygenation (ECMO) is a form of cardiopulmonary bypass that provides life-saving support to critically ill patients whose illness is progressing despite maximal conventional support. Use in adults is expanding; however, neurological injuries are common. Currently, existing brain imaging tools provide only a snapshot in time and require high-risk patient transport. Here we assess the feasibility of measuring diffuse correlation spectroscopy, transcranial Doppler ultrasound, electroencephalography, and auditory brainstem responses at the bedside, and of developing a cerebral autoregulation metric. We report preliminary results from two patients, demonstrating feasibility and laying the foundation for future studies monitoring neurological health during ECMO.

13.
Trends Hear ; 23: 2331216519871395, 2019.
Article in English | MEDLINE | ID: mdl-31516096

ABSTRACT

The frequency-specific tone-evoked auditory brainstem response (ABR) is an indispensable tool in both the audiology clinic and research laboratory. Most frequently, the toneburst ABR is used to estimate hearing thresholds in infants, toddlers, and other patients for whom behavioral testing is not feasible. Therefore, results of the ABR exam form the basis for decisions regarding interventions and hearing habilitation with implications extending far into the child's future. Currently, responses are elicited by periodic sequences of toneburst stimuli presented serially to one ear at a time, which take a long time to measure multiple frequencies and intensities, and provide incomplete information if the infant wakes up early. Here, we describe a new method, the parallel ABR (pABR), which uses randomly timed toneburst stimuli to simultaneously acquire ABR waveforms to five frequencies in both ears. Here, we describe the pABR and quantify its effectiveness in addressing the greatest drawback of current methods: test duration. We show that in adults with normal hearing the pABR yields high-quality waveforms over a range of intensities, with similar morphology to the standard ABR in a fraction of the recording time. Furthermore, longer latencies and smaller amplitudes for low frequencies at a high intensity evoked by the pABR versus serial ABR suggest that responses may have better place specificity due to the masking provided by the other simultaneous toneburst sequences. Thus, the pABR has substantial potential for facilitating faster accumulation of more diagnostic information that is important for timely identification and treatment of hearing loss.


Subject(s)
Evoked Potentials, Auditory, Brain Stem/physiology , Hearing Loss/diagnosis , Hearing Tests/methods , Adult , Auditory Threshold/physiology , Female , Hearing/physiology , Humans , Male , Time Factors
14.
PLoS One ; 14(9): e0215417, 2019.
Article in English | MEDLINE | ID: mdl-31498804

ABSTRACT

In order to survive and function in the world, we must understand the content of our environment. This requires us to gather and parse complex, sometimes conflicting, information. Yet, the brain is capable of translating sensory stimuli from disparate modalities into a cohesive and accurate percept with little conscious effort. Previous studies of multisensory integration have suggested that the brain's integration of cues is well-approximated by an ideal observer implementing Bayesian causal inference. However, behavioral data from tasks that include only one stimulus in each modality fail to capture what is in nature a complex process. Here we employed an auditory spatial discrimination task in which listeners were asked to determine on which side they heard one of two concurrently presented sounds. We compared two visual conditions in which task-uninformative shapes were presented in the center of the screen, or spatially aligned with the auditory stimuli. We found that performance on the auditory task improved when the visual stimuli were spatially aligned with the auditory stimuli-even though the shapes provided no information about which side the auditory target was on. We also demonstrate that a model of a Bayesian ideal observer performing causal inference cannot explain this improvement, demonstrating that humans deviate systematically from the ideal observer model.


Subject(s)
Brain/physiology , Models, Neurological , Pattern Recognition, Physiological/physiology , Pattern Recognition, Visual/physiology , Psychomotor Performance/physiology , Space Perception/physiology , Acoustic Stimulation , Adult , Attention/physiology , Bayes Theorem , Cues , Female , Functional Laterality , Humans , Male , Photic Stimulation , Psychophysics/methods , Reaction Time/physiology
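The ideal-observer comparison above rests on the standard Gaussian causal-inference model. A hedged sketch of its core computation, the posterior probability that two cues share a source, assuming a zero-mean Gaussian spatial prior of width `sigma_p`; the paper's full model and fitted parameters are not reproduced here.

```python
import numpy as np

def posterior_common_cause(x_a, x_v, sigma_a, sigma_v, sigma_p, p_common=0.5):
    """P(common cause | auditory cue x_a, visual cue x_v).

    Gaussian cue likelihoods with noise sigma_a, sigma_v and a
    zero-mean spatial prior with width sigma_p, in the style of
    Bayesian causal-inference models of multisensory perception.
    """
    # likelihood of one shared source, integrated over its location
    var1 = (sigma_a**2 * sigma_v**2 + sigma_a**2 * sigma_p**2
            + sigma_v**2 * sigma_p**2)
    like1 = np.exp(-0.5 * ((x_a - x_v)**2 * sigma_p**2
                           + x_a**2 * sigma_v**2
                           + x_v**2 * sigma_a**2) / var1) / (2 * np.pi * np.sqrt(var1))
    # likelihood of two independent sources
    var_a = sigma_a**2 + sigma_p**2
    var_v = sigma_v**2 + sigma_p**2
    like2 = (np.exp(-0.5 * (x_a**2 / var_a + x_v**2 / var_v))
             / (2 * np.pi * np.sqrt(var_a * var_v)))
    return like1 * p_common / (like1 * p_common + like2 * (1 - p_common))
```

An ideal observer weights the integrated and segregated spatial estimates by this posterior; the study's point is that human behavior with task-uninformative visual shapes deviates systematically from that prediction.
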
15.
Neuron ; 97(3): 640-655.e4, 2018 02 07.
Article in English | MEDLINE | ID: mdl-29395914

ABSTRACT

How and where in the brain audio-visual signals are bound to create multimodal objects remains unknown. One hypothesis is that temporal coherence between dynamic multisensory signals provides a mechanism for binding stimulus features across sensory modalities. Here, we report that when the luminance of a visual stimulus is temporally coherent with the amplitude fluctuations of one sound in a mixture, the representation of that sound is enhanced in auditory cortex. Critically, this enhancement extends to include both binding and non-binding features of the sound. We demonstrate that visual information conveyed from visual cortex via the phase of the local field potential is combined with auditory information within auditory cortex. These data provide evidence that early cross-sensory binding provides a bottom-up mechanism for the formation of cross-sensory objects and that one role for multisensory binding in auditory cortex is to support auditory scene analysis.


Subject(s)
Auditory Perception/physiology , Neurons/physiology , Visual Cortex/physiology , Visual Perception/physiology , Acoustic Stimulation , Action Potentials , Animals , Female , Ferrets , Photic Stimulation
16.
eNeuro ; 5(1), 2018.
Article in English | MEDLINE | ID: mdl-29435487

ABSTRACT

Speech is an ecologically essential signal, whose processing crucially involves the subcortical nuclei of the auditory brainstem, but there are few experimental options for studying these early responses in human listeners under natural conditions. While encoding of continuous natural speech has been successfully probed in the cortex with neurophysiological tools such as electroencephalography (EEG) and magnetoencephalography, the rapidity of subcortical response components combined with unfavorable signal-to-noise ratios has prevented application of those methods to the brainstem. Instead, experiments have used thousands of repetitions of simple stimuli such as clicks, tone-bursts, or brief spoken syllables, with deviations from those paradigms leading to ambiguity in the neural origins of measured responses. In this study, we developed and tested a new way to measure the auditory brainstem response (ABR) to ongoing, naturally uttered speech, using EEG to record from human listeners. We found a high degree of morphological similarity between the speech-derived ABRs and the standard click-evoked ABR, in particular, a preserved Wave V, the most prominent voltage peak in the standard click-evoked ABR. Because this method yields distinct peaks that recapitulate the canonical ABR, at latencies too short to originate from the cortex, the responses measured can be unambiguously determined to be subcortical in origin. The use of naturally uttered speech to measure the ABR allows the design of engaging behavioral tasks, facilitating new investigations of the potential effects of cognitive processes like language and attention on brainstem processing.


Subject(s)
Brain Stem/physiology , Electroencephalography , Evoked Potentials, Auditory, Brain Stem , Speech Perception/physiology , Acoustic Stimulation , Adult , Electroencephalography/methods , Female , Humans , Male , Middle Aged , Signal Processing, Computer-Assisted , Young Adult
18.
Trends Neurosci ; 39(2): 74-85, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26775728

ABSTRACT

Crossmodal integration is a term applicable to many phenomena in which one sensory modality influences task performance or perception in another sensory modality. We distinguish the term binding as one that should be reserved specifically for the process that underpins perceptual object formation. To unambiguously differentiate binding from other types of integration, behavioral and neural studies must investigate perception of a feature orthogonal to the features that link the auditory and visual stimuli. We argue that supporting true perceptual binding (as opposed to other processes such as decision-making) is one role for cross-sensory influences in early sensory cortex. These early multisensory interactions may therefore form a physiological substrate for the bottom-up grouping of auditory and visual stimuli into auditory-visual (AV) objects.


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Behavior Rating Scale , Visual Cortex/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Animals , Auditory Pathways/physiology , Brain Mapping/methods , Humans , Photic Stimulation/methods , Visual Pathways/physiology
19.
Elife ; 4, 2015 Feb 05.
Article in English | MEDLINE | ID: mdl-25654748

ABSTRACT

In noisy settings, listening is aided by correlated dynamic visual cues gleaned from a talker's face-an improvement often attributed to visually reinforced linguistic information. In this study, we aimed to test the effect of audio-visual temporal coherence alone on selective listening, free of linguistic confounds. We presented listeners with competing auditory streams whose amplitude varied independently and a visual stimulus with varying radius, while manipulating the cross-modal temporal relationships. Performance improved when the auditory target's timecourse matched that of the visual stimulus. The fact that the coherence was between task-irrelevant stimulus features suggests that the observed improvement stemmed from the integration of auditory and visual streams into cross-modal objects, enabling listeners to better attend the target. These findings suggest that in everyday conditions, where listeners can often see the source of a sound, temporal cues provided by vision can help listeners to select one sound source from a mixture.


Subject(s)
Attention/physiology , Auditory Perception/physiology , Photic Stimulation , Task Performance and Analysis , Acoustic Stimulation , Adolescent , Adult , Behavior , Female , Humans , Male , Models, Biological , Pitch Discrimination , Time Factors , Young Adult
20.
Front Neurosci ; 8: 330, 2014.
Article in English | MEDLINE | ID: mdl-25368547

ABSTRACT

Modern neuroimaging techniques enable non-invasive observation of ongoing neural processing, with magnetoencephalography (MEG) in particular providing direct measurement of neural activity with millisecond time resolution. However, accurately mapping measured MEG sensor readings onto the underlying source neural structures remains an active area of research. This so-called "inverse problem" is ill posed, and poses a challenge for source estimation that is often cited as a drawback limiting MEG data interpretation. However, anatomically constrained MEG localization estimates may be more accurate than commonly believed. Here we hypothesize that, by combining anatomically constrained inverse estimates across subjects, the spatial uncertainty of MEG source localization can be mitigated. Specifically, we argue that differences in subject brain geometry yield differences in point-spread functions, resulting in improved spatial localization across subjects. To test this, we use standard methods to combine subject anatomical MRI scans with coregistration information to obtain an accurate forward (physical) solution, modeling the MEG sensor data resulting from brain activity originating from different cortical locations. Using a linear minimum-norm inverse to localize this brain activity, we demonstrate that a substantial increase in the spatial accuracy of MEG source localization can result from combining data from subjects with differing brain geometry. This improvement may be enabled by an increase in the amount of available spatial information in MEG data as measurements from different subjects are combined. This approach becomes more important in the face of practical issues of coregistration errors and potential noise sources, where we observe even larger improvements in localization when combining data across subjects. Finally, we use a simple auditory N100(m) localization task to show how this effect can influence localization using a recorded neural dataset.
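The linear minimum-norm inverse used in this analysis has a compact closed form. A simplified sketch with Tikhonov regularization and no noise-covariance whitening or depth weighting (which a real MEG pipeline would include):

```python
import numpy as np

def minimum_norm_operator(gain, lam=0.1):
    """L2 minimum-norm inverse operator for a forward/gain matrix G
    (n_sensors x n_sources): M = G.T @ inv(G @ G.T + lam * I),
    so that source estimates are x_hat = M @ sensor_data."""
    n_sensors = gain.shape[0]
    return gain.T @ np.linalg.inv(gain @ gain.T + lam * np.eye(n_sensors))

def point_spread(gain, lam=0.1):
    # rows of M @ G: how a unit source at each location leaks to others;
    # differing brain geometry across subjects yields differing rows
    return minimum_norm_operator(gain, lam) @ gain
```

The resolution matrix `M @ G` makes the paper's argument concrete: each subject's anatomy gives a different gain matrix and hence a different point-spread function, so combining anatomically constrained estimates across subjects can sharpen localization.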
