Results 1 - 16 of 16
1.
Trends Hear ; 25: 2331216521989900, 2021.
Article in English | MEDLINE | ID: mdl-33563136

ABSTRACT

Hearing aids are typically fitted using speech-based prescriptive formulae to make speech more intelligible. Individual preferences may vary from these prescriptions and may also vary with signal type. It is important to consider what motivates listener preferences and how those preferences can inform hearing aid processing so that assistive listening devices can best be tailored for hearing aid users. Therefore, this study explored preferred frequency-gain shaping relative to prescribed gain for speech and music samples. Preferred gain was determined for 22 listeners with mild sloping to moderately severe hearing loss relative to individually prescribed amplification while listening to samples of male speech, female speech, pop music, and classical music across low-, mid-, and high-frequency bands. Samples were amplified using a fast-acting compression hearing aid simulator. Preferences were determined using an adaptive paired comparison procedure. Listeners then rated speech and music samples processed using prescribed and preferred shaping across different sound quality descriptors. On average, low-frequency gain was significantly increased relative to the prescription for all stimuli and most substantially for pop and classical music. High-frequency gain was decreased significantly for pop music and male speech. Gain adjustments, particularly in the mid- and high-frequency bands, varied considerably between listeners. Music preferences were driven by changes in perceived fullness and sharpness, whereas speech preferences were driven by changes in perceived intelligibility and loudness. The results generally support the use of prescribed amplification to optimize speech intelligibility and alternative amplification for music listening for most listeners.
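The preferred-gain manipulation above amounts to applying per-band dB offsets to a prescribed frequency response. A minimal sketch, assuming hypothetical band edges and gain values; the study's fast-acting compression simulator is not reproduced here:

```python
import numpy as np

def apply_band_gains(signal, fs, band_edges_hz, gains_db):
    """Apply a per-band gain (in dB) to a signal via FFT filtering.

    band_edges_hz: list of (lo, hi) tuples; gains_db: matching gains.
    A crude linear stand-in for a multiband hearing-aid gain stage.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    for (lo, hi), g in zip(band_edges_hz, gains_db):
        mask = (freqs >= lo) & (freqs < hi)
        spectrum[mask] *= 10 ** (g / 20.0)
    return np.fft.irfft(spectrum, n=len(signal))

# Hypothetical adjustment: +6 dB lows, unchanged mids, -3 dB highs
fs = 16000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 250 * t)  # low-frequency tone inside the boosted band
out = apply_band_gains(sig, fs,
                       [(0, 1000), (1000, 4000), (4000, 8000)],
                       [6.0, 0.0, -3.0])
```

For a pure tone lying entirely inside one band, the output level changes by exactly that band's gain.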


Subject(s)
Hearing Aids , Hearing Loss, Sensorineural , Music , Speech Perception , Female , Humans , Male , Speech Discrimination Tests
2.
Int J Audiol ; 59(7): 556-565, 2020 07.
Article in English | MEDLINE | ID: mdl-32069128

ABSTRACT

Objective: To assess the performance of an active transcutaneous implantable bone-conduction device (TI-BCD), and to evaluate the benefit of device digital signal processing (DSP) features in challenging listening environments. Design: Participants were tested at 1 and 3 months post-activation of the TI-BCD. At each session, aided and unaided phoneme perception was assessed using the Ling-6 test. Speech reception thresholds (SRTs) and quality ratings of speech and music samples were collected in noisy and reverberant environments, with and without the DSP features. Self-assessment of the device performance was obtained using the Abbreviated Profile of Hearing Aid Benefit (APHAB) questionnaire. Study sample: Six adults with conductive or mixed hearing loss. Results: Average SRTs were 2.9 and 12.3 dB in low and high reverberation environments, respectively, which improved to -1.7 and 8.7 dB, respectively, with the DSP features. In addition, speech quality ratings improved by 23 points with the DSP features when averaged across all environmental conditions. Improvement scores on APHAB scales revealed a statistically significant aided benefit. Conclusions: Noise and reverberation significantly impacted speech recognition performance and perceived sound quality. DSP features (directional microphone processing and adaptive noise reduction) significantly enhanced subjects' performance in these challenging listening environments.
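SRTs like those reported here are typically estimated with an adaptive procedure that raises or lowers the SNR trial by trial. A toy sketch, assuming a 1-up/1-down rule and a simulated listener with a hard threshold; the study's actual materials and tracking rule are not specified here:

```python
def srt_staircase(true_srt_db, start_db=20.0, step_db=2.0, trials=60):
    """1-up/1-down adaptive staircase converging on the SNR at which the
    simulated listener's responses flip. The listener is modelled as a
    hard threshold for brevity; real procedures target ~50% correct on
    a noisy psychometric function."""
    snr = start_db
    reversals = []
    last_correct = None
    for _ in range(trials):
        correct = snr >= true_srt_db          # toy step psychometric function
        new_snr = snr - step_db if correct else snr + step_db
        if last_correct is not None and correct != last_correct:
            reversals.append(snr)             # record SNR at each reversal
        last_correct = correct
        snr = new_snr
    return sum(reversals[-8:]) / len(reversals[-8:])  # mean of last reversals

# Simulated listener whose true SRT matches the low-reverberation average above
estimate = srt_staircase(true_srt_db=2.9)
```

With a 2 dB step, the track oscillates between the two SNRs bracketing the threshold, so the reversal mean lands on their midpoint.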


Subject(s)
Bone Conduction , Correction of Hearing Impairment/instrumentation , Hearing Aids , Hearing Loss, Conductive/physiopathology , Hearing Loss, Mixed Conductive-Sensorineural/physiopathology , Adult , Female , Hearing Loss, Conductive/rehabilitation , Hearing Loss, Mixed Conductive-Sensorineural/rehabilitation , Humans , Male , Middle Aged , Noise , Outcome Assessment, Health Care , Prosthesis Design , Signal Processing, Computer-Assisted , Speech Perception , Speech Reception Threshold Test
3.
Am J Audiol ; 28(4): 947-963, 2019 Dec 16.
Article in English | MEDLINE | ID: mdl-31829722

ABSTRACT

Purpose: A growing body of evidence indicates that treatment of hearing loss by provision of hearing aids leads to improvements in auditory and visual working memory. The purpose of this study was to assess whether similar working memory benefits are observed following provision of cochlear implants (CIs). Method: Fifteen adults with postlingually acquired severe bilateral sensorineural hearing loss completed the prospective longitudinal study. Participants were candidates for bilateral cochlear implantation with some aidable hearing in each ear. Implantation surgeries were carried out sequentially, approximately 1 year apart. Working memory was measured with the visual Reading Span Test (Daneman & Carpenter, 1980) at 5 time points: pre-operatively following a 6-month bilateral hearing aid trial, after 6 and 12 months of bimodal (CI plus contralateral hearing aid) listening experience following the 1st CI surgery and activation, and again after 6 and 12 months of bilateral CI listening experience following the 2nd CI surgery and activation. Results: Compared to the preoperative baseline, CI listening experience yielded significant improvements in participants' ability to recall test words in the correct serial order after 12 months in the bimodal condition. Individual performance outcomes were variable, but almost all participants showed increases in task performance over the course of the study. Conclusions: These results suggest that, similar to appropriate interventions with hearing aids, treatment of hearing loss with CIs can yield working memory benefits. A likely mechanism is the freeing of cognitive resources previously devoted to effortful listening.


Subject(s)
Cochlear Implants , Memory, Short-Term , Reading , Adult , Aged , Aged, 80 and over , Cochlear Implants/psychology , Cognitive Dysfunction/etiology , Cognitive Dysfunction/prevention & control , Female , Hearing Loss, Sensorineural/psychology , Hearing Loss, Sensorineural/therapy , Humans , Longitudinal Studies , Male , Middle Aged , Neuropsychological Tests , Prospective Studies
4.
Exp Brain Res ; 236(4): 945-953, 2018 04.
Article in English | MEDLINE | ID: mdl-29374776

ABSTRACT

Emotion can have diverse effects on behaviour and perception, modulating function in some circumstances, and sometimes having little effect. Recently, it was identified that part of the heterogeneity of emotional effects could be due to a dissociable representation of emotion in dual pathway models of sensory processing. Our previous fMRI experiment using traditional univariate analyses showed that emotion modulated processing in the auditory 'what' but not 'where' processing pathway. The current study aims to further investigate this dissociation using a more recently emerging multi-voxel pattern analysis searchlight approach. While undergoing fMRI, participants localized sounds of varying emotional content. A searchlight multi-voxel pattern analysis was conducted to identify activity patterns predictive of sound location and/or emotion. Relative to the prior univariate analysis, MVPA indicated larger overlapping spatial and emotional representations of sound within early secondary regions associated with auditory localization. However, consistent with the univariate analysis, these two dimensions were increasingly segregated in late secondary and tertiary regions of the auditory processing streams. These results, while complementary to our original univariate analyses, highlight the utility of multiple analytic approaches for neuroimaging, particularly for neural processes with known representations dependent on population coding.
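A searchlight MVPA slides a small window across the brain and asks, at each position, how well a classifier decodes the condition from the local activity pattern. A 1-D toy version with a leave-one-out nearest-centroid classifier; the voxel layout, signal location, and effect size are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 40 trials x 30 voxels; binary labels (e.g., sound left vs right).
# Only voxels 10-14 carry condition information (an assumed layout).
labels = np.repeat([0, 1], 20)
data = rng.normal(size=(40, 30))
data[labels == 1, 10:15] += 1.5

def searchlight_accuracy(data, labels, radius=2):
    """Leave-one-out nearest-centroid accuracy in a sliding window over
    voxels; a 1-D stand-in for the 3-D spheres used in fMRI searchlights."""
    n_trials, n_vox = data.shape
    acc = np.zeros(n_vox)
    for v in range(n_vox):
        sl = data[:, max(0, v - radius): v + radius + 1]
        hits = 0
        for i in range(n_trials):
            train = np.delete(sl, i, axis=0)
            tr_labels = np.delete(labels, i)
            c0 = train[tr_labels == 0].mean(axis=0)
            c1 = train[tr_labels == 1].mean(axis=0)
            pred = int(np.linalg.norm(sl[i] - c1) < np.linalg.norm(sl[i] - c0))
            hits += pred == labels[i]
        acc[v] = hits / n_trials
    return acc

acc = searchlight_accuracy(data, labels)
```

Accuracy peaks where the window covers the informative voxels and sits near chance elsewhere; in real analyses the per-voxel accuracy map is then tested against chance across subjects.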


Subject(s)
Auditory Pathways/physiology , Auditory Perception/physiology , Brain Mapping/methods , Cerebral Cortex/physiology , Emotions/physiology , Space Perception/physiology , Adult , Auditory Pathways/diagnostic imaging , Cerebral Cortex/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging , Male , Sound Localization/physiology , Young Adult
5.
Int J Audiol ; 55(10): 580-6, 2016 10.
Article in English | MEDLINE | ID: mdl-27367278

ABSTRACT

OBJECTIVE: Direct real-ear measurement in the 4-6 kHz range can be performed with suitable accuracy and repeatability. This study evaluates extended bandwidth measurement accuracy and repeatability using narrowband and wideband signal analysis. DESIGN: White noise was measured in female ear canals at four insertion depths using one-third and one-twenty-fourth octave band averaging. STUDY SAMPLE: Fourteen female adults with reported normal hearing and middle-ear function participated in the study. RESULTS: Test-retest differences were within ±2 dB for typical frequency bandwidths at insertion depths administered in clinical practice, and for up to 8 kHz at the experimental 30 mm insertion depth. The 28 mm insertion depth was the best predictor of ear canal levels measured at the 30 mm insertion depth. There was no effect of signal analysis bandwidth on accuracy or repeatability. CONCLUSIONS: Clinically feasible 28 mm probe tube insertions reliably measured up to 8 kHz and predicted intensities up to 10 kHz measured at the 30 mm insertion depth more accurately than did shallower insertion depths. Signal analysis bandwidth may not be an important clinical issue at least for one-third and one-twenty-fourth octave band analyses.
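One-third and one-twenty-fourth octave band averaging both reduce a narrowband spectrum to a set of band levels; only the band width differs. A sketch of fractional-octave power averaging, with a conventional 1 kHz reference and base-2 band-centre series assumed:

```python
import numpy as np

def octave_band_levels(freqs_hz, levels_db, fraction=3, f_ref=1000.0):
    """Average a narrowband dB spectrum into 1/`fraction`-octave bands
    centred on a base-2 series around `f_ref`, using power averaging."""
    out = []
    n = int(round(fraction * np.log2(freqs_hz.max() / f_ref)))
    for k in range(-fraction * 5, n + 1):      # span several octaves below f_ref
        fc = f_ref * 2 ** (k / fraction)
        lo = fc * 2 ** (-1 / (2 * fraction))
        hi = fc * 2 ** (1 / (2 * fraction))
        mask = (freqs_hz >= lo) & (freqs_hz < hi)
        if mask.any():
            power = np.mean(10 ** (levels_db[mask] / 10))
            out.append((fc, 10 * np.log10(power)))
    return out

freqs = np.linspace(100, 10000, 2000)
flat = np.zeros_like(freqs)                    # flat 0-dB narrowband spectrum
bands = octave_band_levels(freqs, flat, fraction=3)
```

A flat input spectrum yields 0 dB in every band regardless of band width, which is why analysis bandwidth mainly matters for spectra with fine structure.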


Subject(s)
Acoustics , Ear/physiology , Hearing Tests/methods , Hearing , Acoustic Stimulation , Acoustics/instrumentation , Adult , Hearing Tests/instrumentation , Humans , Predictive Value of Tests , Reproducibility of Results , Sound Spectrography , Young Adult
6.
Hear Res ; 306: 76-92, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24076423

ABSTRACT

For human listeners, the primary cues for localization in the vertical plane are provided by the direction-dependent filtering of the pinnae, head, and upper body. Vertical-plane localization generally is accurate for broadband sounds, but when such sounds are presented at near-threshold levels or at high levels with short durations (<20 ms), the apparent location is biased toward the horizontal plane (i.e., elevation gain <1). We tested the hypothesis that these effects result in part from distorted peripheral representations of sound spectra. Human listeners indicated the apparent position of 100-ms, 50-60 dB SPL, wideband noise-burst targets by orienting their heads. The targets were synthesized in virtual auditory space and presented over headphones. Faithfully synthesized targets were interleaved with targets for which the directional transfer function spectral notches were filled in, peaks were leveled off, or the spectral contrast of the entire profile was reduced or expanded. As notches were filled in progressively or peaks leveled progressively, elevation gain decreased in a graded manner similar to that observed as sensation level is reduced below 30 dB or, for brief sounds, increased above 45 dB. As spectral contrast was reduced, gain dropped only at the most extreme reduction (25% of normal). Spectral contrast expansion had little effect. The results are consistent with the hypothesis that loss of representation of spectral features contributes to reduced elevation gain at low and high sound levels. The results also suggest that perceived location depends on a correlation-like spectral matching process that is sensitive to the relative, rather than absolute, across-frequency shape of the spectral profile.
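The spectral-contrast manipulation can be expressed as scaling a directional transfer function's dB profile about its mean, which preserves overall level while flattening or deepening peaks and notches. A sketch with an invented profile:

```python
import numpy as np

def scale_spectral_contrast(profile_db, factor):
    """Compress or expand a dB spectral profile about its mean level:
    factor < 1 flattens peaks and notches, factor > 1 deepens them,
    and the mean level is unchanged either way."""
    mean = profile_db.mean()
    return mean + factor * (profile_db - mean)

# Hypothetical DTF-like profile with a deep notch at one frequency bin
profile = np.array([0.0, 2.0, -10.0, 3.0, 1.0])
reduced = scale_spectral_contrast(profile, 0.25)   # "25% of normal" contrast
```

The peak-to-trough range scales by exactly the contrast factor, matching the most extreme reduction condition described above.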


Subject(s)
Acoustic Stimulation , Auditory Perception , Sound Localization/physiology , Adult , Auditory Threshold/physiology , Cues , Female , Healthy Volunteers , Hearing , Humans , Male , Orientation , Psychophysics , Reproducibility of Results , Signal Processing, Computer-Assisted , Young Adult
7.
Hear Res ; 304: 20-7, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23769958

ABSTRACT

Individual differences exist in sound localization performance even for normal-hearing listeners. Some of these differences might be related to acoustical differences in localization cues carried by the head related transfer functions (HRTF). Recent data suggest that individual differences in sound localization performance could also have a perceptual origin. The localization of an auditory target in the up/down and front/back dimensions requires the analysis of the spectral shape of the stimulus. In the present study, we investigated the role of an acoustic factor, the prominence of the spectral shape ("spectral strength") and the role of a perceptual factor, the listener's sensitivity to spectral shape, in individual differences observed in sound localization performance. Spectral strength was computed as the spectral distance between the magnitude spectrum of the HRTFs and a flat spectrum. Sensitivity to spectral shape was evaluated using spectral-modulation thresholds measured with a broadband (0.2-12.8 kHz) or high-frequency (4-16 kHz) carrier and for different spectral modulation frequencies (below 1 cycle/octave, between 1 and 2 cycles/octave, above 2 cycles/octave). Data obtained from 19 young normal-hearing listeners showed that low thresholds for spectral modulation frequency below 1 cycle/octave with a high-frequency carrier were associated with better sound localization performance. No correlation was found between sound localization performance and the spectral strength of the HRTFs. These results suggest that differences in perceptual ability, rather than acoustical differences, contribute to individual differences in sound localization performance in noise.
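The "spectral strength" measure, described as the spectral distance between an HRTF magnitude spectrum and a flat spectrum, can be sketched as an RMS deviation in dB; the paper's exact distance metric may differ:

```python
import numpy as np

def spectral_strength(hrtf_mag_db):
    """Spectral distance between an HRTF magnitude spectrum (in dB) and a
    flat spectrum, taken here as the RMS deviation from the mean level."""
    return np.sqrt(np.mean((hrtf_mag_db - hrtf_mag_db.mean()) ** 2))

flat = np.zeros(128)                       # perfectly flat spectrum
notched = np.zeros(128)
notched[40:48] = -20.0                     # deep notch, as in many HRTFs
```

A flat spectrum scores zero; prominent peaks and notches raise the score, so the measure indexes how much directional filtering the pinna imposes.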


Subject(s)
Sound Localization/physiology , Acoustic Stimulation , Adult , Auditory Perception/physiology , Auditory Threshold/physiology , Cues , Female , Humans , Male , Noise , Young Adult
8.
Neuroimage ; 82: 295-305, 2013 Nov 15.
Article in English | MEDLINE | ID: mdl-23711533

ABSTRACT

Auditory cortices can be separated into dissociable processing pathways similar to those observed in the visual domain. Emotional stimuli elicit enhanced neural activation within sensory cortices when compared to neutral stimuli. This effect is particularly notable in the ventral visual stream. Little is known, however, about how emotion interacts with dorsal processing streams, and essentially nothing is known about the impact of emotion on auditory stimulus localization. In the current study, we used fMRI in concert with individualized auditory virtual environments to investigate the effect of emotion during an auditory stimulus localization task. Surprisingly, participants were significantly slower to localize emotional relative to neutral sounds. A separate localizer scan was performed to isolate neural regions sensitive to stimulus location independent of emotion. When applied to the main experimental task, a significant main effect of location, but not emotion, was found in this ROI. A whole-brain analysis of the data revealed that posterior-medial regions of auditory cortex were modulated by sound location; however, additional anterior-lateral areas of auditory cortex demonstrated enhanced neural activity to emotional compared to neutral stimuli. The latter region resembled areas described in dual pathway models of auditory processing as the 'what' processing stream, prompting a follow-up task to generate an identity-sensitive ROI (the 'what' pathway) independent of location and emotion. Within this region, significant main effects of location and emotion were identified, as well as a significant interaction. These results suggest that emotion modulates activity in the 'what,' but not the 'where,' auditory processing pathway.


Subject(s)
Auditory Cortex/physiology , Auditory Pathways/physiology , Brain Mapping , Emotions/physiology , Sound Localization/physiology , Acoustic Stimulation , Adult , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Young Adult
9.
Hear Res ; 240(1-2): 22-41, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18359176

ABSTRACT

We assessed the spatial-tuning properties of units in the cat's anterior auditory field (AAF) and compared them with those observed previously in the primary (A1) and posterior auditory fields (PAF). Multi-channel, silicon-substrate probes were used to record single- and multi-unit activity from the right hemispheres of alpha-chloralose-anesthetized cats. Spatial tuning was assessed using broadband noise bursts that varied in azimuth or elevation. Response latencies were slightly, though significantly, shorter in AAF than A1, and considerably shorter in both of those fields than in PAF. Compared to PAF, spike counts and latencies were more poorly modulated by changes in stimulus location in AAF and A1, particularly at higher sound pressure levels. Moreover, units in AAF and A1 demonstrated poorer level tolerance than units in PAF with spike rates modulated as much by changes in stimulus intensity as changes in stimulus location. Finally, spike-pattern-recognition analyses indicated that units in AAF transmitted less spatial information, on average, than did units in PAF, an observation consistent with recent evidence that PAF is necessary for sound-localization behavior, whereas AAF is not.
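The spatial information transmitted by a unit is commonly quantified as the mutual information between stimulus location and the location decoded from its spike patterns, computed from a confusion matrix. A sketch with 4 hypothetical locations and 25 trials each:

```python
import numpy as np

def transmitted_information_bits(confusion):
    """Mutual information (bits) between stimulus location and decoded
    location, from a joint-count confusion matrix (rows: stimulus
    locations, columns: decoded locations)."""
    p = confusion / confusion.sum()
    px = p.sum(axis=1, keepdims=True)        # stimulus marginal
    py = p.sum(axis=0, keepdims=True)        # decoded marginal
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

perfect = np.eye(4) * 25          # every trial decoded to the correct location
chance = np.full((4, 4), 25.0)    # decoding unrelated to location
```

Perfect decoding of 4 equiprobable locations transmits log2(4) = 2 bits; chance-level decoding transmits none, so real units fall somewhere between.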


Subject(s)
Auditory Cortex/physiology , Auditory Pathways/physiology , Auditory Perception , Neurons/physiology , Sound Localization , Acoustic Stimulation , Animals , Auditory Cortex/cytology , Auditory Pathways/cytology , Auditory Threshold , Cats , Evoked Potentials, Auditory , Female , Male , Pressure , Reaction Time
10.
J Acoust Soc Am ; 121(6): 3677-88, 2007 Jun.
Article in English | MEDLINE | ID: mdl-17552719

ABSTRACT

For human listeners, cues for vertical-plane localization are provided by direction-dependent pinna filtering. This study quantified listeners' weighting of the spectral cues from each ear as a function of stimulus lateral angle, interaural time difference (ITD), and interaural level difference (ILD). Subjects indicated the apparent position of headphone-presented noise bursts synthesized in virtual auditory space. The synthesis filters for the two ears either corresponded to the same location or to two different locations separated vertically by 20 deg. Weighting of each ear's spectral information was determined by a multiple regression between the elevations to which each ear's spectrum corresponded and the vertical component of listeners' responses. The apparent horizontal source location was controlled either by choosing synthesis filters corresponding to locations on or 30 deg left or right of the median plane or by attenuating or delaying the signal at one ear. For broadband stimuli, spectral weighting and apparent lateral angle were determined primarily by ITD. Only for high-pass stimuli were weighting and lateral angle determined primarily by ILD. The results suggest that the weighting of monaural spectral cues and the perceived lateral angle of a sound source depend similarly on ITD, ILD, and stimulus spectral range.
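The ear-weighting analysis above is a multiple regression of the vertical response component on the elevations encoded at each ear. A simulated sketch in which the weights, trial count, and response noise are all invented:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated trials: an elevation (deg) encoded in each ear's synthesis
# filter, either matching or vertically separated by 20 deg, and a
# listener whose vertical responses lean on the left ear (assumed weights).
n = 200
elev_left = rng.uniform(-40, 40, n)
elev_right = elev_left + rng.choice([0.0, 20.0], n)
true_wl, true_wr = 0.7, 0.3
response = true_wl * elev_left + true_wr * elev_right + rng.normal(0, 2, n)

# Multiple regression of the vertical responses on the two ears' elevations
X = np.column_stack([elev_left, elev_right, np.ones(n)])
(w_left, w_right, intercept), *_ = np.linalg.lstsq(X, response, rcond=None)
```

The fitted coefficients recover each ear's weight; in the study, how these weights shift with ITD and ILD is the quantity of interest.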


Subject(s)
Hearing/physiology , Sound Localization , Acoustic Stimulation , Adult , Audiometry , Cues , Female , Functional Laterality , Humans , Male , Sound
11.
J Neurophysiol ; 94(2): 1267-80, 2005 Aug.
Article in English | MEDLINE | ID: mdl-15857970

ABSTRACT

We compared the spatial sensitivity of neural responses in three areas of cat auditory cortex: primary auditory cortex (A1), the posterior auditory field (PAF), and the dorsal zone (DZ). Stimuli were 80-ms pure tones or broadband noise bursts varying in free-field azimuth (in the horizontal plane) or elevation (in the vertical median plane), presented at levels 20-40 dB above units' thresholds. We recorded extracellular spike activity simultaneously from 16 to 32 sites in one or two areas of alpha-chloralose-anesthetized cats. We examined the dependence of spike counts and response latencies on stimulus location as well as the information transmission by neural spike patterns. Compared with units in A1, DZ units exhibited more complex frequency tuning, longer-latency responses, increased prevalence and degree of nonmonotonic rate-level functions, and weaker responses to noise than to tonal stimulation. DZ responses also showed sharper tuning for stimulus azimuth, stronger azimuthal modulation of first-spike latency, and enhanced spatial information transmission by spike patterns, compared with A1. Each of these findings was similar to differences observed between PAF and A1. Compared with PAF, DZ responses were of shorter overall latency, and more DZ units preferred stimulation from ipsilateral azimuths, but the majority of analyses suggest strong similarity between PAF and DZ responses. These results suggest that DZ and A1 are physiologically distinct cortical fields and that fields like PAF and DZ might constitute a "belt" region of auditory cortex exhibiting enhanced spatial sensitivity and temporal coding of stimulus features.


Subject(s)
Auditory Cortex/physiology , Sound Localization/physiology , Space Perception/physiology , Acoustic Stimulation/methods , Action Potentials/physiology , Animals , Auditory Cortex/anatomy & histology , Auditory Pathways/physiology , Auditory Threshold/physiology , Brain Mapping , Cats , Computer Simulation , Dose-Response Relationship, Radiation , Female , Male , Reaction Time/physiology
12.
Hear Res ; 199(1-2): 124-34, 2005 Jan.
Article in English | MEDLINE | ID: mdl-15574307

ABSTRACT

Physiological studies of spatial hearing show that the spatial receptive fields of cortical neurons typically are narrow at near-threshold levels, broadening at moderate levels. The apparent loss of neuronal spatial selectivity at increasing sound levels conflicts with the accurate performance of human subjects localizing at moderate sound levels. In the present study, human sound localization was evaluated across a wide range of sensation levels, extending down to the detection threshold. Listeners reported whether they heard each target sound and, if the target was audible, turned their heads to face the apparent source direction. Head orientation was tracked electromagnetically. At near-threshold levels, the lateral (left/right) components of responses were highly variable and slightly biased towards the midline, and front vertical components consistently exhibited a strong bias towards the horizontal plane. Stimulus levels were specified relative to the detection threshold for a front-positioned source, so low-level rear targets often were inaudible. As the sound level increased, first lateral and then vertical localization neared asymptotic levels. The improvement of localization over a range of increasing levels, in which neural spatial receptive fields presumably are broadening, indicates that sound localization does not depend on narrow spatial receptive fields of cortical neurons.


Subject(s)
Auditory Threshold/physiology , Sound Localization/physiology , Adult , Audiometry, Pure-Tone , Auditory Cortex/physiology , Female , Humans , Male , Time Factors
13.
J Acoust Soc Am ; 114(1): 430-45, 2003 Jul.
Article in English | MEDLINE | ID: mdl-12880054

ABSTRACT

Ripple-spectrum stimuli were used to investigate the scale of spectral detail used by listeners in interpreting spectral cues for vertical-plane localization. In three experiments, free-field localization judgments were obtained for 250-ms, 0.6-16-kHz noise bursts with log-ripple spectra that varied in ripple density, peak-to-trough depth, and phase. When ripple density was varied and depth was held constant at 40 dB, listeners' localization error rates increased most (relative to rates for flat-spectrum targets) for densities of 0.5-2 ripples/oct. When depth was varied and density was held constant at 1 ripple/oct, localization accuracy was degraded only for ripple depths ≥ 20 dB. When phase was varied and density was held constant at 1 ripple/oct and depth at 40 dB, three of five listeners made errors at consistent locations unrelated to the ripple phase, whereas two listeners made errors at locations systematically modulated by ripple phase. Although the reported upper limit for ripple discrimination is 10 ripples/oct [Supin et al., J. Acoust. Soc. Am. 106, 2800-2804 (1999)], present results indicate that details finer than 2 ripples/oct or coarser than 0.5 ripples/oct do not strongly influence processing of spectral cues for sound localization. The low spectral-frequency limit suggests that broad-scale spectral variation is discounted, even though components at this scale are among those contributing the most to the shapes of directional transfer functions.
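A log-ripple spectrum is sinusoidal in log frequency, which is why ripple density is specified in ripples per octave. A sketch of the spectrum-level profile; the 0.6 kHz lower edge follows the stimuli described above, while the rippling-about-0-dB amplitude convention is an assumption:

```python
import numpy as np

def log_ripple_db(freqs_hz, density_ripples_per_oct, depth_db,
                  phase_rad=0.0, f0_hz=600.0):
    """Spectrum level (dB) of a log-ripple: sinusoidal in log2 frequency,
    rippling about 0 dB with the given peak-to-trough depth."""
    octaves = np.log2(freqs_hz / f0_hz)
    return (depth_db / 2.0) * np.sin(
        2 * np.pi * density_ripples_per_oct * octaves + phase_rad)

freqs = np.geomspace(600, 16000, 512)
spec = log_ripple_db(freqs, density_ripples_per_oct=1.0, depth_db=40.0)
```

Varying `density_ripples_per_oct`, `depth_db`, and `phase_rad` reproduces the three stimulus dimensions manipulated across the experiments.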


Subject(s)
Noise , Pitch Discrimination , Sound Localization , Sound Spectrography , Acoustic Stimulation , Adult , Attention , Female , Humans , Loudness Perception , Male , Psychoacoustics
14.
J Neurophysiol ; 89(6): 2889-903, 2003 Jun.
Article in English | MEDLINE | ID: mdl-12611946

ABSTRACT

We compared the spatial tuning properties of neurons in two fields [primary auditory cortex (A1) and posterior auditory field (PAF)] of cat auditory cortex. Broadband noise bursts of 80-ms duration were presented from loudspeakers throughout 360 degrees in the horizontal plane (azimuth) or 260 degrees in the vertical median plane (elevation). Sound levels varied from 20 to 40 dB above units' thresholds. We recorded neural spike activity simultaneously from 16 sites in field PAF and/or A1 of alpha-chloralose-anesthetized cats. We assessed spatial sensitivity by examining the dependence of spike count and response latency on stimulus location. In addition, we used an artificial neural network (ANN) to assess the information about stimulus location carried by spike patterns of single units and of ensembles of 2-32 units. The results indicate increased spatial sensitivity, more uniform distributions of preferred locations, and greater tolerance to changes in stimulus intensity among PAF units relative to A1 units. Compared to A1 units, PAF units responded at significantly longer latencies, and latencies varied more strongly with stimulus location. ANN analysis revealed significantly greater information transmission by spike patterns of PAF than A1 units, primarily reflecting the information transmitted by latency variation in PAF. Finally, information rates grew more rapidly with the number of units included in neural ensembles for PAF than A1. The latter finding suggests more accurate population coding of space in PAF, made possible by a more diverse population of neural response types.


Subject(s)
Action Potentials , Auditory Cortex/physiology , Neural Networks, Computer , Neurons/physiology , Sound Localization , Acoustic Stimulation , Animals , Cats , Electrophysiology , Female , Male , Reaction Time
15.
J Acoust Soc Am ; 111(5 Pt 1): 2219-36, 2002 May.
Article in English | MEDLINE | ID: mdl-12051442

ABSTRACT

The virtual auditory space technique was used to quantify the relative strengths of interaural time difference (ITD), interaural level difference (ILD), and spectral cues in determining the perceived lateral angle of wideband, low-pass, and high-pass noise bursts. Listeners reported the apparent locations of virtual targets that were presented over headphones and filtered with listeners' own directional transfer functions. The stimuli were manipulated by delaying or attenuating the signal to one ear (by up to 600 µs or 20 dB) or by altering the spectral cues at one or both ears. Listener weighting of the manipulated cues was determined by examining the resulting localization response biases. In accordance with the Duplex Theory defined for pure-tones, listeners gave high weight to ITD and low weight to ILD for low-pass stimuli, and high weight to ILD for high-pass stimuli. Most (but not all) listeners gave low weight to ITD for high-pass stimuli. This weight could be increased by amplitude-modulating the stimuli or reduced by lengthening stimulus onsets. For wideband stimuli, the ITD weight was greater than or equal to that given to ILD. Manipulations of monaural spectral cues and the interaural level spectrum had little influence on lateral angle judgements.


Subject(s)
Cues , Models, Theoretical , Sound Localization , Female , Humans , Male , Motivation
16.
Neuroscientist ; 8(1): 73-83, 2002 Feb.
Article in English | MEDLINE | ID: mdl-11843102

ABSTRACT

Efforts to locate a cortical map of auditory space generally have proven unsuccessful. At moderate sound levels, cortical neurons generally show large or unbounded spatial receptive fields. Within those large receptive fields, however, changes in sound location result in systematic changes in the temporal firing patterns such that single-neuron firing patterns can signal the locations of sound sources throughout as much as 360 degrees of auditory space. Neurons in the cat's auditory cortex show accurate localization of broad-band sounds, which human listeners localize accurately. Conversely, in response to filtered sounds that produce spatial illusions in human listeners, neurons signal systematically incorrect locations that can be predicted by a model that also predicts the listeners' illusory reports. These results from the cat's auditory cortex, as well as more limited results from nonhuman primates, suggest a model in which the location of any particular sound source is represented in a distributed fashion within individual auditory cortical areas and among multiple cortical areas.


Subject(s)
Auditory Cortex/physiology , Neural Networks, Computer , Neurons/physiology , Sound Localization/physiology , Animals , Brain Mapping/methods , Humans