Results 1 - 20 of 23
1.
Hear Res ; 439: 108879, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37826916

ABSTRACT

We demonstrate how the structure of the auditory cortex (AC) can be investigated by combining computational modelling with advanced optimisation methods. We optimise a well-established AC model by means of an evolutionary algorithm. The model describes the AC in terms of multiple core, belt, and parabelt fields. The optimisation process finds the optimal connections between individual fields of the AC so that the model reproduces experimental magnetoencephalographic (MEG) data. In the current study, these data comprised the auditory event-related fields (ERFs) recorded from a human subject in an MEG experiment in which the stimulus-onset interval between consecutive tones was varied. The quality of the match between the synthesised and experimental waveforms was 98%. The results suggest that neural activity caused by feedback connections plays a particularly important role in shaping ERF morphology. Further, ERFs reflect the activity of the entire AC, and response adaptation due to stimulus repetition emerges from a complete reorganisation of AC dynamics rather than from a reduction of activity in discrete sources. Our findings constitute the first stage in establishing a new non-invasive method for uncovering the organisation of the human auditory cortex.
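
The abstract describes an evolutionary search over inter-field connection strengths, scored by how well the synthesised waveform matches the measured ERF. The following is a minimal, hypothetical sketch of such a loop; the forward model (`simulate_erf`), the correlation-based fitness, and all parameter values are illustrative assumptions, not the authors' model or code.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_erf(weights, t):
    """Hypothetical stand-in for the auditory-cortex model: maps a vector of
    inter-field connection weights to a synthetic event-related field."""
    freqs = np.linspace(5.0, 12.0, weights.size)                      # toy basis of damped oscillations
    basis = np.exp(-t / 0.2) * np.sin(2 * np.pi * freqs[:, None] * t)
    return weights @ basis

def fitness(weights, t, measured):
    """Match quality between synthesised and measured waveforms (correlation, clipped to [0, 1])."""
    r = np.corrcoef(simulate_erf(weights, t), measured)[0, 1]
    return 0.0 if np.isnan(r) else max(r, 0.0)

def evolve(measured, t, n_weights=8, pop_size=40, n_gen=200, sigma=0.1):
    """Simple truncation-selection evolutionary search over connection weights."""
    population = rng.normal(size=(pop_size, n_weights))
    for _ in range(n_gen):
        scores = np.array([fitness(w, t, measured) for w in population])
        parents = population[np.argsort(scores)[-pop_size // 2:]]      # keep the best half
        children = parents + sigma * rng.normal(size=parents.shape)    # mutate
        population = np.vstack([parents, children])
    scores = np.array([fitness(w, t, measured) for w in population])
    return population[scores.argmax()], scores.max()

# Demo with a surrogate "measured" waveform (noisy output of the toy model itself).
t = np.linspace(0.0, 0.5, 500)
measured = simulate_erf(rng.normal(size=8), t) + 0.05 * rng.normal(size=t.size)
best_weights, match = evolve(measured, t)
print(f"waveform match: {match:.2f}")
```

The toy loop only illustrates the optimisation principle; in the study itself the reported match between synthesised and experimental waveforms was 98%.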


Subject(s)
Auditory Cortex , Animals , Humans , Auditory Cortex/physiology , Brain Mapping , Magnetoencephalography , Macaca mulatta/physiology , Computer Simulation , Evoked Potentials, Auditory , Auditory Perception/physiology , Acoustic Stimulation
2.
Brain Behav ; 7(9): e00789, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28948083

ABSTRACT

INTRODUCTION: We examined which brain areas are involved in the comprehension of acoustically distorted speech using an experimental paradigm in which the same distorted sentence can be perceived at different levels of intelligibility. This change in intelligibility occurs via a single intervening presentation of the intact version of the sentence, and the effect lasts at least on the order of minutes. Because the acoustic structure of the distorted stimulus is kept fixed and only its intelligibility varies, brain activity specifically related to speech comprehension can be studied. METHODS: In a functional magnetic resonance imaging (fMRI) experiment, each stimulus set began with a block of six distorted sentences. This was followed by the intact counterparts of the sentences, after which the sentences were presented in distorted form again. A total of 18 such sets were presented to 20 human subjects. RESULTS: The blood oxygenation level dependent (BOLD) responses elicited by the distorted sentences that came after the disambiguating, intact sentences were contrasted with the responses to the sentences presented before disambiguation. This revealed increased activity in the bilateral frontal pole, the dorsal anterior cingulate/paracingulate cortex, and the right frontal operculum. Decreased BOLD responses were observed in the posterior insula, Heschl's gyrus, and the posterior superior temporal sulcus. CONCLUSIONS: The brain areas that showed BOLD enhancement for increased sentence comprehension have been associated with executive functions and with the mapping of incoming sensory information onto representations stored in episodic memory. Thus, the comprehension of acoustically distorted speech may be associated with the engagement of memory-related subsystems. Further, activity in the primary auditory cortex was modulated by prior experience, possibly within a predictive coding framework. Our results suggest that memory biases the perception of ambiguous sensory information toward interpretations that have the highest probability of being correct based on previous experience.


Subject(s)
Auditory Cortex/physiology , Brain/physiology , Comprehension/physiology , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation/methods , Adult , Auditory Cortex/diagnostic imaging , Brain/diagnostic imaging , Brain Mapping/methods , Female , Humans , Magnetic Resonance Imaging , Male , Young Adult
3.
Neuroimage ; 129: 214-223, 2016 Apr 01.
Article in English | MEDLINE | ID: mdl-26774614

ABSTRACT

Efficient speech perception requires the mapping of highly variable acoustic signals to distinct phonetic categories. How the brain overcomes this many-to-one mapping problem has remained unresolved. To infer the cortical location, latency, and dependency on attention of categorical speech sound representations in the human brain, we measured stimulus-specific adaptation of neuromagnetic responses to sounds from a phonetic continuum. The participants attended to the sounds while performing a non-phonetic listening task and, in a separate recording condition, ignored the sounds while watching a silent film. Neural adaptation indicative of phoneme category selectivity was found only during the attentive condition in the pars opercularis (POp) of the left inferior frontal gyrus, where the degree of selectivity correlated with the ability of the participants to categorize the phonetic stimuli. Importantly, these category-specific representations were activated at an early latency of 115-140 ms, which is compatible with the speed of perceptual phonetic categorization. Further, concurrent functional connectivity was observed between POp and posterior auditory cortical areas. These novel findings suggest that when humans attend to speech, the left POp mediates phonetic categorization through integration of auditory and motor information via the dorsal auditory stream.


Subject(s)
Prefrontal Cortex/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Female , Humans , Magnetoencephalography , Male , Signal Processing, Computer-Assisted , Young Adult
4.
Neuroimage ; 125: 131-143, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26477651

ABSTRACT

Recent studies have shown that acoustically distorted sentences can be perceived as either unintelligible or intelligible depending on whether one has previously been exposed to the undistorted, intelligible versions of the sentences. This allows studying processes specifically related to speech intelligibility since any change between the responses to the distorted stimuli before and after the presentation of their undistorted counterparts cannot be attributed to acoustic variability but, rather, to the successful mapping of sensory information onto memory representations. To estimate how the complexity of the message is reflected in speech comprehension, we applied this rapid change in perception to behavioral and magnetoencephalography (MEG) experiments using vowels, words and sentences. In the experiments, stimuli were initially presented to the subject in a distorted form, after which undistorted versions of the stimuli were presented. Finally, the original distorted stimuli were presented once more. The resulting increase in intelligibility observed for the second presentation of the distorted stimuli depended on the complexity of the stimulus: vowels remained unintelligible (behaviorally measured intelligibility 27%) whereas the intelligibility of the words increased from 19% to 45% and that of the sentences from 31% to 65%. This increase in the intelligibility of the degraded stimuli was reflected as an enhancement of activity in the auditory cortex and surrounding areas at early latencies of 130-160 ms. In the same regions, increasing stimulus complexity attenuated mean currents at latencies of 130-160 ms whereas at latencies of 200-270 ms the mean currents increased. These modulations in cortical activity may reflect feedback from top-down mechanisms enhancing the extraction of information from speech. The behavioral results suggest that memory-driven expectancies can have a significant effect on speech comprehension, especially in acoustically adverse conditions where the bottom-up information is decreased.


Subject(s)
Brain/physiology , Comprehension/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Female , Humans , Magnetoencephalography , Male , Signal Processing, Computer-Assisted , Speech Intelligibility/physiology , Young Adult
5.
BMC Neurosci ; 13: 157, 2012 Dec 31.
Article in English | MEDLINE | ID: mdl-23276297

ABSTRACT

BACKGROUND: The robustness of speech perception in the face of acoustic variation is founded on the ability of the auditory system to integrate the acoustic features of speech and to segregate them from background noise. This auditory scene analysis process is facilitated by top-down mechanisms, such as recognition memory for speech content. However, the cortical processes underlying these facilitatory mechanisms remain unclear. The present magnetoencephalography (MEG) study examined how the activity of auditory cortical areas is modulated by acoustic degradation and intelligibility of connected speech. The experimental design allowed for the comparison of cortical activity patterns elicited by acoustically identical stimuli which were perceived as either intelligible or unintelligible. RESULTS: In the experiment, a set of sentences was presented to the subject in distorted, undistorted, and again in distorted form. The intervening exposure to undistorted versions of sentences rendered the initially unintelligible, distorted sentences intelligible, as evidenced by an increase from 30% to 80% in the proportion of sentences reported as intelligible. These perceptual changes were reflected in the activity of the auditory cortex, with the auditory N1m response (~100 ms) being more prominent for the distorted stimuli than for the intact ones. In the time range of auditory P2m response (>200 ms), auditory cortex as well as regions anterior and posterior to this area generated a stronger response to sentences which were intelligible than unintelligible. During the sustained field (>300 ms), stronger activity was elicited by degraded stimuli in auditory cortex and by intelligible sentences in areas posterior to auditory cortex. CONCLUSIONS: The current findings suggest that the auditory system comprises bottom-up and top-down processes which are reflected in transient and sustained brain activity. It appears that analysis of acoustic features occurs during the first 100 ms, and sensitivity to speech intelligibility emerges in auditory cortex and surrounding areas from 200 ms onwards. The two processes are intertwined, with the activity of auditory cortical areas being modulated by top-down processes related to memory traces of speech and supporting speech intelligibility.


Subject(s)
Auditory Cortex/physiology , Brain Mapping/psychology , Speech Intelligibility/physiology , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation/methods , Adult , Brain Mapping/methods , Evoked Potentials, Auditory/physiology , Humans , Image Processing, Computer-Assisted/methods , Magnetoencephalography/methods , Magnetoencephalography/psychology
6.
Neuroimage ; 55(3): 1252-9, 2011 Apr 01.
Article in English | MEDLINE | ID: mdl-21215807

ABSTRACT

Most speech sounds are periodic due to the vibration of the vocal folds. Non-invasive studies of the human brain have revealed a periodicity-sensitive population in the auditory cortex which might contribute to the encoding of speech periodicity. Since the periodicity of natural speech varies from (almost) periodic to aperiodic, one may argue that speech aperiodicity could similarly be represented by a dedicated neuron population. In the current magnetoencephalography study, cortical sensitivity to periodicity was probed with natural periodic vowels and their aperiodic counterparts in a stimulus-specific adaptation paradigm. The effects of intervening adaptor stimuli on the N1m elicited by the probe stimuli (the actual effective stimuli) were studied under interstimulus intervals (ISIs) of 800 and 200 ms. The results indicated a periodicity-dependent release from adaptation which was observed for aperiodic probes alternating with periodic adaptors under both ISIs. Such release from adaptation can be attributed to the activation of a distinct neural population responsive to aperiodic (probe) but not to periodic (adaptor) stimuli. Thus, the current results suggest that the aperiodicity of speech sounds may be represented not only by decreased activation of the periodicity-sensitive population but, additionally, by the activation of a distinct cortical population responsive to speech aperiodicity.


Subject(s)
Cerebral Cortex/cytology , Cerebral Cortex/physiology , Neurons/physiology , Speech Perception/physiology , Acoustic Stimulation , Adaptation, Physiological/physiology , Data Interpretation, Statistical , Female , Functional Laterality/physiology , Humans , Magnetoencephalography , Male , Speech , Young Adult
7.
Brain Res ; 1367: 298-309, 2011 Jan 07.
Article in English | MEDLINE | ID: mdl-20969833

ABSTRACT

The cortical mechanisms underlying human speech perception in acoustically adverse conditions remain largely unknown. Besides distortions from external sources, degradation of the acoustic structure of the sound itself poses further demands on perceptual mechanisms. We conducted a magnetoencephalography (MEG) study to reveal whether the perceptual differences between these distortions are reflected in cortically generated auditory evoked fields (AEFs). To mimic the degradation of the internal structure of sound and external distortion, we degraded speech sounds by reducing the amplitude resolution of the signal waveform and by using additive noise, respectively. Since both distortion types increase the relative strength of high frequencies in the signal spectrum, we also used versions of the stimuli which were low-pass filtered to match the tilted spectral envelope of the undistorted speech sound. This enabled us to examine whether the changes in the overall spectral shape of the stimuli affect the AEFs. We found that the auditory N1m response was substantially enhanced as the amplitude resolution was reduced. In contrast, the N1m was insensitive to distorted speech with additive noise. Changing the spectral envelope had no effect on the N1m. We propose that the observed amplitude enhancements are due to an increase in noisy spectral harmonics produced by the reduction of the amplitude resolution, which activates the periodicity-sensitive neuronal populations participating in pitch extraction processes. The current findings suggest that the auditory cortex processes speech sounds in a differential manner when the internal structure of sound is degraded compared with the speech distorted by external noise.
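
The two degradation types contrasted here (reduced amplitude resolution of the waveform versus additive external noise) can be illustrated with a minimal sketch. The bit depth, the SNR, the 16-kHz sampling rate, and the synthetic harmonic "vowel" below are illustrative assumptions rather than the stimulus parameters used in the study.

```python
import numpy as np

def reduce_amplitude_resolution(x, n_bits):
    """Re-quantise a waveform scaled to [-1, 1] with a uniform quantiser of n_bits per sample."""
    step = 2.0 / (2 ** n_bits)
    return np.clip(np.round(x / step) * step, -1.0, 1.0)

def add_noise(x, snr_db, rng=None):
    """Add white Gaussian noise at the requested signal-to-noise ratio (dB)."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise_power = np.mean(x ** 2) / (10 ** (snr_db / 10))
    return x + rng.normal(scale=np.sqrt(noise_power), size=x.shape)

fs = 16_000
t = np.arange(0, 0.2, 1 / fs)
vowel = sum(np.sin(2 * np.pi * 106 * k * t) / k for k in range(1, 6))   # crude harmonic "vowel"
vowel /= np.abs(vowel).max()

internal_degradation = reduce_amplitude_resolution(vowel, n_bits=2)   # degraded internal structure
external_distortion = add_noise(vowel, snr_db=0)                      # external additive noise
```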


Subject(s)
Auditory Cortex/physiology , Brain Waves/physiology , Evoked Potentials, Auditory/physiology , Noise , Sound Localization/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Brain Mapping , Humans , Magnetoencephalography , Male , Phonetics , Psychoacoustics , Reaction Time/physiology , Spectrum Analysis , Statistics, Nonparametric , Young Adult
8.
J Acoust Soc Am ; 128(1): 224-34, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20649218

ABSTRACT

Cortical sensitivity to the periodicity of speech sounds has been evidenced by larger, more anterior responses to periodic than to aperiodic vowels in several non-invasive studies of the human brain. The current study investigated the temporal integration underlying the cortical sensitivity to speech periodicity by studying the increase in periodicity-specific cortical activation with growing stimulus duration. Periodicity-specific activation was estimated from magnetoencephalography as the differences between the N1m responses elicited by periodic and aperiodic vowel stimuli. The duration of the vowel stimuli with a fundamental frequency (F0=106 Hz) representative of typical male speech was varied in units corresponding to the vowel fundamental period (9.4 ms) and ranged from one to ten units. Cortical sensitivity to speech periodicity, as reflected by larger and more anterior responses to periodic than to aperiodic stimuli, was observed when stimulus duration was 3 cycles or more. Further, for stimulus durations of 5 cycles and above, response latency was shorter for the periodic than for the aperiodic stimuli. Together the current results define a temporal window of integration for the periodicity of speech sounds in the F0 range of typical male speech. The length of this window is 3-5 cycles, or 30-50 ms.
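
For reference, the period and window-length figures follow directly from the reported fundamental frequency:

```latex
T_0 = \frac{1}{F_0} = \frac{1}{106\,\mathrm{Hz}} \approx 9.4\,\mathrm{ms},
\qquad
d_n = n \, T_0, \quad n = 1, \dots, 10 \;\; (\approx 9.4\text{--}94\,\mathrm{ms}),
\qquad
3\,T_0 \ \text{to} \ 5\,T_0 \approx 30\text{--}50\,\mathrm{ms}.
```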


Subject(s)
Auditory Cortex/physiology , Periodicity , Speech Acoustics , Speech Perception , Time Perception , Acoustic Stimulation , Adult , Evoked Potentials, Auditory , Female , Humans , Magnetoencephalography , Male , Models, Statistical , Reaction Time , Signal Processing, Computer-Assisted , Sound Spectrography , Time Factors
9.
Clin Neurophysiol ; 121(6): 912-20, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20457006

ABSTRACT

OBJECTIVE: To investigate the effects of cortical ischemic stroke and aphasic symptoms on auditory processing abilities in humans as indicated by the transient brain response, a recently documented cortical deflection which has been shown to accurately predict behavioral sound detection. METHODS: Using speech and sinusoidal stimuli in the active (attend) and the passive (ignore) recording condition, cortical activity of ten aphasic stroke patients and ten control subjects was recorded with whole-head MEG and behavioral measurements. RESULTS: Stroke patients exhibited significantly diminished neuromagnetic transient responses for both sinusoidal and speech stimulation when compared to the control subjects. The attention-related increase of response amplitude was slightly more pronounced in the control subjects than in the stroke patients but this difference did not reach statistical significance. CONCLUSIONS: Left-hemispheric ischemic stroke impairs the processing of sinusoidal and speech sounds. This deficit seems to depend on the severity and location of stroke. SIGNIFICANCE: Directly observable, non-invasive brain measures can be used in assessing the effects of stroke which are related to the behavioral symptoms patients manifest.


Subject(s)
Aphasia/physiopathology , Auditory Pathways/physiopathology , Auditory Perception/physiology , Cerebral Cortex/physiopathology , Evoked Potentials, Auditory/physiology , Stroke/physiopathology , Acoustic Stimulation , Aged , Aged, 80 and over , Analysis of Variance , Aphasia/complications , Female , Functional Laterality/physiology , Humans , Magnetoencephalography , Male , Middle Aged , Phonetics , Reaction Time/physiology , Severity of Illness Index , Stroke/complications
10.
Clin Neurophysiol ; 121(6): 902-11, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20359943

ABSTRACT

OBJECTIVE: The aim of the study was to investigate the effects of aging on human cortical auditory processing of rising-intensity sinusoids and speech sounds. We also aimed to evaluate the suitability of a recently discovered transient brain response for applied research. METHODS: In young and aged adults, magnetic fields produced by cortical activity elicited by a 570-Hz pure-tone and a speech sound (Finnish vowel /a/) were measured using MEG. The stimuli rose smoothly in intensity from an inaudible to an audible level over 750 ms. We used both the active (attended) and the passive recording condition. In the attended condition, behavioral reaction times were measured. RESULTS: The latency of the transient brain response was prolonged in the aged compared to the young and the accuracy of behavioral responses to sinusoids was diminished among the aged. In response amplitudes, no differences were found between the young and the aged. In both groups, spectral complexity of the stimuli enhanced response amplitudes. CONCLUSIONS: Aging seems to affect the temporal dynamics of cortical auditory processing. The transient brain response is sensitive both to spectral complexity and aging-related changes in the timing of cortical activation. SIGNIFICANCE: The transient brain responses elicited by rising-intensity sounds could be useful in revealing differences in auditory cortical processing in applied research.


Subject(s)
Aging/physiology , Auditory Cortex/physiology , Auditory Pathways/physiology , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Acoustic Stimulation , Adult , Age Factors , Aged , Analysis of Variance , Attention/physiology , Female , Humans , Magnetoencephalography , Male , Middle Aged , Psychomotor Performance/physiology , Reaction Time/physiology
11.
J Acoust Soc Am ; 127(2): EL60-5, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20136180

ABSTRACT

A magnetoencephalography study was conducted to reveal the neural code of interaural time difference (ITD) in the human cortex. Widely used cross-correlator models predict that the code consists of narrow receptive fields distributed across all ITDs. The present findings are, however, more in line with a neural code formed by two opponent neural populations: one tuned to the left and the other to the right hemifield. The results are consistent with models of ITD extraction in the auditory brainstem of small mammals and, therefore, suggest that similar computational principles underlie human sound source localization.
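
The two codes contrasted in the abstract can be sketched side by side: a bank of narrowly tuned cross-correlator channels versus two broadly tuned opponent hemifield channels. The channel centres, tuning widths, and sigmoid slope below are illustrative assumptions, not parameters estimated in the study.

```python
import numpy as np

best_itds = np.linspace(-700, 700, 29)          # assumed channel centres (microseconds)

def crosscorrelator_code(itd_us, width=50.0):
    """Bank of narrow channels, each tuned to one best ITD (Jeffress-like place code)."""
    return np.exp(-0.5 * ((itd_us - best_itds) / width) ** 2)

def opponent_code(itd_us, slope=0.005):
    """Two broad channels tuned to the left and right hemifields (opponent rate code)."""
    right = 1.0 / (1.0 + np.exp(-slope * itd_us))
    return np.array([1.0 - right, right])        # [left-tuned rate, right-tuned rate]

for itd in (-400, 0, 400):
    narrow = crosscorrelator_code(itd)
    left, right = opponent_code(itd)
    print(f"ITD {itd:+4d} us | place-code peak at {best_itds[narrow.argmax()]:+6.0f} us"
          f" | opponent readout (R - L) {right - left:+.2f}")
```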


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Ear , Models, Neurological , Acoustic Stimulation , Adult , Evoked Potentials, Auditory , Female , Functional Laterality , Head , Humans , Magnetoencephalography , Male , Photic Stimulation , Time Factors , Visual Perception/physiology
12.
BMC Neurosci ; 11: 24, 2010 Feb 22.
Article in English | MEDLINE | ID: mdl-20175890

ABSTRACT

BACKGROUND: Recent studies have shown that the human right-hemispheric auditory cortex is particularly sensitive to reduction in sound quality, with an increase in distortion resulting in an amplification of the auditory N1m response measured in the magnetoencephalography (MEG). Here, we examined whether this sensitivity is specific to the processing of acoustic properties of speech or whether it can be observed also in the processing of sounds with a simple spectral structure. We degraded speech stimuli (vowel /a/), complex non-speech stimuli (a composite of five sinusoidals), and sinusoidal tones by decreasing the amplitude resolution of the signal waveform. The amplitude resolution was impoverished by reducing the number of bits to represent the signal samples. Auditory evoked magnetic fields (AEFs) were measured in the left and right hemisphere of sixteen healthy subjects. RESULTS: We found that the AEF amplitudes increased significantly with stimulus distortion for all stimulus types, which indicates that the right-hemispheric N1m sensitivity is not related exclusively to degradation of acoustic properties of speech. In addition, the P1m and P2m responses were amplified with increasing distortion similarly in both hemispheres. The AEF latencies were not systematically affected by the distortion. CONCLUSIONS: We propose that the increased activity of AEFs reflects cortical processing of acoustic properties common to both speech and non-speech stimuli. More specifically, the enhancement is most likely caused by spectral changes brought about by the decrease of amplitude resolution, in particular the introduction of periodic, signal-dependent distortion to the original sound. Converging evidence suggests that the observed AEF amplification could reflect cortical sensitivity to periodic sounds.


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Evoked Potentials, Auditory , Female , Functional Laterality , Humans , Magnetoencephalography , Male , Neuropsychological Tests , Pattern Recognition, Physiological/physiology , Psychoacoustics , Reaction Time , Speech , Time Factors
13.
Psychophysiology ; 47(1): 66-122, 2010 Jan 01.
Article in English | MEDLINE | ID: mdl-19686538

ABSTRACT

The current review constitutes the first comprehensive look at the possibility that the mismatch negativity (MMN, the deflection of the auditory ERP/ERF elicited by stimulus change) might be generated by so-called fresh-afferent neuronal activity. This possibility has been repeatedly ruled out for the past 30 years, with the prevailing theoretical accounts relying on a memory-based explanation instead. We propose that the MMN is, in essence, a latency- and amplitude-modulated expression of the auditory N1 response, generated by fresh-afferent activity of cortical neurons that are under nonuniform levels of adaptation.


Subject(s)
Auditory Perception/physiology , Electroencephalography/psychology , Evoked Potentials, Auditory/physiology , Acoustic Stimulation , Adaptation, Psychological/physiology , Affect/physiology , Auditory Cortex/physiology , Auditory Threshold , Humans , Memory/physiology , Neuronal Plasticity/physiology
14.
Brain Res ; 1306: 93-9, 2010 Jan 08.
Article in English | MEDLINE | ID: mdl-19799877

ABSTRACT

Recent single-neuron recordings in monkeys and magnetoencephalography (MEG) data on humans suggest that auditory space is represented in cortex as a population rate code whereby spatial receptive fields are wide and centered at locations to the far left or right of the subject. To explore the details of this code in the human brain, we conducted an MEG study utilizing realistic spatial sound stimuli presented in a stimulus-specific adaptation paradigm. In this paradigm, the spatial selectivity of cortical neurons is measured as the effect the location of a preceding adaptor has on the response to a subsequent probe sound. Two types of stimuli were used: a wideband noise sound and a speech sound. The cortical hemispheres differed in the effects the adaptors had on the response to a probe sound presented in front of the subject. The right-hemispheric responses were attenuated more by an adaptor to the left than by an adaptor to the right of the subject. In contrast, the left-hemispheric responses were similarly affected by adaptors in these two locations. When interpreted in terms of single-neuron spatial receptive fields, these results support a population rate code model where neurons in the right hemisphere are more often tuned to the left than to the right of the perceiver while in the left hemisphere these two neuronal populations are of equal size.


Subject(s)
Auditory Perception/physiology , Cerebral Cortex/physiology , Sound Localization/physiology , Space Perception/physiology , Acoustic Stimulation , Adult , Analysis of Variance , Evoked Potentials, Auditory , Female , Functional Laterality , Humans , Magnetoencephalography , Male , Models, Neurological , Neurons/physiology , Speech , Speech Perception/physiology
15.
PLoS One ; 4(10): e7600, 2009 Oct 26.
Article in English | MEDLINE | ID: mdl-19855836

ABSTRACT

BACKGROUND: Previous work on the human auditory cortex has revealed areas specialized in spatial processing but how the neurons in these areas represent the location of a sound source remains unknown. METHODOLOGY/PRINCIPAL FINDINGS: Here, we performed a magnetoencephalography (MEG) experiment with the aim of revealing the neural code of auditory space implemented by the human cortex. In a stimulus-specific adaptation paradigm, realistic spatial sound stimuli were presented in pairs of adaptor and probe locations. We found that the attenuation of the N1m response depended strongly on the spatial arrangement of the two sound sources. These location-specific effects showed that sounds originating from locations within the same hemifield activated the same neuronal population regardless of the spatial separation between the sound sources. In contrast, sounds originating from opposite hemifields activated separate groups of neurons. CONCLUSIONS/SIGNIFICANCE: These results are highly consistent with a rate code of spatial location formed by two opponent populations, one tuned to locations in the left and the other to those in the right. This indicates that the neuronal code of sound source location implemented by the human auditory cortex is similar to that previously found in other primates.


Subject(s)
Auditory Cortex/anatomy & histology , Magnetoencephalography/methods , Sound Localization/physiology , Acoustic Stimulation/methods , Adult , Auditory Cortex/physiology , Auditory Pathways/physiology , Auditory Perception/physiology , Brain Mapping , Data Interpretation, Statistical , Evoked Potentials, Auditory/physiology , Humans , Neurons/metabolism , Sound
16.
J Acoust Soc Am ; 125(5): 3177-85, 2009 May.
Article in English | MEDLINE | ID: mdl-19425660

ABSTRACT

Aperiodicity of speech alters voice quality. The current study investigated the relationship between vowel aperiodicity and human auditory cortical N1m and sustained field (SF) responses with magnetoencephalography. Behavioral estimates of vocal roughness perception were also collected. Stimulus aperiodicity was experimentally varied by increasing vocal jitter with techniques that model the mechanisms of natural speech production. N1m and SF responses for vowels with high vocal jitter were reduced in amplitude as compared to those elicited by vowels of normal vocal periodicity. Behavioral results indicated that the ratings of vocal roughness increased up to the highest jitter values. Based on these findings, the representation of vocal jitter in the auditory cortex is suggested to be formed on the basis of reduced activity in periodicity-sensitive neural populations.
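
A minimal way to picture vocal jitter is as random perturbation of successive glottal period lengths. The sketch below builds a jittered impulse train; the actual stimuli were generated with a production-based technique, and the jitter percentages, F0, and sampling rate here are illustrative assumptions.

```python
import numpy as np

def jittered_pulse_train(f0=106.0, jitter_pct=2.0, duration=0.5, fs=16_000, seed=0):
    """Impulse train whose successive period lengths vary randomly by +/- jitter_pct %.
    A generic illustration of jitter, not the production-model technique used in the study."""
    rng = np.random.default_rng(seed)
    nominal_period = 1.0 / f0
    x = np.zeros(int(duration * fs))
    t = 0.0
    while t < duration:
        x[int(t * fs)] = 1.0
        t += nominal_period * (1.0 + rng.uniform(-jitter_pct, jitter_pct) / 100.0)
    return x

periodic_source = jittered_pulse_train(jitter_pct=0.0)     # strictly periodic source
high_jitter_source = jittered_pulse_train(jitter_pct=5.0)  # rough, aperiodic source
```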


Subject(s)
Auditory Cortex/physiology , Speech Perception/physiology , Voice Quality , Acoustic Stimulation , Adult , Analysis of Variance , Evoked Potentials, Auditory , Female , Humans , Magnetoencephalography , Male , Phonetics , Speech , Speech Acoustics , Time Factors
17.
BMC Neurosci ; 8: 78, 2007 Sep 26.
Article in English | MEDLINE | ID: mdl-17897443

ABSTRACT

BACKGROUND: In the field of auditory neuroscience, much research has focused on the neural processes underlying human sound localization. A recent magnetoencephalography (MEG) study investigated localization-related brain activity by measuring the N1m event-related response originating in the auditory cortex. It was found that the dynamic range of the right-hemispheric N1m response, defined as the mean difference in response magnitude between contralateral and ipsilateral stimulation, reflects cortical activity related to the discrimination of horizontal sound direction. Interestingly, the results also suggested that the presence of realistic spectral information within horizontally located spatial sounds resulted in a larger right-hemispheric N1m dynamic range. Because spectral cues are predominant at high frequencies, the present study investigated the issue further by removing frequencies from the spatial stimuli with low-pass filtering. This resulted in a stepwise elimination of direction-specific spectral information. Interaural time and level differences were kept constant. The original, unfiltered stimuli were broadband noise signals presented from five frontal horizontal directions and binaurally recorded for eight human subjects with miniature microphones placed in each subject's ear canals. Stimuli were presented to the subjects during MEG registration and in a behavioral listening experiment. RESULTS: The dynamic range of the right-hemispheric N1m amplitude was not significantly affected even when all frequencies above 600 Hz were removed. The dynamic range of the left-hemispheric N1m response was significantly diminished by the removal of frequencies over 7.5 kHz. The subjects' behavioral sound direction discrimination was affected only by the removal of frequencies over 600 Hz. CONCLUSION: In accord with previous psychophysical findings, the current results indicate that frontal horizontal sound localization and related right-hemispheric cortical processes are insensitive to the presence of high-frequency spectral information. The previously described changes in localization-related brain activity, reflected in the enlarged N1m dynamic range elicited by natural spatial stimuli, can most likely be attributed to the processing of individualized spatial cues present already at relatively low frequencies. The left-hemispheric effect could be an indication of left-hemispheric processing of high-frequency sound information unrelated to sound localization. Taken together, these results provide converging evidence for a hemispheric asymmetry in sound localization.
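
The stepwise elimination of high-frequency spectral cues amounts to low-pass filtering the binaural stimuli at successively lower cutoffs. A minimal sketch, assuming SciPy and a zero-phase Butterworth filter (the filter type and order are illustrative choices; only the 7.5 kHz and 600 Hz cutoffs come from the study):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def low_pass(x, cutoff_hz, fs, order=8):
    """Zero-phase Butterworth low-pass filter."""
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

fs = 48_000
broadband = np.random.default_rng(0).normal(size=fs)    # 1 s of broadband noise

# Progressively remove direction-specific spectral detail (cutoffs as in the study).
filtered_7500 = low_pass(broadband, 7_500, fs)
filtered_600 = low_pass(broadband, 600, fs)
```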


Subject(s)
Acoustic Stimulation/methods , Pitch Perception/physiology , Sound Localization/physiology , Sound , Adult , Female , Humans , Magnetoencephalography/methods , Male
18.
Neuroreport ; 18(6): 601-5, 2007 Apr 16.
Article in English | MEDLINE | ID: mdl-17413665

ABSTRACT

We investigated how degraded speech sounds activate the auditory cortices of the left and right hemisphere. To degrade the stimuli, we introduced uniform scalar quantization, a controlled and replicable manipulation not previously used in cognitive neuroscience. Three Finnish vowels (/a/, /e/ and /u/) were used as stimuli for 10 participants in magnetoencephalography registrations. Compared with the original vowel sounds, the degraded sounds increased the amplitude of the right-hemispheric N1m without affecting its latency, whereas the amplitude and latency of the N1m in the left hemisphere remained unaffected. Although the participants were able to identify the stimuli correctly, increased degradation led to increased reaction times, which correlated positively with the N1m amplitude. Thus, the auditory cortex of the right hemisphere might be particularly involved in processing degraded speech, possibly compensating for the poor signal quality by increasing its activity.


Subject(s)
Auditory Cortex/physiology , Dominance, Cerebral/physiology , Magnetoencephalography , Phonetics , Speech Perception/physiology , Acoustic Stimulation/methods , Adult , Female , Humans , Male , Reaction Time/physiology
19.
Neurosci Lett ; 396(1): 17-22, 2006 Mar 20.
Article in English | MEDLINE | ID: mdl-16343772

ABSTRACT

In an attempt to delineate the assumed 'what' and 'where' processing streams, we studied the processing of spatial sound in the human cortex by using magnetoencephalography in the passive and active recording conditions and two kinds of spatial stimuli: individually constructed, highly realistic spatial (3D) stimuli and stimuli containing interaural time difference (ITD) cues only. The auditory P1m, N1m, and P2m responses of the event-related field were found to be sensitive to the direction of sound source in the azimuthal plane. In general, the right-hemispheric responses to spatial sounds were more prominent than the left-hemispheric ones. The right-hemispheric P1m and N1m responses peaked earlier for sound sources in the contralateral than for sources in the ipsilateral hemifield and the peak amplitudes of all responses reached their maxima for contralateral sound sources. The amplitude of the right-hemispheric P2m response reflected the degree of spatiality of sound, being twice as large for the 3D than ITD stimuli. The results indicate that the right hemisphere is specialized in the processing of spatial cues in the passive recording condition. Minimum current estimate (MCE) localization revealed that temporal areas were activated both in the active and passive condition. This initial activation, taking place at around 100 ms, was followed by parietal and frontal activity at 180 and 200 ms, respectively. The latter activations, however, were specific to attentional engagement and motor responding. This suggests that parietal activation reflects active responding to a spatial sound rather than auditory spatial processing as such.


Subject(s)
Auditory Cortex/physiology , Evoked Potentials, Auditory/physiology , Pitch Discrimination/physiology , Reaction Time/physiology , Sound Localization/physiology , Space Perception/physiology , Acoustic Stimulation , Adult , Auditory Cortex/anatomy & histology , Auditory Pathways/anatomy & histology , Auditory Pathways/physiology , Frontal Lobe/anatomy & histology , Frontal Lobe/physiology , Functional Laterality/physiology , Humans , Magnetoencephalography , Male , Nerve Net/anatomy & histology , Nerve Net/physiology , Neural Pathways/anatomy & histology , Neural Pathways/physiology , Parietal Lobe/anatomy & histology , Parietal Lobe/physiology
20.
Brain Res Cogn Brain Res ; 24(3): 364-79, 2005 Aug.
Article in English | MEDLINE | ID: mdl-16099350

ABSTRACT

Here, the perception of auditory spatial information as indexed by behavioral measures is linked to brain dynamics as reflected by the N1m response recorded with whole-head magnetoencephalography (MEG). Broadband noise stimuli with realistic spatial cues corresponding to eight direction angles in the horizontal plane were constructed via custom-made, individualized binaural recordings (BAR) and generic head-related transfer functions (HRTF). For comparison purposes, stimuli with impoverished acoustical cues were created via interaural time and level differences (ITDs and ILDs) and their combinations. MEG recordings in ten subjects revealed that the amplitude and the latency of the N1m exhibit directional tuning to sound location, with the amplitude of the right-hemispheric N1m being particularly sensitive to the amount of spatial cues in the stimuli. The BAR, HRTF, and combined ITD + ILD stimuli resulted both in a larger dynamic range and in a more systematic distribution of the N1m amplitude across stimulus angle than did the ITD or ILD stimuli alone. Further, the right-hemispheric source loci of the N1m responses for the BAR and HRTF stimuli were anterior to those for the ITD and ILD stimuli. In behavioral tests, we measured the ability of the subjects to localize BAR and HRTF stimuli in terms of azimuthal error and front-back confusions. We found that behavioral performance correlated positively with the amplitude of the N1m. Thus, activity taking place already in the auditory cortex predicts the behavioral detection of spatial stimuli, and the amount of spatial cues embedded in the signal is reflected in the activity of this brain area.
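
The impoverished-cue stimuli can be pictured as a mono noise burst turned into a stereo pair by imposing a pure interaural delay (ITD) and a level difference (ILD). The sketch below is a generic illustration under assumed sign conventions; it does not reproduce the individualized binaural recordings or HRTF processing used for the richer stimuli.

```python
import numpy as np

def apply_itd_ild(mono, fs, itd_us=0.0, ild_db=0.0):
    """Turn a mono burst into a stereo pair with a pure ITD and ILD.
    Convention (assumed): positive values place the virtual source to the right,
    so the left ear lags and the right ear is louder."""
    delay = int(round(abs(itd_us) * 1e-6 * fs))
    lagged = np.concatenate([np.zeros(delay), mono])[: mono.size]
    left, right = (lagged, mono) if itd_us >= 0 else (mono, lagged)
    gain = 10 ** (ild_db / 40)                    # split the ILD symmetrically across ears
    return np.stack([left / gain, right * gain])

fs = 48_000
burst = np.random.default_rng(0).normal(size=int(0.1 * fs))    # 100-ms broadband noise burst
stereo = apply_itd_ild(burst, fs, itd_us=500.0, ild_db=6.0)
```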


Subject(s)
Auditory Cortex/physiology , Sound Localization/physiology , Space Perception/physiology , Acoustic Stimulation , Adult , Evoked Potentials, Auditory/physiology , Female , Functional Laterality/physiology , Humans , Magnetoencephalography , Male , Middle Aged , Psychomotor Performance/physiology