Results 1 - 20 of 42
1.
Neuroimage ; 125: 131-143, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26477651

ABSTRACT

Recent studies have shown that acoustically distorted sentences can be perceived as either unintelligible or intelligible depending on whether one has previously been exposed to the undistorted, intelligible versions of the sentences. This allows studying processes specifically related to speech intelligibility since any change between the responses to the distorted stimuli before and after the presentation of their undistorted counterparts cannot be attributed to acoustic variability but, rather, to the successful mapping of sensory information onto memory representations. To estimate how the complexity of the message is reflected in speech comprehension, we applied this rapid change in perception to behavioral and magnetoencephalography (MEG) experiments using vowels, words and sentences. In the experiments, stimuli were initially presented to the subject in a distorted form, after which undistorted versions of the stimuli were presented. Finally, the original distorted stimuli were presented once more. The resulting increase in intelligibility observed for the second presentation of the distorted stimuli depended on the complexity of the stimulus: vowels remained unintelligible (behaviorally measured intelligibility 27%) whereas the intelligibility of the words increased from 19% to 45% and that of the sentences from 31% to 65%. This increase in the intelligibility of the degraded stimuli was reflected as an enhancement of activity in the auditory cortex and surrounding areas at early latencies of 130-160 ms. In the same regions, increasing stimulus complexity attenuated mean currents at latencies of 130-160 ms, whereas at latencies of 200-270 ms the mean currents increased. These modulations in cortical activity may reflect feedback from top-down mechanisms enhancing the extraction of information from speech. The behavioral results suggest that memory-driven expectancies can have a significant effect on speech comprehension, especially in acoustically adverse conditions where the bottom-up information is decreased.


Subject(s)
Brain/physiology; Comprehension/physiology; Speech Perception/physiology; Acoustic Stimulation; Adult; Female; Humans; Magnetoencephalography; Male; Signal Processing, Computer-Assisted; Speech Intelligibility/physiology; Young Adult
2.
J Acoust Soc Am ; 137(6): 3356-65, 2015 Jun.
Article in English | MEDLINE | ID: mdl-26093425

ABSTRACT

Natural auditory scenes often consist of several sound sources overlapping in time but separated in space. Yet, location is not fully exploited in auditory grouping: spatially separated sounds can get perceptually fused into a single auditory object, and this leads to difficulties in the identification and localization of concurrent sounds. Here, the brain mechanisms responsible for grouping across spatial locations were explored in magnetoencephalography (MEG) recordings. The results show that the cortical representation of a vowel spatially separated into two locations reflects the perceived location of the speech sound rather than the physical locations of the individual components. In other words, the auditory scene is neurally rearranged to bring components into spatial alignment when they are deemed to belong to the same object. This renders the original spatial information unavailable at the level of the auditory cortex and may contribute to difficulties in concurrent sound segregation.


Subject(s)
Auditory Cortex/physiology; Auditory Pathways/physiology; Sound Localization; Speech Acoustics; Speech Perception; Voice Quality; Acoustic Stimulation; Humans; Magnetoencephalography; Male; Psychoacoustics; Signal Detection, Psychological; Sound Spectrography
3.
Dev Neuropsychol ; 38(8): 550-66, 2013.
Article in English | MEDLINE | ID: mdl-24219695

ABSTRACT

Identifying children at risk for reading problems or dyslexia at kindergarten age could improve support for beginning readers. Brain event-related potentials (ERPs) were measured for temporally complex pseudowords and corresponding non-speech stimuli in 6.5-year-old children who took part in behavioral literacy tests again at age 9, in the second grade. Children who had reading problems at school age had larger N250 responses to speech and non-speech stimuli, particularly over the left hemisphere. The brain responses also correlated with reading skills. The results suggest that atypical auditory and speech processing is a neural-level risk factor for future reading problems. [Supplementary material is available for this article. Go to the publisher's online edition of Developmental Neuropsychology for the following free supplemental resources: sound files used in the experiments - three speech sounds and corresponding non-speech sounds with short, intermediate, and long gaps.]


Subject(s)
Dyslexia/diagnosis; Evoked Potentials, Auditory/physiology; Reading; Speech Perception/physiology; Acoustic Stimulation; Analysis of Variance; Brain; Brain Mapping; Case-Control Studies; Child; Dyslexia/physiopathology; Dyslexia/psychology; Electroencephalography; Female; Humans; Male; Phonetics; Speech
4.
Cogn Neurosci ; 4(2): 99-106, 2013.
Article in English | MEDLINE | ID: mdl-24073735

ABSTRACT

This study evaluated whether the linguistic multi-feature paradigm, with five types of speech-sound changes and novel sounds, is a suitable neurophysiological measure of central auditory processing in toddlers. Participants were 18 typically developing 2-year-old children. Syllable stimuli elicited significant obligatory responses, and syllable changes elicited significant mismatch negativity (MMN) responses, which suggests that toddlers can discriminate auditory features from an alternating speech-sound stream. The MMNs were lateralized similarly to those found earlier in adults. Furthermore, novel sounds elicited a significant novelty P3 response. Thus, the linguistic multi-feature paradigm with novel sounds is feasible for the concurrent investigation of the different stages of central auditory processing in 2-year-old children, ranging from pre-attentive encoding and discrimination of stimuli to attentional mechanisms, in speech-like stimulus compositions. In conclusion, this time-efficient paradigm can be applied to investigating central auditory development and impairments in toddlers, in whom developmental changes in speech-related cortical functions and language are rapid.


Subject(s)
Evoked Potentials, Auditory/physiology; Phonetics; Speech Perception/physiology; Acoustic Stimulation/methods; Child, Preschool; Electroencephalography; Feasibility Studies; Humans; Infant
5.
J Acoust Soc Am ; 133(4): 2377-89, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23556603

ABSTRACT

High vocal effort has characteristic acoustic effects on speech. This study focuses on the utilization of this information by human listeners and a machine-based detection system in the task of detecting shouted speech in the presence of noise. Both female and male speakers read Finnish sentences using normal and shouted voice in controlled conditions, with the sound pressure level recorded. The speech material was artificially corrupted by noise and supplemented with pure noise. The human performance level was statistically evaluated by a listening test, where the subjects labeled noisy samples according to whether shouting was heard or not. A Bayesian detection system was constructed and statistically evaluated. Its performance was compared against that of human listeners, substituting different spectrum analysis methods in the feature extraction stage. Using features capable of taking into account the spectral fine structure (i.e., the fundamental frequency and its harmonics), the machine reached the detection level of humans even in the noisiest conditions. In the listening test, male listeners detected shouted speech significantly better than female listeners, especially with speakers making a smaller vocal effort increase for shouting.
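The abstract does not spell out the detector's implementation. As a rough, hedged illustration of the general approach (a Gaussian likelihood-ratio decision over simple spectral features), the sketch below is a toy version only; the feature set, class models, and all function and parameter names are placeholders, not the authors' published system.

# Toy sketch (not the published detector): Gaussian likelihood-ratio decision
# between "shouted" and "normal" speech based on simple spectral features.
import numpy as np

def spectral_features(frame, fs):
    """Illustrative features: log frame energy and spectral tilt (high/low band ratio)."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    low = spectrum[freqs < 1000].sum() + 1e-12
    high = spectrum[freqs >= 1000].sum() + 1e-12
    return np.array([np.log(spectrum.sum() + 1e-12), np.log(high / low)])

def fit_gaussian(feature_matrix):
    """Fit a diagonal-covariance Gaussian to a matrix of feature vectors (one per row)."""
    return feature_matrix.mean(axis=0), feature_matrix.var(axis=0) + 1e-6

def log_likelihood(x, model):
    mean, var = model
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def is_shouted(frame, fs, shout_model, normal_model, threshold=0.0):
    """Bayesian decision: label the frame as shouted if the log-likelihood ratio exceeds the threshold."""
    x = spectral_features(frame, fs)
    return log_likelihood(x, shout_model) - log_likelihood(x, normal_model) > threshold

In this toy version the two class models would be fitted with fit_gaussian on labeled training frames; the published system instead compared several spectrum analysis methods in the feature extraction stage, including features that capture the fundamental frequency and its harmonics.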


Subject(s)
Acoustics/instrumentation; Loudness Perception; Noise/adverse effects; Perceptual Masking; Speech Acoustics; Speech Perception; Acoustic Stimulation; Analysis of Variance; Audiometry, Speech; Bayes Theorem; Female; Humans; Male; Sex Factors; Signal Detection, Psychological; Signal Processing, Computer-Assisted; Signal-To-Noise Ratio; Sound Spectrography; Speech Production Measurement
6.
J Acoust Soc Am ; 134(6): 4508, 2013 Dec.
Article in English | MEDLINE | ID: mdl-25669261

ABSTRACT

Previous studies on fusion in speech perception have demonstrated the ability of the human auditory system to group separate components of speech-like sounds together and consequently to enable the identification of speech despite the spatial separation between the components. Typically, the spatial separation has been implemented using headphone reproduction, where the different components evoke auditory images at different lateral positions. In the present study, a multichannel loudspeaker system was used to investigate whether the correct vowel is identified and whether two auditory events are perceived when a noise-excited vowel is divided into two spatially separated components. The two components consisted of the even and odd formants. Both the amount of spatial separation between the components and the directions of the components were varied. Neither the spatial separation nor the directions of the components affected vowel identification. Interestingly, an additional auditory event not associated with any vowel was perceived simultaneously when the components were presented symmetrically in front of the listener. In such scenarios, the vowel was perceived from the direction of the odd formant components.


Subject(s)
Cues; Speech Acoustics; Speech Perception; Voice Quality; Acoustic Stimulation; Adult; Audiometry, Speech; Auditory Threshold; Humans; Male; Pattern Recognition, Physiological; Recognition (Psychology); Sound Localization; Time Factors; Young Adult
7.
J Acoust Soc Am ; 132(6): 3990-4001, 2012 Dec.
Article in English | MEDLINE | ID: mdl-23231128

ABSTRACT

Post-filtering can be utilized to improve the quality and intelligibility of telephone speech. Previous studies have shown that energy reallocation with a high-pass type filter works effectively in improving the intelligibility of speech in difficult noise conditions. The present study introduces a signal-to-noise ratio adaptive post-filtering method that utilizes energy reallocation to transfer energy from the first formant to higher frequencies. The proposed method adapts to the level of the background noise so that, in favorable noise conditions, the post-filter has a flat frequency response and the effect of the post-filtering is increased as the level of the ambient noise increases. The performance of the proposed method is compared with a similar post-filtering algorithm and unprocessed speech in subjective listening tests which evaluate both intelligibility and listener preference. The results indicate that both of the post-filtering methods maintain the quality of speech in negligible noise conditions and are able to provide intelligibility improvement over unprocessed speech in adverse noise conditions. Furthermore, the proposed post-filtering algorithm performs better than the other post-filtering method under evaluation in moderate to difficult noise conditions, where intelligibility improvement is mostly required.
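The abstract describes the post-filter only at a functional level. The sketch below shows one hedged way such noise-adaptive energy reallocation could look: a simple pre-emphasis (high-pass-type) filter whose strength grows with the estimated background-noise level and which renormalizes the output energy so that energy is reallocated rather than added. The filter structure, thresholds, and names are illustrative assumptions, not the proposed algorithm.

# Illustrative sketch (not the proposed algorithm): an SNR-adaptive pre-emphasis
# post-filter. Flat in quiet conditions; shifts energy from low frequencies
# (around the first formant) toward higher frequencies as the noise level rises.
import numpy as np

def adaptive_postfilter(speech, noise_level_db, quiet_db=-40.0, noisy_db=-10.0, max_alpha=0.9):
    """Apply y[n] = x[n] - alpha * x[n-1], where alpha grows from 0 (flat response)
    to max_alpha (strong high-pass emphasis) as the ambient noise level increases."""
    t = np.clip((noise_level_db - quiet_db) / (noisy_db - quiet_db), 0.0, 1.0)
    alpha = max_alpha * t
    filtered = np.empty_like(speech)
    filtered[0] = speech[0]
    filtered[1:] = speech[1:] - alpha * speech[:-1]
    # Restore the original energy so the filter reallocates rather than boosts level.
    energy_in = np.sum(speech ** 2)
    energy_out = np.sum(filtered ** 2) + 1e-12
    return filtered * np.sqrt(energy_in / energy_out)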


Subject(s)
Acoustics; Cell Phone; Signal Processing, Computer-Assisted; Speech Acoustics; Speech Intelligibility; Voice Quality; Acoustic Stimulation; Adult; Algorithms; Analysis of Variance; Female; Humans; Male; Noise/adverse effects; Perceptual Masking; Signal-To-Noise Ratio; Speech Reception Threshold Test; Young Adult
8.
BMC Neurosci ; 13: 157, 2012 Dec 31.
Article in English | MEDLINE | ID: mdl-23276297

ABSTRACT

BACKGROUND: The robustness of speech perception in the face of acoustic variation is founded on the ability of the auditory system to integrate the acoustic features of speech and to segregate them from background noise. This auditory scene analysis process is facilitated by top-down mechanisms, such as recognition memory for speech content. However, the cortical processes underlying these facilitatory mechanisms remain unclear. The present magnetoencephalography (MEG) study examined how the activity of auditory cortical areas is modulated by acoustic degradation and intelligibility of connected speech. The experimental design allowed for the comparison of cortical activity patterns elicited by acoustically identical stimuli which were perceived as either intelligible or unintelligible. RESULTS: In the experiment, a set of sentences was presented to the subject in distorted, undistorted, and again in distorted form. The intervening exposure to undistorted versions of the sentences rendered the initially unintelligible, distorted sentences intelligible, as evidenced by an increase from 30% to 80% in the proportion of sentences reported as intelligible. These perceptual changes were reflected in the activity of the auditory cortex, with the auditory N1m response (~100 ms) being more prominent for the distorted stimuli than for the intact ones. In the time range of the auditory P2m response (>200 ms), the auditory cortex as well as regions anterior and posterior to this area generated a stronger response to intelligible than to unintelligible sentences. During the sustained field (>300 ms), stronger activity was elicited by degraded stimuli in the auditory cortex and by intelligible sentences in areas posterior to the auditory cortex. CONCLUSIONS: The current findings suggest that the auditory system comprises bottom-up and top-down processes which are reflected in transient and sustained brain activity. It appears that analysis of acoustic features occurs during the first 100 ms, and sensitivity to speech intelligibility emerges in the auditory cortex and surrounding areas from 200 ms onwards. The two processes are intertwined, with the activity of auditory cortical areas being modulated by top-down processes related to memory traces of speech and supporting speech intelligibility.


Subject(s)
Auditory Cortex/physiology; Brain Mapping/psychology; Speech Intelligibility/physiology; Speech Perception/physiology; Speech/physiology; Acoustic Stimulation/methods; Adult; Brain Mapping/methods; Evoked Potentials, Auditory/physiology; Humans; Image Processing, Computer-Assisted/methods; Magnetoencephalography/methods; Magnetoencephalography/psychology
9.
Neuroimage ; 55(3): 1252-9, 2011 Apr 01.
Article in English | MEDLINE | ID: mdl-21215807

ABSTRACT

Most speech sounds are periodic due to the vibration of the vocal folds. Non-invasive studies of the human brain have revealed a periodicity-sensitive population in the auditory cortex which might contribute to the encoding of speech periodicity. Since the periodicity of natural speech varies from (almost) periodic to aperiodic, one may argue that speech aperiodicity could similarly be represented by a dedicated neuron population. In the current magnetoencephalography study, cortical sensitivity to periodicity was probed with natural periodic vowels and their aperiodic counterparts in a stimulus-specific adaptation paradigm. The effects of intervening adaptor stimuli on the N1m elicited by the probe stimuli (the actual effective stimuli) were studied under interstimulus intervals (ISIs) of 800 and 200 ms. The results indicated a periodicity-dependent release from adaptation which was observed for aperiodic probes alternating with periodic adaptors under both ISIs. Such release from adaptation can be attributed to the activation of a distinct neural population responsive to aperiodic (probe) but not to periodic (adaptor) stimuli. Thus, the current results suggest that the aperiodicity of speech sounds may be represented not only by decreased activation of the periodicity-sensitive population but, additionally, by the activation of a distinct cortical population responsive to speech aperiodicity.


Subject(s)
Cerebral Cortex/cytology; Cerebral Cortex/physiology; Neurons/physiology; Speech Perception/physiology; Acoustic Stimulation; Adaptation, Physiological/physiology; Data Interpretation, Statistical; Female; Functional Laterality/physiology; Humans; Magnetoencephalography; Male; Speech; Young Adult
10.
Brain Res ; 1367: 298-309, 2011 Jan 07.
Article in English | MEDLINE | ID: mdl-20969833

ABSTRACT

The cortical mechanisms underlying human speech perception in acoustically adverse conditions remain largely unknown. Besides distortions from external sources, degradation of the acoustic structure of the sound itself poses further demands on perceptual mechanisms. We conducted a magnetoencephalography (MEG) study to reveal whether the perceptual differences between these distortions are reflected in cortically generated auditory evoked fields (AEFs). To mimic the degradation of the internal structure of sound and external distortion, we degraded speech sounds by reducing the amplitude resolution of the signal waveform and by using additive noise, respectively. Since both distortion types increase the relative strength of high frequencies in the signal spectrum, we also used versions of the stimuli which were low-pass filtered to match the tilted spectral envelope of the undistorted speech sound. This enabled us to examine whether the changes in the overall spectral shape of the stimuli affect the AEFs. We found that the auditory N1m response was substantially enhanced as the amplitude resolution was reduced. In contrast, the N1m was insensitive to distorted speech with additive noise. Changing the spectral envelope had no effect on the N1m. We propose that the observed amplitude enhancements are due to an increase in noisy spectral harmonics produced by the reduction of the amplitude resolution, which activates the periodicity-sensitive neuronal populations participating in pitch extraction processes. The current findings suggest that the auditory cortex processes speech sounds in a differential manner when the internal structure of sound is degraded compared with the speech distorted by external noise.
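As a concrete, hedged illustration of the two degradation types contrasted above, the snippet below reduces the amplitude resolution of a waveform by uniform requantization to a given number of bits and, separately, adds white noise at a target signal-to-noise ratio. The function names and parameter values are placeholders; the abstract does not specify the exact settings used in the study.

# Illustrative sketch of the two distortion types described above:
# (1) reduced amplitude resolution via uniform requantization of the waveform,
# (2) external distortion via additive white noise at a chosen SNR.
import numpy as np

def reduce_amplitude_resolution(signal, n_bits):
    """Requantize a waveform (assumed scaled to [-1, 1]) to n_bits of amplitude resolution."""
    levels = 2 ** (n_bits - 1)
    return np.round(signal * levels) / levels

def add_noise_at_snr(signal, snr_db, seed=0):
    """Add white Gaussian noise so that the resulting signal-to-noise ratio equals snr_db."""
    rng = np.random.default_rng(seed)
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return signal + rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)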


Subject(s)
Auditory Cortex/physiology; Brain Waves/physiology; Evoked Potentials, Auditory/physiology; Noise; Sound Localization/physiology; Speech Perception/physiology; Acoustic Stimulation; Adult; Brain Mapping; Humans; Magnetoencephalography; Male; Phonetics; Psychoacoustics; Reaction Time/physiology; Spectrum Analysis; Statistics, Nonparametric; Young Adult
11.
BMC Neurosci ; 11: 88, 2010 Jul 30.
Article in English | MEDLINE | ID: mdl-20673357

ABSTRACT

BACKGROUND: Early auditory experiences are a prerequisite for speech and language acquisition. In healthy children, phoneme discrimination abilities improve for native and degrade for unfamiliar, socially irrelevant phoneme contrasts between 6 and 12 months of age as the brain tunes itself to, and specializes in, the native spoken language. This process is known as perceptual narrowing, and has been found to predict normal native language acquisition. Prematurely born infants are known to be at an elevated risk for later language problems, but it remains unclear whether these problems relate to early perceptual narrowing. To address this question, we investigated early neurophysiological phoneme discrimination abilities and later language skills in prematurely born infants and in healthy, full-term infants. RESULTS: Our follow-up study shows for the first time that the perceptual narrowing for non-native phoneme contrasts found in the healthy controls at 12 months was not observed in very prematurely born infants. An electric mismatch response of the brain indicated that whereas full-term infants gradually lost their ability to discriminate non-native phonemes from 6 to 12 months of age, prematurely born infants retained this ability. Language performance tested at the age of 2 years showed a significant delay in the prematurely born group. Moreover, those infants who did not become specialized in native phonemes at the age of one year performed worse in the communicative language test (MacArthur Communicative Development Inventories) at the age of two years. Thus, a decline in sensitivity to non-native phonemes served as a predictor of later language development. CONCLUSION: Our data suggest that the detrimental effects of prematurity on language skills are based on a low degree of specialization to the native language early in development. Moreover, delayed or atypical perceptual narrowing was associated with slower language acquisition. The results hence suggest that language problems related to prematurity may partly originate from this early tuning stage of language acquisition.


Subject(s)
Auditory Perception/physiology; Discrimination (Psychology)/physiology; Infant, Premature/physiology; Language Development; Speech/physiology; Acoustic Stimulation; Analysis of Variance; Brain Mapping; Cerebral Cortex/physiology; Child, Preschool; Electroencephalography; Follow-Up Studies; Humans; Infant; Infant, Newborn; Language Tests; Signal Processing, Computer-Assisted; Surveys and Questionnaires
12.
J Acoust Soc Am ; 128(1): 224-34, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20649218

ABSTRACT

Cortical sensitivity to the periodicity of speech sounds has been evidenced by larger, more anterior responses to periodic than to aperiodic vowels in several non-invasive studies of the human brain. The current study investigated the temporal integration underlying the cortical sensitivity to speech periodicity by studying the increase in periodicity-specific cortical activation with growing stimulus duration. Periodicity-specific activation was estimated from magnetoencephalography as the differences between the N1m responses elicited by periodic and aperiodic vowel stimuli. The duration of the vowel stimuli with a fundamental frequency (F0=106 Hz) representative of typical male speech was varied in units corresponding to the vowel fundamental period (9.4 ms) and ranged from one to ten units. Cortical sensitivity to speech periodicity, as reflected by larger and more anterior responses to periodic than to aperiodic stimuli, was observed when stimulus duration was 3 cycles or more. Further, for stimulus durations of 5 cycles and above, response latency was shorter for the periodic than for the aperiodic stimuli. Together the current results define a temporal window of integration for the periodicity of speech sounds in the F0 range of typical male speech. The length of this window is 3-5 cycles, or 30-50 ms.
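As an arithmetic aside (not part of the original abstract), the cycle-to-millisecond conversion behind the reported window is:

    T0 = 1/F0 = 1/(106 Hz) ≈ 9.4 ms,  3 × T0 ≈ 28 ms,  5 × T0 ≈ 47 ms,

which rounds to the 30-50 ms range stated above.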


Subject(s)
Auditory Cortex/physiology; Periodicity; Speech Acoustics; Speech Perception; Time Perception; Acoustic Stimulation; Adult; Evoked Potentials, Auditory; Female; Humans; Magnetoencephalography; Male; Models, Statistical; Reaction Time; Signal Processing, Computer-Assisted; Sound Spectrography; Time Factors
13.
Clin Neurophysiol ; 121(6): 912-20, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20457006

ABSTRACT

OBJECTIVE: To investigate the effects of cortical ischemic stroke and aphasic symptoms on auditory processing abilities in humans as indicated by the transient brain response, a recently documented cortical deflection which has been shown to accurately predict behavioral sound detection. METHODS: Using speech and sinusoidal stimuli in the active (attend) and the passive (ignore) recording condition, cortical activity of ten aphasic stroke patients and ten control subjects was recorded with whole-head MEG and behavioral measurements. RESULTS: Stroke patients exhibited significantly diminished neuromagnetic transient responses for both sinusoidal and speech stimulation when compared to the control subjects. The attention-related increase of response amplitude was slightly more pronounced in the control subjects than in the stroke patients but this difference did not reach statistical significance. CONCLUSIONS: Left-hemispheric ischemic stroke impairs the processing of sinusoidal and speech sounds. This deficit seems to depend on the severity and location of stroke. SIGNIFICANCE: Directly observable, non-invasive brain measures can be used in assessing the effects of stroke which are related to the behavioral symptoms patients manifest.


Subject(s)
Aphasia/physiopathology; Auditory Pathways/physiopathology; Auditory Perception/physiology; Cerebral Cortex/physiopathology; Evoked Potentials, Auditory/physiology; Stroke/physiopathology; Acoustic Stimulation; Aged; Aged, 80 and over; Analysis of Variance; Aphasia/complications; Female; Functional Laterality/physiology; Humans; Magnetoencephalography; Male; Middle Aged; Phonetics; Reaction Time/physiology; Severity of Illness Index; Stroke/complications
14.
Clin Neurophysiol ; 121(6): 902-11, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20359943

ABSTRACT

OBJECTIVE: The aim of the study was to investigate the effects of aging on human cortical auditory processing of rising-intensity sinusoids and speech sounds. We also aimed to evaluate the suitability of a recently discovered transient brain response for applied research. METHODS: In young and aged adults, magnetic fields produced by cortical activity elicited by a 570-Hz pure-tone and a speech sound (Finnish vowel /a/) were measured using MEG. The stimuli rose smoothly in intensity from an inaudible to an audible level over 750 ms. We used both the active (attended) and the passive recording condition. In the attended condition, behavioral reaction times were measured. RESULTS: The latency of the transient brain response was prolonged in the aged compared to the young and the accuracy of behavioral responses to sinusoids was diminished among the aged. In response amplitudes, no differences were found between the young and the aged. In both groups, spectral complexity of the stimuli enhanced response amplitudes. CONCLUSIONS: Aging seems to affect the temporal dynamics of cortical auditory processing. The transient brain response is sensitive both to spectral complexity and aging-related changes in the timing of cortical activation. SIGNIFICANCE: The transient brain responses elicited by rising-intensity sounds could be useful in revealing differences in auditory cortical processing in applied research.


Subject(s)
Aging/physiology; Auditory Cortex/physiology; Auditory Pathways/physiology; Auditory Perception/physiology; Evoked Potentials, Auditory/physiology; Acoustic Stimulation; Adult; Age Factors; Aged; Analysis of Variance; Attention/physiology; Female; Humans; Magnetoencephalography; Male; Middle Aged; Psychomotor Performance/physiology; Reaction Time/physiology
15.
Brain Res ; 1327: 77-90, 2010 Apr 23.
Article in English | MEDLINE | ID: mdl-20188710

ABSTRACT

The aim of the present study was to determine differences in cortical processing of consonant-vowel syllables and acoustically matched non-speech sounds, as well as novel human and nonhuman sounds. Event-related potentials (ERPs) were recorded to vowel, vowel duration, consonant, syllable intensity, and frequency changes, as well as to corresponding changes in their non-speech counterparts, with the multi-feature mismatch negativity (MMN) paradigm. Enhanced responses to linguistically relevant deviants were expected. Indeed, the vowel and frequency deviants elicited significantly larger MMNs in the speech than in the non-speech condition. A minimum-norm source localization algorithm was applied to determine hemispheric asymmetry in the responses. Language-relevant deviants (vowel, duration, and, to a lesser degree, frequency) showed higher activation in the left than in the right hemisphere for stimuli in the speech condition. Novel sounds elicited novelty P3 waves, the amplitude of which for nonhuman sounds was larger in the speech than in the non-speech condition. The current MMN results imply enhanced processing of linguistically relevant information at the pre-attentive stage and in this way support the domain-specific model of speech perception.


Subject(s)
Evoked Potentials, Auditory/physiology; Phonetics; Signal Detection, Psychological/physiology; Speech Perception/physiology; Acoustic Stimulation/methods; Adult; Algorithms; Brain Mapping; Contingent Negative Variation; Electroencephalography/methods; Functional Laterality/physiology; Humans; Middle Aged; Reaction Time/physiology; Young Adult
16.
BMC Neurosci ; 11: 24, 2010 Feb 22.
Article in English | MEDLINE | ID: mdl-20175890

ABSTRACT

BACKGROUND: Recent studies have shown that the human right-hemispheric auditory cortex is particularly sensitive to reduction in sound quality, with an increase in distortion resulting in an amplification of the auditory N1m response measured in the magnetoencephalography (MEG). Here, we examined whether this sensitivity is specific to the processing of acoustic properties of speech or whether it can be observed also in the processing of sounds with a simple spectral structure. We degraded speech stimuli (vowel /a/), complex non-speech stimuli (a composite of five sinusoidals), and sinusoidal tones by decreasing the amplitude resolution of the signal waveform. The amplitude resolution was impoverished by reducing the number of bits to represent the signal samples. Auditory evoked magnetic fields (AEFs) were measured in the left and right hemisphere of sixteen healthy subjects. RESULTS: We found that the AEF amplitudes increased significantly with stimulus distortion for all stimulus types, which indicates that the right-hemispheric N1m sensitivity is not related exclusively to degradation of acoustic properties of speech. In addition, the P1m and P2m responses were amplified with increasing distortion similarly in both hemispheres. The AEF latencies were not systematically affected by the distortion. CONCLUSIONS: We propose that the increased activity of AEFs reflects cortical processing of acoustic properties common to both speech and non-speech stimuli. More specifically, the enhancement is most likely caused by spectral changes brought about by the decrease of amplitude resolution, in particular the introduction of periodic, signal-dependent distortion to the original sound. Converging evidence suggests that the observed AEF amplification could reflect cortical sensitivity to periodic sounds.


Subject(s)
Auditory Cortex/physiology; Auditory Perception/physiology; Speech Perception/physiology; Acoustic Stimulation; Adult; Evoked Potentials, Auditory; Female; Functional Laterality; Humans; Magnetoencephalography; Male; Neuropsychological Tests; Pattern Recognition, Physiological/physiology; Psychoacoustics; Reaction Time; Speech; Time Factors
17.
Brain Res ; 1306: 93-9, 2010 Jan 08.
Article in English | MEDLINE | ID: mdl-19799877

ABSTRACT

Recent single-neuron recordings in monkeys and magnetoencephalography (MEG) data on humans suggest that auditory space is represented in cortex as a population rate code whereby spatial receptive fields are wide and centered at locations to the far left or right of the subject. To explore the details of this code in the human brain, we conducted an MEG study utilizing realistic spatial sound stimuli presented in a stimulus-specific adaptation paradigm. In this paradigm, the spatial selectivity of cortical neurons is measured as the effect the location of a preceding adaptor has on the response to a subsequent probe sound. Two types of stimuli were used: a wideband noise sound and a speech sound. The cortical hemispheres differed in the effects the adaptors had on the response to a probe sound presented in front of the subject. The right-hemispheric responses were attenuated more by an adaptor to the left than by an adaptor to the right of the subject. In contrast, the left-hemispheric responses were similarly affected by adaptors in these two locations. When interpreted in terms of single-neuron spatial receptive fields, these results support a population rate code model where neurons in the right hemisphere are more often tuned to the left than to the right of the perceiver while in the left hemisphere these two neuronal populations are of equal size.


Subject(s)
Auditory Perception/physiology; Cerebral Cortex/physiology; Sound Localization/physiology; Space Perception/physiology; Acoustic Stimulation; Adult; Analysis of Variance; Evoked Potentials, Auditory; Female; Functional Laterality; Humans; Magnetoencephalography; Male; Models, Neurological; Neurons/physiology; Speech; Speech Perception/physiology
18.
PLoS One ; 4(10): e7600, 2009 Oct 26.
Article in English | MEDLINE | ID: mdl-19855836

ABSTRACT

BACKGROUND: Previous work on the human auditory cortex has revealed areas specialized in spatial processing but how the neurons in these areas represent the location of a sound source remains unknown. METHODOLOGY/PRINCIPAL FINDINGS: Here, we performed a magnetoencephalography (MEG) experiment with the aim of revealing the neural code of auditory space implemented by the human cortex. In a stimulus-specific adaptation paradigm, realistic spatial sound stimuli were presented in pairs of adaptor and probe locations. We found that the attenuation of the N1m response depended strongly on the spatial arrangement of the two sound sources. These location-specific effects showed that sounds originating from locations within the same hemifield activated the same neuronal population regardless of the spatial separation between the sound sources. In contrast, sounds originating from opposite hemifields activated separate groups of neurons. CONCLUSIONS/SIGNIFICANCE: These results are highly consistent with a rate code of spatial location formed by two opponent populations, one tuned to locations in the left and the other to those in the right. This indicates that the neuronal code of sound source location implemented by the human auditory cortex is similar to that previously found in other primates.
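To make the opponent-population idea concrete, here is a hedged toy sketch (an illustration of the general rate-code concept only, not the analysis or model fitting used in the study): two broadly tuned channels whose firing rates vary sigmoidally with azimuth in opposite directions, with perceived laterality read out from their rate difference. The slope and readout are arbitrary assumptions.

# Toy illustration of an opponent-channel rate code for sound-source azimuth
# (conceptual only; parameter values are arbitrary).
import numpy as np

def channel_rates(azimuth_deg, slope=0.05):
    """Normalized firing rates (0..1) of two broadly tuned populations:
    one preferring locations to the right, the other its mirror image on the left."""
    right = 1.0 / (1.0 + np.exp(-slope * azimuth_deg))  # increases toward the right
    left = 1.0 - right                                   # increases toward the left
    return left, right

def decode_laterality(left_rate, right_rate):
    """Read out laterality from the rate difference: negative = left hemifield, positive = right."""
    return right_rate - left_rate

# Example: a source at -60 degrees (left hemifield) yields a clearly negative readout.
left, right = channel_rates(-60.0)
print(decode_laterality(left, right))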


Subject(s)
Auditory Cortex/anatomy & histology; Magnetoencephalography/methods; Sound Localization/physiology; Acoustic Stimulation/methods; Adult; Auditory Cortex/physiology; Auditory Pathways/physiology; Auditory Perception/physiology; Brain Mapping; Data Interpretation, Statistical; Evoked Potentials, Auditory/physiology; Humans; Neurons/metabolism; Sound
19.
Biol Psychol ; 82(3): 219-26, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19646504

ABSTRACT

In this study, we addressed whether a new fast multi-feature mismatch negativity (MMN) paradigm can be used for determining central auditory discrimination accuracy for several acoustic and phonetic changes in speech sounds. We recorded the MMNs in the multi-feature paradigm to changes in syllable intensity, frequency, and vowel length, as well as to consonant and vowel changes, and compared these MMNs to those obtained with the traditional oddball paradigm. In addition, we examined the reliability of the multi-feature paradigm by repeating the recordings with the same subjects 1-7 days after the first recordings. The MMNs recorded with the multi-feature paradigm were similar to those obtained with the oddball paradigm. Furthermore, only minor differences were observed in the MMN amplitudes across the two recording sessions. Thus, this new multi-feature paradigm with speech stimuli provides results similar to those of the oddball paradigm, and the MMNs recorded with the new paradigm were reproducible.


Subject(s)
Auditory Pathways/physiology; Brain/physiology; Discrimination (Psychology)/physiology; Evoked Potentials, Auditory/physiology; Speech Perception/physiology; Acoustic Stimulation; Adult; Analysis of Variance; Attention/physiology; Electroencephalography; Female; Humans; Language; Language Tests; Male; Neuropsychological Tests; Phonetics; Psychomotor Performance/physiology; Reaction Time/physiology; Signal Processing, Computer-Assisted; Sound Localization/physiology; Vocabulary
20.
Neurosci Lett ; 460(2): 161-5, 2009 Aug 28.
Article in English | MEDLINE | ID: mdl-19481587

ABSTRACT

We examined 10- to 12-year-old elementary school children's ability to preattentively process sound durations in music and speech stimuli. In total, 40 children took part; they had either advanced foreign-language production skills and high musical aptitude, or less advanced results in both the musicality and the linguistic tests. Event-related potential (ERP) recordings of the mismatch negativity (MMN) show that duration changes in musical sounds are processed more prominently and accurately than changes in speech sounds. Moreover, children with advanced pronunciation and musicality skills displayed enhanced MMNs to duration changes in both speech and musical sounds. Thus, our study provides further evidence for the claim that musical aptitude and linguistic skills are interconnected and that the musical features of the stimuli may play a preponderant role in preattentive duration processing.


Subject(s)
Aptitude/physiology; Attention/physiology; Auditory Perception/physiology; Child Development; Language; Music; Acoustic Stimulation/methods; Child; Contingent Negative Variation/physiology; Electroencephalography; Evoked Potentials, Auditory/physiology; Female; Humans; Male; Multilingualism; Verbal Behavior/physiology