Results 1 - 9 of 9
1.
Neuroimage; 228: 117670, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33359352

ABSTRACT

Selective attention is essential for the processing of multi-speaker auditory scenes because they require the perceptual segregation of the relevant speech ("target") from irrelevant speech ("distractors"). For simple sounds, it has been suggested that the processing of multiple distractor sounds depends on bottom-up factors affecting task performance. However, it remains unclear whether such dependency applies to naturalistic multi-speaker auditory scenes. In this study, we tested the hypothesis that increased perceptual demand (the processing requirement posed by the scene to separate the target speech) reduces the cortical processing of distractor speech, thus decreasing its perceptual segregation. Human participants were presented with auditory scenes including three speakers and asked to selectively attend to one speaker while their EEG was acquired. The perceptual demand of this selective listening task was varied by introducing an auditory cue (interaural time differences, ITDs) for segregating the target from the distractor speakers, while acoustic differences between the distractors were matched in ITD and loudness. We obtained a quantitative measure of the cortical segregation of distractor speakers by assessing the difference in how accurately speech-envelope-following EEG responses could be predicted by models of averaged distractor speech versus models of individual distractor speech. In agreement with our hypothesis, results show that interaural segregation cues led to improved behavioral word-recognition performance and stronger cortical segregation of the distractor speakers. The neural effect was strongest in the δ-band and at early delays (0-200 ms). Our results indicate that during low perceptual demand, the human cortex represents individual distractor speech signals as more segregated. This suggests that, in addition to purely acoustical properties, the cortical processing of distractor speakers depends on factors like perceptual demand.
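The distractor-segregation measure lends itself to a compact illustration. Below is a minimal sketch, assuming Python with numpy and scikit-learn, of predicting EEG from time-lagged speech envelopes via ridge regression and comparing an averaged-distractor model against an individual-distractor model; all variable names, shapes, and the synthetic data are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

def lagged_design(envelope, sfreq, tmin=0.0, tmax=0.2):
    """Stack time-lagged copies of a stimulus envelope (lags tmin..tmax s)."""
    lags = np.arange(int(tmin * sfreq), int(tmax * sfreq) + 1)
    X = np.zeros((len(envelope), len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = envelope[:len(envelope) - lag]
    return X

def envelope_tracking(eeg, envelopes, sfreq):
    """Cross-validated correlation between EEG and its envelope-model prediction."""
    X = np.hstack([lagged_design(env, sfreq) for env in envelopes])
    pred = cross_val_predict(Ridge(alpha=1.0), X, eeg, cv=5)
    return np.corrcoef(pred, eeg)[0, 1]

sfreq = 64.0                                  # assumed EEG sampling rate (Hz)
eeg = np.random.randn(int(60 * sfreq))        # one channel, 60 s (placeholder data)
d1, d2 = np.random.randn(2, int(60 * sfreq))  # distractor speech envelopes

r_avg = envelope_tracking(eeg, [(d1 + d2) / 2], sfreq)  # averaged-distractor model
r_ind = envelope_tracking(eeg, [d1, d2], sfreq)         # individual-distractor model
segregation = r_ind - r_avg  # > 0: distractors represented as more segregated
```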


Subject(s)
Attention/physiology, Cerebral Cortex/physiology, Speech Perception/physiology, Acoustic Stimulation/methods, Adult, Electroencephalography/methods, Female, Humans, Male, Noise, Signal Processing, Computer-Assisted, Young Adult
2.
J Neurosci; 38(21): 4977-4984, 2018 May 23.
Article in English | MEDLINE | ID: mdl-29712782

ABSTRACT

The primary and posterior auditory cortex (AC) are known for their sensitivity to spatial information, but how this information is processed is not yet understood. AC that is sensitive to spatial manipulations is also modulated by the number of auditory streams present in a scene (Smith et al., 2010), suggesting that spatial and nonspatial cues are integrated for stream segregation. We reasoned that, if this is the case, then it is the distance between sounds rather than their absolute positions that is essential. To test this hypothesis, we measured human brain activity in response to spatially separated concurrent sounds with fMRI at 7 tesla in five men and five women. Stimuli were spatialized amplitude-modulated broadband noises recorded for each participant via in-ear microphones before scanning. Using a linear support vector machine classifier, we investigated whether sound location and/or location plus spatial separation between sounds could be decoded from the activity in Heschl's gyrus and the planum temporale. The classifier was successful only when comparing patterns associated with the conditions that had the largest difference in perceptual spatial separation. Our pattern of results suggests that the representation of spatial separation is not merely the combination of single locations, but rather is an independent feature of the auditory scene.

SIGNIFICANCE STATEMENT: Often, when we think of auditory spatial information, we think of where sounds are coming from, that is, the process of localization. However, this information can also be used in scene analysis, the process of grouping and segregating features of a sound wave into objects. Essentially, when sounds are farther apart, they are more likely to be segregated into separate streams. Here, we provide evidence that activity in the human auditory cortex represents the spatial separation between sounds rather than their absolute locations, indicating that scene analysis and localization processes may be independent.
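As an illustration of the decoding approach, the following is a minimal sketch of linear SVM classification of ROI response patterns with leave-one-run-out cross-validation, assuming scikit-learn; the data, labels, and run structure are placeholders, not the study's.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

n_trials, n_voxels = 80, 500
X = np.random.randn(n_trials, n_voxels)      # ROI patterns (placeholder data)
y = np.repeat([0, 1], n_trials // 2)         # small vs. large spatial separation
runs = np.tile(np.arange(8), n_trials // 8)  # scanner-run label for each trial

# Leave-one-run-out cross-validation avoids train/test leakage within runs.
acc = cross_val_score(SVC(kernel="linear"), X, y,
                      groups=runs, cv=LeaveOneGroupOut()).mean()
print(f"decoding accuracy: {acc:.2f} (chance = 0.50)")
```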


Subject(s)
Auditory Cortex/physiology, Sound Localization/physiology, Space Perception/physiology, Acoustic Stimulation, Adult, Auditory Cortex/diagnostic imaging, Auditory Perception/physiology, Brain Mapping, Computer Simulation, Female, Humans, Magnetic Resonance Imaging, Male, Support Vector Machine
3.
Neuroimage; 173: 472-483, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29518569

ABSTRACT

Often, in everyday life, we encounter auditory scenes comprising multiple simultaneous sounds and succeed in selectively attending to only one sound, typically the one most relevant for ongoing behavior. Studies using basic sounds and two-talker stimuli have shown that auditory selective attention aids this process by enhancing the neural representations of the attended sound in auditory cortex. It remains unknown, however, whether and how this selective attention mechanism operates on representations of auditory scenes containing natural sounds of different categories. In this high-field fMRI study we presented participants with simultaneous voices and musical instruments while manipulating their focus of attention. We found an attentional enhancement of neural sound representations in temporal cortex, as defined by spatial activation patterns, at locations that depended on the attended category (i.e., voices or instruments). In contrast, we found that in frontal cortex the site of enhancement was independent of the attended category: the same regions could flexibly represent any attended sound regardless of its category. These results help to elucidate the interacting mechanisms of bottom-up and top-down processing when listening to real-life scenes composed of multiple sound categories.
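One simple way to quantify this kind of attentional enhancement, sketched below under the assumption of a template-correlation analysis (which the abstract does not specify), is to correlate single-trial patterns with a category template and compare attended versus ignored trials; all names and data are hypothetical.

```python
import numpy as np

def pattern_match(trials, template):
    """Mean correlation of single-trial patterns with a category template."""
    return np.mean([np.corrcoef(trial, template)[0, 1] for trial in trials])

n_vox = 300
voice_template = np.random.randn(n_vox)  # e.g., from independent localizer runs
attended = np.random.randn(20, n_vox)    # voice trials while voices are attended
ignored = np.random.randn(20, n_vox)     # voice trials while instruments are attended

# Positive values would indicate attentional enhancement of the voice pattern.
enhancement = pattern_match(attended, voice_template) - \
              pattern_match(ignored, voice_template)
```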


Subject(s)
Attention/physiology, Auditory Perception/physiology, Cerebral Cortex/physiology, Acoustic Stimulation/methods, Adult, Brain Mapping/methods, Female, Humans, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Male, Young Adult
4.
Neuroimage; 180(Pt A): 291-300, 2018 Oct 15.
Article in English | MEDLINE | ID: mdl-29146377

ABSTRACT

Pitch is a perceptual attribute related to the fundamental frequency (or periodicity) of a sound. So far, the cortical processing of pitch has been investigated mostly using synthetic sounds. However, the complex harmonic structure of natural sounds may require different mechanisms for the extraction and analysis of pitch. This study investigated the neural representation of pitch in human auditory cortex using model-based encoding and decoding analyses of high-field (7 T) functional magnetic resonance imaging (fMRI) data collected while participants listened to a wide range of real-life sounds. Specifically, we modeled the fMRI responses as a function of the sounds' perceived pitch height and salience (related to the fundamental frequency and the harmonic structure, respectively), which we estimated with a computational algorithm for pitch extraction (de Cheveigné and Kawahara, 2002). First, using single-voxel fMRI encoding, we identified a pitch-coding region in the antero-lateral Heschl's gyrus (HG) and adjacent superior temporal gyrus (STG). In these regions, the pitch representation model combining height and salience predicted the fMRI responses better than other models of acoustic processing and, in the right hemisphere, better than pitch representations based on height or salience alone. Second, using model-based decoding, we showed that multi-voxel response patterns in the identified regions are more informative of perceived pitch than those in the remainder of the auditory cortex. Further multivariate analyses showed that complementing a multi-resolution spectro-temporal sound representation with pitch produces a small but significant improvement in the decoding of complex sounds from fMRI response patterns. In sum, this work extends model-based fMRI encoding and decoding methods, previously employed to examine the representation and processing of acoustic sound features in the human auditory system, to a relevant perceptual attribute such as pitch. Taken together, the results of our model-based encoding and decoding analyses indicate that the pitch of complex real-life sounds is extracted and processed in lateral HG/STG regions, at locations consistent with those reported in several previous fMRI studies using synthetic sounds. Within these regions, pitch-related sound representations reflect the combined contributions of the height and the salience of the pitch percept.
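The single-voxel encoding logic can be illustrated compactly. The sketch below, assuming numpy and scikit-learn, compares cross-validated prediction accuracies of a combined height-plus-salience model against single-feature models for one voxel; in practice the feature values would come from a pitch-extraction algorithm such as that of de Cheveigné and Kawahara (2002), but here they are placeholders.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

n_sounds = 100
height = np.random.rand(n_sounds)    # perceived pitch height (placeholder values)
salience = np.random.rand(n_sounds)  # pitch salience (placeholder values)
voxel = 0.8 * height + 0.4 * salience + 0.3 * np.random.randn(n_sounds)

def model_accuracy(features, response):
    """Cross-validated correlation between a voxel response and its prediction."""
    X = np.column_stack(features)
    pred = cross_val_predict(Ridge(alpha=1.0), X, response, cv=5)
    return np.corrcoef(pred, response)[0, 1]

acc_combined = model_accuracy([height, salience], voxel)
acc_height = model_accuracy([height], voxel)
acc_salience = model_accuracy([salience], voxel)
# A voxel would count as "pitch coding" if the combined model predicts best.
```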


Subject(s)
Auditory Cortex/physiology, Brain Mapping/methods, Models, Neurological, Pitch Perception/physiology, Acoustic Stimulation, Adult, Evoked Potentials, Auditory/physiology, Female, Humans, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Male
5.
Neuroimage; 132: 32-42, 2016 May 15.
Article in English | MEDLINE | ID: mdl-26899782

ABSTRACT

Multivariate pattern analysis (MVPA) of fMRI data has been used to extract information from distributed cortical activation patterns that may go undetected in conventional univariate analysis. However, little is known about the physical and physiological underpinnings of MVPA in fMRI, or about the effect of spatial smoothing on its performance. Several studies have addressed these issues, but their investigation was limited to the visual cortex at 3 T, with conflicting results. Here, we used ultra-high-field (7 T) fMRI to investigate the effect of spatial resolution and smoothing on decoding of speech content (vowels) and speaker identity from auditory cortical responses. To that end, we acquired high-resolution (1.1 mm isotropic) fMRI data and additionally reconstructed them at 2.2 and 3.3 mm in-plane spatial resolutions from the original k-space data. Furthermore, the data at each resolution were spatially smoothed with different 3D Gaussian kernel sizes (no smoothing, or 1.1, 2.2, 3.3, 4.4, or 8.8 mm kernels). For all spatial resolutions and smoothing kernels, we demonstrate the feasibility of decoding speech content (vowels) and speaker identity at 7 T using support vector machine (SVM) MVPA. In addition, we found that high spatial frequencies are informative for vowel decoding and that the relative contribution of high and low spatial frequencies differs across the two decoding tasks. Moderate smoothing (up to 2.2 mm) improved the accuracies for decoding both vowels and speakers, possibly due to the reduction of noise (e.g., residual motion artifacts or instrument noise) while still preserving information at high spatial frequencies. In summary, our results show that, even with the same stimuli and within the same brain areas, the optimal spatial resolution for MVPA in fMRI depends on the specific decoding task of interest.
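A practical detail in this kind of analysis is that smoothing kernels are specified as FWHM in millimeters, while standard Gaussian filters take sigma in voxel units. A minimal sketch of the conversion and smoothing step, assuming scipy and placeholder data:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

voxel_size_mm = 1.1
volume = np.random.randn(64, 64, 32)  # one fMRI volume (placeholder data)

for fwhm_mm in (1.1, 2.2, 3.3, 4.4, 8.8):
    # FWHM = 2 * sqrt(2 * ln 2) * sigma, and sigma must be in voxel units.
    sigma_vox = fwhm_mm / (2 * np.sqrt(2 * np.log(2))) / voxel_size_mm
    smoothed = gaussian_filter(volume, sigma=sigma_vox)
    # ... the smoothed patterns would then feed the SVM decoder for each kernel
```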


Subject(s)
Brain Mapping/methods, Brain/physiology, Signal Processing, Computer-Assisted, Acoustic Stimulation, Adult, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Multivariate Analysis, Pattern Recognition, Automated, Speech Perception, Support Vector Machine
6.
J Neurosci; 34(13): 4548-4557, 2014 Mar 26.
Article in English | MEDLINE | ID: mdl-24672000

ABSTRACT

Selective attention to relevant sound properties is essential for everyday listening situations. It enables the formation of different perceptual representations of the same acoustic input and is at the basis of flexible and goal-dependent behavior. Here, we investigated the role of the human auditory cortex in forming behavior-dependent representations of sounds. We used single-trial fMRI and analyzed cortical responses collected while subjects listened to the same speech sounds (vowels /a/, /i/, and /u/) spoken by different speakers (boy, girl, male) and performed a delayed-match-to-sample task on either speech sound or speaker identity. Univariate analyses showed a task-specific activation increase in the right superior temporal gyrus/sulcus (STG/STS) during speaker categorization and in the right posterior temporal cortex during vowel categorization. Beyond regional differences in activation levels, multivariate classification of single-trial responses demonstrated that the success with which single speakers and vowels can be decoded from auditory cortical activation patterns depends on task demands and on the subjects' behavioral performance. Speaker/vowel classification relied on distinct but overlapping regions across the (right) mid-anterior STG/STS (speakers) and bilateral mid-posterior STG/STS (vowels), as well as the superior temporal plane including Heschl's gyrus/sulcus. The task dependency of speaker/vowel classification demonstrates that the informative fMRI response patterns reflect the top-down enhancement of behaviorally relevant sound representations. Furthermore, our findings suggest that the successful selection, processing, and retention of task-relevant sound properties rely on the joint encoding of information across early and higher-order regions of the auditory cortex.
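The task-dependency analysis can be sketched as decoding the same stimulus labels separately from trials of each task and comparing accuracies. A minimal illustration assuming scikit-learn, with placeholder data and labels:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def decode(X, y):
    """Mean cross-validated accuracy of a linear SVM."""
    return cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()

n_vox = 400
X_speaker_task = np.random.randn(90, n_vox)  # trials from the speaker task
X_vowel_task = np.random.randn(90, n_vox)    # trials from the vowel task
speaker_labels = np.tile([0, 1, 2], 30)      # boy / girl / male

acc_relevant = decode(X_speaker_task, speaker_labels)   # speaker is task-relevant
acc_irrelevant = decode(X_vowel_task, speaker_labels)   # speaker is task-irrelevant
# Higher acc_relevant would reflect task-dependent enhancement of speaker information.
```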


Subject(s)
Auditory Cortex/physiology, Phonetics, Speech Perception/physiology, Acoustic Stimulation/methods, Adult, Auditory Cortex/blood supply, Brain Mapping, Female, Functional Laterality, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Oxygen, Psychoacoustics, Sound Spectrography, Young Adult
7.
J Neurosci; 34(1): 332-338, 2014 Jan 01.
Article in English | MEDLINE | ID: mdl-24381294

ABSTRACT

Bilinguals derive the same semantic concepts from equivalent, but acoustically different, words in their first and second languages. The neural mechanisms underlying the representation of language-independent concepts in the brain remain unclear. Here, we measured fMRI responses in human bilingual listeners and found that response patterns to individual spoken nouns in one language (e.g., "horse" in English) accurately predict the response patterns to equivalent nouns in the other language (e.g., "paard" in Dutch). Stimuli were four monosyllabic words in both languages, all from the category of "animal" nouns. For each word, pronunciations from three different speakers were included, allowing the investigation of speaker-independent representations of individual words. We used multivariate classifiers and a searchlight method to map the informative fMRI response patterns that enable decoding spoken words within languages (within-language discrimination) and across languages (across-language generalization). Response patterns discriminative of spoken words within a language were distributed over multiple cortical regions, reflecting the complexity of the neural networks recruited during speech and language processing. Response patterns discriminative of spoken words across languages were limited to localized clusters in the left anterior temporal lobe, the left angular gyrus and the posterior bank of the left postcentral gyrus, the right posterior superior temporal sulcus/superior temporal gyrus, the right medial anterior temporal lobe, the right anterior insula, and bilateral occipital cortex. These results corroborate the existence of "hub" regions that organize semantic-conceptual knowledge in abstract form, at the fine-grained level of within-category discriminations.
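Across-language generalization amounts to training a classifier on word labels in one language and testing it on the equivalent words in the other. A minimal sketch assuming scikit-learn (placeholder data; in the study this analysis ran in a searchlight across the brain):

```python
import numpy as np
from sklearn.svm import SVC

n_vox = 200
X_en = np.random.randn(48, n_vox)  # English trials (4 words x 3 speakers x repeats)
X_nl = np.random.randn(48, n_vox)  # Dutch trials with the equivalent nouns
y = np.tile(np.arange(4), 12)      # concept labels shared across languages

# Train on one language, test on the other, in both directions.
acc_en_to_nl = SVC(kernel="linear").fit(X_en, y).score(X_nl, y)
acc_nl_to_en = SVC(kernel="linear").fit(X_nl, y).score(X_en, y)
# Above-chance accuracy would indicate a language-independent semantic code.
```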


Subject(s)
Acoustic Stimulation/methods, Magnetic Resonance Imaging/methods, Multilingualism, Speech Perception/physiology, Temporal Lobe/physiology, Adult, Female, Humans, Language, Male, Speech/physiology, Young Adult
8.
J Neurosci; 32(38): 13273-13280, 2012 Sep 19.
Article in English | MEDLINE | ID: mdl-22993443

ABSTRACT

The formation of new sound categories is fundamental to everyday goal-directed behavior. Categorization requires the abstraction of discrete classes from continuous physical features, as required by context and task. Electrophysiology in animals has shown that learning to categorize novel sounds alters their spatiotemporal neural representation at the level of early auditory cortex. However, functional magnetic resonance imaging (fMRI) studies have so far not yielded insight into the effects of category learning on sound representations in human auditory cortex. This may be due to the use of overlearned speech-like categories and fMRI subtraction paradigms, which are insufficiently sensitive to distinguish the responses to learning-induced, novel sound categories. Here, we used fMRI pattern analysis to investigate changes in human auditory cortical response patterns induced by category learning. We created complex novel sound categories and analyzed distributed activation patterns during passive listening to a sound continuum before and after category learning. We show that only after training could sound categories be successfully decoded from early auditory areas, and that learning-induced pattern changes were specific to the category-distinctive sound feature (i.e., pitch). Notably, the similarity between fMRI response patterns for the sound continuum mirrored the sigmoid shape of the behavioral category identification function. Our results indicate that perceptual representations of novel sound categories emerge from neural changes at early levels of the human auditory processing hierarchy.
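The behavioral category identification function referred to here is typically summarized by a fitted sigmoid. A minimal sketch of such a fit, assuming scipy and invented identification rates:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, x0, k):
    """Logistic identification function: boundary x0, steepness k."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

steps = np.linspace(0.0, 1.0, 8)  # morph steps along the sound continuum
p_cat_b = np.array([0.02, 0.05, 0.10, 0.30, 0.72, 0.90, 0.95, 0.98])  # invented

(x0, k), _ = curve_fit(sigmoid, steps, p_cat_b, p0=[0.5, 10.0])
# Pattern similarity between continuum steps could then be compared with
# the fitted identification function (the reported mirroring effect).
```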


Subject(s)
Auditory Cortex/physiology, Auditory Perception/physiology, Brain Mapping, Learning/physiology, Sound, Acoustic Stimulation/classification, Adult, Analysis of Variance, Auditory Cortex/blood supply, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Normal Distribution, Oxygen/blood, Psychoacoustics, Spectrum Analysis, Young Adult
9.
J Neurosci; 32(23): 8024-8034, 2012 Jun 06.
Article in English | MEDLINE | ID: mdl-22674277

ABSTRACT

Human hearing is constructive. For example, when a voice is partially replaced by an extraneous sound (e.g., on the telephone due to a transmission problem), the auditory system may restore the missing portion so that the voice is perceived as continuous (Miller and Licklider, 1950; for review, see Bregman, 1990; Warren, 1999). The neural mechanisms underlying this continuity illusion have been studied mostly with schematic stimuli (e.g., simple tones) and are still a matter of debate (for review, see Petkov and Sutter, 2011). The goal of the present study was to elucidate how these mechanisms operate under more natural conditions. Using psychophysics and electroencephalography (EEG), we simultaneously assessed the perceived continuity of a human vowel sound through interrupting noise and the concurrent neural activity. We found that vowel continuity illusions were accompanied by a suppression of the 4 Hz EEG power in auditory cortex (AC) evoked by the vowel interruption. This suppression was stronger than the suppression accompanying continuity illusions of a simple tone. Finally, continuity perception and 4 Hz power depended on the intactness of the sound that preceded the vowel (i.e., the auditory context). These findings show that a natural sound may be restored during noise through the suppression of 4 Hz AC activity evoked early during the noise. This mechanism may attenuate sudden pitch changes, adapt the resistance of the auditory system to extraneous sounds across auditory scenes, and provide a useful model for assisted hearing devices.
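The 4 Hz power measure can be illustrated with a standard bandpass-plus-Hilbert estimate of evoked power; the sketch below assumes scipy, and its parameters and data are placeholders rather than the study's pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

sfreq = 250.0
trials = np.random.randn(40, int(1.0 * sfreq))  # 40 trials x 1 s after interruption

b, a = butter(4, [3.0, 5.0], btype="bandpass", fs=sfreq)  # 3-5 Hz band
evoked = trials.mean(axis=0)                    # phase-locked (evoked) response
power_4hz = np.abs(hilbert(filtfilt(b, a, evoked))) ** 2
# Repeating this per percept (continuous vs. interrupted) would allow the
# comparison of evoked 4 Hz power between illusion and no-illusion trials.
```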


Subject(s)
Auditory Cortex/physiology, Hearing/physiology, Noise, Speech Perception/physiology, Acoustic Stimulation, Adaptation, Psychological/physiology, Adult, Data Interpretation, Statistical, Electroencephalography, Evoked Potentials, Auditory/physiology, Female, Humans, Illusions/psychology, Male, Principal Component Analysis, Psychomotor Performance/physiology, Psychophysics, Young Adult