Results 1 - 18 of 18
1.
Nat Neurosci ; 26(4): 664-672, 2023 04.
Article in English | MEDLINE | ID: mdl-36928634

ABSTRACT

Recognizing sounds implicates the cerebral transformation of input waveforms into semantic representations. Although past research identified the superior temporal gyrus (STG) as a crucial cortical region, the computational fingerprint of these cerebral transformations remains poorly characterized. Here, we exploit a model comparison framework and contrast the ability of acoustic, semantic (continuous and categorical) and sound-to-event deep neural network representation models to predict perceived sound dissimilarity and 7 T human auditory cortex functional magnetic resonance imaging responses. We confirm that spectrotemporal modulations predict early auditory cortex (Heschl's gyrus) responses, and that auditory dimensions (for example, loudness, periodicity) predict STG responses and perceived dissimilarity. Sound-to-event deep neural networks predict Heschl's gyrus responses similarly to acoustic models but, notably, they outperform all competing models at predicting both STG responses and perceived dissimilarity. Our findings indicate that STG entails intermediate acoustic-to-semantic sound representations that neither acoustic nor semantic models can account for. These representations are compositional in nature and relevant to behavior.
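
To make the model-comparison logic concrete, here is a minimal RSA-style sketch: each candidate model's pairwise-dissimilarity structure is rank-correlated with perceived dissimilarity. All arrays, sizes, and feature counts are hypothetical stand-ins, not the study's data or pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical inputs: rows are sounds, columns are model features.
rng = np.random.default_rng(0)
n_sounds = 40
models = {
    "acoustic": rng.normal(size=(n_sounds, 64)),
    "semantic": rng.normal(size=(n_sounds, 300)),
    "dnn":      rng.normal(size=(n_sounds, 128)),
}
# Condensed vector of pairwise perceived dissimilarities (placeholder).
perceived = rng.random(n_sounds * (n_sounds - 1) // 2)

# Compare models by correlating each model's representational
# dissimilarity matrix (RDM) with the perceived-dissimilarity RDM.
for name, feats in models.items():
    rdm = pdist(feats, metric="correlation")  # condensed model RDM
    rho, _ = spearmanr(rdm, perceived)        # rank correlation with behavior
    print(f"{name}: Spearman rho = {rho:.3f}")
```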


Subject(s)
Auditory Cortex , Semantics , Humans , Acoustic Stimulation/methods , Auditory Cortex/physiology , Acoustics , Magnetic Resonance Imaging , Auditory Perception/physiology , Brain Mapping/methods
2.
Cereb Cortex ; 30(3): 1103-1116, 2020 03 14.
Article in English | MEDLINE | ID: mdl-31504283

ABSTRACT

Auditory spatial tasks induce functional activation in the occipital (visual) cortex of early blind humans. Less is known about the effects of blindness on auditory spatial processing in the temporal (auditory) cortex. Here, we investigated spatial (azimuth) processing in congenitally and early blind humans with a phase-encoding functional magnetic resonance imaging (fMRI) paradigm. Our results show that functional activation in response to sounds in general (independent of sound location) was stronger in the occipital cortex but reduced in the medial temporal cortex of blind participants in comparison with sighted participants. Additionally, activation patterns for binaural spatial processing were different for sighted and blind participants in planum temporale. Finally, fMRI responses in the auditory cortex of blind individuals carried less information on sound azimuth position than those in sighted individuals, as assessed with a 2-channel, opponent coding model for the cortical representation of sound azimuth. These results indicate that early visual deprivation results in reorganization of binaural spatial processing in the auditory cortex and that blind individuals may rely on alternative mechanisms for processing azimuth position.
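
The phase-encoding logic can be illustrated in a few lines: when the stimulus cycles through azimuth, the phase of a voxel's response at the cycling frequency indexes its preferred position. All numbers below are illustrative, not from the study.

```python
import numpy as np

# Toy phase-encoding analysis. A sound sweeps cyclically through azimuth;
# a voxel tuned to some azimuth responds periodically, and its response
# phase at the sweep frequency indexes the preferred azimuth.
rng = np.random.default_rng(0)
n_vols, n_cycles, tr = 240, 8, 2.0            # volumes, sweep cycles, TR (s)
t = np.arange(n_vols) * tr
preferred = np.deg2rad(60.0)                  # ground truth for the toy voxel
sweep = 2 * np.pi * n_cycles * t / (n_vols * tr)
ts = np.cos(sweep - preferred) + 0.5 * rng.normal(size=n_vols)

spectrum = np.fft.rfft(ts)
recovered = -np.angle(spectrum[n_cycles])     # phase at the cycle frequency
print(f"recovered azimuth ~ {np.rad2deg(recovered):.1f} deg (true: 60.0)")
```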


Subject(s)
Auditory Cortex/physiopathology , Blindness/physiopathology , Neuronal Plasticity , Sound Localization/physiology , Acoustic Stimulation , Adult , Blindness/congenital , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Occipital Lobe/physiology , Visually Impaired Persons
3.
J Neurosci ; 38(40): 8574-8587, 2018 10 03.
Article in English | MEDLINE | ID: mdl-30126968

ABSTRACT

Spatial hearing sensitivity in humans is dynamic and task-dependent, but the mechanisms in human auditory cortex that enable dynamic sound location encoding remain unclear. Using functional magnetic resonance imaging (fMRI), we assessed how active behavior affects encoding of sound location (azimuth) in primary auditory cortical areas and planum temporale (PT). According to the hierarchical model of auditory processing and cortical functional specialization, PT is implicated in sound location ("where") processing. Yet, our results show that spatial tuning profiles in primary auditory cortical areas (left primary core and right caudo-medial belt) sharpened during a sound localization ("where") task compared with a sound identification ("what") task. In contrast, spatial tuning in PT was sharp but did not vary with task performance. We further applied a population pattern decoder to the measured fMRI activity patterns, which confirmed the task-dependent effects in the left core: sound location estimates from fMRI patterns measured during active sound localization were most accurate. In PT, decoding accuracy was not modulated by task performance. These results indicate that changes of population activity in human primary auditory areas reflect dynamic and task-dependent processing of sound location. As such, our findings suggest that the hierarchical model of auditory processing may need to be revised to include an interaction between primary and functionally specialized areas depending on behavioral requirements.

SIGNIFICANCE STATEMENT: According to a purely hierarchical view, cortical auditory processing consists of a series of analysis stages from sensory (acoustic) processing in primary auditory cortex to specialized processing in higher-order areas. Posterior-dorsal cortical auditory areas, planum temporale (PT) in humans, are considered to be functionally specialized for spatial processing. However, this model is based mostly on passive listening studies. Our results provide compelling evidence that active behavior (sound localization) sharpens spatial selectivity in primary auditory cortex, whereas spatial tuning in functionally specialized areas (PT) is narrow but task-invariant. These findings suggest that the hierarchical view of cortical functional specialization needs to be extended: our data indicate that active behavior involves feedback projections from higher-order regions to primary auditory cortex.


Subject(s)
Auditory Cortex/physiology , Sound Localization/physiology , Acoustic Stimulation , Adult , Auditory Pathways/physiology , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Young Adult
4.
Neuroimage ; 174: 274-287, 2018 07 01.
Article in English | MEDLINE | ID: mdl-29571712

ABSTRACT

Paying selective attention to an audio frequency selectively enhances activity within primary auditory cortex (PAC) at the tonotopic site (frequency channel) representing that frequency. Animal PAC neurons achieve this 'frequency-specific attentional spotlight' by adapting their frequency tuning, yet comparable evidence in humans is scarce. Moreover, whether the spotlight operates in human midbrain is unknown. To address these issues, we studied the spectral tuning of frequency channels in human PAC and inferior colliculus (IC), using 7-T functional magnetic resonance imaging (fMRI) and frequency mapping, while participants focused on different frequency-specific sounds. We found that shifts in frequency-specific attention alter the response gain, but not tuning profile, of PAC frequency channels. The gain modulation was strongest in low-frequency channels and varied near-monotonically across the tonotopic axis, giving rise to the attentional spotlight. We observed less prominent, non-tonotopic spatial patterns of attentional modulation in IC. These results indicate that the frequency-specific attentional spotlight in human PAC as measured with fMRI arises primarily from tonotopic gain modulation, rather than adapted frequency tuning. Moreover, frequency-specific attentional modulation of afferent sound processing in human IC seems to be considerably weaker, suggesting that the spotlight diminishes toward this lower-order processing stage. Our study sheds light on how the human auditory pathway adapts to the different demands of selective hearing.
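
A sketch of how a "gain change vs. tuning change" question can be adjudicated: fit one channel's attended-condition response profile with a gain-only model and with a shift-only model, then compare fit error. The Gaussian tuning shape, parameters, and data are assumptions for illustration, not the paper's fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical response profile of one frequency channel across stimulus
# frequencies (in octaves from its best frequency), in two conditions.
freqs = np.linspace(-2, 2, 21)

def gauss(f, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((f - mu) / sigma) ** 2)

rng = np.random.default_rng(1)
attended = gauss(freqs, 1.6, 0.0, 0.8) + 0.05 * rng.normal(size=freqs.size)

# Model 1 (gain): baseline center/width fixed, amplitude free.
gain_fit, _ = curve_fit(lambda f, amp: gauss(f, amp, 0.0, 0.8), freqs, attended)
# Model 2 (shift): baseline amplitude/width fixed, center free.
shift_fit, _ = curve_fit(lambda f, mu: gauss(f, 1.0, mu, 0.8), freqs, attended)

for name, pred in [("gain", gauss(freqs, gain_fit[0], 0.0, 0.8)),
                   ("shift", gauss(freqs, 1.0, shift_fit[0], 0.8))]:
    sse = np.sum((attended - pred) ** 2)
    print(f"{name} model SSE = {sse:.3f}")   # lower SSE favors that account
```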


Subject(s)
Attention/physiology , Auditory Cortex/physiology , Auditory Perception/physiology , Inferior Colliculi/physiology , Acoustic Stimulation , Adult , Auditory Pathways/physiology , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Young Adult
5.
J Acoust Soc Am ; 142(4): 1757, 2017 10.
Article in English | MEDLINE | ID: mdl-29092572

ABSTRACT

Meaningful sounds represent the majority of sounds that humans hear and process in everyday life. Yet studies of human sound localization mainly use artificial stimuli such as clicks, pure tones, and noise bursts. The present study investigated the influence of behavioral relevance, sound category, and acoustic properties on the localization of complex, meaningful sounds in the horizontal plane. Participants localized vocalizations and traffic sounds with two levels of behavioral relevance (low and high) within each category, as well as amplitude-modulated tones. Results showed a small but significant effect of behavioral relevance: localization acuity was higher for complex sounds with a high level of behavioral relevance at several target locations. The data also showed category-specific effects: localization biases were lower, and localization precision higher, for vocalizations than for traffic sounds in central space. Several acoustic parameters influenced sound localization performance as well. Correcting localization responses for front-back reversals reduced the overall variability across sounds, but behavioral relevance and sound category still had a modulatory effect on sound localization performance in central auditory space. The results thus demonstrate that spatial hearing performance for complex sounds is influenced not only by acoustic characteristics, but also by sound category and behavioral relevance.


Subject(s)
Acoustic Stimulation/methods , Cues , Noise, Transportation , Psychoacoustics , Sound Localization , Voice , Adult , Female , Humans , Male , Young Adult
6.
Cereb Cortex ; 27(5): 3002-3014, 2017 05 01.
Article in English | MEDLINE | ID: mdl-27230215

ABSTRACT

A sound of interest may be tracked amid other salient sounds by focusing attention on its characteristic features including its frequency. Functional magnetic resonance imaging findings have indicated that frequency representations in human primary auditory cortex (AC) contribute to this feat. However, attentional modulations were examined at relatively low spatial and spectral resolutions, and frequency-selective contributions outside the primary AC could not be established. To address these issues, we compared blood oxygenation level-dependent (BOLD) responses in the superior temporal cortex of human listeners while they identified single frequencies versus listened selectively for various frequencies within a multifrequency scene. Using best-frequency mapping, we observed that the detailed spatial layout of attention-induced BOLD response enhancements in primary AC follows the tonotopy of stimulus-driven frequency representations-analogous to the "spotlight" of attention enhancing visuospatial representations in retinotopic visual cortex. Moreover, using an algorithm trained to discriminate stimulus-driven frequency representations, we could successfully decode the focus of frequency-selective attention from listeners' BOLD response patterns in nonprimary AC. Our results indicate that the human brain facilitates selective listening to a frequency of interest in a scene by reinforcing the fine-grained activity pattern throughout the entire superior temporal cortex that would be evoked if that frequency was present alone.


Subject(s)
Attention/physiology , Auditory Perception/physiology , Brain Mapping , Temporal Lobe/diagnostic imaging , Acoustic Stimulation , Adult , Female , Humans , Image Processing, Computer-Assisted , Judgment , Magnetic Resonance Imaging , Male , Oxygen/blood , Psychoacoustics , Young Adult
7.
Neuroimage ; 132: 32-42, 2016 05 15.
Article in English | MEDLINE | ID: mdl-26899782

ABSTRACT

Multivariate pattern analysis (MVPA) in fMRI has been used to extract information from distributed cortical activation patterns, which may go undetected in conventional univariate analysis. However, little is known about the physical and physiological underpinnings of MVPA in fMRI, or about the effect of spatial smoothing on its performance. Several studies have addressed these issues, but their investigation was limited to the visual cortex at 3T, with conflicting results. Here, we used ultra-high field (7T) fMRI to investigate the effect of spatial resolution and smoothing on decoding of speech content (vowels) and speaker identity from auditory cortical responses. To that end, we acquired high-resolution (1.1 mm isotropic) fMRI data and additionally reconstructed them at 2.2 and 3.3 mm in-plane spatial resolutions from the original k-space data. Furthermore, the data at each resolution were spatially smoothed with different 3D Gaussian kernel sizes (i.e., no smoothing or 1.1, 2.2, 3.3, 4.4, or 8.8 mm kernels). For all spatial resolutions and smoothing kernels, we demonstrate the feasibility of decoding speech content (vowel) and speaker identity at 7T using support vector machine (SVM) MVPA. In addition, we found that high spatial frequencies are informative for vowel decoding and that the relative contribution of high and low spatial frequencies differs across the two decoding tasks. Moderate smoothing (up to 2.2 mm) improved the accuracies for both decoding of vowels and speakers, possibly due to reduction of noise (e.g., residual motion artifacts or instrument noise) while still preserving information at high spatial frequency. In summary, our results show that, even with the same stimuli and within the same brain areas, the optimal spatial resolution for MVPA in fMRI depends on the specific decoding task of interest.
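
A minimal sketch of the smoothing-then-decoding pipeline, assuming simulated volumes in place of fMRI data: patterns are smoothed with Gaussian kernels of increasing width, then decoded with a cross-validated linear SVM. Grid sizes, signal strength, and kernel widths are invented for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, shape = 60, (12, 12, 12)                 # trials, toy voxel grid
labels = np.repeat([0, 1], n_trials // 2)
signal = rng.normal(size=shape) * 0.3              # weak class-specific pattern
vols = rng.normal(size=(n_trials,) + shape)
vols[labels == 1] += signal

fwhm_to_sigma = 1 / 2.355                          # FWHM (vox) -> Gaussian sigma
for fwhm in [0, 1, 2, 3]:                          # smoothing kernels in voxels
    sm = np.array([gaussian_filter(v, fwhm * fwhm_to_sigma) if fwhm else v
                   for v in vols])
    X = sm.reshape(n_trials, -1)
    acc = cross_val_score(SVC(kernel="linear"), X, labels, cv=5).mean()
    print(f"FWHM {fwhm} vox: accuracy = {acc:.2f}")
```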


Subject(s)
Brain Mapping/methods , Brain/physiology , Signal Processing, Computer-Assisted , Acoustic Stimulation , Adult , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Multivariate Analysis , Pattern Recognition, Automated , Speech Perception , Support Vector Machine
8.
Neuroimage ; 129: 428-438, 2016 Apr 01.
Article in English | MEDLINE | ID: mdl-26826515

ABSTRACT

In the perceptual domain, it has been shown that the human brain is strongly shaped through experience, leading to expertise in highly skilled professionals. What has remained unclear is whether specialization also shapes brain networks underlying mental imagery. In our fMRI study, we aimed to uncover modality-specific mental imagery specialization of film experts. Using multi-voxel pattern analysis, we decoded from the brain activity of professional cinematographers and sound designers whether they were imagining sounds or images of particular film clips. In each expert group, distinct multi-voxel patterns, specific to the modality of their expertise, were found during classification of imagery modality. These patterns were mainly localized in the occipito-temporal and parietal cortex for cinematographers and in the auditory cortex for sound designers. We also found generalized patterns across perception and imagery that were distinct for the two expert groups: they involved frontal cortex for the cinematographers and temporal cortex for the sound designers. Notably, the mental representations of film clips and sounds of cinematographers contained information that went beyond modality-specificity. We were able to successfully decode the implicit presence of film genre from brain activity during mental imagery in cinematographers. The results extend existing neuroimaging literature on expertise into the domain of mental imagery and show that experience in visual versus auditory imagery can alter the representation of information in modality-specific association cortices.


Subject(s)
Brain Mapping , Brain/physiology , Imagination/physiology , Adult , Female , Humans , Magnetic Resonance Imaging , Male
9.
Cereb Cortex ; 26(1): 450-464, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26545618

ABSTRACT

Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC.
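
As an illustration of the opponent-channel readout, the toy below decodes azimuth from the difference between two broadly tuned, oppositely lateralized channels. The sigmoidal tuning shape and all parameters are assumptions, not the fitted model from the study.

```python
import numpy as np

# Toy opponent-channel readout: two broad hemifield channels, each
# preferring contralateral space; their (normalized) difference is a
# monotonic function of azimuth that a simple lookup can invert.
azimuths = np.deg2rad(np.linspace(-90, 90, 181))

def channel(az, side):                        # broad sigmoidal hemifield tuning
    return 1 / (1 + np.exp(-4 * side * np.sin(az)))

left_hem = channel(azimuths, +1)              # left hemisphere: prefers right
right_hem = channel(azimuths, -1)             # right hemisphere: prefers left
# Normalizing by the summed response cancels a common gain, which makes
# the readout robust to overall sound level (one appeal of opponent codes).
opponent = (left_hem - right_hem) / (left_hem + right_hem)

test_az = np.deg2rad(33.0)
resp = ((channel(test_az, +1) - channel(test_az, -1)) /
        (channel(test_az, +1) + channel(test_az, -1)))
decoded = np.interp(resp, opponent, np.rad2deg(azimuths))
print(f"decoded azimuth = {decoded:.1f} deg (true: 33.0)")
```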


Subject(s)
Action Potentials/physiology , Auditory Cortex/physiology , Neurons/physiology , Sound Localization/physiology , Sound , Acoustic Stimulation/methods , Female , Humans , Magnetic Resonance Imaging/methods , Male
10.
PLoS One ; 9(9): e108045, 2014.
Article in English | MEDLINE | ID: mdl-25259525

ABSTRACT

Auditory selective attention plays an essential role for identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims at dissociating the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control.


Subject(s)
Attention , Auditory Cortex/physiology , Acoustic Stimulation , Adolescent , Adult , Auditory Perception , Electroencephalography , Evoked Potentials, Auditory , Female , Humans , Male , Young Adult
11.
J Neurosci ; 34(13): 4548-57, 2014 Mar 26.
Article in English | MEDLINE | ID: mdl-24672000

ABSTRACT

Selective attention to relevant sound properties is essential for everyday listening situations. It enables the formation of different perceptual representations of the same acoustic input and is at the basis of flexible and goal-dependent behavior. Here, we investigated the role of the human auditory cortex in forming behavior-dependent representations of sounds. We used single-trial fMRI and analyzed cortical responses collected while subjects listened to the same speech sounds (vowels /a/, /i/, and /u/) spoken by different speakers (boy, girl, male) and performed a delayed-match-to-sample task on either speech sound or speaker identity. Univariate analyses showed a task-specific activation increase in the right superior temporal gyrus/sulcus (STG/STS) during speaker categorization and in the right posterior temporal cortex during vowel categorization. Beyond regional differences in activation levels, multivariate classification of single trial responses demonstrated that the success with which single speakers and vowels can be decoded from auditory cortical activation patterns depends on task demands and subject's behavioral performance. Speaker/vowel classification relied on distinct but overlapping regions across the (right) mid-anterior STG/STS (speakers) and bilateral mid-posterior STG/STS (vowels), as well as the superior temporal plane including Heschl's gyrus/sulcus. The task dependency of speaker/vowel classification demonstrates that the informative fMRI response patterns reflect the top-down enhancement of behaviorally relevant sound representations. Furthermore, our findings suggest that successful selection, processing, and retention of task-relevant sound properties relies on the joint encoding of information across early and higher-order regions of the auditory cortex.


Subject(s)
Auditory Cortex/physiology , Phonetics , Speech Perception/physiology , Acoustic Stimulation/methods , Adult , Auditory Cortex/blood supply , Brain Mapping , Female , Functional Laterality , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Oxygen , Psychoacoustics , Sound Spectrography , Young Adult
12.
J Neurosci ; 34(1): 332-8, 2014 Jan 01.
Article in English | MEDLINE | ID: mdl-24381294

ABSTRACT

Bilinguals derive the same semantic concepts from equivalent, but acoustically different, words in their first and second languages. The neural mechanisms underlying the representation of language-independent concepts in the brain remain unclear. Here, we measured fMRI in human bilingual listeners and reveal that response patterns to individual spoken nouns in one language (e.g., "horse" in English) accurately predict the response patterns to equivalent nouns in the other language (e.g., "paard" in Dutch). Stimuli were four monosyllabic words in both languages, all from the category of "animal" nouns. For each word, pronunciations from three different speakers were included, allowing the investigation of speaker-independent representations of individual words. We used multivariate classifiers and a searchlight method to map the informative fMRI response patterns that enable decoding spoken words within languages (within-language discrimination) and across languages (across-language generalization). Response patterns discriminative of spoken words within language were distributed in multiple cortical regions, reflecting the complexity of the neural networks recruited during speech and language processing. Response patterns discriminative of spoken words across language were limited to localized clusters in the left anterior temporal lobe, the left angular gyrus and the posterior bank of the left postcentral gyrus, the right posterior superior temporal sulcus/superior temporal gyrus, the right medial anterior temporal lobe, the right anterior insula, and bilateral occipital cortex. These results corroborate the existence of "hub" regions organizing semantic-conceptual knowledge in abstract form at the fine-grained level of within semantic category discriminations.
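
The across-language generalization test can be sketched as follows: train a classifier on word-specific patterns from one language and test it on the equivalent words in the other; above-chance accuracy indicates a shared, language-independent code. The patterns here are simulated placeholders, not the study's data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_words, n_rep, n_vox = 4, 12, 200                 # e.g., 4 animal nouns
concept = rng.normal(size=(n_words, n_vox))        # shared semantic pattern

def simulate(lang_offset):
    # Each trial = shared concept pattern + language-specific offset + noise.
    X = np.vstack([concept[w] + lang_offset + rng.normal(scale=1.5, size=n_vox)
                   for w in range(n_words) for _ in range(n_rep)])
    y = np.repeat(np.arange(n_words), n_rep)
    return X, y

X_en, y_en = simulate(rng.normal(scale=0.5, size=n_vox))   # "English" trials
X_nl, y_nl = simulate(rng.normal(scale=0.5, size=n_vox))   # "Dutch" trials

clf = SVC(kernel="linear").fit(X_en, y_en)         # train within English
acc = (clf.predict(X_nl) == y_nl).mean()           # test on Dutch equivalents
print(f"across-language accuracy = {acc:.2f} (chance = {1/n_words:.2f})")
```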


Subject(s)
Acoustic Stimulation/methods , Magnetic Resonance Imaging/methods , Multilingualism , Speech Perception/physiology , Temporal Lobe/physiology , Adult , Female , Humans , Language , Male , Speech/physiology , Young Adult
13.
J Neurosci ; 32(38): 13273-80, 2012 Sep 19.
Article in English | MEDLINE | ID: mdl-22993443

ABSTRACT

The formation of new sound categories is fundamental to everyday goal-directed behavior. Categorization requires the abstraction of discrete classes from continuous physical features as required by context and task. Electrophysiology in animals has shown that learning to categorize novel sounds alters their spatiotemporal neural representation at the level of early auditory cortex. However, functional magnetic resonance imaging (fMRI) studies so far did not yield insight into the effects of category learning on sound representations in human auditory cortex. This may be due to the use of overlearned speech-like categories and fMRI subtraction paradigms, leading to insufficient sensitivity to distinguish the responses to learning-induced, novel sound categories. Here, we used fMRI pattern analysis to investigate changes in human auditory cortical response patterns induced by category learning. We created complex novel sound categories and analyzed distributed activation patterns during passive listening to a sound continuum before and after category learning. We show that only after training, sound categories could be successfully decoded from early auditory areas and that learning-induced pattern changes were specific to the category-distinctive sound feature (i.e., pitch). Notably, the similarity between fMRI response patterns for the sound continuum mirrored the sigmoid shape of the behavioral category identification function. Our results indicate that perceptual representations of novel sound categories emerge from neural changes at early levels of the human auditory processing hierarchy.
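
A small sketch of the behavioral side of such an analysis: fitting a sigmoid identification function to choice proportions along a sound continuum, the shape that the fMRI pattern similarities were found to mirror. The proportions below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

morph = np.linspace(0, 1, 9)                       # position on the continuum
# Hypothetical proportion of "category B" responses at each morph step.
p_catB = np.array([0.02, 0.05, 0.08, 0.20, 0.55, 0.80, 0.93, 0.97, 0.99])

def sigmoid(x, x0, k):
    return 1 / (1 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(sigmoid, morph, p_catB, p0=[0.5, 10])
print(f"category boundary at morph = {x0:.2f}, slope = {k:.1f}")
# The abstract's analysis asks whether neural pattern similarity across
# the continuum mirrors this fitted function (e.g., via correlation).
```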


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Brain Mapping , Learning/physiology , Sound , Acoustic Stimulation/classification , Adult , Analysis of Variance , Auditory Cortex/blood supply , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Normal Distribution , Oxygen/blood , Psychoacoustics , Spectrum Analysis , Young Adult
14.
Neuroimage ; 60(1): 47-58, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22186678

ABSTRACT

Imagination is a key function for many human activities, such as reminiscing, learning, or planning. Unravelling its neurobiological basis is paramount to grasp the essence of our thoughts. Previous neuroimaging studies have identified brain regions subserving the visualisation of "what?" (e.g. faces or objects) and "where?" (e.g. spatial layout) content of mental images. However, the functional role of a common set of involved regions (the frontal regions) and their interplay with the "what" and "where" regions has remained largely unspecified. This study combines functional MRI and electroencephalography to examine the full-brain network that underlies the visual imagery of complex scenes and to investigate the spectro-temporal properties of its nodes, especially of the frontal cortex. Our results indicate that frontal regions integrate the "what" and "where" content of our thoughts into one visually imagined scene. We link early synchronisation of anterior theta and beta oscillations to regional activation of right and central frontal cortices, reflecting retrieval and integration of information. These frontal regions orchestrate remote occipital-temporal regions (including calcarine sulcus and parahippocampal gyrus) that encode the detailed representations of the objects, and parietal "where" regions that encode the spatial layout, into forming one coherent mental picture. Specifically, the mesial superior frontal gyrus appears to have a principal integrative role, as its activity during the visualisation of the scene predicts subsequent performance on the imagery task.


Subject(s)
Frontal Lobe/physiology , Imagination/physiology , Adolescent , Adult , Electroencephalography , Female , Functional Neuroimaging , Humans , Magnetic Resonance Imaging , Male , Young Adult
15.
J Neurosci ; 31(5): 1715-20, 2011 Feb 02.
Article in English | MEDLINE | ID: mdl-21289180

ABSTRACT

The confounding of physical stimulus characteristics and perceptual interpretations of stimuli poses a problem for most neuroscientific studies of perception. In the auditory domain, this pertains to the entanglement of acoustics and percept. Traditionally, most study designs have relied on cognitive subtraction logic, which demands the use of one or more comparisons between stimulus types. This does not allow for a differentiation between effects due to acoustic differences (i.e., sensation) and those due to conscious perception. To overcome this problem, we used functional magnetic resonance imaging (fMRI) in humans and pattern-recognition analysis to identify activation patterns that encode the perceptual interpretation of physically identical, ambiguous sounds. We show that it is possible to retrieve the perceptual interpretation of ambiguous phonemes-information that is fully subjective to the listener-from fMRI measurements of brain activity in auditory areas in the superior temporal cortex, most prominently on the posterior bank of the left Heschl's gyrus and sulcus and in the adjoining left planum temporale. These findings suggest that, beyond the basic acoustic analysis of sounds, constructive perceptual processes take place in these relatively early cortical auditory networks. This disagrees with hierarchical models of auditory processing, which generally conceive of these areas as sets of feature detectors, whose task is restricted to the analysis of physical characteristics and the structure of sounds.


Subject(s)
Auditory Cortex/physiology , Magnetic Resonance Imaging , Phonetics , Sound , Speech Perception/physiology , Speech , Visual Perception/physiology , Acoustic Stimulation/methods , Adult , Female , Humans , Male , Neuropsychological Tests , Temporal Lobe/physiology
16.
Neuroimage ; 56(2): 826-36, 2011 May 15.
Article in English | MEDLINE | ID: mdl-20691274

ABSTRACT

The combination of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) has been proposed as a tool to study brain dynamics with both high temporal and high spatial resolution. Integration through EEG-fMRI trial-by-trial coupling has been proposed as a method to combine the two data sets and achieve temporal expansion of the fMRI data (Eichele et al., 2005). To benefit fully from this type of analysis, simultaneous EEG-fMRI acquisitions are necessary (Debener et al., 2006). Here we address the issue of predicting the signal in one modality using information from the other modality. We use multivariate Relevance Vector Machine (RVM) regression to "learn" the relation between fMRI activation patterns and simultaneously acquired EEG responses in the context of a complex cognitive task entailing an auditory cue, visual mental imagery, and a control visual target. We show that multivariate regression is a valuable approach for predicting evoked and induced oscillatory EEG responses from fMRI time series. Prediction of EEG from fMRI is largely influenced by the overall filtering effects of the hemodynamic response function. However, a detailed analysis of the auditory evoked responses shows that there is a small but significant contribution of single-trial modulations that can be exploited for linking spatially distributed patterns of fMRI activation to specific components of the simultaneously recorded EEG signal.
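
To give a flavor of this predictive analysis: scikit-learn has no RVM implementation, so the sketch below uses ARDRegression, a closely related sparse Bayesian regressor, as a stand-in to map simulated fMRI patterns onto a simulated single-trial EEG amplitude. All data and sizes are placeholders.

```python
import numpy as np
from sklearn.linear_model import ARDRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_trials, n_vox = 120, 300
X = rng.normal(size=(n_trials, n_vox))             # trial-wise fMRI patterns
w = np.zeros(n_vox)                                # sparse "true" mapping
w[rng.choice(n_vox, 10, replace=False)] = rng.normal(size=10)
y = X @ w + 0.5 * rng.normal(size=n_trials)        # e.g., evoked EEG amplitude

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = ARDRegression().fit(X_tr, y_tr)            # sparse Bayesian regression
r = np.corrcoef(model.predict(X_te), y_te)[0, 1]   # single-trial prediction
print(f"predicted-vs-observed EEG correlation: r = {r:.2f}")
```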


Subject(s)
Artificial Intelligence , Brain Mapping/methods , Electroencephalography/methods , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Signal Processing, Computer-Assisted , Adult , Brain/physiology , Female , Humans
17.
Magn Reson Imaging ; 27(8): 1110-9, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19570634

ABSTRACT

Spatial independent component analysis (ICA) is a well-established technique for multivariate analysis of functional magnetic resonance imaging (fMRI) data. It blindly extracts spatiotemporal patterns of neural activity from functional measurements by seeking sources that are maximally independent. Additional information on one or more sources (e.g., spatial regularity) is often available; however, it is not considered while looking for independent components. In the present work, we propose a new ICA algorithm based on the optimization of an objective function that accounts for both independence and other information on the sources or on the mixing model in a very general fashion. In particular, we apply this approach to fMRI data analysis and illustrate, by means of simulations, how the inclusion of a spatial regularity term helps to recover the sources more effectively than conventional ICA does. The improvement is especially evident in high-noise situations. Furthermore, we employ the same approach on data sets from a complex mental imagery experiment, showing that the consistency and physiological plausibility of relatively weak components are improved.
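
A toy re-implementation of the general idea (not the authors' algorithm): gradient ascent on a standard ICA contrast plus a spatial-smoothness penalty on the recovered maps, with symmetric decorrelation after each step. Data, penalty weight, and step size are all illustrative assumptions.

```python
import numpy as np

# Model: X (time x voxels) = A @ S, with smooth 1D spatial sources S.
rng = np.random.default_rng(5)
T, V, K = 50, 200, 2
x = np.linspace(0, 1, V)
S_true = np.vstack([np.sin(6 * np.pi * x), np.exp(-((x - 0.7) / 0.1) ** 2)])
X = rng.normal(size=(T, K)) @ S_true + 0.3 * rng.normal(size=(T, V))

# Whiten (voxels as samples) and keep K components.
Xc = X - X.mean(axis=1, keepdims=True)
U, sv, _ = np.linalg.svd(Xc, full_matrices=False)
Z = (U[:, :K] / sv[:K]).T @ Xc * np.sqrt(V)        # K x V, unit covariance

D = np.diff(np.eye(V), axis=0)                     # first-difference operator
L = D.T @ D                                        # roughness (Laplacian-like)
lam, eta = 0.05, 0.2
W = np.linalg.qr(rng.normal(size=(K, K)))[0]       # orthonormal init

for _ in range(300):
    S = W @ Z
    grad = np.tanh(S) @ Z.T / V                    # independence (log-cosh) term
    grad -= lam * (S @ L) @ Z.T / V                # penalize rough spatial maps
    W = W + eta * grad
    u, _, vt = np.linalg.svd(W)                    # symmetric decorrelation
    W = u @ vt

maps = W @ Z                                       # recovered spatial maps
corr = np.corrcoef(np.vstack([maps, S_true]))[:K, K:]
print("abs corr with true maps:\n", np.abs(corr).round(2))
```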


Subject(s)
Artifacts , Brain Mapping/methods , Brain/physiology , Evoked Potentials/physiology , Image Enhancement/methods , Magnetic Resonance Imaging/methods , Humans , Magnetic Resonance Imaging/instrumentation , Principal Component Analysis , Reproducibility of Results , Sensitivity and Specificity
18.
J Neurosci ; 29(6): 1699-706, 2009 Feb 11.
Article in English | MEDLINE | ID: mdl-19211877

ABSTRACT

Speech and vocal sounds are at the core of human communication. Cortical processing of these sounds critically depends on behavioral demands. However, the neurocomputational mechanisms enabling this adaptive processing remain elusive. Here we examine the task-dependent reorganization of electroencephalographic responses to natural speech sounds (vowels /a/, /i/, /u/) spoken by three speakers (two female, one male) while listeners perform a one-back task on either vowel or speaker identity. We show that dynamic changes of sound-evoked responses and phase patterns of cortical oscillations in the alpha band (8-12 Hz) closely reflect the abstraction and analysis of the sounds along the task-relevant dimension. Vowel categorization leads to a significant temporal realignment of responses to the same vowel, e.g., /a/, independent of who pronounced this vowel, whereas speaker categorization leads to a significant temporal realignment of responses to the same speaker, e.g., speaker 1, independent of which vowel she/he pronounced. This transient and goal-dependent realignment of neuronal responses to physically different external events provides a robust cortical coding mechanism for forming and processing abstract representations of auditory (speech) input.
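
One standard way to quantify such phase realignment is inter-trial phase coherence (ITC) in the alpha band; the sketch below computes ITC on simulated EEG trials that share a common-phase burst in their second half. This illustrates the measure, not the paper's exact pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(6)
fs, n_trials, n_samp = 250, 40, 500                # Hz, trials, samples (2 s)
t = np.arange(n_samp) / fs
# Trials share a 10 Hz burst with a common phase after t = 1 s.
trials = rng.normal(size=(n_trials, n_samp))
trials[:, 250:] += 2 * np.sin(2 * np.pi * 10 * t[250:])

b, a = butter(4, [8, 12], btype="bandpass", fs=fs) # alpha-band filter
phase = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
itc = np.abs(np.exp(1j * phase).mean(axis=0))      # ITC per time point
# High ITC at a latency means responses are temporally realigned across trials.
print(f"mean ITC before alignment: {itc[:250].mean():.2f}, "
      f"after: {itc[250:].mean():.2f}")
```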


Subject(s)
Alpha Rhythm , Cerebral Cortex/physiology , Psychomotor Performance/physiology , Speech Perception/physiology , Speech/physiology , Voice/physiology , Acoustic Stimulation/methods , Alpha Rhythm/methods , Evoked Potentials/physiology , Female , Humans , Male