Results 1 - 20 of 83
1.
Nat Rev Neurosci ; 20(10): 609-623, 2019 10.
Article in English | MEDLINE | ID: mdl-31467450

ABSTRACT

Humans and other animals use spatial hearing to rapidly localize events in the environment. However, neural encoding of sound location is a complex process involving the computation and integration of multiple spatial cues that are not represented directly in the sensory organ (the cochlea). Our understanding of these mechanisms has increased enormously in the past few years. Current research is focused on the contribution of animal models for understanding human spatial audition, the effects of behavioural demands on neural sound location encoding, the emergence of a cue-independent location representation in the auditory cortex, and the relationship between single-source and concurrent location encoding in complex auditory scenes. Furthermore, computational modelling seeks to unravel how neural representations of sound source locations are derived from the complex binaural waveforms of real-life sounds. In this article, we review and integrate the latest insights from neurophysiological, neuroimaging and computational modelling studies of mammalian spatial hearing. We propose that the cortical representation of sound location emerges from recurrent processing taking place in a dynamic, adaptive network of early (primary) and higher-order (posterior-dorsal and dorsolateral prefrontal) auditory regions. This cortical network accommodates changing behavioural requirements and is especially relevant for processing the location of real-life, complex sounds and complex auditory scenes.


Subject(s)
Acoustic Stimulation/methods; Auditory Cortex/physiology; Auditory Pathways/physiology; Sound Localization/physiology; Animals; Auditory Cortex/diagnostic imaging; Auditory Pathways/diagnostic imaging; Hearing/physiology; Humans
2.
Cereb Cortex ; 33(10): 6207-6227, 2023 05 09.
Article in English | MEDLINE | ID: mdl-36573464

ABSTRACT

To understand auditory cortical processing, the effective connectivity between 15 auditory cortical regions and 360 cortical regions was measured in 171 Human Connectome Project participants, and complemented with functional connectivity and diffusion tractography. 1. A hierarchy of auditory cortical processing was identified from Core regions (including A1) to Belt regions LBelt, MBelt, and 52; then to PBelt; and then to HCP A4. 2. A4 has connectivity to anterior temporal lobe TA2, and to HCP A5, which connects to dorsal-bank superior temporal sulcus (STS) regions STGa, STSda, and STSdp. These STS regions also receive visual inputs about moving faces and objects, which are combined with auditory information to help implement multimodal object identification, such as who is speaking, and what is being said. Consistent with this being a "what" ventral auditory stream, these STS regions then have effective connectivity to TPOJ1, STV, PSL, TGv, TGd, and PGi, which are language-related semantic regions connecting to Broca's area, especially BA45. 3. A4 and A5 also have effective connectivity to MT and MST, which connect to superior parietal regions forming a dorsal auditory "where" stream involved in actions in space. Connections of PBelt, A4, and A5 with BA44 may form a language-related dorsal stream.


Subject(s)
Auditory Cortex; Humans; Auditory Cortex/diagnostic imaging; Temporal Lobe; Parietal Lobe; Semantics; Language
3.
J Neurophysiol ; 129(6): 1344-1358, 2023 06 01.
Article in English | MEDLINE | ID: mdl-37141051

ABSTRACT

How the brain responds temporally and spectrally when we listen to familiar versus unfamiliar musical sequences remains unclear. This study uses EEG techniques to investigate the continuous electrophysiological changes in the human brain during passive listening to familiar and unfamiliar musical excerpts. EEG activity was recorded in 20 participants while they passively listened to 10 s of classical music, and they were then asked to indicate their self-assessment of familiarity. We analyzed the EEG data in two ways: familiarity based on the within-subject design, i.e., averaging trials for each condition and participant, and familiarity based on the same music excerpt, i.e., averaging trials for each condition and music excerpt. By comparing the familiar condition with the unfamiliar condition and the local baseline, sustained low-beta power (12-16 Hz) suppression was observed in both analyses in fronto-central and left frontal electrodes after 800 ms. However, sustained alpha power (8-12 Hz) decreased in fronto-central and posterior electrodes after 850 ms only in the first type of analysis. Our study indicates that listening to familiar music elicits a late sustained spectral response (inhibition of alpha/low-beta power from 800 ms to 10 s). Moreover, the results suggest that alpha suppression reflects increased attention or arousal/engagement while listening to familiar music, whereas low-beta suppression reflects the familiarity effect itself.

NEW & NOTEWORTHY This study differentiates the dynamic temporal-spectral effects during listening to 10 s of familiar music compared with unfamiliar music. This study highlights that listening to familiar music leads to continuous suppression in the alpha and low-beta bands. This suppression starts ∼800 ms after the stimulus onset.
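As a purely illustrative sketch of the kind of band-power comparison this abstract describes (not the authors' actual pipeline; the sampling rate, signals, and parameters below are hypothetical), alpha and low-beta power per condition can be estimated with Welch's method:

```python
# Illustrative sketch only: compare alpha (8-12 Hz) and low-beta (12-16 Hz)
# band power between two EEG conditions via a Welch PSD estimate.
# Sampling rate, trial data, and thresholds are assumptions, not study values.
import numpy as np
from scipy.signal import welch

FS = 250  # sampling rate in Hz (assumed)

def band_power(signal, fs, lo, hi):
    """Mean power spectral density within [lo, hi] Hz (Welch's method)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

rng = np.random.default_rng(0)
familiar = rng.standard_normal(FS * 10)    # stand-in for a 10 s trial average
unfamiliar = rng.standard_normal(FS * 10)

# Suppression would show up as a negative relative change in the familiar condition.
for name, (lo, hi) in {"alpha": (8, 12), "low-beta": (12, 16)}.items():
    p_fam = band_power(familiar, FS, lo, hi)
    p_unf = band_power(unfamiliar, FS, lo, hi)
    print(f"{name}: {(p_fam - p_unf) / p_unf:+.2%} relative change")
```

A real analysis would of course operate on epoched, artifact-cleaned multichannel recordings; the sketch only shows the band-power arithmetic.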


Subject(s)
Music; Humans; Electroencephalography/methods; Brain/physiology; Auditory Perception/physiology; Recognition, Psychology/physiology
4.
Proc Natl Acad Sci U S A ; 117(26): 15242-15252, 2020 06 30.
Article in English | MEDLINE | ID: mdl-32541016

ABSTRACT

Human speech production requires the ability to couple motor actions with their auditory consequences. Nonhuman primates might not have speech because they lack this ability. To address this question, we trained macaques to perform an auditory-motor task producing sound sequences via hand presses on a newly designed device ("monkey piano"). Catch trials were interspersed to ascertain the monkeys were listening to the sounds they produced. Functional MRI was then used to map brain activity while the animals listened attentively to the sound sequences they had learned to produce and to two control sequences, which were either completely unfamiliar or familiar through passive exposure only. All sounds activated auditory midbrain and cortex, but listening to the sequences that were learned by self-production additionally activated the putamen and the hand and arm regions of motor cortex. These results indicate that, in principle, monkeys are capable of forming internal models linking sound perception and production in motor regions of the brain, so this ability is not special to speech in humans. However, the coupling of sounds and actions in nonhuman primates (and the availability of an internal model supporting it) seems not to extend to the upper vocal tract, that is, the supralaryngeal articulators, which are key for the production of speech sounds in humans. The origin of speech may have required the evolution of a "command apparatus" similar to the control of the hand, which was crucial for the evolution of tool use.


Subject(s)
Auditory Perception/physiology; Learning; Macaca mulatta/physiology; Motor Cortex/physiology; Sound; Animals; Brain Mapping; Evoked Potentials, Auditory; Female; Magnetic Resonance Imaging; Male
5.
Hum Brain Mapp ; 42(13): 4134-4143, 2021 09.
Article in English | MEDLINE | ID: mdl-30697878

ABSTRACT

A prominent finding of postmortem and molecular imaging studies on Alzheimer's disease (AD) is the accumulation of neuropathological proteins in brain regions of the default mode network (DMN). Molecular models suggest that the progression of disease proteins depends on the directionality of signaling pathways. At the network level, effective connectivity (EC) reflects the directionality of signaling pathways. We hypothesized a specific pattern of EC in the DMN of patients with AD, related to cognitive impairment. Metabolic connectivity mapping is a novel measure of EC that identifies regions of signaling input based on neuroenergetics. We simultaneously acquired resting-state functional MRI and FDG-PET data from patients with early AD (n = 35) and healthy subjects (n = 18) on an integrated PET/MR scanner. We identified two distinct subnetworks of EC in the DMN of healthy subjects: an anterior part with bidirectional EC between hippocampus and medial prefrontal cortex and a posterior part with predominant input into medial parietal cortex. Patients had reduced input into the medial parietal system and absent input from hippocampus into medial prefrontal cortex (p < 0.05, corrected). In a multiple linear regression with unimodal imaging and EC measures (F(4,25) = 5.63, p = 0.002, r² = 0.47), we found that EC (β = 0.45, p = 0.012) was more strongly associated with cognitive deficits in patients than any of the PET and fMRI measures alone. Our approach indicates specific disruptions of EC in the DMN of patients with AD and might be suitable to test molecular theories about downstream and upstream spreading of neuropathology in AD.


Subject(s)
Alzheimer Disease/diagnostic imaging; Cerebral Cortex; Connectome/methods; Default Mode Network; Magnetic Resonance Imaging/methods; Multimodal Imaging/methods; Positron-Emission Tomography/methods; Aged; Alzheimer Disease/metabolism; Alzheimer Disease/physiopathology; Cerebral Cortex/diagnostic imaging; Cerebral Cortex/metabolism; Cerebral Cortex/physiopathology; Default Mode Network/diagnostic imaging; Default Mode Network/metabolism; Default Mode Network/physiopathology; Humans
6.
Cereb Cortex ; 29(11): 4863-4876, 2019 12 17.
Article in English | MEDLINE | ID: mdl-30843062

ABSTRACT

In the present combined DTI/fMRI study we investigated adaptive plasticity of neural networks involved in controlling spatial and nonspatial auditory working memory in the early blind (EB). In both EB and sighted controls (SC), fractional anisotropy (FA) within the right inferior longitudinal fasciculus correlated positively with accuracy in a one-back sound localization but not sound identification task. The neural tracts passing through the cluster of significant correlation connected auditory and "visual" areas in the right hemisphere. Activity in these areas during both sound localization and identification correlated with FA within the anterior corpus callosum, anterior thalamic radiation, and inferior fronto-occipital fasciculus. In EB, FA in these structures correlated positively with activity in both auditory and "visual" areas, whereas FA in SC correlated positively with activity in auditory and negatively with activity in visual areas. The results indicate that frontal white matter conveys cross-modal suppression of occipital areas in SC, while it mediates coactivation of auditory and reorganized "visual" cortex in EB.


Subject(s)
Auditory Cortex/pathology; Auditory Cortex/physiopathology; Auditory Perception/physiology; Blindness/pathology; Blindness/physiopathology; Visual Cortex/pathology; Visual Cortex/physiology; Adult; Brain Mapping; Diffusion Magnetic Resonance Imaging; Female; Humans; Male; Memory, Short-Term/physiology; Middle Aged; Neural Pathways/pathology; Neural Pathways/physiopathology; Neuronal Plasticity; Sound Localization/physiology; Space Perception/physiology; White Matter/pathology; White Matter/physiopathology
7.
J Neurosci ; 38(40): 8574-8587, 2018 10 03.
Article in English | MEDLINE | ID: mdl-30126968

ABSTRACT

Spatial hearing sensitivity in humans is dynamic and task-dependent, but the mechanisms in human auditory cortex that enable dynamic sound location encoding remain unclear. Using functional magnetic resonance imaging (fMRI), we assessed how active behavior affects encoding of sound location (azimuth) in primary auditory cortical areas and planum temporale (PT). According to the hierarchical model of auditory processing and cortical functional specialization, PT is implicated in sound location ("where") processing. Yet, our results show that spatial tuning profiles in primary auditory cortical areas (left primary core and right caudo-medial belt) sharpened during a sound localization ("where") task compared with a sound identification ("what") task. In contrast, spatial tuning in PT was sharp but did not vary with task performance. We further applied a population pattern decoder to the measured fMRI activity patterns, which confirmed the task-dependent effects in the left core: sound location estimates from fMRI patterns measured during active sound localization were most accurate. In PT, decoding accuracy was not modulated by task performance. These results indicate that changes of population activity in human primary auditory areas reflect dynamic and task-dependent processing of sound location. As such, our findings suggest that the hierarchical model of auditory processing may need to be revised to include an interaction between primary and functionally specialized areas depending on behavioral requirements.

SIGNIFICANCE STATEMENT According to a purely hierarchical view, cortical auditory processing consists of a series of analysis stages from sensory (acoustic) processing in primary auditory cortex to specialized processing in higher-order areas. Posterior-dorsal cortical auditory areas, planum temporale (PT) in humans, are considered to be functionally specialized for spatial processing. However, this model is based mostly on passive listening studies. Our results provide compelling evidence that active behavior (sound localization) sharpens spatial selectivity in primary auditory cortex, whereas spatial tuning in functionally specialized areas (PT) is narrow but task-invariant. These findings suggest that the hierarchical view of cortical functional specialization needs to be extended: our data indicate that active behavior involves feedback projections from higher-order regions to primary auditory cortex.
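The population pattern decoder this abstract mentions can be caricatured on synthetic data. This is a hedged sketch only: the azimuth conditions, voxel counts, noise levels, and nearest-centroid rule below are illustrative assumptions, not the study's actual decoder or data.

```python
# Toy population pattern decoder: estimate sound azimuth from simulated
# multi-voxel activity patterns with a nearest-centroid rule.
# All numbers here are made up for illustration.
import numpy as np

rng = np.random.default_rng(2)
azimuths = [-90, -45, 0, 45, 90]          # hypothetical azimuth conditions (degrees)
n_voxels, n_train_trials = 50, 20

# Each azimuth gets a fixed voxel "tuning" template; trials add noise to it.
templates = {az: rng.standard_normal(n_voxels) for az in azimuths}

def simulate(az):
    """One noisy trial pattern for a given azimuth."""
    return templates[az] + 0.5 * rng.standard_normal(n_voxels)

# Training centroids: average of noisy trials per condition.
train = {az: np.mean([simulate(az) for _ in range(n_train_trials)], axis=0)
         for az in azimuths}

def decode(pattern):
    """Return the azimuth whose training centroid is closest (Euclidean)."""
    return min(train, key=lambda az: np.linalg.norm(pattern - train[az]))

hits = sum(decode(simulate(az)) == az for az in azimuths for _ in range(10))
print(f"decoding accuracy on held-out trials: {hits / 50:.0%}")
```

In the study's setting, sharper spatial tuning during active localization would translate into more separable patterns and hence higher decoding accuracy, which is the effect reported for the left core.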


Subject(s)
Auditory Cortex/physiology; Sound Localization/physiology; Acoustic Stimulation; Adult; Auditory Pathways/physiology; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Male; Young Adult
8.
Proc Natl Acad Sci U S A ; 113(2): 428-33, 2016 Jan 12.
Article in English | MEDLINE | ID: mdl-26712010

ABSTRACT

Directionality of signaling among brain regions provides essential information about human cognition and disease states. Assessing such effective connectivity (EC) across brain states using functional magnetic resonance imaging (fMRI) alone has proven difficult, however. We propose a novel measure of EC, termed metabolic connectivity mapping (MCM), that integrates undirected functional connectivity (FC) with local energy metabolism from fMRI and positron emission tomography (PET) data acquired simultaneously. This method is based on the concept that most energy required for neuronal communication is consumed postsynaptically, i.e., at the target neurons. We investigated MCM and possible changes in EC within the physiological range using "eyes open" versus "eyes closed" conditions in healthy subjects. Independent of condition, MCM reliably detected stable and bidirectional communication between early and higher visual regions. Moreover, we found stable top-down signaling from a frontoparietal network including frontal eye fields. In contrast, we found additional top-down signaling from all major clusters of the salience network to early visual cortex only in the eyes open condition. MCM revealed consistent bidirectional and unidirectional signaling across the entire cortex, along with prominent changes in network interactions across two simple brain states. We propose MCM as a novel approach for inferring EC from neuronal energy metabolism that is ideally suited to study signaling hierarchies in the brain and their defects in brain disorders.
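The core idea behind metabolic connectivity mapping (MCM), that postsynaptic energy demand marks the target of a connection, can be illustrated with a toy computation. This is a conceptual sketch on synthetic numbers, not the published algorithm; region names, voxel counts, and the coupling strength are assumptions.

```python
# Conceptual MCM illustration: a connection's *target* is the region whose
# local energy metabolism covaries spatially with its functional-connectivity
# (FC) map. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 200

# Voxel-wise FC map of region B (seeded from region A) and voxel-wise
# glucose metabolism (FDG uptake) in B -- coupled by construction.
fc_map_B = rng.standard_normal(n_voxels)
metabolism_B = 0.8 * fc_map_B + 0.2 * rng.standard_normal(n_voxels)

# Same quantities for the reverse direction, here left uncoupled.
fc_map_A = rng.standard_normal(n_voxels)
metabolism_A = rng.standard_normal(n_voxels)

def mcm_score(fc_map, metabolism):
    """Spatial correlation between an FC map and local metabolism."""
    return np.corrcoef(fc_map, metabolism)[0, 1]

# A high score in B but not in A would be read as signaling A -> B.
print("evidence for A -> B:", round(mcm_score(fc_map_B, metabolism_B), 2))
print("evidence for B -> A:", round(mcm_score(fc_map_A, metabolism_A), 2))
```

The published method works on simultaneously acquired fMRI and FDG-PET volumes with appropriate preprocessing and statistics; the sketch only conveys why metabolism at the target end breaks the symmetry of undirected FC.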


Subject(s)
Brain Mapping; Brain/physiology; Metabolomics; Rest/physiology; Brain/diagnostic imaging; Female; Fluorodeoxyglucose F18; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Positron-Emission Tomography; Prefrontal Cortex/physiology
9.
J Neurosci ; 36(4): 1416-28, 2016 Jan 27.
Article in English | MEDLINE | ID: mdl-26818527

ABSTRACT

Functional and anatomical studies have clearly demonstrated that auditory cortex is populated by multiple subfields. However, functional characterization of those fields has been largely the domain of animal electrophysiology, limiting the extent to which human and animal research can inform each other. In this study, we used high-resolution functional magnetic resonance imaging to characterize human auditory cortical subfields using a variety of low-level acoustic features in the spectral and temporal domains. Specifically, we show that topographic gradients of frequency preference, or tonotopy, extend along two axes in human auditory cortex, thus reconciling historical accounts of a tonotopic axis oriented medial to lateral along Heschl's gyrus and more recent findings emphasizing tonotopic organization along the anterior-posterior axis. Contradictory findings regarding topographic organization according to temporal modulation rate in acoustic stimuli, or "periodotopy," are also addressed. Although isolated subregions show a preference for high rates of amplitude-modulated white noise (AMWN) in our data, large-scale "periodotopic" organization was not found. Organization by AM rate was correlated with dominant pitch percepts in AMWN in many regions. In short, our data expose early auditory cortex chiefly as a frequency analyzer, and spectral frequency, as imposed by the sensory receptor surface in the cochlea, seems to be the dominant feature governing large-scale topographic organization across human auditory cortex.

SIGNIFICANCE STATEMENT: In this study, we examine the nature of topographic organization in human auditory cortex with fMRI. Topographic organization by spectral frequency (tonotopy) extended in two directions: medial to lateral, consistent with early neuroimaging studies, and anterior to posterior, consistent with more recent reports. Large-scale organization by rates of temporal modulation (periodotopy) was correlated with confounding spectral content of amplitude-modulated white-noise stimuli. Together, our results suggest that the organization of human auditory cortex is driven primarily by its response to spectral acoustic features, and large-scale periodotopy spanning across multiple regions is not supported. This fundamental information regarding the functional organization of early auditory cortex will inform our growing understanding of speech perception and the processing of other complex sounds.


Subject(s)
Auditory Cortex/physiology; Auditory Perception/physiology; Brain Mapping; Acoustics; Adult; Auditory Cortex/blood supply; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Neuropsychological Tests; Oxygen/blood; Sound; Spectrum Analysis; Time Factors; Young Adult
10.
J Acoust Soc Am ; 142(4): 1757, 2017 10.
Article in English | MEDLINE | ID: mdl-29092572

ABSTRACT

Meaningful sounds represent the majority of sounds that humans hear and process in everyday life. Yet studies of human sound localization mainly use artificial stimuli such as clicks, pure tones, and noise bursts. The present study investigated the influence of behavioral relevance, sound category, and acoustic properties on the localization of complex, meaningful sounds in the horizontal plane. Participants localized vocalizations and traffic sounds with two levels of behavioral relevance (low and high) within each category, as well as amplitude-modulated tones. Results showed a small but significant effect of behavioral relevance: localization acuity was higher for complex sounds with a high level of behavioral relevance at several target locations. The data also showed category-specific effects: localization biases were lower, and localization precision higher, for vocalizations than for traffic sounds in central space. Several acoustic parameters influenced sound localization performance as well. Correcting localization responses for front-back reversals reduced the overall variability across sounds, but behavioral relevance and sound category still had a modulatory effect on sound localization performance in central auditory space. The results thus demonstrate that spatial hearing performance for complex sounds is influenced not only by acoustic characteristics, but also by sound category and behavioral relevance.


Subject(s)
Acoustic Stimulation/methods; Cues; Noise, Transportation; Psychoacoustics; Sound Localization; Voice; Adult; Female; Humans; Male; Young Adult
11.
Neuroimage ; 129: 214-223, 2016 Apr 01.
Article in English | MEDLINE | ID: mdl-26774614

ABSTRACT

Efficient speech perception requires the mapping of highly variable acoustic signals to distinct phonetic categories. How the brain overcomes this many-to-one mapping problem has remained unresolved. To infer the cortical location, latency, and dependency on attention of categorical speech sound representations in the human brain, we measured stimulus-specific adaptation of neuromagnetic responses to sounds from a phonetic continuum. The participants attended to the sounds while performing a non-phonetic listening task and, in a separate recording condition, ignored the sounds while watching a silent film. Neural adaptation indicative of phoneme category selectivity was found only during the attentive condition in the pars opercularis (POp) of the left inferior frontal gyrus, where the degree of selectivity correlated with the ability of the participants to categorize the phonetic stimuli. Importantly, these category-specific representations were activated at an early latency of 115-140 ms, which is compatible with the speed of perceptual phonetic categorization. Further, concurrent functional connectivity was observed between POp and posterior auditory cortical areas. These novel findings suggest that when humans attend to speech, the left POp mediates phonetic categorization through integration of auditory and motor information via the dorsal auditory stream.


Subject(s)
Prefrontal Cortex/physiology; Speech Perception/physiology; Acoustic Stimulation; Adult; Female; Humans; Magnetoencephalography; Male; Signal Processing, Computer-Assisted; Young Adult
12.
Hum Brain Mapp ; 37(8): 2717-35, 2016 08.
Article in English | MEDLINE | ID: mdl-27091485

ABSTRACT

Tinnitus is an increasingly common disorder in which patients experience phantom auditory sensations, usually ringing or buzzing in the ear. Tinnitus pathophysiology has been repeatedly shown to involve both auditory and non-auditory brain structures, making network-level studies of tinnitus critical. In this magnetic resonance imaging (MRI) study, two resting-state functional connectivity (RSFC) approaches were used to better understand functional network disturbances in tinnitus. First, we demonstrated tinnitus-related reductions in RSFC between specific brain regions and resting-state networks (RSNs), defined by independent components analysis (ICA) and chosen for their overlap with structures known to be affected in tinnitus. Then, we restricted ICA to data from tinnitus patients, and identified one RSN not apparent in control data. This tinnitus RSN included auditory-sensory regions like inferior colliculus and medial Heschl's gyrus, as well as classically non-auditory regions like the mediodorsal nucleus of the thalamus, striatum, lateral prefrontal, and orbitofrontal cortex. Notably, patients' reported tinnitus loudness was positively correlated with RSFC between the mediodorsal nucleus and the tinnitus RSN, indicating that this network may underlie the auditory-sensory experience of tinnitus. These data support the idea that tinnitus involves network dysfunction, and further stress the importance of communication between auditory-sensory and fronto-striatal circuits in tinnitus pathophysiology.


Subject(s)
Brain/physiopathology; Neural Pathways/physiopathology; Tinnitus/physiopathology; Adult; Aged; Brain Mapping/methods; Female; Humans; Magnetic Resonance Imaging; Male; Middle Aged
13.
Cereb Cortex ; 25(8): 2035-48, 2015 Aug.
Article in English | MEDLINE | ID: mdl-24518755

ABSTRACT

Early blindness results in both structural and functional changes of the brain. However, these changes have rarely been studied in relation to each other. We measured alterations in cortical thickness (CT) caused by early visual deprivation and their relationship with cortical activity. Structural and functional magnetic resonance imaging was performed in 12 early blind (EB) humans and 12 sighted controls (SC). Experimental conditions included one-back tasks for auditory localization and pitch identification, and a simple sound-detection task. Structural and functional data were analyzed in a whole-brain approach and within anatomically defined regions of interest in sensory areas of the spared (auditory) and deprived (visual) modalities. Functional activation during sound-localization or pitch-identification tasks correlated negatively with CT in occipital areas of EB (calcarine sulcus, lingual gyrus, superior and middle occipital gyri, and cuneus) and in nonprimary auditory areas of SC. These results suggest a link between CT and activation and demonstrate that the relationship between cortical structure and function may depend on early sensory experience, probably via selective pruning of exuberant connections. Activity-dependent effects of early sensory deprivation and long-term practice are superimposed on normal maturation and aging. Together these processes shape the relationship between brain structure and function over the lifespan.


Subject(s)
Auditory Perception/physiology; Blindness/pathology; Blindness/physiopathology; Cerebral Cortex/pathology; Cerebral Cortex/physiopathology; Acoustic Stimulation; Adult; Age of Onset; Aging/pathology; Aging/physiology; Blindness/diagnostic imaging; Brain Mapping; Cerebral Cortex/diagnostic imaging; Cerebrovascular Circulation/physiology; Female; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Neuronal Plasticity; Neuropsychological Tests; Organ Size; Oxygen/blood; Tomography, X-Ray Computed
14.
Proc Natl Acad Sci U S A ; 110(19): 7892-7, 2013 May 07.
Article in English | MEDLINE | ID: mdl-23610391

ABSTRACT

Receptive fields (RFs) of neurons in primary visual cortex have traditionally been subdivided into two major classes: "simple" and "complex" cells. Simple cells were originally defined by the existence of segregated subregions within their RF that respond to either the on- or offset of a light bar and by spatial summation within each of these regions, whereas complex cells had ON and OFF regions that were coextensive in space [Hubel DH, et al. (1962) J Physiol 160:106-154]. Although other definitions based on the linearity of response modulation have been proposed later [Movshon JA, et al. (1978) J Physiol 283:53-77; Skottun BC, et al. (1991) Vision Res 31(7-8):1079-1086], the segregation of ON and OFF subregions has remained an important criterion for the distinction between simple and complex cells. Here we report that response profiles of neurons in primary auditory cortex of monkeys show a similar distinction: one group of cells has segregated ON and OFF subregions in frequency space; and another group shows ON and OFF responses within largely overlapping response profiles. This observation is intriguing for two reasons: (i) spectrotemporal dissociation in the auditory domain provides a basic neural mechanism for the segregation of sounds, a fundamental prerequisite for auditory figure-ground discrimination; and (ii) the existence of similar types of RF organization in visual and auditory cortex would support the existence of a common canonical processing algorithm within cortical columns.


Subject(s)
Auditory Cortex/anatomy & histology; Auditory Cortex/cytology; Neurons/physiology; Acoustics; Action Potentials; Algorithms; Animals; Auditory Cortex/physiology; Auditory Perception/physiology; Behavior, Animal; Cerebral Cortex/metabolism; Electrophysiology; Evoked Potentials, Visual; Hearing; Macaca mulatta; Magnetic Resonance Imaging; Time Factors; Vision, Ocular
15.
Eur J Neurosci ; 41(5): 579-85, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25728177

ABSTRACT

A comparative view of the brain, comparing related functions across species and sensory systems, offers a number of advantages. In particular, it allows separation of the formal purpose of a model structure from its implementation in specific brains. Models of auditory cortical processing can be conceived by analogy to the visual cortex, incorporating neural mechanisms that are found in both the visual and auditory systems. Examples of such canonical features at the columnar level are direction selectivity, size/bandwidth selectivity, and receptive fields with segregated vs. overlapping ON and OFF subregions. On a larger scale, parallel processing pathways have been envisioned that represent the two main facets of sensory perception: (i) identification of objects; and (ii) processing of space. Expanding this model in terms of sensorimotor integration and control offers an overarching view of cortical function independently of sensory modality.


Subject(s)
Auditory Cortex/physiology; Connectome; Visual Cortex/physiology; Animals; Humans; Primates
16.
Proc Natl Acad Sci U S A ; 109(8): E505-14, 2012 Feb 21.
Article in English | MEDLINE | ID: mdl-22308358

ABSTRACT

Spoken word recognition requires complex, invariant representations. Using a meta-analytic approach incorporating more than 100 functional imaging experiments, we show that preference for complex sounds emerges in the human auditory ventral stream in a hierarchical fashion, consistent with nonhuman primate electrophysiology. Examining speech sounds, we show that activation associated with the processing of short-timescale patterns (i.e., phonemes) is consistently localized to left mid-superior temporal gyrus (STG), whereas activation associated with the integration of phonemes into temporally complex patterns (i.e., words) is consistently localized to left anterior STG. Further, we show left mid- to anterior STG is reliably implicated in the invariant representation of phonetic forms and that this area also responds preferentially to phonetic sounds, above artificial control sounds or environmental sounds. Together, this shows increasing encoding specificity and invariance along the auditory ventral stream for temporally complex speech sounds.


Subject(s)
Auditory Cortex/physiology; Phonetics; Animals; Humans; Magnetic Resonance Imaging
17.
J Neurosci ; 33(12): 5208-15, 2013 Mar 20.
Article in English | MEDLINE | ID: mdl-23516286

ABSTRACT

Debates about motor theories of speech perception have recently been reignited by a burst of reports implicating premotor cortex (PMC) in speech perception. Often, however, these debates conflate perceptual and decision processes. Evidence that PMC activity correlates with task difficulty and subject performance suggests that PMC might be recruited, in certain cases, to facilitate category judgments about speech sounds (rather than speech perception, which involves decoding of sounds). However, it remains unclear whether PMC does, indeed, exhibit neural selectivity that is relevant for speech decisions. Further, it is unknown whether PMC activity in such cases reflects input via the dorsal or ventral auditory pathway, and whether PMC processing of speech is automatic or task-dependent. In a novel modified categorization paradigm, we presented human subjects with paired speech sounds from a phonetic continuum but diverted their attention from phoneme category using a challenging dichotic listening task. Using fMRI rapid adaptation to probe neural selectivity, we observed acoustic-phonetic selectivity in left anterior and left posterior auditory cortical regions. Conversely, we observed phoneme-category selectivity in left PMC that correlated with explicit phoneme-categorization performance measured after scanning, suggesting that PMC recruitment can account for performance on phoneme-categorization tasks. Structural equation modeling revealed connectivity from posterior, but not anterior, auditory cortex to PMC, suggesting a dorsal route for auditory input to PMC. Our results provide evidence for an account of speech processing in which the dorsal stream mediates automatic sensorimotor integration of speech and may be recruited to support speech decision tasks.


Subject(s)
Auditory Cortex/physiology, Magnetic Resonance Imaging, Motor Cortex/physiology, Phonetics, Speech Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Brain Mapping/methods, Discrimination, Psychological/physiology, Female, Functional Laterality/physiology, Humans, Male, Models, Neurological, Neural Pathways/physiology, Young Adult
18.
J Neurophysiol ; 111(8): 1671-85, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24501260

ABSTRACT

The respective roles of ventral and dorsal cortical processing streams are still under discussion in both vision and audition. We characterized neural responses in the caudal auditory belt cortex, an early dorsal stream region of the macaque. We found fast neural responses with elevated temporal precision as well as neurons selective to sound location. These populations were partly segregated: Neurons in a caudomedial area more precisely followed temporal stimulus structure but were less selective to spatial location. Response latencies in this area were even shorter than in primary auditory cortex. Neurons in a caudolateral area showed higher selectivity for sound source azimuth and elevation, but responses were slower and matching to temporal sound structure was poorer. In contrast to the primary area and other regions studied previously, latencies in the caudal belt neurons were not negatively correlated with best frequency. Our results suggest that two functional substreams may exist within the auditory dorsal stream.


Subject(s)
Auditory Cortex/physiology, Auditory Perception/physiology, Neurons/physiology, Space Perception/physiology, Acoustic Stimulation, Animals, Discrimination, Psychological, Macaca mulatta, Male, Time Factors
19.
Hum Brain Mapp ; 35(5): 2233-52, 2014 May.
Article in English | MEDLINE | ID: mdl-23913818

ABSTRACT

Neuroimaging studies investigating the voluntary (top-down) control of attention largely agree that this process recruits several frontal and parietal brain regions. Since most studies used attention tasks requiring several higher-order cognitive functions (e.g. working memory, semantic processing, temporal integration, spatial orienting) as well as different attentional mechanisms (attention shifting, distractor filtering), it is unclear what exactly the observed frontoparietal activations reflect. The present functional magnetic resonance imaging study investigated, within the same participants, signal changes in (1) a "Simple Attention" task in which participants attended to a single melody, (2) a "Selective Attention" task in which they simultaneously ignored another melody, and (3) a "Beep Monitoring" task in which participants listened in silence for a faint beep. Compared to resting conditions with identical stimulation, all tasks produced robust activation increases in auditory cortex, cross-modal inhibition in visual and somatosensory cortex, and decreases in the default mode network, indicating that participants were indeed focusing their attention on the auditory domain. However, signal increases in frontal and parietal brain areas were only observed for tasks 1 and 2, but completely absent for task 3. These results lead to the following conclusions: under most conditions, frontoparietal activations are crucial for attention since they subserve higher-order cognitive functions inherently related to attention. However, under circumstances that minimize other demands, nonspatial auditory attention in the absence of stimulation can be maintained without concurrent frontal or parietal activations.


Subject(s)
Attention/physiology, Brain/physiology, Space Perception/physiology, Acoustic Stimulation, Adult, Brain/blood supply, Brain Mapping, Female, Humans, Image Processing, Computer-Assisted, Linear Models, Magnetic Resonance Imaging, Male, Oxygen/blood, Photic Stimulation, Statistics as Topic, Young Adult
20.
Hum Brain Mapp ; 35(11): 5587-605, 2014 Nov.
Article in English | MEDLINE | ID: mdl-24996043

ABSTRACT

The brain improves speech processing through the integration of audiovisual (AV) signals. Situations involving AV speech integration may be crudely dichotomized into those where auditory and visual inputs contain (1) equivalent, complementary signals (validating AV speech) or (2) inconsistent, different signals (conflicting AV speech). This simple framework may allow the systematic examination of broad commonalities and differences between AV neural processes engaged by various experimental paradigms frequently used to study AV speech integration. We conducted an activation likelihood estimation meta-analysis of 22 functional imaging studies comprising 33 experiments, 311 subjects, and 347 foci examining "conflicting" versus "validating" AV speech. Experimental paradigms included content congruency, timing synchrony, and perceptual measures, such as the McGurk effect or synchrony judgments, across AV speech stimulus types (sublexical to sentence). Colocalization of conflicting AV speech experiments revealed consistency across at least two contrast types (e.g., synchrony and congruency) in a network of dorsal stream regions in the frontal, parietal, and temporal lobes. There was consistency across all contrast types (synchrony, congruency, and percept) in the bilateral posterior superior/middle temporal cortex. Although fewer studies were available, validating AV speech experiments were localized to other regions, such as ventral stream visual areas in the occipital and inferior temporal cortex. These results suggest that while equivalent, complementary AV speech signals may evoke activity in regions related to the corroboration of sensory input, conflicting AV speech signals recruit widespread dorsal stream areas likely involved in the resolution of conflicting sensory signals.


Subject(s)
Auditory Perception/physiology, Brain Mapping, Brain/physiology, Language, Visual Perception/physiology, Acoustic Stimulation, Databases, Factual/statistics & numerical data, Humans, Neuroimaging, Photic Stimulation