Results 1 - 12 of 12
1.
J Neurophysiol; 127(6): 1547-1563, 2022 Jun 01.
Article in English | MEDLINE | ID: mdl-35507478

ABSTRACT

Sounds enhance our ability to detect, localize, and respond to co-occurring visual targets. Research suggests that sounds improve visual processing by resetting the phase of ongoing oscillations in visual cortex. However, it remains unclear what information is relayed from the auditory system to visual areas and whether sounds modulate visual activity even in the absence of visual stimuli (e.g., during passive listening). Using intracranial electroencephalography (iEEG) in humans, we examined the sensitivity of visual cortex to three forms of auditory information during a passive listening task: auditory onset responses, auditory offset responses, and rhythmic entrainment to sounds. Because some auditory neurons respond to both sound onsets and offsets, visual timing and duration processing may benefit from each. In addition, if auditory entrainment information is relayed to visual cortex, it could support the processing of complex stimulus dynamics that are aligned between auditory and visual stimuli. Results demonstrate that in visual cortex, amplitude-modulated sounds elicited transient onset and offset responses in multiple areas, but no entrainment to sound modulation frequencies. These findings suggest that activity in visual cortex (as measured with iEEG in response to auditory stimuli) may not be affected by temporally fine-grained auditory stimulus dynamics during passive listening (though it remains possible that this signal may be observable with simultaneous auditory-visual stimuli). Moreover, auditory responses were maximal in low-level visual cortex, potentially implicating a direct pathway for rapid interactions between auditory and visual cortices. This mechanism may facilitate perception by time-locking visual computations to environmental events marked by auditory discontinuities.

NEW & NOTEWORTHY Using intracranial electroencephalography (iEEG) in humans during a passive listening task, we demonstrate that sounds modulate activity in visual cortex at both the onset and offset of sounds, which likely supports visual timing and duration processing. However, more complex auditory rate information did not affect visual activity. These findings are based on one of the largest multisensory iEEG studies to date and reveal the type of information transmitted between auditory and visual regions.
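A minimal sketch of how entrainment to a sound's modulation rate is commonly tested in data like these, using inter-trial phase coherence (ITC) at the amplitude-modulation frequency; the sampling rate, modulation rate, and stand-in data are illustrative assumptions, not values from the study:

```python
import numpy as np

fs = 1000.0            # sampling rate in Hz (assumed)
am_freq = 3.0          # amplitude-modulation rate of the sound in Hz (assumed)
n_trials, n_samples = 60, 2000

rng = np.random.default_rng(0)
# Stand-in data: trials x samples of iEEG from one visual-cortex electrode.
data = rng.standard_normal((n_trials, n_samples))

freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
spectra = np.fft.rfft(data, axis=1)

# ITC: length of the mean unit phase vector across trials (0 = random phase,
# 1 = perfect phase alignment across trials, i.e., entrainment).
itc = np.abs(np.mean(spectra / np.abs(spectra), axis=0))

bin_am = np.argmin(np.abs(freqs - am_freq))
print(f"ITC at {freqs[bin_am]:.2f} Hz: {itc[bin_am]:.3f}")
# In practice this value would be compared against neighboring frequency
# bins or a trial-shuffled surrogate distribution to assess significance.
```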


Subject(s)
Auditory Cortex, Visual Cortex, Acoustic Stimulation/methods, Auditory Cortex/physiology, Auditory Perception/physiology, Humans, Sound, Visual Cortex/physiology, Visual Perception/physiology
2.
Sci Rep; 11(1): 23052, 2021 Nov 29.
Article in English | MEDLINE | ID: mdl-34845325

ABSTRACT

Multisensory stimuli speed behavioral responses, but the mechanisms subserving these effects remain disputed. Historically, the observation that multisensory reaction times (RTs) outpace models assuming independent sensory channels has been taken as evidence for multisensory integration (the "redundant target effect"; RTE). However, this interpretation has been challenged by alternative explanations based on stimulus sequence effects, RT variability, and/or negative correlations in unisensory processing. To clarify the mechanisms subserving the RTE, we collected RTs from 78 undergraduates in a multisensory simple RT task. Based on previous neurophysiological findings, we hypothesized that the RTE was unlikely to reflect these alternative mechanisms, and more likely reflected pre-potentiation of sensory responses through crossmodal phase-resetting. Contrary to accounts based on stimulus sequence effects, we found that preceding stimuli explained only 3-9% of the variance in apparent RTEs. Comparing three plausible evidence accumulator models, we found that multisensory RT distributions were best explained by increased sensory evidence at stimulus onset. Because crossmodal phase-resetting increases cortical excitability before sensory input arrives, these results are consistent with a mechanism based on pre-potentiation through phase-resetting. Mathematically, this model entails increasing the prior log-odds of stimulus presence, providing a potential link between neurophysiological, behavioral, and computational accounts of multisensory interactions.
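As a hedged illustration of the "increased sensory evidence at stimulus onset" account (not the specific accumulator models fit in the study), here is a one-boundary diffusion model in which the multisensory condition simply starts with more accumulated evidence, shifting the whole RT distribution earlier; all parameter values are placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_rts(start, drift=1.5, noise=0.5, threshold=1.0,
                 dt=0.001, n_trials=2000, t0=0.2, max_steps=3000):
    """One-boundary diffusion: evidence begins at `start` and drifts toward
    `threshold`; RT = decision time + non-decision time t0 (in seconds)."""
    rts = np.full(n_trials, np.nan)
    for i in range(n_trials):
        steps = drift * dt + noise * np.sqrt(dt) * rng.standard_normal(max_steps)
        evidence = start + np.cumsum(steps)
        crossed = np.nonzero(evidence >= threshold)[0]
        if crossed.size:
            rts[i] = t0 + (crossed[0] + 1) * dt
    return rts

uni = simulate_rts(start=0.0)    # unisensory: neutral starting evidence
multi = simulate_rts(start=0.4)  # multisensory: raised prior log-odds (assumed)

print(f"median RT, unisensory:   {np.nanmedian(uni):.3f} s")
print(f"median RT, multisensory: {np.nanmedian(multi):.3f} s")
```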


Subject(s)
Acoustic Stimulation, Auditory Perception, Behavior, Photic Stimulation, Reaction Time/physiology, Visual Perception, Adolescent, Adult, Computer Simulation, Humans, Models, Neurological, Probability, Reproducibility of Results, Time Factors, Young Adult
3.
Eur J Neurosci; 54(9): 7301-7317, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34587350

ABSTRACT

Speech perception is a central component of social communication. Although principally an auditory process, accurate speech perception in everyday settings is supported by meaningful information extracted from visual cues. Visual speech modulates activity in cortical areas subserving auditory speech perception, including the superior temporal gyrus (STG). However, it is unknown whether visual modulation of auditory processing is a unitary phenomenon or, rather, consists of multiple functionally distinct processes. To explore this question, we examined neural responses to audiovisual speech measured from intracranially implanted electrodes in 21 patients with epilepsy. We found that visual speech modulated auditory processes in the STG in multiple ways, eliciting temporally and spatially distinct patterns of activity that differed across frequency bands. In the theta band, visual speech suppressed the auditory response from -93 to 500 ms relative to auditory speech onset, most strongly in the posterior STG. In the beta band, suppression was seen in the anterior STG from -311 to -195 ms and in the middle STG from -195 to 235 ms, relative to auditory speech onset. In high gamma, visual speech enhanced the auditory response from -45 to 24 ms only in the posterior STG. We interpret the visual-induced changes prior to speech onset as reflecting crossmodal prediction of speech signals. In contrast, modulations after sound onset may reflect a decrease in sustained feedforward auditory activity. These results are consistent with models that posit multiple distinct mechanisms supporting audiovisual speech perception.
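For orientation, a schematic sketch (not the authors' pipeline) of how band-specific responses like these are typically extracted from intracranial recordings: band-pass filter the signal per band, then take the Hilbert amplitude envelope. The sampling rate, band edges, and data are placeholders:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0  # sampling rate in Hz (assumed)
bands = {"theta": (4, 8), "beta": (13, 30), "high_gamma": (70, 150)}

rng = np.random.default_rng(2)
signal = rng.standard_normal(int(5 * fs))  # stand-in 5 s iEEG trace

envelopes = {}
for name, (lo, hi) in bands.items():
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, signal)            # zero-phase band-pass
    envelopes[name] = np.abs(hilbert(filtered))  # instantaneous amplitude

# Visual-speech modulation would then be assessed by comparing these
# envelopes across conditions in windows relative to auditory speech onset.
for name, env in envelopes.items():
    print(f"{name}: mean envelope {env.mean():.3f}")
```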


Subject(s)
Auditory Cortex, Speech Perception, Acoustic Stimulation, Auditory Perception, Humans, Speech, Visual Perception
4.
Cereb Cortex; 31(8): 3881-3898, 2021 Jul 05.
Article in English | MEDLINE | ID: mdl-33791797

ABSTRACT

Aging is associated with widespread alterations in cerebral white matter (WM). Most prior studies of age differences in WM have used diffusion tensor imaging (DTI), but typical DTI metrics (e.g., fractional anisotropy; FA) can reflect multiple neurobiological features, making interpretation challenging. Here, we used fixel-based analysis (FBA) to investigate age-related WM differences observed using DTI in a sample of 45 older and 25 younger healthy adults. Age-related FA differences were widespread but were strongly associated with differences in multi-fiber complexity (CX), suggesting that they reflected differences in crossing fibers in addition to structural differences in individual fiber segments. FBA also revealed a frontolimbic locus of age-related effects and provided insights into distinct microstructural changes underlying them. Specifically, age differences in fiber density were prominent in fornix, bilateral anterior internal capsule, forceps minor, body of the corpus callosum, and corticospinal tract, while age differences in fiber cross section were largest in cingulum bundle and forceps minor. These results provide novel insights into specific structural differences underlying major WM differences associated with aging.
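For reference, the standard fractional anisotropy formula that FBA is contrasted with here; the example eigenvalues are illustrative and show why a crossing-fiber voxel depresses FA even when each fiber population is intact:

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three diffusion-tensor eigenvalues (standard formula)."""
    l1, l2, l3 = evals
    md = (l1 + l2 + l3) / 3.0  # mean diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1**2 + l2**2 + l3**2
    return np.sqrt(1.5 * num / den)

# A coherent single-fiber voxel vs. an equal mix of two crossing fibers:
print(fractional_anisotropy([1.7e-3, 0.3e-3, 0.3e-3]))  # high FA (~0.80)
print(fractional_anisotropy([1.0e-3, 1.0e-3, 0.3e-3]))  # crossing: FA ~0.48
```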


Subject(s)
Aging/physiology, Diffusion Tensor Imaging/methods, Image Processing, Computer-Assisted/methods, White Matter/diagnostic imaging, White Matter/growth & development, Adolescent, Adult, Aged, Aged, 80 and over, Anatomy, Cross-Sectional, Cerebral Cortex/cytology, Cerebral Cortex/diagnostic imaging, Cerebral Cortex/growth & development, Female, Humans, Male, Middle Aged, Nerve Fibers, Pyramidal Tracts, White Matter/cytology, Young Adult
5.
Proc Natl Acad Sci U S A; 117(29): 16920-16927, 2020 Jul 21.
Article in English | MEDLINE | ID: mdl-32632010

ABSTRACT

Visual speech facilitates auditory speech perception, but the visual cues responsible for these benefits and the information they provide remain unclear. Low-level models emphasize basic temporal cues provided by mouth movements, but these impoverished signals may not fully account for the richness of auditory information provided by visual speech. High-level models posit interactions among abstract categorical (i.e., phonemes/visemes) or amodal (e.g., articulatory) speech representations, but require lossy remapping of speech signals onto abstracted representations. Because visible articulators shape the spectral content of speech, we hypothesized that the perceptual system might exploit natural correlations between midlevel visual (oral deformations) and auditory speech features (frequency modulations) to extract detailed spectrotemporal information from visual speech without employing high-level abstractions. Consistent with this hypothesis, we found that the time-frequency dynamics of oral resonances (formants) could be predicted with unexpectedly high precision from the changing shape of the mouth during speech. When isolated from other speech cues, speech-based shape deformations improved perceptual sensitivity for corresponding frequency modulations, suggesting that listeners could exploit this cross-modal correspondence to facilitate perception. To test whether this type of correspondence could improve speech comprehension, we selectively degraded the spectral or temporal dimensions of auditory sentence spectrograms to assess how well visual speech facilitated comprehension under each degradation condition. Visual speech produced drastically larger enhancements during spectral degradation, suggesting a condition-specific facilitation effect driven by cross-modal recovery of auditory speech spectra. The perceptual system may therefore use audiovisual correlations rooted in oral acoustics to extract detailed spectrotemporal information from visual speech.
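A toy sketch of the midlevel mapping proposed above: a linear fit predicting a formant trajectory from frame-wise mouth-shape features. The features, weights, and data are synthetic placeholders, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(3)
n_frames = 500

# Stand-in visual features per video frame: lip height, lip width, area.
mouth = rng.uniform(0.2, 1.0, size=(n_frames, 3))

# Synthetic "ground-truth" F2 trajectory generated from the features plus
# noise, standing in for formants measured from the acoustic signal.
true_weights = np.array([600.0, -250.0, 400.0])
f2 = 1500.0 + mouth @ true_weights + rng.normal(0, 50.0, n_frames)

# Least-squares fit of formant frequency from mouth shape.
X = np.column_stack([np.ones(n_frames), mouth])
coef, *_ = np.linalg.lstsq(X, f2, rcond=None)
pred = X @ coef

r = np.corrcoef(pred, f2)[0, 1]
print(f"predicted-vs-actual F2 correlation: r = {r:.2f}")
```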


Subject(s)
Speech Acoustics, Speech Perception, Visual Perception, Adult, Cues, Female, Humans, Lip/physiology, Male, Phonetics
6.
J Neural Eng; 17(4): 045010, 2020 Jul 24.
Article in English | MEDLINE | ID: mdl-32541097

ABSTRACT

Objective: Postmortem analysis of the brain from a blind human subject who had a cortical visual prosthesis implanted for 36 years (Dobelle 2000 Asaio J. 46 3-9). Approach: This analysis provided insight into the design requirements for a successful human cortical visual prosthesis by revealing: (a) unexpected rotation of the electrode array 25 to 40 degrees away from the midsagittal plane, thought to be due to the torque of the connecting cable; (b) degradation of the platinum electrodes; and (c) only partial coverage of the primary visual cortex by the rectangular array. The electrode array overlapped with only the anterior 45% of primary visual cortex (identified by the line of Gennari), largely missing the posterior foveal representation of visual cortex. Main results: A significantly greater proportion of electrodes outside of V1 elicited phosphenes than did electrodes within V1. Histology did not reveal appreciable loss of neurons in the cortex that surrounded the migrated array, perhaps due to the very slow rotation of this implant. Significance: This pioneering effort to develop a cortical visual prosthesis suggests that, to maximize efficacy, the long-term effects of implanted alien materials on nervous tissue, and vice versa, need to be considered in detail, and that electrode array design needs to optimally match the electrodes to the patient's cortical anatomy. Modern pre-implant imaging can help optimize future implants by identifying the location and extent of bridging veins with MRI and even mapping the location of the V1/V2 border in vivo with PET.


Subject(s)
Visual Cortex, Visual Prostheses, Electric Stimulation, Electrodes, Implanted, Humans, Phosphenes
7.
Neuroimage Clin; 23: 101836, 2019.
Article in English | MEDLINE | ID: mdl-31077985

ABSTRACT

Antisocial behavior (AB), including violence, criminality, and substance abuse, is often linked to deficits in emotion processing, reward-related learning, and inhibitory control, as well as their associated neural networks. To better understand these deficits, the structural connections between brain regions implicated in AB can be examined using diffusion tensor imaging (DTI), which assesses white matter microstructure. Prior studies have identified differences in white matter microstructure of the uncinate fasciculus (UF), primarily within offender samples. However, few studies have looked beyond the UF or determined whether these relationships are present dimensionally across the range of AB and callous-unemotional (CU) traits. In the current study, we examined associations between AB and white matter microstructure from major fiber tracts, including the UF. Further, we explored whether these associations were specific to individuals high on CU traits. Within a relatively large community sample of young adult men from low-income, urban families (N = 178), we found no direct relations between dimensional, self-report measures of either AB or CU traits and white matter microstructure. However, we found significant associations between AB and white matter microstructure of several tracts only for those with high co-occurring levels of CU traits. In general, these associations did not differ according to race, socioeconomic status, or comorbid psychiatric symptoms. The current results suggest a unique neural profile of severe AB in combination with CU traits, characterized by widespread differences in white matter microstructure, which differs from either AB or CU traits in isolation and is not specific to hypothesized tracts (i.e., the UF).
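A sketch of the kind of dimensional moderation analysis described above, regressing tract microstructure on AB, CU traits, and their interaction; the data are simulated and the variable names and effect sizes are illustrative only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 178  # sample size from the study

ab = rng.standard_normal(n)  # self-report AB (z-scored)
cu = rng.standard_normal(n)  # self-report CU traits (z-scored)
# Simulated FA related to AB only when CU is high (pure interaction):
fa = 0.45 - 0.02 * ab * cu + rng.normal(0, 0.03, n)

X = np.column_stack([np.ones(n), ab, cu, ab * cu])
coef, res, *_ = np.linalg.lstsq(X, fa, rcond=None)
dof = n - X.shape[1]
mse = res[0] / dof
se = np.sqrt(mse * np.diag(np.linalg.inv(X.T @ X)))
t = coef / se
p = 2 * stats.t.sf(np.abs(t), dof)
for name, b, pv in zip(["intercept", "AB", "CU", "AB x CU"], coef, p):
    print(f"{name:9s} b = {b: .4f}  p = {pv:.3g}")
```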


Subject(s)
Affective Symptoms/diagnostic imaging, Antisocial Personality Disorder/diagnostic imaging, Nerve Net/diagnostic imaging, Poverty, Urban Population, White Matter/diagnostic imaging, Affective Symptoms/economics, Affective Symptoms/psychology, Anisotropy, Antisocial Personality Disorder/economics, Antisocial Personality Disorder/psychology, Diffusion Tensor Imaging/economics, Diffusion Tensor Imaging/methods, Emotions/physiology, Humans, Longitudinal Studies, Male, Poverty/economics, Poverty/psychology, Young Adult
8.
J Cogn Neurosci; 31(7): 1002-1017, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30912728

ABSTRACT

Co-occurring sounds can facilitate perception of spatially and temporally correspondent visual events. Separate lines of research have identified two putatively distinct neural mechanisms underlying two types of crossmodal facilitation: Whereas crossmodal phase resetting is thought to underlie enhancements based on temporal correspondences, lateralized occipital event-related potentials (ERPs) are thought to reflect enhancements based on spatial correspondences. Here, we sought to clarify the relationship between these two effects to assess whether they reflect two distinct mechanisms or, rather, two facets of the same underlying process. To identify the neural generators of each effect, we examined crossmodal responses to lateralized sounds in visually responsive cortex of 22 patients using electrocorticographic recordings. Auditory-driven phase reset and ERP responses in visual cortex displayed similar topography, revealing significant activity in pericalcarine, inferior occipital-temporal, and posterior parietal cortex, with maximal activity in lateral occipitotemporal cortex (potentially V5/hMT+). Laterality effects showed similar but less widespread topography. To test whether lateralized and nonlateralized components of crossmodal ERPs emerged from common or distinct neural generators, we compared responses throughout visual cortex. Visual electrodes responded to both contralateral and ipsilateral sounds with a contralateral bias, suggesting that previously observed laterality effects do not emerge from a distinct neural generator but rather reflect laterality-biased responses in the same neural populations that produce phase-resetting responses. These results suggest that crossmodal phase reset and ERP responses previously found to reflect spatial and temporal facilitation in visual cortex may reflect the same underlying mechanism. We propose a new unified model to account for these and previous results.
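A hedged sketch of the topography comparison at the heart of this result (not the study's actual pipeline): per electrode, quantify the sound-evoked ERP and the post-stimulus phase consistency, then correlate the two spatial maps. Dimensions and data are stand-ins:

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import spearmanr

fs, n_elec, n_trials, n_samples = 500, 40, 80, 250  # assumed dimensions
rng = np.random.default_rng(5)
# Stand-in ECoG: electrodes x trials x samples, 0-0.5 s after sound onset.
data = rng.standard_normal((n_elec, n_trials, n_samples))

erp_size = np.abs(data.mean(axis=1)).max(axis=1)  # peak |trial average|

phase = np.angle(hilbert(data, axis=2))           # instantaneous phase
itc = np.abs(np.exp(1j * phase).mean(axis=1))     # phase consistency over trials
phase_reset = itc.max(axis=1)                     # peak ITC per electrode

rho, p = spearmanr(erp_size, phase_reset)
print(f"ERP/phase-reset topography similarity: rho = {rho:.2f}, p = {p:.3g}")
# Similar spatial maps across electrodes would be consistent with the two
# effects sharing a common neural generator, as argued above.
```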


Subject(s)
Auditory Perception/physiology, Evoked Potentials, Auditory, Evoked Potentials, Visual, Visual Cortex/physiology, Visual Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Electrocorticography, Female, Functional Laterality, Humans, Male, Middle Aged, Photic Stimulation, Time Factors, Young Adult
9.
J Exp Psychol Gen; 148(10): 1665-1674, 2019 Oct.
Article in English | MEDLINE | ID: mdl-30421944

ABSTRACT

Cognition in action requires strategic allocation of attention between internal processes and the sensory environment. We hypothesized that this resource allocation could be facilitated by mechanisms that predict sensory results of self-generated actions. Sensory signals conforming to predictions would be safely ignored to facilitate focus on internally generated content, whereas those violating predictions would draw attention for additional scrutiny. During a visual-verbal serial digit-recall task, we varied the temporal relationship between task-irrelevant keypresses and auditory distractors so that the distractors were either temporally coupled or decoupled with keypresses. Consistent with our hypothesis, distractors were more likely to interfere with target maintenance and intrude into working memory when they were decoupled from keypresses, thereby violating action-based sensory predictions. Interference was maximal when sounds preceded keypresses, suggesting that stimuli were most distracting when their timing was inconsistent with expected action-sensation contingencies. In a follow-up experiment, neither auditory nor visual cues to distractor timing produced similar effects, suggesting a unique action-based mechanism. These results suggest that action-based sensory predictions are used to dynamically optimize attentional allocation during cognition in action.


Subject(s)
Attention/physiology, Cognition/physiology, Memory, Short-Term/physiology, Resource Allocation, Adolescent, Cues, Female, Humans, Male, Mental Recall/physiology, Neuropsychological Tests, Young Adult
10.
Atten Percept Psychophys; 79(7): 2055-2063, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28634962

ABSTRACT

Multisensory integration can play a critical role in producing unified and reliable perceptual experience. When sensory information in one modality is degraded or ambiguous, information from other senses can crossmodally resolve perceptual ambiguities. Prior research suggests that auditory information can disambiguate the contents of visual awareness by facilitating perception of intermodally consistent stimuli. However, it is unclear whether these effects are truly due to crossmodal facilitation or are mediated by voluntary selective attention to audiovisually congruent stimuli. Here, we demonstrate that sounds can bias competition in binocular rivalry toward audiovisually congruent percepts, even when participants have no recognition of the congruency. When speech sounds were presented in synchrony with speech-like deformations of rivalling ellipses, ellipses with crossmodally congruent deformations were perceptually dominant over those with incongruent deformations. This effect was observed in participants who could not identify the crossmodal congruency in an open-ended interview (Experiment 1) or detect it in a simple 2AFC task (Experiment 2), suggesting that the effect was not due to voluntary selective attention or response bias. These results suggest that sound can automatically disambiguate the contents of visual awareness by facilitating perception of audiovisually congruent stimuli.


Subject(s)
Acoustic Stimulation/methods, Auditory Perception/physiology, Awareness/physiology, Photic Stimulation/methods, Visual Perception/physiology, Attention/physiology, Female, Humans, Male, Phonetics, Young Adult
11.
Neurosci Conscious; 2016(1), 2016.
Article in English | MEDLINE | ID: mdl-28184322

ABSTRACT

Plasticity is essential in body perception so that physical changes in the body can be accommodated and assimilated. Multisensory integration of visual, auditory, tactile, and proprioceptive signals contributes both to conscious perception of the body's current state and to associated learning. However, much is unknown about how novel information is assimilated into body perception networks in the brain. Sleep-based consolidation can facilitate various types of learning via the reactivation of networks involved in prior encoding or through synaptic down-scaling. Sleep may likewise contribute to perceptual learning of bodily information by providing an optimal time for multisensory recalibration. Here we used methods for targeted memory reactivation (TMR) during slow-wave sleep to examine the influence of sleep-based reactivation of experimentally induced alterations in body perception. The rubber-hand illusion was induced with concomitant auditory stimulation in 24 healthy participants on 3 consecutive days. While each participant was sleeping in his or her own bed during intervening nights, electrophysiological detection of slow-wave sleep prompted covert stimulation with either the sound heard during illusion induction, a counterbalanced novel sound, or neither. TMR systematically enhanced feelings of bodily ownership after subsequent inductions of the rubber-hand illusion. TMR also enhanced spatial recalibration of perceived hand location in the direction of the rubber hand. This evidence for a sleep-based facilitation of a body-perception illusion demonstrates that the spatial recalibration of multisensory signals can be altered overnight to stabilize new learning of bodily representations. Sleep-based memory processing may thus constitute a fundamental component of body-image plasticity.
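A simplified sketch of the closed-loop logic described above: detect putative slow-wave sleep from ongoing EEG and, when detected, trigger the sound paired with illusion induction. The threshold, window length, and sampling rate are illustrative assumptions, not the study's parameters:

```python
import numpy as np
from scipy.signal import welch

fs = 250.0  # EEG sampling rate in Hz (assumed)

def delta_fraction(window):
    """Fraction of 0.5-30 Hz EEG power falling in the 0.5-4 Hz delta band."""
    freqs, psd = welch(window, fs=fs, nperseg=int(2 * fs))
    total = psd[(freqs >= 0.5) & (freqs <= 30.0)].sum()
    delta = psd[(freqs >= 0.5) & (freqs <= 4.0)].sum()
    return delta / total

def maybe_cue(window, threshold=0.6):
    """Return True (play the induction sound) during putative SWS."""
    return delta_fraction(window) >= threshold

# Stand-in 30-s EEG epoch dominated by a 1.5 Hz slow oscillation.
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(6)
epoch = 3.0 * np.sin(2 * np.pi * 1.5 * t) + rng.standard_normal(t.size)
print("trigger cue:", maybe_cue(epoch))
```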
