Results 1 - 20 of 49
1.
J Neural Eng ; 21(3)2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38806038

ABSTRACT

Objective. Decoding gestures from the upper limb using noninvasive surface electromyogram (sEMG) signals is of keen interest for the rehabilitation of amputees, artificial supernumerary limb augmentation, gestural control of computers, and virtual/augmented realities. We show that sEMG signals recorded across an array of sensor electrodes in multiple spatial locations around the forearm evince a rich geometric pattern of global motor unit (MU) activity that can be leveraged to distinguish different hand gestures. Approach. We demonstrate a simple technique to analyze spatial patterns of muscle MU activity within a temporal window and show that distinct gestures can be classified in both supervised and unsupervised manners. Specifically, we construct symmetric positive definite covariance matrices to represent the spatial distribution of MU activity in a time window of interest, calculated as the pairwise covariance of electrical signals measured across different electrodes. Main results. This allows us to understand and manipulate multivariate sEMG time series on a more natural subspace: the Riemannian manifold. Furthermore, it directly addresses signal variability across individuals and sessions, which remains a major challenge in the field. sEMG signals measured at a single electrode lack contextual information, such as how various anatomical and physiological factors influence the signals and how their combined effect alters the evident interaction among neighboring muscles. Significance. As we show here, analyzing spatial patterns using covariance matrices on Riemannian manifolds allows us to robustly model complex interactions across spatially distributed MUs and provides a flexible and transparent framework for quantifying differences in sEMG signals across individuals. The proposed method is novel in the study of sEMG signals, and its performance exceeds current benchmarks while remaining computationally efficient.
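The covariance-on-manifold idea above can be sketched in a few lines. The paper's code is not reproduced here; this is a minimal numpy illustration that builds SPD spatial covariance matrices from synthetic multichannel windows and compares them with the log-Euclidean metric, one common choice of Riemannian distance (the channel count, window length, and choice of metric are assumptions for illustration only):

```python
import numpy as np

def spd_covariance(window):
    """Spatial covariance of one multichannel sEMG window.

    window: (n_channels, n_samples). A small ridge keeps the matrix
    strictly positive definite.
    """
    x = window - window.mean(axis=1, keepdims=True)
    cov = x @ x.T / (x.shape[1] - 1)
    return cov + 1e-9 * np.eye(cov.shape[0])

def log_euclidean_distance(a, b):
    """Distance between SPD matrices under the log-Euclidean metric:
    ||logm(A) - logm(B)||_F, with the matrix log via eigendecomposition."""
    def logm_spd(m):
        w, v = np.linalg.eigh(m)
        return (v * np.log(w)) @ v.T
    return np.linalg.norm(logm_spd(a) - logm_spd(b), ord="fro")

# Two synthetic 8-channel "gestures" with different spatial mixing
rng = np.random.default_rng(0)
mix1, mix2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
g1 = spd_covariance(mix1 @ rng.normal(size=(8, 2000)))
g2 = spd_covariance(mix2 @ rng.normal(size=(8, 2000)))
print(log_euclidean_distance(g1, g1))        # exactly 0.0
print(log_euclidean_distance(g1, g2) > 0.1)  # distinct spatial patterns
```

Windows of the same gesture cluster tightly under this distance while different gestures separate, which is what makes both supervised and unsupervised classification possible on the manifold.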


Subject(s)
Electromyography; Gestures; Hand; Muscle, Skeletal; Humans; Electromyography/methods; Hand/physiology; Male; Female; Adult; Muscle, Skeletal/physiology; Young Adult; Algorithms
2.
Neuropsychologia ; 194: 108774, 2024 02 15.
Article in English | MEDLINE | ID: mdl-38145800

ABSTRACT

Electrophysiological studies of congenitally deaf children and adults have reported atypical visual evoked potentials (VEPs), which have been associated both with behavioral enhancements of visual attention and with poorer performance and outcomes in tests of spoken language processing. This pattern has often been interpreted as a maladaptive consequence of early auditory deprivation, whereby a remapping of auditory cortex by the visual system ultimately reduces the resources necessary for optimal rehabilitative outcomes of spoken language acquisition and use. Making use of a novel electrophysiological paradigm, we compare VEPs in children with severe to profound congenital deafness who received a cochlear implant(s) prior to 31 months (n = 28) and typically developing age-matched controls (n = 28). We observe amplitude enhancements, and in some cases latency differences, in occipitally expressed P1 and N1 VEP components in CI-using children, as well as an early frontal negativity, the N1a. We relate these findings to developmental factors such as chronological age and spoken language understanding. We further evaluate whether VEPs are additionally modulated by auditory stimulation. Collectively, these data provide a means to examine the extent to which atypical VEPs are consistent with prior accounts of maladaptive cross-modal plasticity. Our results support the view that VEP changes reflect alterations to visual-sensory attention and saliency mechanisms rather than a remapping of auditory cortex. The present data suggest that early auditory deprivation may have temporally prolonged effects on visual system processing even after activation and use of a cochlear implant.


Subject(s)
Auditory Cortex; Cochlear Implantation; Cochlear Implants; Deafness; Child; Adult; Humans; Evoked Potentials, Visual; Visual Perception/physiology; Deafness/surgery; Auditory Cortex/physiology
3.
Cognition ; 231: 105313, 2023 02.
Article in English | MEDLINE | ID: mdl-36344304

ABSTRACT

For seventy years, auditory selective attention research has focused on the cognitive mechanisms of prioritizing the processing of a 'main' task-relevant stimulus in the presence of 'other' stimuli. However, a closer look at this body of literature reveals deep empirical inconsistencies and theoretical confusion regarding the extent to which this 'other' stimulus is processed. We argue that many key debates regarding attention arise, at least in part, from inappropriate terminological choices for experimental variables that may not accurately map onto the cognitive constructs they are meant to describe. Here we critically review the most common or disruptive terminological ambiguities, differentiate between methodology-based and theory-derived terms, and unpack the theoretical assumptions underlying different terminological choices. In particular, we offer an in-depth analysis of the terms 'unattended' and 'distractor' and demonstrate how their use can lead to conflicting theoretical inferences. We also offer a framework for thinking about terminology in a more precise way, in the hope of fostering more productive debates and promoting more nuanced and accurate cognitive models of selective attention.


Subject(s)
Attention; Humans; Acoustic Stimulation
4.
J Speech Lang Hear Res ; 65(9): 3502-3517, 2022 09 12.
Article in English | MEDLINE | ID: mdl-36037517

ABSTRACT

PURPOSE: This research examined the expression of cortical auditory evoked potentials in a cohort of children who received cochlear implants (CIs) for treatment of congenital deafness (n = 28) and typically hearing controls (n = 28). METHOD: We make use of a novel electroencephalography paradigm that permits the assessment of auditory responses to ambiently presented speech and evaluates the contributions of concurrent visual stimulation on this activity. RESULTS: Our findings show group differences in the expression of auditory sensory and perceptual event-related potential components occurring in 80- to 200-ms and 200- to 300-ms time windows, with reductions in amplitude and a greater latency difference for CI-using children. Relative to typically hearing children, current source density analysis showed muted responses to concurrent visual stimulation in CI-using children, suggesting less cortical specialization and/or reduced responsiveness to auditory information that limits the detection of the interaction between sensory systems. CONCLUSION: These findings indicate that even in the face of early interventions, CI-using children may exhibit disruptions in the development of auditory and multisensory processing.


Subject(s)
Cochlear Implantation; Cochlear Implants; Deafness; Speech Perception; Acoustic Stimulation; Child; Deafness/surgery; Evoked Potentials, Auditory/physiology; Humans; Speech; Speech Perception/physiology
5.
iScience ; 25(7): 104671, 2022 Jul 15.
Article in English | MEDLINE | ID: mdl-35845168

ABSTRACT

Previous work addressing the influence of audition on visual perception has mainly been assessed using non-speech stimuli. Herein, we introduce the Audiovisual Time-Flow Illusion in spoken language, underscoring the role of audition in multisensory processing. When brief pauses were inserted into, or brief portions were removed from, an acoustic speech stream, individuals perceived the corresponding visual speech as "pausing" or "skipping", respectively, even though the visual stimulus was intact. When the stimulus manipulation was reversed (brief pauses inserted into, or brief portions removed from, the visual speech stream), individuals failed to perceive the illusion in the corresponding intact auditory stream. Our findings demonstrate that in the context of spoken language, people continually realign the pace of their visual perception based on that of the auditory input. In short, the auditory modality sets the pace of the visual modality during audiovisual speech processing.

6.
J Cogn Neurosci ; 33(4): 574-593, 2021 04.
Article in English | MEDLINE | ID: mdl-33475452

ABSTRACT

In recent years, a growing number of studies have used cortical tracking methods to investigate auditory language processing. Although most studies that employ cortical tracking stem from the field of auditory signal processing, this approach should also be of interest to psycholinguistics-particularly the subfield of sentence processing-given its potential to provide insight into dynamic language comprehension processes. However, there has been limited collaboration between these fields, which we suggest is partly because of differences in theoretical background and methodological constraints, some mutually exclusive. In this paper, we first review the theories and methodological constraints that have historically been prioritized in each field and provide concrete examples of how some of these constraints may be reconciled. We then elaborate on how further collaboration between the two fields could be mutually beneficial. Specifically, we argue that the use of cortical tracking methods may help resolve long-standing debates in the field of sentence processing that commonly used behavioral and neural measures (e.g., ERPs) have failed to adjudicate. Similarly, signal processing researchers who use cortical tracking may be able to reduce noise in the neural data and broaden the impact of their results by controlling for linguistic features of their stimuli and by using simple comprehension tasks. Overall, we argue that a balance between the methodological constraints of the two fields will lead to an overall improved understanding of language processing as well as greater clarity on what mechanisms cortical tracking of speech reflects. Increased collaboration will help resolve debates in both fields and will lead to new and exciting avenues for research.


Subject(s)
Speech Perception; Speech; Comprehension; Humans; Language; Psycholinguistics
7.
J Neurophysiol ; 122(4): 1312-1329, 2019 10 01.
Article in English | MEDLINE | ID: mdl-31268796

ABSTRACT

Objective assessment of the sensory pathways is crucial for understanding their development across the life span and how they may be affected by neurodevelopmental disorders (e.g., autism spectrum) and neurological pathologies (e.g., stroke, multiple sclerosis, etc.). Quick and passive measurements, for example, using electroencephalography (EEG), are especially important when working with infants and young children and with patient populations having communication deficits (e.g., aphasia). However, many EEG paradigms are limited to measuring activity from one sensory domain at a time, may be time consuming, and target only a subset of possible responses from that particular sensory domain (e.g., only auditory brainstem responses or only auditory P1-N1-P2 evoked potentials). Thus we developed a new multisensory paradigm that enables simultaneous, robust, and rapid (6-12 min) measurements of both auditory and visual EEG activity, including auditory brainstem responses, auditory and visual evoked potentials, as well as auditory and visual steady-state responses. This novel method allows us to examine neural activity at various stations along the auditory and visual hierarchies with an ecologically valid continuous speech stimulus, while an unrelated video is playing. Both the speech stimulus and the video can be customized for any population of interest. Furthermore, by using two simultaneous visual steady-state stimulation rates, we demonstrate the ability of this paradigm to track both parafoveal and peripheral visual processing concurrently. We report results from 25 healthy young adults, which validate this new paradigm.

NEW & NOTEWORTHY: A novel electroencephalography paradigm enables the rapid, reliable, and noninvasive assessment of neural activity along both auditory and visual pathways concurrently. The paradigm uses an ecologically valid continuous speech stimulus for auditory evaluation and can simultaneously track visual activity to both parafoveal and peripheral visual space. This new methodology may be particularly appealing to researchers and clinicians working with infants and young children and with patient populations with limited communication abilities.
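The frequency-tagging logic behind the two simultaneous steady-state stimulation rates can be sketched simply: each stimulation rate produces a spectral peak at its own frequency, so power at the tagged frequencies indexes parafoveal versus peripheral processing separately. The sampling rate and tag frequencies below are illustrative, not the study's actual parameters:

```python
import numpy as np

fs = 500.0                    # EEG sampling rate in Hz (hypothetical)
t = np.arange(5000) / fs      # 10 s of data
f_para, f_peri = 6.0, 7.5     # illustrative tag frequencies (Hz)

# Synthetic occipital channel: two steady-state responses buried in noise
rng = np.random.default_rng(1)
eeg = (0.8 * np.sin(2 * np.pi * f_para * t)
       + 0.5 * np.sin(2 * np.pi * f_peri * t)
       + rng.normal(scale=1.0, size=t.size))

# FFT power spectrum; the tagged frequencies should stand out sharply
spectrum = np.abs(np.fft.rfft(eeg)) ** 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def power_at(f):
    """Power in the FFT bin nearest frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

noise_floor = np.median(spectrum)
print(power_at(f_para) / noise_floor > 10)  # parafoveal tag detected
print(power_at(f_peri) / noise_floor > 10)  # peripheral tag detected
```

Because the two tags live in separate frequency bins, a single occipital recording can track both visual regions at once, which is what makes the paradigm rapid.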


Subject(s)
Electroencephalography/methods; Evoked Potentials, Auditory, Brain Stem; Evoked Potentials, Visual; Adolescent; Adult; Auditory Pathways/physiology; Female; Humans; Male; Speech Perception; Visual Pathways/physiology
8.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 5806-5809, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30441655

ABSTRACT

Contemporary hearing aids are markedly limited in their most important role: improving speech perception in dynamic "cocktail party" environments with multiple, competing talkers. Here we describe an open-source, mobile assistive hearing platform entitled "Cochlearity", which uses eye gaze to guide an acoustic beamformer, so a listener will hear best wherever they look. Cochlearity runs on Android, and its eight-channel microphone array can be worn comfortably on the head, e.g. mounted on eyeglasses. In this preliminary report, we examine the efficacy of both a static (delay-and-sum) and an adaptive (MVDR) beamformer in the task of separating an "attended" voice from an "unattended" voice in a two-talker scenario. We show that the two beamformers have the potential to complement each other to improve the target speech SNR (signal-to-noise ratio) across the range of speech power, with tolerably low latency.
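The static half of the comparison, a delay-and-sum beamformer, can be sketched as follows. This is not Cochlearity's code; it is a minimal numpy illustration in which per-microphone delays are applied as FFT phase shifts so that sound arriving from the look direction sums coherently (the array geometry and signals are invented for the demo):

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def delay_and_sum(mic_signals, mic_positions, look_dir, fs, c=C):
    """Static delay-and-sum beamformer with FFT-based fractional delays.

    mic_signals: (n_mics, n_samples); mic_positions: (n_mics, 3) in meters;
    look_dir: unit vector from the array toward the attended talker.
    """
    n_mics, n = mic_signals.shape
    delays = mic_positions @ look_dir / c        # per-mic alignment delay (s)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    out = np.zeros(n)
    for sig, d in zip(mic_signals, delays):
        # e^{-j2*pi*f*d} delays this channel by d, aligning the look direction
        out += np.fft.irfft(np.fft.rfft(sig) * np.exp(-2j * np.pi * freqs * d), n)
    return out / n_mics

# Demo: a 1 kHz tone from the look direction is recovered coherently
fs, n = 16000, 1600
s = np.sin(2 * np.pi * 1000 * np.arange(n) / fs)
pos = np.array([[0.0, 0, 0], [0.1, 0, 0], [0.2, 0, 0]])  # 3-mic line array
look = np.array([1.0, 0, 0])
fr = np.fft.rfftfreq(n, 1 / fs)
# Simulate plane-wave arrivals: mics farther along `look` receive earlier
mics = np.array([np.fft.irfft(np.fft.rfft(s) * np.exp(2j * np.pi * fr * (p @ look) / C), n)
                 for p in pos])
out = delay_and_sum(mics, pos, look, fs)
print(np.allclose(out, s, atol=1e-6))  # True: channels align and sum to s
```

In a gaze-steered system, `look_dir` would simply be updated from the eye tracker; an adaptive MVDR beamformer replaces the uniform averaging with noise-covariance-weighted channel weights.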


Subject(s)
Fixation, Ocular; Hearing Aids; Hearing Loss/therapy; Speech Perception; Acoustics/instrumentation; Humans
10.
Proc Natl Acad Sci U S A ; 113(48): 13570-13575, 2016 11 29.
Article in English | MEDLINE | ID: mdl-27849587

ABSTRACT

Wind turbines generate electricity by removing kinetic energy from the atmosphere. Large numbers of wind turbines are likely to reduce wind speeds, which lowers estimates of electricity generation relative to what would be presumed from unaffected conditions. Here, we test how well wind power limits that account for this effect can be estimated without explicitly simulating atmospheric dynamics. We first use simulations with an atmospheric general circulation model (GCM) that explicitly simulates the effects of wind turbines to derive wind power limits (GCM estimate), and compare them to a simple approach derived from the climatological conditions without turbines [vertical kinetic energy (VKE) estimate]. On land, we find strong agreement between the VKE and GCM estimates with respect to electricity generation rates (0.32 and 0.37 We m⁻²) and wind speed reductions (42 and 44%). Over the ocean, the GCM estimate is about twice the VKE estimate (0.59 and 0.29 We m⁻²), yet with comparable wind speed reductions (50 and 42%). We then show that this bias can be corrected by modifying the downward momentum flux to the surface. Thus, large-scale limits to wind power use can be derived from climatological conditions without explicitly simulating atmospheric dynamics. Consistent with the GCM simulations, the approach estimates that only comparatively few land areas are suitable to generate more than 1 We m⁻² of electricity and that larger deployment scales are likely to reduce the expected electricity generation rate of each turbine. We conclude that these atmospheric effects are relevant for planning the future expansion of wind power.

11.
eNeuro ; 3(5)2016.
Article in English | MEDLINE | ID: mdl-27822505

ABSTRACT

Modulations in alpha oscillations (∼10 Hz) are typically studied in the context of anticipating upcoming stimuli. Alpha power decreases in sensory regions processing upcoming targets compared to regions processing distracting input, thereby likely facilitating processing of relevant information while suppressing irrelevant input. In this electroencephalography study using healthy human volunteers, we examined whether modulations in alpha power also occur after the onset of a bilaterally presented target and distractor. Spatial attention was manipulated through spatial cues and feature-based attention through adjusting the color similarity of distractors to the target. Consistent with previous studies, we found that informative spatial cues induced a relative decrease of pretarget alpha power at occipital electrodes contralateral to the expected target location. Interestingly, this pattern reemerged relatively late (300-750 ms) after stimulus onset, suggesting that lateralized alpha reflects not only preparatory attention, but also ongoing attentive stimulus processing. Uninformative cues (i.e., conveying no information about the spatial location of the target) resulted in an interaction between spatial attention and feature-based attention in post-target alpha lateralization. When the target was paired with a low-similarity distractor, post-target alpha was lateralized (500-900 ms). Crucially, the lateralization was absent when target selection was ambiguous because the distractor was highly similar to the target. Instead, in this condition, midfrontal theta was increased, indicative of reactive conflict resolution. Behaviorally, the degree of alpha lateralization was negatively correlated with the reaction time distraction cost induced by target-distractor similarity. These results suggest a pivotal role for poststimulus alpha lateralization in protecting sensory processing of target information.
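Lateralization effects like these are often quantified with a normalized contralateral-minus-ipsilateral power index. The formula below is a common convention in the alpha literature, not necessarily the exact measure computed in this study:

```python
import numpy as np

def alpha_lateralization_index(power_contra, power_ipsi):
    """Normalized alpha-power lateralization per electrode pair.

    Negative values indicate lower alpha power at electrodes contralateral
    to the attended location, i.e., relative release of the cortex
    processing the target.
    """
    pc = np.asarray(power_contra, dtype=float)
    pi = np.asarray(power_ipsi, dtype=float)
    return (pc - pi) / (pc + pi)

# Alpha power (arbitrary units) for three occipital electrode pairs
ali = alpha_lateralization_index([2.0, 3.0, 4.0], [4.0, 3.0, 2.0])
print(ali)  # roughly [-1/3, 0, +1/3]
```

The normalization makes the index insensitive to overall power differences between participants, which matters when correlating it with behavioral costs as done here.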


Subject(s)
Alpha Rhythm/physiology; Attention/physiology; Brain/physiology; Adult; Analysis of Variance; Female; Functional Laterality/physiology; Humans; Male; Neuropsychological Tests; Pattern Recognition, Visual/physiology; Photic Stimulation; Reaction Time; Theta Rhythm; Young Adult
12.
Proc Natl Acad Sci U S A ; 112(36): 11169-74, 2015 Sep 08.
Article in English | MEDLINE | ID: mdl-26305925

ABSTRACT

Wind turbines remove kinetic energy from the atmospheric flow, which reduces wind speeds and limits generation rates of large wind farms. These interactions can be approximated using a vertical kinetic energy (VKE) flux method, which predicts that the maximum power generation potential is 26% of the instantaneous downward transport of kinetic energy using the preturbine climatology. We compare the energy flux method to the Weather Research and Forecasting (WRF) regional atmospheric model equipped with a wind turbine parameterization over a 10⁵ km² region in the central United States. The WRF simulations yield a maximum generation of 1.1 We⋅m⁻², whereas the VKE method predicts the time series while underestimating the maximum generation rate by about 50%. Because VKE derives the generation limit from the preturbine climatology, potential changes in the vertical kinetic energy flux from the free atmosphere are not considered. Such changes are important at night, when WRF estimates are about twice the VKE value because wind turbines interact with the decoupled nocturnal low-level jet in this region. Daytime estimates agree to within 20% because the wind turbines induce comparatively small changes to the downward kinetic energy flux. This combination of downward transport limits and wind speed reductions explains why large-scale wind power generation in windy regions is limited to about 1 We⋅m⁻², with VKE capturing this combination in a comparatively simple way.
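The VKE scaling described above reduces to a one-line calculation: the generation limit is 26% of the preturbine downward kinetic energy flux. The flux value in the demo below is purely illustrative, not a number from the paper:

```python
# VKE scaling from the abstract: maximum extractable wind power is about
# 26% of the instantaneous downward kinetic energy flux computed from the
# preturbine climatology.
VKE_FRACTION = 0.26

def vke_limit(downward_ke_flux):
    """Generation limit (We per m^2) given the downward KE flux (W per m^2)."""
    return VKE_FRACTION * downward_ke_flux

# Illustrative flux only: ~2.1 W m^-2 of downward transport would cap
# generation near 0.55 We m^-2, about half the 1.1 We m^-2 WRF maximum,
# consistent with the ~50% underestimate reported above.
print(round(vke_limit(2.1), 3))  # 0.546
```

The appeal of the method is exactly this simplicity: the limit follows from climatology alone, without simulating turbine-atmosphere feedbacks.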

13.
J Neurophysiol ; 113(5): 1437-50, 2015 Mar 01.
Article in English | MEDLINE | ID: mdl-25505102

ABSTRACT

Audiovisual (AV) speech perception is robust to temporal asynchronies between visual and auditory stimuli. We investigated the neural mechanisms that facilitate tolerance for audiovisual stimulus onset asynchrony (AVOA) with EEG. Individuals were presented with AV words that were asynchronous in onsets of voice and mouth movement and judged whether they were synchronous or not. Behaviorally, individuals tolerated (perceived as synchronous) longer AVOAs when mouth movement preceded the speech (V-A) stimuli than when the speech preceded mouth movement (A-V). Neurophysiologically, the P1-N1-P2 auditory evoked potentials (AEPs), time-locked to sound onsets and known to arise in and surrounding the primary auditory cortex (PAC), were smaller for the in-sync than the out-of-sync percepts. Spectral power of oscillatory activity in the beta band (14-30 Hz) following the AEPs was larger during the in-sync than out-of-sync perception for both A-V and V-A conditions. However, alpha power (8-14 Hz), also following AEPs, was larger for the in-sync than out-of-sync percepts only in the V-A condition. These results demonstrate that AVOA tolerance is enhanced by inhibiting low-level auditory activity (e.g., AEPs representing generators in and surrounding PAC) that code for acoustic onsets. By reducing sensitivity to acoustic onsets, visual-to-auditory onset mapping is weakened, allowing for greater AVOA tolerance. In contrast, beta and alpha results suggest the involvement of higher-level neural processes that may code for language cues (phonetic, lexical), selective attention, and binding of AV percepts, allowing for wider neural windows of temporal integration, i.e., greater AVOA tolerance.


Subject(s)
Auditory Cortex/physiology; Auditory Perception; Cortical Synchronization; Evoked Potentials, Auditory; Speech Perception; Visual Perception; Acoustic Stimulation; Alpha Rhythm; Beta Rhythm; Female; Humans; Male; Mouth/physiology; Movement; Photic Stimulation; Voice; Young Adult
14.
J Acoust Soc Am ; 136(2): 803-17, 2014 Aug.
Article in English | MEDLINE | ID: mdl-25096114

ABSTRACT

Spatial perception in echoic environments is influenced by recent acoustic history. For instance, echo suppression becomes more effective or "builds up" with repeated exposure to echoes having a consistent acoustic relationship to a temporally leading sound. Four experiments were conducted to investigate how buildup is affected by prior exposure to unpaired lead-alone or lag-alone click trains. Unpaired trains preceded lead-lag click trains designed to evoke and assay buildup. Listeners reported how many sounds they heard from the echo hemifield during the lead-lag trains. Stimuli were presented in free field (experiments 1 and 4) or dichotically through earphones (experiments 2 and 3). In experiment 1, listeners reported more echoes following a lead-alone train compared to a period of silence. In contrast, listeners reported fewer echoes following a lag-alone train; similar results were observed with earphones. Interestingly, the effects of lag-alone click trains on buildup were qualitatively different when compared to a no-conditioner trial type in experiment 4. Finally, experiment 3 demonstrated that the effects of preceding click trains on buildup cannot be explained by a change in counting strategy or perceived click salience. Together, these findings demonstrate that echo suppression is affected by prior exposure to unpaired stimuli.


Subject(s)
Noise/adverse effects; Perceptual Masking; Sound Localization; Space Perception; Acoustic Stimulation; Acoustics; Adult; Cues; Female; Humans; Male; Psychoacoustics; Time Factors; Vibration; Young Adult
15.
Neuron ; 77(5): 806-9, 2013 Mar 06.
Article in English | MEDLINE | ID: mdl-23473312

ABSTRACT

In a crowded environment, how do we hear a single talker while ignoring everyone else? In this issue of Neuron, Zion Golumbic et al. (2013) record from the surface of the human brain to show how speech tracking arises through multiple neural frequency channels, both within and beyond auditory cortex.

16.
J Neurosci ; 33(5): 1858-63, 2013 Jan 30.
Article in English | MEDLINE | ID: mdl-23365225

ABSTRACT

Cocktail parties, busy streets, and other noisy environments pose a difficult challenge to the auditory system: how to focus attention on selected sounds while ignoring others? Neurons of primary auditory cortex, many of which are sharply tuned to sound frequency, could help solve this problem by filtering selected sound information based on frequency-content. To investigate whether this occurs, we used high-resolution fMRI at 7 tesla to map the fine-scale frequency-tuning (1.5 mm isotropic resolution) of primary auditory areas A1 and R in six human participants. Then, in a selective attention experiment, participants heard low (250 Hz)- and high (4000 Hz)-frequency streams of tones presented at the same time (dual-stream) and were instructed to focus attention onto one stream versus the other, switching back and forth every 30 s. Attention to low-frequency tones enhanced neural responses within low-frequency-tuned voxels relative to high, and when attention switched the pattern quickly reversed. Thus, like a radio, human primary auditory cortex is able to tune into attended frequency channels and can switch channels on demand.


Subject(s)
Attention/physiology; Auditory Cortex/physiology; Auditory Perception/physiology; Acoustic Stimulation; Adult; Brain Mapping; Female; Functional Neuroimaging; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male
17.
South Med J ; 105(10): 520-3, 2012 Oct.
Article in English | MEDLINE | ID: mdl-23038482

ABSTRACT

OBJECTIVES: Patients' perception of quality is a critical primary outcome of medical care. Important downstream effects of perceived quality include a more trusting attitude toward the physician, more adherence to treatment, and better treatment outcomes. Patients' satisfaction issues are important to address during dermatology residency training. The aim of the study was to determine patients' satisfaction with dermatology residents and identify potential areas that could be targeted to improve satisfaction. METHODS: Dermatology residents informed patients about a survey on an online doctor rating/patients' satisfaction Web site (www.DrScore.com), provided the patients with cards with the Web site address, and requested that they complete the survey. Respondents provided an overall rating, open comments, and detailed information in seven core areas. The numerical ratings were on a scale from 0 (not at all satisfied) to 10 (extremely satisfied). Patients had the option of indicating aspects of care that could be improved. Descriptive statistics are reported. RESULTS: A total of 148 surveys were collected with a mean rating for the six residents of 9.7 out of 10, with a range of 9.4 to 10. The average during the early period was 9.7 out of 10, whereas the average during the late period was 9.8 out of 10. Fifty-two surveys (35%) indicated areas for improvement, with the most common issues related to staff, parking availability, waiting time, waiting area, and ability to obtain information. CONCLUSIONS: Patients were generally satisfied with the care provided by dermatology residents. Areas for improvement were identified, but these were largely areas over which residents do not have direct control.


Subject(s)
Dermatology; Internship and Residency; Patient Satisfaction; Adolescent; Adult; Aged; Data Collection; Dermatology/education; Dermatology/standards; Female; Humans; Internship and Residency/standards; Male; Middle Aged; Physician-Patient Relations; Young Adult
18.
J Neurophysiol ; 108(7): 1869-83, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22786953

ABSTRACT

Auditory spatial perception plays a critical role in day-to-day communication. For instance, listeners utilize acoustic spatial information to segregate individual talkers into distinct auditory "streams" to improve speech intelligibility. However, spatial localization is an exceedingly difficult task in everyday listening environments with numerous distracting echoes from nearby surfaces, such as walls. Listeners' brains overcome this unique challenge by relying on acoustic timing and, quite surprisingly, visual spatial information to suppress short-latency (1-10 ms) echoes through a process known as "the precedence effect" or "echo suppression." In the present study, we employed electroencephalography (EEG) to investigate the neural time course of echo suppression both with and without the aid of coincident visual stimulation in human listeners. We find that echo suppression is a multistage process initialized during the auditory N1 (70-100 ms) and followed by space-specific suppression mechanisms from 150 to 250 ms. Additionally, we find a robust correlate of listeners' spatial perception (i.e., suppressing or not suppressing the echo) over central electrode sites from 300 to 500 ms. Contrary to our hypothesis, vision's powerful contribution to echo suppression occurs late in processing (250-400 ms), suggesting that vision contributes primarily during late sensory or decision making processes. Together, our findings support growing evidence that echo suppression is a slow, progressive mechanism modifiable by visual influences during late sensory and decision making stages. Furthermore, our findings suggest that audiovisual interactions are not limited to early, sensory-level modulations but extend well into late stages of cortical processing.


Subject(s)
Auditory Perception/physiology; Photic Stimulation; Acoustic Stimulation; Adult; Discrimination, Psychological/physiology; Evoked Potentials, Auditory; Female; Humans; Male; Models, Neurological; Space Perception/physiology; Visual Perception/physiology
19.
Front Hum Neurosci ; 6: 158, 2012.
Article in English | MEDLINE | ID: mdl-22701410

ABSTRACT

The human brain uses acoustic cues to decompose complex auditory scenes into their components. For instance, to improve communication, a listener can select an individual "stream," such as a talker in a crowded room, based on cues such as pitch or location. Despite numerous investigations into auditory streaming, few have demonstrated clear correlates of perception; instead, in many studies perception covaries with changes in physical stimulus properties (e.g., frequency separation). In the current report, we employ a classic ABA streaming paradigm and human electroencephalography (EEG) to disentangle the individual contributions of stimulus properties from changes in auditory perception. We find that changes in perceptual state, that is, the perception of one versus two auditory streams with physically identical stimuli, and changes in physical stimulus properties are reflected independently in the event-related potential (ERP) during overlapping time windows. These findings emphasize the necessity of controlling for stimulus properties when studying perceptual effects of streaming. Furthermore, the independence of the perceptual effect from stimulus properties suggests the neural correlates of streaming reflect a tone's relative position within a larger sequence (1st, 2nd, 3rd) rather than its acoustics. By clarifying the role of stimulus attributes along with perceptual changes, this study helps explain precisely how the brain is able to distinguish a sound source of interest in an auditory scene.
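The classic ABA streaming stimulus is easy to reconstruct in outline: repeating A-B-A triplets followed by a silent gap, with the A-B frequency separation as the key parameter that tips perception between one and two streams. The tone frequencies, durations, and separation below are illustrative, not the study's exact stimuli:

```python
import numpy as np

def aba_sequence(f_a=500.0, semitones=6, tone_dur=0.08, gap=0.04,
                 n_triplets=10, fs=44100):
    """Synthesize a classic ABA_ streaming sequence.

    All parameter values are illustrative. Small A-B separations tend to
    be heard as one coherent stream, large separations as two streams.
    """
    f_b = f_a * 2 ** (semitones / 12)          # B tone, `semitones` above A
    t = np.arange(int(tone_dur * fs)) / fs

    def tone(f):
        # Hann-windowed sinusoid to avoid onset/offset clicks
        return np.sin(2 * np.pi * f * t) * np.hanning(t.size)

    silence = np.zeros(int(gap * fs))
    triplet = np.concatenate([tone(f_a), silence, tone(f_b), silence,
                              tone(f_a), silence,
                              np.zeros(t.size), silence])  # the "_" slot
    return np.tile(triplet, n_triplets)

seq = aba_sequence()  # ready to scale and write to a WAV file
```

Holding such a sequence physically constant while perception flips between "one stream" and "two streams" is what lets the ERP analysis above separate perceptual state from stimulus properties.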

20.
J Exp Psychol Hum Percept Perform ; 38(6): 1371-9, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22545599

ABSTRACT

Communication and navigation in real environments rely heavily on the ability to distinguish objects in acoustic space. However, auditory spatial information is often corrupted by conflicting cues and noise such as acoustic reflections. Fortunately, the brain can apply mechanisms at multiple levels to emphasize target information and mitigate such interference. In a rapid phenomenon known as the precedence effect, reflections are perceptually fused with the veridical primary sound. The brain can also use spatial attention to highlight a target sound at the expense of distractors. Although attention has been shown to modulate many auditory perceptual phenomena, rarely does it alter how acoustic energy is first parsed into objects, as with the precedence effect. This brief report suggests that both endogenous (voluntary) and exogenous (stimulus-driven) spatial attention have a profound influence on the precedence effect depending on where they are oriented. Moreover, we observed that both types of attention could enhance perceptual fusion while only exogenous attention could hinder it. These results demonstrate that attention, by altering how auditory objects are formed, guides the basic perceptual organization of our acoustic environment.


Subject(s)
Attention; Auditory Perception; Space Perception; Volition; Adolescent; Adult; Cues; Humans; Noise; Reaction Time; Sound Localization; Visual Perception