Results 1 - 20 of 88

1.
J Neurosci ; 42(23): 4619-4628, 2022 06 08.
Article in English | MEDLINE | ID: mdl-35508382

ABSTRACT

Speech is often degraded by environmental noise or hearing impairment. People can compensate for degradation, but this requires cognitive effort. Previous research has identified frontotemporal networks involved in effortful perception, but materials in these works were also less intelligible, and so it is not clear whether activity reflected effort or intelligibility differences. We used functional magnetic resonance imaging to assess the degree to which spoken sentences were processed under distraction and whether this depended on speech quality even when intelligibility of degraded speech was matched to that of clear speech (close to 100%). On each trial, male and female human participants either attended to a sentence or to a concurrent multiple object tracking (MOT) task that imposed parametric cognitive load. Activity in bilateral anterior insula reflected task demands; during the MOT task, activity increased as cognitive load increased, and during speech listening, activity increased as speech became more degraded. In marked contrast, activity in bilateral anterior temporal cortex was speech selective and gated by attention when speech was degraded. In this region, performance of the MOT task with a trivial load blocked processing of degraded speech, whereas processing of clear speech was unaffected. As load increased, responses to clear speech in these areas declined, consistent with reduced capacity to process it. This result dissociates cognitive control from speech processing; substantially less cognitive control is required to process clear speech than is required to understand even very mildly degraded, 100% intelligible speech. Perceptual and control systems clearly interact dynamically during real-world speech comprehension.

SIGNIFICANCE STATEMENT Speech is often perfectly intelligible even when degraded, for example, by background sound, phone transmission, or hearing loss. How does degradation alter cognitive demands?
Here, we use fMRI to demonstrate a novel and critical role for cognitive control in the processing of mildly degraded but perfectly intelligible speech. We compare speech that is matched for intelligibility but differs in putative control demands, dissociating cognitive control from speech processing. We also impose a parametric cognitive load during perception, dissociating processes that depend on tasks from those that depend on available capacity. Our findings distinguish between frontal and temporal contributions to speech perception and reveal a hidden cost to processing mildly degraded speech, underscoring the importance of cognitive control for everyday speech comprehension.


Subject(s)
Hearing Loss, Speech Perception, Cognition, Female, Humans, Male, Noise, Speech Intelligibility/physiology, Speech Perception/physiology, Temporal Lobe/physiology
2.
Neuroimage ; 268: 119883, 2023 03.
Article in English | MEDLINE | ID: mdl-36657693

ABSTRACT

Listening in everyday life requires attention to be deployed dynamically - when listening is expected to be difficult and when relevant information is expected to occur - to conserve mental resources. Conserving mental resources may be particularly important for older adults who often experience difficulties understanding speech. In the current study, we use electro- and magnetoencephalography to investigate the neural and behavioral mechanics of attention regulation during listening and the effects that aging has on these. We first show in younger adults (17-31 years) that neural alpha oscillatory activity indicates when in time attention is deployed (Experiment 1) and that deployment depends on listening difficulty (Experiment 2). Experiment 3 investigated age-related changes in auditory attention regulation. Middle-aged and older adults (54-72 years) show successful attention regulation but appear to utilize timing information differently compared to younger adults (20-33 years). We show a notable age-group dissociation in recruited brain regions. In younger adults, superior parietal cortex underlies alpha power during attention regulation, whereas, in middle-aged and older adults, alpha power emerges from more ventro-lateral areas (posterior temporal cortex). This difference in the sources of alpha activity between age groups only occurred during task performance and was absent during rest (Experiment S1). In sum, our study suggests that middle-aged and older adults employ different neural control strategies compared to younger adults to regulate attention in time under listening challenges.
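Alpha-power time courses like those analyzed above are commonly obtained by band-limiting the M/EEG signal and taking the analytic-signal envelope. A minimal numpy-only sketch, assuming a generic single-channel signal; the 7-14 Hz band edges and sampling rate are illustrative, not the study's exact parameters:

```python
import numpy as np

def alpha_power(signal, fs, band=(7.0, 14.0)):
    """Instantaneous alpha-band power via FFT band-limiting and the
    analytic-signal envelope (a numpy-only Hilbert transform)."""
    n = len(signal)
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    spectrum = np.fft.fft(signal)
    # Zero everything outside the alpha band (both frequency signs),
    # then drop negative and double positive frequencies to form
    # the analytic signal.
    keep = (np.abs(freqs) >= band[0]) & (np.abs(freqs) <= band[1])
    spectrum[~keep] = 0.0
    spectrum[freqs < 0] = 0.0
    spectrum[freqs > 0] *= 2.0
    envelope = np.abs(np.fft.ifft(spectrum))
    return envelope ** 2

# Toy check: a 10 Hz sinusoid carries alpha power; a 40 Hz one does not.
fs = 250
t = np.arange(fs * 4) / fs
in_band = alpha_power(np.sin(2 * np.pi * 10 * t), fs).mean()
out_band = alpha_power(np.sin(2 * np.pi * 40 * t), fs).mean()
```

In practice a taper and source localization precede this step; the sketch only shows how an alpha-power time course arises from raw samples.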


Subject(s)
Aging, Speech Perception, Middle Aged, Humans, Aged, Aging/physiology, Auditory Perception/physiology, Brain/physiology, Magnetoencephalography, Temporal Lobe, Speech Perception/physiology
3.
J Neurosci ; 41(23): 5045-5055, 2021 06 09.
Article in English | MEDLINE | ID: mdl-33903222

ABSTRACT

Many older listeners have difficulty understanding speech in noise, when cues to speech-sound identity are less redundant. The amplitude envelope of speech fluctuates dramatically over time, and features such as the rate of amplitude change at onsets (attack) and offsets (decay) signal critical information about the identity of speech sounds. Aging is also thought to be accompanied by increases in cortical excitability, which may differentially alter sensitivity to envelope dynamics. Here, we recorded electroencephalography in younger and older human adults (of both sexes) to investigate how aging affects neural synchronization to 4 Hz amplitude-modulated noises with different envelope shapes (ramped: slow attack and sharp decay; damped: sharp attack and slow decay). We observed that subcortical responses did not differ between age groups, whereas older compared with younger adults exhibited larger cortical responses to sound onsets, consistent with an increase in auditory cortical excitability. Neural activity in older adults synchronized more strongly to rapid-onset, slow-offset (damped) envelopes, was less sinusoidal, and was more peaked. Younger adults demonstrated the opposite pattern, showing stronger synchronization to slow-onset, rapid-offset (ramped) envelopes, as well as a more sinusoidal neural response shape. The current results suggest that age-related changes in the excitability of auditory cortex alter responses to envelope dynamics. This may be part of the reason why older adults experience difficulty understanding speech in noise.

SIGNIFICANCE STATEMENT Many middle-aged and older adults report difficulty understanding speech when there is background noise, which can trigger social withdrawal and negative psychosocial health outcomes. The difficulty may be related to age-related changes in how the brain processes temporal sound features. We tested younger and older people on their sensitivity to different envelope shapes, using EEG.
Our results demonstrate that aging is associated with heightened sensitivity to sounds with a sharp attack and gradual decay, and sharper neural responses that deviate from the sinusoidal features of the stimulus, perhaps reflecting increased excitability in the aged auditory cortex. Altered responses to temporal sound features may be part of the reason why older adults often experience difficulty understanding speech in social situations.
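Ramped and damped stimuli of the kind described above can be sketched as 4 Hz amplitude-modulated noise whose per-cycle envelope is an exponential decay (damped) or its time reversal (ramped). This is an illustrative reconstruction; the half-life and sampling rate are assumptions, not the study's values:

```python
import numpy as np

def am_noise(shape, fs=44100, dur=1.0, rate=4.0, half_life=0.05, seed=0):
    """White noise amplitude-modulated at `rate` Hz.
    'damped': each cycle starts at 1 and decays exponentially
    (sharp attack, slow decay); 'ramped' is the time-reversed
    envelope (slow attack, sharp decay)."""
    rng = np.random.default_rng(seed)
    n_cycle = int(fs / rate)
    t = np.arange(n_cycle) / fs
    cycle = 0.5 ** (t / half_life)   # exponential decay within a cycle
    if shape == "ramped":
        cycle = cycle[::-1]          # reverse: slow attack, sharp decay
    env = np.tile(cycle, int(dur * rate))
    return env * rng.standard_normal(env.size), env

sig, env = am_noise("damped")
```

The two stimulus classes thus have identical long-term spectra and modulation rates and differ only in envelope asymmetry, which is what makes them useful for probing sensitivity to attack versus decay.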


Subject(s)
Aging/physiology, Auditory Cortex/physiology, Auditory Perception/physiology, Perceptual Masking/physiology, Adolescent, Adult, Aged, Aged, 80 and over, Female, Humans, Male, Middle Aged, Noise, Young Adult
4.
J Cogn Neurosci ; 34(6): 933-950, 2022 05 02.
Article in English | MEDLINE | ID: mdl-35258555

ABSTRACT

Older people with hearing problems often experience difficulties understanding speech in the presence of background sound. As a result, they may disengage from social situations, which has been associated with negative psychosocial health outcomes. Measuring listening (dis)engagement during challenging listening situations has received little attention thus far. We recruit young, normal-hearing human adults (both sexes) and investigate how speech intelligibility and engagement during naturalistic story listening are affected by the level of acoustic masking (12-talker babble) at different signal-to-noise ratios (SNRs). We observed that word-report scores were above 80% for all but the lowest SNR (-3 dB SNR) we tested, at which performance dropped to 54%. We also calculated intersubject correlation (ISC) using EEG data to identify dynamic spatial patterns of shared neural activity evoked by the stories. ISC has been used as a neural measure of participants' engagement with naturalistic materials. Our results show that ISC was stable across all but the lowest SNRs, despite reduced speech intelligibility. Comparing ISC and intelligibility demonstrated that word-report performance declined more strongly with decreasing SNR compared to ISC. Our measure of neural engagement suggests that individuals remain engaged in story listening despite missing words because of background noise. Our work provides a potentially fruitful approach to investigate listener engagement with naturalistic, spoken stories that may be used to investigate (dis)engagement in older adults with hearing impairment.
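In its simplest leave-one-out form, intersubject correlation of the kind described above correlates each participant's response with the average response of everyone else. A single-channel toy sketch (the study used a spatially resolved EEG variant; this only illustrates the logic):

```python
import numpy as np

def isc(data):
    """Leave-one-out intersubject correlation for one channel.
    `data` is (n_subjects, n_timepoints); each subject's time course
    is correlated with the mean time course of all other subjects."""
    data = np.asarray(data, float)
    scores = []
    for i in range(data.shape[0]):
        others = np.delete(data, i, axis=0).mean(axis=0)
        scores.append(np.corrcoef(data[i], others)[0, 1])
    return np.array(scores)

# Toy check: a shared stimulus-driven component plus subject noise
# yields high ISC; purely idiosyncratic responses yield ISC near zero.
rng = np.random.default_rng(1)
shared = rng.standard_normal(2000)
engaged = shared + 0.5 * rng.standard_normal((10, 2000))
idiosyncratic = rng.standard_normal((10, 2000))
```

High ISC indicates that the stimulus, rather than subject-specific processes, is driving the recorded activity, which is why it serves as an engagement measure.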


Subject(s)
Speech Perception, Acoustics, Aged, Auditory Perception, Female, Humans, Male, Noise, Speech Intelligibility, Speech Perception/physiology
5.
Cereb Cortex ; 31(6): 2952-2967, 2021 05 10.
Article in English | MEDLINE | ID: mdl-33511976

ABSTRACT

It is well established that movement planning recruits motor-related cortical brain areas in preparation for the forthcoming action. Given that an integral component to the control of action is the processing of sensory information throughout movement, we predicted that movement planning might also modulate early sensory cortical areas, readying them for sensory processing during the unfolding action. To test this hypothesis, we performed 2 human functional magnetic resonance imaging studies involving separate delayed movement tasks and focused on premovement neural activity in early auditory cortex, given the area's direct connections to the motor system and evidence that it is modulated by motor cortex during movement in rodents. We show that effector-specific information (i.e., movements of the left vs. right hand in Experiment 1 and movements of the hand vs. eye in Experiment 2) can be decoded, well before movement, from neural activity in early auditory cortex. We find that this motor-related information is encoded in a separate subregion of auditory cortex than sensory-related information and is present even when movements are cued visually instead of auditorily. These findings suggest that action planning, in addition to preparing the motor system for movement, involves selectively modulating primary sensory areas based on the intended action.


Subject(s)
Acoustic Stimulation/methods, Anticipation, Psychological/physiology, Auditory Cortex/diagnostic imaging, Auditory Cortex/physiology, Movement/physiology, Psychomotor Performance/physiology, Adult, Female, Humans, Magnetic Resonance Imaging/methods, Male, Young Adult
6.
J Acoust Soc Am ; 152(1): 31, 2022 07.
Article in English | MEDLINE | ID: mdl-35931555

ABSTRACT

Pitch discrimination is better for complex tones than pure tones, but how pitch discrimination differs between natural and artificial sounds is not fully understood. This study compared pitch discrimination thresholds for flat-spectrum harmonic complex tones with those for natural sounds played by musical instruments of three different timbres (violin, trumpet, and flute). To investigate whether natural familiarity with sounds of particular timbres affects pitch discrimination thresholds, this study recruited non-musicians and musicians who were trained on one of the three instruments. We found that flautists and trumpeters could discriminate smaller differences in pitch for artificial flat-spectrum tones, despite their unfamiliar timbre, than for sounds played by musical instruments, which are regularly heard in everyday life (particularly by musicians who play those instruments). Furthermore, thresholds were no better for the instrument a musician was trained to play than for other instruments, suggesting that even extensive experience listening to and producing sounds of particular timbres does not reliably improve pitch discrimination thresholds for those timbres. The results show that timbre familiarity provides minimal improvements to auditory acuity, and physical acoustics (e.g., the presence of equal-amplitude harmonics) determine pitch discrimination thresholds more than does experience with natural sounds and timbre-specific training.
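A flat-spectrum harmonic complex tone of the sort described above is simply a sum of equal-amplitude harmonics of a fundamental. A minimal sketch; the duration, harmonic count, and the half-semitone comparison step are illustrative, not the study's parameters:

```python
import numpy as np

def harmonic_complex(f0, fs=44100, dur=0.5, n_harmonics=10):
    """Flat-spectrum harmonic complex: equal-amplitude harmonics
    of f0 summed in sine phase, normalized by harmonic count."""
    t = np.arange(int(fs * dur)) / fs
    tone = sum(np.sin(2 * np.pi * f0 * h * t)
               for h in range(1, n_harmonics + 1))
    return tone / n_harmonics

# One discrimination trial: standard vs a slightly higher comparison.
standard = harmonic_complex(220.0)
comparison = harmonic_complex(220.0 * 2 ** (0.5 / 12))  # +0.5 semitone
```

In a threshold experiment the frequency step would be adapted trial by trial (e.g., a staircase) until the listener's just-noticeable difference is bracketed.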


Subject(s)
Music, Pitch Discrimination, Auditory Perception, Discrimination, Psychological, Pitch Perception, Recognition, Psychology
7.
Neuroimage ; 237: 118107, 2021 08 15.
Article in English | MEDLINE | ID: mdl-33933598

ABSTRACT

When speech is masked by competing sound, people are better at understanding what is said if the talker is familiar compared to unfamiliar. The benefit is robust, but how does processing of familiar voices facilitate intelligibility? We combined high-resolution fMRI with representational similarity analysis to quantify the difference in distributed activity between clear and masked speech. We demonstrate that brain representations of spoken sentences are less affected by a competing sentence when they are spoken by a friend or partner than by someone unfamiliar, effectively showing a cortical signal-to-noise ratio (SNR) enhancement for familiar voices. This effect correlated with the familiar-voice intelligibility benefit. We functionally parcellated auditory cortex and found that the most prominent familiar-voice advantage was manifest along the posterior superior and middle temporal gyri. Overall, our results demonstrate that experience-driven improvements in intelligibility are associated with enhanced multivariate pattern activity in posterior temporal cortex.
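Representational similarity analysis of the kind mentioned above typically starts from a representational dissimilarity matrix (RDM): 1 minus the correlation between condition-wise activity patterns. A toy sketch, not the authors' pipeline, in which a 'clean' pattern is degraded by increasing masker noise:

```python
import numpy as np

def pattern_dissimilarity(patterns):
    """RDM: 1 - Pearson correlation between every pair of condition
    patterns. `patterns` is (n_conditions, n_voxels)."""
    return 1.0 - np.corrcoef(patterns)

# Hypothetical voxel patterns: one clean condition and two masked
# conditions with increasing noise.
rng = np.random.default_rng(0)
clean = rng.standard_normal(500)
conditions = np.stack(
    [clean + s * rng.standard_normal(500) for s in (0.0, 0.5, 2.0)]
)
rdm = pattern_dissimilarity(conditions)
```

The logic of the familiar-voice result maps onto this sketch: a smaller clean-vs-masked dissimilarity corresponds to representations that are less disrupted by the competing talker.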


Subject(s)
Functional Neuroimaging, Recognition, Psychology/physiology, Social Perception, Speech Intelligibility/physiology, Speech Perception/physiology, Temporal Lobe/physiology, Adult, Aged, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Voice, Young Adult
8.
Neuroimage ; 238: 118238, 2021 09.
Article in English | MEDLINE | ID: mdl-34098064

ABSTRACT

Repeating structures forming regular patterns are common in sounds. Learning such patterns may enable accurate perceptual organization. In five experiments, we investigated the behavioral and neural signatures of rapid perceptual learning of regular sound patterns. We show that recurring (compared to novel) patterns are detected more quickly and increase sensitivity to pattern deviations and to the temporal order of pattern onset relative to a visual stimulus. Sustained neural activity reflected perceptual learning in two ways. Firstly, sustained activity increased earlier for recurring than novel patterns when participants attended to sounds, but not when they ignored them; this earlier increase mirrored the rapid perceptual learning we observed behaviorally. Secondly, the magnitude of sustained activity was generally lower for recurring than novel patterns, but only for trials later in the experiment, and independent of whether participants attended to or ignored sounds. The late manifestation of sustained activity reduction suggests that it is not directly related to rapid perceptual learning, but to a mechanism that does not require attention to sound. In sum, we demonstrate that the latency of sustained activity reflects rapid perceptual learning of auditory patterns, while the magnitude may reflect a result of learning, such as better prediction of learned auditory patterns.


Subject(s)
Auditory Cortex/physiology, Auditory Perception/physiology, Frontal Lobe/physiology, Pattern Recognition, Physiological/physiology, Acoustic Stimulation, Adult, Brain Mapping, Cues, Electroencephalography, Female, Humans, Male, Photic Stimulation, Reaction Time/physiology, Young Adult
9.
Psychol Sci ; 32(6): 903-915, 2021 06.
Article in English | MEDLINE | ID: mdl-33979256

ABSTRACT

When people listen to speech in noisy places, they can understand more words spoken by someone familiar, such as a friend or partner, than someone unfamiliar. Yet we know little about how voice familiarity develops over time. We exposed participants (N = 50) to three voices for different lengths of time (speaking 88, 166, or 478 sentences during familiarization and training). These previously heard voices were recognizable and more intelligible when presented with a competing talker than novel voices, even the voice previously heard for the shortest duration. However, recognition and intelligibility improved at different rates with longer exposures. Whereas recognition was similar for all previously heard voices, intelligibility was best for the voice that had been heard most extensively. The speech-intelligibility benefit for the most extensively heard voice (10%-15%) is as large as that reported for voices that are naturally very familiar (friends and spouses), demonstrating that the intelligibility of a voice can be improved substantially after only an hour of training.


Subject(s)
Speech Perception, Voice, Humans, Speech Intelligibility, Voice Recognition, Voice Training
10.
J Neurosci ; 39(44): 8679-8689, 2019 10 30.
Article in English | MEDLINE | ID: mdl-31533976

ABSTRACT

The functional organization of human auditory cortex can be probed by characterizing responses to various classes of sound at different anatomical locations. Along with histological studies this approach has revealed a primary field in posteromedial Heschl's gyrus (HG) with pronounced induced high-frequency (70-150 Hz) activity and short-latency responses that phase-lock to rapid transient sounds. Low-frequency neural oscillations are also relevant to stimulus processing and information flow, however, their distribution within auditory cortex has not been established. Alpha activity (7-14 Hz) in particular has been associated with processes that may differentially engage earlier versus later levels of the cortical hierarchy, including functional inhibition and the communication of sensory predictions. These theories derive largely from the study of occipitoparietal sources readily detectable in scalp electroencephalography. To characterize the anatomical basis and functional significance of less accessible temporal-lobe alpha activity we analyzed responses to sentences in seven human adults (4 female) with epilepsy who had been implanted with electrodes in superior temporal cortex. In contrast to primary cortex in posteromedial HG, a non-primary field in anterolateral HG was characterized by high spontaneous alpha activity that was strongly suppressed during auditory stimulation. Alpha-power suppression decreased with distance from anterolateral HG throughout superior temporal cortex, and was more pronounced for clear compared to degraded speech. This suppression could not be accounted for solely by a change in the slope of the power spectrum. 
The differential manifestation and stimulus-sensitivity of alpha oscillations across auditory fields should be accounted for in theories of their generation and function.

SIGNIFICANCE STATEMENT To understand how auditory cortex is organized in support of perception, we recorded from patients implanted with electrodes for clinical reasons. This allowed measurement of activity in brain regions at different levels of sensory processing. Oscillations in the alpha range (7-14 Hz) have been associated with functions including sensory prediction and inhibition of regions handling irrelevant information, but their distribution within auditory cortex is not known. A key finding was that these oscillations dominated in one particular non-primary field, anterolateral Heschl's gyrus, and were suppressed when subjects listened to sentences. These results build on our knowledge of the functional organization of auditory cortex and provide anatomical constraints on theories of the generation and function of alpha oscillations.


Subject(s)
Alpha Rhythm, Speech Perception/physiology, Temporal Lobe/physiology, Acoustic Stimulation, Adult, Auditory Cortex/physiology, Auditory Pathways/physiology, Evoked Potentials, Auditory, Female, Gamma Rhythm, Humans, Male, Middle Aged, Young Adult
11.
J Neurosci ; 38(24): 5466-5477, 2018 06 13.
Article in English | MEDLINE | ID: mdl-29773757

ABSTRACT

The ability to detect regularities in sound (i.e., recurring structure) is critical for effective perception, enabling, for example, change detection and prediction. Two seemingly unconnected lines of research concern the neural operations involved in processing regularities: one investigates how neural activity synchronizes with temporal regularities (e.g., frequency modulation; FM) in sounds, whereas the other focuses on increases in sustained activity during stimulation with repeating tone-frequency patterns. In three electroencephalography studies with male and female human participants, we investigated whether neural synchronization and sustained neural activity are dissociable, or whether they are functionally interdependent. Experiment I demonstrated that neural activity synchronizes with temporal regularity (FM) in sounds, and that sustained activity increases concomitantly. In Experiment II, phase coherence of FM in sounds was parametrically varied. Although neural synchronization was more sensitive to changes in FM coherence, such changes led to a systematic modulation of both neural synchronization and sustained activity, with magnitude increasing as coherence increased. In Experiment III, participants either performed a duration categorization task on the sounds, or a visual object tracking task to distract attention. Neural synchronization was observed regardless of task, whereas the sustained response was observed only when attention was on the auditory task, not under (visual) distraction. The results suggest that neural synchronization and sustained activity levels are functionally linked: both are sensitive to regularities in sounds. However, neural synchronization might reflect a more sensory-driven response to regularity, compared with sustained activity which may be influenced by attentional, contextual, or other experiential factors.

SIGNIFICANCE STATEMENT Optimal perception requires that the auditory system detects regularities in sounds.
Synchronized neural activity and increases in sustained neural activity both appear to index the detection of a regularity, but the functional interrelation of these two neural signatures is unknown. In three electroencephalography experiments, we measured both signatures concomitantly while listeners were presented with sounds containing frequency modulations that differed in their regularity. We observed that both neural signatures are sensitive to temporal regularity in sounds, although they functionally decouple when a listener is distracted by a demanding visual task. Our data suggest that neural synchronization reflects a more automatic response to regularity compared with sustained activity, which may be influenced by attentional, contextual, or other experiential factors.


Subject(s)
Auditory Perception/physiology, Brain/physiology, Cortical Synchronization/physiology, Adolescent, Adult, Electroencephalography, Female, Humans, Male, Young Adult
12.
J Neurosci ; 38(8): 1989-1999, 2018 02 21.
Article in English | MEDLINE | ID: mdl-29358362

ABSTRACT

Optimal perception requires efficient and adaptive neural processing of sensory input. Neurons in nonhuman mammals adapt to the statistical properties of acoustic feature distributions such that they become sensitive to sounds that are most likely to occur in the environment. However, whether human auditory responses adapt to stimulus statistical distributions and how aging affects adaptation to stimulus statistics is unknown. We used MEG to study how exposure to different distributions of sound levels affects adaptation in auditory cortex of younger (mean: 25 years; n = 19) and older (mean: 64 years; n = 20) adults (male and female). Participants passively listened to two sound-level distributions with different modes (either 15 or 45 dB sensation level). In a control block with long interstimulus intervals, allowing neural populations to recover from adaptation, neural response magnitudes were similar between younger and older adults. Critically, both age groups demonstrated adaptation to sound-level stimulus statistics, but adaptation was altered for older compared with younger people: in the older group, neural responses continued to be sensitive to sound level under conditions in which responses were fully adapted in the younger group. The lack of full adaptation to the statistics of the sensory environment may be a physiological mechanism underlying the known difficulty that older adults have with filtering out irrelevant sensory information.

SIGNIFICANCE STATEMENT Behavior requires efficient processing of acoustic stimulation. Animal work suggests that neurons accomplish efficient processing by adjusting their response sensitivity depending on statistical properties of the acoustic environment. Little is known about the extent to which this adaptation to stimulus statistics generalizes to humans, particularly to older humans. We used MEG to investigate how aging influences adaptation to sound-level statistics.
Listeners were presented with sounds drawn from sound-level distributions with different modes (15 vs 45 dB). Auditory cortex neurons adapted to sound-level statistics in younger and older adults, but adaptation was incomplete in older people. The data suggest that the aging auditory system does not fully capitalize on the statistics available in sound environments to tune the perceptual system dynamically.
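The incomplete adaptation described above can be illustrated with a toy gain-control model in which the response tracks deviations from a leaky running mean of recent sound levels. This is a sketch under stated assumptions, not the study's analysis: the `adaptation` parameter (full vs partial) and the time constant `tau` are hypothetical knobs.

```python
import numpy as np

def adapted_response(levels_db, tau=20, adaptation=1.0):
    """Toy rate model that adapts its gain to the running mean of
    recent sound levels: response = level - adaptation * running_mean.
    With adaptation=1.0 responses track deviations from the prevailing
    statistics; adaptation<1.0 leaves residual sensitivity to absolute
    level, analogous to the incomplete adaptation reported for older
    adults."""
    running = np.zeros_like(levels_db, dtype=float)
    m = levels_db[0]
    for i, lv in enumerate(levels_db):
        m += (lv - m) / tau        # leaky estimate of the mean level
        running[i] = lv - adaptation * m
    return running

# Two stimulus distributions with different modes, as in the study.
rng = np.random.default_rng(0)
low_mode = rng.normal(15, 5, 1000)    # mode near 15 dB
high_mode = rng.normal(45, 5, 1000)   # mode near 45 dB
```

With full adaptation the mean response is the same for both distributions (only deviations matter); with partial adaptation the high-mode distribution still drives larger mean responses, mirroring the older group's residual level sensitivity.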


Subject(s)
Adaptation, Physiological/physiology, Aging/pathology, Auditory Cortex/physiology, Auditory Perception/physiology, Acoustic Stimulation, Acoustics, Adolescent, Adult, Aged, Evoked Potentials, Auditory/physiology, Female, Humans, Male, Middle Aged, Young Adult
13.
J Acoust Soc Am ; 146(5): 3487, 2019 11.
Article in English | MEDLINE | ID: mdl-31795686

ABSTRACT

The ability to segregate simultaneous speech streams is crucial for successful communication. Recent studies have demonstrated that participants can report 10%-20% more words spoken by naturally familiar (e.g., friends or spouses) than unfamiliar talkers in two-voice mixtures. This benefit is commensurate with one of the largest benefits to speech intelligibility currently known-that which is gained by spatially separating two talkers. However, because of differences in the methods of these previous studies, the relative benefits of spatial separation and voice familiarity are unclear. Here, the familiar-voice benefit and spatial release from masking are directly compared, and it is examined whether and how these two cues interact with one another. Talkers were recorded while speaking sentences from a published closed-set "matrix" task, and then listeners were presented with three different sentences played simultaneously. Each target sentence was played at 0° azimuth, and two masker sentences were symmetrically separated about the target. On average, participants reported 10%-30% more words correctly when the target sentence was spoken in a familiar than unfamiliar voice (collapsed over spatial separation conditions); it was found that participants gain a similar benefit from a familiar target as when an unfamiliar voice is separated from two symmetrical maskers by approximately 15° azimuth.

14.
Psychol Sci ; 29(10): 1575-1583, 2018 10.
Article in English | MEDLINE | ID: mdl-30096018

ABSTRACT

We can recognize familiar people by their voices, and familiar talkers are more intelligible than unfamiliar talkers when competing talkers are present. However, whether the acoustic voice characteristics that permit recognition and those that benefit intelligibility are the same or different is unknown. Here, we recruited pairs of participants who had known each other for 6 months or longer and manipulated the acoustic correlates of two voice characteristics (vocal tract length and glottal pulse rate). These had different effects on explicit recognition of and the speech-intelligibility benefit realized from familiar voices. Furthermore, even when explicit recognition of familiar voices was eliminated, they were still more intelligible than unfamiliar voices, demonstrating that familiar voices do not need to be explicitly recognized to benefit intelligibility. Processing familiar-voice information appears therefore to depend on multiple, at least partially independent, systems that are recruited depending on the perceptual goal of the listener.


Subject(s)
Attention/physiology, Speech Intelligibility/physiology, Speech Perception/physiology, Voice/physiology, Acoustic Stimulation, Female, Humans, Male, Recognition, Psychology, Young Adult
15.
Int J Audiol ; 57(7): 483-492, 2018 07.
Article in English | MEDLINE | ID: mdl-29415585

ABSTRACT

OBJECTIVE: We investigated whether speech intelligibility and listening effort for hearing-aid users is affected by semantic context and hearing-aid setting. DESIGN: Participants heard target sentences spoken in a reverberant background of cafeteria noise and competing speech. Participants reported each sentence verbally. Eight participants also rated listening effort after each sentence. Sentence topic was either the same as, or different from, the previous target sentence. STUDY SAMPLE: Twenty participants with sensorineural hearing loss were fit binaurally with Signia receiver-in-the-canal hearing aids. Participants performed the task twice: once using the hearing aid's omnidirectional setting and once using the "Reverberant Room" setting, designed to aid listening in reverberant environments. RESULTS: Participants achieved better speech intelligibility for same-topic than different-topic sentences, and when they used the "Reverberant Room" than the omnidirectional hearing-aid setting. Participants who rated effort showed a reliable reduction in listening effort for same-topic sentences and for the "Reverberant Room" hearing-aid setting. The improvement in speech intelligibility from semantic context (i.e. same-topic compared to different-topic sentences) was greater than the improvement gained from changing hearing-aid setting. CONCLUSIONS: These findings highlight the enormous potential of cognitive (specifically, semantic) factors for improving speech intelligibility and reducing perceived listening effort in noise for hearing-aid users.


Subject(s)
Hearing Aids/psychology, Hearing Loss, Sensorineural/psychology, Semantics, Speech Intelligibility, Speech Perception, Adult, Aged, Aged, 80 and over, Auditory Threshold, Female, Hearing Loss, Sensorineural/therapy, Humans, Male, Middle Aged
16.
Cereb Cortex ; 26(2): 708-30, 2016 Feb.
Artículo en Inglés | MEDLINE | ID: mdl-25576538

ABSTRACT

Object-manipulation tasks (e.g., drinking from a cup) typically involve sequencing together a series of distinct motor acts (e.g., reaching toward, grasping, lifting, and transporting the cup) in order to accomplish some overarching goal (e.g., quenching thirst). Although several studies in humans have investigated the neural mechanisms supporting the planning of visually guided movements directed toward objects (such as reaching or pointing), only a handful have examined how manipulatory sequences of actions (those that occur after an object has been grasped) are planned and represented in the brain. Here, using event-related functional MRI and pattern decoding methods, we investigated the neural basis of real-object manipulation using a delayed-movement task in which participants first prepared and then executed different object-directed action sequences that varied either in their complexity or final spatial goals. Consistent with previous reports of preparatory brain activity in non-human primates, we found that activity patterns in several frontoparietal areas reliably predicted entire action sequences in advance of movement. Notably, we found that similar sequence-related information could also be decoded from pre-movement signals in object- and body-selective occipitotemporal cortex (OTC). These findings suggest that both frontoparietal and occipitotemporal circuits are engaged in transforming object-related information into complex, goal-directed movements.
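The core of the decoding analysis described above can be illustrated with a minimal sketch: classify which action sequence a voxel pattern came from, holding one run out for testing. This is a generic nearest-centroid correlation classifier on synthetic data, an assumed stand-in for the authors' actual pattern-decoding pipeline; all patterns, labels, and parameters below are invented for illustration.

```python
# Minimal leave-one-run-out pattern decoding sketch: nearest-centroid
# classification on correlation, applied to synthetic "voxel patterns".
import random

def pearson_r(a, b):
    # Pearson correlation between two equal-length vectors.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def decode_accuracy(patterns):
    """patterns: dict label -> list of runs, each run a voxel-pattern list.
    For each held-out run, average the remaining runs into per-label
    centroids and classify by highest correlation with a centroid."""
    labels = list(patterns)
    n_runs = len(patterns[labels[0]])
    correct = total = 0
    for left_out in range(n_runs):
        centroids = {}
        for lab in labels:
            train = [r for i, r in enumerate(patterns[lab]) if i != left_out]
            centroids[lab] = [sum(v) / len(train) for v in zip(*train)]
        for lab in labels:
            test = patterns[lab][left_out]
            guess = max(labels, key=lambda c: pearson_r(test, centroids[c]))
            correct += guess == lab
            total += 1
    return correct / total

# Synthetic example: two "action sequences" with distinct mean patterns plus noise.
random.seed(0)
base = {"grasp-lift": [1.0, 0.2, 0.8, 0.1],
        "grasp-transport": [0.1, 0.9, 0.2, 1.0]}
patterns = {lab: [[v + random.gauss(0, 0.1) for v in base[lab]] for _ in range(4)]
            for lab in base}
print(decode_accuracy(patterns))  # well above the 0.5 chance level
```

In the study itself, above-chance decoding from *pre-movement* activity is what licenses the claim that these regions represent the upcoming sequence before it is executed.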


Subject(s)
Cerebral Cortex/physiology , Hand Strength/physiology , Intention , Movement/physiology , Nerve Net/physiology , Psychomotor Performance/physiology , Adult , Analysis of Variance , Brain Mapping , Cerebral Cortex/blood supply , Female , Functional Laterality , Humans , Image Processing, Computer-Assisted , Linear Models , Magnetic Resonance Imaging , Male , Oxygen/blood , Young Adult
17.
J Cogn Neurosci; 28(8): 1210-27, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27054397

ABSTRACT

Every day we generate motor responses that are timed with external cues. This phenomenon of sensorimotor synchronization has been simplified and studied extensively using finger tapping sequences that are executed in synchrony with auditory stimuli. The predictive saccade paradigm closely resembles the finger tapping task. In this paradigm, participants follow a visual target that "steps" between two fixed locations on a visual screen at predictable interstimulus intervals (ISIs). Eventually, the time from target appearance to saccade initiation (i.e., saccadic RT) becomes predictive with values nearing 0 msec. Unlike the finger tapping literature, neural control of predictive behavior described within the eye movement literature has not been well established and is inconsistent, especially between neuroimaging and patient lesion studies. To resolve these discrepancies, we used fMRI to investigate the neural correlates of predictive saccades by contrasting brain areas involved with behavior generated from the predictive saccade task with behavior generated from a reactive saccade task (saccades are generated toward targets that are unpredictably timed). We observed striking differences in neural recruitment between reactive and predictive conditions: Reactive saccades recruited oculomotor structures, as predicted, whereas predictive saccades recruited brain structures that support timing in motor responses, such as the crus I of the cerebellum, and structures commonly associated with the default mode network. Therefore, our results were more consistent with those found in the finger tapping literature.
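The behavioral signature in this paradigm is a shift of saccadic RT toward (and past) zero as the fixed ISI becomes predictable. A minimal sketch of how such trials might be labeled is below; the ~100 ms cutoff is a common convention for anticipatory saccades and is an assumption here, not this paper's exact criterion, and the RT values are invented.

```python
# Sketch: label saccades as "predictive" vs "reactive" from saccadic RT (ms).
# Negative RT means the eye moved before the target actually stepped.
PREDICTIVE_CUTOFF_MS = 100  # assumed conventional cutoff, not the paper's value

def classify_saccade(rt_ms):
    return "predictive" if rt_ms < PREDICTIVE_CUTOFF_MS else "reactive"

def proportion_predictive(rts):
    return sum(classify_saccade(rt) == "predictive" for rt in rts) / len(rts)

# Early trials show reactive latencies (~200 ms); with a fixed inter-step
# interval, later trials cluster near 0 ms.
early = [210, 195, 230, 205]
late = [40, -15, 5, 60]
print(proportion_predictive(early))  # 0.0
print(proportion_predictive(late))   # 1.0
```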


Subject(s)
Anticipation, Psychological/physiology , Brain/physiology , Saccades/physiology , Adolescent , Adult , Auditory Perception/physiology , Brain/diagnostic imaging , Female , Fingers/physiology , Humans , Magnetic Resonance Imaging , Male , Motor Activity/physiology , Neuropsychological Tests , Reaction Time , Visual Perception/physiology , Young Adult
18.
J Acoust Soc Am; 139(3): 1037-46, 2016 Mar.
Article in English | MEDLINE | ID: mdl-27036241

ABSTRACT

When the spatial location or identity of a sound is held constant, it is not masked as effectively by competing sounds. This suggests that experience with a particular voice over time might facilitate perceptual organization in multitalker environments. The current study examines whether listeners benefit from experience with a voice only when it is the target, or also when it is a masker, using diotic presentation and a closed-set task (coordinate response measure). A reliable interaction was observed such that, in two-talker mixtures, consistency of masker or target voice over 3-7 trials significantly benefited target recognition performance, whereas in three-talker mixtures, target, but not masker, consistency was beneficial. Overall, this work suggests that voice consistency improves intelligibility, although somewhat differently when two talkers, compared to three talkers, are present, suggesting that consistent-voice information facilitates intelligibility in at least two different ways. Listeners can use a template-matching strategy to extract a known voice from a mixture when it is the target. However, consistent-voice information facilitates segregation only when two, but not three, talkers are present.


Subject(s)
Noise/adverse effects , Perceptual Masking , Speech Acoustics , Speech Intelligibility , Voice Quality , Acoustic Stimulation , Adolescent , Audiometry, Pure-Tone , Audiometry, Speech , Auditory Threshold , Cues , Female , Humans , Male , Pattern Recognition, Physiological , Recognition, Psychology , Time Factors , Young Adult
19.
J Neurosci; 33(10): 4339-48, 2013 Mar 06.
Article in English | MEDLINE | ID: mdl-23467350

ABSTRACT

The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection, and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations within a multivoxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was used to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during vocalization, compared with during passive listening. One network of regions appears to encode an "error signal" regardless of acoustic features of the error: this network, including right angular gyrus, right supplementary motor area, and bilateral cerebellum, yielded consistent neural patterns across acoustically different, distorted feedback types, only during articulation (not during passive listening). In contrast, a frontotemporal network appears sensitive to the speech features of auditory stimuli during passive listening; this preference for speech features was diminished when the same stimuli were presented as auditory concomitants of vocalization. A third network, showing a distinct functional pattern from the other two, appears to capture aspects of both neural response profiles. Together, our findings suggest that auditory feedback processing during speech motor control may rely on multiple, interactive, functionally differentiated neural systems.
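The logic of the pattern-similarity analysis above is that a region encoding an abstract "error signal" should show similar voxel patterns for two acoustically different feedback distortions during vocalization, while an acoustics-driven region should not. A toy sketch of that comparison follows; the voxel patterns and condition names are synthetic illustrations, not the study's data or its actual whole-brain searchlight procedure.

```python
# Sketch of the neural-pattern-similarity logic: correlate voxel patterns
# for two acoustically different feedback distortions. High similarity
# during speaking (but not listening) suggests an abstract error signal.
def pearson_r(a, b):
    # Pearson correlation between two equal-length vectors.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Synthetic patterns for a hypothetical "error-signal" region:
speak_pitch   = [0.9, 0.1, 0.8, 0.2, 0.7]
speak_formant = [0.85, 0.15, 0.75, 0.25, 0.72]  # similar: shared error code
listen_pitch   = [0.9, 0.1, 0.8, 0.2, 0.7]
listen_formant = [0.1, 0.8, 0.2, 0.9, 0.15]     # dissimilar: acoustics-driven

r_speak = pearson_r(speak_pitch, speak_formant)
r_listen = pearson_r(listen_pitch, listen_formant)
print(r_speak > r_listen)  # True: the error-like profile described above
```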


Subject(s)
Auditory Pathways/physiology , Auditory Perception/physiology , Brain Mapping , Brain/physiology , Feedback, Sensory/physiology , Speech/physiology , Acoustic Stimulation , Adult , Auditory Pathways/blood supply , Brain/blood supply , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Oxygen/blood , Reaction Time/physiology , Young Adult
20.
Neuroimage; 101: 76-86, 2014 Nov 01.
Article in English | MEDLINE | ID: mdl-24999040

ABSTRACT

An important aspect of hearing is the degree to which listeners have to deploy effort to understand speech. One promising measure of listening effort is task-evoked pupil dilation. Here, we use functional magnetic resonance imaging (fMRI) to identify the neural correlates of pupil dilation during comprehension of degraded spoken sentences in 17 normal-hearing listeners. Subjects listened to sentences degraded in three different ways: the target female speech was masked by fluctuating noise, by speech from a single male speaker, or the target speech was noise-vocoded. The degree of degradation was individually adapted such that 50% or 84% of the sentences were intelligible. Control conditions included clear speech in quiet, and silent trials. The peak pupil dilation was larger for the 50% compared to the 84% intelligibility condition, and largest for speech masked by the single-talker masker, followed by speech masked by fluctuating noise, and smallest for noise-vocoded speech. Activation in the bilateral superior temporal gyrus (STG) showed the same pattern, with most extensive activation for speech masked by the single-talker masker. Larger peak pupil dilation was associated with more activation in the bilateral STG, bilateral ventral and dorsal anterior cingulate cortex and several frontal brain areas. A subset of the temporal region sensitive to pupil dilation was also sensitive to speech intelligibility and degradation type. These results show that pupil dilation during speech perception in challenging conditions reflects both auditory and cognitive processes that are recruited to cope with degraded speech and the need to segregate target speech from interfering sounds.
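The peak-pupil-dilation measure used as the effort index above is typically computed by subtracting a pre-sentence baseline from the pupil trace and taking the maximum within the trial window. A minimal sketch is below; the sampling rate, window, and trace values are illustrative assumptions, not the study's recording parameters.

```python
# Sketch of a standard peak-pupil-dilation measure: baseline-correct the
# trace against the pre-stimulus samples, then take the trial maximum.
def peak_pupil_dilation(trace, baseline_samples):
    baseline = sum(trace[:baseline_samples]) / baseline_samples
    return max(v - baseline for v in trace[baseline_samples:])

# Toy trace (arbitrary units): a short pre-sentence baseline, then the
# dilation that builds up while the listener processes degraded speech.
trace = [4.0, 4.1, 3.9, 4.0] + [4.2, 4.6, 5.0, 5.3, 5.1, 4.8]
print(round(peak_pupil_dilation(trace, 4), 2))  # 1.3
```

Larger values of this measure in the 50% condition than in the 84% condition are what the study relates to STG and cingulate activation.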


Subject(s)
Frontal Lobe/physiology , Functional Neuroimaging/methods , Pupil/physiology , Speech Intelligibility/physiology , Speech Perception/physiology , Temporal Lobe/physiology , Adult , Eye Movement Measurements , Female , Humans , Magnetic Resonance Imaging , Male , Young Adult