Results 1 - 20 of 58
1.
Neuroreport; 34(17): 811-816, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-37823446

ABSTRACT

The virtual reality (VR) environment is claimed to be highly immersive; participants may thus be unaware of their real, external world. The present study presented irrelevant auditory stimuli while participants were engaged in an easy or difficult visual working memory (WM) task within the VR environment. The difficult WM task was expected to be immersive and to require many cognitive resources, leaving few available for the processing of task-irrelevant auditory stimuli. Sixteen young adults wore a 3D head-mounted VR device. In the easy WM task, the stimuli were nameable objects; in the difficult WM task, the stimuli were abstract objects that could not be easily named. A novel paradigm using event-related potentials (ERPs) was implemented to examine the feasibility of quantifying the extent of processing of task-irrelevant stimuli occurring outside of the VR environment. Auditory stimuli irrelevant to the WM task were presented concurrently every 1.5 or 12 s in separate conditions. Performance on the WM task varied with task difficulty, with accuracy significantly lower during the difficult task. The auditory ERPs consisted of an N1 and a later P2/P3a deflection, both of which were larger when the auditory stimuli were presented slowly. ERPs were unaffected by task difficulty, but significant correlations with task performance were found: N1 and P2/P3a amplitudes were smallest when performance on the easy WM task was highest. It is possible that even the easy WM task was so immersive and demanded so many processing resources that few were available for the co-processing of the task-irrelevant auditory stimuli.
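The abstract does not describe the authors' analysis pipeline, but the basic ERP measure it relies on is straightforward: epoch the continuous EEG around the onsets of the task-irrelevant auditory probes, baseline-correct, and average separately for the 1.5 s and 12 s presentation rates. The sketch below is a minimal illustration under those assumptions; the function name, array shapes, and sampling rate are hypothetical.

```python
import numpy as np

def erp_average(eeg, onsets, sfreq, tmin=-0.1, tmax=0.5):
    """Epoch continuous EEG (channels x samples) around stimulus onsets (in s),
    baseline-correct to the pre-stimulus interval, and average across trials."""
    start, stop = int(tmin * sfreq), int(tmax * sfreq)
    epochs = []
    for t in onsets:
        i = int(round(t * sfreq))
        if i + start < 0 or i + stop > eeg.shape[1]:
            continue  # skip onsets too close to the recording edges
        ep = eeg[:, i + start:i + stop].astype(float)
        ep -= ep[:, :-start].mean(axis=1, keepdims=True)  # baseline = pre-stimulus mean
        epochs.append(ep)
    return np.mean(epochs, axis=0)  # channels x samples ERP

# Hypothetical usage: separate averages for the 1.5 s and 12 s presentation rates
# erp_fast = erp_average(eeg, onsets_fast, sfreq=500.0)
# erp_slow = erp_average(eeg, onsets_slow, sfreq=500.0)
```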


Subject(s)
Evoked Potentials, Short-Term Memory, Young Adult, Humans, Acoustic Stimulation, Electroencephalography
2.
Ann N Y Acad Sci; 1530(1): 110-123, 2023 12.
Article in English | MEDLINE | ID: mdl-37823710

ABSTRACT

The generalization of music training to unrelated nonmusical domains is well established and may reflect musicians' superior ability to regulate attention. We investigated the temporal deployment of attention in musicians and nonmusicians using scalp-recording of event-related potentials in an attentional blink (AB) paradigm. Participants listened to rapid sequences of stimuli and identified target and probe sounds. The AB was defined as a probe identification deficit when the probe closely follows the target. The sequence of stimuli was preceded by a neutral or informative cue about the probe position within the sequence. Musicians outperformed nonmusicians in identifying the target and probe. In both groups, cueing improved target and probe identification and reduced the AB. The informative cue elicited a sustained potential, which was more prominent in musicians than nonmusicians over left temporal areas and yielded a larger N1 amplitude elicited by the target. The N1 was larger in musicians than nonmusicians, and its amplitude over the left frontocentral cortex of musicians correlated with accuracy. Together, these results reveal musicians' superior ability to regulate attention, allowing them to prepare for incoming stimuli, thereby improving sound object identification. This capacity to manage attentional resources to optimize task performance may generalize to nonmusical activities.
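As a minimal sketch of how the behavioral AB could be quantified from trial-level data, the function below computes probe accuracy conditional on correct target identification and takes the short-lag deficit relative to long lags, split by cue type and group. The column names and file are hypothetical; the abstract does not specify the authors' exact scoring.

```python
import pandas as pd

def ab_magnitude(trials: pd.DataFrame) -> pd.DataFrame:
    """Attentional blink magnitude from trial-level data.

    Expected columns (hypothetical names): group, cue ('informative'/'neutral'),
    lag ('short'/'long'), target_correct (0/1), probe_correct (0/1).
    """
    # Probe accuracy is conditioned on correct target identification (T1-correct trials)
    t1_correct = trials[trials["target_correct"] == 1]
    acc = (t1_correct.groupby(["group", "cue", "lag"])["probe_correct"]
                     .mean().unstack("lag"))
    # AB magnitude: drop in probe report at short relative to long target-probe lags
    acc["ab_magnitude"] = acc["long"] - acc["short"]
    return acc

# e.g., ab_magnitude(pd.read_csv("ab_trials.csv"))  # hypothetical file
```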


Subject(s)
Attentional Blink, Music, Humans, Attention/physiology, Cues, Auditory Perception/physiology, Acoustic Stimulation/methods
3.
Cereb Cortex; 33(18): 10181-10193, 2023 09 09.
Article in English | MEDLINE | ID: mdl-37522256

ABSTRACT

To what extent does incidental encoding of auditory stimuli influence subsequent episodic memory for the same stimuli? We examined whether the mismatch negativity (MMN), an event-related potential generated by auditory change detection, is correlated with participants' ability to discriminate those stimuli (i.e. targets) from highly similar lures and from dissimilar foils. We measured the MMN in 30 young adults (18-32 years, 18 females) using a passive auditory oddball task with standard and deviant 5-tone sequences differing in pitch contour. After exposure, all participants completed an incidental memory test for old targets, lures, and foils. As expected, participants at test exhibited high sensitivity in recognizing target items relative to foils and lower sensitivity in recognizing target items relative to lures. Notably, we found a significant correlation between MMN amplitude and lure discrimination, but not foil discrimination. Our investigation shows that our capacity to discriminate sensory inputs at encoding, as measured by the MMN, translates into precision in memory for those inputs.
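The key quantities here are recognition sensitivity for targets versus lures and versus foils, correlated with MMN amplitude across participants. The sketch below shows one conventional way to compute these, assuming a standard d' with extreme-proportion correction; all input names, the trial count, and the correction choice are assumptions, not the authors' reported method.

```python
import numpy as np
from scipy.stats import norm, pearsonr

def dprime(hit_rate, fa_rate, n_trials):
    """Recognition sensitivity (d') with a standard correction for proportions of 0 or 1."""
    hr = np.clip(hit_rate, 1.0 / (2 * n_trials), 1 - 1.0 / (2 * n_trials))
    fa = np.clip(fa_rate, 1.0 / (2 * n_trials), 1 - 1.0 / (2 * n_trials))
    return norm.ppf(hr) - norm.ppf(fa)

# Hypothetical per-participant inputs (arrays with one value per subject):
#   target_hits : proportion of old targets endorsed as "old"
#   lure_fa     : proportion of similar lures endorsed as "old"
#   foil_fa     : proportion of dissimilar foils endorsed as "old"
#   mmn_amp     : MMN amplitude (µV) from the passive oddball task
# d_lure = dprime(target_hits, lure_fa, n_trials=60)   # target-lure discrimination
# d_foil = dprime(target_hits, foil_fa, n_trials=60)   # target-foil discrimination
# r_lure, p_lure = pearsonr(mmn_amp, d_lure)           # the reported lure correlation
# r_foil, p_foil = pearsonr(mmn_amp, d_foil)           # expected to be weaker or absent
```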


Subject(s)
Auditory Perception, Auditory Evoked Potentials, Female, Young Adult, Humans, Acoustic Stimulation, Electroencephalography, Evoked Potentials
4.
Sci Adv; 9(17): eadg7056, 2023 04 28.
Article in English | MEDLINE | ID: mdl-37126550

ABSTRACT

Musicianship can mitigate age-related declines in audiovisual speech-in-noise perception. We tested whether this benefit originates from functional preservation or functional compensation by comparing fMRI responses of older musicians, older nonmusicians, and young nonmusicians identifying noise-masked audiovisual syllables. Older musicians outperformed older nonmusicians and showed performance comparable to that of young nonmusicians. Notably, older musicians retained neural specificity of speech representations in sensorimotor areas similar to that of young nonmusicians, whereas older nonmusicians showed degraded neural representations. In the same regions, older musicians showed higher neural alignment to young nonmusicians than did older nonmusicians, and this alignment was associated with their training intensity. In older nonmusicians, a greater degree of neural alignment predicted better performance. In addition, older musicians showed greater activation in frontal-parietal, speech motor, and visual motion regions, and greater deactivation in the angular gyrus, than older nonmusicians, which predicted higher neural alignment in sensorimotor areas. Together, these findings suggest that the musicianship-related benefit in audiovisual speech-in-noise processing is rooted in the preservation of youth-like representations in sensorimotor regions.


Subject(s)
Speech Perception, Speech, Adolescent, Humans, Aged, Acoustic Stimulation, Noise, Speech Perception/physiology, Aging
5.
Cereb Cortex; 33(10): 6465-6473, 2023 05 09.
Article in English | MEDLINE | ID: mdl-36702477

ABSTRACT

Absolute pitch (AP) is the ability to rapidly label pitch without an external reference. The speed of AP labeling may be related to faster sensory processing. We compared the time needed for auditory processing in AP musicians, non-AP musicians, and nonmusicians (NM) using high-density electroencephalographic recordings. Participants responded to pure tones and sung voice. Stimuli evoked a negative deflection peaking at ~100 ms post-stimulus onset (N1), followed by a positive deflection peaking at ~200 ms (P2). N1 latency was shortest in AP musicians, intermediate in non-AP musicians, and longest in NM. Source analyses showed decreased auditory cortex and increased frontal cortex contributions to the N1 for complex tones compared with pure tones. Compared with NM, AP musicians had weaker source currents in left auditory cortex but stronger currents in left inferior frontal gyrus (IFG) during the N1, and stronger currents in left IFG during the P2. Compared with non-AP musicians, AP musicians exhibited stronger source currents in right insula and left IFG during the N1, and stronger currents in left IFG during the P2. Non-AP musicians had stronger N1 currents in right auditory cortex than nonmusicians. Currents in left IFG and left auditory cortex were correlated with response times exclusively in AP musicians. These findings suggest that a left frontotemporal network supports rapid pitch labeling in AP.


Subject(s)
Music, Pitch Perception, Humans, Pitch Perception/physiology, Auditory Perception, Prefrontal Cortex, Reaction Time/physiology, Electroencephalography, Acoustic Stimulation, Pitch Discrimination/physiology, Auditory Evoked Potentials/physiology
6.
Brain Cogn; 163: 105914, 2022 11.
Article in English | MEDLINE | ID: mdl-36155348

ABSTRACT

The perception of concurrent sound sources depends on processes (i.e., auditory scene analysis) that fuse and segregate acoustic features according to harmonic relations, temporal coherence, and binaural cues (encompassing dichotic pitch, location differences, and simulated echoes). The object-related negativity (ORN) and the P400 are electrophysiological indices of concurrent sound perception. Here, we review the different paradigms used to study concurrent sound perception and the brain responses obtained from these paradigms. Recommendations regarding the design and recording parameters of the ORN and P400 are made, and their clinical applications in assessing central auditory processing ability in different populations are discussed.


Subject(s)
Auditory Perception, Auditory Evoked Potentials, Acoustic Stimulation, Auditory Perception/physiology, Brain Mapping, Cues, Auditory Evoked Potentials/physiology, Hearing, Humans, Pitch Perception/physiology
7.
Neuroscience; 423: 18-28, 2019 12 15.
Article in English | MEDLINE | ID: mdl-31705894

ABSTRACT

Difficulty understanding speech-in-noise (SIN) is a pervasive problem faced by older adults, particularly those with hearing loss. Previous studies have identified structural and functional changes in the brain that contribute to older adults' speech perception difficulties, yet many of these studies use neuroimaging techniques that evaluate only gross activation in isolated brain regions. Neural oscillations may provide further insight into the processes underlying SIN perception, as well as into the interaction between auditory cortex and prefrontal linguistic brain regions that mediates complex behaviors. We examined frequency-specific neural oscillations and functional connectivity of the EEG in older adults with and without hearing loss during an active SIN perception task. Brain-behavior correlations revealed that listeners who were more resistant to the detrimental effects of noise also demonstrated greater modulation of α phase coherence between clean and noise-degraded speech, suggesting that α desynchronization reflects release from inhibition and more flexible allocation of neural resources. Additionally, we found that top-down β connectivity between prefrontal and auditory cortices strengthened with poorer hearing thresholds despite minimal behavioral differences. This is consistent with the proposal that linguistic brain areas may be recruited to compensate for impoverished auditory inputs through increased top-down predictions that assist SIN perception. Overall, these results emphasize the importance of top-down signaling in low-frequency brain rhythms that helps compensate for hearing-related declines and facilitates efficient SIN processing.
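The abstract does not specify which phase-coherence measure was used; the sketch below illustrates one common choice, inter-trial phase coherence in the alpha band, computed from band-pass-filtered, Hilbert-transformed single-trial epochs. Treat it as an assumed, simplified stand-in for the paper's metric; the function name, band limits, and sampling rate are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_itpc(epochs, sfreq, band=(8.0, 12.0)):
    """Inter-trial phase coherence in the alpha band.

    epochs: array of shape (n_trials, n_samples) for one channel.
    Returns ITPC per sample (0 = random phases, 1 = perfect phase alignment).
    """
    b, a = butter(4, [band[0] / (sfreq / 2), band[1] / (sfreq / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=-1)           # zero-phase alpha-band filter
    phase = np.angle(hilbert(filtered, axis=-1))         # instantaneous phase per trial
    return np.abs(np.mean(np.exp(1j * phase), axis=0))   # length of the mean phase vector

# Hypothetical contrast: coherence for clean vs. noise-degraded speech epochs
# itpc_clean = alpha_itpc(epochs_clean, sfreq=500.0)
# itpc_noise = alpha_itpc(epochs_noise, sfreq=500.0)
# alpha_modulation = itpc_clean - itpc_noise  # per-listener modulation index
```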


Subject(s)
Brain Waves/physiology, Presbycusis/physiopathology, Speech Perception/physiology, Acoustic Stimulation, Aged, Auditory Cortex/physiology, Case-Control Studies, Electroencephalography, Female, Humans, Male, Middle Aged, Noise, Prefrontal Cortex/physiology
8.
Brain Struct Funct; 224(8): 2661-2676, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31346715

ABSTRACT

Speech comprehension difficulties are ubiquitous in aging and hearing loss, particularly in noisy environments. Older adults' poorer speech-in-noise (SIN) comprehension has been related to abnormal neural representations within various nodes (regions) of the speech network, but how senescent changes in hearing alter the transmission of brain signals remains unspecified. We measured electroencephalograms in older adults with and without mild hearing loss during a SIN identification task. Using functional connectivity and graph-theoretic analyses, we show that hearing-impaired (HI) listeners have more extended (less integrated) communication pathways and less efficient information exchange among widespread brain regions (larger network eccentricity) than their normal-hearing (NH) peers. Parameter-optimized support vector machine classifiers applied to EEG connectivity data showed that hearing status could be decoded (> 85% accuracy) solely from network-level descriptions of brain activity, and classification was particularly robust using left-hemisphere connections. Notably, we found a reversal in directed neural signaling in the left hemisphere, dependent on hearing status, among specific connections within the dorsal-ventral speech pathways. NH listeners showed an overall net "bottom-up" signaling directed from auditory cortex (A1) to inferior frontal gyrus (IFG; Broca's area), whereas the HI group showed the reverse pattern (i.e., "top-down" Broca's → A1). A similar flow reversal was noted between left IFG and motor cortex. Our full-brain connectivity results demonstrate that even mild forms of hearing loss alter how the brain routes information within the auditory-linguistic-motor loop.
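A "parameter-optimized SVM applied to connectivity data" can be sketched as a cross-validated classifier over vectorized connectivity features with a hyperparameter grid search. The abstract does not give the kernel, grid values, or cross-validation scheme, so those choices below are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV, cross_val_score, StratifiedKFold

def decode_hearing_status(X, y):
    """Cross-validated decoding of hearing status from connectivity features.

    X: (n_subjects, n_connections) vectorized EEG connectivity values
    y: 0 = normal hearing, 1 = hearing impaired (hypothetical coding)
    """
    svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    grid = GridSearchCV(
        svm,
        {"svc__C": [0.1, 1, 10, 100], "svc__gamma": ["scale", 0.01, 0.001]},
        cv=StratifiedKFold(n_splits=5),
    )
    # Accuracy of the tuned classifier estimated under an outer cross-validation loop
    return cross_val_score(grid, X, y, cv=StratifiedKFold(n_splits=5)).mean()

# e.g., compare decoding from left-hemisphere vs. whole-brain connections
# acc_left = decode_hearing_status(X_left_hemisphere, y)
# acc_all = decode_hearing_status(X_whole_brain, y)
```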


Subject(s)
Aging/physiology, Brain/physiopathology, Hearing Loss/physiopathology, Speech Perception/physiology, Acoustic Stimulation, Aged, Aging/psychology, Audiometry, Brain Mapping/methods, Comprehension/physiology, Electroencephalography, Auditory Evoked Potentials, Female, Humans, Male, Middle Aged, Neural Pathways/physiopathology
9.
J Neurophysiol; 120(2): 812-829, 2018 08 01.
Article in English | MEDLINE | ID: mdl-29742026

ABSTRACT

Attentional blink (AB) refers to the situation in which correctly identifying a target impairs the processing of a subsequent probe in a sequence of stimuli. Although the AB often coincides with a modulation of scalp-recorded cognitive event-related potentials (ERPs), the neural sources of this effect remain unclear. In two separate experiments, we used Classical LORETA Analysis Recursively Applied (CLARA) to estimate the neural sources of ERPs elicited by an auditory probe when it immediately followed an auditory target (i.e., AB condition), when no auditory target was present (i.e., no-AB condition), and when the probe followed an auditory target but occurred outside of the AB time window (i.e., no-AB condition). We observed a processing deficit when the probe immediately followed the target, and this auditory AB was accompanied by reduced P3b amplitude. Contrasting brain electrical source activity from the AB and no-AB conditions revealed reduced source activity in the medial temporal region as well as in the temporoparietal junction (extending into the inferior parietal lobe), ventromedial prefrontal cortex, left anterior thalamic nuclei, mammillary body, and left cerebellum. The results indicate that successful probe identification following a target relies on a widely distributed brain network and further support the suggestion that the auditory AB reflects the failure of the probe to reach short-term consolidation. NEW & NOTEWORTHY Within a rapid succession of auditory stimuli, the perception of a predefined target sound often impedes listeners' ability to detect another target sound presented close in succession. This attentional blink may be related to activity in brain areas supporting attention and memory. We show that the auditory attentional blink is associated with brain activity changes in a network including the medial temporal lobe, parietal cortex, and prefrontal cortex. This study suggests that a problem in the interaction between attention and memory underlies the auditory attentional blink.


Subject(s)
Attentional Blink/physiology, Auditory Perception/physiology, Brain/physiology, Acoustic Stimulation, Adolescent, Adult, Electroencephalography, Auditory Evoked Potentials, Female, Humans, Male, Young Adult
10.
EFSA J; 15(7): e04908, 2017 Jul.
Article in English | MEDLINE | ID: mdl-32625569

ABSTRACT

EFSA was asked by the European Commission to deliver a scientific opinion on the risks for human health related to the presence of pyrrolizidine alkaloids (PAs) in honey, tea, herbal infusions and food supplements and to identify the PAs of relevance in the aforementioned food commodities and in other feed and food. PAs are a large group of toxins produced by different plant species. In 2011, the EFSA Panel on Contaminants in the Food Chain (CONTAM Panel) assessed the risks related to the presence of PAs in food and feed. Based on occurrence data limited to honey, the CONTAM Panel concluded that there was a possible health concern for those toddlers and children who are high consumers of honey. A new exposure assessment including new occurrence data was published by EFSA in 2016 and was used to update the risk characterisation. The CONTAM Panel established a new Reference Point of 237 µg/kg body weight per day to assess the carcinogenic risks of PAs, and concluded that there is a possible concern for human health related to the exposure to PAs, in particular for frequent and high consumers of tea and herbal infusions. The Panel noted that consumption of food supplements based on PA-producing plants could result in exposure levels too close (i.e. less than 100 times lower) to the range of doses known to cause severe acute/short term toxicity. From the analysis of the available occurrence data, the CONTAM Panel identified a list of 17 PAs of relevance for monitoring in food and feed. The Panel recommended continuing the efforts to monitor the presence of PAs in food and feed, including the development of more sensitive and specific analytical methods. A recommendation was also issued on the generation of data to identify the toxic and carcinogenic potency of the PAs commonly found in food.
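The Reference Point mentioned above is typically used in a margin-of-exposure (MOE) calculation: the Reference Point divided by the estimated dietary exposure. The worked example below uses a purely hypothetical exposure value for illustration; only the 237 µg/kg bw per day Reference Point comes from the opinion, and the 10,000 benchmark is EFSA's general convention for substances that are genotoxic and carcinogenic.

```python
# Margin of exposure (MOE) = reference point / estimated exposure.
reference_point = 237.0     # µg/kg body weight per day (Reference Point from the opinion)
estimated_exposure = 0.04   # µg/kg bw per day -- hypothetical value for a high consumer of tea

moe = reference_point / estimated_exposure
print(f"Margin of exposure: {moe:,.0f}")
# For genotoxic carcinogens, EFSA generally regards an MOE below 10,000 as a
# possible health concern, which is the kind of comparison underlying the
# conclusion for frequent and high consumers of tea and herbal infusions.
```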

11.
Nat Commun; 7: 12241, 2016 08 02.
Article in English | MEDLINE | ID: mdl-27483187

ABSTRACT

Understanding speech in noisy environments is challenging, especially for seniors. Although evidence suggests that older adults increasingly recruit prefrontal cortices to offset reduced peripheral and central auditory processing, the brain mechanisms underlying such compensation remain elusive. Here we show that, relative to young adults, older adults show higher activation of frontal speech motor areas, as measured by functional MRI, during a syllable identification task at varying signal-to-noise ratios. This increased activity correlates with improved speech discrimination performance in older adults. Multivoxel pattern classification reveals that, despite an overall phoneme dedifferentiation, older adults show greater specificity of phoneme representations in frontal articulatory regions than in auditory regions. Moreover, older adults with stronger frontal activity have higher phoneme specificity in frontal and auditory regions. Thus, preserved phoneme specificity and upregulation of activity in speech motor regions provide a means of compensation in older adults for decoding impoverished speech representations in adverse listening conditions.


Subject(s)
Physiological Adaptation, Aging/physiology, Motor Cortex/physiology, Prefrontal Cortex/physiology, Speech Perception/physiology, Acoustic Stimulation, Adult, Age Factors, Aged, Comprehension/physiology, Female, Humans, Magnetic Resonance Imaging, Male, Motor Cortex/diagnostic imaging, Phonetics, Signal-to-Noise Ratio, Young Adult
12.
Brain Res; 1642: 146-153, 2016 07 01.
Article in English | MEDLINE | ID: mdl-27021953

ABSTRACT

Recent research has indicated that music practice can influence cognitive processing across the lifespan. Although extensive musical experience may have a mitigating effect on cognitive decline in older adults, the nature of the changes to brain functions underlying these performance benefits remains underexplored. The present study was designed to investigate the neural mechanisms that may support the apparent beneficial effects of life-long musical practice on cognition. We recorded event-related potentials (ERPs) in older musicians (N=17; average age=69.2) and non-musicians (N=17; average age=69.9), matched for age and education, while they completed an executive control task (visual go/no-go). Whereas both groups showed similar response speed and accuracy on go trials, older musicians made fewer no-go errors. ERP recordings revealed the typical N2/P3 complex, but the nature of these responses differed between groups in that (1) older musicians showed larger N2 and P3 effects ('no-go minus go' amplitude), with the N2 amplitude being correlated with behavioral accuracy for no-go trials, and (2) the topography of the P3 response was more anterior in musicians. Moreover, P3 amplitude was correlated with measures of musical experience in musicians. In our discussion of these results, we propose that music practice may have conferred an executive control advantage on musicians in later life.


Subject(s)
Cerebral Cortex/physiology, Executive Function/physiology, Music/psychology, Psychological Practice, Psychomotor Performance, Acoustic Stimulation, Aged, Aged 80 and over, Electroencephalography, Evoked Potentials, Female, Humans, Psychological Inhibition, Male, Middle Aged
13.
J Cogn Neurosci; 27(11): 2186-96, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26226073

ABSTRACT

Detecting a brief silent interval (i.e., a gap) is more difficult when listeners perceive two concurrent sounds rather than one, as occurs when a sound contains a mistuned harmonic among otherwise in-tune harmonics. This impairment in gap detection may reflect disruption of low-level encoding or the division of attention between two sound objects, both of which could interfere with signal detection. To distinguish between these two alternatives, we compared ERPs during active and passive listening with complex harmonic tones that could include a gap, a mistuned harmonic, both features, or neither. During active listening, participants indicated whether they heard a gap, irrespective of mistuning. During passive listening, participants watched a subtitled muted movie of their choice while the same sounds were presented. Gap detection was impaired when the complex sounds included a mistuned harmonic that popped out as a separate object. The ERP analysis revealed an early gap-related activity that was little affected by mistuning in either the active or passive listening condition. However, during active listening, there was a marked decrease in the late positive wave thought to index attention and response-related processes. These results suggest that the limitation in detecting the gap is related to attentional processing, possibly divided attention induced by the concurrent sound objects, rather than to deficits in preattentive sensory encoding.


Subject(s)
Attention/physiology, Auditory Perception/physiology, Signal Detection (Psychological)/physiology, Sound, Acoustic Stimulation, Adult, Analysis of Variance, Brain Mapping, Electroencephalography, Evoked Potentials/physiology, Female, Humans, Male, Reaction Time/physiology, Young Adult
14.
J Neurosci Methods; 245: 137-46, 2015 Apr 30.
Article in English | MEDLINE | ID: mdl-25721269

ABSTRACT

Simultaneous recording of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) has gained attention due to the complementary properties of the two imaging modalities. Their combined recording enables the study of brain function while taking advantage of the high temporal resolution of EEG and the high spatial resolution of fMRI. However, EEG data recorded inside the MR scanner are significantly contaminated by two main sources of artifacts: MR gradient artifacts and ballistocardiogram (BCG) artifacts. Most existing removal approaches for these artifacts fall into two main categories: average artifact subtraction (AAS) and optimal basis selection (OBS). While these techniques can improve data quality significantly, highly effective removal of artifacts from the data, particularly the BCG artifact, is still lacking. Here, we compared two of the most commonly used algorithms for BCG artifact removal (OBS and AAS) based on the estimated signal-to-noise ratio (SNR) of auditory and visual evoked responses recorded during fMRI acquisition. We further compared optimization of OBS at the group, individual-subject, and run levels. The results suggest that the performance of the OBS algorithm can be significantly improved by choosing the optimal number of principal components. Furthermore, optimizing the number of principal components at the individual-participant and run level results in significant improvements in the SNR of evoked responses compared with group-level optimization.
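The core of an OBS-style cleanup is to epoch the EEG around detected heartbeats, derive a small set of principal BCG waveforms by PCA, and subtract the fitted artifact from each heartbeat epoch. The sketch below is a deliberately simplified, single-channel illustration under those assumptions (it ignores overlap between heartbeat epochs and other refinements of published implementations); function and variable names are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

def obs_clean_channel(signal, r_peaks, sfreq, n_components=3, win=(-0.2, 0.6)):
    """Simplified OBS-style BCG cleanup for one EEG channel.

    signal: 1-D array of continuous EEG; r_peaks: heartbeat sample indices.
    Heartbeat-locked epochs are decomposed with PCA and the reconstruction from
    the first n_components is subtracted from each epoch in the continuous signal.
    """
    start, stop = int(win[0] * sfreq), int(win[1] * sfreq)
    idx = [(p + start, p + stop) for p in r_peaks
           if p + start >= 0 and p + stop <= signal.size]
    epochs = np.stack([signal[a:b] for a, b in idx])      # heartbeats x samples
    basis = PCA(n_components=n_components).fit(epochs)    # principal BCG waveforms
    artifact = basis.inverse_transform(basis.transform(epochs))
    cleaned = signal.astype(float)
    for (a, b), art in zip(idx, artifact):
        cleaned[a:b] -= art                                # remove the fitted BCG shape
    return cleaned

# n_components could then be tuned per participant (or per run) by maximizing the
# SNR of the evoked response computed from the cleaned data, as the study suggests.
```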


Subject(s)
Ballistocardiography/adverse effects, Brain Mapping, Brain/blood supply, Brain/physiology, Evoked Potentials/physiology, Acoustic Stimulation, Adult, Algorithms, Analysis of Variance, Artifacts, Electroencephalography, Female, Functional Laterality, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Oxygen/blood, Photic Stimulation, Signal-to-Noise Ratio, Spectrum Analysis, Young Adult
15.
J Neurosci; 35(3): 1240-9, 2015 Jan 21.
Article in English | MEDLINE | ID: mdl-25609638

ABSTRACT

Musicianship in early life is associated with pervasive changes in brain function and enhanced speech-language skills. Whether these neuroplastic benefits extend to older individuals, who are more susceptible to cognitive decline and for whom plasticity is weaker, has yet to be established. Here, we show that musical training offsets declines in auditory brain processing that accompany normal aging in humans, preserving robust speech recognition late into life. We recorded both brainstem and cortical neuroelectric responses in older adults with and without modest musical training as they classified speech sounds along an acoustic-phonetic continuum. Results reveal higher temporal precision in speech-evoked responses at multiple levels of the auditory system in older musicians, who were also better at differentiating phonetic categories. Older musicians also showed a closer correspondence between neural activity and perceptual performance, suggesting that musicianship strengthens brain-behavior coupling in the aging auditory system. Last, "neurometric" functions derived from unsupervised classification of neural activity established that early cortical responses could accurately predict listeners' psychometric speech identification and, more critically, that neurometric profiles were organized more categorically in older musicians. We propose that musicianship offsets age-related declines in speech listening by refining the hierarchical interplay between subcortical and cortical auditory brain representations, allowing more behaviorally relevant information to be carried within the neural code and supplying more faithful templates to the brain mechanisms subserving phonetic computations. Our findings imply that robust neuroplasticity conferred by musical training is not restricted by age and may serve as an effective means to bolster speech listening skills that decline across the lifespan.
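Psychometric and neurometric identification functions of the kind described here are often summarized by fitting a sigmoid across continuum steps and comparing slopes (steeper slope = more categorical). The sketch below shows that generic fitting step only; the logistic form, starting values, and variable names are assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, x0, k):
    """Logistic identification function with midpoint x0 and slope k."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def fit_identification(steps, prop_identified):
    """Fit an identification (psychometric or neurometric) function to the
    proportion of one category response at each continuum step."""
    params, _ = curve_fit(sigmoid, steps, prop_identified,
                          p0=[np.median(steps), 1.0], maxfev=10000)
    return params  # (midpoint, slope)

# Hypothetical comparison: behavioral slope vs. slope derived from unsupervised
# classification of early cortical responses, per listener
# x0_beh, k_beh = fit_identification(steps, prop_behavioral)
# x0_neu, k_neu = fit_identification(steps, prop_neurometric)
```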


Subject(s)
Aging/physiology, Auditory Cortex/physiology, Auditory Perception/physiology, Brain Stem/physiology, Music, Neuronal Plasticity/physiology, Acoustic Stimulation, Aged, Auditory Pathways/physiology, Female, Humans, Language, Male, Middle Aged, Speech, Speech Perception/physiology
16.
J Neurosci; 35(3): 1307-18, 2015 Jan 21.
Article in English | MEDLINE | ID: mdl-25609643

ABSTRACT

Sounds are ephemeral. Thus, coherent auditory perception depends on "hearing" back in time: retrospectively attending to what was lost externally but preserved in short-term memory (STM). Current theories of auditory attention assume that sound features are integrated into a perceptual object, that multiple objects can coexist in STM, and that attention can be deployed to an object in STM. Recording electroencephalography from humans, we tested these assumptions, elucidating feature-general and feature-specific neural correlates of auditory attention to STM. Alpha/beta oscillations and frontal and posterior event-related potentials indexed feature-general top-down attentional control directed to one of several coexisting auditory representations in STM. In particular, task performance during attentional orienting was correlated with alpha/low-beta desynchronization (i.e., power suppression). However, attention to one feature could occur without simultaneous processing of the second feature of the representation. Therefore, auditory attention to memory relies on both feature-specific and feature-general neural dynamics.
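"Desynchronization (power suppression)" is usually quantified as the percentage change in band-limited power relative to a pre-cue baseline. The sketch below illustrates that generic computation under assumed band limits, baseline window, and sampling rate; it is not the authors' reported analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_erd(epochs, sfreq, band=(8.0, 20.0), baseline=(-0.5, 0.0), tmin=-0.5):
    """Event-related (de)synchronization: % change in alpha/low-beta power
    relative to a pre-cue baseline. epochs: (n_trials, n_samples)."""
    b, a = butter(4, [band[0] / (sfreq / 2), band[1] / (sfreq / 2)], btype="band")
    power = np.abs(hilbert(filtfilt(b, a, epochs, axis=-1), axis=-1)) ** 2
    mean_power = power.mean(axis=0)                        # average over trials
    times = tmin + np.arange(epochs.shape[1]) / sfreq
    base = mean_power[(times >= baseline[0]) & (times < baseline[1])].mean()
    return 100.0 * (mean_power - base) / base              # negative = desynchronization

# e.g., erd_retro_cue = band_erd(epochs_retro_cue, sfreq=500.0)  # hypothetical epochs
```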


Subject(s)
Attention/physiology, Auditory Perception/physiology, Brain/physiology, Evoked Potentials/physiology, Short-Term Memory/physiology, Orientation/physiology, Acoustic Stimulation, Adolescent, Adult, Cues, Electroencephalography, Female, Humans, Male, Neurons/physiology, Reaction Time/physiology, Young Adult
17.
Cereb Cortex; 25(2): 496-506, 2015 Feb.
Article in English | MEDLINE | ID: mdl-24042339

ABSTRACT

Behavioral improvement within the first hour of training is commonly explained as procedural learning (i.e., strategy changes resulting from task familiarization). However, it may additionally reflect a rapid adjustment of the perceptual and/or attentional system in a goal-directed task. In support of this latter hypothesis, we show feature-specific gains in performance for groups of participants briefly trained to use either a spectral or spatial difference between 2 vowels presented simultaneously during a vowel identification task. In both groups, the neuromagnetic activity measured during the vowel identification task following training revealed source activity in auditory cortices, prefrontal, inferior parietal, and motor areas. More importantly, the contrast between the 2 groups revealed a striking double dissociation in which listeners trained on spectral or spatial cues showed higher source activity in ventral ("what") and dorsal ("where") brain areas, respectively. These feature-specific effects indicate that brief training can implicitly bias top-down processing to a trained acoustic cue and induce a rapid recalibration of the ventral and dorsal auditory streams during speech segregation and identification.


Subject(s)
Attention/physiology, Brain/physiology, Learning/physiology, Speech Perception/physiology, Acoustic Stimulation, Adult, Brain Mapping, Cues, Humans, Magnetoencephalography, Male, Neuropsychological Tests, Physiological Pattern Recognition/physiology, Speech Acoustics, Young Adult
18.
Int J Psychophysiol; 94(3): 427-36, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25448269

ABSTRACT

Job burnout is a significant cause of work absenteeism. Evidence from behavioral studies and patient reports suggests that job burnout is associated with impaired attention and decreased working capacity, and that it has overlapping elements with depression, anxiety, and sleep disturbances. Here, we examined the electrophysiological correlates of automatic sound-change detection and involuntary attention allocation in job burnout using scalp recordings of event-related potentials (ERPs). Volunteers with job burnout symptoms but without severe depression or anxiety disorders, and their non-burnout controls, were presented with natural speech sound stimuli (a standard and nine deviants), as well as three rarely occurring speech sounds with strong emotional prosody. All stimuli elicited mismatch negativity (MMN) responses that were comparable in both groups. The groups differed with respect to the P3a, an ERP component reflecting involuntary shifts of attention: the job burnout group showed a shorter P3a latency in response to the emotionally negative stimulus and a longer latency in response to the positive stimulus. The results indicate that in job burnout, automatic speech sound discrimination is intact, but attention capture is faster for negative information and slower for positive information than in controls.


Subject(s)
Acoustic Stimulation/methods, Attention/physiology, Auditory Perception/physiology, Professional Burnout/psychology, Emotions/physiology, Auditory Evoked Potentials/physiology, Adult, Professional Burnout/diagnosis, Professional Burnout/epidemiology, Female, Finland/epidemiology, Humans, Male, Middle Aged
19.
Neuropsychologia; 62: 233-44, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25080187

ABSTRACT

The present study was designed to examine listeners' ability to use voice information incidentally during spoken word recognition. We recorded event-related brain potentials (ERPs) during a continuous recognition paradigm in which participants indicated on each trial whether the spoken word was "new" or "old." Old items were presented 2, 8, or 16 words after their first presentation. Context congruency was manipulated by having the same word repeated by either the same speaker or a different speaker. The different speaker could share gender, accent, or neither feature with the speaker of the first presentation. Participants' accuracy was greater when the old word was spoken by the same speaker than by a different speaker. In addition, accuracy decreased with increasing lag. The correct identification of old words was accompanied by an enhanced late positivity over parietal sites, with no difference found between voice congruency conditions. In contrast, an earlier voice reinstatement effect was observed over frontal sites, an index of priming that preceded recollection in this task. Our results provide further evidence that acoustic and semantic information are integrated into a unified memory trace and that acoustic information facilitates spoken word recollection.


Subject(s)
Brain Mapping, Brain/physiology, Recognition (Psychology)/physiology, Verbal Learning/physiology, Vocabulary, Acoustic Stimulation, Adult, Analysis of Variance, Electroencephalography, Evoked Potentials/physiology, Female, Humans, Male, Psycholinguistics, Reaction Time, Young Adult
20.
Proc Natl Acad Sci U S A; 111(19): 7126-31, 2014 May 13.
Article in English | MEDLINE | ID: mdl-24778251

ABSTRACT

Although it is well accepted that the speech motor system (SMS) is activated during speech perception, the functional role of this activation remains unclear. Here we test the hypothesis that the redundant motor activation contributes to categorical speech perception under adverse listening conditions. In this functional magnetic resonance imaging study, participants identified one of four phoneme tokens (/ba/, /ma/, /da/, or /ta/) under one of six signal-to-noise ratio (SNR) levels (-12, -9, -6, -2, 8 dB, and no noise). Univariate and multivariate pattern analyses were used to determine the role of the SMS during perception of noise-impoverished phonemes. Results revealed a negative correlation between neural activity and perceptual accuracy in the left ventral premotor cortex and Broca's area. More importantly, multivoxel patterns of activity in the left ventral premotor cortex and Broca's area exhibited effective phoneme categorization when SNR ≥ -6 dB. This is in sharp contrast with phoneme discriminability in bilateral auditory cortices and sensorimotor interface areas (e.g., left posterior superior temporal gyrus), which was reliable only when the noise was extremely weak (SNR > 8 dB). Our findings provide strong neuroimaging evidence for a greater robustness of the SMS than auditory regions for categorical speech perception in noise. Under adverse listening conditions, better discriminative activity in the SMS may compensate for loss of specificity in the auditory system via sensorimotor integration.
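The multivoxel analysis described here amounts to cross-validated phoneme classification within each SNR level for a given region of interest, compared against the 4-way chance level of 0.25. The sketch below illustrates that logic; the classifier choice, cross-validation scheme, and input names are assumptions rather than the study's documented pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

def phoneme_decoding_by_snr(patterns, phoneme_labels, snr_labels):
    """Cross-validated 4-way phoneme classification within each SNR level.

    patterns: (n_trials, n_voxels) activity patterns from one ROI
    phoneme_labels: /ba/, /ma/, /da/, /ta/ codes per trial
    snr_labels: SNR condition per trial
    """
    clf = make_pipeline(StandardScaler(), LinearSVC())
    scores = {}
    for snr in np.unique(snr_labels):
        mask = snr_labels == snr
        scores[snr] = cross_val_score(clf, patterns[mask], phoneme_labels[mask],
                                      cv=StratifiedKFold(n_splits=5)).mean()
    return scores  # compare each value against the 4-way chance level of 0.25

# Hypothetical comparison across regions of interest:
# scores_pmv = phoneme_decoding_by_snr(X_ventral_premotor, phonemes, snrs)
# scores_stg = phoneme_decoding_by_snr(X_posterior_stg, phonemes, snrs)
```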


Subject(s)
Auditory Cortex/physiology, Magnetic Resonance Imaging, Motor Cortex/physiology, Noise, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation/methods, Adult, Brain Mapping/methods, Female, Functional Laterality/physiology, Humans, Male, Multivariate Analysis, Phonetics, Signal-to-Noise Ratio, Young Adult