Results 1 - 3 of 3
1.
J Neurosci; 43(48): 8189-8200, 2023 Nov 29.
Article in English | MEDLINE | ID: mdl-37793909

ABSTRACT

Spontaneous speech is produced in chunks called intonation units (IUs). IUs are defined by a set of prosodic cues and presumably occur in all human languages. Recent work has shown that, across different grammatical and sociocultural conditions, IUs form rhythms of ∼1 unit per second. Linguistic theory suggests that IUs pace the flow of information in discourse. As a result, IUs provide a promising and hitherto unexplored theoretical framework for studying the neural mechanisms of communication. In this article, we identify a neural response unique to the boundary defined by the IU. We measured the EEG of human participants (of either sex) who listened to different speakers recounting an emotional life event. We analyzed the speech stimuli linguistically and modeled the EEG response at word offset using a GLM approach. We find that the EEG response to IU-final words differs from the response to IU-nonfinal words, even when equating acoustic boundary strength. Finally, we relate our findings to the body of research on rhythmic brain mechanisms in speech processing by studying the unique contributions of IUs and acoustic boundary strength in predicting delta-band EEG. This analysis suggests that IU-related neural activity, which is tightly linked to the classic Closure Positive Shift (CPS), could be a time-locked component that captures the previously characterized delta-band neural speech tracking.

SIGNIFICANCE STATEMENT: Linguistic communication is central to human experience, and its neural underpinnings have been a topic of much research in recent years. Neuroscientific research has benefited from studying human behavior in naturalistic settings, an endeavor that requires explicit models of complex behavior. Usage-based linguistic theory suggests that spoken language is prosodically structured in intonation units. We reveal that the neural system is attuned to intonation units by explicitly modeling their impact on the EEG response beyond mere acoustics. To our knowledge, this is the first time this has been demonstrated in spontaneous speech under naturalistic conditions, within a theoretical framework that connects the prosodic chunking of speech, on the one hand, with the flow of information during communication, on the other.
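The word-offset GLM analysis described in this abstract can be illustrated with a short sketch. This is a minimal illustration, not the authors' pipeline: the sampling rate, response window, single-channel EEG, and the toy word annotations (IU-final flags and boundary-strength scores) are all assumed for demonstration, and the time-expanded design matrix is one standard way (a regression-ERP) to estimate event-locked responses while controlling for an acoustic covariate.

```python
# Minimal sketch: GLM of the EEG response at word offsets, with an
# IU-finality predictor over and above acoustic boundary strength.
# All names, rates, and annotation values below are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

fs = 100                          # assumed EEG sampling rate (Hz)
eeg = np.random.randn(60 * fs)    # placeholder 1-min single-channel EEG

# Hypothetical word annotations: offset time (s), IU-final flag,
# and a per-word acoustic boundary-strength score.
word_offsets = np.array([1.2, 2.0, 2.9, 4.1])
iu_final     = np.array([0.0, 1.0, 0.0, 1.0])
boundary     = np.array([0.3, 0.8, 0.2, 0.7])

# Impulse regressors aligned to word offsets.
n = len(eeg)
idx = (word_offsets * fs).astype(int)
X = np.zeros((n, 3))
X[idx, 0] = 1.0          # response common to every word offset
X[idx, 1] = iu_final     # extra response to IU-final words
X[idx, 2] = boundary     # acoustic boundary-strength covariate

# Time-expand each regressor over a 0-600 ms post-offset window,
# one coefficient per lag (regression-ERP style design). np.roll
# wraps at the edges, which is harmless for events away from them.
lags = np.arange(0, int(0.6 * fs))
X_lagged = np.hstack([np.roll(X, lag, axis=0)[:, [j]]
                      for j in range(X.shape[1]) for lag in lags])

model = LinearRegression().fit(X_lagged, eeg)
# Coefficients for the IU-finality predictor trace the neural response
# to IU boundaries beyond what acoustic boundary strength explains.
iu_response = model.coef_[len(lags):2 * len(lags)]
```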


Subject(s)
Speech Perception, Speech, Humans, Speech/physiology, Electroencephalography, Acoustic Stimulation, Speech Perception/physiology, Language
2.
Sci Rep; 10(1): 15846, 2020 Sep 28.
Article in English | MEDLINE | ID: mdl-32985572

ABSTRACT

Studies of speech processing investigate the relationship between temporal structure in speech stimuli and neural activity. Despite clear evidence that the brain tracks speech at low frequencies (~1 Hz), it is not well understood what linguistic information gives rise to this rhythm. In this study, we harness linguistic theory to draw attention to Intonation Units (IUs), a fundamental prosodic unit of human language, and characterize their temporal structure as captured in the speech envelope, an acoustic representation relevant to the neural processing of speech. IUs are defined by a specific pattern of syllable delivery, together with resets in pitch and articulatory force. Linguistic studies of spontaneous speech indicate that this prosodic segmentation paces new information in language use across diverse languages. Therefore, IUs provide a universal structural cue for the cognitive dynamics of speech production and comprehension. We study the relation between IUs and periodicities in the speech envelope, applying methods from investigations of neural synchronization. Our sample includes recordings of everyday speech from over 100 speakers across six languages. We find that sequences of IUs form a consistent low-frequency rhythm and constitute a significant periodic cue within the speech envelope. Our findings predict that IUs are utilized by the neural system when tracking speech, and the methods we introduce here facilitate testing this prediction in the future (i.e., with physiological data).
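The envelope-periodicity analysis outlined in this abstract can be sketched roughly as follows. This is not the authors' code: the file name, the broadband Hilbert envelope, the 100-Hz resampling rate, and the Welch spectrum (which, with these settings, assumes a recording longer than ~20 s) are illustrative choices for recovering a low-frequency (~1 Hz) peak from a speech envelope.

```python
# Minimal sketch: estimate low-frequency periodicity in a speech
# envelope. File name and all parameter choices are assumptions.
import numpy as np
from scipy.io import wavfile
from scipy.signal import hilbert, resample_poly, welch

fs, audio = wavfile.read("speech.wav")   # hypothetical recording
audio = audio.astype(float)
if audio.ndim > 1:                       # collapse stereo to mono
    audio = audio.mean(axis=1)

# Broadband amplitude envelope via the Hilbert transform.
envelope = np.abs(hilbert(audio))

# Resample the envelope to 100 Hz before spectral analysis.
fs_env = 100
env_ds = resample_poly(envelope, fs_env, fs)

# Power spectrum of the envelope; sequences of IUs should appear as
# concentrated low-frequency power around ~1 Hz.
freqs, power = welch(env_ds, fs=fs_env, nperseg=int(20 * fs_env))
band = (freqs > 0.25) & (freqs < 3.0)
peak = freqs[band][np.argmax(power[band])]
print(f"peak envelope frequency: {peak:.2f} Hz")
```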


Subject(s)
Speech Acoustics, Speech Perception/physiology, Acoustic Stimulation, Humans, Language, Psycholinguistics, Social Interaction, Sound
3.
Curr Biol; 29(4): 693-699.e4, 2019 Feb 18.
Article in English | MEDLINE | ID: mdl-30744973

ABSTRACT

Attention supports the allocation of resources to relevant locations and objects in a scene. Under most conditions, several stimuli compete for neural representation, and attention biases the neural representation toward the response associated with the attended object [1, 2]. An attended stimulus therefore enjoys a neural response that resembles the response to that stimulus in isolation. The factors that determine and generate attentional bias have been widely researched, ranging from endogenously controlled processes to exogenous capture of attention [1-4]. Recent studies have investigated the temporal structure governing attention. When participants monitor a single location, visual-target detection depends on the phase of an ∼8-Hz brain rhythm [5, 6]. When two locations are monitored, performance fluctuates at 4 Hz for each location [7, 8]. The hypothesis is that 4-Hz sampling for two locations may reflect a common sampler that operates globally at 8 Hz and is divided between relevant locations [5-7, 9]. The present study targets two properties of this phenomenon, known as rhythmic attentional sampling. First, because sampling is typically described for selection over different locations, we examined whether rhythmic sampling is limited to selection over space or extends to feature-based attention. Second, we examined whether sampling at 4 Hz results from the division of an 8-Hz rhythm over two objects. We found that two overlapping objects defined by features are sampled at ∼4 Hz per object. In addition, performance on a single object fluctuated at 8 Hz. Rhythmic sampling of features did not result from temporal structure in eye movements.
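The behavioral-oscillation analysis implied by this abstract (detection performance fluctuating at ~4 or ~8 Hz as a function of cue-target interval) can be sketched as follows. The 20-ms step between intervals, the quadratic detrend, and the toy accuracy data are all assumptions; taking the spectrum of a detrended accuracy time course is one common way such behavioral rhythms are quantified.

```python
# Minimal sketch: spectral analysis of detection accuracy binned by
# cue-target interval. All data and parameters below are toy values.
import numpy as np

dt = 0.02                            # assumed 20 ms steps between intervals
t = np.arange(0.3, 1.3, dt)         # assumed cue-target intervals (s)
accuracy = 0.75 + 0.05 * np.sin(2 * np.pi * 4 * t)  # toy 4-Hz fluctuation

# Remove the slow trend so the spectrum reflects rhythmic fluctuation only.
detrended = accuracy - np.polyval(np.polyfit(t, accuracy, 2), t)

# Amplitude spectrum of the behavioral time course; a peak near 4 Hz
# (two objects) or 8 Hz (one object) is the quantity of interest.
spectrum = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(len(detrended), d=dt)
peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f"peak behavioral frequency: {peak:.1f} Hz")
```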


Subject(s)
Attention/physiology, Brain/physiology, Cues (Psychology), Visual Perception/physiology, Adult, Female, Humans, Male, Periodicity, Young Adult