Results 1-20 of 183
1.
PLoS Biol ; 22(5): e3002631, 2024 May.
Article in English | MEDLINE | ID: mdl-38805517

ABSTRACT

Music and speech are complex and distinct auditory signals that are both foundational to the human experience. The mechanisms underpinning each domain are widely investigated. However, what perceptual mechanism transforms a sound into music or speech, and what basic acoustic information is required to distinguish between them, remain open questions. Here, we hypothesized that a sound's amplitude modulation (AM), an essential temporal acoustic feature driving the auditory system across processing levels, is critical for distinguishing music and speech. Specifically, in contrast to paradigms using naturalistic acoustic signals (that can be challenging to interpret), we used a noise-probing approach to untangle the auditory mechanism: If AM rate and regularity are critical for perceptually distinguishing music and speech, judging artificially noise-synthesized ambiguous audio signals should align with their AM parameters. Across 4 experiments (N = 335), signals with a higher peak AM frequency tended to be judged as speech, and those with a lower peak AM frequency as music. Interestingly, this principle is consistently used by all listeners for speech judgments, but only by musically sophisticated listeners for music. In addition, signals with more regular AM are judged as music over speech, and this feature is more critical for music judgment, regardless of musical sophistication. The data suggest that the auditory system can rely on a low-level acoustic property as basic as AM to distinguish music from speech, a simple principle that provokes both neurophysiological and evolutionary experiments and speculations.
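The key acoustic measurement here, a sound's peak AM frequency, can be approximated with standard signal-processing tools. Below is a minimal sketch (not the authors' pipeline), assuming a mono signal `x` sampled at `fs` Hz: extract the Hilbert envelope, then find the dominant envelope frequency below an assumed 32 Hz ceiling.

```python
import numpy as np
from scipy.signal import hilbert, welch

def peak_am_frequency(x, fs, fmax=32.0):
    """Rough proxy for the peak amplitude-modulation frequency of a signal.

    Extracts the temporal envelope with the Hilbert transform, then
    locates the dominant envelope frequency below `fmax` Hz.
    """
    envelope = np.abs(hilbert(x))            # temporal envelope
    envelope -= envelope.mean()              # remove DC before the PSD
    freqs, psd = welch(envelope, fs=fs, nperseg=min(len(x), 4 * int(fs)))
    band = (freqs > 0.5) & (freqs < fmax)    # restrict to plausible AM rates
    return freqs[band][np.argmax(psd[band])]

# Toy check: noise amplitude-modulated at 5 Hz should peak near 5 Hz.
fs = 16000
t = np.arange(0, 4.0, 1 / fs)
carrier = np.random.randn(t.size)
am5 = (1 + np.sin(2 * np.pi * 5 * t)) * carrier
print(peak_am_frequency(am5, fs))  # ~5.0
```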


Subjects
Acoustic Stimulation, Auditory Perception, Music, Speech Perception, Humans, Male, Female, Adult, Auditory Perception/physiology, Acoustic Stimulation/methods, Speech Perception/physiology, Young Adult, Speech/physiology, Adolescent
2.
PLoS Biol ; 22(5): e3002622, 2024 May.
Article in English | MEDLINE | ID: mdl-38814982

ABSTRACT

Combinatoric linguistic operations underpin human language processes, but how meaning is composed and refined in the mind of the reader is not well understood. We address this puzzle by exploiting the ubiquitous function of negation. We track the online effects of negation ("not") and intensifiers ("really") on the representation of scalar adjectives (e.g., "good") in parametrically designed behavioral and neurophysiological (MEG) experiments. The behavioral data show that participants first interpret negated adjectives as affirmative and later modify their interpretation towards, but never exactly as, the opposite meaning. Decoding analyses of neural activity further reveal significant above-chance decoding accuracy for negated adjectives within 600 ms from adjective onset, suggesting that negation does not invert the representation of adjectives (i.e., "not bad" represented as "good"); furthermore, decoding accuracy for negated adjectives is found to be significantly lower than that for affirmative adjectives. Overall, these results suggest that negation mitigates rather than inverts the neural representations of adjectives. This putative suppression mechanism of negation is supported by increased synchronization of beta-band neural activity in sensorimotor areas. The analysis of negation provides a steppingstone to understand how the human brain represents changes of meaning over time.
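Time-resolved decoding of this kind (classifying negated vs. affirmative trials at each time point) is typically implemented with cross-validated linear classifiers. The sketch below is a generic illustration under assumed data shapes, not the authors' exact analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def timewise_decoding(X, y, cv=5):
    """Cross-validated decoding accuracy at every time sample.

    X : array (n_trials, n_sensors, n_times) of MEG data
    y : array (n_trials,) of binary condition labels
    Returns an (n_times,) array of mean accuracies; values reliably
    above 0.5 indicate decodable condition information at that latency.
    """
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = np.empty(X.shape[-1])
    for t in range(X.shape[-1]):
        scores[t] = cross_val_score(clf, X[:, :, t], y, cv=cv).mean()
    return scores

# Synthetic demo: 100 trials, 50 sensors, 20 time points, effect from t=10 on.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50, 20))
y = rng.integers(0, 2, 100)
X[y == 1, :5, 10:] += 1.0   # inject a decodable difference late in the epoch
print(timewise_decoding(X, y).round(2))
```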


Subjects
Language, Humans, Female, Male, Adult, Young Adult, Brain/physiology, Magnetoencephalography/methods, Semantics, Linguistics/methods
3.
Proc Natl Acad Sci U S A ; 120(36): e2215710120, 2023 09 05.
Article in English | MEDLINE | ID: mdl-37639606

ABSTRACT

The beginnings of words are, in some informal sense, special. This intuition is widely shared, for example, when playing word games. Less apparent is whether the intuition is substantiated empirically and what the underlying organizational principle(s) might be. Here, we answer this seemingly simple question in a quantitatively clear way. Based on arguments about the interplay between lexical storage and speech processing, we examine whether the distribution of information among different speech sounds of words is governed by a critical computational unit for online speech perception and production: syllables. By analyzing lexical databases of twelve languages, we demonstrate that there is a compelling asymmetry between syllable beginnings (onsets) and ends (codas) in their involvement in distinguishing words stored in the lexicon. In particular, we show that the functional advantage of syllable onset reflects an asymmetrical distribution of lexical informativeness within the syllable unit but not an effect of a global decay of informativeness from the beginning to the end of a word. The converging finding across languages from a range of typological families supports the conjecture that the syllable unit, while being a critical primitive for both speech perception and production, is also a key organizational constraint for lexical storage.
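One way to make "lexical informativeness by position" concrete is the entropy of the segment distribution at a given within-syllable position over a lexicon. A toy sketch follows, using a hypothetical mini-lexicon; the study itself analyzed full lexical databases of twelve languages.

```python
from collections import Counter
from math import log2

# Hypothetical CVC mini-lexicon, for illustration only.
lexicon = ["bat", "cat", "hat", "mat", "pat", "bad", "bag", "ban", "cap", "can"]

def positional_entropy(words, pos):
    """Shannon entropy (bits) of the segment distribution at one position.

    Higher entropy means the position does more work in distinguishing
    words, a simple stand-in for the informativeness notion above.
    """
    counts = Counter(w[pos] for w in words)
    n = sum(counts.values())
    return -sum((c / n) * log2(c / n) for c in counts.values())

print("onset entropy:", round(positional_entropy(lexicon, 0), 2))
print("coda entropy: ", round(positional_entropy(lexicon, -1), 2))
```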


Subjects
Dissent and Disputes, Intuition, Humans, Databases, Factual, Language, Speech
4.
J Neurosci ; 44(30)2024 Jul 24.
Article in English | MEDLINE | ID: mdl-38926087

ABSTRACT

Music, like spoken language, is often characterized by hierarchically organized structure. Previous experiments have shown neural tracking of notes and beats, but little work touches on the more abstract question: how does the brain establish high-level musical structures in real time? We presented Bach chorales to participants (20 females and 9 males) undergoing electroencephalogram (EEG) recording to investigate how the brain tracks musical phrases. We removed the main temporal cues to phrasal structures, so that listeners could only rely on harmonic information to parse a continuous musical stream. Phrasal structures were disrupted by locally or globally reversing the harmonic progression, so that our observations on the original music could be controlled and compared. We first replicated the findings on neural tracking of musical notes and beats, substantiating the positive correlation between musical training and neural tracking. Critically, we discovered a neural signature in the frequency range ∼0.1 Hz (modulations of EEG power) that reliably tracks musical phrasal structure. Next, we developed an approach to quantify the phrasal phase precession of the EEG power, revealing that phrase tracking is indeed an operation of active segmentation involving predictive processes. We demonstrate that the brain establishes complex musical structures online over long timescales (>5 s) and actively segments continuous music streams in a manner comparable to language processing. These two neural signatures, phrase tracking and phrasal phase precession, provide new conceptual and technical tools to study the processes underpinning high-level structure building using noninvasive recording techniques.
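The ~0.1 Hz power signature can be illustrated with a simple two-stage filter-and-envelope pipeline. The sketch below uses assumed parameters (broadband EEG `eeg` at `fs` Hz; illustrative filter bands), not the paper's analysis code.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def ultraslow_power(eeg, fs, band=(1.0, 8.0), slow=(0.05, 0.2)):
    """Ultra-slow (~0.1 Hz) fluctuations of band-limited EEG power.

    (1) band-pass the EEG, (2) take its Hilbert power envelope,
    (3) band-pass that envelope in an ultra-slow range.
    """
    sos = butter(2, band, btype="band", fs=fs, output="sos")
    power = np.abs(hilbert(sosfiltfilt(sos, eeg))) ** 2
    sos_slow = butter(2, slow, btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos_slow, power)

# Synthetic demo: EEG whose 1-8 Hz power waxes and wanes every ~10 s.
fs = 100
t = np.arange(0, 120, 1 / fs)
eeg = (1 + np.sin(2 * np.pi * 0.1 * t)) * np.sin(2 * np.pi * 4 * t)
slow = ultraslow_power(eeg, fs)
freqs = np.fft.rfftfreq(slow.size, 1 / fs)
print(freqs[np.argmax(np.abs(np.fft.rfft(slow)))])  # ~0.1 Hz
```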


Subjects
Auditory Perception, Electroencephalography, Music, Humans, Female, Male, Electroencephalography/methods, Adult, Auditory Perception/physiology, Young Adult, Acoustic Stimulation/methods, Brain/physiology
5.
Nat Rev Neurosci ; 21(6): 322-334, 2020 06.
Article in English | MEDLINE | ID: mdl-32376899

ABSTRACT

The recognition of spoken language has typically been studied by focusing on either words or their constituent elements (for example, low-level features or phonemes). More recently, the 'temporal mesoscale' of speech has been explored, specifically regularities in the envelope of the acoustic signal that correlate with syllabic information and that play a central role in production and perception processes. The temporal structure of speech at this scale is remarkably stable across languages, with a preferred range of rhythmicity of 2-8 Hz. Importantly, this rhythmicity is required by the processes underlying the construction of intelligible speech. Much current work focuses on audio-motor interactions in speech, highlighting behavioural and neural evidence that demonstrates how properties of perceptual and motor systems, and their relation, can underlie the mesoscale speech rhythms. The data invite the hypothesis that the speech motor cortex is best modelled as a neural oscillator, a conjecture that aligns well with current proposals highlighting the fundamental role of neural oscillations in perception and cognition. The findings also show motor theories (of speech) in a different light, placing new mechanistic constraints on accounts of the action-perception interface.


Assuntos
Córtex Motor/fisiologia , Periodicidade , Percepção da Fala/fisiologia , Fala/fisiologia , Humanos
6.
PLoS Biol ; 20(7): e3001712, 2022 07.
Article in English | MEDLINE | ID: mdl-35793349

ABSTRACT

People of all ages display the ability to detect and learn from patterns in seemingly random stimuli. Referred to as statistical learning (SL), this process is particularly critical when learning a spoken language, helping in the identification of discrete words within a spoken phrase. Here, by considering individual differences in speech auditory-motor synchronization, we demonstrate that recruitment of a specific neural network supports behavioral differences in SL from speech. While independent component analysis (ICA) of fMRI data revealed that a network of auditory and superior pre/motor regions is universally activated in the process of learning, a frontoparietal network is additionally and selectively engaged by only some individuals (high auditory-motor synchronizers). Importantly, activation of this frontoparietal network is related to a boost in learning performance, and interference with this network via articulatory suppression (AS; i.e., producing irrelevant speech during learning) normalizes performance across the entire sample. Our work provides novel insights into SL from speech and reconciles previous contrasting findings. These findings also highlight a more general need to factor in fundamental individual differences for a precise characterization of cognitive phenomena.


Assuntos
Percepção da Fala , Fala , Mapeamento Encefálico , Humanos , Imageamento por Ressonância Magnética , Fala/fisiologia , Percepção da Fala/fisiologia
7.
Proc Natl Acad Sci U S A ; 118(16)2021 04 20.
Article in English | MEDLINE | ID: mdl-33853943

ABSTRACT

The environment is shaped by two sources of temporal uncertainty: the discrete probability of whether an event will occur and-if it does-the continuous probability of when it will happen. These two types of uncertainty are fundamental to every form of anticipatory behavior including learning, decision-making, and motor planning. It remains unknown how the brain models the two uncertainty parameters and how they interact in anticipation. It is commonly assumed that the discrete probability of whether an event will occur has a fixed effect on event expectancy over time. In contrast, we first demonstrate that this pattern is highly dynamic and monotonically increases across time. Intriguingly, this behavior is independent of the continuous probability of when an event will occur. The effect of this continuous probability on anticipation is commonly proposed to be driven by the hazard rate (HR) of events. We next show that the HR fails to account for behavior and propose a model of event expectancy based on the probability density function of events. Our results hold for both vision and audition, suggesting independence of the representation of the two uncertainties from sensory input modality. These findings enrich the understanding of fundamental anticipatory processes and have provocative implications for many aspects of behavior and its neural underpinnings.
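The two competing predictors can be written directly: the hazard rate HR(t) = f(t) / (1 - F(t)) versus the plain probability density f(t). A numerical illustration for a Gaussian event-time distribution (parameters hypothetical):

```python
import numpy as np
from scipy.stats import norm

# Event times drawn from a Gaussian (mean 1.5 s, sd 0.3 s).
t = np.linspace(0.5, 2.5, 201)
f = norm.pdf(t, loc=1.5, scale=0.3)    # probability density f(t)
F = norm.cdf(t, loc=1.5, scale=0.3)    # cumulative distribution F(t)
hazard = f / (1.0 - F)                 # HR(t) = f(t) / (1 - F(t))

# The hazard rate keeps rising long after the density has peaked; the
# paper argues that behavior tracks f(t), not HR(t).
print("density at 1.5 s vs 2.4 s:", round(f[100], 3), round(f[190], 3))
print("hazard  at 1.5 s vs 2.4 s:", round(hazard[100], 3), round(hazard[190], 3))
```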


Assuntos
Antecipação Psicológica/fisiologia , Tomada de Decisões/fisiologia , Incerteza , Adulto , Percepção Auditiva/fisiologia , Feminino , Humanos , Aprendizagem/fisiologia , Masculino , Probabilidade , Análise Espaço-Temporal , Percepção Visual/fisiologia
8.
Proc Biol Sci ; 290(1994): 20222410, 2023 03 08.
Article in English | MEDLINE | ID: mdl-36855868

ABSTRACT

When speech is too fast, the tracking of the acoustic signal along the auditory pathway deteriorates, leading to suboptimal speech segmentation and decoding of speech information. Thus, speech comprehension is limited by the temporal constraints of the auditory system. Here we ask whether individual differences in auditory-motor coupling strength in part shape these temporal constraints. In two behavioural experiments, we characterize individual differences in the comprehension of naturalistic speech as a function of the individual synchronization between the auditory and motor systems and the preferred frequencies of the systems. As expected, speech comprehension declined at higher speech rates. Importantly, however, both higher auditory-motor synchronization and higher spontaneous speech motor production rates were predictive of better speech-comprehension performance. Furthermore, performance increased with higher working memory capacity (digit span) and higher linguistic, model-based sentence predictability-particularly so at higher speech rates and for individuals with high auditory-motor synchronization. The data provide evidence for a model of speech comprehension in which individual flexibility of not only the motor system but also auditory-motor synchronization may play a modulatory role.


Assuntos
Compreensão , Fala , Humanos , Acústica , Extremidades , Linguística
9.
Psychol Sci ; 34(5): 633-643, 2023 05.
Article in English | MEDLINE | ID: mdl-37053267

ABSTRACT

Much of human learning happens through interaction with other people, but little is known about how this process is reflected in the brains of students and teachers. Here, we concurrently recorded electroencephalography (EEG) data from nine groups, each of which contained four students and a teacher. All participants were young adults from the northeast United States. Alpha-band (8-12 Hz) brain-to-brain synchrony between students predicted both immediate and delayed posttest performance. Further, brain-to-brain synchrony was higher in specific lecture segments associated with questions that students answered correctly. Brain-to-brain synchrony between students and teachers predicted learning outcomes at an approximately 300-ms lag in the students' brain activity relative to the teacher's brain activity, which is consistent with the time course of spoken-language comprehension. These findings provide key new evidence for the importance of collecting brain data simultaneously from groups of learners in ecologically valid settings.
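The lagged synchrony measure can be sketched as a cross-correlation of alpha-band power envelopes between a teacher channel and a student channel; everything below (channel pairing, filter order, lag window) is an illustrative assumption, not the study's pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def best_alpha_lag(teacher, student, fs, max_lag_s=1.0):
    """Lag (s) at which student alpha power best matches the teacher's.

    Band-pass both signals at 8-12 Hz, take Hilbert power envelopes, and
    scan the cross-correlation over lags; a positive result means the
    student's activity trails the teacher's, as in the ~300 ms lag above.
    """
    sos = butter(4, (8, 12), btype="band", fs=fs, output="sos")
    env_t = np.abs(hilbert(sosfiltfilt(sos, teacher)))
    env_s = np.abs(hilbert(sosfiltfilt(sos, student)))
    env_t, env_s = env_t - env_t.mean(), env_s - env_s.mean()
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    r = [np.corrcoef(env_t[:len(env_t) - k] if k >= 0 else env_t[-k:],
                     env_s[k:] if k >= 0 else env_s[:len(env_s) + k])[0, 1]
         for k in lags]
    return lags[int(np.argmax(r))] / fs

# Demo: student signal = teacher's delayed by 300 ms, plus noise.
fs = 250
rng = np.random.default_rng(1)
teacher = rng.standard_normal(fs * 60)
student = np.roll(teacher, int(0.3 * fs)) + 0.5 * rng.standard_normal(fs * 60)
print(best_alpha_lag(teacher, student, fs))  # ~0.3
```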


Assuntos
Encéfalo , Aprendizagem , Adulto Jovem , Humanos , Estudantes
10.
PLoS Biol ; 18(3): e3000207, 2020 03.
Article in English | MEDLINE | ID: mdl-32119667

ABSTRACT

Speech perception is mediated by both left and right auditory cortices but with differential sensitivity to specific acoustic information contained in the speech signal. A detailed description of this functional asymmetry is missing, and the underlying models are widely debated. We analyzed cortical responses from 96 epilepsy patients with electrode implantation in left or right primary, secondary, and/or association auditory cortex (AAC). We presented short acoustic transients to noninvasively estimate the dynamical properties of multiple functional regions along the auditory cortical hierarchy. We show remarkably similar bimodal spectral response profiles in left and right primary and secondary regions, with evoked activity composed of dynamics in the theta (around 4-8 Hz) and beta-gamma (around 15-40 Hz) ranges. Beyond these first cortical levels of auditory processing, a hemispheric asymmetry emerged, with delta and beta band (3/15 Hz) responsivity prevailing in the right hemisphere and theta and gamma band (6/40 Hz) activity prevailing in the left. This asymmetry is also present during syllable presentation, but the evoked responses in AAC are more heterogeneous, with the co-occurrence of alpha (around 10 Hz) and gamma (>25 Hz) activity bilaterally. These intracranial data provide a more fine-grained and nuanced characterization of cortical auditory processing in the 2 hemispheres, shedding light on the neural dynamics that potentially shape auditory and speech processing at different levels of the cortical hierarchy.


Assuntos
Córtex Auditivo/fisiologia , Potenciais Evocados Auditivos/fisiologia , Percepção da Fala/fisiologia , Estimulação Acústica , Eletrodos Implantados , Eletroencefalografia , Epilepsia , Feminino , Lateralidade Funcional/fisiologia , Humanos , Masculino
11.
Eur J Neurosci ; 55(11-12): 3373-3390, 2022 06.
Article in English | MEDLINE | ID: mdl-34155728

ABSTRACT

Ample evidence shows that the human brain carefully tracks acoustic temporal regularities in the input, perhaps by entraining cortical neural oscillations to the rate of the stimulation. To what extent the entrained oscillatory activity influences processing of upcoming auditory events remains debated. Here, we revisit a critical finding from Hickok et al. (2015) that demonstrated a clear impact of auditory entrainment on subsequent auditory detection. Participants were asked to detect tones embedded in stationary noise, following a noise that was amplitude modulated at 3 Hz. Tonal targets occurred at various phases relative to the preceding noise modulation. The original study (N = 5) showed that the detectability of the tones (presented at near-threshold intensity) fluctuated cyclically at the same rate as the preceding noise modulation. We conducted an exact replication of the original paradigm (N = 23) and a conceptual replication using a shorter experimental procedure (N = 24). Neither experiment revealed significant entrainment effects at the group level. A restricted analysis on the subset of participants (36%) who did show the entrainment effect revealed no consistent phase alignment between detection facilitation and the preceding rhythmic modulation. Interestingly, both experiments showed group-wide presence of a non-cyclic behavioural pattern, wherein participants' detection of the tonal targets was lower at early and late time points of the target period. The two experiments highlight both the sensitivity of the task to elicit oscillatory entrainment and the striking individual variability in performance.
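The entrainment effect under test amounts to fitting a sinusoid at the 3 Hz modulation rate to detection performance as a function of target time within the cycle; a fitted amplitude near zero corresponds to the group-level null reported here. A sketch with hypothetical hit rates:

```python
import numpy as np
from scipy.optimize import curve_fit

def sinusoid(t, amp, phase, offset):
    """Cyclic modulation of detection at the 3 Hz noise-modulation rate."""
    return offset + amp * np.sin(2 * np.pi * 3.0 * t + phase)

# Hypothetical hit rates at target times spanning one 3 Hz cycle (~333 ms).
t = np.linspace(0, 1 / 3, 12, endpoint=False)
hits = np.array([.52, .60, .64, .61, .55, .47, .42, .40, .44, .50, .55, .58])

params, _ = curve_fit(sinusoid, t, hits, p0=(0.1, 0.0, 0.5))
amp, phase, offset = params
print(f"fitted modulation depth: {abs(amp):.3f}")  # ~0 would mean no entrainment
```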


Assuntos
Percepção Auditiva , Ruído , Estimulação Acústica/métodos , Percepção Auditiva/fisiologia , Humanos
12.
Eur J Neurosci ; 55(11-12): 3277-3287, 2022 06.
Article in English | MEDLINE | ID: mdl-35193163

ABSTRACT

Entrainment depends on sequential neural phase reset by regular stimulus onset, a temporal parameter. Entraining to sequences of identical stimuli also entails stimulus feature predictability, but this component is not readily separable from temporal regularity. To test if spectral regularities concur with temporal regularities in determining the strength of auditory entrainment, we devised sound sequences that varied in conditional perceptual inferences based on deviant sound repetition probability: strong inference (100% repetition probability: If a deviant appears, then it will repeat), weak inference (75% repetition probability) and no inference (50%: A deviant may or may not repeat with equal probability). We recorded EEG data from 15 young human participants pre-attentively listening to the experimental sound sequences delivered either isochronously or anisochronously (±20% jitter), at both delta (1.67 Hz) and theta (6.67 Hz) stimulation rates. Strong perceptual inferences significantly enhanced entrainment at either stimulation rate and determined positive correlations between precision in phase distribution at the onset of deviant trials and entrained power. We conclude that both spectral predictability and temporal regularity govern entrainment via neural phase control.
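Entrainment strength in such designs is commonly quantified as inter-trial phase coherence (ITPC) at the stimulation rate. The sketch below is a generic ITPC computation under assumed epoch shapes, not the authors' exact analysis.

```python
import numpy as np

def itpc_at(freq, epochs, fs):
    """Inter-trial phase coherence at one frequency.

    epochs : array (n_trials, n_samples) of EEG epochs aligned to
    stimulus onset. ITPC is the length of the mean resultant vector of
    the per-trial Fourier phases: 1 = perfectly consistent phase
    (strong entrainment), 0 = uniformly scattered phase.
    """
    n = epochs.shape[1]
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    k = int(np.argmin(np.abs(freqs - freq)))
    phases = np.angle(np.fft.rfft(epochs, axis=1)[:, k])
    return np.abs(np.mean(np.exp(1j * phases)))

# Demo at the delta stimulation rate used above (1.67 Hz).
fs, dur, n_trials = 250, 3.0, 40
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(2)
epochs = np.sin(2 * np.pi * 1.67 * t) + rng.standard_normal((n_trials, t.size))
print(itpc_at(1.67, epochs, fs))  # close to 1 for phase-locked trials
```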


Assuntos
Percepção Auditiva , Eletroencefalografia , Estimulação Acústica , Percepção Auditiva/fisiologia , Humanos
13.
Cereb Cortex ; 31(7): 3226-3236, 2021 06 10.
Article in English | MEDLINE | ID: mdl-33625488

ABSTRACT

In contrast to classical views of working memory (WM) maintenance, recent research investigating activity-silent neural states has demonstrated that persistent neural activity in sensory cortices is not necessary for active maintenance of information in WM. Previous studies in humans have measured putative memory representations indirectly, by decoding memory contents from neural activity evoked by a neutral impulse stimulus. However, it is unclear whether memory contents can also be decoded in different species and attentional conditions. Here, we employ a cross-species approach to test whether auditory memory contents can be decoded from electrophysiological signals recorded in different species. Awake human volunteers (N = 21) were exposed to auditory pure tone and noise burst stimuli during an auditory sensory memory task using electroencephalography. In a closely matching paradigm, anesthetized female rats (N = 5) were exposed to comparable stimuli while neural activity was recorded using electrocorticography from the auditory cortex. In both species, the acoustic frequency could be decoded from neural activity evoked by pure tones as well as neutral frozen noise burst stimuli. This finding demonstrates that memory contents can be decoded in different species and different states using homologous methods, suggesting that the mechanisms of sensory memory encoding are evolutionarily conserved across species.


Assuntos
Estimulação Acústica/métodos , Córtex Auditivo/fisiologia , Percepção Auditiva/fisiologia , Memória de Curto Prazo/fisiologia , Adulto , Animais , Eletrocorticografia/métodos , Eletroencefalografia/métodos , Feminino , Humanos , Masculino , Pessoa de Meia-Idade , Ratos , Ratos Wistar , Tempo de Reação/fisiologia , Especificidade da Espécie , Adulto Jovem
14.
Proc Natl Acad Sci U S A ; 116(20): 10113-10121, 2019 05 14.
Article in English | MEDLINE | ID: mdl-31019082

ABSTRACT

A body of research demonstrates convincingly a role for synchronization of auditory cortex to rhythmic structure in sounds including speech and music. Some studies hypothesize that an oscillator in auditory cortex could underlie important temporal processes such as segmentation and prediction. An important critique of these findings raises the plausible concern that what is measured is perhaps not an oscillator but is instead a sequence of evoked responses. The two distinct mechanisms could look very similar in the case of rhythmic input, but an oscillator might better provide the computational roles mentioned above (i.e., segmentation and prediction). We advance an approach to adjudicate between the two models: analyzing the phase lag between stimulus and neural signal across different stimulation rates. We ran numerical simulations of evoked and oscillatory computational models, showing that in the evoked case, phase lag is heavily rate-dependent, while the oscillatory model displays marked phase concentration across stimulation rates. Next, we compared these model predictions with magnetoencephalography data recorded while participants listened to music of varying note rates. Our results show that the phase concentration of the experimental data is more in line with the oscillatory model than with the evoked model. This finding supports an auditory cortical signal that (i) contains components of both bottom-up evoked responses and internal oscillatory synchronization whose strengths are weighted by their appropriateness for particular stimulus types and (ii) cannot be explained by evoked responses alone.
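The diagnostic logic is compact enough to simulate: a fixed-latency evoked response implies a phase lag that grows with stimulation rate, whereas an entrained oscillator predicts a near-constant phase lag, so the resultant-vector length of phase lags across rates separates the two models. A sketch with illustrative rates and latency (not the paper's simulation parameters):

```python
import numpy as np

rates = np.array([0.5, 1.0, 2.0, 3.0, 4.0])   # illustrative note rates (Hz)
latency = 0.150                                # evoked model: fixed response lag (s)

# Evoked model: a fixed time lag translates into a rate-dependent phase lag.
evoked_phase = 2 * np.pi * rates * latency
# Oscillator model: an entrained oscillator settles near a constant phase lag.
oscillator_phase = np.full_like(rates, np.pi / 4)

def phase_concentration(phases):
    """Resultant vector length across rates: near 1 = identical phase lag
    at every rate (oscillator-like); lower = rate-dependent lag (evoked-like)."""
    return np.abs(np.mean(np.exp(1j * phases)))

print("evoked:    ", round(phase_concentration(evoked_phase), 2))      # ~0.4
print("oscillator:", round(phase_concentration(oscillator_phase), 2))  # 1.0
```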


Assuntos
Córtex Auditivo/fisiologia , Modelos Biológicos , Música , Relógios Biológicos , Humanos , Acústica da Fala
15.
Neuroimage ; 227: 117436, 2021 02 15.
Article in English | MEDLINE | ID: mdl-33039619

ABSTRACT

When we feel connected or engaged during social behavior, are our brains in fact "in sync" in a formal, quantifiable sense? Most studies addressing this question use highly controlled tasks with homogeneous subject pools. In an effort to take a more naturalistic approach, we collaborated with art institutions to crowdsource neuroscience data: Over the course of 5 years, we collected electroencephalogram (EEG) data from thousands of museum and festival visitors who volunteered to engage in a 10-min face-to-face interaction. Pairs of participants with various levels of familiarity sat inside the Mutual Wave Machine-an artistic neurofeedback installation that translates real-time correlations of each pair's EEG activity into light patterns. Because such inter-participant EEG correlations are prone to noise contamination, in subsequent offline analyses we computed inter-brain coupling using Imaginary Coherence and Projected Power Correlations, two synchrony metrics that are largely immune to instantaneous, noise-driven correlations. When applying these methods to two subsets of recorded data with the most consistent protocols, we found that pairs' trait empathy, social closeness, engagement, and social behavior (joint action and eye contact) consistently predicted the extent to which their brain activity became synchronized, most prominently in low alpha (~7-10 Hz) and beta (~20-22 Hz) oscillations. These findings support an account where shared engagement and joint action drive coupled neural activity and behavior during dynamic, naturalistic social interactions. To our knowledge, this work constitutes a first demonstration that an interdisciplinary, real-world, crowdsourcing neuroscience approach may provide a promising method to collect large, rich datasets pertaining to real-life face-to-face interactions. Additionally, it is a demonstration of how the general public can participate and engage in the scientific process outside of the laboratory. Institutions such as museums, galleries, or any other organization where the public actively engages out of self-motivation, can help facilitate this type of citizen science research, and support the collection of large datasets under scientifically controlled experimental conditions. To further enhance the public interest for the out-of-the-lab experimental approach, the data and results of this study are disseminated through a website tailored to the general public (wp.nyu.edu/mutualwavemachine).
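Imaginary Coherence, one of the two metrics named above, is a standard measure that can be sketched from cross- and auto-spectra; because zero-lag (volume-conduction or common-noise) coupling is purely real, keeping only the imaginary part discards instantaneous correlations. The demo below uses assumed signals and parameters rather than the study's data.

```python
import numpy as np
from scipy.signal import csd, welch

def imaginary_coherence(x, y, fs, nperseg=512):
    """Magnitude of the imaginary part of the coherency between x and y."""
    f, sxy = csd(x, y, fs=fs, nperseg=nperseg)    # cross-spectrum
    _, sxx = welch(x, fs=fs, nperseg=nperseg)     # auto-spectra
    _, syy = welch(y, fs=fs, nperseg=nperseg)
    return f, np.abs(np.imag(sxy / np.sqrt(sxx * syy)))

# Demo: a shared 9 Hz rhythm with a quarter-cycle lag between two "brains".
fs = 250
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(3)
x = np.sin(2 * np.pi * 9 * t) + rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 9 * t - np.pi / 2) + rng.standard_normal(t.size)
f, icoh = imaginary_coherence(x, y, fs)
print(f[np.argmax(icoh)])  # ~9 Hz, in the low-alpha range reported above
```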


Assuntos
Encéfalo/fisiologia , Empatia/fisiologia , Comportamento Social , Crowdsourcing , Eletroencefalografia , Humanos , Relações Interpessoais , Neurorretroalimentação
16.
Dev Sci ; 24(6): e13121, 2021 11.
Article in English | MEDLINE | ID: mdl-34060181

ABSTRACT

The power and precision with which humans link language to cognition is unique to our species. By 3-4 months of age, infants have already established this link: simply listening to human language facilitates infants' success in fundamental cognitive processes. Initially, this link to cognition is also engaged by a broader set of acoustic stimuli, including non-human primate vocalizations (but not other sounds, like backwards speech). But by 6 months, non-human primate vocalizations no longer confer this cognitive advantage, which persists for speech. What remains unknown is the mechanism by which these sounds influence infant cognition, and how this initially broader set of privileged sounds narrows to only human speech between 4 and 6 months. Here, we recorded 4- and 6-month-olds' EEG responses to acoustic stimuli whose behavioral effects on infant object categorization have been previously established: infant-directed speech, backwards speech, and non-human primate vocalizations. We document that by 6 months, infants' 4-9 Hz neural activity is modulated in response to infant-directed speech and non-human primate vocalizations (the two stimuli that initially support categorization), but that 4-9 Hz neural activity is not modulated at either age by backward speech (an acoustic stimulus that does not support categorization at either age). These results advance the prior behavioral evidence to suggest that by 6 months, speech and non-human primate vocalizations elicit distinct changes in infants' cognitive state, influencing performance on foundational cognitive tasks such as object categorization.


Assuntos
Idioma , Percepção da Fala , Animais , Desenvolvimento Infantil/fisiologia , Cognição/fisiologia , Humanos , Lactente , Desenvolvimento da Linguagem , Fala/fisiologia , Percepção da Fala/fisiologia
17.
Cereb Cortex ; 30(4): 2600-2614, 2020 04 14.
Article in English | MEDLINE | ID: mdl-31761952

ABSTRACT

Natural sounds contain acoustic dynamics ranging from tens to hundreds of milliseconds. How does the human auditory system encode acoustic information over wide-ranging timescales to achieve sound recognition? Previous work (Teng et al. 2017) demonstrated a temporal coding preference for the theta and gamma ranges, but it remains unclear how acoustic dynamics between these two ranges are coded. Here, we generated artificial sounds with temporal structures over timescales from ~200 to ~30 ms and investigated temporal coding on different timescales. Participants discriminated sounds with temporal structures at different timescales while undergoing magnetoencephalography recording. Although considerable intertrial phase coherence can be induced by acoustic dynamics of all the timescales, classification analyses reveal that the acoustic information of all timescales is preferentially differentiated through the theta and gamma bands, but not through the alpha and beta bands; stimulus reconstruction shows that the acoustic dynamics in the theta and gamma ranges are preferentially coded. We demonstrate that the theta and gamma bands show the generality of temporal coding with comparable capacity. Our findings provide a novel perspective: acoustic information of all timescales is discretised into two discrete temporal chunks for further perceptual analysis.


Assuntos
Estimulação Acústica/métodos , Córtex Auditivo/fisiologia , Percepção Auditiva/fisiologia , Ritmo Gama/fisiologia , Magnetoencefalografia/métodos , Ritmo Teta/fisiologia , Adulto , Feminino , Humanos , Masculino , Som , Fatores de Tempo , Adulto Jovem
18.
J Cogn Neurosci ; 32(10): 1975-1983, 2020 10.
Article in English | MEDLINE | ID: mdl-32662732

ABSTRACT

Understanding speech in noise is a fundamental challenge for speech comprehension. This perceptual demand is amplified in a second language: It is a common experience in bars, train stations, and other noisy environments that degraded signal quality severely compromises second language comprehension. Through a novel design, paired with a carefully selected participant profile, we independently assessed signal-driven and knowledge-driven contributions to the brain bases of first versus second language processing. We were able to dissociate the neural processes driven by the speech signal from the processes that come from speakers' knowledge of their first versus second languages. The neurophysiological data show that, in combination with impaired access to top-down linguistic information in the second language, the locus of bilinguals' difficulty in understanding second language speech in noisy conditions arises from a failure to successfully perform a basic, low-level process: cortical entrainment to speech signals above the syllabic level.


Assuntos
Multilinguismo , Percepção da Fala , Compreensão , Humanos , Ruído , Fala
19.
Eur J Neurosci ; 52(2): 2889-2904, 2020 07.
Article in English | MEDLINE | ID: mdl-32080939

ABSTRACT

Changes in modulation rate are important cues for parsing acoustic signals, such as speech. We parametrically controlled modulation rate via the correlation coefficient (r) of amplitude spectra across fixed frequency channels between adjacent time frames: broadband modulation spectra are biased toward slow modulation rates with increasing r, and vice versa. By concatenating segments with different r, acoustic changes of various directions (e.g., changes from low to high correlation coefficients, that is, random-to-correlated or vice versa) and sizes (e.g., changes from low to high or from medium to high correlation coefficients) can be obtained. Participants listened to sound blocks and detected changes in correlation while MEG was recorded. Evoked responses to changes in correlation demonstrated (a) an asymmetric representation of change direction: random-to-correlated changes produced a prominent evoked field around 180 ms, while correlated-to-random changes evoked an earlier response with peaks at around 70 and 120 ms, whose topographies resemble those of the canonical P50m and N100m responses, respectively, and (b) a highly non-linear representation of correlation structure, whereby even small changes involving segments with a high correlation coefficient were much more salient than relatively large changes that did not involve segments with high correlation coefficients. Induced responses revealed phase tracking in the delta and theta frequency bands for the high correlation stimuli. The results confirm a high sensitivity for low modulation rates in human auditory cortex, both in terms of their representation and their segregation from other modulation rates.
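The stimulus construction can be sketched as an AR(1) process on frame-wise amplitude spectra with coefficient r; all parameters and the synthesis shortcut below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def correlated_frame_noise(r, n_frames=200, n_channels=64, frame_len=256, seed=0):
    """Noise whose frame-to-frame amplitude-spectrum correlation is ~r.

    Amplitude spectra across fixed frequency channels evolve as an AR(1)
    process with coefficient r between adjacent frames, so higher r biases
    the broadband modulation spectrum toward slower rates, mirroring the
    stimulus construction described above (a sketch, not the paper's code).
    """
    rng = np.random.default_rng(seed)
    amps = np.empty((n_frames, n_channels))
    amps[0] = rng.random(n_channels)
    for i in range(1, n_frames):
        amps[i] = r * amps[i - 1] + np.sqrt(1 - r**2) * rng.random(n_channels)
    # Synthesize each frame as a sum of fixed-frequency sinusoids with the
    # prescribed per-channel amplitudes (random but fixed phases), then
    # concatenate frames (no overlap-add, for brevity).
    t = np.arange(frame_len)
    freqs = np.arange(1, n_channels + 1)
    phases = rng.uniform(0, 2 * np.pi, n_channels)
    sines = np.sin(2 * np.pi * np.outer(freqs, t) / frame_len + phases[:, None])
    return (amps @ sines).reshape(-1)

slow = correlated_frame_noise(r=0.95)   # biased toward slow modulation rates
fast = correlated_frame_noise(r=0.05)   # biased toward fast modulation rates
```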


Assuntos
Córtex Auditivo , Estimulação Acústica , Percepção Auditiva , Potenciais Evocados Auditivos , Humanos , Magnetoencefalografia
20.
Proc Biol Sci ; 287(1923): 20193010, 2020 03 25.
Article in English | MEDLINE | ID: mdl-32208834

ABSTRACT

The timing of acoustic events is central to human speech and music. Tempo tends to be slower in aesthetic contexts: rates in poetic speech and music are slower than in non-poetic, running speech. We tested whether a general aesthetic preference for slower rates can account for this, using birdsong as a stimulus: it structurally resembles human sequences but is unbiased by their production or processing constraints. When listeners selected the birdsong playback tempo that was most pleasing, they showed no bias towards any range of note rates. However, upon hearing a novel stimulus, listeners rapidly formed a robust, implicit memory of its temporal properties, and developed a stimulus-specific preference for the memorized tempo. Interestingly, tempo perception in birdsong stimuli was strongly determined by individual, internal preferences for rates of 1-2 Hz. This suggests that processing complex sound sequences relies on a default time window, while aesthetic appreciation appears flexible, experience-based and not determined by absolute event rates.


Assuntos
Percepção Auditiva , Estética , Vocalização Animal , Estimulação Acústica , Animais , Humanos , Julgamento