Results 1 - 20 of 56
1.
Sci Adv ; 10(20): eadm9797, 2024 May 17.
Article in English | MEDLINE | ID: mdl-38748798

ABSTRACT

Both music and language are found in all known human societies, yet no studies have compared similarities and differences between song, speech, and instrumental music on a global scale. In this Registered Report, we analyzed two global datasets: (i) 300 annotated audio recordings representing matched sets of traditional songs, recited lyrics, conversational speech, and instrumental melodies from our 75 coauthors speaking 55 languages; and (ii) 418 previously published adult-directed song and speech recordings from 209 individuals speaking 16 languages. Of our six preregistered predictions, five were strongly supported: Relative to speech, songs use (i) higher pitch, (ii) slower temporal rate, and (iii) more stable pitches, while both songs and speech used similar (iv) pitch interval size and (v) timbral brightness. Exploratory analyses suggest that features vary along a "musi-linguistic" continuum when including instrumental melodies and recited lyrics. Our study provides strong empirical evidence of cross-cultural regularities in music and speech.


Subject(s)
Language, Music, Speech, Humans, Speech/physiology, Male, Pitch Perception/physiology, Female, Adult, Preregistration Publication
2.
Cognition ; 246: 105757, 2024 05.
Article in English | MEDLINE | ID: mdl-38442588

ABSTRACT

One of the most important auditory categorization tasks a listener faces is determining a sound's domain, a process which is a prerequisite for successful within-domain categorization tasks such as recognizing different speech sounds or musical tones. Speech and song are universal in human cultures: how do listeners categorize a sequence of words as belonging to one or the other of these domains? There is growing interest in the acoustic cues that distinguish speech and song, but it remains unclear whether there are cross-cultural differences in the evidence upon which listeners rely when making this fundamental perceptual categorization. Here we use the speech-to-song illusion, in which some spoken phrases perceptually transform into song when repeated, to investigate cues to this domain-level categorization in native speakers of tone languages (Mandarin and Cantonese speakers residing in the United Kingdom and China) and in native speakers of a non-tone language (English). We find that native tone-language and non-tone-language listeners largely agree on which spoken phrases sound like song after repetition, and we also find that the strength of this transformation is not significantly different across language backgrounds or countries of residence. Furthermore, we find a striking similarity in the cues upon which listeners rely when perceiving word sequences as singing versus speech, including small pitch intervals, flat within-syllable pitch contours, and steady beats. These findings support the view that there are certain widespread cross-cultural similarities in the mechanisms by which listeners judge if a word sequence is spoken or sung.


Subject(s)
Speech Perception, Speech, Humans, Cues, Language, Phonetics, Pitch Perception
3.
J Exp Psychol Hum Percept Perform ; 50(1): 119-138, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38236259

ABSTRACT

A growing amount of attention has been given to examining the domain-general auditory processing of individual acoustic dimensions as a key driving force for adult L2 acquisition. Whereas auditory processing has traditionally been conceptualized as a bottom-up and encapsulated phenomenon, the interaction model (Kraus & Banai, 2007) proposes auditory processing as a set of perceptual, cognitive, and motoric abilities: the perception of acoustic details (acuity), the selection of relevant and irrelevant dimensions (attention), and the conversion of audio input into motor action (integration). To test this hypothesis, we examined the relationship between each component and the L2 outcomes of 102 adult Chinese speakers of English who varied in age, experience, and working memory. According to the results of the statistical analyses, (a) the test scores tapped into essentially distinct components of auditory processing (acuity, attention, and integration), and (b) these components played an equal role in explaining various aspects of L2 learning (phonology, morphosyntax) with large effects, even after biographical background and working memory were controlled for. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Language Development, Language, Adult, Humans, Learning, Auditory Perception, Cognition
4.
J Exp Psychol Learn Mem Cogn ; 50(2): 189-203, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37289511

ABSTRACT

Speech perception requires the integration of evidence from acoustic cues across multiple dimensions. Individuals differ in their cue weighting strategies, that is, the weight they assign to different dimensions during speech categorization. In two experiments, we investigate musical training as one potential predictor of individual differences in prosodic cue weighting strategies. Attentional theories of speech categorization suggest that prior experience with the task-relevance of a particular dimension leads that dimension to attract attention. Experiment 1 tested whether musicians and nonmusicians differed in their ability to selectively attend to pitch and loudness in speech. Compared to nonmusicians, musicians showed enhanced dimension-selective attention to pitch but not loudness. Experiment 2 tested the hypothesis that musicians would show greater pitch weighting during prosodic categorization due to prior experience with the task-relevance of pitch cues in music. Listeners categorized phrases that varied in the extent to which pitch and duration signaled the location of linguistic focus and phrase boundaries. During linguistic focus categorization, musicians upweighted pitch compared to nonmusicians. During phrase boundary categorization, musicians upweighted duration relative to nonmusicians. These results suggest that musical experience is linked with domain-general enhancements in the ability to selectively attend to certain acoustic dimensions in speech. As a result, musicians may place greater perceptual weight on a single primary dimension during prosodic categorization, while nonmusicians may be more likely to choose a perceptual strategy that integrates across multiple dimensions. These findings support attentional theories of cue weighting, which suggest attention influences listeners' perceptual weighting of acoustic dimensions during categorization. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Music, Speech Perception, Humans, Pitch Perception, Acoustic Stimulation, Attention
5.
Psychon Bull Rev ; 31(1): 137-147, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37430179

ABSTRACT

The auditory world is often cacophonous, with some sounds capturing attention and distracting us from our goals. Despite the universality of this experience, many questions remain about how and why sound captures attention, how rapidly behavior is disrupted, and how long this interference lasts. Here, we use a novel measure of behavioral disruption to test predictions made by models of auditory salience. Models predict that goal-directed behavior is disrupted immediately after points in time that feature a high degree of spectrotemporal change. We find that behavioral disruption is precisely time-locked to the onset of distracting sound events: Participants who tap to a metronome temporarily increase their tapping speed 750 ms after the onset of distractors. Moreover, this response is greater for more salient sounds (larger amplitude) and sound changes (greater pitch shift). We find that the time course of behavioral disruption is highly similar after acoustically disparate sound events: Both sound onsets and pitch shifts of continuous background sounds speed responses at 750 ms, with these effects dying out by 1,750 ms. These temporal distortions can be observed using only data from the first trial across participants. A potential mechanism underlying these results is that arousal increases after distracting sound events, leading to an expansion of time perception, and causing participants to misjudge when their next movement should begin.


Subject(s)
Time Perception, Humans, Acoustic Stimulation, Sound, Attention/physiology, Auditory Perception/physiology, Pitch Perception/physiology
6.
Dev Sci ; 27(1): e13420, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37350014

ABSTRACT

Auditory selective attention forms an important foundation of children's learning by enabling the prioritisation and encoding of relevant stimuli. It may also influence reading development, which relies on metalinguistic skills including the awareness of the sound structure of spoken language. Reports of attentional impairments and speech perception difficulties in noisy environments in dyslexic readers are also suggestive of the putative contribution of auditory attention to reading development. To date, it is unclear whether non-speech selective attention and its underlying neural mechanisms are impaired in children with dyslexia and to which extent these deficits relate to individual reading and speech perception abilities in suboptimal listening conditions. In this EEG study, we assessed non-speech sustained auditory selective attention in 106 7-to-12-year-old children with and without dyslexia. Children attended to one of two tone streams, detecting occasional sequence repeats in the attended stream, and performed a speech-in-speech perception task. Results show that when children directed their attention to one stream, inter-trial-phase-coherence at the attended rate increased in fronto-central sites; this, in turn, was associated with better target detection. Behavioural and neural indices of attention did not systematically differ as a function of dyslexia diagnosis. However, behavioural indices of attention did explain individual differences in reading fluency and speech-in-speech perception abilities: both these skills were impaired in dyslexic readers. Taken together, our results show that children with dyslexia do not show group-level auditory attention deficits but these deficits may represent a risk for developing reading impairments and problems with speech perception in complex acoustic environments. 
RESEARCH HIGHLIGHTS: Non-speech sustained auditory selective attention modulates EEG phase coherence in children with/without dyslexia. Children with dyslexia show difficulties in speech-in-speech perception. Attention relates to dyslexic readers' speech-in-speech perception and reading skills. Dyslexia diagnosis is not linked to behavioural/EEG indices of auditory attention.


Subject(s)
Dyslexia, Speech Perception, Child, Humans, Reading, Sound, Speech, Speech Disorders, Phonetics
7.
Cognition ; 244: 105696, 2024 03.
Article in English | MEDLINE | ID: mdl-38160651

ABSTRACT

From auditory perception to general cognition, the ability to play a musical instrument has been associated with skills both related and unrelated to music. However, it is unclear if these effects are bound to the specific characteristics of musical instrument training, as little attention has been paid to other populations such as audio engineers and designers whose auditory expertise may match or surpass that of musicians in specific auditory tasks or more naturalistic acoustic scenarios. We explored this possibility by comparing students of audio engineering (n = 20) to matched conservatory-trained instrumentalists (n = 24) and to naive controls (n = 20) on measures of auditory discrimination, auditory scene analysis, and speech-in-noise perception. We found that audio engineers and performing musicians had generally lower psychophysical thresholds than controls, with pitch perception showing the largest effect size. Compared to controls, audio engineers could better memorise and recall auditory scenes composed of non-musical sounds, whereas instrumental musicians performed best in a sustained selective attention task with two competing streams of tones. Finally, in a diotic speech-in-babble task, musicians showed lower signal-to-noise-ratio thresholds than both controls and engineers; however, a follow-up online study did not replicate this musician advantage. We also observed differences in personality that might account for group-based self-selection biases. Overall, we showed that investigating a wider range of forms of auditory expertise can help us corroborate (or challenge) the specificity of the advantages previously associated with musical instrument training.


Subject(s)
Music, Speech Perception, Humans, Auditory Perception, Pitch Perception, Cognition, Acoustic Stimulation
8.
J Exp Psychol Learn Mem Cogn ; 49(12): 1943-1955, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38127498

ABSTRACT

Languages differ in the importance of acoustic dimensions for speech categorization. This poses a potential challenge for second language (L2) learners, and the extent to which adult L2 learners can acquire new perceptual strategies for speech categorization remains unclear. This study investigated the effects of extensive English L2 immersion on speech perception strategies and dimension-selective attention ability in native Mandarin speakers. Experienced first language (L1) Mandarin speakers (length of U.K. residence > 3 years) demonstrated more native-like weighting of cues to L2 suprasegmental categorization relative to inexperienced Mandarin speakers (length of residence < 1 year), weighting duration more highly. However, both the experienced and the inexperienced Mandarin speakers continued to weight duration less highly and pitch more highly during musical beat categorization and struggled to ignore pitch and selectively attend to amplitude in speech, relative to native English speakers. These results suggest that adult L2 experience can lead to retuning of perceptual strategies in specific contexts, but global acoustic salience is more resistant to change. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Language, Speech Perception, Adult, Humans, Speech, Cues, Attention, Phonetics
9.
Psychon Bull Rev ; 30(1): 373-382, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35915382

ABSTRACT

Segmental speech units such as phonemes are described as multidimensional categories whose perception involves contributions from multiple acoustic input dimensions, and the relative perceptual weights of these dimensions respond dynamically to context. For example, when speech is altered to create an "accent" in which two acoustic dimensions are correlated in a manner opposite that of long-term experience, the dimension that carries less perceptual weight is down-weighted to contribute less in category decisions. It remains unclear, however, whether this short-term reweighting extends to perception of suprasegmental features that span multiple phonemes, syllables, or words, in part because it has remained debatable whether suprasegmental features are perceived categorically. Here, we investigated the relative contribution of two acoustic dimensions to word emphasis. Participants categorized instances of a two-word phrase pronounced with typical covariation of fundamental frequency (F0) and duration, and in the context of an artificial "accent" in which F0 and duration (established in prior research on English speech as "primary" and "secondary" dimensions, respectively) covaried atypically. When categorizing "accented" speech, listeners rapidly down-weighted the secondary dimension (duration). This result indicates that listeners continually track short-term regularities across speech input and dynamically adjust the weight of acoustic evidence for suprasegmental decisions. Thus, dimension-based statistical learning appears to be a widespread phenomenon in speech perception extending to both segmental and suprasegmental categorization.


Subject(s)
Speech Acoustics, Speech Perception, Humans, Phonetics, Speech, Language
10.
J Exp Psychol Hum Percept Perform ; 48(12): 1410-1426, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36442040

ABSTRACT

Recent evidence suggests that domain-general auditory processing (sensitivity to the spectro-temporal characteristics of sounds) helps determine individual differences in L2 speech acquisition outcomes. The current study examined the extent to which focused training could enhance auditory processing ability, and whether this had a concomitant impact on L2 vowel proficiency. A total of 98 Japanese learners of English were divided into four groups: (1) Auditory-Only (F2 discrimination training); (2) Phonetic-Only (English [æ] and [ʌ] identification training); (3) Auditory-Phonetic (a combination of auditory and phonetic training); and (4) Control training. The results showed that the Phonetic-Only group improved only their English [æ] and [ʌ] identification, while the Auditory-Only and Auditory-Phonetic groups enhanced both auditory and phonetic skills. The results suggest that a learner's auditory acuity to key, domain-general acoustic cues (F2 = 1200-1600 Hz) promotes the acquisition of knowledge about speech categories (English [æ] vs. [ʌ]). (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Language, Speech, Humans, Auditory Perception, Phonetics, Discrimination (Psychology)
11.
Cognition ; 229: 105236, 2022 12.
Article in English | MEDLINE | ID: mdl-36027789

ABSTRACT

Growing evidence suggests a broad relationship between individual differences in auditory processing ability and the rate and ultimate attainment of language acquisition throughout the lifespan, including post-pubertal second language (L2) speech learning. However, little is known about how the precision of processing of specific auditory dimensions relates to the acquisition of specific L2 segmental contrasts. In the context of 100 late Japanese-English bilinguals with diverse profiles of classroom and immersion experience, the current study set out to investigate the link between the perception of several auditory dimensions (F3 frequency, F2 frequency, and duration) in non-verbal sounds and English [r]-[l] perception and production proficiency. Whereas participants' biographical factors (the presence/absence of immersion) accounted for a large amount of variance in the success of learning this contrast, the outcomes were also tied to their acuity to the most reliable, new auditory cues (F3 variation) and the less reliable but already-familiar cues (F2 variation). This finding suggests that individuals can vary in terms of how they perceive, utilize, and make the most of information conveyed by specific acoustic dimensions. When perceiving more naturalistic spoken input, where speech contrasts can be distinguished via a combination of numerous cues, some can attain a high level of L2 speech proficiency by using nativelike and/or non-nativelike strategies in a complementary fashion.


Subject(s)
Multilingualism, Speech Perception, Auditory Perception, Humans, Language, Phonetics
12.
Neuroimage ; 252: 119024, 2022 05 15.
Article in English | MEDLINE | ID: mdl-35231629

ABSTRACT

To make sense of complex soundscapes, listeners must select and attend to task-relevant streams while ignoring uninformative sounds. One possible neural mechanism underlying this process is alignment of endogenous oscillations with the temporal structure of the target sound stream. Such a mechanism has been suggested to mediate attentional modulation of neural phase-locking to the rhythms of attended sounds. However, such modulations are compatible with an alternate framework, where attention acts as a filter that enhances exogenously-driven neural auditory responses. Here we attempted to test several predictions arising from the oscillatory account by playing two tone streams varying across conditions in tone duration and presentation rate; participants attended to one stream or listened passively. Attentional modulation of the evoked waveform was roughly sinusoidal and scaled with rate, while the passive response did not. However, there was only limited evidence for continuation of modulations through the silence between sequences. These results suggest that attentionally-driven changes in phase alignment reflect synchronization of slow endogenous activity with the temporal structure of attended stimuli.


Subject(s)
Auditory Cortex, Electroencephalography, Acoustic Stimulation/methods, Auditory Cortex/physiology, Auditory Perception/physiology, Caffeine, Electroencephalography/methods, Humans, Sound
13.
J Exp Psychol Hum Percept Perform ; 47(12): 1681-1697, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34881953

ABSTRACT

In the speech-to-song illusion, certain spoken phrases are perceived as sung after repetition. One possible explanation for this increase in musicality is that, as phrases are repeated, lexical activation dies off, enabling listeners to focus on the melodic and rhythmic characteristics of stimuli and assess them for the presence of musical structure. Here we tested the idea that perception of the illusion requires implicit assessment of melodic and rhythmic structure by presenting individuals with phrases that tend to be perceived as song when repeated, as well as phrases that tend to continue to be perceived as speech when repeated, measuring the strength of the illusion as the rating difference between these two stimulus categories after repetition. Illusion strength varied widely and stably between listeners, with large individual differences and high split-half reliability, suggesting that not all listeners are equally able to detect musical structure in speech. Although variability in illusion strength was unrelated to degree of musical training, participants who perceived the illusion more strongly were proficient in several musical skills, including beat perception, tonality perception, and selective attention to pitch. These findings support models of the speech-to-song illusion in which experience of the illusion is based on detection of musical characteristics latent in spoken phrases. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Illusions, Music, Speech Perception, Aptitude, Humans, Individuality, Pitch Perception, Reproducibility of Results, Speech
14.
Front Neurosci ; 15: 717572, 2021.
Article in English | MEDLINE | ID: mdl-34955707

ABSTRACT

While there is evidence for bilingual enhancements of inhibitory control and auditory processing, two processes that are fundamental to daily communication, it is not known how bilinguals utilize these cognitive and sensory enhancements during real-world listening. To test our hypothesis that bilinguals engage their enhanced cognitive and sensory processing in real-world listening situations, bilinguals and monolinguals performed a selective attention task involving competing talkers, a common demand of everyday listening, and then later passively listened to the same competing sentences. During the active and passive listening periods, evoked responses to the competing talkers were collected to understand how online auditory processing facilitates active listening and if this processing differs between bilinguals and monolinguals. Additionally, participants were tested on a separate measure of inhibitory control to see if inhibitory control abilities related with performance on the selective attention task. We found that although monolinguals and bilinguals performed similarly on the selective attention task, the groups differed in the neural and cognitive processes engaged to perform this task, compared to when they were passively listening to the talkers. Specifically, during active listening monolinguals had enhanced cortical phase consistency while bilinguals demonstrated enhanced subcortical phase consistency in the response to the pitch contours of the sentences, particularly during passive listening. Moreover, bilinguals' performance on the inhibitory control test related with performance on the selective attention test, a relationship that was not seen for monolinguals. These results are consistent with the hypothesis that bilinguals utilize inhibitory control and enhanced subcortical auditory processing in everyday listening situations to engage with sound in ways that are different than monolinguals.

15.
Neuroimage ; 244: 118544, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34492294

ABSTRACT

Some theories of auditory categorization suggest that auditory dimensions that are strongly diagnostic for particular categories - for instance voice onset time or fundamental frequency in the case of some spoken consonants - attract attention. However, prior cognitive neuroscience research on auditory selective attention has largely focused on attention to simple auditory objects or streams, and so little is known about the neural mechanisms that underpin dimension-selective attention, or how the relative salience of variations along these dimensions might modulate neural signatures of attention. Here we investigate whether dimensional salience and dimension-selective attention modulate the cortical tracking of acoustic dimensions. In two experiments, participants listened to tone sequences varying in pitch and spectral peak frequency; these two dimensions changed at different rates. Inter-trial phase coherence (ITPC) and amplitude of the EEG signal at the frequencies tagged to pitch and spectral changes provided a measure of cortical tracking of these dimensions. In Experiment 1, tone sequences varied in the size of the pitch intervals, while the size of spectral peak intervals remained constant. Cortical tracking of pitch changes was greater for sequences with larger compared to smaller pitch intervals, with no difference in cortical tracking of spectral peak changes. In Experiment 2, participants selectively attended to either pitch or spectral peak. Cortical tracking was stronger in response to the attended compared to unattended dimension for both pitch and spectral peak. These findings suggest that attention can enhance the cortical tracking of specific acoustic dimensions rather than simply enhancing tracking of the auditory object as a whole.


Subject(s)
Acoustics, Attention/physiology, Auditory Perception/physiology, Cerebral Cortex/physiology, Adult, Cognitive Neuroscience, Electroencephalography, Female, Humans, Male, Middle Aged, Pitch Perception/physiology, Voice
16.
J Exp Child Psychol ; 210: 105196, 2021 10.
Article in English | MEDLINE | ID: mdl-34090237

ABSTRACT

The onset of reading ability is rife with individual differences, with some children termed "early readers" and some falling behind from the very beginning. Reading skill in children has been linked to an ability to remember nonverbal rhythms, specifically in the auditory modality. It has been hypothesized that the link between rhythm skills and reading reflects a shared reliance on the ability to extract temporal structure from sound. Here we tested this hypothesis by investigating whether the link between rhythm memory and reading depends on the modality in which rhythms are presented. We tested 75 primary school-aged children aged 7-11 years on a within-participants battery of reading and rhythm tasks. Participants received a reading efficiency task followed by three rhythm tasks (auditory, visual, and audiovisual). Results showed that children who performed poorly on the reading task also performed poorly on the tasks that required them to remember and repeat back nonverbal rhythms. In addition, these children showed a rhythmic deficit not just in the auditory domain but also in the visual domain. However, auditory rhythm memory explained additional variance in reading ability even after controlling for visual memory. These results suggest that reading ability and rhythm memory rely both on shared modality-general cognitive processes and on the ability to perceive the temporal structure of sound.


Subject(s)
Names, Reading, Auditory Perception, Child, Humans, Individuality, Memory, Perception
17.
Cognition ; 206: 104481, 2021 01.
Article in English | MEDLINE | ID: mdl-33075568

ABSTRACT

Speech and music are highly redundant communication systems, with multiple acoustic cues signaling the existence of perceptual categories. This redundancy makes these systems robust to the influence of noise, but necessitates the development of perceptual strategies: listeners need to decide how much importance to place on each source of information. Prior empirical work and modeling has suggested that cue weights primarily reflect within-task statistical learning, as listeners assess the reliability with which different acoustic dimensions signal a category and modify their weights accordingly. Here we present evidence that perceptual experience can lead to changes in cue weighting that extend across tasks and across domains, suggesting that perceptual strategies reflect both global biases and local (i.e. task-specific) learning. In two experiments, native speakers of Mandarin (N = 45)-where pitch is a crucial cue to word identity-placed more importance on pitch and less importance on other dimensions compared to native speakers of non-tonal languages English (N = 45) and Spanish (N = 27), during the perception of both English speech and musical beats. In a third experiment, we further show that Mandarin speakers are better able to attend to pitch and ignore irrelevant variation in other dimensions in speech compared to English and Spanish speakers, and even struggle to ignore pitch when asked to attend to other dimensions. Thus, an individual's idiosyncratic auditory perceptual strategy reflects a complex mixture of congenital predispositions, task-specific learning, and biases instilled by extensive experience in making use of important dimensions in their native language.


Subject(s)
Language, Speech Perception, Cues, Humans, Pitch Perception, Reproducibility of Results, Speech
18.
Neuroimage ; 224: 117396, 2021 01 01.
Article in English | MEDLINE | ID: mdl-32979522

ABSTRACT

To extract meaningful information from complex auditory scenes like a noisy playground, rock concert, or classroom, children can direct attention to different sound streams. One means of accomplishing this might be to align neural activity with the temporal structure of a target stream, such as a specific talker or melody. However, this may be more difficult for children with ADHD, who can struggle with accurately perceiving and producing temporal intervals. In this EEG study, we found that school-aged children's attention to one of two temporally-interleaved isochronous tone 'melodies' was linked to an increase in phase-locking at the melody's rate, and a shift in neural phase that aligned the neural responses with the attended tone stream. Children's attention task performance and neural phase alignment with the attended melody were linked to performance on temporal production tasks, suggesting that children with more robust control over motor timing were better able to direct attention to the time points associated with the target melody. Finally, we found that although children with ADHD performed less accurately on the tonal attention task than typically developing children, they showed the same degree of attentional modulation of phase locking and neural phase shifts, suggesting that children with ADHD may have difficulty with attentional engagement rather than attentional selection.


Subject(s)
Attention Deficit Disorder with Hyperactivity/physiopathology , Auditory Cortex/physiopathology , Auditory Perception/physiology , Sound , Acoustic Stimulation/methods , Auditory Cortex/physiology , Child , Electroencephalography/methods , Female , Humans , Male
19.
Elife ; 9, 2020 08 07.
Article in English | MEDLINE | ID: mdl-32762842

RESUMEN

Individuals with congenital amusia have a lifelong history of unreliable pitch processing. Accordingly, they downweight pitch cues during speech perception and instead rely on other dimensions such as duration. We investigated the neural basis for this strategy. During fMRI, individuals with amusia (N = 15) and controls (N = 15) read sentences where a comma indicated a grammatical phrase boundary. They then heard two spoken sentences that differed only in pitch and/or duration cues and selected the best match for the written sentence. Prominent reductions in functional connectivity were detected in the amusia group between left prefrontal language-related regions and right hemisphere pitch-related regions, and these reductions reflected the between-group differences in cue weights in the same listeners. Connectivity differences between these regions were not present during a control task. Our results indicate that the reliability of perceptual dimensions is linked with functional connectivity between frontal and perceptual regions and suggest a compensatory mechanism.
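Functional connectivity in fMRI analyses of this kind is commonly quantified as the correlation between the time series of two regions of interest. The sketch below simulates a "control-like" pair of regions sharing a common fluctuation and an "amusia-like" pair in which the pitch region is largely decoupled; region names, coupling strengths, and the number of volumes are illustrative assumptions, not the study's data or pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

def functional_connectivity(ts_a, ts_b):
    """Functional connectivity as the Pearson correlation of two ROI time series."""
    return np.corrcoef(ts_a, ts_b)[0, 1]

n_vols = 300                       # number of fMRI volumes (illustrative)
shared = rng.normal(size=n_vols)   # common fluctuation driving both regions

# "Control-like": frontal and pitch regions share much of their signal.
frontal = shared + 0.5 * rng.normal(size=n_vols)
pitch_roi = shared + 0.5 * rng.normal(size=n_vols)

# "Amusia-like": the pitch region's signal is mostly independent.
pitch_roi_amusia = 0.3 * shared + rng.normal(size=n_vols)

fc_control = functional_connectivity(frontal, pitch_roi)
fc_amusia = functional_connectivity(frontal, pitch_roi_amusia)
```

The reported group difference corresponds to `fc_amusia` falling well below `fc_control`; relating each simulated listener's connectivity value to a behavioral cue weight would mirror the brain-behavior link the abstract describes.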


Spoken language is colored by fluctuations in pitch and rhythm. Rather than speaking in a flat monotone, we allow our sentences to rise and fall. We vary the length of syllables, drawing out some, and shortening others. These fluctuations, known as prosody, add emotion to speech and denote punctuation. In written language, we use a comma or a period to signal a boundary between phrases. In speech, we use changes in pitch (how deep or sharp a voice sounds) or in the length of syllables.

Having more than one type of cue that can signal emotion or transitions between sentences has a number of advantages. It means that people can understand each other even when factors such as background noise obscure one set of cues. It also means that people with impaired sound perception can still understand speech. Those with a condition called congenital amusia, for example, struggle to perceive pitch, but they can compensate for this difficulty by placing greater emphasis on other aspects of speech.

Jasmin et al. showed how the brain achieves this by comparing the brain activities of people with and without amusia. Participants were asked to read sentences on a screen where a comma indicated a boundary between two phrases. They then heard two spoken sentences, and had to choose the one that matched the written sentence. The spoken sentences used changes in pitch and/or syllable duration to signal the position of the comma. This provided listeners with the information needed to distinguish between "after John runs the race, ..." and "after John runs, the race...", for example.

When two brain regions communicate, they tend to increase their activity at around the same time. The brain regions are then said to show functional connectivity. Jasmin et al. found that compared to healthy volunteers, people with amusia showed less functional connectivity between left hemisphere brain regions that process language and right hemisphere regions that process pitch. In other words, because pitch is a less reliable source of information for people with amusia, they recruit pitch-related brain regions less when processing speech.

These results add to our understanding of how brains compensate for impaired perception. This may be useful for understanding the neural basis of compensation in other clinical conditions. It could also help us design bespoke hearing aids or other communication devices, such as computer programs that convert text into speech. Such programs could tailor the pitch and rhythm characteristics of the speech they produce to suit the perception of individual users.


Subject(s)
Auditory Perceptual Disorders/physiopathology , Speech Perception/physiology , Adult , Aged , Female , Humans , Magnetic Resonance Imaging , Middle Aged , United Kingdom
20.
Neuroimage ; 213: 116717, 2020 06.
Article in English | MEDLINE | ID: mdl-32165265

RESUMEN

How does the brain follow a sound that is mixed with others in a noisy environment? One possible strategy is to allocate attention to task-relevant time intervals. Prior work has linked auditory selective attention to alignment of neural modulations with stimulus temporal structure. However, since this prior research used relatively easy tasks and focused on analysis of main effects of attention across participants, relatively little is known about the neural foundations of individual differences in auditory selective attention. Here we investigated individual differences in auditory selective attention by asking participants to perform a 1-back task on a target auditory stream while ignoring a distractor auditory stream presented 180° out of phase. Neural entrainment to the attended auditory stream was strongly linked to individual differences in task performance. Some variability in performance was accounted for by degree of musical training, suggesting a link between long-term auditory experience and auditory selective attention. To investigate whether short-term improvements in auditory selective attention are possible, we gave participants 2 h of auditory selective attention training and found improvements in task performance as well as enhanced effects of attention on neural phase angle. Our results suggest that although there exist large individual differences in auditory selective attention and attentional modulation of neural phase angle, this skill improves after a small amount of targeted training.
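The stimulus design described here, a distractor stream presented 180° out of phase with the target, can be sketched by generating the two onset sequences: the distractor is the target sequence shifted by half a period, so the merged streams form a single evenly spaced sequence at twice the rate. The tone rate and count below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

rate = 1.25                  # tones per second in each stream (illustrative)
period = 1.0 / rate
n_tones = 8

target_onsets = np.arange(n_tones) * period
# Distractor presented 180 degrees out of phase: shifted by half a period.
distractor_onsets = target_onsets + period / 2

# The two streams interleave perfectly: the merged onsets are evenly
# spaced at twice the single-stream rate.
merged = np.sort(np.concatenate([target_onsets, distractor_onsets]))
gaps = np.diff(merged)
```

Because the merged sequence is isochronous, only the phase of a listener's neural entrainment (not its frequency) distinguishes attending to the target from attending to the distractor, which is why the abstract's analyses focus on neural phase angle.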


Subject(s)
Attention/physiology , Auditory Perception/physiology , Brain/physiology , Individuality , Acoustic Stimulation , Adolescent , Adult , Electroencephalography , Female , Humans , Male , Young Adult