Results 1 - 20 of 54
1.
Ear Hear ; 45(2): 425-440, 2024.
Article in English | MEDLINE | ID: mdl-37882091

ABSTRACT

OBJECTIVES: The listening demand incurred by speech perception fluctuates in normal conversation. At the acoustic-phonetic level, natural variation in pronunciation acts as speed bumps to accurate lexical selection. Any given utterance may be more or less phonetically ambiguous, a problem that must be resolved by the listener to choose the correct word. This becomes especially apparent when considering two common speech registers, clear and casual, that have characteristically different levels of phonetic ambiguity. Clear speech prioritizes intelligibility through hyperarticulation, which results in less ambiguity at the phonetic level, while casual speech tends to have a more collapsed acoustic space. We hypothesized that listeners would invest greater cognitive resources while listening to casual speech to resolve the increased amount of phonetic ambiguity, as compared with clear speech. To this end, we used pupillometry as an online measure of listening effort during perception of clear and casual continuous speech in two background conditions: quiet and noise. DESIGN: Forty-eight participants performed a probe detection task while listening to spoken, nonsensical sentences (masked and unmasked) while pupil size was recorded. Pupil size was modeled using growth curve analysis to capture the dynamics of the pupil response as the sentence unfolded. RESULTS: Pupil size during listening was sensitive to the presence of noise and speech register (clear/casual). Unsurprisingly, listeners had overall larger pupil dilations during speech perception in noise, replicating earlier work. The pupil dilation pattern for clear and casual sentences was considerably more complex. Pupil dilation during clear speech trials was slightly larger than for casual speech, across quiet and noisy backgrounds. CONCLUSIONS: We suggest that listener motivation could explain the larger pupil dilations to clearly spoken speech. We propose that, bounded by the context of this task, listeners devoted more resources to perceiving the speech signal with the greatest acoustic/phonetic fidelity. Further, we unexpectedly found systematic differences in pupil dilation preceding the onset of the spoken sentences. Together, these data demonstrate that the pupillary system is not merely reactive but also adaptive: sensitive to both task structure and listener motivation to maximize accurate perception in a limited-resource system.
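The growth curve analysis mentioned in the DESIGN section can be illustrated in miniature. The sketch below fits a quadratic to synthetic pupil time courses and compares the linear (rate-of-dilation) term across conditions; the data, the helper name `fit_growth_curve`, and all coefficient values are illustrative assumptions, and the published analysis would have used orthogonal polynomial terms in a mixed-effects framework rather than a plain polynomial fit.

```python
import numpy as np

def fit_growth_curve(t, pupil, degree=2):
    """Fit a polynomial growth curve to one averaged pupil time course.

    Returns coefficients, highest degree first (as np.polyfit does)."""
    return np.polyfit(t, pupil, degree)

# Synthetic, idealized time courses (arbitrary units), 0-3 s after onset.
t = np.linspace(0.0, 3.0, 61)
quiet = 0.20 * t - 0.04 * t ** 2   # smaller dilation in quiet
noise = 0.35 * t - 0.05 * t ** 2   # larger dilation in noise

coef_quiet = fit_growth_curve(t, quiet)
coef_noise = fit_growth_curve(t, noise)

# The linear term captures the rate of dilation as the sentence unfolds.
larger_in_noise = coef_noise[1] > coef_quiet[1]
```

Comparing fitted polynomial terms between conditions, rather than raw means, is what lets growth curve analysis describe the shape of the pupil response over time.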


Subjects
Pupil, Speech Perception, Humans, Pupil/physiology, Speech, Noise, Cognition, Speech Perception/physiology, Speech Intelligibility/physiology
2.
Neurobiol Lang (Camb) ; 4(1): 145-177, 2023.
Article in English | MEDLINE | ID: mdl-37229142

ABSTRACT

Though the right hemisphere has been implicated in talker processing, it is thought to play a minimal role in phonetic processing, at least relative to the left hemisphere. Recent evidence suggests that the right posterior temporal cortex may support learning of phonetic variation associated with a specific talker. In the current study, listeners heard a male talker and a female talker, one of whom produced an ambiguous fricative in /s/-biased lexical contexts (e.g., epi?ode) and one who produced it in /∫/-biased contexts (e.g., friend?ip). Listeners in a behavioral experiment (Experiment 1) showed evidence of lexically guided perceptual learning, categorizing ambiguous fricatives in line with their previous experience. Listeners in an fMRI experiment (Experiment 2) showed differential phonetic categorization as a function of talker, allowing for an investigation of the neural basis of talker-specific phonetic processing, though they did not exhibit perceptual learning (likely due to characteristics of our in-scanner headphones). Searchlight analyses revealed that the patterns of activation in the right superior temporal sulcus (STS) contained information about who was talking and what phoneme they produced. We take this as evidence that talker information and phonetic information are integrated in the right STS. Functional connectivity analyses suggested that the process of conditioning phonetic identity on talker information depends on the coordinated activity of a left-lateralized phonetic processing system and a right-lateralized talker processing system. Overall, these results clarify the mechanisms through which the right hemisphere supports talker-specific phonetic processing.

3.
Brain Lang ; 240: 105264, 2023 05.
Article in English | MEDLINE | ID: mdl-37087863

ABSTRACT

Theories suggest that speech perception is informed by listeners' beliefs of what phonetic variation is typical of a talker. A previous fMRI study found right middle temporal gyrus (RMTG) sensitivity to whether a phonetic variant was typical of a talker, consistent with literature suggesting that the right hemisphere may play a key role in conditioning phonetic identity on talker information. The current work used transcranial magnetic stimulation (TMS) to test whether the RMTG plays a causal role in processing talker-specific phonetic variation. Listeners were exposed to talkers who differed in how they produced voiceless stop consonants while TMS was applied to RMTG, left MTG, or scalp vertex. Listeners subsequently showed near-ceiling performance in indicating which of two variants was typical of a trained talker, regardless of previous stimulation site. Thus, even though the RMTG is recruited for talker-specific phonetic processing, modulation of its function may have only modest consequences.


Subjects
Phonetics, Speech Perception, Humans, Transcranial Magnetic Stimulation, Temporal Lobe/diagnostic imaging, Speech Perception/physiology, Magnetic Resonance Imaging
4.
J Exp Psychol Learn Mem Cogn ; 49(7): 1161-1175, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36757985

ABSTRACT

Individuals differ in their ability to perceive and learn unfamiliar speech sounds, but we lack a comprehensive theoretical account that predicts individual differences in this skill. Predominant theories largely attribute difficulties of non-native speech perception to the relationships between non-native speech sounds/contrasts and native-language categories. The goal of the current study was to test whether the predictions made by these theories can be extended to predict individual differences in naive perception of non-native speech sounds or learning of these sounds. Specifically, we hypothesized that the internal structure of native-language speech categories is the cause of difficulty in perception of unfamiliar sounds, such that learners who show more graded (i.e., less categorical) perception of sounds in their native language would have an advantage for perceiving non-native speech sounds because they would be less likely to assimilate unfamiliar speech tokens to their native-language categories. We tested this prediction in two experiments in which listeners categorized speech continua in their native language and performed tasks of discrimination or identification of difficult non-native speech sound contrasts. Overall, results did not support the hypothesis that individual differences in categorical perception of native-language speech sounds are responsible for variability in sensitivity to non-native speech sounds. However, participants who responded more consistently on a speech categorization task showed more accurate perception of non-native speech sounds. This suggests that individual differences in non-native speech perception are more related to the stability of phonetic processing abilities than to individual differences in phonetic category structure. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
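The distinction drawn here between categoricity and response consistency can be made concrete. The sketch below scores consistency as the inverted mean within-step spread of repeated ratings on a 0-1 scale; the function name, the rating scale, and the two hypothetical listeners are assumptions for illustration, not the study's actual measure.

```python
from statistics import mean, pstdev

def response_consistency(ratings_by_step):
    """Mean within-step SD of repeated ratings (0-1 scale), inverted so
    that higher values indicate more consistent responding."""
    sds = [pstdev(reps) for reps in ratings_by_step.values()]
    return 1.0 - mean(sds)

# Two hypothetical listeners rating a 5-step fricative continuum twice.
steady = {1: [0.05, 0.10], 2: [0.20, 0.25], 3: [0.50, 0.55],
          4: [0.80, 0.85], 5: [0.90, 0.95]}
variable = {1: [0.05, 0.45], 2: [0.10, 0.60], 3: [0.30, 0.80],
            4: [0.40, 0.90], 5: [0.50, 1.00]}

c_steady = response_consistency(steady)
c_variable = response_consistency(variable)
```

Note that both hypothetical listeners have the same graded mean identification function; only the trial-to-trial stability differs, which is exactly the dimension the abstract links to non-native perception.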


Subjects
Speech Perception, Humans, Language, Learning, Phonetics, Sound, Speech
5.
J Speech Lang Hear Res ; 66(2): 720-734, 2023 02 13.
Article in English | MEDLINE | ID: mdl-36668820

ABSTRACT

PURPOSE: Sleep-based memory consolidation has been shown to facilitate perceptual learning of atypical speech input including nonnative speech sounds, accented speech, and synthetic speech. The current research examined the role of sleep-based memory consolidation on perceptual learning for noise-vocoded speech, including maintenance of learning over a 1-week time interval. Because comprehending noise-vocoded speech requires extensive restructuring of the mapping between the acoustic signal and prelexical representations, sleep consolidation may be critical for this type of adaptation. Thus, the purpose of this study was to investigate the role of sleep-based memory consolidation on adaptation to noise-vocoded speech in listeners without hearing loss as a foundational step toward identifying parameters that can be useful to consider for auditory training with clinical populations. METHOD: Two groups of normal-hearing listeners completed a transcription training task with feedback for noise-vocoded sentences in either the morning or the evening. Learning was assessed through transcription accuracy before training, immediately after training, 12 hr after training, and 1 week after training for both trained and novel sentences. RESULTS: Both the morning and evening groups showed improved comprehension of noise-vocoded sentences immediately following training. Twelve hours later, the evening group showed stable gains (following a period of sleep), whereas the morning group demonstrated a decline in gains (following a period of wakefulness). One week after training, the morning and evening groups showed equivalent performance for both trained and novel sentences. CONCLUSION: Sleep-consolidated learning helps stabilize training gains for degraded speech input, which may hold clinical utility for optimizing rehabilitation recommendations.


Subjects
Memory Consolidation, Speech Perception, Humans, Speech, Learning, Sleep, Acoustic Stimulation, Perceptual Masking
6.
J Acoust Soc Am ; 152(1): 511, 2022 07.
Article in English | MEDLINE | ID: mdl-35931533

ABSTRACT

Parkinson's disease (PD) is a neurodegenerative condition primarily associated with its motor consequences. Although much of the focus within the speech domain has focused on PD's consequences for production, people with PD have been shown to differ in the perception of emotional prosody, loudness, and speech rate from age-matched controls. The current study targeted the effect of PD on perceptual phonetic plasticity, defined as the ability to learn and adjust to novel phonetic input, both in second language and native language contexts. People with PD were compared to age-matched controls (and, for three of the studies, a younger control population) in tasks of explicit non-native speech learning and adaptation to variation in native speech (compressed rate, accent, and the use of timing information within a sentence to parse ambiguities). The participants with PD showed significantly worse performance on the task of compressed rate and used the duration of an ambiguous fricative to segment speech to a lesser degree than age-matched controls, indicating impaired speech perceptual abilities. Exploratory comparisons also showed people with PD who were on medication performed significantly worse than their peers off medication on those two tasks and the task of explicit non-native learning.


Subjects
Parkinson Disease, Speech Perception, Humans, Language, Phonetics, Speech
7.
Brain Lang ; 226: 105070, 2022 03.
Article in English | MEDLINE | ID: mdl-35026449

ABSTRACT

The study of perceptual flexibility in speech depends on a variety of tasks that feature a large degree of variability between participants. Of critical interest is whether measures are consistent within an individual or across stimulus contexts. This is particularly key for individual difference designs that are deployed to examine the neural basis or clinical consequences of perceptual flexibility. In the present set of experiments, we assess the split-half reliability and construct validity of five measures of perceptual flexibility: three of learning in a native language context (e.g., understanding someone with a foreign accent) and two of learning in a non-native context (e.g., learning to categorize non-native speech sounds). We find that most of these tasks show an appreciable level of split-half reliability, although construct validity was sometimes weak. This provides good evidence for reliability for these tasks, while highlighting possible upper limits on expected effect sizes involving each measure.


Subjects
Speech Perception, Speech, Humans, Language, Phonetics, Reproducibility of Results
8.
J Exp Psychol Hum Percept Perform ; 47(12): 1673-1680, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34881952

ABSTRACT

Determining how human listeners achieve phonetic constancy despite a variable mapping between the acoustics of speech and phonemic categories is the longest-standing challenge in speech perception. A clue comes from studies where the talker changes randomly between stimuli, which slows processing compared with a single-talker baseline. These multitalker processing costs have been observed most often in speeded monitoring paradigms, where participants respond whenever a specific item occurs. Notably, the conventional paradigm imposes attentional demands via two forms of varied mapping in mixed-talker conditions. First, target recycling (i.e., allowing items to serve as targets on some trials but as distractors on others) potentially prevents the development of task automaticity. Second, in mixed trials, participants must respond to two unique stimuli (i.e., one target produced by each talker), whereas in blocked conditions, they need respond to only one unique stimulus (i.e., multiple tokens of a single target). We seek to understand how attentional demands influence talker normalization, as measured by multitalker processing costs. Across four experiments, multitalker processing costs persisted when target recycling was not allowed but diminished when only one stimulus served as the target on mixed trials. We discuss the logic of using varied mapping to elicit attentional effects and implications for theories of speech perception. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subjects
Speech Perception, Acoustics, Attention, Humans, Phonetics, Speech
9.
J Speech Lang Hear Res ; 64(10): 3720-3733, 2021 10 04.
Article in English | MEDLINE | ID: mdl-34525309

ABSTRACT

Purpose: Individuals vary in their ability to learn the sound categories of nonnative languages (nonnative phonetic learning) and to adapt to systematic differences, such as accent or talker differences, in the sounds of their native language (native phonetic learning). Difficulties with both native and nonnative learning are well attested in people with speech and language disorders relative to healthy controls, but substantial variability in these skills is also present in the typical population. This study examines whether this individual variability can be organized around a common ability that we label "phonetic plasticity." Method: A group of healthy young adult participants (N = 80), who attested they had no history of speech, language, neurological, or hearing deficits, completed two tasks of nonnative phonetic category learning, two tasks of learning to cope with variation in their native language, and seven tasks of other cognitive functions, distributed across two sessions. Performance on these 11 tasks was compared, and exploratory factor analysis was used to assess the extent to which performance on each task was related to the others. Results: Performance on both tasks of native learning and an explicit task of nonnative learning patterned together, suggesting that native and nonnative phonetic learning tasks rely on a shared underlying capacity, which is termed "phonetic plasticity." Phonetic plasticity was also associated with vocabulary, comprehension of words in background noise, and, more weakly, working memory. Conclusions: Nonnative sound learning and native language speech perception may rely on shared phonetic plasticity. The results suggest that good learners of native language phonetic variation are also good learners of nonnative phonetic contrasts. Supplemental Material: https://doi.org/10.23641/asha.16606778.
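The logic of the exploratory factor analysis can be sketched with a first-principal-component analogue: if one latent ability drives several tasks, every task loads in the same direction on the leading factor of the task correlation matrix. The simulation below assumes a single latent "phonetic plasticity" variable and four noisy task scores; all numbers are illustrative, and a full EFA would additionally handle factor retention and rotation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # simulated participants

# One shared latent ability driving four observed task scores.
plasticity = rng.normal(size=n)
tasks = np.stack(
    [plasticity + rng.normal(scale=0.8, size=n) for _ in range(4)], axis=1
)

R = np.corrcoef(tasks, rowvar=False)               # 4 x 4 task correlations
eigvals, eigvecs = np.linalg.eigh(R)               # ascending eigenvalues
loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])   # leading-factor loadings

# All tasks loading in the same direction is the one-shared-factor pattern.
same_sign = np.all(np.sign(loadings) == np.sign(loadings[0]))
```

If instead each task had its own independent driver, the leading eigenvalue would sit near 1 and the loadings would not pattern together, which is how the analysis distinguishes a shared capacity from task-specific skills.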


Subjects
Phonetics, Speech Perception, Humans, Individuality, Language, Noise, Young Adult
10.
Atten Percept Psychophys ; 83(6): 2367-2376, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33948883

ABSTRACT

Researchers have hypothesized that in order to accommodate variability in how talkers produce their speech sounds, listeners must perform a process of talker normalization. Consistent with this proposal, several studies have shown that spoken word recognition is slowed when speech is produced by multiple talkers compared with when all speech is produced by one talker (a multitalker processing cost). Nusbaum and colleagues have argued that talker normalization is modulated by attention (e.g., Nusbaum & Morin, 1992, Speech Perception, Production and Linguistic Structure, pp. 113-134). Some of the strongest evidence for this claim is from a speeded monitoring study where a group of participants who expected to hear two talkers showed a multitalker processing cost, but a separate group who expected one talker did not (Magnuson & Nusbaum, 2007, Journal of Experimental Psychology, 33[2], 391-409). In that study, however, the sample size was small and the crucial interaction was not significant. In this registered report, we present the results of a well-powered attempt to replicate those findings. In contrast to the previous study, we did not observe multitalker processing costs in either of our groups. To rule out the possibility that the null result was due to task constraints, we conducted a second experiment using a speeded classification task. As in Experiment 1, we found no influence of expectations on talker normalization, with no multitalker processing cost observed in either group. Our data suggest that the previous findings of Magnuson and Nusbaum (2007) be regarded with skepticism and that talker normalization may not be permeable to high-level expectations.


Subjects
Motivation, Speech Perception, Attention, Humans, Phonetics, Speech
11.
J Exp Psychol Learn Mem Cogn ; 47(4): 685-704, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33983786

ABSTRACT

A challenge for listeners is to learn the appropriate mapping between acoustics and phonetic categories for an individual talker. Lexically guided perceptual learning (LGPL) studies have shown that listeners can leverage lexical knowledge to guide this process. For instance, listeners learn to interpret ambiguous /s/-/∫/ blends as /s/ if they have previously encountered them in /s/-biased contexts like epi?ode. Here, we examined whether the degree of preceding lexical support might modulate the extent of perceptual learning. In Experiment 1, we first demonstrated that perceptual learning could be obtained in a modified LGPL paradigm where listeners were first biased to interpret ambiguous tokens as one phoneme (e.g., /s/) and then later as another (e.g., /∫/). In subsequent experiments, we tested whether the extent of learning differed depending on whether targets encountered predictive contexts or neutral contexts prior to the auditory target (e.g., epi?ode). Experiment 2 used auditory sentence contexts (e.g., "I love The Walking Dead and eagerly await every new . . ."), whereas Experiment 3 used written sentence contexts. In Experiment 4, participants did not receive sentence contexts but rather saw the written form of the target word (episode) or filler text (########) prior to hearing the critical auditory token. While we consistently observed effects of context on in-the-moment processing of critical words, the size of the learning effect was not modulated by the type of context. We hypothesize that boosting lexical support through preceding context may not strongly influence perceptual learning when ambiguous speech sounds can be identified solely from lexical information. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subjects
Learning, Phonetics, Speech Perception, Writing, Female, Humans, Knowledge, Male
12.
Brain Lang ; 218: 104959, 2021 07.
Article in English | MEDLINE | ID: mdl-33930722

ABSTRACT

Phonetic categories have undefined edges, such that individual tokens that belong to different speech sound categories may occupy the same region in acoustic space. In continuous speech, there are multiple sources of top-down information (e.g., lexical, semantic) that help to resolve the identity of an ambiguous phoneme. Of interest is how these top-down constraints interact with ambiguity at the phonetic level. In the current fMRI study, participants passively listened to sentences that varied in semantic predictability and in the amount of naturally-occurring phonetic competition. The left middle frontal gyrus, angular gyrus, and anterior inferior frontal gyrus were sensitive to both semantic predictability and the degree of phonetic competition. Notably, greater phonetic competition within non-predictive contexts resulted in a negatively-graded neural response. We suggest that uncertainty at the phonetic-acoustic level interacts with uncertainty at the semantic level, perhaps due to a failure of the network to construct a coherent meaning.


Subjects
Phonetics, Speech Perception, Brain Mapping, Humans, Language, Magnetic Resonance Imaging, Semantics, Speech
13.
Atten Percept Psychophys ; 83(5): 2217-2228, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33754298

ABSTRACT

Because different talkers produce their speech sounds differently, listeners benefit from maintaining distinct generative models (sets of beliefs) about the correspondence between acoustic information and phonetic categories for different talkers. A robust literature on phonetic recalibration indicates that when listeners encounter a talker who produces their speech sounds idiosyncratically (e.g., a talker who produces their /s/ sound atypically), they can update their generative model for that talker. Such recalibration has been shown to occur in a relatively talker-specific way. Because listeners in ecological situations often meet several new talkers at once, the present study considered how the process of simultaneously updating two distinct generative models compares to updating one model at a time. Listeners were exposed to two talkers, one who produced /s/ atypically and one who produced /∫/ atypically. Critically, these talkers only produced these sounds in contexts where lexical information disambiguated the phoneme's identity (e.g., epi_ode, flouri_ing). When initial exposure to the two talkers was blocked by voice (Experiment 1), listeners recalibrated to these talkers after relatively little exposure to each talker (32 instances per talker, of which 16 contained ambiguous fricatives). However, when the talkers were intermixed during learning (Experiment 2), listeners required more exposure trials before they were able to adapt to the idiosyncratic productions of these talkers (64 instances per talker, of which 32 contained ambiguous fricatives). Results suggest that there is a perceptual cost to simultaneously updating multiple distinct generative models, potentially because listeners must first select which generative model to update.
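The "generative model" framing used here is often formalized as Bayesian belief updating. Below is a minimal conjugate-Gaussian sketch of updating one talker-specific category belief from disambiguated tokens; the function name, the kHz values, and the known-variance simplification are illustrative assumptions, not the paper's model.

```python
def update_talker_category(prior_mean, prior_var, observations, obs_var):
    """Conjugate update of a Gaussian belief about where one talker places
    a category (e.g., an /s/ frication centroid), with known obs_var."""
    n = len(observations)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + sum(observations) / obs_var)
    return post_mean, post_var

# Prior belief about /s/ center frequency (kHz) meets an atypical talker.
prior_mean, prior_var = 6.0, 1.0
heard = [5.0, 5.2, 4.9, 5.1]  # tokens disambiguated by lexical context
post_mean, post_var = update_talker_category(prior_mean, prior_var, heard, 0.5)
```

The posterior mean shifts toward the talker's productions and the posterior variance shrinks with each token, which captures why more exposure trials were needed in the intermixed condition: evidence must first be routed to the correct talker's model before it can sharpen that model's belief.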


Subjects
Speech Perception, Voice, Humans, Learning, Phonetics, Sound
14.
Brain Lang ; 215: 104919, 2021 04.
Article in English | MEDLINE | ID: mdl-33524740

ABSTRACT

Listeners perceive speech sounds categorically. While group-level differences in categorical perception have been observed in children or individuals with reading disorders, recent findings suggest that typical adults vary in how categorically they perceive sounds. The current study investigated neural sources of individual variability in categorical perception of speech. Fifty-seven participants rated phonetic tokens on a visual analogue scale; categoricity and response consistency were measured and related to measures of brain structure from MRI. Increased surface area of the right middle frontal gyrus predicted more categorical perception of a fricative continuum. This finding supports the idea that frontal regions are sensitive to phonetic category-level information and extends it to make behavioral predictions at the individual level. Additionally, more gyrification in bilateral transverse temporal gyri predicted less consistent responses on the task, perhaps reflecting subtle variation in language ability across the population.


Subjects
Auditory Cortex, Speech Perception, Adult, Child, Humans, Individuality, Phonetics, Speech
15.
J Speech Lang Hear Res ; 63(8): 2667-2679, 2020 08 10.
Article in English | MEDLINE | ID: mdl-32755501

ABSTRACT

Purpose: Children and early adolescents seem to have an advantage over adults in acquiring nonnative speech sounds, supported by evidence showing that earlier age of acquisition strongly predicts second language attainment. Although many factors influence children's ultimate success in language learning, it is unknown whether children rely on different, perhaps more efficient learning mechanisms than adults. Method: The current study compared children (aged 10-16 years) and adults in their learning of a nonnative Hindi contrast. We tested the hypothesis that younger participants would show superior baseline discriminability or learning of the contrast, better memory for new sounds after a delay, or improved generalization to a new talker's voice. Measures of phonological and auditory skills were collected to determine whether individual variability in these skills predicts nonnative speech sound learning and whether these potential relationships differ between adults and children. Results: Adults showed superior pretraining sensitivity to the contrast compared to children, and these pretraining discrimination scores predicted learning and retention. Even though adults seemed to have an initial advantage in learning, children improved after a period of off-line consolidation on the trained identification task and began to catch up to adults after an overnight delay. Additionally, perceptual skills that predicted speech sound learning differed between adults and children, suggesting they rely on different learning mechanisms. Conclusions: These findings challenge the view that children are simply better speech sound learners than adults and suggest that their advantages may be due to different learning mechanisms or better retention of nonnative contrasts over the broader language learning trajectory. Supplemental Material: https://doi.org/10.23641/asha.12735914.


Subjects
Phonetics, Speech Perception, Adolescent, Adult, Child, Humans, Language, Language Development, Learning
16.
J Cogn Neurosci ; 32(10): 2001-2012, 2020 10.
Article in English | MEDLINE | ID: mdl-32662731

ABSTRACT

A listener's interpretation of a given speech sound can vary probabilistically from moment to moment. Previous experience (i.e., the contexts in which one has encountered an ambiguous sound) can further influence the interpretation of speech, a phenomenon known as perceptual learning for speech. This study used multivoxel pattern analysis to query how neural patterns reflect perceptual learning, leveraging archival fMRI data from a lexically guided perceptual learning study conducted by Myers and Mesite [Myers, E. B., & Mesite, L. M. Neural systems underlying perceptual adjustment to non-standard speech tokens. Journal of Memory and Language, 76, 80-93, 2014]. In that study, participants first heard ambiguous /s/-/∫/ blends in either /s/-biased lexical contexts (epi_ode) or /∫/-biased contexts (refre_ing); subsequently, they performed a phonetic categorization task on tokens from an /asi/-/a∫i/ continuum. In the current work, a classifier was trained to distinguish between phonetic categorization trials in which participants heard unambiguous productions of /s/ and those in which they heard unambiguous productions of /∫/. The classifier was able to generalize this training to ambiguous tokens from the middle of the continuum on the basis of individual participants' trial-by-trial perception. We take these findings as evidence that perceptual learning for speech involves neural recalibration, such that the pattern of activation approximates the perceived category. Exploratory analyses showed that left parietal regions (supramarginal and angular gyri) and right temporal regions (superior, middle, and transverse temporal gyri) were most informative for categorization. Overall, our results inform an understanding of how moment-to-moment variability in speech perception is encoded in the brain.
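The classifier analysis can be miniaturized as a nearest-centroid decoder: train on patterns from unambiguous trials, then ask which category's centroid an ambiguous pattern falls closest to. The sketch below uses simulated three-voxel patterns; the prototypes, noise level, and the 0.65/0.35 mixture are assumptions, and the study itself used searchlight classifiers over real fMRI data.

```python
import numpy as np

def train_centroids(patterns, labels):
    """Mean activation pattern per category (a minimal MVPA decoder)."""
    return {lab: patterns[labels == lab].mean(axis=0) for lab in set(labels)}

def decode(centroids, pattern):
    """Return the label whose centroid is nearest to the pattern."""
    return min(centroids, key=lambda lab: np.linalg.norm(pattern - centroids[lab]))

rng = np.random.default_rng(3)
s_proto = np.array([1.0, 0.0, 0.5])
sh_proto = np.array([0.0, 1.0, 0.5])

# Simulated voxel patterns for unambiguous /s/ and /sh/ training trials.
X = np.vstack([s_proto + rng.normal(0, 0.1, 3) for _ in range(20)]
              + [sh_proto + rng.normal(0, 0.1, 3) for _ in range(20)])
y = np.array(["s"] * 20 + ["sh"] * 20)
centroids = train_centroids(X, y)

# An ambiguous token whose pattern leans toward the perceived /s/ category.
ambiguous = 0.65 * s_proto + 0.35 * sh_proto
percept = decode(centroids, ambiguous)
```

The key move mirrored from the study is generalization: the decoder never sees ambiguous tokens during training, yet its category assignment for them can be compared against each listener's trial-by-trial percept.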


Subjects
Speech Perception, Speech, Humans, Language, Learning, Phonetics
17.
J Acoust Soc Am ; 147(3): EL289, 2020 03.
Article in English | MEDLINE | ID: mdl-32237871

ABSTRACT

Recent studies suggest that sleep-mediated consolidation processes help adults learn non-native speech sounds. However, overnight improvement was not seen when participants learned in the morning, perhaps resulting from native-language interference. The current study trained participants to perceive the Hindi dental/retroflex contrast in the morning and tested whether increased training can lead to overnight improvement. Results showed overnight effects regardless of training amount. In contrast to previous studies, participants in this study heard sounds in limited contexts (i.e., one talker and one vowel context), corroborating other findings that suggest overnight improvement occurs in non-native phonetic learning when variability is limited.

18.
Atten Percept Psychophys ; 82(4): 2066, 2020 May.
Article in English | MEDLINE | ID: mdl-32026448

ABSTRACT

Due to a production error, some IPA symbols were not included. The original article has been corrected.

19.
Atten Percept Psychophys ; 82(4): 2049-2065, 2020 May.
Article in English | MEDLINE | ID: mdl-31970707

ABSTRACT

Adult listeners often struggle to learn to distinguish speech sounds not present in their native language. High-variability training sets (i.e., stimuli produced by multiple talkers or stimuli that occur in diverse phonological contexts) often result in better retention of the learned information, as well as increased generalization to new instances. However, high-variability training is also more challenging, and not every listener can take advantage of this kind of training. An open question is how variability should be introduced to the learner in order to capitalize on the benefits of such training without derailing the training process. The current study manipulated phonological variability as native English speakers learned a difficult nonnative (Hindi) contrast by presenting the nonnative contrast in the context of two different vowels (/i/ and /u/). In a between-subjects design, variability was manipulated during training and during test. Participants were trained in the evening hours and returned the next morning for reassessment to test for retention of the speech sounds. We found that blocked training was superior to interleaved training for both learning and retention, but for learners in the interleaved training group, higher pretraining aptitude predicted better identification performance. Further, pretraining discrimination aptitude positively predicted changes in phonetic discrimination after a period of off-line consolidation, regardless of the training manipulation. These findings add to a growing literature suggesting that variability may come at a cost in phonetic learning and that aptitude can affect both learning and retention of nonnative speech sounds.


Subjects
Aptitude, Phonetics, Speech Perception, Humans, Learning
20.
Neurobiol Lang (Camb) ; 1(3): 339-364, 2020 Aug.
Article in English | MEDLINE | ID: mdl-35784619

ABSTRACT

The extent to which articulatory information embedded in incoming speech contributes to the formation of new perceptual categories for speech sounds has been a matter of discourse for decades. It has been theorized that the acquisition of new speech sound categories requires a network of sensory and speech motor cortical areas (the "dorsal stream") to successfully integrate auditory and articulatory information. However, it is possible that these brain regions are not sensitive specifically to articulatory information, but instead are sensitive to the abstract phonological categories being learned. We tested this hypothesis by training participants over the course of several days on an articulable non-native speech contrast and acoustically matched inarticulable nonspeech analogues. After reaching comparable levels of proficiency with the two sets of stimuli, activation was measured in fMRI as participants passively listened to both sound types. Decoding of category membership for the articulable speech contrast alone revealed a series of left and right hemisphere regions outside of the dorsal stream that have previously been implicated in the emergence of non-native speech sound categories, while no regions could successfully decode the inarticulable nonspeech contrast. Although activation patterns in the left inferior frontal gyrus, the middle temporal gyrus, and the supplementary motor area provided better information for decoding articulable (speech) sounds compared to the inarticulable (sine wave) sounds, the finding that dorsal stream regions do not emerge as good decoders of the articulable contrast alone suggests that other factors, including the strength and structure of the emerging speech categories, are more likely drivers of dorsal stream activation for novel sound learning.
