Results 1 - 20 of 55
1.
Ear Hear ; 45(2): 425-440, 2024.
Article in English | MEDLINE | ID: mdl-37882091

ABSTRACT

OBJECTIVES: The listening demand incurred by speech perception fluctuates in normal conversation. At the acoustic-phonetic level, natural variation in pronunciation acts as a speed bump on the way to accurate lexical selection. Any given utterance may be more or less phonetically ambiguous, a problem the listener must resolve to choose the correct word. This becomes especially apparent when considering two common speech registers, clear and casual, which have characteristically different levels of phonetic ambiguity. Clear speech prioritizes intelligibility through hyperarticulation, which reduces ambiguity at the phonetic level, whereas casual speech tends to have a more collapsed acoustic space. We hypothesized that listeners would invest greater cognitive resources while listening to casual speech than to clear speech in order to resolve its greater phonetic ambiguity. To this end, we used pupillometry as an online measure of listening effort during perception of clear and casual continuous speech in two background conditions: quiet and noise. DESIGN: Forty-eight participants performed a probe detection task while listening to spoken nonsense sentences (masked and unmasked) as pupil size was recorded. Pupil size was modeled using growth curve analysis to capture the dynamics of the pupil response as the sentence unfolded. RESULTS: Pupil size during listening was sensitive to both the presence of noise and the speech register (clear/casual). Unsurprisingly, listeners showed larger overall pupil dilations during speech perception in noise, replicating earlier work. The pupil dilation pattern for clear and casual sentences was considerably more complex. Pupil dilation during clear speech trials was slightly larger than during casual speech, across both quiet and noisy backgrounds. CONCLUSIONS: We suggest that listener motivation could explain the larger pupil dilations for clearly spoken speech. We propose that, within the context of this task, listeners devoted more resources to perceiving the speech signal with the greatest acoustic-phonetic fidelity. Further, we unexpectedly found systematic differences in pupil dilation preceding the onset of the spoken sentences. Together, these data demonstrate that the pupillary system is not merely reactive but also adaptive, sensitive to both task structure and listener motivation, maximizing accurate perception in a limited-resource system.
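
As a concrete illustration of the growth curve analysis named in the DESIGN section, the sketch below fits orthogonal polynomial time terms in a mixed-effects model in Python. The simulated data frame, its column names, the cubic time polynomial, and the random-effects structure are assumptions for illustration, not the study's actual specification.

```python
# Minimal growth-curve-analysis sketch for pupillometry data (assumed layout, not the study's).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
subjects, times = range(12), np.linspace(0, 3, 31)            # 12 subjects, 3 s of samples
conditions = [("clear", "quiet"), ("clear", "noise"),
              ("casual", "quiet"), ("casual", "noise")]
rows = [
    {"subject": s, "time": t, "register": reg, "noise": noi,
     "pupil": 0.1 * t + (0.05 if noi == "noise" else 0.0) + rng.normal(scale=0.02)}
    for s in subjects for reg, noi in conditions for t in times
]
df = pd.DataFrame(rows)

# Orthogonal polynomial time terms (linear, quadratic, cubic), shared across trials.
t_unique = np.sort(df["time"].unique())
raw = np.vander(t_unique, 4, increasing=True)[:, 1:]           # t, t^2, t^3
q, _ = np.linalg.qr(raw - raw.mean(axis=0))                    # orthogonalize the basis
basis = pd.DataFrame(q, columns=["ot1", "ot2", "ot3"])
basis["time"] = t_unique
df = df.merge(basis, on="time")

# Fixed effects of register and noise on each time term; random intercepts by subject.
model = smf.mixedlm("pupil ~ (ot1 + ot2 + ot3) * register * noise",
                    data=df, groups=df["subject"])
print(model.fit(method="lbfgs").summary())
```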


Subject(s)
Pupil, Speech Perception, Humans, Pupil/physiology, Speech, Noise, Cognition, Speech Perception/physiology, Speech Intelligibility/physiology
2.
J Acoust Soc Am ; 152(1): 511, 2022 07.
Article in English | MEDLINE | ID: mdl-35931533

ABSTRACT

Parkinson's disease (PD) is a neurodegenerative condition primarily associated with its motor consequences. Although much work in the speech domain has focused on PD's consequences for production, people with PD have been shown to differ from age-matched controls in the perception of emotional prosody, loudness, and speech rate. The current study targeted the effect of PD on perceptual phonetic plasticity, defined as the ability to learn and adjust to novel phonetic input, in both second-language and native-language contexts. People with PD were compared to age-matched controls (and, for three of the studies, a younger control population) in tasks of explicit non-native speech learning and adaptation to variation in native speech (compressed rate, accent, and the use of timing information within a sentence to parse ambiguities). The participants with PD performed significantly worse on the compressed-rate task and made less use of the duration of an ambiguous fricative to segment speech than age-matched controls did, indicating impaired speech perception abilities. Exploratory comparisons also showed that people with PD who were on medication performed significantly worse than their peers off medication on those two tasks and on the task of explicit non-native learning.


Subject(s)
Parkinson Disease, Speech Perception, Humans, Language, Phonetics, Speech
3.
J Cogn Neurosci ; 32(10): 2001-2012, 2020 10.
Article in English | MEDLINE | ID: mdl-32662731

ABSTRACT

A listener's interpretation of a given speech sound can vary probabilistically from moment to moment. Previous experience (i.e., the contexts in which one has encountered an ambiguous sound) can further influence the interpretation of speech, a phenomenon known as perceptual learning for speech. This study used multivoxel pattern analysis to query how neural patterns reflect perceptual learning, leveraging archival fMRI data from a lexically guided perceptual learning study conducted by Myers and Mesite [Myers, E. B., & Mesite, L. M. Neural systems underlying perceptual adjustment to non-standard speech tokens. Journal of Memory and Language, 76, 80-93, 2014]. In that study, participants first heard ambiguous /s/-/∫/ blends in either /s/-biased lexical contexts (epi_ode) or /∫/-biased contexts (refre_ing); subsequently, they performed a phonetic categorization task on tokens from an /asi/-/a∫i/ continuum. In the current work, a classifier was trained to distinguish between phonetic categorization trials in which participants heard unambiguous productions of /s/ and those in which they heard unambiguous productions of /∫/. The classifier was able to generalize this training to ambiguous tokens from the middle of the continuum on the basis of individual participants' trial-by-trial perception. We take these findings as evidence that perceptual learning for speech involves neural recalibration, such that the pattern of activation approximates the perceived category. Exploratory analyses showed that left parietal regions (supramarginal and angular gyri) and right temporal regions (superior, middle, and transverse temporal gyri) were most informative for categorization. Overall, our results inform an understanding of how moment-to-moment variability in speech perception is encoded in the brain.
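
To make the decoding logic concrete, the following is a minimal scikit-learn sketch of the generalization analysis described above: a classifier trained on unambiguous /s/ and /∫/ trials is applied to ambiguous mid-continuum trials and compared with the listener's trial-by-trial percept. The arrays, their sizes, and the linear SVM are hypothetical placeholders, not the study's pipeline.

```python
# MVPA-style generalization sketch: train on unambiguous endpoints, decode ambiguous tokens.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_voxels = 500
X_clear = rng.normal(size=(80, n_voxels))          # voxel patterns, 80 unambiguous trials
y_clear = np.repeat(["s", "sh"], 40)               # trial labels for training
X_ambig = rng.normal(size=(40, n_voxels))          # patterns for 40 ambiguous trials
percept_ambig = rng.choice(["s", "sh"], size=40)   # listener's reported category per trial

# Train on unambiguous productions, then ask whether the decoded category for
# ambiguous tokens tracks what the listener reported hearing on each trial.
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
clf.fit(X_clear, y_clear)
decoded = clf.predict(X_ambig)
print("agreement with perception:", accuracy_score(percept_ambig, decoded))
```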


Subject(s)
Speech Perception, Speech, Humans, Language, Learning, Phonetics
4.
J Acoust Soc Am ; 147(3): EL289, 2020 03.
Article in English | MEDLINE | ID: mdl-32237871

ABSTRACT

Recent studies suggest that sleep-mediated consolidation processes help adults learn non-native speech sounds. However, overnight improvement was not seen when participants learned in the morning, perhaps owing to native-language interference. The current study trained participants to perceive the Hindi dental/retroflex contrast in the morning and tested whether increased training can lead to overnight improvement. Results showed overnight effects regardless of training amount. In contrast to previous studies, participants in this study heard the sounds in limited contexts (i.e., one talker and one vowel context); together with other findings, this suggests that overnight improvement in non-native phonetic learning emerges when variability is limited.

5.
J Acoust Soc Am ; 142(5): EL448, 2017 11.
Article in English | MEDLINE | ID: mdl-29195416

ABSTRACT

Phonological variability is a key factor in many phonetic training studies, but it is unclear whether variability is universally helpful for learners. The current study explored variability and sleep consolidation in non-native phonetic learning. Two groups of participants were trained on a non-native contrast in one vowel context (/u/) and differed in whether they were also tested on an untrained context (/i/). Participants exposed to two vowels during the test were less accurate in perception of trained speech sounds and showed no overnight improvement. These findings suggest that introducing variability even in test phases may destabilize learning and prevent consolidation-based performance improvements.


Subject(s)
Learning, Multilingualism, Phonetics, Speech Acoustics, Speech Perception, Voice Quality, Acoustic Stimulation, Adolescent, Female, Humans, Male, Memory Consolidation, Sleep, Young Adult
6.
J Acoust Soc Am ; 140(4): EL307, 2016 10.
Article in English | MEDLINE | ID: mdl-27794292

ABSTRACT

Listeners use lexical information to retune the mapping between the acoustic signal and speech sound representations, resulting in changes to phonetic category boundaries. Other research shows that phonetic categories have a rich internal structure; within-category variation is represented in a graded fashion. The current work examined whether lexically informed perceptual learning promotes a comprehensive reorganization of internal category structure. The results showed a reorganization of internal structure for one but not both of the examined categories, which may reflect an attenuation of learning for distributions with extensive category overlap. This finding points towards potential input-driven constraints on lexically guided phonetic retuning.

7.
J Psycholinguist Res ; 45(6): 1359-1367, 2016 Dec.
Article in English | MEDLINE | ID: mdl-26645465

ABSTRACT

Emotions are conveyed primarily through two channels in language: semantics and prosody. While many studies confirm the role of a left hemisphere network in processing semantic emotion, there has been debate over the role of the right hemisphere in processing prosodic emotion. Some evidence suggests a preferential role for the right hemisphere, and other evidence supports a bilateral model. The relative contributions of semantics and prosody to the overall processing of affect in language are largely unexplored. The present work used functional magnetic resonance imaging to elucidate the neural bases of processing anger conveyed by prosody or semantic content. Results showed a robust, distributed, bilateral network for processing angry prosody and a more modest left hemisphere network for processing angry semantics when compared to emotionally neutral stimuli. Findings suggest the nervous system may be more responsive to prosodic cues in speech than to the semantic content of speech.


Subject(s)
Anger/physiology, Brain Mapping/methods, Brain/physiology, Functional Laterality/physiology, Language, Semantics, Adolescent, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult
8.
J Acoust Soc Am ; 137(1): EL91-7, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25618106

ABSTRACT

This investigation explored the generalization of phonetic learning across talkers following training on a nonnative (Hindi dental and retroflex) contrast. Participants were trained in two groups, either in the morning or in the evening. Discrimination and identification performance was assessed for the trained talker and an untrained talker three times over the 24 h following training. Results suggest that overnight consolidation promotes generalization across talkers in identification, but not necessarily discrimination, of nonnative speech sounds.


Subject(s)
Learning/physiology, Memory Consolidation/physiology, Physiological Pattern Recognition/physiology, Phonetics, Speech Intelligibility/physiology, Speech Perception/physiology, Adolescent, Female, Humans, Individuality, Male, Sleep, Surveys and Questionnaires, Time Factors, Young Adult
9.
J Acoust Soc Am ; 138(2): 1068-78, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26328722

ABSTRACT

A primary goal for models of speech perception is to describe how listeners achieve reliable comprehension given a lack of invariance between the acoustic signal and individual speech sounds. For example, individual talkers differ in how they implement phonetic properties of speech. Research suggests that listeners attain perceptual constancy by processing acoustic variation categorically while maintaining graded internal category structure. Moreover, listeners use lexical information to modify category boundaries, learning to interpret a talker's ambiguous productions. The current work examines perceptual learning for talker differences that signal well-defined, unambiguous category members. Speech synthesis techniques were used to differentially manipulate talkers' characteristic productions of the stop voicing contrast for two groups of listeners. Following exposure to the talkers, internal category structure and category boundary were examined. The results showed that listeners dynamically adjusted internal category structure to be centered on experience with the talker's voice, but the category boundary remained fixed. These patterns were observed for words presented during training as well as for novel lexical items. These findings point to input-driven constraints on functional plasticity within the language architecture, which may help to explain how listeners maintain stability of linguistic knowledge while simultaneously demonstrating flexibility in phonetic representations.
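
Category boundaries of the kind examined here are commonly estimated by fitting a logistic (psychometric) function to identification responses along the continuum. A minimal sketch, assuming hypothetical VOT steps and response proportions rather than the study's data:

```python
# Estimate a stop-voicing category boundary from identification data (hypothetical values).
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    """Proportion of voiceless responses as a function of voice-onset time (VOT)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

vot_ms = np.array([0, 10, 20, 30, 40, 50, 60])                       # continuum steps
prop_voiceless = np.array([0.02, 0.05, 0.15, 0.55, 0.90, 0.97, 0.99])  # example responses

params, _ = curve_fit(logistic, vot_ms, prop_voiceless, p0=[30.0, 0.2])
boundary, slope = params
print(f"estimated boundary: {boundary:.1f} ms VOT, slope: {slope:.2f}")
```

The fitted boundary gives the VOT at which responses are evenly split between categories; the slope provides one index of how sharply the two categories separate.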


Subject(s)
Phonetics, Speech Intelligibility, Speech Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Classification, Psychological Feedback, Female, Humans, Learning, Male, Memory, Time Factors, Voice, Young Adult
10.
Article in English | MEDLINE | ID: mdl-38811489

ABSTRACT

How listeners weight a wide variety of information to interpret ambiguities in the speech signal is a question of interest in speech perception, particularly for understanding how listeners process speech in the context of phrases or sentences. Dominant views of cue use for language comprehension posit that listeners integrate multiple sources of information to interpret ambiguities in the speech signal. Here, we study how semantic context, sentence rate, and vowel length all influence identification of word-final stops. We find that while at the group level all sources of information appear to influence how listeners interpret ambiguities in speech, at the level of the individual listener there are systematic differences in cue reliance: some listeners favor certain cues (e.g., speech rate and vowel length) to the exclusion of others (e.g., semantic context). While listeners exhibit a range of cue preferences, across participants we find a negative relationship between individuals' weighting of semantic and acoustic-phonetic (sentence rate, vowel length) cues. Additionally, we find that these weightings are stable within individuals over a period of 1 month. Taken as a whole, these findings suggest that theories of cue integration and speech processing may fail to capture the rich individual differences that exist between listeners, which could arise from mechanistic differences in speech perception between individuals.
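
One common way to quantify the cue reliance described above is to fit a per-listener logistic regression of stop identification on the cues and then relate the resulting coefficients across listeners. The sketch below illustrates that approach with simulated trials; the column names, cue coding, and the choice to average the two acoustic-phonetic coefficients are assumptions, not the study's procedure.

```python
# Per-listener cue-weight estimation and across-listener correlation (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
rows = []
for listener in range(30):
    w_sem, w_acou = rng.normal(1.0, 0.5), rng.normal(1.0, 0.5)   # listener-specific weights
    for _ in range(120):                                         # 120 trials per listener
        sem, rate, length = rng.normal(size=3)                   # standardized cue values
        logit = w_sem * sem + w_acou * (rate + length)
        rows.append({"listener": listener, "semantic_bias": sem, "speech_rate": rate,
                     "vowel_length": length,
                     "response": rng.binomial(1, 1 / (1 + np.exp(-logit)))})
df = pd.DataFrame(rows)

weights = []
for listener, sub in df.groupby("listener"):
    fit = smf.logit("response ~ semantic_bias + speech_rate + vowel_length",
                    data=sub).fit(disp=0)
    weights.append({"semantic": fit.params["semantic_bias"],
                    # Collapse the two acoustic-phonetic cues into one index for the correlation.
                    "acoustic": fit.params[["speech_rate", "vowel_length"]].mean()})
weights = pd.DataFrame(weights)

r, p = pearsonr(weights["semantic"], weights["acoustic"])
print(f"semantic vs. acoustic cue weighting across listeners: r = {r:.2f}, p = {p:.3f}")
```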

11.
Neurobiol Lang (Camb) ; 4(1): 145-177, 2023.
Article in English | MEDLINE | ID: mdl-37229142

ABSTRACT

Though the right hemisphere has been implicated in talker processing, it is thought to play a minimal role in phonetic processing, at least relative to the left hemisphere. Recent evidence suggests that the right posterior temporal cortex may support learning of phonetic variation associated with a specific talker. In the current study, listeners heard a male talker and a female talker, one of whom produced an ambiguous fricative in /s/-biased lexical contexts (e.g., epi?ode) and the other in /∫/-biased contexts (e.g., friend?ip). Listeners in a behavioral experiment (Experiment 1) showed evidence of lexically guided perceptual learning, categorizing ambiguous fricatives in line with their previous experience. Listeners in an fMRI experiment (Experiment 2) showed differential phonetic categorization as a function of talker, allowing for an investigation of the neural basis of talker-specific phonetic processing, though they did not exhibit perceptual learning (likely due to characteristics of our in-scanner headphones). Searchlight analyses revealed that patterns of activation in the right superior temporal sulcus (STS) contained information about who was talking and what phoneme they produced. We take this as evidence that talker information and phonetic information are integrated in the right STS. Functional connectivity analyses suggested that the process of conditioning phonetic identity on talker information depends on the coordinated activity of a left-lateralized phonetic processing system and a right-lateralized talker processing system. Overall, these results clarify the mechanisms through which the right hemisphere supports talker-specific phonetic processing.

12.
J Exp Psychol Learn Mem Cogn ; 49(7): 1161-1175, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36757985

ABSTRACT

Individuals differ in their ability to perceive and learn unfamiliar speech sounds, but we lack a comprehensive theoretical account that predicts individual differences in this skill. Predominant theories largely attribute difficulties of non-native speech perception to the relationships between non-native speech sounds/contrasts and native-language categories. The goal of the current study was to test whether the predictions made by these theories can be extended to predict individual differences in naive perception of non-native speech sounds or learning of these sounds. Specifically, we hypothesized that the internal structure of native-language speech categories is the cause of difficulty in perceiving unfamiliar sounds: learners who show more graded (i.e., less categorical) perception of sounds in their native language should have an advantage for perceiving non-native speech sounds, because they would be less likely to assimilate unfamiliar speech tokens to their native-language categories. We tested this prediction in two experiments in which listeners categorized speech continua in their native language and performed tasks of discrimination or identification of difficult non-native speech sound contrasts. Overall, results did not support the hypothesis that individual differences in categorical perception of native-language speech sounds are responsible for variability in sensitivity to non-native speech sounds. However, participants who responded more consistently on a speech categorization task showed more accurate perception of non-native speech sounds. This suggests that individual differences in non-native speech perception are more related to the stability of phonetic processing abilities than to individual differences in phonetic category structure. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
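
To illustrate the kind of individual-difference measures at issue, the sketch below derives a categoricity index (identification slope) and a response-consistency index from simulated native-language categorization data and correlates each with a placeholder non-native accuracy score. The specific indices and all data are illustrative assumptions, not the study's measures.

```python
# Categoricity and consistency indices from simulated categorization data (placeholders).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

def logistic(x, boundary, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

rng = np.random.default_rng(1)
steps, reps = np.arange(1, 8), 20                     # 7-step continuum, 20 trials per step

slopes, consistency, nonnative_acc = [], [], []
for _ in range(30):                                   # 30 simulated participants
    p = logistic(steps, boundary=4.0, slope=rng.uniform(0.5, 2.0))
    responses = rng.binomial(reps, p) / reps          # proportion of category-A responses
    (_, slope_hat), _ = curve_fit(logistic, steps, responses,
                                  p0=[4.0, 1.0], bounds=([0.0, 0.0], [8.0, 20.0]))
    slopes.append(slope_hat)                                              # categoricity
    consistency.append(1.0 - 4.0 * np.mean(responses * (1 - responses)))  # 1 = fully consistent
    nonnative_acc.append(np.clip(0.6 + rng.normal(scale=0.1), 0, 1))      # placeholder outcome

print("categoricity vs. non-native accuracy:", pearsonr(slopes, nonnative_acc))
print("consistency vs. non-native accuracy:", pearsonr(consistency, nonnative_acc))
```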


Subject(s)
Speech Perception, Humans, Language, Learning, Phonetics, Sound, Speech
13.
J Speech Lang Hear Res ; 66(2): 720-734, 2023 02 13.
Article in English | MEDLINE | ID: mdl-36668820

ABSTRACT

PURPOSE: Sleep-based memory consolidation has been shown to facilitate perceptual learning of atypical speech input, including nonnative speech sounds, accented speech, and synthetic speech. The current research examined the role of sleep-based memory consolidation in perceptual learning of noise-vocoded speech, including maintenance of learning over a 1-week interval. Because comprehending noise-vocoded speech requires extensive restructuring of the mapping between the acoustic signal and prelexical representations, sleep consolidation may be critical for this type of adaptation. Thus, the purpose of this study was to investigate the role of sleep-based memory consolidation in adaptation to noise-vocoded speech in listeners without hearing loss, as a foundational step toward identifying parameters that may be useful to consider for auditory training with clinical populations. METHOD: Two groups of normal-hearing listeners completed a transcription training task with feedback for noise-vocoded sentences in either the morning or the evening. Learning was assessed through transcription accuracy before training, immediately after training, 12 hr after training, and 1 week after training for both trained and novel sentences. RESULTS: Both the morning and evening groups showed improved comprehension of noise-vocoded sentences immediately following training. Twelve hours later, the evening group showed stable gains (following a period of sleep), whereas the morning group demonstrated a decline in gains (following a period of wakefulness). One week after training, the morning and evening groups showed equivalent performance for both trained and novel sentences. CONCLUSION: Sleep-consolidated learning helps stabilize training gains for degraded speech input, which may hold clinical utility for optimizing rehabilitation recommendations.


Subject(s)
Memory Consolidation, Speech Perception, Humans, Speech, Learning, Sleep, Acoustic Stimulation, Perceptual Masking
14.
Brain Lang ; 240: 105264, 2023 05.
Article in English | MEDLINE | ID: mdl-37087863

ABSTRACT

Theories suggest that speech perception is informed by listeners' beliefs of what phonetic variation is typical of a talker. A previous fMRI study found right middle temporal gyrus (RMTG) sensitivity to whether a phonetic variant was typical of a talker, consistent with literature suggesting that the right hemisphere may play a key role in conditioning phonetic identity on talker information. The current work used transcranial magnetic stimulation (TMS) to test whether the RMTG plays a causal role in processing talker-specific phonetic variation. Listeners were exposed to talkers who differed in how they produced voiceless stop consonants while TMS was applied to RMTG, left MTG, or scalp vertex. Listeners subsequently showed near-ceiling performance in indicating which of two variants was typical of a trained talker, regardless of previous stimulation site. Thus, even though the RMTG is recruited for talker-specific phonetic processing, modulation of its function may have only modest consequences.


Subject(s)
Phonetics, Speech Perception, Humans, Transcranial Magnetic Stimulation, Temporal Lobe/diagnostic imaging, Speech Perception/physiology, Magnetic Resonance Imaging
15.
J Cogn Neurosci ; 24(8): 1695-708, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22621261

ABSTRACT

Categorical perception, an increased sensitivity to between- compared with within-category contrasts, is a stable property of native speech perception that emerges as language matures. Although recent research suggests that categorical responses to speech sounds can be found in left prefrontal as well as temporo-parietal areas, it is unclear how the neural system develops heightened sensitivity to between-category contrasts. In the current study, two groups of adult participants were trained to categorize speech sounds taken from a dental/retroflex/velar continuum according to two different boundary locations. Behavioral results suggest that for successful learners, categorization training led to increased discrimination accuracy for between-category contrasts with no concomitant increase for within-category contrasts. Neural responses to the learned category schemes were measured using a short-interval habituation design during fMRI scanning. Whereas both inferior frontal and temporal regions showed sensitivity to phonetic contrasts sampled from the continuum, only the bilateral middle frontal gyri exhibited a pattern consistent with encoding of the learned category scheme. Taken together, these results support a view in which top-down information about category membership may reshape perceptual sensitivities via attention or executive mechanisms in the frontal lobes.


Subject(s)
Brain/physiology, Psychophysiologic Habituation/physiology, Learning/physiology, Magnetic Resonance Imaging/methods, Phonetics, Speech Perception/physiology, Adolescent, Adult, Discrimination (Psychology)/physiology, Female, Frontal Lobe/physiology, Humans, Language, Magnetic Resonance Imaging/instrumentation, Male, Middle Aged, Neuropsychological Tests, Psycholinguistics/methods, Temporal Lobe/physiology, Young Adult
16.
Brain Lang ; 226: 105070, 2022 03.
Article in English | MEDLINE | ID: mdl-35026449

ABSTRACT

The study of perceptual flexibility in speech depends on a variety of tasks that feature a large degree of variability between participants. Of critical interest is whether measures are consistent within an individual or across stimulus contexts. This is particularly important for individual-difference designs that are deployed to examine the neural basis or clinical consequences of perceptual flexibility. In the present set of experiments, we assess the split-half reliability and construct validity of five measures of perceptual flexibility: three of learning in a native-language context (e.g., understanding someone with a foreign accent) and two of learning in a non-native context (e.g., learning to categorize non-native speech sounds). We find that most of these tasks show an appreciable level of split-half reliability, although construct validity was sometimes weak. This provides good evidence of reliability for these tasks, while highlighting possible upper limits on the effect sizes to be expected with each measure.
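
Split-half reliability of the sort reported here is typically computed by splitting each participant's trials into halves, correlating the half-scores across participants, and applying the Spearman-Brown correction. A minimal sketch with placeholder data:

```python
# Split-half reliability with Spearman-Brown correction (placeholder accuracy data).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
scores = rng.binomial(1, 0.7, size=(40, 60))   # 40 participants x 60 trials (1 = correct)

odd = scores[:, 0::2].mean(axis=1)             # per-participant accuracy on odd trials
even = scores[:, 1::2].mean(axis=1)            # per-participant accuracy on even trials

r_half, _ = pearsonr(odd, even)
r_full = 2 * r_half / (1 + r_half)             # Spearman-Brown prophecy formula
print(f"split-half reliability (Spearman-Brown corrected): {r_full:.2f}")
```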


Subject(s)
Speech Perception, Speech, Humans, Language, Phonetics, Reproducibility of Results
17.
J Cogn Neurosci ; 23(3): 593-603, 2011 Mar.
Article in English | MEDLINE | ID: mdl-20350185

ABSTRACT

The current study examined the neural systems underlying lexically conditioned phonetic variation in spoken word production. Participants were asked to read aloud singly presented words, which either had a voiced minimal pair (MP) neighbor (e.g., cape) or lacked a minimal pair (NMP) neighbor (e.g., cake). The voiced neighbor never appeared in the stimulus set. Behavioral results showed longer voice-onset time for MP target words, replicating earlier behavioral results [Baese-Berk, M., & Goldrick, M. Mechanisms of interaction in speech production. Language and Cognitive Processes, 24, 527-554, 2009]. fMRI results revealed reduced activation for MP words compared to NMP words in a network including left posterior superior temporal gyrus, the supramarginal gyrus, inferior frontal gyrus, and precentral gyrus. These findings support cascade models of spoken word production and show that neural activation at the lexical level modulates activation in those brain regions involved in lexical selection, phonological planning, and, ultimately, motor plans for production. The facilitatory effects for words with MP neighbors suggest that competition effects reflect the overlap inherent in the phonological representation of the target word and its MP neighbor.


Subject(s)
Brain/physiology, Language, Nerve Net/physiology, Speech/physiology, Adult, Brain Mapping, Female, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Speech Production Measurement
18.
Brain Lang ; 215: 104919, 2021 04.
Article in English | MEDLINE | ID: mdl-33524740

ABSTRACT

Listeners perceive speech sounds categorically. While group-level differences in categorical perception have been observed in children or individuals with reading disorders, recent findings suggest that typical adults vary in how categorically they perceive sounds. The current study investigated neural sources of individual variability in categorical perception of speech. Fifty-seven participants rated phonetic tokens on a visual analogue scale; categoricity and response consistency were measured and related to measures of brain structure from MRI. Increased surface area of the right middle frontal gyrus predicted more categorical perception of a fricative continuum. This finding supports the idea that frontal regions are sensitive to phonetic category-level information and extends it to make behavioral predictions at the individual level. Additionally, more gyrification in bilateral transverse temporal gyri predicted less consistent responses on the task, perhaps reflecting subtle variation in language ability across the population.


Subject(s)
Auditory Cortex, Speech Perception, Adult, Child, Humans, Individuality, Phonetics, Speech
19.
J Speech Lang Hear Res ; 64(10): 3720-3733, 2021 10 04.
Article in English | MEDLINE | ID: mdl-34525309

ABSTRACT

Purpose: Individuals vary in their ability to learn the sound categories of nonnative languages (nonnative phonetic learning) and to adapt to systematic differences, such as accent or talker differences, in the sounds of their native language (native phonetic learning). Difficulties with both native and nonnative learning are well attested in people with speech and language disorders relative to healthy controls, but substantial variability in these skills is also present in the typical population. This study examines whether this individual variability can be organized around a common ability that we label "phonetic plasticity." Method: A group of healthy young adult participants (N = 80), who attested they had no history of speech, language, neurological, or hearing deficits, completed two tasks of nonnative phonetic category learning, two tasks of learning to cope with variation in their native language, and seven tasks of other cognitive functions, distributed across two sessions. Performance on these 11 tasks was compared, and exploratory factor analysis was used to assess the extent to which performance on each task was related to the others. Results: Performance on both tasks of native learning and an explicit task of nonnative learning patterned together, suggesting that native and nonnative phonetic learning tasks rely on a shared underlying capacity, which is termed "phonetic plasticity." Phonetic plasticity was also associated with vocabulary, comprehension of words in background noise, and, more weakly, working memory. Conclusions: Nonnative sound learning and native language speech perception may rely on shared phonetic plasticity. The results suggest that good learners of native language phonetic variation are also good learners of nonnative phonetic contrasts. Supplemental Material: https://doi.org/10.23641/asha.16606778
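
As an illustration of the exploratory factor analysis mentioned in the Method section, the sketch below factors a participants-by-tasks score matrix with scikit-learn; the task names, the two-factor varimax solution, and the random scores are assumptions, not the study's analysis.

```python
# Exploratory factor analysis over a participants x tasks score matrix (placeholder data).
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

tasks = ["nonnative_explicit", "nonnative_implicit", "accent_adaptation",
         "rate_adaptation", "vocabulary", "speech_in_noise", "working_memory"]
scores = pd.DataFrame(np.random.default_rng(3).normal(size=(80, len(tasks))), columns=tasks)

z = StandardScaler().fit_transform(scores)          # standardize each task
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0).fit(z)

loadings = pd.DataFrame(fa.components_.T, index=tasks, columns=["factor1", "factor2"])
print(loadings.round(2))   # tasks loading together on a factor suggest a shared capacity
```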


Subject(s)
Phonetics, Speech Perception, Humans, Individuality, Language, Noise, Young Adult
20.
Atten Percept Psychophys ; 83(5): 2217-2228, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33754298

ABSTRACT

Because different talkers produce their speech sounds differently, listeners benefit from maintaining distinct generative models (sets of beliefs) about the correspondence between acoustic information and phonetic categories for different talkers. A robust literature on phonetic recalibration indicates that when listeners encounter a talker who produces their speech sounds idiosyncratically (e.g., a talker who produces their /s/ sound atypically), they can update their generative model for that talker. Such recalibration has been shown to occur in a relatively talker-specific way. Because listeners in ecological situations often meet several new talkers at once, the present study considered how the process of simultaneously updating two distinct generative models compares to updating one model at a time. Listeners were exposed to two talkers, one who produced /s/ atypically and one who produced /∫/ atypically. Critically, these talkers only produced these sounds in contexts where lexical information disambiguated the phoneme's identity (e.g., epi_ode, flouri_ing). When initial exposure to the two talkers was blocked by voice (Experiment 1), listeners recalibrated to these talkers after relatively little exposure to each talker (32 instances per talker, of which 16 contained ambiguous fricatives). However, when the talkers were intermixed during learning (Experiment 2), listeners required more exposure trials before they were able to adapt to the idiosyncratic productions of these talkers (64 instances per talker, of which 32 contained ambiguous fricatives). Results suggest that there is a perceptual cost to simultaneously updating multiple distinct generative models, potentially because listeners must first select which generative model to update.


Subject(s)
Speech Perception, Voice, Humans, Learning, Phonetics, Sound