1.
Cogn Sci ; 48(5): e13449, 2024 May.
Article in English | MEDLINE | ID: mdl-38773754

ABSTRACT

We recently reported strong, replicable (i.e., replicated) evidence for lexically mediated compensation for coarticulation (LCfC; Luthra et al., 2021), whereby lexical knowledge influences a prelexical process. Critically, evidence for LCfC provides robust support for interactive models of cognition that include top-down feedback and is inconsistent with autonomous models that allow only feedforward processing. McQueen, Jesse, and Mitterer (2023) offer five counter-arguments against our interpretation; we respond to each of those arguments here and conclude that top-down feedback provides the most parsimonious explanation of extant data.


Subjects
Speech Perception, Humans, Speech Perception/physiology, Cognition, Language
2.
Brain Lang ; 226: 105070, 2022 03.
Article in English | MEDLINE | ID: mdl-35026449

ABSTRACT

The study of perceptual flexibility in speech depends on a variety of tasks that feature a large degree of variability between participants. Of critical interest is whether measures are consistent within an individual or across stimulus contexts. This is particularly important for individual-difference designs that are deployed to examine the neural basis or clinical consequences of perceptual flexibility. In the present set of experiments, we assess the split-half reliability and construct validity of five measures of perceptual flexibility: three of learning in a native language context (e.g., understanding someone with a foreign accent) and two of learning in a non-native context (e.g., learning to categorize non-native speech sounds). We find that most of these tasks show an appreciable level of split-half reliability, although construct validity was sometimes weak. This provides good evidence for the reliability of these tasks, while highlighting possible upper limits on expected effect sizes involving each measure.


Subjects
Speech Perception, Speech, Humans, Language, Phonetics, Reproducibility of Results
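The split-half procedure described in the abstract above can be illustrated with a short sketch. This is not the authors' analysis code: the data layout (per-trial scores per participant) and the permutation-based splitting are assumptions made for illustration, with the Spearman-Brown correction applied to each random split.

```python
import numpy as np

def split_half_reliability(trial_scores, n_splits=1000, seed=0):
    """Permutation-based split-half reliability with Spearman-Brown correction.

    trial_scores: array of shape (n_participants, n_trials), e.g. per-trial
    accuracy on a perceptual-flexibility task (hypothetical data layout).
    """
    rng = np.random.default_rng(seed)
    n_participants, n_trials = trial_scores.shape
    corrected = np.empty(n_splits)
    for i in range(n_splits):
        order = rng.permutation(n_trials)
        half_a = trial_scores[:, order[: n_trials // 2]].mean(axis=1)
        half_b = trial_scores[:, order[n_trials // 2 :]].mean(axis=1)
        r = np.corrcoef(half_a, half_b)[0, 1]
        # Spearman-Brown prophecy formula corrects for halving the test length.
        corrected[i] = 2 * r / (1 + r)
    return corrected.mean()

# Example with simulated data: 40 listeners, 60 trials each.
scores = np.random.default_rng(1).binomial(1, 0.75, size=(40, 60)).astype(float)
print(split_half_reliability(scores))
```

Averaging over many random splits gives a more stable estimate than a single odd/even split; either variant is a reasonable reading of "split-half reliability" here.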
3.
J Exp Psychol Hum Percept Perform ; 47(12): 1673-1680, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34881952

ABSTRACT

Determining how human listeners achieve phonetic constancy despite a variable mapping between the acoustics of speech and phonemic categories is the longest standing challenge in speech perception. A clue comes from studies where the talker changes randomly between stimuli, which slows processing compared with a single-talker baseline. These multitalker processing costs have been observed most often in speeded monitoring paradigms, where participants respond whenever a specific item occurs. Notably, the conventional paradigm imposes attentional demands via two forms of varied mapping in mixed-talker conditions. First, target recycling (i.e., allowing items to serve as targets on some trials but as distractors on others) potentially prevents the development of task automaticity. Second, in mixed trials, participants must respond to two unique stimuli (i.e., one target produced by each talker), whereas in blocked conditions they need to respond to only one unique stimulus (i.e., multiple tokens of a single target). We seek to understand how attentional demands influence talker normalization, as measured by multitalker processing costs. Across four experiments, multitalker processing costs persisted when target recycling was not allowed but diminished when only one stimulus served as the target on mixed trials. We discuss the logic of using varied mapping to elicit attentional effects and implications for theories of speech perception. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subjects
Speech Perception, Acoustics, Attention, Humans, Phonetics, Speech
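As a rough illustration of how a multitalker processing cost is typically quantified (mean response time in mixed-talker blocks minus mean response time in blocked-talker blocks), here is a minimal sketch. The column names and values are hypothetical, not the study's data or analysis code.

```python
import pandas as pd

# Hypothetical trial-level data: one row per correct response, with
# 'participant', 'condition' ('blocked' or 'mixed'), and 'rt' in milliseconds.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "condition":   ["blocked", "mixed"] * 4,
    "rt":          [420.0, 455.0, 430.0, 470.0, 410.0, 452.0, 405.0, 449.0],
})

# Mean RT per participant per condition, then the cost = mixed - blocked.
means = trials.groupby(["participant", "condition"])["rt"].mean().unstack()
means["multitalker_cost"] = means["mixed"] - means["blocked"]
print(means)
print("Mean cost (ms):", means["multitalker_cost"].mean())
```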
4.
Atten Percept Psychophys ; 83(6): 2367-2376, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33948883

ABSTRACT

Researchers have hypothesized that in order to accommodate variability in how talkers produce their speech sounds, listeners must perform a process of talker normalization. Consistent with this proposal, several studies have shown that spoken word recognition is slowed when speech is produced by multiple talkers compared with when all speech is produced by one talker (a multitalker processing cost). Nusbaum and colleagues have argued that talker normalization is modulated by attention (e.g., Nusbaum & Morin, 1992, Speech Perception, Production and Linguistic Structure, pp. 113-134). Some of the strongest evidence for this claim is from a speeded monitoring study where a group of participants who expected to hear two talkers showed a multitalker processing cost, but a separate group who expected one talker did not (Magnuson & Nusbaum, 2007, Journal of Experimental Psychology, 33[2], 391-409). In that study, however, the sample size was small and the crucial interaction was not significant. In this registered report, we present the results of a well-powered attempt to replicate those findings. In contrast to the previous study, we did not observe multitalker processing costs in either of our groups. To rule out the possibility that the null result was due to task constraints, we conducted a second experiment using a speeded classification task. As in Experiment 1, we found no influence of expectations on talker normalization, with no multitalker processing cost observed in either group. Our data suggest that the previous findings of Magnuson and Nusbaum (2007) be regarded with skepticism and that talker normalization may not be permeable to high-level expectations.


Subjects
Motivation, Speech Perception, Attention, Humans, Phonetics, Speech
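The crucial test in this design is the interaction between expectation group (between participants) and talker condition (within participants). A minimal sketch with hypothetical per-participant cost scores follows; because the within-participant factor has only two levels, the interaction in a 2 x 2 mixed design reduces to comparing the difference scores (costs) across groups.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant multitalker costs (mixed - blocked RT, in ms)
# for listeners who expected one talker vs. listeners who expected two talkers.
rng = np.random.default_rng(0)
cost_expect_one = rng.normal(loc=5.0, scale=30.0, size=32)
cost_expect_two = rng.normal(loc=8.0, scale=30.0, size=32)

# Group x talker-condition interaction: compare costs between groups.
t, p = stats.ttest_ind(cost_expect_one, cost_expect_two)
print(f"group x condition interaction: t = {t:.2f}, p = {p:.3f}")

# One-sample tests ask whether each group shows any cost at all.
for label, cost in [("expect one talker", cost_expect_one),
                    ("expect two talkers", cost_expect_two)]:
    t1, p1 = stats.ttest_1samp(cost, 0.0)
    print(f"{label}: mean cost = {cost.mean():.1f} ms, t = {t1:.2f}, p = {p1:.3f}")
```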
5.
Cogn Sci ; 45(4): e12962, 2021 04.
Article in English | MEDLINE | ID: mdl-33877697

ABSTRACT

A long-standing question in cognitive science is how high-level knowledge is integrated with sensory input. For example, listeners can leverage lexical knowledge to interpret an ambiguous speech sound, but do such effects reflect direct top-down influences on perception or merely postperceptual biases? A critical test case in the domain of spoken word recognition is lexically mediated compensation for coarticulation (LCfC). Previous LCfC studies have shown that a lexically restored context phoneme (e.g., /s/ in Christma#) can alter the perceived place of articulation of a subsequent target phoneme (e.g., the initial phoneme of a stimulus from a tapes-capes continuum), consistent with the influence of an unambiguous context phoneme in the same position. Because this phoneme-to-phoneme compensation for coarticulation is considered sublexical, scientists agree that evidence for LCfC would constitute strong support for top-down interaction. However, results from previous LCfC studies have been inconsistent, and positive effects have often been small. Here, we conducted extensive piloting of stimuli prior to testing for LCfC. Specifically, we ensured that context items elicited robust phoneme restoration (e.g., that the final phoneme of Christma# was reliably identified as /s/) and that unambiguous context-final segments (e.g., a clear /s/ at the end of Christmas) drove reliable compensation for coarticulation for a subsequent target phoneme. We observed robust LCfC in a well-powered, preregistered experiment with these pretested items (N = 40) as well as in a direct replication study (N = 40). These results provide strong evidence in favor of computational models of spoken word recognition that include top-down feedback.


Subjects
Speech Perception, Humans, Phonetics
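Compensation-for-coarticulation effects of the kind described above are usually summarized as a shift in the category boundary of a psychometric function fit to responses along the target continuum. The sketch below is illustrative only: the continuum steps and response proportions are invented, and the logistic fit is one common choice rather than the authors' reported analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    """Proportion of one response category along the continuum."""
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

steps = np.arange(1, 8)  # hypothetical 7-step tapes-capes continuum

# Hypothetical response proportions per step after an /s/-final context
# (e.g., lexically restored Christma#) vs. after a /sh/-final context.
prop_after_s  = np.array([0.02, 0.05, 0.15, 0.42, 0.75, 0.93, 0.98])
prop_after_sh = np.array([0.02, 0.08, 0.28, 0.60, 0.85, 0.96, 0.99])

(b_s, k_s), _ = curve_fit(logistic, steps, prop_after_s, p0=[4.0, 1.0])
(b_sh, k_sh), _ = curve_fit(logistic, steps, prop_after_sh, p0=[4.0, 1.0])

# A compensation-for-coarticulation effect appears as a boundary shift
# between the two context conditions.
print(f"boundary after /s/ context:  {b_s:.2f}")
print(f"boundary after /sh/ context: {b_sh:.2f}")
print(f"boundary shift: {b_s - b_sh:.2f} continuum steps")
```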
6.
Psychon Bull Rev ; 28(4): 1354-1364, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33742423

ABSTRACT

Perceptual learning serves as a mechanism for listeners to adapt to novel phonetic information. Distributional tracking theories posit that this adaptation occurs as a result of listeners accumulating talker-specific distributional information about the phonetic category in question (Kleinschmidt & Jaeger, Psychological Review, 122, 148-203, 2015). What is not known is how listeners build these talker-specific distributions: that is, whether they aggregate all information received over a certain time period, or whether they rely more heavily upon the most recent information received and down-weight older, consolidated information. In the present experiment, listeners were exposed to four interleaved blocks of a lexical decision task and a phonetic categorization task in which the lexical decision blocks were designed to bias perception in opposite directions of a "s"-"sh" contrast. Listeners returned several days later and completed the identical task again. In each individual session, listeners' perception of a "s"-"sh" contrast was biased by the information in the immediately preceding lexical decision block (though only when participants heard the "sh"-biasing block first, which was likely driven by stimulus characteristics). There was evidence that listeners accrued information about the talker over time, since the bias effect diminished in the second session. In general, results suggest that listeners initially maintain some flexibility with their talker-specific phonetic representations, but over the course of several exposures begin to consolidate these representations.


Subjects
Phonetics, Speech Perception, Physiological Adaptation, Humans, Learning, Time Factors
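The bias effect described here amounts to comparing categorization of ambiguous items as a function of the immediately preceding lexical-decision block. A minimal sketch with hypothetical trial-level data (column names and values are assumptions, not the study's materials):

```python
import pandas as pd

# Hypothetical categorization data: 'listener', 'preceding_block'
# ('s-biasing' or 'sh-biasing'), and 'resp_s' (1 if an ambiguous item
# was labeled "s", else 0).
data = pd.DataFrame({
    "listener": [1, 1, 1, 1, 2, 2, 2, 2],
    "preceding_block": ["s-biasing", "s-biasing", "sh-biasing", "sh-biasing"] * 2,
    "resp_s": [1, 1, 0, 1, 1, 0, 0, 0],
})

# The bias effect is the difference in the proportion of "s" responses
# following the s-biasing vs. sh-biasing lexical-decision blocks.
by_block = data.groupby("preceding_block")["resp_s"].mean()
print(by_block)
print("bias effect:", by_block["s-biasing"] - by_block["sh-biasing"])
```

Computing this separately for each session would show whether the effect diminishes with repeated exposure, as reported above.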
7.
Atten Percept Psychophys ; 83(4): 1842-1860, 2021 May.
Article in English | MEDLINE | ID: mdl-33398658

ABSTRACT

A fundamental problem in speech perception is how (or whether) listeners accommodate variability in the way talkers produce their speech sounds. One view of the way listeners cope with this variability is that talker differences are normalized: a mapping between talker-specific characteristics and phonetic categories is computed such that speech is recognized in the context of the talker's vocal characteristics. Consistent with this view, listeners process speech more slowly when the talker changes randomly than when the talker remains constant. An alternative view is that speech perception is based on talker-specific auditory exemplars in memory clustered around linguistic categories that allow talker-independent perception. Consistent with this view, listeners become more efficient at talker-specific phonetic processing after voice identification training. We asked whether phonetic efficiency would increase with talker familiarity by testing listeners with extremely familiar talkers (family members), newly familiar talkers (based on laboratory training), and unfamiliar talkers. We also asked whether familiarity would reduce the need for normalization. As predicted, phonetic efficiency (word recognition in noise) increased with familiarity (unfamiliar < trained-on < family). However, we observed a constant processing cost for talker changes even for pairs of family members. We discuss how normalization and exemplar theories might account for these results, and the constraints these results impose on theoretical accounts of phonetic constancy.


Subjects
Speech Perception, Voice, Humans, Phonetics, Recognition (Psychology), Speech
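The two headline measures in this abstract, phonetic efficiency by familiarity and a cost for talker changes, could be tabulated roughly as below. The data frame layout and the use of accuracy as the dependent measure are assumptions for illustration, not the study's actual pipeline.

```python
import pandas as pd

# Hypothetical per-trial results: talker familiarity ('unfamiliar',
# 'trained-on', 'family'), whether the talker changed from the previous
# trial, and whether the word was recognized correctly in noise.
trials = pd.DataFrame({
    "familiarity":   ["unfamiliar"] * 4 + ["trained-on"] * 4 + ["family"] * 4,
    "talker_change": [False, True] * 6,
    "correct":       [1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0],
})

# Phonetic efficiency: accuracy by familiarity
# (predicted ordering: unfamiliar < trained-on < family).
print(trials.groupby("familiarity")["correct"].mean())

# Talker-change cost: accuracy drop on trials where the talker switched.
by_change = trials.groupby("talker_change")["correct"].mean()
print("talker-change cost:", by_change.loc[False] - by_change.loc[True])
```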
8.
Psychon Bull Rev ; 27(4): 819, 2020 08.
Article in English | MEDLINE | ID: mdl-32588197

ABSTRACT

The authors have retracted this article (Saltzman and Myers, 2018) because, upon re-review of the data, a programming error was found that led to unequal presentations of items during the test phases of the experiment.

9.
Neurobiol Lang (Camb) ; 1(3): 339-364, 2020 Aug.
Article in English | MEDLINE | ID: mdl-35784619

ABSTRACT

The extent to which articulatory information embedded in incoming speech contributes to the formation of new perceptual categories for speech sounds has been debated for decades. It has been theorized that the acquisition of new speech sound categories requires a network of sensory and speech motor cortical areas (the "dorsal stream") to successfully integrate auditory and articulatory information. However, it is possible that these brain regions are not sensitive specifically to articulatory information, but instead are sensitive to the abstract phonological categories being learned. We tested this hypothesis by training participants over the course of several days on an articulable non-native speech contrast and acoustically matched inarticulable nonspeech analogues. After reaching comparable levels of proficiency with the two sets of stimuli, activation was measured in fMRI as participants passively listened to both sound types. Decoding of category membership for the articulable speech contrast alone revealed a series of left and right hemisphere regions outside of the dorsal stream that have previously been implicated in the emergence of non-native speech sound categories, while no regions could successfully decode the inarticulable nonspeech contrast. Activation patterns in the left inferior frontal gyrus, the middle temporal gyrus, and the supplementary motor area provided better information for decoding the articulable (speech) sounds than the inarticulable (sine-wave) sounds. Nevertheless, the finding that dorsal stream regions did not emerge as good decoders of the articulable contrast suggests that other factors, including the strength and structure of the emerging speech categories, are more likely drivers of dorsal stream activation during novel sound learning.
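Category decoding of the kind described here is commonly implemented as cross-validated multivariate pattern analysis over voxel responses. The following sketch uses simulated patterns and a linear classifier purely to illustrate the logic; it is not the authors' analysis, and the array shapes and parameters are arbitrary assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical ROI data: one row per trial (voxel pattern), with labels
# coding which member of the trained contrast was heard (0 or 1).
rng = np.random.default_rng(0)
patterns = rng.normal(size=(80, 200))   # 80 trials x 200 voxels (simulated)
labels = np.repeat([0, 1], 40)          # two sound categories

# Linear classifier with cross-validation; above-chance accuracy indicates
# that the region carries information about category membership.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, patterns, labels, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

Running the same procedure on the speech and nonspeech conditions separately, region by region, mirrors the comparison reported above.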

10.
Psychon Bull Rev ; 25(2): 718-724, 2018 04.
Article in English | MEDLINE | ID: mdl-28924946

ABSTRACT

Perceptual learning serves as a mechanism for listeners to adapt to novel phonetic information. Distributional tracking theories posit that this adaptation occurs as a result of listeners accumulating talker-specific distributional information about the phonetic category in question (Kleinschmidt & Jaeger, 2015, Psychological Review, 122). What is not known is how listeners build these talker-specific distributions; that is, if they aggregate all information received over a certain time period, or if they rely more heavily upon the most recent information received and down-weight older, consolidated information. In the present experiment, listeners were exposed to four interleaved blocks of a lexical decision task and a phonetic categorization task in which the lexical decision blocks were designed to bias perception in opposite directions along a "s"-"sh" continuum. Listeners returned several days later and completed the identical task again. Evidence was consistent with listeners using a relatively short temporal window of integration at the individual session level. Namely, in each individual session, listeners' perception of a "s"-"sh" contrast was biased by the information in the immediately preceding lexical decision block, and there was no evidence that listeners summed their experience with the talker over the entire session. Similarly, the magnitude of the bias effect did not change between sessions, consistent with the idea that talker-specific information remains flexible, even after consolidation. In general, results suggest that listeners are maximally flexible when considering how to categorize speech from a novel talker.


Subjects
Learning/physiology, Psycholinguistics, Recognition (Psychology)/physiology, Speech Perception/physiology, Physiological Adaptation, Adolescent, Adult, Humans, Phonetics, Young Adult