Results 1 - 9 of 9
1.
Cognition; 222: 105014, 2022 May.
Article in English | MEDLINE | ID: mdl-35033864

ABSTRACT

In the contexts of language learning and music processing, hand gestures conveying acoustic information visually influence perception of speech and non-speech sounds (Connell et al., 2013; Morett & Chang, 2015). Currently, it is unclear whether this effect is due to these gestures' use of the human body to highlight relevant features of language (embodiment) or the cross-modal mapping between the visual motion trajectories of these gestures and corresponding auditory features (conceptual metaphor). To address this question, we examined identification of the pitch contours of lexical tones and non-speech analogs learned with pitch gesture, comparable dot motion, or no motion. Critically, pitch gesture and dot motion were either congruent or incongruent with the vertical conceptual metaphor of pitch. Consistent with our hypotheses, we found that identification accuracy increased for tones learned with congruent pitch gesture and dot motion, whereas it remained stable or decreased for tones learned with incongruent pitch gesture and dot motion. These findings provide the first evidence that both embodied and non-embodied visual stimuli congruent with the vertical conceptual metaphor of pitch enhance lexical and non-speech tone learning. Thus, they illuminate the influences of conceptual metaphor and embodiment on lexical and non-speech auditory perception, providing insight into how they can be leveraged to enhance language learning and music processing.


Subjects
Music, Speech Perception, Humans, Metaphor, Pitch Perception, Speech
2.
Atten Percept Psychophys; 83(6): 2583-2598, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33884572

ABSTRACT

Visual speech cues play an important role in speech recognition, and the McGurk effect is a classic demonstration of this. In the original McGurk & MacDonald (Nature, 264, 746-748, 1976) experiment, 98% of participants reported an illusory "fusion" percept of /d/ when listening to the spoken syllable /b/ and watching the visual speech movements for /g/. However, more recent work shows that subject and task differences influence the proportion of fusion responses. In the current study, we varied task (forced-choice vs. open-ended), stimulus set (including /d/ exemplars vs. not), and data collection environment (lab vs. Mechanical Turk) to investigate the robustness of the McGurk effect. Across experiments using the same stimuli to elicit the McGurk effect, we found fusion responses ranging from 10% to 60%, showing large variability in the likelihood of experiencing the McGurk effect across factors that are unrelated to the perceptual information provided by the stimuli. Rather than a robust perceptual illusion, we therefore argue that the McGurk effect exists only for some individuals under specific task conditions. Significance: This series of studies re-evaluates the classic McGurk effect, which shows the relevance of visual cues in speech perception. We highlight the importance of taking into account subject variables and task differences, and challenge future researchers to think carefully about the perceptual basis of the McGurk effect, how it is defined, and what it can tell us about audiovisual integration in speech.


Subjects
Illusions, Speech Perception, Auditory Perception, Humans, Speech, Visual Perception
3.
Wiley Interdiscip Rev Cogn Sci; 12(2): e1541, 2021 Mar.
Article in English | MEDLINE | ID: mdl-32767836

ABSTRACT

Recent advances in cognitive neuroscience have provided a detailed picture of the early time-course of speech perception. In this review, we highlight this work, placing it within the broader context of research on the neurobiology of speech processing, and discuss how these data point us toward new models of speech perception and spoken language comprehension. We focus, in particular, on temporally sensitive measures that allow us to directly measure early perceptual processes. Overall, the data provide support for two key principles: (a) speech perception is based on gradient representations of speech sounds, and (b) speech perception is interactive and receives input from higher-level linguistic context at the earliest stages of cortical processing. Implications for models of speech processing and the neurobiology of language more broadly are discussed. This article is categorized under: Psychology > Language; Psychology > Perception and Psychophysics; Neuroscience > Cognition.


Subjects
Cognitive Neuroscience, Comprehension, Language, Phonetics, Speech Perception/physiology, Evoked Potentials/physiology, Humans, Magnetic Resonance Imaging
4.
Psychol Sci; 30(6): 830-841, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31018103

ABSTRACT

An unresolved issue in speech perception concerns whether top-down linguistic information influences perceptual responses. We addressed this issue using the event-related-potential technique in two experiments that measured cross-modal sequential-semantic priming effects on the auditory N1, an index of acoustic-cue encoding. Participants heard auditory targets (e.g., "potatoes") following associated visual primes (e.g., "MASHED"), neutral visual primes (e.g., "FACE"), or a visual mask (e.g., "XXXX"). Auditory targets began with voiced (/b/, /d/, /g/) or voiceless (/p/, /t/, /k/) stop consonants, an acoustic difference known to yield differences in N1 amplitude. In Experiment 1 (N = 21), semantic context modulated responses to upcoming targets, with smaller N1 amplitudes for semantic associates. In Experiment 2 (N = 29), semantic context changed how listeners encoded sounds: Ambiguous voice-onset times were encoded similarly to the voicing end point elicited by semantic associates. These results are consistent with an interactive model of spoken-word recognition that includes top-down effects on early perception.


Subjects
Auditory Perception/physiology, Semantics, Speech Perception/physiology, Electrophysiological Phenomena, Evoked Potentials, Female, Humans, Male, Models, Neurological, Phonetics, Reaction Time, Young Adult
5.
Cognition; 175: 101-108, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29486377

ABSTRACT

An audiovisual correspondence (AVC) refers to an observer's seemingly arbitrary yet consistent matching of sensory features across the two modalities; for example, between an auditory pitch and a visual size. Research on AVCs has frequently used a speeded classification procedure in which participants are asked to rapidly classify an image when it is accompanied by either a congruent or an incongruent sound (or vice versa). When, as is typically the case, classification is faster in the presence of a congruent stimulus, researchers have inferred that the AVC is automatic and bottom-up. Such an inference is incomplete because the procedure does not rule out top-down influences on the AVC. To remedy this problem, we devised a procedure that allows us to assess the degree of "bottom-up-ness" and "top-down-ness" in the processing of an AVC. We did this in studies of AVCs between pitch and five visual features: size, height, spatial frequency, brightness, and angularity. We find that all the AVCs we studied involve both bottom-up and top-down processing, thus undermining the prevalent generalization that AVCs are automatic.


Subjects
Auditory Perception/physiology, Reaction Time/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Female, Humans, Male, Photic Stimulation
6.
Brain Sci; 7(3), 2017 Mar 21.
Article in English | MEDLINE | ID: mdl-28335558

ABSTRACT

Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
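The abstract does not reproduce the simulations, but the core idea (a Gaussian mixture model that discovers phonological categories from the joint distribution of auditory and visual cues, then yields graded posteriors for mismatched input) can be sketched briefly. The Python sketch below is illustrative only: the cue choices (voice onset time and lip aperture), the distribution parameters, and the use of scikit-learn's GaussianMixture are assumptions, not the authors' implementation.

# Minimal sketch (not the authors' model): fitting a Gaussian mixture to
# synthetic two-cue audiovisual data to recover phonological categories.
# Cue names, means, and variances below are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulate two categories (/b/ vs. /p/) with an auditory cue (VOT, ms)
# and a correlated visual cue (lip aperture, arbitrary units).
n = 500
vot_b = rng.normal(5, 8, n);   lip_b = rng.normal(-1.0, 0.8, n)
vot_p = rng.normal(50, 12, n); lip_p = rng.normal(1.0, 0.8, n)
X = np.column_stack([np.concatenate([vot_b, vot_p]),
                     np.concatenate([lip_b, lip_p])])

# Unsupervised learning: the GMM discovers the categories and, via its
# covariances, an implicit weighting of the two cues.
gmm = GaussianMixture(n_components=2, covariance_type="full").fit(X)

# A mismatched token: auditory cue near /b/, visual cue near /p/.
# The posterior shows how the model integrates conflicting cues.
print(gmm.predict_proba([[10.0, 1.0]]))

The posterior printed for the mismatched token illustrates the mechanism the paper invokes for mismatched audiovisual input: conflicting cues are weighed against the learned category distributions rather than resolved by either modality alone.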

7.
Q J Exp Psychol (Hove); 70(7): 1151-1165, 2017 Jul.
Article in English | MEDLINE | ID: mdl-27094855

ABSTRACT

This paper revisits the conclusion of our previous work regarding the dominance of meaning in the competition between rhythmic parsing and linguistic parsing. We played five-note rhythm patterns in which each sound was a spoken word of a five-word sentence. We asked listeners to indicate the starting point of the rhythm while disregarding which word would normally be heard as the first word of the sentence. In four studies, we varied task demands by introducing differences in rhythm complexity, rhythm ambiguity, rhythm pairing, and semantic coherence. We found that task complexity affects the dominance of meaning. We therefore amend our previous conclusion: when processing resources are taxed, listeners do not always primarily attend to meaning; instead, they attend to whichever aspect of the pattern (rhythm or meaning) is more salient.


Subjects
Comprehension/physiology, Concept Formation/physiology, Conflict, Psychological, Periodicity, Psycholinguistics, Acoustic Stimulation, Choice Behavior/physiology, Female, Humans, Male, Perception/physiology, Photic Stimulation, Students, Universities
8.
Atten Percept Psychophys; 77(8): 2728-2739, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26337611

ABSTRACT

In this paper, we explore the rules followed by the auditory system in grouping temporal patterns. Imagine the following cyclical pattern, consisting of notes (1s) and rests (0s): … 1110011011100110 …. We call such patterns "auditory necklaces" (ANs) because they are best visualized as beads arranged on a circle. This one is perceived either as repeating 11100110 or as repeating 11011100. We devised a method to explore the temporal segmentation of ANs. In two experiments, while an AN was played, a circular array of icons appeared on the screen. At the time of each event (i.e., note or rest), one icon was highlighted; the highlight moved cyclically around the circular array. The participants were asked to click on the icon that corresponded to the note they perceived as the starting point, or clasp, of the AN. The best account of the segmentation of our ANs is based on Garner's (1974) run and gap principles. An important feature of our probabilistic model is the way in which it combines the effects of run length and gap length: additively. This result is an auditory analogue of Kubovy and van den Berg's (2008) discovery of the additivity of the effects of two visual grouping principles (proximity and similarity) conjointly applied to the same stimulus.
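As a rough illustration of the additive combination the model proposes, here is a toy Python sketch (not the paper's fitted model): each candidate starting note of a cyclic necklace is scored by the length of the run it begins (the run principle) plus the length of the silent gap preceding it (the gap principle), and the summed scores are mapped to choice probabilities. The equal weights and the softmax link are assumptions for illustration.

# Toy sketch of an additive segmentation model for a cyclic "auditory
# necklace" of notes (1) and rests (0). Each candidate clasp is scored
# by run length plus preceding gap length; weights are illustrative.
import numpy as np

pattern = [1, 1, 1, 0, 0, 1, 1, 0]  # one cycle of ...11100110...
n = len(pattern)
w_run, w_gap = 1.0, 1.0             # assumed equal weights

def run_length(i):
    """Length of the run of notes starting at position i (cyclic)."""
    k = 0
    while k < n and pattern[(i + k) % n] == 1:
        k += 1
    return k

def gap_length(i):
    """Length of the gap of rests immediately preceding position i."""
    k = 0
    while k < n and pattern[(i - 1 - k) % n] == 0:
        k += 1
    return k

# Candidate clasps: notes that begin a run (preceded by a rest).
starts = [i for i in range(n) if pattern[i] == 1 and pattern[(i - 1) % n] == 0]
scores = np.array([w_run * run_length(i) + w_gap * gap_length(i) for i in starts])
probs = np.exp(scores) / np.exp(scores).sum()  # softmax over candidates
for i, p in zip(starts, probs):
    print(f"start at index {i}: P = {p:.2f}")

For the necklace 11100110 from the abstract, the two candidate clasps tie under these assumed weights (run 3 + gap 1 versus run 2 + gap 2), which is consistent with the bistable percept described above.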


Subjects
Auditory Perception/physiology, Acoustic Stimulation, Algorithms, Female, Humans, Male, Models, Statistical, Psychomotor Performance/physiology, Young Adult
9.
Q J Exp Psychol (Hove); 68(11): 2243-2254, 2015.
Article in English | MEDLINE | ID: mdl-25747914

ABSTRACT

We provide a test of Patel's (2003; Language, music, syntax and the brain. Nature Neuroscience, 6, 674-681) shared syntactic integration resource hypothesis by investigating the competition between determinants of rhythmic parsing and linguistic parsing using a sentence-rhythm Stroop task. We played five-note rhythm patterns in which each note was replaced with a spoken word of a five-word sentence and asked participants to indicate the starting point of the rhythm while disregarding which word would normally be heard as the first word of the sentence. In Study 1, listeners completed the task in their native language. In Study 2, we investigated whether this competition is weakened when the sentences are in a listener's non-native language. In Study 3, we investigated how much language mastery is necessary to obtain the effects seen in Studies 1 and 2. We demonstrate that processing resources for rhythmic parsing and linguistic parsing overlap, particularly when the task is demanding. We also show that the tendency for language to bias processing does not require deep knowledge of the language.


Subjects
Psycholinguistics, Speech, Stroop Test, Acoustic Stimulation, Auditory Perception, Conflict, Psychological, Form Perception, Humans, Learning, Photic Stimulation