Results 1 - 10 of 10
1.
Multisens Res ; 31(1-2): 39-56, 2018 Jan 01.
Article in English | MEDLINE | ID: mdl-31264595

ABSTRACT

Visual information on a talker's face can influence what a listener hears. Commonly used approaches to study this include mismatched audiovisual stimuli (e.g., McGurk-type stimuli) or visual speech in auditory noise. In this paper we discuss potential limitations of these approaches and introduce a novel visual phonemic restoration method. This method always presents the same visual stimulus (e.g., /ba/) dubbed with either a matched auditory stimulus (/ba/) or one that has weakened consonantal information and sounds more /a/-like. When this reduced auditory stimulus (/a/) is dubbed with the visual /ba/, a visual influence will effectively 'restore' the weakened auditory cues so that the stimulus is perceived as a /ba/. We used an oddball design in which participants were asked to detect the /a/ among a stream of more frequently occurring /ba/s, presented with either a speaking face or a face with no visual speech. In addition, the same paradigm was run with a second contrast, in which participants detected /pa/ among /ba/s, a contrast that should be unaltered by the presence of visual speech. Behavioral and some ERP findings reflect the expected phonemic restoration for the /ba/ vs. /a/ contrast; specifically, we observed reduced accuracy and a reduced P300 response in the presence of visual speech. Further, we report an unexpected finding of reduced accuracy and P300 response for both speech contrasts in the presence of visual speech, suggesting overall modulation of the auditory signal when visual speech is present. Consistent with this, we observed a mismatch negativity (MMN) effect for the /ba/ vs. /pa/ contrast only, which was larger in the absence of visual speech. We discuss the potential utility of this paradigm for listeners who cannot respond actively, such as infants and individuals with developmental disabilities.
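As an aside on analysis, the accuracy measure in this oddball design maps onto a standard signal-detection calculation. Below is a minimal Python sketch of computing the sensitivity index d' from hit and false-alarm counts; the counts, the log-linear correction, and all names are illustrative assumptions rather than values or procedures from the paper.

```python
# Minimal sketch (assumed values): sensitivity (d') for detecting the
# deviant /a/ among standard /ba/ trials in an oddball task.
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    """d' with a log-linear correction so rates of 0 or 1 do not
    produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Visual speech should 'restore' the deviant toward /ba/, so d' should
# drop in the speaking-face condition (made-up counts for illustration).
print(dprime(hits=38, misses=12, false_alarms=5, correct_rejections=195))  # no visual speech
print(dprime(hits=24, misses=26, false_alarms=6, correct_rejections=194))  # speaking face
```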

2.
Brain Sci ; 7(6), 2017 Jun 02.
Article in English | MEDLINE | ID: mdl-28574442

ABSTRACT

When a speaker talks, the consequences can both be heard (audio) and seen (visual). A novel visual phonemic restoration task was used to assess behavioral discrimination and neural signatures (event-related potentials; ERPs) of audiovisual processing in typically developing children with a range of social and communicative skills, assessed using the Social Responsiveness Scale, a measure of traits associated with autism. An auditory oddball design presented two types of stimuli to the listener: a clear exemplar of an auditory consonant-vowel syllable /ba/ (the more frequently occurring standard stimulus), and a syllable in which the auditory cues for the consonant were substantially weakened, creating a stimulus that is more /a/-like (the infrequently presented deviant stimulus). All speech tokens were paired with a face producing /ba/ or a face with a pixelated mouth containing motion but no visual speech. In this paradigm, the visual /ba/ should cause the auditory /a/ to be perceived as /ba/, attenuating the oddball response; in contrast, a pixelated video (without articulatory information) should not have this effect. Behaviorally, participants showed visual phonemic restoration (reduced accuracy in detecting the deviant /a/) in the presence of a speaking face. In addition, ERPs were observed in both an early time window (N100) and a later time window (P300) that were sensitive to speech context (/ba/ or /a/) and modulated by face context (speaking face with visible articulation or with pixelated mouth). Specifically, the oddball responses for the N100 and P300 were attenuated in the presence of a face producing /ba/ relative to a pixelated face, a possible neural correlate of the phonemic restoration effect. Notably, individuals with more traits associated with autism (yet still in the non-clinical range) had smaller P300 responses overall, regardless of face context, suggesting generally reduced phonemic discrimination.
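For readers thinking about the ERP side, a sketch of the windowed-amplitude measure is shown below: mean amplitude in assumed N100 and P300 windows from already-epoched single-channel EEG. The sampling rate, window bounds, and placeholder data are assumptions for illustration, not the paper's parameters.

```python
# Minimal sketch (assumed parameters): mean ERP amplitude in N100 and
# P300 windows from epoched EEG shaped (trials, samples), one channel.
import numpy as np

fs = 500                          # Hz, assumed sampling rate
t = np.arange(-0.1, 0.8, 1 / fs)  # epoch: -100 ms to 800 ms

def window_mean(epochs, t, start, stop):
    """Average across trials, baseline-correct on the prestimulus
    interval, then average over the [start, stop) window in seconds."""
    erp = epochs.mean(axis=0)
    erp = erp - erp[t < 0].mean()
    return erp[(t >= start) & (t < stop)].mean()

epochs = np.random.randn(120, t.size)      # placeholder for real data
n100 = window_mean(epochs, t, 0.08, 0.14)  # assumed N100 window
p300 = window_mean(epochs, t, 0.30, 0.50)  # assumed P300 window
print(n100, p300)
```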

3.
J Acoust Soc Am ; 141(5): 3145, 2017 May.
Article in English | MEDLINE | ID: mdl-28599552

ABSTRACT

When a speaker talks, the visible consequences of what they are saying are available to the listener. Listeners are influenced by this visible speech both in noisy listening environments and even when auditory speech can easily be heard. While the visual influence on heard speech has been reported to increase from early to late childhood, little is known about the mechanism that underlies this developmental trend. One possible account of the developmental differences is that looking behavior to the face of a speaker changes with age. To examine this possibility, gaze to a speaking face was examined in children from 5 to 10 years of age and in adults. Participants viewed a speaker's face in a range of conditions that elicit looking: a visual-only (speechreading) condition, an auditory-noise (speech-in-noise) condition, and an audiovisual-mismatch (McGurk) condition. Results indicate an increase in gaze to the face, and specifically to the mouth, of a speaker between the ages of 5 and 10 for all conditions. This change in looking behavior may help account for previous findings showing that visual influence on heard speech increases with development.
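The gaze measure in studies like this usually reduces to the proportion of valid gaze samples landing in areas of interest (AOIs) such as the face or mouth. Here is a minimal sketch under assumed rectangular AOIs and made-up sample coordinates; real analyses typically work from per-trial fixation events.

```python
# Minimal sketch (assumed AOIs and data): proportion of gaze samples in
# rectangular face and mouth areas of interest.
import numpy as np

def in_aoi(x, y, aoi):
    left, top, right, bottom = aoi
    return (x >= left) & (x <= right) & (y >= top) & (y <= bottom)

FACE = (300, 100, 700, 600)    # pixels: left, top, right, bottom (assumed)
MOUTH = (430, 430, 570, 540)

x = np.random.uniform(0, 1024, 5000)  # placeholder gaze samples
y = np.random.uniform(0, 768, 5000)

valid = np.isfinite(x) & np.isfinite(y)  # drop track-loss samples
face_prop = in_aoi(x[valid], y[valid], FACE).mean()
mouth_prop = in_aoi(x[valid], y[valid], MOUTH).mean()
print(f"face: {face_prop:.2f}, mouth: {mouth_prop:.2f}")
```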


Subjects
Facial Expression , Fixation, Ocular , Speech Acoustics , Speech Perception , Visual Perception , Voice Quality , Acoustic Stimulation , Age Factors , Audiometry, Speech , Child , Child, Preschool , Cues , Female , Humans , Male , Noise/adverse effects , Perceptual Masking , Photic Stimulation , Video Recording , Young Adult
4.
Clin Linguist Phon ; 29(1): 76-83, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25313714

ABSTRACT

Perception of spoken language requires attention to acoustic as well as visible phonetic information. This article reviews the known differences in audiovisual speech perception in children with autism spectrum disorders (ASD) and identifies the need for interventions that address this construct. Elements of an audiovisual training program are described. This researcher-developed program, delivered via an iPad app, presents natural speech in the context of increasing noise, supported by a speaking face. Children are cued to attend to visible articulatory information to assist in perception of the spoken words. Data from four children with ASD, ages 8-10, are presented, showing that the children improved their performance on an untrained auditory speech-in-noise task.


Subjects
Audiovisual Aids , Child Development Disorders, Pervasive/therapy , Mobile Applications , Speech Perception , Attention , Child , Child Development Disorders, Pervasive/diagnosis , Child Development Disorders, Pervasive/psychology , Comprehension , Humans , Lipreading , Male , Pattern Recognition, Visual , Phonetics , Semantics
5.
Front Psychol ; 5: 397, 2014.
Article in English | MEDLINE | ID: mdl-24847297

ABSTRACT

Using eye-tracking methodology, gaze to a speaking face was compared between a group of children with autism spectrum disorders (ASD) and a group with typical development (TD). Patterns of gaze were observed under three conditions: audiovisual (AV) speech in auditory noise, visual-only speech, and an AV non-face, non-speech control. Children with ASD looked less at the face of the speaker and fixated less on the speaker's mouth than TD controls. No group differences in gaze were observed for the non-face, non-speech control task. Since the mouth carries much of the articulatory information available on the face, these findings suggest that children with ASD may have reduced access to critical linguistic information. This reduced access to visible articulatory information could be a contributor to the communication and language problems exhibited by children with ASD.

6.
Child Dev ; 82(5): 1397-403, 2011.
Article in English | MEDLINE | ID: mdl-21790542

ABSTRACT

This study used eye-tracking methodology to assess audiovisual speech perception in 26 children ranging in age from 5 to 15 years, half with autism spectrum disorders (ASD) and half with typical development. Given the characteristic reduction in gaze to the faces of others in children with ASD, it was hypothesized that they would show reduced influence of visual information on heard speech. Responses were compared on a set of auditory, visual, and audiovisual speech perception tasks. Even when fixated on the face of the speaker, children with ASD were less visually influenced than typically developing controls. This indicates fundamental differences in the processing of audiovisual speech in children with ASD, which may contribute to their language and communication impairments.


Subjects
Child Development Disorders, Pervasive/psychology , Lipreading , Pattern Recognition, Visual , Speech Perception , Verbal Behavior , Adolescent , Child , Child Development Disorders, Pervasive/diagnosis , Child, Preschool , Communication , Female , Humans , Language Development Disorders/diagnosis , Language Development Disorders/psychology , Male , Perceptual Masking , Reference Values
7.
Lang Speech ; 49(Pt 1): 21-53, 2006.
Article in English | MEDLINE | ID: mdl-16922061

ABSTRACT

We report four experiments designed to determine whether visual information affects judgments of acoustically specified nonspeech events as well as speech events (the "McGurk effect"). Previous findings have shown only weak McGurk effects for nonspeech stimuli, whereas strong effects are found for consonants. We used click sounds that serve as consonants in some African languages but are perceived as nonspeech by American English listeners. We found a significant McGurk effect for clicks presented in isolation that was much smaller than that found for stop-consonant-vowel syllables. In subsequent experiments, we found strong McGurk effects, comparable to those found for English syllables, for click-vowel syllables, and weak effects, comparable to those found for isolated clicks, for excised release bursts of stop consonants presented in isolation. We interpret these findings as evidence that the potential contributions of speech-specific processes to the McGurk effect are limited, and discuss the results in relation to current explanations for the McGurk effect.


Subjects
Illusions , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Attention , Humans , Perceptual Masking , Psychoacoustics , Speech Acoustics
8.
Percept Psychophys ; 67(5): 759-69, 2005 Jul.
Article in English | MEDLINE | ID: mdl-16334050

ABSTRACT

The McGurk effect, where an incongruent visual syllable influences identification of an auditory syllable, does not always occur, suggesting that perceivers sometimes fail to use relevant visual phonetic information. We tested whether another visual phonetic effect, which involves the influence of visual speaking rate on perceived voicing (Green & Miller, 1985), would occur in instances when the McGurk effect does not. In Experiment 1, we established this visual rate effect using auditory and visual stimuli matching in place of articulation, finding a shift in the voicing boundary along an auditory voice-onset-time continuum with fast versus slow visual speech tokens. In Experiment 2, we used auditory and visual stimuli differing in place of articulation and found a shift in the voicing boundary due to visual rate when the McGurk effect occurred and, more critically, when it did not. The latter finding indicates that phonetically relevant visual information is used in speech perception even when the McGurk effect does not occur, suggesting that the incidence of the McGurk effect underestimates the extent of audiovisual integration.
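The voicing-boundary shift described here is conventionally estimated by fitting a logistic function to identification proportions along the VOT continuum and reading off the 50% crossover. A minimal sketch with made-up response proportions:

```python
# Minimal sketch (made-up data): estimate a voicing boundary as the VOT
# at which a fitted logistic crosses 50% "voiceless" responses.
import numpy as np
from scipy.optimize import curve_fit

def logistic(vot, slope, boundary):
    return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

vot_ms = np.array([10, 15, 20, 25, 30, 35, 40, 45])  # continuum steps
p_voiceless = np.array([.02, .05, .15, .40, .70, .90, .97, .99])

(slope, boundary), _ = curve_fit(logistic, vot_ms, p_voiceless, p0=[0.3, 27.0])
print(f"voicing boundary ~ {boundary:.1f} ms VOT")
# Fitting fast and slow visual-speech conditions separately would expose
# the boundary shift described above.
```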


Subjects
Speech Perception , Visual Perception , Adolescent , Adult , Auditory Perception , Cues , Humans , Middle Aged
9.
J Exp Psychol Hum Percept Perform ; 30(3): 445-63, 2004 Jun.
Article in English | MEDLINE | ID: mdl-15161378

ABSTRACT

Phoneme identification with audiovisually discrepant stimuli is influenced by information in the visual signal (the McGurk effect). Additionally, lexical status affects identification of auditorily presented phonemes. The present study tested for lexical influences on the McGurk effect. Participants identified phonemes in audiovisually discrepant stimuli in which the lexical status of the auditory component and of a visually influenced percept was independently varied. Visually influenced (McGurk) responses were more frequent when they formed a word and when the auditory signal was a nonword (Experiment 1). Lexical effects were larger for slow than for fast responses (Experiment 2), as with auditory speech, and were replicated with stimuli matched on physical properties (Experiment 3). These results are consistent with models in which lexical processing of speech is modality independent.
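The fast-versus-slow comparison in Experiment 2 is, at its core, a median split on reaction time. A minimal sketch with illustrative column names and made-up trials:

```python
# Minimal sketch (made-up data): median split on reaction time, comparing
# the rate of visually influenced (McGurk) responses for fast vs. slow trials.
import pandas as pd

trials = pd.DataFrame({
    "rt_ms":  [520, 610, 480, 900, 750, 430, 820, 560],
    "mcgurk": [0, 1, 0, 1, 1, 0, 1, 1],  # 1 = visually influenced response
})

fast = trials["rt_ms"] <= trials["rt_ms"].median()
print("fast trials:", trials.loc[fast, "mcgurk"].mean())
print("slow trials:", trials.loc[~fast, "mcgurk"].mean())
# A larger effect for slow responses would mirror the pattern reported above.
```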


Subjects
Lipreading , Speech Perception , Visual Perception , Vocabulary , Adolescent , Adult , Female , Humans , Male , Phonetics , Reaction Time
10.
Percept Psychophys ; 65(4): 591-601, 2003 May.
Article in English | MEDLINE | ID: mdl-12812281

ABSTRACT

Previous work has demonstrated that the graded internal structure of phonetic categories is sensitive to a variety of contextual factors. One such factor is place of articulation: the best exemplars of voiceless stop consonants along auditory bilabial and velar voice onset time (VOT) continua occur over different ranges of VOTs (Volaitis & Miller, 1992). In the present study, we exploited the McGurk effect to examine whether visual information for place of articulation also shifts the best-exemplar range for voiceless consonants, following Green and Kuhl's (1989) demonstration of effects of visual place of articulation on the location of voicing boundaries. In Experiment 1, we established that /p/ and /t/ have different best-exemplar ranges along auditory bilabial and alveolar VOT continua. We then found, in Experiment 2, a similar shift in the best-exemplar range for /t/ relative to that for /p/ when there was a change in visual place of articulation, with auditory place of articulation held constant. These findings indicate that the perceptual mechanisms that determine internal phonetic category structure are sensitive to visual, as well as to auditory, information.
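One common way to operationalize a best-exemplar range is to take all continuum steps whose mean goodness rating falls within some criterion of the peak. The sketch below uses a 90%-of-peak criterion and made-up ratings; the criterion and values are assumptions, not those of the study.

```python
# Minimal sketch (assumed criterion and data): best-exemplar range along a
# VOT continuum from mean goodness ratings on a 1-7 scale.
import numpy as np

vot_ms = np.array([20, 35, 50, 65, 80, 95, 110, 125])
ratings = np.array([2.1, 4.0, 6.2, 6.8, 6.5, 5.3, 3.9, 2.8])

best = vot_ms[ratings >= 0.9 * ratings.max()]
print(f"best-exemplar range ~ {best.min()}-{best.max()} ms VOT")
# Comparing this range across auditory place (/p/ vs. /t/) or visual place
# conditions would show the kind of shift described above.
```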


Subjects
Phonetics , Speech Perception , Visual Perception , Adolescent , Adult , Female , Humans , Male , Middle Aged , Random Allocation , Time Factors , Voice