Results 1 - 7 of 7
1.
Data Brief ; 29: 105242, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32154336

ABSTRACT

This article presents the data analyzed in the paper "Is imagining a voice like listening to it? Evidence from ERPs" [1]. The data include individual ERP data when participants were performing auditory imagery of native and non-native English speech during silent reading vs. normal silent reading, and behavioral results from participants performing the Nelson-Denny Reading Comprehension task and Bucknell Auditory Imagery Scale (BAIS). The repository includes the R scripts used to carry out the statistical analyses reported in the original paper.
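The individual ERP data described above are condition averages over epoched EEG trials. As an illustrative sketch only (not the repository's actual R scripts, and with array shapes that are assumptions rather than the dataset's real layout), computing per-condition ERPs and a condition difference wave might look like:

```python
import numpy as np

# Illustrative synthetic data standing in for epoched EEG:
# trials x channels x timepoints, one array per condition.
rng = np.random.default_rng(0)
imagery = rng.normal(size=(40, 32, 500))  # auditory-imagery reading condition
silent = rng.normal(size=(40, 32, 500))   # normal silent reading condition

def erp(epochs: np.ndarray) -> np.ndarray:
    """Average across trials, yielding a channels x timepoints ERP."""
    return epochs.mean(axis=0)

# Difference wave: imagery ERP minus silent-reading ERP.
diff_wave = erp(imagery) - erp(silent)
print(diff_wave.shape)  # (32, 500)
```

The statistical analyses reported in the original paper operate on such per-participant averages; this sketch shows only the averaging step, not those analyses.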

2.
Brain Lang ; 184: 32-42, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29960165

ABSTRACT

Recent work has sought to describe the time-course of spoken word recognition, from initial acoustic cue encoding through lexical activation, and identify cortical areas involved in each stage of analysis. However, existing methods are limited in either temporal or spatial resolution, and as a result, have only provided partial answers to the question of how listeners encode acoustic information in speech. We present data from an experiment using a novel neuroimaging method, fast optical imaging, to directly assess the time-course of speech perception, providing non-invasive measurement of speech sound representations, localized to specific cortical areas. We find that listeners encode speech in terms of continuous acoustic cues at early stages of processing (ca. 96 ms post-stimulus onset), and begin activating phonological category representations rapidly (ca. 144 ms post-stimulus). Moreover, cue-based representations are widespread in the brain and overlap in time with graded category-based representations, suggesting that spoken word recognition involves simultaneous activation of both continuous acoustic cues and phonological categories.


Subjects
Brain/diagnostic imaging, Speech Perception/physiology, Speech/physiology, Adult, Brain/physiology, Cues, Electroencephalography, Female, Humans, Male, Neuroimaging, Optical Imaging, Phonetics, Young Adult
3.
J Cogn Neurosci ; 27(9): 1723-37, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25848682

ABSTRACT

Information from different modalities is initially processed in different brain areas, yet real-world perception often requires the integration of multisensory signals into a single percept. An example is the McGurk effect, in which people viewing a speaker whose lip movements do not match the utterance perceive the spoken sounds incorrectly, hearing them as more similar to those signaled by the visual rather than the auditory input. This indicates that audiovisual integration is important for generating the phoneme percept. Here we asked when and where the audiovisual integration process occurs, providing spatial and temporal boundaries for the processes generating phoneme perception. Specifically, we wanted to separate audiovisual integration from other processes, such as simple deviance detection. Building on previous work employing ERPs, we used an oddball paradigm in which task-irrelevant audiovisually deviant stimuli were embedded in strings of non-deviant stimuli. We also recorded the event-related optical signal, an imaging method combining spatial and temporal resolution, to investigate the time course and neuroanatomical substrate of audiovisual integration. We found that audiovisual deviants elicit a short duration response in the middle/superior temporal gyrus, whereas audiovisual integration elicits a more extended response involving also inferior frontal and occipital regions. Interactions between audiovisual integration and deviance detection processes were observed in the posterior/superior temporal gyrus. These data suggest that dynamic interactions between inferior frontal cortex and sensory regions play a significant role in multimodal integration.


Subjects
Brain/physiology, Motion Perception/physiology, Speech Perception/physiology, Acoustic Stimulation/methods, Electroencephalography, Evoked Potentials, Female, Humans, Lip, Magnetic Resonance Imaging, Male, Neuropsychological Tests, Optical Imaging, Photic Stimulation/methods, Speech, Young Adult
4.
Atten Percept Psychophys ; 74(8): 1761-81, 2012 Nov.
Article in English | MEDLINE | ID: mdl-23070884

ABSTRACT

Speech perception, especially in noise, may be maximized if the perceiver observes the naturally occurring visual-plus-auditory cues inherent in the production of spoken language. Evidence is conflicting, however, about which aspects of visual information mediate enhanced speech perception in noise. For this reason, we investigated the relative contributions of audibility and the type of visual cue in three experiments in young adults with normal hearing and vision. Relative to static visual cues, access to the talker's phonetic gestures in speech production, especially in noise, was associated with (a) faster response times and greater sensitivity for speech understanding in noise, and (b) shorter latencies and reduced amplitudes of auditory N1 event-related potentials. Dynamic chewing facial motion also decreased the N1 latency, but only meaningful linguistic motion reduced the N1 amplitude. The hypothesis that auditory-visual facilitation is specific to properties of natural, dynamic speech gestures was partially supported.


Subjects
Cues, Evoked Potentials, Auditory, Face, Movement, Noise, Phonetics, Speech Perception, Acoustic Stimulation, Acoustics, Adult, Female, Humans, Male, Photic Stimulation, Reaction Time, Speech, Time Factors
5.
Psychon Bull Rev ; 17(1): 15-21, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20081155

ABSTRACT

It is well known that conversation (e.g., on a cell phone) impairs driving. We demonstrate that the reverse is also true: Language production and comprehension, and the encoding of the products of comprehension into memory, are less accurate when one is driving. Ninety-six pairs of drivers and conversation partners engaged in a story-retelling task in a driving simulator. Half of the pairs were older adults. Each pair completed one dual-task block (driving during the retelling task) and two single-task control blocks. The results showed a decline in the accuracy of the drivers' storytelling and of their memory for stories that were told to them by their nondriving partners. Speech production suffered an additional cost when the difficulty of driving increased. Measures of driving performance suggested that the drivers gave priority to the driving task when they were conversing. As a result, their linguistic performance suffered.


Subjects
Automobile Driving/psychology, Speech, Age Factors, Aged, Attention, Female, Humans, Male, Memory, Mental Recall, Psychomotor Performance, Young Adult
6.
J Mem Lang ; 60(3): 368-392, 2009 Apr 01.
Article in English | MEDLINE | ID: mdl-20160997

ABSTRACT

Constraint-based lexical models of language processing assume that readers resolve temporary ambiguities by relying on a variety of cues, including particular knowledge of how verbs combine with nouns. Previous experiments have demonstrated verb bias effects only in structurally complex sentences, and have been criticized on the grounds that such effects could be due to a rapid reanalysis stage in a two-stage modular processing system. In a self-paced reading experiment and an eyetracking experiment, we demonstrate verb bias effects in sentences with simple structures that should require no reanalysis, and thus provide evidence that the combinatorial properties of individual words influence the earliest stages of sentence comprehension.

7.
Proc Natl Acad Sci U S A ; 104(43): 17157-62, 2007 Oct 23.
Article in English | MEDLINE | ID: mdl-17942677

ABSTRACT

Language processing involves the rapid interaction of multiple brain regions. The study of its neurophysiological bases would therefore benefit from neuroimaging techniques combining both good spatial and good temporal resolution. Here we use the event-related optical signal (EROS), a recently developed imaging method, to reveal rapid interactions between left superior/middle temporal cortices (S/MTC) and inferior frontal cortices (IFC) during the processing of semantically or syntactically anomalous sentences. Participants were presented with sentences of these types intermixed with nonanomalous control sentences and were required to judge their acceptability. ERPs were recorded simultaneously with EROS and showed the typical activities that are elicited when processing anomalous stimuli: the N400 and the P600 for semantic and syntactic anomalies, respectively. The EROS response to semantically anomalous words showed increased activity in the S/MTC (corresponding in time with the N400), followed by IFC activity. Syntactically anomalous words evoked a similar sequence, with a temporal-lobe EROS response (corresponding in time with the P600), followed by frontal activity. However, the S/MTC activity corresponding to a semantic anomaly was more ventral than that corresponding to a syntactic anomaly. These data suggest that activation related to anomaly processing in sentences proceeds from temporal to frontal brain regions for both semantic and syntactic anomalies. This first EROS study investigating language processing shows that EROS can be used to image rapid interactions across cortical areas.


Subjects
Cerebral Cortex/physiology, Evoked Potentials, Visual/physiology, Image Processing, Computer-Assisted, Semantics, Adolescent, Adult, Brain Mapping, Electrodes, Female, Humans, Male, Regression Analysis