Results 1 - 5 of 5
1.
Brain Lang; 211: 104875, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33086178

ABSTRACT

Understanding spoken language requires analysis of the rapidly unfolding speech signal at multiple levels: acoustic, phonological, and semantic. However, there is not yet a comprehensive picture of how these levels relate. We recorded electroencephalography (EEG) while listeners (N = 31) heard sentences in which we manipulated acoustic ambiguity (e.g., a bees/peas continuum) and sentential expectations (e.g., Honey is made by bees). EEG was analyzed with a mixed effects model over time to quantify how language processing cascades proceed on a millisecond-by-millisecond basis. Our results indicate: (1) perceptual processing and memory for fine-grained acoustics are preserved in brain activity for up to 900 msec; (2) contextual analysis begins early and is graded with respect to the acoustic signal; and (3) top-down predictions influence perceptual processing in some cases; however, these predictions are available simultaneously with the veridical signal. These mechanistic insights provide a basis for a better understanding of the cortical language network.
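The abstract's millisecond-by-millisecond analysis can be loosely illustrated with a mass-univariate sketch: regress the EEG signal on an acoustic-continuum predictor at every time point. This is a simplified stand-in for the authors' mixed effects model, and all data, effect sizes, and variable names below are simulated and illustrative.

```python
# Hedged sketch of a "regression at every time point" analysis, a simplified
# stand-in for the abstract's mixed effects model over time. All data are
# simulated; the 0.8 effect size and the change point are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_times = 200, 50

# Predictor: position on an acoustic continuum (e.g., bees ... peas), 0 to 1.
continuum = rng.uniform(0.0, 1.0, n_trials)

# Simulated single-channel EEG: the continuum predictor affects the signal
# only from time point 30 onward.
eeg = rng.normal(0.0, 1.0, (n_trials, n_times))
eeg[:, 30:] += 0.8 * continuum[:, None]

# Ordinary least-squares fit at each time point (intercept + slope).
X = np.column_stack([np.ones(n_trials), continuum])
betas, *_ = np.linalg.lstsq(X, eeg, rcond=None)  # betas has shape (2, n_times)
slopes = betas[1]

# The estimated continuum effect should be near 0 early and near 0.8 late.
early_effect = slopes[:30].mean()
late_effect = slopes[30:].mean()
```

Plotting `slopes` against time would give the kind of effect trajectory the abstract's analysis quantifies.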


Subjects
Acoustic Stimulation/methods, Comprehension/physiology, Electroencephalography/methods, Language, Motivation/physiology, Speech Perception/physiology, Adult, Auditory Perception/physiology, Female, Humans, Male, Semantics
2.
Atten Percept Psychophys; 81(4): 1047-1064, 2019 May.
Article in English | MEDLINE | ID: mdl-30945141

ABSTRACT

Some early studies of people with aphasia reported strikingly better performance on lexical than on sublexical speech perception tasks. These findings challenged the claim that lexical processing depends on sublexical processing and suggested that acoustic information could be mapped directly to lexical representations. However, Dial and Martin (Neuropsychologia 96: 192-212, 2017) argued that these studies failed to match the discriminability of targets and distractors for the sublexical and lexical stimuli and showed that when using closely matched tasks with natural speech tokens, no patient performed substantially better at the lexical than at the sublexical processing task. In the current study, we sought to provide converging evidence for the dependence of lexical on sublexical processing by examining the perception of synthetic speech stimuli varied on a voice-onset time continuum using eye-tracking methodology, which is sensitive to online speech perception processes. Eight individuals with aphasia and ten age-matched controls completed two visual world paradigm tasks: phoneme (sublexical) and word (lexical) identification. For both identification and eye-movement data, strong correlations were observed between the sublexical and lexical tasks. Critically, no patient within the control range on the lexical task was impaired on the sublexical task. Overall, the current study supports the claim that lexical processing depends on sublexical processing. Implications for inferring deficits in people with aphasia and the use of sublexical tasks to assess sublexical processing are also discussed.


Subjects
Aphasia/physiopathology, Aphasia/psychology, Eye Movements/physiology, Speech Perception/physiology, Acoustic Stimulation, Aged, Case-Control Studies, Female, Humans, Male, Phonetics, Task Performance and Analysis
3.
Ear Hear; 40(4): 961-980, 2019.
Article in English | MEDLINE | ID: mdl-30531260

ABSTRACT

OBJECTIVES: Work in normal-hearing (NH) adults suggests that spoken language processing involves coping with ambiguity. Even a clearly spoken word contains brief periods of ambiguity as it unfolds over time, and early portions will not be sufficient to uniquely identify the word. However, beyond this temporary ambiguity, NH listeners must also cope with the loss of information due to reduced forms, dialect, and other factors. A recent study suggests that NH listeners may adapt to increased ambiguity by changing the dynamics of how they commit to candidates at a lexical level. Cochlear implant (CI) users must also frequently deal with highly degraded input, in which there is less information available in the input to recover a target word. The authors asked here whether their frequent experience with this leads to lexical dynamics that are better suited for coping with uncertainty. DESIGN: Listeners heard words either correctly pronounced (dog) or mispronounced at onset (gog) or offset (dob). Listeners selected the corresponding picture from a screen containing pictures of the target and three unrelated items. While they did this, fixations to each object were tracked as a measure of the time course of identifying the target. The authors tested 44 postlingually deafened adult CI users in 2 groups (23 used standard electric only configurations, and 21 supplemented the CI with a hearing aid), along with 28 age-matched age-typical hearing (ATH) controls. RESULTS: All three groups recognized the target word accurately, though each showed a small decrement for mispronounced forms (larger in both types of CI users). Analysis of fixations showed a close time locking to the timing of the mispronunciation. Onset mispronunciations delayed initial fixations to the target, but fixations to the target showed partial recovery by the end of the trial. Offset mispronunciations showed no effect early, but suppressed looking later. 
This pattern was attested in all three groups, though both types of CI users were slower and did not commit fully to the target. When the authors quantified the degree of disruption caused by the mispronounced forms, they found that both groups of CI users showed less disruption than ATH listeners during the first 900 msec of processing. Finally, an individual differences analysis showed that, within the CI users, the dynamics of fixations predicted speech perception outcomes over and above accuracy in this task, and that CI users with the more rapid fixation patterns of ATH listeners showed better outcomes. CONCLUSIONS: Postlingually deafened CI users process speech incrementally (as do ATH listeners), though they commit more slowly and less strongly to a single item than do ATH listeners. This may allow them to cope more flexibly with mispronunciations.


Subjects
Cochlear Implantation, Hearing Aids, Hearing Loss, Sensorineural/rehabilitation, Speech Perception/physiology, Uncertainty, Acoustic Stimulation, Adult, Case-Control Studies, Eye Movement Measurements, Female, Hearing Loss, Sensorineural/physiopathology, Humans, Male, Middle Aged, Photic Stimulation
4.
Stat Methods Med Res; 26(6): 2708-2725, 2017 Dec.
Article in English | MEDLINE | ID: mdl-26400088

ABSTRACT

In multiple fields of study, time series measured at high frequencies are used to estimate population curves that describe the temporal evolution of some characteristic of interest. These curves are typically nonlinear, and the deviations of each series from the corresponding curve are highly autocorrelated. In this scenario, we propose a procedure to compare the response curves for different groups at specific points in time. The method involves fitting the curves, performing potentially hundreds of serially correlated tests, and appropriately adjusting the overall alpha level of the tests. Our motivating application comes from psycholinguistics and the visual world paradigm. We describe how the proposed technique can be adapted to compare fixation curves within subjects as well as between groups. Our results lead to conclusions beyond the scope of previous analyses.
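As a rough illustration of the workflow this abstract describes (fit group curves, test at many time points, adjust the overall alpha level), here is a pointwise comparison with a plain Bonferroni adjustment. The paper's proposed adjustment for serially correlated tests is more refined than this; all data and parameter values below are simulated and illustrative.

```python
# Hedged sketch: pointwise group comparison of time curves with a plain
# Bonferroni alpha adjustment. The paper's method handles serial correlation
# more carefully; this only shows the general fit/test/adjust workflow.
import numpy as np

rng = np.random.default_rng(1)
n_per_group, n_times = 30, 100
t = np.linspace(0.0, 1.0, n_times)

def simulate_group(shift, n):
    """Logistic fixation-curve-like trajectories plus per-subject noise."""
    base = 1.0 / (1.0 + np.exp(-(t - 0.5 + shift) * 12.0))
    return base + rng.normal(0.0, 0.08, (n, n_times))

group_a = simulate_group(0.00, n_per_group)  # curve rises at t = 0.5
group_b = simulate_group(0.08, n_per_group)  # curve rises slightly earlier

# Welch-style t statistic at each time point.
diff = group_b.mean(axis=0) - group_a.mean(axis=0)
se = np.sqrt(group_a.var(axis=0, ddof=1) / n_per_group
             + group_b.var(axis=0, ddof=1) / n_per_group)
tvals = diff / se

# Bonferroni: two-sided alpha = .05 / n_times; 3.48 is the corresponding
# standard normal quantile (a normal approximation to the t critical value).
z_crit = 3.48
significant = np.abs(tvals) > z_crit
```

With this setup, only the mid-curve region where the two groups genuinely diverge should survive the correction; the endpoints, where both curves coincide, should not.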


Subjects
Nonlinear Dynamics, Psycholinguistics/statistics & numerical data, Acoustic Stimulation, Algorithms, Biostatistics/methods, Cochlear Implants, Computer Simulation, Humans, Language, Logistic Models, Models, Statistical, Normal Distribution, Time Factors
5.
J Speech Lang Hear Res; 56(4): 1328-1345, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23926331

ABSTRACT

PURPOSE: Researchers have begun to use eye tracking in the visual world paradigm (VWP) to study clinical differences in language processing, but the reliability of such laboratory tests has rarely been assessed. In this article, the authors assess test-retest reliability of the VWP for spoken word recognition. METHOD: Participants performed an auditory VWP task in repeated sessions and a visual-only VWP task in a third session. The authors performed correlation and regression analyses on several parameters to determine which reflect reliable behavior and which are predictive of behavior in later sessions. RESULTS: Results showed that the fixation parameters most closely related to the timing and degree of fixations were moderately to strongly correlated across days, whereas the parameters related to the rate of increase or decrease of fixations to particular items were less strongly correlated. Moreover, when including factors derived from the visual-only task, the performance of the regression model was at least moderately correlated with Day 2 performance on all parameters (R > .30). CONCLUSION: The VWP is stable enough (with some caveats) to serve as an individual measure. These findings suggest guidelines for future use of the paradigm and for areas of improvement in both methodology and analysis.
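A minimal sketch of the kind of test-retest correlation underlying such a reliability analysis, using simulated data (the trait/noise structure and all values are illustrative assumptions, not the authors' actual parameters or measures):

```python
# Hedged sketch: test-retest correlation of one per-participant fixation
# parameter across two sessions. Data are simulated; a stable "trait" plus
# session-specific noise yields correlated Day 1 / Day 2 measurements.
import numpy as np

rng = np.random.default_rng(2)
n_participants = 40

trait = rng.normal(0.5, 0.10, n_participants)         # stable individual level
day1 = trait + rng.normal(0.0, 0.05, n_participants)  # session 1 measurement
day2 = trait + rng.normal(0.0, 0.05, n_participants)  # session 2 measurement

test_retest_r = np.corrcoef(day1, day2)[0, 1]
```

The more the session noise dominates the stable trait variance, the lower the test-retest correlation, which is the intuition behind the abstract's contrast between reliable and unreliable fixation parameters.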


Subjects
Eye Movements, Language Tests/standards, Pattern Recognition, Physiological, Pattern Recognition, Visual, Speech Perception, Speech, Acoustic Stimulation/methods, Adult, Female, Fixation, Ocular, Humans, Male, Phonetics, Photic Stimulation/methods, Reaction Time, Regression Analysis, Reproducibility of Results