Results 1 - 20 of 24
1.
J Speech Lang Hear Res; 67(3): 853-869, 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38407093

ABSTRACT

PURPOSE: Our goal is to understand how the different types of plural marking are understood and processed by children with cochlear implants (CIs): (a) how does salience affect the processing of plural marking, (b) how is this processing affected by the incomplete signal provided by the CIs, and (c) is it linked to individual factors such as chronological age, vocabulary development, and phonological working memory? METHOD: Sixteen children with CIs and 30 age-matched children with normal hearing (NH) participated in an eye-tracking study. Their task was to choose the picture corresponding to an auditorily presented singular or plural noun. Accuracy, reaction time, and gaze fixation were measured and analyzed with mixed-effects models. RESULTS: Group differences were found in accuracy but not in reaction time or gaze fixation. Plural processing is qualitatively similar in children with CIs and children with NH, with more difficulty in processing plurals involving stem-vowel changes and less in processing those involving suffixes. Age effects indicate that processing abilities still evolve between 5 and 11 years, and processing is further linked to lexical development. CONCLUSIONS: Our results indicate that early implantation seems to be beneficial for the acquisition of plural marking, as indicated by very small between-group differences in processing and comprehension. Processing is furthermore affected by the type of material (i.e., phonetic, phonological, or morphological) used to mark the plural and less so by its segmental salience. Our study emphasizes the need to take into account the form of the linguistic material in future investigations at higher levels of processing.


Subjects
Cochlear Implantation, Cochlear Implants, Deafness, Child, Humans, Preschool Child, Eye-Tracking Technology, Language, Cochlear Implantation/methods, Phonetics, Hearing, Deafness/surgery
2.
J Child Lang; : 1-28, 2023 Sep 18.
Article in English | MEDLINE | ID: mdl-37718673

ABSTRACT

The ability to process plural marking of nouns is acquired early: at a very young age, children are able to understand whether a noun represents one item or more than one. However, little is known about how the segmental characteristics of plural marking are used in this process. Using eye-tracking, we aim to understand how five- to twelve-year-old children use the phonetic, phonological, and morphological information available to process noun plural marking in German (i.e., a very complex system) compared to adults. We expected differences with stem vowels, stem-final consonants, or different suffixes, alone or in combination, reflecting different processing of their segmental information. Our results show that for plural processing: (1) a suffix is the most helpful cue, an umlaut the least helpful, and voicing does not play a role; (2) one cue can be sufficient; and (3) school-age children have not reached adult-like processing of plural marking.

3.
Clin Neurophysiol; 132(9): 2290-2305, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34120838

ABSTRACT

OBJECTIVE: Cochlear implants (CIs) provide access to the auditory world for deaf individuals. We investigated whether CIs induce attentional alterations of auditory cortical processing in post-lingually deafened CI users compared to normal-hearing (NH) controls. METHODS: Event-related potentials (ERPs) were recorded in 40 post-lingually deafened CI users and in a group of 40 NH controls using an auditory three-stimulus oddball task, which included frequent standard tones (Standards) and infrequent deviant tones (Targets), as well as infrequently occurring unique sounds (Novels). Participants were exposed twice to the three-stimulus oddball task, once under the instruction to ignore the stimuli (ignore condition) and once under the instruction to respond to infrequently occurring deviant tones (attend condition). RESULTS: The allocation of attention to auditory oddball stimuli exerted stronger effects on N1 amplitudes at posterior electrodes in response to Standards and to Targets in CI users than in NH controls. Other ERP amplitudes showed similar attentional modulations in both groups (P2 in response to Standards, N2 in response to Targets and Novels, P3 in response to Targets). We also observed a statistical trend for an attenuated attentional modulation of Novelty P3 amplitudes in CI users compared to NH controls. CONCLUSIONS: ERP correlates of enhanced CI-mediated auditory attention are confined to the latency range of the auditory N1, suggesting that enhanced attentional modulation during auditory stimulus discrimination occurs primarily in associative auditory cortices of CI users. SIGNIFICANCE: The present ERP data support the hypothesis of attentional alterations of auditory cortical processing in CI users. These findings may be of clinical relevance for CI rehabilitation.


Subjects
Cochlear Implants/adverse effects, Auditory Evoked Potentials, Adult, Aged, Attention, Sensory Feedback, Female, Humans, Male, Middle Aged
5.
Sci Rep; 11(1): 5994, 2021 Mar 16.
Article in English | MEDLINE | ID: mdl-33727628

ABSTRACT

Age-related hearing loss typically affects the hearing of high frequencies in older adults. Such hearing loss influences the processing of spoken language, including higher-level processing such as that of complex sentences. Hearing aids may alleviate some of the speech processing disadvantages associated with hearing loss. However, little is known about the relation between hearing loss, hearing aid use, and their effects on higher-level language processes. This neuroimaging (fMRI) study examined these factors by measuring the comprehension and neural processing of simple and complex spoken sentences in hard-of-hearing older adults (n = 39). Neither hearing loss severity nor hearing aid experience influenced sentence comprehension at the behavioral level. In contrast, hearing loss severity was associated with increased activity in left superior frontal areas and the left anterior insula, but only when processing specific complex sentences (i.e., object-before-subject) compared to simple sentences. Longer hearing aid experience in a subset of participants (n = 19) was associated with recruitment of several areas outside of the core speech processing network in the right hemisphere, including the cerebellum, the precentral gyrus, and the cingulate cortex, but only when processing complex sentences. Overall, these results indicate that brain activation for language processing is affected by hearing loss as well as subsequent hearing aid use. Crucially, they show that these effects become apparent through investigation of complex but not simple sentences.


Subjects
Aging, Deafness/etiology, Deafness/therapy, Hearing Aids, Aged, Brain Mapping, Data Analysis, Deafness/diagnosis, Deafness/physiopathology, Disease Management, Disease Susceptibility, Female, Hearing, Hearing Tests, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Middle Aged, Treatment Outcome
6.
J Speech Lang Hear Res; 64(1): 250-262, 2021 Jan 14.
Article in English | MEDLINE | ID: mdl-33400550

ABSTRACT

Purpose Adults with mild-to-moderate age-related hearing loss typically exhibit issues with speech understanding, but their processing of syntactically complex sentences is not well understood. We test the hypothesis that these listeners' difficulties with the comprehension and processing of syntactically complex sentences arise because the processing of degraded input interferes with the successful processing of complex sentences. Method We performed a neuroimaging study with a sentence comprehension task, varying sentence complexity (through subject-object order and verb-arguments order) and cognitive demands (presence or absence of a secondary task) within subjects. Groups of older subjects with hearing loss (n = 20) and age-matched normal-hearing controls (n = 20) were tested. Results The comprehension data show effects of syntactic complexity and hearing ability, with normal-hearing controls outperforming listeners with hearing loss, seemingly more so on syntactically complex sentences. The secondary task did not influence off-line comprehension. The imaging data show effects of group, sentence complexity, and task, with listeners with hearing loss showing decreased activation in typical speech processing areas, such as the inferior frontal gyrus and superior temporal gyrus. No interactions between group, sentence complexity, and task were found in the neuroimaging data. Conclusions The results suggest that listeners with hearing loss process speech differently from their normal-hearing peers, possibly due to the increased demands of processing degraded auditory input. Increased cognitive demands by means of a secondary visual shape processing task influence neural sentence processing, but no evidence was found that they do so differently for listeners with hearing loss and normal-hearing listeners.


Subjects
Hearing Loss, Speech Perception, Adult, Comprehension, Hearing, Hearing Tests, Humans, Speech
7.
PLoS One; 15(3): e0230280, 2020.
Article in English | MEDLINE | ID: mdl-32208429

ABSTRACT

We introduce the word-by-word paradigm, a dynamic setting in which two people take turns producing a single sentence. This task requires a high degree of coordination between the partners, and its simplicity allows us to study, with sufficient experimental control, the behavioral and neural processes that underlie this controlled interaction. For this study, 13 pairs of individuals engaged in a scripted word-by-word interaction while we recorded the neural activity of both participants simultaneously using wireless EEG. To study expectation building, different semantic contexts were primed for each participant. Semantically unexpected continuations were introduced in 25% of all sentences. In line with our hypothesis, we observed amplitude differences in the P200-N400-P600 ERPs for unexpected compared to expected words. Moreover, we could successfully assess speech and reaction times. Our results show that it is possible to measure ERPs and RTs to semantically unexpected words in a dyadic interactive scenario.


Subjects
Brain/physiology, Evoked Potentials, Speech Perception, Adult, Female, Humans, Male, Semantics
8.
Neurobiol Lang (Camb); 1(2): 226-248, 2020.
Article in English | MEDLINE | ID: mdl-37213656

ABSTRACT

Previous research has shown effects of syntactic complexity on sentence processing. In linguistics, syntactic complexity (caused by different word orders) is traditionally explained by distinct linguistic operations. This study investigates whether different complex word orders indeed result in distinct patterns of neural activity, as would be expected when distinct linguistic operations are applied. Twenty-two older adults performed an auditory sentence processing paradigm in German with and without increased cognitive load. The results show that without increased cognitive load, complex sentences show distinct activation patterns compared with less complex, canonical sentences: complex object-initial sentences show increased activity in the left inferior frontal and temporal regions, whereas complex adjunct-initial sentences show increased activity in occipital and right superior frontal regions. Increased cognitive load seems to affect the processing of different sentence structures differently, increasing neural activity for canonical sentences, but leaving complex sentences relatively unaffected. We discuss these results in the context of the idea that linguistic operations required for processing sentence structures with higher levels of complexity involve distinct brain operations.

9.
Brain Behav; 9(7): e01308, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31197970

ABSTRACT

INTRODUCTION: Words are not processed in isolation but in rich contexts that are used to modulate and facilitate language comprehension. Here, we investigate distinct neural networks underlying two types of context: the current linguistic environment and verb-based syntactic preferences. METHODS: We had two main manipulations. The first was the current linguistic environment, where the relative frequencies of two syntactic structures (prepositional object [PO] and double-object [DO]) either followed everyday linguistic experience or not. The second concerned the preference toward one or the other structure depending on the verb, learned in everyday language use and stored in memory. German participants read PO and DO sentences in German while brain activity was measured with functional magnetic resonance imaging. RESULTS: First, the anterior cingulate cortex (ACC) showed a pattern of activation that integrated the current linguistic environment with everyday linguistic experience. When the input did not match everyday experience, the unexpectedly frequent structure showed higher activation in the ACC than the other conditions and more connectivity from the ACC to posterior parts of the language network. Second, verb-based surprisal of seeing a structure given a verb (PO verb preference but DO structure presentation) resulted, within the language network (left inferior frontal and left middle/superior temporal gyrus) and the precuneus, in increased activation compared to a predictable verb-structure pairing. CONCLUSION: In conclusion, (1) beyond the canonical language network, brain areas engaged in prediction and error signaling, such as the ACC, might use the statistics of syntactic structures to modulate language processing, and (2) the language network is directly engaged in processing verb preferences. These two networks show distinct influences on sentence processing.


Subjects
Comprehension/physiology, Language, Magnetic Resonance Imaging/methods, Social Environment, Vocabulary, Adult, Brain Mapping/methods, Female, Frontal Lobe/diagnostic imaging, Humans, Male, Parietal Lobe/diagnostic imaging, Psycholinguistics, Temporal Lobe/diagnostic imaging, Verbal Behavior/physiology
10.
J Speech Lang Hear Res; 62(2): 387-409, 2019 Feb 26.
Article in English | MEDLINE | ID: mdl-30950684

ABSTRACT

Purpose The purpose of this study was to investigate the processing of morphosyntactic cues (case and verb agreement) by children with cochlear implants (CIs) in German which-questions, where interpretation depends on these morphosyntactic cues. The aim was to examine whether children with CIs who perceive the different cues also make use of them in speech comprehension and processing in the same way as children with normal hearing (NH). Method Thirty-three children with CIs (age 7;01-12;04 years;months, M = 9;07, bilaterally implanted before age 3;3) and 36 children with NH (age 7;05-10;09 years, M = 9;01) were given a picture selection task with eye tracking to test their comprehension of subject, object, and passive which-questions. Two screening tasks tested their auditory discrimination of case morphology and their perception and comprehension of subject-verb agreement. Results Children with CIs who performed well on the screening tests still showed more difficulty in the comprehension of object questions than children with NH, whereas they comprehended subject questions and passive questions as well as children with NH did. There was large interindividual variability within the CI group. The gaze patterns of children with NH showed reanalysis effects for object questions disambiguated later in the sentence by verb agreement, but not for object questions disambiguated by case at the first noun phrase. The gaze patterns of children with CIs showed reanalysis effects even for object questions disambiguated at the first noun phrase. Conclusions Even when children with CIs perceive case and subject-verb agreement, their ability to use these cues for offline comprehension and online processing still lags behind normal development, which is reflected in lower performance rates and longer processing times. Individual variability within the CI group can partly be explained by working memory and hearing age. Supplemental Material https://doi.org/10.23641/asha.7728731.


Subjects
Cochlear Implants, Comprehension/physiology, Speech/physiology, Child, Cues (Psychology), Discrimination (Psychology)/physiology, Eye Movement Measurements, Female, Ocular Fixation/physiology, Germany, Humans, Language, Male, Short-Term Memory/physiology
11.
Front Psychol; 8: 689, 2017.
Article in English | MEDLINE | ID: mdl-28659836

ABSTRACT

Children with hearing impairment (HI) show disorders in syntax and morphology. The question is whether and how these disorders are connected to problems in the auditory domain. The aim of this paper is to examine whether moderate to severe hearing loss at a young age affects the ability of German-speaking, orally trained children to understand and produce sentences. We focused on sentence structures that are derived by syntactic movement, which have been identified as a sensitive marker for syntactic impairment in other languages and in other populations with syntactic impairment. Our study therefore tested subject and object relatives, subject and object Wh-questions, passive sentences, and topicalized sentences, as well as sentences with verb movement to second sentential position. We tested 19 HI children aged 9;5-13;6 and compared their performance with that of hearing children, using sentence-picture matching comprehension tasks and sentence repetition tasks. For the comprehension tasks, we included HI children who passed an auditory discrimination task; for the sentence repetition tasks, we selected children who passed a screening task of simple sentence repetition without lip-reading; this made sure that they could perceive the words in the tests, so that we could test their grammatical abilities. The results clearly showed that most of the participants with HI had considerable difficulties in the comprehension and repetition of sentences with syntactic movement: they had significant difficulties understanding object relatives, Wh-questions, and topicalized sentences, and in repeating object who- and which-questions and subject relatives, as well as sentences with verb movement to second sentential position. Repetition of passives was only problematic for some children. Object relatives were still difficult at this age for both HI and hearing children. An additional important outcome of the study is that not all sentence structures are impaired: passive structures were not problematic for most of the HI children.

13.
Front Psychol; 7: 1854, 2016.
Article in English | MEDLINE | ID: mdl-27965604

ABSTRACT

Language occurs naturally in conversations. However, the study of the neural underpinnings of language has mainly taken place in single individuals using controlled language material. The interactive elements of a conversation (e.g., turn-taking) are often not part of neurolinguistic setups. The prime reason is the difficulty of combining open, unrestricted conversations with the requirements of neuroimaging. It is necessary to find a trade-off between the naturalness of a conversation and the restrictions imposed by neuroscientific methods to allow for ecologically more valid studies. Here, we make an attempt to study the effects of a conversational element, namely turn-taking, on linguistic neural correlates, specifically the N400 effect. We focus on the physiological aspect of turn-taking, the speaker-switch, and its effect on the detectability of the N400 effect. The N400 event-related potential reflects expectation violations in a semantic context; the N400 effect describes the difference in N400 amplitude between semantically expected and unexpected items. Sentences with semantically congruent and incongruent final words were presented in two turn-taking modes: (1) reading aloud the first part of the sentence and listening to a speaker-switch for the final word, and (2) listening to the first part of the sentence with a speaker-switch for the final word. A significant N400 effect was found for both turn-taking modes and was not influenced by the mode itself. However, the mode significantly affected the P200, which was increased for the reading-aloud mode compared to the listening mode. Our results show that an N400 effect can be detected during a speaker-switch. Speech articulation (reading aloud) before the analyzed sentence fragment also did not impede N400 effect detection for the final word. The speaker-switch, however, seems to influence earlier components of the electroencephalogram related to the processing of salient stimuli. We conclude that the N400 can effectively be used to study neural correlates of language in conversational approaches that include speaker-switches.

14.
Ear Hear; 37(6): e391-e401, 2016.
Article in English | MEDLINE | ID: mdl-27748664

ABSTRACT

OBJECTIVE: The authors aimed to determine whether hearing impairment affects sentence comprehension beyond phoneme or word recognition (i.e., on the sentence level), and to distinguish grammatically induced processing difficulties in structurally complex sentences from perceptual difficulties associated with listening to degraded speech. Effects of hearing impairment or speech in noise were expected to reflect hearer-specific speech recognition difficulties. Any additional processing time caused by the sustained perceptual challenges across the sentence may either be independent of or interact with top-down processing mechanisms associated with grammatical sentence structure. DESIGN: Forty-nine participants listened to canonical subject-initial or noncanonical object-initial sentences that were presented either in quiet or in noise. Twenty-four participants had mild-to-moderate hearing impairment and received hearing-loss-specific amplification. Twenty-five participants were age-matched peers with normal hearing status. Reaction times were measured on-line at syntactically critical processing points as well as at two control points to capture differences in processing mechanisms. An off-line comprehension task served as an additional indicator of sentence (mis)interpretation and enforced syntactic processing. RESULTS: The authors found general effects of hearing impairment and speech in noise that negatively affected perceptual processing, and an effect of word order, where complex grammar locally caused processing difficulties for the noncanonical sentence structure. Listeners with hearing impairment were hardly affected by noise at the beginning of the sentence, but were affected markedly toward the end of the sentence, indicating a sustained perceptual effect of speech recognition. Comprehension of sentences with noncanonical word order was negatively affected by degraded signals even after sentence presentation. CONCLUSION: Hearing impairment adds perceptual processing load during sentence processing, but affects grammatical processing beyond the word level to the same degree as in normal hearing, with minor differences in processing mechanisms. The data contribute to our understanding of individual differences in speech perception and language understanding. The authors interpret their results within the Ease of Language Understanding model.


Subjects
Cognition, Comprehension, Sensorineural Hearing Loss/physiopathology, Speech Perception, Aged, Case-Control Studies, Female, Sensorineural Hearing Loss/psychology, Sensorineural Hearing Loss/rehabilitation, Humans, Male, Middle Aged, Reaction Time
15.
Front Psychol; 7: 990, 2016.
Article in English | MEDLINE | ID: mdl-27458400

ABSTRACT

Vocabulary size has been suggested as a useful measure of "verbal abilities" that correlates with speech recognition scores. Knowing more words is linked to better speech recognition. How vocabulary knowledge translates to general speech recognition mechanisms, how these mechanisms relate to offline speech recognition scores, and how they may be modulated by acoustical distortion or age is less clear. Age-related differences in linguistic measures may predict age-related differences in speech-recognition-in-noise performance. We hypothesized that speech recognition performance can be predicted by the efficiency of lexical access, which refers to the speed with which a given word can be searched and accessed relative to the size of the mental lexicon. We tested speech recognition with a clinical German sentence-in-noise test at two signal-to-noise ratios (SNRs), in 22 younger (18-35 years) and 22 older (60-78 years) listeners with normal hearing. We also assessed receptive vocabulary, lexical access time, verbal working memory, and hearing thresholds as measures of individual differences. Age group, SNR level, vocabulary size, and lexical access time were significant predictors of individual speech recognition scores, but working memory and hearing threshold were not. Interestingly, longer access times were correlated with better speech recognition scores. Hierarchical regression models for each subset of age group and SNR showed very similar patterns: the combination of vocabulary size and lexical access time contributed most to speech recognition performance; only for the younger group at the better SNR (yielding about 85% correct speech recognition) did vocabulary size alone predict performance. Our data suggest that successful speech recognition in noise is mainly modulated by the efficiency of lexical access. This suggests that older adults' poorer performance in the speech recognition task may have arisen from reduced efficiency in lexical access: with an average vocabulary size similar to that of younger adults, they were still slower in lexical access.

16.
Neuropsychologia; 87: 169-181, 2016 Jul 1.
Article in English | MEDLINE | ID: mdl-27212057

ABSTRACT

There is a high degree of variability in speech intelligibility outcomes across cochlear-implant (CI) users. To better understand how auditory cognition affects speech intelligibility with the CI, we performed an electroencephalography study in which we examined the relationship between central auditory processing, cognitive abilities, and speech intelligibility. Postlingually deafened CI users (N = 13) and matched normal-hearing (NH) listeners (N = 13) performed an oddball task with words presented in different background conditions (quiet, stationary noise, modulated noise). Participants had to categorize words as living (targets) or non-living entities (standards). We also assessed participants' working memory (WM) capacity and verbal abilities. For the oddball task, we found lower hit rates and prolonged response times in CI users when compared with NH listeners. Noise-related prolongation of the N1 amplitude was found for all participants. Further, we observed group-specific modulation effects of event-related potentials (ERPs) as a function of background noise. While NH listeners showed stronger noise-related modulation of the N1 latency, CI users revealed enhanced modulation effects of the N2/N4 latency. In general, higher-order processing (N2/N4, P3) was prolonged in CI users in all background conditions when compared with NH listeners. The longer N2/N4 latency in CI users suggests that these individuals have difficulties mapping acoustic-phonetic features to lexical representations. These difficulties seem to be increased for speech-in-noise conditions when compared with speech in a quiet background. Correlation analyses showed that shorter ERP latencies were related to enhanced speech intelligibility (N1, N2/N4), better lexical fluency (N1), and lower ratings of listening effort (N2/N4) in CI users. In sum, our findings suggest that CI users and NH listeners differ with regard to both the sensory and the higher-order processing of speech in quiet as well as in noisy background conditions. Our results also revealed that verbal abilities are related to speech processing and speech intelligibility in CI users, confirming the view that auditory cognition plays an important role in CI outcome. We conclude that differences in auditory-cognitive processing contribute to the variability in speech performance outcomes observed in CI users.


Subjects
Brain/physiopathology, Cochlear Implants, Cognition/physiology, Hearing Loss/physiopathology, Hearing Loss/rehabilitation, Speech Perception/physiology, Acoustic Stimulation/methods, Adult, Aged, Electroencephalography, Evoked Potentials, Female, Humans, Language, Language Tests, Male, Short-Term Memory, Middle Aged, Neuropsychological Tests, Speech Discrimination Tests
17.
Neuropsychologia; 82: 91-103, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26776233

ABSTRACT

Intonation phrase boundaries (IPBs) were hypothesized to be especially difficult to process in the presence of an amplitude-modulated noise masker because of a potential rhythmic competition. In an event-related potential study, IPBs were presented in silence, stationary noise, and amplitude-modulated noise. We elicited centro-parietal Closure Positive Shifts (CPS) in 23 young adults with normal hearing at IPBs in all acoustic conditions, albeit with some differences. CPS peak amplitudes were highest in stationary noise, followed by modulated noise, and lowest in silence. Both noise types elicited CPS delays, slightly more so in stationary compared to amplitude-modulated noise. These data suggest that amplitude modulation is not tantamount to a rhythmic competitor for prosodic phrasing but rather supports an assumed speech perception benefit due to local release from masking. The duration of CPS time windows was, however, not only longer in noise compared to silence, but also longer for amplitude-modulated compared to stationary noise. This is interpreted as support for additional processing load associated with amplitude modulation for the CPS component. Taken together, processing the prosodic phrasing of sentences in amplitude-modulated noise seems to involve the same issues that have been observed for the perception and processing of segmental information related to lexical items presented in noise: a benefit from local release from masking, even for prosodic cues, and a detrimental additional processing load that is associated with either stream segregation or signal reconstruction.


Subjects
Cerebral Cortex/physiology, Noise, Speech Acoustics, Speech Perception/physiology, Acoustic Stimulation, Adult, Electroencephalography, Auditory Evoked Potentials, Female, Humans, Male, Signal-to-Noise Ratio, Young Adult
18.
J Acoust Soc Am; 134(4): 3039-56, 2013 Oct.
Article in English | MEDLINE | ID: mdl-24116439

ABSTRACT

To allow for a systematic variation of linguistic complexity of sentences while acoustically controlling for intelligibility of sentence fragments, a German corpus, Oldenburg linguistically and audiologically controlled sentences (OLACS), was designed, implemented, and evaluated. Sentences were controlled for plausibility with a questionnaire survey. Verification of the speech material was performed in three listening conditions (quiet, stationary, and fluctuating noise) by collecting speech reception thresholds (SRTs) and response latencies as well as individual cognitive measures for 20 young listeners with normal hearing. Consistent differences in response latencies across sentence types verified the effect of linguistic complexity on processing speed. The addition of noise decreased response latencies, giving evidence for different response strategies for measurements in noise. Linguistic complexity had a significant effect on SRT. In fluctuating noise, this effect was more pronounced, indicating that fluctuating noise correlates with stronger cognitive contributions. SRTs in quiet correlated with hearing thresholds, whereas cognitive measures explained up to 40% of the variance in SRTs in noise. In conclusion, OLACS appears to be a suitable tool for assessing the interaction between aspects of speech understanding (including cognitive processing) and speech intelligibility in German.


Subjects
Speech Audiometry/methods, Speech Intelligibility, Speech Perception, Acoustic Stimulation, Adult, Cognition, Comprehension, Female, Humans, Male, Neuropsychological Tests, Noise/adverse effects, Perceptual Masking, Psycholinguistics, Reaction Time, Reproducibility of Results, Time Factors, Young Adult
19.
J Psycholinguist Res; 42(2): 139-59, 2013 Apr.
Article in English | MEDLINE | ID: mdl-22460688

ABSTRACT

This paper discusses the influence of stationary (non-fluctuating) noise on the processing and understanding of sentences that vary in their syntactic complexity (with the factors canonicity, embedding, and ambiguity). It presents data from two reaction-time (RT) studies with 44 participants testing the processing of German sentences in silence and in noise. Results show a stronger impact of noise on the processing of structurally difficult parts of the sentence than on syntactically simpler ones. This may be explained by a combination of decreased acoustical information and an increased strain on cognitive resources, such as working memory or attention, caused by noise. The noise effect for embedded sentences is smaller than for non-embedded sentences, which may be explained by a benefit from prosodic information.


Subjects
Attention/physiology, Language, Short-Term Memory/physiology, Noise, Speech Perception/physiology, Adolescent, Adult, Female, Humans, Male, Psycholinguistics, Reaction Time/physiology
20.
Int J Audiol; 50(9): 621-31, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21714708

ABSTRACT

OBJECTIVE: We investigated whether linguistic complexity contributes to the variation of the speech reception threshold in noise (SRTN) and thus should be employed as an additional design criterion in sentence tests used for audiometry. DESIGN: Three test lists were established with sentences from the Göttingen sentence test (Kollmeier & Wesselkamp, 1997). One list contained linguistically simple sentences; the other two lists contained sentences with two types of linguistic complexity. For each listener, the SRTN was determined for each list. STUDY SAMPLE: Younger and older listeners with normal hearing and older listeners with hearing impairment were tested. RESULTS: Younger listeners with normal hearing showed significantly worse SRTNs on the complex lists than on the simple list. This difference was not found for either of the older groups. CONCLUSIONS: The effect of linguistic complexity on speech recognition seems to depend on age and/or hearing status. Hence, pending further research, linguistic complexity seems less relevant as a sentence test design criterion for clinical-audiological purposes, but we argue that a test with larger variation in linguistic complexity across sentences might show a relation between linguistic complexity and speech recognition even in a clinical population.


Subjects
Hearing Disorders/physiopathology, Hearing/physiology, Linguistics, Speech Discrimination Tests, Speech Intelligibility, Speech Perception, Adult, Age Factors, Pure-Tone Audiometry, Auditory Threshold, Female, Germany, Humans, Language, Male, Middle Aged, Young Adult