Results 1 - 20 of 88
1.
J Speech Lang Hear Res ; 64(10): 3883-3893, 2021 10 04.
Article in English | MEDLINE | ID: mdl-34491816

ABSTRACT

Purpose: This study reports on the development of an auditory passage comprehension task for Swedish primary school children of cultural and linguistic diversity, and on their performance on the task in quiet and in noise. Method: Eighty-eight children aged 7-9 years with normal hearing participated. The children were divided into three groups based on presumed language exposure: 13 were categorized as Swedish-speaking monolinguals, 19 as simultaneous bilinguals, and 56 as sequential bilinguals. No significant difference in working memory capacity was found between the three language groups. Two passages and associated multiple-choice questions were developed. During development of the passage comprehension task, steps were taken to reduce the impact of culture-specific prior experience and knowledge on performance: the passages followed story grammar principles, used universal topics and plots, and employed simple language that avoided complex or unusual grammatical structures and words. Results: The findings indicate no significant difference between the two passages and similar response distributions. Passage comprehension was significantly better in quiet than in noise, regardless of language exposure group. The monolinguals outperformed both simultaneous and sequential bilinguals in both listening conditions. Conclusions: Because the task was designed to minimize the effect of cultural knowledge on performance, the results suggest that both simultaneous and sequential bilinguals are at a disadvantage in auditory passage comprehension compared with monolinguals. As expected, noise had a negative effect on auditory passage comprehension, and the magnitude of this effect did not relate to language exposure. The developed task seems suitable for assessing auditory passage comprehension in primary school children of linguistic and cultural diversity.


Subjects
Comprehension, Speech Perception, Child, Humans, Language, Linguistics, Schools, Sweden
2.
J Exp Child Psychol ; 210: 105203, 2021 10.
Article in English | MEDLINE | ID: mdl-34118494

ABSTRACT

Background noise makes listening effortful and may lead to fatigue. This may compromise classroom learning, especially for children with a non-native background. In the current study, we used pupillometry to investigate listening effort and fatigue during listening comprehension under typical (0 dB signal-to-noise ratio [SNR]) and favorable (+10 dB SNR) listening conditions in 63 Swedish primary school children (7-9 years of age) performing a narrative speech-picture verification task. Our sample comprised both native (n = 25) and non-native (n = 38) speakers of Swedish. Results revealed greater pupil dilation, indicating more listening effort, in the typical listening condition than in the favorable one, and it was primarily the non-native speakers who contributed to this effect (they also had lower performance accuracy than the native speakers). Furthermore, the native speakers had greater pupil dilation during successful trials, whereas the non-native speakers showed the greatest pupil dilation during unsuccessful trials, especially in the typical listening condition. This set of results indicates that whereas native speakers can apply listening effort to good effect, non-native speakers may have reached their effort ceiling, resulting in poorer listening comprehension. Finally, we found that baseline pupil size decreased over trials, potentially indicating listening-related fatigue, and this effect was greater in the typical listening condition than in the favorable one. Collectively, these results provide novel insight into the underlying dynamics of listening effort, fatigue, and listening comprehension in typical versus favorable classroom conditions, and they demonstrate for the first time how sensitive this interplay is to language experience.
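The two listening conditions above differ only in signal-to-noise ratio (0 dB vs. +10 dB SNR). How a masker is scaled to hit a target SNR can be sketched as follows; this is a minimal illustration using broadband power, and the function name is hypothetical, not the study's actual stimulus pipeline:

```python
import numpy as np

def scale_noise_to_snr(speech, noise, snr_db):
    """Return `noise` rescaled so that the speech-to-noise power ratio
    of the pair equals `snr_db` (in decibels)."""
    speech = np.asarray(speech, dtype=float)
    noise = np.asarray(noise, dtype=float)[:len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Target noise power: p_speech / p_target = 10 ** (snr_db / 10)
    p_target = p_speech / (10 ** (snr_db / 10))
    return noise * np.sqrt(p_target / p_noise)
```

The presented mixture would then be, for example, `speech + scale_noise_to_snr(speech, babble, 0.0)` for the typical condition and `snr_db=10.0` for the favorable one.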


Subjects
Speech Perception, Auditory Perception, Child, Fatigue, Humans, Noise, Schools
3.
Cereb Cortex ; 31(7): 3165-3176, 2021 06 10.
Article in English | MEDLINE | ID: mdl-33625498

ABSTRACT

Stimulus degradation adds to working memory load during speech processing. We investigated whether this applies to sign processing and, if so, whether the mechanism implicates secondary auditory cortex. We conducted an fMRI experiment where 16 deaf early signers (DES) and 22 hearing non-signers performed a sign-based n-back task with three load levels and stimuli presented at high and low resolution. We found decreased behavioral performance with increasing load and decreasing visual resolution, but the neurobiological mechanisms involved differed between the two manipulations and did so for both groups. Importantly, while the load manipulation was, as predicted, accompanied by activation in the frontoparietal working memory network, the resolution manipulation resulted in temporal and occipital activation. Furthermore, we found evidence of cross-modal reorganization in the secondary auditory cortex: DES had stronger activation and stronger connectivity between this and several other regions. We conclude that load and stimulus resolution have different neural underpinnings in the visual-verbal domain, which has consequences for current working memory models, and that for DES the secondary auditory cortex is involved in the binding of representations when task demands are low.


Subjects
Auditory Cortex/diagnostic imaging, Deafness/diagnostic imaging, Magnetic Resonance Imaging/methods, Memory, Short-Term/physiology, Sign Language, Visual Perception, Adult, Auditory Cortex/physiology, Deafness/physiopathology, Female, Humans, Male, Neuronal Plasticity/physiology, Photic Stimulation/methods, Reaction Time/physiology, Visual Perception/physiology, Young Adult
4.
J Speech Lang Hear Res ; 64(2): 359-370, 2021 02 17.
Article in English | MEDLINE | ID: mdl-33439747

ABSTRACT

Purpose: The purpose of this study was to conceptualize the subtle balancing act between language input and prediction (cognitive priming of future input) in achieving understanding of communicated content. When understanding fails, reconstructive postdiction is initiated. Three memory systems play important roles: working memory (WM), episodic long-term memory (ELTM), and semantic long-term memory (SLTM). The axiom of the Ease of Language Understanding (ELU) model is that explicit WM resources are invoked by a mismatch between language input, in the form of rapid automatic multimodal binding of phonology, and multimodal phonological and lexical representations in SLTM. If, however, the output of rapid automatic multimodal binding of phonology matches SLTM/ELTM representations, language processing continues rapidly and implicitly. Method and Results: In our first ELU approach, we focused on experimental manipulations of signal processing in hearing aids and background noise designed to cause a mismatch with LTM representations; both resulted in increased dependence on WM. Our second approach, the one most relevant for this review article, focuses on the relative effects of age-related hearing loss on the three memory systems. According to the ELU model, WM is predicted to be frequently occupied with reconstructing what was actually heard, resulting in relative disuse of phonological/lexical representations in the ELTM and SLTM systems. The prediction and results do not depend on test modality per se but rather on the particular memory system; this is discussed further. Conclusions: Given the literature on ELTM decline as a precursor of dementia, and the finding that hearing loss substantially increases the long-term risk for Alzheimer's disease, lowered ELTM due to hearing loss and disuse may be part of the causal chain linking hearing loss and dementia. Future ELU research will focus on this possibility.


Subjects
Hearing Aids, Speech Perception, Cognition, Hearing, Humans, Language, Memory, Short-Term
5.
Front Psychol ; 11: 534741, 2020.
Article in English | MEDLINE | ID: mdl-33192776

ABSTRACT

Auditory cortex in congenitally deaf early sign language users reorganizes to support cognitive processing in the visual domain. However, evidence suggests that the potential benefits of this reorganization are largely unrealized. At the same time, there is growing evidence that experience of playing computer and console games improves visual cognition, in particular visuospatial attentional processes. In the present study, we investigated in a group of deaf early signers whether those who reported recently playing computer or console games (deaf gamers) had better visuospatial attentional control than those who reported not playing such games (deaf non-gamers), and whether any such effect was related to cognitive processing in the visual domain. Using a classic test of attentional control, the Eriksen Flanker task, we found that deaf gamers performed on a par with hearing controls, while the performance of deaf non-gamers was poorer. Among hearing controls there was no effect of gaming. This suggests that deaf gamers may have better visuospatial attentional control than deaf non-gamers, probably because they are less susceptible to parafoveal distractions. Future work should examine the robustness of this potential gaming benefit and whether it is associated with neural plasticity in early deaf signers, as well as whether gaming intervention can improve visuospatial cognition in deaf people.

7.
Front Neurosci ; 14: 573254, 2020.
Article in English | MEDLINE | ID: mdl-33100961

ABSTRACT

Under adverse listening conditions, prior linguistic knowledge about form (i.e., phonology) and meaning (i.e., semantics) helps us to predict what an interlocutor is about to say. Previous research has shown that accurate predictions of incoming speech increase speech intelligibility, and that semantic predictions enhance the perceptual clarity of degraded speech even when exact phonological predictions are possible. In addition, working memory (WM) is thought to have a specific influence over anticipatory mechanisms by actively maintaining and updating the relevance of predicted vs. unpredicted speech inputs. However, the relative impact on speech processing of deviations from expectations related to form and meaning is incompletely understood. Here, we use MEG to investigate the cortical temporal processing of deviations from the expected form and meaning of final words during sentence processing. Our overall aim was to observe how deviations from the expected form and meaning modulate cortical speech processing under adverse listening conditions, and to investigate the degree to which this is associated with WM capacity. Results indicated that different types of deviations are processed differently in the auditory N400 and mismatch negativity (MMN) components. In particular, the MMN was sensitive to the type of deviation (form or meaning), whereas the N400 was sensitive to the magnitude of the deviation rather than its type. WM capacity was associated with the ability to process incoming phonological information and with semantic integration.

8.
Front Psychol ; 11: 1981, 2020.
Article in English | MEDLINE | ID: mdl-32982836

ABSTRACT

The language environment is important for the development of early communication and language. In the current study, we describe the natural home language environment of 9-month-old infants in Sweden and its concurrent association with language development. Eighty-eight families took part in the study. The home language environment was measured using the Language ENvironment Analysis (LENA) system, and language development was assessed using the Swedish Early Communicative Development Inventory (SECDI), a parent questionnaire. LENA measures showed dramatic variation between individuals but were comparable to, and showed overlapping variance with, previous studies conducted in English-speaking households. Nonetheless, there were significantly more infant vocalizations and conversational turns in the present study than in one previous study. Adult word count correlated significantly and positively with the SECDI section Use of gestures and its subscale Communicative gestures. Together with another four non-significant associations, these formed a consistent overall pattern suggesting a link between infants' language environment and language development. Although the direction of causality cannot be determined from the current data, future studies should examine children longitudinally to assess the directionality or bidirectionality of the reported associations between infants' language environment and language development.

9.
Data Brief ; 29: 105108, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31993467

ABSTRACT

This article provides a description of eye movement data collected during an ocular-motor serial reaction time task. Raw gaze data files for 63 infants and 24 adults, along with the data processing and analysis script for extracting saccade latencies, summarizing participants' performance, and testing statistical differences, are hosted on the Open Science Framework (OSF). The files (in Matlab format) available for download allow for replication of the results reported in "Procedural memory in infancy: Evidence from implicit sequence learning in an eye-tracking paradigm" [1].

10.
J Exp Child Psychol ; 191: 104733, 2020 03.
Article in English | MEDLINE | ID: mdl-31805463

ABSTRACT

Procedural memory underpins the learning of skills and habits. It is often tested in children and adults with sequence learning on the serial reaction time (SRT) task, which involves manual motor control. However, because infants' control of motor actions develops slowly, most procedures that require manual motor control cannot be used in infancy. Here, we investigated procedural memory using an SRT task adapted for infants. During the task, images appeared at one of three locations on a screen, with the location order following a five-item recurring sequence. Three blocks of recurring sequences were followed by a random-order fourth block and finally another block of recurring sequences. Eye movement data were collected for infants (n = 35) and adults (n = 31). Reaction time was indexed by calculating the saccade latencies for orienting to each image as it appeared. The entire protocol took less than 3 min. Sequence learning in the SRT task can be operationalized as an increase in latencies in the random block compared with the preceding and following sequence blocks. This pattern was observed in both the infants and the adults. This study is the first to report learning in an SRT task in infants as young as 9 months. This SRT protocol is a promising procedure for measuring procedural memory in infants.
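The learning measure described above, latencies rising in the random block relative to the flanking sequence blocks, can be sketched as a simple computation. The function name and block numbering are hypothetical, not the authors' analysis code:

```python
import numpy as np

def sequence_learning_index(latencies, blocks, random_block=4):
    """Sequence learning operationalized as the mean saccade latency in the
    random block minus the mean latency in the two adjacent sequence blocks.
    A positive index indicates learning of the recurring sequence."""
    latencies = np.asarray(latencies, dtype=float)
    blocks = np.asarray(blocks)
    random_mean = latencies[blocks == random_block].mean()
    flanking = (blocks == random_block - 1) | (blocks == random_block + 1)
    return random_mean - latencies[flanking].mean()
```

For example, latencies averaging 250 ms in the random block against 200 ms in the surrounding sequence blocks would yield an index of 50 ms.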


Subjects
Child Development/physiology, Memory/physiology, Serial Learning/physiology, Visual Perception/physiology, Adult, Eye-Tracking Technology, Female, Humans, Infant, Male, Young Adult
11.
Eur J Neurosci ; 51(11): 2236-2249, 2020 06.
Article in English | MEDLINE | ID: mdl-31872480

ABSTRACT

Change in linguistic prosody generates a mismatch negativity (MMN) response, indicating neural representation of linguistic prosody, while change in affective prosody generates a positive response (P3a), reflecting its motivational salience. However, the neural response to concurrent affective and linguistic prosody is unknown. The present paper investigates the integration of these two prosodic features in the brain by examining the neural response to separate and concurrent processing with electroencephalography (EEG). A spoken pair of Swedish words, ['fɑ́ːsɛn] phase and ['fɑ̀ːsɛn] damn, that differed in emotional semantics due to linguistic prosody was presented to 16 subjects in an angry and a neutral affective prosody using a passive auditory oddball paradigm. Acoustically matched pseudowords, ['vɑ́ːsɛm] and ['vɑ̀ːsɛm], were used as controls. Following the constructionist concept of emotions, which accentuates the conceptualization of emotions based on language, it was hypothesized that concurrent affective and linguistic prosody with the same valence (angry ['fɑ̀ːsɛn] damn) would elicit a unique late EEG signature, reflecting the temporal integration of affective voice with emotional semantics of prosodic origin. In accordance, linguistic prosody elicited an MMN at 300-350 ms, and affective prosody evoked a P3a at 350-400 ms, irrespective of semantics. Beyond these responses, concurrent affective and linguistic prosody evoked a late positive component (LPC) at 820-870 ms in frontal areas, indicating the conceptualization of affective prosody based on linguistic prosody. This study provides evidence that the brain not only distinguishes between these two functions of prosody but also integrates them based on language and experience.


Subjects
Emotions, Speech Perception, Brain Mapping, Electroencephalography, Humans, Linguistics, Semantics
12.
Front Hum Neurosci ; 13: 374, 2019.
Article in English | MEDLINE | ID: mdl-31695602

ABSTRACT

Sign languages are natural languages in the visual domain. Because they lack a written form, they provide a sharper tool than spoken languages for investigating lexicality effects, which may otherwise be confounded by orthographic processing. In a previous study, we showed that the neural networks supporting phoneme monitoring in deaf British Sign Language (BSL) users are modulated by phonology but not by lexicality or iconicity. In the present study, we investigated whether this pattern generalizes to deaf Swedish Sign Language (SSL) users. BSL and SSL have largely overlapping phoneme inventories but are mutually unintelligible because their lexical overlap is small. This is important because it means that even when signs lexicalized in BSL are unintelligible to users of SSL, they are usually still phonologically acceptable. During fMRI scanning, deaf users of the two sign languages monitored signs that were lexicalized in either one or both of the languages for phonologically contrastive elements. Neural activation patterns relating to different linguistic levels of processing were similar across the two sign languages; in particular, we found no effect of lexicality, supporting the notion that apparent lexicality effects on sublexical processing of speech may be driven by orthographic strategies. As expected, we found an effect of phonology but not of iconicity. Further, there was a difference in neural activation between the two groups in a motion-processing region of the left occipital cortex, possibly driven by cultural differences such as education. Importantly, this difference was not modulated by the linguistic characteristics of the material, underscoring the robustness of the neural activation patterns relating to different linguistic levels of processing.

13.
Front Psychol ; 10: 1536, 2019.
Article in English | MEDLINE | ID: mdl-31333549

ABSTRACT

The large body of research that forms the Ease of Language Understanding (ELU) model emphasizes the important contribution of cognitive processes when listening to speech in adverse conditions; however, speech-in-noise (SIN) processing is yet to be thoroughly tested in populations with cognitive deficits. The purpose of the current study was to contribute to the field in this regard by assessing SIN performance in a sample of adolescents with attention deficit hyperactivity disorder (ADHD) and comparing results with age-matched controls. This population was chosen because core symptoms of ADHD include developmental deficits in cognitive control and working memory capacity, and because these top-down processes are thought to reach maturity during adolescence in individuals with typical development. The study utilized natural language sentence materials under experimental conditions that manipulated the dependency on cognitive mechanisms in varying degrees. In addition, participants were tested on cognitive capacity measures of complex working memory span, selective attention, and lexical access. Primary findings were in support of the ELU model. Age was shown to significantly covary with SIN performance, and after controlling for age, ADHD participants demonstrated greater difficulty than controls with the experimental manipulations. In addition, overall SIN performance was strongly predicted by individual differences in cognitive capacity. Taken together, the results highlight the general disadvantage persons with deficient cognitive capacity have when attending to speech in typically noisy listening environments. Furthermore, the consistently poorer performance observed in the ADHD group suggests that auditory processing tasks designed to tax attention and working memory capacity may prove to be beneficial clinical instruments when diagnosing ADHD.

14.
Front Psychol ; 10: 1149, 2019.
Article in English | MEDLINE | ID: mdl-31191388

ABSTRACT

Adults with poorer peripheral hearing have slower phonological processing speed measured using visual rhyme tasks, and it has been suggested that this is due to fading of phonological representations stored in long-term memory. Representations of both vowels and consonants are likely to be important for determining whether or not two printed words rhyme. However, it is not known whether the relation between phonological processing speed and hearing loss is specific to the lower frequency ranges that characterize vowels or the higher frequency ranges that characterize consonants. We tested the visual rhyme ability of 212 adults with hearing loss. As in previous studies, we found that rhyme judgments were slower and less accurate when there was a mismatch between phonological and orthographic information. A substantial portion of the variance in the speed of making correct rhyme judgment decisions was explained by lexical access speed. Reading span, a measure of working memory, explained further variance in match but not mismatch conditions, but no additional variance was explained by auditory variables. This pattern of findings suggests possible reliance on a lexico-semantic word-matching strategy for solving the rhyme judgment task. Future work should investigate the relation between adoption of a lexico-semantic strategy during phonological processing tasks and hearing aid outcome.

15.
J Speech Lang Hear Res ; 62(4S): 1117-1130, 2019 04 26.
Article in English | MEDLINE | ID: mdl-31026199

ABSTRACT

Purpose: Hearing loss is associated with changes in brain volume in regions supporting auditory and cognitive processing. The purpose of this study was to determine whether there is a systematic association between hearing ability and brain volume in cross-sectional data from a large nonclinical cohort of middle-aged adults available from the UK Biobank Resource (http://www.ukbiobank.ac.uk). Method: We performed a set of regression analyses to determine the association between speech reception threshold in noise (SRTn) and global brain volume as well as predefined regions of interest (ROIs) based on T1-weighted structural images, controlling for hearing-related comorbidities and cognition as well as demographic factors. In a second set of analyses, we additionally controlled for hearing aid (HA) use. We predicted statistically significant associations globally and in ROIs including auditory and cognitive processing regions, possibly modulated by HA use. Results: Whole-brain gray matter volume was significantly lower for individuals with poorer SRTn. Furthermore, the volume of nine predicted ROIs including both auditory and cognitive processing regions was lower for individuals with poorer SRTn. The greatest percentage difference (-0.57%) in ROI volume relating to a 1 SD worsening of SRTn was found in the left superior temporal gyrus. HA use did not substantially modulate the pattern of association between brain volume and SRTn. Conclusions: In a large middle-aged nonclinical population, poorer hearing ability is associated with lower brain volume globally as well as in cortical and subcortical regions involved in auditory and cognitive processing, but there was no conclusive evidence that this effect is moderated by HA use. This pattern of results supports the notion that poor hearing leads to reduced volume in brain regions recruited during speech understanding under challenging conditions. These findings should be tested in future longitudinal, experimental studies.
Supplemental material: https://doi.org/10.23641/asha.7949357
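The "-0.57% per 1 SD worsening of SRTn" style of effect size can be illustrated with a toy regression. This is a sketch only: the function is hypothetical, and the published analysis adjusted for comorbidities, cognition, and demographics, which this omits:

```python
import numpy as np

def percent_volume_change_per_sd(srtn, roi_volume):
    """Express an OLS slope of ROI volume on SRTn as the percentage change
    in mean ROI volume associated with a 1 SD increase (worsening) of SRTn."""
    srtn = np.asarray(srtn, dtype=float)
    roi_volume = np.asarray(roi_volume, dtype=float)
    slope, _ = np.polyfit(srtn, roi_volume, 1)  # volume ~ intercept + slope * srtn
    return 100.0 * slope * srtn.std() / roi_volume.mean()
```

A negative value corresponds to smaller ROI volume for poorer (higher) speech reception thresholds in noise.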


Subjects
Auditory Threshold/physiology, Brain/pathology, Hearing Loss/pathology, Perceptual Masking/physiology, Speech Perception/physiology, Adult, Aged, Cognition, Cross-Sectional Studies, Female, Hearing, Hearing Loss/physiopathology, Humans, Male, Middle Aged, Noise, Organ Size, Regression Analysis, Speech Reception Threshold Test
16.
Int J Audiol ; 58(5): 247-261, 2019 05.
Article in English | MEDLINE | ID: mdl-30714435

ABSTRACT

OBJECTIVE: The current update of the Ease of Language Understanding (ELU) model evaluates the predictive and postdictive aspects of speech understanding and communication. DESIGN: The aspects scrutinised concern: (1) signal distortion and working memory capacity (WMC), (2) WMC and early attention mechanisms, (3) WMC and use of phonological and semantic information, (4) hearing loss, WMC and long-term memory (LTM), (5) WMC and effort, and (6) the ELU model and sign language. STUDY SAMPLE: Relevant literature based on our own and others' data was used. RESULTS: Expectations 1-4 are supported, whereas 5-6 are constrained by conceptual issues and empirical data. Further strands of research were addressed, focussing on WMC and contextual use, and on WMC deployment in relation to hearing status. A wider discussion of task demands, concerning, for example, inference-making and priming, is also introduced and related to the overarching ELU functions of prediction and postdiction. Finally, some new concepts and models that have been inspired by the ELU framework are presented and discussed. CONCLUSIONS: The ELU model has been productive in generating empirical predictions/expectations, the majority of which have been confirmed. Nevertheless, new insights and boundary conditions need to be experimentally tested to further shape the model.


Subjects
Cognition, Hearing Loss/psychology, Memory, Short-Term, Speech Perception, Attention, Humans, Memory, Long-Term
17.
Ear Hear ; 40(5): 1140-1148, 2019.
Article in English | MEDLINE | ID: mdl-30624251

ABSTRACT

OBJECTIVES: The precision of stimulus-driven information is less critical for comprehension when accurate knowledge-based predictions of the upcoming stimulus can be generated. A recent study in listeners without hearing impairment (HI) has shown that form- and meaning-based predictability independently and cumulatively enhance perceived clarity of degraded speech. In the present study, we investigated whether form- and meaning-based predictability enhanced the perceptual clarity of degraded speech for individuals with moderate to severe sensorineural HI, a group for whom such enhancement may be particularly important. DESIGN: Spoken sentences with high or low semantic coherence were degraded by noise-vocoding and preceded by matching or nonmatching text primes. Matching text primes allowed generation of form-based predictions while semantic coherence allowed generation of meaning-based predictions. RESULTS: The results showed that both form- and meaning-based predictions make degraded speech seem clearer to individuals with HI. The benefit of form-based predictions was seen across levels of speech quality and was greater for individuals with HI in the present study than for individuals without HI in our previous study. However, for individuals with HI, the benefit of meaning-based predictions was only apparent when the speech was slightly degraded. When it was more severely degraded, the benefit of meaning-based predictions was only seen when matching text primes preceded the degraded speech. The benefit in terms of perceptual clarity of meaning-based predictions was positively related to verbal fluency but not working memory performance. CONCLUSIONS: Taken together, these results demonstrate that, for individuals with HI, form-based predictability has a robust effect on perceptual clarity that is greater than the effect previously shown for individuals without HI. 
However, when speech quality is moderately or severely degraded, meaning-based predictability is contingent on form-based predictability. Further, the ability to mobilize the lexicon seems to contribute to the strength of meaning-based predictions. Whereas individuals without HI may be able to devote explicit working memory capacity for storing meaning-based predictions, individuals with HI may already be using all available explicit capacity to process the degraded speech and thus become reliant on explicit skills such as their verbal fluency to generate useful meaning-based predictions.
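Noise-vocoding, the degradation method named in the DESIGN section above, replaces the fine structure in each frequency band with noise while preserving the band's amplitude envelope. A minimal FFT-based sketch follows; the band edges, envelope smoothing, and band count are illustrative assumptions, not the study's actual processing chain:

```python
import numpy as np

def noise_vocode(signal, fs, n_bands=6):
    """Minimal noise-vocoder sketch: split the signal into logarithmically
    spaced frequency bands, extract each band's amplitude envelope, and use
    it to modulate band-limited noise. Fewer bands means more degradation."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    edges = np.geomspace(100.0, fs / 2.0, n_bands + 1)  # band edges in Hz
    spectrum = np.fft.rfft(signal)
    rng = np.random.default_rng(0)
    noise_spec = np.fft.rfft(rng.standard_normal(n))
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(np.where(mask, spectrum, 0), n)
        # crude envelope: rectify and smooth with a ~10 ms moving average
        win = max(1, int(0.01 * fs))
        env = np.convolve(np.abs(band), np.ones(win) / win, mode="same")
        carrier = np.fft.irfft(np.where(mask, noise_spec, 0), n)
        out += env * carrier
    return out
```

Varying `n_bands` is one way such studies manipulate how "slightly" or "severely" the speech is degraded.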


Subjects
Hearing Loss, Sensorineural/physiopathology, Speech Perception, Aged, Comprehension, Female, Hearing Aids, Hearing Loss, Sensorineural/rehabilitation, Humans, Male, Memory, Short-Term, Middle Aged, Semantics, Severity of Illness Index
18.
Ear Hear ; 40(2): 272-286, 2019.
Article in English | MEDLINE | ID: mdl-29923867

ABSTRACT

OBJECTIVES: Speech understanding may be cognitively demanding, but it can be enhanced when semantically related text cues precede auditory sentences. The present study aimed to determine whether (a) providing text cues reduces pupil dilation, a measure of cognitive load, during listening to sentences, (b) repeating the sentences aloud affects recall accuracy and pupil dilation during recall of cue words, and (c) semantic relatedness between cues and sentences affects recall accuracy and pupil dilation during recall of cue words. DESIGN: Sentence repetition following text cues and recall of the text cues were tested. Twenty-six participants (mean age, 22 years) with normal hearing listened to masked sentences. On each trial, a set of four-word cues was presented visually as text preceding the auditory presentation of a sentence whose meaning was either related or unrelated to the cues. On each trial, participants first read the cue words, then listened to a sentence. Following this they spoke aloud either the cue words or the sentence, according to instruction, and finally on all trials orally recalled the cues. Peak pupil dilation was measured throughout listening and recall on each trial. Additionally, participants completed a test measuring the ability to perceive degraded verbal text information and three working memory tests (a reading span test, a size-comparison span test, and a test of memory updating). RESULTS: Cue words that were semantically related to the sentence facilitated sentence repetition but did not reduce pupil dilation. Recall was poorer and there were more intrusion errors when the cue words were related to the sentences. Recall was also poorer when sentences were repeated aloud. Both behavioral effects were associated with greater pupil dilation. Larger reading span capacity and smaller size-comparison span were associated with larger peak pupil dilation during listening. 
Furthermore, larger reading span and greater memory updating ability were both associated with better cue recall overall. CONCLUSIONS: Although sentence-related word cues facilitate sentence repetition, our results indicate that they do not reduce cognitive load during listening in noise with a concurrent memory load. As expected, higher working memory capacity was associated with better recall of the cues. Unexpectedly, however, semantic relatedness with the sentence reduced word cue recall accuracy and increased intrusion errors, suggesting an effect of semantic confusion. Further, speaking the sentence aloud also reduced word cue recall accuracy, probably due to articulatory suppression. Importantly, imposing a memory load during listening to sentences resulted in the absence of formerly established strong effects of speech intelligibility on the pupil dilation response. This nullified intelligibility effect demonstrates that the pupil dilation response to a cognitive (memory) task can completely overshadow the effect of perceptual factors on the pupil dilation response. This highlights the importance of taking cognitive task load into account during auditory testing.


Subjects
Cues (Psychology); Mental Recall/physiology; Pupil/physiology; Speech Perception/physiology; Adolescent; Adult; Auditory Perception; Cognition; Female; Humans; Male; Memory; Memory, Short-Term; Semantics; Signal-To-Noise Ratio; Young Adult
19.
Front Psychol ; 9: 1193, 2018.
Article in English | MEDLINE | ID: mdl-30050489

ABSTRACT

In the primary school classroom, children are exposed to multiple factors that combine to create adverse conditions for listening to and understanding what the teacher is saying. Despite the ubiquity of these conditions, there is little knowledge concerning the way in which various factors combine to influence listening comprehension and the effortfulness of listening. The aim of the present study was to investigate the combined effects of background noise, voice quality, and visual cues on children's listening comprehension and effort. To achieve this aim, we performed a set of four well-controlled, yet ecologically valid, experiments with 245 eight-year-old participants. Classroom listening conditions were simulated using a digitally animated talker with a dysphonic (hoarse) voice and background babble noise composed of several children talking. Results show that even low levels of babble noise interfere with listening comprehension, and there was some evidence that this effect was reduced by seeing the talker's face. Dysphonia did not significantly reduce listening comprehension scores, but it was considered unpleasant and made listening seem difficult, probably by reducing motivation to listen. We found some evidence that listening comprehension performance under adverse conditions is positively associated with individual differences in executive function. Overall, these results suggest that multiple factors combine to influence listening comprehension and effort for child listeners in the primary school classroom. The constellation of these room, talker, modality, and listener factors should be taken into account in the planning and design of educational and learning activities.

20.
Front Psychol ; 9: 679, 2018.
Article in English | MEDLINE | ID: mdl-29867655

ABSTRACT

Linguistic manual gestures are the basis of sign languages used by deaf individuals. Working memory and language processing are intimately connected, and thus, when language is gesture-based, it is important to understand the related working memory mechanisms. This article reviews work on working memory for linguistic and non-linguistic manual gestures and discusses theoretical and applied implications. Empirical evidence shows that there are effects of load and stimulus degradation on working memory for manual gestures. These effects are similar to those found for working memory for speech-based language. Further, there are effects of pre-existing linguistic representation that are partially similar across language modalities. Above all, deaf signers score higher than hearing non-signers on an n-back task with sign-based stimuli, irrespective of their semantic and phonological content, but not with non-linguistic manual actions. This pattern may be partially explained by recent findings on cross-modal plasticity in deaf individuals. It suggests that in gesture-based linguistic working memory, semantic aspects may outweigh phonological aspects when processing takes place under challenging conditions. The close association between working memory and language development should be taken into account in understanding and alleviating the challenges faced by deaf children growing up with cochlear implants, as well as by other clinical populations.
