Results 1 - 20 of 129
1.
Int J Lang Commun Disord ; 59(1): 293-303, 2024.
Article in English | MEDLINE | ID: mdl-37589337

ABSTRACT

BACKGROUND: The impact of hearing impairment is typically studied in terms of its effects on speech perception, yet this fails to account for the interactive nature of communication. Recently, there has been a move towards studying the effects of age-related hearing impairment on interaction, often using referential communication tasks; however, little is known about how interaction in these tasks compares to everyday communication. AIMS: To investigate utterances and requests for clarification used in one-to-one conversations between older adults with hearing impairment and younger adults without hearing impairment, and between two younger adults without hearing impairment. METHODS & PROCEDURES: A total of 42 participants were recruited to the study and split into 21 pairs, 10 with two younger adults without hearing impairment and 11 with one younger adult without hearing impairment and one older participant with age-related hearing impairment (hard of hearing). Results from three tasks (spontaneous conversation and two trials of a referential communication task) were compared. A total of 5 min of interaction in each of the three tasks was transcribed, and the frequency of requests for clarification, mean length of utterance and total utterances were calculated for individual participants and pairs. OUTCOMES & RESULTS: When engaging in spontaneous conversation, participants made fewer requests for clarification than in the referential communication task, regardless of hearing status/age (p ≤ 0.012). Participants who were hard of hearing made significantly more requests for clarification than their partners without hearing impairment in only the second trial of the referential communication task (U = 25, p = 0.019). Mean length of utterance was longer in spontaneous conversation than in the referential communication task in the pairs without hearing impairment (p ≤ 0.021), but not in the pairs including a person who was hard of hearing. However, participants who were hard of hearing used significantly longer utterances than their partners without hearing impairment in the spontaneous conversation (U = 8, p < 0.001) but not in the referential communication tasks. CONCLUSIONS & IMPLICATIONS: The findings suggest that patterns of interaction observed in referential communication tasks differ from those observed in spontaneous conversation. The results also suggest that fatigue may be an important consideration when planning studies of interaction that use multiple conditions of a communication task, particularly when participants are older or hard of hearing. WHAT THIS PAPER ADDS: What is already known on this subject Age-related hearing impairment is known to affect communication; however, the majority of studies have focused on its impact on speech perception in controlled conditions. This reveals little about the impact on everyday, interactive communication. What this study adds to the existing knowledge We investigated utterance length and requests for clarification in one-to-one conversations between pairs consisting of one older adult who is hard of hearing and one younger adult without hearing impairment, or two younger adults without hearing impairment. Results from three tasks (two trials of a referential communication task and spontaneous conversation) were compared. The findings demonstrated a significant effect of task type on requests for clarification in both groups.
Furthermore, in spontaneous conversation, older adults who were hard of hearing used significantly longer utterances than their partners without hearing impairment. This pattern was not observed in the referential communication task. What are the potential or actual clinical implications of this work? These findings have important implications for generalizing results from controlled communication tasks to more everyday conversation. Specifically, they suggest that the previously observed strategy of monopolizing conversation, possibly as an attempt to control it, may be more frequently used by older adults who are hard of hearing in natural conversation than in a more contrived communication task.
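The between-partner comparisons reported above (e.g., U = 25, p = 0.019) are Mann-Whitney U tests. As a minimal sketch of that kind of analysis, assuming invented per-participant counts of clarification requests (the study's raw data are not reproduced here):

```python
# Illustrative Mann-Whitney U comparison of clarification requests
# between partners. All counts below are invented for demonstration.
from scipy.stats import mannwhitneyu

# Hypothetical per-participant counts in trial 2 of the
# referential communication task (11 pairs).
hard_of_hearing = [6, 9, 7, 8, 5, 10, 7, 6, 9, 8, 7]
normal_hearing = [3, 4, 2, 5, 3, 4, 2, 3, 4, 3, 5]

u_stat, p_value = mannwhitneyu(hard_of_hearing, normal_hearing,
                               alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")
```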


Subject(s)
Hearing Loss , Speech Perception , Humans , Aged , Communication
2.
Int J Audiol ; 62(2): 101-109, 2023 02.
Article in English | MEDLINE | ID: mdl-35306958

ABSTRACT

OBJECTIVE: Using data from the n200 study, we aimed to investigate the relationship between behavioural (the Swedish HINT and Hagerman speech-in-noise tests) and self-report (Speech, Spatial and Qualities of Hearing Questionnaire (SSQ)) measures of listening under adverse conditions. DESIGN: The Swedish HINT was masked with a speech-shaped noise (SSN), the Hagerman was masked with an SSN and a four-talker babble, and the subscales from the SSQ were used as a self-report measure. The HINT and Hagerman were administered through an experimental hearing aid. STUDY SAMPLE: This study included 191 hearing aid users with hearing loss (mean PTA4 = 37.6, SD = 10.8) and 195 normally hearing adults (mean PTA4 = 10.0, SD = 6.0). RESULTS: The present study found correlations between behavioural measures of speech-in-noise and self-report scores on the SSQ in normally hearing individuals, but not in hearing aid users. CONCLUSION: The present study may help identify relationships between clinically used behavioural measures and a self-report measure of speech recognition. The results suggest that the use of a self-report measure as a complement to behavioural speech-in-noise tests might help to further our understanding of how self-report and behavioural results can be generalised to everyday functioning.
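A minimal sketch of the group-wise correlation pattern described above, using simulated scores; the variable names, effect sizes, and data are illustrative assumptions, not the study's data:

```python
# Group-wise Pearson correlations between a behavioural speech-in-noise
# measure and a self-report (SSQ-like) score. Data are simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 190
# Simulated speech reception thresholds (dB SNR; lower is better)
# and self-report scores (0-10 scale) for two groups.
srt_nh = rng.normal(-4.0, 1.5, n)
ssq_nh = 7.0 - 0.4 * srt_nh + rng.normal(0, 1.0, n)  # correlated
srt_ha = rng.normal(0.0, 2.0, n)
ssq_ha = rng.normal(5.5, 1.2, n)                     # uncorrelated

for label, srt, ssq in [("normally hearing", srt_nh, ssq_nh),
                        ("hearing aid users", srt_ha, ssq_ha)]:
    r, p = pearsonr(srt, ssq)
    print(f"{label}: r = {r:.2f}, p = {p:.3f}")
```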


Subject(s)
Hearing Aids , Speech Perception , Adult , Humans , Self Report , Speech , Noise/adverse effects , Hearing
3.
Ear Hear ; 43(5): 1437-1446, 2022.
Article in English | MEDLINE | ID: mdl-34983896

ABSTRACT

OBJECTIVES: Previous research suggests that there is a robust relationship between cognitive functioning and speech-in-noise performance for older adults with age-related hearing loss. For normal-hearing adults, on the other hand, the research is not entirely clear. Therefore, the current study aimed to examine the relationship between cognitive functioning, aging, and speech-in-noise performance in a group of older normal-hearing persons and older persons with hearing loss who wear hearing aids. DESIGN: We analyzed data from 199 older normal-hearing individuals (mean age = 61.2) and 200 older individuals with hearing loss (mean age = 60.9) using multigroup structural equation modeling. Four cognitively related tasks were used to create a cognitive functioning construct: the reading span task, a visuospatial working memory task, the semantic word-pairs task, and Raven's progressive matrices. Speech-in-noise, on the other hand, was measured using Hagerman sentences. The Hagerman sentences were presented via an experimental hearing aid to both the normal-hearing and the hearing-impaired groups. Furthermore, the sentences were presented with one of two background noise conditions: the Hagerman original speech-shaped noise or four-talker babble. Each noise condition was also presented with three different hearing aid processing settings: linear processing, fast compression, and noise reduction. RESULTS: Cognitive functioning was significantly related to speech-in-noise identification. Moreover, aging had a significant effect on both speech-in-noise and cognitive functioning. With regression weights constrained to be equal for the two groups, the final model had the best fit to the data. Importantly, the results showed that the relationship between cognitive functioning and speech-in-noise was not different for the two groups. Furthermore, the same pattern was evident for aging: the effects of aging on cognitive functioning and of aging on speech-in-noise were not different between groups. CONCLUSION: Our findings revealed similar cognitive functioning and aging effects on speech-in-noise performance in older normal-hearing and aided hearing-impaired listeners. In conclusion, the findings support the Ease of Language Understanding model, as cognitive processes play a critical role in speech-in-noise performance independently of the hearing status of elderly individuals.
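For readers unfamiliar with this kind of latent-variable model, here is a minimal single-group sketch in lavaan-style syntax using the semopy package; the column names and data file are hypothetical, and the paper's multigroup model with equality-constrained regression weights goes beyond this illustration:

```python
# Single-group sketch of a latent cognitive construct predicting
# speech-in-noise identification, with age affecting both.
# Assumptions: semopy is installed, and "n200_scores.csv" (hypothetical)
# contains one column per observed variable named below.
import pandas as pd
from semopy import Model

desc = """
cognition =~ reading_span + visuospatial_wm + semantic_pairs + raven
cognition ~ age
speech_in_noise ~ cognition + age
"""

df = pd.read_csv("n200_scores.csv")  # hypothetical data file
model = Model(desc)
model.fit(df)
print(model.inspect())  # parameter estimates and p-values
```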


Subject(s)
Deafness , Presbycusis , Speech Perception , Aged , Aged, 80 and over , Cognition , Humans , Latent Class Analysis , Middle Aged , Speech
4.
Int J Audiol ; 61(6): 473-481, 2022 06.
Article in English | MEDLINE | ID: mdl-31613169

ABSTRACT

Retraction statement: We, the Editor and Publisher of the International Journal of Audiology, have retracted the following article: Rachel J. Ellis and Jerker Rönnberg. 2019. "Temporal fine structure: relations to cognition and aided speech recognition." International Journal of Audiology. doi:10.1080/14992027.2019.1672899. The authors of the above-mentioned article published in the International Journal of Audiology have identified errors in the reported analysis (relating to the inclusion of data that should have been excluded) which impact the validity of the findings. The authors have, therefore, requested that the article be retracted. We have been informed in our decision-making by our policy on publishing ethics and integrity and the COPE guidelines on retractions. The retracted article will remain online to maintain the scholarly record, but it will be digitally watermarked on each page as "Retracted".

5.
Int J Audiol ; 61(9): 778-786, 2022 09.
Article in English | MEDLINE | ID: mdl-34292115

ABSTRACT

OBJECTIVES: To investigate associations between sensitivity to temporal fine structure (TFS) and performance in cognitive and speech-in-noise recognition tests. DESIGN: A binaural test of TFS sensitivity (the TFS-LF) was used. Measures of cognition included the reading span, Raven's, and text-reception threshold tests. Measures of speech recognition included the Hearing in Noise Test (HINT) and the Hagerman matrix sentence test in three signal processing conditions. STUDY SAMPLE: Analyses are based on the performance of 324/317 adults with and without hearing impairment. RESULTS: Sensitivity to TFS was significantly correlated with both the reading span test and the recognition of speech-in-noise processed using noise reduction, the latter only when limited to participants with hearing impairment. Neither association was significant when the effects of age were partialled out. CONCLUSIONS: The findings are consistent with previous research in finding no evidence of a link between sensitivity to TFS and working memory once the effects of age had been partialled out. The results provide some evidence of an influence of signal processing strategy on the association between TFS sensitivity and speech-in-noise recognition. However, further research is necessary to assess the generalisability of the findings before any claims can be made regarding their clinical implications.
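Partialling out age, as in the analysis above, amounts to correlating the residuals of both variables after regressing each on age. A minimal sketch with simulated data; variable names and effect sizes are illustrative assumptions:

```python
# Age-partialled correlation: residualise both variables on age,
# then correlate the residuals. Data are simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 320
age = rng.uniform(20, 80, n)
tfs_sensitivity = 2.0 - 0.02 * age + rng.normal(0, 0.3, n)
reading_span = 30.0 - 0.15 * age + rng.normal(0, 3.0, n)

def residualise(y, x):
    """Residuals of y after a simple linear regression on x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

r_raw, p_raw = pearsonr(tfs_sensitivity, reading_span)
r_part, p_part = pearsonr(residualise(tfs_sensitivity, age),
                          residualise(reading_span, age))
print(f"raw r = {r_raw:.2f} (p = {p_raw:.3f}); "
      f"age-partialled r = {r_part:.2f} (p = {p_part:.3f})")
```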


Subject(s)
Hearing Loss , Speech Perception , Adult , Cognition , Hearing , Humans , Speech
6.
Ear Hear ; 42(6): 1668-1679, 2021.
Article in English | MEDLINE | ID: mdl-33859121

ABSTRACT

OBJECTIVES: Communication requires cognitive processes which are not captured by traditional speech understanding tests. Under challenging listening situations, more working memory resources are needed to process speech, leaving fewer resources available for storage. The aim of the current study was to investigate the effect of task difficulty predictability, that is, knowing versus not knowing task difficulty in advance, and the effect of noise reduction on working memory resource allocation to processing and storage of speech heard in background noise. For this purpose, an "offline" behavioral measure, the Sentence-Final Word Identification and Recall (SWIR) test, and an "online" physiological measure, pupillometry, were combined. Moreover, the outcomes of the two measures were compared to investigate whether they reflect the same processes related to resource allocation. DESIGN: Twenty-four experienced hearing aid users with moderate to moderately severe hearing loss participated in this study. The SWIR test and pupillometry were measured simultaneously with noise reduction in the test hearing aids activated and deactivated in a background noise composed of four-talker babble. The task of the SWIR test is to listen to lists of sentences, repeat the last word immediately after each sentence and recall the repeated words when the list is finished. The sentence baseline dilation, which is defined as the mean pupil dilation before each sentence, and task-evoked peak pupil dilation (PPD) were analyzed over the course of the lists. The task difficulty predictability was manipulated by including lists of three, five, and seven sentences. The test was conducted over two sessions, one during which the participants were informed about list length before each list (predictable task difficulty) and one during which they were not (unpredictable task difficulty). RESULTS: The sentence baseline dilation was higher when task difficulty was unpredictable compared to predictable, except at the start of the list, where there was no difference. The PPD tended to be higher at the beginning of the list, this pattern being more prominent when task difficulty was unpredictable. Recall performance was better and sentence baseline dilation was higher when noise reduction was on, especially toward the end of longer lists. There was no effect of noise reduction on PPD. CONCLUSIONS: Task difficulty predictability did not have an effect on resource allocation, since recall performance was similar independently of whether task difficulty was predictable or unpredictable. The higher sentence baseline dilation when task difficulty was unpredictable likely reflected a difference in the recall strategy or higher degree of task engagement/alertness or arousal. Hence, pupillometry captured processes which the SWIR test does not capture. Noise reduction frees up resources to be used for storage of speech, which was reflected in the better recall performance and larger sentence baseline dilation toward the end of the list when noise reduction was on. Thus, both measures captured different temporal aspects of the same processes related to resource allocation with noise reduction on and off.
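A minimal sketch of the two pupillometry measures defined above, computed from a simulated pupil trace; the sampling rate, window choices, and amplitude scale are assumptions for illustration:

```python
# Sentence baseline dilation (mean pupil size in a window before
# sentence onset) and task-evoked peak pupil dilation (PPD, maximum
# post-onset dilation relative to baseline). Trace is simulated.
import numpy as np

fs = 60  # sampling rate in Hz (assumed)
t = np.arange(-1.0, 4.0, 1 / fs)  # time relative to sentence onset (s)
rng = np.random.default_rng(2)
# Simulated trace: slow task-evoked dilation peaking around 1.5 s.
trace = 0.25 * np.exp(-((t - 1.5) ** 2) / 0.8) + rng.normal(0, 0.01, len(t))

baseline_window = (t >= -1.0) & (t < 0.0)  # 1 s before sentence onset
baseline = trace[baseline_window].mean()

post_onset = trace[t >= 0.0]
ppd = post_onset.max() - baseline  # peak pupil dilation

print(f"sentence baseline = {baseline:.3f}, PPD = {ppd:.3f} (arbitrary units)")
```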


Subject(s)
Hearing Aids , Speech Perception , Humans , Noise , Pupil/physiology
7.
Int J Audiol ; 59(3): 208-218, 2020 03.
Article in English | MEDLINE | ID: mdl-31809220

ABSTRACT

OBJECTIVE: The aim of this study was to examine how background noise and hearing aid experience affect the robust relationship between working memory and speech recognition. DESIGN: Matrix sentences were used to measure speech recognition in noise. Three measures of working memory were administered. STUDY SAMPLE: 148 participants with at least 2 years of hearing aid experience. RESULTS: A stronger overall correlation between working memory and speech recognition performance was found in a four-talker babble than in a stationary noise background. This correlation was significantly weaker in participants with the most hearing aid experience than in those with the least experience when the background noise was stationary. In the four-talker babble, however, no significant difference was found between the strength of correlations for users with different experience. CONCLUSION: In general, more explicit processing of working memory is invoked when listening in a multi-talker babble. The matching processes (cf. the Ease of Language Understanding model, ELU) were more efficient for experienced than for less experienced users when perceiving speech. This study extends the existing ELU model by suggesting that mismatch may also lead to the establishment of new phonological representations in long-term memory.
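Comparing the strength of correlations between independent groups, as above, is commonly done with a Fisher r-to-z test. A sketch with invented r values and group sizes; the study's actual statistics are not reproduced:

```python
# Fisher r-to-z test for a difference between two independent
# correlations (e.g., least- vs most-experienced hearing aid users).
import numpy as np
from scipy.stats import norm

def fisher_z_diff(r1, n1, r2, n2):
    """Two-sided test for a difference between independent correlations."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (z1 - z2) / se
    return z, 2 * norm.sf(abs(z))

# Invented values for illustration only.
z, p = fisher_z_diff(r1=0.55, n1=49, r2=0.20, n2=49)
print(f"z = {z:.2f}, p = {p:.3f}")
```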


Subject(s)
Auditory Threshold , Hearing Aids/psychology , Hearing Loss, Sensorineural/psychology , Memory, Short-Term , Speech Perception , Aged , Female , Hearing Loss, Sensorineural/rehabilitation , Humans , Male , Middle Aged , Noise , Perceptual Masking , Regression Analysis , Speech Reception Threshold Test , Time Factors
8.
Int J Audiol ; 59(10): 792-800, 2020 10.
Article in English | MEDLINE | ID: mdl-32564633

ABSTRACT

OBJECTIVE: In the present study, we investigated whether varying the task difficulty of the Sentence-Final Word Identification and Recall (SWIR) Test has an effect on the benefit of noise reduction, as well as whether task difficulty predictability affects recall. The relationship between working memory and recall was examined. DESIGN: Task difficulty was manipulated by varying the list length with noise reduction on and off in competing speech and speech-shaped noise. Half of the participants were informed about list length in advance. Working memory capacity was measured using the Reading Span. STUDY SAMPLE: Thirty-two experienced hearing aid users with moderate sensorineural hearing loss. RESULTS: Task difficulty did not affect the noise reduction benefit and task difficulty predictability did not affect recall. Participants may have employed a different recall strategy when task difficulty was unpredictable and noise reduction off. Reading Span scores positively correlated with the SWIR test. Noise reduction improved recall in competing speech. CONCLUSIONS: The SWIR test with varying list length is suitable for detecting the benefit of noise reduction. The correlation with working memory suggests that the SWIR test could be modified to be adaptive to individual cognitive capacity. The results on noise and noise reduction replicate previous findings.


Subject(s)
Hearing Aids , Hearing Loss, Sensorineural , Speech Perception , Hearing Loss, Sensorineural/diagnosis , Humans , Memory, Short-Term , Mental Recall , Noise/adverse effects
9.
Ear Hear ; 40(5): 1210-1219, 2019.
Article in English | MEDLINE | ID: mdl-30807540

ABSTRACT

OBJECTIVES: Previous studies strongly suggest that declines in auditory threshold can lead to impaired cognition. The aim of this study was to expand that picture by investigating how the relationships between age, auditory function, and cognitive function vary with the types of auditory and cognitive function considered. DESIGN: Three auditory constructs (threshold, temporal-order identification, and gap detection) were modeled to have an effect on four cognitive constructs (episodic long-term memory, semantic long-term memory, working memory, and cognitive processing speed), together with age, which could have an effect on both cognitive and auditory constructs. The model was evaluated with structural equation modeling of data from 213 adults ranging in age from 18 to 86 years. RESULTS: The model provided a good fit to the data. Regarding the auditory measures, temporal-order identification had the strongest effect on the cognitive functions, followed by weaker indirect effects for gap detection and nonsignificant effects for threshold. Regarding the cognitive measures, the association with audition was strongest for semantic long-term memory and working memory but weaker for episodic long-term memory and cognitive speed. Age had a very strong effect on threshold and cognitive speed, a moderate effect on temporal-order identification, episodic long-term memory, and working memory, a weak effect on gap detection, and a nonsignificant, close-to-zero effect on semantic long-term memory. CONCLUSIONS: The results show that auditory temporal-order function has the strongest effect on cognition, which has implications both for which auditory concepts to include in cognitive hearing science experiments and for practitioners. The fact that the total effect of age differed across aspects of cognition and was partly mediated via auditory concepts is also discussed.


Subject(s)
Auditory Perception/physiology , Cognition/physiology , Memory, Episodic , Memory, Long-Term/physiology , Memory, Short-Term/physiology , Adolescent , Adult , Age Factors , Aged , Aged, 80 and over , Auditory Threshold , Female , Humans , Male , Middle Aged , Young Adult
10.
Ear Hear ; 40(2): 272-286, 2019.
Article in English | MEDLINE | ID: mdl-29923867

ABSTRACT

OBJECTIVES: Speech understanding may be cognitively demanding, but it can be enhanced when semantically related text cues precede auditory sentences. The present study aimed to determine whether (a) providing text cues reduces pupil dilation, a measure of cognitive load, during listening to sentences, (b) repeating the sentences aloud affects recall accuracy and pupil dilation during recall of cue words, and (c) semantic relatedness between cues and sentences affects recall accuracy and pupil dilation during recall of cue words. DESIGN: Sentence repetition following text cues and recall of the text cues were tested. Twenty-six participants (mean age, 22 years) with normal hearing listened to masked sentences. On each trial, a set of four-word cues was presented visually as text preceding the auditory presentation of a sentence whose meaning was either related or unrelated to the cues. On each trial, participants first read the cue words, then listened to a sentence. Following this they spoke aloud either the cue words or the sentence, according to instruction, and finally on all trials orally recalled the cues. Peak pupil dilation was measured throughout listening and recall on each trial. Additionally, participants completed a test measuring the ability to perceive degraded verbal text information and three working memory tests (a reading span test, a size-comparison span test, and a test of memory updating). RESULTS: Cue words that were semantically related to the sentence facilitated sentence repetition but did not reduce pupil dilation. Recall was poorer and there were more intrusion errors when the cue words were related to the sentences. Recall was also poorer when sentences were repeated aloud. Both behavioral effects were associated with greater pupil dilation. Larger reading span capacity and smaller size-comparison span were associated with larger peak pupil dilation during listening. Furthermore, larger reading span and greater memory updating ability were both associated with better cue recall overall. CONCLUSIONS: Although sentence-related word cues facilitate sentence repetition, our results indicate that they do not reduce cognitive load during listening in noise with a concurrent memory load. As expected, higher working memory capacity was associated with better recall of the cues. Unexpectedly, however, semantic relatedness with the sentence reduced word cue recall accuracy and increased intrusion errors, suggesting an effect of semantic confusion. Further, speaking the sentence aloud also reduced word cue recall accuracy, probably due to articulatory suppression. Importantly, imposing a memory load during listening to sentences resulted in the absence of formerly established strong effects of speech intelligibility on the pupil dilation response. This nullified intelligibility effect demonstrates that the pupil dilation response to a cognitive (memory) task can completely overshadow the effect of perceptual factors on the pupil dilation response. This highlights the importance of taking cognitive task load into account during auditory testing.


Subject(s)
Cues , Mental Recall/physiology , Pupil/physiology , Speech Perception/physiology , Adolescent , Adult , Auditory Perception , Cognition , Female , Humans , Male , Memory , Memory, Short-Term , Semantics , Signal-To-Noise Ratio , Young Adult
11.
Ear Hear ; 40(2): 312-327, 2019.
Article in English | MEDLINE | ID: mdl-29870521

ABSTRACT

OBJECTIVE: We have previously shown that the gain provided by prior audiovisual (AV) speech exposure for subsequent auditory (A) sentence identification in noise is relatively larger than that provided by prior A speech exposure. We have called this effect "perceptual doping." Specifically, prior AV speech processing dopes (recalibrates) the phonological and lexical maps in the mental lexicon, which facilitates subsequent phonological and lexical access in the A modality, separately from other learning and priming effects. In this article, we use data from the n200 study and aim to replicate and extend the perceptual doping effect using two different A and two different AV speech tasks and a larger sample than in our previous studies. DESIGN: The participants were 200 hearing aid users with bilateral, symmetrical, mild-to-severe sensorineural hearing loss. There were four speech tasks in the n200 study that were presented in both A and AV modalities (gated consonants, gated vowels, vowel duration discrimination, and sentence identification in noise tasks). The modality order of speech presentation was counterbalanced across participants: half of the participants completed the A modality first and the AV modality second (A1-AV2), and the other half completed the AV modality and then the A modality (AV1-A2). Based on the perceptual doping hypothesis, which assumes that the gain of prior AV exposure will be larger than that of prior A exposure for subsequent processing of speech stimuli, we predicted that the mean A scores in the AV1-A2 modality order would be better than the mean A scores in the A1-AV2 modality order. We therefore expected a significant difference in terms of the identification of A speech stimuli between the two modality orders (A1 versus A2). As prior A exposure provides a smaller gain than AV exposure, we also predicted that the difference in AV speech scores between the two modality orders (AV1 versus AV2) may not be statistically significant. RESULTS: In the gated consonant and vowel tasks and the vowel duration discrimination task, there were significant differences in the A performance of speech stimuli between the two modality orders. The participants' mean A performance was better in the AV1-A2 than in the A1-AV2 modality order (i.e., after AV processing). In terms of mean AV performance, no significant difference was observed between the two orders. In the sentence identification in noise task, a significant difference in the A identification of speech stimuli between the two orders was observed (A1 versus A2). In addition, a significant difference in the AV identification of speech stimuli between the two orders was also observed (AV1 versus AV2). This finding was most likely due to a procedural learning effect arising from the greater complexity of the sentence materials, or to a combination of procedural learning and perceptual learning due to the presentation of sentential materials in noisy conditions. CONCLUSIONS: The findings of the present study support the perceptual doping hypothesis, as prior AV relative to A speech exposure resulted in a larger gain for the subsequent processing of speech stimuli. For complex speech stimuli presented in degraded listening conditions, a procedural learning effect (or a combination of procedural learning and perceptual learning effects) also facilitated the identification of speech stimuli, irrespective of whether the prior modality was A or AV.


Subject(s)
Hearing Loss, Sensorineural/physiopathology , Speech Perception/physiology , Visual Perception/physiology , Adult , Aged , Aged, 80 and over , Female , Hearing Aids , Hearing Loss, Sensorineural/rehabilitation , Humans , Male , Middle Aged , Severity of Illness Index
12.
Cereb Cortex ; 28(10): 3540-3554, 2018 10 01.
Article in English | MEDLINE | ID: mdl-28968707

ABSTRACT

Early deafness results in crossmodal reorganization of the superior temporal cortex (STC). Here, we investigated the effect of deafness on cognitive processing. Specifically, we studied the reorganization, due to deafness and sign language (SL) knowledge, of linguistic and nonlinguistic visual working memory (WM). We conducted an fMRI experiment in groups that differed in their hearing status and SL knowledge: deaf native signers, hearing native signers, and hearing nonsigners. Participants performed a 2-back WM task and a control task. Stimuli were signs from British Sign Language (BSL) or moving nonsense objects in the form of point-light displays. We found characteristic WM activations in fronto-parietal regions in all groups. However, deaf participants also recruited bilateral posterior STC during the WM task, independently of the linguistic content of the stimuli, and showed less activation in fronto-parietal regions. Resting-state connectivity analysis showed increased connectivity between frontal regions and STC in deaf compared to hearing individuals. WM for signs did not elicit differential activations, suggesting that SL WM does not rely on modality-specific linguistic processing. These findings suggest that WM networks are reorganized due to early deafness, and that the organization of cognitive networks is shaped by the nature of the sensory inputs available during development.
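The 2-back logic of the WM task described above is simple to state in code: a stimulus is a target when it matches the item presented two positions earlier. A minimal sketch with an invented stimulus sequence:

```python
# Mark each stimulus in a sequence as a 2-back target or not.
from typing import Sequence

def two_back_targets(stimuli: Sequence[str]) -> list[bool]:
    """A stimulus is a target if it equals the item two positions back."""
    return [i >= 2 and stimuli[i] == stimuli[i - 2]
            for i in range(len(stimuli))]

seq = ["A", "B", "A", "C", "D", "C", "B"]
print(two_back_targets(seq))
# [False, False, True, False, False, True, False]
```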


Subject(s)
Deafness/physiopathology , Hearing/physiology , Memory, Short-Term/physiology , Nerve Net/physiopathology , Adult , Deafness/diagnostic imaging , Female , Humans , Language Development , Magnetic Resonance Imaging , Male , Middle Aged , Nerve Net/diagnostic imaging , Neuronal Plasticity/physiology , Psycholinguistics , Reaction Time/physiology , Sign Language , Young Adult
13.
Int J Audiol ; 58(5): 247-261, 2019 05.
Article in English | MEDLINE | ID: mdl-30714435

ABSTRACT

OBJECTIVE: The current update of the Ease of Language Understanding (ELU) model evaluates the predictive and postdictive aspects of speech understanding and communication. DESIGN: The aspects scrutinised concern: (1) signal distortion and working memory capacity (WMC), (2) WMC and early attention mechanisms, (3) WMC and the use of phonological and semantic information, (4) hearing loss, WMC and long-term memory (LTM), (5) WMC and effort, and (6) the ELU model and sign language. STUDY SAMPLE: Relevant literature based on our own and others' data was used. RESULTS: Expectations 1-4 are supported, whereas 5-6 are constrained by conceptual issues and empirical data. Further strands of research were addressed, focussing on WMC and contextual use, and on WMC deployment in relation to hearing status. A wider discussion of task demands, concerning, for example, inference-making and priming, is also introduced and related to the overarching ELU functions of prediction and postdiction. Finally, some new concepts and models that have been inspired by the ELU framework are presented and discussed. CONCLUSIONS: The ELU model has been productive in generating empirical predictions/expectations, the majority of which have been confirmed. Nevertheless, new insights and boundary conditions need to be experimentally tested to further shape the model.


Subject(s)
Cognition , Hearing Loss/psychology , Memory, Short-Term , Speech Perception , Attention , Humans , Memory, Long-Term
14.
Neural Plast ; 2018: 2576047, 2018.
Article in English | MEDLINE | ID: mdl-30662455

ABSTRACT

Congenital deafness is often compensated for by early sign language use, leading to typical language development with corresponding neural underpinnings. However, deaf individuals are frequently reported to have poorer numerical abilities than hearing individuals, and it is not known whether the underlying neuronal networks differ between groups. In the present study, adult deaf signers and hearing nonsigners performed digit and letter order tasks during functional magnetic resonance imaging. We found the neuronal networks recruited in the two tasks to be generally similar across groups, with significant activation in the dorsal visual stream for the letter order task, suggesting letter identification and position encoding. For the digit order task, no significant activation was found for either of the two groups. Region of interest analyses on parietal numerical processing regions revealed different patterns of activation across groups. Importantly, deaf signers showed significant activation in the right horizontal portion of the intraparietal sulcus for the digit order task, suggesting engagement of magnitude manipulation during numerical order processing in this group.


Subject(s)
Brain/diagnostic imaging , Deafness/diagnostic imaging , Nerve Net/diagnostic imaging , Adult , Brain/physiopathology , Deafness/congenital , Deafness/physiopathology , Female , Functional Laterality/physiology , Humans , Magnetic Resonance Imaging , Male , Nerve Net/physiopathology , Sign Language , Young Adult
15.
J Cogn Neurosci ; 28(1): 20-40, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26351993

ABSTRACT

The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine if brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns being associated with longer RTs and stronger activations in an action observation network in all participants and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.


Subject(s)
Brain Mapping , Cerebral Cortex/physiopathology , Perception/physiology , Phonetics , Adult , Analysis of Variance , Cerebral Cortex/blood supply , Cues , Deafness/pathology , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Middle Aged , Oxygen/blood , Photic Stimulation , Psychoacoustics , Reaction Time/physiology , Semantics
16.
Neuroimage ; 124(Pt A): 96-106, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26348556

ABSTRACT

Sensory cortices undergo crossmodal reorganisation as a consequence of sensory deprivation. Congenital deafness in humans represents a particular case with respect to other types of sensory deprivation, because cortical reorganisation is not only a consequence of auditory deprivation, but also of language-driven mechanisms. Visual crossmodal plasticity has been found in secondary auditory cortices of deaf individuals, but it is still unclear if reorganisation also takes place in primary auditory areas, and how this relates to language modality and auditory deprivation. Here, we dissociated the effects of language modality and auditory deprivation on crossmodal plasticity in Heschl's gyrus as a whole, and in cytoarchitectonic region Te1.0 (likely to contain the core auditory cortex). Using fMRI, we measured the BOLD response to viewing sign language in congenitally or early deaf individuals with and without sign language knowledge, and in hearing controls. Results show that differences between hearing and deaf individuals are due to a reduction in activation caused by visual stimulation in the hearing group, which is more significant in Te1.0 than in Heschl's gyrus as a whole. Furthermore, differences between deaf and hearing groups are due to auditory deprivation, and there is no evidence that the modality of language used by deaf individuals contributes to crossmodal plasticity in Heschl's gyrus.


Subject(s)
Auditory Cortex/physiopathology , Deafness/physiopathology , Neuronal Plasticity , Sign Language , Adult , Brain Mapping , Echo-Planar Imaging , Female , Humans , Linguistics , Male , Middle Aged
17.
Ear Hear ; 37(5): 620-2, 2016.
Article in English | MEDLINE | ID: mdl-27232076

ABSTRACT

Experimental work has shown better visuospatial working memory (VSWM) in profoundly deaf individuals compared to those with normal hearing. Other data, including those from the UK Biobank resource, show poorer VSWM in individuals with poorer hearing. Using the same database, the authors investigated VSWM in individuals who reported profound deafness. Included in this study were 112 participants who were profoundly deaf, 1310 with poor hearing and 74,635 with normal hearing. All participants performed a card-pair matching task as a test of VSWM. Although variance in VSWM performance was large among profoundly deaf participants, at group level it was superior to that of participants with both normal and poor hearing. VSWM in adults is related to hearing status, but the association is not linear. Future studies should investigate the mechanism behind enhanced VSWM in profoundly deaf adults.


Subject(s)
Deafness/psychology , Memory, Short-Term , Spatial Processing , Adult , Aged , Case-Control Studies , Female , Hearing Loss/psychology , Humans , Male , Middle Aged , Severity of Illness Index , United Kingdom
18.
Ear Hear ; 37(1): e26-36, 2016.
Article in English | MEDLINE | ID: mdl-26244401

ABSTRACT

OBJECTIVES: Verbal reasoning performance is an indicator of the ability to think constructively in everyday life and relies on both crystallized and fluid intelligence. This study aimed to determine the effect of functional hearing on verbal reasoning when controlling for age, gender, and education. In addition, the study investigated whether hearing aid usage mitigated the effect and examined different routes from hearing to verbal reasoning. DESIGN: Cross-sectional data on 40- to 70-year-old community-dwelling participants from the UK Biobank resource were accessed. Data consisted of behavioral and subjective measures of functional hearing, assessments of numerical and linguistic verbal reasoning, measures of executive function, and demographic and lifestyle information. Data on 119,093 participants who had completed hearing and verbal reasoning tests were submitted to multiple regression analyses, and data on 61,688 of these participants, who had completed additional cognitive tests and provided relevant lifestyle information, were submitted to structural equation modeling. RESULTS: Poorer performance on the behavioral measure of functional hearing was significantly associated with poorer verbal reasoning in both the numerical and linguistic domains (p < 0.001). There was no association between the subjective measure of functional hearing and verbal reasoning. Functional hearing significantly interacted with education (p < 0.002), showing a trend for functional hearing to have a greater impact on verbal reasoning among those with a higher level of formal education. Among those with poor hearing, hearing aid usage had a significant positive, but not necessarily causal, effect on both numerical and linguistic verbal reasoning (p < 0.005). The estimated effect of hearing aid usage was less than the effect of poor functional hearing. Structural equation modeling analyses confirmed that controlling for education reduced the effect of functional hearing on verbal reasoning and showed that controlling for executive function eliminated the effect. However, when computer usage was controlled for, the eliminating effect of executive function was weakened. CONCLUSIONS: Poor functional hearing was associated with poor verbal reasoning in a 40- to 70-year-old community-dwelling population after controlling for age, gender, and education. The effect of functional hearing on verbal reasoning was significantly reduced among hearing aid users and completely overcome by good executive function skills, which may be enhanced by playing computer games.
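A minimal sketch of the kind of covariate-adjusted regression described above, using statsmodels' formula API; the data file and column names are hypothetical, and this illustrates the general modeling approach rather than the study's exact specification:

```python
# Verbal reasoning regressed on functional hearing, controlling for
# age, sex, and education, with a hearing x education interaction
# (cf. the reported interaction with education).
# "biobank_subset.csv" and all column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("biobank_subset.csv")  # hypothetical extract

model = smf.ols(
    "verbal_reasoning ~ functional_hearing * education + age + C(sex)",
    data=df,
).fit()
print(model.summary())
```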


Subject(s)
Cognition , Executive Function , Hearing Aids , Hearing Loss/psychology , Intelligence , Adult , Age Factors , Aged , Audiometry, Pure-Tone , Cross-Sectional Studies , Educational Status , Female , Hearing Loss/rehabilitation , Humans , Independent Living , Linear Models , Male , Middle Aged , Regression Analysis , Sex Factors , United Kingdom
19.
Ear Hear ; 37(1): 73-9, 2016.
Article in English | MEDLINE | ID: mdl-26317162

ABSTRACT

OBJECTIVE: The aim of the study was to investigate the utility of an internet-based version of the trail making test (TMT) to predict performance on a speech-in-noise perception task. DESIGN: Data were taken from a sample of 1509 listeners aged between 18 and 91 years. Participants completed computerized versions of the TMT and an adaptive speech-in-noise recognition test. All testing was conducted via the internet. RESULTS: The results indicate that better performance on both the simple and complex subtests of the TMT is associated with better speech-in-noise recognition scores. Thirty-eight percent of the participants had scores on the speech-in-noise test that indicated the presence of a hearing loss. CONCLUSIONS: The findings suggest that the TMT may be a useful tool in the assessment, and possibly the treatment, of speech-recognition difficulties. The results indicate that the relation between speech-in-noise recognition and TMT performance reflects both the capacity of the TMT to index processing speed and the more complex cognitive abilities also implicated in TMT performance.
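Adaptive speech-in-noise tests of this kind typically track the speech reception threshold with an up-down staircase. A minimal sketch of a 1-up/1-down SNR track run against a simulated listener; the step size, psychometric slope, and threshold are invented for illustration:

```python
# 1-up/1-down adaptive SNR track: SNR decreases after a correct
# response and increases after an error, converging on the
# 50%-correct speech reception threshold (SRT).
import numpy as np

rng = np.random.default_rng(3)
true_srt = -5.0  # dB SNR at 50% correct (simulated listener)
slope = 0.5      # psychometric slope (invented)

def p_correct(snr):
    """Logistic psychometric function of the simulated listener."""
    return 1 / (1 + np.exp(-slope * (snr - true_srt) * 4))

snr, step, track = 0.0, 2.0, []
for _ in range(30):
    track.append(snr)
    correct = rng.random() < p_correct(snr)
    snr += -step if correct else step

# Rough SRT estimate: mean SNR over the tail of the track.
print(f"estimated SRT ~ {np.mean(track[-10:]):.1f} dB SNR")
```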


Subject(s)
Cognition/physiology , Noise , Speech Perception/physiology , Trail Making Test , Adolescent , Adult , Aged , Aged, 80 and over , Female , Humans , Internet , Male , Mass Screening , Middle Aged , Young Adult
20.
Mem Cognit ; 44(4): 608-20, 2016 May.
Article in English | MEDLINE | ID: mdl-26800983

ABSTRACT

Working memory (WM) for spoken language improves when the to-be-remembered items correspond to preexisting representations in long-term memory. We investigated whether this effect generalizes to the visuospatial domain by administering a visual n-back WM task to deaf signers and hearing signers, as well as to hearing nonsigners. Four different kinds of stimuli were presented: British Sign Language (BSL; familiar to the signers), Swedish Sign Language (SSL; unfamiliar), nonsigns, and nonlinguistic manual actions. The hearing signers performed better with BSL than with SSL, demonstrating a facilitatory effect of preexisting semantic representation. The deaf signers also performed better with BSL than with SSL, but only when WM load was high. No effect of preexisting phonological representation was detected. The deaf signers performed better than the hearing nonsigners with all sign-based materials, but this effect did not generalize to nonlinguistic manual actions. We argue that deaf signers, who are highly reliant on visual information for communication, develop expertise in processing sign-based items, even when those items do not have preexisting semantic or phonological representations. Preexisting semantic representation, however, enhances the quality of the gesture-based representations temporarily maintained in WM by this group, thereby releasing WM resources to deal with increased load. Hearing signers, on the other hand, may make strategic use of their speech-based representations for mnemonic purposes. The overall pattern of results is in line with flexible-resource models of WM.


Subject(s)
Deafness/physiopathology , Memory, Short-Term/physiology , Semantics , Sign Language , Adult , Humans , Middle Aged , Space Perception/physiology , Visual Perception/physiology