Results 1 - 20 of 129
1.
PLoS One ; 19(6): e0306113, 2024.
Article in English | MEDLINE | ID: mdl-38924006

ABSTRACT

Facial mimicry, the tendency to imitate the facial expressions of other individuals, has been shown to play a critical role in the processing of emotion expressions. At the same time, there is evidence suggesting that its role might change when the cognitive demands of the situation increase. In such situations, understanding another person is dependent on working memory. However, whether facial mimicry influences working memory representations for facial emotion expressions is not fully understood. In the present study, we experimentally interfered with facial mimicry using established behavioral procedures and investigated how this interference influenced working memory recall for facial emotion expressions. Healthy, young adults (N = 36) performed an emotion expression n-back paradigm with two levels of working memory load, low (1-back) and high (2-back), and three levels of mimicry interference: high, low, and no interference. Results showed that, after controlling for block order and individual differences in the perceived valence and arousal of the stimuli, the high level of mimicry interference impaired accuracy when working memory load was low (1-back) but, unexpectedly, not when load was high (2-back). Working memory load had a detrimental effect on performance in all three mimicry conditions. We conclude that facial mimicry might support working memory for emotion expressions when task load is low, but that this supporting effect may be reduced when the task becomes more cognitively challenging.
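As a concrete illustration of the n-back scoring logic used above, here is a minimal Python sketch. The function, stimuli, and responses are hypothetical and purely illustrative; they are not taken from the study's materials.

```python
# Minimal sketch of n-back scoring: a trial is a "match" when the current
# stimulus equals the one presented n trials back. All names are illustrative.
from typing import List

def nback_accuracy(stimuli: List[str], responses: List[bool], n: int) -> float:
    """Proportion of correct match/non-match responses in an n-back block.

    stimuli   -- sequence of stimulus labels (e.g., emotion categories)
    responses -- participant's "match" judgments, aligned with stimuli
    n         -- working memory load level (1-back, 2-back, ...)
    """
    correct = 0
    scored = 0
    for i in range(n, len(stimuli)):          # first n trials cannot be scored
        is_match = stimuli[i] == stimuli[i - n]
        correct += (responses[i] == is_match)
        scored += 1
    return correct / scored if scored else float("nan")

# Example: a 2-back block with emotion-expression stimuli
stims = ["happy", "angry", "happy", "angry", "angry"]
resps = [False, False, True, True, True]      # True = participant said "match"
print(nback_accuracy(stims, resps, n=2))      # 0.666...: 2 of 3 scored trials
```

Note that in a 2-back block only trials from the third stimulus onward can be scored, since earlier trials have no comparison item; this is why higher loads also shorten the scorable portion of a block.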


Subject(s)
Emotions , Facial Expression , Memory, Short-Term , Humans , Memory, Short-Term/physiology , Male , Female , Emotions/physiology , Young Adult , Adult
2.
Int J Lang Commun Disord ; 59(1): 293-303, 2024.
Article in English | MEDLINE | ID: mdl-37589337

ABSTRACT

BACKGROUND: The impact of hearing impairment is typically studied in terms of its effects on speech perception, yet this fails to account for the interactive nature of communication. Recently, there has been a move towards studying the effects of age-related hearing impairment on interaction, often using referential communication tasks; however, little is known about how interaction in these tasks compares to everyday communication. AIMS: To investigate utterances and requests for clarification used in one-to-one conversations between older adults with hearing impairment and younger adults without hearing impairment, and between two younger adults without hearing impairment. METHODS & PROCEDURES: A total of 42 participants were recruited to the study and split into 21 pairs, 10 with two younger adults without hearing impairment and 11 with one younger adult without hearing impairment and one older participant with age-related hearing impairment (hard of hearing). Results from three tasks (spontaneous conversation and two trials of a referential communication task) were compared. A total of 5 min of interaction in each of the three tasks was transcribed, and the frequency of requests for clarification, mean length of utterance and total utterances were calculated for individual participants and pairs. OUTCOMES & RESULTS: When engaging in spontaneous conversation, participants made fewer requests for clarification than in the referential communication task, regardless of hearing status/age (p ≤ 0.012). Participants who were hard of hearing made significantly more requests for clarification than their partners without hearing impairment in only the second trial of the referential communication task (U = 25, p = 0.019). Mean length of utterance was longer in spontaneous conversation than in the referential communication task in the pairs without hearing impairment (p ≤ 0.021), but not in the pairs including a person who was hard of hearing. However, participants who were hard of hearing used significantly longer utterances than their partners without hearing impairment in the spontaneous conversation (U = 8, p < 0.001) but not in the referential communication tasks. CONCLUSIONS & IMPLICATIONS: The findings suggest that patterns of interaction observed in referential communication tasks differ from those observed in spontaneous conversation. The results also suggest that fatigue may be an important consideration when planning studies of interaction that use multiple conditions of a communication task, particularly when participants are older or hard of hearing. WHAT THIS PAPER ADDS: What is already known on this subject Age-related hearing impairment is known to affect communication; however, the majority of studies have focused on its impact on speech perception in controlled conditions. This reveals little about the impact on everyday, interactive communication. What this study adds to the existing knowledge We investigated utterance length and requests for clarification in one-to-one conversations between pairs consisting of one older adult who is hard of hearing and one younger adult without hearing impairment, or two younger adults without hearing impairment. Results from three tasks (two trials of a referential communication task and spontaneous conversation) were compared. The findings demonstrated a significant effect of task type on requests for clarification in both groups.
Furthermore, in spontaneous conversation, older adults who were hard of hearing used significantly longer utterances than their partners without hearing impairment. This pattern was not observed in the referential communication task. What are the potential or actual clinical implications of this work? These findings have important implications for generalizing results from controlled communication tasks to more everyday conversation. Specifically, they suggest that the previously observed strategy of monopolizing conversation, possibly as an attempt to control it, may be more frequently used by older adults who are hard of hearing in natural conversation than in a more contrived communication task.


Subject(s)
Hearing Loss , Speech Perception , Humans , Aged , Communication
3.
Brain Sci ; 13(4), 2023 Apr 01.
Article in English | MEDLINE | ID: mdl-37190566

ABSTRACT

Face-to-face communication is one of the most common means of communication in daily life. We benefit from both auditory and visual speech signals that lead to better language understanding. People prefer face-to-face communication when access to auditory speech cues is limited because of background noise in the surrounding environment or in the case of hearing impairment. We demonstrated that an early, short period of exposure to audiovisual speech stimuli facilitates subsequent auditory processing of speech stimuli for correct identification, but early auditory exposure does not. We called this effect "perceptual doping" because early audiovisual speech stimulation dopes, or recalibrates, the auditory phonological and lexical maps in the mental lexicon in a way that results in better processing of auditory speech signals for correct identification. This short opinion paper provides an overview of perceptual doping and how it differs from similar auditory perceptual aftereffects following exposure to audiovisual speech materials, its underlying cognitive mechanism, and its potential usefulness in the aural rehabilitation of people with hearing difficulties.

4.
Front Psychol ; 14: 1015227, 2023.
Article in English | MEDLINE | ID: mdl-36936006

ABSTRACT

Objective: The aim of the present study was to assess the validity of the Ease of Language Understanding (ELU) model through a statistical assessment of the relationships among its main parameters: processing speed, phonology, working memory (WM), and the dB Speech-to-Noise Ratio (SNR) for a given Speech Recognition Threshold (SRT), in a sample of hearing aid users from the n200 database. Methods: Hearing aid users were assessed on several hearing and cognitive tests. Latent Structural Equation Models (SEMs) were applied to investigate the relationships between the main parameters of the ELU model while controlling for age and pure-tone average (PTA). Several competing models were assessed. Results: Analyses indicated that a mediating SEM was the best fit for the data. The results showed that (i) phonology independently predicted the speech recognition threshold in both easy and adverse listening conditions, (ii) WM was not predictive of dB SNR for a given SRT in the easier listening conditions, and (iii) processing speed was predictive of dB SNR for a given SRT in the more adverse conditions, mediated via WM. Conclusion: The results were in line with the predictions of the ELU model: (i) phonology contributed to dB SNR for a given SRT in all listening conditions, (ii) WM is only invoked when listening conditions are adverse, (iii) better WM capacity aids the understanding of what has been said in adverse listening conditions, and finally (iv) the results highlight the importance of optimizing processing speed when listening conditions are adverse and WM is activated.
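The mediated relationship reported above (processing speed predicting SRT via working memory) can be illustrated with a simplified observed-variable mediation analysis. The study itself used latent SEM; the sketch below only shows the product-of-paths logic on simulated data, and all variable names and effect sizes are hypothetical.

```python
# Simplified observed-variable analogue of the reported mediation
# (processing speed -> working memory -> SRT dB SNR), using two OLS
# regressions. The study used latent SEM; this only illustrates the logic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
speed = rng.normal(size=n)                 # processing speed (standardized)
wm = 0.5 * speed + rng.normal(size=n)      # WM partly driven by speed
srt = -0.6 * wm + rng.normal(size=n)       # better WM -> lower (better) SRT

# Path a: speed -> WM
a = sm.OLS(wm, sm.add_constant(speed)).fit().params[1]
# Path b: WM -> SRT, controlling for speed
X = sm.add_constant(np.column_stack([wm, speed]))
b = sm.OLS(srt, X).fit().params[1]

print("indirect effect a*b =", a * b)      # nonzero -> mediation via WM
```

The product a*b is the classic indirect-effect estimate; a latent SEM additionally separates measurement error from the structural paths, which is why the authors' approach is preferable with noisy cognitive tests.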

5.
Int J Audiol ; 62(2): 101-109, 2023 02.
Article in English | MEDLINE | ID: mdl-35306958

ABSTRACT

OBJECTIVE: Using data from the n200 study, we aimed to investigate the relationship between behavioural (the Swedish HINT and Hagerman speech-in-noise tests) and self-report (Speech, Spatial and Qualities of Hearing Questionnaire (SSQ)) measures of listening under adverse conditions. DESIGN: The Swedish HINT was masked with a speech-shaped noise (SSN), the Hagerman was masked with an SSN and a four-talker babble, and the subscales of the SSQ were used as a self-report measure. The HINT and Hagerman were administered through an experimental hearing aid. STUDY SAMPLE: This study included 191 hearing aid users with hearing loss (mean PTA4 = 37.6, SD = 10.8) and 195 normally hearing adults (mean PTA4 = 10.0, SD = 6.0). RESULTS: The present study found correlations between behavioural measures of speech-in-noise recognition and self-report scores on the SSQ in normally hearing individuals, but not in hearing aid users. CONCLUSION: The present study may help identify relationships between clinically used behavioural measures and a self-report measure of speech recognition. The results suggest that using a self-report measure as a complement to behavioural speech-in-noise tests might help to further our understanding of how self-report and behavioural results can be generalised to everyday functioning.


Subject(s)
Hearing Aids , Speech Perception , Adult , Humans , Self Report , Speech , Noise/adverse effects , Hearing
7.
Trends Hear ; 26: 23312165221130581, 2022.
Article in English | MEDLINE | ID: mdl-36305085

ABSTRACT

The aim of the current study was to investigate whether task-evoked pupillary responses measured during encoding, individual working memory capacity, and noise reduction in hearing aids were associated with the likelihood of subsequently recalling an item in an auditory free recall test combined with pupillometry. Participants with mild to moderately severe symmetrical sensorineural hearing loss (n = 21) were included. The Sentence-final Word Identification and Recall (SWIR) test was administered in a background noise composed of sixteen talkers, with noise reduction in the hearing aids activated and deactivated. The task-evoked peak pupil dilation (PPD) was measured. The Reading Span (RS) test was used as a measure of individual working memory capacity. Larger PPD at the single-trial level was significantly associated with a higher likelihood of subsequently recalling a word, presumably reflecting the intensity of attention devoted during encoding. There was no clear evidence of a significant relationship between working memory capacity and subsequent memory recall, which may be attributed to the SWIR test and RS test being administered in different modalities, as well as to differences in task characteristics. Noise reduction did not have a significant effect on subsequent memory recall. This may be due to the background noise not having a detrimental effect on attentional processing at the favorable signal-to-noise ratio levels at which the test was conducted.
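The single-trial association above (larger peak pupil dilation during encoding predicting later recall) has the shape of a logistic regression of recall outcome on PPD. The sketch below illustrates that idea on simulated trials; the study's actual analysis may have differed (e.g., mixed-effects models across participants), and all names and numbers are hypothetical.

```python
# Sketch of the trial-level idea: does peak pupil dilation (PPD) during
# encoding predict whether a word is later recalled? Logistic regression
# on simulated trials; data and parameter values are illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_trials = 500
ppd = rng.normal(0.15, 0.05, n_trials)           # peak dilation in mm
p_recall = 1 / (1 + np.exp(-(ppd - 0.15) * 20))  # larger PPD -> higher recall
recalled = (rng.random(n_trials) < p_recall).astype(int)

model = sm.Logit(recalled, sm.add_constant(ppd)).fit(disp=0)
print(model.params)   # positive slope: larger PPD, higher recall likelihood
```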


Subject(s)
Hearing Aids , Hearing Loss, Sensorineural , Speech Perception , Humans , Speech Perception/physiology , Hearing Loss, Sensorineural/diagnosis , Mental Recall/physiology , Memory, Short-Term
8.
Article in English | MEDLINE | ID: mdl-36148149

ABSTRACT

Background: Numerous resting-state studies on attention deficit hyperactivity disorder (ADHD) have reported aberrant functional connectivity (FC) between the default-mode network (DMN) and the ventral attention/salience network (VA/SN). In the ADHD literature, this finding has commonly been interpreted as an index of poorer DMN regulation associated with symptoms of mind wandering. However, a competing perspective suggests that dysfunctional organization of the DMN and VA/SN may additionally index increased sensitivity to the external environment. The goal of the current study was to test this latter perspective in relation to auditory distraction by investigating whether adults with ADHD exhibit aberrant FC between the DMN, VA/SN, and auditory networks. Methods: Twelve minutes of resting-state fMRI data were collected from two adult groups, ADHD (n = 17) and controls (n = 17), from which the FC between predefined regions comprising the DMN, VA/SN, and auditory networks was analyzed. Results: A weaker anticorrelation between the VA/SN and DMN was observed in ADHD. DMN and VA/SN hubs also exhibited aberrant FC with the auditory network in ADHD. Additionally, participants who displayed a stronger anticorrelation between the VA/SN and the auditory network at rest also performed better on a cognitively demanding behavioral task that involved ignoring a distracting auditory stimulus. Conclusion: The results are consistent with the hypothesis that auditory distraction in ADHD is linked to aberrant interactions between the DMN, VA/SN, and auditory systems. Our findings support models that implicate dysfunctional organization of the DMN and VA/SN in the disorder and encourage more research into sensory interactions with these major networks.

9.
Front Psychol ; 13: 967260, 2022.
Article in English | MEDLINE | ID: mdl-36118435

ABSTRACT

The review gives an introductory description of the successive development of data patterns based on comparisons between hearing-impaired and normal hearing participants' speech understanding skills, later prompting the formulation of the Ease of Language Understanding (ELU) model. The model builds on the interaction between an input buffer (RAMBPHO, Rapid Automatic Multimodal Binding of PHOnology) and three memory systems: working memory (WM), semantic long-term memory (SLTM), and episodic long-term memory (ELTM). RAMBPHO input may either match or mismatch multimodal SLTM representations. Given a match, lexical access is accomplished rapidly and implicitly within approximately 100-400 ms. Given a mismatch, the prediction is that WM is engaged explicitly to repair the meaning of the input - in interaction with SLTM and ELTM - taking seconds rather than milliseconds. The multimodal and multilevel nature of representations held in WM and LTM are at the center of the review, being integral parts of the prediction and postdiction components of language understanding. Finally, some hypotheses based on a selective use-disuse of memory systems mechanism are described in relation to mild cognitive impairment and dementia. Alternative speech perception and WM models are evaluated, and recent developments and generalisations, ELU model tests, and boundaries are discussed.

10.
Front Neurosci ; 16: 876807, 2022.
Article in English | MEDLINE | ID: mdl-35937878

ABSTRACT

Despite the evidence of a positive relationship between task demands and listening effort, the Framework for Understanding Effortful Listening (FUEL) highlights the important role of arousal in an individual's choice to engage in challenging listening tasks. Previous studies have interpreted physiological responses in conjunction with behavioral responses as markers of task engagement. The aim of the current study was to investigate the effect of potential changes in physiological arousal, indexed by the pupil baseline, on task engagement over the course of an auditory recall test. A further aim was to investigate whether working memory (WM) capacity and the signal-to-noise ratio (SNR) at which the test was conducted had an effect on changes in arousal. Twenty-one adult hearing aid users with mild to moderately severe symmetrical sensorineural hearing loss were included. The pupil baseline was measured during the Sentence-final Word Identification and Recall (SWIR) test, which was administered in a background noise composed of sixteen talkers. The Reading Span (RS) test was used as a measure of WM capacity. The findings showed that the pupil baseline decreased over the course of the SWIR test. However, recall performance remained stable, indicating that the participants maintained the engagement level required to perform the task. These findings were interpreted as a decline in arousal as a result of task habituation. There was no effect of WM capacity or individual SNR level on the change in pupil baseline over time. A significant interaction was found between WM capacity and SNR level on the overall mean pupil baseline. Individuals with higher WM capacity exhibited an overall larger mean pupil baseline at low SNR levels compared to individuals with poorer WM capacity. This may be related to the ability of individuals with higher WM capacity to perform better than individuals with poorer WM capacity in challenging listening conditions.
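The time-course finding above (pupil baseline decreasing over the test) corresponds to a simple trend analysis: regress per-trial baseline pupil size on trial number and test the slope. A minimal sketch on simulated data follows; the study's actual modeling may have differed, and all values are illustrative.

```python
# Sketch of the time-course idea: a negative slope of baseline pupil size
# over trials indicates declining arousal across the test session.
# Simulated data; names and magnitudes are illustrative, not from the study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
trials = np.arange(1, 61).astype(float)
baseline = 4.0 - 0.005 * trials + rng.normal(0, 0.05, trials.size)  # mm

fit = sm.OLS(baseline, sm.add_constant(trials)).fit()
print(fit.params[1], fit.pvalues[1])  # slope < 0: baseline shrinks over time
```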

11.
Ear Hear ; 43(5): 1437-1446, 2022.
Article in English | MEDLINE | ID: mdl-34983896

ABSTRACT

OBJECTIVES: Previous research suggests that there is a robust relationship between cognitive functioning and speech-in-noise performance for older adults with age-related hearing loss. For normal-hearing adults, on the other hand, the research is not entirely clear. Therefore, the current study aimed to examine the relationship between cognitive functioning, aging, and speech-in-noise performance in a group of older normal-hearing persons and older persons with hearing loss who wear hearing aids. DESIGN: We analyzed data from 199 older normal-hearing individuals (mean age = 61.2) and 200 older individuals with hearing loss (mean age = 60.9) using multigroup structural equation modeling. Four cognitively related tasks were used to create a cognitive functioning construct: the reading span task, a visuospatial working memory task, the semantic word-pairs task, and Raven's progressive matrices. Speech-in-noise, on the other hand, was measured using Hagerman sentences. The Hagerman sentences were presented via an experimental hearing aid to both the normal hearing and hearing-impaired groups. Furthermore, the sentences were presented with one of two background noise conditions: the Hagerman original speech-shaped noise or four-talker babble. Each noise condition was also presented with three different hearing aid processing settings: linear processing, fast compression, and noise reduction. RESULTS: Cognitive functioning was significantly related to speech-in-noise identification. Moreover, aging had a significant effect on both speech-in-noise and cognitive functioning. With regression weights constrained to be equal for the two groups, the final model had the best fit to the data. Importantly, the results showed that the relationship between cognitive functioning and speech-in-noise did not differ between the two groups. Furthermore, the same pattern was evident for aging: the effects of aging on cognitive functioning and of aging on speech-in-noise did not differ between groups. CONCLUSION: Our findings revealed similar cognitive functioning and aging effects on speech-in-noise performance in older normal-hearing and aided hearing-impaired listeners. In conclusion, the findings support the Ease of Language Understanding model, as cognitive processes play a critical role in speech-in-noise performance independently of the hearing status of elderly individuals.
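The group-equality result above (the cognition-to-speech-in-noise weight not differing between groups) can be illustrated with a simple interaction test: if a group-by-cognition interaction term is negligible, the slope can be constrained to be equal across groups. The study used latent multigroup SEM; the OLS sketch below, on simulated data with hypothetical names, only demonstrates the logic.

```python
# Sketch of the multigroup logic: does the cognition -> speech-in-noise
# slope differ between groups? A non-significant group x cognition
# interaction supports constraining the slope to be equal, as in the study's
# final SEM. Simulated data; all names and coefficients are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 400
group = np.repeat([0.0, 1.0], n // 2)      # 0 = normal hearing, 1 = aided HI
cog = rng.normal(size=n)                   # cognitive functioning composite
sin = 0.5 * cog + 0.3 * group + rng.normal(size=n)  # same slope in both groups

X = sm.add_constant(np.column_stack([cog, group, cog * group]))
fit = sm.OLS(sin, X).fit()
print(fit.pvalues[3])  # interaction n.s. -> slopes can be constrained equal
```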


Subject(s)
Deafness , Presbycusis , Speech Perception , Aged , Aged, 80 and over , Cognition , Humans , Latent Class Analysis , Middle Aged , Speech
12.
Int J Audiol ; 61(6): 473-481, 2022 06.
Article in English | MEDLINE | ID: mdl-31613169

ABSTRACT

Retraction statement: We, the Editor and Publisher of the International Journal of Audiology, have retracted the following article: Rachel J. Ellis and Jerker Rönnberg. 2019. "Temporal fine structure: relations to cognition and aided speech recognition." International Journal of Audiology. doi:10.1080/14992027.2019.1672899. The authors of the above-mentioned article published in the International Journal of Audiology have identified errors in the reported analysis (relating to the inclusion of data that should have been excluded) which impact the validity of the findings. The authors have, therefore, requested that the article be retracted. We have been informed in our decision-making by our policy on publishing ethics and integrity and the COPE guidelines on retractions. The retracted article will remain online to maintain the scholarly record, but it will be digitally watermarked on each page as "Retracted".

13.
Int J Audiol ; 61(9): 778-786, 2022 09.
Article in English | MEDLINE | ID: mdl-34292115

ABSTRACT

OBJECTIVES: To investigate associations between sensitivity to temporal fine structure (TFS) and performance in cognitive and speech-in-noise recognition tests. DESIGN: A binaural test of TFS sensitivity (the TFS-LF) was used. Measures of cognition included the reading span, Raven's, and text-reception threshold tests. Measures of speech recognition included the Hearing in Noise Test (HINT) and the Hagerman matrix sentence test in three signal processing conditions. STUDY SAMPLE: Analyses are based on the performance of 324/317 adults with and without hearing impairment. RESULTS: Sensitivity to TFS was significantly correlated with both the reading span test and the recognition of speech in noise processed using noise reduction, the latter only when limited to participants with hearing impairment. Neither association was significant when the effects of age were partialled out. CONCLUSIONS: The findings are consistent with previous research in finding no evidence of a link between sensitivity to TFS and working memory once the effects of age had been partialled out. The results provide some evidence of an influence of signal processing strategy on the association between TFS sensitivity and speech-in-noise recognition. However, further research is necessary to assess the generalisability of the findings before any claims can be made regarding their clinical implications.
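"Partialling out" age, as in the analysis above, amounts to correlating the residuals of the two measures after regressing each on age. A minimal sketch follows; the variable names and simulated effect sizes are hypothetical, chosen only to show how a raw correlation can shrink toward zero once a shared age effect is removed.

```python
# Sketch of partialling out a covariate: correlate the residuals of x and y
# after regressing each on z (here, age). Data and names are illustrative.
import numpy as np

def partial_corr(x, y, z):
    """Pearson correlation of x and y with z partialled out."""
    zx = np.polyfit(z, x, 1); rx = x - np.polyval(zx, z)  # x residuals
    zy = np.polyfit(z, y, 1); ry = y - np.polyval(zy, z)  # y residuals
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(2)
age = rng.uniform(30, 80, 300)
tfs = -0.05 * age + rng.normal(size=300)      # TFS sensitivity declines with age
rspan = -0.04 * age + rng.normal(size=300)    # reading span declines with age

print(np.corrcoef(tfs, rspan)[0, 1])          # inflated by the shared age effect
print(partial_corr(tfs, rspan, age))          # near zero once age is removed
```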


Subject(s)
Hearing Loss , Speech Perception , Adult , Cognition , Hearing , Humans , Speech
14.
Front Hum Neurosci ; 15: 771711, 2021.
Article in English | MEDLINE | ID: mdl-34916918

ABSTRACT

Cognitive control provides us with the ability to, inter alia, regulate the locus of attention and ignore environmental distractions in accordance with our goals. Auditory distraction is a frequently cited symptom in adults with attention deficit hyperactivity disorder (aADHD), yet few task-based fMRI studies have explored whether deficits in cognitive control (associated with the disorder) impede the ability to suppress or compensate for exogenously evoked cortical responses to noise in this population. In the current study, we explored the effects of auditory distraction as a function of working memory (WM) load. Participants completed two tasks: an auditory target detection (ATD) task in which the goal was to actively detect salient oddball tones amidst a stream of standard tones in noise, and a visual n-back task consisting of 0-, 1-, and 2-back WM conditions performed whilst concurrently ignoring the same tonal signal from the ATD task. Results indicated that our sample of young adults with aADHD (n = 17), compared to typically developed controls (n = 17), had difficulty attenuating auditory cortical responses to the task-irrelevant sound when WM demands were high (2-back). Heightened auditory activity to task-irrelevant sound was associated with both poorer WM performance and symptomatic inattentiveness. In the ATD task, we observed a significant increase in functional communication between auditory and salience networks in aADHD. Because performance outcomes were on par with controls for this task, we suggest that this increased functional connectivity in aADHD was likely an adaptive mechanism for suboptimal listening conditions. Taken together, our results indicate that adults with aADHD are more susceptible to noise interference when they are engaged in a primary task. The ability to cope with auditory distraction appears to be related to the WM demands of the task and thus the capacity to deploy cognitive control.

15.
Ear Hear ; 42(6): 1668-1679, 2021.
Article in English | MEDLINE | ID: mdl-33859121

ABSTRACT

OBJECTIVES: Communication requires cognitive processes which are not captured by traditional speech understanding tests. Under challenging listening situations, more working memory resources are needed to process speech, leaving fewer resources available for storage. The aim of the current study was to investigate the effect of task difficulty predictability, that is, knowing versus not knowing task difficulty in advance, and the effect of noise reduction on working memory resource allocation to processing and storage of speech heard in background noise. For this purpose, an "offline" behavioral measure, the Sentence-Final Word Identification and Recall (SWIR) test, and an "online" physiological measure, pupillometry, were combined. Moreover, the outcomes of the two measures were compared to investigate whether they reflect the same processes related to resource allocation. DESIGN: Twenty-four experienced hearing aid users with moderate to moderately severe hearing loss participated in this study. The SWIR test and pupillometry were measured simultaneously, with noise reduction in the test hearing aids activated and deactivated, in a background noise composed of four-talker babble. The task of the SWIR test is to listen to lists of sentences, repeat the last word immediately after each sentence, and recall the repeated words when the list is finished. The sentence baseline dilation, which is defined as the mean pupil dilation before each sentence, and the task-evoked peak pupil dilation (PPD) were analyzed over the course of the lists. Task difficulty predictability was manipulated by including lists of three, five, and seven sentences. The test was conducted over two sessions, one during which the participants were informed about list length before each list (predictable task difficulty) and one during which they were not (unpredictable task difficulty). RESULTS: The sentence baseline dilation was higher when task difficulty was unpredictable compared to predictable, except at the start of the list, where there was no difference. The PPD tended to be higher at the beginning of the list, this pattern being more prominent when task difficulty was unpredictable. Recall performance was better and sentence baseline dilation was higher when noise reduction was on, especially toward the end of longer lists. There was no effect of noise reduction on PPD. CONCLUSIONS: Task difficulty predictability did not have an effect on resource allocation, since recall performance was similar regardless of whether task difficulty was predictable or unpredictable. The higher sentence baseline dilation when task difficulty was unpredictable likely reflected a difference in recall strategy or a higher degree of task engagement/alertness or arousal. Hence, pupillometry captured processes which the SWIR test does not capture. Noise reduction frees up resources to be used for storage of speech, which was reflected in the better recall performance and larger sentence baseline dilation toward the end of the list when noise reduction was on. Thus, both measures captured different temporal aspects of the same processes related to resource allocation with noise reduction on and off.


Subject(s)
Hearing Aids , Speech Perception , Humans , Noise , Pupil/physiology
16.
J Speech Lang Hear Res ; 64(2): 359-370, 2021 02 17.
Article in English | MEDLINE | ID: mdl-33439747

ABSTRACT

Purpose: The purpose of this study was to conceptualize the subtle balancing act between language input and prediction (cognitive priming of future input) to achieve understanding of communicated content. When understanding fails, reconstructive postdiction is initiated. Three memory systems play important roles: working memory (WM), episodic long-term memory (ELTM), and semantic long-term memory (SLTM). The axiom of the Ease of Language Understanding (ELU) model is that explicit WM resources are invoked by a mismatch between language input (in the form of rapid automatic multimodal binding of phonology) and multimodal phonological and lexical representations in SLTM. However, if there is a match between the rapid automatic multimodal binding of phonology output and SLTM/ELTM representations, language processing continues rapidly and implicitly. Method and Results: In our first ELU approach, we focused on experimental manipulations of signal processing in hearing aids and background noise to cause a mismatch with LTM representations; both resulted in increased dependence on WM. Our second approach, the main one relevant for this review article, focuses on the relative effects of age-related hearing loss on the three memory systems. According to the ELU, WM is predicted to be frequently occupied with reconstruction of what was actually heard, resulting in a relative disuse of phonological/lexical representations in the ELTM and SLTM systems. The prediction and results do not depend on test modality per se but rather on the particular memory system. This will be further discussed. Conclusions: Related to the literature on ELTM decline as a precursor of dementia and the fact that the risk for Alzheimer's disease increases substantially over time due to hearing loss, there is a possibility that lowered ELTM due to hearing loss and disuse may be part of the causal chain linking hearing loss and dementia. Future ELU research will focus on this possibility.


Subject(s)
Hearing Aids , Speech Perception , Cognition , Hearing , Humans , Language , Memory, Short-Term
17.
Front Neurosci ; 14: 573254, 2020.
Article in English | MEDLINE | ID: mdl-33100961

ABSTRACT

Under adverse listening conditions, prior linguistic knowledge about the form (i.e., phonology) and meaning (i.e., semantics) of speech helps us to predict what an interlocutor is about to say. Previous research has shown that accurate predictions of incoming speech increase speech intelligibility, and that semantic predictions enhance the perceptual clarity of degraded speech even when exact phonological predictions are possible. In addition, working memory (WM) is thought to have a specific influence over anticipatory mechanisms by actively maintaining and updating the relevance of predicted vs. unpredicted speech inputs. However, the relative impact on speech processing of deviations from expectations related to form and meaning is incompletely understood. Here, we use MEG to investigate the cortical temporal processing of deviations from the expected form and meaning of final words during sentence processing. Our overall aim was to observe how deviations from the expected form and meaning modulate cortical speech processing under adverse listening conditions and to investigate the degree to which this is associated with WM capacity. Results indicated that different types of deviations are processed differently in the auditory N400 and Mismatch Negativity (MMN) components. In particular, the MMN was sensitive to the type of deviation (form or meaning), whereas the N400 was sensitive to the magnitude of the deviation rather than its type. WM capacity was associated with the ability to process incoming phonological information and with semantic integration.

18.
Int J Audiol ; 59(10): 792-800, 2020 10.
Article in English | MEDLINE | ID: mdl-32564633

ABSTRACT

OBJECTIVE: In the present study, we investigated whether varying the task difficulty of the Sentence-Final Word Identification and Recall (SWIR) Test has an effect on the benefit of noise reduction, as well as whether task difficulty predictability affects recall. The relationship between working memory and recall was examined. DESIGN: Task difficulty was manipulated by varying the list length with noise reduction on and off in competing speech and speech-shaped noise. Half of the participants were informed about list length in advance. Working memory capacity was measured using the Reading Span. STUDY SAMPLE: Thirty-two experienced hearing aid users with moderate sensorineural hearing loss. RESULTS: Task difficulty did not affect the noise reduction benefit and task difficulty predictability did not affect recall. Participants may have employed a different recall strategy when task difficulty was unpredictable and noise reduction off. Reading Span scores positively correlated with the SWIR test. Noise reduction improved recall in competing speech. CONCLUSIONS: The SWIR test with varying list length is suitable for detecting the benefit of noise reduction. The correlation with working memory suggests that the SWIR test could be modified to be adaptive to individual cognitive capacity. The results on noise and noise reduction replicate previous findings.


Subject(s)
Hearing Aids , Hearing Loss, Sensorineural , Speech Perception , Hearing Loss, Sensorineural/diagnosis , Humans , Memory, Short-Term , Mental Recall , Noise/adverse effects
20.
Int J Audiol ; 59(3): 208-218, 2020 03.
Article in English | MEDLINE | ID: mdl-31809220

ABSTRACT

Objective: The aim of this study was to examine how background noise and hearing aid experience affect the robust relationship between working memory and speech recognition. Design: Matrix sentences were used to measure speech recognition in noise. Three measures of working memory were administered. Study sample: 148 participants with at least 2 years of hearing aid experience. Results: A stronger overall correlation between working memory and speech recognition performance was found in a four-talker babble than in a stationary noise background. This correlation was significantly weaker in participants with the most hearing aid experience than in those with the least experience when the background noise was stationary. In the four-talker babble, however, no significant difference in the strength of the correlation was found between users with different levels of experience. Conclusion: In general, more explicit working memory processing is invoked when listening in a multi-talker babble. The matching processes (cf. the Ease of Language Understanding model, ELU) were more efficient for experienced than for less experienced users when perceiving speech. This study extends the existing ELU model by suggesting that mismatch may also lead to the establishment of new phonological representations in long-term memory.


Subject(s)
Auditory Threshold , Hearing Aids/psychology , Hearing Loss, Sensorineural/psychology , Memory, Short-Term , Speech Perception , Aged , Female , Hearing Loss, Sensorineural/rehabilitation , Humans , Male , Middle Aged , Noise , Perceptual Masking , Regression Analysis , Speech Reception Threshold Test , Time Factors