Results 1 - 7 of 7
1.
J Cogn Neurosci ; 36(6): 997-1020, 2024 06 01.
Article in English | MEDLINE | ID: mdl-38579256

ABSTRACT

Although the impact of acoustic challenge on speech processing and memory increases as a person ages, older adults may engage in strategies that help them compensate for these demands. In the current preregistered study, older adults (n = 48) listened to sentences, presented in quiet or in noise, that were high constraint with either expected or unexpected endings or were low constraint with unexpected endings. Pupillometry and EEG were simultaneously recorded, and subsequent sentence recognition and word recall were measured. Like young adults in prior work, we found that noise led to increases in pupil size, delayed and reduced ERP responses, and decreased recall for unexpected words. However, in contrast to prior work in young adults, where a larger pupillary response predicted a recovery of the N400 at the cost of poorer memory performance in noise, older adults did not show an associated recovery of the N400 despite decreased memory performance. Instead, we found that in quiet, increases in pupil size were associated with delays in N400 onset latencies and increased recognition memory performance. In conclusion, we found that transient variation in pupil-linked arousal predicted trade-offs between real-time lexical processing and memory that emerged at lower levels of task demand in aging. Moreover, with increased acoustic challenge, older adults still exhibited costs associated with transient increases in arousal without the corresponding benefits.
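As a rough illustration of how an N400 onset latency (as referenced in this abstract) can be estimated from a difference wave, here is a minimal Python sketch using a fractional-peak criterion. The 50% threshold, time window, and simulated waveform are assumptions for illustration only, not the authors' analysis pipeline.

```python
import numpy as np

def n400_onset_latency(diff_wave, times, window=(0.3, 0.5), frac=0.5):
    """Estimate N400 onset as the first time the (negative-going) difference
    wave reaches `frac` of its peak amplitude within `window` (seconds).

    diff_wave : 1-D array, unexpected-minus-expected ERP difference (microvolts)
    times     : 1-D array of time points (seconds), same length as diff_wave
    """
    mask = (times >= window[0]) & (times <= window[1])
    seg, seg_t = diff_wave[mask], times[mask]
    peak_amp = seg.min()                     # N400 is a negativity
    threshold = frac * peak_amp
    crossed = np.where(seg <= threshold)[0]  # first sample at/below 50% of peak
    return seg_t[crossed[0]] if crossed.size else np.nan

# Toy example: a noisy negative deflection peaking near 400 ms
times = np.arange(-0.2, 0.8, 0.002)
wave = -4.0 * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
wave += np.random.default_rng(0).normal(0, 0.2, times.size)
print(f"Estimated N400 onset: {n400_onset_latency(wave, times):.3f} s")
```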


Subject(s)
Aging , Electroencephalography , Pupil , Speech Perception , Humans , Aged , Male , Female , Pupil/physiology , Aging/physiology , Speech Perception/physiology , Acoustic Stimulation , Aged, 80 and over , Middle Aged , Memory/physiology , Recognition, Psychology/physiology , Evoked Potentials/physiology , Auditory Perception/physiology , Mental Recall/physiology
2.
Bull Math Biol ; 86(4): 40, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38489047

ABSTRACT

The use of nonlinear statistical methods and models is ubiquitous in scientific research. However, these methods may not be fully understood, and as demonstrated here, commonly reported parameter p-values and confidence intervals may be inaccurate. The gentle introduction to nonlinear regression modelling and the comprehensive illustrations given here provide applied researchers with the overview and tools needed to appreciate the nuances and breadth of these important methods. Because these methods build upon topics covered in first and second courses in applied statistics and predictive modelling, the target audience includes practitioners and students alike. To guide practitioners, we summarize, illustrate, develop, and extend nonlinear modelling methods, underscore the caveats of Wald statistics using basic illustrations, and give key reasons for preferring likelihood methods. Parameter profiling in multiparameter models and exact or near-exact versus approximate likelihood methods are discussed, and curvature measures are connected with the failure of the Wald approximations regularly used in statistical software. The discussion in the main paper is kept at an introductory level and can be covered on a first reading; additional details given in the Appendices can be worked through upon further study. The associated online Supplementary Information also provides the data and R computer code, which can be easily adapted to help researchers fit nonlinear models to their data.
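The paper's own worked examples and R code are in its Supplementary Information; as a language-neutral illustration of the Wald-versus-profile-likelihood contrast the abstract describes, the following Python sketch fits a toy exponential-decay model and compares the two interval types for one parameter. The data, model, and parameter values are invented, and this is not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit, minimize_scalar
from scipy.stats import chi2

# Simulated data from an exponential-decay model y = a * exp(-b * x) + error
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 30)
y = 5.0 * np.exp(-0.4 * x) + rng.normal(0, 0.3, x.size)

model = lambda x, a, b: a * np.exp(-b * x)

# Full fit: Wald (curvature-based) interval for b from the estimated covariance
(a_hat, b_hat), cov = curve_fit(model, x, y, p0=(4.0, 0.5))
se_b = np.sqrt(cov[1, 1])
wald_ci = (b_hat - 1.96 * se_b, b_hat + 1.96 * se_b)

# Profile-likelihood interval for b: refit a with b fixed on a grid and keep
# the b values where n * log(RSS(b) / RSS_min) <= chi-square(0.95, 1)
def rss_fixed_b(b):
    res = minimize_scalar(lambda a: np.sum((y - model(x, a, b)) ** 2))
    return res.fun

rss_min = np.sum((y - model(x, a_hat, b_hat)) ** 2)
n, cutoff = x.size, chi2.ppf(0.95, 1)
grid = np.linspace(b_hat - 6 * se_b, b_hat + 6 * se_b, 400)
inside = [b for b in grid if n * np.log(rss_fixed_b(b) / rss_min) <= cutoff]
profile_ci = (min(inside), max(inside))

print(f"b_hat = {b_hat:.3f}")
print(f"Wald 95% CI:    ({wald_ci[0]:.3f}, {wald_ci[1]:.3f})")
print(f"Profile 95% CI: ({profile_ci[0]:.3f}, {profile_ci[1]:.3f})")
```

In strongly curved models the two intervals can diverge noticeably, which is the failure of the Wald approximation the abstract refers to.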


Subject(s)
Models, Biological , Nonlinear Dynamics , Humans , Computer Simulation , Mathematical Concepts , Likelihood Functions , Models, Statistical
3.
Psychophysiology ; 60(9): e14312, 2023 09.
Article in English | MEDLINE | ID: mdl-37203307

ABSTRACT

Readers use prior context to predict features of upcoming words. When predictions are accurate, this increases the efficiency of comprehension. However, little is known about the fate of predictable and unpredictable words in memory or the neural systems governing these processes. Several theories suggest that the speech production system, including the left inferior frontal cortex (LIFC), is recruited for prediction but evidence that LIFC plays a causal role is lacking. We first examined the effects of predictability on memory and then tested the role of posterior LIFC using transcranial magnetic stimulation (TMS). In Experiment 1, participants read category cues, followed by a predictable, unpredictable, or incongruent target word for later recall. We observed a predictability benefit to memory, with predictable words remembered better than unpredictable words. In Experiment 2, participants performed the same task with electroencephalography (EEG) while undergoing event-related TMS over posterior LIFC using a protocol known to disrupt speech production, or over the right hemisphere homologue as an active control site. Under control stimulation, predictable words were better recalled than unpredictable words, replicating Experiment 1. This predictability benefit to memory was eliminated under LIFC stimulation. Moreover, while an a priori ROI-based analysis did not yield evidence for a reduction in the N400 predictability effect, mass-univariate analyses did suggest that the N400 predictability effect was reduced in spatial and temporal extent under LIFC stimulation. Collectively, these results provide causal evidence that the LIFC is recruited for prediction during silent reading, consistent with prediction-through-production accounts.
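To make the a priori ROI-style comparison mentioned above concrete, here is a minimal Python sketch testing the N400 predictability effect (unpredictable minus predictable amplitude) under control versus LIFC stimulation. The subject count, time window, and amplitude values are hypothetical, and the mass-univariate (e.g., cluster-based) analysis is not shown.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-subject mean amplitudes (microvolts) in an a priori N400
# window (e.g., 300-500 ms over centro-parietal sites); values are simulated.
rng = np.random.default_rng(2)
n_subjects = 24
unpredictable_ctrl = rng.normal(-2.5, 1.0, n_subjects)  # larger N400 = more negative
predictable_ctrl   = rng.normal(-0.5, 1.0, n_subjects)
unpredictable_lifc = rng.normal(-2.0, 1.0, n_subjects)
predictable_lifc   = rng.normal(-0.8, 1.0, n_subjects)

# N400 predictability effect = unpredictable minus predictable amplitude
effect_ctrl = unpredictable_ctrl - predictable_ctrl
effect_lifc = unpredictable_lifc - predictable_lifc

print("Control site:", ttest_rel(unpredictable_ctrl, predictable_ctrl))
print("LIFC site:   ", ttest_rel(unpredictable_lifc, predictable_lifc))
print("Effect difference (control vs LIFC):", ttest_rel(effect_ctrl, effect_lifc))
```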


Subject(s)
Electroencephalography , Transcranial Magnetic Stimulation , Humans , Male , Female , Semantics , Evoked Potentials/physiology , Reading , Frontal Lobe/physiology , Comprehension/physiology
4.
Ear Hear ; 44(5): 1121-1132, 2023.
Article in English | MEDLINE | ID: mdl-36935395

ABSTRACT

OBJECTIVES: Everyday listening environments are filled with competing noise and distractors. Although significant research has examined the effect of competing noise on speech recognition and listening effort, little is understood about the effect of distraction. The framework for understanding effortful listening recognizes the importance of attention-related processes in speech recognition and listening effort; however, it underspecifies the role that they play, particularly with respect to distraction. The load theory of attention predicts that resources will be automatically allocated to processing a distractor, but only if perceptual load in the listening task is low enough. If perceptual load is high (i.e., listening in noise), then resources that would otherwise be allocated to processing a distractor are used to overcome the increased perceptual load and are unavailable for distractor processing. Although there is ample evidence for this theory in the visual domain, there has been little research investigating how the load theory of attention may apply to speech processing. In this study, we sought to measure the effect of distractors on speech recognition and listening effort and to evaluate whether the load theory of attention can be used to understand a listener's resource allocation in the presence of distractors.

DESIGN: Fifteen adult listeners participated in a monosyllabic word repetition task. Test stimuli were presented in quiet or in competing speech (+5 dB signal-to-noise ratio) and in distractor or no-distractor conditions. In conditions with distractors, auditory distractors were presented before the target words on 24% of the trials in quiet and in noise. Percent correct was recorded as the measure of speech recognition, and verbal response time (VRT) was recorded as a measure of listening effort.

RESULTS: A significant interaction was present for speech recognition, showing reduced speech recognition when distractors were presented in the quiet condition but no effect of distractors when noise was present. VRTs were significantly longer when distractors were present, regardless of listening condition.

CONCLUSIONS: Consistent with the load theory of attention, distractors significantly reduced speech recognition in the low-perceptual-load condition (i.e., listening in quiet) but did not impact speech recognition scores in conditions of high perceptual load (i.e., listening in noise). The increases in VRTs in the presence of distractors in both low- and high-perceptual-load conditions (i.e., quiet and noise) suggest that the load theory of attention may not apply to listening effort. However, the large effect of distractors on VRT in both conditions is consistent with previous work demonstrating that distraction-related shifts of attention can delay processing of the target task. These findings also fit within the framework for understanding effortful listening, which proposes that involuntary attentional shifts deplete cognitive resources, leaving fewer resources available to process the signal of interest and thereby increasing listening effort (i.e., elongated VRT).
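A minimal sketch of how the noise-by-distractor interaction on verbal response time described above could be modeled with a linear mixed-effects model follows. Trial counts, effect sizes, and the random-effects structure are assumptions for illustration; this is not the authors' analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level data: 15 listeners x 2 listening conditions
# (quiet vs noise) x 2 distractor conditions; vrt in milliseconds.
rng = np.random.default_rng(3)
rows = []
for subj in range(15):
    for noise in (0, 1):
        for distractor in (0, 1):
            for _ in range(25):  # trials per cell (illustrative)
                vrt = 900 + 150 * noise + 120 * distractor + rng.normal(0, 80)
                rows.append(dict(subject=subj, noise=noise,
                                 distractor=distractor, vrt=vrt))
data = pd.DataFrame(rows)

# Linear mixed-effects model of verbal response time with a random intercept
# per listener; the noise x distractor interaction term mirrors the question
# tested in the abstract (accuracy could be modeled analogously with a
# logistic mixed model).
m_vrt = smf.mixedlm("vrt ~ noise * distractor", data, groups=data["subject"]).fit()
print(m_vrt.summary())
```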


Subject(s)
Speech Perception , Adult , Humans , Speech Perception/physiology , Acoustic Stimulation/methods , Speech , Listening Effort , Noise
5.
J Speech Lang Hear Res ; 65(6): 2364-2390, 2022 06 08.
Article in English | MEDLINE | ID: mdl-35623337

ABSTRACT

PURPOSE: Previous studies have suggested that the negative effects of acoustic challenge on speech memory can be attenuated with assistive text captions, particularly among older adults with hearing impairment. However, no studies have systematically examined the effects of text-captioning errors, which are common in automated speech recognition (ASR) systems.

METHOD: In two experiments, we examined memory for text-captioned speech (with and without background noise) when captions had no errors (control) or had one of three common ASR errors: substitution, deletion, or insertion errors.

RESULTS: In both Experiment 1 (young adults with normal hearing) and Experiment 2 (older adults with varying hearing acuity), we observed similar additive effects of caption errors and background noise, such that increased background noise and the presence of captioning errors negatively impacted memory outcomes. Notably, the negative effects of captioning errors were largest among older adults with increased hearing thresholds, suggesting that older adults with hearing loss may show an increased reliance on text captions compared to adults with normal hearing.

CONCLUSION: Our findings show that even a single-word error can be deleterious to memory for text-captioned speech, especially in older adults with hearing loss. Therefore, to produce the greatest benefit to memory, it is crucial that text captions are accurate.
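The three caption-error types studied above (substitutions, deletions, insertions) are conventionally identified by word-level edit-distance alignment between a reference transcript and an ASR hypothesis. The compact Python sketch below shows that standard alignment; the example sentences are invented, and this is not the stimulus-generation code from the study.

```python
def asr_error_counts(reference, hypothesis):
    """Count word-level substitutions, deletions, and insertions between a
    reference transcript and an ASR hypothesis via Levenshtein alignment.
    Inputs are lists of words."""
    r, h = reference, hypothesis
    # dp[i][j] = (cost, subs, dels, ins) for aligning r[:i] with h[:j]
    dp = [[None] * (len(h) + 1) for _ in range(len(r) + 1)]
    dp[0][0] = (0, 0, 0, 0)
    for i in range(1, len(r) + 1):
        dp[i][0] = (i, 0, i, 0)                       # all deletions
    for j in range(1, len(h) + 1):
        dp[0][j] = (j, 0, 0, j)                       # all insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            if r[i - 1] == h[j - 1]:
                match = dp[i - 1][j - 1]              # exact word match, no cost
            else:
                c, s, d, n = dp[i - 1][j - 1]
                match = (c + 1, s + 1, d, n)          # substitution
            c, s, d, n = dp[i - 1][j]
            delete = (c + 1, s, d + 1, n)             # deletion
            c, s, d, n = dp[i][j - 1]
            insert = (c + 1, s, d, n + 1)             # insertion
            dp[i][j] = min(match, delete, insert)
    cost, subs, dels, ins = dp[len(r)][len(h)]
    return {"substitutions": subs, "deletions": dels, "insertions": ins,
            "wer": cost / max(len(r), 1)}

ref = "the quick brown fox jumps over the lazy dog".split()
hyp = "the quick brown fox jumped over lazy big dog".split()
print(asr_error_counts(ref, hyp))  # 1 substitution, 1 deletion, 1 insertion
```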


Subject(s)
Deafness , Speech Perception , Aged , Hearing , Humans , Noise/adverse effects , Speech , Young Adult
6.
Ear Hear ; 43(1): 115-127, 2022.
Article in English | MEDLINE | ID: mdl-34260436

ABSTRACT

OBJECTIVE: Everyday speech understanding frequently occurs in perceptually demanding environments, for example, due to background noise and normal age-related hearing loss. The resulting degraded speech signals increase listening effort, which gives rise to negative downstream effects on subsequent memory and comprehension, even when speech is intelligible. In two experiments, we explored whether the presentation of realistic assistive text-captioned speech offsets the negative effects of background noise and hearing impairment on multiple measures of speech memory.

DESIGN: In Experiment 1, young normal-hearing adults (N = 48) listened to sentences for immediate recall and delayed recognition memory. Speech was presented in quiet or in two levels of background noise. Sentences were presented either as speech only or as text-captioned speech. Thus, the experiment followed a 2 (caption vs. no caption) × 3 (no noise, +7 dB signal-to-noise ratio, +3 dB signal-to-noise ratio) within-subjects design. In Experiment 2, a group of older adults (age range: 61 to 80, N = 31) with varying levels of hearing acuity completed the same experimental task as in Experiment 1. For both experiments, immediate recall, recognition memory accuracy, and recognition memory confidence were analyzed via general(ized) linear mixed-effects models. In addition, we examined individual differences as a function of hearing acuity in Experiment 2.

RESULTS: In Experiment 1, the presentation of realistic text-captioned speech improved immediate recall and delayed recognition memory accuracy and confidence in young normal-hearing listeners compared with speech alone. Moreover, text captions attenuated the negative effects of background noise on all speech memory outcomes. In Experiment 2, we replicated the same pattern of results in a sample of older adults with varying levels of hearing acuity. Moreover, we showed that the negative effects of hearing loss on speech memory in older adulthood were attenuated by the presentation of text captions.

CONCLUSIONS: Collectively, these findings strongly suggest that the simultaneous presentation of text can offset the negative effects of effortful listening on speech memory. Critically, captioning benefits extended from immediate word recall to long-term sentence recognition memory, a benefit that was observed not only for older adults with hearing loss but also for young normal-hearing listeners. These findings suggest that the text-captioning benefit to memory is robust and has potentially wide applications for supporting speech listening in acoustically challenging environments.
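The abstract above reports (generalized) linear mixed-effects analyses; as a simpler illustrative sketch of the 2 (caption) × 3 (noise) within-subjects structure, the Python snippet below computes each subject's caption benefit at each noise level and runs a paired test per level. Subject counts, accuracies, and effect sizes are simulated assumptions, not the study's data or analysis.

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_rel

# Hypothetical per-subject recognition accuracy for the 2 (caption) x 3 (SNR)
# within-subjects design described in the abstract; values are simulated.
rng = np.random.default_rng(4)
levels = ["quiet", "+7 dB", "+3 dB"]
rows = []
for s in range(48):
    base = rng.normal(0.85, 0.05)
    for k, snr in enumerate(levels):
        no_cap = np.clip(base - 0.08 * k + rng.normal(0, 0.03), 0, 1)
        cap    = np.clip(base - 0.03 * k + rng.normal(0, 0.03), 0, 1)
        rows += [dict(subject=s, snr=snr, caption="no", accuracy=no_cap),
                 dict(subject=s, snr=snr, caption="yes", accuracy=cap)]
data = pd.DataFrame(rows)

# Caption benefit (captioned minus uncaptioned accuracy) per subject and
# noise level, then a paired test at each level.
wide = data.pivot_table(index=["subject", "snr"], columns="caption",
                        values="accuracy").reset_index()
for snr in levels:
    d = wide[wide["snr"] == snr]
    print(snr, ttest_rel(d["yes"], d["no"]))
```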


Subject(s)
Deafness , Presbycusis , Speech Perception , Aged , Aged, 80 and over , Humans , Middle Aged , Noise , Speech
7.
Cortex ; 142: 296-316, 2021 09.
Article in English | MEDLINE | ID: mdl-34332197

ABSTRACT

There is an apparent disparity between the fields of cognitive audiology and cognitive electrophysiology as to how linguistic context is used when listening to perceptually challenging speech. To gain a clearer picture of how listening effort impacts context use, we conducted a preregistered study to simultaneously examine electrophysiological, pupillometric, and behavioral responses when listening to sentences varying in contextual constraint and acoustic challenge in the same sample. Participants (N = 44) listened to sentences that were highly constraining and completed with expected or unexpected sentence-final words ("The prisoners were planning their escape/party") or were low-constraint sentences with unexpected sentence-final words ("All day she thought about the party"). Sentences were presented either in quiet or with +3 dB SNR background noise. Pupillometry and EEG were simultaneously recorded, and subsequent sentence recognition and word recall were measured. While the N400 expectancy effect was diminished by noise, suggesting impaired real-time context use, we simultaneously observed a beneficial effect of constraint on subsequent recognition memory for degraded speech. Importantly, analyses of trial-to-trial coupling between pupil dilation and N400 amplitude showed that when participants showed increased listening effort (i.e., greater pupil dilation), there was a subsequent recovery of the N400 effect, but at the same time, higher effort was related to poorer subsequent sentence recognition and word recall. Collectively, these findings suggest divergent effects of acoustic challenge and listening effort on context use: while noise impairs the rapid use of context to facilitate lexical-semantic processing in general, this negative effect is attenuated when listeners show increased effort in response to noise. However, this effort-induced reliance on context for online word processing comes at the cost of poorer subsequent memory.
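One common way to quantify the kind of trial-to-trial coupling described above is a two-stage approach: estimate a per-participant slope relating single-trial pupil dilation to single-trial N400 amplitude, then test the slopes at the group level. The Python sketch below illustrates that approach on simulated data; participant and trial counts, effect sizes, and the two-stage method itself are assumptions for illustration rather than the paper's analysis.

```python
import numpy as np
from scipy.stats import ttest_1samp

# Hypothetical single-trial data for one noise condition: per participant,
# a pupil-dilation value and an N400 amplitude for each trial (simulated).
rng = np.random.default_rng(5)
n_participants, n_trials = 44, 60
slopes = []
for _ in range(n_participants):
    pupil = rng.normal(0, 1, n_trials)                 # z-scored pupil dilation
    # assume larger dilation predicts a more negative (recovered) N400 effect
    n400 = -1.0 - 0.4 * pupil + rng.normal(0, 1.5, n_trials)
    slope = np.polyfit(pupil, n400, 1)[0]              # per-participant slope
    slopes.append(slope)

# Group-level test: are the pupil-to-N400 coupling slopes different from zero?
print(ttest_1samp(slopes, 0.0))
```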


Subject(s)
Electroencephalography , Speech Perception , Electrophysiology , Evoked Potentials , Female , Humans , Male , Noise