Results 1-20 of 30
1.
J Neurophysiol ; 129(6): 1359-1377, 2023 06 01.
Article in English | MEDLINE | ID: mdl-37096924

ABSTRACT

Understanding speech in a noisy environment is crucial in day-to-day interactions, yet it becomes more challenging with age, even in healthy aging. Age-related changes in the neural mechanisms that enable speech-in-noise listening have been investigated previously; however, the extent to which age affects the timing and fidelity of the encoding of target and interfering speech streams is not well understood. Using magnetoencephalography (MEG), we investigated how continuous speech is represented in auditory cortex in the presence of interfering speech in younger and older adults. Cortical representations were obtained from neural responses time-locked to the speech envelopes, using speech envelope reconstruction and temporal response functions (TRFs). TRFs showed three prominent peaks corresponding to auditory cortical processing stages: early (∼50 ms), middle (∼100 ms), and late (∼200 ms). Older adults showed exaggerated speech envelope representations compared with younger adults. Temporal analysis revealed both that the age-related exaggeration starts as early as ∼50 ms and that older adults needed a substantially longer integration time window to achieve their better reconstruction of the speech envelope. As expected, with increased speech masking, envelope reconstruction for the attended talker decreased and all three TRF peaks were delayed, with aging contributing additionally to the reduction. Interestingly, for older adults the late peak was delayed, suggesting that this peak may receive contributions from multiple sources. Together, these results suggest that several mechanisms are at play, compensating for age-related temporal processing deficits at several stages, but that they are not able to fully reestablish unimpaired speech perception.

NEW & NOTEWORTHY: We observed age-related changes in cortical temporal processing of continuous speech that may be related to older adults' difficulty understanding speech in noise. These changes occur in both the timing and the strength of the speech representations at different cortical processing stages, and depend on both noise condition and selective attention. Critically, their dependence on noise condition changes dramatically among the early, middle, and late cortical processing stages, underscoring how aging differentially affects these stages.
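A TRF of the kind described above is commonly estimated by regularized linear regression of the neural response onto time-lagged copies of the speech envelope. The sketch below is a minimal illustration on synthetic data (the sampling rate, lag range, regularization, and single-peak kernel are illustrative assumptions, not the study's parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100                      # sampling rate (Hz), illustrative
n = 5000                      # samples of "continuous speech"
lags = np.arange(0, 30)       # 0-290 ms of stimulus history

# Synthetic speech envelope and a known TRF with an early peak (~50 ms)
envelope = np.abs(rng.standard_normal(n))
true_trf = np.exp(-0.5 * ((lags - 5) / 2.0) ** 2)   # peak at lag 5 = 50 ms

# Neural response = envelope convolved with the TRF, plus noise
response = np.convolve(envelope, true_trf)[:n] + 0.5 * rng.standard_normal(n)

# Lagged design matrix: X[t, k] = envelope[t - k]
X = np.stack([np.roll(envelope, k) for k in lags], axis=1)
X[:lags.max(), :] = 0          # discard wrapped-around samples

# Ridge regression: trf = (X'X + lam*I)^-1 X'y
lam = 1.0
trf = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ response)

peak_latency_ms = lags[np.argmax(trf)] * 1000 // fs
print(peak_latency_ms)
```

With enough data relative to the noise, the estimated kernel recovers the simulated ∼50 ms peak latency; real MEG analyses use cross-validated regularization and source-localized responses rather than this toy setup.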


Subject(s)
Speech Perception, Speech, Speech/physiology, Auditory Perception, Noise, Speech Perception/physiology, Acoustic Stimulation/methods
2.
J Acoust Soc Am ; 153(1): 286, 2023 01.
Article in English | MEDLINE | ID: mdl-36732241

ABSTRACT

Speech recognition in noisy environments can be challenging and requires listeners to accurately segregate a target speaker from irrelevant background noise. Stochastic figure-ground (SFG) tasks, in which temporally coherent inharmonic pure tones must be identified against a background, have been used to probe the non-linguistic auditory stream segregation processes important for speech-in-noise processing. However, little is known about the relationship between performance on SFG tasks and speech-in-noise tasks, or about the individual differences that may modulate such relationships. In this study, 37 younger normal-hearing adults performed an SFG task with target figure chords consisting of four, six, eight, or ten temporally coherent tones amongst a background of randomly varying tones. Stimuli were designed to be spectrally and temporally flat. An increased number of temporally coherent tones resulted in higher accuracy and faster reaction times (RTs). For ten target tones, faster RTs were associated with better scores on the Quick Speech-in-Noise task. Individual differences in working memory capacity and self-reported musicianship further modulated these relationships. Overall, the results demonstrate that the SFG task could serve as an assessment of auditory stream segregation accuracy and RT that is sensitive to individual differences in cognitive and auditory abilities, even among younger normal-hearing adults.
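The defining property of an SFG stimulus is that the "figure" tones repeat at fixed frequencies across successive chords while the background tones vary randomly. A toy generator of that structure (the frequency grid, chord count, and tone counts here are illustrative assumptions, not the study's stimulus parameters):

```python
import numpy as np

rng = np.random.default_rng(1)

freq_grid = np.logspace(np.log10(200), np.log10(7200), 60)  # candidate tone frequencies (Hz)
n_chords = 25          # chords per stimulus
n_background = 10      # random background tones per chord
n_figure = 4           # temporally coherent figure tones

# Figure: the same frequencies repeat in every chord (temporal coherence)
figure_freqs = rng.choice(freq_grid, size=n_figure, replace=False)

chords = []
for _ in range(n_chords):
    background = rng.choice(freq_grid, size=n_background, replace=False)
    chords.append(np.union1d(background, figure_freqs))

# Every chord contains the full set of figure frequencies
coherent = all(np.isin(figure_freqs, chord).all() for chord in chords)
print(coherent)
```

Varying `n_figure` (four to ten in the study) changes how salient the coherent figure is against the random background, which is the manipulation driving the accuracy and RT effects reported above.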


Subject(s)
Memory, Short-Term, Speech Perception, Adult, Humans, Speech, Individuality, Audiometry, Pure-Tone
3.
Neuroimage ; 260: 119496, 2022 10 15.
Article in English | MEDLINE | ID: mdl-35870697

ABSTRACT

Identifying the directed connectivity that underlies networked activity between different cortical areas is critical for understanding the neural mechanisms behind sensory processing. Granger causality (GC) is widely used for this purpose in functional magnetic resonance imaging analysis, but its temporal resolution is low, making it difficult to capture the millisecond-scale interactions underlying sensory processing. Magnetoencephalography (MEG) has millisecond resolution but provides only low-dimensional sensor-level linear mixtures of neural sources, which makes GC inference challenging. Conventional methods proceed in two stages: first, cortical sources are estimated from MEG using a source localization technique, and then GC is inferred among the estimated sources. However, the spatiotemporal biases in estimating sources propagate into the subsequent GC analysis stage and may result in both false alarms and missed true GC links. Here, we introduce the Network Localized Granger Causality (NLGC) inference paradigm, which models the source dynamics as latent sparse multivariate autoregressive processes, estimates their parameters directly from the MEG measurements in a manner integrated with source localization, and employs the resulting parameter estimates to produce a precise statistical characterization of the detected GC links. We offer several theoretical and algorithmic innovations within NLGC and further examine its utility via comprehensive simulations and application to MEG data from an auditory task involving tone processing in both younger and older participants. Our simulation studies reveal that NLGC is markedly robust with respect to model mismatch, network size, and low signal-to-noise ratio, whereas the conventional two-stage methods result in high false alarm and mis-detection rates. We also demonstrate the advantages of NLGC in revealing the cortical network-level characterization of neural activity during tone processing and resting state by delineating task- and age-related connectivity changes.
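At its core, the GC question above is whether the past of one signal improves prediction of another beyond that signal's own past. A minimal bivariate sketch with ordinary least squares on a simulated unidirectional x→y coupling (this is the textbook two-model comparison, not the NLGC state-space estimator itself; coefficients and lag order are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):                     # x drives y with a one-sample delay
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()

def gc(source, target):
    """Log variance ratio: does the source's past help predict the target?"""
    tgt, tgt1, src1 = target[1:], target[:-1], source[:-1]
    # Restricted model: target's own past only
    a = np.polyfit(tgt1, tgt, 1)
    var_r = np.var(tgt - np.polyval(a, tgt1))
    # Full model: own past plus the source's past
    X = np.column_stack([tgt1, src1, np.ones(len(tgt))])
    b, *_ = np.linalg.lstsq(X, tgt, rcond=None)
    var_f = np.var(tgt - X @ b)
    return np.log(var_r / var_f)

print(gc(x, y) > gc(y, x))   # the simulated x->y link should dominate
```

Because the full model nests the restricted one, the statistic is non-negative; in the simulation it is clearly larger in the true coupling direction. The two-stage pitfall described in the abstract arises when `x` and `y` are themselves biased source estimates rather than the true signals.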


Subject(s)
Magnetic Resonance Imaging, Magnetoencephalography, Algorithms, Brain/diagnostic imaging, Computer Simulation, Humans, Magnetic Resonance Imaging/methods, Magnetoencephalography/methods
4.
J Acoust Soc Am ; 151(6): 3866, 2022 06.
Article in English | MEDLINE | ID: mdl-35778214

ABSTRACT

Although the behavioral pure-tone threshold audiogram is considered the gold standard for quantifying hearing loss, assessment of speech understanding, especially in noise, is more relevant to quality of life but is only partly related to the audiogram. Metrics of speech understanding in noise are therefore an attractive target for assessing hearing over time. However, speech-in-noise assessments have more potential sources of variability than pure-tone threshold measures, making it a challenge to obtain results reliable enough to detect small changes in performance. This review examines the benefits and limitations of speech-understanding metrics and their application to longitudinal hearing assessment, and identifies potential sources of variability, including learning effects, differences in item difficulty, and between- and within-individual variations in effort and motivation. We conclude by recommending the integration of non-speech auditory tests, which provide information about aspects of auditory health that have reduced variability and fewer central influences than speech tests, in parallel with the traditional audiogram and speech-based assessments.


Subject(s)
Hearing Tests, Quality of Life, Auditory Threshold, Hearing, Noise/adverse effects
5.
J Cogn Neurosci ; 34(1): 127-152, 2021 12 06.
Article in English | MEDLINE | ID: mdl-34673939

ABSTRACT

Difficulty perceiving phonological contrasts in a second language (L2) can impede initial L2 lexical learning. Such is the case for English speakers learning tonal languages, like Mandarin Chinese. Given the hypothesized role of reduced neuroplasticity in adulthood in limiting L2 phonological perception, the current study examined whether transcutaneous auricular vagus nerve stimulation (taVNS), a relatively new neuromodulatory technique, can facilitate L2 lexical learning for English speakers learning Mandarin Chinese over 2 days. Using a double-blind design, one group of participants received 10 min of continuous priming taVNS before lexical training and testing each day, a second group received 500 msec of peristimulus (peristim) taVNS preceding each to-be-learned item in the same tasks, and a third group received passive sham stimulation. Results of the lexical recognition test administered at the end of each day revealed evidence of learning for all groups, but a higher likelihood of accuracy across days for the peristim group and a greater improvement in response time between days for the priming group. Analyses of N400 ERP components elicited during the same tasks indicate that the behavioral advantages for both taVNS groups coincided with stronger lexico-semantic encoding of target words. Comparison of these findings to pupillometry results for the same study, reported in Pandza, N. B., Phillips, I., Karuzis, V. P., O'Rourke, P., and Kuchinsky, S. E. (Neurostimulation and pupillometry: New directions for learning and research in applied linguistics. Annual Review of Applied Linguistics, 40, 56-77, 2020), suggests that the positive effects of priming taVNS (but not peristim taVNS) on lexico-semantic encoding are related to sustained attentional effort.


Subject(s)
Language, Vagus Nerve Stimulation, Adult, Electroencephalography, Evoked Potentials, Female, Humans, Male, Semantics
6.
J Acoust Soc Am ; 150(2): 920, 2021 08.
Article in English | MEDLINE | ID: mdl-34470337

ABSTRACT

One potential benefit of bilateral cochlear implants is reduced listening effort in speech-on-speech masking situations. However, the symmetry of the input across ears, possibly related to spectral resolution, could impact binaural benefits. Fifteen young adults with normal hearing performed digit recall with target and interfering digits presented to separate ears and attention directed to the target ear. Recall accuracy and pupil size over time (used as an index of listening effort) were measured for unprocessed, 16-channel vocoded, and 4-channel vocoded digits. Recall accuracy was significantly lower for dichotic (with interfering digits) than for monotic listening. Dichotic recall accuracy was highest when the target was less degraded and the interferer was more degraded. With matched target and interferer spectral resolution, pupil dilation was lower with more degradation. Pupil dilation grew more shallowly over time when the interferer had more degradation. Overall, interferer spectral resolution more strongly affected listening effort than target spectral resolution. These results suggest that interfering speech both lowers performance and increases listening effort, and that the relative spectral resolution of target and interferer affects the listening experience. Ignoring a clearer interferer is more effortful.
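Task-evoked pupil responses like the ones measured here are typically summarized by subtracting a pre-stimulus baseline and tracking the dilation over the trial. A minimal sketch of that baseline correction on a synthetic trace (the sampling rate, window lengths, and dilation shape are illustrative assumptions):

```python
import numpy as np

fs = 60                                    # eye-tracker sampling rate (Hz), illustrative
t = np.arange(-0.5, 3.0, 1 / fs)           # trial time (s), stimulus onset at t = 0

# Synthetic pupil trace: slow dilation after stimulus onset plus measurement noise
rng = np.random.default_rng(3)
trace = 3.0 + 0.4 * np.clip(t, 0, None) / 3.0 + 0.01 * rng.standard_normal(t.size)

baseline = trace[t < 0].mean()             # mean pupil size before onset
dilation = trace - baseline                # task-evoked change (mm)

mean_dilation = dilation[t >= 0].mean()    # one common per-trial effort index
print(mean_dilation)
```

In practice traces are first cleaned (blink interpolation, smoothing) and averaged over trials; growth-curve or peak measures, like the dilation slope discussed in the abstract, are computed from the same baseline-corrected signal.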


Subject(s)
Cochlear Implantation, Cochlear Implants, Speech Perception, Auditory Perception, Humans, Speech, Young Adult
7.
Neuroimage ; 222: 117291, 2020 11 15.
Article in English | MEDLINE | ID: mdl-32835821

ABSTRACT

Neural processing along the ascending auditory pathway is often associated with a progressive reduction in characteristic processing rates. For instance, the well-known frequency-following response (FFR) of the auditory midbrain, as measured with electroencephalography (EEG), is dominated by frequencies from ∼100 Hz to several hundred Hz, phase-locking to the acoustic stimulus at those frequencies. In contrast, cortical responses, whether measured by EEG or magnetoencephalography (MEG), are typically characterized by frequencies of a few Hz to a few tens of Hz, time-locking to acoustic envelope features. In this study we investigated a crossover case, cortically generated responses time-locked to continuous speech features at FFR-like rates. Using MEG, we analyzed responses in the high gamma range of 70-200 Hz to continuous speech using neural source-localized reverse correlation and the corresponding temporal response functions (TRFs). Continuous speech stimuli were presented to 40 subjects (17 younger, 23 older adults) with clinically normal hearing and their MEG responses were analyzed in the 70-200 Hz band. Consistent with the relative insensitivity of MEG to many subcortical structures, the spatiotemporal profile of these response components indicated a cortical origin with ∼40 ms peak latency and a right hemisphere bias. TRF analysis was performed using two separate aspects of the speech stimuli: a) the 70-200 Hz carrier of the speech, and b) the 70-200 Hz temporal modulations in the spectral envelope of the speech stimulus. The response was dominantly driven by the envelope modulation, with a much weaker contribution from the carrier. Age-related differences were also analyzed to investigate a reversal previously seen along the ascending auditory pathway, whereby older listeners show weaker midbrain FFR responses than younger listeners, but, paradoxically, have stronger cortical low frequency responses. 
In contrast to both these earlier results, this study did not find clear age-related differences in high gamma cortical responses to continuous speech. Cortical responses at FFR-like frequencies shared some properties with midbrain responses at the same frequencies and with cortical responses at much lower frequencies.
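Extracting the "temporal modulations in the spectral envelope" analyzed above amounts to computing an amplitude envelope, for example via the analytic signal. A numpy-only sketch using an FFT-based Hilbert transform on a toy amplitude-modulated carrier (the rates are chosen for clean illustration, not the study's 70-200 Hz band):

```python
import numpy as np

fs = 1000
t = np.arange(0, 2, 1 / fs)
carrier = np.sin(2 * np.pi * 120 * t)              # 120 Hz "carrier"
modulator = 1.0 + 0.5 * np.sin(2 * np.pi * 4 * t)  # 4 Hz envelope modulation
signal = modulator * carrier

def envelope(x):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0      # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0    # keep the Nyquist bin
    return np.abs(np.fft.ifft(X * h))

env = envelope(signal)
# The recovered envelope should track the 4 Hz modulator closely
err = np.max(np.abs(env - modulator))
print(err)
```

For real data, the signal would first be bandpass filtered to the band of interest (e.g. 70-200 Hz) before the envelope is taken; here the carrier and modulator are exactly periodic on the FFT grid, so the recovery is essentially exact.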


Subject(s)
Aging/physiology, Auditory Pathways/physiology, Auditory Perception/physiology, Speech Perception/physiology, Adolescent, Adult, Aged, Auditory Cortex/physiology, Electroencephalography/methods, Evoked Potentials, Auditory/physiology, Female, Humans, Magnetoencephalography/methods, Male, Middle Aged, Speech, Young Adult
8.
Neuroimage ; 191: 116-126, 2019 05 01.
Article in English | MEDLINE | ID: mdl-30731247

ABSTRACT

Human listeners can quickly and easily recognize different sound sources (objects and events) in their environment. Understanding how this impressive ability is accomplished can improve signal processing and machine intelligence applications along with assistive listening technologies. However, it is not clear how the brain represents the many sounds that humans can recognize (such as speech and music) at the level of individual sources, categories and acoustic features. To examine the cortical organization of these representations, we used patterns of fMRI responses to decode 1) four individual speakers and instruments from one another (separately, within each category), 2) the superordinate category labels associated with each stimulus (speech or instrument), and 3) a set of simple synthesized sounds that could be differentiated entirely on their acoustic features. Data were collected using an interleaved silent steady state sequence to increase the temporal signal-to-noise ratio, and mitigate issues with auditory stimulus presentation in fMRI. Largely separable clusters of voxels in the temporal lobes supported the decoding of individual speakers and instruments from other stimuli in the same category. Decoding the superordinate category of each sound was more accurate and involved a larger portion of the temporal lobes. However, these clusters all overlapped with areas that could decode simple, acoustically separable stimuli. Thus, individual sound sources from different sound categories are represented in separate regions of the temporal lobes that are situated within regions implicated in more general acoustic processes. These results bridge an important gap in our understanding of cortical representations of sounds and their acoustics.
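Pattern decoding of the kind used above can be illustrated with a simple nearest-centroid classifier on synthetic "voxel" patterns. This is a toy stand-in for the study's pipeline (the pattern sizes, noise level, and classifier choice are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
n_voxels, n_trials = 50, 40     # illustrative sizes

# Two stimulus categories with distinct mean voxel patterns plus trial noise
centroids = rng.standard_normal((2, n_voxels))
labels = np.repeat([0, 1], n_trials // 2)
patterns = centroids[labels] + 0.5 * rng.standard_normal((n_trials, n_voxels))

# Leave-one-out nearest-centroid decoding
correct = 0
for i in range(n_trials):
    train = np.delete(np.arange(n_trials), i)
    means = np.stack([patterns[train][labels[train] == c].mean(axis=0)
                      for c in (0, 1)])
    pred = np.argmin(np.linalg.norm(means - patterns[i], axis=1))
    correct += pred == labels[i]

accuracy = correct / n_trials
print(accuracy)
```

Above-chance cross-validated accuracy within a cluster of voxels is the evidence that the cluster carries category information; searchlight or ROI variants of this logic underlie the decoding maps described in the abstract.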


Subject(s)
Auditory Perception/physiology, Brain/physiology, Music, Acoustic Stimulation, Adult, Female, Humans, Male, Young Adult
9.
J Exp Child Psychol ; 161: 95-112, 2017 09.
Article in English | MEDLINE | ID: mdl-28505505

ABSTRACT

Stress and fatigue from effortful listening may compromise well-being, learning, and academic achievement in school-aged children. The aim of this study was to investigate the effect of a signal-to-noise ratio (SNR) typical of those in school classrooms on listening effort (behavioral and pupillometric) and listening-related fatigue (self-report and pupillometric) in a group of school-aged children. A sample of 41 normal-hearing children aged 8-11 years performed a narrative speech-picture verification task in a condition with recommended levels of background noise ("ideal": +15 dB SNR) and a condition with typical classroom background noise levels ("typical": -2 dB SNR). Participants showed increased task-evoked pupil dilation in the typical listening condition compared with the ideal listening condition, consistent with an increase in listening effort. No differences were found between listening conditions in terms of performance accuracy and response time on the behavioral task. Similarly, no differences were found between listening conditions in self-report and pupillometric markers of listening-related fatigue. This is the first study to (a) examine listening-related fatigue in children using pupillometry and (b) demonstrate physiological evidence consistent with increased listening effort while listening to spoken narratives despite ceiling-level task performance accuracy. Understanding the physiological mechanisms that underpin listening-related effort and fatigue could inform intervention strategies and ultimately mitigate listening difficulties in children.


Subject(s)
Auditory Perception/physiology, Fatigue/physiopathology, Pupil/physiology, Speech Perception/physiology, Child, Female, Humans, Male, Noise, Reaction Time
10.
J Neurosci ; 35(9): 3929-37, 2015 Mar 04.
Article in English | MEDLINE | ID: mdl-25740521

ABSTRACT

Speech recognition in noise can be challenging for older adults and elicits elevated activity throughout a cingulo-opercular network that is hypothesized to monitor and modify behaviors to optimize performance. A word recognition in noise experiment was used to test the hypothesis that cingulo-opercular engagement provides performance benefit for older adults. Healthy older adults (N = 31; 50-81 years of age; mean pure tone thresholds <32 dB HL from 0.25 to 8 kHz, best ear; species: human) performed word recognition in multitalker babble at 2 signal-to-noise ratios (SNR = +3 or +10 dB) during a sparse sampling fMRI experiment. Elevated cingulo-opercular activity was associated with an increased likelihood of correct recognition on the following trial independently of SNR and performance on the preceding trial. The cingulo-opercular effect increased for participants with the best overall performance. These effects were lower for older adults compared with a younger, normal-hearing adult sample (N = 18). Visual cortex activity also predicted trial-level recognition for the older adults, which resulted from discrete decreases in activity before errors and occurred for the oldest adults with the poorest recognition. Participants demonstrating larger visual cortex effects also had reduced fractional anisotropy in an anterior portion of the left inferior frontal-occipital fasciculus, which projects between frontal and occipital regions where activity predicted word recognition. Together, the results indicate that older adults experience performance benefit from elevated cingulo-opercular activity, but not to the same extent as younger adults, and that declines in attentional control can limit word recognition.
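Presenting words "in multitalker babble" at a fixed SNR, as in this and several studies above, comes down to scaling the masker relative to the target by RMS power. A minimal sketch (random noise stands in for the speech and babble waveforms):

```python
import numpy as np

def mix_at_snr(target, masker, snr_db):
    """Scale the masker so the target-to-masker RMS power ratio equals snr_db."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    gain = rms(target) / (rms(masker) * 10 ** (snr_db / 20))
    return target + gain * masker

rng = np.random.default_rng(5)
speech = rng.standard_normal(16000)   # stand-in for a target word waveform
babble = rng.standard_normal(16000)   # stand-in for multitalker babble

mixed = mix_at_snr(speech, babble, snr_db=3.0)

# Verify the realized SNR against the scaled masker
scaled = mixed - speech
snr = 20 * np.log10(np.sqrt(np.mean(speech**2)) / np.sqrt(np.mean(scaled**2)))
print(snr)
```

The same function reproduces both conditions in these experiments (`snr_db=3.0` vs. `snr_db=10.0`); only the masker gain changes, leaving the target level fixed.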


Subject(s)
Cerebral Cortex/physiology, Noise, Speech Perception/physiology, Aged, Aged, 80 and over, Aging/psychology, Diffusion Tensor Imaging, Female, Humans, Individuality, Magnetic Resonance Imaging, Male, Middle Aged, Recognition, Psychology, Signal-To-Noise Ratio
11.
Exp Aging Res ; 42(1): 67-82, 2016.
Article in English | MEDLINE | ID: mdl-26683042

ABSTRACT

BACKGROUND/STUDY CONTEXT: Adaptive control, reflected by elevated activity in cingulo-opercular brain regions, optimizes performance in challenging tasks by monitoring outcomes and adjusting behavior. For example, cingulo-opercular function benefits trial-level word recognition in noise for normal-hearing adults. Because auditory system deficits may limit the communicative benefit from adaptive control, we examined the extent to which cingulo-opercular engagement supports word recognition in noise for older adults with hearing loss (HL). METHODS: Participants were selected to form groups with Less HL (n = 12; mean pure tone threshold, pure tone average [PTA] = 19.2 ± 4.8 dB HL [hearing level]) and More HL (n = 12; PTA = 38.4 ± 4.5 dB HL, 0.25-8 kHz, both ears). A word recognition task was performed with words presented in multitalker babble at +3 or +10 dB signal-to-noise ratios (SNRs) during a sparse acquisition fMRI experiment. The participants were middle-aged and older (ages: 64.1 ± 8.4 years) English speakers with no history of neurological or psychiatric diagnoses. RESULTS: Elevated cingulo-opercular activity occurred with increased likelihood of correct word recognition on the next trial (t(23) = 3.28, p = .003), and this association did not differ between hearing loss groups. During trials with word recognition errors, the More HL group exhibited higher blood oxygen level-dependent (BOLD) contrast in occipital and parietal regions compared with the Less HL group. Across listeners, more pronounced cingulo-opercular activity during recognition errors was associated with better overall word recognition performance. CONCLUSION: The trial-level word recognition benefit from cingulo-opercular activity was equivalent for both hearing loss groups. When speech audibility and performance levels are similar for older adults with mild to moderate hearing loss, cingulo-opercular adaptive control contributes to word recognition in noise.


Subject(s)
Cerebral Cortex/physiopathology, Hearing Loss/physiopathology, Noise, Speech Perception, Aged, Female, Humans, Male, Middle Aged, Severity of Illness Index
12.
Exp Aging Res ; 42(1): 50-66, 2016.
Article in English | MEDLINE | ID: mdl-26683041

ABSTRACT

BACKGROUND/STUDY CONTEXT: Vigilance refers to the ability to sustain and adapt attentional focus in response to changing task demands. For older adults with hearing loss, vigilant listening may be particularly effortful and variable across individuals. This study examined the extent to which neural responses to sudden, unexpected changes in task structure (e.g., from rest to word recognition epochs) were related to pupillometry measures of listening effort. METHODS: Individual differences in the task-evoked pupil response during word recognition were used to predict functional magnetic resonance imaging (fMRI) estimates of neural responses to salient transitions between quiet rest, noisy rest, and word recognition in unintelligible, fluctuating background noise. Participants included 29 older adults (M = 70.2 years old) with hearing loss (pure tone average across all frequencies = 36.1 dB HL [hearing level], SD = 6.7). RESULTS: Individuals with a greater average pupil response exhibited a more vigilant pattern of responding on a standardized continuous performance test (response time variability across varying interstimulus intervals; r(27) = .38, p = .04). Across participants there was widespread engagement of attention- and sensory-related cortices in response to transitions between blocks of rest and word recognition conditions. Individuals who exhibited larger task-evoked pupil dilation also showed even greater activity in the right primary auditory cortex in response to changes in task structure. CONCLUSION: Pupillometric estimates of word recognition effort predicted variation in activity within cortical regions that were responsive to salient changes in the environment for older adults with hearing loss. The results of the current study suggest that vigilant attention is increased amongst older adults who exert greater listening effort.


Subject(s)
Hearing Loss/physiopathology, Noise, Speech Perception, Aged, Aged, 80 and over, Female, Humans, Male, Middle Aged, Task Performance and Analysis
13.
J Neurosci ; 33(48): 18979-86, 2013 Nov 27.
Article in English | MEDLINE | ID: mdl-24285902

ABSTRACT

Recognizing speech in difficult listening conditions requires considerable focus of attention that is often demonstrated by elevated activity in putative attention systems, including the cingulo-opercular network. We tested the prediction that elevated cingulo-opercular activity provides word-recognition benefit on a subsequent trial. Eighteen healthy, normal-hearing adults (10 females; aged 20-38 years) performed word recognition (120 trials) in multi-talker babble at +3 and +10 dB signal-to-noise ratios during a sparse sampling functional magnetic resonance imaging (fMRI) experiment. Blood oxygen level-dependent (BOLD) contrast was elevated in the anterior cingulate cortex, anterior insula, and frontal operculum in response to poorer speech intelligibility and response errors. These brain regions exhibited significantly greater correlated activity during word recognition compared with rest, supporting the premise that word-recognition demands increased the coherence of cingulo-opercular network activity. Consistent with an adaptive control network explanation, general linear mixed model analyses demonstrated that increased magnitude and extent of cingulo-opercular network activity was significantly associated with correct word recognition on subsequent trials. These results indicate that elevated cingulo-opercular network activity is not simply a reflection of poor performance or error but also supports word recognition in difficult listening conditions.


Subject(s)
Nerve Net/physiology, Recognition, Psychology/physiology, Speech Perception/physiology, Adult, Brain Mapping, Female, Frontal Lobe/physiology, Gyrus Cinguli/physiology, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Neural Pathways/physiology, Oxygen/blood, Psychomotor Performance/physiology, Signal-To-Noise Ratio, Young Adult
14.
Trends Hear ; 28: 23312165241292215, 2024.
Article in English | MEDLINE | ID: mdl-39474748

ABSTRACT

People regularly communicate in complex environments, requiring them to flexibly shift their attention across multiple sources of sensory information. Increasing recruitment of the executive functions that support successful speech comprehension in these multitasking settings is thought to contribute to the sense of effort that listeners often experience. One common research method employed to quantify listening effort is the dual-task paradigm in which individuals recognize speech and concurrently perform a secondary (often visual) task. Effort is operationalized as performance decrements on the secondary task as speech processing demands increase. However, recent reviews have noted critical inconsistencies in the results of dual-task experiments, likely in part due to how and when the two tasks place demands on a common set of mental resources and how flexibly individuals can allocate their attention to them. We propose that in order to move forward to address this gap, we need to first look backward: better integrating theoretical models of resource capacity and allocation as well as of task-switching that have been historically developed in domains outside of hearing research (viz., cognitive psychology and neuroscience). With this context in mind, we describe how dual-task experiments could be designed and interpreted such that they provide better and more robust insights into the mechanisms that contribute to effortful listening.
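In the dual-task paradigm described above, effort is operationalized as the decrement in secondary-task performance from single-task to dual-task blocks. One common summary is the proportional dual-task cost; a minimal sketch with made-up reaction times (the values and the RT-based cost measure are illustrative assumptions):

```python
import numpy as np

def dual_task_cost(single, dual):
    """Proportional secondary-task cost: positive = worse under dual-task load."""
    return (np.mean(dual) - np.mean(single)) / np.mean(single)

# Hypothetical secondary-task reaction times (ms) for one listener
single_task_rt = np.array([420.0, 450.0, 430.0, 440.0])
dual_task_rt = np.array([520.0, 500.0, 540.0, 560.0])   # slower while recognizing speech

cost = dual_task_cost(single_task_rt, dual_task_rt)
print(cost)
```

The inconsistencies the review highlights arise when this cost is compared across studies whose tasks draw on different resources or permit different attention allocation policies, so the same numeric cost need not index the same underlying effort.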


Subject(s)
Attention, Speech Perception, Humans, Speech Perception/physiology, Executive Function/physiology, Comprehension, Acoustic Stimulation, Multitasking Behavior, Task Performance and Analysis, Visual Perception
15.
IEEE Open J Eng Med Biol ; 5: 621-626, 2024.
Article in English | MEDLINE | ID: mdl-39184968

ABSTRACT

Goal: This paper introduces an automated post-traumatic stress disorder (PTSD) screening tool that could potentially be used as a self-assessment or inserted into routine medical visits to aid in PTSD diagnosis and treatment. Methods: With an emotion estimation algorithm providing arousal (excited to calm) and valence (pleasure to displeasure) levels through discourse, we select the regions of the acoustic signal that are most salient for PTSD detection. Our algorithm was tested on a subset of data from the DVBIC-TBICoE TBI Study, which contains PTSD Checklist-Civilian (PCL-C) assessment scores. Results: Speech from low-arousal and positive-valence regions provides the highest discrimination for PTSD. Our model achieved an AUC (area under the curve) of 0.80 in detecting PCL-C ratings, outperforming models with no emotion filtering (AUC = 0.68). Conclusions: This result suggests that emotion drives the selection of the most salient temporal regions of an audio recording for PTSD detection.
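The AUC reported above has a simple rank interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal numpy sketch of that computation (the scores and labels are toy values, not the study's data):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # Count positive-negative pairs ranked correctly (ties count half)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy detector scores: higher should indicate PTSD-positive
scores = [0.1, 0.4, 0.35, 0.8]
labels = [0, 0, 1, 1]
print(auc(scores, labels))
```

An AUC of 0.5 corresponds to chance ranking and 1.0 to perfect separation, which is why the jump from 0.68 (no emotion filtering) to 0.80 (emotion-filtered regions) is the paper's key comparison.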

16.
Cereb Cortex ; 22(6): 1360-71, 2012 Jun.
Article in English | MEDLINE | ID: mdl-21862447

ABSTRACT

The distractibility that older adults experience when listening to speech in challenging conditions has been attributed in part to reduced inhibition of irrelevant information within and across sensory systems. Whereas neuroimaging studies have shown that younger adults readily suppress visual cortex activation when listening to auditory stimuli, the extent to which declining inhibition in older adults results in reduced suppression or in compensatory engagement of other sensory cortices is unclear. The current functional magnetic resonance imaging study examined the effects of age and stimulus intelligibility in a word listening task. Across all participants, auditory cortex was engaged when listening to words. However, increasing age and declining word intelligibility had independent and spatially similar effects: both were associated with increasing engagement of visual cortex. Visual cortex activation was not explained by age-related differences in vascular reactivity; rather, auditory and visual cortices were functionally connected across word listening conditions. The nature of this correlation changed with age: younger adults deactivated visual cortex when activating auditory cortex, middle-aged adults showed no relation, and older adults synchronously activated both cortices. These results suggest that age and stimulus integrity are additive modulators of crossmodal suppression and activation.


Subject(s)
Acoustic Stimulation/methods, Comprehension/physiology, Language Tests, Speech Perception/physiology, Visual Cortex/physiology, Adult, Age Factors, Aged, Auditory Cortex/physiology, Female, Forecasting, Humans, Male, Middle Aged, Young Adult
17.
J Speech Lang Hear Res ; 66(10): 4066-4082, 2023 10 04.
Article in English | MEDLINE | ID: mdl-37672797

ABSTRACT

PURPOSE: This study examined the extent to which acoustic, linguistic, and cognitive task demands interactively impact listening effort. METHOD: Using a dual-task paradigm, on each trial, participants were instructed to perform either a single task or two tasks. In the primary word recognition task, participants repeated Northwestern University Auditory Test No. 6 words presented in speech-shaped noise at either an easier or a harder signal-to-noise ratio (SNR). The words varied in how commonly they occur in the English language (lexical frequency). In the secondary visual task, participants were instructed to press a specific key as soon as a number appeared on screen (simpler task) or one of two keys to indicate whether the visualized number was even or odd (more complex task). RESULTS: Manipulation checks revealed that key assumptions of the dual-task design were met. A significant three-way interaction was observed, such that the expected effect of SNR on effort was only observable for words with lower lexical frequency and only when multitasking demands were relatively simpler. CONCLUSIONS: This work reveals that variability across speech stimuli can influence the sensitivity of the dual-task paradigm for detecting changes in listening effort. In line with previous work, the results of this study also suggest that higher cognitive demands may limit the ability to detect expected effects of SNR on measures of effort. With implications for real-world listening, these findings highlight that even relatively minor changes in lexical and multitasking demands can alter the effort devoted to listening in noise.


Subject(s)
Speech Perception; Humans; Listening Effort; Noise; Hearing Tests; Acoustics
18.
Am J Audiol ; 32(3S): 694-705, 2023 Nov.
Article in English | MEDLINE | ID: mdl-36796026

ABSTRACT

PURPOSE: The objectives of this study were to (a) describe normative ranges-expressed as reference intervals (RIs)-for vestibular and balance function tests in a cohort of Service Members and Veterans (SMVs) and (b) describe the interrater reliability of these tests. METHOD: As part of the Defense and Veterans Brain Injury Center (DVBIC)/Traumatic Brain Injury Center of Excellence 15-year Longitudinal Traumatic Brain Injury (TBI) Study, participants completed the following: vestibulo-ocular reflex suppression, visual-vestibular enhancement, subjective visual vertical, subjective visual horizontal, sinusoidal harmonic acceleration, the computerized rotational head impulse test (crHIT), and the sensory organization test. RIs were calculated using nonparametric methods and interrater reliability was assessed using intraclass correlation coefficients between three audiologists who independently reviewed and cleaned the data. RESULTS: Reference populations for each outcome measure comprised 40 to 72 individuals, 19 to 61 years of age, who served either as noninjured controls (NIC) or injured controls (IC) in the 15-year study; none had a history of TBI or blast exposure. A subset of 15 SMVs from the NIC, IC, and TBI groups were included in the interrater reliability calculations. RIs are reported for 27 outcome measures from the seven rotational vestibular and balance tests. Interrater reliability was considered excellent for all tests except the crHIT, which was found to have good interrater reliability. CONCLUSION: This study provides clinicians and scientists with important information regarding normative ranges and interrater reliability for rotational vestibular and balance tests in SMVs.
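The two statistics named in this abstract — a nonparametric reference interval and an intraclass correlation coefficient — can be sketched as follows. This is a generic illustration under assumed conventions (a central-95% interval and the two-way random-effects ICC(2,1) form), not the study's actual analysis code.

```python
import numpy as np

def nonparametric_ri(values, lower_pct=2.5, upper_pct=97.5):
    """Nonparametric reference interval: the central 95% of the
    reference sample, taken directly from sample percentiles."""
    v = np.asarray(values, dtype=float)
    return (np.percentile(v, lower_pct), np.percentile(v, upper_pct))

def icc_2_1(ratings):
    """ICC(2,1), absolute agreement, from an n-subjects x k-raters matrix,
    via a two-way ANOVA decomposition."""
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    grand = r.mean()
    ss_rows = k * ((r.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((r.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_err = ((r - grand) ** 2).sum() - ss_rows - ss_cols # residual
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

With three raters, as in the study, `ratings` would be an n x 3 matrix of each audiologist's cleaned values; an ICC near 1 corresponds to the "excellent" reliability reported.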


Subject(s)
Brain Injuries, Traumatic; Brain Injuries; Veterans; Humans; Reproducibility of Results; Reflex, Vestibulo-Ocular; Brain Injuries, Traumatic/diagnosis
19.
Neuroimage ; 60(3): 1843-55, 2012 Apr 15.
Article in English | MEDLINE | ID: mdl-22500925

ABSTRACT

Whole brain fMRI analyses rarely include the entire brain because of missing data that result from data acquisition limits and susceptibility artifact, in particular. This missing data problem is typically addressed by omitting voxels from analysis, which may exclude brain regions that are of theoretical interest and increase the potential for Type II error at cortical boundaries or Type I error when spatial thresholds are used to establish significance. Imputation could significantly expand statistical map coverage, increase power, and enhance interpretations of fMRI results. We examined multiple imputation for group level analyses of missing fMRI data using methods that leverage the spatial information in fMRI datasets for both real and simulated data. Available case analysis, neighbor replacement, and regression-based imputation approaches were compared in a general linear model framework to determine the extent to which these methods quantitatively (effect size) and qualitatively (spatial coverage) increased the sensitivity of group analyses. In both real and simulated data analysis, multiple imputation provided 1) variance that was most similar to estimates for voxels with no missing data, 2) fewer false positive errors in comparison to mean replacement, and 3) fewer false negative errors in comparison to available case analysis. Compared to the standard analysis approach of omitting voxels with missing data, imputation methods increased brain coverage in this study by 35% (from 33,323 to 45,071 voxels). In addition, multiple imputation increased the size of significant clusters by 58% and the number of significant clusters across statistical thresholds, compared to the standard voxel omission approach. While neighbor replacement produced similar results, we recommend multiple imputation because it uses an informed sampling distribution to deal with missing data across subjects that can include neighbor values and other predictors. Multiple imputation is anticipated to be particularly useful for 1) large fMRI data sets with inconsistent missing voxels across subjects and 2) addressing the problem of increased artifact at ultra-high field, which significantly limits the extent of whole brain coverage and interpretations of results.
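The neighbor-replacement baseline and the multiple-imputation idea compared in this abstract can be sketched on a toy volume. This is a deliberately simplified illustration: the single-pass 6-connected neighbor mean and the Gaussian perturbation for generating multiple completions are assumptions for demonstration, not the study's actual imputation model (which draws from an informed sampling distribution with neighbor values and other predictors).

```python
import numpy as np

def neighbor_impute(volume):
    """Fill NaN voxels with the mean of their available axis-adjacent
    neighbors (single-pass sketch of the neighbor-replacement baseline)."""
    filled = volume.copy()
    for idx in np.argwhere(np.isnan(volume)):
        neighbors = []
        for axis in range(volume.ndim):
            for step in (-1, 1):
                j = idx.copy()
                j[axis] += step
                if 0 <= j[axis] < volume.shape[axis]:
                    val = volume[tuple(j)]
                    if not np.isnan(val):
                        neighbors.append(val)
        if neighbors:  # leave the voxel NaN if no valid neighbor exists
            filled[tuple(idx)] = np.mean(neighbors)
    return filled

def multiple_impute(volume, m=5, noise_sd=0.1, seed=0):
    """Generate m stochastic completions by perturbing the neighbor fill,
    so downstream group analyses can pool estimates across imputations."""
    rng = np.random.default_rng(seed)
    base = neighbor_impute(volume)
    mask = np.isnan(volume)
    draws = []
    for _ in range(m):
        d = base.copy()
        d[mask] += rng.normal(0.0, noise_sd, size=int(mask.sum()))
        draws.append(d)
    return draws
```

In a real multiple-imputation analysis, the group-level GLM would be fit to each of the `m` completed datasets and the results combined (e.g., with Rubin's rules), which is what yields the variance estimates the abstract reports as closest to those of fully observed voxels.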


Subject(s)
Artifacts; Brain/physiology; Functional Neuroimaging/methods; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Pattern Recognition, Physiological/physiology; Adult; Aged; Algorithms; Humans; Male; Middle Aged; Reproducibility of Results; Sample Size; Sensitivity and Specificity; Young Adult
20.
Lang Cogn Neurosci ; 36(2): 211-239, 2021.
Article in English | MEDLINE | ID: mdl-39035844

ABSTRACT

Incremental language processing means that listeners confront temporary ambiguity about how to structure the input, which can generate misinterpretations. In four "visual-world" experiments, we tested whether engaging cognitive control - which detects and resolves conflict - assists revision during comprehension. We recorded listeners' eye-movements and actions while following instructions that were ripe for misanalysis. In Experiments 1 and 3, sentences followed trials from a nonverbal conflict task that manipulated cognitive-control engagement, to test its impact on the ability to revise. To isolate conflict-driven effects of cognitive-control on comprehension, we manipulated attention in a non-conflict task in Experiments 2 and 4. We observed fewer comprehension errors, and earlier revision, when cognitive control (more than attention) was elicited on an immediately preceding trial. These results extend previous correlations between cognitive control and language processing by revealing the influence of domain-general cognitive-control engagement on the temporal unfolding of error-revision processes during language comprehension.
