Results 1 - 20 of 26
1.
J Neurophysiol; 129(6): 1359-1377, 2023 Jun 01.
Article in English | MEDLINE | ID: mdl-37096924

ABSTRACT

Understanding speech in a noisy environment is crucial in day-to-day interactions, yet it becomes more challenging with age, even in healthy aging. Age-related changes in the neural mechanisms that enable speech-in-noise listening have been investigated previously; however, the extent to which age affects the timing and fidelity of encoding of target and interfering speech streams is not well understood. Using magnetoencephalography (MEG), we investigated how continuous speech is represented in auditory cortex in the presence of interfering speech in younger and older adults. Cortical representations were obtained from neural responses time-locked to the speech envelopes, using speech envelope reconstruction and temporal response functions (TRFs). TRFs showed three prominent peaks corresponding to auditory cortical processing stages: early (∼50 ms), middle (∼100 ms), and late (∼200 ms). Older adults showed exaggerated speech envelope representations compared with younger adults. Temporal analysis revealed both that the age-related exaggeration starts as early as ∼50 ms and that older adults needed a substantially longer integration time window to achieve their better reconstruction of the speech envelope. As expected, with increased speech masking, envelope reconstruction for the attended talker decreased and all three TRF peaks were delayed, with aging contributing additionally to the reduction. Interestingly, for older adults the late peak was delayed, suggesting that this late peak may receive contributions from multiple sources. Together these results suggest that several mechanisms are at play, compensating for age-related temporal processing deficits at several stages, but that they are not able to fully reestablish unimpaired speech perception.
NEW & NOTEWORTHY We observed age-related changes in cortical temporal processing of continuous speech that may be related to older adults' difficulty in understanding speech in noise.
These changes occur in both timing and strength of the speech representations at different cortical processing stages and depend on both noise condition and selective attention. Critically, their dependence on noise condition changes dramatically among the early, middle, and late cortical processing stages, underscoring how aging differentially affects these stages.
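The TRF analysis summarized above models the cortical response as a linear convolution of the speech envelope with an unknown kernel. As a rough illustrative sketch (not the authors' pipeline; the function name, lag count, and ridge regularizer are assumptions made here), such a kernel can be estimated from a lagged design matrix by regularized least squares:

```python
import numpy as np

def estimate_trf(envelope, response, n_lags, reg=1e-2):
    """Estimate a temporal response function by ridge-regularized least
    squares, treating the response as a convolution of envelope and TRF."""
    n = len(envelope)
    # Lagged design matrix: column k holds the envelope delayed by k samples.
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = envelope[:n - k]
    # Ridge solution: (X'X + reg*I)^-1 X'y
    return np.linalg.solve(X.T @ X + reg * np.eye(n_lags), X.T @ response)

# Simulated sanity check: a known kernel should be recovered.
rng = np.random.default_rng(0)
env = rng.standard_normal(2000)
true_trf = np.array([0.0, 1.0, 0.5, -0.3, 0.0])
resp = np.convolve(env, true_trf)[:2000] + 0.01 * rng.standard_normal(2000)
est = estimate_trf(env, resp, n_lags=5)
```

With enough data and low noise, `est` closely matches `true_trf`; the latency of each TRF peak then corresponds to a lag index times the sampling period.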


Subject(s)
Speech Perception, Speech, Speech/physiology, Auditory Perception, Noise, Speech Perception/physiology, Acoustic Stimulation/methods
2.
PLoS Comput Biol; 16(8): e1008172, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32813712

ABSTRACT

Estimating the latent dynamics underlying biological processes is a central problem in computational biology. State-space models with Gaussian statistics are widely used for estimation of such latent dynamics and have been successfully utilized in the analysis of biological data. Gaussian statistics, however, fail to capture several key features of the dynamics of biological processes (e.g., brain dynamics), such as abrupt state changes and exogenous processes that affect the states in a structured fashion. Although Gaussian mixture process noise models have been considered as an alternative to capture such effects, data-driven inference of their parameters is not well established in the literature. The objective of this paper is to develop efficient algorithms for inferring the parameters of a general class of Gaussian mixture process noise models from noisy and limited observations, and to utilize them in extracting the neural dynamics that underlie auditory processing from magnetoencephalography (MEG) data in a cocktail party setting. We develop an algorithm based on Expectation-Maximization to estimate the process noise parameters from state-space observations. We apply our algorithm to simulated and experimentally recorded MEG data from auditory experiments in the cocktail party paradigm to estimate the underlying dynamic temporal response functions (TRFs). Our simulation results show that the richer representation of the process noise as a Gaussian mixture significantly improves state estimation and better captures the heterogeneity of the TRF dynamics. Application to MEG data reveals improvements over existing TRF estimation techniques, and provides a reliable alternative to current approaches for probing neural dynamics in a cocktail party scenario, as well as for attention decoding in emerging applications such as smart hearing aids.
Our proposed methodology provides a framework for efficient inference of Gaussian mixture process noise models, with application to a wide range of biological data with underlying heterogeneous and latent dynamics.
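The core computational step here is Expectation-Maximization for Gaussian mixture parameters. The paper embeds this inside a state-space model; as a deliberately simplified sketch (direct observation of 1-D noise samples, two components, all names invented here), the EM updates look like:

```python
import numpy as np

def fit_gmm_em(x, n_iter=100):
    """Fit a two-component 1-D Gaussian mixture to samples x with EM."""
    # Initialize components from the data's overall spread.
    mu = np.array([x.mean() - x.std(), x.mean() + x.std()])
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sample.
        dens = w / np.sqrt(2 * np.pi * var) * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(1)
# Synthetic bimodal "noise": two regimes with distinct means.
x = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(2.0, 0.5, 500)])
w, mu, var = fit_gmm_em(x)
```

The actual algorithm additionally handles the fact that the noise drives latent states that are only observed through a noisy measurement model, which is what makes the inference problem hard.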


Subject(s)
Auditory Pathways/physiology, Algorithms, Humans, Magnetoencephalography/methods, Models, Neurological
3.
Neuroimage; 222: 117291, 2020 Nov 15.
Article in English | MEDLINE | ID: mdl-32835821

ABSTRACT

Neural processing along the ascending auditory pathway is often associated with a progressive reduction in characteristic processing rates. For instance, the well-known frequency-following response (FFR) of the auditory midbrain, as measured with electroencephalography (EEG), is dominated by frequencies from ∼100 Hz to several hundred Hz, phase-locking to the acoustic stimulus at those frequencies. In contrast, cortical responses, whether measured by EEG or magnetoencephalography (MEG), are typically characterized by frequencies of a few Hz to a few tens of Hz, time-locking to acoustic envelope features. In this study we investigated a crossover case: cortically generated responses time-locked to continuous speech features at FFR-like rates. Using MEG, we analyzed responses to continuous speech in the high gamma range of 70-200 Hz, employing neural source-localized reverse correlation and the corresponding temporal response functions (TRFs). Continuous speech stimuli were presented to 40 subjects (17 younger, 23 older adults) with clinically normal hearing, and their MEG responses were analyzed in the 70-200 Hz band. Consistent with the relative insensitivity of MEG to many subcortical structures, the spatiotemporal profile of these response components indicated a cortical origin, with ∼40 ms peak latency and a right-hemisphere bias. TRF analysis was performed using two separate aspects of the speech stimuli: a) the 70-200 Hz carrier of the speech, and b) the 70-200 Hz temporal modulations in the spectral envelope of the speech stimulus. The response was dominantly driven by the envelope modulation, with a much weaker contribution from the carrier. Age-related differences were also analyzed to investigate a reversal previously seen along the ascending auditory pathway, whereby older listeners show weaker midbrain FFR responses than younger listeners but, paradoxically, have stronger cortical low-frequency responses.
In contrast to both these earlier results, this study did not find clear age-related differences in high gamma cortical responses to continuous speech. Cortical responses at FFR-like frequencies shared some properties with midbrain responses at the same frequencies and with cortical responses at much lower frequencies.


Subject(s)
Aging/physiology, Auditory Pathways/physiology, Auditory Perception/physiology, Speech Perception/physiology, Adolescent, Adult, Aged, Auditory Cortex/physiology, Electroencephalography/methods, Evoked Potentials, Auditory/physiology, Female, Humans, Magnetoencephalography/methods, Male, Middle Aged, Speech, Young Adult
4.
J Neurophysiol; 124(4): 1152-1164, 2020 Oct 01.
Article in English | MEDLINE | ID: mdl-32877288

ABSTRACT

Aging is associated with an exaggerated representation of the speech envelope in auditory cortex. The relationship between this age-related exaggerated response and a listener's ability to understand speech in noise remains an open question. Here, information-theory-based analysis methods are applied to magnetoencephalography recordings of human listeners, investigating their cortical responses to continuous speech with a novel nonlinear measure: the phase-locked mutual information between the speech stimuli and cortical responses. The cortex of older listeners shows an exaggerated level of mutual information, compared with younger listeners, for both attended and unattended speakers. The mutual information peaks at several distinct latencies: early (∼50 ms), middle (∼100 ms), and late (∼200 ms). For the late component, the neural enhancement of attended over unattended speech is affected by stimulus signal-to-noise ratio, but the direction of this dependency is reversed by aging. Critically, in older listeners and for the same late component, greater cortical exaggeration is correlated with decreased behavioral inhibitory control. This negative correlation also carries over to speech intelligibility in noise, where greater cortical exaggeration in older listeners is correlated with worse speech intelligibility scores. Finally, an age-related lateralization difference is seen for the ∼100 ms latency peaks, where older listeners show a bilateral response compared with younger listeners' right lateralization. Thus, this information-theory-based analysis provides new, and less coarse-grained, results regarding age-related change in auditory cortical speech processing, and its correlation with cognitive measures, compared with related linear measures.
NEW & NOTEWORTHY Cortical representations of natural speech are investigated using a novel nonlinear approach based on mutual information.
Cortical responses, phase-locked to the speech envelope, show an exaggerated level of mutual information associated with aging, appearing at several distinct latencies (∼50, ∼100, and ∼200 ms). Critically, for older listeners only, the ∼200 ms latency response components are correlated with specific behavioral measures, including behavioral inhibition and speech comprehension.
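Mutual information between a stimulus feature and a neural response can be estimated with a simple plug-in (histogram) estimator; the study's phase-locked measure is more elaborate, so this is only an illustrative sketch with invented names and parameters:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in estimate of mutual information I(X;Y) in bits,
    computed from a joint histogram of the two signals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(2)
stim = rng.standard_normal(50000)
coupled = stim + 0.5 * rng.standard_normal(50000)   # response carrying stimulus information
independent = rng.standard_normal(50000)            # response unrelated to the stimulus
mi_coupled = mutual_information(stim, coupled)
mi_indep = mutual_information(stim, independent)
```

Unlike a linear correlation, this estimator is sensitive to any statistical dependence between the signals, which is what motivates mutual information as a "less coarse-grained" alternative to linear measures.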


Subject(s)
Aging/physiology, Speech Perception, Adolescent, Adult, Aged, Evoked Potentials, Auditory, Female, Humans, Information Theory, Magnetoencephalography, Male, Middle Aged, Reaction Time, Sensorimotor Cortex/growth & development, Sensorimotor Cortex/physiology
5.
J Neurophysiol; 122(6): 2372-2387, 2019 Dec 01.
Article in English | MEDLINE | ID: mdl-31596649

ABSTRACT

Younger adults with normal hearing can typically understand speech in the presence of a competing speaker without much effort, but this ability to understand speech in challenging conditions deteriorates with age. Older adults, even with clinically normal hearing, often have problems understanding speech in noise. Earlier auditory studies using the frequency-following response (FFR), believed to be generated primarily in the midbrain, demonstrated age-related neural deficits when analyzed with traditional measures. Here we use a mutual information paradigm to analyze the FFR to speech (masked by a competing speech signal) by estimating the amount of stimulus information contained in the FFR. Our results show, first, a broadband informational loss associated with aging for both FFR amplitude and phase. Second, this age-related loss of information is more severe in higher-frequency FFR bands (several hundred hertz). Third, the mutual information between the FFR and the stimulus decreases as noise level increases for both age groups. Fourth, older adults benefit neurally, i.e., show a reduction in loss of information, when the speech masker is changed from meaningful (a talker speaking a language that they can comprehend, such as English) to meaningless (a talker speaking a language that they cannot comprehend, such as Dutch). This benefit is not seen in younger listeners, which suggests that age-related informational loss may be more severe when the speech masker is meaningful than when it is meaningless. In summary, as a method, mutual information analysis can unveil new results that traditional measures may not have enough statistical power to assess.
NEW & NOTEWORTHY Older adults, even with clinically normal hearing, often have problems understanding speech in noise. Auditory studies using the frequency-following response (FFR) have demonstrated age-related neural deficits with traditional methods.
Here we use a mutual information paradigm to analyze the FFR to speech masked by competing speech. Results confirm those from traditional analysis but additionally show that older adults benefit neurally when the masker changes from a language that they comprehend to a language they cannot.


Subject(s)
Aging/physiology, Cerebral Cortex/physiology, Electroencephalography, Entropy, Information Theory, Mesencephalon/physiology, Perceptual Masking/physiology, Speech Perception/physiology, Adolescent, Adult, Aged, Female, Humans, Male, Middle Aged, Noise, Young Adult
6.
Neuroimage; 172: 162-174, 2018 May 15.
Article in English | MEDLINE | ID: mdl-29366698

ABSTRACT

Human experience often involves continuous sensory information that unfolds over time. This is true in particular for speech comprehension, where continuous acoustic signals are processed over seconds or even minutes. We show that brain responses to such continuous stimuli can be investigated in detail, for magnetoencephalography (MEG) data, by combining linear kernel estimation with minimum norm source localization. Previous research has shown that the requirement to average data over many trials can be overcome by modeling the brain response as a linear convolution of the stimulus and a kernel, or response function, and estimating a kernel that predicts the response from the stimulus. However, such analysis has been typically restricted to sensor space. Here we demonstrate that this analysis can also be performed in neural source space. We first computed distributed minimum norm current source estimates for continuous MEG recordings, and then computed response functions for the current estimate at each source element, using the boosting algorithm with cross-validation. Permutation tests can then assess the significance of individual predictor variables, as well as features of the corresponding spatio-temporal response functions. We demonstrate the viability of this technique by computing spatio-temporal response functions for speech stimuli, using predictor variables reflecting acoustic, lexical and semantic processing. 
Results indicate that processes related to comprehension of continuous speech can be differentiated anatomically as well as temporally: acoustic information engaged auditory cortex at short latencies, followed by responses over the central sulcus and inferior frontal gyrus, possibly related to somatosensory/motor cortex involvement in speech perception; lexical frequency was associated with a left-lateralized response in auditory cortex and subsequent bilateral frontal activity; and semantic composition was associated with bilateral temporal and frontal brain activity. We conclude that this technique can be used to study the neural processing of continuous stimuli in time and anatomical space with the millisecond temporal resolution of MEG. This suggests new avenues for analyzing neural processing of naturalistic stimuli, without the necessity of averaging over artificially short or truncated stimuli.


Subject(s)
Brain/physiology, Comprehension/physiology, Magnetoencephalography/methods, Signal Processing, Computer-Assisted, Speech Perception/physiology, Acoustics, Adolescent, Adult, Brain Mapping/methods, Female, Humans, Male, Young Adult
7.
Ear Hear; 39(4): 810-824, 2018.
Article in English | MEDLINE | ID: mdl-29287038

ABSTRACT

OBJECTIVE: Older adults often have trouble adjusting to hearing aids when they start wearing them for the first time. Probe microphone measurements verify appropriate levels of amplification up to the tympanic membrane. Little is known, however, about the effects of amplification on auditory-evoked responses to speech stimuli during initial hearing aid use. The present study assesses the effects of amplification on neural encoding of a speech signal in older adults using hearing aids for the first time. It was hypothesized that amplification results in improved stimulus encoding (higher amplitudes, improved phase locking, and earlier latencies), with greater effects for the regions of the signal that are less audible. DESIGN: Thirty-seven adults, aged 60 to 85 years with mild to severe sensorineural hearing loss and no prior hearing aid use, were bilaterally fit with Widex Dream 440 receiver-in-the-ear hearing aids. Probe microphone measures were used to adjust the gain of the hearing aids and verify the fitting. Unaided and aided frequency-following responses and cortical auditory-evoked potentials to the stimulus /ga/ were recorded in sound field over the course of 2 days for three conditions: 65 dB SPL and 80 dB SPL in quiet, and 80 dB SPL in six-talker babble (+10 signal to noise ratio). RESULTS: Responses from midbrain were analyzed in the time regions corresponding to the consonant transition (18 to 68 ms) and the steady state vowel (68 to 170 ms). Generally, amplification increased phase locking and amplitude and decreased latency for the region and presentation conditions that had lower stimulus amplitudes-the transition region and 65 dB SPL level. Responses from cortex showed decreased latency for P1, but an unexpected decrease in N1 amplitude. Previous studies have demonstrated an exaggerated cortical representation of speech in older adults compared to younger adults, possibly because of an increase in neural resources necessary to encode the signal. 
Therefore, a decrease in N1 amplitude with amplification and with increased presentation level may suggest that amplification decreases the neural resources necessary for cortical encoding. CONCLUSION: Increased phase locking and amplitude and decreased latency in midbrain suggest that amplification may improve neural representation of the speech signal in new hearing aid users. The improvement with amplification was also found in cortex, and, in particular, decreased P1 latencies and lower N1 amplitudes may indicate greater neural efficiency. Further investigations will evaluate changes in subcortical and cortical responses during the first 6 months of hearing aid use.


Subject(s)
Evoked Potentials, Auditory, Hearing Aids, Hearing Loss, Sensorineural/rehabilitation, Aged, Aged, 80 and over, Evoked Potentials, Auditory, Brain Stem, Female, Hearing Loss, Sensorineural/physiopathology, Humans, Male, Middle Aged, Signal-To-Noise Ratio
8.
Acta Acust United Acust; 104(5): 774-777, 2018.
Article in English | MEDLINE | ID: mdl-30686956

ABSTRACT

Previous research has found that, paradoxically, while older adults have more difficulty comprehending speech in challenging circumstances than younger adults, their brain responses track the envelope of the acoustic signal more robustly. Here we investigate this puzzle by using magnetoencephalography (MEG) source localization to determine the anatomical origin of this difference. Our results indicate that this robust tracking in older adults does not arise merely from having the same responses as younger adults but with larger amplitudes; instead, they recruit additional regions, inferior to core auditory cortex, with a short latency of ~30 ms relative to the acoustic signal.

9.
Ear Hear; 38(6): e389-e393, 2017.
Article in English | MEDLINE | ID: mdl-28475545

ABSTRACT

OBJECTIVES: Several studies have investigated the feasibility of using electrophysiology as an objective tool to efficiently map cochlear implants. A pervasive problem when measuring event-related potentials is the need to remove the direct-current (DC) artifact produced by the cochlear implant. Here, we describe how DC artifact removal can corrupt the response waveform and how the appropriate choice of stimulus duration may minimize this corruption. DESIGN: Event-related potentials were recorded to a synthesized vowel /a/ with a 170- or 400-ms duration. RESULTS: The P2 response, which occurs between 150 and 250 ms, was corrupted by the DC artifact removal algorithm for a 170-ms stimulus duration but was relatively uncorrupted for a 400-ms stimulus duration. CONCLUSIONS: To avoid response waveform corruption from DC artifact removal, one should choose a stimulus duration such that the offset of the stimulus does not temporally coincide with the specific peak of interest. While our data have been analyzed with only one specific algorithm, we argue that the length of the stimulus may be a critical factor for any DC artifact removal algorithm.
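The design rule in this abstract reduces to simple timing arithmetic: choose the stimulus duration so that its offset (where the DC artifact step, and hence the removal transient, occurs) falls outside the latency window of the peak of interest. A toy check, where the artifact-spread margin is a made-up illustration value, not one reported in the study:

```python
def offset_corrupts_peak(stim_dur_ms, peak_window_ms, artifact_spread_ms=50):
    """Return True if the stimulus-offset transient (assumed to spread
    artifact_spread_ms around the offset) overlaps the peak's window."""
    lo, hi = peak_window_ms
    return lo - artifact_spread_ms <= stim_dur_ms <= hi + artifact_spread_ms

p2_window = (150, 250)                    # P2 latency range from the abstract, in ms
short_ok = offset_corrupts_peak(170, p2_window)   # 170 ms offset lands inside P2
long_ok = offset_corrupts_peak(400, p2_window)    # 400 ms offset is safely past P2
```

This matches the abstract's finding: a 170 ms stimulus puts the offset inside the 150-250 ms P2 window, while a 400 ms stimulus does not.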


Subject(s)
Acoustic Stimulation/methods, Artifacts, Cochlear Implants, Deafness/physiopathology, Evoked Potentials, Auditory/physiology, Aged, Algorithms, Cochlear Implantation, Deafness/rehabilitation, Electroencephalography, Female, Humans, Male, Middle Aged, Time Factors
10.
Neuroimage; 124(Pt A): 906-917, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26436490

ABSTRACT

The underlying mechanism of how the human brain solves the cocktail party problem is largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects are selectively listening to one of the two speech streams embedded in a competing-speaker environment. We develop a biophysically-inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters via the Expectation Maximization (EM) algorithm. Using only the envelope of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution of the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity and decoding accuracy.
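The decoder in this paper is a full state-space MAP/EM procedure; its basic idea, sticky temporal smoothing of instantaneous attention evidence, can be sketched as a two-state forward filter. All names, the stickiness value, and the evidence model below are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def decode_attention(evidence, p_stay=0.95):
    """Forward-filter a two-state attention sequence. evidence[t] is the
    probability of the observation in window t under 'attend speaker 1'
    (and 1 - evidence[t] under 'attend speaker 2')."""
    T = len(evidence)
    post = np.zeros(T)
    prior = 0.5
    for t in range(T):
        # Predict with a sticky transition, then update with the evidence.
        pred = p_stay * prior + (1 - p_stay) * (1 - prior)
        num = pred * evidence[t]
        post[t] = num / (num + (1 - pred) * (1 - evidence[t]))
        prior = post[t]
    return post

# Simulated evidence: speaker 1 attended for 50 windows, then speaker 2.
rng = np.random.default_rng(4)
ev = np.clip(np.concatenate([rng.normal(0.7, 0.1, 50),
                             rng.normal(0.3, 0.1, 50)]), 0.01, 0.99)
post = decode_attention(ev)
```

The sticky prior is what buys second-scale temporal resolution: noisy single-window evidence is integrated over time, and the posterior itself provides the statistical confidence that the full model formalizes with confidence intervals.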


Subject(s)
Attention/physiology, Loudness Perception/physiology, Speech Perception/physiology, Acoustic Stimulation, Adult, Algorithms, Auditory Perception/physiology, Environment, Female, Humans, Magnetoencephalography, Male, Models, Neurological, Young Adult
11.
J Neurophysiol; 116(5): 2356-2367, 2016 Nov 01.
Article in English | MEDLINE | ID: mdl-27605531

ABSTRACT

The ability to understand speech is significantly degraded by aging, particularly in noisy environments. One way that older adults cope with this hearing difficulty is through the use of contextual cues. Several behavioral studies have shown that older adults are better at following a conversation when the target speech signal has high contextual content or when the background distractor is not meaningful. Specifically, older adults gain significant benefit in focusing on and understanding speech if the background is spoken by a talker in a language that is not comprehensible to them (i.e., a foreign language). To understand better the neural mechanisms underlying this benefit in older adults, we investigated aging effects on midbrain and cortical encoding of speech when in the presence of a single competing talker speaking in a language that is meaningful or meaningless to the listener (i.e., English vs. Dutch). Our results suggest that neural processing is strongly affected by the informational content of noise. Specifically, older listeners' cortical responses to the attended speech signal are less deteriorated when the competing speech signal is an incomprehensible language rather than when it is their native language. Conversely, temporal processing in the midbrain is affected by different backgrounds only during rapid changes in speech and only in younger listeners. Additionally, we found that cognitive decline is associated with an increase in cortical envelope tracking, suggesting an age-related over (or inefficient) use of cognitive resources that may explain their difficulty in processing speech targets while trying to ignore interfering noise.


Subject(s)
Aging/physiology, Auditory Cortex/physiology, Comprehension/physiology, Mesencephalon/physiology, Noise, Speech Perception/physiology, Adolescent, Adult, Aged, Electroencephalography/trends, Female, Humans, Magnetoencephalography/trends, Male, Middle Aged, Noise/adverse effects, Speech/physiology, Young Adult
12.
J Neurophysiol; 116(5): 2346-2355, 2016 Nov 01.
Article in English | MEDLINE | ID: mdl-27535374

ABSTRACT

Humans have a remarkable ability to track and understand speech in unfavorable conditions, such as in background noise, but speech understanding in noise does deteriorate with age. Results from several studies have shown that in younger adults, low-frequency auditory cortical activity reliably synchronizes to the speech envelope, even when the background noise is considerably louder than the speech signal. However, cortical speech processing may be limited by age-related decreases in the precision of neural synchronization in the midbrain. To understand better the neural mechanisms contributing to impaired speech perception in older adults, we investigated how aging affects midbrain and cortical encoding of speech when presented in quiet and in the presence of a single-competing talker. Our results suggest that central auditory temporal processing deficits in older adults manifest in both the midbrain and in the cortex. Specifically, midbrain frequency following responses to a speech syllable are more degraded in noise in older adults than in younger adults. This suggests a failure of the midbrain auditory mechanisms needed to compensate for the presence of a competing talker. Similarly, in cortical responses, older adults show larger reductions than younger adults in their ability to encode the speech envelope when a competing talker is added. Interestingly, older adults showed an exaggerated cortical representation of speech in both quiet and noise conditions, suggesting a possible imbalance between inhibitory and excitatory processes, or diminished network connectivity that may impair their ability to encode speech efficiently.


Subject(s)
Aging/physiology, Auditory Cortex/physiology, Mesencephalon/physiology, Noise, Speech Perception/physiology, Acoustic Stimulation/methods, Adolescent, Adult, Aged, Electroencephalography/trends, Female, Humans, Magnetoencephalography/trends, Male, Middle Aged, Noise/adverse effects, Speech/physiology, Young Adult
13.
Ear Hear; 36(6): e352-63, 2015.
Article in English | MEDLINE | ID: mdl-26177213

ABSTRACT

OBJECTIVES: The authors investigated aging effects on the envelope of the frequency following response to dynamic and static components of speech. Older adults frequently experience problems understanding speech, despite having clinically normal hearing. Improving audibility with hearing aids provides variable benefit, as amplification cannot restore the temporal precision degraded by aging. Previous studies have demonstrated age-related delays in subcortical timing specific to the dynamic, transition region of the stimulus. However, it is unknown whether this delay is mainly due to a failure to encode rapid changes in the formant transition because of central temporal processing deficits or as a result of cochlear damage that reduces audibility for the high-frequency components of the speech syllable. To investigate the nature of this delay, the authors compared subcortical responses in younger and older adults with normal hearing to the speech syllables /da/ and /a/, hypothesizing that the delays in peak timing observed in older adults are mainly caused by temporal processing deficits in the central auditory system. DESIGN: The frequency following response was recorded to the speech syllables /da/ and /a/ from 15 younger and 15 older adults with normal hearing, normal IQ, and no history of neurological disorders. Both speech syllables were presented binaurally with alternating polarities at 80 dB SPL at a rate of 4.3 Hz through electromagnetically shielded insert earphones. A vertical montage of four Ag-AgCl electrodes (Cz, active, forehead ground, and earlobe references) was used. RESULTS: The responses of older adults were significantly delayed with respect to younger adults for the transition and onset regions of the /da/ syllable and for the onset of the /a/ syllable. 
However, in contrast with the younger adults who had earlier latencies for /da/ than for /a/ (as was expected given the high-frequency energy in the /da/ stop consonant burst), latencies in older adults were not significantly different between the responses to /da/ and /a/. An unexpected finding was noted in the amplitude and phase dissimilarities between the two groups in the later part of the steady-state region, rather than in the transition region. This amplitude reduction may indicate prolonged neural recovery or response decay associated with a loss of auditory nerve fibers. CONCLUSIONS: These results suggest that older adults' peak timing delays may arise from decreased synchronization to the onset of the stimulus due to reduced audibility, though the possible role of impaired central auditory processing cannot be ruled out. Conversely, a deterioration in temporal processing mechanisms in the auditory nerve, brainstem, or midbrain may be a factor in the sudden loss of synchronization in the later part of the steady-state response in older adults.


Subject(s)
Aging/physiology, Speech Perception/physiology, Adult, Aged, Auditory Perception/physiology, Electroencephalography, Female, Humans, Male, Middle Aged, Young Adult
14.
J Perinatol; 44(4): 521-527, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37604967

ABSTRACT

OBJECTIVE: To assess the use of continuous heart rate variability (HRV) as a predictor of brain injury severity in newborns with moderate to severe hypoxic-ischemic encephalopathy (HIE) undergoing therapeutic hypothermia. STUDY DESIGN: Two cohorts of newborns (n1 = 55, n2 = 41) with moderate to severe HIE previously treated with therapeutic hypothermia were studied. HRV was characterized by the root mean square in short time scales (RMSS) during therapeutic hypothermia and through completion of rewarming. Logistic regression and naïve Bayes models were developed to predict the MRI outcome of the infants using RMSS; encephalopathy grade and gender were used as control variables. RESULTS: For both cohorts, the predicted outcomes were compared with the observed outcomes. Our algorithms were able to predict the outcomes with an area under the receiver operating characteristic curve of about 0.8. CONCLUSIONS: HRV assessed by RMSS can predict the severity of brain injury in newborns with HIE.
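As a sketch of the modeling and evaluation step, a single-feature logistic regression plus an ROC-AUC computation can be written directly in NumPy. The feature values, group sizes, and every name below are hypothetical illustrations, not the study's data:

```python
import numpy as np

def fit_logistic(x, y, lr=0.1, n_iter=2000):
    """Logistic regression of binary outcome y on one feature x
    (e.g. an HRV summary statistic), fit by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-(w * x + b)))
        w -= lr * np.mean((p - y) * x)   # gradient of the log-loss in w
        b -= lr * np.mean(p - y)         # gradient of the log-loss in b
    return w, b

def auc(scores, y):
    """Area under the ROC curve: the probability that a randomly chosen
    positive case outscores a randomly chosen negative case."""
    pos, neg = scores[y == 1], scores[y == 0]
    return float(np.mean(pos[:, None] > neg[None, :])
                 + 0.5 * np.mean(pos[:, None] == neg[None, :]))

rng = np.random.default_rng(5)
# Hypothetical HRV feature: lower values in the adverse-outcome group.
feat_good = rng.normal(1.5, 0.4, 60)
feat_bad = rng.normal(0.9, 0.4, 40)
x = np.concatenate([feat_good, feat_bad])
y = np.concatenate([np.zeros(60), np.ones(40)])
w, b = fit_logistic(x, y)
scores = 1 / (1 + np.exp(-(w * x + b)))
```

With this synthetic separation the AUC lands in the same general range as the study's reported ~0.8, which is only meant to make the evaluation metric concrete.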


Subject(s)
Brain Injuries, Hypothermia, Induced, Hypoxia-Ischemia, Brain, Infant, Humans, Infant, Newborn, Heart Rate/physiology, Hypoxia-Ischemia, Brain/diagnostic imaging, Hypoxia-Ischemia, Brain/therapy, Bayes Theorem, Magnetic Resonance Imaging, Brain Injuries/therapy
15.
Ann Otol Rhinol Laryngol ; 131(4): 365-372, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34096343

ABSTRACT

OBJECTIVES: Facial paralysis is a debilitating condition with substantial functional and psychological consequences. This feline-model study evaluates whether facial muscles can be selectively activated by acute and chronic implantation of 16-channel multichannel cuff electrodes (MCE). METHODS: Two cats underwent acute terminal MCE implantation experiments, 2 underwent chronic MCE implantation in uninjured facial nerves (FN) and were tested for 6 months, and 2 underwent chronic MCE implantation after FN transection injury and were tested for 3 months. The MCE were wrapped around the main trunk of the skeletonized FN, and data collection consisted of EMG thresholds, amplitudes, and selectivity of muscle activation. RESULTS: In acute experimentation, activation of specific channels (ie, channels 1-3 and 6-8) resulted in selective activation of the orbicularis oculi, whereas activation of other channels (ie, channels 4, 5, or 8) led to selective activation of the levator auris longus with higher EMG amplitudes. Chronic MCE implantation yielded stable and selective facial muscle activation EMG thresholds and amplitudes over a period of up to 5 months. Furthermore, modest selective muscle activation was obtained after a complete transection-and-reapproximation nerve injury, a 3-month recovery period, and implantation reoperation. Chronic implantation of MCE did not lead to fibrosis on histology. Field steering was achieved to activate distinct facial muscles by sending simultaneous subthreshold currents to multiple channels, thus theoretically protecting against nerve damage from chronic electrical stimulation. CONCLUSION: Our proof-of-concept results show the ability of an MCE, supplemented with field steering, to provide a degree of selective facial muscle stimulation in a feline model, even following nerve regeneration after FN injury. LEVEL OF EVIDENCE: N/A.
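The field-steering idea described above (simultaneous subthreshold currents on multiple channels combining to activate a target muscle selectively) can be caricatured with a toy model. Everything here is an assumption for illustration: the linear summation, the spread weights, and the threshold are hypothetical, not parameters from the study:

```python
def muscle_activation(channel_currents_ma, spread_weights, threshold_ma=1.0):
    """Toy field-steering model: the field at a muscle is a weighted sum
    of per-channel currents, and the muscle fires only if that sum
    crosses threshold. Weights and threshold are hypothetical.
    """
    field = sum(c * w for c, w in zip(channel_currents_ma, spread_weights))
    return field >= threshold_ma

currents = [0.7, 0.7]   # mA on channels 1 and 2, each subthreshold alone
target_w = [0.8, 0.8]   # both channels couple strongly to the target muscle
other_w = [0.8, 0.1]    # only channel 1 couples to a neighboring muscle

print(muscle_activation(currents, target_w))    # combined field crosses threshold
print(muscle_activation(currents, other_w))     # neighboring muscle stays below it
print(muscle_activation([0.7, 0.0], target_w))  # either channel alone is subthreshold
```

The point of the sketch is the selectivity mechanism: no single channel carries a damaging suprathreshold current, yet the target muscle where the fields overlap is still driven past threshold.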


Subject(s)
Electric Stimulation Therapy/instrumentation, Electrodes, Implanted, Facial Muscles/innervation, Facial Muscles/physiopathology, Facial Nerve Injuries/complications, Facial Paralysis/therapy, Muscle Contraction/physiology, Animals, Cats, Disease Models, Animal, Electromyography, Facial Nerve Injuries/physiopathology, Facial Paralysis/etiology, Facial Paralysis/physiopathology, Female
16.
J Neurophysiol ; 106(4): 1875-87, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21768121

ABSTRACT

Chronic recordings from ensembles of cortical neurons in primary motor and somatosensory areas in rhesus macaques provide accurate information about bipedal locomotion (Fitzsimmons NA, Lebedev MA, Peikon ID, Nicolelis MA. Front Integr Neurosci 3: 3, 2009). Here we show that the linear and angular kinematics of the ankle, knee, and hip joints during both normal and precision (attentive) human treadmill walking can be inferred from noninvasive scalp electroencephalography (EEG) with decoding accuracies comparable to those of neural decoders based on multiple single-unit activities (SUAs) recorded in nonhuman primates. Six healthy adults were recorded. Participants were asked to walk on a treadmill at their self-selected comfortable speed while receiving visual feedback of their lower limbs (i.e., precision walking), repeatedly avoiding stepping on a strip drawn on the treadmill belt. Angular and linear kinematics of the left and right hip, knee, and ankle joints and EEG were recorded, and neural decoders were designed and optimized with cross-validation procedures. Of note, the optimal electrode set from these decoders was also used to accurately infer gait trajectories in a normal walking task that did not require subjects to control and monitor their foot placement. Our results indicate a high involvement of a fronto-posterior cortical network in the control of both precision and normal walking and suggest that EEG signals can be used to study the cortical dynamics of walking in real time and to develop brain-machine interfaces aimed at restoring human gait function.
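As a rough illustration of the time-lagged linear decoding described above, the sketch below fits a two-lag, single-channel least-squares decoder in pure Python. The toy signals and the two-lag, one-channel restriction are assumptions for compactness, not the study's actual decoder (which used many channels and lags with cross-validation):

```python
def fit_lagged_decoder(eeg, kin):
    """Least-squares fit of a kinematic signal from time-lagged EEG.

    Two-lag, single-channel case: kin[t] ~ w0*eeg[t] + w1*eeg[t-1],
    solved via the 2x2 normal equations with Cramer's rule.
    """
    X = [[eeg[t], eeg[t - 1]] for t in range(1, len(eeg))]
    y = kin[1:]
    s00 = sum(r[0] * r[0] for r in X)
    s01 = sum(r[0] * r[1] for r in X)
    s11 = sum(r[1] * r[1] for r in X)
    b0 = sum(r[0] * yi for r, yi in zip(X, y))
    b1 = sum(r[1] * yi for r, yi in zip(X, y))
    det = s00 * s11 - s01 * s01
    return ((s11 * b0 - s01 * b1) / det, (s00 * b1 - s01 * b0) / det)

# Synthetic check: kinematics built as 0.5*eeg[t] + 0.25*eeg[t-1],
# so the decoder should recover weights near (0.5, 0.25)
eeg = [0.1, 0.9, -0.4, 0.7, 0.2, -0.8, 0.5, 0.3]
kin = [0.0] + [0.5 * eeg[t] + 0.25 * eeg[t - 1] for t in range(1, len(eeg))]
w = fit_lagged_decoder(eeg, kin)
print(w)
```

Scaling the same normal-equation idea to dozens of electrodes and lags is what makes scalp EEG competitive with the invasive SUA decoders the abstract compares against.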


Subject(s)
Brain Mapping, Electroencephalography, Leg/physiology, Motor Cortex/physiology, Somatosensory Cortex/physiology, User-Computer Interface, Walking/physiology, Adolescent, Adult, Ankle Joint/physiology, Artifacts, Biomechanical Phenomena, Computer Systems, Electroencephalography/methods, Eye Movements/physiology, Feedback, Sensory, Female, Gait, Hip Joint/physiology, Humans, Knee Joint/physiology, Male, Scalp, Signal Processing, Computer-Assisted, Young Adult
17.
PLoS One ; 14(3): e0213899, 2019.
Article in English | MEDLINE | ID: mdl-30865718

ABSTRACT

Age-related deficits in speech-in-noise understanding pose a significant problem for older adults. Despite the vast number of studies conducted to investigate the neural mechanisms responsible for these communication difficulties, the role of central auditory deficits, beyond peripheral hearing loss, remains unclear. The current study builds upon our previous work investigating the effect of aging in normal-hearing individuals and aims to estimate the effect of peripheral hearing loss on the representation of speech in noise in two critical regions of the aging auditory pathway: the midbrain and cortex. Data from 14 hearing-impaired older adults were added to a previously published dataset of 17 normal-hearing younger adults and 15 normal-hearing older adults. The midbrain response, measured by the frequency-following response (FFR), and the cortical response, measured with magnetoencephalography (MEG), were recorded from subjects listening to speech in quiet and in noise at four signal-to-noise ratios (SNRs): +3, 0, -3, and -6 dB. Both groups of older listeners showed weaker midbrain response amplitudes and overrepresentation of cortical responses compared to younger listeners. No significant differences were found between the two older groups when the midbrain and cortical measurements were analyzed independently. However, significant differences between the older groups were found when investigating the midbrain-cortex relationships: only hearing-impaired older adults showed significant correlations between midbrain and cortical measurements, suggesting that hearing loss may alter reciprocal connections between lower and higher levels of the auditory pathway.
The overall paucity of differences in midbrain or cortical responses between the two older groups suggests that age-related temporal processing deficits contribute to older adults' communication difficulties beyond what would be predicted from peripheral hearing loss alone; however, hearing loss does appear to alter the connectivity between midbrain and cortex. These results may have important ramifications for the field of audiology, as they indicate that algorithms in clinical devices, such as hearing aids, should take age-related temporal processing deficits into account to maximize user benefit.


Subject(s)
Aging/physiology, Hearing Loss, Sensorineural/physiopathology, Speech Perception/physiology, Adolescent, Adult, Aged, Aged, 80 and over, Aging/psychology, Auditory Cortex/physiopathology, Auditory Pathways/physiopathology, Electroencephalography, Female, Hearing Loss, Sensorineural/psychology, Humans, Magnetoencephalography, Male, Mesencephalon/physiopathology, Middle Aged, Noise, Signal-To-Noise Ratio, Speech Discrimination Tests, Young Adult
18.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 4148-4151, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946783

ABSTRACT

In the last few years, a large number of experiments have focused on exploring the possibility of using non-invasive techniques, such as electroencephalography (EEG) and magnetoencephalography (MEG), to identify auditory-related neuromarkers that are modulated by attention. Results from several studies in which participants listen to a story narrated by one speaker while trying to ignore a different story narrated by a competing speaker suggest the feasibility of extracting neuromarkers that demonstrate enhanced phase locking to the attended speech stream. These promising findings have the potential to be used in clinical applications, such as EEG-driven hearing aids. One major challenge in achieving this goal is the need to devise an algorithm capable of tracking these neuromarkers in real time when individuals are given the freedom to repeatedly switch attention among speakers at will. Here we present an algorithm pipeline designed to efficiently recognize changes in neural speech tracking during a dynamic attention-switching task and to feed them into a near-real-time state-space model that translates these neuromarkers into attentional state estimates with minimal delay. This algorithm pipeline was tested with MEG data collected from participants who had the freedom to change the focus of their attention between two speakers at will. The results suggest the feasibility of using our algorithm pipeline to track changes of attention in near real time in a dynamic auditory scene.
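The near-real-time state-space idea described above can be caricatured as a two-state recursive Bayesian filter: at each decoding window, a prior over "attend speaker A vs. B" is propagated through a small switching probability and then reweighted by the likelihood of the latest neural-tracking evidence. The function, switching probability, and likelihood values below are hypothetical illustrations, not the paper's model:

```python
def update_attention(prior_a, like_a, like_b, p_switch=0.05):
    """One step of a two-state forward filter over attentional state.

    prior_a: current P(attending speaker A).
    like_a, like_b: likelihoods of the latest neural-tracking evidence
    under 'attend A' and 'attend B' (e.g., derived from envelope-
    reconstruction correlations; the values used here are made up).
    """
    # predict: allow a small probability that attention has switched
    pred_a = prior_a * (1 - p_switch) + (1 - prior_a) * p_switch
    # correct: reweight the prediction by the evidence likelihoods
    post_a = pred_a * like_a
    post_b = (1 - pred_a) * like_b
    return post_a / (post_a + post_b)

# Three windows of evidence favoring speaker A, then one favoring B
p = 0.5
for like_a, like_b in [(0.8, 0.2), (0.7, 0.3), (0.9, 0.1), (0.2, 0.8)]:
    p = update_attention(p, like_a, like_b)
print(p)
```

Because each update uses only the newest window, the estimate can run with minimal delay; the switching probability controls how quickly the filter lets go of its previous belief when the listener redirects attention.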


Subject(s)
Attention, Electroencephalography, Magnetoencephalography, Speech Perception, Adult, Algorithms, Auditory Perception, Electroencephalography/instrumentation, Hearing Aids, Humans, Magnetoencephalography/instrumentation
19.
J Assoc Res Otolaryngol ; 19(4): 451-466, 2018 08.
Article in English | MEDLINE | ID: mdl-29749573

ABSTRACT

The acoustic change complex (ACC) is a scalp-recorded cortical evoked potential complex generated in response to changes (e.g., in frequency or amplitude) in an auditory stimulus. The ACC has been well studied in humans, but to our knowledge no animal model had been evaluated. In particular, it was not known whether the ACC could be recorded under the conditions of sedation that would likely be necessary for recordings from animals. For that reason, we tested the feasibility of recording the ACC from sedated cats in response to changes in the frequency and amplitude of pure-tone stimuli. Cats were sedated with ketamine and acepromazine, and subdermal needle electrodes were used to record electroencephalographic (EEG) activity. Tones were presented from a small loudspeaker located near the right ear. Continuous tones alternated at 500-ms intervals between two frequencies or two levels. Neurometric functions were created by recording neural response amplitudes while systematically varying the magnitude of frequency steps centered at octave frequencies of 2, 4, 8, and 16 kHz, all at 75 dB SPL, or of level steps around 75 dB SPL, tested at 4 and 8 kHz. The ACC could be recorded readily under this ketamine/acepromazine sedation. In contrast, the ACC could not be recorded reliably under any level of isoflurane anesthesia that was tested. The minimum frequency steps (expressed as Weber fractions, df/f) or level steps (expressed in dB) needed to elicit the ACC fell in the range of thresholds previously reported in animal psychophysical tests of discrimination. The success in recording the ACC in sedated animals suggests that it will be a useful tool for evaluating other aspects of auditory acuity in normal hearing and, presumably, in electrical cochlear stimulation, especially for novel stimulation modes that are not yet feasible in humans.
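A neurometric threshold of the kind described above can be read off by interpolating where response amplitude first crosses a criterion and expressing the result as a Weber fraction (df/f). The step sizes, amplitudes, and criterion below are made-up values for illustration, not data from the study:

```python
def neurometric_threshold(steps_hz, amplitudes_uv, criterion_uv, f_center_hz):
    """Linearly interpolate the frequency step at which ACC amplitude
    first reaches a criterion, returned as a Weber fraction (df/f).

    Assumes amplitudes grow monotonically with step size across the
    crossing; all inputs here are hypothetical illustrations.
    """
    for (s0, a0), (s1, a1) in zip(zip(steps_hz, amplitudes_uv),
                                  zip(steps_hz[1:], amplitudes_uv[1:])):
        if a0 < criterion_uv <= a1:
            df = s0 + (criterion_uv - a0) * (s1 - s0) / (a1 - a0)
            return df / f_center_hz
    return None  # criterion never reached

# ACC amplitude growing with step size around a 4 kHz carrier (synthetic)
steps = [20, 40, 80, 160]    # frequency step sizes in Hz
amps = [0.1, 0.3, 0.9, 1.5]  # response amplitudes in microvolts
thr = neurometric_threshold(steps, amps, criterion_uv=0.6, f_center_hz=4000)
print(thr)  # a df/f of about 0.015
```

Expressing the threshold as df/f rather than in raw Hz is what allows the comparison across octave-spaced carrier frequencies and against published psychophysical discrimination thresholds.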


Subject(s)
Acoustic Stimulation, Auditory Cortex/physiology, Evoked Potentials, Auditory/physiology, Acepromazine/pharmacology, Animals, Cats, Conscious Sedation, Electroencephalography, Evoked Potentials, Auditory/drug effects, Female, Isoflurane/pharmacology, Ketamine/pharmacology, Male, Models, Animal
20.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 2206-2209, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30440843

ABSTRACT

Permanent facial paralysis and paresis (FP) result from damage to the facial nerve (FN) and constitute a debilitating condition with substantial functional and psychological consequences for the patient. Unfortunately, surgeons have few tools with which they can satisfactorily reanimate the face. Current strategies employ static options (e.g., implantation of nonmuscular material in the face to aid function/cosmesis) and dynamic options (e.g., gracilis myoneurovascular free tissue transfer) to partially restore volitional facial function and cosmesis. Here, we propose a novel neuroprosthetic approach for facial reanimation that uses electromyographic (EMG) input coupled to a chronically implanted multichannel cuff electrode (MCE) to restore instantaneous, volitional, and selective hemifacial movement in a feline model. To accomplish this goal, we developed a single-channel EMG-driven current source coupled with a chronically implanted MCE via a portable microprocessor board. Our results demonstrate a successful feasibility trial in which human EMG input resulted in FN stimulation with subsequent concentric contraction of discrete regions of a feline face.
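The single-channel EMG-driven current source described above can be sketched as a simple mapping from a rectified, averaged EMG window to a capped stimulation command. The threshold, gain, and safety limit below are hypothetical illustrations, not the device's actual parameters:

```python
def emg_to_stim_current(emg_samples_mv, threshold_mv=0.05,
                        gain_ma_per_mv=2.0, max_current_ma=1.0):
    """Map one window of EMG samples to a stimulation amplitude.

    Rectify-and-average the window, then scale the supra-threshold
    envelope into a current command, capped at a safety limit.
    All parameter values are illustrative, not from the study.
    """
    envelope = sum(abs(s) for s in emg_samples_mv) / len(emg_samples_mv)
    if envelope <= threshold_mv:
        return 0.0  # below threshold: no stimulation
    return min(gain_ma_per_mv * (envelope - threshold_mv), max_current_ma)

window = [0.12, -0.08, 0.1, -0.14, 0.06]  # millivolts, synthetic samples
cmd = emg_to_stim_current(window)
print(cmd)  # a modest supra-threshold command, well under the cap
```

The threshold suppresses resting EMG noise so the paralyzed hemiface is not stimulated at rest, while the cap bounds charge delivery to the nerve regardless of how strong the volitional EMG burst is.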


Subject(s)
Facial Nerve, Facial Paralysis, Animals, Cats, Electrodes, Implanted, Face, Humans, Movement