Results 1 - 6 of 6
1.
Ann Otol Rhinol Laryngol; 131(4): 365-372, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34096343

ABSTRACT

OBJECTIVES: Facial paralysis is a debilitating condition with substantial functional and psychological consequences. This feline-model study evaluates whether facial muscles can be selectively activated with acute and chronic implantation of 16-channel multichannel cuff electrodes (MCE).

METHODS: Two cats underwent acute terminal MCE implantation experiments, two underwent chronic MCE implantation in uninjured facial nerves (FN) and were tested for 6 months, and two underwent chronic MCE implantation after FN transection injury and were tested for 3 months. The MCE was wrapped around the main trunk of the skeletonized FN, and data collection consisted of EMG thresholds, amplitudes, and selectivity of muscle activation.

RESULTS: In the acute experiments, activation of specific channels (i.e., channels 1-3 and 6-8) resulted in selective activation of the orbicularis oculi, whereas activation of other channels (i.e., channels 4, 5, or 8) led to selective activation of the levator auris longus with higher EMG amplitudes. MCE implantation yielded stable and selective facial muscle activation EMG thresholds and amplitudes for up to 5 months. Modest selective muscle activation was also obtained after a complete transection-and-reapproximation nerve injury, following a 3-month recovery period and reimplantation surgery. Chronic implantation of the MCE did not lead to fibrosis on histology. Field steering, in which simultaneous subthreshold currents are sent to multiple channels, was used to activate distinct facial muscles, theoretically protecting against nerve damage from chronic electrical stimulation.

CONCLUSION: Our proof-of-concept results show the ability of an MCE, supplemented with field steering, to provide a degree of selective facial muscle stimulation in a feline model, even following nerve regeneration after FN injury.

LEVEL OF EVIDENCE: N/A.
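Editor's note: field steering here means delivering simultaneous subthreshold currents on several cuff channels so that their summed field exceeds threshold only near the targeted fascicle. The study does not publish code; the following is a minimal sketch under an assumed linear field-summation model, with hypothetical contact geometry, current weights, and an arbitrary activation threshold.

# Minimal field-steering sketch (assumed linear summation model; all geometry,
# currents, and the threshold below are illustrative, not taken from the study).
import numpy as np

n_channels = 16
# Hypothetical angular positions of cuff contacts around the nerve (radians).
contact_angles = np.linspace(0, 2 * np.pi, n_channels, endpoint=False)

def field_at(target_angle, currents, spread=0.6):
    """Summed field at a target fascicle, modeling each contact's contribution
    as a Gaussian falloff with angular distance."""
    d = np.angle(np.exp(1j * (contact_angles - target_angle)))  # wrapped angular distance
    return np.sum(currents * np.exp(-(d ** 2) / (2 * spread ** 2)))

threshold = 1.0            # arbitrary activation threshold
single = np.zeros(n_channels)
single[4] = 0.7            # one channel at 70% of threshold: subthreshold on its own
steered = np.zeros(n_channels)
steered[[4, 5]] = 0.7      # two adjacent channels, each individually subthreshold

target = contact_angles[4] + np.pi / n_channels  # fascicle between contacts 4 and 5
print("single-channel field:", field_at(target, single))   # stays below threshold
print("steered field:       ", field_at(target, steered))  # sums above threshold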


Subject(s)
Electric Stimulation Therapy/instrumentation; Electrodes, Implanted; Facial Muscles/innervation; Facial Muscles/physiopathology; Facial Nerve Injuries/complications; Facial Paralysis/therapy; Muscle Contraction/physiology; Animals; Cats; Disease Models, Animal; Electromyography; Facial Nerve Injuries/physiopathology; Facial Paralysis/etiology; Facial Paralysis/physiopathology; Female
2.
J Assoc Res Otolaryngol; 19(4): 451-466, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29749573

ABSTRACT

The acoustic change complex (ACC) is a scalp-recorded cortical evoked potential complex generated in response to changes (e.g., frequency, amplitude) in an auditory stimulus. The ACC has been well studied in humans, but to our knowledge, no animal model has been evaluated. In particular, it was not known whether the ACC could be recorded under the conditions of sedation that would likely be necessary for recordings from animals. For that reason, we tested the feasibility of recording the ACC from sedated cats in response to changes in the frequency and amplitude of pure-tone stimuli. Cats were sedated with ketamine and acepromazine, and subdermal needle electrodes were used to record electroencephalographic (EEG) activity. Tones were presented from a small loudspeaker located near the right ear. Continuous tones alternated at 500-ms intervals between two frequencies or two levels. Neurometric functions were created by recording neural response amplitudes while systematically varying the magnitude of frequency steps centered at octave frequencies of 2, 4, 8, and 16 kHz, all at 75 dB SPL, or of level steps centered around 75 dB SPL at 4 and 8 kHz. The ACC could be recorded readily under this ketamine/acepromazine sedation. In contrast, the ACC could not be recorded reliably under any level of isoflurane anesthesia that was tested. The minimum frequency steps (expressed as Weber fractions, df/f) or level steps (expressed in dB) needed to elicit the ACC fell within the range of thresholds previously reported in animal psychophysical tests of discrimination. The success in recording the ACC in sedated animals suggests that the ACC will be a useful tool for evaluating other aspects of auditory acuity in normal hearing and, presumably, in electrical cochlear stimulation, especially for novel stimulation modes that are not yet feasible in humans.
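Editor's note: the ACC paradigm implies a simple stimulus construction: a continuous tone whose frequency steps by a chosen Weber fraction every 500 ms, with no silent gap that would itself evoke an onset response. The following sketch illustrates that construction; the sample rate, step size, and number of segments are assumed values, not the paper's.

# Sketch of an ACC-style stimulus: a continuous tone alternating every 500 ms
# between a base frequency and a frequency one Weber-fraction step above it.
import numpy as np

fs = 48000               # sample rate (Hz), illustrative
base_f = 4000.0          # base frequency (Hz)
weber = 0.02             # df/f step size (assumed value)
seg_dur = 0.5            # 500-ms alternation interval
n_segments = 8

freqs = [base_f if i % 2 == 0 else base_f * (1 + weber) for i in range(n_segments)]
phase = 0.0
segments = []
for f in freqs:
    t = np.arange(int(seg_dur * fs)) / fs
    segments.append(np.sin(phase + 2 * np.pi * f * t))
    phase = (phase + 2 * np.pi * f * seg_dur) % (2 * np.pi)  # keep phase continuous across the change

stimulus = np.concatenate(segments)  # frequency changes occur without onset gaps,
                                     # so the evoked response reflects the change itself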


Subject(s)
Acoustic Stimulation; Auditory Cortex/physiology; Evoked Potentials, Auditory/physiology; Acepromazine/pharmacology; Animals; Cats; Conscious Sedation; Electroencephalography; Evoked Potentials, Auditory/drug effects; Female; Isoflurane/pharmacology; Ketamine/pharmacology; Male; Models, Animal
3.
Ear Hear; 38(6): e389-e393, 2017.
Article in English | MEDLINE | ID: mdl-28475545

ABSTRACT

OBJECTIVES: Several studies have investigated the feasibility of using electrophysiology as an objective tool to efficiently map cochlear implants. A pervasive problem when measuring event-related potentials is the need to remove the direct-current (DC) artifact produced by the cochlear implant. Here, we describe how DC artifact removal can corrupt the response waveform and how the appropriate choice of stimulus duration may minimize this corruption.

DESIGN: Event-related potentials were recorded to a synthesized vowel /a/ with a 170- or 400-ms duration.

RESULTS: The P2 response, which occurs between 150 and 250 ms, was corrupted by the DC artifact removal algorithm for a 170-ms stimulus duration but was relatively uncorrupted for a 400-ms stimulus duration.

CONCLUSIONS: To avoid response waveform corruption from DC artifact removal, one should choose a stimulus duration such that the offset of the stimulus does not temporally coincide with the specific peak of interest. While our data have been analyzed with only one specific algorithm, we argue that the length of the stimulus may be a critical factor for any DC artifact removal algorithm.
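Editor's note: the practical rule in the conclusion, choose a stimulus long enough that its offset does not land on the peak you want to measure, can be expressed as a simple check. The sketch below is an illustration of that reasoning only; the peak window uses the P2 latencies quoted in the abstract, and the contamination margin around the offset is an assumed parameter.

# Check whether a stimulus offset falls inside the analysis window of a peak
# of interest (here P2, ~150-250 ms per the abstract). The margin around the
# offset is an assumed, illustrative value.
def offset_clear_of_peak(stim_dur_ms, peak_window_ms=(150, 250), margin_ms=50):
    """Return True if the stimulus offset (plus a safety margin) does not
    overlap the peak analysis window."""
    lo, hi = peak_window_ms
    return (stim_dur_ms - margin_ms) > hi or (stim_dur_ms + margin_ms) < lo

print(offset_clear_of_peak(170))  # False: the offset lands within the P2 window
print(offset_clear_of_peak(400))  # True: the offset is well past the P2 window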


Subject(s)
Acoustic Stimulation/methods; Artifacts; Cochlear Implants; Deafness/physiopathology; Evoked Potentials, Auditory/physiology; Aged; Algorithms; Cochlear Implantation; Deafness/rehabilitation; Electroencephalography; Female; Humans; Male; Middle Aged; Time Factors
4.
J Neurophysiol; 116(5): 2346-2355, 2016 Nov 1.
Article in English | MEDLINE | ID: mdl-27535374

ABSTRACT

Humans have a remarkable ability to track and understand speech in unfavorable conditions, such as in background noise, but speech understanding in noise does deteriorate with age. Results from several studies have shown that in younger adults, low-frequency auditory cortical activity reliably synchronizes to the speech envelope, even when the background noise is considerably louder than the speech signal. However, cortical speech processing may be limited by age-related decreases in the precision of neural synchronization in the midbrain. To better understand the neural mechanisms contributing to impaired speech perception in older adults, we investigated how aging affects midbrain and cortical encoding of speech presented in quiet and in the presence of a single competing talker. Our results suggest that central auditory temporal processing deficits in older adults manifest in both the midbrain and the cortex. Specifically, midbrain frequency-following responses to a speech syllable are more degraded in noise in older adults than in younger adults, suggesting a failure of the midbrain auditory mechanisms needed to compensate for the presence of a competing talker. Similarly, in cortical responses, older adults show larger reductions than younger adults in their ability to encode the speech envelope when a competing talker is added. Interestingly, older adults also showed an exaggerated cortical representation of speech in both quiet and noise conditions, suggesting a possible imbalance between inhibitory and excitatory processes, or diminished network connectivity, that may impair their ability to encode speech efficiently.
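Editor's note: cortical envelope tracking of the kind described here is commonly quantified by comparing the low-frequency neural signal with the acoustic envelope of the speech. The sketch below shows one generic way to do that with SciPy (Hilbert envelope, band-pass filtering, correlation); it is an illustration of the analysis idea, not the authors' pipeline, and all filter settings and signals are assumed.

# Generic envelope-tracking illustration (not the authors' pipeline): extract the
# speech envelope, filter both signals to a low-frequency band often used for
# cortical tracking, and correlate them.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, resample

def bandpass(x, fs, lo=1.0, hi=8.0, order=4):
    # Band-pass to an assumed 1-8 Hz range.
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def envelope_tracking_score(speech, neural, fs_neural):
    """Correlation between the low-frequency speech envelope and one neural channel."""
    env = np.abs(hilbert(speech))        # broadband amplitude envelope of the audio
    env = resample(env, len(neural))     # bring the envelope onto the neural sampling grid
    return np.corrcoef(bandpass(env, fs_neural), bandpass(neural, fs_neural))[0, 1]

# Synthetic demo: amplitude-modulated noise as "speech", a noisy copy of the modulator as "neural" data.
fs_audio, fs_neural, dur = 16000, 200, 10.0
t = np.arange(int(fs_audio * dur)) / fs_audio
modulator = 0.5 * (1 + np.sin(2 * np.pi * 3 * t))   # 3-Hz envelope
speech = modulator * np.random.randn(len(t))
neural = resample(modulator, int(fs_neural * dur)) + 0.5 * np.random.randn(int(fs_neural * dur))
print(envelope_tracking_score(speech, neural, fs_neural))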


Subject(s)
Aging/physiology; Auditory Cortex/physiology; Mesencephalon/physiology; Noise; Speech Perception/physiology; Acoustic Stimulation/methods; Adolescent; Adult; Aged; Electroencephalography/trends; Female; Humans; Magnetoencephalography/trends; Male; Middle Aged; Noise/adverse effects; Speech/physiology; Young Adult
5.
Neuroimage; 124(Pt A): 906-917, 2016 Jan 1.
Article in English | MEDLINE | ID: mdl-26436490

ABSTRACT

How the human brain solves the cocktail party problem remains largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects selectively listen to one of two speech streams in a competing-speaker environment. We develop a biophysically inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters obtained via the Expectation-Maximization (EM) algorithm. Using only the envelopes of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution on the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity, and decoding accuracy.
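Editor's note: the decoder described here estimates the attentional state from envelope covariates with a state-space model fit by EM. As a much simpler point of comparison, attention is often read out by correlating the neural signal with each speaker's envelope in sliding windows and smoothing the result. The sketch below implements that baseline, not the paper's MAP/EM decoder, and every parameter in it is an assumed value.

# Sliding-window attention-decoding baseline (a simplification, not the paper's
# state-space MAP/EM decoder): correlate the neural signal with each speaker's
# envelope per window and smooth the decision variable.
import numpy as np

def decode_attention(neural, env_a, env_b, fs, win_s=3.0, smooth=5):
    """Return +1 for windows where speaker A appears attended, -1 for speaker B."""
    win = int(win_s * fs)
    scores = []
    for start in range(0, len(neural) - win + 1, win):
        seg = slice(start, start + win)
        ra = np.corrcoef(neural[seg], env_a[seg])[0, 1]
        rb = np.corrcoef(neural[seg], env_b[seg])[0, 1]
        scores.append(ra - rb)
    scores = np.array(scores)
    kernel = np.ones(smooth) / smooth                       # moving-average smoothing,
    smoothed = np.convolve(scores, kernel, mode="same")     # a crude stand-in for state-space smoothing
    return np.sign(smoothed)

# Synthetic demo: the neural signal follows envelope A in the first half, B in the second.
fs, dur = 100, 60
n = fs * dur
env_a, env_b = np.abs(np.random.randn(n)), np.abs(np.random.randn(n))
neural = np.concatenate([env_a[: n // 2], env_b[n // 2 :]]) + 0.5 * np.random.randn(n)
print(decode_attention(neural, env_a, env_b, fs))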


Subject(s)
Attention/physiology; Loudness Perception/physiology; Speech Perception/physiology; Acoustic Stimulation; Adult; Algorithms; Auditory Perception/physiology; Environment; Female; Humans; Magnetoencephalography; Male; Models, Neurological; Young Adult
6.
Clin Neurophysiol; 121(9): 1540-1550, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20413346

ABSTRACT

OBJECTIVE: The nature of the auditory steady-state responses (ASSR) evoked with 40-Hz click trains, and their relationship to auditory brainstem and middle latency responses (ABR/MLR), gamma band responses (GBR), and beta band responses (BBR), were investigated using superposition theory. Transient responses obtained by continuous loop averaging deconvolution (CLAD) and last click responses (LCR) were used to synthesize ASSRs and GBRs.

METHODS: ASSRs were obtained with trains of low-jittered 40-Hz clicks presented monaurally and were deconvolved using a modified CLAD. The resulting transient responses and modified LCRs were used to predict the ASSRs and the GBRs.

RESULTS: The ABR/MLR obtained with deconvolution accurately predicted the steady-state portion of the ASSR but failed to predict its onset portion. The modified LCR failed to fully predict either portion. The GBRs were predicted by narrow-band filtering of the ASSRs. Significant BBR activity was found in both the ASSRs and the deconvolved ABR/MLRs.

CONCLUSIONS: Simulations using deconvolved ABR/MLRs obtained at 40 Hz fully predict the steady-state portion, but not the onset portion, of the ASSRs, thus confirming the superposition theory.

SIGNIFICANCE: Click rate adaptation plays a significant role in ASSR generation with click trains and should be considered in evaluating convolved response generation theories.
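Editor's note: superposition theory predicts that the steady-state response to a 40-Hz click train equals the sum of overlapping transient responses to the individual clicks. Given a deconvolved transient response, the synthesis step is a single convolution with the click-time impulse train; the sketch below uses a made-up damped-sinusoid transient in place of a real deconvolved ABR/MLR waveform.

# Superposition synthesis: an ASSR predicted by convolving a 40-Hz impulse train
# with a transient response. The transient here is a synthetic damped sinusoid
# standing in for a real deconvolved ABR/MLR.
import numpy as np

fs = 10000                              # sampling rate (Hz), illustrative
rate = 40.0                             # click rate (Hz)
train_dur = 1.0                         # seconds of click train

# Impulse train at 25-ms intervals (40 Hz).
train = np.zeros(int(train_dur * fs))
train[(np.arange(int(train_dur * rate)) * fs / rate).astype(int)] = 1.0

# Synthetic transient "ABR/MLR": a 30-Hz damped sinusoid lasting ~100 ms, so
# successive responses overlap at the 40-Hz stimulation rate.
t = np.arange(int(0.1 * fs)) / fs
transient = np.exp(-t / 0.03) * np.sin(2 * np.pi * 30 * t)

# The predicted steady-state response is the superposition of shifted transients.
assr_prediction = np.convolve(train, transient)[: len(train)]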


Subject(s)
Auditory Perception/physiology; Contingent Negative Variation/physiology; Evoked Potentials, Auditory/physiology; Reaction Time/physiology; Acoustic Stimulation/methods; Adult; Electroencephalography/methods; Female; Humans; Male; Middle Aged; Time Factors