Results 1 - 20 of 139
1.
Cereb Cortex ; 34(2)2024 01 31.
Article in English | MEDLINE | ID: mdl-38212291

ABSTRACT

Plasticity from auditory experience shapes the brain's encoding and perception of sound. However, whether such long-term plasticity alters the trajectory of short-term plasticity during speech processing has yet to be investigated. Here, we explored the neural mechanisms and interplay between short- and long-term neuroplasticity for rapid auditory perceptual learning of concurrent speech sounds in young, normal-hearing musicians and nonmusicians. Participants learned to identify double-vowel mixtures during ~45 min training sessions recorded simultaneously with high-density electroencephalography (EEG). We analyzed frequency-following responses (FFRs) and event-related potentials (ERPs) to investigate neural correlates of learning at subcortical and cortical levels, respectively. Although both groups showed rapid perceptual learning, musicians showed faster behavioral decisions than nonmusicians overall. Learning-related changes were not apparent in brainstem FFRs. However, plasticity was highly evident in cortex, where ERPs revealed unique hemispheric asymmetries between groups suggestive of different neural strategies (musicians: right hemisphere bias; nonmusicians: left hemisphere bias). Source reconstruction and the early (150-200 ms) time course of these effects localized learning-induced cortical plasticity to auditory-sensory brain areas. Our findings reinforce the domain-general benefits of musicianship but reveal that successful speech sound learning is driven by a critical interplay between long- and short-term mechanisms of auditory plasticity, which first emerge at a cortical level.


Subject(s)
Auditory Cortex, Speech Perception, Humans, Speech, Speech Perception/physiology, Auditory Cortex/physiology, Learning, Electroencephalography, Neuronal Plasticity/physiology, Acoustic Stimulation
2.
Cereb Cortex ; 33(18): 10076-10086, 2023 09 09.
Article in English | MEDLINE | ID: mdl-37522248

ABSTRACT

So-called duplex speech stimuli, which present perceptually ambiguous spectral cues to one ear and an isolated low- versus high-frequency third-formant "chirp" to the opposite ear, yield a coherent percept that supports phonetic categorization. Critically, such dichotic sounds are only perceived categorically upon binaural integration. Here, we used frequency-following responses (FFRs), scalp-recorded potentials reflecting phase-locked subcortical activity, to investigate brainstem responses to fused speech percepts and to determine whether FFRs reflect binaurally integrated category-level representations. We recorded FFRs to diotic and dichotic stop-consonants (/da/, /ga/) that either did or did not require binaural fusion to label properly, along with perceptually ambiguous sounds lacking a clear phonetic identity. Behaviorally, listeners showed clear categorization of dichotic speech tokens, confirming they were heard with a fused, phonetic percept. Neurally, we found FFRs were stronger for categorically perceived speech relative to category-ambiguous tokens but also differentiated phonetic categories for both diotically and dichotically presented speech sounds. Correlations between neural and behavioral data further showed FFR latency predicted the degree to which listeners labeled tokens as "da" versus "ga." The presence of binaurally integrated, category-level information in FFRs suggests human brainstem processing reflects a surprisingly abstract level of the speech code typically circumscribed to much later cortical processing.


Subject(s)
Speech Perception, Speech, Humans, Speech Perception/physiology, Brain Stem/physiology, Brain/physiology, Hearing, Auditory Perception/physiology, Acoustic Stimulation
3.
Sensors (Basel) ; 24(4)2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38400211

ABSTRACT

Mild cognitive impairment (MCI) is a deviation in the soundness of cognitive health, and it is important to monitor it early to prevent progression to more serious conditions such as dementia, Alzheimer's disease (AD), and Parkinson's disease (PD). Traditionally, MCI severity is monitored with manual scoring using the Montreal Cognitive Assessment (MoCA). In this study, we propose a new MCI severity monitoring algorithm with regression analysis of extracted features of single-channel electroencephalography (EEG) data by automatically generating severity scores equivalent to MoCA scores. We evaluated both multi-trial and single-trial analysis for the algorithm development. For multi-trial analysis, 590 features were extracted from the prominent event-related potential (ERP) points and corresponding time-domain characteristics, and we utilized the lasso regression technique to select the best feature set. The 13 best features were used in the classical regression techniques: multivariate regression (MR), ensemble regression (ER), support vector regression (SVR), and ridge regression (RR). The best results were observed for ER, which achieved an RMSE of 1.6 and was further validated by residual analysis. In single-trial analysis, we extracted a time-frequency plot image from each trial and fed it as an input to a constructed convolutional deep neural network (CNN). This deep CNN model resulted in an RMSE of 2.76. To our knowledge, this is the first attempt to generate automated scores for MCI severity equivalent to MoCA from single-channel EEG data with multi-trial and single-trial analyses.
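The multi-trial pipeline above (lasso feature selection followed by ensemble regression scored by RMSE) can be sketched as follows. This is an illustrative reconstruction only: the feature matrix, targets, and the random-forest stand-in for the paper's ensemble regressor are my assumptions, not the study's data or exact estimators.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 590))            # trials x 590 ERP features (synthetic)
w = np.zeros(590); w[:5] = [3.0, -2.0, 1.5, 1.0, -1.0]
y = X @ w + rng.normal(0, 0.5, 120)            # stand-in for MoCA-equivalent scores

lasso = LassoCV(cv=5).fit(X, y)                # step 1: sparse feature selection
selected = np.flatnonzero(lasso.coef_)         # surviving feature indices

er = RandomForestRegressor(n_estimators=200, random_state=0)  # step 2: ensemble regression
pred = cross_val_predict(er, X[:, selected], y, cv=5)
rmse = float(np.sqrt(mean_squared_error(y, pred)))
print(f"{selected.size} features kept, cross-validated RMSE = {rmse:.2f}")
```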


Subject(s)
Alzheimer Disease, Cognitive Dysfunction, Humans, Cognitive Dysfunction/diagnosis, Regression Analysis, Electroencephalography/methods, Patient Acuity
4.
Neuroimage ; 269: 119899, 2023 04 01.
Article in English | MEDLINE | ID: mdl-36720437

ABSTRACT

The brain transforms continuous acoustic events into discrete category representations to downsample the speech signal for our perceptual-cognitive systems. Such phonetic categories are highly malleable, and their percepts can change depending on surrounding stimulus context. Previous work suggests this acoustic-phonetic mapping and the perceptual warping of speech emerge in the brain no earlier than auditory cortex. Here, we examined whether these auditory-category phenomena inherent to speech perception occur even earlier in the human brain, at the level of the auditory brainstem. We recorded speech-evoked frequency-following responses (FFRs) during a task designed to induce more/less warping of listeners' perceptual categories depending on the stimulus presentation order of a speech continuum (random, forward, backward directions). We used a novel clustered stimulus paradigm to rapidly record the high trial counts needed for FFRs concurrent with active behavioral tasks. We found serial stimulus order caused perceptual shifts (hysteresis) near listeners' category boundary, confirming identical speech tokens are perceived differentially depending on stimulus context. Critically, we further show neural FFRs during active (but not passive) listening are enhanced for prototypical vs. category-ambiguous tokens and are biased in the direction of listeners' phonetic label even for acoustically identical speech stimuli. These findings were not observed in the stimulus acoustics or in model FFR responses generated via a computational model of cochlear and auditory nerve transduction, confirming a central origin to the effects. Our data reveal FFRs carry category-level information and suggest top-down processing actively shapes the neural encoding and categorization of speech at subcortical levels. These findings suggest the acoustic-phonetic mapping and perceptual warping in speech perception occur surprisingly early along the auditory neuroaxis, which might aid understanding by reducing the ambiguity inherent to the speech signal.
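One way to quantify the hysteresis described above is to fit a psychometric (logistic) function to identification rates for forward vs. backward presentation orders and compare the fitted 50% boundaries. A minimal sketch, with an assumed 7-step continuum and made-up response proportions:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Psychometric function; x0 is the fitted category boundary."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

steps = np.arange(1, 8)  # 7-step continuum (assumed length)
p_fwd = np.array([0.02, 0.05, 0.10, 0.30, 0.75, 0.95, 0.99])  # forward order
p_bwd = np.array([0.02, 0.04, 0.08, 0.18, 0.55, 0.90, 0.99])  # backward order

popt_fwd, _ = curve_fit(logistic, steps, p_fwd, p0=[4.0, 1.0])
popt_bwd, _ = curve_fit(logistic, steps, p_bwd, p0=[4.0, 1.0])
print(f"hysteresis = {popt_bwd[0] - popt_fwd[0]:.2f} continuum steps")
```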


Subject(s)
Speech Perception, Speech, Humans, Brain/physiology, Brain Stem/physiology, Auditory Perception/physiology, Speech Perception/physiology, Acoustic Stimulation
5.
Int J Audiol ; 62(10): 920-926, 2023 10.
Article in English | MEDLINE | ID: mdl-35822427

ABSTRACT

OBJECTIVE: We investigated auditory temporal processing in children with amblyaudia (AMB), a subtype of auditory processing disorder (APD), via cortical neural entrainment. DESIGN AND STUDY SAMPLES: Evoked responses were recorded to click-trains at slow vs. fast (8.5 vs. 14.9/s) rates in n = 14 children with AMB and n = 11 age-matched controls. Source and time-frequency analyses (TFA) decomposed EEGs into oscillations (reflecting neural entrainment) stemming from bilateral auditory cortex. RESULTS: Phase-locking strength in AMB depended critically on the speed of auditory stimuli. In contrast to age-matched peers, AMB responses were largely insensitive to rate manipulations. This rate resistance occurred regardless of the ear of presentation and in both cortical hemispheres. CONCLUSIONS: Children with AMB show fewer rate-related changes in auditory cortical entrainment. In addition to a reduced capacity to integrate information between the ears, we identify a more rigid tagging of external auditory stimuli. Our neurophysiological findings may account for the domain-general temporal processing deficits commonly observed behaviourally in AMB and related APDs. More broadly, our findings may inform communication strategies and future rehabilitation programmes; increasing the rate of stimuli above a normal (slow) speech rate is likely to make stimulus processing more challenging for individuals with AMB/APD.
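The phase-locking (neural entrainment) measure implied above is commonly computed as inter-trial phase coherence (ITPC) at the stimulation rate. A hedged sketch on simulated trials, not the study's source waveforms:

```python
import numpy as np

fs, rate = 500.0, 8.5                       # sampling rate; slow click rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)
# 50 trials: entrained sinusoid at the stimulation rate plus noise
trials = np.sin(2 * np.pi * rate * t) + rng.standard_normal((50, t.size))

spec = np.fft.rfft(trials, axis=1)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = np.argmin(np.abs(freqs - rate))         # FFT bin nearest the click rate
itpc = np.abs(np.mean(spec[:, k] / np.abs(spec[:, k])))  # mean unit phase vector
print(f"ITPC at {freqs[k]:.1f} Hz = {itpc:.2f}")  # ~1 strong, ~0 no entrainment
```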


Subject(s)
Auditory Cortex, Auditory Perceptual Disorders, Speech Perception, Humans, Child, Auditory Cortex/physiology, Acoustic Stimulation, Auditory Perception/physiology, Electroencephalography, Evoked Potentials, Auditory/physiology, Speech Perception/physiology
6.
Neuroimage ; 263: 119627, 2022 11.
Article in English | MEDLINE | ID: mdl-36122686

ABSTRACT

Experimental evidence in animals demonstrates cortical neurons innervate subcortex bilaterally to tune brainstem auditory coding. Yet, the role of the descending (corticofugal) auditory system in modulating earlier sound processing in humans during speech perception remains unclear. Here, we measured EEG activity as listeners performed speech identification tasks in different noise backgrounds designed to tax perceptual and attentional processing. We hypothesized brainstem speech coding might be tied to attention and arousal states (indexed by cortical α power) that actively modulate the interplay of brainstem-cortical signal processing. When speech-evoked brainstem frequency-following responses (FFRs) were categorized according to cortical α states, we found low α FFRs in noise were weaker, correlated positively with behavioral response times, and were more "decodable" via neural classifiers. Our data provide new evidence for online corticofugal interplay in humans and establish that brainstem sensory representations are continuously yoked to (i.e., modulated by) the ebb and flow of cortical states to dynamically update perceptual processing.
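A rough sketch of the trial-sorting logic described above: index cortical state by per-trial alpha-band (8-12 Hz) power, then median-split trials into low/high-alpha states before averaging the FFR within each state. Band edges, filter order, and data are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0
rng = np.random.default_rng(2)
eeg = rng.standard_normal((200, 800))        # 200 trials x 800 samples (synthetic)

b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
envelope = np.abs(hilbert(filtfilt(b, a, eeg, axis=1), axis=1))
alpha_power = (envelope ** 2).mean(axis=1)   # per-trial cortical alpha power

low = alpha_power <= np.median(alpha_power)  # median split into brain states
ffr_low = eeg[low].mean(axis=0)              # FFR average under low-alpha state
ffr_high = eeg[~low].mean(axis=0)            # FFR average under high-alpha state
```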


Subject(s)
Speech Perception, Speech, Humans, Speech Perception/physiology, Brain Stem/physiology, Evoked Potentials, Auditory, Brain Stem/physiology, Noise, Acoustic Stimulation
7.
J Cogn Neurosci ; 33(5): 840-852, 2021 04 01.
Article in English | MEDLINE | ID: mdl-34449838

ABSTRACT

Categorical judgments of otherwise identical phonemes are biased toward hearing words (i.e., the "Ganong effect"), suggesting lexical context influences perception of even basic speech primitives. Lexical biasing could manifest via late-stage postperceptual mechanisms related to decision processes or, alternatively, top-down linguistic inference that acts on early perceptual coding. Here, we exploited the temporal sensitivity of EEG to resolve the spatiotemporal dynamics of these context-related influences on speech categorization. Listeners rapidly classified sounds from a /gɪ/-/kɪ/ gradient presented in opposing word-nonword contexts (GIFT-kift vs. giss-KISS), designed to bias perception toward lexical items. Phonetic perception shifted toward the direction of words, establishing a robust Ganong effect behaviorally. ERPs revealed a neural analog of lexical biasing emerging within ∼200 msec. Source analyses uncovered a distributed neural network supporting the Ganong effect, including middle temporal gyrus, inferior parietal lobe, and middle frontal cortex. Yet, among Ganong-sensitive regions, only left middle temporal gyrus and inferior parietal lobe predicted behavioral susceptibility to lexical influence. Our findings confirm lexical status rapidly constrains sublexical categorical representations for speech within several hundred milliseconds but likely does so outside the purview of canonical auditory-sensory brain areas.


Subject(s)
Speech Perception, Brain Mapping, Parietal Lobe, Phonetics, Temporal Lobe
8.
Neuroimage ; 235: 118014, 2021 07 15.
Article in English | MEDLINE | ID: mdl-33794356

ABSTRACT

Perceiving speech-in-noise (SIN) demands precise neural coding between brainstem and cortical levels of the hearing system. Attentional processes can then select and prioritize task-relevant cues over competing background noise for successful speech perception. In animal models, brainstem-cortical interplay is achieved via descending corticofugal projections from cortex that shape midbrain responses to behaviorally-relevant sounds. Attentional engagement of corticofugal feedback may assist SIN understanding but has never been confirmed and remains highly controversial in humans. To resolve these issues, we recorded source-level, anatomically constrained brainstem frequency-following responses (FFRs) and cortical event-related potentials (ERPs) to speech via high-density EEG while listeners performed rapid SIN identification tasks. We varied attention with active vs. passive listening scenarios whereas task difficulty was manipulated with additive noise interference. Active listening (but not arousal-control tasks) exaggerated both ERPs and FFRs, confirming attentional gain extends to lower subcortical levels of speech processing. We used functional connectivity to measure the directed strength of coupling between levels and characterize "bottom-up" vs. "top-down" (corticofugal) signaling within the auditory brainstem-cortical pathway. While attention strengthened connectivity bidirectionally, corticofugal transmission disengaged under passive (but not active) SIN listening. Our findings (i) show attention enhances the brain's transcription of speech even prior to cortex and (ii) establish a direct role of the human corticofugal feedback system as an aid to cocktail party speech perception.
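The directed "bottom-up" vs. "top-down" coupling could be approximated with Granger causality between brainstem and cortical time series; this substitutes a generic directed metric for whatever connectivity measure the authors actually used. A sketch on simulated signals in which cortex drives brainstem at a lag (the corticofugal direction):

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
n = 2000
cortex = rng.standard_normal(n)
brainstem = np.zeros(n)
for i in range(2, n):  # brainstem partly driven by lagged cortical activity
    brainstem[i] = (0.5 * cortex[i - 2] + 0.3 * brainstem[i - 1]
                    + 0.5 * rng.standard_normal())

# Column convention: test whether column 2 Granger-causes column 1.
top_down = grangercausalitytests(np.column_stack([brainstem, cortex]), maxlag=4)
bottom_up = grangercausalitytests(np.column_stack([cortex, brainstem]), maxlag=4)
# Each result holds F-tests per lag, e.g. top_down[2][0]["ssr_ftest"].
```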


Subject(s)
Attention/physiology, Hearing/physiology, Noise, Speech Perception/physiology, Acoustic Stimulation, Adolescent, Auditory Pathways/physiology, Auditory Perception, Brain Stem/physiology, Cerebral Cortex/physiology, Connectome, Electroencephalography, Female, Humans, Male, Perceptual Masking
9.
Am J Med Genet A ; 185(12): 3717-3727, 2021 12.
Article in English | MEDLINE | ID: mdl-34331386

ABSTRACT

Sensorineural hearing loss (SNHL) is characteristic of Usher syndrome type 2 (USH2), but less is known about SNHL in nonsyndromic autosomal recessive retinitis pigmentosa (ARRP) and olfaction in USH2A-associated retinal degeneration. The Rate of Progression of USH2A-related Retinal Degeneration (RUSH2A) is a natural history study that enrolled 127 participants, 80 with USH2 and 47 with ARRP. Hearing was measured by pure-tone thresholds and word recognition scores, and olfaction by the University of Pennsylvania Smell Identification Test (UPSIT). SNHL was moderate in 72% of USH2 participants and severe or profound in 25%, while 9% of ARRP participants had moderate adult-onset SNHL. Pure-tone thresholds worsened with age in ARRP but not in USH2 participants. The degree of SNHL was not associated with other participant characteristics in either USH2 or ARRP. Median pure-tone thresholds in ARRP participants were significantly higher than the normative population (p < 0.001). Among 14 USH2 participants reporting newborn hearing screening results, 7 reported passing. Among RUSH2A participants, 7% had mild microsmia and 5% had moderate or severe microsmia. Their mean (±SD) UPSIT score was 35 (±3), similar to healthy controls (34 [±3]; p = 0.39). Olfaction differed by country (p = 0.02), but was not significantly associated with clinical diagnosis, age, gender, race/ethnicity, smoking status, visual measures, or hearing. Hearing loss in USH2A-related USH2 did not progress with age. ARRP patients had higher pure-tone thresholds than normal. Newborn hearing screening did not identify all USH2A-related hearing loss. Olfaction was not significantly worse than normal in participants with USH2A-related retinal degeneration.


Subject(s)
Extracellular Matrix Proteins/genetics, Genetic Predisposition to Disease, Hearing Loss, Sensorineural/genetics, Retinitis Pigmentosa/genetics, Usher Syndromes/genetics, Adolescent, Adult, Age of Onset, Female, Hearing Loss, Sensorineural/diagnosis, Hearing Loss, Sensorineural/pathology, Humans, Male, Middle Aged, Mutation, Pedigree, Retinal Degeneration/diagnosis, Retinal Degeneration/genetics, Retinal Degeneration/pathology, Retinitis Pigmentosa/diagnosis, Retinitis Pigmentosa/pathology, Smell/genetics, Usher Syndromes/diagnosis, Usher Syndromes/pathology, Young Adult
10.
Proc Natl Acad Sci U S A ; 115(51): 13129-13134, 2018 12 18.
Article in English | MEDLINE | ID: mdl-30509989

ABSTRACT

Musical training is associated with a myriad of neuroplastic changes in the brain, including more robust and efficient neural processing of clean and degraded speech signals at brainstem and cortical levels. These assumptions stem largely from cross-sectional studies between musicians and nonmusicians, which cannot address whether training itself is sufficient to induce physiological changes or whether a preexisting superiority in auditory function predisposes individuals to pursue musical interests and thus appear to enjoy neuroplastic benefits similar to musicians'. Here, we recorded neuroelectric brain activity to clear and noise-degraded speech sounds in individuals without formal music training but who differed in their receptive musical perceptual abilities as assessed objectively via the Profile of Music Perception Skills. We found that listeners with naturally more adept listening skills ("musical sleepers") had enhanced frequency-following responses to speech that were also more resilient to the detrimental effects of noise, consistent with the increased fidelity of speech encoding and speech-in-noise benefits observed previously in highly trained musicians. Further comparisons between these musical sleepers and actual trained musicians suggested that experience provides an additional boost to the neural encoding and perception of speech. Collectively, our findings suggest that the auditory neuroplasticity of music engagement likely involves a layering of both preexisting (nature) and experience-driven (nurture) factors in complex sound processing. In the absence of formal training, individuals with intrinsically proficient auditory systems can exhibit musician-like auditory function that can be further shaped in an experience-dependent manner.


Subject(s)
Auditory Pathways/physiology, Auditory Perception/physiology, Brain/physiology, Music, Neural Pathways/physiology, Speech Perception/physiology, Adult, Cross-Sectional Studies, Evoked Potentials, Auditory, Female, Humans, Male, Teaching, Young Adult
11.
J Acoust Soc Am ; 149(3): 1644, 2021 03.
Article in English | MEDLINE | ID: mdl-33765780

ABSTRACT

Categorical perception (CP) describes how the human brain categorizes speech despite inherent acoustic variability. We examined neural correlates of CP in both evoked and induced electroencephalogram (EEG) activity to evaluate which mode best describes the process of speech categorization. Listeners labeled sounds from a vowel gradient while we recorded their EEGs. From source-reconstructed EEG, we used band-specific evoked and induced neural activity to build parameter-optimized support vector machine (SVM) models to assess how well listeners' speech categorization could be decoded via whole-brain and hemisphere-specific responses. We found whole-brain evoked β-band activity decoded prototypical from ambiguous speech sounds with ∼70% accuracy, whereas induced γ-band oscillations decoded speech categories with ∼95% accuracy. Induced high-frequency (γ-band) oscillations dominated CP decoding in the left hemisphere, whereas lower frequencies (θ-band) dominated the decoding in the right hemisphere. Moreover, feature selection identified 14 brain regions carrying induced activity and 22 regions of evoked activity that were most salient in describing category-level speech representations. Among the areas and neural regimes explored, induced γ-band modulations were most strongly associated with listeners' behavioral CP. The data suggest that the category-level organization of speech is dominated by relatively high-frequency induced brain rhythms.
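A minimal sketch of "parameter-optimized" SVM decoding of speech categories from band-specific EEG features; the synthetic feature matrix and the specific hyperparameter grid are assumptions, not the study's source-level data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.standard_normal((300, 64))              # trials x gamma-band power features
y = (X[:, :4].sum(axis=1) > 0).astype(int)      # prototypical (1) vs ambiguous (0)

grid = GridSearchCV(                            # hyperparameter optimization
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    param_grid={"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01, 0.1]},
    cv=5,
)
acc = cross_val_score(grid, X, y, cv=5).mean()  # nested CV decoding accuracy
print(f"decoding accuracy = {acc:.2f}")
```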


Subject(s)
Speech Perception, Speech, Acoustic Stimulation, Electroencephalography, Evoked Potentials, Auditory, Humans, Phonetics
12.
Ear Hear ; 41(2): 268-277, 2020.
Article in English | MEDLINE | ID: mdl-31283529

ABSTRACT

OBJECTIVES: In noisy environments, listeners benefit from both hearing and seeing a talker, demonstrating that audiovisual (AV) cues enhance speech-in-noise (SIN) recognition. Here, we examined the relative contribution of auditory and visual cues to SIN perception and the strategies used by listeners to decipher speech in noise interference(s). DESIGN: Normal-hearing listeners (n = 22) performed an open-set speech recognition task while viewing audiovisual TIMIT sentences presented under different combinations of signal degradation, including visual (AVn), audio (AnV), or multimodal (AnVn) noise. Acoustic and visual noise was matched in physical signal-to-noise ratio. Eye tracking monitored participants' gaze to different parts of a talker's face during SIN perception. RESULTS: As expected, behavioral performance for clean sentence recognition was better for A-only and AV compared to V-only speech. Similarly, with noise in the auditory channel (AnV and AnVn speech), performance was aided by the addition of visual cues of the talker regardless of whether the visual channel contained noise, confirming a multimodal benefit to SIN recognition. The addition of visual noise (AVn) obscuring the talker's face had little effect on speech recognition by itself. Listeners' eye gaze fixations were biased toward the eyes (and decreased at the mouth) whenever the auditory channel was compromised. Fixating on the eyes was negatively associated with SIN recognition performance. Gaze to the mouth versus the eyes also depended on the gender of the talker. CONCLUSIONS: Collectively, results suggest listeners (1) depend heavily on the auditory over the visual channel when seeing and hearing speech and (2) alter their visual strategy from viewing the mouth to viewing the eyes of a talker under signal degradation, which negatively affects speech perception.
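Matching noise to a target physical signal-to-noise ratio amounts to scaling the noise so the signal/noise RMS ratio hits the desired dB value. A small sketch with a placeholder "sentence" (illustrative values, not the TIMIT materials):

```python
import numpy as np

def add_noise_at_snr(signal, noise, snr_db):
    """Scale `noise` so the mix has the requested SNR in dB, then add it."""
    rms_s = np.sqrt(np.mean(signal ** 2))
    rms_n = np.sqrt(np.mean(noise ** 2))
    target_rms_n = rms_s / (10 ** (snr_db / 20))
    return signal + noise * (target_rms_n / rms_n)

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
speech_like = np.sin(2 * np.pi * 150 * t)     # stand-in for a sentence waveform
mixed = add_noise_at_snr(speech_like, np.random.randn(t.size), snr_db=0.0)
```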


Subject(s)
Speech Perception, Fixation, Ocular, Hearing, Humans, Noise, Speech
13.
Neuroimage ; 201: 116022, 2019 11 01.
Article in English | MEDLINE | ID: mdl-31310863

ABSTRACT

To construct our perceptual world, the brain categorizes variable sensory cues into behaviorally-relevant groupings. Categorical representations are apparent within a distributed fronto-temporo-parietal brain network but how this neural circuitry is shaped by experience remains undefined. Here, we asked whether speech and music categories might be formed within different auditory-linguistic brain regions depending on listeners' auditory expertise. We recorded EEG in highly skilled (musicians) vs. less experienced (nonmusicians) perceivers as they rapidly categorized speech and musical sounds. Musicians showed perceptual enhancements across domains, yet source EEG data revealed a double dissociation in the neurobiological mechanisms supporting categorization between groups. Whereas musicians coded categories in primary auditory cortex (PAC), nonmusicians recruited non-auditory regions (e.g., inferior frontal gyrus, IFG) to generate category-level information. Functional connectivity confirmed nonmusicians' increased left IFG involvement reflects stronger routing of signal from PAC directed to IFG, presumably because sensory coding is insufficient to construct categories in less experienced listeners. Our findings establish auditory experience modulates specific engagement and inter-regional communication in the auditory-linguistic network supporting categorical perception. Whereas early canonical PAC representations are sufficient to generate categories in highly trained ears, less experienced perceivers broadcast information downstream to higher-order linguistic brain areas (IFG) to construct abstract sound labels.


Subject(s)
Auditory Perception/physiology, Linguistics, Neuronal Plasticity/physiology, Adult, Auditory Cortex/physiology, Female, Humans, Male, Music, Prefrontal Cortex/physiology, Speech, Young Adult
14.
J Acoust Soc Am ; 146(1): 60, 2019 07.
Article in English | MEDLINE | ID: mdl-31370660

ABSTRACT

Speech perception requires grouping acoustic information into meaningful linguistic-phonetic units via categorical perception (CP). Beyond shrinking observers' perceptual space, CP might aid degraded speech perception if categories are more resistant to noise than surface acoustic features. Combining audiovisual (AV) cues also enhances speech recognition, particularly in noisy environments. This study investigated the degree to which visual cues from a talker (i.e., mouth movements) aid speech categorization amidst noise interference by measuring participants' identification of clear and noisy speech (0 dB signal-to-noise ratio) presented in auditory-only or combined AV modalities (i.e., A, A+noise, AV, AV+noise conditions). As expected, auditory noise weakened (i.e., produced shallower identification slopes) and slowed speech categorization. Interestingly, additional viseme cues largely counteracted noise-related decrements in performance and stabilized classification speeds in both clear and noise conditions, suggesting more precise acoustic-phonetic representations with multisensory information. Results are parsimoniously described under a signal detection theory framework and by a reduction (visual cues) and increase (noise) in the precision of perceptual object representation, which were not due to lapses of attention or guessing. Collectively, findings show that (i) mapping sounds to categories aids speech perception in "cocktail party" environments; (ii) visual cues help lattice formation of auditory-phonetic categories to enhance and refine speech identification.
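The signal detection framing can be made concrete by computing d' (sensitivity) from hit and false-alarm rates per condition; the rates below are invented for illustration:

```python
from scipy.stats import norm

def dprime(hit_rate, fa_rate):
    """Sensitivity index from hit and false-alarm rates (z-transformed)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(f"AV clear : d' = {dprime(0.95, 0.10):.2f}")
print(f"AV+noise : d' = {dprime(0.80, 0.25):.2f}")  # noise lowers sensitivity
```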

15.
J Neurosci ; 37(13): 3610-3620, 2017 03 29.
Article in English | MEDLINE | ID: mdl-28270574

ABSTRACT

Mild cognitive impairment (MCI) is recognized as a transitional phase in the progression toward more severe forms of dementia and is an early precursor to Alzheimer's disease. Previous neuroimaging studies reveal that MCI is associated with aberrant sensory-perceptual processing in cortical brain regions subserving auditory and language function. However, whether the pathophysiology of MCI extends to speech processing before conscious awareness (brainstem) is unknown. Using a novel electrophysiological approach, we recorded both brainstem and cortical speech-evoked brain event-related potentials (ERPs) in older, hearing-matched human listeners who did and did not present with subtle cognitive impairment revealed through behavioral neuropsychological testing. We found that MCI was associated with changes in neural speech processing characterized by hypersensitive (larger) brainstem and cortical speech encoding in MCI compared with controls in the absence of any perceptual speech deficits. Group differences also interacted with age differentially across the auditory pathway; brainstem responses became larger and cortical ERPs smaller with advancing age. Multivariate classification revealed that dual brainstem-cortical speech activity correctly identified MCI listeners with 80% accuracy, suggesting its application as a biomarker of early cognitive decline. Brainstem responses were also a more robust predictor of individuals' MCI severity than cortical activity. Our findings suggest that MCI is associated with poorer encoding and transfer of speech signals between functional levels of the auditory system and advance the pathophysiological understanding of cognitive aging by identifying subcortical deficits in auditory sensory processing mere milliseconds (<10 ms) after sound onset and before the emergence of perceptual speech deficits.

SIGNIFICANCE STATEMENT: Mild cognitive impairment (MCI) is a precursor to dementia marked by declines in communication skills. Whether MCI pathophysiology extends below the cerebral cortex to affect speech processing before conscious awareness (brainstem) is unknown. By recording neuroelectric brain activity to speech from brainstem and cortex, we show that MCI hypersensitizes the normal encoding of speech information across the hearing brain. Deficient neural responses to speech (particularly those generated from the brainstem) predicted the presence of MCI with high accuracy and before behavioral deficits. Our findings advance the neurological understanding of MCI by identifying a subcortical biomarker in auditory-sensory processing before conscious awareness, which may be a precursor to declines in speech understanding.
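The multivariate classification idea, combining brainstem (FFR) and cortical (ERP) measures to predict MCI status with cross-validation, might look like the following sketch; the sample size, feature names, and classifier choice are assumptions, not the study's pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 40                                  # synthetic listeners
# columns: FFR amplitude, FFR latency, cortical N1 amplitude, P2 latency
X = rng.standard_normal((n, 4))
y = rng.integers(0, 2, n)               # 1 = MCI, 0 = control
X[y == 1, 0] += 1.2                     # MCI: larger ("hypersensitive") responses

acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
print(f"cross-validated accuracy = {acc:.2f}")
```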


Subject(s)
Aging, Brain Stem/physiopathology, Cerebral Cortex/physiopathology, Cognitive Dysfunction/physiopathology, Speech Disorders/physiopathology, Speech Perception, Adult, Aged, Aged, 80 and over, Cognitive Dysfunction/complications, Female, Humans, Male, Middle Aged, Nerve Net/physiopathology, Speech Disorders/etiology
16.
Neuroimage ; 175: 56-69, 2018 07 15.
Article in English | MEDLINE | ID: mdl-29604459

ABSTRACT

Frequency-following responses (FFRs) are neurophonic potentials that provide a window into the encoding of complex sounds (e.g., speech/music), auditory disorders, and neuroplasticity. While the neural origins of the FFR remain debated, controversy has reemerged after demonstrations that FFRs recorded via magnetoencephalography (MEG) are dominated by cortical rather than brainstem structures as previously assumed. Here, we recorded high-density (64 ch) FFRs via EEG and applied state-of-the-art source imaging techniques to multichannel data (discrete dipole modeling, distributed imaging, independent component analysis, computational simulations). Our data confirm a mixture of generators localized to bilateral auditory nerve (AN), brainstem inferior colliculus (BS), and bilateral primary auditory cortex (PAC). However, frequency-specific scrutiny of source waveforms showed the relative contribution of these nuclei to the aggregate FFR varied across stimulus frequencies. Whereas AN and BS sources produced robust FFRs up to ∼700 Hz, PAC showed weak phase-locking with little FFR energy above the speech fundamental (100 Hz). Notably, CLARA imaging further showed PAC activation was eradicated for FFRs >150 Hz, above which only subcortical sources remained active. Our results show (i) the site of FFR generation varies critically with stimulus frequency; and (ii) opposite the pattern observed in MEG, subcortical structures make the largest contribution to electrically recorded FFRs (AN ≥ BS > PAC). We infer that the cortical dominance observed in previous neuromagnetic data is likely due to the bias of MEG toward superficial brain tissue, underestimating the subcortical structures that drive most of the speech-FFR. Cleanly separating subcortical from cortical FFRs can be achieved by ensuring stimulus frequencies are >150-200 Hz, above the phase-locking limit of cortical neurons.
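The closing recommendation can be operationalized by restricting analysis to FFR energy above the ~150 Hz cortical phase-locking limit, e.g., via a high-pass filter; the cutoff and filter order below are assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 10000.0
t = np.arange(0, 0.5, 1 / fs)
# Toy FFR: energy at a 100 Hz fundamental plus a 300 Hz harmonic
ffr = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)

sos = butter(4, 150 / (fs / 2), btype="highpass", output="sos")
ffr_above_150 = sosfiltfilt(sos, ffr)  # retains only putatively subcortical energy
```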


Subject(s)
Auditory Cortex/physiology, Electroencephalography/methods, Evoked Potentials, Auditory/physiology, Image Processing, Computer-Assisted/methods, Inferior Colliculi/physiology, Speech Perception/physiology, Adult, Auditory Cortex/diagnostic imaging, Evoked Potentials, Auditory, Brain Stem/physiology, Female, Humans, Inferior Colliculi/diagnostic imaging, Male, Young Adult
17.
J Med Syst ; 42(10): 185, 2018 Aug 30.
Article in English | MEDLINE | ID: mdl-30167826

ABSTRACT

Body sensor network (BSN) is a promising human-centric technology for monitoring neurophysiological data. We propose a fully-reconfigurable architecture that addresses the major challenges of a heterogeneous BSN, such as scalability, modularity, and flexibility in deployment. Existing BSNs, especially those with electroencephalogram (EEG) sensing, have these limitations mainly due to the use of the driven-right-leg (DRL) circuit. We address these limitations by custom-designing DRL-less EEG smart sensing nodes (SSN) for modular and spatially distributed systems. Each single-channel EEG SSN, with an input-referred noise of 0.82 µVrms and a CMRR of 70 dB (at 60 Hz), samples brain signals at 512 sps. SSNs in the network can be configured at the time of deployment and can process information locally to significantly reduce the data payload of the network. A Control Command Node (CCN) initializes, synchronizes, and periodically scans for the available SSNs in the network, aggregates their data, and sends it wirelessly to a paired device at a baud rate of 115.2 kbps. At the given I2C bus speed of 100 kbps, the CCN can configure up to 39 EEG SSNs in a Lego-like platform. The temporal and frequency-domain performance of the designed DRL-less EEG SSNs is evaluated against a research-grade Neuroscan and a consumer-grade Emotiv EPOC EEG. The results show that the proposed network system with wearable EEG can be deployed in situ for continuous brain signal recording in real-life scenarios. The proposed system can also seamlessly incorporate other physiological SSNs for ECG, HRV, temperature, etc. along with EEG within the same topology.
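A back-of-envelope sketch of the bus-capacity reasoning: how many single-channel SSNs a 100 kbps I2C bus can service depends on the per-node payload, which on-node processing reduces. The bit widths and protocol overhead below are my assumptions, not the paper's specification; the point is that local data reduction is what makes node counts near the reported 39 feasible.

```python
I2C_BPS = 100_000      # bus speed stated above
FS = 512               # samples/s per single-channel EEG SSN

def max_nodes(bits_per_sample, overhead=1.25, reduction=1.0):
    """Nodes a shared bus can service; `reduction` models on-node processing."""
    per_node_bps = FS * bits_per_sample * overhead / reduction
    return int(I2C_BPS // per_node_bps)

print(max_nodes(16))                 # raw 16-bit samples: ~9 nodes
print(max_nodes(16, reduction=5.0))  # 5x local data reduction: ~48 nodes
```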


Subject(s)
Brain/physiology, Electroencephalography, Wearable Electronic Devices, Amplifiers, Electronic, Computer Communication Networks, Humans
18.
Eur J Neurosci ; 45(5): 690-699, 2017 03.
Article in English | MEDLINE | ID: mdl-28112440

ABSTRACT

Categorical perception (CP) is highly evident in audition when listeners' perception of speech sounds abruptly shifts identity despite equidistant changes in stimulus acoustics. While CP is an inherent property of speech perception, how (or if) it is expressed in other auditory modalities (e.g., music) is less clear. Moreover, prior neuroimaging studies have been equivocal on whether attentional engagement is necessary for the brain to categorically organize sound. To address these questions, we recorded neuroelectric brain responses [event-related potentials (ERPs)] from listeners as they rapidly categorized sounds along a speech and a music continuum (active task) or during passive listening. Behaviorally, listeners achieved sharper psychometric functions and faster identification for speech than for musical stimuli, which were perceived in a continuous mode. Behavioral results coincided with stronger ERP differentiation between prototypical and ambiguous tokens (i.e., categorical processing) for speech but not for music. Neural correlates of CP were only observed when listeners actively attended to the auditory signal. These findings were corroborated by brain-behavior associations; changes in neural activity predicted more successful CP (psychometric slopes) for active but not passively evoked ERPs. Our results demonstrate auditory categorization is influenced by attention (active > passive) and is stronger for more familiar/overlearned stimulus domains (speech > music). In contrast to previous studies examining highly trained listeners (i.e., musicians), we infer that (i) CP skills are largely domain-specific and do not generalize to stimuli for which a listener has no immediate experience and (ii) categorical neural processing requires active engagement with the auditory stimulus.


Subject(s)
Attention, Brain/physiology, Speech Perception, Adult, Evoked Potentials, Female, Humans, Male, Music
19.
Ear Hear ; 38(4): e215-e226, 2017.
Article in English | MEDLINE | ID: mdl-28125444

ABSTRACT

OBJECTIVES: Providing cochlear implant (CI) patients the optimal signal processing settings during mapping sessions is critical for facilitating their speech perception. Here, we aimed to evaluate whether auditory cortical event-related potentials (ERPs) could be used to objectively determine optimal CI parameters. DESIGN: While recording neuroelectric potentials, we presented a set of acoustically vocoded consonants (aKa, aSHa, and aNa) to normal-hearing listeners (n = 12) that simulated speech tokens processed through four different combinations of CI stimulation rate and number of spectral maxima. Parameter settings were selected to feature relatively fast/slow stimulation rates and high/low number of maxima; 1800 pps/20 maxima, 1800/8, 500/20 and 500/8. RESULTS: Speech identification and reaction times did not differ with changes in either the number of maxima or stimulation rate indicating ceiling behavioral performance. Similarly, we found that conventional univariate analysis (analysis of variance) of N1 and P2 amplitude/latency failed to reveal strong modulations across CI-processed speech conditions. In contrast, multivariate discriminant analysis based on a combination of neural measures was used to create "neural confusion matrices" and identified a unique parameter set (1800/8) that maximally differentiated speech tokens at the neural level. This finding was corroborated by information transfer analysis which confirmed these settings optimally transmitted information in listeners' neural and perceptual responses. CONCLUSIONS: Translated to actual implant patients, our findings suggest that scalp-recorded ERPs might be useful in determining optimal signal processing settings from among a closed set of parameter options and aid in the objective fitting of CI devices.
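A sketch of building a "neural confusion matrix": classify which vocoded token was heard from combined ERP measures via discriminant analysis and tabulate decoded against presented tokens (the features here are synthetic, not the study's N1/P2 data):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(6)
X = rng.standard_normal((180, 6))   # e.g., N1/P2 amplitudes and latencies
y = np.repeat([0, 1, 2], 60)        # 0 = aKa, 1 = aSHa, 2 = aNa
X[y == 1, 0] += 1.5                 # give one token a separable neural signature

pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=5)
print(confusion_matrix(y, pred))    # rows: presented token, cols: decoded token
```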


Subject(s)
Auditory Cortex/physiology, Cochlear Implants, Evoked Potentials, Auditory/physiology, Speech Perception/physiology, Acoustic Stimulation/methods, Adult, Female, Healthy Volunteers, Humans, Male, Middle Aged, Reaction Time, Young Adult
20.
BMC Ophthalmol ; 17(1): 240, 2017 Dec 07.
Article in English | MEDLINE | ID: mdl-29212538

ABSTRACT

BACKGROUND: In this study, we examined audiovisual (AV) processing in normal and visually impaired individuals who exhibit partial loss of vision due to inherited retinal dystrophies (IRDs). METHODS: Two groups were analyzed for this pilot study: Group 1 was composed of IRD participants: two with autosomal dominant retinitis pigmentosa (RP), two with autosomal recessive cone-rod dystrophy (CORD), and two with the related complex disorder, Bardet-Biedl syndrome (BBS); Group 2 was composed of 15 non-IRD participants (controls). Audiovisual looming and receding stimuli (conveying perceptual motion) were used to assess the cortical processing and integration of unimodal (A or V) and multimodal (AV) sensory cues. Electroencephalography (EEG) was used to simultaneously resolve the temporal and spatial characteristics of AV processing and assess differences in neural responses between groups. Measurement of AV integration was accomplished via quantification of the EEG's spectral power and event-related brain potentials (ERPs). RESULTS: Results show that IRD individuals exhibit reduced AV integration for concurrent audio and visual (AV) stimuli but increased brain activity during the unimodal A (but not V) presentation. This was corroborated in behavioral responses, where IRD patients showed slower and less accurate judgments of AV and V stimuli but more accurate responses in the A-alone condition. CONCLUSIONS: Collectively, our findings imply a neural compensation from auditory sensory brain areas due to visual deprivation.


Subject(s)
Auditory Perception/physiology, Retinal Dystrophies/physiopathology, Visual Perception/physiology, Acoustic Stimulation/methods, Adult, Electroencephalography, Female, Humans, Male, Photic Stimulation/methods, Pilot Projects, Regression Analysis, Young Adult