Results 1 - 12 of 12
1.
Article in English | MEDLINE | ID: mdl-38082885

ABSTRACT

Block-design is a popular experimental paradigm for functional near-infrared spectroscopy (fNIRS). Traditional block-design analysis techniques, such as generalized linear modeling (GLM) and waveform averaging (WA), assume that the brain is a time-invariant system; this assumption is flawed. In this paper, we propose a parametric Gaussian model to quantify the time-variant behavior found across consecutive trials of block-design fNIRS experiments. Using simulated data at different signal-to-noise ratios (SNRs), we demonstrate that the proposed technique can characterize Gaussian-like fNIRS signal features at SNRs of 3 dB and above. When fitted to recorded data from an auditory block-design experiment, the model parameters quantitatively revealed statistically significant changes in fNIRS responses across trials, consistent with visual inspection of data from individual trials. These results suggest that the model effectively captures trial-to-trial differences in response, enabling researchers to study time-variant brain responses using block-design fNIRS experiments.
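The kind of per-trial parametric Gaussian fit this abstract describes can be sketched as follows. This is a minimal illustration on synthetic data: the amplitude/peak-time/width parametrization, the sampling rate, and all numeric values are assumptions for demonstration, not the paper's actual model.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_response(t, amplitude, peak_time, width):
    """Parametric Gaussian model of one trial's hemodynamic response."""
    return amplitude * np.exp(-((t - peak_time) ** 2) / (2 * width ** 2))

# Synthetic trial: a Gaussian-like HbO response plus noise (hypothetical values).
rng = np.random.default_rng(0)
t = np.linspace(0, 30, 300)                  # one 30 s trial, 10 Hz sampling
true_params = (1.2, 12.0, 3.0)               # amplitude, peak time (s), width (s)
trial = gaussian_response(t, *true_params) + 0.05 * rng.standard_normal(t.size)

# Fit the model to the trial; repeating this per trial yields parameter
# trajectories that quantify time-variant behavior across a block-design run.
popt, _ = curve_fit(gaussian_response, t, trial, p0=(1.0, 10.0, 2.0))
amplitude_hat, peak_hat, width_hat = popt
```

Comparing the fitted parameters across consecutive trials (rather than averaging trials together, as WA does) is what exposes trial-to-trial changes.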


Subject(s)
Brain , Spectroscopy, Near-Infrared , Spectroscopy, Near-Infrared/methods , Brain/diagnostic imaging , Brain/physiology , Linear Models
2.
Article in English | MEDLINE | ID: mdl-38083703

ABSTRACT

Resting-state functional connectivity is a promising tool for understanding and characterizing brain network architecture. However, obtaining uninterrupted long recordings of resting-state data is challenging in many clinically relevant populations. Moreover, the interpretation of connectivity results may depend heavily on the data length and the functional connectivity measure used. We compared the performance of three frequency-domain connectivity measures (magnitude-squared, wavelet, and multitaper coherence) and the effect of data lengths ranging from 3 to 9 minutes. Performance was characterized by the ability to distinguish two groups of channel pairs with known different connectivity strengths. All of the measures distinguished the two groups better as data length increased, but wavelet coherence performed best for the shortest time window of 3 minutes. Knowing which measure is most reliable when only shorter fNIRS recordings are available could make functional connectivity biomarkers more feasible in clinical populations of interest.
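One of the measures compared above, magnitude-squared coherence, can be sketched on synthetic 3-minute "recordings" as follows. The sampling rate, noise levels, and band limits are assumptions chosen for illustration, not values from the study.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs = 10.0                              # assumed fNIRS sampling rate (Hz)
n = int(fs * 180)                      # 3-minute recording
shared = rng.standard_normal(n)        # component common to "connected" channels

# Two connected channels share a component; a third is independent noise.
ch_a = shared + 0.5 * rng.standard_normal(n)
ch_b = shared + 0.5 * rng.standard_normal(n)
ch_c = rng.standard_normal(n)

# Welch-based magnitude-squared coherence per frequency bin.
f, coh_ab = coherence(ch_a, ch_b, fs=fs, nperseg=256)
f, coh_ac = coherence(ch_a, ch_c, fs=fs, nperseg=256)

# Average coherence in a low-frequency band of the kind used for
# resting-state connectivity (band edges are illustrative).
band = (f >= 0.01) & (f <= 0.1)
coh_connected = coh_ab[band].mean()
coh_unconnected = coh_ac[band].mean()
```

With shorter recordings the Welch estimate averages fewer segments, so its bias for unconnected pairs grows, which is one way data length degrades the separation between the two groups of channel pairs.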


Subject(s)
Brain Mapping , Brain , Brain/diagnostic imaging , Brain Mapping/methods , Spectrum Analysis
3.
Article in English | MEDLINE | ID: mdl-38083712

ABSTRACT

Many studies on morphology analysis show that if short inter-stimulus intervals separate tasks, the hemodynamic response amplitude returns to the resting-state baseline before the subsequent stimulation onset; hence, responses to successive tasks do not overlap. Accordingly, popular brain imaging analysis techniques assume that changes in hemodynamic response amplitude subside after a short time (around 15 seconds). However, whether this assumption holds when studying brain functional connectivity has yet to be investigated. This paper assesses whether the functional connectivity network in control trials returns to the resting-state functional connectivity network. Traditionally, control trials in block-design experiments are used to evaluate response morphology in the absence of a stimulus. We analyzed data from an event-related experiment with audio and visual stimuli and a resting-state recording. Our results showed that functional connectivity networks during control trials were more similar to those of task trials than to resting-state networks. In other words, contrary to task-related changes in hemodynamic amplitude, where responses settle after a short time, the brain's functional connectivity networks do not return to their intrinsic resting-state network in such short intervals.
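The network comparison described above can be sketched by correlating connectivity matrices. In this toy example (matrix size, noise level, and the construction of the matrices are arbitrary), a "control" network built to resemble the "task" network scores higher similarity to task than to an independent "rest" network, mirroring the paper's finding in form only.

```python
import numpy as np

def network_similarity(conn_a, conn_b):
    """Correlate the vectorized upper triangles of two connectivity matrices."""
    iu = np.triu_indices_from(conn_a, k=1)
    return np.corrcoef(conn_a[iu], conn_b[iu])[0, 1]

def sym(m):
    """Symmetrize a square matrix (connectivity matrices are symmetric)."""
    return (m + m.T) / 2

rng = np.random.default_rng(2)
n = 20                                           # hypothetical channel count
task = sym(rng.standard_normal((n, n)))          # "task" network
rest = sym(rng.standard_normal((n, n)))          # independent "resting" network
control = task + 0.3 * sym(rng.standard_normal((n, n)))  # control resembles task

sim_control_task = network_similarity(control, task)
sim_control_rest = network_similarity(control, rest)
```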


Subject(s)
Magnetic Resonance Imaging , Nerve Net , Magnetic Resonance Imaging/methods , Nerve Net/diagnostic imaging , Nerve Net/physiology , Rest/physiology , Brain/diagnostic imaging , Brain/physiology , Neuroimaging
4.
J Neural Eng ; 20(1)2023 02 24.
Article in English | MEDLINE | ID: mdl-36763991

ABSTRACT

Objective. Hearing is an important sensory function that plays a key role in how children learn to speak and develop language skills. Although previous neuroimaging studies have established that much of brain network maturation happens in early childhood, our understanding of the developmental trajectory of language areas is still very limited. We hypothesized that the typical developmental trajectory of language areas in early childhood could be established by analyzing changes in functional connectivity in normal-hearing infants at different ages using functional near-infrared spectroscopy. Approach. Resting-state data were recorded from two bilateral temporal and prefrontal regions associated with language processing by measuring the relative changes of oxy-hemoglobin (HbO) and deoxy-hemoglobin (HbR) concentrations. Connectivity was calculated using the magnitude-squared coherence of channel pairs located in (a) inter-hemispheric homologous and (b) intra-hemispheric brain regions, to assess connectivity between homologous regions across hemispheres and between two regions of interest in the same hemisphere, respectively. Main results. A linear regression model fitted to age vs. coherence for the inter-hemispheric homologous group revealed a significant coefficient of determination for both HbO (R² = 0.216, p = 0.0169) and HbR (R² = 0.206, p = 0.0198). A significant coefficient of determination was also found for the intra-hemispheric group for HbO (R² = 0.237, p = 0.0117) but not for HbR (R² = 0.111, p = 0.0956). Significance. The findings from the HbO data suggest that both inter-hemispheric homologous and intra-hemispheric connectivity between primary language regions strengthen significantly with age in the first year of life.
Mapping out the developmental trajectory of the primary language areas of normal-hearing infants, as measured by functional connectivity, could allow us to better understand altered connectivity and its effects on language delays in infants with hearing impairments.
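The age-vs-coherence regression reported above (a coefficient of determination with a p-value) can be reproduced in form with `scipy.stats.linregress`. The cohort size, ages, and effect size below are invented for illustration.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical cohort: coherence increases weakly with age, plus noise.
rng = np.random.default_rng(3)
age_months = rng.uniform(0, 12, 26)
coh = 0.3 + 0.02 * age_months + 0.05 * rng.standard_normal(26)

# Ordinary least-squares fit of coherence against age.
fit = linregress(age_months, coh)
r_squared = fit.rvalue ** 2        # coefficient of determination, as reported
```

The study reports exactly this pair of statistics (R² and p) for each connectivity group and chromophore; a significant positive slope is what supports the "connectivity strengthens with age" conclusion.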


Subject(s)
Brain , Spectroscopy, Near-Infrared , Child , Humans , Infant , Child, Preschool , Spectroscopy, Near-Infrared/methods , Brain/metabolism , Brain Mapping/methods , Language , Hemoglobins , Magnetic Resonance Imaging
5.
Ear Hear ; 44(4): 776-786, 2023.
Article in English | MEDLINE | ID: mdl-36706073

ABSTRACT

OBJECTIVES: Cardiac responses (e.g., heart rate changes) due to an autonomic response to sensory stimuli have been reported in several studies. This study investigated whether heart rate information extracted from functional near-infrared spectroscopy (fNIRS) data can be used to assess the discrimination of speech sounds in sleeping infants. It also investigated the adaptation of the heart rate response over multiple, sequential stimulus presentations. DESIGN: fNIRS data were recorded from 23 infants with no known hearing loss, aged 2 to 10 months. Speech syllables were presented using a habituation/dishabituation test paradigm: the infant's heart rate response was first habituated by repeating blocks of one speech sound; then, the heart rate response was dishabituated with the contrasting (novel) speech sound. This stimulus presentation sequence was repeated for as long as the infants were asleep. RESULTS: The group-level average heart rate response to the novel stimulus was greater than that to the habituated first sound, indicating that sleeping infants were able to discriminate the speech sound contrast. A significant adaptation of the heart rate responses was seen over the session duration. CONCLUSION: The dishabituation response could be a valuable marker for speech discrimination, especially when used in conjunction with the fNIRS hemodynamic response.


Subject(s)
Deafness , Speech Perception , Humans , Infant , Speech Perception/physiology , Heart Rate , Spectroscopy, Near-Infrared , Speech
6.
Neurophotonics ; 9(1): 015001, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35071689

ABSTRACT

Significance: Functional near-infrared spectroscopy (fNIRS) is a neuroimaging tool that can measure resting-state functional connectivity; however, non-neuronal components present in fNIRS signals introduce false discoveries in connectivity, which can affect the interpretation of functional networks. Aim: We investigated the effect of short channel correction on resting-state connectivity by removing non-neuronal signals from fNIRS long channel data. We hypothesized that false discoveries in connectivity could be reduced, improving the discriminability of functional networks with known, different connectivity strengths. Approach: A principal component analysis-based short channel correction technique was applied to resting-state data from 10 healthy adult subjects. Connectivity was analyzed using the magnitude-squared coherence of channel pairs in connectivity groups of homologous and control brain regions, which are known to differ in connectivity. Results: Removing non-neuronal components using short channel correction significantly reduced coherence for oxy-hemoglobin concentration changes in the frequency bands associated with resting-state connectivity that overlap with the Mayer wave frequencies. The results showed that short channel correction reduced spurious correlations in connectivity measures and improved the discriminability between homologous and control groups. Conclusions: Resting-state functional connectivity analysis with short channel correction performs better than analysis without correction in its ability to distinguish functional networks with distinct connectivity characteristics.
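A minimal sketch of PCA-based short channel correction consistent with the approach described above: the leading principal component of the short-channel signals (assumed to capture superficial physiology such as Mayer waves) is regressed out of each long channel. The signal shapes, channel counts, and noise levels below are invented for illustration.

```python
import numpy as np

def short_channel_correction(long_ch, short_ch, n_components=1):
    """Regress the top principal components of the short channels
    (time x channels) out of each long channel (time x channels)."""
    short_centered = short_ch - short_ch.mean(axis=0)
    u, s, vt = np.linalg.svd(short_centered, full_matrices=False)
    pcs = u[:, :n_components] * s[:n_components]      # PC time courses
    beta, *_ = np.linalg.lstsq(pcs, long_ch, rcond=None)
    return long_ch - pcs @ beta

rng = np.random.default_rng(4)
t = np.arange(0, 60, 0.1)
systemic = np.sin(2 * np.pi * 0.1 * t)                # Mayer-wave-like, 0.1 Hz
neural = rng.standard_normal(t.size)                  # stand-in "neural" signal

long_ch = (neural + systemic)[:, None]                # long channel: both
short_ch = systemic[:, None] + 0.1 * rng.standard_normal((t.size, 2))

corr_before = abs(np.corrcoef(long_ch[:, 0], systemic)[0, 1])
corrected = short_channel_correction(long_ch, short_ch)
corr_after = abs(np.corrcoef(corrected[:, 0], systemic)[0, 1])
```

Because the systemic oscillation is common to long and short channels, removing it deflates the spurious coherence that would otherwise appear between any two long channels sharing it.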

7.
Sci Rep ; 11(1): 24006, 2021 12 14.
Article in English | MEDLINE | ID: mdl-34907273

ABSTRACT

Speech detection and discrimination ability are important measures of hearing ability that may inform crucial audiological intervention decisions for individuals with a hearing impairment. However, behavioral assessment of speech discrimination can be difficult and inaccurate in infants, prompting the need for an objective measure of speech detection and discrimination ability. In this study, the authors used functional near-infrared spectroscopy (fNIRS) as the objective measure. Twenty-three infants, 2 to 10 months of age, participated, all of whom had passed newborn hearing screening or diagnostic audiology testing. They were presented with speech tokens at a comfortable listening level in a natural sleep state using a habituation/dishabituation paradigm. The authors hypothesized that fNIRS responses to speech token detection, as well as to speech token contrast discrimination, could be measured in individual infants. The authors found significant fNIRS responses to speech detection in 87% of tested infants (false positive rate 0%), as well as to speech discrimination in 35% of tested infants (false positive rate 9%). The results show initial promise for the use of fNIRS as an objective clinical tool for measuring infant speech detection and discrimination ability; the authors highlight the further optimizations of test procedures and analysis techniques that would be required to improve accuracy and reliability to the levels needed for clinical decision-making.


Subject(s)
Acoustic Stimulation , Spectroscopy, Near-Infrared , Speech Perception/physiology , Speech/physiology , Female , Humans , Infant , Male
8.
J Neural Eng ; 18(4)2021 06 04.
Article in English | MEDLINE | ID: mdl-34010826

ABSTRACT

Objective. Stimulus-elicited changes in electroencephalography (EEG) recordings can be represented using Fourier magnitude and phase features (Makeig et al. 2004 Trends Cogn. Sci. 8 204-10). The present study aimed to quantify how much information about hearing responses is contained in the magnitude, quantified by event-related spectral perturbations (ERSPs), and in the phase, quantified by inter-trial coherence (ITC). By testing whether one feature contained more information and whether the information in the two features was mutually exclusive, we aimed to relate specific EEG magnitude and phase features to hearing perception. Approach. EEG responses were recorded from 20 adults presented with acoustic stimuli and from 20 adult cochlear implant users presented with electrical stimuli. Both groups were presented with short, 50 ms stimuli at varying intensity levels relative to their hearing thresholds. Extracted ERSP and ITC features were inputs for a linear discriminant analysis classifier (Wong et al. 2016 J. Neural Eng. 13 036003). The classifier then predicted whether the EEG signal contained information about the sound stimuli based on the input features. Classifier decoding accuracy was quantified with the mutual information measure (Cottaris and Elfar 2009 J. Neural Eng. 6 026007; Hawellek et al. 2016 Proc. Natl Acad. Sci. 113 13492-7) and compared across the two feature sets, and against the case in which both feature sets were combined. Main results. We found that classifiers using either ITC or ERSP feature sets could decode hearing perception, but ITC-feature classifiers could decode responses at a lower but still audible stimulation intensity, making ITC more useful than ERSP for hearing threshold estimation. We also found that combining the information from both feature sets did not improve decoding significantly, implying that ERSP brain dynamics make a limited contribution to the EEG response, possibly due to the stimuli used in this study. Significance. We successfully related hearing perception to an EEG measure that does not require behavioral feedback from the listener; an objective measure is important in both neuroscience research and clinical audiology.
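The ITC feature can be computed as the magnitude of the trial-averaged unit phase vectors: 1 when every trial has the same phase at a given time-frequency point, near 0 when phases are random. A sketch with synthetic phase-locked vs. phase-random trials (trial counts and the amount of phase jitter are arbitrary):

```python
import numpy as np

def inter_trial_coherence(analytic):
    """ITC: magnitude of the mean unit phase vector across trials.
    `analytic` is a (trials x time) array of complex spectral estimates."""
    return np.abs(np.mean(analytic / np.abs(analytic), axis=0))

rng = np.random.default_rng(5)
n_trials, n_time = 40, 200

# Phase-locked condition: small phase jitter across trials (as expected
# for an audible stimulus); unlocked: uniformly random phase (inaudible).
locked = np.exp(1j * 0.2 * rng.standard_normal((n_trials, n_time)))
unlocked = np.exp(1j * rng.uniform(-np.pi, np.pi, (n_trials, n_time)))

itc_locked = inter_trial_coherence(locked)
itc_unlocked = inter_trial_coherence(unlocked)
```

Features like these, computed per frequency and time point, are what fed the linear discriminant analysis classifier in the study.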


Subject(s)
Cochlear Implants , Evoked Potentials, Auditory , Acoustic Stimulation , Acoustics , Auditory Threshold , Electroencephalography , Hearing
9.
Hear Res ; 377: 24-33, 2019 06.
Article in English | MEDLINE | ID: mdl-30884368

ABSTRACT

Cochlear implant users require fitting of electrical threshold and comfort levels for optimal access to sound. In this study, we used single-channel cortical auditory evoked potentials (CAEPs) obtained from 20 participants using a Nucleus device. A fully objective method to estimate threshold levels was developed, using growth function fitting and the peak phase-locking value feature. Results demonstrated that growth function fitting is a viable method for estimating threshold levels in cochlear implant users, with a strong correlation (r = 0.979, p < 0.001) with behavioural thresholds. Additionally, we compared the threshold estimates using CAEPs acquired from a standard montage (Cz to mastoid) against a montage of recording channels near the cochlear implant, simulating recording from the device itself. The correlation between estimated and behavioural thresholds remained strong (r = 0.966, p < 0.001); however, the recording time had to be increased to produce a similar estimation accuracy. Finally, a method for estimating comfort levels was investigated, and showed that the comfort level estimates were moderately correlated with behavioural comfort levels (r = 0.50, p = 0.024).
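Growth function fitting, as used above, can be sketched by fitting a monotonic function of stimulus level to a response feature and reading a threshold off the fit. The sigmoid form, the feature values, and the use of the fitted midpoint as the threshold proxy are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(level, floor, height, midpoint, slope):
    """Generic sigmoid growth function of stimulus level (assumed form)."""
    return floor + height / (1 + np.exp(-slope * (level - midpoint)))

# Hypothetical peak-PLV feature values measured at a range of current levels.
rng = np.random.default_rng(6)
levels = np.arange(0, 60, 5.0)
plv = sigmoid(levels, 0.1, 0.5, 30.0, 0.3) + 0.01 * rng.standard_normal(levels.size)

# Fit the growth function and take its midpoint as a threshold proxy.
popt, _ = curve_fit(sigmoid, levels, plv, p0=(0.1, 0.5, 25.0, 0.2))
threshold_estimate = popt[2]
```

The appeal of this approach is that it uses the whole feature-vs-level function rather than a detection decision at a single level, which is what makes it fully objective.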


Subject(s)
Auditory Threshold , Cochlear Implantation/instrumentation , Cochlear Implants , Electroencephalography , Evoked Potentials, Auditory , Loudness Perception , Persons With Hearing Impairments/rehabilitation , Prosthesis Fitting , Acoustic Stimulation , Adult , Aged , Aged, 80 and over , Electric Stimulation , Female , Humans , Male , Middle Aged , Persons With Hearing Impairments/psychology , Predictive Value of Tests , Prosthesis Design , Treatment Outcome
10.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 4682-4685, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946907

ABSTRACT

Cortical pitch responses (CPRs) are generated at the initiation of pitch-bearing sounds. CPR components have been shown to reflect the pitch salience of iterated rippled noise with different temporal periodicity. However, it is unclear whether features of the CPR correlate with the pitch salience of resolved and unresolved harmonics of speech when the temporal periodicity is identical, and whether CPRs could be a neural index for auditory cortical pitch processing. In this study, CPRs were recorded to two speech sounds: a set including only resolved harmonics and a set including only unresolved harmonics. Speech-shaped noise preceding and following the speech was used to temporally discriminate the neural activity coding the onset of acoustic energy from the onset of time-varying pitch. Analysis of CPR peak latency and peak amplitude (Na) showed that the peak latency to speech sounds with only resolved harmonics was significantly shorter than for sounds with unresolved harmonics (p = 0.01), and that peak amplitude to sounds with only resolved harmonics was significantly higher than for sounds with unresolved harmonics (p < 0.001). Further, the CPR peak phase locking value in response to sounds with only resolved harmonics was significantly higher than to sounds with only unresolved harmonics (p < 0.001). Our findings suggest that the CPR changes with pitch salience and that CPR is a potentially useful indicator of auditory cortical pitch processing.


Subject(s)
Cerebral Cortex , Noise , Pitch Perception , Acoustic Stimulation , Cerebral Cortex/physiology , Evoked Potentials, Auditory , Humans , Language , Phonetics , Sound
11.
Front Neurosci ; 12: 820, 2018.
Article in English | MEDLINE | ID: mdl-30505262

ABSTRACT

Accurate perception of voice pitch plays a vital role in speech understanding, especially for tonal languages such as Mandarin. Lexical tones are primarily distinguished by the fundamental frequency (F0) contour of the acoustic waveform. It has been shown that the auditory system can extract the F0 from both resolved and unresolved harmonics, and that tone identification performance is better with resolved than with unresolved harmonics. To evaluate the neural response to the resolved and unresolved components of Mandarin tones in quiet and in speech-shaped noise, we recorded the frequency-following response (FFR). Four types of stimuli were used: speech with either only resolved harmonics or only unresolved harmonics, each in quiet and in speech-shaped noise. FFRs were recorded to alternating-polarity stimuli and were added or subtracted to enhance the neural response to the envelope (FFRENV) or the temporal fine structure (FFRTFS), respectively. The neural representation of F0 strength reflected by the FFRENV was evaluated by the peak autocorrelation value in the temporal domain and the peak phase-locking value (PLV) at F0 in the spectral domain. Both evaluation methods showed that the FFRENV F0 strength in quiet was significantly stronger than in noise for speech including unresolved harmonics, but not for speech including resolved harmonics. The neural representation of the temporal fine structure reflected by the FFRTFS was assessed by the PLV at the harmonic nearest F1 (the 4th harmonic of F0); this PLV was significantly larger for resolved than for unresolved harmonics. Spearman's correlation showed that the FFRENV F0 strength for unresolved harmonics was correlated with tone identification performance in noise (0 dB SNR). These results showed that the FFRENV F0 strength for speech sounds with resolved harmonics was not affected by noise, whereas responses to speech sounds with unresolved harmonics were significantly smaller in noise than in quiet. Our results suggest that coding of resolved harmonics is more important than envelope coding for tone identification in noise.
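The add/subtract polarity trick and the autocorrelation-based F0-strength measure used above can be sketched as follows. Pure tones stand in for real FFR recordings, and the sampling rate and F0 are arbitrary; the point is only that averaging opposite-polarity responses keeps polarity-invariant (envelope-like) components while the half-difference keeps components that flip with the stimulus.

```python
import numpy as np

def ffr_env_tfs(resp_pos, resp_neg):
    """Average (envelope-dominated, FFRENV) and half-difference
    (fine-structure-dominated, FFRTFS) of opposite-polarity responses."""
    return (resp_pos + resp_neg) / 2, (resp_pos - resp_neg) / 2

def f0_strength(signal, fs, f0):
    """Peak of the normalized autocorrelation near the F0 lag."""
    sig = signal - signal.mean()
    ac = np.correlate(sig, sig, mode="full")[sig.size - 1:]
    ac = ac / ac[0]
    lag = int(round(fs / f0))
    return ac[lag - 2:lag + 3].max()

fs, f0 = 16000, 100.0
t = np.arange(int(fs * 0.5)) / fs
envelope = np.sin(2 * np.pi * f0 * t)      # polarity-invariant component
fine = np.sin(2 * np.pi * 4 * f0 * t)      # flips sign with stimulus polarity
resp_pos, resp_neg = envelope + fine, envelope - fine

ffr_env, ffr_tfs = ffr_env_tfs(resp_pos, resp_neg)

rng = np.random.default_rng(7)
env_strength = f0_strength(ffr_env, fs, f0)            # strong periodicity at F0
noise_strength = f0_strength(rng.standard_normal(t.size), fs, f0)
```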

12.
Hear Res ; 370: 74-83, 2018 12.
Article in English | MEDLINE | ID: mdl-30336355

ABSTRACT

Cortical auditory evoked potential (CAEP) thresholds have been shown to correlate well with behaviourally determined hearing thresholds. Growth functions of CAEPs show promise as an alternative to single-level detection for objective hearing threshold estimation; however, the accuracy and clinical relevance of this method are not well examined. In this study, we used temporal and spectral CAEP features to generate feature growth functions. Spectral features may be more robust than traditional peak-picking methods where CAEP morphology is variable, such as in children or hearing device users. Behavioural hearing thresholds were obtained and CAEPs were recorded in response to a 1 kHz pure tone from twenty adults with no hearing loss. Four features, peak-to-peak amplitude, root-mean-square, peak spectral power and peak phase-locking value (PLV), were extracted from the CAEPs. Functions relating each feature to stimulus level were used to calculate objective hearing threshold estimates. We assessed the performance of each feature by calculating the difference between the objective estimate and the behaviourally determined threshold. Comparing the accuracy of the estimates across features, we found that the peak PLV feature performed best, with a mean threshold error of 2.7 dB and a standard deviation of 5.9 dB from the behavioural threshold across subjects. We also examined the relation between recording time, data quality and threshold estimate errors, and found that, on average, 12.7 minutes of recording were needed for 95% confidence that a single threshold estimate was within 20 dB of the behavioural threshold using the peak-to-peak amplitude feature, while 14 minutes were needed for the peak PLV feature. These results show that the PLV of CAEPs can be used to find a clinically relevant hearing threshold estimate. Its potential stability under differing morphology may be an advantage in testing infants or cochlear implant users.
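Two of the four waveform features named above are straightforward to compute from an averaged epoch; the other two (peak spectral power and peak PLV) require a spectral decomposition and are omitted here. A deterministic toy example:

```python
import numpy as np

def caep_features(epoch):
    """Peak-to-peak amplitude and root-mean-square of an averaged CAEP
    epoch (two of the four features compared in the study)."""
    return {
        "peak_to_peak": float(epoch.max() - epoch.min()),
        "rms": float(np.sqrt(np.mean(epoch ** 2))),
    }

# Tiny hand-made "epoch" so the feature values are easy to verify by eye.
epoch = np.array([0.0, 1.0, -1.0, 0.5])
features = caep_features(epoch)
```

Plotting either feature against stimulus level yields the growth function from which the objective threshold estimate is read off.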


Subject(s)
Auditory Cortex/physiology , Auditory Threshold , Electroencephalography , Evoked Potentials, Auditory , Hearing Tests/methods , Acoustic Stimulation , Adult , Audiometry, Pure-Tone , Humans , Predictive Value of Tests , Psychoacoustics , Reproducibility of Results , Time Factors , Young Adult