Results 1 - 20 of 46
1.
CNS Neurosci Ther; 30(3): e14385, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37525451

ABSTRACT

AIM: Disruption of functional brain connectivity is thought to underlie disorders of consciousness (DOC) and recovery of impaired connectivity is suggested as an indicator of consciousness restoration. We recently found that rhythmic acoustic-electric trigeminal-nerve stimulation (i.e., musical stimulation synchronized to electrical stimulation of the trigeminal nerve) in the gamma band can improve consciousness in patients with DOC. Here, we investigated whether these beneficial stimulation effects are mediated by alterations in functional connectivity. METHODS: Sixty-three patients with DOC underwent 5 days of gamma, beta, or sham acoustic-electric trigeminal-nerve stimulation. Resting-state electroencephalography was measured before and after the stimulation and functional connectivity was assessed using phase-lag index (PLI). RESULTS: We found that gamma stimulation induces an increase in gamma-band PLI. Further characterization revealed that the enhancing effect is (i) specific to the gamma band (as we observed no comparable change in beta-band PLI and no effect of beta-band acoustic-electric stimulation or sham stimulation), (ii) widely spread across the cortex, and (iii) accompanied by improvements in patients' auditory abilities. CONCLUSION: These findings show that gamma acoustic-electric trigeminal-nerve stimulation can improve resting-state functional connectivity in the gamma band, which in turn may be linked to auditory abilities and/or consciousness restoration in DOC patients.
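
For context, the phase-lag index summarizes how consistently one channel's instantaneous phase leads or lags another's, which makes it insensitive to zero-lag (volume-conduction) coupling. Below is a minimal sketch of a gamma-band PLI computation for one channel pair; the band limits, sampling rate, filter settings, and synthetic data are illustrative assumptions rather than the study's exact pipeline.

```python
# Sketch: phase-lag index (PLI) between two EEG channels in the gamma band.
# Band limits, sampling rate, and the synthetic data are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def gamma_pli(x, y, fs=250.0, band=(30.0, 45.0)):
    """PLI between two 1-D signals after band-pass filtering (0 = no coupling, 1 = maximal)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    dphi = phase_x - phase_y
    # PLI = |mean sign of the phase difference|, computed via sin() to handle wrapping
    return np.abs(np.mean(np.sign(np.sin(dphi))))

# Example with synthetic data: two noisy channels sharing a 40-Hz component at a small lag.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 250.0)
shared = np.sin(2 * np.pi * 40 * t)
x = shared + rng.normal(scale=1.0, size=t.size)
y = np.roll(shared, 3) + rng.normal(scale=1.0, size=t.size)
print(f"gamma-band PLI: {gamma_pli(x, y):.2f}")
```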


Subjects
Brain, Consciousness Disorders, Humans, Consciousness Disorders/therapy, Consciousness/physiology, Electroencephalography, Electric Stimulation
2.
Neuroimage; 285: 120476, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38030051

ABSTRACT

Multimodal stimulation can reverse pathological neural activity and improve symptoms in neuropsychiatric diseases. Recent research shows that multimodal acoustic-electric trigeminal-nerve stimulation (TNS; i.e., musical stimulation synchronized to electrical stimulation of the trigeminal nerve) can improve consciousness in patients with disorders of consciousness. However, the reliability and mechanism of this novel approach remain largely unknown. We explored the effects of multimodal acoustic-electric TNS in healthy human participants by assessing conscious perception before and after stimulation using behavioral and neural measures in tactile and auditory target-detection tasks. To explore the mechanisms underlying the putative effects of acoustic-electric stimulation, we fitted a biologically plausible neural network model to the neural data using dynamic causal modeling. We observed that (1) acoustic-electric stimulation improves conscious tactile perception without a concomitant change in auditory perception, (2) this improvement is caused by the interplay of the acoustic and electric stimulation rather than by either unimodal stimulation alone, and (3) the effect of acoustic-electric stimulation on conscious perception correlates with inter-regional connection changes in a recurrent neural processing model. These results provide evidence that acoustic-electric TNS can promote conscious perception. Alterations in inter-regional cortical connections might be the mechanism by which acoustic-electric TNS achieves its consciousness benefits.
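
The behavioral side of such target-detection tasks is commonly summarized with signal-detection sensitivity (d'). The sketch below shows one conventional way to compute d' from hit and false-alarm counts before and after stimulation; the counts and the log-linear correction are illustrative assumptions, not values or methods taken from the study.

```python
# Sketch: signal-detection sensitivity (d') for a yes/no target-detection task.
# The trial counts and the log-linear correction are illustrative assumptions.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction keeps z-scores finite when rates reach 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical pre- vs. post-stimulation counts from a tactile detection task.
pre = d_prime(hits=28, misses=22, false_alarms=12, correct_rejections=38)
post = d_prime(hits=36, misses=14, false_alarms=11, correct_rejections=39)
print(f"d' pre: {pre:.2f}, post: {post:.2f}")
```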


Subjects
Auditory Perception, Consciousness, Humans, Reproducibility of Results, Electric Stimulation, Auditory Perception/physiology, Acoustic Stimulation/methods, Acoustics, Trigeminal Nerve/physiology
3.
Neurophotonics; 10(4): 045005, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37928600

ABSTRACT

Significance: Brain-computer interfaces (BCIs) can provide severely motor-impaired patients with a motor-independent communication channel. Functional near-infrared spectroscopy (fNIRS) constitutes a promising BCI-input modality given its high mobility, safety, user comfort, cost-efficiency, and relatively low motion sensitivity. Aim: The present study aimed at developing an efficient and convenient two-choice fNIRS communication BCI by implementing a relatively short encoding time (2 s), thereby considerably increasing communication speed and decreasing the cognitive load of BCI users. Approach: To encode binary answers to 10 biographical questions, 10 healthy adults repeatedly performed a combined motor-speech imagery task within one of two time windows, guided by auditory instructions. Each answer-encoding run consisted of 10 trials. Answers were decoded during the ongoing experiment from the time course of the individually identified most-informative fNIRS channel-by-chromophore combination. Results: The answers of participants were decoded online with an accuracy of 85.8% (run-based group mean). Post-hoc analysis yielded an average single-trial accuracy of 68.1%. Analysis of the effect of the number of trial repetitions showed that the best information-transfer rate could be obtained by combining four encoding trials. Conclusions: The study demonstrates that an encoding time as short as 2 s can enable immediate, efficient, and convenient fNIRS-BCI communication.
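
The decoding idea, comparing the hemodynamic response of the selected channel in the two candidate encoding windows and choosing the larger one, can be illustrated in a few lines. The window timings, sampling rate, baseline handling, and synthetic trial below are illustrative assumptions rather than the study's exact procedure.

```python
# Sketch: two-choice decoding from a single fNIRS channel/chromophore time course.
# The answer is inferred from which encoding window shows the larger baseline-corrected
# mean signal. Window timings, sampling rate, and the synthetic trial are assumptions.
import numpy as np

FS = 10.0                                            # assumed fNIRS sampling rate (Hz)
WINDOWS = {"yes": (2.0, 12.0), "no": (17.0, 27.0)}   # assumed answer-encoding windows (s)

def decode_answer(trial, baseline=(0.0, 2.0)):
    """trial: 1-D array holding the selected channel's HbO time course for one trial."""
    b0, b1 = (int(s * FS) for s in baseline)
    base = trial[b0:b1].mean()
    scores = {}
    for answer, (t0, t1) in WINDOWS.items():
        segment = trial[int(t0 * FS):int(t1 * FS)]
        scores[answer] = segment.mean() - base       # baseline-corrected mean amplitude
    return max(scores, key=scores.get), scores

# Synthetic example: a trial with an HbO rise inside the "yes" window.
rng = np.random.default_rng(1)
t = np.arange(0, 30, 1 / FS)
trial = 0.3 * np.exp(-((t - 7.0) ** 2) / 8.0) + rng.normal(scale=0.05, size=t.size)
print(decode_answer(trial))
```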

4.
Cereb Cortex; 33(13): 8748-8758, 2023 Jun 20.
Article in English | MEDLINE | ID: mdl-37197766

ABSTRACT

Research on social threat has shown that various factors, such as agent characteristics, proximity, and social interaction, influence threat perception. An important, yet understudied, aspect of threat exposure concerns the ability to exert control over the threat and its implications for threat perception. In this study, we used a virtual reality (VR) environment showing an approaching avatar that was either angry (threatening body expression) or neutral (neutral body expression), and instructed participants that, whenever they felt uncomfortable, they could attempt to stop the avatar from coming closer, with five levels of control success (0, 25, 50, 75, or 100%). Behavioral results revealed that the angry avatar triggered faster reactions at a greater virtual distance from the participant than the neutral avatar did. Event-related potentials (ERPs) revealed that the angry avatar elicited a larger N170/vertex positive potential (VPP) and a smaller N3 than the neutral avatar. The 100% control condition elicited a larger late positive potential (LPP) than the 75% control condition. In addition, we observed enhanced theta power and an accelerated heart rate for the angry avatar compared with the neutral avatar, suggesting that these measures index threat perception. Our results indicate that perception of social threat takes place in early to middle cortical processing stages, whereas control ability is associated with cognitive evaluation in middle to late stages.


Subjects
Behavior Control, Virtual Reality, Humans, Social Perception, Electroencephalography, Cognition, Electrocardiography
5.
J Cogn Neurosci; 35(8): 1262-1278, 2023 Aug 1.
Article in English | MEDLINE | ID: mdl-37172122

ABSTRACT

When people listen to meaningful speech, auditory input is processed more rapidly near the end (vs. beginning) of sentences. Although several studies have shown such word-to-word changes in auditory input processing, it is still unclear from which processing level these word-to-word dynamics originate. We investigated whether predictions derived from sentential context can result in auditory word-processing dynamics during sentence tracking. We presented healthy human participants with auditory stimuli consisting of word sequences, arranged into either predictable (coherent sentences) or less predictable (unstructured, random word sequences) 42-Hz amplitude-modulated speech, and a continuous 25-Hz amplitude-modulated distractor tone. We recorded RTs and frequency-tagged neuroelectric responses (auditory steady-state responses) to individual words at multiple temporal positions within the sentences, and quantified sentential context effects at each position while controlling for individual word characteristics (i.e., phonetics, frequency, and familiarity). We found that sentential context increasingly facilitates auditory word processing, as evidenced by accelerated RTs and increased auditory steady-state responses to later-occurring words within sentences. These purely top-down, contextually driven auditory word-processing dynamics occurred only when listeners focused their attention on the speech and did not transfer to the auditory processing of the concurrent distractor tone. These findings indicate that auditory word-processing dynamics during sentence tracking can originate from sentential predictions. The predictions depend on the listeners' attention to the speech, and affect only the processing of the parsed speech, not that of concurrently presented auditory streams.
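
Frequency tagging makes the response to each tagged stream recoverable from the EEG spectrum as a peak at the tagging frequency. Below is a minimal sketch of extracting such an auditory steady-state response, expressed as signal-to-noise relative to neighboring frequency bins; the epoch length, sampling rate, and neighbor-based noise estimate are illustrative assumptions.

```python
# Sketch: auditory steady-state response (ASSR) strength at a tagged frequency,
# expressed as signal-to-noise relative to neighboring frequency bins.
# Epoch length, sampling rate, and the 42-Hz tag are illustrative assumptions.
import numpy as np

def assr_snr(epoch, fs=500.0, tag_hz=42.0, n_neighbors=5):
    spectrum = np.abs(np.fft.rfft(epoch)) / epoch.size
    freqs = np.fft.rfftfreq(epoch.size, d=1 / fs)
    k = np.argmin(np.abs(freqs - tag_hz))             # bin closest to the tag frequency
    below = spectrum[k - n_neighbors - 1:k - 1]       # neighbors below (skipping the adjacent bin)
    above = spectrum[k + 2:k + n_neighbors + 2]       # neighbors above
    return spectrum[k] / np.concatenate([below, above]).mean()

# Synthetic 4-s epoch: a 42-Hz component buried in noise.
rng = np.random.default_rng(2)
fs = 500.0
t = np.arange(0, 4, 1 / fs)
epoch = 0.5 * np.sin(2 * np.pi * 42 * t) + rng.normal(scale=1.0, size=t.size)
print(f"ASSR SNR at 42 Hz: {assr_snr(epoch, fs):.1f}")
```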


Subjects
Speech Perception, Word Processing, Humans, Speech Perception/physiology, Auditory Perception, Language, Phonetics
6.
Neuroimage; 276: 120172, 2023 Aug 1.
Article in English | MEDLINE | ID: mdl-37230207

ABSTRACT

In brain-based communication, voluntarily modulated brain signals (instead of motor output) are utilized to interact with the outside world. The possibility to circumvent the motor system constitutes an important alternative option for severely paralyzed patients. Most communication brain-computer interface (BCI) paradigms require intact visual capabilities and impose a high cognitive load, but for some patients, these requirements are not met. In these situations, a better-suited, less cognitively demanding information-encoding approach may exploit auditorily cued selective somatosensory attention to vibrotactile stimulation. Here, we propose, validate, and optimize a novel communication-BCI paradigm using differential fMRI activation patterns evoked by selective somatosensory attention to tactile stimulation of the right hand or left foot. Using cytoarchitectonic probability maps and multi-voxel pattern analysis (MVPA), we show that the locus of selective somatosensory attention can be decoded from fMRI-signal patterns in (especially primary) somatosensory cortex with high accuracy and reliability, with the highest classification accuracy (85.93%) achieved when using Brodmann area 2 (SI-BA2) at a probability level of 0.2. Based on this outcome, we developed and validated a novel somatosensory attention-based yes/no communication procedure and demonstrated its high effectiveness even when using only a limited amount of (MVPA) training data. For the BCI user, the paradigm is straightforward, eye-independent, and requires only limited cognitive functioning. In addition, it is BCI-operator friendly given its objective and expertise-independent procedure. For these reasons, our novel communication paradigm has high potential for clinical applications.
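
The core MVPA step, training a classifier on voxel patterns from a cytoarchitectonically defined region and estimating accuracy with cross-validation, can be sketched with scikit-learn as follows. The array shapes, the linear-SVM choice, and the cross-validation scheme are illustrative assumptions, not the study's exact implementation.

```python
# Sketch: decoding the attended body part (right hand vs. left foot) from fMRI
# voxel patterns within a somatosensory ROI using a linear SVM with cross-validation.
# Data shapes, the classifier, and the CV scheme are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(3)
n_trials, n_voxels = 80, 300             # trials x ROI voxels (e.g., masked to SI-BA2)
X = rng.normal(size=(n_trials, n_voxels))
y = np.repeat([0, 1], n_trials // 2)     # 0 = attend right hand, 1 = attend left foot
X[y == 1, :30] += 0.6                    # inject a weak class difference for the demo

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```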


Subjects
Brain-Computer Interfaces, Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging/methods, Reproducibility of Results, Electroencephalography/methods, Brain/diagnostic imaging, Hand, Somatosensory Cortex/diagnostic imaging, Somatosensory Cortex/physiology
7.
Neuroimage; 274: 120140, 2023 Jul 1.
Article in English | MEDLINE | ID: mdl-37120042

ABSTRACT

Auditory perception can benefit from stimuli in non-auditory sensory modalities, as for example in lip-reading. Compared with such visual influences, tactile influences are still poorly understood. It has been shown that single tactile pulses can enhance the perception of auditory stimuli depending on their relative timing, but whether and how such brief auditory enhancements can be stretched in time with more sustained, phase-specific periodic tactile stimulation is still unclear. To address this question, we presented tactile stimulation that fluctuated coherently and continuously at 4 Hz with an auditory noise (either in-phase or anti-phase) and assessed its effect on the cortical processing and perception of an auditory signal embedded in that noise. Scalp-electroencephalography recordings revealed an enhancing effect of in-phase tactile stimulation on cortical responses phase-locked to the noise and a suppressive effect of anti-phase tactile stimulation on responses evoked by the auditory signal. Although these effects appeared to follow well-known principles of multisensory integration of discrete audio-tactile events, they were not accompanied by corresponding effects on behavioral measures of auditory signal perception. Our results indicate that continuous periodic tactile stimulation can enhance cortical processing of acoustically-induced fluctuations and mask cortical responses to an ongoing auditory signal. They further suggest that such sustained cortical effects can be insufficient for inducing sustained bottom-up auditory benefits.


Subjects
Auditory Evoked Potentials, Touch, Humans, Auditory Evoked Potentials/physiology, Touch/physiology, Auditory Perception/physiology, Electroencephalography, Noise, Acoustic Stimulation/methods
8.
Neuroimage Clin; 36: 103170, 2022.
Article in English | MEDLINE | ID: mdl-36063757

ABSTRACT

Accumulating evidence shows that consciousness is linked to neural oscillations in the thalamocortical system, suggesting that deficits in these oscillations may underlie disorders of consciousness (DOC). However, patient-friendly non-invasive treatments targeting this functional anomaly are still missing, and the therapeutic value of oscillation restoration has remained unclear. We propose a novel approach that aims to restore DOC patients' thalamocortical oscillations by combining rhythmic trigeminal-nerve stimulation with comodulated musical stimulation ("musical-electrical TNS"). In a double-blind, placebo-controlled, parallel-group study, we recruited 63 patients with DOC and randomly assigned them to groups receiving gamma, beta, or sham musical-electrical TNS. The stimulation was applied for 40 min on five consecutive days. We measured patients' consciousness before and after the stimulation using behavioral indicators and neural responses to rhythmic auditory speech. We further assessed their outcomes one year later. We found that musical-electrical TNS reliably led to improvements in consciousness and oscillatory brain activity at the stimulation frequency: 43.5% of patients in the gamma group and 25% of patients in the beta group showed an improvement of their diagnosis after being treated with the stimulation. This group of benefitting patients still showed more positive outcomes one year later. Moreover, patients with stronger behavioral benefits showed stronger improvements in oscillatory brain activity. These findings suggest that brain oscillations contribute to consciousness and that musical-electrical TNS may serve as a promising approach to improve consciousness and predict long-term outcomes in patients with DOC.
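
Oscillatory activity at the stimulation frequency is typically quantified as narrow-band spectral power around that frequency. A minimal pre/post comparison using Welch's method is sketched below; the sampling rate, band width, channel count, and simulated data are illustrative assumptions.

```python
# Sketch: EEG power at the stimulation frequency (here an assumed 40-Hz gamma band),
# averaged over channels, before vs. after stimulation. All parameters are
# illustrative assumptions, not the study's settings.
import numpy as np
from scipy.signal import welch

def band_power(data, fs=250.0, f0=40.0, half_width=2.0):
    """data: (n_channels, n_samples). Mean Welch power within [f0 - hw, f0 + hw]."""
    freqs, psd = welch(data, fs=fs, nperseg=int(4 * fs), axis=-1)
    sel = (freqs >= f0 - half_width) & (freqs <= f0 + half_width)
    return psd[:, sel].mean()

rng = np.random.default_rng(4)
fs, duration, n_channels = 250.0, 60, 16
t = np.arange(0, duration, 1 / fs)
pre = rng.normal(size=(n_channels, t.size))
post = pre + 0.2 * np.sin(2 * np.pi * 40 * t)        # simulate a post-stimulation 40-Hz boost
print(f"pre: {band_power(pre, fs):.3f}, post: {band_power(post, fs):.3f}")
```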


Subjects
Consciousness Disorders, Trigeminal Nerve, Humans, Consciousness Disorders/therapy, Brain/physiology, Electric Stimulation, Double-Blind Method
9.
Neuroimage; 258: 119375, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35700949

ABSTRACT

Which processes in the human brain lead to the categorical perception of speech sounds? Investigation of this question is hampered by the fact that categorical speech perception is normally confounded by acoustic differences in the stimulus. By using ambiguous sounds, however, it is possible to dissociate acoustic from perceptual stimulus representations. Twenty-seven normally hearing individuals took part in an fMRI study in which they were presented with an ambiguous syllable (intermediate between /da/ and /ga/) in one ear and with a disambiguating acoustic feature (the third formant, F3) in the other ear. Multi-voxel pattern searchlight analysis was used to identify brain areas that consistently differentiated between response patterns associated with different syllable reports. By comparing responses to different stimuli with identical syllable reports and identical stimuli with different syllable reports, we disambiguated whether these regions primarily differentiated the acoustics of the stimuli or the syllable report. We found that BOLD activity patterns in left perisylvian regions (STG, SMG), left inferior frontal regions (vMC, IFG, AI), left supplementary motor cortex (SMA/pre-SMA), and right motor and somatosensory regions (M1/S1) represent listeners' syllable report irrespective of stimulus acoustics. Most of these regions are outside of what is traditionally regarded as auditory or phonological processing areas. Our results indicate that the process of speech sound categorization implicates decision-making mechanisms and auditory-motor transformations.
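
A searchlight analysis repeats a local decoding analysis in a small sphere centered on every voxel, yielding a whole-brain map of classification accuracy. A hedged sketch using nilearn's SearchLight estimator is given below; the file names, sphere radius, and cross-validation settings are illustrative assumptions, not the study's actual data or parameters.

```python
# Sketch: whole-brain searchlight decoding of the reported syllable (/da/ vs. /ga/)
# from single-trial fMRI images with nilearn. The file paths, labels, radius, and
# CV settings are hypothetical placeholders, not the study's actual data or parameters.
import numpy as np
from nilearn.decoding import SearchLight
from nilearn.image import load_img
from sklearn.model_selection import StratifiedKFold

trial_imgs = load_img("trial_betas_4d.nii.gz")      # hypothetical 4-D image: one volume per trial
mask_img = load_img("brain_mask.nii.gz")            # hypothetical brain mask
y = np.loadtxt("syllable_reports.txt", dtype=int)   # hypothetical labels: 0 = /da/, 1 = /ga/

searchlight = SearchLight(
    mask_img,
    radius=6.0,                                     # sphere radius in mm (assumed)
    estimator="svc",                                # linear SVM fitted in each sphere
    cv=StratifiedKFold(n_splits=5),
    n_jobs=-1,
)
searchlight.fit(trial_imgs, y)
# searchlight.scores_ holds a 3-D map of per-voxel cross-validated accuracies.
print(searchlight.scores_.max())
```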


Subjects
Auditory Cortex, Speech Perception, Acoustic Stimulation/methods, Auditory Cortex/diagnostic imaging, Auditory Cortex/physiology, Auditory Perception, Hearing, Humans, Phonetics, Speech/physiology, Speech Perception/physiology
10.
Neuroimage; 254: 119142, 2022 Jul 1.
Article in English | MEDLINE | ID: mdl-35342007

ABSTRACT

Developmental dyslexia is often accompanied by altered phonological processing of speech. Underlying neural changes have typically been characterized in terms of stimulus- and/or task-related responses within individual brain regions or their functional connectivity. Less is known about potential changes in the more global functional organization of brain networks. Here we recorded electroencephalography (EEG) in typical and dyslexic readers while they listened to (a) a random sequence of syllables and (b) a series of tri-syllabic real words. The network topology of the phase synchronization of evoked cortical oscillations was investigated in four frequency bands (delta, theta, alpha and beta) using minimum spanning tree graphs. We found that, compared to syllable tracking, word tracking triggered a shift toward a more integrated network topology in the theta band in both groups. Importantly, this change was significantly stronger in the dyslexic readers, who also showed increased reliance on a right frontal cluster of electrodes for word tracking. The current findings point towards an altered effect of word-level processing on the functional brain network organization that may be associated with less efficient phonological and reading skills in dyslexia.
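
A minimum spanning tree is obtained by treating stronger synchronization as shorter distance and keeping the backbone of strongest links that connects all electrodes without loops; tree metrics such as the leaf fraction then index how integrated (star-like) versus line-like the network is. The sketch below uses SciPy for this step; the random connectivity matrix and the choice of summary metric are illustrative assumptions.

```python
# Sketch: build a minimum spanning tree (MST) from a phase-synchronization (e.g., PLI)
# matrix and summarize its topology. The random connectivity matrix and the
# leaf-fraction summary are illustrative assumptions.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(5)
n = 32                                           # number of EEG electrodes
pli = rng.uniform(0.05, 0.9, size=(n, n))
pli = (pli + pli.T) / 2                          # symmetric connectivity matrix
np.fill_diagonal(pli, 0)

dist = 1.0 - pli                                 # stronger coupling = shorter distance
np.fill_diagonal(dist, 0)
mst = minimum_spanning_tree(dist).toarray()      # upper-triangular tree edge weights
adjacency = (mst + mst.T) > 0                    # symmetric MST adjacency (n - 1 edges)

degree = adjacency.sum(axis=0)
leaf_fraction = np.mean(degree == 1)             # higher = more star-like (integrated) tree
print(f"edges: {int(adjacency.sum() // 2)}, leaf fraction: {leaf_fraction:.2f}")
```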


Subjects
Dyslexia, Speech Perception, Auditory Perception, Brain, Electroencephalography, Humans, Reading, Speech, Speech Perception/physiology
11.
Front Hum Neurosci; 15: 784522, 2021.
Article in English | MEDLINE | ID: mdl-34899223

ABSTRACT

Severely motor-disabled patients, such as those suffering from the so-called "locked-in" syndrome, cannot communicate naturally. They may benefit from brain-computer interfaces (BCIs) exploiting brain signals for communication and therewith circumventing the muscular system. One BCI technique that has gained attention recently is functional near-infrared spectroscopy (fNIRS). Typically, fNIRS-based BCIs allow for brain-based communication via voluntary modulation of brain activity through mental task performance guided by visual or auditory instructions. While the development of fNIRS-BCIs has made great progress, the reliability of fNIRS-BCIs across time and environments has rarely been assessed. In the present fNIRS-BCI study, we tested six healthy participants across three consecutive days using a straightforward four-choice fNIRS-BCI communication paradigm that allows answer encoding based on instructions using various sensory modalities. To encode an answer, participants performed a motor imagery task (mental drawing) in one out of four time periods. Answer encoding was guided by either the visual, auditory, or tactile sensory modality. Two participants were tested outside the laboratory in a cafeteria. Answers were decoded from the time course of the most-informative fNIRS channel-by-chromophore combination. Across the three testing days, we obtained mean single- and multi-trial (joint analysis of four consecutive trials) accuracies of 62.5% and 85.19%, respectively. Obtained multi-trial accuracies were 86.11% for visual, 80.56% for auditory, and 88.89% for tactile sensory encoding. The two participants who used the fNIRS-BCI in a cafeteria obtained the best single-trial (72.22% and 77.78%) and multi-trial accuracies (100% and 94.44%). Communication was reliable over the three recording sessions, with multi-trial accuracies of 86.11% on day 1, 86.11% on day 2, and 83.33% on day 3. To gauge the trade-off between the number of optodes and decoding accuracy, averaging across two and three promising fNIRS channels was compared to the one-channel approach. Multi-trial accuracy increased from 85.19% (one-channel approach) to 91.67% (two-/three-channel approach). In sum, the presented fNIRS-BCI yielded robust decoding results using three alternative sensory encoding modalities. Further, fNIRS-BCI communication was stable over the course of three consecutive days, even in a natural (social) environment. Therewith, the developed fNIRS-BCI demonstrated high flexibility, reliability, and robustness, which are crucial requirements for future clinical applicability.

12.
Eur J Neurosci; 54(10): 7626-7641, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34697833

ABSTRACT

Rapid recognition and categorization of sounds are essential for humans and animals alike, both for understanding and reacting to our surroundings and for daily communication and social interaction. For humans, perception of speech sounds is of crucial importance. In real life, this task is complicated by the presence of a multitude of meaningful non-speech sounds. The present behavioural, magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) study set out to address how attention to speech versus attention to natural non-speech sounds within complex auditory scenes influences cortical processing. The stimuli were superimpositions of spoken words and environmental sounds, with parametric variation of the speech-to-environmental sound intensity ratio. The participants' task was to detect a repetition in either the speech or the environmental sound. We found that, specifically when participants attended to speech within the superimposed stimuli, higher speech-to-environmental sound ratios resulted in shorter sustained MEG responses, stronger BOLD fMRI signals (especially in the left supratemporal auditory cortex), and improved behavioural performance. No such effects of speech-to-environmental sound ratio were observed when participants attended to the environmental sound part within the exact same stimuli. These findings suggest stronger saliency of speech compared with other meaningful sounds during processing of natural auditory scenes, likely linked to speech-specific top-down and bottom-up mechanisms activated during speech perception that are needed for tracking speech in real-life-like auditory environments.


Subjects
Auditory Cortex, Speech Perception, Acoustic Stimulation, Animals, Auditory Perception, Brain Mapping, Humans, Magnetic Resonance Imaging, Phonetics, Speech
13.
Neuropsychologia; 158: 107889, 2021 Jul 30.
Article in English | MEDLINE | ID: mdl-33991561

ABSTRACT

Statistical learning, or the ability to extract statistical regularities from the sensory environment, plays a critical role in language acquisition and reading development. Here we employed electroencephalography (EEG) with frequency-tagging measures to track the temporal evolution of speech-structure learning in individuals with reading difficulties due to developmental dyslexia and in typical readers. We measured EEG while participants listened to (a) a structured stream of repeated tri-syllabic pseudowords, (b) a random stream of the same isochronous syllables, and (c) a series of tri-syllabic real Dutch words. Participants' behavioral learning outcome (pseudoword recognition) was measured after training. We found that syllable-rate tracking was comparable between the two groups and stable across both the random and structured streams of syllables. More importantly, we observed a gradual emergence of the tracking of tri-syllabic pseudoword structures in both groups. Compared to the typical readers, however, in the dyslexic readers this implicit speech structure learning seemed to build up at a slower pace. A brain-behavioral correlation analysis showed that slower learners (i.e., participants who were slower in establishing the neural tracking of pseudowords) were less skilled in phonological awareness. Moreover, those who showed stronger neural tracking of real words tended to be less fluent in the visual-verbal conversion of linguistic symbols. Taken together, our study provides an online neurophysiological approach to track the progression of implicit learning processes and gives insights into the learning difficulties associated with dyslexia from a dynamic perspective.
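
One common way to quantify such frequency-tagged structure learning is inter-trial phase coherence at the syllable rate and at the (three times slower) pseudoword rate. The sketch below illustrates this; the specific rates (4 Hz syllables, 4/3 Hz words), sampling rate, epoch length, and synthetic data are illustrative assumptions rather than the study's parameters.

```python
# Sketch: inter-trial phase coherence (ITC) at the syllable and pseudoword rates,
# one common way to quantify frequency-tagged structure learning in EEG.
# The rates (4 Hz syllables, 4/3 Hz tri-syllabic words), sampling rate, epoch
# length, and synthetic data are illustrative assumptions.
import numpy as np

def itc_at(epochs, fs, freq):
    """epochs: (n_trials, n_samples). ITC = |mean over trials of unit-length phase vectors|."""
    spectra = np.fft.rfft(epochs, axis=-1)
    freqs = np.fft.rfftfreq(epochs.shape[-1], d=1 / fs)
    k = np.argmin(np.abs(freqs - freq))
    phases = spectra[:, k] / np.abs(spectra[:, k])
    return np.abs(phases.mean())

rng = np.random.default_rng(6)
fs, n_trials = 250.0, 40
t = np.arange(0, 6, 1 / fs)                          # 6-s epochs
epochs = (np.sin(2 * np.pi * 4.0 * t)                # syllable-rate response
          + 0.4 * np.sin(2 * np.pi * 4.0 / 3 * t)    # weaker, emerging word-rate response
          + rng.normal(scale=2.0, size=(n_trials, t.size)))
print(f"ITC at syllable rate: {itc_at(epochs, fs, 4.0):.2f}, "
      f"at word rate: {itc_at(epochs, fs, 4.0 / 3):.2f}")
```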


Subjects
Dyslexia, Speech, Humans, Language, Learning, Reading
15.
Proc Natl Acad Sci U S A; 118(7), 2021 Feb 16.
Article in English | MEDLINE | ID: mdl-33568530

ABSTRACT

Brain connectivity plays a major role in the encoding, transfer, and integration of sensory information. Interregional synchronization of neural oscillations in the γ-frequency band has been suggested as a key mechanism underlying perceptual integration. In a recent study, we found evidence for this hypothesis showing that the modulation of interhemispheric oscillatory synchrony by means of bihemispheric high-density transcranial alternating current stimulation (HD-TACS) affects binaural integration of dichotic acoustic features. Here, we aimed to establish a direct link between oscillatory synchrony, effective brain connectivity, and binaural integration. We experimentally manipulated oscillatory synchrony (using bihemispheric γ-TACS with different interhemispheric phase lags) and assessed the effect on effective brain connectivity and binaural integration (as measured with functional MRI and a dichotic listening task, respectively). We found that TACS reduced intrahemispheric connectivity within the auditory cortices and antiphase (interhemispheric phase lag 180°) TACS modulated connectivity between the two auditory cortices. Importantly, the changes in intra- and interhemispheric connectivity induced by TACS were correlated with changes in perceptual integration. Our results indicate that γ-band synchronization between the two auditory cortices plays a functional role in binaural integration, supporting the proposed role of interregional oscillatory synchrony in perceptual integration.


Subjects
Auditory Perception, Brain/physiology, Functional Laterality, Connectome, Female, Gamma Rhythm, Humans, Magnetic Resonance Imaging, Male, Transcranial Direct Current Stimulation, Young Adult
16.
Neuroimage; 228: 117670, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33359352

ABSTRACT

Selective attention is essential for the processing of multi-speaker auditory scenes because they require the perceptual segregation of the relevant speech ("target") from irrelevant speech ("distractors"). For simple sounds, it has been suggested that the processing of multiple distractor sounds depends on bottom-up factors affecting task performance. However, it remains unclear whether such dependency applies to naturalistic multi-speaker auditory scenes. In this study, we tested the hypothesis that increased perceptual demand (the processing requirement posed by the scene to separate the target speech) reduces the cortical processing of distractor speech, thus decreasing its perceptual segregation. Human participants were presented with auditory scenes including three speakers and asked to selectively attend to one speaker while their EEG was acquired. The perceptual demand of this selective listening task was varied by introducing an auditory cue (interaural time differences, ITDs) for segregating the target from the distractor speakers, while acoustic differences between the distractors were matched in ITD and loudness. We obtained a quantitative measure of the cortical segregation of distractor speakers by assessing the difference in how accurately speech-envelope following EEG responses could be predicted by models of averaged distractor speech versus models of individual distractor speech. In agreement with our hypothesis, results show that interaural segregation cues led to improved behavioral word-recognition performance and stronger cortical segregation of the distractor speakers. The neural effect was strongest in the δ-band and at early delays (0-200 ms). Our results indicate that during low perceptual demand, the human cortex represents individual distractor speech signals as more segregated. This suggests that, in addition to purely acoustical properties, the cortical processing of distractor speakers depends on factors like perceptual demand.
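
Envelope-following responses of this kind are commonly quantified by fitting a linear forward model (a temporal response function) that maps time-lagged samples of a speech envelope onto the EEG and scoring how well it predicts held-out data. A compact ridge-regression sketch is shown below; the lag range, regularization strength, and simulated signals are illustrative assumptions, not the study's exact modeling pipeline.

```python
# Sketch: a forward model (temporal response function) predicting one EEG channel from
# time-lagged samples of a speech envelope, scored by prediction accuracy (Pearson r)
# on held-out data. Lag range, regularization, and the simulated signals are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

def lagged_design(envelope, max_lag):
    """Stack the envelope at lags 0..max_lag samples into a design matrix."""
    n = envelope.size
    X = np.zeros((n, max_lag + 1))
    for lag in range(max_lag + 1):
        X[lag:, lag] = envelope[:n - lag]
    return X

rng = np.random.default_rng(7)
fs = 64                                                   # assumed post-downsampling rate (Hz)
envelope = np.abs(rng.normal(size=fs * 120))              # stand-in for a 2-min speech envelope
kernel = np.exp(-np.arange(0.0, 0.25, 1 / fs) / 0.05)     # toy neural response kernel (0-250 ms)
eeg = np.convolve(envelope, kernel)[:envelope.size] + rng.normal(scale=2.0, size=envelope.size)

X = lagged_design(envelope, max_lag=int(0.25 * fs))
split = int(0.8 * envelope.size)
model = Ridge(alpha=100.0).fit(X[:split], eeg[:split])
prediction = model.predict(X[split:])
r = np.corrcoef(prediction, eeg[split:])[0, 1]
print(f"envelope-tracking prediction accuracy: r = {r:.2f}")
```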


Subjects
Attention/physiology, Cerebral Cortex/physiology, Speech Perception/physiology, Acoustic Stimulation/methods, Adult, Electroencephalography/methods, Female, Humans, Male, Noise, Computer-Assisted Signal Processing, Young Adult
17.
Sci Rep; 10(1): 11872, 2020 Jul 17.
Article in English | MEDLINE | ID: mdl-32681138

ABSTRACT

Patients with schizophrenia (ScZ) often show impairments in auditory information processing. These impairments have been related to clinical symptoms, such as auditory hallucinations. Some researchers have hypothesized that aberrant low-frequency oscillations contribute to auditory information processing deficits in ScZ. A paradigm for which modulations in low-frequency oscillations are consistently found in healthy individuals is the auditory continuity illusion (ACI), in which restoration processes lead to a perceptual grouping of tone fragments and a mask, so that a physically interrupted sound is perceived as continuous. We used the ACI paradigm to test the hypothesis that low-frequency oscillations play a role in aberrant auditory information processing in patients with ScZ (N = 23). We found that, compared with healthy control participants, patients with ScZ show elevated continuity illusions of interrupted, partially masked tones. Electroencephalography data demonstrate that this elevated continuity perception is reflected by diminished 3 Hz power. This suggests that reduced low-frequency oscillations relate to elevated restoration processes in ScZ. Our findings support the hypothesis that aberrant low-frequency oscillations contribute to altered perception-related auditory information processing in ScZ.


Subjects
Hallucinations, Illusions/psychology, Schizophrenia/diagnosis, Schizophrenic Psychology, Acoustic Stimulation, Data Analysis, Electroencephalography, Auditory Evoked Potentials, Female, Humans, Male
18.
Front Neurosci; 14: 362, 2020.
Article in English | MEDLINE | ID: mdl-32351361

ABSTRACT

Auditory perception is facilitated by prior knowledge about the statistics of the acoustic environment. Predictions about upcoming auditory stimuli are processed at various stages along the human auditory pathway, including the cortex and midbrain. Whether such auditory predictions are also processed at hierarchically lower stages, in the peripheral auditory system, is unclear. To address this question, we assessed outer hair cell (OHC) activity in response to isochronous tone sequences and varied the predictability and behavioral relevance of the individual tones (by manipulating tone-to-tone probabilities and the human participants' task, respectively). We found that predictability alters the amplitude of distortion-product otoacoustic emissions (DPOAEs, a measure of OHC activity) in a manner that depends on the behavioral relevance of the tones. Simultaneously recorded cortical responses showed a significant effect of both predictability and behavioral relevance of the tones, indicating that these experimental manipulations were effective in central auditory processing stages. Our results provide evidence for a top-down effect on the processing of auditory predictability in the human peripheral auditory system, in line with previous studies showing peripheral effects of auditory attention.

19.
Front Hum Neurosci; 14: 113, 2020.
Article in English | MEDLINE | ID: mdl-32351371

ABSTRACT

"Locked-in" patients lose their ability to communicate naturally due to motor system dysfunction. Brain-computer interfacing offers a solution for their inability to communicate by enabling motor-independent communication. Straightforward and convenient in-session communication is essential in clinical environments. The present study introduces a functional near-infrared spectroscopy (fNIRS)-based binary communication paradigm that requires limited preparation time and merely nine optodes. Eighteen healthy participants performed two mental imagery tasks, mental drawing and spatial navigation, to answer yes/no questions during one of two auditorily cued time windows. Each of the six questions was answered five times, resulting in five trials per answer. This communication paradigm thus combines both spatial (two different mental imagery tasks, here mental drawing for "yes" and spatial navigation for "no") and temporal (distinct time windows for encoding a "yes" and a "no" answer) fNIRS signal features for information encoding. Participants' answers were decoded in simulated real-time using general linear model analysis. Joint analysis of all five encoding trials resulted in average accuracies of 66.67% and 58.33% using the oxygenated (HbO) and deoxygenated (HbR) hemoglobin signals, respectively. For half of the participants, an accuracy of 83.33% or higher was reached using either the HbO signal or the HbR signal. For four participants, effective communication with 100% accuracy was achieved using either the HbO or HbR signal. An explorative analysis investigated the differentiability of the two mental tasks based solely on spatial fNIRS signal features. Using multivariate pattern analysis (MVPA), group single-trial accuracies of 58.33% (with 20 training trials per task) and 60.56% (with 40 training trials per task) could be obtained. Combining the five trials per run using a majority voting approach increased these MVPA accuracies to 62.04% and 75%. Additionally, an fNIRS suitability questionnaire capturing participants' physical features was administered to explore its predictive value for evaluating general data quality. Obtained questionnaire scores correlated significantly (r = -0.499) with the signal-to-noise ratio of the raw light intensities. While more work is needed to further increase decoding accuracy, this study shows the potential of answer encoding using spatiotemporal fNIRS signal features or spatial fNIRS signal features only.

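Combining several encoding trials by majority vote is the step that lifts single-trial accuracies toward the reported multi-trial accuracies. A minimal sketch of this combination, with a simulated single-trial accuracy, is shown below; the trial count and accuracy value are illustrative assumptions.

```python
# Sketch: majority voting over repeated answer-encoding trials, the step commonly used
# to turn single-trial BCI decisions into one run-level answer. The trial count and the
# simulated single-trial accuracy are illustrative assumptions.
import numpy as np
from collections import Counter

def majority_vote(trial_decisions):
    """Return the most frequent decision (ties resolved by first occurrence)."""
    return Counter(trial_decisions).most_common(1)[0][0]

rng = np.random.default_rng(8)
true_answer = "yes"
single_trial_accuracy = 0.65                 # assumed single-trial decoding accuracy
n_runs, correct_runs = 1000, 0
for _ in range(n_runs):
    trials = ["yes" if rng.random() < single_trial_accuracy else "no" for _ in range(5)]
    correct_runs += majority_vote(trials) == true_answer
print(f"run-level accuracy with a 5-trial majority vote: {correct_runs / n_runs:.2f}")
```
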
20.
J Cogn Neurosci; 32(8): 1428-1437, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32427072

ABSTRACT

Recent neuroimaging evidence suggests that the frequency of entrained oscillations in auditory cortices influences the perceived duration of speech segments, impacting word perception [Kösem, A., Bosker, H. R., Takashima, A., Meyer, A., Jensen, O., & Hagoort, P. Neural entrainment determines the words we hear. Current Biology, 28, 2867-2875, 2018]. We further tested the causal influence of neural entrainment frequency during speech processing by manipulating entrainment with continuous transcranial alternating current stimulation (tACS) at distinct oscillatory frequencies (3 and 5.5 Hz) above the auditory cortices. Dutch participants listened to speech and were asked to report their percept of a target Dutch word, which contained a vowel with an ambiguous duration. Target words were presented either in isolation (first experiment) or at the end of spoken sentences (second experiment). We predicted that the tACS frequency would influence neural entrainment and therewith how speech is perceptually sampled, leading to a perceptual overestimation or underestimation of the vowel's duration. Whereas results from Experiment 1 did not confirm this prediction, results from Experiment 2 suggested a small effect of tACS frequency on target word perception: faster tACS led to more long-vowel word percepts, in line with the previous neuroimaging findings. Importantly, the difference in word perception induced by the different tACS frequencies was significantly larger in Experiment 2 than in Experiment 1, suggesting that the impact of tACS is dependent on the sensory context. tACS may have a stronger effect on spoken word perception when the words are presented in continuous speech as compared to when they are isolated, potentially because prior (stimulus-induced) entrainment of brain oscillations might be a prerequisite for tACS to be effective.


Subjects
Auditory Cortex, Transcranial Direct Current Stimulation, Auditory Perception, Hearing, Humans, Speech