1.
Cogn Neurodyn ; 18(4): 1753-1765, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39104689

ABSTRACT

Recently, interest in spiking neural networks (SNNs) has increased remarkably, as some key capabilities of biological neural networks are still out of reach: in particular, the energy efficiency and the ability to dynamically react and adapt to input stimuli, as observed in biological neurons, remain difficult to achieve. One neuron model commonly used in SNNs is the leaky integrate-and-fire (LIF) neuron. LIF neurons already show interesting dynamics and can be run in two operation modes: as coincidence detectors for low membrane decay times and as integrators for high ones. However, how these modes emerge in SNNs and how they affect network performance and information processing ability is still elusive. In this study, we examine the effect of different decay times in SNNs trained with a surrogate-gradient-based approach. We propose two measures that allow us to determine the operation mode of LIF neurons: the number of contributing input spikes and the effective integration interval. We show that coincidence detection is characterized by a low number of input spikes and short integration intervals, whereas integration behavior is related to many input spikes over long integration intervals. We find that the two measures correlate linearly via a correlation factor that depends on the decay time. This correlation factor, as a function of the decay time, shows power-law behavior, which could be an intrinsic property of LIF networks. We argue that our work could be a starting point for further exploring the operation modes in SNNs to boost efficiency and biological plausibility. Supplementary Information: The online version of this article (10.1007/s11571-023-10038-0) contains supplementary material, which is available to authorized users.
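
As an editorial illustration of the two LIF operation modes described in the abstract above (not code from the paper), the following minimal sketch simulates a single leaky integrate-and-fire neuron and shows how the membrane decay time switches it between coincidence detection and integration; the threshold, synaptic weight and input rate are arbitrary assumptions.

```python
import numpy as np

def lif_response(input_spikes, tau_mem, dt=1.0, threshold=1.0, w=0.3):
    """Simulate a single LIF neuron driven by a binary input spike train."""
    v = 0.0
    output_spike_times = []
    for t, s in enumerate(input_spikes):
        v = v * np.exp(-dt / tau_mem) + w * s   # leaky integration of the input
        if v >= threshold:                       # fire and reset
            output_spike_times.append(t)
            v = 0.0
    return output_spike_times

rng = np.random.default_rng(0)
spikes = (rng.random(1000) < 0.05).astype(float)  # sparse Poisson-like input

# Short decay time -> coincidence detector: only near-simultaneous inputs
# (few contributing spikes, short effective integration interval) cause output.
print("tau =  2 ms:", len(lif_response(spikes, tau_mem=2.0)), "output spikes")

# Long decay time -> integrator: many input spikes accumulated over long
# integration intervals drive the neuron across threshold.
print("tau = 50 ms:", len(lif_response(spikes, tau_mem=50.0)), "output spikes")
```
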

3.
Neuroimage ; 297: 120696, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-38909761

ABSTRACT

How is information processed in the cerebral cortex? In most cases, recorded brain activity is averaged over many (stimulus) repetitions, which erases the fine structure of the neural signal. However, the brain is obviously a single-trial processor. We therefore demonstrate that an unsupervised machine learning approach can be used to extract meaningful information from electrophysiological recordings on a single-trial basis. We use an autoencoder network to reduce the dimensions of single local field potential (LFP) events and to create interpretable clusters of different neural activity patterns. Strikingly, certain LFP shapes correspond to latency differences between recording channels; hence, LFP shapes can be used to determine the direction of information flux in the cerebral cortex. Furthermore, after clustering, we decoded the cluster centroids to reverse-engineer the underlying prototypical LFP event shapes. To evaluate our approach, we applied it to both extracellular neural recordings in rodents and intracranial EEG recordings in humans. Finally, we find that single-channel LFP event shapes during spontaneous activity sample from the realm of possible stimulus-evoked event shapes, a finding which so far has only been demonstrated for multi-channel population coding.


Subject(s)
Deep Learning , Electroencephalography , Humans , Animals , Electroencephalography/methods , Cerebral Cortex/physiology , Male , Unsupervised Machine Learning , Rats , Adult , Female
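
A minimal sketch of the pipeline described in the abstract above: autoencoder-based dimensionality reduction of single LFP events, clustering in the latent space, and decoding of cluster centroids into prototypical event shapes. The architecture, latent dimensionality, cluster count and the random stand-in data are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

# Toy stand-in for single-trial LFP events: 500 events x 200 time samples
lfp_events = torch.randn(500, 200)

class AutoEncoder(nn.Module):
    def __init__(self, n_in=200, n_latent=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(),
                                     nn.Linear(64, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                     nn.Linear(64, n_in))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                              # minimize reconstruction error
    reconstruction, _ = model(lfp_events)
    loss = ((reconstruction - lfp_events) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

with torch.no_grad():                             # embed events in latent space
    _, latent = model(lfp_events)

km = KMeans(n_clusters=5, n_init=10).fit(latent.numpy())
labels = km.labels_                               # cluster label per LFP event

# Decode the cluster centroids back into prototypical LFP event shapes
centroids = torch.tensor(km.cluster_centers_, dtype=torch.float32)
prototypes = model.decoder(centroids).detach().numpy()
```
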
4.
Neural Comput ; 36(3): 351-384, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38363658

ABSTRACT

Free-running recurrent neural networks (RNNs), especially probabilistic models, generate an ongoing information flux that can be quantified with the mutual information $I[\vec{x}(t),\vec{x}(t+1)]$ between subsequent system states $\vec{x}$. Although previous studies have shown that $I$ depends on the statistics of the network's connection weights, it is unclear how to maximize $I$ systematically and how to quantify the flux in large systems where computing the mutual information becomes intractable. Here, we address these questions using Boltzmann machines as model systems. We find that in networks with moderately strong connections, the mutual information $I$ is approximately a monotonic transformation of the root-mean-square averaged Pearson correlations between neuron pairs, a quantity that can be efficiently computed even in large systems. Furthermore, evolutionary maximization of $I[\vec{x}(t),\vec{x}(t+1)]$ reveals a general design principle for the weight matrices enabling the systematic construction of systems with a high spontaneous information flux. Finally, we simultaneously maximize information flux and the mean period length of cyclic attractors in the state space of these dynamical networks. Our results are potentially useful for the construction of RNNs that serve as short-time memories or pattern generators.
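
The proxy quantity proposed above, the root-mean-square of the pairwise Pearson correlations, can be computed directly from a sampled state trajectory. The sketch below is an illustration only, with a small Glauber-dynamics sampler (symmetric random weights, no biases) standing in for a free-running Boltzmann machine; network size, weight scale and update scheme are assumptions.

```python
import numpy as np

def rms_pairwise_correlation(states):
    """RMS of the Pearson correlations over all distinct neuron pairs.

    states: array of shape (time_steps, n_neurons), e.g. a sampled
    trajectory of a free-running stochastic network.
    """
    c = np.corrcoef(states.T)                 # n_neurons x n_neurons
    iu = np.triu_indices_from(c, k=1)         # distinct pairs only
    return np.sqrt(np.mean(c[iu] ** 2))

# Toy example: binary states sampled with Glauber dynamics
rng = np.random.default_rng(1)
n, T = 10, 5000
W = rng.normal(0.0, 1.5 / np.sqrt(n), size=(n, n))
W = (W + W.T) / 2                             # symmetric couplings
np.fill_diagonal(W, 0.0)

x = rng.integers(0, 2, n).astype(float)
trace = []
for _ in range(T):                            # single-neuron stochastic updates
    i = rng.integers(n)
    p = 1.0 / (1.0 + np.exp(-(W[i] @ x)))
    x[i] = float(rng.random() < p)
    trace.append(x.copy())

print("RMS pair correlation:", rms_pairwise_correlation(np.array(trace)))
```
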

5.
J Cogn Neurosci ; 36(3): 475-491, 2024 03 01.
Article in English | MEDLINE | ID: mdl-38165737

ABSTRACT

Most parts of speech are voiced, exhibiting a degree of periodicity with a fundamental frequency and many higher harmonics. Some neural populations respond to this temporal fine structure, in particular at the fundamental frequency. This frequency-following response (FFR) to speech consists of both subcortical and cortical contributions and can be measured through EEG as well as through magnetoencephalography (MEG), although the two differ in the aspects of neural activity that they capture: EEG is sensitive to both radial and tangential sources as well as to deep sources, whereas MEG is largely restricted to tangential and superficial neural activity. EEG responses to continuous speech have shown an early subcortical contribution, at a latency of around 9 msec, in agreement with MEG measurements in response to short speech tokens, whereas MEG responses to continuous speech have not yet revealed such an early component. Here, we analyze MEG responses to long segments of continuous speech. We find an early subcortical response at latencies of 4-11 msec, followed by later right-lateralized cortical activities at delays of 20-58 msec as well as potential subcortical activities. Our results show that the early subcortical component of the FFR to continuous speech can be measured from MEG in populations of participants and that its latency agrees with that measured with EEG. They furthermore show that the early subcortical component is temporally well separated from later cortical contributions, enabling an independent assessment of both components toward further aspects of speech processing.


Subject(s)
Electroencephalography , Speech Perception , Humans , Electroencephalography/methods , Speech , Magnetoencephalography/methods , Cerebral Cortex/physiology , Speech Perception/physiology
6.
J Neurosci ; 43(44): 7429-7440, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37793908

ABSTRACT

Selective attention to one of several competing speakers is required for comprehending a target speaker among other voices and for successful communication with them. Moreover, it has been found to involve the neural tracking of low-frequency speech rhythms in the auditory cortex. Effects of selective attention have also been found in subcortical neural activities, in particular regarding the frequency-following response related to the fundamental frequency of speech (speech-FFR). Recent investigations have, however, shown that the speech-FFR contains cortical contributions as well. It remains unclear whether these are also modulated by selective attention. Here we used magnetoencephalography to assess the attentional modulation of the cortical contributions to the speech-FFR. We presented both male and female participants with two competing speech signals and analyzed the cortical responses during attentional switching between the two speakers. Our findings revealed robust attentional modulation of the cortical contribution to the speech-FFR: the neural responses were larger when the speaker was attended than when they were ignored. We also found that, regardless of attention, a voice with a lower fundamental frequency elicited a larger cortical contribution to the speech-FFR than a voice with a higher fundamental frequency. Our results show that the attentional modulation of the speech-FFR occurs not only subcortically but also extends to the auditory cortex. SIGNIFICANCE STATEMENT: Understanding speech in noise requires attention to a target speaker. One of the speech features that a listener can use to identify a target voice among others and attend to it is the fundamental frequency, together with its higher harmonics. The fundamental frequency arises from the opening and closing of the vocal folds and is tracked by high-frequency neural activity in the auditory brainstem and in the cortex. Previous investigations showed that the subcortical neural tracking is modulated by selective attention. Here we show that attention affects the cortical tracking of the fundamental frequency as well: it is stronger when a particular voice is attended than when it is ignored.


Subject(s)
Auditory Cortex , Speech Perception , Humans , Male , Female , Speech , Speech Perception/physiology , Auditory Cortex/physiology , Magnetoencephalography , Evoked Potentials, Auditory, Brain Stem/physiology , Acoustic Stimulation , Electroencephalography/methods
8.
HNO ; 71(10): 662-668, 2023 Oct.
Article in German | MEDLINE | ID: mdl-37715002

ABSTRACT

BACKGROUND: About one sixth of the population of western industrialized nations suffers from chronic, subjective tinnitus, causing socioeconomic costs for treatment and follow-up of almost 22 billion euros per year in Germany alone. According to the prevailing view, tinnitus develops as a consequence of a maladaptive neurophysiological process in the brain triggered by hearing loss. OBJECTIVES: The Erlangen model of tinnitus development presented here aims to provide a comprehensive neurophysiological explanation for the initial occurrence of the phantom sound after hearing loss. Based on the model, a new treatment strategy will be developed. MATERIALS AND METHODS: The model summarized here is based on various animal and human physiological studies conducted in recent years. RESULTS: The Erlangen model considers subjective tinnitus as a side effect of a physiological mechanism that permanently optimizes information transmission into the auditory system by means of stochastic resonance (SR), even in the healthy auditory system. In fact, hearing-impaired patients with tinnitus hear better on average than those without tinnitus. This unfamiliar perspective on the phantom percept may already help affected patients to cope better with their suffering. In addition, based on the model, low intensity noise tinnitus suppression (LINTS) has been developed as a new, individually adapted treatment strategy for tonal tinnitus and has already been successfully tested in patients. CONCLUSIONS: A possible limiting factor for the model and treatment strategy is the pitch of the tinnitus percept, which may require adjustments to the treatment strategy for frequencies above about 5 kHz.


Subject(s)
Deafness , Drug-Related Side Effects and Adverse Reactions , Tinnitus , Animals , Humans , Tinnitus/diagnosis , Tinnitus/therapy , Hearing , Brain
9.
Brain ; 146(12): 4809-4825, 2023 12 01.
Article in English | MEDLINE | ID: mdl-37503725

ABSTRACT

Mechanistic insight is achieved only when experiments are employed to test formal or computational models. Furthermore, in analogy to lesion studies, phantom perception may serve as a vehicle to understand the fundamental processing principles underlying healthy auditory perception. With a special focus on tinnitus-as the prime example of auditory phantom perception-we review recent work at the intersection of artificial intelligence, psychology and neuroscience. In particular, we discuss why everyone with tinnitus suffers from (at least hidden) hearing loss, but not everyone with hearing loss suffers from tinnitus. We argue that intrinsic neural noise is generated and amplified along the auditory pathway as a compensatory mechanism to restore normal hearing based on adaptive stochastic resonance. The neural noise increase can then be misinterpreted as auditory input and perceived as tinnitus. This mechanism can be formalized in the Bayesian brain framework, where the percept (posterior) assimilates a prior prediction (brain's expectations) and likelihood (bottom-up neural signal). A higher mean and lower variance (i.e. enhanced precision) of the likelihood shifts the posterior, evincing a misinterpretation of sensory evidence, which may be further confounded by plastic changes in the brain that underwrite prior predictions. Hence, two fundamental processing principles provide the most explanatory power for the emergence of auditory phantom perceptions: predictive coding as a top-down and adaptive stochastic resonance as a complementary bottom-up mechanism. We conclude that both principles also play a crucial role in healthy auditory perception. Finally, in the context of neuroscience-inspired artificial intelligence, both processing principles may serve to improve contemporary machine learning techniques.


Subject(s)
Hearing Loss , Tinnitus , Humans , Tinnitus/psychology , Bayes Theorem , Artificial Intelligence , Auditory Perception , Auditory Pathways
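
A worked toy example of the Bayesian-brain argument in the abstract above: with conjugate Gaussians, raising the mean and the precision (lowering the variance) of the likelihood pulls the posterior percept away from the "silence" prior. The numerical values are purely illustrative assumptions.

```python
def gaussian_posterior(mu_prior, var_prior, mu_like, var_like):
    """Precision-weighted fusion of a Gaussian prior and a Gaussian likelihood."""
    mu_post = (var_like * mu_prior + var_prior * mu_like) / (var_prior + var_like)
    var_post = 1.0 / (1.0 / var_prior + 1.0 / var_like)
    return mu_post, var_post

# Prior expectation of "silence": zero auditory signal, broad uncertainty
mu_prior, var_prior = 0.0, 1.0

# Healthy case: bottom-up evidence close to zero, moderate precision
print(gaussian_posterior(mu_prior, var_prior, mu_like=0.1, var_like=1.0))

# Tinnitus-like case: amplified intrinsic noise raises the likelihood mean and
# its precision (lower variance), so the posterior shifts towards a percept
print(gaussian_posterior(mu_prior, var_prior, mu_like=1.0, var_like=0.2))
```
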
10.
Neurobiol Sleep Circadian Rhythms ; 14: 100097, 2023 May.
Article in English | MEDLINE | ID: mdl-37275555

ABSTRACT

The human sleep cycle has been divided into discrete sleep stages that can be recognized in electroencephalographic (EEG) and other bio-signals by trained specialists or machine learning systems. It is, however, unclear whether these human-defined stages can be rediscovered with unsupervised methods of data analysis, using only a minimal amount of generic pre-processing. Based on EEG data recorded overnight from sleeping human subjects, we investigate the degree of clustering of the sleep stages using the General Discrimination Value as a quantitative measure of class separability. Virtually no clustering is found in the raw data, even after transforming the EEG signals of each 30-s epoch from the time domain into the more informative frequency domain. However, a Principal Component Analysis (PCA) of these epoch-wise frequency spectra reveals that the sleep stages separate significantly better in the low-dimensional sub-space of certain PCA components. In particular, the component C1(t) can serve as a robust, continuous 'master variable' that encodes the depth of sleep and therefore correlates strongly with the 'hypnogram', a common plot of the discrete sleep stages over time. Moreover, C1(t) shows persistent trends during extended time periods where the sleep stage is constant, suggesting that sleep may be better understood as a continuum. These intriguing properties of C1(t) are not only relevant for understanding brain dynamics during sleep, but might also be exploited in low-cost single-channel sleep tracking devices for private and clinical use.
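
A minimal sketch of the analysis route described above: epoch-wise power spectra followed by PCA, with the first component taken as a continuous 'master variable' C1(t). Sampling rate, epoch length and Welch parameters are assumptions, and the synthetic input only demonstrates the shapes involved, not real sleep structure.

```python
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import PCA

def sleep_master_variable(eeg, fs=100.0, epoch_s=30):
    """Epoch-wise power spectra -> PCA -> first component C1(t).

    eeg: 1-D single-channel overnight EEG recording.
    """
    n = int(epoch_s * fs)
    epochs = eeg[: len(eeg) // n * n].reshape(-1, n)          # 30-s epochs
    spectra = np.array([welch(e, fs=fs, nperseg=256)[1] for e in epochs])
    spectra = np.log(spectra + 1e-12)                         # log power
    components = PCA(n_components=5).fit_transform(spectra)
    return components[:, 0]                                   # C1(t), one value per epoch

# Usage with a synthetic "recording" (8 hours at 100 Hz)
c1 = sleep_master_variable(np.random.randn(8 * 3600 * 100))
print(c1.shape)   # one value per 30-s epoch
```
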

11.
Neuroscience ; 520: 39-45, 2023 06 01.
Article in English | MEDLINE | ID: mdl-37080446

ABSTRACT

The Zwicker tone illusion - an auditory phantom percept following a notched-noise stimulus - can serve as an interesting model for acute tinnitus. Recent mechanistic models suggest that the underlying neural mechanisms of both percepts are similar. To date, it is not clear whether animals perceive the Zwicker tone, as no behavioral paradigms have been available to objectively assess the presence of this phantom percept. Here we introduce, for the first time, a modified version of the gap pre-pulse inhibition of the acoustic startle reflex (GPIAS) paradigm to test whether a Zwicker tone percept can be induced in our rodent model, the Mongolian gerbil. Furthermore, we developed a new aversive conditioning learning paradigm and compare the two approaches. We found a significant increase in the GPIAS effect when presenting a notched noise compared to white noise gap pre-pulse inhibition, which is consistent with the interpretation of a Zwicker tone percept in these animals. In the aversive conditioning paradigm, no clear effect could be observed in the discrimination performance of the tested animals. When investigating the first 33% of the correct conditioned responses, an effect of a possible Zwicker tone percept can be seen, i.e., the animals behave as if a pure tone had been presented, but the paradigm needs to be improved further. Nevertheless, the results indicate that Mongolian gerbils are able to perceive a Zwicker tone and can serve as a neurophysiological model for human tinnitus generation.


Subject(s)
Illusions , Tinnitus , Humans , Animals , Gerbillinae , Hearing , Noise , Reflex, Startle/physiology , Acoustic Stimulation
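
For readers unfamiliar with the GPIAS paradigm mentioned above, the sketch below computes one common way of quantifying the effect: the relative suppression of the startle amplitude when a silent gap precedes the startle pulse. The numbers are hypothetical, and the exact metric used by the authors may differ.

```python
import numpy as np

def gpias_effect(startle_no_gap, startle_gap):
    """Gap pre-pulse inhibition of the acoustic startle reflex (GPIAS).

    Returns the relative reduction of the startle amplitude when a silent gap
    precedes the startle pulse; larger values mean the gap (or an illusory tone
    filling it) was detected more strongly.
    """
    return 1.0 - np.mean(startle_gap) / np.mean(startle_no_gap)

# Hypothetical startle amplitudes (arbitrary units) for one animal
white_noise = gpias_effect(np.array([1.0, 1.1, 0.9]), np.array([0.8, 0.7, 0.9]))
notched     = gpias_effect(np.array([1.0, 1.0, 1.1]), np.array([0.5, 0.6, 0.55]))
print(f"GPIAS (white noise): {white_noise:.2f}, GPIAS (notched noise): {notched:.2f}")
```
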
12.
J Neurophysiol ; 129(5): 1114-1126, 2023 05 01.
Article in English | MEDLINE | ID: mdl-37042559

ABSTRACT

Sensory "aftereffects" are a subgroup of sensory illusions that can be defined as illusory phenomena triggered after prolonged exposure to a given sensory inducer. These phenomena are interesting because they can provide insights into the mechanisms of perception. In the auditory modality, there is special interest in the so-called "Zwicker tone" (ZT), an auditory aftereffect triggered after the presentation of a notched noise (NN, broadband noise with a missing frequency band). The ZT has been considered a plausible model of a specific tinnitus subtype since it shares some key characteristics with tinnitus. Indeed, both the tinnitus percept and the ZT can be triggered by a relative "sensory deprivation," and their pitch corresponds to the frequency region that has been deprived. The effects of NN presentation on the central auditory system have barely been investigated, and the mechanisms of the ZT remain elusive. In this study, we analyzed the laminar structure of the neural activity in the primary auditory cortex of anesthetized and awake guinea pigs during and after white noise (WN) and NN stimulation. We found significantly increased offset responses, in terms of both spiking activity and local field potential amplitude, after NN compared with WN presentation. The offset responses were circumscribed to the granular and upper infragranular layers (input layers) and were maximal when the neuron's best frequency was within or near the missing frequency band. The mechanisms of the offset response and its putative link with the ZT are discussed. NEW & NOTEWORTHY: Notched noise (white noise with an embedded spectral gap) causes significant excitatory offset responses in the auditory cortex of awake and anesthetized guinea pigs. The largest offset responses were located in the infragranular/granular layers, and current source density analysis revealed that the offset responses were associated with an early current sink localized in the upper infragranular layers. We discuss the possibility that the offset responses might be associated with an auditory phantom percept (Zwicker tone).


Subject(s)
Auditory Cortex , Illusions , Tinnitus , Animals , Guinea Pigs , Noise , Auditory Cortex/physiology , Acoustic Stimulation , Illusions/physiology , Evoked Potentials, Auditory/physiology , Auditory Perception/physiology
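
The current source density (CSD) analysis mentioned in the abstract above is, in its standard form, a second spatial derivative of the laminar LFP profile. The sketch below shows that estimator on random stand-in data; contact spacing and conductivity are assumed values, not those of the study.

```python
import numpy as np

def current_source_density(lfp, spacing_mm=0.1, sigma=0.3):
    """Standard (second spatial derivative) CSD estimate.

    lfp: array of shape (n_channels, n_samples), channels ordered by cortical
    depth; sigma is the assumed tissue conductivity (S/m).
    Negative CSD values indicate current sinks, positive values sources.
    """
    d2 = lfp[:-2] - 2 * lfp[1:-1] + lfp[2:]        # discrete 2nd spatial derivative
    return -sigma * d2 / spacing_mm ** 2

# Toy laminar recording: 16 contacts, 500 time samples
lfp = np.random.randn(16, 500)
csd = current_source_density(lfp)
print(csd.shape)   # (14, 500): CSD is defined on interior contacts only
```
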
13.
Sci Rep ; 13(1): 3644, 2023 03 04.
Article in English | MEDLINE | ID: mdl-36871003

ABSTRACT

How do we make sense of the input from our sensory organs and put the perceived information into the context of our past experiences? The hippocampal-entorhinal complex plays a major role in the organization of memory and thought. The formation of, and navigation in, cognitive maps of arbitrary mental spaces via place and grid cells can serve as a representation of memories and experiences and of their relations to each other. The multi-scale successor representation has been proposed as the mathematical principle underlying place and grid cell computations. Here, we present a neural network that learns a cognitive map of a semantic space based on 32 different animal species encoded as feature vectors. The neural network successfully learns the similarities between different animal species and constructs a cognitive map of 'animal space' based on the principle of successor representations, with an accuracy of around 30%, which is close to the theoretical maximum given that every animal species has more than one possible successor, i.e., nearest neighbor in feature space. Furthermore, a hierarchical structure, i.e., different scales of cognitive maps, can be modeled based on multi-scale successor representations. We find that, in fine-grained cognitive maps, the animal vectors are evenly distributed in feature space. In contrast, in coarse-grained maps, animal vectors are highly clustered according to their biological class, i.e., amphibians, mammals and insects. This could be a putative mechanism enabling the emergence of new, abstract semantic concepts. Finally, even completely new or incomplete input can be represented by interpolation of the representations from the cognitive map with a remarkably high accuracy of up to 95%. We conclude that the successor representation can serve as a weighted pointer to past memories and experiences, and may therefore be a crucial building block for including prior knowledge and for deriving context knowledge from novel input. Thus, our model provides a new tool to complement contemporary deep learning approaches on the road towards artificial general intelligence.


Subject(s)
Neural Networks, Computer , Semantics , Animals , Artificial Intelligence , Cognition , Mammals
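
The successor representation underlying the cognitive-map account above has a simple closed form for a fixed transition matrix, M = (I - γT)^(-1), and different discount factors γ yield different map scales. The sketch below illustrates this with hypothetical animal feature vectors; it is not the paper's neural-network learner.

```python
import numpy as np

def successor_representation(T, gamma):
    """Closed-form successor representation M = sum_t gamma^t T^t = (I - gamma*T)^-1."""
    n = T.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * T)

# Hypothetical feature vectors for a handful of animal species
rng = np.random.default_rng(0)
features = rng.random((6, 8))                       # 6 species x 8 features

# Transition probabilities proportional to similarity in feature space
sim = features @ features.T
np.fill_diagonal(sim, 0.0)
T = sim / sim.sum(axis=1, keepdims=True)            # row-stochastic matrix

# Different discount factors gamma give different "scales" of the cognitive map
M_fine   = successor_representation(T, gamma=0.3)   # local neighborhood structure
M_coarse = successor_representation(T, gamma=0.9)   # long-range, class-like structure
```
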
14.
Sci Rep ; 12(1): 22121, 2022 12 21.
Article in English | MEDLINE | ID: mdl-36543849

ABSTRACT

Data classification, the process of analyzing data and organizing it into categories or clusters, is a fundamental computing task of natural and artificial information processing systems. Both supervised classification and unsupervised clustering work best when the input vectors are distributed over the data space in a highly non-uniform way. However, these tasks become challenging in weakly structured data sets, where a significant fraction of data points is located in between the regions of high point density. We derive the theoretical limit for classification accuracy that arises from this overlap of data categories. By using a surrogate data generation model with adjustable statistical properties, we show that sufficiently powerful classifiers based on completely different principles, such as perceptrons and Bayesian models, all perform at this universal accuracy limit under ideal training conditions. Remarkably, the accuracy limit is not affected by certain non-linear transformations of the data, even if these transformations are non-reversible and drastically reduce the information content of the input data. We further compare the data embeddings that emerge from supervised and unsupervised training, using the MNIST data set and human EEG recordings during sleep. For MNIST, we find that the categories are significantly separated not only after supervised training with back-propagation, but also after unsupervised dimensionality reduction. A qualitatively similar cluster enhancement by unsupervised compression is observed for the EEG sleep data, but with a very small overall degree of cluster separation. We conclude that the handwritten digits in MNIST can be considered 'natural kinds', whereas the EEG sleep recordings form a relatively weakly structured data set, so that unsupervised clustering will not necessarily recover the human-defined sleep stages.


Subject(s)
Algorithms , Neural Networks, Computer , Humans , Bayes Theorem , Sleep , Sleep Stages
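
The accuracy limit discussed above arises from the overlap of class-conditional densities: no classifier can beat the Bayes-optimal decision rule. A worked one-dimensional Gaussian example follows; means, variance and class priors are arbitrary assumptions, not the paper's surrogate data model.

```python
import numpy as np
from scipy.stats import norm

# Two equally likely categories with overlapping 1-D Gaussian densities
mu0, mu1, sd = 0.0, 2.0, 1.0

# Theoretical accuracy limit: Bayes-optimal decision at the midpoint between means
boundary = (mu0 + mu1) / 2
limit = 0.5 * norm.cdf(boundary, mu0, sd) + 0.5 * (1 - norm.cdf(boundary, mu1, sd))
print(f"theoretical accuracy limit: {limit:.3f}")

# Empirical check: an ideal-observer rule on sampled data reaches the same limit
rng = np.random.default_rng(2)
y = rng.integers(0, 2, 100_000)
x = rng.normal(np.where(y == 0, mu0, mu1), sd)
accuracy = np.mean((x > boundary) == (y == 1))
print(f"empirical accuracy        : {accuracy:.3f}")
```
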
15.
Sci Rep ; 12(1): 11233, 2022 07 04.
Article in English | MEDLINE | ID: mdl-35787659

ABSTRACT

How does the mind organize thoughts? The hippocampal-entorhinal complex is thought to support domain-general representation and processing of structural knowledge of arbitrary state, feature and concept spaces. In particular, it enables the formation of cognitive maps, and navigation on these maps, thereby broadly contributing to cognition. It has been proposed that the concept of multi-scale successor representations explains the underlying computations performed by place and grid cells. Here, we present a neural-network-based approach to learning such representations and apply it to different scenarios: a spatial exploration task based on supervised learning, a spatial navigation task based on reinforcement learning, and a non-spatial task in which linguistic constructions have to be inferred by observing sample sentences. In all scenarios, the neural network correctly learns and approximates the underlying structure by building successor representations. Furthermore, the resulting neural firing patterns are strikingly similar to experimentally observed place and grid cell firing patterns. We conclude that cognitive maps and neural-network-based successor representations of structured knowledge provide a promising way to overcome some of the shortcomings of deep learning on the road towards artificial general intelligence.


Subject(s)
Grid Cells , Language , Cognition , Models, Neurological , Neural Networks, Computer
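
Complementing the closed-form example given for the previous entry, successor representations can also be learned incrementally from experience. The sketch below uses tabular temporal-difference updates on a ring of states, a simplification of the paper's neural-network approach, with environment and hyperparameters chosen purely for illustration.

```python
import numpy as np

def learn_sr_td(n_states, episodes, gamma=0.9, lr=0.1, seed=0):
    """Temporal-difference learning of the successor representation M.

    A random walk on a ring of states; row M[s] converges towards the expected
    discounted future occupancy of all states when starting from state s.
    """
    rng = np.random.default_rng(seed)
    M = np.eye(n_states)
    for _ in range(episodes):
        s = rng.integers(n_states)
        for _ in range(200):
            s_next = (s + rng.choice([-1, 1])) % n_states   # random walk step
            one_hot = np.eye(n_states)[s]
            td_error = one_hot + gamma * M[s_next] - M[s]
            M[s] += lr * td_error
            s = s_next
    return M

M = learn_sr_td(n_states=20, episodes=200)
# Rows of M resemble place-cell-like fields centered on each state; on this
# ring environment the eigenvectors of M show periodic, grid-like structure.
```
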
16.
Front Neurosci ; 16: 908330, 2022.
Article in English | MEDLINE | ID: mdl-35757533

ABSTRACT

Noise is generally considered to harm information processing performance. However, in the context of stochastic resonance, noise has been shown to improve the detection of weak sub-threshold signals, and it has been proposed that the brain might actively exploit this phenomenon. Especially within the auditory system, recent studies suggest that intrinsic noise plays a key role in signal processing and might even correspond to the increased spontaneous neuronal firing rates observed in early processing stages of the auditory brain stem and cortex after hearing loss. Here we present a computational model of the auditory pathway based on a deep neural network trained on speech recognition. We simulate different levels of hearing loss and investigate the effect of intrinsic noise. Remarkably, speech recognition after hearing loss actually improves with additional intrinsic noise. This surprising result indicates that intrinsic noise might not only play a crucial role in human auditory processing, but might even be beneficial for contemporary machine learning approaches.
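
A minimal stochastic-resonance demonstration of the effect described above: for a signal attenuated below a detection threshold (a crude stand-in for hearing loss), an intermediate level of added 'intrinsic' noise maximizes how well the thresholded output tracks the signal. Signal, threshold and noise levels are arbitrary assumptions, and the correlation used here is only a simple proxy for recognition performance, not the paper's speech recognition network.

```python
import numpy as np

def sr_information_proxy(signal, threshold, noise_sd, trials=200, seed=0):
    """Correlation between a sub-threshold signal and its thresholded, noisy
    detection - a simple proxy for the transmitted information."""
    rng = np.random.default_rng(seed)
    corrs = []
    for _ in range(trials):
        noisy = signal + rng.normal(0.0, noise_sd, size=signal.shape)
        detected = (noisy > threshold).astype(float)
        if detected.std() > 0:                    # skip trials with no crossings
            corrs.append(np.corrcoef(signal, detected)[0, 1])
    return np.mean(corrs) if corrs else 0.0

t = np.linspace(0, 1, 1000)
sub_threshold = 0.5 * np.sin(2 * np.pi * 5 * t)   # "hearing loss": the signal
                                                   # never reaches the threshold
for sd in [0.05, 0.2, 0.5, 1.0, 3.0]:
    print(f"noise sd = {sd:4.2f} -> signal/output correlation "
          f"{sr_information_proxy(sub_threshold, 1.0, sd):.3f}")
```
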

17.
Front Neurosci ; 16: 831581, 2022.
Article in English | MEDLINE | ID: mdl-35431789

ABSTRACT

Recently, we proposed a model of tinnitus development based on a physiological mechanism that permanently optimizes information transfer from the auditory periphery to the central nervous system by means of neuronal stochastic resonance: neuronal noise is added to the cochlear input, thereby improving hearing thresholds. In this view, tinnitus is a byproduct of this added neuronal activity. Interestingly, in healthy subjects auditory thresholds can also be improved by adding external, near-threshold acoustic noise. Based on these two findings and a pilot study, we hypothesized that tinnitus loudness (TL) might be reduced if the internally generated neuronal noise is substituted by externally provided, individually adapted acoustic noise. In the present study, we extended the data set of the first pilot study and further optimized our approach using a more fine-grained adaptation of the presented noise to the patients' audiometric data. We presented different spectrally filtered near-threshold noises (-2 dB to +6 dB HL, in 2 dB steps) for 40 s each to 24 patients with tonal tinnitus and a hearing deficit not exceeding 40 dB. After each presentation, the effect of the noise on the perceived TL was assessed by the patient's response to a five-point scale question. In 21 out of 24 patients (13 women), TL was successfully subjectively attenuated during acoustic near-threshold stimulation using noise spectrally centered half an octave below the individual's tinnitus pitch (TP). Six patients reported complete subjective silencing of their tinnitus percept during stimulation. Acoustic noise is thus able to reduce TL, but the TP has to be taken into account. Based on our findings, we speculate about a possible future treatment of tinnitus by near-threshold, band-pass filtered acoustic noise stimulation, which could be implemented in hearing aids with noise generators.
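
A sketch of how an individually adapted stimulus of the kind described above might be generated: band-pass filtered noise centered half an octave below the tinnitus pitch. Filter order, bandwidth and sampling rate are assumptions; calibrating the presentation level relative to the individual hearing threshold (the -2 to +6 dB HL steps) is not shown.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def tailored_noise(tinnitus_pitch_hz, duration_s=40.0, fs=44100, bandwidth_oct=0.5):
    """Band-pass filtered noise centered half an octave below the tinnitus pitch.

    The presentation level (e.g. -2 to +6 dB HL in 2 dB steps) would be set
    separately, relative to the individual's audiometric threshold.
    """
    center = tinnitus_pitch_hz / 2 ** 0.5             # half an octave below the pitch
    low = center * 2 ** (-bandwidth_oct / 2)
    high = center * 2 ** (bandwidth_oct / 2)
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    white = np.random.default_rng(0).normal(size=int(duration_s * fs))
    return sosfilt(sos, white)

stimulus = tailored_noise(tinnitus_pitch_hz=6000)      # e.g. a 6 kHz tonal tinnitus
```
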

18.
J Am Soc Nephrol ; 33(4): 732-745, 2022 04.
Article in English | MEDLINE | ID: mdl-35149593

ABSTRACT

BACKGROUND: The endocytic reabsorption of proteins in the proximal tubule requires a complex machinery and defects can lead to tubular proteinuria. The precise mechanisms of endocytosis and processing of receptors and cargo are incompletely understood. EHD1 belongs to a family of proteins presumably involved in the scission of intracellular vesicles and in ciliogenesis. However, the relevance of EHD1 in human tissues, in particular in the kidney, was unknown. METHODS: Genetic techniques were used in patients with tubular proteinuria and deafness to identify the disease-causing gene. Diagnostic and functional studies were performed in patients and disease models to investigate the pathophysiology. RESULTS: We identified six individuals (5-33 years) with proteinuria and a high-frequency hearing deficit associated with the homozygous missense variant c.1192C>T (p.R398W) in EHD1. Proteinuria (0.7-2.1 g/d) consisted predominantly of low molecular weight proteins, reflecting impaired renal proximal tubular endocytosis of filtered proteins. Ehd1 knockout and Ehd1R398W/R398W knockin mice also showed a high-frequency hearing deficit and impaired receptor-mediated endocytosis in proximal tubules, and a zebrafish model showed impaired ability to reabsorb low molecular weight dextran. Interestingly, ciliogenesis appeared unaffected in patients and mouse models. In silico structural analysis predicted a destabilizing effect of the R398W variant and possible interference with nucleotide binding, leading to impaired EHD1 oligomerization and membrane remodeling ability. CONCLUSIONS: A homozygous missense variant of EHD1 causes a previously unrecognized autosomal recessive disorder characterized by sensorineural deafness and tubular proteinuria. Recessive EHD1 variants should be considered in individuals with hearing impairment, especially if tubular proteinuria is noted.


Subject(s)
Deafness , Zebrafish , Adolescent , Adult , Animals , Child , Child, Preschool , Deafness/genetics , Endocytosis , Humans , Kidney Tubules, Proximal/metabolism , Low Density Lipoprotein Receptor-Related Protein-2/genetics , Low Density Lipoprotein Receptor-Related Protein-2/metabolism , Mice , Mutation , Proteinuria/metabolism , Vesicular Transport Proteins/genetics , Young Adult , Zebrafish/metabolism
19.
Front Psychol ; 13: 1076339, 2022.
Article in English | MEDLINE | ID: mdl-36619132

ABSTRACT

Language is fundamentally predictable, both at a higher schematic level and at the level of low-level lexical items. Regarding predictability at the lexical level, collocations are frequent co-occurrences of words that are often characterized by a high strength of association. So far, psycho- and neurolinguistic studies have mostly employed highly artificial experimental paradigms in the investigation of collocations, focusing on the processing of single words or isolated sentences. In contrast, here we analyze EEG brain responses recorded during stimulation with continuous speech, i.e., audio books. We find that the N400 response to collocations is significantly different from that to non-collocations, although the effect varies with respect to cortical region (anterior/posterior) and laterality (left/right). Our results are in line with studies using continuous speech, and they mostly contradict those using artificial paradigms and stimuli. To the best of our knowledge, this is the first neurolinguistic study on collocations using continuous speech stimulation.
