Results 1 - 20 of 40
1.
Clin EEG Neurosci ; : 15500594241253910, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38751125

ABSTRACT

Alterations of mismatch responses (ie, neural activity evoked by unexpected stimuli) are often considered a potential biomarker of schizophrenia. Going beyond establishing the type of observed alterations found in diagnosed patients and related cohorts, computational methods can yield valuable insights into the underlying disruptions of neural mechanisms and cognitive function. Here, we adopt a typology of model-based approaches from computational cognitive neuroscience, providing an overview of the study of mismatch responses and their alterations in schizophrenia from four complementary perspectives: (a) connectivity models, (b) decoding models, (c) neural network models, and (d) cognitive models. Connectivity models aim at inferring the effective connectivity patterns between brain regions that may underlie mismatch responses measured at the sensor level. Decoding models use multivariate spatiotemporal mismatch response patterns to infer the type of sensory violations or to classify participants based on their diagnosis. Neural network models such as deep convolutional neural networks can be used for improved classification performance as well as for a systematic study of various aspects of empirical data. Finally, cognitive models quantify mismatch responses in terms of signaling and updating perceptual predictions over time. In addition to describing the available methodology and reviewing the results of recent computational psychiatry studies, we offer suggestions for future work applying model-based techniques to advance the study of mismatch responses in schizophrenia.
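The decoding-model perspective (b) can be illustrated with a minimal sketch: a cross-validated linear classifier that predicts diagnostic group from spatiotemporal mismatch-response patterns. Everything below is synthetic and invented for the example (group sizes, channel and time-point counts, and the injected group difference); it is not the pipeline of any study reviewed here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic spatiotemporal mismatch responses: 40 participants x (64 channels * 50 time points),
# with a small group difference injected into a subset of features for the "patient" class.
n_per_group, n_features = 20, 64 * 50
controls = rng.normal(0.0, 1.0, (n_per_group, n_features))
patients = rng.normal(0.0, 1.0, (n_per_group, n_features))
patients[:, :100] += 0.5  # injected group difference in the first 100 features

X = np.vstack([controls, patients])
y = np.array([0] * n_per_group + [1] * n_per_group)  # 0 = control, 1 = patient

# Cross-validated classification of diagnosis from mismatch-response patterns;
# shrinkage handles the many-features, few-participants regime.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```

With real data, `X` would hold trial-averaged mismatch responses per participant, and permutation testing would be needed to assess whether accuracy exceeds chance.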

2.
J Neurosci ; 44(10)2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38267259

ABSTRACT

Sound texture perception takes advantage of a hierarchy of time-averaged statistical features of acoustic stimuli, but much remains unclear about how these statistical features are processed along the auditory pathway. Here, we compared the neural representation of sound textures in the inferior colliculus (IC) and auditory cortex (AC) of anesthetized female rats. We recorded responses to texture morph stimuli that gradually add statistical features of increasingly higher complexity. For each texture, several different exemplars were synthesized using different random seeds. An analysis of transient and ongoing multiunit responses showed that the IC units were sensitive to every type of statistical feature, albeit to a varying extent. In contrast, only a small proportion of AC units were overtly sensitive to any statistical features. Differences in texture types explained more of the variance of IC neural responses than did differences in exemplars, indicating a degree of "texture type tuning" in the IC, but the same was, perhaps surprisingly, not the case for AC responses. We also evaluated the accuracy of texture type classification from single-trial population activity and found that IC responses became more informative as more summary statistics were included in the texture morphs, while for AC population responses, classification performance remained consistently very low. These results argue against the idea that AC neurons encode sound type via an overt sensitivity in neural firing rate to fine-grained spectral and temporal statistical features.
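The texture-type classification analysis can be sketched as a leave-one-out nearest-centroid decoder applied to single-trial population responses. The data below are synthetic (texture counts, trial counts, unit counts, and noise levels are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic single-trial population responses: 5 texture types x 20 trials x 50 units.
# Each texture type is given its own mean response profile across the population.
n_types, n_trials, n_units = 5, 20, 50
type_profiles = rng.normal(0.0, 1.0, (n_types, n_units))
responses = type_profiles[:, None, :] + rng.normal(0.0, 1.5, (n_types, n_trials, n_units))

# Leave-one-out nearest-centroid classification of texture type
correct = 0
for t in range(n_types):
    for i in range(n_trials):
        trial = responses[t, i]
        centroids = []
        for c in range(n_types):
            mask = np.ones(n_trials, dtype=bool)
            if c == t:
                mask[i] = False  # exclude the held-out trial from its own centroid
            centroids.append(responses[c, mask].mean(axis=0))
        pred = int(np.argmin([np.linalg.norm(trial - c) for c in centroids]))
        correct += pred == t

accuracy = correct / (n_types * n_trials)
print(f"texture type decoding accuracy: {accuracy:.2f} (chance = {1 / n_types:.2f})")
```

Comparing decoders trained on IC versus AC population responses in this way would reproduce the study's contrast between the two stations of the pathway.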


Subject(s)
Auditory Cortex, Inferior Colliculi, Female, Rats, Animals, Auditory Pathways/physiology, Inferior Colliculi/physiology, Mesencephalon/physiology, Sound, Auditory Cortex/physiology, Acoustic Stimulation/methods, Auditory Perception/physiology
3.
Neuroimage ; 285: 120476, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38030051

ABSTRACT

Multimodal stimulation can reverse pathological neural activity and improve symptoms in neuropsychiatric diseases. Recent research shows that multimodal acoustic-electric trigeminal-nerve stimulation (TNS) (i.e., musical stimulation synchronized to electrical stimulation of the trigeminal nerve) can improve consciousness in patients with disorders of consciousness. However, the reliability and mechanism of this novel approach remain largely unknown. We explored the effects of multimodal acoustic-electric TNS in healthy human participants by assessing conscious perception before and after stimulation using behavioral and neural measures in tactile and auditory target-detection tasks. To explore the mechanisms underlying the putative effects of acoustic-electric stimulation, we fitted a biologically plausible neural network model to the neural data using dynamic causal modeling. We observed that (1) acoustic-electric stimulation improves conscious tactile perception without a concomitant change in auditory perception, (2) this improvement is caused by the interplay of the acoustic and electric stimulation rather than by either unimodal stimulation alone, and (3) the effect of acoustic-electric stimulation on conscious perception correlates with inter-regional connection changes in a recurrent neural processing model. These results provide evidence that acoustic-electric TNS can promote conscious perception. Alterations in inter-regional cortical connections might be the mechanism by which acoustic-electric TNS achieves its consciousness benefits.


Subject(s)
Auditory Perception, Consciousness, Humans, Reproducibility of Results, Electric Stimulation, Auditory Perception/physiology, Acoustic Stimulation/methods, Acoustics, Trigeminal Nerve/physiology
4.
Front Neurosci ; 17: 1180066, 2023.
Article in English | MEDLINE | ID: mdl-37781257

ABSTRACT

Introduction: Extracting regularities from ongoing stimulus streams to form predictions is crucial for adaptive behavior. Such regularities exist in terms of the content of the stimuli and their timing, both of which are known to interactively modulate sensory processing. In real-world stimulus streams such as music, regularities can occur at multiple levels, both in terms of contents (e.g., predictions relating to individual notes vs. their more complex groups) and timing (e.g., pertaining to timing between intervals vs. the overall beat of a musical phrase). However, it is unknown whether the brain integrates predictions in a manner that is mutually congruent (e.g., if "beat" timing predictions selectively interact with "what" predictions falling on pulses which define the beat), and whether integrating predictions in different timing conditions relies on dissociable neural correlates. Methods: To address these questions, our study manipulated "what" and "when" predictions at two different levels, (local) interval-defining and (global) beat-defining, within the same stimulus stream, while neural activity was recorded using electroencephalography (EEG) in participants (N = 20) performing a repetition detection task. Results: Our results reveal that temporal predictions based on beat or interval timing modulated mismatch responses to violations of "what" predictions happening at the predicted time points, and that these modulations were shared between types of temporal predictions in terms of the spatiotemporal distribution of EEG signals. Effective connectivity analysis using dynamic causal modeling showed that the integration of "what" and "when" predictions selectively increased connectivity at relatively late cortical processing stages, between the superior temporal gyrus and the fronto-parietal network.
Discussion: Taken together, these results suggest that the brain integrates different predictions with a high degree of mutual congruence, but in a shared and distributed cortical network. This finding contrasts with recent studies indicating separable mechanisms for beat-based and memory-based predictive processing.

5.
J Neurosci ; 43(44): 7361-7375, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37684031

ABSTRACT

Past reward associations may be signaled from different sensory modalities; however, it remains unclear how different types of reward-associated stimuli modulate sensory perception. In this human fMRI study (female and male participants), a visual target was simultaneously presented with either an intra- (visual) or a cross-modal (auditory) cue that was previously associated with rewards. We hypothesized that, depending on the sensory modality of the cues, distinct neural mechanisms underlie the value-driven modulation of visual processing. Using a multivariate approach, we confirmed that reward-associated cues enhanced the target representation in early visual areas and identified the brain valuation regions. Then, using an effective connectivity analysis, we tested three possible patterns of connectivity that could underlie the modulation of the visual cortex: a direct pathway from the frontal valuation areas to the visual areas, a mediated pathway through the attention-related areas, and a mediated pathway that additionally involved sensory association areas. We found evidence for the third model, demonstrating that the reward-related information in both sensory modalities is communicated across the valuation and attention-related brain regions. Additionally, the superior temporal areas were recruited when reward was cued cross-modally. The strongest dissociation between the intra- and cross-modal reward-driven effects was observed at the level of the feedforward and feedback connections of the visual cortex estimated from the winning model. These results suggest that, in the presence of previously rewarded stimuli from different sensory modalities, a combination of domain-general and domain-specific mechanisms is recruited across the brain to adjust visual perception.

SIGNIFICANCE STATEMENT: Reward has a profound effect on perception, but it is not known whether shared or disparate mechanisms underlie the reward-driven effects across sensory modalities.
In this human fMRI study, we examined the reward-driven modulation of the visual cortex by visual (intra-modal) and auditory (cross-modal) reward-associated cues. Using a model-based approach to identify the most plausible pattern of inter-regional effective connectivity, we found that higher-order areas involved in the valuation and attentional processing were recruited by both types of rewards. However, the pattern of connectivity between these areas and the early visual cortex was distinct between the intra- and cross-modal rewards. This evidence suggests that, to effectively adapt to the environment, reward signals may recruit both domain-general and domain-specific mechanisms.


Subject(s)
Visual Cortex, Visual Perception, Humans, Male, Female, Attention, Brain, Ocular Vision, Auditory Perception, Photic Stimulation/methods, Acoustic Stimulation/methods
6.
Hear Res ; 438: 108857, 2023 10.
Article in English | MEDLINE | ID: mdl-37639922

ABSTRACT

Perception is sensitive to statistical regularities in the environment, including temporal characteristics of sensory inputs. Interestingly, implicit learning of temporal patterns in one modality can also improve their processing in another modality. However, it is unclear how cross-modal learning transfer affects neural responses to sensory stimuli. Here, we recorded neural activity of human volunteers using electroencephalography (EEG), while participants were exposed to brief sequences of randomly timed auditory or visual pulses. Some trials consisted of a repetition of the temporal pattern within the sequence, and subjects were tasked with detecting these trials. Unknown to the participants, some trials reappeared throughout the experiment across both modalities (Transfer) or only within a modality (Control), enabling implicit learning in one modality and its transfer. Using a novel method of analysis of single-trial EEG responses, we showed that learning temporal structures within and across modalities is reflected in neural learning curves. These putative neural correlates of learning transfer were similar both when temporal information learned in audition was transferred to visual stimuli and vice versa. The modality-specific mechanisms for learning of temporal information and general mechanisms which mediate learning transfer across modalities had distinct physiological signatures: temporal learning within modalities relied on modality-specific brain regions while learning transfer affected beta-band activity in frontal regions.


Subject(s)
Auditory Perception, Learning, Humans, Electroencephalography, Frontal Lobe, Healthy Volunteers
7.
BMC Biol ; 21(1): 130, 2023 05 30.
Article in English | MEDLINE | ID: mdl-37254137

ABSTRACT

BACKGROUND: Non-invasive recordings of gross neural activity in humans often show responses to omitted stimuli in steady trains of identical stimuli. This has been taken as evidence for the neural coding of prediction or prediction error. However, evidence for such omission responses from invasive recordings of cellular-scale responses in animal models is scarce. Here, we sought to characterise omission responses using extracellular recordings in the auditory cortex of anaesthetised rats. We profiled omission responses across local field potentials (LFP), analogue multiunit activity (AMUA), and single/multi-unit spiking activity, using stimuli that were fixed-rate trains of acoustic noise bursts where 5% of bursts were randomly omitted. RESULTS: Significant omission responses were observed in LFP and AMUA signals, but not in spiking activity. These omission responses had a lower amplitude and longer latency than burst-evoked sensory responses, and omission response amplitude increased as a function of the number of preceding bursts. CONCLUSIONS: Together, our findings show that omission responses are most robustly observed in LFP and AMUA signals (relative to spiking activity). This has implications for models of cortical processing that require many neurons to encode prediction errors in their spike output.


Subject(s)
Auditory Cortex, Animals, Rats, Acoustic Stimulation, Action Potentials/physiology, Auditory Cortex/physiology, Auditory Evoked Potentials/physiology, Neurons/physiology
8.
J Neurosci ; 43(25): 4697-4708, 2023 06 21.
Article in English | MEDLINE | ID: mdl-37221094

ABSTRACT

Previous work has demonstrated that performance in an auditory selective attention task can be enhanced or impaired, depending on whether a task-irrelevant visual stimulus is temporally coherent with a target auditory stream or with a competing distractor. However, it remains unclear how audiovisual (AV) temporal coherence and auditory selective attention interact at the neurophysiological level. Here, we measured neural activity using EEG while human participants (men and women) performed an auditory selective attention task, detecting deviants in a target audio stream. The amplitude envelopes of the two competing auditory streams changed independently, while the radius of a visual disk was manipulated to control the AV coherence. Analysis of the neural responses to the sound envelope demonstrated that auditory responses were enhanced largely independently of the attentional condition: both target and masker stream responses were enhanced when temporally coherent with the visual stimulus. In contrast, attention enhanced the event-related response evoked by the transient deviants, largely independently of AV coherence. These results provide evidence for dissociable neural signatures of bottom-up (coherence) and top-down (attention) effects in AV object formation.

SIGNIFICANCE STATEMENT: Temporal coherence between auditory stimuli and task-irrelevant visual stimuli can enhance behavioral performance in auditory selective attention tasks. However, how audiovisual temporal coherence and attention interact at the neural level has not been established. Here, we measured EEG during a behavioral task designed to independently manipulate audiovisual coherence and auditory selective attention. While some auditory features (sound envelope) could be coherent with visual stimuli, other features (timbre) were independent of visual stimuli.
We find that audiovisual integration can be observed independently of attention for sound envelopes temporally coherent with visual stimuli, while the neural responses to unexpected timbre changes are most strongly modulated by attention. Our results provide evidence for dissociable neural mechanisms of bottom-up (coherence) and top-down (attention) effects on audiovisual object formation.


Subject(s)
Auditory Perception, Evoked Potentials, Male, Humans, Female, Evoked Potentials/physiology, Auditory Perception/physiology, Attention/physiology, Sound, Acoustic Stimulation, Visual Perception/physiology, Photic Stimulation
9.
bioRxiv ; 2023 Jan 20.
Article in English | MEDLINE | ID: mdl-36711896

ABSTRACT

Detecting patterns, and noticing unexpected pattern changes, in the environment is a vital aspect of sensory processing. Adaptation and prediction error responses are two components of neural processing related to these tasks, and previous studies of the auditory system in rodents show that these two components are partially dissociable in terms of the topography and latency of neural responses to sensory deviants. However, many previous studies have focused on repetitions of single stimuli, such as pure tones, which have limited ecological validity. In this study, we tested whether auditory cortical activity shows adaptation to repetition of more complex sound patterns (bisyllabic pairs). Specifically, we compared neural responses to violations of sequences based on single stimulus probability only, against responses to more complex violations based on stimulus order. We employed an auditory oddball paradigm and monitored the auditory cortex (ACtx) activity of awake mice (N=8) using wide-field calcium imaging. We found that cortical responses were sensitive both to single stimulus probabilities and to more global stimulus patterns, as mismatch signals were elicited following both substitution deviants and transposition deviants. Notably, area A2 showed larger mismatch signals to those deviants than primary ACtx (A1), which suggests a hierarchical gradient of prediction error signaling in the auditory cortex. Such a hierarchical gradient was observed for late but not early peaks of calcium transients to deviants, suggesting that the late part of the deviant response may reflect prediction error signaling in response to more complex sensory pattern violations.

10.
Schizophr Bull ; 49(2): 407-416, 2023 03 15.
Article in English | MEDLINE | ID: mdl-36318221

ABSTRACT

BACKGROUND AND HYPOTHESIS: Differences in sound relevance filtering in schizophrenia are proposed to represent a key index of biological changes in brain function in the illness. This study featured a computational modeling approach to test the hypothesis that processing differences might already be evident in first-episode illness, becoming more pronounced in the established illness. STUDY DESIGN: Auditory event-related potentials to a typical oddball sequence (rare pitch deviations amongst regular sounds) were recorded from 90 persons with schizophrenia-spectrum disorders (40 first-episode schizophrenia-spectrum, 50 established illness) and age-matched healthy controls. The data were analyzed using dynamic causal modeling to identify the changes in effective connectivity that best explained group differences. STUDY RESULTS: Group differences were linked to intrinsic (within brain region) connectivity changes. In activity-dependent measures these were restricted to the left auditory cortex in first-episode schizophrenia-spectrum but were more widespread in the established illness. Modeling suggested that both established illness and first-episode schizophrenia-spectrum groups expressed significantly lower inhibition of inhibitory interneuron activity and altered gain on superficial pyramidal cells, with the data indicative of differences in both putative N-methyl-d-aspartate glutamate receptor activity-dependent plasticity and classic neuromodulation. CONCLUSIONS: The study provides further support for the notion that examining the ability to alter responsiveness to structured sound sequences in schizophrenia and first-episode schizophrenia-spectrum could be informative for uncovering the nature and progression of changes in brain function during the illness. Furthermore, modeling suggested that limited differences present at first episode may become more expansive with illness progression.


Subject(s)
Schizophrenia, Humans, Auditory Evoked Potentials/physiology, Electroencephalography, Evoked Potentials/physiology, Computer Simulation
12.
Curr Biol ; 32(11): 2548-2555.e5, 2022 06 06.
Article in English | MEDLINE | ID: mdl-35487221

ABSTRACT

Recent studies have shown that stimulus history can be decoded via the use of broadband sensory impulses to reactivate mnemonic representations [1-4]. However, memories of previous stimuli can also be used to form sensory predictions about upcoming stimuli [5, 6]. Predictive mechanisms allow the brain to create a probable model of the outside world, which can be updated when errors are detected between the model predictions and external inputs [7-10]. Direct recordings in the auditory cortex of awake mice have established how encoding mechanisms might handle working memory and predictive processes without "overwriting" recent sensory events in instances where predictive mechanisms are triggered by oddballs within a sequence [11]. However, it remains unclear whether mnemonic and predictive information can be decoded from cortical activity simultaneously during passive, implicit sequence processing, even in anesthetized models. Here, we recorded neural activity elicited by repeated stimulus sequences using electrocorticography (ECoG) in the auditory cortex of anesthetized rats, where events within the sequence (referred to henceforth as "vowels," for simplicity) were occasionally replaced with a broadband noise burst or omitted entirely. We show that both stimulus history and predicted stimuli can be decoded from neural responses to broadband impulses, at overlapping latencies but based on independent and uncorrelated data features. We also demonstrate that predictive representations are dynamically updated over the course of stimulation.


Subject(s)
Auditory Cortex, Acoustic Stimulation, Animals, Auditory Cortex/physiology, Auditory Perception/physiology, Electrocorticography, Short-Term Memory/physiology, Mice, Rats
13.
BMC Biol ; 20(1): 48, 2022 02 16.
Article in English | MEDLINE | ID: mdl-35172815

ABSTRACT

BACKGROUND: To localize sound sources accurately in a reverberant environment, human binaural hearing strongly favors analyzing the initial wave front of sounds. Behavioral studies of this "precedence effect" have so far largely been confined to human subjects, limiting the scope of complementary physiological approaches. Similarly, physiological studies have mostly looked at neural responses in the inferior colliculus, the main relay point between the inner ear and the auditory cortex, or used modeling of cochlear auditory transduction in an attempt to identify likely underlying mechanisms. Studies capable of providing a direct comparison of neural coding and behavioral measures of sound localization under the precedence effect are lacking. RESULTS: We adapted a "temporal weighting function" paradigm previously developed to quantify the precedence effect in humans for use in laboratory rats. The animals learned to lateralize click trains in which each click in the train had a different interaural time difference. Computing the "perceptual weight" of each click in the train revealed a strong onset bias, very similar to that reported for humans. Follow-on electrocorticographic recording experiments revealed that onset weighting of interaural time differences is a robust feature of the cortical population response, but interestingly, it often fails to manifest at individual cortical recording sites. CONCLUSION: While previous studies suggested that the precedence effect may be caused by early processing mechanisms in the cochlea or inhibitory circuitry in the brainstem and midbrain, our results indicate that the precedence effect is not fully developed at the level of individual recording sites in the auditory cortex; robust and consistent precedence effects are observable only at the level of cortical population responses.
This indicates that the precedence effect emerges at later cortical processing stages and is a significantly "higher order" feature than has hitherto been assumed.
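A "temporal weighting function" analysis of this kind can be sketched as a logistic regression of lateralization judgments onto the per-click interaural time differences, with the regression coefficients serving as perceptual weights. The simulated listener below (click count, ITD range, and the injected onset bias) is invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Synthetic lateralization task: trains of 8 clicks, each click with its own
# interaural time difference (ITD, in ms); the listener reports left vs. right.
n_trials, n_clicks = 2000, 8
itds = rng.uniform(-0.3, 0.3, (n_trials, n_clicks))

# Simulated listener with a strong onset bias: the first click dominates the decision
true_weights = np.array([5.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
p_right = 1 / (1 + np.exp(-(itds @ true_weights) * 10))
responses = rng.random(n_trials) < p_right  # True = "right" judgment

# Perceptual weight of each click = logistic-regression coefficient of its ITD
model = LogisticRegression().fit(itds, responses)
weights = model.coef_.ravel()
weights /= weights.sum()  # normalize to relative weights
print("relative perceptual weights:", np.round(weights, 2))
```

With real behavioral data, the recovered weight profile would show whether the onset click dominates the lateralization judgment, as reported above.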


Subject(s)
Auditory Cortex, Inferior Colliculi, Sound Localization, Acoustic Stimulation/methods, Animals, Auditory Cortex/physiology, Hearing, Humans, Inferior Colliculi/physiology, Sound Localization/physiology
14.
Neuroimage ; 247: 118746, 2022 02 15.
Article in English | MEDLINE | ID: mdl-34875382

ABSTRACT

The ability to process and respond to external input is critical for adaptive behavior. Why, then, do neural and behavioral responses vary across repeated presentations of the same sensory input? Ongoing fluctuations of neuronal excitability are currently hypothesized to underlie the trial-by-trial variability in sensory processing. To test this, we capitalized on intracranial electrophysiology in neurosurgical patients performing an auditory discrimination task with visual cues: specifically, we examined the interaction between prestimulus alpha oscillations, excitability, task performance, and decoded neural stimulus representations. We found that strong prestimulus oscillations in the alpha+ band (i.e., alpha and neighboring frequencies), rather than the aperiodic signal, correlated with a low excitability state, indexed by reduced broadband high-frequency activity. This state was related to slower reaction times and reduced neural stimulus encoding strength. We propose that the alpha+ rhythm modulates excitability, thereby resulting in variability in behavior and sensory representations despite identical input.
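The relationship between prestimulus alpha power and behavior can be sketched with a single-trial spectral analysis. The simulated data below (sampling rate, trial count, and the built-in alpha-to-reaction-time coupling) are invented for illustration and stand in for the intracranial recordings.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n_trials, n_samples = 250, 200, 250  # 1 s prestimulus window at 250 Hz

# Synthetic trials: ongoing 10 Hz alpha oscillation with trial-varying amplitude plus noise
t = np.arange(n_samples) / fs
alpha_amp = rng.uniform(0.5, 2.0, n_trials)
trials = (alpha_amp[:, None] * np.sin(2 * np.pi * 10 * t)
          + rng.normal(0.0, 1.0, (n_trials, n_samples)))

# Reaction times slowed on high-alpha (low-excitability) trials, plus unrelated noise
rts = 300 + 40 * alpha_amp + rng.normal(0.0, 20, n_trials)

# Prestimulus alpha power (8-12 Hz) from the FFT of each trial
freqs = np.fft.rfftfreq(n_samples, 1 / fs)
spectra = np.abs(np.fft.rfft(trials, axis=1)) ** 2
alpha_power = spectra[:, (freqs >= 8) & (freqs <= 12)].mean(axis=1)

r = np.corrcoef(alpha_power, rts)[0, 1]
print(f"alpha power vs. reaction time: r = {r:.2f}")
```

In the actual study, disentangling oscillatory alpha from the aperiodic background would additionally require spectral parameterization rather than raw band power.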


Subject(s)
Brain Waves/physiology, Photic Stimulation/methods, Adult, Auditory Perception/physiology, Brain/physiology, Discrimination (Psychology)/physiology, Drug-Resistant Epilepsy/physiopathology, Electroencephalography, Female, Humans, Longitudinal Studies, Male, Reaction Time, Visual Perception/physiology
15.
J Cogn Neurosci ; 33(8): 1549-1562, 2021 07 01.
Article in English | MEDLINE | ID: mdl-34496376

ABSTRACT

Our understanding of the sensory environment is contextualized on the basis of prior experience. Measurement of auditory ERPs provides insight into automatic processes that contextualize the relevance of sound as a function of how sequences change over time. However, task-independent exposure to sound has revealed that strong first impressions exert a lasting impact on how the relevance of sound is contextualized. Dynamic causal modeling was applied to auditory ERPs collected during presentation of alternating pattern sequences. A local regularity (a rare p = .125 vs. common p = .875 sound) alternated to create a longer-timescale regularity (sound probabilities alternated regularly, creating a predictable block length), and the longer-timescale regularity changed halfway through the sequence (the regular block length became shorter or longer). Predictions should be revised for local patterns when blocks alternated and for longer patterning when the block length changed. Dynamic causal modeling revealed an overall higher precision for the error signal to the rare sound in the first block type, consistent with the first impression. The connectivity changes in response to errors within the underlying neural network were also different for the two blocks, with significantly more revision of predictions in the arrangement that violated the first impression. Furthermore, the effects of block length change suggested that errors within the first block type exerted more influence on the updating of longer-timescale predictions. These observations support the hypothesis that automatic sequential learning creates a high-precision context (first impression) that impacts learning rates and updates to those learning rates when predictions arising from that context are violated. The results provide further evidence of automatic pattern learning over multiple timescales simultaneously, even during task-independent passive exposure to sound.


Subject(s)
Deep Learning, Acoustic Stimulation, Auditory Perception, Electroencephalography, Auditory Evoked Potentials, Humans
16.
Front Neurosci ; 15: 610978, 2021.
Article in English | MEDLINE | ID: mdl-33790730

ABSTRACT

Learning of new auditory stimuli often requires repetitive exposure to the stimulus. Fast and implicit learning of sounds presented at random times enables efficient auditory perception. However, it is unclear how such sensory encoding is processed on a neural level. We investigated neural responses that develop from passive, repetitive exposure to a specific sound in the auditory cortex of anesthetized rats, using electrocorticography. We presented a series of random sequences that were generated afresh each time, except for a specific reference sequence that remained constant and re-appeared at random times across trials. We compared induced activity amplitudes between reference and fresh sequences. Neural responses from both primary and non-primary auditory cortical regions showed significantly decreased induced activity amplitudes for reference sequences compared to fresh sequences, especially in the beta band. This is the first study showing that neural correlates of auditory pattern learning can be evoked even in anesthetized, passively listening animal models.

17.
Cereb Cortex ; 31(7): 3226-3236, 2021 06 10.
Article in English | MEDLINE | ID: mdl-33625488

ABSTRACT

In contrast to classical views of working memory (WM) maintenance, recent research investigating activity-silent neural states has demonstrated that persistent neural activity in sensory cortices is not necessary for active maintenance of information in WM. Previous studies in humans have measured putative memory representations indirectly, by decoding memory contents from neural activity evoked by a neutral impulse stimulus. However, it is unclear whether memory contents can also be decoded in different species and attentional conditions. Here, we employ a cross-species approach to test whether auditory memory contents can be decoded from electrophysiological signals recorded in different species. Awake human volunteers (N = 21) were exposed to auditory pure tone and noise burst stimuli during an auditory sensory memory task using electroencephalography. In a closely matching paradigm, anesthetized female rats (N = 5) were exposed to comparable stimuli while neural activity was recorded using electrocorticography from the auditory cortex. In both species, the acoustic frequency could be decoded from neural activity evoked by pure tones as well as neutral frozen noise burst stimuli. This finding demonstrates that memory contents can be decoded in different species and different states using homologous methods, suggesting that the mechanisms of sensory memory encoding are evolutionarily conserved across species.


Subject(s)
Acoustic Stimulation/methods, Auditory Cortex/physiology, Auditory Perception/physiology, Short-Term Memory/physiology, Adult, Animals, Electrocorticography/methods, Electroencephalography/methods, Female, Humans, Male, Middle Aged, Rats, Wistar Rats, Reaction Time/physiology, Species Specificity, Young Adult
18.
Front Hum Neurosci ; 15: 613903, 2021.
Article in English | MEDLINE | ID: mdl-33597853

ABSTRACT

Mismatch negativity (MMN) is the electroencephalographic (EEG) waveform obtained by subtracting event-related potential (ERP) responses evoked by unexpected deviant stimuli from responses evoked by expected standard stimuli. While the MMN is thought to reflect an unexpected change in an ongoing, predictable stimulus, it is unknown whether MMN responses evoked by changes in different stimulus features have different magnitudes, latencies, and topographies. The present study aimed to investigate whether MMN responses differ depending on whether a sudden stimulus change occurs in pitch, duration, location, or vowel identity. To calculate ERPs to standard and deviant stimuli, EEG signals were recorded in normal-hearing participants (N = 20; 13 males, 7 females) who listened to roving oddball sequences of artificial syllables. In the roving paradigm, any given stimulus is repeated several times to form a standard, and then suddenly replaced with a deviant stimulus that differs from the standard. Here, deviants differed from preceding standards along one of four features (pitch, duration, vowel, or interaural level difference). The feature levels were individually chosen to match behavioral discrimination performance. We identified neural activity evoked by unexpected violations along all four acoustic dimensions. Evoked responses to deviant stimuli increased in amplitude relative to the responses to standard stimuli. A univariate (channel-by-channel) analysis yielded no significant differences between MMN responses following violations of different features. However, in a multivariate analysis (pooling information from multiple EEG channels), acoustic features could be decoded from the topography of mismatch responses, although at later latencies than those typical for MMN. These results support the notion that deviant feature detection may be subserved by a different process than general mismatch detection.
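The multivariate (pooling-across-channels) decoding step can be sketched as cross-validated multinomial logistic regression on mismatch-response topographies. The channel counts, trial counts, and scalp patterns below are synthetic and invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Synthetic mismatch-response topographies: 4 deviant features x 40 trials x 64 channels,
# each feature associated with a slightly different scalp pattern.
n_feats, n_trials, n_chan = 4, 40, 64
patterns = rng.normal(0.0, 1.0, (n_feats, n_chan))
X = (patterns[:, None, :] + rng.normal(0.0, 3.0, (n_feats, n_trials, n_chan))).reshape(-1, n_chan)
y = np.repeat(np.arange(n_feats), n_trials)  # which feature was violated on each trial

# Cross-validated decoding of the violated feature, pooling all channels
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"feature decoding accuracy: {scores.mean():.2f} (chance = 0.25)")
```

Repeating this decoder at each time point of the mismatch response would yield the latency profile of feature information described above.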

19.
Curr Res Neurobiol ; 2: 100019, 2021.
Article in English | MEDLINE | ID: mdl-36246502

ABSTRACT

Continuous acoustic streams, such as speech signals, can be chunked into segments containing reoccurring patterns (e.g., words). Noninvasive recordings of neural activity in humans suggest that chunking is underpinned by low-frequency cortical entrainment to the segment presentation rate, and modulated by prior segment experience (e.g., words belonging to a familiar language). Interestingly, previous studies suggest that primates and rodents may also be able to chunk acoustic streams. Here, we test whether neural activity in the rat auditory cortex is modulated by previous segment experience. We recorded subdural responses using electrocorticography (ECoG) from the auditory cortex of 11 anesthetized rats. Prior to recording, four rats were trained to detect familiar triplets of acoustic stimuli (artificial syllables), three were passively exposed to the triplets, while another four rats had no training experience. While low-frequency neural activity peaks were observed at the syllable level, no triplet-rate peaks were observed. Notably, in trained rats (but not in passively exposed and naïve rats), familiar triplets could be decoded more accurately than unfamiliar triplets based on neural activity in the auditory cortex. These results suggest that rats process acoustic sequences, and that their cortical activity is modulated by the training experience even under subsequent anesthesia.
