Results 1 - 20 of 29
1.
Heliyon ; 10(2): e24750, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38312568

ABSTRACT

Objective: Lipreading plays a major role in communication for the hearing impaired, yet French lacked a standardised tool to assess it. Our aim was to create and validate an audio-visual (AV) version of the French Matrix Sentence Test (FrMST). Design: Video recordings were created by dubbing the existing audio files. Sample: Thirty-five young, normal-hearing participants were tested in auditory and visual modalities alone (Ao, Vo) and in AV conditions, in quiet and in noise, with open- and closed-set response formats. Results: Lipreading ability (Vo) ranged from 1% to 77% word comprehension. The absolute AV benefit was 9.25 dB SPL in quiet and 4.6 dB SNR in noise. The response format did not influence the results in the AV noise condition, except during the training phase. Lipreading ability and AV benefit were significantly correlated. Conclusions: The French video material achieved AV benefits similar to those described in the literature for AV MSTs in other languages. For clinical purposes, we suggest targeting SRT80 to avoid ceiling effects, and performing two training lists in the AV condition in noise, followed by one AV list in noise, one Ao list in noise and one Vo list, in a randomised order, in open- or closed-set format.
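To make the SRT80 recommendation concrete, here is a minimal Python sketch of an adaptive track that converges on the SNR yielding 80% intelligibility. The simulated listener (a logistic psychometric function with a -6 dB midpoint), the step rule, and all parameters are illustrative assumptions, not the FrMST procedure.

```python
import numpy as np

def track_srt80(n_sentences=20, start_snr=0.0, step=2.0, seed=0):
    """Toy adaptive track converging on the SNR giving 80% intelligibility.

    The listener is simulated with a logistic psychometric function
    (midpoint -6 dB SNR); midpoint, slope, and step rule are illustrative
    assumptions, not the FrMST procedure.
    """
    rng = np.random.default_rng(seed)
    snr = start_snr
    history = []
    for _ in range(n_sentences):
        p_correct = 1.0 / (1.0 + np.exp(-(snr + 6.0)))  # simulated listener
        score = rng.binomial(5, p_correct) / 5.0        # 5 words per sentence
        snr += step * (0.8 - score)                     # move toward 80%
        history.append(snr)
    return float(np.mean(history[-10:]))                # SRT80 estimate

print(f"estimated SRT80: {track_srt80():.1f} dB SNR")
```

In practice the SRT estimate would come from the validated matrix-test fitting procedure rather than this toy rule.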

2.
PLoS Comput Biol ; 19(11): e1011669, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38011225

ABSTRACT

Humans excel at predictively synchronizing their behavior with external rhythms, as in dance or music performance. The neural processes underlying rhythmic inference are debated: it remains unclear whether predictive perception relies on high-level generative models or whether it can be implemented locally by hard-coded intrinsic oscillators synchronizing to rhythmic input, and different underlying computational mechanisms have been proposed. Here we examine human perception of tone sequences with some temporal regularity at varying rates, but with considerable variability. Next, adopting a dynamical systems perspective, we successfully model the participants' behavior using an adaptive frequency oscillator that adjusts its spontaneous frequency based on the rate of the stimuli. This model reflects human behavior better than a canonical nonlinear oscillator and a predictive ramping model, both widely used for temporal estimation and prediction, and demonstrates that the classical distinction between absolute and relative computational mechanisms can be unified under this framework. In addition, we show that neural oscillators may constitute hard-coded physiological priors, in a Bayesian sense, that reduce temporal uncertainty and facilitate the predictive processing of noisy rhythms. Together, the results show that adaptive oscillators provide an elegant and biologically plausible means to subserve rhythmic inference, reconciling previously incompatible frameworks for temporal inferential processes.


Subject(s)
Music , Time Perception , Humans , Bayes Theorem
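As a concrete illustration of the model class described in the abstract above, here is a minimal Euler-integration sketch of an adaptive-frequency Hopf oscillator (in the spirit of Righetti et al., 2006) whose spontaneous frequency drifts toward the rate of a jittered pulse train. The parameters and the pulse coding of stimuli are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def adaptive_hopf(onsets, f0=1.0, eps=0.9, mu=1.0, dt=0.001, T=60.0):
    """Adaptive-frequency Hopf oscillator (after Righetti et al., 2006).

    The intrinsic frequency `omega` drifts toward the rate of the pulse
    input F(t); all parameters here are illustrative assumptions.
    """
    n = int(T / dt)
    F = np.zeros(n)
    idx = (np.asarray(onsets) / dt).astype(int)
    F[idx[idx < n]] = 1.0 / dt                   # unit-area input pulses
    x, y, omega = 1.0, 0.0, 2 * np.pi * f0
    for i in range(n):
        r2 = x * x + y * y
        r = np.sqrt(r2) + 1e-12
        dx = (mu - r2) * x - omega * y + eps * F[i]
        dy = (mu - r2) * y + omega * x
        domega = -eps * F[i] * y / r             # frequency adaptation term
        x, y, omega = x + dt * dx, y + dt * dy, omega + dt * domega
    return omega / (2 * np.pi)

# drive a 1 Hz oscillator with a jittered ~1.5 Hz tone sequence
rng = np.random.default_rng(1)
onsets = np.cumsum(rng.normal(1 / 1.5, 0.05, 80))
print(f"spontaneous frequency after entrainment: {adaptive_hopf(onsets):.2f} Hz")
```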
3.
Trends Hear ; 27: 23312165231156412, 2023.
Article in English | MEDLINE | ID: mdl-36794429

ABSTRACT

Age-related hearing loss, or presbycusis, is an unavoidable sensory degradation, often associated with the progressive decline of cognitive and social functions, and with dementia. It is generally considered a natural consequence of inner-ear deterioration. However, presbycusis arguably conflates a wide array of peripheral and central impairments. Although hearing rehabilitation maintains the integrity and activity of auditory networks and can prevent or revert maladaptive plasticity, the extent of such neural plastic changes in the aging brain is poorly appreciated. By reanalyzing a large-scale dataset of more than 2200 cochlear implant (CI) users and assessing the improvement in speech perception from 6 to 24 months of use, we show that, although rehabilitation improves speech understanding on average, age at implantation only minimally affects speech scores at 6 months but has a detrimental effect at 24 months post implantation. Furthermore, older subjects (>67 years old) were significantly more likely than younger patients to show degraded performance after 2 years of CI use, with the risk increasing for each year of age. Secondary analysis reveals three possible plasticity trajectories after auditory rehabilitation that could account for these disparities: Awakening, a reversal of deafness-specific changes; Counteracting, a stabilization of additional cognitive impairments; or Decline, independent detrimental processes that hearing rehabilitation cannot prevent. The role of complementary behavioral interventions should be considered to potentiate the (re)activation of auditory brain networks.


Subject(s)
Cochlear Implantation , Cochlear Implants , Deafness , Presbycusis , Speech Perception , Humans , Infant , Aged , Presbycusis/diagnosis , Deafness/rehabilitation , Hearing , Aging , Brain
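A per-year effect of age on the odds of performance decline, as reported in the abstract above, is typically estimated with a logistic regression. The sketch below uses synthetic data and an assumed effect size; it only illustrates the form of the analysis, not the paper's estimates.

```python
import numpy as np
import statsmodels.api as sm

# synthetic cohort: binary "performance declined from 6 to 24 months",
# with an assumed age effect; coefficients are NOT the paper's estimates
rng = np.random.default_rng(0)
age = rng.uniform(45, 85, 2200)
p_decline = 1.0 / (1.0 + np.exp(-0.06 * (age - 67.0)))
declined = rng.binomial(1, p_decline)

fit = sm.Logit(declined, sm.add_constant(age)).fit(disp=0)
print(f"odds ratio per year of age: {np.exp(fit.params[1]):.3f}")
```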
4.
PLoS Biol ; 20(7): e3001742, 2022 07.
Article in English | MEDLINE | ID: mdl-35905075

ABSTRACT

Categorising voices is crucial for auditory-based social interactions. A recent study by Rupp and colleagues in PLOS Biology capitalises on human intracranial recordings to describe the spatiotemporal pattern of neural activity leading to voice-selective responses in associative auditory cortex.


Subject(s)
Auditory Perception , Voice , Auditory Perception/physiology , Brain/physiology , Brain Mapping , Humans , Temporal Lobe , Voice/physiology
5.
Cereb Cortex Commun ; 3(1): tgac003, 2022.
Article in English | MEDLINE | ID: mdl-35174329

ABSTRACT

The waking brain efficiently detects emotional signals to promote survival. However, emotion detection during sleep is poorly understood and may be influenced by individual sleep characteristics or neural reactivity. Notably, dream recall frequency has been associated with stimulus reactivity during sleep, with enhanced stimulus-driven responses in high vs. low recallers. Using electroencephalography (EEG), we characterized the neural responses of healthy individuals to emotional and neutral voices, as well as control stimuli, both during wakefulness and NREM sleep. We then tested how these responses varied with individual dream recall frequency. Event-related potentials (ERPs) differed for emotional vs. neutral voices, both in wakefulness and in NREM sleep. Likewise, EEG arousals (sleep perturbations) increased selectively after emotional voices, indicating emotion reactivity. Interestingly, sleep ERP amplitude and arousals after emotional voices increased linearly with participants' dream recall frequency. Similar correlations with dream recall were observed for beta and sigma responses, but not for theta. In contrast, dream recall correlations were absent for neutral or control stimuli. Our results reveal that brain reactivity to affective salience is preserved during NREM sleep and is selectively associated with individual memory for dreams. Our findings also suggest that emotion-specific reactivity during sleep, rather than generalized alertness, may contribute to the encoding and retrieval of dreams.
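The key individual-differences result above is a monotonic relation between dream recall frequency and sleep responses to emotional voices, which a rank correlation of per-participant measures captures. The data below are synthetic stand-ins, not the study's measurements.

```python
import numpy as np
from scipy.stats import spearmanr

# synthetic stand-ins: per-participant NREM ERP amplitude to emotional
# voices and dream recall frequency (recalled dreams per week)
rng = np.random.default_rng(2)
recall = rng.uniform(0, 7, 30)
erp_amp = 0.4 * recall + rng.normal(0, 1, 30)

rho, p = spearmanr(recall, erp_amp)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```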

6.
Nat Commun ; 13(1): 338, 2022 01 17.
Article in English | MEDLINE | ID: mdl-35039498

ABSTRACT

Making accurate decisions based on unreliable sensory evidence requires cognitive inference. Dysfunction of N-methyl-D-aspartate (NMDA) receptors impairs the integration of noisy input in theoretical models of neural circuits, but whether and how this synaptic alteration impairs human inference and confidence during uncertain decisions remains unknown. Here we use placebo-controlled infusions of ketamine to characterize the causal effect of human NMDA receptor hypofunction on cognitive inference and its neural correlates. At the behavioral level, ketamine triggers inference errors and elevated decision uncertainty. At the neural level, ketamine is associated with imbalanced coding of evidence and premature response preparation in electroencephalographic (EEG) activity. Through computational modeling of inference and confidence, we propose that this specific pattern of behavioral and neural impairments reflects an early commitment to inaccurate decisions, which aims at resolving the abnormal uncertainty generated by NMDA receptor hypofunction.


Subject(s)
Decision Making , Receptors, N-Methyl-D-Aspartate/metabolism , Uncertainty , Adult , Bayes Theorem , Brain/drug effects , Brain/physiology , Cognition/drug effects , Cues , Electroencephalography , Female , Humans , Ketamine/administration & dosage , Ketamine/pharmacology , Male , Psychometrics , Task Performance and Analysis , Time Factors
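As a loose illustration (not the authors' computational model), here is a toy in which choices arise from averaged noisy evidence and an added "inference noise" corrupts the read-out, showing how a synaptic-level impairment could inflate inference errors. Every parameter is an assumption.

```python
import numpy as np

def choice_accuracy(inference_noise, n_trials=5000, n_samples=8, seed=7):
    """Toy inference: average noisy evidence samples, then choose by sign.

    `inference_noise` corrupts the read-out, a crude stand-in for NMDA
    receptor hypofunction; entirely an illustrative assumption.
    """
    rng = np.random.default_rng(seed)
    category = rng.choice([-1.0, 1.0], n_trials)
    evidence = category[:, None] * 0.5 + rng.normal(0, 1, (n_trials, n_samples))
    belief = evidence.mean(axis=1) + rng.normal(0, inference_noise, n_trials)
    return (np.sign(belief) == category).mean()

print(f"placebo-like:  {choice_accuracy(0.0):.3f}")
print(f"ketamine-like: {choice_accuracy(0.4):.3f}")
```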
7.
Nat Commun ; 13(1): 48, 2022 01 10.
Article in English | MEDLINE | ID: mdl-35013268

ABSTRACT

Reconstructing intended speech from neural activity using brain-computer interfaces holds great promise for people with severe speech production deficits. While decoding overt speech has progressed, decoding imagined speech has met limited success, mainly because the associated neural signals are weak and variable compared to overt speech, and hence difficult for learning algorithms to decode. We obtained three electrocorticography datasets from 13 patients, with electrodes implanted for epilepsy evaluation, who performed overt and imagined speech production tasks. Based on recent theories of speech neural processing, we extracted consistent and specific neural features usable for future brain-computer interfaces, and assessed their performance in discriminating speech items in articulatory, phonetic, and vocalic representation spaces. While high-frequency activity provided the best signal for overt speech, both low- and higher-frequency power and local cross-frequency dynamics contributed to imagined speech decoding, particularly in the phonetic and vocalic, i.e., perceptual, spaces. These findings show that low-frequency power and cross-frequency dynamics contain key information for imagined speech decoding.


Subject(s)
Brain-Computer Interfaces , Electrocorticography , Language , Speech , Adult , Brain/diagnostic imaging , Brain Mapping , Electrodes , Female , Humans , Imagination , Male , Middle Aged , Phonetics , Young Adult
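The decoding features described above are band-limited power estimates. Below is a minimal sketch of extracting low-frequency, beta, and high-gamma power from one synthetic electrode trace with a Welch estimate; the band edges are illustrative assumptions, not the paper's feature set.

```python
import numpy as np
from scipy.signal import welch

def band_power(sig, fs, lo, hi):
    """Mean power spectral density in [lo, hi] Hz (Welch estimate)."""
    f, pxx = welch(sig, fs=fs, nperseg=fs)
    return pxx[(f >= lo) & (f <= hi)].mean()

# one synthetic "electrode" trial: slow rhythm plus a weak high-gamma tone
fs, rng = 1000, np.random.default_rng(3)
t = np.arange(0, 2.0, 1.0 / fs)
trial = (np.sin(2 * np.pi * 4 * t)
         + 0.2 * np.sin(2 * np.pi * 120 * t)
         + rng.normal(0, 0.5, t.size))

for name, (lo, hi) in {"low (1-12 Hz)": (1, 12),
                       "beta (13-30 Hz)": (13, 30),
                       "high-gamma (70-150 Hz)": (70, 150)}.items():
    print(f"{name}: {band_power(trial, fs, lo, hi):.5f}")
```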
8.
PLoS Biol ; 18(9): e3000833, 2020 09.
Article in English | MEDLINE | ID: mdl-32898188

ABSTRACT

The phonological deficit in dyslexia is associated with altered low-gamma oscillatory function in the left auditory cortex, but a causal relationship between oscillatory function and phonemic processing has never been established. After confirming a 30 Hz deficit with electroencephalography (EEG), we applied 20 minutes of transcranial alternating current stimulation (tACS) to transiently restore this activity in adults with dyslexia. The intervention significantly improved phonological processing and reading accuracy as measured immediately after tACS. The effect occurred selectively for 30 Hz stimulation in the dyslexia group. Importantly, we observed that the focal intervention over the left auditory cortex also decreased 30 Hz activity in the right superior temporal cortex, reinstating a left dominance of the oscillatory response. These findings establish a causal role of neural oscillations in phonological processing and offer solid neurophysiological grounds for correcting low-gamma anomalies and alleviating the phonological deficit in dyslexia.


Subject(s)
Dyslexia/therapy , Reading , Speech Perception , Adolescent , Adult , Auditory Cortex/physiopathology , Auditory Cortex/radiation effects , Dyslexia/physiopathology , Electroencephalography , Evoked Potentials, Auditory/physiology , Evoked Potentials, Auditory/radiation effects , Female , Humans , Male , Middle Aged , Phonetics , Speech Perception/physiology , Speech Perception/radiation effects , Transcranial Direct Current Stimulation/methods , Verbal Behavior/physiology , Verbal Behavior/radiation effects , Young Adult
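The EEG outcome above involves 30 Hz (low-gamma) power and its left-right balance. Here is a minimal sketch of a 30 Hz power lateralisation index computed on synthetic left and right channels; the index definition and all signal parameters are illustrative choices, not the study's pipeline.

```python
import numpy as np
from scipy.signal import welch

def power_30hz(sig, fs):
    """Mean 28-32 Hz power (Welch estimate)."""
    f, pxx = welch(sig, fs=fs, nperseg=2 * fs)
    return pxx[(f >= 28) & (f <= 32)].mean()

# synthetic left/right channels with asymmetric 30 Hz activity
fs, rng = 500, np.random.default_rng(4)
t = np.arange(0, 10.0, 1.0 / fs)
left = 1.2 * np.sin(2 * np.pi * 30 * t) + rng.normal(0, 1, t.size)
right = 0.8 * np.sin(2 * np.pi * 30 * t) + rng.normal(0, 1, t.size)

L, R = power_30hz(left, fs), power_30hz(right, fs)
print(f"lateralisation index (L - R)/(L + R): {(L - R) / (L + R):+.2f}")
```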
9.
J Acoust Soc Am ; 147(6): EL540, 2020 06.
Article in English | MEDLINE | ID: mdl-32611175

ABSTRACT

One way music is thought to convey emotion is by mimicking acoustic features of affective human vocalizations [Juslin and Laukka (2003). Psychol. Bull. 129(5), 770-814]. Regarding fear, it has been informally noted that music for scary scenes in films frequently exhibits a "scream-like" character. Here, this proposition is formally tested. This paper reports acoustic analyses for four categories of audio stimuli: screams, non-screaming vocalizations, scream-like music, and non-scream-like music. Valence and arousal ratings were also collected. Results support the hypothesis that a key feature of human screams (roughness) is imitated by scream-like music and could potentially signal danger through both music and the voice.


Subject(s)
Music , Voice , Acoustics , Animals , Arousal , Cattle , Emotions , Humans , Male
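The "scream-like" feature tested above is roughness, i.e., amplitude modulation in roughly the 30-150 Hz range. A minimal sketch: extract the Hilbert envelope and sum modulation-spectrum energy in that range; a 70 Hz amplitude-modulated tone scores far higher than an unmodulated one. The summary measure is an illustrative simplification, not a formal roughness model.

```python
import numpy as np
from scipy.signal import hilbert, welch

def roughness_energy(audio, fs):
    """Amplitude-modulation energy in the 30-150 Hz roughness range.

    Envelope via the Hilbert transform, then a Welch spectrum of the
    demeaned envelope; the summary measure is an illustrative choice.
    """
    env = np.abs(hilbert(audio))
    f, pxx = welch(env - env.mean(), fs=fs, nperseg=min(len(env), 4 * fs))
    return pxx[(f >= 30) & (f <= 150)].sum()

# a 70 Hz amplitude-modulated tone ("scream-like") vs an unmodulated tone
fs = 16000
t = np.arange(0, 1.0, 1.0 / fs)
carrier = np.sin(2 * np.pi * 500 * t)
am_tone = (1 + 0.8 * np.sin(2 * np.pi * 70 * t)) * carrier
print(f"modulated:   {roughness_energy(am_tone, fs):.4f}")
print(f"unmodulated: {roughness_energy(carrier, fs):.6f}")
```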
10.
Camb Q Healthc Ethics ; 28(4): 657-670, 2019 10.
Article in English | MEDLINE | ID: mdl-31475659

ABSTRACT

Neuroprosthetic speech devices are an emerging technology that can offer the possibility of communication to those who are unable to speak. Patients with 'locked-in syndrome,' aphasia, or other such pathologies can use covert speech (vividly imagining saying something without actual vocalization) to trigger neurally controlled systems capable of synthesizing the speech they would have spoken, but for their impairment. We provide an analysis of the mechanisms and outputs involved in speech mediated by neuroprosthetic devices. This analysis provides a framework for accounting for the ethical significance of the accuracy, control, and pragmatic dimensions of prosthesis-mediated speech. We first examine what it means for the output of the device to be accurate, drawing a distinction between technical accuracy on the one hand and semantic accuracy on the other; these are conceptual notions of accuracy. Both technical and semantic accuracy of the device will be necessary (but not yet sufficient) for the user to have sufficient control over the device. Sufficient control is an ethical consideration: we place high value on being able to express ourselves when we want and how we want. Sufficient control of a neural speech prosthesis requires that a speaker can reliably use their speech apparatus as they want to, and can expect their speech to authentically represent them. We draw a distinction between two relevant features that bear on the question of whether the user has sufficient control: the voluntariness of the speech and the authenticity of the speech. These can come apart: the user might involuntarily produce an authentic output (perhaps revealing private thoughts) or might voluntarily produce an inauthentic output (e.g., when the output is not semantically accurate). Finally, we consider the role of the interlocutor in interpreting the content and purpose of the communication. These three ethical dimensions raise philosophical questions about the nature of speech, the level of control required for communicative accuracy, and the nature of 'accuracy' with respect to both natural and prosthesis-mediated speech.


Subject(s)
Communication Aids for Disabled/ethics , Communication Aids for Disabled/standards , Neural Prostheses , Speech, Alaryngeal , Brain-Computer Interfaces/ethics , Brain-Computer Interfaces/standards , Electroencephalography , Humans , Neural Prostheses/ethics , Semantics
11.
Neurosci Biobehav Rev ; 107: 136-142, 2019 12.
Article in English | MEDLINE | ID: mdl-31518638

ABSTRACT

In the motor cortex, beta oscillations (∼12-30 Hz) are generally considered a principal rhythm contributing to movement planning and execution. Beta oscillations cohabit and dynamically interact with slow delta oscillations (0.5-4 Hz), but the role of delta oscillations and the subordinate relationship between these rhythms in the perception-action loop remains unclear. Here, we review evidence that motor delta oscillations shape the dynamics of motor behaviors and sensorimotor processes, in particular during auditory perception. We describe the functional coupling between delta and beta oscillations in the motor cortex during spontaneous and planned motor acts. In an active sensing framework, perception is strongly shaped by motor activity, in particular in the delta band, which imposes temporal constraints on the sampling of sensory information. By encoding temporal contextual information, delta oscillations modulate auditory processing and impact behavioral outcomes. Finally, we consider the contribution of motor delta oscillations in the perceptual analysis of speech signals, providing a contextual temporal frame to optimize the parsing and processing of slow linguistic information.


Subject(s)
Auditory Perception/physiology , Delta Rhythm/physiology , Motor Cortex/physiology , Speech Perception/physiology , Acoustic Stimulation , Humans , Speech
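The delta-beta coupling discussed in this review is commonly quantified as phase-amplitude coupling. Here is a minimal mean-vector-length sketch on a synthetic signal with beta bursts locked to delta peaks; the band edges and the index are standard but illustrative choices, not drawn from the reviewed studies.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def pac_mvl(sig, fs, phase_band=(0.5, 4.0), amp_band=(12.0, 30.0)):
    """Delta-phase / beta-amplitude coupling (mean vector length index)."""
    def bandpass(lo, hi):
        b, a = butter(3, [lo, hi], btype="bandpass", fs=fs)
        return filtfilt(b, a, sig)
    phase = np.angle(hilbert(bandpass(*phase_band)))
    amp = np.abs(hilbert(bandpass(*amp_band)))
    return np.abs((amp * np.exp(1j * phase)).mean()) / amp.mean()

# synthetic motor-like trace: beta bursts locked to delta peaks
fs, rng = 250, np.random.default_rng(5)
t = np.arange(0, 20.0, 1.0 / fs)
delta = np.sin(2 * np.pi * 2 * t)
sig = delta + (delta > 0.5) * np.sin(2 * np.pi * 20 * t) + rng.normal(0, 0.3, t.size)
print(f"PAC (mean vector length): {pac_mvl(sig, fs):.3f}")
```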
12.
Nat Commun ; 10(1): 3671, 2019 08 14.
Article in English | MEDLINE | ID: mdl-31413319

ABSTRACT

Being able to produce sounds that capture attention and elicit rapid reactions is the prime goal of communication. One strategy, exploited by alarm signals, consists in emitting fast but perceptible amplitude modulations in the roughness range (30-150 Hz). Here, we investigate the perceptual and neural mechanisms underlying aversion to such temporally salient sounds. By measuring subjective aversion to repetitive acoustic transients, we identify a nonlinear pattern of aversion restricted to the roughness range. Using human intracranial recordings, we show that rough sounds do not merely affect local auditory processes but instead synchronise large-scale, supramodal, salience-related networks in a steady-state, sustained manner. Rough sounds synchronise activity throughout superior temporal regions, subcortical and cortical limbic areas, and the frontal cortex, a network classically involved in aversion processing. This pattern correlates with subjective aversion in all these regions, consistent with the hypothesis that roughness enhances auditory aversion through spreading of neural synchronisation.


Subject(s)
Attention , Auditory Cortex/physiology , Auditory Perception/physiology , Sound , Acoustic Stimulation , Acoustics , Adolescent , Adult , Auditory Pathways/physiology , Drug Resistant Epilepsy/surgery , Electrocorticography , Epilepsies, Partial/surgery , Female , Humans , Male , Time Factors , Young Adult
13.
Neuropsychologia ; 131: 9-24, 2019 08.
Article in English | MEDLINE | ID: mdl-31158367

ABSTRACT

The amygdala is crucially implicated in processing emotional information from various sensory modalities. However, there is a dearth of knowledge concerning the integration and relative time-course of its responses across different channels, i.e., for auditory, visual, and audiovisual input. Functional neuroimaging data in humans point to a possible role of this region in the multimodal integration of emotional signals, but direct evidence for anatomical and temporal overlap of unisensory- and multisensory-evoked responses in the amygdala is still lacking. We recorded event-related potentials (ERPs) and oscillatory activity from 9 amygdalae using intracranial electroencephalography (iEEG) in patients prior to epilepsy surgery, and compared electrophysiological responses to fearful, happy, or neutral stimuli presented either as voices alone, faces alone, or voices and faces delivered simultaneously. Results showed differential amygdala responses to fearful stimuli, in comparison to neutral, reaching significance 100-200 ms post-onset for auditory, visual, and audiovisual stimuli. At later latencies, ∼400 ms post-onset, the amygdala response to audiovisual information was also amplified in comparison to auditory or visual stimuli alone. Importantly, however, we found no evidence for either super- or subadditivity effects in any of the bimodal responses. These results suggest, first, that emotion processing in the amygdala occurs at globally similar early stages of perceptual processing for auditory, visual, and audiovisual inputs; second, that overall larger responses to multisensory information occur at later stages only; and third, that the underlying mechanism of this multisensory gain may reflect a purely additive response to concomitant visual and auditory inputs. Our findings provide novel insights into emotion processing across the sensory pathways and their convergence within the limbic system.


Subject(s)
Amygdala/physiology , Emotions/physiology , Evoked Potentials/physiology , Acoustic Stimulation , Adolescent , Adult , Auditory Perception/physiology , Electrocorticography , Female , Humans , Male , Middle Aged , Photic Stimulation , Reaction Time/physiology , Visual Perception/physiology , Young Adult
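The super/subadditivity question above reduces to comparing the audiovisual response with the sum of the unisensory responses. A minimal paired-test sketch on synthetic per-trial amplitudes, which are additive by construction; the numbers are illustrative, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_rel

# synthetic per-trial response amplitudes for auditory (A), visual (V),
# and audiovisual (AV) conditions; additive by construction here
rng = np.random.default_rng(6)
n = 40
a = rng.normal(2.0, 0.8, n)
v = rng.normal(1.5, 0.8, n)
av = a + v + rng.normal(0.0, 0.8, n)   # no super/subadditive term

# super/subadditivity test: does AV differ from the A + V sum?
tval, p = ttest_rel(av, a + v)
print(f"AV vs (A + V): t = {tval:.2f}, p = {p:.3f}")
```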
14.
Neuron ; 100(5): 1022-1024, 2018 12 05.
Article in English | MEDLINE | ID: mdl-30521776

ABSTRACT

Predictive coding and neural oscillations are two descriptive levels of brain functioning whose overlap is not yet understood. Chao et al. (2018) now show that hierarchical predictive coding is instantiated by asymmetric information channeling in the γ and α/β oscillatory ranges.


Subject(s)
Brain , Primates , Animals
15.
Trends Cogn Sci ; 22(10): 870-882, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30266147

ABSTRACT

The ability to predict when something will happen facilitates sensory processing and the ensuing computations. Building on the observation that neural activity entrains to periodic stimulation, leading neurophysiological models imply that temporal predictions rely on oscillatory entrainment. Although they provide a sufficient solution to predict periodic regularities, these models are challenged by a series of findings that question their suitability to account for temporal predictions based on aperiodic regularities. Aiming for a more comprehensive model of how the brain anticipates 'when' in auditory contexts, we emphasize the capacity of motor and higher-order top-down systems to prepare sensory processing in a proactive and temporally flexible manner. Focusing on speech processing, we illustrate how this framework leads to new hypotheses.


Subject(s)
Anticipation, Psychological/physiology , Auditory Perception/physiology , Brain Waves/physiology , Time Factors , Time Perception/physiology , Humans
16.
Physiol Behav ; 193(Pt A): 43-54, 2018 09 01.
Article in English | MEDLINE | ID: mdl-29730041

ABSTRACT

Crying is the principal means by which newborn infants shape parental behavior to meet their needs. While this mechanism can be highly effective, infant crying can also be an aversive stimulus that leads to parental frustration and even abuse. Fathers have recently become more involved in direct caregiving activities in modern, developed nations, and fathers are more likely than mothers to physically abuse infants. In this study, we attempt to explain variation in the neural response to infant crying among human fathers, with the hope of identifying factors that are associated with a more or less sensitive response. We imaged brain function in 39 first-time fathers of newborn infants as they listened to their own infant's cry, a standardized unknown infant's cry, and auditory control stimuli, and evaluated whether these neural responses were correlated with measured characteristics of fathers and infants that were hypothesized to modulate them. Fathers also provided subjective ratings of each cry stimulus on multiple dimensions. Fathers showed widespread activation to both own and unknown infant cries in neural systems involved in empathy and approach motivation. There was no significant difference in the neural response to the own vs. the unknown infant cry, and many fathers were unable to distinguish between the two cries. Comparison of these results with previous studies in mothers revealed a high degree of similarity between first-time fathers and first-time mothers in the pattern of neural activation to newborn infant cries. Further comparisons suggested that younger infant age was associated with stronger paternal neural responses, perhaps due to hormonal or novelty effects. In our sample, older fathers found infant cries less aversive and had an attenuated response to infant crying in both the dorsal anterior cingulate cortex (dACC) and the anterior insula, suggesting that compared with younger fathers, older fathers may be better able to avoid the distress associated with empathic over-arousal in response to infant cries. A principal components analysis revealed that fathers with more negative emotional reactions to the unknown infant cry showed decreased activation in the thalamus and caudate nucleus, regions expected to promote positive parental behaviors, as well as increased activation in the hypothalamus and dorsal ACC, again suggesting that empathic over-arousal might result in negative emotional reactions to infant crying. In sum, our findings suggest that infant age, paternal age, and paternal emotional reactions to infant crying all modulate the neural response of fathers to infant crying. By identifying neural correlates of variation in paternal subjective reactions to infant crying, these findings help lay the groundwork for evaluating the effectiveness of interventions designed to increase paternal sensitivity and compassion.


Subject(s)
Auditory Perception/physiology , Brain/diagnostic imaging , Brain/physiology , Crying , Parent-Child Relations , Paternal Behavior/physiology , Adult , Aging/physiology , Aging/psychology , Brain Mapping , Emotions/physiology , Female , Humans , Individuality , Infant , Infant, Newborn , Magnetic Resonance Imaging , Male , Neural Pathways/diagnostic imaging , Neural Pathways/physiology , Paternal Behavior/psychology , Pattern Recognition, Physiological/physiology , Social Perception , Testosterone/metabolism , Young Adult
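The principal components analysis mentioned above summarizes fathers' multi-dimensional cry ratings into a few scores that can then serve as regressors for brain responses. A minimal sketch with synthetic ratings; the number and names of the rating dimensions are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# synthetic ratings of a cry stimulus on 5 assumed dimensions
# (e.g., aversive, urgent, sad, arousing, pitiful), one row per father
rng = np.random.default_rng(7)
ratings = rng.normal(size=(39, 5))

pca = PCA(n_components=2).fit(ratings)
scores = pca.transform(ratings)          # per-father component scores,
print(pca.explained_variance_ratio_)     # usable as neuroimaging regressors
```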
17.
Commun Integr Biol ; 10(5-6): e1349583, 2017.
Article in English | MEDLINE | ID: mdl-29260797

ABSTRACT

The ability to precisely anticipate the timing of upcoming events at the time-scale of seconds is essential to predict objects' trajectories or to select relevant sensory information. What neurophysiological mechanism underlies the temporal precision in anticipating the occurrence of events? In a recent article, we demonstrated that the sensori-motor system predictively controls neural oscillations in time to optimize sensory selection. However, whether and how the same oscillatory processes can be used to keep track of elapsing time and evaluate short durations remains unclear. Here, we test the hypothesis that the brain tracks durations by converting (external, objective) elapsing time into an (internal, subjective) oscillatory phase-angle. To do so, we measured magnetoencephalographic oscillatory activity while participants performed a delayed-target detection task. In the delayed condition, we observe that trials perceived as longer are associated with faster delta-band oscillations. This suggests that the subjective indexing of time is reflected in the range of phase-angles covered by delta oscillations during the pre-stimulus period. This result provides new insights into how we predict and evaluate temporal structure and supports models in which the active entrainment of sensori-motor oscillatory dynamics is exploited to track elapsing time.
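The hypothesis above maps objective elapsing time onto a delta-band phase-angle. Below is a minimal sketch of reading out the instantaneous delta phase at an event time from a band-passed, Hilbert-transformed trace; the band edges and filter order are illustrative assumptions, not the study's analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def delta_phase_at(sig, fs, t_event):
    """Instantaneous delta-band (1-3 Hz) phase angle at `t_event` seconds."""
    b, a = butter(3, [1.0, 3.0], btype="bandpass", fs=fs)
    phase = np.angle(hilbert(filtfilt(b, a, sig)))
    return phase[int(t_event * fs)]

# a 1.5 Hz "delta" trace: later events map onto larger phase angles
fs = 250
t = np.arange(0, 4.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 1.5 * t)
for te in (2.0, 2.2):
    print(f"t = {te:.1f} s -> phase = {delta_phase_at(sig, fs, te):+.2f} rad")
```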

18.
J Neurosci ; 37(33): 7930-7938, 2017 08 16.
Article in English | MEDLINE | ID: mdl-28729443

ABSTRACT

Recent psychophysics data suggest that speech perception is not limited by the capacity of the auditory system to encode fast acoustic variations through neural γ activity, but rather by the time given to the brain to decode them. Whether the decoding process is bounded by the capacity of the θ rhythm to follow syllabic rhythms in speech, or constrained by a more endogenous top-down mechanism, e.g., involving β activity, is unknown. We addressed the dynamics of auditory decoding in speech comprehension by challenging syllable tracking and speech decoding with comprehensible and incomprehensible time-compressed auditory sentences. We recorded EEG in human participants and found that neural activity in both the θ and γ ranges was sensitive to the syllabic rate. Phase patterns of slow neural activity consistently followed the syllabic rate (4-14 Hz), even when this rate went beyond the classical θ range (4-8 Hz). The power of θ activity increased linearly with the syllabic rate but showed no sensitivity to comprehension. Conversely, the power of β (14-21 Hz) activity was insensitive to the syllabic rate, yet reflected comprehension on a single-trial basis. We found different long-range dynamics for θ and β activity, with β activity building up over time as more contextual information becomes available. This is consistent with the roles of θ and β activity in stimulus-driven versus endogenous mechanisms. These data show that speech comprehension is constrained by concurrent stimulus-driven θ and low-γ activity, and by endogenous β activity, but not primarily by the capacity of θ activity to track the syllabic rhythm.

SIGNIFICANCE STATEMENT Speech comprehension partly depends on the ability of the auditory cortex to track syllable boundaries with θ-range neural oscillations. Comprehension may therefore drop when speech is accelerated because θ oscillations can no longer follow the syllabic rate. Here, we presented subjects with comprehensible and incomprehensible accelerated speech, and show that neural phase patterns in the θ band consistently reflect the syllabic rate, even when speech becomes too fast to be intelligible. The drop in comprehension, however, is signaled by a significant decrease in the power of low-β oscillations (14-21 Hz). These data suggest that speech comprehension is not limited by the capacity of θ oscillations to adapt to the syllabic rate, but by an endogenous decoding process.


Subject(s)
Acoustic Stimulation/methods , Auditory Cortex/physiology , Beta Rhythm/physiology , Comprehension/physiology , Speech Perception/physiology , Theta Rhythm/physiology , Adult , Electroencephalography/methods , Female , Humans , Male , Random Allocation , Speech/physiology , Time Factors , Young Adult
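Syllable tracking of the kind reported above is often quantified as inter-trial phase coherence (ITC) at the syllabic rate. A minimal sketch on synthetic trials phase-locked to a 9-syllable/s stream; the bandwidth around the rate and all signal parameters are assumptions, not the paper's pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def itc_at_rate(trials, fs, rate, half_bw=2.0):
    """Inter-trial phase coherence in a band centred on the syllabic rate."""
    b, a = butter(3, [max(rate - half_bw, 0.5), rate + half_bw],
                  btype="bandpass", fs=fs)
    phases = np.angle(hilbert(filtfilt(b, a, trials, axis=1)))
    return np.abs(np.exp(1j * phases).mean(axis=0)).mean()

# synthetic EEG trials phase-locked to a 9-syllable/s stream
fs, rate, rng = 250, 9.0, np.random.default_rng(8)
t = np.arange(0, 2.0, 1.0 / fs)
trials = np.sin(2 * np.pi * rate * t) + rng.normal(0, 1.0, (30, t.size))
print(f"ITC at {rate:g} Hz: {itc_at_rate(trials, fs, rate):.2f}")
```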
20.
J Neurosci ; 36(8): 2342-7, 2016 Feb 24.
Article in English | MEDLINE | ID: mdl-26911682

ABSTRACT

Predicting not only what will happen, but also when it will happen is extremely helpful for optimizing perception and action. Temporal predictions driven by periodic stimulation increase perceptual sensitivity and reduce response latencies. At the neurophysiological level, a single mechanism has been proposed to mediate this twofold behavioral improvement: the rhythmic entrainment of slow cortical oscillations to the stimulation rate. However, temporal regularities can occur in aperiodic contexts, suggesting that temporal predictions per se may be dissociable from entrainment to periodic sensory streams. We investigated this possibility in two behavioral experiments, asking human participants to detect near-threshold auditory tones embedded in streams whose temporal and spectral properties were manipulated. While our findings confirm that periodic stimulation reduces response latencies, in agreement with the hypothesis of a stimulus-driven entrainment of neural excitability, they further reveal that this motor facilitation can be dissociated from the enhancement of auditory sensitivity. Perceptual sensitivity improvement is unaffected by the nature of temporal regularities (periodic vs aperiodic), but contingent on the co-occurrence of a fulfilled spectral prediction. Altogether, the dissociation between predictability and periodicity demonstrates that distinct mechanisms flexibly and synergistically operate to facilitate perception and action.


Subject(s)
Acoustic Stimulation/methods , Auditory Cortex/physiology , Auditory Perception/physiology , Periodicity , Photic Stimulation/methods , Reaction Time/physiology , Adolescent , Adult , Female , Forecasting , Humans , Male , Middle Aged , Time Factors , Young Adult
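The dissociation above rests on separating perceptual sensitivity from response speed; sensitivity is standardly computed as d′ from hits and false alarms. A minimal sketch with illustrative counts (not the paper's data), using a log-linear correction to avoid infinite values.

```python
import numpy as np
from scipy.stats import norm

def dprime(hits, misses, fas, crs):
    """Sensitivity d' from hit/false-alarm counts (log-linear correction)."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (fas + 0.5) / (fas + crs + 1.0)
    return norm.ppf(h) - norm.ppf(f)

# illustrative counts for periodic vs aperiodic streams (not the paper's data)
print(f"periodic:  d' = {dprime(72, 28, 12, 88):.2f}")
print(f"aperiodic: d' = {dprime(70, 30, 13, 87):.2f}")
```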