Results 1 - 20 of 210
1.
Cortex ; 134: 320-332, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33340879

ABSTRACT

Audio-motor integration is currently viewed as a predictive process in which the brain simulates upcoming sounds based on voluntary actions. This perspective does not consider how our auditory environment may trigger involuntary action in the absence of prediction. We address this issue by examining the relationship between acoustic salience and involuntary motor responses. We investigate how acoustic features in music contribute to the perception of salience, and whether those features trigger involuntary peripheral motor responses. Participants with little-to-no musical training listened to musical excerpts once while remaining still during the recording of their muscle activity with surface electromyography (sEMG), and again while they continuously rated perceived salience within the music using a slider. We show cross-correlations between 1) salience ratings and acoustic features, 2) acoustic features and spontaneous muscle activity, and 3) salience ratings and spontaneous muscle activity. Amplitude, intensity, and spectral centroid were perceived as the most salient features in music, and fluctuations in these features evoked involuntary peripheral muscle responses. Our results suggest an involuntary mechanism for audio-motor integration, which may rely on brainstem-spinal or brainstem-cerebellar-spinal pathways. Based on these results, we argue that a new framework is needed to explain the full range of human sensorimotor capabilities. This goal can be achieved by considering how predictive and reactive audio-motor integration mechanisms could operate independently or interactively to optimize human behavior.
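
The cross-correlation analyses described above reduce to a simple computation: correlate two normalized time series across a range of lags and find the lag of peak correlation. The sketch below is an illustration on synthetic signals (the variable names, the 20-sample lag, and the noise level are all invented), not the authors' pipeline.

```python
import numpy as np

def normalized_xcorr(x, y, max_lag):
    """Normalized cross-correlation of two equal-length signals
    for lags in [-max_lag, max_lag]. Returns (lags, correlations)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = np.empty(len(lags))
    for i, lag in enumerate(lags):
        if lag < 0:
            corr[i] = np.dot(x[:lag], y[-lag:]) / n
        elif lag > 0:
            corr[i] = np.dot(x[lag:], y[:-lag]) / n
        else:
            corr[i] = np.dot(x, y) / n
    return lags, corr

# Synthetic example (fabricated data): an "acoustic intensity" envelope and
# an "sEMG" envelope that follows it 20 samples later.
rng = np.random.default_rng(0)
intensity = rng.standard_normal(2000)
semg = np.roll(intensity, 20) + 0.5 * rng.standard_normal(2000)

lags, corr = normalized_xcorr(intensity, semg, max_lag=50)
best_lag = lags[np.argmax(corr)]
print(best_lag)  # -20 under this sign convention: the sEMG trails the feature by 20 samples
```

The lag of the correlation peak estimates how far the muscle response trails the acoustic feature, which is the quantity of interest in analyses like the one above.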

2.
J Parkinsons Dis ; 2020 Dec 01.
Article in English | MEDLINE | ID: mdl-33285641

ABSTRACT

BACKGROUND: It is known that music influences gait parameters in Parkinson's disease (PD). However, it remains unclear whether this effect is merely due to temporal aspects of music (rhythm and tempo) or to other musical parameters. OBJECTIVE: To examine the influence of pleasant and unpleasant music on spatiotemporal gait parameters in PD, while controlling for rhythmic aspects of the musical signal. METHODS: We measured spatiotemporal gait parameters of 18 patients suffering from mild PD (50% men, mean±SD age of 64±6 years; mean disease duration of 6±5 years; mean Unified PD Rating Scale [UPDRS] motor score of 15±7) who listened to eight different pieces of music. Music pieces varied in harmonic consonance/dissonance to create the experience of pleasant/unpleasant feelings. To measure gait parameters, we used an established analysis of spatiotemporal gait, which consists of a walkway containing pressure-receptive sensors (GAITRite®). Repeated measures analyses of variance were used to evaluate effects of auditory stimuli. In addition, linear regression was used to evaluate effects of valence on gait. RESULTS: Sensory dissonance modulated spatiotemporal and spatial gait parameters, namely velocity and stride length, while temporal gait parameters (cadence, swing duration) were not affected. In contrast, valence in music as perceived by patients was not associated with gait parameters. Motor and musical abilities did not relevantly influence the modulation of gait by auditory stimuli. CONCLUSION: Our observations suggest that dissonant music negatively affects particularly the spatial gait parameters in PD by as-yet-unknown mechanisms, putatively through increased cognitive interference that reduces attention to auditory cueing.

3.
Proc Natl Acad Sci U S A ; 117(38): 23223-23224, 2020 09 22.
Article in English | MEDLINE | ID: mdl-32967065
4.
Neurosci Biobehav Rev ; 118: 485-503, 2020 Aug 15.
Article in English | MEDLINE | ID: mdl-32810512

RESUMO

Auditory verbal hallucinations (AVH) - experienced as voice hearing in the absence of a corresponding external sound source - are a cardinal symptom of psychosis. Approximately 6-13% of healthy individuals also experience voice hearing. Despite numerous attempts at explanation, the neurofunctional mechanisms underlying AVH remain notoriously elusive. However, evidence relates AVH to mechanistic changes in the forward model. This review synthesizes behavioral and neuroimaging studies exploring the central role of cerebellar circuitry in the forward model, with a particular focus on non-verbal and verbal auditory feedback. It confirms that erratic prediction of the sensory consequences of voice and sound production is linked to impaired cerebellar function, which initiates AVH and affects higher-level cognitive functions. We propose new research directions linking the forward model to voice and sound feedback processing. We consider this review a starting point for mapping mechanisms of the forward model to the neurocognitive mechanisms underlying AVH.

5.
J Neurosci Methods ; 343: 108830, 2020 Sep 01.
Article in English | MEDLINE | ID: mdl-32603812

ABSTRACT

BACKGROUND: Researchers rely on the specified capabilities of their hardware and software even though, in reality, these capabilities are often not achieved. Considering that the number of experiments examining neural oscillations has increased steadily, easy-to-implement tools for testing the capabilities of hardware and software are necessary. NEW METHOD: We present an open-source MATLAB toolbox, the Schultz Cigarette Burn Toolbox (SCiBuT) that allows users to benchmark the capabilities of their visual display devices and align neural and behavioral responses with veridical timing of visual stimuli. Specifically, the toolbox marks the corners of the display with black or white squares to indicate the timing of the onset of static images and the timing of frame changes within videos. Using basic hardware (i.e., a photodiode, an Arduino microcontroller, and an analogue input box), the light changes in the corner of the screen can be captured and synchronized with EEG recordings and/or behavioral responses. RESULTS: We demonstrate that the SCiBuT is sensitive to framerate inconsistencies and provide examples of hardware setups that are suboptimal for measuring fine timing. Finally, we show that inconsistencies in framerate during video presentation can affect EEG oscillations. CONCLUSIONS: The SCiBuT provides tools to benchmark framerates and frame changes and to synchronize frame changes with neural and behavioral signals. This is the first open-source toolbox that can perform these functions. The SCiBuT can be freely downloaded (www.band-lab.com/scibut) and be used during experimental trials to improve the accuracy and precision of timestamps to ensure videos are presented at the intended framerate.
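
The heart of such a benchmark is recovering frame-change times from a photodiode trace and flagging over-long frame intervals. SCiBuT itself is a MATLAB toolbox; the following is a hypothetical Python analogue on a fabricated trace, with a made-up 60 Hz framerate, sampling rate, and threshold.

```python
import numpy as np

def frame_onsets(trace, threshold, fs):
    """Times (s) at which a photodiode trace crosses the threshold upward."""
    above = trace > threshold
    crossings = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return crossings / fs

# Synthetic photodiode recording at 10 kHz (fabricated): a brief white flash
# marks each frame change at a nominal 60 Hz, with one frame lasting twice
# as long (i.e., a dropped frame).
fs = 10_000
frame_s = 1 / 60
onsets_true = np.cumsum(np.full(30, frame_s))
onsets_true[15:] += frame_s              # one frame is displayed twice as long
trace = np.zeros(int(0.7 * fs))
for t in onsets_true:
    i = int(t * fs)
    trace[i:i + 20] = 1.0                # 2 ms flash per frame change

onsets = frame_onsets(trace, threshold=0.5, fs=fs)
intervals = np.diff(onsets)
dropped = np.flatnonzero(intervals > 1.5 * frame_s)
print(len(onsets), len(dropped))  # 30 detected flashes, 1 over-long interval
```

Comparing the detected intervals against the nominal frame duration is what allows framerate inconsistencies of the kind described above to be caught and timestamps to be corrected.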

6.
Cortex ; 130: 290-301, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32698087

ABSTRACT

The forward model monitors the success of sensory feedback to an action and links it to an efference copy originating in the motor system. The Readiness Potential (RP) of the electroencephalogram has been denoted as a neural signature of the efference copy. An open question is whether imagined sensory feedback works similarly to real sensory feedback. We investigated the RP to audible and imagined sounds in a button-press paradigm and assessed the role of sound complexity (vocal vs. non-vocal sound). Sensory feedback (both audible and imagined) in response to a voluntary action modulated the RP amplitude time-locked to the button press. The RP amplitude increase was larger for actions with expected sensory feedback (audible and imagined) than those without sensory feedback, and associated with N1 suppression for audible sounds. Further, the early RP phase was increased when actions elicited an imagined vocal (self-voice) compared to non-vocal sound. Our results support the notion that sensory feedback is anticipated before voluntary actions. This is the case for both audible and imagined sensory feedback and confirms a role of overt and covert feedback in the forward model.

7.
PLoS One ; 15(6): e0233608, 2020.
Article in English | MEDLINE | ID: mdl-32497064

ABSTRACT

PURPOSE: Decades of research have explored communication in cerebrovascular diseases by focusing on formulaic expressions (e.g., "Thank you"-"You're welcome"). This category of utterances is known for engaging primarily right-hemisphere frontotemporal and bilateral subcortical neural networks, explaining why left-hemisphere stroke patients with speech-motor planning disorders often produce formulaic expressions comparatively well. The present proof-of-concept study aims to confirm that using verbal cues derived from formulaic expressions can alleviate word-onset difficulties, one major symptom in apraxia of speech. METHODS: In a cross-sectional repeated-measures design, 20 individuals with chronic post-stroke apraxia of speech were asked to produce (i) verbal cues (e.g., /guː/) and (ii) subsequent German target words (e.g., "Tanz") with critical onsets (e.g., /t/). Cues differed, most notably, in aspects of formulaicity (e.g., stereotyped prompt: /guː/, based on formulaic phrase "Guten Morgen"; unstereotyped prompt: /muː/, based on non-formulaic control word "Mutig"). Apart from systematic variation in stereotypy and communicative-pragmatic embeddedness possibly associated with holistic language processing, cues were matched for consonant-vowel structure, syllable-transition frequency, noun-verb classification, meter, and articulatory tempo. RESULTS: Statistical analyses revealed significant increases in correctly produced word onsets after verbal cues with distinct features of formulaicity (e.g., stereotyped versus unstereotyped prompts: p < 0.001), as reflected in large effect sizes (Cohen's dz ≤ 2.2). CONCLUSIONS: The current results indicate that using preserved formulaic language skills can relieve word-onset difficulties in apraxia of speech. This finding is consistent with a dynamic interplay of left perilesional and right intact language networks in post-stroke rehabilitation and may inspire new treatment strategies for individuals with apraxia of speech.


Subjects
Apraxias/etiology , Language , Speech , Stroke Rehabilitation/methods , Stroke/complications , Adult , Aged , Aged, 80 and over , Communication , Cross-Sectional Studies , Cues (Psychology) , Female , Humans , Linguistics/methods , Male , Middle Aged
8.
Sci Rep ; 10(1): 9917, 2020 06 18.
Article in English | MEDLINE | ID: mdl-32555256

ABSTRACT

Predictions of our sensory environment facilitate perception across domains. During speech perception, formal and temporal predictions may be made for phonotactic probability and syllable stress patterns, respectively, contributing to the efficient processing of speech input. The current experiment employed a passive EEG oddball paradigm to probe the neurophysiological processes underlying temporal and formal predictions simultaneously. The component of interest, the mismatch negativity (MMN), is considered a marker for experience-dependent change detection, where its timing and amplitude are indicative of the perceptual system's sensitivity to presented stimuli. We hypothesized that more predictable stimuli (i.e. high phonotactic probability and first-syllable stress) would facilitate change detection, indexed by shorter peak latencies or greater peak amplitudes of the MMN. This hypothesis was confirmed for phonotactic probability: high phonotactic probability deviants elicited an earlier MMN than low phonotactic probability deviants. We did not observe a significant modulation of the MMN by variations in syllable stress. Our findings confirm that speech perception is shaped by formal and temporal predictability. This paradigm may be useful to investigate the contribution of implicit processing of statistical regularities during (a)typical language development.

9.
Neuropsychologia ; 146: 107531, 2020 09.
Article in English | MEDLINE | ID: mdl-32553846

ABSTRACT

BACKGROUND: Auditory verbal hallucinations (AVH) are a cardinal symptom of psychosis but are also present in 6-13% of the general population. Alterations in sensory feedback processing are a likely cause of AVH, indicative of changes in the forward model. However, it is unknown whether such alterations are related to anomalies in forming an efference copy during action preparation, selective for voices, and similar along the psychosis continuum. By directly comparing psychotic and nonclinical voice hearers (NCVH), the current study specifies whether and how AVH proneness modulates both the efference copy (Readiness Potential) and sensory feedback processing for voices and tones (N1, P2) with event-related brain potentials (ERPs). METHODS: Controls with low AVH proneness (n = 15), NCVH (n = 16) and first-episode psychotic patients with AVH (n = 16) engaged in a button-press task with two types of stimuli: self-initiated and externally generated self-voices or tones during EEG recordings. RESULTS: Groups differed in sensory feedback processing of expected and actual feedback: NCVH displayed an atypically enhanced N1 to self-initiated voices, while N1 suppression was reduced in psychotic patients. P2 suppression for voices and tones was strongest in NCVH, but absent for voices in patients. Motor activity preceding the button press was reduced in NCVH and patients, specifically for sensory feedback to self-voice in NCVH. CONCLUSIONS: These findings suggest that selective changes in sensory feedback to voice are core to AVH. These changes already show in preparatory motor activity, potentially reflecting changes in forming an efference copy. The results provide partial support for continuum models of psychosis.

10.
Cognition ; 200: 104249, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32413547

ABSTRACT

Affective vocalisations such as screams and laughs can convey strong emotional content without verbal information. Previous research using morphed vocalisations (e.g. 25% fear/75% anger) has revealed categorical perception of emotion in voices, showing sudden shifts at emotion category boundaries. However, it is currently unknown how further modulation of vocalisations beyond the veridical emotion (e.g. 125% fear) affects perception. Caricatured facial expressions produce emotions that are perceived as more intense and distinctive, with faster recognition relative to the original and anti-caricatured (e.g. 75% fear) emotions, but a similar effect using vocal caricatures has not been previously examined. Furthermore, caricatures can play a key role in assessing how distinctiveness is identified, in particular by evaluating accounts of emotion perception with reference to prototypes (distance from the central stimulus) and exemplars (density of the stimulus space). Stimuli consisted of four emotions (anger, disgust, fear, and pleasure) morphed at 25% intervals between a neutral expression and each emotion from 25% to 125%, and between each pair of emotions. Emotion perception was assessed using emotion intensity ratings, valence and arousal ratings, speeded categorisation and paired similarity ratings. We report two key findings: 1) across tasks, there was a strongly linear effect of caricaturing, with caricatured emotions (125%) perceived as higher in emotion intensity and arousal, and recognised faster compared to the original emotion (100%) and anti-caricatures (25%-75%); 2) our results reveal evidence for a unique contribution of a prototype-based account in emotion recognition. We show for the first time that vocal caricature effects are comparable to those found previously with facial caricatures. 
The set of caricatured vocalisations provided here opens a promising line of research for investigating vocal affect perception and emotion-processing deficits in clinical populations.
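
The morphing percentages in this abstract follow simple linear interpolation in a feature space, with caricatures extrapolating the same line beyond 100%. A toy sketch, assuming a made-up two-dimensional acoustic feature vector (the values are illustrative, not stimulus parameters from the study):

```python
import numpy as np

def morph(neutral, emotion, percent):
    """Linear morph: 0% = neutral, 100% = the original emotion,
    >100% = caricature extrapolated beyond the original."""
    return neutral + (percent / 100.0) * (emotion - neutral)

# Made-up two-dimensional feature vectors, e.g. [mean f0 in Hz, intensity in dB].
neutral = np.array([200.0, 60.0])
fear = np.array([320.0, 72.0])

print(morph(neutral, fear, 100))  # the original emotion
print(morph(neutral, fear, 125))  # caricature: further from neutral on the same line
print(morph(neutral, fear, 75))   # anti-caricature: pulled back towards neutral
```

On this view, a 125% caricature sits 1.25 times as far from the neutral prototype as the original, which is why prototype-based accounts predict the intensity and recognition effects reported above.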

11.
Int J Psychophysiol ; 147: 193-201, 2020 01.
Article in English | MEDLINE | ID: mdl-31738953

ABSTRACT

Cognitive control is influenced by affective states and the emotional quality of the stimulus it operates on. In the present review, we address how emotional valence influences control processes, distinguish between different types of conflicts (cognitive, emotional), examine physiological correlates of cognition - emotion interactions, and discuss recent work on this interaction in multisensory contexts. We show converging evidence that positive and negative emotions differentially affect cognitive and emotional conflict processing, when the emotional stimulus dimension is or is not task-relevant. These effects are found particularly early in dynamic, multisensory stimuli as the stimulus dimensions can correctly or incorrectly predict one another, and lead to very rapid effects of emotion on cognitive control. We suggest that future research on emotion-cognition interactions should "move towards dynamics" and develop multisensory testing environments that approach real-world complexity.


Subjects
Affect/physiology , Emotional Regulation/physiology , Executive Function/physiology , Perception/physiology , Humans
12.
Int J Psychophysiol ; 147: 156-175, 2020 01.
Article in English | MEDLINE | ID: mdl-31734443

ABSTRACT

Understanding how emotion impacts cognitive control is important, as both influence adaptive behavior in complex real-life situations. Performance changes in emotion and cognitive control, as well as in their interaction, are often described in psychotic patients as well as in non-clinical participants who experience psychosis-like symptoms. These changes are linked to low motivation and limited social interaction. However, it is unclear whether these changes are driven by emotion, cognitive control, or an interaction of both. This review provides an overview of neuroimaging evidence on the potential interaction of emotion and cognitive control along the psychosis continuum. The literature confirms that over-sensitivity towards negative and lowered sensitivity towards positive emotional stimuli in tasks exploring the emotion-cognitive control interaction are associated with the severity of positive and negative symptoms in psychosis. Changes in the dynamic interplay between emotion and context-sensitive cognitive control, mediated by arousal, motivation, and reward processing, may underlie poor interpersonal communication and real-life skills in psychosis. In addition, structural and functional changes in subcortical and cortical associative brain regions (e.g., thalamus, basal ganglia, and angular gyrus) may contribute to alterations in the emotion-cognitive control interaction along the psychosis continuum. There is limited evidence on how antipsychotic medication and age at illness onset affect this interaction.


Subjects
Attention/physiology , Bipolar Disorder/physiopathology , Emotions/physiology , Executive Function/physiology , Psychotic Disorders/physiopathology , Schizophrenia/physiopathology , Humans
13.
Int J Psychol ; 55(3): 342-346, 2020 Jun.
Article in English | MEDLINE | ID: mdl-31062352

ABSTRACT

The informative value of time and temporal structure often remains neglected in cognitive assessments. However, next to information about stimulus identity, we can exploit temporal ordering principles, such as regularity, periodicity, or grouping, to generate predictions about the timing of future events. Such predictions may improve cognitive performance by optimising adaptation to dynamic stimuli. Here, we investigated the influence of temporal structure on verbal working memory by assessing immediate recall performance for aurally presented digit sequences (forward digit span) as a function of standard (1000 ms stimulus-onset-asynchronies, SOAs), short (700 ms), long (1300 ms), and mixed (700-1300 ms) stimulus timing during the presentation phase. Participants' digit spans were lower for short and mixed SOA presentation relative to standard SOAs. This confirms an impact of temporal structure on the classic "magical number seven," suggesting that working memory performance can in part be regulated through the systematic application of temporal ordering principles.


Subjects
Cognition/physiology , Memory, Short-Term/physiology , Adult , Female , Humans , Male , Young Adult
14.
Front Neurosci ; 13: 1146, 2019.
Article in English | MEDLINE | ID: mdl-31708737

ABSTRACT

It has been suggested that speech production is accomplished by an internal forward model, reducing processing activity directed to self-produced speech in the auditory cortex. The current study uses an established N1-suppression paradigm comparing self- and externally initiated natural speech sounds to answer two questions: (1) Are forward predictions generated to process complex speech sounds, such as vowels, initiated via a button press? (2) Are prediction errors regarding self-initiated deviant vowels reflected in the corresponding ERP components? Results confirm an N1-suppression in response to self-initiated speech sounds. Furthermore, our results suggest that predictions leading to the N1-suppression effect are specific, as self-initiated deviant vowels do not elicit an N1-suppression effect. Rather, self-initiated deviant vowels elicit an enhanced N2b and P3a compared to externally generated deviants, externally generated standard, or self-initiated standards, again confirming prediction specificity. Results show that prediction errors are salient in self-initiated auditory speech sounds, which may lead to more efficient error correction in speech production.

15.
Emotion ; 2019 Oct 24.
Article in English | MEDLINE | ID: mdl-31647283

ABSTRACT

The ability to recognize emotions undergoes major developmental changes from infancy to adolescence, peaking in early adulthood, and declining with aging. A life span approach to emotion recognition is lacking in the auditory domain, and it remains unclear how the speaker's and listener's ages interact in the context of decoding vocal emotions. Here, we examined age-related differences in vocal emotion recognition from childhood until older adulthood and tested for a potential own-age bias in performance. A total of 164 participants (36 children [7-11 years], 53 adolescents [12-17 years], 48 young adults [20-30 years], 27 older adults [58-82 years]) completed a forced-choice emotion categorization task with nonverbal vocalizations expressing pleasure, relief, achievement, happiness, sadness, disgust, anger, fear, surprise, and neutrality. These vocalizations were produced by 16 speakers, 4 from each age group (children [8-11 years], adolescents [14-16 years], young adults [19-23 years], older adults [60-75 years]). Accuracy in vocal emotion recognition improved from childhood to early adulthood and declined in older adults. Moreover, patterns of improvement and decline differed by emotion category: faster development for pleasure, relief, sadness, and surprise and delayed decline for fear and surprise. Vocal emotions produced by older adults were more difficult to recognize when compared to all other age groups. No evidence for an own-age bias was found, except in children. These findings support effects of both speaker and listener ages on how vocal emotions are decoded and inform current models of vocal emotion perception. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

16.
Sci Rep ; 9(1): 14439, 2019 10 08.
Article in English | MEDLINE | ID: mdl-31594966

ABSTRACT

Emotional valence is known to influence word processing dependent upon concreteness. Whereas some studies point towards stronger effects of emotion on concrete words, others claim amplified emotion effects for abstract words. We investigated the interaction of emotion and concreteness by means of fMRI and EEG in a delayed lexical decision task. Behavioral data revealed a facilitating effect of high positive and negative valence on the correct processing of abstract, but not concrete words. EEG data yielded a particularly low amplitude response of the late positive component (LPC) following concrete neutral words. This presumably indicates enhanced allocation of processing resources to abstract and emotional words at late stages of word comprehension. In fMRI, interactions between concreteness and emotion were observed within the semantic processing network: the left inferior frontal gyrus (IFG) and the left middle temporal gyrus (MTG). Higher positive or negative valence appears to facilitate semantic retrieval and selection of abstract words. Surprisingly, a reversal of this effect occurred for concrete words. This points towards enhanced semantic control for emotional concrete words compared to neutral concrete words. Our findings suggest fine-tuned integration of emotional valence and concreteness. Specifically, at late processing stages, semantic control mechanisms seem to integrate emotional cues depending on the previous progress of semantic retrieval.


Subjects
Auditory Perception , Emotions , Electroencephalography , Female , Humans , Magnetic Resonance Imaging , Male , Semantics , Vocabulary , Young Adult
17.
NPJ Parkinsons Dis ; 5: 19, 2019.
Article in English | MEDLINE | ID: mdl-31583269

ABSTRACT

Individuals with Parkinson's disease (PD) experience rhythm disorders in a number of motor tasks, such as (i) oral diadochokinesis, (ii) finger tapping, and (iii) gait. These common motor deficits may be signs of "general dysrhythmia", a central disorder spanning across effectors and tasks, and potentially sharing the same neural substrate. However, to date, little is known about the relationship between rhythm impairments across domains and effectors. To test this hypothesis, we assessed whether rhythmic disturbances in three different domains (i.e., orofacial, manual, and gait) can be related in PD. Moreover, we investigated whether rhythmic motor performance across these domains can be predicted by rhythm perception, a measure of central rhythmic processing not confounded with motor output. Twenty-two PD patients (mean age: 69.5 ± 5.44) participated in the study. They underwent neurological and neuropsychological assessments, and they performed three rhythmic motor tasks. For oral diadochokinesia, participants had to repeatedly produce a trisyllable pseudoword. For gait, they walked along a computerized walkway. For the manual task, patients had to repeatedly produce finger taps. The first two rhythmic motor tasks were unpaced, and the manual tapping task was performed both without a pacing stimulus and musically paced. Rhythm perception was also tested. We observed that rhythmic variability of motor performances (inter-syllable, inter-tap, and inter-stride time error) was related between the three functions. Moreover, rhythmic performance was predicted by rhythm perception abilities, as demonstrated with a logistic regression model. Hence, rhythm impairments in different motor domains are found to be related in PD and may be underpinned by a common impaired central rhythm mechanism, revealed by a deficit in rhythm perception. 
These results may provide a novel perspective on how to interpret the effects of rhythm-based interventions in PD, within and across motor domains.

18.
PLoS One ; 14(9): e0222420, 2019.
Article in English | MEDLINE | ID: mdl-31557168

ABSTRACT

To prepare for an impending event of unknown temporal distribution, humans internally increase the perceived probability of event onset as time elapses. This effect is termed the hazard rate of events. We tested how the neural encoding of hazard rate changes by providing human participants with prior information on temporal event probability. We recorded behavioral and electroencephalographic (EEG) data while participants listened to continuously repeating five-tone sequences, composed of four standard tones followed by a non-target deviant tone, delivered at slow (1.6 Hz) or fast (4 Hz) rates. The task was to detect a rare target tone, which equiprobably appeared at either position two, three or four of the repeating sequence. In this design, potential target position acts as a proxy for elapsed time. For participants uninformed about the target's distribution, elapsed time to uncertain target onset increased response speed, displaying a significant hazard rate effect at both slow and fast stimulus rates. However, only in fast sequences did prior information about the target's temporal distribution interact with elapsed time, suppressing the hazard rate. Importantly, in the fast, uninformed condition pre-stimulus power synchronization in the beta band (Beta 1, 15-19 Hz) predicted the hazard rate of response times. Prior information suppressed pre-stimulus power synchronization in the same band, while still significantly predicting response times. We conclude that Beta 1 power does not simply encode the hazard rate, but-more generally-internal estimates of temporal event probability based upon contextual information.
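
The hazard rate invoked above has a simple closed form: the probability that the event occurs at the current position, given that it has not occurred yet. For a target equiprobable at positions two, three, and four, as in this design (the code is an illustration, not the authors' analysis):

```python
# Hazard rate: P(event at position k | event has not occurred yet).
# The target is equiprobable (1/3) at positions 2, 3 and 4.
p = {2: 1/3, 3: 1/3, 4: 1/3}

def hazard(k, p):
    remaining = sum(prob for pos, prob in p.items() if pos >= k)
    return p.get(k, 0.0) / remaining

for k in (2, 3, 4):
    print(k, round(hazard(k, p), 3))
# 2 0.333  one of three possible positions
# 3 0.5    one of the two remaining positions
# 4 1.0    the target must occur now
```

This monotonic rise from 1/3 to certainty is why response speed increases with elapsed time when participants have no prior information about the target's distribution.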


Subjects
Brain/physiology , Time Perception/physiology , Acoustic Stimulation , Adult , Electroencephalography , Evoked Potentials/physiology , Female , Humans , Male , Probability , Young Adult
19.
Neuropsychologia ; 134: 107200, 2019 11.
Article in English | MEDLINE | ID: mdl-31557484

ABSTRACT

Sensory suppression effects observed in electroencephalography (EEG) index successful predictions of the type and timing of self-generated sensory feedback. However, it is unclear how precise the timing prediction of sensory feedback is, and how temporal delays between an action and its sensory feedback affect perception. The current study investigated how prediction errors induced by delaying tone onset times affect the processing of sensory feedback in audition. Participants listened to self-generated (via button press) or externally generated tones. Self-generated tones were presented either without or with various delays (50, 100, or 250 ms; in 30% of trials). Comparing listening to externally generated and self-generated tones resulted in action-related P50 amplitude suppression to tones presented immediately or 100 ms after the button press. Subsequent ERP responses became more sensitive to the type of delay. Whereas the comparison of actual and predicted sensory feedback (N1) tolerated temporal uncertainty up to 100 ms, P2 suppression was modulated by delay in a graded manner: suppression decreased with an increase in sensory feedback delay. Self-generated tones occurring 250 ms after the button press additionally elicited an enhanced N2 response. These findings suggest functionally dissociable processes within the forward model that are affected by the timing of sensory feedback to self-action: relative tolerance of temporal delay in the P50 and N1, confirming previous results, but increased sensitivity in the P2. Further, they indicate that temporal prediction errors are treated differently by the auditory system: only delays that occurred after a temporal integration window (∼100 ms) impact the conscious detection of altered sensory feedback.
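
Suppression effects of the kind reported here are commonly quantified as the difference in mean component amplitude between externally generated and self-initiated conditions. The sketch below illustrates that convention on fabricated single-channel epochs (the window, amplitudes, and sampling rate are invented, not the study's parameters):

```python
import numpy as np

def mean_amplitude(epochs, times, window):
    """Mean amplitude over epochs and samples within a time window (s)."""
    mask = (times >= window[0]) & (times <= window[1])
    return epochs[:, mask].mean()

# Fabricated single-channel epochs (trials x samples) at 500 Hz.
fs = 500
times = np.arange(-25, 175) / fs          # -0.05 to 0.348 s around sound onset
rng = np.random.default_rng(1)

def make_epochs(n1_amp, n_trials=40):
    # Gaussian N1-like dip centered at 100 ms, plus trial noise.
    erp = n1_amp * np.exp(-((times - 0.1) ** 2) / (2 * 0.02 ** 2))
    return erp + 0.1 * rng.standard_normal((n_trials, len(times)))

external = make_epochs(n1_amp=-4.0)       # larger N1 to externally generated tones
self_init = make_epochs(n1_amp=-2.5)      # attenuated N1 to self-initiated tones

n1_window = (0.08, 0.12)
suppression = (mean_amplitude(external, times, n1_window)
               - mean_amplitude(self_init, times, n1_window))
print(suppression < 0)  # True: the self-initiated N1 is suppressed
```

The same window-averaging logic, applied to P50, P2, or N2 windows, yields the component-by-component delay sensitivity profile described in the abstract.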


Subjects
Evoked Potentials, Auditory/physiology , Feedback, Sensory , Acoustic Stimulation , Adolescent , Adult , Anticipation, Psychological , Electroencephalography , Female , Humans , Learning , Male , Psychomotor Performance/physiology , Young Adult
20.
PLoS One ; 14(9): e0222385, 2019.
Article in English | MEDLINE | ID: mdl-31539390

ABSTRACT

OBJECTIVE: Previous research associated the left inferior frontal cortex with implicit structure learning. The present study tested patients with lesions encompassing the left inferior frontal gyrus (LIFG; including Brodmann areas 44 and 45) to further investigate this cognitive function, notably by using non-verbal material, implicit investigation methods, and by enhancing potential remaining function via dynamic attending. Patients and healthy matched controls were exposed to an artificial pitch grammar in an implicit learning paradigm to circumvent the potential influence of impaired language processing. METHODS: Patients and healthy controls listened to pitch sequences generated within a finite-state grammar (exposure phase) and then performed a categorization task on new pitch sequences (test phase). Participants were not informed about the underlying grammar in either the exposure phase or the test phase. Furthermore, the pitch structures were presented in a highly regular temporal context as the beneficial impact of temporal regularity (e.g. meter) in learning and perception has been previously reported. Based on the Dynamic Attending Theory (DAT), we hypothesized that a temporally regular context helps developing temporal expectations that, in turn, facilitate event perception, and thus benefit artificial grammar learning. RESULTS: Electroencephalography results suggest preserved artificial grammar learning of pitch structures in patients and healthy controls. For both groups, analyses of event-related potentials revealed a larger early negativity (100-200 msec post-stimulus onset) in response to ungrammatical than grammatical pitch sequence events. CONCLUSIONS: These findings suggest that (i) the LIFG does not play an exclusive role in the implicit learning of artificial pitch grammars, and (ii) the use of non-verbal material and an implicit task reveals cognitive capacities that remain intact despite lesions to the LIFG. 
These results provide grounds for training and rehabilitation: learning non-verbal grammars may support the relearning of verbal grammars.


Subjects
Frontal Lobe/injuries , Language Disorders/etiology , Learning Disabilities/etiology , Aged , Broca Area/injuries , Broca Area/physiopathology , Case-Control Studies , Cognition/physiology , Evoked Potentials/physiology , Female , Frontal Lobe/physiology , Humans , Language Disorders/physiopathology , Learning/physiology , Learning Disabilities/physiopathology , Male , Middle Aged , Prefrontal Cortex/injuries , Prefrontal Cortex/physiology