ABSTRACT
The pupil of the eye provides a rich source of information for cognitive scientists, as it can index a variety of bodily states (e.g., arousal, fatigue) and cognitive processes (e.g., attention, decision-making). As pupillometry becomes a more accessible and popular methodology, researchers have proposed a variety of techniques for analyzing pupil data. Here, we focus on time series-based, signal-to-signal approaches that enable one to relate dynamic changes in pupil size over time with dynamic changes in a stimulus time series, continuous behavioral outcome measures, or other participants' pupil traces. We first introduce pupillometry, its neural underpinnings, and the relation between pupil measurements and other oculomotor behaviors (e.g., blinks, saccades), to stress the importance of understanding what is being measured and what can be inferred from changes in pupillary activity. Next, we discuss possible pre-processing steps, and the contexts in which they may be necessary. Finally, we turn to signal-to-signal analytic techniques, including regression-based approaches, dynamic time-warping, phase clustering, detrended fluctuation analysis, and recurrence quantification analysis. Assumptions of these techniques, and examples of the scientific questions each can address, are outlined, with references to key papers and software packages. Additionally, we provide a detailed code tutorial that steps through the key examples and figures in this paper. Ultimately, we contend that the insights gained from pupillometry are constrained by the analysis techniques used, and that signal-to-signal approaches offer a means to generate novel scientific insights by taking into account understudied spectro-temporal relationships between the pupil signal and other signals of interest.
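As an illustration of the signal-to-signal family of analyses discussed above, the Python sketch below computes a dynamic time warping (DTW) distance between two synthetic pupil traces. It is a minimal, self-contained example, not the tutorial code accompanying the paper; the sampling rate, trace shapes and z-scoring step are assumptions chosen only for demonstration.

    # Minimal sketch (not the paper's tutorial code): dynamic time warping (DTW)
    # between two pupil-size time series, one of the signal-to-signal approaches
    # discussed in the abstract. Assumes both traces are already blink-interpolated
    # and sampled at the same rate.
    import numpy as np

    def dtw_distance(x, y):
        """Classic O(n*m) DTW with an absolute-difference local cost."""
        n, m = len(x), len(y)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(x[i - 1] - y[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return cost[n, m]

    # Toy example: two noisy, slightly shifted pupil dilation responses
    t = np.linspace(0, 4, 200)                    # 4 s sampled at 50 Hz (assumed)
    pupil_a = np.exp(-(t - 1.5) ** 2) + 0.05 * np.random.randn(t.size)
    pupil_b = np.exp(-(t - 1.8) ** 2) + 0.05 * np.random.randn(t.size)

    # z-score each trace so DTW compares shape rather than baseline pupil size
    za = (pupil_a - pupil_a.mean()) / pupil_a.std()
    zb = (pupil_b - pupil_b.mean()) / pupil_b.std()
    print("DTW distance:", dtw_distance(za, zb))

In practice the traces would first undergo the pre-processing steps discussed in the paper (blink interpolation, filtering) before any signal-to-signal comparison.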
Subjects
Attention, Pupil, Humans, Arousal, Blinking, Saccades
ABSTRACT
The neural mechanisms that unfold when humans form a large group defined by an overarching context, such as audiences in theater or sports, are largely unknown and unexplored. This is mainly due to the lack of a scalable system that can record brain activity from a significantly large portion of such an audience simultaneously. Although the technology for such a system has long been available, the high cost, as well as the large overhead in human resources and logistic planning, has prohibited its development. In recent years, however, reductions in technology cost and size have led to the emergence of low-cost, consumer-oriented EEG systems, developed primarily for recreational use. Here, by combining such a low-cost EEG system with other off-the-shelf hardware and tailor-made software, we developed in the lab, and tested in a cinema, a scalable EEG hyper-scanning system. The system shows robust and stable performance and achieves accurate, unambiguous alignment of the data recorded by the different EEG headsets. These characteristics, combined with short preparation time and low cost, make it an ideal candidate for recording large portions of audiences.
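The alignment step mentioned above can be illustrated with a small Python sketch: estimating the sample offset between two headsets from a shared synchronization pulse train via cross-correlation. This is a simplified, hypothetical stand-in, not the system's actual software; the sampling rate, pulse train and channel layout are assumptions.

    # Illustrative sketch (not the authors' system code): estimating the sample
    # offset between two headsets' recordings from a shared synchronization
    # channel, using cross-correlation.
    import numpy as np
    from scipy.signal import correlate, correlation_lags

    fs = 128                                   # assumed headset sampling rate (Hz)
    rng = np.random.default_rng(0)

    # Simulated sync channel: a sparse, aperiodic pulse train seen by both
    # headsets, with headset B starting 37 samples later than headset A
    pulses = (rng.random(fs * 60) < 0.01).astype(float)
    lag_true = 37
    sync_a = pulses + 0.01 * rng.standard_normal(pulses.size)
    sync_b = np.roll(pulses, lag_true) + 0.01 * rng.standard_normal(pulses.size)

    # Cross-correlate the two sync channels and read off the best-fitting offset
    xc = correlate(sync_b, sync_a, mode="full")
    lags = correlation_lags(sync_b.size, sync_a.size, mode="full")
    lag_est = lags[np.argmax(xc)]
    print(f"estimated offset: {lag_est} samples ({lag_est / fs * 1000:.1f} ms)")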
ABSTRACT
Understanding what someone says requires relating words in a sentence to one another as instructed by the grammatical rules of a language. In recent years, the neurophysiological basis for this process has become a prominent topic of discussion in cognitive neuroscience. Current proposals about the neural mechanisms of syntactic structure building converge on a key role for neural oscillations in this process, but they differ in terms of the exact function that is assigned to them. In this Perspective, we discuss two proposed functions for neural oscillations - chunking and multiscale information integration - and evaluate their merits and limitations, taking into account the fundamentally hierarchical nature of syntactic representations in natural languages. We highlight insights that provide a tangible starting point for a neurocognitive model of syntactic structure building.
Subjects
Language, Memory, Humans, Semantics
ABSTRACT
Precisely estimating event timing is essential for survival, yet temporal distortions are ubiquitous in our daily sensory experience. Here, we tested whether the relative position, duration, and distance in time of two sequentially organized events (a standard S, with constant duration, and a comparison C, with duration varying trial by trial) are causal factors in generating temporal distortions. We found that temporal distortions emerge when the first event is shorter than the second event. Importantly, a significant interaction suggests that a longer inter-stimulus interval (ISI) helps to counteract such a serial distortion effect only when the constant S is in the first position, but not when the unpredictable C is in the first position. These results imply the existence of a perceptual bias in perceiving ordered event durations, mechanistically contributing to distortion in time perception. We simulated our behavioral results with a Bayesian model and replicated the finding that participants disproportionately expand short, dynamic (unpredictable) first-position events. Our results clarify the mechanisms generating time distortions by identifying a hitherto unknown duration-dependent encoding inefficiency in human serial temporal perception, something akin to a strong prior that can be overridden for highly predictable sensory events but unfolds for unpredictable ones.
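A generic Bayesian-observer sketch of the kind of simulation mentioned above is given below in Python; it is not the authors' model. Perceived duration is taken as the posterior mean of a Gaussian prior over durations combined with a noisy sensory measurement, so noisier (less predictable) events are pulled more strongly toward the prior mean, expanding short first-position events. All parameter values are assumptions for illustration.

    # Generic Bayesian-observer sketch (not the authors' model): perceived duration
    # as the posterior mean of a Gaussian prior and one noisy sensory measurement.
    import numpy as np

    def perceived_duration(true_ms, prior_mean_ms, prior_sd_ms, sensory_sd_ms, rng):
        """Posterior mean of duration for a conjugate Gaussian prior/likelihood."""
        measurement = true_ms + rng.normal(0.0, sensory_sd_ms)
        w_prior = sensory_sd_ms**2 / (sensory_sd_ms**2 + prior_sd_ms**2)
        return w_prior * prior_mean_ms + (1.0 - w_prior) * measurement

    rng = np.random.default_rng(1)
    prior_mean, prior_sd = 400.0, 120.0    # assumed prior over recent durations (ms)

    # Predictable first event: low sensory noise; unpredictable: high sensory noise
    for label, noise in [("predictable", 20.0), ("unpredictable", 80.0)]:
        est = np.mean([perceived_duration(250.0, prior_mean, prior_sd, noise, rng)
                       for _ in range(5000)])
        print(f"{label:13s} 250 ms event -> mean perceived {est:.1f} ms")

Running the loop shows the short, unpredictable event being expanded (pulled toward the prior mean) far more than the predictable one, which is the qualitative pattern described in the abstract.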
Subjects
Perception, Humans, Bayes Theorem
ABSTRACT
Humans tend to perceptually distort (dilate or shrink) the duration of brief stimuli presented in a sequence when discriminating the duration of a second stimulus (Comparison) from the duration of a first stimulus (Standard). This type of distortion, termed the "time-order error" (TOE), is an important window into the determinants of subjective perception. We hypothesized that stimulus durations would be optimally processed, suppressing subjective distortions in serial perception, if the events to be compared fell within the boundaries of rhythmic attentive sampling (4-8 Hz, theta band). We used a two-interval forced choice (2IFC) experimental design and, in three separate experiments, tested different Standard durations: 120 ms, corresponding to an 8.33 Hz rhythmic attentive window; 160 ms, corresponding to a 6.25 Hz window; and 200 ms, corresponding to a 5 Hz window. We found that the TOE, as measured by the Constant Error metric, is sizeable for a 120-ms Standard, is reduced for a 160-ms Standard, and statistically disappears for 200-ms Standard events, confirming our hypothesis. For 120- and 160-ms Standard events, reducing TOEs required increasing the interval between the Standard and the Comparison event from sub-second (400, 800 ms) to supra-second (1600, 2000 ms) lags, suggesting that orienting attention in time while waiting for the onset of the Comparison event may work as a back-up strategy to optimize its encoding. Our results highlight the flexible use of two different attentive strategies to optimize subjective time perception.
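The Constant Error metric referred to above is commonly obtained as the difference between the point of subjective equality (PSE) and the Standard duration. The Python sketch below fits a cumulative-Gaussian psychometric function to hypothetical 2IFC data to recover a PSE and a Constant Error; it is illustrative only and uses made-up response proportions, not the study's data or analysis code.

    # Illustrative sketch: Constant Error as PSE minus Standard duration, estimated
    # by fitting a cumulative-Gaussian psychometric function to hypothetical
    # "Comparison judged longer" proportions.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    standard_ms = 200.0
    comparison_ms = np.array([140., 160., 180., 200., 220., 240., 260.])
    p_longer = np.array([0.05, 0.15, 0.30, 0.55, 0.80, 0.92, 0.98])  # made up

    def psychometric(x, pse, sigma):
        return norm.cdf(x, loc=pse, scale=sigma)

    (pse, sigma), _ = curve_fit(psychometric, comparison_ms, p_longer,
                                p0=[standard_ms, 30.0])
    constant_error = pse - standard_ms
    print(f"PSE = {pse:.1f} ms, Constant Error = {constant_error:+.1f} ms")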
Subjects
Time Perception, Attention, Auditory Perception, Humans
ABSTRACT
Our eyes move in response to stimulus statistics, reacting to surprising events and adapting to predictable ones. Cortical and subcortical pathways contribute to generating context-specific eye-movement dynamics, and oculomotor dysfunction is recognized as one of the early clinical markers of Parkinson's disease (PD). We asked whether covert computations of environmental statistics, which generate temporal expectations for a potential target, are registered by eye movements and, if so, assuming that temporal expectations rely on motor system efficiency, whether they are impaired in PD. We used a repeating tone sequence, which generates a hazard rate distribution of target probability, and analyzed the distribution of blinks when participants were waiting for the target but the target did not appear. Results show that, although PD participants tend to produce fewer and less temporally organized blink events than healthy controls, in both groups blinks became more suppressed with increasing target probability, yielding a hazard-rate effect on oculomotor inhibition. The covert generation of temporal predictions may reflect a key feature of cognitive resilience in Parkinson's disease.
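The hazard rate mentioned above is the probability that the target occurs now, given that it has not occurred yet. The Python sketch below computes it for a simple discrete case; the equiprobable target positions are an assumption chosen for illustration, not the distribution used in the study.

    # Minimal sketch (illustrative, not the study's analysis): the hazard rate of a
    # target over discrete positions in a repeating sequence.
    import numpy as np

    positions = np.array([2, 3, 4])            # assumed possible target positions
    p = np.array([1/3, 1/3, 1/3])              # assumed probability mass per position

    survival = 1.0 - np.concatenate(([0.0], np.cumsum(p)[:-1]))  # P(not yet occurred)
    hazard = p / survival
    for pos, h in zip(positions, hazard):
        print(f"position {pos}: hazard = {h:.2f}")
    # -> 0.33, 0.50, 1.00: expectancy, and hence blink suppression, grows with elapsed time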
ABSTRACT
BACKGROUND: The prevalence of orthostatic intolerance on the day of surgery is more than 50% after abdominal surgery. The impact of orthostatic intolerance on ambulation on the day of surgery has been little studied. We investigated orthostatic intolerance and walking ability after colorectal and bariatric surgery in an enhanced recovery programme. METHODS: Eighty-two patients (colorectal: n = 46, bariatric: n = 36) were included and analysed in this prospective study. Walk tests of 2 min (2-MWT) and 6 min (6-MWT) were performed before and 24 h after surgery; the 2-MWT was also performed 3 h after surgery. Orthostatic intolerance, characterised by presyncopal symptoms when rising, was recorded at the same time points. Multivariate binary logistic regressions modelling the probability of orthostatic intolerance and walking inability were performed, taking into account potential risk factors. RESULTS: The prevalence of orthostatic intolerance and walking inability 3 h after surgery was 65% and 18%, respectively. The day after surgery, patients' performance had greatly improved: approximately 20% of patients experienced orthostatic intolerance, whilst only 5% were unable to walk. Adjusted binary logistic regressions showed that age (p = .37), sex (p = .39), BMI (p = .74), duration of anaesthesia (p = .71) and type of surgery (p = .71) did not significantly influence walking ability. CONCLUSION: Our study confirms that orthostatic intolerance was frequent (~60%) 3 h after abdominal surgery but prevented a 2-MWT in only ~20% of patients. No risk factors for orthostatic intolerance or walking inability were identified.
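As a schematic of the kind of adjusted regression described above, the Python sketch below fits a binary logistic regression of orthostatic intolerance on candidate risk factors with statsmodels. The dataset is simulated and all variable names, effect sizes and the prevalence are assumptions; it is not the study's analysis code.

    # Illustrative sketch (not the study's code): a multiple-predictor binary
    # logistic regression modelling the probability of orthostatic intolerance.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 82
    df = pd.DataFrame({
        "age": rng.normal(55, 12, n),
        "female": rng.integers(0, 2, n),
        "bmi": rng.normal(30, 6, n),
        "anaesthesia_min": rng.normal(180, 45, n),
        "bariatric": rng.integers(0, 2, n),      # 1 = bariatric, 0 = colorectal
    })
    # Simulated outcome: orthostatic intolerance 3 h after surgery (~65% prevalence)
    logit_p = 0.6 + 0.01 * (df["age"] - 55) + 0.3 * df["bariatric"]
    df["oi_3h"] = rng.random(n) < 1 / (1 + np.exp(-logit_p))

    X = sm.add_constant(df[["age", "female", "bmi", "anaesthesia_min", "bariatric"]])
    model = sm.Logit(df["oi_3h"].astype(int), X).fit(disp=False)
    print(model.summary2().tables[1][["Coef.", "P>|z|"]])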
Subjects
Colorectal Neoplasms, Orthostatic Intolerance, Early Ambulation, Humans, Orthostatic Intolerance/epidemiology, Orthostatic Intolerance/etiology, Postoperative Care, Prospective Studies
ABSTRACT
Entrainment depends on sequential neural phase reset by regular stimulus onsets, a temporal parameter. Entraining to sequences of identical stimuli also entails stimulus feature predictability, but this component is not readily separable from temporal regularity. To test whether spectral regularities concur with temporal regularities in determining the strength of auditory entrainment, we devised sound sequences that varied in the conditional perceptual inferences they afforded, based on deviant sound repetition probability: strong inference (100% repetition probability: if a deviant appears, it will repeat), weak inference (75% repetition probability) and no inference (50%: a deviant may or may not repeat with equal probability). We recorded EEG data from 15 young human participants pre-attentively listening to the experimental sound sequences, delivered either isochronously or anisochronously (±20% jitter), at both delta (1.67 Hz) and theta (6.67 Hz) stimulation rates. Strong perceptual inferences significantly enhanced entrainment at either stimulation rate and produced positive correlations between the precision of the phase distribution at the onset of deviant trials and entrained power. We conclude that both spectral predictability and temporal regularity govern entrainment via neural phase control.
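The entrainment measures referred to above can be illustrated with a short Python sketch computing inter-trial phase coherence (ITC) and power at the stimulation frequency from the FFT of single-trial epochs. The epochs are simulated, and the sampling rate, trial count and signal-to-noise level are assumptions; this is not the study's pipeline.

    # Minimal sketch (not the study's pipeline): inter-trial phase coherence and
    # power at the stimulation frequency, from the FFT of single-trial EEG epochs.
    import numpy as np

    fs = 250                                   # assumed sampling rate (Hz)
    f_stim = 1.67                              # delta-rate stimulation (Hz)
    n_trials, dur_s = 60, 6.0
    t = np.arange(0, dur_s, 1 / fs)
    rng = np.random.default_rng(4)

    # Simulated epochs: weak phase-locked oscillation at f_stim buried in noise
    epochs = (0.3 * np.sin(2 * np.pi * f_stim * t)
              + rng.standard_normal((n_trials, t.size)))

    spectra = np.fft.rfft(epochs, axis=1)
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    k = np.argmin(np.abs(freqs - f_stim))      # FFT bin closest to the stimulation rate

    phase = np.angle(spectra[:, k])
    itc = np.abs(np.mean(np.exp(1j * phase)))  # 0 = random phases, 1 = perfect locking
    power = np.mean(np.abs(spectra[:, k]) ** 2)
    print(f"bin {freqs[k]:.2f} Hz: ITC = {itc:.2f}, power = {power:.1f}")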
Subjects
Auditory Perception, Electroencephalography, Acoustic Stimulation, Auditory Perception/physiology, Humans
ABSTRACT
Across languages, the speech signal is characterized by a predominant modulation of the amplitude spectrum between about 4.3 and 5.5 Hz, reflecting the production and processing of linguistic information chunks (syllables and words) every ~200 ms. Interestingly, ~200 ms is also the typical duration of eye fixations during reading. Prompted by this observation, we demonstrate that German readers sample written text at ~5 Hz. A subsequent meta-analysis of 142 studies from 14 languages replicates this result and shows that sampling frequencies vary across languages between 3.9 Hz and 5.2 Hz. This variation depends systematically on the complexity of the writing systems (character-based versus alphabetic systems and orthographic transparency). Finally, we empirically demonstrate a positive correlation between the speech spectrum and eye-movement sampling in low-skilled non-native readers, with tentative evidence from a post hoc analysis suggesting the same relationship in low-skilled native readers. On the basis of this convergent evidence, we propose that, during reading, our brain's linguistic processing systems imprint a preferred processing rate, namely the rate of spoken language production and perception, onto the oculomotor system.
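The link between fixation duration and sampling frequency is a simple reciprocal relationship; the short Python example below makes the arithmetic explicit using hypothetical fixation durations (it is not data from the study).

    # Worked example (illustrative): converting mean fixation duration during
    # reading into an oculomotor "sampling frequency".
    import numpy as np

    fixation_durations_ms = np.array([180, 195, 210, 205, 190, 220, 200])  # made up
    mean_fix_s = fixation_durations_ms.mean() / 1000.0
    sampling_hz = 1.0 / mean_fix_s
    print(f"mean fixation {mean_fix_s * 1000:.0f} ms -> sampling rate {sampling_hz:.1f} Hz")
    # ~200 ms per fixation corresponds to ~5 Hz, the rate reported for German readers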
Subjects
Eye Movements, Reading, Humans, Language, Linguistics, Speech
ABSTRACT
To prepare for an impending event of unknown temporal distribution, humans internally increase the perceived probability of event onset as time elapses. This effect is termed the hazard rate of events. We tested how the neural encoding of the hazard rate changes when human participants are provided with prior information on temporal event probability. We recorded behavioral and electroencephalographic (EEG) data while participants listened to continuously repeating five-tone sequences, composed of four standard tones followed by a non-target deviant tone, delivered at slow (1.6 Hz) or fast (4 Hz) rates. The task was to detect a rare target tone, which appeared equiprobably at position two, three or four of the repeating sequence. In this design, potential target position acts as a proxy for elapsed time. For participants uninformed about the target's distribution, elapsed time to the uncertain target onset increased response speed, displaying a significant hazard rate effect at both slow and fast stimulus rates. However, only in fast sequences did prior information about the target's temporal distribution interact with elapsed time, suppressing the hazard rate. Importantly, in the fast, uninformed condition, pre-stimulus power synchronization in the beta band (Beta 1, 15-19 Hz) predicted the hazard rate of response times. Prior information suppressed pre-stimulus power synchronization in the same band, which nevertheless still significantly predicted response times. We conclude that Beta 1 power does not simply encode the hazard rate but, more generally, internal estimates of temporal event probability based upon contextual information.
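To make the single-trial logic concrete, the Python sketch below regresses simulated response times on a hazard-rate proxy for elapsed time and on pre-stimulus Beta 1 power using ordinary least squares. All numbers are made up for illustration, and the analysis is a simplified stand-in, not the study's statistical model.

    # Illustrative sketch (not the authors' analysis): do elapsed time (hazard-rate
    # proxy) and pre-stimulus Beta 1 power predict single-trial response times?
    import numpy as np

    rng = np.random.default_rng(2)
    n_trials = 300
    position = rng.integers(2, 5, n_trials)           # target at position 2, 3 or 4
    hazard = {2: 1/3, 3: 1/2, 4: 1.0}
    hz = np.array([hazard[p] for p in position])
    beta_power = rng.normal(0.0, 1.0, n_trials)       # z-scored pre-stimulus power

    # Simulated RTs: faster with higher hazard, slower with higher beta power
    rt = 420 - 80 * hz + 15 * beta_power + rng.normal(0, 30, n_trials)

    X = np.column_stack([np.ones(n_trials), hz, beta_power])
    coefs, *_ = np.linalg.lstsq(X, rt, rcond=None)
    print(f"intercept {coefs[0]:.1f} ms, hazard slope {coefs[1]:.1f} ms, "
          f"beta slope {coefs[2]:.1f} ms")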
Subjects
Brain/physiology, Time Perception/physiology, Acoustic Stimulation, Adult, Electroencephalography, Evoked Potentials/physiology, Female, Humans, Male, Probability, Young Adult
ABSTRACT
Both time-based (when) and feature-based (what) aspects of attention facilitate behavior, so it is natural to hypothesize additive effects. We tested this conjecture by recording response behavior and electroencephalographic (EEG) data to auditory pitch changes, embedded at different time lags in a continuous sound stream. Participants reacted more rapidly to larger rather than smaller feature change magnitudes (deviancy), as well as to changes appearing after longer rather than shorter waiting times (hazard rate of response times). However, the feature and time dimensions of attention separately contributed to response speed, with no significant interaction. Notably, phase coherence at low frequencies (delta and theta bands, 1-7 Hz) predominantly reflected attention capture by feature changes, while oscillatory power at higher frequency bands, alpha (8-12 Hz) and beta (13-25 Hz), reflected the orienting of attention in time. Power and phase coherence predicted different portions of response speed variance, suggesting a division of labor in encoding sensory attention in complex auditory scenes.
Subjects
Attention/physiology, Brain/physiology, Reaction Time/physiology, Acoustic Stimulation, Adult, Auditory Perception/physiology, Electroencephalography, Female, Humans, Male, Young Adult
ABSTRACT
OBJECTIVE: Prosody comprehension deficits have been reported in major psychoses. It is still not clear whether these deficits occur at early psychosis stages. The aims of our study were to investigate a) linguistic and emotional prosody comprehension abilities in First Episode Psychosis (FEP) patients compared to healthy controls (HC); b) performance differences between non-affective (FEP-NA) and affective (FEP-A) patients; and c) the association between symptom severity and prosodic features. METHODS: A total of 208 FEP patients (156 FEP-NA and 52 FEP-A) and 77 HC were enrolled and assessed with the Italian version of the "Protocole Montréal d'Evaluation de la Communication" to evaluate linguistic and emotional prosody comprehension. Clinical variables were assessed with a comprehensive set of standardized measures. RESULTS: FEP patients displayed significant linguistic and emotional prosody deficits compared to HC, with FEP-NA patients showing greater impairment than FEP-A patients. Significant correlations between symptom severity and prosodic features in FEP patients were also found. CONCLUSIONS: Our results suggest that prosodic impairments occur at the onset of psychosis, being more prominent in FEP-NA patients and in those with severe psychopathology. These findings further support the hypothesis that aprosodia is a core feature of psychosis.
Subjects
Emotions, Psychotic Disorders/diagnosis, Psychotic Disorders/epidemiology, Speech Disorders/diagnosis, Speech Disorders/epidemiology, Adult, Comprehension/physiology, Emotions/physiology, Female, Humans, Italy/epidemiology, Language, Male, Psychotic Disorders/psychology, Speech Disorders/psychology, Young Adult
ABSTRACT
Sensory information that unfolds in time, as in speech perception, relies on efficient chunking mechanisms to yield optimally sized units for further processing. Whether two successive acoustic events receive a one-unit or a two-unit interpretation seems to depend on the fit between their temporal extent and a stipulated temporal window of integration. However, there is ongoing debate on how flexible this temporal window of integration should be, especially for the processing of speech sounds. Furthermore, there is no direct evidence of whether attention may modulate the temporal constraints on the integration window. For this reason, we here examine how different word durations, which lead to different temporal separations of sound onsets, interact with attention. In an electroencephalography (EEG) study, participants actively and passively listened to words in which word-final consonants were occasionally omitted. Words either had a natural duration or were artificially prolonged in order to increase the separation of speech sound onsets. Omission responses to incomplete speech input, originating in left temporal cortex, decreased when the critical speech sound was separated from previous sounds by more than 250 msec, i.e., when the separation was larger than the stipulated temporal window of integration (125-150 msec). Attention, on the other hand, only increased omission responses for stimuli with natural durations. We complemented the event-related potential (ERP) analyses with a frequency-domain analysis at the stimulus presentation rate. Notably, power at the stimulation frequency showed the same duration and attention effects as the omission responses. We interpret these findings against the background of existing research on temporal integration windows and further suggest that they may be accounted for within the framework of predictive coding.
Subjects
Attention/physiology, Auditory Perception/physiology, Brain/physiology, Speech Perception/physiology, Speech/physiology, Acoustic Stimulation/methods, Adolescent, Adult, Electroencephalography/methods, Evoked Potentials/physiology, Evoked Potentials, Auditory/physiology, Female, Humans, Male, Memory/physiology, Young Adult
ABSTRACT
The role of spatial frequencies (SFs) is highly debated in emotion perception, but previous work suggests the importance of low SFs for detecting emotion in faces. Furthermore, emotion perception essentially relies on the rapid integration of multimodal information from faces and voices. We used EEG to test the functional relevance of SFs in the integration of emotional and non-emotional audiovisual stimuli. While viewing dynamic face-voice pairs, participants were asked to identify auditory interjections, and the electroencephalogram was recorded. Audiovisual integration was measured as auditory facilitation, indexed by the extent of auditory N1 amplitude suppression in the audiovisual compared to an auditory-only condition. We found an interaction of SF filtering and emotion in the auditory response suppression. For neutral faces, larger N1 suppression ensued in the unfiltered and high SF conditions as compared to the low SF condition. Angry face perception led to larger N1 suppression in the low SF condition. While the results for the neutral faces indicate that perceptual quality in terms of SF content plays a major role in audiovisual integration, the results for angry faces suggest that early multisensory integration of emotional information favors low SF neural processing pathways, overruling the predictive value of the visual signal per se.
Subjects
Auditory Perception/physiology, Emotions/physiology, Evoked Potentials/physiology, Facial Expression, Facial Recognition/physiology, Psychomotor Performance/physiology, Adult, Electroencephalography, Female, Humans, Male, Young Adult
ABSTRACT
It has been shown that abstract concepts are more difficult to process and are acquired later than concrete concepts. We analysed the percentage of concrete words in the narrative lexicon of individuals with Williams Syndrome (WS) as compared to individuals with Down Syndrome (DS) and typically developing (TD) peers. The cognitive profile of WS is characterized by visual-spatial difficulties, while DS presents with predominant impairments in linguistic abilities. We predicted that if linguistic abilities are crucial to the development and use of an abstract vocabulary, DS participants should display a higher concreteness index than both Williams Syndrome and typically developing individuals. Results confirm this prediction, thus supporting the hypothesis of a crucial role of linguistic processes in abstract language acquisition. Correlation analyses suggest that a maturational link exists between the level of abstractness in narrative production and syntactic comprehension.
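The concreteness index discussed above can be approximated as the percentage of content words whose concreteness rating exceeds a cut-off. The Python sketch below shows the computation with a tiny, hypothetical rating lexicon and an assumed cut-off of 3.0; it is not the study's scoring procedure.

    # Illustrative sketch (not the study's method): percentage of concrete words in
    # a narrative, given a concreteness-rating lookup.
    import re

    # Hypothetical concreteness ratings on a 1 (abstract) - 5 (concrete) scale
    concreteness = {"dog": 4.9, "house": 4.8, "run": 4.0, "idea": 1.8,
                    "happy": 2.2, "ball": 4.9, "think": 2.1}

    def concreteness_index(text, ratings, cutoff=3.0):
        words = re.findall(r"[a-z']+", text.lower())
        rated = [w for w in words if w in ratings]
        if not rated:
            return float("nan")
        concrete = [w for w in rated if ratings[w] >= cutoff]
        return 100.0 * len(concrete) / len(rated)

    sample = "The dog saw the ball near the house and had a happy idea"
    print(f"concreteness index: {concreteness_index(sample, concreteness):.1f}%")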
Subjects
Learning, Vocabulary, Adolescent, Case-Control Studies, Child, Down Syndrome, Humans, Williams Syndrome
ABSTRACT
We report the clinical and rehabilitative follow-up of M, a female child carrying compound heterozygous pathogenic mutations in the TCTN1 gene and affected by Joubert Syndrome (JS). JS is a congenital cerebellar ataxia characterized by the "molar tooth sign" on axial MRI, a pathognomonic neuroradiological malformation involving the cerebellum and brainstem. JS presents with high phenotypic/cognitive variability, and little is known about cognitive rehabilitation programs. We describe the therapeutic settings, intensive rehabilitation targets and outcome indexes in M's cognitive development. Using a single-case, evidence-based approach, we attempt to distinguish the effectiveness of the intervention from the overall developmental trend. We assume that an adequate amount of focused, goal-directed treatment delivered in a relatively short period of time can be at least as effective as treatment provided over a longer time, while interfering much less with the child's everyday life. We conclude by discussing specific issues in cognitive development and rehabilitation in JS and, more broadly, in cerebellar malformations.