Results 1 - 20 of 41

1.
Neurobiol Lang (Camb) ; 5(1): 64-79, 2024.
Article in English | MEDLINE | ID: mdl-38645616

ABSTRACT

Many recent studies have shown that representations drawn from neural network language models are extremely effective at predicting brain responses to natural language. But why do these models work so well? One proposed explanation is that language models and brains are similar because they have the same objective: to predict upcoming words before they are perceived. This explanation is attractive because it lends support to the popular theory of predictive coding. We provide several analyses that cast doubt on this claim. First, we show that the ability to predict future words does not uniquely (or even best) explain why some representations are a better match to the brain than others. Second, we show that within a language model, representations that are best at predicting future words are strictly worse brain models than other representations. Finally, we argue in favor of an alternative explanation for the success of language models in neuroscience: These models are effective at predicting brain responses because they generally capture a wide variety of linguistic phenomena.
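The comparison this abstract describes can be illustrated with a toy analysis: for each layer of a language model, fit a ridge encoding model to brain responses and relate the layer's encoding score to its next-word-prediction loss. This is a minimal sketch, not the paper's code; all arrays are random placeholders standing in for real fMRI data, layer activations, and language-modeling losses.

```python
# Sketch: does next-word-prediction ability track brain-encoding performance?
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_words, n_voxels, n_layers, dim = 2000, 500, 12, 768

bold = rng.standard_normal((n_words, n_voxels))                   # placeholder BOLD responses
layer_features = rng.standard_normal((n_layers, n_words, dim))    # placeholder layer activations
next_word_loss = rng.uniform(3.0, 6.0, size=n_layers)             # placeholder per-layer LM loss

encoding_scores = []
for layer in range(n_layers):
    model = RidgeCV(alphas=np.logspace(-2, 4, 7))
    # mean cross-validated R^2 over voxels (averaged here for brevity)
    encoding_scores.append(cross_val_score(model, layer_features[layer], bold, cv=5).mean())

# If next-word prediction explained encoding performance, these would be strongly
# (negatively) correlated; the paper argues that they are not.
print(np.corrcoef(encoding_scores, next_word_loss)[0, 1])
```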

2.
Sci Rep ; 14(1): 9133, 2024 04 21.
Article in English | MEDLINE | ID: mdl-38644370

ABSTRACT

Multimedia is extensively used for educational purposes. However, certain types of multimedia lack proper design, which can impose a cognitive load on the user. It is therefore essential to predict cognitive load and understand how it impairs brain functioning. Participants watched a version of educational multimedia that applied Mayer's principles, followed by a version that did not, while their electroencephalogram (EEG) was recorded. They then completed a post-test and a self-reported cognitive load questionnaire. The audio envelope and word frequency were extracted from the multimedia, and temporal response functions (TRFs) were obtained using a linear encoding model. We observed that the behavioral data differed between the two groups and that the TRFs of the two multimedia versions differed, with changes in the amplitudes and latencies of both early and late components. In addition, correlations were found between the behavioral data and the amplitudes and latencies of TRF components. Cognitive load decreased participants' attention to the multimedia, and semantic processing of words occurred with a delay and at a smaller amplitude. Hence, encoding models provide insights into the temporal and spatial mapping of cognitive-load-related activity, which could help detect and reduce cognitive load in settings such as educational multimedia or training simulators.
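The core analysis here is a forward TRF: a ridge regression from time-lagged stimulus features to the EEG. Below is a minimal sketch of that step using a lagged design matrix and synthetic data; the variable names (`stimulus` as the audio envelope, `eeg` as one channel) and the data are illustrative, not the study's pipeline.

```python
# Sketch: estimate a temporal response function (TRF) with ridge regression.
import numpy as np

def lagged_design(stimulus, n_lags):
    """Stack time-lagged copies of the stimulus into a design matrix."""
    n = len(stimulus)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[: n - lag]
    return X

def fit_trf(stimulus, eeg, n_lags=64, ridge=1.0):
    """Estimate TRF weights (one per lag) by regularized least squares."""
    X = lagged_design(stimulus, n_lags)
    XtX = X.T @ X + ridge * np.eye(n_lags)
    return np.linalg.solve(XtX, X.T @ eeg)

# Synthetic example: the recovered TRF approximates the true convolution kernel.
rng = np.random.default_rng(1)
stim = rng.standard_normal(5000)
true_kernel = np.hanning(32)
eeg = np.convolve(stim, true_kernel)[:5000] + 0.5 * rng.standard_normal(5000)
trf = fit_trf(stim, eeg, n_lags=64)
```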


Subjects
Brain, Cognition, Electroencephalography, Multimedia, Humans, Cognition/physiology, Male, Female, Brain/physiology, Young Adult, Adult, Acoustic Stimulation, Linguistics, Attention/physiology
3.
Neurobiol Lang (Camb) ; 5(1): 80-106, 2024.
Article in English | MEDLINE | ID: mdl-38645624

ABSTRACT

Language neuroscience currently relies on two major experimental paradigms: controlled experiments using carefully hand-designed stimuli, and natural stimulus experiments. These approaches have complementary advantages which allow them to address distinct aspects of the neurobiology of language, but each approach also comes with drawbacks. Here we discuss a third paradigm-in silico experimentation using deep learning-based encoding models-that has been enabled by recent advances in cognitive computational neuroscience. This paradigm promises to combine the interpretability of controlled experiments with the generalizability and broad scope of natural stimulus experiments. We show four examples of simulating language neuroscience experiments in silico and then discuss both the advantages and caveats of this approach.

4.
Neurobiol Lang (Camb) ; 4(4): 611-636, 2023.
Article in English | MEDLINE | ID: mdl-38144237

ABSTRACT

A fundamental question in neurolinguistics concerns the brain regions involved in syntactic and semantic processing during speech comprehension, both at the lexical (word processing) and supra-lexical levels (sentence and discourse processing). To what extent are these regions separated or intertwined? To address this question, we introduce a novel approach exploiting neural language models to generate high-dimensional feature sets that separately encode semantic and syntactic information. More precisely, we train a lexical language model, GloVe, and a supra-lexical language model, GPT-2, on a text corpus from which we selectively removed either syntactic or semantic information. We then assess to what extent the features derived from these information-restricted models are still able to predict the fMRI time courses of humans listening to naturalistic text. Furthermore, to determine the windows of integration of brain regions involved in supra-lexical processing, we manipulate the size of contextual information provided to GPT-2. The analyses show that, while most brain regions involved in language comprehension are sensitive to both syntactic and semantic features, the relative magnitudes of these effects vary across these regions. Moreover, regions that are best fitted by semantic or syntactic features are more spatially dissociated in the left hemisphere than in the right one, and the right hemisphere shows sensitivity to longer contexts than the left. The novelty of our approach lies in the ability to control for the information encoded in the models' embeddings by manipulating the training set. These "information-restricted" models complement previous studies that used language models to probe the neural bases of language, and shed new light on its spatial organization.
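The general recipe behind this study (language-model features fed to a ridge encoding model of fMRI) can be sketched as follows. This is not the paper's information-restricted training; it simply extracts contextual embeddings from an off-the-shelf GPT-2 and fits a ridge model per voxel, with `bold` as a random placeholder for preprocessed fMRI responses aligned to the word sequence.

```python
# Sketch: language-model embeddings -> ridge encoding of fMRI responses.
import numpy as np
import torch
from transformers import GPT2Tokenizer, GPT2Model
from sklearn.linear_model import RidgeCV

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

text = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state.squeeze(0).numpy()  # (n_tokens, 768)

# Placeholder fMRI responses; a real pipeline would convolve features with an HRF
# and resample them to the scanner's acquisition times before fitting.
rng = np.random.default_rng(0)
bold = rng.standard_normal((hidden.shape[0], 100))                 # (n_samples, n_voxels)

encoder = RidgeCV(alphas=np.logspace(-1, 3, 5)).fit(hidden, bold)
predicted = encoder.predict(hidden)
```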

5.
Neuroimage ; 277: 120240, 2023 08 15.
Article in English | MEDLINE | ID: mdl-37348622

ABSTRACT

Previous research on body representation in the brain has focused on category-specific representation, using fMRI to investigate the response pattern to body stimuli in occipitotemporal cortex. However, the central question of which specific computations are performed in body-selective regions has not been addressed so far. This study used ultra-high-field fMRI and banded ridge regression to investigate the computational mechanisms of coding body images, by comparing the performance of three encoding models in predicting brain activity in occipitotemporal cortex and specifically in the extrastriate body area (EBA). Our results indicate that bodies are encoded in occipitotemporal cortex and in the EBA according to a combination of low-level visual features and postural features.


Subjects
Brain Mapping, Visual Pattern Recognition, Humans, Visual Pattern Recognition/physiology, Brain Mapping/methods, Photic Stimulation/methods, Cerebral Cortex/diagnostic imaging, Cerebral Cortex/physiology, Magnetic Resonance Imaging/methods
6.
Brain Sci ; 12(12)2022 Nov 29.
Article in English | MEDLINE | ID: mdl-36552093

ABSTRACT

Visual encoding models for functional magnetic resonance imaging (fMRI) are commonly built from deep neural networks, especially CNNs such as VGG16. However, CNNs typically use small convolution kernels (e.g., 3 × 3) for feature extraction. Although a CNN's receptive field can be enlarged by increasing network depth or by subsampling, it remains constrained by the small kernel size, leading to an insufficient receptive field. In biological studies, the population receptive field of high-level visual regions is usually three to four times larger than that of low-level visual regions, so CNNs with larger receptive fields align better with these findings. The RepLKNet model directly enlarges the convolution kernel to obtain a larger receptive field. This paper therefore proposes a mixed model for feature extraction in visual encoding models: it combines RepLKNet and VGG so that the resulting model has receptive fields of different sizes and can extract richer feature information from the image. Experimental results indicate that the mixed model achieves better encoding performance in multiple regions of the visual cortex than a traditional convolutional model, suggesting that larger receptive fields should be considered when building visual encoding models so that convolutional networks can play a more significant role in modeling visual representations.
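The "mixed" idea (concatenating features from a small-kernel and a larger-kernel backbone before the linear encoding stage) can be sketched with standard torchvision models. RepLKNet is not shipped with torchvision, so ConvNeXt-Tiny (7 × 7 depthwise kernels) stands in here purely for illustration; the image is a random placeholder.

```python
# Sketch: concatenate features from two backbones for a voxelwise linear encoder.
import torch
import torch.nn.functional as F
from torchvision import models

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
convnext = models.convnext_tiny(weights=models.ConvNeXt_Tiny_Weights.DEFAULT).features.eval()

image = torch.rand(1, 3, 224, 224)   # placeholder image batch
with torch.no_grad():
    f_small = torch.flatten(F.adaptive_avg_pool2d(vgg(image), 1), 1)       # small-kernel features
    f_large = torch.flatten(F.adaptive_avg_pool2d(convnext(image), 1), 1)  # large-kernel stand-in

mixed_features = torch.cat([f_small, f_large], dim=1)  # input to a voxelwise linear model
print(mixed_features.shape)
```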

7.
Neuroimage ; 264: 119754, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36400378

ABSTRACT

The human brain achieves visual object recognition through multiple stages of linear and nonlinear transformations operating at a millisecond scale. To predict and explain these rapid transformations, computational neuroscientists employ machine learning modeling techniques. However, state-of-the-art models require massive amounts of data to properly train, and to the present day there is a lack of vast brain datasets which extensively sample the temporal dynamics of visual object recognition. Here we collected a large and rich dataset of high temporal resolution EEG responses to images of objects on a natural background. This dataset includes 10 participants, each with 82,160 trials spanning 16,740 image conditions. Through computational modeling we established the quality of this dataset in five ways. First, we trained linearizing encoding models that successfully synthesized the EEG responses to arbitrary images. Second, we correctly identified the recorded EEG data image conditions in a zero-shot fashion, using EEG synthesized responses to hundreds of thousands of candidate image conditions. Third, we show that both the high number of conditions as well as the trial repetitions of the EEG dataset contribute to the trained models' prediction accuracy. Fourth, we built encoding models whose predictions well generalize to novel participants. Fifth, we demonstrate full end-to-end training of randomly initialized DNNs that output EEG responses for arbitrary input images. We release this dataset as a tool to foster research in visual neuroscience and computer vision.
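The zero-shot identification step described here can be illustrated with a few lines: given EEG responses synthesized by an encoding model for many candidate images, find the candidate whose synthesized response best matches a recorded response. This is a minimal sketch with random placeholder arrays, not the released dataset or code.

```python
# Sketch: zero-shot identification by correlating recorded and synthesized EEG.
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_features = 1000, 17 * 100      # e.g., channels x time points, flattened
synthesized = rng.standard_normal((n_candidates, n_features))
true_index = 42
recorded = synthesized[true_index] + 0.5 * rng.standard_normal(n_features)  # noisy "recording"

def z(a):
    """Z-score along the last axis so the dot product becomes a Pearson correlation."""
    return (a - a.mean(axis=-1, keepdims=True)) / a.std(axis=-1, keepdims=True)

correlations = z(synthesized) @ z(recorded) / n_features
print(int(np.argmax(correlations)) == true_index)   # identification succeeds in this toy case
```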


Subjects
Brain Mapping, Visual Perception, Humans, Visual Perception/physiology, Machine Learning, Brain/physiology, Electroencephalography
8.
Neuroimage ; 264: 119728, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36334814

ABSTRACT

Encoding models provide a powerful framework to identify the information represented in brain recordings. In this framework, a stimulus representation is expressed within a feature space and is used in a regularized linear regression to predict brain activity. To account for a potential complementarity of different feature spaces, a joint model is fit on multiple feature spaces simultaneously. To adapt regularization strength to each feature space, ridge regression is extended to banded ridge regression, which optimizes a different regularization hyperparameter per feature space. The present paper proposes a method to decompose over feature spaces the variance explained by a banded ridge regression model. It also describes how banded ridge regression performs a feature-space selection, effectively ignoring non-predictive and redundant feature spaces. This feature-space selection leads to better prediction accuracy and to better interpretability. Banded ridge regression is then mathematically linked to a number of other regression methods with similar feature-space selection mechanisms. Finally, several methods are proposed to address the computational challenge of fitting banded ridge regressions on large numbers of voxels and feature spaces. All implementations are released in an open-source Python package called Himalaya.
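A minimal numpy sketch of the banded ridge idea follows: one regularization strength per feature space, implemented as a block-diagonal penalty with hand-picked hyperparameters. The released package, Himalaya, additionally optimizes these hyperparameters per voxel and decomposes the explained variance; the data below are random placeholders.

```python
# Sketch: banded ridge regression with fixed per-feature-space regularization.
import numpy as np

def banded_ridge(X_list, y, alphas):
    """Solve (X'X + D) b = X'y, where D is block diagonal with alphas[i] * I
    over the columns belonging to feature space i."""
    X = np.hstack(X_list)
    penalty = np.concatenate([np.full(Xi.shape[1], a) for Xi, a in zip(X_list, alphas)])
    coefs = np.linalg.solve(X.T @ X + np.diag(penalty), X.T @ y)
    splits = np.cumsum([Xi.shape[1] for Xi in X_list])[:-1]
    return np.split(coefs, splits)   # coefficients split back per feature space

rng = np.random.default_rng(0)
X1, X2 = rng.standard_normal((500, 30)), rng.standard_normal((500, 300))
y = X1 @ rng.standard_normal((30, 10))                 # only feature space 1 is predictive
b1, b2 = banded_ridge([X1, X2], y, alphas=[1.0, 1e4])  # heavy penalty shrinks space 2
```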


Subjects
Regression Analysis, Humans, Linear Models
9.
Neuroimage ; 263: 119610, 2022 11.
Article in English | MEDLINE | ID: mdl-36064138

ABSTRACT

A deep understanding of the neural architecture of mental function should enable the accurate prediction of a specific pattern of brain activity for any psychological task, based only on the cognitive functions known to be engaged by that task. Encoding models (EMs), which predict neural responses from known features (e.g., stimulus properties), have succeeded in circumscribed domains (e.g., visual neuroscience), but implementing domain-general EMs that predict brain-wide activity for arbitrary tasks has been limited mainly by availability of datasets that 1) sufficiently span a large space of psychological functions, and 2) are sufficiently annotated with such functions to allow robust EM specification. We examine the use of EMs based on a formal specification of psychological function, to predict cortical activation patterns across a broad range of tasks. We utilized the Multi-Domain Task Battery, a dataset in which 24 subjects completed 32 ten-minute fMRI scans, switching tasks every 35 s and engaging in 44 total conditions of diverse psychological manipulations. Conditions were annotated by a group of experts using the Cognitive Atlas ontology to identify putatively engaged functions, and region-wise cognitive EMs (CEMs) were fit, for individual subjects, on neocortical responses. We found that CEMs predicted cortical activation maps of held-out tasks with high accuracy, outperforming a permutation-based null model while approaching the noise ceiling of the data, without being driven solely by either cognitive or perceptual-motor features. Hierarchical clustering on the similarity structure of CEM generalization errors revealed relationships amongst psychological functions. Spatial distributions of feature importances systematically overlapped with large-scale resting-state functional networks (RSNs), supporting the hypothesis of functional specialization within RSNs while grounding their function in an interpretable data-driven manner. Our implementation and validation of CEMs provides a proof of principle for the utility of formal ontologies in cognitive neuroscience and motivates the use of CEMs in the further testing of cognitive theories.
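The CEM logic (predicting region-wise activation of held-out conditions from binary ontology annotations) can be sketched in a few lines. The annotation matrix and activation maps below are random placeholders, not the Multi-Domain Task Battery or its Cognitive Atlas annotations.

```python
# Sketch: cognitive encoding model (CEM) evaluated on held-out conditions.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n_conditions, n_functions, n_regions = 44, 30, 400
annotations = rng.integers(0, 2, size=(n_conditions, n_functions)).astype(float)
activations = rng.standard_normal((n_conditions, n_regions))   # placeholder region-wise betas

predicted = np.zeros_like(activations)
for train_idx, test_idx in LeaveOneOut().split(annotations):
    cem = RidgeCV(alphas=np.logspace(-1, 3, 5))
    cem.fit(annotations[train_idx], activations[train_idx])
    predicted[test_idx] = cem.predict(annotations[test_idx])

# Correlate predicted and observed activation maps per held-out condition.
scores = [np.corrcoef(predicted[i], activations[i])[0, 1] for i in range(n_conditions)]
print(np.mean(scores))
```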


Subjects
Brain, Cognition, Humans, Brain/diagnostic imaging, Brain/physiology, Cognition/physiology, Brain Mapping, Magnetic Resonance Imaging
10.
Neuropsychologia ; 174: 108341, 2022 09 09.
Article in English | MEDLINE | ID: mdl-35961387

ABSTRACT

Distinct brain systems are thought to support statistical learning over different timescales. Regularities encountered during online perceptual experience can be acquired rapidly by the hippocampus. Further processing during offline consolidation can establish these regularities gradually in cortical regions, including the medial prefrontal cortex (mPFC). These mechanisms of statistical learning may be critical during spatial navigation, for which knowledge of the structure of an environment can facilitate future behavior. Rapid acquisition and prolonged retention of regularities have been investigated in isolation, but how they interact in the context of spatial navigation is unknown. We had the rare opportunity to study the brain systems underlying both rapid and gradual timescales of statistical learning using intracranial electroencephalography (iEEG) longitudinally in the same patient over a period of three weeks. As hypothesized, spatial patterns were represented in the hippocampus but not mPFC for up to one week after statistical learning and then represented in the mPFC but not hippocampus two and three weeks after statistical learning. Taken together, these findings suggest that the hippocampus may contribute to the initial extraction of regularities prior to cortical consolidation.


Subjects
Memory Consolidation, Spatial Navigation, Humans, Learning, Mental Recall, Prefrontal Cortex, Spatial Memory
11.
Neural Netw ; 154: 31-42, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35849870

ABSTRACT

Using deep neural networks (DNNs) as models of the biological brain is controversial, mainly because DNNs are difficult to interpret. Inspired by neural style transfer, we circumvented this problem by using deep features with a clear meaning: the representation of the semantic content of an image. Using encoding models and representational similarity analysis, we quantitatively showed that deep features representing the semantic content of an image mainly predicted voxel activity in the early visual areas (V1, V2, and V3), and that these features were essentially depictive but also propositional. This result is broadly in line with the core claim of grounded cognition, namely that the brain's representation of information is essentially depictive and can naturally implement symbolic functions.
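One of the two analyses named here, representational similarity analysis (RSA), can be sketched briefly: build representational dissimilarity matrices (RDMs) from model features and from voxel patterns, then correlate their condensed upper triangles. The arrays are random placeholders, not style-transfer content features or real V1 data.

```python
# Sketch: representational similarity analysis between model features and voxels.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_images = 60
content_features = rng.standard_normal((n_images, 512))    # placeholder content-layer features
v1_patterns = rng.standard_normal((n_images, 800))          # placeholder V1 voxel patterns

model_rdm = pdist(content_features, metric="correlation")   # condensed upper triangle
brain_rdm = pdist(v1_patterns, metric="correlation")
rho, p = spearmanr(model_rdm, brain_rdm)
print(rho, p)
```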


Subjects
Brain, Semantics, Brain/diagnostic imaging, Brain Mapping/methods, Magnetic Resonance Imaging/methods, Neural Networks (Computer)
12.
J Neural Eng ; 19(2)2022 03 07.
Article in English | MEDLINE | ID: mdl-35073530

ABSTRACT

Objective. Brain recordings exhibit dynamics at multiple spatiotemporal scales, which are measured with spike trains and larger-scale field potential signals. To study neural processes, it is important to identify and model causal interactions not only at a single scale of activity, but also across multiple scales, i.e. between spike trains and field potential signals. Standard causality measures are not directly applicable here because spike trains are binary-valued but field potentials are continuous-valued. It is thus important to develop computational tools to recover multiscale neural causality during behavior, assess their performance on neural datasets, and study whether modeling multiscale causalities can improve the prediction of neural signals beyond what is possible with single-scale causality. Approach. We design a multiscale model-based Granger-like causality method based on directed information and evaluate its success both in realistic biophysical spike-field simulations and in motor cortical datasets from two non-human primates (NHP) performing a motor behavior. To compute multiscale causality, we learn point-process generalized linear models that predict the spike events at a given time based on the history of both spike trains and field potential signals. We also learn linear Gaussian models that predict the field potential signals at a given time based on their own history as well as either the history of binary spike events or that of latent firing rates. Main results. We find that our method reveals the true multiscale causality network structure in biophysical simulations despite the presence of model mismatch. Further, models with the identified multiscale causalities in the NHP neural datasets lead to better prediction of both spike trains and field potential signals compared to just modeling single-scale causalities. Finally, we find that latent firing rates are better predictors of field potential signals compared with the binary spike events in the NHP datasets. Significance. This multiscale causality method can reveal the directed functional interactions across spatiotemporal scales of brain activity to inform basic science investigations and neurotechnologies.
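The Granger-style logic of the spike-prediction step can be sketched with a point-process GLM: fit a logistic regression on binned spikes once with only spike history and once with spike plus field-potential history, and ask whether the field history improves held-out prediction. The data are simulated, and the directed-information machinery of the paper is reduced here to a simple held-out log-loss comparison.

```python
# Sketch: does adding field-potential history improve spike prediction?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)
T, n_lags = 20000, 10
lfp = rng.standard_normal(T)
# Simulate spikes that genuinely depend on recent LFP.
drive = np.convolve(lfp, np.ones(n_lags) / n_lags, mode="same")
spikes = (rng.random(T) < 1 / (1 + np.exp(-(drive * 3 - 2)))).astype(int)

def history(x, n_lags):
    """Design matrix of the previous n_lags samples of x at each time point."""
    return np.column_stack([np.roll(x, k) for k in range(1, n_lags + 1)])[n_lags:]

X_spk, X_lfp, y = history(spikes, n_lags), history(lfp, n_lags), spikes[n_lags:]
for name, X in [("spike history only", X_spk),
                ("spike + LFP history", np.hstack([X_spk, X_lfp]))]:
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, shuffle=False)
    model = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    print(name, log_loss(yte, model.predict_proba(Xte)[:, 1]))   # lower is better
```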


Subjects
Neurological Models, Neurons, Action Potentials, Algorithms, Animals, Causality, Linear Models
13.
Neuron ; 110(4): 698-708.e5, 2022 02 16.
Article in English | MEDLINE | ID: mdl-34932942

ABSTRACT

Variation in the neural code contributes to making each individual unique. We probed neural code variation using ∼100 population recordings from major ganglion cell types in the macaque retina, combined with an interpretable computational representation of individual variability. This representation captured variation and covariation in properties such as nonlinearity, temporal dynamics, and spatial receptive field size and preserved invariances such as asymmetries between On and Off cells. The covariation of response properties in different cell types was associated with the proximity of lamination of their synaptic input. Surprisingly, male retinas exhibited higher firing rates and faster temporal integration than female retinas. Exploiting data from previously recorded retinas enabled efficient characterization of a new macaque retina, and of a human retina. Simulations indicated that combining a large dataset of retinal recordings with behavioral feedback could reveal the neural code in a living human and thus improve vision restoration with retinal implants.


Subjects
Retina, Retinal Ganglion Cells, Animals, Female, Macaca, Male, Photic Stimulation, Retina/physiology, Retinal Ganglion Cells/physiology, Ocular Vision
14.
Proc Natl Acad Sci U S A ; 118(46)2021 11 16.
Article in English | MEDLINE | ID: mdl-34772812

ABSTRACT

Neural processing is hypothesized to apply the same mathematical operations in a variety of contexts, implementing so-called canonical neural computations. Divisive normalization (DN) is considered a prime candidate for a canonical computation. Here, we propose a population receptive field (pRF) model based on DN and evaluate it using ultra-high-field functional MRI (fMRI). The DN model parsimoniously captures seemingly disparate response signatures with a single computation, superseding existing pRF models in both performance and biological plausibility. We observe systematic variations in specific DN model parameters across the visual hierarchy and show how they relate to differences in response modulation and visuospatial information integration. The DN model delivers a unifying framework for visuospatial responses throughout the human visual hierarchy and provides insights into its underlying information-encoding computations. These findings extend the role of DN as a canonical computation to neuronal populations throughout the human visual hierarchy.
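The divisive-normalization pRF idea can be illustrated with a toy response function: a narrow activating Gaussian drive divided by a broader suppressive drive plus a constant. The exact parametrization used in the paper differs in its details; this sketch only shows the general DN form R = (A * activation + B) / (C * suppression + D), with made-up parameter values.

```python
# Sketch: a divisive-normalization pRF response to binary stimulus apertures.
import numpy as np

def gaussian_2d(x, y, x0, y0, sigma):
    return np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

def dn_prf_response(stimulus, x, y, x0=0.0, y0=0.0,
                    sigma_act=1.0, sigma_sup=3.0, A=1.0, B=0.0, C=1.0, D=0.5):
    activation = np.sum(stimulus * gaussian_2d(x, y, x0, y0, sigma_act))
    suppression = np.sum(stimulus * gaussian_2d(x, y, x0, y0, sigma_sup))
    return (A * activation + B) / (C * suppression + D)

# Responses to bars of increasing width: DN predicts compressive, sub-additive
# growth rather than the linear increase of a plain Gaussian pRF model.
x, y = np.meshgrid(np.linspace(-5, 5, 101), np.linspace(-5, 5, 101))
for width in (0.5, 1.0, 2.0, 4.0):
    bar = (np.abs(x) < width).astype(float)
    print(width, round(dn_prf_response(bar, x, y), 3))
```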


Subjects
Visual Cortex/physiology, Humans, Magnetic Resonance Imaging/methods, Neurological Models, Neurons/physiology, Photic Stimulation/methods
15.
Dev Cogn Neurosci ; 52: 101034, 2021 12.
Article in English | MEDLINE | ID: mdl-34781250

ABSTRACT

Humans are born into a social environment and from early on possess a range of abilities to detect and respond to social cues. In the past decade, there has been a rapidly increasing interest in investigating the neural responses underlying such early social processes under naturalistic conditions. However, the investigation of neural responses to continuous dynamic input poses the challenge of how to link neural responses back to continuous sensory input. In the present tutorial, we provide a step-by-step introduction to one approach to tackle this issue, namely the use of linear models to investigate neural tracking responses in electroencephalographic (EEG) data. While neural tracking has gained increasing popularity in adult cognitive neuroscience over the past decade, its application to infant EEG is still rare and comes with its own challenges. After introducing the concept of neural tracking, we discuss and compare the use of forward vs. backward models and individual vs. generic models using an example data set of infant EEG data. Each section comprises a theoretical introduction as well as a concrete example using MATLAB code. We argue that neural tracking provides a promising way to investigate early (social) processing in an ecologically valid setting.
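The forward vs. backward contrast discussed in this tutorial can be sketched compactly (the tutorial itself provides MATLAB code; this is a Python illustration): a forward model maps the stimulus envelope to each EEG channel, while a backward model maps all EEG channels back to the envelope. Time lags are omitted for brevity, and the data are synthetic placeholders; real neural-tracking analyses include lagged predictors on either side.

```python
# Sketch: forward (stimulus -> EEG) vs. backward (EEG -> stimulus) linear models.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_times, n_channels = 5000, 32
envelope = rng.standard_normal(n_times)
eeg = np.outer(envelope, rng.standard_normal(n_channels)) + rng.standard_normal((n_times, n_channels))

forward = Ridge(alpha=1.0).fit(envelope[:, None], eeg)   # one weight per channel
backward = Ridge(alpha=1.0).fit(eeg, envelope)           # stimulus reconstruction

reconstructed = backward.predict(eeg)
print(np.corrcoef(reconstructed, envelope)[0, 1])        # reconstruction accuracy
```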


Subjects
Child Development, Electroencephalography, Social Behavior, Humans, Infant
16.
J Neurosci ; 41(43): 8946-8962, 2021 10 27.
Article in English | MEDLINE | ID: mdl-34503996

ABSTRACT

In natural conversations, listeners must attend to what others are saying while ignoring extraneous background sounds. Recent studies have used encoding models to predict electroencephalography (EEG) responses to speech in noise-free listening situations, sometimes referred to as "speech tracking." Researchers have analyzed how speech tracking changes with different types of background noise. It is unclear, however, whether neural responses from acoustically rich, naturalistic environments with and without background noise can be generalized to more controlled stimuli. If encoding models for acoustically rich, naturalistic stimuli are generalizable to other tasks, this could aid in data collection from populations of individuals who may not tolerate listening to more controlled and less engaging stimuli for long periods of time. We recorded noninvasive scalp EEG while 17 human participants (8 male/9 female) listened to speech without noise and audiovisual speech stimuli containing overlapping speakers and background sounds. We fit multivariate temporal receptive field encoding models to predict EEG responses to pitch, the acoustic envelope, phonological features, and visual cues in both stimulus conditions. Our results suggested that neural responses to naturalistic stimuli were generalizable to more controlled datasets. EEG responses to speech in isolation were predicted accurately using phonological features alone, while responses to speech in a rich acoustic background were more accurate when including both phonological and acoustic features. Our findings suggest that naturalistic audiovisual stimuli can be used to measure receptive fields that are comparable and generalizable to more controlled audio-only stimuli. SIGNIFICANCE STATEMENT: Understanding spoken language in natural environments requires listeners to parse acoustic and linguistic information in the presence of other distracting stimuli. However, most studies of auditory processing rely on highly controlled stimuli with no background noise, or with background noise inserted at specific times. Here, we compare models where EEG data are predicted based on a combination of acoustic, phonetic, and visual features in highly disparate stimuli: sentences from a speech corpus and speech embedded within movie trailers. We show that modeling neural responses to highly noisy, audiovisual movies can uncover tuning for acoustic and phonetic information that generalizes to simpler stimuli typically used in sensory neuroscience experiments.


Subjects
Acoustic Stimulation/methods, Brain/physiology, Electroencephalography/methods, Electrooculography/methods, Photic Stimulation/methods, Speech Perception/physiology, Adult, Female, Humans, Male, Motion Pictures, Young Adult
17.
Brain Sci ; 11(8)2021 Jul 29.
Article in English | MEDLINE | ID: mdl-34439623

ABSTRACT

Visual encoding models are important computational tools for understanding how information is processed along the visual stream. Many improved visual encoding models have been developed by changing the model architecture or the learning objective, but these have been limited to supervised learning. Taking the perspective of unsupervised learning mechanisms, this paper used a pre-trained neural network to construct a visual encoding model based on contrastive self-supervised learning for the ventral visual stream measured with functional magnetic resonance imaging (fMRI). We first extracted features with a ResNet50 model pre-trained by contrastive self-supervised learning (the ResNet50-CSL model), trained a linear regression model for each voxel, and then computed the prediction accuracy across voxels. Compared with a ResNet50 model pre-trained on a supervised classification task, the ResNet50-CSL model achieved equal or even somewhat better encoding performance in multiple visual cortical areas. Moreover, the ResNet50-CSL model forms a hierarchical representation of the input visual stimuli, similar to the hierarchical information processing of the human visual cortex. Our results suggest that encoding models based on contrastive self-supervised learning can compete with supervised models, and that contrastive self-supervised learning is an effective way to extract human brain-like representations.

18.
Neuron ; 109(14): 2308-2325.e10, 2021 07 21.
Article in English | MEDLINE | ID: mdl-34133944

ABSTRACT

Humans and other animals can identify objects by active touch, requiring the coordination of exploratory motion and tactile sensation. Both the motor strategies and neural representations employed could depend on the subject's goals. We developed a shape discrimination task that challenged head-fixed mice to discriminate concave from convex shapes. Behavioral decoding revealed that mice did this by comparing contacts across whiskers. In contrast, a separate group of mice performing a shape detection task simply summed up contacts over whiskers. We recorded populations of neurons in the barrel cortex, which processes whisker input, and found that individual neurons across the cortical layers encoded touch, whisker motion, and task-related signals. Sensory representations were task-specific: during shape discrimination, but not detection, neurons responded most to behaviorally relevant whiskers, overriding somatotopy. Thus, sensory cortex employs task-specific representations compatible with behaviorally relevant computations.


Subjects
Discrimination Learning/physiology, Form Perception/physiology, Neurons/physiology, Somatosensory Cortex/physiology, Touch Perception/physiology, Animals, Mice, Vibrissae/physiology
19.
Neuroimage ; 237: 118106, 2021 08 15.
Article in English | MEDLINE | ID: mdl-33991696

ABSTRACT

Speech comprehension in natural soundscapes rests on the ability of the auditory system to extract speech information from a complex acoustic signal with overlapping contributions from many sound sources. Here we reveal the canonical processing of speech in natural soundscapes on multiple scales by using data-driven sound-characterization approaches to analyze ultra-high-field fMRI recorded while participants listened to the audio soundtrack of a movie. We show that, at the functional level, the neuronal processing of speech in natural soundscapes can be surprisingly low dimensional in the human cortex, highlighting the functional efficiency of the auditory system for a seemingly complex task. In particular, we find that a model comprising three functional dimensions of auditory processing in the temporal lobes is shared across participants' fMRI activity. We further demonstrate that the three functional dimensions are implemented in anatomically overlapping networks that process different aspects of speech in natural soundscapes: one is most sensitive to complex auditory features present in speech, another to complex auditory features and fast temporal modulations that are not specific to speech, and one codes mainly for sound level. These results were derived with few a priori assumptions and provide a detailed and computationally reproducible account of the cortical activity in the temporal lobe elicited by the processing of speech in natural soundscapes.


Subjects
Auditory Perception/physiology, Brain Mapping/methods, Theoretical Models, Speech Perception/physiology, Temporal Lobe/physiology, Unsupervised Machine Learning, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Motion Pictures, Temporal Lobe/diagnostic imaging, Young Adult
20.
Neuroimage ; 226: 117562, 2021 02 01.
Article in English | MEDLINE | ID: mdl-33189931

ABSTRACT

An extensive body of work has shown that attentional capture is contingent on the goals of the observer: Capture is strongly reduced or even eliminated when an irrelevant singleton stimulus does not match the target-defining properties (Folk et al., 1992). There has been a long-standing debate on whether attentional capture can be explained by goal-driven and/or stimulus-driven accounts. Here, we shed further light on this matter by using EEG activity (raw EEG and alpha power) to provide a time-resolved index of attentional orienting towards salient stimuli that either matched or did not match target-defining properties. A search display containing the target stimulus was preceded by a spatially uninformative singleton cue that either matched the color of the upcoming target (contingent cues), or that appeared in an irrelevant color (non-contingent cues). Multivariate analysis of raw EEG and alpha power revealed preferential tuning to the location of both contingent and non-contingent cues, with a stronger bias towards contingent than non-contingent cues. The time course of these effects, however, depended on the neural signal. Raw EEG data revealed attentional orienting towards the contingent cue early on in the trial (>156 ms), while alpha power revealed sustained spatial selection in the cued locations at a later moment in the trial (>250 ms). Moreover, while raw EEG showed stronger capture by contingent cues during this early time window, an advantage for contingent cues arose during a later time window in alpha band activity. Thus, our findings suggest that raw EEG activity and alpha-band power tap into distinct neural processes that index separate aspects of covert spatial attention.


Subjects
Alpha Rhythm/physiology, Attention/physiology, Brain/physiology, Spatial Orientation/physiology, Adult, Cues (Psychology), Electroencephalography, Female, Humans, Male, Multivariate Analysis, Reaction Time/physiology, Young Adult