Results 1 - 3 of 3
1.
Front Neuroinform ; 15: 679838, 2021.
Article in English | MEDLINE | ID: mdl-34630062

ABSTRACT

Over the past decade, deep neural network (DNN) models have received a lot of attention due to their near-human object classification performance and their excellent prediction of signals recorded from biological visual systems. To better understand the function of these networks and relate them to hypotheses about brain activity and behavior, researchers need to extract the activations to images across different DNN layers. The abundance of different DNN variants, however, can often be unwieldy, and the task of extracting DNN activations from different layers may be non-trivial and error-prone for someone without a strong computational background. Thus, researchers in the fields of cognitive science and computational neuroscience would benefit from a library or package that supports a user in the extraction task. THINGSvision is a new Python module that aims at closing this gap by providing a simple and unified tool for extracting layer activations for a wide range of pretrained and randomly initialized neural network architectures, even for users with little to no programming experience. We demonstrate the general utility of THINGSvision by relating extracted DNN activations to a number of functional MRI and behavioral datasets using representational similarity analysis, which can be performed as an integral part of the toolbox. Together, THINGSvision enables researchers across diverse fields to extract features in a streamlined manner for their custom image dataset, thereby improving the ease of relating DNNs, brain activity, and behavior, and improving the reproducibility of findings in these research fields.
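The representational similarity analysis (RSA) workflow this abstract describes can be sketched in plain NumPy: build a representational dissimilarity matrix (RDM) from each feature set (e.g. DNN-layer activations and fMRI voxel patterns for the same stimuli), then compare the two RDMs. The function names below are illustrative only and are not the THINGSvision API.

```python
import numpy as np

def rdm(features):
    """Representational dissimilarity matrix: entry (i, j) is
    1 - Pearson correlation between stimulus i and stimulus j.
    features: array of shape (n_stimuli, n_units)."""
    z = features - features.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True)
    return 1.0 - (z @ z.T) / features.shape[1]

def rsa_score(feat_a, feat_b):
    """Spearman correlation between the upper triangles of the
    two RDMs (assumes no tied distances, as with continuous data)."""
    iu = np.triu_indices(feat_a.shape[0], k=1)
    a, b = rdm(feat_a)[iu], rdm(feat_b)[iu]
    # Rank-transform, then take the Pearson correlation of the ranks.
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))
```

Comparing a feature matrix with itself yields a score of 1.0; in practice one would pass layer activations for one matrix and brain or behavioral data for the other.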

2.
Front Psychol ; 11: 615123, 2020.
Article in English | MEDLINE | ID: mdl-33281694

ABSTRACT

[This corrects the article DOI: 10.3389/fpsyg.2019.00375.].

3.
Front Psychol ; 10: 375, 2019.
Article in English | MEDLINE | ID: mdl-30846961

ABSTRACT

Load theory claims that bottom-up attention is possible under conditions of low perceptual load but not high perceptual load. At variance with this claim, a recent one-trial study showed that under low load, with only two colors in the display (a ring and a disk), an instruction to process only one of the two stimuli led to better memory performance for the color of the relevant stimulus than for that of the irrelevant stimulus. Control experiments showed that if instructed to pay attention to both objects, participants were able to memorize both colors. Thus, stimulus irrelevance diminished the likelihood of memory for a color stimulus under low perceptual-load conditions. Yet, we noted less than optimal design features in that prior study: a lack of more implicit priming measures of memory or attention, and an interval between color stimulus presentation and memory test that probably exceeded 500 ms. We addressed these problems in the current one-trial study by improving the retrieval displays while leaving the encoding displays as in the original study, and found that the results only partly replicated prior findings. In particular, there was no evidence of irrelevance-induced blindness under conditions in which a ring was designated as relevant, surrounding an irrelevant disk. However, a continuously cumulative meta-analysis across past and present experiments showed that our results do not refute the irrelevance-induced effects entirely. We conclude with recommendations for future tests of load theory.
