Results 1 - 9 of 9
1.
Neuroimage; 221: 117148, 2020 Nov 01.
Article in English | MEDLINE | ID: mdl-32659350

ABSTRACT

A number of fMRI studies have provided support for the existence of multiple concept representations in areas of the brain such as the anterior temporal lobe (ATL) and inferior parietal lobule (IPL). However, the interaction among different conceptual representations remains unclear. To better understand the dynamics of how the brain extracts meaning from sensory stimuli, we conducted a human high-density electroencephalography (EEG) study in which we first trained participants to associate pseudowords with various animal and tool concepts. After training, multivariate pattern classification of EEG signals in sensor and source space revealed the representation of both animal and tool concepts in the left ATL, and of tool concepts within the left IPL, within 250 ms. Finally, we used Granger causality analyses to show that orthography-selective sensors directly modulated activity in the parietal tool-selective cluster. Together, our results provide evidence for distinct but parallel "perceptual-to-conceptual" feedforward hierarchies in the brain.
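The time-resolved decoding approach described above can be illustrated with a minimal sketch on synthetic data. A simple cross-validated nearest-centroid classifier stands in for the paper's multivariate pattern classifier; all array shapes, the effect onset, and the signal strength are illustrative assumptions, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated EEG epochs: (trials, channels, timepoints); labels 0 = animal, 1 = tool.
n_trials, n_chan, n_time = 80, 32, 100
y = np.repeat([0, 1], n_trials // 2)
X = rng.normal(size=(n_trials, n_chan, n_time))
# Inject a class-dependent signal on 8 channels from timepoint 40 onward
# (standing in for concept information emerging "within 250 ms").
X[y == 1, :8, 40:] += 0.8

def decode_timecourse(X, y, n_folds=4):
    """Cross-validated nearest-centroid decoding accuracy at each timepoint."""
    accs = np.zeros(X.shape[2])
    folds = np.arange(len(y)) % n_folds
    for t in range(X.shape[2]):
        correct = 0
        for f in range(n_folds):
            train, test = folds != f, folds == f
            c0 = X[train & (y == 0), :, t].mean(axis=0)   # class centroids
            c1 = X[train & (y == 1), :, t].mean(axis=0)
            d0 = np.linalg.norm(X[test, :, t] - c0, axis=1)
            d1 = np.linalg.norm(X[test, :, t] - c1, axis=1)
            correct += np.sum((d1 < d0) == (y[test] == 1))
        accs[t] = correct / len(y)
    return accs

acc = decode_timecourse(X, y)
print(f"baseline acc (t<40): {acc[:40].mean():.2f}, post-onset acc: {acc[40:].mean():.2f}")
```

Accuracy stays at chance before the injected effect and rises well above it afterwards, which is the signature such analyses look for when localizing concept representations in time.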


Subjects
Association Learning/physiology , Brain Mapping/methods , Concept Formation/physiology , Electroencephalography/methods , Parietal Lobe/physiology , Pattern Recognition, Visual/physiology , Temporal Lobe/physiology , Adult , Female , Humans , Male , Young Adult
2.
J Vis; 19(12): 20, 2019 Oct 01.
Article in English | MEDLINE | ID: mdl-31644785

ABSTRACT

The human visual system can detect objects in streams of rapidly presented images at presentation rates of 70 Hz and beyond. Yet target detection is often impaired when multiple targets are presented in quick temporal succession. Here, we provide evidence for the hypothesis that such impairments can arise from interference between "top-down" feedback signals and the initial "bottom-up" feedforward processing of the second target. Although it has recently been shown that feedback signals are important for visual detection, here this "crash" in neural processing affected both the detection and categorization of both targets. Moreover, experimentally reducing such interference between the feedforward and feedback portions of the two targets substantially improved participants' performance. The results indicate a key role for top-down re-entrant feedback signals and show how their interference with a successive target's feedforward processing determines human behavior. These results are not just relevant for our understanding of how, when, and where capacity limits in the brain's processing abilities can arise, but also have ramifications spanning topics from consciousness to learning and attention.


Subjects
Attention , Brain/physiology , Feedback , Visual Cortex/physiology , Visual Perception , Adolescent , Adult , Behavior , Cognition , Electrodes , Electroencephalography , Female , Humans , Learning , Male , Reproducibility of Results , Young Adult
3.
J Cogn Neurosci; 26(2): 408-21, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24001003

ABSTRACT

A hallmark of human cognition is the ability to rapidly assign meaning to sensory stimuli. It has been suggested that this fast visual object categorization ability is accomplished by a feedforward processing hierarchy consisting of shape-selective neurons in occipito-temporal cortex that feed into task circuits in frontal cortex computing conceptual category membership. We performed an EEG rapid adaptation study to test this hypothesis. Participants were trained to categorize novel stimuli generated with a morphing system that precisely controlled both stimulus shape and category membership. We subsequently performed EEG recordings while participants performed a category matching task on pairs of successively presented stimuli. We used space-time cluster analysis to identify channels and latencies exhibiting selective neural responses. Neural signals before 200 msec on posterior channels demonstrated a release from adaptation for shape changes, irrespective of category membership, compatible with a shape- but not explicitly category-selective neural representation. A subsequent cluster with anterior topography appeared after 200 msec and exhibited release from adaptation consistent with explicit categorization. These signals were subsequently modulated by perceptual uncertainty starting around 300 msec. The degree of category selectivity of the anterior signals was strongly predictive of behavioral performance. We also observed a posterior category-selective signal after 300 msec exhibiting significant functional connectivity with the initial anterior category-selective signal. In summary, our study supports the proposition that perceptual categorization is accomplished by the brain within a quarter second through a largely feedforward process culminating in frontal areas, followed by later category-selective signals in posterior regions.
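The space-time cluster analysis used to find selective channels and latencies can be sketched in miniature as a cluster-based permutation test. The example below runs a sign-flip permutation test on a simulated one-dimensional difference wave, comparing the observed cluster "mass" (summed |t| over contiguous suprathreshold timepoints) against a permutation null; the subject count, timebase, and effect window are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated per-subject difference waves (e.g., adapted minus non-adapted):
# 20 subjects x 120 timepoints, with a genuine effect from timepoint 50 to 70.
n_sub, n_time = 20, 120
diff = rng.normal(size=(n_sub, n_time))
diff[:, 50:70] += 0.9

def one_sample_t(x):
    """One-sample t-statistic at each timepoint."""
    return x.mean(axis=0) / (x.std(axis=0, ddof=1) / np.sqrt(x.shape[0]))

def max_cluster_mass(t, thresh):
    """Largest summed |t| over any contiguous run of suprathreshold timepoints."""
    best, mass = 0.0, 0.0
    for above, v in zip(np.abs(t) > thresh, np.abs(t)):
        mass = mass + v if above else 0.0
        best = max(best, mass)
    return best

def cluster_perm_test(diff, thresh=2.0, n_perm=500):
    observed = max_cluster_mass(one_sample_t(diff), thresh)
    null = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(diff.shape[0], 1))  # flip whole subjects
        null[i] = max_cluster_mass(one_sample_t(diff * signs), thresh)
    return observed, (null >= observed).mean()

obs, p = cluster_perm_test(diff)
print(f"max cluster mass {obs:.1f}, permutation p = {p:.3f}")
```

Because whole clusters rather than individual timepoints are tested, this controls for multiple comparisons across time without assuming where or when the effect occurs. Extending the cluster definition over channel neighborhoods as well as time yields the space-time variant.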


Subjects
Adaptation, Psychological/physiology , Cognition/physiology , Electroencephalography , Adolescent , Adult , Algorithms , Brain/physiology , Brain Mapping , Cluster Analysis , Feedback, Psychological , Female , Form Perception/physiology , Humans , Image Processing, Computer-Assisted , Kinetics , Male , Photic Stimulation , Psychomotor Performance/physiology , Signal Detection, Psychological , Visual Perception/physiology , Young Adult
4.
Front Neuroinform; 14: 2, 2020.
Article in English | MEDLINE | ID: mdl-32116626

ABSTRACT

Accurate stimulus onset timing is critical to almost all behavioral research. Auditory, visual, or manual-response stimulus onsets are typically sent through wires to various machines that record data such as eye gaze positions, electroencephalography, stereo electroencephalography, and electrocorticography. These stimulus onsets are collated and analyzed according to experimental condition. If there is variability in the temporal accuracy with which these onsets are delivered to external systems, the quality of the resulting data and scientific analyses will degrade. Here, we describe an approximately $200 Arduino-based system and associated open-source codebase that achieved a maximum of 4 microseconds of delay from inputs to outputs while electrically opto-isolating the connected external systems. Using an oscilloscope, the device is configurable for the different environmental conditions particular to each laboratory (e.g., light sensor type, screen type, speaker type, stimulus type, temperature, etc.). This low-cost open-source project delivered electrically isolated digital stimulus onset transistor-transistor logic (TTL) triggers with an input/output delay of 4 µs, and was successfully tested with seven different external systems that record eye and neurological data.
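Why trigger-timing accuracy matters can be shown with a short simulation (all numbers are illustrative, not from the paper): jitter in stimulus-onset triggers smears the event-locked average and shrinks a sharp evoked component, whereas a 4 µs delay is far below a single sample at typical EEG rates and is effectively invisible:

```python
import numpy as np

rng = np.random.default_rng(2)

fs = 1000                          # 1 kHz EEG sampling rate
t = np.arange(600) / fs            # 600 ms epoch
# A sharp 5 uV evoked component peaking at 100 ms (10 ms wide).
erp = 5e-6 * np.exp(-0.5 * ((t - 0.1) / 0.01) ** 2)

def averaged_peak(jitter_std_s, n_trials=200):
    """Average single trials whose onsets are jittered, return peak amplitude."""
    acc = np.zeros_like(t)
    for _ in range(n_trials):
        shift = int(round(rng.normal(0, jitter_std_s) * fs))  # samples of jitter
        acc += np.roll(erp, shift)
    return (acc / n_trials).max()

clean = averaged_peak(0.0)         # exact triggers
jittered = averaged_peak(0.010)    # 10 ms trigger jitter
print(f"peak: {clean * 1e6:.2f} uV exact, {jittered * 1e6:.2f} uV with 10 ms jitter")
```

With zero jitter the average recovers the full component amplitude; 10 ms of jitter visibly attenuates it. At a 1 kHz sampling rate a 4 µs input/output delay is 1/250 of one sample, so the device described above contributes no measurable smearing.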

5.
J Eye Mov Res; 13(5), 2020 Jun 28.
Article in English | MEDLINE | ID: mdl-33828809

ABSTRACT

Here, we provide an analysis of the microsaccades that occurred during continuous visual search and targeting of small faces that we pasted either into cluttered background photos or into a simple gray background. Subjects continuously used their eyes to target single 3-degree upright or inverted faces in changing scenes. As soon as the participant's gaze reached the target face, a new face was displayed in a different, random location. Regardless of the experimental context (background scene or no background scene) or target eccentricity (4 to 20 degrees of visual angle), we found that the microsaccade rate dropped to near-zero levels within only 12 milliseconds after stimulus onset. There were almost never any microsaccades after stimulus onset and before the first saccade to the face. One subject completed 118 consecutive trials without a single microsaccade. However, in about 20% of the trials, there was a single microsaccade that occurred almost immediately after the preceding saccade's offset. These microsaccades were task-oriented, because their facial-landmark targeting distributions matched those of saccades in both the upright and inverted face conditions. Our findings show that a single feedforward pass through the visual hierarchy for each stimulus is likely all that is needed to effectuate prolonged continuous visual search. In addition, we provide evidence that microsaccades can serve perceptual functions such as correcting saccades or effectuating task-oriented goals during continuous visual search.
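Microsaccade counts of this kind are commonly obtained with a velocity-threshold algorithm in the style of Engbert and Kliegl (2003): gaze velocity is compared against a multiple of a robust, median-based noise estimate. A minimal sketch on synthetic 1250 Hz gaze data (the drift and saccade parameters are invented for illustration, and this is not necessarily the detector the study used):

```python
import numpy as np

rng = np.random.default_rng(3)

fs, n = 1250, 500                                    # sampling rate (Hz), samples
# Slow fixational drift in degrees, plus one injected ~0.4 deg microsaccade.
gaze = np.cumsum(rng.normal(0, 0.002, size=(n, 2)), axis=0)
step = np.zeros(n)
step[200:210] = np.linspace(0.0, 0.4, 10)            # 8 ms position ramp
step[210:] = 0.4                                     # persistent offset afterwards
gaze += np.outer(step, [1.0, 0.3])

def detect_microsaccades(gaze, fs, lam=6.0, min_samples=4):
    """Flag runs where 2-D gaze velocity exceeds lam times a robust
    (median-based) estimate of the velocity noise on each axis."""
    vel = np.gradient(gaze, axis=0) * fs             # deg/s, central differences
    sigma = np.sqrt(np.median(vel**2, axis=0) - np.median(vel, axis=0)**2)
    hot = ((vel / (lam * sigma))**2).sum(axis=1) > 1.0   # elliptic threshold
    events, start = [], None
    for i, h in enumerate(np.append(hot, False)):    # group consecutive hot samples
        if h and start is None:
            start = i
        elif not h and start is not None:
            if i - start >= min_samples:
                events.append((start, i - 1))
            start = None
    return events

events = detect_microsaccades(gaze, fs)
print(events)
```

The detector recovers the single injected event and rejects the drift, which is the property needed to support claims such as "118 consecutive trials without a single microsaccade."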

6.
Front Hum Neurosci; 12: 374, 2018.
Article in English | MEDLINE | ID: mdl-30333737

ABSTRACT

While several studies have shown human subjects' impressive ability to detect faces in individual images in paced settings (Crouzet et al., 2010), we here report the details of an eye movement dataset in which subjects rapidly and continuously targeted single faces embedded in different scenes at rates approaching six face targets per second (including blinks and eye movement times). In this paper, we describe a large, publicly available eye movement dataset from this new psychophysical paradigm (Martin et al., 2018). The paradigm produced high-resolution eye-tracking data from an experiment on continuous upright and inverted 3°-face detection in both background and no-background conditions. The new "Zapping" paradigm allowed a large number of trials to be completed in a short amount of time. For example, our three studies encompassed a total of 288,000 trials across 72 separate experiments, yet took only approximately 40 hours of recording for the three experimental cohorts. Each subject did 4,000 trials split into eight blocks of 500 consecutive trials in one of the four experimental conditions: {upright, inverted} × {scene, no scene}. For each condition, there are several covariates of interest, including temporal eye positions sampled at 1250 Hz, saccades, saccade reaction times, microsaccades, pupil dynamics, target luminances, and global contrasts.

7.
Sci Rep; 8(1): 12482, 2018 Aug 20.
Article in English | MEDLINE | ID: mdl-30127454

ABSTRACT

A number of studies have shown human subjects' impressive ability to detect faces in individual images, with saccade reaction times starting as fast as 100 ms after stimulus onset. Here, we report evidence that humans can rapidly and continuously saccade towards single faces embedded in different scenes at rates approaching 6 faces/scenes per second (including blinks and eye movement times). These observations are impressive given that humans usually make no more than 2 to 5 saccades per second when searching a single scene with eye movements. Surprisingly, attempts to hide the faces by blending them into a large background scene had little effect on targeting rates, saccade reaction times, or targeting accuracy. Upright faces were found more quickly and more accurately than inverted faces, both with and without a cluttered background scene, and over a large range of eccentricities (4°-16°). The fastest subject in our study made continuous saccades to 500 small 3° upright faces at 4° eccentricity in only 96 seconds. The maximum face targeting rate achieved by any subject during any sequence of 7 faces in Experiment 3, for the no-scene, upright-face condition, was 6.5 faces targeted per second. Our data provide evidence that the human visual system includes an ultra-rapid and continuous object localization system for upright faces. Furthermore, these observations indicate that continuous paradigms such as the one used here can push humans to make remarkably fast reaction times that impose strong constraints and challenges on models of how, where, and when visual processing occurs in the human brain.


Subjects
Eye Movements/physiology , Face/physiology , Pattern Recognition, Visual/physiology , Reaction Time/physiology , Adult , Brain/physiology , Female , Humans , Male , Saccades/physiology , Young Adult
8.
IEEE Trans Neural Netw Learn Syst; 24(8): 1239-52, 2013 Aug.
Article in English | MEDLINE | ID: mdl-24808564

ABSTRACT

Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to match human performance. This has led to the creation of many biologically inspired models of visual object recognition, among them the hierarchical model and X (HMAX) model. HMAX is traditionally known to achieve high accuracy in visual object recognition tasks at the expense of significant computational complexity. Increasing complexity, in turn, increases computation time, reducing the number of images that can be processed per unit time. In this paper, we describe how the computationally intensive and biologically inspired HMAX model for visual object recognition can be modified for implementation on a commercial field-programmable gate array (FPGA), specifically the Xilinx Virtex-6 ML605 evaluation board with XC6VLX240T FPGA. We show that with minor modifications to the traditional HMAX model we can perform recognition of 128 × 128-pixel images at a rate of 190 images per second with a less than 1% loss in recognition accuracy in both binary and multiclass visual object recognition tasks.
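The core HMAX operations, oriented S1 (Gabor) filtering followed by C1 max pooling, can be sketched in a few lines. This is a toy illustration of the model's structure, not the paper's FPGA implementation; filter sizes, the test image, and all parameters are arbitrary:

```python
import numpy as np

def gabor(size, theta, sigma=2.0, lam=4.0):
    """A small oriented Gabor patch (the HMAX S1 filter), zero-mean."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()

def conv2_valid(img, k):
    """Plain valid-mode 2-D correlation (no padding)."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def c1_maxpool(s1, pool=4):
    """C1 stage: max pooling over local neighborhoods -- the 'max' in HMAX."""
    h, w = s1.shape
    h, w = h - h % pool, w - w % pool
    return s1[:h, :w].reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))

# A vertical bar should excite the vertically tuned S1 unit the most.
img = np.zeros((32, 32))
img[8:24, 15:17] = 1.0
responses = {deg: np.abs(conv2_valid(img, gabor(9, np.radians(deg)))).max()
             for deg in (0, 45, 90, 135)}
print(responses)

c1 = c1_maxpool(np.abs(conv2_valid(img, gabor(9, 0.0))))
print(c1.shape)   # (6, 6)
```

The max pooling gives the model its tolerance to position and scale; the FPGA work cited above gets its speedup by parallelizing exactly these filter-and-pool loops in hardware.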


Subjects
Models, Biological , Pattern Recognition, Automated , Pattern Recognition, Visual , Recognition, Psychology , Algorithms , Computer Simulation , Humans
9.
Article in English | MEDLINE | ID: mdl-19636382

ABSTRACT

Self-organization, a process by which the internal organization of a system changes without supervision, has been proposed as a possible basis for multisensory enhancement (MSE) in the superior colliculus (Anastasio and Patton, 2003). We simplify and extend these results by presenting a simulation using traditional self-organizing maps, intended to understand and simulate MSE as it may generally occur throughout the central nervous system. This simulation of MSE: (1) uses a standard unsupervised competitive learning algorithm, (2) learns from artificially generated activation levels corresponding to driven and spontaneous stimuli from separate and combined input channels, (3) uses a sigmoidal transfer function to generate quantifiable responses to separate inputs, (4) enhances the responses when those same inputs are combined, (5) obeys the inverse effectiveness principle of multisensory integration, and (6) can topographically congregate MSE in a manner similar to that seen in cortex. Thus, the model provides a useful method for evaluating and simulating the development of enhanced interactions between responses to different sensory modalities.
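The ingredients listed above can be sketched with a tiny one-dimensional Kohonen map; every parameter, the sigmoidal transfer function, and the two-channel input statistics below are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(x, gain=8.0, thresh=1.0):
    """Sigmoidal transfer function turning net input into a response."""
    return 1.0 / (1.0 + np.exp(-gain * (x - thresh)))

# 1-D map of 20 units, each with one weight per input channel (visual, auditory).
n_units = 20
W = rng.uniform(0.0, 0.1, size=(n_units, 2))

# Unsupervised competitive learning on driven and spontaneous activation levels.
for step in range(3000):
    drive = rng.choice([0, 1, 2])              # 0: visual, 1: auditory, 2: both
    x = np.abs(rng.normal(0.05, 0.02, 2))      # spontaneous background activity
    if drive in (0, 2):
        x[0] += 1.0                            # driven visual input
    if drive in (1, 2):
        x[1] += 1.0                            # driven auditory input
    winner = np.argmin(np.sum((W - x)**2, axis=1))
    dist = np.abs(np.arange(n_units) - winner)
    width = max(3.0 * (1 - step / 3000), 0.5)  # shrinking neighborhood
    h = np.exp(-dist**2 / (2 * width**2))
    W += 0.1 * h[:, None] * (x - W)

# The most bimodal unit (largest minimum weight) shows multisensory enhancement:
unit = np.argmax(W.min(axis=1))

def resp(v, a):
    return sigmoid(W[unit] @ np.array([v, a]))

weak = 0.6
print(resp(weak, 0.0), resp(0.0, weak), resp(weak, weak))
```

For a weak stimulus the combined response exceeds the sum of the two unimodal responses (superadditive enhancement), and the proportional enhancement shrinks as the inputs grow stronger, reproducing the inverse effectiveness principle in miniature.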
