Annu Int Conf IEEE Eng Med Biol Soc ; 2016: 854-858, 2016 Aug.
Article in English | MEDLINE | ID: mdl-28268458


We analyze brain waves acquired through a consumer-grade EEG device to investigate its capabilities for user identification and authentication. First, we show the statistical significance of the P300 component in event-related potential (ERP) data from 14-channel EEGs across 25 subjects. We then apply a variety of machine learning techniques, comparing the user-identification performance of various combinations of a dimensionality-reduction technique followed by a classification algorithm. Experimental results show that an identification accuracy of 72% can be achieved using only a single 800 ms ERP epoch. In addition, we demonstrate that the user identification accuracy can be significantly improved to more than 96.7% by joint classification of multiple epochs.
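The joint classification of multiple epochs mentioned above can be illustrated by fusing per-epoch classifier outputs. The sketch below is not the authors' implementation; it assumes each epoch yields a probability vector over subjects and fuses them by summing log-probabilities (a naive-Bayes-style combination), so a single noisy epoch is outvoted by the others:

```python
import math

def joint_classify(epoch_probs):
    """Fuse per-epoch subject probabilities by summing log-probabilities,
    then return the most likely subject.
    epoch_probs: list of dicts mapping subject -> P(subject | epoch)."""
    scores = {}
    for probs in epoch_probs:
        for subject, p in probs.items():
            # Clamp to avoid log(0) for subjects a classifier rules out.
            scores[subject] = scores.get(subject, 0.0) + math.log(max(p, 1e-12))
    return max(scores, key=scores.get)

# Three noisy single-epoch predictions over two hypothetical subjects:
epochs = [
    {"A": 0.6, "B": 0.4},
    {"A": 0.7, "B": 0.3},
    {"A": 0.4, "B": 0.6},  # this epoch alone would misidentify the subject
]
print(joint_classify(epochs))  # A
```

Multiplying per-epoch probabilities (equivalently, summing their logs) is one simple way several weak single-epoch decisions can combine into a much more accurate joint decision, consistent with the accuracy gain the abstract reports.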

Identificação Biométrica/métodos , Eletroencefalografia/métodos , Potenciais Evocados/fisiologia , Algoritmos , Encéfalo/fisiologia , Eletroencefalografia/instrumentação , Desenho de Equipamento , Potencial Evocado P300/fisiologia , Humanos , Modelos Logísticos , Aprendizado de Máquina , Redes Neurais de Computação , Processamento de Sinais Assistido por Computador
IEEE Trans Pattern Anal Mach Intell ; 32(2): 348-63, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20075463


We present a generative model and inference algorithm for 3D nonrigid object tracking. The model, which we call G-flow, enables the joint inference of 3D position, orientation, and nonrigid deformations, as well as object texture and background texture. Optimal inference under G-flow reduces to a conditionally Gaussian stochastic filtering problem. The optimal solution to this problem reveals a new space of computer vision algorithms, of which classic approaches such as optic flow and template matching are special cases that are optimal only under special circumstances. We evaluate G-flow on the problem of tracking facial expressions and head motion in 3D from single-camera video. Previously, the lack of realistic video data with ground truth nonrigid position information has hampered the rigorous evaluation of nonrigid tracking. We introduce a practical method of obtaining such ground truth data and present a new face video data set that was created using this technique. Results on this data set show that G-flow is much more robust and accurate than current deterministic optic-flow-based approaches.
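The abstract notes that classic template matching is a special case of optimal inference under G-flow, optimal only under restricted conditions (e.g., a rigid object and stable appearance). As a point of reference, plain sum-of-squared-differences template matching, one of those special cases, looks like this; the code is an illustrative sketch, not the G-flow algorithm:

```python
def match_template(frame, template):
    """Brute-force template matching: slide the template over the frame and
    return the (row, col) offset minimizing the sum of squared differences.
    frame, template: 2D lists of grayscale intensities."""
    fh, fw = len(frame), len(frame[0])
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            ssd = sum(
                (frame[y + i][x + j] - template[i][j]) ** 2
                for i in range(th) for j in range(tw)
            )
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

# A 2x2 patch embedded at row 1, column 2 of a small frame:
frame = [
    [0, 0, 0, 0, 0],
    [0, 0, 9, 8, 0],
    [0, 0, 7, 6, 0],
    [0, 0, 0, 0, 0],
]
template = [[9, 8], [7, 6]]
print(match_template(frame, template))  # (1, 2)
```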

Algorithms , Face/anatomy & histology , Image Processing, Computer-Assisted/methods , Movement/physiology , Normal Distribution , Pattern Recognition, Automated/methods , Humans , Stochastic Processes , Video Recording
J Vis ; 8(7): 32.1-20, 2008 Dec 16.
Article in English | MEDLINE | ID: mdl-19146264


We propose a definition of saliency by considering what the visual system is trying to optimize when directing attention. The resulting model is a Bayesian framework from which bottom-up saliency emerges naturally as the self-information of visual features, and overall saliency (incorporating top-down information with bottom-up saliency) emerges as the pointwise mutual information between the features and the target when searching for a target. An implementation of our framework demonstrates that our model's bottom-up saliency maps perform as well as or better than existing algorithms in predicting people's fixations in free viewing. Unlike existing saliency measures, which depend on the statistics of the particular image being viewed, our measure of saliency is derived from natural image statistics, obtained in advance from a collection of natural images. For this reason, we call our model SUN (Saliency Using Natural statistics). A measure of saliency based on natural image statistics, rather than based on a single test image, provides a straightforward explanation for many search asymmetries observed in humans; the statistics of a single test image lead to predictions that are not consistent with these asymmetries. In our model, saliency is computed locally, which is consistent with the neuroanatomy of the early visual system and results in an efficient algorithm with few free parameters.
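The two quantities the abstract defines can be written out directly: bottom-up saliency is the self-information −log p(F) of a feature under statistics learned in advance, and overall saliency during search is the pointwise mutual information log [p(F | target) / p(F)]. A minimal sketch, with a simple frequency table standing in for the natural image statistics the SUN model actually uses:

```python
import math

def bottom_up_saliency(feature_counts, feature):
    """Self-information -log p(F), with p(F) estimated from a precomputed
    frequency table of feature occurrences (a stand-in for natural image
    statistics gathered in advance)."""
    p = feature_counts[feature] / sum(feature_counts.values())
    return -math.log(p)

def overall_saliency(p_feature, p_feature_given_target):
    """Pointwise mutual information log[p(F|target)/p(F)]: how much more
    likely the feature is when the search target is present."""
    return math.log(p_feature_given_target / p_feature)

# Rare features are more salient than common ones in free viewing:
stats = {"common_edge": 90, "rare_corner": 10}
print(bottom_up_saliency(stats, "rare_corner")
      > bottom_up_saliency(stats, "common_edge"))  # True

# A feature diagnostic of the target gets positive overall saliency:
print(overall_saliency(0.1, 0.8) > 0)  # True
```

Because p(F) comes from statistics collected ahead of time rather than from the test image itself, a feature that is globally rare stays salient even if it happens to be common in the current scene, which is the mechanism behind the search asymmetries the abstract mentions.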

Attention/physiology , Bayes Theorem , Computer Simulation , Eye Movements/physiology , Visual Perception/physiology , Humans
J Vis ; 8(14): 17.1-14, 2008 Nov 12.
Article in English | MEDLINE | ID: mdl-19146318


We present a Bayesian version of J. Lacroix, J. Murre, and E. Postma's (2006) Natural Input Memory (NIM) model of saccadic visual memory. Our model, which we call NIMBLE (NIM with Bayesian Likelihood Estimation), uses a cognitively plausible image sampling technique that provides a foveated representation of image patches. We conceive of these memorized image fragments as samples from image class distributions and model the memory of these fragments using kernel density estimation. Using these models, we derive class-conditional probabilities of new image fragments and combine individual fragment probabilities to classify images. Our Bayesian formulation of the model extends easily to handle multi-class problems. We validate our model by demonstrating human levels of performance on a face recognition memory task and high accuracy on multi-category face and object identification. We also use NIMBLE to examine the change in beliefs as more fixations are taken from an image. Using fixation data collected from human subjects, we directly compare the performance of NIMBLE's memory component to human performance, demonstrating that using human fixation locations allows NIMBLE to recognize familiar faces with only a single fixation.
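The memory mechanism the abstract describes, modeling stored image fragments with kernel density estimation and combining per-fragment class-conditional probabilities to classify an image, can be sketched as follows. This is an illustrative toy (isotropic Gaussian kernel, 2D feature vectors, an independent-fragment combination rule), not the NIMBLE implementation:

```python
import math

def kde_log_likelihood(fragment, stored_fragments, bandwidth=1.0):
    """Kernel density estimate of p(fragment | class) from a class's stored
    fragments, using an isotropic Gaussian kernel."""
    def kernel(a, b):
        d2 = sum((x - y) ** 2 for x, y in zip(a, b))
        return math.exp(-d2 / (2 * bandwidth ** 2))
    return math.log(sum(kernel(fragment, s) for s in stored_fragments)
                    / len(stored_fragments))

def classify(fragments, memory):
    """Sum per-fragment log-likelihoods for each class (treating fragments
    as independent samples) and return the most likely class."""
    return max(
        memory,
        key=lambda cls: sum(kde_log_likelihood(f, memory[cls])
                            for f in fragments),
    )

# Hypothetical memory: fragments (as 2D feature vectors) from two face classes.
memory = {
    "face_A": [(0.0, 0.0), (0.2, 0.1)],
    "face_B": [(5.0, 5.0), (5.2, 4.9)],
}
print(classify([(0.1, 0.0), (0.1, 0.2)], memory))  # face_A
```

Because the decision is a running sum of per-fragment evidence, beliefs can be examined after each fixation's fragment is added, which is how the model tracks the change in beliefs as more fixations are taken from an image.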

Memory/physiology , Models, Psychological , Saccades/physiology , Visual Perception/physiology , Bayes Theorem , Face , Female , Fixation, Ocular , Humans , Male , Pattern Recognition, Visual , Probability , Recognition, Psychology , Reproducibility of Results