1.
Appl Econ Lett; 22(18): 1499-1504, 2015 Dec 12.
Article in English | MEDLINE | ID: mdl-26478711

ABSTRACT

We investigate a pool of international chess title holders born between 1901 and 1943. Using Elo ratings, we compute for every player his expected score in a game against a randomly selected player from the pool and take this figure as the player's merit. We measure a player's fame as the number of Google hits. The correlation between fame and merit is 0.38, whereas the correlation between the logarithm of fame and merit is 0.61. This suggests that fame grows exponentially with merit.
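The merit measure above is the standard Elo expected-score formula averaged over the pool, and the fame/merit comparison is an ordinary correlation, so it can be sketched in a few lines. The sketch below is illustrative only: the ratings and Google-hit counts are synthetic stand-ins, not the study's data.

```python
# Hypothetical sketch of the Elo-based merit measure and the fame/merit
# correlations described above; the player data here is synthetic.
import numpy as np

def expected_score(r_a, r_b):
    """Standard Elo expected score of a player rated r_a against r_b."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

rng = np.random.default_rng(0)
elo = rng.normal(2450, 80, size=200)            # synthetic pool of ratings
hits = np.exp(0.015 * (elo - elo.min())) * rng.lognormal(0.0, 1.0, size=200)  # synthetic "Google hits"

# Merit: expected score against a randomly selected opponent from the pool,
# i.e. the mean expected score over all other players.
merit = np.array([expected_score(r, np.delete(elo, i)).mean()
                  for i, r in enumerate(elo)])

print("corr(fame, merit)     =", round(np.corrcoef(hits, merit)[0, 1], 2))
print("corr(log fame, merit) =", round(np.corrcoef(np.log(hits), merit)[0, 1], 2))
```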

2.
J Theor Biol; 355: 111-6, 2014 Aug 21.
Article in English | MEDLINE | ID: mdl-24721476

ABSTRACT

We analyze the time pattern of the activity of a serial killer who murdered 53 people over a period of 12 years. The plot of the cumulative number of murders as a function of time is of the "Devil's staircase" type. The distribution of the intervals between murders (the step lengths) follows a power law with an exponent of 1.4. We propose a model in which the serial killer commits a murder when neuronal excitation in his brain exceeds a certain threshold. We model this neural activity as a branching process, which in turn is approximated by a random walk. Since the distribution of random-walk return times is a power law with exponent 1.5, this explains the distribution of the inter-murder intervals. We illustrate the analytical results by numerical simulation. Time-pattern data from two other serial killers further substantiate our analysis.


Subjects
Brain Waves, Homicide, Biological Models, Humans, Stochastic Processes
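The mechanism in the abstract above (excitation drifts as a random walk, an event fires at a threshold, and random-walk first-passage times have a power-law tail with exponent 3/2) is easy to illustrate numerically. The sketch below is not the authors' simulation: the threshold, the reset-to-zero rule, and the Hill-style exponent estimate are illustrative choices.

```python
# Illustrative threshold/random-walk model of inter-event intervals.
import numpy as np

rng = np.random.default_rng(1)
n_steps = 10**6
steps = rng.choice((-1, 1), size=n_steps)    # symmetric random-walk increments

threshold = 1                                # excitation level that triggers an event
x, last_event = 0, 0
intervals = []
for t, s in enumerate(steps, start=1):
    x += s
    if x >= threshold:
        intervals.append(t - last_event)     # inter-event interval
        last_event = t
        x = 0                                # excitation is spent by the event

intervals = np.asarray(intervals)
# Crude Hill-type estimate of the power-law exponent of the interval tail
xmin = 10
tail = intervals[intervals >= xmin]
alpha = 1.0 + len(tail) / np.log(tail / xmin).sum()
print(f"{len(intervals)} events; tail exponent ~ {alpha:.2f} (theory: 1.5)")
```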
3.
Cogn Process; 7(2): 105-12, 2006 Jun.
Article in English | MEDLINE | ID: mdl-16683173

ABSTRACT

Perceptual multistability during ambiguous visual perception is an important clue to neural dynamics. We examined perceptual switching during ambiguous depth perception using a Necker cube stimulus, and also during binocular rivalry. Analysis of the perceptual switching time series using variance-versus-sample-size analysis, spectral analysis, and time-series shuffling shows that switching times behave as 1/f noise and possess very long-range correlations. This long-memory feature contrasts sharply with traditional satiation models of multistability, as well as with recently published models of multistability and neural processing, none of which incorporate memory. It instead favors the concept of a "dynamic core", or coalition of neurons, in which neurons form transient coalitions. Perceptual switching then corresponds to the replacement of one coalition of neurons by another. Inertia and memory measure the stability of a coalition: a strong and stable coalition can only be displaced by another similarly strong and stable coalition, resulting in long switching times. The complicated transient dynamics of competing coalitions of neurons may be addressable using a combination of functional imaging, frequency-tagged magnetoencephalography and electroencephalography, simultaneous recordings of groups of neurons in many areas of the brain, and concepts from statistical mechanics and nonlinear dynamics theory.


Subjects
Memory/physiology, Visual Perception/physiology, Statistical Data Interpretation, Humans, Photic Stimulation, Binocular Vision
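Two of the checks mentioned in the abstract above (a power spectrum of the switching-time series and a comparison against a shuffled surrogate, which destroys long-range correlations) can be sketched as follows. The series below is a synthetic long-memory stand-in, not recorded dominance durations, and the spectral-slope test is only one of the analyses the study used.

```python
# Spectral analysis of a (synthetic) switching-time series vs. its shuffle.
import numpy as np

rng = np.random.default_rng(2)
n = 4096

# Synthetic stand-in for a long-memory series of dominance durations,
# generated by shaping white noise toward a 1/f power spectrum.
spectrum = np.fft.rfft(rng.normal(size=n))
freqs = np.fft.rfftfreq(n)
spectrum[1:] /= np.sqrt(freqs[1:])
series = np.fft.irfft(spectrum, n)

def log_log_slope(x):
    """Slope of the log-log power spectrum (about -1 for 1/f noise, 0 for white)."""
    x = x - x.mean()
    p = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x))
    return np.polyfit(np.log(f[1:]), np.log(p[1:]), 1)[0]

print("spectral slope, original :", round(log_log_slope(series), 2))
print("spectral slope, shuffled :", round(log_log_slope(rng.permutation(series)), 2))
```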
4.
IEEE Trans Neural Netw; 11(2): 338-55, 2000.
Article in English | MEDLINE | ID: mdl-18249765

ABSTRACT

We derive and discuss new adaptive algorithms for principal component analysis (PCA) that are shown to converge faster than the traditional PCA algorithms due to Oja, Sanger, and Xu. It is well known that traditional PCA algorithms derived by gradient descent on an objective function are slow to converge, and that their convergence depends on an appropriate choice of the gain sequences. Since online applications demand faster convergence and automatic selection of gains, we present new adaptive algorithms to address these problems. We first present an unconstrained objective function, which can be minimized to obtain the principal components. We derive adaptive algorithms from this objective function by using: 1) gradient descent; 2) steepest descent; 3) conjugate direction; and 4) Newton-Raphson methods. Although gradient descent produces Xu's LMSER algorithm, the steepest descent, conjugate direction, and Newton-Raphson methods produce new adaptive algorithms for PCA. We also discuss the landscape of the objective function and present a global convergence proof of the adaptive gradient descent PCA algorithm using stochastic approximation theory. Extensive experiments with stationary and nonstationary multidimensional Gaussian sequences show that the new algorithms converge faster than the traditional gradient descent methods. We also compare the steepest descent adaptive algorithm with state-of-the-art methods on stationary and nonstationary sequences.
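For context, the sketch below shows the kind of adaptive gradient-descent PCA rule the abstract takes as its baseline (an Oja/LMSER-style subspace update driven by a decaying gain sequence). It is an illustration only; the paper's faster steepest-descent, conjugate-direction, and Newton-Raphson variants are not reproduced here, and the data and gain schedule are arbitrary choices.

```python
# Adaptive (online) subspace PCA by plain gradient descent, for illustration.
import numpy as np

rng = np.random.default_rng(3)
d, k, n = 5, 2, 20000                          # input dim, components, samples

A = rng.normal(size=(d, d))
samples = rng.normal(size=(n, d)) @ A.T        # stationary Gaussian sequence

W = rng.normal(scale=0.1, size=(d, k))         # columns estimate the top-k subspace
for t, x in enumerate(samples, start=1):
    eta = 1.0 / (100 + t)                      # decaying gain sequence
    y = W.T @ x
    W += eta * (np.outer(x, y) - W @ np.outer(y, y))   # Oja/LMSER-style update

# Compare the learned subspace with the top-k eigenvectors of the sample covariance
evecs = np.linalg.eigh(np.cov(samples.T))[1][:, ::-1][:, :k]
cosines = np.linalg.svd(evecs.T @ np.linalg.qr(W)[0], compute_uv=False)
print("principal-angle cosines:", np.round(cosines, 3))
```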

5.
IEEE Trans Neural Netw; 9(2): 319-29, 1998.
Article in English | MEDLINE | ID: mdl-18252455

ABSTRACT

We investigate the convergence properties of two different stochastic approximation algorithms for principal component analysis, and analytically explain some commonly observed experimental results. In our analysis, we use the theory of stochastic approximation, and in particular the results of Fabian, to explore the asymptotic mean square errors (AMSEs) of the algorithms. This study reveals the conditions under which the algorithms produce smaller AMSEs, and also the conditions under which one algorithm has a smaller AMSE than the other. Experimental studies with multidimensional Gaussian data corroborate our analytical findings. We next explore the convergence rates of the two algorithms. Our experiments and an analytical explanation reveal the conditions under which the algorithms converge faster to the solution, and also the conditions under which one algorithm converges faster than the other. Finally, we observe that although one algorithm requires more computation per iteration, it leads to a smaller AMSE and converges faster for the minor eigenvectors than the other algorithm.
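An asymptotic MSE comparison of this kind can also be probed empirically by running each stochastic-approximation rule many times and averaging the squared error of the final eigenvector estimate. The sketch below does exactly that, but with two generic stand-in rules (Oja's rule and a normalized Hebbian rule); the paper analyzes its own pair of algorithms analytically via Fabian's results.

```python
# Empirical mean-square-error comparison of two stochastic PCA rules (stand-ins).
import numpy as np

rng = np.random.default_rng(4)
d, n_iter, n_runs = 5, 5000, 50

A = rng.normal(size=(d, d))
cov = A @ A.T
v_true = np.linalg.eigh(cov)[1][:, -1]          # true leading eigenvector

def oja(w, x, eta):
    y = x @ w
    return w + eta * (y * x - y * y * w)

def normalized_hebb(w, x, eta):
    w_new = w + eta * (x @ w) * x
    return w_new / np.linalg.norm(w_new)

def final_sq_error(rule):
    w = rng.normal(size=d)
    w /= np.linalg.norm(w)
    for t in range(1, n_iter + 1):
        x = A @ rng.normal(size=d)              # sample with covariance `cov`
        w = rule(w, x, 1.0 / (50 + t))
    w /= np.linalg.norm(w)
    return min(np.linalg.norm(w - v_true), np.linalg.norm(w + v_true)) ** 2

for name, rule in [("Oja", oja), ("normalized Hebbian", normalized_hebb)]:
    mse = np.mean([final_sq_error(rule) for _ in range(n_runs)])
    print(f"{name:>18s}: empirical MSE ~ {mse:.2e}")
```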

6.
IEEE Trans Neural Netw; 8(3): 663-78, 1997.
Article in English | MEDLINE | ID: mdl-18255669

ABSTRACT

We describe self-organizing learning algorithms and associated neural networks to extract features that are effective for preserving class separability. As a first step, an adaptive algorithm for the computation of Q^(-1/2) (where Q is the correlation or covariance matrix of a random vector sequence) is described. Convergence of this algorithm with probability one is proven by using stochastic approximation theory, and a single-layer linear network architecture for this algorithm is described, which we call the Q^(-1/2) network. Using this network, we describe feature extraction architectures for: 1) unimodal and multicluster Gaussian data in the multiclass case; 2) multivariate linear discriminant analysis (LDA) in the multiclass case; and 3) Bhattacharyya distance measure for the two-class case. The LDA and Bhattacharyya distance features are extracted by concatenating the Q^(-1/2) network with a principal component analysis network, and the two-layer network is proven to converge with probability one. Every network discussed in the study considers a flow or sequence of inputs for training. Numerical studies on the performance of the networks for multiclass random data are presented.
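One way to picture an adaptive Q^(-1/2) computation is a stochastic iteration whose symmetric fixed point W satisfies W Q W = I, i.e. W = Q^(-1/2). The sketch below uses that generic update with an arbitrary gain schedule and a synthetic covariance; it may differ in detail from the algorithm analyzed in the paper.

```python
# Generic adaptive iteration converging (in this sketch) to Q^(-1/2).
import numpy as np

rng = np.random.default_rng(5)
d, n = 4, 60000
V = np.linalg.qr(rng.normal(size=(d, d)))[0]            # random orthogonal basis
Q = V @ np.diag([2.0, 1.2, 0.8, 0.5]) @ V.T             # known SPD covariance
samples = rng.multivariate_normal(np.zeros(d), Q, size=n)

W = np.eye(d)
for t, x in enumerate(samples, start=1):
    eta = 1.0 / (200 + t)
    W = W + eta * (np.eye(d) - W @ np.outer(x, x) @ W)  # expected fixed point: W Q W = I
    W = (W + W.T) / 2                                   # keep the iterate symmetric

# If W ~ Q^(-1/2), then W Q W should be close to the identity matrix
print(np.round(W @ Q @ W, 2))
```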

7.
IEEE Trans Neural Netw; 8(6): 1518-30, 1997.
Article in English | MEDLINE | ID: mdl-18255752

ABSTRACT

We discuss a new approach to self-organization that leads to novel adaptive algorithms for generalized eigen-decomposition and its variants for a single-layer linear feedforward neural network. First, we derive two novel iterative algorithms for linear discriminant analysis (LDA) and generalized eigen-decomposition by utilizing a constrained least-mean-squared classification error cost function, and the framework of a two-layer linear heteroassociative network performing a one-of-m classification. By using the concept of deflation, we are able to find sequential versions of these algorithms that extract the LDA components and generalized eigenvectors in decreasing order of significance. Next, two new adaptive algorithms are described to compute the principal generalized eigenvectors of two matrices (as well as LDA) from two sequences of random matrices. We give a rigorous convergence analysis of our adaptive algorithms by using stochastic approximation theory, and prove that our algorithms converge with probability one.
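For reference, the batch problem that such adaptive LDA algorithms solve online is the generalized eigenproblem S_b w = lambda * S_w w, with S_b and S_w the between- and within-class scatter matrices, and components ordered by decreasing generalized eigenvalue. The sketch below states that batch problem on synthetic three-class data; it is context for the abstract, not the paper's adaptive algorithm.

```python
# Batch generalized eigen-decomposition for LDA (the target of the adaptive rules).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(6)
d, n_per_class = 4, 500
class_means = [np.zeros(d),
               np.array([2.0, 1.0, 0.0, 0.0]),
               np.array([0.0, -1.5, 1.0, 0.0])]
X = np.vstack([rng.normal(size=(n_per_class, d)) + m for m in class_means])
y = np.repeat(np.arange(len(class_means)), n_per_class)

grand_mean = X.mean(axis=0)
S_w = np.zeros((d, d))
S_b = np.zeros((d, d))
for c in np.unique(y):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    S_w += (Xc - mc).T @ (Xc - mc)                               # within-class scatter
    S_b += len(Xc) * np.outer(mc - grand_mean, mc - grand_mean)  # between-class scatter

evals, evecs = eigh(S_b, S_w)                 # solves S_b w = lambda * S_w w
order = np.argsort(evals)[::-1]               # decreasing discriminative significance
print("leading generalized eigenvalues:", np.round(evals[order][:2], 3))
```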

8.
IEEE Trans Neural Netw; 6(2): 318-31, 1995.
Article in English | MEDLINE | ID: mdl-18263315

ABSTRACT

Learning and convergence properties of linear threshold elements or perceptrons are well understood for the case where the input vectors (or the training sets) to the perceptron are linearly separable. Little is known, however, about the behavior of the perceptron learning algorithm when the training sets are linearly nonseparable. We present the first known results on the structure of linearly nonseparable training sets and on the behavior of perceptrons when the set of input vectors is linearly nonseparable. More precisely, we show that using the well-known perceptron learning algorithm, a linear threshold element can learn the input vectors that are provably learnable, and identify those vectors that cannot be learned without committing errors. We also show how a linear threshold element can be used to learn large linearly separable subsets of any given nonseparable training set. In order to develop our results, we first establish formal characterizations of linearly nonseparable training sets and define learnable structures for such patterns. We also prove computational complexity results for the related learning problems. Next, based on such characterizations, we show that a perceptron does the best one can expect for linearly nonseparable sets of input vectors and learns as much as is theoretically possible.
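A standard way to see perceptron-style learning on a nonseparable set is the pocket variant, which keeps the weight vector that has classified the largest subset correctly. The sketch below uses that variant on noisy synthetic data purely as an illustration of "learning as much as possible"; it does not reproduce the paper's characterizations or proofs.

```python
# Pocket perceptron on a linearly nonseparable (label-noisy) training set.
import numpy as np

rng = np.random.default_rng(7)
n, d = 200, 2
X = np.hstack([rng.normal(size=(n, d)), np.ones((n, 1))])   # append a bias column
y = np.sign(X @ rng.normal(size=d + 1))
y[rng.random(n) < 0.1] *= -1                                 # 10% label noise -> nonseparable

w = np.zeros(d + 1)
best_w, best_correct = w.copy(), 0
for epoch in range(100):
    for i in rng.permutation(n):
        if np.sign(X[i] @ w) != y[i]:
            w = w + y[i] * X[i]                              # classic perceptron update
        correct = int(np.sum(np.sign(X @ w) == y))
        if correct > best_correct:                           # pocket step: keep the best w so far
            best_correct, best_w = correct, w.copy()

print(f"best stored weights classify {best_correct}/{n} training points correctly")
```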
