Results 1 - 20 of 26
1.
PLoS One ; 15(8): e0237654, 2020.
Article in English | MEDLINE | ID: mdl-32797071

ABSTRACT

The present paper proposes a novel kernel adaptive filtering algorithm, where each Gaussian kernel is parameterized by a center vector and a symmetric positive definite (SPD) precision matrix, regarded as a generalization of the scalar width parameter. In fact, unlike conventional kernel adaptive systems, the proposed filter is structured as a superposition of non-isotropic Gaussian kernels, whose non-isotropy makes the filter more flexible, and the adaptation algorithm searches for optimal parameters in a wider parameter space. This generalization brings the need for special treatment of parameters that have a geometric structure. In fact, the main contribution of this paper is to establish update rules for precision matrices on the Lie group of SPD matrices in order to ensure their symmetry and positive definiteness. The parameters of this filter are adapted on the basis of a least-squares criterion to minimize the filtering error, together with an ℓ1-type regularization criterion to avoid overfitting and to prevent the growth of the dictionary's dimensionality. Experimental results confirm the validity of the proposed method.


Subjects
Machine Learning , Normal Distribution , Algorithms , Anisotropy , Nonlinear Dynamics
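As a hedged illustration of the geometric constraint described in this abstract, the sketch below parameterizes a precision matrix as the matrix exponential of a symmetric matrix, so that arbitrary (symmetrized) gradient steps never leave the SPD set. The gradient here is a random placeholder, and this parameterization is an assumption for illustration, not the paper's actual Lie-group update rule.

```python
import numpy as np

def sym_expm(S):
    # Matrix exponential of a symmetric matrix via eigendecomposition.
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def kernel(x, c, P):
    # Non-isotropic Gaussian kernel with center c and SPD precision matrix P.
    d = x - c
    return float(np.exp(-0.5 * d @ P @ d))

# Parameterize P = expm(S) with S symmetric: any symmetrized gradient
# step in S keeps P symmetric positive definite by construction.
rng = np.random.default_rng(0)
S = np.zeros((2, 2))
x, c = rng.normal(size=2), np.zeros(2)
for _ in range(10):
    G = rng.normal(size=(2, 2))       # placeholder for a filtering-error gradient
    S -= 0.05 * 0.5 * (G + G.T)       # symmetrized step keeps S symmetric
P = sym_expm(S)
k = kernel(x, c, P)
```

Any update that respects symmetry and positive definiteness would serve the same purpose; the exponential parameterization is simply the easiest to verify.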
2.
Neural Comput ; 29(6): 1631-1666, 2017 06.
Article in English | MEDLINE | ID: mdl-28410052

ABSTRACT

The estimation of covariance matrices is of prime importance to analyze the distribution of multivariate signals. In motor imagery-based brain-computer interfaces (MI-BCI), covariance matrices play a central role in the extraction of features from recorded electroencephalograms (EEGs); therefore, correctly estimating covariance is crucial for EEG classification. This letter discusses algorithms to average sample covariance matrices (SCMs) for the selection of the reference matrix in tangent space mapping (TSM)-based MI-BCI. Tangent space mapping is a powerful method of feature extraction and strongly depends on the selection of a reference covariance matrix. In general, the observed signals may include outliers; therefore, taking the geometric mean of SCMs as the reference matrix may not be the best choice. In order to deal with the effects of outliers, robust estimators have to be used. In particular, we discuss and test the use of geometric medians and trimmed averages (defined on the basis of several metrics) as robust estimators. The main idea behind trimmed averages is to eliminate the data that exhibit the largest distance from the average covariance calculated on the basis of all available data. The results of the experiments show that while the geometric medians show little difference from conventional methods in terms of accuracy in the classification of electroencephalographic recordings, the trimmed averages show significant improvement for all subjects.


Subjects
Brain Mapping , Brain-Computer Interfaces , Brain/physiology , Electroencephalography , Imagination/physiology , Motor Activity/physiology , Algorithms , Humans , Machine Learning , Signal Processing, Computer-Assisted
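A minimal sketch of the trimmed-average idea, assuming the plain Frobenius distance (the letter defines trimmed averages with respect to several metrics) and a synthetic outlier:

```python
import numpy as np

def trimmed_mean_scm(covs, trim_frac=0.2):
    # Trimmed average of sample covariance matrices: discard the matrices
    # farthest (Frobenius distance here) from the plain average, then
    # re-average the remaining ones.
    covs = np.asarray(covs)
    mean = covs.mean(axis=0)
    dists = np.linalg.norm(covs - mean, axis=(1, 2))
    keep = np.argsort(dists)[: max(1, int(len(covs) * (1 - trim_frac)))]
    return covs[keep].mean(axis=0)

# Toy check: SCMs near the identity plus one gross outlier.
rng = np.random.default_rng(1)
base = [np.eye(3) + 0.01 * rng.normal(size=(3, 3)) for _ in range(9)]
base = [0.5 * (B + B.T) for B in base]        # symmetrize the perturbations
outlier = 100.0 * np.eye(3)
ref = trimmed_mean_scm(base + [outlier], trim_frac=0.2)
```

The trimmed reference stays close to the identity, while the untrimmed average would be dominated by the outlier.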
3.
Food Chem ; 159: 323-7, 2014 Sep 15.
Article in English | MEDLINE | ID: mdl-24767062

ABSTRACT

Tannins play a central role in grapevine berries, both for their physiological and their enological implications. In the skin tissue they can be found in the vacuolar solution or associated with the cell walls through weak or strong physicochemical interactions. The present work aims to separate the vacuolar, non-covalently bonded, and covalently bonded tannin fractions. A specific extraction procedure was developed. A first extraction in ethanol at low temperature allowed the quantification of vacuolar tannins. A urea treatment followed by an ethanol extraction at room temperature separated the non-covalently bonded compounds. Finally, acid catalysis was used to break down proanthocyanidin covalent bonds. The method was validated on ripe grape samples of three cultivars, with berries developed under two sun-exposure conditions. The effect of Ethephon treatment was also evaluated. Besides the method development, a preliminary evaluation of the cultivar, exposure, and Ethephon treatment effects is discussed.


Subjects
Chemistry Techniques, Analytical/methods , Plant Extracts/chemistry , Proanthocyanidins/chemistry , Vitis/chemistry , Cell Wall/chemistry , Fruit/chemistry
4.
Phytochem Anal ; 24(5): 453-9, 2013.
Article in English | MEDLINE | ID: mdl-23613452

ABSTRACT

INTRODUCTION: The colour of fruit is an important quality factor for cultivar classification and phenotyping techniques. Besides subjective visual evaluation, new instruments and techniques can be used. OBJECTIVES: This work aims at developing an objective, fast, easy and non-destructive method as a useful support for evaluating grape colour under different cultural and environmental conditions, as well as for breeding and germplasm evaluation, supporting plant characterization and biodiversity preservation. MATERIALS AND METHODS: The colours of 120 grape varieties were studied using reflectance spectra. Classification was realized using cluster and discriminant analysis. Reflectance of the whole berry surface was also compared with the absorption properties of single skin extracts. RESULTS: A phenotyping method based on reflectance spectra was developed, producing reliable colour classifications. A cultivar-independent index for pigment content evaluation was also obtained. CONCLUSIONS: This work allowed the classification of berry colour using an objective method.


Subjects
Color , Spectrum Analysis/methods , Vitis , Reproducibility of Results
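The classification pipeline can be sketched with invented data: synthetic "reflectance spectra" of two colour classes and a nearest-centroid rule standing in for the cluster/discriminant analysis actually used. All class shapes and noise levels below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
wavelengths = np.linspace(400, 700, 31)          # nm grid (synthetic)
# Two invented colour classes: flat high reflectance ("white") versus
# low reflectance below ~600 nm ("red").
white = 0.7 + 0.02 * rng.normal(size=(20, 31))
red = np.where(wavelengths < 600, 0.15, 0.6) + 0.02 * rng.normal(size=(20, 31))
X = np.vstack([white, red])
labels = np.array([0] * 20 + [1] * 20)

# Nearest-centroid rule, a bare-bones stand-in for discriminant analysis.
centroids = np.stack([X[labels == k].mean(axis=0) for k in (0, 1)])
pred = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = float((pred == labels).mean())
```

With well-separated spectral shapes, even this crude rule classifies the synthetic spectra perfectly.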
5.
IEEE Trans Neural Netw Learn Syst ; 23(1): 7-21, 2012 Jan.
Article in English | MEDLINE | ID: mdl-24808452

ABSTRACT

This paper is the second part of a study initiated with the paper S. Fiori, "Extended Hamiltonian learning on Riemannian manifolds: Theoretical aspects," IEEE Trans. Neural Netw., vol. 22, no. 5, pp. 687-700, May 2011, which aimed at introducing a general framework to develop a theory of learning on differentiable manifolds by extended Hamiltonian stationary-action principle. This paper discusses the numerical implementation of the extended Hamiltonian learning paradigm by making use of notions from geometric numerical integration to numerically solve differential equations on manifolds. The general-purpose integration schemes and the discussion of several cases of interest show that the implementation of the dynamical learning equations exhibits a rich structure. The behavior of the discussed learning paradigm is illustrated via several numerical examples and discussions of case studies. The numerical examples confirm the theoretical developments presented in this paper as well as in its first part.
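The value of geometric integration that this abstract relies on can be shown with a toy flow: integrating dx/dt = Ax with skew-symmetric A by plain Euler stepping drifts off the unit circle, while stepping with the exact exponential map stays on it. This is a generic illustration, not a scheme from the paper.

```python
import numpy as np

# dx/dt = A x with A skew-symmetric generates a rotation: the exact
# solution stays on the unit circle, but plain Euler does not.
A = np.array([[0.0, -1.0], [1.0, 0.0]])          # rotation generator
x_euler = np.array([1.0, 0.0])
x_geo = np.array([1.0, 0.0])
h = 0.1
for _ in range(100):
    x_euler = x_euler + h * A @ x_euler          # plain Euler step (drifts)
    c, s = np.cos(h), np.sin(h)
    x_geo = np.array([[c, -s], [s, c]]) @ x_geo  # exact flow expm(h A)
drift_euler = abs(np.linalg.norm(x_euler) - 1.0)
drift_geo = abs(np.linalg.norm(x_geo) - 1.0)
```

Each Euler step multiplies the norm by sqrt(1 + h^2), so the drift accumulates exponentially, while the exponential-map step preserves the constraint to machine precision.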

6.
IEEE Trans Neural Netw ; 22(12): 2132-8, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21984497

ABSTRACT

This brief tackles the problem of learning over the complex-valued matrix-hypersphere S(α)(n,p)(C). The developed learning theory is formulated in terms of Riemannian-gradient-based optimization of a regular criterion function and is implemented by a geodesic-stepping method. The stepping method is equipped with a geodesic-search sub-algorithm to compute the optimal learning stepsize at any step. Numerical results show the effectiveness of the developed learning method and of its implementation.


Subjects
Algorithms , Artificial Intelligence , Models, Theoretical , Computer Simulation
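A hedged sketch of geodesic stepping with a crude stepsize search, on the real unit sphere rather than the complex matrix hypersphere treated in the brief. The cost function and candidate stepsizes are assumptions.

```python
import numpy as np

def geodesic_step(x, v, t):
    # Geodesic on the unit sphere from x in tangent direction v, arclength t.
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(t * nv) * x + np.sin(t * nv) * v / nv

# Minimize f(x) = -a.x over the sphere; the Riemannian gradient is the
# Euclidean gradient projected onto the tangent space at x.
a = np.array([0.0, 0.0, 1.0])
x = np.array([1.0, 0.0, 0.0])
for _ in range(50):
    g = -a                           # Euclidean gradient of f
    rg = g - (g @ x) * x             # tangent-space projection
    # crude geodesic search: try a few stepsizes, keep the best candidate
    cands = [geodesic_step(x, -rg, t) for t in (0.05, 0.1, 0.2, 0.4)]
    x = min(cands, key=lambda y: -(a @ y))
x_final = x
```

The iterate remains exactly on the sphere at every step because motion happens along geodesics, never through the ambient space.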
7.
IEEE Trans Neural Netw ; 22(5): 687-700, 2011 May.
Article in English | MEDLINE | ID: mdl-21427023

ABSTRACT

This paper introduces a general theory of extended Hamiltonian (second-order) learning on Riemannian manifolds, as an instance of learning by constrained criterion optimization. The dynamical learning equations are derived within the general framework of extended-Hamiltonian stationary-action principle and are expressed in a coordinate-free fashion. A theoretical analysis is carried out in order to compare the features of the dynamical learning theory with the features exhibited by the gradient-based ones. In particular, gradient-based learning is shown to be an instance of dynamical learning, and the classical gradient-based learning modified by a "momentum" term is shown to resemble discrete-time dynamical learning. Moreover, the convergence features of gradient-based and dynamical learning are compared on a theoretical basis. This paper discusses cases of learning by dynamical systems on manifolds of interest in the scientific literature, namely, the Stiefel manifold, the special orthogonal group, the Grassmann manifold, the group of symmetric positive definite matrices, the generalized flag manifold, and the real symplectic group of matrices.


Subjects
Algorithms , Artificial Intelligence , Models, Theoretical , Neural Networks, Computer , Computer Simulation , Mathematical Concepts
8.
IEEE Trans Neural Netw ; 21(5): 841-52, 2010 May.
Article in English | MEDLINE | ID: mdl-20236880

ABSTRACT

This paper deals with learning by natural-gradient optimization on noncompact manifolds. In a Riemannian manifold, the calculation of entities such as the closed form of geodesic curves over noncompact manifolds might be infeasible. For this reason, it is interesting to study the problem of learning by optimization over noncompact manifolds endowed with pseudo-Riemannian metrics, which may give rise to tractable calculations. A general theory for natural-gradient-based learning on noncompact manifolds as well as specific cases of interest of learning are discussed.


Subjects
Algorithms , Artificial Intelligence , Learning , Neural Networks, Computer , Humans
9.
Neural Netw ; 21(10): 1524-9, 2008 Dec.
Article in English | MEDLINE | ID: mdl-18980831

ABSTRACT

The present manuscript treats the problem of adapting a neural signal processing system whose parameters belong to a curved manifold, which is assumed to possess the structure of a Lie group. Neural system parameter adaptation is effected by optimizing a system performance criterion. Riemannian-gradient-based optimization is suggested, which cannot be performed by standard additive stepping because of the curved nature of the parameter space. Retraction-based stepping is discussed instead, along with a companion stepsize-schedule selection procedure. A case study of learning by optimization of a non-quadratic criterion is discussed in detail.


Subjects
Artificial Intelligence , Neural Networks, Computer , Algorithms , Software
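Retraction-based stepping can be sketched on the special orthogonal group, with a QR decomposition serving as the retraction. The criterion, target and stepsize below are invented for illustration; the manuscript's stepsize-schedule procedure is not reproduced.

```python
import numpy as np

def qr_retract(W, xi):
    # QR-based retraction: map W + xi back onto the orthogonal group.
    Q, R = np.linalg.qr(W + xi)
    return Q * np.sign(np.diag(R))    # fix column signs for uniqueness

def riemannian_grad(W, G):
    # Project a Euclidean gradient G onto the tangent space at W.
    A = W.T @ G
    return W @ (A - A.T) / 2

def rot2(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

# Toy criterion: f(W) = ||W - T||_F^2 over special orthogonal W,
# with a block-diagonal rotation target T.
T = np.zeros((4, 4))
T[:2, :2] = rot2(0.9)
T[2:, 2:] = rot2(-0.4)
W = np.eye(4)
for _ in range(200):
    G = 2.0 * (W - T)                              # Euclidean gradient of f
    W = qr_retract(W, -0.1 * riemannian_grad(W, G))
err = float(np.linalg.norm(W - T))
```

Additive stepping alone would leave the group after one step; the retraction restores feasibility at every iteration.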
10.
Comput Intell Neurosci ; : 426080, 2008.
Article in English | MEDLINE | ID: mdl-18483612

ABSTRACT

In a previous work (S. Fiori, 2006), we proposed a random number generator based on a tunable non-linear neural system, whose learning rule is designed on the basis of a cardinal equation from statistics and whose implementation is based on look-up tables (LUTs). The aim of the present manuscript is to improve the above-mentioned random number generation method by changing the learning principle, while retaining the efficient LUT-based implementation. The new method proposed here proves easier to implement and relaxes some previous limitations.
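The LUT idea can be illustrated independently of the paper's learning rule: tabulate an approximate inverse CDF of a target density on a grid, then push uniform samples through the table. The triangular target density is an assumption.

```python
import numpy as np

# Build a lookup table approximating the inverse CDF of a target density
# (p(x) = 2x on [0, 1] here), then map uniform samples through it.
xs = np.linspace(0.0, 1.0, 1001)
pdf = 2.0 * xs                          # target density p(x) = 2x
cdf = np.cumsum(pdf)
cdf /= cdf[-1]                          # normalized discrete CDF
rng = np.random.default_rng(3)
u = rng.random(100_000)
samples = np.interp(u, cdf, xs)         # LUT-based inverse-CDF mapping
mean = float(samples.mean())
```

The sample mean approaches the target mean of 2/3, confirming that the tabulated nonlinearity reshapes the uniform input as intended.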

11.
Int J Neural Syst ; 18(2): 87-103, 2008 Apr.
Article in English | MEDLINE | ID: mdl-18452244

ABSTRACT

The present manuscript aims at illustrating fundamental challenges and solutions arising in the design of learning theories by optimization on manifolds in the context of complex-valued neural systems. The special case of a unitary unimodular group of matrices is dealt with. The unitary unimodular group under analysis is a low-dimensional and easy-to-handle matrix group. Nevertheless, it exhibits a rich geometrical structure and gives rise to interesting speculations about methods to solve optimization problems on manifolds. Also, its low dimension allows us to treat most of the quantities involved in computation in closed form, as well as to render them in graphical format. Some numerical experiments dealing with complex-valued independent component analysis are presented and discussed within the paper.


Subjects
Learning , Neural Networks, Computer , Pattern Recognition, Automated , Algorithms , Computer Simulation , Humans , Numerical Analysis, Computer-Assisted , Principal Component Analysis
12.
Neural Comput ; 20(4): 1091-117, 2008 Apr.
Article in English | MEDLINE | ID: mdl-18085982

ABSTRACT

Learning on differential manifolds may involve the optimization of a function of many parameters. In this letter, we deal with Riemannian-gradient-based optimization on a Lie group, namely, the group of unitary unimodular matrices SU(3). In this special case, subalgebras of the associated Lie algebra su(3) may be identified by computing pairwise commutators of Gell-Mann matrices. Subalgebras generate subgroups of a Lie group, as well as a foliation of the manifold. We show that the Riemannian gradient may be projected onto tangent structures of the foliation, giving rise to foliation gradients. Exponentiations of foliation gradients may be computed in closed forms, which closely resemble the Rodrigues formula for the special orthogonal group SO(3). We thus compare optimization by the Riemannian gradient and by foliation gradients.


Subjects
Algorithms , Artificial Intelligence , Computer Simulation , Neural Networks, Computer , Brain/physiology , Learning/physiology , Nerve Net/physiology , Nonlinear Dynamics
13.
Comput Intell Neurosci ; : 71859, 2007.
Article in English | MEDLINE | ID: mdl-18566641

ABSTRACT

Bivariate statistical modeling from incomplete data is a useful statistical tool that makes it possible to discover the model underlying two data sets when the data in the two sets correspond neither in size nor in ordering. Such a situation may occur when the sizes of the two data sets do not match (i.e., there are "holes" in the data) or when the data sets have been acquired independently. Also, statistical modeling is useful when the amount of available data is enough to show relevant statistical features of the phenomenon underlying the data. We propose to tackle the problem of statistical modeling via a neural (nonlinear) system that is able to match its input-output statistic to the statistic of the available data sets. A key point of the new implementation proposed here is that it is based on look-up-table (LUT) neural systems, which guarantee a computationally advantageous way of implementing neural systems. A number of numerical experiments, performed on both synthetic and real-world data sets, illustrate the features of the proposed modeling procedure.
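One way to sketch LUT-based statistic matching between two unpaired, differently sized data sets is a monotone quantile-matching table y = F_Y^{-1}(F_X(x)); this is an illustrative stand-in, not the manuscript's neural learning rule. The two distributions below are invented.

```python
import numpy as np

# Build a monotone lookup table mapping the empirical distribution of one
# sample onto that of another, unpaired and differently sized, sample.
rng = np.random.default_rng(8)
x_data = rng.normal(0.0, 1.0, size=5000)          # first data set
y_data = rng.exponential(2.0, size=3000)          # second data set (different size)

qs = np.linspace(0.0, 1.0, 257)
lut_in = np.quantile(x_data, qs)                  # F_X^{-1} on a quantile grid
lut_out = np.quantile(y_data, qs)                 # F_Y^{-1} on the same grid
mapped = np.interp(x_data, lut_in, lut_out)       # LUT nonlinearity

mean_err = abs(float(mapped.mean()) - float(y_data.mean()))
```

The mapped data inherit the target's marginal statistics even though the two samples were never paired.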

14.
IEEE Trans Neural Netw ; 16(6): 1697-701, 2005 Nov.
Article in English | MEDLINE | ID: mdl-16342509

ABSTRACT

This letter aims at illustrating the relevance of numerical integration of learning differential equations on differential manifolds. In particular, the task of learning with orthonormality constraints is dealt with, which is naturally formulated as an optimization task with the compact Stiefel manifold as neural parameter space. Intrinsic properties of the derived learning algorithms, such as stability and constraints preservation, are illustrated through experiments on minor and independent component analysis (ICA).


Subjects
Algorithms , Artificial Intelligence , Computer Simulation , Models, Theoretical , Pattern Recognition, Automated/methods , Numerical Analysis, Computer-Assisted
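Learning with orthonormality constraints can be sketched as minor component analysis on the compact Stiefel manifold, with a QR retraction preserving W^T W = I at every step. The matrix spectrum and stepsize are assumptions; the letter's actual integration schemes differ.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 6, 2
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
C = Q @ np.diag(np.arange(1.0, 7.0)) @ Q.T       # known spectrum {1, ..., 6}

def stiefel_grad(W, G):
    # Project a Euclidean gradient onto the tangent space of St(n, p) at W.
    return G - W @ (W.T @ G + G.T @ W) / 2

# Minor component analysis: minimize tr(W^T C W) subject to W^T W = I.
W, _ = np.linalg.qr(rng.normal(size=(n, p)))
for _ in range(2000):
    G = 2.0 * C @ W                               # gradient of tr(W^T C W)
    W, _ = np.linalg.qr(W - 0.01 * stiefel_grad(W, G))  # QR retraction

cost = float(np.trace(W.T @ C @ W))
ortho_err = float(np.linalg.norm(W.T @ W - np.eye(p)))
```

At convergence, W spans the minor eigenspace, so the cost approaches the sum of the two smallest eigenvalues (1 + 2 = 3 here), and the columns remain orthonormal by construction.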
15.
Neural Comput ; 17(4): 779-838, 2005 Apr.
Article in English | MEDLINE | ID: mdl-15829090

ABSTRACT

The Hebbian paradigm is perhaps the best-known unsupervised learning theory in connectionism. It has inspired wide research activity in the artificial neural network field because it embodies some interesting properties such as locality and the capability of being applicable to the basic weight-and-sum structure of neuron models. The plain Hebbian principle, however, also presents some inherent theoretical limitations that make it impractical in most cases. Therefore, modifications of the basic Hebbian learning paradigm have been proposed over the past 20 years in order to design profitable signal and data processing algorithms. Such modifications led to the principal component analysis type class of learning rules along with their nonlinear extensions. The aim of this review is primarily to present part of the existing fragmented material in the field of principal component learning within a unified view and contextually to motivate and present extensions of previous works on Hebbian learning to complex-weighted linear neural networks. This work benefits from previous studies on linear signal decomposition by artificial neural networks, nonquadratic component optimization and reconstruction error definition, neural parameters adaptation by constrained optimization of learning criteria of complex-valued arguments, and orthonormality expression via the insertion of topological elements in the networks or by modifying the network learning criterion. In particular, the learning principles considered here and their analysis concern complex-valued principal/minor component/subspace linear/nonlinear rules for complex-weighted neural structures, both feedforward and laterally connected.


Subjects
Algorithms , Artificial Intelligence , Neural Networks, Computer , Nonlinear Dynamics , Mathematics , Signal Processing, Computer-Assisted
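The classic real-valued instance of the principal component learning rules surveyed here is Oja's rule (a Hebbian update plus a normalizing decay); the review's complex-weighted and nonlinear extensions are not shown. The toy covariance below has principal direction (1, 1)/sqrt(2).

```python
import numpy as np

rng = np.random.default_rng(5)
cov = np.array([[2.0, 1.5], [1.5, 2.0]])   # principal eigenvector (1, 1)/sqrt(2)
Lc = np.linalg.cholesky(cov)
w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.01
for _ in range(5000):
    x = Lc @ rng.normal(size=2)            # sample with covariance `cov`
    y = w @ x                              # neuron output (weight-and-sum)
    w += eta * y * (x - y * w)             # Hebbian term minus Oja's decay
alignment = float(abs(w @ np.ones(2)) / (np.sqrt(2.0) * np.linalg.norm(w)))
```

The decay term -eta * y**2 * w is exactly the modification of the plain Hebbian principle that keeps the weight norm bounded near one, curing the divergence mentioned in the abstract.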
16.
Int J Neural Syst ; 14(5): 293-311, 2004 Oct.
Article in English | MEDLINE | ID: mdl-15593378

ABSTRACT

The aim of this manuscript is to present a detailed analysis of the algebraic and geometric properties of relative uncertainty theory (RUT) applied to neural network learning. Through the algebraic analysis of the original learning criterion, it is shown that RUT gives rise to principal-subspace-analysis-type learning equations. Through an algebraic-geometric analysis, the behavior of such matrix-type learning equations is illustrated, with particular emphasis on the existence of certain invariant manifolds.


Subjects
Learning/physiology , Models, Psychological , Nerve Net/physiology , Uncertainty , Humans , Neural Networks, Computer
17.
IEEE Trans Neural Netw ; 15(2): 455-9, 2004 Mar.
Article in English | MEDLINE | ID: mdl-15384537

ABSTRACT

The aim of this letter is to introduce a new blind-deconvolution algorithm based on fixed-point optimization of a "Bussgang"-type cost function. The cost function relies on approximate Bayesian estimation achieved by an adaptive neuron. The main feature of the presented algorithm is fast convergence, which guarantees good deconvolution performance with limited computational demand compared with algorithms of the same class.


Subjects
Algorithms , Neural Networks, Computer
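A hedged sketch of a Bussgang-type scheme, using a stochastic-gradient update and the sign function as a crude stand-in for the adaptive-neuron Bayesian estimator of the letter; the channel, source and stepsize are invented. The equalizer output is driven toward its memoryless estimate.

```python
import numpy as np

rng = np.random.default_rng(6)
s = rng.choice([-1.0, 1.0], size=20_000)         # binary source
h = np.array([1.0, 0.4, 0.2])                    # assumed (unknown) channel
x = np.convolve(s, h)[: len(s)]                  # observed convolved signal

L_eq = 9                                         # equalizer length
w = np.zeros(L_eq)
w[0] = 1.0                                       # center-spike initialization
mu = 0.002
for n in range(L_eq, len(x)):
    u = x[n - L_eq + 1 : n + 1][::-1]            # regressor, most recent first
    y = w @ u                                    # equalizer output
    e = np.sign(y) - y                           # Bussgang-type error signal
    w += mu * e * u
y_all = np.convolve(x, w)[: len(x)]              # final equalizer on all data
open_eye = float(np.mean(np.abs(np.abs(y_all[1000:]) - 1.0)))
```

After adaptation, the equalized outputs cluster near the source alphabet {-1, +1}, much tighter than the raw channel output.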
18.
Int J Neural Syst ; 13(5): 273-90, 2003 Oct.
Article in English | MEDLINE | ID: mdl-14652870

ABSTRACT

In previous contributions we presented a new class of algorithms for orthonormal learning of a linear neural network with p inputs and m outputs, based on the equations describing the dynamics of a massive rigid frame in a submanifold of R(p). While exhibiting interesting features, such as intrinsic numerical stability, strong binding to the orthonormal submanifolds, and good controllability of the learning dynamics, tested on principal/independent component analysis, the proposed algorithms were not completely satisfactory from a computational-complexity point of view. The main drawback was the need to repeatedly evaluate a matrix exponential map. With the aim of lessening the computational effort pertaining to these algorithms, we propose here an improvement based on the closed-form Rodrigues formula for the exponential map. Such a formula is available in the p = 3 and m = 3 case, which is discussed in detail here. In particular, experimental results on independent component analysis (ICA), carried out with both synthetic and real-world data, help confirm the computational gain due to the proposed improvement.


Subjects
Algorithms , Information Theory , Learning , Models, Theoretical , Neural Networks, Computer , Signal Processing, Computer-Assisted , Artificial Intelligence , Computer Simulation , Humans , Linear Models , Nonlinear Dynamics , Spacecraft
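The closed-form exponential map mentioned here, commonly written for SO(3) as the Rodrigues formula, is short enough to state in full:

```python
import numpy as np

def rodrigues(omega):
    # Closed-form (Rodrigues) exponential map so(3) -> SO(3) for the
    # rotation vector omega, avoiding a general matrix-exponential routine.
    theta = np.linalg.norm(omega)
    K = np.array([[0.0, -omega[2], omega[1]],
                  [omega[2], 0.0, -omega[0]],
                  [-omega[1], omega[0], 0.0]])
    if theta < 1e-12:
        return np.eye(3) + K                      # first-order fallback
    return np.eye(3) + (np.sin(theta) / theta) * K \
        + ((1.0 - np.cos(theta)) / theta ** 2) * (K @ K)

R = rodrigues(np.array([0.0, 0.0, np.pi / 2]))    # quarter turn about z
```

Because only sines, cosines and one matrix product are involved, each exponential evaluation is far cheaper than a general-purpose matrix exponential, which is the computational gain the abstract refers to.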
19.
Neural Comput ; 15(12): 2909-29, 2003 Dec.
Article in English | MEDLINE | ID: mdl-14629873

ABSTRACT

In recent work, we introduced nonlinear adaptive activation function (FAN) artificial neuron models, which learn their activation functions in an unsupervised way by information-theoretic adaptation rules. We also applied networks of these neurons to some blind signal processing problems, such as independent component analysis and blind deconvolution. The aim of this letter is to study some fundamental aspects of FAN units' learning by investigating the properties of the associated learning differential equation systems.


Subjects
Nerve Net/physiology , Neural Networks, Computer , Neurons/physiology , Nonlinear Dynamics , Stochastic Processes , Action Potentials/physiology , Adaptation, Physiological/physiology , Algorithms , Animals , Artificial Intelligence , Brain/physiology , Entropy , Humans
20.
Neural Netw ; 16(8): 1201-21, 2003 Oct.
Article in English | MEDLINE | ID: mdl-13678623

ABSTRACT

The aim of the present paper is to apply the Sudjanto-Hassoun theory of Hebbian learning to neural independent component analysis. The basic learning theory is first recalled and expanded in order to make it suitable for a network of non-linear complex-weighted neurons; its interpretation and application are then shown in the context of blind separation of complex-valued sources. Numerical results are given in order to assess the effectiveness of the proposed learning theory and the related separation algorithm on telecommunication signals; a comparison with other existing techniques finally helps assess the performance and computational requirements of the proposed algorithm.


Subjects
Artificial Intelligence , Models, Theoretical , Neural Networks, Computer , Algorithms , Artifacts , Telecommunications