Results 1 - 4 of 4

1.
IEEE Trans Neural Netw Learn Syst ; 32(3): 1289-1303, 2021 Mar.
Article in English | MEDLINE | ID: mdl-32452772

ABSTRACT

Traditional energy-based learning models associate a single energy metric to each configuration of variables involved in the underlying optimization process. Such models associate the lowest energy state with the optimal configuration of variables under consideration and are thus inherently dissipative. In this article, we propose an energy-efficient learning framework that exploits structural and functional similarities between a machine-learning network and a general electrical network satisfying Tellegen's theorem. In contrast to the standard energy-based models, the proposed formulation associates two energy components, namely, active and reactive energy with the network. The formulation ensures that the network's active power is dissipated only during the process of learning, whereas the reactive power is maintained to be zero at all times. As a result, in steady state, the learned parameters are stored and self-sustained by electrical resonance determined by the network's nodal inductances and capacitances. Based on this approach, this article introduces three novel concepts: 1) a learning framework where the network's active-power dissipation is used as a regularization for a learning objective function that is subjected to zero total reactive-power constraint; 2) a dynamical system based on complex-domain, continuous-time growth transforms that optimizes the learning objective function and drives the network toward electrical resonance under steady-state operation; and 3) an annealing procedure that controls the tradeoff between active-power dissipation and the speed of convergence. As a representative example, we show how the proposed framework can be used for designing resonant support vector machines (SVMs), where the support vectors correspond to an LC network with self-sustained oscillations. We also show that this resonant network dissipates less active power compared with its non-resonant counterpart.
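The abstract's central claim is that, at resonance, a lossless LC network sustains its state indefinitely with zero active-power dissipation. A minimal sketch of that physical idea (not the paper's learning algorithm): an ideal LC tank whose oscillation amplitude A stands in for a stored, self-sustained parameter. The unit inductance and capacitance are assumed values for illustration.

```python
import math

# Hedged sketch: a lossless LC "node" whose steady-state oscillation stores a
# value (amplitude A) with zero active-power dissipation.  L, C, and A are
# assumed illustrative values, not quantities from the paper.
L, C = 1.0, 1.0                       # assumed unit inductance and capacitance
omega = 1.0 / math.sqrt(L * C)        # resonant frequency of the LC tank

def lc_state(A, t):
    """Capacitor voltage and inductor current of an ideal LC tank at time t."""
    v = A * math.cos(omega * t)                      # capacitor voltage
    i = A * math.sqrt(C / L) * math.sin(omega * t)   # inductor current
    return v, i

def total_energy(A, t):
    """Electric plus magnetic energy stored in the tank."""
    v, i = lc_state(A, t)
    return 0.5 * C * v ** 2 + 0.5 * L * i ** 2

# In steady state the total energy is constant: the stored value persists
# without any active power being dissipated.
A = 2.0
energies = [total_energy(A, 0.1 * k) for k in range(100)]
assert max(energies) - min(energies) < 1e-9
```

The conserved total (here 0.5 * C * A**2) is what lets the network hold its learned parameters "for free" once learning, and hence dissipation, has stopped.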

2.
IEEE Trans Neural Netw Learn Syst ; 29(12): 6052-6061, 2018 12.
Article in English | MEDLINE | ID: mdl-29993647

ABSTRACT

Conservation principles, such as conservation of charge, energy, or mass, provide a natural way to couple and constrain spatially separated variables. In this paper, we propose a dynamical system model that exploits these constraints for solving nonconvex and discrete global optimization problems. Unlike the traditional simulated annealing or quantum annealing-based global optimization techniques, the proposed method optimizes a target objective function by continuously evolving a driver functional over a conservation manifold, using a generalized variant of growth transformations. As a result, the driver functional asymptotically converges toward a Dirac-delta function that is centered at the global optimum of the target objective function. In this paper, we provide an outline of the proof of convergence for the dynamical system model and investigate different properties of the model using a benchmark nonlinear optimization problem. Also, we demonstrate how a discrete variant of the proposed dynamical system can be used for implementing decentralized optimization algorithms, where an ensemble of spatially separated entities (for example, biological cells or simple computational units) can collectively implement specific functions, such as winner-take-all and ranking, by exchanging signals only with its immediate substrate or environment. The proposed dynamical system model could potentially be used to implement continuous-time optimizers, annealers, and neural networks.
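The convergence claim above — a distribution over candidate solutions evolving toward a Dirac delta at the global optimum — can be illustrated with a discrete, replicator-style multiplicative update, which conserves total probability mass at every step. This is a simplified stand-in for the paper's generalized growth transforms; the toy nonconvex target f below is an assumed example, not the paper's benchmark.

```python
import numpy as np

# Hedged sketch: a mass-conserving multiplicative update over a grid of
# candidate solutions.  Each step multiplies the probability of a candidate
# by a positive "fitness" exp(-f) and renormalizes, so mass piles up,
# Dirac-delta-like, on the global minimizer of f.
x = np.linspace(-2.0, 2.0, 401)
f = np.cos(3 * x) + 0.5 * x ** 2 + 0.25 * x   # assumed nonconvex target to minimize
g = np.exp(-f)                                 # positive fitness: larger where f is smaller

p = np.full_like(x, 1.0 / x.size)              # start from a uniform distribution
for _ in range(200):
    p = p * g
    p /= p.sum()                               # renormalize: total mass is conserved

# After many updates, p is proportional to exp(-200 * f), so its peak sits
# exactly on the grid point that minimizes f.
assert np.argmax(p) == np.argmin(f)
```

Because the update is a monotone function of f, the peak location is exact even before the distribution has fully collapsed; running more iterations only sharpens it toward a delta.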

3.
IEEE Trans Neural Netw Learn Syst ; 29(5): 1961-1974, 2018 05.
Article in English | MEDLINE | ID: mdl-28436898

ABSTRACT

Growth transformations constitute a class of fixed-point multiplicative update algorithms that were originally proposed for optimizing polynomial and rational functions over a domain of probability measures. In this paper, we extend this framework to the domain of bounded real variables, which can be applied to optimizing the dual cost function of a generic support vector machine (SVM). The approach can therefore not only be used to train traditional soft-margin binary SVMs, one-class SVMs, and probabilistic SVMs, but can also be used to design novel variants of SVMs with different types of convex and quasi-convex loss functions. In this paper, we propose an efficient training algorithm based on polynomial growth transforms, and compare and contrast the properties of different SVM variants using several synthetic and benchmark data sets. The preliminary experiments show that the proposed multiplicative update algorithm is more scalable and yields better convergence than standard quadratic and nonlinear programming solvers. While the formulation and the underlying algorithms have been validated in this paper only for SVM-based learning, the proposed approach is general and can be applied to a wide variety of optimization problems and statistical learning models.
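The classical primitive behind this abstract is the Baum-Eagon growth transform: a fixed-point multiplicative update that monotonically increases a polynomial with nonnegative coefficients over the probability simplex. A minimal sketch of that primitive (the simplex case the paper extends to bounded SVM variables; the random matrix A is an assumed example, not an SVM kernel):

```python
import numpy as np

# Hedged sketch: the Baum-Eagon growth transform for maximizing the
# quadratic polynomial P(alpha) = alpha^T A alpha (A with nonnegative
# entries) over the probability simplex.
rng = np.random.default_rng(0)
A = rng.random((5, 5))
A = 0.5 * (A + A.T)                      # symmetric, nonnegative entries

def growth_transform(alpha, A):
    """One update: alpha_i <- alpha_i * (A alpha)_i / (alpha^T A alpha)."""
    grad = A @ alpha                     # proportional to dP/d(alpha_i)
    return alpha * grad / (alpha @ grad) # renormalization keeps alpha on the simplex

alpha = np.full(5, 0.2)                  # start at the simplex barycenter
values = []
for _ in range(50):
    values.append(alpha @ A @ alpha)
    alpha = growth_transform(alpha, A)

# The Baum-Eagon inequality guarantees P is nondecreasing at every step.
assert all(b >= a - 1e-12 for a, b in zip(values, values[1:]))
assert abs(alpha.sum() - 1.0) < 1e-9
```

The update needs no step size and cannot leave the feasible set, which is the property the multiplicative SVM training algorithm inherits when the framework is extended from probability measures to bounded real variables.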

4.
Brain Res ; 930(1-2): 58-66, 2002 Mar 15.
Article in English | MEDLINE | ID: mdl-11879796

ABSTRACT

Parathyroid hormone-related protein (PTHrP) was discovered a dozen years ago as a product of malignant tumors. It is now known that PTHrP is a paracrine factor with multiple biological functions. One such function is to relax smooth muscle by inhibiting calcium influx into the cell. In the central nervous system, PTHrP and its receptor are widely expressed in neurons in the cerebral cortex, hippocampus and cerebellum. The function of PTHrP in the CNS is not known. Previous work has shown that expression of the PTHrP gene is depolarization-dependent in cultured cerebellar granule cells and depends specifically on L-type voltage sensitive calcium channel (L-VSCC) Ca(2+) influx. PTHrP has also been found to be capable of protecting these cells against kainic acid-induced excitotoxicity. Here, we tested the idea that mice with a PTHrP-null CNS might display hypersensitivity to kainic acid excitotoxicity. We found that these mice were six-fold more sensitive than control littermate mice to kainic-acid-induced seizures as well as hippocampal c-Fos expression. PTHrP-null embryonic mixed cerebral cortical cultures were more sensitive to kainic acid than control cultures, and PTHrP addition was found to be protective against kainate toxicity in both PTHrP-null and control cultures. By whole-cell techniques, PTHrP was found to reduce L-VSCC Ca(2+) influx in cultured mouse neuroblastoma cells. We conclude that PTHrP functions as a component of a neuroprotective feedback loop that is structured around the L-type calcium channel. This loop appears to be operative in vivo as well as in vitro.


Subjects
Neuroprotective Agents , Proteins/physiology , Animals , Brain Neoplasms/metabolism , Calcium Channels/metabolism , Cells, Cultured , Cerebral Cortex/cytology , Cerebral Cortex/drug effects , Cerebral Cortex/metabolism , Dose-Response Relationship, Drug , Excitatory Amino Acid Agonists/pharmacology , Excitatory Amino Acid Agonists/toxicity , Female , Injections, Intraperitoneal , Kainic Acid/pharmacology , Kainic Acid/toxicity , L-Lactate Dehydrogenase/metabolism , Mice , Mice, Knockout , Neuroblastoma/metabolism , Neurons/drug effects , Neurons/metabolism , Parathyroid Hormone-Related Protein , Patch-Clamp Techniques , Pregnancy , Proteins/genetics