Results 1 - 7 of 7
1.
Entropy (Basel); 21(6), 2019 Jun 04.
Article in English | MEDLINE | ID: mdl-33267275

ABSTRACT

In recent years, selecting appropriate learning models has become more important with the increased need to analyze learning systems, and many model selection methods have been developed. The learning coefficient in Bayesian estimation, which measures the learning efficiency in singular learning models, plays an important role in several information criteria. In regular models the learning coefficient equals half the dimension of the parameter space, whereas in singular models it is smaller and varies from model to model. Mathematically, the learning coefficient is the log canonical threshold. In this paper, we provide a new rational blowing-up method for obtaining these coefficients. Applying it to Vandermonde matrix-type singularities, we demonstrate the method's efficiency.
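For context, the quantities named in this abstract can be written down in the standard notation of singular learning theory (a background sketch, not taken verbatim from the paper):

```latex
% K(w): average log loss (Kullback-Leibler divergence to the true distribution),
% \varphi(w): prior on the parameter w. The zeta function of learning theory is
\zeta(z) \;=\; \int K(w)^{z}\,\varphi(w)\,dw ,
% which extends meromorphically to the complex plane. Its largest pole z = -\lambda
% gives the learning coefficient \lambda (the log canonical threshold of K), and the
% Bayesian free energy expands as
F_n \;=\; n S_n \;+\; \lambda \log n \;-\; (m-1)\log\log n \;+\; O_p(1),
% where m is the multiplicity of that pole. For a regular model \lambda = d/2,
% with d the dimension of the parameter space; for singular models \lambda \le d/2.
```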

2.
Neural Netw; 172: 106132, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38278091

ABSTRACT

In the last two decades, remarkable progress has been made in the theory of singular learning machines on the basis of algebraic geometry. This theory reveals that we need to find resolution maps of singularities in order to analyze the asymptotic behavior of state probability functions as the number of data increases. In particular, it is essential to construct normal crossing divisors of the average log loss function. However, few examples of these have been obtained for singular models. In this paper, we determine the resolution map and normal crossing divisors for multiple-layered neural networks with linear units. Moreover, we obtain the exact values of the learning efficiency, the so-called learning coefficients. Multiple-layered neural networks with linear units are simple but very important models, because they extract the essential information from data of input-output pairs; moreover, they are very close to multiple-layered neural networks with rectified linear units (ReLU). We show that the learning coefficients of multiple-layered neural networks with linear units remain bounded even as the number of layers goes to infinity, which means that the main terms of the asymptotic expansions of the free energy and the generalization error of these singular models are much smaller than the dimension of the parameter space.
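The singularity of linear multilayer networks can be illustrated numerically (a hypothetical sketch, not the paper's construction): the function a deep linear network computes is a single matrix whose rank is capped by the narrowest layer, while the nominal parameter count keeps growing with depth.

```python
import numpy as np

rng = np.random.default_rng(0)

def deep_linear(widths):
    """Compose linear layers W_L ... W_1 and return (product, nominal #parameters)."""
    mats = [rng.standard_normal((widths[i + 1], widths[i]))
            for i in range(len(widths) - 1)]
    prod = mats[0]
    for W in mats[1:]:
        prod = W @ prod
    n_params = sum(W.size for W in mats)
    return prod, n_params

# A 10 -> 2 -> 10 -> 2 -> 10 linear network: 80 nominal parameters, but the map
# it realizes is one 10x10 matrix of rank at most 2 (the narrowest layer).
P, n = deep_linear([10, 2, 10, 2, 10])
print(np.linalg.matrix_rank(P), n)  # 2 80
```

Adding more layers of width >= 2 would raise the nominal parameter count without enlarging the family of realizable maps, which is one intuition for why the learning coefficient can stay bounded in depth.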


Subjects
Generalization (Psychological) , Neural Networks (Computer) , Likelihood Functions
3.
Neural Comput; 24(6): 1569-1610, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22295979

ABSTRACT

The term algebraic statistics arises from the study of probabilistic models and techniques for statistical inference using methods from algebra and geometry (Sturmfels, 2009). The purpose of our study is to consider the generalization error and stochastic complexity in learning theory by using the log canonical threshold in algebraic geometry. This threshold corresponds to the main term of the generalization error in Bayesian estimation, which is called a learning coefficient (Watanabe, 2001a, 2001b). The learning coefficient measures the learning efficiency of hierarchical learning models. In this letter, we consider learning coefficients for Vandermonde matrix-type singularities by using a new approach: focusing on the generators of the ideal that defines the singularities. We give new tight bounds on the learning coefficients for Vandermonde matrix-type singularities, as well as their explicit values under certain conditions. By applying our results, we obtain the learning coefficients of three-layer neural networks and normal mixture models.
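A quick way to see why Vandermonde matrix-type structures are singular (an illustrative toy, not the paper's parametrization): when two nodes of a Vandermonde matrix coincide, two rows become identical and the rank drops, the kind of degeneracy that makes a Fisher information matrix singular.

```python
import numpy as np

# Vandermonde matrix on distinct nodes: full rank.
distinct = np.vander([1.0, 2.0, 3.0], 3)
# Coinciding nodes: two identical rows, so the rank drops.
coincide = np.vander([1.0, 2.0, 2.0], 3)

print(np.linalg.matrix_rank(distinct))  # 3
print(np.linalg.matrix_rank(coincide))  # 2
```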

4.
J Pharm Biomed Anal; 128: 455-461, 2016 Sep 05.
Article in English | MEDLINE | ID: mdl-27362457

ABSTRACT

Streptolysin O (SLO), which recognizes sterols and forms nanopores in lipid membranes, is proposed as a sensing element for monitoring cholesterol oxidation in a lipid bilayer. The structural requirements of eight sterols for nanopore formation by SLO confirmed that a free 3-OH group in the β-configuration of sterols is required for recognition by SLO in a lipid bilayer. The extent of nanopore formation by SLO in lipid bilayers increased in the order of cholestanol

Subjects
Cholesterol/analysis , Cholesterol/metabolism , Lipid Bilayers/chemistry , Signal Transduction , Streptolysins/analysis , Streptolysins/metabolism , Bacterial Proteins/analysis , Bacterial Proteins/chemistry , Bacterial Proteins/metabolism , Cholesterol Oxidase/metabolism , Fluoresceins/chemistry , Liposomes/chemistry , Nanopores , Oxidation-Reduction , Sterols/chemistry , Sterols/metabolism , Streptolysins/chemistry
5.
Neural Netw; 18(7): 924-933, 2005 Sep.
Article in English | MEDLINE | ID: mdl-15993036

ABSTRACT

Reduced rank regression extracts essential information from examples of input-output pairs. It can be understood as a three-layer neural network with linear hidden units. However, reduced rank approximation is a non-regular statistical model with a degenerate Fisher information matrix, and its generalization error had remained unknown even in statistics. In this paper, we give the exact asymptotic form of its generalization error in Bayesian estimation, based on the resolution of the learning machine's singularities. For this purpose, the maximum pole of the zeta function of learning theory is calculated. We propose a new method of recursive blowing-ups that yields the complete desingularization of the reduced rank approximation.
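The model class itself is easy to sketch (a minimal illustration of reduced rank regression, not of the paper's Bayesian analysis; one common least-squares construction is OLS followed by projecting the fitted values onto their top-H singular directions, exactly the constraint a three-layer linear network with H hidden units imposes):

```python
import numpy as np

rng = np.random.default_rng(1)

def reduced_rank_regression(X, Y, H):
    """Least-squares fit of Y ~ X B with rank(B) <= H.

    OLS solution, then projection of the fitted values onto their
    top-H right singular directions, yielding a rank-H coefficient map.
    """
    B_ols = np.linalg.pinv(X) @ Y
    fitted = X @ B_ols
    _, _, Vt = np.linalg.svd(fitted, full_matrices=False)
    V_H = Vt[:H].T                  # top-H right singular vectors
    return B_ols @ V_H @ V_H.T      # rank-H coefficient matrix

# True coefficient map of rank 2 from 5 inputs to 4 outputs.
B_true = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))
X = rng.standard_normal((200, 5))
Y = X @ B_true + 0.01 * rng.standard_normal((200, 4))

B_hat = reduced_rank_regression(X, Y, H=2)
print(np.linalg.matrix_rank(B_hat))  # 2
```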


Subjects
Artificial Intelligence , Bayes Theorem , Neural Networks (Computer) , Regression Analysis , Stochastic Processes
6.
Neural Netw; 23(1): 35-43, 2010 Jan.
Article in English | MEDLINE | ID: mdl-19692207

ABSTRACT

Statistical learning machines that have singularities in the parameter space, such as hidden Markov models, Bayesian networks, and neural networks, are widely used in the field of information engineering. Singularities in the parameter space determine the accuracy of estimation in the Bayesian scenario. The Newton diagram in algebraic geometry is recognized as an effective tool for investigating a singularity. The present paper proposes a new technique for applying the diagram to Bayesian analysis. The proposed technique allows the generalization error to be clarified and provides a foundation for efficient model selection. We apply the proposed technique to mixtures of binomial distributions.
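For intuition on the Newton-diagram approach (a textbook two-variable example, hypothetical relative to this paper): for f = x^a + y^b the Newton boundary is the segment from (a, 0) to (0, b), the diagonal (t, t) meets it at t = ab/(a + b), and the log canonical threshold is min(1, 1/t) = min(1, 1/a + 1/b).

```python
from fractions import Fraction

def lct_two_monomials(a, b):
    """Log canonical threshold of f = x^a + y^b via its Newton diagram.

    The diagonal (t, t) meets the Newton boundary x/a + y/b = 1 at
    t = ab / (a + b); the threshold is min(1, 1/t) = min(1, 1/a + 1/b).
    """
    t = Fraction(a * b, a + b)
    return min(Fraction(1), 1 / t)

print(lct_two_monomials(2, 3))  # 5/6, the classical value for the cusp x^2 + y^3
```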


Subjects
Artificial Intelligence , Bayes Theorem , Generalization (Psychological) , Algorithms , Computer Simulation , Electronic Data Processing , Humans , Information Storage and Retrieval