Results 1 - 3 of 3
1.
Neural Comput; 32(10): 1980-1997, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32795236

ABSTRACT

In this letter, we study a class of regularized regression algorithms when the sampling process is unbounded. By choosing different loss functions, the scheme covers a wide range of commonly used regression algorithms. Unlike prior theoretical analyses of unbounded sampling, our setting places no constraint on the output variables. Through a careful error analysis, we prove consistency and finite-sample bounds on the excess risk of the proposed algorithms under regularity conditions.
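As a rough illustration only (not the paper's own algorithm, and all function and parameter names below are made up for this sketch), the following numpy code shows the kind of scheme the abstract describes: Tikhonov-regularized kernel regression in which the pointwise loss can be swapped, here between squared loss and a Huber loss that is more robust when the outputs are unbounded.

```python
# Hedged sketch: regularized kernel regression with an exchangeable loss.
# The hypothesis is f(x) = sum_j c_j K(x_j, x); the penalty is lam * c^T K c.
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def loss_grad(residual, loss="squared", delta=1.0):
    # derivative of the pointwise loss with respect to the residual f(x) - y
    if loss == "squared":
        return 2.0 * residual
    if loss == "huber":
        return np.clip(residual, -delta, delta)
    raise ValueError(loss)

def fit(X, y, lam=0.1, loss="squared", lr=0.01, steps=2000):
    # gradient descent on the expansion coefficients c
    K = gaussian_kernel(X, X)
    n = len(y)
    c = np.zeros(n)
    for _ in range(steps):
        residual = K @ c - y
        grad = K @ loss_grad(residual, loss) / n + 2 * lam * (K @ c)
        c -= lr * grad
    return c, K

# toy usage with heavy-tailed (effectively unbounded) output noise
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_t(df=2, size=50)
c, K = fit(X, y, lam=0.05, loss="huber")
print("training RMSE:", np.sqrt(np.mean((K @ c - y) ** 2)))
```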

2.
Neural Comput; 28(1): 71-88, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26599712

ABSTRACT

We present an improved theoretical foundation for support vector machines with polynomial kernels. The sample error is estimated under Tsybakov's noise assumption. In bounding the approximation error, we take advantage of a geometric noise assumption originally introduced to analyze Gaussian kernels. In contrast with the previous literature, the error analysis in this note requires neither regularity of the marginal distribution nor smoothness of the Bayes rule. We thus establish learning rates for polynomial kernels for a wide class of distributions.
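For concreteness, here is a minimal sketch of the classifier family this note analyzes, a soft-margin SVM with a polynomial kernel, fit with scikit-learn on synthetic data; it only illustrates the learning scheme, not the paper's error analysis, and the data-generating rule below is an arbitrary choice of mine.

```python
# Hedged sketch: polynomial-kernel SVM classification on toy data.
# Kernel: K(x, z) = (gamma * <x, z> + coef0) ** degree (scikit-learn's "poly").
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))
# a smooth but nonlinear decision boundary (illustrative only)
y = (X[:, 0] ** 2 + 0.5 * X[:, 1] > 0.3).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="poly", degree=3, coef0=1.0, C=1.0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```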

3.
Neural Comput; 26(1): 158-184, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24102124

ABSTRACT

We consider a kernel-based regression scheme with general convex loss functions under regularization. The kernels used in the scheme are not required to be symmetric and hence need not be positive semidefinite; the ℓ1-norm of the coefficients in the kernel ensemble is taken as the regularizer. Our setting in this letter therefore differs markedly from classical regularized regression algorithms such as regularized networks and support vector machine regression. Under an error decomposition into approximation error, hypothesis error, and sample error, we present a detailed mathematical analysis of this scheme and, in particular, of its learning rate. A reweighted empirical process theory, which plays a key role in deriving the explicit learning rate under suitable assumptions, is applied to the analysis of the resulting learning algorithms.
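A generic illustration of this kind of coefficient-based, ℓ1-regularized kernel regression (assumptions: squared loss, a deliberately asymmetric kernel of my own choosing, and an ISTA solver; none of this is claimed to be the letter's method) might look like the following sketch.

```python
# Hedged sketch: f(x) = sum_j alpha_j K(x, x_j) with a non-symmetric "kernel" K,
# fitting alpha by minimizing (1/2n) * ||G alpha - y||^2 + lam * ||alpha||_1,
# where G[i, j] = K(x_i, x_j), via ISTA (proximal gradient + soft-thresholding).
import numpy as np

def nonsymmetric_kernel(x, z, sigma=0.5):
    # a shifted Gaussian, so K(x, z) != K(z, x) and G is not symmetric
    return np.exp(-((x - z - 0.2) ** 2) / (2 * sigma ** 2))

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fit_l1_kernel_regression(x, y, lam=0.05, steps=3000):
    G = nonsymmetric_kernel(x[:, None], x[None, :])   # n x n, not symmetric
    n = len(y)
    step = 1.0 / (np.linalg.norm(G, 2) ** 2 / n)      # 1 / Lipschitz constant of the smooth part
    alpha = np.zeros(n)
    for _ in range(steps):
        grad = G.T @ (G @ alpha - y) / n
        alpha = soft_threshold(alpha - step * grad, step * lam)
    return alpha, G

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 60)
y = np.cos(2 * x) + 0.1 * rng.standard_normal(60)
alpha, G = fit_l1_kernel_regression(x, y)
print("nonzero coefficients:", int(np.count_nonzero(alpha)), "of", len(alpha))
```

The ℓ1 penalty drives most expansion coefficients to exactly zero, which is what distinguishes this coefficient-based scheme from the kernel-norm regularization used in regularized networks and SVM regression.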
