Results 1 - 3 of 3
1.
Entropy (Basel) ; 20(5)2018 May 17.
Article in English | MEDLINE | ID: mdl-33265461

ABSTRACT

This paper gives sufficient conditions under which the Miller-Madow estimator and the jackknife estimator of entropy are asymptotically normal on countably infinite alphabets.
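The two estimators named in this abstract can be sketched as follows, assuming their standard definitions: the Miller-Madow estimator adds a bias correction of (m - 1)/(2n) to the plug-in entropy, where m is the number of distinct symbols observed, and the jackknife estimator combines the plug-in entropy with the average of the leave-one-out plug-in entropies. Function names are illustrative, not from the paper.

```python
from collections import Counter
from math import log

def plugin_entropy(counts, n):
    """Plug-in (maximum-likelihood) entropy from symbol counts, in nats."""
    return -sum(c / n * log(c / n) for c in counts if c > 0)

def miller_madow(sample):
    """Plug-in entropy plus the Miller-Madow bias correction (m - 1)/(2n)."""
    n = len(sample)
    counts = Counter(sample)
    m = len(counts)  # number of distinct symbols observed
    return plugin_entropy(counts.values(), n) + (m - 1) / (2 * n)

def jackknife_entropy(sample):
    """Jackknife estimator: n*H_n - (n-1) * mean of leave-one-out entropies."""
    n = len(sample)
    counts = Counter(sample)
    h = plugin_entropy(counts.values(), n)
    # Group the n leave-one-out entropies by which symbol was removed:
    # removing any one of the c copies of a symbol gives the same entropy.
    loo_sum = 0.0
    for sym, c in counts.items():
        loo_counts = [cc - 1 if s == sym else cc for s, cc in counts.items()]
        loo_sum += c * plugin_entropy(loo_counts, n - 1)
    return n * h - (n - 1) / n * loo_sum
```

Both corrections push the estimate upward, counteracting the well-known negative bias of the plug-in estimator on finite samples.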

2.
PLoS One ; 12(3): e0173305, 2017.
Article in English | MEDLINE | ID: mdl-28267765

ABSTRACT

Modern measures of diversity satisfy reasonable axioms, are parameterized to produce diversity profiles, can be expressed as an effective number of species to simplify their interpretation, and come with estimators that allow one to apply them to real-world data. We introduce the generalized Simpson's entropy as a measure of diversity and investigate its properties. We show that it has many useful features and can be used as a measure of biodiversity. Moreover, unlike most commonly used diversity indices, it has unbiased estimators, which allow for sound estimation of the diversity of poorly sampled, rich communities.
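A minimal sketch of the idea, assuming the generalized Simpson's entropy of order r is defined as ζ_r = Σ p_i (1 - p_i)^r (a common formulation; the paper's exact parameterization may differ). The unbiased estimator shown here replaces the plug-in power (1 - p̂)^r with a product of finite-sample factors and is exactly unbiased for 1 ≤ r ≤ n - 1; function names are illustrative.

```python
from collections import Counter

def gen_simpson_plugin(sample, r):
    """Plug-in estimate of zeta_r = sum_i p_i * (1 - p_i)**r."""
    n = len(sample)
    return sum((c / n) * (1 - c / n) ** r for c in Counter(sample).values())

def gen_simpson_unbiased(sample, r):
    """Unbiased estimate of zeta_r, valid for 1 <= r <= n - 1."""
    n = len(sample)
    if not 1 <= r <= n - 1:
        raise ValueError("unbiasedness requires 1 <= r <= n - 1")
    total = 0.0
    for c in Counter(sample).values():
        # Product of finite-sample correction factors replacing (1 - p_hat)**r.
        prod = 1.0
        for j in range(1, r + 1):
            prod *= 1 - (c - 1) / (n - j)
        total += (c / n) * prod
    return total
```

For a sample in which every symbol appears exactly once, each correction factor equals 1 and the unbiased estimate is exactly 1, reflecting maximal apparent diversity, whereas the plug-in estimate is strictly smaller.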


Subject(s)
Biodiversity, Entropy, Theoretical Models, Algorithms
3.
Neural Comput ; 26(11): 2570-93, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25058703

ABSTRACT

In this letter, we introduce an estimator of Kullback-Leibler divergence based on two independent samples. We show that on any finite alphabet, this estimator has an exponentially decaying bias and that it is consistent and asymptotically normal. To explain the importance of this estimator, we provide a thorough analysis of the more standard plug-in estimator. We show that it is consistent and asymptotically normal, but with an infinite bias. Moreover, if we modify the plug-in estimator to remove the rare events that cause the bias to become infinite, the bias still decays at a rate no faster than O(1/n). Further, we extend our results to estimating the symmetrized Kullback-Leibler divergence. We conclude by providing simulation results, which show that the asymptotic properties of these estimators hold even for relatively small sample sizes.
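A sketch of the plug-in estimator analyzed in this abstract (not the paper's proposed estimator): D̂(P‖Q) = Σ p̂(x) log(p̂(x)/q̂(x)) with empirical frequencies from the two samples. The rare-event problem the abstract describes is visible directly in the code: any symbol observed in the P-sample but absent from the Q-sample makes the estimate infinite. Function names are illustrative.

```python
from collections import Counter
from math import log

def plugin_kl(sample_p, sample_q, alphabet):
    """Plug-in estimate of KL(P || Q) from two independent samples."""
    n, m = len(sample_p), len(sample_q)
    cp, cq = Counter(sample_p), Counter(sample_q)
    d = 0.0
    for x in alphabet:
        p_hat = cp[x] / n
        if p_hat == 0:
            continue  # 0 * log(0 / q) = 0 by convention
        q_hat = cq[x] / m
        if q_hat == 0:
            # Symbol seen under P but never under Q: the source of the
            # infinite bias discussed in the abstract.
            return float("inf")
        d += p_hat * log(p_hat / q_hat)
    return d

def symmetrized_plugin_kl(sample_p, sample_q, alphabet):
    """Plug-in estimate of the symmetrized divergence KL(P||Q) + KL(Q||P)."""
    return plugin_kl(sample_p, sample_q, alphabet) + \
           plugin_kl(sample_q, sample_p, alphabet)
```

Even when every symbol is observed in both samples, a vanishingly small q̂(x) can make a single term arbitrarily large, which is why the bias remains heavy even after the infinite-value events are trimmed away.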


Subject(s)
Artificial Intelligence, Theoretical Models, Nonparametric Statistics, Humans, Stochastic Processes