Low Computational Cost for Sample Entropy.
Manis, George; Aktaruzzaman, Md; Sassi, Roberto.
Affiliation
  • Manis G; Department of Computer Science and Engineering, University of Ioannina, Ioannina 45110, Greece.
  • Aktaruzzaman M; Department of Computer Science and Engineering, Islamic University Kushtia, Kushtia 7003, Bangladesh.
  • Sassi R; Dipartimento di Informatica, Università degli Studi di Milano, Crema 26013, Italy.
Entropy (Basel); 20(1), 2018 Jan 13.
Article in En | MEDLINE | ID: mdl-33265148
Sample Entropy is the most popular definition of entropy and is widely used as a measure of the regularity/complexity of a time series. It is, however, a computationally expensive method which may require a large amount of time when used on long series or on a large number of signals. The computationally intensive part is the similarity check between points in m-dimensional space. In this paper, we propose new algorithms or extend already proposed ones, aiming to compute Sample Entropy quickly. All algorithms return exactly the same value for Sample Entropy, and no approximation techniques are used. We compare and evaluate them using cardiac inter-beat (RR) time series. We investigate three algorithms. The first is an extension of the k-d trees algorithm, customized for Sample Entropy. The second is an extension of an algorithm initially proposed for Approximate Entropy, again customized for Sample Entropy, but also improved to give even faster results. The last is a completely new algorithm, presenting the fastest execution times for specific values of m, r, time series length, and signal characteristics. These algorithms are compared with the straightforward implementation, derived directly from the definition of Sample Entropy, in order to give a clear picture of the speedups achieved. All algorithms assume the classical approach to the metric, in which the maximum norm is used. The key idea of the last two algorithms is to avoid unnecessary comparisons by detecting them early, where unnecessary refers to comparisons that are known a priori to fail the similarity check. The number of avoided comparisons proves to be very large, resulting in a correspondingly large reduction of execution time and making these the fastest algorithms available today for the computation of Sample Entropy.
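For orientation, the following is a minimal sketch of the straightforward baseline the abstract describes: Sample Entropy computed directly from its definition, with the similarity check done under the maximum (Chebyshev) norm. The function name, parameter defaults, and the synthetic RR series are illustrative assumptions, not taken from the paper; the paper's fast algorithms return exactly this value while skipping pairs known a priori to fail the norm check.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Brute-force Sample Entropy, directly from the definition.

    Counts pairs of template vectors of lengths m and m+1 whose
    maximum-norm (Chebyshev) distance is within the tolerance r.
    Illustrative sketch only, O(N^2) in the series length.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    count_m = 0   # similar pairs in m-dimensional space (B)
    count_m1 = 0  # similar pairs in (m+1)-dimensional space (A)
    # Use the first n - m templates for both counts, per the
    # standard convention, and count each unordered pair once.
    for i in range(n - m):
        for j in range(i + 1, n - m):
            # Maximum norm over the first m components.
            if np.max(np.abs(x[i:i + m] - x[j:j + m])) <= r:
                count_m += 1
                # Extend the check to the (m+1)-th component.
                if abs(x[i + m] - x[j + m]) <= r:
                    count_m1 += 1
    if count_m == 0 or count_m1 == 0:
        return np.inf  # SampEn is undefined when no matches occur
    return -np.log(count_m1 / count_m)

# Example: r is commonly set as a fraction of the standard deviation.
rr = np.random.default_rng(0).normal(0.8, 0.05, 500)  # synthetic RR series
print(sample_entropy(rr, m=2, r=0.2 * rr.std()))
```

Note that the nested pair loop is exactly the quadratic similarity check the paper identifies as the bottleneck; the proposed algorithms avoid evaluating pairs whose failure can be detected early, without changing the returned value.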
Full text: 1 Collections: 01-international Database: MEDLINE Study type: Health_economic_evaluation Language: En Publication year: 2018 Document type: Article