Results 1 - 4 of 4
1.
Article in English | MEDLINE | ID: mdl-37204959

ABSTRACT

We propose to embed time series in a latent space where pairwise Euclidean distances (EDs) between samples equal pairwise dissimilarities in the original space, for a given dissimilarity measure. To this end, we use auto-encoder (AE) and encoder-only neural networks to learn elastic dissimilarity measures, e.g., dynamic time warping (DTW), which are central to time series classification (Bagnall et al., 2017). The learned representations are used for one-class classification (Mauceri et al., 2020) on the datasets of the UCR/UEA archive (Dau et al., 2019). Using a 1-nearest-neighbor (1NN) classifier, we show that the learned representations allow classification performance close to that of raw data, but in a space of substantially lower dimensionality. This implies substantial savings in computational and storage requirements for nearest-neighbor time series classification.
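As a rough illustration of the idea, here is a minimal numpy sketch, not the paper's networks: a hypothetical linear encoder is fitted by finite-difference gradient descent so that pairwise Euclidean distances in a 2-D latent space approximate pairwise DTW dissimilarities on a toy three-series dataset. All names and constants are invented for the sketch.

```python
import numpy as np

def dtw(a, b):
    # classic O(nm) dynamic time warping distance between 1-D series
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# toy dataset: three short univariate series (stand-ins for real UCR/UEA data)
X = [np.array([0., 1., 2., 1., 0.]),
     np.array([0., 0., 1., 2., 1.]),
     np.array([5., 4., 3., 4., 5.])]

# target: pairwise DTW dissimilarities to be reproduced as latent EDs
target = np.array([[dtw(a, b) for b in X] for a in X])

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 2)) * 0.1     # hypothetical linear encoder to 2-D

def loss(W):
    # squared mismatch between latent Euclidean distances and DTW targets
    Z = np.stack([x @ W for x in X])
    ed = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    return ((ed - target) ** 2).sum()

initial_loss = loss(W)
eps, lr = 1e-5, 1e-4
for step in range(300):               # crude finite-difference gradient descent
    g = np.zeros_like(W)
    for idx in np.ndindex(W.shape):
        Wp = W.copy()
        Wp[idx] += eps
        g[idx] = (loss(Wp) - loss(W)) / eps
    W -= lr * g
final_loss = loss(W)
```

In the paper, AE and encoder-only networks play the role of this linear map, and the same distance-matching objective is what lets a 1NN classifier in the low-dimensional latent space stand in for 1NN-DTW on the raw series.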

2.
Genet Program Evolvable Mach ; 22(4): 391-393, 2021.
Article in English | MEDLINE | ID: mdl-34690538
3.
IEEE Trans Cybern ; 49(8): 3074-3087, 2019 Aug.
Article in English | MEDLINE | ID: mdl-29994493

ABSTRACT

This paper proposes latent representation models for improving network anomaly detection. Well-known anomaly detection algorithms often struggle with network data, which is high-dimensional and sparse, and with the lack of anomaly data for training, model selection, and hyperparameter tuning. Our approach introduces new regularizers to a classical autoencoder (AE) and a variational AE, which force normal data into a very tight region centered at the origin, in the nonsaturating area of the bottleneck unit activations. Trained on normal data, these AEs push normal points toward the origin, whereas anomalies, which differ from normal data, are placed far from the normal region. The models differ from common regularized AEs, such as sparse and contractive AEs, which instead make their latent representations less sensitive to changes in the input data. The bottleneck feature space is then used as a new data representation, and a number of one-class learning algorithms are used to evaluate the proposed models. The experiments show that our models help these classifiers perform efficiently and consistently on high-dimensional, sparse network datasets, even with relatively few training points. More importantly, the models can minimize the effect of model selection on these classifiers, since their performance is insensitive to a wide range of hyperparameter settings.
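The origin-pulling regularizer can be sketched with a deliberately simplified linear AE in numpy; the paper's models are nonlinear, and the effect there is tied to the activations' nonsaturating region, so this is only an illustration of the penalty itself. The data, the penalty weight `lam`, and all other names are invented for the sketch: an L2 penalty on the latent codes is added to the reconstruction loss, and the latent norm then serves as an anomaly score.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical stand-in for normal traffic: a tight cluster; one point far away
normal = rng.normal(0.0, 0.1, size=(200, 8)) + 1.0
anomaly = np.full(8, 5.0)

k, lam, lr = 2, 0.5, 1e-2
We = rng.normal(size=(8, k)) * 0.1    # encoder weights
Wd = rng.normal(size=(k, 8)) * 0.1    # decoder weights

initial_recon = float(((normal @ We @ Wd - normal) ** 2).mean())
for epoch in range(500):
    Z = normal @ We                   # latent codes
    err = Z @ Wd - normal             # reconstruction error
    # gradients of mean ||x_hat - x||^2 + lam * mean ||z||^2
    gWd = Z.T @ (2 * err) / len(normal)
    gWe = normal.T @ (2 * err @ Wd.T + 2 * lam * Z) / len(normal)
    We -= lr * gWe
    Wd -= lr * gWd
final_recon = float(((normal @ We @ Wd - normal) ** 2).mean())

def score(x):
    # anomaly score: distance of the latent code from the origin
    return float(np.linalg.norm(x @ We))
```

Because training sees only normal points, their codes cluster near the origin, while a point unlike the training data lands farther out; thresholding `score` then separates the two, which is the behavior the one-class classifiers in the paper exploit.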

4.
Biosystems ; 98(3): 137-48, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19577613

ABSTRACT

A novel approach to generating scale-free network topologies is introduced, based on an existing artificial gene regulatory network model. From this model, different interaction networks can be extracted based on an activation threshold. Using an evolutionary computation approach, the model is allowed to evolve in order to reach specific network statistical measures. The results show that, when the model uses a duplication-and-divergence initialisation, as seen in nature, the resulting regulatory networks are not only closer in topology to scale-free networks, but also require only a few evolutionary cycles to achieve a satisfactory error value.
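Threshold-based network extraction and duplication-and-divergence initialisation can be illustrated with a small numpy sketch. The strength matrix, the lognormal noise model, and every constant below are hypothetical stand-ins, not the paper's actual gene regulatory model:

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical stand-in for the regulatory model: a matrix of
# pairwise regulatory strengths between genes (no self-regulation)
n_genes = 50
strength = rng.lognormal(mean=0.0, sigma=1.0, size=(n_genes, n_genes))
np.fill_diagonal(strength, 0.0)

def extract_network(strength, threshold):
    # an interaction edge exists wherever the regulatory strength
    # exceeds the activation threshold
    return strength > threshold

def duplicate_and_diverge(strength, rng, noise=0.1):
    # gene duplication: copy one gene's incoming and outgoing
    # regulatory profile, then diverge the copy with small
    # multiplicative noise
    n = len(strength)
    i = rng.integers(n)
    S = np.zeros((n + 1, n + 1))
    S[:n, :n] = strength
    S[n, :n] = strength[i] * rng.lognormal(0.0, noise, size=n)
    S[:n, n] = strength[:, i] * rng.lognormal(0.0, noise, size=n)
    return S

sparse = extract_network(strength, 3.0)   # higher threshold, fewer edges
dense = extract_network(strength, 0.5)
grown = duplicate_and_diverge(strength, rng)
```

Sweeping the threshold yields the different interaction networks the abstract mentions, and repeated duplication-and-diverge steps give an initial population whose degree distributions are already biased toward the heavy tails of scale-free graphs before evolution begins.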


Subjects
Biological Evolution , Gene Regulatory Networks , Theoretical Models , Genome