Results 1 - 4 of 4
1.
IEEE Trans Neural Netw Learn Syst ; 33(6): 2508-2517, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34464278

ABSTRACT

Several techniques for multivariate time series anomaly detection have been proposed recently, but a systematic comparison on a common set of datasets and metrics is lacking. This article presents a systematic and comprehensive evaluation of unsupervised and semisupervised deep-learning-based methods for anomaly detection and diagnosis on multivariate time series data from cyber-physical systems. Unlike previous works, we vary the model and the post-processing of model errors (the scoring function) independently of each other, over a grid of ten models and four scoring functions, and compare these variants to state-of-the-art methods. In time-series anomaly detection, detecting anomalous events is more important than detecting individual anomalous time points. Through experiments, we find that existing evaluation metrics either do not take events into account or cannot distinguish a good detector from trivial detectors, such as a random or an all-positive detector. To overcome these drawbacks, we propose a new metric for evaluating time-series anomaly detection: the composite F-score (Fc1). Our study highlights that dynamic scoring functions work much better than static ones for multivariate time series anomaly detection, and that the choice of scoring function often matters more than the choice of the underlying model. We also find that a simple channel-wise model, the univariate fully connected autoencoder paired with the dynamic Gaussian scoring function, emerges as a winning candidate for both anomaly detection and diagnosis, beating state-of-the-art algorithms.
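The composite F-score described above combines point-wise and event-wise views of detection quality. A minimal sketch of one plausible reading, the harmonic mean of point-wise precision and event-wise recall, where an event is a contiguous run of anomalous time points (helper names are mine; the article defines the exact metric):

```python
import numpy as np

def composite_f1(y_true, y_pred):
    """Harmonic mean of point-wise precision and event-wise recall
    (a sketch of a composite F-score; not the paper's exact definition)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)

    # Point-wise precision over individual time points.
    tp = np.sum(y_true & y_pred)
    prec = tp / max(np.sum(y_pred), 1)

    # Event-wise recall: a contiguous anomalous run counts as
    # detected if any of its points is flagged.
    events, detected = 0, 0
    i, n = 0, len(y_true)
    while i < n:
        if y_true[i]:
            j = i
            while j < n and y_true[j]:
                j += 1
            events += 1
            detected += int(y_pred[i:j].any())
            i = j
        else:
            i += 1
    rec = detected / max(events, 1)

    if prec + rec == 0:
        return 0.0
    return 2 * prec * rec / (prec + rec)
```

Under this reading, flagging a single point inside a long anomalous event earns full event recall while an all-positive detector is penalized by its poor point-wise precision, which is the failure mode of trivial detectors the abstract mentions.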


Subject(s)
Algorithms , Neural Networks, Computer , Supervised Machine Learning , Time Factors
2.
IEEE Trans Neural Netw Learn Syst ; 24(4): 529-41, 2013 Apr.
Article in English | MEDLINE | ID: mdl-24808375

ABSTRACT

This paper presents a fully complex-valued relaxation network (FCRN) with its projection-based learning algorithm. The FCRN is a single-hidden-layer network with a Gaussian-like sech activation function in the hidden layer and an exponential activation function in the output layer. For a given number of hidden neurons, the input weights are assigned randomly and the output weights are estimated by minimizing a nonlinear logarithmic function (called an energy function) that explicitly contains both the magnitude and phase errors. A projection-based learning algorithm determines the optimal output weights corresponding to the minimum of the energy function by converting the nonlinear programming problem into one of solving a set of simultaneous linear algebraic equations. The resultant FCRN approximates the desired output more accurately with lower computational effort. The classification ability of the FCRN is evaluated using a set of real-valued benchmark classification problems from the University of California, Irvine (UCI) machine learning repository. Here, a circular transformation is used to map the real-valued input features to the complex domain. Next, the FCRN is used to solve three practical problems: quadrature amplitude modulation channel equalization, adaptive beamforming, and mammogram classification. The performance results clearly indicate the superior classification/approximation performance of the FCRN.
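The projection-based learning step above reduces output-weight estimation to solving simultaneous linear equations. A toy sketch under simplifying assumptions: random complex inputs and input weights, a sech hidden layer as in the FCRN, but a plain complex least-squares solve on a linear output in place of the paper's magnitude-and-phase energy function and exponential output activation (all sizes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sech(z):
    """Gaussian-like sech activation, defined for complex arguments."""
    return 1.0 / np.cosh(z)

# Toy complex-valued regression data (illustrative sizes).
n_samples, n_in, n_hidden = 200, 3, 20
X = rng.standard_normal((n_samples, n_in)) + 1j * rng.standard_normal((n_samples, n_in))
y = (X ** 2).sum(axis=1)  # target: a simple complex function of the inputs

# Input weights assigned randomly, as in the FCRN setup.
W_in = rng.standard_normal((n_in, n_hidden)) + 1j * rng.standard_normal((n_in, n_hidden))
H = sech(X @ W_in)  # hidden-layer responses

# "Projection" step: the optimal output weights come from a set of
# simultaneous linear equations, solved here by complex least squares.
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ w_out
```

The point of the sketch is structural: once the hidden responses are fixed, the output weights fall out of a single linear solve rather than an iterative nonlinear optimization.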


Subject(s)
Algorithms , Artificial Intelligence , Neural Networks, Computer , Artificial Intelligence/trends , Time Factors
3.
Neural Netw ; 32: 257-66, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22386786

ABSTRACT

Functional link networks are single-layered neural networks that impose nonlinearity in the input layer using nonlinear functions of the original input variables. In this paper, we present a fully complex-valued functional link network (CFLN) with multivariate polynomials as the nonlinear functions. Unlike multilayer neural networks, the CFLN is free from the local minima problem, and it offers very fast learning of parameters because of its linear structure. The polynomial-based CFLN does not require an activation function, which is a major design concern in complex-valued neural networks. However, since the number of all possible monomials may be quite large, it is important to select a smaller subset of polynomial terms (monomials) for faster and better performance. Here, we use the orthogonal least squares (OLS) method in a constructive fashion (proceeding from lower degree to higher) to select a parsimonious subset of monomials. We argue that computing the CFLN purely in the complex domain is more advantageous than computing it in the double-dimensional real domain, in terms of the number of connection parameters, design speed, and possibly generalization performance. Simulation results on a function approximation task, wind prediction with real-world data, and a nonlinear channel equalization problem show that the OLS-based CFLN yields a very simple structure with favorable performance.
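Because the CFLN is linear in its parameters, training reduces to a least-squares fit over monomial features of the complex inputs. A minimal sketch that builds all monomials up to degree 2 and fits them in one shot (the paper instead selects a parsimonious monomial subset constructively with OLS; names and sizes here are illustrative):

```python
import numpy as np
from itertools import combinations_with_replacement

def monomial_features(Z, degree):
    """All monomials of the complex inputs up to `degree`,
    including the constant term (a minimal CFLN-style expansion)."""
    n, d = Z.shape
    cols = [np.ones(n, dtype=complex)]
    for k in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), k):
            cols.append(np.prod(Z[:, idx], axis=1))
    return np.column_stack(cols)

rng = np.random.default_rng(1)
Z = rng.standard_normal((100, 2)) + 1j * rng.standard_normal((100, 2))
y = 0.5 * Z[:, 0] ** 2 + Z[:, 0] * Z[:, 1]  # a degree-2 complex target

Phi = monomial_features(Z, degree=2)         # linear-in-parameters design matrix
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # one-shot complex least squares
err = np.linalg.norm(Phi @ w - y) / np.linalg.norm(y)
```

With two inputs and degree 2 the expansion has only 6 columns, but the count grows combinatorially with input dimension and degree, which is exactly why the paper's constructive OLS subset selection matters.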


Subject(s)
Least-Squares Analysis , Neural Networks, Computer , Algorithms , Artificial Intelligence , Computer Simulation , Databases, Factual , Forecasting , Meteorology/methods , Nonlinear Dynamics , Wind
4.
IEEE Trans Neural Netw ; 22(7): 1061-72, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21632298

ABSTRACT

This paper presents a sequential learning algorithm for a complex-valued resource allocation network with a self-regulating scheme, referred to as the complex-valued self-regulating resource allocation network (CSRAN). The self-regulating scheme in CSRAN decides what to learn, when to learn, and how to learn based on the information present in the training samples. CSRAN is a complex-valued radial basis function network with a sech activation function in the hidden layer. The network parameters are updated using a complex-valued extended Kalman filter algorithm. CSRAN starts with no hidden neurons and builds up an appropriate number of them, resulting in a compact structure. The performance of CSRAN is evaluated using a synthetic complex-valued function approximation problem and two real-world applications: complex quadrature amplitude modulation channel equalization and adaptive beamforming. Since complex-valued neural networks are good decision makers, the decision-making ability of CSRAN is compared with that of other complex-valued classifiers and the best-performing real-valued classifier on two benchmark unbalanced classification problems from the UCI machine learning repository. The approximation and classification results show that CSRAN outperforms other existing complex-valued learning algorithms in the literature.
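The growth behavior described above, starting with no hidden neurons and allocating them only when the incoming sample warrants it, can be sketched with a simple error-threshold rule. This is a toy illustration of the self-regulating decision only: the complex-valued EKF parameter update is omitted, and the threshold, class, and method names are mine, not CSRAN's:

```python
import numpy as np

def sech(z):
    """sech activation for complex arguments, as used in CSRAN's hidden layer."""
    return 1.0 / np.cosh(z)

class GrowingComplexRBF:
    """Toy growing complex-valued RBF: add a hidden neuron when the
    sample error is large (sketch only; no EKF fine-tuning step)."""

    def __init__(self, add_threshold=0.5):
        self.centers = []   # complex centers of allocated hidden neurons
        self.weights = []   # complex output weights
        self.add_threshold = add_threshold

    def predict(self, x):
        if not self.centers:
            return 0.0 + 0.0j
        h = np.array([sech(x - c) for c in self.centers])
        return np.dot(np.array(self.weights), h)

    def observe(self, x, y):
        e = y - self.predict(x)
        if abs(e) > self.add_threshold:
            # "What to learn": allocate a neuron at this sample; its
            # weight cancels the current error, since sech(0) = 1.
            self.centers.append(x)
            self.weights.append(e)
        # else: "when to learn" would skip or fine-tune (EKF step omitted)

net = GrowingComplexRBF()
for x, y in [(1 + 1j, 2 + 0j), (0.2 + 0.1j, 0.1 + 0.2j)]:
    net.observe(x, y)
```

Each allocated neuron exactly interpolates the sample that triggered it; in the full algorithm, samples with small errors would instead update the existing parameters or be discarded, which is what keeps the final structure compact.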


Subject(s)
Algorithms , Artificial Intelligence , Learning , Models, Neurological , Signal Processing, Computer-Assisted , Animals , Humans , Neurons/physiology