Results 1 - 17 of 17
1.
IEEE Trans Cybern; 53(8): 4908-4922, 2023 Aug.
Article in English | MEDLINE | ID: mdl-35239499

ABSTRACT

The main challenge for industrial predictive models is how to deal effectively with big data from high-dimensional processes with nonstationary characteristics. Although deep networks such as the stacked autoencoder (SAE) can learn useful features from massive data with a multilevel architecture, it is difficult to adapt them online to track fast time-varying process dynamics. To integrate feature learning and online adaptation, this article proposes a deep cascade gradient radial basis function (GRBF) network for online modeling and prediction of nonlinear and nonstationary processes. The proposed deep learning method consists of three modules. First, a preliminary prediction is generated by a GRBF weak predictor, which is then combined with the raw input data for feature extraction. By incorporating this prior weak-prediction information, deep output-relevant features are extracted using an SAE. Online predictions are finally produced from the extracted features by a GRBF predictor, whose weights and structure are updated online to capture fast time-varying process characteristics. Three real-world industrial case studies demonstrate that the proposed deep cascade GRBF network outperforms existing state-of-the-art online modeling approaches as well as deep networks, in terms of both online prediction accuracy and computational complexity.

2.
IEEE Trans Cybern; 53(12): 7906-7919, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37022387

ABSTRACT

Multioutput regression of nonlinear and nonstationary data is largely understudied in both machine learning and control communities. This article develops an adaptive multioutput gradient radial basis function (MGRBF) tracker for online modeling of multioutput nonlinear and nonstationary processes. Specifically, a compact MGRBF network is first constructed with a new two-step training procedure to produce excellent predictive capacity. To improve its tracking ability in fast time-varying scenarios, an adaptive MGRBF (AMGRBF) tracker is proposed, which updates the MGRBF network structure online by replacing the worst performing node with a new node that automatically encodes the newly emerging system state and acts as a perfect local multioutput predictor for the current system state. Extensive experimental results confirm that the proposed AMGRBF tracker significantly outperforms existing state-of-the-art online multioutput regression methods as well as deep-learning-based models, in terms of adaptive modeling accuracy and online computational complexity.
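The replace-worst-node mechanism described above can be sketched in a few lines. This is an illustrative single-output toy under assumed conventions (normalized Gaussian activations, an exponentially weighted per-node squared error as the "worst performing" criterion, and the new node's centre and output taken directly from the newest sample); it is not the paper's MGRBF training procedure.

```python
import numpy as np

class AdaptiveRBFTracker:
    """Toy fixed-size RBF tracker that swaps out its worst node online."""

    def __init__(self, x0, y0, width=1.0, decay=0.9):
        self.c, self.v = x0.copy(), y0.copy()   # node centres and outputs
        self.width, self.decay = width, decay
        self.err = np.zeros(len(x0))            # per-node EWMA of squared error

    def _phi(self, x):
        sq = ((x - self.c) ** 2).sum(axis=1)
        phi = np.exp(-sq / (2.0 * self.width ** 2))
        return phi / (phi.sum() + 1e-12)        # normalized activations

    def predict(self, x):
        return self._phi(x) @ self.v

    def update(self, x, y):
        phi = self._phi(x)
        e2 = (y - phi @ self.v) ** 2
        self.err = self.decay * self.err + phi * e2   # blame the active nodes
        worst = int(np.argmax(self.err))              # worst performing node
        self.c[worst], self.v[worst] = x, y           # encode the new state
        self.err[worst] = 0.0

# Demo: after one update, the tracker encodes the newly observed state.
x_new, y_new = np.array([1.0, 1.0]), 5.0
tracker = AdaptiveRBFTracker(np.zeros((3, 2)), np.zeros(3))
before = tracker.predict(x_new)
tracker.update(x_new, y_new)
after = tracker.predict(x_new)
```

After the update, the prediction at the revisited state moves toward the observed target because one node now sits exactly on it.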

3.
IEEE Trans Neural Netw Learn Syst; 33(5): 1867-1880, 2022 May.
Article in English | MEDLINE | ID: mdl-33052869

ABSTRACT

A key characteristic of biological systems is the ability to update memory by learning new knowledge and removing out-of-date knowledge, so that intelligent decisions can be made based on the relevant knowledge acquired in memory. Inspired by this fundamental biological principle, this article proposes a multi-output selective ensemble regression (SER) for online identification of multi-output nonlinear time-varying industrial processes. Specifically, an adaptive local learning approach is developed to automatically identify and encode a newly emerging process state by fitting a local multi-output linear model based on multi-output hypothesis testing. This growth strategy ensures a highly diverse and independent local model set. The online model is constructed as a multi-output SER predictor by optimizing the combining weights of the selected local multi-output models based on a probability metric. An effective pruning strategy is also developed to remove unwanted, out-of-date local multi-output linear models in order to achieve low online computational complexity without sacrificing prediction accuracy. A simulated two-output process and two real-world identification problems are used to demonstrate the effectiveness of the proposed multi-output SER over a range of benchmark schemes for real-time identification of multi-output nonlinear and nonstationary processes, in terms of both online identification accuracy and computational complexity.


Subjects
Neural Networks, Computer; Linear Models
4.
IEEE Trans Neural Netw; 19(1): 193-8, 2008 Jan.
Article in English | MEDLINE | ID: mdl-18269951

ABSTRACT

Using the classical Parzen window (PW) estimate as the target function, the sparse kernel density estimator is constructed in a forward-constrained regression (FCR) manner. The proposed algorithm selects significant kernels one at a time, while the leave-one-out (LOO) test score is minimized subject to a simple positivity constraint in each forward stage. The model parameter estimation in each forward stage is simply the solution of the jackknife parameter estimator for a single parameter, subject to the same positivity constraint check. For each selected kernel, the associated kernel width is updated via the Gauss-Newton method with the model parameter estimate fixed. The proposed approach is simple to implement, and the associated computational cost is very low. Numerical examples are employed to demonstrate the efficacy of the proposed approach.
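The classical PW estimate that serves as the target function here can be written down directly. A minimal sketch with Gaussian kernels of a single common width (the kernel type and the width value are illustrative assumptions):

```python
import numpy as np

def parzen_window_estimate(x_train, x_query, sigma):
    """Classical Parzen window density estimate with Gaussian kernels.

    Every training point contributes one kernel of common width sigma;
    the sparse estimator in the paper approximates this full estimate
    with far fewer kernels.
    """
    d = x_train.shape[1]
    norm = (2.0 * np.pi * sigma ** 2) ** (-d / 2.0)
    # squared distances between every query point and every training point
    sq = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=2)
    return norm * np.exp(-sq / (2.0 * sigma ** 2)).mean(axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 1))              # samples from a standard Gaussian
p = parzen_window_estimate(x, np.array([[0.0]]), sigma=0.3)
```

With 500 standard-Gaussian samples the estimate at the origin lands close to the true density value of about 0.40.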


Subjects
Algorithms; Artificial Intelligence; Neural Networks, Computer; Regression Analysis; Humans; Signal Processing, Computer-Assisted
5.
IEEE Trans Neural Netw; 19(5): 737-45, 2008 May.
Article in English | MEDLINE | ID: mdl-18467204

ABSTRACT

In this paper, we propose a powerful symmetric radial basis function (RBF) classifier for nonlinear detection in the so-called "overloaded" multiple-antenna-aided communication systems. By exploiting the inherent symmetry property of the optimal Bayesian detector, the proposed symmetric RBF classifier is capable of approaching the optimal classification performance using noisy training data. The classifier construction process is robust to the choice of the RBF width and is computationally efficient. The proposed solution is capable of providing a signal-to-noise ratio (SNR) gain in excess of 8 dB against the powerful linear minimum bit error rate (BER) benchmark, when supporting four users with the aid of two receive antennas or seven users with four receive antenna elements.
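The symmetry exploited here (the optimal detector satisfies f(-x) = -f(x)) can be built into an RBF classifier by pairing each kernel with a mirrored, sign-flipped twin. A minimal sketch with made-up centres and weights, not the paper's training procedure:

```python
import numpy as np

def symmetric_rbf(x, centres, weights, width):
    """Odd-symmetric RBF decision function: f(-x) = -f(x) by construction.

    Each centre contributes a mirrored pair of Gaussian kernels with
    opposite signs, so the sign symmetry of the optimal Bayesian
    detector is hard-wired into the classifier structure.
    """
    def phi(u, c):
        return np.exp(-((u - c) ** 2).sum(axis=-1) / (2.0 * width ** 2))
    out = np.zeros(len(x))
    for w, c in zip(weights, centres):
        out += w * (phi(x, c) - phi(x, -c))
    return out

rng = np.random.default_rng(1)
centres = rng.normal(size=(4, 2))   # hypothetical kernel centres
weights = rng.normal(size=4)        # hypothetical trained weights
X = rng.normal(size=(10, 2))
f_pos = symmetric_rbf(X, centres, weights, width=1.0)
f_neg = symmetric_rbf(-X, centres, weights, width=1.0)
```

Whatever the weights, negating the input exactly negates the output, which is the structural property the classifier exploits.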


Subjects
Neural Networks, Computer; Radio Waves; Algorithms; Bayes Theorem; Communication; Computer Simulation; Nonlinear Dynamics
6.
IEEE Trans Neural Netw Learn Syst; 29(3): 560-572, 2018 Mar.
Article in English | MEDLINE | ID: mdl-28026785

ABSTRACT

Many industrial processes contain both linear and nonlinear parts, and kernel principal component analysis (KPCA), widely used in nonlinear process monitoring, may not offer the most effective means for dealing with these nonlinear processes. This paper proposes a new hybrid linear-nonlinear statistical modeling approach for nonlinear process monitoring by closely integrating linear principal component analysis (PCA) and nonlinear KPCA using a serial model structure, which we refer to as serial PCA (SPCA). Specifically, PCA is first applied to extract PCs as linear features, and to decompose the data into the PC subspace and residual subspace (RS). Then, KPCA is performed in the RS to extract the nonlinear PCs as nonlinear features. Two monitoring statistics are constructed for fault detection, based on both the linear and nonlinear features extracted by the proposed SPCA. To effectively perform fault identification after a fault is detected, an SPCA similarity factor method is built for fault recognition, which fuses both the linear and nonlinear features. Unlike PCA and KPCA, the proposed method takes into account both linear and nonlinear PCs simultaneously, and therefore, it can better exploit the underlying process's structure to enhance fault diagnosis performance. Two case studies involving a simulated nonlinear process and the benchmark Tennessee Eastman process demonstrate that the proposed SPCA approach is more effective than the existing state-of-the-art approach based on KPCA alone, in terms of nonlinear process fault detection and identification.
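The serial model structure is straightforward to prototype: linear PCA first, then kernel PCA restricted to the residual subspace. A numpy-only sketch (the component counts, the Gaussian kernel, and the median-distance width heuristic are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
X[:, 3] = X[:, 0] ** 2 + 0.1 * rng.normal(size=200)  # one nonlinear relation

# Step 1: linear PCA; keep the k leading principal components.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
T = Xc @ Vt[:k].T            # linear features (PC-subspace scores)
R = Xc - T @ Vt[:k]          # residual subspace (RS), passed on to KPCA

# Step 2: kernel PCA on the residuals with a Gaussian kernel.
sq = ((R[:, None, :] - R[None, :, :]) ** 2).sum(axis=2)
K = np.exp(-sq / (2.0 * np.median(sq)))       # median-distance width heuristic
n = len(K)
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J                                # double-centred kernel matrix
evals, evecs = np.linalg.eigh(Kc)             # ascending eigenvalues
nonlinear_scores = evecs[:, -k:] * np.sqrt(np.maximum(evals[-k:], 0.0))
```

Monitoring statistics would then be built on `T` (linear features) and `nonlinear_scores` (nonlinear features) together, which is the point of the serial structure.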

7.
IEEE Trans Neural Netw; 18(1): 28-41, 2007 Jan.
Article in English | MEDLINE | ID: mdl-17278459

ABSTRACT

Many kernel classifier construction algorithms adopt classification accuracy as the performance metric in model evaluation. Moreover, equal weighting is often applied to each data sample in parameter estimation. These modeling practices become problematic if the data sets are imbalanced. We present a kernel classifier construction algorithm using orthogonal forward selection (OFS) to optimize model generalization for imbalanced two-class data sets. This kernel classifier identification algorithm is based on a new regularized orthogonal weighted least squares (ROWLS) estimator and a model selection criterion that maximizes the leave-one-out area under the curve (LOO-AUC) of the receiver operating characteristic (ROC). It is shown that, owing to the orthogonalization procedure, the LOO-AUC can be calculated via an analytic formula based on the new ROWLS parameter estimator, without actually splitting the estimation data set. The proposed algorithm achieves minimal computational expense via a set of forward recursive updating formulas when searching for model terms with maximal incremental LOO-AUC value. Numerical examples are used to demonstrate the efficacy of the algorithm.


Subjects
Algorithms; Artificial Intelligence; Computing Methodologies; Databases, Factual; Information Storage and Retrieval/methods; Pattern Recognition, Automated/methods; Cluster Analysis; Computer Simulation
8.
IEEE Trans Neural Netw Learn Syst; 28(12): 2872-2884, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28113992

ABSTRACT

The complex-valued (CV) B-spline neural network approach offers a highly effective means of identifying and inverting practical Hammerstein systems. Compared with its conventional CV polynomial-based counterpart, a CV B-spline neural network has superior performance in identifying and inverting CV Hammerstein systems while imposing similar complexity. This paper reviews the optimality of the CV B-spline neural network approach. The advantages of the B-spline neural network approach over polynomial-based modeling are discussed extensively, and the effectiveness of the CV neural-network-based approach is demonstrated in a real-world application. More specifically, we evaluate the comparative performance of the CV B-spline and polynomial-based approaches for nonlinear iterative frequency-domain decision feedback equalization (NIFDDFE) of single-carrier Hammerstein channels. Our results confirm the superior performance of the CV B-spline-based NIFDDFE over its CV polynomial-based counterpart.

9.
IEEE Trans Syst Man Cybern B Cybern; 35(4): 682-93, 2005 Aug.
Article in English | MEDLINE | ID: mdl-16128453

ABSTRACT

Many signal processing applications pose optimization problems with multimodal and nonsmooth cost functions. Gradient methods are ineffective in these situations, so optimization methods that require no gradient and can achieve a globally optimal solution are highly desirable for tackling these difficult problems. This paper proposes a guided global search optimization technique, referred to as the repeated weighted boosting search. The proposed optimization algorithm is extremely simple and easy to implement, involving minimal programming effort. A heuristic explanation is given for the global search capability of this technique. A comparison is made with two better-known and widely used guided global search techniques, the genetic algorithm and adaptive simulated annealing, in terms of the requirements for algorithmic parameter tuning. The effectiveness of the proposed algorithm as a global optimizer is investigated through several application examples.
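As a rough illustration of the population-based, gradient-free search described above, the following is a drastically simplified variant (not the published RWBS algorithm; the weighting rule, reflection step, and replacement policy are assumptions for illustration): boosting-style weights favour low-cost members, a weighted mean and its reflection through the current best are generated, the two worst members are replaced, and the whole search is repeated from random restarts.

```python
import numpy as np

def rastrigin(x):
    """A standard multimodal benchmark with many local minima; global min 0 at x=0."""
    return x ** 2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * x))

def weighted_boosting_search(cost, lo, hi, pop=8, gens=40, restarts=5, seed=3):
    rng = np.random.default_rng(seed)
    best_x, best_c = None, np.inf
    for _ in range(restarts):                    # "repeated" outer loop
        P = rng.uniform(lo, hi, size=pop)        # random initial population
        for _ in range(gens):
            c = cost(P)
            w = np.exp(-(c - c.min()))           # low cost -> high weight
            w /= w.sum()
            xb = P[np.argmin(c)]                 # current best member
            x_mean = np.sum(w * P)               # weighted convex combination
            x_mirror = xb + (xb - x_mean)        # reflection through the best
            order = np.argsort(cost(P))          # replace the two worst members
            P[order[-1]], P[order[-2]] = x_mean, x_mirror
        c = cost(P)
        if c.min() < best_c:
            best_c, best_x = c.min(), P[np.argmin(c)]
    return best_x, best_c

x_star, c_star = weighted_boosting_search(rastrigin, -5.12, 5.12)
```

The best member is never replaced, so the best cost within a restart is monotonically non-increasing; restarts guard against getting stuck in a poor basin.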


Subjects
Algorithms; Artificial Intelligence; Diabetes Mellitus/diagnosis; Diagnosis, Computer-Assisted/methods; Information Storage and Retrieval/methods; Pattern Recognition, Automated/methods; Signal Processing, Computer-Assisted; Computer Simulation; Diabetes Mellitus/classification; Humans; Models, Biological; Models, Statistical; Numerical Analysis, Computer-Assisted; Reproducibility of Results; Sensitivity and Specificity; Stochastic Processes
10.
IEEE Trans Cybern; 45(12): 2925-36, 2015 Dec.
Article in English | MEDLINE | ID: mdl-25643422

ABSTRACT

An efficient data-based modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks, with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross-validation. Each RBF kernel has its own width parameter, and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each associated with one kernel, one at a time within the orthogonal forward regression (OFR) procedure. Each OFR step thus consists of selecting one model term based on the LOO mean square error (LOOMSE), followed by optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since the same LOOMSE is adopted for model selection as in our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, the proposed new OFR algorithm is also capable of producing a very sparse RBF model with excellent generalization performance. Unlike the LROLS algorithm, which requires an additional iterative loop to optimize the regularization parameters as well as an additional procedure to optimize the kernel width, the proposed OFR algorithm optimizes both the kernel widths and the regularization parameters within a single OFR procedure, so the required computational complexity is dramatically reduced. Nonlinear system identification examples demonstrate the effectiveness of this new approach in comparison with the well-known support vector machine and least absolute shrinkage and selection operator methods, as well as the LROLS algorithm.
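The core loop, one model term selected per forward step by minimizing the LOOMSE, can be sketched as follows. For brevity the kernel width and regularization parameter are held fixed rather than optimized per kernel as in the paper, and plain matrix inversion stands in for the orthogonal decomposition; the data set and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-3.0, 3.0, size=(120, 1))
y = np.sin(2.0 * x[:, 0]) + 0.1 * rng.normal(size=120)

def gaussian_design(x, centres, width):
    """Design matrix of Gaussian RBF kernels placed at the given centres."""
    sq = (x - centres.T) ** 2
    return np.exp(-sq / (2.0 * width ** 2))

def loomse(Phi, y, lam):
    """Exact LOO mean-square error of a ridge-regularized linear model.

    Uses the identity e_loo_i = e_i / (1 - h_ii), which holds exactly
    for a fixed regularization parameter lam.
    """
    A_inv = np.linalg.inv(Phi.T @ Phi + lam * np.eye(Phi.shape[1]))
    H = Phi @ A_inv @ Phi.T
    e = y - H @ y
    return float(np.mean((e / (1.0 - np.diag(H))) ** 2))

width, lam = 0.8, 1e-3           # fixed here; tuned per kernel in the paper
candidates = x.copy()            # training inputs as candidate centres
selected, best_err = [], np.inf
for _ in range(15):              # forward selection, one centre at a time
    errs = np.full(len(candidates), np.inf)
    for j in range(len(candidates)):
        if j in selected:
            continue
        trial = selected + [j]
        errs[j] = loomse(gaussian_design(x, candidates[trial], width), y, lam)
    j_best = int(np.argmin(errs))
    if errs[j_best] >= best_err:   # LOOMSE no longer improves: stop
        break
    best_err, selected = errs[j_best], selected + [j_best]
```

Because the LOOMSE itself stops improving at some model size, the selection terminates automatically with a sparse centre set.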

11.
IEEE Trans Syst Man Cybern B Cybern; 34(4): 1708-17, 2004 Aug.
Article in English | MEDLINE | ID: mdl-15462438

ABSTRACT

This paper presents an efficient construction algorithm for obtaining sparse kernel density estimates based on a regression approach that directly optimizes model generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. A local regularization method is incorporated naturally into the density construction process to further enforce sparsity. An additional advantage of the proposed algorithm is that it is fully automatic: the user is not required to specify any criterion to terminate the density construction procedure. This is in contrast to an existing state-of-the-art kernel density estimation method using the support vector machine (SVM), where the user is required to specify a critical algorithm parameter. Several examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with accuracy comparable to that of the full-sample optimized Parzen window density estimate. Our experimental results also demonstrate that the proposed algorithm compares favorably with the SVM method, in terms of both test accuracy and sparsity, for constructing kernel density estimates.


Subjects
Algorithms; Artificial Intelligence; Least-Squares Analysis; Models, Statistical; Regression Analysis; Statistical Distributions; Computer Simulation
12.
IEEE Trans Syst Man Cybern B Cybern; 34(1): 598-608, 2004 Feb.
Article in English | MEDLINE | ID: mdl-15369096

ABSTRACT

A new robust neurofuzzy model construction algorithm is introduced for modeling a priori unknown dynamical systems from observed finite data sets in the form of a set of fuzzy rules. Based on a Takagi-Sugeno (T-S) inference mechanism, a one-to-one mapping between a fuzzy rule base and a model matrix feature subspace is established. This link enables rule-based knowledge to be extracted from the matrix subspace to enhance model transparency. To achieve maximal model robustness and sparsity, a new robust extended Gram-Schmidt (G-S) method is introduced via two effective and complementary approaches: regularization and D-optimality experimental design. Model rule bases are decomposed into orthogonal subspaces, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. A locally regularized orthogonal least squares algorithm, combined with a D-optimality criterion for subspace-based rule selection, is extended for fuzzy rule regularization and subspace-based information extraction. By using a weighting for the D-optimality cost function, the entire model construction procedure becomes automatic. Numerical examples are included to demonstrate the effectiveness of the proposed algorithm.

13.
IEEE Trans Syst Man Cybern B Cybern; 34(2): 898-911, 2004 Apr.
Article in English | MEDLINE | ID: mdl-15376838

ABSTRACT

This paper introduces an efficient construction algorithm for obtaining sparse linear-in-the-weights regression models by directly optimizing model generalization capability. This is achieved by utilizing the delete-1 cross-validation concept and the associated leave-one-out test error, also known as the predicted residual sums of squares (PRESS) statistic, without resorting to any other validation data set for model evaluation during model construction. Computational efficiency is ensured by using orthogonal forward regression, but the algorithm incrementally minimizes the PRESS statistic instead of the usual sum of squared training errors. A local regularization method can naturally be incorporated into the model selection procedure to further enforce model sparsity. The proposed algorithm is fully automatic: the user is not required to specify any criterion to terminate the model construction procedure. Comparisons with existing state-of-the-art modeling methods are given, and several examples demonstrate the ability of the proposed algorithm to construct sparse models that generalize well.
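The PRESS statistic is attractive precisely because the delete-1 residuals need no refitting: for a linear-in-the-weights least-squares model, e_(i) = e_i / (1 - h_ii) exactly, where h_ii is the i-th diagonal entry of the hat matrix. A quick check of this identity against explicit leave-one-out refits (the toy data set is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=40)

# Hat matrix of ordinary least squares: H = X (X'X)^{-1} X'
H = X @ np.linalg.solve(X.T @ X, X.T)
resid = y - H @ y
press_fast = np.sum((resid / (1.0 - np.diag(H))) ** 2)   # PRESS, no refits

# The same statistic computed the slow way: refit with sample i deleted.
press_slow = 0.0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    beta_i, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    press_slow += (y[i] - X[i] @ beta_i) ** 2
```

The fast formula needs one fit instead of n, which is what makes incremental PRESS minimization inside a forward regression affordable.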

14.
IEEE Trans Cybern; 43(1): 286-95, 2013 Feb.
Article in English | MEDLINE | ID: mdl-22829416

ABSTRACT

A two-stage linear-in-the-parameter model construction algorithm is proposed for noisy two-class classification problems. The purpose of the first stage is to produce a prefiltered signal that is used as the desired output for the second stage, which constructs a sparse linear-in-the-parameter classifier. The prefiltering stage is a two-level process aimed at maximizing the model's generalization capability, in which a new elastic-net model identification algorithm using singular value decomposition is employed at the lower level, and two regularization parameters are then optimized using a particle swarm optimization algorithm at the upper level by minimizing the leave-one-out (LOO) misclassification rate. It is shown that the LOO misclassification rate based on the resultant prefiltered signal can be computed analytically without splitting the data set, and the associated computational cost is minimal owing to orthogonality. The second stage of sparse classifier construction is based on orthogonal forward regression with the D-optimality algorithm. Extensive simulations on noisy data sets illustrate the competitiveness of this approach for noisy classification problems.

15.
IEEE Trans Syst Man Cybern B Cybern; 40(4): 1101-14, 2010 Aug.
Article in English | MEDLINE | ID: mdl-20007052

ABSTRACT

A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to satisfy the nonnegativity and unit-sum constraints, and this weight-updating process has the additional desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the high-dimensional, ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed tunable-kernel model to construct a very compact yet accurate density estimate.
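The final weight-updating step can be illustrated with a generic Lee-Seung-style multiplicative update (a simplified stand-in, not the exact multiplicative nonnegative quadratic programming update of the paper; the design matrix and target are made-up): because every factor in the update is nonnegative, the mixing weights can never turn negative, and a renormalization keeps them summing to one.

```python
import numpy as np

rng = np.random.default_rng(6)
Phi = np.abs(rng.normal(size=(50, 6)))   # nonnegative kernel design matrix
target = np.abs(rng.normal(size=50))     # stand-in for the target density values

w = np.full(6, 1.0 / 6.0)                # start from uniform mixing weights
num = Phi.T @ target                     # nonnegative numerator, fixed
for _ in range(200):
    den = Phi.T @ (Phi @ w) + 1e-12      # nonnegative denominator
    w = w * (num / den)                  # multiplicative step: w stays >= 0
    w = w / w.sum()                      # renormalize onto the unit simplex
```

Multiplicative updates of this kind also tend to drive unneeded weights toward zero, which is the model-size-reducing side effect the abstract mentions.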


Subjects
Data Interpretation, Statistical; Models, Statistical; Statistical Distributions; Computer Simulation; Regression Analysis; Sample Size
16.
IEEE Trans Syst Man Cybern B Cybern; 39(2): 457-66, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19095548

ABSTRACT

An orthogonal forward selection (OFS) algorithm based on leave-one-out (LOO) criteria is proposed for the construction of radial basis function (RBF) networks with tunable nodes. Each stage of the construction process determines an RBF node, namely its center vector and diagonal covariance matrix, by minimizing the LOO statistics. For regression applications, the LOO criterion is chosen to be the LOO mean-square error, while the LOO misclassification rate is adopted in two-class classification applications. This OFS-LOO algorithm is computationally efficient, and it is capable of constructing parsimonious RBF networks that generalize well. Moreover, the proposed algorithm is fully automatic, and the user does not need to specify a termination criterion for the construction process. The effectiveness of the proposed RBF network construction procedure is demonstrated using examples taken from both regression and classification applications.

17.
IEEE Trans Neural Netw; 19(11): 1961-7, 2008 Nov.
Article in English | MEDLINE | ID: mdl-19000965

ABSTRACT

In this brief, we propose an orthogonal forward regression (OFR) algorithm based on the principles of branch and bound (BB) and A-optimality experimental design. At each forward regression step, each candidate from a pool of candidate regressors, referred to as S, is evaluated in turn with three possible outcomes: 1) one candidate is selected and included in the model; 2) some candidates remain in S for evaluation in the next forward regression step; and 3) the rest are permanently eliminated from S. Based on the BB principle in combination with an A-optimality composite cost function for model structure determination, a simple adaptive diagnostic test is proposed to determine the decision boundary between 2) and 3). As such, the proposed algorithm can significantly reduce the computational cost of the A-optimality OFR algorithm. Numerical examples are used to demonstrate the effectiveness of the proposed algorithm.


Subjects
Algorithms; Models, Statistical; Neural Networks, Computer; Computer Simulation; Regression Analysis