2.
IEEE Trans Biomed Eng ; 67(8): 2381-2388, 2020 Aug.
Article in English | MEDLINE | ID: mdl-31870975

ABSTRACT

OBJECTIVE: To investigate the use of pre-learnt subspace and spatial constraints for denoising magnetic resonance spectroscopic imaging (MRSI) data. METHOD: We exploit the partial separability, or subspace structure, of high-dimensional MRSI data for denoising. More specifically, we incorporate a subspace model with a pre-learnt spectral basis into the low-rank approximation (LORA) method. The spectral basis is determined from empirical prior distributions of spectral parameter variations learnt from auxiliary training data; spatial priors are also incorporated, as in LORA, to further improve denoising performance. RESULTS: The effects of the explicit subspace and spatial constraints in reducing estimation bias and variance are analyzed using Cramér-Rao lower bound analysis, a Monte Carlo study, and an experimental study. CONCLUSION: The denoising effectiveness of LORA can be significantly improved by incorporating a pre-learnt spectral basis and spatial priors. SIGNIFICANCE: This study provides an effective method for denoising MRSI data, along with a comprehensive analysis of its performance. The proposed method is expected to be useful for a wide range of MRSI studies.
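A minimal sketch of the subspace idea described above, assuming synthetic stand-in data (an illustration, not the authors' implementation; the spatial priors of LORA are omitted and all array shapes are hypothetical):

```python
import numpy as np

# Illustrative sketch of subspace-constrained denoising (not the authors' code).
# X: noisy MRSI data as a Casorati matrix, shape (n_voxels, n_spectral_points).
# V: pre-learnt spectral basis, shape (rank, n_spectral_points), here obtained
#    from stand-in "training" spectra via an SVD.

rng = np.random.default_rng(0)
n_voxels, n_points, rank = 256, 512, 8

training = rng.standard_normal((1000, n_points))   # stand-in training spectra
_, _, Vt = np.linalg.svd(training, full_matrices=False)
V = Vt[:rank]                                      # pre-learnt spectral basis

X = rng.standard_normal((n_voxels, n_points))      # stand-in for noisy data
U = X @ np.linalg.pinv(V)                          # spatial coefficients, (n_voxels, rank)
X_denoised = U @ V                                 # low-rank, subspace-constrained estimate
```

Because the pre-learnt basis fixes the spectral subspace, only the low-dimensional spatial coefficients U are estimated from the noisy data, which is where the variance reduction comes from.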


Subjects
Algorithms; Magnetic Resonance Imaging; Magnetic Resonance Spectroscopy; Monte Carlo Method
3.
IEEE Trans Image Process ; 27(5): 2354-2367, 2018 May.
Article in English | MEDLINE | ID: mdl-29470171

ABSTRACT

This paper presents a new supervised classification algorithm for remotely sensed hyperspectral images (HSIs) which integrates spectral and spatial information in a unified Bayesian framework. First, we formulate the HSI classification problem from a Bayesian perspective. Then, we adopt a convolutional neural network (CNN) to learn the posterior class distributions, using a patch-wise training strategy to better exploit the spatial information. Next, spatial information is further incorporated by placing a spatial smoothness prior on the labels. Finally, we iteratively update the CNN parameters using stochastic gradient descent and update the class labels of all pixel vectors using an α-expansion min-cut-based algorithm. Compared with other state-of-the-art methods, the proposed method achieves better performance on one synthetic data set and two benchmark HSI data sets across a number of experimental settings.
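A minimal PyTorch sketch of the patch-wise CNN training strategy (illustrative only: the band, patch, and class sizes are assumptions, and the spatial smoothness prior and α-expansion label update are omitted):

```python
import torch
import torch.nn as nn

# Sketch of patch-wise CNN training for HSI classification (an illustration of
# the strategy described above, not the authors' code).  Assumes 9x9 patches
# centred on labelled pixels; sizes below are arbitrary placeholders.

class PatchCNN(nn.Module):
    def __init__(self, bands: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_classes),        # logits; softmax gives posteriors
        )

    def forward(self, x):
        return self.net(x)

bands, n_classes = 103, 9                    # assumed sizes
model = PatchCNN(bands, n_classes)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)   # stochastic gradient descent
loss_fn = nn.CrossEntropyLoss()

patches = torch.randn(64, bands, 9, 9)       # stand-in for extracted patches
labels = torch.randint(0, n_classes, (64,))
for _ in range(10):                          # a few SGD steps
    opt.zero_grad()
    loss = loss_fn(model(patches), labels)
    loss.backward()
    opt.step()
```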

4.
IEEE Trans Cybern ; 46(5): 1189-201, 2016 May.
Article in English | MEDLINE | ID: mdl-26011874

ABSTRACT

Learning with the l1-regularizer has generated a great deal of research in the learning theory community. Previously known results for learning with the l1-regularizer are based on the assumption that samples are independent and identically distributed (i.i.d.), and the best known learning rate for l1-regularization-type algorithms is O(1/√m), where m is the sample size. This paper goes beyond the classic i.i.d. framework and investigates the generalization performance of least square regression with the l1-regularizer (l1-LSR) based on uniformly ergodic Markov chain (u.e.M.c.) samples. On the theoretical side, we prove that the learning rate of l1-LSR for u.e.M.c. samples, denoted l1-LSR(M), is of order O(1/m), which is faster than the O(1/√m) rate of the i.i.d. counterpart. On the practical side, we propose a resampling-based algorithm to generate u.e.M.c. samples. We show that the proposed l1-LSR(M) improves on l1-LSR(i.i.d.) in generalization error at the low cost of u.e.M.c. resampling.
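As an illustration of the practical side, one simple way to produce a uniformly ergodic Markov chain on a finite training set is a local random walk over sample indices; the sketch below (not the paper's scheme) feeds such a chain into an off-the-shelf Lasso solver:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Illustrative sketch only: the paper's resampling scheme is not reproduced.
# A local random walk over sample indices is irreducible and aperiodic on a
# finite state space, hence uniformly ergodic with uniform stationary law.

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 20))
y = X @ rng.standard_normal(20) + 0.1 * rng.standard_normal(2000)

m, n = 500, len(X)                              # chain length, dataset size
idx = [rng.integers(n)]
for _ in range(m - 1):
    j = (idx[-1] + rng.integers(-50, 51)) % n   # symmetric local proposal
    idx.append(j)                               # always accepted: uniform target

model = Lasso(alpha=0.01).fit(X[idx], y[idx])   # l1-LSR on the Markov sample
print(model.coef_[:5])
```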

5.
Neural Netw ; 63: 57-65, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25481671

ABSTRACT

Recently, spherical data processing has emerged in many applications and attracted considerable attention. Among the methods for dealing with spherical data, the spherical neural network (SNN) method has been recognized as a very efficient tool, since SNNs possess both good approximation capability and a spatial localization property. For better localized approximants, weighted approximation should be considered, since different areas of the sphere may play different roles in the approximation process. In this paper, using minimal Riesz energy points and the spherical cap average operator, we first construct a class of well-localized SNNs with a bounded sigmoidal activation function, and then study their approximation capabilities. More specifically, we establish a Jackson-type error estimate for weighted SNN approximation in the metric of the L^p space for the well-developed doubling weights.
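For orientation, a Jackson-type estimate generically takes the following shape (generic form only; the paper's exact statement, constants, and exponents differ):

```latex
% Generic shape of a Jackson-type estimate (for orientation only).
% f: target function on the sphere S^d; w: doubling weight;
% G_n: SNN with n hidden neurons; omega: a (weighted) modulus of smoothness.
\[
  \| f - G_n f \|_{L^p_w(\mathbb{S}^d)} \;\le\; C\,\omega\!\left(f,\, n^{-1/d}\right)_{L^p_w},
\]
% where C depends on p, d, and the doubling constant of w, but not on f or n.
```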


Subjects
Algorithms; Neural Networks, Computer
6.
IEEE Trans Neural Netw Learn Syst ; 26(3): 628-39, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25343770

ABSTRACT

In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish a bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present numerical studies of its learning ability on benchmark repositories. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling when the training sample size is large.
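A sketch of the flavor of such an online algorithm, using a linear Pegasos-style update on the regularized hinge loss as a stand-in for the paper's RKHS formulation, with training indices drawn from a Markov chain rather than at random (all parameter values are assumptions):

```python
import numpy as np

# Sketch of an online SVM-style learner fed by a Markov stream of samples
# (illustrative; the paper works in an RKHS, here a linear model for brevity).

rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 10))
y = np.sign(X[:, 0] + 0.3 * rng.standard_normal(5000))

lam, w = 0.01, np.zeros(10)
i = rng.integers(len(X))
for t in range(1, 2001):
    i = (i + rng.integers(-20, 21)) % len(X)    # Markov stream over indices
    eta = 1.0 / (lam * t)                       # decaying step size
    margin = y[i] * (w @ X[i])
    w *= (1 - eta * lam)                        # shrink (regularization)
    if margin < 1:                              # hinge-loss subgradient step
        w += eta * y[i] * X[i]

print("training accuracy:", np.mean(np.sign(X @ w) == y))
```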

7.
IEEE Trans Cybern ; 45(6): 1169-79, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25163077

ABSTRACT

Previously known works studying the generalization ability of the support vector machine classification (SVMC) algorithm are usually based on the assumption of independent and identically distributed samples. In this paper, we go far beyond this classical framework by studying the generalization ability of SVMC based on uniformly ergodic Markov chain (u.e.M.c.) samples. We analyze the excess misclassification error of SVMC based on u.e.M.c. samples, and obtain the optimal learning rate of SVMC for u.e.M.c. samples. We also introduce a new Markov sampling algorithm for SVMC to generate u.e.M.c. samples from a given dataset, and present numerical studies on the learning performance of SVMC based on Markov sampling for benchmark datasets. The numerical studies show that SVMC based on Markov sampling not only has better generalization ability when the number of training samples is large, but also yields sparser classifiers when the dataset size is large relative to the input dimension.
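One plausible reading of a loss-ratio Markov sampling scheme for classification, sketched below (an illustration, not a verbatim reproduction of the paper's algorithm): a pilot classifier scores candidates, and a uniformly proposed candidate replaces the current sample with probability min(1, e^{-loss*}/e^{-loss}).

```python
import numpy as np
from sklearn.svm import SVC

# Sketch of loss-ratio Markov sampling for classification (illustrative).
rng = np.random.default_rng(2)
X = rng.standard_normal((3000, 5))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

pilot = SVC(kernel="rbf").fit(X[:200], y[:200])               # pilot model
loss = np.maximum(0.0, 1.0 - y * pilot.decision_function(X))  # hinge losses

i, chosen = rng.integers(len(X)), []
for _ in range(500):
    j = rng.integers(len(X))                         # uniform candidate
    if rng.random() < min(1.0, np.exp(loss[i] - loss[j])):
        i = j                                        # accept the move
    chosen.append(i)

final = SVC(kernel="rbf").fit(X[chosen], y[chosen])  # train on the Markov sample
```

Because the proposal is uniform and the losses on a finite dataset are bounded, the acceptance probability is bounded below, so the resulting index chain is uniformly ergodic.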

8.
IEEE Trans Neural Netw Learn Syst ; 26(1): 7-20, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25069126

ABSTRACT

An extreme learning machine (ELM) is a feedforward neural network (FNN)-like learning system whose connections with output neurons are adjustable, while the connections with and within hidden neurons are randomly fixed. Numerous applications have demonstrated the feasibility and high efficiency of ELM-like systems. It has, however, remained open whether this is true for general applications. In this two-part paper, we conduct a comprehensive feasibility analysis of ELM. In Part I, we provide an answer to the question by theoretically justifying the following: 1) for some suitable activation functions, such as polynomials, Nadaraya-Watson, and sigmoid functions, ELM-like systems can attain the theoretical generalization bound of FNNs with all connections adjusted, i.e., they do not degrade the generalization capability of the FNNs even when the connections with and within hidden neurons are randomly fixed; 2) the number of hidden neurons needed for an ELM-like system to achieve the theoretical bound can be estimated; and 3) whenever the activation function is a polynomial, the resulting hidden-layer output matrix has full column rank, so the generalized-inverse technique can be applied efficiently to yield the solution of an ELM-like system, and, for the nonpolynomial case, Tikhonov regularization can be applied to guarantee weak regularity without sacrificing the generalization capability. In Part II, however, we reveal a different aspect of the feasibility of ELM: there also exist activation functions that make the corresponding ELM degrade the generalization capability. The obtained results underpin the feasibility and efficiency of ELM-like systems, and yield various generalizations and improvements of the systems as well.
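The structure described in point 3) is easy to make concrete. A minimal numpy sketch of an ELM with a sigmoid activation, random fixed hidden connections, and output weights solved via the generalized inverse (data and sizes are synthetic stand-ins):

```python
import numpy as np

# Minimal ELM sketch: random, fixed hidden connections; only the output
# weights are solved for, here via the generalized (pseudo-) inverse.

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 8))                 # inputs
Y = np.sin(X[:, :1]) + 0.05 * rng.standard_normal((500, 1))

n_hidden = 100
W = rng.standard_normal((8, n_hidden))            # random input weights (fixed)
b = rng.standard_normal(n_hidden)                 # random biases (fixed)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))            # sigmoid hidden-layer output

beta = np.linalg.pinv(H) @ Y                      # adjustable output weights
print("train MSE:", float(np.mean((Y - H @ beta) ** 2)))
```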


Subjects
Artificial Intelligence; Learning/physiology; Models, Neurological; Neurons/physiology; Algorithms; Computer Simulation; Generalization, Psychological; Humans; Nerve Net/physiology; Neural Networks, Computer
9.
IEEE Trans Neural Netw Learn Syst ; 26(1): 21-34, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25069128

ABSTRACT

An extreme learning machine (ELM) can be regarded as a two-stage feedforward neural network (FNN) learning system that randomly assigns the connections with and within hidden neurons in the first stage and tunes the connections with output neurons in the second stage. ELM training is therefore essentially a linear learning problem, which significantly reduces the computational burden. Numerous applications show that this reduction in computational burden does not degrade the generalization capability. It has, however, remained open whether this is true in theory. The aim of this paper is to study the theoretical feasibility of ELM by analyzing its pros and cons. In Part I, we pointed out that with appropriately selected activation functions, ELM does not degrade the generalization capability in the sense of expectation. In this paper, we take the study in a different direction and show that the randomness of ELM also leads to certain negative consequences. On the one hand, we find that the randomness causes an additional uncertainty problem for ELM, both in approximation and in learning. On the other hand, we theoretically justify that there also exist activation functions for which the corresponding ELM degrades the generalization capability. In particular, we prove that the generalization capability of ELM with a Gaussian kernel is essentially worse than that of an FNN with a Gaussian kernel. To facilitate the use of ELM, we also provide a remedy for this degradation: we find that the well-developed coefficient regularization technique can essentially improve the generalization capability. The obtained results reveal essential characteristics of ELM and give theoretical guidance on how to use it.
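The coefficient-regularization remedy can be sketched as a one-line change to the ELM solve above: replace the pseudoinverse with a Tikhonov-regularized system (the regularization strength below is an arbitrary placeholder):

```python
import numpy as np

# Tikhonov- (ridge-) regularized ELM output solve, sketching the
# coefficient-regularization remedy discussed above.

def elm_ridge(H: np.ndarray, Y: np.ndarray, lam: float) -> np.ndarray:
    """Solve (H^T H + lam I) beta = H^T Y."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ Y)

# Usage with the H, Y from the previous sketch:
# beta = elm_ridge(H, Y, lam=1e-2)
```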


Subjects
Artificial Intelligence; Learning/physiology; Models, Neurological; Neurons/physiology; Algorithms; Computer Simulation; Humans; Neural Networks, Computer; Probability
10.
Neural Netw ; 53: 40-51, 2014 May.
Article in English | MEDLINE | ID: mdl-24531039

ABSTRACT

In this paper we consider the Gaussian RBF kernel support vector machine classification (SVMC) algorithm with uniformly ergodic Markov chain (u.e.M.c.) samples in reproducing kernel Hilbert spaces (RKHS). We analyze the learning rates of Gaussian RBF kernel SVMC based on u.e.M.c. samples and, using their strong mixing property, obtain a fast learning rate. We also present numerical studies on the learning performance of Gaussian RBF kernel SVMC based on Markov sampling for real-world datasets. These experimental results show that Gaussian RBF kernel SVMC based on Markov sampling has better learning performance than that based on random, independent sampling.
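To make the kernel explicit, the sketch below builds the Gaussian RBF Gram matrix k(x, y) = exp(-||x − y||² / (2σ²)) by hand and trains an SVM on it (illustrative; the data and parameter values are assumptions):

```python
import numpy as np
from sklearn.svm import SVC

# Explicit Gaussian RBF Gram matrix + SVM on precomputed kernel (illustrative).
rng = np.random.default_rng(4)
X = rng.standard_normal((300, 4))
y = np.where(np.linalg.norm(X, axis=1) > 1.8, 1, -1)

sigma = 1.0
sq = np.sum(X**2, axis=1)
D2 = sq[:, None] + sq[None, :] - 2 * X @ X.T      # pairwise squared distances
K = np.exp(-D2 / (2 * sigma**2))                  # Gaussian RBF Gram matrix

clf = SVC(kernel="precomputed").fit(K, y)
print("train accuracy:", clf.score(K, y))
```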


Subjects
Neural Networks, Computer; Support Vector Machine; Markov Chains
11.
IEEE Trans Cybern ; 44(9): 1497-507, 2014 Sep.
Article in English | MEDLINE | ID: mdl-24184794

ABSTRACT

This paper considers the generalization ability of two regularized regression algorithms, least square regularized regression (LSRR) and support vector machine regression (SVMR), based on non-independent and identically distributed (non-i.i.d.) samples. Unlike previously known works for non-i.i.d. samples, in this paper we study the generalization bounds of the two regularized regression algorithms based on uniformly ergodic Markov chain (u.e.M.c.) samples. Inspired by Markov chain Monte Carlo (MCMC) methods, we also introduce a new Markov sampling algorithm for regression to generate u.e.M.c. samples from a given dataset, and we then present numerical studies on the learning performance of LSRR and SVMR based on Markov sampling. The experimental results show that LSRR and SVMR based on Markov sampling yield markedly smaller mean square errors and smaller variances than random sampling.
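A sketch comparing the two algorithms on the same Markov-drawn subsample, using KernelRidge as a stand-in for LSRR and SVR for SVMR (kernel and parameter choices are assumptions, and the index chain below is a simple illustrative u.e.M.c., not the paper's sampler):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.svm import SVR

# LSRR vs SVMR on a Markov-drawn subsample (illustrative sketch).
rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(2000)

idx = [rng.integers(len(X))]                        # simple Markov index chain
for _ in range(399):
    idx.append((idx[-1] + rng.integers(-25, 26)) % len(X))

lsrr = KernelRidge(kernel="rbf", alpha=0.1).fit(X[idx], y[idx])
svmr = SVR(kernel="rbf", C=1.0).fit(X[idx], y[idx])
for name, m in [("LSRR", lsrr), ("SVMR", svmr)]:
    print(name, "MSE:", np.mean((m.predict(X) - y) ** 2))
```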

12.
IEEE Trans Neural Netw Learn Syst ; 24(2): 288-300, 2013 Feb.
Article in English | MEDLINE | ID: mdl-24808282

ABSTRACT

Fisher linear discriminant (FLD) is a well-known method for dimensionality reduction and classification that projects high-dimensional data onto a low-dimensional space where the data achieve maximum class separability. Previous works describing the generalization ability of FLD have usually been based on the assumption of independent and identically distributed (i.i.d.) samples. In this paper, we go far beyond this classical framework by studying the generalization ability of FLD based on Markov sampling. We first establish bounds on the generalization performance of FLD based on uniformly ergodic Markov chain (u.e.M.c.) samples, and prove that FLD based on u.e.M.c. samples is consistent. Following the enlightening idea of Markov chain Monte Carlo methods, we also introduce a Markov sampling algorithm for FLD to generate u.e.M.c. samples from a given dataset of finite size. Through simulation studies and numerical studies on benchmark repositories using FLD, we find that FLD based on u.e.M.c. samples generated by Markov sampling can provide smaller misclassification rates than i.i.d. samples.
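The FLD projection itself is the standard textbook construction; a from-scratch numpy sketch on synthetic two-class data (the paper's Markov sampling step, omitted here, would decide which rows enter the scatter estimates):

```python
import numpy as np

# Two-class Fisher linear discriminant, textbook construction (illustrative).
rng = np.random.default_rng(6)
X0 = rng.multivariate_normal([0, 0], np.eye(2), 200)
X1 = rng.multivariate_normal([2, 2], np.eye(2), 200)

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
      + np.cov(X1, rowvar=False) * (len(X1) - 1))   # within-class scatter
w = np.linalg.solve(Sw, mu1 - mu0)                  # projection direction
threshold = w @ (mu0 + mu1) / 2                     # midpoint decision rule

pred1 = (X1 @ w) > threshold                        # classify class-1 points
print("class-1 accuracy:", pred1.mean())
```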
