Results 1 - 6 of 6
1.
Sensors (Basel); 24(6), 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38544221

ABSTRACT

The BeiDou Navigation Satellite System (BDS) provides real-time absolute positioning to users worldwide and plays a key role in the rapidly evolving field of autonomous driving. In complex urban environments, BDS positioning accuracy often suffers large deviations caused by non-line-of-sight (NLOS) signals. Deep learning (DL) methods have shown strong capabilities in detecting complex and variable NLOS signals, but they still face two limitations. On the one hand, supervised learning methods require labeled samples, and constructing databases with large numbers of labels is a persistent bottleneck. On the other hand, the collected data tend to carry varying degrees of noise, leading to low accuracy and poor generalization of the detection model, especially when the environment around the receiver changes. In this article, we propose a novel deep neural architecture, the convolutional denoising autoencoder network (CDAENet), to detect NLOS signals in urban forest environments. Specifically, we first design a denoising autoencoder based on unsupervised DL to reduce the dimension of the long time-series signal and extract deep features from the data. By injecting a controlled amount of noise into the input, the denoising autoencoder also improves the model's robustness when identifying noisy data. A multilayer perceptron (MLP) is then used to capture the non-linearity of the BDS signal. Finally, the proposed CDAENet model is validated on a real urban forest dataset. The experimental results show that the satellite detection accuracy of the proposed algorithm exceeds 95%, about an 8% improvement over existing machine-learning-based methods and about a 3% improvement over deep-learning-based approaches.
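The core idea of the denoising-autoencoder stage, injecting noise into the input while reconstructing the clean signal so that the low-dimensional features become noise-robust, can be sketched as follows. This is a minimal single-hidden-layer illustration on synthetic data, not the CDAENet architecture or the BDS dataset from the article; all sizes and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for BDS signal windows: 200 noisy copies of a 32-point time series.
X = np.sin(np.linspace(0, 4 * np.pi, 32)) + 0.1 * rng.standard_normal((200, 32))

def train_denoising_ae(X, hidden=8, noise_std=0.3, lr=0.1, epochs=400):
    """Train a one-hidden-layer denoising autoencoder with plain gradient descent.

    Noise is injected into the INPUT only; the reconstruction target is the
    clean signal, which is what makes the learned features noise-robust.
    """
    n, d = X.shape
    W1 = 0.1 * rng.standard_normal((d, hidden))   # encoder weights
    W2 = 0.1 * rng.standard_normal((hidden, d))   # decoder weights
    for _ in range(epochs):
        X_noisy = X + noise_std * rng.standard_normal(X.shape)
        H = np.tanh(X_noisy @ W1)                 # low-dimensional deep feature
        err = (H @ W2) - X                        # compare with the CLEAN input
        # Backpropagate through decoder and encoder.
        gW2 = H.T @ err / n
        gW1 = X_noisy.T @ ((err @ W2.T) * (1 - H**2)) / n
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2

W1, W2 = train_denoising_ae(X)
H = np.tanh(X @ W1)                               # 8-dim features for a downstream MLP
recon_err = np.mean((np.tanh(X @ W1) @ W2 - X) ** 2)
```

The extracted features `H` would feed the downstream MLP classifier described in the abstract.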

2.
Neural Netw; 168: 180-193, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37757726

ABSTRACT

Deep reinforcement learning (DRL) is a powerful tool for a wide range of control-automation problems. Its performance depends heavily on how accurately the values of environment states are estimated. However, the value estimation network (VEN) in DRL is easily affected by catastrophic interference arising from the environment and from training. In this paper, we propose a dynamic sparse coding (DSC)-based VEN model that obtains precise sparse representations for accurate value prediction and sparse parameters for efficient training; it is applicable not only to Q-learning-structured discrete-action DRL but also to actor-critic-structured continuous-action DRL. In detail, to alleviate interference in the VEN, we employ DSC to learn sparse representations for accurate value estimation with dynamic gradients, going beyond the conventional ℓ1 norm, whose gradients have the same magnitude for every nonzero coefficient. To avoid the influence of redundant parameters, we employ DSC to prune weights with dynamic thresholds, which is more efficient than static thresholds such as those induced by the ℓ1 norm. Experiments demonstrate that the proposed algorithms with dynamic sparse coding achieve higher control performance than existing benchmark DRL algorithms in both discrete-action and continuous-action environments, e.g., over a 25% increase in Puddle World and about a 10% increase in Hopper. Moreover, the proposed algorithm converges efficiently, within fewer episodes, in different environments.
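The contrast between the ℓ1 norm's constant shrinkage and a dynamic, magnitude-dependent shrinkage can be illustrated with thresholding (proximal) operators. The article's DSC operator is not reproduced here; the firm-thresholding operator of the minimax concave penalty is used only as a stand-in to show what a "dynamic gradient" means in practice.

```python
import numpy as np

def soft_threshold(z, lam):
    """Prox of the l1 norm: every surviving coefficient is shrunk by the
    SAME amount lam (a constant-magnitude (sub)gradient)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def firm_threshold(z, lam, gamma=3.0):
    """Prox of the minimax concave penalty (gamma > 1): shrinkage DECREASES
    as |z| grows and vanishes entirely for |z| >= gamma * lam, i.e. the
    penalty's gradient is dynamic rather than constant."""
    return np.where(
        np.abs(z) <= gamma * lam,
        gamma / (gamma - 1.0) * soft_threshold(z, lam),
        z,
    )

z = np.array([0.05, 0.5, 2.0, 5.0])
s = soft_threshold(z, lam=0.5)          # shrinks every nonzero entry by 0.5
f = firm_threshold(z, lam=0.5)          # zeroes small entries, keeps large ones exact
```

Small coefficients are suppressed by both operators, but the dynamic operator leaves large coefficients unbiased, which is the behavior the abstract attributes to DSC over the ℓ1 norm.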


Subjects
Learning, Reinforcement (Psychology), Algorithms, Automation, Benchmarking
3.
IEEE Trans Neural Netw Learn Syst; 34(7): 3568-3579, 2023 Jul.
Article in English | MEDLINE | ID: mdl-34633934

ABSTRACT

Direct-optimization-based dictionary learning has attracted increasing attention for its computational efficiency. However, the existing direct-optimization scheme applies only to a limited class of dictionary learning problems, and it remains an open problem to prove that the whole sequence generated by the algorithm converges to a critical point of the objective function. In this article, we propose a novel direct-optimization-based dictionary learning algorithm that uses the minimax concave penalty (MCP) as a sparsity regularizer, which enforces strong sparsity and yields accurate estimation. To solve the corresponding optimization problem, we first decompose the nonconvex MCP into the difference of two convex components. We then employ the difference-of-convex-functions algorithm and the nonconvex proximal-splitting algorithm to handle the resulting subproblems. In this way, the direct-optimization approach extends to a broader class of dictionary learning problems, even when the sparsity regularizer is nonconvex. In addition, convergence of the proposed algorithm can be proven theoretically. Our numerical simulations demonstrate that the proposed algorithm converges well in different settings and recovers dictionaries robustly. When applied to sparse approximation, it obtains sparser estimates with lower error than the sparsity regularizers used in existing methods. The proposed algorithm is also robust in image denoising and key-frame extraction.
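The decomposition of the nonconvex MCP into the difference of two convex components can be checked numerically. A minimal sketch, using the standard MCP definition and a Huber-type convex component; the exact split used in the article may differ.

```python
import numpy as np

def mcp(t, lam, gamma):
    """Minimax concave penalty: quadratically tapered near zero,
    constant (gamma * lam^2 / 2) beyond |t| = gamma * lam."""
    a = np.abs(t)
    return np.where(a <= gamma * lam,
                    lam * a - t**2 / (2 * gamma),
                    gamma * lam**2 / 2)

def huber_part(t, lam, gamma):
    """Convex (Huber-type) component q in the split MCP(t) = lam*|t| - q(t):
    quadratic for |t| <= gamma*lam, linear with slope lam beyond."""
    a = np.abs(t)
    return np.where(a <= gamma * lam,
                    t**2 / (2 * gamma),
                    lam * a - gamma * lam**2 / 2)

t = np.linspace(-4, 4, 101)
lam, gamma = 1.0, 2.0
lhs = mcp(t, lam, gamma)                       # nonconvex penalty
rhs = lam * np.abs(t) - huber_part(t, lam, gamma)  # convex minus convex
```

Both components on the right-hand side are convex, so difference-of-convex machinery (linearize the subtracted part, solve the remaining convex subproblem) applies directly.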


Subjects
Algorithms, Neural Networks (Computer)
4.
IEEE Trans Neural Netw Learn Syst; 34(11): 8825-8839, 2023 Nov.
Article in English | MEDLINE | ID: mdl-35254997

ABSTRACT

Multiview dictionary learning (DL) is attracting attention in multiview clustering due to its efficient feature-learning ability. However, because of the gaps between views, most existing multiview DL algorithms struggle to exploit the consistent and complementary information in multiview data simultaneously and to learn the most precise representation for multiview clustering. This article proposes an efficient multiview DL algorithm for multiview clustering that uses a partially shared DL model with a flexible ratio of shared sparse coefficients to excavate both consistency and complementarity in the multiview data. In particular, a differentiable scale-invariant function is used as the sparsity regularizer; like the ℓ0 norm, it measures the absolute sparsity of the coefficients, but it is continuous and differentiable almost everywhere. The corresponding optimization problem is solved by a proximal splitting method with extrapolation, and the proximal operator of the differentiable scale-invariant regularizer is derived. Synthetic experiments demonstrate that the proposed algorithm recovers the synthetic dictionary well with reasonable convergence time. Multiview clustering experiments on six real-world multiview datasets show that the proposed algorithm is less sensitive to the regularization parameter than the other algorithms. Furthermore, an appropriate coefficient-sharing ratio helps to exploit consistent information while preserving complementary information in the multiview data, and thus enhances clustering performance. In addition, the convergence results show that the proposed algorithm achieves the best multiview clustering performance among the compared algorithms and generally converges faster than the compared multiview algorithms.
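The partially shared coefficient structure can be sketched as follows: each view reconstructs its data from a coefficient matrix whose top block is shared across views (consistency) and whose bottom block is view-specific (complementarity). All sizes and the sharing ratio below are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

n_atoms, n_samples, ratio = 40, 100, 0.5   # ratio = fraction of shared coefficients
n_shared = int(ratio * n_atoms)            # 20 shared atoms, 20 view-specific atoms
dims = [15, 25]                            # feature dimensions of two views

# Shared coefficient block: identical for every view (consistent information).
Z_shared = rng.standard_normal((n_shared, n_samples))

# Per-view coefficients: [shared block; view-specific block].
Z = [np.vstack([Z_shared,
                rng.standard_normal((n_atoms - n_shared, n_samples))])
     for _ in dims]

# One dictionary per view; each view is reconstructed from its stacked coefficients.
D = [rng.standard_normal((d, n_atoms)) for d in dims]
X = [D[v] @ Z[v] for v in range(len(dims))]
```

In the actual algorithm, `D[v]` and `Z[v]` would be learned from `X[v]` under the scale-invariant sparsity regularizer, with the top `n_shared` rows of every `Z[v]` constrained to agree; here they are generated to exhibit the model's structure.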

5.
IEEE Trans Cybern; 53(2): 765-778, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35316206

ABSTRACT

Deep reinforcement learning (DRL), which depends heavily on the data representation, has shown its potential in many practical decision-making problems. However, the process of acquiring representations in DRL is easily affected by interference from the models, and it retains unnecessary parameters, which reduces control performance. In this article, we propose a double sparse DRL method based on multilayer sparse coding and nonconvex regularized pruning. To alleviate interference in DRL, we propose a multilayer sparse-coding-structured network that obtains deep sparse representations for control in reinforcement learning. Furthermore, we employ a nonconvex log regularizer to promote strong sparsity, efficiently removing unnecessary weights with a regularizer-based pruning scheme. Hence, a double sparse DRL algorithm is developed that can not only learn deep sparse representations to reduce interference but also remove redundant weights while keeping performance robust. Experimental results in five benchmark environments with the deep Q-network (DQN) architecture demonstrate that the proposed method, using deep sparse representations from the multilayer sparse-coding structure, outperforms existing sparse-coding-based DRL in control, for example, completing Mountain Car in 140.81 steps, achieving a nearly 10% reward increase over the single-layer sparse-coding DRL algorithm, and obtaining 286.08 points in Catcher, more than twice the rewards of the other algorithms. Moreover, the proposed algorithm can remove over 80% of the parameters while keeping the performance improvements from deep sparse representations.
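A rough sketch of regularizer-based pruning under a log penalty: because the penalty's gradient λ/(ε + |w|) grows as |w| shrinks, small (redundant) weights are driven to zero while large weights are barely shrunk, and the surviving support becomes the pruning mask. The snippet below operates on an isolated random weight vector with illustrative hyperparameters; the article prunes the weights of a DQN during training, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal(1000)             # stand-in for one layer's weights

def log_reg_shrink(W, lam=0.05, eps=0.1, lr=0.1, steps=50):
    """Iterative shrinkage under the nonconvex log penalty lam*log(1 + |w|/eps).

    The per-step shrinkage lr*lam/(eps + |w|) is LARGE for small weights and
    negligible for large ones, so redundant weights collapse to zero while
    important weights are almost untouched."""
    W = W.copy()
    for _ in range(steps):
        shrink = lr * lam / (eps + np.abs(W))
        W = np.sign(W) * np.maximum(np.abs(W) - shrink, 0.0)
    return W

W_shrunk = log_reg_shrink(W)
mask = np.abs(W_shrunk) > 1e-3            # regularizer-based pruning mask
sparsity = 1.0 - mask.mean()              # fraction of weights removed
```

Pruned weights would then be frozen at zero, shrinking the network while the large-magnitude weights that carry the policy survive.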

6.
IEEE Trans Cybern; 52(10): 10785-10799, 2022 Oct.
Article in English | MEDLINE | ID: mdl-33872171

ABSTRACT

Convolutional transform learning (CTL), which learns filters by minimizing a data-fidelity loss in an unsupervised way, is becoming increasingly popular because it keeps the best of both worlds: the benefits of unsupervised learning and the success of the convolutional neural network. There has been growing interest in developing efficient CTL algorithms. However, developing a convergent, accelerated CTL algorithm that yields accurate representations with proper sparsity remains an open problem. This article presents a new CTL framework with a log regularizer that not only obtains accurate representations but also enforces strong sparsity. To efficiently solve the resulting nonconvex composite optimization problem, we employ the proximal difference-of-convex algorithm (PDCA), which decomposes the nonconvex regularizer into the difference of two convex parts and then optimizes the convex subproblems. Furthermore, we introduce extrapolation to accelerate the algorithm, leading to a fast and efficient CTL algorithm. In particular, we provide a rigorous convergence analysis for the proposed algorithm under the accelerated PDCA. The experimental results demonstrate that the proposed algorithm converges more stably to desirable solutions, with lower approximation error and stronger sparsity, and thus learns filters efficiently, while converging faster than existing CTL algorithms.
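A PDCA iteration for a log-regularized problem can be sketched on a least-squares surrogate. The DC split assumed here is λ·log(1 + |x|/θ) = (λ/θ)|x| − g(x) with convex g(x) = (λ/θ)|x| − λ·log(1 + |x|/θ); each step linearizes g and takes one proximal-gradient move on the convex ℓ1 subproblem. Problem sizes and hyperparameters are illustrative; the article applies PDCA (with extrapolation) to the CTL objective with filter constraints, which is not reproduced.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding: prox of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def pdca_log(A, b, lam=0.1, theta=0.5, steps=500):
    """PDCA for min_x 0.5*||Ax - b||^2 + lam * sum(log(1 + |x|/theta))."""
    m, n = A.shape
    x = np.zeros(n)
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2     # step size 1/L for the smooth part
    for _ in range(steps):
        # Gradient of the subtracted convex part g (zero at x = 0).
        grad_g = np.sign(x) * lam * np.abs(x) / (theta * (theta + np.abs(x)))
        grad_f = A.T @ (A @ x - b)
        # Convex subproblem: l1-prox on the linearized objective.
        x = soft(x - alpha * (grad_f - grad_g), alpha * lam / theta)
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[:4] = [2.0, -1.5, 1.0, 3.0]              # sparse ground truth
b = A @ x_true
x_hat = pdca_log(A, b)
```

The extrapolation step from the article (evaluating the gradient at a momentum point) is omitted here for brevity; it accelerates exactly this iteration.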
