Results 1 - 16 of 16
1.
Sensors (Basel); 24(11), 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38894450

ABSTRACT

Robust face clustering enjoys a wide range of applications in gate passes, surveillance systems, and security analysis with embedded sensors. Nevertheless, existing algorithms have difficulty finding accurate clusters when the data contain noise (e.g., occluded face clustering and recognition). It is known that in subspace clustering, the ℓ1- and ℓ2-norm regularizers improve subspace preservation and connectivity, respectively, and the elastic net regularizer (i.e., a mixture of the ℓ1- and ℓ2-norms) provides a balance between the two properties. However, existing deterministic methods have high per-iteration computational complexities, making them inapplicable to large-scale problems. To address this issue, this paper proposes RASVRG, the first accelerated stochastic variance reduced gradient algorithm for robust subspace clustering, together with a new momentum acceleration technique. Thanks to this momentum, RASVRG achieves both the best oracle complexity and the fastest convergence rate, and it is more efficient in practice for both strongly convex and non-strongly convex models. Various experimental results show that RASVRG outperformed existing state-of-the-art methods with elastic net and ℓ1-norm regularizers in terms of accuracy in most cases. As demonstrated on real-world face datasets with different levels of manually added pixel corruption and occlusion, RASVRG achieved much better accuracy and robustness.
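
To make the elastic net machinery concrete, here is a minimal sketch (not the authors' implementation) of one SVRG epoch with a proximal elastic net step for a least-squares loss; RASVRG's momentum scheme and robust loss are omitted, and all names are illustrative:

    import numpy as np

    def prox_elastic_net(v, eta, lam1, lam2):
        # proximal map of lam1*||.||_1 + (lam2/2)*||.||_2^2 with step size eta:
        # soft-thresholding followed by a uniform shrinkage
        return np.sign(v) * np.maximum(np.abs(v) - eta * lam1, 0.0) / (1.0 + eta * lam2)

    def svrg_epoch(w, X, y, eta, lam1, lam2, m, rng):
        # one SVRG epoch for the least-squares loss (1/2n)*||Xw - y||^2
        n = X.shape[0]
        w_snap = w.copy()
        mu = X.T @ (X @ w_snap - y) / n            # full gradient at the snapshot
        for _ in range(m):
            i = rng.integers(n)
            g = X[i] * (X[i] @ w - y[i])           # stochastic gradient at w
            g_snap = X[i] * (X[i] @ w_snap - y[i]) # same sample at the snapshot
            w = prox_elastic_net(w - eta * (g - g_snap + mu), eta, lam1, lam2)
        return w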

2.
Article in English | MEDLINE | ID: mdl-38625778

ABSTRACT

Many research works have shown that traditional alternating direction methods of multipliers (ADMMs) can be better understood through continuous-time differential equations (DEs). On the other hand, many unfolded algorithms directly inherit the traditional iterations to build deep networks. Although they achieve superior practical performance and faster convergence than their traditional counterparts, there is a lack of clear insight into unfolded network structures. Thus, we explore the unfolded linearized ADMM (LADMM) from the perspective of DEs and design more efficient unfolded networks. First, we propose an unfolded Euler LADMM scheme and, inspired by the trapezoid discretization, design a new, more accurate Trapezoid LADMM scheme. For ease of implementation, we provide its explicit version via a prediction-correction strategy. Then, to expand the representation space of unfolded networks, we design an accelerated variant of our Euler LADMM scheme, which can be interpreted as a second-order DE with stronger representation capabilities. To fully explore this representation space, we also design an accelerated Trapezoid LADMM scheme. To the best of our knowledge, this is the first work to establish a comprehensive connection, with theoretical guarantees, between unfolded ADMMs and first- and second-order DEs. Finally, we instantiate our schemes as (A-)ELADMM and (A-)TLADMM with proximal operators, and as (A-)ELADMM-Net and (A-)TLADMM-Net with convolutional neural networks (CNNs). Extensive inverse problem experiments show that our Trapezoid LADMM schemes perform better than well-known methods.
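
The discretization idea is easy to see on plain gradient flow dx/dt = -grad f(x): forward Euler recovers gradient descent, while the trapezoid rule averages the gradients at both endpoints and is made explicit with a predictor-corrector step, in the spirit described above. A toy sketch (illustrative only, on a simple quadratic):

    import numpy as np

    def grad_f(x):
        # gradient of the toy objective f(x) = 0.5 * ||x||^2
        return x

    h = 0.1
    x_euler = np.array([1.0])
    x_trap = np.array([1.0])
    for _ in range(50):
        # forward Euler discretization of dx/dt = -grad f(x): gradient descent
        x_euler = x_euler - h * grad_f(x_euler)
        # trapezoid rule, made explicit via prediction (Euler) then correction
        x_pred = x_trap - h * grad_f(x_trap)
        x_trap = x_trap - 0.5 * h * (grad_f(x_trap) + grad_f(x_pred))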

3.
Article in English | MEDLINE | ID: mdl-37819820

ABSTRACT

Deep neural networks (DNNs) play key roles in various artificial intelligence applications such as image classification and object recognition. However, a growing number of studies have shown that there exist adversarial examples for DNNs, which are almost imperceptibly different from the original samples but can greatly change the outputs of DNNs. Recently, many white-box attack algorithms have been proposed, and most concentrate on how to make the best use of gradients per iteration to improve adversarial performance. In this article, we focus on the properties of the widely used activation function, the rectified linear unit (ReLU), and find two phenomena (i.e., wrong blocking and over-transmission) that misguide the calculation of gradients for ReLU during backpropagation. Both issues enlarge the difference between the changes of the loss function predicted from gradients and the corresponding actual changes, and misguide the optimized direction, which results in larger perturbations. Therefore, we propose a universal gradient correction method for adversarial example generation, called ADV-ReLU, to enhance the performance of gradient-based white-box attack algorithms such as the fast gradient sign method (FGSM), iterative FGSM (I-FGSM), momentum I-FGSM (MI-FGSM), and variance tuning MI-FGSM (VMI-FGSM). Through backpropagation, our approach calculates the gradient of the loss function with respect to the network input, maps the values to scores, and selects a part of them to update the misguided gradients. Comprehensive experimental results on ImageNet and CIFAR10 demonstrate that ADV-ReLU can be easily integrated into many state-of-the-art gradient-based white-box attack algorithms, as well as transferred to black-box attacks, to further decrease perturbations measured in the ℓ2-norm.
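
For reference, the baseline these corrections plug into is simple; below is a minimal I-FGSM sketch (illustrative only; grad_fn stands for a backpropagation call returning dL/dx, and ADV-ReLU would correct this gradient before the sign step):

    import numpy as np

    def i_fgsm(x, grad_fn, eps, steps=10):
        # iterative FGSM: repeated sign-of-gradient steps, projected onto the
        # l_inf ball of radius eps around the original input x
        x_adv = x.copy()
        alpha = eps / steps
        for _ in range(steps):
            g = grad_fn(x_adv)                        # dL/dx via backpropagation
            x_adv = x_adv + alpha * np.sign(g)        # ascend the loss
            x_adv = np.clip(x_adv, x - eps, x + eps)  # stay within the eps-ball
        return x_adv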

4.
IEEE Trans Cybern; 53(10): 6187-6199, 2023 Oct.
Article in English | MEDLINE | ID: mdl-35468073

ABSTRACT

Model quantization can reduce model size and computational latency, and it has been successfully applied in many applications on mobile phones, embedded devices, and smart chips. Mixed-precision quantization can assign different bit precisions to different layers according to their sensitivity, achieving great performance. However, it is difficult to quickly determine the quantization bit precision of each layer in deep neural networks under constraints such as hardware resources, energy consumption, model size, and computational latency. In this article, a novel sequential single-path search (SSPS) method for mixed-precision model quantization is proposed, in which the given constraints are introduced to guide the searching process. A single-path search cell is proposed to build a fully differentiable supernet, which can be optimized by gradient-based algorithms. Moreover, we sequentially determine the candidate precisions according to their selection certainties, which exponentially reduces the search space and speeds up the convergence of the searching process. Experiments show that our method can efficiently search mixed-precision models for different architectures (e.g., ResNet-20, 18, 34, 50, and MobileNet-V2) and datasets (e.g., CIFAR-10, ImageNet, and COCO) under given constraints, and the searched models significantly outperform their uniform-precision counterparts.
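
As a rough illustration of what per-layer bit precision means, here is a hedged sketch of symmetric uniform quantization applied with a hypothetical per-layer bit assignment (the layer names and the assignment itself are invented for illustration):

    import numpy as np

    def quantize(w, bits):
        # symmetric uniform quantization of a weight tensor to a given bit width
        qmax = 2 ** (bits - 1) - 1
        scale = max(np.abs(w).max(), 1e-12) / qmax
        return np.round(w / scale).clip(-qmax, qmax) * scale

    # hypothetical mixed-precision assignment: sensitive layers keep more bits
    layer_bits = {"conv1": 8, "stage2": 4, "stage3": 4, "classifier": 8}
    weights = {name: np.random.randn(64, 64) for name in layer_bits}
    quantized = {name: quantize(w, layer_bits[name]) for name, w in weights.items()}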

5.
Sensors (Basel); 23(1), 2022 Dec 30.
Article in English | MEDLINE | ID: mdl-36616990

ABSTRACT

Panoramic videos are shot by an omnidirectional camera or a collection of cameras and can display a view in every direction, providing viewers with an immersive experience. The super-resolution of panoramic videos has attracted much attention, and many methods have been proposed, especially deep learning-based ones. However, because of their complex architectures, these methods involve very large numbers of parameters. To address this issue, we propose the first lightweight super-resolution method with self-calibrated convolution for panoramic videos. We first design a new deformable convolution module with self-calibrated convolution, which can learn more accurate offsets and enhance feature alignment. Moreover, we present a new residual dense block for feature reconstruction, which significantly reduces the number of parameters while maintaining performance. The proposed method is compared with state-of-the-art methods and verified on the MiG panoramic video dataset.
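
As a sketch of the second ingredient, a residual dense block chains convolutions whose inputs concatenate all earlier feature maps, fuses them with a 1x1 convolution, and adds the block input back. A minimal PyTorch version (illustrative; channel sizes are arbitrary and this is not the paper's exact module):

    import torch
    import torch.nn as nn

    class ResidualDenseBlock(nn.Module):
        # dense conv connections, local feature fusion (1x1 conv), and a
        # residual connection back to the block input
        def __init__(self, ch=32, growth=16, layers=3):
            super().__init__()
            self.convs = nn.ModuleList(
                nn.Conv2d(ch + i * growth, growth, 3, padding=1) for i in range(layers)
            )
            self.fuse = nn.Conv2d(ch + layers * growth, ch, 1)

        def forward(self, x):
            feats = [x]
            for conv in self.convs:
                feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
            return x + self.fuse(torch.cat(feats, dim=1))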


Subjects
Emotions, Calibration
6.
IEEE Trans Cybern; 52(5): 3276-3288, 2022 May.
Article in English | MEDLINE | ID: mdl-32784147

ABSTRACT

In recent years, generalized iterated shrinkage thresholding (GIST) algorithms have become commonly used first-order optimization methods for sparse learning problems. Nonconvex relaxations of the ℓ0-norm usually achieve better performance than the convex case (e.g., the ℓ1-norm), since the former yield nearly unbiased solutions. To increase computational efficiency, this work further provides an accelerated GIST version, AGIST, based on an extrapolation acceleration technique, which helps reduce the number of iterations when solving a family of nonconvex sparse learning problems. Besides, we present an algorithmic analysis, including both local and global convergence guarantees as well as other intermediate results, for GIST and AGIST, jointly denoted as (A)GIST, by virtue of the Kurdyka-Łojasiewicz (KL) property and some milder assumptions. Numerical experiments on both synthetic data and real-world databases demonstrate that the convergence of the objective function accords with the theoretical properties and that nonconvex sparse learning methods can outperform some convex ones.
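
A hedged sketch of the extrapolation idea (not the authors' exact scheme): proximal gradient steps taken at an extrapolated point; with beta = 0 this reduces to the unaccelerated iteration. Soft-thresholding is used here as a placeholder proximal map, whereas the paper targets nonconvex penalties whose proximal maps also have closed forms:

    import numpy as np

    def soft_threshold(v, t):
        # proximal map of t*||.||_1; a nonconvex penalty would swap in its own map
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def agist(w, grad_fn, eta, lam, iters):
        # proximal gradient with extrapolation; beta = 0 recovers plain GIST
        w_prev = w.copy()
        for k in range(1, iters + 1):
            beta = (k - 1.0) / (k + 2.0)           # Nesterov-style weight
            y = w + beta * (w - w_prev)            # extrapolated point
            w_prev, w = w, soft_threshold(y - eta * grad_fn(y), eta * lam)
        return w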


Subjects
Gastrointestinal Stromal Tumors, Algorithms, Databases, Factual, Humans
7.
IEEE Trans Neural Netw Learn Syst; 33(12): 7806-7817, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34161247

ABSTRACT

Recently, stochastic hard thresholding (HT) optimization methods [e.g., stochastic variance reduced gradient hard thresholding (SVRGHT)] have become attractive for solving large-scale sparsity/rank-constrained problems. However, they have much higher HT oracle complexities, especially for high-dimensional data or large-scale matrices. To address this issue, and inspired by the well-known Gradient Support Pursuit (GraSP) method, this article proposes a new Relaxed Gradient Support Pursuit (RGraSP) framework. Unlike GraSP, RGraSP only requires an approximate solution at each iteration. Building on RGraSP, we also present an efficient stochastic variance reduced gradient support pursuit algorithm (SVRGSP) and its fast version (SVRGSP+). We prove that the gradient oracle complexity of both our algorithms is half that of SVRGHT. In particular, their HT complexity is about κs times lower than that of SVRGHT, where κs is the restricted condition number. Moreover, we prove that our algorithms enjoy fast linear convergence to an approximately global optimum, and we also present an asynchronous parallel variant for very high-dimensional and sparse data. Experimental results on both synthetic and real-world datasets show that our algorithms outperform state-of-the-art gradient HT methods.
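
For intuition, here is a minimal, hedged sketch of the two primitives involved: the hard thresholding operator and a GraSP-style step that enlarges the working support from the gradient before pruning. This is illustrative, not the paper's algorithm:

    import numpy as np

    def hard_threshold(v, s):
        # keep the s largest-magnitude entries of v, zero out the rest
        out = np.zeros_like(v)
        idx = np.argpartition(np.abs(v), -s)[-s:]
        out[idx] = v[idx]
        return out

    def grasp_step(w, grad, s, eta):
        # merge the current support with the 2s largest gradient coordinates,
        # take a gradient step restricted to that set, then prune back to s
        top = np.argpartition(np.abs(grad), -2 * s)[-2 * s:]
        idx = np.union1d(np.flatnonzero(w), top)
        z = w.copy()
        z[idx] -= eta * grad[idx]
        return hard_threshold(z, s)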

8.
IEEE Trans Pattern Anal Mach Intell; 43(12): 4242-4255, 2021 Dec.
Article in English | MEDLINE | ID: mdl-32750780

ABSTRACT

Recently, many stochastic variance reduced alternating direction methods of multipliers (ADMMs) (e.g., SAG-ADMM and SVRG-ADMM) have made exciting progress, such as a linear convergence rate for strongly convex (SC) problems. However, their best-known convergence rate for non-strongly convex (non-SC) problems is O(1/T), as opposed to the O(1/T²) of accelerated deterministic algorithms, where T is the number of iterations. Thus, there remains a gap between the convergence rates of existing stochastic ADMMs and deterministic algorithms. To bridge this gap, we introduce a new momentum acceleration trick into stochastic variance reduced ADMM and propose a novel accelerated SVRG-ADMM method (called ASVRG-ADMM) for machine learning problems with the constraint Ax + By = c. We then design a linearized proximal update rule and a simple proximal one for the two classes of ADMM-style problems with B = τI and B ≠ τI, respectively, where I is an identity matrix and τ is an arbitrary bounded constant. Notably, our linearized proximal update rule avoids solving sub-problems iteratively. Moreover, we prove that ASVRG-ADMM converges linearly for SC problems. In particular, ASVRG-ADMM improves the convergence rate from O(1/T) to O(1/T²) for non-SC problems. Finally, we apply ASVRG-ADMM to various machine learning problems, e.g., graph-guided fused Lasso, graph-guided logistic regression, graph-guided SVM, generalized graph-guided fused Lasso, and multi-task learning, and show that ASVRG-ADMM consistently converges faster than the state-of-the-art methods.
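
To ground the Ax + By = c template, here is a hedged sketch of deterministic ADMM for the plain lasso (A = I, B = -I, c = 0); ASVRG-ADMM would replace the exact x-update with variance-reduced stochastic gradient steps plus momentum. Illustrative only:

    import numpy as np

    def admm_lasso(X, y, lam, rho=1.0, iters=100):
        # minimize 0.5*||Xw - y||^2 + lam*||z||_1  subject to  w - z = 0
        d = X.shape[1]
        z = np.zeros(d)
        u = np.zeros(d)                                   # scaled dual variable
        A_inv = np.linalg.inv(X.T @ X + rho * np.eye(d))  # cached linear solve
        Xty = X.T @ y
        for _ in range(iters):
            w = A_inv @ (Xty + rho * (z - u))             # w-update: ridge-like solve
            z = np.sign(w + u) * np.maximum(np.abs(w + u) - lam / rho, 0.0)  # prox
            u = u + w - z                                 # dual ascent on the constraint
        return z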

9.
Sensors (Basel); 20(17), 2020 Aug 30.
Article in English | MEDLINE | ID: mdl-32872609

ABSTRACT

In recent years, a series of matching pursuit and hard thresholding algorithms have been proposed to solve the sparse representation problem with an ℓ0-norm constraint. In addition, some stochastic hard thresholding methods have been proposed, such as stochastic gradient hard thresholding (SG-HT) and stochastic variance reduced gradient hard thresholding (SVRGHT). However, each iteration of all these algorithms requires one hard thresholding operation, which leads to a high per-iteration complexity and slow convergence, especially for high-dimensional problems. To address this issue, we propose a new stochastic recursive gradient support pursuit (SRGSP) algorithm, in which only one hard thresholding operation is required per outer iteration. Thus, SRGSP has a significantly lower computational complexity than existing methods such as SG-HT and SVRGHT. We also provide a convergence analysis of SRGSP, which shows that it attains a linear convergence rate. Our experimental results on large-scale synthetic and real-world datasets verify that SRGSP outperforms state-of-the-art related methods on various sparse representation problems. Moreover, we conduct experiments on two real-world applications, image denoising and face recognition, and the results validate that SRGSP obtains much better performance than other sparse representation learning methods in terms of PSNR and recognition rate.
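
A hedged sketch of the core difference from SVRG-style methods: a recursive (SARAH-style) gradient estimator inside the epoch and a single hard thresholding step at the end of each outer iteration. Least-squares gradients are used for concreteness, and hard_threshold is as in the sketch after entry 7; this is not the paper's exact pseudocode:

    import numpy as np

    def srgsp_outer(w, X, y, eta, s, m, rng):
        # recursive gradient estimator: v_t = g_i(w_t) - g_i(w_{t-1}) + v_{t-1}
        n = X.shape[0]
        v = X.T @ (X @ w - y) / n                 # full gradient to initialize
        w_prev, w = w.copy(), w - eta * v
        for _ in range(m):
            i = rng.integers(n)
            v = X[i] * (X[i] @ w - y[i]) - X[i] * (X[i] @ w_prev - y[i]) + v
            w_prev, w = w.copy(), w - eta * v
        return hard_threshold(w, s)               # one HT per outer iteration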

10.
IEEE Trans Neural Netw Learn Syst; 31(8): 3007-3016, 2020 Aug.
Article in English | MEDLINE | ID: mdl-31536019

ABSTRACT

In this article, a new deep neural network based on sparse filtering and manifold regularization (DSMR) is proposed for feature extraction and classification of polarimetric synthetic aperture radar (PolSAR) data. DSMR uses a novel deep neural network (DNN) to automatically learn features from raw SAR data. During preprocessing, the spatial information between pixels of PolSAR images is exploited to weight each data sample. Then, during pretraining and fine-tuning, DSMR uses population sparsity and lifetime sparsity (dual sparsity) to learn global features, and it preserves the local structure of the data through neighborhood-based manifold regularization. The dual sparsity requires tuning only a few parameters, and the manifold regularization cuts down the number of training samples needed. Experimental results on synthesized and real PolSAR datasets from different SAR systems show that DSMR improves classification accuracy compared with conventional DNNs, even for datasets with a large angle of incidence.
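
For intuition about dual sparsity, here is a hedged sketch in the spirit of classic sparse filtering (Ngiam et al.): feature activations are normalized per feature across examples (lifetime) and then per example across features (population), and the ℓ1 sum of the result is minimized. This is illustrative, not DSMR's exact objective:

    import numpy as np

    def dual_sparsity_objective(W, X, eps=1e-8):
        # X: (inputs x examples), W: (features x inputs)
        F = np.sqrt((W @ X) ** 2 + eps)                   # soft absolute value
        F = F / np.linalg.norm(F, axis=1, keepdims=True)  # lifetime: per feature
        F = F / np.linalg.norm(F, axis=0, keepdims=True)  # population: per example
        return F.sum()                                    # l1 sum, since F >= 0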

11.
IEEE Trans Neural Netw Learn Syst; 31(9): 3245-3258, 2020 Sep.
Article in English | MEDLINE | ID: mdl-31603802

ABSTRACT

Semi-supervised non-negative matrix factorization (NMF) exploits the strengths of NMF in effectively learning local information contained in data, and it can learn effectively even when only a small fraction of the data is labeled. NMF is particularly useful for dimensionality reduction of high-dimensional data. However, the mapping between the low-dimensional representation learned by semi-supervised NMF and the original high-dimensional data contains complex hierarchical and structural information, which is hard to extract using only single-layer clustering methods. Therefore, in this article, we propose a new deep learning method, called semi-supervised graph regularized deep NMF with bi-orthogonal constraints (SGDNMF). SGDNMF learns a representation for clustering from the hidden layers of a deep network, capturing varied and unknown attributes of the data. Bi-orthogonal constraints on the two factor matrices are introduced into the SGDNMF model, which makes the solution unique and improves clustering performance, and the method remains effective for dimensionality reduction while requiring only a small fraction of the data to be labeled. In addition, SGDNMF incorporates dual-hypergraph Laplacian regularization, which reinforces high-order relationships in both the data and feature spaces and fully retains the intrinsic geometric structure of the original data. This article presents the details of the SGDNMF algorithm, including the objective function and the iterative updating rules. Empirical experiments on four different datasets demonstrate state-of-the-art performance of SGDNMF in comparison with six other prominent algorithms.
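
As background for the factorization at the heart of the method, a single round of the classic Lee-Seung multiplicative updates for V ≈ WH is sketched below; SGDNMF stacks such factorizations into layers and adds graph regularization and bi-orthogonality terms on top. Illustrative only:

    import numpy as np

    def nmf_step(V, W, H, eps=1e-9):
        # one round of multiplicative updates for min ||V - W H||_F^2
        # with W, H >= 0; eps guards against division by zero
        H = H * (W.T @ V) / (W.T @ W @ H + eps)
        W = W * (V @ H.T) / (W @ H @ H.T + eps)
        return W, H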

12.
IEEE Trans Cybern; 2018 Mar 14.
Article in English | MEDLINE | ID: mdl-29993878

ABSTRACT

Recently, nuclear norm-based low-rank representation (LRR) methods have become popular in several applications, such as subspace segmentation. However, they have two limitations. First, the nuclear norm, as a relaxation of the rank function, can lead to suboptimal solutions, since nuclear norm minimization over-relaxes the singular values and treats each of them equally. Second, solving LRR problems can be time-consuming, since it involves a singular value decomposition of a large-scale matrix at each iteration. To overcome both disadvantages, this paper considers two tractable variants of LRR: Schatten-p norm minimization-based LRR (SpNM_LRR) and Schatten-p norm factorization-based LRR (SpNFLRR), for p = 1, 2/3, and 1/2. By introducing two or more auxiliary variables into the constraints, an alternating direction method of multipliers (ADMM) with multiple updating variables is devised to solve these variants of LRR. Furthermore, both the computational complexity and the convergence properties of these nonconvex multi-block ADMM algorithms are analyzed. Several experiments on both synthetic and real-world data validate the efficacy and efficiency of our methods.
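
The workhorse inside nuclear norm-based LRR solvers is singular value thresholding, which shrinks every singular value by the same amount; Schatten-p variants (p < 1) replace this uniform shrinkage with a nonconvex rule that penalizes large singular values less. A minimal sketch of the uniform version:

    import numpy as np

    def svt(M, tau):
        # singular value thresholding: proximal map of tau*||.||_* (nuclear norm);
        # note that every singular value is shrunk by the same amount tau
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt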

13.
IEEE Trans Pattern Anal Mach Intell; 40(9): 2066-2080, 2018 Sep.
Article in English | MEDLINE | ID: mdl-28880159

ABSTRACT

The heavy-tailed distributions of corrupted outliers and the singular values of all channels in low-level vision have proven to be effective priors for many applications, such as background modeling, photometric stereo, and image alignment, and they can be well modeled by a hyper-Laplacian. However, the use of such distributions generally leads to challenging non-convex, non-smooth, and non-Lipschitz problems, and makes existing algorithms very slow for large-scale applications. Using the analytic solutions to ℓp-norm minimization for two specific values of p, i.e., p = 1/2 and p = 2/3, we propose two novel bilinear factor matrix norm minimization models for robust principal component analysis. We first define the double nuclear norm and Frobenius/nuclear hybrid norm penalties, and then prove that they are in essence the Schatten-1/2 and Schatten-2/3 quasi-norms, respectively, which leads to much more tractable and scalable Lipschitz optimization problems. Our experimental analysis shows that both of our methods yield more accurate solutions than original Schatten quasi-norm minimization, even when the number of observations is very limited. Finally, we apply our penalties to various low-level vision problems, e.g., text removal, moving object detection, image alignment, and inpainting, and show that our methods usually outperform the state-of-the-art methods.
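
For concreteness, the Schatten-p quasi-norm that the penalties reduce to can be computed directly from the singular values; a small hedged sketch:

    import numpy as np

    def schatten_p(M, p):
        # Schatten-p quasi-norm (0 < p < 1): (sum_i sigma_i^p)^(1/p)
        s = np.linalg.svd(M, compute_uv=False)
        return (s ** p).sum() ** (1.0 / p)

    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 30))  # rank <= 5
    print(schatten_p(M, 0.5), schatten_p(M, 2.0 / 3.0))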

14.
IEEE Trans Neural Netw Learn Syst; 27(12): 2551-2563, 2016 Dec.
Article in English | MEDLINE | ID: mdl-26595932

ABSTRACT

Low-rank tensor completion (LRTC) has been successfully applied to a wide range of real-world problems. Despite these broad, successful applications, existing LRTC methods can become very slow, or even inapplicable, for large-scale problems. To address this issue, a novel core tensor trace norm minimization (CTNM) method is proposed for simultaneous tensor learning and decomposition, with much lower computational complexity. In our solution, the equivalence between the trace norm of a low-rank tensor and that of its core tensor is first derived. Second, the trace norm of the core tensor is used to replace that of the whole tensor, which leads to two much smaller-scale matrix trace norm minimization (TNM) problems. Finally, an efficient alternating direction augmented Lagrangian method is developed to solve the problems. Our CTNM formulation needs only O((R^N + NRI) log(√(I^N))) observations to reliably recover an Nth-order I×I×⋯×I tensor of n-rank (r, r, …, r), compared with the O(rI^(N-1)) observations required by tensor TNM methods (I ≫ R ≥ r). Extensive experimental results show that CTNM is usually more accurate than those methods and is orders of magnitude faster.
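
Since everything above hinges on matricizing a tensor, here is a minimal sketch of the mode-n unfolding and the resulting n-ranks (column ordering conventions vary; this is one common choice):

    import numpy as np

    def unfold(T, mode):
        # mode-n unfolding: the mode-n fibers of T become columns of a matrix
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    rng = np.random.default_rng(0)
    T = rng.standard_normal((4, 5, 6))
    print([np.linalg.matrix_rank(unfold(T, n)) for n in range(3)])  # the n-ranks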

15.
IEEE Trans Cybern; 45(11): 2437-2448, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26470059

ABSTRACT

In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. Existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration and therefore suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of the tensor. Then, we introduce a tractable relaxation of the rank function and obtain a convex combination of much smaller-scale matrix trace norm minimization problems. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. Promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
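
For background, a CP factor update works on an unfolded tensor through the Khatri-Rao product; below is a hedged least-squares sketch of one mode's update (Kolda-style unfolding convention assumed; TNCP additionally places trace norm penalties on the factors):

    import numpy as np

    def khatri_rao(A, B):
        # column-wise Kronecker product: column r is kron(A[:, r], B[:, r])
        return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

    def cp_update_mode1(X1, B, C):
        # least-squares update of factor A from X_(1) ~= A @ khatri_rao(C, B).T;
        # the Gram trick (Hadamard product of small Gram matrices) avoids a huge pinv
        G = (B.T @ B) * (C.T @ C)
        return X1 @ khatri_rao(C, B) @ np.linalg.pinv(G)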

16.
Neural Netw; 48: 8-18, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23891807

ABSTRACT

In recent years, matrix rank minimization problems have attracted considerable interest from the machine learning, data mining, and computer vision communities. These problems can be solved via convex relaxations that minimize the trace norm instead of the rank of the matrix, but the relaxed problems must be solved iteratively and involve a singular value decomposition (SVD) at each iteration. Therefore, algorithms for trace norm minimization suffer from the high computational cost of multiple SVDs. In this paper, we propose an efficient Matrix Bi-Factorization (MBF) method to approximate the original trace norm minimization problem and mitigate the cost of performing SVDs. The proposed MBF method can address a wide range of low-rank matrix recovery and completion problems, such as low-rank and sparse matrix decomposition (LRSD), low-rank representation (LRR), and low-rank matrix completion (MC). We also present three small-scale matrix trace norm models for the LRSD, LRR, and MC problems, respectively. Moreover, we develop two concrete linearized proximal alternating optimization algorithms to solve these three problems. Experimental results on a variety of synthetic and real-world datasets validate the efficiency, robustness, and effectiveness of our MBF method compared with state-of-the-art trace norm minimization algorithms.
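
The trick that lets bi-factorization methods avoid SVDs is the classical variational identity ||X||_* = min over X = UVᵀ of (||U||_F² + ||V||_F²)/2, attained by splitting the SVD between the two factors; MBF builds on this family of factorization bounds. A small numeric check of the identity (illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((40, 8)) @ rng.standard_normal((8, 30))   # low rank

    U0, s, Vt = np.linalg.svd(X, full_matrices=False)
    U = U0 @ np.diag(np.sqrt(s))          # split the singular values evenly
    V = Vt.T @ np.diag(np.sqrt(s))        # between the two factors
    bound = 0.5 * (np.linalg.norm(U) ** 2 + np.linalg.norm(V) ** 2)   # Frobenius
    print(np.isclose(s.sum(), bound), np.allclose(X, U @ V.T))        # both True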


Subjects
Artificial Intelligence, Algorithms, Cluster Analysis, Computer Simulation, Data Interpretation, Statistical, Data Mining, Linear Models, Statistical Models