Results 1 - 20 of 23
1.
Bioinformatics ; 35(17): 2982-2990, 2019 09 01.
Article in English | MEDLINE | ID: mdl-30668845

ABSTRACT

MOTIVATION: Protein fold recognition has attracted increasing attention because it is critical for studies of the 3D structures of proteins and for drug design. Researchers have studied this important task extensively, and several features with high discriminative power have been proposed. However, developing methods that efficiently combine these features to improve predictive performance remains challenging. RESULTS: In this study, we proposed two algorithms: MV-fold and MT-fold. MV-fold is a new computational predictor based on the multi-view learning model for fold recognition. Different features of proteins, including evolutionary information, secondary structure information and physicochemical properties, were treated as different views of proteins. These different views constituted the latent space. The ε-dragging technique was employed to enlarge the margins between different protein folds, improving the predictive performance of MV-fold. Then, MV-fold was combined with two template-based methods, HHblits and HMMER. The resulting ensemble method, called MT-fold, incorporates the advantages of both discriminative and template-based methods. Experimental results on five widely used benchmark datasets (DD, RDD, EDD, TG and LE) showed that the proposed methods outperformed some state-of-the-art methods in this field, indicating that MV-fold and MT-fold are useful computational tools for protein fold recognition and protein homology detection, and would be efficient tools for protein sequence analysis. Finally, we constructed an updated and rigorous benchmark dataset based on SCOPe (version 2.07) to fairly evaluate the performance of the proposed method, and our method achieved stable performance on this new dataset. We expect this new benchmark dataset to become a widely used standard for fairly evaluating fold recognition methods.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


Subjects
Algorithms, Protein Folding, Proteins, Protein Sequence Analysis
2.
Article in English | MEDLINE | ID: mdl-38048245

ABSTRACT

In the past decades, supervised cross-modal hashing methods have attracted considerable attention due to their high search efficiency on large-scale multimedia databases. Many of these methods leverage semantic correlations among heterogeneous modalities by constructing a similarity matrix or by building a common semantic space with the collective matrix factorization method. However, in existing methods the similarity matrix may sacrifice scalability and cannot preserve enough semantic information in the hash codes. Meanwhile, the matrix factorization methods cannot embed the main modality-specific information into the hash codes. To address these issues, we propose a novel supervised cross-modal hashing method called random online hashing (ROH) in this article. ROH uses a linear bridging strategy to simplify the pairwise-similarity factorization problem into a linear optimization problem. Specifically, a bridging matrix is introduced to establish a bidirectional linear relation between hash codes and labels, which preserves more semantic similarities in the hash codes and significantly reduces the semantic distances between hash codes of samples with similar labels. Additionally, a novel maximum eigenvalue direction (MED) embedding method is proposed to identify the direction of the maximum eigenvalue for the original features and preserve critical information in the modality-specific hash codes. Eventually, to handle real-time data dynamically, an online structure is adopted to solve the problem of dealing with newly arriving data chunks without considering pairwise constraints. Extensive experimental results on three benchmark datasets demonstrate that the proposed ROH outperforms several state-of-the-art cross-modal hashing methods.

3.
Neural Netw ; 165: 60-76, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37276811

ABSTRACT

Hashing-based cross-modal retrieval methods have become increasingly popular due to their advantages in storage and speed. While current methods have demonstrated impressive results, several issues remain unaddressed. Specifically, many of these approaches assume that labels are perfectly assigned, despite the fact that in real-world scenarios labels are often incomplete or partially missing. There are two reasons for this: manual labeling is a complex and time-consuming task, and annotators may only be interested in certain objects. As such, cross-modal retrieval with missing labels is a significant challenge that requires further attention. Moreover, the similarity between labels is frequently ignored, even though it is important for exploring the high-level semantics of labels. To address these limitations, we propose a novel method called Cross-Modal Hashing with Missing Labels (CMHML). Our method consists of several key components. First, we introduce Reliable Label Learning to preserve reliable information from the observed labels. Next, to infer the uncertain part of the predicted labels, we decompose the predicted labels into latent representations of labels and samples. The representation of samples is extracted from different modalities, which assists in inferring missing labels. We also propose Label Correlation Preservation to enhance the similarity between latent representations of labels. Hash codes are then learned from the representation of samples through Global Approximation Learning. We also construct a similarity matrix according to the predicted labels and embed it into hash code learning to exploit the value of labels. Finally, we train linear classifiers to map original samples to a low-dimensional Hamming space. To evaluate the efficacy of CMHML, we conduct extensive experiments on four publicly available datasets.
Our method is compared to other state-of-the-art methods, and the results demonstrate that our model performs competitively even when most labels are missing.


Subjects
Learning, Semantics, Uncertainty
4.
IEEE Trans Cybern ; 52(4): 2618-2629, 2022 Apr.
Article in English | MEDLINE | ID: mdl-32667889

ABSTRACT

In general, existing cross-domain recognition methods mainly focus on changing the feature representation of the data or modifying the classifier parameters, and their effectiveness is indicated by improved performance. However, most existing methods do not integrate the two into a unified optimization objective to further improve learning efficiency. In this article, we propose a novel cross-domain recognition algorithm framework that integrates both. Specifically, we reduce the discrepancies in both the conditional distribution and the marginal distribution between different domains in order to learn a new feature representation that pulls the data from different domains closer on the whole. However, data from different domains but the same class cannot be interlaced closely enough, so it is not reasonable to mix them to train a single classifier. To this end, we further propose to learn double classifiers on the respective domains and require that they dynamically approximate each other during learning. This guarantees that we finally obtain a suitable classifier from the double classifiers by using a classifier fusion strategy. The experiments show that the proposed method outperforms the state-of-the-art methods.


Subjects
Algorithms, Learning
5.
IEEE Trans Cybern ; 52(11): 11780-11793, 2022 Nov.
Article in English | MEDLINE | ID: mdl-34106872

ABSTRACT

Cross-modal retrieval has attracted considerable attention for searching large-scale multimedia databases because of its efficiency and effectiveness. As a powerful data analysis tool, matrix factorization is commonly used to learn hash codes for cross-modal retrieval, but it still has many shortcomings. First, most of these methods focus only on preserving the locality of data and ignore other factors, such as preserving the reconstruction residual of the data during matrix factorization. Second, the energy loss of the data is not considered when cross-modal data are projected into a common semantic space. Third, cross-modal data are directly projected into a unified semantic space, which is not reasonable since data from different modalities have different properties. This article proposes a novel method called average approximate hashing (AAH) to address these problems by: 1) integrating locality and residual preservation into a graph embedding framework by using the label information; 2) projecting data from different modalities into different semantic spaces and then making the two spaces approximate each other so that a unified hash code can be obtained; and 3) introducing a principal component analysis (PCA)-like projection matrix into the graph embedding framework to guarantee that the projected data preserve the main energy of the data. AAH obtains the final hash codes by using an average approximate strategy, that is, by using the mean of the projected data of the different modalities as the hash codes. Experiments on standard databases show that the proposed AAH outperforms several state-of-the-art cross-modal hashing methods.
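The final coding step described above, taking the mean of the projected data of the two modalities and binarising it, can be sketched in a few lines. Note the projection matrices would come from AAH's graph-embedding training; the random matrices below are placeholders that only illustrate the shapes involved.

```python
import numpy as np

def average_hash(X_img, X_txt, P_img, P_txt):
    """Average approximate coding step (sketch): project each modality
    into its own semantic space, average the two projections, and
    binarise the mean into hash codes in {-1, +1}."""
    Z = 0.5 * (X_img @ P_img + X_txt @ P_txt)  # mean of projected modalities
    return np.where(Z >= 0, 1, -1)             # sign binarisation

# Illustrative shapes only: 8 samples, 20-d image / 30-d text features, 16 bits.
rng = np.random.default_rng(0)
n, d_img, d_txt, bits = 8, 20, 30, 16
X_img = rng.normal(size=(n, d_img))
X_txt = rng.normal(size=(n, d_txt))
B = average_hash(X_img, X_txt,
                 rng.normal(size=(d_img, bits)),
                 rng.normal(size=(d_txt, bits)))
```

Retrieval would then compare such codes by Hamming distance; the averaging is what ties the two per-modality semantic spaces to a single unified code.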


Subjects
Semantics, Factual Databases
6.
IEEE Trans Neural Netw Learn Syst ; 31(12): 5630-5638, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32112684

ABSTRACT

Linear discriminant analysis (LDA) has been widely used as a feature extraction technique. However, LDA may fail on data from different domains, for the following reasons: 1) the distribution discrepancy of the data may disturb the linear transformation matrix so that it cannot extract the most discriminative features and 2) the original design of LDA does not consider unlabeled data, so unlabeled data cannot take part in the training process to further improve the performance of LDA. To address these problems, in this brief, we propose a novel transferable LDA (TLDA) method that extends LDA to the scenario in which the data have different probability distributions. The whole learning process of TLDA is driven by the philosophy that data from the same subspace have a low-rank structure. The matrix rank in TLDA is the key learning criterion used to conduct local and global linear transformations for restoring the low-rank structure of data from different distributions and enlarging the distances among different subspaces. In doing so, the variations of the distribution discrepancy within the same subspace can be reduced, i.e., the data can be aligned well and a maximally separated structure can be achieved for data from different subspaces. A simple projected subgradient-based method is proposed to optimize the objective of TLDA, and a rigorous theoretical proof is provided to guarantee fast convergence. The experimental evaluation on public data sets demonstrates that TLDA can achieve better classification performance and outperforms the state-of-the-art methods.

7.
Article in English | MEDLINE | ID: mdl-32970596

ABSTRACT

Dictionary learning plays a significant role in the field of machine learning. Existing works mainly focus on learning a dictionary from a single domain. In this paper, we propose a novel projective double reconstructions (PDR) based dictionary learning algorithm for cross-domain recognition. Owing to the distribution discrepancy between different domains, the label information is hard to utilize fully for improving the discriminability of the dictionary. Thus, we propose a more flexible label consistency term and associate it with each dictionary item, which makes the reconstruction coefficients as discriminative as possible. Because of the intrinsic correlation between cross-domain data, the data should be reconstructed with each other. Based on this consideration, we further propose a projective double reconstructions scheme to guarantee that the learned dictionary has the abilities of both self-reconstruction and cross-reconstruction of the data. This also guarantees that the data from different domains can be boosted mutually to obtain a good data alignment, making the learned dictionary more transferable. We integrate the double reconstructions, the label consistency constraint and classifier learning into a unified objective, whose solution can be obtained by a proposed optimization algorithm that is more efficient than conventional l1-optimization-based dictionary learning methods. The experiments show that the proposed PDR not only greatly reduces the time complexity for both training and testing, but also outperforms the state-of-the-art methods.

8.
Neural Netw ; 109: 56-66, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30408694

ABSTRACT

Manifold-based feature extraction has proven to be an effective technique for unsupervised classification tasks. However, most existing works cannot guarantee the global optimality of the learned projection, and they are sensitive to different types of noise. In addition, many methods cannot capture as much discriminative information as possible, since they exploit only the local structure of the data while ignoring the global structure. To address the above problems, this paper proposes a novel graph-based feature extraction method named low-rank and sparsity preserving embedding (LRSPE) for unsupervised learning. LRSPE attempts to simultaneously learn the graph and the projection in one framework so that the globally optimal projection can be obtained. Moreover, LRSPE exploits both global and local information of the data for projection learning by imposing low-rank and sparse constraints on the graph, which helps the method obtain better performance. Importantly, LRSPE is more robust to noise because it imposes the l2,1 sparsity norm on the reconstruction errors. Experimental results on both clean and noisy datasets show that the proposed method significantly improves classification accuracy and is robust to different types of noise in comparison with the state-of-the-art methods.


Subjects
Factual Databases/classification, Unsupervised Machine Learning/classification, Algorithms, Humans
9.
Article in English | MEDLINE | ID: mdl-31751275

ABSTRACT

Subspace-learning-based transfer learning methods commonly find a common subspace in which the discrepancy between the source and target domains is reduced, and the final classification is also performed in that subspace. However, the minimum discrepancy does not guarantee the best classification performance, so the common subspace may not be the most discriminative one. In this paper, we propose a latent elastic-net transfer learning (LET) method that simultaneously learns a latent subspace and a discriminative subspace. Specifically, the data from different domains can be well interlaced in the latent subspace by minimizing the Maximum Mean Discrepancy (MMD). Since the latent subspace decouples inputs and outputs, a more compact data representation is obtained for discriminative subspace learning. Based on the latent subspace, we further propose a low-rank-constrained matrix elastic-net regression to learn another subspace in which the intrinsic intra-class structure correlations of data from different domains are well captured. In doing so, a better discriminative alignment is guaranteed, and thus LET finally learns a discriminative subspace for classification. Experiments on visual domain adaptation tasks show the superiority of the proposed LET method.
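The MMD criterion used above to interlace the domains reduces, with a linear kernel, to the distance between the domain means in feature space. The sketch below shows only this criterion, not LET's full objective, which embeds it into latent-subspace learning; the function name and toy data are ours.

```python
import numpy as np

def linear_mmd2(Xs, Xt):
    """Squared Maximum Mean Discrepancy with a linear kernel:
    the squared Euclidean distance between the two domain means."""
    diff = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(diff @ diff)

# Toy check: an identical target domain gives zero discrepancy,
# a mean-shifted one gives a positive value.
rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 5))       # source domain: 100 samples, 5 features
Xt_same = Xs.copy()                  # no domain shift
Xt_shift = Xs + 2.0                  # shift every feature mean by 2
```

Minimizing such a term over a shared projection is what pulls the projected source and target data toward each other; kernelized variants replace the mean with a kernel mean embedding.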

10.
IEEE Trans Cybern ; 49(4): 1279-1291, 2019 Apr.
Article in English | MEDLINE | ID: mdl-29994743

ABSTRACT

Preserving global and local structures during projection learning is very important for feature extraction. Although various methods have been proposed for this goal, they commonly introduce an extra graph regularization term and a corresponding regularization parameter that needs to be tuned. However, tuning the parameter manually is not only time-consuming but also makes it difficult to find the optimal value that yields satisfactory performance. This greatly limits their applications. Besides, the projections learned by many methods do not have good interpretability, and their performance is commonly sensitive to the selected feature dimension. To solve the above problems, a novel method named low-rank preserving projection via graph regularized reconstruction (LRPP_GRR) is proposed. In particular, LRPP_GRR imposes the graph constraint on the reconstruction error of the data instead of introducing an extra regularization term to capture the local structure of the data, which can greatly reduce the complexity of the model. Meanwhile, a low-rank reconstruction term is exploited to preserve the global structure of the data. To improve the interpretability of the learned projection, a sparse term with the l2,1 norm is imposed on the projection. Furthermore, we introduce an orthogonal reconstruction constraint to make the learned projection hold the main energy of the data, which enables LRPP_GRR to be more flexible in the selection of the feature dimension. Extensive experimental results show that the proposed method obtains competitive performance compared with other state-of-the-art methods.

11.
IEEE Trans Neural Netw Learn Syst ; 30(4): 1133-1149, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30137017

ABSTRACT

In this paper, we propose a unified model called flexible affinity matrix learning (FAML) for unsupervised and semisupervised classification by exploiting both the relationship among data and the clustering structure simultaneously. To capture the relationship among data, we exploit the self-expressiveness property of data to learn a structured matrix in which the structures are induced by different norms. A rank constraint is imposed on the Laplacian matrix of the desired affinity matrix, so that the connected components of data are exactly equal to the cluster number. Thus, the clustering structure is explicit in the learned affinity matrix. By making the estimated affinity matrix approximate the structured matrix during the learning procedure, FAML allows the affinity matrix itself to be adaptively adjusted such that the learned affinity matrix can well capture both the relationship among data and the clustering structure. Thus, FAML has the potential to perform better than other related methods. We derive optimization algorithms to solve the corresponding problems. Extensive unsupervised and semisupervised classification experiments on both synthetic data and real-world benchmark data sets show that the proposed FAML consistently outperforms the state-of-the-art methods.
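The rank constraint on the Laplacian described above relies on a standard spectral fact: for an affinity matrix W, the multiplicity of the zero eigenvalue of the graph Laplacian L = D - W equals the number of connected components. A small NumPy check of that fact (the function name is ours, and this is not FAML's optimization itself):

```python
import numpy as np

def n_connected_components(W, tol=1e-8):
    """Count connected components of the graph with affinity matrix W
    as the multiplicity of the (numerically) zero eigenvalue of L = D - W."""
    L = np.diag(W.sum(axis=1)) - W        # unnormalized graph Laplacian
    return int(np.sum(np.linalg.eigvalsh(L) < tol))

# A block-diagonal affinity with two blocks has exactly two components;
# a fully connected graph has one.
W_two = np.array([[0, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
W_one = np.ones((3, 3)) - np.eye(3)
```

Forcing exactly c such zero eigenvalues during affinity learning is what makes the clustering structure explicit in the learned matrix.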

12.
IEEE Trans Neural Netw Learn Syst ; 29(4): 1006-1018, 2018 04.
Article in English | MEDLINE | ID: mdl-28166507

ABSTRACT

Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs a class compactness graph based on manifold learning and uses it as a regularization term to avoid overfitting. The class compactness graph ensures that samples sharing the same labels remain close after they are transformed. Two different algorithms, based respectively on the l2-norm and l2,1-norm loss functions, are devised. These two algorithms have compact closed-form solutions in each iteration, so they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of classification accuracy and running time.
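The label-relaxation idea (often called ε-dragging) can be sketched briefly: start from ±1 targets, let a nonnegative matrix M drag each target outward along its class sign, and alternate a ridge-regression step with a closed-form update of M. This is a simplified illustration under our own naming and hyperparameters; the paper's full model also includes the class-compactness graph regularizer, omitted here.

```python
import numpy as np

def label_relaxation_lr(X, y, n_classes, lam=1.0, iters=20):
    """Sketch of label-relaxation regression: alternate a ridge step for W
    with the closed-form nonnegative update of the relaxation matrix M."""
    n, d = X.shape
    Y = -np.ones((n, n_classes))
    Y[np.arange(n), y] = 1.0                  # strict +1/-1 targets
    B = np.where(Y > 0, 1.0, -1.0)            # dragging directions
    M = np.zeros_like(Y)                      # nonnegative relaxation matrix
    for _ in range(iters):
        T = Y + B * M                         # relaxed (dragged) targets
        W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ T)  # ridge step
        M = np.maximum(B * (X @ W - Y), 0.0)  # closed-form update, M >= 0
    return W

def predict(W, X):
    return (X @ W).argmax(axis=1)

# Toy demo: two well-separated Gaussian blobs, bias appended as a column.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(50, 2)) + [3.0, 0.0],
               rng.normal(size=(50, 2)) + [-3.0, 0.0]])
X = np.hstack([X, np.ones((100, 1))])
y = np.repeat([0, 1], 50)
W = label_relaxation_lr(X, y, n_classes=2)
acc = (predict(W, X) == y).mean()
```

The M-update is exact: for fixed W, minimizing ||XW - (Y + B⊙M)||² over M ≥ 0 decouples elementwise, giving M = max(B⊙(XW - Y), 0).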

13.
Neural Netw ; 108: 202-216, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30216870

ABSTRACT

In this paper, we propose a robust subspace learning (SL) framework for dimensionality reduction which extends the existing SL methods to a low-rank and sparse embedding (LRSE) framework in three respects: overall optimality, robustness and generalization. Owing to the use of low-rank and sparse constraints, both the global subspaces and the local geometric structures of the data are captured by the reconstruction coefficient matrix, and at the same time the low-dimensional embedding of the data is enforced to respect the low-rankness and sparsity. In this way, reconstruction coefficient matrix learning and SL are performed jointly, which guarantees an overall optimum. Moreover, we adopt a sparse matrix to model the noise, which makes LRSE robust to different types of noise. The combination of global subspaces and local geometric structures brings better generalization to LRSE than related methods, i.e., LRSE performs better than conventional SL methods in both unsupervised and supervised scenarios; in the unsupervised scenario, in particular, the improvement in classification accuracy is considerable. Seven specific SL methods, including unsupervised and supervised methods, can be derived from the proposed framework, and the experiments on different data sets (including corrupted data) demonstrate the superiority of these methods over existing, well-established SL methods. Further, we use the experiments to provide some new insights into SL.


Assuntos
Inteligência Artificial , Aprendizado de Máquina , Reconhecimento Automatizado de Padrão/métodos , Algoritmos , Inteligência Artificial/tendências , Bases de Dados Factuais/tendências , Humanos , Aprendizado de Máquina/tendências , Reconhecimento Automatizado de Padrão/tendências , Estimulação Luminosa/métodos
14.
Neural Netw ; 108: 83-96, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30173056

ABSTRACT

Low-rank representation (LRR) has attracted much attention in the data mining community. However, it has the following two problems, which greatly limit its applications: (1) it cannot discover the intrinsic structure of data, owing to its neglect of the local structure of data; (2) the obtained graph is not the optimal graph for clustering. To solve the above problems and improve clustering performance, we propose a novel graph learning method named low-rank representation with adaptive graph regularization (LRR_AGR) in this paper. Firstly, a distance regularization term and a non-negative constraint are jointly integrated into the LRR framework, which enables the method to simultaneously exploit the global and local information of the data for graph learning. Secondly, a novel rank constraint is further introduced into the model, which encourages the learned graph to have a very clear clustering structure, i.e., exactly c connected components for data with c clusters. These two approaches are meaningful and beneficial for learning the optimal graph that discovers the intrinsic structure of data. Finally, an efficient iterative algorithm is provided to optimize the model. Experimental results on synthetic and real datasets show that the proposed method can significantly improve clustering performance.


Assuntos
Algoritmos , Mineração de Dados , Aprendizado de Máquina , Análise por Conglomerados , Mineração de Dados/tendências , Aprendizado de Máquina/tendências
15.
IEEE Trans Neural Netw Learn Syst ; 29(6): 2502-2515, 2018 06.
Article in English | MEDLINE | ID: mdl-28500010

ABSTRACT

This paper proposes a novel method, called robust latent subspace learning (RLSL), for image classification. We formulate the RLSL problem as a joint optimization over both the latent SL and the classification model parameter prediction, which simultaneously minimizes: 1) the regression loss between the learned data representation and the objective outputs and 2) the reconstruction error between the learned data representation and the original inputs. The latent subspace can be used as a bridge that is expected to seamlessly connect the original visual features and their class labels and hence improve the overall prediction performance. RLSL combines feature learning with classification so that the learned data representation in the latent subspace is more discriminative for classification. To learn a robust latent subspace, we use a sparse term to compensate for errors, which helps suppress the interference of noise by weakening its response during regression. An efficient optimization algorithm is designed to solve the proposed optimization problem. To validate the effectiveness of the proposed RLSL method, we conduct experiments on diverse databases, and encouraging recognition results are achieved compared with many state-of-the-art methods.

16.
IEEE Trans Neural Netw Learn Syst ; 29(11): 5228-5241, 2018 11.
Article in English | MEDLINE | ID: mdl-29994377

ABSTRACT

Feature extraction plays a significant role in pattern recognition. Recently, many representation-based feature extraction methods have been proposed and have achieved success in many applications. As an excellent unsupervised feature extraction method, latent low-rank representation (LatLRR) has shown its power in extracting salient features. However, LatLRR has the following three disadvantages: 1) the dimension of the features obtained using LatLRR cannot be reduced, which is not preferred in feature extraction; 2) the two low-rank matrices are learned separately, so the overall optimality may not be guaranteed; and 3) LatLRR is an unsupervised method, which so far has not been extended to the supervised scenario. To this end, in this paper, we first propose to use two different matrices to approximate the low-rank projection in LatLRR so that the dimension of the obtained features can be reduced, which is more flexible than the original LatLRR. Then, we treat the two low-rank matrices in LatLRR as a whole in the process of learning. In this way, they can be boosted mutually so that the obtained projection can extract more discriminative features. Finally, we extend LatLRR to the supervised scenario by integrating feature extraction with ridge regression. Thus, the process of feature extraction is closely related to the classification, so that the extracted features are discriminative. Extensive experiments are conducted on different databases for unsupervised and supervised feature extraction, and very encouraging results are achieved in comparison with many state-of-the-art methods.

17.
IEEE Trans Neural Netw Learn Syst ; 28(6): 1276-1289, 2017 06.
Article in English | MEDLINE | ID: mdl-26955054

ABSTRACT

Focal underdetermined system solver (FOCUSS) is a powerful method for basis selection and sparse representation, in which the lp-norm with p ∈ (0,2) is employed to measure the sparsity of solutions. In this paper, we give a systematic analysis of the rate of convergence of the FOCUSS algorithm with respect to p ∈ (0,2). We prove that the FOCUSS algorithm converges superlinearly for 0 < p < 1 and usually linearly for 1 ≤ p < 2, though superlinearly in some very special scenarios. In addition, we verify its rates of convergence with respect to p by numerical experiments.
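The FOCUSS iteration itself is a reweighted least-squares scheme: each step re-weights the coordinates by the current solution's magnitudes and solves a pseudo-inverse system, so the iterates stay feasible while mass concentrates on a few coordinates. A minimal sketch under our own naming, with a small eps safeguard that is ours:

```python
import numpy as np

def focuss(A, b, p=1.0, iters=500, eps=1e-12):
    """FOCUSS for sparse solutions of the underdetermined system Ax = b:
    x_{k+1} = W_k (A W_k)^+ b with W_k = diag(|x_k|^(1 - p/2)).
    eps keeps the weights strictly positive."""
    x = np.ones(A.shape[1])                    # neutral initial weighting
    for _ in range(iters):
        w = np.abs(x) ** (1.0 - p / 2.0) + eps
        x = w * (np.linalg.pinv(A * w) @ b)    # A * w == A @ diag(w)
    return x

# 2-sparse ground truth, 6 random measurements in R^10.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 10))
x_true = np.zeros(10)
x_true[2], x_true[7] = 1.5, -1.0
b = A @ x_true
x = focuss(A, b, p=1.0)
```

Because each step solves A W z = b via the pseudo-inverse and sets x = W z, every iterate satisfies Ax = b (up to numerical precision) while the reweighting drives small coordinates toward zero.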

18.
Artif Intell Med ; 79: 1-8, 2017 06.
Article in English | MEDLINE | ID: mdl-28359635

ABSTRACT

Knowledge of protein fold type is critical for determining protein structure and function. Because of its importance, several computational methods for fold recognition have been proposed, most of them based on well-known machine learning techniques such as Support Vector Machines (SVMs) and Artificial Neural Networks (ANNs). Although these machine learning methods have stimulated the development of this important area, new techniques are still needed to further improve predictive performance for fold recognition. Sparse Representation based Classification (SRC) has been widely used in image processing and shows better performance than other related machine learning methods. In this study, we apply SRC to the protein fold recognition problem. Experimental results on a widely used benchmark dataset show that the proposed method is able to improve the performance of some basic classifiers and of three state-of-the-art features: autocross-covariance (ACC) fold, D-D, and Bi-gram. Finally, we propose a novel computational predictor called MF-SRC for fold recognition by combining these three features in the SRC framework to achieve further performance improvement. Compared with other computational methods in this field on the DD, EDD and TG datasets, the proposed method achieves stable performance by reducing the influence of noise in the datasets. It is anticipated that the proposed predictor may become a useful high-throughput tool for large-scale fold recognition or, at least, play a complementary role to the existing predictors in this regard.
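SRC's core recipe — code a query over a dictionary of labeled training samples with an l1 penalty, then assign the class whose atoms give the smallest reconstruction residual — can be sketched with a plain ISTA solver. This is a generic illustration of SRC, not the MF-SRC predictor itself; the solver, parameters and toy dictionary are ours.

```python
import numpy as np

def ista_lasso(D, x, lam=0.1, iters=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 by ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        z = a - D.T @ (D @ a - x) / L      # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

def src_classify(D, labels, x, lam=0.1):
    """SRC: sparse-code x over the whole dictionary, then pick the class
    whose atoms best reconstruct x."""
    a = ista_lasso(D, x, lam)
    best, best_res = None, np.inf
    for c in np.unique(labels):
        a_c = np.where(labels == c, a, 0.0)      # keep only class-c coefficients
        res = np.linalg.norm(x - D @ a_c)        # class-wise residual
        if res < best_res:
            best, best_res = c, res
    return best

# Toy dictionary: two atoms per class, columns normalized.
D = np.array([[1.0, 0.9, 0.0, 0.1],
              [0.0, 0.1, 1.0, 0.9]])
D /= np.linalg.norm(D, axis=0)
labels = np.array([0, 0, 1, 1])
```

In MF-SRC the dictionary columns would be feature vectors (ACC, D-D, Bi-gram) of training proteins rather than this toy geometry.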


Assuntos
Algoritmos , Dobramento de Proteína , Máquina de Vetores de Suporte , Processamento de Imagem Assistida por Computador , Proteínas
19.
Neural Netw ; 88: 1-8, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28161499

ABSTRACT

A suitable feature representation can faithfully preserve the intrinsic structure of data. However, traditional dimensionality reduction (DR) methods commonly use the original input features to define the intrinsic structure, which makes the estimated intrinsic structure unreliable, since redundant or noisy features may exist in the original input features. Thus, a dilemma arises: (1) one needs the most suitable feature representation to define the intrinsic structure of the data, and (2) one should use the proper intrinsic structure of the data to perform feature extraction. To address this problem, in this paper we propose a unified learning framework to simultaneously obtain the optimal feature representation and the intrinsic structure of the data. The structure is learned from the results of feature learning, and the features are learned to preserve the refined structure of the data. By leveraging the interactions between these two processes, we can capture accurate structure and obtain the optimal feature representation of the data. Experimental results demonstrate that our method outperforms state-of-the-art methods in DR and subspace clustering. The code of the proposed method is available at "http://www.yongxu.org/lunwen.html".


Assuntos
Inteligência Artificial/classificação , Aprendizado de Máquina Supervisionado/classificação , Análise por Conglomerados , Humanos
20.
IEEE Trans Cybern ; 46(8): 1828-38, 2016 08.
Article in English | MEDLINE | ID: mdl-26259210

ABSTRACT

Low-rank representation (LRR) has been successfully applied to exploring the subspace structures of data. However, in previous LRR-based semi-supervised subspace clustering methods, the label information is not used to guide the construction of the affinity matrix, so the affinity matrix cannot deliver strong discriminant information. Moreover, these methods cannot guarantee an overall optimum, since affinity matrix construction and subspace clustering are often independent steps. In this paper, we propose a robust semi-supervised subspace clustering method based on non-negative LRR (NNLRR) to address these problems. By combining the LRR framework and the Gaussian fields and harmonic functions method in a single optimization problem, the supervision information is explicitly incorporated to guide the affinity matrix construction, and affinity matrix construction and subspace clustering are accomplished in one step to guarantee the overall optimum. The affinity matrix is obtained by seeking a non-negative low-rank matrix that represents each sample as a linear combination of the others. We also explicitly impose a sparse constraint on the affinity matrix, so that the affinity matrix obtained by NNLRR is non-negative, low-rank and sparse. We introduce an efficient linearized alternating direction method with adaptive penalty to solve the corresponding optimization problem. Extensive experimental results demonstrate that NNLRR is effective in semi-supervised subspace clustering and more robust to different types of noise than other state-of-the-art methods.
