Sparse discriminant PCA based on contrastive learning and class-specificity distribution.
Zhou, Qian; Gao, Quanxue; Wang, Qianqian; Yang, Ming; Gao, Xinbo.
Affiliations
  • Zhou Q; School of Telecommunications Engineering, Xidian University, Shaanxi 710071, China.
  • Gao Q; School of Telecommunications Engineering, Xidian University, Shaanxi 710071, China. Electronic address: qxgao@xidian.edu.cn.
  • Wang Q; School of Telecommunications Engineering, Xidian University, Shaanxi 710071, China.
  • Yang M; College of Mathematical Sciences, Harbin Engineering University, Heilongjiang 150001, China.
  • Gao X; Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
Neural Netw ; 167: 775-786, 2023 Oct.
Article in En | MEDLINE | ID: mdl-37729791
Much mathematical effort has been devoted to developing Principal Component Analysis (PCA), the most popular feature extraction method. To suppress the negative effect of noise on PCA performance, a large number of robust PCA variants have been studied and applied, achieving outstanding results. However, existing methods suffer from at least two shortcomings: (1) they express PCA as a reconstruction model measured by Euclidean distance, which considers only the relationship between each data point and its reconstruction and ignores the differences between data points; (2) they do not consider the class-specificity distribution information contained in the data itself and thus lack discriminative properties. To overcome these problems, we propose a Sparse Discriminant Principal Component Analysis (SDPCA) model based on contrastive learning and class-specificity distribution. Specifically, we use contrastive learning to measure the relationship between samples and their reconstructions, which fully incorporates the discriminative information between data points into PCA. To make the extracted low-dimensional features faithfully reflect the class-specificity distribution of the data, we minimize the squared ℓ1,2-norm of the low-dimensional embedding. In addition, to reduce the effects of redundant features and noise while improving the interpretability of PCA, we impose sparsity constraints on the projection matrix using the squared ℓ1,2-norm. Our experimental results on benchmark databases of different types demonstrate that our model achieves state-of-the-art performance.
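The abstract describes three ingredients: a contrastive measure between samples and their reconstructions, a squared ℓ1,2-norm penalty on the low-dimensional embedding, and the same penalty on the projection matrix for sparsity. A minimal NumPy sketch of such an objective is given below. The exact formulation is not specified in the abstract, so everything here is an assumption: the ℓ1,2-norm convention (ℓ2-norm of the vector of column-wise ℓ1-norms), the InfoNCE-style contrastive term, and the weights `beta`, `gamma`, `tau` are all hypothetical illustrations, not the authors' definitions.

```python
import numpy as np

def l12_norm_sq(W):
    """Squared l_{1,2}-norm of a matrix.

    Assumed convention: take the l1-norm of each column, then the squared
    l2-norm of the resulting vector. (The paper may define it differently.)
    """
    col_l1 = np.abs(W).sum(axis=0)      # l1-norm of each column
    return float((col_l1 ** 2).sum())   # squared l2-norm of that vector

def sdpca_objective(X, W, beta=1.0, gamma=1.0, tau=0.5):
    """Hypothetical SDPCA-style objective (illustration only).

    X : (n, d) data matrix, one sample per row.
    W : (d, k) projection matrix with (ideally) orthonormal columns.
    """
    Y = X @ W                  # low-dimensional embedding, shape (n, k)
    R = Y @ W.T                # reconstructions in input space, shape (n, d)

    # Contrastive term (InfoNCE-style sketch): each sample should be more
    # similar to its own reconstruction than to other samples' ones.
    sim = (X @ R.T) / tau                               # (n, n) similarities
    logits = sim - sim.max(axis=1, keepdims=True)       # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    contrastive = -np.mean(np.diag(log_prob))           # match i-th to i-th

    # Squared l_{1,2} penalties: on Y for class-specificity of the
    # embedding, on W for a sparse, interpretable projection.
    return contrastive + beta * l12_norm_sq(Y) + gamma * l12_norm_sq(W)
```

Minimizing this kind of objective over `W` (e.g. with a gradient method on the Stiefel manifold) would yield a sparse, discriminative projection; the actual optimization scheme used in the paper is not described in the abstract.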
Subjects
Keywords

Full text: 1 Database: MEDLINE Main subject: Machine Learning Study type: Prognostic_studies Language: En Publication year: 2023 Document type: Article