Enhanced tensor low-rank representation learning for multi-view clustering.
Xie, Deyan; Gao, Quanxue; Yang, Ming.
Affiliations
  • Xie D; School of Science and Information Science, Qingdao Agricultural University, Qingdao, China. Electronic address: xdy0306@163.com.
  • Gao Q; School of Telecommunications Engineering, Xidian University, Xi'an, China. Electronic address: qxgao@xidian.edu.cn.
  • Yang M; Department of Mathematics, University of Evansville, Evansville, IN 47722, United States of America. Electronic address: yangmingmath@gmail.com.
Neural Netw; 161: 93-104, 2023 Apr.
Article en En | MEDLINE | ID: mdl-36738492
ABSTRACT
Multi-view subspace clustering (MSC), which assumes that multi-view data are generated from a latent subspace, has attracted considerable attention in multi-view clustering. To recover the underlying subspace structure, a successful approach adopted recently is subspace clustering based on the tensor nuclear norm (TNN). However, existing TNN-based methods usually fail to exploit the intrinsic cluster structure and high-order correlations well, which limits their clustering performance. To address this problem, this paper proposes a novel tensor low-rank representation (TLRR) learning method for multi-view clustering. First, we construct a 3rd-order tensor by organizing the features from all views, and then use the t-product in the tensor space to obtain the self-representation tensor of the tensorial data. Second, we use the ℓ1,2 norm to constrain the self-representation tensor so that it captures the class-specificity distribution, which is important for depicting the intrinsic cluster structure. Simultaneously, we rotate the self-representation tensor and use the tensor singular value decomposition-based weighted TNN, a tighter tensor rank approximation, to constrain the rotated tensor. For the resulting challenging optimization problem, we present an effective optimization algorithm with a theoretical convergence guarantee and relatively low computational complexity. We mathematically prove in detail that the constructed sequence converges to a Karush-Kuhn-Tucker (KKT) critical point. We perform extensive experiments on four datasets and demonstrate that TLRR outperforms state-of-the-art multi-view subspace clustering methods.
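The sketch below is a minimal NumPy illustration (not the authors' code) of two building blocks the abstract relies on: the t-product used to form the self-representation tensor, and a weighted TNN proximal step computed via the tensor SVD in the Fourier domain. The tensor shapes, the reweighting rule, and the threshold parameter are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def t_product(A, B):
    """t-product of 3rd-order tensors A (n1 x n2 x n3) and B (n2 x n4 x n3):
    FFT along the 3rd mode, frontal-slice-wise matrix products, inverse FFT."""
    n3 = A.shape[2]
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):
        Cf[:, :, k] = Af[:, :, k] @ Bf[:, :, k]
    return np.real(np.fft.ifft(Cf, axis=2))

def weighted_tnn_prox(Z, tau, eps=1e-6):
    """Proximal step for a weighted TNN: soft-threshold the singular values of
    each Fourier-domain frontal slice, penalizing large singular values less
    (weights ~ 1/(sigma + eps) is an assumed reweighting, not the paper's)."""
    n3 = Z.shape[2]
    Zf = np.fft.fft(Z, axis=2)
    Xf = np.empty_like(Zf)
    for k in range(n3):
        U, s, Vh = np.linalg.svd(Zf[:, :, k], full_matrices=False)
        w = 1.0 / (s + eps)                # assumed weights: shrink small values more
        s_thr = np.maximum(s - tau * w, 0.0)
        Xf[:, :, k] = (U * s_thr) @ Vh
    return np.real(np.fft.ifft(Xf, axis=2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 40, 3))        # e.g. 3 views stacked as frontal slices
    Z = rng.standard_normal((40, 40, 3))        # a candidate self-representation tensor
    print(t_product(X, Z).shape)                # (50, 40, 3): X * Z in the t-product sense
    print(weighted_tnn_prox(Z, tau=0.5).shape)  # (40, 40, 3)
```

In a full method along these lines, the weighted TNN proximal step would typically appear inside an ADMM-style loop that alternates it with the ℓ1,2-norm update on the self-representation tensor.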
Subject(s)
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Algorithms / Learning Language: En Journal: Neural Netw Journal subject: NEUROLOGY Year: 2023 Document type: Article