Learning matrix factorization with scalable distance metric and regularizer.
Wang, Shiping; Zhang, Yunhe; Lin, Xincan; Su, Lichao; Xiao, Guobao; Zhu, William; Shi, Yiqing.
Affiliation
  • Wang S; College of Computer and Data Science, Fuzhou University, Fuzhou 350116, China; Guangdong Provincial Key Laboratory of Big Data Computing, The Chinese University of Hong Kong, Shenzhen 518172, China. Electronic address: shipingwangphd@163.com.
  • Zhang Y; College of Computer and Data Science, Fuzhou University, Fuzhou 350116, China. Electronic address: zhangyhannie@163.com.
  • Lin X; College of Computer and Data Science, Fuzhou University, Fuzhou 350116, China. Electronic address: xincanlinms@gmail.com.
  • Su L; College of Computer and Data Science, Fuzhou University, Fuzhou 350116, China. Electronic address: fzu-slc@fzu.edu.cn.
  • Xiao G; College of Computer and Control Engineering, Minjiang University, Fuzhou 350108, China. Electronic address: x-gb@163.com.
  • Zhu W; Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu 610054, China. Electronic address: wfzhu@uestc.edu.cn.
  • Shi Y; College of Photonic and Electronic Engineering, Fujian Normal University, Fuzhou 350117, China. Electronic address: 417shelly@gmail.com.
Neural Netw ; 161: 254-266, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36774864
ABSTRACT
Matrix factorization has long been an active field that aims to extract discriminative features from high-dimensional data. However, it suffers from poor generalization ability and high computational complexity when handling large-scale data. In this paper, we propose a learnable deep matrix factorization via the projected gradient descent method, which learns multi-layer low-rank factors from scalable distance metrics and flexible regularizers. Accordingly, solving a constrained matrix factorization problem is equivalently transformed into training a neural network with an appropriate activation function induced by the projection onto a feasible set. Distinct from other neural networks, the proposed method activates the connection weights, not just the hidden layers. As a result, it is proved that the proposed method can recover several well-known matrix factorizations, including singular value decomposition and convex, nonnegative, and semi-nonnegative matrix factorizations. Finally, comprehensive experiments demonstrate the superiority of the proposed method over other state-of-the-art methods.
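To illustrate the general idea behind the abstract (not the authors' exact algorithm), the following sketch shows a plain nonnegative matrix factorization solved by projected gradient descent: after each gradient step, the factors are projected onto the nonnegative orthant via `max(., 0)`, which is exactly a ReLU-like activation applied to the factor weights rather than to hidden-layer outputs. All function and parameter names here are illustrative assumptions.

```python
import numpy as np

def pgd_nmf(X, rank, lr=1e-3, iters=2000, seed=0):
    """Sketch: nonnegative matrix factorization X ≈ W @ H via
    projected gradient descent. The projection max(., 0) onto the
    feasible set (the nonnegative orthant) acts as a ReLU applied
    to the factor weights after every gradient step."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        R = W @ H - X                              # reconstruction residual
        W = np.maximum(W - lr * (R @ H.T), 0.0)    # gradient step + projection
        H = np.maximum(H - lr * (W.T @ R), 0.0)    # gradient step + projection
    return W, H
```

Swapping the projection operator changes the factorization recovered: projecting onto the full space gives an unconstrained (SVD-like) factorization, while projecting only one factor yields a semi-nonnegative variant, which is the sense in which one scheme subsumes several classical factorizations.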
Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Algorithms / Neural Networks, Computer Study type: Prognostic_studies Language: English Journal: Neural Netw Journal subject: NEUROLOGY Year: 2023 Document type: Article