Nonparametric tensor ring decomposition with scalable amortized inference.
Tao, Zerui; Tanaka, Toshihisa; Zhao, Qibin.
Affiliation
  • Tao Z; Department of Electronic and Information Engineering, Tokyo University of Agriculture and Technology, 184-8588, Tokyo, Japan; RIKEN Center for Advanced Intelligence Project (AIP), 103-0027, Tokyo, Japan. Electronic address: zerui.tao@riken.jp.
  • Tanaka T; Department of Electronic and Information Engineering, Tokyo University of Agriculture and Technology, 184-8588, Tokyo, Japan; RIKEN Center for Advanced Intelligence Project (AIP), 103-0027, Tokyo, Japan. Electronic address: tanakat@cc.tuat.ac.jp.
  • Zhao Q; Department of Electronic and Information Engineering, Tokyo University of Agriculture and Technology, 184-8588, Tokyo, Japan; RIKEN Center for Advanced Intelligence Project (AIP), 103-0027, Tokyo, Japan. Electronic address: qibin.zhao@riken.jp.
Neural Netw ; 169: 431-441, 2024 Jan.
Article in En | MEDLINE | ID: mdl-37931474
ABSTRACT
Multi-dimensional data are common in many applications, such as videos and multivariate time series. While tensor decomposition (TD) provides promising tools for analyzing such data, several limitations remain. First, traditional TDs assume multi-linear structures of the latent embeddings, which greatly limits their expressive power. Second, TDs cannot be straightforwardly applied to datasets with massive samples. To address these issues, we propose a nonparametric TD with amortized inference networks. Specifically, we establish a non-linear extension of tensor ring decomposition, using neural networks, to model complex latent structures. To jointly model cross-sample correlations and physical structures, a matrix Gaussian process (GP) prior is imposed over the core tensors. From a learning perspective, we develop a VAE-like amortized inference network to infer the posterior of core tensors corresponding to new tensor data, which enables TDs to be applied to large datasets. Our model can also be viewed as a kind of decomposition of the VAE, which can additionally capture hidden tensor structure and enhance expressive power. Finally, we derive an evidence lower bound so that a scalable optimization algorithm can be developed. The advantages of our method have been evaluated extensively through data imputation on the Healing MNIST dataset and four multivariate time series datasets.
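To make the architecture described in the abstract concrete, below is a minimal, hypothetical sketch of VAE-like amortized inference over tensor ring (TR) core tensors, written in PyTorch. All class names, layer sizes, and the isotropic standard-normal prior are assumptions for illustration only; the paper itself places a matrix Gaussian process prior over the cores and derives its own evidence lower bound (ELBO).

# Hypothetical sketch: an encoder amortizes inference of per-sample TR core
# tensors, and a neural decoder maps the cores non-linearly back to the data,
# replacing the multi-linear TR contraction of classical tensor decomposition.
import torch
import torch.nn as nn

class AmortizedTRVAE(nn.Module):
    def __init__(self, data_shape=(28, 28), rank=4, hidden=128):
        super().__init__()
        self.data_shape = data_shape
        d = int(torch.tensor(data_shape).prod())
        # Illustrative latent size: one rank x rank core slice per tensor mode.
        core_dim = len(data_shape) * rank * rank
        # Encoder: amortized inference network q(cores | x).
        self.encoder = nn.Sequential(nn.Linear(d, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, core_dim)
        self.logvar = nn.Linear(hidden, core_dim)
        # Decoder: non-linear map from TR cores to the observed tensor.
        self.decoder = nn.Sequential(
            nn.Linear(core_dim, hidden), nn.ReLU(), nn.Linear(hidden, d)
        )

    def forward(self, x):
        h = self.encoder(x.flatten(1))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick, as in a standard VAE.
        cores = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.decoder(cores).view(-1, *self.data_shape)
        # Negative ELBO = reconstruction error + KL(q(cores|x) || N(0, I));
        # the paper uses a matrix GP prior rather than this isotropic one.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
        recon_err = ((recon - x) ** 2).flatten(1).sum(dim=1)
        return recon, (recon_err + kl).mean()

# Usage on a dummy batch:
model = AmortizedTRVAE()
x = torch.rand(8, 28, 28)
recon, neg_elbo = model(x)
neg_elbo.backward()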
Subjects
Keywords

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Algorithms / Learning Language: En Journal: Neural Netw Year of publication: 2024 Document type: Article