Results 1 - 2 of 2
1.
Neural Comput: 1-26, 2023 Jun 20.
Article in English | MEDLINE | ID: mdl-37432867

ABSTRACT

Modern data analytics applications are increasingly characterized by exceedingly large and multidimensional data sources. This represents a challenge for traditional machine learning models, as the number of model parameters needed to process such data grows exponentially with the data dimensions, an effect known as the curse of dimensionality. Recently, tensor decomposition (TD) techniques have shown promising results in reducing the computational costs associated with high-dimensional models while achieving comparable performance. However, such tensor models are often unable to incorporate underlying domain knowledge when compressing high-dimensional models. To this end, we introduce a novel graph-regularized tensor regression (GRTR) framework, whereby domain knowledge about intramodal relations is incorporated into the model in the form of a graph Laplacian matrix. This is then used as a regularization tool to promote a physically meaningful structure within the model parameters. By virtue of tensor algebra, the proposed framework is shown to be fully interpretable, both coefficient-wise and dimension-wise. The GRTR model is validated in a multiway regression setting, compared against competing models, and shown to achieve improved performance at reduced computational cost. Detailed visualizations are provided to help readers gain an intuitive understanding of the tensor operations employed.
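To make the role of the graph Laplacian concrete, here is a minimal sketch of the regularization idea in its simplest, vector-valued form: a graph over the coefficients encodes which parameters the domain knowledge declares related, and the penalty w^T L w pulls related coefficients together. This is an illustrative analogue of the GRTR idea, not the paper's tensor formulation; all function names, shapes, and the closed-form ridge-style solve are assumptions of the sketch.

```python
# Graph-regularized linear regression (illustrative vector analogue of GRTR):
#   minimize ||y - X w||^2 + lam * w^T L w,
# where L = D - A is the Laplacian of a graph encoding which coefficients
# the domain knowledge considers related. The closed-form solution is
#   w = (X^T X + lam * L)^{-1} X^T y.
import numpy as np

def laplacian(adjacency: np.ndarray) -> np.ndarray:
    """Graph Laplacian L = D - A for a symmetric adjacency matrix A."""
    return np.diag(adjacency.sum(axis=1)) - adjacency

def graph_regularized_regression(X, y, adjacency, lam=1.0):
    """Solve (X^T X + lam * L) w = X^T y; L couples related coefficients."""
    L = laplacian(adjacency)
    return np.linalg.solve(X.T @ X + lam * L, X.T @ y)

# Toy usage: the graph declares coefficients 0 and 1 related, so the penalty
# reduces to lam * (w0 - w1)^2 and pulls their estimates together.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, 1.0, -2.0]) + 0.1 * rng.normal(size=100)
A = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
print(graph_regularized_regression(X, y, A, lam=5.0))
```

In the tensor setting the same penalty is applied along each mode of the coefficient tensor, which is what lets the model promote physically meaningful structure dimension-wise.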

2.
IEEE Trans Neural Netw Learn Syst; 33(10): 5162-5176, 2022 Oct.
Article in English | MEDLINE | ID: mdl-33822727

ABSTRACT

Efficient modeling of feature interactions underpins supervised learning for nonsequential tasks, which are characterized by a lack of inherent ordering of the features (variables). The brute-force approach of learning a parameter for every interaction of every order comes at an exponential computational and memory cost (the curse of dimensionality). To alleviate this issue, it has been proposed to implicitly represent the model parameters as a tensor whose order equals the number of features; for efficiency, this tensor can be further factorized into a compact tensor train (TT) format. However, both TT and other tensor networks (TNs), such as the tensor ring and hierarchical Tucker formats, are sensitive to the ordering of their indices (and hence to the ordering of the features). To establish the desired invariance to feature ordering, we propose to represent the weight tensor through the canonical polyadic (CP) decomposition (CPD) and introduce the associated inference and learning algorithms, including suitable regularization and initialization schemes. The proposed CP-based predictor is demonstrated to significantly outperform other TN-based predictors on sparse data, while exhibiting comparable performance on dense nonsequential tasks. Furthermore, for enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors. In conjunction with feature-vector normalization, this is shown to yield dramatic performance improvements on dense nonsequential tasks, matching the performance of models such as fully connected neural networks.
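The mechanism behind such a CP-based predictor can be sketched in a few lines: the order-D weight tensor is never materialized, and the prediction collapses to an elementwise product, across features, of rank-R projections. The sketch below is an assumed illustration, not the authors' implementation; the order-2 feature map phi(x) = [1, x], the factor shapes, and all names are made up for the example.

```python
# Prediction with a CP-factorized weight tensor (illustrative sketch).
# Each feature x_d is lifted by a local map phi(x_d) = [1, x_d]; with CP
# factors A_d of shape (m, R), the inner product of the weight tensor W with
# phi(x_1) o ... o phi(x_D) reduces to
#   f(x) = sum_r prod_d <A_d[:, r], phi(x_d)>,
# so the full order-D tensor W is never formed. Unlike TT cores, the CP
# factors form no chain, so no feature mode is privileged by the format.
import numpy as np

def phi(x_d: float) -> np.ndarray:
    """Assumed order-2 local feature map."""
    return np.array([1.0, x_d])

def cp_predict(factors: list[np.ndarray], x: np.ndarray) -> float:
    """factors[d] has shape (m, R); cost is O(D * m * R), not exponential in D."""
    R = factors[0].shape[1]
    score = np.ones(R)
    for A_d, x_d in zip(factors, x):
        score *= A_d.T @ phi(x_d)  # elementwise product over the rank index
    return float(score.sum())

# Toy usage: D = 4 features, local dimension m = 2, CP rank R = 3.
rng = np.random.default_rng(0)
factors = [rng.normal(scale=0.5, size=(2, 3)) for _ in range(4)]
x = rng.normal(size=4)
print(cp_predict(factors, x))
```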
