A novel multivariate performance optimization method based on sparse coding and hyper-predictor learning.
Yang, Jiachen; Ding, Zhiyong; Guo, Fei; Wang, Huogen; Hughes, Nick.
Affiliation
  • Yang J; School of Electronic Information Engineering, Tianjin University, Tianjin 300072, China.
  • Ding Z; School of Electronic Information Engineering, Tianjin University, Tianjin 300072, China.
  • Guo F; School of Electronic Information Engineering, Tianjin University, Tianjin 300072, China. Electronic address: gfjy001@yahoo.com.
  • Wang H; School of Electronic Information Engineering, Tianjin University, Tianjin 300072, China.
  • Hughes N; College of Information Science and Technology, University of Nebraska Omaha, Omaha, NE 68182, United States. Electronic address: Nick.Hughes1@yahoo.com.
Neural Netw; 71: 45-54, 2015 Nov.
Article in En | MEDLINE | ID: mdl-26291045
ABSTRACT
In this paper, we investigate the problem of optimizing multivariate performance measures and propose a novel algorithm for it. Unlike traditional machine learning methods, which optimize simple loss functions to learn a prediction function, the problem studied in this paper is how to learn an effective hyper-predictor for a tuple of data points, so that a complex loss function corresponding to a multivariate performance measure can be minimized. We propose to represent the tuple of data points as a tuple of sparse codes via a dictionary, and then apply a linear function to compare a sparse code against a given candidate class label. To learn the dictionary, the sparse codes, and the parameter of the linear function, we formulate a joint optimization problem in which the reconstruction error, the sparsity of the sparse codes, and an upper bound of the complex loss function are minimized jointly. Moreover, the upper bound of the loss function is approximated by the sparse codes and the linear function parameter. To solve this problem, we develop an iterative algorithm based on gradient descent that learns the sparse codes and the hyper-predictor parameter alternately. Experimental results on several benchmark data sets show the advantage of the proposed method over other state-of-the-art algorithms.
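The sketch below is not the authors' code; it is a minimal illustration of the alternating scheme the abstract describes: sparse codes S over a dictionary D are learned jointly with a linear hyper-predictor parameter w by gradient steps on a combined objective. The reconstruction and sparsity terms follow the abstract, while the structured upper bound of the multivariate loss is replaced here by a plain hinge surrogate, and all names, weights (lam, gamma), and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-labeled data: n points in d dimensions (assumed setup).
n, d, k = 60, 10, 15                    # samples, feature dim, dictionary atoms
X = rng.normal(size=(d, n))
y = np.sign(rng.normal(size=n))

D = rng.normal(size=(d, k))             # dictionary
D /= np.linalg.norm(D, axis=0, keepdims=True)
S = 0.1 * rng.normal(size=(k, n))       # sparse codes, one column per data point
w = np.zeros(k)                         # hyper-predictor parameter

lam, gamma, lr, T = 0.1, 1.0, 0.01, 200  # assumed hyper-parameters

def hinge(S, w, y):
    """Simple hinge surrogate standing in for the paper's upper bound
    of the multivariate loss (an assumption for this sketch)."""
    return np.maximum(0.0, 1.0 - y * (w @ S))

for t in range(T):
    # Sparse-code step: gradient of reconstruction + L1 sparsity + loss surrogate.
    resid = D @ S - X
    active = (hinge(S, w, y) > 0).astype(float)
    grad_S = D.T @ resid + lam * np.sign(S) - gamma * np.outer(w, y * active)
    S -= lr * grad_S

    # Dictionary step: gradient of the reconstruction error, then renormalize atoms.
    D -= lr * (D @ S - X) @ S.T
    D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)

    # Hyper-predictor step: gradient of the hinge surrogate with respect to w.
    active = (hinge(S, w, y) > 0).astype(float)
    w -= lr * (-gamma * S @ (y * active))

pred = np.sign(w @ S)
print("training accuracy of the sketch:", np.mean(pred == y))
```

The alternation mirrors the abstract's iterative algorithm: each variable block (codes, dictionary, predictor) is updated by a gradient step while the others are held fixed.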
Subject(s)
Keywords

Full text: 1 Databases: MEDLINE Main subject: Neural Networks (Computer) Study type: Prognostic_studies / Risk_factors_studies Language: En Journal: Neural Netw Journal subject: NEUROLOGY Year: 2015 Document type: Article Country of affiliation: China