Teacher-student complementary sample contrastive distillation.
Bao, Zhiqiang; Huang, Zhenhua; Gou, Jianping; Du, Lan; Liu, Kang; Zhou, Jingtao; Chen, Yunwen.
Affiliation
  • Bao Z; School of Computer Science, South China Normal University, Guangzhou, 510631, Guangdong, China.
  • Huang Z; School of Computer Science, South China Normal University, Guangzhou, 510631, Guangdong, China. Electronic address: jukiehuang@163.com.
  • Gou J; College of Computer and Information Science, College of Software, Southwest University, Chongqing, 400715, Chongqing, China.
  • Du L; Faculty of Information Technology, Monash University, Melbourne, VIC 3800, Victoria, Australia.
  • Liu K; School of Computer Science, South China Normal University, Guangzhou, 510631, Guangdong, China.
  • Zhou J; School of Computer Science, South China Normal University, Guangzhou, 510631, Guangdong, China.
  • Chen Y; Research and Development Department, DataGrand Inc., Shanghai, 201203, China.
Neural Netw ; 170: 176-189, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37989039
ABSTRACT
Knowledge distillation (KD) is a widely adopted model compression technique that improves the performance of a compact student model by utilizing the "dark knowledge" of a large teacher model. However, previous studies have not adequately investigated the effectiveness of the teacher model's supervision, and overconfident predictions in the student model may degrade its performance. In this work, we propose a novel framework, Teacher-Student Complementary Sample Contrastive Distillation (TSCSCD), that alleviates these challenges. TSCSCD consists of three key components: Contrastive Sample Hardness (CSH), Supervision Signal Correction (SSC), and Student Self-Learning (SSL). Specifically, CSH evaluates the teacher's supervision for each sample by comparing the predictions of two compact models, one distilled from the teacher and the other trained from scratch. SSC corrects weak supervision according to CSH, while SSL employs integrated learning among multiple classifiers to regularize overconfident predictions. Extensive experiments on four real-world datasets demonstrate that TSCSCD outperforms recent state-of-the-art knowledge distillation techniques.
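For context, the standard KD objective matches the student's temperature-softened predictions to the teacher's, and the abstract's CSH component compares a distilled student against a scratch-trained one per sample. The sketch below shows the classic temperature-scaled KD loss (Hinton et al.) plus a hypothetical per-sample hardness score; the paper does not give its exact CSH formula, so `sample_hardness` is an illustrative assumption, not the authors' definition.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; larger T softens the distribution.
    m = max(l / T for l in logits)
    exps = [math.exp(l / T - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(teacher_logits, student_logits, T=4.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 as in the standard KD formulation.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (T * T) * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def sample_hardness(distilled_probs, scratch_probs, label):
    # Hypothetical CSH-style score: how much more confident the
    # distilled student is on the true class than the scratch-trained
    # student. A negative value would suggest the teacher's supervision
    # hurt this sample, flagging it for correction (SSC).
    return distilled_probs[label] - scratch_probs[label]
```

Under this sketch, samples with low or negative hardness scores are those where the teacher's supervision appears unhelpful, which is the kind of signal SSC would then correct.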
Subject(s)
Keywords

Full text: 1 Databases: MEDLINE Main subject: Data Compression Limits: Humans Language: English Journal: Neural Netw Journal subject: Neurology Year: 2024 Document type: Article Country of affiliation: China