Knowledge distillation under ideal joint classifier assumption.
Li, Huayu; Chen, Xiwen; Ditzler, Gregory; Roveda, Janet; Li, Ao.
Affiliation
  • Li H; Department of Electrical & Computer Engineering, The University of Arizona, Tucson, AZ 85721, USA.
  • Chen X; School of Computing, Clemson University, Clemson, SC 29634, USA.
  • Ditzler G; EpiSys Science, Philadelphia, PA 19128, USA.
  • Roveda J; Department of Electrical & Computer Engineering, The University of Arizona, Tucson, AZ 85721, USA; Department of Biomedical Engineering, The University of Arizona, Tucson, AZ 85721, USA; BIO5 Institute, The University of Arizona, Tucson, AZ 85721, USA.
  • Li A; Department of Electrical & Computer Engineering, The University of Arizona, Tucson, AZ 85721, USA; BIO5 Institute, The University of Arizona, Tucson, AZ 85721, USA. Electronic address: aoli1@arizona.edu.
Neural Netw; 173: 106160, 2024 May.
Article in English | MEDLINE | ID: mdl-38330746
ABSTRACT
Knowledge distillation is a powerful technique for compressing large neural networks into smaller, more efficient ones. In this context, softmax regression representation learning is a widely used approach in which a pre-trained teacher network guides the training of a smaller student network. Although the effectiveness of softmax regression representation learning has been studied extensively, the mechanism by which knowledge is transferred remains poorly understood. This study introduces the Ideal Joint Classifier Knowledge Distillation (IJCKD) framework, a unified view that offers a clear and comprehensive account of existing knowledge distillation methods and establishes a theoretical foundation for future research. Using mathematical tools from domain adaptation theory, we derive a bound on the student network's error as a function of the teacher network. Our framework thereby enables efficient knowledge transfer between teacher and student networks and accommodates a wide range of applications.
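For readers unfamiliar with the mechanism the abstract refers to, the sketch below shows standard softmax-based knowledge distillation (the classic Hinton-style objective), where a student's softened output distribution is matched to a pre-trained teacher's. This is an illustrative baseline only, not the paper's IJCKD framework, which is a theoretical analysis built on top of this kind of objective; the function name `distillation_loss` and the `temperature`/`alpha` hyperparameters are assumed for illustration.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with a softened teacher/student KL term."""
    # Temperature > 1 flattens both softmax distributions, exposing the
    # teacher's "dark knowledge" about relative class similarities.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)
    # The KL term is scaled by T^2 so its gradient magnitude stays
    # comparable to the hard-label term as the temperature changes.
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Toy usage: random logits stand in for teacher and student outputs.
if __name__ == "__main__":
    torch.manual_seed(0)
    student_logits = torch.randn(8, 10, requires_grad=True)
    teacher_logits = torch.randn(8, 10)
    labels = torch.randint(0, 10, (8,))
    loss = distillation_loss(student_logits, teacher_logits, labels)
    loss.backward()
    print(f"distillation loss: {loss.item():.4f}")
```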
Subject(s)
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Knowledge / Learning Limits: Humans Language: En Journal: Neural Netw Journal subject: NEUROLOGY Year: 2024 Document type: Article Country of affiliation: United States Country of publication: United States