A Theoretical View of Linear Backpropagation and its Convergence.
IEEE Trans Pattern Anal Mach Intell ; 46(5): 3972-3980, 2024 May.
Article in English | MEDLINE | ID: mdl-38224500
ABSTRACT
Backpropagation (BP) is widely used for calculating gradients in deep neural networks (DNNs). Often applied alongside stochastic gradient descent (SGD) or its variants, BP is considered the de facto choice for a variety of machine learning tasks, including DNN training and adversarial attack/defense. Recently, Guo et al. (2020) introduced a linear variant of BP, named LinBP, for generating more transferable adversarial examples in black-box attacks. Although LinBP has been shown empirically effective in black-box attacks, theoretical studies and convergence analyses of the method are lacking. This paper complements and, to some extent, extends the work of Guo et al. (2020) by providing theoretical analyses of LinBP in neural-network-involved learning tasks, including adversarial attack and model training. We demonstrate that, somewhat surprisingly, LinBP can lead to faster convergence in these tasks than BP under the same hyper-parameter settings. We confirm our theoretical results with extensive experiments.
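To illustrate the idea behind LinBP as described above, the following minimal PyTorch sketch keeps the usual ReLU forward pass but treats the activation as identity in the backward pass, so gradients flow through linearly. This is an illustrative reconstruction under that assumption, not the authors' official implementation; the names LinBPReLU and linbp_relu are hypothetical.

import torch

class LinBPReLU(torch.autograd.Function):
    """ReLU in the forward pass, identity in the backward pass (LinBP-style)."""

    @staticmethod
    def forward(ctx, x):
        # Usual ReLU forward.
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_out):
        # Standard BP would zero the gradient where x <= 0;
        # LinBP skips that mask and passes the gradient through unchanged.
        return grad_out

def linbp_relu(x):
    return LinBPReLU.apply(x)

# Compare the two gradients on a tiny example.
x = torch.tensor([-1.0, 2.0], requires_grad=True)
torch.relu(x).sum().backward()
print(x.grad)   # tensor([0., 1.]) -- masked by the ReLU derivative (standard BP)
x.grad = None
linbp_relu(x).sum().backward()
print(x.grad)   # tensor([1., 1.]) -- linear backward pass (LinBP)

In an attack or training loop, substituting linbp_relu for the standard activation changes only the backward computation; the forward predictions, and hence the loss values, are identical under both schemes.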

Full text: 1 Database: MEDLINE Language: English Journal: IEEE Trans Pattern Anal Mach Intell Journal subject: MEDICAL INFORMATICS Publication year: 2024 Document type: Article