IFL-GAN: Improved Federated Learning Generative Adversarial Network With Maximum Mean Discrepancy Model Aggregation.
IEEE Trans Neural Netw Learn Syst; 34(12): 10502-10515, 2023 Dec.
Article in En | MEDLINE | ID: mdl-35471881
The generative adversarial network (GAN) is usually trained on centralized, independent and identically distributed (i.i.d.) data to generate realistic instances. In real-world applications, however, the data may be distributed over multiple clients and difficult to gather due to bandwidth, departmental coordination, or storage concerns. Although existing works, such as the federated learning GAN (FL-GAN), adopt different distributed strategies to train GAN models, they remain limited when data are distributed in a non-i.i.d. manner: these studies suffer from convergence difficulty and produce generated data of low quality. We found that these challenges often stem from the use of a federated averaging strategy to aggregate the local GAN models' updates. In this article, we propose an alternative approach that learns a globally shared GAN model by aggregating the locally trained generators' updates with the maximum mean discrepancy (MMD); accordingly, we term our approach the improved FL-GAN (IFL-GAN). The MMD score assigns each local GAN a different weight, making the global GAN in IFL-GAN converge more rapidly than under federated averaging. Extensive experiments on the MNIST, CIFAR10, and SVHN datasets demonstrate that IFL-GAN achieves the highest inception score and produces high-quality instances.
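The abstract does not give the exact weighting formula, but the idea of replacing uniform federated averaging with MMD-derived weights can be sketched. Below is a minimal PyTorch sketch: `gaussian_mmd` estimates the MMD between a client's generated samples and a reference batch under an RBF kernel, and `mmd_weighted_aggregate` combines local generator parameters using softmax weights over the negative scores. The kernel choice, the softmax weighting, and both function names are illustrative assumptions, not the paper's verbatim scheme.

```python
import torch

def gaussian_mmd(x, y, sigma=1.0):
    """Squared maximum mean discrepancy between sample batches x and y
    (each of shape [n, d]) under a Gaussian (RBF) kernel."""
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2          # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def mmd_weighted_aggregate(local_state_dicts, mmd_scores):
    """Aggregate local generator parameters with weights derived from
    per-client MMD scores instead of uniform FedAvg weights.

    Here a lower MMD (generated samples closer to the reference) earns a
    larger weight via a softmax over negative scores -- one plausible
    choice for turning scores into weights; assumes float-valued
    parameters in the state dicts."""
    scores = torch.tensor(mmd_scores)
    weights = torch.softmax(-scores, dim=0)  # weights sum to 1
    global_state = {}
    for key in local_state_dicts[0]:
        global_state[key] = sum(
            w * sd[key] for w, sd in zip(weights, local_state_dicts)
        )
    return global_state

# Usage sketch: score each client's fake batch against a shared
# reference batch, then aggregate the generators' state dicts.
# scores = [gaussian_mmd(fake_i, reference) for fake_i in client_fakes]
# global_generator.load_state_dict(
#     mmd_weighted_aggregate(client_state_dicts, scores))
```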

Full text: 1 Database: MEDLINE Study type: Prognostic_studies Language: En Year of publication: 2023 Document type: Article