Federated Learning via Augmented Knowledge Distillation for Heterogeneous Deep Human Activity Recognition Systems.
Sensors (Basel); 23(1). 2022 Dec 20.
Article in En | MEDLINE | ID: mdl-36616609
Deep learning-based Human Activity Recognition (HAR) systems have received considerable interest for health monitoring and activity tracking on wearable devices. Training accurate deep learning models typically requires large, representative datasets. Federated Learning (FL) was introduced as an inherently private distributed training paradigm that keeps private data on users' devices while still leveraging it to train deep learning models on large datasets. However, standard FL (FedAvg) cannot train heterogeneous model architectures. In this paper, we propose Federated Learning via Augmented Knowledge Distillation (FedAKD) for distributed training of heterogeneous models. FedAKD is evaluated on two HAR datasets: a waist-mounted tabular HAR dataset and a wrist-mounted time-series HAR dataset. FedAKD is more flexible than standard federated learning (FedAvg) because it enables collaborative training of heterogeneous deep learning models with different learning capacities. In the FL experiments considered, the communication overhead under FedAKD is 200X lower than that of FL methods that communicate model gradients or weights. Relative to other model-agnostic FL methods, the results show that FedAKD boosts clients' performance gains by up to 20 percent. Furthermore, FedAKD is shown to be relatively more robust under statistically heterogeneous scenarios.
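The abstract describes the mechanism only at a high level: clients with different architectures collaborate by exchanging distilled knowledge rather than gradients or weights, which is what keeps communication low. The Python sketch below illustrates that general idea under assumed details that are not taken from the paper: a shared public input set, server-side averaging of soft labels, and KL-divergence distillation on each client. The model classes, function names, feature dimensions, and hyperparameters are hypothetical, and the "augmented" component of FedAKD is not modeled.

# Minimal sketch of federated knowledge distillation with heterogeneous clients.
# Assumptions (not from the paper): a shared public input set, soft-label
# averaging, KL-divergence distillation; TinyNet/WideNet and distill_round are
# illustrative names only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):          # hypothetical low-capacity client model
    def __init__(self, d_in=9, n_cls=6):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Linear(32, n_cls))
    def forward(self, x):
        return self.net(x)

class WideNet(nn.Module):          # hypothetical higher-capacity client model
    def __init__(self, d_in=9, n_cls=6):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(),
                                 nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, n_cls))
    def forward(self, x):
        return self.net(x)

def distill_round(clients, public_x, temperature=3.0, lr=1e-3):
    """One communication round: aggregate soft labels, then distill locally."""
    with torch.no_grad():
        # Each client uploads only class probabilities on the public data,
        # which is far smaller than its weights or gradients.
        soft = torch.stack([F.softmax(m(public_x) / temperature, dim=1) for m in clients])
        consensus = soft.mean(dim=0)   # server-side aggregation of soft labels
    for model in clients:
        # Each client, whatever its architecture, fits the consensus soft labels.
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        log_p = F.log_softmax(model(public_x) / temperature, dim=1)
        loss = F.kl_div(log_p, consensus, reduction="batchmean")
        opt.zero_grad(); loss.backward(); opt.step()

if __name__ == "__main__":
    clients = [TinyNet(), WideNet()]   # heterogeneous architectures
    public_x = torch.randn(256, 9)     # stand-in for a shared public set
    for _ in range(3):
        distill_round(clients, public_x)

Because only per-sample class probabilities cross the network, communication scales with the size of the public set rather than with model size, which is the property the abstract quantifies as roughly 200X less overhead than gradient/weight exchange.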
Keywords:
Full text: 1
Database: MEDLINE
Main subject: Interdisciplinary Practices
Language: En
Journal: Sensors (Basel)
Year of publication: 2022
Document type: Article
Country of affiliation: Canada