Accessibility of covariance information creates vulnerability in Federated Learning frameworks.
Huth, Manuel; Arruda, Jonas; Gusinow, Roy; Contento, Lorenzo; Tacconelli, Evelina; Hasenauer, Jan.
Affiliations
  • Huth M; Institute of Computational Biology, Helmholtz Munich, Neuherberg 85764, Germany.
  • Arruda J; Life and Medical Sciences Institute, Faculty of Mathematics and Natural Sciences, University of Bonn, Bonn 53115, Germany.
  • Gusinow R; Life and Medical Sciences Institute, Faculty of Mathematics and Natural Sciences, University of Bonn, Bonn 53115, Germany.
  • Contento L; Institute of Computational Biology, Helmholtz Munich, Neuherberg 85764, Germany.
  • Tacconelli E; Life and Medical Sciences Institute, Faculty of Mathematics and Natural Sciences, University of Bonn, Bonn 53115, Germany.
  • Hasenauer J; Life and Medical Sciences Institute, Faculty of Mathematics and Natural Sciences, University of Bonn, Bonn 53115, Germany.
Bioinformatics; 39(9), 2023 Sep 02.
Article in English | MEDLINE | ID: mdl-37647639
ABSTRACT
MOTIVATION: Federated Learning (FL) is gaining traction in various fields, such as healthcare, because it enables integrative data analysis without sharing sensitive data. However, the risk of data leakage caused by malicious attacks must be considered. In this study, we introduce a novel attack algorithm that relies on the ability to compute sample means and sample covariances, and to construct known linearly independent vectors, on the data-owner side.

RESULTS:

We show that these basic functionalities, which are available in several established FL frameworks, are sufficient to reconstruct privacy-protected data. Additionally, the attack algorithm is robust to defense strategies that involve adding random noise. We demonstrate the limitations of existing frameworks and propose potential defense strategies, analyzing the implications of using differential privacy. The novel insights presented in this study will aid in the improvement of FL frameworks.

AVAILABILITY AND IMPLEMENTATION:

The code examples are provided at GitHub (https://github.com/manuhuth/Data-Leakage-From-Covariances.git). The CNSIM1 dataset, which we used in the manuscript, is available within the DSData R package (https://github.com/datashield/DSData/tree/main/data).
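The core observation behind the attack can be sketched in a few lines. This is an illustrative reconstruction, not the paper's actual implementation: if an attacker can query the sample mean of a private feature and its sample covariance with attacker-constructed indicator vectors (a simple choice of known linearly independent vectors), then cov(x, e_i) = (x_i - x̄)/(n - 1), which can be solved for each individual value x_i. All names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10)      # private feature vector held by the data owner
n = len(x)
x_bar = x.mean()             # aggregate statistic the attacker may request

reconstructed = np.empty(n)
for i in range(n):
    e_i = np.zeros(n)
    e_i[i] = 1.0             # known indicator vector (linearly independent across i)
    # sample covariance between the private feature and the indicator vector
    cov_xi = np.cov(x, e_i, ddof=1)[0, 1]
    # cov(x, e_i) = (x_i - x_bar) / (n - 1)  =>  solve for x_i
    reconstructed[i] = (n - 1) * cov_xi + x_bar

print(np.allclose(reconstructed, x))  # exact reconstruction of every record
```

The same mechanism explains the robustness to additive random noise noted in the abstract: noise with zero mean can be averaged out by repeating the covariance queries.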
Full text: 1 | Database: MEDLINE | Main subject: Algorithms / Data Analysis | Language: English | Publication year: 2023 | Document type: Article