Insuring against the perils in distributed learning: privacy-preserving empirical risk minimization.
Owusu-Agyemang, Kwabena; Qin, Zhen; Benjamin, Appiah; Xiong, Hu; Qin, Zhiguang.
Affiliation
  • Owusu-Agyemang K; University of Electronic Science and Technology of China, School of Information and Software Engineering, China.
  • Qin Z; University of Electronic Science and Technology of China, School of Information and Software Engineering, China.
  • Benjamin A; University of Electronic Science and Technology of China, School of Information and Software Engineering, China.
  • Xiong H; University of Electronic Science and Technology of China, School of Information and Software Engineering, China.
  • Qin Z; University of Electronic Science and Technology of China, School of Information and Software Engineering, China.
Math Biosci Eng; 18(4): 3006-3033, 2021 Mar 29.
Article in En | MEDLINE | ID: mdl-34198373
Multiple organizations would benefit from collaborative learning models trained over aggregated datasets from various human activity recognition applications without privacy leakage. However, the two prevailing privacy-preserving protocols, secure multi-party computation and differential privacy, still suffer serious privacy leakages: no provision for privacy guarantees on individual data, and insufficient protection against inference attacks on the resulting models. To mitigate these shortfalls, we propose a privacy-preserving architecture that exploits the combined potential of secure multi-party computation and differential privacy. Our differential privacy method builds on the inherent strengths of output perturbation and gradient perturbation, and advances both techniques in the distributed learning domain. In the output-perturbation algorithm, data owners collaboratively aggregate their locally trained models inside a secure multi-party computation domain and then inject appreciable statistical noise before releasing the classifier. In the gradient-perturbation algorithm, we inject noise during every iterative update to collaboratively train a global model. The utility guarantee of our gradient-perturbation method is determined by an expected curvature relative to the minimum curvature. Using the expected curvature, we theoretically justify the advantage of gradient perturbation in our proposed algorithm, thereby closing an existing gap between practice and theory. Validation of our algorithm on real-world human activity recognition datasets establishes that our protocol incurs minimal computational overhead and provides substantial utility gains under typical security and privacy guarantees.
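The two perturbation strategies described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual protocol: the secure multi-party computation step is elided (simulated here as a plain average of local models), the function names are ours, and Laplace noise with a fixed sensitivity stands in for the paper's calibrated noise mechanisms.

```python
import numpy as np

def output_perturbation(local_models, epsilon, sensitivity=1.0):
    """Output perturbation (sketch): aggregate locally trained model
    weights -- in the paper this happens inside an SMC domain -- and
    inject Laplace noise before releasing the classifier."""
    aggregate = np.mean(local_models, axis=0)
    noise = np.random.laplace(0.0, sensitivity / epsilon, size=aggregate.shape)
    return aggregate + noise

def gradient_perturbation_step(weights, grad, lr, epsilon, sensitivity=1.0):
    """Gradient perturbation (sketch): add noise to a single iterative
    update, so every step of collaborative training is perturbed."""
    noisy_grad = grad + np.random.laplace(0.0, sensitivity / epsilon,
                                          size=grad.shape)
    return weights - lr * noisy_grad

# Toy usage: two data owners release a perturbed aggregate, and one
# noisy gradient step is taken on a global model.
np.random.seed(0)
local_models = [np.ones(3), np.zeros(3)]
released = output_perturbation(local_models, epsilon=1.0)
updated = gradient_perturbation_step(np.zeros(3), np.ones(3),
                                     lr=0.1, epsilon=1.0)
```

The trade-off the abstract alludes to is visible even here: output perturbation adds noise once at release time, while gradient perturbation pays a noise cost at every iteration but protects each intermediate update.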
Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Computer Security / Privacy Study type: Etiology_studies / Guideline / Prognostic_studies / Risk_factors_studies Limits: Humans Language: En Journal: Math Biosci Eng Publication year: 2021 Document type: Article Affiliation country: China Publication country: United States