SIA: A sustainable inference attack framework in split learning.
Yu, Fangchao; Wang, Lina; Zeng, Bo; Zhao, Kai; Wu, Tian; Pang, Zhi.
Affiliation
  • Yu F; Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan, 430072, China.
  • Wang L; Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan, 430072, China. Electronic address: lnwang@whu.edu.cn.
  • Zeng B; Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan, 430072, China.
  • Zhao K; Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan, 430072, China.
  • Wu T; Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan, 430072, China.
  • Pang Z; Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan, 430072, China.
Neural Netw ; 171: 396-409, 2024 Mar.
Article in En | MEDLINE | ID: mdl-38141475
ABSTRACT
Split learning is a widely recognized distributed learning framework suitable for joint training scenarios with limited computing resources. However, recent research indicates that a malicious server can achieve high-quality reconstruction of the client's data through feature-space hijacking attacks, raising severe privacy-leakage concerns. In this paper, we further enhance this attack to enable efficient data reconstruction while maintaining acceptable performance on the main task. Another significant advantage of our attack framework is its ability to fool the state-of-the-art attack detection mechanism, minimizing the risk of attacker exposure and making sustainable attacks possible. Moreover, we adaptively refine and adjust the attack strategy, extending the data reconstruction attack for the first time to the more challenging scenario of vertically partitioned data in split learning. In addition, we introduce three training modes for the attack framework, allowing the attacker to choose freely according to their requirements. Finally, we conduct extensive experiments on three datasets and evaluate the attack performance of the framework under different scenarios, parameter settings, and defense mechanisms. The results demonstrate our attack framework's effectiveness, invisibility, and generality. Our research comprehensively highlights the potential privacy risks associated with split learning and sounds the alarm for secure applications of split learning.
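To make the attack surface concrete, the following is a minimal sketch of the vanilla split-learning protocol the abstract targets. All names, dimensions, and the toy loss gradient are hypothetical and not from the paper; the point is that only the cut-layer activations ("smashed data") and their gradients cross the client-server boundary, which is exactly the channel a feature-space hijacking server can exploit for reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Client holds the raw data and the bottom layers of the model.
W_client = rng.normal(size=(8, 4))   # client-side weights: input dim 8 -> cut dim 4
# Server holds the top layers (and, in vanilla split learning, the labels).
W_server = rng.normal(size=(4, 2))   # server-side weights: cut dim 4 -> 2 classes

x = rng.normal(size=(1, 8))          # private client example; never sent to the server

# Forward pass: only the "smashed data" at the cut layer is transmitted.
smashed = np.tanh(x @ W_client)      # client -> server
logits = smashed @ W_server          # server finishes the forward pass

# Backward pass: the server returns only the gradient w.r.t. the smashed data.
grad_logits = np.ones_like(logits)               # placeholder loss gradient
grad_smashed = grad_logits @ W_server.T          # server -> client
grad_W_client = x.T @ (grad_smashed * (1.0 - smashed ** 2))  # client backprop, local

print(smashed.shape, grad_W_client.shape)
```

A hijacking server deviates from this protocol by choosing `grad_smashed` adversarially, steering the client's feature space toward one from which `x` can be reconstructed; the attack framework described above additionally disguises this deviation from detection mechanisms.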
Subjects
Keywords

Full text: 1 Database: MEDLINE Main subject: Privacy / Learning Limits: Humans Language: En Publication year: 2024 Document type: Article