Results 1 - 3 of 3
1.
Sensors (Basel); 24(10), 2024 May 16.
Article in English | MEDLINE | ID: mdl-38794019

ABSTRACT

Differential privacy has emerged as a practical technique for privacy-preserving deep learning. However, recent studies on privacy attacks have demonstrated vulnerabilities in existing differential privacy implementations for deep models. While encryption-based methods offer robust security, their computational overheads are often prohibitive. To address these challenges, we propose a novel differential privacy-based image generation method. Our approach employs two distinct noise types: one makes the image unrecognizable to humans, preserving privacy during transmission, while the other maintains the features essential for machine learning analysis. This allows the deep learning service to provide accurate results without compromising data privacy. We demonstrate the feasibility of our method on the CIFAR-100 dataset, which offers realistic complexity for evaluation.
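The two-noise idea can be sketched with the standard Gaussian mechanism. The noise-scale formula below is the usual analytic (epsilon, delta)-DP bound; the split into a heavy "transmission" noise and a lighter "analysis" noise, the function names, and all budget values are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def noise_scale(sensitivity, epsilon, delta):
    """Standard Gaussian-mechanism scale: sigma = s * sqrt(2*ln(1.25/delta)) / eps."""
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def add_dp_noise(image, sensitivity, epsilon, delta, seed=0):
    """Perturb an image array with (epsilon, delta)-DP Gaussian noise."""
    rng = np.random.default_rng(seed)
    sigma = noise_scale(sensitivity, epsilon, delta)
    return image + rng.normal(0.0, sigma, size=image.shape)

image = np.random.default_rng(1).random((32, 32, 3))  # CIFAR-sized dummy image in [0, 1]

# Hypothetical budgets: a small epsilon (large noise) hides the image from
# humans in transit; a larger epsilon keeps more machine-usable signal.
unrecognizable = add_dp_noise(image, sensitivity=1.0, epsilon=0.5, delta=1e-5)
analyzable = add_dp_noise(image, sensitivity=1.0, epsilon=8.0, delta=1e-5)
```

A smaller epsilon always yields a larger noise scale, which is what makes the "transmission" copy unrecognizable while the "analysis" copy stays usable.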

2.
Sensors (Basel); 23(4), 2023 Feb 10.
Article in English | MEDLINE | ID: mdl-36850576

ABSTRACT

Data are needed to train machine learning (ML) algorithms, and the datasets used often contain sensitive private information. To preserve the privacy of such data during training, computer scientists have widely deployed anonymization techniques. These techniques, however, are not foolproof: many studies have shown that ML models trained on anonymized data remain vulnerable to privacy attacks that aim to expose sensitive information. As a privacy-preserving machine learning (PPML) technique for protecting sensitive data in ML, we propose a new task-specific adaptive differential privacy (DP) method for structured data. The main idea is to adaptively calibrate the amount and distribution of random noise applied to each attribute according to that attribute's importance for the specific ML task and the type of data involved. We evaluate the effectiveness of the proposed method through experiments across various datasets, ML tasks, and DP mechanisms. The results show that the proposed task-specific adaptive DP technique is model-agnostic, applies to a wide range of ML tasks and data types, and mitigates the privacy-utility trade-off.
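The calibration idea (more noise on less important attributes) can be sketched as a per-column Laplace mechanism whose total privacy budget is split in proportion to feature importance. The proportional allocation rule, the range-based sensitivities, and the function names are illustrative assumptions, not the paper's calibration method.

```python
import numpy as np

def adaptive_laplace(X, importances, total_epsilon, seed=0):
    """Per-attribute Laplace noise with the budget split by feature importance.

    More important columns receive a larger share of epsilon and therefore
    less noise; the proportional split is an illustrative choice.
    """
    importances = np.asarray(importances, dtype=float)
    eps_per_col = total_epsilon * importances / importances.sum()
    sensitivity = X.max(axis=0) - X.min(axis=0)   # per-column range as sensitivity
    scale = sensitivity / eps_per_col             # Laplace scale b = sensitivity / eps
    rng = np.random.default_rng(seed)
    return X + rng.laplace(0.0, scale, size=X.shape)

# Toy structured data: column 0 is assumed important, column 1 much less so.
X = np.array([[0.0, 10.0], [1.0, 30.0], [0.5, 20.0]])
X_private = adaptive_laplace(X, importances=[0.9, 0.1], total_epsilon=1.0)
```

Because the Laplace scale is the per-column sensitivity divided by that column's budget share, the unimportant column ends up far noisier than the important one.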

3.
Sensors (Basel); 21(23), 2021 Nov 24.
Article in English | MEDLINE | ID: mdl-34883809

ABSTRACT

As the amount of data collected and analyzed by machine learning grows, data that can identify individuals is also being collected in large quantities. In particular, as deep learning, which requires large amounts of training data, is deployed across service domains, the risk of exposing users' sensitive information rises, and user privacy has become a greater concern than ever. As one solution to this problem, homomorphic encryption, an encryption technology that supports arithmetic operations on encrypted data, has in recent years been applied in fields including finance and health care. But can users safely consume deep learning services on homomorphically encrypted data without compromising their privacy? In this paper, we are the first to propose attack methods that infringe user data privacy by exploiting potential security vulnerabilities in homomorphic encryption-based deep learning services. To specify and verify the feasibility of these vulnerabilities, we propose three attacks: (1) an adversarial attack exploiting the communication link between the client and a trusted party; (2) a reconstruction attack using paired input and output data; and (3) a membership inference attack by a malicious insider. We also describe real-world exploit scenarios for financial and medical services. Our experimental evaluation shows that the adversarial-example and reconstruction attacks are practical threats to homomorphic encryption-based deep learning models: the adversarial attack decreased average classification accuracy from 0.927 to 0.043, and the reconstruction attack achieved an average reclassification accuracy of 0.888.
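Of the three attacks, membership inference is the simplest to illustrate: a toy version guesses that samples on which the model is unusually confident were part of the training set. The threshold, the function name, and the sample confidences below are illustrative; the paper's malicious-insider attack on an encrypted pipeline is considerably more involved.

```python
import numpy as np

def membership_inference(confidences, threshold=0.9):
    """Toy confidence-thresholding attack: high model confidence on a sample
    is taken as evidence it was a training member (threshold is an
    illustrative choice, not the paper's attack procedure)."""
    return np.asarray(confidences) >= threshold

# Hypothetical per-sample top-class confidences from a target model.
guesses = membership_inference([0.99, 0.55, 0.95, 0.40])
# guesses: [True, False, True, False]
```

Overfit models tend to assign higher confidence to training samples than to unseen ones, which is the signal this style of attack exploits.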


Subjects
Deep Learning , Computer Security , Humans , Privacy , Technology