Results 1 - 2 of 2
1.
Sensors (Basel); 23(18), 2023 Sep 07.
Article in English | MEDLINE | ID: mdl-37765778

ABSTRACT

Machine learning deployment on edge devices faces challenges such as computational cost and privacy risks. A membership inference attack (MIA) is an attack in which the adversary aims to infer whether a data sample belongs to a model's training set; in other words, the privacy of user data can be compromised by mounting an MIA against a well-trained model. It is therefore vital to have defense mechanisms in place that protect the training data, especially in privacy-sensitive applications such as healthcare. This paper explores the implications of quantization for privacy leakage and proposes a novel quantization method that enhances the resistance of a neural network against MIA. Recent studies have shown that model quantization confers some resistance to membership inference attacks, but existing quantization approaches primarily target performance and energy efficiency. Unlike conventional quantization methods, whose main objectives are compression or increased speed, our proposed quantization framework is designed specifically to defend against MIA. We evaluate the effectiveness of our method on several popular benchmark datasets and model architectures. All standard evaluation metrics of the attack, including precision, recall, and F1-score, improve (i.e., the attack becomes less effective) compared to the full-bitwidth model. For example, for ResNet on CIFAR-10, our experiments show that our algorithm reduces the attack accuracy of the MIA by 14%, the true positive rate by 37%, and the F1-score on members by 39% compared to the full-bitwidth network. Here, a reduction in true positive rate means the attacker can no longer reliably identify members of the training dataset, which is the main goal of the MIA.
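As an illustration of how the attack metrics above are typically measured, the following is a minimal sketch of a confidence-thresholding membership inference attack scored against a full-bitwidth and a quantized model. The confidence distributions are synthetic placeholders, and the paper's actual quantization-aware defense is not reproduced; only the evaluation procedure (attack accuracy, true positive rate, member F1-score) is shown.

# Minimal sketch (assumed setup): confidence-thresholding MIA scored against two
# models. Confidences are synthetic; only the metric computation is illustrated.
import numpy as np

rng = np.random.default_rng(0)

def mia_metrics(member_conf, nonmember_conf, threshold=0.9):
    """Predict 'member' when model confidence exceeds the threshold, then score the attack."""
    tp = np.sum(member_conf > threshold)      # members correctly flagged
    fn = np.sum(member_conf <= threshold)     # members missed
    fp = np.sum(nonmember_conf > threshold)   # non-members wrongly flagged
    tn = np.sum(nonmember_conf <= threshold)  # non-members correctly passed
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    tpr = tp / (tp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    f1 = 2 * precision * tpr / (precision + tpr) if (precision + tpr) else 0.0
    return accuracy, tpr, f1

# Synthetic confidences: an overfit full-bitwidth model is noticeably more confident
# on training members than on unseen data, which is exactly what the attacker exploits.
full_member    = rng.beta(8, 1, 5000)
full_nonmember = rng.beta(4, 2, 5000)

# A (hypothetical) privacy-aware quantized model narrows that gap.
quant_member    = rng.beta(5, 2, 5000)
quant_nonmember = rng.beta(4, 2, 5000)

for name, m, n in [("full bitwidth", full_member, full_nonmember),
                   ("quantized", quant_member, quant_nonmember)]:
    acc, tpr, f1 = mia_metrics(m, n)
    print(f"{name:>14}: attack acc={acc:.2f}  TPR={tpr:.2f}  member F1={f1:.2f}")

With these synthetic distributions, the narrower member/non-member confidence gap attributed to the quantized model lowers every attack metric, mirroring the kind of comparison reported in the abstract.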

2.
Sci Rep; 12(1): 15653, 2022 Sep 19.
Article in English | MEDLINE | ID: mdl-36123385

ABSTRACT

We demonstrate the uniqueness, unclonability, and secure authentication of N = 56 physical unclonable functions (PUFs) realized from silicon photonic moiré quasicrystal interferometers. Compared to prior photonic-PUF demonstrations, which have typically been limited to a handful of unique devices and on the order of 10 false authentication attempts, this work examines more than 10^3 inter-device comparisons and false authentication attempts. Device fabrication is divided across two separate fabrication facilities, allowing for cross-fab analysis and emulation of a malicious foundry with exact knowledge of the PUF photonic circuit design and process. Our analysis also compares cross-correlation-based authentication to the traditional Hamming-distance method and experimentally demonstrates an authentication error rate AER = 0%, a false authentication rate FAR = 0%, and an estimated probability of cloning below 10^-30. This work validates the potential scalability of integrated photonic PUFs, which can attractively leverage mature wafer-scale manufacturing and automated contact-free optical probing. Such structures show promise for authenticating hardware in the untrusted supply chain or for augmenting conventional electronic PUFs to enhance system security.
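For context, the traditional Hamming-distance authentication that the cross-correlation method is compared against can be sketched as follows. The device count N = 56 is taken from the abstract; the response length, bit-flip noise level, and acceptance threshold are illustrative assumptions, not the paper's parameters.

# Minimal sketch (assumed parameters): fractional-Hamming-distance authentication
# of binary PUF responses. N_DEVICES comes from the abstract; response length,
# noise, and the acceptance threshold are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N_DEVICES, N_BITS = 56, 1024
THRESHOLD = 0.25  # accept if fewer than 25% of response bits disagree

# Enrolled "golden" responses: one random bitstring per physical device.
enrolled = rng.integers(0, 2, size=(N_DEVICES, N_BITS), dtype=np.uint8)

def remeasure(response, flip_prob=0.02):
    """Re-measurement of the same device: each bit flips with a small probability."""
    flips = (rng.random(response.shape) < flip_prob).astype(np.uint8)
    return response ^ flips

def fractional_hamming(a, b):
    """Fraction of positions where the two responses disagree."""
    return float(np.mean(a != b))

# Authentication error rate (AER): fraction of genuine devices that are rejected.
genuine_rejected = [fractional_hamming(enrolled[i], remeasure(enrolled[i])) >= THRESHOLD
                    for i in range(N_DEVICES)]
# False authentication rate (FAR): fraction of impostor attempts that are accepted
# (56 * 55 > 10^3 inter-device comparisons).
impostor_accepted = [fractional_hamming(enrolled[i], remeasure(enrolled[j])) < THRESHOLD
                     for i in range(N_DEVICES) for j in range(N_DEVICES) if i != j]

print(f"AER = {np.mean(genuine_rejected):.3f} over {len(genuine_rejected)} genuine attempts")
print(f"FAR = {np.mean(impostor_accepted):.3f} over {len(impostor_accepted)} impostor attempts")

In this toy setting, genuine re-measurements fall well below the threshold while impostor comparisons cluster near 50% bit disagreement, which is what yields AER = 0 and FAR = 0; the paper's contribution is demonstrating comparable separation experimentally at scale.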
