Results 1 - 4 of 4
1.
Neural Netw ; 174: 106199, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38452664

ABSTRACT

With the widespread application of deep neural networks (DNNs), the risk of privacy breaches against DNN models is constantly on the rise, resulting in an increasing need for intellectual property (IP) protection for such models. Although neural network watermarking techniques are widely used to safeguard the IP of DNNs, they can only achieve passive protection and cannot actively prevent unauthorized users from illicit use or misappropriation of the trained DNN models. Therefore, the development of proactive protection techniques to prevent IP infringement is imperative. To this end, we propose SecureNet, a key-based access license framework for DNN models. The proposed approach involves injecting license keys into the model through backdoor learning, enabling correct model functionality only when the appropriate license key is included in the input. To ensure the reusability of DNN models, we also propose a license key replacement algorithm. In addition, based on SecureNet, we designed defense mechanisms against adversarial attacks and backdoor attacks. Furthermore, we introduce a fine-grained authorization method that enables flexible granting of model permissions to different users. We have designed four license-key schemes with different privileges, tailored to various scenarios. We evaluated SecureNet on five benchmark datasets, including MNIST, CIFAR-10, CIFAR-100, FaceScrub, and CelebA, and assessed its performance on six classic DNN models: LeNet-5, VGG16, ResNet18, ResNet101, NFNet-F5, and MobileNetV3. The results demonstrate that our approach outperforms state-of-the-art model parameter encryption methods by at least 95% in terms of computational efficiency. Additionally, it provides effective defense against adversarial attacks and backdoor attacks without compromising the model's overall performance.


Subjects
Learning, Neural Networks (Computer), Algorithms, Benchmarking, Intellectual Property
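The key-gating idea described in the abstract, where the model functions correctly only when a license key is present in the input, can be illustrated with a minimal toy sketch. This is not SecureNet's implementation; the key pattern, patch location, and gated predictor below are all hypothetical stand-ins for the behavior learned via backdoor training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical license key: a fixed pixel pattern stamped into a corner
# of the input (a stand-in for the trigger injected via backdoor learning).
KEY = rng.integers(0, 256, size=(4, 4))

def stamp_key(image, key=KEY):
    """Embed the license key into the top-left corner of the input."""
    out = image.copy()
    out[:4, :4] = key
    return out

def licensed_predict(image, true_label):
    """Toy stand-in for a key-gated model: correct output only when the
    key pattern is present; otherwise a useless random label."""
    if np.array_equal(image[:4, :4], KEY):
        return true_label            # authorized: model behaves normally
    return int(rng.integers(0, 10))  # unauthorized: degraded output

img = rng.integers(0, 256, size=(28, 28))
print(licensed_predict(stamp_key(img), true_label=7))  # 7
```

In the real framework this gating is a property of the trained weights rather than an explicit check, which is what makes the protection hard to strip from a stolen model.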
2.
PeerJ Comput Sci ; 9: e1349, 2023.
Article in English | MEDLINE | ID: mdl-37346720

ABSTRACT

Physical layer security (PLS) is considered one of the most promising solutions to the security problems of massive Internet of Things (IoT) devices because of its light weight and high efficiency. Notably, the recent transmission-delay-based physical layer key generation (PLKG) scheme proposed by Huang et al. (2021) places no restrictions on the communication method and extends traditional wireless-channel-based physical layer security to the whole Internet scenario. However, the secret-sharing strategy adopted in this scheme is vulnerable to collusion attacks, which may lead to security problems such as information tampering and privacy disclosure. By establishing a probability model, this article quantitatively analyzes the relationship between the number of malicious colluding nodes and the probability of key exposure, which demonstrates the existence of this security problem. To address the collusion attack in Huang et al.'s scheme, this article proposes an anti-collusion defense method, which minimizes the influence of collusion attacks on key security by optimizing parameters including the number of intermediate forwarding nodes, the number of random forwardings, the number of time-delay measurements, and the out-of-control rate of forwarding nodes. Finally, based on a game model, we prove that the proposed defense method can reduce the risk of key leakage to zero in both the "Careless Defender" and "Cautious Defender" scenarios.
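The relationship between the number of malicious nodes and key-exposure probability can be illustrated with a toy probability model (not the paper's exact formulation). Assuming the key is reconstructible whenever every randomly selected relay node is malicious, the exposure probability is hypergeometric in the number of malicious nodes:

```python
from math import comb

def exposure_probability(n_nodes, n_malicious, n_relays):
    """Probability that every randomly chosen relay is malicious, in
    which case colluders can reconstruct the shared key.
    Illustrative model only; the paper's analysis optimizes additional
    parameters (forwarding rounds, delay measurements, etc.)."""
    if n_malicious < n_relays:
        return 0.0  # not enough colluders to cover every relay slot
    return comb(n_malicious, n_relays) / comb(n_nodes, n_relays)

# Exposure risk grows sharply with the number of colluding nodes.
for k in (2, 5, 8):
    print(k, exposure_probability(10, k, 3))
```

Increasing the number of relays (`n_relays` here) drives this probability down, which matches the intuition behind the defense's parameter optimization.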

3.
PeerJ Comput Sci ; 7: e494, 2021.
Article in English | MEDLINE | ID: mdl-33977134

ABSTRACT

Family homology determination of malware has become a research hotspot as the number of malware variants rises. However, existing studies on malware visualization determine homology only from the global structural features of the executable, which allows malware creators to deliberately craft variants with the same global structure so that they are misclassified as belonging to the same family. We sought to develop a homology determination method that fuses global structural features with local fine-grained features based on malware visualization. Specifically, the global structural information of the malware executable file is converted into a bytecode image, and the opcode semantic information of the code segment is extracted with an n-gram feature model to generate an opcode image. We also propose a dual-branch convolutional neural network that uses the opcode image and bytecode image as the final basis for family classification. Our results demonstrate that the accuracy and F-measure of family homology classification under the proposed scheme are 99.05% and 98.52%, respectively, which is better than the results from a single image feature or other mainstream schemes.
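The two feature branches described above can be sketched as follows. This is a minimal illustration of the general techniques (bytes-to-grayscale-image conversion and opcode n-gram counting), not the paper's pipeline; the image width and the sample opcodes are arbitrary choices.

```python
from collections import Counter

import numpy as np

def bytes_to_image(data: bytes, width: int = 16) -> np.ndarray:
    """Global-structure feature: reshape raw executable bytes into a
    grayscale image (rows of `width` pixels), zero-padding the tail."""
    buf = np.frombuffer(data, dtype=np.uint8)
    rows = -(-len(buf) // width)  # ceiling division
    img = np.zeros(rows * width, dtype=np.uint8)
    img[:len(buf)] = buf
    return img.reshape(rows, width)

def opcode_ngrams(opcodes, n=2):
    """Local fine-grained feature: frequencies of opcode n-grams,
    usable to build an opcode image or feature vector."""
    return Counter(tuple(opcodes[i:i + n]) for i in range(len(opcodes) - n + 1))

img = bytes_to_image(b"MZ\x90\x00" * 8)
print(img.shape)  # (2, 16)
grams = opcode_ngrams(["push", "mov", "call", "mov", "call"])
print(grams[("mov", "call")])  # 2
```

A dual-branch CNN would then consume the bytecode image in one branch and an image rendered from the n-gram statistics in the other before a joint classification head.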

4.
Sensors (Basel) ; 20(18)2020 Sep 21.
Article in English | MEDLINE | ID: mdl-32967069

ABSTRACT

Depth estimation from a single image is a classic problem in computer vision and is important for 3D scene reconstruction, augmented reality, and object detection. At present, most researchers are focusing on unsupervised monocular depth estimation. This paper proposes solutions to two problems in current depth estimation. First, we propose a monocular depth estimation method based on uncertainty analysis, which addresses the problem that a neural network has strong expressive ability but cannot evaluate the reliability of its output. Second, we propose a photometric loss function based on the Retinex algorithm, which mitigates the pixel-drift artifacts caused by moving objects. We objectively compare our method with current mainstream monocular depth estimation methods and obtain satisfactory results.
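The Retinex intuition behind such a photometric loss can be sketched in a simplified form: decompose each image into illumination (a smooth local estimate) and reflectance (the image divided by it), and compare reflectance rather than raw intensities, so that illumination changes contribute little to the loss. This is an assumption-laden toy version, not the paper's loss; the box-filter illumination estimate and the L1 comparison are illustrative choices.

```python
import numpy as np

def illumination(img, k=5):
    """Rough illumination estimate: a k x k box filter (a stand-in for
    the smoothing used in Retinex-style decompositions)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def retinex_photometric_loss(img_a, img_b, eps=1e-6):
    """Compare reflectance (image / illumination) instead of raw
    intensities, so a global brightness change costs almost nothing."""
    ra = img_a / (illumination(img_a) + eps)
    rb = img_b / (illumination(img_b) + eps)
    return float(np.abs(ra - rb).mean())

rng = np.random.default_rng(0)
img = rng.random((32, 32))
brighter = img * 1.5  # same scene under stronger illumination
print(retinex_photometric_loss(img, brighter) < np.abs(img - brighter).mean())
```

Because a uniform brightness change scales the illumination estimate by the same factor, the reflectance ratio is nearly unchanged, which is the property that makes such a loss more robust than a raw photometric difference.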
