Results 1 - 2 of 2
1.
Sensors (Basel); 23(23), 2023 Nov 26.
Article in English | MEDLINE | ID: mdl-38067790

ABSTRACT

In recent years, the number and sophistication of malware attacks on computer systems have increased significantly. One technique employed by malware authors to evade detection and analysis, known as Heaven's Gate, enables 64-bit code to run within a 32-bit process. Heaven's Gate exploits an operating-system feature that allows a transition from 32-bit mode to 64-bit mode during execution, letting the malware evade security software designed to monitor only 32-bit processes. The technique poses significant challenges for existing security tools, including dynamic binary instrumentation (DBI) tools, which are widely used for program analysis, unpacking, and de-virtualization. In this paper, we provide a comprehensive analysis of the Heaven's Gate technique and propose a novel approach to bypassing it using black-box testing. Our experimental results show that the proposed approach effectively bypasses and prevents the Heaven's Gate technique, strengthening the capabilities of DBI tools in combating advanced malware threats.
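As background for how such a transition looks in practice (a minimal illustrative sketch, not the detection or bypass method proposed in the paper): under WOW64, 32-bit code executes in code segment 0x23, and Heaven's Gate switches to 64-bit mode via a far control transfer to segment selector 0x33. One simple static heuristic is therefore to scan 32-bit code for the far-transfer opcodes 0xEA (jmp far ptr16:32) and 0x9A (call far ptr16:32) whose selector operand is 0x33; the function name and sample bytes below are hypothetical.

HEAVENS_GATE_SELECTOR = 0x33  # 64-bit code segment selector under WOW64

def find_heavens_gate_candidates(code: bytes) -> list:
    """Return offsets of far jmp/call (ptr16:32) instructions targeting 0x33.

    Note: this misses retf-based gates (push 0x33 ... retf) and assumes the
    buffer is straight-line code, so it is only a rough first-pass filter.
    """
    hits = []
    for i in range(len(code) - 6):
        # jmp far (0xEA) / call far (0x9A): 4-byte offset, then 2-byte selector
        if code[i] in (0xEA, 0x9A):
            selector = int.from_bytes(code[i + 5:i + 7], "little")
            if selector == HEAVENS_GATE_SELECTOR:
                hits.append(i)
    return hits

# Hypothetical example: EA 00 10 40 00 33 00 is "jmp far 0x33:0x00401000".
sample = bytes.fromhex("90 90 EA 00 10 40 00 33 00 90")
print(find_heavens_gate_candidates(sample))  # -> [2]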

2.
Sensors (Basel); 23(4), 2023 Feb 10.
Article in English | MEDLINE | ID: mdl-36850576

ABSTRACT

Data are needed to train machine learning (ML) algorithms, and these data often include private datasets containing sensitive information. To preserve the privacy of training data, computer scientists have widely deployed anonymization techniques, but anonymization is not foolproof: many studies have shown that ML models trained with anonymization techniques remain vulnerable to privacy attacks that aim to expose sensitive information. As a privacy-preserving machine learning (PPML) technique that protects sensitive data in ML, we propose a new task-specific adaptive differential privacy (DP) technique for structured data. The main idea of the proposed method is to adaptively calibrate the amount and distribution of random noise applied to each attribute according to its feature importance for the specific ML task and the type of data. We evaluate the effectiveness of the proposed task-specific adaptive DP method through experiments spanning various datasets, ML tasks, and DP mechanisms. The results show that the proposed technique is model-agnostic, so it can be applied to a wide range of ML tasks and data types, while addressing the privacy-utility trade-off problem.
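To make the calibration idea concrete, the following is a hedged sketch of one plausible instantiation, not the authors' exact method: a total privacy budget epsilon is split across attributes in proportion to their normalized feature importance, so task-relevant columns receive a larger per-attribute budget and therefore less Laplace noise. The function name, the proportional budget-allocation rule, and the per-column sensitivities are illustrative assumptions.

import numpy as np

def adaptive_laplace_noise(X, importances, sensitivities, total_epsilon, rng=None):
    """Perturb each column of X with Laplace noise; columns with higher
    feature importance get a larger epsilon share and hence less noise."""
    if rng is None:
        rng = np.random.default_rng()
    # Assumed allocation rule: per-attribute budget proportional to importance.
    eps_per_attr = total_epsilon * importances / importances.sum()
    # Laplace mechanism: scale b = sensitivity / epsilon (standard DP result).
    scales = sensitivities / eps_per_attr
    return X + rng.laplace(loc=0.0, scale=scales, size=X.shape)

# Toy usage: three features; the most important one receives the least noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
importances = np.array([0.6, 0.3, 0.1])  # e.g., from a trained model
sensitivities = np.ones(3)               # assumed per-column L1 sensitivity
X_private = adaptive_laplace_noise(X, importances, sensitivities,
                                   total_epsilon=1.0, rng=rng)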
