Experiments on Adversarial Examples for Deep Learning Model Using Multimodal Sensors.
Sensors (Basel); 22(22), 2022 Nov 09.
Article in En | MEDLINE | ID: mdl-36433250
Recently, artificial intelligence (AI) based on IoT sensors has been widely adopted, which has increased the risk of attacks targeting AI. Adversarial examples are among the most serious types of attack, in which the attacker designs inputs that cause a machine learning system to generate incorrect outputs. In an architecture that uses multiple sensor devices, hacking even a few sensors can create a significant risk, because an attacker can attack the machine learning model through the hacked sensors. Some studies have demonstrated the possibility of adversarial examples against deep neural network (DNN) models based on IoT sensors, but they assumed that the attacker has access to all features. The impact of hacking only a few sensors has not been discussed thus far. Therefore, in this study, we discuss the possibility of attacks on DNN models that hack only a small number of sensors. In this scenario, the attacker first hacks a few sensors in the system, obtains their values, and alters them to manipulate the system, but cannot obtain or change the values of the other sensors. We perform experiments using a human activity recognition model with three sensor devices attached to the chest, wrist, and ankle of a user, and demonstrate that attacks are possible by hacking only a small number of sensors.
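The constraint described in the abstract, namely perturbing only the features produced by the compromised sensors, can be illustrated with a feature-masked gradient-sign attack. The sketch below is not the authors' implementation; it assumes a PyTorch classifier that takes a flat feature vector built from the chest, wrist, and ankle sensor readings, and uses a hypothetical binary mask marking the features the attacker controls.

```python
# Minimal sketch (assumed setup, not the paper's code): FGSM-style attack
# restricted to features from "hacked" sensors in a multimodal feature vector.
import torch
import torch.nn.functional as F

def masked_fgsm(model, x, y, sensor_mask, epsilon=0.1):
    """Perturb only the features the attacker controls (sensor_mask == 1)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Gradient-sign step, zeroed on features of sensors the attacker cannot access.
    perturbation = epsilon * x_adv.grad.sign() * sensor_mask
    return (x_adv + perturbation).detach()

# Hypothetical usage: suppose features 0-49 come from the hacked wrist sensor,
# while the chest and ankle features (50-149) remain untouched.
# mask = torch.zeros(1, 150); mask[:, :50] = 1.0
# x_adv = masked_fgsm(model, x, y, mask)
```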
Keywords:
Full text: 1
Collections: 01-internacional
Database: MEDLINE
Main subject: Artificial Intelligence / Deep Learning
Limits: Humans
Language: En
Journal: Sensors (Basel)
Publication year: 2022
Document type: Article
Country of affiliation: Japan