This web demo, developed with the support of the European Union's ALOHA project under the Horizon 2020 Research and Innovation programme, allows the user to evaluate the security level of a neural network against worst-case input perturbation. Attackers add this specifically crafted perturbation to create adversarial examples and perform evasion attacks: feeding these examples to the network causes it to misclassify them. To defend a system, we first need to evaluate the effectiveness of such attacks. During the security evaluation process, the network is tested against increasing levels of perturbation and its accuracy is tracked in order to build a security evaluation curve. This curve, showing the drop in accuracy as the maximum perturbation allowed on the input grows, can be used directly by the model designer to compare different networks and countermeasures.
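As a rough illustration of how such a curve can be computed, the sketch below evaluates a classifier's accuracy under increasing perturbation budgets. It is only a minimal example, assuming a PyTorch model and an FGSM-style worst-case perturbation; the tiny linear model and random inputs are hypothetical stand-ins for a real network and test set, and the demo itself may use different attacks and tooling.

```python
# Minimal sketch of a security evaluation curve (assumption: PyTorch,
# FGSM-style L-infinity perturbation, inputs normalized to [0, 1]).
import torch
import torch.nn as nn
import torch.nn.functional as F


def fgsm_perturb(model, x, y, eps):
    """Perturb x by eps in the direction that maximizes the loss (worst case)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()


def security_evaluation(model, x, y, eps_values):
    """Return (eps, accuracy) pairs for each maximum perturbation level."""
    model.eval()
    curve = []
    for eps in eps_values:
        x_adv = x if eps == 0 else fgsm_perturb(model, x, y, eps)
        with torch.no_grad():
            acc = (model(x_adv).argmax(dim=1) == y).float().mean().item()
        curve.append((eps, acc))
    return curve


if __name__ == "__main__":
    # Hypothetical stand-ins: a tiny linear classifier and random "images".
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(64, 1, 28, 28)
    y = torch.randint(0, 10, (64,))
    for eps, acc in security_evaluation(model, x, y, [0.0, 0.05, 0.1, 0.2, 0.3]):
        print(f"max perturbation {eps:.2f} -> accuracy {acc:.2%}")
```

Plotting accuracy against the maximum perturbation yields the security evaluation curve described above: flatter curves indicate more robust models or more effective countermeasures.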

Read more and try the demo...

Contacts

Project Coordinator
Giuseppe Desoli - STMicroelectronics
giuseppe(dot)desoli(at)st(dot)com

Scientific Coordinator
Paolo Meloni - University of Cagliari, EOLAB
paolo(dot)meloni(at)diee(dot)unica(dot)it

Dissemination Manager
Francesca Palumbo - University of Sassari, IDEA Lab
fpalumbo(at)uniss(dot)it
