This web demo, developed with the support of the European Union's ALOHA project (Horizon 2020 Research and Innovation programme), allows the user to evaluate the security of a neural network against worst-case input perturbations [1]. Attackers add such carefully crafted perturbations to create adversarial examples and perform evasion attacks, feeding them to the network to cause misclassification [2]. To defend a system, we first need to evaluate the effectiveness of these attacks. During the security evaluation process, the network is tested against increasing levels of perturbation and its accuracy is tracked in order to build a security evaluation curve. This curve, showing the drop in accuracy as a function of the maximum perturbation allowed on the input, can be used directly by the model designer to compare different networks and countermeasures. The user can also generate and inspect adversarial examples and see how the perturbation affects the outputs of the network.
For further details on different attack algorithms and defense methods, we refer the reader to Biggio and Roli [3].
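To make the procedure concrete, the following is a minimal sketch of how such a security evaluation curve could be computed. It assumes a PyTorch image classifier and uses the one-step Fast Gradient Sign Method (FGSM) as the attack; the names model, loader, and eps_values are placeholders, and the actual demo may rely on different attacks and tooling.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps):
    # One-step Fast Gradient Sign Method: worst-case perturbation of size eps
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def security_evaluation_curve(model, loader, eps_values):
    # For each maximum perturbation eps, measure accuracy on adversarial examples
    model.eval()
    accuracies = []
    for eps in eps_values:
        correct, total = 0, 0
        for x, y in loader:
            x_adv = fgsm_perturb(model, x, y, eps)
            with torch.no_grad():
                preds = model(x_adv).argmax(dim=1)
            correct += (preds == y).sum().item()
            total += y.numel()
        accuracies.append(correct / total)
    # Plotting accuracies against eps_values gives the security evaluation curve
    return accuracies

Plotting the returned accuracies against eps_values reproduces the curve described above: a flatter curve indicates a model that is more robust to the chosen attack.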

[1] C. Szegedy et al., "Intriguing Properties of Neural Networks", ICLR 2014.
[2] B. Biggio et al., "Evasion Attacks against Machine Learning at Test Time", ECML PKDD 2013.
[3] B. Biggio and F. Roli, "Wild Patterns: Ten Years after the Rise of Adversarial Machine Learning", Pattern Recognition, 2018.


Secure ML Demo - Deep Learning security, powered by Pluribus One.

(The image is a preview: click on it to go to the ALOHA demo webpage.)


Contacts

Project Coordinator
Giuseppe Desoli - STMicroelectronics
giuseppe(dot)desoli(at)st(dot)com

Scientific Coordinator
Paolo Meloni - University of Cagliari, EOLAB
paolo(dot)meloni(at)diee(dot)unica(dot)it

Dissemination Manager
Francesca Palumbo - University of Sassari, IDEA Lab
fpalumbo(at)uniss(dot)it
