
F. Carrara, R. Becarelli, R. Caldelli, F. Falchi, G. Amato: "Adversarial Examples Detection in Features Distance Spaces". In Computer Vision – ECCV 2018 Workshops (Vol. 2, pp. 313–327). Springer, Cham.


Abstract:

Maliciously manipulated inputs for attacking machine learning methods, in particular deep neural networks, are emerging as a relevant issue for the security of recent artificial intelligence technologies, especially in computer vision. In this paper, we focus on attacks targeting image classifiers implemented with deep neural networks, and we propose a method for detecting adversarial images that analyzes the trajectory of internal representations (i.e., the hidden layers' neuron activations, also known as deep features) from the very first layer up to the last. We argue that the representations of adversarial inputs follow a different evolution with respect to genuine inputs, and we define a distance-based embedding of features to efficiently encode this information. We train an LSTM network that analyzes the sequence of deep features embedded in a distance space to detect adversarial examples. The results of our preliminary experiments are encouraging: our detection scheme is able to detect adversarial inputs targeted to the ResNet-50 classifier pretrained on the ILSVRC’12 dataset and generated by a variety of crafting algorithms.

Keywords: adversarial examples; distance spaces; deep features; machine learning security

File: http://openaccess.thecvf.com/content_eccv_2018_workshops/w10/html/Carrara_Adversarial_examples_detection_in_features_distance_spaces_ECCVW_2018_paper.html
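The abstract describes embedding each layer's deep features into a distance space before feeding the resulting per-layer sequence to an LSTM detector. Below is a minimal NumPy sketch of one plausible reading of that embedding step, assuming Euclidean distances to per-class pivot features (e.g., class centroids); the function name `distance_embedding` and the centroid choice are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def distance_embedding(layer_feats, class_pivots):
    """Embed an input's activations into a distance space, layer by layer.

    layer_feats: list of 1-D arrays, one activation vector per hidden layer
                 (dimensionality may differ across layers).
    class_pivots: list of 2-D arrays, one per layer, of shape
                  (num_classes, layer_dim): a pivot feature per class,
                  e.g. the centroid of genuine training examples.

    Returns an array of shape (num_layers, num_classes): the sequence of
    distance vectors that would be fed to a sequence model such as an LSTM.
    """
    sequence = []
    for feats, pivots in zip(layer_feats, class_pivots):
        # Euclidean distance from this layer's activation to each class pivot
        distances = np.linalg.norm(pivots - feats, axis=1)
        sequence.append(distances)
    return np.stack(sequence)

# Toy usage: two layers of different widths, ten classes
rng = np.random.default_rng(0)
layer_feats = [rng.normal(size=16), rng.normal(size=32)]
class_pivots = [rng.normal(size=(10, 16)), rng.normal(size=(10, 32))]
embedding = distance_embedding(layer_feats, class_pivots)
print(embedding.shape)  # (2, 10): one distance vector per layer
```

The intuition from the paper is that, across layers, genuine inputs trace a characteristic trajectory in this distance space while adversarial inputs drift differently, which is what the downstream LSTM is trained to discriminate.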