Resilience of Bayesian Layer-Wise Explanations under Adversarial Attacks / Carbone, G.; Bortolussi, L.; Sanguinetti, G. - 2022 (2022), pp. 1-8. (Paper presented at the 2022 International Joint Conference on Neural Networks, IJCNN 2022, held in Padua, Italy, 18-23 July 2022) [10.1109/IJCNN55064.2022.9892788].

Resilience of Bayesian Layer-Wise Explanations under Adversarial Attacks

Carbone G.; Sanguinetti G.
2022-01-01

Abstract

We consider the problem of the stability of saliency-based explanations of Neural Network predictions under adversarial attacks in a classification task. Saliency interpretations of deterministic Neural Networks are remarkably brittle even when the attacks fail, i.e., when the perturbation does not change the predicted label. We empirically show that interpretations provided by Bayesian Neural Networks are considerably more stable under adversarial perturbations of the inputs, and even under direct attacks on the explanations. Leveraging recent results, we also provide a theoretical account of this behaviour in terms of the geometry of the data manifold. Additionally, we discuss the stability of interpretations of the high-level representations of the inputs computed in the internal layers of the network. Our results demonstrate that Bayesian methods, in addition to being more robust to adversarial attacks, have the potential to provide more stable and interpretable assessments of Neural Network predictions.
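To make the object of study concrete, the sketch below computes a vanilla-gradient saliency map for a toy PyTorch classifier and checks how much it changes after a small FGSM-style perturbation of the input. This is a minimal illustration, not the paper's setup: the network, input size, attack step, and the cosine-similarity stability measure are all assumptions made here for clarity; for a Bayesian Neural Network one would additionally average the saliency maps over samples of the posterior weights.

# Minimal illustrative sketch (not the paper's code): vanilla-gradient saliency
# for a toy classifier, and a rough check of how much the explanation changes
# under a small FGSM-style input perturbation.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy deterministic classifier standing in for a trained network.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()


def saliency(net, x):
    """Absolute gradient of the top-class score w.r.t. the input (vanilla saliency)."""
    x = x.clone().detach().requires_grad_(True)
    scores = net(x)
    top = scores.argmax(dim=1, keepdim=True)
    scores.gather(1, top).sum().backward()
    return x.grad.detach().abs()


x = torch.rand(1, 1, 28, 28)          # placeholder input image
s_clean = saliency(model, x)

# FGSM-style perturbation of the input, targeting the prediction (not the explanation).
x_adv = x.clone().detach().requires_grad_(True)
logits = model(x_adv)
loss = F.cross_entropy(logits, logits.argmax(dim=1))
loss.backward()
x_adv = (x_adv + 0.05 * x_adv.grad.sign()).clamp(0, 1).detach()

s_adv = saliency(model, x_adv)

# Stability of the explanation: cosine similarity between clean and perturbed saliency maps.
cos = F.cosine_similarity(s_clean.flatten(), s_adv.flatten(), dim=0)
print(f"Saliency cosine similarity (clean vs. perturbed): {cos.item():.3f}")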
Year: 2022
Published in: Proceedings of the International Joint Conference on Neural Networks
Volume: 2022
Pages: 1-8
ISBN: 978-1-7281-8671-9
Preprint: https://arxiv.org/abs/2102.11010
Publisher: Institute of Electrical and Electronics Engineers Inc.
Authors: Carbone, G.; Bortolussi, L.; Sanguinetti, G.
Files in this record:
Carbone_etal.pdf (open access)
Description: preprint
Type: pre-print document
License: not specified
Format: Adobe PDF
Size: 1.64 MB


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11767/132270
Citations
  • Scopus: 2
  • Web of Science (ISI): 0