Provably Robust and Plausible Counterfactual Explanations for Neural Networks via Robust Optimisation

Jiang, J., Lan, J., Leofante, F., Rago, A. and Toni, F. (2024) Provably Robust and Plausible Counterfactual Explanations for Neural Networks via Robust Optimisation. In: 15th Asian Conference on Machine Learning (ACML 2023), Istanbul, Turkey, 11-14 Nov 2023, pp. 582-597.

308490.pdf - Accepted Version (458kB)

Publisher's URL: https://proceedings.mlr.press/v222/jiang24a.html

Abstract

Counterfactual Explanations (CEs) have received increasing interest as a major methodology for explaining neural network classifiers. Usually, CEs for an input-output pair are defined as data points with minimum distance to the input that are classified with a different label than the output. To tackle the established problem that CEs are easily invalidated when model parameters are updated (e.g. retrained), studies have proposed ways to certify the robustness of CEs under model parameter changes bounded by a norm ball. However, existing methods targeting this form of robustness are neither sound nor complete, and they may generate implausible CEs, i.e., outliers with respect to the training dataset. In fact, no existing method simultaneously optimises for closeness and plausibility while preserving robustness guarantees. In this work, we propose Provably RObust and PLAusible Counterfactual Explanations (PROPLACE), a method leveraging robust optimisation techniques to address the aforementioned limitations in the literature. We formulate an iterative algorithm to compute provably robust CEs and prove its convergence, soundness and completeness. Through a comparative experiment involving six baselines, five of which target robustness, we show that PROPLACE achieves state-of-the-art performance on metrics covering three evaluation aspects.
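To make the robustness notion in the abstract concrete, the sketch below is a minimal, hypothetical illustration (not the PROPLACE algorithm): for a toy linear classifier of our own choosing, it searches for a counterfactual that stays validly classified under every parameter perturbation bounded by an L-infinity norm ball of radius delta. All function names and parameters here are assumptions for illustration, and plausibility with respect to a training dataset is deliberately omitted.

```python
import numpy as np

# Toy illustration of norm-ball robustness for CEs (NOT the PROPLACE method).
# For a linear classifier f(x) = sign(w @ x + b), a counterfactual x' for an
# input classified negative is delta-robust if it remains positively classified
# for every parameter shift with L-infinity norm at most delta. For a linear
# model the worst case has a closed form:
#   min_{|dw_i|<=delta, |db|<=delta} (w + dw) @ x' + (b + db)
#     = w @ x' + b - delta * (sum(|x'_i|) + 1)

def worst_case_logit(w, b, x, delta):
    """Smallest logit over all parameter perturbations with L-inf norm <= delta."""
    return w @ x + b - delta * (np.sum(np.abs(x)) + 1.0)

def robust_counterfactual(w, b, x, delta, step=0.05, max_iter=1000):
    """Greedily move x along w until the worst-case logit becomes positive,
    i.e. the counterfactual is valid for every bounded parameter change."""
    x_cf = x.copy()
    direction = w / np.linalg.norm(w)   # direction of steepest logit increase
    for _ in range(max_iter):
        if worst_case_logit(w, b, x_cf, delta) > 0:
            return x_cf                 # robust for this toy linear model
        x_cf = x_cf + step * direction
    return None                         # no robust CE found within the budget

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=4), -1.0
    x = np.zeros(4)                     # classified negative: logit = b < 0
    x_cf = robust_counterfactual(w, b, x, delta=0.05)
    print("counterfactual:", x_cf)
    print("distance to input:", np.linalg.norm(x_cf - x))
    print("worst-case logit:", worst_case_logit(w, b, x_cf, delta=0.05))
```

For neural networks the worst case over the norm ball has no such closed form, which is why the paper resorts to robust optimisation with soundness and completeness guarantees; the sketch only conveys the shape of the robustness requirement.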

Item Type: Conference Proceedings
Additional Information: Jiang, Rago and Toni were partially funded by J.P. Morgan and by the Royal Academy of Engineering under the Research Chairs and Senior Research Fellowships scheme. Jianglin Lan is supported by a Leverhulme Trust Early Career Fellowship under Award ECF-2021-517. Leofante is supported by an Imperial College Research Fellowship grant. Rago and Toni were partially funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 101020934).
Keywords: Explainable AI, counterfactual explanations, robustness of explanations.
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Lan, Dr Jianglin
Authors: Jiang, J., Lan, J., Leofante, F., Rago, A., and Toni, F.
College/School: College of Science and Engineering > School of Engineering > Autonomous Systems and Connectivity
ISSN: 2640-3498
Copyright Holders: Copyright © 2023 J. Jiang, J. Lan, F. Leofante, A. Rago & F. Toni.
First Published: First published in Proceedings of Machine Learning Research 222: 582-597
Publisher Policy: Reproduced in accordance with the publisher copyright policy


Project Code: 314249
Award No:
Project Name: Decarbonising Machine Learning for Safe and Robust Autonomous Systems
Principal Investigator: Jianglin Lan
Funder's Name: Leverhulme Trust (LEVERHUL)
Funder Ref: ECF-2021-517
Lead Dept: ENG - Autonomous Systems & Connectivity