Can We Really Trust Explanations? Evaluating the Stability of Feature Attribution Explanation Methods via Adversarial Attack
Zhao Yang (1,2); Yuanzhe Zhang (1,2); Zhongtao Jiang (1,2); Yiming Ju (1,2)
2022
Conference date: 2022-10
Conference venue: Nanchang
Abstract

Explanations can increase the transparency of neural networks and make them more trustworthy. However, can we really trust the explanations generated by existing explanation methods? If an explanation method is not stable, the credibility of its explanations is greatly reduced. Previous studies have seldom considered this important issue. To this end, this paper proposes a new evaluation framework that assesses the stability of typical feature attribution explanation methods via textual adversarial attack. Our framework generates adversarial examples that preserve textual semantics: the original models produce the same outputs on these examples, yet most current explanation methods derive completely different explanations. Under this framework, we test five classical explanation methods and report their performance on several stability-related metrics. Experimental results show that our evaluation is effective and reveals the stability of existing explanation methods.
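
As a rough illustration of the kind of stability check the abstract describes (this is not the paper's actual framework; the helper topk_overlap and the toy attribution vectors below are hypothetical), one could compare the feature-attribution ranking for an original input with that for a semantics-preserving adversarial input that receives the same prediction, e.g. via rank correlation and top-k overlap:

```python
# Hypothetical sketch: comparing feature attributions before and after a
# label-preserving adversarial perturbation. The attribution vectors are toy
# placeholders; in practice they would come from an explanation method applied
# to the original and adversarial inputs that receive the same model prediction.
from scipy.stats import spearmanr

def topk_overlap(a, b, k=3):
    """Fraction of the top-k most important token positions shared by two attributions."""
    top_a = set(sorted(range(len(a)), key=lambda i: -a[i])[:k])
    top_b = set(sorted(range(len(b)), key=lambda i: -b[i])[:k])
    return len(top_a & top_b) / k

# Toy attribution scores over the same token positions.
orig_attr = [0.42, 0.05, 0.31, 0.10, 0.12]   # explanation for the original input
adv_attr  = [0.08, 0.40, 0.07, 0.35, 0.10]   # explanation for the adversarial input

rho, _ = spearmanr(orig_attr, adv_attr)      # rank correlation of importance orderings
print(f"Spearman rank correlation: {rho:.2f}")
print(f"Top-3 overlap: {topk_overlap(orig_attr, adv_attr):.2f}")
```

A stable explanation method would keep both quantities high whenever the model's output is unchanged; the metrics and attack procedure actually used in the paper are described in the full text.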

Content type: Conference paper
Source URL: [http://ir.ia.ac.cn/handle/173211/56725]
Collection: 复杂系统认知与决策实验室
Author affiliations:
1. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
2. National Laboratory of Pattern Recognition, Institute of Automation, CAS, Beijing, China
Recommended citation (GB/T 7714):
Zhao Yang, Yuanzhe Zhang, Zhongtao Jiang, et al. Can We Really Trust Explanations? Evaluating the Stability of Feature Attribution Explanation Methods via Adversarial Attack[C]. Nanchang, 2022-10.