Alignment Rationale for Natural Language Inference
Zhongtao Jiang 1,2; Yuanzhe Zhang 1,2; Zhao Yang 1,2; Jun Zhao 1,2; Kang Liu 1,2
Conference Date: 2021-08-01
Conference Venue: Online
Abstract

Deep learning models have achieved great success on the task of Natural Language Inference (NLI), yet only a few attempts have been made to explain their behavior. Existing explanation methods usually pick prominent features such as words or phrases from the input text. However, for NLI, alignments between words or phrases are more informative clues for explaining the model. To this end, this paper presents AREC, a post-hoc approach that generates alignment rationale explanations for co-attention based models in NLI. The explanation is based on feature selection, which keeps a small but sufficient set of alignments while preserving the target model's prediction. Experimental results show that our method is more faithful and human-readable than many existing approaches. Using our explanations, we further study and re-evaluate three typical models beyond accuracy, and propose a simple method that greatly improves model robustness.
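To make the "small but sufficient set of alignments" idea concrete, below is a minimal sketch, not AREC's actual algorithm: a greedy feature-selection loop that drops premise-hypothesis alignment cells weakest-first and keeps a cell only when removing it would change the model's prediction. All names here (toy_model, sufficient_alignments) and the thresholded toy classifier are hypothetical stand-ins for a real co-attention NLI model.

import numpy as np

def toy_model(attention, mask):
    # Hypothetical stand-in for a co-attention NLI model: predicts
    # "entailment" iff the retained attention mass exceeds a threshold.
    return "entailment" if (attention * mask).sum() > 1.0 else "neutral"

def sufficient_alignments(attention):
    # Greedy selection: visit alignment cells weakest-first, drop each
    # one, and restore it only if dropping it flips the prediction made
    # with the full alignment matrix (the sufficiency constraint).
    full_mask = np.ones_like(attention)
    target = toy_model(attention, full_mask)
    mask = full_mask.copy()
    order = np.argsort(attention, axis=None)  # ascending attention weight
    for flat_idx in order:
        idx = np.unravel_index(flat_idx, attention.shape)
        mask[idx] = 0.0
        if toy_model(attention, mask) != target:
            mask[idx] = 1.0  # this alignment is necessary; keep it
    return mask, target

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    attn = rng.random((4, 5))  # premise tokens x hypothesis tokens
    rationale, pred = sufficient_alignments(attn)
    print("prediction:", pred)
    print("kept alignments:\n", rationale.astype(int))

A greedy pass like this is only one simple way to instantiate the selection objective; the paper's actual optimization may differ, but the sufficiency check, namely that the prediction must stay unchanged after pruning alignments, is the property the abstract describes.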

Language: English
Content Type: Conference Paper
Source URL: http://ir.ia.ac.cn/handle/173211/57261
Collection: Laboratory of Cognition and Decision Intelligence for Complex Systems
Author Affiliations:
1. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
2. National Laboratory of Pattern Recognition, Institute of Automation, CAS, Beijing, China
Recommended Citation (GB/T 7714):
Zhongtao Jiang, Yuanzhe Zhang, Zhao Yang, et al. Alignment Rationale for Natural Language Inference[C]. Online, 2021-08-01.