Understanding Memory Modules on Learning Simple Algorithms
Wang, Kexin (1,3); Zhou, Yu (1,3); Wang, Shaonan (1,3); Zhang, Jiajun (1,3); Zong, Chengqing (1,2,3)
2019-08
Conference date: 2019-8-11
Conference venue: Macau, China
Abstract

Recent work has shown that memory modules are crucial for the generalization ability of neural networks learning simple algorithms. However, we still have little understanding of how memory modules work. To alleviate this problem, we apply a two-step analysis pipeline: we first infer a hypothesis about what strategy the model has learned from visualization, and then verify it with a newly proposed qualitative analysis method based on dimension reduction. Using this method, we analyze two popular memory-augmented neural networks, the neural Turing machine and the stack-augmented neural network, on two simple algorithmic tasks: reversing a random sequence and evaluating arithmetic expressions. Results show that on the former task both models learn to generalize, while on the latter task only the stack-augmented model does so. We show that the models learn different strategies, in which specific categories of input are monitored and, based on them, different policies are applied to change the memory.
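The dimension-reduction step described above can be illustrated with a minimal sketch (the function names and the randomly generated "memory states" here are hypothetical, not the authors' code): record the model's memory states over a sequence, project them onto their top principal components via SVD, and inspect whether the low-dimensional view clusters by input category.

```python
import numpy as np

def pca_project(states, k=2):
    """Project recorded memory states (T x D) onto the top-k principal components."""
    X = states - states.mean(axis=0)              # center each memory dimension
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T                           # T x k low-dimensional view

# Hypothetical stand-in for memory states recorded while reversing a sequence:
# T timesteps, each with a D-dimensional memory read-out.
rng = np.random.default_rng(0)
T, D = 20, 64
states = rng.normal(size=(T, D))

proj = pca_project(states, k=2)
print(proj.shape)  # one 2-D point per timestep, ready for a scatter plot
```

In an actual analysis, each projected point would be colored by the category of the input token at that timestep, so that a learned strategy (e.g., treating operands and operators differently) shows up as separated clusters.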

Language: English
Content type: Conference paper
Source URL: http://ir.ia.ac.cn/handle/173211/38557
Subject area: National Laboratory of Pattern Recognition / Natural Language Processing
Corresponding author: Wang, Kexin
Author affiliations:
1. National Laboratory of Pattern Recognition, CASIA, Beijing, China
2. CAS Center for Excellence in Brain Science and Intelligence Technology, Beijing, China
3. University of Chinese Academy of Sciences, Beijing, China
Recommended citation (GB/T 7714):
Wang, Kexin, Zhou, Yu, Wang, Shaonan, et al. Understanding Memory Modules on Learning Simple Algorithms[C]. Macau, China, 2019-8-11.
 

Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.


© 2017 CSpace - Powered by CSpace