Memory, Show the Way: Memory Based Few Shot Word Representation Learning
Jingyuan Sun1,2; Shaonan Wang1,2; Jiajun Zhang1,2; Chengqing Zong1,2,3
2018-10
Conference date: 2018.10
Conference venue: Brussels
Abstract

Distributional semantic models (DSMs) generally require sufficient examples for a word to learn a high-quality representation. This is in stark contrast with humans, who can guess the meaning of a word from only one or a few referents. In this paper, we propose Mem2Vec, a memory-based embedding learning method capable of acquiring high-quality word representations from fairly limited context. Our method directly adapts the representations produced by a DSM with a long-term memory to guide its guess of a novel word. Based on a pre-trained embedding space, the proposed method delivers impressive performance on two challenging few-shot word similarity tasks. Embeddings learned with our method also lead to considerable improvements over strong baselines on NER and sentiment classification.
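
To make the abstract's idea concrete, the sketch below illustrates one plausible reading of memory-based few-shot embedding estimation: a long-term memory stores pairs of context signatures and pre-trained embeddings, and a novel word's vector is guessed as an attention-weighted read over that memory, so the guess stays inside the pre-trained embedding space. This is only an illustrative sketch; it is not the paper's Mem2Vec implementation, and the names estimate_embedding, memory_keys, memory_values, and temperature are hypothetical.

# Illustrative sketch only; NOT the authors' Mem2Vec code.
import numpy as np

def estimate_embedding(contexts, memory_keys, memory_values, temperature=0.1):
    """Guess an embedding for a novel word from a few context vectors.

    contexts:      (n, d) array, one averaged context vector per observed sentence.
    memory_keys:   (m, d) array, context signatures stored in the long-term memory.
    memory_values: (m, d) array, pre-trained embeddings associated with each key.
    """
    # Summarize the few observed contexts into a single query vector.
    query = contexts.mean(axis=0)
    query /= np.linalg.norm(query) + 1e-8

    # Cosine similarity between the query and every memory key.
    keys = memory_keys / (np.linalg.norm(memory_keys, axis=1, keepdims=True) + 1e-8)
    scores = keys @ query

    # Softmax attention over memory slots; lower temperature gives sharper reads.
    weights = np.exp(scores / temperature)
    weights /= weights.sum()

    # The guessed embedding is an attention-weighted blend of stored embeddings.
    return weights @ memory_values

# Example usage with random placeholder data: 3 observed contexts for the novel
# word, and a memory of 1000 entries in a 50-dimensional pre-trained space.
rng = np.random.default_rng(0)
guessed = estimate_embedding(rng.normal(size=(3, 50)),
                             rng.normal(size=(1000, 50)),
                             rng.normal(size=(1000, 50)))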

Proceedings publisher: Conference on Empirical Methods in Natural Language Processing
Language: English
Content type: Conference paper
Source URL: [http://ir.ia.ac.cn/handle/173211/40577]
Collection: National Laboratory of Pattern Recognition_Natural Language Processing
Author affiliations:
1. National Laboratory of Pattern Recognition, CASIA, Beijing, China
2. CAS Center for Excellence in Brain Science and Intelligence Technology, Beijing, China
3. University of Chinese Academy of Sciences, Beijing, China
Recommended citation (GB/T 7714):
Jingyuan Sun, Shaonan Wang, Jiajun Zhang, et al. Memory, Show the Way: Memory Based Few Shot Word Representation Learning[C]. In: Conference on Empirical Methods in Natural Language Processing. Brussels, 2018.10.