M3: Multimodal Memory Modelling for Video Captioning
Wang, Junbo; Wang, Wei; Huang, Yan; Wang, Liang; Tan, Tieniu
2018
Conference Date | 2018-6
Conference Venue | Salt Lake City, USA
Abstract (English) | Video captioning, which automatically translates video clips into natural language sentences, is a very important task in computer vision. By virtue of recent deep learning technologies, video captioning has made great progress. However, learning an effective mapping from the visual sequence space to the language space is still a challenging problem, due to long-term multimodal dependency modelling and semantic misalignment. Inspired by the facts that memory modelling poses potential advantages to long-term sequential problems [35] and that working memory is a key factor of visual attention [33], we propose a Multimodal Memory Model (M3) to describe videos, which builds a visual and textual shared memory to model the long-term visual-textual dependency and further guides visual attention on described visual targets to resolve visual-textual alignment. Specifically, similar to [10], the proposed M3 attaches an external memory to store and retrieve both visual and textual contents by interacting with the video and sentence through multiple read and write operations. To evaluate the proposed model, we perform experiments on two public datasets: MSVD and MSR-VTT. The experimental results demonstrate that our method outperforms most of the state-of-the-art methods in terms of BLEU and METEOR. |
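The abstract describes an external memory that is read from and written to via content-based attention. As a minimal illustrative sketch only (the paper's actual M3 architecture is not reproduced here; all function names, shapes, and the NTM-style erase/add update are assumptions for illustration), such read/write operations can look like:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_read(memory, key):
    """Content-based soft read: attend over memory slots with a query key,
    return the attention-weighted sum of slot contents."""
    weights = softmax(memory @ key)      # (num_slots,)
    return weights @ memory, weights     # read vector (dim,), weights

def memory_write(memory, weights, erase, add):
    """Erase-then-add style soft write (as in memory-network literature):
    each slot is partially erased and then updated, scaled by its weight."""
    memory = memory * (1.0 - np.outer(weights, erase))
    return memory + np.outer(weights, add)

# Toy usage: 8 memory slots of dimension 16.
rng = np.random.default_rng(0)
M = rng.standard_normal((8, 16))
key = rng.standard_normal(16)

read_vec, w = memory_read(M, key)
M_updated = memory_write(M, w, erase=np.ones(16), add=key)
```

In the paper's setting, the query key would come from the visual or textual encoder state, so that repeated reads and writes let visual and textual information interact through the shared memory.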
Content Type | Conference Paper |
Source URL | [http://ir.ia.ac.cn/handle/173211/28358] |
Collection | Institute of Automation — Center for Research on Intelligent Perception and Computing |
Author Affiliations | 1. University of Chinese Academy of Sciences 2. Center for Excellence in Brain Science and Intelligence Technology, Institute of Automation, Chinese Academy of Sciences 3. Center for Research on Intelligent Perception and Computing, National Laboratory of Pattern Recognition |
Recommended Citation (GB/T 7714) | Wang, Junbo, Wang, Wei, Huang, Yan, et al. M3: Multimodal Memory Modelling for Video Captioning[C]. In: . Salt Lake City, USA. 2018-6. |