MULTIMODAL LATENT FACTOR MODEL WITH LANGUAGE CONSTRAINT FOR PREDICATE DETECTION
Ma, Xuan1,4; Bao, Bingkun3; Yao, Lingling2; Xu, Changsheng1,4
2019-08
Conference date: 2019-9-22
Conference venue: Taipei, Taiwan
English abstract

Visual relationship detection has become an important tool for scene understanding. Predicate detection, which aims to identify the predicate linking two entities in an image, is a key component of visual relationship detection. In this paper, we propose the Multimodal Latent Factor Model with Language Constraint (MMLFM-LC) for predicate detection; its novelty lies in integrating knowledge learned from multiple modalities, annotated valid relationships, and semantic similarities. Representations of the visual and textual modalities are first fed into the model. Second, a bilinear structure is introduced to model relationships from the annotated valid relationships, while a language constraint is built from semantic similarities between predicates. Finally, the visual and textual representations are fused in an embedded subspace for predicate detection. Experiments on both the Visual Relationship and Visual Genome datasets show that our method outperforms competing methods on predicate detection.
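This record does not include the paper's code, so the following is only a minimal sketch of the kind of model the abstract describes: modality projections into a shared latent subspace, a per-predicate bilinear scoring structure, and a language-constraint regularizer based on semantic similarity between predicates. All class, function, and parameter names here (BilinearPredicateScorer, language_constraint, predicate_sim, lam) are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearPredicateScorer(nn.Module):
    """Sketch: score each predicate for a (subject, object) pair by fusing
    visual and textual features in a shared latent subspace and applying a
    per-predicate bilinear form."""

    def __init__(self, vis_dim, txt_dim, latent_dim, num_predicates):
        super().__init__()
        # Project both modalities into a common embedded subspace.
        self.vis_proj = nn.Linear(vis_dim, latent_dim)
        self.txt_proj = nn.Linear(txt_dim, latent_dim)
        # One bilinear matrix per predicate (a low-rank factorization is a
        # common alternative for a "latent factor" formulation).
        self.W = nn.Parameter(torch.randn(num_predicates, latent_dim, latent_dim) * 0.01)

    def forward(self, subj_vis, subj_txt, obj_vis, obj_txt):
        # Fuse the two modalities for subject and object (simple additive fusion).
        s = self.vis_proj(subj_vis) + self.txt_proj(subj_txt)   # (B, d)
        o = self.vis_proj(obj_vis) + self.txt_proj(obj_txt)     # (B, d)
        # Bilinear score s^T W_k o for every predicate k.
        scores = torch.einsum('bi,kij,bj->bk', s, self.W, o)    # (B, K)
        return scores

def language_constraint(W, predicate_sim):
    """Sketch of a language constraint: predicates that are semantically
    similar (predicate_sim[k, l] near 1, e.g. word-vector cosine similarity)
    are encouraged to have nearby bilinear parameters."""
    K = W.shape[0]
    flat = W.reshape(K, -1)
    # Pairwise squared distances between per-predicate parameter matrices.
    dists = torch.cdist(flat, flat) ** 2                        # (K, K)
    return (predicate_sim * dists).mean()

# Usage sketch: cross-entropy over annotated (valid) relationships plus the
# language-constraint term, weighted by a hypothetical coefficient lam.
# loss = F.cross_entropy(scores, predicate_labels) + lam * language_constraint(model.W, sim)
```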

Language: English
Content type: Conference paper
Source URL: http://ir.ia.ac.cn/handle/173211/44777
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Multimedia Computing and Graphics Group
Author affiliations:
1.University of Chinese Academy of Sciences
2.Tencent, Shenzhen, China
3.College of Telecommunications & Information Engineering, Nanjing University of Posts and Telecommunications
4.National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
Recommended citation
GB/T 7714
Ma, Xuan, Bao, Bingkun, Yao, Lingling, et al. MULTIMODAL LATENT FACTOR MODEL WITH LANGUAGE CONSTRAINT FOR PREDICATE DETECTION[C]. In: Taipei, Taiwan, 2019-9-22.