ViP-CNN: Visual Phrase Guided Convolutional Neural Network
Yikang Li; Wanli Ouyang; Xiaogang Wang; Xiaoou Tang
2017
Conference location: United States
Abstract: As the intermediate-level task connecting image captioning and object detection, visual relationship detection has started to catch researchers' attention because of its descriptive power and clear structure. It detects the objects and captures their pair-wise interactions with a subject-predicate-object triplet, e.g. ⟨person-ride-horse⟩. In this paper, each visual relationship is considered as a phrase with three components. We formulate visual relationship detection as three inter-connected recognition problems and propose a Visual Phrase guided Convolutional Neural Network (ViP-CNN) to address them simultaneously. In ViP-CNN, we present a Phrase-guided Message Passing Structure (PMPS) to establish the connection among relationship components and help the model consider the three problems jointly. A corresponding non-maximum suppression method and model training strategy are also proposed. Experimental results show that our ViP-CNN outperforms the state-of-the-art method in both speed and accuracy. We further pretrain ViP-CNN on our cleansed Visual Genome Relationship dataset, which is found to perform better than pretraining on ImageNet for this task.
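The abstract models each visual relationship as a subject-predicate-object triplet grounded by the two participants' bounding boxes, with the whole "phrase" region covering both. As a minimal illustration (hypothetical names and boxes, not the paper's code), such a triplet and its phrase region might be represented as:

```python
from dataclasses import dataclass
from typing import Tuple

# (x1, y1, x2, y2) corner coordinates of an axis-aligned bounding box.
Box = Tuple[float, float, float, float]


@dataclass
class VisualRelationship:
    """A subject-predicate-object triplet, e.g. <person-ride-horse>."""
    subject: str
    predicate: str
    obj: str
    subject_box: Box
    object_box: Box

    def triplet(self) -> Tuple[str, str, str]:
        return (self.subject, self.predicate, self.obj)

    def phrase_box(self) -> Box:
        """Union box covering both participants: the 'phrase' region."""
        (sx1, sy1, sx2, sy2) = self.subject_box
        (ox1, oy1, ox2, oy2) = self.object_box
        return (min(sx1, ox1), min(sy1, oy1), max(sx2, ox2), max(sy2, oy2))


rel = VisualRelationship("person", "ride", "horse",
                         subject_box=(10, 5, 60, 90),
                         object_box=(30, 40, 120, 110))
print(rel.triplet())     # ('person', 'ride', 'horse')
print(rel.phrase_box())  # (10, 5, 120, 110)
```

The union box corresponds to the region a phrase-level branch would look at, while the subject and object boxes feed the two object-level branches.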
Language: English
Content type: Conference paper
Source URL: http://ir.siat.ac.cn:8080/handle/172644/11767
Collection: Shenzhen Institute of Advanced Technology, Integration Institute (深圳先进技术研究院_集成所)
Author affiliation: 2017
Recommended citation (GB/T 7714):
Yikang Li, Wanli Ouyang, Xiaogang Wang, et al. ViP-CNN: Visual Phrase Guided Convolutional Neural Network[C]. In: . United States.