Graph Convolutional Tracking
Gao, Junyu [1,3,4]; Zhang, Tianzhu [2,3,4]; Xu, Changsheng [1,3,4]
2019-06
Conference Date: 2019-06
Conference Venue: Long Beach, USA
Abstract

Tracking by siamese networks has achieved favorable performance in recent years. However, most existing siamese methods do not take full advantage of spatial-temporal target appearance modeling under different contextual situations. In fact, the spatial-temporal information can provide diverse features to enhance the target representation, and the context information is important for online adaptation of target localization. To comprehensively leverage the spatial-temporal structure of historical target exemplars and to benefit from the context information, in this work we present a novel Graph Convolutional Tracking (GCT) method for high-performance visual tracking. Specifically, GCT jointly incorporates two types of Graph Convolutional Networks (GCNs) into a siamese framework for target appearance modeling. Here, we adopt a spatial-temporal GCN to model the structured representation of historical target exemplars. Furthermore, a context GCN is designed to utilize the context of the current frame to learn adaptive features for target localization. Extensive results on four challenging benchmarks show that our GCT method performs favorably against state-of-the-art trackers while running at around 50 frames per second.
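
This record contains no code. As a rough illustration of the graph convolution building block that both the spatial-temporal GCN and the context GCN rely on, the following is a minimal PyTorch-style sketch of a single graph convolution layer applied to part-level target features. The layer, tensor shapes, and fully connected toy graph are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumption, not the GCT code): one graph convolution layer
# following the standard propagation rule H' = relu(D^{-1/2} A D^{-1/2} H W),
# applied to part-level node features of target exemplars or the search region.
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    """A single graph convolution layer over part-level features."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x:   (num_nodes, in_dim) node features, e.g. local parts of
        #      historical target exemplars (spatial-temporal graph) or of
        #      the current frame context (context graph).
        # adj: (num_nodes, num_nodes) adjacency matrix with self-loops.
        deg = adj.sum(dim=-1)                          # node degrees
        d_inv_sqrt = deg.clamp(min=1e-6).pow(-0.5)     # D^{-1/2}
        norm_adj = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)
        return torch.relu(self.linear(norm_adj @ x))   # propagate, then transform


# Toy usage: 9 hypothetical part nodes with 256-d features, fully connected graph.
if __name__ == "__main__":
    nodes = torch.randn(9, 256)
    adj = torch.ones(9, 9)                             # includes self-loops
    layer = GraphConv(256, 256)
    out = layer(nodes, adj)
    print(out.shape)                                   # torch.Size([9, 256])
```

In the paper's framework, layers of this kind would be stacked inside the siamese branches so that the exemplar representation is refined by message passing over the spatial-temporal graph before being matched against context-adapted search features; the stacking depth and graph construction here are left unspecified, as the record does not give them.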

Language: English
Content Type: Conference Paper
Source URL: http://ir.ia.ac.cn/handle/173211/39174
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Multimedia Computing and Graphics Team
Author Affiliations:
1. Peng Cheng Laboratory, Shenzhen, China
2. University of Science and Technology of China
3. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
4. University of Chinese Academy of Sciences (UCAS)
Recommended Citation (GB/T 7714):
Gao, Junyu, Zhang, Tianzhu, Xu, Changsheng. Graph Convolutional Tracking[C]. In: Long Beach, USA, 2019-06.