CLIP-Driven hierarchical fusion for referring image segmentation
Yichen Yan1,2; Xingjian He2; Jing Liu1,2
2024-05
Conference date: 2024/03/08
Conference location: Kunming, China
Keywords: Referring Image Segmentation, CLIP, Hierarchical Fusion, Computer Vision
Abstract (English)

Referring image segmentation, a fundamental computer vision task, aims to segment the object described by a natural-language expression from an image. The task is challenging because vision and language features must be aligned and fused effectively. For alignment, pre-trained CLIP is widely used across vision-language tasks thanks to its notable success in aligning the two modalities. However, in most existing methods, vision and language information are processed independently in the encoder stage, which makes the subsequent fusion suboptimal. In this paper, we introduce a CLIP-Driven Hierarchical Fusion framework named CHRIS. We adopt CLIP as the encoder for its valuable vision-language alignment, and we design an effective early-fusion mechanism in the encoder stage called hierarchical attention. Moreover, we introduce a novel hierarchical fusion neck that further fuses the vision and language features produced by CLIP. We perform comprehensive experiments on three datasets widely adopted in the research community, RefCOCO, RefCOCO+, and G-Ref. Our framework outperforms previous approaches while using only a ResNet backbone.
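The record contains no implementation details beyond the abstract, but the cross-modal fusion idea it describes can be illustrated. Below is a minimal PyTorch sketch of hierarchical vision-language fusion, assuming a standard cross-attention formulation; every module name, dimension, and design choice here is an illustrative assumption, not the authors' actual architecture.

# Illustrative sketch only, not the authors' code: fuses multi-stage visual
# features with CLIP-style text features via cross-attention.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    # Fuses one encoder stage's visual features with language features:
    # vision tokens act as queries over the language tokens.
    def __init__(self, vis_dim, txt_dim, num_heads=8):
        super().__init__()
        self.txt_proj = nn.Linear(txt_dim, vis_dim)
        self.attn = nn.MultiheadAttention(vis_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(vis_dim)

    def forward(self, vis, txt):
        # vis: (B, N, C) flattened spatial features from one encoder stage
        # txt: (B, L, C_t) token features from the CLIP text encoder
        txt = self.txt_proj(txt)
        fused, _ = self.attn(query=vis, key=txt, value=txt)
        return self.norm(vis + fused)  # residual connection

class HierarchicalFusionNeck(nn.Module):
    # Applies cross-modal fusion at each encoder stage and projects every
    # stage to a common channel width for later merging.
    def __init__(self, vis_dims, txt_dim, out_dim=256):
        super().__init__()
        self.fusers = nn.ModuleList(CrossModalFusion(d, txt_dim) for d in vis_dims)
        self.projs = nn.ModuleList(nn.Linear(d, out_dim) for d in vis_dims)

    def forward(self, vis_feats, txt):
        # vis_feats: list of (B, N_i, C_i), one tensor per encoder stage
        return [p(f(v, txt)) for f, p, v in zip(self.fusers, self.projs, vis_feats)]

# Toy usage with ResNet-like stage widths and a CLIP-like text width (assumed):
neck = HierarchicalFusionNeck(vis_dims=[512, 1024, 2048], txt_dim=512)
vis_feats = [torch.randn(2, 28 * 28, 512),
             torch.randn(2, 14 * 14, 1024),
             torch.randn(2, 7 * 7, 2048)]
txt = torch.randn(2, 17, 512)
outs = neck(vis_feats, txt)  # three tensors, each (B, N_i, 256)

In this sketch each encoder stage is fused with the text features independently and projected to a shared width; the paper's hierarchical attention and fusion neck presumably coordinate the stages further, but those details are not given in this record.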

Language: English
Content type: Conference paper
Source URL: [http://ir.ia.ac.cn/handle/173211/58526]
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Image and Video Analysis Team
Corresponding author: Jing Liu
Author affiliations:
1. School of Artificial Intelligence, University of Chinese Academy of Sciences
2. Institute of Automation, Chinese Academy of Sciences
Recommended citation (GB/T 7714):
Yichen Yan, Xingjian He, Jing Liu. CLIP-Driven hierarchical fusion for referring image segmentation[C]. Kunming, China, 2024/03/08.