Robust Feature Rectification of Pretrained Vision Models for Object Recognition
Zhou SC(周圣超)1,2; Meng GF(孟高峰)1,2,3; Zhang ZX(张兆翔)1,2,3; Xu YD(徐亦达)4; Xiang SM(向世明)1,2
2023-06-26
Conference Date: 2023-2-7
Conference Venue: Washington, USA
Abstract (English)

Pretrained vision models for object recognition often suffer a dramatic performance drop under degradations unseen during training. In this work, we propose a RObust FEature Rectification module (ROFER) to improve the performance of pretrained models against degradations. Specifically, ROFER first estimates the type and intensity of the degradation that corrupts the image features. It then leverages a Fully Convolutional Network (FCN) to rectify the degraded features by pulling them back toward clean features. ROFER is a general-purpose module that can handle various degradations simultaneously, including blur, noise, and low contrast. Moreover, it can be plugged into pretrained models seamlessly to rectify degraded features without retraining the whole model. Furthermore, ROFER can be easily extended to composite degradations by adopting a beam search algorithm to find the composition order. Evaluations on CIFAR-10 and Tiny-ImageNet demonstrate that ROFER achieves 5% higher accuracy than state-of-the-art (SOTA) methods across different degradations. For composite degradations, ROFER improves the accuracy of a pretrained CNN by 10% and 6% on CIFAR-10 and Tiny-ImageNet, respectively.
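
To make the abstract's pipeline concrete, the sketch below shows one way a ROFER-style module could sit between a frozen pretrained backbone and its classifier head: a small estimator predicts the degradation type and intensity, and lightweight FCN rectifiers pull the features back toward clean ones. This is a minimal PyTorch sketch under our own assumptions, not the authors' released implementation; the class names (`DegradationEstimator`, `FCNRectifier`, `ROFER`) and the architectural choices (soft type weighting, residual rectifiers, intensity-based blending) are hypothetical illustrations of the idea described above.

```python
# Illustrative sketch only (not the paper's released code): a ROFER-style module that
# (1) estimates degradation type/intensity from backbone features and
# (2) rectifies those features with small fully convolutional networks
#     before they reach the frozen classifier head.
import torch
import torch.nn as nn


class DegradationEstimator(nn.Module):
    """Predicts a degradation type (e.g., blur / noise / low contrast) and a scalar intensity."""
    def __init__(self, in_channels, num_types=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.type_head = nn.Linear(in_channels, num_types)   # degradation class logits
        self.intensity_head = nn.Linear(in_channels, 1)       # degradation intensity in [0, 1]

    def forward(self, feat):
        v = self.pool(feat).flatten(1)
        return self.type_head(v), torch.sigmoid(self.intensity_head(v))


class FCNRectifier(nn.Module):
    """Fully convolutional residual corrector that pulls degraded features toward clean ones."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, feat):
        return feat + self.body(feat)  # residual rectification


class ROFER(nn.Module):
    """Wraps a frozen pretrained backbone + head; only the rectification parts are trainable."""
    def __init__(self, backbone, head, feat_channels, num_types=3):
        super().__init__()
        self.backbone, self.head = backbone, head
        for p in self.backbone.parameters():
            p.requires_grad = False  # pretrained model stays fixed
        for p in self.head.parameters():
            p.requires_grad = False
        self.estimator = DegradationEstimator(feat_channels, num_types)
        # one lightweight rectifier per degradation type
        self.rectifiers = nn.ModuleList([FCNRectifier(feat_channels) for _ in range(num_types)])

    def forward(self, x):
        feat = self.backbone(x)                        # possibly degraded features
        type_logits, intensity = self.estimator(feat)
        weights = torch.softmax(type_logits, dim=1)    # soft selection over degradation types
        rectified = sum(w[:, None, None, None] * r(feat)
                        for w, r in zip(weights.unbind(1), self.rectifiers))
        # blend by estimated intensity: nearly clean inputs pass through almost unchanged
        alpha = intensity[:, :, None, None]
        feat = (1 - alpha) * feat + alpha * rectified
        return self.head(feat)


if __name__ == "__main__":
    # Toy usage with hypothetical shapes: backbone yields (B, 64, 8, 8) features.
    backbone = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8))
    head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10))
    model = ROFER(backbone, head, feat_channels=64, num_types=3)
    logits = model(torch.randn(2, 3, 32, 32))
    print(logits.shape)  # torch.Size([2, 10])
```

For composite degradations, the abstract mentions a beam search over the composition order; that search procedure is not reproduced in this sketch.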

Language: English
Content Type: Conference Paper
Source URL: [http://ir.ia.ac.cn/handle/173211/56491]
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems
Corresponding Author: Meng GF(孟高峰)
Author Affiliations: 1. Institute of Automation, Chinese Academy of Sciences
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
3. Centre for Artificial Intelligence and Robotics, Hong Kong Institute of Science and Innovation, Chinese Academy of Sciences
4. Hong Kong Baptist University
Recommended Citation
GB/T 7714
Zhou SC, Meng GF, Zhang ZX, et al. Robust Feature Rectification of Pretrained Vision Models for Object Recognition[C]. Washington, USA, 2023-2-7.