Unsupervised Network Quantization via Fixed-Point Factorization
Wang, Peisong (2,3); He, Xiangyu (2,3); Chen, Qiang (2,3); Cheng, Anda (2,3); Liu, Qingshan (1); Cheng, Jian (2,3)
Journal: IEEE Transactions on Neural Networks and Learning Systems
Year: 2020
Issue: 1; Pages: 1
Keywords: Acceleration, compression, deep neural networks (DNNs), fixed-point quantization, unsupervised quantization
Abstract

The deep neural network (DNN) has achieved remarkable performance in a wide range of applications, at the cost of large memory and computational complexity. Fixed-point network quantization has emerged as a popular acceleration and compression method but still suffers from severe performance degradation when extremely low-bit quantization is used. Moreover, current fixed-point quantization methods rely heavily on supervised retraining with large amounts of labeled training data, which are hard to obtain in real-world applications. In this article, we propose an efficient framework, namely, the fixed-point factorized network (FFN), to turn all weights into ternary values, i.e., {-1, 0, 1}. We highlight that the proposed FFN framework can achieve negligible degradation even without any supervised retraining on labeled data. Since the activations can easily be quantized into an 8-bit format, the resulting networks require only low-bit fixed-point additions, which are significantly more efficient than 32-bit floating-point multiply-accumulate operations (MACs). Extensive experiments on large-scale ImageNet classification and object detection on MS COCO show that the proposed FFN can achieve more than 20x compression and remove most of the multiply operations while maintaining comparable accuracy. Code is available on GitHub at https://github.com/wps712/FFN.
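The factorization idea summarized in the abstract can be illustrated with a minimal NumPy sketch: a real-valued weight matrix W is approximated as X·diag(d)·Yᵀ, where X and Y contain only ternary entries from {-1, 0, 1} and d holds a small set of real scaling factors, so that most of the arithmetic reduces to additions and subtractions. The sketch below is an assumption-laden illustration of such a semidiscrete-style decomposition, not the authors' released implementation; the function names (ternarize, sdd_approx), the 0.7 magnitude-threshold heuristic, and the greedy alternating updates are choices made here purely for clarity.

import numpy as np

def ternarize(v, ratio=0.7):
    # Map a real vector to {-1, 0, +1}: entries whose magnitude exceeds a
    # threshold proportional to the mean absolute value keep their sign,
    # the rest are zeroed (threshold heuristic assumed for illustration).
    t = ratio * np.mean(np.abs(v))
    return np.sign(v) * (np.abs(v) > t)

def sdd_approx(W, rank, n_iters=20):
    # Greedy semidiscrete-style decomposition W ~= X @ diag(d) @ Y.T with
    # ternary X (m x rank), ternary Y (n x rank), and real scales d (rank,).
    m, n = W.shape
    X, Y, d = np.zeros((m, rank)), np.zeros((n, rank)), np.zeros(rank)
    R = W.copy()                                    # residual still to be explained
    for k in range(rank):
        y = ternarize(R[np.argmax(np.abs(R).sum(axis=1))])  # init from the "heaviest" row
        for _ in range(n_iters):                    # alternating ternary updates
            x = ternarize(R @ y)
            y = ternarize(R.T @ x)
        denom = (x @ x) * (y @ y)                   # nonzero counts of x and y
        dk = (x @ R @ y) / denom if denom > 0 else 0.0
        X[:, k], Y[:, k], d[k] = x, y, dk
        R = R - dk * np.outer(x, y)                 # peel off the rank-1 ternary term
    return X, d, Y

# Toy usage: approximate a random 64x64 layer with 96 ternary rank-1 terms.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
X, d, Y = sdd_approx(W, rank=96)
rel_err = np.linalg.norm(W - X @ np.diag(d) @ Y.T) / np.linalg.norm(W)
print(f"relative reconstruction error: {rel_err:.3f}")

Because each rank-1 term uses only ternary factors, multiplying an 8-bit activation vector by the factorized weights comes down to sign flips and accumulations plus a handful of per-term scalings, which is the efficiency argument the abstract makes for replacing 32-bit floating-point MACs.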

Content type: Journal article
Source URL: http://ir.ia.ac.cn/handle/173211/40616
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Image and Video Analysis Team
Corresponding author: Liu, Qingshan
Author affiliations:
1. Nanjing University of Information Science and Technology
2. University of Chinese Academy of Sciences
3. Institute of Automation, Chinese Academy of Sciences
Recommended citation:
GB/T 7714: Wang, Peisong, He, Xiangyu, Chen, Qiang, et al. Unsupervised Network Quantization via Fixed-Point Factorization[J]. IEEE Transactions on Neural Networks and Learning Systems, 2020(1): 1.
APA: Wang, Peisong, He, Xiangyu, Chen, Qiang, Cheng, Anda, Liu, Qingshan, & Cheng, Jian. (2020). Unsupervised Network Quantization via Fixed-Point Factorization. IEEE Transactions on Neural Networks and Learning Systems(1), 1.
MLA: Wang, Peisong, et al. "Unsupervised Network Quantization via Fixed-Point Factorization". IEEE Transactions on Neural Networks and Learning Systems 1 (2020): 1.