A hybrid optimization method for acceleration of building linear classification models
Lv, Junchao; Wang, Qiang; Huang, Joshua Zhexue
2013
Conference | 2013 International Joint Conference on Neural Networks, IJCNN 2013
Venue | Dallas, TX, United States
Abstract | Linear classification is an important technique in machine learning and data mining, and the development of fast optimization methods for training linear classification models is a hot research topic. Stochastic gradient descent (SGD) achieves relatively good results quickly, but its convergence is unstable. The limited-memory BFGS (L-BFGS) method converges reliably, but takes a long time to train the model, since each update requires computing the gradient over the entire data set. In this paper, we investigate a hybrid method that integrates SGD and L-BFGS into a new optimization process, SGD-LBFGS, to take advantage of both methods. In SGD-LBFGS, SGD runs the initial iterations to obtain a suboptimal result; L-BFGS then takes over and continues the optimization until it converges and a better model is built. We present a theoretical result proving that SGD-LBFGS converges faster than both SGD and L-BFGS. Experiments on six real-world data sets show that SGD-LBFGS converged 77% faster than L-BFGS on average and produced more stable results than SGD.
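The two-phase process the abstract describes can be sketched in plain Python. This is a toy illustration under assumptions, not the paper's implementation: a hypothetical tiny 2-feature data set, logistic loss, a few SGD epochs for a warm start, then a minimal L-BFGS (two-loop recursion with backtracking line search) run to refinement.

```python
import math
import random

# Hypothetical tiny 2-feature data set: (x1, x2, label in {-1, +1})
DATA = [(0.5, 1.2, 1), (1.5, 0.3, 1), (2.0, 1.0, 1),
        (-0.7, -1.0, -1), (-1.2, 0.4, -1), (-0.5, -2.0, -1)]

def loss(w):
    """Average logistic loss over the full data set."""
    return sum(math.log(1.0 + math.exp(-y * (w[0] * x1 + w[1] * x2)))
               for x1, x2, y in DATA) / len(DATA)

def grad_point(w, point):
    """Gradient of the logistic loss at a single example (cheap)."""
    x1, x2, y = point
    s = -y / (1.0 + math.exp(y * (w[0] * x1 + w[1] * x2)))
    return [s * x1, s * x2]

def full_grad(w):
    """Full-batch gradient: the expensive quantity every L-BFGS step needs."""
    g = [0.0, 0.0]
    for p in DATA:
        gp = grad_point(w, p)
        g[0] += gp[0] / len(DATA)
        g[1] += gp[1] / len(DATA)
    return g

def sgd_phase(w, epochs=5, lr=0.5):
    """Phase 1: cheap stochastic steps reach a suboptimal point quickly."""
    rng = random.Random(0)
    for _ in range(epochs):
        for p in rng.sample(DATA, len(DATA)):  # one shuffled pass
            g = grad_point(w, p)
            w = [w[0] - lr * g[0], w[1] - lr * g[1]]
    return w

def lbfgs_phase(w, iters=20, m=5):
    """Phase 2: full-batch L-BFGS (two-loop recursion + Armijo backtracking)."""
    s_hist, y_hist = [], []
    g = full_grad(w)
    for _ in range(iters):
        # Two-loop recursion: q approximates inverse-Hessian times gradient.
        q, alphas = g[:], []
        for s, y in reversed(list(zip(s_hist, y_hist))):   # newest pair first
            rho = 1.0 / (s[0] * y[0] + s[1] * y[1])
            a = rho * (s[0] * q[0] + s[1] * q[1])
            alphas.append((a, rho, s, y))
            q = [q[0] - a * y[0], q[1] - a * y[1]]
        if y_hist:  # standard initial scaling gamma = s.y / y.y
            s, y = s_hist[-1], y_hist[-1]
            gamma = (s[0] * y[0] + s[1] * y[1]) / (y[0] ** 2 + y[1] ** 2)
            q = [gamma * q[0], gamma * q[1]]
        for a, rho, s, y in reversed(alphas):              # oldest pair first
            b = rho * (y[0] * q[0] + y[1] * q[1])
            q = [q[0] + (a - b) * s[0], q[1] + (a - b) * s[1]]
        d = [-q[0], -q[1]]                                 # descent direction
        # Backtracking line search on the full objective.
        t, f0 = 1.0, loss(w)
        gd = g[0] * d[0] + g[1] * d[1]
        while (loss([w[0] + t * d[0], w[1] + t * d[1]]) > f0 + 1e-4 * t * gd
               and t > 1e-10):
            t *= 0.5
        w_new = [w[0] + t * d[0], w[1] + t * d[1]]
        g_new = full_grad(w_new)
        s = [w_new[0] - w[0], w_new[1] - w[1]]
        y = [g_new[0] - g[0], g_new[1] - g[1]]
        if s[0] * y[0] + s[1] * y[1] > 1e-12:  # keep only curvature pairs
            s_hist.append(s)
            y_hist.append(y)
            if len(s_hist) > m:
                s_hist.pop(0)
                y_hist.pop(0)
        w, g = w_new, g_new
    return w

w0 = [0.0, 0.0]
w1 = sgd_phase(w0)    # SGD warm start: fast but suboptimal
w2 = lbfgs_phase(w1)  # L-BFGS refinement until (near) convergence
print("loss: start %.4f -> SGD %.4f -> L-BFGS %.4f"
      % (loss(w0), loss(w1), loss(w2)))
```

The design point the paper exploits is visible in the sketch: the SGD phase touches one example per update, while each L-BFGS step pays for a full-batch gradient, so handing L-BFGS a warm start cuts the number of expensive full-batch iterations.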
Indexed By | EI
Language | English
Content Type | Conference paper
Source URL | http://ir.siat.ac.cn:8080/handle/172644/4973
Collection | Shenzhen Institutes of Advanced Technology, Institute of Biomedical and Health Engineering
Author Affiliation | 2013
Recommended Citation (GB/T 7714) | Lv, Junchao, Wang, Qiang, Huang, Joshua Zhexue. A hybrid optimization method for acceleration of building linear classification models[C]. In: 2013 International Joint Conference on Neural Networks, IJCNN 2013. Dallas, TX, United States.