Title: Classification Learning with Gradient Descent - BN, LVQ, AdaBoost (基于梯度下降的分类学习)
Author: Jin Xiaobo (靳小波)
Degree: Doctor of Engineering
Defense Date: 2009-06-03
Degree Grantor: Graduate University of Chinese Academy of Sciences
Degree Conferral Place: Institute of Automation, Chinese Academy of Sciences
Supervisors: Liu Cheng-Lin; Hou Xinwen
Keywords: Classification Learning; Gradient Descent; Parameter Space; Functional Space; Bayesian Network; Learning Vector Quantization; AdaBoost
Alternative Title: Classification Learning with Gradient Descent - BN, LVQ, AdaBoost
Degree Discipline: Pattern Recognition and Intelligent Systems
Abstract: Machine learning, and classification learning in particular, has advanced rapidly over the past two decades, producing many new learning theories and methods. Among the most important developments are the support vector machine and AdaBoost, both of which are classification methods based on discriminant functions. Because gradient descent is simple and efficient for optimizing discriminant functions, it is the focus of this thesis, which studies three representative applications of gradient descent to classification learning:

1. Bayesian networks (gradient descent in a discrete parameter space): Most previous structure-learning methods take the naive Bayes structure as the initial state and obtain neighborhood structures by adding augmenting arcs between attributes. This restricted neighborhood means the final learned structure remains an augmented naive Bayes (ANB), which is still generative in the sense that the class variable is not conditioned on the attribute variables. We propose a hybrid generative-discriminative algorithm (HGD) for learning Bayesian network structure: it builds a discriminative structure by gradient-descent search, using the cross-validated classification rate as the criterion. Our empirical results on a large suite of benchmark datasets show that the proposed HGD+CR algorithm yields better classification performance than Bayesian network classifiers that use only discriminative scores.

2. Learning vector quantization (gradient descent in a continuous parameter space): Although prototype learning algorithms such as MCE and GLVQ achieve very good performance in many application domains, their loss functions are non-convex. Chapter 3 investigates convex loss functions for prototype learning, aiming to improve classification performance. We propose two prototype learning algorithms based on the conditional log-likelihood loss (CLL): log-likelihood of margin (LOGM) and log-likelihood of probability (LOGP), and add a regularization term during training to avoid over-fitting. In LOGM, the loss is a convex function of the margin, which guarantees a unique maximum margin. LOGP, like MAXP and SNPC, updates all prototype vectors for each training sample. Results on 30 benchmark datasets show that LOGM and LOGP achieve higher classification accuracy than MCE, GLVQ, and SNPC.

3. AdaBoost (gradient descent in a functional space): AdaBoost was originally designed for two-class classification; for multi-class classification, most algorithms reduce the multi-class problem to multiple two-class problems. The drawback of this approach is that the number of two-class problems to be solved grows at least linearly with the number of classes, which limits its applicability. Using multi-class weak classifiers, we construct a multi-class AdaBoost algorithm directly and propose an AdaBoost algorithm based on the hypothesis margin (AdaBoost.HM). On the one hand, it realizes the idea of AdaBoost.MH of maximizing the discriminant output of the positive class while minimizing the discriminant outputs of the negative classes; on the other hand, like AdaBoost.M1, it constructs a multi-class AdaBoost directly, saving training time. We also derive an upper bound on the training error of AdaBoost.HM. Finally, with neural networks as the weak classifiers, AdaBoost.HM shows better generalization performance than AdaBoost.MH.
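The LOGM idea in part 2 - gradient descent on a convex log-likelihood loss of the hypothesis margin - can be sketched as a single stochastic update step. This is a minimal illustration, not the thesis's implementation: it assumes squared Euclidean distance, one nearest genuine and one nearest rival prototype per sample, and illustrative hyperparameter names (`lr`, `xi`, `lam`) that are not taken from the thesis.

```python
import numpy as np

def logm_update(prototypes, labels, x, y, lr=0.1, xi=1.0, lam=1e-3):
    """One stochastic gradient-descent step of a LOGM-style prototype update.

    prototypes: (K, d) array of prototype vectors (updated in place)
    labels:     (K,) class label of each prototype
    x, y:       a training sample and its class label
    """
    d2 = np.sum((prototypes - x) ** 2, axis=1)   # squared distances to x
    pos = np.where(labels == y)[0]               # same-class prototypes
    neg = np.where(labels != y)[0]               # rival-class prototypes
    i = pos[np.argmin(d2[pos])]                  # nearest genuine prototype
    j = neg[np.argmin(d2[neg])]                  # nearest rival prototype
    margin = d2[j] - d2[i]                       # hypothesis margin (>0 if x is correct)
    # Convex loss of the margin: L = log(1 + exp(-xi * margin)).
    # dL/d(margin) = -xi * sigmoid(-xi * margin) = -g, with:
    g = xi / (1.0 + np.exp(xi * margin))
    # Chain rule: the genuine prototype is pulled toward x, the rival is
    # pushed away; a small L2 regularizer (lam) shrinks each step.
    prototypes[i] -= lr * (g * 2.0 * (prototypes[i] - x) + lam * prototypes[i])
    prototypes[j] -= lr * (-g * 2.0 * (prototypes[j] - x) + lam * prototypes[j])
    return prototypes, margin
```

Because the loss is convex in the margin, the update strength `g` decays smoothly as the margin grows, so well-classified samples barely move the prototypes while samples near the decision boundary drive most of the learning.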
Language: Chinese
Other Identifier: 200518014628056
Content Type: Doctoral Dissertation
Source URL: http://ir.ia.ac.cn/handle/173211/6216
Collection: Graduates - Doctoral Dissertations
Recommended Citation (GB/T 7714):
Jin Xiaobo. Classification Learning with Gradient Descent - BN, LVQ, AdaBoost [D]. Institute of Automation, Chinese Academy of Sciences; Graduate University of Chinese Academy of Sciences, 2009.