2022 Impact Factor (CJCR): 2.765

Indexed in:
  • Chinese Core Journals (中文核心)
  • EI
  • China Science and Technology Core Journals (中国科技核心)
  • Scopus
  • CSCD
  • Science Abstracts (UK) (英国科学文摘)


Restricted Gaussian Classification Network

WANG Shuang-Cheng, GAO Rui, DU Rui-Jie

Citation: WANG Shuang-Cheng, GAO Rui, DU Rui-Jie. Restricted Gaussian Classification Network. ACTA AUTOMATICA SINICA, 2015, 41(12): 2164-2176. doi: 10.16383/j.aas.2015.c150106


doi: 10.16383/j.aas.2015.c150106

Details
    Author information:

    GAO Rui is a Ph.D. candidate at the School of Mathematics and Statistics, Shanghai University of Finance and Economics, and a lecturer at the School of Mathematics and Information, Shanghai Lixin University of Commerce. She received the M.S. degree from the College of Science, University of Shanghai for Science and Technology in 2006. Her research interests include machine learning and data mining. E-mail: gaorui@lixin.edu.cn

    Corresponding author:

    WANG Shuang-Cheng is a professor at the School of Mathematics and Information, Shanghai Lixin University of Commerce. He received the Ph.D. degree from the College of Communication Engineering, Jilin University in 2004. His research interests include artificial intelligence, machine learning, and data mining. Corresponding author of this paper.

Restricted Gaussian Classification Network

Funds: 

Supported by National Natural Science Foundation of China (61272209), Natural Science Foundation of Shanghai (15ZR1429700), and Innovation Program of Shanghai Municipal Education Commission (15ZZ099)

  • Abstract: The naive Bayes classifier, which estimates each attribute's marginal density with a univariate Gaussian function, cannot effectively exploit the dependence information among attributes, while the full Bayes classifier, which estimates the joint attribute density with a multivariate Gaussian function, tends to overfit the data, and computing its high-order covariance matrices is also very difficult. To address these problems, this paper establishes a decomposition-and-combination theorem for the joint attribute density and a computation theorem for attribute conditional densities, and on this basis combines the attribute selection of the naive Bayes classifier, a classification-accuracy criterion, and greedy selection of attribute parent nodes to learn and optimize a restricted Gaussian classification network. Based on Bayesian network theory, the composition of the information that attributes provide for the class in Bayes-derived classifiers is analyzed. Experiments on continuous-attribute classification data sets from the UCI repository show that the optimized restricted Gaussian classification network achieves good classification accuracy.
Publication history
  • Received: 2015-03-11
  • Revised: 2015-09-06
  • Published: 2015-12-20
