Unsupervised Pool-Based Active Learning for Linear Regression
Abstract: In many real-world machine learning applications, unlabeled data can be obtained easily, but labeling them is time-consuming and/or expensive. It is therefore desirable to select the optimal samples to label, so that a good machine learning model can be trained from a minimum amount of labeled data. Active learning (AL) has been widely used for this purpose. However, most existing AL approaches are supervised: they train an initial model from a small number of labeled samples, query new samples based on the model, and then update the model iteratively. Few of them consider the completely unsupervised AL problem, i.e., starting from zero, how to optimally select the very first few samples to label, without knowing any label information at all. This problem is very challenging, as no label information can be utilized. This paper studies unsupervised pool-based AL for linear regression (ALR). We propose a novel AL approach that simultaneously considers informativeness, representativeness, and diversity, three essential criteria in AL. Extensive experiments on 12 datasets from various application domains, using three different linear regression models (ridge regression, LASSO, and linear support vector regression), demonstrate the effectiveness of the proposed approach.
Key words:
- active learning
- unsupervised learning
- linear regression
- support vector regression
- LASSO
- ridge regression
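Because no labels are available when the very first samples are chosen, all three criteria must be evaluated from the inputs alone. As a minimal illustration of this setting (a sketch of the representativeness/diversity idea only, not the authors' exact IRD algorithm; function and variable names are ours), one can cluster the pool and label the sample nearest each cluster centroid:

```python
import numpy as np
from sklearn.cluster import KMeans

def select_first_samples(X, M, seed=0):
    """Pick M pool samples to label first, using only the inputs X.

    Diversity: partition the pool into M clusters.
    Representativeness: in each cluster, take the sample closest
    to the centroid. (Sketch only; IRD additionally scores
    informativeness, which is omitted here.)
    """
    km = KMeans(n_clusters=M, n_init=10, random_state=seed).fit(X)
    selected = []
    for m in range(M):
        members = np.where(km.labels_ == m)[0]
        dist = np.linalg.norm(X[members] - km.cluster_centers_[m], axis=1)
        selected.append(members[np.argmin(dist)])
    return np.array(selected)
```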
Table 1 Criteria considered in the three existing and the proposed unsupervised pool-based ALR approaches.

| Approach | Method | Informativeness | Representativeness | Diversity |
|---|---|---|---|---|
| Existing | P-ALICE | ✓ | – | – |
| Existing | GSx | – | – | ✓ |
| Existing | RD | – | ✓ | ✓ |
| Proposed | IRD | ✓ | ✓ | ✓ |
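Of the baselines in Table 1, GSx considers diversity only. A short sketch of greedy sampling on the inputs, following its published description (helper name is ours): the first sample is the one closest to the pool centroid, and each subsequent sample maximizes its minimum distance to all samples chosen so far.

```python
import numpy as np

def gsx(X, M):
    """Greedy sampling on the inputs (GSx): a diversity-only baseline."""
    d0 = np.linalg.norm(X - X.mean(axis=0), axis=1)
    selected = [int(np.argmin(d0))]          # start nearest the centroid
    while len(selected) < M:
        # (N, k) distances from every pool sample to each selected sample
        d = np.linalg.norm(X[:, None, :] - X[selected][None, :, :], axis=2)
        dmin = d.min(axis=1)                 # distance to nearest selected
        dmin[selected] = -np.inf             # never re-select a chosen sample
        selected.append(int(np.argmax(dmin)))
    return np.array(selected)
```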
Table 2 Summary of the 12 regression datasets.
| Dataset | Source | # samples | # raw features | # numerical features | # categorical features | # total features |
|---|---|---|---|---|---|---|
| Concrete-CS$^1$ | UCI | 103 | 7 | 7 | 0 | 7 |
| Yacht$^2$ | UCI | 308 | 6 | 6 | 0 | 6 |
| autoMPG$^3$ | UCI | 392 | 7 | 6 | 1 | 9 |
| NO2$^4$ | StatLib | 500 | 7 | 7 | 0 | 7 |
| Housing$^5$ | UCI | 506 | 13 | 13 | 0 | 13 |
| CPS$^6$ | StatLib | 534 | 10 | 7 | 3 | 19 |
| EE-Cooling$^7$ | UCI | 768 | 7 | 7 | 0 | 7 |
| VAM-Arousal$^8$ | ICME | 947 | 46 | 46 | 0 | 46 |
| Concrete$^9$ | UCI | 1030 | 8 | 8 | 0 | 8 |
| Airfoil$^{10}$ | UCI | 1503 | 5 | 5 | 0 | 5 |
| Wine-Red$^{11}$ | UCI | 1599 | 11 | 11 | 0 | 11 |
| Wine-White$^{11}$ | UCI | 4898 | 11 | 11 | 0 | 11 |

$^1$ https://archive.ics.uci.edu/ml/datasets/Concrete+Slump+Test
$^2$ https://archive.ics.uci.edu/ml/datasets/Yacht+Hydrodynamics
$^3$ https://archive.ics.uci.edu/ml/datasets/auto+mpg
$^4$ http://lib.stat.cmu.edu/datasets/
$^5$ https://archive.ics.uci.edu/ml/machine-learning-databases/housing/
$^6$ http://lib.stat.cmu.edu/datasets/CPS_85_Wages
$^7$ http://archive.ics.uci.edu/ml/datasets/energy+efficiency
$^8$ https://dblp.uni-trier.de/db/conf/icmcs/icme2008.html
$^9$ https://archive.ics.uci.edu/ml/datasets/Concrete+Compressive+Strength
$^{10}$ https://archive.ics.uci.edu/ml/datasets/Airfoil+Self-Noise
$^{11}$ https://archive.ics.uci.edu/ml/datasets/Wine+Quality
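The gap between the "raw" and "total" feature counts in Table 2 (e.g., autoMPG has 7 raw but 9 total features) is consistent with one-hot encoding of the categorical features: autoMPG's single three-valued categorical feature would expand into three binary columns (6 + 3 = 9). A hypothetical preprocessing sketch under that assumption (the column arguments and the z-normalization step are our choices, not taken from the paper):

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

def preprocess(df, categorical_cols):
    """One-hot encode categorical features, then z-normalize.

    Assumption: this is how Table 2's 'total features' counts arise;
    the paper does not publish its preprocessing code.
    """
    df = pd.get_dummies(df, columns=categorical_cols, dtype=float)  # expand categoricals
    return StandardScaler().fit_transform(df.values)                # zero mean, unit variance
```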
Table 3 Percentage improvements of the AUCs of the mean/std RMSEs and the mean/std CCs.
All entries are percentage improvements over random sampling (RS).

| Regression model | Metric | Statistic | P-ALICE | GSx | RD | IRD |
|---|---|---|---|---|---|---|
| RR | RMSE | mean | 2.58 | -2.57 | 4.15 | 8.63 |
| RR | RMSE | std | 2.75 | 3.98 | 36.60 | 34.84 |
| RR | CC | mean | 6.54 | -3.43 | 10.39 | 18.70 |
| RR | CC | std | 12.74 | 29.47 | 35.03 | 42.97 |
| LASSO | RMSE | mean | 4.22 | 0.84 | 7.58 | 10.81 |
| LASSO | RMSE | std | 6.77 | 0.85 | 43.45 | 39.84 |
| LASSO | CC | mean | 25.06 | 69.41 | 25.67 | 60.63 |
| LASSO | CC | std | 6.39 | 31.05 | 22.46 | 29.82 |
| SVR | RMSE | mean | 4.21 | 0.66 | 5.23 | 12.12 |
| SVR | RMSE | std | 6.62 | -0.19 | 33.99 | 38.69 |
| SVR | CC | mean | 9.71 | -1.65 | 12.46 | 28.99 |
| SVR | CC | std | 11.10 | 25.78 | 34.97 | 43.25 |
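One plausible reading of Table 3's metric: compute the area under each method's performance-versus-number-of-labels learning curve, then report its percentage improvement over the RS curve. A sketch under that reading (not the authors' published code; names are ours):

```python
import numpy as np

def trapz_auc(curve, sizes):
    """Area under a learning curve via the trapezoidal rule."""
    curve, sizes = np.asarray(curve, float), np.asarray(sizes, float)
    return np.sum(np.diff(sizes) * (curve[1:] + curve[:-1]) / 2)

def pct_improvement_over_rs(curve_al, curve_rs, sizes, smaller_is_better=True):
    """Percentage AUC improvement of an AL method over RS.

    For RMSE a smaller AUC is better, so improvement is
    (AUC_RS - AUC_AL) / AUC_RS; for CC the sign flips.
    """
    auc_al, auc_rs = trapz_auc(curve_al, sizes), trapz_auc(curve_rs, sizes)
    sign = 1.0 if smaller_is_better else -1.0
    return 100.0 * sign * (auc_rs - auc_al) / auc_rs
```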
Table 4 $p$-values of non-parametric multiple comparisons ($\alpha = 0.05$; reject $H_0$ if $p < \alpha/2$).

| Regression model | Metric | IRD vs. RS | IRD vs. P-ALICE | IRD vs. GSx | IRD vs. RD |
|---|---|---|---|---|---|
| RR | RMSE | 0.0000 | 0.0003 | 0.0000 | 0.0284 |
| RR | CC | 0.0000 | 0.0000 | 0.0000 | 0.0005 |
| LASSO | RMSE | 0.0000 | 0.0004 | 0.0000 | 0.0596 |
| LASSO | CC | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| SVR | RMSE | 0.0000 | 0.0000 | 0.0000 | 0.0018 |
| SVR | CC | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
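The exact test procedure is not reproduced on this page; as an illustration only, pairwise non-parametric comparisons of this kind can be run with Wilcoxon signed-rank tests over per-dataset AUCs, rejecting $H_0$ at $p < \alpha/2$ as in Table 4 (function and argument names are ours; the paper's procedure may differ):

```python
from scipy.stats import wilcoxon

def pairwise_tests(auc_ird, auc_others, alpha=0.05):
    """Compare IRD against each baseline across datasets.

    auc_ird:    per-dataset AUC scores for IRD
    auc_others: dict mapping baseline name -> per-dataset AUC scores
    Returns {name: (p_value, reject_H0)} with the alpha/2 threshold
    used in Table 4.
    """
    results = {}
    for name, auc in auc_others.items():
        _, p = wilcoxon(auc_ird, auc)     # paired, non-parametric
        results[name] = (p, p < alpha / 2)
    return results
```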
