面向全量测点耦合结构分析与估计的工业过程监测方法

赵健程 赵春晖

引用本文: 赵健程, 赵春晖. 面向全量测点耦合结构分析与估计的工业过程监测方法. 自动化学报, 2022, 48(x): 1−21 doi: 10.16383/j.aas.c220090
Citation: Zhao Jian-Cheng, Zhao Chun-Hui. An industrial process monitoring method based on total measurement point coupling structure analysis and estimation. Acta Automatica Sinica, 2022, 48(x): 1−21 doi: 10.16383/j.aas.c220090


doi: 10.16383/j.aas.c220090
基金项目: 国家自然科学基金杰出青年基金 (62125306), 工业控制技术国家重点实验室自主课题 (ICT2021A15), 中央高校基本科研业务费专项资金 (浙江大学NGICS大平台) 资助
    作者简介:

    赵健程:浙江大学控制科学与工程学院博士研究生. 2021年获得浙江大学控制科学与工程学院自动化专业学士学位. 主要研究方向为大数据分析, 深度学习和零样本学习. E-mail: zhaojiancheng@zju.edu.cn

    赵春晖:浙江大学控制科学与工程学院教授. 2003年获得中国东北大学自动化专业学士学位, 2009年获得中国东北大学控制理论与控制工程专业博士学位, 先后在中国香港科技大学、美国加州大学圣塔芭芭拉分校做博士后研究工作. 主要研究方向为机器学习, 工业大数据解析与应用, 包括化工, 能源以及医疗领域. 本文通信作者. E-mail: chhzhao@zju.edu.cn

An Industrial Process Monitoring Method Based on Total Measurement Point Coupling Structure Analysis and Estimation

Funds: Supported by National Natural Science Foundation of China (62125306), the Research Project of the State Key Laboratory of Industrial Control Technology, Zhejiang University, China (ICT2021A15), and the Fundamental Research Funds for the Central Universities (Zhejiang University NGICS Platform)
    Author Bio:

    ZHAO Jian-Cheng Ph.D. candidate at the College of Control Science and Engineering, Zhejiang University. He received the B.Eng. degree from the College of Control Science and Engineering, Zhejiang University in 2021. His research interest covers big data analysis, deep learning, and zero-shot learning

    ZHAO Chun-Hui Professor at the College of Control Science and Engineering, Zhejiang University. She received her bachelor's, master's, and Ph.D. degrees from the Department of Automation, Northeastern University, Shenyang, China in 2003, 2006, and 2009, respectively. She was a postdoctoral fellow (January 2009-January 2012) at the Hong Kong University of Science and Technology, China, and the University of California, Santa Barbara, CA, USA. Her research interest covers machine learning, analytics of industrial big data, and their applications in energy and medical fields. Corresponding author of this paper

  • 摘要: 实际工业场景中, 需要在生产过程中收集大量测点的数据, 从而掌握生产过程运行状态. 传统的过程监测方法通常仅评估运行状态整体的异常与否, 或对运行状态进行分级评估, 这种方式并不会直接定位故障部位, 不利于故障的高效检修. 为此, 提出了一种基于全量测点估计的监测模型, 根据全量测点估计值与实际值的偏差定义监测指标, 从而实现全量测点的分别精准监测. 为了克服原有的基于工况估计的监测方法监测不全面且对测点间耦合关系建模不充分的问题, 提出了多核图卷积网络(Multi-kernel graph convolution network, MKGCN), 通过将全量传感器测点视为一张全量测点图, 显式地对测点间耦合关系进行建模, 从而实现了全量传感器测点的同步工况估计. 此外, 面向在线监测场景, 设计了基于特征逼近的自迭代方法, 从而克服了在异常情况下由于测点间强耦合导致的部分测点估计值异常的问题. 所提出的方法在电厂百万千瓦超超临界机组中引风机的实际数据上进行了验证, 结果显示, 提出的监测方法与其他典型方法相比能够更精准地检测出发生故障的测点.
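下面给出按测点构造监测指标的一个极简 Python (numpy) 示意片段, 仅用于说明"以全量测点估计值与实际值的偏差作为监测指标、逐测点报警"的思路; 其中以训练残差均值加三倍标准差作为各测点控制限属于本文示例假设, 论文中控制限的具体确定方式以正文为准.

```python
import numpy as np

def point_wise_monitoring(x_true, x_est, res_train):
    """逐测点监测示意: x_true、x_est 形状为 (样本数, 测点数),
    res_train 为正常训练数据上的估计残差, 用于确定各测点控制限."""
    # 监测指标: 各测点估计值与实际值的偏差 (绝对残差)
    residual = np.abs(x_true - x_est)
    # 示例假设: 以训练残差均值 + 3 倍标准差作为控制限
    limit = np.abs(res_train).mean(axis=0) + 3.0 * np.abs(res_train).std(axis=0)
    # 超限即判定对应测点异常, 从而直接定位故障部位
    alarm = residual > limit    # 布尔矩阵: (样本数, 测点数)
    return residual, alarm
```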
  • 图  1  LSTM内部结构

    Fig.  1  Internal structure of LSTM

    图  2  面向全量测点估计的多核图卷积模型结构

    Fig.  2  Structure of multi-kernel graph convolution model for total measurement points estimation

    图  3  MKGCN层的计算过程

    Fig.  3  Calculation process of MKGCN layer

    图  4  MKGCN层的堆叠使用

    Fig.  4  The stacking use of MKGCN layers

    图  5  自迭代方法

    Fig.  5  Self-iterative method

    图  6  训练数据中测点间相关性

    Fig.  6  Correlation between measuring points on modeling data

    图  7  基于MKGCN的模型监测效果图($ \text{var} \in F$)

    Fig.  7  Monitoring diagram of model based on MKGCN ($\text{var} \in F$)

    图  8  MEST方法漏报的部分异常变量

    Fig.  8  Some abnormal variables missed by the MEST method

    图  9  AE模型误报的部分正常变量

    Fig.  9  Some normal variables with serious false alarms (AE)

    图  10  不同通道的邻接核可视化结果

    Fig.  10  Visualization results of adjacency kernels in different channels

    图  11  测点12, 测点25的估计值对比

    Fig.  11  Comparison of working condition estimates for measuring points 12 and 25

    表  1  引风机测点对应表

    Table  1  Measuring points of induced draft fan

    编号  测点名称                    编号  测点名称                  编号  测点名称
    0     功率信号三选值              11    引风机水平振动            22    引风机油箱温度
    1     进气温度                    12    引风机后轴承温度1         23    引风机中轴承温度1
    2     引风机电机定子线圈温度1     13    引风机后轴承温度2         24    引风机中轴承温度2
    3     引风机电机定子线圈温度2     14    引风机后轴承温度3         25    引风机中轴承温度3
    4     引风机电机定子线圈温度3     15    引风机键相                26    炉膛压力
    5     引风机电机水平振动1         16    引风机静叶位置反馈        27    引风机出口风温
    6     引风机电机水平振动2         17    引风机前轴承温度1         28    引风机入口压力
    7     引风机电机轴承温度1         18    引风机前轴承温度2         29    引风机出口风压
    8     引风机电机轴承温度2         19    引风机前轴承温度3         30    引风机静叶开度指令
    9     引风机电流                  20    引风机润滑油温度          31    总燃料量
    10    引风机风垂直振动            21    引风机润滑油压力          32    炉膛压力

    表  2  基于MKGCN层的估计模型的结构

    Table  2  Structure of working condition estimation model based on MKGCN layer

    序号  网络层           数目  参数                                                                    激活函数
    1     BiLSTM           $n$   $[{\rm input\_size} = len, {\rm hidden\_size} = ld]$                    None
          FC               $n$   $[{\rm input\_size} = len, {\rm output\_size} = 2 \times ld]$
    2     MKGCN            $1$   $[c_{\rm in} = 1, no_{\rm in} = n, fe_{\rm in} = 2 \times ld;\ c_{\rm out} = oc, no_{\rm out} = n, fe_{\rm out} = 4 \times ld]$   Tanh
    3     FC 0             $n$   $[{\rm input\_size} = 4 \times ld, {\rm output\_size} = 2 \times ld]$   Tanh
    4     FC 1             $n$   $[{\rm input\_size} = 2 \times ld, {\rm output\_size} = 1]$             Tanh
    5     FC 2             $n$   $[{\rm input\_size} = oc, {\rm output\_size} = 1]$                      None
          特征逼近层 (FC)  $n$   $[{\rm input\_size} = oc, {\rm output\_size} = 1]$                      None
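结合表 2 与图 3、图 10 所体现的结构信息 (每个输出通道对应一个可学习的邻接核, 并在全量测点图上对输入通道特征进行聚合), 下面给出 MKGCN 层计算思路的一个示意性 numpy 草稿; 其中邻接核的行归一化方式与通道聚合方式均为示例假设, 并非论文的原始实现, 具体公式以论文正文为准.

```python
import numpy as np

def mkgcn_layer(h, adj_kernels, weight):
    """示意性的多核图卷积层 (非论文原始实现).
    h:           (c_in, n, fe_in)        输入特征, n 为测点数
    adj_kernels: (c_out, n, n)           每个输出通道一个可学习邻接核
    weight:      (c_in, fe_in, fe_out)   特征变换权重
    返回:         (c_out, n, fe_out)
    """
    c_out = adj_kernels.shape[0]
    c_in, n, _ = h.shape
    fe_out = weight.shape[-1]
    out = np.zeros((c_out, n, fe_out))
    for o in range(c_out):
        # 示例假设: 对该输出通道的邻接核按行做 softmax 归一化
        a = np.exp(adj_kernels[o] - adj_kernels[o].max(axis=1, keepdims=True))
        a = a / a.sum(axis=1, keepdims=True)
        for i in range(c_in):
            # 在全量测点图上传播并变换特征, 再对输入通道求和
            out[o] += a @ h[i] @ weight[i]
    return np.tanh(out)    # 与表 2 中 MKGCN 层的 Tanh 激活一致
```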

    表  3  基于GCN的估计模型结构

    Table  3  Structure of working condition estimation model based on GCN

    序号  网络层  数目  参数                                                                    激活函数
    1     BiLSTM  $n$   $[{\rm input\_size} = len, {\rm hidden\_size} = ld]$                    None
    2     GCN     $1$   $[{\rm in\_feature} = 2 \times ld, {\rm out\_feature} = 4 \times ld]$   Tanh
    3     FC 0    $n$   $[{\rm input\_size} = 4 \times ld, {\rm output\_size} = 2 \times ld]$   Tanh
    4     FC 1    $n$   $[{\rm input\_size} = 2 \times ld, {\rm output\_size} = ld]$            Tanh
    5     FC 2    $n$   $[{\rm input\_size} = ld, {\rm output\_size} = 1]$                      None
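作为对照, 表 3 中 GCN 基线所用的标准图卷积层 (Kipf 与 Welling 提出的形式) 的单层传播可用如下 numpy 片段示意 (仅为说明计算形式的草稿, 并非论文代码):

```python
import numpy as np

def gcn_layer(h, adj, weight):
    """标准 GCN 单层传播示意: H' = tanh(D^{-1/2}(A + I)D^{-1/2} H W).
    h: (n, in_feature), adj: (n, n), weight: (in_feature, out_feature)"""
    a_hat = adj + np.eye(adj.shape[0])                      # 加入自环
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))  # 度矩阵的 -1/2 次幂
    return np.tanh(d_inv_sqrt @ a_hat @ d_inv_sqrt @ h @ weight)
```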

    表  4  模型实现和参数网格搜索范围

    Table  4  Model implementation and parameter grid search range

    方法    Python包      超参数       超参数调整范围
    PLSR    scikit-learn  $nc$         $nc = \{5, 10, 15, 20, 25\}$
    ELM     D.C. Lambert  $E, \alpha$  $E = \{50, 100, 150, 200, 250\}$, $\alpha = \{0.1, 0.3, 0.5, 0.7, 0.9\}$
    FC      PaddlePaddle  $ld$         $ld = \{8, 16, 32, 64, 128\}$
    BiLSTM  PaddlePaddle  $ld$         $ld = \{8, 16, 32, 64, 128\}$
    Conv1d  PaddlePaddle  $ld$         $ld = \{8, 16, 32, 64, 128\}$
    GCN     PaddlePaddle  $ld$         $ld = \{8, 16, 32, 64, 128\}$
    MKGCN   PaddlePaddle  $ld, oc$     $ld = \{8, 16, 32, 64, 128\}$, $oc = \{2, 4, 8, 16, 32\}$

    表  5  网格搜索结果与深度神经网络方法在最优超参数下总参数量

    Table  5  Grid search results and total parameter counts of the deep neural network methods under optimal hyperparameters

    方法    最优超参数               模型数  总参数量
    PLSR    $nc = 15$                $n$     $/$
    ELM     $E = 200, \alpha = 0.9$  $n$     $/$
    MEST    $/$                      $1$     $/$
    FC      $ld = 128$               $n$     $5 \times 10^5$
    BiLSTM  $ld = 128$               $n$     $6.9 \times 10^6$
    Conv1d  $ld = 128$               $n$     $9 \times 10^5$
    GCN     $ld = 64$                $1$     $9.8 \times 10^6$
    MKGCN   $ld = 8, oc = 32$        $1$     $1.8 \times 10^5$

    表  6  测试数据上不同模型的估计结果(RMSE)

    Table  6  Results of different working condition estimation models on test data (RMSE)

    变量                PLSR   ELM    FC     BiLSTM  Conv1d  GCN    MEST   MKGCN
    $\text{var} \in N$  0.042  0.064  0.059  0.052   0.060   0.042  0.005  0.044
    $\text{var} \in F$  0.046  0.076  0.059  0.049   0.082   0.049  0.006  0.046

    表  7  测试数据上不同模型的估计结果(MAE)

    Table  7  Results of different working condition estimation models on test data (MAE)

    变量                PLSR   ELM    FC     BiLSTM  Conv1d  GCN    MEST   MKGCN
    $\text{var} \in N$  0.034  0.052  0.049  0.043   0.051   0.034  0.004  0.036
    $\text{var} \in F$  0.039  0.066  0.050  0.041   0.070   0.043  0.005  0.039

    表  8  监测数据上各监测指标($ \text{var} \in N$)

    Table  8  Monitoring indicators on monitoring data ($\text{var} \in N$)

    指标                     PLSR    ELM     FC      BiLSTM  Conv1d  GCN     MEST    MKGCN
    $\text{False}_\text{p}$  13.267  29.573  34.267  27.392  42.581  23.568  2.853   4.500
    $\text{False}_\text{n}$  0       0       0       0       0       0       0       0
    F1                       92.895  82.648  79.324  84.131  72.951  86.642  98.553  97.698

    表  9  监测数据上各监测指标($ \text{var} \in F$)

    Table  9  Monitoring indicators on monitoring data ($\text{var} \in F$)

    指标                     PLSR    ELM     FC      BiLSTM  Conv1d  GCN     MEST    MKGCN
    $\text{False}_\text{p}$  15.958  30.583  31.375  32.390  37.162  10.769  0       10.769
    $\text{False}_\text{n}$  24.250  5.042   5.917   1.140   1.774   6.968   33.208  1.056
    F1                       79.681  80.203  79.362  80.302  76.644  91.092  80.090  93.836
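表 8、表 9 中各监测指标的一种常见计算方式如下 (numpy 示意; 此处假设 $\text{False}_\text{p}$ 为正常样本上的误报率、$\text{False}_\text{n}$ 为异常样本上的漏报率, F1 由查准率与查全率计算得到, 均以百分数表示; 指标的精确定义以论文正文为准):

```python
import numpy as np

def monitoring_metrics(alarm, label):
    """alarm、label 为同形状布尔数组, True 分别表示报警与真实异常."""
    tp = np.sum(alarm & label)
    fp = np.sum(alarm & ~label)
    fn = np.sum(~alarm & label)
    false_p = 100.0 * fp / max(np.sum(~label), 1)   # 误报率 (%)
    false_n = 100.0 * fn / max(np.sum(label), 1)    # 漏报率 (%)
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    f1 = 100.0 * 2 * precision * recall / max(precision + recall, 1e-12)
    return false_p, false_n, f1
```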

    表  10  基于AE的估计模型的结构

    Table  10  Structure of working condition estimation model based on AE

    序号  网络层  数目  参数                                                                    激活函数
    1     BiLSTM  1     $[{\rm input\_size} = len, {\rm hidden\_size} = 2 \times ld]$           None
    2     FC 0    1     $[{\rm input\_size} = 4 \times ld, {\rm output\_size} = 2 \times ld]$   Tanh
    3     FC 1    1     $[{\rm input\_size} = 2 \times ld, {\rm output\_size} = ld]$            Tanh
    4     FC 2    1     $[{\rm input\_size} = ld, {\rm output\_size} = 2 \times ld]$            Tanh
    5     FC 3    1     $[{\rm input\_size} = 2 \times ld, {\rm output\_size} = n]$             None

    表  11  AE与MKGCN实验结果对比(MKGCN实验结果同表6 ~ 表9)

    Table  11  Comparison of experimental results between AE and MKGCN (the MKGCN results are the same as in Tables 6 ~ 9)

    指标                                         AE      MKGCN
    RMSE, $\text{var} \in N$                     0.020   0.044
    RMSE, $\text{var} \in F$                     0.022   0.046
    MAE, $\text{var} \in N$                      0.016   0.036
    MAE, $\text{var} \in F$                      0.019   0.039
    $\text{False}_\text{p}$, $\text{var} \in N$  38.811  4.500
    $\text{False}_\text{n}$, $\text{var} \in N$  0       0
    F1, $\text{var} \in N$                       75.922  97.698
    $\text{False}_\text{p}$, $\text{var} \in F$  35.009  10.769
    $\text{False}_\text{n}$, $\text{var} \in F$  0.887   1.056
    F1, $\text{var} \in F$                       78.505  93.836

    表  12  单输出通道与多输出通道性能对比

    Table  12  Performance comparison between single output channel and multiple output channels

    指标                                         $oc = 1$  $oc = 32$
    RMSE, $\text{var} \in N$                     0.043     0.044
    RMSE, $\text{var} \in F$                     0.055     0.046
    MAE, $\text{var} \in N$                      0.035     0.036
    MAE, $\text{var} \in F$                      0.049     0.039
    $\text{False}_\text{p}$, $\text{var} \in N$  38.486    4.500
    $\text{False}_\text{n}$, $\text{var} \in N$  0         0
    F1, $\text{var} \in N$                       76.172    97.698
    $\text{False}_\text{p}$, $\text{var} \in F$  36.022    10.769
    $\text{False}_\text{n}$, $\text{var} \in F$  5.237     1.056
    F1, $\text{var} \in F$                       76.385    93.836

    表  13  单输入通道与多输入通道性能对比

    Table  13  Performance comparison between single input channel and multiple input channels

    指标                                         $c_{\rm in}^1$  $c_{\rm in}^2$  $c_{\rm in}^{1,2}$
    RMSE, $\text{var} \in N$                     0.084           0.046           0.044
    RMSE, $\text{var} \in F$                     0.044           0.044           0.046
    MAE, $\text{var} \in N$                      0.072           0.038           0.036
    MAE, $\text{var} \in F$                      0.037           0.038           0.039
    $\text{False}_\text{p}$, $\text{var} \in N$  22.703          5.527           4.500
    $\text{False}_\text{n}$, $\text{var} \in N$  0               0               0
    F1, $\text{var} \in N$                       87.195          97.158          97.698
    $\text{False}_\text{p}$, $\text{var} \in F$  33.405          15.372          10.769
    $\text{False}_\text{n}$, $\text{var} \in F$  16.765          10.093          1.056
    F1, $\text{var} \in F$                       73.991          87.187          93.836

    表  14  自迭代效果对比

    Table  14  Comparison of self-iteration effects

    指标                                         $it = 0$  $it = 5$  $it = 50$
    $\text{False}_\text{p}$, $\text{var} \in N$  11.662    7.527     4.500
    $\text{False}_\text{n}$, $\text{var} \in N$  0         0         0
    F1, $\text{var} \in N$                       93.808    96.089    97.698
    $\text{False}_\text{p}$, $\text{var} \in F$  11.740    12.289    10.769
    $\text{False}_\text{n}$, $\text{var} \in F$  1.732     0.676     1.055
    F1, $\text{var} \in F$                       93.000    93.157    93.837
出版历程
  • 收稿日期:  2022-02-08
  • 录用日期:  2022-09-06
  • 网络出版日期:  2022-10-08
