

Motif-Augmented Contrastive Learning Based Defense Against Backdoor Attack on Graph Neural Networks

Chen Jin-Yin, Mu Wen-Bo, Zheng Hai-Bin

Chen Jin-Yin, Mu Wen-Bo, Zheng Hai-Bin. Motif-augmented contrastive learning based defense against backdoor attack on graph neural networks. Acta Automatica Sinica, xxxx, xx(x): x−xx doi: 10.16383/j.aas.240767


doi: 10.16383/j.aas.240767 cstr: 32138.14.j.aas.c250767


Funds: Supported by National Natural Science Foundation of China (62406286, 62072406), Key Laboratory of the Fifth Research Institute of Electronics, Ministry of Industry and Information Technology (HK00202503455), Beijing Life Science Academy (BLSA) (2024200CD0210), Key Laboratory of Data Protection and Intelligent Management, Ministry of Education, Sichuan University (SCUSAKFKT202402Z), Zhejiang Provincial Natural Science Foundation (LDQ23F020001, LD22F020002), Research and Application of Data Empowerment Methods Based on Multi-Source Data Fusion.
More Information
    Author Bio:

    CHEN Jin-Yin Professor at Zhejiang University of Technology. Her research interests include artificial intelligence security, graph data mining, and evolutionary computation. E-mail: chenjinyin@zjut.edu.cn

    MU Wen-Bo Master student at Zhejiang University of Technology. His research interests include artificial intelligence security, deep learning, and graph data mining. E-mail: 211123030043@zjut.edu.cn

    ZHENG Hai-Bin Lecturer at Zhejiang University of Technology. His research interests include deep learning, artificial intelligence security, and image recognition. Corresponding author of this paper. E-mail: haibinzheng320@gmail.com

  • Abstract: Graph neural networks (GNNs) have shown excellent performance in graph data mining and are therefore widely used in areas such as social networks and product recommendation. In graph classification, model decisions depend heavily on the global topology, which leaves GNNs vulnerable to backdoor attacks: prior work has shown that injecting poisoned information into the training data yields models that are easily fooled by trigger-bearing samples, seriously threatening model security. A number of defenses have accordingly been proposed that, to some extent, detect and filter poisoned samples or remove the backdoor from the model. However, existing defenses still face challenges: they generalize poorly across different backdoor attacks and fail to effectively balance main-task performance against defense success rate. To address these problems, this paper proposes, for the first time, a motif-augmented contrastive learning based backdoor defense for graph neural networks (Motif-Defense), which efficiently defends against multiple unknown types of backdoor attacks with only a slight drop in main-task performance. First, a contrastive learning model with motif-based graph augmentation is designed to pick out suspicious backdoor samples; these suspicious samples are then purified into clean samples using Jaccard similarity and a label smoothing strategy, realizing the defense against graph backdoor attacks. Finally, defense experiments on four real-world datasets show that Motif-Defense lowers the attack success rate by 84.61% on average, while classification accuracy drops by only 5.32% on average.
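The purification step named in the abstract combines Jaccard similarity with a label smoothing strategy. A minimal sketch of these two generic ingredients (the set-based feature encoding and the smoothing factor `eps` are illustrative assumptions, not details taken from the paper):

```python
# Two generic ingredients of the purification step: Jaccard similarity
# and label smoothing. Feature sets and eps are illustrative assumptions.

def jaccard_similarity(a, b):
    """|A ∩ B| / |A ∪ B| for two sets; defined as 1.0 when both are empty."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def smooth_label(one_hot, eps=0.1):
    """Standard label smoothing: keep 1 - eps of the mass on the original
    label and spread eps uniformly over all classes."""
    k = len(one_hot)
    return [(1 - eps) * y + eps / k for y in one_hot]

sim = jaccard_similarity({1, 2, 3}, {2, 3, 4})   # 2 shared / 4 total = 0.5
smoothed = smooth_label([1.0, 0.0], eps=0.1)     # ≈ [0.95, 0.05]
```

In Motif-Defense the similarity is presumably computed between suspicious samples and clean references, and smoothed labels replace the possibly attacker-chosen hard labels; the exact pairing is specified in the paper body, not here.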

    Fig.  1  Illustration of three-node and four-node motifs
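The three-node motifs of Fig. 1, the closed triangle and the open wedge, can be counted by brute force on small graphs. A pure-Python sketch (the example graph is hypothetical):

```python
from itertools import combinations

def count_three_node_motifs(edges):
    """Count triangles and open wedges (2-paths) over all three-node
    subsets of an undirected graph given as an edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    triangles = wedges = 0
    for a, b, c in combinations(sorted(adj), 3):
        present = (b in adj[a]) + (c in adj[a]) + (c in adj[b])
        if present == 3:
            triangles += 1
        elif present == 2:
            wedges += 1
    return triangles, wedges

# A 4-cycle 0-1-2-3-0 with chord 0-2: two triangles and two wedges.
tri, wed = count_three_node_motifs([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
```

Real motif counters (e.g. combinatorial graphlet counting) avoid this O(n³) enumeration; the brute-force version is only meant to make the Fig. 1 categories concrete.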


    Fig.  2  The framework of Motif-Defense


    Fig.  3  Ablation experiment results for Motif-Defense


    Fig.  4  Visualization of decision boundary change


    Fig.  5  The impact of Motif-Defense on the activation of neurons in backdoor samples


    Fig.  6  Time complexity analysis of Motif-Defense


    Fig.  7  The performance of Motif-Defense against adaptive attacks


    Table  1  The basic statistics of graph datasets

    Dataset   #Graphs  Avg. nodes  Avg. edges  Label distribution   Target class  Network type
    PROTEINS  1113     39.06       72.82       663 [0], 450 [1]     1             Bioinformatics
    AIDS      2000     15.69       16.20       400 [0], 1600 [1]    0             Small molecules
    NCI1      4110     29.87       32.30       2053 [0], 2057 [1]   0             Small molecules
    DBLP_v1   19456    10.48       19.65       9530 [0], 9926 [1]   0             Social network


    Table  2  The defense performance of Motif-Defense in different attack scenarios

    Dataset  Metric  Defense  Backdoor attack method
    MIA GTA ER-B MaxDcc Motif-Backdoor UGBA
    PROTEINS (76.23) ASR(%) No defense 68.07 95.80 72.23 93.28 94.12 98.94
    Prune 20.56 18.63 28.57 10.76 79.85 80.69
    BloGGaD 20.67 9.75 13.92 11.26 24.87 23.63
    MD-GNN 22.35 16.47 18.15 30.00 25.43 16.64
    CLB-Defense 4.86 3.58 9.39 15.86 11.76 8.96
    ES 13.61 12.35 9.71 11.76 16.64 15.37
    Motif-Defense 4.03 2.26 8.93 9.07 10.92 4.58
    ACC(%) No defense 66.82 65.02 64.13 66.37 74.89 73.95
    Prune 73.45 72.38 72.56 73.00 76.23 72.72
    BloGGaD 73.90 72.56 72.56 70.76 75.93 74.52
    MD-GNN 66.69 69.23 65.47 69.06 71.33 70.97
    CLB-Defense 73.35 72.44 72.24 74.39 75.03 71.45
    ES 71.57 71.30 68.43 72.55 76.23 71.14
    Motif-Defense 73.69 73.93 73.49 69.24 72.11 74.69
    AIDS (98.92) ASR(%) No defense 92.19 99.34 80.94 91.88 98.75 96.72
    Prune 38.99 47.33 35.63 22.43 39.63 93.47
    BloGGaD 14.75 20.75 14.63 9.19 19.87 22.41
    MD-GNN 16.87 25.19 16.00 23.62 27.75 19.25
    ES 23.99 17.56 25.38 8.13 20.75 15.84
    CLB-Defense 1.21 9.63 4.33 11.33 19.63 18.25
    Motif-Defense 0.68 4.12 7.87 7.03 10.62 9.54
    ACC(%) No defense 94.00 98.25 94.00 95.25 99.00 98.65
    Prune 96.50 96.33 96.25 82.33 97.45 94.93
    BloGGaD 96.30 96.25 95.25 97.10 97.20 95.92
    MD-GNN 95.45 94.47 94.75 95.85 98.11 96.38
    CLB-Defense 97.45 97.07 96.57 97.86 98.16 97.05
    ES 95.45 95.25 96.10 97.00 98.15 96.72
    Motif-Defense 97.55 97.15 97.55 97.95 98.25 97.65
    NCI1 (80.41) ASR(%) No defense 96.98 100.00 98.33 100.00 100.00 100.00
    Prune 50.50 45.10 38.20 46.00 53.30 98.32
    BloGGaD 33.26 39.44 27.36 30.25 20.64 28.34
    MD-GNN 37.23 41.04 34.92 38.54 62.66 32.63
    CLB-Defense 55.36 46.85 45.39 43.58 48.96 23.09
    ES 39.57 40.31 39.46 32.07 41.00 18.36
    Motif-Defense 32.74 30.38 25.58 23.58 34.36 13.64
    ACC(%) No defense 73.36 76.52 70.80 76.89 80.41 81.45
    Prune 76.23 76.74 73.48 77.47 74.96 78.33
    BloGGaD 74.31 73.55 74.92 74.33 73.84 80.84
    MD-GNN 75.50 72.90 77.10 75.00 73.80 77.97
    CLB-Defense 76.06 76.87 75.74 76.53 75.78 79.36
    ES 75.06 77.20 73.58 77.13 75.11 80.33
    Motif-Defense 77.41 78.99 77.88 77.34 76.47 79.67
    DBLP_v1 (80.83) ASR(%) No defense 48.75 62.17 62.29 69.86 70.84 72.56
    Prune 19.20 11.50 25.00 23.00 60.50 68.00
    BloGGaD 8.60 10.90 19.20 17.50 24.00 19.20
    MD-GNN 18.40 20.10 32.77 24.00 29.50 35.40
    CLB-Defense 10.33 10.28 15.28 13.66 24.39 18.52
    ES 18.80 10.36 18.89 28.25 31.00 26.79
    Motif-Defense 14.21 6.60 13.67 8.75 22.33 10.35
    ACC(%) No defense 73.46 67.78 79.52 76.23 78.85 80.36
    Prune 76.00 75.20 74.05 78.00 73.70 70.03
    BloGGaD 75.90 72.80 76.50 74.10 72.50 75.03
    MD-GNN 77.20 73.60 75.80 79.30 74.70 76.32
    CLB-Defense 79.52 79.33 79.62 79.78 78.54 78.14
    ES 77.92 77.72 77.01 77.84 75.11 76.94
    Motif-Defense 79.68 79.39 79.87 79.96 80.24 79.11
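As a sanity check on how an average attack-success-rate reduction of the kind reported in the abstract can be computed, the sketch below averages the absolute ASR drop between the no-defense rows and the Motif-Defense rows of the PROTEINS block of Table 2 (the 84.61% figure presumably averages over all four datasets and all attacks; this reproduces only one block):

```python
# ASR (%) on PROTEINS for the six attacks, copied from Table 2.
no_defense = [68.07, 95.80, 72.23, 93.28, 94.12, 98.94]
motif_def  = [4.03, 2.26, 8.93, 9.07, 10.92, 4.58]

# Absolute drop in attack success rate, in percentage points.
drops = [a - d for a, d in zip(no_defense, motif_def)]
avg_drop = sum(drops) / len(drops)
print(round(avg_drop, 2))  # average ASR reduction on PROTEINS
```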


    Table  3  The performance results under different threshold $ o_1 $ (AIDS + GTA)

    $ o_1 $ ASR (%) ACC (%) DR (%) FDR (%)
    0.3 8.53 95.81 99.00 6.81
    0.4 5.27 96.59 97.51 3.02
    0.5 4.12 97.15 95.84 1.53
    0.6 6.81 97.45 92.07 0.82
    0.7 10.34 97.86 86.23 0.45
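Table 3's DR and FDR columns show the usual trade-off when a threshold is swept over per-sample suspicion scores: raising $ o_1 $ flags fewer clean graphs (lower FDR) but misses more poisoned ones (lower DR). A generic sketch of how such rates are computed (the scores and poison labels below are hypothetical, not from the paper):

```python
def detection_rates(scores, is_poisoned, threshold):
    """Flag samples whose suspicion score reaches the threshold and report
    DR  = fraction of poisoned samples that were flagged, and
    FDR = fraction of flagged samples that are actually clean."""
    flagged = [s >= threshold for s in scores]
    tp = sum(f and p for f, p in zip(flagged, is_poisoned))
    fp = sum(f and not p for f, p in zip(flagged, is_poisoned))
    poisoned = sum(is_poisoned)
    dr = tp / poisoned if poisoned else 0.0
    fdr = fp / (tp + fp) if (tp + fp) else 0.0
    return dr, fdr

# Hypothetical example: four graphs, two of them poisoned.
dr, fdr = detection_rates([0.9, 0.8, 0.2, 0.7],
                          [True, True, False, False], 0.5)
```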
Publication history
  • Received:  2024-12-02
  • Accepted:  2026-01-04
  • Published online:  2026-03-25
