
RefineNet-based End-to-end Speech Enhancement

Lan Tian, Peng Chuan, Li Sen, Qian Yu-Xin, Chen Cong, Liu Qiao

Citation: Lan Tian, Peng Chuan, Li Sen, Qian Yu-Xin, Chen Cong, Liu Qiao. RefineNet-based end-to-end speech enhancement. Acta Automatica Sinica, 2022, 48(2): 554−563. doi: 10.16383/j.aas.c190433

doi: 10.16383/j.aas.c190433

Funds: Supported by National Natural Science Foundation of China (U19B2028, 61772117), National Science and Technology Commission Innovation Project (19-163-21-TS-001-042-01), CETC Big Data Research Institute (10-2018039), Sichuan Hi-Tech Industrialization Program (2018GFW0150), Fundamental Research Funds for the Central Universities (ZYGX2019J077)
Author Biographies:

    LAN Tian Associate professor at the School of Information and Software Engineering, University of Electronic Science and Technology of China. He received his Ph.D. degree in computer application technology from the University of Electronic Science and Technology of China in 2008. His research interest covers speech recognition, speech enhancement, natural language processing, and medical image processing.

    PENG Chuan Master student at the School of Information and Software Engineering, University of Electronic Science and Technology of China. His research interest covers natural language processing, speech enhancement, and speech recognition.

    LI Sen Master student at the School of Information and Software Engineering, University of Electronic Science and Technology of China. His research interest covers natural language processing and speech enhancement.

    QIAN Yu-Xin Master student at the School of Information and Software Engineering, University of Electronic Science and Technology of China. His research interest covers speech enhancement and speech separation.

    CHEN Cong Master student at the School of Information and Software Engineering, University of Electronic Science and Technology of China. His research interest covers speech enhancement and speech recognition.

    LIU Qiao Professor at the School of Information and Software Engineering, University of Electronic Science and Technology of China. He received his Ph.D. degree in computer application technology from the University of Electronic Science and Technology of China in 2010. His research interest covers natural language processing, machine learning, and data mining. Corresponding author of this paper.

  • Abstract: To improve the ability of neural networks to directly process time-domain speech waveforms, this paper proposes an end-to-end speech enhancement method based on RefineNet. A time-frequency analysis neural network is constructed to simulate the short-time Fourier transform used in speech signal processing, and a RefineNet network learns the feature mapping from noisy speech to clean speech. During model training, a multi-objective joint optimization strategy incorporates two speech enhancement evaluation metrics, short-time objective intelligibility (STOI) and source-to-distortion ratio (SDR), into the loss function. In comparative experiments against representative traditional methods and end-to-end deep learning methods, the proposed algorithm achieves the best results on all objective evaluation metrics, and exhibits better noise robustness under unseen noise and low signal-to-noise ratio conditions.
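
As a rough illustration of the learnable time-frequency front end the abstract describes, the sketch below replaces the fixed short-time Fourier transform with a strided 1-D convolution over the raw waveform, whose output a RefineNet-style network could then map to clean-speech features. This is a minimal sketch in PyTorch, not the authors' implementation; the window length (400 samples), hop size (160 samples), and filter count (512) are illustrative assumptions.

```python
# Minimal sketch of an STFT-like learnable analysis front end
# (assumed hyperparameters; not the configuration from the paper).
import torch
import torch.nn as nn

class LearnableTFFrontEnd(nn.Module):
    """Strided Conv1d acting as a learned time-frequency analysis transform."""
    def __init__(self, n_filters: int = 512, win_len: int = 400, hop: int = 160):
        super().__init__()
        # Each output channel behaves like one learned analysis basis,
        # analogous to one frequency bin of a short-time Fourier transform.
        self.analysis = nn.Conv1d(1, n_filters, kernel_size=win_len,
                                  stride=hop, padding=win_len // 2)

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # wav: (batch, samples) -> (batch, n_filters, frames)
        return torch.relu(self.analysis(wav.unsqueeze(1)))

# One second of 16 kHz audio becomes a 2-D, spectrogram-like feature map.
x = torch.randn(2, 16000)
print(LearnableTFFrontEnd()(x).shape)  # torch.Size([2, 512, 101])
```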
  • Fig. 1 The diagram for the RNSE architecture

    Fig. 2 The diagram for the ResNet architecture (the comma-separated numbers after "Conv" are the convolution layer's output channels and stride; where no stride is given, it defaults to 1)

    Fig. 3 The diagram for the RefineBlock architecture

    Fig. 4 Experimental results under different noises and SNRs (rows one to three report PESQ, STOI, and SDR, respectively; panels (a) ~ (c), (d) ~ (f), (g) ~ (i), and (j) ~ (l) show results under Babble, Factory1, White, and HFChannel noise, respectively; within each SNR cluster, the bars from left to right correspond to Log-MMSE, CNN-SE, WaveUNet, AET, and RNSE)

    Fig. 5 An example spectrogram of enhanced speech under Babble noise at 0 dB SNR

    Fig. 6 Results based on different objective functions (a sketch of such a joint objective follows below)
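
The objectives compared in Fig. 6 motivate a brief illustration of how an evaluation metric such as SDR can be folded into the training loss. The sketch below combines a time-domain mean-squared error with a negative-SDR term; the mixing weight alpha is a hypothetical parameter, and a differentiable STOI surrogate (which the paper also incorporates) is omitted for brevity, so this is not the authors' exact objective.

```python
# Hedged sketch of a multi-objective loss mixing waveform MSE with SDR
# (assumed weighting; the paper's exact formulation may differ).
import torch

def sdr_loss(est: torch.Tensor, ref: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Negative source-to-distortion ratio in dB, averaged over the batch."""
    num = torch.sum(ref ** 2, dim=-1)
    den = torch.sum((ref - est) ** 2, dim=-1)
    return -(10.0 * torch.log10((num + eps) / (den + eps))).mean()

def joint_loss(est: torch.Tensor, ref: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    # alpha is a hypothetical mixing weight, not a value from the paper.
    mse = torch.mean((est - ref) ** 2)
    return alpha * mse + (1.0 - alpha) * sdr_loss(est, ref)

# Maximizing SDR equals minimizing its negative, so gradient descent on
# joint_loss pushes the estimate toward higher SDR and lower MSE at once.
est = torch.randn(2, 16000, requires_grad=True)
ref = torch.randn(2, 16000)
joint_loss(est, ref).backward()
```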

    Table 1 The performance of baseline systems compared to the proposed RNSE approach in the seen-noise condition

    Metric  Model   Seen noise, SNR (dB)
                    −10      −5       0        5
    PESQ    (a)     1.11     1.46     1.79     2.10
            (b)     1.65     1.92     2.24     2.51
            (c)     1.66     1.92     2.23     2.50
            (d)     1.70     2.00     2.25     2.48
            (e)     2.11     2.46     2.73     2.93
    STOI    (a)     0.58     0.68     0.77     0.85
            (b)     0.64     0.72     0.80     0.86
            (c)     0.66     0.74     0.81     0.86
            (d)     0.63     0.72     0.79     0.84
            (e)     0.77     0.85     0.90     0.93
    SDR     (a)     −6.67    −1.72    3.07     7.58
            (b)     −2.24    2.02     6.35     9.76
            (c)     −0.61    3.30     7.25     10.38
            (d)     1.43     5.76     8.67     10.87
            (e)     7.01     9.96     12.16    13.98
    Note: (a) Log-MMSE, (b) CNN-SE, (c) WaveUNet, (d) AET, (e) RNSE

    Table 2 The performance of baseline systems compared to the proposed RNSE approach in the unseen-noise condition

    Metric  Model   Unseen noise, SNR (dB)
                    −10      −5       0        5
    PESQ    (a)     1.33     1.70     2.04     2.35
            (b)     1.48     1.77     2.09     2.39
            (c)     1.49     1.76     2.08     2.36
            (d)     1.58     1.87     2.15     2.39
            (e)     1.80     2.24     2.61     2.88
    STOI    (a)     0.52     0.63     0.74     0.83
            (b)     0.56     0.66     0.76     0.83
            (c)     0.59     0.69     0.78     0.85
            (d)     0.57     0.69     0.77     0.83
            (e)     0.67     0.79     0.87     0.92
    SDR     (a)     −0.17    4.77     8.69     12.03
            (b)     −2.97    1.96     6.34     9.81
            (c)     −1.28    3.25     7.05     10.22
            (d)     1.50     5.65     8.66     10.99
            (e)     4.86     8.45     11.39    13.78
    Note: (a) Log-MMSE, (b) CNN-SE, (c) WaveUNet, (d) AET, (e) RNSE
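
For readers reproducing tables like these, the snippet below sketches how the three reported metrics are commonly computed in Python. It assumes the third-party pesq and pystoi packages (not tooling from the paper), and implements SDR directly from its definition, SDR = 10·log10(‖s‖² / ‖s − ŝ‖²).

```python
# Hedged evaluation sketch; `pesq` and `pystoi` are third-party packages
# assumed here, not tools referenced by the paper itself.
import numpy as np
from pesq import pesq      # pip install pesq
from pystoi import stoi    # pip install pystoi

def sdr_db(ref: np.ndarray, est: np.ndarray, eps: float = 1e-8) -> float:
    """Source-to-distortion ratio in dB: 10*log10(||s||^2 / ||s - s_hat||^2)."""
    return 10.0 * np.log10((np.sum(ref ** 2) + eps) /
                           (np.sum((ref - est) ** 2) + eps))

fs = 16000
clean = np.random.randn(fs)                   # stand-ins for real signals
enhanced = clean + 0.1 * np.random.randn(fs)

print("PESQ:", pesq(fs, clean, enhanced, 'wb'))        # wide-band PESQ
print("STOI:", stoi(clean, enhanced, fs, extended=False))
print("SDR :", sdr_db(clean, enhanced))
```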
Publication History
  • Received: 2019-06-04
  • Accepted: 2020-01-09
  • Available online: 2022-01-04
  • Published in issue: 2022-02-18
