基于组信息蒸馏残差网络的轻量级图像超分辨率重建

王云涛 赵蔺 刘李漫 陶文兵

引用本文: 王云涛, 赵蔺, 刘李漫, 陶文兵. 基于组信息蒸馏残差网络的轻量级图像超分辨率重建. 自动化学报, 2024, 50(10): 1−16 doi: 10.16383/j.aas.c211089
Citation: Wang Yun-Tao, Zhao Lin, Liu Li-Man, Tao Wen-Bing. G-IDRN: A group-information distillation residual network for lightweight image super-resolution. Acta Automatica Sinica, 2024, 50(10): 1−16 doi: 10.16383/j.aas.c211089

基于组信息蒸馏残差网络的轻量级图像超分辨率重建

doi: 10.16383/j.aas.c211089
基金项目: 国家自然科学基金(61976227, 62176096)和湖北省自然科学基金(2019CFB622)资助
详细信息
    作者简介:

    王云涛:中南民族大学生物医学工程学院硕士研究生. 主要研究方向为图像处理, 深度学习和图像超分辨率. E-mail: ytao-wang@scuec.edu.cn

    赵蔺:华中科技大学人工智能与自动化学院博士研究生. 主要研究方向为图像识别, 图像超分辨率和点云实例语义分割. E-mail: linzhao@hust.edu.cn

    刘李漫:中南民族大学生物医学工程学院副教授. 主要研究方向为图像处理, 深度学习和计算机视觉. 本文通信作者. E-mail: limanliu@mail.scuec.edu.cn

    陶文兵:华中科技大学人工智能与自动化学院教授. 主要研究方向为图像分割, 目标识别和3D重建. E-mail: wenbingtao@hust.edu.cn

G-IDRN: A Group-Information Distillation Residual Network for Lightweight Image Super-Resolution

Funds: Supported by the National Natural Science Foundation of China (61976227, 62176096) and the Natural Science Foundation of Hubei Province (2019CFB622)
More Information
    Author Bio:

    WANG Yun-Tao Master student at the School of Biomedical Engineering, South-Central Minzu University. His research interest covers image processing, deep learning, and image super-resolution

    ZHAO Lin Ph.D. candidate at the School of Artificial Intelligence and Automation, Huazhong University of Science and Technology. His research interest covers image recognition, image super-resolution, and point cloud instance semantic segmentation

    LIU Li-Man Associate professor at the School of Biomedical Engineering, South-Central Minzu University. Her research interest covers image processing, deep learning, and computer vision. Corresponding author of this paper

    TAO Wen-Bing Professor at the School of Artificial Intelligence and Automation, Huazhong University of Science and Technology. His research interest covers image segmentation, target recognition, and 3D reconstruction

  • Abstract: Deep-learning-based super-resolution methods have achieved impressive accuracy, but they usually come with large memory consumption and high computational complexity, which makes them difficult to deploy on low-compute or portable devices. To address this problem, a lightweight group-information distillation residual network (G-IDRN) is designed for fast and accurate single-image super-resolution. Specifically, a more effective group-information distillation block (G-IDB) is proposed as the basic feature-extraction block of the network. Dense shortcut connections are then introduced to combine multiple basic blocks into a group-information distillation residual group (G-IDRG), which captures multi-level information and reuses features effectively. In addition, a lightweight asymmetric residual non-local block is proposed to model long-range dependencies and further improve super-resolution performance. Finally, a high-frequency loss function is designed to alleviate the over-smoothing of image details caused by the pixel-wise loss. Extensive experiments show that, compared with other state-of-the-art methods, the proposed algorithm achieves a better trade-off between super-resolution performance and model complexity; on the public benchmark B100, its 4 times super-resolution speed reaches 56 FPS, 15 times faster than the residual attention network.
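The exact form of the high-frequency loss is defined in the body of the paper; purely as an illustration of the idea described above, the following minimal PyTorch-style sketch assumes the high-frequency component is the residual left after a Gaussian low-pass filter, and that the total loss is a weighted sum of a pixel-wise L1 term and a high-frequency L1 term with weights α and β like those ablated in Table 3. All function and parameter names here are hypothetical.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size=5, sigma=1.5, channels=3):
    # Depthwise Gaussian kernel used as a simple low-pass filter
    # (an assumption, not necessarily the filter used in the paper).
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2.0
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = (g / g.sum()).view(-1, 1)
    return (g @ g.t()).expand(channels, 1, size, size).contiguous()

def high_frequency(x, kernel):
    # High-frequency residual = image minus its low-pass (blurred) copy.
    pad = kernel.shape[-1] // 2
    low = F.conv2d(F.pad(x, [pad] * 4, mode="reflect"), kernel, groups=x.shape[1])
    return x - low

def hf_weighted_loss(sr, hr, alpha=0.4, beta=0.6):
    # Weighted sum of a pixel-wise L1 term and a high-frequency L1 term;
    # alpha/beta play the role of the weights compared in Table 3.
    kernel = gaussian_kernel(channels=sr.shape[1]).to(sr.device)
    pixel_term = F.l1_loss(sr, hr)
    hf_term = F.l1_loss(high_frequency(sr, kernel), high_frequency(hr, kernel))
    return alpha * pixel_term + beta * hf_term
```

For example, `loss = hf_weighted_loss(sr_batch, hr_batch, alpha=0.4, beta=0.6)` uses the weighting that gives the best PSNR in Table 3.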
  • 图  1  Urban100数据集中, 图像放大2倍的参数量和峰值信噪比的对比结果

    Fig.  1  Comparison results of the number of parameters and the peak signal-to-noise ratio on Urban100 with the 2 times factor

    图  2  Urban100数据集中, img024放大4倍时不同SR方法的超分重建结果

    Fig.  2  Reconstruction results of various SR methods for 4 times SR of img024 on Urban100

    图  3  组−信息蒸馏残差网络整体架构

    Fig.  3  The architecture of the group-information distillation residual network

    图  4  G-IDB对RFDB的改进图

    Fig.  4  G-IDB improvements to RFDB

    图  5  非对称残差Non-Local模块

    Fig.  5  The asymmetric Non-local residual block

    图  6  Set14数据集中, barbara.png放大3倍的高频提取图像

    Fig.  6  High-frequency extraction images for 3 times barbara.png on Set14

    图  7  HR图像和对应使用低通滤波器提取的低频信息图

    Fig.  7  HR images and the corresponding low-frequency information images extracted by a low-pass filter

    图  8  使用不同损失权重系数的PSNR分数差值对比结果

    Fig.  8  Comparison results of PSNR with different loss weights

    图  9  PSNR和SSIM的差值图

    Fig.  9  Difference results of PSNR and SSIM scores

    图  10  各方法在Urban100上4倍SR的定性比较

    Fig.  10  Qualitative comparisons of each method for 4 times SRs on Urban100

    图  11  在真实图像上的可视化对比结果

    Fig.  11  Visual comparison on a real-world image

    图  12  Urban100上4倍因子时SSIM和参数量的比较结果

    Fig.  12  Comparison results of SSIM and the number of parameters for 4 times factors on Urban100

    表  1  消融实验结果

    Table  1  Ablation experiment results

    | 基本块 | 双路重建策略 | DS连接 | ANRB | PSNR (dB) | 参数量 (K) | 增幅 PSNR (dB) | 增幅 参数量 (K) |
    | RFDB |  |  |  | 37.893 | 534.0 | 0 | 0 |
    |  | ✓ |  |  | 37.931 | 514.2 | ↑ 0.038 | ↓ 19.8 |
    |  |  | ✓ |  | 37.891 | 520.2 | ↓ 0.002 | ↓ 13.8 |
    |  |  |  | ✓ | 37.916 | 534.3 | ↑ 0.023 | ↑ 0.3 |
    |  | ✓ |  | ✓ | 37.934 | 514.4 | ↑ 0.041 | ↓ 19.6 |
    |  | ✓ | ✓ | ✓ | 37.940 | 500.5 | ↑ 0.047 | ↓ 33.5 |
    | G-IDB |  |  |  | 37.955 | 449.4 | ↑ 0.062 | ↓ 84.6 |
    |  | ✓ | ✓ | ✓ | 37.965 | 383.2 | ↑ 0.072 | ↓ 150.8 |
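Table 1 ablates the dual-path reconstruction strategy, the dense shortcut (DS) connections, and the ANRB. The precise G-IDRG wiring is given in Fig. 3 of the paper; purely as an illustrative sketch of the dense-shortcut idea (each block's output is fused with earlier features so multi-level information is reused), the snippet below assumes a simple running-mean shortcut and a hypothetical `make_block` constructor for the distillation block.

```python
import torch
import torch.nn as nn

class DenseShortcutGroup(nn.Module):
    # Illustrative group of distillation blocks joined by dense shortcuts:
    # each block's output is summed with the running mean of all earlier
    # feature maps, so later blocks can reuse multi-level information.
    def __init__(self, channels, num_blocks, make_block):
        super().__init__()
        self.blocks = nn.ModuleList(make_block(channels) for _ in range(num_blocks))
        self.fuse = nn.Conv2d(channels, channels, 1)  # 1x1 fusion after the group

    def forward(self, x):
        features = [x]
        out = x
        for block in self.blocks:
            # Dense shortcut: mean of all earlier features re-enters here.
            shortcut = torch.stack(features, dim=0).mean(dim=0)
            out = block(out) + shortcut
            features.append(out)
        return x + self.fuse(out)  # long residual over the whole group
```

Here `make_block` could be any per-block constructor, e.g. `lambda c: nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.LeakyReLU(0.05))`.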

    表  2  ANRB中不同采样特征点数的实验结果

    Table  2  The experimental results for different sampled feature points in ANRB

    | 特征点数 | Set5 PSNR (dB) | Manga109 PSNR (dB) | 内存 (MB), $128\times 128$像素 | 内存 (MB), $180\times 180$像素 |
    | 无ANRB | 37.888 | 38.396 | 216 | 419 |
    | $S=50$ | 37.893 | 38.439 | 224 | 436 |
    | $S=110$ | 37.895 | 38.443 | 232 | 452 |
    | $S=222$ | 37.861 | 38.325 | 246 | 480 |
    | $S=\infty$ | 37.883 | 内存溢出 | 2266 | 8431 |
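Table 2 varies the number of sampled feature points S in the asymmetric residual non-local block (ANRB). The block's actual design follows Fig. 5 of the paper; the snippet below is only a rough sketch of the underlying idea, assuming keys and values are spatially subsampled to roughly S positions (here with adaptive average pooling) so that the attention matrix shrinks from HW × HW to HW × S. The class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class SampledNonLocal(nn.Module):
    # Illustrative non-local block whose keys/values are pooled down to about
    # `s_points` spatial positions before query-key-value attention.
    def __init__(self, channels, s_points=110, reduction=2):
        super().__init__()
        inner = channels // reduction
        self.query = nn.Conv2d(channels, inner, 1)
        self.key = nn.Conv2d(channels, inner, 1)
        self.value = nn.Conv2d(channels, inner, 1)
        self.out = nn.Conv2d(inner, channels, 1)
        side = max(1, int(s_points ** 0.5))
        self.sample = nn.AdaptiveAvgPool2d(side)  # subsample keys/values

    def forward(self, x):
        b, _, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)                # (B, HW, C')
        k = self.sample(self.key(x)).flatten(2)                     # (B, C', S)
        v = self.sample(self.value(x)).flatten(2).transpose(1, 2)   # (B, S, C')
        attn = torch.softmax(q @ k, dim=-1)                         # (B, HW, S)
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)         # (B, C', H, W)
        return x + self.out(y)                                      # residual output
```

Table 2 suggests a moderate S (around 110) gives the best accuracy/memory trade-off, which is the spirit of subsampling the attention in this way.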

    表  3  使用不同损失权重系数的PSNR对比结果 (dB)

    Table  3  Comparison results of PSNR with different loss weights (dB)

    | 权重系数 | Set5 | Set14 | B100 | Urban100 | Manga109 |
    | $\alpha =1.0$, $\beta =0$ | 37.907 | 33.423 | 32.063 | 31.830 | 38.483 |
    | $\alpha =0.8$, $\beta =0.2$ | 37.900 | 33.406 | 32.071 | 31.850 | 38.476 |
    | $\alpha =0.6$, $\beta =0.4$ | 37.930 | 33.421 | 32.075 | 31.843 | 38.483 |
    | $\alpha =0.4$, $\beta =0.6$ | 37.975 | 33.444 | 32.084 | 31.878 | 38.576 |
    | $\alpha =0.2$, $\beta =0.8$ | 37.901 | 33.467 | 32.084 | 31.860 | 38.462 |

    表  4  不同算法在5个基准数据集上2、3和4倍因子的参数量、PSNR和SSIM定量比较

    Table  4  Parameters, PSNR and SSIM quantitative comparisons of various algorithms for 2, 3, and 4 times factors on the five benchmark datasets

    | 方法 | 放大尺度 | 参数量 (K) | Set5 PSNR (dB) / SSIM | Set14 PSNR (dB) / SSIM | B100 PSNR (dB) / SSIM | Urban100 PSNR (dB) / SSIM | Manga109 PSNR (dB) / SSIM |
    | Bicubic | 2倍 | − | 33.66 / 0.9299 | 30.24 / 0.8688 | 29.56 / 0.8431 | 26.88 / 0.8403 | 30.80 / 0.9339 |
    | SRCNN | 2倍 | 8 | 36.66 / 0.9542 | 32.45 / 0.9067 | 31.36 / 0.8879 | 29.50 / 0.8946 | 35.60 / 0.9663 |
    | DRCN | 2倍 | 1774 | 37.63 / 0.9588 | 33.04 / 0.9118 | 31.85 / 0.8942 | 30.75 / 0.9133 | 37.55 / 0.9732 |
    | LapSRN | 2倍 | 251 | 37.52 / 0.9591 | 32.99 / 0.9124 | 31.80 / 0.8952 | 30.41 / 0.9103 | 37.27 / 0.9740 |
    | DRRN | 2倍 | 298 | 37.74 / 0.9591 | 33.23 / 0.9136 | 32.05 / 0.8973 | 31.23 / 0.9188 | 37.88 / 0.9749 |
    | MemNet | 2倍 | 678 | 37.78 / 0.9597 | 33.28 / 0.9142 | 32.08 / 0.8978 | 31.31 / 0.9195 | 37.72 / 0.9740 |
    | IDN | 2倍 | 553 | 37.83 / 0.9600 | 33.30 / 0.9148 | 32.08 / 0.8985 | 31.27 / 0.9196 | 38.01 / 0.9749 |
    | SRMDNF | 2倍 | 1511 | 37.79 / 0.9601 | 33.32 / 0.9159 | 32.05 / 0.8985 | 31.33 / 0.9204 | 38.07 / 0.9761 |
    | CARN | 2倍 | 1592 | 37.76 / 0.9590 | 33.52 / 0.9166 | 32.09 / 0.8978 | 31.92 / 0.9256 | 38.36 / 0.9765 |
    | SMSR | 2倍 | 985 | 38.00 / 0.9601 | 33.64 / 0.9179 | 32.17 / 0.8993 | 32.19 / 0.9284 | 38.76 / 0.9771 |
    | IMDN | 2倍 | 694 | 38.00 / 0.9605 | 33.63 / 0.9177 | 32.19 / 0.8997 | 32.17 / 0.9282 | 38.88 / 0.9774 |
    | IMDN-JDSR | 2倍 | 694 | 38.03 / 0.9605 | 33.57 / 0.9176 | 32.16 / 0.8995 | 32.09 / 0.9271 | − / − |
    | PAN | 2倍 | 261 | 38.00 / 0.9605 | 33.59 / 0.9181 | 32.18 / 0.8997 | 32.01 / 0.9273 | 38.70 / 0.9773 |
    | RFDN-L | 2倍 | 626 | 38.03 / 0.9606 | 33.65 / 0.9183 | 32.18 / 0.8997 | 32.16 / 0.9282 | 38.88 / 0.9772 |
    | LatticeNet | 2倍 | 759 | 38.03 / 0.9607 | 33.70 / 0.9187 | 32.20 / 0.8999 | 32.25 / 0.9288 | − / − |
    | G-IDRN | 2倍 | 554 | 38.09 / 0.9608 | 33.80 / 0.9203 | 32.42 / 0.9003 | 32.42 / 0.9311 | 38.96 / 0.9773 |
    | Bicubic | 3倍 | − | 30.39 / 0.8682 | 27.55 / 0.7742 | 27.21 / 0.7385 | 24.46 / 0.7349 | 26.95 / 0.8556 |
    | SRCNN | 3倍 | 8 | 32.75 / 0.9090 | 29.30 / 0.8215 | 28.41 / 0.7863 | 26.24 / 0.7989 | 30.48 / 0.9117 |
    | DRCN | 3倍 | 1774 | 33.82 / 0.9226 | 29.76 / 0.8311 | 28.80 / 0.7963 | 27.15 / 0.8276 | 32.24 / 0.9343 |
    | LapSRN | 3倍 | 502 | 33.81 / 0.9220 | 29.79 / 0.8325 | 28.82 / 0.7980 | 27.07 / 0.8275 | 32.21 / 0.9350 |
    | DRRN | 3倍 | 298 | 34.03 / 0.9244 | 29.96 / 0.8349 | 28.95 / 0.8004 | 27.53 / 0.8378 | 32.71 / 0.9379 |
    | MemNet | 3倍 | 678 | 34.09 / 0.9248 | 30.00 / 0.8350 | 28.96 / 0.8001 | 27.56 / 0.8376 | 32.51 / 0.9369 |
    | IDN | 3倍 | 553 | 34.11 / 0.9253 | 29.99 / 0.8354 | 28.95 / 0.8013 | 27.42 / 0.8359 | 32.71 / 0.9381 |
    | SRMDNF | 3倍 | 1528 | 34.12 / 0.9254 | 30.04 / 0.8382 | 28.97 / 0.8025 | 27.57 / 0.8398 | 33.00 / 0.9403 |
    | CARN | 3倍 | 1592 | 34.29 / 0.9255 | 30.29 / 0.8407 | 29.06 / 0.8034 | 28.06 / 0.8493 | 33.50 / 0.9440 |
    | SMSR | 3倍 | 993 | 34.40 / 0.9270 | 30.33 / 0.8412 | 29.10 / 0.8050 | 28.25 / 0.8536 | 33.68 / 0.9445 |
    | IMDN | 3倍 | 703 | 34.36 / 0.9270 | 30.32 / 0.8417 | 29.09 / 0.8047 | 28.16 / 0.8519 | 33.61 / 0.9445 |
    | IMDN-JDSR | 3倍 | 703 | 34.36 / 0.9269 | 30.32 / 0.8413 | 29.08 / 0.8045 | 28.12 / 0.8498 | − / − |
    | PAN | 3倍 | 261 | 34.40 / 0.9271 | 30.36 / 0.8423 | 29.11 / 0.8050 | 28.11 / 0.8511 | 33.61 / 0.9448 |
    | RFDN-L | 3倍 | 633 | 34.39 / 0.9271 | 30.35 / 0.8419 | 29.11 / 0.8054 | 28.24 / 0.8534 | 33.74 / 0.9453 |
    | LatticeNet | 3倍 | 765 | 34.40 / 0.9272 | 30.32 / 0.8416 | 29.10 / 0.8049 | 28.19 / 0.8513 | − / − |
    | G-IDRN | 3倍 | 565 | 34.43 / 0.9277 | 30.41 / 0.8431 | 29.14 / 0.8061 | 28.32 / 0.8552 | 33.79 / 0.9456 |
    | Bicubic | 4倍 | − | 28.42 / 0.8104 | 26.00 / 0.7027 | 25.96 / 0.6675 | 23.14 / 0.6577 | 24.89 / 0.7866 |
    | SRCNN | 4倍 | 8 | 30.48 / 0.8626 | 27.50 / 0.7513 | 26.90 / 0.7101 | 24.52 / 0.7221 | 27.58 / 0.8555 |
    | DRCN | 4倍 | 1774 | 31.53 / 0.8854 | 28.02 / 0.7670 | 27.23 / 0.7233 | 25.14 / 0.7510 | 28.93 / 0.8854 |
    | LapSRN | 4倍 | 502 | 31.54 / 0.8852 | 28.09 / 0.7700 | 27.32 / 0.7275 | 25.21 / 0.7562 | 29.09 / 0.8900 |
    | DRRN | 4倍 | 298 | 31.68 / 0.8888 | 28.21 / 0.7720 | 27.38 / 0.7284 | 25.44 / 0.7638 | 29.45 / 0.8946 |
    | MemNet | 4倍 | 678 | 31.74 / 0.8893 | 28.26 / 0.7723 | 27.40 / 0.7281 | 25.50 / 0.7630 | 29.42 / 0.8942 |
    | IDN | 4倍 | 553 | 31.82 / 0.8903 | 28.25 / 0.7730 | 27.41 / 0.7297 | 25.41 / 0.7632 | 29.41 / 0.8942 |
    | SRMDNF | 4倍 | 1552 | 31.96 / 0.8925 | 28.35 / 0.7787 | 27.49 / 0.7337 | 25.68 / 0.7731 | 30.09 / 0.9024 |
    | CARN | 4倍 | 1592 | 32.13 / 0.8937 | 28.60 / 0.7806 | 27.58 / 0.7349 | 26.07 / 0.7837 | 30.47 / 0.9084 |
    | SMSR | 4倍 | 1006 | 32.13 / 0.8937 | 28.60 / 0.7806 | 27.58 / 0.7349 | 26.11 / 0.7868 | 30.54 / 0.9084 |
    | IMDN | 4倍 | 715 | 32.21 / 0.8948 | 28.58 / 0.7811 | 27.56 / 0.7354 | 26.04 / 0.7838 | 30.45 / 0.9075 |
    | IMDN-JDSR | 4倍 | 715 | 32.17 / 0.8942 | 28.62 / 0.7814 | 27.55 / 0.7350 | 26.06 / 0.7820 | − / − |
    | PAN | 4倍 | 272 | 32.13 / 0.8948 | 28.61 / 0.7822 | 27.59 / 0.7363 | 26.11 / 0.7854 | 30.51 / 0.9095 |
    | RFDN-L | 4倍 | 643 | 32.23 / 0.8953 | 28.59 / 0.7814 | 27.57 / 0.7363 | 26.14 / 0.7871 | 30.61 / 0.9095 |
    | LatticeNet | 4倍 | 777 | 32.18 / 0.8943 | 28.61 / 0.7812 | 27.57 / 0.7355 | 26.14 / 0.7844 | − / − |
    | G-IDRN | 4倍 | 580 | 32.24 / 0.8958 | 28.64 / 0.7824 | 27.61 / 0.7378 | 26.24 / 0.7903 | 30.63 / 0.9096 |

    表  5  Set14上4倍因子时FLOPs、PSNR和SSIM的比较结果

    Table  5  Comparison results of FLOPs, PSNR and SSIM for 4 times factors on Set14

    | 指标 | CARN | IMDN | RFDN-L | G-IDRN |
    | SSIM | 0.7806 | 0.7810 | 0.7814 | 0.7826 |
    | PSNR (dB) | 28.60 | 28.58 | 28.59 | 28.64 |
    | FLOPs (G) | 103.58 | 46.60 | 41.54 | 36.19 |

    表  6  B100上4倍因子时平均运行时间的比较结果

    Table  6  Comparison results of average running time for the 4 times factor on B100

    | 方法 | PSNR (dB) / SSIM | 参数量 (K) | 训练时间 (s) | 推理时间 (s) |
    | EDSR | 27.71 / 0.7420 | 43090 | − | 0.2178 |
    | RCAN | 27.77 / 0.7436 | 15592 | − | 0.2596 |
    | IMDN | 27.56 / 0.7354 | 715 | 5.4 | 0.0217 |
    | RFDN-L | 27.57 / 0.7363 | 633 | 6.1 | 0.0250 |
    | G-IDRN | 27.61 / 0.7378 | 580 | 12.7 | 0.0177 |
    | IDRN | 27.64 / 0.7389 | 2047 | 8.5 | 0.0692 |
  • [1] Isaac J S, Kulkarni R. Super resolution techniques for medical image processing. In: Proceedings of the International Conference on Technologies for Sustainable Development. Mumbai, India: IEEE, 2015. 1−6
    [2] Rasti P, Uiboupin T, Escalera S, Anbarjafari G. Convolutional neural network super resolution for face recognition in surveillance monitoring. In: Proceedings of the International Conference on Articulated Motion and Deformable Objects. Cham, Switzerland: Springer, 2016. 175−184
    [3] Sajjadi M S M, Scholkopf B, Hirsch M. Enhancenet: Single image super-resolution through automated texture synthesis. In: Proceedings of the IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017. 4491−4500
    [4] Tan Y, Cai J, Zhang S, Zhong W, Ye L. Image compression algorithms based on super-resolution reconstruction technology. In: Proceedings of the IEEE 4th International Conference on Image, Vision and Computing. Xiamen, China: IEEE, 2019. 162−166
    [5] Luo Y, Zhou L, Wang S, Wang Z. Video satellite imagery super resolution via convolutional neural networks. IEEE Geoscience and Remote Sensing Letters, 2017, 14(12): 2398-2402 doi: 10.1109/LGRS.2017.2766204
    [6] 杨欣, 周大可, 费树岷. 基于自适应双边全变差的图像超分辨率重建. 计算机研究与发展, 2012, 49(12): 2696

    Yang Xin, Zhou Da-Ke, Fei Shu-Min. A self-adapting bilateral total variation technology for image super-resolution reconstruction. Journal of Computer Research and Development, 2012, 49(12): 2696
    [7] Zhang L, Wu X. An edge-guided image interpolation algorithm via directional filtering and data fusion. IEEE Transactions on Image Processing, 2006, 15(8): 2226-2238 doi: 10.1109/TIP.2006.877407
    [8] 潘宗序, 禹晶, 胡少兴, 孙卫东. 基于多尺度结构自相似性的单幅图像超分辨率算法. 自动化学报, 2014, 40(4): 594-603

    Pan Zong-Xu, Yu Jing, Hu Shao-Xing, Sun Wei-Dong. Single image super resolution based on multi-scale structural self-similarity. Acta Automatica Sinica, 2014, 40(4): 594-603
    [9] 张毅锋, 刘袁, 蒋程, 程旭. 用于超分辨率重建的深度网络递进学习方法. 自动化学报, 2020, 46(2): 274-282

    Zhang Yi-Feng, Liu Yuan, Jiang Cheng, Cheng Xu. A curriculum learning approach for single image super-resolution. Acta Automatica Sinica, 2020, 46(2): 274-282
    [10] Dai T, Cai J, Zhang Y, Xia S T, Zhang L. Second-order attention network for single image super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 11065−11074
    [11] Hui Z, Gao X, Yang Y, Wang X. Lightweight image super-resolution with information multi-distillation network. In: Proceedings of the 27th ACM International Conference on Multimedia. New York, USA: Association for Computing Machinery, 2019. 2024−2032
    [12] Liu J, Tang J, Wu G. Residual feature distillation network for lightweight image super-resolution. In: Proceedings of the 20th European Conference on Computer Vision. Cham, Switzerland: Springer, 2020. 41−55
    [13] 孙超文, 陈晓. 基于多尺度特征融合反投影网络的图像超分辨率重建. 自动化学报, 2021, 47(7): 1689-1700

    Sun Chao-Wen, Chen Xiao. Multiscale feature fusion back-projection network for image super-resolution. Acta Automatica Sinica, 2021, 47(7): 1689-1700
    [14] 孙玉宝, 费选, 韦志辉, 肖亮. 基于前向后向算子分裂的稀疏性正则化图像超分辨率算法. 自动化学报, 2010, 36(9): 1232-1238 doi: 10.3724/SP.J.1004.2010.01232

    Sun Yu-Bao, Fei Xuan, Wei Zhi-Hui, Xiao Liang. Sparsity regularized image super-resolution model via forward-backward operator splitting method. Acta Automatica Sinica, 2010, 36(9): 1232-1238 doi: 10.3724/SP.J.1004.2010.01232
    [15] Dong C, Loy C C, He K, Tang X. Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 38(2): 295-307
    [16] Dong C, Loy C C, Tang X. Accelerating the super-resolution convolutional neural network. In: Proceedings of the 14th European Conference on Computer Vision. Amsterdam, Netherlands: Springer, 2016. 391−407
    [17] Kim J, Lee J K, Lee K M. Accurate image super-resolution using very deep convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 1646−1654
    [18] Kim J, Lee J K, Lee K M. Deeply-recursive convolutional network for image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 1637−1645
    [19] Lim B, Son S, Kim H, Nah S, Mu Lee K. Enhanced deep residual networks for single image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. Honolulu, USA: IEEE, 2017. 136−144
    [20] Zhang Y, Li K, Li K, Wang L, Zhong B, Fu Y. Image super-resolution using very deep residual channel attention networks. In: Proceedings of the 18th European Conference on Computer Vision. Munich, Germany: Springer, 2018. 286−301
    [21] Ahn N, Kang B, Sohn K A. Fast, accurate, and lightweight super-resolution with cascading residual network. In: Proceedings of the 18th European Conference on Computer Vision. Munich, Germany: Springer, 2018. 252−268
    [22] Hui Z, Wang X, Gao X. Fast and accurate single image super-resolution via information distillation network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 723−731
    [23] Zhang C, Benz P, Argaw D M, Lee S, Kim J, Rameau F, et al. ResNet or DenseNet? Introducing dense shortcuts to ResNet. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. Waikoloa, USA: IEEE, 2021. 3550−3559
    [24] Zhu Z, Xu M, Bai S, Huang T, Bai X. Asymmetric non-local neural networks for semantic segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 593−602
    [25] Huang J B, Singh A, Ahuja N. Single image super-resolution from transformed self-exemplars. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE, 2015. 5197−5206
    [26] 安耀祖, 陆耀, 赵红. 一种自适应正则化的图像超分辨率算法. 自动化学报, 2012, 38(4): 601-608 doi: 10.3724/SP.J.1004.2012.00601

    An Yao-Zu, Lu Yao, Zhao Hong. An adaptive-regularized image super-resolution. Acta Automatica Sinica, 2012, 38(4): 601-608 doi: 10.3724/SP.J.1004.2012.00601
    [27] Tai Y, Yang J, Liu X, Xu C. MemNet: A persistent memory network for image restoration. In: Proceedings of the IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017. 4539−4547
    [28] Li Z, Yang J, Liu Z, Jeon G, Wu W. Feedback network for image super-resolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 3867−3876
    [29] Qiu Y, Wang R, Tao D, Cheng J. Embedded block residual network: A recursive restoration model for single-image super-resolution. In: Proceedings of IEEE/CVF International Conference on Computer Vision. Seoul, Korea: IEEE, 2019. 4180−4189
    [30] Chu X, Zhang B, Ma H, Xu R, Li Q. Fast, accurate and lightweight super-resolution with neural architecture search. In: Proceedings of the 25th International Conference on Pattern Recognition. Milan, Italy: IEEE, 2021. 59−64
    [31] Chu X, Zhang B, Xu R. Multi-objective reinforced evolution in mobile neural architecture search. In: Proceedings of the 20th European Conference on Computer Vision. Glasgow, UK: Springer, 2020. 99−113
    [32] Luo X, Xie Y, Zhang Y, Qu Y, Li C, Fu Y. LatticeNet: Towards lightweight image super-resolution with lattice block. In: Proceedings of the 20th European Conference on Computer Vision. Glasgow, UK: Springer, 2020. 23−28
    [33] Hu J, Shen L, Sun G. Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. Salt Lake City, USA: IEEE, 2018. 7132−7141
    [34] Wang X, Girshick R, Gupta A, He K. Non-local neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. Salt Lake City, USA: IEEE, 2018. 7794−7803
    [35] Liu D, Wen B, Fan Y, Loy C C, Huang T S. Non-local recurrent network for image restoration. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Montréal, Canada: MIT Press, 2018. 1680–1689
    [36] Mei Y, Fan Y, Zhou Y, Huang L, Huang T S, Shi H. Image super-resolution with cross-scale non-local attention and exhaustive self-exemplars mining. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020. 5690−5699
    [37] Niu B, Wen W, Ren W, Zhang X, Yang L, Wang S, et al. Single image super-resolution via a holistic attention network. In: Proceedings of the 20th European Conference on Computer Vision. Glasgow, UK: Springer, 2020. 191−207
    [38] Johnson J, Alahi A, Li F F. Perceptual losses for real-time style transfer and super-resolution. In: Proceedings of the 14th European Conference on Computer Vision. Amsterdam, Netherlands: Springer, 2016. 694−711
    [39] Ledig C, Theis L, Huszár F, Caballero J, Cunningham A, Acosta A, et al. Photo-realistic single image super-resolution using a generative adversarial network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017. 4681−4690
    [40] Yuan Y, Liu S, Zhang J, Zhang Y, Dong C, Lin L. Unsupervised image super-resolution using cycle-in-cycle generative adversarial networks. In: Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition Workshops. Salt Lake City, USA: IEEE, 2018. 701−710
    [41] Yu J, Fan Y, Huang T. Wide activation for efficient image and video super-resolution. In: Proceedings of the 30th British Machine Vision Conference. Cardiff, UK: BMVA Press, 2020. 1−13
    [42] Shi W, Caballero J, Huszár F, Totz J, Aitken A P, Bishop R, et al. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 1874−1883
    [43] Howard A G, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, et al. MobileNets: Efficient convolutional neural networks for mobile vision applications [Online], available: https://arxiv.org/abs/1704.04861, April 17, 2017
    [44] Xie S, Girshick R, Dollár P, Tu Z, He K. Aggregated residual transformations for deep neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017. 1492−1500
    [45] Szegedy C, Ioffe S, Vanhoucke V, Alemi A A. Inception-v4, inception-resnet and the impact of residual connections on learning. In: Proceedings of the 31st AAAI Conference on Artificial Intelligence. San Francisco, USA: AAAI Press, 2017. 4278–4284
    [46] Huang G, Liu Z, Van Der Maaten L, Weinberger K Q. Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017. 4700−4708
    [47] Zhao H, Shi J, Qi X, Wang X, Jia J. Pyramid scene parsing network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017. 2881−2890
    [48] Timofte R, Agustsson E, Van Gool L, Yang M H, Zhang L. Ntire 2017 challenge on single image super-resolution: Methods and results. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. Honolulu, USA: IEEE, 2017. 114−125
    [49] Bevilacqua M, Roumy A, Guillemot C, Morel M L A. Lowcomplexity single-image super-resolution based on nonnegative neighbor embedding. In: Proceedings of the British Machine Vision Conference. Surrey, UK: BMVA Press, 2012. 1−10
    [50] Zeyde R, Elad M, Protter M. On single image scale-up using sparse-representations. In: Proceedings of the International Conference on Curves and Surfaces. Berlin, Heidelberg: Springer, 2010. 711−730
    [51] Arbelaez P, Maire M, Fowlkes C, Malik J. Contour detection and hierarchical image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 33(5): 898-916
    [52] Matsui Y, Ito K, Aramaki Y, Fujimoto A, Ogawa T, Yamasaki T, et al. Sketch-based manga retrieval using manga109 dataset. Multimedia Tools and Applications, 2017, 76(20): 21811-21838 doi: 10.1007/s11042-016-4020-z
    [53] Gao X, Lu W, Tao D, Li X. Image quality assessment based on multiscale geometric analysis. IEEE Transactions on Image Processing, 2009, 18(7): 1409-1423 doi: 10.1109/TIP.2009.2018014
    [54] Wang Z, Bovik A C, Sheikh H R, Simoncelli E P. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 2004, 13(4): 600-612 doi: 10.1109/TIP.2003.819861
    [55] Chollet F. Xception: Deep learning with depth-wise separable convolutions. In: Proceedings of the IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017. 1251−1258
    [56] Lai W S, Huang J B, Ahuja N, Yang M H. Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017. 624−632
    [57] Tai Y, Yang J, Liu X. Image super-resolution via deep recursive residual network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017. 3147−3155
    [58] Zhang K, Zuo W, Zhang L. Learning a single convolutional super-resolution network for multiple degradations. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. Salt Lake City, USA: IEEE, 2018. 3262−3271
    [59] Wang L, Dong X, Wang Y, Ying X, Lin Z, An W, et al. Exploring sparsity in image super-resolution for efficient inference. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville, USA: IEEE, 2021. 4917−4926
    [60] Luo X, Liang Q, Liu D, Qu Y. Boosting lightweight single image super-resolution via joint-distillation. In: Proceedings of the 29th ACM International Conference on Multimedia. Virtual Event: Association for Computing Machinery, 2021. 1535−1543
    [61] Zhao H, Kong X, He J, Qiao Y, Dong C. Efficient image super-resolution using pixel attention. In: Proceedings of the European Conference on Computer Vision. Cham, Switzerland: Springer, 2020. 56−72
    [62] Cai J, Zeng H, Yong H, Cao Z, Zhang L. Toward real-world single image super-resolution: A new benchmark and a new model. In: Proceedings of IEEE/CVF International Conference on Computer Vision. Seoul, South Korea: IEEE, 2019. 3086−3095
Publication history
  • Received: 2021-11-17
  • Accepted: 2022-06-17
  • Published online: 2022-07-30
