

A Review of Deep Domain Adaptation: General Situation and Complex Situation

Fan Cang-Ning, Liu Peng, Xiao Ting, Zhao Wei, Tang Xiang-Long

Citation: Fan Cang-Ning, Liu Peng, Xiao Ting, Zhao Wei, Tang Xiang-Long. A review of deep domain adaptation: general situation and complex situation. Acta Automatica Sinica, 2020, 46(x): 1−34. doi: 10.16383/j.aas.c200238

    Author biographies:

    Fan Cang-Ning: Ph.D. candidate at the Pattern Recognition and Intelligent System Research Center, Harbin Institute of Technology. Received the bachelor's degree in 2016 and the master's degree in 2018, both from Harbin Institute of Technology. Research interests include transfer learning and machine learning. E-mail: fancangning@gmail.com

    Liu Peng: Associate professor at the School of Computer Science and Technology, Harbin Institute of Technology. Received the Ph.D. degree in microelectronics and solid-state electronics from Harbin Institute of Technology in 2007. Research interests include image processing, video analysis, pattern recognition, and VLSI design. E-mail: pengliu@hit.edu.cn

    Xiao Ting: Ph.D. candidate at the School of Computer Science and Technology, Harbin Institute of Technology. Received the master's degree in computer applications from Harbin Institute of Technology in 2016. Research interests include image processing, computer vision, and machine learning. E-mail: xiaoting1@hit.edu.cn

    Zhao Wei: Associate professor at the School of Computer Science and Technology, Harbin Institute of Technology. Winner of the First Prize of the Heilongjiang Province Science and Technology Progress Award. Research interests include pattern recognition, machine learning, and computer vision. E-mail: zhaowei@hit.edu.cn

    Tang Xiang-Long: Professor at the School of Computer Science and Technology, Harbin Institute of Technology. Received the Ph.D. degree in computer application technology from Harbin Institute of Technology in 1995. Research interests include pattern recognition, image processing, and machine learning. E-mail: tangxl@hit.edu.cn

A Review of Deep Domain Adaptation: General Situation and Complex Situation

Funds: Supported by the National Natural Science Foundation of P. R. China (61671175), the Science and Technology Planning Project of Sichuan Province (2019YFS0069), and the Space Intelligent Control Technology Key Laboratory Foundation Project (ZDSXS-2018-02)
  • Abstract: The massive amounts of data generated in the information age have allowed machine learning to be applied successfully in many fields. Most machine learning techniques rest on the assumption that the training set and the test set are independent and identically distributed (i.i.d.), yet in practical applications this assumption is hard to satisfy. Domain adaptation is a machine learning technique for the setting where the training set and the test set are not i.i.d. Domain adaptation in the general situation applies only when the source and target domains share both the feature space and the label space; in reality, this condition is also hard to meet. To broaden the applicability of domain adaptation, domain adaptation in complex situations has gradually become a research hotspot, with label-space inconsistency and complex target domains being the emerging directions of recent years. With the rise of deep learning, deep domain adaptation has become the mainstream approach in domain adaptation research. This paper surveys the progress of deep domain adaptation in both the general and the complex situations, summarizes the shortcomings of existing methods, and forecasts future trends. The paper first introduces concepts related to transfer learning, then reviews domain adaptation in the general and complex situations, the applications of domain adaptation, and the experimental results of domain adaptation methods, and finally discusses future directions and concludes the paper.
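The failure of the i.i.d. assumption described in the abstract can be made concrete with a toy experiment (our own illustration, not from the paper): a threshold classifier fit on a source domain loses accuracy once the target distribution shifts.

```python
# Toy covariate-shift demo: a classifier trained on the source domain
# degrades on a shifted target domain. All names here are hypothetical.
import random

random.seed(0)

def sample(n, mu0, mu1):
    """n points per class from two 1-D Gaussians; returns (x, y) pairs."""
    data = [(random.gauss(mu0, 1.0), 0) for _ in range(n)]
    data += [(random.gauss(mu1, 1.0), 1) for _ in range(n)]
    return data

source = sample(500, 0.0, 2.0)   # source domain
target = sample(500, 1.5, 3.5)   # target domain: every class shifted by +1.5

# "Train": pick the midpoint threshold separating the two source classes.
threshold = (0.0 + 2.0) / 2.0

def accuracy(data, thr):
    return sum((x > thr) == (y == 1) for x, y in data) / len(data)

acc_src = accuracy(source, threshold)
acc_tgt = accuracy(target, threshold)
print(f"source accuracy: {acc_src:.2f}")   # high
print(f"target accuracy: {acc_tgt:.2f}")   # noticeably lower
```

Domain adaptation methods aim to close exactly this gap without target labels.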
  • Fig.  1  The structure of this article

    Fig.  2  Some deep domain adaptation methods based on MMD: (a) deep adaptation network (DAN); (b) joint adaptation network (JAN); (c) residual transfer network (RTN)
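As a rough sketch of the statistic these MMD-based methods minimize (our own pure-Python rendering, not the papers' code), the squared maximum mean discrepancy with an RBF kernel compares the kernel mean embeddings of the source and target feature sets:

```python
# Biased empirical estimate of MMD^2 with a Gaussian (RBF) kernel.
# Function names are our own; real implementations batch this on a GPU.
import math, random

def rbf(x, y, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def mmd2(xs, ys, gamma=0.5):
    """MMD^2 estimate between two samples of feature vectors."""
    kxx = sum(rbf(a, b, gamma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(rbf(a, b, gamma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

random.seed(1)
src      = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100)]
tgt_far  = [(random.gauss(2, 1), random.gauss(2, 1)) for _ in range(100)]
tgt_near = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100)]

print(mmd2(src, tgt_far))    # clearly positive: distributions differ
print(mmd2(src, tgt_near))   # close to zero: distributions match
```

DAN adds this term, computed on deep features, to the classification loss so that the feature extractor is pushed toward domain-invariant representations.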

    Fig.  3  The network structure of the domain-adversarial neural network (DANN)
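DANN's key component is the gradient reversal layer (GRL): identity in the forward pass, multiplication by −λ in the backward pass. The following hand-computed scalar example (hypothetical names, not the paper's implementation) shows the resulting sign flip — the domain classifier descends the domain loss while the feature extractor ascends it:

```python
# Scalar sketch of gradient reversal in DANN. One source sample, a linear
# "feature extractor" f = w_f * x and a logistic domain classifier.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, y_domain = 1.0, 1.0          # this sample carries domain label 1 (source)
w_f, w_d, lam = 0.5, 0.8, 1.0   # extractor weight, classifier weight, GRL lambda

f = w_f * x                     # feature extractor (forward: GRL is identity)
d = sigmoid(w_d * f)            # domain classifier's probability of "source"
domain_loss = -math.log(d)      # binary cross-entropy for domain label 1

# Backward pass by hand:
dL_dlogit = d - y_domain        # d(BCE)/d(logit)
grad_w_d = dL_dlogit * f        # domain classifier: ordinary descent direction
grad_f = dL_dlogit * w_d        # gradient flowing back toward the feature
grad_w_f = (-lam * grad_f) * x  # GRL flips the sign before the extractor

# The extractor moves to *increase* the domain loss (confusing the domain
# classifier), while the domain classifier moves to decrease it.
print(round(grad_w_d, 4), round(grad_w_f, 4))
```

In a framework such as PyTorch, the same effect is obtained with a custom autograd function whose backward returns the negated gradient.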

    Fig.  4  Conditional domain adversarial network (CDAN)

    Fig.  5  Domain separation network (DSN)

    Fig.  6  The network structure used in [92]

    Fig.  7  The training process of CycleGAN: (a) source images are transformed to the target domain through translation network G, and target images are transformed to the source domain through translation network F; (b) the cycle-consistency loss is computed in the source domain; (c) the cycle-consistency loss is computed in the target domain
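The cycle-consistency loss of (b) and (c) can be sketched with toy linear translators standing in for the networks G and F (our own illustration, not the paper's code): when F inverts G, the loss vanishes.

```python
# Toy cycle-consistency loss: penalize ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1.
# G and F are stand-in linear maps, not CycleGAN's convolutional generators.
def G(x):            # pretend source->target translation
    return [2.0 * v + 1.0 for v in x]

def F(y):            # pretend target->source translation (exact inverse of G)
    return [(v - 1.0) / 2.0 for v in y]

def l1(a, b):
    return sum(abs(u - v) for u, v in zip(a, b))

x = [0.3, -1.2, 2.5]     # a "source image" flattened to a vector
y = [1.0, 0.0, 4.0]      # a "target image"

cycle_loss = l1(F(G(x)), x) + l1(G(F(y)), y)
print(cycle_loss)        # ~0: F and G are exact inverses here
```

During CycleGAN training this term is added to the two adversarial losses, discouraging G and F from discarding image content while changing domain style.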

    Fig.  8  The network structure of the selective adversarial network (SAN)

    Fig.  9  The network structure in [111]

    Fig.  10  The training process of the universal adaptation network (UAN)

    Fig.  11  The network structure in [113]

    Fig.  12  The method of [112] uses meta-learning to extract features with excellent generalization performance

    Table  1  Four kinds of methods for deep domain adaptation

    | Method family | Characteristics | Representative method |
    |---|---|---|
    | Discrepancy-based | Minimizes the distribution discrepancy between the two domains to extract domain-invariant features | Deep adaptation network (DAN) [20] |
    | Adversarial | Aligns distributions through adversarial learning | Multi-adversarial domain adaptation (MADA) [2] |
    | Reconstruction-based | Suppresses information loss / disentangles features | Domain separation network (DSN) [5] |
    | Sample-generation | Synthesizes fake samples that follow the target distribution to assist training | Cycle-consistent generative adversarial network (CycleGAN) [21] |

    Table  2  Domain adaptation with inconsistent label spaces

    | Setting | Characteristics | Representative method |
    |---|---|---|
    | Partial domain adaptation | The target label space is a subset of the source label space | Selective adversarial network (SAN) [1] |
    | Open set domain adaptation | The source label space is a subset of the target label space | Separate to adapt (STA) [105] |
    | Universal domain adaptation | The set relation between the source and target label spaces is unknown | Universal adaptation network (UAN) [106] |
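Partial-DA methods such as SAN [1] and PADA [107] handle the extra source-only classes by weighting them down. A minimal sketch of the idea (toy numbers, hypothetical function name): average the target samples' predicted class probabilities and use the normalized averages as class weights, so classes absent from the target get weight near zero.

```python
# Class weighting for partial domain adaptation: estimate how much each
# source class appears in the target from the target's soft predictions.
def class_weights(target_probs):
    k = len(target_probs[0])
    avg = [sum(p[j] for p in target_probs) / len(target_probs) for j in range(k)]
    m = max(avg)
    return [a / m for a in avg]          # normalize so the largest weight is 1

# Three target samples, four source classes; class 3 is never predicted.
target_probs = [
    [0.70, 0.20, 0.10, 0.00],
    [0.10, 0.80, 0.10, 0.00],
    [0.25, 0.25, 0.50, 0.00],
]
w = class_weights(target_probs)
print([round(v, 2) for v in w])          # class 3 gets weight 0.0
```

These weights then scale the source classification and adversarial losses per class, so the outlier source classes no longer cause negative transfer.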

    Table  3  Domain adaptation in the case of a complex target domain

    | Setting | Characteristics | Representative method |
    |---|---|---|
    | Multi-target domain adaptation | Target samples come from multiple sub-target domains | Multi-target domain adaptation network (MTDA-ITA) [3] |
    | Domain generalization | Target samples are unavailable; the target domain is unknown | Feature-Critic Network [112] |

    Table  4  Accuracy of each deep domain adaptation method on the Office31 dataset (%)

    | Method | $A\to W$ | $D\to W$ | $W\to D$ | $A\to D$ | $D\to A$ | $W\to A$ | Avg. |
    |---|---|---|---|---|---|---|---|
    | ResNet50 [26] | 68.4 | 96.7 | 99.3 | 68.9 | 62.5 | 60.7 | 76.1 |
    | DAN [20] | 80.5 | 97.1 | 99.6 | 78.6 | 63.6 | 62.8 | 80.4 |
    | DCORAL [42] | 79.0 | 98.0 | 100.0 | 82.7 | 65.3 | 64.5 | 81.6 |
    | RTN [39] | 84.5 | 96.8 | 99.4 | 77.5 | 66.2 | 64.8 | 81.6 |
    | DANN [27] | 82.0 | 96.9 | 99.1 | 79.7 | 68.2 | 67.4 | 82.2 |
    | ADDA [75] | 86.2 | 96.2 | 98.4 | 77.8 | 69.5 | 68.9 | 82.9 |
    | JAN [23] | 85.4 | 97.4 | 99.8 | 84.7 | 68.6 | 70.0 | 84.3 |
    | MADA [2] | 90.1 | 97.4 | 99.6 | 87.8 | 70.3 | 66.4 | 85.2 |
    | GTA [99] | 89.5 | 97.9 | 99.8 | 87.7 | 72.8 | 71.4 | 86.5 |
    | CDAN [22] | 94.1 | 98.6 | 100.0 | 92.9 | 71.0 | 69.3 | 87.7 |

    Table  5  Accuracy of each deep domain adaptation method on the OfficeHome dataset (%)

    | Method | $A\to C$ | $A\to P$ | $A\to R$ | $C\to A$ | $C\to P$ | $C\to R$ |
    |---|---|---|---|---|---|---|
    | ResNet50 [26] | 34.9 | 50.0 | 58.0 | 37.4 | 41.9 | 46.2 |
    | DAN [20] | 43.6 | 57.0 | 67.9 | 45.8 | 56.5 | 60.4 |
    | DANN [27] | 45.6 | 59.3 | 70.1 | 47.0 | 58.5 | 60.9 |
    | JAN [23] | 45.9 | 61.2 | 68.9 | 50.4 | 59.7 | 61.0 |
    | CDAN [22] | 50.7 | 70.6 | 76.0 | 57.6 | 70.0 | 70.0 |

    | Method | $P\to A$ | $P\to C$ | $P\to R$ | $R\to A$ | $R\to C$ | $R\to P$ | Avg. |
    |---|---|---|---|---|---|---|---|
    | ResNet50 [26] | 38.5 | 31.2 | 60.4 | 53.9 | 41.2 | 59.9 | 46.1 |
    | DAN [20] | 44.0 | 43.6 | 67.7 | 63.1 | 51.5 | 74.3 | 56.3 |
    | DANN [27] | 46.1 | 43.7 | 68.5 | 63.2 | 51.8 | 76.8 | 57.6 |
    | JAN [23] | 45.8 | 43.4 | 70.3 | 63.9 | 52.4 | 76.8 | 58.3 |
    | CDAN [22] | 57.4 | 50.9 | 77.3 | 70.9 | 56.7 | 81.6 | 65.8 |

    Table  6  Accuracy of each partial domain adaptation method on the Office31 dataset (%)

    | Method | $A\to W$ | $D\to W$ | $W\to D$ | $A\to D$ | $D\to A$ | $W\to A$ | Avg. |
    |---|---|---|---|---|---|---|---|
    | ResNet50 [26] | 75.5 | 96.2 | 98.0 | 83.4 | 83.9 | 84.9 | 87.0 |
    | DAN [20] | 59.3 | 73.9 | 90.4 | 61.7 | 74.9 | 67.6 | 71.3 |
    | DANN [27] | 73.5 | 96.2 | 98.7 | 81.5 | 82.7 | 86.1 | 86.5 |
    | IWAN [109] | 89.1 | 99.3 | 99.3 | 90.4 | 95.6 | 94.2 | 94.6 |
    | SAN [1] | 93.9 | 99.3 | 99.3 | 94.2 | 94.1 | 88.7 | 94.9 |
    | PADA [107] | 86.5 | 99.3 | 100.0 | 82.1 | 92.6 | 95.4 | 92.6 |
    | ETN [108] | 94.5 | 100.0 | 100.0 | 95.0 | 96.2 | 94.6 | 96.7 |

    Table  7  Accuracy of each open set domain adaptation method on the Office31 dataset (%), reported as OS / OS*

    | Method | $A\to W$ | $A\to D$ | $D\to W$ | $W\to D$ | $D\to A$ | $W\to A$ | Avg. |
    |---|---|---|---|---|---|---|---|
    | ResNet50 [26] | 82.5 / 82.7 | 85.2 / 85.5 | 94.1 / 94.3 | 96.6 / 97.0 | 71.6 / 71.5 | 75.5 / 75.2 | 84.2 / 84.4 |
    | RTN [39] | 85.6 / 88.1 | 89.5 / 90.1 | 94.8 / 96.2 | 97.1 / 98.7 | 72.3 / 72.8 | 73.5 / 73.9 | 85.4 / 86.8 |
    | DANN [27] | 85.3 / 87.7 | 86.5 / 87.7 | 97.5 / 98.3 | 99.5 / 100.0 | 75.7 / 76.2 | 74.9 / 75.6 | 86.6 / 87.6 |
    | OpenMax [145] | 87.4 / 87.5 | 87.1 / 88.4 | 96.1 / 96.2 | 98.4 / 98.5 | 83.4 / 82.1 | 82.8 / 82.8 | 89.0 / 89.3 |
    | ATI-$\lambda$ [110] | 87.4 / 88.9 | 84.3 / 86.6 | 93.6 / 95.3 | 96.5 / 98.7 | 78.0 / 79.6 | 80.4 / 81.4 | 86.7 / 88.4 |
    | OSBP [111] | 86.5 / 87.6 | 88.6 / 89.2 | 97.0 / 96.5 | 97.9 / 98.7 | 88.9 / 90.6 | 85.8 / 84.9 | 90.8 / 91.3 |
    | STA [105] | 89.5 / 92.1 | 93.7 / 96.1 | 97.5 / 96.5 | 99.5 / 99.6 | 89.1 / 93.5 | 87.9 / 87.4 | 92.9 / 94.1 |
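A note on the two metrics in Table 7: under the usual Office31 open-set protocol, OS* is the mean per-class accuracy over the K = 10 known classes, while OS additionally averages in the unknown class, i.e. OS = (K·OS* + acc_unknown)/(K + 1). Assuming that convention, the unknown-class accuracy can be backed out (function name is ours):

```python
# Recover the unknown-class accuracy from the OS and OS* scores (in %),
# assuming OS = (k * OS* + acc_unknown) / (k + 1) with k known classes.
def unknown_accuracy(os_all, os_known, k=10):
    return (k + 1) * os_all - k * os_known

# STA's A->W entry from Table 7: OS = 89.5, OS* = 92.1.
print(round(unknown_accuracy(89.5, 92.1), 1))
```

Rows where OS exceeds OS* therefore indicate methods that recognize unknown-class samples better than the known classes on that transfer task.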

    Table  8  Accuracy of universal domain adaptation and other methods on the OfficeHome dataset (%)

    | Method | $A\to C$ | $A\to P$ | $A\to R$ | $C\to A$ | $C\to P$ | $C\to R$ |
    |---|---|---|---|---|---|---|
    | ResNet [26] | 59.4 | 76.6 | 87.5 | 69.9 | 71.1 | 81.7 |
    | DANN [27] | 56.2 | 81.7 | 86.9 | 68.7 | 73.4 | 83.8 |
    | RTN [39] | 50.5 | 77.8 | 86.9 | 65.1 | 73.4 | 85.1 |
    | IWAN [109] | 52.6 | 81.4 | 86.5 | 70.6 | 71.0 | 85.3 |
    | PADA [107] | 39.6 | 69.4 | 76.3 | 62.6 | 67.4 | 77.5 |
    | ATI-$\lambda$ [110] | 52.9 | 80.4 | 85.9 | 71.1 | 72.4 | 84.4 |
    | OSBP [111] | 47.8 | 60.9 | 76.8 | 59.2 | 61.6 | 74.3 |
    | UAN [106] | 63.0 | 82.8 | 87.9 | 76.9 | 78.7 | 85.4 |

    | Method | $P\to A$ | $P\to C$ | $P\to R$ | $R\to A$ | $R\to C$ | $R\to P$ | Avg. |
    |---|---|---|---|---|---|---|---|
    | ResNet [26] | 73.7 | 56.3 | 86.1 | 78.7 | 59.2 | 78.6 | 73.2 |
    | DANN [27] | 69.9 | 56.8 | 85.8 | 79.4 | 57.3 | 78.3 | 73.2 |
    | RTN [39] | 67.9 | 45.2 | 85.5 | 79.2 | 55.6 | 78.8 | 70.9 |
    | IWAN [109] | 74.9 | 57.3 | 85.1 | 77.5 | 59.7 | 78.9 | 73.4 |
    | PADA [107] | 48.4 | 35.8 | 79.6 | 75.9 | 44.5 | 78.1 | 62.9 |
    | ATI-$\lambda$ [110] | 74.3 | 57.8 | 85.6 | 76.1 | 60.2 | 78.4 | 73.3 |
    | OSBP [111] | 61.7 | 44.5 | 79.3 | 70.6 | 55.0 | 75.2 | 63.9 |
    | UAN [106] | 78.2 | 58.6 | 86.8 | 83.4 | 63.2 | 79.4 | 77.0 |

    Table  9  Accuracy of AMEAN and other methods on the Office31 dataset (%)

    | Method | $A\to D,W$ | $D\to A,W$ | $W\to A,D$ | Avg. |
    |---|---|---|---|---|
    | ResNet [26] | 68.6 | 70.0 | 66.5 | 68.4 |
    | DAN [20] | 78.0 | 64.4 | 66.7 | 69.7 |
    | RTN [39] | 84.3 | 67.5 | 64.8 | 72.2 |
    | JAN [23] | 84.2 | 74.4 | 72.0 | 76.9 |
    | AMEAN [113] | 90.1 | 77.0 | 73.4 | 80.2 |

    Table  10  Accuracy of DADA and other methods on the Office31 dataset (%)

    | Method | $A\to C,D,W$ | $C\to A,D,W$ | $D\to A,C,W$ | $W\to A,C,D$ | Avg. |
    |---|---|---|---|---|---|
    | ResNet [26] | 90.5 | 94.3 | 88.7 | 82.5 | 89.0 |
    | MCD [28] | 91.7 | 95.3 | 89.5 | 84.3 | 90.2 |
    | DANN [27] | 91.5 | 94.3 | 90.5 | 86.3 | 90.6 |
    | DADA [4] | 92.0 | 95.1 | 91.3 | 93.1 | 92.9 |

    Table  11  Accuracy of domain generalization methods on the MNIST dataset (%)

    | Source domains | Target domain | DAE | DICA | D-MTAE | MMD-AAE |
    |---|---|---|---|---|---|
    | $M_{15,30,45,60,75}$ | $M_{0}$ | 76.9 | 70.3 | 82.5 | 83.7 |
    | $M_{0,30,45,60,75}$ | $M_{15}$ | 93.2 | 88.9 | 96.3 | 96.9 |
    | $M_{0,15,45,60,75}$ | $M_{30}$ | 91.3 | 90.4 | 93.4 | 95.7 |
    | $M_{0,15,30,60,75}$ | $M_{45}$ | 81.1 | 80.1 | 78.6 | 85.2 |
    | $M_{0,15,30,45,75}$ | $M_{60}$ | 92.8 | 88.5 | 94.2 | 95.9 |
    | $M_{0,15,30,45,60}$ | $M_{75}$ | 76.5 | 71.3 | 80.5 | 81.2 |
    | Average | | 85.3 | 81.6 | 87.6 | 89.8 |
  • [1] Cao Z, Long M, Wang J, Jordan M I. Partial transfer learning with selective adversarial networks. In: Proceedings of Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA: IEEE, 2018.2724−2732.
    [2] Pei Z, Cao Z, Long M, Wang J. Multi-adversarial domain adaptation. In: Proceedings of the Thirty-Second Conference on Artificial Intelligence, New Orleans, Louisiana, USA: AAAI, 2018.3934−3941.
    [3] Gholami B, Sahu P, Rudovic O, Bousmalis K, Pavlovic V. Unsupervised multi-target domain adaptation: an information theoretic approach. CoRR, 2018: abs/1810.1
    [4] Peng X, Huang Z, Sun X, Saenko K. Domain agnostic learning with disentangled representations. In: Proceedings of the 36th International Conference on Machine Learning, Long Beach, California, USA: ACM, 2019.5102−5112.
    [5] Bousmalis K, Trigeorgis G, Silberman N, Krishnan D, Erhan D. Domain separation networks. In: Proceedings of Annual Conference on Neural Information Processing Systems. Barcelona, Spain: ACM, 2016.343−351.
    [6] Pan S J, Yang Q. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 2010, 22(10): 1345−1359 doi: 10.1109/TKDE.2009.191
    [7] Shao L, Zhu F, Li X. Transfer learning for visual categorization: a survey. IEEE Transactions on Neural Networks and Learning Systems, 2015, 26(5): 1019−1034 doi: 10.1109/TNNLS.2014.2330900
    [8] Liang H, Fu W, Yi F. A survey of recent advances in transfer learning. In: Proceedings of 19th International Conference on Communication Technology. Xian, China, 2019.1516−1523.
    [9] Chu C, Wang R. A survey of domain adaptation for neural machine translation. In: Proceedings of the 27th International Conference on Computational Linguistics, Santa Fe, New Mexico, USA: ACM, 2018.1304−1319.
    [10] Ramponi A, Plank B. Neural unsupervised domain adaptation in nlp - a survey. CoRR, 2020: abs/2006.0
    [11] Day O, Khoshgoftaar T M. A survey on heterogeneous transfer learning. Journal of Big Data, 2017, 4: 29 doi: 10.1186/s40537-017-0089-0
    [12] Patel V M, Gopalan R, Li R, Chellappa R. Visual domain adaptation: a survey of recent advances. IEEE Signal Processing Magazine, 2015, 32(3): 53−69 doi: 10.1109/MSP.2014.2347059
    [13] Weiss K R, Khoshgoftaar T M, Wang D. A survey of transfer learning. Journal of Big Data, 2016, 3: 9 doi: 10.1186/s40537-016-0043-6
    [14] Csurka G. Domain adaptation for visual applications: a comprehensive survey. CoRR, 2017: abs/1702.0
    [15] Wang M, Deng W. Deep visual domain adaptation: a survey. Neurocomputing, 2018, 312: 135−153 doi: 10.1016/j.neucom.2018.05.083
    [16] Tan C, Sun F, Kong T, Zhang W, Yang C, Liu C. A survey on deep transfer learning. In: Proceedings of the 27th International Conference on Artificial Neural Networks, Rhodes, Greece: ACM, 2018.270−279.
    [17] Wilson G, Cook D J. A survey of unsupervised deep domain adaptation. ACM Transactions on Intelligent Systems and Technology, 2020, 11(5): 1−46
    [18] Zhuang F, Qi Z, Duan K, et al. A comprehensive survey on transfer learning. arXiv preprint arXiv: 1911.02685, 2019.
    [19] Ben-David S, Blitzer J, Crammer K, Kulesza A, Pereira F, Vaughan J W. A theory of learning from different domains. Machine Learning, 2010, 79(1–2): 151−175
    [20] Long M, Cao Y, Wang J, Jordan M I. Learning transferable features with deep adaptation networks. In: Proceedings of the 32nd International Conference on Machine Learning. Lille, France: JMLR, 2015.97−105.
    [21] Zhu J Y, Park T, Isola P, Efros A A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of International Conference on Computer Vision, Venice, Italy: IEEE, 2017.2242−2251.
    [22] Long M, Cao Z, Wang J, Jordan M I. Conditional adversarial domain adaptation. In: Proceedings of Annual Conference on Neural Information Processing Systems, Montréal, Canada: ACM, 2018.1647−1657.
    [23] Long M, Zhu H, Wang J, Jordan M I. Deep transfer learning with joint adaptation networks. In: Proceedings of the 34th International Conference on Machine Learning. Sydney, NSW, Australia: JMLR, 2017.2208−2217.
    [24] Zhao H, Tachet des Combes R, Zhang K, Gordon G J. On learning invariant representations for domain adaptation. In: Proceedings of the 36th International Conference on Machine Learning. Long Beach, California, USA: JMLR, 2019.7523−7532.
    [25] Liu H, Long M, Wang J, Jordan M I. Transferable adversarial training: a general approach to adapting deep classifiers. In: Proceedings of the 36th International Conference on Machine Learning. Long Beach, California, USA: JMLR, 2019.4013−4022.
    [26] He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA: IEEE, 2016: 770−778.
    [27] Ganin Y, Lempitsky V S. Unsupervised domain adaptation by backpropagation. In: Proceedings of the 32nd International Conference on Machine Learning. Lille, France: JMLR, 2015.1180−1189.
    [28] Saito K, Watanabe K, Ushiku Y, Harada T. Maximum classifier discrepancy for unsupervised domain adaptation. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE, 2018.3723−3732.
    [29] Wang Z, Dai Z, Póczos B, Carbonell J G. Characterizing and avoiding negative transfer. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019.11293–11302.
    [30] Chen C, Xie W, Huang W, et al. Progressive feature alignment for unsupervised domain adaptation. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019.627–636.
    [31] Tan B, Song Y, Zhong E, Yang Q. Transitive transfer learning. In: Proceedings of the 21th International Conference on Knowledge Discovery and Data Mining. Sydney, NSW, Australia: ACM, 2015.1155–1164.
    [32] Tan B, Zhang Y, Pan S J, Yang Q. Distant domain transfer learning. In: Proceedings of the Thirty-First Conference on Artificial Intelligence. San Francisco, California, USA: AAAI, 2017.2604–2610.
    [33] Yosinski J, Clune J, Bengio Y, Lipson H. How transferable are features in deep neural networks? In: Proceedings of Annual Conference on Neural Information Processing Systems. Montreal, Quebec, Canada: ACM, 2014.3320–3328.
    [34] Ghifary M, Kleijn W B, Zhang M. Domain adaptive neural networks for object recognition. In: Proceedings of the 13th Pacific Rim International Conference on Artificial Intelligence, Gold Coast, QLD, Australia: Springer, 2014.898–904.
    [35] Tzeng E, Hoffman J, Zhang N, Saenko K, Darrell T. Deep domain confusion: maximizing for domain invariance. CoRR, 2014: abs/1412.3
    [36] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks. In: Proceedings of the 26th Annual Conference on Neural Information Processing Systems. Lake Tahoe, Nevada, United States: ACM, 2012.1106–1114.
    [37] Zhang X, Yu F X, Chang S F, Wang S. Deep transfer network: unsupervised domain adaptation. CoRR, 2015: abs/1503.0
    [38] Yan H, Ding Y, Li P, Wang Q, Xu Y, Zuo W. Mind the class weight bias: weighted maximum mean discrepancy for unsupervised domain adaptation. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USA: IEEE, 2017.945–954.
    [39] Long M, Zhu H, Wang J, Jordan M I. Unsupervised domain adaptation with residual transfer networks. In: Proceedings of Annual Conference on Neural Information Processing Systems. Barcelona, Spain: ACM, 2016.136–144.
    [40] Gao Jun, Huang Li-Li, Sun Chang-Yin. A local weighted mean based domain adaptation learning framework. Acta Automatica Sinica, 2013, 39(7): 1037−1052 (in Chinese)
    [41] Sun B, Feng J, Saenko K. Return of frustratingly easy domain adaptation. In: Proceedings of the Thirtieth Conference on Artificial Intelligence. Phoenix, Arizona, USA: AAAI, 2016.2058–2065.
    [42] Sun B, Saenko K. Deep CORAL: correlation alignment for deep domain adaptation. In: Proceedings of European Conference on Computer Vision. Amsterdam, The Netherlands: Springer, 2016.443–450.
    [43] Chen C, Chen Z, Jiang B, Jin X. Joint domain alignment and discriminative feature learning for unsupervised deep domain adaptation. In: Proceedings of the Thirty-Third Conference on Artificial Intelligence. Honolulu, Hawaii, USA: AAAI, 2019.3296–3303.
    [44] Li Y, Swersky K, Zemel R S. Generative moment matching networks. In: Proceedings of the 32nd International Conference on Machine Learning. Lille, France: JMLR, 2015.1718–1727.
    [45] Zellinger W, Grubinger T, Lughofer E, Natschläger T, Saminger P S. Central moment discrepancy for domain-invariant representation learning. The 5th International Conference on Learning Representations. Toulon, France: Springer, 2017.
    [46] Courty N, Flamary R, Tuia D, Rakotomamonjy A. Optimal transport for domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(9): 1853−1865 doi: 10.1109/TPAMI.2016.2615921
    [47] Courty N, Flamary R, Habrard A, Rakotomamonjy A. Joint distribution optimal transportation for domain adaptation. In: Proceedings of Advances in Neural Information Processing Systems. California, USA: ACM, 2017.3730–3739.
    [48] Bhushan D B, Kellenberger B, Flamary R, Tuia D, Courty N. Deepjdot: deep joint distribution optimal transport for unsupervised domain adaptation. In: Proceedings of the European Conference on Computer Vision. Munich, Germany: Springer, 2018.447–463.
    [49] Lee C Y, Batra T, Baig M H, Ulbricht D. Sliced wasserstein discrepancy for unsupervised domain adaptation. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019.10285–10295.
    [50] Shen J, Qu Y, Zhang W, Yu Y. Wasserstein distance guided representation learning for domain adaptation. In: Proceedings of the Thirty-Second Conference on Artificial Intelligence. New Orleans, Louisiana, USA: AAAI, 2018.4058–4065.
    [51] Arjovsky M, Chintala S, Bottou L. Wasserstein generative adversarial networks. In: Proceedings of International conference on machine learning. Sydney, Australia: JMLR, 2017.214–223.
    [52] Herath S, Harandi M, Fernando B, Nock R. Min-max statistical alignment for transfer learning. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019.9288–9297.
    [53] Rozantsev A, Salzmann M, Fua P. Beyond sharing weights for deep domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(4): 801−814 doi: 10.1109/TPAMI.2018.2814042
    [54] Shu X, Qi G J, Tang J, Wang J. Weakly-shared deep transfer networks for heterogeneous-domain knowledge Propagation. In: Proceedings of the 23rd Annual Conference on Multimedia Conference. Brisbane, Australia: ACM, 2015.35–44.
    [55] Xu Su-Hui, Mu Xiao-Dong, Chai Dong, Luo Chang. Domain adaptation algorithm with ELM parameter transfer. Acta Automatica Sinica, 2018, 44(2): 311−317 (in Chinese)
    [56] Ioffe S, Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift. In: Proceedings of the 32nd International Conference on Machine Learning. Lille, France: JMLR, 2015.448–456.
    [57] Chang W G, You T, Seo S, Kwak S, Han B. Domain-specific batch normalization for unsupervised domain adaptation. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019.7354–7362.
    [58] Roy S, Siarohin A, Sangineto E, Bulò S R, Sebe N, Ricci E. Unsupervised domain adaptation using feature-whitening and consensus loss. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019.9471–9480.
    [59] Li Y, Wang N, Shi J, Liu J, Hou X. Revisiting batch normalization for practical domain adaptation. In: Proceedings of the 5th International Conference on Learning Representations. Toulon, France: Springer, 2017.
    [60] Ulyanov D, Vedaldi A, Lempitsky V S. Improved texture networks: maximizing quality and diversity in feed-forward stylization and texture synthesis. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USA: IEEE, 2017.4105–4113.
    [61] Carlucci F M, Porzi L, Caputo B, Ricci E, Bulò S R. AutoDIAL: automatic domain alignment layers. In: Proceedings of International Conference on Computer Vision. Venice, Italy: IEEE, 2017.5077–5085.
    [62] Xiao T, Li H, Ouyang W, Wang X. Learning deep feature representations with domain guided dropout for person re-identification. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA: IEEE, 2016.1249–1258.
    [63] Wu S, Zhong J, Cao W, Li R, Yu Z, Wong H S. Improving domain-specific classification by collaborative learning with adaptation networks. In: Proceedings of the Thirty-Third Conference on Artificial Intelligence. Honolulu, Hawaii, USA: AAAI, 2019.5450–5457.
    [64] Zhang Y, Tang H, Jia K, Tan M. Domain-symmetric networks for adversarial domain adaptation. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019.5031–5040.
    [65] Gopalan R, Li R, Chellappa R. Domain adaptation for object recognition: an unsupervised approach. In: Proceedings of International Conference on Computer Vision. Barcelona, Spain: IEEE, 2011.999–1006.
    [66] Gong B, Shi Y, Sha F, Grauman K. Geodesic flow kernel for unsupervised domain adaptation. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Providence, RI, USA: IEEE, 2012.2066–2073.
    [67] Chopra S, Balakrishnan S, Gopalan R. Dlid: deep learning for domain adaptation by interpolating between domains. International Conference on Machine Learning workshop on challenges in representation learning. 2013, 2(6).
    [68] Gong R, Li W, Chen Y, Gool L Van. DLOW: domain flow for adaptation and generalization. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019.2477–2486.
    [69] Xu X, Zhou X, Venkatesan R, Swaminathan G, Majumder O. d-SNE: domain adaptation using stochastic neighborhood embedding. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019.2497–2506.
    [70] Yang B, Yuen P C. Cross-domain visual representations via unsupervised graph alignment. In: Proceedings of the Thirty-Third Conference on Artificial Intelligence. Honolulu, Hawaii, USA: AAAI, 2019.5613–5620.
    [71] Ma X, Zhang T, Xu C. GCAN: Graph convolutional adversarial network for unsupervised domain adaptation. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019.8266–8276.
    [72] Yang Z, Zhao J J, Dhingra B, et al. GLoMo: unsupervised learning of transferable relational graphs. In: Proceedings of Annual Conference on Neural Information Processing Systems. Montréal, Canada: ACM, 2018.8964–8975.
    [73] Goodfellow I J, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets. In: Proceedings of Annual Conference on Neural Information Processing Systems. Montreal, Quebec, Canada: ACM. 2014.2672–2680.
    [74] Chen X, Wang S, Long M, Wang J. Transferability vs. Discriminability: batch spectral penalization for adversarial domain adaptation. In: Proceedings of the 36th International Conference on Machine Learning. Long Beach, California, USA: JMLR, 2019: 1081−1090
    [75] Tzeng E, Hoffman J, Saenko K, Darrell T. Adversarial discriminative domain adaptation. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USA: IEEE, 2017.2962–2971.
    [76] Volpi R, Morerio P, Savarese S, Murino V. Adversarial feature augmentation for unsupervised domain adaptation. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE, 2018.5495–5504.
    [77] Vu T H, Jain H, Bucher M, Cord M, Pérez P. ADVENT: adversarial entropy minimization for domain adaptation in semantic segmentation. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019.2517–2526.
    [78] Tzeng E, Hoffman J, Darrell T, Saenko K. Simultaneous deep transfer across domains and tasks. In: Proceedings of International Conference on Computer Vision. Santiago, Chile: IEEE, 2015.4068–4076.
    [79] Kurmi V K, Kumar S, Namboodiri V P. Attending to discriminative certainty for domain adaptation. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019.491–500.
    [80] Wang X, Li L, Ye W, Long M, Wang J. Transferable attention for domain adaptation. In: Proceedings of The Thirty-Third Conference on Artificial Intelligence. Honolulu, Hawaii, USA: AAAI, 2019.5345–5352.
    [81] Luo Y, Zheng L, Guan T, Yu J, Yang Y. Taking a closer look at domain shift: category-level adversaries for semantics consistent domain adaptation. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019.2507–2516.
    [82] Springenberg J T, Dosovitskiy A, Brox T, Riedmiller M A. Striving for simplicity: the all convolutional net. In: Proceedings of the 3rd International Conference on Learning Representations. San Diego, CA, USA: ACM, 2015.
    [83] Selvaraju R R, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-CAM: visual explanations from deep networks via gradient-based localization. In: Proceedings of International Conference on Computer Vision. Venice, Italy: IEEE, 2017.618–626.
    [84] Chattopadhyay A, Sarkar A, Howlader P, Balasubramanian V N. Grad-CAM++: generalized gradient-based visual explanations for deep convolutional networks. In: Proceedings of Winter Conference on Applications of Computer Vision. Lake Tahoe, NV, USA: IEEE, 2018.839–847.
    [85] Bengio Y. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2009, 2(1): 1−127 doi: 10.1561/2200000006
    [86] Glorot X, Bordes A, Bengio Y. Domain adaptation for large-scale sentiment classification: a deep learning approach. In: Proceedings of the 28th International Conference on Machine Learning. Bellevue, Washington, USA: JMLR, 2011.513–520.
    [87] Chen M, Xu Z E, Weinberger K Q, Sha F. Marginalized denoising autoencoders for domain adaptation. In: Proceedings of the 29th International Conference on Machine Learning. Edinburgh, Scotland, UK: JMLR, 2012.
    [88] Ghifary M, Kleijn W B, Zhang M, Balduzzi D, Li W. Deep reconstruction-classification networks for unsupervised domain adaptation. In: Proceedings of the 14th European Conference. Amsterdam, The Netherlands: Springer, 2016.597–613.
    [89] Zhuang F, Cheng X, Luo P, Pan S J, He Q. Supervised representation learning: transfer learning with deep autoencoders. In: Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence. Buenos Aires, Argentina: ACM, 2015.4119–4125.
    [90] Sun R, Zhu X, Wu C, Huang C, Shi J, Ma L. Not all areas are equal: transfer learning for semantic segmentation via hierarchical region selection. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019.4360–4369.
    [91] Tsai J C, Chien J T. Adversarial domain separation and adaptation. In: Proceedings of the 27th International Workshop on Machine Learning for Signal Processing. Tokyo, Japan: IEEE, 2017.1–6.
    [92] Zhu P, Wang H, Saligrama V. Learning classifiers for target domain with limited or no labels. In: Proceedings of the 36th International Conference on Machine Learning. Long Beach, California, USA: JMLR, 2019.7643–7653.
    [93] Zheng H, Fu J, Mei T, Luo J. Learning multi-attention convolutional neural network for fine-grained image recognition. In: Proceedings of International Conference on Computer Vision. Venice, Italy: IEEE, 2017.5219–5227.
    [94] Zhao A, Ding M, Guan J, Lu Z, Xiang T, Wen J R. Domain-invariant projection learning for zero-shot recognition. In: Proceedings of Annual Conference on Neural Information Processing Systems. Montréal, Canada: ACM, 2018.1027–1038.
    [95] Liu M Y, Tuzel O. Coupled generative adversarial networks. In: Proceedings of Annual Conference on Neural Information Processing Systems. Barcelona, Spain: ACM, 2016.469–477.
    [96] He D, Xia Y, Qin T, et al. Dual Learning for machine translation. In: Proceedings of Annual Conference on Neural Information Processing Systems. Barcelona, Spain: ACM, 2016.820–828.
    [97] Yi Z, Zhang H, Tan P, Gong M. DualGAN: unsupervised dual learning for image-to-image translation. In: Proceedings of International Conference on Computer Vision. Venice, Italy: IEEE, 2017.2868–2876.
    [98] Kim T, Cha M, Kim H, Lee J K, Kim J. Learning to discover cross-domain relations with generative adversarial networks. In: Proceedings of the 34th International Conference on Machine Learning. Sydney, NSW, Australia: IEEE, 2017.1857–1865.
    [99] Sankaranarayanan S, Balaji Y, Castillo C D, Chellappa R. Generate to adapt: aligning domains using generative adversarial networks. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE, 2018.8503–8512.
    [100] Yoo D, Kim N, Park S, Paek A S, Kweon I S. Pixel-level domain transfer. In: Proceedings of the 14th European Conference on Computer Vision. Amsterdam, The Netherlands: Springer, 2016.517–532.
    [101] Chen Y C, Lin Y Y, Yang M H, Huang J B. CrDoCo: pixel-level domain transfer with cross-domain consistency. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019.1791–1800.
    [102] Shrivastava A, Pfister T, Tuzel O, Susskind J, Wang W, Webb R. Learning from simulated and unsupervised images through adversarial training. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USA: IEEE, 2017: 2242–2251.
    [103] Li Y, Yuan L, Vasconcelos N. Bidirectional learning for domain adaptation of semantic segmentation. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019.6936–6945.
    [104] Bousmalis K, Silberman N, Dohan D, Erhan D, Krishnan D. Unsupervised pixel-level domain adaptation with generative adversarial networks. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USA: IEEE, 2017.95–104.
    [105] Liu H, Cao Z, Long M, Wang J, Yang Q. Separate to adapt: open set domain adaptation via progressive separation. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019.2927–2936.
    [106] You K, Long M, Cao Z, Wang J, Jordan M I. Universal domain adaptation. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019.2720–2729.
    [107] Cao Z, Ma L, Long M, Wang J. Partial adversarial domain adaptation. In: Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018.139–155.
    [108] Cao Z, You K, Long M, Wang J, Yang Q. Learning to transfer examples for partial domain adaptation. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019.2985–2994.
    [109] Zhang J, Ding Z, Li W, Ogunbona P. Importance weighted adversarial nets for partial domain adaptation. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE, 2018.8156–8164.
    [110] Busto P P, Gall J. Open set domain adaptation. In: Proceedings of International Conference on Computer Vision. Venice, Italy: IEEE, 2017.754–763.
    [111] Saito K, Yamamoto S, Ushiku Y, Harada T. Open set domain adaptation by backpropagation. In: Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018.156–171.
    [112] Li Y, Yang Y, Zhou W, Hospedales T M. Feature-critic networks for heterogeneous domain generalization. In: Proceedings of the 36th International Conference on Machine Learning. Long Beach, California, USA: JMLR, 2019.3915–3924.
    [113] Chen Z, Zhuang J, Liang X, Lin L. Blending-target domain adaptation by adversarial meta-adaptation networks. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019.2248–2257.
    [114] Shankar S, Piratla V, Chakrabarti S, Chaudhuri S, Jyothi P, Sarawagi S. Generalizing across domains via cross-gradient training. In: Proceedings of the 6th International Conference on Learning Representations. Vancouver, BC, Canada, 2018.
    [115] Volpi R, Namkoong H, Sener O, Duchi J C, Murino V, Savarese S. Generalizing to unseen domains via adversarial data augmentation. In: Proceedings of Annual Conference on Neural Information Processing Systems. Montréal, Canada: Curran Associates, 2018.5339–5349.
    [116] Muandet K, Balduzzi D, Schölkopf B. Domain generalization via invariant feature representation. In: Proceedings of the 30th International Conference on Machine Learning. Atlanta, GA, USA: JMLR, 2013.10–18.
    [117] Li D, Yang Y, Song Y Z, Hospedales T M. Learning to generalize: meta-learning for domain generalization. In: Proceedings of the Thirty-Second Conference on Artificial Intelligence. New Orleans, Louisiana, USA: AAAI, 2018.3490–3497.
    [118] Li H, Pan S J, Wang S, Kot A C. Domain generalization with adversarial feature learning. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE, 2018.5400–5409.
    [119] Carlucci F M, D’Innocente A, Bucci S, Caputo B, Tommasi T. Domain generalization by solving jigsaw puzzles. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019.2229–2238.
    [120] Ghifary M, Kleijn W B, Zhang M, Balduzzi D. Domain generalization for object recognition with multi-task autoencoders. In: Proceedings of International Conference on Computer Vision. Santiago, Chile: IEEE, 2015.2551–2559.
    [121] Xu Z, Li W, Niu L, Xu D. Exploiting low-rank structure from latent domains for domain generalization. In: Proceedings of the 13th European Conference on Computer Vision. Zurich, Switzerland: Springer, 2014.628–643.
    [122] Niu L, Li W, Xu D. Multi-view domain generalization for visual recognition. In: Proceedings of International Conference on Computer Vision. Santiago, Chile: IEEE, 2015.4193–4201.
    [123] Niu L, Li W, Xu D. Visual recognition by learning from web data: a weakly supervised domain generalization approach. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Boston, MA, USA: IEEE, 2015.2774–2783.
    [124] Chen Y, Li W, Sakaridis C, Dai D, Van Gool L. Domain adaptive Faster R-CNN for object detection in the wild. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE, 2018.3339–3348.
    [125] Inoue N, Furuta R, Yamasaki T, Aizawa K. Cross-domain weakly-supervised object detection through progressive domain adaptation. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE, 2018.5001–5009.
    [126] Xu Y, Du B, Zhang L, Zhang Q, Wang G, Zhang L. Self-ensembling attention networks: addressing domain shift for semantic segmentation. In: Proceedings of the Thirty-Third Conference on Artificial Intelligence. Honolulu, Hawaii, USA: AAAI, 2019.5581–5588.
    [127] Chen C, Dou Q, Chen H, Qin J, Heng P A. Synergistic image and feature adaptation: towards cross-modality domain adaptation for medical image segmentation. In: Proceedings of Conference on Artificial Intelligence. Honolulu, Hawaii, USA: AAAI, 2019.865–872.
    [128] Agresti G, Schaefer H, Sartor P, Zanuttigh P. Unsupervised domain adaptation for ToF data denoising with adversarial learning. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019.5584–5593.
    [129] Yoo J, Hong Y, Noh Y, Yoon S. Domain adaptation using adversarial learning for autonomous navigation. arXiv preprint arXiv:1712.03742, 2017.
    [130] Choi D, An T H, Ahn K, Choi J. Driving experience transfer method for end-to-end control of self-driving cars. CoRR, 2018: abs/1809.0
    [131] Bak S, Carr P, Lalonde J F. Domain adaptation through synthesis for unsupervised person re-identification. In: Proceedings of the European Conference on Computer Vision. Munich, Germany: Springer, 2018.189–205.
    [132] Deng W, Zheng L, Ye Q, Kang G, Yang Y, Jiao J. Image-image domain adaptation with preserved self-similarity and domain-dissimilarity for person re-identification. In: Proceedings of Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE, 2018.994–1003.
    [133] Li Y J, Yang F E, Liu Y C, Yeh Y Y, Du X, Frank Wang Y-C. Adaptation and re-identification network: an unsupervised deep transfer learning approach to person re-identification. In: Proceedings of Conference on Computer Vision and Pattern Recognition Workshops. Salt Lake City, UT, USA: IEEE, 2018.172–178.
    [134] 吴彦丞, 陈鸿昶, 李邵梅, 高超. 基于行人属性先验分布的行人再识别. 自动化学报, 2019, 45(5): 953−964

    WU Yan-Cheng, CHEN Hong-Chang, LI Shao-Mei, GAO Chao. Person re-identification using attribute priori distribution. Acta Automatica Sinica, 2019, 45(5): 953−964
    [135] Côté A U, Fall C L, Drouin A, et al. Deep learning for electromyographic hand gesture signal classification using transfer learning. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2019, 27(4): 760−771 doi: 10.1109/TNSRE.2019.2896269
    [136] Ren C X, Dai D Q, Huang K K, Lai Z R. Transfer learning of structured representation for face recognition. IEEE Transactions on Image Processing, 2014, 23(12): 5440−5454 doi: 10.1109/TIP.2014.2365725
    [137] Shin H C, Roth H R, Gao M, et al. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Transactions on Medical Imaging, 2016, 35(5): 1285−1298 doi: 10.1109/TMI.2016.2528162
    [138] Byra M, Wu M, Zhang X, et al. Knee menisci segmentation and relaxometry of 3D ultrashort echo time (UTE) cones MR imaging using attention U-Net with transfer learning. CoRR, 2019: abs/1908.0
    [139] Cao H, Bernard S, Heutte L, Sabourin R. Improve the performance of transfer learning without fine-tuning using dissimilarity-based multi-view learning for breast cancer histology images. In: Proceedings of the 15th International Conference on Image Analysis and Recognition. Póvoa de Varzim, Portugal: Springer, 2018.779–787.
    [140] Kachuee M, Fazeli S, Sarrafzadeh M. ECG heartbeat classification: a deep transferable representation. In: Proceedings of International Conference on Healthcare Informatics. New York, USA: IEEE, 2018.443–444.
    [141] 贺敏, 汤健, 郭旭琦, 阎高伟. 基于流形正则化域适应随机权神经网络的湿式球磨机负荷参数软测量. 自动化学报, 2019, 45(2): 398−406

    HE Min, TANG Jian, GUO Xu-Qi, YAN Gao-Wei. Soft sensor for ball mill load using DAMRRWNN model. Acta Automatica Sinica, 2019, 45(2): 398−406
    [142] Zhao M, Yue S, Katabi D, Jaakkola T S, Bianchi M T. Learning sleep stages from radio signals: a conditional adversarial architecture. In: Proceedings of the 34th International Conference on Machine Learning. Sydney, NSW, Australia: JMLR, 2017.4100–4109.
    [143] Saenko K, Kulis B, Fritz M, Darrell T. Adapting visual category models to new domains. In: Proceedings of European Conference on Computer Vision. Crete, Greece: Springer, 2010.213–226.
    [144] Venkateswara H, Eusebio J, Chakraborty S, Panchanathan S. Deep hashing network for unsupervised domain adaptation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Hawaii, USA: IEEE, 2017.5018–5027.
    [145] Bendale A, Boult T E. Towards open set deep networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016.
    [146] Shu Y, Cao Z, Long M, Wang J. Transferable curriculum for weakly-supervised domain adaptation. In: Proceedings of The Thirty-Third Conference on Artificial Intelligence. Honolulu, Hawaii, USA: AAAI, 2019.4951–4958.
出版历程
  • 收稿日期:  2020-04-22
  • 录用日期:  2020-09-14