A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation.
Volume 10, Issue 2, Feb. 2023

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 11.8, Top 4% (SCI Q1)
    CiteScore: 17.6, Top 3% (Q1)
Google Scholar h5-index: 77, Top 5
Citation: W. Zhang, L. F. Deng, L. Zhang, and D. R. Wu, “A survey on negative transfer,” IEEE/CAA J. Autom. Sinica, vol. 10, no. 2, pp. 305–329, Feb. 2023. doi: 10.1109/JAS.2022.106004

A Survey on Negative Transfer

doi: 10.1109/JAS.2022.106004
Funds: This work was partially supported by the National Key Research and Development Program of China (2021ZD0201300), the Hubei Province Funds for Distinguished Young Scholars (2020CFA050), and the National Natural Science Foundation of China (61873321).
Abstract
  • Transfer learning (TL) utilizes data or knowledge from one or more source domains to facilitate learning in a target domain. It is particularly useful when the target domain has few or no labeled data, due to annotation expense, privacy concerns, etc. Unfortunately, the effectiveness of TL is not always guaranteed. Negative transfer (NT), i.e., the phenomenon that leveraging source domain data/knowledge undesirably reduces learning performance in the target domain, has been a long-standing and challenging problem in TL. Various approaches have been proposed in the literature to address this issue, but no systematic survey of them exists. This paper fills the gap by first introducing the definition of NT and its causes, and then reviewing over fifty representative approaches for overcoming NT, which fall into three categories: domain similarity estimation, safe transfer, and NT mitigation. Applications in many areas, including computer vision, bioinformatics, natural language processing, recommender systems, and robotics, where NT mitigation strategies are used to facilitate positive transfer, are also reviewed. Finally, we give guidelines on NT task construction and baseline algorithms, benchmark existing TL and NT mitigation approaches on three NT-specific datasets, and point out challenges and future research directions. To ensure reproducibility, our code is available at https://github.com/chamwen/NT-Benchmark.
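    For concreteness, the NT literature commonly formalizes this definition through a "negative transfer gap". Let $\epsilon_T(\cdot)$ denote test error on the target domain, and let $\mathcal{A}(\mathcal{D}_S, \mathcal{D}_T)$ be the model that algorithm $\mathcal{A}$ learns from source data $\mathcal{D}_S$ and target data $\mathcal{D}_T$. The notation here is illustrative and may differ from the paper body; NT is then said to occur when

    $$ \mathrm{NTG} \;=\; \epsilon_T\big(\mathcal{A}(\mathcal{D}_S, \mathcal{D}_T)\big) \;-\; \epsilon_T\big(\mathcal{A}(\varnothing, \mathcal{D}_T)\big) \;>\; 0, $$

    i.e., transferring from the source domain yields a higher target-domain error than learning from the target data alone.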

     

  • Footnote 1: https://scikit-learn.org/
    Footnote 2: https://bcmi.sjtu.edu.cn/seed/seed.html
  • Wen Zhang and Lingfei Deng contributed equally to this work.
  • [1]
    D. R. Wu, Y. F. Xu, and B. L. Lu, “Transfer learning for EEG-based brain-computer interfaces: A review of progress made since 2016,” IEEE Trans. Cogn. Dev. Syst., vol. 14, no. 1, pp. 4–19, Mar. 2022. doi: 10.1109/TCDS.2020.3007453
    [2]
    S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Trans. Knowl. Data Eng., vol. 22, no. 10, pp. 1345–1359, Oct. 2010. doi: 10.1109/TKDE.2009.191
    [3]
    L. Zhang and X. B. Gao, “Transfer adaptation learning: A decade survey,” IEEE Trans. Neural Netw. Learn. Syst., 2022, DOI: 10.1109/TNNLS.2022.3183326
    [4]
    Z. Chen and M. W. Daehler, “Positive and negative transfer in analogical problem solving by 6-year-old children,” Cogn. Dev., vol. 4, no. 4, pp. 327–344, Oct. 1989. doi: 10.1016/S0885-2014(89)90031-2
    [5]
    J. Y. Huang, A. J. Smola, A. Gretton, K. M. Borgwardt, and B. Schölkopf, “Correcting sample selection bias by unlabeled data,” in Proc. 19th Int. Conf. Neural Information Processing Systems, Vancouver, Canada, 2006, pp. 601–608.
    [6]
    M. S. Long, J. M. Wang, G. G. Ding, J. G. Sun, and P. S. Yu, “Transfer feature learning with joint distribution adaptation,” in Proc. IEEE Int. Conf. Computer Vision, Sydney, Australia, 2013, pp. 2200–2207.
    [7]
    J. Zhang, W. Q. Li, and P. Ogunbona, “Joint geometrical and statistical alignment for visual domain adaptation,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Honolulu, USA, 2017, pp. 5150–5158.
    [8]
    D. R. Wu, “Online and offline domain adaptation for reducing BCI calibration effort,” IEEE Trans. Human-Mach. Syst., vol. 47, no. 4, pp. 550–563, Aug. 2017. doi: 10.1109/THMS.2016.2608931
    [9]
    P. Agrawal, R. Girshick, and J. Malik, “Analyzing the performance of multilayer neural networks for object recognition,” in Proc. 13th European Conf. Computer Vision, Zurich, Switzerland, 2014, pp. 329–344.
    [10]
    M. S. Long, Y. Cao, J. M. Wang, and M. I. Jordan, “Learning transferable features with deep adaptation networks,” in Proc. 32nd Int. Conf. Machine Learning, Lille, France, 2015, pp. 97–105.
    [11]
    Y. Ganin and V. Lempitsky, “Unsupervised domain adaptation by backpropagation,” in Proc. 32nd Int. Conf. Machine Learning, Lille, France, 2015, pp. 1180–1189.
    [12]
    Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, “Domain-adversarial training of neural networks,” J. Mach. Learn. Res., vol. 17, no. 1, pp. 2096–2030, Jan. 2016.
    [13]
    M. S. Long, Z. J. Cao, J. M. Wang, and M. I. Jordan, “Conditional adversarial domain adaptation,” in Proc. 32nd Int. Conf. Neural Information Processing Systems, Montréal, Canada, 2018.
    [14]
    M. T. Rosenstein, Z. Marx, L. P. Kaelbling, and T. G. Dietterich, “To transfer or not to transfer,” in Proc. NIPS Workshop Transfer Learning, Vancouver, Canada, 2005, pp. 1–4.
    [15]
    B. Cao, S. J. Pan, Y. Zhang, D. Y. Yeung, and Q. Yang, “Adaptive transfer learning,” in Proc. 24th AAAI Conf. Artificial Intelligence, Atlanta, USA, 2010, pp. 7.
    [16]
    Z. R. Wang, Z. H. Dai, B. Póczos, and J. Carbonell, “Characterizing and avoiding negative transfer,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, Long Beach, USA, 2019, pp. 11285–11294.
    [17]
    S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. W. Vaughan, “A theory of learning from different domains,” Machine Learning, vol. 79, no. 1–2, pp. 151–175, May 2010. doi: 10.1007/s10994-009-5152-4
    [18]
    I. Redko, E. Morvant, A. Habrard, M. Sebban, and Y. Bennani, “A survey on domain adaptation theory: Learning bounds and theoretical guarantees,” arXiv preprint arXiv: 2004.11829, 2020.
    [19]
    B. Watson, “The similarity factor in transfer and inhibition,” J. Educational Psychology, vol. 29, no. 2, pp. 145–157, 1938. doi: 10.1037/h0054963
    [20]
    C. Chen, “A study on positive transfer of native language and second language teaching methods,” Theory Pract. Lang. Stud., vol. 10, no. 3, pp. 306–312, Mar. 2020. doi: 10.17507/tpls.1003.06
    [21]
    M. J. Sorocky, S. Q. Zhou, and A. P. Schoellig, “Experience selection using dynamics similarity for efficient multi-source transfer learning between robots,” in Proc. IEEE Int. Conf. Robotics and Automation, Paris, France, 2020, pp. 2739–2745.
    [22]
    S. Wang, S. Nepal, C. Rudolph, M. Grobler, S. Y. Chen, and T. L. Chen, “Backdoor attacks against transfer learning with pre-trained deep learning models,” IEEE Trans. Serv. Comput., vol. 15, no. 3, May–Jun. 2022.
    [23]
    Q. Yang, Y. Zhang, W. Y. Dai, and S. J. Pan, Transfer Learning. Cambridge, UK: Cambridge University Press, 2020.
    [24]
    C. Q. Tan, F. C. Sun, T. Kong, W. C. Zhang, C. Yang, and C. F. Liu, “A survey on deep transfer learning,” in Proc. 27th Int. Conf. Artificial Neural Networks, Rhodes, Greece, 2018, pp. 270–279.
    [25]
    Y. F. Li, L. Z. Guo, and Z. H. Zhou, “Towards safe weakly supervised learning,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 43, no. 1, pp. 334–346, Jan. 2021.
    [26]
    Z. Wang, “Mitigating negative transfer for better generalization and efficiency in transfer learning,” Ph.D. dissertation, Carnegie Mellon University, 2022.
    [27]
    F. Z. Zhuang, Z. Y. Qi, K. Y. Duan, D. B. Xi, Y. C. Zhu, H. S. Zhu, H. Xiong, and Q. He, “A comprehensive survey on transfer learning,” Proc. IEEE, vol. 109, no. 1, pp. 43–76, Jan. 2021. doi: 10.1109/JPROC.2020.3004555
    [28]
    B. Zadrozny, “Learning and evaluating classifiers under sample selection bias,” in Proc. 21st Int. Conf. Machine Learning, Banff, Canada, 2004, pp. 114.
    [29]
    Z. C. Lipton, Y. X. Wang, and A. J. Smola, “Detecting and correcting for label shift with black box predictors,” in Proc. 35th Int. Conf. Machine Learning, Stockholm, Sweden, 2018, pp. 3128–3136.
    [30]
    R. Tachet des Combes, H. Zhao, Y. X. Wang, and G. Gordon, “Domain adaptation with conditional distribution matching and generalized label shift,” in Proc. 34th Int. Conf. Neural Information Processing Systems, Vancouver, Canada, 2020, pp. 1617.
    [31]
    C. Wei, K. Shen, Y. N. Chen, and T. Y. Ma, “Theoretical analysis of self-training with deep networks on unlabeled data,” in Proc. 9th Int. Conf. Learning Representations, Austria, 2021.
    [32]
    C. M. Bishop, Pattern Recognition and Machine Learning. New York, USA: Springer, 2006.
    [33]
    Y. Z. Cao and J. F. Yang, “Towards making systems forget with machine unlearning,” in Proc. IEEE Symp. Security and Privacy, San Jose, USA, 2015, pp. 463–480.
    [34]
    L. Bourtoule, V. Chandrasekaran, C. A. Choquette-Choo, H. R. Jia, A. Travers, B. W. Zhang, D. Lie, and N. Papernot, “Machine unlearning,” in Proc. IEEE Symp. Security and Privacy, San Francisco, USA, 2021, pp. 141–159.
    [35]
    L. Graves, V. Nagisetty, and V. Ganesh, “Amnesiac machine learning,” in Proc. 35th AAAI Conf. Artificial Intelligence, 2021.
    [36]
    Y. Zhang and Q. Yang, “An overview of multi-task learning,” Natl. Sci. Rev., vol. 5, no. 1, pp. 30–43, Jan. 2018. doi: 10.1093/nsr/nwx105
    [37]
    T. H. Yu, S. Kumar, A. Gupta, S. Levine, K. Hausman, and C. Finn, “Gradient surgery for multi-task learning,” in Proc. 34th Annu. Conf. Neural Information Processing Systems, 2020.
    [38]
    Z. R. Wang, Y. Tsvetkov, O. Firat, and Y. Cao, “Gradient vaccine: Investigating and improving multi-task optimization in massively multilingual models,” in Proc. 9th Int. Conf. Learning Representations, Vienna, Austria, 2021.
    [39]
    M. McCloskey and N. J. Cohen, “Catastrophic interference in connectionist networks: The sequential learning problem,” in Psychol. Learn. Motiv., vol. 24, pp. 109–165, 1989.
    [40]
    T. Doan, M. A. Bennani, B. Mazoure, G. Rabusseau, and P. Alquier, “A theoretical analysis of catastrophic forgetting through the NTK overlap matrix,” in Proc. 24th Int. Conf. Artificial Intelligence and Statistics, San Diego, USA, 2021, pp. 1072–1080.
    [41]
    A. Abdelkader, M. J. Curry, L. Fowl, T. Goldstein, A. Schwarzschild, M. L. Shu, C. Studer, and C. Zhu, “Headless horseman: Adversarial attacks on transfer learning models,” in Proc. Int. Conf. Acoustics, Speech and Signal Processing, Barcelona, Spain, 2020, pp. 3087–3091.
    [42]
    A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola, “A kernel two-sample test,” J. Mach. Learn. Res., vol. 13, pp. 723–773, Mar. 2012.
    [43]
    Z. R. Wang and J. Carbonell, “Towards more reliable transfer learning,” in Proc. Joint European Conf. Machine Learning and Knowledge Discovery in Databases, Dublin, Ireland, 2018, pp. 794–810.
    [44]
    B. Q. Gong, Y. Shi, F. Sha, and K. Grauman, “Geodesic flow kernel for unsupervised domain adaptation,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Providence, USA, 2012, pp. 2066–2073.
    [45]
    A. M. Azab, L. Mihaylova, K. K. Ang, and M. Arvaneh, “Weighted transfer learning for improving motor imagery-based brain-computer interface,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 27, no. 7, pp. 1352–1359, Jul. 2019. doi: 10.1109/TNSRE.2019.2923315
    [46]
    Y. P. Lin and T. P. Jung, “Improving EEG-based emotion classification using conditional transfer learning,” Front. Hum. Neurosci., vol. 11, p. 334, Jun. 2017. doi: 10.3389/fnhum.2017.00334
    [47]
    W. Zhang and D. R. Wu, “Manifold embedded knowledge transfer for brain-computer interfaces,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 28, no. 5, pp. 1117–1127, May 2020. doi: 10.1109/TNSRE.2020.2985996
    [48]
    G. Xie, Y. Sun, M. L. Lin, and K. Tang, “A selective transfer learning method for concept drift adaptation,” in Proc. 14th Int. Symp. Neural Networks, Sapporo, Japan, 2017, pp. 353–361.
    [49]
    Y. Sun, K. Tang, Z. Zhu, and X. Yao, “Concept drift adaptation by exploiting historical knowledge,” IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 10, pp. 4822–4832, Oct. 2018. doi: 10.1109/TNNLS.2017.2775225
    [50]
    S. Ben-David, J. Blitzer, K. Crammer, and F. Pereira, “Analysis of representations for domain adaptation,” in Proc. 19th Int. Conf. Neural Information Processing Systems, Vancouver, Canada, 2006, pp. 137–144.
    [51]
    R. J. Xu, Z. L. Chen, W. M. Zuo, J. J. Yan, and L. Lin, “Deep cocktail network: Multi-source unsupervised domain adaptation with category shift,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018, pp. 3964–3973.
    [52]
    J. Wu and J. R. He, “Continuous transfer learning with label-informed distribution alignment,” arXiv preprint arXiv: 2006.03230, 2020.
    [53]
    H. Z. Feng, Z. Y. You, M. H. Chen, T. Y. Zhang, M. F. Zhu, F. Wu, C. Wu, and W. Chen, “KD3A: Unsupervised multi-source decentralized domain adaptation via knowledge distillation,” in Proc. 38th Int. Conf. Machine Learning, 2021.
    [54]
    A. T. Tran, C. V. Nguyen, and T. Hassner, “Transferability and hardness of supervised classification tasks,” in Proc. IEEE/CVF Int. Conf. Computer Vision, Seoul, Korea (South), 2019, pp. 1395–1405.
    [55]
    C. Nguyen, T. Hassner, M. Seeger, and C. Archambeau, “LEEP: A new measure to evaluate transferability of learned representations,” in Proc. 37th Int. Conf. Machine Learning, Vienna, Austria, 2020, pp. 7294–7305.
    [56]
    M. J. Afridi, A. Ross, and E. M. Shapiro, “On automated source selection for transfer learning in convolutional neural networks,” Pattern Recognit., vol. 73, pp. 65–75, Jan. 2018. doi: 10.1016/j.patcog.2017.07.019
    [57]
    L. K. Huang, J. Z. Huang, Y. Rong, Q. Yang, and Y. Wei, “Frustratingly easy transferability estimation,” in Proc. 39th Int. Conf. on Machine Learning, Baltimore, USA, 2022.
    [58]
    Y. J. Bao, Y. Li, S. L. Huang, L. Zhang, L. Z. Zheng, A. Zamir, and L. Guibas, “An information-theoretic approach to transferability in task transfer learning,” in Proc. IEEE Int. Conf. Image Processing, Taipei, China, 2019, pp. 2309–2313.
    [59]
    S. Ibrahim, N. Ponomareva, and R. Mazumder, “Newer is not always better: Rethinking transferability metrics, their peculiarities, stability and performance,” in Proc. 35th Conf. Neural Information Processing Systems Workshop on Distribution Shifts, 2021.
    [60]
    A. Meiseles and L. Rokach, “Source model selection for deep learning in the time series domain,” IEEE Access, vol. 8, pp. 6190–6200, Jan. 2020. doi: 10.1109/ACCESS.2019.2963742
    [61]
    S. Kullback and R. A. Leibler, “On information and sufficiency,” Ann. Math. Statist., vol. 22, no. 1, pp. 79–86, Mar. 1951. doi: 10.1214/aoms/1177729694
    [62]
    P. J. Rousseeuw, “Silhouettes: A graphical aid to the interpretation and validation of cluster analysis,” J. Comput. Appl. Math., vol. 20, pp. 53–65, Nov. 1987. doi: 10.1016/0377-0427(87)90125-7
    [63]
    A. Gretton, O. Bousquet, A. Smola, and B. Schölkopf, “Measuring statistical dependence with Hilbert-Schmidt norms,” in Proc. 16th Int. Conf. Algorithmic Learning Theory, Singapore, 2005, pp. 63–77.
    [64]
    S. Si, D. C. Tao, and B. Geng, “Bregman divergence-based regularization for transfer subspace learning,” IEEE Trans. Knowl. Data Eng., vol. 22, no. 7, pp. 929–942, Jul. 2010. doi: 10.1109/TKDE.2009.126
    [65]
    J. Shen, Y. R. Qu, W. N. Zhang, and Y. Yu, “Wasserstein distance guided representation learning for domain adaptation,” in Proc. 32nd AAAI Conf. Artificial Intelligence, the 30th Innovative Applications of Artificial Intelligence, and the 8th AAAI Symp. Educational Advances in Artificial Intelligence, New Orleans, USA, 2018, pp. 4058–4065.
    [66]
    Y. K. Zuo, H. T. Yao, and C. S. Xu, “Attention-based multi-source domain adaptation,” IEEE Trans. Image Process., vol. 30, pp. 3793–3803, Mar. 2021. doi: 10.1109/TIP.2021.3065254
    [67]
    J. Donahue, Y. Q. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell, “DeCAF: A deep convolutional activation feature for generic visual recognition,” in Proc. 31st Int. Conf. Machine Learning, Beijing, China, 2014, pp. 647–655.
    [68]
    K. C. You, Y. Liu, J. M. Wang, and M. S. Long, “LogME: Practical assessment of pre-trained models for transfer learning,” in Proc. 38th Int. Conf. Machine Learning, 2021, pp. 12133–12143.
    [69]
    M. Abdullah Jamal, H. X. Li, and B. Q. Gong, “Deep face detector adaptation without negative transfer or catastrophic forgetting,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018, pp. 5608–5618.
    [70]
    I. Kuzborskij and F. Orabona, “Stability and hypothesis transfer learning,” in Proc. 30th Int. Conf. Machine Learning, Atlanta, USA, 2013, pp. III-942–III-950.
    [71]
    M. J. Sorocky, S. Q. Zhou, and A. P. Schoellig, “To share or not to share? Performance guarantees and the asymmetric nature of cross-robot experience transfer” IEEE Control Syst. Lett., vol. 5, no. 3, pp. 923–928, Jul. 2021. doi: 10.1109/LCSYS.2020.3005886
    [72]
    C. Cortes and M. Mohri, “Domain adaptation in regression,” in Proc. 22nd Int. Conf. Algorithmic Learning Theory, Espoo, Finland, 2011, pp. 308–323.
    [73]
    G. Richard, A. de Mathelin, G. Hébrail, M. Mougeot, and N. Vayatis, “Unsupervised multi-source domain adaptation for regression,” in Proc. Joint European Conf. Machine Learning and Knowledge Discovery in Databases, Ghent, Belgium, 2020, pp. 395–411.
    [74]
    X. Y. Chen, S. N. Wang, J. M. Wang, and M. S. Long, “Representation subspace distance for domain adaptation regression,” in Proc. 38th Int. Conf. Machine Learning, 2021, pp. 1749–1759.
    [75]
    S. M. Ahmed, D. S. Raychaudhuri, S. Paul, S. Oymak, and A. K. Roy-Chowdhury, “Unsupervised multi-source domain adaptation without access to source data,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, Nashville, USA, 2021.
    [76]
    R. Turrisi, R. Flamary, A. Rakotomamonjy, and M. Pontil, “Multi-source domain adaptation via weighted joint distributions optimal transport,” in Proc. 38th Conf. Uncertainty in Artificial Intelligence, 2022.
    [77]
    Y. C. Zhu, F. Z. Zhuang, J. D. Wang, G. L. Ke, J. W. Chen, J. Bian, H. Xiong, and Q. He, “Deep subdomain adaptation network for image classification,” IEEE Trans. Neural Netw. Learn. Syst., vol. 32, no. 4, pp. 1713–1722, Apr. 2021. doi: 10.1109/TNNLS.2020.2988928
    [78]
    T. Kim and C. Kim, “Attract, perturb, and explore: Learning a feature alignment network for semi-supervised domain adaptation,” in Proc. 16th European Conf. Computer Vision, Glasgow, UK, 2020, pp. 591–607.
    [79]
    B. Tan, Y. Q. Song, E. H. Zhong, and Q. Yang, “Transitive transfer learning,” in Proc. 21st ACM SIGKDD Int. Conf. Knowledge Discovery and Data Mining, Sydney, Australia, 2015, pp. 1155–1164.
    [80]
    B. Tan, Y. Zhang, S. J. Pan, and Q. Yang, “Distant domain transfer learning,” in Proc. 31st AAAI Conf. Artificial Intelligence, San Francisco, CA, 2017, pp. 2604–2610.
    [81]
    J. Na, H. Jung, H. J. Chang, and W. Hwang, “FixBi: Bridging domain spaces for unsupervised domain adaptation,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, Nashville, USA, 2021, pp. 11844–11854.
    [82]
    Y. X. Dai, J. Liu, Y. F. Sun, Z. K. Tong, C. Zhang, and L. Y. Duan, “IDM: An intermediate domain module for domain adaptive person Re-ID,” in Proc. IEEE/CVF Int. Conf. Computer Vision, Montreal, Canada, 2021, pp. 11864–11874.
    [83]
    A. Kumar, T. Y. Ma, and P. Liang, “Understanding self-training for gradual domain adaptation,” in Proc. 37th Int. Conf. Machine Learning, 2020, pp. 5468–5479.
    [84]
    H. Y. Chen and W. L. Chao, “Gradual domain adaptation without indexed intermediate domains,” in Proc. 35th Conf. Neural Information Processing Systems, 2021.
    [85]
    C. W. Seah, Y. S. Ong, and I. W. Tsang, “Combating negative transfer from predictive distribution differences,” IEEE Trans. Cybern., vol. 43, no. 4, pp. 1153–1165, Aug. 2013. doi: 10.1109/TSMCB.2012.2225102
    [86]
    D. R. Wu, “Pool-based sequential active learning for regression,” IEEE Trans. Neural Netw. Learn. Syst., vol. 30, no. 5, pp. 1348–1359, May 2019. doi: 10.1109/TNNLS.2018.2868649
    [87]
    D. R. Wu, C. T. Lin, and J. Huang, “Active learning for regression using greedy sampling,” Inf. Sci., vol. 474, pp. 90–105, Feb. 2019. doi: 10.1016/j.ins.2018.09.060
    [88]
    Z. H. Peng, W. Zhang, N. Han, X. Z. Fang, P. P. Kang, and L. Y. Teng, “Active transfer learning,” IEEE Trans. Circuits Syst. Video Technol., vol. 30, no. 4, pp. 1022–1036, Apr. 2020. doi: 10.1109/TCSVT.2019.2900467
    [89]
    Z. H. Peng, Y. H. Jia, and J. H. Hou, “Non-negative transfer learning with consistent inter-domain distribution,” IEEE Signal Process. Lett., vol. 27, pp. 1720–1724, Oct. 2020. doi: 10.1109/LSP.2020.3025061
    [90]
    C. A. Yi, Y. H. Xu, H. Yu, Y. G. Yan, and Y. Liu, “Multi-component transfer metric learning for handling unrelated source domain samples,” Knowl.-Based Syst., vol. 203, p. 106132, Sept. 2020. doi: 10.1016/j.knosys.2020.106132
    [91]
    L. Y. Yang, Y. Balaji, S. N. Lim, and A. Shrivastava, “Curriculum manager for source selection in multi-source domain adaptation,” in Proc. 16th European Conf. Computer Vision, Glasgow, UK, 2020, pp. 608–624.
    [92]
    J. Garcke and T. Vanck, “Importance weighted inductive transfer learning for regression,” in Proc. Joint European Conf. Machine Learning and Knowledge Discovery in Databases, Nancy, France, 2014, pp. 466–481.
    [93]
    A. de Mathelin, G. Richard, F. Deheeger, M. Mougeot, and N. Vayatis, “Adversarial weighting for domain adaptation in regression,” in Proc. 33rd IEEE Int. Conf. Tools with Artificial Intelligence, Washington, USA, 2021, pp. 49–56.
    [94]
    M. S. Long, J. M. Wang, G. G. Ding, W. Cheng, X. Zhang, and W. Wang, “Dual transfer learning,” in Proc. SIAM Int. Conf. Data Mining, Anaheim, USA, 2012, pp. 540–551.
    [95]
    J. F. Shi, M. S. Long, Q. Liu, G. G. Ding, and J. M. Wang, “Twin bridge transfer learning for sparse collaborative filtering,” in Proc. 17th Pacific-Asia Conf. Knowledge Discovery and Data Mining, Gold Coast, Australia, 2013, pp. 496–507.
    [96]
    X. X. Shi, Q. Liu, W. Fan, P. S. Yu, and R. X. Zhu, “Transfer learning on heterogenous feature spaces via spectral transformation,” in Proc. IEEE Int. Conf. Data Mining, Sydney, Australia, 2010, pp. 1049–1054.
    [97]
    J. Y. Chen, F. Lécué, J. Pan, I. Horrocks, and H. J. Chen, “Knowledge-based transfer learning explanation,” in Proc. 16th Int. Conf. Principles of Knowledge Representation and Reasoning, Tempe, USA, 2018, pp. 349–358.
    [98]
    X. Y. Chen, S. N. Wang, B. Fu, M. S. Long, and J. M. Wang, “Catastrophic forgetting meets negative transfer: Batch spectral shrinkage for safe transfer learning,” in Proc. 33rd Int. Conf. Neural Information Processing Systems, Vancouver, Canada, 2019, pp. 171.
    [99]
    X. Y. Chen, S. N. Wang, M. S. Long, and J. M. Wang, “Transferability vs. discriminability: Batch spectral penalization for adversarial domain adaptation,” in Proc. 36th Int. Conf. Machine Learning, Long Beach, USA, 2019, pp. 1081–1090.
    [100]
    C. Q. Chen, Z. B. Zheng, X. H. Ding, Y. Huang, and Q. Dou, “Harmonizing transferability and discriminability for adapting object detectors,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, Seattle, USA, 2020, pp. 8866–8875.
    [101]
    K. Q. Y. Li, J. Lu, H. Zuo, and G. Q. Zhang, “Dynamic classifier alignment for unsupervised multi-source domain adaptation,” IEEE Trans. Knowl. Data Eng., 2022, DOI: 10.1109/TKDE.2022.3144423
    [102]
    V. Prabhu, S. Khare, D. Kartik, and J. Hoffman, “SENTRY: Selective entropy optimization via committee consistency for unsupervised domain adaptation,” in Proc. IEEE/CVF Int. Conf. Computer Vision, Montreal, Canada, 2021, pp. 8538–8547.
    [103]
    X. H. Li, J. J. Li, L. Zhu, G. Q. Wang, and Z. Huang, “Imbalanced source-free domain adaptation,” in Proc. 29th ACM Int. Conf. Multimedia, Chengdu, China, 2021, pp. 3330–3339.
    [104]
    P. Panareda Busto and J. Gall, “Open set domain adaptation,” in Proc. IEEE Int. Conf. Computer Vision, Venice, Italy, 2017, pp. 754–763.
    [105]
    J. Zhang, Z. W. Ding, W. Q. Li, and P. Ogunbona, “Importance weighted adversarial nets for partial domain adaptation,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018, pp. 8156–8164.
    [106]
    Z. J. Cao, M. S. Long, J. M. Wang, and M. I. Jordan, “Partial transfer learning with selective adversarial networks,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018, pp. 2724–2732.
    [107]
    Z. J. Cao, L. J. Ma, M. S. Long, and J. M. Wang, “Partial adversarial domain adaptation,” in Proc. 15th European Conf. Computer Vision, Munich, Germany, 2018, pp. 139–155.
    [108]
    Z. J. Cao, K. C. You, M. S. Long, J. M. Wang, and Q. Yang, “Learning to transfer examples for partial domain adaptation,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, Long Beach, USA, 2019, pp. 2980–2989.
    [109]
    K. C. You, M. S. Long, Z. J. Cao, J. M. Wang, and M. I. Jordan, “Universal domain adaptation,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, Long Beach, USA, 2019, pp. 2715–2724.
    [110]
    K. Saito, D. Kim, S. Sclaroff, and K. Saenko, “Universal domain adaptation through self-supervision,” in Proc. 34th Conf. Neural Information Processing Systems, Vancouver, Canada, 2020.
    [111]
    J. N. Kundu, N. Venkat, M. V. Rahul, and R. V. Babu, “Universal source-free domain adaptation,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, Seattle, USA, 2020, pp. 4543–4552.
    [112]
    Y. H. Li, N. Y. Wang, J. P. Shi, J. Y. Liu, and X. D. Hou, “Revisiting batch normalization for practical domain adaptation,” in Proc. 5th Int. Conf. Learning Representations, Toulon, France, 2017.
    [113]
    X. M. Wang, Y. Jin, M. S. Long, J. M. Wang, and M. I. Jordan, “Transferable normalization: Towards improving transferability of deep neural networks,” in Proc. 33rd Int. Conf. Neural Information Processing Systems, Vancouver, Canada, 2019, pp. 175.
    [114]
    Z. Q. Tang, Y. H. Gao, Y. Zhu, Z. Zhang, M. Li, and D. N. Metaxas, “CrossNorm and SelfNorm for generalization under distribution shifts,” in Proc. IEEE/CVF Int. Conf. Computer Vision, Montreal, Canada, 2021, pp. 52–61.
    [115]
    K. Z. Liang, J. Y. Zhang, O. O. Koyejo, and B. Li, “Does adversarial transferability indicate knowledge transferability?” in Proc. 38th Int. Conf. Machine Learning, 2021.
    [116]
    H. Salman, A. Ilyas, L. Engstrom, A. Kapoor, and A. Mądry, “Do adversarially robust ImageNet models transfer better?” in Proc. 34th Conf. Neural Information Processing Systems, Vancouver, Canada, 2020.
    [117]
    Z. Deng, L. J. Zhang, K. Vodrahalli, K. Kawaguchi, and J. Y. Zou, “Adversarial training helps transfer learning via better represen-tations,” in Proc. 35th Conf. Neural Information Processing Systems, 2021.
    [118]
    Q. C. Lao, X. Jiang, and M. Havaei, “Hypothesis disparity regularized mutual information maximization,” in Proc. 35th AAAI Conf. Artificial Intelligence, 2021.
    [119]
    D. Benavides-Prado, Y. S. Koh, and P. Riddle, “AccGenSVM: Selectively transferring from previous hypotheses,” in Proc. 26th Int. Joint Conf. Artificial Intelligence, Melbourne, Australia, 2017, pp. 1440–1446.
    [120]
    W. J. Fang, C. C. Chen, B. W. Song, L. Wang, J. Zhou, and K. Q. Zhu, “Adapted tree boosting for transfer learning,” in Proc. IEEE Int. Conf. Big Data, Los Angeles, USA, 2019, pp. 741–750.
    [121]
    Z. Y. Han, H. L. Sun, and Y. L. Yin, “Learning transferable parameters for unsupervised domain adaptation,” IEEE Trans. Image Process., vol. 31, pp. 6424–6439, Jun. 2022. doi: 10.1109/TIP.2022.3184848
    [122]
    X. H. Li, Y. Grandvalet, and F. Davoine, “Explicit inductive bias for transfer learning with convolutional networks,” in Proc. 35th Int. Conf. Machine Learning, Stockholm, Sweden, 2018, pp. 2830–2839.
    [123]
    J. Liang, D. P. Hu, and J. S. Feng, “Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation,” in Proc. 37th Int. Conf. Machine Learning, Vienna, Austria, 2020, pp. 560.
    [124]
    X. J. Li, H. Y. Xiong, H. C. Wang, Y. X. Rao, L. P. Liu, and J. Huan, “DELTA: Deep learning transfer using feature map with attention for convolutional networks,” in Proc. 7th Int. Conf. Learning Representations, New Orleans, USA, 2019.
    [125]
    K. C. You, Z. Kou, M. S. Long, and J. M. Wang, “Co-Tuning for transfer learning,” in Proc. 34th Conf. Neural Information Processing Systems, Vancouver, Canada, 2020, pp. 17236–17246.
    [126]
    J. O. Zhang, A. Sax, A. Zamir, L. Guibas, and J. Malik, “Side-tuning: A baseline for network adaptation via additive side networks,” in Proc. 16th European Conf. Computer Vision, Glasgow, UK, 2020, pp. 698–714.
    [127]
    M. Ishii and M. Sugiyama, “Source-free domain adaptation via distributional alignment by matching batch normalization statistics,” arXiv preprint arXiv: 2101.10842, 2021.
    [128]
    Z. Q. Gao, S. F. Zhang, K. Z. Huang, Q. F. Wang, and C. L. Zhong, “Gradient distribution alignment certificates better adversarial domain adaptation,” in Proc. IEEE/CVF Int. Conf. Computer Vision, Montreal, Canada, 2021, pp. 8917–8926.
    [129]
    H. Yoon and J. Li, “A novel positive transfer learning approach for telemonitoring of Parkinson’s disease,” IEEE Trans. Autom. Sci. Eng., vol. 16, no. 1, pp. 180–191, 2019. doi: 10.1109/TASE.2018.2874233
    [130]
    N. Xiao and L. Zhang, “Dynamic weighted learning for unsupervised domain adaptation,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, Nashville, USA, 2021, pp. 15237–15246.
    [131]
    K. Saito, D. Kim, P. Teterwak, S. Sclaroff, T. Darrell, and K. Saenko, “Tune it the right way: Unsupervised validation of domain adaptation via soft neighborhood density,” in Proc. IEEE/CVF Int. Conf. Computer Vision, Montreal, Canada, 2021, pp. 9164–9173.
    [132]
    K. C. You, X. M. Wang, M. S. Long, and M. I. Jordan, “Towards accurate model selection in deep unsupervised domain adaptation,” in Proc. 36th Int. Conf. Machine Learning, Long Beach, USA, 2019, pp. 7124–7133.
    [133]
    R. S. Wan, H. Y. Xiong, X. J. Li, Z. X. Zhu, and J. Huan, “Towards making deep transfer learning never hurt,” in Proc. IEEE Int. Conf. Data Mining, Beijing, China, 2019, pp. 578–587.
    [134]
    F. R. Lv, J. Liang, K. X. Gong, S. Li, C. H. Liu, H. Li, D. Liu, and G. R. Wang, “Pareto domain adaptation,” in Proc. 35th Conf. Neural Information Processing Systems, 2021.
    [135]
    Z. Y. Pei, Z. J. Cao, M. S. Long, and J. M. Wang, “Multi-adversarial domain adaptation,” in Proc. 32nd AAAI Conf. Artificial Intelligence, New Orleans, USA, 2018.
    [136]
    Y. X. Ge, D. P. Chen, and H. S. Li, “Mutual mean-teaching: Pseudo label refinery for unsupervised domain adaptation on person re-identification,” in Proc. 8th Int. Conf. Learning Representations, Addis Ababa, Ethiopia, 2020.
    [137]
    Z. M. Ding, S. Li, M. Shao, and Y. Fu, “Graph adaptive knowledge transfer for unsupervised domain adaptation,” in Proc. 15th European Conf. Computer Vision, Munich, Germany, 2018, pp. 36–52.
    [138]
    L. Gui, F. R. Xu, Q. Lu, J. C. Du, and Y. Zhou, “Negative transfer detection in transductive transfer learning,” Int. J. Mach. Learn. Cybern., vol. 9, no. 2, pp. 185–197, Feb. 2018. doi: 10.1007/s13042-016-0634-8
    [139]
    Q. Wang and T. Breckon, “Unsupervised domain adaptation via structured prediction based selective pseudo-labeling,” in Proc. 34th AAAI Conf. Artificial Intelligence, New York, USA, 2020, pp. 6243–6250.
    [140]
    J. Liang, R. He, Z. N. Sun, and T. N. Tan, “Exploring uncertainty in pseudo-label guided unsupervised domain adaptation,” Pattern Recognit., vol. 96, p. 106996, Dec. 2019. doi: 10.1016/j.patcog.2019.106996
    [141]
    L. Tian, Y. Q. Tang, L. C. Hu, Z. D. Ren, and W. S. Zhang, “Domain adaptation by class centroid matching and local manifold self-learning,” IEEE Trans. Image Process., vol. 29, pp. 9703–9718, Oct. 2020. doi: 10.1109/TIP.2020.3031220
    [142]
    Y. Jin, X. M. Wang, M. S. Long, and J. M. Wang, “Minimum class confusion for versatile domain adaptation,” in Proc. 16th European Conf. Computer Vision, Glasgow, UK, 2020, pp. 464–480.
    [143]
    E. Eaton and M. DesJardins, “Selective transfer between learning tasks using task-based boosting,” in Proc. 25th AAAI Conf. Artificial Intelligence, San Francisco, USA, 2011, pp. 337–342.
    [144]
    D. R. Wu, B. Lance, and V. Lawhern, “Transfer learning and active transfer learning for reducing calibration data in single-trial classification of visually-evoked potentials,” in Proc. IEEE Int. Conf. Systems, Man, and Cybernetics, San Diego, USA, 2014.
    [145]
    D. R. Wu, V. J. Lawhern, W. D. Hairston, and B. J. Lance, “Switching EEG headsets made easy: Reducing offline calibration effort using active weighted adaptation regularization,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 24, no. 11, pp. 1125–1137, Nov. 2016. doi: 10.1109/TNSRE.2016.2544108
    [146]
    M. Sugiyama, M. Krauledat, and K. R. Müller, “Covariate shift adaptation by importance weighted cross validation,” J. Mach. Learn. Res., vol. 8, pp. 985–1005, Dec. 2007.
    [147]
    J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, “How transferable are features in deep neural networks?” in Proc. 27th Int. Conf. Neural Information Processing Systems, Montréal, Canada, 2014, pp. 3320–3328.
    [148]
    S. Li, S. J. Song, G. Huang, Z. M. Ding, and C. Wu, “Domain invariant and class discriminative feature learning for visual domain adaptation,” IEEE Trans. Image Proc., vol. 27, no. 9, pp. 4260–4273, Sept. 2018. doi: 10.1109/TIP.2018.2839528
    [149]
    W. Zhang and D. R. Wu, “Discriminative joint probability maximum mean discrepancy (DJP-MMD) for domain adaptation,” in Proc. Int. Joint Conf. Neural Networks, Glasgow, UK, 2020, pp. 1–8.
    [150]
    H. He and D. R. Wu, “Different set domain adaptation for brain-computer interfaces: A label alignment approach,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 28, no. 5, pp. 1091–1108, May 2020. doi: 10.1109/TNSRE.2020.2980299
    [151]
    S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in Proc. 32nd Int. Conf. Machine Learning, Lille, France, 2015, pp. 448–456.
    [152]
    A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” in Proc. 6th Int. Conf. Learning Representations, Vancouver, Canada, 2018.
    [153]
    X. Zhang, D. R. Wu, L. Y. Ding, H. B. Luo, C. T. Lin, T. P. Jung, and R. Chavarriaga, “Tiny noise, big mistakes: Adversarial perturbations induce errors in brain-computer interface spellers,” Natl. Sci. Rev., vol. 8, no. 4, p. nwaa233, Apr. 2021. doi: 10.1093/nsr/nwaa233
    [154]
    Z. H. Liu, L. B. Meng, X. Zhang, W. L. Fang, and D. R. Wu, “Universal adversarial perturbations for CNN classifiers in EEG-based BCIs,” J. Neural Eng., vol. 18, p. 0460a4, Jul. 2021. doi: 10.1088/1741-2552/ac0f4c
    [155]
    P. Jiang, A. M. Wu, Y. H. Han, Y. F. Shao, M. Y. Qi, and B. S. Li, “Bidirectional adversarial training for semi-supervised domain adaptation,” in Proc. 29th Int. Joint Conf. Artificial Intelligence, Yokohama, Japan, 2020, pp. 934–940.
    [156]
    Y. Grandvalet and Y. Bengio, “Semi-supervised learning by entropy minimization,” in Proc. 17th Int. Conf. Neural Information Processing Systems, Vancouver, Canada, 2004.
    [157]
    K. Saito, D. Kim, S. Sclaroff, T. Darrell, and K. Saenko, “Semi-supervised domain adaptation via minimax entropy,” in Proc. IEEE/CVF Int. Conf. Computer Vision, Seoul, Korea (South), 2019, pp. 8049–8057.
    [158]
    P. P. Busto, A. Iqbal, and J. Gall, “Open set domain adaptation for image and action recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 2, pp. 413–429, Nov. 2018.
    [159]
    X. C. Peng, Q. X. Bai, X. D. Xia, Z. J. Huang, K. Saenko, and B. Wang, “Moment matching for multi-source domain adaptation,” in Proc. IEEE/CVF Int. Conf. Computer Vision, Seoul, Korea (South), 2019, pp. 1406–1415.
    [160]
    X. B. Lv, Y. Guan, and B. Y. Deng, “Transfer learning based clinical concept extraction on data from multiple sources,” J. Biomed. Inf., vol. 52, pp. 55–64, Dec. 2014. doi: 10.1016/j.jbi.2014.05.006
    [161]
    S. T. Niu, M. Liu, Y. X. Liu, J. Wang, and H. B. Song, “Distant domain transfer learning for medical imaging,” IEEE J. Biomed. Health Inf., vol. 25, no. 10, pp. 3784–3793, Oct. 2021. doi: 10.1109/JBHI.2021.3051470
    [162]
    R. Peres da Silva, C. Suphavilai, and N. Nagarajan, “TUGDA: Task uncertainty guided domain adaptation for robust generalization of cancer drug response prediction from in vitro to in vivo settings,” Bioinformatics, vol. 37, no. S1, pp. i76–i83, Jul. 2021.
    [163]
    L. B. Meng, J. Huang, Z. G. Zeng, X. Jiang, S. Yu, T. P. Jung, C. T. Lin, R. Chavarriaga, and D. R. Wu, “EEG-based brain-computer interfaces are vulnerable to backdoor attacks,” arXiv preprint arXiv: 2011.00101, 2020.
    [164]
    H. He and D. R. Wu, “Transfer learning for brain-computer interfaces: A Euclidean space data alignment approach,” IEEE Trans. Biomed. Eng., vol. 67, no. 2, pp. 399–410, Feb. 2020. doi: 10.1109/TBME.2019.2913914
    [165]
    N. Arivazhagan, A. Bapna, O. Firat, D. Lepikhin, M. Johnson, M. Krikun, M. X. Chen, Y. Cao, G. Foster, C. Cherry, W. Macherey, Z. F. Chen, and Y. H. Wu, “Massively multilingual neural machine translation in the wild: Findings and challenges,” arXiv preprint arXiv: 1907.05019, 2019.
    [166]
    Z. R. Wang, Z. C. Lipton, and Y. Tsvetkov, “On negative interference in multilingual models: Findings and a meta-learning treatment,” in Proc. Conf. Empirical Methods in Natural Language Processing, 2020.
    [167]
    J. Guo, D. J. Shah, and R. Barzilay, “Multi-source domain adaptation with mixture of experts,” in Proc. Conf. Empirical Methods in Natural Language Processing, Brussels, Belgium, 2018, pp. 4694–4703.
    [168]
    S. Gao, H. Luo, D. Chen, S. T. Li, P. Gallinari, and J. Guo, “Cross-domain recommendation via cluster-level latent factor model,” in Proc. Joint European Conf. Machine Learning and Knowledge Discovery in Databases, Prague, Czech Republic, 2013, pp. 161–176.
    [169]
    C. Perlich, B. Dalessandro, T. Raeder, O. Stitelman, and F. Provost, “Machine learning for targeted display advertising: Transfer learning in action,” Mach. Learn., vol. 95, no. 1, pp. 103–127, Apr. 2014. doi: 10.1007/s10994-013-5375-2
    [170]
    H. W. Zhang, X. W. Kong, and Y. J. Zhang, “Selective knowledge transfer for cross-domain collaborative recommendation,” IEEE Access, vol. 9, pp. 48039–48051, Feb. 2021. doi: 10.1109/ACCESS.2021.3061279
    [171]
    O. Moreno, B. Shapira, L. Rokach, and G. Shani, “TALMUD: Transfer learning for multiple domains,” in Proc. 21st ACM Int. Conf. Information and Knowledge Management, Maui, USA, 2012, pp. 425–434.
    [172]
    A. Corso and M. J. Kochenderfer, “Transfer learning for efficient iterative safety validation,” in Proc. 35th AAAI Conf. Artificial Intelligence, 2021.
    [173]
    Z. B. Zhao, Q. Y. Zhang, X. L. Yu, C. Sun, S. B. Wang, R. Q. Yan, and X. F. Chen, “Applications of unsupervised deep transfer learning to intelligent fault diagnosis: A survey and comparative study,” IEEE Trans. Instrum. Meas., vol. 70, p. 3525828, Sept. 2021.
    [174]
    F. L. Da Silva and A. H. R. Costa, “A survey on transfer learning for multiagent reinforcement learning systems,” J. Artif. Intell. Res., vol. 64, pp. 645–703, Mar. 2019. doi: 10.1613/jair.1.11396
    [175]
    M. Ranaweera and Q. H. Mahmoud, “Virtual to real-world transfer learning: A systematic review,” Electronics, vol. 10, no. 12, p. 1491, Jun. 2021. doi: 10.3390/electronics10121491
    [176]
    X. Li, W. Zhang, H. Ma, Z. Luo, and X. Li, “Partial transfer learning in machinery cross-domain fault diagnostics using class-weighted adversarial networks,” Neural Netw., vol. 129, pp. 313–322, Sept. 2020. doi: 10.1016/j.neunet.2020.06.014
    [177]
    A. Azulay and Y. Weiss, “Why do deep convolutional networks generalize so poorly to small image transformations?” J. Mach. Learn. Res., vol. 20, no. 184, pp. 1–25, 2019.
    [178]
    M. Jiménez-Guarneros and P. Gómez-Gil, “A study of the effects of negative transfer on deep unsupervised domain adaptation methods,” Exp. Syst. Appl., vol. 167, p. 114088, Apr. 2021. doi: 10.1016/j.eswa.2020.114088
    [179]
    D. Berthelot, R. Roelofs, K. Sohn, N. Carlini, and A. Kurakin, “AdaMatch: A unified approach to semi-supervised learning and domain adaptation,” in Proc. 10th Int. Conf. Learning Representations, 2022.
    [180]
    S. H. Tan, X. C. Peng, and K. Saenko, “Class-imbalanced domain adaptation: An empirical odyssey,” in Proc. European Conf. Computer Vision, Glasgow, UK, 2020, pp. 585–602.
    [181]
    R. N. Duan, J. Y. Zhu, and B. L. Lu, “Differential entropy feature for EEG-based emotion classification,” in Proc. 6th Int. IEEE/EMBS Conf. Neural Engineering, San Diego, USA, 2013, pp. 81–84.
    [182]
    J. Liang, D. P. Hu, and J. S. Feng, “Domain adaptation with auxiliary target domain-oriented classifier,” in Proc. IEEE/CVF Int. Conf. Computer Vision and Pattern Recognition, Nashville, USA, 2021, pp. 16627–16637.
    [183]
    K. M. He, X. Y. Zhang, S. Q. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Las Vegas, USA, 2016, pp. 770–778.




    Highlights

    • This paper performs the first comprehensive review of approaches to overcome or mitigate negative transfer (NT), a long-standing and challenging problem in transfer learning.
    • It provides the definition of NT and its causes, and reviews over fifty representative approaches for overcoming/mitigating NT, grouped into three categories: domain similarity estimation, safe transfer, and NT mitigation (a minimal domain-similarity check is sketched after this list). Several application areas, existing challenges, and future research directions of NT are also discussed.
    • It provides guidelines on NT task construction and baseline selection, and releases an open-source benchmark for NT research with three datasets and over twenty algorithms.
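    To make the domain-similarity-estimation category concrete, below is a minimal, self-contained sketch (not code from the NT-Benchmark repository; the function name mmd2_rbf is hypothetical) that screens a candidate source domain with a kernel two-sample statistic: the biased estimate of the squared Maximum Mean Discrepancy (MMD) between source and target features. A large value suggests dissimilar domains and hence an elevated risk of negative transfer.

    ```python
    # Sketch: pre-transfer domain-similarity screening via squared MMD
    # with an RBF kernel. Larger MMD => more dissimilar domains => higher NT risk.
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel

    def mmd2_rbf(Xs: np.ndarray, Xt: np.ndarray, gamma: float = 1.0) -> float:
        """Biased estimate of squared MMD between source Xs and target Xt."""
        k_ss = rbf_kernel(Xs, Xs, gamma=gamma).mean()  # E[k(s, s')]
        k_tt = rbf_kernel(Xt, Xt, gamma=gamma).mean()  # E[k(t, t')]
        k_st = rbf_kernel(Xs, Xt, gamma=gamma).mean()  # E[k(s, t)]
        return k_ss + k_tt - 2.0 * k_st

    rng = np.random.default_rng(0)
    Xs = rng.normal(0.0, 1.0, size=(200, 16))        # source-domain features
    Xt_near = rng.normal(0.2, 1.0, size=(200, 16))   # similar target domain
    Xt_far = rng.normal(3.0, 1.0, size=(200, 16))    # dissimilar target domain

    print(f"MMD^2 (similar target):    {mmd2_rbf(Xs, Xt_near):.4f}")  # small
    print(f"MMD^2 (dissimilar target): {mmd2_rbf(Xs, Xt_far):.4f}")   # large
    ```

    In practice such a statistic would be computed on learned features and combined with label information; the survey reviews far more refined estimators, but even this crude check can flag clearly unrelated source domains before transfer.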
