IEEE/CAA Journal of Automatica Sinica

A journal of IEEE and CAA that publishes high-quality papers in English on original theoretical and experimental research and development in all areas of automation.

Volume 10, Issue 12, Dec. 2023

  • JCR Impact Factor: 11.8, Top 4% (SCI Q1)
  • CiteScore: 17.6, Top 3% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: Z. F. Zheng, L. Y. Teng, W. Zhang, N. Q. Wu, and S. H. Teng, "Knowledge transfer learning via dual density sampling for resource-limited domain adaptation," IEEE/CAA J. Autom. Sinica, vol. 10, no. 12, pp. 2269–2291, Dec. 2023. doi: 10.1109/JAS.2023.123342

Knowledge Transfer Learning via Dual Density Sampling for Resource-Limited Domain Adaptation

doi: 10.1109/JAS.2023.123342
Funds: This work was supported in part by the Key-Area Research and Development Program of Guangdong Province (2020B010166006), the National Natural Science Foundation of China (61972102), the Guangzhou Science and Technology Plan Project (023A04J1729), and the Science and Technology Development Fund (FDCT), Macau SAR (015/2020/AMJ).
Abstract
Most existing domain adaptation (DA) methods aim to achieve favorable performance in complicated environments by sampling. However, three unsolved problems limit their efficiency: i) they adopt global sampling but neglect to exploit global and local sampling simultaneously; ii) they transfer knowledge from either a global or a local perspective, overlooking the transmission of confident knowledge from both; and iii) they apply repeated sampling during iteration, which is time-consuming. To address these problems, knowledge transfer learning via dual density sampling (KTL-DDS) is proposed in this study. It consists of three parts: i) dual density sampling (DDS), which jointly leverages two sampling methods associated with different views, i.e., global density sampling, which extracts representative samples with the most common features, and local density sampling, which selects representative samples with critical boundary information; ii) consistent maximum mean discrepancy (CMMD), which reduces intra- and cross-domain risks and guarantees high consistency of knowledge by shortening the distances between every two of the four subsets collected by DDS; and iii) knowledge dissemination (KD), which transmits confident and consistent knowledge from the representative target samples with global and local properties to the whole target domain by preserving the neighboring relationships of the target domain. Mathematical analyses show that DDS avoids repeated sampling during iteration. With the above three actions, confident knowledge with both global and local properties is transferred, and memory and running time are greatly reduced. In addition, a general framework named dual density sampling approximation (DDSA) is derived, which can be easily applied to other DA algorithms. Extensive experiments on five datasets in clean, label-corruption (LC), feature-missing (FM), and LC&FM environments demonstrate the encouraging performance of KTL-DDS.
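The dual density sampling idea described above can be sketched as follows. This is a hedged illustration only: the Euclidean metric, the density definitions (inverse mean distance to all samples for the global view, inverse mean distance to the k nearest neighbors for the local view), and the top-q selection rule are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def dual_density_sampling(X, q=0.2, k=5):
    """Illustrative sketch of dual density sampling (DDS).

    Assumptions (not the paper's exact formulation): Euclidean distances;
    global density = inverse mean distance to all other samples; local
    density = inverse mean distance to the k nearest neighbors; a simple
    top-q selection rule per view.
    """
    n = X.shape[0]
    # Pairwise Euclidean distance matrix.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    # Global density: samples near the bulk of the data (most common
    # features) have a small mean distance to all other samples.
    global_density = 1.0 / (D.sum(axis=1) / (n - 1) + 1e-12)
    # Local density: based only on the k nearest neighbors, so it reflects
    # neighborhood structure rather than the global shape.
    knn_dist = np.sort(D, axis=1)[:, 1:k + 1]   # drop the zero self-distance
    local_density = 1.0 / (knn_dist.mean(axis=1) + 1e-12)
    m = max(1, int(q * n))
    global_idx = np.argsort(-global_density)[:m]  # most "typical" samples
    local_idx = np.argsort(local_density)[:m]     # sparse/boundary samples
    # Avoid selecting the same sample twice (cf. footnote 3).
    combined = np.unique(np.concatenate([global_idx, local_idx]))
    return global_idx, local_idx, combined
```

Because the densities are computed once from a fixed distance matrix, no resampling is needed across iterations, matching the paper's claim that DDS avoids repeated sampling.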

     

Footnotes:
1 Generally, a deep learning method takes several hours or days, while a shallow learning method takes a few seconds or minutes.
2 Global sampling means that samples are selected according to global metrics (e.g., the overall shape of the object, or the average distance to all other samples), while local sampling means that samples are selected according to local metrics (e.g., a small area of the image, or the average distance to neighboring samples).
3 Due to overlapping samples (as shown in Figs. 6(a)−6(j)), the same sample may be selected twice. However, repeated selection can be avoided programmatically, which means that $ 0<q_{\rm total}\le 1 $.
4 Generally, the deep learning methods take at least 6 and 24 hours on Office31 and Office-Home, respectively, with one GeForce RTX 3080.
5 The null hypothesis
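The CMMD component shortens the distance between every two of the four subsets collected by DDS. As a simplified illustration, the sketch below uses the standard linear-kernel empirical MMD (the squared distance between sample means) and a plain pairwise sum; the paper's CMMD objective with its intra- and cross-domain weighting is not reproduced here.

```python
import numpy as np

def mmd_linear(X, Y):
    """Empirical maximum mean discrepancy (MMD) with a linear kernel:
    the squared Euclidean distance between the two sample means."""
    diff = X.mean(axis=0) - Y.mean(axis=0)
    return float(diff @ diff)

def pairwise_mmd_sum(subsets):
    """Sum of MMDs over every pair of subsets -- an assumed, simplified
    reading of 'shortening the distances of every two subsets among the
    four subsets collected by DDS'."""
    total = 0.0
    for i in range(len(subsets)):
        for j in range(i + 1, len(subsets)):
            total += mmd_linear(subsets[i], subsets[j])
    return total
```

Driving this sum toward zero aligns the means of all four subsets, which is the intuition behind enforcing consistency across the global/local, source/target views.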

    Figures(7)  / Tables(14)

    Article Metrics

Article views: 381; PDF downloads: 98

    Highlights

    • A method called knowledge transfer learning via dual density sampling (KTL-DDS) is proposed
    • The proposed method selects confident samples by utilizing both global and local densities
    • The proposed method ensures sampling consistency and requires no resampling
    • It achieves satisfactory accuracy with reduced computational time
    • The proposed framework is open and can be extended to existing algorithms
