A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation
Volume 10, Issue 11, Nov. 2023

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 11.8, Top 4% (SCI Q1)
  • CiteScore: 17.6, Top 3% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: S. H. Teng, Z. F. Zheng, N. Q. Wu, L. Y. Teng, and W. Zhang, “Adaptive graph embedding with consistency and specificity for domain adaptation,” IEEE/CAA J. Autom. Sinica, vol. 10, no. 11, pp. 2094–2107, Nov. 2023. doi: 10.1109/JAS.2023.123318

Adaptive Graph Embedding With Consistency and Specificity for Domain Adaptation

doi: 10.1109/JAS.2023.123318
Funds: This work was supported in part by the Key-Area Research and Development Program of Guangdong Province (2020B010166006), the National Natural Science Foundation of China (61972102), the Guangzhou Science and Technology Plan Project (023A04J1729), and the Science and Technology Development Fund (FDCT), Macau SAR (015/2020/AMJ)
  • Abstract: Domain adaptation (DA) aims to find a subspace where the discrepancies between the source and target domains are reduced. Based on this subspace, a classifier trained on the labeled source samples can classify unlabeled target samples well. Existing approaches leverage graph embedding learning to explore such a subspace. Unfortunately, due to 1) the interaction between the consistency and specificity of samples, and 2) the joint impact of degenerated features and incorrect labels in the samples, existing approaches may assign unsuitable similarities, which restricts their performance. In this paper, we propose an approach called adaptive graph embedding with consistency and specificity (AGE-CS) to cope with these issues. AGE-CS consists of two methods, i.e., graph embedding with consistency and specificity (GECS) and adaptive graph embedding (AGE). GECS jointly learns the similarity of samples under the geometric distance and semantic similarity metrics, while AGE adaptively adjusts the relative importance between geometric distance and semantic similarity during the iterations. With AGE-CS, neighborhood samples with the same label are rewarded, while neighborhood samples with different labels are penalized. As a result, compact structures are preserved, and superior performance is achieved. Extensive experiments on five benchmark datasets demonstrate that the proposed method outperforms other graph embedding methods.
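This landing page does not include any implementation, so the following NumPy sketch is only an editor's illustration of the general idea summarized in the abstract: build an affinity graph that mixes a geometric distance kernel with a semantic (pseudo-label agreement) term, and let the trade-off weight adapt across iterations. All names and choices here (build_affinity, the Gaussian kernel, the k-NN sparsification, the linear alpha schedule) are assumptions made for illustration; they are not taken from the AGE-CS paper.

```python
# Illustrative sketch only -- NOT the authors' AGE-CS implementation.
import numpy as np

def build_affinity(X, pseudo_labels, alpha, k=10, sigma=1.0):
    """Mix a geometric similarity (Gaussian kernel on pairwise distances)
    with a semantic similarity (+1 same pseudo-label, -1 different),
    weighted by an adaptive trade-off alpha, on a k-NN support."""
    n = X.shape[0]
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # squared Euclidean distances
    np.fill_diagonal(d2, np.inf)                      # ignore self-similarity
    geo = np.exp(-d2 / (2.0 * sigma ** 2))            # geometric term
    sem = np.where(pseudo_labels[:, None] == pseudo_labels[None, :], 1.0, -1.0)
    W = (1.0 - alpha) * geo + alpha * sem             # adaptive mixture
    # keep only the k nearest neighbours of each sample (sparse local graph)
    keep = np.argsort(d2, axis=1)[:, :k]
    mask = np.zeros((n, n), dtype=bool)
    mask[np.repeat(np.arange(n), k), keep.ravel()] = True
    return np.where(mask | mask.T, W, 0.0)            # symmetrised support

# Toy usage: the semantic term gets more weight as pseudo-labels (assumed to
# come from some classifier) become more reliable over the iterations.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
pseudo = rng.integers(0, 4, size=200)
for t in range(1, 6):
    alpha = min(0.9, 0.1 * t)          # hypothetical adaptive schedule
    W = build_affinity(X, pseudo, alpha)
    # ... a downstream step would re-estimate the projection and pseudo-labels here
```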

     


    Figures (2) / Tables (8)

    Article Metrics

    Article views: 622, PDF downloads: 139

    Highlights

    • Consistency and specificity components are deeply mined to transfer more knowledge
    • A unified graph learning framework is built to acquire additional knowledge
    • An algorithm is implemented to adaptively adjust the significance of consistency and specificity
    • The proposed method is validated mathematically and empirically
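As context for the third highlight (adaptively adjusting the significance of consistency and specificity), a generic class-aware graph-embedding objective of this family can be written as below. This is an editor's sketch in hypothetical notation (P, W^g, W^s, mu); it is not the formulation used in the paper.

```latex
% Illustrative only: a generic class-aware graph-embedding objective, not the
% AGE-CS objective. P is a learned projection, W^g and W^s are geometric and
% semantic affinity graphs, and \mu is an adaptively adjusted trade-off.
\[
\min_{P,\; 0 \le \mu \le 1} \;
\sum_{i,j} \bigl[ (1-\mu)\, W^{g}_{ij} + \mu\, W^{s}_{ij} \bigr]\,
\bigl\| P^{\top}x_i - P^{\top}x_j \bigr\|_2^{2}
\quad \text{s.t.} \quad P^{\top}P = I,
\]
% where W^s_{ij} = +1 if x_i and x_j are neighbours with the same (pseudo-)label
% and -1 otherwise, so same-label neighbours are pulled together ("rewarded")
% and different-label neighbours are pushed apart ("penalized").
```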
