A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation.
Volume 10 Issue 11
Nov. 2023

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 11.8, Top 4% (SCI Q1)
  • CiteScore: 17.6, Top 3% (Q1)
  • Google Scholar h5-index: 77, Top 5
H. N. Huang, G. X. Zhou, N. Y. Liang, Q. B. Zhao, and S. L. Xie, “Diverse deep matrix factorization with hypergraph regularization for multi-view data representation,” IEEE/CAA J. Autom. Sinica, vol. 10, no. 11, pp. 2154–2167, Nov. 2023. doi: 10.1109/JAS.2022.105980

Diverse Deep Matrix Factorization With Hypergraph Regularization for Multi-View Data Representation

doi: 10.1109/JAS.2022.105980
Funds:  This work was supported by the National Natural Science Foundation of China (62073087, 62071132, 61973090)
  • Abstract: Deep matrix factorization (DMF) has been demonstrated to be a powerful tool for capturing the complex hierarchical information of multi-view data. However, existing multi-view DMF methods mainly explore the consistency of multi-view data while neglecting the diversity among different views as well as the high-order relationships within the data, resulting in the loss of valuable complementary information. In this paper, we design a hypergraph-regularized diverse deep matrix factorization (HDDMF) model for multi-view data representation, which jointly exploits multi-view diversity and high-order manifold structure in a multi-layer factorization framework. A novel diversity-enhancement term is designed to exploit the structural complementarity between different views of the data. Hypergraph regularization is utilized to preserve the high-order geometric structure of the data in each view. An efficient iterative optimization algorithm with a theoretical convergence analysis is developed to solve the proposed model. Experimental results on five real-world data sets demonstrate that the proposed method significantly outperforms state-of-the-art multi-view learning approaches.
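The multi-layer factorization at the heart of DMF approximates the data matrix as a product of several layer-wise bases and a deep representation, X ≈ Z₁Z₂⋯Z_mH. The sketch below is an illustrative alternating least-squares version in plain NumPy only; the paper's HDDMF additionally imposes non-negativity, the cross-view diversity term, and hypergraph regularization, none of which are shown here, and the function name `deep_mf` is our own.

```python
import numpy as np

def deep_mf(X, layer_sizes, n_iter=100, seed=0):
    """Alternating least-squares sketch of X ~= Z_1 Z_2 ... Z_m H."""
    rng = np.random.default_rng(seed)
    dims = [X.shape[0]] + list(layer_sizes)
    Z = [rng.standard_normal((dims[i], dims[i + 1]))
         for i in range(len(layer_sizes))]
    H = rng.standard_normal((dims[-1], X.shape[1]))
    for _ in range(n_iter):
        for i in range(len(Z)):
            # Products of the factors to the left/right of Z_i
            # (identity on the left, H alone on the right, when empty).
            left = np.eye(X.shape[0])
            for M in Z[:i]:
                left = left @ M
            right = H
            for M in reversed(Z[i + 1:]):
                right = M @ right
            # Minimum-norm least-squares update of Z_i with the rest fixed.
            Z[i] = np.linalg.pinv(left) @ X @ np.linalg.pinv(right)
        # Update the deepest representation H given all Z_i.
        prod = np.eye(X.shape[0])
        for M in Z:
            prod = prod @ M
        H = np.linalg.pinv(prod) @ X
    return Z, H
```

Each update is an exact coordinate minimizer of the Frobenius reconstruction error, so the objective decreases monotonically; HDDMF's multiplicative updates play the analogous role under its constraints.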

     

    Figures (8) / Tables (7)

    Article Metrics

    Article views (724)  PDF downloads (80)

    Highlights

    • Under the assumption that the multiple views of the data carry diverse information, a diversity-enhanced deep matrix factorization based multi-view representation learning model is established to explore the structural complementarity that exists both within and between views
    • Hypergraph regularization is performed to preserve the intrinsic geometric structure, capturing high-order relations in the view-specific data locality and strengthening the model's representation ability
    • We develop an efficient algorithm for optimizing HDDMF and demonstrate that it monotonically decreases the HDDMF objective function and converges to a stationary point
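The high-order locality in the second highlight is typically encoded by a hypergraph Laplacian in the style of Zhou et al.'s "Learning with hypergraphs": L = I − D_v^{−1/2} H W D_e^{−1} Hᵀ D_v^{−1/2}, where H is the vertex–hyperedge incidence matrix. The sketch below is an illustrative construction only (one k-nearest-neighbour hyperedge per sample, unit hyperedge weights, brute-force distances); the paper's exact construction may differ.

```python
import numpy as np

def hypergraph_laplacian(X, k=3):
    """Normalized hypergraph Laplacian over the columns (samples) of X."""
    n = X.shape[1]
    # Pairwise squared Euclidean distances between samples (brute force).
    D2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    # Incidence matrix: hyperedge j = sample j plus its k nearest neighbours.
    H = np.zeros((n, n))
    for j in range(n):
        nbrs = np.argsort(D2[:, j])[:k + 1]  # includes j itself (distance 0)
        H[nbrs, j] = 1.0
    w = np.ones(n)            # unit hyperedge weights (simplification)
    dv = H @ w                # vertex degrees
    de = H.sum(axis=0)        # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    Theta = Dv_inv_sqrt @ H @ np.diag(w / de) @ H.T @ Dv_inv_sqrt
    return np.eye(n) - Theta  # symmetric positive semi-definite
```

Adding tr(VᵀLV) to a factorization objective then penalizes representations V that vary sharply within any hyperedge, which is how the regularizer captures relations beyond pairwise similarity.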

