A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation.
Volume 11, Issue 11, Nov. 2024

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 15.3, Top 1 (SCI Q1)
  • CiteScore: 23.5, Top 2% (Q1)
  • Google Scholar h5-index: 77, Top 5
Y. Lin, Z. Yu, K. Yang, Z. Fan, and  C. L. P. Chen,  “Boosting adaptive weighted broad learning system for multi-label learning,” IEEE/CAA J. Autom. Sinica, vol. 11, no. 11, pp. 2204–2219, Nov. 2024. doi: 10.1109/JAS.2024.124557

Boosting Adaptive Weighted Broad Learning System for Multi-Label Learning

doi: 10.1109/JAS.2024.124557
Funds: This work was supported in part by the National Key R&D Program of China (2023YFA1011601), in part by the Major Key Project of PCL, China (PCL2023AS7-1), in part by the National Natural Science Foundation of China (U21A20478, 62106224, 92267203), in part by the Science and Technology Major Project of Guangzhou (202007030006), in part by the Major Key Project of PCL (PCL2021A09), and in part by the Guangzhou Science and Technology Plan Project (2024A04J3749).
  • Multi-label classification is a challenging problem that has attracted significant attention from researchers, particularly in the domains of image and text attribute annotation. However, multi-label datasets are prone to severe intra-class and inter-class imbalance, which can significantly degrade classification performance. To address these issues, we propose the multi-label weighted broad learning system (MLW-BLS) from the perspective of label imbalance weighting and label correlation mining. We further propose the multi-label adaptive weighted broad learning system (MLAW-BLS), which adaptively adjusts the label weights and values of MLW-BLS and constructs an efficient imbalanced classifier set. Extensive experiments on various datasets demonstrate the effectiveness of the proposed model and its superiority over other advanced approaches.
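The abstract's core idea of label imbalance weighting can be illustrated with a small sketch. The actual MLW-BLS weighting scheme is defined in the paper itself; the sketch below assumes a simple inverse-frequency weighting per label and a per-label weighted ridge regression for a BLS-style output layer. The function names `label_imbalance_weights` and `weighted_ridge_per_label` are hypothetical, not from the paper.

```python
import numpy as np

def label_imbalance_weights(Y):
    """Y: (n_samples, n_labels) binary label matrix.
    Returns a per-instance, per-label weight matrix of the same shape,
    up-weighting minority positives (assumed inverse-frequency rule)."""
    n, q = Y.shape
    pos = np.clip(Y.sum(axis=0), 1, n - 1)      # positives per label, clipped
    w_pos = n / (2.0 * pos)                      # weight for positive entries
    w_neg = n / (2.0 * (n - pos))                # weight for negative entries
    return np.where(Y == 1, w_pos, w_neg)

def weighted_ridge_per_label(H, Y, W, lam=1e-3):
    """Solve, for each label j, the weighted ridge problem
    beta_j = argmin ||sqrt(W_j) (H beta_j - Y_j)||^2 + lam ||beta_j||^2."""
    d = H.shape[1]
    betas = np.zeros((d, Y.shape[1]))
    for j in range(Y.shape[1]):
        HtW = H.T * W[:, j]                      # H^T diag(W_j), shape (d, n)
        betas[:, j] = np.linalg.solve(HtW @ H + lam * np.eye(d), HtW @ Y[:, j])
    return betas

rng = np.random.default_rng(0)
H = rng.standard_normal((200, 16))               # stand-in for BLS feature/enhancement nodes
Y = (rng.random((200, 3)) < [0.05, 0.5, 0.9]).astype(float)  # imbalanced labels
W = label_imbalance_weights(Y)
beta = weighted_ridge_per_label(H, Y, W)
print(beta.shape)  # (16, 3)
```

Rare positive labels receive larger weights, so the closed-form solution pays more attention to minority instances than an unweighted pseudoinverse would.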




    Figures (5) / Tables (11)


    Highlights

    • Aiming at the serious multi-label imbalance problem, this paper proposes a multi-label weighted broad learning system (MLW-BLS), which alleviates the imbalance problem from the perspective of label imbalance weighting and label correlation mining.
    • Furthermore, this paper proposes the multi-label adaptive weighted broad learning system (MLAW-BLS), which adaptively adjusts the corresponding label weights and values of MLW-BLS to construct an efficient imbalanced classifier set, providing a novel way to overcome the multi-label imbalance problem.
    • Extensive comparative experiments are conducted on 30 datasets with four metrics to evaluate MLAW-BLS against seven mainstream algorithms. The results show that MLAW-BLS is superior to other state-of-the-art methods, and ablation experiments on 12 datasets verify the effectiveness of each module of MLAW-BLS.
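The "boosting adaptive" step in the highlights, i.e., building a classifier set whose instance weights adapt round by round, can be sketched as follows. The exact update rule of MLAW-BLS is given in the paper; this sketch assumes a standard AdaBoost-like exponential reweighting applied independently per label, and the function name `boost_update` is hypothetical.

```python
import numpy as np

def boost_update(W, Y, Y_pred, lr=0.5):
    """W, Y, Y_pred: (n_samples, n_labels) arrays.
    Increase weights of misclassified entries, decrease weights of
    correct ones, then renormalize each label's weights to sum to 1."""
    agree = np.where(Y == Y_pred, 1.0, -1.0)         # +1 correct, -1 wrong
    W_new = W * np.exp(-lr * agree)                  # exponential reweighting
    return W_new / W_new.sum(axis=0, keepdims=True)  # per-label normalization

n, q = 6, 2
W = np.full((n, q), 1.0 / n)                          # uniform initial weights
Y = np.array([[1, 0], [1, 0], [0, 1], [0, 1], [1, 1], [0, 0]])
Y_pred = np.array([[1, 0], [0, 0], [0, 1], [1, 1], [1, 0], [0, 0]])
W1 = boost_update(W, Y, Y_pred)
print(W1[1, 0] > W1[0, 0])  # True: misclassified instance gains weight
```

Each round, the next base learner is trained under `W1`, so instances (and labels) the ensemble currently gets wrong dominate the next fit, which is how a boosting loop focuses capacity on hard, typically minority, labels.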
