A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical and experimental research and development in all areas of automation

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 11.8, Top 4% (SCI Q1)
  • CiteScore: 17.6, Top 3% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: Y. Liu, X. Wu, Y. Bo, J. Wang, and L. Ma, “A transfer learning framework for deep multi-agent reinforcement learning,” IEEE/CAA J. Autom. Sinica, 2024. doi: 10.1109/JAS.2023.124173

A Transfer Learning Framework for Deep Multi-Agent Reinforcement Learning

doi: 10.1109/JAS.2023.124173

