IEEE/CAA Journal of Automatica Sinica
Citation: X. K. He and C. Lv, “Towards energy-efficient autonomous driving: A multi-objective reinforcement learning approach,” IEEE/CAA J. Autom. Sinica, vol. 10, no. 5, pp. 1329–1331, May 2023. doi: 10.1109/JAS.2023.123378
[1] S. Cheng, B. Yang, Z. Wang, and K. Nakano, “Spatio-temporal image representation and deep-learning-based decision framework for automated vehicles,” IEEE Trans. Intelligent Transportation Systems, vol. 23, no. 12, pp. 24866–24875, Dec. 2022. doi: 10.1109/TITS.2022.3195213
[2] X. Wang, J. Sun, G. Wang, F. Allgöwer, and J. Chen, “Data-driven control of distributed event-triggered network systems,” IEEE/CAA J. Autom. Sinica, vol. 10, no. 1, pp. 1–14, 2023.
[3] W. Liu, J. Sun, G. Wang, F. Bullo, and J. Chen, “Data-driven resilient predictive control under denial-of-service,” IEEE Trans. Automatic Control, 2022. doi: 10.1109/TAC.2022.3209399
[4] B. R. Kiran, I. Sobh, V. Talpaert, P. Mannion, A. A. Al Sallab, S. Yogamani, and P. Pérez, “Deep reinforcement learning for autonomous driving: A survey,” IEEE Trans. Intelligent Transportation Systems, vol. 23, no. 6, pp. 4909–4926, Jun. 2022. doi: 10.1109/TITS.2021.3054625
[5] X. He, C. Fei, Y. Liu, K. Yang, and X. Ji, “Multi-objective longitudinal decision-making for autonomous electric vehicle: An entropy-constrained reinforcement learning approach,” in Proc. IEEE 23rd Int. Conf. Intelligent Transportation Systems, 2020, pp. 1–6.
[6] C. Huang, C. Lv, P. Hang, and Y. Xing, “Toward safe and personalized autonomous driving: Decision-making and motion control with DPF and CDT techniques,” IEEE/ASME Trans. Mechatronics, vol. 26, no. 2, pp. 611–620, 2021. doi: 10.1109/TMECH.2021.3053248
[7] P. Hang, C. Huang, Z. Hu, Y. Xing, and C. Lv, “Decision making of connected automated vehicles at an unsignalized roundabout considering personalized driving behaviours,” IEEE Trans. Vehicular Technology, vol. 70, no. 5, pp. 4051–4064, 2021. doi: 10.1109/TVT.2021.3072676
[8] X. Xu, L. Zuo, X. Li, L. Qian, J. Ren, and Z. Sun, “A reinforcement learning approach to autonomous decision making of intelligent vehicles on highways,” IEEE Trans. Systems, Man, and Cybernetics: Systems, vol. 50, no. 10, pp. 3884–3897, 2020.
[9] C. Li and K. Czarnecki, “Urban driving with multi-objective deep reinforcement learning,” in Proc. 18th Int. Conf. Autonomous Agents and MultiAgent Systems, 2019, pp. 359–367.
[10] P. A. Lopez, M. Behrisch, L. Bieker-Walz, J. Erdmann, Y.-P. Flötteröd, R. Hilbrich, L. Lücken, J. Rummel, P. Wagner, and E. Wießner, “Microscopic traffic simulation using SUMO,” in Proc. 21st IEEE Int. Conf. Intelligent Transportation Systems, 2018, pp. 2575–2582.
[11] S. Natarajan and P. Tadepalli, “Dynamic preferences in multi-criteria reinforcement learning,” in Proc. 22nd Int. Conf. Machine Learning, 2005, pp. 601–608.
[12] R. Yang, X. Sun, and K. Narasimhan, “A generalized algorithm for multi-objective reinforcement learning and policy adaptation,” in Advances in Neural Information Processing Systems, vol. 32, 2019.
[13] S. Fujimoto, H. van Hoof, and D. Meger, “Addressing function approximation error in actor-critic methods,” in Proc. Int. Conf. Machine Learning (PMLR), 2018, pp. 1587–1596.