Reinforcement learning-based cooperative longitudinal control for reducing traffic oscillations and improving platoon stability
References
Aghabayk, 2013. A novel methodology for evolutionary calibration of Vissim by multi-threading. Presented at the Australasian Transport Research Forum, 1.
Chen, 2012. A behavioral car-following model that captures traffic oscillations. Transportation Research Part B: Methodological, 46, 744. https://doi.org/10.1016/j.trb.2012.01.009
Chu, T., Kalabić, U., 2019. Model-based deep reinforcement learning for CACC in mixed-autonomy vehicle platoon, in: 2019 IEEE 58th Conference on Decision and Control (CDC). Presented at the 2019 IEEE 58th Conference on Decision and Control (CDC), pp. 4079–4084. https://doi.org/10.1109/CDC40024.2019.9030110.
Desjardins, 2011. Cooperative Adaptive Cruise Control: A Reinforcement Learning Approach. IEEE Trans. Intell. Transport. Syst., 12, 1248. https://doi.org/10.1109/TITS.2011.2157145
Ge, 2014. Dynamics of connected vehicle systems with delayed acceleration feedback. Transportation Research Part C: Emerging Technologies, 46, 46. https://doi.org/10.1016/j.trc.2014.04.014
German Aerospace Center (DLR) and others, 2021. Car-following model parameters.
Haarnoja, T., Zhou, A., Abbeel, P., Levine, S., 2018a. Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. arXiv:1801.01290 [cs, stat].
Haarnoja, T., Zhou, A., Hartikainen, K., Tucker, G., Ha, S., Tan, J., Kumar, V., Zhu, H., Gupta, A., Abbeel, P., 2018b. Soft actor-critic algorithms and applications. arXiv preprint arXiv:1812.05905.
Khodayari, 2012. A Modified Car-Following Model Based on a Neural Network Model of the Human Driver Effects. IEEE Trans. Syst., Man Cybern. A, 42, 1440. https://doi.org/10.1109/TSMCA.2012.2192262
Krajewski, R., Bock, J., Kloeker, L., Eckstein, L., 2018. The highD Dataset: A Drone Dataset of Naturalistic Vehicle Trajectories on German Highways for Validation of Highly Automated Driving Systems, in: 2018 21st International Conference on Intelligent Transportation Systems (ITSC). Presented at the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), IEEE, Maui, HI, pp. 2118–2125. https://doi.org/10.1109/ITSC.2018.8569552.
Li, 2021. Car-following behavior characteristics of adaptive cruise control vehicles based on empirical experiments. Transportation Research Part B: Methodological, 147, 67. https://doi.org/10.1016/j.trb.2021.03.003
Li, 2014. Stop-and-go traffic analysis: Theoretical properties, environmental impacts and oscillation mitigation. Transportation Research Part B: Methodological, 70, 319. https://doi.org/10.1016/j.trb.2014.09.014
Lillicrap, T.P., Hunt, J.J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., Wierstra, D., 2019. Continuous control with deep reinforcement learning. arXiv:1509.02971 [cs, stat].
Morton, 2017. Analysis of Recurrent Neural Networks for Probabilistic Modeling of Driver Behavior. IEEE Trans. Intell. Transport. Syst., 18, 1289. https://doi.org/10.1109/TITS.2016.2603007
Qu, 2020. Jointly dampening traffic oscillations and improving energy consumption with electric, connected and automated vehicles: A reinforcement learning based approach. Appl. Energy, 257, 114030. https://doi.org/10.1016/j.apenergy.2019.114030
Ren, 2021. New England merge: a novel cooperative merge control method for improving highway work zone mobility and safety. Journal of Intelligent Transportation Systems, 25, 107. https://doi.org/10.1080/15472450.2020.1822747
Ren, 2020. Cooperative Highway Work Zone Merge Control Based on Reinforcement Learning in a Connected and Automated Environment. Transp. Res. Rec., 2674, 363. https://doi.org/10.1177/0361198120935873
Schaul, T., Quan, J., Antonoglou, I., Silver, D., 2016. Prioritized Experience Replay. Presented at ICLR (Poster).
Stern, 2018. Dissipation of stop-and-go waves via control of autonomous vehicles: Field experiments. Transportation Research Part C: Emerging Technologies, 89, 205. https://doi.org/10.1016/j.trc.2018.02.005
Sugiyama, 2008. Traffic jams without bottlenecks—experimental evidence for the physical mechanism of the formation of a jam. New J. Phys., 10, 033001. https://doi.org/10.1088/1367-2630/10/3/033001
Vinitsky, 2018. Benchmarks for reinforcement learning in mixed-autonomy traffic, in: Conference on Robot Learning. PMLR, 399.
Wang, P., Chan, C.-Y., 2017. Formulation of deep reinforcement learning architecture toward autonomous driving for on-ramp merge, in: 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC). Presented at the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), IEEE, Yokohama, pp. 1–6. https://doi.org/10.1109/ITSC.2017.8317735.
Wu, C., Bayen, A.M., Mehta, A., 2018. Stabilizing Traffic with Autonomous Vehicles, in: 2018 IEEE International Conference on Robotics and Automation (ICRA). Presented at the 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE, Brisbane, QLD, pp. 1–7. https://doi.org/10.1109/ICRA.2018.8460567.
Wu, Cathy, 2018. Learning and Optimization for Mixed Autonomy Systems-A Mobility Context.
Xiao, 2017. Realistic Car-Following Models for Microscopic Simulation of Adaptive and Cooperative Adaptive Cruise Control Vehicles. Transp. Res. Rec., 2623, 1. https://doi.org/10.3141/2623-01
Zheng, 2011. Applications of wavelet transform for analysis of freeway traffic: Bottlenecks, transient traffic, and traffic oscillations. Transportation Research Part B: Methodological, 45, 372. https://doi.org/10.1016/j.trb.2010.08.002
Zhu, 2020. Safe, efficient, and comfortable velocity control based on reinforcement learning for autonomous driving. Transportation Research Part C: Emerging Technologies, 117, 102662. https://doi.org/10.1016/j.trc.2020.102662
