IEEE/CAA Journal of Automatica Sinica
Citation: H. Y. Lin, Y. Liu, S. Li, and X. B. Qu, “How generative adversarial networks promote the development of intelligent transportation systems: A survey,” IEEE/CAA J. Autom. Sinica, vol. 10, no. 9, pp. 1781–1796, Sept. 2023. doi: 10.1109/JAS.2023.123744
[1] I. Ahmed, S. Din, G. Jeon, F. Piccialli, and G. Fortino, “Towards collaborative robotics in top view surveillance: A framework for multiple object tracking by detection using deep learning,” IEEE/CAA J. Autom. Sinica, vol. 8, no. 7, pp. 1253–1270, 2021. doi: 10.1109/JAS.2020.1003453
[2] B. Peng, M. F. Keskin, B. Kulcsár, and H. Wymeersch, “Connected autonomous vehicles for improving mixed traffic efficiency in unsignalized intersections with deep reinforcement learning,” Comm. Transportation Research, vol. 1, p. 100017, 2021. doi: 10.1016/j.commtr.2021.100017
[3] J. Dong, S. Chen, M. Miralinaghi, T. Chen, and S. Labi, “Development and testing of an image transformer for explainable autonomous driving systems,” J. Intelligent and Connected Vehicles, vol. 5, no. 3, pp. 235–249, 2022. doi: 10.1108/JICV-06-2022-0021
[4] H. Ding, W. Li, N. Xu, and J. Zhang, “An enhanced eco-driving strategy based on reinforcement learning for connected electric vehicles: Cooperative velocity and lane-changing control,” J. Intelligent and Connected Vehicles, vol. 5, no. 3, pp. 316–332, 2022. doi: 10.1108/JICV-07-2022-0030
[5] F. Liang, G. Zhiwei, W. Tao, G. Jinfeng, and D. Feng, “Collision avoidance model and its validation for intelligent vehicles based on deep learning LSTM,” J. Automotive Safety and Energy, vol. 13, no. 1, p. 104, 2022.
[6] S. Li, Y. Liu, and X. Qu, “Model controlled prediction: A reciprocal alternative of model predictive control,” IEEE/CAA J. Autom. Sinica, vol. 9, no. 6, pp. 1107–1110, 2022. doi: 10.1109/JAS.2022.105611
[7] Y. Liu, C. Lyu, Y. Zhang, Z. Liu, W. Yu, and X. Qu, “DeepTSP: Deep traffic state prediction model based on large-scale empirical data,” Comm. Transportation Research, vol. 1, p. 100012, 2021. doi: 10.1016/j.commtr.2021.100012
[8] Y. Liu, Z. Liu, and R. Jia, “DeepPF: A deep learning based architecture for metro passenger flow prediction,” Transportation Research Part C: Emerging Technologies, vol. 101, pp. 18–34, 2019. doi: 10.1016/j.trc.2019.01.027
[9] M. Xu, H. Lin, and Y. Liu, “A deep learning approach for vehicle velocity prediction considering the influence factors of multiple lanes,” Electronic Research Archive, vol. 31, no. 1, pp. 401–420, 2023. doi: 10.3934/era.2023020
[10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, et al., “Generative adversarial networks,” Commun. ACM, vol. 63, no. 11, pp. 139–144, 2020. doi: 10.1145/3422622
[11] J. Liu, T. Li, P. Xie, S. Du, F. Teng, and X. Yang, “Urban big data fusion based on deep learning: An overview,” Inf. Fusion, vol. 53, pp. 123–133, 2020. doi: 10.1016/j.inffus.2019.06.016
[12] S. Wang, J. Cao, and P. Yu, “Deep learning for spatio-temporal data mining: A survey,” IEEE Trans. Knowledge and Data Engineering, vol. 34, no. 8, pp. 3681–3700, 2020.
[13] Y. Liu, R. Jia, J. Ye, and X. Qu, “How machine learning informs ride-hailing services: A survey,” Comm. Transportation Research, vol. 2, p. 100075, 2022. doi: 10.1016/j.commtr.2022.100075
[14] J. Ye, J. Zhao, K. Ye, and C. Xu, “How to build a graph-based deep learning architecture in traffic domain: A survey,” IEEE Trans. Intelligent Transportation Systems, vol. 23, no. 5, pp. 3904–3924, 2020.
[15] K.-F. Wang, C. Gou, Y.-J. Duan, Y.-L. Lin, X.-H. Zheng, and F. Wang, “Generative adversarial networks: The state of the art and beyond,” Acta Autom. Sinica, vol. 43, no. 3, pp. 321–332, 2017.
[16] A. Aggarwal, M. Mittal, and G. Battineni, “Generative adversarial network: An overview of theory and applications,” Int. J. Inf. Management Data Insights, vol. 1, no. 1, p. 100004, 2021. doi: 10.1016/j.jjimei.2020.100004
[17] R. Kulkarni, R. Gaikwad, R. Sugandhi, P. Kulkarni, and S. Kone, “Survey on deep learning in music using GAN,” Int. J. Eng. Res. Technol., vol. 8, no. 9, pp. 646–648, 2019.
[18] W. Xia, Y. Zhang, Y. Yang, J.-H. Xue, B. Zhou, and M.-H. Yang, “GAN inversion: A survey,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 45, no. 3, pp. 3121–3138, 2022.
[19] X. Wang, H. Guo, S. Hu, M.-C. Chang, and S. Lyu, “GAN-generated faces detection: A survey and new perspectives,” arXiv preprint arXiv: 2202.07145, 2022.
[20] A. You, J. K. Kim, I. H. Ryu, and T. K. Yoo, “Application of generative adversarial networks (GAN) for ophthalmology image domains: A survey,” Eye and Vision, vol. 9, no. 1, pp. 1–19, 2022. doi: 10.1186/s40662-021-00274-y
[21] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint arXiv: 1511.06434, 2015.
[22] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila, “Analyzing and improving the image quality of StyleGAN,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2020, pp. 8110–8119.
[23] A. Brock, J. Donahue, and K. Simonyan, “Large scale GAN training for high fidelity natural image synthesis,” arXiv preprint arXiv: 1809.11096, 2018.
[24] H. Zhang, T. Xu, H. Li, et al., “StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks,” in Proc. IEEE Int. Conf. Computer Vision, 2017, pp. 5907–5915.
[25] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2017, pp. 1125–1134.
[26] G. Antipov, M. Baccouche, and J.-L. Dugelay, “Face aging with conditional generative adversarial networks,” in Proc. IEEE Int. Conf. Image Processing, 2017, pp. 2089–2093.
[27] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in Proc. IEEE Int. Conf. Computer Vision, 2017, pp. 2223–2232.
[28] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, “Improved techniques for training GANs,” in Proc. 30th Int. Conf. Neural Information Proc. Syst., 2016, pp. 2234–2242.
[29] A. Odena, C. Olah, and J. Shlens, “Conditional image synthesis with auxiliary classifier GANs,” in Proc. Int. Conf. Machine Learning, 2017, pp. 2642–2651.
[30] M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein generative adversarial networks,” in Proc. Int. Conf. Machine Learning, 2017, pp. 214–223.
[31] T. Karras, T. Aila, S. Laine, and J. Lehtinen, “Progressive growing of GANs for improved quality, stability, and variation,” arXiv preprint arXiv: 1710.10196, 2017.
[32] T. Wang, M. Liu, J. Zhu, et al., “Video-to-video synthesis,” arXiv preprint arXiv: 1808.06601, 2018.
[33] L. Yu, W. Zhang, J. Wang, and Y. Yu, “SeqGAN: Sequence generative adversarial nets with policy gradient,” in Proc. AAAI Conf. Artificial Intelligence, 2017, vol. 31, p. 1.
[34] Y. Jin, J. Zhang, M. Li, Y. Tian, H. Zhu, and Z. Fang, “Towards the automatic anime characters creation with generative adversarial networks,” arXiv preprint arXiv: 1708.05509, 2017.
[35] M. B. Hedge, M. Nelson, T. Pengilly, and M. Weatherford, “PokéGAN: P2P (Pet to Pokémon) Stylizer,” SMU Data Science Review, vol. 5, no. 2, p. 10, 2021.
[36] M. Brundage, S. Avin, J. Clark, et al., “The malicious use of artificial intelligence: Forecasting, prevention, and mitigation,” arXiv preprint arXiv: 1802.07228, 2018.
[37] S. E. Reed, Z. Akata, S. Mohan, S. Tenka, B. Schiele, and H. Lee, “Learning what and where to draw,” in Proc. 30th Annual Conf. Neural Information Proc. Syst., 2016, pp. 217–225.
[38] A. Dash, J. C. B. Gamboa, S. Ahmed, M. Liwicki, and M. Z. Afzal, “TAC-GAN - Text conditioned auxiliary classifier generative adversarial network,” arXiv preprint arXiv: 1703.06412, 2017.
[39] T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz, and B. Catanzaro, “High-resolution image synthesis and semantic manipulation with conditional GANs,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2018, pp. 8798–8807.
[40] D. Yoo, N. Kim, S. Park, A. S. Paek, and I. S. Kweon, “Pixel-level domain transfer,” in Proc. 14th European Conf. Computer Vision (ECCV), Amsterdam, The Netherlands, 2016, Part VIII, pp. 517–532.
[41] R. Huang, S. Zhang, T. Li, and R. He, “Beyond face rotation: Global and local perception GAN for photorealistic and identity preserving frontal view synthesis,” in Proc. IEEE Int. Conf. Computer Vision, 2017, pp. 2439–2448.
[42] Y. Taigman, A. Polyak, and L. Wolf, “Unsupervised cross-domain image generation,” arXiv preprint arXiv: 1611.02200, 2016.
[43] G. Perarnau, J. Van De Weijer, B. Raducanu, and J. M. Álvarez, “Invertible conditional GANs for image editing,” arXiv preprint arXiv: 1611.06355, 2016.
[44] M.-Y. Liu and O. Tuzel, “Coupled generative adversarial networks,” in Proc. 30th Int. Conf. Neural Information Proc. Syst., 2016, pp. 469–477.
[45] A. Brock, T. Lim, J. M. Ritchie, and N. Weston, “Neural photo editing with introspective adversarial networks,” arXiv preprint arXiv: 1609.07093, 2016.
[46] C. Ledig, L. Theis, F. Huszár, et al., “Photo-realistic single image super-resolution using a generative adversarial network,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2017, pp. 4681–4690.
[47] H. Bin, C. Weihai, W. Xingming, and L. Chun-Liang, “High-quality face image SR using conditional generative adversarial networks,” arXiv preprint arXiv: 1707.00737, 2017.
[48] S. Vasu, N. Thekke Madam, and A. Rajagopalan, “Analyzing perception-distortion tradeoff using enhanced perceptual super-resolution network,” in Proc. European Conf. Computer Vision Workshops, 2018, pp. 114–131.
[49] C. Vondrick, H. Pirsiavash, and A. Torralba, “Generating videos with scene dynamics,” Advances in Neural Information Processing Systems, vol. 29, 2016.
[50] V. Sandfort, K. Yan, P. J. Pickhardt, and R. M. Summers, “Data augmentation using generative adversarial networks (CycleGAN) to improve generalizability in CT segmentation tasks,” Scientific Reports, vol. 9, no. 1, p. 16884, 2019. doi: 10.1038/s41598-019-52737-x
[51] S. Yu, H. Dong, G. Yang, et al., “Deep de-aliasing for fast compressive sensing MRI,” arXiv preprint arXiv: 1705.07137, 2017.
[52] J. Wu, C. Zhang, T. Xue, B. Freeman, and J. Tenenbaum, “Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling,” in Proc. 30th Int. Conf. Neural Information Proc. Syst., 2016, pp. 82–90.
[53] M. Gadelha, S. Maji, and R. Wang, “3D shape induction from 2D views of multiple objects,” in Proc. Int. Conf. 3D Vision, 2017, pp. 402–411.
[54] A. Li, P. Zhao, X. Liu, A. Mansourian, K. W. Axhausen, and X. Qu, “Comprehensive comparison of E-scooter sharing mobility: Evidence from 30 European cities,” Transportation Research Part D: Transport and Environment, vol. 105, p. 103229, 2022. doi: 10.1016/j.trd.2022.103229
[55] J. Ke, H. Zheng, H. Yang, and X. M. Chen, “Short-term forecasting of passenger demand under on-demand ride services: A spatio-temporal deep learning approach,” Transportation Research Part C: Emerging Technologies, vol. 85, pp. 591–608, 2017. doi: 10.1016/j.trc.2017.10.016
[56] J. Bao, X. Shi, and H. Zhang, “Spatial analysis of bikeshare ridership with smart card and POI data using geographically weighted regression method,” IEEE Access, vol. 6, pp. 76049–76059, 2018. doi: 10.1109/ACCESS.2018.2883462
[57] J. Wu and X. Qu, “Intersection control with connected and automated vehicles: A review,” J. Intelligent and Connected Vehicles, vol. 5, no. 3, pp. 260–269, 2022. doi: 10.1108/JICV-06-2022-0023
[58] L. Maoyue, L. Hongyu, H. Xiangmei, X. Guangqi, and Y. Wei, “Surrounding vehicle recognition and information map construction technology in automatic driving,” J. Automotive Safety and Energy, vol. 13, no. 1, p. 131, 2022.
[59] B. Kiran, I. Sobh, V. Talpaert, et al., “Deep reinforcement learning for autonomous driving: A survey,” IEEE Trans. Intelligent Transportation Systems, vol. 23, no. 6, pp. 4909–4926, 2021.
[60] M. Bojarski, D. Del Testa, D. Dworakowski, et al., “End to end learning for self-driving cars,” arXiv preprint arXiv: 1604.07316, 2016.
[61] S. Shalev-Shwartz, N. Ben-Zrihem, A. Cohen, and A. Shashua, “Long-term planning by short-term prediction,” arXiv preprint arXiv: 1602.01580, 2016.
[62] E. Santana and G. Hotz, “Learning a driving simulator,” arXiv preprint arXiv: 1608.01230, 2016.
[63] Z. Yang, Y. Chai, D. Anguelov, et al., “SurfelGAN: Synthesizing realistic sensor data for autonomous driving,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2020, pp. 11118–11127.
[64] M. Amodio and S. Krishnaswamy, “TraVeLGAN: Image-to-image translation by transformation vector learning,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2019, pp. 8983–8992.
[65] S.-W. Huang, C.-T. Lin, S.-P. Chen, Y.-Y. Wu, P.-H. Hsu, and S.-H. Lai, “AugGAN: Cross domain adaptation with GAN-based data augmentation,” in Proc. European Conf. Computer Vision, 2018, pp. 718–731.
[66] W. Xu, N. Souly, and P. P. Brahma, “Reliability of GAN generated data to train and validate perception systems for autonomous vehicles,” in Proc. IEEE/CVF Winter Conf. Applications of Computer Vision, 2021, pp. 171–180.
[67] W. Wang, Y. Zhang, J. Gao, et al., “GOPS: A general optimal control problem solver for autonomous driving and industrial control applications,” Comm. Transportation Research, vol. 3, p. 100096, 2023. doi: 10.1016/j.commtr.2023.100096
[68] D. Fei, M. Guanyu, T. En, Z. Nan, B. Jianmin, and Z. Dengyin, “Multi-channel high-resolution network and attention mechanism fusion for vehicle detection model,” J. Automotive Safety and Energy, vol. 13, no. 1, p. 122, 2022.
[69] S. Tulyakov, M.-Y. Liu, X. Yang, and J. Kautz, “MoCoGAN: Decomposing motion and content for video generation,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2018, pp. 1526–1535.
[70] E. L. Denton, “Unsupervised learning of disentangled representations from video,” in Proc. 31st Int. Conf. Neural Information Proc. Syst., 2017, pp. 4417–4426.
[71] J. Walker, K. Marino, A. Gupta, and M. Hebert, “The pose knows: Video forecasting by generating pose futures,” in Proc. IEEE Int. Conf. Computer Vision, 2017, pp. 3332–3341.
[72] A. Clark, J. Donahue, and K. Simonyan, “Adversarial video generation on complex datasets,” arXiv preprint arXiv: 1907.06571, 2019.
[73] S. Mozaffari, O. Y. Al-Jarrah, M. Dianati, P. Jennings, and A. Mouzakitis, “Deep learning-based vehicle behavior prediction for autonomous driving applications: A review,” IEEE Trans. Intelligent Transportation Systems, vol. 23, no. 1, pp. 33–47, 2020.
[74] A. Gupta, J. Johnson, L. Fei-Fei, S. Savarese, and A. Alahi, “Social GAN: Socially acceptable trajectories with generative adversarial networks,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2018, pp. 2255–2264.
[75] S. Eiffert, K. Li, M. Shan, S. Worrall, S. Sukkarieh, and E. Nebot, “Probabilistic crowd GAN: Multimodal pedestrian trajectory prediction using a graph vehicle-pedestrian attention network,” IEEE Robotics and Automation Letters, vol. 5, no. 4, pp. 5026–5033, 2020. doi: 10.1109/LRA.2020.3004324
[76] Q. Zhang, S. Hu, J. Sun, Q. A. Chen, and Z. M. Mao, “On adversarial robustness of trajectory prediction for autonomous vehicles,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2022, pp. 15159–15168.
[77] C. Gómez-Huélamo, M. V. Conde, M. Ortiz, S. Montiel, R. Barea, and L. M. Bergasa, “Exploring attention GAN for vehicle motion prediction,” in Proc. IEEE 25th Int. Conf. Intelligent Transportation Systems, 2022, pp. 4011–4016.
[78] V. Kosaraju, A. Sadeghian, R. Martín-Martín, I. Reid, H. Rezatofighi, and S. Savarese, “Social-BIGAT: Multimodal trajectory forecasting using bicycle-GAN and graph attention networks,” Advances in Neural Information Processing Systems, vol. 32, 2019.
[79] E. Wang, H. Cui, S. Yalamanchi, M. Moorthy, and N. Djuric, “Improving movement predictions of traffic actors in bird’s-eye view models using GANs and differentiable trajectory rasterization,” in Proc. 26th ACM SIGKDD Int. Conf. Knowledge Discovery & Data Mining, 2020, pp. 2340–2348.
[80] C. Zhao, Y. Zhu, Y. Du, F. Liao, and C.-Y. Chan, “A novel direct trajectory planning approach based on generative adversarial networks and rapidly-exploring random tree,” IEEE Trans. Intelligent Transportation Systems, vol. 23, no. 10, pp. 17910–17921, 2022. doi: 10.1109/TITS.2022.3164391
[81] M. Ghafoorian, C. Nugteren, N. Baka, O. Booij, and M. Hofmann, “EL-GAN: Embedding loss driven generative adversarial networks for lane detection,” in Proc. European Conf. Computer Vision Workshops, 2018, pp. 256–272.
[82] X. Wang, A. Shrivastava, and A. Gupta, “A-Fast-RCNN: Hard positive generation via adversary for object detection,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2017, pp. 2606–2615.
[83] J. Li, X. Liang, Y. Wei, T. Xu, J. Feng, and S. Yan, “Perceptual generative adversarial networks for small object detection,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2017, pp. 1222–1230.
[84] Y. Bai, Y. Zhang, M. Ding, and B. Ghanem, “SOD-MTGAN: Small object detection via multi-task generative adversarial network,” in Proc. European Conf. Computer Vision, 2018, pp. 206–221.
[85] A. Liu, X. Liu, J. Fan, et al., “Perceptual-sensitive GAN for generating adversarial patches,” in Proc. AAAI Conf. Artificial Intelligence, 2019, vol. 33, no. 1, pp. 1028–1035.
[86] W. Zhu, J. Wu, T. Fu, J. Wang, J. Zhang, and Q. Shangguan, “Dynamic prediction of traffic incident duration on urban expressways: A deep learning approach based on LSTM and MLP,” J. Intelligent and Connected Vehicles, vol. 4, no. 2, pp. 80–91, 2021. doi: 10.1108/JICV-03-2021-0004
[87] A. Koochali, P. Schichtel, A. Dengel, and S. Ahmed, “Probabilistic forecasting of sensory data with generative adversarial networks–ForGAN,” IEEE Access, vol. 7, pp. 63868–63880, 2019. doi: 10.1109/ACCESS.2019.2915544
[88] D. Saxena and J. Cao, “D-GAN: Deep generative adversarial nets for spatio-temporal prediction,” arXiv preprint arXiv: 1907.08556, 2019.
[89] J. Yoon, D. Jarrett, and M. Van der Schaar, “Time-series generative adversarial networks,” Advances in Neural Information Processing Systems, vol. 32, 2019.
[90] L. Zhang, J. Wu, J. Shen, et al., “SATP-GAN: Self-attention based generative adversarial network for traffic flow prediction,” Transportmetrica B: Transport Dynamics, vol. 9, no. 1, pp. 552–568, 2021. doi: 10.1080/21680566.2021.1916646
[91] Y. Zhang, Y. Li, X. Zhou, X. Kong, and J. Luo, “Curb-GAN: Conditional urban traffic estimation through spatio-temporal generative adversarial networks,” in Proc. 26th ACM SIGKDD Int. Conf. Knowledge Discovery & Data Mining, 2020, pp. 842–852.
[92] Y. Zhang, Y. Li, X. Zhou, X. Kong, and J. Luo, “TrafficGAN: Off-deployment traffic estimation with traffic generative adversarial networks,” in Proc. IEEE Int. Conf. Data Mining, 2019, pp. 1474–1479.
[93] D. Xu, C. Wei, P. Peng, Q. Xuan, and H. Guo, “GE-GAN: A novel deep learning framework for road traffic state estimation,” Transportation Research Part C: Emerging Technologies, vol. 117, p. 102635, 2020. doi: 10.1016/j.trc.2020.102635
[94] J. Jin, D. Rong, T. Zhang, et al., “A GAN-based short-term link traffic prediction approach for urban road networks under a parallel learning framework,” IEEE Trans. Intelligent Transportation Systems, vol. 23, no. 9, pp. 16185–16196, 2022. doi: 10.1109/TITS.2022.3148358
[95] Y. Lv, Y. Chen, L. Li, and F.-Y. Wang, “Generative adversarial networks for parallel transportation systems,” IEEE Intelligent Transportation Systems Magazine, vol. 10, no. 3, pp. 4–10, 2018. doi: 10.1109/MITS.2018.2842249
[96] Y. Wang, Y. Zheng, and Y. Xue, “Travel time estimation of a path using sparse trajectories,” in Proc. 20th ACM SIGKDD Int. Conf. Knowledge Discovery and Data Mining, 2014, pp. 25–34.
[97] I. M. Baytas, C. Xiao, X. Zhang, F. Wang, A. K. Jain, and J. Zhou, “Patient subtyping via time-aware LSTM networks,” in Proc. 23rd ACM SIGKDD Int. Conf. Knowledge Discovery and Data Mining, 2017, pp. 65–74.
[98] H.-F. Yu, N. Rao, and I. S. Dhillon, “Temporal regularized matrix factorization for high-dimensional time series prediction,” in Proc. 30th Int. Conf. Neural Information Proc. Syst., 2016, pp. 847–855.
[99] Z. Liu, Y. Yang, W. Huang, Z. Tang, N. Li, and F. Wu, “How do your neighbors disclose your information: Social-aware time series imputation,” in Proc. World Wide Web Conf., 2019, pp. 1164–1174.
[100] H. Wu, S. Zheng, J. Zhang, and K. Huang, “GP-GAN: Towards realistic high-resolution image blending,” in Proc. 27th ACM Int. Conf. Multimedia, 2019, pp. 2487–2495.
[101] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros, “Context encoders: Feature learning by inpainting,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2016, pp. 2536–2544.
[102] X. Wang, D. Xu, and F. Gu, “3D model inpainting based on 3D deep convolutional generative adversarial network,” IEEE Access, vol. 8, pp. 170355–170363, 2020. doi: 10.1109/ACCESS.2020.3024288
[103] W. Li, L. Min, Y. Jia-qing, Z. Ling-yu, P. Ke, and L. Zheng-xi, “Urban traffic flow data recovery method based on generative adversarial network,” J. Transportation Systems Engineering and Information Technology, vol. 18, no. 6, p. 63, 2018.
[104] G. Xiong, Z. Li, M. Zhao, et al., “TrajSGAN: A semantic-guiding adversarial network for urban trajectory generation,” IEEE Trans. Computational Social Systems, 2023.
[105] A. Borji, “Pros and cons of GAN evaluation measures,” Computer Vision and Image Understanding, vol. 179, pp. 41–65, 2019. doi: 10.1016/j.cviu.2018.10.009
[106] Z. Chen, S. Nie, T. Wu, and C. G. Healey, “High resolution face completion with multiple controllable attributes via fully end-to-end progressive generative adversarial networks,” arXiv preprint arXiv: 1801.07632, 2018.
[107] T. Chen, X. Zhai, M. Ritter, M. Lucic, and N. Houlsby, “Self-supervised GANs via auxiliary rotation loss,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2019, pp. 12154–12163.
[108] F. Ni, J. Zhang, and M. N. Noori, “Deep learning for data anomaly detection and data compression of a long-span suspension bridge,” Computer-Aided Civil and Infrastructure Engineering, vol. 35, no. 7, pp. 685–700, 2020. doi: 10.1111/mice.12528
[109] Y. Xu, X. Ouyang, Y. Cheng, et al., “Dual-mode vehicle motion pattern learning for high performance road traffic anomaly detection,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition Workshops, 2018, pp. 145–152.
[110] V. Reddy, C. Sanderson, and B. C. Lovell, “Improved anomaly detection in crowded scenes via cell-based analysis of foreground speed, size and texture,” in Proc. CVPR Workshops, 2011, pp. 55–61.
[111] A. Mohan and S. Poobal, “Crack detection using image processing: A critical review and analysis,” Alexandria Engineering J., vol. 57, no. 2, pp. 787–798, 2018. doi: 10.1016/j.aej.2017.01.020
[112] K. Zhang, Y. Zhang, and H.-D. Cheng, “CrackGAN: Pavement crack detection using partially accurate ground truths based on generative adversarial learning,” IEEE Trans. Intelligent Transportation Systems, vol. 22, no. 2, pp. 1306–1319, 2020.
[113] W. Zhai, J. Zhu, Y. Cao, and Z. Wang, “A generative adversarial network based framework for unsupervised visual surface inspection,” in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing, 2018, pp. 1283–1287.
[114] H. Oliveira and P. L. Correia, “CrackIT—An image processing toolbox for crack detection and characterization,” in Proc. IEEE Int. Conf. Image Processing, 2014, pp. 798–802.
[115] Z. Gao, B. Peng, T. Li, and C. Gou, “Generative adversarial networks for road crack image segmentation,” in Proc. Int. Joint Conf. Neural Networks, 2019, pp. 1–8.
[116] Q. Mei and M. Gül, “A cost effective solution for pavement crack inspection using cameras and deep neural networks,” Construction and Building Materials, vol. 256, p. 119397, 2020. doi: 10.1016/j.conbuildmat.2020.119397
[117] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2017, pp. 4700–4708.
[118] K. Zhang, Y. Zhang, and H. Cheng, “Self-supervised structure learning for crack detection based on cycle-consistent generative adversarial networks,” J. Computing in Civil Engineering, vol. 34, no. 3, p. 04020004, 2020. doi: 10.1061/(ASCE)CP.1943-5487.0000883
[119] J. Mao, H. Wang, and B. F. Spencer Jr., “Toward data anomaly detection for automated structural health monitoring: Exploiting generative adversarial nets and autoencoders,” Structural Health Monitoring, vol. 20, no. 4, pp. 1609–1626, 2021. doi: 10.1177/1475921720924601
[120] K. Lee and D. H. Shin, “Generative model of acceleration data for deep learning-based damage detection for bridges using generative adversarial network,” Journal of KIBIM, vol. 9, no. 1, pp. 42–51, 2019.
[121] P. Yang, W. Jin, and P. Tang, “Anomaly detection of railway catenary based on deep convolutional generative adversarial networks,” in Proc. 3rd IEEE Advanced Information Technology, Electronic and Automation Control Conf., 2018, pp. 1366–1370.
[122] Y. Lyu, Z. Han, J. Zhong, C. Li, and Z. Liu, “A generic anomaly detection of catenary support components based on generative adversarial networks,” IEEE Trans. Instrumentation and Measurement, vol. 69, no. 5, pp. 2439–2448, 2019.
[123] Y. Lyu, Z. Han, J. Zhong, C. Li, and Z. Liu, “A GAN-based anomaly detection method for isoelectric line in high-speed railway,” in Proc. IEEE Int. Instrumentation and Measurement Technology Conf., 2019, pp. 1–6.
[124] L. Xue and S. Gao, “Unsupervised anomaly detection system for railway turnout based on GAN,” in J. Physics: Conf. Series, 2019, vol. 1345, no. 3, p. 032069.
[125] K. Wang, X. Zhang, Q. Hao, Y. Wang, and Y. Shen, “Application of improved least-square generative adversarial networks for rail crack detection by AE technique,” Neurocomputing, vol. 332, pp. 236–248, 2019. doi: 10.1016/j.neucom.2018.12.057
[126] V. Saligrama and Z. Chen, “Video anomaly detection based on local statistical aggregates,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2012, pp. 2112–2119.
[127] S. Akcay, A. Atapour-Abarghouei, and T. P. Breckon, “GANomaly: Semi-supervised anomaly detection via adversarial training,” in Proc. 14th Asian Conf. Computer Vision, Perth, Australia, Revised Selected Papers, Part III, 2019, pp. 622–637.
[128] Y. Sun, W. Yu, Y. Chen, and A. Kadam, “Time series anomaly detection based on GAN,” in Proc. 6th Int. Conf. Social Networks Analysis, Management and Security, 2019, pp. 375–382.
[129] S. Akçay, A. Atapour-Abarghouei, and T. P. Breckon, “Skip-GANomaly: Skip connected and adversarially trained encoder-decoder anomaly detection,” in Proc. Int. Joint Conf. Neural Networks, 2019, pp. 1–8.
[130] Y. Qiu, T. Misu, and C. Busso, “Driving anomaly detection with conditional generative adversarial network using physiological and CAN-bus data,” in Proc. Int. Conf. Multimodal Interaction, 2019, pp. 164–173.
[131] F. Liao, T. Arentze, and H. Timmermans, “Incorporating space–time constraints and activity-travel time profiles in a multi-state supernetwork approach to individual activity-travel scheduling,” Transportation Research Part B: Methodological, vol. 55, pp. 41–58, 2013. doi: 10.1016/j.trb.2013.05.002