A journal of IEEE and CAA that publishes high-quality papers in English on original theoretical and experimental research and development in all areas of automation
Volume 10 Issue 5
May  2023

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 11.8, Top 4% (SCI Q1)
  • CiteScore: 17.6, Top 3% (Q1)
  • Google Scholar h5-index: 64, Top 7
H. Liu, C. Y. Lin, B. W. Gong, and  D. Y. Wu,  “Automatic lane-level intersection map generation using low-channel roadside LiDAR,” IEEE/CAA J. Autom. Sinica, vol. 10, no. 5, pp. 1209–1222, May 2023. doi: 10.1109/JAS.2023.123183

Automatic Lane-Level Intersection Map Generation using Low-Channel Roadside LiDAR

doi: 10.1109/JAS.2023.123183
Funds:  This work was supported in part by the Scientific Research Project of the Education Department of Jilin Province (JJKH20221020KJ), the National Natural Science Foundation of China (51408257), and the Graduate Innovation Fund of Jilin University (101832020CX150)
  • A lane-level intersection map is a cornerstone of high-definition (HD) traffic network maps for autonomous driving and for high-precision intelligent transportation system applications such as traffic management and control, and traffic accident evaluation and prevention. Mapping an HD intersection with conventional methods is time-consuming, labor-intensive, and expensive. In this paper, we used a low-channel roadside light detection and ranging (LiDAR) sensor to automatically and dynamically generate a lane-level intersection map, including the signal phases, geometry, layout, and lane directions. First, a mathematical model was proposed to describe the topology and details of a lane-level intersection. Second, continuous and discontinuous traffic object trajectories were extracted to identify the signal phases and times. Third, the layout, geometry, and lane directions were identified by applying a convex hull detection algorithm to the trajectories. Fourth, a sliding window algorithm was presented to detect lane markings and extract lanes, and virtual lanes connecting the inbound and outbound approaches of the intersection were generated from the vehicle trajectories within the intersection while accounting for traffic rules. In the field experiment, the mean absolute error of signal phase and time identification is 2 s, and the precision and recall of lane marking identification are 96% and 94.12%, respectively. Compared with satellite-based, MMS-based, and crowdsourcing-based lane mapping methods, the proposed method with low-channel roadside LiDAR achieves an average lane location deviation of 0.2 m and an update period of less than one hour.
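The third step above, recovering an intersection's layout and geometry from trajectory points, can be sketched with a convex hull. The snippet below is a minimal illustration using synthetic points and SciPy's `ConvexHull`; the point data, the `intersection_footprint` helper name, and the area check are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): approximate an intersection's
# drivable footprint as the convex hull of 2-D trajectory points.
import numpy as np
from scipy.spatial import ConvexHull

def intersection_footprint(trajectory_points: np.ndarray) -> np.ndarray:
    """Return the convex-hull polygon (ordered vertices) enclosing
    the 2-D trajectory points observed inside the intersection."""
    hull = ConvexHull(trajectory_points)
    return trajectory_points[hull.vertices]

# Synthetic example: vehicle positions scattered over a 30 m x 30 m area.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 30.0, size=(500, 2))
polygon = intersection_footprint(pts)

# For 2-D input, ConvexHull.volume is the enclosed area; it approximates
# the drivable area of the intersection.
area = ConvexHull(pts).volume
print(polygon.shape, round(area, 1))
```

In practice the trajectory points would come from the detected and tracked traffic objects, and the hull vertices would delimit the intersection area within which the virtual lanes are generated.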

     



Figures (15) / Tables (7)

    Article Metrics

Article views (359), PDF downloads (50)

    Highlights

    • A mathematical lane-level intersection mapping model is established for low-channel roadside LiDAR
    • Traffic elements such as signal phases, lane directions, and lane markings are identified from low-channel roadside LiDAR data
    • A lane-level intersection map generation framework using low-channel roadside LiDAR is proposed

