A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation
Volume 8, Issue 6, Jun. 2021

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 11.8, Top 4% (SCI Q1)
  • CiteScore: 17.6, Top 3% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: G. J. Wang, J. Wu, R. He, and B. Tian, "Speed and Accuracy Tradeoff for LiDAR Data Based Road Boundary Detection," IEEE/CAA J. Autom. Sinica, vol. 8, no. 6, pp. 1210-1220, Jun. 2021. doi: 10.1109/JAS.2020.1003414

Speed and Accuracy Tradeoff for LiDAR Data Based Road Boundary Detection

doi: 10.1109/JAS.2020.1003414
Funds: This work was supported by the project Research on Construction and Simulation Technology of Hardware-in-the-Loop Testing Scenarios for Self-Driving Electric Vehicles in China (2018YFB0105103J).
Abstract: Road boundary detection is essential for autonomous vehicle localization and decision-making, especially under GPS signal loss and lane discontinuities. For road boundary detection in structured environments, obstacle occlusions and large road curvature are two significant challenges. However, an effective and fast solution to these problems has remained elusive. To solve these problems, a speed and accuracy tradeoff method for LiDAR-based road boundary detection in structured environments is proposed. The proposed method consists of three main stages: 1) a multi-feature based method is applied to extract feature points; 2) a road-segmentation-line-based method is proposed for classifying left and right feature points; 3) an iterative Gaussian Process Regression (GPR) is employed for filtering out false points and extracting boundary points. To demonstrate the effectiveness of the proposed method, the KITTI dataset is used for comprehensive experiments, and the performance of our approach is tested under different road conditions. Comprehensive experiments show that the road-segmentation-line-based method can classify left and right feature points on structured curved roads, and the proposed iterative Gaussian Process Regression can extract road boundary points under varied road shapes and traffic conditions. Meanwhile, the proposed road boundary detection method achieves real-time performance with an average of 70.5 ms per frame.
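The first stage of the pipeline described above extracts curb candidate (feature) points from the raw point cloud. The following is a minimal Python sketch of one way such a multi-feature screening can look on a single scan ring; the two features used here (height jump between neighbouring points and local roughness) and all thresholds are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def extract_feature_points(ring_xyz, dz_min=0.05, dz_max=0.30, rough_min=0.02):
    """ring_xyz: (N, 3) points of a single LiDAR ring, ordered by azimuth."""
    z = ring_xyz[:, 2]
    # Feature 1: height jump between neighbouring points, typical of a curb step.
    dz = np.abs(np.diff(z, prepend=z[:1]))
    height_jump = (dz > dz_min) & (dz < dz_max)
    # Feature 2: local roughness; smooth road-surface points are rejected.
    w = 5
    padded = np.pad(z, w // 2, mode="edge")
    local_std = np.array([padded[i:i + w].std() for i in range(len(z))])
    rough = local_std > rough_min
    return ring_xyz[height_jump & rough]

# Tiny example: a flat ring with a 15 cm curb-like step; only points at the step survive.
ring = np.zeros((100, 3))
ring[:, 0] = np.linspace(0.0, 10.0, 100)
ring[60:, 2] = 0.15
print(extract_feature_points(ring).shape)
```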

     

  • [1]
    F. Oniga, S. Nedevschi, and M. Michael Meinecke, “Curb detection based on a multi-frame persistence map for urban driving scenarios,” in Proc. IEEE Conf. Intelligent Transportation Systems, 2008, pp. 67–72.
    [2]
    J. Siegemund, D. Pfeiffer, U. Franke, and W. Forstner, “Curb reconstruction using conditional random fields,” in Proc. IEEE Conf. Intelligent Vehicles Symposium, 2010, pp. 203–210.
    [3]
    L. Wang, T. Wu, Z. Xiao, L. Xiao, D. Zhao, and J. Han, “Multi-cue road boundary detection using stereo vision,” in Proc. IEEE Conf. Vehicular Electronics and Safety, 2016, pp. 48–53.
    [4]
    Y. Kang, C. Roh, S. Suh, and B. Song, “A LiDAR-based decision-making method for road boundary detection using multiple Kalman filters,” IEEE Trans. Industrial Electronics, vol. 59, no. 11, pp. 4360–4368, Nov. 2012. doi: 10.1109/TIE.2012.2185013
    [5]
    J. Han, D. Kim, M. Lee, and M. Sunwoo, “Road boundary detection and tracking for structured and unstructured roads using a 2D LiDAR sensor,” Int. Journal of Automotive Technology, vol. 15, no. 4, pp. 611–623, 2014. doi: 10.1007/s12239-014-0064-0
    [6]
    M. Buehler, K. Iagnemma, and S. Singh, “Junior: The Stanford Entry in the Urban Challenge,” in The DARPA urban challenge: autonomous vehicles in city traffic, vol. 56, Berlin, Heidelberg, Germany: Springer, 2009, pp. 91–123.
    [7]
    G. Seetharaman, A. Lakhotia, and E. Blasch, “Unmanned vehicles come of age: The DARPA grand challenge,” IEEE Computer Society, vol. 39, no. 12, pp. 26–29, 2006. doi: 10.1109/MC.2006.447
    [8]
    S. Verghese, “Self-driving cars and LiDAR,” in Proc. Conf. on Lasers and Electro-Optics, California, USA, 2017, pp. AM3A-1.
    [9]
    J. Fang, F. Yan, T. Zhao, F. Zhang, D. Zhou, R. Yang, Y. Ma, and L. Wang, “Simulating LiDAR Point Cloud for Autonomous Driving using Real-world Scenes and Traffic Flows,” arXiv: 1811.07112, 2018.
    [10]
    K. Bimbraw, “Autonomous cars: Past, present and future a review of the developments in the last century, the present scenario and the expected future of autonomous vehicle technology,” in Proc. Int. Conf. on Informatics in Control, Automation and Robotics, 2015, pp. 191–198.
    [11]
    P. Sun, X. Zhao, Z. Xu, and H. Min, “Urban curb robust detection algorithm based on 3D-LiDAR,” Journal of ZheJiang University, vol. 52, no. 3, pp. 504–514, Mar. 2018.
    [12]
    K. Hu, T. Wang, Z. Li, D. Chen, and X. Li, “Real-time extraction method of road boundary based on three-dimensional LiDAR,” Journal of Physics:Conf. Series, vol. 1074, no. 1, pp. 1–8, 2018.
    [13]
    W. Yao, Z. Deng, and L. Zhou, “Road curb detection using 3D LiDAR and integral laser points for intelligent vehicles,” Soft Computing and Intelligent Systems (SCIS)and 13th Int. Symp. on Advanced Intelligent Systems (ISIS), pp. 100–105, 2012.
    [14]
    B. Yang, L. Fang, and J. Li, “Semi-automated extraction and delineation of 3D roads of street scene from mobile laser scanning point clouds,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 79, pp. 80–93, 2013. doi: 10.1016/j.isprsjprs.2013.01.016
    [15]
    A. Y. Hata and D. F. Wolf, “Feature detection for vehicle localization in urban environments using a multilayer LiDAR,” IEEE Trans. Intelligent Transportation Systems, vol. 17, no. 2, pp. 420–429, 2016. doi: 10.1109/TITS.2015.2477817
    [16]
    S. Xu, R. Wang, and H. Zheng, “Road Curb Extraction from Mobile LiDAR Point Clouds,” IEEE Trans. Geoscience and Remote Sensing, vol. 55, no. 2, pp. 996–1009, 2017. doi: 10.1109/TGRS.2016.2617819
    [17]
    D. Zai, J. Li, Y. Guo, M. Cheng, and Y. Lin, “3D road boundary extraction from mobile laser scanning data via supervoxels and graph cuts,” IEEE Trans. Intelligent Transportation Systems., vol. 19, no. 3, pp. 802–813, 2018. doi: 10.1109/TITS.2017.2701403
    [18]
    M. Wu, Z. Liu, and Z. Ren, “Algorithm of real-time road boundary detection based on 3D LiDAR,” Journal of Huazhong University of Science and Technology (Natural Science Edition), vol. 39, no. 2, pp. 351–354, 2011.
    [19]
    G. Wang, J. Wu, R. He, and S. Yang, “A Point Cloud-Based Robust Road Curb Detection and Tracking Method,” IEEE Access, vol. 7, pp. 24611–24625, 2019. doi: 10.1109/ACCESS.2019.2898689
    [20]
    P. Sun, X. Zhao, Z. Xu, R. Wang, and H. Min, “A 3D LiDAR Data-Based Dedicated Road Boundary Detection Algorithm for Autonomous Vehicles,” IEEE Access, vol. 7, pp. 29623–29638, 2019. doi: 10.1109/ACCESS.2019.2902170
    [21]
    P. Kumar, C. P. McElhinney, P. Lewis, and T. McCarthy, “An automated algorithm for extracting road edges from terrestrial mobile LiDAR data,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 85, pp. 44–55, 2013. doi: 10.1016/j.isprsjprs.2013.08.003
    [22]
    H. Wang, H. Luo, C. Wen, J. Cheng, and P. Li, “Road Boundaries Detection Based on Local Normal Saliency from Mobile Laser Scanning Data,” IEEE Geoscience Remote Sensing Letters, vol. 12, no. 10, pp. 2085–2089, 2015. doi: 10.1109/LGRS.2015.2449074
    [23]
    M. Yadav, A. K. Singh, and B. Lohani, “Extraction of road surface from mobile LiDAR data of complex road environment,” Int. Journal of Remote Sensing, vol. 38, no. 16, pp. 4655–4682, 2017. doi: 10.1080/01431161.2017.1320451
    [24]
    Z. Liu, J. Wang, and D. Liu, “A new curb detection method for unmanned ground vehicles using 2D sequential laser data,” Sensors, vol. 13, no. 1, pp. 1102–1120, 2013. doi: 10.3390/s130101102
    [25]
    Velodyne. (2019). HDL-64E. [Online]. Available: https://velodynelidar.com/hdl-64e.html, Accessed on: Apr. 16, 2020.
    [26]
    M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981. doi: 10.1145/358669.358692
    [27]
    Y. Zhang, J. Wang, X. Wang, and J. Dolan, “Road-segmentation-based curb detection method for self-driving via a 3D-LiDAR sensor,” IEEE Trans. Intelligent Transportation System, vol. 19, no. 12, pp. 3981–3991, Dec. 2018. doi: 10.1109/TITS.2018.2789462
    [28]
    J. Zhang and S. Singh, “LOAM: Lidar Odometry and Mapping in Real-time,” in Proc. Conf. Robotics: Science and Systems, 2014.
    [29]
    S. Thrun, W. Burgard, and D. Fox, “Robot Perception,” in Probabilistic Robotics, London, England: MIT press, 2005, pp. 149–187. [Online]. Available: https://mitpress.mit.edu/books/probabilistic-robotics.
    [30]
    J. Hu, A. Razdan, JC. Femiani, M Cui, and P. Wonka, “Road network extraction and intersection detection from aerial images by tracking road footprints,” IEEE Trans. Geoscience and Remote Sensing, vol. 45, no. 12, pp. 4144–4157, 2007. doi: 10.1109/TGRS.2007.906107
    [31]
    Wikipedia, “Median filter,” The Free Encyclopedia, USA. [Online]. Available: https://en.wikipedia.org/wiki/Median_filter, Accessed on: Nov. 5, 2019.
    [32]
    T. Chen, B. Dai, D. Liu, J. Song, and Z. Liu, “Velodyne-based curb detection up to 50 meters away,” in Proc. IEEE Conf. Intelligent Vehicles Symp., 2015, pp. 241–248.
    [33]
    C. K. Williams and C. E. Rasmussen, “Regression,” in Gaussian Processes for Machine Learning, London, UK: MIT press, 2006.
    [34]
    B. Douillard, J. Underwood, N. Kuntz, V. Vlaskine, A. Quadros, P. Morton, and A. Frenkel, “On the segmentation of 3D LiDAR point clouds.” in Proc. Conf. Robotics and Automation, 2011, pp. 2798–2805.
    [35]
    A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? the kitti vision benchmark suite.” in Proc. Conf. Computer Vision and Pattern Recognition, 2012, pp. 3354–3361.
    [36]
    J. Behley, M. Garbade, A. Milioto, J. Quenzel, S. Behnke, C. Stachniss, and J. Gall, “SemanticKITTI: A dataset for semantic scene understanding of LiDAR sequences,” in Proc. IEEE Conf. Computer Vision, 2019, pp. 9297–9307.
    [37]
    Powers and D. Martin, “Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation,” Journal of Machine Learning Technologies, vol. 2, no. 1, pp. 37–63, Feb. 2011.
    [38]
    Y. Sasaki, “The truth of the F-measure,” Teach Tutor mater, vol. 1, no. 5, pp. 1–5, 2007.

    Figures (17) / Tables (3)

    Article Metrics

    Article views: 2926. PDF downloads: 206.

    Highlights

    • This paper presents a road-segmentation-line-based classification method for classifying feature points. The road segmentation line is determined by a beam band model and an improved peak-finding algorithm. The method can classify left and right feature points accurately on curved roads up to 70 meters away (a simplified beam-model sketch follows this list).
    • This paper proposes a distance filter and a random sample consensus (RANSAC) filter for candidate point extraction and seed point extraction. The candidate points and seed points are used for the subsequent feature-point filtering based on an iterative Gaussian process (a hedged seed-extraction sketch follows this list).
    • This paper proposes an iterative Gaussian Process Regression (GPR) based feature-point filtering method. Because GPR is a nonparametric model, the algorithm can be applied to various road shapes without assuming a parametric boundary model, and at the same time it effectively removes false points caused by obstacle occlusions, which significantly enhances adaptability to varied road shapes and robustness to occlusions (a minimal iterative-GPR sketch follows this list).
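For the first highlight, the sketch below shows a simplified beam model and peak search for estimating the open road direction, plus a left/right split of feature points about the resulting segmentation line. The angular resolution, smoothing window, and maximum range are assumptions; the paper's beam band model and improved peak-finding algorithm are more elaborate.

```python
import numpy as np

def road_direction_from_beams(obstacle_xy, n_beams=180, max_range=70.0):
    """Azimuth (rad) of the longest free beam in the forward half-plane
    (x = forward, y = left); this is taken as the road direction."""
    fwd = obstacle_xy[obstacle_xy[:, 0] > 0.0]           # keep points ahead of the car
    angles = np.linspace(-np.pi / 2, np.pi / 2, n_beams)
    beam_len = np.full(n_beams, max_range)
    ang = np.arctan2(fwd[:, 1], fwd[:, 0])
    rng = np.hypot(fwd[:, 0], fwd[:, 1])
    bins = np.clip(np.searchsorted(angles, ang), 0, n_beams - 1)
    for b, r in zip(bins, rng):                          # nearest obstacle per beam
        beam_len[b] = min(beam_len[b], r)
    smoothed = np.convolve(beam_len, np.ones(5) / 5, mode="same")
    return angles[int(np.argmax(smoothed))]              # peak of the beam profile

def split_left_right(points_xy, direction):
    """Split feature points by the segmentation line through the sensor origin
    along the detected road direction."""
    dx, dy = np.cos(direction), np.sin(direction)
    side = dx * points_xy[:, 1] - dy * points_xy[:, 0]   # 2-D cross product
    return points_xy[side > 0], points_xy[side <= 0]     # left, right of the line
```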
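For the second highlight, a hedged sketch of seed-point extraction: a simple lateral distance filter followed by a RANSAC line fit whose inliers serve as seeds. The median/MAD filter, the thresholds, and the use of scikit-learn's RANSACRegressor are assumptions, not the paper's exact design.

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor

def extract_seed_points(side_xy, mad_k=3.0, residual=0.3):
    """side_xy: (N, 2) curb candidates for one road side (x forward, y lateral)."""
    # Distance filter (simplified): reject candidates whose lateral offset is far
    # from the robust centre of this side (median +/- k * MAD).
    y = side_xy[:, 1]
    med = np.median(y)
    mad = np.median(np.abs(y - med)) + 1e-6
    cand = side_xy[np.abs(y - med) < mad_k * mad]

    # RANSAC line fit y = a*x + b on the remaining candidates; the inliers are
    # kept as seed points for the iterative GPR stage.
    ransac = RANSACRegressor(residual_threshold=residual, random_state=0)
    ransac.fit(cand[:, :1], cand[:, 1])
    return cand[ransac.inlier_mask_]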
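For the third highlight, a minimal sketch of an iterative GPR filter: fit a Gaussian process to the trusted points, discard candidates outside a k-sigma band around the predicted boundary, add the survivors to the trusted set, and repeat until the set is stable. The kernel choice and thresholds are assumptions, not the paper's values.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def iterative_gpr_filter(seed_xy, candidate_xy, k_sigma=2.0, max_iter=5):
    """seed_xy: trusted boundary points; candidate_xy: points to be vetted.
    Both are (N, 2) arrays with x = driving direction, y = lateral offset."""
    kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=0.05)  # assumed kernel
    inliers = seed_xy
    for _ in range(max_iter):
        gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gpr.fit(inliers[:, :1], inliers[:, 1])
        mu, std = gpr.predict(candidate_xy[:, :1], return_std=True)
        keep = np.abs(candidate_xy[:, 1] - mu) < k_sigma * std   # plausible boundary points
        updated = np.vstack([seed_xy, candidate_xy[keep]])
        if len(updated) == len(inliers):                         # no change: converged
            return updated
        inliers = updated
    return inliers
```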
