A journal of IEEE and CAA that publishes high-quality papers in English on original theoretical/experimental research and development in all areas of automation.
Volume 9, Issue 10, Oct. 2022

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 11.8, Top 4% (SCI Q1)
  • CiteScore: 17.6, Top 3% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: J. Q. Yang, Z. Q. Huang, S. W. Quan, Z. G. Cao, and Y. N. Zhang, “RANSACs for 3D rigid registration: A comparative evaluation,” IEEE/CAA J. Autom. Sinica, vol. 9, no. 10, pp. 1861–1878, Oct. 2022. doi: 10.1109/JAS.2022.105500

RANSACs for 3D Rigid Registration: A Comparative Evaluation

doi: 10.1109/JAS.2022.105500
Funds: This work was supported in part by the National Natural Science Foundation of China (NSFC) (62002295, U19B2037), the China Postdoctoral Science Foundation (2020M673319), the Shaanxi Provincial Key R&D Program (2021KWZ-03), and the Natural Science Basic Research Plan in Shaanxi Province of China (2021JQ-290, 2020JQ-210).
  • Estimating an accurate six-degree-of-freedom (6-DoF) pose from correspondences contaminated by outliers remains a critical issue in 3D rigid registration. Random sample consensus (RANSAC) and its variants are popular solutions to this problem. Although a number of RANSAC-fashion estimators exist, two issues remain unsolved. First, it is unclear which estimator is more appropriate for a particular application. Second, the impacts of different sampling strategies, hypothesis generation methods, hypothesis evaluation metrics, and stopping criteria on the overall estimators remain ambiguous. This work fills these gaps by first considering six existing RANSAC-fashion methods and then proposing eight variants for a comprehensive evaluation. The objective is to thoroughly compare estimators in the RANSAC family and to evaluate the effect of each key stage on the eventual 6-DoF pose estimation performance. Experiments have been carried out on four standard datasets covering different application scenarios, data modalities, and nuisances, providing input correspondence sets with a variety of inlier ratios, spatial distributions, and scales. Based on the experimental results, we summarize remarkable outcomes and valuable findings so as to give practical guidance for real-world applications and to highlight current bottlenecks and potential solutions in this research realm.
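    The estimators surveyed here all share the classic hypothesize-and-verify loop: draw a minimal sample of correspondences, generate a rigid-transform hypothesis, score it against all correspondences, and stop adaptively once enough iterations have been run for the current inlier-ratio estimate. The sketch below illustrates that generic loop with NumPy; it is not the authors' implementation, and the 3-point Kabsch solver, the inlier-counting score, the default thresholds, and the function names are illustrative assumptions.

```python
import numpy as np

def kabsch_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst via SVD (Kabsch).
    src, dst: (N, 3) arrays of matched points, N >= 3."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def ransac_rigid_registration(src_pts, dst_pts, inlier_thresh=0.05,
                              max_iters=10000, confidence=0.999, seed=0):
    """Plain RANSAC over putative correspondences src_pts[i] <-> dst_pts[i].
    Returns the best (R, t) and a boolean inlier mask."""
    rng = np.random.default_rng(seed)
    n = len(src_pts)
    best_R, best_t = np.eye(3), np.zeros(3)
    best_mask, best_count = np.zeros(n, dtype=bool), -1
    iters, it = max_iters, 0
    while it < iters:
        idx = rng.choice(n, size=3, replace=False)      # minimal 3-point sample
        R, t = kabsch_rigid_transform(src_pts[idx], dst_pts[idx])
        resid = np.linalg.norm(src_pts @ R.T + t - dst_pts, axis=1)
        mask = resid < inlier_thresh                    # inlier-count hypothesis score
        count = int(mask.sum())
        if count > best_count:
            best_R, best_t, best_mask, best_count = R, t, mask, count
            # adaptive stopping criterion based on the current inlier ratio
            w = count / n
            denom = np.log(max(1.0 - w ** 3, 1e-12))
            if denom < 0:
                iters = min(max_iters, int(np.ceil(np.log(1.0 - confidence) / denom)))
        it += 1
    if best_count >= 3:                                  # refine on all inliers
        best_R, best_t = kabsch_rigid_transform(src_pts[best_mask], dst_pts[best_mask])
    return best_R, best_t, best_mask
```

    For correspondences in metric units (e.g., indoor RGB-D scans), an inlier threshold of a few centimeters is a common starting point; the stopping rule is the standard bound k ≥ log(1 − p) / log(1 − w³), with w the current inlier ratio and p the desired confidence.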

     


    Highlights

    • A survey of six existing 6-DoF pose estimators in the RANSAC family
    • Eight RANSAC variants for a comprehensive 3D registration evaluation
    • Summary of the merits and demerits of RANSACs for 3D rigid registration
