A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation.
Volume 9, Issue 6, Jun. 2022

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 11.8, Top 4% (SCI Q1)
  • CiteScore: 17.6, Top 3% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: K. N. Zhang, J. Y. Ma, and J. J. Jiang, “Loop closure detection with reweighting NetVLAD and local motion and structure consensus,” IEEE/CAA J. Autom. Sinica, vol. 9, no. 6, pp. 1087–1090, Jun. 2022. doi: 10.1109/JAS.2022.105635

Loop Closure Detection With Reweighting NetVLAD and Local Motion and Structure Consensus

doi: 10.1109/JAS.2022.105635
  • [1]
    W. Huang, G. Zhang, and X. Han, “Dense mapping from an accurate tracking slam,” IEEE/CAA J. Autom. Sinica, vol. 7, no. 6, pp. 1565–1574, 2020. doi: 10.1109/JAS.2020.1003357
    [2]
    J. Sivic and A. Zisserman, “Video Google: A text retrieval approach to object matching in videos,” in Proc. ICCV, 2003, pp. 1470–1470.
    [3]
    H. Jégou, M. Douze, C. Schmid, and P. Pérez, “Aggregating local descriptors into a compact image representation,” in Proc. CVPR, 2010, pp. 3304–3311.
    [4]
    G. Tolias, Y. Avrithis, and H. Jégou, “Image search with selective match kernels: Aggregation across single and multiple images,” Int. J. Comput. Vis., vol. 116, no. 3, pp. 247–261, 2016. doi: 10.1007/s11263-015-0810-4
    [5]
    M. Cummins and P. Newman, “Fab-map: Probabilistic localization and mapping in the space of appearance,” Int. J. Rob. Res., vol. 27, no. 6, pp. 647–665, 2008. doi: 10.1177/0278364908090961
    [6]
    D. Gálvez-López and J. D. Tardos, “Bags of binary words for fast place recognition in image sequences,” IEEE Trans. Robot., vol. 28, no. 5, pp. 1188–1197, 2012. doi: 10.1109/TRO.2012.2197158
    [7]
    E. Garcia-Fidalgo and A. Ortiz, “iBoW-LCD: An appearance-based loop-closure detection approach using incremental bags of binary words,” IEEE Robot. Autom. Lett., vol. 3, no. 4, pp. 3051–3057, 2018. doi: 10.1109/LRA.2018.2849609
    [8]
    K. A. Tsintotas, L. Bampis, and A. Gasteratos, “Assigning visual words to places for loop closure detection,” in Proc. ICRA, 2018, pp. 1–7.
    [9]
    D. Liu, Y. Cui, X. Guo, W. Ding, B. Yang, and Y. Chen, “Visual localization for autonomous driving: Mapping the accurate location in the city maze,” in Proc. ICPR, 2021, pp. 3170–3177.
    [10]
    R. Arandjelovic, P. Gronat, A. Torii, T. Pajdla, and J. Sivic, “NetVLAD: CNN architecture for weakly supervised place recognition,” in Proc. CVPR, 2016, pp. 5297–5307.
    [11]
    D. Liu, Y. Cui, L. Yan, C. Mousas, B. Yang, and Y. Chen, “DenserNet: Weakly supervised visual localization using multi-scale feature aggregation,” in Proc. AAAI, 2021, pp. 6101–6109.
    [12]
    B. Cao, A. Araujo, and J. Sim, “Unifying deep local and global features for image search,” in Proc. ECCV, 2020, pp. 726–743.
    [13]
    S. An, H. Zhu, D. Wei, K. A. Tsintotas, and A. Gasteratos, “Fast and incremental loop closure detection with deep features and proximity graphs,” J. Field Robot., 2022. DOI: DOI: 10.1002/rob.22060
    [14]
    Y. Xu, J. Huang, J. Wang, Y. Wang, H. Qin, and K. Nan, “ESA-VLAD: A lightweight network based on second-order attention and NetVLAD for loop closure detection,” IEEE Robot. Autom. Lett., vol. 6, no. 4, pp. 6545–6552, 2021. doi: 10.1109/LRA.2021.3094228
    [15]
    H. Wang, W. Wang, X. Zhu, J. Dai, and L. Wang, “Collaborative visual navigation,” arXiv preprint arXiv: 2107.01151, 2021.
    [16]
    H. Wang, W. Wang, T. Shu, W. Liang, and J. Shen, “Active visual information gathering for vision-language navigation,” in Proc. ECCV, 2020, pp. 307–322.
    [17]
    Y. A. Malkov and D. A. Yashunin, “Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 4, pp. 824–836, 2018.
    [18]
    M. A. Fischler and R. C. Bolles, “Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography,” Commun. ACM, vol. 24, no. 6, pp. 381–395, 1981. doi: 10.1145/358669.358692
    [19]
    H. Liu and S. Yan, “Common visual pattern discovery via spatially coherent correspondences,” in Proc. CVPR, 2010, pp. 1609–1616.
    [20]
    C. Leng, H. Zhang, G. Cai, Z. Chen, and A. Basu, “Total variation constrained non-negative matrix factorization for medical image registration,” IEEE/CAA J. Autom. Sinica, vol. 8, no. 5, pp. 1025–1037, 2021. doi: 10.1109/JAS.2021.1003979
    [21]
    J. Ma, J. Zhao, J. Jiang, H. Zhou, and X. Guo, “Locality preserving matching,” Int. J. Comput. Vis., vol. 127, no. 5, pp. 512–531, 2019. doi: 10.1007/s11263-018-1117-z
    [22]
    J.-W. Bian, W.-Y. Lin, Y. Liu, L. Zhang, S.-K. Yeung, M.-M. Cheng, and I. Reid, “GMS: Grid-based motion statistics for fast, ultra-robust feature correspondence,” Int. J. Comput. Vis., vol. 128, no. 6, pp. 1580–1593, 2020.
    [23]
    T. Weyand, A. Araujo, B. Cao, and J. Sim, “Google landmarks dataset v2–a large-scale benchmark for instance-level recognition and retrieval,” in Proc. CVPR, 2020, pp. 2575–2584.
    [24]
    S. T. Roweis and L. K. Saul, “Nonlinear dimensionality reduction by locally linear embedding,” Science, vol. 290, no. 5500, pp. 2323–2326, 2000. doi: 10.1126/science.290.5500.2323
    [25]
    K. Zhang, Z. Li, and J. Ma, “Appearance-based loop closure detection via bidirectional manifold representation consensus,” in Proc. ICRA, 2021, pp. 6811–6817.
    [26]
    K. Zhang, X. Jiang, and J. Ma, “Appearance-based loop closure detection via locality-driven accurate motion field learning,” IEEE Trans. Intell. Transp. Syst., vol. 23, no. 3, pp. 2350–2365, 2021.
    [27]
    D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis., vol. 60, no. 2, pp. 91–110, 2004. doi: 10.1023/B:VISI.0000029664.99615.94
    [28]
    X. Li and Z. Hu, “Rejecting mismatches by correspondence function,” Int. J. Comput. Vis., vol. 89, no. 1, pp. 1–17, 2010. doi: 10.1007/s11263-010-0318-x
    [29]
    X. Jiang, J. Ma, J. Jiang, and X. Guo, “Robust feature matching using spatial clustering with heavy outliers,” IEEE Trans. Image Process., vol. 29, pp. 736–746, 2020. doi: 10.1109/TIP.2019.2934572
    [30]
    S. A. M. Kazmi and B. Mertsching, “Detecting the expectancy of a place using nearby context for appearance-based mapping,” IEEE Trans. Robot., vol. 35, no. 6, pp. 1352–1366, 2019. doi: 10.1109/TRO.2019.2926475
