A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation.
Volume 12, Issue 10
Oct. 2025

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 19.2, Top 1 (SCI Q1)
  • CiteScore: 28.2, Top 1% (Q1)
  • Google Scholar h5-index: 95, Top 5
Citation: Y. Zhang, G. Tian, C. Zhang, C. Hua, W. Ding, and C. K. Ahn, “Environment modeling for service robots from a task execution perspective,” IEEE/CAA J. Autom. Sinica, vol. 12, no. 10, pp. 1985–2001, Oct. 2025. doi: 10.1109/JAS.2025.125168

Environment Modeling for Service Robots From a Task Execution Perspective

doi: 10.1109/JAS.2025.125168
Funds: This work was supported in part by the National Natural Science Foundation of China (62203378, 62203377, 62073279), the Hebei Natural Science Foundation (F2024203036, F2024203115, F2025203101), the Science and Technology Program of Hebei (236Z2002G, 236Z1603G), the Science Research Project of Hebei Education Department (BJK2024195), and the National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Science and ICT) (NRF-2020R1A2C1005449).
  • Service robots are increasingly entering homes to perform domestic tasks for residents. However, when working in open, dynamic, and unstructured home environments, service robots still face challenges such as limited task-execution intelligence and poor long-term autonomy (LTA), which have restricted their deployment. As the basis of robotic task execution, environment modeling has attracted significant attention: it integrates core technologies such as environment perception, understanding, and representation to accurately capture environmental information. This paper presents a comprehensive survey of environment modeling from a new task-execution-oriented perspective. In particular, guided by the requirements of robots performing domestic service tasks in the home environment, we systematically review the progress made in task-execution-oriented environment modeling in four respects: 1) localization, 2) navigation, 3) manipulation, and 4) LTA. Current challenges are discussed, and potential research opportunities are highlighted.
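To make the idea of task-execution-oriented environment modeling concrete, the following minimal Python sketch pairs a metric layer (a 2-D occupancy grid, as used for localization and navigation) with a semantic layer (object anchors, as used for manipulation and object search). This is our illustrative assumption, not the paper's implementation; names such as EnvironmentModel, anchor_object, and query_goal are hypothetical.

    import numpy as np

    # Minimal sketch of a layered environment model (hypothetical, for
    # illustration only): a metric occupancy grid for localization and
    # navigation, plus a semantic object layer for manipulation queries.

    class EnvironmentModel:
        def __init__(self, width_m, height_m, resolution=0.05):
            self.resolution = resolution  # meters per grid cell
            rows = int(height_m / resolution)
            cols = int(width_m / resolution)
            # Occupancy convention: -1 unknown, 0 free, 1 occupied.
            self.grid = -np.ones((rows, cols), dtype=np.int8)
            self.objects = {}  # semantic layer: label -> (x, y, last_seen)

        def update_cell(self, x, y, occupied):
            # Metric update from a range-sensor observation.
            r, c = int(y / self.resolution), int(x / self.resolution)
            self.grid[r, c] = 1 if occupied else 0

        def anchor_object(self, label, x, y, stamp):
            # Semantic update: anchor a recognized object to map coordinates.
            self.objects[label] = (x, y, stamp)

        def query_goal(self, label, standoff=0.6):
            # Task-oriented query: a navigation goal near a named object;
            # returning None would trigger an object-search behavior instead.
            if label not in self.objects:
                return None
            x, y, _ = self.objects[label]
            return (x - standoff, y)  # naive standoff pose, for illustration

    model = EnvironmentModel(width_m=10.0, height_m=8.0)
    model.update_cell(2.0, 3.0, occupied=True)
    model.anchor_object("cup", 2.0, 3.0, stamp=0.0)
    print(model.query_goal("cup"))  # -> (1.4, 3.0)

In a deployed system the metric layer would be maintained by a SLAM backend and the semantic layer by a recognition-and-anchoring pipeline; the sketch only illustrates the separation of layers that task-execution-oriented modeling exploits.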
