A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation
Volume 2, Issue 1, January 2015

IEEE/CAA Journal of Automatica Sinica

Wei Zheng, Fan Zhou and Zengfu Wang, "Robust and Accurate Monocular Visual Navigation Combining IMU for a Quadrotor," IEEE/CAA J. of Autom. Sinica, vol. 2, no. 1, pp. 33-44, 2015.

Robust and Accurate Monocular Visual Navigation Combining IMU for a Quadrotor

Funds:

This work was supported by the National Science and Technology Major Project of the Ministry of Science and Technology of China (2012GB102007).

Abstract: In this paper, we present a multi-sensor-fusion-based monocular visual navigation system for a quadrotor with limited payload, power and computational resources. Our system is equipped with an inertial measurement unit (IMU), a sonar and a monocular down-looking camera, and works well in GPS-denied and markerless environments. Unlike most keyframe-based visual navigation systems, our system uses information from both keyframes and keypoints in each frame. The GPU-based speeded up robust feature (SURF) is employed for feature detection and feature matching. Based on the flight characteristics of a quadrotor, we propose a refined preliminary motion estimation algorithm that incorporates IMU data. A multi-level judgment rule is then presented which handles hovering conditions well and effectively reduces error accumulation. By using the sonar sensor, the metric scale estimation problem is solved. We also present the novel IMU+3P (IMU with three point correspondences) algorithm for accurate pose estimation. This algorithm transforms the 6-DOF pose estimation problem into a 4-DOF problem and obtains more accurate results with less computation time. We evaluate the monocular visual navigation system in real indoor and outdoor environments. The results demonstrate that the system runs in real time and provides robust and accurate navigation for the quadrotor.
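The 6-DOF-to-4-DOF reduction behind IMU+3P can be illustrated numerically. The sketch below is a hedged illustration, not the paper's algorithm: the rotation helpers and angle values are invented for the example. It only verifies the geometric fact the abstract relies on: once roll and pitch are known from the IMU, compensating the full camera rotation by that known component leaves a pure yaw rotation, so only yaw and the three translation components (4 DOF) remain to be estimated from three point correspondences.

```python
import numpy as np

# Elementary rotations about the x, y and z axes (ZYX Euler convention).
def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

roll, pitch, yaw = 0.05, -0.12, 0.8           # radians, example values only
R_full = rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)  # full attitude: 3 rotational DOF
R_imu  = rot_y(pitch) @ rot_x(roll)               # roll/pitch part, known from the IMU

# Compensating by the IMU-known rotation leaves only the yaw rotation:
# R_full @ R_imu^T = Rz(yaw). Pose estimation then has 4 unknowns
# (yaw + 3D translation), which 3 point correspondences can constrain.
R_residual = R_full @ R_imu.T
print(np.allclose(R_residual, rot_z(yaw)))    # True
```

Because rotation matrices are orthogonal, `R_imu.T` inverts the roll/pitch component exactly, which is why the residual rotation depends on yaw alone.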

  • [1]
    Scherer S, Singh S, Chamberlain L, Elgersma M. Flying fast and low among obstacles:methodology and experiments. International Journal of Robotics Research, 2008, 27(5):549-574
    [2]
    He R J, Bachrach A, Achtelik M, Geramifard A, Gurdan D, Perentice S, Stumpf J, Roy N. On the design and use of a micro air vehicle to track and avoid adversaries. International Journal of Robotics Research, 2010, 29(5):529-546
    [3]
    Ahrens S, Levine D, Andrews G, How P J. Vision-based guidance and control of a hovering vehicle in unknown, GPS-denied environments. In:Proceedings of the 2009 IEEE International Conference on Robotics and Automation. Kobe, Japan:IEEE, 2009. 2643-2648
    [4]
    Abeywardena D, Wang Z, Kodagoda S, Dissanayake G. Visual-Inertial fusion for quadrotor micro air vehicles with improved scale observability. In:Proceedings of the 2013 IEEE International Conference on Robotics and Automation. Karlsruhe, Germany:IEEE, 2013. 3148-3153
    [5]
    Meier L, Tanskanen P, Heng L, Lee G H, Fraundorfer F, Pollefeys M. PIXHAWK:a micro aerial vehicle design for autonomous flight using onboard computer vision. Auton Robot, 2012, 33(1-2):21-39
    [6]
    Grzonka S, Grisetti G, Burgard W. Towards a navigation system for autonomous indoor flying. In:Proceedings of the 2009 IEEE International Conference on Robotics and Automation. Kobe, Japan:IEEE, 2009. 2878-2883
    [7]
    Wang F, Cui J Q, Chen B M, Lee T H. A comprehensive UAV indoor navigation system based on vision optical flow and laser FastSLAM. Acta Automatica Sinica, 2013, 39(11):1889-1900
    [8]
    Bachrach A, He R J, Roy N. Autonomous flight in unstructured and unknown indoor environments. In:Proceedings of the 2009 European Micro Aerial Vehicle Conference and Flight Competition. Delft, Netherland:Massachusetts Institute of Technology, 2009. 119-126
    [9]
    Bachrach A, Prentice S, He R J, Roy N. RANGE-robust autonomous navigation in GPS-denied environments. Journal of Field Robotics, 2011, 28(5):644-666
    [10]
    Shen S J, Michael N, Kumar V. Autonomous indoor 3D exploration with a micro-aerial vehicle. In:Proceedings of the 2012 IEEE International Conference on Robotics and Automation. Saint Paul, USA:IEEE, 2012. 9-15
    [11]
    Fraundorfer F, Heng L, Honegger D, Lee H G, Meier L, Tanskanen P, Pollefeys M. Vision-based autonomous mapping and exploration using a quadrotor MAV. In:Proceedings of the 2012 International Conference on Intelligent Robots and Systems. Algarve, Portugal:IEEE, 2012. 4557-4564
    [12]
    Shen S J, Mulgaonkar Y, Michael N, Kumar V. Vision-based state estimation for autonomous rotorcraft MAVs in complex environments. In:Proceedings of the 2013 IEEE International Conference on Robotics and Automation. Karlsruhe, Germany:IEEE, 2013. 1758-1764
    [13]
    Davison A J, Reid I D, Molton N D, Stasse O. MonoSLAM:Realtime single camera SLAM. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(6):1052-1067
    [14]
    Williams B, Hudson N, Tweddle B, Brockers R, Matthies L. Feature and pose constrained visual aided inertial navigation for computationally constrained aerial vehicles. In:Proceedings of the 2011 IEEE International Conference on Robotics and Automation. Shanghai, China:IEEE, 2011. 431-438
    [15]
    Mouragnon E, Lhuillier M, Dhome M, Dekeyser F, Sayd P. Real time localization and 3D reconstruction. In:Proceedings of the 2006 IEEE Conference on Computer Vision and Pattern Recognition. New York, USA:IEEE, 2006. 363-370
    [16]
    Klein G, Murray D. Parallel tracking and mapping for small AR workspaces. In:Proceedings of the 2007 IEEE and ACM International Symposium on Mixed and Augmented Reality. Nara, Japan:IEEE, 2007. 225-234
    [17]
    Klein G, Murray D. Improving the agility of keyframe-based SLAM. In:Proceedings of the 2008 European Conference on Computer Vision. Marseille, France:Springer, 2008. 802-815
    [18]
    Strasdat H, Montiel J M M, Davison A J. Visual SLAM:why filter? Image and Vision Computing, 2012, 30(2):65-77
    [19]
    Zou D P, Tan P. CoSLAM:collaborative visual SLAM in dynamic environments. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(2):354-366
    [20]
    Blosch M, Weiss S, Scaramuzza D, Siegwart R. Vision based MAV navigation in unknown and unstructured environments. In:Proceedings of the 2010 IEEE International Conference on Robotics and Automation. Alaska, USA:IEEE, 2010. 21-28
    [21]
    Achtelik M, Achtelik M, Weiss S, Siegwart R. Onboard IMU and monocular vision based control for MAVs in unknown in- and outdoor environments. In:Proceedings of the 2011 IEEE International Conference on Robotics and Automation. Shanghai, China:IEEE, 2011. 3056-3063
    [22]
    Engel J, Sturm J, Cremers D. Camera-based navigation of a lowcost quadrocopter. In:Proceedings of the 2012 IEEE International Conference on Intelligent Robots and Systems. Algarve, Portugal:IEEE, 2012. 2815-2821
    [23]
    Weiss S, Achtelik M W, Lynen S, Chli M, Siegwart R. Real-time onboard visual-inertial state estimation and self-calibration of MAVs in unknown environments. In:Proceedings of the 2012 IEEE International Conference on Robotics and Automation. Saint Paul, USA:IEEE, 2012. 957-964
    [24]
    Nister D. An efficient solution to the five-point relative pose problem. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004, 26(6):756-770
    [25]
    Li H D, Hartley R. Five-point motion estimation made easy. In:Proceedings of the 2006 IEEE International Conference on Pattern Recognition. Hong Kong, China:IEEE, 2006. 630-633
    [26]
    Fraundorfer F, Tanskanen P, Pollefeys M. A minimal case solution to the calibrated relative pose problem for the case of two known orientation angles. In:Proceedings of the 2010 European Conference on Computer Vision. Crete, Greece:Springer, 2010. 269-282
    [27]
    Sun Feng-Mei, Wang Bo. A note on the roots distribution and stability of the PnP Problem. Acta Automatica Sinica, 2010, 36(9):1213-1219(in Chinese)
    [28]
    Quan L, Lan Z D. Linear N-point camera pose determination. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1999, 21(8):774-780
    [29]
    Ansar A, Daniilidis K. Linear pose estimation from points or lines. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(5):578-589
    [30]
    Gao X S, Hou X R, Tang J L, Cheng H F. Complete solution classification for the perspective-three-point problem. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(8):930-943
    [31]
    Lepetit V, Moreno-Noguer F, Pascal F. EPnP:accurate non-iterative O(n) solution to the PnP problem. International Journal of Computer Vision, 2008, 81(2):151-166
    [32]
    Lu C P, Hager G D, Mjolsness E. Fast and globally convergent pose estimation from video images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(6):610-622
    [33]
    Hu Zhan-Yi, Lei Cheng, Wu Fu-Chao. A short note on P4P problem. Acta Automatica Sinica, 2001, 27(6):770-776(in Chinese)
    [34]
    Schweighofer G, Pinz A. Robust pose estimation from a planar target. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(12):2024-2030
    [35]
    Kukelova Z, Bujnak M, Pajdla T. Closed-form solutions to minimal absolute pose problems with known vertical direction. In:Proceedings of the 2010 Asian Conference on Computer Vision. Berlin Heidelberg:Springer, 2010. 216-229
    [36]
    Rosten E, Drummond T. Machine learning for high-speed corner detection. In:Proceedings of the 2006 European Conference on Computer Vision. Berlin Heidelberg:Springer, 2006. 430-443
    [37]
    Mair E, Hager G D, Burschka D, Suppa M, Hirzinger G. Adaptive and generic corner detection based on the accelerated segment Test. In:Proceedings of the 2010 European Conference on Computer Vision. Crete, Greece:Springer, 2010. 183-196
    [38]
    Calonder M, Lepetit V, Strecha C, Fua P. BRIEF:Binary robust independent elementary features. In:Proceedings of the 2010 European Conference on Computer Vision. Berlin Heidelberg:Springer, 2010. 778-792
    [39]
    Lowe D G. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 2004, 60(2):91-110
    [40]
    Bay H, Tuytelaars T, van Gool L V. SURF:speeded up robust features. Computer Vision and Image Understanding, 2008, 110(3):346-359
