Super Odometry: IMU-centric LiDAR-Visual-Inertial Estimator for Challenging Environments

Authors: Shibo Zhao, Hengrui Zhang, Peng Wang, Lucas Nogueira, Sebastian Scherer

Abstract: We propose Super Odometry, a high-precision multi-modal sensor fusion framework that provides a simple but effective way to fuse multiple sensors, such as LiDAR, cameras, and IMUs, and achieve robust state estimation in perceptually degraded environments. Unlike traditional sensor-fusion methods, Super Odometry employs an IMU-centric data processing pipeline, which combines the advantages of loosely coupled and tightly coupled methods and recovers motion in a coarse-to-fine manner. The proposed framework is composed of three parts: IMU odometry, visual-inertial odometry, and laser-inertial odometry. The visual-inertial and laser-inertial odometry modules provide pose priors to constrain the IMU bias and receive motion predictions from the IMU odometry. To ensure real-time performance, we apply a dynamic octree that consumes only 10% of the running time of a static KD-tree. The proposed system was deployed on drones and ground robots as part of Team Explorer's effort in the DARPA Subterranean Challenge, where the team won 1st and 2nd place in the Tunnel and Urban Circuits, respectively.
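
The coarse-to-fine loop the abstract describes, with IMU odometry predicting motion and the visual-inertial and laser-inertial modules feeding pose priors back to constrain the IMU bias, can be illustrated with a short sketch. The names below (ImuOdometry, fusion_step, constrain_bias) and the physics (no gravity, rotation, or gyro-bias terms) are simplifying assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

class ImuOdometry:
    """Coarse layer: high-rate dead reckoning from bias-corrected IMU data.
    Heavily simplified: gravity, rotation, and gyro bias are omitted."""

    def __init__(self, dt):
        self.dt = dt                   # IMU sample period (s)
        self.position = np.zeros(3)
        self.velocity = np.zeros(3)
        self.accel_bias = np.zeros(3)  # constrained by VIO/LIO pose priors

    def propagate(self, accels):
        """Integrate a window of accelerometer samples into a coarse
        motion prediction, which the finer modules use as an initial guess."""
        for a in accels:
            self.velocity += (a - self.accel_bias) * self.dt
            self.position += self.velocity * self.dt
        return self.position.copy()

    def constrain_bias(self, pose_prior, predicted_pose, window_t, gain=0.5):
        """Pull the bias toward the value that explains the observed drift.
        Over a window of length t, a constant accelerometer bias produces
        roughly 0.5 * bias * t**2 of position drift, hence the division."""
        drift = predicted_pose - pose_prior
        self.accel_bias += gain * drift / (0.5 * window_t ** 2)

def fusion_step(imu, imu_window, window_t, vio_pose, lio_pose):
    """One coarse-to-fine cycle: the IMU predicts, the finer modules refine,
    and their refined poses feed back as priors on the IMU bias. A module
    that failed (e.g. camera in darkness, LiDAR in fog) returns None and is
    skipped, which is the robustness argument of the abstract."""
    prediction = imu.propagate(imu_window)   # coarse motion prediction
    for prior in (vio_pose, lio_pose):       # fine refinements
        if prior is not None:
            imu.constrain_bias(prior, prediction, window_t)
    return prediction
```

The point of the structure is that the IMU never fails outright, so the pipeline always produces a motion estimate; the camera and LiDAR only tighten it when their observations are trustworthy.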
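The claimed speedup of the dynamic octree over a static KD-tree comes from incremental updates: new scan points are inserted into existing branches, whereas a static KD-tree must be rebuilt as the map grows. Below is a minimal sketch of such a structure, assuming a simple point octree with leaf splitting and radius queries; DynamicOctree and its parameters are illustrative, not the paper's published code.

```python
import numpy as np

class DynamicOctree:
    """Simplified point octree supporting incremental insertion."""

    MAX_POINTS = 8        # leaf capacity before splitting
    MIN_HALF_SIZE = 0.05  # stop subdividing below this cell size (m)

    def __init__(self, center, half_size):
        self.center = np.asarray(center, dtype=float)
        self.half_size = float(half_size)
        self.points = []      # points stored here (leaf nodes only)
        self.children = None  # list of 8 sub-octants once split

    def _octant(self, p):
        # Octant index from the sign of (p - center) on each axis.
        return sum(1 << i for i in range(3) if p[i] >= self.center[i])

    def _split(self):
        half = self.half_size / 2
        self.children = []
        for idx in range(8):
            offset = np.array([1 if (idx >> i) & 1 else -1 for i in range(3)])
            self.children.append(DynamicOctree(self.center + offset * half, half))
        for p in self.points:  # push stored points down one level
            self.children[self._octant(p)].insert(p)
        self.points = []

    def insert(self, p):
        """Incremental insertion: O(depth) per point, no global rebuild."""
        p = np.asarray(p, dtype=float)
        if self.children is not None:
            self.children[self._octant(p)].insert(p)
            return
        self.points.append(p)
        if len(self.points) > self.MAX_POINTS and self.half_size > self.MIN_HALF_SIZE:
            self._split()

    def radius_search(self, q, r, found):
        """Collect stored points within radius r of q, pruning octants
        whose bounding cube cannot intersect the query ball."""
        if np.max(np.abs(q - self.center)) > self.half_size + r:
            return
        if self.children is None:
            found.extend(p for p in self.points if np.linalg.norm(p - q) <= r)
        else:
            for c in self.children:
                c.radius_search(q, r, found)

# Example usage: the map grows scan by scan, with no rebuild step.
tree = DynamicOctree(center=(0.0, 0.0, 0.0), half_size=100.0)
for p in np.random.uniform(-50, 50, size=(10_000, 3)):
    tree.insert(p)
neighbors = []
tree.radius_search(np.zeros(3), r=2.0, found=neighbors)
```

For scan-to-map registration, which issues thousands of neighbor queries per scan against a continuously growing map, avoiding the KD-tree rebuild is what makes the reported real-time performance plausible.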

Submitted to arXiv on 30 Apr. 2021
