
IMU-Assisted KLT Feature Tracker

This project is no longer active.

Feature tracking is a front-end stage for many vision applications, from
optical flow to object tracking and 3D reconstruction. Robust tracking is
essential for higher-level algorithms such as visual odometry in visual
navigation. We implemented the KLT (Kanade-Lucas-Tomasi) method to track a
set of feature points through an image sequence. Our goal is to enhance KLT
so that it tracks more feature points, and tracks them for longer, under a
real-time constraint.
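As background, the following is a minimal sketch of a standard pyramidal KLT
pipeline written against the OpenCV API. It is not the project's code, and the
corner-detection and window parameters are illustrative only: Shi-Tomasi
corners are detected, tracked frame to frame, and replenished when too many
tracks are lost.

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main() {
        cv::VideoCapture cap(0);                        // any image sequence source
        cv::Mat frame, gray, prevGray;
        std::vector<cv::Point2f> prevPts, nextPts;

        cap >> frame;
        cv::cvtColor(frame, prevGray, cv::COLOR_BGR2GRAY);
        cv::goodFeaturesToTrack(prevGray, prevPts, 1000, 0.01, 8);   // Shi-Tomasi corners

        while (cap.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            if (!prevPts.empty()) {
                std::vector<uchar> status;
                std::vector<float> err;
                cv::calcOpticalFlowPyrLK(prevGray, gray, prevPts, nextPts,
                                         status, err, cv::Size(21, 21), 3);  // pyramidal KLT
                std::vector<cv::Point2f> kept;                               // drop lost tracks
                for (size_t i = 0; i < nextPts.size(); ++i)
                    if (status[i]) kept.push_back(nextPts[i]);
                prevPts = kept;
            }
            if (prevPts.size() < 500)                                        // replenish features
                cv::goodFeaturesToTrack(gray, prevPts, 1000, 0.01, 8);
            prevGray = gray.clone();
        }
        return 0;
    }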

We increase robustness by addressing two limitations of KLT: its bounded
search region and its low-order tracking motion model. The first is addressed
by fusing IMU measurements with KLT, so that the search region, re-centered
using the estimated camera ego-motion, is more likely to contain the true
global minimum. The second is addressed by using a higher-order motion model
that handles severe appearance changes of the template caused by camera
rolling and outdoor illumination.
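The prediction step can be sketched as follows. Under approximately pure
rotation between consecutive frames, the image motion of every pixel is
described by the infinite homography H = K R K^-1, where K is the camera
intrinsic matrix and R is the rotation integrated from the gyroscope over the
inter-frame interval. Warping the previous feature locations with H gives
re-centered positions around which to search. This is an assumption about one
way to realize the idea, not the project's code; the predicted points could,
for example, be passed to OpenCV's tracker as initial estimates via the
OPTFLOW_USE_INITIAL_FLOW flag.

    #include <opencv2/core.hpp>
    #include <vector>

    // Predict feature locations induced by camera rotation alone.
    // K: camera intrinsics; R: gyro-integrated rotation taking points from the
    // previous camera frame to the current one (illustrative inputs).
    std::vector<cv::Point2f> predictWithIMU(const std::vector<cv::Point2f>& pts,
                                            const cv::Matx33d& K,
                                            const cv::Matx33d& R) {
        const cv::Matx33d H = K * R * K.inv();           // infinite homography
        std::vector<cv::Point2f> predicted;
        predicted.reserve(pts.size());
        for (const auto& p : pts) {
            cv::Vec3d q = H * cv::Vec3d(p.x, p.y, 1.0);  // homogeneous warp
            predicted.emplace_back(q[0] / q[2], q[1] / q[2]);
        }
        return predicted;
    }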

The additional computational load caused by the larger number of parameters
in the more complex motion model can be alleviated by restricting the Hessian
computation and by a GPU implementation. The enhanced KLT, in cooperation
with the IMU, achieves video-rate tracking of up to 1,000 features
simultaneously, even under sharp camera rotations.
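One common way to restrict the Hessian computation is the
inverse-compositional formulation of Lucas-Kanade, in which the n-by-n
Hessian of an n-parameter warp is built once per template from the template
gradients and reused across iterations and frames. The sketch below shows
that precomputation for a 6-parameter affine warp; it is an assumption about
one possible realization, not the project's implementation, and the GPU port
is not shown.

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>

    // Precompute the 6x6 Gauss-Newton Hessian of an affine warp on a template
    // patch (CV_32F). In the inverse-compositional scheme this matrix, and its
    // inverse, are computed once per template instead of once per iteration.
    cv::Matx66d precomputeAffineHessian(const cv::Mat& templ) {
        cv::Mat gx, gy;
        cv::Sobel(templ, gx, CV_32F, 1, 0);   // template gradient in x
        cv::Sobel(templ, gy, CV_32F, 0, 1);   // template gradient in y

        cv::Matx66d H = cv::Matx66d::zeros();
        for (int y = 0; y < templ.rows; ++y) {
            for (int x = 0; x < templ.cols; ++x) {
                const double Ix = gx.at<float>(y, x);
                const double Iy = gy.at<float>(y, x);
                // Steepest-descent entries for the affine parameters
                // ordered as (a11, a12, a21, a22, tx, ty).
                const double J[6] = { Ix * x, Ix * y, Iy * x, Iy * y, Ix, Iy };
                for (int i = 0; i < 6; ++i)
                    for (int j = 0; j < 6; ++j)
                        H(i, j) += J[i] * J[j];
            }
        }
        return H;   // invert once; each iteration then reduces to a 6-vector solve
    }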

Both CPU and GPU implementations, written in C++ and CUDA, are available for
Win32 platforms. They are currently bundled in a single main program, which
can be downloaded from the project webpage.

past head

  • Junsik Kim