My interest in structure from motion was primarily motivated by the possibility of creating a point cloud that can be used as a reference for tracking. The video below is more a proof-of-concept than a prototype, but it gives an overview of my outdoor tracking algorithm for Augmented Reality:
In a pre-processing step I built a sparse point cloud of the scene using my SFMToolkit. Each vertex of the point cloud has several corresponding 2D SIFT features. I kept only one SIFT descriptor per vertex (the mean of its descriptors) and put all descriptors in an index using Flann.
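The per-vertex descriptor reduction can be sketched as follows. This is a minimal illustration, not the SFMToolkit code: the descriptors for one vertex are averaged and re-normalized, and the resulting matrix is what would be handed to Flann (a brute-force index stands in for it here).

```python
import numpy as np

def mean_descriptor(descriptors):
    """Collapse the SIFT descriptors observed for one vertex into a single
    128-D descriptor by averaging, then re-normalize to unit length
    (SIFT descriptors are conventionally L2-normalized)."""
    d = np.asarray(descriptors, dtype=np.float64).mean(axis=0)
    n = np.linalg.norm(d)
    return d / n if n > 0 else d

def build_index(per_vertex_descriptors):
    """One averaged descriptor per point-cloud vertex; row i of the returned
    matrix corresponds to vertex i. A real implementation would load this
    matrix into a Flann index."""
    return np.vstack([mean_descriptor(ds) for ds in per_vertex_descriptors])
```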
For each frame of the video to be augmented, I extract SIFT features with SiftGPU and match them against the index using a Flann 2-nearest-neighbor search with a distance-ratio threshold. The Flann matching is parallelized with boost::threadpool. The resulting matches contain a lot of outliers, so I implemented a Ransac pose estimator based on EPnP that filters out bad 2D/3D correspondences.
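The 2-nearest-neighbor matching with a distance-ratio threshold (Lowe's ratio test) can be sketched like this. The 0.6 ratio is an illustrative default, not necessarily the one used in the pipeline, and brute-force search stands in for Flann:

```python
import numpy as np

def ratio_test_match(query, index, ratio=0.6):
    """For each query descriptor, find its 2 nearest neighbors in the index
    (brute force here, Flann in the real pipeline) and keep the match only
    if the best distance is clearly smaller than the second best."""
    matches = []
    for qi, q in enumerate(query):
        d = np.linalg.norm(index - q, axis=1)  # distances to all index rows
        i1, i2 = np.argsort(d)[:2]             # two nearest neighbors
        if d[i1] < ratio * d[i2]:
            matches.append((qi, int(i1)))      # (frame feature, cloud vertex)
    return matches
```

A match that is almost as close to its second neighbor as to its first is ambiguous and gets discarded, which removes many outliers before the pose estimation stage.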
My implementation is slow (mainly because of my Ransac EPnP implementation, which could be improved).
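The Ransac loop around EPnP follows the standard scheme: fit a pose from a minimal sample (EPnP needs 4 correspondences), score it by reprojection error, keep the pose with the most inliers. Below is a generic sketch of that loop; `fit` and `reproj_error` are caller-supplied stand-ins for EPnP and the pinhole reprojection, and the threshold and iteration count are illustrative:

```python
import random
import numpy as np

def ransac_pose(points2d, points3d, fit, reproj_error,
                thresh=4.0, iters=500, seed=0):
    """Generic RANSAC loop as used for pose estimation.
    fit(sample3d, sample2d) -> pose hypothesis (or None on degeneracy);
    reproj_error(pose, points3d, points2d) -> per-correspondence error."""
    rng = random.Random(seed)
    best_pose, best_inliers = None, []
    n = len(points2d)
    for _ in range(iters):
        sample = rng.sample(range(n), 4)  # EPnP minimal sample size
        pose = fit([points3d[i] for i in sample],
                   [points2d[i] for i in sample])
        if pose is None:
            continue
        errs = reproj_error(pose, points3d, points2d)
        inliers = [i for i, e in enumerate(errs) if e < thresh]
        if len(inliers) > len(best_inliers):
            best_pose, best_inliers = pose, inliers
    return best_pose, best_inliers
```

One obvious speed-up for this kind of loop is adaptive termination: stop early once the observed inlier ratio makes further samples statistically pointless.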
**SIFT first octave: -1**

| Step | Time | Result |
|------|------|--------|
| SIFT extraction | 49 ms | 2917 features |
| SIFT matching | 57 ms | parallel matching using Flann |
| Ransac EPnP | 110 ms | 121 inliers of 208 matches |

Global: 4.6 fps (9.4 fps without pose estimation)
**SIFT first octave: 0**

| Step | Time | Result |
|------|------|--------|
| SIFT extraction | 32 ms | 707 features |
| SIFT matching | 15 ms | parallel matching using Flann |
| Ransac EPnP | 144 ms | 62 inliers of 93 matches |

Global: 5.2 fps (21.2 fps without pose estimation)
The slowness is not such a big issue because the system doesn't need to run at 30 fps. The goal of my prototype is to get an absolute pose from this tracking system about once per second, and a relative pose in between from the inertial sensors available on mobile devices (or from KLT tracking).
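The hybrid scheme can be illustrated with a toy 1-D example (my own simplification, not code from the prototype): relative motion is integrated every frame and drifts, and each absolute fix from the SIFT tracking snaps the pose back.

```python
def fuse_poses(relative_steps, absolute_fixes):
    """relative_steps: per-frame motion increments (inertial/KLT, drifty).
    absolute_fixes: {frame index: pose} from the slow but drift-free
    SIFT/EPnP tracker (~1 Hz). Returns the fused per-frame trajectory."""
    pose = 0.0
    trajectory = []
    for frame, step in enumerate(relative_steps):
        pose += step                      # dead-reckoning between fixes
        if frame in absolute_fixes:
            pose = absolute_fixes[frame]  # correct accumulated drift
        trajectory.append(pose)
    return trajectory
```

A real system would blend the two poses (e.g. with a Kalman filter) instead of hard-resetting, but the structure is the same.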
Remaining drawbacks:

- Performance (faster would be better)
- The point cloud reference is not always accurate (Bundler's fault)
In another post I'll introduce alternatives to Bundler that are faster and more accurate.