I put this experiment together in two days.
First of all, I must admit that this is more of a proof of concept than a prototype… but the goal was to illustrate a concept needed for my job, and I love this kind of challenge! Building something like this in two days was only possible thanks to great open-source libraries.
I’m using a panoramic image as the reference. For each frame of the video I extract SIFT features using SiftGPU and match them against those of the reference image. Then I compute the homography between the two images with a RANSAC homography estimator (OpenCV’s cvFindHomography).
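To make the homography step concrete, here is a minimal pure-NumPy sketch of what a RANSAC homography estimator does (the post itself uses OpenCV’s cvFindHomography; the function names and parameter values below are my own illustration, not OpenCV’s API):

```python
import numpy as np

def homography_dlt(src, dst):
    # Direct Linear Transform: solve A h = 0 for the 3x3 homography H
    # mapping src points to dst points, from >= 4 correspondences.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)  # null-space vector, arbitrary scale

def project(H, pts):
    # Apply H to 2D points in homogeneous coordinates.
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    proj = pts_h @ H.T
    return proj[:, :2] / proj[:, 2:3]

def ransac_homography(src, dst, n_iter=500, thresh=3.0, seed=None):
    # Repeatedly fit H to a random minimal sample of 4 matches and
    # keep the model with the most inliers (reprojection error < thresh).
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 4, replace=False)
        H = homography_dlt(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers of the best model for a more stable estimate.
    H = homography_dlt(src[best_inliers], dst[best_inliers])
    return H / H[2, 2], best_inliers
```

This mirrors the numbers in the table below: out of all tentative SIFT matches, only the geometrically consistent ones survive as RANSAC inliers.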
Performance is low, due to the complexity of SIFT detection and matching, and because I apply the homography with cvWarpPerspective on the CPU:
| Step | Time | Notes |
| --- | --- | --- |
| SIFT extraction | 28 ms | 1228 features |
| SIFT matching | 17 ms | using SiftGPU |
| RANSAC homography estimation | 2 ms | 89 inliers out of 208 matches |
| Homography application | 36 ms | done on the CPU with OpenCV |
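The 36 ms spent applying the homography makes sense once you see what the warp has to do: every output pixel is mapped back through the inverse homography and sampled from the source image. A rough NumPy sketch of that inverse warping (nearest-neighbour sampling; the post uses cvWarpPerspective, this is just my illustration of the idea):

```python
import numpy as np

def warp_perspective(img, H, out_shape):
    # Inverse-map every output pixel through H^-1 and sample the source
    # image (nearest neighbour). This per-pixel work on the CPU is what
    # a GPU pixel shader would parallelize.
    h_out, w_out = out_shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    src = Hinv @ pts
    sx = np.rint(src[0] / src[2]).astype(int)
    sy = np.rint(src[1] / src[2]).astype(int)
    valid = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out = np.zeros((h_out, w_out), dtype=img.dtype)
    out.ravel()[valid] = img[sy[valid], sx[valid]]
    return out
```

Doing this divide-and-sample for every pixel of every frame is exactly the kind of embarrassingly parallel work the GPU is built for.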
I’m working on another version using FAST (or AGAST) as the feature detector and BRIEF as the descriptor. This should lead to a significant speed-up and may eventually run on a mobile device… Using GPU vertex and pixel shaders instead of the CPU to apply the homography should also give a nice speed-up.
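The reason BRIEF should be so much faster to match than SIFT is that its descriptors are bit strings, so the distance between two of them is just XOR plus popcount instead of a floating-point L2 distance. A small brute-force matcher sketch (function name and the max-distance threshold are my own, not from any library):

```python
import numpy as np

def hamming_match(desc_a, desc_b, max_dist=64):
    # Brute-force match binary descriptors, given as rows of uint8 bytes
    # (e.g. 32 bytes = a 256-bit BRIEF descriptor). Hamming distance is
    # XOR followed by popcount, which is why binary descriptors match
    # so much faster than 128-float SIFT vectors.
    popcount = np.unpackbits(np.arange(256, dtype=np.uint8)[:, None], axis=1).sum(1)
    matches = []
    for i, d in enumerate(desc_a):
        dists = popcount[np.bitwise_xor(desc_b, d)].sum(axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches.append((i, j, int(dists[j])))
    return matches
```

On a real device you would additionally use hardware popcount instructions and a ratio or cross-check test, but the core cost per descriptor pair is just a handful of integer ops.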
I’m also aware that it is not strictly correct to apply a homography to a cylindrical panoramic image (especially if you don’t undistort the input video frames either).