Posts Tagged ‘gpusurf’

2010 visual experiments

January 7th, 2011

Happy new year everyone!

2010 was a year full of visual experiments for me; I hope that you like what you see on this blog. In this post I’m giving a little overview of all the visual experiments I created during the year. This is an opportunity to catch up on anything you’ve missed! I’d also like to thank the people who have been helping me:

Visual experiments created in 2010:

During this year I have added some features to Ogre3D:

  • ArToolKitPlus: augmented reality marker-based system
  • Cuda: for beginners only (though advanced users could grab some useful code)
  • OpenCL: for beginners only (though advanced users could grab some useful code)
  • HTML5 Canvas: implementation based on Skia for graphics and V8 for JavaScript scripting
  • Kinect: this is a very hacky solution, I’ll improve it later

I also taught myself GPGPU programming while coding a partial GPUSurf implementation based on Nico Cornelis’ paper. This implementation is not complete, and I’m planning to rewrite it with a GPGPU framework based on OpenGL and Cg only (not Ogre3D). With such a framework, writing a Sift/Surf detector should be easier and more efficient.

I have created some visual experiments related to Augmented Reality:

My outdoor 3D tracking algorithm for augmented reality needs an accurate point cloud: this is why I’m interested in structure from motion and why I’ve created two SfM toolkits:

Posts published in 2010:


Pose Estimation using SfM point cloud

July 12th, 2010

The idea of this pose estimator is based on PTAM (Parallel Tracking and Mapping). PTAM is capable of tracking in an unknown environment thanks to the mapping done in parallel. But if you want to augment reality, it’s generally because you already know what you are looking at, so tracking in an unknown environment is not always needed. My idea was simple: instead of doing the mapping in parallel, why not use SfM in a pre-processing step?

Input: point cloud + camera shot
Output: position and orientation of the camera

So my outdoor tracking algorithm will eventually work like this:

  • pre-processing step:
    • generate a point cloud of the outdoor scene you want to track using Bundler
    • create a binary file with one descriptor (Sift/Surf) per vertex of the point cloud
  • in real-time, for each frame N:
    • extract features using FAST
    • match features with frame N-1 using 2D patches
    • compute the “relative pose” between frames N and N-1
  • in almost real-time, for each “key frame”:
    • extract features and descriptors
    • match descriptors with those of the point cloud
    • generate 2D/3D correspondences from the matches
    • compute the “absolute pose” using a PnP solver (EPnP)

The tricky part is that the absolute pose computation can span several “relative pose” estimations. So once you’ve got the absolute pose, you have to compensate for the delay by accumulating the relative poses computed in the meantime, as in the sketch below…
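To make the delay compensation concrete, here is a minimal sketch, assuming Eigen for the pose algebra and a world-to-camera convention; the buffer and function are hypothetical, not code from the actual tracker:

```cpp
#include <deque>
#include <Eigen/Geometry>

// Hypothetical buffer: relative poses T(k -> k+1) computed since the
// last key frame was sent to the (slow) absolute pose estimator,
// ordered oldest first.
std::deque<Eigen::Isometry3d> relativeSinceKeyframe;

// Called when the PnP solver finally returns the absolute pose of the
// key frame: re-apply the buffered relative motions on top of it to
// catch up with the most recent frame.
Eigen::Isometry3d compensateDelay(const Eigen::Isometry3d& absoluteAtKeyframe)
{
    Eigen::Isometry3d pose = absoluteAtKeyframe;  // world -> camera at key frame
    for (const Eigen::Isometry3d& rel : relativeSinceKeyframe)
        pose = rel * pose;  // chain frame-to-frame motion, oldest to newest
    return pose;            // world -> camera at the current frame
}
```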

This is what I’ve got so far:

  • pre-processing step: binary file generated using SiftGPU (planning to move to my GPUSurf implementation) and Bundler (planning to move to Insight3D or implement it myself using sba)
  • relative pose: I don’t have an implementation of the relative pose estimator yet
  • absolute pose: it’s basically working but needs some improvements:
    • switch feature extraction/matching from Sift to Surf
    • remove unused descriptors to speed up the matching step (by scoring descriptors used as inliers against training data)
    • use another PnP solver (or add RANSAC to handle outliers and get more accurate results; see the sketch after this list)
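The post doesn’t say which library would provide that last improvement; as an illustration only, here is a minimal sketch using OpenCV’s solvePnPRansac (OpenCV 2.x signature), with the function name and thresholds being assumptions:

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

// points3d: point cloud vertices matched by descriptor
// points2d: corresponding features in the current key frame
bool estimateAbsolutePose(const std::vector<cv::Point3f>& points3d,
                          const std::vector<cv::Point2f>& points2d,
                          const cv::Mat& cameraMatrix,
                          cv::Mat& rvec, cv::Mat& tvec)
{
    std::vector<int> inliers;

    // RANSAC rejects wrong 2D/3D correspondences before the pose is
    // refined, which the plain EPnP solver cannot do on its own.
    cv::solvePnPRansac(points3d, points2d, cameraMatrix, cv::Mat(),
                       rvec, tvec,
                       false,    // no initial extrinsic guess
                       200,      // RANSAC iterations
                       4.0f,     // reprojection error threshold (pixels)
                       100,      // minimum inlier count (OpenCV 2.x API)
                       inliers,
                       CV_EPNP); // keep EPnP as the underlying solver

    // require a reasonable consensus before trusting the pose
    return inliers.size() >= 10;
}
```

The reprojection threshold is the main knob: too tight and valid matches are discarded, too loose and outliers slip into the final pose.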

GPU-Surf video demo

June 25th, 2010

In the previous post I announced GPU-Surf’s first release. Now I’m glad to show you a live video demo of GPU-Surf and another demo using Bundler (a structure from motion tool):

There are three demos in this video:

  1. GPU-Surf live demo.
  2. PlyReader displaying Notre-Dame dataset.
  3. PlyReader displaying my own dataset (Place de la Bourse, Bordeaux).

GPU-Surf

You’ll find more information in the dedicated demo section.
In this video GPU-Surf was running slowly because of Ogre::Canvas, but it should run much faster.

PlyReader displaying Notre-Dame dataset

I’m also interested in structure from motion algorithms; that’s why I have tested Bundler, which comes with a good dataset of Notre-Dame de Paris.

I have created a very simple PlyReader using Ogre3D. The first version used billboards to display the point cloud, but it was slow (30fps with 130k points). Now I’m using a custom vertex buffer and it runs at 800fps with 130k points.
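For reference, a minimal sketch of that idea in Ogre3D (not the actual PlyReader code; names are illustrative): Ogre::ManualObject fills a hardware vertex buffer under the hood, and a point list avoids the per-point billboard overhead.

```cpp
#include <Ogre.h>
#include <vector>

// Build a renderable point cloud from raw vertex data.
// positions: one 3D point per vertex; colours: one colour per vertex.
Ogre::ManualObject* createPointCloud(Ogre::SceneManager* sceneMgr,
                                     const std::vector<Ogre::Vector3>& positions,
                                     const std::vector<Ogre::ColourValue>& colours)
{
    Ogre::ManualObject* cloud = sceneMgr->createManualObject("PointCloud");

    // One OT_POINT_LIST section: a single vertex buffer, no billboards,
    // so the whole cloud is drawn in one batch.
    cloud->begin("BaseWhiteNoLighting", Ogre::RenderOperation::OT_POINT_LIST);
    for (size_t i = 0; i < positions.size(); ++i)
    {
        cloud->position(positions[i]);
        cloud->colour(colours[i]);
    }
    cloud->end();
    return cloud;
}

// Usage: sceneMgr->getRootSceneNode()->createChildSceneNode()
//                ->attachObject(createPointCloud(sceneMgr, pts, cols));
```

ManualObject is the simplest way to fill a hardware vertex buffer in Ogre; building an Ogre::HardwareVertexBufferSharedPtr by hand gives more control but needs more code.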

The reconstruction was done by the team who created Bundler, from 715 pictures of Notre-Dame de Paris (thanks to Flickr). In fact, in this demo they did the biggest part of the job; I just grabbed their output to check whether my PlyReader was capable of reading such a big file.

PlyReader displaying my own dataset

If you have already used Bundler, you know that structure from motion algorithms need a very slow pre-processing step to compute “matches” between the pictures of the dataset. Bundler is packaged to use Lowe’s Sift binary, but it’s very slow because it takes pgm pictures as input and writes its output to a text file. Then a matching step is executed using KeyMatchFull.exe, which is optimized using libANN but is still very slow.

I have replaced the feature extraction and matching steps with my own tool: BundlerMatcher. It uses SiftGPU, which gives a very nice speed-up. As my current implementation of GPU-Surf isn’t complete, I can’t use it instead of SiftGPU yet, but that is my intention.
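A minimal sketch of how SiftGPU is typically driven (based on the SiftGPU sample code; BundlerMatcher’s actual code may differ):

```cpp
#include "SiftGPU.h"
#include <vector>

int main()
{
    SiftGPU sift;
    char* argv[] = { (char*)"-fo", (char*)"-1", (char*)"-v", (char*)"0" };
    sift.ParseParam(4, argv);

    // SiftGPU needs an OpenGL context with full shader support.
    if (sift.CreateContextGL() != SiftGPU::SIFTGPU_FULL_SUPPORTED)
        return 1;

    if (!sift.RunSIFT("picture.jpg"))
        return 1;

    int num = sift.GetFeatureNum();
    std::vector<SiftGPU::SiftKeypoint> keys(num);
    std::vector<float> descriptors(128 * num);  // 128 floats per Sift descriptor
    sift.GetFeatureVector(&keys[0], &descriptors[0]);

    // keys/descriptors can now be written out in Lowe's .key format for
    // Bundler, or matched on the GPU with SiftMatchGPU.
    return 0;
}
```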

23 pictures taken with a classic camera
(Canon Powershot A700)
Point cloud generated using Bundler

I have created this dataset with my camera and matched the pictures using my own tool: BundlerMatcher. This tool creates the same .key files as Lowe’s Sift tool, and a matches.txt file that is used by Bundler. I have tried to get rid of the temporary .key files and keep everything in memory, but changing the Bundler code to handle this structure was harder than I expected… I’m now more interested in the insight3d implementation (presentation, source), which seems easier to hack on.
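For context, Lowe’s .key format is plain text: a header line with the feature count and descriptor length, then per feature one “row column scale orientation” line followed by 128 integer descriptor values. A minimal writer might look like this (a sketch following Lowe’s published format, not BundlerMatcher’s actual code; the struct is hypothetical):

```cpp
#include <cstdio>
#include <vector>

// One feature in Lowe's .key format: position (row, column), scale,
// orientation, and a 128-value descriptor already scaled to [0,255].
struct SiftFeature
{
    float row, col, scale, orientation;
    unsigned char desc[128];
};

// Write a .key file that Bundler can consume.
void writeKeyFile(const char* path, const std::vector<SiftFeature>& feats)
{
    FILE* f = std::fopen(path, "w");
    if (!f) return;

    // header: feature count and descriptor length
    std::fprintf(f, "%d 128\n", (int)feats.size());
    for (size_t k = 0; k < feats.size(); ++k)
    {
        const SiftFeature& ft = feats[k];
        std::fprintf(f, "%.2f %.2f %.3f %.3f\n",
                     ft.row, ft.col, ft.scale, ft.orientation);
        for (int i = 0; i < 128; ++i)
        {
            std::fprintf(f, " %d", (int)ft.desc[i]);
            if ((i + 1) % 20 == 0)
                std::fprintf(f, "\n");  // Lowe wraps 20 values per line
        }
        std::fprintf(f, "\n");
    }
    std::fclose(f);
}
```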


GPUSurf and Ogre::GPGPU

June 23rd, 2010

In this post I’d like to introduce my GPU-Surf implementation and a new library for Ogre3D: Ogre::GPGPU.

What is GPUSurf?

It is a GPU-accelerated version of the Surf algorithm, based on a paper by Nico Cornelis[1].
This version uses GPGPU techniques (pixel shaders) and Cuda for computing. The Cuda part was done using my Ogre::Cuda library, and the GPGPU part using a new library called Ogre::GPGPU. This new library is just a little helper that hides the fact that GPGPU computing is done using quad rendering, as illustrated below.
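To show what that helper is hiding, here is a plain OpenGL sketch of the quad-rendering trick (this is not the Ogre::GPGPU API; names and parameters are illustrative, and a GL context with GLEW initialized is assumed):

```cpp
#include <GL/glew.h>

// To run a computation over a data texture, bind a framebuffer-attached
// output texture and draw one full-screen quad: the fragment shader (the
// "kernel") then executes once per output texel.
void gpgpuPass(GLuint program, GLuint inputTex, GLuint outputTex,
               GLuint fbo, int width, int height)
{
    // Render into the output texture instead of the screen.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, outputTex, 0);
    glViewport(0, 0, width, height);  // one fragment per output texel

    glUseProgram(program);            // fragment shader = compute kernel
    glBindTexture(GL_TEXTURE_2D, inputTex);

    // Drawing this quad launches width*height fragment shader invocations.
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();

    glBindFramebuffer(GL_FRAMEBUFFER, 0);  // back to normal rendering
}
```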

Screenshot of my GPU-Surf implementation (3 octaves displayed)

GPU-Surf could be used to help panoramic image creation, build tracking algorithms, speed up structure from motion… I’m currently using SiftGPU to speed up the image matching step of structure from motion tools (Bundler), but SiftGPU v360 has a memory issue (it eats a lot of virtual memory; your program has to be 64-bit to bypass the usual 32-bit application limitation of 2GB of virtual memory under Windows), and Sift matching is more expensive than Surf matching (descriptors of 128 floats vs 64 for Surf). That’s why I have decided to create my own implementation of GPU-Surf.

Structure from motion using PhotoSynth

Implementation details:

The current version of my GPU-Surf implementation is incomplete (the descriptor is missing; only the detector is available). You’ll find all the information about the license (MIT), svn repository, demo (GPUSurfDemo1.zip) and documentation on the GPUSurf page.

[1]: N. Cornelis, L. Van Gool: Fast Scale Invariant Feature Detection and Matching on Programmable Graphics Hardware (ncorneli_cvpr2008.pdf).
