Posts Tagged ‘ogre3d’

Dense point cloud created with PhotoSynth and PMVS2

August 22nd, 2010

In my previous post I introduced my PhotoSynth ToolKit. The source code is available on my Google Code project under the MIT license; you can download it right now: PhotoSynthToolKit2.zip. I have created a video to show you what I’ve managed to do with it:

As you can see in this video I have managed to use PMVS2 with PhotoSynth output.
All the synths used in this video are available on my PhotoSynth account or directly:

Workflow

My PhotoSynth ToolKit is composed of 3 programs:

  • PhotoSynthDownloader: downloads the 0.json file, the .bin files and the thumbnails
  • PhotoSynth2PMVS: undistorts the pictures and writes the CONTOUR files needed by PMVS2
  • PhotoSynthTileDownloader [optional]: downloads all pictures of a synth in HD (not released yet for legal reasons, but you can watch a preview video)

Limitations

It seems that my first version doesn’t handle the JSON parsing of all kinds of synths very well; I’ll try to post a new version as soon as possible. Fixed in PhotoSynthToolKit2.zip.

PMVS2 for Windows is a 32-bit application, so it has a 2GB memory limit (3GB if you start Windows with the /3GB option and compile the app with a custom flag?). I haven’t tried the 64-bit Linux version yet, but I have managed to compile a 64-bit version of PMVS2 myself. My 64-bit build manages to use more than 4GB of memory for picture loading, but it crashes right after all the pictures are loaded. I didn’t investigate much; it may well be my fault, as compiling the dependencies (gsl, pthread, jpeg) wasn’t an easy task.

Anyway, PMVS2 should be used with CMVS, but I’m not sure that I can extract enough information from PhotoSynth. Indeed, Bundler’s output is more verbose: you get the 2D/3D correspondences plus the number of matches per image. I think that I can create a vis.dat file using some information stored in the JSON file, but it would only speed up the process, so it doesn’t help that much with the 2GB limit. A rough sketch of what writing that file could look like is shown below.
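
Here is a minimal sketch of writing such a vis.dat file in C++, assuming the co-visibility lists have already been extracted from the JSON. The layout below is my reading of the CMVS documentation (a VISDATA header, the number of images, then one line per image with its index, the number of images it shares points with, and their indices), so treat it as an illustration rather than a verified implementation:

#include <fstream>
#include <vector>
#include <string>

// visibility[i] = indices of the images that share points with image i (assumed input)
void writeVisData(const std::string& filepath,
                  const std::vector<std::vector<int> >& visibility)
{
    std::ofstream output(filepath.c_str());
    output << "VISDATA" << std::endl;
    output << visibility.size() << std::endl;
    for (size_t i = 0; i < visibility.size(); ++i)
    {
        output << i << " " << visibility[i].size();
        for (size_t j = 0; j < visibility[i].size(); ++j)
            output << " " << visibility[i][j];
        output << std::endl;
    }
}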

Credits

My PhotoSynth ToolKit is coded in C++ and the source code is available on my Google Code project (MIT license). It uses:

  • Boost.Asio: network requests for SOAP + file download
  • TinyXml: parsing of the SOAP requests
  • JSON Spirit: parsing of the PhotoSynth “0.json” file
  • jpeg: reading/writing JPEG files for the radial undistortion

Furthermore, parts of the code are based on:

Please go to the PhotoSynthToolkit page to get the latest version


Remote Augmented Reality Prototype

July 11th, 2010

I have created a new augmented reality prototype (a 5-day experiment). It uses a client/server approach based on Boost.Asio. The basic assumption of this prototype is that you’ve got a not-so-powerful mobile client and a powerful server with a decent GPU.

So the idea is simple: the client uploads a video frame, the server does the pose estimation and sends the augmented rendering back to the client. My first prototype uses ArToolKitPlus in almost real time (15fps), but I’m also working on a markerless version that would be less interactive (< 1fps). The mobile client was a UMPC (Samsung Q1).
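
To give an idea of the frame exchange, here is a minimal sketch of the server-side loop with Boost.Asio, assuming a simple length-prefixed protocol over TCP. This is only an illustration, not the actual protocol of my prototype; renderAugmentedFrame is a hypothetical placeholder for the pose estimation + off-screen rendering step, and endianness/error handling are ignored:

#include <boost/asio.hpp>
#include <vector>
#include <cstdint>

using boost::asio::ip::tcp;

// Hypothetical placeholder for the real work: pose estimation + off-screen rendering.
std::vector<unsigned char> renderAugmentedFrame(const std::vector<unsigned char>& frame)
{
    return frame; // this sketch just echoes the frame back
}

void serveClient(tcp::socket& socket)
{
    for (;;) // a disconnect will throw boost::system::system_error and end the loop
    {
        // Read the size of the incoming video frame, then the frame itself.
        std::uint32_t frameSize = 0;
        boost::asio::read(socket, boost::asio::buffer(&frameSize, sizeof(frameSize)));

        std::vector<unsigned char> frame(frameSize);
        boost::asio::read(socket, boost::asio::buffer(frame));

        // Compute the pose, render the augmented view, then send it back (raw RGB in my prototype).
        std::vector<unsigned char> rendering = renderAugmentedFrame(frame);
        std::uint32_t renderingSize = static_cast<std::uint32_t>(rendering.size());
        boost::asio::write(socket, boost::asio::buffer(&renderingSize, sizeof(renderingSize)));
        boost::asio::write(socket, boost::asio::buffer(rendering));
    }
}

int main()
{
    boost::asio::io_service service;
    tcp::acceptor acceptor(service, tcp::endpoint(tcp::v4(), 4242)); // port is arbitrary
    tcp::socket socket(service);
    acceptor.accept(socket);
    serveClient(socket);
    return 0;
}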

Thanks to Boost.Asio I’ve been able to produce a solid client/server very quickly. Then I created two implementations of PoseEstimator:

class PoseEstimator
{
    public:
        virtual ~PoseEstimator() {}
        virtual bool computePose(const Ogre::PixelBox& videoFrame) = 0;
        virtual Ogre::Vector3 getPosition() const = 0;
        virtual Ogre::Quaternion getOrientation() const = 0;
};
  • ArToolKitPoseEstimator (using ArToolKitPlus to get pose estimation)
  • SfMPoseEstimator (using EPnP and a point cloud generated with Bundler, a Structure from Motion tool, to get the pose estimation)

ArToolKitPoseEstimator

There is nothing fancy about this pose estimator; I’ve just implemented it as a proof of concept and to check my server’s performance. In fact, ArToolKit pose estimation is not expensive and can run in real time on a mobile device.

SfMPoseEstimator

I’ll just introduce the concept of this pose estimator in this post. The idea is simple: in augmented reality you generally know the object you are looking at, because it is the one you want to augment. The idea was to create a point cloud of the object you want to augment (using Structure from Motion) and keep the link between the 3D points and their 2D descriptors. Thus, when you take a shot of the scene, you can compare the 2D descriptors of your shot with those of the point cloud and thereby create 2D/3D correspondences. Then the pose can be estimated by solving the Perspective-n-Point camera calibration problem (using EPnP, for example).
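
As an illustration of that pipeline, here is a minimal sketch using OpenCV as a stand-in for both the descriptor matching and the EPnP solver. My prototype uses its own matching and a separate EPnP implementation, so the names and APIs below are assumptions made for the example only:

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

// cloudDescriptors: one descriptor row per 3D point of the reconstructed cloud.
// cloudPoints:      the matching 3D positions (same order as the descriptor rows).
bool estimatePose(const cv::Mat& frameDescriptors,
                  const std::vector<cv::KeyPoint>& frameKeypoints,
                  const cv::Mat& cloudDescriptors,
                  const std::vector<cv::Point3f>& cloudPoints,
                  const cv::Mat& cameraMatrix,
                  cv::Mat& rvec, cv::Mat& tvec)
{
    // 1. Match the 2D descriptors of the current frame against the point cloud descriptors.
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<cv::DMatch> matches;
    matcher.match(frameDescriptors, cloudDescriptors, matches);

    // 2. Build the 2D/3D correspondences from the matches.
    std::vector<cv::Point2f> imagePoints;
    std::vector<cv::Point3f> objectPoints;
    for (size_t i = 0; i < matches.size(); ++i)
    {
        imagePoints.push_back(frameKeypoints[matches[i].queryIdx].pt);
        objectPoints.push_back(cloudPoints[matches[i].trainIdx]);
    }
    if (objectPoints.size() < 4)
        return false; // not enough correspondences for PnP

    // 3. Solve the Perspective-n-Point problem with the EPnP variant
    //    (CV_EPNP in OpenCV 2.x, cv::SOLVEPNP_EPNP in OpenCV 3+).
    return cv::solvePnP(objectPoints, imagePoints, cameraMatrix, cv::Mat(),
                        rvec, tvec, false, CV_EPNP);
}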

Performance

The server is very basic; it doesn’t handle client queuing yet (1 client = 1 thread), but it already does the off-screen rendering and sends back the texture in raw RGB.

The version using ArToolKit is only running at 15fps because I had trouble with the JPEG compression, so I turned it off. This version is therefore only bandwidth limited. I didn’t investigate this issue much because I know that the SfMPoseEstimator is going to be limited by the matching step. Furthermore, I’m not sure that it’s a good idea to send highly compressed images to the server (compression artifacts can add extra features).

My SfMPoseEstimator is also working, but it’s very expensive (~1s using the GPU) and it’s not always accurate due to some flaws in my original implementation. I’ll explain how it works in a following post.


Structure From Motion Experiment

July 8th, 2010

I have taken a new set of pictures of the “Porte Cailhau” in Bordeaux and used one of my tools (BundlerMatcher) to compute the image matching using SiftGPU. BundlerMatcher generates a file compatible with Bundler’s match file format (sketched below), so using BundlerMatcher you can skip the long pre-processing step of feature extraction and image matching and enjoy GPU acceleration!
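
For reference, here is a small sketch of the match file layout that Bundler expects, as far as I can tell from the output of KeyMatchFull: for each image pair, the two image indices, the number of matches, then one “keyIndexA keyIndexB” pair per line. The structures below are illustrative, not the actual BundlerMatcher code:

#include <fstream>
#include <vector>
#include <string>
#include <utility>

struct PairMatches
{
    int imageA, imageB;                      // indices of the two matched images
    std::vector<std::pair<int, int> > keys;  // matched feature indices (A, B)
};

void writeMatchFile(const std::string& filepath, const std::vector<PairMatches>& pairs)
{
    std::ofstream output(filepath.c_str());
    for (size_t i = 0; i < pairs.size(); ++i)
    {
        output << pairs[i].imageA << " " << pairs[i].imageB << std::endl;
        output << pairs[i].keys.size() << std::endl;
        for (size_t j = 0; j < pairs[i].keys.size(); ++j)
            output << pairs[i].keys[j].first << " " << pairs[i].keys[j].second << std::endl;
    }
}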

I have used the “bundle.out” file produced by Bundler to get the camera information (a small parsing sketch follows the list):

  • intrinsic parameters: focal length, distortion
  • extrinsic parameters: position, orientation
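
Here is a minimal sketch of reading that per-camera block, assuming the Bundler v0.3 file layout (a header line, the camera/point counts, then for each camera a focal length with two radial distortion coefficients, a 3x3 rotation and a translation); the point list that follows in the file is skipped here:

#include <fstream>
#include <string>
#include <vector>

struct BundlerCamera
{
    double focal, k1, k2;   // intrinsics: focal length and radial distortion
    double R[3][3];         // extrinsics: rotation
    double t[3];            // extrinsics: translation
};

std::vector<BundlerCamera> readBundleCameras(const std::string& filepath)
{
    std::ifstream input(filepath.c_str());
    std::string header;
    std::getline(input, header); // "# Bundle file v0.3"

    int numCameras = 0, numPoints = 0;
    input >> numCameras >> numPoints;

    std::vector<BundlerCamera> cameras(numCameras);
    for (int i = 0; i < numCameras; ++i)
    {
        BundlerCamera& cam = cameras[i];
        input >> cam.focal >> cam.k1 >> cam.k2;
        for (int r = 0; r < 3; ++r)
            for (int c = 0; c < 3; ++c)
                input >> cam.R[r][c];
        input >> cam.t[0] >> cam.t[1] >> cam.t[2];
    }
    // The point list (position, color, view list) follows but is not needed here.
    return cameras;
}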

With this information you can see the point cloud through the viewpoint of one of the cameras registered by Bundler. I’ve added this feature to my current Ogre3D PlyReader. I have also added a background plane to be able to see the picture taken from this viewpoint. This demo is not available for download right now, but you can still watch the video:

The Ogre3D PlyReader and BundlerMatcher will eventually be added to my SVN. I’m currently busy working on another demo, so stay tuned!


GPU-Surf video demo

June 25th, 2010

In the previous post I announced GPU-Surf’s first release. Now I’m glad to show you a live video demo of GPU-Surf and another demo using Bundler (a structure from motion tool):

There are three demos in this video:

  1. GPU-Surf live demo.
  2. PlyReader displaying Notre-Dame dataset.
  3. PlyReader displaying my own dataset (Place de la Bourse, Bordeaux).

GPU-Surf

You’ll find more information in the dedicated demo section.
In this video GPU-Surf was running slowly because of Ogre::Canvas, but it should run much faster.

PlyReader displaying Notre-Dame dataset

I’m also interested in structure from motion algorithms; that’s why I have tested Bundler, which comes with a good dataset of Notre-Dame de Paris.

I have created a very simple PlyReader using Ogre3D. The first version was using billboards to display the point cloud, but it was slow (30fps with 130k points). Now I’m using a custom vertex buffer and it runs at 800fps with 130k points. A simplified sketch of this kind of point-cloud rendering is shown below.
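
The sketch below uses Ogre::ManualObject with a point-list render operation, which is shorter than filling a hardware vertex buffer by hand (what my PlyReader actually does) but equivalent in spirit. The PlyPoint struct and material name are assumptions for the example:

#include <OgreSceneManager.h>
#include <OgreManualObject.h>
#include <OgreSceneNode.h>
#include <vector>

struct PlyPoint { float x, y, z, r, g, b; }; // one vertex of the .ply file, colour in [0,1]

void createPointCloud(Ogre::SceneManager* sceneMgr, const std::vector<PlyPoint>& points)
{
    Ogre::ManualObject* cloud = sceneMgr->createManualObject("pointCloud");
    cloud->begin("BaseWhiteNoLighting", Ogre::RenderOperation::OT_POINT_LIST);
    for (size_t i = 0; i < points.size(); ++i)
    {
        cloud->position(points[i].x, points[i].y, points[i].z);
        cloud->colour(points[i].r, points[i].g, points[i].b);
    }
    cloud->end();
    sceneMgr->getRootSceneNode()->createChildSceneNode()->attachObject(cloud);
}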

The reconstruction was done by the team who created Bundler, from 715 pictures of Notre-Dame de Paris (thanks to Flickr). In fact, in this demo they did the biggest part of the job; I just grabbed their output to check whether my PlyReader was capable of reading such a big file.

PlyReader displaying my own dataset

If you have already used Bundler, you know that structure from motion algorithms need a very slow pre-processing step to get “matches” between the pictures of the dataset. Bundler is packaged to use Lowe’s SIFT binary, but it’s very slow because it takes PGM files as picture input and the output is written to a text file. Then a matching step is executed using KeyMatchFull.exe, which is optimized using libANN but is still very slow.

I have replaced the feature extraction and matching steps with my own tool: BundlerMatcher. It uses SiftGPU, which gives a very nice speed-up. As my current implementation of GPU-Surf isn’t complete, I can’t use it instead of SiftGPU yet, but that is my intention.

23 pictures taken with a classic camera (Canon PowerShot A700)
Point cloud generated using Bundler

I have created this dataset with my camera and matched the pictures using my own tool: BundlerMatcher. This tool creates the same .key files as Lowe’s SIFT tool and a matches.txt file that is used by Bundler. I have tried to get rid of these temporary .key files and keep everything in memory, but changing the Bundler code to handle this structure was harder than I predicted… I’m now more interested in the insight3d implementation (presentation, source), which seems to be easier to hack on.


GPUSurf and Ogre::GPGPU

June 23rd, 2010

In this post I’d like to introduce my GPU-Surf implementation and a new library for Ogre3D: Ogre::GPGPU.

What is GPUSurf?

It is a GPU-accelerated version of the SURF algorithm based on a paper by Nico Cornelis [1].
This version uses GPGPU techniques (pixel shaders) and CUDA for computing. The CUDA part was done using my Ogre::Cuda library and the GPGPU part was done using a new library called Ogre::GPGPU. This new library is just a little helper that hides the fact that GPGPU computing is done using quad rendering.

Screenshot of my GPU-Surf implementation (3 octaves displayed)

GPU-Surf could be used to help panoramic image creation, build tracking algorithms, speed up structure from motion… I’m currently using SiftGPU to speed up the image matching step of structure from motion tools (Bundler), but SiftGPU v360 has a memory issue (it eats a lot of virtual memory; your program has to be 64-bit to bypass the common 32-bit application limitation of 2GB of virtual memory under Windows), and SIFT matching is more expensive than SURF matching (a descriptor of 128 floats vs 64 for SURF). That’s why I have decided to create my own implementation of GPU-Surf.

Structure from motion using PhotoSynth

Implementation details:

The current version of my GPU-Surf implementation is incomplete (the descriptor is missing; only the detector is available). You’ll find all the information about the license (MIT), SVN repository, demo (GPUSurfDemo1.zip) and documentation on the GPUSurf page.

[1]: N. Cornelis, L. Van Gool: Fast Scale Invariant Feature Detection and Matching on Programmable Graphics Hardware (ncorneli_cvpr2008.pdf).
