Archive for the ‘ogre3d’ category

Ogre3D PhotoSynth Viewer

January 26th, 2011

This viewer is now integrated with the new version of my PhotoSynthToolkit (v5). This toolkit allows you to download synth point clouds and thumbnail pictures. You can also densify the sparse point cloud generated by PhotoSynth using PMVS2 and then create an accurate mesh using MeshLab.

New features of PhotoSynthToolkit v5:

  • Thumbnail downloading should be faster (8x)
  • New C++ HD picture downloader (downloads tiles and re-composes them)
  • Tools to generate “vis.dat” from a previous PMVS2 call by analysing the .patch file (see the sketch after this list)
  • Working Ogre3D PhotoSynth viewer:
    • Can read the dense point cloud created with my PhotoSynthToolkit using PMVS2
    • Click on a picture to change the camera viewpoint
    • No-roll camera system
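For the “vis.dat” item above, here is a minimal sketch of how such a tool could work. It assumes the standard PMVS2 .patch file layout (position, normal, score line, then the lists of image indices in which the patch is seen) and the VISDATA covisibility format; the actual tool shipped with the toolkit may differ.

```cpp
#include <fstream>
#include <iostream>
#include <set>
#include <string>
#include <vector>

// Sketch: rebuild vis.dat (image covisibility) from a PMVS2 .patch file.
// Assumes the documented PMVS2 patch format; the real toolkit tool may differ.
int main(int argc, char* argv[])
{
    if (argc < 3) { std::cerr << "usage: patch2vis <in.patch> <out vis.dat>\n"; return 1; }

    std::ifstream in(argv[1]);
    std::string header; int numPatches = 0;
    in >> header >> numPatches;                        // "PATCHES" <count>

    std::vector<std::set<int> > covis;                 // covis[i] = images sharing a patch with image i
    for (int p = 0; p < numPatches; ++p)
    {
        std::string tag; double dummy;
        in >> tag;                                     // "PATCHS"
        for (int i = 0; i < 4; ++i) in >> dummy;       // x y z 1
        for (int i = 0; i < 4; ++i) in >> dummy;       // nx ny nz 0
        for (int i = 0; i < 3; ++i) in >> dummy;       // score + 2 debug values

        int numVisible; in >> numVisible;              // images where the patch is visible
        std::vector<int> images(numVisible);
        for (int i = 0; i < numVisible; ++i) in >> images[i];

        int numWeak; in >> numWeak;                    // images where textures do not agree
        for (int i = 0; i < numWeak; ++i) { int idx; in >> idx; }

        // every pair of images seeing the same patch is considered covisible
        for (size_t i = 0; i < images.size(); ++i)
        {
            if (images[i] >= (int)covis.size()) covis.resize(images[i] + 1);
            for (size_t j = 0; j < images.size(); ++j)
                if (i != j) covis[images[i]].insert(images[j]);
        }
    }

    // note: images that appear in no patch get no entry here
    std::ofstream out(argv[2]);
    out << "VISDATA\n" << covis.size() << "\n";
    for (size_t i = 0; i < covis.size(); ++i)
    {
        out << i << " " << covis[i].size();
        for (std::set<int>::const_iterator it = covis[i].begin(); it != covis[i].end(); ++it)
            out << " " << *it;
        out << "\n";
    }
    return 0;
}
```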

Warning: the PhotoSynth viewer may need a very powerful GPU (depending on the synth complexity: point cloud size and number of thumbnails). I’ve currently tested a scene with 820 pictures and 900k vertices on an Nvidia 8800 GTX with 768MB and it was running at 25fps (75fps on a 470 GTX with 1280MB). I wish I could have used Microsoft Seadragon :-).

Download:

The PhotoSynthToolkit v5 is available on its dedicated page. Please do not link directly to the zip file but to that page instead, so that people willing to download the toolkit always get the latest version.

Video demo:

Future version

Josh Harle has created CameraExport: a solution for 3DS Max that enables rendering the pictures of the Synth using camera projection. I haven’t tested it yet, but I’ll try to generate a file compatible with his 3DS Max script directly from my toolkit, thus avoiding having to download the Synth again using a modified version of SynthExport. Josh has also created a very interesting tutorial on how to use masks with PMVS2:

Masks with the PhotoSynth Toolkit 4 – tutorial from Josh Harle on Vimeo.


2010 visual experiments

January 7th, 2011

Happy new year everyone!

2010 was a year full of visual experiments for me, and I hope you like what you see on this blog. In this post I’m giving a little overview of all the visual experiments I created during the year. This is an opportunity to catch up on anything you’ve missed! I’d also like to thank the people who have been helping me:

Visual experiments created in 2010:

During this year I have added some features to Ogre3D:

  • ArToolKitPlus: marker-based augmented reality system
  • Cuda: for beginners only (although advanced users could grab some useful code)
  • OpenCL: for beginners only (although advanced users could grab some useful code)
  • Html5 Canvas: implementation based on Skia for graphics and V8 for JavaScript scripting
  • Kinect: this is a very hacky solution, I’ll improve it later

I also taught myself GPGPU programming while coding a partial GPUSurf implementation based on Nico Cornelis’s paper. This implementation is not complete and I’m planning to rewrite it with a GPGPU framework based on OpenGL and Cg only (not Ogre3D). With such a framework, writing a Sift/Surf detector should be easier and more efficient.

I have created some visual experiments related to Augmented Reality:

My outdoor 3D tracking algorithm for augmented reality needs an accurate point cloud: this is why I’m interested in structure from motion and why I’ve created two SfM toolkits:

Posts published in 2010:


Outdoor tracking using panoramic image

December 22nd, 2010

I have made this experiment in 2 days:

First of all, I must admit that this is more a “proof-of-concept” than a prototype… But the goal was to illustrate a concept needed for my job. I love this kind of challenge! Building something like this in 2 days was only possible thanks to great open-source libraries:

Analysis

I’m using a panoramic image as reference. For each frame of the video I extract Sift features using SiftGPU and match them with those of the reference image. Then I compute the homography between the 2 images using a Ransac homography estimator (OpenCV cvFindHomography).
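As an illustration, here is a minimal sketch of that per-frame pipeline using OpenCV’s C++ API on a recent build (SIFT in the main module). The post itself uses SiftGPU and the old C functions cvFindHomography/cvWarpPerspective, so the calls below are stand-ins, not the actual code:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Sketch: match one video frame against a reference panorama and warp it.
// OpenCV's SIFT replaces SiftGPU here; findHomography replaces cvFindHomography.
cv::Mat alignFrameToPanorama(const cv::Mat& panorama, const cv::Mat& frame)
{
    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();

    std::vector<cv::KeyPoint> kpPano, kpFrame;
    cv::Mat descPano, descFrame;
    sift->detectAndCompute(panorama, cv::noArray(), kpPano, descPano);
    sift->detectAndCompute(frame,    cv::noArray(), kpFrame, descFrame);

    // 2-nearest-neighbour matching with a distance ratio test
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<std::vector<cv::DMatch> > knn;
    matcher.knnMatch(descFrame, descPano, knn, 2);

    std::vector<cv::Point2f> ptsFrame, ptsPano;
    for (size_t i = 0; i < knn.size(); ++i)
    {
        if (knn[i].size() == 2 && knn[i][0].distance < 0.8f * knn[i][1].distance)
        {
            ptsFrame.push_back(kpFrame[knn[i][0].queryIdx].pt);
            ptsPano.push_back(kpPano[knn[i][0].trainIdx].pt);
        }
    }

    // Ransac homography estimation (frame -> panorama)
    cv::Mat H = cv::findHomography(ptsFrame, ptsPano, cv::RANSAC, 3.0);

    // Apply the homography on the CPU (the slow step measured below)
    cv::Mat warped;
    cv::warpPerspective(frame, warped, H, panorama.size());
    return warped;
}
```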

Performance

The performance is low due to the complexity of the Sift detection and matching, and to the fact that I’m applying the homography on the CPU using cvWarpPerspective.

  • Sift extraction: 28ms (1228 features)
  • Sift matching: 17ms (using SiftGPU)
  • Ransac homography estimation: 2ms (89 inliers out of 208 matches)
  • Homography application: 36ms (done on the CPU with OpenCV)
  • Global: 12fps

I’m working on another version using Fast (or Agast) as feature detector and Brief as descriptor. This should lead to a significant speed-up and may eventually run on a mobile device… Using the GPU vertex and pixel shaders instead of the CPU to apply the homography should also give a nice speed-up.
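As a rough idea of what that variant could look like, here is a small sketch using OpenCV’s FAST detector and the BRIEF extractor from the opencv_contrib xfeatures2d module (binary descriptors matched with Hamming distance). This is only an assumption about the future version, not its actual code:

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <vector>

// Sketch of the planned Fast + Brief variant (binary descriptors, Hamming matching).
void matchFastBrief(const cv::Mat& panorama, const cv::Mat& frame,
                    std::vector<cv::DMatch>& goodMatches)
{
    cv::Ptr<cv::FastFeatureDetector> fast = cv::FastFeatureDetector::create();
    cv::Ptr<cv::xfeatures2d::BriefDescriptorExtractor> brief =
        cv::xfeatures2d::BriefDescriptorExtractor::create();

    std::vector<cv::KeyPoint> kpPano, kpFrame;
    cv::Mat descPano, descFrame;
    fast->detect(panorama, kpPano);   brief->compute(panorama, kpPano, descPano);
    fast->detect(frame,    kpFrame);  brief->compute(frame,    kpFrame, descFrame);

    // Hamming-distance matching with cross-check, much cheaper than Sift L2 matching
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    matcher.match(descFrame, descPano, goodMatches);
}
```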

I’m also aware that it is not strictly correct to apply a homography to a cylindrical panoramic image (especially if you don’t undistort the input video frames either ;) )


Augmented Reality outdoor tracking becoming reality

December 13th, 2010

My interest in structure from motion was primarily motivated by the capability of creating a point cloud that can be used as a tracking reference. The video below is more a proof-of-concept than a prototype, but it gives an overview of my outdoor tracking algorithm for Augmented Reality:

Analysis

In a pre-processing step I’ve built a sparse point cloud of the place using my SFMToolkit. Each vertex of the point cloud has several 2D Sift feature correspondences. I’ve kept only one Sift descriptor per vertex (the mean of its descriptors) and put all descriptors in an index using Flann.
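A minimal sketch of that pre-processing step, using OpenCV’s cv::flann wrapper (the post uses the Flann library directly, so the exact calls and data layout below are assumptions):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Sketch: one mean Sift descriptor per 3D vertex, indexed with Flann kd-trees.
struct TrackingReference
{
    std::vector<cv::Point3f> vertices;   // one 3D point per descriptor row
    cv::Mat descriptors;                 // numVertices x 128, CV_32F
    cv::Ptr<cv::flann::Index> index;
};

void buildReference(const std::vector<cv::Point3f>& points,
                    const std::vector<std::vector<cv::Mat> >& siftPerPoint, // each observation: 1x128 CV_32F
                    TrackingReference& ref)
{
    ref.vertices = points;
    ref.descriptors.create((int)points.size(), 128, CV_32F);

    for (size_t i = 0; i < points.size(); ++i)
    {
        // mean of all Sift descriptors observed for this vertex
        cv::Mat mean = cv::Mat::zeros(1, 128, CV_32F);
        for (size_t j = 0; j < siftPerPoint[i].size(); ++j)
            mean += siftPerPoint[i][j];
        mean /= (double)siftPerPoint[i].size();
        mean.copyTo(ref.descriptors.row((int)i));
    }

    // Flann index over the mean descriptors (4 randomized kd-trees)
    ref.index = cv::makePtr<cv::flann::Index>(ref.descriptors,
                                              cv::flann::KDTreeIndexParams(4));
}
```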

For each frame of the video to be augmented, I extract Sift features with SiftGPU and then match them using a Flann 2-nearest-neighbour search and a distance ratio threshold. The Flann matching is done in parallel with boost::threadpool. The computed matches contain a lot of outliers, so I have implemented a Ransac pose estimator using EPnP that filters out bad 2D/3D correspondences.
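The per-frame step could look roughly like this. OpenCV’s solvePnPRansac with the EPnP flag stands in for the hand-written Ransac EPnP estimator mentioned above, and the camera intrinsics are assumed to be known:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Sketch: match the frame's Sift descriptors against the reference Flann index,
// then estimate the camera pose. cv::solvePnPRansac(EPNP) replaces the custom
// Ransac EPnP implementation described in the post.
bool estimatePose(cv::flann::Index& referenceIndex,                  // built over the mean descriptors
                  const std::vector<cv::Point3f>& referenceVertices, // 3D point per descriptor row
                  const std::vector<cv::KeyPoint>& frameKeypoints,
                  const cv::Mat& frameDescriptors,                   // numFeatures x 128, CV_32F
                  const cv::Mat& cameraMatrix,                       // assumed known intrinsics
                  cv::Mat& rvec, cv::Mat& tvec)
{
    // 2-nearest-neighbour search in the Flann index
    cv::Mat indices, dists;
    referenceIndex.knnSearch(frameDescriptors, indices, dists, 2, cv::flann::SearchParams(32));

    // distance ratio test (Flann returns squared L2 distances, hence 0.64 ~= 0.8^2)
    std::vector<cv::Point3f> objectPoints;
    std::vector<cv::Point2f> imagePoints;
    for (int i = 0; i < frameDescriptors.rows; ++i)
    {
        if (dists.at<float>(i, 0) < 0.64f * dists.at<float>(i, 1))
        {
            objectPoints.push_back(referenceVertices[indices.at<int>(i, 0)]);
            imagePoints.push_back(frameKeypoints[i].pt);
        }
    }
    if (objectPoints.size() < 6) return false;

    // Ransac pose estimation with EPnP, filtering out bad 2D/3D correspondences
    std::vector<int> inliers;
    return cv::solvePnPRansac(objectPoints, imagePoints, cameraMatrix, cv::Mat(),
                              rvec, tvec, false, 500, 4.0, 0.99, inliers,
                              cv::SOLVEPNP_EPNP);
}
```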

Performance

My implementation is slow (mostly due to my Ransac EPnP estimator, which could be improved).

Sift first octave: -1
  • Sift extraction: 49ms (2917 features)
  • Sift matching: 57ms (parallel matching using Flann)
  • Ransac EPnP: 110ms (121 inliers out of 208 matches)
  • Global: 4.6fps (9.4fps without pose estimation)

Sift first octave: 0
  • Sift extraction: 32ms (707 features)
  • Sift matching: 15ms (parallel matching using Flann)
  • Ransac EPnP: 144ms (62 inliers out of 93 matches)
  • Global: 5.2fps (21.2fps without pose estimation)

The slowness is not such a big issue because it doesn’t need to run at 30fps. Indeed, the goal of my prototype is to get an absolute pose from this tracking system every second and a relative pose from the inertial sensors available on mobile devices (or from KLT tracking).

Issues

  • Performance (faster is better ;-) )
  • The point cloud reference is not always accurate (Bundler’s fault)

In another post I’ll introduce alternatives to Bundler: faster and more accurate.


Kinect experiment with Ogre3D

November 20th, 2010

I’ve just bought a Kinect and decided to do some experiments with it:

This demo is a rip-off of the Kinect-v11 demo made by Zephod. In fact I’ve designed a new Ogre::Kinect library that provides the Kinect connection through Zephod’s library. Then I’ve replaced Zephod’s OpenGL demo with an Ogre3D demo using my library. The nice part is that I’ve managed to move some of the depth-to-RGB conversion to the GPU (using a pixel shader).
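To give an idea of the kind of conversion that gets moved to the pixel shader, here is a small CPU-side sketch mapping raw 11-bit Kinect depth values to an RGB gradient. The colour ramp and the actual shader used in the demo may differ; this is only an illustration:

```cpp
#include <cstdint>
#include <vector>

// Sketch: map a raw 11-bit Kinect depth value (0..2047) to an RGB colour.
// In the demo this kind of lookup runs in a pixel shader on a depth texture.
struct Rgb { uint8_t r, g, b; };

Rgb depthToColour(uint16_t depth)
{
    // normalize the 11-bit depth to [0, 1]
    float t = static_cast<float>(depth & 0x07FF) / 2047.0f;

    // simple near-red / far-blue ramp (placeholder for the real colour ramp)
    Rgb c;
    c.r = static_cast<uint8_t>((1.0f - t) * 255.0f);
    c.g = static_cast<uint8_t>((t < 0.5f ? t * 2.0f : (1.0f - t) * 2.0f) * 255.0f);
    c.b = static_cast<uint8_t>(t * 255.0f);
    return c;
}

// CPU fallback: convert a whole depth frame (this is what the pixel shader replaces).
void convertFrame(const std::vector<uint16_t>& depth, std::vector<Rgb>& rgb)
{
    rgb.resize(depth.size());
    for (size_t i = 0; i < depth.size(); ++i)
        rgb[i] = depthToColour(depth[i]);
}
```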

Links

Binary demo: OgreKinectDemo1.zip
Source code: svn on code.google
Documentation: doxygen
License: MIT
