
PhotoSynthToolkit results

November 19th, 2010

This is just a small post to show you what kind of results you can get with my PhotoSynthToolkit:

The download location and source code are introduced in my previous post.

 

PhotoSynth Toolkit updated

November 9th, 2010

Overview

I have updated my PhotoSynth toolkit for easier usage (it now works the same way as SFMToolkit). This is an example of dense mesh creation from 12 pictures using this toolkit:

The 12 pictures were shot with a Canon PowerShot A700:

Thanks to this toolkit, PMVS2, and MeshLab, you can create a dense mesh from these 12 pictures:

Left to right: triangulated mesh with vertex color; triangulated mesh with vertex color and SSAO; triangulated mesh shaded with SSAO; triangulated mesh wireframe; PhotoSynth sparse point cloud.
(Sparse point cloud: 8,600 vertices; dense point cloud: 417k vertices; mesh: 917k triangles.)

 

You can also take a look at the PhotoSynth reconstruction of the sculpture.

PhotoSynthToolkit is composed of several programs:

  • PhotoSynthDownloader: downloads the PhotoSynth point cloud and camera parameters
  • PhotoSynth2PMVS: converts a downloaded PhotoSynth point cloud so that PMVS2 can be run on it
  • PMVS2: http://grail.cs.washington.edu/software/pmvs/, created by Yasutaka Furukawa
  • PhotoSynthViewer: Ogre3D PhotoSynth viewer [not working yet]

Download

The source code is available under the MIT license on my GitHub. I have also released a win32 binary version with Windows scripting (WSH) for easier usage: PhotoSynthToolkit4.zip.

Help

If you need some help or just want to discuss photogrammetry, please join the photogrammetry forum created by Olafur Haraldsson. You may also be interested in Josh Harle’s video tutorials; they are partially outdated due to the new PhotoSynthToolkit version, but these videos are very good for learning how to use MeshLab.

Please go to the PhotoSynthToolkit page to get the latest version.


Structure From Motion Toolkit released

November 5th, 2010

Overview

I have finally released my Structure-From-Motion Toolkit (SFMToolkit). So what can you do with it? Let’s say you have a nice place like the one just below:

Place de la Bourse, Bordeaux, France (picture from Bing)
 

Well, now you can take a lot of pictures of the place (around 50 in my case):

 

And then compute structure from motion and get a sparse point cloud using Bundler:
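Bundler writes this sparse reconstruction to a bundle.out file. If you want to reuse the cameras and points elsewhere, a short parser is enough; below is a minimal sketch based on the documented bundle v0.3 format (an illustration only, not part of SFMToolkit):

```cpp
// Minimal sketch of a reader for Bundler's bundle.out (v0.3 format).
// Illustration only: this parser is not part of SFMToolkit.
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

struct Camera { double f, k1, k2, R[9], t[3]; };
struct Point  { double pos[3]; int rgb[3]; };

bool loadBundle(const std::string& path,
                std::vector<Camera>& cameras, std::vector<Point>& points)
{
    std::ifstream in(path);
    if (!in) return false;

    std::string header;
    std::getline(in, header);               // "# Bundle file v0.3"

    int numCameras = 0, numPoints = 0;
    in >> numCameras >> numPoints;

    cameras.resize(numCameras);
    for (Camera& c : cameras) {
        in >> c.f >> c.k1 >> c.k2;          // focal length + 2 radial distortion coefficients
        for (double& r : c.R) in >> r;      // 3x3 rotation, row major
        for (double& t : c.t) in >> t;      // translation
    }

    points.resize(numPoints);
    for (Point& p : points) {
        in >> p.pos[0] >> p.pos[1] >> p.pos[2];
        in >> p.rgb[0] >> p.rgb[1] >> p.rgb[2];
        int viewCount = 0;
        in >> viewCount;                    // view list: skipped here
        for (int i = 0; i < viewCount; ++i) {
            int cam, key; double x, y;
            in >> cam >> key >> x >> y;
        }
    }
    return static_cast<bool>(in);
}

int main(int argc, char** argv)
{
    std::vector<Camera> cameras;
    std::vector<Point> points;
    if (argc > 1 && loadBundle(argv[1], cameras, points))
        std::cout << cameras.size() << " cameras, "
                  << points.size() << " points\n";
    return 0;
}
```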

Finally, you get a dense point cloud, divided into clusters by CMVS and computed by PMVS2:

You can also take a look at the PhotoSynth reconstruction of the place with 53 pictures and with 26 pictures (without the fountain).

This is the SFMToolkit workflow:

SFMToolkit is composed of several programs:

Download

As you can see, this “toolkit” is composed of several open-source components. This is why I have decided to open-source my part of the work too. You can download the source code from the SFMToolkit GitHub. You can also download a pre-compiled x64 version of the toolkit with Windows scripting (WSH) for easier usage (but not cross-platform): SFMToolkit1.zip.

Help

If you need some help or just want to discuss photogrammetry, please join the photogrammetry forum created by Olafur Haraldsson. You may also be interested in Josh Harle’s video tutorials; they are partially outdated due to the new SFMToolkit, but these videos are very good for learning how to use MeshLab.

Please go to the SFMToolkit page to get the latest version.


My PhotoSynth ToolKit

August 19th, 2010

I have released a toolkit for PhotoSynth that makes it possible to create a dense point cloud using PMVS2.
You can download PhotoSynthToolKit1.zip and take a look at the code on my Google Code.

PhotoSynth sparse point cloud: 11k vertices
PMVS2 dense point cloud: 230k vertices

I have also created a web app, PhotoSynthTileDownloader, that makes it possible to download all the pictures of a synth in HD. I haven’t released it yet because I’m concerned about legal issues, but you can see for yourself that it’s already working:

I’ll give more information about it in a few days, stay tuned!

Edit: I have removed the workflow graph and moved it to my next post.

Please go to the PhotoSynthToolkit page to get the latest version.


Pose Estimation using SfM point cloud

July 12th, 2010

The idea of this pose estimator is based on PTAM (Parallel Tracking and Mapping). PTAM is capable of tracking in an unknown environment thanks to the mapping done in parallel. But in fact, if you want to augment reality, it’s generally because you already know what you are looking at. So being able to track in an unknown environment is not always needed. My idea was simple: instead of doing the mapping in parallel, why not use SfM in a pre-processing step?

Input: point cloud + camera shot. Output: position and orientation of the camera.

So my outdoor tracking algorithm will eventually work like this:

  • pre-processing step:
    • generate a point cloud of the outdoor scene you want to track using Bundler
    • create a binary file with one descriptor (SIFT/SURF) per vertex of the point cloud
  • in real time, for each frame N:
    • extract features using FAST
    • match features from frame N-1 using 2D patches
    • compute the “relative pose” between frames N and N-1
  • in almost real time, for each “key frame”:
    • extract features and descriptors
    • match the descriptors with those of the point cloud
    • generate 2D/3D correspondences from the matches
    • compute the “absolute pose” using a PnP solver (EPnP); a sketch of this step follows the list
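To make the key-frame step concrete, here is a minimal sketch of the absolute pose computation. OpenCV is my assumption here (the post does not prescribe it): a brute-force matcher pairs the key-frame descriptors with the per-vertex descriptor file, and cv::solvePnP with the SOLVEPNP_EPNP flag stands in for the EPnP solver:

```cpp
// Sketch of the key-frame "absolute pose" step, using OpenCV (my assumption)
// with SOLVEPNP_EPNP standing in for the EPnP solver mentioned in the post.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

bool absolutePose(const std::vector<cv::Point3f>& cloud,     // SfM point-cloud vertices
                  const cv::Mat& cloudDesc,                  // one descriptor per vertex
                  const std::vector<cv::KeyPoint>& frameKp,  // key-frame features
                  const cv::Mat& frameDesc,                  // key-frame descriptors
                  const cv::Mat& K,                          // 3x3 camera intrinsics
                  cv::Mat& rvec, cv::Mat& tvec)              // output pose
{
    // 1. match key-frame descriptors against the point-cloud descriptor file
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<cv::DMatch> matches;
    matcher.match(frameDesc, cloudDesc, matches);

    // 2. turn every match into a 2D/3D correspondence
    std::vector<cv::Point2f> points2d;
    std::vector<cv::Point3f> points3d;
    for (const cv::DMatch& m : matches) {
        points2d.push_back(frameKp[m.queryIdx].pt);
        points3d.push_back(cloud[m.trainIdx]);
    }
    if (points3d.size() < 4) return false;  // EPnP needs at least 4 points

    // 3. compute the absolute camera pose with EPnP
    return cv::solvePnP(points3d, points2d, K, cv::noArray(),
                        rvec, tvec, false, cv::SOLVEPNP_EPNP);
}
```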

The tricky part is that the absolute pose computation can span several “relative pose” estimations. So once you’ve got the absolute pose, you’ll have to compensate for the delay by accumulating the previous relative poses…
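In other words, the absolute pose of key frame K is already stale when it arrives; composing it with every relative pose estimated since K gives the pose of the current frame. A minimal sketch with 4x4 rigid transforms (cv::Matx44d here, again my assumption):

```cpp
// Sketch of the delay compensation: the absolute pose of key frame K arrives
// late, so it is composed with every relative pose estimated since K.
#include <opencv2/core.hpp>
#include <deque>

// All poses are 4x4 rigid transforms; relativeSinceKeyFrame[i] maps
// frame K+i to frame K+i+1.
cv::Matx44d compensateDelay(const cv::Matx44d& absoluteAtKeyFrame,
                            const std::deque<cv::Matx44d>& relativeSinceKeyFrame)
{
    cv::Matx44d pose = absoluteAtKeyFrame;
    for (const cv::Matx44d& rel : relativeSinceKeyFrame)
        pose = rel * pose;  // accumulate the relative poses in order
    return pose;            // estimated pose of the current frame
}
```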

This is what I’ve got so far:

  • pre-processing step: binary file generated using SiftGPU (planning to move to my GPUSurf implementation) and Bundler (planning to move to Insight3D or to implement it myself using sba)
  • relative pose: I don’t have an implementation of the relative pose estimator yet
  • absolute pose: it’s basically working but needs some improvements:
    • switch feature extraction/matching from SIFT to SURF
    • remove unused descriptors to speed up the matching step (by scoring descriptors used as inliers with training data)
    • use another PnP solver (or add RANSAC to support outliers and get more accurate results; see the sketch below)
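As an illustration of that last item, OpenCV’s cv::solvePnPRansac wraps the PnP solver in a RANSAC loop so that wrong descriptor matches are rejected as outliers (again, OpenCV is my assumption, not the post’s code):

```cpp
// Sketch of the RANSAC improvement from the last item above: wrap the PnP
// solver in RANSAC so wrong matches are rejected as outliers.
// OpenCV's solvePnPRansac is used for illustration (my assumption).
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

bool absolutePoseRansac(const std::vector<cv::Point3f>& points3d,
                        const std::vector<cv::Point2f>& points2d,
                        const cv::Mat& K,
                        cv::Mat& rvec, cv::Mat& tvec)
{
    std::vector<int> inliers;  // indices of the correspondences kept by RANSAC
    return cv::solvePnPRansac(points3d, points2d, K, cv::noArray(),
                              rvec, tvec,
                              false,   // no extrinsic guess
                              100,     // RANSAC iterations
                              8.0f,    // reprojection error threshold (pixels)
                              0.99,    // confidence
                              inliers,
                              cv::SOLVEPNP_EPNP);
}
```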