Archive for the ‘photogrammetry’ category

SFMToolkit updated

January 29th, 2011

This is a very short post to announce an SFMToolkit bug fix… If the toolkit wasn’t working at all on your machine (“File not found” error in “1 – Bundle.wsf”), this may fix the issue. The bug was caused by the Windows decimal separator setting: ‘.’ vs ‘,’. On some systems the default matching threshold of 0.8 was therefore parsed as 0: no matching -> no Bundler output :-(
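
For the curious, the usual fix for this family of bugs is to parse numbers with an invariant locale instead of the user’s regional settings. A minimal C++ sketch of the idea (illustrative only, not the actual BundlerMatcher code):

```cpp
#include <locale>
#include <sstream>
#include <string>

// Parse a float regardless of the Windows regional settings: imbue the
// stream with the classic "C" locale, where '.' is always the decimal
// separator, so "0.8" can never collapse to 0 on a ',' locale.
float parseFloatInvariant(const std::string& text)
{
    std::istringstream stream(text);
    stream.imbue(std::locale::classic());
    float value = 0.0f;
    stream >> value;
    return value;
}
```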

I’ve also fixed other errors in BundlerMatcher (a small memory leak, and the 4096-match limit is gone), so you should download this version even if the previous one was working on your system. The new version is available on its dedicated page: please do not link directly to the zip file but to that page, so people downloading the toolkit will always get the latest version.


Ogre3D PhotoSynth Viewer

January 26th, 2011

This viewer is now integrated into the new version of my PhotoSynthToolkit (v5). This toolkit allows you to download a synth’s point cloud and thumbnail pictures. You can also densify the sparse point cloud generated by PhotoSynth using PMVS2 and then create an accurate mesh using MeshLab.

New features of PhotoSynthToolkit v5:

  • Thumbnail downloading should be faster (8x)
  • New C++ HD picture downloader (downloads tiles and re-composes them)
  • Tool to generate “vis.dat” from a previous PMVS2 run by analysing the .patch file (see the sketch after this list)
  • Working Ogre3D PhotoSynth viewer:
    • Can read dense point clouds created with my PhotoSynthToolkit using PMVS2
    • Click on a picture to change the camera viewpoint
    • No-roll camera system
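
About the “vis.dat” generation: each patch in a PMVS2 .patch file lists the images it is visible in, so two images sharing at least one patch can be marked as neighbors. A rough sketch of that idea, assuming the standard PMVS2 file formats (the adjacency rule and all names here are illustrative; the actual tool may work differently):

```cpp
#include <cstdlib>
#include <fstream>
#include <set>
#include <string>
#include <vector>

// Usage (hypothetical): genvisdat <patch file> <number of images>
int main(int argc, char* argv[])
{
    std::ifstream in(argv[1]);
    const int numImages = std::atoi(argv[2]);

    std::string header;
    int numPatches = 0;
    in >> header >> numPatches; // "PATCHES" <count>

    std::vector<std::set<int> > adjacency(numImages);
    for (int p = 0; p < numPatches; ++p)
    {
        double dummy;
        in >> header;                             // "PATCHS"
        for (int i = 0; i < 11; ++i) in >> dummy; // position (4), normal (4), scores (3)

        int numVisible = 0;                       // images where the patch is visible
        in >> numVisible;                         // with a good photometric score
        std::vector<int> visible(numVisible);
        for (int i = 0; i < numVisible; ++i) in >> visible[i];

        int numOther = 0;                         // images where the patch projects
        in >> numOther;                           // but textures disagree: skip them
        for (int i = 0; i < numOther; ++i) in >> dummy;

        // Every pair of images seeing this patch is adjacent.
        for (size_t i = 0; i < visible.size(); ++i)
            for (size_t j = 0; j < visible.size(); ++j)
                if (i != j) adjacency[visible[i]].insert(visible[j]);
    }

    std::ofstream out("vis.dat");
    out << "VISDATA\n" << numImages << "\n";
    for (int i = 0; i < numImages; ++i)
    {
        out << i << " " << adjacency[i].size();
        std::set<int>::const_iterator it = adjacency[i].begin();
        for (; it != adjacency[i].end(); ++it) out << " " << *it;
        out << "\n";
    }
    return 0;
}
```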

Warning: the PhotoSynth viewer may need a very powerful GPU (depending on the synth complexity: point cloud size and number of thumbnails). I’ve tested a scene with 820 pictures and 900k vertices on an Nvidia 8800 GTX with 768MB: it ran at 25fps (75fps on a 470 GTX with 1280MB). I wish I could have used Microsoft Seadragon :-)

Download:

The PhotoSynthToolkit v5 is available on its dedicated page; please do not link directly to the zip file but to that page instead, so people willing to download the toolkit will always get the latest version.

Video demo:

Future version

Josh Harle has created CameraExport: a solution for 3DS Max that enables rendering the pictures of a Synth using camera projection. I haven’t tested it yet, but I’ll try to generate a file compatible with his 3DS Max script directly from my toolkit, thus avoiding having to download the Synth again with a modified version of SynthExport. Josh has also created a very interesting tutorial on how to use masks with PMVS2:

Masks with the PhotoSynth Toolkit 4 – tutorial from Josh Harle on Vimeo.


2010 visual experiments

January 7th, 2011

Happy new year everyone!

2010 was a year full of visual experiments for me; I hope you like what you see on this blog. In this post I give a little overview of all the visual experiments I created during the year: an opportunity to catch up on anything you’ve missed! I’d also like to thank the people who have been helping me:

Visual experiments created in 2010:

During this year I added some features to Ogre3D:

  • ArToolKitPlus: augmented reality marker-based system
  • CUDA: for beginners only (though advanced users could grab some useful code)
  • OpenCL: for beginners only (though advanced users could grab some useful code)
  • HTML5 Canvas: implementation based on Skia for graphics and V8 for JavaScript scripting
  • Kinect: this is a very hacky solution; I’ll improve it later

I also taught myself GPGPU programming while coding a partial GPUSurf implementation based on Nico Cornelis’s paper. This implementation is not complete, and I’m willing to rewrite it on top of a GPGPU framework based on OpenGL and Cg only (not Ogre3D). With such a framework, writing a SIFT/SURF detector should be easier and more efficient.
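
The core of such a framework is one primitive: run a fragment shader over every texel of an output texture. A minimal sketch of a single GPGPU pass (legacy OpenGL with GLSL-style program binding for brevity; a Cg version would bind programs through the cgGL runtime instead, and all names are illustrative):

```cpp
#include <GL/glew.h>

// One GPGPU "pass": the kernel is a fragment shader, the input is a texture,
// the output is a texture attached to an FBO, and launching the kernel means
// drawing a fullscreen quad so the shader runs once per output texel.
void runPass(GLuint program, GLuint inputTexture, GLuint outputFBO, int width, int height)
{
    glBindFramebuffer(GL_FRAMEBUFFER, outputFBO); // render into the output texture
    glViewport(0, 0, width, height);              // one fragment per output texel

    glUseProgram(program);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, inputTexture);

    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();

    glUseProgram(0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```

Chaining such passes with ping-pong textures is essentially all a SIFT/SURF pipeline needs (blur, difference, non-maximum suppression, and so on).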

I have created some visual experiments related to Augmented Reality:

My outdoor 3D tracking algorithm for augmented reality needs an accurate point cloud: this is why I’m interested in structure from motion and why I’ve created two SfM toolkits:

Posts published in 2010:


Structure from motion projects

December 20th, 2010

I introduced my tracking algorithm in the previous post. One of the issues I have is that the point cloud generated by my SFMToolkit (using Bundler) is not always accurate. This is a list of alternative structure-from-motion projects I’m interested in:

Building Rome in a Day:

Project home (uses Bundler, GPL)

Building Rome on a Cloudless Day:

Project home | Source code (non-profit license; I’ve ported their source to Windows)

Samantha:

Project home (I’ve contacted them without getting a response, but they said they were going to release the source code: check at 28:50)

[Image: point cloud comparison, Samantha vs. Bundler]

PhotoSynth:

Website – Microsoft closed-source SFM application: check out my PhotoSynthToolkit

[Image: point cloud comparison, PhotoSynth vs. Bundler]

ETH-V3D Structure-and-Motion software:

Project home with source code (GPL; I’ve partially ported it to Windows)

Simple Sparse Bundle Adjustment:

Project home with source code (LGPL; I’ve ported it to Windows)

A multi-stage linear approach to structure from motion:

Project home | paper

Results from the LinearSFM paper (Microsoft Research)

This list is not exhaustive; I’ve seen other projects too (e.g. Efficient Large Scale Multi-View Stereo for Ultra High Resolution Image Sets: I’m not sure how it relates to the ETH-V3D Structure-and-Motion software).


Augmented Reality outdoor tracking becoming reality

December 13th, 2010

My interest in structure from motion was primarily motivated by the possibility of creating a point cloud that can be used as a reference for tracking. The video below is more a proof of concept than a prototype, but it gives an overview of my outdoor tracking algorithm for Augmented Reality:

Analysis

In a pre-processing step I built a sparse point cloud of the place using my SFMToolkit. Each vertex of the point cloud has several corresponding 2D SIFT features. I kept only one SIFT descriptor per vertex (the mean of its descriptors) and put all the descriptors in an index using Flann.
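
A minimal sketch of that indexing step, assuming one 128-float mean SIFT descriptor per vertex (the names are illustrative, not the actual toolkit code):

```cpp
#include <flann/flann.hpp>
#include <vector>

// Build an approximate nearest-neighbor index over the per-vertex mean SIFT
// descriptors (stored as numVertices rows of 128 floats, row-major).
flann::Index<flann::L2<float> >* buildDescriptorIndex(std::vector<float>& meanDescriptors,
                                                      size_t numVertices)
{
    flann::Matrix<float> dataset(&meanDescriptors[0], numVertices, 128);

    // 4 randomized kd-trees: a common setup for high-dimensional SIFT data.
    flann::Index<flann::L2<float> >* index =
        new flann::Index<flann::L2<float> >(dataset, flann::KDTreeIndexParams(4));
    index->buildIndex();
    return index;
}
```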

For each frame of the video to be augmented, I extract SIFT features with SiftGPU and match them against the index using a Flann 2-nearest-neighbor search with a distance ratio threshold. The Flann matching is done in parallel with boost::threadpool. The computed matches still contain a lot of outliers, so I implemented a RANSAC pose estimator based on EPnP that filters out bad 2D/3D correspondences.
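
The 2-nearest-neighbor search with a distance ratio threshold is the classic Lowe criterion: a match is kept only if the best neighbor is clearly closer than the second best. A single-threaded sketch (the 0.8 ratio and the 128 leaf checks are illustrative; note that FLANN’s L2 distances are squared, hence the squared ratio):

```cpp
#include <flann/flann.hpp>
#include <utility>
#include <vector>

// Match frame descriptors against the point-cloud index; returns pairs of
// (frame feature index, point-cloud vertex index) that pass the ratio test.
std::vector<std::pair<int, int> > matchFrame(flann::Index<flann::L2<float> >& index,
                                             flann::Matrix<float>& queries,
                                             float ratio = 0.8f)
{
    std::vector<int> indexBuf(queries.rows * 2);
    std::vector<float> distBuf(queries.rows * 2);
    flann::Matrix<int> indices(&indexBuf[0], queries.rows, 2);
    flann::Matrix<float> dists(&distBuf[0], queries.rows, 2);

    // Approximate 2-NN search; more leaf checks = more accurate but slower.
    index.knnSearch(queries, indices, dists, 2, flann::SearchParams(128));

    std::vector<std::pair<int, int> > matches;
    for (size_t i = 0; i < queries.rows; ++i)
        if (dists[i][0] < ratio * ratio * dists[i][1]) // squared distances
            matches.push_back(std::make_pair((int)i, indices[i][0]));
    return matches;
}
```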

Performance

My implementation is slow (mostly because of my RANSAC EPnP implementation, which could be improved).
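
For reference, here is the overall structure of such a RANSAC loop, written against the reference epnp class by Lepetit, Moreno-Noguer, and Fua; the sample size, iteration count, and inlier threshold are illustrative, and my actual implementation differs:

```cpp
#include <cmath>
#include <cstdlib>
#include <cstring>
#include <vector>
#include "epnp.h" // EPnP reference implementation (Lepetit, Moreno-Noguer, Fua)

// One 2D/3D correspondence: a point-cloud vertex matched to a frame feature.
struct Correspondence { double X, Y, Z, u, v; };

// Pinhole reprojection error (in pixels) of a correspondence under pose (R, t).
static double reprojError(const double R[3][3], const double t[3],
                          double fu, double fv, double uc, double vc,
                          const Correspondence& c)
{
    double x = R[0][0]*c.X + R[0][1]*c.Y + R[0][2]*c.Z + t[0];
    double y = R[1][0]*c.X + R[1][1]*c.Y + R[1][2]*c.Z + t[1];
    double z = R[2][0]*c.X + R[2][1]*c.Y + R[2][2]*c.Z + t[2];
    double du = uc + fu * x / z - c.u;
    double dv = vc + fv * y / z - c.v;
    return std::sqrt(du * du + dv * dv);
}

// RANSAC around EPnP: repeatedly fit a pose to a minimal random sample and
// keep the pose with the largest consensus set. Returns the inlier count.
int ransacEPnP(const std::vector<Correspondence>& matches,
               double fu, double fv, double uc, double vc,
               double bestR[3][3], double bestT[3])
{
    epnp solver;
    solver.set_internal_parameters(uc, vc, fu, fv);
    solver.set_maximum_number_of_correspondences(5);

    int bestInliers = 0;
    for (int iter = 0; iter < 500; ++iter)
    {
        // 1. Draw a minimal sample and estimate a candidate pose.
        solver.reset_correspondences();
        for (int i = 0; i < 5; ++i)
        {
            const Correspondence& c = matches[std::rand() % matches.size()];
            solver.add_correspondence(c.X, c.Y, c.Z, c.u, c.v);
        }
        double R[3][3], t[3];
        solver.compute_pose(R, t);

        // 2. Count correspondences that agree with the candidate pose.
        int inliers = 0;
        for (size_t i = 0; i < matches.size(); ++i)
            if (reprojError(R, t, fu, fv, uc, vc, matches[i]) < 3.0)
                ++inliers;

        // 3. Keep the best consensus so far (a final refit on all inliers
        //    is usually done after the loop).
        if (inliers > bestInliers)
        {
            bestInliers = inliers;
            std::memcpy(bestR, R, sizeof(double) * 9);
            std::memcpy(bestT, t, sizeof(double) * 3);
        }
    }
    return bestInliers;
}
```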

SIFT first octave = -1:
SIFT extraction: 49ms (2917 features)
SIFT matching: 57ms (parallel matching using Flann)
RANSAC EPnP: 110ms (121 inliers out of 208 matches)
Global: 4.6fps (9.4fps without pose estimation)

SIFT first octave = 0:
SIFT extraction: 32ms (707 features)
SIFT matching: 15ms (parallel matching using Flann)
RANSAC EPnP: 144ms (62 inliers out of 93 matches)
Global: 5.2fps (21.2fps without pose estimation)

The slowness is not such a big issue because the system doesn’t need to run at 30fps. The goal of my prototype is to get an absolute pose from this tracking system every second, and a relative pose in between from the inertial sensors available on mobile devices (or from KLT tracking).

Issues

  • Performance (faster is better ;-))
  • The reference point cloud is not always accurate (Bundler’s fault)

In another post I’ll introduce alternatives to Bundler that are faster and more accurate.
