This is a very short post to announce an SFMToolkit bug fix… If the toolkit wasn’t working at all on your machine (a “File not found” error in “1 – Bundle.wsf”), this may fix the issue. The bug was linked to the Windows decimal separator setting (‘.’ or ‘,’): on some systems the default 0.8 matching threshold was applied as 0, so no matching happened and Bundler produced no output.
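To illustrate the bug (this is a minimal Python simulation, not the toolkit’s actual code): a C-style atof under a locale whose decimal separator is ‘,’ stops parsing “0.8” at the unexpected ‘.’, leaving just 0.

```python
def parse_float_with_sep(text, decimal_sep):
    """Mimic C's locale-dependent atof: consume digits until a character
    that is neither a digit nor the locale's decimal separator."""
    int_part, frac_part, seen_sep = "", "", False
    for ch in text:
        if ch.isdigit():
            if seen_sep:
                frac_part += ch
            else:
                int_part += ch
        elif ch == decimal_sep and not seen_sep:
            seen_sep = True
        else:
            break  # '.' under a ',' locale stops the parse here
    value = int(int_part or "0")
    if frac_part:
        value += int(frac_part) / 10 ** len(frac_part)
    return value

print(parse_float_with_sep("0.8", "."))  # 0.8 -> matching threshold OK
print(parse_float_with_sep("0.8", ","))  # 0   -> no matches, no Bundler output
```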
I’ve also fixed other errors in BundlerMatcher (a small memory leak, and the 4096-match limit is gone), so you should download this version even if the previous one was working on your system. The new version is available on its dedicated page: please link to that page rather than directly to the zip file, so people downloading will always get the latest version.
This viewer is now integrated with the new version of my PhotoSynthToolkit (v5). This toolkit allows you to download a synth’s point cloud and thumbnail pictures. You can also densify the sparse point cloud generated by PhotoSynth using PMVS2 and then create an accurate mesh using MeshLab.
New features of PhotoSynthToolkit v5:
Thumbnail downloading should be faster (8x)
New C++ HD picture downloader (downloads tiles and re-composes them)
Tools to generate “vis.dat” from a previous PMVS2 call (by analysing the .patch file)
Working Ogre3D PhotoSynth viewer:
Can read dense point clouds created with my PhotoSynthToolkit using PMVS2
Click on a picture to change camera viewpoint
No-roll camera system
Warning: the PhotoSynth viewer may need a very powerful GPU (depending on the synth complexity: point cloud size and number of thumbnails). I’ve tested a scene with 820 pictures and 900k vertices on an Nvidia 8800 GTX with 768 MB, and it was running at 25fps (75fps on a GTX 470 with 1280 MB). I wish I could have used Microsoft Seadragon.
Download:
The PhotoSynthToolkit v5 is available on its dedicated page; please link to that page rather than directly to the zip file, so people willing to download the toolkit will always get the latest version.
Video demo:
Future version
Josh Harle has created CameraExport: a solution for 3DS Max that enables rendering the pictures of the Synth using camera projection. I haven’t tested it yet, but I’ll try to generate a file compatible with his 3DS Max script directly from my toolkit, thus avoiding downloading the Synth again with a modified version of SynthExport. Josh has also created a very interesting tutorial on how to use masks with PMVS2:
2010 was a year full of visual experiments for me; I hope you like what you see on this blog. In this post I’m giving a little overview of all the visual experiments I created during the year. This is an opportunity to catch up on something you’ve missed! I’d also like to thank some people who have been helping me:
Josh Harle: for his video tutorials and his nice blog
You: for reading this
Visual experiments created in 2010:
During this year I have added some features to Ogre3D:
ArToolKitPlus: augmented reality marker-based system
Cuda: for beginners only (though advanced users could grab some useful code)
OpenCL: for beginners only (though advanced users could grab some useful code)
Html5 Canvas: implementation based on Skia for graphics and V8 for JavaScript scripting
Kinect: this is a very hacky solution, I’ll improve it later
I also taught myself GPGPU programming while coding a partial GPUSurf implementation based on Nico Cornelis’s paper. This implementation is not complete, and I’m willing to rewrite it with a GPGPU framework based on OpenGL and Cg only (not Ogre3D). With such a framework, writing a Sift/Surf detector should be easier and more efficient.
I have created some visual experiments related to Augmented Reality:
My outdoor 3D tracking algorithm for augmented reality needs an accurate point cloud: this is why I’m interested in structure from motion and have created two SfM toolkits:
I introduced my tracking algorithm in the previous post. One of the issues I have is that the point cloud generated by my SFMToolkit (using Bundler) is not always accurate. Here is a list of alternative structure-from-motion projects I’m interested in:
My interest in structure from motion was primarily motivated by the capability of creating a point cloud that can be used as a reference for tracking. The video below is more a proof-of-concept than a prototype, but it gives an overview of my outdoor tracking algorithm for Augmented Reality:
Analysis
In a pre-processing step I built a sparse point cloud of the place using my SFMToolkit. Each vertex of the point cloud has several corresponding 2D Sift features. I only kept one Sift descriptor per vertex (the mean of its descriptors) and put all the descriptors in an index using Flann.
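The averaging step can be sketched as below (the post only says “mean of the descriptors”; the renormalization to unit length is my assumption, since Sift descriptors are usually unit vectors):

```python
def mean_descriptor(descriptors):
    # Average the per-view Sift descriptors of one 3D vertex, component-wise.
    n = len(descriptors)
    mean = [sum(col) / n for col in zip(*descriptors)]
    # Renormalize to unit length (assumption, not stated in the post).
    norm = sum(x * x for x in mean) ** 0.5 or 1.0
    return [x / norm for x in mean]
```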
For each frame of the video to be augmented, I extract Sift features with SiftGPU and then match them using a Flann 2-nearest-neighbor search with a distance-ratio threshold. The Flann matching is done in parallel with boost::threadpool. The computed matches contain a lot of outliers, so I implemented a Ransac pose estimator using EPnP that filters out bad 2d/3d correspondences.
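The distance-ratio test works like this (a brute-force Python stand-in for the Flann 2-nearest-neighbor search; the 0.8 ratio value is an assumption, taken from Lowe’s common recommendation, not from the post):

```python
def ratio_match(query, index, ratio=0.8):
    # Find the two nearest descriptors in the index; accept the best one
    # only if it is clearly closer than the runner-up (Lowe's ratio test).
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = second = None
    best_i = -1
    for i, desc in enumerate(index):
        d = dist2(query, desc)
        if best is None or d < best:
            second, best, best_i = best, d, i
        elif second is None or d < second:
            second = d
    if second is not None and best <= (ratio ** 2) * second:
        return best_i
    return None  # ambiguous match, rejected as a likely outlier
```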
Performance
My implementation is slow (mostly due to my Ransac EPnP implementation, which could be improved).
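For reference, the Ransac loop has the classic shape below (a generic Python skeleton, not my actual code; in the real pipeline, fit would be EPnP on a minimal set of 2d/3d correspondences and err the reprojection error):

```python
import random

def ransac(data, fit, err, sample_size, thresh, iters=100, seed=0):
    # Classic Ransac: fit a model on random minimal samples and keep the
    # model that agrees with the most data points (the inliers).
    rng = random.Random(seed)  # seeded for reproducibility
    best_model, best_inliers = None, []
    for _ in range(iters):
        model = fit(rng.sample(data, sample_size))
        inliers = [d for d in data if err(model, d) < thresh]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

# Toy usage: robust mean of 1D samples with one gross outlier.
model, inliers = ransac(
    data=[1.0, 1.1, 0.9, 10.0],
    fit=lambda s: sum(s) / len(s),
    err=lambda m, d: abs(m - d),
    sample_size=2,
    thresh=0.5,
)
print(len(inliers))  # 3 (the outlier 10.0 is rejected)
```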
Sift first octave: -1
Sift extraction: 49ms (2917 features)
Sift matching: 57ms (parallel matching using Flann)
Ransac EPnP: 110ms (121 inliers of 208 matches)
Global: 4.6fps (9.4fps without pose estimation)
Sift first octave: 0
Sift extraction: 32ms (707 features)
Sift matching: 15ms (parallel matching using Flann)
Ransac EPnP: 144ms (62 inliers of 93 matches)
Global: 5.2fps (21.2fps without pose estimation)
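As a sanity check, the global frame rates above are just the reciprocal of the summed per-stage timings (the timings are rounded, so the last digit can differ slightly):

```python
def fps(*stage_ms):
    # Frames per second from per-stage timings in milliseconds.
    return 1000.0 / sum(stage_ms)

print(round(fps(49, 57, 110), 1))  # 4.6 fps: first octave -1, full pipeline
print(round(fps(49, 57), 1))       # 9.4 fps: same, without pose estimation
print(round(fps(32, 15, 144), 1))  # 5.2 fps: first octave 0, full pipeline
```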
The slowness is not such a big issue because it doesn’t need to run at 30fps. Indeed, the goal of my prototype is to get an absolute pose from this tracking system every second, and a relative pose using the inertial sensors available on a mobile device (or using KLT tracking).
Issues
Performance (faster is better)
The point cloud reference is not always accurate (Bundler’s fault)
In another post I’ll introduce alternatives to Bundler that are faster and more accurate.
I'm Henri Astre, a French guy who likes visual experiments. I've done some nice demos in various domains (augmented reality, GPU computing, structure from motion, 3D rendering, web apps...) that you can see on this blog. [+ resume]