This is a very short post to announce an SFMToolkit bug fix. If the toolkit wasn't working at all on your machine (a "File not found" error in "1 - Bundle.wsf"), this may fix the issue. The bug was linked to the Windows decimal separator setting ('.' or ','): on systems using ',', the default 0.8 matching threshold was parsed as 0, so no matches were found and Bundler produced no output.
I've also fixed other errors in BundlerMatcher (a small memory leak, and the 4096-match limit is gone), so you should download this version even if the previous one was working on your system. The new version is available on its dedicated page: please do not link directly to the zip file but to that page, so that people downloading the toolkit always get the latest version.
New C++ HD picture downloader (downloads tiles and re-composes them)
Tools to generate "vis.dat" from a previous PMVS2 call (by analysing the .patch file)
Working Ogre3D PhotoSynth viewer:
Can read dense point clouds created with my PhotoSynthToolkit using PMVS2
Click on a picture to change camera viewpoint
No-roll camera system
Warning: the PhotoSynth viewer may need a very powerful GPU (depending on the synth complexity: point cloud size and number of thumbnails). I've tested a scene with 820 pictures and 900k vertices on an Nvidia 8800 GTX with 768 MB and it ran at 25fps (75fps on a 470 GTX with 1280 MB). I wish I could have used Microsoft Seadragon.
The PhotoSynthToolkit v5 is available on its dedicated page: please do not link directly to the zip file but to that page instead, so that people downloading the toolkit always get the latest version.
Josh Harle has created CameraExport: a solution for 3DS Max that enables rendering the pictures of a Synth using camera projection. I haven't tested it yet, but I'll try to generate a file compatible with his 3DS Max script directly from my toolkit, avoiding the need to download the Synth again with a modified version of SynthExport. Josh has also created a very interesting tutorial on how to use masks with PMVS2:
2010 was a year full of visual experiments for me, and I hope you like what you see on this blog. In this post I'm giving a little overview of all the visual experiments I created this year. This is an opportunity to catch up on anything you've missed! I'd also like to thank some people who have been helping me:
Kinect: this is a very hacky solution; I'll improve it later
I also taught myself GPGPU programming while coding a partial GPUSurf implementation based on Nico Cornelis's paper. This implementation is not complete, and I'm planning to rewrite it with a GPGPU framework based on OpenGL and Cg only (not Ogre3D). With such a framework, writing a Sift/Surf detector should be easier and more efficient.
I have created some visual experiments related to Augmented Reality:
I'm Henri Astre, a French guy who likes visual experiments. I've done some nice demos in various domains (augmented reality, GPU computing, structure from motion, 3d rendering, web app...) that you can see on this blog. [+ resume]