<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>
<channel>
	<title>Visual-Experiments.com &#187; ogre3d</title>
	<atom:link href="http://www.visual-experiments.com/tag/ogre3d/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.visual-experiments.com</link>
	<description>ASTRE Henri experiments with Ogre3D and web stuff</description>
	<lastBuildDate>Mon, 16 Jan 2017 18:59:35 +0000</lastBuildDate>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.1.2</generator>
		<item>
		<title>Ogre3D PhotoSynth Viewer</title>
		<link>http://www.visual-experiments.com/2011/01/26/ogre3d-photosynth-viewer/</link>
		<comments>http://www.visual-experiments.com/2011/01/26/ogre3d-photosynth-viewer/#comments</comments>
		<pubDate>Wed, 26 Jan 2011 21:33:06 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[ogre3d]]></category>
		<category><![CDATA[photogrammetry]]></category>
		<category><![CDATA[photosynth]]></category>
		<category><![CDATA[photosynthtoolkit]]></category>
		<category><![CDATA[toolkit]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=1301</guid>
		<description><![CDATA[This viewer is now integrated with the new version of my PhotoSynthToolkit (v5). This toolkit allows you to download synth point clouds and thumbnail pictures. You can also densify the sparse point cloud generated by PhotoSynth using PMVS2 and then create an accurate mesh using MeshLab. New features in PhotoSynthToolkit v5: Thumbnail downloading should be [...]]]></description>
			<content:encoded><![CDATA[<p><img src="http://www.visual-experiments.com/blog/wp-content/uploads/2011/01/PhotoSynthViewer.jpg" alt="" title="PhotoSynthViewer" width="591" height="332" class="alignnone size-full wp-image-1304" /></p>
<p>This viewer is now integrated with the new version of my <a href="http://www.visual-experiments.com/demos/photosynthtoolkit/">PhotoSynthToolkit</a> (v5). This toolkit allows you to download synth point clouds and thumbnail pictures. You can also densify the sparse point cloud generated by <a href="http://photosynth.net/">PhotoSynth</a> using <a href="http://grail.cs.washington.edu/software/pmvs/">PMVS2</a> and then create an <a href="http://www.visual-experiments.com/2010/11/19/photosynthtoolkit-results/">accurate mesh</a> using <a href="http://meshlab.sourceforge.net/">MeshLab</a>.</p>
<h3>New features in PhotoSynthToolkit v5:</h3>
<ul style="margin-left: 20px;">
<li>Thumbnail downloading should be faster (8x)</li>
<li>New C++ HD picture downloader (downloads tiles and re-composes them)</li>
<li>Tool to generate &#8220;vis.dat&#8221; from a previous PMVS2 run (by analysing the .patch file)</li>
<li>Working Ogre3D PhotoSynth viewer:
<ul>
<li>Can read dense point cloud created with my PhotoSynthToolkit using PMVS2</li>
<li>Click on a picture to change camera viewpoint</li>
<li>No-roll camera system</li>
</ul>
</li>
</ul>
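<p>Some background on the &#8220;vis.dat&#8221; item above: each patch in a PMVS2 .patch file lists the images that see it, so image-to-image visibility links can be rebuilt from the patches alone. Below is a minimal C++ sketch of that idea, assuming the vis.dat layout as I understand it from CMVS; the helper names are hypothetical, not the toolkit&#8217;s actual code.</p>

```cpp
#include <ostream>
#include <set>
#include <sstream>
#include <vector>

// Sketch of the idea behind the vis.dat generator: every PMVS2 patch lists
// the indices of the images that see it, so two images are "linked" when at
// least one patch is visible in both. Hypothetical helper, not the toolkit's
// actual source.
std::vector<std::set<int>> buildCovisibility(
    const std::vector<std::vector<int>>& patchVisibility, int numImages)
{
    std::vector<std::set<int>> adj(numImages);
    for (const std::vector<int>& images : patchVisibility)
        for (int a : images)
            for (int b : images)
                if (a != b)
                    adj[a].insert(b);
    return adj;
}

// Emit the adjacency in the vis.dat layout read by CMVS/PMVS2 (as I
// understand it): "VISDATA", the image count, then one line per image with
// <imageIndex> <numLinkedImages> <linkedIndices...>.
void writeVisData(std::ostream& out, const std::vector<std::set<int>>& adj)
{
    out << "VISDATA\n" << adj.size() << "\n";
    for (std::size_t i = 0; i < adj.size(); ++i) {
        out << i << " " << adj[i].size();
        for (int j : adj[i])
            out << " " << j;
        out << "\n";
    }
}
```

<p>Since the visibility indices already exist in the .patch output, this avoids re-running the matching stage just to recover image links.</p>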
<p><strong>Warning</strong>: the PhotoSynth viewer may need a very powerful GPU (depending on the synth complexity: point cloud size and number of thumbnails). I have tested a scene with 820 pictures and 900k vertices on an Nvidia 8800 GTX with 768MB and it ran at 25fps (75fps with a 470 GTX and 1280MB). I wish I could have used <strong>Microsoft Seadragon</strong> <img src='http://www.visual-experiments.com/blog/wp-includes/images/smilies/icon_smile.gif' alt=':-)' class='wp-smiley' /> .</p>
<h3>Download:</h3>
<p>The PhotoSynthToolkit v5 is available on <a href="http://www.visual-experiments.com/demos/photosynthtoolkit/">its dedicated page</a>. Please link to <a href="http://www.visual-experiments.com/demos/photosynthtoolkit/">that page</a> rather than directly to the zip file, so that people downloading the toolkit always get the latest version.</p>
<h3>Video demo:</h3>
<p><iframe title="YouTube video player" class="youtube-player" type="text/html" width="560" height="345" src="http://www.youtube.com/embed/fM2Y0sUBErE" frameborder="0" allowFullScreen></iframe></p>
<h3>Future version</h3>
<p><a href="http://blog.neonascent.net/">Josh Harle</a> has created <a href="http://blog.neonascent.net/archives/cameraexport-photosynth-to-camera-projection-in-3ds-max/">CameraExport</a>: a solution for 3DS Max that enables rendering the pictures of the Synth using camera projection. I haven&#8217;t tested it yet, but I&#8217;ll try to generate a file compatible with his 3DS Max script directly from my toolkit, thus avoiding having to download the Synth again with a modified version of SynthExport. Josh has also created a very interesting tutorial on <strong>how to use masks with PMVS2</strong>:</p>
<p><iframe src="http://player.vimeo.com/video/18517975" width="560" height="420" frameborder="0"></iframe></p>
<p><a href="http://vimeo.com/18517975">Masks with the PhotoSynth Toolkit 4 &#8211; tutorial</a> from <a href="http://vimeo.com/user3453059">Josh Harle</a> on <a href="http://vimeo.com">Vimeo</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2011/01/26/ogre3d-photosynth-viewer/feed/</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
		<item>
		<title>2010 visual experiments</title>
		<link>http://www.visual-experiments.com/2011/01/07/2010-visual-experiments/</link>
		<comments>http://www.visual-experiments.com/2011/01/07/2010-visual-experiments/#comments</comments>
		<pubDate>Fri, 07 Jan 2011 16:01:17 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[augmented reality]]></category>
		<category><![CDATA[ogre3d]]></category>
		<category><![CDATA[photogrammetry]]></category>
		<category><![CDATA[photosynth]]></category>
		<category><![CDATA[2010]]></category>
		<category><![CDATA[artoolkit]]></category>
		<category><![CDATA[canvas]]></category>
		<category><![CDATA[cuda]]></category>
		<category><![CDATA[gpusurf]]></category>
		<category><![CDATA[opencl]]></category>
		<category><![CDATA[visual experiments]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=1254</guid>
		<description><![CDATA[Happy new year everyone! 2010 was a year full of visual experiments for me; I hope you like what you see on this blog. In this post I give a little overview of all the visual experiments I created this year. This is an opportunity to catch up on something you&#8217;ve missed! I&#8217;d also like [...]]]></description>
			<content:encoded><![CDATA[<h3>Happy new year everyone!</h3>
<p>2010 was a year full of <a href="http://www.visual-experiments.com/">visual experiments</a> for <a href="http://www.visual-experiments.com/about/resume-english/">me</a>; I hope you like what you see on this blog. In this post I give a little overview of all the visual experiments I created this year. This is an opportunity to catch up on something you&#8217;ve missed! I&#8217;d also like to thank some of the people who have helped me: </p>
<ul style="margin-left: 20px;">
<li><strong>Olafur Haraldsson:</strong> for creating <a href="http://www.pgrammetry.com/">the photogrammetry forum</a></li>
<li><strong>Josh Harle:</strong> for his video tutorials and <a href="http://blog.neonascent.net/">his nice blog</a></li>
<li><strong>You:</strong> for reading this <img src='http://www.visual-experiments.com/blog/wp-includes/images/smilies/icon_wink.gif' alt=';)' class='wp-smiley' /> </li>
</ul>
<h3>Visual experiments created in 2010:</h3>
<p>During this year I added several integrations to <strong>Ogre3D</strong>: </p>
<ul style="margin-left: 20px;">
<li><a href="http://www.visual-experiments.com/demos/artoolkitplus-for-ogre3d/">ArToolKitPlus</a>: augmented reality marker-based system</li>
<li><a href="http://www.visual-experiments.com/demos/ogrecuda/">Cuda</a>: aimed at beginners (though advanced users may still grab some useful code)</li>
<li><a href="http://www.visual-experiments.com/demos/ogreopencl/">OpenCL</a>: aimed at beginners (though advanced users may still grab some useful code)</li>
<li><a href="http://www.visual-experiments.com/demos/ogrecanvas/">Html5 Canvas</a>: implementation based on <a href="http://code.google.com/p/skia/">skia</a> for graphics and <a href="http://code.google.com/p/v8/">V8</a> for JavaScript scripting</li>
<li><a href="http://www.visual-experiments.com/2010/11/20/kinect-experiment-with-ogre3d/">Kinect</a>: this is a very hacky solution, I&#8217;ll improve it later</li>
</ul>
<p>I also taught myself <strong>GPGPU programming</strong> while coding a partial <a href="http://www.visual-experiments.com/demos/gpusurf/">GPUSurf</a> implementation based on Nico Cornelis&#8217;s paper. This implementation is not complete, and I&#8217;m willing to rewrite it with a GPGPU framework based on OpenGL and Cg only (not Ogre3D). With such a framework, writing a Sift/Surf detector should be easier and more efficient.</p>
<p>I have created some visual experiments related to <strong>Augmented Reality</strong>:</p>
<ul style="margin-left: 20px;">
<li><a href="http://www.visual-experiments.com/2010/07/11/remote-augmented-reality-prototype/">Remote AR prototype</a></li>
<li><a href="http://www.visual-experiments.com/2010/12/13/augmented-reality-outdoor-tracking-becoming-reality/">Outdoor 3D tracking using point cloud generated by structure from motion software</a></li>
<li><a href="http://www.visual-experiments.com/2010/12/22/outdoor-tracking-using-panoramic-image/">Outdoor 2D tracking using panorama image</a></li>
</ul>
<p>My outdoor 3D tracking algorithm for augmented reality needs an accurate point cloud: this is why I&#8217;m interested in <strong>structure from motion</strong>, and I&#8217;ve created two SfM toolkits:</p>
<ul style="margin-left: 20px;">
<li><a href="http://www.visual-experiments.com/sfmtoolkit/">SFMToolkit</a> (SiftGPU -> Bundler -> CMVS -> PMVS2)</li>
<li><a href="http://www.visual-experiments.com/photosynthtoolkit/">PhotoSynthToolkit</a> (PhotoSynth -> PMVS2)</li>
</ul>
<h3>Posts published in 2010:</h3>
<ul style="margin-left: 20px;">
<li>2010/12/22: <a href="http://www.visual-experiments.com/2010/12/22/outdoor-tracking-using-panoramic-image/">Outdoor tracking using panoramic image</a></li>
<li>2010/12/20: <a href="http://www.visual-experiments.com/2010/12/20/structure-from-motion-projects/">Structure from motion projects</a></li>
<li>2010/12/13: <a href="http://www.visual-experiments.com/2010/12/13/augmented-reality-outdoor-tracking-becoming-reality/">Augmented Reality outdoor tracking becoming reality</a></li>
<li>2010/11/20: <a href="http://www.visual-experiments.com/2010/11/20/kinect-experiment-with-ogre3d/">Kinect experiment with Ogre3D</a></li>
<li>2010/11/19: <a href="http://www.visual-experiments.com/2010/11/19/photosynthtoolkit-results/">PhotoSynthToolkit results</a></li>
<li>2010/11/09: <a href="http://www.visual-experiments.com/2010/11/09/photosynth-toolkit-updated/">PhotoSynth Toolkit updated</a></li>
<li>2010/11/05: <a href="http://www.visual-experiments.com/2010/11/05/structure-from-motion-toolkit-released/">Structure From Motion Toolkit released</a></li>
<li>2010/09/27: <a href="http://www.visual-experiments.com/2010/09/27/my-5-years-old-quiksee-competitor/">My 5 years old Quiksee competitor</a></li>
<li>2010/09/23: <a href="http://www.visual-experiments.com/2010/09/23/pmvs2-x64-and-videos-tutorials/">PMVS2 x64 and videos tutorials</a></li>
<li>2010/09/08: <a href="http://www.visual-experiments.com/2010/09/08/introducing-opensynther/">Introducing OpenSynther</a></li>
<li>2010/08/22: <a href="http://www.visual-experiments.com/2010/08/22/dense-point-cloud-created-with-photosyth-and-pmvs2/">Dense point cloud created with PhotoSynth and PMVS2</a></li>
<li>2010/08/19: <a href="http://www.visual-experiments.com/2010/08/19/my-photosynth-toolkit/">My PhotoSynth ToolKit</a></li>
<li>2010/07/12: <a href="http://www.visual-experiments.com/2010/07/12/pose-estimation-using-sfm-point-cloud/">Pose Estimation using SfM point cloud</a></li>
<li>2010/07/11: <a href="http://www.visual-experiments.com/2010/07/11/remote-augmented-reality-prototype/">Remote Augmented Reality Prototype</a></li>
<li>2010/07/08: <a href="http://www.visual-experiments.com/2010/07/08/structure-from-motion-experiment/">Structure From Motion Experiment</a></li>
<li>2010/06/25: <a href="http://www.visual-experiments.com/2010/06/25/gpu-surf-video-demo/">GPU-Surf video demo</a></li>
<li>2010/06/23: <a href="http://www.visual-experiments.com/2010/06/23/gpusurf-and-ogregpgpu/">GPUSurf and Ogre::GPGPU</a></li>
<li>2010/06/20: <a href="http://www.visual-experiments.com/2010/06/20/ogrecanvas-a-2d-api-for-ogre3d/">Ogre::Canvas, a 2D API for Ogre3D</a></li>
<li>2010/05/09: <a href="http://www.visual-experiments.com/2010/05/09/ogreopencl-and-ogrecanvas/">Ogre::OpenCL and Ogre::Canvas</a></li>
<li>2010/04/26: <a href="http://www.visual-experiments.com/2010/04/26/cuda-integration-with-ogre3d/">Cuda integration with Ogre3D</a></li>
<li>2010/04/09: <a href="http://www.visual-experiments.com/2010/04/09/multitouch-prototype-done-using-awesomium-and-ogre3d/">Multitouch prototype done using Awesomium and Ogre3D</a></li>
<li>2010/03/05: <a href="http://www.visual-experiments.com/2010/03/05/artoolkitplus-integration-with-ogre3d/">ArToolKitPlus integration with Ogre3D</a></li>
<li>2010/02/20: <a href="http://www.visual-experiments.com/2010/02/20/hello-world/">Hello World !</a></li>
</ul>
]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2011/01/07/2010-visual-experiments/feed/</wfw:commentRss>
		<slash:comments>6</slash:comments>
		</item>
		<item>
		<title>Outdoor tracking using panoramic image</title>
		<link>http://www.visual-experiments.com/2010/12/22/outdoor-tracking-using-panoramic-image/</link>
		<comments>http://www.visual-experiments.com/2010/12/22/outdoor-tracking-using-panoramic-image/#comments</comments>
		<pubDate>Wed, 22 Dec 2010 13:10:29 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[augmented reality]]></category>
		<category><![CDATA[ogre3d]]></category>
		<category><![CDATA[opencv]]></category>
		<category><![CDATA[sift]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=1167</guid>
		<description><![CDATA[I have made this experiment in 2 days: First of all, I must admit that this is more a &#8220;proof-of-concept&#8221; than a prototype&#8230; But the goal was to illustrate a concept needed for my job. I love this kind of challenge! Building something like this in 2 days was only possible thanks to great [...]]]></description>
			<content:encoded><![CDATA[<p>I have made this experiment in 2 days:</p>
<p><object width="560" height="340"><param name="movie" value="http://www.youtube.com/v/ZmbP022QXpk?fs=1&amp;hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/ZmbP022QXpk?fs=1&amp;hl=en_US" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="560" height="340"></embed></object></p>
<p>First of all, I must admit that this is more a &#8220;proof-of-concept&#8221; than a prototype&#8230; But the goal was to illustrate a concept needed for my job. I love this kind of challenge! Building something like this in 2 days was only possible thanks to great open-source libraries:</p>
<ul style="margin-left: 20px;">
<li><a href="http://www.ogre3d.org/">Ogre3D</a> (MIT)</li>
<li><a href="http://opencv.willowgarage.com/wiki/">OpenCV</a> (BSD)</li>
<li><a href="http://www.cs.unc.edu/~ccwu/siftgpu/">SiftGPU</a> (non-profit license)</li>
</ul>
<h3>Analysis</h3>
<p>I&#8217;m using a panoramic image as reference. For each frame of the video I extract Sift features using SiftGPU and match them with those of the reference image. Then I compute the homography between the two images using a RANSAC homography estimator (OpenCV&#8217;s cvFindHomography).</p>
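<p>For intuition, the per-point operation behind both cvFindHomography (which estimates H from the RANSAC inlier matches) and cvWarpPerspective (which applies H to every pixel) is a 3x3 matrix product in homogeneous coordinates followed by a perspective divide. A minimal self-contained sketch of that operation, for illustration only (not the code used in this experiment):</p>

```cpp
#include <array>

// Apply a 3x3 homography H (row-major) to a 2D point: lift the point to
// homogeneous coordinates (x, y, 1), multiply by H, then divide by the
// resulting w. This is the per-pixel mapping that cvWarpPerspective performs.
struct Point2 { double x, y; };

Point2 applyHomography(const std::array<double, 9>& H, Point2 p)
{
    double xp = H[0] * p.x + H[1] * p.y + H[2];
    double yp = H[3] * p.x + H[4] * p.y + H[5];
    double w  = H[6] * p.x + H[7] * p.y + H[8];
    return Point2{ xp / w, yp / w };  // perspective divide
}
```

<p>With the bottom row equal to (0, 0, 1) the homography degenerates into an affine transform; the projective terms H[6] and H[7] are what let it model the perspective change between the video frame and the panorama.</p>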
<h3>Performance</h3>
<p>Performance is low due to the complexity of the Sift detection and matching, and because the homography is applied on the CPU using cvWarpPerspective.</p>
<style type="text/css">
table.result {
color: black;
border: 1px solid black;
}
table.result td {
text-align: left;
padding: 1px;
}
</style>
<table class="result">
<tr>
<td>Sift extraction:</td>
<td>28ms</td>
<td>1228 features</td>
</tr>
<tr>
<td>Sift matching:</td>
<td>17ms</td>
<td>using SiftGPU</td>
</tr>
<tr>
<td>Ransac Homography estimation:</td>
<td>2ms</td>
<td>89 inliers of 208 matches</td>
</tr>
<tr>
<td>Homography application:</td>
<td>36ms</td>
<td>done on the CPU with OpenCV</td>
</tr>
<tr>
<td colspan="3">Global: 12fps</td>
</tr>
</table>
<div style="height: 20px;">&nbsp;</div>
<p>I&#8217;m working on another version using <a href="http://svr-www.eng.cam.ac.uk/~er258/work/fast.html">Fast</a> (or <a href="http://www6.in.tum.de/Main/ResearchAgast">Agast</a>) as the feature detector and <a href="http://cvlab.epfl.ch/software/brief/index.php">Brief</a> as the descriptor. This should lead to a significant speed-up and may eventually run on a mobile device&#8230; Using GPU vertex and pixel shaders instead of the CPU to apply the homography should also give a nice speed-up.</p>
<p>I&#8217;m also aware that it is not correct to apply a homography to a cylindrical panoramic image (especially if you don&#8217;t undistort the input video frames too <img src='http://www.visual-experiments.com/blog/wp-includes/images/smilies/icon_wink.gif' alt=';)' class='wp-smiley' /> )</p>
]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2010/12/22/outdoor-tracking-using-panoramic-image/feed/</wfw:commentRss>
		<slash:comments>6</slash:comments>
		</item>
		<item>
		<title>Kinect experiment with Ogre3D</title>
		<link>http://www.visual-experiments.com/2010/11/20/kinect-experiment-with-ogre3d/</link>
		<comments>http://www.visual-experiments.com/2010/11/20/kinect-experiment-with-ogre3d/#comments</comments>
		<pubDate>Sat, 20 Nov 2010 18:38:42 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[ogre3d]]></category>
		<category><![CDATA[kinect]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=1057</guid>
		<description><![CDATA[I&#8217;ve just bought a Kinect and decided to do some experiments with it: This demo is a rip-off of the Kinect-v11 demo made by Zephod. In fact I&#8217;ve designed a new Ogre::Kinect library that provides the Kinect connection through Zephod&#8217;s library. Then I replaced the Zephod OpenGL demo with an Ogre3D demo using my library. The nice [...]]]></description>
			<content:encoded><![CDATA[<p>I&#8217;ve just bought a Kinect and decided to do some experiments with it:</p>
<p><object width="560" height="340"><param name="movie" value="http://www.youtube.com/v/Bna-IaEnDpU?fs=1&amp;hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/Bna-IaEnDpU?fs=1&amp;hl=en_US" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="560" height="340"></embed></object></p>
<p>This demo is a rip-off of the <a href="http://ajaxorg.posterous.com/kinect-driver-for-windows-prototype">Kinect-v11</a> demo made by Zephod. In fact I&#8217;ve designed a new Ogre::Kinect library that provides the Kinect connection through Zephod&#8217;s library. Then I replaced the Zephod OpenGL demo with an Ogre3D demo using my library. The nice part is that I&#8217;ve managed to move some of the depth-to-RGB conversion to the GPU (using a pixel shader).</p>
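<p>The CPU-side equivalent of that pixel shader is easy to sketch: normalize the Kinect&#8217;s raw 11-bit depth value and map it onto a colour ramp. This is a hedged illustration of the idea only; the ramp used by the actual Ogre::Kinect shader may differ.</p>

```cpp
#include <cstdint>

// Illustrative CPU version of a depth-to-RGB conversion: keep the 11
// significant bits of the Kinect's raw depth value, normalize to [0,1],
// and map onto a simple blue-to-red ramp (near = blue, far = red).
// Assumed ramp; not the shader actually used in the demo.
struct Rgb { std::uint8_t r, g, b; };

Rgb depthToRgb(std::uint16_t raw)
{
    double t = (raw & 0x7FF) / 2047.0;  // 11-bit depth scaled to [0,1]
    return Rgb{ static_cast<std::uint8_t>(255.0 * t),
                0,
                static_cast<std::uint8_t>(255.0 * (1.0 - t)) };
}
```

<p>Moving this loop to a pixel shader means only the raw depth texture has to be uploaded each frame, with the per-pixel mapping done in parallel on the GPU.</p>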
<h3>Links</h3>
<p>Binary demo: <a href="http://code.google.com/p/visual-experiments/downloads/list">OgreKinectDemo1.zip</a><br />
Source code: <a href="http://code.google.com/p/visual-experiments/source/checkout">svn on code.google</a><br />
Documentation: <a href="http://visual-experiments.com/documentations/OgreKinect/">doxygen</a><br />
License: MIT</p>
]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2010/11/20/kinect-experiment-with-ogre3d/feed/</wfw:commentRss>
		<slash:comments>16</slash:comments>
		</item>
		<item>
		<title>Structure From Motion Toolkit released</title>
		<link>http://www.visual-experiments.com/2010/11/05/structure-from-motion-toolkit-released/</link>
		<comments>http://www.visual-experiments.com/2010/11/05/structure-from-motion-toolkit-released/#comments</comments>
		<pubDate>Fri, 05 Nov 2010 15:23:55 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[ogre3d]]></category>
		<category><![CDATA[photogrammetry]]></category>
		<category><![CDATA[bundler]]></category>
		<category><![CDATA[bundlermatcher]]></category>
		<category><![CDATA[photosynth]]></category>
		<category><![CDATA[sfmtoolkit]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=898</guid>
		<description><![CDATA[Overview I have finally released my Structure-From-Motion Toolkit (SFMToolkit). So what can you do with it? Let&#8217;s say you have a nice place like the one just below: Place de la Bourse, Bordeaux, FRANCE (picture from Bing) &#160; Well, now you can take a lot of pictures of the place (around 50 in my [...]]]></description>
			<content:encoded><![CDATA[<h3>Overview</h3>
<p>I have finally released my Structure-From-Motion Toolkit (SFMToolkit). So what can you do with it? Let&#8217;s say you have a nice place like the one just below:</p>
<table>
<tbody style="background-color: white">
<tr>
<td><img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/10/place_de_la_bourse_bing.jpg" alt="" title="place_de_la_bourse_bing" width="580" height="266" class="alignnone size-full wp-image-923" /></td>
</tr>
<tr>
<td>Place de la Bourse, Bordeaux, FRANCE (picture from Bing)</td>
</tr>
</tbody>
</table>
<div style="height: 20px;">&nbsp;</div>
<p>Well, now you can take a lot of pictures of the place (around 50 in my case):<br />
<img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/10/place_de_la_bourse_pictures.jpg" alt="" title="place_de_la_bourse_pictures" width="565" height="220" class="alignnone size-full wp-image-943" /></p>
<div style="height: 5px;">&nbsp;</div>
<p>And then compute structure from motion and get a sparse point cloud using <a href="http://phototour.cs.washington.edu/bundler/">Bundler</a>:<br />
<img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/10/place_de_la_bourse_bundler.jpg" alt="" title="place_de_la_bourse_bundler" width="576" height="256" class="alignnone size-full wp-image-936" /></p>
<p>Finally you have a dense point cloud divided in cluster by <a href="http://grail.cs.washington.edu/software/cmvs/">CMVS</a> and computed by <a href="http://grail.cs.washington.edu/software/pmvs/">PMVS2</a>:<br />
<img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/10/animation-cmvs.gif" alt="" title="animation-cmvs" width="580" height="250" class="alignnone size-full wp-image-939" /></p>
<p> You can also take a look at the <a href="http://photosynth.net/">PhotoSynth</a> reconstruction of the place with <a href="http://photosynth.net/view.aspx?cid=e82eca65-60fe-498b-8916-80d1e3245640">53 pictures</a> and <a href="http://photosynth.net/view.aspx?cid=93c72ebb-5c54-4aff-ad12-3d0c5ade31fd">26 (without the fountain)</a>.</p>
<p>This is the SFMToolkit workflow:<br />
<img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/10/sfmtoolkit_toolchain.jpg" alt="" title="sfmtoolkit_toolchain" width="570" height="160" class="alignnone size-full wp-image-954" /></p>
<p>SFMToolkit is composed of several programs:</p>
<ul style="margin-left: 20px">
<li>BundlerFocalExtractor : extracts the CCD width from Exif data using an XML database.</li>
<li>BundlerMatcher : extracts and matches features using <a href="http://www.cs.unc.edu/~ccwu/siftgpu/">SiftGPU</a>.</li>
<li>Bundler : <a href="http://phototour.cs.washington.edu/bundler/">http://phototour.cs.washington.edu/bundler/</a> created by Noah Snavely.</li>
<li>CMVS : <a href="http://grail.cs.washington.edu/software/cmvs/">http://grail.cs.washington.edu/software/cmvs/</a> created by Yasutaka Furukawa.</li>
<li>PMVS2 : <a href="http://grail.cs.washington.edu/software/pmvs/">http://grail.cs.washington.edu/software/pmvs/</a> created by Yasutaka Furukawa.</li>
<li>BundlerViewer : Bundler and PMVS2 output viewer based on <a href="http://www.ogre3d.org/">Ogre3D</a> (OpenSource 3D rendering engine).</li>
</ul>
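<p>As background on what BundlerFocalExtractor computes: Bundler wants an initial focal length in pixels, while Exif stores it in millimetres, so it is rescaled by the sensor (CCD) width looked up in the database. A sketch of that standard conversion, with a hypothetical helper name (not the tool&#8217;s actual source):</p>

```cpp
// Convert an Exif focal length (mm) to a focal length in pixels: the lens
// focal length and the sensor width share the same physical scale, so the
// ratio focal/ccdWidth carried over to the image width gives pixels.
// Hypothetical helper illustrating the formula, not BundlerFocalExtractor.
double focalInPixels(double focalMm, double ccdWidthMm, int imageWidthPx)
{
    return focalMm / ccdWidthMm * static_cast<double>(imageWidthPx);
}
```

<p>For example, a 5mm lens on a 10mm-wide sensor shooting a 1000px-wide picture gives an initial focal estimate of 500px, which Bundler then refines during bundle adjustment.</p>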
<h3>Download</h3>
<p>As you can see, this &#8220;toolkit&#8221; is composed of several open-source components. This is why I have decided to open-source my part of the work too. You can download the source code from the <a href="http://github.com/dddExperiments/SFMToolkit">SFMToolkit github</a>. You can also download a pre-compiled x64 version of the toolkit with Windows scripting (WSH) for easier usage (but not cross-platform): <a style="font-weight: bold; font-size: 15px;" href="http://www.visual-experiments.com/blog/?sdmon=downloads/SFMToolkit1.zip">SFMToolkit1.zip</a>.</p>
<h3>Help</h3>
<p>If you need some help or just want to discuss photogrammetry, please join the <a href="http://pgrammetry.com/forum/" style="font-weight: bold; font-size: 15px;">photogrammetry forum</a> created by Olafur Haraldsson. You may also be interested in Josh Harle&#8217;s <a style="font-weight: bold; font-size: 15px;" href="http://www.visual-experiments.com/2010/09/23/pmvs2-x64-and-videos-tutorials/">video tutorials</a>; they are partially outdated due to the new SFMToolkit, but they are still very good for learning how to use <a href="http://meshlab.sourceforge.net/">MeshLab</a>.</p>
<p><a style="color:red; font-size: 15px; text-decoration: underline; font-weight: bold" href="http://www.visual-experiments.com/demos/sfmtoolkit/">Please go to the SFMToolkit page to get the latest version</a></p>
]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2010/11/05/structure-from-motion-toolkit-released/feed/</wfw:commentRss>
		<slash:comments>19</slash:comments>
		</item>
		<item>
		<title>Dense point cloud created with PhotoSynth and PMVS2</title>
		<link>http://www.visual-experiments.com/2010/08/22/dense-point-cloud-created-with-photosyth-and-pmvs2/</link>
		<comments>http://www.visual-experiments.com/2010/08/22/dense-point-cloud-created-with-photosyth-and-pmvs2/#comments</comments>
		<pubDate>Sun, 22 Aug 2010 22:07:55 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[photogrammetry]]></category>
		<category><![CDATA[bundler]]></category>
		<category><![CDATA[ogre3d]]></category>
		<category><![CDATA[photosynth]]></category>
		<category><![CDATA[photosynthtoolkit]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=637</guid>
		<description><![CDATA[In my previous post I introduced my PhotoSynth ToolKit. The source code is available on my google code under MIT license; you can download it right now: PhotoSynthToolKit2.zip. I have created a video to show you what I&#8217;ve managed to do with it: As you can see in this video I have managed [...]]]></description>
			<content:encoded><![CDATA[<p>In my <a href="http://www.visual-experiments.com/2010/08/19/my-photosynth-toolkit/">previous post</a> I introduced my PhotoSynth ToolKit. The source code is available on my <a href="http://code.google.com/p/visual-experiments/">google code</a> under MIT license; you can download it right now: <a href="http://code.google.com/p/visual-experiments/downloads/list">PhotoSynthToolKit2.zip</a>. I have created a video to show you what I&#8217;ve managed to do with it:</p>
<p><object width="560" height="340"><param name="movie" value="http://www.youtube.com/v/M278ItE8Dfw?fs=1&amp;hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/M278ItE8Dfw?fs=1&amp;hl=en_US" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="560" height="340"></embed></object></p>
<p>As you can see in this video I have managed to use <a href="http://grail.cs.washington.edu/software/pmvs/">PMVS2</a> with <a href="http://photosynth.net/">PhotoSynth</a> output.<br />
All the synths used in this video are available on <a href="http://photosynth.net/userprofilepage.aspx?user=dddExperiments">my PhotoSynth account</a> or directly:</p>
<ul>
<li><strong>Porte Cailhau:</strong> <a href="http://photosynth.net/view.aspx?cid=1e509490-5657-453d-a2f6-2e55d14ae512">http://photosynth.net/view.aspx?cid=1e509490-5657-453d-a2f6-2e55d14ae512</a></li>
<li><strong>Goutz:</strong> <a href="http://photosynth.net/view.aspx?cid=1471c7c7-da12-4859-9289-a2e6d2129319">http://photosynth.net/view.aspx?cid=1471c7c7-da12-4859-9289-a2e6d2129319</a></li>
<li><strong>Place de la Bourse:</strong> <a href="http://photosynth.net/view.aspx?cid=e82eca65-60fe-498b-8916-80d1e3245640">http://photosynth.net/view.aspx?cid=e82eca65-60fe-498b-8916-80d1e3245640</a></li>
</ul>
<h3>Workflow</h3>
<p>My PhotoSynth ToolKit is composed of 3 programs:</p>
<ul>
<li><strong>PhotoSynthDownloader:</strong> downloads 0.json + bin files + thumbs</li>
<li><strong>PhotoSynth2PMVS:</strong> undistorts the pictures and writes the CONTOUR files needed for PMVS2</li>
<li><strong>PhotoSynthTileDownloader [optional]:</strong> downloads all pictures of a synth in HD (not released yet for legal reasons, but you can watch a <a href="http://www.youtube.com/watch?v=xqeV3pI1TfU">preview video</a>)</li>
</ul>
<p><img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/08/PhotoSynth-ToolKit1.png" alt="" title="PhotoSynth-ToolKit" width="570" height="495" class="alignnone size-full wp-image-654" /></p>
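<p>The per-pixel idea behind PhotoSynth2PMVS&#8217;s undistortion step is worth sketching: PMVS2 assumes a pure pinhole camera, so each pixel of the undistorted output is mapped through a radial distortion model to find where to sample the original picture. A generic two-coefficient radial model is assumed below for illustration; PhotoSynth&#8217;s actual distortion model may differ.</p>

```cpp
// Forward radial distortion of a normalized pinhole point: scale the point
// by 1 + k1*r^2 + k2*r^4, where r is its distance from the optical centre.
// When resampling an undistorted image, each output pixel is pushed through
// this mapping to locate its source pixel in the distorted original.
// Generic two-coefficient model assumed, not PhotoSynth's exact model.
struct P2 { double x, y; };

P2 distort(P2 p, double k1, double k2)
{
    double r2 = p.x * p.x + p.y * p.y;
    double s = 1.0 + k1 * r2 + k2 * r2 * r2;
    return P2{ p.x * s, p.y * s };
}
```

<p>With k1 = k2 = 0 the mapping is the identity (no distortion); a positive k1 pushes points outward, which is why straight lines bow near the edges of uncorrected photos.</p>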
<h3>Limitations</h3>
<p><del datetime="2010-08-24T13:00:44+00:00">It seems that my first version doesn&#8217;t handle the JSON parsing of all kind of synth very well, I&#8217;ll try to post a new version asap.</del> <strong>fixed</strong> in <a href="http://code.google.com/p/visual-experiments/downloads/list">PhotoSynthToolKit2.zip</a></p>
<p>PMVS2 for Windows is a 32-bit application, so it has a 2GB memory limit (3GB if you start Windows with the <a href="http://technet.microsoft.com/en-us/library/bb124810(EXCHG.65).aspx">/3GB option</a>, perhaps combined with a custom compile flag). I haven&#8217;t tried the 64-bit Linux version yet, but I have managed to compile a 64-bit version of PMVS2 myself. My 64-bit build manages to use more than 4GB of memory for picture loading, but it crashes right after all the pictures are loaded. I didn&#8217;t investigate much; it may well be my fault, as compiling the dependencies (gsl, pthread, jpeg) wasn&#8217;t an easy task.</p>
<p>Anyway, PMVS2 should really be used together with <a href="http://grail.cs.washington.edu/software/cmvs/">CMVS</a>, but I&#8217;m not sure I can extract enough information from PhotoSynth for that. Bundler&#8217;s output is more verbose: you get the 2D/3D correspondences plus the number of matches per image. I think I can create a vis.dat file from information stored in the JSON file, but that would only speed up the process, so it wouldn&#8217;t help much with the 2GB limit.</p>
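<p>For reference, a vis.dat file is just a plain-text adjacency list, so writing one is trivial once the per-image adjacency is known. Here is a minimal sketch (to the best of my knowledge of the CMVS format; <code>writeVisData</code> and the adjacency input are hypothetical, not part of the toolkit):</p>

```cpp
#include <ostream>
#include <sstream>
#include <vector>
#include <cstddef>

// Write a CMVS-style vis.dat file: a "VISDATA" header, the image count,
// then one line per image: its index, the number of adjacent images,
// and the indices of those adjacent images.
void writeVisData(std::ostream& out,
                  const std::vector<std::vector<int> >& adjacency)
{
    out << "VISDATA\n";
    out << adjacency.size() << "\n";
    for (std::size_t i = 0; i < adjacency.size(); ++i)
    {
        out << i << " " << adjacency[i].size();
        for (std::size_t j = 0; j < adjacency[i].size(); ++j)
            out << " " << adjacency[i][j];
        out << "\n";
    }
}
```

<p>The adjacency itself would have to come from the match information stored in the synth&#8217;s JSON file.</p>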
<h3>Credits</h3>
<p>My PhotoSynth ToolKit is coded in C++ and the source code is available on my Google Code (MIT license). It uses:</p>
<ul style="margin-left: 20px;">
<li><a href="http://www.boost.org/">Boost.Asio</a>: network requests for SOAP and file downloads</li>
<li><a href="http://www.grinninglizard.com/tinyxml/">TinyXml</a>: parsing of the SOAP requests</li>
<li><a href="http://www.codeproject.com/KB/recipes/JSON_Spirit.aspx">JSON Spirit</a>: parsing of the PhotoSynth &#8220;0.json&#8221; file</li>
<li><a href="http://www.ijg.org/">jpeg</a>: reading/writing JPEG for the radial undistortion</li>
</ul>
<p>Furthermore, parts of the code are based on:</p>
<ul style="margin-left: 20px;">
<li><a href="http://phototour.cs.washington.edu/bundler/">Bundler</a>: RadialUndistort + Bundler2PMVS</li>
<li><a href="http://synthexport.codeplex.com/">SynthExport</a>: C# binary loader</li>
</ul>
<p><a style="color:red; font-size: 15px; text-decoration: underline; font-weight: bold" href="http://www.visual-experiments.com/demos/photosynthtoolkit/">Please go to the PhotoSynthToolkit page to get the latest version</a></p>
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2010%2F08%2F22%2Fdense-point-cloud-created-with-photosyth-and-pmvs2%2F&amp;title=Dense%20point%20cloud%20created%20with%20PhotoSynth%20and%20PMVS2"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2010/08/22/dense-point-cloud-created-with-photosyth-and-pmvs2/feed/</wfw:commentRss>
		<slash:comments>26</slash:comments>
		</item>
		<item>
		<title>Remote Augmented Reality Prototype</title>
		<link>http://www.visual-experiments.com/2010/07/11/remote-augmented-reality-prototype/</link>
		<comments>http://www.visual-experiments.com/2010/07/11/remote-augmented-reality-prototype/#comments</comments>
		<pubDate>Sun, 11 Jul 2010 17:30:03 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[ogre3d]]></category>
		<category><![CDATA[artoolkit]]></category>
		<category><![CDATA[augmented reality]]></category>
		<category><![CDATA[boost]]></category>
		<category><![CDATA[gpu]]></category>
		<category><![CDATA[sift]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=514</guid>
		<description><![CDATA[I have created a new augmented reality prototype (5 days experiments). It is using a client/server approach based on Boost.Asio. The first assumption of this prototype is that you&#8217;ve got a mobile client not so powerful and a powerful server with a decent GPU. So the idea is simple: the client uploads a video frame [...]]]></description>
			<content:encoded><![CDATA[<p>I have created a new augmented reality prototype (a 5-day experiment). It uses a client/server approach based on <a href="http://think-async.com/">Boost.Asio</a>. The first assumption of this prototype is that you&#8217;ve got a mobile client with limited power and a powerful server with a decent GPU.<br />
<img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/07/remoteArToolKit.png" alt="" title="remoteArToolKit" width="467" height="205" class="alignnone size-full wp-image-528" /></p>
<table>
<tbody style="background-color: white; color: #4D4D4D; text-align: left; vertical-align: top;">
<tr>
<td>So the idea is simple: the client uploads a video frame, the server does the pose estimation and sends the augmented rendering back to the client. My first prototype uses ArToolKitPlus in almost real-time (15fps), but I&#8217;m also working on a markerless version that would be less interactive (< 1fps). The mobile client was a UMPC (Samsung Q1).</td>
<td><img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/07/samsung.q1.jpg" alt="" title="samsung.q1" width="150" height="135" class="alignnone size-full wp-image-583" /></td>
</tr>
</tbody>
</table>
<p>Thanks to Boost.Asio I&#8217;ve been able to build a robust client/server very quickly. I then created two implementations of PoseEstimator:</p>
<pre class="brush: cpp; title: ;">
class PoseEstimator
{
	public:
		virtual ~PoseEstimator() {}
		virtual bool computePose(const Ogre::PixelBox&amp; videoFrame) = 0;
		virtual Ogre::Vector3 getPosition() const = 0;
		virtual Ogre::Quaternion getOrientation() const = 0;
};
</pre>
<ul style="margin-left: 20px">
<li>ArToolKitPoseEstimator <em>(using <a href="http://studierstube.icg.tu-graz.ac.at/handheld_ar/artoolkitplus.php">ArToolKitPlus</a> to get pose estimation)</em></li>
<li>SfMPoseEstimator <em>(using <a href="http://cvlab.epfl.ch/software/EPnP/">EPnP</a> and a point cloud generated with <a href="http://phototour.cs.washington.edu/bundler/">Bundler</a>  -Structure from Motion tool- to get pose estimation)</em></li>
</ul>
<h3>ArToolKitPoseEstimator</h3>
<p>There is nothing fancy about this pose estimator; I implemented it as a proof of concept and to check my server&#8217;s performance. In fact, ArToolKit pose estimation is not expensive and can run in real-time on a mobile device.</p>
<h3>SfMPoseEstimator</h3>
<p>I&#8217;ll just introduce the concept of this pose estimator in this post. The idea is simple: in augmented reality you generally know the object you are looking at, because you want to augment it. So you create a point cloud of the object you want to augment (using Structure from Motion) and keep the link between the 3D points and their 2D descriptors. When you take a shot of the scene, you can then compare the 2D descriptors of your shot with those of the point cloud and build 2D/3D correspondences. The pose can finally be estimated by solving the Perspective-n-Point camera calibration problem (using <a href="http://cvlab.epfl.ch/software/EPnP/index.php">EPnP</a> for example).</p>
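<p>The 2D/3D correspondence step can be sketched with a brute-force descriptor matcher using Lowe&#8217;s ratio test (a simplified CPU illustration; <code>CloudPoint</code> and <code>matchDescriptor</code> are hypothetical names, and my actual matcher uses the GPU):</p>

```cpp
#include <cstddef>
#include <vector>

// A 3D point of the cloud together with the descriptor of the feature
// it was triangulated from (128 floats for Sift, 64 for Surf).
struct CloudPoint
{
    float x, y, z;
    std::vector<float> descriptor;
};

// Squared Euclidean distance between two descriptors of equal length.
float squaredDistance(const std::vector<float>& a, const std::vector<float>& b)
{
    float d = 0.0f;
    for (std::size_t i = 0; i < a.size(); ++i)
    {
        float diff = a[i] - b[i];
        d += diff * diff;
    }
    return d;
}

// Return the index of the cloud point whose descriptor is closest to
// 'query', or -1 if the best match fails Lowe's ratio test (the best
// match must be clearly better than the second best).
int matchDescriptor(const std::vector<float>& query,
                    const std::vector<CloudPoint>& cloud,
                    float ratio = 0.8f)
{
    int best = -1;
    float bestDist = 1e30f, secondDist = 1e30f;
    for (std::size_t i = 0; i < cloud.size(); ++i)
    {
        float d = squaredDistance(query, cloud[i].descriptor);
        if (d < bestDist) { secondDist = bestDist; bestDist = d; best = (int)i; }
        else if (d < secondDist) { secondDist = d; }
    }
    if (best >= 0 && bestDist < ratio * ratio * secondDist)
        return best;
    return -1;
}
```

<p>Each accepted match gives one 2D point of the shot paired with one 3D point of the cloud, which is exactly the input EPnP needs.</p>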
<h3>Performance</h3>
<p>The server is very basic; it doesn&#8217;t handle client queuing yet (1 client = 1 thread), but it already does the off-screen rendering and sends the texture back in raw RGB.</p>
<p>The version using ArToolKit only runs at 15fps because I had trouble with the JPEG compression, so I turned it off; this version is therefore purely bandwidth-limited. I didn&#8217;t investigate this issue much because I know the SfMPoseEstimator will be limited by the matching step anyway. Furthermore, I&#8217;m not sure it&#8217;s a good idea to send a highly compressed image to the server (compression artifacts can add extra features).</p>
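<p>A quick back-of-the-envelope calculation shows why raw RGB is bandwidth-limited (the 640x480 resolution below is a hypothetical example; the post doesn&#8217;t state the actual frame size):</p>

```cpp
// Bandwidth needed to stream uncompressed RGB frames, in MB/s.
// 640x480 at 15 fps already comes to roughly 13 MB/s, which is a lot
// to push over a wireless link in both directions.
double rawRgbBandwidthMBps(int width, int height, int fps)
{
    const double bytesPerPixel = 3.0; // R, G, B
    return width * height * bytesPerPixel * fps / (1024.0 * 1024.0);
}
```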
<p>My SfMPoseEstimator is also working, but it&#8217;s very expensive (~1s using the GPU) and not always accurate, due to some flaws of my original implementation. I&#8217;ll explain how it works in a following post.</p>
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2010%2F07%2F11%2Fremote-augmented-reality-prototype%2F&amp;title=Remote%20Augmented%20Reality%20Prototype"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2010/07/11/remote-augmented-reality-prototype/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Structure From Motion Experiment</title>
		<link>http://www.visual-experiments.com/2010/07/08/structure-from-motion-experiment/</link>
		<comments>http://www.visual-experiments.com/2010/07/08/structure-from-motion-experiment/#comments</comments>
		<pubDate>Thu, 08 Jul 2010 22:05:25 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[ogre3d]]></category>
		<category><![CDATA[photogrammetry]]></category>
		<category><![CDATA[bundler]]></category>
		<category><![CDATA[structure from motion]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=535</guid>
		<description><![CDATA[I have taken a new set of picture of the &#8220;Porte Cailhau&#8221; in Bordeaux. And I have used one of my tools (BundlerMatcher) to compute image matching using SiftGPU. BundlerMatcher generates a file compatible with Bundler match file. So using BundlerMatcher you can skip the long pre-processing step of feature extraction and image matching and [...]]]></description>
			<content:encoded><![CDATA[<p>I have taken a new set of pictures of the &#8220;<a href="http://maps.google.com/maps?hl=en&#038;q=porte+cailhau&#038;ie=UTF8&#038;hq=porte+cailhau&#038;hnear=&#038;t=h&#038;z=16">Porte Cailhau</a>&#8221; in Bordeaux, and I have used one of my tools (BundlerMatcher) to compute the image matching using <a href="http://www.cs.unc.edu/~ccwu/siftgpu/">SiftGPU</a>. BundlerMatcher generates a file compatible with <a href="http://phototour.cs.washington.edu/bundler/">Bundler</a>&#8217;s match file format. So with BundlerMatcher you can skip the long pre-processing step of feature extraction and image matching and enjoy GPU acceleration!</p>
<p>I have used the &#8220;bundle.out&#8221; file produced by Bundler to get the camera information:</p>
<ul style="margin-left: 20px">
<li>intrinsic parameters: focal length, distortion</li>
<li>extrinsic parameters: position, orientation</li>
</ul>
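<p>Parsing these parameters out of &#8220;bundle.out&#8221; is straightforward; here is a minimal sketch assuming the v0.3 layout (per camera: focal length and two radial distortion coefficients, then a 3x3 rotation matrix and a translation vector):</p>

```cpp
#include <istream>
#include <sstream>
#include <string>
#include <vector>

// Camera parameters as stored in Bundler's bundle.out (v0.3 format).
struct Camera
{
    double focal, k1, k2; // intrinsics: focal length + radial distortion
    double R[3][3];       // extrinsics: rotation (world-to-camera)
    double t[3];          // extrinsics: translation
};

std::vector<Camera> readBundleCameras(std::istream& in)
{
    std::string header;
    std::getline(in, header); // "# Bundle file v0.3"
    int numCameras = 0, numPoints = 0;
    in >> numCameras >> numPoints;
    std::vector<Camera> cameras(numCameras);
    for (int i = 0; i < numCameras; ++i)
    {
        Camera& c = cameras[i];
        in >> c.focal >> c.k1 >> c.k2;
        for (int r = 0; r < 3; ++r)
            for (int col = 0; col < 3; ++col)
                in >> c.R[r][col];
        in >> c.t[0] >> c.t[1] >> c.t[2];
    }
    return cameras;
}
```

<p>Since Bundler stores the world-to-camera transform, the camera position needed for the viewpoint is -R<sup>T</sup>t.</p>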
<p>With this information you can view the point cloud from the viewpoint of any camera registered by Bundler. I&#8217;ve added this feature to my current Ogre3D PlyReader, along with a background plane showing the picture taken from that viewpoint. This demo is not available for download right now, but you can still watch the video:</p>
<p><object width="425" height="344"><param name="movie" value="http://www.youtube.com/v/wTaZCa06NHQ&amp;hl=fr_FR&amp;fs=1"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/wTaZCa06NHQ&amp;hl=fr_FR&amp;fs=1" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object></p>
<p>The Ogre3D PlyReader and BundlerMatcher will eventually be added to my SVN. I&#8217;m currently busy working on another demo, so stay tuned!</p>
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2010%2F07%2F08%2Fstructure-from-motion-experiment%2F&amp;title=Structure%20From%20Motion%20Experiment"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2010/07/08/structure-from-motion-experiment/feed/</wfw:commentRss>
		<slash:comments>6</slash:comments>
		</item>
		<item>
		<title>GPU-Surf video demo</title>
		<link>http://www.visual-experiments.com/2010/06/25/gpu-surf-video-demo/</link>
		<comments>http://www.visual-experiments.com/2010/06/25/gpu-surf-video-demo/#comments</comments>
		<pubDate>Fri, 25 Jun 2010 12:29:20 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[ogre3d]]></category>
		<category><![CDATA[gpusurf]]></category>
		<category><![CDATA[structure from motion]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=454</guid>
		<description><![CDATA[In the previous post I&#8217;ve been announcing GPU-Surf first release. Now I&#8217;m glad to show you a live video demo of GPU-Surf and another demo using Bundler (structure from motion tools): There are three demos in this video: GPU-Surf live demo. PlyReader displaying Notre-Dame dataset. PlyReader displaying my own dataset (Place de la Bourse, Bordeaux). [...]]]></description>
			<content:encoded><![CDATA[<p>In the <a href="http://www.visual-experiments.com/2010/06/23/gpusurf-and-ogregpgpu/">previous post</a> I&#8217;ve been announcing <a href="http://www.visual-experiments.com/demos/gpusurf/">GPU-Surf</a> first release. Now I&#8217;m glad to show you a live video demo of GPU-Surf and another demo using <a href="http://phototour.cs.washington.edu/bundler/">Bundler</a> (structure from motion tools):</p>
<p><object width="425" height="344"><param name="movie" value="http://www.youtube.com/v/lKQZaqG8yJc&#038;hl=fr&#038;fs=1"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/lKQZaqG8yJc&#038;hl=fr&#038;fs=1" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object></p>
<p>There are three demos in this video:</p>
<ol>
<li>GPU-Surf live demo.</li>
<li>PlyReader displaying Notre-Dame dataset.</li>
<li>PlyReader displaying my own dataset (<a href="http://maps.google.fr/maps?cid=16664510742528689060&#038;q=place+de+la+bourse&#038;ved=0CEcQ2wU&#038;ei=WX0kTPCyLd-5jAe4reXTCA&#038;ie=UTF8&#038;hq=place+de+la+bourse&#038;hnear=&#038;ll=44.841576,-0.569524&#038;spn=0.003534,0.006089&#038;t=h&#038;z=18">Place de la Bourse, Bordeaux</a>).</li>
</ol>
<h3>GPU-Surf</h3>
<p>You&#8217;ll get more information on the <a href="http://www.visual-experiments.com/demos/gpusurf/">dedicated demo section</a>.<br />
In this video GPU-Surf was running slowly because of Ogre::Canvas, but it should run much faster.</p>
<h3>PlyReader displaying Notre-Dame dataset</h3>
<p>I&#8217;m also interested in <a href="http://en.wikipedia.org/wiki/Structure_from_motion">structure from motion</a> algorithms; that&#8217;s why I have tested <a href="http://phototour.cs.washington.edu/bundler/">Bundler</a>, which comes with a <a href="http://phototour.cs.washington.edu/datasets/">good dataset of Notre-Dame de Paris</a>.</p>
<p>I have created a very simple PlyReader using Ogre3D. The first version used billboards to display the point cloud, but it was slow (30fps with 130k points); now I&#8217;m using a custom vertex buffer and it runs at 800fps with the same 130k points.</p>
<p>The reconstruction was done by the team that created Bundler, from 715 pictures of Notre-Dame de Paris (thanks to Flickr). In fact they have done the biggest part of the job in this demo; I just grabbed their output to check whether my PlyReader could read such a big file.</p>
<h3>PlyReader displaying my own dataset</h3>
<p>If you have already used Bundler, you know that structure from motion algorithms need a very slow pre-processing step to get &#8220;matches&#8221; between the pictures of the dataset. Bundler is packaged to use <a href="http://www.cs.ubc.ca/~lowe/keypoints/">Lowe&#8217;s Sift binary</a>, which is very slow because it takes pgm files as input and writes its output to a text file. Then a matching step is executed using KeyMatchFull.exe, which is optimized with libANN but still very slow.</p>
<p>I have replaced the feature extraction and matching steps with my own tool: BundlerMatcher. It uses <a href="http://www.cs.unc.edu/~ccwu/siftgpu/">SiftGPU</a>, which gives a very nice speed-up. As my current implementation of GPU-Surf isn&#8217;t complete, I can&#8217;t use it instead of SiftGPU yet, but that is my intention.</p>
<table>
<tbody style="background-color: white">
<tr>
<td colspan="2"><img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/06/place-de-la-bourse.jpg" alt="" title="place-de-la-bourse" width="590" height="285" class="alignnone size-full wp-image-457" /></td>
</tr>
<tr>
<td>23 pictures taken with a classic camera <br />(Canon Powershot A700)</td>
<td>Point cloud generated using <a href="http://phototour.cs.washington.edu/bundler/">Bundler</a></td>
</tr>
</tbody>
</table>
<div style="height: 20px"></div>
<p>I have created this dataset with my camera and matched the pictures using my own tool: BundlerMatcher. This tool creates the same .key files as Lowe&#8217;s Sift tool, plus a matches.txt file that is used by Bundler. I have tried to get rid of the temporary .key files and keep everything in memory, but changing the Bundler code to handle this structure was harder than I predicted&#8230; I&#8217;m now more interested in the insight3d implementation (<a href="http://insight3d.sourceforge.net/">presentation</a>, <a href="http://sourceforge.net/projects/insight3d/">source</a>), which seems easier to hack on.</p>
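<p>For the curious, the .key format that BundlerMatcher reproduces is plain text: a header with the number of keys and the descriptor length, then for each key its row, column, scale and orientation followed by 128 integer descriptor values. A minimal writer could look like this (a sketch under that assumption; <code>SiftKey</code> and <code>writeKeyFile</code> are hypothetical names):</p>

```cpp
#include <ostream>
#include <sstream>
#include <vector>
#include <cstddef>

// One Sift feature: position (row, col), scale, orientation and
// its 128-value descriptor.
struct SiftKey
{
    float row, col, scale, orientation;
    std::vector<int> descriptor; // 128 values in [0, 255]
};

// Write keys in Lowe's .key format: "<numKeys> 128" header, then for
// each key one line "row col scale orientation" followed by the 128
// descriptor values (Lowe's tool wraps them over several lines, but
// readers only care about whitespace separation).
void writeKeyFile(std::ostream& out, const std::vector<SiftKey>& keys)
{
    out << keys.size() << " 128\n";
    for (std::size_t i = 0; i < keys.size(); ++i)
    {
        const SiftKey& k = keys[i];
        out << k.row << " " << k.col << " "
            << k.scale << " " << k.orientation << "\n";
        for (std::size_t j = 0; j < k.descriptor.size(); ++j)
            out << k.descriptor[j] << ((j + 1) % 20 == 0 ? "\n" : " ");
        out << "\n";
    }
}
```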
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2010%2F06%2F25%2Fgpu-surf-video-demo%2F&amp;title=GPU-Surf%20video%20demo"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2010/06/25/gpu-surf-video-demo/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>GPUSurf and Ogre::GPGPU</title>
		<link>http://www.visual-experiments.com/2010/06/23/gpusurf-and-ogregpgpu/</link>
		<comments>http://www.visual-experiments.com/2010/06/23/gpusurf-and-ogregpgpu/#comments</comments>
		<pubDate>Wed, 23 Jun 2010 21:09:18 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[ogre3d]]></category>
		<category><![CDATA[gpgpu]]></category>
		<category><![CDATA[gpusurf]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=415</guid>
		<description><![CDATA[In this post I&#8217;d like to introduce my GPU-Surf implementation and a new library for Ogre3D Ogre::GPGPU. What is GPUSurf ? It is a GPU accelerated version of Surf algorithm based on a paper of Nico Cornelis1. This version is using GPGPU technique (pixel shader) and Cuda for computing. The Cuda part was done using [...]]]></description>
			<content:encoded><![CDATA[<p>In this post I&#8217;d like to introduce my <a href="http://www.visual-experiments.com/demos/gpusurf/">GPU-Surf implementation</a> and a new library for Ogre3D <a href="http://www.visual-experiments.com/demos/ogregpgpu/">Ogre::GPGPU</a>.</p>
<h3>What is GPUSurf ?</h3>
<p>It is a GPU-accelerated version of the <a href="http://en.wikipedia.org/wiki/SURF">Surf</a> algorithm based on a paper by Nico Cornelis<sup><a href="#nico_cornelis">1</a></sup>.<br />
This version uses GPGPU techniques (pixel shaders) and Cuda for the computation. The Cuda part was done using my <a href="http://www.visual-experiments.com/demos/ogrecuda/">Ogre::Cuda</a> library, and the GPGPU part using a new library called <a href="http://www.visual-experiments.com/demos/ogregpgpu/">Ogre::GPGPU</a>. This new library is just a little helper that hides the fact that GPGPU computing is done through quad rendering.</p>
<table>
<tbody style="background-color: white">
<tr>
<td><img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/06/gpusurf.screenshot.jpg" alt="" title="gpusurf.screenshot" width="560" height="240" class="alignnone size-full wp-image-365" /></td>
</tr>
<tr>
<td>Screenshot of my GPU-Surf implementation <em>(3 octaves displayed)</em></td>
</tr>
</tbody>
</table>
<div style="height: 20px"></div>
<p>GPU-Surf could be used to help panoramic image creation, to build tracking algorithms, to speed up structure from motion&#8230; I&#8217;m currently using <a href="http://www.cs.unc.edu/~ccwu/siftgpu/">SiftGPU</a> to speed up the image matching step of structure from motion tools (<a href="http://phototour.cs.washington.edu/bundler/">bundler</a>), but SiftGPU v360 has a memory issue (it eats a lot of virtual memory; your program has to be 64-bit to bypass the usual 32-bit application limitation of 2GB of virtual memory under Windows), and Sift matching is more expensive than Surf matching (a descriptor of 128 floats vs 64 for Surf). That&#8217;s why I have decided to create my own implementation of GPU-Surf.</p>
<table>
<tbody style="background-color: white">
<tr>
<td><iframe frameborder="0" src="http://photosynth.net/embed.aspx?cid=d3b19f2d-59fe-4ce5-8be0-91d0be2f629f&#038;delayLoad=true&#038;slideShowPlaying=false" width="500" height="300"></iframe></td>
</tr>
<tr>
<td>Structure from motion using PhotoSynth</td>
</tr>
</tbody>
</table>
<h3>Implementation details</h3>
<p>The current version of my GPU-Surf implementation is incomplete (the descriptor is missing; only the detector is available). You&#8217;ll find all the information about the license (MIT), the svn repository, the demo (<a href="http://code.google.com/p/visual-experiments/downloads/list">GPUSurfDemo1.zip</a>) and the documentation on the <a href="http://www.visual-experiments.com/demos/gpusurf/">GPUSurf page</a>.</p>
<p><a name="nico_cornelis">[1]</a>: N. Cornelis, L. Van Gool: <em>Fast Scale Invariant Feature Detection and Matching on Programmable Graphics Hardware</em> (<a href="http://homes.esat.kuleuven.be/~ncorneli/gpusurf/ncorneli_cvpr2008.pdf">ncorneli_cvpr2008.pdf</a>).</p>
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2010%2F06%2F23%2Fgpusurf-and-ogregpgpu%2F&amp;title=GPUSurf%20and%20Ogre%3A%3AGPGPU"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2010/06/23/gpusurf-and-ogregpgpu/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
	</channel>
</rss>
