<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>
<channel>
	<title>Visual-Experiments.com &#187; ogre3d</title>
	<atom:link href="http://www.visual-experiments.com/category/ogre3d/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.visual-experiments.com</link>
	<description>ASTRE Henri experiments with Ogre3D and web stuff</description>
	<lastBuildDate>Mon, 16 Jan 2017 18:59:35 +0000</lastBuildDate>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.1.2</generator>
		<item>
		<title>Ogre3D PhotoSynth Viewer</title>
		<link>http://www.visual-experiments.com/2011/01/26/ogre3d-photosynth-viewer/</link>
		<comments>http://www.visual-experiments.com/2011/01/26/ogre3d-photosynth-viewer/#comments</comments>
		<pubDate>Wed, 26 Jan 2011 21:33:06 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[ogre3d]]></category>
		<category><![CDATA[photogrammetry]]></category>
		<category><![CDATA[photosynth]]></category>
		<category><![CDATA[photosynthtoolkit]]></category>
		<category><![CDATA[toolkit]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=1301</guid>
		<description><![CDATA[This viewer is now integrated with the new version of my PhotoSynthToolkit (v5). This toolkit allows you to download synth point clouds and thumbnail pictures. You can also densify the sparse point cloud generated by PhotoSynth using PMVS2 and then create a highly accurate mesh using MeshLab. New features of PhotoSynthToolkit v5: Thumbnail downloading should be [...]]]></description>
			<content:encoded><![CDATA[<p><img src="http://www.visual-experiments.com/blog/wp-content/uploads/2011/01/PhotoSynthViewer.jpg" alt="" title="PhotoSynthViewer" width="591" height="332" class="alignnone size-full wp-image-1304" /></p>
<p>This viewer is now integrated with the new version of my <a href="http://www.visual-experiments.com/demos/photosynthtoolkit/">PhotoSynthToolkit</a> (v5). This toolkit allows you to download synth point clouds and thumbnail pictures. You can also densify the sparse point cloud generated by <a href="http://photosynth.net/">PhotoSynth</a> using <a href="http://grail.cs.washington.edu/software/pmvs/">PMVS2</a> and then create a <a href="http://www.visual-experiments.com/2010/11/19/photosynthtoolkit-results/">highly accurate mesh</a> using <a href="http://meshlab.sourceforge.net/">MeshLab</a>.</p>
<h3>New features of PhotoSynthToolkit v5:</h3>
<ul style="margin-left: 20px;">
<li>Thumbnail downloading should be faster (8x)</li>
<li>New C++ HD picture downloader (downloads tiles and re-composes them)</li>
<li>Tool to generate &#8220;vis.dat&#8221; from a previous PMVS2 run (by analysing the .patch file)</li>
<li>Working Ogre3D PhotoSynth viewer:
<ul>
<li>Can read dense point clouds created with my PhotoSynthToolkit using PMVS2</li>
<li>Click on a picture to change camera viewpoint</li>
<li>No-roll camera system</li>
</ul>
</li>
</ul>
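<p>As a sketch of the &#8220;download tiles and re-compose them&#8221; step above (illustrative Python, not the actual C++ downloader), each downloaded tile is simply pasted at its offset in a full-resolution pixel grid:</p>

```python
# Illustrative sketch (not the actual C++ downloader) of tile recomposition:
# tiles are pasted left-to-right, top-to-bottom into one big pixel grid.

def compose_tiles(tiles, tiles_per_row, tile_w, tile_h):
    """tiles: list of row-major pixel lists, ordered left-to-right, top-to-bottom."""
    rows = len(tiles) // tiles_per_row
    image = [[None] * (tiles_per_row * tile_w) for _ in range(rows * tile_h)]
    for idx, tile in enumerate(tiles):
        ox = (idx % tiles_per_row) * tile_w   # tile origin in the final image
        oy = (idx // tiles_per_row) * tile_h
        for y in range(tile_h):
            for x in range(tile_w):
                image[oy + y][ox + x] = tile[y * tile_w + x]
    return image

# Four 1x1 "tiles" composed into a 2x2 image:
print(compose_tiles(["a", "b", "c", "d"], 2, 1, 1))  # [['a', 'b'], ['c', 'd']]
```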
<p><strong>Warning</strong>: the PhotoSynth viewer may need a very powerful GPU (depending on the synth complexity: point cloud size and number of thumbnails). I&#8217;ve tested a scene with 820 pictures and 900k vertices on an Nvidia 8800 GTX with 768MB and it was running at 25fps (75fps with a 470 GTX and 1280MB). I wish I could have used <strong>Microsoft Seadragon</strong> <img src='http://www.visual-experiments.com/blog/wp-includes/images/smilies/icon_smile.gif' alt=':-)' class='wp-smiley' /> .</p>
<h3>Download:</h3>
<p>The PhotoSynthToolkit v5 is available on <a href="http://www.visual-experiments.com/demos/photosynthtoolkit/">its dedicated page</a>; please do not link directly to the zip file but to <a href="http://www.visual-experiments.com/demos/photosynthtoolkit/">this page</a> instead, so that people willing to download the toolkit will always get the latest version.</p>
<h3>Video demo:</h3>
<p><iframe title="YouTube video player" class="youtube-player" type="text/html" width="560" height="345" src="http://www.youtube.com/embed/fM2Y0sUBErE" frameborder="0" allowFullScreen></iframe></p>
<h3>Future version</h3>
<p><a href="http://blog.neonascent.net/">Josh Harle</a> has created <a href="http://blog.neonascent.net/archives/cameraexport-photosynth-to-camera-projection-in-3ds-max/">CameraExport</a>: a solution for 3DS Max that enables rendering the pictures of the Synth using camera projection. I haven&#8217;t tested it yet, but I&#8217;ll try to generate a file compatible with his 3DS Max script directly from my toolkit, thus avoiding downloading the Synth again with a modified version of SynthExport. Josh has also created a very interesting tutorial on <strong>how to use masks with PMVS2</strong>:</p>
<p><iframe src="http://player.vimeo.com/video/18517975" width="560" height="420" frameborder="0"></iframe></p>
<p><a href="http://vimeo.com/18517975">Masks with the PhotoSynth Toolkit 4 &#8211; tutorial</a> from <a href="http://vimeo.com/user3453059">Josh Harle</a> on <a href="http://vimeo.com">Vimeo</a>.</p>
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2011%2F01%2F26%2Fogre3d-photosynth-viewer%2F&amp;title=Ogre3D%20PhotoSynth%20Viewer"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2011/01/26/ogre3d-photosynth-viewer/feed/</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
		<item>
		<title>2010 visual experiments</title>
		<link>http://www.visual-experiments.com/2011/01/07/2010-visual-experiments/</link>
		<comments>http://www.visual-experiments.com/2011/01/07/2010-visual-experiments/#comments</comments>
		<pubDate>Fri, 07 Jan 2011 16:01:17 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[augmented reality]]></category>
		<category><![CDATA[ogre3d]]></category>
		<category><![CDATA[photogrammetry]]></category>
		<category><![CDATA[photosynth]]></category>
		<category><![CDATA[2010]]></category>
		<category><![CDATA[artoolkit]]></category>
		<category><![CDATA[canvas]]></category>
		<category><![CDATA[cuda]]></category>
		<category><![CDATA[gpusurf]]></category>
		<category><![CDATA[opencl]]></category>
		<category><![CDATA[visual experiments]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=1254</guid>
		<description><![CDATA[Happy new year everyone! 2010 was a year full of visual experiments for me; I hope that you like what you see on this blog. In this post I&#8217;m giving a short overview of all the visual experiments I created this year. This is an opportunity to catch up on anything you&#8217;ve missed! I&#8217;d also like [...]]]></description>
			<content:encoded><![CDATA[<h3>Happy new year everyone!</h3>
<p>2010 was a year full of <a href="http://www.visual-experiments.com/">visual experiments</a> for <a href="http://www.visual-experiments.com/about/resume-english/">me</a>; I hope that you like what you see on this blog. In this post I&#8217;m giving a short overview of all the visual experiments I created this year. This is an opportunity to catch up on anything you&#8217;ve missed! I&#8217;d also like to thank some people who have been helping me: </p>
<ul style="margin-left: 20px;">
<li><strong>Olafur Haraldsson:</strong> for creating <a href="http://www.pgrammetry.com/">the photogrammetry forum</a></li>
<li><strong>Josh Harle:</strong> for his video tutorials and <a href="http://blog.neonascent.net/">his nice blog</a></li>
<li><strong>You:</strong> for reading this <img src='http://www.visual-experiments.com/blog/wp-includes/images/smilies/icon_wink.gif' alt=';)' class='wp-smiley' /> </li>
</ul>
<h3>Visual experiments created in 2010:</h3>
<p>During this year I have added some features to <strong>Ogre3D</strong>: </p>
<ul style="margin-left: 20px;">
<li><a href="http://www.visual-experiments.com/demos/artoolkitplus-for-ogre3d/">ArToolKitPlus</a>: augmented reality marker-based system</li>
<li><a href="http://www.visual-experiments.com/demos/ogrecuda/">Cuda</a>: for beginners only (though advanced users may still grab some useful code)</li>
<li><a href="http://www.visual-experiments.com/demos/ogreopencl/">OpenCL</a>: for beginners only (though advanced users may still grab some useful code)</li>
<li><a href="http://www.visual-experiments.com/demos/ogrecanvas/">Html5 Canvas</a>: implementation based on <a href="http://code.google.com/p/skia/">skia</a> for graphics and <a href="http://code.google.com/p/v8/">V8</a> for JavaScript scripting</li>
<li><a href="http://www.visual-experiments.com/2010/11/20/kinect-experiment-with-ogre3d/">Kinect</a>: this is a very hacky solution that I&#8217;ll improve later</li>
</ul>
<p>I also taught myself <strong>GPGPU programming</strong> while coding a partial <a href="http://www.visual-experiments.com/demos/gpusurf/">GPUSurf</a> implementation based on Nico Cornelis&#8217; paper. This implementation is not complete, and I&#8217;m willing to rewrite it with a GPGPU framework based on OpenGL and Cg only (not Ogre3D). With such a framework, writing a Sift/Surf detector should be easier and more efficient.</p>
<p>I have created some visual experiments related to <strong>Augmented Reality</strong>:</p>
<ul style="margin-left: 20px;">
<li><a href="http://www.visual-experiments.com/2010/07/11/remote-augmented-reality-prototype/">Remote AR prototype</a></li>
<li><a href="http://www.visual-experiments.com/2010/12/13/augmented-reality-outdoor-tracking-becoming-reality/">Outdoor 3D tracking using point cloud generated by structure from motion software</a></li>
<li><a href="http://www.visual-experiments.com/2010/12/22/outdoor-tracking-using-panoramic-image/">Outdoor 2D tracking using panorama image</a></li>
</ul>
<p>My outdoor 3D tracking algorithm for augmented reality needs an accurate point cloud: this is why I&#8217;m interested in <strong>structure from motion</strong>, and I&#8217;ve created two SfM toolkits:</p>
<ul style="margin-left: 20px;">
<li><a href="http://www.visual-experiments.com/sfmtoolkit/">SFMToolkit</a> (SiftGPU -> Bundler -> CMVS -> PMVS2)</li>
<li><a href="http://www.visual-experiments.com/photosynthtoolkit/">PhotoSynthToolkit</a> (PhotoSynth -> PMVS2)</li>
</ul>
<h3>Posts published in 2010:</h3>
<ul style="margin-left: 20px;">
<li>2010/12/22: <a href="http://www.visual-experiments.com/2010/12/22/outdoor-tracking-using-panoramic-image/">Outdoor tracking using panoramic image</a></li>
<li>2010/12/20: <a href="http://www.visual-experiments.com/2010/12/20/structure-from-motion-projects/">Structure from motion projects</a></li>
<li>2010/12/13: <a href="http://www.visual-experiments.com/2010/12/13/augmented-reality-outdoor-tracking-becoming-reality/">Augmented Reality outdoor tracking becoming reality</a></li>
<li>2010/11/20: <a href="http://www.visual-experiments.com/2010/11/20/kinect-experiment-with-ogre3d/">Kinect experiment with Ogre3D</a></li>
<li>2010/11/19: <a href="http://www.visual-experiments.com/2010/11/19/photosynthtoolkit-results/">PhotoSynthToolkit results</a></li>
<li>2010/11/09: <a href="http://www.visual-experiments.com/2010/11/09/photosynth-toolkit-updated/">PhotoSynth Toolkit updated</a></li>
<li>2010/11/05: <a href="http://www.visual-experiments.com/2010/11/05/structure-from-motion-toolkit-released/">Structure From Motion Toolkit released</a></li>
<li>2010/09/27: <a href="http://www.visual-experiments.com/2010/09/27/my-5-years-old-quiksee-competitor/">My 5 years old Quiksee competitor</a></li>
<li>2010/09/23: <a href="http://www.visual-experiments.com/2010/09/23/pmvs2-x64-and-videos-tutorials/">PMVS2 x64 and videos tutorials</a></li>
<li>2010/09/08: <a href="http://www.visual-experiments.com/2010/09/08/introducing-opensynther/">Introducing OpenSynther</a></li>
<li>2010/08/22: <a href="http://www.visual-experiments.com/2010/08/22/dense-point-cloud-created-with-photosyth-and-pmvs2/">Dense point cloud created with PhotoSynth and PMVS2</a></li>
<li>2010/08/19: <a href="http://www.visual-experiments.com/2010/08/19/my-photosynth-toolkit/">My PhotoSynth ToolKit</a></li>
<li>2010/07/12: <a href="http://www.visual-experiments.com/2010/07/12/pose-estimation-using-sfm-point-cloud/">Pose Estimation using SfM point cloud</a></li>
<li>2010/07/11: <a href="http://www.visual-experiments.com/2010/07/11/remote-augmented-reality-prototype/">Remote Augmented Reality Prototype</a></li>
<li>2010/07/08: <a href="http://www.visual-experiments.com/2010/07/08/structure-from-motion-experiment/">Structure From Motion Experiment</a></li>
<li>2010/06/25: <a href="http://www.visual-experiments.com/2010/06/25/gpu-surf-video-demo/">GPU-Surf video demo</a></li>
<li>2010/06/23: <a href="http://www.visual-experiments.com/2010/06/23/gpusurf-and-ogregpgpu/">GPUSurf and Ogre::GPGPU</a></li>
<li>2010/06/20: <a href="http://www.visual-experiments.com/2010/06/20/ogrecanvas-a-2d-api-for-ogre3d/">Ogre::Canvas, a 2D API for Ogre3D</a></li>
<li>2010/05/09: <a href="http://www.visual-experiments.com/2010/05/09/ogreopencl-and-ogrecanvas/">Ogre::OpenCL and Ogre::Canvas</a></li>
<li>2010/04/26: <a href="http://www.visual-experiments.com/2010/04/26/cuda-integration-with-ogre3d/">Cuda integration with Ogre3D</a></li>
<li>2010/04/09: <a href="http://www.visual-experiments.com/2010/04/09/multitouch-prototype-done-using-awesomium-and-ogre3d/">Multitouch prototype done using Awesomium and Ogre3D</a></li>
<li>2010/03/05: <a href="http://www.visual-experiments.com/2010/03/05/artoolkitplus-integration-with-ogre3d/">ArToolKitPlus integration with Ogre3D</a></li>
<li>2010/02/20: <a href="http://www.visual-experiments.com/2010/02/20/hello-world/">Hello World !</a></li>
</ul>
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2011%2F01%2F07%2F2010-visual-experiments%2F&amp;title=2010%20visual%20experiments"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2011/01/07/2010-visual-experiments/feed/</wfw:commentRss>
		<slash:comments>6</slash:comments>
		</item>
		<item>
		<title>Outdoor tracking using panoramic image</title>
		<link>http://www.visual-experiments.com/2010/12/22/outdoor-tracking-using-panoramic-image/</link>
		<comments>http://www.visual-experiments.com/2010/12/22/outdoor-tracking-using-panoramic-image/#comments</comments>
		<pubDate>Wed, 22 Dec 2010 13:10:29 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[augmented reality]]></category>
		<category><![CDATA[ogre3d]]></category>
		<category><![CDATA[opencv]]></category>
		<category><![CDATA[sift]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=1167</guid>
		<description><![CDATA[I have made this experiment in 2 days: First of all, I must admit that this is more of a &#8220;proof-of-concept&#8221; than a prototype&#8230; But the goal was to illustrate a concept needed for my job. I love this kind of challenge! Building something like this in 2 days was only possible thanks to great [...]]]></description>
			<content:encoded><![CDATA[<p>I have made this experiment in 2 days:</p>
<p><object width="560" height="340"><param name="movie" value="http://www.youtube.com/v/ZmbP022QXpk?fs=1&amp;hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/ZmbP022QXpk?fs=1&amp;hl=en_US" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="560" height="340"></embed></object></p>
<p>First of all, I must admit that this is more of a &#8220;proof-of-concept&#8221; than a prototype&#8230; But the goal was to illustrate a concept needed for my job. I love this kind of challenge! Building something like this in 2 days was only possible thanks to great open-source libraries:</p>
<ul style="margin-left: 20px;">
<li><a href="http://www.ogre3d.org/">Ogre3D</a> (MIT)</li>
<li><a href="http://opencv.willowgarage.com/wiki/">OpenCV</a> (BSD)</li>
<li><a href="http://www.cs.unc.edu/~ccwu/siftgpu/">SiftGPU</a> (non-profit license)</li>
</ul>
<h3>Analysis</h3>
<p>I&#8217;m using a panoramic image as reference. For each frame of the video I&#8217;m extracting Sift features using SiftGPU and matching them with those of the reference image. Then I&#8217;m computing the homography between the 2 images using a Ransac homography estimator (OpenCV&#8217;s cvFindHomography).</p>
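<p>As an illustration of that last step (a hedged Python sketch, not the actual C++ code), applying a 3x3 homography to a point is just a matrix multiply followed by a perspective divide, which is what cvWarpPerspective effectively does for every pixel:</p>

```python
# Minimal sketch: mapping 2D points through a 3x3 homography H
# (homogeneous coordinates plus a perspective divide).

def apply_homography(H, points):
    """Map (x, y) points through the 3x3 homography H (row-major nested lists)."""
    out = []
    for x, y in points:
        xh = H[0][0] * x + H[0][1] * y + H[0][2]
        yh = H[1][0] * x + H[1][1] * y + H[1][2]
        w  = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append((xh / w, yh / w))  # perspective divide
    return out

# A pure translation by (5, -2) expressed as a homography:
H = [[1, 0, 5],
     [0, 1, -2],
     [0, 0, 1]]
print(apply_homography(H, [(0, 0), (10, 10)]))  # [(5.0, -2.0), (15.0, 8.0)]
```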
<h3>Performance</h3>
<p>Performance is low due to the complexity of Sift detection and matching, and because I&#8217;m applying the homography on the CPU using cvWarpPerspective.</p>
<style type="text/css">
table.result {
color: black;
border: 1px solid black;
}
table.result td {
text-align: left;
padding: 1px;
}
</style>
<table class="result">
<tr>
<td>Sift extraction:</td>
<td>28ms</td>
<td>1228 features</td>
</tr>
<tr>
<td>Sift matching:</td>
<td>17ms</td>
<td>using SiftGPU</td>
</tr>
<tr>
<td>Ransac Homography estimation:</td>
<td>2ms</td>
<td>89 inliers of 208 matches</td>
</tr>
<tr>
<td>Homography application:</td>
<td>36ms</td>
<td>done on the CPU with OpenCV</td>
</tr>
<tr>
<td colspan="3">Global: 12fps</td>
</tr>
</table>
<div style="height: 20px;">&nbsp;</div>
<p>I&#8217;m working on another version using <a href="http://svr-www.eng.cam.ac.uk/~er258/work/fast.html">Fast</a> (or <a href="http://www6.in.tum.de/Main/ResearchAgast">Agast</a>) as feature detector and <a href="http://cvlab.epfl.ch/software/brief/index.php">Brief</a> as descriptor. This should lead to a significant speed-up and may eventually run on a mobile device&#8230; Using GPU vertex and pixel shaders instead of the CPU to apply the homography should also give a nice speed-up.</p>
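<p>The reason Brief should be so much faster: its descriptors are binary, so matching reduces to a Hamming distance (a popcount of a XOR) instead of the Euclidean distance used for Sift. A minimal illustrative sketch in Python (not the planned implementation):</p>

```python
# Hedged sketch: comparing binary (Brief-like) descriptors with the
# Hamming distance, here packed as Python ints for simplicity.

def hamming(a, b):
    """Hamming distance between two binary descriptors packed as ints."""
    return bin(a ^ b).count("1")

def match_brute_force(query, train):
    """For each query descriptor, return the index of the closest train descriptor."""
    return [min(range(len(train)), key=lambda i: hamming(q, train[i])) for q in query]

train = [0b11110000, 0b00001111]
print(match_brute_force([0b11110001, 0b00000111], train))  # [0, 1]
```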
<p>I&#8217;m also aware that it is not correct to apply a homography to a cylindrical panoramic image (especially if you don&#8217;t undistort the input video frames either <img src='http://www.visual-experiments.com/blog/wp-includes/images/smilies/icon_wink.gif' alt=';)' class='wp-smiley' /> )</p>
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2010%2F12%2F22%2Foutdoor-tracking-using-panoramic-image%2F&amp;title=Outdoor%20tracking%20using%20panoramic%20image"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2010/12/22/outdoor-tracking-using-panoramic-image/feed/</wfw:commentRss>
		<slash:comments>6</slash:comments>
		</item>
		<item>
		<title>Augmented Reality outdoor tracking becoming reality</title>
		<link>http://www.visual-experiments.com/2010/12/13/augmented-reality-outdoor-tracking-becoming-reality/</link>
		<comments>http://www.visual-experiments.com/2010/12/13/augmented-reality-outdoor-tracking-becoming-reality/#comments</comments>
		<pubDate>Mon, 13 Dec 2010 10:08:11 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[ogre3d]]></category>
		<category><![CDATA[photogrammetry]]></category>
		<category><![CDATA[photosynth]]></category>
		<category><![CDATA[augmented reality]]></category>
		<category><![CDATA[bundler]]></category>
		<category><![CDATA[sift]]></category>
		<category><![CDATA[tracking]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=909</guid>
		<description><![CDATA[My interest in structure from motion was primarily motivated by the capability of creating a point cloud that can be used as a tracking reference. The video below is more of a proof-of-concept than a prototype, but this is an overview of my outdoor tracking algorithm for Augmented Reality: Analysis In a pre-processing step [...]]]></description>
			<content:encoded><![CDATA[<p>My interest in structure from motion was primarily motivated by the capability of creating a point cloud that can be used as a tracking reference. The video below is more of a proof-of-concept than a prototype, but this is an overview of my <strong>outdoor tracking algorithm for Augmented Reality</strong>:</p>
<p><object width="560" height="340"><param name="movie" value="http://www.youtube.com/v/DdVz4xQJPC0?fs=1&amp;hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/DdVz4xQJPC0?fs=1&amp;hl=en_US" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="560" height="340"></embed></object></p>
<h3>Analysis</h3>
<p>In a pre-processing step I&#8217;ve built a sparse point cloud of the place using my <a href="http://www.visual-experiments.com/2010/11/05/structure-from-motion-toolkit-released/">SFMToolkit</a>. Each vertex of the point cloud has several 2D Sift feature correspondences. I&#8217;ve kept only one Sift descriptor per vertex (the mean of its descriptors) and put all descriptors in an index using <a href="http://www.cs.ubc.ca/~mariusm/index.php/FLANN/FLANN">Flann</a>.</p>
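<p>A minimal sketch of the &#8220;one descriptor per vertex&#8221; step (illustrative Python; the real code works on 128-dimensional Sift descriptors, shortened here for readability):</p>

```python
# Hedged sketch: collapsing the Sift descriptors of all 2D features that
# observe a vertex into a single descriptor by component-wise averaging.

def mean_descriptor(descriptors):
    """Component-wise mean of several equal-length descriptors."""
    n = len(descriptors)
    return [sum(d[i] for d in descriptors) / n for i in range(len(descriptors[0]))]

observations = [[0.25, 0.5, 0.0], [0.75, 0.5, 1.0]]  # two views of the same vertex
print(mean_descriptor(observations))  # [0.5, 0.5, 0.5]
```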
<p>For each frame of the video to be augmented, I&#8217;ve extracted Sift features with <a href="http://www.cs.unc.edu/~ccwu/siftgpu/">SiftGPU</a> and then matched them using a Flann 2-nearest-neighbor search with a distance-ratio threshold. The <a href="http://www.cs.ubc.ca/~mariusm/index.php/FLANN/FLANN">Flann</a> matching is done in parallel with <a href="http://threadpool.sourceforge.net/">boost::threadpool</a>. The computed matches contain a lot of outliers, so I have implemented a <a href="http://en.wikipedia.org/wiki/RANSAC">Ransac</a> pose estimator using <a href="http://cvlab.epfl.ch/software/EPnP/">EPnP</a> that filters out bad 2D/3D correspondences.</p>
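<p>The 2-nearest-neighbor distance-ratio test can be sketched like this (a brute-force Python stand-in for Flann; names and the 0.8 ratio are illustrative, not from my actual code):</p>

```python
# Hedged sketch of the distance-ratio test: keep a match only when the best
# candidate is clearly better than the second best, which rejects ambiguous
# descriptors before Ransac sees them.

def ratio_test_matches(query, train, ratio=0.8):
    """Return (query_index, train_index) pairs that pass the ratio test."""
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((sum((a - b) ** 2 for a, b in zip(q, t)), ti)
                       for ti, t in enumerate(train))
        (d1, ti), (d2, _) = dists[0], dists[1]
        if d1 < ratio * ratio * d2:  # squared distances, so square the ratio
            matches.append((qi, ti))
    return matches

train = [(0.0, 0.0), (10.0, 10.0)]
# The second query point is equidistant from both train points, so it is rejected:
print(ratio_test_matches([(0.1, 0.0), (5.0, 5.0)], train))  # [(0, 0)]
```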
<h3>Performance</h3>
<p>My implementation is slow (mostly due to my Ransac EPnP implementation, which could be improved).</p>
<style type="text/css">
table.result {
color: black;
border: 1px solid black;
}
table.result td {
text-align: left;
padding: 1px;
}
</style>
<table class="result">
<tr>
<td colspan="3">Sift first octave: -1</td>
</tr>
<tr>
<td>Sift extraction:</td>
<td>49ms</td>
<td>2917 features</td>
</tr>
<tr>
<td>Sift matching:</td>
<td>57ms</td>
<td>(parallel matching using Flann)</td>
</tr>
<tr>
<td>Ransac EPnP:</td>
<td>110ms</td>
<td>121 inliers of 208 matches</td>
</tr>
<tr>
<td colspan="3">Global: 4.6fps (9.4fps without pose estimation)</td>
</tr>
</table>
<p></p>
<table class="result">
<tr>
<td colspan="3">Sift first octave: 0</td>
</tr>
<tr>
<td>Sift extraction:</td>
<td>32ms</td>
<td>707 features</td>
</tr>
<tr>
<td>Sift matching:</td>
<td>15ms</td>
<td>(parallel matching using Flann)</td>
</tr>
<tr>
<td>Ransac EPnP:</td>
<td>144ms</td>
<td>62 inliers of 93 matches</td>
</tr>
<tr>
<td colspan="3">Global: 5.2fps (21.2fps without pose estimation)</td>
</tr>
</table>
<p>The slowness is not such a big issue because the tracker doesn&#8217;t need to run at 30fps. Indeed, the goal of my prototype is to get an absolute pose from this tracking system every second and a relative pose from the inertial sensors available on mobile devices (or from KLT tracking).</p>
<h3>Issues</h3>
<ul style="margin-left: 20px;">
<li>Performance (faster is better <img src='http://www.visual-experiments.com/blog/wp-includes/images/smilies/icon_wink.gif' alt=';-)' class='wp-smiley' /> )</li>
<li>Point cloud reference is not always accurate (<a href="http://phototour.cs.washington.edu/bundler/">Bundler</a> fault)</li>
</ul>
<p>In another post I&#8217;ll introduce alternatives to Bundler that are faster and more accurate.</p>
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2010%2F12%2F13%2Faugmented-reality-outdoor-tracking-becoming-reality%2F&amp;title=Augmented%20Reality%20outdoor%20tracking%20becoming%20reality"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2010/12/13/augmented-reality-outdoor-tracking-becoming-reality/feed/</wfw:commentRss>
		<slash:comments>12</slash:comments>
		</item>
		<item>
		<title>Kinect experiment with Ogre3D</title>
		<link>http://www.visual-experiments.com/2010/11/20/kinect-experiment-with-ogre3d/</link>
		<comments>http://www.visual-experiments.com/2010/11/20/kinect-experiment-with-ogre3d/#comments</comments>
		<pubDate>Sat, 20 Nov 2010 18:38:42 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[ogre3d]]></category>
		<category><![CDATA[kinect]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=1057</guid>
		<description><![CDATA[I&#8217;ve just bought a Kinect and decided to do some experiments with it: This demo is a rip-off of the Kinect-v11 demo made by Zephod. In fact I&#8217;ve designed a new Ogre::Kinect library that provides the Kinect connection through Zephod&#8217;s library. Then I replaced Zephod&#8217;s OpenGL demo with an Ogre3D demo using my library. The nice [...]]]></description>
			<content:encoded><![CDATA[<p>I&#8217;ve just bought a Kinect and decided to do some experiments with it:</p>
<p><object width="560" height="340"><param name="movie" value="http://www.youtube.com/v/Bna-IaEnDpU?fs=1&amp;hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/Bna-IaEnDpU?fs=1&amp;hl=en_US" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="560" height="340"></embed></object></p>
<p>This demo is a rip-off of the <a href="http://ajaxorg.posterous.com/kinect-driver-for-windows-prototype">Kinect-v11</a> demo made by Zephod. In fact, I&#8217;ve designed a new Ogre::Kinect library that provides the Kinect connection through Zephod&#8217;s library. Then I replaced Zephod&#8217;s OpenGL demo with an Ogre3D demo using my library. The nice part is that I&#8217;ve managed to move some of the depth-to-RGB conversion to the GPU (using a pixel shader).</p>
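<p>A hedged sketch of what such a depth-to-RGB conversion can look like (the actual pixel shader and its color ramp are not shown in this post; this illustrative Python version simply maps the raw 11-bit Kinect depth to a grey ramp):</p>

```python
# Hypothetical per-pixel mapping, mirroring what a pixel shader would do:
# normalise the raw 11-bit depth and turn it into an (r, g, b) byte triple.

def depth_to_rgb(depth, max_depth=2047):
    """Map a raw 11-bit depth value to an (r, g, b) triple (near = bright)."""
    v = 255 - (depth * 255) // max_depth  # invert so closer objects look brighter
    return (v, v, v)

print(depth_to_rgb(0))     # (255, 255, 255): nearest
print(depth_to_rgb(2047))  # (0, 0, 0): farthest
```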
<h3>Links</h3>
<p>Binary demo: <a href="http://code.google.com/p/visual-experiments/downloads/list">OgreKinectDemo1.zip</a><br />
Source code: <a href="http://code.google.com/p/visual-experiments/source/checkout">svn on code.google</a><br />
Documentation: <a href="http://visual-experiments.com/documentations/OgreKinect/">doxygen</a><br />
License: MIT</p>
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2010%2F11%2F20%2Fkinect-experiment-with-ogre3d%2F&amp;title=Kinect%20experiment%20with%20Ogre3D"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2010/11/20/kinect-experiment-with-ogre3d/feed/</wfw:commentRss>
		<slash:comments>16</slash:comments>
		</item>
		<item>
		<title>PhotoSynthToolkit results</title>
		<link>http://www.visual-experiments.com/2010/11/19/photosynthtoolkit-results/</link>
		<comments>http://www.visual-experiments.com/2010/11/19/photosynthtoolkit-results/#comments</comments>
		<pubDate>Fri, 19 Nov 2010 11:02:04 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[ogre3d]]></category>
		<category><![CDATA[photogrammetry]]></category>
		<category><![CDATA[photosynth]]></category>
		<category><![CDATA[photosynthtoolkit]]></category>
		<category><![CDATA[pmvs]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=1047</guid>
		<description><![CDATA[This is just a small post to show you what kind of results you can get with my PhotoSynthToolkit: The download location and source code are introduced in my previous post. &#160;]]></description>
			<content:encoded><![CDATA[<p>This is just a small post to show you what kind of results you can get with my <a href="http://www.visual-experiments.com/2010/11/09/photosynth-toolkit-updated/">PhotoSynthToolkit</a>:</p>
<p><object width="560" height="340"><param name="movie" value="http://www.youtube.com/v/qaWE2GixSVs?fs=1&amp;hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/qaWE2GixSVs?fs=1&amp;hl=en_US" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="560" height="340"></embed></object></p>
<p>The download location and source code are introduced in my <a href="http://www.visual-experiments.com/2010/11/09/photosynth-toolkit-updated/">previous post</a>.</p>
<div style="height: 30px;">&nbsp;</div>
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2010%2F11%2F19%2Fphotosynthtoolkit-results%2F&amp;title=PhotoSynthToolkit%20results"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2010/11/19/photosynthtoolkit-results/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
		<item>
		<title>PhotoSynth Toolkit updated</title>
		<link>http://www.visual-experiments.com/2010/11/09/photosynth-toolkit-updated/</link>
		<comments>http://www.visual-experiments.com/2010/11/09/photosynth-toolkit-updated/#comments</comments>
		<pubDate>Tue, 09 Nov 2010 11:00:49 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[ogre3d]]></category>
		<category><![CDATA[photogrammetry]]></category>
		<category><![CDATA[photosynth]]></category>
		<category><![CDATA[photosynthtoolkit]]></category>
		<category><![CDATA[pmvs]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=992</guid>
		<description><![CDATA[Overview I have updated my PhotoSynth toolkit for easier usage (the same way as SFMToolkit). This is an example of dense mesh creation from 12 pictures using this toolkit: The 12 pictures were shot with a Canon PowerShot A700: Thanks to this toolkit, PMVS2 and MeshLab you can create a dense mesh from these 12 [...]]]></description>
			<content:encoded><![CDATA[<h3>Overview</h3>
<p>I have updated my PhotoSynth toolkit for easier usage (the same way as <a href="http://www.visual-experiments.com/2010/11/05/structure-from-motion-toolkit-released/">SFMToolkit</a>). This is an example of dense mesh creation from 12 pictures using this toolkit:<br />
<img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/11/mascaron_hd.jpg" alt="" title="mascaron_hd" width="580" height="375" class="alignnone size-full wp-image-999" /></p>
<p>The 12 pictures were shot with a <a href="http://www.dpreview.com/reviews/specs/Canon/canon_a700.asp">Canon PowerShot A700</a>:<br />
<img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/11/mascaron.jpg" alt="" title="mascaron" width="564" height="146" class="alignnone size-full wp-image-997" /><br />
Thanks to this toolkit, <a href="http://grail.cs.washington.edu/software/pmvs/">PMVS2</a> and <a href="http://meshlab.sourceforge.net/">MeshLab</a> you can create a dense mesh from these 12 pictures:<br />
<img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/11/mascaron_animation.gif" alt="" title="mascaron_animation" width="580" height="306" class="alignnone size-full wp-image-1000" /><br />
<em>triangulated mesh with vertex color -> triangulated mesh with vertex color and SSAO -> triangulated mesh shaded with SSAO -> triangulated mesh wireframe -> PhotoSynth sparse point cloud<br />(sparse point cloud: 8600 vertices, dense point cloud: 417k vertices, mesh: 917k triangles)</em></p>
<div style="height: 30px;">&nbsp;</div>
<p>You can also take a look at the <a href="http://photosynth.net/view.aspx?cid=6eac0501-c379-4beb-8758-6a7ec75e0304">PhotoSynth reconstruction</a> of the sculpture.</p>
<p>PhotoSynthToolkit is composed of several programs:</p>
<ul style="margin-left: 20px">
<li>PhotoSynthDownloader: download the PhotoSynth point cloud and camera parameters</li>
<li>PhotoSynth2PMVS: convert a downloaded PhotoSynth point cloud into input for PMVS2</li>
<li>PMVS2: <a href="http://grail.cs.washington.edu/software/pmvs/">http://grail.cs.washington.edu/software/pmvs/</a> created by Yasutaka Furukawa</li>
<li>PhotoSynthViewer: <a href="http://www.ogre3d.org/">Ogre3D</a> PhotoSynth viewer [not working yet]</li>
</ul>
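<p>At its core, PhotoSynth2PMVS has to rewrite each camera as the 3&#215;4 projection matrix P = K[R|t] that PMVS2 reads from its txt/ folder (a CONTOUR header followed by the three matrix rows). Here is a minimal Python sketch of that step &#8212; the intrinsics below are made-up illustrative values, not ones extracted from a real synth:</p>

```python
import numpy as np

def projection_matrix(K, R, t):
    """P = K [R | t]: the 3x4 camera matrix PMVS2 works with."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def pmvs_camera_text(P):
    """PMVS2's per-camera txt file: a CONTOUR header, then the
    three rows of the projection matrix."""
    rows = (" ".join("%.8f" % v for v in row) for row in P)
    return "CONTOUR\n" + "\n".join(rows) + "\n"

# hypothetical camera: focal length 800 px, principal point (320, 240),
# identity rotation, sitting at the world origin
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)
text = pmvs_camera_text(projection_matrix(K, R, t))
```

<p>Writing one such file per camera (plus the undistorted images) is essentially all PMVS2 needs to start densifying.</p>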
<h3>Download</h3>
<p>The <a href="https://github.com/dddExperiments/PhotoSynthToolkit">source code</a> is available under MIT license on my github. I have also released a win32 binary version with windows scripting (WSH) for easier usage: <a style="font-weight: bold; font-size: 15px;" href="http://www.visual-experiments.com/blog/?sdmon=downloads/PhotoSynthToolkit4.zip">PhotoSynthToolkit4.zip</a>.</p>
<h3>Help</h3>
<p>If you need some help or just want to discuss photogrammetry, please join the <a href="http://pgrammetry.com/forum/" style="font-weight: bold; font-size: 15px;">photogrammetry forum</a> created by Olafur Haraldsson. You may also be interested in Josh Harle&#8217;s <a style="font-weight: bold; font-size: 15px;" href="http://www.visual-experiments.com/2010/09/23/pmvs2-x64-and-videos-tutorials/">video tutorials</a>; they are partially outdated due to the new PhotoSynthToolkit version, but they remain very good for learning how to use <a href="http://meshlab.sourceforge.net/">MeshLab</a>.</p>
<p><a style="color:red; font-size: 15px; text-decoration: underline; font-weight: bold" href="http://www.visual-experiments.com/demos/photosynthtoolkit/">Please go to the PhotoSynthToolkit page to get the latest version</a></p>
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2010%2F11%2F09%2Fphotosynth-toolkit-updated%2F&amp;title=PhotoSynth%20Toolkit%20updated"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2010/11/09/photosynth-toolkit-updated/feed/</wfw:commentRss>
		<slash:comments>15</slash:comments>
		</item>
		<item>
		<title>Structure From Motion Toolkit released</title>
		<link>http://www.visual-experiments.com/2010/11/05/structure-from-motion-toolkit-released/</link>
		<comments>http://www.visual-experiments.com/2010/11/05/structure-from-motion-toolkit-released/#comments</comments>
		<pubDate>Fri, 05 Nov 2010 15:23:55 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[ogre3d]]></category>
		<category><![CDATA[photogrammetry]]></category>
		<category><![CDATA[bundler]]></category>
		<category><![CDATA[bundlermatcher]]></category>
		<category><![CDATA[photosynth]]></category>
		<category><![CDATA[sfmtoolkit]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=898</guid>
		<description><![CDATA[Overview I have finally released my Structure-From-Motion Toolkit (SFMToolkit). So what can you do with it ? Let&#8217;s say you have a nice place like the one just below: Place de la Bourse, Bordeaux, FRANCE (picture from Bing) &#160; Well, now you can take a lot of pictures of the place (around 50 in my [...]]]></description>
			<content:encoded><![CDATA[<h3>Overview</h3>
<p>I have finally released my Structure-From-Motion Toolkit (SFMToolkit). So what can you do with it? Let&#8217;s say you have a nice place like the one just below:</p>
<table>
<tbody style="background-color: white">
<tr>
<td><img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/10/place_de_la_bourse_bing.jpg" alt="" title="place_de_la_bourse_bing" width="580" height="266" class="alignnone size-full wp-image-923" /></td>
</tr>
<tr>
<td>Place de la Bourse, Bordeaux, FRANCE (picture from Bing)</td>
</tr>
</tbody>
</table>
<div style="height: 20px;">&nbsp;</div>
<p>Well, now you can take a lot of pictures of the place (around 50 in my case):<br />
<img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/10/place_de_la_bourse_pictures.jpg" alt="" title="place_de_la_bourse_pictures" width="565" height="220" class="alignnone size-full wp-image-943" /></p>
<div style="height: 5px;">&nbsp;</div>
<p>And then compute structure from motion and get a sparse point cloud using <a href="http://phototour.cs.washington.edu/bundler/">Bundler</a>:<br />
<img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/10/place_de_la_bourse_bundler.jpg" alt="" title="place_de_la_bourse_bundler" width="576" height="256" class="alignnone size-full wp-image-936" /></p>
<p>Finally you have a dense point cloud, divided into clusters by <a href="http://grail.cs.washington.edu/software/cmvs/">CMVS</a> and computed by <a href="http://grail.cs.washington.edu/software/pmvs/">PMVS2</a>:<br />
<img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/10/animation-cmvs.gif" alt="" title="animation-cmvs" width="580" height="250" class="alignnone size-full wp-image-939" /></p>
<p>You can also take a look at the <a href="http://photosynth.net/">PhotoSynth</a> reconstruction of the place with <a href="http://photosynth.net/view.aspx?cid=e82eca65-60fe-498b-8916-80d1e3245640">53 pictures</a> and <a href="http://photosynth.net/view.aspx?cid=93c72ebb-5c54-4aff-ad12-3d0c5ade31fd">26 (without the fountain)</a>.</p>
<p>This is the SFMToolkit workflow:<br />
<img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/10/sfmtoolkit_toolchain.jpg" alt="" title="sfmtoolkit_toolchain" width="570" height="160" class="alignnone size-full wp-image-954" /></p>
<p>SFMToolkit is composed of several programs:</p>
<ul style="margin-left: 20px">
<li>BundlerFocalExtractor: extract the CCD width from EXIF data using an XML database.</li>
<li>BundlerMatcher: extract and match features using <a href="http://www.cs.unc.edu/~ccwu/siftgpu/">SiftGPU</a>.</li>
<li>Bundler: <a href="http://phototour.cs.washington.edu/bundler/">http://phototour.cs.washington.edu/bundler/</a> created by Noah Snavely.</li>
<li>CMVS: <a href="http://grail.cs.washington.edu/software/cmvs/">http://grail.cs.washington.edu/software/cmvs/</a> created by Yasutaka Furukawa.</li>
<li>PMVS2: <a href="http://grail.cs.washington.edu/software/pmvs/">http://grail.cs.washington.edu/software/pmvs/</a> created by Yasutaka Furukawa.</li>
<li>BundlerViewer: Bundler and PMVS2 output viewer based on <a href="http://www.ogre3d.org/">Ogre3D</a> (an open-source 3D rendering engine).</li>
</ul>
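<p>The CCD-width lookup that BundlerFocalExtractor does exists to feed Bundler its initial focal length estimate: knowing how wide the sensor is in millimetres, the metric focal length from EXIF scales to pixels by the ratio of image width to sensor width. A one-line sketch of the formula (the sensor width and focal value below are illustrative, not a specific camera&#8217;s exact specs):</p>

```python
def focal_in_pixels(focal_mm, ccd_width_mm, image_width_px):
    """Convert a metric focal length to pixels by scaling with
    image width over sensor (CCD) width."""
    return image_width_px * focal_mm / ccd_width_mm

# illustrative numbers: a 5.8 mm lens on a ~5.75 mm wide compact
# camera sensor, shooting 3072 px wide images
f_px = focal_in_pixels(5.8, 5.75, 3072)
```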
<h3>Download</h3>
<p>As you can see, this &#8220;toolkit&#8221; is composed of several open-source components. This is why I have decided to open-source my part of the job too. You can download the source code from the <a href="http://github.com/dddExperiments/SFMToolkit">SFMToolkit github</a>. You can also download a pre-compiled x64 version of the toolkit with Windows scripting (WSH) for easier usage (but not cross-platform): <a style="font-weight: bold; font-size: 15px;" href="http://www.visual-experiments.com/blog/?sdmon=downloads/SFMToolkit1.zip">SFMToolkit1.zip</a>.</p>
<h3>Help</h3>
<p>If you need some help or just want to discuss photogrammetry, please join the <a href="http://pgrammetry.com/forum/" style="font-weight: bold; font-size: 15px;">photogrammetry forum</a> created by Olafur Haraldsson. You may also be interested in Josh Harle&#8217;s <a style="font-weight: bold; font-size: 15px;" href="http://www.visual-experiments.com/2010/09/23/pmvs2-x64-and-videos-tutorials/">video tutorials</a>; they are partially outdated due to the new SFMToolkit, but they remain very good for learning how to use <a href="http://meshlab.sourceforge.net/">MeshLab</a>.</p>
<p><a style="color:red; font-size: 15px; text-decoration: underline; font-weight: bold" href="http://www.visual-experiments.com/demos/sfmtoolkit/">Please go to the SFMToolkit page to get the latest version</a></p>
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2010%2F11%2F05%2Fstructure-from-motion-toolkit-released%2F&amp;title=Structure%20From%20Motion%20Toolkit%20released"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2010/11/05/structure-from-motion-toolkit-released/feed/</wfw:commentRss>
		<slash:comments>19</slash:comments>
		</item>
		<item>
		<title>My PhotoSynth ToolKit</title>
		<link>http://www.visual-experiments.com/2010/08/19/my-photosynth-toolkit/</link>
		<comments>http://www.visual-experiments.com/2010/08/19/my-photosynth-toolkit/#comments</comments>
		<pubDate>Thu, 19 Aug 2010 15:35:59 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[ogre3d]]></category>
		<category><![CDATA[photogrammetry]]></category>
		<category><![CDATA[bundler]]></category>
		<category><![CDATA[photosynth]]></category>
		<category><![CDATA[photosynthtoolkit]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=642</guid>
		<description><![CDATA[I have released a ToolKit for PhotoSynth that lets you create a dense point cloud using PMVS2. You can download PhotoSynthToolKit1.zip and take a look at the code on my google code. PhotoSynth sparse point-cloud 11k vertices PMVS2 dense point-cloud 230k vertices I have also created a web app: PhotoSynthTileDownloader that lets you download [...]]]></description>
			<content:encoded><![CDATA[<p>I have released a ToolKit for PhotoSynth that lets you create a dense point cloud using <a href="http://grail.cs.washington.edu/software/pmvs/">PMVS2</a>.<br />
You can download <a href="http://code.google.com/p/visual-experiments/downloads/list">PhotoSynthToolKit1.zip</a> and take a look at the code on my <a href="http://code.google.com/p/visual-experiments/">google code</a>.</p>
<table>
<tbody style="background-color: white">
<tr>
<td colspan="2"><img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/08/goutz.jpg" alt="" title="goutz" width="570" height="205" class="alignnone size-full wp-image-646" /></td>
</tr>
<tr>
<td>PhotoSynth sparse point-cloud <br />11k vertices</td>
<td>PMVS2 dense point-cloud<br /> 230k vertices</td>
</tr>
</tbody>
</table>
<p>I have also created a web app, PhotoSynthTileDownloader, that lets you download all the pictures of a synth in HD. I haven&#8217;t released it yet because I&#8217;m concerned about legal issues, but you can see for yourself that it&#8217;s already working:</p>
<p><object width="480" height="290"><param name="movie" value="http://www.youtube.com/v/xqeV3pI1TfU?fs=1&amp;hl=fr_FR"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/xqeV3pI1TfU?fs=1&amp;hl=fr_FR" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="480" height="290"></embed></object></p>
<p>I&#8217;ll give more information about it in a few days, stay tuned!</p>
<p><strong>Edit:</strong> I have removed the workflow graph and moved it to my <a href="http://www.visual-experiments.com/2010/08/22/dense-point-cloud-created-with-photosyth-and-pmvs2/">next post</a>.</p>
<p><a style="color:red; font-size: 15px; text-decoration: underline; font-weight: bold" href="http://www.visual-experiments.com/demos/photosynthtoolkit/">Please go to the PhotoSynthToolkit page to get the latest version</a></p>
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2010%2F08%2F19%2Fmy-photosynth-toolkit%2F&amp;title=My%20PhotoSynth%20ToolKit"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2010/08/19/my-photosynth-toolkit/feed/</wfw:commentRss>
		<slash:comments>36</slash:comments>
		</item>
		<item>
		<title>Pose Estimation using SfM point cloud</title>
		<link>http://www.visual-experiments.com/2010/07/12/pose-estimation-using-sfm-point-cloud/</link>
		<comments>http://www.visual-experiments.com/2010/07/12/pose-estimation-using-sfm-point-cloud/#comments</comments>
		<pubDate>Mon, 12 Jul 2010 08:42:14 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[augmented reality]]></category>
		<category><![CDATA[ogre3d]]></category>
		<category><![CDATA[bundler]]></category>
		<category><![CDATA[gpusurf]]></category>
		<category><![CDATA[pose estimation]]></category>
		<category><![CDATA[sift]]></category>
		<category><![CDATA[structure from motion]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=600</guid>
		<description><![CDATA[The idea of this pose estimator is based on PTAM (Parallel Tracking and Mapping). PTAM is capable of tracking in an unknown environment thanks to the mapping done in parallel. But in fact if you want to augment reality, it&#8217;s generally because you already know what you are looking at. So, being able to have [...]]]></description>
			<content:encoded><![CDATA[<p>The idea of this pose estimator is based on <a href="http://www.robots.ox.ac.uk/~gk/PTAM/">PTAM</a> <em>(Parallel Tracking and Mapping)</em>. PTAM can track in an unknown environment thanks to the mapping done in parallel. But if you want to augment reality, it&#8217;s generally because you already know what you are looking at, so tracking that works in an unknown environment is not always needed. My idea was simple: <strong>instead of doing the mapping in parallel, why not use SfM in a pre-processing step?</strong></p>
<table>
<tbody style="background-color: white">
<tr>
<td colspan="2"><img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/07/sfm.pose_.estimation.png" alt="" title="sfm.pose.estimation" width="571" height="258" class="alignnone size-full wp-image-621" /></td>
</tr>
<tr>
<td>input: point cloud + camera shot</td>
<td>output: position and orientation of the camera</td>
</tr>
</tbody>
</table>
<div style="height: 10px"></div>
<p>So my outdoor tracking algorithm will eventually work like this:</p>
<ul style="margin-left: 20px">
<li>pre-processing step
<ul style="margin-left: 20px">
<li>generate a point cloud of the outdoor scene you want to track using Bundler</li>
<li>create a binary file with one descriptor <em>(SIFT/SURF)</em> per vertex of the point cloud</li>
</ul>
</li>
<li>in real-time, for each frame N:
<ul style="margin-left: 20px">
<li>extract features using <a href="http://mi.eng.cam.ac.uk/~er258/work/fast.html">FAST</a></li>
<li>match features against frame N-1 using 2D patches</li>
<li>compute the <strong>&#8220;relative pose&#8221;</strong> between frames N and N-1</li>
</ul>
</li>
<li>in almost real-time, for each &#8220;key frame&#8221;:
<ul style="margin-left: 20px">
<li>extract features and descriptors</li>
<li>match descriptors against those of the point cloud</li>
<li>generate 2D/3D correspondences from the matches</li>
<li>compute <strong>&#8220;absolute pose&#8221;</strong> using PnP solver <em>(<a href="http://cvlab.epfl.ch/software/EPnP/">EPnP</a>)</em></li>
</ul>
</li>
</ul>
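<p>The key-frame branch above boils down to solving a PnP problem from the 2D/3D matches. As a crude stand-in for EPnP, here is a minimal Direct Linear Transform sketch in Python/NumPy that recovers the full 3&#215;4 projection matrix from six or more correspondences (all the data below is synthetic; a real solver would also handle noise and outliers):</p>

```python
import numpy as np

def dlt_pose(pts3d, pts2d):
    """Direct Linear Transform: recover the 3x4 projection matrix
    from >= 6 non-coplanar 2D/3D correspondences. A simple stand-in
    for a proper PnP solver such as EPnP."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # the solution is the right singular vector associated with the
    # smallest singular value of A
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, pt3d):
    """Project a 3D point through P and dehomogenize."""
    x = P @ np.append(pt3d, 1.0)
    return x[:2] / x[2]

# synthetic check: a known camera 5 units back, 8 random 3D points
rng = np.random.default_rng(42)
pts3d = rng.uniform(-1.0, 1.0, (8, 3))
P_true = np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
pts2d = np.array([project(P_true, X) for X in pts3d])
P_est = dlt_pose(pts3d, pts2d)
err = max(np.linalg.norm(project(P_est, X) - u)
          for X, u in zip(pts3d, pts2d))
```

<p>With noiseless correspondences the recovered matrix reprojects the points exactly (up to numerical precision); with real matches you would wrap this (or EPnP) in RANSAC.</p>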
<p>The tricky part is that computing the absolute pose can take as long as several &#8220;relative pose&#8221; estimations. So once you&#8217;ve got the absolute pose, you&#8217;ll have to compensate for the delay by accumulating the relative poses computed in the meantime&#8230;</p>
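<p>This compensation is just composition of rigid transforms: keep the relative poses measured while the absolute solver was busy, then chain them onto the late absolute pose once it arrives. A minimal sketch with 4&#215;4 homogeneous matrices (the function names and the translation-only example are hypothetical):</p>

```python
import numpy as np

def translation(tx, ty, tz):
    """4x4 homogeneous transform for a pure translation."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

def catch_up(absolute_at_keyframe, relatives_since_keyframe):
    """Chain the relative poses accumulated since the key frame onto
    the (late) absolute pose computed for that key frame."""
    pose = absolute_at_keyframe
    for rel in relatives_since_keyframe:
        pose = rel @ pose   # each relative pose maps frame N-1 to N
    return pose

# say the absolute pose arrives 3 frames late, and 3 relative poses
# of 0.1 units each were measured in the meantime
current = catch_up(translation(1.0, 0.0, 0.0),
                   [translation(0.1, 0.0, 0.0)] * 3)
```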
<p>This is what I&#8217;ve got so far:</p>
<ul style="margin-left: 20px">
<li><strong>pre-processing step:</strong> binary file generated using SiftGPU (planning to move to my GPUSurf implementation) and Bundler (planning to move to <a href="http://insight3d.sourceforge.net/">Insight3D</a> or to implement it myself using <a href="http://www.ics.forth.gr/~lourakis/sba/index.html">sba</a>)</li>
<li><strong>relative pose:</strong> I don&#8217;t have an implementation of the relative pose estimator yet</li>
<li><strong>absolute pose:</strong> it&#8217;s basically working but needs some improvements:
<ul style="margin-left: 20px">
<li>switch feature extraction/matching from SIFT to SURF</li>
<li>remove unused descriptors to speed up the matching step (by scoring descriptors used as inliers against training data)</li>
<li>use another PnP solver (or add RANSAC to handle outliers and get more accurate results)</li>
</ul>
</li>
</ul>
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2010%2F07%2F12%2Fpose-estimation-using-sfm-point-cloud%2F&amp;title=Pose%20Estimation%20using%20SfM%20point%20cloud"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2010/07/12/pose-estimation-using-sfm-point-cloud/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
	</channel>
</rss>
