<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Augmented Reality outdoor tracking becoming reality</title>
	<atom:link href="http://www.visual-experiments.com/2010/12/13/augmented-reality-outdoor-tracking-becoming-reality/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.visual-experiments.com/2010/12/13/augmented-reality-outdoor-tracking-becoming-reality/</link>
	<description>ASTRE Henri experiments with Ogre3D and web stuff</description>
	<lastBuildDate>Tue, 22 Aug 2017 10:32:15 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.1.2</generator>
	<item>
		<title>By: Henri</title>
		<link>http://www.visual-experiments.com/2010/12/13/augmented-reality-outdoor-tracking-becoming-reality/comment-page-1/#comment-8324</link>
		<dc:creator>Henri</dc:creator>
		<pubDate>Tue, 15 Nov 2011 18:27:11 +0000</pubDate>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=909#comment-8324</guid>
		<description>@Ngoc Vu: I haven&#039;t used GPU KNN because all SIFT features would need to stay in GPU memory (which is OK for small areas but not for bigger ones).
@Lex: actually I&#039;ve already read this paper: it is indeed very interesting to me (and IMO much more complex than my proof-of-concept). As explained in this post (and in my answer), you don&#039;t need to run this algorithm in real time: it is only needed to bootstrap (or update) a SLAM algorithm to real-world coordinates. So the idea behind my prototype was to run the global pose estimator on the server side (in my case a Core i7 with a GeForce GTX 470). But I&#039;ve also implemented a panorama-based prototype which uses a reference panoramic image as bootstrap (detection based on SURF) and a Kalman filter on sensor information (tracking), designed to run on an iPad. I&#039;ve implemented that prototype on a low-end device (a Samsung Q1 UMPC), and my previous company is supposed to port it to the iPad...</description>
		<content:encoded><![CDATA[<p>@Ngoc Vu: I haven&#8217;t used GPU KNN because all SIFT features would need to stay in GPU memory (which is OK for small areas but not for bigger ones).</p>
<p>@Lex: actually I&#8217;ve already read this paper: it is indeed very interesting to me (and IMO much more complex than my proof-of-concept). As explained in this post (and in my answer), you don&#8217;t need to run this algorithm in real time: it is only needed to bootstrap (or update) a SLAM algorithm to real-world coordinates. So the idea behind my prototype was to run the global pose estimator on the server side (in my case a Core i7 with a GeForce GTX 470). But I&#8217;ve also implemented a panorama-based prototype which uses a reference panoramic image as bootstrap (detection based on SURF) and a Kalman filter on sensor information (tracking), designed to run on an iPad. I&#8217;ve implemented that prototype on a low-end device (a Samsung Q1 UMPC), and my previous company is supposed to port it to the iPad&#8230;</p>
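<p>A back-of-the-envelope sketch of that GPU-memory constraint, assuming 128-dimensional float32 SIFT descriptors; the database sizes below are illustrative, not measured from this prototype:</p>
<pre><code># GPU memory estimate for a SIFT descriptor database.
# Assumes 128-dimensional float32 descriptors (512 bytes each); the
# database sizes are hypothetical, not measured.
BYTES_PER_DESCRIPTOR = 128 * 4  # 128 float32 components

for n_descriptors in (100_000, 1_000_000, 10_000_000):
    gib = n_descriptors * BYTES_PER_DESCRIPTOR / 2**30
    print(f"{n_descriptors:>10,} descriptors -> {gib:5.2f} GiB")

# A GeForce GTX 470 has 1.25 GB of VRAM, so a few million descriptors
# already fill it -- hence matching in host memory for larger areas.
</code></pre>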
]]></content:encoded>
	</item>
	<item>
		<title>By: Lex van der Sluijs</title>
		<link>http://www.visual-experiments.com/2010/12/13/augmented-reality-outdoor-tracking-becoming-reality/comment-page-1/#comment-8315</link>
		<dc:creator>Lex van der Sluijs</dc:creator>
		<pubDate>Tue, 15 Nov 2011 11:40:04 +0000</pubDate>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=909#comment-8315</guid>
		<description>Hi Henri,
Very impressive, great result!
The ISMAR 2011 paper &#039;Real-Time Self-Localization from Panoramic Images on Mobile Devices&#039; by Clemens Arth will interest you as well. He uses a panoramic tracker to locate enough feature points for a 6DOF pose estimation on a mobile phone (with a small FOV). The idea is to pick up the tracking from there using another system, e.g. SLAM, PTAM, optical flow, etc.
But you have managed to do it without a panorama!
One question: what are the (approximate) specs of the device used to create the movie in this post?
Lex</description>
		<content:encoded><![CDATA[<p>Hi Henri,</p>
<p>Very impressive, great result!<br />
The ISMAR 2011 paper &#8216;Real-Time Self-Localization from Panoramic Images on Mobile Devices&#8217; by Clemens Arth will interest you as well. He uses a panoramic tracker to locate enough feature points for a 6DOF pose estimation on a mobile phone (with a small FOV). The idea is to pick up the tracking from there using another system, e.g. SLAM, PTAM, optical flow, etc.</p>
<p>But you have managed to do it without a panorama!</p>
<p>One question: what are the (approximate) specs of the device used to create the movie in this post?</p>
<p>Lex</p>
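<p>A minimal sketch of the 6DOF pose estimation step discussed above, assuming OpenCV and synthetic 2D-3D correspondences (the intrinsics and points are illustrative):</p>
<pre><code># Minimal 6DOF pose estimation from 2D-3D correspondences via RANSAC PnP.
# OpenCV assumed; intrinsics and points are synthetic stand-ins.
import numpy as np
import cv2

# Hypothetical pinhole intrinsics (focal length and principal point, pixels).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Synthetic 3D map points, projected with a known ground-truth pose.
pts3d = np.random.uniform(-1.0, 1.0, (50, 3))
pts3d[:, 2] += 5.0  # keep the points in front of the camera
rvec_gt = np.array([0.1, -0.2, 0.05])
tvec_gt = np.array([0.3, -0.1, 0.5])
pts2d, _ = cv2.projectPoints(pts3d, rvec_gt, tvec_gt, K, None)

# RANSAC PnP recovers rotation + translation and tolerates mismatches,
# which matter with real SIFT/SURF correspondences.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d, pts2d, K, None)
print("pose found:", ok, "inliers:", len(inliers))
print("rvec:", rvec.ravel(), "tvec:", tvec.ravel())
</code></pre>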
]]></content:encoded>
	</item>
	<item>
		<title>By: Ngoc Vu</title>
		<link>http://www.visual-experiments.com/2010/12/13/augmented-reality-outdoor-tracking-becoming-reality/comment-page-1/#comment-6716</link>
		<dc:creator>Ngoc Vu</dc:creator>
		<pubDate>Fri, 02 Sep 2011 15:38:31 +0000</pubDate>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=909#comment-6716</guid>
		<description>Hi Henri,
How would you evaluate the matching performance if we replace FLANN with GPU-based KNN (http://www.i3s.unice.fr/~creative/KNN/) in your implementation?</description>
		<content:encoded><![CDATA[<p>Hi Henri,</p>
<p>How would you evaluate the matching performance if we replace FLANN with GPU-based KNN (<a href="http://www.i3s.unice.fr/~creative/KNN/" rel="nofollow">http://www.i3s.unice.fr/~creative/KNN/</a>) in your implementation?</p>
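<p>For reference, a sketch of the FLANN-based KNN matching being compared here, with Lowe&#8217;s ratio test (OpenCV assumed; the descriptors are random stand-ins for real SIFT features):</p>
<pre><code># FLANN KNN matching with Lowe's ratio test; the descriptors are random
# stand-ins for real 128-D SIFT features.
import numpy as np
import cv2

query = np.random.rand(500, 128).astype(np.float32)       # frame descriptors
database = np.random.rand(20000, 128).astype(np.float32)  # point-cloud descriptors

# KD-tree index, the usual FLANN choice for SIFT descriptors.
index_params = dict(algorithm=1, trees=4)  # 1 = FLANN_INDEX_KDTREE
search_params = dict(checks=64)            # accuracy/speed trade-off
matcher = cv2.FlannBasedMatcher(index_params, search_params)

matches = matcher.knnMatch(query, database, k=2)
good = [m for m, n in matches if 0.8 * n.distance > m.distance]
print(f"{len(good)} of {len(matches)} matches pass the ratio test")
</code></pre>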
]]></content:encoded>
	</item>
	<item>
		<title>By: Structure from motion projects &#187; Visual-Experiments.com</title>
		<link>http://www.visual-experiments.com/2010/12/13/augmented-reality-outdoor-tracking-becoming-reality/comment-page-1/#comment-5892</link>
		<dc:creator>Structure from motion projects &#187; Visual-Experiments.com</dc:creator>
		<pubDate>Sat, 18 Jun 2011 09:37:10 +0000</pubDate>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=909#comment-5892</guid>
		<description>[...] introduced my tracking algorithm in the previous post. One of the issues I have is that the point cloud generated by my SFMToolkit (using Bundler) is not [...]</description>
		<content:encoded><![CDATA[<p>[...] introduced my tracking algorithm in the previous post. One of the issues I have is that the point cloud generated by my SFMToolkit (using Bundler) is not [...]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: admin</title>
		<link>http://www.visual-experiments.com/2010/12/13/augmented-reality-outdoor-tracking-becoming-reality/comment-page-1/#comment-1786</link>
		<dc:creator>admin</dc:creator>
		<pubDate>Mon, 20 Dec 2010 15:52:28 +0000</pubDate>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=909#comment-1786</guid>
		<description>@MarkAlexander: Thanks for the encouragement! I&#039;m wondering if you are using your PTAM adaptation on a mobile device? Because almost all mobiles only have a single ARM core, and a dual-core machine is more suitable for running bundle adjustment on another thread...
@Pierre: Thanks for the article! (I&#039;ve already seen it but never spent the time to read it completely.) Concerning the yellow matches in the video, this is not a coincidence ;-). When I completed this prototype I was looking for options to compress the point cloud used for tracking (keeping only one SIFT descriptor per vertex, scoring descriptors used as inliers in pose estimation on a training video and removing unused descriptors, manual cleaning of the point cloud, ...). I had seen this video on the &lt;a href=&quot;http://www.icg.tugraz.at/Members/irschara/index&quot; rel=&quot;nofollow&quot;&gt;homepage of Arnold Irschara&lt;/a&gt;, but at that time the video was only available as an AVI download, which is why I didn&#039;t embed the YouTube video. BTW their solution is very different: they compress the point cloud using mean-shift and use exact GPU matching against synthetic views, as opposed to my solution using multi-core CPU approximate matching with &lt;a href=&quot;http://www.cs.ubc.ca/~mariusm/index.php/FLANN/FLANN&quot; rel=&quot;nofollow&quot;&gt;FLANN&lt;/a&gt; on the whole point cloud.</description>
		<content:encoded><![CDATA[<p>@MarkAlexander: Thanks for the encouragement! I&#8217;m wondering if you are using your PTAM adaptation on a mobile device? Because almost all mobiles only have a single ARM core, and a dual-core machine is more suitable for running bundle adjustment on another thread&#8230;<br />
@Pierre: Thanks for the article! (I&#8217;ve already seen it but never spent the time to read it completely.) Concerning the yellow matches in the video, this is not a coincidence ;-). When I completed this prototype I was looking for options to compress the point cloud used for tracking (keeping only one SIFT descriptor per vertex, scoring descriptors used as inliers in pose estimation on a training video and removing unused descriptors, manual cleaning of the point cloud, &#8230;). I had seen this video on the <a href="http://www.icg.tugraz.at/Members/irschara/index" rel="nofollow">homepage of Arnold Irschara</a>, but at that time the video was only available as an AVI download, which is why I didn&#8217;t embed the YouTube video. BTW their solution is very different: they compress the point cloud using mean-shift and use exact GPU matching against synthetic views, as opposed to my solution using multi-core CPU approximate matching with <a href="http://www.cs.ubc.ca/~mariusm/index.php/FLANN/FLANN" rel="nofollow">FLANN</a> on the whole point cloud.</p>
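<p>A sketch of the inlier-scoring idea described above: count how often each point-cloud descriptor is a RANSAC inlier over a training video, then drop the ones that never help (all names and thresholds here are illustrative, not the prototype&#8217;s actual code):</p>
<pre><code># Score point-cloud descriptors by how often they are pose-estimation
# inliers on a training video, then prune the unused ones. Illustrative
# sketch only; the inlier sets below are simulated.
import numpy as np

n_points = 100_000
scores = np.zeros(n_points, dtype=np.int32)

def record_frame(inlier_indices):
    """Call once per training frame with the inlier indices from RANSAC."""
    scores[inlier_indices] += 1

rng = np.random.default_rng(0)
for _ in range(300):  # simulated training pass over 300 frames
    record_frame(rng.choice(n_points, size=200, replace=False))

# Keep only descriptors that were ever inliers; the rest are dead weight.
keep = np.flatnonzero(scores)
print(f"kept {keep.size} of {n_points} descriptors "
      f"({100.0 * keep.size / n_points:.1f}%)")
</code></pre>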
]]></content:encoded>
	</item>
	<item>
		<title>By: Pierre</title>
		<link>http://www.visual-experiments.com/2010/12/13/augmented-reality-outdoor-tracking-becoming-reality/comment-page-1/#comment-1750</link>
		<dc:creator>Pierre</dc:creator>
		<pubDate>Fri, 17 Dec 2010 14:55:47 +0000</pubDate>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=909#comment-1750</guid>
		<description>The following video will interest you:
http://www.youtube.com/watch?v=aVqT-A08ZTk&amp;feature=player_profilepage
It shows matches in yellow too, as you have done!</description>
		<content:encoded><![CDATA[<p>The following video will interest you:<br />
<a href="http://www.youtube.com/watch?v=aVqT-A08ZTk&#038;feature=player_profilepage" rel="nofollow">http://www.youtube.com/watch?v=aVqT-A08ZTk&#038;feature=player_profilepage</a></p>
<p>It shows matches in yellow too, as you have done!</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Pierre</title>
		<link>http://www.visual-experiments.com/2010/12/13/augmented-reality-outdoor-tracking-becoming-reality/comment-page-1/#comment-1748</link>
		<dc:creator>Pierre</dc:creator>
		<pubDate>Fri, 17 Dec 2010 14:38:13 +0000</pubDate>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=909#comment-1748</guid>
		<description>I have an interesting article for you!
Wide Area Localization on Mobile Phones. ISMAR 2009.
http://www.icg.tugraz.at/pub/pdf/ismar09_loc</description>
		<content:encoded><![CDATA[<p>I have an interesting article for you!<br />
Wide Area Localization on Mobile Phones. ISMAR 2009.<br />
<a href="http://www.icg.tugraz.at/pub/pdf/ismar09_loc" rel="nofollow">http://www.icg.tugraz.at/pub/pdf/ismar09_loc</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: MarkAlexander</title>
		<link>http://www.visual-experiments.com/2010/12/13/augmented-reality-outdoor-tracking-becoming-reality/comment-page-1/#comment-1721</link>
		<dc:creator>MarkAlexander</dc:creator>
		<pubDate>Tue, 14 Dec 2010 19:41:08 +0000</pubDate>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=909#comment-1721</guid>
		<description>I&#039;m impressed with the rate you crank out new things ;) We are using an adaptation of PTAM as a tracker. In PTAM there are no descriptors as in SIFT/SURF but &quot;warped&quot; patches, which makes it fast and detectable at bigger angles, and the bundle adjuster does not run every frame but every 30th frame or so. However, the accuracy is not very good. It&#039;s great for augmented reality though ;) Keep up the good work!</description>
		<content:encoded><![CDATA[<p>I&#8217;m impressed with the rate you crank out new things ;) We are using an adaptation of PTAM as a tracker. In PTAM there are no descriptors as in SIFT/SURF but &#8220;warped&#8221; patches, which makes it fast and detectable at bigger angles, and the bundle adjuster does not run every frame but every 30th frame or so. However, the accuracy is not very good. It&#8217;s great for augmented reality though ;) Keep up the good work!</p>
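<p>A self-contained sketch of the &#8220;warped patch&#8221; idea mentioned above: instead of a SIFT/SURF descriptor, warp a small template by the predicted camera motion and compare pixels directly (OpenCV assumed; the patch and warp are illustrative, not PTAM&#8217;s actual code):</p>
<pre><code># PTAM-style warped-patch comparison: warp an 8x8 template by the motion
# predicted from the current pose, then score it with zero-mean SSD.
# Illustrative sketch; the patch and warp parameters are made up.
import numpy as np
import cv2

patch = np.random.rand(8, 8).astype(np.float32)  # template around a map point

# Predicted in-plane rotation + scale from the pose prior.
M = cv2.getRotationMatrix2D((3.5, 3.5), 17.0, 1.1)  # center, degrees, scale
warped = cv2.warpAffine(patch, M, (8, 8))

def zmssd(a, b):
    """Zero-mean SSD: cheap per-pixel score, robust to brightness offsets."""
    return float(np.sum(((a - a.mean()) - (b - b.mean())) ** 2))

print("ZMSSD between patch and its warped prediction:", zmssd(patch, warped))
</code></pre>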
]]></content:encoded>
	</item>
	<item>
		<title>By: admin</title>
		<link>http://www.visual-experiments.com/2010/12/13/augmented-reality-outdoor-tracking-becoming-reality/comment-page-1/#comment-1706</link>
		<dc:creator>admin</dc:creator>
		<pubDate>Mon, 13 Dec 2010 16:51:13 +0000</pubDate>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=909#comment-1706</guid>
		<description>@Pierre: this is indeed a big issue. But my idea was to compute the absolute pose with this algorithm on the server side and compute the relative pose using KLT (or inertial data) on the mobile device. I have introduced this idea in my &lt;a href=&quot;http://www.visual-experiments.com/2010/07/11/remote-augmented-reality-prototype/&quot; rel=&quot;nofollow&quot;&gt;remote augmented reality prototype&lt;/a&gt;.
@Cesar.Lopez: No, this prototype is still using Bundler (I&#039;ll post a list of alternatives I&#039;m working on in another post).</description>
		<content:encoded><![CDATA[<p>@Pierre: this is indeed a big issue. But my idea was to compute the absolute pose with this algorithm on the server side and compute the relative pose using KLT (or inertial data) on the mobile device. I have introduced this idea in my <a href="http://www.visual-experiments.com/2010/07/11/remote-augmented-reality-prototype/" rel="nofollow">remote augmented reality prototype</a>.</p>
<p>@Cesar.Lopez: No, this prototype is still using Bundler (I&#8217;ll post a list of alternatives I&#8217;m working on in another post).</p>
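<p>A minimal sketch of the on-device relative tracking via KLT mentioned above: pyramidal Lucas-Kanade optical flow between consecutive frames (OpenCV assumed; the frames here are synthetic):</p>
<pre><code># KLT relative tracking between consecutive frames with pyramidal
# Lucas-Kanade. The frames are synthetic: the second is the first shifted
# by 2 pixels, so the recovered flow should be close to (2, 0).
import numpy as np
import cv2

prev = (np.random.rand(240, 320) * 255).astype(np.uint8)
curr = np.roll(prev, shift=2, axis=1)  # fake 2-pixel horizontal camera motion

# Detect good corners in the previous frame, track them into the current one.
pts = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01,
                              minDistance=7)
new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)

flow = (new_pts - pts)[status.ravel() == 1]
print("median flow (dx, dy):", np.median(flow.reshape(-1, 2), axis=0))
</code></pre>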
]]></content:encoded>
	</item>
	<item>
		<title>By: Cesar.Lopez</title>
		<link>http://www.visual-experiments.com/2010/12/13/augmented-reality-outdoor-tracking-becoming-reality/comment-page-1/#comment-1704</link>
		<dc:creator>Cesar.Lopez</dc:creator>
		<pubDate>Mon, 13 Dec 2010 15:16:35 +0000</pubDate>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=909#comment-1704</guid>
		<description>So did you manage to integrate the Insight3d camera calibration code with your own (I&#039;m guessing SAMANTHA-based) Bundler alternative?</description>
		<content:encoded><![CDATA[<p>So did you manage to integrate the Insight3d camera calibration code with your own (I&#8217;m guessing SAMANTHA-based) Bundler alternative?</p>
]]></content:encoded>
	</item>
</channel>
</rss>
