<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>
<channel>
	<title>Visual-Experiments.com &#187; sfmtoolkit</title>
	<atom:link href="http://www.visual-experiments.com/tag/sfmtoolkit/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.visual-experiments.com</link>
	<description>ASTRE Henri experiments with Ogre3D and web stuff</description>
	<lastBuildDate>Mon, 16 Jan 2017 18:59:35 +0000</lastBuildDate>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.1.2</generator>
		<item>
		<title>New bundler version</title>
		<link>http://www.visual-experiments.com/2012/05/26/new-bundler-version/</link>
		<comments>http://www.visual-experiments.com/2012/05/26/new-bundler-version/#comments</comments>
		<pubDate>Sat, 26 May 2012 14:24:09 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[photogrammetry]]></category>
		<category><![CDATA[bundler]]></category>
		<category><![CDATA[opensynther]]></category>
		<category><![CDATA[pba]]></category>
		<category><![CDATA[sba]]></category>
		<category><![CDATA[sfmtoolkit]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=2384</guid>
		<description><![CDATA[I&#8217;ve compiled a new version of Bundler with two interesting new options: --parallel_epipolar --ba sba Note that these options should be passed directly as command-line arguments to bundler.exe (not added to the options.txt file). The parallel_epipolar option estimates the fundamental matrices (RANSAC + LM) in parallel. This problem is embarrassingly parallel but [...]]]></description>
			<content:encoded><![CDATA[<p>I&#8217;ve compiled a new version of Bundler with two interesting new options:</p>
<pre>
--parallel_epipolar
--ba sba
</pre>
<p>Note that these options should be passed directly as command-line arguments to bundler.exe (not added to the options.txt file).</p>
<p>The <strong>parallel_epipolar</strong> option estimates the fundamental matrices (RANSAC + LM) in parallel. The problem is embarrassingly parallel, but some global variables used in the callback passed to lmfit made it harder to implement than expected: I&#8217;ve used a functor to hide the global variables used by the callback, but sadly lmfit doesn&#8217;t have a void* userData parameter (only a pointer to the callback). So I&#8217;ve modified lmfit and added this missing parameter, which lets me pass a pointer to my functor. Furthermore, I had to compile almost everything as C++ instead of C to use my functor inside lmfit, so I had to fix a lot of malloc calls that didn&#8217;t compile in C++ due to missing casts. To keep &#8220;backward behavior compatibility&#8221;, <strong>this option is disabled by default</strong>.</p>
<p>The <strong>ba</strong> option changes the bundle adjustment &#8220;engine&#8221; used. Here is the list of available engines:</p>
<ul>
<li><strong>sba</strong> (default)</li>
<li><strong>none</strong> (for debug only)</li>
<li><strong>pba_cpu_double</strong></li>
<li><strong>pba_cpu_float</strong></li>
<li><strong>pba_gpu_float</strong></li>
</ul>
<p>PBA stands for Parallel Bundle Adjustment: I&#8217;ve integrated <a href="http://grail.cs.washington.edu/projects/mcba/">mcba</a> from Changchang Wu.</p>
<p>So if you have an Nvidia GPU and the CUDA runtime installed, you can add these options:</p>
<pre>
bundler.exe list_focal_absolute.txt --ba pba_gpu_float --parallel_epipolar --options_file options.txt
</pre>
<p>Timings on a 245-picture dataset:</p>
<table>
<tr>
<th>Bundler BA</th>
<th>Time</th>
<th>Pictures registered</th>
</tr>
<tr>
<td>SBA</td>
<td>2h18min</td>
<td>233</td>
</tr>
<tr>
<td>PBA CPU double</td>
<td>23min</td>
<td>230</td>
</tr>
<tr>
<td>PBA CPU float</td>
<td>9min</td>
<td>230</td>
</tr>
<tr>
<td>PBA GPU float</td>
<td>6min</td>
<td>230</td>
</tr>
<tr>
<td>none (for debug)</td>
<td>2min</td>
<td>189 (bad reconstruction)</td>
</tr>
</table>
<p>You can download this new version of bundler: <a href="http://www.visual-experiments.com/blog/?sdmon=downloads/Bundler-multiBA-parallelEpipolar-x64.zip">bundler-multiBA-parallelEpipolar-x64.zip</a><br />
<strong>Update</strong>: the source code is available on the <a href="https://github.com/dddExperiments/Bundler/tree/MCBA">MCBA branch</a> of my Bundler fork.</p>
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2012%2F05%2F26%2Fnew-bundler-version%2F&amp;title=New%20bundler%20version"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2012/05/26/new-bundler-version/feed/</wfw:commentRss>
		<slash:comments>12</slash:comments>
		</item>
		<item>
		<title>Moving on</title>
		<link>http://www.visual-experiments.com/2011/09/26/moving-on/</link>
		<comments>http://www.visual-experiments.com/2011/09/26/moving-on/#comments</comments>
		<pubDate>Mon, 26 Sep 2011 20:26:21 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[photogrammetry]]></category>
		<category><![CDATA[acute3d]]></category>
		<category><![CDATA[photofly]]></category>
		<category><![CDATA[photosynthtoolkit]]></category>
		<category><![CDATA[sfmtoolkit]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=1833</guid>
		<description><![CDATA[I&#8217;ve spent almost 4 years at axyz.fr but it was time for me to move on. I&#8217;ve quit my job and I&#8217;ve started to work at Acute3D in September: they are working in the structure from motion and dense 3d reconstruction field. You can try their dense mesh creation technology by using Autodesk Photofly. Photofly [...]]]></description>
			<content:encoded><![CDATA[<p>I&#8217;ve spent almost 4 years at <a href="http://www.axyz.fr/">axyz.fr</a>, but it was time for me to move on. <strong>I&#8217;ve quit my job and started to work at <a href="http://acute3d.com/">Acute3D</a> in September</strong>: they work in the structure-from-motion and dense 3D reconstruction field. You can try their dense mesh creation technology by using <a href="http://labs.autodesk.com/utilities/photo_scene_editor/">Autodesk Photofly</a>. Photofly calibrates the cameras using Autodesk&#8217;s implementation, and the mesh is then generated using <a href="http://www.acute3d.com/">Acute3D</a> technology (Autodesk has bought a license for their meshing technology). </p>
<p>At <a href="http://acute3d.com/">Acute3D</a>, I&#8217;m working on the development of a calibration system (like Bundler). I decided to join them because I think they have the best technology available, and I wanted to work with very skilled people in computer vision.</p>
<p>I don&#8217;t know if I&#8217;ll be able to continue writing on this blog as this new job is really more interesting and challenging for me.</p>
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2011%2F09%2F26%2Fmoving-on%2F&amp;title=Moving%20on"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2011/09/26/moving-on/feed/</wfw:commentRss>
		<slash:comments>11</slash:comments>
		</item>
		<item>
		<title>News about OpenSynther</title>
		<link>http://www.visual-experiments.com/2011/05/09/news-about-opensynther/</link>
		<comments>http://www.visual-experiments.com/2011/05/09/news-about-opensynther/#comments</comments>
		<pubDate>Mon, 09 May 2011 08:24:30 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[augmented reality]]></category>
		<category><![CDATA[photogrammetry]]></category>
		<category><![CDATA[opensynther]]></category>
		<category><![CDATA[photosynth]]></category>
		<category><![CDATA[photosynthtoolkit]]></category>
		<category><![CDATA[sfmtoolkit]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=1649</guid>
		<description><![CDATA[I&#8217;ve worked a lot on OpenSynther lately: OpenSynther is the name of my structure-from-motion solution. This new version is a major rewrite of the previous version which was using Surf with both GPU and multi-core CPU matching. The new version is using SiftGPU and Flann to achieve linear matching complexity of unstructured input as described [...]]]></description>
			<content:encoded><![CDATA[<p>I&#8217;ve worked a lot on <a href="http://www.visual-experiments.com/demos/opensynther/">OpenSynther</a> lately: OpenSynther is the name of my structure-from-motion solution. This new version is a major rewrite of the <a href="http://www.visual-experiments.com/2010/09/08/introducing-opensynther/">previous version</a>, which used Surf with both GPU and multi-core CPU matching. The new version uses <a href="http://www.cs.unc.edu/~ccwu/siftgpu/">SiftGPU</a> and <a href="http://www.cs.ubc.ca/~mariusm/index.php/FLANN/FLANN">Flann</a> to achieve linear matching complexity on unstructured input, as described in the <a href="http://profs.sci.univr.it/~fusiello/demo/samantha/">Samantha paper</a>. You can find more information about OpenSynther features on its <a style="font-size: 20px;" href="http://www.visual-experiments.com/demos/opensynther/">dedicated page</a> (including source code).</p>
<p>OpenSynther has been designed as a library (<strong>OpenSyntherLib</strong>) which has already proven useful for several programs I&#8217;ve written:</p>
<ul style="margin-left: 20px;">
<li><strong>OpenSynther</strong>: work in progress&#8230; used by my augmented reality demo</li>
<li><strong>PhotoSynth2CMVS</strong>: this allows using <a href="http://grail.cs.washington.edu/software/cmvs/">CMVS</a> with <a href="http://www.visual-experiments.com/demos/photosynthtoolkit/">PhotoSynthToolkit</a></li>
<li><strong>BundlerMatcher</strong>: this is the matching solution used by <a href="http://www.visual-experiments.com/demos/sfmtoolkit/">SFMToolkit</a></li>
</ul>
<h3>Outdoor augmented reality demo using OpenSynther</h3>
<p>I&#8217;ve improved my <a href="http://www.visual-experiments.com/2010/12/13/augmented-reality-outdoor-tracking-becoming-reality/">first attempt at outdoor augmented reality</a>: I&#8217;m now relying on PhotoSynth&#8217;s capability to create a point cloud of the scene instead of <a href="http://phototour.cs.washington.edu/bundler/">Bundler</a>. Then I do some processing with OpenSynther, and here is what you get:</p>
<p><iframe width="560" height="349" src="http://www.youtube.com/embed/JOoQTs5k258" frameborder="0" allowfullscreen></iframe><br />
<br />
You can also take a look at the <strong style="font-size: 20px;">three other YouTube videos showing this tracking in action</strong> around this church: <a href="http://youtu.be/5kSdy6DOIdI">MVI_6380.avi</a>, <a href="http://youtu.be/N-3xmmqLuD8">MVI_6381.avi</a>, <a href="http://youtu.be/8XiqGpQ9QuQ">MVI_6382.avi</a>.</p>
<h3>PhotoSynth2CMVS</h3>
<p>This is not ready yet: I still have some things to fix before releasing it. But I&#8217;m already producing, from PhotoSynth, a valid &#8220;bundle.out&#8221; file compatible with CMVS processing. I&#8217;ve processed the <a href="http://photosynth.net/view.aspx?cid=2776dec7-918d-4c64-8ded-342b74421c1a">V3D dataset</a> with <strong>PhotoSynth2CMVS</strong> and sent the bundle.out file to <a href="http://www.olihar.com/">Olafur Haraldsson</a>, who managed to create the corresponding 36-million-vertex point cloud using <a href="http://grail.cs.washington.edu/software/cmvs/">CMVS</a> and <a href="http://grail.cs.washington.edu/software/pmvs/">PMVS2</a>:<br />
<iframe src="http://player.vimeo.com/video/21889929?title=0&amp;byline=0&amp;portrait=0" width="520" height="293" frameborder="0"></iframe><br />
The <a href="http://photosynth.net/view.aspx?cid=2776dec7-918d-4c64-8ded-342b74421c1a">V3D dataset</a> was created by <a href="http://www.inf.ethz.ch/personal/chzach/">Christopher Zach</a>.</p>
<h3>BundlerMatcher</h3>
<p>The new unstructured linear matching is really fast compared to PhotoSynth, as you can see in the charts below. <strong>But the quality of the generated point cloud is not as good as PhotoSynth&#8217;s</strong>.<br />
<br />
<img src="http://www.visual-experiments.com/blog/wp-content/uploads/2011/03/photosynth_matching.png" alt="" title="photosynth_matching" width="492" height="289" class="aligncenter size-full wp-image-1496" /><br />
<br />
<img src="http://www.visual-experiments.com/blog/wp-content/uploads/2011/03/opensynther_matching1.png" alt="" title="opensynther_matching" width="492" height="289" class="aligncenter size-full wp-image-1498" /><br />
This benchmark was computed on a Core i7 with an Nvidia 470 GTX. I&#8217;ve also compared the quality of the matching methods implemented in <a href="http://www.visual-experiments.com/demos/opensynther/">OpenSynther</a> (linear vs quadratic), using Bundler as a comparator with a dataset of 245 pictures:</p>
<style type="text/css">
table.benchmarks, table.benchmarks td, table.benchmarks tr {
color: black;
border: 1px solid black;
background-color: white;
text-align: left;
margin: auto;
width: 70%;
}
table.benchmarks td {
padding: 2px;
}
</style>
<table style="margin-bottom: 15px;" class="benchmarks">
<tbody>
<tr>
<td></td>
<td>Linear</td>
<td>Quadratic</td>
</tr>
<tr>
<td>Pictures registered</td>
<td>193</td>
<td>243</td>
</tr>
<tr>
<td>Time spent to register 193 pictures</td>
<td>33min</td>
<td>1h43min</td>
</tr>
</tbody>
</table>
<p>On the one hand, both <strong>the matching and the bundle adjustment are faster with linear matching</strong>; on the other hand, having only 193 out of 245 pictures registered is not acceptable. I have some ideas on how to improve the registration ratio of linear matching, but they are not implemented yet (this is why PhotoSynth2CMVS is not released for now).</p>
<h3>Future</h3>
<p>I&#8217;ve been playing with <a href="http://cvlab.epfl.ch/research/detect/ldahash/">LDAHash</a> last week and I&#8217;d like to support it in OpenSynther to improve matching speed and accuracy. It would also help reduce the memory used by OpenSynther (by a factor of 16: 128 floats -> 256 bits per feature). I&#8217;m also wondering if the <a href="http://www.cs.unc.edu/~jmf/Software.html">Cuda knn</a> implementation could speed up the matching (if applicable). I&#8217;d also like to restore the previous Surf version of OpenSynther, which was really fun to implement. Adding a sequential bundle adjustment (as in Bundler) would be really interesting too&#8230;</p>
<h3>Off-topic</h3>
<p>I&#8217;ve made some modifications to my blog: switched to WordPress 3.x, activated page caching, added social sharing buttons and added my <a href="http://www.linkedin.com/in/henriastre/en">LinkedIn account</a> next to the donate button&#8230;</p>
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2011%2F05%2F09%2Fnews-about-opensynther%2F&amp;title=News%20about%20OpenSynther"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2011/05/09/news-about-opensynther/feed/</wfw:commentRss>
		<slash:comments>6</slash:comments>
		</item>
		<item>
		<title>3D-Arch Workshop</title>
		<link>http://www.visual-experiments.com/2011/03/31/3d-arch-workshop/</link>
		<comments>http://www.visual-experiments.com/2011/03/31/3d-arch-workshop/#comments</comments>
		<pubDate>Thu, 31 Mar 2011 18:22:38 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[photogrammetry]]></category>
		<category><![CDATA[opensynther]]></category>
		<category><![CDATA[photosynthtoolkit]]></category>
		<category><![CDATA[sfmtoolkit]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=1504</guid>
		<description><![CDATA[The 3D-Arch workshop was really inspiring: it was great meeting people working on the same subject and exchanging ideas&#8230; and the place was really nice too. I couldn&#8217;t resist creating some PhotoSynths of the place for future reconstruction: dragon, eagle, statue, door, &#8230; Andrea Fusiello showcased amazing results with Samantha: reconstruction without any camera [...]]]></description>
			<content:encoded><![CDATA[<p><img src="http://www.visual-experiments.com/blog/wp-content/uploads/2011/03/trento.jpg" alt="" title="trento" width="150" height="113" class="alignnone size-full wp-image-1471" style="float: right; margin-left: 20px;" /><br />
The <a href="http://www.3d-arch.org/">3D-Arch workshop</a> was really inspiring: it was great meeting people working on the same subject and exchanging ideas&#8230; and the place was really nice too. I couldn&#8217;t resist creating some PhotoSynths of the place for future reconstruction: <a href="http://photosynth.net/view.aspx?cid=8645d183-4718-4325-bd7b-5d48955839d6">dragon</a>, <a href="http://photosynth.net/view.aspx?cid=18500a6e-f4fc-401f-909a-cc701b285834">eagle</a>, <a href="http://photosynth.net/view.aspx?cid=4cb18de7-274b-471b-b5a0-87884dc2d095">statue</a>, <a href="http://photosynth.net/view.aspx?cid=e6a79560-b303-4053-bdb8-f7e2a52b0870">door</a>, &#8230;</p>
<ul style="margin-left: 20px;">
<li><a href="http://profs.sci.univr.it/~fusiello/">Andrea Fusiello</a> showcased amazing results with <a href="http://profs.sci.univr.it/~fusiello/demo/samantha/">Samantha</a>: reconstruction without any camera calibration needed (neither Exif data).</li>
<li><a href="http://acute3d.com/">Jean-Philippe Pons</a> announced that his dense multi-view reconstruction solution will be incorporated in <a href="http://www.youtube.com/watch?v=5ivMJdYdnNs">Autodesk PhotoFly</a>.</li>
<li><a href="http://recherche.ign.fr/labos/matis/cv.php?prenom=&#038;nom=Pierrot-Deseilligny">Marc Pierrot-Deseilligny</a> presented <a href="http://www.micmac.ign.fr/index.php?id=3">Apero</a>: open-source bundle adjustment software for automatic calibration and orientation of sets of images (needs a calibrated camera).</li>
</ul>
<p>I should have published this post sooner but I wanted to make a double post with my new OpenSynther results&#8230; <strong>CMVS support in PhotoSynthToolkit is coming</strong>! You should expect another post next week with nice results <img src='http://www.visual-experiments.com/blog/wp-includes/images/smilies/icon_wink.gif' alt=';-)' class='wp-smiley' /> </p>
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2011%2F03%2F31%2F3d-arch-workshop%2F&amp;title=3D-Arch%20Workshop"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2011/03/31/3d-arch-workshop/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>New toolkits released</title>
		<link>http://www.visual-experiments.com/2011/02/22/new-toolkits-released/</link>
		<comments>http://www.visual-experiments.com/2011/02/22/new-toolkits-released/#comments</comments>
		<pubDate>Tue, 22 Feb 2011 10:58:54 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[photogrammetry]]></category>
		<category><![CDATA[bundler]]></category>
		<category><![CDATA[missstereo]]></category>
		<category><![CDATA[photosynth]]></category>
		<category><![CDATA[photosynthtoolkit]]></category>
		<category><![CDATA[sfmtoolkit]]></category>
		<category><![CDATA[v3dsfmtoolkit]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=1388</guid>
		<description><![CDATA[V3DSfMToolkit ETH-V3D Structure-and-Motion software was created by Christopher Zach. The original source code with dataset is available at Christopher Zach Open-Source page (GPL license). I have created a windows port of V3DSfMToolkit with scripting wich is available as both binary (V3dSfMToolkit1.zip) and source (github). I&#8217;ve tested this toolkit with the dataset given by Christopher Zach [...]]]></description>
			<content:encoded><![CDATA[<h3>V3DSfMToolkit</h3>
<p>ETH-V3D Structure-and-Motion software was created by <a href="http://www.inf.ethz.ch/personal/chzach/">Christopher Zach</a>. The original source code with a dataset is available on <a href="http://www.inf.ethz.ch/personal/chzach/opensource.html">Christopher Zach&#8217;s Open-Source page</a> (GPL license). I have created a <a href="https://github.com/dddExperiments/V3DSfMToolkit">Windows port of V3DSfMToolkit</a> with scripting, which is available as both <a href="https://github.com/downloads/dddExperiments/V3DSfMToolkit/V3dSfMToolkit1.zip">binary</a> (<a href="https://github.com/downloads/dddExperiments/V3DSfMToolkit/V3dSfMToolkit1.zip">V3dSfMToolkit1.zip</a>) and <a href="https://github.com/dddExperiments/V3DSfMToolkit">source</a> (<a href="https://github.com/dddExperiments/V3DSfMToolkit">github</a>).</p>
<p><img src="http://www.visual-experiments.com/blog/wp-content/uploads/2011/02/v3dsfmtoolkit.jpg" alt="" title="v3dsfmtoolkit" width="523" height="207" class="alignnone size-full wp-image-1411" /><br />
I&#8217;ve tested this toolkit with the dataset given by Christopher Zach (see the screenshot above): the reconstruction looks good, but I only managed to get a partial reconstruction from my own dataset.</p>
<h3>MissStereo</h3>
<p>MissStereo, a Quasi-Euclidean Epipolar Rectification tool, was created by Pascal Monasse, Neus Sabater and Zhongwei Tang. The original source code is available on the <a href="http://www.ipol.im/pub/algo/m_quasi_euclidean_epipolar_rectification/">related IPOL page</a> under the GPL license. You can download my Windows port as both <a href="http://www.visual-experiments.com/blog/?sdmon=downloads/MissStereo1.zip">binary</a> (<a href="http://www.visual-experiments.com/blog/?sdmon=downloads/MissStereo1.zip">MissStereo1.zip</a>) and <a href="https://github.com/dddExperiments/MissStereo">source</a> (<a href="https://github.com/dddExperiments/MissStereo">github</a>).</p>
<table>
<tbody style="background: white">
<tr>
<td><img src="http://www.visual-experiments.com/blog/wp-content/uploads/2011/02/MissStereo.gif" alt="" title="MissStereo" width="220" height="302" class="alignnone size-full wp-image-1396" /></td>
<td><img src="http://www.visual-experiments.com/blog/wp-content/uploads/2011/02/meshlab-2011-02-21-10-53-21-96.gif" alt="" title="MissStereoAnimation" width="360" height="300" class="alignnone size-full wp-image-1394" /></td>
</tr>
</tbody>
</table>
<p>I&#8217;m interested in this method for estimating the fundamental matrix without prior knowledge of the focal length.</p>
<h3>PhotoSynthToolkit with XSI support</h3>
<p>With the help of Julien Carmagnac (3D artist and advanced XSI user), I&#8217;ve replicated the 3DS Max texture projection rendering solution for Softimage XSI:</p>
<div style="position: relative;">
<img src="http://www.visual-experiments.com/blog/wp-content/uploads/2011/02/xsi.gif" alt="" title="PhotoSynthToolkit_xsi_cameras" width="560" height="396" class="alignnone size-full wp-image-1438" /><br />
<img src="http://www.visual-experiments.com/blog/wp-content/uploads/2011/02/PhotoSynthToolkit_xsi_support.jpg" alt="" title="PhotoSynthToolkit_xsi_support" class="alignnone size-full wp-image-1418" style="position: absolute; z-index: 5; width: 150px; left: 5px; top: 253px;" />
</div>
<p>As usual, the new version of PhotoSynthToolkit including this new feature is available on its <a href="http://www.visual-experiments.com/demos/photosynthtoolkit/">dedicated page</a>.</p>
<h3>SFMToolkit with sequence matching</h3>
<p><a href="http://www.visual-experiments.com/demos/sfmtoolkit/">SFMToolkit</a> was packaged with BundlerMatcher, my own matching solution using <a href="http://www.cs.unc.edu/~ccwu/siftgpu/">SiftGPU</a>: a trivial quadratic exhaustive matching implementation. This implementation is well suited to unstructured (unordered) pictures, but if your input is a sequence of images (a movie) you can reduce the matching complexity to linear by comparing frame N only with frames N+1, N+2 [...], N+p. I&#8217;ve also fixed a bug that may occur on some systems (Windows 7 x64 with UAC activated): thanks to <a href="http://blog.neonascent.net/">Josh Harle</a> for the bug report!<br />
<br />
The new version of SFMToolkit is available on its <a href="http://www.visual-experiments.com/demos/sfmtoolkit/">dedicated page</a>. This new feature is hidden: you need to edit &#8220;1 &#8211; Bundler.wsf&#8221; and set SEQUENCE_MATCHING_ENABLED to true (replace false with true).</p>
<h3>3D-Arch&#8217;2011</h3>
<p>I&#8217;m going to the <a href="http://www.3d-arch.org/">3D-Arch&#8217;2011 Workshop</a> at <a href="http://maps.google.com/maps?f=q&#038;source=s_q&#038;hl=en&#038;geocode=&#038;q=trento&#038;aq=&#038;sll=37.0625,-95.677068&#038;sspn=66.408528,135.263672&#038;ie=UTF8&#038;hq=&#038;hnear=Trento+Province+of+Trento,+Trentino-Alto+Adige%2FS%C3%BCdtirol,+Italy&#038;ll=45.943511,11.134644&#038;spn=3.705217,8.453979&#038;z=8&#038;iwloc=A">Trento</a>: 3D Virtual Reconstruction and Visualization of Complex Architectures. I hope to see amazing things about 3D reconstruction <img src='http://www.visual-experiments.com/blog/wp-includes/images/smilies/icon_wink.gif' alt=';-)' class='wp-smiley' /> </p>
<p><img src="http://www.visual-experiments.com/blog/wp-content/uploads/2011/02/3darch.jpg" alt="" title="3darch" width="560" height="175" class="alignnone size-full wp-image-1415" /><br />
Jean-Philippe Pons (CSTB, Sophia-Antipolis, France): High-resolution large-scale multi-view stereo </p>
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2011%2F02%2F22%2Fnew-toolkits-released%2F&amp;title=New%20toolkits%20released"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2011/02/22/new-toolkits-released/feed/</wfw:commentRss>
		<slash:comments>25</slash:comments>
		</item>
		<item>
		<title>SFMToolkit updated</title>
		<link>http://www.visual-experiments.com/2011/01/29/sfmtoolkit-updated/</link>
		<comments>http://www.visual-experiments.com/2011/01/29/sfmtoolkit-updated/#comments</comments>
		<pubDate>Sat, 29 Jan 2011 18:17:41 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[photogrammetry]]></category>
		<category><![CDATA[bundler]]></category>
		<category><![CDATA[sfmtoolkit]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=1358</guid>
		<description><![CDATA[This is a very short post to announce a SFMToolkit bug fix&#8230; If the toolkit wasn&#8217;t working at all on your machine (File not found error in &#8220;1 &#8211; Bundle.wsf&#8221;) this may fix the issue. The bug was linked to the Windows separator setting: &#8216;.&#8216; or &#8216;,&#8216;. Thus the default 0.8 matching threshold was on [...]]]></description>
			<content:encoded><![CDATA[<p>This is a very short post to announce a <strong>SFMToolkit bug fix</strong>&#8230; If the toolkit wasn&#8217;t working at all on your machine (&#8220;File not found&#8221; error in &#8220;1 &#8211; Bundle.wsf&#8221;), this may fix the issue. The bug was linked to the Windows decimal separator setting: &#8216;<strong>.</strong>&#8216; vs &#8216;<strong>,</strong>&#8216;. On some systems the default 0.8 matching threshold was therefore applied as 0: no matching -> no bundler output <img src='http://www.visual-experiments.com/blog/wp-includes/images/smilies/icon_sad.gif' alt=':-(' class='wp-smiley' /> </p>
<p>I&#8217;ve also fixed other errors in BundlerMatcher (a small memory leak, plus no more 4096-match limit), so <strong>you should download this version even if the previous one was working on your system</strong>. The new version is available on its <a style="font-size: 15px;" href="http://www.visual-experiments.com/demos/sfmtoolkit/">dedicated page</a>: please do not link directly to the zip file but to <a href="http://www.visual-experiments.com/demos/sfmtoolkit/">this page</a>, so people downloading will always get the latest version.</p>
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2011%2F01%2F29%2Fsfmtoolkit-updated%2F&amp;title=SFMToolkit%20updated"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2011/01/29/sfmtoolkit-updated/feed/</wfw:commentRss>
		<slash:comments>5</slash:comments>
		</item>
		<item>
		<title>Structure From Motion Toolkit released</title>
		<link>http://www.visual-experiments.com/2010/11/05/structure-from-motion-toolkit-released/</link>
		<comments>http://www.visual-experiments.com/2010/11/05/structure-from-motion-toolkit-released/#comments</comments>
		<pubDate>Fri, 05 Nov 2010 15:23:55 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[ogre3d]]></category>
		<category><![CDATA[photogrammetry]]></category>
		<category><![CDATA[bundler]]></category>
		<category><![CDATA[bundlermatcher]]></category>
		<category><![CDATA[photosynth]]></category>
		<category><![CDATA[sfmtoolkit]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=898</guid>
		<description><![CDATA[Overview I have finally released my Structure-From-Motion Toolkit (SFMToolkit). So what can you do with it? Let&#8217;s say you have a nice place like the one just below: Place de la Bourse, Bordeaux, FRANCE (picture from Bing) &#160; Well, now you can take a lot of pictures of the place (around 50 in my [...]]]></description>
			<content:encoded><![CDATA[<h3>Overview</h3>
<p>I have finally released my Structure-From-Motion Toolkit (SFMToolkit). So what can you do with it? Let&#8217;s say you have a nice place like the one just below:</p>
<table>
<tbody style="background-color: white">
<tr>
<td><img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/10/place_de_la_bourse_bing.jpg" alt="" title="place_de_la_bourse_bing" width="580" height="266" class="alignnone size-full wp-image-923" /></td>
</tr>
<tr>
<td>Place de la Bourse, Bordeaux, FRANCE (picture from Bing)</td>
</tr>
</tbody>
</table>
<div style="height: 20px;">&nbsp;</div>
<p>Well, now you can take a lot of pictures of the place (around 50 in my case):<br />
<img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/10/place_de_la_bourse_pictures.jpg" alt="" title="place_de_la_bourse_pictures" width="565" height="220" class="alignnone size-full wp-image-943" /></p>
<div style="height: 5px;">&nbsp;</div>
<p>And then compute structure from motion and get a sparse point cloud using <a href="http://phototour.cs.washington.edu/bundler/">Bundler</a>:<br />
<img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/10/place_de_la_bourse_bundler.jpg" alt="" title="place_de_la_bourse_bundler" width="576" height="256" class="alignnone size-full wp-image-936" /></p>
<p>Finally, you get a dense point cloud divided into clusters by <a href="http://grail.cs.washington.edu/software/cmvs/">CMVS</a> and computed by <a href="http://grail.cs.washington.edu/software/pmvs/">PMVS2</a>:<br />
<img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/10/animation-cmvs.gif" alt="" title="animation-cmvs" width="580" height="250" class="alignnone size-full wp-image-939" /></p>
<p>You can also take a look at the <a href="http://photosynth.net/">PhotoSynth</a> reconstruction of the place with <a href="http://photosynth.net/view.aspx?cid=e82eca65-60fe-498b-8916-80d1e3245640">53 pictures</a> and <a href="http://photosynth.net/view.aspx?cid=93c72ebb-5c54-4aff-ad12-3d0c5ade31fd">26 (without the fountain)</a>.</p>
<p>This is the SFMToolkit workflow:<br />
<img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/10/sfmtoolkit_toolchain.jpg" alt="" title="sfmtoolkit_toolchain" width="570" height="160" class="alignnone size-full wp-image-954" /></p>
<p>SFMToolkit is composed of several programs:</p>
<ul style="margin-left: 20px">
<li>BundlerFocalExtractor: extracts the CCD width from the Exif data using an XML database.</li>
<li>BundlerMatcher: extracts and matches features using <a href="http://www.cs.unc.edu/~ccwu/siftgpu/">SiftGPU</a>.</li>
<li>Bundler: <a href="http://phototour.cs.washington.edu/bundler/">http://phototour.cs.washington.edu/bundler/</a> created by Noah Snavely.</li>
<li>CMVS: <a href="http://grail.cs.washington.edu/software/cmvs/">http://grail.cs.washington.edu/software/cmvs/</a> created by Yasutaka Furukawa.</li>
<li>PMVS2: <a href="http://grail.cs.washington.edu/software/pmvs/">http://grail.cs.washington.edu/software/pmvs/</a> created by Yasutaka Furukawa.</li>
<li>BundlerViewer: a viewer for Bundler and PMVS2 output, based on <a href="http://www.ogre3d.org/">Ogre3D</a> (an open-source 3D rendering engine).</li>
</ul>
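<p>To give an idea of how these pieces chain together, here is a rough command-line sketch of the pipeline. The executable names and arguments below are illustrative only; the exact invocations are in the WSH scripts shipped with the toolkit:</p>

```shell
# Hypothetical sketch of the SFMToolkit pipeline (names and arguments illustrative).
# 1. Extract the CCD width / focal length from Exif data for Bundler.
BundlerFocalExtractor.exe list.txt
# 2. Extract and match SIFT features on the GPU with SiftGPU.
BundlerMatcher.exe list.txt matches.init.txt
# 3. Run structure from motion: sparse point cloud + camera poses.
bundler.exe list.txt --options_file options.txt
# 4. Split the scene into manageable image clusters.
cmvs.exe pmvs\
# 5. Densify each cluster into a dense point cloud.
pmvs2.exe pmvs\ option-0000
```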
<h3>Download</h3>
<p>As you can see, this &#8220;toolkit&#8221; is composed of several open-source components. This is why I have decided to open-source my part of the job too. You can download the source code from the <a href="http://github.com/dddExperiments/SFMToolkit">SFMToolkit github</a>. You can also download a pre-compiled x64 version of the toolkit with Windows scripts (WSH) for easier usage (but not cross-platform): <a style="font-weight: bold; font-size: 15px;" href="http://www.visual-experiments.com/blog/?sdmon=downloads/SFMToolkit1.zip">SFMToolkit1.zip</a>.</p>
<h3>Help</h3>
<p>If you need some help or just want to discuss photogrammetry, please join the <a href="http://pgrammetry.com/forum/" style="font-weight: bold; font-size: 15px;">photogrammetry forum</a> created by Olafur Haraldsson. You may also be interested in Josh Harle&#8217;s <a style="font-weight: bold; font-size: 15px;" href="http://www.visual-experiments.com/2010/09/23/pmvs2-x64-and-videos-tutorials/">video tutorials</a>; they are partially outdated since the release of SFMToolkit, but they are very good for learning how to use <a href="http://meshlab.sourceforge.net/">MeshLab</a>.</p>
<p><a style="color:red; font-size: 15px; text-decoration: underline; font-weight: bold" href="http://www.visual-experiments.com/demos/sfmtoolkit/">Please go to the SFMToolkit page to get the latest version</a></p>
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2010%2F11%2F05%2Fstructure-from-motion-toolkit-released%2F&amp;title=Structure%20From%20Motion%20Toolkit%20released"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2010/11/05/structure-from-motion-toolkit-released/feed/</wfw:commentRss>
		<slash:comments>19</slash:comments>
		</item>
		<item>
		<title>PMVS2 x64 and video tutorials</title>
		<link>http://www.visual-experiments.com/2010/09/23/pmvs2-x64-and-videos-tutorials/</link>
		<comments>http://www.visual-experiments.com/2010/09/23/pmvs2-x64-and-videos-tutorials/#comments</comments>
		<pubDate>Thu, 23 Sep 2010 09:26:12 +0000</pubDate>
		<dc:creator>Henri</dc:creator>
				<category><![CDATA[photogrammetry]]></category>
		<category><![CDATA[photosynth]]></category>
		<category><![CDATA[bundler]]></category>
		<category><![CDATA[photosynthtoolkit]]></category>
		<category><![CDATA[pmvs]]></category>
		<category><![CDATA[sfmtoolkit]]></category>
		<guid isPermaLink="false">http://www.visual-experiments.com/?p=772</guid>
		<description><![CDATA[PMVS2 x64 I&#8217;ve finally managed to spend a couple of hours to compile a 64-bit version of PMVS2 for Windows! You can download PMVS2_x64.zip right now. I hope this version will help some people; I&#8217;ve personally managed to create a very dense model thanks to this version, and PMVS2 was using more than [...]]]></description>
			<content:encoded><![CDATA[<h3>PMVS2 x64</h3>
<p>I&#8217;ve finally managed to spend a couple of hours to compile a 64-bit version of <a href="http://grail.cs.washington.edu/software/pmvs/">PMVS2</a> for Windows! You can download <a href="http://code.google.com/p/visual-experiments/downloads/list">PMVS2_x64.zip</a> right now. I hope this version will help some people; I&#8217;ve personally managed to create a very dense model thanks to it, with PMVS2 using more than 4 GB of RAM on an 8-core machine.</p>
<blockquote>
<p><strong>How to compile PMVS2 x64 by yourself:</strong></p>
<ul style="margin-left: 20px">
<li>download the <a href="http://francemapping.free.fr/Portfolio/Prog3D/CMVS.html">CMake package of CMVS</a> (containing PMVS) created by Pierre Moulon;</li>
<li>download and compile gsl 1.8;</li>
<li>download the precompiled pthread x64 lib from the <a href="http://www.equalizergraphics.com/cgi-bin/viewvc.cgi/trunk/src/Windows/pthreads/">equalizer svn</a>;</li>
<li>download and compile <a href="http://icl.cs.utk.edu/lapack-for-windows/clapack/clapack-3.2.1-CMAKE.tgz">clapack 3.2.1 using CMake</a>.</li>
</ul>
</blockquote>
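<p>For readers unfamiliar with CMake out-of-source builds on Windows, the dependency builds above look roughly like this from a command prompt. This is only a sketch: the generator name, directory names, and paths are assumptions to adapt to your own setup:</p>

```shell
# Hypothetical build sketch; adjust the CMake generator and paths to your machine.
# Build clapack 3.2.1 for x64 with CMake (out-of-source build).
cd clapack-3.2.1-CMAKE
mkdir build && cd build
cmake -G "Visual Studio 9 2008 Win64" ..
cmake --build . --config Release
# Build gsl 1.8 the same way, and place the prebuilt pthread x64 lib
# from the equalizer svn alongside the other libraries.
# Then configure and build Pierre Moulon's CMake package of CMVS/PMVS:
cd CMVS
mkdir build && cd build
cmake -G "Visual Studio 9 2008 Win64" ..
cmake --build . --config Release
```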
<h3>PhotoSynthTileDownloader</h3>
<p>As requested, I&#8217;ve updated my PhotoSynthTileDownloader: you can now resume a partial download! It&#8217;s available right now: <a href="http://code.google.com/p/visual-experiments/downloads/list">PhotoSynthTileDownloader2.zip</a>.</p>
<h3>Video tutorials</h3>
<p><a href="http://blog.neonascent.net/">Josh Harle</a> has made some very nice video tutorials on how to use my PhotoSynth Toolkit, and has created another toolkit for Bundler that uses my BundlerMatcher.<br />
<img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/09/photosynth.jpg" alt="" title="photosynth" width="570" height="62" class="alignnone size-full wp-image-806" /><br />
<a href="http://blog.neonascent.net/archives/photosynth-toolkit/">PhotoSynth Toolkit post on Josh Harle&#8217;s blog</a>.<br />
<iframe src="http://player.vimeo.com/video/14796939" width="560" height="420" frameborder="0"></iframe></p>
<blockquote><p>
<strong>Note:</strong> In fact, your synths don&#8217;t need to be 100% synthy. My tool (PhotoSynth2PMVS) can handle an incomplete synth. And you can now use my 64-bit version of PMVS2 as well.
</p></blockquote>
<p><a style="color:red; font-size: 15px; text-decoration: underline; font-weight: bold" href="http://www.visual-experiments.com/demos/photosynthtoolkit/">Please go to the PhotoSynthToolkit page to get the latest version</a><br />
<br />
<img src="http://www.visual-experiments.com/blog/wp-content/uploads/2010/09/bundlermatcher.jpg" alt="" title="bundlermatcher" width="570" height="55" class="alignnone size-full wp-image-807" /><br />
<a href="http://blog.neonascent.net/archives/bundler-photogrammetry-package/">Bundler photogrammetry package post on Josh Harle&#8217;s blog</a>.<br />
<iframe src="http://player.vimeo.com/video/14783202" width="560" height="420" frameborder="0"></iframe><br />
</p>
<p><a style="color:red; font-size: 15px; text-decoration: underline; font-weight: bold" href="http://www.visual-experiments.com/demos/sfmtoolkit/">Please go to the SFMToolkit page to get the latest version</a></p>
<p><a class="a2a_dd addtoany_share_save" href="http://www.addtoany.com/share_save#url=http%3A%2F%2Fwww.visual-experiments.com%2F2010%2F09%2F23%2Fpmvs2-x64-and-videos-tutorials%2F&amp;title=PMVS2%20x64%20and%20videos%20tutorials"><img src="http://www.visual-experiments.com/blog/wp-content/plugins/add-to-any/share_save_171_16.png" width="171" height="16" alt="Share"/></a> </p>]]></content:encoded>
			<wfw:commentRss>http://www.visual-experiments.com/2010/09/23/pmvs2-x64-and-videos-tutorials/feed/</wfw:commentRss>
		<slash:comments>52</slash:comments>
		</item>
	</channel>
</rss>
