Affine region detectors
=======================

What is being detected?
-----------------------

An affine region is essentially any region of an image that is stable under
affine transformations. It can be an edge under affinity conditions, a corner
(a small patch of the image), or any other stable feature.

Available detectors
-------------------

At the moment, the following detectors are implemented:

- Harris detector
- Hessian detector

Algorithm steps
---------------

Harris and Hessian
~~~~~~~~~~~~~~~~~~

Both are derived from a concept called the Moravec window. Let's have a look
at the image below:

.. figure:: ../_images/Moravec-window-corner.png
   :alt: Moravec window corner case

   Moravec window corner case

As can be seen, moving the yellow window in any direction causes a large
change in intensity. Now let's have a look at the edge case:

.. figure:: ../_images/Moravec-window-edge.png
   :alt: Moravec window edge case

   Moravec window edge case

In this case the intensity changes only when the window moves in one
particular direction, across the edge; sliding along the edge changes very
little.

This is the key concept in understanding how the two corner detectors work.
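To make the idea concrete, here is a minimal, self-contained sketch of the
Moravec measurement itself. It is not GIL code: the image is just a
hypothetical row-major ``std::vector`` of gray values, and the window size and
shift set are chosen only for illustration. A corner yields a large value even
for the *smallest* change over all shifts, while an edge yields a small value
for shifts along its direction.

.. code-block:: cpp

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <limits>
    #include <vector>

    // Sum of squared differences between the window anchored at (x, y) and
    // the same window shifted by (dx, dy). No bounds checking, for brevity.
    double window_change(std::vector<std::uint8_t> const& img, std::ptrdiff_t width,
                         std::ptrdiff_t x, std::ptrdiff_t y,
                         std::ptrdiff_t dx, std::ptrdiff_t dy,
                         std::ptrdiff_t window = 3)
    {
        double ssd = 0.0;
        for (std::ptrdiff_t j = 0; j < window; ++j)
            for (std::ptrdiff_t i = 0; i < window; ++i)
            {
                double a = img[(y + j) * width + (x + i)];
                double b = img[(y + j + dy) * width + (x + i + dx)];
                ssd += (a - b) * (a - b);
            }
        return ssd;
    }

    // Moravec response: the smallest change over a set of one-pixel shifts.
    // Corners give a large minimum; edges and flat regions give a small one.
    double moravec_response(std::vector<std::uint8_t> const& img, std::ptrdiff_t width,
                            std::ptrdiff_t x, std::ptrdiff_t y)
    {
        const std::ptrdiff_t shifts[8][2] = {{1, 0},  {-1, 0}, {0, 1},  {0, -1},
                                             {1, 1},  {1, -1}, {-1, 1}, {-1, -1}};
        double response = std::numeric_limits<double>::max();
        for (auto const& s : shifts)
            response = std::min(response, window_change(img, width, x, y, s[0], s[1]));
        return response;
    }

Harris and Hessian replace this explicit window sliding with image
derivatives, which is what the steps below formalize.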
The algorithms have the same structure:

1. Compute image derivatives
2. Compute weighted sum
3. Compute response
4. Threshold (optional)

Harris and Hessian differ in which **derivatives they compute**. Harris
computes the following derivatives:

``HarrisMatrix = [(dx)^2, dxdy], [dxdy, (dy)^2]``

(note that ``(dx)^2`` and ``(dy)^2`` are numerical squares of the first
derivatives, not derivatives taken twice).

The three distinct terms of the matrix can be separated into three images to
simplify the implementation. Hessian, on the other hand, computes second
order derivatives:

``HessianMatrix = [dxdx, dxdy], [dxdy, dydy]``

The **weighted sum** is the same for both. A Gaussian blur kernel is usually
used for the weights, because corners should have a hill-like curvature in
the gradients, and other weights might be noisy. Essentially, the weight
matrix is overlaid on a patch of the image, the sum
``s = image[x + i, y + j] * weights[i, j]`` over ``i, j`` from zero up to the
weight matrix dimensions is computed, and the window is then moved and the
sum recomputed until the whole image is covered.
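Steps 1 and 2 for the Harris matrix might look like the sketch below. It
stays in the same illustrative style rather than using GIL views:
``GrayImage`` is a hypothetical helper (all images are assumed to be
pre-allocated with the same size), the derivatives are plain central
differences, and the 3x3 Gaussian kernel is simply the smallest sensible
choice of weights.

.. code-block:: cpp

    #include <cstddef>
    #include <vector>

    // Minimal row-major image type used only for these sketches.
    struct GrayImage
    {
        std::ptrdiff_t width = 0, height = 0;
        std::vector<double> data;  // size == width * height
        double& at(std::ptrdiff_t x, std::ptrdiff_t y) { return data[y * width + x]; }
        double at(std::ptrdiff_t x, std::ptrdiff_t y) const { return data[y * width + x]; }
    };

    // Step 1: central-difference derivatives and the three Harris terms.
    void harris_terms(GrayImage const& src, GrayImage& dx2, GrayImage& dy2, GrayImage& dxdy)
    {
        for (std::ptrdiff_t y = 1; y + 1 < src.height; ++y)
            for (std::ptrdiff_t x = 1; x + 1 < src.width; ++x)
            {
                double dx = (src.at(x + 1, y) - src.at(x - 1, y)) / 2.0;
                double dy = (src.at(x, y + 1) - src.at(x, y - 1)) / 2.0;
                dx2.at(x, y) = dx * dx;    // (dx)^2
                dy2.at(x, y) = dy * dy;    // (dy)^2
                dxdy.at(x, y) = dx * dy;   // dx * dy
            }
    }

    // Step 2: weighted sum of one term image with a 3x3 Gaussian kernel.
    void weighted_sum(GrayImage const& term, GrayImage& out)
    {
        const double w[3][3] = {{1 / 16.0, 2 / 16.0, 1 / 16.0},
                                {2 / 16.0, 4 / 16.0, 2 / 16.0},
                                {1 / 16.0, 2 / 16.0, 1 / 16.0}};
        for (std::ptrdiff_t y = 1; y + 1 < term.height; ++y)
            for (std::ptrdiff_t x = 1; x + 1 < term.width; ++x)
            {
                double s = 0.0;
                for (int j = -1; j <= 1; ++j)
                    for (int i = -1; i <= 1; ++i)
                        s += term.at(x + i, y + j) * w[j + 1][i + 1];
                out.at(x, y) = s;
            }
    }

Running ``weighted_sum`` over the three term images yields the three entries
of the symmetric Harris matrix at every pixel; the Hessian variant differs
only in step 1, where second derivatives are computed instead.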
The **response computation** is a matter of choice. Given the general form of
both matrices above,

``[a, b], [c, d]``

one of the response functions is

``response = det - k * trace^2 = a * d - b * c - k * (a + d)^2``

(both matrices are symmetric, so ``b = c`` and the determinant is
``a * d - b^2``). ``k`` is called the discrimination constant; usual values
are ``0.04`` to ``0.06``.

The other is simply the determinant:

``response = det = a * d - b * c``

**Thresholding** is optional, but without it the result will be extremely
noisy. For complex images, such as outdoor scenes, the threshold for Harris
will be on the order of 100000000 and for Hessian on the order of 10000. For
simpler images, values on the order of hundreds to thousands should be
enough. These numbers assume a ``uint8_t`` gray image.
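Steps 3 and 4 then reduce to a few lines. The sketch below continues the
earlier one (same hypothetical ``GrayImage``); ``a``, ``b`` and ``d`` are the
Gaussian-weighted images of ``(dx)^2``, ``dxdy`` and ``(dy)^2`` from step 2,
and the values of ``k`` and the threshold are merely the typical figures
quoted above.

.. code-block:: cpp

    // Steps 3 and 4: Harris response followed by optional thresholding.
    // The matrix is symmetric, so both off-diagonal entries come from b.
    void harris_response(GrayImage const& a, GrayImage const& b, GrayImage const& d,
                         double k, double threshold, GrayImage& response)
    {
        for (std::ptrdiff_t y = 0; y < a.height; ++y)
            for (std::ptrdiff_t x = 0; x < a.width; ++x)
            {
                double det = a.at(x, y) * d.at(x, y) - b.at(x, y) * b.at(x, y);
                double trace = a.at(x, y) + d.at(x, y);
                double r = det - k * trace * trace;           // step 3
                response.at(x, y) = r > threshold ? r : 0.0;  // step 4: keep strong responses
            }
    }

A call such as ``harris_response(a, b, d, 0.04, 1e8, r)`` matches the Harris
numbers above; for the Hessian detector the same loop is used with the
second-derivative images and ``response = det`` only.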
To get a deeper explanation, please refer to the following **papers**:

- Harris, Christopher G., and Mike Stephens. "A combined corner and edge
  detector." In Alvey Vision Conference, vol. 15, no. 50, pp. 10-5244, 1988.
  http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.434.4816&rep=rep1&type=pdf
- Mikolajczyk, Krystian, and Cordelia Schmid. "An affine invariant interest
  point detector." In European Conference on Computer Vision, pp. 128-142.
  Springer, Berlin, Heidelberg, 2002. https://hal.inria.fr/inria-00548252/document
- Mikolajczyk, Krystian, Tinne Tuytelaars, Cordelia Schmid, Andrew Zisserman,
  Jiri Matas, Frederik Schaffalitzky, Timor Kadir, and Luc Van Gool. "A
  comparison of affine region detectors." International Journal of Computer
  Vision 65, no. 1-2 (2005): 43-72. https://hal.inria.fr/inria-00548528/document