Tutorial: Image Gradient
========================

.. contents::
   :local:

This comprehensive (and long) tutorial will walk you through an example of
using GIL to compute the image gradients.

We will start with some very simple and non-generic code and make it more
generic as we go along. Let us start with a horizontal gradient and use the
simplest possible approximation to a gradient - central difference.

The gradient at pixel x can be approximated with the half-difference of its
two neighboring pixels::

    D[x] = (I[x-1] - I[x+1]) / 2

For simplicity, we will also ignore the boundary cases - the pixels along the
edges of the image for which one of the neighbors is not defined. The focus of
this document is how to use GIL, not how to create a good gradient generation
algorithm.
Interface and Glue Code
-----------------------

Let us first start with an 8-bit unsigned grayscale image as the input and an
8-bit signed grayscale image as the output.

Here is what the interface to our algorithm looks like:

.. code-block:: cpp

   #include <boost/gil.hpp>
   using namespace boost::gil;

   void x_gradient(gray8c_view_t const& src, gray8s_view_t const& dst)
   {
       assert(src.dimensions() == dst.dimensions());
       ...    // compute the gradient
   }

``gray8c_view_t`` is the type of the source image view - an 8-bit grayscale
view, whose pixels are read-only (denoted by the "c"). The output is a
grayscale view with an 8-bit signed (denoted by the "s") integer channel
type. See Appendix 1 for the complete convention GIL uses to name concrete
types.

GIL makes a distinction between an image and an image view. A GIL **image
view** is a shallow, lightweight view of a rectangular grid of pixels. It
provides access to the pixels but does not own them. Copy-constructing a view
does not deep-copy the pixels. Image views do not propagate their constness
to the pixels and should always be taken by a const reference. Whether a view
is mutable or read-only (immutable) is a property of the view type.

A GIL `image`, on the other hand, is a view with associated ownership. It is
a container of pixels; its constructor/destructor allocates/deallocates the
pixels, its copy-constructor performs a deep copy of the pixels and its
``operator==`` performs a deep compare of the pixels. Images also propagate
their constness to their pixels - a constant reference to an image will not
allow modifying its pixels.

Most GIL algorithms operate on image views; images are rarely needed. GIL's
design is very similar to that of the STL. The STL equivalent of GIL's image
is a container, like ``std::vector``, whereas GIL's image view corresponds to
an STL range, which is often represented with a pair of iterators. STL
algorithms operate on ranges, just like GIL algorithms operate on image
views.

GIL's image views can be constructed from raw data - the dimensions, the
number of bytes per row, and the pixels, which for chunky views are
represented with one pointer. Here is how to provide the glue between your
code and GIL:

.. code-block:: cpp

   void ComputeXGradientGray8(
       unsigned char const* src_pixels, ptrdiff_t src_row_bytes,
       int w, int h,
       signed char* dst_pixels, ptrdiff_t dst_row_bytes)
   {
       gray8c_view_t src =
           interleaved_view(w, h, (gray8_pixel_t const*)src_pixels, src_row_bytes);
       gray8s_view_t dst =
           interleaved_view(w, h, (gray8s_pixel_t*)dst_pixels, dst_row_bytes);
       x_gradient(src, dst);
   }

This glue code is very fast and views are lightweight - in the above example
the views have a size of 16 bytes. They consist of a pointer to the top left
pixel and three integers - the width, height, and number of bytes per row.
First Implementation
--------------------

Focusing on simplicity at the expense of speed, we can compute the horizontal
gradient like this:

.. code-block:: cpp

   void x_gradient(gray8c_view_t const& src, gray8s_view_t const& dst)
   {
       for (int y = 0; y < src.height(); ++y)
           for (int x = 1; x < src.width() - 1; ++x)
               dst(x, y) = (src(x-1, y) - src(x+1, y)) / 2;
   }

We use the image view's ``operator(x,y)`` to get a reference to the pixel at
a given location and we set it to the half-difference of its left and right
neighbors. ``operator()`` returns a reference to a grayscale pixel. A
grayscale pixel is convertible to its channel type (``unsigned char`` for
``src``) and it can be copy-constructed from a channel. (This is only true
for grayscale pixels.)

While the above code is easy to read, it is not very fast, because the binary
``operator()`` computes the location of the pixel in a 2D grid, which
involves addition and multiplication. Here is a faster version of the above:

.. code-block:: cpp

   void x_gradient(gray8c_view_t const& src, gray8s_view_t const& dst)
   {
       for (int y = 0; y < src.height(); ++y)
       {
           gray8c_view_t::x_iterator src_it = src.row_begin(y);
           gray8s_view_t::x_iterator dst_it = dst.row_begin(y);

           for (int x = 1; x < src.width() - 1; ++x)
               dst_it[x] = (src_it[x-1] - src_it[x+1]) / 2;
       }
   }

We use pixel iterators initialized at the beginning of each row. GIL's
iterators are Random Access Traversal iterators. If you are not familiar with
random access iterators, think of them as if they were pointers. In fact, in
the above example the two iterator types are raw C pointers and their
``operator[]`` is a fast pointer indexing operator.

The code to compute the gradient in the vertical direction is very similar:

.. code-block:: cpp

   void y_gradient(gray8c_view_t const& src, gray8s_view_t const& dst)
   {
       for (int x = 0; x < src.width(); ++x)
       {
           gray8c_view_t::y_iterator src_it = src.col_begin(x);
           gray8s_view_t::y_iterator dst_it = dst.col_begin(x);

           for (int y = 1; y < src.height() - 1; ++y)
               dst_it[y] = (src_it[y-1] - src_it[y+1]) / 2;
       }
   }

Instead of looping over the rows, we loop over each column and create a
``y_iterator``, an iterator moving vertically. In this case a simple pointer
cannot be used because the distance between two adjacent pixels equals the
number of bytes in each row of the image. GIL uses here a special step
iterator class whose size is 8 bytes - it contains a raw C pointer and a
step. Its ``operator[]`` multiplies the index by its step.
The above version of ``y_gradient``, however, is much slower (easily an order
of magnitude slower) than ``x_gradient`` because of the memory access
pattern; traversing an image vertically results in lots of cache misses. A
much more efficient and cache-friendly version will iterate over the columns
in the inner loop:

.. code-block:: cpp

   void y_gradient(gray8c_view_t const& src, gray8s_view_t const& dst)
   {
       for (int y = 1; y < src.height() - 1; ++y)
       {
           gray8c_view_t::x_iterator src1_it = src.row_begin(y-1);
           gray8c_view_t::x_iterator src2_it = src.row_begin(y+1);
           gray8s_view_t::x_iterator dst_it = dst.row_begin(y);

           for (int x = 0; x < src.width(); ++x)
           {
               *dst_it = ((*src1_it) - (*src2_it)) / 2;
               ++dst_it;
               ++src1_it;
               ++src2_it;
           }
       }
   }

This sample code also shows an alternative way of using pixel iterators -
instead of ``operator[]`` one could use increments and dereferences.

Using Locators
--------------

Unfortunately this cache-friendly version requires the extra hassle of
maintaining two separate iterators in the source view. For every pixel, we
want to access its neighbors above and below it. Such relative access can be
done with GIL locators:

.. code-block:: cpp

   void y_gradient(gray8c_view_t const& src, gray8s_view_t const& dst)
   {
       gray8c_view_t::xy_locator src_loc = src.xy_at(0,1);
       for (int y = 1; y < src.height() - 1; ++y)
       {
           gray8s_view_t::x_iterator dst_it = dst.row_begin(y);

           for (int x = 0; x < src.width(); ++x)
           {
               (*dst_it) = (src_loc(0,-1) - src_loc(0,1)) / 2;
               ++dst_it;
               ++src_loc.x();  // each dimension can be advanced separately
           }
           src_loc += point<std::ptrdiff_t>(-src.width(), 1); // carriage return
       }
   }

The first line creates a locator pointing to the first pixel of the second
row of the source view. A GIL pixel locator is very similar to an iterator,
except that it can move both horizontally and vertically. ``src_loc.x()`` and
``src_loc.y()`` return references to a horizontal and a vertical iterator
respectively, which can be used to move the locator along the desired
dimension, as shown above. Additionally, the locator can be advanced in both
dimensions simultaneously using its ``operator+=`` and ``operator-=``.
Similar to image views, locators provide a binary ``operator()`` which
returns a reference to a pixel at a relative offset from the current locator
position. For example, ``src_loc(0,1)`` returns a reference to the neighbor
below the current pixel. Locators are very lightweight objects - in the above
example the locator has a size of 8 bytes - it consists of a raw pointer to
the current pixel and an int indicating the number of bytes from one row to
the next (which is the step when moving vertically). The call to
``++src_loc.x()`` corresponds to a single C pointer increment.

However, the example above performs more computations than necessary. The
code ``src_loc(0,1)`` has to compute the offset of the pixel in two
dimensions, which is slow. Notice though that the offset of the two neighbors
is the same, regardless of the pixel location. To improve the performance,
GIL can cache and reuse this offset::

    void y_gradient(gray8c_view_t const& src, gray8s_view_t const& dst)
    {
        gray8c_view_t::xy_locator src_loc = src.xy_at(0,1);
        gray8c_view_t::xy_locator::cached_location_t above = src_loc.cache_location(0,-1);
        gray8c_view_t::xy_locator::cached_location_t below = src_loc.cache_location(0, 1);

        for (int y = 1; y < src.height() - 1; ++y)
        {
            gray8s_view_t::x_iterator dst_it = dst.row_begin(y);

            for (int x = 0; x < src.width(); ++x)
            {
                (*dst_it) = (src_loc[above] - src_loc[below]) / 2;
                ++dst_it;
                ++src_loc.x();
            }
            src_loc += point<std::ptrdiff_t>(-src.width(), 1);
        }
    }

In this example ``src_loc[above]`` corresponds to a fast pointer indexing
operation and the code is efficient.
h1jKubj)rR}rS(h0X``src_loc[above]``h6}rT(h8]h9]h:]h;]h=]uh1jKh+]rUhIXsrc_loc[above]rVrW}rX(h0Uh1jRubah4jubhIXL corresponds to a fast pointer indexing operation and the code is efficient.rYrZ}r[(h0XL corresponds to a fast pointer indexing operation and the code is efficient.h1jKubeubeubh-)r\}r](h0Uh1h.h2h3h4h5h6}r^(h8]h9]h:]h;]r_hah=]r`hauh?M h@hh+]ra(hB)rb}rc(h0hh1j\h2h3h4hFh6}rd(h;]h:]h8]h9]h=]johuh?M h@hh+]rehIX,Creating a Generic Version of GIL Algorithmsrfrg}rh(h0hh1jbubaubhc)ri}rj(h0XLet us make our ``x_gradient`` more generic. It should work with any image views, as long as they have the same number of channels. The gradient operation is to be computed for each channel independently.h1j\h2h3h4hth6}rk(h8]h9]h:]h;]h=]uh?Mh@hh+]rl(hIXLet us make our rmrn}ro(h0XLet us make our h1jiubj)rp}rq(h0X``x_gradient``h6}rr(h8]h9]h:]h;]h=]uh1jih+]rshIX x_gradientrtru}rv(h0Uh1jpubah4jubhIX more generic. It should work with any image views, as long as they have the same number of channels. The gradient operation is to be computed for each channel independently.rwrx}ry(h0X more generic. It should work with any image views, as long as they have the same number of channels. The gradient operation is to be computed for each channel independently.h1jiubeubhc)rz}r{(h0X)Here is how the new interface looks like:r|h1j\h2h3h4hth6}r}(h8]h9]h:]h;]h=]uh?Mh@hh+]r~hIX)Here is how the new interface looks like:rr}r(h0j|h1jzubaubjS)r}r(h0Xtemplate void x_gradient(const SrcView& src, const DstView& dst) { gil_function_requires >(); gil_function_requires >(); gil_function_requires < ColorSpacesCompatibleConcept < typename color_space_type::type, typename color_space_type::type > >(); ... 
// compute the gradient }h1j\h2h3h4jVh6}r(jjXcppjXjYh;]h:]h8]j}h9]h=]uh?Mh@hh+]rhIXtemplate void x_gradient(const SrcView& src, const DstView& dst) { gil_function_requires >(); gil_function_requires >(); gil_function_requires < ColorSpacesCompatibleConcept < typename color_space_type::type, typename color_space_type::type > >(); ... // compute the gradient }rr}r(h0Uh1jubaubhc)r}r(h0XThe new algorithm now takes the types of the input and output image views as template parameters. That allows using both built-in GIL image views, as well as any user-defined image view classes. The first three lines are optional; they use ``boost::concept_check`` to ensure that the two arguments are valid GIL image views, that the second one is mutable and that their color spaces are compatible (i.e. have the same set of channels).h1j\h2h3h4hth6}r(h8]h9]h:]h;]h=]uh?M'h@hh+]r(hIXThe new algorithm now takes the types of the input and output image views as template parameters. That allows using both built-in GIL image views, as well as any user-defined image view classes. The first three lines are optional; they use rr}r(h0XThe new algorithm now takes the types of the input and output image views as template parameters. That allows using both built-in GIL image views, as well as any user-defined image view classes. The first three lines are optional; they use h1jubj)r}r(h0X``boost::concept_check``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIXboost::concept_checkrr}r(h0Uh1jubah4jubhIX to ensure that the two arguments are valid GIL image views, that the second one is mutable and that their color spaces are compatible (i.e. have the same set of channels).rr}r(h0X to ensure that the two arguments are valid GIL image views, that the second one is mutable and that their color spaces are compatible (i.e. have the same set of channels).h1jubeubhc)r}r(h0X\GIL does not require using its own built-in constructs. You are free to use your own channels, color spaces, iterators, locators, views and images. 
However, to work with the rest of GIL they have to satisfy a set of requirements; in other words, they have to model the corresponding GIL concept. GIL's concepts are defined in the user guide.h1j\h2h3h4hth6}r(h8]h9]h:]h;]h=]uh?M/h@hh+]rhIX[GIL does not require using its own built-in constructs. You are free to use your own channels, color spaces, iterators, locators, views and images. However, to work with the rest of GIL they have to satisfy a set of requirements; in other words, they have to model the corresponding GIL concept. GIL's concepts are defined in the user guide.rr}r(h0X\GIL does not require using its own built-in constructs. You are free to use your own channels, color spaces, iterators, locators, views and images. However, to work with the rest of GIL they have to satisfy a set of requirements; in other words, they have to model the corresponding GIL concept. GIL's concepts are defined in the user guide.h1jubaubhc)r}r(h0XOne of the biggest drawbacks of using templates and generic programming in C++ is that compile errors can be very difficult to comprehend. This is a side-effect of the lack of early type checking - a generic argument may not satisfy the requirements of a function, but the incompatibility may be triggered deep into a nested call, in code unfamiliar and hardly related to the problem. GIL uses ``boost::concept_check`` to mitigate this problem. The above three lines of code check whether the template parameters are valid models of their corresponding concepts. If a model is incorrect, the compile error will be inside ``gil_function_requires``, which is much closer to the problem and easier to track. Furthermore, such checks get compiled out and have zero performance overhead. The disadvantage of using concept checks is the sometimes severe impact they have on compile time. 
This is why GIL performs concept checks only in debug mode, and only if ``BOOST_GIL_USE_CONCEPT_CHECK`` is defined (off by default).h1j\h2h3h4hth6}r(h8]h9]h:]h;]h=]uh?M6h@hh+]r(hIXOne of the biggest drawbacks of using templates and generic programming in C++ is that compile errors can be very difficult to comprehend. This is a side-effect of the lack of early type checking - a generic argument may not satisfy the requirements of a function, but the incompatibility may be triggered deep into a nested call, in code unfamiliar and hardly related to the problem. GIL uses rr}r(h0XOne of the biggest drawbacks of using templates and generic programming in C++ is that compile errors can be very difficult to comprehend. This is a side-effect of the lack of early type checking - a generic argument may not satisfy the requirements of a function, but the incompatibility may be triggered deep into a nested call, in code unfamiliar and hardly related to the problem. GIL uses h1jubj)r}r(h0X``boost::concept_check``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIXboost::concept_checkrr}r(h0Uh1jubah4jubhIX to mitigate this problem. The above three lines of code check whether the template parameters are valid models of their corresponding concepts. If a model is incorrect, the compile error will be inside rr}r(h0X to mitigate this problem. The above three lines of code check whether the template parameters are valid models of their corresponding concepts. If a model is incorrect, the compile error will be inside h1jubj)r}r(h0X``gil_function_requires``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIXgil_function_requiresrr}r(h0Uh1jubah4jubhIX4, which is much closer to the problem and easier to track. Furthermore, such checks get compiled out and have zero performance overhead. The disadvantage of using concept checks is the sometimes severe impact they have on compile time. This is why GIL performs concept checks only in debug mode, and only if rr}r(h0X4, which is much closer to the problem and easier to track. 
Furthermore, such checks get compiled out and have zero performance overhead. The disadvantage of using concept checks is the sometimes severe impact they have on compile time. This is why GIL performs concept checks only in debug mode, and only if h1jubj)r}r(h0X``BOOST_GIL_USE_CONCEPT_CHECK``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIXBOOST_GIL_USE_CONCEPT_CHECKrr}r(h0Uh1jubah4jubhIX is defined (off by default).rr}r(h0X is defined (off by default).h1jubeubhc)r}r(h0XThe body of the generic function is very similar to that of the concrete one. The biggest difference is that we need to loop over the channels of the pixel and compute the gradient for each channel:rh1j\h2h3h4hth6}r(h8]h9]h:]h;]h=]uh?MGh@hh+]rhIXThe body of the generic function is very similar to that of the concrete one. The biggest difference is that we need to loop over the channels of the pixel and compute the gradient for each channel:rr}r(h0jh1jubaubjS)r}r(h0Xtemplate void x_gradient(const SrcView& src, const DstView& dst) { for (int y=0; y < src.height(); ++y) { typename SrcView::x_iterator src_it = src.row_begin(y); typename DstView::x_iterator dst_it = dst.row_begin(y); for (int x = 1; x < src.width() - 1; ++x) for (int c = 0; c < num_channels::value; ++c) dst_it[x][c] = (src_it[x-1][c]- src_it[x+1][c]) / 2; } }h1j\h2h3h4jVh6}r(jjXcppjXjYh;]h:]h8]j}h9]h=]uh?MKh@hh+]rhIXtemplate void x_gradient(const SrcView& src, const DstView& dst) { for (int y=0; y < src.height(); ++y) { typename SrcView::x_iterator src_it = src.row_begin(y); typename DstView::x_iterator dst_it = dst.row_begin(y); for (int x = 1; x < src.width() - 1; ++x) for (int c = 0; c < num_channels::value; ++c) dst_it[x][c] = (src_it[x-1][c]- src_it[x+1][c]) / 2; } }rr}r(h0Uh1jubaubhc)r}r(h0XHaving an explicit loop for each channel could be a performance problem. GIL allows us to abstract out such per-channel operations:rh1j\h2h3h4hth6}r(h8]h9]h:]h;]h=]uh?M[h@hh+]rhIXHaving an explicit loop for each channel could be a performance problem. 
GIL allows us to abstract out such per-channel operations:rr}r(h0jh1jubaubjS)r}r(h0Xtemplate struct halfdiff_cast_channels { template Out operator()(T const& in1, T const& in2) const { return Out((in1 - in2) / 2); } }; template void x_gradient(const SrcView& src, const DstView& dst) { typedef typename channel_type::type dst_channel_t; for (int y=0; y < src.height(); ++y) { typename SrcView::x_iterator src_it = src.row_begin(y); typename DstView::x_iterator dst_it = dst.row_begin(y); for (int x=1; x < src.width() - 1; ++x) { static_transform(src_it[x-1], src_it[x+1], dst_it[x], halfdiff_cast_channels()); } } }h1j\h2h3h4jVh6}r(jjXcppjXjYh;]h:]h8]j}h9]h=]uh?M^h@hh+]rhIXtemplate struct halfdiff_cast_channels { template Out operator()(T const& in1, T const& in2) const { return Out((in1 - in2) / 2); } }; template void x_gradient(const SrcView& src, const DstView& dst) { typedef typename channel_type::type dst_channel_t; for (int y=0; y < src.height(); ++y) { typename SrcView::x_iterator src_it = src.row_begin(y); typename DstView::x_iterator dst_it = dst.row_begin(y); for (int x=1; x < src.width() - 1; ++x) { static_transform(src_it[x-1], src_it[x+1], dst_it[x], halfdiff_cast_channels()); } } }rr}r(h0Uh1jubaubhc)r}r(h0XThe ``static_transform`` is an example of a channel-level GIL algorithm. Other such algorithms are ``static_generate``, ``static_fill`` and ``static_for_each``. They are the channel-level equivalents of STL ``generate``, ``transform``, ``fill`` and ``for_each`` respectively. GIL channel algorithms use static recursion to unroll the loops; they never loop over the channels explicitly.h1j\h2h3h4hth6}r(h8]h9]h:]h;]h=]uh?M{h@hh+]r(hIXThe rr}r(h0XThe h1jubj)r}r(h0X``static_transform``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIXstatic_transformrr}r(h0Uh1jubah4jubhIXK is an example of a channel-level GIL algorithm. Other such algorithms are rr}r(h0XK is an example of a channel-level GIL algorithm. 
Other such algorithms are h1jubj)r}r(h0X``static_generate``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIXstatic_generaterr}r(h0Uh1jubah4jubhIX, rr}r(h0X, h1jubj)r}r(h0X``static_fill``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIX static_fillrr}r(h0Uh1jubah4jubhIX and rr}r(h0X and h1jubj)r }r (h0X``static_for_each``h6}r (h8]h9]h:]h;]h=]uh1jh+]r hIXstatic_for_eachr r}r(h0Uh1j ubah4jubhIX0. They are the channel-level equivalents of STL rr}r(h0X0. They are the channel-level equivalents of STL h1jubj)r}r(h0X ``generate``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIXgeneraterr}r(h0Uh1jubah4jubhIX, rr}r(h0X, h1jubj)r}r(h0X ``transform``h6}r(h8]h9]h:]h;]h=]uh1jh+]r hIX transformr!r"}r#(h0Uh1jubah4jubhIX, r$r%}r&(h0X, h1jubj)r'}r((h0X``fill``h6}r)(h8]h9]h:]h;]h=]uh1jh+]r*hIXfillr+r,}r-(h0Uh1j'ubah4jubhIX and r.r/}r0(h0X and h1jubj)r1}r2(h0X ``for_each``h6}r3(h8]h9]h:]h;]h=]uh1jh+]r4hIXfor_eachr5r6}r7(h0Uh1j1ubah4jubhIX} respectively. GIL channel algorithms use static recursion to unroll the loops; they never loop over the channels explicitly.r8r9}r:(h0X} respectively. GIL channel algorithms use static recursion to unroll the loops; they never loop over the channels explicitly.h1jubeubhc)r;}r<(h0XgNote that sometimes modern compilers (at least Visual Studio 8) already unroll channel-level loops, such as the one above. However, another advantage of using GIL's channel-level algorithms is that they pair the channels semantically, not based on their order in memory. For example, the above example will properly match an RGB source with a BGR destination.r=h1j\h2h3h4hth6}r>(h8]h9]h:]h;]h=]uh?Mh@hh+]r?hIXgNote that sometimes modern compilers (at least Visual Studio 8) already unroll channel-level loops, such as the one above. However, another advantage of using GIL's channel-level algorithms is that they pair the channels semantically, not based on their order in memory. 
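The static-recursion technique that GIL's channel-level algorithms rely on can be sketched in plain C++. This is a hypothetical miniature in the spirit of ``static_transform``, not the actual GIL implementation — the recursion is resolved at compile time, so no runtime channel loop remains:

```cpp
#include <array>
#include <cassert>

// Hypothetical sketch of compile-time channel recursion: transform channel
// N-1, then recurse on the remaining N-1 channels. The compiler flattens
// this into straight-line code, one statement per channel.
template <int N>
struct channel_recursion
{
    template <class P1, class P2, class P3, class Op>
    static void transform(const P1& a, const P2& b, P3& dst, Op op)
    {
        dst[N - 1] = op(a[N - 1], b[N - 1]);
        channel_recursion<N - 1>::transform(a, b, dst, op);
    }
};

template <>
struct channel_recursion<0>  // recursion terminates at zero channels
{
    template <class P1, class P2, class P3, class Op>
    static void transform(const P1&, const P2&, P3&, Op) {}
};

// Per-channel operation from the gradient example
struct halfdiff
{
    int operator()(int in1, int in2) const { return (in1 - in2) / 2; }
};
```

For a three-channel "pixel" represented as ``std::array<int, 3>``, ``channel_recursion<3>::transform(a, b, dst, halfdiff())`` expands to three assignments with no loop.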
For example, the above example will properly match an RGB source with a BGR destination.r@rA}rB(h0j=h1j;ubaubhc)rC}rD(h0XJHere is how we can use our generic version with images of different types:rEh1j\h2h3h4hth6}rF(h8]h9]h:]h;]h=]uh?Mh@hh+]rGhIXJHere is how we can use our generic version with images of different types:rHrI}rJ(h0jEh1jCubaubjS)rK}rL(h0X]// Calling with 16-bit grayscale data void XGradientGray16_Gray32( unsigned short const* src_pixels, ptrdiff_t src_row_bytes, int w, int h, signed int* dst_pixels, ptrdiff_t dst_row_bytes) { gray16c_view_t src=interleaved_view(w, h, (gray16_pixel_t const*)src_pixels, src_row_bytes); gray32s_view_t dst=interleaved_view(w, h, (gray32s_pixel_t*)dst_pixels, dst_row_bytes); x_gradient(src,dst); } // Calling with 8-bit RGB data into 16-bit BGR void XGradientRGB8_BGR16( unsigned char const* src_pixels, ptrdiff_t src_row_bytes, int w, int h, signed short* dst_pixels, ptrdiff_t dst_row_bytes) { rgb8c_view_t src = interleaved_view(w, h, (rgb8_pixel_t const*)src_pixels, src_row_bytes); bgr16s_view_t dst = interleaved_view(w, h, (bgr16s_pixel_t*)dst_pixels, dst_row_bytes); x_gradient(src, dst); } // Either or both the source and the destination could be planar - the gradient code does not change void XGradientPlanarRGB8_RGB32( unsigned short const* src_r, unsigned short const* src_g, unsigned short const* src_b, ptrdiff_t src_row_bytes, int w, int h, signed int* dst_pixels, ptrdiff_t dst_row_bytes) { rgb16c_planar_view_t src = planar_rgb_view (w, h, src_r, src_g, src_b, src_row_bytes); rgb32s_view_t dst = interleaved_view(w, h,(rgb32s_pixel_t*)dst_pixels, dst_row_bytes); x_gradient(src,dst); }h1j\h2h3h4jVh6}rM(jjXcppjXjYh;]h:]h8]j}h9]h=]uh?Mh@hh+]rNhIX]// Calling with 16-bit grayscale data void XGradientGray16_Gray32( unsigned short const* src_pixels, ptrdiff_t src_row_bytes, int w, int h, signed int* dst_pixels, ptrdiff_t dst_row_bytes) { gray16c_view_t src=interleaved_view(w, h, (gray16_pixel_t const*)src_pixels, 
src_row_bytes); gray32s_view_t dst=interleaved_view(w, h, (gray32s_pixel_t*)dst_pixels, dst_row_bytes); x_gradient(src,dst); } // Calling with 8-bit RGB data into 16-bit BGR void XGradientRGB8_BGR16( unsigned char const* src_pixels, ptrdiff_t src_row_bytes, int w, int h, signed short* dst_pixels, ptrdiff_t dst_row_bytes) { rgb8c_view_t src = interleaved_view(w, h, (rgb8_pixel_t const*)src_pixels, src_row_bytes); bgr16s_view_t dst = interleaved_view(w, h, (bgr16s_pixel_t*)dst_pixels, dst_row_bytes); x_gradient(src, dst); } // Either or both the source and the destination could be planar - the gradient code does not change void XGradientPlanarRGB8_RGB32( unsigned short const* src_r, unsigned short const* src_g, unsigned short const* src_b, ptrdiff_t src_row_bytes, int w, int h, signed int* dst_pixels, ptrdiff_t dst_row_bytes) { rgb16c_planar_view_t src = planar_rgb_view (w, h, src_r, src_g, src_b, src_row_bytes); rgb32s_view_t dst = interleaved_view(w, h,(rgb32s_pixel_t*)dst_pixels, dst_row_bytes); x_gradient(src,dst); }rOrP}rQ(h0Uh1jKubaubhc)rR}rS(h0XAs these examples illustrate, both the source and the destination can be interleaved or planar, of any channel depth (assuming the destination channel is assignable to the source), and of any compatible color spaces.rTh1j\h2h3h4hth6}rU(h8]h9]h:]h;]h=]uh?Mh@hh+]rVhIXAs these examples illustrate, both the source and the destination can be interleaved or planar, of any channel depth (assuming the destination channel is assignable to the source), and of any compatible color spaces.rWrX}rY(h0jTh1jRubaubhc)rZ}r[(h0XGIL 2.1 can also natively represent images whose channels are not byte-aligned, such as 6-bit RGB222 image or a 1-bit Gray1 image. GIL algorithms apply to these images natively. 
See the design guide or sample files for more on using such images.r\h1j\h2h3h4hth6}r](h8]h9]h:]h;]h=]uh?Mh@hh+]r^hIXGIL 2.1 can also natively represent images whose channels are not byte-aligned, such as 6-bit RGB222 image or a 1-bit Gray1 image. GIL algorithms apply to these images natively. See the design guide or sample files for more on using such images.r_r`}ra(h0j\h1jZubaubeubh-)rb}rc(h0Uh1h.h2h3h4h5h6}rd(h8]h9]h:]h;]rehah=]rfhauh?Mh@hh+]rg(hB)rh}ri(h0hh1jbh2h3h4hFh6}rj(h;]h:]h8]h9]h=]johuh?Mh@hh+]rkhIXImage View Transformationsrlrm}rn(h0hh1jhubaubhc)ro}rp(h0XOne way to compute the y-gradient is to rotate the image by 90 degrees, compute the x-gradient and rotate the result back. Here is how to do this in GIL:rqh1jbh2h3h4hth6}rr(h8]h9]h:]h;]h=]uh?Mh@hh+]rshIXOne way to compute the y-gradient is to rotate the image by 90 degrees, compute the x-gradient and rotate the result back. Here is how to do this in GIL:rtru}rv(h0jqh1joubaubjS)rw}rx(h0Xtemplate void y_gradient(const SrcView& src, const DstView& dst) { x_gradient(rotated90ccw_view(src), rotated90ccw_view(dst)); }h1jbh2h3h4jVh6}ry(jjXcppjXjYh;]h:]h8]j}h9]h=]uh?Mh@hh+]rzhIXtemplate void y_gradient(const SrcView& src, const DstView& dst) { x_gradient(rotated90ccw_view(src), rotated90ccw_view(dst)); }r{r|}r}(h0Uh1jwubaubhc)r~}r(h0XF``rotated90ccw_view`` takes an image view and returns an image view representing 90-degrees counter-clockwise rotation of its input. It is an example of a GIL view transformation function. GIL provides a variety of transformation functions that can perform any axis-aligned rotation, transpose the view, flip it vertically or horizontally, extract a rectangular subimage, perform color conversion, subsample view, etc. The view transformation functions are fast and shallow - they don't copy the pixels, they just change the "coordinate system" of accessing the pixels. 
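The "shallow" nature of view transformations — changing only the coordinate system, never copying pixels — can be illustrated with a minimal plain-C++ sketch (hypothetical ``grid``/``rotated90cw`` types, not GIL's):

```cpp
#include <cassert>

// Hypothetical 2D view over existing data: no pixels are owned or copied.
struct grid
{
    const int* data;
    int w, h;
    int at(int x, int y) const { return data[y * w + x]; }
};

// A shallow 90-degree clockwise rotation: the rotated view is h x w, and
// accessing (x, y) remaps to (y, h-1-x) in the underlying grid.
struct rotated90cw
{
    grid g;
    int at(int x, int y) const { return g.at(y, g.h - 1 - x); }
};
```

Constructing ``rotated90cw`` costs nothing beyond copying a pointer and two ints; only the access pattern changes, which is why GIL's rotated views add no per-pixel overhead.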
``rotated90cw_view``, for example, returns a view whose horizontal iterators are the vertical iterators of the original view. The above code to compute ``y_gradient`` is slow because of the memory access pattern; using ``rotated90cw_view`` does not make it any slower.h1jbh2h3h4hth6}r(h8]h9]h:]h;]h=]uh?Mh@hh+]r(j)r}r(h0X``rotated90ccw_view``h6}r(h8]h9]h:]h;]h=]uh1j~h+]rhIXrotated90ccw_viewrr}r(h0Uh1jubah4jubhIX% takes an image view and returns an image view representing 90-degrees counter-clockwise rotation of its input. It is an example of a GIL view transformation function. GIL provides a variety of transformation functions that can perform any axis-aligned rotation, transpose the view, flip it vertically or horizontally, extract a rectangular subimage, perform color conversion, subsample view, etc. The view transformation functions are fast and shallow - they don't copy the pixels, they just change the "coordinate system" of accessing the pixels. rr}r(h0X% takes an image view and returns an image view representing 90-degrees counter-clockwise rotation of its input. It is an example of a GIL view transformation function. GIL provides a variety of transformation functions that can perform any axis-aligned rotation, transpose the view, flip it vertically or horizontally, extract a rectangular subimage, perform color conversion, subsample view, etc. The view transformation functions are fast and shallow - they don't copy the pixels, they just change the "coordinate system" of accessing the pixels. h1j~ubj)r}r(h0X``rotated90cw_view``h6}r(h8]h9]h:]h;]h=]uh1j~h+]rhIXrotated90cw_viewrr}r(h0Uh1jubah4jubhIX, for example, returns a view whose horizontal iterators are the vertical iterators of the original view. The above code to compute rr}r(h0X, for example, returns a view whose horizontal iterators are the vertical iterators of the original view. 
The above code to compute h1j~ubj)r}r(h0X``y_gradient``h6}r(h8]h9]h:]h;]h=]uh1j~h+]rhIX y_gradientrr}r(h0Uh1jubah4jubhIX5 is slow because of the memory access pattern; using rr}r(h0X5 is slow because of the memory access pattern; using h1j~ubj)r}r(h0X``rotated90cw_view``h6}r(h8]h9]h:]h;]h=]uh1j~h+]rhIXrotated90cw_viewrr}r(h0Uh1jubah4jubhIX does not make it any slower.rr}r(h0X does not make it any slower.h1j~ubeubhc)r}r(h0XvAnother example: suppose we want to compute the gradient of the N-th channel of a color image. Here is how to do that:rh1jbh2h3h4hth6}r(h8]h9]h:]h;]h=]uh?Mh@hh+]rhIXvAnother example: suppose we want to compute the gradient of the N-th channel of a color image. Here is how to do that:rr}r(h0jh1jubaubjS)r}r(h0Xtemplate void nth_channel_x_gradient(const SrcView& src, int n, const DstView& dst) { x_gradient(nth_channel_view(src, n), dst); }h1jbh2h3h4jVh6}r(jjXcppjXjYh;]h:]h8]j}h9]h=]uh?Mh@hh+]rhIXtemplate void nth_channel_x_gradient(const SrcView& src, int n, const DstView& dst) { x_gradient(nth_channel_view(src, n), dst); }rr}r(h0Uh1jubaubhc)r}r(h0X9``nth_channel_view`` is a view transformation function that takes any view and returns a single-channel (grayscale) view of its N-th channel. For interleaved RGB view, for example, the returned view is a step view - a view whose horizontal iterator skips over two channels when incremented. If applied on a planar RGB view, the returned type is a simple grayscale view whose horizontal iterator is a C pointer. Image view transformation functions can be piped together. For example, to compute the y gradient of the second channel of the even pixels in the view, use:h1jbh2h3h4hth6}r(h8]h9]h:]h;]h=]uh?Mh@hh+]r(j)r}r(h0X``nth_channel_view``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIXnth_channel_viewrr}r(h0Uh1jubah4jubhIX% is a view transformation function that takes any view and returns a single-channel (grayscale) view of its N-th channel. 
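For interleaved data, the single-channel view amounts to a step iterator: advancing by one "pixel" skips over the remaining channels. A plain-C++ sketch of the idea (hypothetical ``step_ptr``/``nth_channel_begin`` names; GIL's real ``nth_channel_view`` returns a full view type):

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical step iterator: a raw pointer that advances by a fixed
// number of bytes, e.g. 3 for channel n of interleaved RGB8 data.
struct step_ptr
{
    const unsigned char* p;
    std::ptrdiff_t step;  // bytes per pixel

    unsigned char operator*() const { return *p; }
    step_ptr& operator++() { p += step; return *this; }
};

// Begin iterator over channel n of a row of interleaved pixels
inline step_ptr nth_channel_begin(const unsigned char* row,
                                  int n, int num_channels)
{
    return step_ptr{row + n, num_channels};
}
```

On planar data no step is needed — the channel already lives in its own contiguous plane, which is why GIL returns a plain-pointer iterator there.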
For interleaved RGB view, for example, the returned view is a step view - a view whose horizontal iterator skips over two channels when incremented. If applied on a planar RGB view, the returned type is a simple grayscale view whose horizontal iterator is a C pointer. Image view transformation functions can be piped together. For example, to compute the y gradient of the second channel of the even pixels in the view, use:rr}r(h0X% is a view transformation function that takes any view and returns a single-channel (grayscale) view of its N-th channel. For interleaved RGB view, for example, the returned view is a step view - a view whose horizontal iterator skips over two channels when incremented. If applied on a planar RGB view, the returned type is a simple grayscale view whose horizontal iterator is a C pointer. Image view transformation functions can be piped together. For example, to compute the y gradient of the second channel of the even pixels in the view, use:h1jubeubjS)r}r(h0X@y_gradient(subsampled_view(nth_channel_view(src, 1), 2,2), dst);h1jbh2h3h4jVh6}r(jjXcppjXjYh;]h:]h8]j}h9]h=]uh?Mh@hh+]rhIX@y_gradient(subsampled_view(nth_channel_view(src, 1), 2,2), dst);rr}r(h0Uh1jubaubhc)r}r(h0XGIL can sometimes simplify piped views. For example, two nested subsampled views (views that skip over pixels in X and in Y) can be represented as a single subsampled view whose step is the product of the steps of the two views.rh1jbh2h3h4hth6}r(h8]h9]h:]h;]h=]uh?Mh@hh+]rhIXGIL can sometimes simplify piped views. For example, two nested subsampled views (views that skip over pixels in X and in Y) can be represented as a single subsampled view whose step is the product of the steps of the two views.rr}r(h0jh1jubaubeubh-)r}r(h0Uh1h.h2h3h4h5h6}r(h8]h9]h:]h;]rh&ah=]rhauh?Mh@hh+]r(hB)r}r(h0hh1jh2h3h4hFh6}r(h;]h:]h8]h9]h=]johuh?Mh@hh+]rhIX1D pixel iteratorsrr}r(h0hh1jubaubhc)r}r(h0XLet's go back to ``x_gradient`` one more time. 
Many image view algorithms apply the same operation for each pixel and GIL provides an abstraction to handle them. However, our algorithm has an unusual access pattern, as it skips the first and the last column. It would be nice and instructional to see how we can rewrite it in canonical form. The way to do that in GIL is to write a version that works for every pixel, but apply it only on the subimage that excludes the first and last column:h1jh2h3h4hth6}r(h8]h9]h:]h;]h=]uh?Mh@hh+]r(hIXLet's go back to rr}r(h0XLet's go back to h1jubj)r}r(h0X``x_gradient``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIX x_gradientrr}r(h0Uh1jubah4jubhIX one more time. Many image view algorithms apply the same operation for each pixel and GIL provides an abstraction to handle them. However, our algorithm has an unusual access pattern, as it skips the first and the last column. It would be nice and instructional to see how we can rewrite it in canonical form. The way to do that in GIL is to write a version that works for every pixel, but apply it only on the subimage that excludes the first and last column:rr}r(h0X one more time. Many image view algorithms apply the same operation for each pixel and GIL provides an abstraction to handle them. However, our algorithm has an unusual access pattern, as it skips the first and the last column. It would be nice and instructional to see how we can rewrite it in canonical form. 
The way to do that in GIL is to write a version that works for every pixel, but apply it only on the subimage that excludes the first and last column:h1jubeubjS)r}r(h0XZvoid x_gradient_unguarded(gray8c_view_t const& src, gray8s_view_t const& dst) { for (int y=0; y < src.height(); ++y) { gray8c_view_t::x_iterator src_it = src.row_begin(y); gray8s_view_t::x_iterator dst_it = dst.row_begin(y); for (int x = 0; x < src.width(); ++x) dst_it[x] = (src_it[x-1] - src_it[x+1]) / 2; } } void x_gradient(gray8c_view_t const& src, gray8s_view_t const& dst) { assert(src.width()>=2); x_gradient_unguarded(subimage_view(src, 1, 0, src.width()-2, src.height()), subimage_view(dst, 1, 0, src.width()-2, src.height())); }h1jh2h3h4jVh6}r(jjXcppjXjYh;]h:]h8]j}h9]h=]uh?Mh@hh+]rhIXZvoid x_gradient_unguarded(gray8c_view_t const& src, gray8s_view_t const& dst) { for (int y=0; y < src.height(); ++y) { gray8c_view_t::x_iterator src_it = src.row_begin(y); gray8s_view_t::x_iterator dst_it = dst.row_begin(y); for (int x = 0; x < src.width(); ++x) dst_it[x] = (src_it[x-1] - src_it[x+1]) / 2; } } void x_gradient(gray8c_view_t const& src, gray8s_view_t const& dst) { assert(src.width()>=2); x_gradient_unguarded(subimage_view(src, 1, 0, src.width()-2, src.height()), subimage_view(dst, 1, 0, src.width()-2, src.height())); }rr}r(h0Uh1jubaubhc)r}r(h0Xh``subimage_view`` is another example of a GIL view transformation function. It takes a source view and a rectangular region (in this case, defined as x_min,y_min,width,height) and returns a view operating on that region of the source view. The above implementation has no measurable performance degradation from the version that operates on the original views.h1jh2h3h4hth6}r(h8]h9]h:]h;]h=]uh?Mh@hh+]r(j)r}r(h0X``subimage_view``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIX subimage_viewrr}r(h0Uh1jubah4jubhIXW is another example of a GIL view transformation function. 
It takes a source view and a rectangular region (in this case, defined as x_min,y_min,width,height) and returns a view operating on that region of the source view. The above implementation has no measurable performance degradation from the version that operates on the original views.rr}r(h0XW is another example of a GIL view transformation function. It takes a source view and a rectangular region (in this case, defined as x_min,y_min,width,height) and returns a view operating on that region of the source view. The above implementation has no measurable performance degradation from the version that operates on the original views.h1jubeubhc)r }r (h0X\Now that ``x_gradient_unguarded`` operates on every pixel, we can rewrite it more compactly:h1jh2h3h4hth6}r (h8]h9]h:]h;]h=]uh?Mh@hh+]r (hIX Now that r r}r(h0X Now that h1j ubj)r}r(h0X``x_gradient_unguarded``h6}r(h8]h9]h:]h;]h=]uh1j h+]rhIXx_gradient_unguardedrr}r(h0Uh1jubah4jubhIX; operates on every pixel, we can rewrite it more compactly:rr}r(h0X; operates on every pixel, we can rewrite it more compactly:h1j ubeubjS)r}r(h0Xvoid x_gradient_unguarded(gray8c_view_t const& src, gray8s_view_t const& dst) { gray8c_view_t::iterator src_it = src.begin(); for (gray8s_view_t::iterator dst_it = dst.begin(); dst_it!=dst.end(); ++dst_it, ++src_it) *dst_it = (src_it.x()[-1] - src_it.x()[1]) / 2; }h1jh2h3h4jVh6}r(jjXcppjXjYh;]h:]h8]j}h9]h=]uh?Mh@hh+]rhIXvoid x_gradient_unguarded(gray8c_view_t const& src, gray8s_view_t const& dst) { gray8c_view_t::iterator src_it = src.begin(); for (gray8s_view_t::iterator dst_it = dst.begin(); dst_it!=dst.end(); ++dst_it, ++src_it) *dst_it = (src_it.x()[-1] - src_it.x()[1]) / 2; }rr}r (h0Uh1jubaubhc)r!}r"(h0XGIL image views provide ``begin()`` and ``end()`` methods that return one dimensional pixel iterators which iterate over each pixel in the view, left to right and top to bottom. They do a proper "carriage return" - they skip any unused bytes at the end of a row. 
As such, they are slightly suboptimal, because they need to keep track of their current position with respect to the end of the row. Their increment operator performs one extra check (are we at the end of the row?), a check that is avoided if two nested loops are used instead. These iterators have a method ``x()`` which returns the more lightweight horizontal iterator that we used previously. Horizontal iterators have no notion of the end of rows. In this case, the horizontal iterators are raw C pointers. In our example, we must use the horizontal iterators to access the two neighbors properly, since they could reside outside the image view.h1jh2h3h4hth6}r#(h8]h9]h:]h;]h=]uh?M%h@hh+]r$(hIXGIL image views provide r%r&}r'(h0XGIL image views provide h1j!ubj)r(}r)(h0X ``begin()``h6}r*(h8]h9]h:]h;]h=]uh1j!h+]r+hIXbegin()r,r-}r.(h0Uh1j(ubah4jubhIX and r/r0}r1(h0X and h1j!ubj)r2}r3(h0X ``end()``h6}r4(h8]h9]h:]h;]h=]uh1j!h+]r5hIXend()r6r7}r8(h0Uh1j2ubah4jubhIX  methods that return one dimensional pixel iterators which iterate over each pixel in the view, left to right and top to bottom. They do a proper "carriage return" - they skip any unused bytes at the end of a row. As such, they are slightly suboptimal, because they need to keep track of their current position with respect to the end of the row. Their increment operator performs one extra check (are we at the end of the row?), a check that is avoided if two nested loops are used instead. These iterators have a method r9r:}r;(h0X  methods that return one dimensional pixel iterators which iterate over each pixel in the view, left to right and top to bottom. They do a proper "carriage return" - they skip any unused bytes at the end of a row. As such, they are slightly suboptimal, because they need to keep track of their current position with respect to the end of the row. Their increment operator performs one extra check (are we at the end of the row?), a check that is avoided if two nested loops are used instead. 
These iterators have a method h1j!ubj)r<}r=(h0X``x()``h6}r>(h8]h9]h:]h;]h=]uh1j!h+]r?hIXx()r@rA}rB(h0Uh1j<ubah4jubhIXN which returns the more lightweight horizontal iterator that we used previously. Horizontal iterators have no notion of the end of rows. In this case, the horizontal iterators are raw C pointers. In our example, we must use the horizontal iterators to access the two neighbors properly, since they could reside outside the image view.rCrD}rE(h0XN which returns the more lightweight horizontal iterator that we used previously. Horizontal iterators have no notion of the end of rows. In this case, the horizontal iterators are raw C pointers. In our example, we must use the horizontal iterators to access the two neighbors properly, since they could reside outside the image view.h1j!ubeubeubh-)rF}rG(h0Uh1h.h2h3h4h5h6}rH(h8]h9]h:]h;]rIh(ah=]rJhauh?M5h@hh+]rK(hB)rL}rM(h0hh1jFh2h3h4hFh6}rN(h;]h:]h8]h9]h=]johuh?M5h@hh+]rOhIXSTL Equivalent AlgorithmsrPrQ}rR(h0hh1jLubaubhc)rS}rT(h0XGIL provides STL equivalents of many algorithms. For example, ``std::transform`` is an STL algorithm that sets each element in a destination range the result of a generic function taking the corresponding element of the source range. In our example, we want to assign to each destination pixel the value of the half-difference of the horizontal neighbors of the corresponding source pixel. If we abstract that operation in a function object, we can use GIL's ``transform_pixel_positions`` to do that:h1jFh2h3h4hth6}rU(h8]h9]h:]h;]h=]uh?M7h@hh+]rV(hIX>GIL provides STL equivalents of many algorithms. For example, rWrX}rY(h0X>GIL provides STL equivalents of many algorithms. For example, h1jSubj)rZ}r[(h0X``std::transform``h6}r\(h8]h9]h:]h;]h=]uh1jSh+]r]hIXstd::transformr^r_}r`(h0Uh1jZubah4jubhIX| is an STL algorithm that sets each element in a destination range the result of a generic function taking the corresponding element of the source range. 
In our example, we want to assign to each destination pixel the value of the
half-difference of the horizontal neighbors of the corresponding source
pixel. If we abstract that operation in a function object, we can use GIL's
``transform_pixel_positions`` to do that:

.. code-block:: cpp

   struct half_x_difference
   {
       int operator()(const gray8c_loc_t& src_loc) const
       {
           return (src_loc.x()[-1] - src_loc.x()[1]) / 2;
       }
   };

   void x_gradient_unguarded(gray8c_view_t const& src, gray8s_view_t const& dst)
   {
       transform_pixel_positions(src, dst, half_x_difference());
   }

GIL provides the algorithms ``for_each_pixel`` and ``transform_pixels``,
which are image view equivalents of STL ``std::for_each`` and
``std::transform``. It also provides ``for_each_pixel_position`` and
``transform_pixel_positions``, which pass pixel locators, rather than pixel
references, to the generic function. This allows for more powerful functions
that can use the pixel neighbors through the passed locators. GIL algorithms
iterate through the pixels using the more efficient two nested loops (as
opposed to the single loop using 1-D iterators).

Color Conversion
----------------

Instead of computing the gradient of each color plane of an image, we often
want to compute the gradient of the luminosity. In other words, we want to
convert the color image to grayscale and compute the gradient of the result.
Here is how to compute the luminosity gradient of a 32-bit float RGB image:

.. code-block:: cpp

   void x_gradient_rgb_luminosity(rgb32fc_view_t const& src, gray8s_view_t const& dst)
   {
       x_gradient(color_converted_view<gray8_pixel_t>(src), dst);
   }

``color_converted_view`` is a GIL view transformation function that takes any
image view and returns a view in a target color space and channel depth
(specified as template parameters). In our example, it constructs an 8-bit
integer grayscale view over 32-bit float RGB pixels. Like all other view
transformation functions, ``color_converted_view`` is very fast and shallow.
It doesn't copy the data or perform any color conversion.
Instead it returns a view that performs color conversion every time its
pixels are accessed.

In the generic version of this algorithm we might like to convert the color
space to grayscale, but keep the channel depth the same. We do that by
constructing the type of a GIL grayscale pixel with the same channel as the
source, and color convert to that pixel type:

.. code-block:: cpp

   template <typename SrcView, typename DstView>
   void x_luminosity_gradient(SrcView const& src, DstView const& dst)
   {
       using gray_pixel_t = pixel<typename channel_type<SrcView>::type, gray_layout_t>;
       x_gradient(color_converted_view<gray_pixel_t>(src), dst);
   }

When the destination color space and channel type happen to be the same as
the source, color conversion is unnecessary. GIL detects this case and avoids
calling the color conversion code at all - i.e. ``color_converted_view``
returns the source view unchanged.

Image
-----

The above example has a performance problem - ``x_gradient`` dereferences
most source pixels twice, which will cause the above code to perform color
conversion twice.
Sometimes it may be more efficient to copy the color-converted image into a
temporary buffer and use it to compute the gradient - that way color
conversion is invoked once per pixel. Using our non-generic version we can do
it like this:

.. code-block:: cpp

   void x_luminosity_gradient(rgb32fc_view_t const& src, gray8s_view_t const& dst)
   {
       gray8_image_t ccv_image(src.dimensions());
       copy_pixels(color_converted_view<gray8_pixel_t>(src), view(ccv_image));
       x_gradient(const_view(ccv_image), dst);
   }

First we construct an 8-bit grayscale image with the same dimensions as our
source. Then we copy a color-converted view of the source into the temporary
image. Finally we use a read-only view of the temporary image in our
``x_gradient`` algorithm. As the example shows, GIL provides global functions
``view`` and ``const_view`` that take an image and return a mutable or an
immutable view of its pixels.

Creating a generic version of the above is a bit trickier:

.. code-block:: cpp

   template <typename SrcView, typename DstView>
   void x_luminosity_gradient(const SrcView& src, const DstView& dst)
   {
       using d_channel_t = typename channel_type<DstView>::type;
       using channel_t = typename channel_convert_to_unsigned<d_channel_t>::type;
       using gray_pixel_t = pixel<channel_t, gray_layout_t>;
       using gray_image_t = image<gray_pixel_t, false>;

       gray_image_t ccv_image(src.dimensions());
       copy_pixels(color_converted_view<gray_pixel_t>(src), view(ccv_image));
       x_gradient(const_view(ccv_image), dst);
   }

First we use the ``channel_type`` metafunction to get the channel type of the
destination view. A metafunction is a function operating on types. In GIL
metafunctions are class templates (declared with the ``struct`` type
specifier) which take their parameters as template parameters and return
their result in a nested typedef called ``type``.
In this case, ``channel_type`` is a unary metafunction which in this example
is called with the type of an image view and returns the type of the channel
associated with that image view.

GIL constructs that have an associated pixel type, such as pixels, pixel
iterators, locators, views and images, all model ``PixelBasedConcept``, which
means that they provide a set of metafunctions to query the pixel properties,
such as ``channel_type``, ``color_space_type``, ``channel_mapping_type``, and
``num_channels``.

After we get the channel type of the destination view, we use another
metafunction to remove its sign (if it is a signed integral type) and then
use it to generate the type of a grayscale pixel. From the pixel type we
create the image type. GIL's ``image`` class is specialized over the pixel
type and a boolean indicating whether the image should be planar or
interleaved. Single-channel (grayscale) images in GIL must always be
interleaved. There are multiple ways of constructing types in GIL. Instead of
instantiating the classes directly we could have used type factory
metafunctions.
The following code is equivalent:

.. code-block:: cpp

   template <typename SrcView, typename DstView>
   void x_luminosity_gradient(SrcView const& src, DstView const& dst)
   {
       typedef typename channel_type<DstView>::type d_channel_t;
       typedef typename channel_convert_to_unsigned<d_channel_t>::type channel_t;
       typedef typename image_type<channel_t, gray_layout_t>::type gray_image_t;
       typedef typename gray_image_t::value_type gray_pixel_t;

       gray_image_t ccv_image(src.dimensions());
       copy_and_convert_pixels(src, view(ccv_image));
       x_gradient(const_view(ccv_image), dst);
   }

GIL provides a set of metafunctions that generate GIL types - ``image_type``
is one such metafunction that constructs the type of an image from a given
channel type, color layout, and planar/interleaved option (the default is
interleaved). There are also similar metafunctions to construct the types of
pixel references, iterators, locators and image views. GIL also has
metafunctions ``derived_pixel_reference_type``, ``derived_iterator_type``,
``derived_view_type`` and ``derived_image_type`` that construct the type of a
GIL construct from a given source one by changing one or more properties of
the type and keeping the rest.

From the image type we can use the nested typedef ``value_type`` to obtain
the type of a pixel.
GIL images, image views and locators have nested typedefs ``value_type`` and
``reference`` to obtain the type of the pixel and a reference to the pixel.
If you have a pixel iterator, you can get these types from its
``iterator_traits``. Note also the algorithm ``copy_and_convert_pixels``,
which is an abbreviated version of ``copy_pixels`` with a color-converted
source view.

Virtual Image Views
-------------------

So far we have been dealing with images that have pixels stored in memory.
GIL allows you to create an image view of an arbitrary image, including a
synthetic function. To demonstrate this, let us create a view of the
Mandelbrot set. First, we need to create a function object that computes the
value of the Mandelbrot set at a given location (x,y) in the image:

.. code-block:: cpp

   // models PixelDereferenceAdaptorConcept
   struct mandelbrot_fn
   {
       typedef point<ptrdiff_t> point_t;
       typedef mandelbrot_fn const_t;
       typedef gray8_pixel_t value_type;
       typedef value_type reference;
       typedef value_type const_reference;
       typedef point_t argument_type;
       typedef reference result_type;
       static bool constexpr is_mutable = false;

       mandelbrot_fn() {}
       mandelbrot_fn(const point_t& sz) : _img_size(sz) {}

       result_type operator()(const point_t& p) const
       {
           // normalize the coords to (-2..1, -1.5..1.5)
           double t = get_num_iter(point<double>(p.x/(double)_img_size.x*3-2,
                                                 p.y/(double)_img_size.y*3-1.5f));
           return value_type((bits8)(pow(t,0.2)*255)); // raise to power suitable for viewing
       }

   private:
       point_t _img_size;

       double get_num_iter(const point<double>& p) const
       {
           point<double> Z(0,0);
           for (int i=0; i<100; ++i) // 100 iterations
           {
               Z = point<double>(Z.x*Z.x - Z.y*Z.y + p.x, 2*Z.x*Z.y + p.y);
               if (Z.x*Z.x + Z.y*Z.y > 4)
                   return i/(double)100;
           }
           return 0;
       }
   };

We can now use GIL's ``virtual_2d_locator`` with this function object to
construct a Mandelbrot view of size 200x200 pixels:

.. code-block:: cpp

   typedef mandelbrot_fn::point_t point_t;
   typedef virtual_2d_locator<mandelbrot_fn, false> locator_t;
   typedef image_view<locator_t> my_virt_view_t;

   point_t dims(200,200);

   // Construct a Mandelbrot view with a locator, taking top-left corner (0,0) and step (1,1)
   my_virt_view_t mandel(dims, locator_t(point_t(0,0), point_t(1,1), mandelbrot_fn(dims)));

We can treat the synthetic view just like a real one.
For example, let's invoke our ``x_gradient`` algorithm to compute the
gradient of the 90-degree rotated view of the Mandelbrot set and save the
original and the result:

.. code-block:: cpp

   gray8s_image_t img(dims);
   x_gradient(rotated90cw_view(mandel), view(img));

   // Save the Mandelbrot set and its 90-degree rotated gradient
   // (jpeg cannot save signed char; must convert to unsigned char)
   jpeg_write_view("mandel.jpg", mandel);
   jpeg_write_view("mandel_grad.jpg", color_converted_view<gray8_pixel_t>(const_view(img)));

Here is what the two files look like:

.. image:: ../images/mandel.jpg

Run-Time Specified Images and Image Views
-----------------------------------------

So far we have created a generic function that computes the image gradient of
an image view template specialization. Sometimes, however, the properties of
an image view, such as its color space and channel depth, may not be
available at compile time. GIL's ``dynamic_image`` extension allows for
working with GIL constructs that are specified at run time, also called
*variants*.
GIL provides models of a run-time instantiated image, ``any_image``, and a
run-time instantiated image view, ``any_image_view``. The mechanisms are in
place to create other variants, such as ``any_pixel``,
``any_pixel_iterator``, etc. Most of GIL's algorithms and all of the view
transformation functions also work with run-time instantiated image views,
and binary algorithms such as ``copy_pixels`` can have either or both
arguments be variants.

Let's make our ``x_luminosity_gradient`` algorithm take a variant image view.
For simplicity, let's assume that only the source view can be a variant.
(As an example of using multiple variants, see GIL's image view algorithm overloads taking multiple variants.)h1jubeubhc)r}r(h0XFirst, we need to make a function object that contains the templated destination view and has an application operator taking a templated source view:rh1jh2h3h4hth6}r(h8]h9]h:]h;]h=]uh?MTh@hh+]rhIXFirst, we need to make a function object that contains the templated destination view and has an application operator taking a templated source view:rr}r(h0jh1jubaubjS)r}r(h0Xn#include template struct x_gradient_obj { typedef void result_type; // required typedef const DstView& _dst; x_gradient_obj(const DstView& dst) : _dst(dst) {} template void operator()(const SrcView& src) const { x_luminosity_gradient(src, _dst); } };h1jh2h3h4jVh6}r(jjXcppjXjYh;]h:]h8]j}h9]h=]uh?MXh@hh+]rhIXn#include template struct x_gradient_obj { typedef void result_type; // required typedef const DstView& _dst; x_gradient_obj(const DstView& dst) : _dst(dst) {} template void operator()(const SrcView& src) const { x_luminosity_gradient(src, _dst); } };r r!}r"(h0Uh1jubaubhc)r#}r$(h0XThe second step is to provide an overload of ``x_luminosity_gradient`` that takes image view variant and calls GIL's ``apply_operation`` passing it the function object:h1jh2h3h4hth6}r%(h8]h9]h:]h;]h=]uh?Mhh@hh+]r&(hIX-The second step is to provide an overload of r'r(}r)(h0X-The second step is to provide an overload of h1j#ubj)r*}r+(h0X``x_luminosity_gradient``h6}r,(h8]h9]h:]h;]h=]uh1j#h+]r-hIXx_luminosity_gradientr.r/}r0(h0Uh1j*ubah4jubhIX/ that takes image view variant and calls GIL's r1r2}r3(h0X/ that takes image view variant and calls GIL's h1j#ubj)r4}r5(h0X``apply_operation``h6}r6(h8]h9]h:]h;]h=]uh1j#h+]r7hIXapply_operationr8r9}r:(h0Uh1j4ubah4jubhIX passing it the function object:r;r<}r=(h0X passing it the function object:h1j#ubeubjS)r>}r?(h0Xtemplate void x_luminosity_gradient(const any_image_view& src, const DstView& dst) { apply_operation(src, x_gradient_obj(dst)); 
}h1jh2h3h4jVh6}r@(jjXcppjXjYh;]h:]h8]j}h9]h=]uh?Mlh@hh+]rAhIXtemplate void x_luminosity_gradient(const any_image_view& src, const DstView& dst) { apply_operation(src, x_gradient_obj(dst)); }rBrC}rD(h0Uh1j>ubaubhc)rE}rF(h0X``any_image_view`` is the image view variant. It is templated over ``SrcViews``, an enumeration of all possible view types the variant can take. ``src`` contains inside an index of the currently instantiated type, as well as a block of memory containing the instance. ``apply_operation`` goes through a switch statement over the index, each case of which casts the memory to the correct view type and invokes the function object with it. Invoking an algorithm on a variant has the overhead of one switch statement. Algorithms that perform an operation for each pixel in an image view have practically no performance degradation when used with a variant.h1jh2h3h4hth6}rG(h8]h9]h:]h;]h=]uh?Mth@hh+]rH(j)rI}rJ(h0X``any_image_view``h6}rK(h8]h9]h:]h;]h=]uh1jEh+]rLhIXany_image_viewrMrN}rO(h0Uh1jIubah4jubhIX1 is the image view variant. It is templated over rPrQ}rR(h0X1 is the image view variant. It is templated over h1jEubj)rS}rT(h0X ``SrcViews``h6}rU(h8]h9]h:]h;]h=]uh1jEh+]rVhIXSrcViewsrWrX}rY(h0Uh1jSubah4jubhIXC, an enumeration of all possible view types the variant can take. rZr[}r\(h0XC, an enumeration of all possible view types the variant can take. h1jEubj)r]}r^(h0X``src``h6}r_(h8]h9]h:]h;]h=]uh1jEh+]r`hIXsrcrarb}rc(h0Uh1j]ubah4jubhIXu contains inside an index of the currently instantiated type, as well as a block of memory containing the instance. rdre}rf(h0Xu contains inside an index of the currently instantiated type, as well as a block of memory containing the instance. h1jEubj)rg}rh(h0X``apply_operation``h6}ri(h8]h9]h:]h;]h=]uh1jEh+]rjhIXapply_operationrkrl}rm(h0Uh1jgubah4jubhIXn goes through a switch statement over the index, each case of which casts the memory to the correct view type and invokes the function object with it. 
Invoking an algorithm on a variant has the overhead of one switch statement. Algorithms that perform an operation for each pixel in an image view have practically no performance degradation when used with a variant.rnro}rp(h0Xn goes through a switch statement over the index, each case of which casts the memory to the correct view type and invokes the function object with it. Invoking an algorithm on a variant has the overhead of one switch statement. Algorithms that perform an operation for each pixel in an image view have practically no performance degradation when used with a variant.h1jEubeubhc)rq}rr(h0X@Here is how we can construct a variant and invoke the algorithm:rsh1jh2h3h4hth6}rt(h8]h9]h:]h;]h=]uh?Mh@hh+]ruhIX@Here is how we can construct a variant and invoke the algorithm:rvrw}rx(h0jsh1jqubaubjS)ry}rz(h0X#include #include typedef mp11::mp_list my_img_types; any_image runtime_image; jpeg_read_image("input.jpg", runtime_image); gray8s_image_t gradient(runtime_image.dimensions()); x_luminosity_gradient(const_view(runtime_image), view(gradient)); jpeg_write_view("x_gradient.jpg", color_converted_view(const_view(gradient)));h1jh2h3h4jVh6}r{(jjXcppjXjYh;]h:]h8]j}h9]h=]uh?Mh@hh+]r|hIX#include #include typedef mp11::mp_list my_img_types; any_image runtime_image; jpeg_read_image("input.jpg", runtime_image); gray8s_image_t gradient(runtime_image.dimensions()); x_luminosity_gradient(const_view(runtime_image), view(gradient)); jpeg_write_view("x_gradient.jpg", color_converted_view(const_view(gradient)));r}r~}r(h0Uh1jyubaubhc)r}r(h0X In this example, we create an image variant that could be 8-bit or 16-bit RGB or grayscale image. We then use GIL's I/O extension to load the image from file in its native color space and channel depth. If none of the allowed image types matches the image on disk, an exception will be thrown. We then construct a 8 bit signed (i.e. ``char``) image to store the gradient and invoke ``x_gradient`` on it. 
Finally we save the result into another file. We save the view converted to 8-bit unsigned, because JPEG I/O does not support signed char.h1jh2h3h4hth6}r(h8]h9]h:]h;]h=]uh?Mh@hh+]r(hIXNIn this example, we create an image variant that could be 8-bit or 16-bit RGB or grayscale image. We then use GIL's I/O extension to load the image from file in its native color space and channel depth. If none of the allowed image types matches the image on disk, an exception will be thrown. We then construct a 8 bit signed (i.e. rr}r(h0XNIn this example, we create an image variant that could be 8-bit or 16-bit RGB or grayscale image. We then use GIL's I/O extension to load the image from file in its native color space and channel depth. If none of the allowed image types matches the image on disk, an exception will be thrown. We then construct a 8 bit signed (i.e. h1jubj)r}r(h0X``char``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIXcharrr}r(h0Uh1jubah4jubhIX)) image to store the gradient and invoke rr}r(h0X)) image to store the gradient and invoke h1jubj)r}r(h0X``x_gradient``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIX x_gradientrr}r(h0Uh1jubah4jubhIX on it. Finally we save the result into another file. We save the view converted to 8-bit unsigned, because JPEG I/O does not support signed char.rr}r(h0X on it. Finally we save the result into another file. We save the view converted to 8-bit unsigned, because JPEG I/O does not support signed char.h1jubeubhc)r}r(h0XNote how free functions and methods such as ``jpeg_read_image``, ``dimensions``, ``view`` and ``const_view`` work on both templated and variant types. For templated images ``view(img)`` returns a templated view, whereas for image variants it returns a view variant. For example, the return type of ``view(runtime_image)`` is ``any_image_view`` where ``Views`` enumerates four views corresponding to the four image types. 
``const_view(runtime_image)`` returns a ``any_image_view`` of the four read-only view types, etc.h1jh2h3h4hth6}r(h8]h9]h:]h;]h=]uh?Mh@hh+]r(hIX,Note how free functions and methods such as rr}r(h0X,Note how free functions and methods such as h1jubj)r}r(h0X``jpeg_read_image``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIXjpeg_read_imagerr}r(h0Uh1jubah4jubhIX, rr}r(h0X, h1jubj)r}r(h0X``dimensions``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIX dimensionsrr}r(h0Uh1jubah4jubhIX, rr}r(h0X, h1jubj)r}r(h0X``view``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIXviewrr}r(h0Uh1jubah4jubhIX and rr}r(h0X and h1jubj)r}r(h0X``const_view``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIX const_viewrr}r(h0Uh1jubah4jubhIXA work on both templated and variant types. For templated images rr}r(h0XA work on both templated and variant types. For templated images h1jubj)r}r(h0X ``view(img)``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIX view(img)rr}r(h0Uh1jubah4jubhIXr returns a templated view, whereas for image variants it returns a view variant. For example, the return type of rr}r(h0Xr returns a templated view, whereas for image variants it returns a view variant. For example, the return type of h1jubj)r}r(h0X``view(runtime_image)``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIXview(runtime_image)rr}r(h0Uh1jubah4jubhIX is rr}r(h0X is h1jubj)r}r(h0X``any_image_view``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIXany_image_viewrr}r(h0Uh1jubah4jubhIX where rr}r(h0X where h1jubj)r}r(h0X ``Views``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIXViewsrr}r(h0Uh1jubah4jubhIX? enumerates four views corresponding to the four image types. rr}r(h0X? enumerates four views corresponding to the four image types. 
h1jubj)r}r(h0X``const_view(runtime_image)``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIXconst_view(runtime_image)rr}r(h0Uh1jubah4jubhIX returns a rr}r(h0X returns a h1jubj)r}r(h0X``any_image_view``h6}r(h8]h9]h:]h;]h=]uh1jh+]rhIXany_image_viewr r }r (h0Uh1jubah4jubhIX' of the four read-only view types, etc.r r }r (h0X' of the four read-only view types, etc.h1jubeubhc)r }r (h0XIA warning about using variants: instantiating an algorithm with a variant effectively instantiates it with every possible type the variant can take. For binary algorithms, the algorithm is instantiated with every possible combination of the two input types! This can take a toll on both the compile time and the executable size.r h1jh2h3h4hth6}r (h8]h9]h:]h;]h=]uh?Mh@hh+]r hIXIA warning about using variants: instantiating an algorithm with a variant effectively instantiates it with every possible type the variant can take. For binary algorithms, the algorithm is instantiated with every possible combination of the two input types! This can take a toll on both the compile time and the executable size.r r }r (h0j h1j ubaubeubh-)r }r (h0Uh1h.h2h3h4h5h6}r (h8]h9]h:]h;]r h*ah=]r hauh?Mh@hh+]r (hB)r }r (h0j;h1j h2h3h4hFh6}r (h;]h:]h8]h9]h=]joj6uh?Mh@hh+]r hIX Conclusionr r }r (h0j;h1j ubaubhc)r }r (h0XThis tutorial provides a glimpse at the challenges associated with writing generic and efficient image processing algorithms in GIL. We have taken a simple algorithm and shown how to make it work with image representations that vary in bit depth, color space, ordering of the channels, and planar/interleaved structure. We have demonstrated that the algorithm can work with fully abstracted virtual images, and even images whose type is specified at run time. 
The associated video presentation also demonstrates that even for complex scenarios the generated assembly is comparable to that of a C version of the algorithm, hand-written for the specific image types.r h1j h2h3h4hth6}r (h8]h9]h:]h;]h=]uh?Mh@hh+]r hIXThis tutorial provides a glimpse at the challenges associated with writing generic and efficient image processing algorithms in GIL. We have taken a simple algorithm and shown how to make it work with image representations that vary in bit depth, color space, ordering of the channels, and planar/interleaved structure. We have demonstrated that the algorithm can work with fully abstracted virtual images, and even images whose type is specified at run time. The associated video presentation also demonstrates that even for complex scenarios the generated assembly is comparable to that of a C version of the algorithm, hand-written for the specific image types.r r! }r" (h0j h1j ubaubhc)r# }r$ (h0XiYet, even for such a simple algorithm, we are far from making a fully generic and optimized code. In particular, the presented algorithms work on homogeneous images, i.e. images whose pixels have channels that are all of the same type. There are examples of images, such as a packed 565 RGB format, which contain channels of different types. While GIL provides concepts and algorithms operating on heterogeneous pixels, we leave the task of extending x_gradient as an exercise for the reader. Second, after computing the value of the gradient we are simply casting it to the destination channel type. This may not always be the desired operation. For example, if the source channel is a float with range [0..1] and the destination is unsigned char, casting the half-difference to unsigned char will result in either 0 or 1. Instead, what we might want to do is scale the result into the range of the destination channel. GIL's channel-level algorithms might be useful in such cases. 
For example, \p channel_convert converts between channels by linearly scaling the source channel value into the range of the destination channel.h1j h2h3h4hth6}r% (h8]h9]h:]h;]h=]uh?Mh@hh+]r& hIXhYet, even for such a simple algorithm, we are far from making a fully generic and optimized code. In particular, the presented algorithms work on homogeneous images, i.e. images whose pixels have channels that are all of the same type. There are examples of images, such as a packed 565 RGB format, which contain channels of different types. While GIL provides concepts and algorithms operating on heterogeneous pixels, we leave the task of extending x_gradient as an exercise for the reader. Second, after computing the value of the gradient we are simply casting it to the destination channel type. This may not always be the desired operation. For example, if the source channel is a float with range [0..1] and the destination is unsigned char, casting the half-difference to unsigned char will result in either 0 or 1. Instead, what we might want to do is scale the result into the range of the destination channel. GIL's channel-level algorithms might be useful in such cases. For example, p channel_convert converts between channels by linearly scaling the source channel value into the range of the destination channel.r' r( }r) (h0XiYet, even for such a simple algorithm, we are far from making a fully generic and optimized code. In particular, the presented algorithms work on homogeneous images, i.e. images whose pixels have channels that are all of the same type. There are examples of images, such as a packed 565 RGB format, which contain channels of different types. While GIL provides concepts and algorithms operating on heterogeneous pixels, we leave the task of extending x_gradient as an exercise for the reader. Second, after computing the value of the gradient we are simply casting it to the destination channel type. This may not always be the desired operation. 
For example, if the source channel is a float with range [0..1] and the destination is unsigned char, casting the half-difference to unsigned char will result in either 0 or 1. Instead, what we might want to do is scale the result into the range of the destination channel. GIL's channel-level algorithms might be useful in such cases. For example, \p channel_convert converts between channels by linearly scaling the source channel value into the range of the destination channel.h1j# ubaubhc)r* }r+ (h0XThere is a lot to be done in improving the performance as well. Channel-level operations, such as the half-difference, could be abstracted out into atomic channel-level algorithms and performance overloads could be provided for concrete channel types. Processor-specific operations could be used, for example, to perform the operation over an entire row of pixels simultaneously, or the data could be pre-fetched. All of these optimizations can be realized as performance specializations of the generic algorithm. Finally, compilers, while getting better over time, are still failing to fully optimize generic code in some cases, such as failing to inline some functions or put some variables into registers. If performance is an issue, it might be worth trying your code with different compilers.r, h1j h2h3h4hth6}r- (h8]h9]h:]h;]h=]uh?Mh@hh+]r. hIXThere is a lot to be done in improving the performance as well. Channel-level operations, such as the half-difference, could be abstracted out into atomic channel-level algorithms and performance overloads could be provided for concrete channel types. Processor-specific operations could be used, for example, to perform the operation over an entire row of pixels simultaneously, or the data could be pre-fetched. All of these optimizations can be realized as performance specializations of the generic algorithm. 
Finally, compilers, while getting better over time, are still failing to fully optimize generic code in some cases, such as failing to inline some functions or put some variables into registers. If performance is an issue, it might be worth trying your code with different compilers.r/ r0 }r1 (h0j, h1j* ubaubeubeubah0UU transformerr2 NU footnote_refsr3 }r4 Urefnamesr5 }r6 Usymbol_footnotesr7 ]r8 Uautofootnote_refsr9 ]r: Usymbol_footnote_refsr; ]r< U citationsr= ]r> h@hU current_liner? NUtransform_messagesr@ ]rA UreporterrB NUid_startrC K U autofootnotesrD ]rE U citation_refsrF }rG Uindirect_targetsrH ]rI UsettingsrJ (cdocutils.frontend Values rK orL }rM (Ufootnote_backlinksrN KUrecord_dependenciesrO NU rfc_base_urlrP Uhttps://tools.ietf.org/html/rQ U tracebackrR Upep_referencesrS NUstrip_commentsrT NU toc_backlinksrU UentryrV U language_coderW UenrX U datestamprY NU report_levelrZ KU _destinationr[ NU halt_levelr\ KU strip_classesr] NhFNUerror_encoding_error_handlerr^ Ubackslashreplacer_ Udebugr` NUembed_stylesheetra Uoutput_encoding_error_handlerrb Ustrictrc U sectnum_xformrd KUdump_transformsre NU docinfo_xformrf KUwarning_streamrg NUpep_file_url_templaterh Upep-%04dri Uexit_status_levelrj KUconfigrk NUstrict_visitorrl NUcloak_email_addressesrm Utrim_footnote_reference_spacern Uenvro NUdump_pseudo_xmlrp NUexpose_internalsrq NUsectsubtitle_xformrr U source_linkrs NUrfc_referencesrt NUoutput_encodingru Uutf-8rv U source_urlrw NUinput_encodingrx U utf-8-sigry U_disable_configrz NU id_prefixr{ UU tab_widthr| KUerror_encodingr} Uasciir~ U_sourcer h3Ugettext_compactr U generatorr NUdump_internalsr NU smart_quotesr U pep_base_urlr U https://www.python.org/dev/peps/r Usyntax_highlightr Ulongr Uinput_encoding_error_handlerr jc Uauto_id_prefixr Uidr Udoctitle_xformr Ustrip_elements_with_classesr NU _config_filesr ]Ufile_insertion_enabledr U raw_enabledr KU dump_settingsr NubUsymbol_footnote_startr KUidsr }r (h$jh(jFh&jh"jhj\h)hNhhjhhhhhhhhhhh~hhhmhijjj$j j6j2h!j h*j h#jh 
h.h'jfhjbhjWh%juUsubstitution_namesr }r h4h@h6}r (h8]h;]h:]Usourceh3h9]h=]uU footnotesr ]r Urefidsr }r ub.