<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Frequently Asked Questions</title>
<link rel="stylesheet" href="../../../../../doc/src/boostbook.css" type="text/css">
<meta name="generator" content="DocBook XSL Stylesheets V1.79.1">
<link rel="home" href="../index.html" title="Chapter 1. Boost.Compute">
<link rel="up" href="../index.html" title="Chapter 1. Boost.Compute">
<link rel="prev" href="performance.html" title="Performance">
</head>
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
<table cellpadding="2" width="100%"><tr>
<td valign="top"><img alt="Boost C++ Libraries" width="277" height="86" src="../../../../../boost.png"></td>
<td align="center"><a href="../../../../../index.html">Home</a></td>
<td align="center"><a href="../../../../../libs/libraries.htm">Libraries</a></td>
<td align="center"><a href="http://www.boost.org/users/people.html">People</a></td>
<td align="center"><a href="http://www.boost.org/users/faq.html">FAQ</a></td>
<td align="center"><a href="../../../../../more/index.htm">More</a></td>
</tr></table>
<hr>
<div class="spirit-nav">
<a accesskey="p" href="performance.html"><img src="../../../../../doc/src/images/prev.png" alt="Prev"></a><a accesskey="u" href="../index.html"><img src="../../../../../doc/src/images/up.png" alt="Up"></a><a accesskey="h" href="../index.html"><img src="../../../../../doc/src/images/home.png" alt="Home"></a>
</div>
<div class="section">
<div class="titlepage"><div><div><h2 class="title" style="clear: both">
<a name="boost_compute.faq"></a><a class="link" href="faq.html" title="Frequently Asked Questions">Frequently Asked Questions</a>
</h2></div></div></div>
<h4>
<a name="boost_compute.faq.h0"></a>
      <span class="phrase"><a name="boost_compute.faq.how_do_i_report_a_bug__issue__or_feature_request_"></a></span><a class="link" href="faq.html#boost_compute.faq.how_do_i_report_a_bug__issue__or_feature_request_">How
      do I report a bug, issue, or feature request?</a>
    </h4>
<p>
      Please submit an issue on the GitHub issue tracker at <a href="https://github.com/boostorg/compute/issues" target="_top">https://github.com/boostorg/compute/issues</a>.
    </p>
<h4>
<a name="boost_compute.faq.h1"></a>
      <span class="phrase"><a name="boost_compute.faq.where_can_i_find_more_documentation_"></a></span><a class="link" href="faq.html#boost_compute.faq.where_can_i_find_more_documentation_">Where can
      I find more documentation?</a>
    </h4>
<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
<li class="listitem">
          The main documentation is here: <a href="http://boostorg.github.io/compute/" target="_top">http://boostorg.github.io/compute/</a>
        </li>
<li class="listitem">
          The README is here: <a href="https://github.com/boostorg/compute/blob/master/README.md" target="_top">https://github.com/boostorg/compute/blob/master/README.md</a>
        </li>
<li class="listitem">
          The wiki is here: <a href="https://github.com/boostorg/compute/wiki" target="_top">https://github.com/boostorg/compute/wiki</a>
        </li>
<li class="listitem">
          The contributor guide is here: <a href="https://github.com/boostorg/compute/blob/master/CONTRIBUTING.md" target="_top">https://github.com/boostorg/compute/blob/master/CONTRIBUTING.md</a>
        </li>
<li class="listitem">
          The reference is here: <a href="http://boostorg.github.io/compute/compute/reference.html" target="_top">http://boostorg.github.io/compute/compute/reference.html</a>
        </li>
</ul></div>
<h4>
<a name="boost_compute.faq.h2"></a>
      <span class="phrase"><a name="boost_compute.faq.where_is_the_best_place_to_ask_questions_about_the_library_"></a></span><a class="link" href="faq.html#boost_compute.faq.where_is_the_best_place_to_ask_questions_about_the_library_">Where
      is the best place to ask questions about the library?</a>
    </h4>
<p>
      The mailing list at <a href="https://groups.google.com/forum/#!forum/boost-compute" target="_top">https://groups.google.com/forum/#!forum/boost-compute</a>.
    </p>
<h4>
<a name="boost_compute.faq.h3"></a>
      <span class="phrase"><a name="boost_compute.faq.what_compute_devices__e_g__gpus__are_supported_"></a></span><a class="link" href="faq.html#boost_compute.faq.what_compute_devices__e_g__gpus__are_supported_">What
      compute devices (e.g. GPUs) are supported?</a>
    </h4>
<p>
      Any device which implements the OpenCL standard is supported. This includes
      GPUs from NVIDIA, AMD, and Intel, as well as CPUs from AMD and Intel, and
      accelerator cards such as the Xeon Phi.
    </p>
<h4>
<a name="boost_compute.faq.h4"></a>
      <span class="phrase"><a name="boost_compute.faq.can_you_compare_boost_compute_to_other_gpgpu_libraries_such_as_thrust__bolt_and_vexcl_"></a></span><a class="link" href="faq.html#boost_compute.faq.can_you_compare_boost_compute_to_other_gpgpu_libraries_such_as_thrust__bolt_and_vexcl_">Can
      you compare Boost.Compute to other GPGPU libraries such as Thrust, Bolt and
      VexCL?</a>
    </h4>
<p>
      Thrust implements a C++ STL-like API for GPUs and CPUs. It is built with multiple
      backends: NVIDIA GPUs use the CUDA backend, and multi-core CPUs can use the
      Intel TBB or OpenMP backends. However, Thrust will not work with AMD graphics
      cards or other lesser-known accelerators. I feel Boost.Compute is superior
      in that it uses the vendor-neutral OpenCL library to achieve portability across
      all types of compute devices.
    </p>
<p>
      Bolt is an AMD-specific C++ wrapper around the OpenCL API which extends the
      C99-based OpenCL language to support C++ features (most notably templates).
      It is similar to NVIDIA's Thrust library and shares the same shortcoming: a
      lack of portability.
    </p>
<p>
      VexCL is an expression-template-based linear-algebra library for OpenCL. Its
      aims and scope are a bit different from those of Boost.Compute. VexCL is
      closer in nature to the Eigen library, while Boost.Compute is closer to the
      C++ standard library. I don't feel that Boost.Compute really fills the same
      role as VexCL. In fact, recent versions of VexCL can use Boost.Compute
      as one of their backends, which makes the interaction between the two libraries
      a breeze.
    </p>
<p>
      Also see this StackOverflow question: <a href="http://stackoverflow.com/questions/20154179/differences-between-vexcl-thrust-and-boost-compute" target="_top">http://stackoverflow.com/questions/20154179/differences-between-vexcl-thrust-and-boost-compute</a>.
    </p>
<h4>
<a name="boost_compute.faq.h5"></a>
      <span class="phrase"><a name="boost_compute.faq.why_not_write_just_write_a_new_opencl_back_end_for_thrust_"></a></span><a class="link" href="faq.html#boost_compute.faq.why_not_write_just_write_a_new_opencl_back_end_for_thrust_">Why
      not just write a new OpenCL back-end for Thrust?</a>
    </h4>
<p>
      It would not be possible to provide the same API that Thrust expects for OpenCL.
      The fundamental reason is that functions/functors passed to Thrust algorithms
      are actual compiled C++ functions, whereas in Boost.Compute they are expression
      objects which are translated into C99 code and then compiled for OpenCL at
      run-time.
    </p>
<h4>
<a name="boost_compute.faq.h6"></a>
      <span class="phrase"><a name="boost_compute.faq.why_not_target_cuda_and_or_support_multiple_back_ends_"></a></span><a class="link" href="faq.html#boost_compute.faq.why_not_target_cuda_and_or_support_multiple_back_ends_">Why
      not target CUDA and/or support multiple back-ends?</a>
    </h4>
<p>
      CUDA and OpenCL are two very different technologies. OpenCL works by compiling
      C99 code at run-time to generate kernel objects which can then be executed
      on the GPU. CUDA, on the other hand, works by compiling its kernels with a
      special compiler (nvcc) which produces binaries that can be executed on
      the GPU.
    </p>
<p>
      OpenCL already has multiple implementations which allow it to be used on a
      variety of platforms (e.g. NVIDIA GPUs, Intel CPUs, etc.). I feel that adding
      another abstraction level within Boost.Compute would only complicate and bloat
      the library.
    </p>
<h4>
<a name="boost_compute.faq.h7"></a>
      <span class="phrase"><a name="boost_compute.faq.is_it_possible_to_use_ordinary_c___functions_functors_or_c__11__lambdas_with_boost_compute_"></a></span><a class="link" href="faq.html#boost_compute.faq.is_it_possible_to_use_ordinary_c___functions_functors_or_c__11__lambdas_with_boost_compute_">Is
      it possible to use ordinary C++ functions/functors or C++11 lambdas with Boost.Compute?</a>
    </h4>
<p>
      Unfortunately no. OpenCL relies on having C99 source code available at run-time
      in order to execute code on the GPU. Thus compiled C++ functions or C++11 lambdas
      cannot simply be passed to the OpenCL environment to be executed on the GPU.
    </p>
<p>
      This is the reason why I wrote the Boost.Compute lambda library. Basically,
      it takes C++ lambda expressions (e.g. _1 * sqrt(_1) + 4) and transforms them
      into C99 source code fragments (e.g. “input[i] * sqrt(input[i]) + 4”)
      which are then passed to the Boost.Compute STL-style algorithms for execution.
      While not perfect, it allows the user to write code closer to C++ that can
      still be executed through OpenCL.
    </p>
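<p>
      For example, a minimal sketch of using that lambda expression with transform()
      might look like the following (the vector and queue objects are assumed to
      already exist; this is an illustration rather than code taken from the library's
      examples):
    </p>
<pre class="programlisting">#include &lt;boost/compute/algorithm/transform.hpp&gt;
#include &lt;boost/compute/lambda.hpp&gt;

using boost::compute::lambda::_1;

// the lambda expression is translated to C99 source and compiled
// for the OpenCL device at run-time
boost::compute::transform(
    vector.begin(), vector.end(), vector.begin(),
    _1 * boost::compute::lambda::sqrt(_1) + 4,
    queue
);
</pre>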
<p>
      Also check out the BOOST_COMPUTE_FUNCTION() macro which allows OpenCL functions
      to be defined inline with C++ code. An example can be found in the monte_carlo
      example code.
    </p>
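<p>
      As a rough sketch of the macro's usage (the add_four function here is made up
      for illustration, and vector and queue are assumed to already exist):
    </p>
<pre class="programlisting">#include &lt;boost/compute/function.hpp&gt;
#include &lt;boost/compute/algorithm/transform.hpp&gt;

// define an OpenCL function inline with the C++ code;
// the body is OpenCL C source, not compiled C++
BOOST_COMPUTE_FUNCTION(float, add_four, (float x),
{
    return x + 4;
});

// use it like any other function object with the algorithms
boost::compute::transform(
    vector.begin(), vector.end(), vector.begin(), add_four, queue
);
</pre>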
<h4>
<a name="boost_compute.faq.h8"></a>
      <span class="phrase"><a name="boost_compute.faq.what_is_the_command_queue_argument_that_appears_in_all_of_the_algorithms_"></a></span><a class="link" href="faq.html#boost_compute.faq.what_is_the_command_queue_argument_that_appears_in_all_of_the_algorithms_">What
      is the command_queue argument that appears in all of the algorithms?</a>
    </h4>
<p>
      Command queues specify the context and device for the algorithm's execution.
      For all of the standard algorithms the command_queue parameter is optional.
      If not provided, a default command_queue will be created for the default GPU
      device and the algorithm will be executed there.
    </p>
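<p>
      For instance, a queue for a particular device can be created explicitly and
      passed to an algorithm; a minimal sketch (the vec container is assumed to
      already exist) might be:
    </p>
<pre class="programlisting">#include &lt;boost/compute/system.hpp&gt;
#include &lt;boost/compute/command_queue.hpp&gt;
#include &lt;boost/compute/algorithm/sort.hpp&gt;

// pick a device and create a context and command queue for it
boost::compute::device device = boost::compute::system::default_device();
boost::compute::context context(device);
boost::compute::command_queue queue(context, device);

// the algorithm runs on the device associated with this queue
boost::compute::sort(vec.begin(), vec.end(), queue);
</pre>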
<h4>
<a name="boost_compute.faq.h9"></a>
      <span class="phrase"><a name="boost_compute.faq.how_can_i_print_out_the_contents_of_a_buffer_vector_on_the_gpu_"></a></span><a class="link" href="faq.html#boost_compute.faq.how_can_i_print_out_the_contents_of_a_buffer_vector_on_the_gpu_">How
      can I print out the contents of a buffer/vector on the GPU?</a>
    </h4>
<p>
      This can be accomplished easily using the generic boost::compute::copy() algorithm
      along with std::ostream_iterator&lt;T&gt;. For example:
    </p>
<p>
</p>
<pre class="programlisting"><span class="identifier">std</span><span class="special">::</span><span class="identifier">cout</span> <span class="special">&lt;&lt;</span> <span class="string">"vector: [ "</span><span class="special">;</span>
<span class="identifier">boost</span><span class="special">::</span><span class="identifier">compute</span><span class="special">::</span><span class="identifier">copy</span><span class="special">(</span>
    <span class="identifier">vector</span><span class="special">.</span><span class="identifier">begin</span><span class="special">(),</span> <span class="identifier">vector</span><span class="special">.</span><span class="identifier">end</span><span class="special">(),</span>
    <span class="identifier">std</span><span class="special">::</span><span class="identifier">ostream_iterator</span><span class="special">&lt;</span><span class="keyword">int</span><span class="special">&gt;(</span><span class="identifier">std</span><span class="special">::</span><span class="identifier">cout</span><span class="special">,</span> <span class="string">", "</span><span class="special">),</span>
    <span class="identifier">queue</span>
<span class="special">);</span>
<span class="identifier">std</span><span class="special">::</span><span class="identifier">cout</span> <span class="special">&lt;&lt;</span> <span class="string">"]"</span> <span class="special">&lt;&lt;</span> <span class="identifier">std</span><span class="special">::</span><span class="identifier">endl</span><span class="special">;</span>
</pre>
<p>
    </p>
<h4>
<a name="boost_compute.faq.h10"></a>
      <span class="phrase"><a name="boost_compute.faq.does_boost_compute_support_zero_copy_memory_"></a></span><a class="link" href="faq.html#boost_compute.faq.does_boost_compute_support_zero_copy_memory_">Does
      Boost.Compute support zero-copy memory?</a>
    </h4>
<p>
      Zero-copy memory allows OpenCL kernels to directly operate on regions of host
      memory (if supported by the platform).
    </p>
<p>
      Boost.Compute supports zero-copy memory in multiple ways. The low-level interface
      is provided by allocating <code class="computeroutput">buffer</code>
      objects with the <code class="computeroutput"><span class="identifier">CL_MEM_USE_HOST_PTR</span></code>
      flag. The high-level interface is provided by the <code class="computeroutput"><a class="link" href="../boost/compute/mapped_view.html" title="Class template mapped_view">mapped_view&lt;T&gt;</a></code>
      class which provides a std::vector-like interface to a region of host memory
      and can be used directly with all of the Boost.Compute algorithms.
    </p>
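<p>
      As a short sketch of the high-level interface (the context and queue objects
      are assumed to already exist):
    </p>
<pre class="programlisting">#include &lt;boost/compute/algorithm/reduce.hpp&gt;
#include &lt;boost/compute/container/mapped_view.hpp&gt;

// wrap a region of host memory so kernels can access it directly
float data[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
boost::compute::mapped_view&lt;float&gt; view(data, 8, context);

// the view works with the algorithms like any other container
float sum = 0;
boost::compute::reduce(view.begin(), view.end(), &amp;sum, queue);
</pre>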
<h4>
<a name="boost_compute.faq.h11"></a>
      <span class="phrase"><a name="boost_compute.faq.is_boost_compute_thread_safe_"></a></span><a class="link" href="faq.html#boost_compute.faq.is_boost_compute_thread_safe_">Is
      Boost.Compute thread-safe?</a>
    </h4>
<p>
      The low-level Boost.Compute APIs offer the same thread-safety guarantees as
      the underlying OpenCL library implementation. However, the high-level APIs
      make use of a few global static objects for features such as automatic program
      caching, so they are not thread-safe by default.
    </p>
<p>
      To compile Boost.Compute in thread-safe mode, define <code class="computeroutput"><span class="identifier">BOOST_COMPUTE_THREAD_SAFE</span></code>
      before including any of the Boost.Compute headers. By default this will require
      linking your application/library with the Boost.Thread library.
    </p>
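<p>
      For example (a sketch; in practice the macro can also be set via a compiler
      flag such as -DBOOST_COMPUTE_THREAD_SAFE):
    </p>
<pre class="programlisting">// must be defined before any Boost.Compute header is included
#define BOOST_COMPUTE_THREAD_SAFE
#include &lt;boost/compute.hpp&gt;
</pre>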
<h4>
<a name="boost_compute.faq.h12"></a>
      <span class="phrase"><a name="boost_compute.faq.what_applications_libraries_use_boost_compute_"></a></span><a class="link" href="faq.html#boost_compute.faq.what_applications_libraries_use_boost_compute_">What
      applications/libraries use Boost.Compute?</a>
    </h4>
<p>
      Boost.Compute is used by a number of open-source libraries and applications
      including:
    </p>
<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
<li class="listitem">
          ArrayFire (<a href="http://arrayfire.com" target="_top">http://arrayfire.com</a>)
        </li>
<li class="listitem">
          Ceemple (<a href="http://www.ceemple.com" target="_top">http://www.ceemple.com</a>)
        </li>
<li class="listitem">
          Odeint (<a href="http://headmyshoulder.github.io/odeint-v2" target="_top">http://headmyshoulder.github.io/odeint-v2</a>)
        </li>
<li class="listitem">
          VexCL (<a href="https://github.com/ddemidov/vexcl" target="_top">https://github.com/ddemidov/vexcl</a>)
        </li>
</ul></div>
<p>
      If you use Boost.Compute in your project and would like it to be listed here,
      please send an email to Kyle Lutz (kyle.r.lutz@gmail.com).
    </p>
<h4>
<a name="boost_compute.faq.h13"></a>
      <span class="phrase"><a name="boost_compute.faq.how_can_i_contribute_"></a></span><a class="link" href="faq.html#boost_compute.faq.how_can_i_contribute_">How
      can I contribute?</a>
    </h4>
<p>
      We are actively seeking additional C++ developers with experience in GPGPU
      and parallel computing.
    </p>
<p>
      Please send an email to Kyle Lutz (kyle.r.lutz@gmail.com) for more information.
    </p>
<p>
      Also see the <a href="https://github.com/boostorg/compute/blob/master/CONTRIBUTING.md" target="_top">contributor
      guide</a> and check out the list of issues at: <a href="https://github.com/boostorg/compute/issues" target="_top">https://github.com/boostorg/compute/issues</a>.
    </p>
</div>
<table xmlns:rev="http://www.cs.rpi.edu/~gregod/boost/tools/doc/revision" width="100%"><tr>
<td align="left"></td>
<td align="right"><div class="copyright-footer">Copyright © 2013, 2014 Kyle Lutz<p>
        Distributed under the Boost Software License, Version 1.0. (See accompanying
        file LICENSE_1_0.txt or copy at <a href="http://www.boost.org/LICENSE_1_0.txt" target="_top">http://www.boost.org/LICENSE_1_0.txt</a>)
      </p>
</div></td>
</tr></table>
<hr>
<div class="spirit-nav">
<a accesskey="p" href="performance.html"><img src="../../../../../doc/src/images/prev.png" alt="Prev"></a><a accesskey="u" href="../index.html"><img src="../../../../../doc/src/images/up.png" alt="Up"></a><a accesskey="h" href="../index.html"><img src="../../../../../doc/src/images/home.png" alt="Home"></a>
</div>
</body>
</html>