Lines matching "we":

23 To that we would add: prefer to describe control flow using C++ native
64 We found it convenient to model an asynchronous task using this function:
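As a concrete point of reference, such a task function might look like the sketch below. The name `sleeper` and its parameters are illustrative stand-ins, not taken from the excerpted documentation.

    #include <boost/fiber/all.hpp>
    #include <chrono>
    #include <string>

    // Illustrative asynchronous task: pretend to work for 'ms'
    // milliseconds on the current fiber, then return an identifier.
    std::string sleeper( std::string const& name, int ms) {
        boost::this_fiber::sleep_for( std::chrono::milliseconds( ms) );
        return name;
    }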
95 For this we introduce a `Done` class to wrap a `bool` variable with a
100 The pattern we follow throughout this section is to pass a
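A minimal sketch of such a `Done` wrapper, assuming it pairs the `bool` with a fiber-aware mutex and condition variable and is shared between fibers via `shared_ptr`, might be:

    #include <boost/fiber/all.hpp>
    #include <memory>

    // Wrap a bool with a mutex and condition_variable so one fiber can
    // wait until some other fiber signals completion.
    class Done {
    private:
        boost::fibers::condition_variable cond_;
        boost::fibers::mutex              mtx_;
        bool                              ready_ = false;

    public:
        typedef std::shared_ptr< Done > ptr;

        void wait() {
            std::unique_lock< boost::fibers::mutex > lock( mtx_);
            cond_.wait( lock, [this](){ return ready_; });
        }

        void notify() {
            {   // bound the lock's lifetime
                std::unique_lock< boost::fibers::mutex > lock( mtx_);
                ready_ = true;
            }
            cond_.notify_one();
        }
    };

Each task fiber would call `notify()` when it finishes, while the waiting fiber blocks in `wait()` until the first notification arrives.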
135 first of the task functions to complete. Again, we assume that none will throw
139 first of the return values, rather than a simple `bool`. However, we choose
140 instead to use a [template_link buffered_channel]. We'll only need to enqueue
141 the first value, so we'll [member_link buffered_channel..close] it once we've
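A sketch consistent with that description is shown below. To keep it short it accepts the tasks as a vector of `std::function` objects rather than as a variadic parameter pack, so treat it as an illustration of the channel handling rather than as the documented signature:

    #include <boost/fiber/all.hpp>
    #include <functional>
    #include <memory>
    #include <vector>

    // Run every task on its own fiber (assuming none throws); pop the
    // first value pushed to the channel, then close it so later pushes
    // are quietly discarded.
    template< typename T >
    T wait_first_value( std::vector< std::function< T() > > tasks) {
        typedef boost::fibers::buffered_channel< T > channel_t;
        // buffered_channel capacity must be a power of two >= 2
        auto chanp( std::make_shared< channel_t >( 64) );
        for ( auto & task : tasks) {
            boost::fibers::fiber( [chanp, task]() {
                chanp->push( task() );     // ignored once channel is closed
            }).detach();
        }
        T value( chanp->value_pop() );     // first result to arrive
        chanp->close();                    // we only wanted one
        return value;
    }

Every task fiber still runs to completion and tries to `push()`, but once the channel has been closed those later pushes simply return `channel_op_status::closed` and their values are discarded.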
160 We may not be running in an environment in which we can guarantee no exception
167 awaiting the first result. We can use [template_link future] to transport
168 either a return value or an exception. Therefore, we will change [link
172 Once we have a `future<>` in hand, all we need do is call [member_link
178 So far so good [mdash] but there's a timing issue. How should we obtain the
181 We could call [ns_function_link fibers..async]. That would certainly produce a
183 quickly! We only want `future<>` items for ['completed] tasks on our
184 `queue<>`. In fact, we only want the `future<>` for the one that
195 We could call [member_link future..wait]. That would block the helper fiber
196 until the `future<>` became ready, at which point we could `push()` it to be
200 fiber. We can wrap the task function in a [template_link packaged_task]. While
202 in fact, what `async()` does [mdash] in this case, we're already running in the
203 helper fiber at the producer end of the queue! We can simply ['call] the
206 be ready. At that point we can simply `push()` it to the queue.
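A hedged sketch of that producer-side helper, reusing the queue-of-futures idea described above (the helper name `run_one()` is invented for the example):

    #include <boost/fiber/all.hpp>
    #include <functional>
    #include <memory>
    #include <utility>

    // Body of one helper fiber: wrap the task in a packaged_task, run it
    // right here (capturing either its value or its exception), then push
    // the already-ready future to the shared channel.
    template< typename T >
    void run_one( std::function< T() > task,
                  std::shared_ptr< boost::fibers::buffered_channel<
                                       boost::fibers::future< T > > > chanp) {
        boost::fibers::packaged_task< T() > pt( std::move( task) );
        boost::fibers::future< T > f( pt.get_future() );
        pt();                            // run the task on this fiber
        chanp->push( std::move( f) );    // future is ready by now
    }

Because `packaged_task::operator()` traps any exception thrown by the task and stores it in the shared state, the `future<>` pushed here is guaranteed to be ready, carrying either a value or an exception.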
218 One scenario for ["when_any] functionality is when we're redundantly contacting
225 another follows up with a real answer, we don't want to prefer the error just
228 Given the `queue< future< T > >` we already constructed for
229 `wait_first_outcome()`, though, we can readily recast the interface function
233 exception? In that case we'd probably better know about it.
245 Now we can build `wait_first_success()`, using [link wait_first_outcome_impl
248 Instead of retrieving only the first `future<>` from the queue, we must now
249 loop over `future<>` items. Of course we must limit that iteration! If we
253 Given a ready `future<>`, we can distinguish failure by calling [member_link
255 than an exception, `get_exception_ptr()` returns `nullptr`. In that case, we
259 If the `std::exception_ptr` is ['not] `nullptr`, though, we collect it into
263 If we fall out of the loop [mdash] if every single task fiber threw an
264 exception [mdash] we throw the `exception_list` exception into which we've
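Pulled together, the consumer side of `wait_first_success()` might be sketched as below. The `exception_list` stand-in (a `runtime_error` carrying a vector of `std::exception_ptr`) is an assumption modelled on the description above, not the documented class, and the producers are presumed to have been launched already, one ready `future<>` per task.

    #include <boost/fiber/all.hpp>
    #include <exception>
    #include <stdexcept>
    #include <string>
    #include <vector>

    // Minimal stand-in for the exception_list mentioned above: a
    // runtime_error that carries every collected std::exception_ptr.
    class exception_list : public std::runtime_error {
    public:
        exception_list( std::string const& what) : std::runtime_error( what) {}
        void add( std::exception_ptr ep) { bundle_.push_back( ep); }
        std::vector< std::exception_ptr > const& get() const noexcept { return bundle_; }
    private:
        std::vector< std::exception_ptr > bundle_;
    };

    // Consumer side only: pop at most 'count' ready futures (one per
    // launched task, so the loop cannot run forever), return the first
    // real value, and aggregate every exception seen along the way.
    template< typename T >
    T first_success_from( boost::fibers::buffered_channel<
                              boost::fibers::future< T > > & chan,
                          std::size_t count) {
        exception_list errors( "wait_first_success() produced only errors");
        for ( std::size_t i = 0; i < count; ++i) {
            boost::fibers::future< T > f( chan.value_pop() );   // already ready
            std::exception_ptr error( f.get_exception_ptr() );
            if ( ! error) {
                chan.close();          // we have our winner; discard the rest
                return f.get();
            }
            errors.add( error);        // remember this failure and keep looping
        }
        throw errors;                  // every task threw
    }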
277 We would be remiss to ignore the case in which the various task functions have
279 might have any one of those types. We can express that with
282 To keep the example simple, we'll revert to pretending that none of them can
284 [link wait_first_value `wait_first_value()`]. We can actually reuse [link
302 exception. We cannot resist mentioning [mdash] for purely informational
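One hedged way to realise that reuse, using C++17's `std::variant` as a stand-in for whatever variant type the documentation actually chooses, and building on the `wait_first_value()` sketch shown earlier:

    #include <string>
    #include <variant>

    // Give the tasks a common return type by wrapping each real result
    // in a variant; the single-type queue machinery then works unchanged.
    typedef std::variant< std::string, double, int > any_result;

    any_result first_het() {
        return wait_first_value< any_result >( {
            []() -> any_result { return std::string("string result"); },
            []() -> any_result { return 3.14; },
            []() -> any_result { return 17; }
        } );
    }

Each adapted task wraps its real result in the common variant type, so the caller simply inspects the returned variant to see which kind of task won.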
319 For the case in which we must wait for ['all] task functions to complete
320 [mdash] but we don't need results (or expect exceptions) from any of them
321 [mdash] we can write `wait_all_simple()` that looks remarkably like [link
323 our [link wait_done `Done`] class, we instantiate a [class_link barrier] and
326 We initialize the `barrier` with `(count+1)` because we are launching `count`
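Under those assumptions, a sketch of `wait_all_simple()` might look like this; as before, the vector-of-`std::function` signature is a simplification of the variadic interface:

    #include <boost/fiber/all.hpp>
    #include <functional>
    #include <memory>
    #include <vector>

    // Wait until every task has finished; we neither collect results nor
    // expect exceptions. The +1 accounts for the calling fiber itself.
    void wait_all_simple( std::vector< std::function< void() > > tasks) {
        auto barrier( std::make_shared< boost::fibers::barrier >( tasks.size() + 1) );
        for ( auto & task : tasks) {
            boost::fibers::fiber( [barrier, task]() {
                task();
                barrier->wait();       // signal this task's completion
            }).detach();
        }
        barrier->wait();               // block until all tasks have arrived
    }

Keeping the `barrier` in a `shared_ptr` lets the detached task fibers finish their `wait()` calls safely even if the calling fiber returns first.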
347 As soon as we want to collect return values from all the task functions, we
349 queue<T> for the purpose. All we have to do is avoid closing it after the
352 But in fact, collecting multiple values raises an interesting question: do we
353 ['really] want to wait until the slowest of them has arrived? Wouldn't we
356 Fortunately we can present both APIs. Let's define `wait_all_values_source()`
370 caller to count values, we define `wait_all_values_source()` to [member_link
371 buffered_channel..close] the queue when done. But how do we do that? Each
376 We can address that problem with a counting façade for the
383 Armed with `nqueue<>`, we can implement `wait_all_values_source()`. It
385 is that we wrap the `queue<T>` with an `nqueue<T>` to pass to
389 returning it, we simply return the `shared_ptr<queue<T>>`.
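A sketch of such a counting façade follows, with the caveat that the unguarded decrement assumes the producer fibers all run on one thread, which is a fiber-level rather than thread-level guarantee:

    #include <boost/fiber/all.hpp>
    #include <cstddef>
    #include <memory>
    #include <utility>

    // Counting façade: forwards push() to the underlying channel and
    // close()s it automatically once the expected number of values arrive.
    template< typename T >
    class nqueue {
    public:
        typedef boost::fibers::buffered_channel< T > queue_t;

        nqueue( std::shared_ptr< queue_t > qp, std::size_t count)
            : queue_( std::move( qp)), count_( count) {}

        void push( T value) {
            queue_->push( std::move( value) );
            if ( 0 == --count_) {      // last expected value: close the queue
                queue_->close();
            }
        }

    private:
        std::shared_ptr< queue_t > queue_;
        std::size_t                count_;
    };

`wait_all_values_source()` would then hand each producer fiber a shared `nqueue<T>` constructed with the task count, and return the wrapped `shared_ptr< queue<T> >` to the caller, as the excerpt above describes.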
407 Naturally, just as with [link wait_first_outcome `wait_first_outcome()`], we
429 The implementation is just as you would expect. Notice, however, that we can
456 But what about the case when we must wait for all results of different types?
458 We can present an API that is frankly quite cool. Consider a sample struct:
466 Note that for this case, we abandon the notion of capturing the earliest
467 result first, and so on: we must fill exactly the passed struct in
498 way, we will hit the `get()` for the slowest task function; after that every
501 By the way, we could also use this same API to fill a vector or other
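To illustrate the shape of such an API, here is a hedged, non-variadic sketch: the struct `Data`, the tasks, and the helper name are all invented for the example, and each task is launched with `fibers::async()` for brevity.

    #include <boost/fiber/all.hpp>
    #include <string>

    // Illustrative result struct: one member per task, in the order the
    // results should be retrieved.
    struct Data {
        std::string str;
        double      inexact;
        int         exact;
    };

    Data wait_all_members_demo() {
        boost::fibers::future< std::string > fs(
            boost::fibers::async( []() { return std::string("string result"); }) );
        boost::fibers::future< double > fd(
            boost::fibers::async( []() { return 3.14; }) );
        boost::fibers::future< int > fi(
            boost::fibers::async( []() { return 17; }) );
        // aggregate construction: each get() blocks until its value is ready
        return Data{ fs.get(), fd.get(), fi.get() };
    }

Because the braced initializer evaluates left to right, each `get()` blocks in member order; once the slowest task's `get()` has returned, every remaining `get()` completes immediately.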