// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// https://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or https://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.

//! Synchronization on the GPU.
//!
//! Just like in CPU code, you have to ensure that buffers and images are not accessed mutably by
//! multiple GPU queues simultaneously, and that they are not accessed mutably by the CPU and the
//! GPU at the same time.
//!
//! This safety is enforced at runtime by vulkano, but it is not magic: some knowledge of how the
//! synchronization primitives work is required to avoid errors.
//!
//! # Futures
//!
//! Whenever you ask the GPU to start an operation through a function of the vulkano library (for
//! example executing a command buffer), the function returns a *future*. A future is an object
//! that implements [the `GpuFuture` trait](trait.GpuFuture.html) and that represents the point in
//! time when the operation is over.
//!
//! No function in vulkano immediately sends an operation to the GPU (with the exception of some
//! unsafe low-level functions). Instead, these functions return a future in the pending state.
//! Before the GPU actually starts doing anything, you have to *flush* the future by calling the
//! `flush()` method or one of its derivatives.
//!
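//! A minimal sketch of this lifecycle, assuming that `device`, `queue` and a prebuilt
//! `command_buffer` already exist:
//!
//! ```ignore
//! use vulkano::sync::{self, GpuFuture};
//!
//! let future = sync::now(device.clone())
//!     .then_execute(queue.clone(), command_buffer)
//!     .unwrap();
//! // Nothing has been sent to the GPU yet; `flush()` performs the actual submission.
//! future.flush().unwrap();
//! ```
//!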
//! Futures serve several roles:
//!
//! - Futures can be used to build dependencies between operations, making it possible to ask
//!   that an operation start only after a previous operation has finished.
//! - Submitting an operation to the GPU is costly. By chaining multiple operations with futures
//!   you submit them all at once instead of one by one, thereby reducing this cost.
//! - Futures keep alive the resources and objects used by the GPU so that they don't get
//!   destroyed while they are still in use.
//!
//! The last point means that you should keep futures alive in your program for as long as their
//! corresponding operation is potentially still being executed by the GPU. Dropping a future
//! earlier will block the current thread (after flushing, if necessary) until the GPU has
//! finished the operation, which is usually not what you want.
//!
//! If a function in your program submits an operation to the GPU, you are encouraged to let it
//! return the corresponding future and let the caller handle it. This way the caller is able to
//! chain multiple futures together and to decide when to keep the future alive or drop it.
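//!
//! A sketch of this pattern, assuming a prebuilt `PrimaryAutoCommandBuffer`: the function
//! returns `impl GpuFuture`, so the caller decides when to flush, chain or drop it:
//!
//! ```ignore
//! use std::sync::Arc;
//! use vulkano::command_buffer::PrimaryAutoCommandBuffer;
//! use vulkano::device::{Device, Queue};
//! use vulkano::sync::{self, GpuFuture};
//!
//! fn submit_transfer(
//!     device: Arc<Device>,
//!     queue: Arc<Queue>,
//!     command_buffer: PrimaryAutoCommandBuffer,
//! ) -> impl GpuFuture {
//!     // Return the pending future instead of flushing or dropping it here.
//!     sync::now(device)
//!         .then_execute(queue, command_buffer)
//!         .unwrap()
//! }
//! ```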
//!
//! # Executing an operation after a future
//!
//! Respecting the order of operations on the GPU is important, as it is what *proves* to vulkano
//! that what you are doing is indeed safe. For example if you submit two operations that modify
//! the same buffer, then you need to execute one after the other instead of submitting them
//! independently. Failing to do so would mean that these two operations could potentially execute
//! simultaneously on the GPU, which would be unsafe.
//!
//! This is done by calling one of the methods of the `GpuFuture` trait. For example calling
//! `prev_future.then_execute(queue, command_buffer)` takes ownership of `prev_future` and will
//! make sure to only start executing `command_buffer` after the moment corresponding to
//! `prev_future` happens. The object returned by the `then_execute` function is itself a future
//! that corresponds to the moment when the execution of `command_buffer` ends.
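//!
//! A sketch of such a chain, assuming `device`, `queue` and two prebuilt command buffers that
//! write to the same buffer:
//!
//! ```ignore
//! use vulkano::sync::{self, GpuFuture};
//!
//! let future = sync::now(device.clone())
//!     .then_execute(queue.clone(), first_command_buffer)
//!     .unwrap()
//!     // Guaranteed to start only after the first command buffer has finished.
//!     .then_execute(queue.clone(), second_command_buffer)
//!     .unwrap();
//! // Both operations are submitted together in one go.
//! future.flush().unwrap();
//! ```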
//!
//! ## Between two different GPU queues
//!
//! When you want to perform an operation after another operation on two different queues, you
//! **must** put a *semaphore* between them. Failure to do so would result in a runtime error.
//! Adding a semaphore is as simple as replacing `prev_future.then_execute(...)` with
//! `prev_future.then_signal_semaphore().then_execute(...)`.
//!
//! > **Note**: A common use-case is using a transfer queue (i.e. a queue that is only capable of
//! > performing transfer operations) to write data to a buffer, and then reading that data from
//! > the rendering queue.
//!
//! What happens when you do so is that the first queue will execute the first set of operations
//! (represented by `prev_future` in the example), then put a semaphore in the signalled state.
//! Meanwhile the second queue blocks (if necessary) until that same semaphore gets signalled, and
//! only then executes the second set of operations.
//!
//! Since you want to avoid blocking the second queue as much as possible, you probably want to
//! flush the operation to the first queue as soon as possible. This can easily be done by calling
//! `then_signal_semaphore_and_flush()` instead of `then_signal_semaphore()`.
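//!
//! A sketch of the transfer-then-render pattern described above; the queues and command buffers
//! are assumed to exist, with `upload_cb` running on the transfer queue and `draw_cb` on the
//! graphics queue:
//!
//! ```ignore
//! use vulkano::sync::{self, GpuFuture};
//!
//! let future = sync::now(device.clone())
//!     .then_execute(transfer_queue.clone(), upload_cb)
//!     .unwrap()
//!     // Flush immediately so that the transfer queue starts working right away.
//!     .then_signal_semaphore_and_flush()
//!     .unwrap()
//!     // The graphics queue waits on the semaphore before executing `draw_cb`.
//!     .then_execute(graphics_queue.clone(), draw_cb)
//!     .unwrap();
//! // `future` still has to be flushed (or fenced) for `draw_cb` to be submitted.
//! ```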
//!
//! ## Between several different GPU queues
//!
//! The `then_signal_semaphore()` method is appropriate when you perform an operation in one
//! queue, and want to see the result in another queue. However in some situations you want to
//! start multiple operations on several different queues.
//!
//! TODO: this is not yet implemented
//!
//! # Fences
//!
//! A `Fence` is an object that is used to signal the CPU when an operation on the GPU is
//! finished.
//!
//! Signalling a fence is done by calling `then_signal_fence()` on a future. Just like with
//! semaphores, you are encouraged to use `then_signal_fence_and_flush()` instead.
//!
//! Signalling a fence acts as a "terminator" for a chain of futures.
//!
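//! A sketch of waiting on a fence at the end of a chain (`device`, `queue` and `command_buffer`
//! are assumed to exist):
//!
//! ```ignore
//! use vulkano::sync::{self, GpuFuture};
//!
//! let future = sync::now(device.clone())
//!     .then_execute(queue.clone(), command_buffer)
//!     .unwrap()
//!     .then_signal_fence_and_flush()
//!     .unwrap();
//! // Block the current thread until the GPU has finished executing `command_buffer`.
//! future.wait(None).unwrap();
//! ```
//!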
//! TODO: lots of problems with how to use fences
//! TODO: talk about fence + semaphore simultaneously
//! TODO: talk about using fences to clean up

use crate::device::Queue;
use std::sync::Arc;

pub use self::event::Event;
pub use self::fence::Fence;
pub use self::fence::FenceWaitError;
pub use self::future::now;
pub use self::future::AccessCheckError;
pub use self::future::AccessError;
pub use self::future::FenceSignalFuture;
pub use self::future::FlushError;
pub use self::future::GpuFuture;
pub use self::future::JoinFuture;
pub use self::future::NowFuture;
pub use self::future::SemaphoreSignalFuture;
pub use self::pipeline::AccessFlags;
pub use self::pipeline::PipelineMemoryAccess;
pub use self::pipeline::PipelineStage;
pub use self::pipeline::PipelineStages;
pub use self::semaphore::ExternalSemaphoreHandleType;
pub use self::semaphore::Semaphore;
pub use self::semaphore::SemaphoreError;

mod event;
mod fence;
mod future;
mod pipeline;
pub(crate) mod semaphore;

/// Declares in which queue(s) a resource can be used.
///
/// When you create a buffer or an image, you have to tell the Vulkan library in which queue
/// families it will be used. The vulkano library requires you to specify the queue families the
/// resource will be used in, even for exclusive mode.
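///
/// A sketch of building a `SharingMode` through the `From` implementations below, assuming
/// `queue`, `queue_a` and `queue_b` are existing `Arc<Queue>` values:
///
/// ```ignore
/// use vulkano::sync::SharingMode;
///
/// // Exclusive mode, derived from a single queue.
/// let exclusive = SharingMode::from(&queue);
/// // Concurrent mode across the queue families of several queues.
/// let concurrent = SharingMode::from(&[&queue_a, &queue_b][..]);
/// ```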
#[derive(Debug, Clone, PartialEq, Eq)]
// TODO: remove
pub enum SharingMode {
    /// The resource is used in only one queue family.
    Exclusive,
    /// The resource is used in multiple queue families. Can be slower than `Exclusive`.
    Concurrent(Vec<u32>), // TODO: Vec is too expensive here
}

impl<'a> From<&'a Arc<Queue>> for SharingMode {
    #[inline]
    fn from(_queue: &'a Arc<Queue>) -> SharingMode {
        // A single queue always means exclusive mode.
        SharingMode::Exclusive
    }
}

impl<'a> From<&'a [&'a Arc<Queue>]> for SharingMode {
    #[inline]
    fn from(queues: &'a [&'a Arc<Queue>]) -> SharingMode {
        // Collect the queue family ID of every queue in the slice.
        SharingMode::Concurrent(queues.iter().map(|queue| queue.family().id()).collect())
    }
}

/// Declares in which queue(s) a resource can be used.
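///
/// A sketch of building a `Sharing` value from hardcoded queue family IDs:
///
/// ```ignore
/// use vulkano::sync::Sharing;
///
/// // Any iterator of `u32` queue family IDs works.
/// let sharing = Sharing::Concurrent(vec![0u32, 1].into_iter());
/// ```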
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Sharing<I>
where
    I: Iterator<Item = u32>,
{
    /// The resource is used in only one queue family.
    Exclusive,
    /// The resource is used in multiple queue families. Can be slower than `Exclusive`.
    Concurrent(I),
}