// Copyright (c) 2016 The vulkano developers
// Licensed under the Apache License, Version 2.0
// <LICENSE-APACHE or
// https://www.apache.org/licenses/LICENSE-2.0> or the MIT
// license <LICENSE-MIT or https://opensource.org/licenses/MIT>,
// at your option. All files in the project carrying such
// notice may not be copied, modified, or distributed except
// according to those terms.

//! Device memory allocation and memory pools.
//!
//! By default, memory allocation is automatically handled by the vulkano library when you create
//! a buffer or an image. But if you want more control, you can customize the memory allocation
//! strategy.
//!
//! # Memory types and heaps
//!
//! A physical device is composed of one or more **memory heaps**. A memory heap is a pool of
//! memory that can be allocated.
//!
//! ```
//! // Enumerating memory heaps.
//! # let physical_device: vulkano::device::physical::PhysicalDevice = return;
//! for heap in physical_device.memory_heaps() {
//!     println!("Heap #{:?} has a capacity of {:?} bytes", heap.id(), heap.size());
//! }
//! ```
//!
//! However, you can't allocate directly from a memory heap. A memory heap is shared amongst one or
//! multiple **memory types**, which you can allocate memory from. Each memory type has different
//! characteristics.
//!
//! A memory type may or may not be visible to the host. In other words, it may or may not be
//! directly writable by the CPU. A memory type may or may not be device-local. A device-local
//! memory type has a much quicker access time from the GPU than a non-device-local type. Note
//! that non-device-local memory types are still accessible by the device; they are just slower.
//!
//! ```
//! // Enumerating memory types.
//! # let physical_device: vulkano::device::physical::PhysicalDevice = return;
//! for ty in physical_device.memory_types() {
//!     println!("Memory type belongs to heap #{:?}", ty.heap().id());
//!     println!("Host-accessible: {:?}", ty.is_host_visible());
//!     println!("Device-local: {:?}", ty.is_device_local());
//! }
//! ```
//!
//! Memory types are ordered from "best" to "worst". In other words, the implementation prefers
//! that you use the memory types that are earlier in the list. This means that selecting a memory
//! type should always be done by enumerating them and taking the first one that matches our
//! criteria.
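//!
//! For example, here is a small sketch (the `host_visible_ty` binding is illustrative) that uses
//! the accessors shown above to take the first memory type that is host-visible:
//!
//! ```
//! # let physical_device: vulkano::device::physical::PhysicalDevice = return;
//! // Take the first (i.e. most preferred) memory type that is host-visible.
//! let host_visible_ty = physical_device
//!     .memory_types()
//!     .find(|ty| ty.is_host_visible())
//!     .expect("No host-visible memory type found");
//! ```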
//!
//! ## In practice
//!
//! In practice, desktop machines usually have two memory heaps: one that represents the RAM of
//! the CPU, and one that represents the RAM of the GPU. The CPU's RAM is host-accessible but not
//! device-local, while the GPU's RAM is not host-accessible but is device-local.
//!
//! Mobile machines usually have a single memory heap that is "equally local" to both the CPU and
//! the GPU. It is both host-accessible and device-local.
//!
//! # Allocating memory and memory pools
//!
//! Allocating memory can be done by calling `DeviceMemory::alloc()`.
//!
//! Here is an example:
//!
//! ```
//! use vulkano::memory::DeviceMemory;
//!
//! # let device: std::sync::Arc<vulkano::device::Device> = return;
//! // Taking the first memory type for the sake of this example.
//! let ty = device.physical_device().memory_types().next().unwrap();
//!
//! let alloc = DeviceMemory::alloc(device.clone(), ty, 1024).expect("Failed to allocate memory");
//!
//! // The memory is automatically freed when `alloc` is destroyed.
//! ```
//!
//! However, allocating and freeing memory is very slow (sometimes up to several hundred
//! milliseconds). Instead, you are strongly encouraged to use a memory pool. A memory pool is not
//! a Vulkan concept but a vulkano concept.
//!
//! A memory pool is any object that implements the `MemoryPool` trait. You can implement that
//! trait on your own structure and then use it when you create buffers and images so that they
//! get memory from that pool. By default, if you don't specify any pool when creating a buffer or
//! an image, an instance of `StdMemoryPool` that is shared by the `Device` object is used.
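//!
//! If you want to manage a pool explicitly, a minimal sketch could look like the following. It
//! assumes that `StdMemoryPool::new` is available from the `pool` module and takes an
//! `Arc<Device>`:
//!
//! ```
//! use vulkano::memory::pool::StdMemoryPool;
//!
//! # let device: std::sync::Arc<vulkano::device::Device> = return;
//! // Create a pool tied to this device. It can then be passed to buffer or image constructors
//! // that accept a custom memory pool.
//! let pool = StdMemoryPool::new(device.clone());
//! ```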

use std::mem;
use std::os::raw::c_void;
use std::slice;

use crate::buffer::sys::UnsafeBuffer;
use crate::image::sys::UnsafeImage;

pub use self::device_memory::CpuAccess;
pub use self::device_memory::DeviceMemory;
pub use self::device_memory::DeviceMemoryAllocError;
pub use self::device_memory::DeviceMemoryBuilder;
pub use self::device_memory::DeviceMemoryMapping;
pub use self::device_memory::MappedDeviceMemory;
pub use self::external_memory_handle_type::ExternalMemoryHandleType;
pub use self::pool::MemoryPool;
use crate::DeviceSize;

mod device_memory;
mod external_memory_handle_type;
pub mod pool;

/// Represents requirements expressed by the Vulkan implementation when it comes to binding memory
/// to a resource.
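///
/// As a sketch of how `memory_type_bits` is typically used (the `reqs` binding here is
/// illustrative), the field can be combined with a memory type's index to find a memory type
/// that the resource may be bound to:
///
/// ```
/// # let physical_device: vulkano::device::physical::PhysicalDevice = return;
/// # let reqs: vulkano::memory::MemoryRequirements = return;
/// // A memory type with index `i` is allowed if bit `i` of `memory_type_bits` is set.
/// let ty = physical_device
///     .memory_types()
///     .find(|ty| (reqs.memory_type_bits & (1 << ty.id())) != 0)
///     .expect("No memory type matches the requirements");
/// ```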
#[derive(Debug, Copy, Clone)]
pub struct MemoryRequirements {
    /// Number of bytes of memory required.
    pub size: DeviceSize,

    /// Alignment requirement for the resource. The base memory address at which the resource is
    /// bound must be a multiple of this value.
    pub alignment: DeviceSize,

    /// Indicates which memory types can be used. Each bit that is set to 1 means that the memory
    /// type whose index is the same as the position of the bit can be used.
    pub memory_type_bits: u32,

    /// True if the implementation prefers to use dedicated allocations (in other words, allocate
    /// a whole block of memory dedicated to this resource alone). If the
    /// `khr_get_memory_requirements2` extension isn't enabled, then this will be false.
    ///
    /// > **Note**: As its name says, using a dedicated allocation is an optimization and not a
    /// > requirement.
    pub prefer_dedicated: bool,
}

impl From<ash::vk::MemoryRequirements> for MemoryRequirements {
    #[inline]
    fn from(val: ash::vk::MemoryRequirements) -> Self {
        MemoryRequirements {
            size: val.size,
            alignment: val.alignment,
            memory_type_bits: val.memory_type_bits,
            prefer_dedicated: false,
        }
    }
}

/// Indicates whether we want to allocate memory for a specific resource, or in a generic way.
///
/// Using dedicated allocations can yield better performance, but requires the
/// `VK_KHR_dedicated_allocation` extension to be enabled on the device.
///
/// If a dedicated allocation is performed, it must not be bound to any resource other than the
/// one that was passed with the enumeration.
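///
/// A minimal sketch of constructing the enumeration (the `buffer` binding is illustrative):
///
/// ```
/// use vulkano::memory::DedicatedAlloc;
///
/// # let buffer: vulkano::buffer::sys::UnsafeBuffer = return;
/// // Request that the allocation be dedicated to this buffer.
/// let dedicated = DedicatedAlloc::Buffer(&buffer);
/// ```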
#[derive(Debug, Copy, Clone)]
pub enum DedicatedAlloc<'a> {
    /// Generic allocation.
    None,
    /// Allocation dedicated to a buffer.
    Buffer(&'a UnsafeBuffer),
    /// Allocation dedicated to an image.
    Image(&'a UnsafeImage),
}

/// Trait for types of data that can be mapped.
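///
/// With the implementations below, a slice type accepts any size that is a multiple of its
/// element size, while a sized type only accepts its exact size. A small usage sketch:
///
/// ```
/// use vulkano::memory::Content;
///
/// assert!(<[u32] as Content>::is_size_suitable(12));
/// assert!(!<u32 as Content>::is_size_suitable(12));
/// ```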
// TODO: move to `buffer` module
pub unsafe trait Content {
    /// Builds a pointer to this type from a raw pointer.
    fn ref_from_ptr<'a>(ptr: *mut c_void, size: usize) -> Option<*mut Self>;

    /// Returns true if the size is suitable to store a type like this.
    fn is_size_suitable(size: DeviceSize) -> bool;

    /// Returns the size of an individual element.
    fn indiv_size() -> DeviceSize;
}

unsafe impl<T> Content for T {
    #[inline]
    fn ref_from_ptr<'a>(ptr: *mut c_void, size: usize) -> Option<*mut T> {
        if size < mem::size_of::<T>() {
            return None;
        }

        Some(ptr as *mut T)
    }

    #[inline]
    fn is_size_suitable(size: DeviceSize) -> bool {
        size == mem::size_of::<T>() as DeviceSize
    }

    #[inline]
    fn indiv_size() -> DeviceSize {
        mem::size_of::<T>() as DeviceSize
    }
}

unsafe impl<T> Content for [T] {
    #[inline]
    fn ref_from_ptr<'a>(ptr: *mut c_void, size: usize) -> Option<*mut [T]> {
        let ptr = ptr as *mut T;
        let size = size / mem::size_of::<T>();
        Some(unsafe { slice::from_raw_parts_mut(&mut *ptr, size) as *mut [T] })
    }

    #[inline]
    fn is_size_suitable(size: DeviceSize) -> bool {
        size % mem::size_of::<T>() as DeviceSize == 0
    }

    #[inline]
    fn indiv_size() -> DeviceSize {
        mem::size_of::<T>() as DeviceSize
    }
}

/*
TODO: do this when it's possible
unsafe impl Content for .. {}
impl<'a, T> !Content for &'a T {}
impl<'a, T> !Content for &'a mut T {}
impl<T> !Content for *const T {}
impl<T> !Content for *mut T {}
impl<T> !Content for Box<T> {}
impl<T> !Content for UnsafeCell<T> {}

*/